MSSQL: Managing Schema Changes on Large Tables
The Microsoft SQL Server Development Customer Advisory Team has posted an article on best practices for managing schema changes on large tables in MSSQL - Managing Schema Changes (Part 1) - accompanied by examples, tests and result tables:
"Lets assume we have a Billion row table which is quite common these days. Schema changes have to be completed in the fastest, least intrusive manner with high availability. Some schema changes include implicit reads and writes. These operations, extending the duration of the schema change transaction, can reduce concurrency on large tables by taking shared locks for reads and exclusive locks for writes."
For example - on changing column datatypes:
"You can change a columns base datatype, or change its length (for numeric includes scale and precision) using the alter table alter column statement. In these cases, performance and concurrency is affected by the accompanying batch update. If a NULL property is changed to NOT NULL, a batch update also occurs. The exception is when changing a NOT NULL property to NULL. This is a fast metadata only operation."
Very useful, IMHO.
Friday, March 3, 2006 4:46 AM