Mastering Microsoft SQL Server Consolidation Planning - Part 4

In this online guide, I will help my readers overcome the challenges in Microsoft SQL Server consolidation planning. You can access the main page of "Mastering SQL Server Consolidation Planning" here.

In the previous part, I described how to monitor the existing environment. The topic this time is SQL Server diagnostics and health checks. SQL Governor software supports all these features.

Disk allocation unit size

SQL Server prefers a 64 KB allocation unit size for the volumes that host database files. The Windows default of 4 KB is not optimal for storing SQL Server databases, because SQL Server performs I/O in extents of eight 8 KB pages, i.e. 64 KB at a time.
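If xp_cmdshell is enabled on the instance, the current allocation unit size can be checked directly from T-SQL; a minimal sketch (the drive letter D: is just an example, and the command can equally be run from an administrative command prompt on the host):

    -- Requires sysadmin rights and the xp_cmdshell option enabled.
    EXEC master..xp_cmdshell 'fsutil fsinfo ntfsinfo D:';
    -- In the output, "Bytes Per Cluster" should read 65536 (64 KB).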


Tempdb file configuration

The tempdb data and log files should be pre-sized large enough for the workload. In addition, ever since the architectural change in SQL Server 2008, the engine can dedicate multiple threads to tempdb data file processing. The number of tempdb data files should match the number of CPU cores dedicated to the instance, up to a maximum of eight data files.

Tempdb is a cornerstone of SQL Server performance, because temporary data objects such as user objects, internal objects, and row versions are processed there. In earlier versions of SQL Server this was not all that relevant, but today you should create as many files as needed to maximize your disk bandwidth, since this reduces tempdb storage contention and improves scalability.

Note! Do not create too many files because that can reduce performance and increase management overhead.
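As a sketch of how to inspect and adjust the tempdb file layout (the file name, path, and sizes below are examples only):

    -- Check the current tempdb data and log files and their sizes.
    SELECT name, physical_name, size * 8 / 1024 AS size_mb
    FROM tempdb.sys.database_files;

    -- Add a data file; repeat until the file count matches the core count (max 8).
    ALTER DATABASE tempdb
    ADD FILE (NAME = tempdev2,
              FILENAME = 'T:\tempdb\tempdev2.ndf',
              SIZE = 4096MB,
              FILEGROWTH = 512MB);

Keep all tempdb data files equally sized, so that the round-robin allocation across them stays balanced.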

SQL Server trace flags 1117 and 1118

The behavior of trace flags 1117 and 1118 became the default in SQL Server 2016, so they no longer need to be set there. On earlier SQL Server versions you should consider turning these trace flags on, as they boost data file processing performance.

Trace flag 1117 causes all files in a file group to grow equally. This helps in cases where there is heavy tempdb latch wait pressure. Another way to solve this issue is to pre-allocate the tempdb data files to their maximum allowed capacity instead of using the trace flag. This is sometimes preferable, because trace flag 1117 affects all user databases as well and may cause too many concurrent page allocations, especially on instances with a large number of big databases.

Trace flag 1118 favors full extents over mixed extents: every newly allocated object in each database on the instance gets its own private 64 KB of data. This is good because mixed-extent allocation causes contention on the shared allocation pages that track it, which is especially harmful in tempdb.
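On pre-2016 versions, the flags can be enabled globally at runtime with DBCC TRACEON; to make them survive a restart, add -T1117 and -T1118 as startup parameters in SQL Server Configuration Manager. A minimal sketch:

    -- Enable both flags globally for the running instance.
    DBCC TRACEON (1117, 1118, -1);
    -- Verify that they are active.
    DBCC TRACESTATUS (1117, 1118);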

SQL Server instance configuration properties

There are a few SQL Server instance configuration properties that should be checked: max degree of parallelism, cost threshold for parallelism, and fill factor (%). Max degree of parallelism sets the maximum number of CPU cores that may be dedicated to an individual statement during parallel plan execution. A value of 0 (or a value equal to the CPU count) means that all CPU cores, up to 64, may be used; a value of 1 means that only one core is used. SQL Server evaluates parallel execution plans for queries, index DDL operations, and static and keyset-driven cursor population.

If the estimated cost of the serial plan for a statement is equal to or greater than the cost threshold for parallelism value (an abstract cost unit that originally corresponded to estimated seconds on a reference machine), SQL Server is allowed to process the statement with a parallel plan, using up to max degree of parallelism cores.
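Both settings can be checked and changed with sp_configure; the values below are illustrative examples only, not recommendations for every workload:

    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    -- Example: cap parallel statements at 8 cores.
    EXEC sp_configure 'max degree of parallelism', 8;
    -- Example: raise the parallelism threshold from the default of 5.
    EXEC sp_configure 'cost threshold for parallelism', 50;
    RECONFIGURE;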

The fill factor setting fine-tunes SQL Server index performance. Every time an index is created or rebuilt, its fill factor determines what percentage of each leaf-level page is filled, leaving the rest free for new index entries in the future. The default server-level fill factor of 0 is equivalent to 100, meaning the leaf-level pages are filled completely.

A completely full page leaves no room for new rows, so the default of 100% is rarely optimal for OLTP databases. A better server-level default is 95%, which avoids heavy page splitting when new indexes are created without an explicit fill factor.

A good fill factor value can reduce page splits by providing enough space for the index to expand as data is added to the table. On the other hand, a fill factor that is too low causes poor read performance, because more index pages must be read. You should always define the fill factor explicitly when creating a new index. A good rule of thumb is to use lower fill factors for "hot" indexes, i.e. indexes that are under constant change.
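As a sketch, the fill factor is set per index at creation or rebuild time (the table and index names here are hypothetical):

    -- Leave 10% free space on leaf pages of a frequently changing index.
    CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
        ON dbo.Orders (CustomerId)
        WITH (FILLFACTOR = 90);

    -- Or apply a new fill factor to an existing index during a rebuild.
    ALTER INDEX IX_Orders_CustomerId ON dbo.Orders
        REBUILD WITH (FILLFACTOR = 90);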

Wait statistics

Wait statistics contain essential information about the overall health of the instance. As a rule of thumb, CPU signal waits should account for no more than 20% of total wait time, with resource waits making up the remaining 80%. A signal wait share above 20% indicates increasing CPU pressure.
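The ratio can be calculated from the sys.dm_os_wait_stats DMV; a minimal sketch (note that the counters are cumulative since the last restart or statistics clear, and benign background waits are not filtered out here):

    SELECT
        CAST(100.0 * SUM(signal_wait_time_ms) / SUM(wait_time_ms)
             AS DECIMAL(5, 2)) AS signal_wait_pct,
        CAST(100.0 * SUM(wait_time_ms - signal_wait_time_ms) / SUM(wait_time_ms)
             AS DECIMAL(5, 2)) AS resource_wait_pct
    FROM sys.dm_os_wait_stats;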

User database file groups

Larger user databases should contain more file groups in order to maximize disk performance. Different SAN systems have different prerequisites, but some general principles still apply, especially for dedicated storage devices. It is a good idea to dedicate the PRIMARY file group only to system objects and the references to secondary data files (.ndf). If the database contains big data tables, they should have a file group of their own, as should ordinary data tables and non-clustered indexes. Each of these file groups should contain one or more data files, depending on the storage system / LUN configuration. This scales up storage system performance. Set the ordinary DATA file group as DEFAULT.

Note! Do not use the PRIMARY file group as DEFAULT. In larger databases we want only system objects to be stored in PRIMARY.
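A sketch of this layout in T-SQL (the database, file group, file, and path names are all hypothetical):

    ALTER DATABASE SalesDb ADD FILEGROUP DATA;
    ALTER DATABASE SalesDb
        ADD FILE (NAME = SalesDb_data1,
                  FILENAME = 'E:\data\SalesDb_data1.ndf',
                  SIZE = 8192MB,
                  FILEGROWTH = 1024MB)
        TO FILEGROUP DATA;
    -- New objects now land in DATA instead of PRIMARY.
    ALTER DATABASE SalesDb MODIFY FILEGROUP DATA DEFAULT;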


Unused indexes

Unused indexes are indexes that exist in a table but have not been used since the last SQL Server service restart. Keep in mind that there may appear to be many unused indexes if the service was restarted recently. However, if the service has been running for a long time (e.g. several months) and some indexes are still unused, it is a good idea to check whether they really are unnecessary, and if so, drop them. Excessive indexes cause table insert, update, and delete operations to perform slower.
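Candidate unused indexes can be listed from the index usage DMV; a simplified sketch, run in the target database (remember that the counters reset at every service restart):

    SELECT OBJECT_NAME(i.object_id) AS table_name,
           i.name AS index_name
    FROM sys.indexes AS i
    LEFT JOIN sys.dm_db_index_usage_stats AS s
           ON s.object_id = i.object_id
          AND s.index_id = i.index_id
          AND s.database_id = DB_ID()
    WHERE OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
      AND i.index_id > 1          -- skip heaps and clustered indexes
      AND i.is_primary_key = 0
      AND (s.index_id IS NULL     -- never touched at all
           OR s.user_seeks + s.user_scans + s.user_lookups = 0);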

Missing indexes

Missing indexes are indexes that should exist but don't: there are often queries that cannot use the existing indexes effectively. Missing index suggestions can be found, for example, in SQL Server Management Studio's estimated query plans.
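The optimizer also records its missing index suggestions in DMVs, which can be queried directly; a minimal sketch (treat the results as suggestions to evaluate, not as indexes to create blindly):

    SELECT d.statement AS table_name,
           d.equality_columns,
           d.inequality_columns,
           d.included_columns,
           gs.user_seeks,
           gs.avg_user_impact
    FROM sys.dm_db_missing_index_details AS d
    JOIN sys.dm_db_missing_index_groups AS g
         ON g.index_handle = d.index_handle
    JOIN sys.dm_db_missing_index_group_stats AS gs
         ON gs.group_handle = g.index_group_handle
    ORDER BY gs.user_seeks * gs.avg_user_impact DESC;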

Fragmented indexes

Physical and logical index fragmentation causes indexes to perform slower. That's why it is important to plan index fill factors well and check index fragmentation on a regular basis; this should be a standard routine of every well-designed index maintenance run. Indexes with a fragmentation level over 30% should be rebuilt, and indexes with over 5% fragmentation reorganized. You should also update all index statistics regularly, for example with a full scan.
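Fragmentation levels can be measured with sys.dm_db_index_physical_stats, and the maintenance action chosen per index; a sketch (dbo.Orders and the index name are hypothetical):

    -- List fragmented indexes in the current database.
    SELECT OBJECT_NAME(ips.object_id) AS table_name,
           i.name AS index_name,
           ips.avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
         ON i.object_id = ips.object_id
        AND i.index_id = ips.index_id
    WHERE ips.index_id > 0
      AND ips.avg_fragmentation_in_percent > 5
    ORDER BY ips.avg_fragmentation_in_percent DESC;

    ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REORGANIZE;  -- 5-30 % fragmentation
    ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REBUILD;     -- over 30 % fragmentation
    UPDATE STATISTICS dbo.Orders WITH FULLSCAN;                 -- reorganize does not update statistics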

Foreign keys without supporting index

Foreign keys without a supporting index slow down query performance, since joins and referential integrity checks against the foreign key column cannot use an index seek. It is a best practice to always create an index for each foreign key column.
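A simplified sketch that lists single-column foreign keys whose column is not the leading key column of any index on the referencing table (multi-column foreign keys would need a more elaborate comparison):

    SELECT fk.name AS foreign_key,
           OBJECT_NAME(fk.parent_object_id) AS table_name,
           COL_NAME(fkc.parent_object_id, fkc.parent_column_id) AS fk_column
    FROM sys.foreign_keys AS fk
    JOIN sys.foreign_key_columns AS fkc
         ON fkc.constraint_object_id = fk.object_id
    WHERE NOT EXISTS (SELECT 1
                      FROM sys.index_columns AS ic
                      WHERE ic.object_id = fkc.parent_object_id
                        AND ic.column_id = fkc.parent_column_id
                        AND ic.key_ordinal = 1);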

Page life expectancy

Page life expectancy tells us how long, in seconds, a data page is expected to stay in the buffer pool without being referenced. It is a good instance-level indicator of memory pressure. A decent formula for the lowest acceptable value of this counter is: maximum memory in GB / 4 * 300. For example, an instance with 64 GB of maximum memory should sustain a page life expectancy of at least 64 / 4 * 300 = 4,800 seconds. If page life expectancy is too low, the instance needs more RAM in order to prevent excessive physical reads from the disk system.
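The current value can be read from the performance counter DMV; a minimal sketch (on NUMA machines, also check the per-node Buffer Node counters):

    SELECT [object_name],
           cntr_value AS page_life_expectancy_s
    FROM sys.dm_os_performance_counters
    WHERE counter_name = 'Page life expectancy'
      AND [object_name] LIKE '%Buffer Manager%';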

Our SQL Governor software already supports many of the health checks described in this part of the guide.

The next part will be about the requirements specification phases called "Lifecycle planning" and "Data center environment prestudy".

Access the next part here.


Jani K. Savolainen
Founder & CTO
DB Pro Oy