In the fourth part of the SQL capacity planning blog series, we conduct a recurring workload analysis of the environment.
In case you haven't read the previous parts of the blog series, you can catch up here:
Our hypothesis is as follows: the CPU usage % time series data for SOURCEDB1 and SOURCEDB2 is highly seasonal.
We have every reason to assume this, based on our earlier observations of the SQL Server workloads. If the hypothesis holds, it would be a great help: after the trend extrapolation and benchmark extrapolation of the time series data, we could stack those datasets onto one hypothetical target server. We would then be able to demonstrate the Active-Active FCI failover scenario very precisely and forecast the most compact SQL Server hardware setup that can handle the workload.
Picking sample data from the population
In order to test our hypothesis, we need to analyze the existing time series data gathered from the SOURCEDB1 and SOURCEDB2 servers. Below is an example from the SQL Governor software comparing two consecutive weeks from SOURCEDB1 that do not cross a month boundary. You can select the desired weeks to compare for any performance counter on any server, instance, or database, and you can constrain the comparison to service-time hours only.
The image below shows the week 52 time series values compared to the corresponding values in week 51 during the service-time hours (Monday to Friday, 7 am – 6 pm). From here we can see how much the hourly values deviate between the two weeks.
And here is the corresponding comparison from the SQL Governor software of two consecutive weeks from SOURCEDB2, again not crossing a month boundary.
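In spirit, the week-over-week comparison shown in the screenshots can be sketched as follows. This is a minimal illustration, not SQL Governor's implementation: the function names, the synthetic data, and the alignment of the two weeks by (weekday, hour) are assumptions for the example.

```python
# Sketch: align the hourly CPU usage % samples of two consecutive weeks
# by (weekday, hour) and measure how much they deviate during
# service-time hours (Mon-Fri, 07:00-18:00). Illustrative only.
from datetime import datetime, timedelta

SERVICE_HOURS = range(7, 19)   # 07:00-18:00
SERVICE_DAYS = range(0, 5)     # Monday=0 .. Friday=4

def service_time_profile(samples):
    """Map (weekday, hour) -> CPU usage %, keeping service-time hours only.

    `samples` is a list of (timestamp, cpu_percent) tuples covering one week.
    """
    return {
        (ts.weekday(), ts.hour): cpu
        for ts, cpu in samples
        if ts.weekday() in SERVICE_DAYS and ts.hour in SERVICE_HOURS
    }

def hourly_deviation(week_a, week_b):
    """Percentage-point difference per aligned service hour (week_b - week_a)."""
    a, b = service_time_profile(week_a), service_time_profile(week_b)
    return {key: b[key] - a[key] for key in a if key in b}

# Synthetic demo data: two weeks with an identical daily shape,
# the second week running one percentage point hotter across the board.
start = datetime(2023, 12, 18)   # a Monday
week_1 = [(start + timedelta(hours=h), 20 + (h % 24)) for h in range(168)]
week_2 = [(start + timedelta(days=7, hours=h), 21 + (h % 24)) for h in range(168)]

dev = hourly_deviation(week_1, week_2)
print(len(dev), max(abs(d) for d in dev.values()))  # 60 service hours, max deviation 1
```

With real monitoring data, a consistently small deviation across the aligned hours is exactly the weekly-seasonality signal the hypothesis predicts.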
Analyzing the results
Based on a brief analysis:
- For SOURCEDB1, the average CPU usage % seems well predictable on an hourly basis, so it is safe to create a forecast with planned threshold levels.
- For SOURCEDB2, the average CPU usage % workloads also seem very predictable, with the exception of one notable outlier in the data (the peak marked on Friday). We traced it to a manually run maintenance job on that particular week, which caused a one-hour peak in CPU consumption in the middle of Friday.
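An outlier like that Friday peak stands out once the weeks are aligned. As a hedged sketch of the idea (not SQL Governor's actual algorithm), one can flag any hour whose value deviates from the median of the same (weekday, hour) slot across weeks by more than a threshold; the threshold and data below are illustrative assumptions.

```python
# Sketch: flag hourly CPU usage % samples that deviate strongly from the
# median of the same (weekday, hour) slot across several weeks.
from statistics import median

def find_outliers(weeks, threshold=15.0):
    """`weeks` is a list of dicts mapping (weekday, hour) -> CPU usage %.

    Returns (week_index, slot, value) for samples far from the slot median.
    """
    outliers = []
    for slot in weeks[0]:
        values = [w[slot] for w in weeks]
        m = median(values)
        for i, v in enumerate(values):
            if abs(v - m) > threshold:
                outliers.append((i, slot, v))
    return outliers

# Two ordinary weeks plus one with a manual maintenance job at Friday 12:00.
base = {(d, h): 30.0 for d in range(5) for h in range(7, 19)}
week_1, week_2, week_3 = dict(base), dict(base), dict(base)
week_3[(4, 12)] = 85.0   # the one-hour Friday CPU spike

print(find_outliers([week_1, week_2, week_3]))  # -> [(2, (4, 12), 85.0)]
```

Once an outlier is flagged, it still takes a human to decide, as we did here, whether it is a one-off event (a manual maintenance job) or a recurring part of the workload that the forecast must accommodate.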
As you can see, our hypothesis holds. Next, we will conduct a similar analysis of the maximum CPU usage % before continuing to the final step: conducting the FCI failover analysis and selecting the right-sized hardware.
Note! It is a good idea to go through all the weeks of the monitored time series data to understand the volatility in the workloads affecting our forecasts, especially when there is not much monitoring data available. Not all performance counter data is weekly seasonal: some seasonality only emerges on a monthly or annual basis, so remember to keep track of such exceptions.
The more monitoring data you have as a baseline, the more accurate your capacity plan will be. SQL Governor helps in identifying these exceptions in the data.
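One generic way to check whether a counter really is weekly seasonal, in the spirit of the note above, is to compute the autocorrelation of the hourly series at a lag of one week (168 hours): values near 1.0 suggest a strongly repeating weekly pattern. This is a standard statistic, not SQL Governor's method, and the data below is synthetic.

```python
# Sketch: autocorrelation of an hourly CPU usage % series at a one-week lag.
import math

def autocorrelation(series, lag):
    """Pearson correlation of `series` with itself shifted by `lag` samples."""
    n = len(series) - lag
    x, y = series[:n], series[lag:]
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Four weeks of hourly CPU usage % with a perfectly repeating weekly shape.
cpu = [30 + 10 * math.sin(2 * math.pi * (h % 168) / 168) for h in range(4 * 168)]

weekly = autocorrelation(cpu, lag=168)
print(round(weekly, 2))   # -> 1.0 for a perfectly repeating weekly pattern
```

The same statistic at monthly or annual lags (given enough baseline data) can reveal the longer-period seasonality the note warns about.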
SQL Governor process for SQL Server capacity planning
With the SQL Governor software, the overall capacity planning process is as follows:
- Diagnosing existing FCI nodes and instances
- Conducting a heatmap analysis of the environment
- Conducting best fit analysis of the environment
- Conducting a recurring workload analysis of the environment
- Conducting FCI failover analysis and selecting the right-sized hardware
In the next blogpost, we will go through the fifth and final step: Conducting FCI failover analysis and selecting the right-sized hardware.
Founder & CTO
Download our Capacity Planning guide to learn more: