In this online guide, I will help my readers overcome the challenges in Microsoft SQL Server consolidation planning. You can access the main page of "Mastering SQL Server Consolidation Planning" here.
In the previous part, I wrote about non-functional requirements and risk analysis. This part covers analyzing external dependencies, defining architecture alternatives, and setting the metadata.
It is important to understand what kinds of data systems, processes, and applications are integrated into the SQL Server architecture, and likewise how external data systems affect the existing SQL Server deployments. Some of the aspects of an SQL Server deployment are:
It is a good idea to draw a graph of the data flows between the systems and write down the main points. It is also useful to draw a process chart with a schedule for each data flow in order to understand the business logic behind the scenes.
So why are these external dependencies so important? It is common that the whole backbone of the business information data flow is built on them: if one node fails, every node downstream of it fails too.
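Once the data flows are drawn as a graph, this failure propagation can also be checked programmatically. Below is a minimal Python sketch; the system names and data flows are purely illustrative, not taken from any real deployment:

```python
from collections import deque

def downstream_impact(data_flows, failed_node):
    """Return every system reachable downstream of a failed node.

    data_flows maps each source system to the systems it feeds.
    """
    impacted, queue = set(), deque([failed_node])
    while queue:
        node = queue.popleft()
        for dependent in data_flows.get(node, []):
            if dependent not in impacted:
                impacted.add(dependent)
                queue.append(dependent)
    return impacted

# Hypothetical data flows between systems (names are illustrative only)
flows = {
    "ERP": ["StagingDB"],
    "CRM": ["StagingDB"],
    "StagingDB": ["DW"],
    "DW": ["ReportingCube"],
}

print(sorted(downstream_impact(flows, "StagingDB")))  # ['DW', 'ReportingCube']
```

Running this against the real flow graph quickly shows which business processes stop when a given node goes down.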
In this phase we investigate the existing SQL Server system architecture. First, we need to figure out which kind of installations there are: standalone instances, failover cluster / Always On availability groups, replicated instances, etc. Second, we have to determine how well the current model has worked. Third, we should find out whether there is anything that needs to be changed or cannot be changed in the installations. Finally, we should identify the non-functional requirements of the installations and any issues that have arisen during the diagnostics and health check phase.
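The inventory gathered in this phase is easier to reason about when it is held in a structured form. A small Python sketch of such an inventory follows; the instance names, types, and flags are hypothetical placeholders:

```python
from collections import Counter

# Hypothetical inventory of source instances (all values illustrative).
# "frozen" marks instances that cannot be changed during consolidation.
instances = [
    {"name": "SQLPROD01", "type": "standalone", "frozen": False},
    {"name": "SQLPROD02", "type": "failover_cluster", "frozen": True},
    {"name": "SQLPROD03", "type": "availability_group", "frozen": False},
    {"name": "SQLPROD04", "type": "standalone", "frozen": False},
]

# How many installations of each kind exist in the source architecture
by_type = Counter(i["type"] for i in instances)
print(dict(by_type))

# Which installations are off-limits for consolidation changes
frozen = [i["name"] for i in instances if i["frozen"]]
print(frozen)
```

Even this trivial grouping answers the first two questions above: what kinds of installations exist, and which of them constrain the target design.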
When defining the target system architecture, the main driver should be how the source system architecture is built. It is wise to discuss the functionality and demands of the current architecture with project stakeholders in order to see what should be kept and what should be changed. A good rule of thumb is to favor simple architectures such as standalone and failover cluster installations or Always On availability groups: for consolidation purposes, the simpler the architecture, the better suited it is. An active-active cluster architecture has the disadvantage of needing extra “empty space” in terms of CPU processing power and RAM if the same service level is required after a failover. An active-passive cluster configuration has no such drawback.
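The headroom argument can be made concrete with a small calculation. The sketch below compares how much capacity each node of a two-node cluster needs so that losing one node does not degrade the service level; the workload figures are made up for illustration:

```python
def node_capacity_needed(workloads, topology):
    """Capacity each node must provide in a two-node cluster so that
    the service level survives the loss of one node.

    workloads: per-node workload, e.g. CPU cores in active use.
    topology:  'active-active' or 'active-passive'.
    """
    if topology == "active-active":
        # After a failover the surviving node carries both workloads,
        # so every node needs headroom equal to its partner's load.
        return sum(workloads)
    # Active-passive: the passive node already stands idle as spare
    # capacity, so each node only needs to fit one active workload.
    return max(workloads)

# Hypothetical workloads: 12 and 10 cores on the two nodes
print(node_capacity_needed([12, 10], "active-active"))   # 22
print(node_capacity_needed([12, 10], "active-passive"))  # 12
```

In this example the active-active design forces each node to be sized for 22 cores of work while normally running only 12 or 10, which is exactly the "empty space" cost mentioned above.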
Our international, patent-pending SQL Governor® product uses a configurable, automatic mechanism for defining the logical topology of the target architecture for instance-level consolidation. This makes capacity planning more flexible and allows you to fulfill different architecture needs.
It is a good idea to define a couple of different target architecture scenarios. The cost model prestudy will show which is the most cost-effective solution. All in all, pay attention to simplifying the overall architecture if possible – the key is to reduce the number of existing servers, not to increase them.
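Comparing scenarios ultimately comes down to totaling their costs and picking the cheapest one that meets the requirements. A toy Python sketch follows; the scenario names and cost figures are invented for illustration and have no connection to real pricing:

```python
# Hypothetical per-scenario cost estimates (all figures illustrative)
scenarios = {
    "A: two standalone servers": {"hardware": 40_000, "licensing": 110_000},
    "B: active-passive cluster": {"hardware": 55_000, "licensing": 90_000},
    "C: availability group":     {"hardware": 60_000, "licensing": 120_000},
}

totals = {name: sum(costs.values()) for name, costs in scenarios.items()}
cheapest = min(totals, key=totals.get)
print(cheapest, totals[cheapest])
```

A real cost model prestudy, covered in the final part of this guide, would of course include far more cost components than two line items per scenario.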
There are two types of metadata in the source system: metadata that is updated automatically and metadata that is updated manually. Many different kinds of metadata describe the special characteristics of the source system data more precisely. It is a good idea to keep track of at least some of the following metadata:
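The split between automatically collected and manually maintained metadata can be captured in a simple record type. A minimal Python sketch, with field names that are purely illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class InstanceMetadata:
    # Automatically collected fields (illustrative; e.g. from monitoring)
    instance_name: str
    sql_version: str
    cpu_avg_pct: float
    # Manually maintained fields (illustrative), with safe defaults
    business_owner: str = "unknown"
    criticality: str = "medium"
    maintenance_window: str = "unspecified"

# The automatic fields come from tooling; the manual ones are filled
# in by people during the planning project.
m = InstanceMetadata("SQLPROD01", "2019", 37.5, business_owner="Finance")
print(m.criticality)  # medium
```

Keeping both kinds of fields on the same record makes it easy to group and filter instances later in the capacity planning phase.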
Our SQL Governor® product contains a wide range of automatically and manually updated metadata fields. The same metadata can then be used in capacity planning projects, which makes grouping source system DBMS instances much easier.
Next up, the final part of this guide. There we discuss the requirements specification phases called "Cost model prestudy", "Calculating capacity distribution" and "Selecting the desired end architecture".
Access the final part of this guide.
Jani K. Savolainen
Founder & CTO
DB Pro Oy