In this online guide, I help my readers overcome the challenges of Microsoft SQL Server consolidation planning. You can access the main page of "Mastering SQL Server Consolidation Planning" here.
In the previous part I wrote about non-functional requirements and risk analysis. The topics of this part are analysis of external dependencies, defining target platform architecture alternatives, and setting the metadata for capacity planning.
Analysis of external dependencies
It is important to understand what kinds of data systems, processes, and applications are integrated into the SQL Server architecture. Similarly, we should understand how external data systems affect existing SQL Server deployments. Key questions to ask about an SQL Server deployment include:
- How do applications connect to servers/databases: using a connection string, SQL Server alias, DNS alias, configuration files, registry settings, hard coded values, or something else?
- Do SQL Server and the applications use mixed mode or Windows (integrated) authentication?
- Do applications use a single technical user name to connect to SQL Server?
- Are there firewall rules and are SQL Server instances using fixed TCP ports?
- Is there point-to-point interoperability between SQL Server systems, Linked Servers, Analysis Services, Reporting Services and Integration Services?
- Is there any interoperability between SQL Server systems and external systems?
- Is the interoperability message-based, ETL-based or both?
- When does this interoperability occur?
- What are the criticality levels of these systems – are there any SLAs defined for the dependencies?
- Where would a malfunction affect the system and how badly?
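Answers to these questions often surface directly in how connection strings are built. As a minimal, hypothetical sketch (the helper function, host names, and parameters below are my own illustration, not from the guide), a DNS alias combined with a fixed TCP port keeps application configuration stable when the underlying server moves during consolidation:

```python
def build_connection_string(host, port=1433, database="master",
                            integrated_security=True, user=None, password=None):
    """Compose an ODBC-style SQL Server connection string.

    Pointing `host` at a DNS alias means the string survives a server move:
    only the alias target changes, not every application config file.
    """
    parts = [
        "Driver={ODBC Driver 17 for SQL Server}",
        f"Server={host},{port}",      # a fixed TCP port simplifies firewall rules
        f"Database={database}",
    ]
    if integrated_security:
        parts.append("Trusted_Connection=yes")       # Windows (integrated) authentication
    else:
        parts.append(f"Uid={user};Pwd={password}")   # mixed mode: SQL Server login
    return ";".join(parts)

# An application pointing at a DNS alias rather than a physical host name:
conn_str = build_connection_string("sales-db.corp.example", port=1433,
                                   database="SalesDW")
print(conn_str)
```

During an inventory, collecting these attributes per application makes it obvious which connections will break on consolidation (hard-coded host names) and which will not (aliases).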
It is a good idea to draw a graph of the data flows between the systems and write down the main points. It is also useful to draw a process chart with a schedule for each data flow in order to understand the business logic behind the scenes.
So why are these external dependencies so important? It is common that the whole backbone of the business information data flow is built on them: if one node fails, the downstream nodes fail too.
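The data-flow graph drawn above can double as a quick impact-analysis tool. A minimal sketch (the systems and flows below are invented for illustration): model each data flow as a directed edge and walk the graph to find every system affected by a node failure:

```python
from collections import deque

# Directed data flows: source system -> systems consuming its data.
# These systems and flows are hypothetical examples.
flows = {
    "ERP":       ["StagingDB"],
    "CRM":       ["StagingDB"],
    "StagingDB": ["DW"],
    "DW":        ["SSAS_Cube", "SSRS_Reports"],
}

def downstream_impact(failed_node, flows):
    """Return every system that transitively depends on `failed_node`."""
    affected, queue = set(), deque([failed_node])
    while queue:
        node = queue.popleft()
        for consumer in flows.get(node, []):
            if consumer not in affected:
                affected.add(consumer)
                queue.append(consumer)
    return affected

# A StagingDB outage cascades into the warehouse and everything built on it:
print(sorted(downstream_impact("StagingDB", flows)))
# ['DW', 'SSAS_Cube', 'SSRS_Reports']
```

The same walk, run for each node, yields the "where would a malfunction affect the system and how badly" answer in list form.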
Defining SQL Server target platform architecture alternatives
In this phase we investigate the existing SQL Server system architecture. First, we need to figure out what kinds of installations there are: standalone instances, failover cluster instances, Always On availability groups, replicated instances, and so on. Second, we have to determine how well the current model has worked. Third, we should find out whether there is anything that needs to be changed or cannot be changed in the installations. Finally, we should identify the non-functional requirements of the installations and any issues that have arisen during the diagnostics and health check phase.
When defining the target system architecture, the main driver should be how the source system architecture is built. It is wise to discuss the functionality and demands of the current architecture with project stakeholders in order to see what should be kept and what should be changed. A good rule of thumb is to favor simple architectures such as standalone installations, failover clusters, or Always On availability groups: the simpler the architecture, the better it suits consolidation. An active-active cluster architecture has the disadvantage of needing extra "empty space" in terms of CPU processing power and RAM if the same service level is required after a failover. An active-passive cluster configuration has no such drawback, because the passive node already provides that headroom.
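The headroom an active-active cluster needs can be made concrete with a little arithmetic (the utilization figures below are illustrative, not from the guide): after a failover, the surviving node must absorb the failed node's load, so in a two-node active-active cluster each node must normally run below 50% to preserve the same service level.

```python
def post_failover_utilization(node_loads):
    """CPU utilization of the surviving node when all loads land on it.

    `node_loads` are each node's normal utilizations, expressed as fractions
    of a single node's capacity (the nodes are assumed identical).
    """
    return sum(node_loads)

# Active-active with both nodes busy: failover overloads the survivor.
print(post_failover_utilization([0.60, 0.60]))  # 1.2 -> 120%, service level lost

# Keeping each node under 50% preserves the service level, but leaves
# half the cluster's capacity as "empty space" in normal operation.
print(post_failover_utilization([0.45, 0.45]))  # 0.9 -> fits on one node
```

In an active-passive pair the passive node carries no load of its own, so the post-failover utilization equals the active node's normal utilization and no extra reserve is needed.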
Our internationally patent-pending SQL Governor® product uses a configurable, automatic mechanism for defining the logical topology of the target architecture in instance-level consolidation. This makes capacity planning more flexible and allows you to fulfill different architecture needs.
It is a good idea to define a couple of different target architecture scenarios. The cost model prestudy will show which is the most cost-effective solution. All in all, pay attention to simplifying the overall architecture where possible – the key is to reduce the number of servers, not to increase it.
Setting the metadata for capacity planning
There are two types of metadata in the source system: metadata that is updated automatically and metadata that is updated manually. Many different kinds of metadata describe the special characteristics of the source system data more precisely. It is a good idea to keep track of at least the following:
- Which DBMS instances should be dedicated to one specific server (implying server-level right-sizing or server-level consolidation)?
- Which servers have additional services like SSIS, SSAS, or SSRS running on them (implying server-level right-sizing or server-level consolidation)?
- What is the planned consolidation date for each server or instance?
- Which servers are production servers, and which are dedicated for testing or development purposes?
- What is the service, customer, and owner for each server, instance, and database?
- What is the criticality level of each server or instance?
- What are the generic virtualization parameters such as whether hyper-threading is used on the server?
- What is the metadata about index maintenance, database backups, and so on?
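One lightweight way to keep such metadata machine-readable is a simple record per instance. The sketch below mirrors the list above; the field names, systems, and dates are my own hypothetical illustration, not SQL Governor® fields:

```python
from dataclasses import dataclass, field
from itertools import groupby

@dataclass
class InstanceMetadata:
    server: str
    instance: str
    environment: str          # "production", "test", or "development"
    criticality: str          # e.g. "high", "medium", "low"
    owner: str
    dedicated_server: bool    # must this instance stay alone on its server?
    extra_services: list = field(default_factory=list)  # SSIS / SSAS / SSRS
    consolidation_date: str = ""                        # planned, ISO format

# A hypothetical source system inventory:
inventory = [
    InstanceMetadata("SRV01", "ERP", "production", "high", "Finance",
                     True, ["SSIS"], "2024-06-01"),
    InstanceMetadata("SRV02", "CRM", "production", "medium", "Sales",
                     False, [], "2024-06-01"),
    InstanceMetadata("SRV03", "DevDB", "development", "low", "IT",
                     False, [], "2024-09-01"),
]

# Group instances by planned consolidation date to form migration waves:
waves = {
    date: [m.instance for m in group]
    for date, group in groupby(
        sorted(inventory, key=lambda m: m.consolidation_date),
        key=lambda m: m.consolidation_date)
}
print(waves)  # {'2024-06-01': ['ERP', 'CRM'], '2024-09-01': ['DevDB']}
```

The same records can be re-grouped by environment, owner, or criticality, which is exactly the kind of grouping the capacity planning phase relies on.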
Our SQL Governor® product contains many automatically and manually updated metadata fields. The same metadata can then be used in capacity planning projects, which makes grouping source system DBMS instances much easier.
Next up is the final part of this guide, where we discuss the requirements specification phases called "Cost model prestudy", "Calculating capacity distribution", and "Selecting the desired end architecture".
Jani K. Savolainen
Founder & CTO
DB Pro Oy