Understanding DCIM In-depth for Maximizing Its Utilization
Data Center Infrastructure Management (DCIM) is not new, but its adoption is increasing as the technology matures. As data center leaders search for ways to improve efficiency and availability while reducing costs, they are looking to leverage what DCIM has promised to deliver. Experts predict the global DCIM market will grow 15 percent per year for the next three years as organizations look to improve efficiency and reduce operating costs in aging data centers.
If you have not yet implemented a DCIM, you likely have data center information scattered in many places, spread across a multitude of spreadsheets and applications, each with a different owner managing the information within it. Keeping all of this information in separate places is difficult at best, and if a conflict arises it is nearly impossible to know which document contains the correct information. Even with the best of intentions, tracking assets and environmental information independently almost always produces subpar results.
Before choosing a DCIM product, you must understand what you want to achieve. Most DCIM products provide asset lifecycle management, room and rack capacity planning, power and temperature monitoring, power and network mapping, and analytics and trending wrapped around all of this. But unless you commit the necessary resources to DCIM, you will likely achieve only a portion of your goals, or even question its value.
At ProMedica, we started our DCIM journey by defining exactly what we wanted from our DCIM, even before we picked a product. Doing so first made it easier to choose the solution that best fit our needs. We required our DCIM to:

- Track and manage all of our infrastructure and IT assets in the data centers, and keep our Configuration Management Database (CMDB) updated by synchronizing the two. Synchronizing our CMDB with our DCIM reduced errors by eliminating the need to keep two independent records.
- Monitor temperature and humidity throughout the data center to identify hot and cold spots in real time, and monitor data center AC units to track utilization and make sure cooling load is evenly distributed.
- Accept data feeds from in-rack IT equipment so that planned configurations can be validated with real data, letting us adequately track and balance power load in our data center racks.
- Accept real-time data from Power Distribution Units (PDU), Cabinet Distribution Units (CDU), and Uninterruptible Power Supplies (UPS) to help identify any part of the power distribution system that is over-utilized.
- Map all power and network connections to document upstream and downstream dependencies, helping us quickly identify systems impacted by a planned change or an unplanned outage.
- Connect into our VMware vSphere instances to map workloads to specific locations in the data center, since we are highly virtualized.
- Provide dashboards and reports that allow for easy interpretation of the collected data, produce utilization reports, and calculate Power Usage Effectiveness (PUE).
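For readers unfamiliar with the last metric above: PUE, as defined by The Green Grid, is total facility energy divided by the energy consumed by IT equipment alone, so 1.0 is ideal. A minimal sketch of the calculation a DCIM performs under the hood; the function name and the sample readings are illustrative assumptions, not vendor code:

```python
def compute_pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power.

    1.0 is the theoretical ideal; most data centers fall somewhere above that,
    with the overhead going to cooling, power distribution losses, and lighting.
    """
    if it_equipment_kw <= 0:
        raise ValueError("IT equipment load must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical readings a DCIM might aggregate from utility and PDU meters:
print(round(compute_pue(total_facility_kw=500.0, it_equipment_kw=320.0), 2))  # 1.56
```

In practice a DCIM trends this ratio over time from metered power data, which is why the UPS/PDU/CDU feeds listed above matter: without them the denominator is a guess.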
Getting all of this data into a DCIM and making sure it is accurate isn't easy and will not happen overnight. Dedicating resources to the implementation and continued operation is essential if you want to extract meaningful information from your DCIM. A small group of IT and data center engineers was assigned the implementation, but as our DCIM project grew closer to go-live, we found it important to dedicate a single resource to be responsible for it and to assign additional resources as necessary. This mattered because the person in charge needed to think about the entire stack of technology feeding into the DCIM and being fed from it. Rightfully so, IT engineers were generally concerned only with their own equipment, while data center engineers were focused on floor and rack space, power, and cooling.
The person given this responsibility was asked to think about the big picture and keep the reasons we wanted to implement a DCIM as their target. That person is also responsible for all of the care and feeding of the system, including onboarding new assets, performing asset audits, making sure the system accurately captures all assigned measurements, applying updates to the DCIM system, and working with the DCIM vendor to address any support issues.
When the dust settles after all this work, what you have can be a powerful tool. It has been less than a year since the go-live of our DCIM, and we have already realized tangible savings. Because we can now display real-time temperature not only in our data center but from all of our in-rack IT equipment, we have been able to raise our data center temperatures, reducing the energy needed to keep them cool. We have also identified racks with power loads in excess of their failover capacity that would have caused an unplanned outage if redundant power had been interrupted. Infrastructure capacity baselines have been created from the information our DCIM provides; we now know exactly how much room, rack, power, and cooling capacity we have in real time. Our support costs will decrease this year because we now have better asset inventories and can reduce maintenance costs by removing unused equipment.
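The failover-capacity finding above is worth unpacking: in a dual-feed rack, if one feed is lost the surviving feed must carry the entire rack load. A minimal sketch of that kind of check, assuming dual-feed racks and a common 80 percent continuous-load derating; the function, rack names, and readings are all illustrative assumptions, not our DCIM's actual logic:

```python
def exceeds_failover_capacity(load_a_kw: float, load_b_kw: float,
                              feed_capacity_kw: float,
                              derate: float = 0.8) -> bool:
    """Flag a dual-feed rack whose combined load could not be carried by a
    single feed if the other were lost. Applies a derating factor (80% is a
    common continuous-load practice; adjust to your electrical code and
    equipment ratings)."""
    return (load_a_kw + load_b_kw) > feed_capacity_kw * derate

# Hypothetical per-rack readings a DCIM might pull from CDU meters
# (kW on feed A, kW on feed B); each feed rated at 7.2 kW:
racks = {
    "R01": (3.0, 3.2),
    "R02": (2.0, 2.5),
}
at_risk = [name for name, (a, b) in racks.items()
           if exceeds_failover_capacity(a, b, feed_capacity_kw=7.2)]
print(at_risk)  # ['R01']
```

The point of the example is that each feed running at comfortable-looking utilization (here, under 50 percent) can still hide a rack that would trip a breaker the moment redundancy is actually needed, which is exactly the condition real-time power feeds into a DCIM let you surface.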
To realize all that a DCIM can provide, it is critical to do all of the upfront planning and to commit the resources that are needed to keep the data current. Without this commitment, your DCIM will just become an expensive tool that contains stale and useless information.