Are You Ready To Virtualize Your Enterprise Applications and Databases?
In today’s datacenter, the enterprise applications customers run range from tier 0/1 applications and databases, to next-generation cloud-native applications, to messaging and collaboration apps, to customized applications. Each application has unique requirements around factors such as performance, scalability, and availability. For instance, in terms of scalability, tier 0 and tier 1 application workloads might require rapid scaling of performance and capacity, whereas messaging and collaboration apps might see more gradual growth in capacity.
Tier 0 and tier 1 database performance has long been the primary criterion for selecting server and storage infrastructure. But as mentioned earlier, performance is now only one slice of the pie for datacenter infrastructure teams. Infrastructure and database architects and administrators need their infrastructure to meet availability, scalability, and management requirements across a diverse set of business applications.
If infrastructure is lacking in even one of these areas, it can create substantial problems. Say your company loses continuous access to its Enterprise Resource Planning (ERP) system running on SAP NetWeaver and an Oracle database. Losing access to such a critical application will seriously hamper productivity and your bottom line, both immediately and over time.
Application teams, including database administrators, are increasingly turning to virtualization to meet the collective needs of these applications. The rewards can be substantial, with benefits ranging from a smaller datacenter footprint, to better cost control, to faster provisioning, to the ability to handle scaling and growth. But virtualization isn’t a silver bullet, and before diving in it’s important to realize that, if not done correctly, it can add even more challenges.
Virtualizing critical databases can easily become an expensive proposition. According to a survey we performed here at Nutanix, cost, along with software licensing, is the leading inhibitor IT faces when virtualizing enterprise applications, including tier 1 databases. Companies must factor in not only the upfront capital expense associated with virtualizing applications, but also the ability to scale compute and storage, ideally in a linear, granular fashion. On the operational front, they must also consider the overhead that comes with managing the infrastructure and the virtualization stack.
Licensing cost is one of the top reasons why more and more enterprises are evaluating multi-hypervisor environments. According to IDC, more than 72 percent of enterprises were using more than one hypervisor in 2015, up from 59 percent in 2014. While adopting multiple hypervisors in the datacenter can help control licensing costs, and deliver other benefits like minimized vendor lock-in and support for important applications, there are also drawbacks: each hypervisor adds to infrastructure complexity and overhead. IT administrators and architects can overcome these challenges by carefully selecting the right hypervisor for the right job. This includes using modern, lean hypervisors, which deliver close to bare-metal performance without unnecessary software, functionality, or overhead.
As discussed earlier, in addition to considerations on the hypervisor front, ensuring maximum impact for your virtualized database deployments requires the right compute and storage architecture. Achieving this requires a fundamentally different approach to enterprise application needs. Hyperconverged infrastructure (HCI) represents one such approach. In five short years, hyperconvergence has marched from the fringes to the mainstream as it has been adopted by businesses of all sizes. Its appeal is that it delivers the performance, availability, scalability, and manageability needed to virtualize even the most demanding enterprise applications, including tier 0/1 databases, all with the benefits and flexibility of a 100 percent software-defined solution.
Hyperconvergence requires thinking beyond legacy constructs such as storage-centric shared storage. The most effective HCI solutions are built on the web-scale foundation that has propelled companies like Google, Facebook, and Amazon to leadership positions. They enable enterprises to consolidate virtualized databases and associated enterprise applications on a single platform with best-in-class compute and storage performance, typical of local flash. With such a solution, enterprises can keep pace with rapidly growing business needs without big upfront investments or disruptive forklift upgrades.
A well-designed HCI solution also tackles storage complexity head-on by aligning storage to the virtualization layout, making it practically invisible. This helps reduce storage operational costs without giving up availability and flexibility. Given the criticality of the enterprise applications being virtualized, an HCI solution should be able to minimize, if not eliminate, planned downtime and protect against unplanned issues for high availability, using self-healing features and functionality.
In the race to match increasing business demands stride for stride, today’s datacenters must transform to support critical databases and enterprise applications. Executed properly, this metamorphosis can free IT staff from laborious maintenance and empower them to fulfill their true role: focusing on enterprise applications and delivering innovation.