As we head into Q4 on the heels of VMworld and in advance of KubeCon, I wanted to touch on how unified orchestration and automation can help bridge those two worlds and provide developer self-service.
Organizations are building out Kubernetes clusters in their own data centers to meet security and compliance requirements and to take advantage of existing hardware investments, along with myriad other reasons. According to recent Gartner research, over 70% of enterprises are using on-prem or hosted IaaS-based container management. The data indicates this work is led by IT Operations and Platform Engineering teams, who consistently struggle with skills gaps as they attempt to span virtualized, containerized, and public-cloud workloads.
Building Kubernetes clusters on-prem gives developers the ability to quickly develop and iterate on applications in a way that offers advanced capabilities over and above traditional hypervisors.
Eager to get going, your team will likely run into hiccups when you want to stand up your on-prem Kubernetes cluster in an automated fashion. Here’s what I mean.
How manual processes create bottlenecks
A production Kubernetes cluster is typically composed of a minimum of six nodes (virtual machines or physical servers). Three are dedicated to the control plane of the cluster.
The remaining worker nodes actually run the containers that development teams create as part of the application development process.
Here comes the challenge from an IT operations perspective: You need to provision these Kubernetes clusters in a way that aligns with what your organization has specified. This means you need to deploy to the infrastructure (VMware or Nutanix, for example) running in the on-prem data center.
This is where the headaches come in if you're still relying on manual processes: many of the steps commonly performed when provisioning any production workload still need to happen for the workloads using the Kubernetes resources.
Important things like:
- Allocating and reserving IP addresses for the Kubernetes cluster nodes as the cluster is being provisioned
- Integrating with domain name system (DNS) to create name records as part of the cluster provisioning process
- Connecting to the configuration management database (CMDB) to populate the records for the Kubernetes cluster nodes for tracking and auditing purposes
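To make the three steps above concrete, here is a minimal Python sketch of what automating them might look like. The `Ipam` class and the `dns` and `cmdb` objects are hypothetical in-memory stand-ins; in a real environment each would be an API call to your own IPAM, DNS, and CMDB tools.

```python
import ipaddress

class Ipam:
    """Hypothetical stand-in for an IPAM system: hands out host IPs from a CIDR."""
    def __init__(self, cidr):
        self.pool = (str(ip) for ip in ipaddress.ip_network(cidr).hosts())
        self.reserved = []

    def reserve(self):
        ip = next(self.pool)       # next free address in the pool
        self.reserved.append(ip)   # track it as reserved
        return ip

def provision_cluster_records(node_names, ipam, dns, cmdb, domain="example.internal"):
    """For each cluster node: reserve an IP, create a DNS record, add a CMDB entry."""
    records = {}
    for name in node_names:
        ip = ipam.reserve()                                   # 1. IP allocation
        fqdn = f"{name}.{domain}"
        dns[fqdn] = ip                                        # 2. DNS name record
        cmdb.append({"node": name, "ip": ip, "fqdn": fqdn})   # 3. CMDB entry
        records[name] = ip
    return records

# Six-node cluster: three control-plane nodes plus three workers
dns, cmdb = {}, []
ipam = Ipam("10.0.0.0/28")
nodes = [f"k8s-cp-{i}" for i in range(1, 4)] + [f"k8s-worker-{i}" for i in range(1, 4)]
records = provision_cluster_records(nodes, ipam, dns, cmdb)
```

The point of the sketch is the sequencing: every node gets all three records in one pass, so nothing depends on a ticket to another team.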
Having to deal with multiple teams to accomplish these things can slow you down. Depending on the structure of your organization, you might have to reach out to disparate teams to facilitate various requests. And, as is often the case when working with downstream or external teams, aligning on priorities and schedules can create additional bottlenecks.
Reap the benefits of automation
Choosing to bring automation into your Kubernetes cluster provisioning process offers familiar (but always compelling) benefits:
- Speed and agility that come with handling IP address management, DNS integration, and CMDB updates in an automated fashion, minus any team-to-team manual handoffs
- Consistency and peace of mind in knowing that every time a cluster is deployed, it got an IP address, it had a DNS record created, and it had a CMDB entry created, all automatically as part of the process
Automation also eliminates the inevitable human errors that come with performing and tracking manual processes. (It happens to everyone: you forget to create that DNS record, and now the Kubernetes cluster standup process is stalled.)
All the points above about incorporating automation into the Kubernetes cluster provisioning process are available out of the box with Morpheus. We offer some additional advantages as well.
For example, many other Kubernetes solutions require an extra first step: a bootstrap node or cluster that builds your actual cluster before it's ready for consumption. That's not needed with the Morpheus platform. We keep it simple. You can come into Morpheus and get exactly what you want and need to get going faster.
Morpheus also makes it easier to integrate automation scripts for more than just provisioning the Kubernetes cluster. Enterprise organizations often have other external systems, such as ArgoCD, that need to interface or communicate with the Kubernetes cluster.
The Morpheus platform can attach an automation workflow to Kubernetes cluster provisioning to execute automation tasks in common tools such as Python, Ansible, or bash. This gives cluster administrators a framework for "glue code": the automation that orchestrates integrations between systems in a robust, simple fashion. The same capability simplifies installing additional software, such as security monitoring applications, for a more robust production deployment, as well as integrating with external systems.
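As one example of such glue code, a post-provisioning Python task might register the new cluster with an external ArgoCD instance. The sketch below renders a Kubernetes Secret following ArgoCD's declarative cluster-secret format; the function name and all parameter values are illustrative, and the credential handling should be adapted to your environment.

```python
import json

def argocd_cluster_secret(cluster_name, api_server_url, bearer_token):
    """Render an ArgoCD 'cluster' Secret manifest so an external ArgoCD
    instance can deploy to the newly provisioned cluster.

    Sketch only: the label and stringData layout follow ArgoCD's
    declarative cluster-secret format, but real glue code would also
    handle TLS settings and secret storage.
    """
    return {
        "apiVersion": "v1",
        "kind": "Secret",
        "metadata": {
            "name": cluster_name,
            # This label is how ArgoCD recognizes the Secret as a cluster entry
            "labels": {"argocd.argoproj.io/secret-type": "cluster"},
        },
        "stringData": {
            "name": cluster_name,
            "server": api_server_url,
            "config": json.dumps({"bearerToken": bearer_token}),
        },
    }

# Hypothetical usage inside a post-provision workflow task
manifest = argocd_cluster_secret(
    "prod-east", "https://10.0.0.10:6443", "example-token"
)
```

A task like this runs automatically as part of the same workflow that stood the cluster up, so the cluster is ready for GitOps deployments the moment provisioning finishes.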
Lastly, our unified orchestration platform means you don’t need to context switch between hypervisors, Kubernetes platforms, and public cloud portals. It’s a single control plane to bring these disparate technologies together under a simple self-service umbrella.
Automate and go
With Morpheus, you can automate processes end to end, from everything needed to stand up a new Kubernetes cluster on-prem to tying it into all the external systems and moving parts that most enterprise orgs have to deal with when deploying clusters and workloads. It's that simple.
Take a look at this datasheet for a full list of Morpheus capabilities for Kubernetes. If you are attending the physical or virtual KubeCon event this year, please stop by for a chat or request a demo online and we can go into depth on your unique set of hybrid cloud challenges.