Google Kubernetes Engine: It Shouldn’t Be This Hard
- By Winston Thomas
- August 18, 2022
You can’t deny that Google Kubernetes Engine (GKE) provides real-world benefits.
The most significant benefit of Google’s managed Kubernetes service, which reached general availability roughly a year after Kubernetes itself, is that it makes running Kubernetes clusters much simpler.
GKE tackles the management headache that Kubernetes introduces through its two principal planes of operation: the control plane and the data plane. The control plane manages the cluster’s overall state and schedules workloads onto the worker nodes; the data plane, made up of those worker nodes, runs the containers and carries the traffic that passes through them.
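To make the split concrete, here is a brief sketch of how it looks from the command line (the cluster name, zone, and node count are made-up examples, not anything from the article): Google hosts the control plane, so the only machines that appear in your project are the worker nodes.

```shell
# Create a GKE cluster. Google runs the control plane (API server,
# scheduler, etcd) for you; it never shows up as an instance you manage.
gcloud container clusters create demo-cluster \
    --zone us-central1-a --num-nodes 3

# The worker nodes are the data plane, and they are the only
# machines you actually see and pay for directly.
gcloud container clusters get-credentials demo-cluster --zone us-central1-a
kubectl get nodes
```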
If you are running a small number of Kubernetes clusters, managing both planes is tolerable. But it becomes a major headache when you are spinning clusters up and down, orchestrating nodes, and moving pods here and there across a disparate development environment.
So why not have Google manage most of the control plane? That’s what GKE did when it was introduced in 2015: it essentially abstracted away the underlying infrastructure. Developers can do what they’ve always done best, which is write good code, without worrying about the infrastructure that Kubernetes clusters manage (infrastructure that has also become a target for supply chain attacks).
This makes your DevOps teams more efficient, whether you are a startup or an agile team within an enterprise. They can focus on managing their CI/CD pipelines and spend less time shuffling nodes around (or introducing errors as they do), saving time in the process.
Another significant benefit is that GKE, built on Kubernetes, supports the common Docker container format. DevOps teams can store and access Docker images and run Docker containers easily.
Taking the complexity out of GKE
Yet, for all of GKE’s numerous benefits, many developers see it as a bit of rocket science. In a talent-scarce landscape, where getting the right people in DevOps is already hard enough (try finding a Scrum Master), learning to use GKE optimally can be a tall order.
Yes, GKE can simplify Kubernetes complexity, but it has a learning curve of its own. Developers still need to know how to create a production-ready environment, and the service only entered mainstream development a few years ago.
“GKE technology has been around for a while (since 2006) but became mainstream a few years ago,” said Ravi Paul, country lead for Malaysia at Searce.
Then there is scaling. Moving nodes to a new cluster is simple enough if your development and production environments are well aligned and straightforward. In many companies, that is not the case, leaving teams bogged down in cluster provisioning and management tasks.
Lastly, GKE faces a flexibility reckoning. It was designed to bring some management sensibility to Kubernetes environments, but it requires talent; without proper training, you can end up with a GKE environment that blows up budgets.
“Overall, the experience of people using GKE has not been all positive. It takes effort to build and therefore to be able to reap the benefits,” said Paul.
Google understood these challenges and introduced new solutions.
Making GKE practical for mortals
In 2019, Google introduced a serverless container offering called Google Cloud Run. It abstracted the infrastructure parts that developers do not want to manage. It did this by managing the data plane.
Google Cloud Run had limitations: it did not support stateful applications, it did not address flexibility, and its heavy abstraction was not necessarily good for security.
Last year’s introduction of GKE Autopilot went a step further in addressing those gaps. It offers the same management simplicity as Google Cloud Run by managing both the control and data planes, while allowing more customization and better scalability management. A strong security posture, set as the default, also delivers the enterprise-grade security many developers need.
GKE Autopilot also introduced automatic health monitoring and auto-repair. Developers no longer have to scratch their heads calculating the compute capacity their workloads need, and cost management is streamlined: you are billed for the pods you use, not for underutilized nodes.
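That per-pod billing model is driven by the resources your workloads request. As a rough sketch (cluster name, region, image, and request sizes are illustrative assumptions, not values from the article), an Autopilot setup might look like this:

```shell
# Create an Autopilot cluster: no node pools to size, Google manages
# both planes and provisions capacity to match pod requests.
gcloud container clusters create-auto demo-autopilot --region us-central1

# Deploy a workload; on Autopilot, billing follows the CPU and memory
# each pod requests rather than the size of any underlying nodes.
kubectl create deployment web --image=nginx:1.23 --replicas=2
kubectl set resources deployment web --requests=cpu=500m,memory=512Mi
```

Tuning those `cpu` and `memory` requests is therefore the main cost lever on Autopilot, in place of node-pool sizing on standard GKE.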
Essentially, GKE Autopilot helps you manage a Kubernetes development environment almost like a professional.
The value of partnerships
Even with GKE Autopilot deployed, you need a particular understanding of how Kubernetes works and of what you are automating. This is where a company like Searce comes in.
With deep Kubernetes and development expertise, a robust set of development resources, and experience working with startups and development teams within enterprises, it can help DevOps teams take the most optimized route.
Paul also says there is no such thing as one-size-fits-all, especially when it comes to development.
For example, understanding whether to choose the standard GKE route, Google Cloud Run, or GKE Autopilot is not just a talent or efficiency question. You also need to understand your team’s experience managing the data plane and what security constraints you’re working with.
Searce takes the guesswork out of deploying and running GKE using a five-phase process:
- Assess precisely the environment that the customer is working with
- Present the findings of the assessment
- Conduct workshops to ensure proper knowledge transfer
- Appoint a dedicated technical account manager
- Use managed service offerings to manage the customer’s environment 24/7
While many consulting firms may offer similar approaches, Paul highlighted Searce’s competitive differentiators.
“We have been around longer than our competitors. So, we’ve created strong, trusted relationships with many clients. We’ve also done multiple deployments around the globe while many of our competitors are usually born and raised in Southeast Asia. And when you work with Searce, we are putting this tremendous experience and expertise in your hands,” he said.
Winston Thomas is the editor-in-chief of CDOTrends and DigitalWorkforceTrends. He’s a singularity believer, a blockchain enthusiast, and believes we already live in a metaverse. You can reach him at [email protected].
Image credit: iStockphoto/Jelena Danilovic