Category Archives: Kubernetes

Kubernetes… Kubernetes… Kubernetes!

Kubernetes is taking centre stage at KubeCon and Cloud Native Con this week, and it has attracted 12,000 attendees, which is amazing for an open-source, non-vendor-specific conference. Kubernetes has dominated the IT water-cooler talk this year and a lot of people talk about it… but are they also doing it? This post is more or less a little social experiment testing the lure of industry keywords and current trend topics.

I have written a legitimate opinion piece today, which can be found here, around KubeCon and some thoughts on Kubernetes in relation to OpenStack, Docker and the rest of the Cloud Native landscape. Apart from this being a honeypot(ish) post, I am legitimately interested in polling people on the following:

Have you installed, configured and used Kubernetes?


Is your company actively deploying Cloud Native Applications today?


Is your company actively deploying Containerised Applications today?


#KubeCon – 12,000 People Can’t Be Wrong… Right?

This week, KubeCon and Cloud Native Con is happening in San Diego. In the lead-up to the event, there was talk of 12,000 registrations, which puts it up there as one of the fastest-growing industry events by numbers and apparently the biggest independent, non-vendor-specific event ever in our industry. When you consider that VMworld US had approximately 20,000 attendees, the attendance for KubeCon is impressive to say the least.

Most people are aware that Kubernetes is hot right now. I’ve written a couple of articles this year on the subject and have also been tracking the rise of Kubernetes in the vernacular of the more traditional infrastructure IT community over the past few years.

There is no doubt there is a significant element of #FOMO associated with the rise of Kubernetes, but looking at the breadth of the conference as a whole, it’s more about the Cloud Native aspect of the ecosystem. Kubernetes as a “theme” is the draw, but I would bet that a large chunk of the attendees (the Dev community and those directly associated with Cloud Native aside) have not grasped the cloud native movement that powers KubeCon and Cloud Native Con… I will admit that it is something that I am yet to come to terms with as well.

Are there shades of OpenStack here?

Docker Enterprise was acquired last week by Mirantis, which itself started life as an OpenStack offering to rival the likes of more managed OpenStack platforms from VMware and the like. OpenStack is now just a block in the overall picture of what Mirantis offers. OpenStack was set to dominate the IT industry and change the world, and back when it was on top of its own hype curve I remember a lot of similar FOMO conversations happening.

Kubernetes appears to be more than a block at the moment. I’ve been talking to a lot of people and discovering the power of Kubernetes myself, though I will reserve judgement on the holy war that appears to have been run and won. What some may find interesting is that Docker is still the containerisation platform that powers the orchestration and management layer that is Kubernetes. It obviously extends and is being extended to more use cases, but the parallels in terms of hype between Kubernetes and OpenStack are noted.

Large Complex Ecosystem

Orchestration engines for Docker were always the battleground in this cloud native space. That space only exists because there is a groundswell of developers creating applications on Cloud Native Platforms. While traditional/monolithic software development isn’t going away any time soon, it’s clear that the Cloud Native approach is well and truly mainstream.

That doesn’t make things easier now that Cloud Native is more mainstream. In fact, the CNCF acknowledges the complexity of the existing ecosystem via its Cloud Native Trail Map, which begins to point organizations down the right path as they start their Cloud Native journey.

To finish off, I’ll leave you with the image below. It’s the Cloud Native Landscape as it sits today. This isn’t your everyday IT infrastructure ecosystem. There are literally hundreds (if not thousands) of different permutations and choices consumers of IT need to think about when looking to go down their Cloud Native journeys. Kubernetes is a building block and one part of the puzzle… though an important one that does bring together a lot of the other elements you see below.

1,277 cards with a total market cap of $14.55T and funding of $63.28B can’t be wrong… right?

This is also worth a watch from theCUBE guys.

References:

KubeCon + CloudNativeCon North America 2019

 

Deploying a Kubernetes Sandbox on VMware with Terraform

Terraform from HashiCorp has been a revelation for me since I started using it in anger last year to deploy VeeamPN into AWS. From there it has allowed me to automate lab Veeam deployments, configure VMware Cloud on AWS SDDC networking and configure NSX vCloud Director Edges. The time saved by utilising the power of Terraform for repeatable deployment of infrastructure is huge.

When it came time for me to play around with Kubernetes to get myself up to speed with what was happening under the covers, I found a lot of online resources on how to install and configure a Kubernetes cluster on vSphere with a Master/Node deployment. While I was tinkering, I would break deployments, which meant I had to start from scratch and reinstall. This is where Terraform came into play. I set about creating a repeatable Terraform plan to deploy the required infrastructure onto vSphere and then have Terraform remotely execute the installation of Kubernetes once the VMs had been deployed.

I’m not the first to do a Kubernetes deployment on vSphere with Terraform, but I wanted to have something that was simple and repeatable to allow quick initial deployment. The above example uses KubeSpray along with Ansible and other dependencies. What I have ended up with is a self-contained Terraform plan that can deploy a Kubernetes sandbox with a Master plus a dynamic number of Nodes onto vSphere using CentOS as the base OS.

What I haven’t automated is the final step of joining the nodes to the cluster. That step takes a couple of seconds once everything else is deployed. I also haven’t integrated this with VMware Cloud Volumes and prepped for persistent volumes. Again, the idea here is to have a sandbox deployed within minutes to start tinkering with. For those who are new to Kubernetes, it will help you get to the meat and gravy a lot quicker.

The Plan:

The GitHub Project is located here. Feel free to clone/fork it.

In a nutshell, I am utilising the Terraform vSphere Provider to deploy a VM from a preconfigured CentOS template, which will end up being the Kubernetes Master. All the variables are defined in the terraform.tfvars file and no other configuration needs to happen outside of this file. Key variables are fed into the other tf declarations to deploy the Master and the Nodes, as well as to configure the Kubernetes cluster IP networking.
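To give a feel for the shape of this, the tfvars file looks something along the lines of the sketch below. The values, and most of the variable names, are my own illustrative assumptions; check the repo for the real names.

```hcl
# terraform.tfvars -- illustrative only; variable names and values are assumptions
vsphere_server     = "vcenter.lab.local"
vsphere_user       = "administrator@vsphere.local"
vsphere_password   = "SuperSecret1!"
vsphere_datacenter = "LAB-DC"
vsphere_cluster    = "LAB-CLUSTER"
vsphere_datastore  = "vsanDatastore"
vsphere_network    = "VM Network"
vm_template        = "CentOS7-Template"

master_name = "k8s-master"
master_ip   = "10.0.10.10"
```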

[Update] – It seems as though Kubernetes 1.16.0 was released over the past couple of days. This resulted in the scripts not installing the Master correctly due to an API issue when configuring the POD networking. Because of that, I’ve updated the code to use a variable that specifies the Kubernetes version being installed. This can be found on Line 30 of the terraform.tfvars. The default is 1.15.3.
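As a rough sketch of what that version pin could look like (the variable name here is an assumption rather than the repo’s exact declaration):

```hcl
# variables.tf -- pin the Kubernetes package version so an upstream release
# doesn't break the install scripts (variable name assumed for illustration)
variable "k8s_version" {
  description = "Version of kubeadm/kubelet/kubectl to install"
  default     = "1.15.3"
}

# Interpolated into the install step along the lines of:
#   yum install -y kubeadm-${var.k8s_version} kubelet-${var.k8s_version} kubectl-${var.k8s_version}
```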

The main items to consider when entering your own variables for the vSphere environment are Line 18 and then Lines 28-31. Line 18 defines the Kubernetes POD network, which is used during the configuration, and Lines 28-31 set the number of nodes and the starting name for the VMs, and then use two separate variables to build out the IP addresses of the nodes. Pay attention to the format of the network on Line 30 and then choose the starting IP for the Nodes on Line 31. This is used as the starting IP for the Node IPs and is enumerated in the code using the Terraform count construct.
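A hedged sketch of how that enumeration hangs together is below. The variable and resource names are mine, and most of the required vSphere clone and hardware settings are omitted for brevity:

```hcl
# Illustrative fragment only -- not the repo's exact declarations
variable "node_count"    { default = 3 }            # number of worker nodes
variable "node_name"     { default = "k8s-node" }   # VM name prefix
variable "node_network"  { default = "10.0.10" }    # first three octets of the node network
variable "node_ip_start" { default = 20 }           # host octet of the first node

resource "vsphere_virtual_machine" "node" {
  count = var.node_count
  name  = "${var.node_name}-${count.index + 1}"

  # ...CPU, memory, disk and clone/template settings omitted...

  clone {
    customize {
      network_interface {
        # Node IPs are enumerated from the starting IP using count.index
        ipv4_address = "${var.node_network}.${var.node_ip_start + count.index}"
        ipv4_netmask = 24
      }
    }
  }
}
```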

By using Terraform’s remote-exec provisioner, I am then using a combination of uploaded scripts and direct command line executions to configure and prep the Guest OS for the installation of Docker and Kubernetes.

You can see towards the end that I have split up the command line scripts to ensure that the dynamic nature of the deployment is attained. The remote-exec on Line 82 pulls in the POD Network variable and executes it inline. The same is done for Lines 116-121, which configure the Guest OS hosts file to ensure name resolution. They are used together with two other scripts that are uploaded and executed.
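Conceptually, the master’s provisioners look something like the fragment below. The script path and variable names are assumptions, and the real plan splits things up slightly differently:

```hcl
# Fragment of the master VM resource -- illustrative sketch only
resource "vsphere_virtual_machine" "master" {
  # ...clone and hardware settings omitted...

  connection {
    type     = "ssh"
    host     = var.master_ip
    user     = "root"
    password = var.guest_password   # assumed variable name
  }

  # Upload the OS prep / Docker / kubeadm install script
  provisioner "file" {
    source      = "scripts/prep-k8s.sh"   # hypothetical path
    destination = "/tmp/prep-k8s.sh"
  }

  # Inline commands let Terraform interpolate the POD network variable directly
  provisioner "remote-exec" {
    inline = [
      "chmod +x /tmp/prep-k8s.sh && /tmp/prep-k8s.sh",
      "kubeadm init --apiserver-advertise-address=${var.master_ip} --pod-network-cidr=${var.k8s_pod_network}",
    ]
  }
}
```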

The scripts have been built up from a number of online sources that go through how to install and configure Kubernetes manually. For the networking, I went with Weave Net after having a few issues with Flannel. There are lots of other networking options for Kubernetes… this is worth a read.

For better DNS resolution on the Guest OS VMs, the hosts file entries are constructed from the IP address settings set in the terraform.tfvars file.
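For example, those hosts entries could be stitched together inside a remote-exec provisioner by interpolating the tfvars values (again, variable names assumed and the fragment sits inside the VM resource):

```hcl
# Fragment inside the VM resource -- builds /etc/hosts from the tfvars IP settings
provisioner "remote-exec" {
  inline = [
    "echo '${var.master_ip} ${var.master_name}' >> /etc/hosts",
    "echo '${var.node_network}.${var.node_ip_start} ${var.node_name}-1' >> /etc/hosts",
    "echo '${var.node_network}.${var.node_ip_start + 1} ${var.node_name}-2' >> /etc/hosts",
    "echo '${var.node_network}.${var.node_ip_start + 2} ${var.node_name}-3' >> /etc/hosts",
  ]
}
```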

Plan Execution:

The Nodes can be deployed dynamically using a Terraform var option when applying the plan. This allows for zero to as many nodes as you want for the sandbox… though three seems to be a nice round number.

The number of nodes can also be set in the terraform.tfvars file on Line 28. The variable set during the apply will take precedence over the one declared in the tfvars file. One of the great things about Terraform is that we can alter the variable either way, which will end up with nodes being added or removed automatically.
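In other words, something like `terraform apply -var 'node_count=5'` (using my assumed variable name) wins over the value sitting in the tfvars file:

```hcl
# terraform.tfvars -- default worker node count (variable name assumed)
# Overridden at apply time by: terraform apply -var 'node_count=5'
node_count = 3
```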

Once applied, the plan will work through the declaration files and the output will be similar to what is shown below. You can see that in just over 5 minutes we have deployed one Master and three Nodes ready for further config.

The next step is to use the kubeadm join command on the nodes. For those paying attention, the complete join command was output as part of the Terraform apply. Once applied on all nodes, you should have a ready-to-go Kubernetes cluster running on CentOS on top of vSphere.
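If you did want to script even that last step, one hypothetical approach would be to feed the join command into a null_resource with a remote-exec provisioner per node. This is not in the repo (the plan deliberately leaves the join manual); it is just a sketch of how it could be wired up:

```hcl
# Hypothetical sketch only -- the plan in the repo leaves this step manual.
# Runs the kubeadm join command on every node via SSH.
resource "null_resource" "join_node" {
  count = var.node_count

  connection {
    type     = "ssh"
    host     = "${var.node_network}.${var.node_ip_start + count.index}"
    user     = "root"
    password = var.guest_password   # assumed variable name
  }

  provisioner "remote-exec" {
    inline = [
      # var.kubeadm_join_command would hold the full string, e.g.
      # kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
      var.kubeadm_join_command,
    ]
  }
}
```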

Conclusion:

While I do believe that the future of Kubernetes is such that a lot of the initial installation and configuration will be taken out of our hands and delivered to us via services based in Public Clouds or through platforms such as VMware’s Project Pacific, having a way to deploy a Kubernetes cluster locally on vSphere is a great way to get to know what goes into making a containerisation platform tick.

Build it, break it, destroy it and then repeat… that is the beauty of Terraform!

References:

https://github.com/anthonyspiteri/terraform/tree/master/deploy_kubernetes_CentOS