Tag Archives: Containers

VMworld 2019 Review – Project Pacific is a Stroke of Kubernetes Genius… but not without a catch!

Kubernetes Kubernetes, Kubernetes… say Kubernetes one more time… I dare you!

If it wasn’t clear what the key takeaway from VMworld 2019 in San Francisco was last week, then I’ll repeat it one more time… Kubernetes! It was something I predicted prior to the event in my session breakdown. And all jokes aside, given the number of times we heard Kubernetes mentioned last week, we know that VMware signalled their intent to jump on the Kubernetes freight train and ride it all the way.

When you think about it, the announcement of Project Pacific isn’t a surprise. Apart from it being an obvious path to take to ensure VMware remains relevant to both IT Operations (IT Ops) and Developers (Devs), the more I learned about what it actually does under the hood, the more I came to believe that it is a stroke of genius. If it delivers technically on its promise of full ESXi and Kubernetes integration into the one vSphere platform, then it will be a huge success.

The whole premise of Project Pacific is to use Kubernetes to manage workloads via declarative specifications, essentially allowing IT Ops and Devs to tell vSphere what they want and have it deploy and manage the infrastructure that ultimately serves as a platform for an application. This is all about the application! All of the infrastructure, and most of the platform, is abstracted away to make the application work. We are now looking at a “platform platform” that controls all aspects of that lifecycle end to end.

By redesigning vSphere and embedding Kubernetes into its core, VMware are able to take advantage of the things that make Kubernetes popular in today’s cloud native world. A Kubernetes Namespace is effectively a tenancy that manages applications holistically, and it’s at the namespace level where policies are applied. QoS, Security, Availability, Storage, Networking and Access Controls can all be applied top down from the Namespace. This gives IT Ops control, while still allowing Devs to be agile.
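In stock Kubernetes terms, the kind of top-down, namespace-level policy described above looks something like the sketch below (a minimal example using standard Kubernetes objects; Project Pacific’s own policy surface may well differ, and the names are placeholders):

```yaml
# A namespace acting as the tenancy boundary for a team's applications
apiVersion: v1
kind: Namespace
metadata:
  name: team-web
---
# A resource quota (a QoS-style control) applied at the namespace level,
# capping what all workloads in the tenancy can consume in aggregate
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-web-quota
  namespace: team-web
spec:
  hard:
    requests.cpu: "8"
    requests.memory: 16Gi
    persistentvolumeclaims: "10"
```

Everything inside `team-web` then inherits that ceiling, which is exactly the IT Ops control / Dev agility split the namespace construct is meant to deliver.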

I see this construct as similar to what vCloud Director offers by way of a Virtual Datacenter, with vApps used as the container for VM workloads… in truth, the way in which vCD abstracted vSphere resources into tenancies and had policies applied was maybe ahead of its time?

DevOps Separation:

DevOps has been a big push in our industry for the last few years, and the pressure to become a DevOps practitioner is huge. The reality is that the two disciplines have fundamentally different approaches to each other’s lines of work. This is why it was great to see VMware going out of their way to make the distinction between IT Ops and Devs.

Dev and IT Ops collaboration is paramount in today’s IT world, and with Project Pacific, when a Dev looks at the vSphere platform they see Kubernetes, while when an IT Ops engineer looks at vSphere they still see vSphere and ESXi. This allows for integrated self service, and brings more speed with control to deploying and managing the infrastructure and platforms that run applications.

Consuming Virtual Machines as Containers and Extensibility:

Kubernetes was described as a Platform Platform… meaning that you can run almost anything in Kubernetes as long as it’s declared. The above image shows a holistic application running in Project Pacific. The application is a mix of Kubernetes containers, VMs and other declared pieces… all of which can be controlled through vSphere and live under that single Namespace.

When you log into the vSphere Console you can see a Kubernetes Cluster in vSphere, see the PODs, and act on them as first class citizens. vSphere Native PODs are an optimized runtime… apparently more optimized than bare metal… 8% faster, as we saw in the keynote on Monday. This is achievable because CPU virtualization has almost zero cost today. VMware has taken advantage of the advanced ESXi scheduler, which has sophisticated operations across NUMA nodes, along with the ability to strip out what is not needed when running containers in VMs, so that there is an optimal runtime for workloads.

vSphere will have two APIs with Project Pacific. The traditional vSphere API that has been refined over the years will remain, and then there will be the Kubernetes API. There is also the ability to create infrastructure with kubectl. Each ESXi Cluster becomes a Kubernetes cluster. The work done on vSphere Integrated Containers has not gone to waste and has been used in this new integrated platform.
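To make the Kubernetes API side of that concrete, this is the sort of standard declarative spec the kubectl path implies (a hedged sketch using a vanilla Kubernetes Deployment; image and names are illustrative, not anything Project Pacific specific):

```yaml
# A declarative Deployment: you state the desired end state (three replicas
# of an nginx container) and the control plane reconciles towards it
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  namespace: team-web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.17
        ports:
        - containerPort: 80
```

Saved as `web.yaml`, this would be pushed with `kubectl apply -f web.yaml` and inspected with `kubectl get pods -n team-web` — the same workflow a Dev would expect on any conformant Kubernetes cluster.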

PODs and VMs live side by side, declared through Kubernetes and running in Kubernetes. All VMs can be stored in the container registry. Critical vulnerability scanning, encryption and signing capabilities that exist in the container ecosystem can be leveraged at the container level and applied to VMs.

There is obviously a lot more to Project Pacific, and there is a great presentation up on YouTube from Tech Field Day Extra at VMworld 2019, which I have embedded below. In my opinion, it is a must-watch for anyone working in and around the VMware ecosystem.

The Catch!

So what is the catch? With 70 million workloads across 500,000+ customers, VMware is betting that with this functionality in place, the current movement to refactor workloads to take advantage of cloud native constructs like containers, serverless or Kubernetes doesn’t need to happen… existing workloads instantly become first class citizens on Kubernetes. Interesting theory.

Having been digging into the complex and very broad container world for a while now, and having only just realised how high it now sits on most IT agendas, my current belief is that the world of Kubernetes and containers is better placed to be consumed on public clouds. The scale and immediacy of Kubernetes platforms on Google, Azure or AWS, without the need to procure hardware and install software, means that that model of consumption will still have an advantage over something like Project Pacific.

The one stroke of genius, as mentioned, is that by combining “traditional” workloads with Kubernetes as the control plane within vSphere, the single, declarative, self service experience it potentially offers might stop IT Operations from moving to public clouds… but is that enough to stop the developers forcing their hands?

It is going to be very interesting to see this in action and how well it is ultimately received!

More on Project Pacific

The videos below give a good level of technical background on Project Pacific, Frank also has a good introductory post here, and Kit Colbert’s VMworld session is linked in the references.

References:

https://videos.vmworld.com/global/2019/videoplayer/28407

Kubernetes Everywhere…Time to Take off the Blinkers!

This is more or less a follow-up to the post I wrote back in 2015 about the state of containers in the IT world as I saw it at the time. I started that post talking about the freight train that was containerization, along with a cheeky meme… fast forward four years and the narrative around containers has changed significantly, and now there is new cargo on that freight train… and it’s all about Kubernetes!

In my previous role working at a Cloud Provider, shortly after writing that 2015 post I started looking at ways to offer containers as a service. At the time there wasn’t much around, but I dabbled a bit in Docker and, if you remember, VMware’s AppCatalyst… which I used to deploy basic Docker images on my MBP (I think it’s still installed, actually), with the biggest highlight for me at the time being able to play Docker Doom!

I was also involved in some of the very early alphas of what was at the time vSphere Integrated Containers (Docker containers as VMs on vCenter), which didn’t catch on compared to what is currently out there for the mass deployment and management of containers. VMware did evolve its container strategy with Pivotal Container Service, however those outside the VMware world were already looking elsewhere, as the reality of containerised development, along with serverless and cloud, took hold and became accepted as mainstream IT practice.

Even four or five years ago I was hearing the word Kubernetes often. I remember sitting in my last VMware vChampion session where Kit Colbert was talking about Kuuuuuuuurbenites (the American pronunciation stuck in my mind) and how we all should be ready to understand how it works, as it was about to take over the tech world. I didn’t listen… and now I realise that I should have started looking into Kubernetes, and container management in general, more seriously sooner.

Not because it’s fundamental to my career path… not because I feel like I was lagging technically, and not because there have been those saying for years that Kubernetes will win the race. Rather, there is an opportunity to take off the blinkers and learn something that is being widely adopted, by understanding the fundamentals of what makes it tick. In terms of discovery and learning, I see this much like what I have done over the past eighteen months with automation and orchestration.

From a backup and recovery point of view, we have been seeing an increase in customers and partners in the field asking how they back up containers and Kubernetes. For a long time the standard response was “why?”. But it’s becoming more obvious that the initially stateless nature of containers is making way for more stateful, persistent workloads. So now it’s not only about backing up the management plane… but also understanding that we need to protect the data that sits within the persistent volumes.
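In standard Kubernetes terms, that stateful piece is declared as a PersistentVolumeClaim — and it is the data behind these claims, not just the cluster objects, that a backup strategy has to cover. A minimal sketch (names and sizes are placeholders):

```yaml
# A PersistentVolumeClaim: a request for durable storage that outlives
# the pods that mount it. Re-applying the manifest recreates the claim,
# but NOT the data inside the underlying volume - that is what needs
# protecting from a backup and recovery point of view.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```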

What I’ll Be Doing:

I’ve had a superficial interest in Kubernetes for a long time, reading blogs here and there and trying to absorb information where possible. But as with most things in life, you learn best by doing! My intention is to create a series of blog posts that describe my experiences with different Kubernetes platforms, ultimately deploying a simple web application with persistent storage.

These posts will not be how-tos on setting up a Kubernetes cluster. Rather, I’ll look at general configuration, application deployment, usability, cost and whatever else becomes relevant as I go through the process of getting the web application online.
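The sort of thing I’ll be deploying — a simple web app with persistent storage mounted in — boils down to a manifest along these lines (a hedged sketch; image, names and paths are placeholders, and it assumes a PVC called `blog-content` already exists):

```yaml
# A simple web pod that mounts persistent storage, so content survives
# the pod being deleted and rescheduled
apiVersion: v1
kind: Pod
metadata:
  name: blog-web
spec:
  containers:
  - name: web
    image: nginx:1.17
    volumeMounts:
    - name: content                 # the stateful part of the app
      mountPath: /usr/share/nginx/html
  volumes:
  - name: content
    persistentVolumeClaim:
      claimName: blog-content       # assumes a PVC of this name exists
```

The interesting part of the series will be how differently each platform handles the storage class behind that claim.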

Off the top of my head, I’ll look to work with these platforms:

  • Google Kubernetes Engine (GKE)
  • Amazon Elastic Container Service for Kubernetes (EKS)
  • Azure Kubernetes Service (AKS)
  • Docker
  • Pivotal Container Service (PKS)
  • vCloud Director CSE
  • Platform9

The usual suspects are there in terms of the major public cloud providers. From a Cloud and Service Provider point of view, the ability to offer Kubernetes via vCloud Director is very exciting, and if I was still in my previous role I would be looking to productize that ASAP. For a different approach, I have always liked what Platform9 has done; I was also an early tester of their initial managed vSphere support, which has now evolved into managed OpenStack and Kubernetes. They also recently announced Managed Applications through the platform, which I’ve been playing with today.

Wrapping Up:

This follow-up post isn’t really about the state of containers today, or what I think about how and where they are being used. The reality is that we live in a hybrid world, and workloads are created as-is for specific platforms on a need-by-need basis. At the moment there is nothing to say that virtualization, in the form of Virtual Machines running on hypervisors on-premises, is being replaced by containers. The reality is that between on-premises, public clouds and everything in between, workloads are being deployed in a variety of fashions… Kubernetes seems to have come to the fore and has reached a level of maturity that makes it a viable option… and that could not be said four years ago!

It’s time for me (maybe you) to dig underneath the surface!

Link:

https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/

Kubernetes is mentioned 18 times in this post and on this page.

Containers Everywhere…Are we really ready?

Depending on what you read, certain areas of the IT Industry are telling us that there is a freight train coming our way…and that train is bringing with it containers.

With the recent release of container platforms from Microsoft and VMware, it seems as though those that control the vast majority of the x86 platforms around the world are taking notice of the containers movement. Docker is the poster child of the push towards 3rd Platform applications, with many others looking to cash in. While there is no doubt there is a lot of benefit in the basic premise of what containerized applications offer, the biggest question for me is how seriously we take the push in the real world of IT.

What do I mean by the real world of IT?

Well, this is a world where organizations are only just now starting to accept Cloud based platforms to deliver Platform and Software as a Service. It takes time for trends to reach the top of the enterprise, and this is what we are certainly seeing now when it comes to the uptake of those Cloud services.

In the real world of IT, organizations are still running legacy applications on what some people call legacy platforms. Depending on who you talk to, the definition of a legacy platform differs… some even say that virtualization is legacy even now. Being realistic about what is legacy or not… the way in which IT is consumed today is not going to suddenly switch en masse to a containerized model any time soon. IT is only just now working out ways of better consuming Cloud based services, by making the most of APIs and using the middleware that harnesses their power.

In reality, the shift to wider adoption of 3rd Platforms is happening in a place that you may not think about too often… University Campuses, and the students of today who will become the IT professionals of tomorrow.

My peripeteia moment, in coming to the conclusion that it is important to start learning and understanding containers and 3rd platform applications, came when I asked a couple of local software developers (who are quite accomplished) about Docker and whether they had done any container development… to my surprise the response I got was… “What are Containers and what is Docker?”

Now, before the conclusion is drawn that the devs in question were out of touch… consider this. When this particular generation of developers went through university they may have started coding in Pascal (as I did), but more likely started in Java or C++… they also didn’t have virtualization in their lives until the mid to late 2000s… when they were writing code for projects it wasn’t being uploaded and run on AWS instances or anything to do with Cloud.

We live in a “legacy” world today because the generation of coders who create and produce the applications we consume know how best to exploit the tools they learnt with… There will be a shift… and I believe a dramatic one, to 3rd platform apps, when the current generation of university students graduate, get out into the world and start to develop and create applications based on what they know best.

So yes, let’s be aware of containers and ensure we are ready to host and consume 3rd Platform apps… but let’s not go nuts and say that the current way we consume IT services and applications is dead and going away any time soon…

VMware Photon: vCloud Air Network Deployment

This week VMware announced information around their Cloud Native Apps strategy… VMware Photon and Lightwave are aimed at the ever-growing container market, with VMware open-sourcing their own lightweight Linux container host, released as Photon.

Photon provides the following benefits:

  • Support for the most popular Linux container formats including Docker, rkt, and Garden from Pivotal
  • Minimal footprint (approximately 300MB), to provide an efficient environment for running containers
  • Seamless migration of container workloads from development to production
  • All the security, management, and orchestration benefits already provided by vSphere, offering system administrators operational simplicity

Photon is optimized for vSphere and vCloud Air… and by extension vCloud Air Network Service Provider platforms. I wanted to be able to offer Photon pretty much right away for ZettaGrid clients, so I went about downloading the Tech Preview and created a shared Catalog vApp that can be deployed on any of ZettaGrid’s three Availability Zones.

In the video below I go through deployment of the vApp from the ZettaGrid Public Catalog, set up and run the nginx Docker container app example on the Photon VM, and configure the networking using the MyAccount Portal in combination with the vCloud Director UI.
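For reference, the container portion of that walkthrough boils down to a couple of commands on the Photon VM (a sketch only; `vmwarecna/nginx` is the sample image VMware used in its Photon getting-started material, and any nginx image would do):

```shell
# On the Photon VM: make sure the Docker daemon is running
systemctl start docker
systemctl enable docker              # optionally, start Docker on boot

# Run the sample nginx container, mapping port 80 of the VM to the container
docker run -d -p 80:80 vmwarecna/nginx

# Confirm the container is up and the port mapping is in place
docker ps
```

With that running, pointing a spare public IP at the VM via the Edge is all that is left to publish the web server.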

Requirements:

  • vCloud Air Network Account Details (ZettaGrid used in example)
  • Virtual Datacenter with at least 500MB of vRAM and 20GB of available storage
  • DHCP Configured on the Edge Device (VSE in this case)
  • A Spare IP Address to publish the nginx web server.

Video Walkthrough:



So there you go… Photon is good to go, and hopefully we can start to see an uptake of container based workloads running on vCloud Air and Air Network platforms. Looking forward to what’s to come in this space!

Further Reading:

http://www.vmware.com/company/news/releases/vmw-newsfeed/VMware-Introduces-New-Open-Source-Projects-to-Accelerate-Enterprise-Adoption-of-Cloud-Native-Applications/1943792

http://www.theregister.co.uk/2015/04/20/vmware_rolls_its_own_linux_for_microservices_stack/

http://www.virtuallyghetto.com/2015/04/collection-of-vmware-project-photon-lightwave-resourceslinks.html