Category Archives: Containers

Deploying a Kubernetes Sandbox on VMware with Terraform

Terraform from HashiCorp has been a revelation for me since I started using it in anger last year to deploy VeeamPN into AWS. From there it has allowed me to automate lab Veeam deployments, configure VMware Cloud on AWS SDDC networking and configure NSX vCloud Director Edges. The time saved by utilising the power of Terraform for repeatable deployment of infrastructure is huge.

When it came time for me to play around with Kubernetes to get myself up to speed with what was happening under the covers, I found a lot of online resources on how to install and configure a Kubernetes cluster on vSphere with a Master/Node deployment. I found that while I was tinkering, I would break deployments which meant I had to start from scratch and reinstall. This is where Terraform came into play. I set about to create a repeatable Terraform plan to deploy the required infrastructure onto vSphere and then have Terraform remotely execute the installation of Kubernetes once the VMs had been deployed.

I’m not the first to do a Kubernetes deployment on vSphere with Terraform, but I wanted to have something that was simple and repeatable to allow quick initial deployment. Other examples use Kubespray along with Ansible and other dependencies. What I have ended up with is a self-contained Terraform plan that can deploy a Kubernetes sandbox with a Master plus a dynamic number of Nodes onto vSphere using CentOS as the base OS.

What I haven’t automated is the final step of joining the nodes to the cluster. That step takes a couple of seconds once everything else is deployed. I also haven’t integrated this with VMware Cloud Volumes or prepped for persistent volumes. Again, the idea here is to have a sandbox deployed within minutes to start tinkering with. For those that are new to Kubernetes it will help you get to the meat and gravy a lot quicker.

The Plan:

The GitHub Project is located here. Feel free to clone/fork it.

In a nutshell, I am utilising the Terraform vSphere Provider to deploy a VM from a preconfigured CentOS template which will end up being the Kubernetes Master. All the variables are defined in the terraform.tfvars file and no other configuration needs to happen outside of this file. Key variables are fed into the other tf declarations to deploy the Master and the Nodes as well as how to configure the Kubernetes cluster IP networking.
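To give a feel for the shape of the plan, here is a minimal sketch of that pattern: the vSphere provider clones a CentOS template into the Master VM, with values fed in from terraform.tfvars. The resource and variable names below are illustrative assumptions, not necessarily the ones used in the repo.

```hcl
# Illustrative sketch only – names are placeholders, not the repo's actual ones
provider "vsphere" {
  user                 = var.vsphere_user
  password             = var.vsphere_password
  vsphere_server       = var.vsphere_server
  allow_unverified_ssl = true
}

resource "vsphere_virtual_machine" "k8s_master" {
  name             = var.master_name
  resource_pool_id = data.vsphere_resource_pool.pool.id
  datastore_id     = data.vsphere_datastore.datastore.id
  num_cpus         = 2
  memory           = 4096
  guest_id         = data.vsphere_virtual_machine.template.guest_id

  network_interface {
    network_id = data.vsphere_network.network.id
  }

  disk {
    label = "disk0"
    size  = data.vsphere_virtual_machine.template.disks.0.size
  }

  # Clone from the preconfigured CentOS template
  clone {
    template_uuid = data.vsphere_virtual_machine.template.id
  }
}
```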

[Update] – It seems as though Kubernetes 1.16.0 was released over the past couple of days. This resulted in the scripts not installing the Master Node correctly due to an API issue when configuring the POD networking. Because of that I’ve updated the code to now use a variable that specifies the Kubernetes version being installed. This can be found on Line 30 of the terraform.tfvars. The default is 1.15.3.

The main items to consider when entering in your own variables for the vSphere environment are Line 18 and Lines 28-31. Line 18 defines the Kubernetes POD network which is used during the configuration, while Lines 28-31 set the number of nodes and the starting name for the VMs, and then use two separate variables to build out the IP addresses of the nodes. Pay attention to the format of the network on Line 30 and then choose the starting IP for the Nodes on Line 31. This is used as a starting IP for the Node IPs and is enumerated in the code using the Terraform count construct.
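As a rough sketch of how that enumeration works (the variable names here are assumptions, so check the tfvars file for the real ones), the count construct combined with the network and starting IP variables produces one VM and one address per Node:

```hcl
variable "node_count"      { default = 3 }
variable "node_prefix"     { default = "k8s-node" }
variable "node_ip_network" { default = "10.0.0." } # network format, as on Line 30
variable "node_ip_start"   { default = 20 }        # starting Node IP, as on Line 31

resource "vsphere_virtual_machine" "k8s_node" {
  count = var.node_count
  name  = "${var.node_prefix}-${count.index + 1}"
  # ... template clone settings as per the Master ...

  clone {
    template_uuid = data.vsphere_virtual_machine.template.id
    customize {
      network_interface {
        # Enumerates 10.0.0.20, 10.0.0.21, 10.0.0.22, ...
        ipv4_address = "${var.node_ip_network}${var.node_ip_start + count.index}"
        ipv4_netmask = 24
      }
    }
  }
}
```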

By using Terraform’s remote-exec provisioner, I am then using a combination of uploaded scripts and direct command line executions to configure and prep the Guest OS for the installation of Docker and Kubernetes.
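The provisioner pattern looks roughly like the below; the script name and connection variables are placeholders rather than the exact ones in the plan:

```hcl
# Upload a prep script, then execute it on the freshly cloned VM
provisioner "file" {
  source      = "scripts/k8s_prep.sh"
  destination = "/tmp/k8s_prep.sh"
}

provisioner "remote-exec" {
  inline = [
    "chmod +x /tmp/k8s_prep.sh",
    "/tmp/k8s_prep.sh",
  ]

  connection {
    type     = "ssh"
    host     = var.master_ip
    user     = "root"
    password = var.guest_password
  }
}
```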

You can see towards the end I have split up the command line scripts to ensure that the dynamic nature of the deployment is attained. The remote-exec on Line 82 pulls in the POD network variable and executes it inline. The same is done for Lines 116-121, which configure the Guest OS hosts file to ensure name resolution. They are used together with two other scripts that are uploaded and executed.

The scripts have been built up from a number of online sources that go through how to install and configure Kubernetes manually. For the networking, I went with Weave Net after having a few issues with Flannel. There are lots of other networking options for Kubernetes… this is worth a read.

For better DNS resolution on the Guest OS VMs, the hosts file entries are constructed from the IP address settings set in the terraform.tfvars file.

Plan Execution:

The Nodes can be deployed dynamically using a Terraform var option when applying the plan. This allows for zero to as many nodes as you want for the sandbox… though three seems to be a nice round number.

The number of nodes can also be set in the terraform.tfvars file on Line 28. The variable set during the apply will take precedence over the one declared in the tfvars file. One of the great things about Terraform is that we can alter the variable either way, which will end up with nodes being added or removed automatically.
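For example (assuming the variable is called node_count; check terraform.tfvars for the actual name):

```shell
# Deploy the sandbox with three Nodes
terraform apply -var="node_count=3"

# Re-running with a different value adds or removes Nodes accordingly
terraform apply -var="node_count=5"
```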

Once applied, the plan will work through the declaration files and the output will be similar to what is shown below. You can see in just over 5 minutes we have deployed one Master and three Nodes ready for further config.

The next step is to use the kubeadm join command on the nodes. For those paying attention, the complete join command was output via the Terraform apply. Once applied on all nodes you should have a ready to go Kubernetes cluster running on CentOS on top of vSphere.
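The join command will look something like this; the IP, token and hash below are placeholders, so use the exact values from your own Terraform output:

```shell
# Run on each Node, using the values from the Terraform apply output
kubeadm join 10.0.0.10:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>

# Then, from the Master, confirm all Nodes have registered
kubectl get nodes
```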

Conclusion:

While I do believe that the future of Kubernetes is such that a lot of the initial installation and configuration will be taken out of our hands and delivered to us via services based in Public Clouds or through platforms such as VMware’s Project Pacific, having a way to deploy a Kubernetes cluster locally on vSphere is a great way to get to know what goes into making a containerisation platform tick.

Build it, break it, destroy it and then repeat… that is the beauty of Terraform!

https://github.com/anthonyspiteri/terraform

References:

https://github.com/anthonyspiteri/terraform/tree/master/deploy_kubernetes_CentOS

 

The Separation of Dev and Ops is Upon Us!

Apart from the K word, there was one other enduring message that I think a lot of people took home from VMworld 2019. That is, that Dev and Ops should be considered as separate entities again. For the best part of the last five or so years the concept of DevOps, SecOps and other X-Ops has been perpetuated mainly due to the rise of consumable platforms outside the traditional control of IT operations people.

The pressure to DevOp has become very real in the IT communities that I am involved with. These circles are mainly made up of traditional infrastructure guys. I’ve written a few pieces around how the industry trend of trying to turn everyone into developers isn’t one that needs to be followed. Automation doesn’t equal development, and there are a number of Infrastructure as Code tools that look to bridge the gap between the developer and the infrastructure guy.

That isn’t to say that traditional IT guys shouldn’t be looking to push themselves to learn new things and improve and evolve. In fact, IT Ops needs to be able to code in slightly abstracted ways to work with APIs or leverage IaC tooling. However, my view is that IT Ops’ number one role is to understand fundamentally what is happening within a platform, and be able to support infrastructure that developers can consume.

I had a bit of an aha moment this week while working on some Kubernetes (that word again!) automation work with Terraform which I’ll release later this week. The moment was when I was trying to get the Sock Shop demo working on my fresh Kubernetes cluster. I finally understood why Kubernetes had been created. Everything about the application was defined in the json files and deployed as is holistically through one command. It’s actually rather elegant compared to how I worked with developers back in the early days of web hosting on Windows and Linux web servers with their database backends and whatnot.
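For reference, the Sock Shop deployment really is a single command once the manifests are available locally; the path below is from memory and may differ from the current repo layout:

```shell
# Deploy the whole application – services, deployments and all – in one declarative hit
kubectl apply -f deploy/kubernetes/complete-demo.yaml

# Watch the microservices come up
kubectl get pods -n sock-shop
```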

Regardless of the ease of deployment, I still had to understand the underlying networking and get the application to listen on external IPs and different ports. At this point I was doing dev and doing IT Ops in one. However this is all contained within my lab environment that has no bearing on the availability of the application, security or otherwise. This is where separation is required.

For developers, they want to consume services and take advantages of the constructs of a containerised platform like Docker paired together with the orchestrations and management of those resources that Kubernetes provides. They don’t care what’s under the hood and shouldn’t be concerned what their application runs on.

For IT Operations, they want to be able to manage the supporting platforms as they did previously. The compute, the networking, the storage… this is all still relevant in a hybrid world. They should absolutely still care about what’s under the hood and the impact applications can have on infrastructure.

VMware has introduced that (re)split of dev and ops with the introduction of Project Pacific and I applaud them for going against the grain and endorsing the separation of roles and responsibilities. Kubernetes and ESXi in one vSphere platform is where that vision lies. Outside of vSphere, it is still very true that devs can consume public clouds without a care about underlying infrastructure… but for me… it all comes back down to this…

Let devs be devs… and let IT Ops be IT Ops! They need to work together in this hybrid, multi-cloud world!

VMworld 2019 Review – Project Pacific is a Stroke of Kubernetes Genius… but not without a catch!

Kubernetes Kubernetes, Kubernetes… say Kubernetes one more time… I dare you!

If it wasn’t clear what the key takeaway from VMworld 2019 was last week in San Francisco then I’ll repeat it one more time… Kubernetes! It was something which I predicted prior to the event in my session breakdown. And all jokes aside, with the number of times we heard Kubernetes mentioned last week, we know that VMware signalled their intent to jump on the Kubernetes freight train and ride it all the way.

When you think about it, the announcement of Project Pacific isn’t a surprise. Apart from it being an obvious path to take to ensure VMware remains viable with IT Operations (IT Ops) and Developers (Devs) holistically, the more I learned about what it actually does under the hood, the more I came to believe that it is a stroke of genius. If it delivers technically on its promise of full ESXi and Kubernetes integration into the one vSphere platform, then it will be a huge success.

The whole premise of Project Pacific is to use Kubernetes to manage workloads via declarative specifications. Essentially allowing IT Ops and Devs to tell vSphere what they want and have it deploy and manage the infrastructure that ultimately serves as a platform for an application. This is all about the application! Abstracting all infrastructure and most of the platform to make the application work. We are now looking at a platform platform that controls all aspects of that lifecycle end to end.

By redesigning vSphere and implanting Kubernetes into the core of vSphere, VMware are able to take advantage of the things that make Kubernetes popular in today’s cloud native world. A Kubernetes Namespace is effectively a tenancy in Kubernetes that will manage applications holistically, and it’s at the namespace level where policies are applied. QoS, Security, Availability, Storage, Networking and Access Controls can all be applied top down from the Namespace. This gives IT Ops control, while still allowing devs to be agile.
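In stock Kubernetes the equivalent construct looks like the below; the namespace name and quota values are purely illustrative, and Project Pacific layers much more policy on top of this:

```shell
# Create a namespace as the tenancy boundary
kubectl create namespace team-a

# Apply policy at the namespace level – here, a simple resource quota
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "8"
    requests.memory: 16Gi
EOF
```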

I see this construct as similar to what vCloud Director offers by way of a Virtual Datacenter with vApps used as the container for the VM workloads… in truth, the way in which vCD abstracted vSphere resources into tenancies and had policies applied was maybe ahead of its time?

DevOps Separation:

DevOps has been a push for the last few years in our industry and the pressure to be a DevOp is huge. The reality is that both sets of disciplines have fundamentally different approaches to each other’s lines of work. This is why it was great to see VMware going out of their way to make the distinction between IT Ops and Devs.

Dev and IT Ops collaboration is paramount in today’s IT world, and with Project Pacific, when a Dev looks at the vSphere platform they see Kubernetes. When an IT Ops guy looks at vSphere he still sees vSphere and ESXi. This allows for integrated self service and allows more speed with control to deploy and manage the infrastructure and platforms that run applications.

Consuming Virtual Machines as Containers and Extensibility:

Kubernetes was described as a Platform Platform… meaning that you can run almost anything in Kubernetes as long as it’s declared. The above image shows a holistic application running in Project Pacific. The application is a mix of Kubernetes containers, VMs and other declared pieces… all of which can be controlled through vSphere and live under that single Namespace.

When you log into the vSphere Console you can see a Kubernetes Cluster in vSphere and see the PODs and action on them as first class citizens. vSphere Native PODs are an optimized runtime… apparently more optimized than bare metal… 8% faster than bare metal, as we saw in the keynote on Monday. The way in which this is achieved is that CPU virtualization has almost zero cost today. VMware has taken advantage of the advanced ESXi scheduler, which performs advanced operations across NUMA nodes, along with the ability to strip out what is not needed when running containers on VMs, so that workloads get an optimal runtime.

vSphere will have two APIs with Project Pacific. The traditional vSphere API that has been refined over the years will remain, and then there will be the Kubernetes API. There is also the ability to create infrastructure with kubectl. Each ESXi Cluster becomes a Kubernetes cluster. The work done with vSphere Integrated Containers has not gone to waste and has been used in this new integrated platform.

PODs and VMs live side by side, all declared through Kubernetes running in Kubernetes. All VMs can be stored in the container registry. Critical vulnerability scans, encryption and signing that exist in the container ecosystem can be leveraged at a container level and applied to VMs.

There is obviously a lot more to Project Pacific, and there is a great presentation up on YouTube from Tech Field Day Extra at VMworld 2019 which I have embedded below. In my opinion, it is a must-watch for all working in and around the VMware ecosystem.

The Catch!

So what is the catch? With 70 million workloads across 500,000+ customers, VMware is thinking that with this functionality in place the current movement of refactoring workloads to take advantage of cloud native constructs like containers, serverless or Kubernetes doesn’t need to happen… instead, existing workloads instantly become first class citizens on Kubernetes. Interesting theory.

Having been digging into the complex and very broad container world for a while now, and only just realising how far it has come in terms of being high on most IT agendas, my current belief is that the world of Kubernetes and containers is better placed to be consumed on public clouds. The scale and immediacy of Kubernetes platforms on Google, Azure or AWS, without the need to ultimately still procure hardware and install software, means that that model of consumption will still have an advantage over something like Project Pacific.

The one stroke of genius, as mentioned, is that by combining “traditional” workloads with Kubernetes as the control plane within vSphere, the single, declarative, self service experience that it potentially offers might stop IT Operations from moving to public clouds… but is that enough to stop the developers forcing their hand?

It is going to be very interesting to see this in action and how well it is ultimately received!

More on Project Pacific

The videos below give a good level of technical background into Project Pacific. Frank also has a good introductory post here, and Kit Colbert’s VMworld session is linked in the references.

References:

https://videos.vmworld.com/global/2019/videoplayer/28407

Kubernetes Everywhere…Time to Take off the Blinkers!

This is more or less a follow up post to the one I wrote back in 2015 about the state of containers in the IT World as I saw it at the time. I started off that post talking about the freight train that was containerization along with a cheeky meme… fast forward four years and the narrative around containers has changed significantly, and now there is new cargo on that freight train… and it’s all about Kubernetes!

In my previous role working at a Cloud Provider, shortly after writing that 2015 post I started looking at ways to offer containers as a service. At the time there wasn’t much, but I dabbled a bit in Docker and if you remember at the time, VMware’s AppCatalyst… which I used to deploy basic Docker images on my MBP (think it’s still installed actually) with the biggest highlight for me at the time being able to play Docker Doom!

I was also involved in some of the very early alphas for what was at the time vSphere Integrated Containers (Docker containers as VMs on vCenter), which didn’t catch on compared to what is currently out there for the mass deployment and management of containers. VMware did evolve its container strategy with Pivotal Container Service, however those outside the VMware world were already looking elsewhere as the reality of containerised development along with serverless and cloud took hold and became accepted as a mainstream IT practice.

Even four or five years ago I was hearing the word Kubernetes often. I remember sitting in my last VMware vChampion session where Kit Colbert was talking about Kuuuuuuuurbenites (the American pronunciation stuck in my mind) and how we all should be ready to understand how it works as it was about to take over the tech world. I didn’t listen… and now I have a realisation that I should have started looking into Kubernetes and container management in general more seriously sooner.

Not because it’s fundamental to my career path… not because I feel like I was lagging technically… and not because there have been those saying for years that Kubernetes will win the race. There is an opportunity to take off the blinkers and learn something that is being widely adopted, by understanding the fundamentals of what makes it tick. In terms of discovery and learning, I see this much like what I have done over the past eighteen months with automation and orchestration.

From a backup and recovery point of view, we have been seeing an increase in the field of customers and partners asking how they back up containers and Kubernetes. For a long time the standard response was “why”. But it’s becoming more obvious that the initial stateless nature of containers is making way for more stateful, persistent workloads. So now, it’s not only about backing up the management plane… but also understanding that we need to protect the data that sits within the persistent volumes.
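As an example of what that stateful shift looks like in practice (the names and sizes here are illustrative), a pod claims storage via a PersistentVolumeClaim, and it’s the data behind that claim that now outlives the container and needs protecting:

```shell
# A claim for persistent storage that outlives any individual container
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
EOF
```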

What I’ll Be Doing:

I’ve been superficially interested in Kubernetes for a long time, reading blogs here and there and trying to absorb information where possible. But as with most things in life, you best learn by doing! My intention is to create a series of blog posts that describe my experiences with different Kubernetes platforms to ultimately deploy a simple web application with persistent storage.

These posts will not be how-tos on setting up a Kubernetes cluster etc. Rather, I’ll look at general config, application deployment, usability, cost and whatever else becomes relevant as I go through the process of getting the web application online.

Off the top of my head, I’ll look to work with these platforms:

  • Google Kubernetes Engine (GKE)
  • Amazon Elastic Container Service for Kubernetes (EKS)
  • Azure Kubernetes Service (AKS)
  • Docker
  • Pivotal Container Service (PKS)
  • vCloud Director CSE
  • Platform9

The usual suspects are there in terms of the major public cloud providers. From a Cloud and Service Provider point of view, the ability to offer Kubernetes via vCloud Director is very exciting, and if I was still in my previous role I would be looking to productize that ASAP. For a different approach, I have always liked what Platform9 has done, and I was also an early tester of their initial managed vSphere support, which has now evolved into managed OpenStack and Kubernetes. They also recently announced Managed Applications through the platform, which I’ve been playing with today.

Wrapping Up:

This follow up post isn’t really about the state of containers today, or what I think about how and where they are being used in IT today. The reality is that we live in a hybrid world and workloads are created as-is for specific platforms on a need by need basis. At the moment there is nothing to say that virtualization in the form of Virtual Machines running on hypervisors on-premises is being replaced by containers. The reality is that between on-premises, public clouds and in between, workloads are being deployed in a variety of fashions… Kubernetes seems to have come to the fore and has reached a level of maturity that makes it a viable option… that could not be said four years ago!

It’s time for me (maybe you) to dig underneath the surface!

Link:

https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/

Kubernetes is mentioned 18 times in this post and on this page.

VMworld 2016: Top Session Picks

VMworld 2016 is just around the corner (10 days and counting) and the theme this year is be_Tomorrow… which looks to build on the Ready for Any and Brave IT messages from the last couple of VMworld events. It’s a continuation of VMware’s call to arms to get themselves and their partners and customers prepared for the shift in the IT of tomorrow. This will be my fourth VMworld and I am looking forward to spending time networking with industry peers, walking around the Solutions Exchange on the lookout for the next Rubrik or Platform9 and attending Technical Sessions.

http://www.vmworld.com/uscatalog.jspa

The Content Catalog went live a few weeks ago and the Session Builder has also been live allowing attendees to lock in sessions. There are a total of 817 sessions this year, up from the 752 sessions last year. I’ve listed the main tracks with the numbers fairly similar to last year.

Cloud Native Applications (17)
End-User Computing (97)
Hybrid Cloud (63)
Partner Exchange @ VMworld (74)
Software-Defined Data Center (504)
Technology Deep Dives & Futures (22)

VMware’s core technology focus around VSAN and NSX again has the lion’s share of sessions this year, with EUC still a very popular subject. It’s pleasing to see a lot of vCloud Air Network related sessions in the list (for a detailed look at the vCAN Sessions read my previous post) and there is a solid amount of Cloud Native Application content. Below are my top picks for this year:

  • Virtual SAN – Day 2 Operations [STO7534]
  • Advanced Network Services with NSX [NET7907]
  • A Day in the Life of a VSAN I/O [STO7875]
  • vSphere 6.x Host Resource Deep Dive [INF8430]
  • The Architectural Future of Network Virtualization [NET8193R]
  • Conducting a Successful Virtual SAN 6.2 Proof of Concept [STO7535]
  • How to design and implement VMware’s vCloud in production [SDDC9612-SPO]
  • PowerNSX and PyNSXv: Using PowerShell and Python for Automation and Management of VMware NSX for vSphere [NET7514]
  • Evolving the vSphere API for the Modern Era [INF8255]
  • Multisite Networking and Security with Cross-vCenter NSX: Part 2 [NET7861R]

My focus seems to have shifted back towards more vCloud Director and Network/Hybrid Cloud automation of late, and it’s reflected in the choices above. Alongside that, I am also very interested to see how VMware position vCloud Air after the shambles of the past 12 months, and I always look forward to hearing from respected industry technical leads Frank Denneman, Chris Wahl and Duncan Epping as they give their perspective on storage, software defined datacenters and automation. This year I’m also looking at what the SABU Tech Marketing Team are up to around VSAN and VSAN futures.

As has also become tradition, there are a bunch of bloggers who put out their top picks for VMworld… check out the links below for more insight into what’s going to be hot in Las Vegas this VMworld. I hope to catch up with as many community folk as possible while over, so if you are interested in a chat, hit me up!

My top 15 VMworld sessions for 2016

http://blogs.vmware.com/management/2016/08/top-5-log-insight-vmworld-sessions.html

https://blogs.vmware.com/virtualblocks/2016/08/16/be_tomorrow-vmworld-2016-key-storage-availability-activities/

 

My Top Session picks for VMworld 2016

http://www.mindthevirt.com/top-vmworld-sessions-category-1247

vCloud Air and Virtustream – Just kill vCloud Air Already?!?

I’ve been wanting to write some commentary around the vCloud Air and Virtustream merger since rumours of it surfaced just before VMworld in August, and I’ve certainly been more interested in the whole state of play since news of the EMC/VMware Cloud Services spin off was announced in late October. The basis of this new entity is to try and get a stranglehold in the Hybrid Cloud market, which is widely known to make up the biggest chunk of the Cloud market for the foreseeable future, topping $90 billion by 2020.

Below are some of the key points lifted from the Press Release:

  • EMC and VMware plan to form a new cloud services business creating the industry’s most comprehensive hybrid cloud portfolio
  • Will incorporate and align cloud capabilities of EMC Information Infrastructure, Virtustream and VMware to provide the complete spectrum of on- and off-premises cloud offerings
  • The new cloud services business will be jointly owned 50:50 by VMware and EMC and will operate under the Virtustream brand led by CEO Rodney Rogers
  • Virtustream’s financial results to be consolidated into VMware financial statements beginning in Q1 2016
  • Virtustream is expected to generate multiple hundreds of millions of dollars in recurring revenue in 2016, focused on enterprise-centric cloud services, with an outlook to grow to a multi-billion business over the next several years
  • VMware will establish a Cloud Provider Software business unit incorporating existing VMware cloud management offerings and Virtustream’s software assets — including the xStream cloud management platform and others.

I’ve got a vested interest in the success or otherwise of vCloud Air as it directly impacts Zettagrid and the rest of the vCloud Air Network, as well as my current professional area of focus. However, I feel I am still able to provide balanced feedback when it comes to vCloud Air, and the time was finally right to comment after coming across the following LinkedIn post from Nitin Bahadur yesterday evening.

It grabbed my attention not only because of my participation in the vCloud Air Network but also because the knives have been out for vCloud Air almost before the service was launched as vCloud Hybrid Service. The post itself from Nitin, though brief, was suggesting that VMware should further embrace its partnership with Google Cloud and just look to direct VMware Cloud customers onto the Google Cloud. The suggestion was based on letting VMware orchestrate workloads on Google while letting Google do what it’s best at… which was surprisingly Infrastructure.

With that in mind I want to point out that vCloud Air is nowhere near the equal of AWS, Azure or Google in terms of total service offerings, but in my opinion it’s never been about trying to match those public cloud players’ platform services end to end. Where VMware (and by extension its Service Provider Partners) does have an advantage is in the fact that, in reality, VMware does do Infrastructure brilliantly and has the undisputed market share among hypervisor platforms, giving it a clear advantage when talking about the total addressable market for Hybrid Cloud services.

As businesses look to go through their natural hardware refresh cycles the current options are:

  • Acquire new compute and storage hardware for existing workloads (Private – CapEx)
  • Migrate VM workloads to a cloud based service (IaaS – OpEx)
  • Move some application workloads into modern Cloud Services (SaaS)
  • Move all workloads to cloud and have third parties provide all core business services (SaaS, PaaS)

Without going into too much detail around each option, at a higher level vCloud Air and the vCloud Air Network have the advantage in that most businesses I come across are not ready to move into the cloud holistically. For the next three to five years existing VM workloads will need a home while businesses work out a way to come to terms with an eventual move towards the next phase of cloud adoption, which is all about platform and software delivered in a cloud native way.

Another reason why vCloud Air and the Air Network are attractive is because migration and conversion of VMs is still problematic and a massive pain (in the you know what) for most businesses to contemplate undertaking… let alone spend the additional capital on. A platform that offers the same underlying infrastructure businesses already run, which is what vCloud Air, the vCloud Air Network partners and Virtustream offer, should continue to do well, and there are enough ESXi based VMs out there to keep VMware based cloud providers busy for a while yet.

vCloud Air isn’t even close to being perfect and has a long way to go to even begin to catch up with the bigger players, and VMware/EMC/Dell might well choose to wrap it up, but my feeling is that that would be a mistake. Certainly it needs to evolve, but the platform has a great advantage and it, along with the vCloud Air Network, should be able to cash in.

In the next part I will look at what Virtustream brings to the table and how VMware can combine the best of both entities into a service that can and should do well over the next 3-5 years as the Cloud Market starts to mature and move into different territory leading into the next shift in cloud delivery.

References:

https://www.linkedin.com/pulse/should-vmware-kill-vcloud-air-instead-use-google-cloud-nitin-bahadur

http://www.crn.com/news/cloud/300077924/sources-vmware-cutting-back-on-vcloud-air-development-may-stop-work-on-new-features.htm

http://www.vmware.com/company/news/releases/vmw-newsfeed/EMC-and-VMware-Reveal-New-Cloud-Services-Business/2975020-manual?x-src=paidsearch_vcloudair_general_google_search_anz&kw=nsx%20+vcloud%20air&mt=b&gclid=Cj0KEQiAj8uyBRDawI3XhYqOy4gBEiQAl8BJbWUytu-I8GaYbrRGIiTuUQe9j6VTPMAmKJoqtUyCScAaAuGv8P8HAQ

http://www.marketsandmarkets.com/PressReleases/hybrid-cloud.asp

Containers Everywhere…Are we really ready?

Depending on what you read, certain areas of the IT Industry are telling us that there is a freight train coming our way…and that train is bringing with it containers.

With the recent release of container platforms from Microsoft and VMware it seems as though those that control the vast majority of the x86 platforms around the world are taking notice of the containers movement. Docker is the poster child of the push towards 3rd Platform Applications, with many others looking to cash in. While there is no doubt there is a lot of benefit in the basic premise of what containerized applications offer, the biggest question for me is how seriously we take the push in the real world of IT.

What do I mean by the real world of IT?

Well, this is a world where organizations are only just now starting to accept Cloud-based platforms to deliver Platform and Software as a Service. It takes time for trends to reach the top end of enterprise, and that is certainly what we are seeing now with the uptake of those Cloud services.

In the real world of IT, organizations are still running legacy applications on what some people call legacy platforms. Depending on who you talk to, that definition of legacy platforms differs…some even say that virtualization is legacy now. Being realistic about what is legacy or not, the way in which IT is consumed today is not going to suddenly switch en masse to a containerized model any time soon. IT is only just now working out ways of better consuming Cloud-based services by maximizing APIs and using the middleware that harnesses their power.

In reality, the real shift to wider adoption of 3rd Platforms is happening in a place you may not think about too often…university campuses, among the students of today who will become the IT professionals of tomorrow.

My peripeteia moment in concluding that it is important to start learning and understanding containers and 3rd Platform applications came when I asked a couple of local software developers (who are quite accomplished) about Docker and whether they had done any container development…to my surprise the response I got was…”What are containers, and what is Docker?”

Now, before the conclusion is drawn that the devs in question were out of touch…consider this. When this particular generation of developers went through university they may have started coding in Pascal (as I did), but more likely started in Java or C++. They also didn’t have virtualization in their lives until the mid to late 2000s, and when they were writing code for projects it wasn’t being uploaded and run on AWS instances or anything to do with Cloud.

We live in a “legacy” world today because the generation of coders who create and produce the applications we consume know how best to exploit the tools they learnt with. There will be a shift…and I believe a dramatic one…to 3rd Platform apps when the current generation of university students graduate, get out into the world, and start to develop and create applications based on what they know best.

So yes, let’s be aware of containers and ensure we are ready to host and consume 3rd Platform apps…but let’s not go nuts and say that the current way we consume IT services and applications is dead and going away any time soon…

VMware Photon: vCloud Air Network Deployment

This week VMware announced details of their Cloud Native Apps strategy…VMware Photon and Lightwave are aimed at the ever-growing container market, with VMware open sourcing its own lightweight Linux microservices host, released as Photon.

Photon provides the following benefits:

  • Support for the most popular Linux container formats including Docker, rkt, and Garden from Pivotal
  • Minimal footprint (approximately 300MB), to provide an efficient environment for running containers
  • Seamless migration of container workloads from development to production
  • All the security, management, and orchestration benefits already provided with vSphere, offering system administrators operational simplicity

Photon is optimized for vSphere and vCloud Air…and by extension vCloud Air Network service provider platforms. I wanted to be able to offer Photon pretty much right away for ZettaGrid clients, so I downloaded the Tech Preview and created a shared catalog vApp that can be deployed in any of ZettaGrid’s three Availability Zones.

In the video below I go through deployment of the vApp from the ZettaGrid Public Catalog, setup and run the nginx Docker container app example on the Photon VM and configure the networking using the MyAccount Portal in combination with the vCloud Director UI.

Requirements:

  • vCloud Air Network Account Details (ZettaGrid used in example)
  • Virtual Datacenter with at least 500MB of vRAM and 20GB of available storage
  • DHCP Configured on the Edge Device (VSE in this case)
  • A Spare IP Address to publish the nginx web server.

Video Walk through:



So there you go…Photon is good to go, and hopefully we will start to see an uptake of container-based workloads running on vCloud Air and Air Network platforms. Looking forward to what’s to come in this space!
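For reference, the nginx example from the walkthrough boils down to a few commands run on the Photon VM. This is a minimal sketch: the container name `web` is my own illustrative choice, and it assumes the Docker daemon bundled with the Photon Tech Preview.

```shell
# Start the Docker daemon that ships with the Photon Tech Preview
systemctl start docker

# Pull the official nginx image and run it detached, publishing port 80
docker run -d -p 80:80 --name web nginx

# Confirm the web server responds locally before publishing it externally
curl -s http://localhost | head -n 4
```

Once the container responds locally, a DNAT rule on the Edge device mapping the spare public IP to the VM on port 80 makes the site reachable from outside.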

Further Reading:

http://www.vmware.com/company/news/releases/vmw-newsfeed/VMware-Introduces-New-Open-Source-Projects-to-Accelerate-Enterprise-Adoption-of-Cloud-Native-Applications/1943792

http://www.theregister.co.uk/2015/04/20/vmware_rolls_its_own_linux_for_microservices_stack/

http://www.virtuallyghetto.com/2015/04/collection-of-vmware-project-photon-lightwave-resourceslinks.html