Tag Archives: Automation

Deploying a Kubernetes Sandbox on VMware with Terraform

Terraform from HashiCorp has been a revelation for me since I started using it in anger last year to deploy VeeamPN into AWS. From there it has allowed me to automate lab Veeam deployments, configure VMware Cloud on AWS SDDC networking and configure NSX vCloud Director Edges. The time saved by utilising the power of Terraform for repeatable deployment of infrastructure is huge.

When it came time for me to play around with Kubernetes to get myself up to speed with what was happening under the covers, I found a lot of online resources on how to install and configure a Kubernetes cluster on vSphere with a Master/Node deployment. I found that while I was tinkering, I would break deployments, which meant I had to start from scratch and reinstall. This is where Terraform came into play. I set about creating a repeatable Terraform plan to deploy the required infrastructure onto vSphere and then have Terraform remotely execute the installation of Kubernetes once the VMs had been deployed.

I’m not the first to do a Kubernetes deployment on vSphere with Terraform, but I wanted to have something that was simple and repeatable to allow quick initial deployment. Other examples I found used Kubespray along with Ansible and other dependencies. What I have ended up with is a self-contained Terraform plan that can deploy a Kubernetes sandbox with a Master plus a dynamic number of Nodes onto vSphere using CentOS as the base OS.

What I haven’t automated is the final step of joining the nodes to the cluster. That step takes a couple of seconds once everything else is deployed. I also haven’t integrated this with VMware Cloud Volumes or prepped for persistent volumes. Again, the idea here is to have a sandbox deployed within minutes to start tinkering with. For those that are new to Kubernetes it will help you get to the meat and gravy a lot quicker.

The Plan:

The GitHub Project is located here. Feel free to clone/fork it.

In a nutshell, I am utilising the Terraform vSphere Provider to deploy a VM from a preconfigured CentOS template which will end up being the Kubernetes Master. All the variables are defined in the terraform.tfvars file and no other configuration needs to happen outside of this file. Key variables are fed into the other tf declarations to deploy the Master and the Nodes as well as how to configure the Kubernetes cluster IP networking.
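As a rough sketch of the pattern (the resource, data source and variable names here are illustrative rather than copied from the repo, and the variables are assumed to be declared in variables.tf as the post describes), the Master deployment looks something like this:

```hcl
# Hedged sketch of the Master deployment via the Terraform vSphere Provider
provider "vsphere" {
  vsphere_server       = var.vsphere_server
  user                 = var.vsphere_user
  password             = var.vsphere_password
  allow_unverified_ssl = true
}

data "vsphere_datacenter" "dc" {
  name = var.vsphere_datacenter
}

data "vsphere_datastore" "datastore" {
  name          = var.vsphere_datastore
  datacenter_id = data.vsphere_datacenter.dc.id
}

data "vsphere_resource_pool" "pool" {
  name          = var.vsphere_resource_pool
  datacenter_id = data.vsphere_datacenter.dc.id
}

data "vsphere_network" "network" {
  name          = var.vsphere_network
  datacenter_id = data.vsphere_datacenter.dc.id
}

data "vsphere_virtual_machine" "template" {
  name          = var.vm_template   # the preconfigured CentOS template
  datacenter_id = data.vsphere_datacenter.dc.id
}

resource "vsphere_virtual_machine" "k8s_master" {
  name             = var.k8s_master_name
  resource_pool_id = data.vsphere_resource_pool.pool.id
  datastore_id     = data.vsphere_datastore.datastore.id
  num_cpus         = 2
  memory           = 4096
  guest_id         = data.vsphere_virtual_machine.template.guest_id

  network_interface {
    network_id = data.vsphere_network.network.id
  }

  disk {
    label = "disk0"
    size  = data.vsphere_virtual_machine.template.disks.0.size
  }

  clone {
    template_uuid = data.vsphere_virtual_machine.template.id
  }
}
```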

[Update] – It seems as though Kubernetes 1.16.0 was released over the past couple of days. This resulted in the scripts not installing the Master Node correctly due to an API issue when configuring the POD networking. Because of that I’ve updated the code to now use a variable that specifies the Kubernetes version being installed. This can be found on Line 30 of the terraform.tfvars. The default is 1.15.3.

The main items to consider when entering your own variables for the vSphere environment are Line 18 and then Lines 28-31. Line 18 defines the Kubernetes POD network which is used during the configuration, while Lines 28-31 set the number of nodes, the starting name for the VMs, and two separate variables used to build out the IP addresses of the nodes. Pay attention to the format of the network on Line 30 and then choose the starting IP for the Nodes on Line 31. This is used as a starting IP for the Node IPs and is enumerated in the code using the Terraform count construct.
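To give a feel for the format, the relevant entries look something like this (the names and values are illustrative assumptions, not the exact contents of the repo’s terraform.tfvars):

```hcl
# Illustrative terraform.tfvars entries for the Kubernetes sandbox
k8s_version       = "1.15.3"         # pinned Kubernetes version
k8s_pod_network   = "10.244.0.0/16"  # POD network fed into the cluster config
k8s_node_count    = 3                # number of worker Node VMs to deploy
k8s_node_prefix   = "k8s-node"       # starting name for the Node VMs
k8s_node_network  = "192.168.1."     # network portion used to build Node IPs
k8s_node_ip_start = 160              # starting host address, incremented per node
```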

By using Terraform’s remote-exec provisioner, I am then using a combination of uploaded scripts and direct command line executions to configure and prep the Guest OS for the installation of Docker and Kubernetes.

You can see towards the end I have split up the command line scripts to ensure that the dynamic nature of the deployment is attained. The remote-exec on Line 82 pulls in the POD network variable and executes it inline. The same is done for Lines 116-121, which configure the Guest OS hosts file to ensure name resolution. They are used together with two other scripts that are uploaded and executed.
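The provisioner pattern looks roughly like this (a sketch only; the script name, connection details and exact kubeadm arguments are assumptions):

```hcl
# Inside the vsphere_virtual_machine resource for the Master (sketch)
provisioner "file" {
  source      = "scripts/install_k8s.sh"   # assumed script name
  destination = "/tmp/install_k8s.sh"

  connection {
    type     = "ssh"
    host     = var.k8s_master_ip
    user     = "root"
    password = var.vm_password
  }
}

provisioner "remote-exec" {
  inline = [
    "chmod +x /tmp/install_k8s.sh",
    "/tmp/install_k8s.sh",
    # the POD network variable is pulled in and executed inline
    "kubeadm init --pod-network-cidr=${var.k8s_pod_network}",
  ]

  connection {
    type     = "ssh"
    host     = var.k8s_master_ip
    user     = "root"
    password = var.vm_password
  }
}
```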

The scripts have been built up from a number of online sources that go through how to install and configure Kubernetes manually. For the networking, I went with Weave Net after having a few issues with Flannel. There are lots of other networking options for Kubernetes… this is worth a read.

For better DNS resolution on the Guest OS VMs, the hosts file entries are constructed from the IP address settings set in the terraform.tfvars file.

Plan Execution:

The Nodes can be deployed dynamically using a Terraform var option when applying the plan. This allows for zero to as many nodes as you want for the sandbox… though three seems to be a nice round number.

The number of nodes can also be set in the terraform.tfvars file on Line 28. The variable set during the apply will take precedence over the one declared in the tfvars file. One of the great things about Terraform is that we can alter the variable either way and nodes will be added or removed automatically.
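Assuming the node count variable is named as in the sketch above (an assumption), the apply looks like this:

```shell
# deploy (or scale to) three worker nodes; overrides the tfvars value
terraform apply -var="k8s_node_count=3"
```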

Once applied, the plan will work through the declaration files and the output will be similar to what is shown below. You can see that in just over 5 minutes we have deployed one Master and three Nodes ready for further config.

The next step is to use the kubeadm join command on the nodes. For those paying attention, the complete join command was output via the Terraform apply. Once applied on all nodes you should have a ready-to-go Kubernetes cluster running on CentOS on top of vSphere.
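The join command follows the standard kubeadm format; the Master address, token and hash below are placeholders for the values printed during the apply:

```shell
kubeadm join 192.168.1.150:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```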

Conclusion:

While I do believe that the future of Kubernetes is such that a lot of the initial installation and configuration will be taken out of our hands and delivered to us via services based in Public Clouds or through platforms such as VMware’s Project Pacific, having a way to deploy a Kubernetes cluster locally on vSphere is a great way to get to know what goes into making a containerisation platform tick.

Build it, break it, destroy it and then repeat… that is the beauty of Terraform!

References:

https://github.com/anthonyspiteri/terraform/tree/master/deploy_kubernetes_CentOS

Orchestration of NSX by Terraform for Cloud Connect Replication with vCloud Director

That is probably the longest title I’ve ever had on this blog, however I wanted to highlight everything that is contained in this solution. Everything above works together to get the job done. The job in this case is to configure an NSX Edge automatically using the vCloud Director Terraform provider to allow network connectivity for VMs that have been replicated into a vCloud Director tenant organization with Cloud Connect Replication.

With the release of Update 4 for Veeam Backup & Replication we enhanced Cloud Connect Replication to finally replicate into a Service Provider’s vCloud Director platform. In doing this we enabled tenants to take advantage of the advanced networking features of the NSX Edge Services Gateway. The only caveat was that, unlike the existing Hardware Plan mechanism, where tenants were able to configure basic networking on the Network Extension Appliance (NEA), the configuration of the NSX Edge had to be done directly through the vCloud Director Tenant UI.

The Scenario:

When VMs are replicated into a vCD organisation with Cloud Connect Replication, the expectation in a full failover is that if a disaster happened on-premises, workloads would be powered on in the service provider cloud and work exactly as if they were still on-premises. Access to services needs to be configured through the edge gateway. The edge gateway is then connected to the replica VMs via the vOrg Network in vCD.

In this example, we have a LAMP based web server that is publishing a WordPress site over HTTP and HTTPS.

The VM is being replicated to a Veeam Cloud Service Provider vCloud Director backed Cloud Connect Replication service.

During a disaster event at the on-premises end, we want to enact a failover of the replica living in the vCloud Director Virtual Datacenter.

The VM replica will be fired up and the NSX Edge (the Network Extension Appliance pictured is used for partial failovers) associated to the vDC will allow HTTP and HTTPS to be accessed from the outside world. The internal IP and subnet of the VM are as they were on-premises. Cloud Connect Replication handles the mapping of the networks as part of the replication job.

Even during the early development days of this feature I was thinking about how this process could be automated somehow. With our previous Cloud Connect Replication networking, we would use the NEA as the edge device and allow basic configuration through the Failover Plan from the Backup & Replication console. That functionality still exists in Update 4, but only for non-vCD backed replication.

The obvious way would be to tap into the vCloud Director APIs and configure the Edge directly. Taking that further, we could wrap those API calls in PowerShell, which would allow a simpler way to pass through variables and deal with payloads. However, with the power that exists in the Terraform vCloud Director provider, it became a no-brainer to leverage it to get the job done.

Configuring NSX Edge with Terraform:

In my previous post around Infrastructure as Code vs APIs I went through a specific example where I configured an NSX Edge using Terraform. I’m not going to go over that again, but what I have done is published that Terraform plan with all the code to GitHub.

The GitHub Project can be found here.

The end result after running the Terraform Plan is:

  • Allowed HTTP, HTTPS, SSH and ICMP access to a VM in a vDC
    • The External IP defined as a variable
    • The Internal IP defined as a variable
    • The vOrg Subnet defined as a variable
  • Configured DNAT rules to allow HTTP, HTTPS and SSH
  • Configured an SNAT rule to allow outbound traffic from the vOrg subnet

The variables that align with the VM and vOrg network are set in the terraform.tfvars file and need to be modified to match the on-premises network configuration. The variables themselves are declared in the variables.tf file.

To add additional VMs and/or vOrg networks you will need to define additional variables in both files and add additional entries under firewall_rules.tf and nat_rules.tf. I will look at ways to make this more elegant using Terraform arrays/lists and programmatic constructs in the future.
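As a hedged sketch of what those entries look like (resource schemas as per the vCloud Director provider of that era; the edge gateway and variable names are assumptions):

```hcl
# Illustrative entries for firewall_rules.tf and nat_rules.tf
resource "vcd_dnat" "web_http" {
  edge_gateway = var.edge_gateway
  external_ip  = var.external_ip
  port         = 80
  internal_ip  = var.internal_ip
}

resource "vcd_snat" "outbound" {
  edge_gateway = var.edge_gateway
  external_ip  = var.external_ip
  internal_ip  = var.vorg_subnet   # the vOrg subnet allowed outbound
}

resource "vcd_firewall_rules" "web" {
  edge_gateway   = var.edge_gateway
  default_action = "drop"

  rule {
    description      = "allow-http"
    policy           = "allow"
    protocol         = "tcp"
    destination_port = "80"
    destination_ip   = var.external_ip
    source_port      = "any"
    source_ip        = "any"
  }
}
```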

Creating PowerShell for Execution:

The Terraform plan can obviously be run standalone and the NSX Edge configuration can be actioned at any time, but the idea here is to take advantage of the script functionality that exists within Veeam Backup & Replication jobs and have the Terraform plan run upon completion of the Cloud Connect Replication job every time it is run.

To achieve this we need to create a PowerShell script:

GitHub – configure_vCD_VCCR_NSX_Edge.ps1

The PowerShell script initializes Terraform, downloads the provider (upgrading it if a newer version is available) and then executes the Terraform plan. Remember that the variables live within the Terraform plan itself, meaning the script remains unchanged.
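A minimal sketch of what such a script boils down to (the working directory is illustrative):

```powershell
# Move to the folder containing the Terraform plan (path is an example)
Set-Location "C:\Terraform\vccr_vcd_configure_nsx_edge"

# Initialize Terraform and download the vCD provider, upgrading if required
terraform init -upgrade

# Apply the NSX Edge configuration without an interactive prompt
terraform apply -auto-approve
```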

Adding Post Script to Cloud Connect Replication Job:

The final step is to configure the PowerShell script to execute once the Cloud Connect Replication job has been run. This is done via the post-script setting that can be found in Job Settings -> Advanced -> Scripts. Drop down to select ps1 files and choose the location of the script.

That’s all that is required to have the PowerShell script executed once the replication job completes.

End Result:

Once the replication component of the job is complete, the post job script will be executed by the job.

This triggers the PowerShell, which runs the Terraform plan. It will check the existing state of the NSX Edge configuration and work out what configuration needs to be added. From the vCD Tenant UI, you should see the Recent Tasks list show modifications to the NSX Edge Gateway made by the user configured to access the vCD APIs via the provider.

Taking a look at the NSX Edge Firewall and NAT configuration, you should see that it has been configured as specified in the Terraform plan, which will match the current state of the plan.

Conclusion:

At the end of the day, what we have done is achieved the orchestration of Veeam Cloud Connect Replication together with vCloud Director and NSX… facilitated by Terraform. This is something that Service Providers offering Cloud Connect Replication can provide to their clients as a way for them to define, control and manage the configuration of the NSX edge networking for their replicated infrastructure so that there is access to key services during a DR event.

While there might seem like a lot happening, this is a great example of leveraging Infrastructure as Code to automate an otherwise manual task. Once the Terraform is understood and the variables applied, the configuration of the NSX Edge will be consistent and in a desired state, with the config checked and applied on every run of the replication job. The configuration will not fall out of line with what is required during a full failover and will ensure that services are available if a disaster occurs.

References:

https://github.com/anthonyspiteri/automation/tree/master/vccr_vcd_configure_nsx_edge

Quick Fix: Terraform Plan Fails on Guest Customizations and VMware Tools

Last week I was looking to add the deployment of a local CentOS virtual machine to the Deploy Veeam SDDC Toolkit project so that it included the option to deploy and configure a local Linux Repository. This could then be added to the Backup & Replication server. As part of the deployment I call the Terraform vSphere Provider to clone and configure the virtual machine from a preloaded CentOS template.

As shown below, I am using the Terraform customization commands to configure VM name, domain details as well as network configuration.
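The customization section of the vSphere provider looks roughly like this (a sketch; the host name, domain and addresses are example values, not the project’s):

```hcl
# Sketch of the clone/customize block inside the vsphere_virtual_machine resource
clone {
  template_uuid = data.vsphere_virtual_machine.template.id

  customize {
    linux_options {
      host_name = "veeam-linux-repo"   # VM/host name
      domain    = "lab.local"          # domain details
    }

    network_interface {
      ipv4_address = "192.168.1.61"    # static IP for the repo VM
      ipv4_netmask = 24
    }

    ipv4_gateway = "192.168.1.1"
  }
}
```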

In configuring the CentOS template I did my usual install of Open VM Tools. When the Terraform plan was applied the VM was cloned without issue, but it failed at the Guest Customization part.

The error is pretty clear, and to test the error and fix, I tried applying the plan without any VMware Tools installed. In fact, without VMware Tools the VM will not finish the initial deployment after the clone and is deleted by Terraform. I next installed open-vm-tools but ended up with the same scenario of the plan failing and the VM not being deployed. For some reason it does not like this version of the package being deployed.

The next test was to deploy the open-vm-tools-deploypkg as described in this VMware KB. Now the Terraform plan executed to the point of cloning the VM and setting up the desired VM hardware and virtual network port group settings, but it still failed on the custom IP and hostname components of the customisation, this time with a slightly different error.

The final requirement is to pre-install the perl package onto the template. This allows the in-guest customizations to take place together with VMware Tools. Once I added that to the template the Terraform plan succeeded without issue.

References:

https://kb.vmware.com/s/article/2075048

Automated Configuration of Backup & Replication with PowerShell

As part of the Veeam Automation and Orchestration for vSphere project Michael Cade and I worked on for VMworld 2018, we combined a number of separate projects to showcase an end to end PowerShell script that called a number of individual modules. Split into three parts, we had a Chef/Terraform module that deployed a server with Veeam Backup & Replication installed, a Terraform module that deployed and configured an AWS VPC to host a Linux Repository with a Veeam PN Sitegateway, and finally a PowerShell module that configured the Veeam server with a number of configuration items ready for first use.

The goal of the project was to release a PowerShell script that fully deployed and configured a Veeam platform on vSphere with backup repositories, vCenter server and default policy based jobs automatically configured and ready for use. This could then be adapted for customer installs, used on SDDC platforms such as VMware Cloud on AWS, or for POCs or lab use.

While we are close to releasing the final code on GitHub for the project, I thought I would branch out the last section of the code and release it separately. As I was creating this script, it became apparent to me that it would be useful for others to use as is, or as an example to help simplify the manual and repetitive tasks that go along with configuring Backup & Replication after installation.

Script Overview:

The PowerShell script (found here on GitHub) performs a number of configuration actions against any Veeam Backup & Replication Server as per the included functions.

All of the variables are configured in a config.json file meaning nothing is required to be modified in the main PowerShell script. There are a number of parameters that can be called to trigger or exclude certain functions.

There are some pre-requisites that need to be in place before the script can be executed… most importantly, the PowerShell needs to be executed on a system where the Backup & Replication Console is installed to allow access to the Veeam PowerShell Snap-in. From there you just need a new Veeam Backup & Replication server and a vCenter server, plus their login credentials. If you want to add a Cloud Connect Provider offering Cloud Connect Backup and/or Replication, you enter all the details in the config.json file as well. Finally, if you want to add a Linux Repository you will need the details of that, plus have it configured for key-based authentication.

You can combine any of the parameters. An example is -ClearVBRConfig being used to reverse the -RunVBRConfigure parameter that was executed first to do an end to end configure. For Cloud Connect Replication, if you want to configure and deploy an NEA there is a specific parameter for that. If you don’t want to configure Cloud Connect or the Linux Repository, the parameters can be used individually or together. If those two parameters are used, the Default Backup Repository will be used for the jobs that are created.
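For illustration, invocations look something like this (the script name here is hypothetical; the switches are the ones described above):

```powershell
# End to end configuration driven by config.json
.\configure-vbr.ps1 -RunVBRConfigure

# Reverse everything the configure run put in place
.\configure-vbr.ps1 -ClearVBRConfig
```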

Automating Policy Based Backup Jobs:

Part of the automation that we were keen to include was the automatic creation of default backup jobs based on vSphere Tags. The idea was to have everything in place to ensure that once the script had been run, VMs could be backed up dependent on them being added to vSphere Tags. Once done, the backup jobs would protect those VMs based on the policies set in the config.json.

The corresponding jobs are all using the vSphere Tags. From here the jobs don’t need to be modified when VMs are added…VMs assigned those Tags will be included in the job.
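A hedged sketch of that pattern with the Veeam Snap-in (the tag, job and repository names are illustrative, not the script’s values):

```powershell
# Load the Veeam PowerShell Snap-in (requires the B&R Console installed locally)
Add-PSSnapin VeeamPSSnapin

# Repository and vSphere Tag to build the job around (names are examples)
$repo = Get-VBRBackupRepository -Name "Default Backup Repository"
$tag  = Find-VBRViEntity -Tags -Name "Policy-Bronze"

# Create a backup job scoped to the Tag; VMs assigned the Tag are picked up automatically
Add-VBRViBackupJob -Name "Bronze Policy Job" -Entity $tag -BackupRepository $repo
```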

Conclusion:

Once the script has been run you are left with a fully configured Backup & Replication server that’s connected to vCenter and if desired (by default) has local and Cloud Connect repositories added with a set of default policy based jobs ready to go using vSphere Tags.

There are a number of improvements that I want to implement and I am looking out for contributors on GitHub to help develop this further. At its base it is functional… but not perfect. However, it highlights the power of the automation that is possible with Veeam’s PowerShell Snap-in and PowerCLI. One of the use cases for this was repeatable deployments of Veeam Backup & Replication into POCs or labs, and for those looking to stand up those environments, this is a perfect companion.

Look out for the full Veeam SDDC Deploy Toolkit being released to GitHub shortly.

References:

https://github.com/anthonyspiteri/powershell/tree/master/BR-Configure-Veeam

Creating Policy Based Backup Jobs for vCloud Director Self Service Portal with Tenant Creation

For a long time Veeam has led the way in regard to the protection of workloads running in vCloud Director. Veeam first released deep integration into vCD back in version 7 of Backup & Replication that talked directly to the vCD APIs to facilitate the backup and recovery of vCD workloads and their constructs. More recently, in version 9.5, the vCD Self Service Portal was released, which also taps into vCD for tenant authentication.

This portal leverages Enterprise Manager and allows service providers to grant their tenants self-service management of their vCD workloads. It’s possible that some providers don’t even know that this portal exists, let alone the value it offers. I’ve covered the basics of the portal here… but in this post I want to talk about how to use the Veeam APIs and PowerShell Snap-in to automatically enable a tenant, create default backup jobs based on policies, tie backup copy jobs to the default jobs for longer retention and finally import the jobs into the vCD Self Service Portal ready for use.

A service provider I worked with recently requested to have previously defined service definitions for tenant backups ported to Veeam and the vCD Self Service Portal. Part of this requirement was to have tenants apply backup policies to their VMs… this included short-term retention and longer-term GFS based backup.

One of the current caveats with the Veeam vCD Self Service Portal is that backup copy jobs are not configurable via the web based portal. The reason for this is that it’s our belief that service providers should be in control of longer term restore operations; however, some providers and their tenants still request this feature.

Translated to a working solution, the PowerShell script combines a previously released set of code by Markus Kraus, which uses the Enterprise Manager API to set up a new tenant in the vCD Self Service Portal, with a set of new functions that create default backup and backup copy jobs for vCD and then import them into the portal ready for use. The variables are controlled by a JSON file, making the script portable for Veeam Cloud and Service Providers to use as a base and build upon.

The end result is that when a tenant first logs into the vCD Self Service Portal they have jobs, dictated by the desired policies, ready for use. The backup jobs are set to disabled without a schedule set. The scope of the default jobs is the tenant’s Virtual Datacenter. If there is a corresponding backup copy job, this is tied to the backup job and is ready to do its thing.

From here, the tenant can choose which policy they want to apply to their workloads and edit the desired job, change or leave the scope and add a schedule. The job name in the Backup & Replication console is modified to indicate which policy the tenant selected.

Again, if the tenant chooses a policy that requires longer term retention, the corresponding backup copy job is enabled in the Backup & Replication console…though not managed by the tenant.

Self-service recovery is possible by the tenant through the portal as per usual, including full VM recovery, file and application item level recovery. Recovery of the longer term workloads and/or items is done by the Service Provider.

This is a great example of the power of the Veeam API and PowerShell SnapIn providing a solution to offer more than what is out of the box and enhance the offering around the backup of vCloud Director workloads with Veeam’s integration. Feel free to use as is, or modify and integrate into your service offerings.

GitHub Page: https://github.com/anthonyspiteri/powershell/tree/master/vCD-Create-SelfServiceTenantandPolicyJobs

Automating the Creation of AWS VPC and Subnets for VMware Cloud on AWS

Yesterday I wrote about how to deploy a Single Host SDDC through the VMware Cloud on AWS web console. I mentioned some pre-requisites that were required in order for the deployment to be successful. Part of those is to set up an AWS VPC with networking in place so that the VMC components can be deployed. While it’s not too hard a task to perform through the AWS console, in the spirit of the work I’m doing around automation I have gotten this done via a Terraform plan.

The max lifetime for a Single Instance deployment is 30 days from creation, but the reality is most people will/should be using this to test the waters and may only want to spin the SDDC up for a couple of hours a day, run some tests and then destroy it. That obviously has its disadvantages as well, the main one being that you have to start from scratch every time. Given the nature of the VMworld session around the automation and orchestration of Veeam and VMC, starting from scratch is not an issue; however, it was desirable to look for efficiencies during the re-deployment.

For those looking to save time and automate parts of the deployment beyond the AWS VPC, there are a number of PowerShell code examples and modules available that, along with the Terraform plan, reduce the time to get a new SDDC firing.

I’m using a combination of the above scripts to deploy a new SDDC once the AWS VPC has been created. The first one actually deploys the SDDC through PowerShell, while the second one is a module that allows some interactivity via cmdlets to do things such as export and import firewall rules.

Using Terraform to Create AWS VPC for VMware Cloud on AWS:

The Terraform plan linked here on GitHub does a couple of things:

  • Creates a new VPC
  • Creates a VPC Network
  • Creates three VPC subnets across different Availability Zones
  • Associates the three VPC subnets to the main route table
  • Creates desired security group rules

https://github.com/anthonyspiteri/vmc_vpc_subnet_create
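A condensed sketch of what the plan does (the region, CIDRs and names are examples, not the project’s values):

```hcl
# Hedged sketch of the VPC and subnet layout for VMC
provider "aws" {
  region = "us-east-1"   # example region
}

data "aws_availability_zones" "available" {}

resource "aws_vpc" "vmc" {
  cidr_block = "10.2.0.0/16"

  tags = {
    Name = "vmc-vpc"
  }
}

# Three subnets spread across different Availability Zones
resource "aws_subnet" "vmc" {
  count             = 3
  vpc_id            = aws_vpc.vmc.id
  cidr_block        = cidrsubnet(aws_vpc.vmc.cidr_block, 8, count.index)
  availability_zone = data.aws_availability_zones.available.names[count.index]
}

# Associate each subnet with the VPC's main route table
resource "aws_route_table_association" "vmc" {
  count          = 3
  subnet_id      = aws_subnet.vmc[count.index].id
  route_table_id = aws_vpc.vmc.main_route_table_id
}
```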

[Note] Even for the Single Instance Node SDDC it will take about 120 minutes to deploy… so that needs to be factored in when planning the window to work on the instance.

Using Terraform to Deploy and Configure a Ready to use Backup Repo into an AWS VPC

A month or so ago I wrote a post on deploying Veeam Powered Network into an AWS VPC as a way to extend the VPC network to a remote site to leverage a Veeam Linux Repository running as an EC2 instance. During the course of deploying that solution I came across a lot of little check boxes and settings that needed to be tweaked in order to get things working. After that, I set myself the goal of trying to automate and orchestrate the deployment end to end.

For an overview of the intended purpose behind the solution head to the original blog post here. That post was mainly focused around the Veeam PN component, however I was using that as a mechanism to create a site-to-site connection to allow Veeam Backup & Replication to talk to the other EC2 instance which was the Veeam Linux Repository.

Terraform by HashiCorp:

In order to automate the deployment into AWS, I looked at CloudFormation first… but found the learning curve to be a little steep… so I went back to HashiCorp’s Terraform, which I had been familiar with for a number of years but had never gotten my hands dirty with. HashiCorp specialise in Cloud Infrastructure Automation and their provisioning product is called Terraform.

Terraform is used to create, manage, and update infrastructure resources such as physical machines, VMs, network switches, containers, and more. Almost any infrastructure type can be represented as a resource in Terraform.

A provider is responsible for understanding API interactions and exposing resources. Providers generally are an IaaS (e.g. AWS, GCP, Microsoft Azure, OpenStack), PaaS (e.g. Heroku), or SaaS services (e.g. Terraform Enterprise, DNSimple, CloudFlare).

Terraform supports a host of providers and once you wrap your head around the basics and view some example code, provisioning Infrastructure as Code can be achieved with relatively little coding experience… however, as I did find out, you need to be careful in this world and not make the same initial mistake I did, as explained in this post.

Going from Manual to Orchestrated with Automation:

The Terraform AWS provider is what I used to create the code required to deploy the required components. Like everything that’s automated, you need to understand the manual process first and that is where the previous experience came in handy. I knew what the end result was…I just needed to work backwards and make sure that the Terraform provider had all the instructions it needed to orchestrate the build.

The basic flow is:

  • Fetch AWS Access Key and Secret
  • Fetch AWS Key Pair
  • Create AWS VPC
    • Configure Networking and Routing for VPC
  • Create CentOS EC2 Instance for Veeam Linux Repo
    • Add new disk and set size
    • Execute configuration script
      • Install PERL modules
  • Create Ubuntu EC2 Instance for Veeam PN
    • Execute configuration script
      • Install VeeamPN modules from repo
  • Log in to the Veeam PN Web Console and import the site configuration.
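The EC2 side of that flow looks roughly like this (a sketch only; the AMI variable, instance size, disk layout and script name are assumptions):

```hcl
# Hedged sketch of the Veeam Linux Repo instance with its extra backup disk
resource "aws_instance" "veeam_repo" {
  ami           = var.centos_ami_id   # CentOS AMI for the region
  instance_type = "t2.medium"
  key_name      = var.key_pair_name   # the fetched AWS Key Pair
  subnet_id     = aws_subnet.main.id  # subnet created earlier in the plan

  # Additional disk for the backup repository
  ebs_block_device {
    device_name = "/dev/sdb"
    volume_size = 250
  }

  # Run the repo configuration script (installs PERL modules, preps the disk)
  provisioner "remote-exec" {
    script = "scripts/configure_repo.sh"   # assumed script name

    connection {
      type        = "ssh"
      user        = "centos"
      private_key = file(var.private_key_path)
      host        = self.public_ip
    }
  }
}
```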

I’ve uploaded the code to a GitHub project. An overview and instructions for the project can be found here. I’ve also posted a video to YouTube showing the end to end process, which I’ve embedded below (best watched at 2x speed):

In order to get the Terraform plan to work there are some variables that need modifying in the GitHub Project and you will need to download, install and initialise Terraform. I’m intending to continue to tweak the project and complete the provisioning end to end, including the Veeam PN site configuration part at the end. The remote execution feature of Terraform allows some pretty cool things by way of script initiation.

References:

https://github.com/anthonyspiteri/automation/aws_create_veeamrepo_veeampn

https://www.terraform.io/intro/getting-started/install.html

Veeam Vault #9: Backup for Office 365 1.5 GA, Azure Stack and Vanguard Roundup

Welcome to another Veeam Vault! This is the ninth edition and given the last edition was focused around VMware and VMworld, I thought just for a change the focus for this edition will be Microsoft. The reason for that is over the past couple of weeks we have had some significant announcements around Azure Stack and the GA release of Backup for Office 365 1.5. I’ll cover both of those announcements, share some Veeam employee automation work that shows off the power of our new APIs and see what the Veeam Vanguards have been blogging about in the last month or so.

Backup for Office 365 1.5 GA:

The early part of my career was dedicated to Exchange Server, however I drifted away from that as I made the switch to server virtualization and cloud computing. The old Exchange admin in me is still there, however, and it’s for that reason that I’m excited about the GA of our Backup for Office 365 product, which is now at version 1.5. This release caters specifically for service providers, adding scalability and automation enhancements as well as extended support for on-premises and hybrid Exchange setups.

New features and enhancements:

  • Distributed, scalable architecture: Enhanced scalability in distributed environments with several Remote Offices/Branch Offices and in service provider infrastructures.
  • Backup proxies: Take the workload off the management server, providing flexible throttling policy settings for performance optimization.
  • Support for multiple repositories: Streamlines data backup and restore processes.
  • Support for backup and restore of on-premises and hybrid Exchange organizations: Allows a variety of configurations and usage scenarios so you can implement those that meet your particular needs.
  • Increased performance: Restore operations are up to 5 times faster than in v1.0.
  • Restore of multiple mailboxes using Veeam Explorer for Microsoft Exchange: Simplifies workflow and minimizes workload for restore operators, along with 1-Click restore of a mailbox to the original location.
  • RESTful API and PowerShell cmdlets: Helpful for automation of routine tasks and integration into existing or new portals.
  • UI enhancements: Including the main window, wizards, dialogs, and other elements, facilitating administration of the solution.

Examples of the Power of the Veeam APIs:

One of the features of Backup for Office 365 1.5 was the addition of a powerful set of RESTful APIs and PowerShell cmdlets that are aimed at service providers automating the setup and management of their offerings around the product. A couple of our employees have written example interfaces for the Backup for Office 365 product, and it shows that any service provider with some in-house programming skill set can build customer portals that enhance their offerings and increase efficiency through automation.

Special welcome to Niels who this week joined our team. Great to have you on board!

Microsoft Azure Stack Support:

Last week at Microsoft Ignite, we announced our support for Azure Stack. This is based around our Windows Agent, Cloud Connect and Availability Console products, which combine to offer an availability solution.

Key benefits of Veeam’s support for the Azure Stack include:

  • Multi-tenancy: Veeam Cloud Connect isolates backup copies for each tenant, ensuring security and compliance;
  • Multiple recovery options: Veeam Backup & Replication supports both granular item level recovery through Veeam Explorers for Microsoft Exchange, SQL Server, Microsoft SharePoint, Microsoft Active Directory and for Oracle, as well as full file level restores for tenant files that were deleted or corrupted;
  • Reporting & Billing: Veeam Availability Console supports real-time monitoring and chargeback on tenant usage, allowing either hosting providers or enterprise organizations to easily manage and bill their tenants for Availability usage.

Veeam Vanguard Blog Post Roundup:

References:

https://helpcenter.veeam.com/docs/vbo365/guide/vbo_what’s_new_in_v1_5.html?ver=15

It’s ok to steal… VMUG UserCon Key Take Aways

Last week I attended the Sydney and Melbourne VMUG UserCons and, apart from sitting in on some great sessions, I came away from both events with a renewed sense of community spirit and enjoyed catching up with industry peers and good friends that I don’t see often enough. While the VMUG is generally struggling a little around the world at this point in time, kudos goes to both the Sydney and Melbourne chapter leaders and steering committees for being able to bring out a superstar bunch of presenters (see panel below)… there might not be a better VMUG lineup anywhere in the world this year!

There was a heavy automation focus this year… which in truth was the same as last year’s events; however, last year’s messaging was more around the theory of _change or die_, while this year it was more around the practical. This was a welcome change because, while it’s all well and good to beat the change messaging into people, actually taking them through real world examples and demos tends to get people more excited and keen to dive into automation, as they get a sense of how to apply it to their every day jobs.

In the VMware community, there are no better examples of automation excellence than Alan Renouf and William Lam, and their closing keynote session, where they deployed a fully functional SDDC vSphere environment on a single ESXi host from a USB key, was brilliant and hopefully will be repeated at other VMUGs and VMworld. This project was born out of last year’s VMworld Hackathons and ended up being a really fun and informative presentation that showed off the power of automation along with the benefits of what undertaking an automation project can deliver.

“Its not stealing, its sharing” 

During the presentation Alan Renouf shared this slide, which got many laughs and resonated well with me, in that apart from my very early failed uni days, I don’t think I have ever created a bit of code or written a script from scratch. There is somewhat of a stigma attached to “borrowing” or “stealing” code used to modify or create scripts within the IT community. There might also be some shame associated with admitting that a bit of code wasn’t 100% created by someone from scratch… I’ve seen this before and I’ve personally been taken to task when presenting some of the scripts that I’ve modified for purpose during my last few roles.

What Alan is pointing out there is that it’s totally ok to stand on the shoulders of giants and borrow from what’s out there in the public domain… if code is published online via someone’s personal blog or put up on GitHub then it’s fair game. There is no shame in being efficient… no shame in not having to start from scratch and certainly no shame in claiming success after any mods have been done… Own it!

Conclusion and Event Wrap Up:

Overall the 2017 Sydney and Melbourne UserCons were excellent events, and on a personal note I enjoyed being able to attend with Veeam as the Platinum Sponsor and present a session on our vSAN/VVOL/SPBM support and introduce our Windows and Linux Agents to the crowd. The Melbourne crowd was especially engaged, asked lots of great questions around our agent story and was looking forward to the release of Veeam Agent for Windows.

Again, the networking with industry peers and customers is invaluable and there was a great sense of community once again. The UserCon events are of a high quality and my thanks goes out to the leaders of both Sydney and Melbourne for working hard to organise these events. And which one was better? … I won’t go there, but those that listened to my comment during our Sponsor giveaways at the end of the event know how I really feel.

Until next year UserCon!

First Look: ManageIQ vCloud Director Orchestration

Welcome to 2017! To kick off the year I thought I’d do a quick post on a little known product (at least in my circles) from Red Hat Inc called ManageIQ. I stumbled across ManageIQ by chance, having caught wind that they were soon to have vCloud Director support added to the product. Reading through some of the history behind ManageIQ I found out that in December of 2012 Red Hat acquired ManageIQ and integrated it into its CloudForms cloud management program… they then made it open source in 2014.

ManageIQ is the open source project behind Red Hat CloudForms. The latest product features are implemented in the upstream community first, before eventually making it downstream into Red Hat CloudForms. This process is similar for all Red Hat products. For example, Fedora is the upstream project for Red Hat Enterprise Linux and follows the same upstream-first development model.

CloudForms is a cloud management platform that also manages traditional server virtualization products such as vSphere and oVirt. This broad capability makes it ideal as a hybrid cloud manager, as it’s able to manage both public clouds and on-premises private clouds and virtual infrastructures. It acts as a single management interface into hybrid environments that enables cross platform orchestration to be achieved with relative ease. This is backed by a community that contributes workflows and code to the project.

The supported platforms are shown below.

The October release was the first iteration for the vCloud provider, which supports authentication, inventory (including vApps), provisioning, power operations and events, all done via the use of the API provided by vCloud Director. First and foremost I see this as a client facing tool rather than an internal orchestration tool for vCAN SPs; however, given it can go cross platform, there can be a use for VM or container orchestration that SPs could tap into.

While it’s still relatively immature compared to the other platforms it supports, I see great potential in this and I think all vCAN Service Providers running vCloud Director should look at it as a way for their customers to better consume and operate vCD from a more modern approach, rather than depending on the UI.

Adding vCloud Director as a Cloud Provider:

Once the appliance is deployed, head to Compute and Add New Cloud Provider. From the Type dropdown select VMware vCloud.

Depending on which version of vCD SP your Service Provider is running, select the appropriate API Version. For vCD SP 8.x it should be vCloud API 9.0

Next add in the URL of the vCloud Director endpoint with its port… which is generally 443. For the username, you use the convention of username@Organization, which allows you to log in specifically to your vCD Organization. If you want to log in as an admin, enter administrator@System to get top level access.

Once connected you can add as many vCD endpoints as you have. As you can see below, I am connected to four separate instances of vCloud.

Clicking through you get a Summary of the vCloud Zone with its relationships.

Clicking on Instances you get a list of your VMs, but this also has views for Virtual Datacenter, vApps and other vCD objects. As you can see below there are detailed views on the VM and it does have basic power functions in this build.

I’ve just started to look into the power of CloudForms and have been reading through the ManageIQ automation guide. It’s one of those things that needs a little research plus some trial and error to master, but I see this form of cloud consumption where the end user doesn’t have to directly manipulate the various API endpoints as the future. I’m looking forward to how the vCloud Director provider matures and I’ll be keeping an eye on the forums and ManageIQ GitHub page for more examples.

Resources:

http://manageiq.org/docs/get-started/
http://manageiq.org/docs/reference/
https://pemcg.gitbooks.io/mastering-automation-in-cloudforms-and-manageiq/content/chapter1.html
