Tag Archives: PowerCLI

Automated Configuration of Backup & Replication with PowerShell

As part of the Veeam Automation and Orchestration for vSphere project that Michael Cade and I worked on for VMworld 2018, we combined a number of separate projects to showcase an end-to-end PowerShell script that called a number of individual modules. It was split into three parts: a Chef/Terraform module that deployed a server with Veeam Backup & Replication installed; a Terraform module that deployed and configured an AWS VPC to host a Linux Repository with a Veeam PN site gateway; and finally a PowerShell module that configured the Veeam server with a number of configuration items ready for first use.

The goal of the project was to release a PowerShell script that fully deployed and configured a Veeam platform on vSphere with backup repositories, vCenter server and default policy based jobs automatically configured and ready for use. This could then be adapted for customer installs, used on SDDC platforms such as VMware Cloud on AWS, or for POCs or lab use.

While we are close to releasing the final code for the project on GitHub, I thought I would branch out the last section of the code and release it separately. As I was creating this script, it became apparent that it would be useful for others, either as is or as an example from which to simplify the manual and repetitive tasks that go along with configuring Backup & Replication after installation.

Script Overview:

The PowerShell script (found here on GitHub) performs a number of configuration actions against any Veeam Backup & Replication Server as per the included functions.

All of the variables are configured in a config.json file meaning nothing is required to be modified in the main PowerShell script. There are a number of parameters that can be called to trigger or exclude certain functions.
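As a rough illustration, the config.json drives everything; a file along these lines is what the script consumes (the exact key names in the published file may differ, so treat these as hypothetical):

```json
{
  "VBRServer":    { "Server": "vbr01.lab.local", "Username": "administrator", "Password": "P@ssw0rd" },
  "vCenter":      { "Server": "vc01.lab.local", "Username": "administrator@vsphere.local", "Password": "P@ssw0rd" },
  "LinuxRepo":    { "Server": "repo01.lab.local", "User": "veeam", "KeyPath": "C:\\keys\\repo01.pem" },
  "CloudConnect": { "Server": "cc.provider.example", "Username": "tenant", "Password": "P@ssw0rd" },
  "JobPolicies":  [ { "Tag": "Bronze", "RestorePoints": 7 }, { "Tag": "Silver", "RestorePoints": 14 } ]
}
```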

There are some prerequisites that need to be in place before the script can be executed…most importantly, the script needs to be run on a system where the Backup & Replication Console is installed, to allow access to the Veeam PowerShell Snap-in. From there you just need a new Veeam Backup & Replication server and a vCenter server, plus their login credentials. If you want to add a Cloud Connect provider offering Cloud Connect Backup and/or Replication, you enter those details in the config.json file as well. Finally, if you want to add a Linux Repository you will need its details, and it must be configured for key-based authentication.

You can combine any of the parameters listed above. For example, -ClearVBRConfig can be used to reverse an end-to-end configuration that was previously applied with -RunVBRConfigure. For Cloud Connect Replication, if you want to configure and deploy an NEA there is a specific parameter for that. If you don't want to configure Cloud Connect or the Linux Repository, the parameters can be used individually or together; when those two parameters are used, the Default Backup Repository will be used for the jobs that are created.
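To illustrate (the script filename here is hypothetical; the parameter names are as described above):

```powershell
# Full end-to-end configuration driven by config.json
.\Configure-VBR.ps1 -RunVBRConfigure

# Reverse everything the previous run configured
.\Configure-VBR.ps1 -ClearVBRConfig
```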

Automating Policy Based Backup Jobs:

Part of the automation we were keen to include was the automatic creation of default backup jobs based on vSphere Tags. The idea was to have everything in place so that, once the script had been run, VMs could be backed up simply by being assigned a vSphere Tag. Once tagged, the backup jobs protect those VMs based on the policies set in the config.json.

The corresponding jobs all use the vSphere Tags, so the jobs don't need to be modified when VMs are added…any VM assigned those Tags will be included in the job.
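A minimal sketch of creating one tag-driven job with the Veeam Snap-in looks like this (server, tag and repository names are hypothetical; the actual script reads them from config.json):

```powershell
Add-PSSnapin VeeamPSSnapin
Connect-VBRServer -Server "vbr01.lab.local"

# Find the vSphere Tag in the tag view of the vCenter hierarchy
$tag = Find-VBRViEntity -Tags -Name "Bronze"

# Create a backup job scoped to the Tag itself - VMs assigned the Tag
# later are picked up automatically, with no job edits required
Add-VBRViBackupJob -Name "Policy - Bronze" `
    -Entity $tag `
    -BackupRepository (Get-VBRBackupRepository -Name "Default Backup Repository")
```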


Once the script has been run you are left with a fully configured Backup & Replication server that’s connected to vCenter and if desired (by default) has local and Cloud Connect repositories added with a set of default policy based jobs ready to go using vSphere Tags.

There are a number of improvements that I want to implement, and I am looking for contributors on GitHub to help develop this further. At its base it is functional…but not perfect. However, it highlights the power of the automation that is possible with Veeam's PowerShell Snap-in and PowerCLI. One of the use cases for this was repeatable deployments of Veeam Backup & Replication into POCs or labs, and for those looking to stand up those environments this is a perfect companion.

Look out for the full Veeam SDDC Deploy Toolkit being released to GitHub shortly.



It’s ok to steal… VMUG UserCon Key Takeaways

Last week I attended the Sydney and Melbourne VMUG UserCons, and apart from sitting in on some great sessions I came away from both events with a renewed sense of community spirit, having enjoyed catching up with industry peers and good friends I don't see often enough. While VMUG is generally struggling a little around the world at this point in time, kudos goes to both the Sydney and Melbourne chapter leaders and steering committees for being able to bring out a superstar bunch of presenters…there might not be a better VMUG lineup anywhere in the world this year!

There was a heavy automation focus this year, which in truth was the same as last year's events; however, last year's messaging was more around the theory of _change or die_, while this year it was more about the practical. This was a welcome change because, while it's all well and good to beat the change messaging into people, actually taking them through real-world examples and demos tends to get people more excited and keen to dive into automation, as they get a sense of how to apply it to their everyday jobs.

In the VMware community there are no better examples of automation excellence than Alan Renouf and William Lam. Their closing keynote session, in which they deployed a fully functional SDDC vSphere environment on a single ESXi host from a USB key, was brilliant and will hopefully be repeated at other VMUGs and at VMworld. This project was born out of last year's VMworld Hackathons and ended up being a really fun and informative presentation that showed off the power of automation, along with the benefits that undertaking an automation project can deliver.

“It’s not stealing, it’s sharing”

During the presentation Alan Renouf shared this slide, which got many laughs and resonated with me: apart from my very early, failed uni days, I don't think I have ever created a bit of code or written a script from scratch. There is somewhat of a stigma attached to "borrowing" or "stealing" code used to modify or create scripts within the IT community. There might also be some shame associated with admitting that a bit of code wasn't 100% created from scratch…I've seen this before, and I've personally been taken to task when presenting some of the scripts I've modified for purpose during my last few roles.

What Alan is pointing out is that it's totally OK to stand on the shoulders of giants and borrow from what's out there in the public domain…if code is published online via someone's personal blog or put up on GitHub, then it's fair game. There is no shame in being efficient, no shame in not having to start from scratch, and certainly no shame in claiming success after any mods have been done… Own it!

Conclusion and Event Wrap Up:

Overall the 2017 Sydney and Melbourne UserCons were excellent events, and on a personal note I enjoyed being able to attend with Veeam as the Platinum Sponsor, present a session on our vSAN/VVOL/SPBM support, and introduce our Windows and Linux agents to the crowd. The Melbourne crowd was especially engaged, asked lots of great questions around our agent story, and were looking forward to the release of Veeam Agent for Windows.

The networking with industry peers and customers is invaluable, and there was a great sense of community once again. The UserCon events are of a high quality, and my thanks go out to the leaders of both Sydney and Melbourne for working hard to organise them. And which one was better? …I won't go there, but those who listened to my comment during our sponsor giveaways at the end of the event know how I really feel.

Until next year UserCon!

PowerCLI Script to Calculate VSAN vCAN Points Per Month

There is no doubt that the new pricing for vCAN Service Providers, announced just after VSAN 6.2 was released, means that Service Providers who had previously written VSAN off due to price can once again consider it a viable and price-competitive option for their IaaS or MSP offerings. As of writing this blog post there is no way to meter the new pricing mechanism automatically through the existing vCloud Usage Meter, with the current 3.5 beta also lacking the ability to report this billing info.

I had previously come across a post from @virten that contained a PowerCLI script to calculate VSPP points based on the original allocated-GB model. With VSAN 6.2, pricing is now based on a consumed-GB model, which was a significant win for those pushing for a more competitive pricing structure to position a now-mature VSAN as a platform of choice.

Before I post the code it’s worth noting that I am still not 100% happy with the interpretation of the reporting:

The VsanSpaceUsage(vim.cluster.VsanSpaceUsage) data object has the following two properties which vCAN partners can use to pull Virtual SAN usage information: a) totalCapacityB (total Virtual SAN capacity in bytes) and b) freeCapacityB (free Virtual SAN capacity in bytes). Subtracting b) from a) should yield the desired “Used Capacity” information for monthly reporting.

I read that to say that you report inclusive of any fault tolerance or data resiliency overheads…that is to say, if you have a VM with a 100GB hard disk consuming 50GB on a VSAN datastore utilising RAID1 and FTT=1, the mirror copy doubles the footprint and you will pay for the 100GB that is actually consumed.

With that in mind I had to add a multiplier to the original script I had hacked together, to cater for the fault tolerance and RAID level you may run. The rest is pretty self-explanatory: I have built on @virten's original script by asking which vCenter you want to log into, which VSAN licensing model you are using, and finally the RAID and FTT levels you are running. The result is the total consumed storage of all VM disks residing on the VSAN datastore (the datastore being the only hard-coded value) and then the number of vCAN points you would be up for per month, with and without the overhead tax.

The code is below; please share and improve, and note that I provide it as is and it should be used as such. Please let me know if I've made any glaring mistakes…

If someone can also let me know how to round numbers and capture an incorrect vCenter login gracefully and exit that would be excellent! – [EDIT] Thanks to Virten for jumping on that! Code updated!
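The embedded script itself hasn't survived in this archive, so here is a minimal sketch of the logic described above (the datastore name, FTT prompt and points-per-GB rate are all placeholders; check the current vCAN price list for real rates):

```powershell
Connect-VIServer -Server (Read-Host "vCenter server")

# RAID1 mirroring stores FTT+1 copies of every object
$ftt = [int](Read-Host "FTT level (e.g. 1)")
$multiplier = $ftt + 1

# Sum the consumed storage of all VMs residing on the VSAN datastore
$consumedGB = (Get-Datastore -Name "vsanDatastore" | Get-VM |
    Measure-Object -Property UsedSpaceGB -Sum).Sum

$pointsPerGB = 0.02   # placeholder rate - substitute your licensing model's rate
"{0:N2} points/month without overhead" -f ($consumedGB * $pointsPerGB)
"{0:N2} points/month with RAID/FTT overhead" -f ($consumedGB * $multiplier * $pointsPerGB)
```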


PowerCLI Script to Calculate VSAN VSPP Points

Quick Post: Removing Datastore Tags and Mounts with PowerCLI

Over the past couple of weeks I've been helping our Ops team decommission an old storage array. Part of the process is to remove the datastore mounts and paths to ensure a clean ESXi host config, as well as remove any vCenter Tags that are used for vCloud Director storage policies.

Looking through my post archive I came across this entry from 2013 which (while relating to ESXi 4.1) shows that there can be bad consequences if you pull a LUN from a host in the incorrect manner. Also, if you are referencing datastores through storage policies and vCenter Tags in vCloud Director, an incorrectly removed datastore will throw errors for the Virtual DC and Provider vDC from where the datastores used to be referenced.

With that, below is the process I refined with the help of an excellent set of PowerCLI cmdlets provided by the module created by Alan Renouf.

Step 1 – Remove Any vCenter Tags:
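Something along these lines handles the tag cleanup (the datastore name is hypothetical):

```powershell
# Find and remove every vCenter Tag assigned to the datastore being retired
Get-Datastore -Name "OldArray-DS01" |
    Get-TagAssignment |
    Remove-TagAssignment -Confirm:$false
```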

After this has been done you can go into vCloud Director and Refresh the Storage Policies which will remove the datastores from the Providers.

Step 2 – Import Datastore Function Module:
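Assuming the functions were saved locally as a module file (the path is hypothetical):

```powershell
# Load Alan Renouf's datastore functions (Get-DatastoreMountInfo,
# Unmount-Datastore, Detach-Datastore) into the current session
Import-Module C:\Scripts\DatastoreFunctions.psm1
```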

Step 3 – Connect to vCenter, Dismount and Detach Datastore
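With the module loaded, the sequence looks roughly like this (vCenter and datastore names are hypothetical):

```powershell
Connect-VIServer -Server "vc01.lab.local"

# Check which hosts mount the datastore and which device paths exist
Get-Datastore -Name "OldArray-DS01" | Get-DatastoreMountInfo |
    Sort-Object Datastore, VMHost

# Unmount from every connected host, then detach the underlying paths
Get-Datastore -Name "OldArray-DS01" | Unmount-Datastore
Get-Datastore -Name "OldArray-DS01" | Detach-Datastore
```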

What the above commands do is check to see what Hosts are connected to the datastore being removed and what paths exist. You then run the Unmount command to unmount from the host and the Detach command removes all the paths from the host.

Step 4 – Refresh Storage on Hosts

The last step is to refresh the storage to remove all reference of the datastore from the host.
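A rescan across the cluster takes care of this (the cluster name is hypothetical):

```powershell
# Rescan HBAs and VMFS volumes so the removed datastore disappears from each host
Get-Cluster -Name "Prod-Cluster" | Get-VMHost |
    Get-VMHostStorage -RescanAllHba -RescanVmfs | Out-Null
```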

I did encounter a problem on a couple of hosts during the unmount process that returned the error as shown below:

This error is actually caused by a VSAN module that actively stores traces needed to debug VSAN-related issues on VMFS datastores…not really cool when VSAN isn't being used, but the fix is a simple one, as specified in this KB.




PowerCLI IOPS Metrics: vCloud Org and VPS Reporting

We have recently been working through a project where knowing and reporting on VM max read/write IOPS was critical. We needed a way to provide reporting on our clients' VPSs and vCloud Organisation VMs.

vCOps is a seriously great monitoring and analytics tool, but it has a flaw in its reporting in that you can't search, export or manipulate metrics relating to VM IOPS in a useful way. Veeam ONE gives you a Top 10 list of IOPS, and CloudPhysics has a great card showing datastore/VM performance…but again, neither is exportable or granular enough for what we needed.

If you search Google for IOPS reporting you will find a number of people who have created excellent PowerCLI scripts. The problem I found was that most worked in some cases, but not for what we required. One particular post on the VMware Community Forums gave a quick and dirty script to gather IOPS stats for all VMs, and this led me to the Alpacapowered blog. So initial credit for the following goes to MKguy…I merely hacked around it to provide us with additional functionality.

Before You Start:

Depending on your logging level in vCenter (I have run this against vCenter 5.1 with PowerCLI 5.5) you may not be collecting the stats required to get read/write IOPS. To check this, run the following in PowerCLI connected to your vCenter.

If you don't get the output, it means your logging level is set lower than required. Read through this post to have vCenter log the required metrics at a granular level. Once that's been done, give vCenter about 30 minutes to collect its 5-minute samples. If you ever want to check how many samples you have for a particular VM, you can run the following command; it will also show you the min/max count plus the average.
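A check along these lines summarises the samples for one VM (the VM name is hypothetical; the metric name is the standard vCenter virtual disk counter):

```powershell
# Pull the 5-minute write IOPS samples for the last 30 days and summarise them
Get-Stat -Entity (Get-VM -Name "MyVM") `
    -Stat "virtualdisk.numberwriteaveraged.average" `
    -Start (Get-Date).AddDays(-30) -IntervalMins 5 -ErrorAction SilentlyContinue |
    Measure-Object -Property Value -Minimum -Maximum -Average
```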

The Script:

I've created two versions of the script (one for single VMs and one for vCloud Org VMs) and, as you can see below, I added a couple of niceties to make it more user friendly and easy to trigger for our internal support staff. The idea is that anyone with the right access to vCenter can double-click the .ps1 script and, with the right details, produce a report for either a single VM or a vCloud Organisation.

Script Notes:

Line 1: Adds the PowerCLI Snap-in so ESXi cmdlets can be called from PowerShell on launch of the .ps1.

Line 3: Without notes from MKguy, I'm assuming this tells the script to use the last 30 days of stats if they exist.

Line 7: I discovered the -Menu flag for Connect-VIServer, which lists your ten most recently connected vCenter or ESXi servers; from there you enter a number to connect (ease of use for helpdesk).

Line 16: Uses the Get-Folder cmdlet to get all the VMs in a vCloud Org…you can obviously enter your own preferred search flags here.

Lines 17-22: These are the ones I picked up from the Community post. They basically take the command we used above to check for sample metrics and feed it into read/write variables, which are then displayed in a series of columns as shown below.
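Since the embedded script itself hasn't survived in this archive, here is a minimal sketch of the vCloud Org version following the notes above (the folder lookup and output columns are illustrative; credit for the stats logic goes to MKguy):

```powershell
# Line 1 equivalent: load the PowerCLI snap-in so the .ps1 runs on double-click
Add-PSSnapin VMware.VimAutomation.Core

# -Menu lists your most recently connected servers and lets you pick by number
Connect-VIServer -Menu

$stats = "virtualdisk.numberreadaveraged.average",
         "virtualdisk.numberwriteaveraged.average"

# Get-Folder pulls every VM in the vCloud Org's folder
$vms = Get-Folder -Name (Read-Host "vCloud Org name") | Get-VM

$report = foreach ($vm in $vms) {
    # Use up to the last 30 days of 5-minute samples, if they exist
    $s = Get-Stat -Entity $vm -Stat $stats -Start (Get-Date).AddDays(-30) -ErrorAction SilentlyContinue
    $read  = $s | Where-Object { $_.MetricId -eq $stats[0] } | Measure-Object -Property Value -Average -Maximum
    $write = $s | Where-Object { $_.MetricId -eq $stats[1] } | Measure-Object -Property Value -Average -Maximum
    [pscustomobject]@{
        VM           = $vm.Name
        AvgReadIOPS  = [math]::Round($read.Average, 1)
        MaxReadIOPS  = $read.Maximum
        AvgWriteIOPS = [math]::Round($write.Average, 1)
        MaxWriteIOPS = $write.Maximum
    }
}
$report | Format-Table -AutoSize
```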

Script Output:

Executing the .ps1 will open a PowerShell window and ask you to enter the vCenter/host and finally the VM name or vCloud Org description. If you have a folder with a number of VMs, the script can take a little time to work through the math and spit out the values.

From there you can select and copy to export the values out for manipulation…I haven't added a CSV export option due to time constraints, but if anyone wants to add that to the end of the script, please do and let me know 🙂

Hope this script is useful for some!

vCloud Reporting: Org and OrgvDC VM Report (PowerCLI)

I had been looking for a way to get quick reports from our vCloud zones using PowerCLI that report on VM allocated usage. Basically I wanted to get a list of VMs/vApps and return values for allocated vCPU, vRAM and storage.

I came across this blog from Geek After Five (@jakerobinson) which uses the PowerCLI cloud cmdlet Get-CIVM, which can typically be used to report on name, vCPU and vRAM count…but not storage. I've slightly extended the script to list vCloud Orgs, and created another script that can list vCloud vDCs and then return values for all VMs contained in the vApps. The smarts of the script are all Jake's, so thank you for creating and sharing. #community
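As the scripts themselves haven't survived in this archive, here is a rough sketch of the Org/vDC walk (the cell address is hypothetical; the storage figure in the real script comes from digging into ExtensionData, which is omitted here):

```powershell
# Connect to the vCloud Director cell
Connect-CIServer -Server "vcloud.example.com"

# List the vCloud Orgs first, then walk each Org vDC's vApps and VMs
Get-Org | Select-Object Name

foreach ($vdc in Get-OrgVdc) {
    Get-CIVApp -OrgVdc $vdc | Get-CIVM |
        Select-Object @{N = "OrgVdc"; E = { $vdc.Name }}, Name, CpuCount, MemoryMB
}
```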



Example Output Below:

I would have liked to format the list a little better, but was running into a double Format-Table issue in the array, so for the moment it's a fairly messy list, but helpful nonetheless. The next step is to add an email function to get the CSV info delivered for further use.