Tag Archives: PowerCLI

PowerCLI Script to Calculate VSAN vCAN Points Per Month

There is no doubt that the new pricing for vCAN Service Providers, announced just after VSAN 6.2 was released, meant that Service Providers who had previously written VSAN off for their IaaS or MSP offerings due to price could once again consider it a viable and price-competitive option. As of writing this blog post there is no way to meter usage for the new reporting mechanism automatically through the existing vCloud Usage Meter, with the current 3.5 beta also lacking the ability to report the billing info.

I had previously come across a post from @virten that contained a PowerCLI script to calculate VSPP points based on the original allocated GB model. With VSAN 6.2, pricing is now based on a consumed GB model, which is a significant win for those who had been pushing for a more competitive pricing structure in order to position a now mature VSAN as a platform of choice.

Before I post the code it’s worth noting that I am still not 100% happy with the interpretation of the reporting:

The VsanSpaceUsage(vim.cluster.VsanSpaceUsage) data object has the following two properties which vCAN partners can use to pull Virtual SAN usage information: a) totalCapacityB (total Virtual SAN capacity in bytes) and b) freeCapacityB (free Virtual SAN capacity in bytes). Subtracting b) from a) should yield the desired “Used Capacity” information for monthly reporting.

I read that to say that you report on (and pay for) any fault tolerance or data resiliency overheads… that is to say, if you have a VM with a 100GB hard disk consuming 50GB on a VSAN datastore utilizing RAID1 and FTT=1, you will pay for the 100GB that is actually consumed on disk (the 50GB plus its mirror copy).
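As a quick illustration, the datastore-level used capacity (total minus free) already includes that resiliency overhead; something like the snippet below shows it, with the default datastore name assumed:

$ds = Get-Datastore -Name "vsanDatastore"   # adjust to your VSAN datastore name
$usedGB = [math]::Round($ds.CapacityGB - $ds.FreeSpaceGB, 2)   # equivalent of totalCapacityB - freeCapacityB
Write-Host "Used VSAN capacity (including resiliency overhead): $usedGB GB"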

With that in mind I had to add a multiplier into the original script I had hacked together to cater for the fault tolerance and RAID level you may run. The rest is pretty self explanatory: I have built on @virten's original script by asking which vCenter you want to log into, what VSAN licensing model you are using and, finally, the RAID and FTT levels you are running. The result is the total amount of consumed storage across all VM disks residing on the VSAN datastore (whose name is the only value hard coded) and the number of vCAN points you would be up for per month, both with and without the overhead tax.

The code is below; please share and improve it, and note that it is provided as is and should be used as such. Please let me know if I've made any glaring mistakes…

If someone can also let me know how to round numbers and capture an incorrect vCenter login gracefully and exit, that would be excellent! – [EDIT] Thanks to Virten for jumping on that! Code updated!
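In outline the script looks something like this (a simplified sketch: the point rate is a placeholder to be swapped for the current vCAN rate of your VSAN edition, and the datastore name is assumed to be the default vsanDatastore):

Add-PSSnapin VMware.VimAutomation.Core -ErrorAction SilentlyContinue

$vc = Read-Host "vCenter server to connect to"
try {
    Connect-VIServer -Server $vc -ErrorAction Stop | Out-Null
} catch {
    Write-Host "Could not connect to $vc - exiting." -ForegroundColor Red
    exit
}

# Placeholder: vCAN points charged per consumed GB per month for your VSAN edition
$rate = [double](Read-Host "vCAN points per consumed GB per month")
$ftt  = [int](Read-Host "Failures To Tolerate (FTT)")
$raid = Read-Host "RAID level (1, 5 or 6)"

# Overhead multiplier: RAID1 mirrors the data FTT+1 times; RAID5/6 erasure coding
# adds roughly 1.33x and 1.5x respectively
switch ($raid) {
    "1"     { $multiplier = $ftt + 1 }
    "5"     { $multiplier = 1.33 }
    "6"     { $multiplier = 1.5 }
    default { $multiplier = 1 }
}

# Total consumed space of all VM disks residing on the VSAN datastore (name hard coded)
$vsanDS     = Get-Datastore -Name "vsanDatastore"
$consumedGB = (Get-VM -Datastore $vsanDS | Measure-Object -Property UsedSpaceGB -Sum).Sum

Write-Host ("Consumed by VMs on the VSAN datastore: {0} GB" -f [math]::Round($consumedGB, 2))
Write-Host ("vCAN points per month (no overhead):   {0}" -f [math]::Round($consumedGB * $rate, 2))
Write-Host ("vCAN points per month (with overhead): {0}" -f [math]::Round($consumedGB * $multiplier * $rate, 2))

Disconnect-VIServer -Server $vc -Confirm:$false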

References:

PowerCLI Script to Calculate VSAN VSPP Points

Quick Post: Removing Datastore Tags and Mounts with PowerCLI

Over the past couple of weeks I've been helping our Ops Team decommission an old storage array. Part of the process is to remove the datastore mounts and paths to ensure a clean ESXi host config, as well as remove any vCenter Tags that are used for vCloud Director Storage Policies.

Looking through my post archive I came across this entry from 2013 which (while relating to ESXi 4.1) shows that there can be bad consequences if you pull a LUN from a host in the incorrect manner. Also, if you are referencing datastores through storage policies and vCenter Tags in vCloud Director, an incorrectly removed datastore will throw errors for the Virtual DC and Provider vDC from which the datastores used to be referenced.

With that, below is the process I refined with the help of an excellent set of PowerCLI cmdlets provided by the module created by Alan Renouf.

Step 1 – Remove Any vCenter Tags:
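Something along these lines does the trick, using the vSphere tag cmdlets (the datastore name is an example):

# Remove any vSphere tag assignments from the datastore being decommissioned
$ds = Get-Datastore -Name "OLD-ARRAY-DS01"
Get-TagAssignment -Entity $ds | Remove-TagAssignment -Confirm:$false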

After this has been done you can go into vCloud Director and refresh the Storage Policies, which will remove the datastores from the Providers.

Step 2 – Import Datastore Function Module:
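Assuming you have saved the module from the communities doc linked below, the import looks something like this (the path and file name are examples, so adjust to wherever you saved it):

# Import Alan Renouf's datastore functions module and list the functions it provides
Import-Module C:\Scripts\DatastoreFunctions.psm1
Get-Command -Module DatastoreFunctions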

Step 3 – Connect to vCenter, Dismount and Detach Datastore
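A sketch of the commands for this step, using the functions from the imported module (the vCenter and datastore names are examples):

Connect-VIServer -Server vcenter01.lab.local

$ds = Get-Datastore -Name "OLD-ARRAY-DS01"

# Show which hosts have the datastore mounted and the state of its paths
$ds | Get-DatastoreMountInfo | Sort-Object Datastore, VMHost | Format-Table -AutoSize

# Unmount the datastore from the hosts, then detach the underlying device paths
$ds | Unmount-Datastore
$ds | Detach-Datastore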

What the above commands do is check which hosts have the datastore being removed mounted and what paths exist. You then run the Unmount command to unmount the datastore from each host, and the Detach command to remove all of its paths from the hosts.

Step 4 – Refresh Storage on Hosts
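Something like the below does it, with the cluster name as an example:

# Rescan HBAs and VMFS on every host in the cluster so the removed datastore disappears
Get-Cluster -Name "Prod-Cluster-01" | Get-VMHost | Get-VMHostStorage -RescanAllHba -RescanVmfs | Out-Null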

The last step is to refresh the storage to remove all references to the datastore from the hosts.

I did encounter a problem on a couple of hosts during the unmount process that returned the error as shown below:

This error is actually caused by a VSAN module that actively stores traces (needed to debug any VSAN-related issues) on VMFS datastores… not really cool when VSAN isn't being used, but the fix is a simple one, as specified in this KB.

References:

http://blogs.vmware.com/vsphere/2012/01/automating-datastore-storage-device-detachment-in-vsphere-5.html

https://communities.vmware.com/docs/DOC-18008

PowerCLI IOPS Metrics: vCloud Org and VPS Reporting

We have recently been working on a product where knowing and reporting on VM max read/write IOPS was critical. We needed a way to provide reporting on our clients' VPSs and vCloud Organisation VMs.

vCOPs is a seriously great monitoring and analytics tool, but it has a flaw in its reporting in that you can't search, export or manipulate metrics relating to VM IOPS in a useful way. Veeam ONE gives you a Top 10 list of IOPS and CloudPhysics has a great card showing datastore/VM performance… but again, neither is exportable or granular enough for what we needed.

If you search Google for IOPS reporting you will find a number of people who have created excellent PowerCLI scripts. The problem I found was that most worked in some cases, but not for what we required. One particular post I came across on the VMware Community Forums gave a quick and dirty script to gather IOPS stats for all VMs, and this led me to the Alpacapowered blog. So initial credit for the following goes to MKguy… I merely hacked around it to provide us with additional functionality.

Before You Start:

Depending on your statistics logging level in vCenter (I have run this against vCenter 5.1 with PowerCLI 5.5) you may not be collecting the stats required to get read/write IOPS. To check this, run the following in PowerCLI while connected to your vCenter:
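A check along these lines, using the standard per-VM datastore IOPS counters, will show whether they are being collected:

# List the per-VM datastore IOPS counters available for a sample VM
Get-StatType -Entity (Get-VM | Select-Object -First 1) | Where-Object { $_ -like "datastore.number*" }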

If you don't get any output it means your logging level is set lower than is required. Read through this post to have vCenter log the required metrics at a granular level. Once that's been done, give vCenter about 30 minutes to collect its 5-minute samples. If you ever want to check how many samples you have for a particular VM, you can run the following command; it will also show you the minimum, maximum and average values along with the sample count.
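Something like this does the trick (the VM name is an example); Measure-Object returns the count along with the minimum, maximum and average:

# Count the 5-minute samples for one VM over the last 30 days and summarise its read IOPS
Get-Stat -Entity (Get-VM -Name "MyVM01") -Stat "datastore.numberReadAveraged.average" -Start (Get-Date).AddDays(-30) |
    Measure-Object -Property Value -Minimum -Maximum -Average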

The Script:

I've created two versions of the script (one for single VMs and one for vCloud Org VMs) and, as you can see below, I added a couple of niceties to make it more user friendly and easy to trigger for our internal support staff. The idea is that anyone with the right access to vCenter can double-click the .ps1 script and, with the right details, produce a report for either a single VM or a vCloud Organisation.
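The vCloud Org flavour boils down to something like the sketch below; it is a rough, cut-down approximation, so the line numbers in the notes that follow refer to the full script rather than this sketch:

Add-PSSnapin VMware.VimAutomation.Core -ErrorAction SilentlyContinue

# Use the last 30 days of 5-minute samples, if they exist
$start = (Get-Date).AddDays(-30)

# -Menu lists your most recently connected vCenter/ESXi servers to pick from
Connect-VIServer -Menu

# Get every VM in the vCloud Org folder entered by the operator
$orgFolder = Read-Host "vCloud Org folder name"
$vms = Get-Folder -Name $orgFolder | Get-VM

$report = foreach ($vm in $vms) {
    # Feed the read/write IOPS samples into variables and summarise them per VM
    $read  = @(Get-Stat -Entity $vm -Stat "datastore.numberReadAveraged.average"  -Start $start -ErrorAction SilentlyContinue)
    $write = @(Get-Stat -Entity $vm -Stat "datastore.numberWriteAveraged.average" -Start $start -ErrorAction SilentlyContinue)
    [PSCustomObject]@{
        VM           = $vm.Name
        MaxReadIOPS  = ($read  | Measure-Object -Property Value -Maximum).Maximum
        AvgReadIOPS  = [math]::Round(($read  | Measure-Object -Property Value -Average).Average, 2)
        MaxWriteIOPS = ($write | Measure-Object -Property Value -Maximum).Maximum
        AvgWriteIOPS = [math]::Round(($write | Measure-Object -Property Value -Average).Average, 2)
    }
}
$report | Format-Table -AutoSize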

Script Notes:

Line 1: Adds the PowerCLI snap-in so the vSphere cmdlets can be called from a plain PowerShell session when the .ps1 is launched.

Line 3: Without notes from MKguy, I'm assuming this tells the script to use the last 30 days of stats if they exist.

Line 7: I discovered the -Menu flag for Connect-VIServer, which lists your 10 most recently connected vCenter or ESXi servers… from there you enter a number to connect (ease of use for the helpdesk).

Line 16: Uses the Get-Folder command to get all the VMs in a vCloud Org… you can obviously substitute your own preferred search filters here.

Lines 17-22 are the ones I picked up from the Community post; they basically take the command we used above to check for sample metrics and feed it into read/write variables, which are then displayed in a series of columns as shown below.

Script Output:

Executing the .ps1 will open a PowerShell window, ask you to enter the vCenter/host and finally the VM name or vCloud Org description. If you have a folder with a large number of VMs, the script can take a little time to work through the math and spit out the values.

From there you can select and copy to get the values out for manipulation… I haven't added a CSV export option due to time constraints, so if anyone wants to add that to the end of the script, please do and let me know 🙂

Hope this script is useful for some!

vCloud Reporting: Org and OrgvDC VM Report (PowerCLI)

I had been looking for a way to get quick reports from our vCloud zones using PowerCLI that report on VM allocated usage. Basically I wanted to get a list of VMs/vApps and return values for allocated vCPU, vRAM and storage.

I came across this blog from Geek After Five (@jakerobinson) which uses the PowerCLI Cloud cmdlet Get-CIVM, which can typically be used to report on name, vCPU and vRAM count… but not storage. I've slightly extended the script to list vCloud Orgs, and created another script that lists vCloud vDCs and then returns values for all VMs in the contained vApps. The smarts of the script are all Jake's, so thank you for creating and sharing. #community

Get-vORG-VM-Detail
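A rough sketch of what the Org version does is below. It assumes the vCloud PowerCLI cmdlets Get-Org and Get-CIVM; allocated storage isn't exposed directly on the CIVM object, so that part is left to the ExtensionData dig from Jake's original:

# List the available vCloud Orgs and prompt for one
Get-Org | Select-Object Name, FullName | Format-Table -AutoSize
$orgName = Read-Host "vCloud Org name"

$report = foreach ($vm in (Get-Org -Name $orgName | Get-CIVM)) {
    [PSCustomObject]@{
        Org      = $orgName
        vApp     = $vm.VApp.Name
        VM       = $vm.Name
        vCPU     = $vm.CpuCount
        MemoryGB = [math]::Round($vm.MemoryMB / 1024, 2)
        # StorageGB comes from the VM's ExtensionData in the original script
    }
}
$report | Format-Table -AutoSize
$report | Export-Csv -Path .\Get-vORG-VM-Detail.csv -NoTypeInformation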

Get-vCD-VM-Detail
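And a similarly rough sketch of the vDC version, walking each Org vDC's vApps and their VMs (again an approximation of the approach rather than the script itself):

# Report on every VM in every vApp across all Org vDCs
$report = foreach ($vdc in Get-OrgVdc) {
    foreach ($vm in ($vdc | Get-CIVApp | Get-CIVM)) {
        [PSCustomObject]@{
            OrgVdc   = $vdc.Name
            vApp     = $vm.VApp.Name
            VM       = $vm.Name
            vCPU     = $vm.CpuCount
            MemoryGB = [math]::Round($vm.MemoryMB / 1024, 2)
        }
    }
}
$report | Format-Table -AutoSize
$report | Export-Csv -Path .\Get-vCD-VM-Detail.csv -NoTypeInformation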

Example Output Below:

I would have liked to format the list a little better, but I was running into a double Format-Table issue in the array, so for the moment it's a fairly messy list, though nonetheless helpful. The next step is to add an email function to get the CSV info delivered for further use.