
CloudPhysics: Enhanced Storage Analytics Cards [Part 2] – Snapshots Gone Wild 2

Following up from Part 1, which focused on the Datastore Contention v2 Card, I’ll shift focus to what can sometimes be a Virtualization Admin’s worst nightmare…Snapshots. Snapshots are not backups (repeat x100)…but backup systems such as Veeam utilize VMware Snapshots to do their thing…and sometimes the process of Snapshot consolidation can fail, leaving unplanned Snapshots in play. Outside of that, the most dangerous Snapshot is the one that is manually created and forgotten.

There are plenty of ways to report on snapshots, and most monitoring tools can be used to check, warn and alert on their presence. I still use a combination of reporting methods to ensure that the worst case scenario of a filled up datastore…or worse, a corrupt snapshot…does not occur.

Before looking at how CloudPhysics does its thing: traditionally I would have (and still do) use a simple PowerCLI command to search all VMs and report on the age and size of any snapshots in a vCenter.
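Something along these lines does the job (a minimal sketch rather than the exact one-liner, with the vCenter name as a placeholder):

    # Connect first: Connect-VIServer -Server vcenter.example.com
    # Report the age and size of every snapshot across all VMs
    Get-VM | Get-Snapshot |
        Select-Object VM, Name, Created,
            @{N='AgeDays';E={((Get-Date) - $_.Created).Days}},
            @{N='SizeGB';E={[math]::Round($_.SizeGB, 2)}} |
        Sort-Object SizeGB -Descending | Format-Table -AutoSize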

Pretty basic, but does the job.

CloudPhysics first introduced a SnapShots Gone Wild Card back during VMWorld 2012 and, while it’s gone through a few changes and improvements since then, its core look and feel has remained the same:

CloudPhysics have recently released V2 of the SnapShots Gone Wild Card and, much the same as the Datastore Contention V2 Card, there are quite a few enhancements to go along with a more dynamic look and feel.

The one thing that I keep commenting on in regards to CloudPhysics is that they present the data that matters so it’s right in your face. The new Card defaults to those SnapShots that need attention relative to SnapShot size, age and growth.

The graphics at the top break down Datastore Space Usage across the datastores with SnapShots and also provide a rough Savings Opportunity, which is handy for Enterprises looking to put a $$ value on having SnapShots sticking around consuming otherwise useful datastore space. You can modify the price per GB to suit your own storage costs.
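You can get a rough version of the same number yourself in PowerCLI; the $0.30/GB figure below is purely illustrative, so substitute your own per-GB storage cost:

    # Rough 'savings opportunity': total snapshot GB x assumed $/GB
    $costPerGB = 0.30   # illustrative price per GB, not a real quote
    $snapGB = (Get-VM | Get-Snapshot | Measure-Object -Property SizeGB -Sum).Sum
    '{0:N1} GB in snapshots = roughly ${1:N2} of reclaimable storage' -f $snapGB, ($snapGB * $costPerGB)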

From a Service Provider’s point of view the value of this Card is in the quick visual representation of SnapShots in the CloudPhysics monitored environment…and while not proactive in nature (for that I would strongly suggest monitoring and reporting on the size of the SnapShot vmdk files using Nagios or OpsView, or using the vCenter SnapShot Alarm + Trigger mechanism) it’s brilliant as a way to keep visual tabs on what’s going on under the surface of your VMs.
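As a stop-gap for that proactive side, a scheduled PowerCLI check can do a decent job; the thresholds and mail settings below are placeholders to adjust for your own environment:

    # Warn when any snapshot breaches a size or age threshold
    $maxSizeGB  = 5     # illustrative threshold
    $maxAgeDays = 3     # illustrative threshold
    $offenders = Get-VM | Get-Snapshot | Where-Object {
        $_.SizeGB -gt $maxSizeGB -or ((Get-Date) - $_.Created).Days -gt $maxAgeDays
    }
    if ($offenders) {
        $body = $offenders | Select-Object VM, Name, Created, SizeGB | Out-String
        Send-MailMessage -To 'ops@example.com' -From 'vcenter@example.com' `
            -Subject "Snapshots over threshold: $(@($offenders).Count)" `
            -Body $body -SmtpServer 'smtp.example.com'
    }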

The one thing I would like to have added to this Card is some form of automated reporting and/or alerting. I know that the guys at CloudPhysics are working on this set of features…this is one Card that would certainly benefit from that!

vCloud Director 5.5 Upgrade: Storage Profiles and Missing Datastores

Sometimes I think there is change just for the sake of change…or to keep us on our toes 🙂 Either way, I just came across an interesting upgrade gotcha when going from vCloud Director 5.1.x to 5.5.x involving previously configured Datastore Storage Profiles, which map back in vCenter to User-Defined Storage Capabilities and Storage Profiles.

Most of the upgrade posts I’ve found out there don’t reference this change, which comes into play in the following scenarios:

vCloud 5.5 + vCenter 5.1 – Traditional Storage Profiles configured in vCenter (Web or Client)

vCloud 5.5 + vCenter 5.5 – Tags with Storage Policy configured on vCenter (Web Only)

Nothing is mentioned in the official Upgrade KB or the Install Best Practice KB.

If you have (or are upgrading to) vCenter 5.5 and are upgrading vCloud Director from 5.1 to 5.5, you will need to get up to speed with vCenter Tags (great post here from @gabvirtualworld). After the upgrade you will be confronted with the below:

Note that you are not clicking on Storage Profiles anymore, but Storage Policies. If you were to head up and check out what datastores you can see against your Provider vDCs, you will see a possible system alert and 0 datastores, as shown below:

So, let’s fix this and avoid any potential heart attack moments or angry forum posts to VMware…You will need to log into the vSphere Web Client and navigate your way to a view that shows your Datastores, click on Manage and go to Tags:

Click on the Assign icon and you will be presented with your legacy Storage Profiles. Note that you can create new Tags and assign different categories, but for the quickest outcome I would go legacy. Click on Assign:

You will now see the assigned Tag and Category appear in the datastore’s Tags tab.
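If you have a lot of datastores to work through, the Web Client gets tedious quickly; the PowerCLI tagging cmdlets can do the same assignment in bulk. The tag and datastore names below are illustrative only:

    # Assign the legacy-profile tag to a set of datastores in one pass
    $tag = Get-Tag -Name 'Gold-Storage'          # placeholder tag name
    Get-Datastore -Name 'GoldDS*' | ForEach-Object {
        New-TagAssignment -Tag $tag -Entity $_   # tags each matching datastore
    }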

Go back to the home menu of the vSphere Web Client and click on Storage Profiles (now called Storage Policies). Select one of them and click on Edit. You will see a window open with Name and Description. The second menu item is Rule-Set 1, which I found automatically populates with the Tag applied in the step above. I’m assuming there is some carry-over during the upgrade process, with the vCenter Profiles matching against the old User-Defined Storage Capabilities.

Once you save the changes, if you have existing VMs attached to the legacy Storage Profiles you will be asked if you want to reapply the new Storage Policy to those machines. I haven’t been able to test this extensively yet, so I’m not too sure how real the warning about significant time and system resources is.

Head back to vCloud Director and Refresh your Storage Policies under the vCenter menu option under vSphere Resources, and you should have your Datastores and Datastore Clusters back, populated with storage statistics.

Again, I’m not across the specific reason for the move towards Tags and Storage Policies (feel free to comment below), but I do know it will potentially add some development time to our backend automation to verify that the provisioning workflows we have in place are still compatible. I know this initially threw us off in our upgrade Labs.

VMWorld 2014: Session Voting

Voting is open for VMWorld 2014 and I have submitted three sessions for your consideration. The first session is a VMUG User Conference presentation I co-presented at the Sydney and Melbourne events in February, and I’ve decided to take it solo to VMWorld…the other two are joint sessions I will present with CloudPhysics as talk sponsors.

The “IT CAN’T BE DONE!” session is explained below and I’m hoping there is enough interest in the vCloud Suite to get the numbers together to present this talk again…it had excellent feedback at the VMUG Conferences and is a true community sharing session, not a set of marketing slides…

Title: SESSION 1229 – “IT CAN’T BE DONE!” vCloud Platform Upgrade
Abstract: ZettaGrid recently undertook what may have been Australia’s largest vCloud Platform upgrade, from 1.5.1 to 5.1. In this presentation I would like to share with you how we successfully planned and executed the upgrade of our 3000 workload production environment with zero downtime and zero customer impact.

As presented at the Melbourne and Sydney VMUG Conferences this year, I will take you through our considerations and planning process, and why we went against the prevailing best practice advice of a side-by-side migration to go down the in-place upgrade path.

Despite our best planning and lab modelling efforts, we allowed for, and in fact did run into, a few gotchas and roadblocks, which we also look forward to sharing with you.

Last but by no means least, I’ll discuss the lessons learned and advice for anyone else contemplating the path from 1.5 to 5.x and beyond, followed by a peek into what the future holds in vCloud vNext.

http://www.vmworld.com/voting.jspa

So, login using your VMWorld Username and Password (even if you’re not going this year, register and vote) and do a Filter Search for “Spiteri”…you will see the three sessions listed as below:

It would be an extremely humbling and exciting experience to have an opportunity to speak at VMWorld…one I know doesn’t come easy…so here is hoping! Thanks in advance for the vote!

For those that need visual direction 🙂 I’ve included below a How-To-Vote Video: