Monthly Archives: February 2015

Released: PernixData FVP 2.5

PernixData have GA’ed FVP 2.5 and it’s got a number of enhancements over the previous 2.0 release. For those running ESXi who haven’t heard about FVP, here is a quick summary:

PernixData virtualizes server-side flash and server RAM across all hypervisor nodes in a compute cluster and hooks the high-speed server-side resources into existing VM I/O paths to transparently reduce the IOPS burden on an existing storage system. Customers leverage PernixData’s FVP with their existing primary storage deployments and manage them within the context of their familiar hypervisor management tools.

In a nutshell it’s awesome and if Veeam hadn’t recently patented a certain phrase, I would associate that particular phrase with FVP.

New features in FVP 2.5:

  • Distributed Fault Tolerant Memory-Z, which compresses data stored in RAM.
  • Intelligent I/O profiling, which provides a way for administrators to temporarily suspend and resume acceleration on virtual machines without deleting the flash footprint for those VMs.
  • Role-based access control.
  • Network acceleration for NFS datastores.

For me the standout is memory compression, which opens the door to more efficient use of precious/expensive RAM resources as acceleration cache in FVP. Check out Frank Denneman’s blog here where he goes through the feature in detail. Some of the numbers associated with FVP RAM acceleration are just silly, so the ability to compress and be smart with the data held in memory makes it more attractive to add a little more RAM to those host build sheets.

The I/O profiling is manual at the moment, but is a good start for those who know which VM workloads could potentially inject “dirty” blocks into the FVP cache during backup or virus scanning operations…I’d like to see this become even more intelligent and self-aware of data that’s dirty…this would be a big help for Service Providers.

In terms of bug fixes, there is a fix for those using Veeam to restore a VM to a different datastore on the same host, which had resulted in the VM remaining in a stalled state.

Thanks to PernixData, who made me a PernixPro last week…I love the technology and it’s certainly done wonders within our ESXi platforms to help solve issues with higher-than-wanted storage latency. If you have spare SSD or want to check out the RAM cache features of FVP, head to the site and download a full 30 day trial…easy to install and low impact in terms of configuring FVP against datastores or VMs.

VMUG User Conference 2015 – Melbourne Community Session

The Australian legs of the VMUG User Conferences are happening next week in Sydney and Melbourne…This year the event is even bigger than last year’s, and if you are into all things VMware and can get to Sydney or Melbourne next week, do yourself a favour and register. The agenda is full of VMworld-level goodness and the keynote speakers are some of the best going round the VMware Community.

Check out the Agenda here and if you are going, download the VMUG Mobile App and plan out your sessions for the day. If you are coming to the Melbourne leg, I’ll be there presenting a Community Session on NSX and my experiences working with NSX at ZettaGrid.

Get down and say hello, and take advantage of this awesome free event that provides an excellent opportunity to network and learn from some of the best local and international guys in the community. Melbourne is the Virtualization Capital of Australia and spiritual home of the aussievMafia!

Released: vCloud Director 5.6.4 SP – Upgrade from 5.5.2.x and NSX 6.1.2 Support

In October 2014 VMware released vCD SP 5.6.3, which was the first version of vCloud Director forked for Service Providers. As I mentioned in this post, there was a catch with the 5.6.3 release in that if you were running vCloud 5.5.2.x, an in-place upgrade was not possible. I was made aware that a 5.6.4 SP build was in the works, and I got to test an advanced build in early January which let me upgrade 5.5.2.x lab instances to 5.6.4, meaning the new functionality available in the 5.6.x SP build was accessible for testing.

As promised in that post, the official 5.6.4 SP build was released last Friday (vmware-vcloud-director-5.6.4-2496071.bin) and can be downloaded from here for those with the correct MyVMware entitlements. The release notes show that the only new additions in this release are the ability to upgrade from 5.5.2.x as well as some additional guest OS support.

Interoperability and NSX-v:

One big new feature of this build is its official interoperability support for NSX-v 6.1.2 (PDF here). No mention of vSphere 6.0 support just yet…but that is something I am chasing up internally as well. There are also a bunch of bug fixes…some that carry over from 5.5.2.x and were contained in the initial 5.6.3 SP release.

An interesting side note…I saw this entry in the PDF:

vCloud Director 5.6 for service providers will be generally supported through 10/6/2016.

I will chase up exactly what that means and report back…in the meantime, those that have access to the SP builds…download the binaries and get testing ready for production upgrades.

UPDATE: I’ve been able to clarify what the above supportability statement in the PDF refers to. Each major or minor release is supported for 2 years, which means the 5.6.x versions are supported until this date. Once 6.x (or the next major/minor version) is released, the support clock moves forward for that version. A pretty good indication that vCD SP will not be going away any time in the near future.

NSX vCloud Retrofit: Using NSX 6.1.x with vCloud Director 5.5.2.x VSE Redeployment Issue

This blog series extends my NSX Bytes blog posts to include a more detailed look at how to deploy NSX 6.1.x into an existing vCloud Director environment. Initially we will be working with vCD 5.5.x, which is the non-SP fork of vCD, but as soon as an upgrade path for 5.5.2 -> 5.6.x is released I’ll be including the NSX-related improvements in that release.

In the latest round of VMware KB updates posted this week I came across an interesting KB relating to vCD 5.5.2.x and NSX 6.1.x, and an apparent error when redeploying an existing vShield Edge (which runs at version 5.3).

VMware KB: Using NSX 6.1.x with vCloud Director 5.5.2.x.

Details: When you redeploy an edge gateway from vCloud Director 5.5.2.x after you upgrade from vCloud Networking and Security 5.5.x to VMware NSX 6.1.x, the edge gateway is upgraded to version 6.x, which is not supported by vCloud Director.

Solution: Configure vCloud Director to use Edge gateway version 5.5 with NSX 6.1.x on a Microsoft SQL Server database.
Add the following statement to the database:
INSERT INTO config (cat, name, value, sortorder) VALUES ('vcloud', 'networking.edge_version_for_vsm6.1', '5.5', 0);
Restart the vCloud Director cell.
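Before adding the row it’s worth checking whether an equivalent entry already exists in your cell database. A quick sketch, assuming the same config table and column names used in the KB’s INSERT statement above (verify against your own vCD database before running anything):

```sql
-- Look for an existing edge-version override row
-- (table/column names taken from the KB's INSERT statement).
SELECT cat, name, value, sortorder
FROM config
WHERE name = 'networking.edge_version_for_vsm6.1';
```

If this returns a row with value 5.5, the override is already in place and the INSERT is unnecessary.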

However, during my work deploying NSX 6.1.x into vCD 5.5.2 I couldn’t remember coming across this behaviour…in my VSE Validation Post I go through doing a test deployment of a 5.3 VSE once vCNS is upgraded to NSX Manager. My fellow vCD expert Mohammed Salem triggered this KB and posted about it here, as he saw this behaviour…

After upgrading NSX to 6.1.2, I had an interesting issue. When I was trying to redeploy an existing GW, I found out the GW was upgraded to v6.1.2 instead of 5.5.3. This caused me an issue because vCloud Director (at least v5.5.x) will not recognize GWs with a version higher than 5.5.3 (which is the latest version supported by vCNS).

I decided to give this a go in my labs…I have two VSEs deployed and managed by vCloud…EDGE-192 was brought in from the upgrade to NSX, and I created EDGE-208 with NSX Manager handling the deployment from vCD.

I went through the Re-Deploy option and watched the progress from the Web Client Networking & Security Edges menu.

After this had completed, the version remained at 5.5.3 and I was able to manage the VSE from the vCD GUI without issue. I did the same on the other VSE and that worked as well.

I ran the script from Mohammed’s post and found the expected entry even before applying the addition put forward in the KB.

So it seems this behaviour is not consistent across instances…as it stands, NSX and vCloud Director integration is still in its infancy and I expect there to be differing behaviours as more and more people deploy NSX…however this sort of inconsistency is unexpected. One possible answer is that my instances are all mature vCD installs that have been upgraded from 1.5 onwards…that said, it seems strange that I don’t have the issue.

Case in point: test this behaviour first before looking to apply the DB entry…it would be interesting to see if more people come across it…however there is no harm in adding the entry regardless…as Mohammed commented, this behaviour doesn’t seem to exist in vCD SP 5.6.3.
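If you do add the entry and later confirm your cells redeploy VSEs at 5.5.3 without it, backing it out is straightforward. A sketch, again assuming the table layout from the KB’s INSERT statement (run against a backed-up database and restart the cell afterwards, as with the original change):

```sql
-- Remove the edge-version override added per the KB,
-- matching both the name and category columns to be safe.
DELETE FROM config
WHERE name = 'networking.edge_version_for_vsm6.1'
  AND cat = 'vcloud';
```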


vExpert 2015: Passion and Community

I’m honoured to be recognized as a VMware vExpert for 2015…this is my 4th year as a vExpert and without doubt the passion that drives this community remains as impressive as ever. There are now over 1000 vExperts worldwide, and while I have questioned the swelling of the vExpert numbers over the past couple of years, I believe the community is as strong as ever and the nomination/vetting process undertaken by the team at VMware ensures all those that get the badge…earn it. There are tens of thousands of VMware IT professionals worldwide…to be 1 of 1000 is something special!

About 10 months ago I renamed my blog to Virtualization is Life! and reflected on how my career path had shifted from traditional hosting and moved more towards virtualization…a direction driven out of what I was able to achieve over the past couple of years, which I attribute in large part to becoming a vExpert in 2012.

Over the last 12 months I’ve been able to increase the frequency of posts on this site and I was lucky enough to present at the Melbourne and Sydney VMUG User Conferences as well as a TechTalk community session at VMworld 2014. I also continue to champion VMware products through my role as Lead Architect at ZettaGrid…all while staying engaged and entertained on Twitter, where the vExpert community is strong.

I wanted to point out a blog post and shout out to Dan McGee, who I met at a partner dinner at VMworld last year. We happened to sit across from one another during the dinner and engaged in some general chit chat…I was humbled to hear that Dan knew of my blog and was a keen follower on Twitter…once Dan told me his Twitter handle I recognised the work he had been doing for his local VMUG. As he mentioned in his post, he was the guy on stage during the vExpert Gameshow where he got to sit down next to VMware legends…this community lets us engage with industry leaders, and there was no better example of that than what Dan was able to do that afternoon at VMworld.

Finally I call on all vExperts to be passionate about virtualization…engage with work and industry peers and always look to serve the community…we collectively do some pretty amazing things with pretty amazing technology…we are privileged…and we should feel privileged to be in a position to share, teach and learn with others.

vSphere 6.0 Launch: What’s in it for Service Providers #vmw28days

Today vSphere 6.0 was officially announced and will be GA in about 6-8 weeks…I’ve had limited time myself to tinker with the BETA in great depth, however I have been keeping a close eye on some of the key features in the 6.0 release. I’ve gone through and pulled out the top new features/enhancements as I think they relate to vCloud Air Network ecosystem partners, and how SPs can look at enhancing an already strong IaaS platform with vSphere 6.x.

Enhanced Fault Tolerance:

This has always been cool, but impractical given the vCPU limitations placed on previous iterations…however with FT in v6.0 there is now support for VMs with up to 4 vCPUs:

  • Enhanced virtual disk format support for thin and thick disks
  • Ability to hot configure FT
  • Backup support with snapshots
  • Uses copies of VMDKs for added storage redundancy

There are a couple of benefits for SPs with enhanced FT…first and foremost it offers a way to protect vCenter and other critical management appliances like an NSX Manager Appliance or SQL Server without the need to rely on MS Clustering. There is also a case to look to productize VM FT and offer it as a check box feature for clients…while not completely straightforward (thinking about vCD/CMP Awareness) there is a serious value add to be exploited here.

Virtual Volumes (VVOLS) and Enhanced Storage APIs:

Storage is the number one pain point for any SP, with scalability, reliability and VM latency all contributing factors. VVOLs are a logical extension of virtualization into the storage layer and offer policy-based management of storage on a per-VM basis. In theory this will eliminate LUN management, as every VM is treated as its own object with attached policy-based management that leverages enhanced VASA APIs to map storage to policies/capabilities. All major storage vendors are looking at offering VVOL support…for SPs it should be top of the list when looking at any new storage platform from today onwards.

Enhanced vMotion and Hybrid Cloud Support:

vMotion has been given a facelift in v6.0 and can now do Cross vSwitch, Cross vCenter, Long Distance and vMotion across L3 boundaries. This opens the door for SPs to offer greater flexibility for on-premises to cloud migrations, cloud to cloud migrations and SP intra-zone movement of VMs…all without any downtime! This is truly amazing tech that we probably take for granted these days, but make no mistake, there is something very special about being able to move a VM from one location to another on the fly.

vCenter also now has a more direct path through to vCloud Air and vCloud Air Network Partners (still to be 100% confirmed for vCloud SP 5.6.x) by way of the Hybrid Cloud Plugin. I’m still to confirm if this will allow for vMotion between on-premises and hosted, but the future is looking promising for hybrid deployments when you add the features of NSX Gateway Services into the mix.

vCenter Scalability Enhancements:

  • 64 Hosts per cluster
  • 8000 VMs per cluster
  • Up to 1000 VMs per Host
  • 10,000 VMs per vCenter

I’m not one to push configuration maximums, but extending vCenter’s capability to manage more hosts per cluster (up from 32) and more VMs per host and cluster will give Service Providers greater opportunity to drive higher host/VM density and work on larger pools of compute to abstract and offer to tenants.

The vCenter Appliance is now pound for pound with the Windows version, which means SPs can now seriously consider replacing/upgrading to the appliance while still having the ability to scale out VC roles and even enable Linked Mode…if required.

There are other vCenter enhancements that lay the foundation for even more scale out functionality with the addition of Platform Services and more complex deployment scenarios.

vCenter Content Library:

The Content Library is a repository for VM templates, ISOs and vApps. You can store content in one location and replicate it out to other vCenters. This is handy for Service Providers that run multiple zones with separate vCenters, and allows all content to be synced between different hosting locations. It also has the added benefit of keeping vCloud Director Catalogs up to date and consistent across the board.


Many people have said that VMware’s time is numbered and that the hypervisor has become irrelevant…those that believe either of those points of view are, in my opinion, greatly mistaken and need to pay close attention to the list of feature enhancements announced today.

There is no doubt in my mind that VMware’s vSphere and ESXi will continue to hold a significant advantage when looking at competing hypervisor platforms…this release proves that VMware are serious about continuing to power virtual platforms of all sizes as people transition even further towards the Hybrid Cloud, while maintaining operational efficiency and stability through the most mature and proven hypervisor on the market.

A number of vExperts were featured on the VMware Blog Page and there are a number of great articles going through the features in depth.

FT Ref: 

Feature List Ref:

Tintri VVOLs:

vMotions Enhancements:

vCenter Roles and Design: