Monthly Archives: July 2016

Work Life Balance: My Impossible Reality

I’ve been wanting to write about this topic for a while but until now haven’t been able to articulate the message I wanted to get across. This post is about work life balance and how critical it is to maintain. This is about not letting yourself become consumed by work and career. This is about realizing what’s important in life…what really matters.

Last year I was driving my family to see the Christmas lights a local street puts on every year. While stopped at a set of traffic lights I remember my brain ticking over trying to resolve an issue at work…I can’t remember exactly what it was but it was one of those times where your brain is on a loop and you can’t switch it off. I remember looking down at my phone to check something and then drove off. The only problem was that the light was still red and I found myself halfway through the intersection with traffic still cutting across.

My wife yelled and only then did I realise what I was doing…to be honest I have no idea why I took off with the light still red…I just did! Luckily the other cars had noticed my mistake and stopped before anything serious happened. This wasn’t inattention…this was total absorption. Total absorption of mind and body in whatever problem it was and a total disconnect from the task at hand. Whatever it was I was trying to work out while waiting at those lights, it had resulted in me putting my family at risk.

People who know me know that I find it almost impossible to switch off. If I am not at work I am thinking about work, or thinking about checking my Twitter stream…seeing what’s happening on Slack or trying to work out the next blog post. I have a serious and very real case of FOMO. I realise that this addiction…or for want of a better word, dedication…to a career that doubles as my hobby (which doesn’t help) isn’t healthy.

The inability to switch off is a dangerous one because I find that my brain will always be ticking…consumed by whatever issue I am working on…whatever product or tech I am researching. This means that other parts of my life get relegated to the background task section of my brain…almost irrelevant and not worth wasting precious capacity on!

As that near miss made me realise…there must be a time to switch off…a time to disconnect and move the background tasks to the foreground. Those background tasks are in fact the most important…family, health and wellbeing. I’m still not where I would like to be in regards to balancing this out, but I’m trying to be better. Better at spending time with my wife and kids when I come home…better at remaining focused on them instead of relegating them to the background…better at understanding that work and career are important…but not so important that all else should suffer.

I realise the irony of getting this post out on my MBP at 30,000 feet while on a flight travelling away from my family for work…but hey, at least I now recognise that 🙂

VSAN Upgrading from 6.1 to 6.2 Hybrid to All Flash – Part 3

When VSAN 6.2 was released earlier this year it came with new and enhanced features. With the price of SSDs continuing to fall and an expanding HCL, All Flash instances are becoming the norm, and for those who have already deployed VSAN in a Hybrid configuration the temptation to upgrade to All Flash is certainly there. Duncan Epping has previously blogged an overview of migrating from Hybrid to All Flash, so I wanted to expand on that post and go through the process in a little more detail. This is the final part of a three part blog series with the process overview outlined below.


In part one I covered upgrading existing hosts, expanding an existing VSAN cluster and upgrading the license and disk format. In part two I covered the actual Hybrid to All Flash migration steps, and in this last part I will finish off by going through the process of creating a new VSAN Policy, migrating existing VMs to the new policy and then enabling deduplication and compression.

Before continuing it’s worth pointing out that after the Hybrid to All Flash migration you are going to be left with an unbalanced VSAN cluster, as the full data evacuation off the last Hybrid host will leave that host without objects. Any new objects created will work to re-balance the cluster, however if you want to initiate a proactive re-balance you can hit the re-balance button from the Health status window. For more on this process check out this post from Cormac Hogan.

Create new Policy and Migrate VMs:

To take advantage of the new erasure coding now in the VSAN 6.2 All Flash cluster we will need to create a new storage policy and apply that policy to any existing VMs. In my case all VMs were on the Default VSAN Policy with FTT=1. The example below shows the creation of a new Storage Policy that uses RAID5 erasure coding with FTT=1. If you remember from previous posts, the reason for expanding the cluster to four hosts was to cater for this specific policy.

To create the new Storage Policy head to VM Storage Policies from the Home page of the Web Client and click on Create New VM Storage Policy. Give the policy a name, click Next and construct Rule-Set 1, which is based on VSAN. Select the Failure tolerance method and choose RAID-5/6 (Erasure Coding) – Capacity.

In this case with FTT=1 chosen RAID5 will be used. Clicking on Next should show that the existing VSAN datastore is compatible with the policy. With that done we can migrate existing VMs off the Default VSAN Policy onto the newly created one.

To get a list of which VMs are going to be migrated, have a look at the PowerCLI commands below to get the VMs on the VSAN Datastore and then get their Storage Policy. The last command below gets a list of existing policies.
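Something along these lines will do it, assuming a datastore named vsanDatastore and the SPBM cmdlets that ship with PowerCLI 6.x…adjust the names to suit your environment.

```powershell
# Hypothetical datastore name - swap in your own VSAN datastore
$ds = Get-Datastore -Name "vsanDatastore"

# VMs that live on the VSAN datastore
Get-VM -Datastore $ds | Select Name

# Current storage policy assignment for those VMs
Get-SpbmEntityConfiguration -VM (Get-VM -Datastore $ds)

# List all existing storage policies
Get-SpbmStoragePolicy | Select Name, Description
```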

To apply the new Erasure Coding Storage Policy it’s handy to get the full name of the policy.
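Something like the following does the trick…the policy name here is a made-up example, so substitute whatever you called yours.

```powershell
# Grab the new policy object by name (hypothetical policy name)
$policy = Get-SpbmStoragePolicy -Name "RAID5-FTT1"
$policy | Select Name, Id
```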

To migrate the VMs to the new policy you can either do it one by one via the Web Client or do it en masse via the following PowerCLI script.
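A rough version of that script is below, again with hypothetical policy and datastore names…it re-applies the new policy to each VM home object and its virtual disks.

```powershell
# Hypothetical names - apply the new policy to every VM on the VSAN datastore
$policy = Get-SpbmStoragePolicy -Name "RAID5-FTT1"
$vms    = Get-VM -Datastore (Get-Datastore -Name "vsanDatastore")

foreach ($vm in $vms) {
    # Apply the policy to the VM home object
    Get-SpbmEntityConfiguration -VM $vm |
        Set-SpbmEntityConfiguration -StoragePolicy $policy

    # ...and to each of the VM's virtual disks
    Get-SpbmEntityConfiguration -HardDisk (Get-HardDisk -VM $vm) |
        Set-SpbmEntityConfiguration -StoragePolicy $policy
}
```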

Once run, the VMs will have the new policy applied and VSAN will work in the background to bring those VM objects into compliance. You can see the status of Virtual Disk Placement in the Virtual SAN section under the Monitor tab of the cluster.

Enable DeDupe and Compression:

Before I go into the details…for a brilliant overview and explanation of DeDupe and Compression with VSAN 6.2 head to this post from Cormac Hogan. To enable this feature we need to double check that the licensing is correct as detailed in the first post, and also ensure that all previous steps relating to the Hybrid to All Flash migration have taken place. To turn on this feature head to the General window under the Virtual SAN Settings menu on the cluster Manage tab and click on the Edit button next to Virtual SAN is Turned ON.

Choose Enabled in the drop down and take note of the Allow Reduced Redundancy checkbox, making sure you understand what it means by reading the info box as shown above. Once you click to confirm, the process to enable Deduplication and Compression will begin…this will go through and reconfigure all Disk Groups, similar to the process used to upgrade from Hybrid to All Flash. Again this will take some time depending on the number of hosts, number of disk groups and type of disks in the cluster.

Below I have shown the before and after of the Capacity window under the Virtual SAN tab in the Monitor section of the Cluster view. You can see that before enabling, there is a message saying that Deduplication and Compression is disabled.

And after enabling Deduplication and Compression you start to get some statistics for both in the window, covering savings and ratios. Even in my small lab environment I started to see some benefits.

With that complete we have finished this series and have gone through all the steps in order to get to an All Flash VSAN Cluster with the newest features enabled.

References:

VSAN 6.2 Part 1 – Deduplication and Compression

VSAN 6.2 Part 2 – RAID-5 and RAID-6 configurations

 

VSAN Upgrading From 6.1 To 6.2 Hybrid To All Flash – Part 2

When VSAN 6.2 was released earlier this year it came with new and enhanced features. With the price of SSDs continuing to fall and an expanding HCL, All Flash instances are becoming the norm, and for those who have already deployed VSAN in a Hybrid configuration the temptation to upgrade to All Flash is certainly there. Duncan Epping has previously blogged an overview of migrating from Hybrid to All Flash, so I wanted to expand on that post and go through the process in a little more detail. This is part two of what is now a three part blog series with the process overview outlined below.


In part one I covered upgrading existing hosts, expanding an existing VSAN cluster and upgrading the license and disk format. In this part I am going to go through the simple task of extending the cluster by adding new All Flash Disk Groups on the host I added in part one, and then go through the actual Hybrid to All Flash migration steps.

The configuration of the VSAN Cluster after the upgrade will be:

  • Four Host Cluster
  • vCenter 6.0.0 Update 2
  • ESXi 6.0.0 Update 2
  • One Disk Group Per Host
  • 1x 480GB SSD Cache and 2x 1000GB SSD Capacity
  • VSAN Erasure Coding RAID-5 FTT=1
  • Deduplication and Compression On

As mentioned in part one, I added a new host to the cluster in order to give me some breathing room while doing the Hybrid to All Flash upgrade, as we need to perform rolling maintenance on each host in the cluster in order to get to the All Flash configuration. Each host will be entered into maintenance mode and all data evacuated. Before the process is started on the initial three hosts, let’s go ahead and create a new All Flash Disk Group on the new host.

To create the new Disk Group head to Disk Management under the Virtual SAN section of the Manage tab whilst the Cluster is selected, and click on the Create New Disk Group button. As you can see below I have the option of selecting any of the flash devices claimed as being OK for VSAN.

After the disk selection is made and the disk group created, you can see below that there is now a mixed mode scenario happening where the All Flash host is participating in the VSAN Cluster and contributing to the capacity.
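As an aside, the same disk group can be created from PowerCLI if you prefer…a rough sketch with made-up host and device names is below (substitute the canonical names of your own cache and capacity SSDs).

```powershell
# Hypothetical host and device names - one cache SSD plus two capacity SSDs
$vmhost = Get-VMHost -Name "esxi04.lab.local"

New-VsanDiskGroup -VMHost $vmhost `
    -SsdCanonicalName "naa.500000000000000a" `
    -DataDiskCanonicalName "naa.500000000000000b", "naa.500000000000000c"
```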

Upgrade Disk Group from Hybrid to All Flash:

Ok, now that there is some extra headroom, the process to migrate the existing Hybrid hosts over to All Flash can begin. Essentially the process involves placing the hosts in maintenance mode with a full data migration, deleting any existing Hybrid disk groups, removing the spinning disks, replacing them with flash and then finally creating new All Flash disk groups.

If you are not already aware of how maintenance mode works with VSAN then it’s worth reading over this VMware Blog Post to ensure you understand why using the VI Client is a big no no. In this case I wanted to do a full data migration, which moves all VSAN components onto the remaining hosts active in the cluster.
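For reference, the equivalent from PowerCLI looks something like the below (hypothetical host name)…the -VsanDataMigrationMode parameter is what drives the full evacuation.

```powershell
# Enter maintenance mode and evacuate all VSAN data from the host
Get-VMHost -Name "esxi01.lab.local" |
    Set-VMHost -State Maintenance -VsanDataMigrationMode Full
```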

You can track this process by looking at the Resyncing Components section of the Virtual SAN Monitor Tab to see which objects are being copied to other hosts.

As you can see the new host is actively participating in the Hybrid mixed mode cluster now and taking objects.

Once the copy evacuation has completed we can delete the existing disk groups on the host by highlighting the disk group and clicking on the Remove Disk Group button. A warning appears telling us that data will be deleted and also lets us know how much data is currently on the disks. The previous step has ensured that there should be no data on the disk group, and it should be safe to (still) select Full data migration and remove the disk group.

Do this for all existing Hybrid disk groups, and once all disk groups have been deleted from the host you are ready to remove the existing spinning disks and replace them with flash disks. The only thing to ensure before attempting to claim the new SSDs is that they don’t have any previous partitions on them…if they do, you can use the ESXi Embedded Host Client to remove any existing partitions.
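If you’d rather clear the partitions from the ESXi shell, partedUtil does the job…a rough example with a made-up device name is below (list the partition table first, then delete whichever partition numbers it reports).

```
# Show the existing partition table for the device
partedUtil getptbl /vmfs/devices/disks/naa.500000000000000b

# Delete partition 1 (repeat for any other partition numbers listed)
partedUtil delete /vmfs/devices/disks/naa.500000000000000b 1
```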

Warning: Again it’s worth mentioning that any full data migration is going to take a fair amount of time depending on the consumed storage of your disk groups and the types of disks being used.

Repeat this process on all remaining hosts in the cluster with Hybrid disk groups until you have a full All Flash cluster as shown above. From here we are now able to take advantage of erasure coding, DeDuplication and compression…I will finish that off in part three of this series.

 

VSAN Upgrading from 6.1 to 6.2 Hybrid to All Flash – Part 1

When VSAN 6.2 was released earlier this year it came with new and enhanced features, and depending on what version you were running you might not have been able to take advantage of them all right away. Across all versions Software Checksum was added, with Advanced and Enterprise versions getting VSAN’s implementation of Erasure Coding (RAID 5/6), Deduplication and Compression available for the All Flash version, and QoS IOPS Limiting available in Enterprise only.

With the price of SSDs continuing to fall and an expanding HCL, All Flash instances are becoming the norm, and for those who have already deployed VSAN in a Hybrid configuration the temptation to upgrade to All Flash is certainly there. Duncan Epping has previously blogged an overview of migrating from Hybrid to All Flash, so I wanted to expand on that post and go through the process in a little more detail. This is a three part blog series with a lot of screenshots to complement the process, which is outlined below.


Warning: Before I begin it’s worth mentioning that this is not a short process, so make sure you plan it out relative to the existing size of your VSAN cluster. In talking with other people who have gone through the disk format upgrade, the average rate seems to be about 10TB of consumed data per day depending on the type of disks being used. I’ll reference some posts at the end that relate to the disk upgrade process as it has been troublesome for some, however it’s also worth pointing out that the upgrade process is non disruptive for running workloads.

Existing Configuration:

  • Three Host Cluster
  • vCenter 6.0.0 Update 2
  • ESXi 6.0.0 Update 1
  • Two Disk Groups Per Host
  • 1x 200GB SSD and 2x 600GB HDD
  • VSAN Default Policy FTT=1

Upgrade Existing Hosts to 6.0 Update 2:

At the time of writing, ESXi 6.0.0 Update 2 is the latest release and the build that contains the VSAN 6.2 codebase. From the official VMware Upgrade matrix it seems you can’t upgrade from VSAN versions older than 6.1, so if you are on 5.x or 6.0 releases you will need to take note of this VMware KB to get to ESXi 6.0.0 Update 2. For a great resource on the latest builds as well as upgrade paths, head here:

https://esxi-patches.v-front.de/ESXi-6.0.0.html

For a quick upgrade directly from the VMware online host update repository you can do the following on each host in the cluster after putting them into VSAN Maintenance Mode. Note that there are also some advanced settings that are recommended as part of the VSAN Health Checks in 6.2.
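The online depot method looks something like the below, run from the ESXi shell with the host already in maintenance mode. The image profile name is my assumption for 6.0 Update 2, so confirm the current profile against the depot before running it.

```
# Allow the host to reach the VMware online depot
esxcli network firewall ruleset set -e true -r httpClient

# Update to the 6.0 Update 2 image profile (profile name assumed - verify first)
esxcli software profile update -p ESXi-6.0.0-20160302001-standard \
    -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml

# Close the firewall rule off again and reboot
esxcli network firewall ruleset set -e false -r httpClient
reboot
```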

After rolling through each host in the cluster make sure that you have an updated copy of the VSAN HCL and run a health check to see where you stand. You should see a warning about the disks needing an upgrade and if any hosts didn’t have the above advanced settings applied you will have a warning about that as well.

Expanding VSAN Cluster:

As part of this upgrade I am also adding an additional host to the existing three to expand to a four host cluster. I am doing this for a couple of reasons: notwithstanding the accepted design position of four hosts being better than three from a data availability point of view, you also need a minimum of four hosts if you want to enable RAID5 erasure coding (six is required as a minimum for RAID6). The addition of the fourth host also allowed me to roll through the Hybrid to All Flash upgrade with a lot more headroom.

Before adding the new host to the existing cluster you need to ensure that the build is consistent with the existing hosts in terms of versioning and, more importantly, networking. Ensure that you have configured a VMkernel interface for VSAN traffic and marked it as such through the Web Client. If you don’t do this prior to putting the host into the existing cluster, I found that the management VMkernel interface gets enabled by default for VSAN.
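If you are building the host out with PowerCLI, tagging a new VMkernel interface for VSAN traffic looks something like this (hypothetical switch, portgroup and addressing).

```powershell
# Create a VMkernel port on a standard switch and enable it for VSAN traffic
$vmhost = Get-VMHost -Name "esxi04.lab.local"

New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch "vSwitch1" -PortGroup "VSAN" `
    -IP "192.168.50.14" -SubnetMask "255.255.255.0" -VsanTrafficEnabled:$true
```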

If you notice below this cluster is also NSX enabled, hence the events relating to Virtual NICs being added. Most importantly the host can see other hosts in the cluster and is enabled for HA.

Once in the cluster the host can be used for VM placement with data served from the existing hosts with configured disk groups over the VSAN network.

Upgrade License:

At this point I upgraded the licenses to enable the new features in VSAN 6.2. As a refresher on VSAN licensing, there are three editions, with the biggest change from previous versions being that to get the Deduplication and Compression, Erasure Coding and QoS features you need to be running All Flash and have an Enterprise license key.

To upgrade the license you need to head to Licensing under the Configuration section of the Manage Tab whilst the Cluster is selected. Apply the new license and you should see the following.

Upgrade Disk Format:

If you have read up on upgrading VSAN you will know that there is a disk format upgrade required to get the benefits of the newer versions. Once you have upgraded both vCenter and hosts to 6.0.0 Update 2, if you check the VSAN Health under the Monitor tab of the Cluster you should see a failure talking about v2 disks not working with v3 disks, as shown below.

You can click on the Upgrade On-Disk Format button here to kick off the process. This can also be triggered from the Disk Management section under the Virtual SAN menu in the Manage cluster section of the Web Client. Once triggered you will see some events fire and an update in progress message near the version number.

Borrowing from one of Cormac Hogan’s posts on VSAN 6.2, the following explains what is happening during the disk format upgrade. Also described in the blog post is a way of using the Ruby vSphere Console (RVC) to monitor the progress in more detail.

There are a few sub-steps involved in the on-disk format upgrade. First, there is the realignment of all objects to a 1MB address space. Next, all vsanSparse objects (typically used by snapshots) are aligned to a 4KB boundary. This will bring all objects to version 2.5 (an interim version) and readies them for the on-disk format upgrade to V3. Finally, there is the evacuation of components from a disk group, then the deletion of said disk group and finally the recreation of the disk group as a V3. This process is then repeated for each disk group in the cluster, until finally all disks are at V3.

As explained above, the upgrade can take a significant amount of time depending on the number of disk groups, the data consumed on your VSAN datastore and the type of disks being used (SAS based vs SATA/NL-SAS). Once complete you should have a green tick and the On-Disk format version reporting 3.0.

With that done we can move ahead to the Hybrid to All Flash conversion. For details on that, look out for Part 2 of this series coming soon.

References:

Hybrid vs All-flash VSAN, are we really getting close?

VSAN 6.2 Part 2 – RAID-5 and RAID-6 configurations

VSAN 6.2 Part 12 – VSAN 6.1 to 6.2 Upgrade Steps

Azure Stack – Microsoft’s White Elephant?

Microsoft’s Worldwide Partner Conference is currently on again in Toronto, and even though my career has diverged from working on the Microsoft stack (no pun intended) over the past four or five years, I still attend the local Microsoft SPLA monthly meetings where possible and keep a keen eye on what Microsoft is doing in the cloud and hosting spaces.

The concept of Azure Stack has been around for a while now and it entered Technical Preview early this year. Azure Stack was/is touted as an easily deployable end to end solution that gives enterprises Azure-like flexibility on premises covering IaaS, PaaS and Containers. The premise of the solution is solid and Microsoft obviously see an opportunity to cash in on the private and hybrid cloud market which, at the moment, hasn’t been locked down by any one vendor or solution. The end goal, though, is for Microsoft to have workloads that are easily transportable into the Azure Cloud.

Azure Stack is Microsoft’s emerging solution for enabling organizations to deploy private Azure cloud environments on-premises. During his Day 2 keynote presentation at the Worldwide Partner Conference (WPC) in Toronto, Scott Guthrie, head of Microsoft’s Cloud and Enterprise Group, touted Azure Stack as a key differentiator for Microsoft compared to other cloud providers.

The news overnight at WPC is that, apart from the delay in its release (which wasn’t unexpected given the delays in Windows Server 2016), Microsoft have now said that Azure Stack will only be available via pre-validated hardware partners. This means that customers can’t deploy the solution themselves and the stack loses flexibility.

Neil said the move is in response to feedback from customers who have said they don’t want to deal with the complexities and downtime of doing the deployments themselves. To that end, Microsoft is making Azure Stack available only through pre-validated hardware partners, instead of releasing it as a solution that customers can deploy, manage and customize.

This is an interesting and, in my opinion, risky move by Microsoft. There is precedent to suggest that going down this path leads to lesser market penetration and could turn Azure Stack into the white elephant that I suggested in a tweet and in the title of this post. You only have to look at how much of a failure VMware’s EVO:Rail product was to understand the risks of tying a platform to vendor specific hardware and support. Effectively they are now creating a Converged Infrastructure stack with Azure bolted on, whereas before there was absolute freedom in enterprises being able to deploy Azure Stack onto existing hardware, allowing them to realise existing investments and extend them to provide private cloud services.

As with EVO:Rail and other Validated Designs, I see three key areas where they suffer and impact customer adoption.

Validated Design Equals Cost:

If I take EVO:Rail as an example, there was a premium placed on obtaining the stack through the validated vendors, and this meant a huge markup over what could have been sourced independently once you took hardware, software and support costs into account. Potentially this will be the same for Azure Stack…vendors will add their percentage for the validated design, plus ongoing maintenance. As mentioned above, there is also now the fact that you must buy new hardware (compute, network, storage), meaning any existing hardware that can and should be used for private cloud is now effectively dead weight, and enterprises need to rethink existing investments long term.

Validated Design Equals Inherent Complexity:

When a vendor takes something in-house and doesn’t let smart technical people deploy the solution, my mind starts to ask the question: why? I understand the argument will be that Microsoft wants a consistent experience for Azure Stack, and there are other examples of controlled deployments and tight solutions (VMware NSX comes to mind in the early days), but when the market you are trying to break into is built on the premise of reduced complexity, only allowing certain hardware and partners to run and deploy your software tells me that it walks a fine line between being truly consumable and being a black box. I’ve talked about Complex Simplicity before, and this move suggests that Azure Stack was not ready or able to be given to techs to install, configure and manage.

Validated Design Equals Inflexibility:

Both of the points above lead to the suggestion that Azure Stack loses its flexibility. Flexibility in the private and hybrid cloud world is paramount, and the existing players like OpenStack and others are extremely flexible…almost to a fault. If you buy from a vendor you lose the flexibility of choice and can then be impacted at will by cost pressures relating to maintenance and support. If Azure Stack is too complex to be self managed then it certainly loses the flexibility to be used in the service provider space…let alone the enterprise.

Final Thoughts:

Worryingly, the tone of the official Blog Announcement over the delay suggests that Microsoft is reaching to try and justify the delay and the reasoning for going down the different distribution model. You just have to read the first few comments on the blog post to see that I am not alone in my thoughts.

Microsoft is committed to ensuring hardware choice and flexibility for customers and partners. To that end we are working closely with the largest systems vendors – Dell, HPE, Lenovo to start with – to co-engineer integrated systems for production environments. We are targeting the general availability release of Azure Stack, via integrated systems with our partners, starting mid-CY2017. Our goal is to democratize the cloud model by enabling it for the broadest set of use-cases possible.

 

With the release of Azure Stack now 12+ months away, Microsoft still has the opportunity to change the perception that the WPC 2016 announcements have created in my mind. The point of private cloud is to drive operational efficiency in all areas. Having a fancy interface with all the technical trimmings isn’t what will make an on-premises stack gain mainstream adoption. Flexibility, cost and reduced complexity are what count.

References:

https://azure.microsoft.com/en-us/blog/microsoft-azure-stack-delivering-cloud-infrastructure-as-integrated-systems/?utm_campaign=WPC+2016&utm_medium=bitly&utm_source=MNC+Microsite

https://rcpmag.com/articles/2016/07/12/wpc-2016-microsoft-delays-azure-stack.aspx

http://www.zdnet.com/article/microsoft-to-release-azure-stack-as-an-appliance-in-mid-2017/

http://www.techworld.com.au/article/603302/microsoft-delays-its-azure-stack-software-until-mid-2017/

Sneak Peek – Veeam 9.5 vCloud Director Self Service Portal

Last month Veeam announced that they had significantly enhanced the capabilities around the backup and recovery of vCloud Director. This gives vCloud Air Network Service Providers the ability to tap into a new set of RESTful APIs that add tenanted, self service capabilities, and to offer a more complete service that is totally controlled and managed by the vCloud tenant.

As part of the Veeam Vanguard program I have been given access to an early beta of Veeam v9.5 and have had a chance to take the new functionality for a spin. Given that this is a very early beta of v9.5, I was surprised to see that the installation and configuration of the vCloud Director Self Service functionality was straightforward and, like most things with Veeam…it just worked.

NOTE: The following is based on an early access BETA and as such features, functions and menu items are subject to change.

Basic Overview:

The new vCloud Director integration lets you back up and restore single VMs, vApps, Organization vDCs and whole Organizations. This is all done via a web UI based on Veeam Backup Enterprise Manager. Only vCD SP versions are compatible with the feature. Tenants have access to a Self-Service web portal where they can manage their vCloud Director jobs, as well as restore VMs, files and application items within their vCloud Director organization.

The Service Provider exposes the following URL to vCD tenants:

https://Enterprise-Manager-IP:9443/vcloud/OrgName

As shown in the diagram below, Enterprise Manager then talks to the vCloud Director cells to authenticate the tenant and retrieve information relating to the tenant’s vCloud Organization.

Configuring a Tenant Job:

Anyone who is familiar with Veeam will recognize the steps below and the familiar look of the menu options that the Self Service Portal provides. As shown below, the landing page once the tenant has authenticated is similar to what you see when logging into Enterprise Manager…in fact the beauty of this portal is that Veeam didn’t have to reinvent the wheel…they just retrofitted vCD multi-tenancy into the views.

To configure a job click on the Jobs tab and hit the Create button.

Give the Job a Name and set the number of restore points to keep.

Next select the VMs you want to add to the Job. As mentioned above you can add the whole Org, a vDC, a vApp or go as granular as a single VM.

Next select any Guest Processing you want done for Application Aware backups.

And then set the Job Schedule to your liking.

Finally, configure email notifications.

Once that has been done you have the option to Run the Job manually or wait for the schedule to kick in. As you can see below you have a lot of control over the backup job and you can even start Active Full Jobs.

Once a job has been triggered you have access to view logs on what is happening during the backup process. The detail is just as you would expect from the Veeam Backup & Replication console and keeps tenants informed as to the status of their jobs.

More to Come:

There is a lot more that I could post but for the moment I will leave you all with that first sneak peek. Once again Veeam have come to the party in a big way with this feature, and every service provider who runs vCloud Director should be looking at Veeam 9.5 to enhance the value of their IaaS offering.

#LongLivevCD

Quick Post: VLAN Trunking with vCloud Director

This week one of our Virtualisation Engineers (James Smith) was trying to come up with a solution for a client who wanted the flexibility to bring his own VLANs mapped into our vCloud networking stack. We get this request quite often and we generally configure a one to one relationship between the VLAN being mapped externally to our networking stack and then brought into vCD via an externally connected vORG Network.

As you all know, you can configure an ESXi Portgroup with either no VLAN, a single VLAN, multiple VLANs or Private VLANs. In this case the customer wanted preconfigured VLANs as part of the one Portgroup, so taking vCloud Director out of play we would configure the Portgroup as follows:
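For those playing along with PowerCLI, the trunked Portgroup configuration looks something like this (assuming a distributed switch and made-up names and VLAN range).

```powershell
# Trunk a range of VLANs on the portgroup backing the vCD External Network
Get-VDSwitch -Name "dvSwitch01" | Get-VDPortgroup -Name "Customer-Trunk" |
    Set-VDVlanConfiguration -VlanTrunkRange "200-210"
```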

This allows for the tagging of the VLAN at the GuestOS level while allowing those VMs to be on the same Portgroup. The problem arises when you then try to create the External Network in vCloud Director. As shown below, vCloud Director looks at the Portgroup, sees the multiple VLANs and marks it down as VLAN 4095.

Regardless of the fact that it’s picked up as VLAN 4095 (which wouldn’t be ideal even if we had configured the Portgroup with 4095), you can’t finish off the configuration of the External Network as vCD throws the error seen below.

Another cryptic error from vCD, but in a nutshell it’s telling you that 4095 is in use and the network can’t be created, meaning you won’t be able to tie any vORG Network to the ESXi Portgroup. There is a VMware KB that relates to this error, however searching through the vCD database shows that 4095 isn’t in use as is expected. So it would appear that this is default vCD behaviour when dealing with a Portgroup configured with multiple VLANs.

Workaround:

We eventually came up with a workaround for this that isn’t 100% foolproof and should be undertaken with the understanding that you could cause issues if VLANs are not managed and noted down in some configuration database. What we did was go back and modify the Portgroup to only have one VLAN. This allows us to create the External Network in vCD and from there create the vORG Network.

From there we go back and edit the Portgroup to make it a trunk as we had it initially. vCD will now show the External Network still created with VLAN 4095 listed as shown below.
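Expressed in the same PowerCLI terms as above, the workaround is essentially the following (again with made-up names)…set a single VLAN, create the External Network and vORG Network in vCD, then flip the Portgroup back to a trunk.

```powershell
$pg = Get-VDSwitch -Name "dvSwitch01" | Get-VDPortgroup -Name "Customer-Trunk"

# Step 1: temporarily tag a single VLAN so vCD will accept the External Network
$pg | Set-VDVlanConfiguration -VlanId 200

# Step 2: create the External Network and vORG Network in vCloud Director

# Step 3: put the trunk back as it was originally configured
$pg | Set-VDVlanConfiguration -VlanTrunkRange "200-210"
```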

From here you can create VMs in vCD, connect them up to the vORG Network and use VLAN tagging in the Guest OS to pass the correct network traffic through. Again just be wary that vCD doesn’t recognize the VLANs being trunked and there is a possibility a duplicate VLAN could be assigned via another External Network.

As a side note, I’ll be chasing this up with the vCloud Director Product team as I believe it should be an option that is allowed…even though VXLAN is taking over there is still a need to cater for traditional VLAN configurations.

References:

https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2003988

http://www.virtualbrigade.com/2014/08/what-is-vlan-id-4095-when-is-it-used.html

Multiple VLANs for an External network in vCloud? (via /r/vmware)

Top vBlog 2016: Aussie (vMafia) Representation

The Top vBlog 2016 results were announced a couple of nights ago and Australia had an OK representation this year, though the number of active bloggers on the list has decreased from last year. There were 321 blogs listed at vSphere-Land.com. I know of a lot more bloggers locally, so if you have a chance head over and register your site on the list ready for next year’s revamp.

http://vsphere-land.com/news/top-vblog-2016-full-results.html

I’ve pulled out the Aussie blogs and listed them below…those with the Rank highlighted in red are contributors to the @aussvMafia site, with myself, Craig Waters, Rene Van Den Bedem and @JoshOdgers taking out Top 50 spots this year. For those not familiar with Aussie vMafia, head here and take advantage of one of the best aggregation sites focused on VMware Virtualization going around. Great to also see three new blogs appear in the list as well.

Blog | Rank | Previous | Change | Total Votes | Total Points | #1 Votes
CloudXC (Josh Odgers) | 17 | 15 | -2 | 189 | 1342 | 24
VCDX133 (Rene Van Den Bedem) | 19 | 37 | 18 | 167 | 1284 | 24
Craig Waters | 37 | 58 | 21 | 75 | 579 | 4
Virtualization is Life! (Anthony Spiteri) | 44 | 105 | 61 | 77 | 544 | 14
Penguinpunk.net (Dan Frith) | 78 | 229 | 151 | 52 | 320 | 2
Virtual 10 (Manny Sidhu) | 82 | 246 | 164 | 41 | 303 | 7
Proudest Monkey (Grant Orchard) | 93 | 98 | 5 | 45 | 278 | 1
Pragmatic IO (Brett Sinclair) | 153 | 224 | 71 | 30 | 199 | 4
Musings of Rodos (Rodney Haywood) | 214 | 319 | 105 | 20 | 140 | 0

Virtualization is Life! managed to jump up 61 places from last year to #44 which is a great feeling and humble reward for the work I put into this site. It also shows that there is strong interest in vCloud Director, NSX and the vCloud Air Network in general. The list of bloggers that are ranked higher (and lower) shows the extraordinary power of community generated content. There is quality throughout!

Thanks again to Eric Siebert for taking the time to go through the process and organise the voting and all the good and bad that goes with that…and thanks to all who voted!

#TopvBlog2016 #LongLivevCD

ps. Please let me know if I’ve left anyone off the list…I worked through the list in quick time so might have left someone out.

Nutanix Buying PernixData: My Critical Analysis

Overnight The Register posted an article claiming that Nutanix is about to buy out PernixData…this has apparently come through reliable sources and hasn’t been denied by those who have been asked at PernixData. I tweeted that I was pretty bummed at the news, and as a PernixData customer and PernixPro I feel inclined to comment on why I feel PernixData missed out on the chance to go it alone. This isn’t going to be a post about Nutanix or why they felt they needed PernixData technology, though it’s apparent that they potentially required a performance boost somewhere along the line in their architecture. Possibly they were after the advanced analytics that Architect provides…or maybe they needed both.

FVP is Brilliant Tech!:

There is no doubt that FVP is brilliant, and pretty much anybody who has it deployed will attest to the fact that it delivers as promised. On a personal note it came to our rescue when we had performance issues in one of our storage platforms and allowed us to deliver services with low latency and decent performance. The ease and elegance of the solution, which meant you could install FVP within 15 minutes across an existing cluster utilizing an existing investment in flash or memory, should have made it a no brainer for a lot of people with performance issues.

Apart from the “band-aid” use case, the premise of in-host caching should have led storage and platform architects to consider installing FVP into hosts for accelerated read and write IO, allowing cheaper dumb storage to provide the capacity. It’s something that I guess serves as the basis for many storage platforms, but the beauty here was that you could potentially tailor the solution to fit specific requirements or budgets and have the flexibility to upgrade/downgrade when required.

The introduction of memory as a caching tier was truly amazing and opened up the possibilities for extreme caching. FVP does have some limitations and gotchas, but overall it’s very slick technology.

Architect is Very Handy:

Architect was released last year and is installed with the FVP binaries, so it’s already in the kernel ready for action. Once unlocked it presents probably the best set of platform analytics in the industry. The granularity of the data and metrics it presents is brilliant for any architect or operational person to use in either planning for, or managing, storage. The potential to combine FVP and Architect into a fully self healing analytics platform was pretty exciting. The Register post mentioned that PernixData were looking at developing their own storage platform to combine all elements…this would have, and still may, make sense, though there is a lot of competition in the storage array space and maybe the risk of getting into the hardware game means this was just a rumour.

The Problem:

I don’t pretend to fully understand the ins and outs of how a company prices its solutions, but I can say from experience that FVP and Architect are expensive. I’ve been involved in trying to justify spend on FVP internally and I can tell you that it was/is a hard sell. For the most part, if there wasn’t a need to solve a storage problem, most would, and have, found FVP too expensive. This isn’t to say that it doesn’t add value, because it certainly does, however when you break it down and look at the cost of FVP compared to a physical storage array the numbers look kind of sick…especially given that FVP is just software that utilizes existing or new hardware.

Based on some internal workings, the cost of FVP over three years could equate to the cost of two or three new generation flash based arrays or legacy SAN systems. That’s a hard pill to swallow, and it comes down to the crux of why I think FVP was priced too expensively to gain the market penetration that might have meant this sale to Nutanix could have been avoided.

Maybe the exit was always planned, and I am coming to understand more and more that startups in today’s technology world are governed by the investors who want a return on their risk…VCs don’t pander to what’s cool or what’s great tech…they want their investment to be realized. In this case it looks as though PernixData missed a golden opportunity to get the pricing right, not sell FVP like traditional hardware, and penetrate the market more. I know lots of people who would have looked at and implemented FVP if the price was right.

Again, I might be over simplifying the economics of it all, but the proof is now possibly in the pudding. I’ve blogged about FVP Freedom, which was a great initiative to get FVP into as many labs and platforms as possible. Rather than giving it away for free, maybe the focus should have been on getting the pricing right for FVP. That would have gone a long way towards many people today not feeling bummed, disappointed and annoyed that a company with such great people and technology may be swallowed up. I am guessing that in today’s tech sector, with so many vendors playing in the same space, we should come to expect more situations like this…it doesn’t make it any easier to swallow.

At the end of the day I feel that the pricing model forced people to see FVP as purely a band-aid solution used only in times of desperation…it didn’t deserve that reputation, but it was widely known as truth both inside and outside of PernixData. It will be interesting to see how this pans out after/when/if the deal is finalized. To quote one person…FVP users seem to be the big losers if this goes through.

Investors will always try to get their money!