
Bug Fix: vCloud 5.x IP Sub Allocation Pool Error (5.5.2 Update)

A few months ago I wrote a quick post on a bug in vCloud Director 5.1 regarding IP Sub Allocation Pools and IPs being marked as in use when they should be available to allocate. This leads to a bunch of unusable IPs…meaning they go to waste and pools exhaust more quickly…

  • Unused external IP addresses from sub-allocated IP pools of the gateway failed after upgrading from vCloud Director 1.5.1 to vCloud Director 5.1.2
    After upgrading vCloud Director from version 1.5.1 to version 5.1.2, attempting to remove unused external IP addresses from sub-allocated IP pools of a gateway failed saying that IPs are in use. This issue is resolved in vCloud Director 5.1.3.

This condition also presents itself in vCloud 5.5 environments that have 1.5 lineage. Greenfield deployments don't seem to be affected…vCD 5.1.3 was supposed to contain the fix, but the release notes were published in error…we were then told that the fix would come in vCD 5.5…but when we upgraded our zones we still had the issue.

We engaged VMware Support recently and they finally had a fix for the bug, which has now been officially released in vCD 5.5.2 (Build 2000523). It's great to see a nice long list of bug fixes in addition to the one covered in this post.

https://www.vmware.com/support/vcd/doc/rel_notes_vcloud_director_552.html#featurehighlights

Looking forward to the next major release of vCloud Director, which will be the first of the Service Provider releases…more on that as it gets closer to GA.

#longlivevCD

VMware Fling: ESXi Mac Learning dvFilter for Nested ESXi

Late last year I was load testing against a new storage platform using both physical and nested ESXi hosts…at the time I noticed decreased network throughput when using load test VMs hosted on the nested hosts. I wrote this post and reached out to William Lam, who responded with an explanation of what was happening and why promiscuous mode is required for nested ESXi installs.

http://www.virtuallyghetto.com/2013/11/why-is-promiscuous-mode-forged.html

Fast forward to VMworld 2014: in a discussion I had with William at The W Bar (where lots of great discussions are had) after the official party, he mentioned that a new Fling was about to be released that addresses the issues with nested ESXi hosts and promiscuous mode enabled on the virtual switches. As William explains in his new blog post, he took the problem to VMware Engineering, who were having similar issues in their R&D labs and came up with a workaround…that workaround is now an official Fling! Apart from feeling a little bit chuffed that I sparked interest in a problem that has resulted in a fix, I decided to put it to the test in my lab.

I ran the same tests that I ran last year. Running one load test on a 5.5 ESXi host nested on a physical 5.5 host, I saw equal network utilization across all six nested hosts.

The load VM was only able to push 15-17MBps on a random read test. As William saw in his post, ESXTOP shows you more about what's happening:

Network throughput is roughly even across all NICs on all hosts that are set for promiscuous mode…overall throughput is reduced.

After installing the VIB on the physical host, you have to add the advanced virtual machine settings to each nested host to enable the MAC learning. Unless you do this via an API call, you will need to shut down the VM to edit the VMX/config. I worked through a set of PowerCLI commands, shown below, to bulk add the advanced settings to running nested hosts. The below works for any VM in a resource pool whose name matches ESX and that has two NICs.
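As a rough guide, something along these lines will do the bulk add. This is a minimal sketch rather than my exact commands, assuming the setting names documented with the Fling (ethernet#.filter4.name = dvfilter-maclearn and ethernet#.filter4.onFailure = failOpen), a hypothetical resource pool called "Nested-ESXi", an existing Connect-VIServer session, and the dvFilter VIB already installed on the physical host:

# Grab the running nested hosts…"Nested-ESXi" is a placeholder resource pool name
$nestedHosts = Get-ResourcePool -Name "Nested-ESXi" | Get-VM | Where-Object { $_.Name -match "ESX" }

foreach ($vm in $nestedHosts) {
    # Each nested host has two NICs: ethernet0 and ethernet1
    foreach ($nic in 0..1) {
        New-AdvancedSetting -Entity $vm -Name "ethernet$($nic).filter4.name" -Value "dvfilter-maclearn" -Confirm:$false -Force | Out-Null
        New-AdvancedSetting -Entity $vm -Name "ethernet$($nic).filter4.onFailure" -Value "failOpen" -Confirm:$false -Force | Out-Null
    }
}

Running Get-AdvancedSetting -Entity $nestedHosts -Name "ethernet*" afterwards should confirm the settings have been applied to the running VMs.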

Checking back in on ESXTOP, it looks to have an instant effect: only the nested host generating the traffic shows significant network throughput…the other hosts are doing nothing, and I am now seeing about 85-90MBps against the load test.

Taking a look at the network throughput graphs (below), you can see an example of two nested hosts in the group with the same throughput until the dvFilter was installed, at which point traffic dropped on the host not running the load test. Throughput increased almost fivefold on the host running the test.

The effect on Nested Host CPU utilization is also dramatic. Only the host generating the load has significant CPU usage while the other hosts return to normal operations…meaning overall the physical host CPUs are not working as hard.

As William mentions in his post, this is a no-brainer install for anyone using nested ESXi hosts for lab work…thinking about the further implications of this fix, there is the possibility of supporting full nested environments within Virtual Data Centers without the fear of increased host CPU usage and decreased network throughput…for that to happen, though, VMware would need to change their stance on the supportability of nested ESXi environments…but this Fling, together with the VMware Tools Fling, certainly makes nested hosts all the more viable.

EVO:RAIL – Who are VMware Really Targeting?

Probably the biggest announcement from last week's VMworld was the unveiling of Project Marvin/Mystic as EVO:RAIL. Most of the focus on VMware releasing their own OEM-distributed hyperconverged solution was on competing head to head with established hyperconverged players like Nutanix…however, after reading through Duncan Epping's blog post and seeing the UI demo doing the rounds on YouTube (see below), it occurred to me that possibly there is more at play here than VMware competing for the SMB/E hyperconverged market.

From a services point of view, the battle for the private cloud between VMware and Microsoft is well advanced…Hyper-V is certainly an alternative option for companies looking to (wrongly or rightly) "save" money or to try something different after having VMware as the incumbent technology. The Windows Azure Pack adds cloud-like functionality to a Hyper-V platform, giving people the ability to use Azure's pretty interfaces internally. It also offers PaaS functionality like DBaaS and web site creation…to be honest, something that's been available for years…The WAP is also a direct pathway for consumers to push VMs and platform servers to Azure, which is Microsoft's ultimate play…everything in Azure.

Enter EVO:RAIL…a scale-out hyperconverged platform that offers a pretty new interface on top of vSphere, with VSAN thrown into the mix. For me it's got the WAP in its sights and offers a ready-made private cloud alternative built on superior vSphere technology. Without doubt, one of the complaints I hear often in the services space (outside of Cloud and IaaS) is that vSphere and vCloud don't have an intuitive enough interface…especially compared with the Azure Pack (lipstick on a pig)…but the EVO interface changes all that.

So VMware have positioned themselves well in the market with EVO:RAIL…if you look beyond the comparisons to Nutanix and other current players in the space, I think there is a play here to keep the on-prem market firmly in VMware's grasp and offer a superior alternative to the WAP.

VCAP-DCA – Experience

A couple of months ago I wouldn't have seen myself writing up one of these blog posts, which seem to be customary for any blogger who has taken a VCAP. Having only secured my VCP last October, I wasn't thinking about VCAPs until the lure of the 50% discount and the realisation that I needed to push myself further. Four weeks before VMworld I decided to accept the challenge and booked the exam for the Sunday at 2pm. Another big driver for taking the exam at VMworld was that I thought it was a means to avoid the dreaded latency that seems to plague exam takers in Australia.

510 or 550?

This was an interesting choice for me…it seemed that even with the 550 available, most people were choosing the 510. To make up my mind I read through both blueprints and saw that certain sections from the 510 (vMA, Auto Deploy) had been dropped, while newer features like vSphere Replication and vFlash Read Cache had been added, along with vCO. That said, when I booked my exam I decided to do the 510; however, there were no slots at VMworld, so I was forced to book the 550. At the end of the day I think that was the right DCA, and from what I understand the exam format has been better optimized for takers. I can see how the additional items on the blueprint could put people off, but in reality it shouldn't be daunting for seasoned vSphere admins. So, in the end, I gave myself just over three weeks to prep.

Materials and Study:

Having the blueprint by your side throughout the prep is critical…know it back to front and use resources out there like Chris Wahl's study guide to work through the objectives and know what you know…and what you need to work on…you really can't afford to skip any section.

Having had a spare Amazon voucher, I had back-ordered the VCAP-DCA Official Study Guide in February without any real intent of taking the DCA any time soon; however, having this book allowed me to structure my study around its excellent chapter content, which follows the 5.1 and 5.5 blueprint objectives.

Pluralsight's pay-per-month access to all of its training content is worth its weight in gold, and Jason Nash's Optimize and Scale course is what I consider my most valuable study asset. The offline mode can be consumed anywhere, and I spent most train rides and gym sessions working through chapters. I went through the content about 4-5 times in total, stepping up the play speed each time…by the end of it I had Jason coming at me at 2x…

I also took the official VMware Optimize and Scale 5.1 course in October 2013. While I don't think it was ultimately what made the difference between passing and failing, it's still worth a shot if the training expense can be justified.

The Lab:

For me, without a decent lab there is no chance of passing this exam…I am/was lucky in that I have a very decent lab at my disposal through ZettaGrid, but I still loaded up a mini lab on my MacBook Pro to help me study and revise on the plane ride over to VMworld. You need to go over and execute CLI commands, because speed is the key in this exam…I would also learn the 5.5 Web Client menu contexts and where to configure the new features listed in the blueprint. A lab with access to iSCSI/NFS shares is recommended; work through the relevant blueprint items again and again…see how I keep saying that repetition is key here? ☺

The Exam:

From my research and from asking others about their experience, the format has changed fairly significantly from the 500 and 510 exams…you now get 23 questions over 180 minutes, and the exam lab has five ESXi hosts, two vCenters and a bunch of datastores…you also get a vSphere Replication appliance and a vCO server instance.

Throughout the exam you are repeatedly warned not to change anything beyond the actions stated in the question…modifying the management networking in error could end the exam. There seems to be a little more leeway on that, and I found out the hard way when I completely misread one question and almost bricked a host…luckily it was a 5.5 host, or else my dVS might not have come back.

As expected I didn’t have any issues with Latency and performance of the lab…this was a big thing and meant I could attack questions without the worry of screen refresh issues due to latency or sticky keys for CLI commands.

On that note, time management is absolutely key…I was working to roughly eight questions an hour, but quickly found the time running down…some questions take longer than others, but I felt each question was roughly equal in terms of what's expected. Two questions stumped me initially, and I left them for the end in case I had time. I got to the end of question 23 with about 14 minutes left and attempted to go back and answer the two that caused me trouble…in the end I had to leave those unanswered.

When time expired I got the message saying that my results would be sent out within 15-20 days. Overall, I left the exam fairly positive that I had done more than enough to pass.

The Result and Final Thoughts:

I got a tweet from SOSTech_WP later in the evening, while I was enjoying the Redwood Room at The Clift, telling me to check my email…when I got back to the hotel I was extremely surprised to see the results email waiting in my inbox. And while I didn't smash it like I thought I had when walking out…a pass is a pass is a pass! I was relieved and fairly happy that the plan had worked and all the hard work of the previous three weeks had paid off.

In reality it wasn’t just about the last three weeks and this VCAP-DCA exam is a true representation of an acquired skill level in administrating a complex vSphere platform and in that it was a validation of the work I do in and around vSphere. People commented to me prior to taking the exam that it was a fun exam to take…and I certainly understand that point of view… I had a great time taking it…but had a better time passing it!

Next for me is to decide on a more challenging path and journey…certainly one that may have me taking at least one more VCAP Administrator Exam and a Design Exam.

Good luck to all those taking the VCAP-DCA in the future.

VMworld: Community Tech Talk – Differentiate or Die?

As mentioned in a previous post, I was lucky enough to be able to present a #vBrownBag Tech Talk at this year's VMworld. Even though the timing meant a smallish audience in the Community Hang Space, it was a thrill to present…somehow I even got my still on the front page of the VMworld video page:

Below is the talk, which presents the slide deck with an audio overlay…I'm interested in hearing comments, so feel free to post below.

The original Blog Post can be found here:

http://www.vmworld.com/community/videos#videoarea

The Complete vBrownBag VMworld 2014 Playlist can be accessed below: