CloudPhysics: Heartbleed Health Checks for vCenter and ESXi

The power of a service like CloudPhysics continues to grow almost weekly as they add new features and Cards. Not only is it brilliant for analytics and metrics, but CloudPhysics is now releasing Cards that aim to help VMware administrators keep track of possible security vulnerabilities in their platforms.

[Image: VMware-Heartbleed]

This week they started a campaign to alert CloudPhysics subscribers to the fact that approximately 57% of vCenter servers and 58% of ESXi hosts are still not patched with the latest Heartbleed update builds. Based on their own big data analytics, they have found that 40% of their client base is still unprotected.

Emails were sent out to affected users earlier this week, and @virtualirfan has written a blog post on the CloudPhysics site detailing the concern over the numbers quoted above. If you are a current CloudPhysics user, go to the Card Store and search for Heartbleed. Add the card to the deck and run the check.

[Image: cp_heartbleed]

If you are not a current subscriber, follow the steps below:

Three easy steps to rid yourself of Heartbleed

Is your organization part of the 40%? There’s no reason you should be. The fix is easy and the risk is not worth taking. And CloudPhysics is making it even easier: we’ve packaged up the VMware Heartbleed analytic we ran across our global data set, and it’s now available in our community (free) edition for users to run on their own VMware environments. What you can do:

  1. If you haven’t already, get CloudPhysics up and running in your datacenter (takes just a few minutes).
  2. Select and run the “Heartbleed Check.” You’ll find it in the Card Store. It will immediately show you precisely which ESXi hosts remain unprotected in your datacenter.
  3. Apply the patch(es). Here’s the table listing build numbers for the patches we’ve discussed here.

Source: http://www.cloudphysics.com/blog/vmware-heartbleed/
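If you would rather spot-check build numbers yourself, they are easy to pull with pyVmomi. The sketch below is a minimal example rather than CloudPhysics' own analytic; the vCenter address, credentials and MIN_PATCHED_BUILD value are placeholders, so drop in your own details and the patched build number for your ESXi version from the table linked in the CloudPhysics post.

```python
# Minimal pyVmomi sketch: list vCenter and ESXi build numbers so they can be
# compared against the Heartbleed-patched builds. All connection details and
# MIN_PATCHED_BUILD are placeholders - substitute your own values.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

VCENTER = "vcenter.example.local"           # placeholder
USER = "administrator@vsphere.local"        # placeholder
PASSWORD = "changeme"                       # placeholder
MIN_PATCHED_BUILD = 0                       # substitute the patched build for your ESXi version

ctx = ssl._create_unverified_context()      # lab only; use proper certs in production
si = SmartConnect(host=VCENTER, user=USER, pwd=PASSWORD, sslContext=ctx)
try:
    about = si.content.about
    print("vCenter: %s build %s" % (about.fullName, about.build))

    view = si.content.viewManager.CreateContainerView(
        si.content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        build = int(host.config.product.build)
        status = "OK" if build >= MIN_PATCHED_BUILD else "CHECK - possibly unpatched"
        print("%s: %s build %s (%s)" % (
            host.name, host.config.product.fullName, build, status))
    view.Destroy()
finally:
    Disconnect(si)
```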

Hyper-V is better tech than ESXi – What are Microsoft Smoking??

“We are 4x cheaper with better technology versus VMware.”

I’ve been fairly open about my opinion of the latest round of Microsoft FUD coming out of their Worldwide Partner Conference this week, but I felt strongly enough about the utter crap coming out of their mouths to respond in a post.

It’s not so much about the claim to be 4x cheaper than the VMware Cloud Suite…but more the outright incorrect claims that their technology is somehow superior to that of VMware’s.

I’ve found myself in the position of having been exposed to both Hyper-V and ESXi (not counting the management and orchestration suites), and in fact I cut my teeth in the Virtualization world on Hyper-V…so unlike others out there who see things only through the rose-colored glasses Microsoft seem to sew onto people’s faces…I go by a real-world operational perspective that’s not blinkered.

So here it is…Microsoft Hyper-V is not the equal of, nor superior to, VMware’s ESXi! Rather than go through it feature by feature, and in the interest of keeping this post short and to the point, I would challenge anybody to sit someone who has had zero exposure to the Virtualization market down to evaluate both Hyper-V and ESXi side by side. Without bias or prejudice, no logical person would choose Hyper-V over ESXi as the better hypervisor platform. To reinforce that…ESXi will come out on top.

It’s that simple!

Of course I now fall firmly on the side of VMware, and some will argue that my own view is blurred, but I can tell you that my current opinions are based on fact and experience…not on desperate attempts to discredit otherwise far, far superior technology…but then again, Microsoft have made a habit of this, so it doesn’t surprise me.

Kevin Turner you are a disgrace!

Read more: http://www.crn.com.au/News/389695,vmware-google-apple-catch-a-spray-in-turners-keynote.aspx?utm_source=feed&utm_medium=rss&utm_campaign=CRN+All+Articles+feed#ixzz37Vca3hea

NSX Bytes: Deploying vShield Endpoint with NSX Manager

I recently had to deploy a solution into our Labs that required the installation of vShield Endpoint VMs to facilitate a 3rd Party service. No worries there…but when I logged into the Lab I was faced with an updated vShield Manager instance which was now NSX Manager…Where the heck do you deploy Endpoints? Where is the option to select a host and deploy an Endpoint in NSX?

The process below is for installing VMware Endpoints with the NSX 6.x GUI.

In the Networking and Security section of the vSphere Web Client, go to NSX Managers -> IP Address of Manager. Click on the Manage tab and then the Grouping Objects tab. Go to IP Pools and add a new pool for the vShield Endpoint.

[Image: NSX_VS_EP_1]
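If you prefer to script this step rather than click through the Web Client, NSX Manager also exposes IP pools through its REST API. The snippet below is a rough sketch only: the manager address, credentials and pool values are placeholders, and the endpoint and ipamAddressPool payload are based on the NSX 6.x API guide, so verify the exact element names against the API reference for your build.

```python
# Rough sketch: create an IP pool for the Endpoint via the NSX Manager REST API
# instead of the Web Client. Address, credentials and pool values are placeholders;
# confirm the payload against the NSX 6.x API guide for your build.
import requests

NSX_MANAGER = "nsxmanager.example.local"   # placeholder
AUTH = ("admin", "changeme")               # placeholder credentials

pool_xml = """<ipamAddressPool>
  <name>vShield-Endpoint-Pool</name>
  <prefixLength>24</prefixLength>
  <gateway>192.168.10.1</gateway>
  <dnsSuffix>lab.local</dnsSuffix>
  <dnsServer1>192.168.10.10</dnsServer1>
  <ipRanges>
    <ipRangeDto>
      <startAddress>192.168.10.50</startAddress>
      <endAddress>192.168.10.60</endAddress>
    </ipRangeDto>
  </ipRanges>
</ipamAddressPool>"""

resp = requests.post(
    "https://%s/api/2.0/services/ipam/pools/scope/globalroot-0" % NSX_MANAGER,
    data=pool_xml,
    headers={"Content-Type": "application/xml"},
    auth=AUTH,
    verify=False,  # lab only; use a trusted certificate in production
)
resp.raise_for_status()
print("Created IP pool: %s" % resp.text)  # on success the body contains the new pool ID
```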

Go back to the Networking and Security Section and go to Service Deployments under Installation. Click on Add and you are presented with the Deploy Network & Security Services Wizard. Select VMware Endpoint and click next.

[Image: NSX_VS_EDGE_2]

Select the Cluster you want to deploy the Endpoints to and click Next. Note that you cannot select individual hosts.

[Image: NSX_VS_EP_3]

Select the datastore you want to use for the Endpoint (shared storage is recommended) and, once selected, hit Next. There is also a Specified on Host option…have a read of the online documentation to understand what that relates to.

[Image: NSX_VS_EP_4]

You can now select the Management Network for the Endpoint. Ensure that the IP Pool created earlier matches the network port group and click Next.

[Image: NSX_VS_EP_5]

At this point the Wizard begins to deploy the Endpoints. If you take a look at the vCenter Task Console you should see tasks similar to those below. The Endpoint Agent is installed on the hosts and the actual Endpoints are deployed via OVF templates, exactly the same way vShield Endpoints were.

[Image: NSX_VS_EP_7]

Once the Installation Status has changed from In progress to Succeeded, your Endpoints have deployed. At this stage you are done…you can’t do anything with the deployed service except remove it. Nothing to edit, nothing to worry about…3rd party services should now be able to work just as if these were vShield Endpoints.

[Image: NSX_VS_EP_8]

The NSX Endpoints are named simply VMware Endpoint (1), VMware Endpoint (2) and so on and are deployed to a new Resource Pool called ESX Agents.

[Image: NSX_VS_EP_9]
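If you want to verify the deployment outside of the Web Client, a quick pyVmomi sketch like the one below (connection details are placeholders) will list the agent VMs sitting in that ESX Agents resource pool.

```python
# Quick pyVmomi sketch: list the VMware Endpoint agent VMs deployed into the
# "ESX Agents" resource pool. Connection details are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
try:
    view = si.content.viewManager.CreateContainerView(
        si.content.rootFolder, [vim.ResourcePool], True)
    for rp in view.view:
        if rp.name == "ESX Agents":
            for vm in rp.vm:
                print("%s - power state: %s" % (vm.name, vm.runtime.powerState))
    view.Destroy()
finally:
    Disconnect(si)
```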

The NSX online documentation is about the only searchable location at this point that goes through the process. As mentioned above, there is a caveat that I have not been able to find further info on…that is, you cannot deploy Endpoints to individual hosts, only to a cluster and all hosts in that cluster. I’ve searched for API calls that might offer a mechanism to select hosts, without luck. Feel free to comment below if you know this is possible.

Ref: http://pubs.vmware.com/NSX-6/index.jsp#com.vmware.nsx.install.doc/GUID-62B22E0C-ABAC-42D8-93AA-BDFCD0A43FEA.html

Follow Up: vCloud 5.x IP Sub Allocation Pool Error …Fix Coming

A few months ago I wrote a quick post on a bug that existed in vCloud Director 5.1 regarding IP Sub Allocation Pools and IPs being marked as in use when they should be available to allocate. What this leads to is a bunch of unusable IPs…meaning that they go to waste and pools can be exhausted more quickly…

[Image: ip_all_1]
  • Unused external IP addresses from sub-allocated IP pools of the gateway failed after upgrading from vCloud Director 1.5.1 to vCloud Director 5.1.2
    After upgrading vCloud Director from version 1.5.1 to version 5.1.2, attempting to remove unused external IP addresses from sub-allocated IP pools of a gateway failed saying that IPs are in use. This issue is resolved in vCloud Director 5.1.3.

This condition also presents itself in vCloud 5.5 environments that have 1.5 lineage; greenfield deployments don’t seem affected. vCD 5.1.3 was supposed to contain the fix, but the release notes were released in error…we were then told that the fix would come in vCD 5.5…but when we upgraded our zones we still had the issue.

We engaged VMware Support again recently and they finally have a fix for the bug, due in vCD 5.5.2 (no word for those still running 5.1.x). My suggestion for those who can’t wait for the next point release and are affected badly enough by the bug is to raise an SR and ask for the hotfix, which is an advance build of the 5.5.2 release.

Thanks to the vCloud SP Development team for their continued support of vCD. #longlivevCD

ESXi 5.x NFS IOPS Limit Bug – Latency and Performance Hit

There is another NFS bug hidden in the latest ESXi 5.x releases…while not as severe as the 5.5 Update 1 NFS bug, it’s been the cause of increased virtual disk latency and overall poor VM performance across a couple of the environments I manage.

The VMware KB article referencing the bug can be found here:

Symptoms

  • When virtual machines run on the same host and use the same NFS datastore and the IOPS limit is set on at least one virtual machine, you experience high virtual disks latency and low virtual disk performance.
  • Even when a different IOPS limit is specified for different virtual machines, IOPS limit on all virtual machines is set to the lowest value assigned to a virtual machine in the NFS datastore.
  • You do not experience the virtual disks latency and low virtual disk performance issues when virtual machines:
    • Reside on the VMFS datastore.
    • Run on the same ESXi host, but are placed on the NFS datastore in which none of the virtual machines have an IOPS limit set.
    • Run on the same ESXi host, but are placed on different NFS datastores that do not share the client connection
    • Run on different ESXi hosts, but are placed on the same NFS datastore

So, in a nutshell, if you have a large NFS datastore with many VMs that have IOPS limits set to protect against noisy neighbours, you may experience what looks like unexplained VM latency and overall bad performance.
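If you want to find out which VMs on a given NFS datastore actually have per-disk IOPS limits applied, and could therefore be dragging each other down, a short pyVmomi sketch like this one will report them. The datastore name and connection details are placeholders.

```python
# pyVmomi sketch: report per-virtual-disk IOPS limits for every VM on a given
# NFS datastore. Datastore name and connection details are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

DATASTORE = "NFS-Datastore-01"  # placeholder

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
try:
    view = si.content.viewManager.CreateContainerView(
        si.content.rootFolder, [vim.Datastore], True)
    ds = next(d for d in view.view if d.name == DATASTORE)
    view.Destroy()

    for vm in ds.vm:
        for dev in vm.config.hardware.device:
            if isinstance(dev, vim.vm.device.VirtualDisk):
                alloc = dev.storageIOAllocation
                limit = alloc.limit if alloc else None  # -1 means unlimited
                if limit and limit > 0:
                    print("%s / %s: IOPS limit %d" % (
                        vm.name, dev.deviceInfo.label, limit))
finally:
    Disconnect(si)
```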

Some proof of this bug can be seen in the screenshot below, where a VM residing on an NFS datastore with IOPS limits applied was exhibiting high Disk Command Latency. It had an IOPS limit of 1000 and wasn’t being constrained by that setting…yet its Disk Command Latency was in the 100s. The red arrow represents the point at which we migrated the VM to another host. Straight away the latency dropped and the VM returned to expected performance levels. This matches a couple of the symptoms above…

[Image: NFS_IOPS_BUG_1]

We also experimented by removing all disk IOPS limits on a subset of NFS datastores and looked at the effect that had on overall latency. The results were almost instant, as you can see below. The two peaks represent us removing and then adding back the IOPS limits.

[Image: NFS_IOPS_BUG_2]
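For what it’s worth, that remove-and-re-add test can be scripted as well. Below is a rough pyVmomi sketch (placeholders as in the sketch above) that clears the per-disk IOPS limit, i.e. sets it back to unlimited (-1), for every VM on a named NFS datastore; setting a positive value instead of -1 would put a limit back in place.

```python
# Rough pyVmomi sketch: clear (set to unlimited, -1) the per-disk IOPS limit on
# every VM that lives on a named NFS datastore. Placeholders as in the sketch above.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

DATASTORE = "NFS-Datastore-01"  # placeholder

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
try:
    view = si.content.viewManager.CreateContainerView(
        si.content.rootFolder, [vim.Datastore], True)
    ds = next(d for d in view.view if d.name == DATASTORE)
    view.Destroy()

    for vm in ds.vm:
        changes = []
        for dev in vm.config.hardware.device:
            if isinstance(dev, vim.vm.device.VirtualDisk) and \
                    dev.storageIOAllocation and \
                    dev.storageIOAllocation.limit not in (None, -1):
                dev.storageIOAllocation.limit = -1  # -1 = unlimited
                changes.append(vim.vm.device.VirtualDeviceSpec(
                    operation=vim.vm.device.VirtualDeviceSpec.Operation.edit,
                    device=dev))
        if changes:
            task = vm.ReconfigVM_Task(vim.vm.ConfigSpec(deviceChange=changes))
            print("Clearing IOPS limits on %s (task %s)" % (vm.name, task.info.key))
finally:
    Disconnect(si)
```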

As we are running ESXi 5.1 hosts, we applied the latest patch release (ESXi510-201406001), which includes ESXi510-201404401-BG, the patch that addresses the bug in 5.1. After applying it we saw a noticeable drop in overall latency on the previously affected NFS datastores.

Annoyingly there is no patch available for ESXi 5.5 yet, but I have been told by VMware Support that one is due as of Update 2 for 5.5…no time frame on that release, though.

One thing I’m interested in comments on is VM virtual disk IOPS limits… They’re designed to lessen the impact of noisy neighbours, but what overall effect can they have, or do they have, on LUN-based latency? Or do they contain the latency to the VM that’s restricted? I assume they work differently to SIOC and don’t choke the disk queue depth? Does the IOPS limit simply put the brakes on any virtual disk IO?