Author Archives: Anthony Spiteri

VMware vSphere 6.5 Host Resources Deep Dive – A Must Have!

Just after I joined Zettagrid in June of 2013, I decided to load up vSphere 5.1 Clustering Deepdive by Duncan Epping and Frank Denneman on my iPad to read on my train journey to and from work. Reading that book allowed me to gain a deeper understanding of vSphere through the in-depth content that Duncan and Frank had produced. Any VMware administrator worth their salt would be familiar with the book (or the ones that preceded it) and it's still a brilliant read.

Fast forward a few versions of vSphere and we finally have a follow-up:

VMware vSphere 6.5 Host Resources Deep Dive

This time around Frank has been joined by Niels Hagoort and together they have produced another must-have virtualization book…though it goes far beyond VMware virtualization. I was lucky enough to review a couple of chapters of the book and I can say without question that this book will make your brain hurt…but in a good way. It's the deepest of deep dives and it goes beyond the previous books' best practices, diving into a lot of the low-level compute, storage and networking fundamentals that a lot of us have either forgotten, never learnt or never bothered to learn.

This book explains the concepts and mechanisms behind the physical resource components and the VMkernel resource schedulers, which enables you to:

  • Optimize your workload for current and future Non-Uniform Memory Access (NUMA) systems.
  • Discover how vSphere Balanced Power Management takes advantage of the CPU Turbo Boost functionality, and why High Performance does not.
  • Understand how the 3-DIMMs-per-channel configuration results in a 10-20% performance drop.
  • Learn how the TLB works and why it is bad to disable large pages in virtualized environments.
  • Discover why 3D XPoint is perfect for the vSAN caching tier.
  • Understand what queues are and where they live inside the end-to-end storage data paths.
  • Tune VMkernel components to optimize performance for VXLAN network traffic and NFV environments.
  • Learn why Intel's Data Plane Development Kit significantly boosts packet processing performance.

If any of you have read Frank's NUMA Deep Dive blog series you will start to get an appreciation of the level of technical detail this book covers; however, it is written in a way that makes the information digestible, though some parts may need to be read twice over. Well done to Frank and Niels on getting this book out and again, if you work in and around anything to do with computers this is a must-read, so do yourself a favour and grab a copy.

The Amazon locales where the book can currently be purchased are listed below:

Amazon US: http://www.amazon.com/dp/1540873064
Amazon France: https://www.amazon.fr/dp/1540873064
Amazon Germany: https://www.amazon.de/dp/1540873064
Amazon India: http://www.amazon.in/dp/1540873064
Amazon Japan: https://www.amazon.co.jp/dp/1540873064
Amazon Mexico: https://www.amazon.com.mx/dp/1540873064
Amazon Spain: https://www.amazon.es/dp/1540873064
Amazon UK: https://www.amazon.co.uk/dp/1540873064

CPU Overallocation and Poor Network Performance in vCD – Beware of Resource Pools

For the longest time, VMware administrators have been told that resource pools are not folders and that they should only be used in circumstances where the impact of applying the resource settings is fully understood. From my point of view I've been able to utilize resource pools for VM management without too much hassle since I first started working on VMware managed service platforms, and from a managed services point of view they are a lot easier to use as organizational "folders" than vSphere folders themselves. For me, as long as the CPU and Memory Resources Unlimited checkbox was ticked, nothing bad happened.

Working with vCloud Director, however, resource pools are heavily utilized as the control mechanism for resource allocation, sharing and management. It's still a topic that can cause confusion when trying to wrap one's head around the different allocation models vCD offers. I still reference blog posts from Duncan Epping and Frank Denneman written nearly seven years ago to refresh my memory every now and then.

Before moving on to an example of how overallocation or client undersizing in vCloud Director can cause serious performance issues, it's worth having a read of this post by Frank that goes through, in typical Frank detail, what resource management looks like in vCloud Director.

Proper resource management is very complicated in a Virtual Infrastructure or vCloud environment. Each allocation model uses a different combination of resource allocation settings on both Resource Pool and Virtual Machine level.

Undersized vDCs Causing Network Throughput Issues:

The Allocation Pool model was the one I worked with the most, and it used to throw up a few client-related issues when I worked at Zettagrid. When using the Allocation Pool method, which is the default model, you specify the amount of resources for your Org vDC and also how much of those resources are guaranteed. The guarantee means that a reservation is set and that the amount of guaranteed resources is taken from the Provider vDC. The total amount of resources specified is the upper boundary, which is also the resource pool limit.

Because tenants were able to purchase Virtual Datacenters of any size, there were a number of occasions where tenants undersized their resources. Specifically, one tenant came to us complaining about poor network performance during a copy operation between VMs in their vDC. At first the operations team thought that it was the network causing issues…we were also running NSX and these VMs were on a VXLAN segment, so fingers were being pointed there as well.

Eventually, after a bit of troubleshooting, we were able to replicate the problem…it was related to the resources the tenant had purchased, or lack thereof. In a nutshell, because the Allocation Pool model allows the overprovisioning of resources, not enough vCPU had been purchased. The vDC resource pool had 1000 MHz of vCPU with a 0% reservation, but the tenant had created four dual-vCPU VMs; with eight vCPUs contending for a 1000 MHz limit, each vCPU effectively gets around 125 MHz when all are busy. When the network copy job started it consumed CPU, which in turn exhausted the vCD CPU allocation.

What happened next can be seen in the video below…

With the resource pool constrained, ready time is introduced to throttle the CPU, which in turn impacts the network throughput. As shown in the video, when the resource pool has the Unlimited checkbox ticked, the ready time goes away and the network throughput returns to normal.
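
If you want to see the throttling for yourself, CPU ready is visible in esxtop on the host. A quick sketch of checking it is below; the often-quoted ~10% %RDY per vCPU threshold is a rule of thumb, not a hard limit:

    # From the ESXi shell: interactive CPU view
    esxtop                                # press 'c' for the CPU view and watch the %RDY column
    # Or capture a handful of batch-mode samples for offline analysis
    esxtop -b -d 2 -n 10 > /tmp/esxtop-ready.csv

Sustained %RDY against the VMs in the constrained resource pool is the tell-tale sign that the pool limit, and not the network, is the bottleneck.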

Conclusion:

Again, it's worth checking out the impact on the network throughput in the video, as it clearly shows what happens when tenants underprovision or overallocate their Virtual Datacenters in vCloud Director. Outside of vCloud Director it's also handy to understand the impact of applying reservations on resource pools in terms of VM compute and networking performance.

It’s not always the network!

References:

http://www.vmware.com/resources/techresources/10325

http://frankdenneman.nl/2010/09/24/provider-vdc-cluster-or-resource-pool/

http://www.yellow-bricks.com/2012/02/28/resource-pool-shares-dont-make-sense-with-vcloud-director/

https://kb.vmware.com/kb/2006684

Allocation Pool Organization vDC Changes in vCloud Director 5.1

VMworld 2017 – #vGolf Las Vegas

#vGolf is back! Bigger and better than the inaugural #vGolf held last year at VMworld 2016!

Last year we had 24 participants and everyone who attended had a blast at the majestic Bali Hai golf complex, which is in view of the VMworld 2017 venue, Mandalay Bay. This year the event will expand with more sponsors and a more structured golfing competition, with prizes going to the top two placed two-ball teams.

Details will be updated on this site and on the Eventbrite page once the day is finalised. For the moment, if you are interested please reserve your spot by securing a ticket. At this stage there are 32 spots, but depending on popularity that could be extended.

Last year the golfing fees were heavily subsidised to $40 USD per person (green fees are usually $130-150) thanks to the sponsors, and I expect the same or lower this year depending on final sponsorship numbers. For now, please head to the Eventbrite page, reserve your ticket and wait for further updates as we get closer to the event.

Registration Page

There is a password on the registration page to protect against people registering directly via the public page. The password is vGolf2017Vegas. I’m looking forward to seeing you all there bright and early on Sunday morning!

Take a look at what awaits you…don’t miss out!

Sponsorship Call:

If you or your company can offer some sponsorship for the event, please email [email protected] to discuss arrangements. I am looking to subsidise most of the green fees if possible, and for that we would need four to five sponsors.

Important ESXi 6.0 Patch – Upgrade Now!

Last week VMware released a new patch (ESXi 6.0 Build 5572656) that addresses a number of serious bugs with snapshot operations. Usually I wouldn't blog about a patch release, but when I looked through the rest of the fixes in the VMware KB it was apparent that this was more than your average VMware patch: it addresses a number of issues around storage and, in particular, a lot around snapshot operations, which are critical to most VM backup operations.

Vote for your favorite blogs at vSphere-land!

Top vBlog Voting 2017

Here are some of the key resolutions that I’ve picked out from the patch release:

  • When you take a snapshot of a virtual machine, the virtual machine might become unresponsive.
  • After you create a virtual machine snapshot in SEsparse format, you might hit a rare race condition if there are significant but varying write IOPS to the snapshot. This race condition might make the ESXi host stop responding.
  • Because of a memory leak, the hostd process might crash with the following error: Memory exceeds hard limit. Panic. The hostd logs report numerous errors such as Unable to build Durable Name. This kind of memory leak causes the host to become disconnected from vCenter Server.
  • Using SEsparse for both creating snapshots and cloning virtual machines might cause a corrupted guest OS file system.
  • During snapshot consolidation, a precise calculation might be performed to determine the storage space required for the consolidation. This precise calculation can cause the virtual machine to stop responding because it takes a long time to complete.
  • Virtual machines with SEsparse-based snapshots might stop responding during I/O operations with a specific type of I/O workload in multiple threads.
  • When you reboot the ESXi host under the following conditions, the host might fail with a purple diagnostic screen and a PCPU xxx: no heartbeat error:
    • You use the vSphere Network Appliance (DVFilter) in an NSX environment
    • You migrate a virtual machine with vMotion under DVFilter control
  • Windows 2012 domain controllers support SMBv2, whereas the Likewise stack on ESXi supports only SMBv1. With this release, the Likewise stack on ESXi is enabled to support SMBv2.
  • When unmap commands fail, the ESXi host might stop responding due to a memory leak in the failure path. You might receive the following error message in the vmkernel.log file: FSDisk: 300: Issue of delete blocks failed [sync:0], and the host becomes unresponsive.
  • If you use SEsparse and enable the unmap operation to create snapshots and clones of virtual machines, the guest OS file system might be corrupt after the wipe operation (the storage unmap) completes. A full clone of the virtual machine performs well.

There are also a number of vSAN-related fixes in the patch, so overall it's worth applying this patch as soon as possible.
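
For standalone hosts without Update Manager, a sketch of applying an image profile straight from the VMware online depot is below. The profile name is a placeholder; check what the depot actually offers first with esxcli software sources profile list, and note the host needs maintenance mode and a reboot afterwards.

    # allow the host to reach the online depot
    esxcli network firewall ruleset set -e true -r httpClient
    # apply the matching ESXi 6.0 image profile (name below is illustrative)
    esxcli software profile update -p <ESXi-6.0.0-image-profile> -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml
    esxcli network firewall ruleset set -e false -r httpClient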

References:

https://kb.vmware.com/kb/2149955

Attack from the Inside – Protecting Against Rogue Admins

In July of 2011, Distribute.IT, a domain registration and web hosting services provider in Australia, was hit with a targeted, malicious attack that resulted in the company going under and its customers left without their hosting or VPS data. The attack was calculated, targeted and vicious in its execution… I remember the incident well as I was working for Anittel at the time and we were offering similar services…everyone in the hosting organization was concerned when thinking about the impact a similar attack would have on our systems.

“Hackers got into our network and were able to destroy a lot of data. It was all done in a logical order – knowing exactly where the critical stuff was and deleting that first,”

While it was reported at the time that a hacker got into the network, the way in which the attack was executed pointed to an inside job, and although it was never proven, it is almost certain that the attacker was a disgruntled ex-employee. The very real issue of an inside attack has popped up again…this time Verelox, a hosting company out of the Netherlands, has effectively been taken out of business by a confirmed attack from within by an ex-employee.

My heart sinks when I read of situations like this, and it was the only thing that truly kept me up at night as someone ultimately responsible for similar hosting platforms. I could deal with, and probably reconcile myself to, a situation where a piece of hardware failed causing data loss…but if an attacker had caused the data loss then all bets would have been off, and I might have found myself scrambling to save face and, along with others in the organization, searching for a new company…or worse, a new career!

What Can Be Done at a Technical Level?

Knowing a lot about how hosting and cloud service providers operate, my feeling is that 90% of organizations out there are not prepared for such attacks and are at the mercy of an attack from the inside…either by a current or ex-employee. Taking that a step further, there are plenty that are at risk of an inside attack perpetrated by external malicious individuals. This is where the principle of least privilege needs to be taken to the nth degree. Clear separation of the operational and physical layers needs to be considered as well, to ensure that if systems are attacked, not everything can be taken down at once.

Implementing some form of certification or compliance such as ISO 27001, SOC or iRAP forces companies to become more vigilant through the stringent processes and controls imposed once they achieve compliance. This in turn naturally leads to better and more complete disaster recovery and business continuity plans that are written down and require testing and validation in order to pass certification.

From a backup point of view, with most systems being virtual these days it's important to consider a backup strategy that not only makes use of the 3-2-1 rule of backups, but also implements some form of air-gapped backups that are, in theory, completely separate from and inaccessible to production networks, meaning that only a few very trusted employees have access to the backup and restore media. In practice, implementing a complete air-gapped solution is complex and potentially costly, and this is where service providers are gambling their futures on scenarios that have a small percentage chance of happening…even though the likelihood of such a scenario playing out is greater than it's ever been.

In a situation like Verelox's, I wonder if, like most IaaS providers, they didn't back up all client workloads by default, meaning that backup was an additional service charge that some customers didn't know about…that said, if the backup systems are wiped clean, is there any use in having those services anyway? That is to say…is there a backup of the backup? This being the case, I also believe that businesses need to start looking at cross-cloud backups and not rely solely on their provider's backup systems. Something like the Veeam Agents or Cloud Connect can help here.

So What Can Be Done at an Employee Level?

The more I think about the possible answer to this question, the more I believe that service providers can't fully protect themselves from such internal attacks. At some point trust supersedes all else, and no amount of vetting or process can stop someone with the right sort of access doing damage. To that end, making sure that you are looking after your employees is probably the best defence against someone feeling aggrieved enough to carry out a malicious attack such as the one Verelox has just gone through. In addition to looking after employees' wellbeing, it's also a good idea to…within reason…keep tabs on an employee's state in life in general. Are they going through any personal issues that might make them unstable, or have they been wronged by someone else within the company? Generally, social issues should be picked up during the hiring process, but complete vetting of employee stability is always going to be a lottery.

Conclusion

As mentioned above, this type of attack is a worst-case scenario for every service provider operating today…there are steps that can be taken to minimize the impact and protect against an employee getting to the point where they choose to do damage, but my feeling is we haven't seen the last of these attacks and unfortunately more will suffer…so, where you can, implement policy and procedure to protect against them, and to recover when or if they do happen.

Resources:

https://www.crn.com.au/news/devastating-cyber-attack-turns-melbourne-victim-into-evangelist-397067/page1

https://www.itnews.com.au/news/distributeit-hit-by-malicious-attack-260306

https://news.ycombinator.com/item?id=14522181

Reddit: Verelox (Netherlands hosting company) servers wiped by ex-admin (r/sysadmin)

Homelab – Lab Access Made Easy with Free Veeam Powered Network

A couple of weeks ago at VeeamON we announced the RC of Veeam PN, a lightweight SDN appliance that has been released for free. While the main messaging is focused around extending network availability for Microsoft Azure, Veeam PN can be deployed as a standalone solution via a downloadable OVA from the veeam.com site. While testing the product through its early dev cycles, I immediately put into action a use case that allowed me to access my homelab and other home devices while I was on the road…all without having to set up and configure a relatively complex VPN or remote access solution.

There are a lot of existing solutions that do what Veeam PN does, and a lot of them are decent at what they do; however, the biggest difference for me, comparing say the VPN functionality in pfSense, is that Veeam PN is purpose-built and can be set up within a couple of clicks. The underlying technology is built on OpenVPN, so there is a level of familiarity and trust with what lies under the hood. The other great thing about leveraging OpenVPN is that any Windows, macOS or Linux client will work with the configuration files generated for point-to-site connectivity.

Homelab Remote Connectivity Overview:

While on the road I wanted to access my homelab/office machines with minimal effort and without relying on services published externally via my entry-level Belkin router. I also didn't have a static IP, which always proved problematic for remote services. At home I run a desktop that acts as my primary Windows workstation, which also has VMware Workstation installed. I then have my SuperMicro 5028D-TNT4 server that has ESXi installed and runs my NestedESXi lab. I need to be able to at least RDP into that Windows workstation, but also access the management vCenter, SuperMicro IPMI and other systems running on the 192.168.1.0/24 subnet.

As seen above, I also wanted to directly access workloads in the NestedESXi environment, specifically on the 172.17.0.0/24 and 172.17.1.0/24 networks. There will be more detail on my use case in a follow-up post, but as you can see from the diagram above, with the Tunnelblick OpenVPN client on my MBP I am able to create a point-to-site connection to the Veeam PN Hub, which is in turn connected via site-to-site to each of the subnets I want to connect into.

Deploying and Configuring Veeam Powered Network:

As mentioned above, you will need to download the Veeam PN OVA from the veeam.com website. This Veeam KB describes where to get the OVA and how to deploy and configure the appliance for first use. If you don't have a DHCP-enabled subnet to deploy the appliance into, you can configure a static address by accessing the VM console, logging in with the default credentials and modifying the /etc/network/interfaces file as described here.
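
As a sketch, assuming the appliance's Ubuntu-style networking, a static configuration in that file might look like the following (addresses are examples only):

    auto eth0
    iface eth0 inet static
        address 192.168.1.50
        netmask 255.255.255.0
        gateway 192.168.1.1
        dns-nameservers 192.168.1.1

Restart networking (or simply reboot the appliance) for the change to take effect.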

Components

  • Veeam PN Hub Appliance x 1
  • Veeam PN Site Gateway x number of sites/subnets required
  • OpenVPN Client

The OVA is 1.5GB and, when deployed, the virtual machine has a base specification of 1 vCPU, 1GB of vRAM and 16GB of storage, which if thin provisioned consumes a tick over 5GB initially.

Networking Requirements

  • Veeam PN Hub Appliance – Incoming Ports TCP/UDP 1194, 6179 and TCP 443
  • Veeam PN Site Gateway – Outgoing access to at least TCP/UDP 1194
  • OpenVPN Client – Outgoing access to at least TCP/UDP 6179

Note that as part of the initial configuration you can configure the site-to-site and point-to-site protocols and ports, which is handy if you are deploying into a locked-down environment and want Veeam PN to listen on different port numbers.

In my setup the Veeam PN Hub appliance has been deployed into Azure, mainly because that's where I was able to test the product initially, but also because in theory it provides a centralised, highly available location for all the site-to-site connections to terminate. This central Hub can be deployed anywhere, and as long as HTTPS connectivity is configured correctly you can access the web interface and start to configure your site and standalone clients.
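
If your Hub also lives in Azure, a sketch of opening the default ports on its network security group with the Azure CLI is below (the resource group and NSG names are placeholders):

    az network nsg rule create --resource-group veeampn-rg --nsg-name veeampn-nsg \
        --name Allow-VeeamPN --priority 100 --access Allow --protocol '*' \
        --destination-port-ranges 443 1194 6179

If you changed the site-to-site or point-to-site ports during the initial configuration, adjust the rule to match.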

Configuring Site Clients (site-to-site):

To complete the configuration of the Veeam PN Site Gateways you need to register the sites from the Veeam PN Hub appliance. When you register a client, Veeam PN generates a configuration file that contains the VPN connection settings for that client. You must use the configuration file (downloadable as an XML) to set up the Site Gateways. Referencing the diagram at the beginning of the post, I needed to register three separate client configurations as shown below.

Once this was completed I deployed three Veeam PN Site Gateways on my home office infrastructure as shown in the diagram…one for each site or subnet I wanted to have extended through the central Hub. I deployed one to my Windows VMware Workstation instance on the 192.168.1.0/24 subnet and, as shown below, two Site Gateways into my NestedESXi lab on the 172.17.0.0/24 and 172.17.1.0/24 subnets respectively.

From there I imported into each corresponding Site Gateway the site configuration file generated from the central Hub appliance, and in as little as three clicks on each one, all three networks were joined via site-to-site connectivity to the central Hub.

Configuring Remote Clients (point-to-site):

To be able to connect into my home office and homelab while on the road, the final step is to register a standalone client from the central Hub appliance. Again, because Veeam PN leverages OpenVPN, what we are producing here is an OVPN configuration file that has all the details required to create the point-to-site connection…noting that there isn't any requirement to enter a username and password, as Veeam PN authenticates using SSL authentication.

On my MBP I'm using the Tunnelblick OpenVPN client. I've found it to be an excellent client, but obviously, this being OpenVPN, there are a bunch of other clients for pretty much any platform you might be running. Once I've imported the OVPN configuration file into the client, I am able to authenticate against the Hub appliance endpoint and the site-to-site routes are injected into the network settings.

You can see above that the 192.168.1.0, 172.17.0.0 and 172.17.1.0 static routes have been added and set to use the tunnel interface's default gateway, which is on the central Hub appliance. This means that from my MBP I can now get to any device on any of those three subnets no matter where I am in the world…in this case I can RDP to my Windows workstation, connect to vCenter or SSH into my ESXi hosts.
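
A quick way to confirm the routes landed on the client side is below (a macOS sketch; the utun interface number Tunnelblick creates will vary):

    netstat -rn | grep -E '192\.168\.1|172\.17'
    # expect each subnet listed against a utunX tunnel interface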

Conclusion:

To summarise, these are the steps taken to set up and configure the extension of my home office network using Veeam PN's site-to-site connectivity, allowing me to access systems and services via a point-to-site VPN:

  • Deploy and configure Veeam PN Hub Appliance
  • Register Sites
  • Register Endpoints
  • Deploy and configure Veeam PN Site Gateway
  • Setup Endpoint and connect to Hub Appliance

Those five steps took me less than 15 minutes, and that included the OVA deployments…that to me is an extremely streamlined, efficient process to achieve what in the past could have taken hours and certainly would have involved a more complex set of commands and configuration steps. The simplicity of the solution is what makes it very useful for home labbers wanting a quick and easy way to access their systems…it just works!

Again, Veeam PN is free and is deployable from the Azure Marketplace to help extend availability for Microsoft Azure…or downloadable in OVA format directly from the veeam.com site. The use case I've described, and have been using without issue for a number of months, adds to the flexibility of the Veeam Powered Network solution.

References:

https://helpcenter.veeam.com/docs/veeampn/userguide/overview.html?ver=10

https://www.veeam.com/kb2271

Service Providers Be Aware: Samba Vulnerability is out there! SambaCry

Having worked in and around the service provider space for most of my career, when I heard about SambaCry, the Linux variant of WannaCry, last week, I thought to myself that it had the potential to be fairly impactful given the significant number of systems using Samba for file services in the wild. In fact, this post from GuardiCore puts the number at approximately 110,000, and I know that a lot of the storage appliances I use in my labs have Samba services that are exposed to the exploit.

The Samba team released a patch on May 24 for a critical remote code execution vulnerability in Samba, the most popular file sharing service for all Linux systems. Samba is commonly included as a basic system service on other Unix-based operating systems as well.

This vulnerability, indexed CVE-2017-7494, enables a malicious attacker with valid write access to a file share to upload and execute an arbitrary binary file which will run with Samba permissions.

The flaw can be exploited with just a few lines of code, requiring no interaction on the part of the end user. All versions of Samba from 3.5 onwards are vulnerable.

It’s worth reading the GuardiCore post in detail as it lists the differences between WannaCry and SambaCry and why potentially the linux exploit has more potential for damage due to the fact it targets weak passwords that allow lateral movement. They have written an NMAP script to easily detect vulnerable Samba servers.

Apart from upgrading to the latest builds there is a workaround in place…if your Samba server is vulnerable and patching or upgrading is not an option, add the following line to the Samba configuration file (smb.conf):

nt pipe support = no

Then restart the network’s SMB daemon (smbd)

It's a pretty simple workaround to stop systems being impacted. Again, to the service providers out there: if you haven't already done so, put out an advisory to your tenants to ensure they upgrade or put in the workaround! Also, for all the homelab users out there, as Anton Gostev pointed out in his weekly Veeam Forum Digest, older NAS devices and even routers might be impacted, and those are the kind of devices that won't get updates and generally hold valuable personal information…so again, make sure everything is checked and the workaround put into play.

References:

https://www.samba.org/samba/history/security.html

https://twitter.com/hashtag/sambacry?f=tweets&vertical=default

Quick Fix: VCSA 503 Service Unavailable Error

I’ve just had to fix one of my VCSA’s again from the infamous 503 Service Unavailable error that seems to be fairly common with the VCSA even though it’s was claimed to be fixed in vCenter version 6.5d. I’ve had this error pop up fairly regularly since deploying my homelab’s vCenter Server Appliance as a version 6.5 GA instance and for the most part I’ve refrained from rebooting the VCSA just in case the error pops up upon reboot and have even kept a snapshot against the VM just in case I needed to revert to it on the high change that it would error out.

503 Service Unavailable (Failed to connect to endpoint: [N7Vmacore4Http20NamedPipeServiceSpecE:0x0000559b1531ef80] _serverNamespace = / action = Allow _pipeName =/var/run/vmware/vpxd-webserver-pipe)

After doing a Google search for any permanent solutions to the issue, I came across a couple of posts referencing USB passthrough devices that could trigger the error, which was plausible given I was using an external USB hard drive. IP changes also seem to be a trigger for the error, though in my case that wasn't the cause. There is a good Reddit thread here that talks about duplicate keys…again related to USB passthrough. It also links externally to some other solutions that were not relevant to my VCSA.

Solution:

As referenced in this VMware Communities forum post, to fix the issue I first had to find out if I did have a duplicate key error in the VCSA logs. To do that I dropped into the VCSA shell, went into /var/log and searched for any file containing device_key followed by already exists. As shown in the image above, this returned a number of entries confirming that I had duplicate keys and that they were causing the issue.

The VMware vCenter Server Appliance vpxd 6.5 logs are located in the /var/log/vmware/vmware-vpx folder.
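
A sketch of that search from the VCSA shell (log files rotate, so recurse over the whole folder):

    grep -ir "device_key.*already exists" /var/log/vmware/vmware-vpx/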

What was required next was to delete the duplicate entries from the embedded Postgres database tables. To connect to the embedded Postgres database, you need to run the psql client from the VCSA shell:
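
On my 6.5 appliance the psql client is bundled under /opt/vmware/vpostgres, and the database name and user below are the appliance defaults:

    /opt/vmware/vpostgres/current/bin/psql -d VCDB -U postgres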

To remove the duplicate key I then ran a DELETE against the offending row and rebooted the appliance, noting that the id and device_key will vary.
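
The shape of that delete is sketched below. The table name and values are illustrative only (take them from the duplicate key errors in your own logs), and it goes without saying: snapshot or back up the VCSA before touching the database.

    -- placeholders throughout: substitute the table, id and device_key from your logs
    DELETE FROM <table_from_log_error> WHERE id = <id> AND device_key = <device_key>;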

Once everything rebooted, all the services started up and I had a functional vCenter again, which was a relief given I was about five minutes away from a restore or a complete rebuild…and ain't nobody got time for that!

Reddit: vCenter (VCSA) 6.5 broken after restart (r/vmware)

Reference:

https://communities.vmware.com/thread/556490

Released: vCloud Director SP 8.20.0.1

Last week VMware released an update for vCloud Director SP (Build 5439762), and while the small version increment suggests a minor release, it actually contains a couple of important new features: it introduces support for VVol datastores, which hopefully will get vCAN service providers looking at VVols a little more, as well as a new Cell Management Tool command to help debug the auto-import feature. There are also a number of resolved issues that make the release notes worth reading through.

  • Support for VVol (Virtual Volume) datastores
    These datastores were introduced in vSphere 6.0 and can now be selected for use by vCloud Director. See VMware Knowledge Base article 2113013 for more information about VVols.
  • New Cell Management Tool subcommand debug-auto-import
    You can use this command to get more information about why an adopted VM was not imported. See the debug-auto-import command help for information about command options (a quick sketch of invoking it follows this list).
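
As a sketch, the subcommand help is available from the usual cell-management-tool location on the cell:

    /opt/vmware/vcloud-director/bin/cell-management-tool debug-auto-import --help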

Also, just a reminder that if you are a vCAN service provider currently running vCloud Director, or looking to run it, the vCloud Director team has a VMLive session in June that will provide a sneak peek at the vCloud Director.Next roadmap. I'm looking forward to seeing what goodies the vCD product team are going to announce in regards to vCD enhancements.

Those with the correct entitlements can download the build here.

References:

http://pubs.vmware.com/Release_Notes/en/vcd/8-20/rel_notes_vcloud_director_8-20-0-1.html

VeeamON 2017 Wrap

VeeamON 2017 has come and gone, and even though I left New Orleans on Friday afternoon, I have only just arrived back home…54 hours of travel, transit and delays meant that my VeeamOFF continued longer than most! What an amazing week it was for Veeam, our partners and our customers…the announcements we made over the course of the event have been extremely well received, and it's clear to me that the Availability Platform vision we first talked about last year is in full execution mode.

The TPM team executed brilliantly, and along with the core team and the other 300 Veeam employees who were in New Orleans, it was great to see all the hard work pay off. The Technical Evangelists' main stage live demos all went off without a hitch (if not for some dodgy HDMI), and we all felt privileged to be able to demo some of the key announcements. On a personal note, it was a career highlight to present to approximately 2000 people and be part of a brand new product launch for Veeam with Veeam PN.

From a networking point of view it was great to meet so many new people and put faces to Twitter handles. It was also great to see the strong Veeam Vanguard representation at the event, and even though I couldn't party with the group like in previous years, it looked like they got a lot out of the week, both from a Veeam technical point of view and, without doubt, on the social front…I was living vicariously through them as they partied hard in New Orleans.

VeeamON Key Announcements:

Availability Suite v10

  • Built-in Management for Veeam Agent for Linux and Veeam Agent for Microsoft Windows
  • Scale-Out Backup Repository — Archive Tier
  • NAS Backup Support for SMB and NFS Shares
  • Veeam CDP (Continuous Data Protection)
  • Primary Storage Integrations — Universal Storage Integration API
  • DRaaS Enhancements (for service providers)
  • Additional enterprise scalability enhancements

For me, the above list shows our ongoing commitment to the enterprise, but more importantly a commitment to enhancing our platform so that our Veeam Cloud and Service Providers can continue to leverage our technology to create and offer cloud-based disaster recovery and backup services.

Product Announcements and Releases:

I have been lucky enough to work as the TPM lead on Veeam PN and I was extremely excited to be able to demo it for the first time to the world. I've written a blog post here that goes into more detail on Veeam PN, and if you want to view the main stage demo I've linked to the video in the last section…I start the demo at the 29-minute mark if you want to skip through.

vCloud Director Cloud Connect Enhancements:

As mentioned above, we have enhanced core capabilities in v10 when it comes to Cloud Connect Replication and Cloud Connect Backup. Obviously, the announcement that we will be supporting vCloud Director is significant, and one that a lot of our cloud and service providers are extremely happy with. It makes the DRaaS experience that much more complete, and when you add the CDP features in the core platform, which will allow sub-minute RPOs for replicas, it firmly places Cloud Connect as the market leader in replication-as-a-service technologies.

We also announced backup-to-tape features for Cloud Connect Backup, which will allow cloud and service providers to offload long-term backup files to cheaper storage. Note that this isn't limited to tape if used in conjunction with a virtual tape library. Hopefully our VCSPs can create revenue-generating service offerings around this feature as well.

VCSP Council Meeting:

On Thursday our R&D leads met with a select group of our top cloud and service provider partners over a three-hour lunch meeting, which could have gone all day if time had permitted. It was great to be on the other side of the fence for the first time and hear all the great feedback, advice and suggestions from the group. It's encouraging to hear how Veeam Backup & Replication has become the central platform for IaaS, cloud replication and backup offerings, and with the v10 enhancements I expect that to be even more the case moving forward.

Main Stage Recordings:

Wednesday and Thursday mornings both saw main stage general sessions where we announced our new products and features, along with keynotes from Sanjay Poonen and Mark Russinovich as well as co-CEO Peter McKay and co-founder Ratmir Timashev. They are worth a look and I've posted links to the video recordings below. Note that they are unedited and contain all the changeovers and wait times.

https://www.veeam.com/veeamon/live

Press Releases:
