Monthly Archives: December 2017

Top Posts 2017

2017 is done and dusted and looking back on the blog over the last twelve months I’ve not been able to keep up the pace of the previous two years in terms of churning out content. In 2017 I managed 90 posts (including this one), which was down on the 124 of last year and the 110 the year before that. My goal has always been to put out at least two quality posts a week, however I found that the travel component of my new role has impacted my productivity and tinkering time, which is where a lot of the content comes from…however it was still a record year for site visits (up over 200K) and I did manage to publish the 400th blog post on Virtualization is Life! since going live in 2012.

Looking back through the statistics generated via JetPack, I’ve listed the Top 10 Blog Posts from the last 12 months. This year the VCSA, NSX, vCenter Upgrades/Migrations and Homelab posts dominated the top ten. As I posted about a couple of months back, the common 503 error for the VCSA is a trending search topic. I was also happy that my post on my Working from Home experience over the last 12 months resonated with a lot of people.

  1. Quick Fix: VCSA 503 Service Unavailable Error
  2. HomeLab – SuperMicro 5028D-TNT4 Storage Driver Performance Issues and Fix
  3. ESXi 6.5 Storage Performance Issues and Fix
  4. What I’ve Learnt from 12 Months Working From Home
  5. NSX Bytes: Updated – NSX Edge Feature and Performance Matrix
  6. Upgrading Windows vCenter 5.5 to 6.0 In-Place: Issues and Fixes
  7. Homelab – Lab Access Made Easy with Free Veeam Powered Network
  8. NSX Bytes: NSX-v 6.3 Host Preparation Fails with Agent VIB module not installed
  9. Quick Look – vSphere 6.5 Storage Space Reclamation
  10. NSX Edge vs vShield Edge: Part 1 – Feature and Performance Matrix

In terms of the Top 10 new posts created in 2017, the list looks more representative of my Veeam content, with a lot of interest in Veeam PN and also, as I would hope, my vCloud Director posts.

  1. ESXi 6.5 Storage Performance Issues and Fix
  2. What I’ve Learnt from 12 Months Working From Home
  3. Upgrading Windows vCenter 5.5 to 6.0 In-Place: Issues and Fixes
  4. Homelab – Lab Access Made Easy with Free Veeam Powered Network
  5. NSX Bytes: NSX-v 6.3 Host Preparation Fails with Agent VIB module not installed
  6. migrate2vcsa – Migrating vCenter 6.0 to 6.5 VCSA
  7. Veeam is now in the Network Game! Introducing Veeam Powered Network.
  8. NestedESXi – Network Performance Improvements with Learnswitch
  9. Released: vCloud Director 9.0 – The Most Significant Update To Date!
  10. VMware Flings: Top 5 – 2017 Edition

This year I was honoured to have this blog voted #19 in the TopvBlog2017, of which I am very proud, and I’d like to thank the readers and supporters of this blog for voting for me! Thanks must also go to my site sponsors who are all listed on the right hand side of this page.

Again, while I found it difficult to keep up the pace of previous years, I fully intend to keep pushing this blog while staying true to its roots of vCloud Director and core VMware technologies like NSX and vSAN. There will be a lot of Veeam posts around product deep dives and release info, and I’ll continue to generate content around what I am passionate about…and that includes all things hosting, cloud and availability!

I hope you can join me in 2018!

#LongLivevCD

A Year of Travel – A Few Interesting Stats

This year was my first full year working for Veeam and, my role being global, it requires me to travel to locations and events where my team presents content and engages with technical and social communities. We also travel to various Veeam related training and enablement events throughout the year, as well as customer and partner meetings where and when required. I had set expectations about what a travel year might look like and in truth I found 2017 to be just right in terms of time away working versus being at home working and being with the family.

Without doubt the highlight of the year was VeeamON in New Orleans where I was able to participate in an industry event working for the vendor holding the show. Other highlights include presenting at VMworld, attending and presenting at a number of VeeamON and VMware Forums, Tours and user groups around APJ, attending EMEA SE Training in Warsaw and my first visit to Russia to meet with our R&D teams. I started the year with Sales Kick off in Orlando and finished with a team meeting in Boston, Thanksgiving in Phoenix and finally AWS re:Invent in Las Vegas.

So…what does all that travel look like?

Being based in Perth, Western Australia, I’m pretty much in the most isolated capital city in the world, meaning any flight is going to be significant. Even just flying to Sydney takes four to five hours…the same time it takes me to fly to Singapore. I love looking at stats and there are a number of tools out there that manage flight info. I use Tripit to keep track of all my trips, and there are now a number of sites that let you import your flight data for analysis.

With that my raw stats for 2017 are shown below:

  • Trips: 17
  • Days: 104
  • Distance: 262,769 km
  • Cities: 24
  • Countries: 9

Upon reflection I probably didn’t travel as much as I thought I would, with my away from home percentage being a relatively modest 28.4%, which I know isn’t high compared with others in my team, others at Veeam and certainly others in the industry. Where I did come out on top was in the distance travelled. Almost 263 thousand kilometers…a byproduct of living in Perth.

Of those 104 days away I apparently spent nearly 15 days in the air, which is amazing when you think about it. When I travel to the USA I do take some of the longest routes in the world, however my longest flight was not SYD-DFW but LAX-MEL.

I took 67 total flights across 21 airports and 6 airlines.

Interestingly, I made it 70% of the way to the moon in terms of distance, I flew mostly on a Saturday (which surprised me) and my average flight time was 5:12 hours.

In terms of delays I think I got off pretty lightly with only 6 hours of departure delays and 4 hours of arrival delays…though I did have an interesting experience on my way back from VeeamON that technically delayed me a whole day…the less said about that the better 🙂

Those that know me know that I am a bit of a plane snob and though I don’t have the plane nerd knowledge of Rick Vanover, I do like my planes big, new, shiny and modern. I still can’t go past the A380 and A330 but of late, the more I travel to Singapore the more I appreciate the more modern 737s.

So that’s a quick round up of what my year looked like living the life of a Global Evangelist/Technologist at Veeam. In one year’s time I’ll be very interested to see how 2018 shaped up compared to 2017!

References:

https://www.jetitup.com/MyStats/See/?name=Anthony~Spiteri

All stats were generated by Jet It Up and flight info was imported from Tripit.

Quick Look: Installing Veeam Powered Network Direct from a Linux Repo

Last week, Veeam Powered Network (Veeam PN) was released to GA. As a quick reminder, Veeam PN allows administrators to create, configure and connect site-to-site or point-to-site VPN tunnels easily through an intuitive and simple UI, all within a couple of clicks. Previously, during the RC period, there were two options for deployment…the appliance was available through the Azure Marketplace or downloadable from the veeam.com website and deployable on-premises from an OVA.

With the release of the GA a third option is available, which is installation directly from the Veeam Linux Repositories. This gives users the option to deploy their own Ubuntu Linux server and install the required packages through the Advanced Package Tool (APT). This is also the mechanism that works in the background to update Veeam PN through the UI via the Check for Updates button under Settings.

The requirements for installation are as follows:

  • Ubuntu 16.04 and above
  • 1 vCPU (Minimum)
  • 1 GB vRAM (Minimum)
  • 16 GB of Hard Drive space
  • External Network Connectivity

The Azure Marketplace Image and the OVA Appliance have been updated to GA build 1.0.0.380.

Installation Steps:

To install Veeam PN and its supporting modules you first need to add the Veeam Linux Repository to your system and configure APT to be on the lookout for the Veeam PN packages. To do this you need to download and add the Veeam Software Repository Key, add Veeam PN to the list of sources in APT and run an APT update, as sketched below.
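As a rough sketch, those three steps look something like this on Ubuntu…note the repository URL, key location and release name below are placeholders for illustration only, so check the official Veeam PN documentation for the exact values:

# Download and add the Veeam software repository key (URL is a placeholder)
wget -qO - http://repository.veeam.com/keys/veeam.gpg | sudo apt-key add -

# Add the Veeam PN repository to APT's list of sources (repo path and release name are placeholders)
echo "deb http://repository.veeam.com/pn/ubuntu xenial main" | sudo tee /etc/apt/sources.list.d/veeampn.list

# Refresh the package index so APT can see the Veeam PN packages
sudo apt-get update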

Once done, you need to install two packages via the apt-get install command. As shown below, there are server and UI components to install, and these will pull in a significant list of dependencies as well.
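Something along these lines…I’ve used placeholder package names here for the server and UI components, so run a quick search against the repo first to confirm the exact names:

# Confirm the exact package names published in the Veeam repo
apt-cache search veeam

# Install the server and UI components (names below are illustrative); APT will resolve the dependency list
sudo apt-get install veeampn-server veeampn-ui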

There is a lot that is deployed and configured as it goes through the package installs, and you may be prompted along the way to confirm overwriting the existing iptables rules if any existed on the system prior to install. Once completed you should be able to go to the Veeam PN web portal and perform the initial configuration.

The username to use at login will be the root user of your system.

So that’s it…an extremely easy and quick way to deploy Veeam Powered Network without having to download the OVA or deploy through the Azure Marketplace.

As a reminder, I’ve blogged about the three different use cases for Veeam PN:

Click on the links to visit the blog posts that go through each scenario, and download or deploy the GA from the Veeam.com website, the Azure Marketplace or now directly from the Veeam Linux Repos and give it a try. Again, it’s free, simple, powerful and a great way to connect or extend networks securely with minimal fuss.

Quick Look: Veeam Agent for Linux 2.0 – Now With Cloud Connect

Just over a year ago Veeam Agent for Linux version 1.0 was released and for me it still represents an important milestone for Veeam. During various presentations over the last twelve months I have talked about the fact that Linux backups haven’t really changed for twenty or so years and that the tried and trusted method for backing up Linux systems was solid…yet antiquated. For me, the GitLab backup disaster in February highlighted this fact, and Veeam Agent for Linux takes Linux backups out of the legacy and into the now.

Yesterday, Veeam Agent for Linux 2.0 (Build 2.0.0.400) was released and with it came a number of new features and enhancements improving on the v1.1 build released in May. Most important for me is the ability to now back up straight to a Cloud Connect Repository.

Integration with Veeam Cloud Connect provides the following options:

  • Back up directly to a cloud repository: Veeam Agent for Linux provides a fully integrated, fast and secure way to ship backup files directly to a Cloud Connect repository hosted by one of the many Veeam-powered service providers.
  • Granular recovery from a cloud repository: Volume and file-level recovery can be performed directly from a backup stored within the cloud repository, without having to pull the entire backup on-premises first.
  • Bare-metal recovery from a cloud repository: The updated Veeam Recovery Media allows you to connect to your service provider, select the required restore point from the cloud repository and restore your entire computer to the same or different hardware.

Configuration Overview:

To install, you need to download the relevant Linux Packages from here. For my example below, I’m installing on an Ubuntu machine but we do support a number of popular Linux Distros as explained here.
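For reference, a minimal install sequence on Ubuntu looks roughly like the below, assuming you’ve downloaded the Veeam software repository package from veeam.com (the exact .deb filename will differ depending on the release you grab):

# Install the Veeam software repository package downloaded from veeam.com (filename is an example)
sudo dpkg -i ./veeam-release-deb_1.0_amd64.deb

# Refresh the package index and install the agent itself
sudo apt-get update
sudo apt-get install veeam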

Once installed you want to apply a Server License to allow backing up to Cloud Connect Repositories.

Before configuring a new job through the Agent for Linux menu, you can add Cloud Providers via the agent CLI. There are a number of CLI menu options, as shown below.
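As a sketch, adding a service provider from the CLI looks along the lines of the following…the address, port and credentials are obviously placeholders, and the exact flags may differ between builds, so check veeamconfig cloud --help for the current syntax:

# Register a Veeam Cloud Connect service provider (all values are placeholders)
veeamconfig cloud add --name MyProvider --address cc.provider.example.com --port 6180 --login tenant01 --password 'MyPassword'

# List the configured service providers to confirm it was added
veeamconfig cloud list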

From here, you can use the CLI to configure a new Backup Job, but I’ve shown the process through the Agent UI. If you preconfigure the Service Provider with the CLI, you don’t need to enter the details again once you select Veeam Cloud Connect Repository.

Once done and the job has run you will see that we have the backup going direct to the Cloud Connect Repository!

From the CLI you can also get a quick overview of the job status, as shown below.
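For example, something like the following…again, veeamconfig --help will show the exact sub-commands available on your build:

# Show configured jobs and the result of recent backup sessions
veeamconfig job list
veeamconfig session list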

Wrap Up:

I’ve been waiting for this feature for a long time and, with the number of Linux server instances (both physical and virtual) that exist today across on-premises, partner-hosted IaaS platforms and hyper-scale clouds, I hope that Veeam Cloud & Service Providers really hone in on the opportunity that exists with this new feature.

For more on What’s New in 2.0 of Veeam Agent for Linux, you can click here.

References:

https://www.veeam.com/veeam_agent_linux_2_0_whats_new_wn.pdf

Veeam Backup & Replication 9.5 Update 3 – Top New Features

Earlier today we at Veeam released Update 3 for Veeam Backup & Replication 9.5 (Build 9.5.0.1536) and with it comes a couple of very anticipated new features. Back in May at VeeamON we announced a number of new features that were scheduled to be released as part of the next version of Backup & Replication (v10), however things have worked out such that we have brought some of those features forward into Update 3 for v9.5. It’s a credit to the Product Managers, QA and R&D that we have been able to deliver these ground-breaking features in an Update release.

Together with Update 3 we have also released:

Focusing back on Backup & Replication…Update 3 is a fairly significant update and contains a number of enhancements and fixes, with a lot of those enhancements aimed at improving the scalability of our flagship Backup & Replication platform. The biggest and most anticipated feature is the built-in agent management, meaning Backup & Replication can now manage virtual, physical and cloud-based workloads from a single console. Further to that, we have added official support for VMware Cloud on AWS and vCloud Director 9.0.

Below are the major features included in Update 3.

  • Built-in agent management
  • Insider protection for Veeam Cloud Connect
  • Data location tagging
  • IBM Spectrum Virtualize Integration
  • Universal Storage Integration API

Other notable enhancements and feature updates include support for 4TB virtual disks when using Direct Restore to Azure, and support for SQL Server 2017, which is also now a possible database target for the platform. There is extended support for the latest Windows 10, Windows Server and Hyper-V releases. In terms of storage, apart from the addition of IBM support and the Universal Storage Integration API, we added enhancements for Cisco HyperFlex, Data Domain and HPE 3PAR StoreServ, as well as support for Direct NFS to be more efficient with HCI platforms like Nutanix.

For the agents you can now do backup mapping for seeding, and restore from backup copies. For VMware there is a significant fix for a condition which reset CBT data for all disks belonging to a VM rather than just the resized disk, and there is support again for non-encrypted NBD transport.

There are also a lot of new features and enhancements for VCSPs and I’ll put together a couple of separate posts over the next few days outlining those features…though I did touch on a few of them in the Update 3 RTM post here.

A quick note also for VCSPs that you can upgrade from the RTM to the GA build without issue.

For a full list check out the release notes below and download the update here.

References:

https://www.veeam.com/kb2353

 

Homelab : Supermicro 5028D-TNT4 One Year On

It’s been just over a year since I unboxed my Supermicro 5028D-TNT4 and set up my new homelab. A year is a long time in computing, so I thought I would write up some thoughts on how the server has performed for me and give some feedback on what’s worked and what hasn’t worked with the Supermicro system as my homelab machine.

As a refresher, this is what I purchased…

I decided to go for the 8 core CPU mainly because I knew that my physical to virtual CPU ratio wasn’t going to exceed the processing power that it had to offer and as mentioned I went straight to 128GB of RAM to ensure I could squeeze a couple of NestedESXi instances on the host.

https://www.supermicro.com/products/system/midtower/5028/sys-5028d-tn4t.cfm

  • Intel® Xeon® processor D-1540, Single socket FCBGA 1667; 8-Core, 45W
  • 128GB ECC RDIMM DDR4 2400MHz Samsung UDIMM in 4 sockets
  • 4x 3.5 Hot-swap drive bays; 2x 2.5 fixed drive bays
  • Dual 10GbE LAN and Intel® i350-AM2 dual port GbE LAN
  • 1x PCI-E 3.0 x16 (LP), 1x M.2 PCI-E 3.0 x4, M Key 2242/2280
  • 250W Flex ATX Multi-output Bronze Power Supply

In addition to what comes with the Super Server bundle I purchased 2x Samsung EVO 850 512GB SSDs for initial primary storage and also got the SanDisk Ultra Fit CZ43 16GB USB 3.0 Flash Drive to install ESXi onto as well as a 128GB Flash Drive for extra storage.

One Year On:

The system has been rock solid, however I haven’t been able to squeeze in the two NestedESXi instances that I initially wanted. 128GB of RAM just isn’t enough to handle a full suite of systems. As you can see below, I am running three NestedESXi hosts with NSX, vCloud Director and Veeam. The supporting systems for the NestedESXi lab make up the majority of the resource consumption, but I also run the parent VCSA and domain controller on the server, leaving me with not a lot of breathing room RAM wise.

In fact I have to keep a couple of servers offline at any one point to keep the RAM resources in check.

What I Wanted:

For me, my requirements were simple; I needed a server that was powerful enough to run at least two NestedESXi lab stacks, which meant 128GB of RAM and enough CPU cores to handle approx. twenty to thirty VMs. At the same time I needed to not blow the budget and spend thousands upon thousands, and lastly I needed to make sure that the power bill was not going to spiral out of control…as a supplementary requirement, I didn’t want a noisy beast in my home office. I also wasn’t concerned with any external networking gear, as everything would be self contained in the NestedESXi virtual switching layer.

One Year On:

As mentioned above, to get to two NestedESXi lab stacks I would have needed to double the amount of RAM from 128GB to 256GB, however the one stack that I am running covers most of my needs and I have been able to work within the NestedESXi platform to do most of my day to day tinkering and testing. The CPU hasn’t been an issue and I’ve even started using some of the spare capacity to mine cryptocurrency…something that I had no intention of doing a year earlier.

In terms of power consumption the Xeon-D processor is amazing and I have not noticed any change in my power bill over the last 12 months…for me this is where the 5028D-TNT4 really shines, and because of the low power consumption the noise is next to nothing. In fact, as I type this out I can hear only the portable room fan…the Micro Tower itself is unnoticeable.

From a networking point of view I have survived this far without the need for external switching, while still being able to take advantage of vSphere private VLANs to accommodate my routing needs within the system.

Conclusion:

Looking at the WiredZone pages, the SuperMicro systems haven’t really changed much in the past 12 months and prices seem to have stayed the same, however the price of RAM is still higher than when I purchased the system. For me you can’t beat the value and relative bang for buck of the Xeon-D processors. My only real issue was with not having a “management cluster”, meaning that I had to take down all the systems to perform upgrades on the VCSA and hosts. To get around that I might consider purchasing a smaller NUC to run the core management VMs, which would free up 16GB of RAM on the SuperMicro, meaning I could squeeze a little more into the system.

All in all I still highly recommend this system for homelab use as it’s not only proven to be efficient, quiet and powerful…but also extremely reliable.

NSX Bytes – What’s new in NSX-T 2.1

In February of this year VMware released NSX-T 2.0 and with it came a variety of updates that continued to push NSX-T beyond NSX-v, while catching up in some areas where NSX-v was ahead. The NSBU has big plans for NSX beyond vSphere, and during the NSX vExpert session we saw how the future of networking is all in software…having just come back from AWS re:Invent I tend to agree with this statement, as organisations look to extend networks beyond traditional on-premises or cloud locations.

NSX-T’s main drivers relate to new data centre and cloud architectures with more heterogeneity, driving a different set of requirements to those of vSphere and focusing on multi-domain environments, which leads to a multi-hypervisor NSX platform. NSX-T is highly extensible and will address more endpoint heterogeneity in future releases, including containers, public clouds and other hypervisors. As you can see above, the existing use cases for NSX-T are mainly focused around DevOps, micro-segmentation and multi-tenant infrastructure.

Layer 3 accessibility across all types of platforms.

What’s new in NSX-T 2.1:

Today at Pivotal SpringOne, VMware is launching version 2.1 of NSX-T, and with it comes a networking stack underpinning Pivotal Container Service, direct integration with Pivotal Cloud Foundry and significant enhancements to load balancing capabilities for OpenStack Neutron and Kubernetes ingress. These load balancers can be virtual or bare metal. There is also native networking and security for containers, and Pivotal Operations Manager integration.

NSX-T Native Load Balancer:

NSX-T has two levels of routers as shown above…the ones that connect to the physical world and the ones labeled T1 in the diagram. Load balancing will be active on the T1 routers and has the following features:

  • Algorithms – Round Robin, Weighted Round Robin, Least Connections and Source IP Hash
  • Protocols – TCP, UDP, HTTP, HTTPS with passthrough, SSL Offload and End to end SSL
  • Health Checks – ICMP, TCP, UDP, HTTP, HTTPS
  • Persistence – Source IP, Cookie
  • Translation – SNAT, SNAT Automap and No SNAT

As well as the above, it will have L7 manipulation as well as OpenStack and Kubernetes ingress. Like NSX-v, these edges can be deployed in various sizes depending on the workload.

Pivotal Cloud Foundry and NSX-T:

For those that may not know, PCF is a cloud native platform for deploying and operating modern applications, and NSX-T provides the networking to support those modern applications. This is achieved via the Network Container Plugin. The Cloud Foundry NSX-T topology includes a separate network topology per organization, with every organization getting one T1 router. Logical switches are then attached per space. High performance north/south routing uses the NSX routing infrastructure, including dynamic routing to the physical network.

East/west traffic happens container to container, with every container having distributed firewall rules applied on its interface. There are also a number of visibility and troubleshooting counters attached to every container. NSX also controls IP management by supplying subnets from IP blocks to namespaces, and individual IPs and MACs to containers.

Log Insight Content Pack:

As part of this release there is also a new Log Insight NSX-T Content Pack that builds on the new visibility and troubleshooting enhancements mentioned above and allows Log Insight to monitor a lot of the container infrastructure with NSX.

Conclusion:

When it comes to the NSX-T 2.1 feature capabilities, the load balancing is a case of bringing NSX-T up to speed with where NSX-v is, however the thing to think about is how those capabilities will or could be used beyond vSphere environments…that is the big picture to consider here around the future of NSX, and it can be seen in the deeper integration with Pivotal Cloud Foundry.

AWS re:invent Thursday Keynote – Evolution of the Voice UI

Given this was my first AWS re:Invent I didn’t know what to expect from the keynotes, and while Wednesday’s keynote focused on new release announcements, Thursday’s keynote with Werner Vogels was more geared towards thought leadership on where AWS wants to take the industry it has enabled over the next two to five years. He titled this 21st Century Architecture and talked about how AWS don’t go about building their platforms by themselves in an isolated environment…they take feedback from clients, which allows them to radically change the way they build their systems.

The goal is for them to design very nimble and fast tools from which their customers can decide exactly how to use them. The sheer number of new tools and services I’ve seen AWS release since I first used them back in 2011 is actually quite daunting. As someone who is not a developer but has come from a hosting and virtualization background, I sometimes look at AWS as offering complex simplicity. In fact I wrote about that very thing in this post from 2015. In that post I was a little cynical of AWS, and while I still don’t have the opinion that AWS is the be all and end all of all things cloud, I have come around to understanding the way they go about things…

Treating the machine as Human:

I wanted to take some time to comment on Vogels’ thoughts on voice and speech recognition. The premise was that all past and current interaction with computers has been driven by the machinery…screen, keyboard, mouse and fingers are all common, however up to this point it could be argued that it’s not the way in which we naturally interact with other people. Because this interaction is driven by the machine, we know how to not only interact with machines, but also manipulate the inputs so we get what we want as efficiently as possible.

If I look at the example of SIRI or Alexa today…when I ask them to answer a query, I know to fashion the question in such a way that will allow the technology to respond…this works most of the time because I know how to structure the questions to get the right answer. I treat the machine as a machine! If I look at how my kids interact with the same devices, their way of asking questions is not crafted as if they were talking to a computer…they ask Alexa a question as if she was real. They treat the machine as a person.

This is where Vogels started talking about his vision for interfaces of the future to be more human-centric, all based around advances in neural network technology which allow for near realtime responses and will drive the future of interfaces to these digital systems. The first step in that is going to be voice, and Amazon has looked to lead the way in how home users interact with Amazon.com through Alexa. With the release of Alexa for Business this will extend beyond the home.

For IT pros there is a future in voice interfaces that allow you to not only get feedback on the current status of systems, but also (like in many SciFi movies of the last 30 to 40 years) command functions and dictate through voice the configuration, setup and management of core systems. This is already happening today with a few projects that I’ve seen using Alexa to interact with VMware vCenter, or like the video below showing Alexa interacting with a Veeam API to get the status of backups.

There are negatives to voice interfaces, with the potential for voice-triggered mistakes being high, however as these systems become more human-centric, voice should allow us to have a normal and more natural way of interacting with systems…at that point we may stop being able to manipulate the machine because the interaction will have become natural. AWS is trying to lead the way with products like Alexa, but almost every leading computer software company is toying with voice and AI, which means we are quickly nearing an inflection point from which we will see an acceleration of the technology, leading it to become a viable alternative to today’s more commonly used interfaces.