Author Archives: Anthony Spiteri

Homelab : Supermicro 5028D-TNT4 One Year On

It’s been just over a year since I unboxed my Supermicro 5028D-TNT4 and set up my new homelab. A year is a long time in computing, so I thought I would write up some thoughts on how the server has performed for me and give some feedback on what has and hasn’t worked with the Supermicro system as my homelab machine.

As a refresher, this is what I purchased…

I decided on the 8-core CPU mainly because I knew my physical-to-virtual CPU ratio wasn’t going to exceed the processing power it had to offer, and as mentioned I went straight to 128GB of RAM to ensure I could squeeze a couple of NestedESXi instances onto the host.

https://www.supermicro.com/products/system/midtower/5028/sys-5028d-tn4t.cfm

  • Intel® Xeon® processor D-1540, Single socket FCBGA 1667; 8-Core, 45W
  • 128GB (4x 32GB) Samsung ECC DDR4 2400MHz RDIMM in 4 sockets
  • 4x 3.5" hot-swap drive bays; 2x 2.5" fixed drive bays
  • Dual 10GbE LAN and Intel® i350-AM2 dual port GbE LAN
  • 1x PCI-E 3.0 x16 (LP), 1x M.2 PCI-E 3.0 x4, M Key 2242/2280
  • 250W Flex ATX Multi-output Bronze Power Supply

In addition to what comes with the SuperServer bundle, I purchased 2x Samsung 850 EVO 512GB SSDs for initial primary storage, a SanDisk Ultra Fit CZ43 16GB USB 3.0 flash drive to install ESXi onto, and a 128GB flash drive for extra storage.

One Year On:

The system has been rock solid; however, I haven’t been able to squeeze in the two NestedESXi instances that I initially wanted. 128GB of RAM just isn’t enough to handle a full suite of systems. As you can see below, I am running three NestedESXi hosts with NSX, vCloud Director and Veeam. The supporting systems for the NestedESXi lab make up the majority of the resource consumption, but I also run the parent VCSA and domain controller on the server, leaving me with not a lot of breathing room RAM wise.

In fact I have to keep a couple of servers offline at any one point to keep the RAM resources in check.
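To put rough numbers on the problem, here is a hypothetical RAM budget in Python. The per-VM allocations are illustrative guesses rather than figures from my actual lab, but they show why something always has to stay powered off:

```python
# Hypothetical RAM budget for a 128GB nested-lab host.
# Per-VM sizes below are assumptions for illustration, not measured values.
HOST_RAM_GB = 128
HYPERVISOR_RESERVE_GB = 8  # assumed overhead kept free for ESXi itself

vms = {
    "nested-esxi-1": 32,
    "nested-esxi-2": 32,
    "nested-esxi-3": 32,
    "nsx-manager": 16,
    "vcsa": 12,
    "vcloud-director": 8,
    "veeam-br": 8,
    "domain-controller": 4,
}

def ram_shortfall(vms, capacity_gb, reserve_gb=HYPERVISOR_RESERVE_GB):
    """GB of allocated VM memory beyond what the host can actually run."""
    return max(0, sum(vms.values()) - (capacity_gb - reserve_gb))

print(f"allocated: {sum(vms.values())} GB")  # 144 GB asked of a 128GB host
print(f"{ram_shortfall(vms, HOST_RAM_GB)} GB of VMs must stay powered off")
```

With these made-up numbers the lab asks for 144GB of a 120GB usable budget, so roughly 24GB worth of VMs has to be offline at any one time.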

What I Wanted:

For me, my requirements were simple: I needed a server powerful enough to run at least two NestedESXi lab stacks, which meant 128GB of RAM and enough CPU cores to handle approximately twenty to thirty VMs. At the same time I needed not to blow the budget and spend thousands upon thousands, and lastly I needed to make sure the power bill was not going to spiral out of control…as a supplementary requirement, I didn’t want a noisy beast in my home office. I also wasn’t concerned with any external networking gear, as everything would be self-contained in the NestedESXi virtual switching layer.

One Year On:

As mentioned above, to get to two NestedESXi lab stacks I would have needed to double the RAM from 128GB to 256GB; however, the one stack that I am running covers most of my needs, and I have been able to work within the NestedESXi platform to do most of my day-to-day tinkering and testing. The CPU hasn’t been an issue and I’ve even started using some of the spare capacity to mine cryptocurrency…something I had no intention of doing one year earlier.

In terms of power consumption the Xeon-D processor is amazing and I have not noticed any change in my power bill over the last 12 months…for me this is where the 5028D-TNT4 really shines, and because of the low power consumption the noise is next to nothing. In fact, as I type this out I can hear only the portable room fan…the Micro Tower itself is unnoticeable.

From a networking point of view I have survived this far without the need for external switching, while still being able to take advantage of vSphere private VLANs to accommodate my routing needs within the system.

Conclusion:

Looking at the WiredZone pages, the Supermicro systems haven’t really changed much in the past 12 months and prices seem to have stayed the same; however, the price of RAM is still higher than when I purchased the system. For me you can’t beat the value and relative bang for buck of the Xeon-D processors. My only real issue was not having a “management cluster”, meaning that I had to take down all the systems to perform upgrades on the VCSA and hosts. To get around that I might consider purchasing a smaller NUC to run the core management VMs, which would free up 16GB of RAM on the Supermicro, meaning I could squeeze a little more into the system.

All in all I still highly recommend this system for homelab use as it’s not only proven to be efficient, quiet and powerful…but also extremely reliable.

NSX Bytes – What’s new in NSX-T 2.1

In February of this year VMware released NSX-T 2.0, and with it came a variety of updates that continued the push of NSX-T beyond NSX-v while catching up in some areas where NSX-v was ahead. The NSBU has big plans for NSX beyond vSphere, and during the NSX vExpert session we saw how the future of networking is all in software…having just come back from AWS re:Invent I tend to agree with this statement, as organisations look to extend networks beyond traditional on-premises or cloud locations.

NSX-T’s main drivers relate to new data centre and cloud architectures with more heterogeneity, driving a different set of requirements from those of vSphere, focused on multi-domain environments and leading to a multi-hypervisor NSX platform. NSX-T is highly extensible and will address more endpoint heterogeneity in future releases, including containers, public clouds and other hypervisors. As it stands, the existing use cases for NSX-T are mainly focused around DevOps, micro-segmentation and multi-tenant infrastructure.

Layer 3 accessibility across all types of platforms.

What’s new in NSX-T 2.1:

Today at Pivotal SpringOne, VMware is launching version 2.1 of NSX-T and with it comes a networking stack underpinning Pivotal Container Services, direct integration with Pivotal Cloud Foundry and significant enhancements to load balancing capabilities for OpenStack Neutron and Kubernetes ingress. These load balancers can be virtual or bare metal. There is also native networking and security for containers and Pivotal operations manager integration.

NSX-T Native Load Balancer:
NSX-T has two levels of routers as shown above…the ones that connect to the physical world and the ones labeled T1 in the diagram. Load balancing will be active on the T1 routers and will have the following features:

  • Algorithms – Round Robin, Weighted Round Robin, Least Connections and Source IP Hash
  • Protocols – TCP, UDP, HTTP, HTTPS with passthrough, SSL Offload and End to end SSL
  • Health Checks – ICMP, TCP, UDP, HTTP, HTTPS
  • Persistence – Source IP, Cookie
  • Translation – SNAT, SNAT Automap and No SNAT

As well as the above it will have L7 manipulation, as well as OpenStack and Kubernetes ingress. Like NSX-v, these edges can be deployed in various sizes depending on the workload.
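As a rough illustration of two of the algorithms listed above, here is a minimal Python sketch of round robin and least connections pool selection. The member addresses are made up, and this is in no way the NSX-T implementation, just the general idea behind those two algorithms:

```python
# Minimal sketch of two pool-selection algorithms: round robin and
# least connections. Member addresses are hypothetical examples.
from itertools import cycle

class Pool:
    def __init__(self, members):
        self.conns = {m: 0 for m in members}  # member -> active connections
        self._rr = cycle(members)

    def round_robin(self):
        """Hand out members in a fixed rotation, wrapping back to the start."""
        return next(self._rr)

    def least_connections(self):
        """Pick the member currently carrying the fewest connections."""
        return min(self.conns, key=self.conns.get)

    def open_conn(self, member):
        self.conns[member] += 1

pool = Pool(["10.0.0.11", "10.0.0.12", "10.0.0.13"])
print([pool.round_robin() for _ in range(4)])  # wraps back to the first member

pool.open_conn("10.0.0.11")
pool.open_conn("10.0.0.11")
print(pool.least_connections())  # a less-loaded member wins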

Pivotal Cloud Foundry and NSX-T:

For those that may not know, PCF is a cloud-native platform for deploying and operating modern applications, and NSX-T provides the networking to support those modern applications. This is achieved via the Network Container Plugin. The Cloud Foundry NSX-T topology includes a separate network topology per organization, with every organization getting one T1 router. Logical switches are then attached per space. High-performance north/south routing uses the NSX routing infrastructure, including dynamic routing to the physical network.

East/west traffic happens container to container, with every container having distributed firewall rules applied on its interface. There are also a number of visibility and troubleshooting counters attached to every container. NSX also controls IP management by supplying subnets from IP blocks to namespaces, and individual IPs and MACs to containers.

Log Insight Content Pack:

As part of this release there is also a new Log Insight NSX-T Content Pack that builds on the new visibility and troubleshooting enhancements mentioned above and allows Log Insight to monitor a lot of the container infrastructure with NSX.

Conclusion:

When it comes to the NSX-T 2.1 feature capabilities, the load balancing is a case of bringing NSX-T up to speed with NSX-v; however, the thing to think about is how those capabilities will or could be used beyond vSphere environments…that is the big picture to consider here around the future of NSX, and it can be seen in the deeper integration with Pivotal Cloud Foundry.

AWS re:Invent Thursday Keynote – Evolution of the Voice UI

Given this was my first AWS re:Invent I didn’t know what to expect from the keynotes, and while Wednesday’s keynote focused on new release announcements, Thursday’s keynote with Werner Vogels was more geared towards thought leadership on where AWS wants to take the industry it has enabled over the next two to five years. He titled this 21st Century Architecture and talked about how AWS doesn’t build its platforms by itself in an isolated environment…it takes feedback from clients, which allows it to radically change the way it builds its systems.

The goal is for them to design very nimble and fast tools from which their customers can decide exactly how to use them. The sheer number of new tools and services I’ve seen AWS release since I first used them back in 2011 is actually quite daunting. As someone who is not a developer but has come from a hosting and virtualization background, I sometimes look at AWS as offering complex simplicity. In fact I wrote about that very thing in this post from 2015. In that post I was a little cynical of AWS, and while I still don’t think AWS is the be-all and end-all of all things cloud, I have come around to understanding the way they go about things…

Treating the machine as Human:

I wanted to take some time to comment on Vogels’ thoughts on voice and speech recognition. The premise was that all past and current interaction with computers has been driven by the machinery…screen, keyboard, mouse and fingers are all common; however, up to this point it could be argued that this is not the way we naturally interact with other people. Because this interaction is driven by the machine, we know how to not only interact with machines but also manipulate the inputs so we get what we want as efficiently as possible.

If I look at the example of Siri or Alexa today…when I ask them to answer a query, I know to fashion the question in such a way that will allow the technology to respond…this works most of the time because I know how to structure the questions to get the right answer. I treat the machine as a machine! If I look at how my kids interact with the same devices, their way of asking questions is not crafted as if they were talking to a computer…they ask Alexa a question as if she were real. They treat the machine as a person.

This is where Vogels started talking about his vision for interfaces of the future to be more human-centric, all based around advances in neural network technology which allow for near-realtime responses and will drive the future of interfaces to these digital systems. The first step is going to be voice, and Amazon has looked to lead the way in how home users interact with Amazon.com through Alexa. With the release of Alexa for Business this will extend beyond the home.

For IT pros there is a future in voice interfaces that allow you to not only get feedback on the current status of systems, but also (like in many SciFi movies of the last 30 to 40 years) command functions and dictate through voice the configuration, setup and management of core systems. This is already happening today with a few projects I’ve seen using Alexa to interact with VMware vCenter, or like the video below showing Alexa interacting with a Veeam API to get the status of backups.
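To give a feel for the glue involved in a skill like that, here is a hedged Python sketch of the logic that might turn a set of job results into a spoken response. The payload shape and field names are assumptions for illustration, not the documented Veeam API:

```python
# Hypothetical sketch: turn a backup-job status payload into an Alexa-style
# spoken response. The 'name'/'lastResult' shape is an assumed example,
# not the actual Veeam Enterprise Manager API schema.
def speak_backup_status(jobs):
    """jobs: list of {'name': str, 'lastResult': 'Success'|'Warning'|'Failed'}"""
    failed = [j["name"] for j in jobs if j["lastResult"] == "Failed"]
    if not failed:
        return f"All {len(jobs)} backup jobs completed successfully."
    return f"{len(failed)} of {len(jobs)} jobs failed: {', '.join(failed)}."

sample = [
    {"name": "SQL Nightly", "lastResult": "Success"},
    {"name": "Fileserver", "lastResult": "Failed"},
]
print(speak_backup_status(sample))  # 1 of 2 jobs failed: Fileserver.
```

In a real skill this function would sit behind the Alexa intent handler, with the job list fetched from whatever status API the backup product exposes.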

There are negatives to voice interfaces, with the potential for voice-triggered mistakes being high; however, as these systems become more human-centric, voice should give us a normal and more natural way of interacting with systems…at that point we may stop being able to manipulate the machine, because the interaction will have become natural. AWS is trying to lead the way with products like Alexa, but almost every leading software company is toying with voice and AI, which means we are quickly nearing an inflection point from which we will see an acceleration of the technology, leading to it becoming a viable alternative to today’s more commonly used interfaces.

AWS re:Invent – Expectations from a VM Hugger…

Today is the first official day of AWS re:Invent 2017 and things are kicking off with the Global Partner Summit. Today is also my first day of AWS re:Invent, and I am looking forward to experiencing a different type of big IT conference, with all previous experiences being at VMworld or the old Microsoft TechEds. Just by looking at the agenda, schedule and content catalog I can already tell re:Invent is a very, very different type of IT conference.

As you may or may not know, I started this blog as Hosting is Life! and the first half of my career was spent around hosting applications and web services…in that time I gravitated towards AWS solutions to help complement the hosting platforms I looked after, and I was actively using a few AWS services in 2011 and 2012 and attended a couple of AWS courses. After joining Zettagrid my use of AWS decreased, and it wasn’t until Veeam announced supportability for AWS storage as part of our v10 announcements that I decided to get back into the swing of things.

Subsequently we announced Veeam Availability for AWS, which leverages EBS snapshots to perform agentless backups of AWS instances, and more recently we were announced as a launch partner for VMware Cloud on AWS data availability solutions. For me, the fact that VMware has jumped into bed with AWS has obviously raised AWS’s profile in the VMware community, and it’s certainly being seen as the cool thing to know (or claim to know) within the ecosystem.

Veeam isn’t the only backup vendor looking to leverage what AWS has to offer by way of extending availability into the hyper-scale cloud; every leading vendor is rushing to claim features that offload backups to AWS cloud storage as well as offering services to protect native AWS workloads…as with IT pros, this is also the in thing!

Apart from backup and availability, my sessions are focused on storage, compute, scalability and scale, as well as some sessions on home automation with Alexa and the like. This year’s re:Invent is 100% a learning experience and I am looking forward to attending a lot of sessions and taking a lot of notes. I might even come out taking the whole serverless thing a little more seriously!

Moving away from the tech, the AWS world is one that I am currently removed from…unlike the VMware ecosystem and VMworld, I wouldn’t know 95% of the people delivering sessions, and I certainly don’t know much about the AWS community. While I can’t fix that just by being here this week, I can certainly use this week as a launching pad to get myself more entrenched in the technology, the ecosystem and the community.

Looking forward to the week and please reach out if you are around.

VCSP Important Notice: 9.5 Update 3 RTM Is Out…With Insider Protection and more!

Earlier this week, Veeam made available to our VCSP partners the RTM of Update 3 for Backup & Replication 9.5 (Build 9.5.0.1335). Update 3 is what we term a breaking update, meaning that if a Cloud Connect tenant upgrades from any previous 9.5 version before the VCSP does, backup and replication functionality will break. With that in mind, the RTM has been made available so our VCSP partners can install and test it before it is pushed out to production ahead of the GA release. Veeam Backup & Replication releases from 8.0 (build 8.0.0.2084) can write backups to a cloud repository on 9.5 Update 3, and any release from 9.0 (build 9.0.0.902) can write replicas to a cloud host on 9.5 Update 3.

Update 3 is a very significant update and contains a number of enhancements and known-issue fixes, with a lot of those enhancements aimed at improving the scalability of the Backup & Replication platform that VCSPs can take advantage of. One important note is around new licensing for Cloud Connect Backup that all VCSPs should be aware of: there is a detailed post in the VCSP Forums, and emails will be sent to explain the changes.

We have also pushed out a number of new features for our VCSPs, with two of them highlighted below. One is the new Insider Protection feature, or Recycle Bin, for Cloud Connect Backups, and the other is a long-awaited ask from our providers: Maintenance Mode for Cloud Connect.

  • Insider protection: Option to hold backups deleted from a tenant’s cloud repository in a “recycle bin” folder for a designated period of time. For more information, see this post in the VCSP forum.

  • Maintenance Mode: Allows you to temporarily stop tenant backup and backup copy tasks from writing to cloud repositories. Already running tenant tasks are allowed to finish, but new tenant tasks fail with an error message indicating that the service provider infrastructure is undergoing maintenance. This is supported at the tenant end in 9.5 Update 3 GA, Agent for Windows 2.1 and Agent for Linux 2.0.

There has also been a lot of work to improve and enhance the scalability of the Backup & Replication Cloud Connect functionality, both to accommodate the increasing usage of Veeam Agent for Windows, of which a new version (2.1) is coming in early December, and to prepare for the release of Veeam Agent for Linux (2.0), which will include support for backups to be sent to Cloud Connect repositories. As for the recently released Veeam Availability Console, Update 3 is 100% compatible with the 2.0 GA (Build 2.0.1.1319) released last week, and is good from Update 2 or later.

Conclusion:

Once again, Update 3 for Veeam Backup & Replication is an important update for VCSPs running Cloud Connect services to apply in preparation for the GA release, which will happen in about two weeks. Once it is released I’ll link to the VeeamKB for a detailed look at the fixes, but for the moment, if you have the ability to download the update, do so and have it applied to your instances. For more info on the RTM, head to the VCSP Forum post here.

Released: NSX-v 6.3.5 and New Features and Fixes

Last week VMware released NSX-v 6.3.5 (Build 7119875), which contains a few new features and addresses a number of bugs from previous releases. Going through the release notes there are a lot of known issues worth being aware of, and more than a few apply to service providers…specifically, there are a lot around Logical and Edge Routing functions. The other interesting point about this release is that it is apparently the same build that runs on VMware Cloud on AWS instances, as mentioned by Ray Budavari.

The new features in this build are:

  • For vCenter 6.5 and later, Guest Introspection VMs will, on deployment, be named Guest Introspection (XX.XX.XX.XX), where XX.XX.XX.XX is the IPv4 address of the host on which the GI machine resides. This occurs during the initial deployment of GI.
  • The Guest Introspection service VM will now ignore network events sent by guest VMs unless Identity Firewall or Endpoint Monitoring is enabled
  • You can also modify the threshold for CPU and memory usage system events with this API: PUT /api/2.0/endpointsecurity/usvmstats/usvmhealththresholds
  • Serviceability enhancements to L2 VPN including
    • Changing and/or enabling logging on the fly, without a process restart
    • Enhanced logging
    • Tunnel state and statistics
    • CLI enhancements
    • Events for tunnel status changes
  • Forwarded syslog messages now include additional details previously only visible on the vSphere Web Client
  • Host prep now has troubleshooting enhancements, including additional information for “not ready” errors

That last new feature above is seen below…you can see the EAM Status message just below the NSX Manager IP which is a nice touch given the issues that can happen if EAM is down.

If you click on the Not Ready Installation Status you now get a more detailed report of what could be wrong and suggestions of how to resolve.
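On the health-threshold API mentioned in the feature list, a call against it might be assembled as below. The endpoint path comes straight from the release notes, but the manager hostname and the XML body are assumptions for illustration, so check the NSX API guide for the documented schema before using this:

```python
# Sketch of a PUT against the GI health-threshold endpoint from the
# release notes. The hostname and XML body below are hypothetical —
# consult the NSX for vSphere API guide for the real schema.
NSX_MANAGER = "https://nsx-manager.lab.local"  # hypothetical manager address

THRESHOLD_BODY = """<usvmHealthThresholds>
  <cpuThreshold>85</cpuThreshold>
  <memoryThreshold>90</memoryThreshold>
</usvmHealthThresholds>"""

def build_request(manager, body):
    """Assemble the request; sending it (e.g. via requests.put) is left out."""
    return {
        "method": "PUT",
        "url": f"{manager}/api/2.0/endpointsecurity/usvmstats/usvmhealththresholds",
        "headers": {"Content-Type": "application/xml"},
        "data": body,
    }

req = build_request(NSX_MANAGER, THRESHOLD_BODY)
print(req["method"], req["url"])
```

Authentication against NSX Manager (basic auth over HTTPS) would also be needed on a real call; it is omitted here to keep the sketch minimal.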

Important Fixes:

  • VMs migrated from 6.0.x can cause host PSOD: when upgrading a cluster from 6.0.x to 6.2.3-6.2.8 or 6.3.x, the exported VM state can be corrupted and cause the receiving host to PSOD
  • “Upgrade Available” link not shown if cluster has an alarm: users are not able to push the new service spec to EAM because the link is missing, and the service will not be upgraded
  • NSX Manager crashes with high NSX Manager CPU: NSX Manager has an OOM (out of memory) error and continuously restarts
  • NSX Controller memory increases with hardware VTEP configuration, causing high CPU usage: a controller process memory increase is seen with hardware VTEP configurations running for a few days. The memory increase causes high CPU usage that lasts for some time (minutes) while the controller recovers the memory. During this time the data path is affected
  • Translated IPs are not added to vNIC filters, causing Distributed Firewall to drop traffic: when new VMs are deployed, the vNIC filters do not get updated with the right set of IPs, causing Distributed Firewall to block the traffic

Those with the correct entitlements can download NSX-v 6.3.5 here.

References:

https://docs.vmware.com/en/VMware-NSX-for-vSphere/6.3/rn/releasenotes_nsx_vsphere_635.html

Veeam Availability Console – What’s in it for Service Providers

Today the Veeam Availability Console went GA, meaning that after a long wait our new multi-tenant service provider management and reporting platform is available for download. VAC is a significant evolution of the Managed Backup Portal released in 2016 and acts as a central portal for Veeam Cloud and Service Providers to remotely manage and monitor customer instances of Backup & Replication, including the ability to monitor Cloud Connect Backup and Replication jobs and failover plans. It is also the central mechanism to deploy and manage (Windows) agents, which includes the ability to install agents onto on-premises machines and apply policies to those agents once deployed.

Veeam® Availability Console is a cloud-enabled platform built specifically for Veeam Cloud & Service Provider (VCSP) partners and resellers looking to launch a managed services business. Through its ability to remotely provision, manage and monitor virtual, physical and cloud-based Veeam environments without any special connectivity requirements, Veeam Availability Console enables you to increase revenue and add value to all your customers.

  • Simplified Setup – now allowing on-premises installs
  • Remote backup agent management and monitoring
  • Remote discovery and deployment with enhanced support for Veeam Cloud Connect
  • Web-based multi-tenant portal
  • Native billing and RESTful APIs

Cloud Connect Requirement:

The Cloud Connect Gateway is central to how the Veeam Availability Console operates, and all management traffic is tunneled through the Cloud Connect Gateways. If you are a current VCSP offering Cloud Connect services then you already have the infrastructure in place to facilitate VAC; however, if you are not a Cloud Connect partner you can apply for a special key that will enable you to deploy a gateway without the need for specific Cloud Connect Backup or Replication licenses.

For a deeper look at VAC architecture for Service Providers, head to Luca Dell’Oca’s VAC series here.

Designed for Service Providers First:

The Veeam Availability Console was designed from the ground up for service providers (there is an Enterprise version available) and contains a rich set of APIs that can be consumed for automation and provisioning purposes. There is also a three-tier multi-tenancy design allowing VCSPs to create restricted accounts for their partners or resellers, from which, in turn, another level of accounts can be created for their customers or tenants.

The multi-tenancy aspect means that partners/resellers and customers can control their own backups centrally from the console. Reporting on backup jobs can be viewed, and a mechanism to control those jobs is available, allowing retry/stop/start tasks against those jobs. If that’s not enough control, or more troubleshooting on failed jobs needs to be done, the Remote Console feature introduced in Veeam Backup & Replication Update 2 has been integrated into the console.

VAC also includes built-in reporting and billing functionality, which enables VCSPs who don’t have automated reporting and billing capabilities to offer them to their customers. The reporting can be accessed via the API, meaning that if an existing billing engine is in use it could interface with VAC to pull out key data points.

The Service Provider Opportunity:

Over the past year I’ve talked a lot about the opportunity for Veeam’s Cloud and Service Providers to use Veeam’s Agents to capture backups for workloads that were previously out of reach. VAC is central to this and opens up the ability to back up instances that live on-premises (physical or virtual) or in any public cloud, hyper-scaler or otherwise.

If you are a reseller looking to cash in on the growing data availability market, you should be looking at how VAC can help you get started by leveraging the features mentioned above. Secondly, if you are a reseller not running Cloud Connect Backup or Replication, the time is right to start looking at getting Cloud Connect deployed and generating revenue around backup and replication services.

For those existing VCSPs that are offering Cloud Connect services, adding VAC into the mix will allow you to take advantage of the agent opportunity that exists as shown above while also adding value to your existing Managed Backup and Cloud Connect services.

References and Product Guides:

https://www.veeam.com/vac_2_0_release_notes_rn.pdf

https://helpcenter.veeam.com/docs/vac/deployment/about.html?ver=20

https://www.veeam.com/availability-console-service-providers-faq.html

https://www.veeam.com/vac_2_0_whats_new_wn.pdf

Awarded vExpert Cloud – A New vExpert Sub Program

Last week Corey Romero announced the inaugural members of the vExpert Cloud sub-program. This is the third vExpert sub-program, following the vSAN and NSX programs announced last year. There are 135 initial vExpert Cloud members who have been awarded the title. As it happens, I am now a member of all three, which reflects the focus I’ve had, and still have, around VMware’s cloud, storage and networking products leading up to and after my move to Veeam last year.

Even with my move, that hasn’t stopped me working across these VMware verticals, as Veeam works closely with VMware to offer supportability and integration with vCloud Director as well as being certified with vSAN for data protection. More recently, as it pertains specifically to the vExpert Cloud program, we are going to be supporting vCloud Director in v10 of Backup & Replication for Cloud Connect Replication, and at VMworld 2017 we were announced as a launch partner for data protection for VMware Cloud on AWS.

For those wondering what does it take to be a part of the vExpert Cloud program:

We are looking for vExperts who are evangelizing VMware Cloud and delivering on the principles of the multi-cloud world being the new normal. Specifically, we are looking for community activities which follow the same format as the vExpert program (blogs, books, videos, public speaking, VMUG leadership, conference session speaking and so on).

And in terms of the focus of the vExpert Cloud program:

The program is focused on VMware Cloud influencer activities; VMware, AWS and other cloud environments; and use of the products and services in a way that delivers the VMware Cloud reality of consistency across multi-cloud environments.

Again, thank you to Corey and team for the award and I look forward to continuing to spread the community messaging around Cloud, NSX and vSAN.

What I’ve Learnt from 12 Months Working From Home

This week marks one year since I started at Veeam, and it feels like the twelve months has flown by. Before I started here at Veeam I had only worked for local companies here in Perth, though the last two had a national presence, which meant some travel interstate and occasionally overseas for events like VMworld. Prior to this role I was office bound; however this role, being part of a global team, means I work remotely from home. It’s something I thought would be a walk in the park…however the reality of working from home is far from that.

There is a growing norm (especially in IT) where location doesn’t matter and working remotely is embraced. The employer wins by getting the person they want…and for the employee the boundaries of locality are lessened, meaning more opportunities can be pursued. In my case, living in one of the most isolated cities in the world, I was aware of other vendor roles where people worked from home, so I knew that if the right role popped up I had a chance to remain in Perth…travel a lot…and work from home.

The role I’m in has me traveling roughly once every three days; however that comes in waves, and I’ll have periods of travel followed by periods at home, meaning I can be working from home for weeks on end. While this isn’t a definitive guide to working from home, I wanted to jot down some experiences and lessons learnt from my last 12 months, because the adjustment was tougher than expected.

If you want some generic advice there are lots of articles out there that list the Top Working from Home Tips, but below are my key takeaways from my experiences.

Getting into a Routine:

This is the obvious one; however it’s actually hard to achieve unless you really put your mind to it. Over the first two to three months I found myself still stuck in the old routine of getting up and effectively going to work. I sat in front of the computer from 8am to 5pm, had dinner, played with the kids and had family time. The problem was my team was spread across the globe, and I was then working from 9pm to 12-1am, so my screen time was significant. I wasn’t burning out, but I came to the realisation that because I was working from home, and because timezones meant nothing, I had to stop thinking like a 9-5 worker.

This involved setting a routine that was achievable. When home I now get up, have breakfast with the family (when possible) and then get ready to go into the study. For me, having a shower first thing is still optional and while that might disgust some people out there, I tend to wash up during my first break of the day. That break is usually around 11am after dealing with emails and when the east coast of the US starts to go to bed.

One of the things I try to do during the middle part of the day is get out to the backyard and shoot some hoops…basketball is a great game to play by yourself. Once I’ve had lunch I usually get back on the computer for a couple of hours and then head out to the gym for a workout. Once I get back home the kids are back from school, generally it’s time for dinner, and I try to fit in some family time where possible.

After family time I do the nightshift, when most of EMEA is well into their day and the US is starting to wake up. From 9pm till 12am (or later) I can work efficiently and tend to get a lot of work done. It’s also when most of the timezones I deal with are awake at the same time, so interaction with workmates is at its peak.

Getting over the Feeling of Loneliness:

Those that know me know that I am a pretty social guy…I love a good chat and enjoy interacting with people in the office. Those that have worked with me also know that I like to muck around a little bit and have a laugh during the day. All in all I enjoy people’s company, so probably the biggest adjustment to working from home was the fact that I did feel lonely to begin with. It was reassuring to hear that others I’ve mentioned this to who also work from home had felt the same…good in that I wasn’t alone in this.

The key that I’ve found to combating that sense of isolation is to ensure that I am not housebound 24 hours a day, five days a week. The thing that solved this for me was developing a routine where I get out of the house to go to the gym to be around other humans…and while I am not exactly having conversations with people at the gym, I’m at least physically around people, which seemed to help.

In addition to that, messaging platforms like Slack, WhatsApp and Skype provide critical social interaction, and while they can sometimes be distracting…they are essential to making sure that I feel connected with the outside world, which in turn helps beat the isolation.

Having a Proper Home Office Setup:

The last thing is around having a decent home office setup. I know a lot of people that work from home but work at the dinner table or on the kitchen bench. This isn’t conducive to being able to work consistently or efficiently. I made sure that there was a decent study when looking for a house and I’m lucky enough to have a good one at the moment. It’s isolated from the main living area of the house and set up in such a way that it closely replicates a proper office.

Apart from having all the right technology, one of the biggest things for me is keeping this space tidy and organised. It’s important to maintain a high standard even though no one else gets to see the setup. The other thing I’ve learnt is to make the space as desirable as possible to be around…because I spend all day there, I want to feel like I want to be there. Beyond the job being rewarding, for me it’s important to take pride in my work space, and that applies just as much when working from home.

Wrap Up:

One thing to finish up on is that support comes in many flavours while working from home. I’m lucky that I have a great boss and a great team that I work with…they help tremendously in making the working from home thing work. Without a great team and support structure it would indeed be a lonely gig.

All in all, after a period of adjustment I’ve settled into a decent routine while keeping myself sane during the periods when I am working from home. Ultimately what I learnt during the first twelve months of working from home is that you have to be disciplined. With the discipline to stick to my routines and get into a rhythm day in and day out, it’s become easier and more natural. That said, I still miss the office atmosphere, however there is some sacrifice that needs to be made in order to work in a role that is ultimately very rewarding.

And like actually being at an office…the key is to minimize distractions!

Released: vCloud Director 8.10 and 8.20 Point Updates

Last week VMware snuck out two point releases for vCloud Director 8.10 and 8.20. For those still running those versions you now have 8.10.1.1 (Build 6878548) and, for 8.20, 8.20.0.2 (Build 6875354) available for download. These are both patch upgrades and resolve a number of bugs, some of which appear to be mirrored in both versions.

Scanning the Release Notes, below are some of the more notable fixes:

8.10

  • Resource limit change for a vCloud Edge Gateway: Resolves an issue where the memory limit for a compact and full-4 Edge Gateway was insufficient. Memory was increased from 512MB to 2048MB.
  • Performing hardware changes to a VM fails: Resolves an issue where performing hardware changes to a VM in vCloud Director fails with an error message.
  • Degraded performance due to insufficient memory: Resolves an issue that could lead to an insufficient memory reservation of the NSX Edge VMs, which might cause poor performance.
  • Catalog synchronization failure: Resolves an issue where synchronization of a remote catalog item fails with an out of memory error, causing the vCloud Director cell to crash.

8.20

  • Incorrect status update for VM storage profile or disk-level storage: Resolves an issue that could cause a VM storage profile or disk-level storage profile to be updated incorrectly when the VM is included in a recompose operation. This fix also ensures that PvdcComputeGuaranteeValidator runs even when deployment fails in the Pay-As-You-Go allocation model, and the undeploy workflow now ignores the VM deployment state if the undeploy operation is called with a force=true flag.
  • Failure to move virtual machines between shared datastores: Resolves a storage issue where moving a virtual machine from one shared datastore to another fails.
  • Failure to revert VM snapshots: Resolves an issue that could cause reverting to a virtual machine snapshot to fail.
  • Failure to allocate an external IP address and a gateway IP address: Resolves several issues in managing the allocation of external IP and gateway IP addresses during VM boot and runtime when the NAT service is enabled and IP Translation is set manually.
  • Failure to delete Organization VDC: Resolves an issue that could cause various operations to fail.

So a small point release, but good to see the team continuing to improve the platform for those not yet able to upgrade to the latest 9.0 release. If you have the entitlements, head to the MyVMware site to download the builds.
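As a side note, if you want a quick sanity check of a cell after patching, the vCloud API exposes an unauthenticated /api/versions endpoint that lists the API versions the cell supports (note this reports API versions, not the build numbers above, which you can confirm in the UI). Below is a minimal sketch; the hostname is a placeholder and the sample XML is a trimmed-down assumption of the response shape.

```python
# Sketch: list the API versions a vCloud Director cell advertises.
# /api/versions is unauthenticated; "vcd.example.com" is a placeholder host.
import urllib.request
import xml.etree.ElementTree as ET

# Namespace used by the SupportedVersions document.
NS = "{http://www.vmware.com/vcloud/versions}"


def parse_supported_versions(xml_text):
    """Return the list of <Version> values from a /api/versions response."""
    root = ET.fromstring(xml_text)
    return [v.text for v in root.iter(NS + "Version")]


def fetch_supported_versions(host):
    """Query a live cell (requires network access and a trusted certificate)."""
    with urllib.request.urlopen(f"https://{host}/api/versions") as resp:
        return parse_supported_versions(resp.read())


if __name__ == "__main__":
    # Trimmed sample response (real responses carry more per-version detail).
    sample = (
        '<SupportedVersions xmlns="http://www.vmware.com/vcloud/versions">'
        "<VersionInfo><Version>20.0</Version></VersionInfo>"
        "<VersionInfo><Version>27.0</Version></VersionInfo>"
        "</SupportedVersions>"
    )
    print(parse_supported_versions(sample))  # → ['20.0', '27.0']
```

If the versions you expect for your release line show up in the list, the cell is at least serving the patched API stack.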

References:

http://pubs.vmware.com/Release_Notes/en/vcd/81011/rel_notes_vcloud_director_8-10-1-1.html

http://pubs.vmware.com/Release_Notes/en/vcd/82002/rel_notes_vcloud_director_8-20-0-2.html
