
Backing up 6.7 Update 1 VCSA to Cloud Connect Fails

A few weeks ago I upgraded my Nested ESXi homelab to vSphere 6.7 Update 1. Even though Veeam does not officially support this release until our Backup & Replication 9.5 Update 4 release, there is a workaround that deals with the change in vSphere API version which, out of the box, causes backups to fail. After the upgrade and the application of the workaround I started to get backup errors while trying to process the main lab VCSA VM, which was now running vCenter 6.7 Update 1. All other VMs were being backed up without issue.

Processing LAB-VC-67 Error: Requested value ‘vmwarePhoton64Guest’ was not found.

The error was interesting and only impacted the VCSA VM that I had upgraded to 6.7 Update 1. I have another VCSA VM in my lab that is still on the 6.7 GA release, and it was backing up successfully. What was interesting is that the Guest OS type of the upgraded VM appeared to have changed, or was being recognised as Photon OS, from within the upgraded vCenter that it itself was running.

Looking at the VM Summary, the Guest OS was listed as VMware Photon OS (64-bit).

My first instinct was to change this back to what the other VCSA showed, which was Other 3.x Linux (64-bit).

However, due to the chicken-or-egg nature of having the management VCSA registered to the same vCenter it runs, I logged into the ESXi host directly (also upgraded to 6.7 Update 1) and saw that the Guest OS type there didn’t match what was being shown in vCenter.

Thinking it was due to a mismatch, I changed the Guest OS type on the host to Photon OS, however the same issue occurred. Next I tried to get a little creative and changed the Guest OS type to Other Linux (64-bit), but even though I set that from ESXi, vCenter (itself) was still reporting Photon OS and the backup still failed.
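
For anyone who prefers to make this change outside of the UI, here is a minimal PowerCLI sketch of the same check and change. The host name, VM name and credentials are just lab examples assumed for this post, and as noted above the change itself didn’t end up resolving the Cloud Connect error.

  # Minimal PowerCLI sketch (VMware.PowerCLI module) - lab names are examples only.
  # Connecting straight to the ESXi host avoids the chicken-or-egg problem of
  # reconfiguring the vCenter appliance through the vCenter it runs.
  Connect-VIServer -Server esxi01.lab.local -User root

  $vm = Get-VM -Name 'LAB-VC-67'

  # What the platform currently thinks the guest OS is (e.g. vmwarePhoton64Guest)
  $vm.ExtensionData.Config.GuestId

  # Set it back to the identifier the 6.7 GA appliance reports - note the VM
  # generally needs to be powered off for the guest OS change to take effect
  Set-VM -VM $vm -GuestId 'other3xLinux64Guest' -Confirm:$false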

The Issue:

I submitted a support ticket, and from the logs the Support team were able to ascertain that the issue actually lay at the Cloud Connect provider’s end. I was sending these backups directly to the Cloud Connect provider, so my next step to confirm this was to try a local backup test job, and sure enough the VM processed without issue.

I then attempted a Backup Copy job from that successful test job to the Cloud Connect Provider and that resulted in the same error.
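
For the record, the isolation test is simple enough to drive from Veeam’s PowerShell snap-in if you want to repeat it. This is only a sketch using the 9.5-era snap-in, and the job names are made up for illustration.

  # Sketch of the isolation test using the Veeam Backup & Replication 9.5 snap-in.
  # Job names are examples only.
  Add-PSSnapin VeeamPSSnapin

  # Local backup job to a local repository - this processed the VCSA without issue
  Start-VBRJob -Job (Get-VBRJob -Name 'Local Test - LAB-VC-67')

  # Backup copy job targeting the Cloud Connect repository - this hit the same
  # 'vmwarePhoton64Guest' error, pointing the finger at the provider end
  Sync-VBRBackupCopyJob -Job (Get-VBRJob -Name 'Copy to Cloud Connect')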

From the job logs it became clear what the issue was:

[07.11.2018 03:00:12] <01> Info [CloudGateSvc 119.252.79.147:6180]Request: [Service.Connect] SessionType:4, SessionName:Lab Management, JobId:54788e4d-7ba1-488a-8f80-df6014c58462, InstallationId:30ee4690-01c9-4368-94a6-cc7c1bad69d5, JobSessionId:b1dba231-18c2-4a28-9f74-f4fa5a8c463b, IsBackupEncrypted:False, ProductId:b1e61d9b-8d78-4419-8f63-d21279f71a56, ProductVersion:9.5.0.1922,
[07.11.2018 03:00:13] <01> Info [CloudGateSvc xx.xx.xx.xx:6180]Response: CIResult:b4aa56f4-fd02-4446-b893-2c39a16e535e, ServerTime:6/11/2018 7:00:13 PM, Version:9.5.0.1536,

At my end I am running Backup & Replication 9.5 Update 3a, while at the provider end they are running Backup & Replication 9.5 Update 3. Update 3a introduced support for vSphere 6.7 and other platform updates…this included updating Veeam’s list of supported Guest OS types. In a nutshell, the Veeam Cloud Connect backup server still needs to understand what type of VM/guest it’s backing up into its cloud repository. For this to be resolved the provider would need to upgrade their Cloud Connect infrastructure to Update 3a…meanwhile, I’m backing up the VM locally for the time being.
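
If you want to quickly confirm which build you are running at your own end (the provider’s build is visible in the CloudGateSvc response above), checking the file version of the backup manager executable works. The path below assumes a default installation.

  # Check the local Backup & Replication build (default install path assumed)
  (Get-Item 'C:\Program Files\Veeam\Backup and Replication\Backup\Veeam.Backup.Manager.exe').VersionInfo.ProductVersion
  # 9.5.0.1922 corresponds to Update 3a, 9.5.0.1536 to Update 3 (as seen in the log above)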

Timely Message for VCSPs running Cloud Connect:

As we approach the release of another Update for Backup & Replication it’s important for Veeam Cloud and Service Providers to understand that they need to keep in step with the latest releases. This is why we typically have an RTM build given to providers at least two weeks before GA.

With vSphere 6.7 Update 1 starting to be deployed to more organisations it’s important to be aware of any issues that could stop tenant backups from completing successfully. This has generally been a consideration for providers offering Cloud Connect over the years…especially with Cloud Connect Replication, where the target platform needs to be somewhat in check with the latest platforms that are available.

References:

https://www.veeam.com/kb2443

https://www.veeam.com/kb2784

Hybrid World… Why IBM buying RedHat makes sense!

As Red October came to a close…at a time when US tech stocks were taking their biggest battering in a long time…the news came out over the weekend that IBM had acquired RedHat for 34 billion dollars! This seems to have taken the tech world by surprise. The all-cash deal represents a massive 63% premium on the previous close of RedHat’s stock price…all in all it seems ludicrous.

Most people I’ve talked to about it, and the comments I’ve read on social media and blog sites, suggest that the deal is horrible for the industry…but I feel this is more a reaction to IBM than anything else. IBM has a reputation for swallowing companies whole and spitting them out the other side of the merger process a shell of what they once were. There has also been a lot of empathy for the employees of RedHat, especially from ex-IBM employees with experience inside the Big Blue machine.

I’m no expert on M&A and I don’t pretend to understand the mechanics behind the deal and what is involved…but when I look at what RedHat has in its stable, I can see why IBM have made such an aggressive play for them. On the surface it seems like IBM are in trouble with their stock price and market capitalization falling nearly 20% this year and more than 30% in the last five years…they had to make a big move!

IBM’s previous 2013 acquisition of SoftLayer (for a measly 2 billion USD) helped them remain competitive in the Infrastructure as a Service space and, if you believe the stories, they have done very well out of integrating the SoftLayer platform into what was BlueMix and is now IBM Cloud. This 2013 Forbes article on the acquisition sheds some light on why the RedHat acquisition makes sense and is true to form for IBM.

IBM sees the shift of big companies moving to the cloud as a 20-year trend…

That was five years ago…and since then a lot has happened in the Cloud world. Hybrid cloud is now the accepted route to market with a mix of on-premises, IaaS and PaaS hosted and hyper-scale public cloud services being the norm. There is no one cloud to rule them all! And even though AWS and Azure continue to dominate and be front of mind there is still a lot of choice out there when it comes to how companies want to consume their cloud services.

Looking at RedHat’s stable and taking away the obvious Linux distros, both enterprise and open source, the real sweet spot of the deal lies in RedHat’s products that contribute to hybrid cloud.

I’ve heard a lot more noise of late about RedHat OpenStack becoming the platform of choice as companies look to transform away from more traditional VMware/Hyper-V based platforms. RedHat OpenShift is also being considered as an enterprise-ready platform for containerisation of workloads. Some sectors of the industry (government and universities) have already decided on a move to platforms backed by RedHat…the one thing I would comment here is that the upside of that choice might now be clouded by IBM being in the mix.

Rounding out the stable, RedHat have a Cloud Suite which encompasses most of the products listed above: CloudForms for infrastructure as code, Ansible for orchestration, and RedHat Virtualization alongside OpenStack and OpenShift…it’s a decent proposition!

Put all that together with the current services of IBM Cloud and you start to have a compelling portfolio covering almost all desired aspects of hybrid and multi-cloud service offerings. If the acquisition of SoftLayer was the start of a 20-year trend then IBM are trying to keep themselves positioned ahead of the curve and very much in step with the next evolution of that trend. That isn’t to say that they are not playing catch-up with the likes of VMware, Microsoft, Amazon, Google and the like, but I truly believe that if they don’t butcher this deal they will come out a lot stronger and, more importantly, offer valid competition in the market…that can only be a good thing!

As for what it means for RedHat itself, their employees and culture…that I don’t know.

References:

https://www.redhat.com/en/about/press-releases/ibm-acquire-red-hat-completely-changing-cloud-landscape-and-becoming-world%E2%80%99s-1-hybrid-cloud-provider

IBM sees the shift of big companies moving to the cloud as a 20-year trend

Quick Fix: Specified vCloud Director is not supported when trying to add vCD 9.1 to Veeam ONE

Back in May when VMware released vCloud Director 9.1 they also deprecated support for a number of older API versions:

End of Support for Older vCloud API Versions

  • vCloud Director 9.1 no longer supports vCloud API versions 1.5 and 5.1. These API versions were deprecated in a previous release.
  • vCloud Director 9.1 is the last release of vCloud Director to support any vCloud API versions earlier than 20.0. Those API versions are deprecated in this release and will not be supported in future releases.

Due to this, and being mid release cycle, Veeam ONE had issues connecting to vCD instances that were running version 9.1.

The error you would get if you tried to connect was: Specified vCloud Director is not supported.

Over the past few months I’ve had questions around this and whether it was going to be fixed by way of a patch. While we are waiting for the next release of Veeam ONE, which is due with Veeam Backup & Replication 9.5 Update 4, there is a way to get vCD 9.1 instances connected to the current build of Veeam ONE.

There is a HotFix available through Veeam Support to resolve the known issue. It involves stopping the Veeam ONE services, replacing a couple of DLLs and then restarting the services. Once implemented, Veeam ONE is able to connect to vCD 9.1.
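
The HotFix steps themselves are simple; something along these lines works, but the specific DLLs and their destination come from Veeam Support, so treat this purely as a sketch of the stop/replace/start sequence.

  # Stop all Veeam ONE services before swapping the files
  Get-Service -DisplayName 'Veeam ONE*' | Stop-Service

  # ...copy the hotfix DLLs supplied by Support over the existing ones in the
  # Veeam ONE install directory (Support will tell you exactly which and where)...

  # ...then bring the services back up and re-add the vCD 9.1 server
  Get-Service -DisplayName 'Veeam ONE*' | Start-Service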

So if you have this problem, raise a support case, grab the HotFix and the issue will be sorted.

References:

https://docs.vmware.com/en/vCloud-Director/9.1/rn/rel_notes_vcloud_director_91.html#deprecated

Released – Runecast Analyser 2.0

Earlier this week Runecast released version 2.0 of their vSphere analyser platform into General Availability. I’ve been a keen follower of Runecast’s progress since their inception a couple of years ago. There was a space in the market to be filled and they have been able to improve on the initial release by shipping new functionality often. It wasn’t that long ago that they added vSAN support…and more recently NSX support.

This release brings the following new functionalities:

  • Ability to store and display all detected and resolved issues over time for every connected vCenter.
  • The completely new monitoring dashboard with The Most Affected hosts and trending.
  • Automation of PCI-DSS VMware rules and new PCI-DSS profile UI
  • Support for vSphere 6.7 HTML5 plugin
  • Usability, performance and security improvements for increased ease of use.
  • Latest VMware Knowledge Base updates.

The first thing to notice in the new release is the Dashboard, which has been improved and, to my mind, is now more logically laid out. But for me the biggest feature added in this release is the enhancement to historical trending and a new analysis function. As someone who spent time managing and operating vSphere platforms over the years, the ability to see trends is crucial in troubleshooting.


Historical Analysis is new in version 2.0 and aims to help isolate the root cause of a reported incident as fast as possible and detect new problems caused by product updates or configuration changes. Version 2.0 will store at least three months’ worth of vCenter, vSAN and NSX-V scan results, including issue descriptions, which is what drives the trending information on the dashboard.

The introduction of PCI-DSS checks is something that will assist in compliance situations. As someone who has been through the pain of compliance audits, any tool that makes the process easier is welcome.

I’m looking forward to meeting up with the guys at VMworld 2018 in Las Vegas next week, and I would recommend any vSphere admin take a look at Runecast!

You can download Runecast 2.0 from here and take it for a spin: https://runecast.biz/profile

The State of DRaaS…A Few Thoughts

Over the past week Gartner released the 2018 edition of the Magic Quadrant for DR as a Service. The first thing that I noticed was how sparse the quadrant was compared to the 2017 edition. Though many hold it in high regard, the Gartner Magic Quadrant isn’t the be-all and end-all source of information on who is offering DRaaS and succeeding. But it got me thinking about the state of the current DRaaS market.

Before I talk about that, what does it mean to see fewer vendors in the Magic Quadrant this year? Probably not much, apart from the fact that the ones that dropped out probably don’t see value in undertaking the process. Though, as mentioned in this post, it could also be due to the criteria changing. As a comparison with the past three years, you can see above that only ten participants remain, down from twenty-three the previous year. There has been a shift in position and it’s great to see iLand leading the way, beating out global powerhouses like IBM and Microsoft.

But does the lack of participants in this year’s quadrant point to a declining market? Are companies skipping DRaaS for traditional workloads and looking to build availability and resilience into the application layer? Has network extension become so commonplace and reliable that companies are becoming less inclined to use DRaaS providers and instead just rely on inbuilt replication and mobility? There is an argument to be had that the push to cloud native applications, the use of public cloud and evolving network technologies have the potential to kill DRaaS…but not yet…and not any time soon!

Hybrid cloud and multi-platform services are here to stay…and while the use of hyper-scale public clouds, serverless and containerisation has increased, there is still an absolute play to be had in the business of ensuring availability for “traditional” workloads. Those workloads that sit on-premises, or in private or public cloud platforms, still use the VM as their base unit of measurement.

This is where DRaaS still has the long game.

Depending on region, there is still a smattering of physical servers running workloads (some regions like Asia are 5-10 years behind the rest of the world in Virtualisation…let alone containerization or public cloud). It’s true that most Service Providers who have been successful with Infrastructure as a Service have spent the last few years developing their Backup, Replication and Disaster Recovery as a service offerings.

Underpinning these service offerings are availability vendors like Veeam, Zerto and VMware, which offer software that service providers can leverage to deliver DR services, either from on-premises locations to their cloud platforms or between their cloud platforms. Traditional backup vendors offer replication features that can also be used for DR. There is also the likes of Azure, which offers DRaaS using technologies like Azure Site Recovery that look to provide an end-to-end service.

DRaaS still predominantly focuses on the availability of virtual machines and the services and applications they run. The end goal is to have critical line-of-business applications identified, replicated and then made available in the case of a disaster. The definition of a disaster varies depending on who you speak to, and the industry loves to use geo-scale impact events when talking about disasters…but the reality is that the failure of a single instance or application is much more likely than a whole-system failure.

Disaster avoidance has become paramount with DRaaS. Businesses accept that outages will happen, but where possible the ramifications of downtime need to be kept to a minimum…or better yet, not felt at all. In my experience, having worked in and with the service provider industry since 2002, all infrastructure/cloud providers will experience outages at some point…and as one of my work colleagues put it…

It’s an immutable truth that outages will occur! 

I’ve written about this topic before and even had a shirt for sale at one stage stating that outages are like assholes…everyone has one!

There are those that might challenge my thoughts on the subject, however as I talk to service providers around the world, the one thing they all believe is that DRaaS is worth investing in and will generate significant revenue streams. I would argue that DRaaS hasn’t even hit an inflection point yet, whereby it’s seen as a critically necessary service for businesses to consume. It’s true to say that Backup as a Service has nearly become a commodity…but DRaaS has serious runway.

References:

https://www.gartner.com/doc/3881865

What’s Changed: 2018 Gartner Magic Quadrant for Disaster Recovery as a Service

Workaround – VCSA 6.7 Upgrade Fails with CURL Error: Couldn’t resolve host name

It’s always DNS! Even when DNS looks right…it’s still DNS! I came across an issue today trying to upgrade a 6.5 VCSA to 6.7. The new VCSA appliance deployment was failing with an OVFTool error suggesting that DNS was incorrectly configured.

Initially I used the FQDNs for the source and target vCenters and let the installer choose the underlying host to deploy the new VCSA appliance to. Even though everything checked out fine in terms of DNS resolution across all systems, I kept on getting the failure. I triple-checked name resolution on the machine running the upgrade, on both vCenters and on the target hosts. I even tried using IP addresses for the source and target vCenters, but the error remained, as the installer still tried to connect to the vCenter-managed host via its FQDN.
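
For context, these are the kinds of checks I was running from the machine driving the upgrade (hostnames are examples); everything resolved cleanly, which is what made the OVFTool error so confusing.

  # Sanity-check name resolution and connectivity to everything involved
  Resolve-DnsName vcsa65.lab.local            # source vCenter
  Resolve-DnsName vcsa67.lab.local            # name reserved for the new appliance
  Resolve-DnsName esxi01.lab.local            # target ESXi host
  Test-NetConnection esxi01.lab.local -Port 443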

After a quick Google search turned up nothing, I changed the target to be an ESXi host directly and used its IP address instead of its FQDN. This time OVFTool was able to do its thing and deploy the new VCSA appliance.

The one caveat when deploying directly to a host rather than to vCenter is that the target port group needs to be configured with ephemeral binding…but that’s a general rule for bootstrapping a VCSA in any case, and it’s the only type that will show up in the drop-down list.
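
If you need to stand up an ephemeral port group ahead of the deployment, a quick PowerCLI sketch looks like this (switch and port group names are examples).

  # Create an ephemeral-binding port group on the distributed switch so the
  # host-level deployment can attach the new VCSA without vCenter being involved
  $vds = Get-VDSwitch -Name 'LAB-VDS'
  New-VDPortgroup -VDSwitch $vds -Name 'VCSA-Deploy' -PortBinding Ephemeral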

While it’s very strange given that all DNS checked out in my testing, the workaround did its thing and allowed me to continue with the upgrade. It didn’t find the root cause…however when you need to motor on with an upgrade, a workaround is just as good!

Quick Tip: Let’s Encrypt ACME PowerShell Ownership Challenge Can’t See Challenge Data

I’m currently going through the process of acquiring a new free Let’s Encrypt SSL certificate for a new domain I registered. For a great overview of what Let’s Encrypt is and what it can do for you, head over to Luca Dell’Oca’s blog here. I was following Luca’s instructions for getting the new domain authorised for use with the Let’s Encrypt service via a DNS challenge when I ran into the following.

After running the PowerShell command to generate the challenge, it was not returning the Handler Message as expected in the direct output…well, not obviously anyway.

After scratching my head for a bit, I checked to see if the data was contained within the object returned by the PowerShell command.
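
For anyone hitting the same thing, this is roughly how I dug the DNS challenge data out of the returned object. It assumes the ACMESharp module from Luca’s walkthrough and an identifier alias of dns1, so adjust for your own setup.

  # Grab the dns-01 challenge for the identifier and expand its details
  $challenge = (Update-ACMEIdentifier dns1 -ChallengeType dns-01).Challenges |
      Where-Object { $_.Type -eq 'dns-01' }

  # The TXT record name and value to create at your DNS provider
  $challenge.Challenge.RecordName
  $challenge.Challenge.RecordValue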

From here I was able to create the DNS TXT entry and complete the challenge.

Just in case it wasn’t obvious, this very quick post will hopefully save you a bit of time.

VeeamON 2018: Recognizing Innovation and What it Means to be Innovative

True innovation is solving a real problem…and though for the most part it’s startups and tech giants that are seen as the innovators, their customers and partners also have the ability to innovate. Innovation drives competitive advantage and allows companies to differentiate themselves from others. In my previous roles I was lucky to be involved with teams of talented people who did great things with great technologies. Like others around the world, we were innovating with leading vendor technologies to create new service offerings that add value and complement the underlying technology.

Innovation requires these teams of people to be experimental at heart and to build on or enhance already existing technologies. The Service Provider industry has always found a way to innovate on top of vendor platforms, and successful vendors are those that offer the right tools and guidance for providers to create innovative solutions on top of their platforms. They are problem solvers!

Orchestration, automation, provisioning and billing are driving factors in how service providers can differentiate themselves and gain that competitive advantage in the marketplace. Without innovating on top of these platforms, service offerings become generic, don’t stand out and are generally operationally expensive to manage and maintain.

Introducing the Veeam Innovation Awards for 2018:

When visiting and talking to different partners across the world it’s amazing to see some of the innovation that’s been built on top of Veeam technologies, and we at Veeam want to reward our customers and partners who have done great things with our technologies.

At VeeamON 2018, we’ll be celebrating some of these innovative solutions, so please let us know how you’ve built upon the Veeam Availability Platform. Nominations can be made from March 29 to April 30, with the winners being recognized during the VeeamON main stage keynote. Self nominations or those from partners, providers, or Veeam field-team members are encouraged — click here to nominate for a Veeam Innovation Award.

I can think of a number of VCSPs that have done great things building upon Cloud Connect and Backup & Replication IaaS backups, and working with Veeam’s APIs and PowerShell to solve customer problems and offer value-added services. These guys have brought something new to the industry and we want to reward that.

Having previously come from a company that innovated successfully within its own space, being innovative is now something I preach to all customers and partners I visit. It is an absolute requirement if you want to win business and stand out in the backup and availability industry…innovation is key and we want to hear about it from you!

References:

Nominations for the VeeamON 2018 Innovation Awards are now open

vExpert 2018 – The Value Remains!

After a longer than expected deliberation period, the vExpert class of 2018 was announced late last Friday (US time). I’ve been a vExpert since 2012, with 2018 marking my seventh year in the program. I’ve written a lot about the program over the past three or four years as its “perceived” value started to go downhill. I’ve criticised parts of the program, such as the relative ease with which some people were accepted and the apparent inability to better manage numbers.

However, make no mistake, I am still a believer in the value of the vExpert program, and more importantly I have come to realise over the past few years (solidified over the past couple of months) that, beyond the advocacy component that’s critical to the program’s existence…people continue to hold the program in extremely high regard.

There are a large number of vExperts who expect entry year after year, and rightly so. In truth there are a large number who can legitimately demand membership. But there are others who have struggled to be accepted year after year, and for whom acceptance into the program represents a significant achievement.

That is to say that while many established vExperts assume entry, there are also a number of people who aspire to it. This is an important indicator of the strength of the program and the continued high regard in which the vExpert program should be held. It’s easy to criticise from the inside, however that can’t be allowed to tarnish the reputation of the program externally.

This is a great program and one that is valued by the majority of those who actively participate. VMware still commands a loyal community base and the vExperts lead from the front in this regard. Remember, it’s all about the advocacy!

Well done again to the team behind the scenes…the new website is testament to the program moving forward. The vExpert team are critical to the success of the program, and having been part of the much smaller Veeam Vanguard program, I have a lot of respect for the effort that goes into sorting through two thousand odd applications and renewals.

And finally, well done to those first-time vExperts! Welcome aboard!

——-

For those wondering, here are the official benefits of the program:

  • Invite to our private #Slack channel
  • vExpert certificate signed by our CEO Pat Gelsinger.
  • Private forums on communities.vmware.com.
  • Permission to use the vExpert logo on cards, website, etc for one year
  • Access to a private directory for networking, etc.
  • Exclusive gifts from various VMware partners.
  • Private webinars with VMware partners as well as NFRs.
  • Access to private betas (subject to admission by beta teams).
  • 365-day eval licenses for most products for home lab / cloud providers.
  • Private pre-launch briefings via our blogger briefing pre-VMworld (subject to admission by product teams)
  • Blogger early access program for vSphere and some other products.
  • Featured in a public vExpert online directory.
  • Access to vetted VMware & Virtualization content for your social channels.
  • Yearly vExpert parties at both VMworld US and VMworld Europe events.
  • Identification as a vExpert at both VMworld US and VMworld EU.

Veeam Vault #10: Latest Veeam Releases and Vanguard 2018 Update

Welcome to the 10th edition of Veeam Vault and the first one for 2018. It’s pretty crazy to think that we have already completed two months of the year. After an extremely hectic first half of January attending two of our Veeam Velocity sales kick-off events (Bangkok for APJ and Saint Petersburg for EMEA), I’ve been working from the home office for close to six weeks. It’s been a productive time organising content and working with different cloud teams across the business to help enable our VCSPs to take advantage of our cloud technologies and drive services revenue.

Getting stuck into this edition, I’ll cover the releases of Veeam Availability Orchestrator, the Infinidat storage plugin and Update 5 for the Veeam Management Pack…all of which happened over the last week. I’ll also talk about the Veeam Vanguard program for 2018 and link to Veeam-related content the Vanguard crew have put out over the past couple of months.

Veeam Availability Orchestrator:

Veeam Availability Orchestrator has been in the works for a while now and it’s great to see it hit GA. It boasts an automated and resilient orchestration engine for Veeam Backup & Replication replicas, designed specifically to help enterprises with compliance requirements. One of its biggest features is helping to reduce the cost and effort associated with planning for and recovering from a disaster through the automatic creation, documentation and testing of disaster recovery plans.

For a deeper look at its features and functionality, Michael White has a good overview post on VAO here.

Infinidat Storage Plugin:

Our new Universal Storage Integration API that was introduced with the release of Update 3 for Backup & Replication 9.5 allows approved Veeam Alliance Partners to build their own storage plug-ins to enable rapid development of primary storage integrations. Infinidat is our first Alliance Partner to integrate through the Universal Storage Integration API. This adds to existing integrations with Cisco, Dell EMC, HPE, IBM, Lenovo and NetApp.

My fellow Technologist Michael Cade has written a blog post explaining how to download and install the plugin for those customers using Infinidat as their storage backend.

Veeam Management Pack Update 5:

Update 5 for the Management Pack went GA today and there are a few new things in this release that build on last year’s Update 4. Below is a quick rundown of what’s new in this update.

  • Built-in monitoring for Veeam Agent for Microsoft Windows
  • Morning Coffee Dashboard for at-a-glance, real-time health status of your Veeam backup environments
  • Monitoring for VMware Cloud on Amazon Web Services (AWS)
  • Additional VMware vSAN & vCenter Alarms

It’s pleasing to see support for VMware Cloud on AWS as it starts to gain momentum in the market, and also great to see us enhancing our vSAN alarms as that product evolves. For a detailed description of the new features, read the release post here.

Veeam Vanguard 2018:

Overnight we notified new and returning members of their successful application to the Veeam Vanguard program for 2018. This is one of the most hotly sought after influencer programs in our industry and I can tell you that the process to vote on and accept applicants was tough this year. The Product Strategy team takes a lot of care and effort in selecting the group and it represents the best Veeam advocates going around. We work closely with the group and their feedback plays a key part in our feedback loop, as well as helping to promote Veeam and Veeam products within their companies and spheres of influence.

Well done to the 2018 nominees!

Veeam Vanguard Blog Post Roundup:
