Monthly Archives: July 2018

Released: Backup for Office 365 2.0 …Yes! You Need to Backup your SaaS

Last week the much-anticipated release of Veeam Backup for Office 365 version 2.0 (build 2.0.0.567) went GA. This new version builds on the 1.5 release, which was aimed at scalability and service providers, and adds support for SharePoint and OneDrive. Backup for Office 365 has been a huge success for Veeam, driven by a growing realisation that SaaS-based services require an availability strategy. The continuity of data on SaaS platforms like Office 365 is not guaranteed, and it’s critical that a backup strategy is put into place.

Version 1.5 was released last October and focused on laying the foundation to meet the scalability requirements that come with backing up Office 365 services. We also enhanced the automation capability of the platform through a RESTful API service, allowing our Cloud & Service Providers to tap into the APIs to create scalable and efficient service offerings. Version 2.0 also ships an enhanced set of PowerShell cmdlets, building on those introduced in 1.5.
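To give a feel for the cmdlets, here is a minimal sketch that connects to a VBO server, lists the organizations it protects and starts an existing backup job. It assumes the Veeam.Archiver.PowerShell module that ships with the product; the job name is a placeholder, and the cmdlet names should be checked against the PowerShell reference for your build.

```powershell
# Minimal sketch: drive Veeam Backup for Microsoft Office 365 via its PowerShell module.
Import-Module Veeam.Archiver.PowerShell

# Connect to the local VBO management server (use -Server/-Credential for a remote one).
Connect-VBOServer

# List the organizations added to the server and the configured jobs.
Get-VBOOrganization | Select-Object Name
Get-VBOJob | Select-Object Name

# Start a job by name ("O365 Daily Backup" is a placeholder).
$job = Get-VBOJob -Name "O365 Daily Backup"
Start-VBOJob -Job $job

Disconnect-VBOServer
```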

What’s New in 2.0:

Office 365 Exchange was the logical service to support first, but there was huge demand to extend that coverage to SharePoint and OneDrive. With the release of version 2.0 the platform now protects Office 365 in its entirety. Apart from the headline features, a number of additional enhancements have made their way into Backup for Microsoft Office 365 2.0:

  • Support for backup and restore of Microsoft SharePoint sites, libraries, items, and documents.
  • Support for backup and restore of Microsoft OneDrive documents.
  • Support for separate components installation during setup.
  • Support for custom list templates in Veeam Explorer for Microsoft SharePoint.
  • Support for comparing items with Veeam Explorer for Microsoft Exchange.
  • Support for exporting extended logs for proxy and controller components.

We have also redesigned the job wizard to simplify setup, improve search, and maintain visibility of the objects in a job.

Architecture and Components:

There hasn’t been much change to the overall architecture of VBO. Like all things Veeam, you can go with an all-in-one design or scale out, depending on sizing requirements. Everything is handled from the main VBO server, and the components are configured and provisioned from there.

Proxies are the workhorses of VBO and can also be scaled out depending on the size of the environment being backed up, whether that’s Office 365 or on-premises Exchange or SharePoint instances.

Repositories must be configured on Windows-formatted volumes, as the JetDB database format is used to store the data. Repositories can be mapped one to one to tenants, or have a many-to-one relationship.

The API service is disabled by default; once enabled, it can be accessed via a URL to browse the API commands in Swagger, or hit directly via the API endpoint.
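As a rough illustration of the REST side, the sketch below requests a token and lists backup jobs with Invoke-RestMethod. The port (4443), the /v2 paths and the token flow are assumptions based on the REST API reference for this release, and the server name and credentials are placeholders; use the Swagger UI on your own deployment to confirm the exact endpoints.

```powershell
# Rough sketch: query the VBO RESTful API service once it has been enabled.
# Assumes the default port (4443) and v2 API paths; the VBO certificate must be trusted by the client.
$vbo = "https://vbo01.lab.local:4443"   # placeholder server name

# Request a bearer token from the token endpoint (password grant).
$body  = "grant_type=password&username=LAB\svc_vbo&password=P@ssw0rd!"
$token = Invoke-RestMethod -Method Post -Uri "$vbo/v2/token" `
             -Body $body -ContentType "application/x-www-form-urlencoded"

# Use the token to list the configured backup jobs.
$headers = @{ Authorization = "Bearer $($token.access_token)" }
Invoke-RestMethod -Method Get -Uri "$vbo/v2/Jobs" -Headers $headers
```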

Free Community Edition:

In terms of licensing, VBO is licensed per Office 365 user across all organizations. If you install VBO without a license, it runs in Community Edition mode, which allows up to 10 user accounts across all organizations and includes 1 TB of Microsoft SharePoint data. The Community Edition is not limited in time and doesn’t limit functionality.

Installation Notes:

You can download the latest version of Veeam Backup for Microsoft Office 365 from this location. The download contains three installers that cover the VBO platform and two new versions of the Explorers. Explorer for Microsoft OneDrive for Business is contained within the Explorer for Microsoft SharePoint package and is installed automatically.

  • 2.0.0.567.msi for Veeam Backup for Microsoft Office 365
  • 6.3.567.msi for Veeam Explorer for Microsoft Exchange
  • 6.3.568.msi for Veeam Explorer for Microsoft SharePoint

To finish off, it’s important to read the release notes here, as there are a number of known issues relating to specific situations and configurations.


VeeamON Forum Sydney and Auckland…Not too Late to Register!


Back in May, we held VeeamON in Chicago, where we launched Veeam’s vision and strategy to lead the way in intelligent data management. One of the great things about VeeamON is that it spawns a series of VeeamON Forums and Tours globally; no other vendor is as committed to taking its message around the world and hitting as many cities as possible. Last week I was in Malaysia for the VeeamON Forum there, this week I’m heading over to Sydney for the VeeamON Forum on Thursday, and then I’m jumping over to Auckland for the event on Tuesday.

The regional VeeamON events give the local teams an opportunity to take the VeeamON message to local partners, customers and prospects. Content is derived from the main VeeamON, with local touches added to suit the regions. It’s excellent to have our co-CEO, Peter McKay, headlining the Sydney event, with other executives traveling in as well. We also have Dave Russell, who recently joined us from Gartner, presenting in both Sydney and Auckland.

As mentioned, I’ll be in Sydney, where I’ll be presenting on what’s coming from Veeam in 2018 and also co-presenting with VMware on how Veeam is leading the way in backup and recovery for VMware Cloud on AWS workloads. For the Auckland event, I’ll be doing a main-stage demo and presenting the what’s-coming-in-2018 session as well as a look at Veeam Availability Console 2.0. I always love presenting in my home region and am looking forward not only to presenting the sessions, but also to engaging with customers and partners.

There is still time to register for the events if you are local to those cities; the registration pages and agendas are listed below. There is a lot to take in on the day, with two tracks (Business and Technical) as well as a number of ecosystem partners sponsoring the show floor. Historically this is a seriously well-attended event, and there is a lot to be taken away from it.

https://go.veeam.com/veeamon-forum-aus
https://go.veeam.com/veeamon-forum-nz

Hope to see you in Sydney or Auckland!

The State of DRaaS…A Few Thoughts

Over the past week Gartner released the 2018 edition of the Magic Quadrant for DR as a Service. The first thing I noticed was how sparse the quadrant is compared to the 2017 edition. Though many hold it in high regard, the Gartner Magic Quadrant isn’t the be-all and end-all source of information on who is offering DRaaS and succeeding, but it got me thinking about the state of the current DRaaS market.

Before I talk about that, what does it mean to see fewer vendors in the Magic Quadrant this year? Probably not much, apart from the fact that the ones that dropped out likely don’t see value in undertaking the process; as mentioned in this post, it could also be due to the criteria changing. Comparing the past three years, only ten participants remain, down from twenty-three the previous year. There has been a shift in position, and it’s great to see iLand leading the way, beating out global powerhouses like IBM and Microsoft.

But does the lack of participants in this year’s quadrant point to a declining market? Are companies skipping DRaaS for traditional workloads and looking to build availability and resilience into the application layer? Has network extension become so commonplace and reliable that companies are less inclined to use DRaaS providers and instead rely on built-in replication and mobility? There is an argument to be had that the push to cloud-native applications, the use of public cloud and evolving network technologies has the potential to kill DRaaS…but not yet…and not any time soon!

Hybrid cloud and multi-platform services are here to stay…and while the use of the hyper-scale public clouds, serverless and containerisation has increased, there is still an absolute play to be had in ensuring availability for “traditional” workloads. Those workloads that sit on-premises, or in private or public cloud platforms, still have the VM as their base unit of measurement.

This is where DRaaS still has the long game.

Depending on the region, there is still a smattering of physical servers running workloads (some regions, like Asia, are 5-10 years behind the rest of the world in virtualisation…let alone containerisation or public cloud). It’s true that most service providers who have been successful with Infrastructure as a Service have spent the last few years developing their Backup, Replication and Disaster Recovery as a Service offerings.

Underpinning these service offerings are vendors like Veeam, Zerto, VMware and other availability vendors whose software service providers can leverage to offer DR services, both from on-premises locations to their cloud platforms and between their cloud platforms. Traditional backup vendors offer replication features that can also be used for DR, and there is the likes of Azure, which offers DRaaS using technologies like Azure Site Recovery to deliver an end-to-end service.

DRaaS still predominantly focuses on the availability of virtual machines and the services and applications they run. The end goal is to have critical line-of-business applications identified, replicated and then made available in the case of a disaster. The definition of a disaster varies depending on who you speak to, and the industry loves to use geo-scale impact events when talking about disasters…but the reality is that the failure of a single instance or application is much more likely than whole-system failure.

Disaster avoidance has become paramount with DRaaS. Businesses accept that outages will happen, but where possible the ramifications of downtime need to be kept to a minimum…or better yet, not happen at all. In my experience, having worked in and with the service provider industry since 2002, all infrastructure/cloud providers will experience outages at some point…and as one of my work colleagues put it…

It’s an immutable truth that outages will occur! 

I’ve written about this topic before and even had a shirt for sale at one stage stating that outages are like assholes…everyone has one!

There are those that might challenge my thoughts on the subject; however, as I talk to service providers around the world, the one thing they all believe is that DRaaS is worth investing in and will generate significant revenue streams. I would argue that DRaaS hasn’t even hit an inflection point yet, where businesses see it as a critically necessary service to consume. It’s true to say that Backup as a Service has nearly become a commodity…but DRaaS has serious runway.

References:

https://www.gartner.com/doc/3881865

What’s Changed: 2018 Gartner Magic Quadrant for Disaster Recovery as a Service

VMworld 2018 – #vGolf Las Vegas

#vGolf is back! Bigger and better than last year’s event. This is the third year of the event, following the inaugural #vGolf at VMworld 2016.

Last year we had 34 participants, and everyone who attended had a blast at the brilliant Bali Hai Golf complex. This year Bali Hai is closed during the VMworld weekend, so we are moving the event to the Royal Links Golf Club, approximately 8 miles from the Las Vegas Strip.

This year the event will expand, with more sponsors and a more structured golfing competition, with prizes going to the top two placed two-ball teams. Yes, this year we will be competing between foursomes.

Details will be updated on this site and on the Eventbrite page once the day is finalised and sponsors are confirmed. For the moment, if you are interested, please reserve your spot by securing a ticket. At this stage there are 40 available…depending on popularity, that could be extended.

Last year the golfing fees were heavily subsidised to $40 USD per person (green fees are usually $130-150). Once you’ve registered, I will reach out to ask that an advance payment is made via PayPal so that the morning of the event is a cash-free zone…it’s always hard to count early on a Sunday morning in Vegas!

The cost will include green fees plus buggy and club hire. The clubs also come with six brand-new Callaway golf balls. Shoe hire is extra on the day for those who wish to wear proper footwear. My intention is to fund some cold drinks on the day, depending on the final sponsorship numbers.

Registration Page

There is a password on the registration page to protect against people registering directly via the public page. The password is vGolf2018. I’m looking forward to seeing you all there bright and early on Sunday morning!

Take a look at what awaits you…don’t miss out!

Sponsorship Call:

If you or your company can offer some sponsorship for the event, please email [email protected] to discuss arrangements. I am looking to subsidise most of the green fees if possible, and for that we would need four to five sponsors.

First Look – Zenko, Multi-Platform Data Replication and Management

A couple of weeks ago I stumbled upon Zenko via a LinkedIn post. I was interested in what it had to offer and decided to have a deeper look. With Veeam launching our vision to be the leader in intelligent data management at VeeamON this year, I have been on the lookout for solutions that do smart things with data and address the need to control its accelerated spread and sprawl. Zenko looks to be on the right track with its notion of freedom from being locked into a specific cloud platform, whether private or public.

Having come from service provider land, I have always been against the idea of a hyper-scaler public cloud monopoly that forces lock-in and diminishes choice. Because of that, I gravitated to Zenko’s mission statement:

We believe that everyone should be in control of their data. Zenko’s mission is to allow everyone to be in control of their data, while leveraging the efficiency of private and public clouds.

The platform looks to deliver data mobility across multiple cloud platforms through common communication protocols and a shared set of APIs to manage its data sets. Zenko aims to achieve this multi-cloud capability through a unified AWS S3 API-based service, with data management and federated search capabilities driving its use cases. Data mobility between clouds, whether private or public cloud services, is what Zenko is aimed at.

Zenko Orbit:

Zenko Orbit is the cloud portal for data placement, workflows and global search. Aimed at application developers and DevOps teams, the premise of Zenko Orbit is that they can spend less time learning multiple interfaces for different clouds, leveraging the power of cloud storage and data management services without needing to be an expert across different platforms.

Orbit provides an easy way to create replication workflows between different cloud storage platforms…whether that’s Amazon S3, Azure Blob, GCP Storage or others. You then have the ability to search across a global namespace for system and user-defined metadata.

Quick Walkthrough:

Given this is open source, you have the option to download and install a Zenko instance, which is then registered against the Orbit cloud portal, or you can pull the whole stack from GitHub. There is also a hosted sandbox instance that can be used to take the system for a test drive.

Once done, you are presented with a dashboard that gives you an overview of the amount of data and other metrics contained in your instance. The Settings area gives you details about the instance, account details and the endpoints to connect to. You can also download pre-generated Cyberduck profiles.

You need to create a storage management account to be able to browse your buckets in the Orbit portal.

Once that’s been done you can create a bucket and select a location which in the sandbox defaults to AWS us-east-1.

From here, you can add a new storage location and configure the replication policy. For this, I created a new Azure Blob Storage account as shown below.

From the Orbit menu, I then added a New Storage Location.

Once the location has been added you can configure the bucket replication. This is the cool part and the premise of the platform: being able to set up policies to replicate data across multiple cloud platforms. In the sandbox the policy is one-way, meaning there is no bi-directional replication. Simply select the source and destination, and the bucket, from the menu.

Once that has been done you can connect to the endpoint and upload files. I tested this out with the setup above and it worked as advertised: using the Cyberduck profile I connected in, uploaded some files and watched the Azure Blob storage end for the files to replicate.
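For anyone who would rather script that test than click through Cyberduck, here is a sketch using the AWS Tools for PowerShell pointed at the Zenko S3-compatible endpoint. The endpoint URL, access keys and bucket name are placeholders taken from the Orbit Settings page of your own instance.

```powershell
# Sketch: upload an object to a Zenko bucket over its S3-compatible endpoint and list the bucket.
# The replication policy configured in Orbit then copies the object to the Azure Blob target.
Import-Module AWSPowerShell

Set-AWSCredential -AccessKey "ZENKO_ACCESS_KEY" -SecretKey "ZENKO_SECRET_KEY"   # placeholders

$endpoint = "https://zenko-sandbox.example.com"   # placeholder Zenko endpoint from Orbit
$bucket   = "replication-demo"                    # placeholder bucket created in Orbit

# Upload a file to the source bucket.
Write-S3Object -BucketName $bucket -Key "test.pdf" -File "C:\Temp\test.pdf" -EndpointUrl $endpoint

# Confirm the object is in the source bucket, then watch the Azure Blob end for the replica.
Get-S3Object -BucketName $bucket -EndpointUrl $endpoint | Select-Object Key, Size, LastModified
```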

Conclusion: 

While you could say that Zenko feels like DFS-R for the multi-platform storage world, the solution has impressed me. Anyone who has tried knows it’s not easy to orchestrate the replication of data between different platforms. They are also talking up the extensibility of the platform as it relates to data management, backend storage plugins and search.

I think about this sort of technology and how it could be extended to cloud-based backups. Customers could have the option to tier into cheaper cloud-based storage and then further protect that data by replicating it to another, possibly cheaper, cloud platform. This could add resiliency while offering cost benefits. However, there is also the risk that the more spread out the data is, the harder it is to control. That’s where intelligent data management comes into play…interesting times!

References:

Zenko Orbit – Multi-Cloud Data Management Simplified

 

Workaround – VCSA 6.7 Upgrade Fails with CURL Error: Couldn’t resolve host name

It’s never an issue with DNS! Even when DNS looks right…it’s still DNS! I came across an issue today trying to upgrade a 6.5 VCSA to 6.7. The new VCSA appliance deployment was failing with an OVFTool error suggesting that DNS was incorrectly configured.

Initially I used the FQDN for the source and target vCenters and let the installer choose the underlying host to deploy the new VCSA appliance to. Even though everything checked out fine in terms of DNS resolution across all systems, I kept getting the failure. I triple-checked name resolution on the machine running the upgrade, on both vCenters and on the target hosts. I even tried using IP addresses for the source and target vCenters, but the error remained, as the installer still tried to connect to the vCenter-managed host via its FQDN.
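For reference, these are the kind of checks I mean, run from the machine driving the upgrade; the hostnames are placeholders for the source vCenter, target vCenter and target host.

```powershell
# Verify forward/reverse DNS and HTTPS reachability for each endpoint involved in the upgrade.
$targets = "vcsa65.lab.local", "vcsa67.lab.local", "esxi01.lab.local"   # placeholder names

foreach ($t in $targets) {
    # Forward lookup, then a reverse lookup of each returned address.
    Resolve-DnsName -Name $t -Type A -ErrorAction SilentlyContinue |
        Where-Object { $_.IPAddress } |
        ForEach-Object { Resolve-DnsName -Name $_.IPAddress -ErrorAction SilentlyContinue } |
        Select-Object Name, NameHost

    # The installer and OVFTool talk to the endpoints over 443.
    Test-NetConnection -ComputerName $t -Port 443 | Select-Object ComputerName, TcpTestSucceeded
}
```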

After a quick Google search turned up nothing, I changed the target to be an ESXi host directly and used its IP address instead of its FQDN. This time OVFTool was able to do its thing and deploy the new VCSA appliance.

The one caveat when deploying directly to a host rather than vCenter is that you need the target port group configured as ephemeral…but that’s a general rule for bootstrapping a VCSA in any case, and an ephemeral port group is the only one that will show up in the drop-down list.
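If you need to create one, a quick PowerCLI sketch along these lines works against the vCenter that still manages the host; the switch and port group names are placeholders, and the -PortBinding parameter should be confirmed against your PowerCLI version.

```powershell
# Check the binding of existing distributed port groups and create an ephemeral one for the bootstrap.
Import-Module VMware.PowerCLI
Connect-VIServer -Server vcsa65.lab.local   # placeholder: the vCenter managing the target host

Get-VDPortgroup | Select-Object Name, PortBinding

# Temporary ephemeral port group to present to the VCSA installer.
New-VDPortgroup -VDSwitch (Get-VDSwitch -Name "DSwitch-Lab") -Name "VCSA-Bootstrap" -PortBinding Ephemeral
```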

While it’s very strange given that all DNS checked out in my testing, the workaround did its thing and allowed me to continue with the upgrade. It didn’t find the root cause…however, when you need to motor on with an upgrade, a workaround is just as good!

Veeam 9.5 Update 3a – What’s in it for Service Providers

Earlier this week Update 3a (build 9.5.1922) for Veeam Backup & Replication was made generally available. This release doesn’t contain any major new features or enhancements, but it does add support for a number of key platforms. Importantly for our Cloud and Service Providers, Update 3a extends our support to vSphere 6.7, vSphere 6.5 Update 2 (with a small caveat) and vCloud Director 9.1. We also have support for the April update of Windows 10 and the 1803 versions of Windows Server and Hyper-V.

vSphere 6.7 support (vSAN 6.7 validation is pending) is something that our customers and partners have been asking for since it was released in late April, and it’s a credit to our R&D and QC teams to reach supportability within 90 days given the amount of underlying change that came with vSphere 6.7. The performance of Direct SAN and Hot-Add transport modes has also been improved for backup infrastructure configurations by optimising system memory interaction.

As mentioned, the recently released vCloud Director 9.1 is supported, maintaining our lead in availability for vCloud Director environments. Storage snapshot-only vCloud Director backup jobs are now supported for all storage integrations that support storage snapshot-only jobs. Update 3a also fully supports the VMware Cloud on AWS version 1.3 release without requiring a patch.

One of the new features in Update 3a is a new-look Veeam vSphere Client plug-in based on VMware’s Clarity UX. This is more of a port; however, with the announcement that the Flex-based Web Client will be retired, it was important to make the switch.

In terms of key fixes for Cloud and Service Providers, I’ve listed them below from the VeeamKB.

  • User interface performance has been improved for large environments, including faster VM search and lower CPU consumption while browsing through job sessions history.
  • Incremental backup runs should no longer keep setting ctkEnabled VM setting to “true”, resulting in unwanted events logged by vCenter Server.
  • Windows file level recovery (FLR) should now process large numbers of NTFS reparse points faster and more reliably.

Veeam Cloud Connect
Update 3a also includes enhancements and bug fixes for cloud and service providers who are offering Veeam Cloud Connect services. For more information, please head to this thread on the Veeam Cloud & Service Provider forum. A reminder as well: if you are running Cloud Connect Replication, be aware that clients replicating in at a higher VMware VM hardware version than the replication cluster supports will error out. That means you either need to let the customer know what level the replication cluster is at, or upgrade to the latest version, which with vSphere 6.7 is hardware version 14.
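As a rough way to check for that mismatch on the provider side, the PowerCLI sketch below reports the ESXi version of the hosts in the replication cluster alongside the hardware version of each replica VM; the vCenter and cluster names are placeholders.

```powershell
# Report host versions and replica VM hardware versions in the Cloud Connect replication cluster.
Connect-VIServer -Server vcsa.provider.local   # placeholder provider vCenter

$cluster = Get-Cluster -Name "CloudConnect-Replication"   # placeholder cluster name

# ESXi version/build of each host in the target cluster.
$cluster | Get-VMHost | Select-Object Name, Version, Build

# Hardware version (vmx-NN) of each VM in the cluster, e.g. vmx-14 for vSphere 6.7.
$cluster | Get-VM |
    Select-Object Name, @{ N = "HardwareVersion"; E = { $_.ExtensionData.Config.Version } }
```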

For a full list check out the release notes below and download the update here. You can also download the update package without backup agents here.

References:

https://www.veeam.com/kb2646