
VeeamON 2018: Top Session Picks

VeeamON is happening next week and the final push towards the event is in full swing. I can tell you that this year’s event is going to be extremely valuable for those who can attend! This is going to be my third VeeamON, and my second being involved in preparing elements of the event. Having been behind the scenes, and knowing what our customers and partners are in for in terms of content and event activities…I can’t wait for things to kick off in Chicago.

This year we have 70 breakout sessions with a number of high-profile speakers coming over to help deliver those sessions. We also have significant keynote speakers for the main stage sessions on each of the three days. You will also hear from our executive team on the vision Veeam has for continuing to provide availability through our industry leading innovations.

Top Session Picks:

The tracks are organised slightly differently to last year in that there are no set Technical levels. There are seven tracks available:

  • Better Together
  • Architecture and Design
  • Cloud-Powered
  • Deep Tech
  • Implementation Best Practices
  • Operations and Support
  • Vision and Strategy

I’ve gone through all the breakouts and picked out my top sessions that you should consider attending…as usual there is a cloud slant to most of them, but there are also some core technology sessions that are not to be missed. The Veeam Product Strategy team are well represented in the session list so it’s also worth looking to attend talks from Rick Vanover, Michael Cade, Niels Engelen, Melissa Palmer, Dmitry Kniazev, David Chapa and Jason Buffington. Danny Allan will be on the main stage delivering our core vision and strategy moving beyond 2018.

Veeam Backup for Microsoft Office 365 2.0: Deep Dive

Mike Resseler and Kostya Yasyuk

After learning what is new in Veeam® Backup for Microsoft Office 365 2.0, it is time to look into the details of this solution. Learn about optimization, architecture, under-the-hood workings and much more in this session.

Wed, May 16th, 2:50 PM – 3:50 PM

From zero to hero: A deep dive on RESTful API for Veeam solutions

Niels Engelen and Dmitry Kniazev

Join us for a journey on how to leverage the RESTful API provided in several Veeam® solutions. We will go deeper on how to get started and even develop a full platform with a focus on: Veeam Backup & Replication™, Veeam Backup for Microsoft Office 365 and Veeam Availability Console.

Tue, May 15th, 2:50 PM – 3:50 PM
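As a taste of what this session covers, below is a minimal sketch of authenticating against the Veeam Backup Enterprise Manager RESTful API with Python. The server name and credentials are placeholders, and the port and endpoint assume a default 9.5 Enterprise Manager install.

```python
import requests

# Assumed defaults for a Veeam Backup Enterprise Manager 9.5 install;
# the server name and credentials below are placeholders.
EM_API = "https://em01.lab.local:9398/api"

# Logging on returns a session ID in the X-RestSvcSessionId header,
# which is then passed on every subsequent request.
resp = requests.post(f"{EM_API}/sessionMngr/?v=latest",
                     auth=("LAB\\veeam-svc", "password"), verify=False)
resp.raise_for_status()
session_id = resp.headers["X-RestSvcSessionId"]

# Example follow-up call: enumerate the backup jobs Enterprise Manager can see.
jobs = requests.get(f"{EM_API}/jobs",
                    headers={"X-RestSvcSessionId": session_id}, verify=False)
print(jobs.status_code, len(jobs.content))
```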

Cooking up some Veeam deployment with CHEF automation

Michael Cade and Jeremy Goodrum

A walk-through session showing the open source CHEF cookbook that installs and configures Veeam® Backup & Replication™ based on documented Veeam best practices. Automation in large-scale deployments is a must. This cookbook will allow for a scalable deployment of your Veeam components and the ability for controlled upgrades and configuration best practices across the estate.

Wed, May 16th, 12:15 PM – 1:15 PM

A sneak peek at Veeam Backup & Replication 2018 releases

Anton Gostev

Hear right from Anton Gostev about the details of the next release of Veeam® Backup & Replication™. The details of this will be announced at VeeamON 2018, and this will be your exclusive opportunity to learn more about the next release of Veeam Backup & Replication.

Wed, May 16th, 2:50 PM – 3:50 PM

Getting started with Veeam Availability Orchestrator: Ensure business continuity & DR compliance

Melissa Palmer

As a new product for 2018, Veeam® Availability Orchestrator raises the bar for enterprises of all sizes that need orchestrated disaster recovery (DR) and a strong business continuity plan. In this session, the components and architecture of Veeam Availability Orchestrator will be shown in the context of how they work with each other. This breakout will start with a use case and then apply the capabilities of Veeam Availability Orchestrator to deliver objectives for the use case example. Additionally, this session will provide details of core capabilities of Veeam Availability Orchestrator, including data labs, custom steps and building DR plans. As part of your journey from beginner to expert with Veeam Availability Orchestrator, this session is recommended to attend first before attending “Automate your DR run book with PowerShell and Veeam Availability Orchestrator” and “Plan for disaster with confidence using automated testing in Veeam Availability Orchestrator”.

Tue, May 15th, 11:20 AM – 12:20 PM

Veeam Availability Console usage scenarios

Vitaliy Safarov

Veeam® Availability Console can bring lots of value to a cloud or service provider and enterprise organizations. What are the most common usage scenarios? How can you benefit from the functionality within the solution to lower your daily administration, but at the same time have visibility into your tenant’s environment? If you are a service provider or an enterprise that operates as a service provider, then you will learn a few scenarios that can save you time, effort and money, simply by using this FREE solution.

Wed, May 16th, 12:15 PM – 1:15 PM

The (r)evolution of VMware vSAN

Duncan Epping

The world of hyper-converged infrastructure moves at an extremely rapid pace, and VMware vSAN is one of the biggest enablers. In this session, Duncan Epping will discuss where VMware vSAN began, where it stands today and, most importantly, what to expect in the future. Duncan will start with a brief explanation of the basics of VMware vSAN and then quickly dive into the future by doing a demo of various (potentially) upcoming features.

Wed, May 16th, 1:35 PM – 2:35 PM

Wrap Up:

There are obviously a lot more to choose from and the full list can be found here. You can also download the VeeamON Mobile Application to register for sessions, organise and keep tabs on other parts of the event.

Looking forward to seeing you all there!

 

Quick Fix: vSAN Health Reports iSCSI Target Service Stopped

A few weeks ago I wrote about using iSCSI as a backup repository target. While still running this POC in my environment I came across an error in the vSAN Health Checker stating the vSAN iSCSI target service was in a Failed state. Drilling down into the vSAN Health check tree I could see a Service Runtime status of stopped as shown below against the host.

This host had recently been marked as unreachable in vCenter and required a Management Agent reset to bring it back online. There is a chance that that process stopped the iSCSI Target service but did not start it. In any case there is an easy way to see the status of the services and then get them back online.
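If you’d rather script the check than click through the UI, a quick sketch along these lines works, assuming SSH is enabled on the host. I’m also assuming the vSAN iSCSI target daemon is exposed as the vitd init script (the service name may vary by build), and the host and credentials are placeholders.

```python
import paramiko

HOST, USER, PASSWORD = "esxi01.lab.local", "root", "password"  # placeholders

# Assumption: the vSAN iSCSI target daemon is exposed as /etc/init.d/vitd.
CMDS = ["/etc/init.d/vitd status", "/etc/init.d/vitd start"]

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(HOST, username=USER, password=PASSWORD)
for cmd in CMDS:
    _, stdout, stderr = client.exec_command(cmd)
    out = stdout.read().decode().strip() or stderr.read().decode().strip()
    print(f"{cmd} -> {out}")
client.close()
```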

Once that’s been done, a re-run of the vSAN Health checker will show that the issue has been resolved and the iSCSI Target Service on the host is now running.

References:

https://kb.vmware.com/s/article/2147603

 

Deploying Veeam Powered Network into an AWS VPC

Veeam PN is a very cool product that has been GA for about four months now. Initially we combined the free product together with Veeam Direct Restore to Microsoft Azure to create Veeam Recovery to Microsoft Azure. Of late there has been a push to get Veeam PN out in the community as a standalone product that’s capable of simplifying the orchestration of site-to-site and point-to-site VPNs.

I’ve written a few posts on some of the use cases of Veeam PN as a standalone product. This post will focus on getting Veeam PN installed into an AWS VPC to be used as the VPN gateway. Given that AWS has VPN solutions built in, why would you look to use Veeam PN? The answer to that is one of the core reasons why I believe Veeam PN is a solid networking tool…the simplicity of the setup and ease of use for those looking to connect or extend on-premises or cloud networks quickly and efficiently.

Overview of Use Case and Solution:

My main use case for wanting to extend the AWS VPC network into an existing Veeam PN Hub connected to my Homelab and Veeam Product Strategy Lab was to test out using an EC2 instance as a remote Veeam Linux Repository. Having a look at the diagram below you can see the basics of the design with the blue dotted line representing the traffic flow.

 

The traffic flows between the Linux Repository EC2 instance and the Veeam Backup & Replication server in my Homelab through the Veeam PN EC2 instance. That is via the Veeam PN Hub that lives in Azure and the Veeam PN Site Gateway in the Homelab.

The configuration for this includes the following:

  • A virtual private cloud with a public subnet with a size /24 IPv4 CIDR (10.0.100.0/24). The public subnet is associated with the main route table that routes to the Internet gateway.
  • An Internet gateway that connects the VPC to the Internet and to other AWS products.
  • The VPN connection between the VPC network and the Homelab network. The VPN connection consists of a Veeam PN Site Gateway located in the AWS VPC and the Veeam PN HUB and Site Gateway located at the Homelab side of the VPN connection.
  • Instances in the External subnet with Elastic IP addresses that enable them to be reached from the Internet for management.
  • The main route table associated with the public subnet. The route table contains an entry that enables instances in the subnet to communicate with other instances in the VPC, and two entries that enable instances in the subnet to communicate with the remote subnets (172.17.0.0/24 and 10.0.30.0/24).

AWS has a lot of knobs that need adjusting, even for functionality you would normally assume just works. With that I had to work out which knobs to turn to make things work as expected and get the traffic flowing between sites.
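For those who’d rather script the VPC scaffolding from the list above than click through the console, a boto3 sketch along these lines covers the VPC, public subnet, Internet gateway and default route. The region is illustrative and error handling is omitted.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is illustrative

# VPC and public subnet sharing the /24 from the design above.
vpc_id = ec2.create_vpc(CidrBlock="10.0.100.0/24")["Vpc"]["VpcId"]
ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.100.0/24")

# Internet gateway so instances with Elastic IPs are reachable for management.
igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

# Default route on the VPC's main route table out through the Internet gateway.
main_rt = ec2.describe_route_tables(
    Filters=[{"Name": "vpc-id", "Values": [vpc_id]}])["RouteTables"][0]
ec2.create_route(RouteTableId=main_rt["RouteTableId"],
                 DestinationCidrBlock="0.0.0.0/0", GatewayId=igw_id)
```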

Veeam PN Site Gateway Configuration:

To get a Veeam PN instance working within AWS you need to deploy an Ubuntu 16.04 LTS from the Instance Wizard or Marketplace into the VPC (see below for specific configuration items). In this scenario a t2.small instance works well with a 16GB SSD hard drive as provided by the instance wizard. To install the Veeam PN services onto the EC2 instance, follow my previous blog post on Installing Veeam Powered Network Direct from a Linux Repo.

Once deployed along with the EC2 instance that I am using as a Veeam Linux Repository I have two EC2 instances in the AWS Console that are part of the VPC.

From here you can configure the Veeam PN instance as a Site Gateway. This can be done via the exposed HTTP/S Web Console of the deployed VM. First you need to create a new Entire Site Client from the HUB Veeam PN Web Console with the network address of the VPC as shown below.

Once the configuration file is imported into the AWS Veeam PN instance it should connect up automatically.

Jumping on the Veeam PN instance to view the routing table, you can see what networks the Veeam HUB has connected to.

The last two entries there are referenced in the design diagram and are the subnets that have the static routes configured in the VPC. You can see the path the traffic takes, which is reflected in the diagram as well.

Looking at the same info from the Linux Repository instance you can see standard routing for a locally connected server without any specific routes to the 172.17.0.0/24 or 10.0.30.0/24 subnets.

Notice though that the traffic path to the 172.17.0.0/24 subnet now goes through an extra hop, which is the Veeam PN instance.

Amazon VPC Configuration:

For the most part this was a straightforward VPC creation with an IPv4 CIDR block of 10.0.100.0/24 configured. However, to make the routing work and keep the traffic flowing as desired you need to tweak some settings. After initial deployment of the Veeam PN EC2 instance I had some issues resolving both forward and reverse DNS entries, which meant I couldn’t update the servers or install anything off the Veeam Linux software repositories.

For some reason, a couple of the VPC options that make all of that work are turned off by default.

Enable both DNS Resolution and DNS Hostnames via the menu options highlighted above.
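If you’re scripting the build, the same two switches can be flipped with boto3. Note that the EC2 API only accepts one attribute per ModifyVpcAttribute call, so it’s two calls (the VPC ID is a placeholder):

```python
import boto3

ec2 = boto3.client("ec2")
VPC_ID = "vpc-0123456789abcdef0"  # placeholder

# ModifyVpcAttribute takes exactly one attribute per call.
ec2.modify_vpc_attribute(VpcId=VPC_ID, EnableDnsSupport={"Value": True})
ec2.modify_vpc_attribute(VpcId=VPC_ID, EnableDnsHostnames={"Value": True})
```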

For the Network ACLs, the default allows ALL/ALL for inbound and outbound and can be left as is. In terms of Security Groups, I created a new one and added both the Veeam PN and Linux Repository instances into the group. Inbound, we are catering for SSH access to connect to and configure the instances externally and, as shown below, there are also rules in there to allow HTTP and HTTPS traffic to access the Veeam PN Web Console.

These, along with the Network ACLs are pretty open rules so feel free to get more granular if you like.

From the Route Table menu, I added the static routes for the remote subnets so that anything on the 10.0.100.0/24 network trying to get to 172.17.0.0/24 or 10.0.30.0/24 will use the Veeam PN EC2 instance as its next hop target.
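In boto3 terms, those two static routes look something like the sketch below. The next hop is the Veeam PN instance itself rather than a gateway, and the route table and instance IDs are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")
RT_ID = "rtb-0123456789abcdef0"        # placeholder: main route table
VEEAM_PN_ID = "i-0123456789abcdef0"    # placeholder: Veeam PN EC2 instance

# Remote subnets reachable via the Veeam PN instance as the next hop.
for cidr in ("172.17.0.0/24", "10.0.30.0/24"):
    ec2.create_route(RouteTableId=RT_ID, DestinationCidrBlock=cidr,
                     InstanceId=VEEAM_PN_ID)
```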

EC2 Configuration Gotcha:

A big shout out to James Kilby who helped me diagnose an initial static routing issue by discovering that you need to adjust the Source/Destination Check attribute which controls whether source/destination checking is enabled on the instance. This can be done either against the EC2 instance right click menu, or on the Network Interfaces menu as shown below.

Disabling this attribute enables an instance to handle network traffic that isn’t specifically destined for the instance. For example, instances running services such as network address translation, routing, or a firewall should set this value to disabled. The default value is enabled.
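Scripted, that attribute change is a one-liner per instance (the instance ID is a placeholder):

```python
import boto3

ec2 = boto3.client("ec2")

# Disable source/destination checking so the Veeam PN instance can forward
# traffic that isn't addressed to it.
ec2.modify_instance_attribute(InstanceId="i-0123456789abcdef0",  # placeholder
                              SourceDestCheck={"Value": False})
```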

Conclusion:

The end result of all that was the ability to configure the Veeam Backup & Replication server in my Homelab to add the EC2 Veeam Linux instance as a repository, which allowed me to backup to AWS from home through the Veeam PN site-to-site connectivity.

Bear in mind this is a POC, however the ability to consider Veeam PN as another option for extending AWS VPCs to other networks in a quick and easy fashion should make you think of the possibilities. Once the VPC/EC2 knobs were turned and the correct settings put in place, the end to end deployment, setup and connecting into the extended Veeam PN HUB network took no more than 10 minutes.

That is the true power of the Veeam Powered Network!

References:

https://docs.aws.amazon.com/glue/latest/dg/set-up-vpc-dns.html

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#change_source_dest_check

vSphere 6.7 – What’s in it for Service Providers Part 1

A few weeks ago, after much anticipation, VMware released vSphere 6.7. Like 6.5 before it, this is a lot more than a point release and represents a major upgrade from vSphere 6.5. There is so much packed into this new release that there is an official page with separate blog posts talking about the features and enhancements. As usual, I will go through some of the key features and enhancements that are included in the latest versions of vCenter and ESXi as they relate back to the Service Providers that use vSphere as the foundation of their Infrastructure as a Service offerings.

There is a lot to get through though, and like the vSphere 6.5 release the “what’s new” will not fit into one post, so I’ll split the highlights between a couple of posts and cover ESXi specifically in a follow-up. I still feel it’s important to highlight the base hypervisor as well as the management platform. I’ll also talk about current interoperability with vCloud Director and NSX as well as Veeam supportability for vSphere 6.7.

The major features and enhancements as listed in the What’s New PDF are:

  • Scalability Enhancements
  • VMware vCenter Server Appliance Linked Mode
  • VMware vCenter Server Appliance Back Up Scheduler
  • Single Reboot
  • Quick Boot
  • Support for 4K Native Storage
  • Improved HTML 5 based vSphere Client
  • Security-at-Scale
  • Support for Trusted Platform Module (TPM) 2.0 and virtual TPM
  • Cross-vCenter Encrypted vMotion
  • Support for Microsoft’s Virtualization Based Security (VBS)
  • NVIDIA GRID vGPU Enhancements
  • vSphere Persistent Memory
  • Hybrid Linked Mode
  • Per-VM Enhanced vMotion Compatibility (EVC)
  • Cross-vCenter Mixed Version Provisioning – Simplify provisioning across hybrid cloud environments that have different vCenter versions

Below I’ve fleshed out some of these in the context of Service Providers.

Enhanced vCenter Server Appliance:

The VCSA has been enhanced significantly in this release. Having used the VCSA exclusively for the past year in all my environments I have a love/hate relationship with it. I still feel it’s nowhere near as stable as vCenter running on top of Windows and is prone to more issues than a Windows based vCenter…however 6.7 will be the last release supporting or offering a Windows based vCenter. With that, VMware have had to work hard on making the VCSA more resilient.

Compared to the 6.5 VCSA, 6.7 offers twice the performance in vCenter operations per second, a three times reduction in memory usage and three times faster DRS operations, meaning that power-on and other VM operations are performed quicker. This is great on a service provider platform with potentially lots of those operations happening during the course of a day. Hopefully this improves the overall responsiveness of the VCSA, which I have felt at times to be poor under load or after an extended period of appliance uptime.

There have also been a number of updates to the APIs offered in vSphere, the VCSA and ESXi. William Lam has a great post on what’s new for APIs here, but all Service Providers should have teams looking at the API Explorer as it’s a great way to explore and learn what’s available.
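As a taste of what’s in the API Explorer, here is a minimal sketch against the vCenter REST API that ships with the VCSA: authenticate for a session token, then list VMs. The server name and credentials are placeholders.

```python
import requests

VCSA = "https://vcsa01.lab.local"                   # placeholder
AUTH = ("administrator@vsphere.local", "password")  # placeholder

# A POST to the CIS session endpoint with basic auth returns a session token.
session = requests.post(f"{VCSA}/rest/com/vmware/cis/session",
                        auth=AUTH, verify=False)
session.raise_for_status()
token = session.json()["value"]

# The token is passed as a header on subsequent calls, e.g. listing VMs.
vms = requests.get(f"{VCSA}/rest/vcenter/vm",
                   headers={"vmware-api-session-id": token}, verify=False)
for vm in vms.json()["value"]:
    print(vm["name"], vm["power_state"])
```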

Single Reboot and Quick Boot:

For Service Providers who need to upgrade their platforms to maintain optimal compatibility, upgrading hosts can be time consuming at scale. vSphere 6.7 reduces ESXi host upgrade times by eliminating one of the two reboots normally required for major version upgrades. This is the single reboot feature. There is also vSphere Quick Boot, which restarts the ESXi hypervisor without rebooting the physical host. This skips time-consuming server hardware initialization and post-boot operation wait times. Both of these significantly reduce maintenance times.

This blog post covers both features in more detail.

Improved HTML 5 based vSphere Client:

While minor in terms of actual under the hood improvements, the efficiencies gained from a decent user interface are significant. When managing Service Provider platforms at scale, having a reliable client is important, and with the decommissioning of the VI client and the often frustrating performance of the Flex client, a near complete and workable HTML5 vSphere Client is a big plus for those who work day to day in vCenter.

The vSphere 6.7 vSphere Client has support for vSAN as well as having Update Manager fully built in. As per the last NSX 6.4 update there is also limited management of that. There is also a new vROps plugin…this plugin is available out of the box once vROps has been linked with vCenter and offers dashboards directly in the vSphere Client, including overview and cluster views and alerts for both vCenter and vSAN. This is extremely handy for Service Providers who use vROps dashboards, as they no longer need to go to two different locations to get the info.

vCD and NSX Supportability:

Shifting from new features and enhancements to an important subject when talking about service provider platforms…VMware product compatibility. Those VCPP Service Providers running a Hybrid Cloud should be running a combination of vCloud Director SP and/or NSX-v, and at the moment there is no support for either in vSphere 6.7.

Looking at vCloud Director, it looks like 9.1 is supported, however given that you need to be running NSX-v with vCD these days and NSX is not yet supported, it doesn’t make much sense to suggest that there is total compatibility.

I suspect we will see NSX-v come out with a supported build shortly…though I’m only expecting vCloud Director SP to support 6.7 from version 9.1, which will mean upgrades.

Veeam Backup & Replication Supportability: 

Veeam commits to supporting major version releases within 90 days or sooner of GA. So with that, those Service Providers that are also VCSPs using Veeam to backup their infrastructure should not upgrade to vSphere 6.7 until Backup & Replication Update 3a is released. For those on the bleeding edge that have already updated, your only option until then is our Agents for Windows and Linux.

Wrapping up Part 1:

Rounding off this post, there is a fair bit to be aware of in the Known Issues section for 6.7. It’s worth reading through all the known issues just in case there are any specific issues that might impact you. In upcoming posts in this vSphere 6.7 for Service Providers series I will cover more vCenter features as well as ESXi enhancements and what’s new in Core Storage.

Happy upgrading!

References:

https://docs.vmware.com/en/VMware-vSphere/6.7/rn/vsphere-esxi-vcenter-server-67-release-notes.html

Introducing Faster Lifecycle Management Operations in VMware vSphere 6.7

Released: vSAN 6.7 – HTML5 Goodness, Enhanced Health Checks and More!

VMware has announced the general availability of vSAN 6.7. As vSAN continues to grow, VMware are very buoyant about how it’s performing in the market. With some 10,000 customers and a run rate of over $600 million, they claim to lead the HyperConverged market with a 32% market share. From my point of view it’s great to see vSAN being deployed across 250 cloud providers and have it as the cornerstone storage of the VMware Cloud on AWS solution. vSAN 6.7 focuses on an intuitive operational experience, a consistent application experience and a holistic support experience.

New Features and Enhancements:

  • HTML5 User Interface
  • Embedded vROPs plugin for HTML5 User Interface
  • Support for Windows Failover Cluster using iSCSI
  • Adaptive Resync Performance Improvements
  • Destaging Performance Improvements
  • More Efficient data placement during Host Decommissioning
  • Improved Space Efficiency
  • Faster Failover with Redundant vSAN Networks
  • Optimized Witness Traffic Separation
  • Stretched Cluster Improvements
  • Host Affinity for Next-Gen Applications
  • Health Check Enhancements
  • Enhanced Diagnostics
  • vSAN Support Insight
  • 4Kn Device Support
  • Improved FIPS 140-2 Validation Security

There are a lot of enhancements in this release and while not as ground breaking as the 6.6 release last year, there is still a lot to like about how VMware is improving the platform. From the list above, I’ve taken the key ones from my point of view and expanded on them a little.

HTML5 User Interface:

As has been the trend with all VMware products of late, vSAN is getting the Clarity Framework overhaul and is being included in the HTML5 vSphere Client, with new vSAN tasks and workflows developed from the ground up to simplify the experience. There is also new vSAN functionality that can only be accessed via the HTML5 client.

The legacy Flex client will still be available for use, and it’s also worth noting that this is not a direct port of the Flex interface but was rebuilt from the ground up. This has resulted in a more efficient experience for the user with fewer clicks and less time to action items. Any new features or enhancements will only be seen in the new HTML5 UI.

Support for Windows Failover Cluster using iSCSI:

A few weeks back I posted about how you could use vSAN as a Veeam repository using the iSCSI feature. With vSAN 6.7 there is official support for Windows Failover Clustering using the vSAN iSCSI service. Lots of people still run MSCS and a lot still use traditional clustering. This supports physical and virtual guest iSCSI initiators and includes transparent failover of clusters with vSAN iSCSI volumes.

I’m not sure if this now means that iSCSI volumes are supported as Veeam Cloud Repositories…but I will confirm either way.

Adaptive Resync Performance Improvements:

vSAN 6.7 introduces a new Adaptive Resync feature that will make sure resources are available for VM IO and resync IO. This ensures that under IO stress certain traffic types are not starved of resources and allows more bandwidth to be used when there are periods of less contention. Under contention, resync IO will be guaranteed at least 20% of the bandwidth, and if no resync traffic exists, VM IO may consume 100%. This is effectively regulating reads and writes to ensure an optimal balance for VM and resync IO.

Destaging Performance Improvements:

vSAN 6.7 looks to deliver more consistent performance through optimizations in the data path. With the faster destaging, data drains more quickly from the write buffer to the capacity tier. This allows the buffer tier to be available for newer IO sooner. This is done via improved in-memory handling of IO during destaging that delivers higher throughput and more consistency, which in turn improves the overall performance of VM and resync IO.

More Efficient data placement during Host Decommissioning:

When putting a host in maintenance mode or decommissioning a host you need to select the evacuation type for the objects on that host. This can take time depending on the amount of data. vSAN 6.7 builds on improvements introduced in 6.6 that consolidate replicas living across multiple hosts while maintaining FTT compliance. It looks for the smallest component to move, which results in less data being rebuilt and less temporary space usage. vSAN will provide more intelligence behind the data movement to reduce the time and effort it takes to put a host into maintenance mode.

Improved Space Efficiency:

In previous vSAN versions the VM swap object was always thick provisioned even if the VM itself was thin. In vSAN 6.7 this will now be thin by default and will also inherit the policy from the VM, so that the FTT of the swap object is consistent with the VM, which results in more efficient storage. Previous to this, large environments would suffer with a large number of swap files taking up a proportionately large amount of space.

 

Conclusion:

vSAN continues to be improved by VMware and they have addressed some core usability and efficiency features in this 6.7 release. The move to the HTML5 client was expected but still good to see, while the enhancements in resync and destaging all contribute to platform stability. The enhanced health checks add a new dimension to vSAN troubleshooting and the support insight allows users to get a better view of what’s happening on their instances.

References:

Pre-release information and images sourced via VMware EABP

https://blogs.vmware.com/virtualblocks/2018/04/17/whats-new-vmware-vsan-6-7/

 

 

Cloud Connect Subtenants, Veeam Availability Console and Agents!

Cloud Connect Subtenants have gone under the radar for the most part but can play an important role in how Service Provider customers consume Cloud Connect services. In a previous post, I described how subtenants work in the context of Cloud Connect Backup.

Subtenants can be configured by either the VCSP or by the tenant consuming a Cloud Connect Backup service. Subtenants are used to carve up and assign a subset of the parent tenant storage quota. This allows individual agents to authenticate against the Cloud Connect service with a unique login allowing backups to Cloud Repositories that can be managed and monitored from the Backup & Replication console.

In this post I’m going to dive into how subtenants are created by the Veeam Availability Console and how they are then used by agents that are managed by VAC. For those that may not know what VAC does, head to this post for a primer.

Automatic Creation of Subtenant Users:

Veeam Availability Console automatically creates subtenant users if a backup policy that is configured to use a cloud repository as a backup target is chosen. When such a backup policy is assigned to an agent, VAC creates a subtenant account on the Cloud Connect Server for each backup agent.

Looking below you can see a list of the Backup Agents under the Discovery Menu.

Looking at the Backup Policy you can see that the Backup Target is a Cloud Repository, which results in the corresponding subtenant account being created.

The backup agents use these subtenant accounts to connect and send data to a Cloud Connect endpoint that are backed by a cloud repository. The name of each subtenant account is created according to the following naming convention:

companyname_computername

At the Cloud Provider end from within the Backup & Replication console under the Cloud Connect Menu and under tenants, clicking on Manage Subtenants will show you the corresponding list of subtenant accounts.

The view above is the same as that seen at the tenant end. A tenant can modify the quota details from the Veeam Backup & Replication console. This will result in a Custom Policy status as shown below. The original policy can be reapplied from VAC to bring it back into line.

The folder structure on the Cloud Repository maps to what’s seen above. As you can also see, if you have Backup Protection enabled you will also have _RecycleBin objects there.

NOTE: When a new policy is applied to an agent the old subtenant account and data is retained on the Cloud Connect repository. The new policy gets applied and a subtenant account with an _n gets created. Service Providers will need to purge old data manually.

Finally if we look at the endpoint where the agent is installed and managed by VAC you will see the subtenant account configured.

Conclusion:

So there is a deeper look at how subtenants are used as part of the Veeam Availability Console and how they are created, managed and used by the Agent for Windows.

References:

https://helpcenter.veeam.com/docs/vac/provider_admin/create_subtenant_user.html?ver=20

Upgrading Windows Agents with Veeam Availability Console

One of the Veeam Availability Console’s key features is its ability to deploy and manage Veeam Agent for Windows. This is done through the VAC Web Console and is achieved through the connectivity of the provider’s Cloud Connect Gateway to the tenant’s Veeam Backup & Replication instance. Whether this is managed by a service provider or by the tenant, VAC also has the ability to remotely upgrade Windows Agents.

The way this works is that the Veeam Availability Console periodically connects to the Veeam Update Server and checks whether a new version of the agent software is available. If a new version is available, VAC displays a warning next to the agents saying that they are outdated, as shown below.

Updating the backup agents from the Veeam Update Server is performed via the master agent that sits on-premises. This agent is deployed during the initial Service Provider configuration from the Veeam Backup & Replication server. The master agent downloads the backup agent setup file from the Veeam Update Server and then uploads this setup file to systems selected via the update scope and initiates the update.

To initiate the upgrade, select the agents from the Backup Agents Tab under Clients -> Discovery. Once selected click on the Backup Agent dropdown and click upgrade.

Note: Once you click Upgrade the process will be kicked off…there is no further confirmation. There is also a Patch option which allows you to apply patches to the agents in between major build releases.

Once initiated, all agents will be shown as updating as shown below.

Taking a look at the Resource Monitor of one of the endpoints being updated, you can see that the machine is receiving the update from the local server that has the master agent and that the agent is talking back to the VAC server via Cloud Connect Port 6180.

And you can see the Windows Installer running the agent update msi.

Back in the VAC console, after a while you will see the update deployment status complete.

And the endpoint now has the updated agent version running.

Which is reflected in the VAC Console.

Conclusion:

That’s the very straightforward process of having the Veeam Availability Console upgrade Veeam Windows Agents under its management. Again, this can be done by the service provider or it’s a task that can be executed by the tenant through their own console login given the correct permissions. There are a few other options for those that deployed the agents with the help of a 3rd party tool and also for those doing it offline…for a run down of that process, head to the help pages linked below.

References:

https://helpcenter.veeam.com/docs/vac/provider_admin/update_backup_agents.html?ver=20

Released: NSX-v 6.3.6

Last week VMware released NSX-v 6.3.6 (Build 8085122), which doesn’t contain any new features but addresses a number of bugs from previous releases. This has been done independently of NSX-v 6.4.0, which went GA in January.

This is good to see, though it’s also interesting that people are still not upgrading to 6.4.0 in droves, meaning VMware needs to support both versions. Going through the release notes there are a lot of known issues worth being aware of, and more than a few apply to service providers.

Some key fixes are listed below:

Important Fixes:

  • Network outage of ~40-50 seconds seen on Edge upgrade – During an Edge upgrade, there is an outage of approximately 40-50 seconds
  • After upgrading to 6.3.5, a routing loop between the DLR and ESGs causes connectivity issues in certain BGP configurations
  • NSX Manager CPU high due to edge in read-only file system mode – NSX Manager is slow to respond because it sits at 100% CPU and receives a lot of read-only file system events from the edge
  • After upgrading from vCNS Edge 5.5.4, customers could not configure the Health-Check-Monitor port nor make any changes directly from vCD
  • Distributed Firewall stays in “Publishing” state if you have a security group that contains an IPSet with 0.0.0.0/0 as an EXCLUDE member, an INCLUDE member, or as part of ‘dynamic membership containing Intersection (AND)’

Those with the correct entitlements can download NSX-v 6.3.6 here.

References:

https://docs.vmware.com/en/VMware-NSX-for-vSphere/6.3/rn/releasenotes_nsx_vsphere_636.html

VeeamON 2018: Recognizing Innovation and what it means to be Innovative

True innovation is solving a real problem…and though for the most part it’s startups and tech giants that are seen to be the innovators, their customers and partners also have the ability to innovate. Innovation drives competitive advantage and allows companies to differentiate themselves compared to others. In my previous roles I was lucky to be involved with teams of talented people that did great things with great technologies. Like others around the world, we were innovating with leading vendor technologies to create new service offerings that add value and complement the underlying technology.

Innovation requires these teams of people to be experimental at heart and to try to build or enhance upon existing technologies. The Service Provider industry has always found a way to innovate on top of vendor platforms, and successful vendors are those that offer the right tools and guidance for providers to create innovative solutions on top of their platforms. They are problem solvers!

Orchestration, automation, provisioning and billing are driving factors in how service providers can differentiate themselves and gain that competitive advantage in the marketplace. Without innovating on top of these platforms, service offerings become generic, don’t stand out and are generally operationally expensive to manage and maintain.

Introducing the Veeam Innovation Awards for 2018:

When visiting and talking to different partners across the world it’s amazing to see some of the innovation that’s been built on top of Veeam technologies, and we at Veeam want to reward our customers and partners who have done great things with our technologies.

At VeeamON 2018, we’ll be celebrating some of these innovative solutions, so please let us know how you’ve built upon the Veeam Availability Platform. Nominations can be made from March 29 to April 30, with the winners being recognized during the VeeamON main stage keynote. Self-nominations or those from partners, providers, or Veeam field-team members are encouraged — click here to nominate for a Veeam Innovation Award.

I can think of a number of VCSPs that have done great things building upon Cloud Connect, Backup & Replication IaaS backups and working with Veeam’s APIs and PowerShell to solve customer problems and offer value added services. These guys have brought something new to the industry and we want to reward that.

Having previously come from a successfully innovative company within its own space, being innovative is now something I preach to all customers and partners I visit. It is an absolute requirement if you want to win business and stand out in the backup and availability industry…innovation is key and we want to hear about it from you!

References:

Nominations for the VeeamON 2018 Innovation Awards are now open

Setting up vSAN iSCSI and using it as a Veeam Repository

Probably one of the least talked about features of vSAN is its ability to serve out iSCSI volumes. The feature was released with vSAN 6.5, is primarily focused on physical workloads and is easily configurable via the vSphere Web Client. iSCSI targets on vSAN are managed the same as any other vSAN objects using Storage Policy Based Management (SPBM). Deduplication, compression, mirroring and erasure coding can be utilized with the iSCSI target service, as can CHAP and Mutual CHAP authentication.

Of late, I’ve been asked by service providers about using Object Storage platforms as Veeam Backup & Replication repositories. There are a lot of options out there, but someone asked specifically about using vSAN. In theory you could just use a VMDK on a vSAN datastore, but I thought it would be interesting to look at using iSCSI to mount a volume and use it as a repository.

Initial iSCSI Configuration for vSAN:

First thing we need to do is enable the iSCSI Target service from the vSphere Web Client. Under the Cluster Configuration tab, in the iSCSI Target menu, you need to enable the iSCSI service. Select the default iSCSI Network kernel interface and then modify the iSCSI port and add security if desired. Take note of the info message around using the Storage Policy for the home object.

From there we set up a new iSCSI Target. Here you will be given the IQN, and we will give the target an alias. This window also lets us create the first LUN on the iSCSI Target. The LUN id can be specified along with the alias and finally the size. Just like creating a new VMDK on a vSAN datastore, we are shown the storage consumption of the object depending on the Storage Policy chosen.

Once completed under the iSCSI Target pane we see the details of the Target and LUN just created. Take note of the I/O Owner Host as that is what we will be using later on as the iSCSI Target from the Veeam repository server.

Configuring Host access and setting iSCSI Access Permissions:

On the creation of a LUN there is a default policy that allows all initiator sources to connect to it. To create specific permissions for host access, and to also create access groups, you need to first enable the iSCSI initiator on the hosts. For that, I’ve got a Windows VM (note only physical servers are officially supported) with Veeam Backup & Replication installed on it. To connect to the iSCSI network we have to add an additional vNIC that’s hooked into a PortGroup configured with the vSAN iSCSI VLAN.

Below we can see the VMkernel configuration and IP address of the I/O Owner host.

I’ve created a new PortGroup for the new vNIC to be attached to and added it to the VM.

From there we need to start the Microsoft iSCSI Initiator service which will give us the Initiator name we need to configure host access in the vSphere Web Client. Note that we should also install and enable MPIO for iSCSI if not installed as a Windows Feature.

Under the iSCSI Initiator Groups menu in the Cluster Configuration tab you can add the initiator to a new group. This can contain one or many hosts as you would expect in any iSCSI initiator group configuration.

Once that’s been done we have to allow that new group access to the target where the LUN is contained. Under the iSCSI Target menu and under Target Details in the lower pane click on the + icon and add the group as an allowed initiator.

From here we can go back to the Windows VM and connect to the iSCSI Target. We are using the IP address of the host that was highlighted above in the initial configuration.
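If you prefer scripting the connection over clicking through the iSCSI Initiator GUI, the built-in iscsicli tool can be driven from Python. The portal IP and IQN below are placeholders for the I/O Owner host and target noted earlier.

```python
import subprocess

PORTAL_IP = "10.0.10.21"  # placeholder: the I/O Owner host noted earlier
TARGET_IQN = "iqn.1998-01.com.vmware:vsan-target"  # placeholder IQN

# Register the vSAN I/O Owner host as a target portal, list what it offers,
# then do a quick login to the target.
subprocess.run(["iscsicli", "QAddTargetPortal", PORTAL_IP], check=True)
subprocess.run(["iscsicli", "ListTargets"], check=True)
subprocess.run(["iscsicli", "QLoginTarget", TARGET_IQN], check=True)
```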

Once done we should have a connected disk that’s visible in the Devices configuration of the iSCSI Initiator.

Configuring new iSCSI Volume as Veeam Repository:

From here the process to set up a Veeam Repository based on the vSAN iSCSI LUN is straightforward. Firstly we need to bring the volume online and create a partition. As you can see below, the disk is of Bus Type iSCSI and Name is VMware Virtual SAN.

As for the partition configuration, I’ve set it up as shown below, with ReFS as the file system.
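Those steps can also be scripted by calling the standard Windows storage cmdlets from Python, as in the sketch below. It assumes the new LUN arrived as disk number 1 (check Get-Disk first) and uses the 64KB allocation unit size commonly recommended for Veeam repositories; the drive letter and label are arbitrary.

```python
import subprocess

# Assumes the vSAN iSCSI LUN shows up as disk number 1; verify with Get-Disk.
ps = (
    "Set-Disk -Number 1 -IsOffline $false; "
    "Initialize-Disk -Number 1 -PartitionStyle GPT; "
    "New-Partition -DiskNumber 1 -UseMaximumSize -DriveLetter V; "
    "Format-Volume -DriveLetter V -FileSystem ReFS "
    "-AllocationUnitSize 65536 -NewFileSystemLabel 'VeeamRepo'"
)
subprocess.run(["powershell", "-NoProfile", "-Command", ps], check=True)
```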

From here we can head into the Backup & Replication console and create a new Repository with the new volume selected.

Performance and Limitations:

Once configured, I was interested in seeing how a vSAN iSCSI connected volume performed against a native vSAN disk. The results below show that there is a significant performance hit in going via iSCSI. This seems logical, as in addition to the iSCSI overheads, a native VMDK on vSAN is hooked into the ESXi kernel directly and should get line speed rates when it comes to data transfer.

The configuration maximums for vSAN iSCSI are listed below:

  • Maximum 1024 LUNs per vSAN cluster
  • Maximum 128 targets per vSAN cluster
  • Maximum 256 LUNs per target
  • Maximum LUN size of 62TB
  • Maximum 128 iSCSI sessions per host
  • Maximum 4096 iSCSI IO queue depth per host
  • Maximum 128 outstanding writes per LUN
  • Maximum 256 outstanding IOs per LUN
  • Maximum 64 client initiators per LUN

So the max size of an iSCSI LUN matches the max size of a VMDK. Therefore, when considering iSCSI as a possible option for Veeam backups, Scale-out Backup Repositories should be used to enable the adding of extents once that limit is reached.

There are also limitations on official support for virtual machines and other platforms:

  • Currently not supported for implementation for Microsoft clusters
  • Currently not supported for use as a target for other vSphere hosts
  • Currently not supported for use with third party hypervisors
  • Currently not supported for use with virtual machines

So if this becomes a consideration, physical servers will need to be used in order to gain support.

Conclusion:

So after all is said and done, we have a Veeam Repository that is now sitting on vSAN via iSCSI. The question remains whether this is a good application of vSAN or whether it’s worth looking at as an option, however the option is now there. Again, you may be able to look at the native VMDK option, but I like the flexibility of iSCSI for physical repositories at the moment.

Probably the biggest consideration for using vSAN iSCSI as a Veeam repository is the design of the vSAN Cluster. vSAN has not traditionally been considered for storage only purposes, however you could put together some low compute nodes with large disk groups that would present decent storage for repository purposes.

In using vSAN you have the benefit of knowing your data is redundant across multiple nodes as per the vSAN Storage Policies. This is the benefit of using object storage like vSAN as a Veeam Repository.

References:

https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.virtualsan.doc/GUID-13ADF2FC-9664-448B-A9F3-31059E8FC80E.html 

https://kb.vmware.com/kb/2148216

 
