Tag Archives: VSAN

Quick Fix: vSAN Health Reports iSCSI Target Service Stopped

A few weeks ago I wrote about using iSCSI as a backup repository target. While still running this POC in my environment I came across an error in the vSAN Health Checker stating the vSAN iSCSI target service was in a Failed state. Drilling down into the vSAN Health check tree I could see a Service Runtime status of stopped as shown below against the host.

This host had recently been marked as unreachable in vCenter and required a Management Agent reset to bring it back online. There is a chance that this process stopped the iSCSI Target service but did not start it again. In any case there is an easy way to see the status of the services and then get them back online.
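On the host in question, the quickest check is via the host services. The sketch below assumes the vSAN iSCSI target daemon shows up in the host service list with the key "vitd" – verify the key in the full listing first, and swap in your own vCenter and host names:

    # Connect to vCenter and point at the affected host (names are placeholders)
    Connect-VIServer vcenter.lab.local
    $vmhost = Get-VMHost -Name "esxi-01.lab.local"

    # List the host services and their runtime state
    Get-VMHostService -VMHost $vmhost | Select-Object Key, Label, Running

    # If the vSAN iSCSI target daemon (assumed key "vitd") is stopped, start it
    Get-VMHostService -VMHost $vmhost | Where-Object { $_.Key -eq "vitd" -and -not $_.Running } | Start-VMHostService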

Once that’s been done, a re-run of the vSAN Health checker will show that the issue has been resolved and the iSCSI Target Service on the host is now running.

References:

https://kb.vmware.com/s/article/2147603

 

Released: vSAN 6.7 – HTML5 Goodness, Enhanced Health Checks and More!

VMware has announced the general availability of vSAN 6.7. As vSAN continues to grow, VMware are very buoyant about how it’s performing in the market. With some 10,000 customers and a run rate of over $600 million, they claim to lead the HyperConverged market with a 32% market share. From my point of view it’s great to see vSAN being deployed across 250 cloud providers and serving as the cornerstone storage of the VMware Cloud on AWS solution. vSAN 6.7 focuses on an intuitive operational experience, a consistent application experience and a holistic support experience.

New Features and Enhancements:

  • HTML5 User Interface
  • Embedded vROPs plugin for HTML5 User Interface
  • Support for Windows Failover Cluster using iSCSI
  • Adaptive Resync Performance Improvements
  • Destaging Performance Improvements
  • More Efficient data placement during Host Decommissioning
  • Improved Space Efficiency
  • Faster Failover with Redundant vSAN Networks
  • Optimized Witness Traffic Separation
  • Stretched Cluster Improvements
  • Host Affinity for Next-Gen Applications
  • Health Check Enhancements
  • Enhanced Diagnostics
  • vSAN Support Insight
  • 4Kn Device Support
  • Improved FIPS 140-2 Validation Security

There are a lot of enhancements in this release and while not as ground breaking as the 6.6 release last year, there is still a lot to like about how VMware is improving the platform. From the list above, I’ve taken the key ones from my point of view and expanded on them a little.

HTML5 User Interface:

As has been the trend with all VMware products of late, vSAN is getting the Clarity Framework overhaul and is being included in the HTML5 vSphere Web Client with new vSAN tasks and workflows developed from the ground up to simplify the experience. There is also new vSAN functionality that can only be accessed via the HTML5 client.

The legacy Flex client will still be available for use and it’s also worth noting that this is not a direct port of the Flex interface but was rebuilt from the ground up. This has resulted in a more efficient experience for the user with fewer clicks and less time to action items. Any new features or enhancements will only be seen in the new HTML5 UI.

Support for Windows Failover Cluster using iSCSI:

A few weeks back I posted about how you could use vSAN as a Veeam repository using the iSCSI feature. With vSAN 6.7 there is official support for Windows Failover Clustering using the vSAN iSCSI service. Lots of people still run MSCS and a lot still use traditional clustering. The feature supports physical and virtual guest iSCSI initiators and includes transparent failover of clusters with vSAN iSCSI volumes.

I’m not sure if this now means that iSCSI volumes are supported as Veeam Cloud Repositories…but I will confirm either way.

Adaptive Resync Performance Improvements:

vSAN 6.7 introduces a new Adaptive Resync feature that makes sure resources are available for both VM IO and resync IO. This ensures that under IO stress certain traffic types are not starved of resources, and allows more bandwidth to be used when there are periods of less contention. Under contention, resync IO will be guaranteed at least 20% of the bandwidth, and if no resync traffic exists, VM IO may consume 100%. This effectively regulates reads and writes to ensure an optimal balance for VM and resync IO.

Destaging Performance Improvements:

vSAN 6.7 also looks to deliver more consistent performance through optimizations in the data path. With faster destaging, data drains more quickly from the write buffer to the capacity tier, which frees the buffer tier for new IO sooner. This is done via improved in-memory handling of IO during destaging that delivers higher throughput and more consistency, which in turn improves the overall performance of VM and resync IO.

More Efficient data placement during Host Decommissioning:

When putting a host into maintenance mode or decommissioning a host you need to select the evacuation type for the objects on that host. This can take time depending on the amount of data. vSAN 6.7 builds on improvements introduced in 6.6 that consolidate replicas living across multiple hosts while maintaining FTT compliance. It looks for the smallest component to move, which results in less data being rebuilt and less temporary space usage. vSAN will provide more intelligence behind the data movement to reduce the time and effort it takes to put a host into maintenance mode.

Improved Space Efficiency:

In previous vSAN versions the VM swap object was always thick provisioned, even if the VM itself was thin. In vSAN 6.7 it will now be thin by default and will also inherit the policy from the VM, so that the FTT of the swap object is consistent with the VM, which results in more efficient storage. Previously, large environments would suffer from a large number of swap files taking up a disproportionate amount of space.

 

Conclusion:

vSAN continues to be improved by VMware and they have addressed some core usability and efficiency features in this 6.7 release. The move to the HTML5 client was expected but is still good to see, while the enhancements in resync and destaging all contribute to platform stability. The enhanced health checks add a new dimension to vSAN troubleshooting and vSAN Support Insight allows users to get a better view of what’s happening on their instances.

References:

Pre release information and images sourced via VMware EABP

https://blogs.vmware.com/virtualblocks/2018/04/17/whats-new-vmware-vsan-6-7/

 

 

Setting up vSAN iSCSI and using it as a Veeam Repository

Probably one of the least talked about features of vSAN is its ability to serve out iSCSI volumes. The feature was released with vSAN 6.5, is primarily focused on physical workloads and is easily configurable via the vSphere Web Client. iSCSI targets on vSAN are managed the same as any other vSAN objects using Storage Policy Based Management (SPBM). Deduplication, compression, mirroring and erasure coding can be utilized with the iSCSI target service, as well as CHAP and Mutual CHAP authentication.

Of late, I’ve been asked by service providers about using object storage platforms as Veeam Backup & Replication repositories. There are a lot of options out there but someone asked specifically about using vSAN. In theory you could just use a VMDK on a vSAN datastore, but I thought it would be interesting to look at using iSCSI to mount a volume and use it as a repository.

Initial iSCSI Configuration for vSAN:

The first thing we need to do is enable the iSCSI Target service from the vSphere Web Client. Under the Cluster Configuration tab, in the iSCSI Target menu, you need to enable the iSCSI service. Select the default iSCSI network kernel interface and then modify the iSCSI port and add security if desired. Take note of the info message around using the Storage Policy for the home object.

From there we set up a new iSCSI Target. Here you will be given the IQN and we give the target an alias. This window also lets us create the first LUN for the iSCSI Target. The LUN id can be specified along with the alias and finally the size. Just like creating a new VMDK on a vSAN datastore, we are shown the storage consumption of the object depending on the Storage Policy chosen.

Once completed under the iSCSI Target pane we see the details of the Target and LUN just created. Take note of the I/O Owner Host as that is what we will be using later on as the iSCSI Target from the Veeam repository server.
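If you’d rather script this part, PowerCLI 6.5.1 and later ships with cmdlets for the vSAN iSCSI target service. The sketch below is how I remember them, so treat the parameter names as assumptions and check Get-Help New-VsanIscsiTarget (and friends) before running; the cluster, target and LUN names are examples only:

    # Enable the vSAN iSCSI target service on the cluster (network/port defaults set in the UI)
    $cluster = Get-Cluster -Name "vSAN-Cluster"
    Set-VsanClusterConfiguration -Configuration $cluster -IscsiTargetServiceEnabled $true

    # Create a target and carve out a 500GB LUN against it
    $target = New-VsanIscsiTarget -Cluster $cluster -Name "veeam-repo-target"
    New-VsanIscsiLun -Target $target -Name "veeam-repo-lun0" -CapacityGB 500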

Configuring Host access and setting iSCSI Access Permissions:

On the creation of a LUN there is a default policy that allows all initiator sources to connect to it. To create specific permissions for host access, and to also create access groups, you need to first enable the iSCSI initiator on the hosts. For that, I’ve got a Windows VM (note that only physical servers are officially supported) with Veeam Backup & Replication installed on it. To connect to the iSCSI network we have to add an additional vNIC that’s hooked into a PortGroup configured with the vSAN iSCSI VLAN.

Below we can see the VMKernel configuration and IP address of the I/O Owner hosts.

I’ve created a new PortGroup for the new vNIC to be attached to and added it to the VM.

From there we need to start the Microsoft iSCSI Initiator service which will give us the Initiator name we need to configure host access in the vSphere Web Client. Note that we should also install and enable MPIO for iSCSI if not installed as a Windows Feature.
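For those who like to script the Windows side, the initiator service, the MPIO feature and the initiator IQN can be handled with a few lines of PowerShell on the repository server (Windows Server is assumed for the feature install):

    # Make sure the Microsoft iSCSI Initiator service starts automatically and is running
    Set-Service -Name MSiSCSI -StartupType Automatic
    Start-Service -Name MSiSCSI

    # Install the MPIO feature if it is not already there (may require a reboot)
    Install-WindowsFeature -Name Multipath-IO

    # Grab the initiator IQN to plug into the vSphere Web Client initiator group
    (Get-InitiatorPort | Where-Object { $_.ConnectionType -eq "iSCSI" }).NodeAddress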

Under the iSCSI Initiator Groups menu in the Cluster Configuration tab you can add the initiator to a new group. This can contain one or many hosts as you would expect in any iSCSI initiator group configuration.

Once that’s been done we have to allow that new group access to the target where the LUN is contained. Under the iSCSI Target menu and under Target Details in the lower pane click on the + icon and add the group as an allowed initiator.

From here we can go back to the Windows VM and connect to the iSCSI Target. We are using the IP address of the host that was highlighted above in the initial configuration.
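I scripted the connection with the built-in iSCSI cmdlets, using the I/O Owner host’s iSCSI IP as the portal address (the IP below is a placeholder):

    # Register the I/O Owner host as a target portal and connect to the vSAN target persistently
    New-IscsiTargetPortal -TargetPortalAddress "192.168.50.21"
    Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true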

Once done we should have a connected disk that’s visible in the Devices configuration of the iSCSI Initiator.

Configuring new iSCSI Volume as Veeam Repository:

From here the process to set up a Veeam repository based on the vSAN iSCSI LUN is straightforward. First we need to bring the volume online and create a partition. As you can see below, the disk has a Bus Type of iSCSI and a Name of VMware Virtual SAN.

As for the partition configuration, I’ve set it up as shown, with ReFS used as the file system.
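The same can be done with a few lines of PowerShell if you prefer – the drive letter and volume label are just examples:

    # Find the vSAN iSCSI disk, bring it online and initialize it
    $disk = Get-Disk | Where-Object { $_.BusType -eq "iSCSI" -and $_.PartitionStyle -eq "RAW" }
    Set-Disk -Number $disk.Number -IsOffline $false
    Initialize-Disk -Number $disk.Number -PartitionStyle GPT

    # Single partition using all available space, formatted with ReFS
    New-Partition -DiskNumber $disk.Number -UseMaximumSize -DriveLetter R |
        Format-Volume -FileSystem ReFS -NewFileSystemLabel "VeeamRepo"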

From here we can head into the Backup & Replication console and create a new Repository with the new volume selected.

Performance and Limitations:

Once configured I was interested in seeing how a vSAN iSCSI connected volume performed against a native vSAN disk. The results below show that there is a significant performance hit in going down the iSCSI path. This seems logical, as in addition to the iSCSI overheads, a native VMDK on vSAN is hooked into the ESXi kernel directly and should get line speed rates when it comes to data transfer.

The configuration maximums for vSAN iSCSI are listed below:

  • Maximum 1024 LUNs per vSAN cluster
  • Maximum 128 targets per vSAN cluster
  • Maximum 256 LUNs per target
  • Maximum LUN size of 62TB
  • Maximum 128 iSCSI sessions per host
  • Maximum 4096 iSCSI IO queue depth per host
  • Maximum 128 outstanding writes per LUN
  • Maximum 256 outstanding IOs per LUN
  • Maximum 64 client initiators per LUN

So the max size of an iSCSI LUN matches the max size of a VMDK. Therefore, when considering iSCSI as a possible option for Veeam backups, Scale-out Backup Repositories should be used to enable the adding of extents once that limit is reached.

There are also limitations on official support for virtual machines and other platforms:

  • Currently not supported for implementation for Microsoft clusters
  • Currently not supported for use as a target for other vSphere hosts
  • Currently not supported for use with third party hypervisors
  • Currently not supported for use with virtual machines

So if this becomes a consideration, physical servers will need to be used in order to gain support.

Conclusion:

So after all is said and done, we have a Veeam repository that is now sitting on vSAN via iSCSI. The question remains whether this is a good application of vSAN or whether it’s worth looking at as an option, however the option is now there. Again, you may be able to look at the native VMDK option, but I like the flexibility of iSCSI for physical repositories at the moment.

Probably the biggest consideration for using vSAN iSCSI as a Veeam repository is the design of the vSAN Cluster. vSAN has not traditionally been considered for storage only purposes, however you could put together some low compute nodes with large disk groups that would present decent storage for repository purposes.

In using vSAN you have the benefit of knowing your data is redundant across multiple nodes as per the vSAN Storage Policies. This is the benefit of using object storage like vSAN as a Veeam Repository.

References:

https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.virtualsan.doc/GUID-13ADF2FC-9664-448B-A9F3-31059E8FC80E.html 

https://kb.vmware.com/kb/2148216

 

Released: Runecast Analyzer 1.7 with vSAN Support

Runecast has released version 1.7 of their Analyzer today, adding support for VMware vSAN. Using a number of resources within VMware’s knowledge base, Runecast offers a platform that looks at best practices, log information and security hardening guides to monitor your vSphere infrastructure, bringing issues to your attention through a simple yet intuitive interface. This now extends to vSAN as well. Also in this release is an improved dashboard called the VMware Stack view and an improved vSphere Web Client plugin.

Version 1.7 focuses on VMware vSAN support and proactive issue detection with remediation. vSAN, having gained the market lead in the HCI space, is deployed more commonly these days as the storage component of vSphere environments. It is critical to not only monitor performance but also keep the vSAN configuration in the best condition and prevent future failures or outages.

Runecast Analyzer v1.7 scans vSAN clusters and looks at cluster configurations against a large database of VMware Knowledge Base and Best Practices rules. This results in the ability to list issues and then offer suggestions on how to fix those issues which may affect vSAN availability or functionality. This acts as a good way to stop issues before they become more serious problems that impact environments.

As mentioned, version 1.7 also offers an upgraded vSphere Web Client plugin, and as you can see below the integration with the HTML5 client is tight.

Finally, I wanted to highlight the new VMware Stack dashboard. This new visual component aims to very quickly prioritize what problem to solve and where it exists. The VMware stack contains five layers: Management, VM, Compute, Network and Storage. Runecast prioritizes and sorts all detected problems into those five categories so an admin can easily see where the critical issues are and what risk they pose.

Overall for those that have vSAN in their environments I would recommend a look at this release. The guys at Runecast are taking a unique approach to monitoring and I’m looking forward to future releases as they expand even more beyond vSphere and vSAN.

The latest version is available for a free 14-day trial.

vSAN 6.6 – What’s In It For Service Providers

Last February when VMware released VSAN 6.2 I stated that “Things had gotten Interesting”, with the 6.2 release of vSAN finally marking its arrival as a serious player in the Hyper-converged Infrastructure (HCI) market; vSAN was ready to be taken very seriously by VMware’s competitors. Fast forward fourteen months and, apart from the fact we have confirmed the v in vSAN is lower case with the product name officially changing from Virtual SAN to vSAN…version 6.6 was announced last week, is set to GA today, and with it comes the biggest list of new features and enhancements in vSAN’s history.

VMware has decided to break with the normal vSphere release cycle for vSAN and move to patch releases for vSphere that are actually major updates of vSAN. This is why this release is labeled vSAN 6.6 and will be included in the vSphere 6.5EP2 build. The move allows the vSAN team to continue to enhance the platform outside of the core vSphere platform and I believe it will deliver at least 2 update releases per year.

Looking at the new features and enhancements of the vSAN 6.6 release it’s clear to see that the platform has matured, and given the 7000+ strong customer base it’s also clear that it’s being accepted more and more for critical workloads. From a service provider point of view I know of a lot more vCloud Air Network partners that have implemented vSAN as not only their management HCI platform, but also now their customer HCI compute and storage platforms.

A lot for Service Providers to like:

As shown in the feature timeline above there are 20+ new features and enhancements, but for me the following ones are most relevant to vCAN Service Providers who are using, or looking to use, vSAN in their offerings. I will expand on the ones in red as I see them as being the most significant of the new features and enhancements for service providers.

  • Native encryption for data-at-rest
  • Compliance certifications
  • vSAN Proactive Drive HA for failing drives
  • Resilient management independent of vCenter
  • Rapid recovery with smart, efficient rebuilds
  • Certified file service & data protection solutions
  • Enhanced vSAN SDK and PowerCLI
  • Simple networking with Unicast
  • vSAN Cloud Analytics for performance
  • vSAN Cloud Analytics with real-time support notification and recommendations*
  • vSAN Config Assist with 1-click hardware lifecycle management
  • Extended Health Services
  • Up to 50% greater IOPS for all-flash with optimized checksum and dedupe
  • Optimized for latest flash technologies
  • Expanded caching tier choice
  • New Docker Volume Driver

Simple networking with Unicast:

As John Nicholson wrote on the Virtual Blocks blog…it’s time to say goodbye to the multicast requirements around vSAN networking traffic. For a history as to why multicast was used, click here. It’s also worth reading John’s post where he goes through the upgrade process, as if you are upgrading from previous versions, multicast will still be used unless you make the change as specified here.

I can attest first hand to the added complexity when it comes to setting up vSAN with multicast, and have gone through a couple of painful deployments where the multicast configuration was an issue during initial setup and also caused issues with switching infrastructure that needed to be upgraded before vSAN could work reliably. In my mind unicast offers a simpler, less complex solution with minimal overhead and makes vSAN more transportable across networks.

Performance Improvements:

Service Providers are always trying to squeeze the most out of their hardware purchases, and with VMware claiming 50% greater IOPS for all-flash through optimized checksum and dedupe data services, in theory enabling 150K IOPS per host, it appears they will be served well. Add to that support for the latest flash technologies, and the increased performance helps accelerate tenant workloads and provides higher consolidation ratios for those workloads.

Service providers can adopt new hardware technologies early with support for the latest flash devices, including the new breed of NVMe SSDs, which can deliver up to 250% greater performance for write-intensive applications. vSAN 6.6 also offers larger caching drive options, including 1.6TB flash drives, so that service providers can take advantage of larger capacity flash drives.

Disk Performance Enhancements:

For those that have gone through a vSAN rebuild operation you would know that it can be a long exercise depending on the amount of data and configuration of the vSAN datastore. vSAN 6.6 introduces a new smart rebuild and rebalancing feature, along with partial repairs of degraded or absent components. There is also resync throttling and improved visibility into the rebuild status through the Health Status. Cormac Hogan goes through the improvements in detail here.

From a Service Provider point of view, having these enhanced features around rebuilds is critical to continued quality of service for IaaS customers who live on shared vSAN storage. Shorter and more efficient rebuild times mean less impact on customers.

Health Checks and Monitoring Improvements:

vSAN Encryption:

VMware has introduced native data-at-rest encryption at the vSAN datastore level. This can be enabled per vSAN cluster and works with deduplication and compression across hybrid and all-flash cluster configurations. vSAN 6.6 data encryption is hardware agnostic; there is no requirement to use specialized and more expensive Self-Encrypting Drives (SEDs), which is also a bonus. Jase McCarty has another Virtual Blocks article here that goes through this feature in great detail.

From a Service Provider point of view you can now potentially offer two classes of vSAN backed storage for IaaS customers: one that lives on an encryption-enabled cluster and is charged at a premium over non-encrypted clusters. In talking with service providers across the globe, data at rest encryption has become something that potential customers are asking for, and most leading storage companies have an encryption story…now so does vSAN, and it appears to be market leading.

vSAN 6.6 Licensing:

In terms of the licensing matrix, nothing too drastic has changed except for the addition of Data at Rest Encryption in the Enterprise bundle. However, in a significant move for vCAN Service Providers, QoS IOPS Limiting has been extended across all license types and can now be taken advantage of across the board. This is good for Service Providers who look to offer different tiers of storage performance based on IOPS limits…previously it was only available under Enterprise licensing.

Bootstrapping UI:

A bonus feature that I think will assist vCAN Service Providers is the new native bootstrap installer in vSAN 6.6. William Lam has written about the feature here, but for those looking to install their first vSAN node without vSphere available, the ability to bootstrap is invaluable. The old manual process is still worth looking at as it’s always beneficial to know what’s going on in the background, but it’s all GUI based now via the VCSA installer.

Conclusion:

vSAN 6.6 appears to be a great step forward for VMware and Service Providers will no doubt be keen to upgrade as soon as possible to take advantage of the features and enhancements that have been delivered in this 6.6 release.

References:

http://cormachogan.com/2017/04/11/whats-new-vsan-6-6/ 

https://storagehub.vmware.com/#!/vmware-vsan/vmware-vsan-6-5-technical-overview

http://vsphere-land.com/news/an-overview-of-whats-new-in-vmware-vsan-6-6.html

https://storagehub.vmware.com/#!/vmware-vsan/vsan-multicast-removal/multicast-removal-steps-and-requirements/1

vSAN 6.6 Encryption Configuration

vSAN 6.6 – Native Data-at-Rest Encryption

Goodbye Multicast

Native VCSA bootstrap installer in vSAN 6.6

vExpert Pivot: NSX and VSAN Program Announcements

This week the VMware vExpert team officially lifted the lid on two new subprograms that focus on NSX and VSAN. The announcements signal a positive move for the vExpert program, which had come under some criticism over the past two or so years around the fact that the program had lost some of its initial value. As I’ve mentioned previously, the program is unmistakably an advocacy program first and foremost, and those who are part of the vExpert group should be active contributors in championing VMware technologies as well as being active in their spheres of influence.

Corey and the rest of the team have responded to the calls for change by introducing vExpert Specialties, now more in line with what Microsoft does with its MVP Program. The first specializations are focused on VMware’s core focus products of NSX and VSAN…these programs are built on the base vExpert program and the group is chosen from existing vExperts who have shown and demonstrated contribution to each technology. The VSAN announcement blog articulates the criteria perfectly.

This group of individuals have passion and enthusiasm for technology, but more importantly, have demonstrated significant activity and evangelism around VSAN.

With that, I am extremely proud to be part of both the inaugural NSX and VSAN vExpert program. It’s some reward and acknowledgment for the content I have created and contributed to for both technologies since their release. Substance is important when it comes to awarding community contribution and as I look through the list I see nothing but substance and quality in the groups.

Again, this is a great move by the vExpert team and I’m looking forward to it reinvigorating the program. I’ve pasted links below to my core NSX and VSAN content…I’m especially proud of the NSX Bytes series which continues to do well in terms of people still seeking out the content. More recently I have done a bit of work around VSAN, and the upgrading VSAN from Hybrid to All Flash series was well received. Feel free to browse the content below and I look forward to catching up with everyone at VMworld US.

References:

vExpert NSX 2016 Award Announcement

Announcing the 2016 VSAN vExperts

VMworld 2016: Top Session Picks

VMworld 2016 is just around the corner (10 days and counting) and the theme this year is be_Tomorrow…which looks to build on the Ready for Any and Brave IT messages from the last couple of VMworld events. It’s a continuation of VMware’s call to arms to get themselves and their partners and customers prepared for the shift in the IT of tomorrow. This will be my fourth VMworld and I am looking forward to spending time networking with industry peers, walking around the Solutions Exchange on the lookout for the next Rubrik or Platform9, and attending technical sessions.

http://www.vmworld.com/uscatalog.jspa

The Content Catalog went live a few weeks ago and the Session Builder has also been live allowing attendees to lock in sessions. There are a total of 817 sessions this year, up from the 752 sessions last year. I’ve listed the main tracks with the numbers fairly similar to last year.

Cloud Native Applications (17)
End-User Computing (97)
Hybrid Cloud (63)
Partner Exchange @ VMworld (74)
Software-Defined Data Center (504)
Technology Deep Dives & Futures (22)

VMware’s core technology focus around VSAN and NSX again has the lion’s share of sessions this year, with EUC still a very popular subject. It’s pleasing to see a lot of vCloud Air Network related sessions in the list (for a detailed look at the vCAN sessions read my previous post) and there is a solid amount of Cloud Native Application content. Below are my top picks for this year:

  • Virtual SAN – Day 2 Operations [STO7534]
  • Advanced Network Services with NSX [NET7907]
  • A Day in the Life of a VSAN I/O [STO7875]
  • vSphere 6.x Host Resource Deep Dive [INF8430]
  • The Architectural Future of Network Virtualization [NET8193R]
  • Conducting a Successful Virtual SAN 6.2 Proof of Concept [STO7535]
  • How to design and implement VMware’s vCloud in production [SDDC9612-SPO]
  • PowerNSX and PyNSXv: Using PowerShell and Python for Automation and Management of VMware NSX for vSphere [NET7514]
  • Evolving the vSphere API for the Modern Era [INF8255]
  • Multisite Networking and Security with Cross-vCenter NSX: Part 2 [NET7861R]

My focus seems to have shifted back towards more vCloud Director and network/hybrid cloud automation of late and it’s reflected in the choices above. Alongside that I am also very interested to see how VMware position vCloud Air after the shambles of the past 12 months, and as always I look forward to hearing from respected industry technical leads Frank Denneman, Chris Wahl and Duncan Epping as they give their perspective on storage, software defined datacenters and automation. This year I’m also looking at what the SABU Tech Marketing team are up to around VSAN and VSAN futures.

As has also become tradition, there are a bunch of bloggers who put out their Top picks for VMworld…check out the links below for more insight into what’s going to be hot in Las Vegas this VMworld. Hope to catch up with as many community folk as possible while over so if you are interested in a chat, hit me up!

My top 15 VMworld sessions for 2016

Top 5 Log Insight VMworld Sessions

be_TOMORROW at VMworld 2016 – Key Storage and Availability Activities

 

My Top Session picks for VMworld 2016

http://www.mindthevirt.com/top-vmworld-sessions-category-1247

PowerCLI Script to Calculate VSAN vCAN Points Per Month

There is no doubt that the new pricing for vCAN Service Providers, announced just after VSAN 6.2 was released, meant that Service Providers looking at VSAN for their IaaS or MSP offerings who had previously written it off due to price could once again consider it as a viable and price competitive option. As of writing this blog post there is no way to meter the new reporting mechanism automatically through the existing vCloud Usage Meter, with the current 3.5 beta also lacking the ability to report billing info.

I had previously come across a post from @virten that contained a PowerCLI script to calculate VSPP points based on the original allocated GB model. With VSAN 6.2, pricing is now based on a consumed GB model, which was a significant win for those pushing for a more competitive pricing structure to be able to position a now mature VSAN as a platform of choice.

Before I post the code it’s worth noting that I am still not 100% happy with the interpretation of the reporting:

The VsanSpaceUsage(vim.cluster.VsanSpaceUsage) data object has the following two properties which vCAN partners can use to pull Virtual SAN usage information: a) totalCapacityB (total Virtual SAN capacity in bytes) and b) freeCapacityB (free Virtual SAN capacity in bytes). Subtracting b) from a) should yield the desired “Used Capacity” information for monthly reporting.

I read that to say that you report on the capacity consumed including any fault tolerance or data resiliency overheads…that is to say, if you have a VM with a 100GB hard disk consuming 50GB on a VSAN datastore utilizing RAID1 and FTT=1, you will pay for the 100GB that is actually consumed on disk.

With that in mind I had to add a multiplier into the original script I had hacked together, to cater for the fault tolerance and RAID level you may run. The rest is pretty self explanatory and I have built on @virten’s original script by asking which vCenter you want to log into, what VSAN licensing model you are using, and finally the RAID and FTT levels you are running. The result is the total amount of consumed storage of all VM disks residing on the VSAN datastore (which is the only value hard coded) and then the amount of vCAN points you would be up for per month, with and without the overhead tax.

The code is below, please share and improve, and note that I provide it as is and it should be used as such. Please let me know if I’ve made any glaring mistakes…
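In essence the calculation boils down to something like the following sketch – the point rate per GB and the RAID multipliers are placeholders, so plug in your own vCAN rate card values, and the datastore name is the default one:

    # Connect and find the VMs sitting on the vSAN datastore
    Connect-VIServer (Read-Host "vCenter Server")
    $vms = Get-VM -Datastore (Get-Datastore -Name "vsanDatastore")

    # Total consumed storage of those VMs in GB
    $consumedGB = ($vms | Measure-Object -Property UsedSpaceGB -Sum).Sum

    # Placeholder values - substitute your own vCAN point rate and policy overhead
    $pointsPerGB = 0.02                # example rate only, points per consumed GB per month
    $ftt = 1
    $raidMultiplier = $ftt + 1         # RAID1 mirroring; use 1.33 for RAID5 or 1.5 for RAID6

    "Consumed storage: {0:N0} GB" -f $consumedGB
    "vCAN points (no overhead): {0:N0}" -f ($consumedGB * $pointsPerGB)
    "vCAN points (with overhead): {0:N0}" -f ($consumedGB * $raidMultiplier * $pointsPerGB)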

If someone can also let me know how to round numbers and capture an incorrect vCenter login gracefully and exit that would be excellent! – [EDIT] Thanks to Virten for jumping on that! Code updated!

References:

PowerCLI Script to Calculate VSAN VSPP Points

VSAN 6.2: Reminder About Important Fix

[UPDATE] This issue is resolved in VMware ESXi 6.0, Patch Release ESXi600-201608001. For more information, see VMware ESXi 6.0, Patch Release ESXi600-201608001 (2145663).

Last week VMware released an important KB based around an issue with VSAN 6.2 where some VMs residing on existing Hybrid VSAN datastores may exhibit reduced disk IO performance after an upgrade. In a nutshell the issue is caused by a new operation that’s linked to the new deduplication and compression features in VSAN 6.2. The issue affects only VSAN 6.2 Hybrid deployments and is obviously not applicable to All Flash VSAN Clusters.

If impacted you may see:

  • A significantly lower than expected read cache hit ratio on the VSAN caching tier
  • A higher percentage of IOPS on capacity tier disks in Hybrid disk groups when compared with previous 6.x systems
  • Overall increased observed VM latency

The issue comes about because VSAN 6.2’s low level scanning for unique blocks, which is related to deduplication, can still occur on VSAN Hybrid disk groups. This causes performance deterioration on Hybrid disk groups, as it has a significant read caching performance impact on the SSD cache tier of the VSAN disk groups.

The Workaround:

To work around this issue, if you are using a Hybrid configuration, you can turn off the dedup scanner option on each VSAN host in the VSAN Hybrid cluster. The way to turn it off is to modify the advanced setting lsomComponentDedupScanType, which is set to a default value of 2. For the workaround you set that to 0. The easiest way to achieve this is through PowerCLI as shown below.
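Something like the below will do it across all hosts in the cluster – the setting should sit under the LSOM namespace, but verify the exact advanced setting name on one host first, and the cluster name is just an example:

    # Flip the dedup scanner advanced setting to 0 on every host in the Hybrid cluster
    foreach ($vmhost in (Get-Cluster -Name "vSAN-Hybrid-Cluster" | Get-VMHost)) {
        Get-AdvancedSetting -Entity $vmhost -Name "LSOM.lsomComponentDedupScanType" |
            Set-AdvancedSetting -Value 0 -Confirm:$false
    }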

Note that each host needs to be rebooted for the setting to take effect, so go through the normal process of ensuring hosts go into VSAN maintenance mode before reboot.

It’s also worth mentioning a script that Jase McCarty has put up on GitHub that gets/sets the deduplication scanner setting, with some checks built in, and accepts variables.

https://github.com/jasemccarty/DedupeScan

References:

https://kb.vmware.com/kb/2146267

VSAN Upgrading from 6.1 to 6.2 Hybrid to All Flash – Part 3

When VSAN 6.2 was released earlier this year it came with new and enhanced features, and with the price of SSDs continuing to fall and an expanding HCL it seems like All Flash instances are becoming more the norm. For those that have already deployed VSAN in a Hybrid configuration the temptation to upgrade to All Flash is certainly there. Duncan Epping has previously blogged an overview of migrating from Hybrid to All Flash, so I wanted to expand on that post and go through the process in a little more detail. This is the final part of a three part blog series with the process overview outlined below.


In part one I covered upgrading existing hosts, expanding an existing VSAN cluster and upgrading the license and disk format. In part two I covered the actual Hybrid to All Flash migration steps, and in this last part I will finish off by going through the process of creating a new VSAN policy, migrating existing VMs to the new policy and then enabling deduplication and compression.

Before continuing it’s worth pointing out that after the Hybrid to All Flash migration you are going to be left with an unbalanced VSAN cluster, as the full data evacuation off the last Hybrid host will leave that host without objects. Any new objects created will work to re-balance the cluster, however if you want to initiate a proactive re-balance you can hit the re-balance button from the Health status window. For more on this process check out this post from Cormac Hogan.

Create new Policy and Migrate VMs:

To take advantage of the new erasure coding in the VSAN 6.2 All Flash cluster we will need to create a new storage policy and apply that policy to any existing VMs. In my case all VMs were on the Default VSAN Policy with FTT=1. The example below shows the creation of a new Storage Policy that uses RAID5 erasure coding with FTT=1. If you remember from previous posts, the reason for expanding the cluster to four hosts was to cater for this specific policy.

To create the new Storage Policy head to VM Storage Policies from the Home page of the Web Client and click on Create New VM Storage Policy. Give the policy a name, click Next and construct Rule-Set 1, which is based on VSAN. Select the Failure tolerance method and choose RAID-5/6 (Erasure Coding) – Capacity.

In this case with FTT=1 chosen RAID5 will be used. Clicking on Next should show that the existing VSAN datastore is compatible with the policy. With that done we can migrate existing VMs off the Default VSAN Policy onto the newly created one.
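As a side note, the same policy can be created from PowerCLI with the SPBM cmdlets. The capability value string below is as I remember it, so confirm it against Get-SpbmCapability in your environment; the policy name is an example that I will reuse in the snippets further down:

    # Build a rule set for FTT=1 with RAID5/6 erasure coding and create the policy
    $rules = @(
        New-SpbmRule -Capability (Get-SpbmCapability -Name "VSAN.hostFailuresToTolerate") -Value 1
        New-SpbmRule -Capability (Get-SpbmCapability -Name "VSAN.replicaPreference") -Value "RAID-5/6 (Erasure Coding) - Capacity"
    )
    $ruleSet = New-SpbmRuleSet -AllOfRules $rules
    New-SpbmStoragePolicy -Name "vSAN-R5-FTT1" -AnyOfRuleSets $ruleSet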

To get a list of what VMs are going to be migrated, have a look at the PowerCLI commands below to get the VMs on the VSAN datastore and then get their Storage Policy. The last command below gets a list of existing policies.
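Something along these lines will do it (the datastore name used is the default vSAN datastore name):

    # VMs that live on the vSAN datastore
    $vsanVMs = Get-VM -Datastore (Get-Datastore -Name "vsanDatastore")

    # Their current storage policy assignment
    $vsanVMs | Get-SpbmEntityConfiguration

    # All storage policies defined in this vCenter (handy for grabbing the full policy name)
    Get-SpbmStoragePolicy | Select-Object Name, Description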

To apply the new erasure coding Storage Policy it’s handy to get the full name of the policy.

To migrate the VMs to the new policy you can either do it one by one via the Web Client or do it en masse via the following PowerCLI script.
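A minimal version of that script is below, applying the new policy to each VM and its disks – the policy name matches the example used in the SPBM sketch above, so swap in your own:

    # Apply the new policy to every VM on the vSAN datastore and to each of its hard disks
    $policy = Get-SpbmStoragePolicy -Name "vSAN-R5-FTT1"
    foreach ($vm in (Get-VM -Datastore (Get-Datastore -Name "vsanDatastore"))) {
        $vm | Set-SpbmEntityConfiguration -StoragePolicy $policy
        $vm | Get-HardDisk | Set-SpbmEntityConfiguration -StoragePolicy $policy
    }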

Once run, the VMs will have the new policy applied and VSAN will work in the background to get those VM objects compliant. You can see the status of Virtual Disk Placement in the Virtual SAN tab under the Monitor tab of the cluster.

Enable DeDupe and Compression:

Before I go into the details…for a brilliant overview and explanation of deduplication and compression with VSAN 6.2 head to this post from Cormac Hogan. To enable this feature we need to double check that the licensing is correct as detailed in the first post, and also ensure that all previous steps relating to the Hybrid to All Flash migration have taken place. To turn on this feature head to the General window under the Virtual SAN Settings menu on the cluster Manage tab and click on the Edit button next to Virtual SAN is Turned ON.

Choose Enabled in the drop down and take note of the checkbox that talks about Allow Reduced Redundancy, understanding what that means by reading the info box as shown above. Once you confirm, the process to enable deduplication and compression will begin…this process will go through and reconfigure all disk groups, similar to the process of upgrading between Hybrid and All Flash. Again this will take some time depending on the number of hosts, number of disk groups and type of disks in the cluster.
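The same switch can be flipped from PowerCLI if you prefer – the parameter name below is as I recall it for the vSAN-era cmdlets, so check Get-Help Set-VsanClusterConfiguration before running, and the cluster name is an example:

    # Enable deduplication and compression on the (now all-flash) cluster
    Set-VsanClusterConfiguration -Configuration (Get-Cluster -Name "vSAN-Cluster") -SpaceEfficiencyEnabled $true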

Below I have shown the before and after of the Capacity window under the Virtual SAN tab in the Monitor section of the cluster view. You can see that before it is enabled, there is a message saying that Deduplication and Compression is disabled.

And after enabling deduplication and compression you start to get some statistics relating to savings and ratios in that window. Even in my small lab environment I started to see some benefits.

With that complete we have finished this series and have gone through all the steps in order to get to an All Flash VSAN Cluster with the newest features enabled.

References:

VSAN 6.2 Part 1 – Deduplication and Compression

VSAN 6.2 Part 2 – RAID-5 and RAID-6 configurations

 
