Dell PowerEdge FX2: VSAN Disk Configuration Steps

When you get your new Dell FX2s out of the box and powered on for the first time, you will notice that the disk configuration has not been set up with VSAN in mind. If you were to log into ESXi on the blades in SLOT1a and SLOT1c, you would see that each host has every SAS disk configured as a datastore. There is a little pre-configuration you need to do in order to present the drives correctly to the blade servers, as well as to remove and reconfigure the datastores and disks from within ESXi.

My build had four FC430 blades and two FD332 storage sleds, each sled containing 4x200GB SSDs and 8x600GB SAS drives. By default the storage mode is set to Split Single Host, which assigns all of a sled's disks, and both of its RAID controllers, to a single host (the blades in SLOT1a and SLOT1c).

You can configure individual storage sleds containing two RAID controllers to operate in the following modes:

  • Split-single – Both RAID controllers are mapped to a single compute sled. Both controllers are enabled and each controller is connected to eight disk drives.
  • Split-dual – The two RAID controllers in a storage sled are each connected to a different compute sled.
  • Joined – The RAID controllers are mapped to a single compute sled, but only one controller is enabled and all the disk drives are connected to it.

To take advantage of the FD332-PERC (Dual ROC) controller you need to configure Split-dual mode. All hosts need to be powered off before you can change the storage mode from the default to Split Dual Host for the VSAN configuration.

In the CMC, head to Server Overview -> Power and from there gracefully shut down all four servers.

Once the servers have powered down, click on the Storage Sleds in SLOT-03 and SLOT-04, go to the Setup tab, change the Storage Mode to Split Dual Host, and click Apply.

To check the distribution of the disks, launch the iDRAC for each blade and go to Storage -> Enclosures to verify that each blade now has 2xSSD and 4xHDD drives assigned. The FD332 has 16 slots in total, with slots 0-7 belonging to the first blade and slots 8-15 belonging to the second blade. Shown below is the configuration of SLOT1a.
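That slot split can be sketched as a tiny helper (the slot ranges are taken from the paragraph above; the function name is my own):

```python
def blade_for_slot(slot: int) -> str:
    """Map an FD332 disk slot to its compute sled in Split Dual Host mode.

    The 16 slots (0-15) are split evenly: slots 0-7 go to the first
    blade and slots 8-15 go to the second blade.
    """
    if not 0 <= slot <= 15:
        raise ValueError("FD332 sleds have 16 slots, numbered 0-15")
    return "first blade" if slot <= 7 else "second blade"


# Example: slot 3 belongs to the first blade, slot 12 to the second
print(blade_for_slot(3))   # first blade
print(blade_for_slot(12))  # second blade
```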

The next step is to reconfigure the disks within ESXi to make sure VSAN can claim them when you configure the disk groups. Part of the process below is to delete any datastores that exist and clear the partition table…by far the easiest way to achieve this is via the new Embedded Host Client.

Install the Embedded Host Client on each Host

Log into each host via the Embedded Host Client at https://HOST_IP/ui, go to the Storage menu, and delete any datastores that were preconfigured by Dell.

Click on the Devices tab in the Storage menu and clear the partition table so that VSAN can claim the disks whose datastores you just deleted.
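If you prefer the ESXi shell over the Host Client, the same cleanup can be done from the command line. A rough sketch (the naa.* device name below is a placeholder; substitute the real identifiers from the device list on your hosts):

```
# List VMFS datastores and the devices backing them
esxcli storage filesystem list

# List the disks presented by the FD332-PERC
esxcli storage core device list

# Show the partition table for a device (placeholder device name)
partedUtil getptbl /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx

# Delete partition 1 to clear the Dell-created VMFS partition
partedUtil delete /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx 1
```

Repeat the partedUtil steps for each disk that had a datastore on it.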

From here all disks should be available to be claimed by VSAN to create your disk groups.
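As a quick sanity check before building the disk groups, `vdq -q` on each host reports per-device VSAN eligibility:

```
vdq -q
# Each cleared device should report:
#   "State"  : "Eligible for use by VSAN"
#   "IsSSD"  : "1"   (for the 200GB SSDs)
```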

As a side note it’s important to update to the latest driver for the PERC.



  • Hi Anthony. I’m looking into FX2 as well, but i was wondering how the hot-swapping of the disks would work. Is every disk swappable without downtime for the server blades?

    • Hey there…yep, there is an easy process for hot-swapping disks via the FD332 sleds…basically you can pull out the disks and have three minutes to swap out any failed disks.

  • I have an almost identical setup as the one in your article, except I have 3 FC430’s instead of 4. Would you still recommend using split-dual mode in this 3 blade setup that will also be running VSAN?

    • For the third blade…it depends on the number of disks you bought in the FD332. You wouldn't use split mode if you have more than 8 disks per host…otherwise, yes…

  • Anthony – do you recommend deleting the Dell datastores and clearing the partition tables on the hosts before or after the VSAN cluster is created?

  • How do you configure the I/O Aggregators? Are you using VLT? I have been fighting multicast issues on our FX2.

    • We are running the pass-through modules so I can't help you there, sorry…have you raised a case with Dell?

    • Did you get this figured out? I'm running the FN410S and I'm having issues with multicast as well.

      • Hi Jon, how did you get on with your multicast issue? I am interested in this setup but am looking at a scaled-down ready node, so 4 hosts rather than 3. The thing I can't seem to get clarification on is whether the FN410S is supported or if we need two separate 10Gb switches.

        • Jeremy Sermersheim

          We had the same issue and had to place the switches into PMUX mode, which disables IGMP snooping by default. You will then have to configure the ports, port channels, and VLANs per your requirements. It also resets the root username on the switches, but keeps NTP and timezone settings for whatever reason.
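For reference, the mode change that comment describes looks roughly like this on the FN-series IOM CLI (syntax is a sketch; verify against the FN IOM documentation for your firmware before applying, since the reload wipes the standalone configuration as noted above):

```
FN410S# configure
FN410S(conf)# stack-unit 0 iom-mode programmable-mux
FN410S(conf)# end
FN410S# reload
```

After the reload you configure the ports, port channels, and VLANs per your requirements.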

  • Alex Romanenko

    For the folks having multicast issues with FX2 and FN410S IOMs, please make sure that you have configured an IGMP Querier IP for every vlan that requires multicast to work across multiple switches (IOM itself is a Force10 switch and it is usually connected to some TOR switch(es)). As soon as I configured an IGMP Querier IP for my VSAN vlan (and all other vlans that required multicast) all my multicast issues went away!

  • Also chiming in on the need for the IGMP Querier IP on your TOR switch, whether it be Cisco or Dell. We almost went down the road of configuring our FX2 IOAs in PMUX mode. After days of long support calls to Dell and VMware, we dug deeper and configured our Cisco 2960G as an IGMP querier on the same subnet as our VSAN nodes. The command we believe worked was 'ip igmp snooping querier address …', where the address was on our VSAN subnet. Hopefully these comments are useful to people, as we spent a good week stressing over what we hoped would be plug and play.
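As a concrete illustration of the querier fix on a Catalyst switch (the address below is a placeholder; use an unused IP on your VSAN subnet and adapt to your VLAN numbering):

```
switch(config)# ip igmp snooping querier
! Optionally pin the querier source address to an IP on the VSAN subnet:
switch(config)# ip igmp snooping querier address 192.168.100.1
```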

  • Cyndie Bubis

    Excellent article! I support the FX2 and came across this article because I wanted screenshots of the CMC and didn't have a FD332 in my lab. I think people forget that the I/O Aggregator in standalone is pretty much a simple mode, not exactly plug and play. It's great if you only require a LAG uplink. PMUX is really the only way to go and really not too difficult to set up. Just remember to clear the config before you switch it. 🙂