
Homelab – Lab Access Made Easy with Free Veeam Powered Network

A couple of weeks ago at VeeamON we announced the RC of Veeam PN, a lightweight SDN appliance that has been released for free. While the main messaging is focused on extending network availability for Microsoft Azure, Veeam PN can also be deployed as a standalone solution via a downloadable OVA from the veeam.com site. While testing the product through its early dev cycles I immediately put into action a use case that allowed me to access my homelab and other home devices while I was on the road…all without having to set up and configure relatively complex VPN or remote access solutions.

There are a lot of existing solutions that do what Veeam PN does, and many of them are decent at what they do. The biggest difference for me, comparing say the VPN functionality of pfSense, is that Veeam PN is purpose built and can be set up within a couple of clicks. The underlying technology is built on OpenVPN, so there is a level of familiarity and trust with what lies under the hood. The other great thing about leveraging OpenVPN is that any Windows, macOS or Linux client will work with the configuration files generated for point-to-site connectivity.

Homelab Remote Connectivity Overview:

While on the road I wanted to access my homelab/office machines with minimal effort and without relying on services published externally via my entry-level Belkin router. I also didn't have a static IP, which always proved problematic for remote services. At home I run a desktop that acts as my primary Windows workstation and also has VMware Workstation installed. I then have my SuperMicro 5028D-TN4T server that has ESXi installed and runs my NestedESXi lab. I needed to at least be able to RDP into that Windows workstation, but also get access to the management vCenter, SuperMicro IPMI and other systems running on the 192.168.1.0/24 subnet.

As seen above I also wanted to directly access workloads in the NestedESXi environment, specifically on the 172.17.0.0/24 and 172.17.1.0/24 networks. A little more detail on my use case will come in a follow-up post, but as you can see from the diagram above, with the use of the Tunnelblick OpenVPN client on my MBP I am able to create a point-to-site connection to the Veeam PN Hub, which is in turn connected via site-to-site to each of the subnets I want to connect into.

Deploying and Configuring Veeam Powered Network:

As mentioned above you will need to download the Veeam PN OVA from the veeam.com website. This VeeamKB describes where to get the OVA and how to deploy and configure the appliance for first use. If you don't have a DHCP-enabled subnet to deploy the appliance into, you can configure a static address by accessing the VM console, logging in with the default credentials and modifying the /etc/network/interfaces file as described here.
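For reference, a static configuration in /etc/network/interfaces on the Ubuntu-based appliance looks something like the sketch below. The interface name and addresses are placeholders only; substitute values for your own subnet.

    # /etc/network/interfaces – example static configuration (values are placeholders)
    auto eth0
    iface eth0 inet static
        # appliance IP on the local subnet
        address 192.168.1.50
        netmask 255.255.255.0
        # local router / default gateway
        gateway 192.168.1.1

Restart networking (or simply reboot the appliance) for the change to take effect.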

Components

  • Veeam PN Hub Appliance x 1
  • Veeam PN Site Gateway x number of sites/subnets required
  • OpenVPN Client

The OVA is 1.5GB and when deployed the Virtual Machine has a base specification of 1x vCPU, 1GB of vRAM and 16GB of storage, which if thin provisioned consumes a tick over 5GB initially.

Networking Requirements

  • Veeam PN Hub Appliance – Incoming Ports TCP/UDP 1194, 6179 and TCP 443
  • Veeam PN Site Gateway – Outgoing access to at least TCP/UDP 1194
  • OpenVPN Client – Outgoing access to at least TCP/UDP 6179

Note that as part of the initial configuration you can choose the site-to-site and point-to-site protocols and ports, which is handy if you are deploying into a locked-down environment and want Veeam PN to listen on different port numbers.
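Before registering any clients it's worth confirming the Hub is reachable on those ports from a remote network. A quick, best-effort check with nc is sketched below; hub.example.com is a placeholder for your own Hub endpoint, and UDP probes with nc are indicative only.

    # TCP checks against the Hub endpoint (hub.example.com is a placeholder)
    nc -vz hub.example.com 443       # web UI over HTTPS
    nc -vz hub.example.com 1194      # site-to-site port (if configured for TCP)
    nc -vz hub.example.com 6179      # point-to-site port (if configured for TCP)
    # UDP variant – best effort only
    nc -vzu hub.example.com 1194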

In my setup the Veeam PN Hub Appliance has been deployed into Azure, mainly because that's where I was able to test the product initially, but also because in theory it provides a centralised, highly available location for all the site-to-site connections to terminate. This central Hub can be deployed anywhere, and as long as HTTPS connectivity is configured correctly you can access the web interface and start to configure your sites and standalone clients.

Configuring Site Clients (site-to-site):

To complete the configuration of the Veeam PN Site Gateways you need to register the sites from the Veeam PN Hub Appliance. When you register a client, Veeam PN generates a configuration file that contains the VPN connection settings for that client. You must use the configuration file (downloadable as an XML) to set up the Site Gateways. Referencing the diagram at the beginning of the post, I needed to register three separate client configurations as shown below.

Once this was completed I deployed three Veeam PN Site Gateways on my home office infrastructure as shown in the diagram…one for each site or subnet I wanted to have extended through the central Hub. I deployed one to my Windows VMware Workstation instance on the 192.168.1.0/24 subnet and, as shown below, I deployed two Site Gateways into my NestedESXi lab on the 172.17.0.0/24 and 172.17.1.0/24 subnets respectively.

From there I imported the site configuration file generated from the central Hub Appliance into each corresponding Site Gateway, and in as little as three clicks on each one, all three networks were joined to the central Hub using site-to-site connectivity.

Configuring Remote Clients (point-to-site):

To be able to connect into my home office and home lab while on the road, the final step is to register a standalone client from the central Hub Appliance. Again, because Veeam PN leverages OpenVPN, what we are producing here is an OVPN configuration file that has all the details required to create the point-to-site connection…noting that there isn't any requirement to enter a username and password, as Veeam PN authenticates using SSL.
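For context, an OpenVPN client profile generally follows the structure sketched below. The exact file Veeam PN generates will differ; the endpoint, port and embedded certificate placeholders here are illustrative only.

    # client.ovpn – representative layout of an OpenVPN point-to-site profile
    client
    dev tun
    # protocol and port are whatever the Hub was configured with (placeholders here)
    proto tcp
    remote hub.example.com 6179
    nobind
    persist-key
    persist-tun
    remote-cert-tls server
    <ca>
    # CA certificate embedded here
    </ca>
    <cert>
    # client certificate embedded here
    </cert>
    <key>
    # client private key embedded here
    </key>

Because the certificates are embedded in the profile, importing the single file into any OpenVPN client is all that's required.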

For my MBP I'm using the Tunnelblick OpenVPN client. I've found it to be an excellent client, but obviously, being OpenVPN, there are a bunch of other clients for pretty much any platform you might be running. Once I've imported the OVPN configuration file into the client I am able to authenticate against the Hub Appliance endpoint, and the site-to-site routing is injected into the network settings.

You can see above that the 192.168.1.0, 172.17.0.0 and 172.17.1.0 static routes have been added and set to use the tunnel interface's default gateway, which is on the central Hub Appliance. This means that from my MBP I can now get to any device on any of those three subnets no matter where I am in the world…in this case I can RDP to my Windows workstation, connect to vCenter or SSH into my ESXi hosts.
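If you want to confirm the injected routes yourself once the tunnel is up, the macOS routing table can be checked from a terminal; the subnets below are the ones from my lab, so adjust for your own.

    # show routes pushed down the tunnel (utun interface)
    netstat -rn | grep utun
    # or check for the specific lab subnets
    netstat -rn | grep -E '192.168.1|172.17'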

Conclusion:

To summarise, the steps taken to set up and configure the extension of my home office network using Veeam PN's site-to-site connectivity, allowing me to access systems and services via a point-to-site VPN, were:

  • Deploy and configure Veeam PN Hub Appliance
  • Register Sites
  • Register Endpoints
  • Deploy and configure Veeam PN Site Gateway
  • Setup Endpoint and connect to Hub Appliance

Those five steps took me less than 15 minutes, and that included the OVA deployments…to me that is an extremely streamlined, efficient process to achieve what in the past could have taken hours and certainly would have involved a more complex set of commands and configuration steps. The simplicity of the solution is what makes it so useful for home labbers wanting a quick and easy way to access their systems…it just works!

Again, Veeam PN is free and is deployable from the Azure Marketplace to help extend availability for Microsoft Azure…or downloadable in OVA format directly from the veeam.com site. The use case I've described, and have been using without issue for a number of months, adds to the flexibility of the Veeam Powered Network solution.

References:

https://helpcenter.veeam.com/docs/veeampn/userguide/overview.html?ver=10

https://www.veeam.com/kb2271

 

HomeLab – SuperMicro 5028D-TN4T Unboxing and First Thoughts

While I was at Zettagrid I was lucky enough to have access to a couple of lab environments sourced from retired production components, and I was able to build up a lab that satisfied the requirements of R&D, Operations and the Development team. By the time I left Zettagrid we had a lab that most people envied. I took advantage of it, with a number of NestedESXi instances to use as my own lab, but it was also an environment that ensured new products could be developed without impacting production, with multiple layers of NestedESXi instances to test new builds and betas.

With me leaving Zettagrid for Veeam I lost access to that lab, and even though I would have access to a nice shiny new lab within Veeam, I thought it was time to bite the bullet and source a homelab of my own. The main reason for this was to have something local that I could tinker with, allowing me to keep playing with the VMware vCloud suite as well as look out for new products so I could stay engaged and continue to create content.

What I Wanted:

For me, my requirements were simple: I needed a server that was powerful enough to run at least two NestedESXi lab stacks, which meant 128GB of RAM and enough CPU cores to handle approximately twenty to thirty VMs. At the same time I needed to not blow the budget and spend thousands upon thousands, and I needed to make sure that the power bill was not going to spiral out of control…as a supplementary requirement, I didn't want a noisy beast in my home office. I also wasn't concerned with any external networking gear, as everything would be self-contained in the NestedESXi virtual switching layer.

What I Got:

To be honest, the search didn't take that long, mainly thanks to a couple of homelab channels that I am a member of in the vExpert and Homelabs-AU Slack groups. Given my requirements it quickly came down to the SYS-5028D-TN4T Xeon D-1541 mini-tower or the SYS-5028D-TN4T-12C Xeon D-1567 mini-tower. Paul Braren at TinkerTry goes through in depth why the Xeon D processors in these SuperMicro Super Servers are so well suited to homelabs, so I won't repeat what's been written already, but for me the combination of a low-power CPU (45W) that still has either 8 or 12 cores, packaged up in such a small form factor, meant that my only issue was trying to find a supplier that would ship the unit to Australia for a reasonable price.

Digicor came to the party and I was able to source a great deal with Krishnan from their Perth office. There are not too many SuperMicro dealers in Australia, there was a lot of risk in getting the gear shipped from the USA or Europe, and the cost of shipping plus import duties meant that going local was the only option. For those in Australia looking for SuperMicro homelab gear, please email/DM me and I can get you in touch with the guys at Digicor.

What’s Inside:

I decided to go for the 8-core CPU, mainly because I knew my physical-to-virtual CPU ratio wasn't going to exceed the processing power it had to offer, and as mentioned I went straight to 128GB of RAM to ensure I could squeeze a couple of NestedESXi instances onto the host.

https://www.supermicro.com/products/system/midtower/5028/sys-5028d-tn4t.cfm

  • Intel® Xeon® processor D-1541, Single socket FCBGA 1667; 8-Core, 45W
  • 128GB (4x 32GB) Samsung DDR4 2400MHz ECC RDIMM in 4 DIMM sockets
  • 4x 3.5″ Hot-swap drive bays; 2x 2.5″ fixed drive bays
  • Dual 10GbE LAN and Intel® i350-AM2 dual port GbE LAN
  • 1x PCI-E 3.0 x16 (LP), 1x M.2 PCI-E 3.0 x4, M Key 2242/2280
  • 250W Flex ATX Multi-output Bronze Power Supply

In addition to what comes with the Super Server bundle, I purchased 2x Samsung 850 EVO 512GB SSDs for initial primary storage, a SanDisk Ultra Fit CZ43 16GB USB 3.0 flash drive to install ESXi onto, and a 128GB flash drive for extra storage.

Unboxing Pics:

Small package, that hardly weighs anything…not surprising given the size of the case.

Nicely packaged on the inside.

Came with a US and AU kettle cord which was great.

The RAM came separately boxed and well wrapped in anti-static bags.

You can see a size comparison with my 13″ MBP in the background.

The back is all fan, but that doesn’t mean this is a loud system. In fact I can barely hear it purring in the background as I sit and type less than a meter away from it.

One great feature is the IPMI remote management, which is a brilliant and convenient addition for a homelab server…the network port is seen top left. On the right are the 2x 10Gig and 2x 1Gig network ports.

The X10SDV-TLN4F motherboard is well suited to this case and you can see how low profile the CPU fan is.

Installing the RAM wasn't too difficult even though there isn't a lot of room to work with inside the case.

Finally, taking a look at the hot-swap drive bays…I had to buy a 3.5 to 2.5 inch adapter to fit the SSDs, however I did find that the lock-in mounts could hold the weight of the EVOs with ease.

BIOS and initialization boot screens.

Overall First Thoughts:

This is a brilliant bit of kit and it's perfect for anyone wanting to do NestedESXi at home without worrying about the RAM limits of NUCs or the noise and power draw of more traditional servers like the R710s that seem to make their way out of datacenters and into homelabs. The 128GB of RAM means that unless you really want to go fully physical, you should be able to nest most products and keep everything nicely contained within the ESXi host's compute, storage and networking.

Thanks again to Krishnan at Digicor for supplying the equipment and to Paul Braren for all the hard work he does over at TinkerTry. Special mention also to my work colleague Michael White, who was able to give me first-hand experience of the Super Servers and helped make it a no-brainer to get the 5028D-TN4T.

I'll follow this post up with a more detailed look at how I went about installing ESXi, what the NestedESXi labs look like, and what sort of performance I'm getting out of the system.


HomeLab – SuperMicro 5028D-TN4T Storage Driver Performance Issues and Fix

OK, I'll admit it…I've had serious lab withdrawals since having to give up the awesome Zettagrid labs. Having a lab to tinker with goes hand in hand with being able to generate tech-related content…case in point, my new homelab got delivered on Monday and I have been working to get things set up so that I can deploy my new NestedESXi lab environment.
The issue that I came across was to do with storage performance and the native driver that comes bundled with ESXi 6.5. With the release of vSphere 6.5 yesterday, the timing was perfect to install ESXi 6.5 and start to build my management VMs. I first noticed some issues when uploading the Windows 2016 ISO to the datastore, with the ISO taking about 30 minutes to upload. From there I created a new VM and installed Windows…this took about two hours to complete, which I knew was not as expected…especially with the datastore being a decent-class SSD.
By way of a quick intro (a longer first-impressions post is to follow), I purchased a SuperMicro SYS-5028D-TN4T that I based off this TinkerTry bundle, which has become a very popular system for vExpert homelabbers. It's got an Intel Xeon D-1541 CPU and I loaded it up with 128GB of RAM. The system comes with an embedded Lynx Point AHCI controller that allows up to six SATA devices and is listed on the VMware Compatibility Guide for ESXi 6.5.

I created a new VM and kicked off a new install, but this time I opened ESXTOP to see what was going on, and as you can see from the screenshots below, the kernel and disk write latencies were off the charts, topping 2000ms and 700-1000ms respectively…In throughput terms I was getting about 10-20MB/s when I should have been getting 400-500MB/s.

ESXTOP was showing the VM with even worse write latency.
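If you want to check the same counters on your own host, esxtop's disk views expose them interactively; a rough outline of what I was looking at (run from the ESXi shell or over SSH):

    # launch esxtop from the ESXi shell
    esxtop
    # then press:
    #   d   disk adapter view – watch DAVG/cmd and KAVG/cmd (device and kernel latency)
    #   u   disk device view  – per-device latency and throughput
    #   v   disk VM view      – per-VM read/write latency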

I wondered if I had bought a lemon of a storage controller and checked the queue depth of the card. It's listed with a QD of 31, which isn't horrible for a homelab, so my attention turned to the driver. Again referencing the VMware Compatibility Guide, the device driver for the controller is listed as ahci version 3.0.22vmw.

I searched through the installed device driver modules and found that the driver listed above was present, however there was also a native VMware device driver as well.
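A quick way to see both drivers on the host is via esxcli; the sketch below is how I'd go about it (output will vary by build):

    # list installed driver VIBs relating to AHCI
    esxcli software vib list | grep -i ahci
    # list the kernel modules – both the legacy ahci and the native vmw_ahci show up here
    esxcli system module list | grep -i ahci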

I confirmed that the storage controller was using the native VMware driver and went about disabling it as per this VMwareKB (thanks to @fbuechsel who pointed me in the right direction in the vExpert Slack homelab channel) as shown below.
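For reference, disabling the native AHCI module so the host falls back to the legacy ahci driver comes down to a single esxcli command followed by a reboot; a minimal sketch of the approach, to be used at your own risk and only if you are seeing the same symptoms:

    # disable the native vmw_ahci module so the legacy ahci driver claims the controller
    esxcli system module set --enabled=false --module=vmw_ahci
    # reboot the host for the change to take effect
    reboot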

After the host rebooted I checked to see if the storage controller was using the device driver listed in the compatibility guide. As you can see below, not only was it using that driver, but it was now showing six HBA ports as opposed to just the one seen in the first snippet above.
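To verify which driver has claimed the controller after the reboot, the adapter list shows the driver per vmhba (a quick check from the ESXi shell):

    # each SATA port now shows as its own vmhba, with the Driver column reading ahci rather than vmw_ahci
    esxcli storage core adapter list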

I once again created a new VM and installed Windows, and this time the install completed in a little under five minutes! Quite a difference! Upon running CrystalDiskMark I was now getting the expected speeds from the SSDs, and things are moving along quite nicely.

Hopefully this post saves anyone else who might buy this or other SuperMicro SuperServers some time, so they don't get caught out by poor storage performance caused by the native VMware driver packaged with ESXi 6.5.


References:

http://www.supermicro.com/products/system/midtower/5028/SYS-5028D-TN4T.cfm

https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2044993