Search Results for: vExpert

NSX Bytes – What’s New in NSX-T 2.4

A little over two years ago, in February 2017, VMware released NSX-T 2.0, and with it came a variety of updates that continued to push NSX-T beyond NSX-v while catching up in areas where NSX-v was ahead. The NSBU has had big plans for NSX beyond vSphere for as long as I can remember, and during the NSX vExpert session we saw how this is becoming more of a reality with NSX-T 2.4. NSX-T is targeted at more cloud native workloads, which also leads to a more devops-focused marketing effort on VMware’s end.

NSX-T’s main drivers relate to new data centre and cloud architectures, where greater heterogeneity drives a different set of requirements than vSphere’s: multi-domain environments leading to a multi-hypervisor NSX platform. NSX-T is highly extensible and will address more endpoint heterogeneity in future releases, including containers, public clouds and other hypervisors.

What’s new in NSX-T 2.4:

[Update] – The Official Release Notes for NSX-T 2.4 have been released and can be found here. As mentioned by Anthony Burke:

I only touch on the main features below…This is a huge release and I don’t think I’ve seen a larger set of release notes from VMware. There are also a lot of Resolved Issues in the release which are worth a look for those who have already deployed NSX-T in anger. [/Update]

While there are a heap of new features in NSX-T 2.4, for me one of the standout enhancements is the set of migration options that now exist to take NSX-v platforms and migrate them to NSX-T. While there will be ongoing support for both platforms, and in my opinion NSX-v still holds court in more traditional scenarios, there is now clear direction on how to migrate.

In terms of the full list of what’s new:

  • Policy Management
    • Simplified UI with rich visualisations
    • Declarative Policy API to configure networking, security and services
  • Advanced Network Services
    • IPv6 (L2, L3, BGP, FW)
    • ENS Support for Edge and DFW
    • VPN (L2, L3)
    • BGP Enhancements (allow-as in, multi-path-asn relax, iBGP support, Inter-SR routing)
  • Intrinsic Security
    • Identity Based FW
    • FQDN/URL whitelisting for DFW
    • L7 based application signatures for DFW
    • DFW operational enhancements
  • Cloud and Container Updates
    • NSX Containers (Scale, CentOS support, NCP 2.4 updates)
    • NSX Cloud (Shared NSX gateway placement in Transit VPC/VNET, VPN, N/S Service Insertion, Hybrid Overlay support, Horizon Cloud on Azure integration)
  • Platform Enhancements
    • Converged NSX Manager appliance with 3 node clustering support
    • Profile based installs, Reboot-less maintenance mode upgrades, in-place mode upgrades for vSphere Compute Clusters, n-VDS visualization, Traceflow support for centralized services like Edge Firewall, NAT, LB, VPN
    • v2T Migration: In-built UI wizards for “vDS to N-vDS” as well as “NSX-v to NSX-T” in-place migrations
    • Edge Platform: Proxy ARP support, Bare Metal: Multi-TEP support, In-band management, 25G Intel NIC support

Infrastructure as Code and NSX-T:

As mentioned in the introduction, VMware is targeting cloud native and devops with NSX-T, and there is a big push for being able to deploy and consume networking services across multiple platforms with multiple tools via the NSX API. At its heart, we see the core of what was Nicira back in the day. NSX (even NSX-v) has always been underpinned by APIs, and as you can see below, the idea of consuming those APIs with IaC, no matter the tool, is central to NSX-T’s appeal.
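
To give you an idea of what that looks like in practice, here is a minimal sketch against the declarative Policy API that ships with 2.4. The manager address and credentials are placeholders, and the payload shape is based on the hierarchical API in the NSX-T API guide rather than anything shown in the briefing…treat it as a starting point, not gospel:

```python
import requests

NSX_MGR = "https://nsx-manager.lab.local"   # placeholder NSX Manager address
AUTH = ("admin", "VMware1!")                # placeholder credentials

# Declare the desired state: one segment with a gateway subnet. The Policy
# API compares this against what already exists and realises the difference,
# which is exactly why it lends itself to IaC tooling.
desired_state = {
    "resource_type": "Infra",
    "children": [
        {
            "resource_type": "ChildSegment",
            "Segment": {
                "resource_type": "Segment",
                "id": "web-segment",
                "display_name": "web-segment",
                "subnets": [{"gateway_address": "10.10.10.1/24"}],
            },
        }
    ],
}

resp = requests.patch(
    f"{NSX_MGR}/policy/api/v1/infra",
    auth=AUTH,
    json=desired_state,
    verify=False,  # lab only…use proper certs in production
)
resp.raise_for_status()
```

Run it twice and the second call is effectively a no-op, which is the declarative behaviour that tools like Terraform and Ansible rely on.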

Conclusion:

It’s time to get into NSX-T! Lots of people who work in and around the NSBU have been preaching this for the last three to four years, but it’s now apparent that this is the way of the future, and anyone working on virtualization and cloud platforms needs to get familiar with NSX-T. There has never been a better time to set it up in the lab and get things rolling.

For a more in depth look at the 2.4 release, head to the official launch blog post here.

References:

vExpert NSX Briefing

https://blogs.vmware.com/networkvirtualization/2019/02/introducing-nsx-t-2-4-a-landmark-release-in-the-history-of-nsx.html/

NSX Bytes – What’s new in NSX-T 2.1

In February of this year VMware released NSX-T 2.0, and with it came a variety of updates that continued to push NSX-T beyond NSX-v while catching up in some areas where NSX-v was ahead. The NSBU has big plans for NSX beyond vSphere, and during the NSX vExpert session we saw how the future of networking is all in software…having just come back from AWS re:Invent I tend to agree with this statement, as organisations look to extend networks beyond traditional on-premises or cloud locations.

NSX-T’s main drivers relate to new data centre and cloud architectures, where greater heterogeneity drives a different set of requirements than vSphere’s: multi-domain environments leading to a multi-hypervisor NSX platform. NSX-T is highly extensible and will address more endpoint heterogeneity in future releases, including containers, public clouds and other hypervisors. As you can see below, the existing use cases for NSX-T are mainly focused around devops, micro-segmentation and multi-tenant infrastructure.

Layer 3 accessibility across all types of platforms.

What’s new in NSX-T 2.1:

Today at Pivotal SpringOne, VMware is launching version 2.1 of NSX-T, and with it comes a networking stack underpinning Pivotal Container Service, direct integration with Pivotal Cloud Foundry, and significant enhancements to load balancing capabilities for OpenStack Neutron and Kubernetes ingress. These load balancers can be virtual or bare metal. There is also native networking and security for containers, and Pivotal Operations Manager integration.

NSX-T Native Load Balancer:

NSX-T has two levels of routers, as shown above: the ones that connect to the physical world, and the ones labelled T1 in the diagram. Load balancing will be active on the T1 routers, with the following features:

  • Algorithms – Round Robin, Weighted Round Robin, Least Connections and Source IP Hash
  • Protocols – TCP, UDP, HTTP, HTTPS with passthrough, SSL Offload and End to end SSL
  • Health Checks – ICMP, TCP, UDP, HTTP, HTTPS
  • Persistence – Source IP, Cookie
  • Translation – SNAT, SNAT Automap and No SNAT

As well as the above, it will have L7 manipulation as well as OpenStack and Kubernetes ingress. Like NSX-v, these edges can be deployed in various sizes depending on the workload.
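
To give a feel for how the load balancer is driven programmatically, here is a minimal sketch against the NSX-T management API. The manager address, credentials and exact field names are my assumptions from the public API guide…a complete configuration also involves an application profile and attaching the LB service to a T1 router, which I’ve left out for brevity:

```python
import requests

NSX_MGR = "https://nsx-manager.lab.local"   # placeholder NSX Manager address
AUTH = ("admin", "VMware1!")                # placeholder credentials

# A round-robin pool of two web servers (addresses are illustrative)
pool = {
    "display_name": "web-pool",
    "algorithm": "ROUND_ROBIN",
    "members": [
        {"display_name": "web01", "ip_address": "10.10.10.11", "port": "80"},
        {"display_name": "web02", "ip_address": "10.10.10.12", "port": "80"},
    ],
}
r = requests.post(f"{NSX_MGR}/api/v1/loadbalancer/pools",
                  auth=AUTH, json=pool, verify=False)
r.raise_for_status()
pool_id = r.json()["id"]

# A TCP virtual server fronting the pool…in a full setup this also
# references an application profile and hangs off an LB service on a T1
vip = {
    "display_name": "web-vip",
    "ip_protocol": "TCP",
    "ip_address": "192.168.100.10",
    "port": "80",
    "pool_id": pool_id,
}
requests.post(f"{NSX_MGR}/api/v1/loadbalancer/virtual-servers",
              auth=AUTH, json=vip, verify=False).raise_for_status()
```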

Pivotal Cloud Foundry and NSX-T:

For those that may not know, PCF is a cloud native platform for deploying and operating modern applications, and NSX-T provides the networking to support those modern applications. This is achieved via the Network Container Plugin. The Cloud Foundry NSX-T topology includes a separate network topology per organisation, with every organisation getting one T1 router. Logical switches are then attached per space. High performance north/south routing uses the NSX routing infrastructure, including dynamic routing to the physical network.

East/west traffic happens container to container, with every container having distributed firewall rules applied on its interface. There are also a number of visibility and troubleshooting counters attached to every container. NSX also controls IP management by supplying subnets from IP blocks to namespaces, and individual IPs and MACs to containers.
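
As a toy illustration of that IP management model (this is concept-only Python, not NCP’s actual code), carving per-space subnets out of a configured IP block looks something like this:

```python
import ipaddress

# A configured IP block from which each new space gets its own subnet,
# much like NCP supplies subnets to namespaces and IPs/MACs to containers
ip_block = ipaddress.ip_network("172.24.0.0/16")
subnet_pool = ip_block.subnets(new_prefix=24)

spaces = ["org1-dev", "org1-test", "org1-prod"]
allocations = {space: next(subnet_pool) for space in spaces}

for space, subnet in allocations.items():
    gateway = next(subnet.hosts())  # first usable address as the T1 downlink
    print(f"{space}: subnet {subnet}, gateway {gateway}")
```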

Log Insight Content Pack:

As part of this release there is also a new Log Insight NSX-T Content Pack that builds on the new visibility and troubleshooting enhancements mentioned above and allows Log Insight to monitor a lot of the container infrastructure with NSX.

Conclusion:

When it comes to the NSX-T 2.1 feature capabilities, the load balancing is a case of bringing NSX-T up to speed with NSX-v; however, the thing to think about is how those capabilities will or could be used beyond vSphere environments…that is the big picture to consider here around the future of NSX, and it can be seen in the deeper integration with Pivotal Cloud Foundry.

Veeam Vault #8: VMworld 2017 Edition…Still Best Of!

I’m sitting in the airport lounge waiting to board the first leg of my 26 hour journey to Las Vegas for VMworld 2017, and I thought there was no better time to write the next edition of my Veeam Vault series. This will be my fifth VMworld, and as I wrote earlier in the week…I don’t take this event for granted! This year will be a little different for me in that I am there representing Veeam, and I am lucky enough to be presenting a couple of sessions while participating in other Veeam related meetings and activities.

Veeam has had a strong presence at VMworlds past and this year is no exception. In fact, from what I understand it’s our biggest VMworld to date, and as you walk around Mandalay Bay you will feel Veeam’s presence. Veeam won Best of Show in 2010 and Best of Technology in 2011, and has a proud history of a strong showing at every VMworld we have been a part of. And while we have challengers nipping at our heels trying to outdo us, we remain focused on delivering great technology while being the top contributor to the community and still putting on the best events at the show.

Veeam Sessions @VMworld:

Officially we have two breakout sessions this year, with Danny Allan and Rick Vanover presenting a Deep Dive on v10, and Michael Cade and myself presenting a session on advanced VMware and Veeam features and integrations. There are also a couple of vBrownBag Tech Talks where Veeam features, including talks from Michael Cade, myself and some of our great Veeam Vanguards.

The sessions can be viewed and selected from the VMworld Content Catalog here and we also have a number of Sponsor Booth sessions with our ecosystem partners…so keep an eye out for those.

Veeam @VMworld Solutions Exchange:

This year we will have two huge booths on the floor, with a Main Booth Area doing demos and prize giveaways, hosting an Experts Bar and acting as sponsor of the opening night hall crawl. We also have a coffee bar and lounge space called the vBar. This will be a chill out area serving good coffee and offering seats for people to come and relax during the event.

Veeam Community Support @VMworld:

As Eric Siebert wrote last week…Veeam gets the community and has historically been a strong supporter of VMworld community based events. This year again, we have come to the party and gone all-in in terms of being front and center in supporting community events. Special mention goes to Rick Vanover, who leads the charge in making sure Veeam is doing what it can to help make these events possible:

  • #vGolf
  • Opening Acts
  • VMunderground
  • vBrownBag
  • Spousetivities
  • vExpert Breakfast
  • vDestination Giveaway

Party with Veeam @ VMworld:

Finally, it wouldn’t be VMworld without attending Veeam’s seriously legendary party. This year we are looking to top last year’s event at Light nightclub by taking over the hottest club in Vegas…Hakkasan Nightclub! I know how hard it is to plan evening activities at VMworld, and there is no doubt that there are a lot of decent competing parties on the Tuesday night…however, whatever you do, you need to make sure that you at least stop by the MGM and party in green. RSVP here.

Final Word:

Again, this year’s VMworld is going to be huge and Veeam will be right there front and center of the awesomeness. Please stop by our sessions, visit our stand and attend our community sponsored events and feel free to chase me down for a chat…I’m always keen to meet other members of this great community. Oh, and don’t forget to get to the party!

VMworld 2017: Don’t Take it for Granted!

This time next week VMworld 2017 will be kicking off with the Sunday evening Welcome Reception among other sponsor and community events, and for me it will mark my fifth VMworld since 2012, having only missed the 2013 event. It’s become an annual pilgrimage to the west coast of the US, so much so that my wife locks in the dates at the beginning of every year. It just so happens that Father’s Day in Australia is the Sunday after VMworld, and it’s also around the time of my wedding anniversary…so if anything, VMworld reminds me to take time out from the event and pick up that year’s anniversary gift.

Having been lucky enough to attend five out of the last six VMworlds, it has almost become automatic that I am at the event, and it could be easy for me to take VMworld for granted. I am very mindful of the fact that while the event is starting to lose a little bit of its perceived shine in certain circles, it’s still the #1 Information Technology industry ecosystem event of the year, and with that, it’s still the must-attend event for IT professionals, customers, partners and vendors alike.

I am also mindful, even after attending so many VMworlds, not to waste the opportunity that presents itself as an attendee. If I think back to my first VMworld in 2012, I still remember being somewhat timid and reluctant to participate in much more than the sessions and official parties; however, the one thing I did do was observe how others were using the event to their advantage. While there is brilliant technology to be uncovered and lots of learning to be done, those that have been to VMworld before come to understand that networking is a primary benefit of attending, and the networking should be milked for all it’s worth!

Someone told me while at VMworld 2014 that "you never know who is interviewing you". This is very true, and should be something that first timers and regulars alike understand and use to their advantage as a mechanism for potential career advancement…there is no better event to rub shoulders with industry peers, community leaders and tech rockstars. With that, you should always be aware of your surroundings and never waste any opportunity that may present itself. I’m not saying that you will get a new role just by attending and seeking out conversation…but what I am saying is to constantly be on your game!

Even for those like me who have been lucky enough to attend multiple VMworlds, it’s easy to fly in and just go with the flow. Easy to not appreciate what it means to be there, and easy to turn it into a week long drinking event. So my closing message for everyone attending VMworld this year, be it your 10th or your 1st, is to make sure you maximize everything that VMworld has to offer. Take advantage of the opportunity to not only get exposure to new technologies and products but also to network, and realize the value that being at such an event offers. You never know when this VMworld could be your last…

Don’t take it for granted!

ESXi 6.5 Storage Performance Issues and Fix

[NOTE]: I decided to republish this post with a new heading and skip right to the meat of the issue, as I’ve had a lot of people reach out saying that the post helped them with their performance issues on ESXi 6.5. Hopefully people can find the content easier and have a fix in place sooner.

The issue that I came across was to do with storage performance and the native driver that comes bundled with ESXi 6.5. With the release of vSphere 6.5 yesterday, the timing was perfect to install ESXi 6.5 and start to build my management VMs. I first noticed some issues when uploading the Windows 2016 ISO to the datastore, with the ISO taking about 30 minutes to upload. From there I created a new VM and installed Windows…this took about two hours to complete, which was not what I expected…especially with the datastore being a decent class SSD.

I created a new VM and kicked off a new install, but this time I opened ESXTOP to see what was going on, and as you can see from the screen shots below, the kernel and disk write latencies were off the charts, topping 2000ms and 700-1000ms respectively…In throughput terms I was getting about 10-20MB/s when I should have been getting 400-500MB/s.

ESXTOP was showing the VM with even worse write latency.

I wondered if I had bought a lemon of a storage controller and checked the Queue Depth of the card. It’s listed with a QD of 31, which isn’t horrible for a homelab, so my attention turned to the driver. Again referencing the VMware Compatibility Guide, the device driver listed for the controller is ahci version 3.0.22vmw.

I searched for the installed device driver modules and found that the one listed above was present; however, there was also a native VMware device driver as well.

I confirmed that the storage controller was using the native VMware driver and went about disabling it as per this VMwareKB (thanks to @fbuechsel, who pointed me in the right direction in the vExpert Slack Homelab channel) as shown below.
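
For those who would rather script the change than click through, the fix boils down to disabling the native AHCI module so the host falls back to the legacy ahci driver from the compatibility guide. Here is a rough sketch over SSH…the host and credentials are placeholders, paramiko is my choice of transport, and vmw_ahci is the native driver module referenced in the KB, so double check the module name on your own host first:

```python
import paramiko

HOST = "esxi01.lab.local"            # placeholder ESXi host
USER, PASSWORD = "root", "VMware1!"  # placeholder credentials

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(HOST, username=USER, password=PASSWORD)

# Disable the native AHCI driver module…the host reverts to the legacy
# ahci driver on next boot (re-enable with --enabled=true to roll back)
cmd = "esxcli system module set --enabled=false --module=vmw_ahci"
stdin, stdout, stderr = client.exec_command(cmd)
print(stdout.read().decode(), stderr.read().decode())

client.close()
# Reboot the host (via the UI or "reboot" over SSH) for the change to take effect
```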

After the host rebooted I checked to see if the storage controller was using the device driver listed in the compatibility guide. As you can see below, not only was it using that driver, but it was now showing the six HBA ports as opposed to just the one seen in the first snippet above.

I once again created a new VM and installed Windows, and this time the install completed in a little under five minutes! Quite a difference! Upon running CrystalDiskMark I was getting the expected speeds from the SSDs, and things are moving along quite nicely.

Hopefully this post saves anyone else who might buy this, or other SuperMicro SuperServers, some time and stops them getting caught out by poor storage performance caused by the native VMware driver packaged with ESXi 6.5.


References:

https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2044993

Veeam Vault #5: $200 Million Giveaway, VCSP Roadshow Plus Vanguard Blog Updates

Welcome to the fifth edition of the Veeam Vault and the second of 2017. It’s been a busy four or so weeks for me since the last update, preparing for a number of events and webinars happening over the next month, all focusing on Veeam’s Cloud Business. In this Veeam Vault I am going to talk about an exceptional new promotion that Veeam is running to help drive increased adoption of Veeam Cloud Connect, talk briefly about the ANZ VCSP Roadshow, and post a round up of all Veeam Vanguard blog posts since the last update.

Cloud Connect $200 Million Free Cloud Services:

On Valentine’s Day we made public an amazing promotion: Veeam, through its partners, will be giving away $1,000 in free cloud services to all existing Veeam customers, powered by the Veeam Cloud & Service Provider community.

This shows just how serious we are about ensuring our customers get the most out of our availability solutions by activating Cloud Connect Backup and Replication services that are included with all Veeam Backup & Replication licenses. A few weeks in and the program has been well received and I am looking forward to this rolling out across EMEA and ANZ over the next few months. For more information about the promotion and information on Cloud Connect Backup and Cloud Connect Replication have a read of my veeam.com Blog Post here.

ANZ VCSP Roadshow 2017:

Last week in Perth we kicked off the ANZ VCSP Roadshow for 2017…this has become an annual event hosted by Veeam ANZ and aims to encourage growth in the VCSP program by presenting to new or existing VCSP partners around Veeam’s Availability Platform that’s anchored by Cloud Connect technologies. If you are in Sydney, Melbourne, Auckland or Adelaide there is still time to register here.

VeeamON 2017:

VeeamON 2017 is fast approaching, but Veeam is still giving away certification, tickets, flights and accommodation to this year’s event in May. Our latest competition is based around our VMCE certification, and if you click on the link below you will be taken to the landing page where you need to take a quiz to enter the competition.

Propel your personal career by joining us in New Orleans for a training experience with cutting edge Veeam instructors and complete your VMCE certification. If you are already a VMCE, attend the brand new VMCE-Advanced: Design & Optimization v1.

Enter to win a fully paid trip by taking the quiz before March 20th.

You can register here.

Veeam Vanguard Blog Post Roundup:

NSX Bytes: NSX-T 2.0 Released

A couple of months ago, in my NSX-v 6.3 and NSX-T 1.1 release post, I focused on NSX-v features, as that has become the mainstream version that most people know and work with…however NSX, in its Nicira roots, has always been about multi-hypervisor and has always had an MH version that worked with OpenStack deployments. The NSBU has big plans for NSX beyond vSphere, and during the NSX vExpert session we got to see a little about how NSX-T will look beyond version 1.1.

NSX-T’s main drivers relate to new data centre and cloud architectures, where greater heterogeneity drives a different set of requirements than vSphere’s: multi-domain environments leading to a multi-hypervisor NSX platform. NSX-T is highly extensible and will address more endpoint heterogeneity in future releases, including containers, public clouds and other hypervisors. As you can see, the existing use cases for NSX-T are mainly focused around devops, micro-segmentation and multi-tenant infrastructure.

What’s in NSX-T 2.0:

The short answer to this is a focus on expanding NSX to public clouds, containers and platform as a service workloads. We have already seen a tech preview at VMworld of NSX working with AWS instances, and the partnership between VMware and AWS is even more of a driver for this cross-cloud compute and networking landscape, allowing NSX-T to shine.

Expanded Networking and Security into Public Cloud and Containers:

  • Centralised security policy management
  • NSX for Public Cloud (AWS)
  • NSX for Cross-Cloud Services (AWS)
  • NSX for Containers and PaaS (Kubernetes, Openshift)

Platform Capabilities:

  • Distributed L3 at scale decoupled from vCenter
  • Intel DPDK Edge Line Rate packet performance
  • L2/L3 redundant control and data plane
  • ESXi and KVM (RHEL/Ubuntu)
  • Independent NSX interface that’s multi-vCenter
  • Scale out control plane and scale out edge cluster
  • VM and Containers Hosts

Feature Capabilities:

  • Distributed Routing, eBGP, NAT, BFD, ECMP, route-maps, 4 byte ASN
  • REST/JSON OpenAPI Specification
  • VIO, Upstream Openstack support
  • Geneve Encapsulation, QoS, Software L2 Bridge
  • Distributed stateful firewall, tag based security grouping
  • DHCP Server and Relay
  • IPFIX, Port Mirroring, Port Connectivity, Trace Flow, Backup & Restore
  • Log Insight Content Pack

Where do NSX-v and NSX-T Play:

Conclusion:

When it comes to the NSX-T 2.0 feature capabilities, many of them are a case of bringing NSX-T up to speed with where NSX-v is; however, the thing to think about is how those capabilities will or could be used beyond vSphere environments…that is the big picture to consider here around the future of NSX!

For an overview of what was released in NSX-T 2.0, the release notes can be found here, or have a read of my launch post here.

NSX Bytes: NSX-v 6.3 Host Preparation Fails with Agent VIB module not installed

NSX-v 6.3 was released last week with an impressive list of new enhancements, and I wasted no time in looking to upgrade my NestedESXi lab instance from 6.2.5 to 6.3; however, I ran into an issue that at first I thought was related to a previous VIB upgrade issue caused by VMware Update Manager not being available during NSX host upgrades…in this case it presented with the same error message in the vCenter Events view:

VIB module for agent is not installed on host <hostname> (_VCNS_xxx_Cluster_VMware Network Fabri)

After ensuring that my Update Manager was in a good state I was left scratching my head…that was until some back and forth in the vExpert Slack #NSX channel relating to a new VMwareKB that was released the same day as NSX-v 6.3.

https://kb.vmware.com/kb/2053782

This issue occurs if vSphere Update Manager (VUM) is unavailable. EAM depends on VUM to approve the installation or uninstallation of VIBs to and from the ESXi host.

Even though my Update Manager was available, I was not able to upgrade through Host Preparation. It seems like vSphere 6.x instances might be impacted by this bug, but the good news is there is a relatively easy workaround, as mentioned in the VMwareKB, that bypasses the VUM install mechanism. To enable the workaround you need to enter the Managed Object Browser of the vCenter EAM by going to the following URL and entering vCenter admin credentials.

https://vCenter_Server_IP/eam/mob/ 

Once logged in you are presented with an agency (or a list of agencies). In my case I had more than one, but I selected the first one in the list, which was agency-11.

The value that needs to be changed is the bypassVumEnabled boolean value as shown below.

To set that flag to True, enter the following URL:

https://vCenter_Server_IP/eam/mob/?moid=agency-x&method=Update

Make sure that the agency number matches your vCenter EAM instance. From there you need to change the existing configuration for that value by removing all the text in the value box and invoking the value listed below:

Once invoked you should be able to go back into the Web Client and click on Resolve under the Cluster name in the Host Preparation Tab of the NSX Installation window.

Once done I was in an all green state and all hosts were upgraded to 6.3.0.5007049. Once all hosts have been upgraded it might be a good idea to reverse the workaround and wait for an official fix from VMware.
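
If you have to apply this across more than one vCenter, the MOB dance can be scripted. The sketch below is an assumption-heavy starting point: it assumes the EAM MOB guards POSTs with the same vmware-session-nonce hidden field as the vCenter MOB, and the config payload is modelled on the KB’s screenshots…verify both against your own /eam/mob/ before trusting it:

```python
import re
import requests
import urllib3

urllib3.disable_warnings()  # lab only…the EAM MOB usually has a self-signed cert

VCENTER = "vcenter.lab.local"   # placeholder vCenter address
AGENCY = "agency-11"            # your agency ID, as found under /eam/mob/
URL = f"https://{VCENTER}/eam/mob/?moid={AGENCY}&method=Update"

session = requests.Session()
session.auth = ("administrator@vsphere.local", "VMware1!")  # placeholder credentials
session.verify = False

# Scrape the hidden session nonce from the method page (CSRF guard on MOB POSTs)
page = session.get(URL)
nonce = re.search(r'name="vmware-session-nonce"[^>]*value="([^"]+)"', page.text).group(1)

# Assumed payload shape: flip bypassVumEnabled to true…set it back to
# false afterwards to reverse the workaround
config = "<config><bypassVumEnabled>true</bypassVumEnabled></config>"

resp = session.post(URL, data={"vmware-session-nonce": nonce, "config": config})
print(resp.status_code)  # 200 means the Update method was invoked
```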

References:

https://kb.vmware.com/kb/2053782

NSX Bytes: NSX for vSphere 6.3 and NSX-T 1.1 Release Information

VMware’s NSX has been in the wild for almost three years, and while the initial adoption was slow, in recent times there has been a calculated push to make NSX more mainstream. The change in licensing last year was done not only to help drive adoption by traditional VMware customers running vSphere that previously couldn’t look at NSX due to price, but the Transformers project has also looked to build on Nicira’s roots in the heterogeneous hypervisor market and offer network virtualization beyond vSphere, beyond open source platforms, and into the public cloud space. The vision for VMware with NSX is to manage security and connectivity for heterogeneous end points through:

  • Security
  • Automation
  • Application Continuity

NSX has seen significant growth for VMware over the past twelve to eighteen months, driven mostly by customer demand focusing around micro-segmentation, IT automation and efficiency, and the need to extend across multiple data centre locations that can be pooled together. To highlight the potential that remains with NSX-v, less than 5% of the total available vSphere install base has NSX-v installed…and while that could have something to do with the initial restrictions and cost of the software, it still represents an enormous opportunity for VMware and their partners.

Last week the NSX vExpert group was given a first look at what’s coming in the new releases…below is a summation of what to expect from both NSX-v 6.3 and NSX-T 1.1. Note that we were not given an indication on vSphere 6.5 support so, like the rest of you, we are all waiting for the official release notes.

[Update] vSphere 6.5 will be supported with NSX-v 6.3

Please note that VMware vSphere 6.5a is the minimum supported version with NSX for vSphere 6.3.0. For the most up-to-date information, see the VMware Product Interoperability Matrix. Also, see 2148841.

NSX for vSphere 6.3 Enhancements:

Security:

  • NSX Pre-Assessment Tool based on vRealize Network Insight
  • Micro-Segmentation Planning and application visibility
  • New Security Certifications around ICSA, FIPS, Common Criteria and STIG
  • Linux Guest VM Introspection
  • Increase performance in service chaining
  • Larger scalability of VDI up to 50K desktops
  • NSX IDFW for VDI
  • Active Directory Integration for VDI at scale

Automation:

  • Routing Enhancements
  • Centralized Dashboard for service and ops
  • Reduced Upgrade windows with rebootless upgrades
  • Integration with vRA 7.2 enhancing LB, NAT
  • vCloud Director 8.20 support with advanced routing, DFW, VPN
  • VIO Updates to include multi-vc deployments
  • vSphere Integrated Container Support
  • New Automation Frameworks for PowerNSX, PyNSXv, vRO

Application Continuity:

  • Multi-DC deployments with Cross VC NSX enhancements for security tags
  • Operations enhancements with improved availability
  • L2VPN performance enhancements for cross DC/Cloud Connectivity

Where does NSX-T Fit:

Given there was some confusion about NSX-v vs. NSX-T in terms of everything moving to a common code base starting from the Transformers release, it was highlighted that VMware’s primary focus for 2017 hasn’t shifted away from NSX for vSphere: it will still be heavily invested in, with new capabilities in and beyond 6.3, a robust roadmap of new capabilities in future releases, and support extended well into the future.

NSX-T’s main drivers relate to new data centre and cloud architectures, where greater heterogeneity drives a different set of requirements than vSphere’s: multi-domain environments leading to a multi-hypervisor NSX platform. NSX-T is highly extensible and will address more endpoint heterogeneity in future releases, including containers, public clouds and other hypervisors. As you can see, the existing use cases for NSX-T are mainly focused around devops, micro-segmentation and multi-tenant infrastructure.

NSX-T 1.1 Brief Overview:

Again the focus is around private IaaS and multi-hypervisor support for development teams using dev clouds and employing more devops methodologies. There isn’t too much to write home about in the 1.1.0 release but there is some extended hypervisor support for KVM and ESXi, more single or multi-tenant support and some performance and resiliency optimizations.

Conclusion:

There is a lot to like about where VMware is taking NSX, and both product streams offer strong network virtualization capabilities for customers to take advantage of. There is no doubt in my mind that the release of NSX-v 6.3 will continue to build on the great foundation laid by previous NSX versions. When the release notes are made available I will take a deeper look into all the new features and enhancements and tie them into what’s most useful for service providers.

HomeLab – SuperMicro 5028D-TNT4 Unboxing and First Thoughts

While I was at Zettagrid I was lucky enough to have access to a couple of lab environments that were sourced from retired production components, and I was able to build up a lab that could satisfy the requirements of R&D, Operations and the Development team. By the time I left Zettagrid we had a lab that most people envied. I took advantage of it in terms of having a number of NestedESXi instances to use as my own lab instances, but we also had an environment that ensured new products could be developed without impacting production, with multiple layers of NestedESXi instances to test new builds and betas.

With me leaving Zettagrid for Veeam, I lost access to the lab, and even though I would have access to a nice shiny new lab within Veeam, I thought it was time to bite the bullet and go about sourcing a homelab of my own. The main reason for this was to have something local that I could tinker with, which would allow me to continue playing with the VMware vCloud suite as well as keep an eye out for new products, letting me stay engaged and continue to create content.

What I Wanted:

For me, my requirements were simple: I needed a server that was powerful enough to run at least two NestedESXi lab stacks, which meant 128GB of RAM and enough CPU cores to handle approx. twenty to thirty VMs. At the same time I needed to not blow the budget and spend thousands upon thousands, and lastly I needed to make sure that the power bill was not going to spiral out of control…as a supplementary requirement, I didn’t want a noisy beast in my home office. I also wasn’t concerned with any external networking gear, as everything would be self contained in the NestedESXi virtual switching layer.

What I Got:

To be honest, the search didn’t take that long, mainly thanks to a couple of homelab channels that I am a member of in the vExpert and Homelabs-AU Slack groups. Given my requirements it quickly came down to the SYS-5028D-TN4T Xeon D-1541 Mini-tower or the SYS-5028D-TN4T-12C Xeon D-1567 Mini-tower. Paul Braren at TinkerTry goes through in depth why the Xeon D processors in these SuperMicro Super Servers are so well suited to homelabs, so I won’t repeat what’s been written already, but for me the combination of a low power CPU (45W) that still has either 8 or 12 cores, packaged up in such a small form factor, meant that my only issue was trying to find a supplier that would ship the unit to Australia for a reasonable price.

Digicor came to the party and I was able to source a great deal with Krishnan from their Perth office. There are not too many SuperMicro dealers in Australia, and there was a lot of risk in getting the gear shipped from the USA or Europe; the cost of shipping plus import duties meant that going local was the only option. For those in Australia looking for SuperMicro homelab gear, please email/DM me and I can get you in touch with the guys at Digicor.

What’s Inside:

I decided to go for the 8 core CPU, mainly because I knew that my physical to virtual CPU ratio wasn’t going to exceed the processing power it had to offer, and as mentioned I went straight to 128GB of RAM to ensure I could squeeze a couple of NestedESXi instances onto the host.

https://www.supermicro.com/products/system/midtower/5028/sys-5028d-tn4t.cfm

  • Intel® Xeon® processor D-1540, Single socket FCBGA 1667; 8-Core, 45W
  • 128GB ECC RDIMM DDR4 2400MHz Samsung UDIMM in 4 sockets
  • 4x 3.5 Hot-swap drive bays; 2x 2.5 fixed drive bays
  • Dual 10GbE LAN and Intel® i350-AM2 dual port GbE LAN
  • 1x PCI-E 3.0 x16 (LP), 1x M.2 PCI-E 3.0 x4, M Key 2242/2280
  • 250W Flex ATX Multi-output Bronze Power Supply

In addition to what comes with the Super Server bundle I purchased 2x Samsung EVO 850 512GB SSDs for initial primary storage and also got the SanDisk Ultra Fit CZ43 16GB USB 3.0 Flash Drive to install ESXi onto as well as a 128GB Flash Drive for extra storage.

Unboxing Pics:

Small package, that hardly weighs anything…not surprising given the size of the case.

Nicely packaged on the inside.

Came with a US and AU kettle cord which was great.

The RAM came separately boxed and well wrapped in anti-static bags.

You can see a size comparison with my 13″ MBP in the background.

The back is all fan, but that doesn’t mean this is a loud system. In fact I can barely hear it purring in the background as I sit and type less than a meter away from it.

One great feature is the IPMI Remote Management, which is a brilliant and convenient addition for a HomeLab server…the network port is seen top left. On the right are the 2x10Gig and 2x1Gig network ports.

The X10SDV-TLN4F motherboard is well suited to this case and you can see how low profile the CPU fan is.

Installing the RAM wasn’t too difficult, even though there isn’t a lot of room to work with inside the case.

Finally, taking a look at the hot-swap drive bays…I had to buy a 3.5 to 2.5 inch adapter to fit in the SSDs; however, I did find that the lock in ports could hold the weight of the EVOs with ease.

BIOS and initialization boot screens

Overall First Thoughts:

This is a brilliant bit of kit and it’s perfect for anyone wanting to do NestedESXi at home without worrying about the RAM limits of NUCs or the noise and power draw of more traditional servers like the R710s that seem to make their way out of datacenters and into homelabs. The 128GB of RAM means that unless you really want to go fully physical, you should be able to nest most products and keep everything nicely contained within the ESXi host compute, storage and networking.

Thanks again to Krishnan at Digicor for supplying the equipment, and to Paul Braren for all the hard work he does over at TinkerTry. Special mention also to my work colleague Michael White, who was able to give me first hand experience of the Super Servers and helped make it a no-brainer to get the 5028D-TNT4.

I’ll follow this post up with a more detailed look at how I went about installing ESXi, what the NestedESXi labs look like, and what sort of performance I’m getting out of the system.

More 5028D Goodness:

 
