Category Archives: General

VMUG UserCon – Sydney and Melbourne Events!

A few years ago I claimed that the Melbourne VMUG UserCon was the “Best Virtualisation Event Outside of VMworld!” That was a big statement if ever there was one, however over the past couple of years I still feel that it holds true, even though there are much bigger UserCons around the world. In fairness, both the Sydney and Melbourne UserCons are solid events and, even with VMUG numbers generally struggling worldwide, they are still well attended and a must for anyone working in the VMware ecosystem.

Both events happen a couple of days apart on the 19th and 21st of March, and both are filled with quality content, quality presenters and a great community feel.

This will be my sixth straight Melbourne UserCon and my fourth Sydney UserCon. For the last couple of years I have attended with Veeam and presented a couple of times. This year Veeam has UserCon Global Sponsorship, which is exciting as the Global Product Strategy team will be presenting at a lot of the UserCons around the world. Both the Sydney and Melbourne agendas are jam-packed with virtualisation and automation goodness, and it’s actually hard to attend everything of interest with schedule conflicts happening throughout the day.

…the agendas are listed on the event sites.

As mentioned, Veeam is sponsoring both events at the Global Elite level and I’ll be presenting a session on the automation and orchestration of Veeam and VMware, featuring VMware Cloud on AWS, which is an updated follow-up to the VMworld session I presented last year. The Veeam SDDC Deployment Toolkit has been evolving since then and I’ll talk about what it means to leverage APIs and PowerShell to achieve automation goodness, with a live demo!
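
For a taste of what that looks like, here is a minimal sketch using the Backup & Replication 9.5 PowerShell snap-in to connect to a backup server and kick off a job…the server and job names are placeholders rather than anything from the actual demo:

    # Load the Veeam B&R 9.5 PowerShell snap-in and connect to the backup server
    Add-PSSnapin VeeamPSSnapIn
    Connect-VBRServer -Server "vbr01.lab.local"

    # Find a backup job by name, start it, then check the result
    $job = Get-VBRJob -Name "VMC Workloads"
    $session = Start-VBRJob -Job $job
    $session.Result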

Other notable sessions include:

If you are in Sydney or Melbourne next week, try to get down to the ICC and Crown Casino respectively to participate, learn and contribute…and hopefully we can catch up for a drink.

More Than Meets the Eye… Veeam Backup Performance

Recently I was sent a link to a video in which an end user compared Veeam to a competitor’s offering, covering backup performance, restore capabilities and UI. It mainly focused on comparing incremental backup jobs and their completion times, and showed the Veeam job taking significantly longer to complete for the same dataset. The comparison was chalk and cheese and didn’t paint Veeam in a very good light.

Now, without knowing 100% the backend configuration the user was testing against, or how the Veeam components, storage platforms and backup jobs were configured versus the competitor’s setup…the discrepancy between the two job completion times was too great and something had to be amiss. This was not an apples to apples comparison.

TL;DR – Starting from the default configuration settings and server setup, I was able to cut the time to complete an incremental backup job from 24 minutes to under 4 minutes by scaling out Veeam infrastructure components and tweaking transport mode options to suit the dataset. The lesson: don’t take inferred performance at face value, as a lot of factors go into backup speed.

Before I continue, it’s important for me to state that I have seen Veeam perform exceptionally well under a number of different scenarios, and I know from my own experience in previous roles at large service providers that it can handle thousands of VMs and scale up to handle larger environments. That said, like any environment, you need to understand how to properly scope and size backup components to suit…and that includes more than just the backup server and Veeam components. Storage obviously plays a huge role in backup performance, as does the design of the virtualisation platform and the networking.

I haven’t set out in this post to put together a guide on how to scale Veeam…rather, I have focused on trying to debunk the differential in job completion times I saw in the video. I went into my lab and started to think about how scaling Veeam components and choosing different options for backups and proxies can hugely impact the time it takes for backup jobs to complete. For the testing I used a Veeam Backup & Replication server that I had deployed with the Update 4 release, with active jobs that had been in operation for more than a month.

The Veeam Backup & Replication server is a VMware virtual machine running on a modest 2 vCPU and 8GB of RAM. Initially I had this running as an all-in-one backup server and proxy setup. I have a SOBR repository consisting of two ReFS-formatted local VMDK extents (the underlying storage is vSAN) and a Capacity Tier extent going to Amazon S3. The backup job consisted of nine VMs with a footprint of about 162GB; a small dataset, but one based on real-world workloads. The job was running Forward Incremental, keeping 14 restore points, running every 4 hours with a Synthetic Full every 24 hours (its initial purpose was to demo Cloud Tier), and on average the incrementals were taking between 23 and 25 minutes to complete.
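
As a side note, rather than eyeballing the console for those run times, the PowerShell snap-in makes it easy to pull session durations for a job. A quick sketch along the lines of what I use (the job name here is a placeholder):

    # Pull the last couple of days of sessions for a job and work out each run's duration
    Add-PSSnapin VeeamPSSnapIn
    Get-VBRBackupSession |
        Where-Object { $_.JobName -eq "Lab Backup" -and $_.EndTime -gt (Get-Date).AddDays(-2) } |
        Sort-Object CreationTime |
        Select-Object CreationTime, Result,
            @{N='Minutes';E={[math]::Round(($_.EndTime - $_.CreationTime).TotalMinutes, 1)}}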

The time to complete the incremental job was not an issue for me in the lab, but it provided a good opportunity to test what would happen if I scaled out the Veeam components and tweaked the default configuration settings.

Adding Proxies

As a first step I deployed three virtual proxies (2 vCPU and 4GB RAM each) into the environment and configured the job to use them in hot-add mode. Right away the job time decreased by ~50% to 12 minutes. Basically, more proxies mean more disks can be processed in parallel in hot-add mode, so it’s logical that the speed of the backup would increase.
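
For those playing along at home, registering a proxy via the snap-in is a two-step affair: add the Windows server, then promote it to a vSphere proxy. A rough sketch with placeholder names and credentials, capping concurrent tasks to match the proxy’s 2 vCPUs:

    # Register the Windows VM as a managed server, then make it a hot-add proxy
    $creds  = Get-VBRCredentials -Name "LAB\Administrator"
    $server = Add-VBRWinServer -Name "proxy01.lab.local" -Credentials $creds

    # Cap concurrent tasks at 2 to match the proxy's 2 vCPUs
    Add-VBRViProxy -Server $server -TransportMode HotAdd -MaxTasks 2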

Adding More Proxies

As a second step I deployed three more proxies into the environment and configured the job to use all six in hot-add mode. This didn’t result in a significantly faster time than with three proxies, but again, this will vary depending on the number of VMs in a job and the size of their disks. Again, Veeam offers the flexibility to scale and grow with the environment. This is not a one-size-fits-all approach and you are not locked into a particular appliance size that may max out, requiring additional significant spend.

Change Transport Mode

Next I changed the job back to using three proxies, but this time I forced the proxies to use network mode. To read more about transport modes, head here.
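
Flipping an existing proxy over to network mode is a one-liner via the snap-in…again a sketch with a placeholder proxy name, with Nbd being (to the best of my recollection) the network mode value the cmdlet expects:

    # Force an existing proxy to network (NBD) mode rather than letting it auto-select
    $proxy = Get-VBRViProxy -Name "proxy01.lab.local"
    Set-VBRViProxy -Proxy $proxy -TransportMode Nbd
    # ...and repeat for the other proxies assigned to the job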

This resulted in sub-4-minute job completion reading a similar incremental dataset to the previous runs. A ~20 minute difference after just a few tweaks of the configuration!

Removing Surplus Proxies and Balancing Things Out

For the example above I introduced more proxies, however the right balance of proxies and network mode turned out to be the optimal configuration for this particular job in terms of lowering the completion window. In fact, in my last test I was able to get the job to complete consistently around the 5 minute mark using just one proxy in network mode.

Conclusion:

So with that, you can see that by tweaking some settings and scaling out Veeam components I was able to bring the job completion time down by more than 20 minutes. Veeam offers the flexibility to scale and grow with any environment. This is not a one-size-fits-all approach, and you are not locked into a particular appliance size that will require additional and significant spend to scale out while also locking you in by way of restricted backup data portability. Again, this is just a quick example of what can be done with the flexibility of the Veeam platform, and a reminder that the default out-of-the-box experience (or a poorly configured/problematic environment) isn’t what should be expected for all use cases. Mileage will vary…but don’t let first/misleading impressions sway you…there is always more than meets the eye!

Sources:

https://bp.veeam.expert/

What Service Providers Need to Think About in 2019 and Beyond…

We are entering interesting times in the cloud space! We should no longer be talking about the cloud as a destination, and we shouldn’t be talking about how cloud can transform business…those days are over! We have entered the next level of adoption, whereby the cloud as a delivery framework has become mainstream. You only have to look at what AWS announced at re:Invent last year with its Outposts offering. The rise of automation and orchestration in mainstream IT has also meant that cloud can be consumed in a more structured and repeatable way.

To that end…where does this leave traditional Service Providers, who have for years offered Infrastructure as a Service as the core of their portfolios?

Last year I wrote a post on how the VM shouldn’t be the base unit of measurement for cloud…and even with some of the happenings since then, I remain convinced that Service Providers can continue to exist and thrive by offering value around the VM construct. Backup and DR as a service remain core to this, and there is ample thirst out in the market from customers wanting to consume services from cloud providers that are not the giant hyper-scalers.

Almost all technology vendors are succumbing to the reality that they need to extend their own offerings to include public cloud services. It is what the market is demanding…and it’s what the likes of AWS, Azure, IBM and GCP are pushing for. The backup vendor space especially has had to extend technologies to consume public cloud services such as Amazon S3, Glacier or Azure Blob as targets for offsite backups. Veeam is upping the ante with our Update 4 release of Veeam Backup & Replication 9.5, which includes Cloud Tier to object storage and additional Direct Restore capabilities to Azure Stack and Amazon EC2.

With these additional public cloud features, Service Providers have a right to feel somewhat under threat. However, we have seen this before (Office 365 versus Hosted Exchange, as an example) and the direction Service Providers need to take is to continue to develop offerings based on vendor technologies and continue to add value to the relationships they have with their clients. I wrote a long time ago, when VMware first announced vCloud Air, that people tend to buy based on relationships…and there is no more trusted relationship than that of the Service Provider.

With that, there is no doubting that clients will want to look at using a combination of services from a number of different providers. From where I stand, the days of clients going all in with one provider for all services are gone. This is an opportunity for Service Providers to be the broker. This isn’t a new concept, and plenty of Service Providers have thought about how they themselves leverage the Public Cloud to not only augment their own backend services, but make them consumable for their clients via their own portals or systems.

With all that in mind…in my opinion, there are five main areas where Service Providers need to be looking in 2019 and beyond:

  1. Networking is central to this, and the most successful Service Providers have already worked this out and offer a number of different networking services. It’s imperative that Service Providers offer a way for clients to go beyond their own networks and have the option to connect out to other cloud networks. Telcos and other carriers have built amazing technology frameworks based on APIs for consuming networking in ways that mean extending a network shouldn’t be thought of as a complex undertaking anymore.
  2. Backup, Replication and Recovery are something Service Providers have offered for a long time now, however there is more and more competition in this area today in the form of built-in protection at the application and hardware level. Where providers have traditionally excelled is at the VM level. Again, that will remain the base unit of measurement for cloud moving forward, but Service Providers need to enhance their BaaS and R/DRaaS offerings to remain competitive. Leveraging public cloud to gain economies of scale is one way to enhance those offerings.
  3. Gateway Services are a great way to lock in customers. Gateway services are typically those which are low effort for both the Service Provider and client alike. Take the example of Veeam’s Cloud Connect Backup: it’s a simple service to set up at both ends and works without too much hassle…but there is power for the Service Provider in the data being transferred into their network. From there, auxiliary services such as recovery or other business continuity services can be offered. It also leads into discussions about Replication services, which can be worked into the total service offering as well.
  4. Managed Services are the one thing the hyper-scalers can’t match Service Providers on, and the one thing that will keep all Service Providers relevant. I’ve mentioned already the trusted advisor thought process in the sales cycle. This is all about continuing to offer value around great vendor technologies, with the aim of securing the Service Provider to client relationship.
  5. Developing a Channel is central to being able to scale without adding resources to the business. Again, the most successful Service Providers all have a Channel/Partner program in place, and it’s the best way to extend that managed service, trusted provider reach. I’ve seen a number of providers unable to execute a successful channel play, however if done right it’s one way to extend that reach to more clients…staying relevant in the wake of the hyper-scalers.

This isn’t a new Differentiate or Die!? message…it’s one of ensuring that Service Providers continue to evolve with the market and with industry expectations. That is the only way to thrive and survive!

vExpert 2019 – Why The vCommunity is Still Important to me.

Overnight, applications for the 2019 VMware vExperts were opened, and as per usual it created a flurry of activity on social media channels as well as in private communications such as the vExpert Slack. There is no doubting that IT professionals still hold the vExpert award in high regard…though it’s also true that others (myself included at times) have bemoaned an apparent decline in its relevance over the past few years. That said, it still generates lots of interest and the program is still going strong a decade on from its inception in 2009.

The team running the program within VMware are no doubt looking to re-invigorate it by emphasising the importance of being thorough in the 2019 application and not doing the bare minimum when filling it out. The application blog post clearly sets out what is required for a successful application in any of the qualification paths, and there is even an example application that has been created.

Getting back to the title of this post and why the vExpert award is still important to me…I think back over the years to what the program has allowed me to achieve, both directly and indirectly. Directly, it’s allowed me to network with a brilliant core group of like-minded experts and, with that, expand my own personal reach around the vCommunity. It’s also allowed me to grow as an IT professional through interactions with others in the program, which have enabled me to expand my skills and knowledge of VMware technologies and beyond.

In addition to that, as I work in the vendor space these days and help with an advocacy program of our own…I’ve come to realise the importance that grassroots communities play in the overall health of vendors. When you take your eye off the rank and file, the coal face…whatever you want to call it…there is a danger that your brand will suffer. That is to say, never underestimate the power of the vCommunity as a major influencer.

And for the knockers…those who have been in the program for a long time should try to understand that there are others who might have had failed applications, or who are just learning what being in a vCommunity is all about and are applying for the first time. While some may feel a sense of entitlement due to longevity in the program, there are others desperate to get in and reap the rewards, and for this reason I still see the program as absolutely critical to those who work in and around VMware technologies.

VMware technology is still very much relevant, and therefore the communities built around those technologies must remain viable as places where members can interact, share, contribute and grow as IT professionals.

To that end, being a member of the vExpert program remains critical to me as I continue my career as an IT professional…have you thought about what it means to you?

References: 

https://blogs.vmware.com/vexpert/2019/01/07/vexpert-2019-applications-are-open/

Top Posts 2018

2018 is done and dusted, and looking back on the blog over the last twelve months I’ve not been happy with my output compared to previous years…I’ve found it a little harder to churn out content. Compared to 2017, where I managed 90 posts, this year I was down to 83 (including this one). My goal has always been to put out at least two quality posts a week, however again travel came into play, and that impacts my productivity and tinkering time, which is where a lot of the content comes from…that said, I am drawing closer to the 500th blog post on Virtualization is Life! since going live in 2012.

Looking back through the statistics generated via JetPack, I’ve listed the Top 10 blog posts from the last 12 months. This year VCSA, NSX, vCenter upgrade/migration and homelab posts dominated the top ten. As I posted about last year, the common 503 error for the VCSA is still a trending search topic.

  1. Quick Fix: VCSA 503 Service Unavailable Error
  2. Quick Look – vSphere 6.5 Storage Space Reclamation
  3. Upgrading Windows vCenter 5.5 to 6.0 In-Place: Issues and Fixes
  4. ESXi 6.5 Storage Performance Issues and Fix
  5. Quick Fix: OVF package with compressed disks is currently not supported
  6. NSX Bytes: Updated – NSX Edge Feature and Performance Matrix
  7. HomeLab – SuperMicro 5028D-TNT4 Storage Driver Performance Issues and Fix
  8. NSX Bytes: NSX-v 6.3 Host Preparation Fails with Agent VIB module not installed
  9. Public Cloud and Infrastructure as Code…The Good and the Bad all in One Day!
  10. Released: vCloud Director 9.1 – New HTML5 Features, vCD-CLI and more!

In terms of the Top 10 new posts created in 2018, the list looks representative of my Veeam content, with vCloud Director posts featuring as well:

  1. NSX Bytes: Updated – NSX Edge Feature and Performance Matrix
  2. Public Cloud and Infrastructure as Code…The Good and the Bad all in One Day!
  3. Released: vCloud Director 9.1 – New HTML5 Features, vCD-CLI and more!
  4. vSphere 6.7 Update 1 – Top New Features and Platform Supportability
  5. Configuring Service Provider Self Service Recovery with Veeam Backup for Microsoft Office 365
  6. Released: vCloud Director 9.5 – Full HTML5 Tenant UI, NSX-T Thoughts and More!
  7. Setting up vSAN iSCSI and using it as a Veeam Repository
  8. NSX-v 6.4.0 Released! What’s in it for Service Providers
  9. VMworld 2018 Recap Part 1 – Major Announcement Breakdown!
  10. Creating a Single Host SDDC for VMware Cloud on AWS

Again, while I found it difficult to keep up the pace of previous years, I fully intend to keep pushing this blog, staying true to its roots of vCloud Director and core VMware technologies like NSX and vSAN, though I have started to branch out and talk more about automation and orchestration topics. There will be a lot of Veeam posts around product deep dives and release info, and I’ll continue to generate content around what I am passionate about…and that includes all things hosting, cloud and availability!

I hope you can join me in 2019!

#LongLivevCD

2018 Year of Travel – A Few Interesting Stats

This year was my second full year working for Veeam, and my role, being global, requires me to travel to locations and events where my team presents content and engages with technical and social communities. We also travel to various Veeam-related training and enablement events throughout the year, as well as customer and partner meetings where and when required. This time around I knew what to expect of a travel year and, like 2017, I found this year to be just right in terms of time away working versus being at home working and being with the family.

There were lots of highlights this year, but the one that stands out was Michael Cade and myself presenting at VMworld for the second year in a row. The big difference this year was that we presented on the automation and orchestration of Veeam on VMware Cloud on AWS…to have the live demo work flawlessly after months of work was extremely satisfying. Other highlights include presenting at VeeamON and the regional VeeamON Forums and Tours, and two trips to Prague to visit our new R&D headquarters and be part of the Veeam Vanguard Summit for 2018.

So…what does all that travel look like?

Being based in Perth, Western Australia, I’m pretty much in the most isolated capital city in the world, meaning any flight is going to be significant. Even just flying to Sydney takes four to five hours…the same time it takes me to fly to Singapore. I love looking at stats, and there are a number of tools out there that manage flight info. I use TripIt to keep track of all my trips, and there are now a number of sites and mobile applications that let you import your flight data for analysis.

With that, my raw stats for 2018 (alongside 2017) are shown below:

                 2017          2018
    Trips          17            17
    Days          104           102
    Distance      262,769 km    291,866 km
    Cities         24            20
    Countries       9            10

Amazingly the numbers were very similar to 2017, however I covered a lot more kilometres. 102 days away equates to 27.9% travel, which is very manageable. Of those days I spent nearly 17 days of total flight time in the air, which when you think about it is amazing in itself. I took 68 flights: 27 domestic and 41 international.

I made it 7.4x around the Earth, 0.77x of the way to the Moon and 0.00199x of the way to the Sun.
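
Those multiples are easy enough to sanity-check against the raw distance…a quick PowerShell scratchpad using rounded textbook distances, which is why the figures land slightly differently to the app’s:

    # Sanity-check the app's ratios against the 291,866 km flown in 2018
    $km    = 291866
    $earth = 40075       # Earth's circumference in km
    $moon  = 384400      # average Earth-Moon distance in km
    $sun   = 149600000   # average Earth-Sun distance in km

    "{0:N2}x around the Earth" -f ($km / $earth)   # ~7.28x
    "{0:N2}x to the Moon"      -f ($km / $moon)    # ~0.76x
    "{0:N5}x to the Sun"       -f ($km / $sun)     # ~0.00195x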

A new app I discovered this year was App In The Air. It’s the best I’ve used, provides some interesting stats and also compiles a travel year video. Its summary gives me great insight into the travel year.

So that’s a quick round-up of what my year looked like living the life of a Global Technologist at Veeam. Let’s see how 2019 shapes up!

Top vBlog 2018 – Last few Days to Vote!

While I had resisted the temptation to put out a blog post on this year’s Top vBlog voting, I thought with voting coming to an end it was worth a shout-out, just in case some of you hadn’t had the chance to vote or didn’t know about the Top vBlog vLaunchPad list created and maintained by Eric Siebert of vSphere-land.

As Eric mentions, vBlog voting should be based on the longevity, length, frequency and quality of a blog’s posts. There is an amazing amount of great content created daily by this community and, all things aside, the Top vBlog vote goes some way to recognizing the hard work most bloggers put into creating content for the community.

Good luck to all those listed, and for those who haven’t voted yet, click on the link below to cast your vote. Even though I’ve slowed down a little this year, if you feel inclined and enjoy my content around Veeam, vCloud Director, Availability, NSX, vSAN, and cloud and hosting in general…it would be an honour to have you consider anthonyspiteri.net in your Top 12.

https://topvblog.questionpro.com/

Thanks again to Eric Siebert.

References:

http://vsphere-land.com/news/voting-now-open-for-top-vblog-2018.html 

AWS re:Invent 2018 Recap – Times…they a̶r̶e̶ have a̶ Changi̶n̶g̶ed!

I wrote this sitting in the Qantas Lounge in Melbourne, waiting for the last leg back to Perth after spending the week in Las Vegas at AWS re:Invent 2018. I had fifteen hours on the LAX to MEL leg, and before that flight took off I struck up a conversation (something I never usually do on flights) with the guy in the seat next to me. He noticed my 2017 AWS re:Invent jumper (which is 100x better than the 2018 version) and asked me if I had attended re:Invent.

It turned out he worked for a San Francisco based company that wrote middleware integration for Salesforce. After a little small talk, we got into some deep technical discussions about the announcements and about what we did in our day to day roles. Though I shouldn’t have been surprised, just as I had never heard of his company, he had never heard of Veeam…ironically he was from Russia and now working in Melbourne.

The fact that he hadn’t heard of Veeam wasn’t in itself the most surprising part…it was the fact that he claimed to be a DevOps engineer but had never touched any piece of VMware software or virtualisation infrastructure. His day to day was exclusively working with AWS web technologies. He wasn’t young…maybe early 40s…and this to me seemed strange in itself.

He worked exclusively around APIs using AWS API Gateway, CloudFormation and other technologies, but also used Nginx for reverse proxy purposes. That got me thinking that the web application developers of today are far, far different to those I used to work with in the early 2000s and 2010s. I come from the world of LAMP and .NET application platforms…I stopped working on web and hosting technologies around the time Nginx was becoming popular.

I can still hold a conversation (and we did have a great exchange about how he DevOp’ed his applications) around the base frameworks and components that go into making a web application work…but they are very, very different from the web applications I used to architect and support on Windows and Linux.

All In on AWS!

The other interesting thing from the conversation was that his Technical Director mandates the exclusive use of AWS services; nothing outside the service catalog on the AWS Console. That to me was amazing in itself. I started to talk to him about automation and orchestration tools and mentioned that I’d been using Terraform of late…he had never used it himself. He asked me about it, and in this case I was the one telling him how it worked! That at least made me feel somewhat not totally dated and past it!

My takeaway from the conversation, plus what I experienced at re:Invent, was that there is a strong, established sector of the IT industry that AWS has created, nurtured and is now helping to flourish. This isn’t a change-or-die message…this is simply my own realisation that the times have changed, and as a technologist in the industry I owe it to myself to make sure I am aware of how AWS has shifted web and application development from what I (and, I assume, the majority of those reading this post) perceive to be mainstream.

That said, just as a hybrid approach to infrastructure has solidified as the accepted hosting model for applications, so too will the application world see a combination of the old and the new. The biggest difference is that, more than ever…these worlds are colliding…and that is something that shouldn’t be ignored!

Backing up 6.7 Update 1 VCSA to Cloud Connect Fails

A few weeks ago I upgraded my nested ESXi homelab to vSphere 6.7 Update 1. Even though Veeam does not have official supportability for this release until our Backup & Replication 9.5 Update 4 release, there is a workaround that deals with the change of vSphere API version which, out of the box, causes backups to fail. After the upgrade and the application of the workaround I started to get backup errors while trying to process the main lab VCSA VM, which was now running vCenter 6.7 Update 1. All other VMs were being backed up without issue.

Processing LAB-VC-67 Error: Requested value ‘vmwarePhoton64Guest’ was not found.

The error was interesting and only impacted the VCSA VM I had upgraded to 6.7 Update 1. I have another VCSA VM in my lab which is on the GA release of 6.7 and was backing up successfully. What was interesting is that it appeared the Guest OS type of the VM had changed, or was being recognised as Photon OS, from within the upgraded vCenter on which it itself lived.

Looking at the VM Summary, it was being listed as VMware Photon OS (64-bit).

My first instinct was to change this back to match the other VCSA, which was Other 3.x Linux (64-bit).

However, due to the chicken-and-egg nature of having the management VCSA on the same vCenter, when I logged into the ESXi host (also upgraded to 6.7 Update 1) I saw that it didn’t match what was being shown in vCenter.

Thinking it was due to a mismatch, I changed the Guest OS type here to Photon OS, however the same issue occurred. Next I tried to get a little creative and changed the Guest OS type to Other Linux (64-bit), but even though I changed it to that from ESXi…from vCenter (itself) it was still reporting Photon OS, and the job still failed.
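
For reference, this is roughly how I was checking and flipping the Guest OS type with PowerCLI. The VM and vCenter names are from my lab, the guest ID strings are the vSphere API identifiers, and the change normally wants the VM powered off…and as noted above, it didn’t fix the backups anyway:

    # Check what guest ID vCenter currently reports for the VM
    Connect-VIServer -Server "vcsa.lab.local"
    Get-VM "LAB-VC-67" | Select-Object Name, @{N='GuestId';E={$_.ExtensionData.Config.GuestId}}

    # Flip it to the value the GA 6.7 VCSA reports (didn't help in this case)
    Set-VM -VM (Get-VM "LAB-VC-67") -GuestId other3xLinux64Guest -Confirm:$false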

The Issue:

I submitted a support ticket, and from the logs the support team were able to ascertain that the issue actually lay at the Cloud Connect provider’s end. I was sending these backups directly to the Cloud Connect provider, so my next step to confirm this was to try a local backup test job, and sure enough the VM processed without issue.

I then attempted a Backup Copy job from that successful test job to the Cloud Connect provider, and that resulted in the same error.

From the job logs it became clear what the issue was:

[07.11.2018 03:00:12] <01> Info [CloudGateSvc 119.252.79.147:6180]Request: [Service.Connect] SessionType:4, SessionName:Lab Management, JobId:54788e4d-7ba1-488a-8f80-df6014c58462, InstallationId:30ee4690-01c9-4368-94a6-cc7c1bad69d5, JobSessionId:b1dba231-18c2-4a28-9f74-f4fa5a8c463b, IsBackupEncrypted:False, ProductId:b1e61d9b-8d78-4419-8f63-d21279f71a56, ProductVersion:9.5.0.1922,
[07.11.2018 03:00:13] <01> Info [CloudGateSvc xx.xx.xx.xx:6180]Response: CIResult:b4aa56f4-fd02-4446-b893-2c39a16e535e, ServerTime:6/11/2018 7:00:13 PM, Version:9.5.0.1536,

At my end I am running Backup & Replication 9.5 Update 3a, while at the provider end they are running Backup & Replication 9.5 Update 3. Update 3a introduced supportability for vSphere 6.7 and other platform updates…and this included updates to Veeam’s list of supported Guest OS types. In a nutshell, the Veeam Cloud Connect backup server still needs to understand what type of VM/guest it is backing up into its cloud repository. For this to be resolved the provider would need to upgrade their Cloud Connect infrastructure to Update 3a…meanwhile, I’m backing up the VM locally for the time being.
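
You can actually see the mismatch in the log excerpt above: my end reports ProductVersion 9.5.0.1922 (Update 3a) while the provider responds with Version 9.5.0.1536 (Update 3). A quick way to check which build a Backup & Replication server is running, assuming a default install path, is to read the file version of the console binary:

    # Map the binary's file version to a release: 9.5.0.1922 = Update 3a, 9.5.0.1536 = Update 3
    (Get-Item 'C:\Program Files\Veeam\Backup and Replication\Backup\Veeam.Backup.Manager.exe').VersionInfo.ProductVersion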

Timely Message for VCSPs running Cloud Connect:

As we approach the release of another update for Backup & Replication, it’s important for Veeam Cloud and Service Providers to understand that they need to keep in step with the latest releases. This is why we typically give providers an RTM build at least two weeks before GA.

With vSphere 6.7 Update 1 starting to be deployed in more organisations, it’s important to be aware of any issues that could stop tenant backups from completing successfully. This has generally been a consideration for providers offering Cloud Connect over the years…especially with Cloud Connect Replication, where the target platform needs to be somewhat in step with the latest platforms available.

References:

https://www.veeam.com/kb2443

https://www.veeam.com/kb2784

Hybrid World… Why IBM buying RedHat makes sense!

As Red October came to a close…at a time when US tech stocks were taking their biggest battering in a long time…the news came out over the weekend that IBM had acquired RedHat for 34 billion dollars! This seems to have taken the tech world by surprise…the all-cash deal represents a massive 63% premium on the previous close of RedHat’s stock price…all in all it seems ludicrous.

Most people I’ve talked to about it, and the comments I’ve read on social media and blog sites, suggest that the deal is horrible for the industry…but I feel this is more a reaction to IBM than anything. IBM has a reputation for swallowing up companies whole and spitting them out the other side of the merger process a shell of what they once were. There has also been a lot of empathy for the employees of RedHat, especially from ex-IBM employees with experience inside the Big Blue machine.

I’m no expert on M&A and I don’t pretend to understand the mechanics behind the deal and what is involved…but when I look at what RedHat has in its stable, I can see why IBM has made such an aggressive play for them. On the surface it seems IBM are in trouble, with their stock price and market capitalization falling nearly 20% this year and more than 30% over the last five years…they had to make a big move!

IBM’s previous 2013 acquisition of SoftLayer (for a measly 2 billion USD) helped them remain competitive in the Infrastructure as a Service space and, if you believe the stories, they have done very well out of integrating the SoftLayer platform into what was Bluemix and is now IBM Cloud. This 2013 Forbes article on the acquisition sheds some light on why the RedHat acquisition makes sense and is true to form for IBM.

IBM sees the shift of big companies moving to the cloud as a 20-year trend…

That was five years ago…and since then a lot has happened in the Cloud world. Hybrid cloud is now the accepted route to market with a mix of on-premises, IaaS and PaaS hosted and hyper-scale public cloud services being the norm. There is no one cloud to rule them all! And even though AWS and Azure continue to dominate and be front of mind there is still a lot of choice out there when it comes to how companies want to consume their cloud services.

Looking at RedHat’s stable, and setting aside the obvious enterprise and open source Linux distros, the real sweet spot of the deal lies in RedHat’s products that contribute to hybrid cloud.

I’ve heard a lot more noise of late about RedHat OpenStack becoming a platform of choice as companies look to transform away from more traditional VMware/Hyper-V based platforms. RedHat OpenShift is also being considered as an enterprise-ready platform for the containerization of workloads. Some sectors of the industry (government and universities) have already decided on a move to platforms backed by RedHat…the one thing I would comment here is that there was an upside to that which might now be clouded by IBM being in the mix.

Rounding out the stable, RedHat have a Cloud Suite which encompasses most of the products listed above: CloudForms for Infrastructure as Code, Ansible for orchestration, and RedHat Virtualization together with OpenStack and OpenShift…it’s a decent proposition!

Put all that together with the current services of IBM Cloud and you start to have a compelling portfolio covering almost all desired aspects of hybrid and multi-cloud service offerings. If the acquisition of SoftLayer was the start of a 20-year trend, then IBM are trying to keep themselves positioned ahead of the curve and very much in step with the next evolution of that trend. That isn’t to say they are not playing catch-up with the likes of VMware, Microsoft, Amazon, Google and the like, but I truly believe that if they don’t butcher this deal they will come out a lot stronger and, more importantly, offer valid competition in the market…and that can only be a good thing!

As for what it means for RedHat itself, their employees and culture…that I don’t know.

References:

https://www.redhat.com/en/about/press-releases/ibm-acquire-red-hat-completely-changing-cloud-landscape-and-becoming-world%E2%80%99s-1-hybrid-cloud-provider

IBM sees the shift of big companies moving to the cloud as a 20-year trend
