Tag Archives: Cloud

VMware Cloud on AWS Availability with Veeam

It’s been exactly a year since VMware announced their partnership with AWS, so it’s no surprise that at this year’s VMworld the solution is front and center and will feature heavily in Monday’s keynote. Earlier today Veeam was announced as an officially supported backup, recovery and replication platform for VMware Cloud on AWS. This is an exciting announcement for existing Veeam customers who currently use vSphere and are interested in consuming VMware Cloud on AWS.

In terms of what Veeam has been able to achieve, there is little noticeable difference in the process to configure and run backup or replication jobs from within Veeam Backup & Replication. The VMware Cloud on AWS resources are treated as just another cluster, so most actions and features of the core platform work as if the cloud-based cluster were local.

Below you can see a screen shot of a VMC vCenter from the AWS-based HTML5 Web Client. What you can see is the minimum spec for a VMC customer, which includes four hosts with 36 cores and 512GB of RAM, plus vSAN and NSX.

In terms of Veeam making this work, there are a few limitations that VMware has placed on the solution, which means that our NFS-based features such as Instant VM Recovery, Virtual Labs and SureBackup won’t work at this stage. HotAdd is the only supported backup transport mode (which isn’t a bad thing as it’s my preferred transport mode) and it talks to a new VDDK library that is part of the VMC platform.

With that, the following features work out of the box (see the scripted example after the list):

  • Backup with In Guest Processing
  • Restores to original or new locations
  • Backup Copy Jobs
  • Replication
  • Cloud Connect Backup
  • Windows File Level Recovery
  • Veeam Explorers
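
Because VMC is presented as just another vSphere cluster, all of the above can also be driven through the Veeam PowerShell snap-in exactly as you would on-premises. Below is a minimal sketch of creating and scheduling a backup job for a VMC-hosted VM; the VM, job and repository names are hypothetical:

```powershell
Add-PSSnapin VeeamPSSnapin

# The VMC cluster is inventoried like any other vCenter, so standard lookups apply
$vm   = Find-VBRViEntity -Name "VMC-App01"                        # hypothetical VM name
$repo = Get-VBRBackupRepository -Name "Default Backup Repository"

# Create the job; HotAdd is used as it is the only supported transport mode on VMC
$job = Add-VBRViBackupJob -Name "VMC Daily Backup" -Entity $vm -BackupRepository $repo

# Run it nightly at 22:00 and enable the schedule
Set-VBRJobSchedule -Job $job -Daily -At "22:00" -DailyKind Everyday
Enable-VBRJobSchedule -Job $job
```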

With the above there are a lot of options for VMC customers to stick to the 3-2-1 rule of backups…remembering that just because the compute resources are in AWS, doesn’t mean that they are highly available from a workload and application standpoint. Customers can also take advantage of the fact that VMC is just another cluster from their on-premises deployments and use Veeam Backup & Replication to replicate VMs into the VMC vCenter, meaning it can be used as a DR site.

For more information and the official blog post from Veeam co-CEO Peter McKay click here.

Cloud to Cloud to Cloud Networking with Veeam Powered Network

I’ve written a couple of posts on how Veeam Powered Network can make accessing your homelab easy with its straightforward approach to creating and configuring site-to-site and point-to-site VPN connections. For a refresher on the use cases that I’ve gone through, I had a requirement where I needed access to my homelab/office machines while on the road, and to achieve this I went through two scenarios on how you can deploy and configure Veeam PN.

In this blog post I’m going to run through a very real world solution with Veeam PN where it will be used to easily connect geographically disparate cloud hosting zones. One of the most common questions I used to receive from sales and customers in my previous roles with service providers was how to easily connect two sites so that some form of application high availability could be achieved, or even just to allow access to applications or services across sites.

Taking that further…how is this achieved in the most cost effective and operationally efficient way? There are obviously solutions available today that achieve connectivity between multiple sites, whether that be via some sort of MPLS, IPsec, L2VPN or stretched network solution. What Veeam PN achieves is a simple to configure, cost effective (remember it’s free) way to connect one to one or one to many cloud zones with little to no overhead.

Cloud to Cloud to Cloud Veeam PN Appliance Deployment Model

In this scenario I want each vCloud Director zone to have access to the other zones and be always connected. I also want to be able to connect in via the OpenVPN endpoint client and have access to all zones remotely. All zones will be routed through the Veeam PN Hub Server deployed into Azure via the Azure Marketplace. To go over the Veeam PN deployment process read my first post and also visit this VeeamKB that describes where to get the OVA and how to deploy and configure the appliance for first use.

Components

  • Veeam PN Hub Appliance x 1 (Azure)
  • Veeam PN Site Gateway x 3 (One Per Zettagrid vCD Zone)
  • OpenVPN Client (For remote connectivity)

Networking Overview and Requirements

  • Veeam PN Hub Appliance – Incoming Ports TCP/UDP 1194, 6179 and TCP 443
    • Azure VNET 10.0.0.0/16
    • Azure Veeam PN Endpoint IP and DNS Record
  • Veeam PN Site Gateways – Outgoing access to at least TCP/UDP 1194
    • Perth vCD Zone 192.168.60.0/24
    • Sydney vCD Zone 192.168.70.0/24
    • Melbourne vCD Zone 192.168.80.0/24
  • OpenVPN Client – Outgoing access to at least TCP/UDP 6179
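
If you’re building the Hub’s Azure networking yourself rather than relying on what the Marketplace deployment creates for you, the inbound requirements above translate into a Network Security Group. A minimal sketch using the AzureRM PowerShell module follows; the NSG, resource group and location names are hypothetical:

```powershell
# Inbound rules for the Hub: site VPN (1194), endpoint VPN (6179) and HTTPS management (443)
$siteVpn = New-AzureRmNetworkSecurityRuleConfig -Name "VeeamPN-SiteVPN" `
    -Protocol * -Direction Inbound -Priority 100 -Access Allow `
    -SourceAddressPrefix * -SourcePortRange * `
    -DestinationAddressPrefix * -DestinationPortRange 1194

$endpointVpn = New-AzureRmNetworkSecurityRuleConfig -Name "VeeamPN-EndpointVPN" `
    -Protocol * -Direction Inbound -Priority 110 -Access Allow `
    -SourceAddressPrefix * -SourcePortRange * `
    -DestinationAddressPrefix * -DestinationPortRange 6179

$mgmt = New-AzureRmNetworkSecurityRuleConfig -Name "VeeamPN-HTTPS" `
    -Protocol Tcp -Direction Inbound -Priority 120 -Access Allow `
    -SourceAddressPrefix * -SourcePortRange * `
    -DestinationAddressPrefix * -DestinationPortRange 443

New-AzureRmNetworkSecurityGroup -Name "VeeamPN-NSG" -ResourceGroupName "VeeamPN-RG" `
    -Location "australiaeast" -SecurityRules $siteVpn, $endpointVpn, $mgmt
```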

In my setup the Veeam PN Hub Appliance has been deployed into Azure, mainly because that’s where I was able to test out the product initially, but also because in theory it provides a centralised, highly available location for all the site-to-site connections to terminate into. This central Hub can be deployed anywhere, and as long as it has HTTPS connectivity configured correctly you can access the web interface and start configuring your sites and standalone clients.

Configuring Site Clients for Cloud Zones (site-to-site):

To configure the Veeam PN Site Gateways you need to register the sites from the Veeam PN Hub Appliance. When you register a client, Veeam PN generates a configuration file that contains VPN connection settings for the client. You must use the configuration file (downloadable as an XML) to set up the Site Gateways. Referencing the diagram at the beginning of the post, I needed to register three separate client configurations as shown below.

Once this has been completed you need to deploy a Veeam PN Site Gateway in each vCloud hosting zone…because we are dealing with an OVA, the OVFTool will need to be used to upload the Veeam PN Site Gateway appliances. I’ve previously created and blogged about an OVFTool upload script using PowerShell which can be viewed here, and there is a simplified sketch of the command below. Each Site Gateway needs to be deployed and attached to the vCloud vORG Network that you want to extend…in my case it’s the 192.168.60.0, 192.168.70.0 and 192.168.80.0 vORG Networks.
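
As a rough illustration of that deployment step (not the full script from my earlier post), OVFTool can push the OVA straight into an org vDC using its vcloud:// locator. The org, vDC, vApp, network and credential values below are all hypothetical:

```powershell
$ovftool = "C:\Program Files\VMware\VMware OVF Tool\ovftool.exe"
$ova     = "C:\Temp\VeeamPN-SiteGateway.ova"

# --net maps the appliance NIC onto the vORG Network being extended
# (the Perth zone's 192.168.60.0/24 network in this example)
& $ovftool --acceptAllEulas `
    --net:"VM Network=Perth-vORG-Net" `
    $ova `
    "vcloud://backupadmin@vcd.provider.com:443?org=MyOrg&vdc=Perth-VDC&vapp=VeeamPN-GW"
```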

Once each vCloud zone has the Site Gateway deployed and the corresponding XML configuration file added, you should see all sites connected in the Veeam PN Dashboard.

At this stage we have connected each vCloud zone to the central Hub Appliance, which is now configured to route to each subnet. If I were to connect an OpenVPN client to the Hub Appliance I could access all subnets and connect to systems or services in each location. Shown below is the Tunnelblick OpenVPN client connected to the Hub Appliance, showing the injected routes in the network settings.

You can see above that the 192.168.60.0, 192.168.70.0 and 192.168.80.0 static routes have been added and set to use the tunnel interface’s default gateway, which is on the central Hub Appliance.

Adding Static Routes to Cloud Zones (Cloud to Cloud to Cloud):

To complete the setup and have each vCloud zone talking to the others, we need to configure static routes on each zone’s network gateway/router so that traffic destined for the other subnets is routed to the local Site Gateway IP, through the central Hub Appliance, on to the destination and then back. To achieve this you just need to add static routes to the router. In my example I have added the static route to the vCloud Edge Gateway through the vCD portal, as shown below in the Melbourne zone.
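
To make the routing logic concrete, the sketch below simply enumerates the routes each Edge Gateway needs: every other zone’s subnet, with the local Site Gateway as the next hop. The Site Gateway IPs are hypothetical:

```powershell
# Illustrative only: which static routes each vCloud Edge Gateway requires
$zones = @{
    "Perth"     = @{ Subnet = "192.168.60.0/24"; SiteGatewayIP = "192.168.60.250" }
    "Sydney"    = @{ Subnet = "192.168.70.0/24"; SiteGatewayIP = "192.168.70.250" }
    "Melbourne" = @{ Subnet = "192.168.80.0/24"; SiteGatewayIP = "192.168.80.250" }
}

foreach ($zone in $zones.Keys) {
    # Every other zone's subnet is reached via this zone's local Site Gateway
    $zones.Keys | Where-Object { $_ -ne $zone } | ForEach-Object {
        "{0} Edge Gateway: route {1} -> next hop {2}" -f $zone, $zones[$_].Subnet, $zones[$zone].SiteGatewayIP
    }
}
```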

Conclusion:

To summarise the steps taken to set up and configure a cloud to cloud to cloud network using Veeam PN, through its site-to-site connectivity feature to allow cross-site connectivity while also allowing access to systems and services via the point-to-site VPN:

  • Deploy and configure Veeam PN Hub Appliance
  • Register Cloud Sites
  • Register Endpoints
  • Deploy and configure Veeam PN Site Gateway in each vCloud Zone
  • Configure static routes in each vCloud Zone

Those five steps took me less than 30 minutes, and that includes the OVA deployments. At the end of the day I’ve connected three disparate cloud zones at Zettagrid, which all access each other through a Veeam PN Hub Appliance deployed in Azure. From here there is nothing stopping me from adding more cloud zones that could be situated in AWS, IBM, Google or any other public cloud. I could even connect up my home office or a remote site to the central Hub to give full coverage.

The key here is that Veeam Powered Network offers a simple solution to what is traditionally a complex and costly one. Again, this will not suit all use cases, but at its most basic functional level it would have been the answer to the cross cloud connectivity questions I mentioned at the start of the article.

Go give it a try!

Attack from the Inside – Protecting Against Rogue Admins

In July of 2011, Distribute.IT, a domain registration and web hosting services provider in Australia, was hit with a targeted, malicious attack that resulted in the company going under and their customers left without their hosting or VPS data. The attack was calculated, targeted and vicious in its execution… I remember the incident well as I was working for Anittel at the time and we were offering similar services…everyone in the hosting organization was concerned when thinking about the impact a similar attack would have on our systems.

“Hackers got into our network and were able to destroy a lot of data. It was all done in a logical order – knowing exactly where the critical stuff was and deleting that first,”

While it was reported at the time that a hacker got into the network, the way in which the attack was executed pointed to an inside job, and although it was never proven, it is almost 100% certain that the attacker was a disgruntled ex-employee. The very real issue of an inside attack has popped up again…this time Verelox, a hosting company out of the Netherlands, has effectively been taken out of business by a confirmed attack from within by an ex-employee.

My heart sinks when I read of situations like this, and it was the only thing that truly kept me up at night as someone who was ultimately responsible for similar hosting platforms. I could deal with, and probably reconcile with myself, a situation where a piece of hardware failed causing data loss…but if an attacker had caused the data loss then all bets would have been off and I might have found myself scrambling to save face and, along with others in the organization, searching for a new company…or worse, a new career!

What Can Be Done at a Technical Level?

Knowing a lot about how hosting and cloud service providers operate, my feeling is that 90% of organizations out there are not prepared for such attacks and are at the mercy of an attack from the inside…either by a current or ex-employee. Taking that a step further, there are plenty that are at risk of an inside attack perpetrated by external malicious individuals. This is where the principle of least privilege needs to be taken to the nth degree. Clear separation of operational and physical layers needs to be considered as well, to ensure that if systems are attacked, not everything can be taken down at once.

Implementing some form of certification or compliance such as ISO 27001, SOC or IRAP forces companies to become more vigilant through the stringent processes and controls imposed once they achieve compliance. This in turn naturally leads to better and more complete disaster recovery and business continuity scenarios that are written down and require testing and validation in order to pass certification.

From a backup point of view, these days with most systems being virtual it’s important to consider a backup strategy that not only makes use of the 3-2-1 rule of backups (three copies of your data, on two different media, with one off site), but also implements some form of air-gapped backups that are, in theory, completely separate from and inaccessible to production networks, meaning that only a few very trusted employees have access to the backup and restore media. In practice, implementing a complete air-gapped solution is complex and potentially costly, and this is where service providers are chancing their futures on a scenario that has a small percentage chance of happening, even though the likelihood of it playing out is greater than it’s ever been.

In a situation like Verelox, I wonder if, like most IaaS providers, they didn’t back up all client workloads by default, meaning that backup was an additional service charge that some customers didn’t know about…that said, if backup systems are wiped clean, is there any use in having those services anyway? That is to say…is there a backup of the backup? This being the case, I also believe that businesses need to start looking at cross cloud backups and not rely solely on their provider’s backup systems. Something like the Veeam Agents or Cloud Connect can help here.
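
As a sketch of what that might look like, a customer running Veeam who has already registered a Cloud Connect service provider can point a Backup Copy Job at the provider’s cloud repository, keeping a copy of their backups outside the primary provider’s failure domain. The job and repository names below are hypothetical:

```powershell
Add-PSSnapin VeeamPSSnapin

# Source job and the service provider's cloud repository
# (assumes a Cloud Connect provider has already been registered)
$sourceJob = Get-VBRJob -Name "Production Backup"
$cloudRepo = Get-VBRBackupRepository -Name "Cloud Repository"

# Off-site copy of the backups - the "backup of the backup"
Add-VBRViBackupCopyJob -Name "Offsite Copy" -BackupJob $sourceJob -Repository $cloudRepo
```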

So What Can Be Done at an Employee Level?

The more I think about the possible answer to this question, the more I believe that service providers can’t fully protect themselves from such internal attacks. At some point trust supersedes all else, and no amount of vetting or process can stop someone with the right sort of access doing damage. To that end, making sure that you are looking after your employees is probably the best defence against someone feeling aggrieved enough to carry out a malicious attack such as the one Verelox has just gone through. In addition to looking after employees’ well-being it’s also a good idea to, within reason, keep tabs on an employee’s state in life in general. Are they going through any personal issues that might make them unstable, or have they been wronged by someone else within the company? Generally social issues should be picked up during the hiring process, but complete vetting of employee stability is always going to be a lottery.

Conclusion

As mentioned above, this type of attack is a worst case scenario for every service provider operating today. There are steps that can be taken to minimize the impact and to stop an employee getting to the point where they choose to do damage, but my feeling is we haven’t seen the last of these attacks and unfortunately more will suffer…so where you can, implement policy and procedure to protect, and then to recover when or if they do happen.


Resources:

https://www.crn.com.au/news/devastating-cyber-attack-turns-melbourne-victim-into-evangelist-397067/page1

https://www.itnews.com.au/news/distributeit-hit-by-malicious-attack-260306

https://news.ycombinator.com/item?id=14522181

Verelox (Netherlands hosting company) servers wiped by ex-admin – r/sysadmin thread

Looking Beyond the Hyper-Scaler Clouds – Don’t Forget the Little Guys!

I’ve been on the road over the past couple of weeks presenting to Veeam’s VCSP partners and prospective partners here in Australia and New Zealand on Veeam’s cloud business. Apart from the great feedback in response to what Veeam is doing by way of our cloud story, I’ve had good conversations around public cloud and infrastructure providers versus the likes of Azure or AWS. Coming from my background working for smaller, but very successful, service providers I found it almost astonishing that smaller resellers and MSPs seem to be leveraging the hyper-scale clouds without giving the smaller providers a look in.

On the one hand, I understand why people would choose to look to Azure, AWS and the like to run their client services…while on the other hand I believe the marketing power of the hyper-scalers has left the capabilities and reputation of smaller providers short changed. You only need to look at last week’s AWS outage and previous Azure outages to understand that no cloud is immune to outages, and it’s misjudged to assume that the hyper-scalers offer any better reliability or uptime than providers in the vCloud Air Network or other IaaS providers out there.

That said, there is no doubt that the scale and brain power that sits behind the hyper-scalers ensures a level of service and reliability that some smaller providers will struggle to match, but as was the case last week…the bigger they are, the harder they fall. The other thing that comes with scale is the ability to drive down prices, and again, there seems to be a misconception that the hyper-scalers are cheaper than smaller service providers. In fact most of the conversations I had last week as to why Azure or AWS was chosen came down to pricing and kickbacks. Certainly in Azure’s case, Microsoft has thrown a lot into ensuring customers on EAs have enough free service credits to ensure uptake, and there are apparently nice sign-up bonuses on offer to partners.

During one of those conversations, I asked the reseller why they hadn’t looked at some of the local VCSP/vCAN providers as options for hosting their Veeam infrastructure for clients to back up workloads to. Their response was that it was never a consideration due to Microsoft being…well…Microsoft. The marketing juggernaut was too strong…the kickbacks too attractive. After talking to him for a few minutes I convinced him to take a look at the local providers who offer, in my opinion, more flexible and more diverse service offerings for the use case.

Not surprisingly, in most cases money is the number one factor in a lot of these decisions, with service uptime and reliability coming in as an important afterthought…but an afterthought nonetheless. I’ve already written about service uptime and reliability in regards to cloud outages before, but the main point of this post is to highlight that resellers and MSPs can make as much money…if not more…with smaller service providers. It’s common now for service providers to offer partner reseller or channel programs that give the partner decent recurring revenue streams from the services consumed, and the more that’s consumed, the more you make by way of program level incentives.

I’m not going to do the sums because there is so much variation in the different programs, but for those reading who have not considered using smaller providers over the likes of Azure or AWS, I would encourage you to look through the VCSP Service Provider directory and the vCloud Air Network directory and locate local providers. From there, enquire about their partner reseller or channel programs…there is money to be made. Veeam (and VMware with the vCAN) put a lot of trust and effort into our VCSPs, and having worked for some of the best and knowing a lot of other service provider offerings, I can tell you that if you are not looking at them as a viable option for your cloud services then you are not doing yourself justice.

The cloud hyper-scalers are far from the panacea they claim to be…if anything, it’s worthwhile spreading your workloads across multiple clouds to ensure the best availability experience for your clients…however, don’t forget the little guys!

VMware on AWS: Thoughts on the Impact to the vCloud Air Network

Last week VMware and Amazon Web Services officially announced their new joint venture whereby VMware technology will be available to run as a service on AWS, in the form of bare-metal hardware with vCenter, ESXi, NSX and vSAN as the core VMware technology components. This isn’t some magic whereby ESXi is nested or emulated on top of the existing AWS platform, but a fully fledged dedicated virtual datacenter offering that clients can buy through VMware and have VMware manage the stack right up to the core vCenter components.

Note: These initial opinions are just that. There has been a fair bit of Twitter reaction to the announcement, with the majority being somewhat negative towards the VMware strategy. There are a lot of smart guys working on this within VMware, which means it has technical focus, not just Exec/Board strategy. There is also a lot of time between this initial announcement and its first release in 2017, however initial perception and reaction to a massive shift in direction should and will generate debate…this is my take from a vCAN point of view.

The key service benefits as taken from the AWS/VMware landing page can be seen below:

Let me start by saying that this is a huge, huge deal and its significance cannot be overstated. If I take my vCAN hat off, I can see how and why this was necessary for both parties to help each other fight off the growing challenge from Microsoft’s Azure offering and the upcoming Azure Stack. For AWS, it lets them tap into the enterprise market where they say they have been doing well…though in reality, it’s known that they aren’t doing as well as they had hoped. For VMware, it helps them look serious about offering a public cloud that is truly hyper-scale and also protects existing VMware workloads from being moved over to Azure…and to a lesser extent AWS directly.

There is a common enemy here, and to be fair to Microsoft it’s obvious that their own shift in focus and direction has been working and the industry is taking note.

Erasing vCloud Air and The vCAN Impact:

For VMware especially, it can and should erase the absolute disaster that was vCloud Air… Looking back at how the vCloud Air project transpired, the best thing to come out of it was VMware’s refocus in 2015 on propping back up the vCloud Air Network, which before that had been looking shaky, with the vCAN’s strongest weapon, vCloud Director, pushed to the side and its future uncertain. In the last twelve months there has been an apparent recommitment to vCloud Director and the vCAN and things had been looking good…however that could be under threat with this announcement…and for me, perception is everything!

Public Show of Focus and Direction:

Have a listen to the CNBC segment embedded above, where Pat Gelsinger and AWS CEO Andy Jassy discuss the partnership. Though I wouldn’t expect them to mention the 4000+ strong vCloud Air Network (or the recent partnership with IBM for that matter), they openly discuss the unique, industry-first benefits the VMWonAWS partnership brings to the market while in the same breath ignoring or putting aside the fact that the single biggest advantage the vCloud Air Network had was VMware workload mobility.

Complete VMware Compatibility:

VMware Cloud on AWS will provide VMware customers with full VM compatibility and seamless workload portability between their on-premises infrastructure and the AWS Cloud without the need for any workload modifications or retooling.

Workload Migration:

VMware Cloud on AWS works seamlessly with vSphere vMotion, allowing you to move running virtual machines from on-premises infrastructure to the AWS Cloud without any downtime. The virtual machines retain network identity and connections, ensuring a seamless migration experience.

The above features are pretty much the biggest weapons that vCloud Air Network partners had in the fight against existing or potential clients moving to, or choosing, AWS over their own VMware based platforms…and from direct experience, I know that this advantage is massive and does work. With this advantage taken away, vCAN service providers may start to lose workloads to AWS at a faster clip than before.

In truth VMware has been very slow…almost reluctant…to hand over features that would have allowed this cross cloud compatibility and migration to be even more of a weapon for the vCAN, holding back features that allowed on-premises vCenter and Workstation/Fusion to connect directly to vCloud Air endpoints in products such as Hybrid Cloud Manager. I strongly believed that those products should have been extended from day zero to connect to any vCloud Director endpoint…it wasn’t a stretch for that to occur as it is effectively the same endpoint, but for some reason it was strategically labeled as a “coming soon” feature.

VMware Access to Multiple AWS Regions:

VMware Virtual Machines running on AWS can leverage over 70 AWS services covering compute, storage, database, security, analytics, mobile, and IoT. With VMware Cloud on AWS, customers will be able to leverage their existing investment in VMware licenses through customer loyalty programs.

I had mentioned on Twitter that the image below was both awesome and scary, mainly because all I think about when I look at it is the overlay of the vCloud Air Network, and how VMware actively promotes 4000+ vCAN partners helping existing VMware customers leverage their existing investments on vCloud Air Network platforms.

Look familiar?


In truth, of those 4000+ vCloud Air Network providers there are maybe 300 using vCloud Director in some shape or form, and of those an even smaller number can programmatically take advantage of automated provisioning and self service. Therein lies one of the biggest issues for the vCAN…while some IaaS providers excel, the majority offer services that can’t stack up next to the hyper-scalers. Because of that, I don’t begrudge VMware for forgetting about the capabilities of the vCAN, but as mentioned above, I believe more could, and still can, be done to help the network compete in the market.

Conclusion:

Right, so that was all the negative stuff as it relates to the vCloud Air Network, but I have been thinking about how this can be a positive for both the vCAN and, more importantly for me…vCloud Director. I’ll put together another post on where and how I believe VMware can take advantage of this partnership to truly compete against the looming threat of the Azure Stack…with vCAN IaaS providers offering vCloud Director SP front and center of that solution.

References:

http://www.vmware.com/company/news/releases/vmw-newsfeed.VMware-and-AWS-Announce-New-Hybrid-Cloud-Service,-%E2%80%9CVMware-Cloud-on-AWS%E2%80%9D.3188645-manual.html

https://aws.amazon.com/vmware/

VMware Cloud™ on AWS – A Closer Look

https://twitter.com/search?f=tweets&vertical=default&q=VMWonAWS

Azure Stack – Microsoft’s White Elephant?

Microsoft’s Worldwide Partner Conference is currently on again in Toronto, and even though my career has diverged from working on the Microsoft stack (no pun intended) over the past four or five years, I still attend the local Microsoft SPLA monthly meetings where possible and keep a keen eye on what Microsoft is doing in the cloud and hosting spaces.

The concept of Azure Stack has been around for a while now and it entered Technical Preview early this year. Azure Stack was/is touted as an easily deployable, end-to-end solution that gives enterprises Azure-like flexibility on-premises covering IaaS, PaaS and containers. The premise of the solution is solid and Microsoft obviously sees an opportunity to cash in on the private and hybrid cloud market which, at the moment, hasn’t been locked down by any one vendor or solution. The end goal, though, is for Microsoft to have workloads that are easily transportable into the Azure cloud.

Azure Stack is Microsoft’s emerging solution for enabling organizations to deploy private Azure cloud environments on-premises. During his Day 2 keynote presentation at the Worldwide Partner Conference (WPC) in Toronto, Scott Guthrie, head of Microsoft’s Cloud and Enterprise Group, touted Azure Stack as a key differentiator for Microsoft compared to other cloud providers.

The news overnight at WPC is that, apart from the delay in its release (which wasn’t unexpected given the delays in Windows Server 2016), Microsoft has now said that Azure Stack will only be available via pre-validated hardware partners, which means that customers can’t deploy the solution themselves and the stack loses flexibility.

Neil said the move is in response to feedback from customers who have said they don’t want to deal with the complexities and downtime of doing the deployments themselves. To that end, Microsoft is making Azure Stack available only through pre-validated hardware partners, instead of releasing it as a solution that customers can deploy, manage and customize.

This is an interesting and, in my opinion, risky move by Microsoft. There is precedent to suggest that going down this path leads to lesser market penetration and could turn Azure Stack into the white elephant that I suggested in a tweet and in the title of this post. You only have to look at how much of a failure VMware’s EVO:Rail product was to understand the risks of tying a platform to vendor specific hardware and support. Effectively they are now creating a converged infrastructure stack with Azure bolted on, whereas before there was absolute freedom in enterprises being able to deploy Azure Stack onto existing hardware, allowing them to realise value from existing investments and extend them to provide private cloud services.

As with EVO:Rail and other Validated Designs, I see three key areas where they suffer and impact customer adoption.

Validated Design Equals Cost:

If I take EVO:Rail as an example, there was a premium placed on obtaining the stack through the validated vendors, which meant a huge premium over what could have been sourced independently once you took hardware, software and support costs into account. Potentially this will be the same for Azure Stack…vendors will add their percentage for the validated design, plus ongoing maintenance. As mentioned above, there is also now the fact that you must buy new hardware (compute, network, storage), meaning any existing hardware that can and should be used for private cloud is now effectively dead weight, and enterprises need to rethink their existing investments long term.

Validated Design Equals Inherent Complexity:

When you take something in-house and don’t let smart technical people deploy a solution, my mind starts to ask why. I understand the argument will be that Microsoft wants a consistent experience for Azure Stack, and there are other examples of controlled deployments and tight solutions (VMware NSX comes to mind in the early days), but when the market you are trying to break into is built on the premise of reduced complexity…only allowing certain hardware and partners to run and deploy your software tells me that it walks a fine line between being truly consumable and being a black box. I’ve talked about Complex Simplicity before, and this move suggests that Azure Stack was not ready or able to be given to techs to install, configure and manage.

Validated Design Equals Inflexibility:

Both of the points above lead into the suggestion that Azure Stack loses its flexibility. Flexibility in the private and hybrid cloud world is paramount, and the existing players like OpenStack and others are extremely flexible…almost to a fault. If you buy from a vendor you lose the flexibility of choice and can then be impacted at will by cost pressures relating to maintenance and support. If Azure Stack is too complex to be self managed then it certainly loses the flexibility to be used in the service provider space…let alone the enterprise.

Final Thoughts:

Worryingly, the tone of the official blog announcement over the delay suggests that Microsoft is reaching to justify the delay and the reasoning for the different distribution model. You just have to read the first few comments on the blog post to see that I am not alone in my thoughts.

Microsoft is committed to ensuring hardware choice and flexibility for customers and partners. To that end we are working closely with the largest systems vendors – Dell, HPE, Lenovo to start with – to co-engineer integrated systems for production environments. We are targeting the general availability release of Azure Stack, via integrated systems with our partners, starting mid-CY2017. Our goal is to democratize the cloud model by enabling it for the broadest set of use-cases possible.


With the release of Azure Stack now 12+ months away, Microsoft still has the opportunity to change the perception that the WPC 2016 announcements have, in my mind, created. The point of private cloud is to drive operational efficiency in all areas. Having a fancy interface with all the technical trimmings isn’t what will make an on-premises stack gain mainstream adoption. Flexibility, cost and reduced complexity are what count.

References:

https://azure.microsoft.com/en-us/blog/microsoft-azure-stack-delivering-cloud-infrastructure-as-integrated-systems/?utm_campaign=WPC+2016&utm_medium=bitly&utm_source=MNC+Microsite

https://rcpmag.com/articles/2016/07/12/wpc-2016-microsoft-delays-azure-stack.aspx

http://www.zdnet.com/article/microsoft-to-release-azure-stack-as-an-appliance-in-mid-2017/

http://www.techworld.com.au/article/603302/microsoft-delays-its-azure-stack-software-until-mid-2017/

AWS…Complex Simplicity?

I came across a tweet over the weekend which showed a screen grab of the AWS product catalog (shown below) and a comment pointing out the fact that the sheer number of AWS services on offer by Amazon was part of the reason why they are doing so well.

The implication was that AWS’s dominance is in part due to the fact that they have what appears to be the complete cloud product and service catalog, providing a “simple” one stop shop.

I’ve held a view for a while now that in order to go head to head against AWS, Cloud Service Providers don’t need to go out and produce 1000+ cloud services…rather they should keep things figuratively simple by focusing on core strengths and doing what they do really well…really well.

Maybe I lack the impartiality to comment on this, but when I look at the AWS services page I get overwhelmed…and while from a technical point of view I can work through the configuration steps and multiple add-on services, for small businesses looking to take their first steps into a hybrid cloud world AWS is not the panacea proclaimed by some. Even for small to large enterprises, the simple fact that AWS carries so much apparent choice should throw up some flags and be enough to make decision makers look at smaller, more streamlined offerings that deliver targeted solutions based on actual requirements.

AWS are massive…AWS are a juggernaut backed by seemingly endless research and development funding and enough scale to offer what appear to be cheaper services…and though they don’t market as much as Microsoft’s Azure, they are still front of mind for most when cloud is talked about. Smaller providers, such as the IaaS providers in the vCloud Air Network, can compete if they focus on delivering a smaller subset of products and services with quality and reliability in mind…in my eyes, that’s enough differentiation to compete.

So as a final thought…let’s not get caught up in what customers think they might need…but in what they actually require!

vCloud Air and Virtustream – Just kill vCloud Air Already?!?

I’ve been wanting to write some commentary around the vCloud Air and Virtustream merger since rumours of it surfaced just before VMworld in August, and I’ve certainly been more interested in the whole state of play since news of the EMC/VMware cloud services spin-off was announced in late October…the basis of this new entity is to try and get a stranglehold on the hybrid cloud market, which is widely known to make up the biggest chunk of the cloud market for the foreseeable future, topping $90 billion by 2020.

Below are some of the key points lifted from the Press Release:

  • EMC and VMware plan to form new cloud services business creating the industry’s most comprehensive hybrid cloud portfolio
  • Will incorporate and align cloud capabilities of EMC Information Infrastructure, Virtustream and VMware to provide the complete spectrum of on- and off-premises cloud offerings
  • The new cloud services business will be jointly owned 50:50 by VMware and EMC and will operate under the Virtustream brand led by CEO Rodney Rogers
  • Virtustream’s financial results to be consolidated into VMware financial statements beginning in Q1 2016
  • Virtustream is expected to generate multiple hundreds of millions of dollars in recurring revenue in 2016, focused on enterprise-centric cloud services, with an outlook to grow to a multi-billion business over the next several years
  • VMware will establish a Cloud Provider Software business unit incorporating existing VMware cloud management offerings and Virtustream’s software assets — including the xStream cloud management platform and others.

I’ve got a vested interest in the success or otherwise of vCloud Air as it directly impacts Zettagrid and the rest of the vCloud Air Network, as well as my current professional area of focus. However, I feel I am still able to provide levelled feedback when it comes to vCloud Air, and the time was finally right to comment after coming across the following LinkedIn post from Nitin Bahadur yesterday evening.

It grabbed my attention not only because of my participation in the vCloud Air Network, but also because the knives have been out for vCloud Air almost since before the service launched as vCloud Hybrid Service. The post itself from Nitin, though brief, suggested that VMware should further embrace its partnership with Google Cloud and just look to direct VMware Cloud customers onto the Google Cloud. The suggestion was based on letting VMware orchestrate workloads on Google while letting Google do what it’s best at…which was, surprisingly, infrastructure.

With that in mind I want to point out that vCloud Air is nowhere near the equal of AWS, Azure or Google in terms of total service offerings, but in my opinion it’s never been about trying to match those public cloud players’ platform services end to end. Where VMware (and by extension its service provider partners) does have an advantage is that VMware does infrastructure brilliantly and has the undisputed market share among hypervisor platforms, giving it a clear advantage when talking about the total addressable market for hybrid cloud services.

As businesses look to go through their natural hardware refresh cycles the current options are:

  • Acquire new compute and storage hardware for existing workloads (Private – CapEx)
  • Migrate VM workloads to a cloud based service (IaaS – OpEx)
  • Move some application workloads into modern Cloud Services (SaaS)
  • Move all workloads to cloud and have third parties provide all core business services (SaaS, PaaS)

Without going into too much detail around each option…at a higher level, vCloud Air and the vCloud Air Network have the advantage in that most businesses I come across are not ready to move into the cloud holistically, and for the next three to five years existing VM workloads will need a home as businesses work out a way to come to terms with an eventual move towards the next phase of cloud adoption, which is all about platform and software delivered in a cloud native way.

Another reason why vCloud Air and the Air Network are attractive is that migration and conversion of VMs is still problematic and a massive pain (in the you know what) for most businesses to contemplate undertaking…let alone spend additional capital on. A platform that offers the same underlying infrastructure as what’s already out there, which is what vCloud Air, the vCloud Air Network partners and Virtustream offer, should continue to do well, and there are enough ESXi based VMs out there to keep VMware based cloud providers busy for a while yet.

vCloud Air isn’t even close to being perfect and has a long way to go to even begin to catch up with the bigger players, and VMware/EMC/Dell might well choose to wrap it up, but my feeling is that would be a mistake…certainly it needs to evolve, but the platform has a great advantage and it, along with the vCloud Air Network, should be able to cash in.

In the next part I will look at what Virtustream brings to the table and how VMware can combine the best of both entities into a service that can and should do well over the next 3-5 years as the Cloud Market starts to mature and move into different territory leading into the next shift in cloud delivery.

References:

https://www.linkedin.com/pulse/should-vmware-kill-vcloud-air-instead-use-google-cloud-nitin-bahadur

http://www.crn.com/news/cloud/300077924/sources-vmware-cutting-back-on-vcloud-air-development-may-stop-work-on-new-features.htm

http://www.vmware.com/company/news/releases/vmw-newsfeed/EMC-and-VMware-Reveal-New-Cloud-Services-Business/2975020-manual?x-src=paidsearch_vcloudair_general_google_search_anz&kw=nsx%20+vcloud%20air&mt=b&gclid=Cj0KEQiAj8uyBRDawI3XhYqOy4gBEiQAl8BJbWUytu-I8GaYbrRGIiTuUQe9j6VTPMAmKJoqtUyCScAaAuGv8P8HAQ

http://www.marketsandmarkets.com/PressReleases/hybrid-cloud.asp

Ninefold: Going Head to Head with AWS and Using Opensource is Risky in Cloud Land

Today Ninefold (an Australian based IaaS and PaaS provider) announced that they were closing their doors and would be migrating their clients to their parent company’s (Macquarie Telecom) cloud services. And while this hasn’t come as a surprise to me…having closely watched Ninefold from its beta days through to its shutdown…it does highlight a couple of key points about the current state of play in public cloud in Australia and also around the world.

As a disclaimer…this post and the viewpoints given are totally my own, and I don’t pretend to understand the specific business decisions as to why Ninefold decided to shut their doors apart from what was written in the press today around the operational expenditure challenges of upgrading the existing platform.

“After an evaluation of the underlying technical platform, much consideration and deep reflection, we have decided not to embark on this journey,” the company said on Monday.

However, rather than have people simply assume that the IaaS/cloud game is too hard given the dominance of AWS, Azure and Google, I thought I’d write down some thoughts on why choosing the right underlying platform is key to any cloud’s success…especially when looking to compete with the big players.

Platform Reliability:

Ninefold had some significant outages in their early days…and when I say significant, I mean significant…we are talking days to weeks where customers couldn’t interact with or power on VM instances, to go along with other outages, all of which I was led to believe were due to their adoption of CloudStack and XenServer as their hypervisor platform. At the time I was investigating a number of Cloud Management Platforms and CloudStack had some horror bugs which ruled out any plans to go with it…I remember thinking how much prettier the interface was compared to the just released vCloud Director, but the list of show stopping bugs was enough to put me off proceeding.

Platform Choice:

CloudStack was eventually bought by Citrix and then given to the Apache Foundation, where it currently resides, but for mine, the damage those initial outages did to Ninefold’s reputation as an IaaS provider was never repaired, and throughout its history the company attempted to transform, firstly into a Ruby on Rails platform, and more recently by jumping on the containers bandwagon as well as trying to specialize in Storage as a Service.

This to me highlights a fairly well known belief in the industry that going open source may be cheap in the short term but is going to come back and bite you in some form later down the track. The fact that the statement on their closure focused mainly on the apparent cost of upgrading their platform (assuming a move to OpenStack or some other *stack based CMP) highlights that going with supported stacks such as VMware ESXi with vCloud Director, or even Microsoft Hyper-V with Azure, is a safer bet long term, as there are more direct upgrade paths version to version and official support when upgrading.

Competing against AWS Head On:

http://www.itnews.com.au/news/sydney-based-cloud-provides-price-challenge-247576

Macquarie Telecom subsidiary Ninefold launches next week, promising a Sydney-based public cloud computing service with an interface as seamless as those of Amazon’s EC2 or Microsoft’s Azure.

Ninefold from the early days touted themselves as the public cloud alternative. Their initial play was to attract Linux based workloads to their platform and offer very, very cheap pricing compared to the other IaaS providers at the time…they were also local in Australia before the likes of AWS and Azure set up shop here.

I’ve talked previously about what Cloud Service Providers should be offering when it comes to competing against the big public cloud players…offering a similar but smaller slice of the same services, targeting their bread and butter, will not work long term. Cloud providers need to add value to attract a different kind of client base to that of AWS and Azure…there is a large pie out there to be had, and I don’t believe we will be in a total duopoly for cloud services in the short to medium term, but cloud providers need to stop focusing on price and focus instead on the quality of their products and services.

Final Thoughts:

Ninefold obviously believed that they couldn’t compete on the value of their existing product set, and due to their initial choice of platform felt that upgrading to one that did allow some differentiation in the marketplace against the big public cloud players was not a viable option moving forward…hence their existing clients will be absorbed into a platform that does run a best of breed stack and one that doesn’t try to compete head to head with AWS…at least from the outside.

“Those tier two ISPs and managed services outfits standing up wannabe AWS clones cobbled together out of bits of Xen, OpenStack and cable ties? Roadkill. As the industry matures, smaller local players will find they can’t make it pay and go away. The survivors will move into roles as resellers and managed services providers who make public cloud easier for those who don’t like to get hands on with the big boys. This is happening already. By 2015 we’ll see exits from the cloud caper.”

http://www.zdnet.com/article/ninefold-to-shut-operations/

http://www.itnews.com.au/news/ninefold-to-shut-down-411312?utm_source=twitter&utm_medium=social&utm_campaign=itnews_autopost

http://www.crn.com.au/News/411313,macquarie-telecoms-ninefold-closing-down.aspx?utm_source=twitter&utm_medium=social&utm_campaign=crn_autopost

http://www.theregister.co.uk/2013/12/19/australias_new_year_tech_headlines_for_2015/

M$ Price Hike: Is the Race to the Bottom Over?

A couple of weeks ago Microsoft raised the prices of Azure, Office 365, CRM Online and other enterprise cloud services across Australia, Canada and Europe. In the Azure AU region prices increased a hefty 26%, and there has been a significant outcry from customers and partners alike. The reality is that for partners who resell Azure, margins just got lower, and most will have trouble passing the full 26% increase on to their customers.

Effective August 1, 2015, local prices for Azure and Azure Marketplace in Australian dollars will increase by 26% percent to more closely align with prices in most markets.

The reason given by Microsoft was to realign prices with the US region and adjust for the stronger US dollar; however, in a market where consumers are used to prices going down this was certainly a shock to the system and very much unexpected. Notwithstanding the fact this is Microsoft we are talking about (a company with a long history of screwing their partners)…the message I get out of this price rise is that we might have reached a turning point in the race to the bottom that has been a featured tactic of the big public cloud providers since Azure and Google came into the market to combat Amazon’s dominance.

Since 2011 there have been punches and counter punches between all players trying to drive down prices to entice consumers of cloud services. In 2013 I wrote about the online storage wars and what cheaper per GB pricing meant for the average service provider…at the time it was companies like Dropbox and Mega contributing to the race to sub-cent storage, and in the two years that have followed, AWS, Azure and others have continued to slash the cost of compute and storage.

“We will continue to drive AWS prices down, even without any competitive pressure to do so,” asserted Amazon CTO Werner Vogels

With the big players driving down prices, smaller providers needed to follow to remain competitive…but in remaining competitive many providers risked becoming unviable. Without scale it’s impossible to drive a return on investment…and without that, some smaller providers have been forced to close down or sell off. In truth, the Microsoft move to raise prices should give fledgling service providers hope…there is value in the services offered, and customers should be prepared to pay for quality services. Customers need to understand value and what it means to pay for quality.

Following on from my post a couple of weeks ago around The Reality of Cloud Outages…a continued race to the bottom will, in my opinion, only mean more risk being built into service provider cloud design and architecture…something has to give when it comes to cost versus quality, and those providers that don’t have the scale of the big players don’t have a hope in hell of providing long term viable services.

So while I may be jumping the gun a little in reading the recent price hikes as an end to the race to the bottom…it should definitely give smaller providers confidence to keep pricing relatively stable and focus on continuing to deliver value by way of strong products and services.

Hopefully from this point forward prices will be governed by technical market forces driven by improved compute and storage densities, rather than by the monopoly-like forces we had become accustomed to.

References:

http://www.crn.com/news/cloud/240153051/are-cloud-prices-becoming-a-race-to-the-bottom.htm

http://www.aidanfinn.com/2015/06/pricing-for-azure-in-the-euro-zone-to-increase-by-13/

http://www.zdnet.com/article/azure-office-365-and-more-microsoft-cloud-price-increases-on-deck-for-august-1/

http://www.theregister.co.uk/2015/04/22/google_vs_aws_race_to_the_bottom_detours_into_super_ssd_spring_sale/

