Tag Archives: Azure

Looking Beyond the Hyper-Scaler Clouds – Don’t Forget the Little Guys!

I’ve been on the road over the past couple of weeks presenting to Veeam’s VCSP partners and prospective partners here in Australia and New Zealand on Veeam’s cloud business. Apart from the great feedback in response to what Veeam is doing by way of our cloud story, I’ve had good conversations around public cloud and infrastructure providers versus the likes of Azure or AWS. Coming from my background working for smaller, but very successful, service providers I found it almost astonishing that smaller resellers and MSPs seem to be leveraging the hyper-scale clouds without giving the smaller providers a look in.

On the one hand, I understand why people would choose to look to Azure, AWS and the like to run their client services…while on the other hand I believe that the marketing power of the hyper-scalers has left the capabilities and reputation of smaller providers short-changed. You only need to look at last week’s AWS outage and previous Azure outages to understand that no cloud is immune to outages, and it’s misguided to assume that the hyper-scalers offer any better reliability or uptime than providers in the vCloud Air Network or other IaaS providers out there.

That said, there is no doubt that the scale and brain power that sits behind the hyper-scalers ensures a level of service and reliability that some smaller providers will struggle to match, but as was the case last week…the bigger they are, the harder they fall. The other thing that comes with scale is the ability to drive down prices, and again, there seems to be a misconception that the hyper-scalers are cheaper than smaller service providers. In fact, most of the conversations I had last week about why Azure or AWS was chosen came down to pricing and kickbacks. Certainly in Azure’s case, Microsoft has thrown a lot into making sure customers on EAs have enough free service credits to drive uptake, and there are apparently nice sign-up bonuses on offer to partners.

During one of those conversations, I asked the reseller why they hadn’t looked at some of the local VCSP/vCAN providers as options for hosting their Veeam infrastructure for clients to back up workloads to. Their response was that it was never a consideration due to Microsoft being…well…Microsoft. The marketing juggernaut was too strong…the kickbacks too attractive. After talking to him for a few minutes I convinced him to take a look at the local providers, who offer, in my opinion, more flexible and more diverse service offerings for the use case.

Not surprisingly, in most cases money is the number one factor in a lot of these decisions, with service uptime and reliability coming in as an important afterthought…but an afterthought nonetheless. I’ve already written about service uptime and reliability in regard to cloud outages before, but the main point of this post is to highlight that resellers and MSPs can make as much money…if not more…with smaller service providers. It’s common now for service providers to offer partner reseller or channel programs that give the partner decent recurring revenue streams from the services consumed, and the more that’s consumed, the more you make by way of program-level incentives.

I’m not going to do the sums, because there is so much variation between the different programs, but for those reading who have not considered using smaller providers over the likes of Azure or AWS, I would encourage you to look through the VCSP Service Provider directory and the vCloud Air Network directory and locate local providers. From there, enquire about their partner reseller or channel programs…there is money to be made. Veeam (and VMware with the vCAN) put a lot of trust and effort into our VCSPs, and having worked for some of the best and knowing a lot of other service provider offerings, I can tell you that if you are not looking at them as a viable option for your cloud services then you are not doing yourself justice.

The cloud hyper-scalers are far from the panacea they claim to be…if anything, it’s worthwhile spreading your workloads across multiple clouds to ensure the best availability experience for your clients…however, don’t forget the little guys!

VMworld 2016: Cross Cloud Platform – Raw Thoughts

I’m still trying to process the VMworld 2016 Day 1 Keynote in my mind…trying to make sense of the mixed messages that myself and others took away from the 90 minute opening. Before I continue, I’ll point out that this is going to be a raw post with opinions that are purely driven by what I saw and heard during the keynote…I haven’t had much time to validate my thoughts, although from my brief discussions with others here at the conference (and on Twitter) it’s clear that the Cross Cloud migration tech preview is an attempt by VMware to cater to the masses. I’ll explain below why that’s both a good and a bad thing, and why the vCloud Air Network should be rightly miffed about what we saw demoed on stage.

Yesterday’s opening was all about Pat trying to make sure that everyone listening understood that VMware is still cool and relevant. The be_tomorrow message was lost for me amid the overall theme that VMware has grown up and matured but is still capable of producing teen-like excitement through cool and hip technologies. If there was ever a direct reaction to the disruptive competitors VMware has had to deal with (looking at you, Nutanix) then this was corporate’s attempt to mitigate that threat. I’m not sure it worked, but did it really need to be done when you are effectively preaching to the converted?

Pat Gelsinger used his keynote to introduce the VMware® Cross-Cloud Architecture™. This is a game-changing new architecture that, as he says, “will enable customers to run, manage, connect, and secure applications across clouds and devices in a common operating environment.”

During the first part of the keynote things were looking good for the vCAN, with vCloud Air not getting much of a mention next to the strong growth in the vCAN shown on stage. Pat then talked through trends in public and private clouds, which led into the messaging that hybrid cloud is the way of the future…no one cloud will rule them all. This isn’t new messaging and I agree 100% that there is a place in the world for all types of clouds, from the hyper-scalers through to the smaller but more agile IaaS providers and managed private clouds.

AWSworld? – vCloud Air Network Concerns:

The second part of the keynote was where things got a little confusing for me. We saw two demos of the Cross Cloud Architecture in tech preview. Let me start by saying that the UI looked consistent and modern and even managed to integrate vRealize Network Insight (Arkin) seamlessly, and the NSX network extension is a brilliant step forward in being able to extend cloud networks from on-premises to public clouds to vCAN service providers.

Where things got a little awkward for me was when the demo of the Cross Cloud management console went through managing services and instances on AWS and Azure…without any mention, example or listing of any vCAN service provider. Notwithstanding the focus on the growing partnership with IBM SoftLayer in the new Cloud Foundation ecosystem, which naturally competes directly against vCAN service providers, the specific focus on AWS made a lot of providers uneasy.

Now, I understand that the vCAN can’t do everything and there is an existing and future sense of inevitability around clients using more hyper-scale cloud services…but here is why I found this to be a bit of a slap in the face to the 4000+ strong vCAN. If you are going to demo the use of cross cloud, why not focus on what the hyper-scalers do best, that is, PaaS? Don’t demo creating traditional workload instances on AWS and then moving them to Azure.

Again, this is a raw post and I do need to digest this a little more…I will follow up with a more in-depth post, and make no mistake, I do see value in the tool…but it does nothing to build and grow the vCAN…and that is the sore point at this point in time.

Azure Stack – Microsoft’s White Elephant?

Microsoft’s Worldwide Partner Conference is currently on again in Toronto, and even though my career has diverged from working on the Microsoft stack (no pun intended) over the past four or five years, I still attend the local Microsoft SPLA monthly meetings where possible and keep a keen eye on what Microsoft is doing in the cloud and hosting spaces.

The concept of Azure Stack has been around for a while now, and it entered Technical Preview early this year. Azure Stack was, and is, touted as an easily deployable end-to-end solution that gives enterprises Azure-like flexibility on premises, covering IaaS, PaaS and containers. The premise of the solution is solid, and Microsoft obviously sees an opportunity to cash in on the private and hybrid cloud market that, at the moment, hasn’t been locked down by any one vendor or solution. The end goal, though, is for Microsoft to have workloads that are easily transportable into the Azure cloud.

Azure Stack is Microsoft’s emerging solution for enabling organizations to deploy private Azure cloud environments on-premises. During his Day 2 keynote presentation at the Worldwide Partner Conference (WPC) in Toronto, Scott Guthrie, head of Microsoft’s Cloud and Enterprise Group, touted Azure Stack as a key differentiator for Microsoft compared to other cloud providers.

The news overnight at WPC is that, apart from the delay in its release (which wasn’t unexpected given the delays in Windows Server 2016), Microsoft has now said that Azure Stack will only be available via pre-validated hardware partners, which means customers can’t deploy the solution themselves and the stack loses flexibility.

Microsoft’s Mike Neil said the move is in response to feedback from customers who have said they don’t want to deal with the complexities and downtime of doing the deployments themselves. To that end, Microsoft is making Azure Stack available only through pre-validated hardware partners, instead of releasing it as a solution that customers can deploy, manage and customize.

This is an interesting, and in my opinion risky, move by Microsoft. There is precedent to suggest that going down this path leads to less market penetration and could turn Azure Stack into the white elephant that I suggested in a tweet and in the title of this post. You only have to look at how much of a failure VMware’s EVO:RAIL product was to understand the risks of tying a platform to vendor-specific hardware and support. Effectively they are now creating a converged infrastructure stack with Azure bolted on, whereas before there was absolute freedom in enterprises being able to deploy Azure Stack onto existing hardware, allowing them to realise value from existing investments and extend them to provide private cloud services.

As with EVO:RAIL and other Validated Designs, I see three key areas where they suffer and impact customer adoption.

Validated Design Equals Cost:

If I take EVO:RAIL as an example, there was a premium placed on obtaining the stack through the validated vendors, and this meant paying well over what could have been sourced independently once you took hardware, software and support costs into account. Potentially this will be the same for Azure Stack…vendors will add their percentage for the validated design, plus ongoing maintenance. As mentioned above, there is also now the fact that you must buy new hardware (compute, network, storage), meaning any existing hardware that can and should be used for private cloud is effectively dead weight, and enterprises need to rethink existing investments over the long term.

Validated Design Equals Inherent Complexity:

When a vendor takes something in-house and doesn’t let smart technical people deploy the solution, my mind starts to ask why. I understand the argument will be that Microsoft wants a consistent experience for Azure Stack, and there are other examples of controlled deployments and tight solutions (VMware NSX in its early days comes to mind), but when the market you are trying to break into is built on the premise of reduced complexity, only allowing certain hardware and partners to run and deploy your software walks a fine line between being truly consumable and being a black box. I’ve talked about Complex Simplicity before, and this move suggests that Azure Stack was not ready or able to be given to techs to install, configure and manage.

Validated Design Equals Inflexibility:

Both of the points above lead into the suggestion that Azure Stack loses its flexibility. Flexibility in the private and hybrid cloud world is paramount, and existing players like OpenStack are extremely flexible…almost to a fault. If you buy from a vendor you lose the flexibility of choice and can then be impacted at will by cost pressures relating to maintenance and support. If Azure Stack is too complex to be self-managed then it certainly loses the flexibility to be used in the service provider space…let alone the enterprise.

Final Thoughts:

Worryingly, the tone of the official blog announcement of the delay suggests that Microsoft is reaching to justify the delay and the reasoning for going down this different distribution model. You only have to read the first few comments on the blog post to see that I am not alone in my thoughts.

Microsoft is committed to ensuring hardware choice and flexibility for customers and partners. To that end we are working closely with the largest systems vendors – Dell, HPE, Lenovo to start with – to co-engineer integrated systems for production environments. We are targeting the general availability release of Azure Stack, via integrated systems with our partners, starting mid-CY2017. Our goal is to democratize the cloud model by enabling it for the broadest set of use-cases possible.


With the release of Azure Stack now 12+ months away, Microsoft still has the opportunity to change the perception that, in my mind, the WPC 2016 announcements have created. The point of private cloud is to drive operational efficiency in all areas. Having a fancy interface with all the technical trimmings isn’t what will make an on-premises stack gain mainstream adoption. Flexibility, cost and reduced complexity are what count.

References:

https://azure.microsoft.com/en-us/blog/microsoft-azure-stack-delivering-cloud-infrastructure-as-integrated-systems/

https://rcpmag.com/articles/2016/07/12/wpc-2016-microsoft-delays-azure-stack.aspx

http://www.zdnet.com/article/microsoft-to-release-azure-stack-as-an-appliance-in-mid-2017/

http://www.techworld.com.au/article/603302/microsoft-delays-its-azure-stack-software-until-mid-2017/

The Reality of Cloud – Outages are Like *holes…

It’s been a bad couple of weeks for cloud services, both around the world and locally. Over the last three days we have seen AWS have issues, which may have been indirectly related to the leap second on Tuesday night, and this morning Azure’s Sydney zone had serious network connectivity issues that disrupted services for approximately three to four hours.

Closer to home, Zettagrid had a partial outage in our Sydney zone last Wednesday morning which impacted a small subset of client VMs and services, and this was on the back of a major (unnamed) provider in Europe being down for a number of days, as pointed out in the blog post by Massimo Re Ferre’ linked below.

http://it20.info/2015/06/iaas-cloud-outages-get-over-it/ 

Massimo struck a chord with me, and as the title of his blog post suggests, it’s time for consumers of public cloud services to get over outages and understand that when it comes to cloud and IaaS…outages will happen.

When you hear someone saying “I moved to the cloud because I didn’t want to experience downtime” it is fairly clear to me that you either have been heavily misinformed or you misunderstood what the benefits of a (IaaS) public cloud are

Regardless of whether you are a juggernaut like Amazon, Microsoft or Google…or one of the smaller service providers…the reality of cloud services is that outages are a fact of life. Even SaaS-based applications are susceptible to outages, and it must be understood that there is no magic in the architecture of cloud platforms. While every effort goes into ensuring availability and resiliency, Massimo sums it up well below.

Put it in (yet) another way: a properly designed public cloud is not intrinsically more reliable than a properly designed Enterprise data center (assuming like for like IT budgets).

That is because sh*t happens…

The reality of what can be done to prevent service disruption is for consumers of cloud services to look beyond the infrastructure and think more about the application. This message isn’t new, and the methods undertaken by larger companies when deploying business-critical services and applications are starting to change…however, not every company can be a Netflix or a Facebook, so breaking it down to a level that’s achievable for most…the question is:

How can everyday consumers of cloud services architect applications to work around the inevitable system outage?

  1. Think about a multi-cloud or hybrid cloud strategy
  2. Look for Cloud Service Providers that have multiple Availability Zones
  3. Make sure that the Availability Zones are independent of one another
  4. Design and deploy business critical applications across multiple Zones (see the sketch below)
  5. Watch out for Single Points of Failure within Availability Zones
  6. Employ solid backup and recovery strategies
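To make points 2 through 4 a little more concrete, here is a minimal Python sketch of zone-aware failover at the application level. It assumes the same application is deployed into two independent availability zones, each exposing a health-check endpoint; the hostnames and the /health path are purely illustrative.

import urllib.request

# Hypothetical health-check endpoints for the same application deployed
# into two independent availability zones (the hostnames are illustrative).
ENDPOINTS = [
    "https://app.zone-a.example.com/health",
    "https://app.zone-b.example.com/health",
]

def first_healthy_endpoint(endpoints, timeout=3):
    """Return the first endpoint that answers its health check, else None."""
    for url in endpoints:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                if resp.status == 200:
                    return url
        except OSError:
            # Zone unreachable or timed out...fall through to the next zone.
            continue
    return None

active = first_healthy_endpoint(ENDPOINTS)
print("Routing traffic to:", active or "no healthy zone available!")

In production this logic usually lives in a global load balancer or DNS failover service rather than in the client, but the principle is the same: never assume a single zone is up.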

The key to the points above is to not put all your eggs in one basket and then cry foul when that basket breaks. Do not become complacent because Cloud Service Providers guarantee a certain level of system uptime through SLAs, and then act surprised when an outage occurs. Most providers worth their salt do offer separate availability zones…but it’s very much up to the people designing and building services upon Service Provider clouds to ensure those services are built to take advantage of that fact…you can’t come in stamping your feet and crying foul when the resources placed at your disposal to ensure application and service continuity have not been taken advantage of.
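A quick bit of back-of-the-envelope math shows why independent zones matter so much. The figures below are purely illustrative: a hypothetical 99.9% per-zone SLA, and the assumption that zone failures are truly independent (which real-world correlated outages can break).

# Hypothetical per-zone availability of 99.9%; with truly independent
# zones, total unavailability is the product of each zone's unavailability.
def combined_availability(per_zone, zones):
    return 1 - (1 - per_zone) ** zones

for zones in (1, 2):
    avail = combined_availability(0.999, zones)
    downtime_min = (1 - avail) * 365 * 24 * 60
    print(f"{zones} zone(s): {avail:.4%} available, ~{downtime_min:.0f} min downtime/year")

One zone at 99.9% still allows roughly 526 minutes of downtime a year; two independent zones bring that down to about half a minute. Neither is 100%, which is exactly the point of the next paragraph.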

Do not plan for 100% uptime…it does not exist! Anyone who tries to tell you otherwise is lying! You only have to search online to see that outages are indeed like assholes…everyone has them!

References:

http://au.pcmag.com/internet-products/35269/news/aws-outage-takes-down-netflix-pinterest

http://it20.info/2015/06/iaas-cloud-outages-get-over-it/

https://downdetector.com/status/aws-amazon-web-services/news/71670-problems-at-amazon-web-services