Tag Archives: Cloud

Hybrid World… Why IBM buying RedHat makes sense!

As Red October came to a close…at a time when US tech stocks were taking their biggest battering in a long time, the news came out over the weekend that IBM had acquired RedHat for 34 billion dollars! This seems to have taken the tech world by surprise…the all-cash deal represents a massive 63% premium on the previous close of RedHat’s stock price…all in all it seems ludicrous.

Most people I’ve talked to about it, along with the comments I’ve read on social media and blog sites, suggest that the deal is horrible for the industry…but I feel this is more a reaction to IBM than anything. IBM has a reputation for swallowing up companies whole and spitting them out the other side of the merger process as a shell of what they once were. There has also been a lot of sympathy for the employees of RedHat, especially from ex-IBM employees who have experience of life inside the Big Blue machine.

I’m no expert on M&A and I don’t pretend to understand the mechanics behind the deal and what is involved…but when I look at what RedHat has in its stable, I can see why IBM have made such an aggressive play for them. On the surface it seems like IBM are in trouble with their stock price and market capitalization falling nearly 20% this year and more than 30% in the last five years…they had to make a big move!

IBM’s previous 2013 acquisition of SoftLayer (for a measly 2 billion USD) helped them remain competitive in the Infrastructure as a Service space and, if you believe the stories, they have done very well out of integrating the SoftLayer platform into what was BlueMix and is now IBM Cloud. This 2013 Forbes article on the acquisition sheds some light on why this RedHat acquisition makes sense and is true to form for IBM.

IBM sees the shift of big companies moving to the cloud as a 20-year trend…

That was five years ago…and since then a lot has happened in the Cloud world. Hybrid cloud is now the accepted route to market with a mix of on-premises, IaaS and PaaS hosted and hyper-scale public cloud services being the norm. There is no one cloud to rule them all! And even though AWS and Azure continue to dominate and be front of mind there is still a lot of choice out there when it comes to how companies want to consume their cloud services.

Looking at RedHat’s stable and taking away the obvious Linux distros that are both enterprise and open source, the real sweet spot of the deal lies in RedHat’s products that contribute to hybrid cloud.

I’ve heard a lot more noise of late about RedHat OpenStack becoming the platform of choice as companies look to transform away from more traditional VMware/Hyper-V based platforms. RedHat OpenShift is also being considered as an enterprise-ready platform for containerization of workloads. Some sectors of the industry (government and universities) have already decided on their move to platforms that are backed by RedHat…the one thing I would comment here is that there was an upside to that which might now be clouded by IBM being in the mix.

Rounding out the stable, RedHat have a Cloud Suite which encompasses most of the products mentioned above: CloudForms for Infrastructure as Code, Ansible for orchestration, RedHat Virtualization, OpenStack and OpenShift…it’s a decent proposition!

Put all that together with the current services of IBM Cloud and you start to have a compelling portfolio covering almost all desired aspects of hybrid and multi-cloud service offerings. If the acquisition of SoftLayer was the start of a 20-year trend then IBM are trying to keep themselves positioned ahead of the curve and very much in step with the next evolution of that trend. That isn’t to say that they are not playing catch-up with the likes of VMware, Microsoft, Amazon, Google and the like, but I truly believe that if they don’t butcher this deal they will come out a lot stronger and, more importantly, offer valid competition in the market…that can only be a good thing!

As for what it means for RedHat itself, their employees and culture…that I don’t know.

References:

https://www.redhat.com/en/about/press-releases/ibm-acquire-red-hat-completely-changing-cloud-landscape-and-becoming-world%E2%80%99s-1-hybrid-cloud-provider

IBM sees the shift of big companies moving to the cloud as a 20-year trend

First Look – Zenko, Multi-Platform Data Replication and Management

A couple of weeks ago I stumbled upon Zenko via a LinkedIn post. I was interested in what it had to offer and decided to go and have a deeper look. With Veeam launching our vision to be the leader of intelligent data management at VeeamON this year, I have been on the lookout for solutions that do smart things with data and address the need to control the accelerated spread and sprawl of that data. Zenko looks to be on the right track with its notion of freedom to avoid being locked into a specific cloud platform, whether it’s private or public.

Having come from service provider land I have always been against the idea of a Hyper-Scaler Public Cloud monopoly that forces lock-in and diminishes choice. Because of that, I gravitated to Zenko’s mission statement:

We believe that everyone should be in control of their data. Zenko’s mission is to allow everyone to be in control of their data, while leveraging the efficiency of private and public clouds.

This platform looks to do data mobility across multiple cloud platforms through common communication protocols and by sharing a common set of APIs to manage its data sets. Zenko is focused on achieving this multi-cloud capability through a unified AWS S3 API based service, with data management and federated search capabilities driving its use cases. Data mobility between clouds, whether private or public cloud services, is what Zenko is aimed at.

Zenko Orbit:

Zenko Orbit is the cloud portal for data placement, workflows and global search. Aimed at application developers and “DevOps” teams, the premise of Zenko Orbit is that they can spend less time learning multiple interfaces for different clouds while leveraging the power of cloud storage and data management services without needing to be an expert across different platforms.

Orbit provides an easy way to create replication workflows between different cloud storage platforms…whether it be Amazon S3, Azure Blob, GCP Storage or others. You then have the ability to search across a global namespace for system and user-defined metadata.

Quick Walkthrough:

Given this is open source, you have the option to download and install a Zenko instance which will then be registered against the Orbit cloud portal, or you can pull the whole stack from GitHub. They also host a sandboxed instance that can be used to take the system for a test drive.

Once done, you are presented with a Dashboard that gives you an overview of the amount of data and other metrics contained in your instance. Looking at the Settings area you are given details about the instance, account details and the endpoints to connect to. They also offer the ability to download pre-generated Cyberduck profiles.

You need to create a storage management account to be able to browse your buckets in the Orbit portal.

Once that’s been done you can create a bucket and select a location which in the sandbox defaults to AWS us-east-1.

From here, you can add a new storage location and configure the replication policy. For this, I created a new Azure Blob Storage account as shown below.

From the Orbit menu, I then added a New Storage Location.

Once the location has been added you can configure the bucket replication. This is the cool part that is the premise of the platform: being able to set up policies to replicate data across multiple cloud platforms. In the sandbox the policy is one-way, meaning there is no bi-directional replication. Simply select the source, the destination and the bucket from the menu.

Once that has been done you can connect to the endpoint and upload files. I tested this out with the setup above and it worked as advertised. Using the Cyberduck profile I connected in, uploaded some files and monitored the Azure Blob storage end for the files to replicate.
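
Because Zenko fronts everything with an S3-compatible API, you don’t have to use Cyberduck…any standard S3 client should be able to talk to the endpoint shown in the Settings area. Below is a minimal sketch using Python and boto3; the endpoint URL, keys and bucket name are placeholders for the values from your own Orbit instance, not real ones.

```python
# Minimal sketch: uploading to a Zenko bucket via its S3-compatible API using boto3.
# The endpoint URL, credentials and bucket name are placeholders for the values shown
# in the Settings/account pages of your own instance.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://zenko.example.com",   # hypothetical Zenko endpoint
    aws_access_key_id="ZENKO_ACCESS_KEY",       # from the Orbit storage account
    aws_secret_access_key="ZENKO_SECRET_KEY",
)

# Upload a test file; if a replication policy is attached to the bucket,
# Zenko should take care of copying the object to the destination cloud.
s3.upload_file("test-file.txt", "my-replicated-bucket", "test-file.txt")

# List the bucket contents to confirm the upload landed.
for obj in s3.list_objects_v2(Bucket="my-replicated-bucket").get("Contents", []):
    print(obj["Key"], obj["Size"])
```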

Conclusion: 

While you could say that Zenko feels like DFS-R for the multi-platform storage world, the solution has impressed me. Many would know that it’s not easy to orchestrate the replication of data between different platforms. They are also talking up their capabilities around extensibility of the platform as it relates to data management, backend storage plugins and search.

I think about this sort of technology and how it could be extended to cloud-based backups. Customers could have the option to tier into cheaper cloud-based storage and then further protect that data by replicating it to another cloud platform which could be cheaper yet. This could achieve added resiliency while offering cost benefits. However, there is also the risk that the more spread out the data is, the harder it is to control. That’s where intelligent data management comes into play…interesting times!

References:

Zenko Orbit – Multi-Cloud Data Management Simplified

 

Quick Look – Backing up AWS Workloads with Cloud Protection Manager from N2WS

Earlier this year Veeam acquired N2WS, following the announcement of a technology partnership at VeeamON 2017. The more I tinker with Cloud Protection Manager the more I understand why we made the acquisition. N2WS was founded in 2012, with their first product shipping in 2013. It is purpose-built for AWS, supporting all types of EC2 instances, EBS volumes, RDS, DynamoDB and Redshift as well as AMI creation, and is distributed as an AMI through the AWS Marketplace. The product is easy to deploy and has extended its feature set with the release of 2.3d, announced during VeeamON 2018 a couple of weeks ago.

From the datasheet:

Cloud Protection Manager (CPM) is an enterprise-class backup, recovery, and disaster recovery solution purpose-built for Amazon Web Services EC2 environments. CPM enhances AWS data protection with automated and flexible backup policies, application consistent backups, 1-click instant recovery, and disaster recovery to other AWS region or AWS accounts ensuring cloud resiliency for the largest production AWS environment. By extending and enhancing native AWS capabilities, CPM protects the valuable data and mission-critical applications in the AWS cloud.

In this post, I wanted to show how easy it is to deploy and install Cloud Protection Manager as well as look at some of the new features in the 2.3d release. I will do a follow-up post going into more detail about how to protect AWS instances and services with CPM.

What’s new with CPM 2.3:

  • Automated backup for Amazon DynamoDB: CPM now provides backup and recovery for Amazon DynamoDB. You can apply existing policies and schedules to back up and restore DynamoDB tables and metadata.
  • RESTful API: Completely automate backup and recovery operations with the new Cloud Protection Manager API. This feature provides seamless integration between CPM and other applications (a hedged sketch of driving such an API from a script follows this list).
  • Enhanced reporting features: Enhancements include the ability to gather all reports in one tab, export them as CSV, view both protected and unprotected resources, and use new filtering options.
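
To give a feel for what the new RESTful API enables, here is a very rough sketch of calling it from a script. This is not lifted from the CPM documentation…the base URL, path and authentication header are hypothetical placeholders, so check the N2WS API reference for the real endpoints and auth scheme.

```python
# Hedged sketch only: the host, API path and auth header below are hypothetical
# placeholders to illustrate driving a REST API from a script; consult the N2WS
# CPM API documentation for the actual endpoints and authentication mechanism.
import requests

CPM_HOST = "https://cpm.example.internal"   # your CPM instance (placeholder)
API_TOKEN = "REPLACE_ME"                    # placeholder credential

headers = {"Authorization": f"Bearer {API_TOKEN}"}

# Example: list configured backup policies (hypothetical path).
resp = requests.get(f"{CPM_HOST}/api/policies", headers=headers)
resp.raise_for_status()
for policy in resp.json():
    print(policy)
```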

Other new features that come as part of the CPM 2.3 release include full cross-region and cross-account disaster recovery for Aurora databases, enhanced permissions for users and a fast and efficient onboarding process using CloudFormation’s 1-click template.

Installing, Configuring and Managing CPM:

The process to install Cloud Protection Manager from the AWS Marketplace is seamless and can be done via a couple of different methods, including a 1-Click deployment. The official install guide can be read here. The CPM EC2 instance is deployed into a new or existing VPC configured with a subnet and must be put into an existing or new Security Group.

Once deployed you are given the details of the installation.

And you can see it from the AWS Console under the EC2 instances. I’ve added a name for the instance just for clarity’s sake.

One thing to note is that there is no public IP assigned to the instance as part of the deployment. You can create a new Elastic IP and attach it to the instance, or you can access the configuration website via its internal IP if you have access to the subnet via some form of VPN or network extension.

There is an initial configuration wizard that guides you through the registration and setup of CPM. Note that you do need internet connectivity to complete the process otherwise you will get this error.

The final step will allow you to configure a volume for CPM use. With that the wizard finalises the setup and you can log into the Cloud Protection Manager.

Conclusion: 

The ability to back up AWS services natively has its advantages over traditional methods such as agents. Cloud Protection Manager from N2WS can be installed and ready to go within 5 minutes. In the next post, I’ll walk through the CPM interface and show how to back up and recover AWS instances and services.

References:

https://n2ws.com/cpm-install-guide

https://support.n2ws.com/portal/kb/articles/release-notes-for-the-latest-v2-3-x-cpm-release

Public Cloud and Infrastructure as Code…The Good and the Bad all in One Day!

I’m OK with admitting that I am still learning as I progress through my career, and I’m OK with admitting when things go wrong. Learning from mistakes is a crucial part of the process…and I learnt a harsh lesson today! Infrastructure as Code is as dangerous as it is awesome…and the public cloud is an unforgiving place!

Earlier today I created a new GitHub repository for a project I’ve been working on. Before I realised my mistake I had uploaded a Terraform variables file with my AWS Access and Secret Key. I picked up on this probably two minutes after I pushed the contents up to the public repository. Roughly five minutes later I deleted the repository and was about to start fresh without the credentials, but then realised that my Terraform plan was failing with a credential error.

I logged into the AWS Console and saw that my main VPC and EC2 instances had been terminated and that there were 20 new instances in their place. I knew exactly at that point what had happened! I’d been compromised and I had handed over the keys on a silver web scraper platter.

My access key had been deleted and new ones created, along with VPCs and Key Pairs in every single AWS region across the world. I deleted the new access key the malicious user created, locking them out from doing any more damage; however, in the space of ten minutes 240 EC2 instances in total were spun up. This was a little more than the twenty I thought I had dealt with initially…costing only $4.50…amazing!
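
In hindsight, most of that clean-up can be scripted. The sketch below is not what I actually ran at the time, and it is only sensible against a throwaway demo account like mine…it sweeps every region with boto3 looking for rogue instances and key pairs.

```python
# Rough sketch (not the exact clean-up I ran): sweep every AWS region for rogue
# EC2 instances and key pairs after a credential compromise.
import boto3

session = boto3.Session()  # assumes your *new*, uncompromised credentials are configured
regions = [
    r["RegionName"]
    for r in session.client("ec2", region_name="us-east-1").describe_regions()["Regions"]
]

for region in regions:
    ec2 = session.client("ec2", region_name=region)

    # Terminate anything running or pending in this region (fine for a demo account
    # that should be empty; do NOT run this blindly against a production account).
    instance_ids = [
        i["InstanceId"]
        for res in ec2.describe_instances(
            Filters=[{"Name": "instance-state-name", "Values": ["running", "pending"]}]
        )["Reservations"]
        for i in res["Instances"]
    ]
    if instance_ids:
        print(f"{region}: terminating {len(instance_ids)} instances")
        ec2.terminate_instances(InstanceIds=instance_ids)

    # Remove the key pairs the attacker created.
    for kp in ec2.describe_key_pairs()["KeyPairs"]:
        print(f"{region}: deleting key pair {kp['KeyName']}")
        ec2.delete_key_pair(KeyName=kp["KeyName"])
```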

I contacted AWS support and let them know what happened. To their credit (and to my surprise) I had a call back within a few hours. Meanwhile they automatically restricted my account until I had satisfied a series of clean-up steps so as to limit any more potential damage. The billing will be reversed as well, so I am a little less in a panic when I see my current month’s breakdown.

The Bad Side of Infrastructure as Code and Public Cloud:

This example shows how dangerous the world we are living in can be. With AWS and the like providing brilliant API access into their provisioning platforms, malicious users have seen an opportunity to use Infrastructure as Code as a way to spin up cloud resources in a matter of seconds. All they need is an in. And in my case, that in was a moment of stupidity…and even though I realised what I had done, all it took was less than five minutes for them to take advantage of my lack of concentration and exploit my security lapse. They also exploited the fact that I am new to this space and had not learnt best practice for storing credentials.
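
The fix is simple enough: keep credentials out of the repository entirely and let the tooling resolve them at runtime. The sketch below shows the idea in Python using boto3’s default credential chain; Terraform’s AWS provider honours the same environment variables and shared credentials file, so nothing sensitive ever needs to sit in a variables file.

```python
# Minimal sketch: keep credentials out of the repository entirely and let the SDK
# resolve them from the environment (AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY),
# the shared credentials file (~/.aws/credentials) or an instance profile.
# Terraform's AWS provider reads the same sources.
import boto3

# No keys hard-coded anywhere: boto3 falls back to the default credential chain.
ec2 = boto3.client("ec2", region_name="ap-southeast-2")

# Quick sanity check that the resolved credentials actually work.
states = [
    i["State"]["Name"]
    for r in ec2.describe_instances()["Reservations"]
    for i in r["Instances"]
]
print(states)
```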

I was lucky that everything I had in AWS was just there for demo purposes and I had nothing of real importance there. However, if this happened to someone running business-critical applications they would be in for a very, very bad day. Everything was wiped! Even the backup software I had running in there using local snapshots…if ever there was a case for offsite copies, this was it! (Ergo – Veeam Agents and N2WS)

The Good Side of Infrastructure as Code and Public Cloud:

What good could come of this? Well, apart from learning a little more about Terraform and how to store credentials, the awesome part was that, thanks to all the work I had put in over the past couple of weeks getting started with Infrastructure as Code and Terraform, I was able to reprovision everything I lost within five minutes…once my account restriction was lifted.

That’s the power of APIs and the applications that take advantage of them. And even though I copped a slap in the face today…I’m converted. This stuff is cool! We just need to be aware of the dangers that come with it and the fact that the coolness can be used and exploited in the wrong way as well.

VMware Cloud Briefing Roundup – VMware Cloud on AWS and other Updates

VMware held its first ever VMware Cloud Briefing today. This is an online, global event with an agenda featuring a keynote from Pat Gelsinger, new announcements and demos relating to VMware Cloud as well as discussions on cloud trends and market momentum. Key to the messaging is the fact that applications are driving cloud initiatives, whether that be via delivering new SaaS or cloud applications, extending networks beyond traditional barriers or modernizing the datacenter.

The VMware Cloud is looking like a complete vision at this point and the graphic below highlights that fact. There are multiple partners offering VMware based Cloud Infrastructure along with the Public Cloud and SaaS providers. On top of that, VMware now talks about a complete cloud management layer underpinned by vSphere and NSX technologies.

VMware Cloud on AWS Updates:

The big news on the VMware Cloud on AWS front is that there is a new UK based service offering and continued expansion into Germany. This will extend into the APAC region later in the year.

VMware Cloud on AWS will also have support for stretch clusters using the same vSAN and NSX technologies used on-premises on top of the underlying AWS compute and networking platform. This looks to extend application uptime across AWS Availability Zones within AWS regions.

This will feature:

  • Zero RPO high availability across AZs
  • Built into the infrastructure layer with synchronous replication
  • Stretched Cluster with common logical networks with vSphere HA/DRS
  • If an AZ goes down it’s treated as an HA event and impacted VMs are brought back up in the other AZ

They are also adding vSAN Compression and Deduplication for VMware Cloud on AWS services which in theory will save 40% in storage.

VMware Cloud Services Updates:

Hybrid Cloud Extension (HCX), first announced at VMworld last year, has a new on-premises offering and is expanding availability through VMware Cloud Provider Partners. This includes VMware Cloud on AWS, IBM Cloud and OVH. The promise here is any-to-any vSphere migration that works across versions while remaining secure. We are talking about hybridity here!

Log Intelligence is an interesting one…it looks like Log Insight delivered as a SaaS application. It is a real-time big data log management platform for VMware Cloud on AWS, adding real-time visibility into infrastructure and application logs for faster troubleshooting. It supports any syslog source and, in theory, will ingest over the internet.

Cost Insight is an assessment tool for private cloud to VMware Cloud on AWS migration. It calculates the VMware Cloud on AWS capacity required to migrate from on-premises to VMC. It also has integration with Network Insight to calculate networking costs during migration.

Finally there is an update to Wavefront that expands inputs and integrations to enhance visibility and monitoring. There are 45 new integrations, monitoring of native AWS services and integration into vRealize Operations.

You can watch the whole event here.

AWS re:Invent – Expectations from a VM Hugger…

Today is the first official day of AWS re:Invent 2017 and things are kicking off with the Global Partner Summit. Today is also my first day of AWS re:Invent and I am looking forward to experiencing a different type of big IT conference, with all previous experiences being at VMworld or the old Microsoft TechEds. Just by looking at the agenda, schedule and content catalog I can already tell re:Invent is a very, very different type of IT conference.

As you may or may not know I started this blog as Hosting is Life! and the first half of my career was spent around hosting applications and web services…in that time I gravitated towards looking at AWS solutions to help complement the hosting platforms I looked after, and I was actively using a few AWS services in 2011 and 2012 and attended a couple of AWS courses. After joining Zettagrid my use of AWS decreased and it wasn’t until Veeam announced supportability for AWS storage as part of our v10 announcements that I decided to get back into the swing of things.

Subsequently we announced Veeam Availability for AWS, which leverages EBS snapshots to perform agentless backups of AWS instances, and more recently we were announced as a launch partner for VMware Cloud on AWS data availability solutions. For me, the fact that VMware have jumped into bed with AWS has obviously raised AWS’s profile in the VMware community and it’s certainly being seen as the cool thing to know (or claim to know) within the ecosystem.
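
To illustrate the underlying mechanism (this is not Veeam Availability for AWS itself, just the native building block it leverages), the sketch below uses boto3 to snapshot every EBS volume attached to an instance; the instance ID and region are placeholders.

```python
# Illustration only of the native mechanism (EBS snapshots), not the product itself:
# snapshot every EBS volume attached to a given instance.
import boto3

INSTANCE_ID = "i-0123456789abcdef0"  # placeholder instance ID

ec2 = boto3.client("ec2", region_name="ap-southeast-2")

volumes = ec2.describe_volumes(
    Filters=[{"Name": "attachment.instance-id", "Values": [INSTANCE_ID]}]
)["Volumes"]

for vol in volumes:
    snap = ec2.create_snapshot(
        VolumeId=vol["VolumeId"],
        Description=f"Agentless backup of {INSTANCE_ID} ({vol['VolumeId']})",
    )
    print("Created snapshot", snap["SnapshotId"])
```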

Veeam isn’t the only backup vendor looking to leverage what AWS has to offer by way of extending availability into the hyper-scale cloud and every leading vendor is rushing to claim features that offload backups to AWS cloud storage as well as offering services to protect native AWS workloads…as with IT Pros this is also the in thing!

Apart from backup and availability, my sessions are focused on storage, compute, scalability and scale as well as some sessions on home automation with Alexa and the like. This year’s re:Invent is 100% a learning experience and I am looking forward to attending a lot of sessions and taking a lot of notes. I might even come out taking the whole serverless thing a little more seriously!

Moving away from the tech, the AWS world is one that I am currently removed from…unlike the VMware ecosystem and VMworld, I wouldn’t know 95% of the people delivering sessions and I certainly don’t know much about the AWS community. While I can’t fix that by just being here this week, I can certainly use this week as a launching pad to get myself more entrenched in the technology, the ecosystem and the community.

Looking forward to the week and please reach out if you are around.

VMware Cloud on AWS Availability with Veeam

It’s been exactly a year since VMware announced their partnership with AWS and it’s no surprise that at this year’s VMworld the solution is front and center and will feature heavily in Monday’s keynote. Earlier today Veeam was announced as an officially supported backup, recovery and replication platform for VMware Cloud on AWS. This is an exciting announcement for existing customers of Veeam who currently use vSphere and are interested in consuming VMware Cloud on AWS.

In terms of what Veeam has been able to achieve, there is little noticeable difference in the process to configure and run backup or replication jobs from within Veeam Backup & Replication. The VMware Cloud on AWS resources are treated as just another cluster so most actions and features of the core platform work as if the cloud based cluster was local or otherwise.

Below you can see a screenshot of a VMC vCenter from the AWS-based HTML5 Web Client. What you can see is the minimum spec for a VMC customer, which includes four hosts with 36 cores and 512GB of RAM, plus vSAN and NSX.

In terms of Veeam making this work, there were a few limitations that VMware have placed on the solution, which means that our NFS-based features such as Instant VM Recovery, Virtual Labs or SureBackup won’t work at this stage. HotAdd is the only supported backup transport mode (which isn’t a bad thing as it’s my preferred transport mode) and it talks to a new VDDK library that is part of the VMC platform.

With that the following features work out of the box:

  • Backup with In Guest Processing
  • Restores to original or new locations
  • Backup Copy Jobs
  • Replication
  • Cloud Connect Backup
  • Windows File Level Recovery
  • Veeam Explorers

With the above there are a lot of options for VMC customers to stick to the 3-2-1 rule of backups…remembering that just because the compute resources are in AWS doesn’t mean that workloads and applications are automatically highly available. Customers can also take advantage of the fact that VMC is just another cluster from their on-premises deployments and use Veeam Backup & Replication to replicate VMs into the VMC vCenter, to which end it could be used as a DR site.

For more information and the official blog post from Veeam co-CEO Peter McKay click here.

Cloud to Cloud to Cloud Networking with Veeam Powered Network

I’ve written a couple of posts on how Veeam Powered Network can make accessing your homelab easy with its straightforward approach to creating and connecting site-to-site and point-to-site VPN connections. For a refresh on the use cases that I’ve gone through: I had a requirement where I needed access to my homelab/office machines while on the road, and to achieve this I went through two scenarios on how you can deploy and configure Veeam PN.

In this blog post I’m going to run through a very real-world solution with Veeam PN, where it will be used to easily connect geographically disparate cloud hosting zones. One of the most common questions I used to receive from sales and customers in my previous roles with service providers was how to easily connect two sites so that some form of application high availability could be achieved, or even just to allow access to applications or services across sites.

Taking that further…how is this achieved in the most cost-effective and operationally efficient way? There are obviously solutions available today that achieve connectivity between multiple sites, whether that be via some sort of MPLS, IPSec, L2VPN or stretched network solution. What Veeam PN achieves is a simple to configure, cost-effective (remember it’s free) way to connect up one-to-one or one-to-many cloud zones with little to no overhead.

Cloud to Cloud to Cloud Veeam PN Appliance Deployment Model

In this scenario I want each vCloud Director zone to have access to the other zones and be always connected. I also want to be able to connect in via the OpenVPN endpoint client and have access to all zones remotely. All zones will be routed through the Veeam PN Hub Server deployed into Azure via the Azure Marketplace. To go over the Veeam PN deployment process read my first post and also visit this VeeamKB that describes where to get the OVA and how to deploy and configure the appliance for first use.

Components

  • Veeam PN Hub Appliance x 1 (Azure)
  • Veeam PN Site Gateway x 3 (One Per Zettagrid vCD Zone)
  • OpenVPN Client (For remote connectivity)

Networking Overview and Requirements

  • Veeam PN Hub Appliance – Incoming Ports TCP/UDP 1194, 6179 and TCP 443
    • Azure VNET 10.0.0.0/16
    • Azure Veeam PN Endpoint IP and DNS Record
  • Veeam PN Site Gateways – Outgoing access to at least TCP/UDP 1194
    • Perth vCD Zone 192.168.60.0/24
    • Sydney vCD Zone 192.168.70.0/24
    • Melbourne vCD Zone 192.168.80.0/24
  • OpenVPN Client – Outgoing access to at least TCP/UDP 6179

In my setup the Veeam PN Hub Appliance has been deployed into Azure, mainly because that’s where I was able to test out the product initially, but also because in theory it provides a centralised, highly available location for all the site-to-site connections to terminate into. This central Hub can be deployed anywhere, as long as it has HTTPS connectivity configured correctly so you can access the web interface and start to configure your site and standalone clients.

Configuring Site Clients for Cloud Zones (site-to-site):

To configure the Veeam PN Site Gateways you need to register the sites from the Veeam PN Hub Appliance. When you register a client, Veeam PN generates a configuration file that contains VPN connection settings for the client. You must use the configuration file (downloadable as an XML) to set up the Site Gateways. Referencing the diagram at the beginning of the post, I needed to register three separate client configurations as shown below.

Once this has been completed you need to deploy a Veeam PN Site Gateway in each vCloud hosting zone…because we are dealing with an OVA, the OVFTool will need to be used to upload the Veeam PN Site Gateway appliances. I’ve previously created and blogged about an OVFTool upload script using PowerShell which can be viewed here. Each Site Gateway needs to be deployed and attached to the vCloud vORG Network that you want to extend…in my case it’s the 192.168.60.0, 192.168.70.0 and 192.168.80.0 vORG Networks.

Once each vCloud zone has had the Site Gateway deployed and the corresponding XML configuration file added, you should see all sites connected in the Veeam PN Dashboard.

At this stage we have connected each vCloud zone to the central Hub Appliance, which is now configured to route to each subnet. If I was to connect an OpenVPN client to the Hub Appliance I could access all subnets and be able to connect to systems or services in each location. Shown below is the Tunnelblick OpenVPN client connected to the Hub Appliance, showing the routes injected into the network settings.

You can see above that the 192.168.60.0, 192.168.70.0 and 192.168.80.0 static routes have been added and set to use the tunnel interface’s default gateway, which is on the central Hub Appliance.
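
A quick way to prove those injected routes actually carry traffic is a simple reachability test from the VPN-connected machine against something listening in each zone. The sketch below does a basic TCP check in Python; the host IPs and ports are placeholders for whatever services you run in each vCloud zone.

```python
# Quick reachability check from the VPN-connected machine: the host IPs and ports
# are placeholders for real services in each vCloud zone (e.g. SSH or RDP).
import socket

TARGETS = {
    "Perth":     ("192.168.60.10", 22),
    "Sydney":    ("192.168.70.10", 22),
    "Melbourne": ("192.168.80.10", 22),
}

for zone, (host, port) in TARGETS.items():
    try:
        with socket.create_connection((host, port), timeout=3):
            print(f"{zone}: reachable via the tunnel ({host}:{port})")
    except OSError as err:
        print(f"{zone}: NOT reachable ({host}:{port}) - {err}")
```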

Adding Static Routes to Cloud Zones (Cloud to Cloud to Cloud):

To complete the setup and have each vCloud zone talking to the others, we need to configure static routes on each zone’s network gateway/router so that traffic destined for the other subnets is routed through the Site Gateway IP to the central Hub Appliance, onto the destination and then back. To achieve this you just need to add static routes to the router. In my example I have added the static route to the vCloud Edge Gateway through the vCD portal, as shown below in the Melbourne zone.

Conclusion:

Summarizing the steps that were taken to set up and configure a cloud to cloud to cloud network using Veeam PN, through its site-to-site connectivity feature, to allow cross-site connectivity while also allowing access to systems and services via the point-to-site VPN:

  • Deploy and configure Veeam PN Hub Appliance
  • Register Cloud Sites
  • Register Endpoints
  • Deploy and configure Veeam PN Site Gateway in each vCloud Zone
  • Configure static routes in each vCloud Zone

Those five steps took me less than 30 minutes, which included the OVA deployments as well. At the end of the day I’ve connected three disparate cloud zones at Zettagrid which all access each other through a Veeam PN Hub Appliance deployed in Azure. From here there is nothing stopping me from adding more cloud zones that could be situated in AWS, IBM, Google or any other public cloud. I could even connect up my home office or a remote site to the central Hub to give full coverage.

The key here is that Veeam Powered Network offers a simple solution to what is traditionally a complex and costly one. Again, this will not suit all use cases, but at its most basic functional level it would have been the answer to the cross-cloud connectivity questions I used to get that I mentioned at the start of the article.

Go give it a try!

Attack from the Inside – Protecting Against Rogue Admins

In July of 2011, Distribute.IT, a domain registration and web hosting services provider in Australia, was hit with a targeted, malicious attack that resulted in the company going under and their customers left without their hosting or VPS data. The attack was calculated, targeted and vicious in its execution… I remember the incident well as I was working for Anittel at the time and we were offering similar services…everyone in the hosting organization was concerned when starting to think about the impact a similar attack would have on our systems.

“Hackers got into our network and were able to destroy a lot of data. It was all done in a logical order – knowing exactly where the critical stuff was and deleting that first,”

While it was reported at the time that a hacker got into the network, the way in which the attack was executed pointed to an inside job and, although it was never proven, it is almost 100% certain that the attacker was a disgruntled ex-employee. The very real issue of an inside attack has popped up again…this time Verelox, a hosting company out of the Netherlands, has effectively been taken out of business by a confirmed attack from within by an ex-employee.

My heart sinks when I read of situations like this and for me, it was the only thing that truly kept me up at night as someone who was ultimately responsible for similar hosting platforms. I could deal with, and probably reconcile myself to, a situation where a piece of hardware failed causing data loss…but if an attacker had caused the data loss then all bets would have been off and I might have found myself scrambling to save face and, along with others in the organization, may well have been searching for a new company…or worse, a new career!

What Can Be Done at a Technical Level?

Knowing a lot about how hosting and cloud service providers operate, my feeling is that 90% of organizations out there are not prepared for such attacks and are at the mercy of an attack from the inside…either by a current or ex-employee. Taking that a step further, there are plenty that are at risk of an attack from the inside perpetrated by external malicious individuals. This is where the principle of least privilege needs to be taken to the nth degree. Clear separation of operational and physical layers needs to be considered as well, to ensure that if systems are attacked, not everything can be taken down at once.

Implementing some form of certification or compliance such as ISO 27001, SOC or IRAP will force companies to become more vigilant through the stringent processes and controls that are imposed once they achieve compliance. This in turn naturally leads to better and more complete disaster and business continuity scenarios that are written down and require testing and validation in order to pass certification.

From a backup point of view, these days with most systems being virtual, it’s important to consider a backup strategy that not only makes use of the 3-2-1 rule of backups, but also implements some form of air-gapped backups that in theory are completely separate and inaccessible from production networks, meaning that only a few very trusted employees have access to the backup and restore media. In practice, implementing a complete air-gapped solution is complex and potentially costly, and this is where service providers are chancing their futures on scenarios that have a small percentage chance of happening; however, the likelihood of such a scenario playing out is greater than it’s ever been.

In a situation like Verelox, I wonder if, like most IaaS providers, they didn’t back up all client workloads by default, meaning that backup was an additional service charge that some customers didn’t know about…that said, if backup systems are wiped clean, is there any use in having those services anyway? That is to say…is there a backup of the backup? This being the case, I also believe that businesses need to start looking at cross-cloud backups and not rely solely on their provider’s backup systems. Something like the Veeam Agents or Cloud Connect can help here.

So What Can Be Done at an Employee Level?

The more I think about the possible answer to this question, the more I believe that service providers can’t fully protect themselves from such internal attacks. At some point trust supersedes all else and no amount of vetting or process can stop someone with the right sort of access doing damage. To that end, making sure that you are looking after your employees is probably the best defence against someone feeling aggrieved enough to carry out a malicious attack such as the one Verelox has just gone through. In addition to looking after employees’ wellbeing, it’s also a good idea to…within reason, keep tabs on an employee’s state in life in general. Are they going through any personal issues that might make them unstable, or have they been done wrong by someone else within the company? Generally social issues should be picked up during the hiring process, but complete vetting of employee stability is always going to be a lottery.

Conclusion

As mentioned above, this type of attack is a worst case scenario for every service provider that operates today…there are steps that can be taken to minimize the impact and protect against an employee getting to the point where they choose to do damage but my feeling is we haven’t seen the last of these attacks and unfortunately more will suffer…so where you can, try to implement policy and procedure to protect and then recover when or if they do happen.


Resources:

https://www.crn.com.au/news/devastating-cyber-attack-turns-melbourne-victim-into-evangelist-397067/page1

https://www.itnews.com.au/news/distributeit-hit-by-malicious-attack-260306

https://news.ycombinator.com/item?id=14522181

Verelox (Netherlands hosting company) servers wiped by ex-admin from sysadmin

Looking Beyond the Hyper-Scaler Clouds – Don’t Forget the Little Guys!

I’ve been on the road over the past couple of weeks presenting to Veeam’s VCSP partners and prospective partners here in Australia and New Zealand on Veeam’s cloud business. Apart from the great feedback in response to what Veeam is doing by way of our cloud story, I’ve had good conversations around public cloud and infrastructure providers versus the likes of Azure or AWS. Coming from a background working for smaller, but very successful, service providers I found it almost astonishing that smaller resellers and MSPs seem to be leveraging the hyper-scale clouds without giving the smaller providers a look in.

On the one hand, I understand why people would choose to look to Azure, AWS and the like to run their client services…while on the other hand I believe that the marketing power of the hyper-scalers has left the capabilities and reputation of smaller providers short-changed. You only need to look at last week’s AWS outage and previous Azure outages to understand that no cloud is immune to outages, and it’s misjudged to assume that the hyper-scalers offer any better reliability or uptime than providers in the vCloud Air Network or other IaaS providers out there.

That said, there is no doubt that the scale and brain power that sits behind the hyper-scalers ensures a level of service and reliability that some smaller providers will struggle to match, but as was the case last week…the bigger they are, the harder they fall. The other thing that comes with scale is the ability to drive down prices and, again, there seems to be a misconception that the hyper-scalers are cheaper than smaller service providers. In fact, most of the conversations I had last week as to why Azure or AWS was chosen came down to pricing and kickbacks. Certainly in Azure’s case, Microsoft has thrown a lot into ensuring customers on EAs have enough free service credits to ensure uptake, and there are apparently nice sign-up bonuses that they offer to partners.

During one such conversation, I asked the reseller why they hadn’t looked at some of the local VCSP/vCAN providers as options for hosting their Veeam infrastructure for clients to back up workloads to. Their response was that it was never a consideration due to Microsoft being…well…Microsoft. The marketing juggernaut was too strong…the kickbacks too attractive. After talking to him for a few minutes I convinced him to take a look at the local providers who offer, in my opinion, more flexible and more diverse service offerings for the use case.

Not surprisingly, in most cases money is the number one factor in a lot of these decisions, with service uptime and reliability coming in as an important afterthought…but an afterthought nonetheless. I’ve already written about service uptime and reliability in regards to cloud outages before, but the main point of this post is to highlight that resellers and MSPs can make as much money…if not more, with smaller service providers. It’s common now for service providers to offer partner reseller or channel programs that ensure the partner gets decent recurring revenue streams from the services consumed, and the more that’s consumed the more you make by way of program-level incentives.

I’m not going to do the sums, because there is so much variation in the different programs, but for those reading who have not considered using smaller providers over the likes of Azure or AWS, I would encourage you to look through the VCSP Service Provider directory and the vCloud Air Network directory and locate local providers. From there, enquire about their partner reseller or channel programs…there is money to be made. Veeam (and VMware with the vCAN) put a lot of trust and effort into our VCSPs and, having worked for some of the best and knowing a lot of the other service provider offerings, I can tell you that if you are not looking at them as a viable option for your cloud services then you are not doing yourself justice.

The cloud hyper-scalers are far from the panacea they claim to be…if anything, it’s worthwhile spreading your workloads across multiple clouds to ensure the best availability experience for your clients…however, don’t forget the little guys!
