Tag Archives: Hosting

Differentiate…Or Die?

I spent the last week on holiday in the Wine Region of Western Australia’s South West. I’ve been holidaying down south since I was a teenager and I’ve seen the region transform over the years…I can’t speak for the years prior to my time spent around Margaret River, Dunsborough and Yallingup, but I had a thought as I was visiting one of the newer Wineries/Breweries that, in some ways, the Wine Industry down south shares similar traits with the Hosting/Service Provider Industry.


Wineries of the South West


Cloud Hosting Providers of Perth

So what has wine and tourism got to do with Hosting and Cloud?

I remember a conversation with a local Microsoft SPLA guy (those in Australia know there is only really one guy who fits that bill…@PhileMeAU) during a Hosting Partners dinner at TechEd 2010, whereby the group was talking about the possible impact of BPOS/Office365 and what it meant for traditional hosters. Out of that conversation the strong advice given at the time was that we had to Differentiate, or Die…that is to say, there was really no future in hosting vanilla applications like Exchange or MSCRM because commodity-based public clouds will eventually swallow all before them. Three years on, the same could be said for those doing IaaS, with the thought being that traditional Virtual Machine hosting is now the realm of the bigger players.

In some ways the rise of AWS, Azure and other public clouds has shifted the industry closer to a Demolition Man style Taco Bell monopoly. But there are enough alternative Service Providers competing against the big guns and winning to prove that, for all the marketing money aimed at perpetuating FUD…somewhere along the line those smaller players are doing something right. Have they taken on the differentiation threat? Or is something else responsible for their continued existence and success?

Back to the Wine Industry example: going back 20-30+ years there might have been 5-10 Wineries that dominated the industry until the smaller players started buying up land and producing their own vintages. Pretty soon the market became flooded with Margaret River wines and competition was at its peak. For those wineries lucky enough to be not too far off Caves Road (the main road running parallel to the coast) there was a guarantee of a steady stream of customers…What I have seen over the last 5 years or so is a number of Wineries trying to differentiate themselves from the others by bringing out more exotic vintages and even branching off into brewing beers and distilling spirits. The region was trying to become as famous for its liqueurs as its vinos.

With all that going on, it’s still the more established wineries that attract the majority of the tourist dollar…this is as much due to reputation and market muscle as it is to the quality of their product. Differentiation hasn’t worked…at the end of the day, people visiting will find their way to the bigger players and the smaller players will continue to exist to serve their own particular market niche.

The same can be said for the Hosting and Cloud industry…lots of service providers have tried to differentiate their services so as to ward off the threat from an AWS or an Azure…but in doing that I’ve seen (and been part of) companies losing focus on getting the simple things right. Being a jack of all trades and a master of none is dangerous in the Service Provider industry…unless you have a bottomless pit of resources (both money and people) there is no way you can achieve an excellent standard across a number of product sets. You also risk not focusing on the key areas of automation and process that go hand in hand with a successful product set.

Small to Medium Service Providers can still thrive if they stick to core competencies and strive to excel within those narrower, but focused, areas. The key I’ve found of late (and am a strong believer in) is that you just have to keep it simple and do what you do well. That is to say…pick a course and stick with it. If you do IaaS well, why try to offer Platforms or Applications? If your strength lies in Hosting .NET…why try to branch out to a LAMP platform? All that’s achieved, in my experience, is a thinning out of the quality of service, leading to a situation where the brand name is impacted.

As with the wineries down south, Service Providers need to be wary of trying to keep up with the big boys…just because Winery BXT has released an updated blend, why try to match it? Similarly, core focus will be lost if Service Providers try to keep up with the “new/just-fixes-for-poor-initial-release” features AWS, Azure and others seem to be releasing every month or so to keep on looking like they are adding value…when really all they are doing is filling gaps.

So, the takeaway here is to not take the differentiate-or-die message literally…Service Providers should focus on being excellent at what made them strong in the first place…the differentiate message may have been perpetuated by those that would want to see SPs lose focus and die…a slow death!


Passion

During last week’s #APACVirtual Podcast (Episode 70 – Engineers Anonymous pt1 – Engineer2PreSales) the panelists (of which I was one) were discussing what it took to become a successful candidate in transitioning from a technical engineering role to a pre-sales/architecture role. It was universally agreed that passion is a much sought-after trait in those roles. Someone who is passionate about what they are doing can overcome almost any professional deficiency and succeed where others might fail. It was discussed that someone who is seen to be passionate is a more sought-after asset than someone who is simply technically brilliant.

I’m a passionate guy…those that know me would generally describe me as such. When I find something I love I tend to embrace it with all that I have and it becomes a driving force in life…I wear my heart on my sleeve in most aspects of life…be it family, playing cricket or work, and for each of those, passion manifests itself in different ways.

I’ve mulled over this post for about a week now…it’s been written and re-written a number of times as I try to best represent and explain passion and how it can contribute to a successful and rewarding career in IT. At the end of the day I can’t explain passion with any great level of verbal prowess…it’s too much of a basic raw emotion!

Passion is something you have, or don’t have…it’s a driving force that makes you strive to better yourself and it fuels the fire within that drives you to succeed and excel in anything you attempt in life.

Passion has the ability to lay down the foundation of a lasting legacy…

I possess a driving force when it comes to my work…I truly believe in the technology I work with…when talking with colleagues and clients alike, I am always passionate in my evangelization of those products and technologies.

My current passion lies within Hosting and Cloud technologies and I’m a big believer in what VMware is doing in the market at the moment. Previously I was (and still am, to a lesser extent) passionate about Hosted Exchange services and other Microsoft technologies…in that sense, the driver of passion can change depending on current circumstance, and in my case the agent of change was directly related to the way Microsoft started treating their partners…that, and I was consumed by the vSphere, ESX and vCloud virtualization stack and the power of transformational change it can offer clients…look no further than the EUC push for evidence of this change.

Not everyone possesses passion, and I see examples of people without passion every day…I can’t comprehend this…I can’t understand people that work without anything truly driving them…

One person with passion is better than forty people merely interested.

— E. M. Forster

Again, it’s almost impossible to represent what drives me…but I know I’d rather be passionate in life than not.


DDoS Annihilation – What Can Service Providers Do?

Recently we experienced a series of DDoS attacks against client hosted sites that resulted in varying levels of service outages to hosted services across a section of our hosting platform. In my 10+ years of working in the hosting industry this series of attacks was by far the most intense I’ve experienced and certainly was the most successful in terms of achieving the core goal of a DDoS.

On the one hand, as a collective you might think “…we had been lucky to avoid an attack up to this point”, while on the other hand you are dealing with the misguided expectations of clients who believe you are protected against such attacks. When you explain the realities of a DDoS to a customer who is expecting 100% up-time, the responses generally encountered are along the lines of “…I thought you said your service will never go down?” or “…I thought you had full redundancy?”

The absolute reality (that I have no problem in explaining to clients) is that most, if not all, service providers are pretty helpless against a DDoS, depending on the size and scale of the attack. In our case we were able to mitigate the service disruption by re-routing all traffic destined for the affected IP to a null route at our carrier edge, which relieved the load the firewall had been placed under…that load had caused the firewall CPU to spike, making the DDoS successful in its end game.
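
For illustration only, a sketch of what that null route looks like in Cisco-style syntax at the edge (the address below is from the documentation range, standing in for the targeted IP):

! silently drop everything destined for the attacked address
ip route 203.0.113.50 255.255.255.255 Null0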

So what can be done to mitigate the risk a DDoS presents? Service Providers can look at spending money on extremely expensive IDS systems and/or larger capacity routing and firewall devices that might only shield against an attack a little more effectively than less expensive options. As an example, if a firewall device is capable of 10,000 connections per second and 100,000 total connections, a DDoS will look to saturate its capacity to the point where its memory and/or CPU resources are consumed trying to process the attack traffic…upgrading to a device capable of 20,000 connections per second and 200,000 total connections will only serve to buffer those resources that little bit longer, which might give you more time to mitigate the attack…but the point being made here is that…

…service provider resources will always come off second best if an attack is large enough.

And this is the really scary thing for service providers…if someone (individual or organisation) wants to maliciously target your network and/or a client service hosted on your network and they want to inflict maximum service disruption…the best thing that can be done is attempt to mitigate where possible and ride it out.

There are a number of sites that track and list current and trending DDoS attack frequency and origin…one of the better ones I’ve come across is Prolexic’s real time Attack Tracker linked below.

Companies such as Prolexic generally provide services and physical devices that are linked to global networks which act to shield client networks from attacks, in a similar way to how SenderBase.org shields email users from obvious SPAM. In discussions with Steven Crockett (Anittel CTO) he described a service which effectively re-routes traffic at the upstream provider’s end through overseas carrier networks whose connectivity throughput allows otherwise crippling DDoS traffic to be filtered and cleaned before being sent on to its destination. This service isn’t site or service specific but involves routing entire subnets…so at this level it’s much more expensive and holistic than reverse proxy content delivery networks.

Working with a CDN will add protection in the form of a value-add service to current service offerings.

So what alternative measures can service providers take to add some level of protection to their key client/internal services? Unless the SP is loaded with more cash than it knows what to do with (at which point there is a case to scale out/upgrade the hosting platform itself) the only option is to utilize the services of bigger companies that run dedicated Content Delivery Networks.

CDN companies are popping up all over the internet, and while a company like Akamai has dominated the website caching market for many years, CDNs are becoming more the norm, with caching of static site content making way for reverse proxy DNS redirection. In the wake of the DDoS attacks experienced recently I’ve been testing a couple of the better known CDN providers. One of those is CloudFlare. The way CloudFlare, or Amazon Web Services CloudFront, works is by taking over a website’s DNS records and using geo-routing to distribute visitors through their CDN network, which also filters for potential DDoS or other malicious traffic that would otherwise hit the origin web server.
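
As a simplified sketch of the DNS side of things (the hostnames and IP below are purely hypothetical…CloudFlare takes over the domain’s nameservers entirely, while a CloudFront-style setup is typically done with a CNAME):

; before: visitors resolve straight to the origin web server
www.example.com.   IN  A      203.0.113.10

; after: visitors are steered to the CDN edge, which proxies and filters traffic back to the origin
www.example.com.   IN  CNAME  d1234abcdef.cloudfront.net.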

CDN services are generally charged on a usage basis, which commoditizes the service, however CloudFlare charge per site, with their business plans going around the $200 per month mark. For a service provider’s customer after added insurance against a DDoS, or even to generally increase site responsiveness and performance, I believe it’s a no-brainer in the age of increasingly brutal DDoS attacks to offer these services as a value-add. At the end of the day the more sites a Service Provider fronts with CDNs, the better their own hosting network will be able to deal with the inevitability of a DDoS.

One final point to make on going down the CDN path is to ensure that customers understand that their sites are still subject to downtime…this is best illustrated by CloudFlare’s recent outage on the 3rd of March 2013, due to a router bug propagated into their network during a routine DDoS prevention exercise. To their credit, they were very open and transparent about the root cause, and while sites were offline for a period of time, there were options available to re-route the site DNS records back to the origin…such is the flexibility of offering a service like this to service provider clients.

A Hypothetical…

So what’s the title all about? DDoS Annihilation? In my opinion we are getting closer to DDoS events on such large scales that they will have the potential to take down the majority of service provider and carrier networks, which in turn will have a huge social and economic impact around the globe. We don’t have to wait for a Coronal Mass Ejection to black out the planet…a massive DDoS has the ability to inflict severe damage.

Near on 1 billion internet hosts used against us in a global DDoS?? No network has the ability to handle that!

How-To: Citrix NetScaler GeoIP Restrictions

I had a request from a Hosting Client this week to look at options around blocking malicious users from causing trouble on a local Auction site. As the site was only for Australian and New Zealand users we needed to come up with a solution to block the whole world except AU and NZ visitors. Obviously I knew mechanisms like this existed…they have annoyed me in the past when trying to source overseas content, only to get the message telling you that you can’t access the site in your region.

I’ve never personally had to act on a request like this, and thought about options relating to some sort of code-based filtering or filtering at the gateway level. I’ve known that in real terms I haven’t even scratched the surface of what our Citrix NetScaler VPXs can do, and with that I searched for some guidelines on setting up GeoIP Responder rules at the Load Balancer’s Virtual Server level. Not being able to find anything definitive end to end, here are the steps I took to achieve the end result.

Citrix NetScaler Article: How to Block Access to a Site by Country using a Location Database

The first step is to enable the Responder feature if it’s not already enabled. Citrix suggest you disable any feature not in use to save on system resources.

[Image: ns_geoip_1]
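
If you prefer the CLI over the GUI, the equivalent is along the lines of:

enable ns feature RESPONDER
show ns feature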


In order for the NetScaler to work out what location a visitor is coming from it needs to reference a GeoIP database. MaxMind offer a free database from here: These are updated on the first Tuesday of every month, so a little upkeep is required moving forward. There are IPv4/6 versions as well as an extended City database which lets you get very granular in terms of allowing access by city. For this exercise we will use the GeoIPCountryWhois CSV database.

Jump into the shell of the NetScaler and create a new directory. Note that if you have a HA setup, you need to do this on each NetScaler node.
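
Something along these lines (the path is simply the one I used…anywhere persistent on the appliance will do):

shell
mkdir -p /var/netscaler/locdb
exit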

Use SCP to upload the CSV database to the location just created on the NetScaler and then run the following command to import the location parameters. Once done you can query the location database to ensure you have imported the CSV line items.
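
As a rough sketch (the management IP, path and -format value are examples only…the exact format name can vary between firmware builds, so check the add locationFile options on your build):

# from your workstation
scp GeoIPCountryWhois.csv nsroot@10.0.0.10:/var/netscaler/locdb/

# then back on the NetScaler CLI
add locationFile /var/netscaler/locdb/GeoIPCountryWhois.csv -format GeoIP-Country
show locationParameter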

Now that you have the GeoIP location database locked and loaded, you can create the Responder Policy. I had a little trouble trying to work out how to structure the rule to correctly limit visitors to only .AU and .NZ. I’ll be honest and admit that trial and error was the winner, but eventually I came up with the following, which works.
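
The rule takes roughly this shape (a sketch rather than a copy/paste…the policy name is arbitrary, DROP is one of the built-in Responder actions, and the location qualifiers follow the continent.country.region.city.isp.org format):

add responder policy pol_geo_anz_only "!(CLIENT.IP.SRC.MATCHES_LOCATION(\"*.AU.*.*.*.*\") || CLIENT.IP.SRC.MATCHES_LOCATION(\"*.NZ.*.*.*.*\"))" DROP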

Reading through the policy it’s easy enough to see what’s going on…this page references the Location Database general information and formats, however it’s confusing at best…my advice for country-based GeoIP is to use the above as a template and simply change the country codes to suit.

Back in the GUI of the NetScaler, under the Load Balancing settings of the Virtual Server(s) in question, open the Virtual Server for editing, go to the Policies tab, click on the Responder sub-tab and right click to Insert Policy…the end result will be similar to what’s shown below.

[Image: ns_geoip_2]
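
If you would rather do the binding from the CLI (or script it across a number of vservers), it’s along these lines…the vserver name below is a placeholder:

bind lb vserver lb_vs_auction_site -policyName pol_geo_anz_only -priority 100 -gotoPriorityExpression END -type REQUEST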

I was able to use Twitter contacts with servers in global locations to test out the rule, which behaved exactly as expected. If you go back to the Policy menu item under Responder and check the Responder Policies you will be able to see if the rule is active and how many hits the rule has triggered.

[Image: ns_geoip_3]

The default action of the policy is to DROP or RESET the connection. You do have the option of creating a custom REDIRECT rule that will make the experience a little nicer for the end user by presenting a HTML page letting them know the page is restricted…with DROP and RESET the browser simply shows a page not found or connection reset error. I’ll update this post once I’ve created the REDIRECT rule.

Update: Turns out that if you apply the above rule it’s not that great for Google Analytics and the bots that hit your site. If you want to let the GoogleBot User-Agent through the rule, create a rule similar to the one below.
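
Again a sketch rather than gospel…the extra OR clause lets anything presenting a Googlebot User-Agent through:

set responder policy pol_geo_anz_only -rule "!(CLIENT.IP.SRC.MATCHES_LOCATION(\"*.AU.*.*.*.*\") || CLIENT.IP.SRC.MATCHES_LOCATION(\"*.NZ.*.*.*.*\") || HTTP.REQ.HEADER(\"User-Agent\").SET_TEXT_MODE(IGNORECASE).CONTAINS(\"Googlebot\"))"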

The additional condition looks at the HTTP Request header, ignoring case and matching Googlebot.

Load Balancer Internal IPs Appearing in IIS/Apache Logs: Quick Fix

If you are NAT’ing public to private addresses with a load balancer in between your web server and your Gateway/Firewall device, you might come across the situation where the IIS/Apache logs report the IP of the load balancer when what you really want is the client IP.

It’s obvious that the biggest issue with this is that any log parsing/analytics you do against the site will all be relative to the IP of the load balancer. All useful client and geographical information is lost.

Most load balancers get around this by inserting a header into the request containing the client IP. In most cases that I have seen (both Juniper and NetScaler), the header is set to rlnclientipaddr.

What needs to be done at the web server configuration level is to pick up on that header so the correct client IP can be written into the log files. There are obviously different ways to achieve this in Apache compared to IIS, and Apache has a much simpler solution than IIS.

Apache:

In your apache.conf go to the LogFormat section and modify the default format as shown below, then restart the Apache service.
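
As a sketch of the change, assuming the standard combined format and the rlnclientipaddr header mentioned above…the remote host field (%h) is swapped for the header value:

# before: logs the load balancer's IP
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined

# after: logs the client IP passed through by the load balancer
LogFormat "%{rlnclientipaddr}i %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined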


IIS:

The IIS 5/6/7/8 solution is a little more involved, but still just as efficient and not overly complicated at the end of the day…in fact for me the hardest part was actually chasing up the DLLs linked below. It must be noted that while this has worked perfectly for me against both a Juniper DX and a NetScaler VPX load balancer, I would suggest testing the solution before putting it into production. The reason being that the ISAPI filters are specifically sourced for the Juniper DX series, but in my testing I found that they worked for the NetScalers as well. Sourcing the x64 DLLs was a mission, so I am saving you a great deal of time by providing the files below.

[Download: rllog-ISAPI]

Download and extract those files into your Windows root. Go to the Features View -> ISAPI Filters and click on Add. Enter the Name and Executable location and click OK. Note that it’s handy to add both the 32 and 64-bit versions to a 64-bit IIS web server, just in case you are dealing with legacy applications that are required to run in 32-bit mode. Add the ISAPI filter at the root config of the web server so it propagates down to all existing sites and any newly created sites.

[Image: isapi_dll]
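
If you have a fleet of servers to touch, the same thing can be scripted on IIS 7 and above with appcmd…a rough sketch only, with the filter name and DLL path as placeholders for wherever you extracted the files:

REM add the filter at the server (applicationHost) level so every site inherits it
%windir%\system32\inetsrv\appcmd.exe set config -section:system.webServer/isapiFilters /+"[name='rlnclientip',path='C:\Windows\rllog.dll']" /commit:apphost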

The Backup Delusion – Part 2

It’s been a while since my first post on this topic, but there has certainly been a lot of thought and effort put into this subject since then. At first I envisaged this as a two-part post, but I think I’m going to break it up over a couple more posts that focus on a couple of particular areas that have come to the fore since I’ve begun to seriously think about backups as a hosting provider.

I’ve been running an internal product group that’s tasked with trying to find, test and launch the best overall Backup Application for our diverse client base. As a group we have gone through a process of trying to work out what features and benefits are most important to us as a business, and what’s important from a client’s perspective.

[Image: backup_sel_matrix_1]

We spent some time working on a Backup Selection Matrix that could quantify and rate those features and, from there, we would be able to score any Backup Product based on those numbers. In the previous post I listed out some of those features and explained how they affect the way in which both clients and we as providers look at selecting, developing and deploying products. At the end of that process we were able to clearly graph products against an X and Y axis (as shown below) and from that get a clear indication of which products came out on top based on those requirements.

[Image: backup_sel_matrix_2]

For the sake of not embarrassing some Backup vendors I’ve removed the product names from the images above. Suffice to say that some large, well known vendor products fell well short of expectation and rated very poorly. Across the board it was clear that not one product stood out…but some certainly failed and scored poorly.

What it’s allowed the group to do is quantify results against the testing, staging and real-world UAT sites, which in theory should lead to a calculated decision being made on which product best fits the requirements.

In the next post in the series I’ll explain why, in countries such as Australia where high-speed broadband is not as widely available as in others, we have a fundamental issue with offsite backup technologies which basically causes most large offsite replication and backup jobs to fail…which ultimately renders the offsite backup solution useless…and that effectively puts service providers at risk of credibility issues if expectations are not set based on real-world metrics.

The Backup Delusion – Part 1

vExpert 2012 – My Journey in Virtualization so far…

If you had told me 2 years ago that I’d be writing as a VMware vExpert, I would have thought you were crazy. At that stage my only exposure to VMware was on a co-lo server I was hosting for a mate’s start-up back in 2008. It was ESXi 3.5 back then and, compared to Hyper-V R2, it seemed fairly run of the mill…a clunky, foreign interface to someone who lived in Microsoft MMCs, and all I was dealing with was VM-related errors…with no HA!

I’m a Microsoft guy…I am still happy to point that out. My passion in Hosting was born of IIS, MSSQL, MSCRM, Exchange and SharePoint. I also work on Linux based systems for PHP/MySQL hosting, DNS and POP3 mail. Without a decent medium it was near on impossible to get a look in at an MVP award, but I have always been strong in evangelization of the systems I work with day in and day out. A strong advocate of partner hosted services, I have always been one to rise up and speak against the public cloud offerings Microsoft (and others) have pushed hard in vain attempts to play catch-up with Google. Public Cloud offerings such as Office365 have been largely built upon the momentum partners built up over the 2000s in delivering services such as Hosted Exchange and MSCRM when they were not built for multi-tenancy from the ground up…the partner community drove early adoption and made slogans such as “To the Cloud” (shudder) possible…more to come on this later in the post.

I started out testing in lab environments on old 486/Pentium systems that I could put together from spare parts in the office…while I was able to get some decent labs up, space was always at a premium and performance was limited. From there, I remember getting my hands on Virtual PC from Microsoft and started to load up lab machines on that…I remember it taking a whole day to load up Windows 2003, so the experience was frustrating to say the least…even so, the seed had been sown. From there Virtual PC 2005 was released and, from a viability point of view, we were in business. The first VM we put into production was a BlackBerry server (a positive example of Microsoft trying to play catch-up and kill off a competitor) which ran nicely in an environment that was 100% physical at the time. At Tech-Ed 2005, we first got introduced to Hyper-V. Michael Kleef at the time was running an advance beta build for his presentation demos and I was blown away at being able to run multiple VMs on a single platform, with a single console. At this time I didn’t even know about VMware’s existence other than reading articles on Hyper-V’s challenge to the incumbent.

Before moving over to Accord/Anittel in late 2009 I had put together a robust Hyper-V cluster, from which we were hosting multiple Windows VMs…mainly for staging purposes, but as time went on I added MSCRM and IIS front ends. Cluster Shared Volumes, introduced in R2 of Windows 2008, added live migration and all of a sudden the platform was complete. By this stage I knew about VMware as a competing product and I was up to speed with the arguments for and against. In my first few months in the new job I got used to working on an ESX 4.0 platform, but to be honest my first experiences were not great…Windows Server 2008 R2 locked up randomly due to an issue with VMware Tools (later fixed in a patch) and I was hearing about client issues all over the place…and our own ESX hosts were crashing at times…but I was learning the ins and outs of vSphere and was being shown features such as vMotion and Storage vMotion, as well as seeing the efficiencies of how ESX deals with host-to-VM memory.

The big turning point in my move towards VMware was while working on a client project that involved a Hyper-V cluster build. The client had been swayed on price and decided to go with Hyper-V with VMM 2010 over VMware Essentials. While the project went well, a glaring design flaw was exposed when the site experienced a long power outage…when both Windows hosts came back up, the cluster had no way of firing up due to DNS not being available, as it was on a VM hosted by the cluster…after nearly 8 hours of trying to bring up the cluster, it was pure luck that the old physical Domain Controller was still available, so with that powered back on and on the network I was able to bring up the cluster and all was well. While some of you might say it’s obvious you needed a DC that was separate to the cluster…be it physical or a VM outside of the Hyper-V cluster…it certainly made me sit up and notice ESX in a new light…that sort of thing just doesn’t happen with VMware.

Since then I’ve been able to work on Anittel’s multi-site ESX cluster backed by a strong MPLS network which stretches from Perth to Sydney and is about to head up to Brisbane…being able to live migrate a VM from Perth to Sydney still blows me away. From a hosting point of view I’ve been able to host some very high profile websites on both Windows and Linux and offer geographic redundancy and high availability…VMware’s ability to scale out VMs with ease makes hosting high-load websites a breeze, and through working on developing Anittel’s vCloud platform I’ve been involved in some large projects that have allowed me to speak at events across Australia on the power of the cloud as a hosting platform for load testing and running seasonal sites. Through my Twitter feed I’ve been able to post and contribute to the massive social network…there is no better resource for information.

For me, being able to work on vCloud has been an excellent journey that’s allowed me to get truly passionate about the power of virtualization, and while I still feel the platform is a couple of versions away from being mature enough to truly be game changing, it’s allowed me to get involved with VMware at the partner level via the VSPP program and in certifying Anittel as a vCloud Powered Partner (http://vcloud.vmware.com). In this I’ve picked up the biggest difference between Microsoft and VMware…VMware is all about the partners…their slogan of the past 12 months has been “Your Cloud”, which is an empowering push for partners to deliver services via a partner ecosystem, as opposed to Microsoft’s push to their own Public Cloud…be it Office365 or Azure. And you only need to look at Microsoft’s licensing restrictions for VDI to see their current mentality towards partner hosting.

With products such as Project Octopus and AppBlast, VMware are further empowering partners to build upon the vSphere platform to deliver cutting-edge technology…and while I am still nowhere near ready to leave Exchange as my email platform of choice, it won’t be long until Zimbra gets enough legs to challenge. At this stage, VMware don’t want to host their own public cloud…let’s hope it stays that way so they can continue to focus on delivering a solid platform for virtualization on which solid apps can be built.

Being awarded vExpert status for 2012 is a great honour, and being part of a special group of industry peers is very satisfying for someone who has come full circle when it comes to my journey with Virtualization. One of the unique aspects of this award is that it’s not tied to a certification…which is a good thing for me :) While I am aiming to sit my VCP 5 at some stage this year, you can’t beat hands-on experience, being thrown in the deep end and gaining knowledge via online and social means. Case in point: I’ve learnt as much as I care to about iSCSI storage in ESX due to some massive performance issues being experienced at the present time, but I wouldn’t have it any other way…I love technology and all that it brings.

Thanks to VMware and the local Australian Partner Team for the honour and I hope to continue to evangelize and contribute to the community.