Category Archives: Apache

The Anatomy of a vBlog Part 1: Building a Blogging Platform

Earlier this week my good friend Matt Crape sent out a Tweet lamenting the fact that he was having issues uploading media to WordPress…shortly after that tweet went out, Matt wasn’t short of Twitter and Slack vCommunity advice (follow the Twitter conversation below), and a number of options were presented to him on how best to host his blogging site, Matt That IT Guy.

Over the years I have seen that same question of “which platform is best” pop up a fair bit and thought it a perfect opportunity to dissect the anatomy of Virtualization is Life!. The question of which blogging platform is best doesn’t have a right or wrong answer; like most things in life, the platform you use to host your blog depends on your own requirements and resources. For me, I’ve always believed in eating my own dog food and I’ve always liked total end-to-end control of the sites I run. So while what I’m about to talk about worked for me, you might like to look at alternative options…but feel free to borrow from my example, as I do feel it gives bloggers full flexibility and control.

Brief History:

Virtualization is Life! started out as Hosting is Life! back in April of 2012, and I chose WordPress at the time mainly due to its relatively simple installation and ease of use. The site was hosted on a Windows hosting platform that I had built at Anittel, utilizing WebsitePanel on IIS 7.5, running FastCGI to serve the PHP content. The server backend was hosted on a VMware ESX cluster out of the Anittel Sydney Zones. The cost of running this site was approximately $10 US per month.

Tip: At this stage the site was effectively on a shared hosting platform, which is a great way to start off as the costs should be low and maintenance and uptime should be covered by the hoster’s SLA.

Migration to Zettagrid:

When I started at Zettagrid I had a whole new class of virtual infrastructure at my disposal and decided to migrate the blog to one of Zettagrid’s Virtual DataCenter products, where I provisioned a vCloud Director vDC and created a vApp with a fresh Ubuntu VM inside. The migration from a Windows based system to Linux went smoother than I thought, and I only had a few issues with some character maps after restoring the folder structure and database.

The VM itself is configured with the following hardware specs:

  • 2 vCPU (5GHz)
  • 4GB vRAM
  • 20GB Storage

As you can see above, the actual usage pulled from vCloud Director shows how little resource a VM with a single WordPress instance uses. That storage number actually represents the expanded size of a thin provisioned disk…actual usage on the file system is less than 3GB, and that is with four and a half years and about 290 posts worth of media and database content. I’ll go through site optimizations in Part 2, but in reality the amount of resources required to get you started is small…though you do have to consider the occasional burst in traffic and work in a buffer, as I have done with my VM above.

The cost of running this Virtual Datacenter in Zettagrid is approx $120 US per month.

Tip: Even though I am using a vCloud Director vDC, given the small resources initially required, a VPS or instance based service might be a better bet. Azure, AWS and Google all offer instance based VMs, though a more boutique provider like DigitalOcean might be an even better fit.

Networking and Security:

From a networking point of view I use the vShield/NSX Edge that comes as part of vCloud Director as my gateway device. This handles all my DHCP, NAT and firewall rules and is able to handle the site traffic with ease. If you want to see what the vShield/NSX Edges are capable of, check out my NSX Edge vs vShield Series. Both the basic vShield Edges and NSX Edges have decent load balancing features that can be used in high availability situations if required.

As shown below, I configured the Gateway rules from the Zettagrid MyAccount Page but could have used the vCloud Director UI. For a WordPress site the following services should be configured at a minimum, with an example rule set sketched after the list.

  • Web (HTTP)
  • Secure Web (HTTPS)
  • FTP (Locked down to only accept connections from specific IPs)
  • SSH (Locked down to only accept connections from specific IPs)
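To make that concrete, here is a rough sketch of what the Edge rule set amounts to. This is a minimal example only — the public and admin IPs are placeholders, the VM sits behind a DNAT on the Edge, and FTP will also need its passive port range if you use it:

  DNAT   <public-ip>            -> <wordpress-vm-ip>  (published ports only)
  ALLOW  any        -> <public-ip>:80    (HTTP)
  ALLOW  any        -> <public-ip>:443   (HTTPS)
  ALLOW  <admin-ip> -> <public-ip>:21    (FTP, restricted source)
  ALLOW  <admin-ip> -> <public-ip>:22    (SSH, restricted source)
  DENY   any        -> <public-ip>:any   (default deny)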

OS and Web Platform Details:

As mentioned above I chose Ubuntu as my OS of choice to run WordPress, though any Linux flavour would have done the trick. Choosing Linux over Windows obviously means you save on the Microsoft SPLA costs associated with hosting a Windows based OS…the savings should be around $20-$50 US a month right there. A Linux distro is a personal choice, so as long as you can install the following modules it doesn’t really matter which one you use.

  • SSH
  • PHP
  • MySQL
  • Apache
  • HTOP

The only thing I would suggest is that you use a long term support distro, as you don’t want to be stuck on a build that can’t be upgraded or patched to protect against vulnerabilities and exploits. Essentially I am running a traditional LAMP stack (Linux, Apache, MySQL and PHP) built on a minimal install of Ubuntu with only SSH enabled. The upkeep and management of the OS and LAMP stack is minimal, and I would estimate that I have spent about five to ten hours a year since deploying the original server dealing with updates and maintenance. Apache as a web server still performs well enough for a single blog site, though I know many who have made the switch to NGINX and use the LEMP stack.
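If you are building the stack yourself on a minimal Ubuntu install, the whole lot goes on with a handful of commands. A minimal sketch, assuming the Ubuntu 14.04 era PHP5 package names (on 16.04 and later substitute php, libapache2-mod-php and php-mysql):

  sudo apt-get update
  sudo apt-get install apache2 mysql-server php5 libapache2-mod-php5 php5-mysql
  sudo mysql_secure_installation
  sudo service apache2 restart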

The last package on this list is a personal favorite of mine…HTOP is an interactive process viewer for Unix systems that can be installed with a quick apt-get install htop command. As shown below it has a detailed interface and is much better than trying to work through standard top.
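For completeness, installing and launching it is a one-liner each:

  sudo apt-get install htop
  htop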

Tip: If you don’t want to deal with installing the OS or installing and configuring the LAMP packages, you can download a number of ready made appliances that contain the LAMP stack. Turnkey Linux offers a range of appliances that can be deployed in OVA format, including a ready made LAMP appliance as well as a ready made WordPress appliance.

That covers off the hosting and platform components of this blog…In Part 2 I will go through my WordPress install in a little more detail and look at themes and plugins as well as talk about how best to optimize a blogging site with the help of free caching and geo-distribution platforms.

References and Guides:

http://www.ubuntu.com/download/server

http://howtoubuntu.org/how-to-install-lamp-on-ubuntu

https://www.digitalocean.com/community/tutorials/how-to-install-linux-nginx-mysql-php-lemp-stack-in-ubuntu-16-04

Ninefold: Going Head to Head with AWS and Using Opensource is Risky in Cloud Land

Today Ninefold (an Australian based IaaS and PaaS provider) announced that they were closing their doors and would be migrating their clients to their parent company Macquarie Telecom’s cloud services. And while this hasn’t come as a surprise to me…having closely watched Ninefold from its beta days through to its shutdown…it does highlight a couple of key points about the current state of play in the public cloud in Australia and also around the world.

As a disclaimer…this post and the viewpoints given are totally my own, and I don’t pretend to understand the specific business decisions as to why Ninefold decided to shut up shop apart from what was written in the press today around the operational expenditure challenges of upgrading the existing platform.

“After an evaluation of the underlying technical platform, much consideration and deep reflection, we have decided not to embark on this journey,” the company said on Monday.

However, rather than have people simply assume that the IaaS/Cloud game is too hard given the dominance of AWS, Azure and Google, I thought I’d write down some thoughts on why choosing the right underlying platform is key to any cloud’s success…especially when looking to compete with the big players.

Platform Reliability:

Ninefold had some significant outages in their early days…and when I say significant, I mean significant…we are talking days to weeks where customers couldn’t interact with or power on VM instances, to go along with other outages, all of which I was led to believe were due to their adoption of CloudStack and XenServer as their hypervisor. At the time I was investigating a number of Cloud Management Platforms, and CloudStack had some horror bugs which ruled out any plans to go with that platform…I remember thinking how much prettier the interface was compared to the just released vCloud Director, but the list of show stopping bugs was enough to put me off proceeding.

Platform Choice:

CloudStack’s developer Cloud.com was eventually bought by Citrix, and the project was then given to the Apache Foundation, where it currently resides. But for mine, Ninefold’s initial reputation as an IaaS provider never recovered from those early outages, and throughout its history the company attempted to transform itself, firstly into a Ruby on Rails platform, and more recently looked to jump on the containers bandwagon as well as trying to specialize in Storage as a Service.

This to me highlights a fairly well known belief in the industry that going Opensource may be cheap in the short term but is going to come back and bite you in some form later down the track. The fact that the statement on their closure focused mainly on the apparent cost of upgrading their platform (assuming a move to OpenStack or some other *stack based CMP) highlights that going with more supported stacks, such as VMware ESXi with vCloud Director or even Microsoft Hyper-V with Azure, is a safer bet long term, as there are more direct upgrade paths version to version and there is also official support when upgrading.

Competing against AWS Head On:

http://www.itnews.com.au/news/sydney-based-cloud-provides-price-challenge-247576

Macquarie Telecom subsidiary Ninefold launches next week, promising a Sydney-based public cloud computing service with an interface as seamless as those of Amazon’s EC2 or Microsoft’s Azure.

Ninefold from the early days touted themselves as the Public Cloud alternative, and their initial play was to attract Linux based workloads to their platform and offer very, very cheap pricing compared to the other IaaS providers at the time…they were also local in Australia before the likes of AWS and Azure set up shop here.

I’ve talked previously about what Cloud Service Providers should be offering when it comes to competing against the big public cloud players…offering a similar but smaller slice of the services the big players offer, and targeting their bread and butter, will not work long term. Cloud Providers need to add value to attract a different kind of client base to that of AWS and Azure…there is a large pie out there to be had, and I don’t believe we will be in a total duopoly situation for Cloud services in the short to medium term, but Cloud Providers need to stop focusing on price so much as on the quality of their products and services.

Final Thoughts:

Ninefold obviously believed that they couldn’t compete on the value of their existing product set, and due to their initial choice of platform felt that upgrading to one that did allow some differentiation in the marketplace compared to the big public cloud players was not a viable option moving forward…hence their existing clients will be absorbed into a platform that does run a best of breed stack and one that doesn’t try to compete head to head with AWS…at least from the outside.

“Those tier two ISPs and managed services outfits standing up wannabe AWS clones cobbled together out of bits of Xen, OpenStack and cable ties? Roadkill.

As the industry matures, smaller local players will find they can’t make it pay and go away. The survivors will move into roles as resellers and managed services providers who make public cloud easier for those who don’t like to get hands on with the big boys. This is happening already. By 2015 we’ll see exits from the cloud caper.”

http://www.zdnet.com/article/ninefold-to-shut-operations/

http://www.itnews.com.au/news/ninefold-to-shut-down-411312

http://www.crn.com.au/News/411313,macquarie-telecoms-ninefold-closing-down.aspx

http://www.theregister.co.uk/2013/12/19/australias_new_year_tech_headlines_for_2015/

#VeeamOn 2015: Scale-out Backup Repository Will Be Brilliant for Cloud Service Providers

[UPDATE] – This feature will not be available for Cloud Connect in the initial release but will be supported in future updates…hopefully sooner rather than later!

Veeam has been announcing new features for Backup & Replication v9.0 for a while now, but the recent announcement around the Scale-out Backup Repository is probably the most significant so far…especially for those running large backup repositories, such as Service Providers who operate a Cloud Connect offering. Manageability of large repositories has been an ongoing challenge for Veeam administrators, and many know the pain associated with having to juggle storage to accommodate increasing backup file sizes, and what’s involved in having to migrate jobs to larger repositories.

In a sentence, a scale-out repository will group multiple “simple” repositories into a single entity which will then be used as a target for any backup copy and backup job operation.

As Luca describes in his Veeam Blog Post, Veeam administrators won’t have to think too hard to appreciate how this dramatically simplifies the configuration and management of backup jobs: it removes the pain of repository sprawl and optimizes storage by letting the new repository algorithms work out the best placement for a backup job within the global Scale-out Backup Repository namespace.

With this new capability, Service Providers will be able to:

  • Dramatically simplify backup storage and backup job management through a single, software-defined backup repository encompassing multiple heterogeneous storage devices
  • Reduce storage hardware spending by allowing existing backup storage investments to be fully leveraged, eliminating the problem of underutilized backup storage devices
  • Improve backup performance, allowing for lower RPOs and reduced risk of data loss in daily operations

Beyond the official release info…thinking about how this helps Cloud Service Providers offering Cloud Connect and Replication, the fact that you can target all jobs at that single global repository means that as storage becomes an issue, all that’s required is to add a new repository to the group and let Veeam B&R do its job placing any new backups. Perfect for those who had previously struggled to build into their offering a way to automatically load balance jobs based on target repository sizes.

As mentioned…this is a brilliant new feature of v9, and I can’t wait for it to be available for Cloud Connect and Cloud Connect Replication in future v9 updates!

References:

http://www.veeam.com/blog/introducing-scale-out-backup-repository-coming-in-availability-suite-v9.html

Installing and Configuring Cassandra and KairosDB on Ubuntu 14.04 LTS

Earlier this year I put together a post on installing and configuring a Cassandra Cluster in order to meet the requirements for vCloud SP 5.6.x Metric Reporting. In that post I went through the deployment of a Cassandra Cluster and promised a follow up on installing KairosDB. In my labs we are currently working with the vCD Metric APIs and I needed a quick way to stand up the Cassandra/KairosDB environment that vCD Metrics requires. Given that the availability and sizing requirements in the lab are not representative of Production, I decided to create a single node instance.

I also streamlined the Cassandra install by adding the Debian repositories for easier installation and management. Watch the video below (I suggest 2x speed) and check out the key commands listed after the video.
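For reference, adding the Debian repositories boils down to something like the following — a sketch only, assuming the Cassandra 2.1 series that was current at the time (check the Cassandra site for the correct series name and repository signing keys):

  # Add the Apache Cassandra Debian repository (2.1 series shown)
  echo "deb http://www.apache.org/dist/cassandra/debian 21x main" | sudo tee /etc/apt/sources.list.d/cassandra.list
  # Import the repository signing keys published in the Cassandra install docs
  gpg --keyserver pgp.mit.edu --recv-keys F758CE318D77295D
  gpg --export --armor F758CE318D77295D | sudo apt-key add -
  # Install and start Cassandra
  sudo apt-get update && sudo apt-get install cassandra
  sudo service cassandra start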

One of the key settings to configure that’s not shown above is changing the KairosDB datastore from the default in-memory H2 module to the local Cassandra instance. After KairosDB has been started you are ready to point vCloud Director at the endpoint to start exporting VM metrics to…the post showing that is still to come.
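For those playing along, that datastore switch is a two line change in conf/kairosdb.properties (property names as per the KairosDB docs of the time — comment out the H2 module and point the Cassandra module at your node):

  #kairosdb.service.datastore=org.kairosdb.datastore.h2.H2Module
  kairosdb.service.datastore=org.kairosdb.datastore.cassandra.CassandraModule
  kairosdb.datastore.cassandra.host_list=localhost:9160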

First Look: Apache DeltaCloud

As I was browsing my Twitter feed last night I came across a tweet that talked about the 1.0 release of Apache DeltaCloud.

As described on the website:

Deltacloud provides the API server and drivers necessary for connecting to cloud providers.

Deltacloud maintains long-term stability for scripts, tools and applications and backward compatibility across different versions.

Using single API Deltacloud enables management of resources in different clouds.

Start an instance on an internal cloud, then with the same code start another on EC2 or RHEV-M

For something that has come out of the relative blue, it piqued my interest right off the bat. Where I see value in this isn’t so much in the fact you can seamlessly control compute/storage instances from the one platform, but in the ability to look at using the API/REST mechanisms to control vSphere instances. Now, I’ll be up front and honest that I’ve never had a chance to look at coding/developing at this level…it’s all very much outside of my current ability, but I am seeing the need to be at least familiar with these mechanisms…I see them becoming increasingly useful in the Software Defined Datacentre era. It’s a skill that needs to be learnt on my part.

Anyways, I dove straight into the installation on a spare Ubuntu 12.04 lab VM I have running here at Anittel. I followed the step by step guide here and went about installing the various packages and dependencies. I came across a few issues where I had to look for slightly different package names than the ones listed (which may be a reason why I have run into the issues below), but I’ve listed my initial installs below:
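The installs boiled down to Ruby plus the deltacloud-core gem and its native build dependencies — a rough sketch only, as the exact package names differed between the docs and my Ubuntu 12.04 build:

  sudo apt-get install ruby rubygems ruby-dev libxml2-dev libxslt1-dev build-essential
  sudo gem install deltacloud-core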


Once that’s been done you can run DeltaCloud from the CLI as per the instructions here. Right off the bat, trying to run against the vSphere driver I got the following error:

So, something is obviously wrong with the vSphere/Ruby driver. After a little searching I couldn’t find anything definitive, so I have given up on the vSphere angle for the moment. It might well require a fresh server build, as my lab instance is a little dirty.

In any case, the guys at Apache have given you the option to run up a mock instance by running the command below. Note that if you want to access the website from the server IP outside of the OS you need to specify the -r flag.
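Something along these lines does the trick — a sketch assuming the defaults (mock driver, port 3001), with -r binding the API server to an address reachable from outside the VM:

  deltacloudd -i mock -r 0.0.0.0 -p 3001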


Browsing to that address you are presented with a very clean interface.


Without anything real (in terms of instances) behind it, all the options look useful in the cloud compute/storage world, and clicking through a couple of the areas you start to get a feel for the potential usefulness of the platform.


There is also a list of the supported cloud/storage provider drivers (once you get them working).


So, without being able to actually do anything useful…as a first look the platform looks very interesting. Hopefully with a fresh build I can get the vSphere driver working and really start to put it to work.

And if anyone has a quick fix for the issue above, feel free to post or email.