Monthly Archives: May 2012

vCloud Director and Citrix NetScaler How-To

I've come across a couple of how-tos on configuring vCloud Cells in a highly available, load-balanced environment. There is a good overview here by @hany_michael, with the always excellent @ccolotti referenced throughout, and specific posts such as this one from @DuncanYB for F5 load balancers…but nothing on Citrix's NetScalers. It must be said that I had help during my initial configuration and troubleshooting from Chris Colotti over a few Twitter DMs, which helped me nut out the Console Proxy setup.


Citrix acquired NetScaler in 2005 and in 2009 released the NetScaler VPX appliances, which allowed the platform to go virtual. Read more about the NetScalers here. This guide is based on the 9.3 VPX platform, but should hold for previous versions and the just-released 10.x platform. As a side note, I've worked with Cisco 4840 and Juniper DX load balancers, and I have to say that the NetScalers are far and away the best platform I've come across. Feature-packed for more than just load balancing, I've found the interface intuitive, performance (even in a Virtual Appliance) has been rock solid, and they offer Service Provider Licensing!

Environment Overview:

I won't go too deep into the specifics of the vCloud setup, but in a nutshell we are talking about a generic two-cell setup connected to a typical vCenter design as governed by the vCAT 2.0. Both cells are in a private VLAN fronted by the NetScaler, which in turn is fronted by a border gateway that handles the public-to-private IP NATing and firewalling. The NetScaler Visualizer below shows the basic layout, from the Virtual Server IPs and Service Names through to the Web UI and Console Proxy services on both cells.


CISCO ASA Configuration:

There is nothing special here, as the ASA sits in front of the NetScaler, handles the NATing of the public IP to the private Virtual IP on the NetScalers, and also acts as the firewall.
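For reference, the NAT and firewall piece can be sketched in ASA 8.3+ syntax roughly as follows. All addresses and object/ACL names here are placeholders for illustration, not from my actual config:

```
! Hypothetical addresses: 10.0.1.100 is the NetScaler VIP, 203.0.113.10 the public IP
object network obj-vcd-vip
 host 10.0.1.100
 nat (inside,outside) static 203.0.113.10
! Post-8.3, the inbound ACL references the real (private) address via the object
access-list outside_access_in extended permit tcp any object obj-vcd-vip eq https
access-group outside_access_in in interface outside
```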


NetScaler Config – Server Setup:

Once logged into the Web UI of the NetScaler, the first thing you want to do is add both vCloud Cells as Server Objects. Expand the Load Balancing tree root and select Servers. From there, right-click in the center pane and select Add. You want to create two entries per cell…one for the main Web Portal interface IP, and one for the Console Proxy interface IP.

There isn't a lot of detail to enter: just the Server Name and the IP Address, as shown below.


In my setup with two cells I have four server entries in the central pane as shown below.
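If you prefer the NetScaler CLI to the Web UI, the same four server objects can be created with something along these lines (the names and private IPs are examples only, not defaults):

```
add server vcd-cell1-web 10.0.1.11
add server vcd-cell1-cproxy 10.0.1.12
add server vcd-cell2-web 10.0.1.13
add server vcd-cell2-cproxy 10.0.1.14
```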


TIP: With the NetScaler, if you click Add on a previously created object you will be presented with the settings of the selected object. From there it's potentially a quick edit for the new element.

NetScaler Config – Service Setup:

There are a couple of ways to configure services, and it comes down to whether you want to group like services into Service Groups or configure individual services per Server instance. In this example I have used individual Services, as selected under the Load Balancing tree root shown on the right. These options come together when configuring the Virtual Servers and boil down to being able to control weightings on specific Services on a per-server basis, or grouping a farm of services together in the group. In either case you can take the underlying server in and out of production at will via the Servers section.

The Web Portal vCloud Cell interface is setup as shown below. 


Enter in the Service Name and select the Server from the dropdown. Protocol for this interface is SSL and the Port is 443.


TIP: Under the Advanced tab you should see the Client IP Header as a globally set value similar to the above. This allows us to have the vCloud logs report back the originating client IP instead of the IP of the load balancers…handy for advanced logging and troubleshooting.

For the vCloud Cell Console Proxy interface, the single biggest gotcha is configuring the protocol as SSL. @ccolotti guided me through my initial problems with this setup and got me to configure the protocol as TCP. Once that was configured as shown below, I was able to view the console.
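At the CLI the four services would look something like the following — note SSL for the Web Portal services and TCP for the Console Proxy services. The service names, and the server object names they bind to, are example values, not defaults:

```
add service svc-cell1-web vcd-cell1-web SSL 443
add service svc-cell2-web vcd-cell2-web SSL 443
add service svc-cell1-cproxy vcd-cell1-cproxy TCP 443
add service svc-cell2-cproxy vcd-cell2-cproxy TCP 443
```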


For me, this is one of the real features that makes a Hosted/Cloud Server truly functional. Having the console available via the management layer is a must and is pretty much standard with most management layers out there…along with the ability to stop/start/reset VMs. At this point I would mention that one of my biggest gripes with vCloud is that there are no real-time resource graphs or usage stats…hopefully this is added in future releases – take that to be a Feature Request, VMware 🙂

At the end of this process you should see the following in your central window pane:


NetScaler Config – Virtual Server Setup:

Once you have configured your Servers and Service Groups, the final part is to put it all together and configure the Virtual Servers. The IP that you allocate (the VIP) is what you NAT your public IP to. Right-click and select Add to get the Configure Virtual Server window as shown below. Enter a Name and your IP Address, and select SSL or TCP (depending on whether you are setting up the Web Portal or the Console Proxy) as the Protocol. The Port remains 443. On the Services tab, bind the Service Names of the Services we created in the steps above.
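The equivalent CLI for creating the two Virtual Servers and binding the services would be along these lines. The VIP addresses, vserver names, and service names are placeholders I've made up for the example:

```
add lb vserver vs-vcd-web SSL 10.0.1.100 443
bind lb vserver vs-vcd-web svc-cell1-web
bind lb vserver vs-vcd-web svc-cell2-web
add lb vserver vs-vcd-cproxy TCP 10.0.1.101 443
bind lb vserver vs-vcd-cproxy svc-cell1-cproxy
bind lb vserver vs-vcd-cproxy svc-cell2-cproxy
```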


Click on the Method and Persistence tab; here you want to set your LB Method algorithm and your Persistence method. There are a few to choose from in the list provided by the NetScaler, but I tend to always choose Least Connections, which sends the next connection request to the server with the least number of active connections. One thing you don't want in a load-balanced setup for the Web Portal or Console Proxy is sessions bouncing between cells without stickiness, which leads to session state loss. The Persistence method can be IP-based or cookie-based for the Web Portal, but the Console Proxy needs to be IP-based, as cookies aren't an option with TCP set as the protocol. Time-out can be set to any value you see fit, but I like setting this to 120 minutes to ensure a long stick.
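From the CLI, the Least Connections method, source-IP persistence, and the 120-minute persistence timeout can be set in a single command per Virtual Server. Assuming Virtual Servers named vs-vcd-web and vs-vcd-cproxy (example names only), something like:

```
set lb vserver vs-vcd-web -lbMethod LEASTCONNECTION -persistenceType SOURCEIP -timeout 120
set lb vserver vs-vcd-cproxy -lbMethod LEASTCONNECTION -persistenceType SOURCEIP -timeout 120
```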


The final step in setting up the Virtual Server is to bind an SSL certificate. Click on the SSL Settings tab. Assuming you have imported your SSL certificate into the NetScaler prior to setup, select the certificate from the left pane and click Add.
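Assuming the wildcard certificate and key files have already been uploaded to the appliance, the certificate can be installed and bound from the CLI roughly as follows (file names, the certkey name, and the vserver name are examples only):

```
add ssl certKey wildcard-cert -cert wildcard-example-com.crt -key wildcard-example-com.key
bind ssl vserver vs-vcd-web -certkeyName wildcard-cert
```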

At the end of this process you should see the following in your central window pane:


vCloud SSL and WildCards:

Early on in my vCloud testing, I spent a huge amount of time trying to import a wildcard SSL certificate into the keystore without much luck. From what I could find online there weren't a lot of good how-tos on getting this process down pat with vCloud…let alone with any Java-based keystore setup. My workaround was to put a load balancer in front of the cells. This way, clients connect in over SSL to the NetScaler (with a legit wildcard SSL certificate) and the NetScaler connects to the cells over SSL/TCP (with the default vCloud certificate), ignoring the certificate warning that no one likes to see on a production system.


For a robust, redundant and highly available vCloud Cell design, a solid load balancer fronting the platform is a must. The Citrix NetScalers are impressive appliances and are an excellent addition to any vCloud implementation.

SharePoint 2010 Web UI Timeout Creating Web Application: Quick Fix

Had a really interesting issue with a large SharePoint Farm instance we host over the last couple of days: when we tried to create a new Web Application, the task was failing on the SharePoint Farm members. While initially thrown off by a couple of permission-related event log entries for SharePoint Admin database access, there was no clear indication of the problem or why it started happening after weeks of no issues.

The symptom being experienced was that from the Central Admin website -> Application Management -> Manage Web Applications page, creating a New Web Application would eventually return what looked like an HTTP timeout error. Looking at the Central Admin page on both servers, the Web Application showed as present and created, and the WSS file system was in place on both servers…however, the IIS Application Pool and website were only created on the server that ran the initial New Web Application. What's better, there were no event logs or SharePoint logs that recorded the issue or its cause.


In an attempt to see a little more verbose logging during the New Web Application process, I ran the New-SPWebApplication PowerShell cmdlet below:

New-SPWebApplication -Name "" -Port 443 -HostHeader "" -URL "" -ApplicationPool "" -ApplicationPoolAccount (Get-SPManagedAccount "DOMAIN\spAppPoolAcc") -DatabaseServer MSSQL-01 -DatabaseName WSS_Content_Site -SecureSocketsLayer -Verbose

While the output wasn't as verbose as I had expected, to my surprise the Web Application was created and functional on both servers in the farm. After a little time with Microsoft Support (which focused on permissions as the root cause for most of the time), we modified the Shutdown Time Limit setting under the Advanced Settings of the SharePoint Central Admin application pool:


The value is set to 90 seconds by default. We raised this to 300 and tested the New Web Application function from the Web UI, which this time completed successfully. While it does make logical sense that an HTTP timeout was happening, the SharePoint farm wasn't overly busy or under high resource load at the time, yet still wasn't able to complete the request in 90 seconds.
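For anyone wanting to script this change rather than click through IIS Manager, the same setting can be changed from an elevated command prompt with appcmd. The application pool name below is the SharePoint 2010 default for Central Admin — verify yours in IIS Manager first:

```
%windir%\system32\inetsrv\appcmd.exe set apppool "SharePoint Central Administration v4" /processModel.shutdownTimeLimit:00:05:00
```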

One to modify for all future/existing deployments.

The Backup Delusion – Part 1

I'll put this right out there: I would rather live in a world without backup and recovery. I have burnt countless hours and hair follicles working my way through, and trying to tame, backup application platforms. Unfortunately we have not reached a point where the technology we use is reliable and resilient enough to prevent failures, and so we back up and sometimes we recover.

Historically, companies and service providers have relied on tape backups to protect their mission-critical data, but with the advent of the digital age and, to a lesser extent, virtualization, we find ourselves in an opposing world of increasing resource density and efficiency and 'Big Data'. Tape drives, while still in use by some, have given way to disk-based backup systems, and applications have failed to keep pace with the change.

I've had the misfortune of dealing with a large number of backup applications over the past couple of years and very few, if any, have lived up to expectation. From poor application support (sometimes waiting a year after a platform's release for it to be supported) to products that staggeringly can't recover data they claim to have backed up successfully. The amount of man hours I see being burnt by onsite techs and senior engineers on backend and client-side issues is mind-boggling. I would be very interested to see the dollar value backup applications suck out of service providers and businesses alone! The number of times I've heard a tech or salesperson try to explain to a customer that, while we had the backup, and it appeared to be working, we couldn't recover your data…sorry about that!

And as I currently try to truncate 500GB worth of Exchange Server logs (on a virtual server that had a 300GB snapshot go out of control and consume all datastore space, resulting in VM failure) due to a new version of a product that previously performed the function but now does not until a future patch, I ponder: what makes a good backup application? I'm also wondering if traditional backup applications are the way to go. Do we still need to provide an application? Does that application need to cover all requirements?

Traditionally a backup Application needed to cover the following:

–  Agent Compatibility/Deployment

–  Application Awareness via API/VSS

–  File Level Backup Options

–  Bare Metal Recovery of Physical Servers

Throw Virtualization into the mix and you need to cover the following:

–  Agentless Backup Options

–  Multi-Platform Support (?)

–  Change Block Tracking

– Offsite Backup Options

Now throw in Operational Requirements and Expectations to cover the following:

–  Cost of licensing Application and vendor royalties

–  Cost of backend storage and ongoing costs of data sprawl

–  Requirement for storage efficiencies through enhanced compression and de-dupe

–  Proven stability and scalability

–  Minimal Engineering and ongoing Management time

And lastly, throw in business/client expectations to cover the following:

–  Relative value for money – "I want the world, but don't want to pay for it."

–  100% faith in the product being delivered – "You said it would work!"

–  Fast backup and recovery times – "I need that file from 18 months ago, now!"

–  Expectation that the application backs up everything – "This is my DR, right?"

–  Offsite backup options – "To the Cloud! It's safer up there, I hear?"

Ok, so I might have vented some pent-up frustration drawn from client interactions in that last part…but the question remains: is there a product that ticks all those boxes? And while vendors will have you believe the marketing FUD, I have yet to find a product that does…and I would argue that no product will ever meet all requirements. We are about to enter the post-PC era, and while that's debatable in some (Redmond) circles, the truth is that we have seen the landscape of data, and how it's stored and accessed, shift…and with that, current backup applications and the platforms they sit upon simply can't cope with the change.

So what do we have at our disposal to cope with this change? What vendor will release that 'Silver Bullet' application that solves all our issues? I don't believe there will ever be one application that covers all bases…but there are certainly new applications and technologies that back up and control data which are emerging or close to release. In Part 2 I'll go through these and try to (not solve, but) work through what would be suitable for the foreseeable future in data backup and recovery, and introduce the often misinterpreted concept of DRaaS.

The Backup Delusion – Part 2