Category Archives: Microsoft

Veeam is now in the Network Game! Introducing Veeam Powered Network.

Today at VeeamON 2017 we announced the Release Candidate of Veeam PN (Veeam Powered Network), which, together with our existing Direct Restore to Microsoft Azure feature, creates a new solution called Veeam Disaster Recovery for Microsoft Azure. At the heart of this new solution is Veeam PN, which extends an on-premises network into Azure, enhancing our availability capabilities around disaster recovery.

Veeam PN allows administrators to create, configure and connect site-to-site or point-to-site VPN tunnels easily through an intuitive, simple UI in a couple of clicks. There are two components to Veeam PN: a Hub Appliance that's deployable from the Azure Marketplace, and a Site Gateway that's downloadable from the veeam.com website and deployable on-premises from an OVA, meaning it can be installed onto any environment that supports OVA deployment.

Veeam PN for Microsoft Azure (Veeam Powered Network) is a free solution designed to simplify and automate the setup of a disaster recovery (DR) site in Microsoft Azure using lightweight software-defined networking (SDN).

  • Provides seamless and secure networking between on-premises and Azure-based IT resources
  • Delivers easy-to-use and fully automated site-to-site network connectivity between any site

Veeam PN is designed for both SMB and Enterprise customers, as well as service providers.

From my point of view this is a great example of how Veeam is no longer just a backup company but one focused on availability. Networking is still the most complex part of executing a successful disaster recovery plan. Veeam PN addresses this by easily extending on-premises networks to DR networks, providing connectivity from remote sites back to DR networks via site-to-site connections, and giving remote endpoints the ability to connect into the Hub Appliance via a point-to-site connection.

Look out for more information from myself on Veeam PN as we get closer to GA.

The Anatomy of a vBlog Part 1: Building a Blogging Platform

Earlier this week my good friend Matt Crape sent out a tweet lamenting the fact that he was having issues uploading media to WordPress. Shortly after that tweet went out, Matt wasn't short of Twitter and Slack vCommunity advice (follow the Twitter conversation below), and there were a number of options presented to him on how best to host his blogging site, Matt That IT Guy.

Over the years I have seen that same question of "which platform is best" pop up a fair bit, and I thought it a perfect opportunity to dissect the anatomy of Virtualization is Life!. There is no right or wrong answer to which blogging platform is best; like most things in life, the platform you use to host your blog depends on your own requirements and resources. For me, I've always believed in eating my own dog food and I've always liked total end-to-end control of the sites I run. So while what I'm about to talk about worked for me, you might like to look at alternative options…but feel free to borrow from my example, as I do feel it gives bloggers full flexibility and control.

Brief History:

Virtualization is Life! started out as Hosting is Life! back in April of 2012, and I chose WordPress at the time mainly due to its relatively simple installation and ease of use. The site was hosted on a Windows hosting platform that I had built at Anittel, utilizing WebsitePanel on IIS 7.5 and running FastCGI to serve the PHP content. The server backend was hosted on a VMware ESX cluster out of the Anittel Sydney zones. The cost of running the site was approximately $10 US per month.

Tip: At this stage the site was effectively on a shared hosting platform, which is a great way to start off as the costs should be low and maintenance and uptime should be covered by the hoster's SLA.

Migration to Zettagrid:

When I started at Zettagrid, I had a whole new class of virtual infrastructure at my disposal and decided to migrate the blog to one of Zettagrid's Virtual Datacenter products, where I provisioned a vCloud Director vDC and created a vApp with a fresh Ubuntu VM inside. The migration from a Windows-based system to Linux went more smoothly than I expected, and I only had a few issues with some character maps after restoring the folder structure and database.

The VM itself is configured with the following hardware specs:

  • 2 vCPU (5GHz)
  • 4GB vRAM
  • 20GB Storage

As you can see above, the actual usage pulled from vCloud Director shows how little resource a VM with a single WordPress instance uses. That storage number actually represents the expanded size of a thin-provisioned disk…actual usage on the file system is less than 3GB, and that is with four and a half years and about 290 posts worth of media and database content. I'll go through site optimizations in Part 2, but in reality the amount of resources required to get you started is small…though you have to consider the occasional burst in traffic and work in a buffer, as I have done with my VM above.

The cost of running this Virtual Datacenter in Zettagrid is approx $120 US per month.

Tip: Even though I am using a vCloud Director vDC, given the small resource requirements initially needed, a VPS or instance-based service might be a better bet. Azure, AWS and Google all offer instance-based VMs, but a more boutique provider like DigitalOcean might suit even better.

Networking and Security:

From a networking point of view I use the vShield/NSX Edge that is part of vCloud Director as my gateway device. This handles all my DHCP, NAT and firewall rules and is able to handle the site traffic with ease. If you want to look at what the vShield/NSX Edges are capable of, check out my NSX Edge vs vShield Series. Both the basic vShield Edges and NSX Edges have decent load balancing features that can be used in high availability situations if required.

As shown below I configured the Gateway rules from the Zettagrid MyAccount Page but could have used the vCloud Director UI. For a WordPress site, the following services should be configured at a minimum.

  • Web (HTTP)
  • Secure Web (HTTPS)
  • FTP (Locked down to only accept connections from specific IPs)
  • SSH (Locked down to only accept connections from specific IPs)
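For comparison, if you were hosting on a plain Linux VPS without an Edge gateway in front, an equivalent policy can be applied on the host itself. A minimal sketch using ufw on Ubuntu (the management IP below is a placeholder, substitute your own):

```shell
# Allow web traffic from anywhere
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp

# Lock down SSH and FTP to a single trusted management IP (placeholder address)
sudo ufw allow from 203.0.113.10 to any port 22 proto tcp
sudo ufw allow from 203.0.113.10 to any port 21 proto tcp

# Default-deny everything else inbound, then enable the firewall
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw --force enable
```

The same default-deny-plus-exceptions approach is what the Edge firewall rules implement; only the tooling differs.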

OS and Web Platform Details:

As mentioned above, I chose Ubuntu as my OS of choice to run WordPress, though any Linux flavour would have done the trick. Choosing Linux over Windows obviously means you save on the Microsoft SPLA costs associated with hosting a Windows-based OS…the savings should be around $20-$50 US a month right there. A Linux distro is a personal choice, so as long as you can install the following packages it doesn't really matter which one you use.

  • SSH
  • PHP
  • MySQL
  • Apache
  • HTOP

The only thing I would suggest is that you use a long-term support distro, as you don't want to be stuck on a build that can't be upgraded or patched to protect against vulnerabilities and exploits. Essentially I am running a traditional LAMP stack (Linux, Apache, MySQL and PHP) built on a minimal install of Ubuntu with only SSH enabled. The upkeep and management of the OS and LAMP stack is minimal; I would estimate that I have spent about five to ten hours a year since deploying the original server dealing with updates and maintenance. Apache as a web server still performs well enough for a single blog site, though I know many who have made the switch to NGINX and use the LEMP stack.
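For anyone starting from scratch, getting the LAMP packages onto a minimal Ubuntu install only takes a couple of commands. A rough sketch (package names are from a recent Ubuntu LTS and may vary between releases):

```shell
# Refresh package lists, then install Apache, MySQL, PHP and the glue modules
sudo apt-get update
sudo apt-get install -y apache2 mysql-server php libapache2-mod-php php-mysql

# Harden the MySQL install interactively (root password, remove test database)
sudo mysql_secure_installation

# Quick sanity check: Apache should answer on localhost
curl -I http://localhost/
```

From there the WordPress install is just a matter of dropping the files into the web root and creating a database and user for it.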

The last package on the list is a personal favorite of mine…htop is an interactive process viewer for Unix systems that can be installed with a quick apt-get install htop command. As shown below, it has a detailed interface and is much better than trying to work through standard top.

Tip: If you don't want to deal with installing the OS or installing and configuring the LAMP packages, you can download a number of ready-made appliances that contain the LAMP stack. Turnkey Linux offers a number of appliances deployable in OVA format, including a ready-made LAMP appliance as well as a ready-made WordPress appliance.

That covers off the hosting and platform components of this blog. In Part 2 I will go through my WordPress install in a little more detail and look at themes and plugins, as well as talk about how best to optimize a blogging site with the help of free caching and geo-distribution platforms.

References and Guides:

http://www.ubuntu.com/download/server

http://howtoubuntu.org/how-to-install-lamp-on-ubuntu

https://www.digitalocean.com/community/tutorials/how-to-install-linux-nginx-mysql-php-lemp-stack-in-ubuntu-16-04

Azure Stack – Microsoft’s White Elephant?

Microsoft's Worldwide Partner Conference is currently on again in Toronto, and even though my career has diverged from working on the Microsoft stack (no pun intended) over the past four or five years, I still attend the local Microsoft SPLA monthly meetings where possible and keep a keen eye on what Microsoft is doing in the cloud and hosting spaces.

The concept of Azure Stack has been around for a while now, and it entered Technical Preview early this year. Azure Stack was/is touted as an easily deployable end-to-end solution that gives enterprises Azure-like flexibility on-premises, covering IaaS, PaaS and containers. The premise of the solution is solid, and Microsoft obviously sees an opportunity to cash in on the private and hybrid cloud market, which at the moment hasn't been locked down by any one vendor or solution. The end goal, though, is for Microsoft to have workloads that are easily transportable into the Azure cloud.

Azure Stack is Microsoft’s emerging solution for enabling organizations to deploy private Azure cloud environments on-premises. During his Day 2 keynote presentation at the Worldwide Partner Conference (WPC) in Toronto, Scott Guthrie, head of Microsoft’s Cloud and Enterprise Group, touted Azure Stack as a key differentiator for Microsoft compared to other cloud providers.

The news overnight at WPC is that, apart from the delay in its release (which wasn't unexpected given the delays in Windows Server 2016), Microsoft has now said that Azure Stack will only be available via pre-validated hardware partners, which means that customers can't deploy the solution themselves and the stack loses flexibility.

Neil said the move is in response to feedback from customers who have said they don’t want to deal with the complexities and downtime of doing the deployments themselves. To that end, Microsoft is making Azure Stack available only through pre-validated hardware partners, instead of releasing it as a solution that customers can deploy, manage and customize.

This is an interesting and, in my opinion, risky move by Microsoft. There is precedent to suggest that going down this path leads to lesser market penetration and could turn Azure Stack into the white elephant that I suggested in a tweet and in the title of this post. You only have to look at how much of a failure VMware's EVO:Rail product was to understand the risks of tying a platform to vendor-specific hardware and support. Effectively they are now creating a converged infrastructure stack with Azure bolted on, whereas before there was absolute freedom for enterprises to deploy Azure Stack onto existing hardware, allowing them to realise existing investments and extend them to provide private cloud services.

As with EVO:Rail and other Validated Designs, I see three key areas where they suffer and impact customer adoption.

Validated Design Equals Cost:

If I take EVO:Rail as an example, there was a premium placed on obtaining the stack through the validated vendors, and this meant a huge premium over what could have been sourced independently when you took hardware, software and support costs into account. Potentially this will be the same for Azure Stack…vendors will add their percentage for the validated design, plus ongoing maintenance. As mentioned above, there is also now the fact that you must buy new hardware (compute, network, storage), meaning any existing hardware that can and should be used for private cloud is now effectively dead weight, and enterprises need to rethink long-term about existing investments.

Validated Design Equals Inherent Complexity:

When you take something in-house and don't let smart technical people deploy a solution, my mind starts to ask the question: why? I understand the argument will be that Microsoft wants a consistent experience for Azure Stack, and there are other examples of controlled deployments and tight solutions (VMware NSX comes to mind in its early days), but when the market you are trying to break into is built on the premise of reduced complexity, only allowing certain hardware and partners to run and deploy your software tells me it walks a fine line between being truly consumable and being a black box. I've talked about Complex Simplicity before, and this move suggests that Azure Stack was not ready or able to be given to techs to install, configure and manage.

Validated Design Equals Inflexibility:

Both of the points above lead to the suggestion that Azure Stack loses its flexibility. Flexibility in the private and hybrid cloud world is paramount, and existing players like OpenStack are extremely flexible…almost to a fault. If you buy from a vendor you lose the flexibility of choice and can then be impacted at will by cost pressures relating to maintenance and support. If Azure Stack is too complex to be self-managed, then it certainly loses the flexibility to be used in the service provider space…let alone the enterprise.

Final Thoughts:

Worryingly, the tone of the official blog announcement over the delay suggests that Microsoft is reaching to justify the delay and the reasoning for the different distribution model. You just have to read the first few comments on the blog post to see that I am not alone in my thoughts.

Microsoft is committed to ensuring hardware choice and flexibility for customers and partners. To that end we are working closely with the largest systems vendors – Dell, HPE, Lenovo to start with – to co-engineer integrated systems for production environments. We are targeting the general availability release of Azure Stack, via integrated systems with our partners, starting mid-CY2017. Our goal is to democratize the cloud model by enabling it for the broadest set of use-cases possible.

 

With the release of Azure Stack now 12+ months away, Microsoft still has the opportunity to change the perception that the WPC 2016 announcements have created in my mind. The point of private cloud is to drive operational efficiency in all areas. Having a fancy interface with all the technical trimmings isn't what will make an on-premises stack gain mainstream adoption. Flexibility, cost and reduced complexity are what count.

References:

https://azure.microsoft.com/en-us/blog/microsoft-azure-stack-delivering-cloud-infrastructure-as-integrated-systems/?utm_campaign=WPC+2016&utm_medium=bitly&utm_source=MNC+Microsite

https://rcpmag.com/articles/2016/07/12/wpc-2016-microsoft-delays-azure-stack.aspx

http://www.zdnet.com/article/microsoft-to-release-azure-stack-as-an-appliance-in-mid-2017/

http://www.techworld.com.au/article/603302/microsoft-delays-its-azure-stack-software-until-mid-2017/

#vBrownBag TechTalk – NSX…An Unexpected Journey

While at VMworld a couple of weeks ago I presented a short talk around my journey working with NSX-v and how it has shifted (pivoted) the direction of what I consider to be important in my day to day role. The unexpected part of the journey dragged me kicking and screaming into the world of APIs and dare I say…Devops.

And while I don't consider myself a DevOps person (far from it), I find myself getting sucked into that world more and more, and with that I am trying to adjust how I consume IT. In any case, if you have a spare 10 minutes, have a listen to how NSX kickstarted my interest and got me looking more under the covers of the server platforms and services we sometimes take for granted. Before this change I was comfortable accepting a UI as the only way to interact with and consume services…are you?

For those interested, the full schedule is here, along with direct links to the YouTube channel with all the talks.

http://professionalvmware.com/2015/08/vbrownbag-techtalks-schedule-vmworld-usa-2015/

SharePoint 2010 Web UI Timeout Creating Web Application: Quick Fix

Had a really interesting issue over the last couple of days with a large SharePoint farm instance we host…when we tried to create a new Web Application, the task was failing on the SharePoint farm members. While initially thrown off by a couple of permission-related event log entries for SharePoint admin database access, there was no clear indication of the problem or why it started happening after weeks of no issues.

The symptoms were that from the Central Admin website -> Application Management -> Manage Web Applications page, creating a new Web Application would eventually return what looked like an HTTP timeout error. Looking at the Central Admin page on both servers showed the Web Application as present and created, and the WSS file system was in place on both servers…however, the IIS application pool and website were only created on the server that ran the initial New Web Application. What's worse is that there were no event logs or SharePoint logs that captured the issue or cause.


In an attempt to see a little more verbose logging during the New Web Application process, I ran the New-SPWebApplication PowerShell command below:

New-SPWebApplication -Name "www.site.com.au443" -Port 443 -HostHeader "www.site.com.au" -URL "https://www.site.com.au" -ApplicationPool "www.site.com.au443" -ApplicationPoolAccount (Get-SPManagedAccount "DOMAIN\spAppPoolAcc") -DatabaseServer MSSQL-01 -DatabaseName WSS_Content_Site -SecureSocketsLayer:$true -Verbose

While the output wasn't as verbose as I had expected, to my surprise the Web Application was created and functional on both servers in the farm. After a little time with Microsoft Support (who focused on permissions as the root cause for most of that time), we modified the Shutdown Time Limit setting under the Advanced Settings of the SharePoint Central Admin application pool:


The original value is set to 90 seconds by default. We raised this to 300 and tested the New Web Application function from the Web UI, which this time completed successfully. While it makes logical sense that an HTTP timeout was happening, the SharePoint farm wasn't overly busy or under high resource load at the time, yet it still wasn't able to complete the request in 90 seconds.
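If you'd rather script the change than click through IIS Manager, the same setting can be pushed with appcmd from an elevated command prompt. A sketch, assuming the default SharePoint 2010 Central Admin application pool name — verify yours in IIS Manager first:

```shell
REM Raise shutdownTimeLimit from the default 90 seconds to 300 (hh:mm:ss format)
%windir%\system32\inetsrv\appcmd.exe set apppool "SharePoint Central Administration v4" /processModel.shutdownTimeLimit:"00:05:00"

REM Recycle the pool so the new limit takes effect
%windir%\system32\inetsrv\appcmd.exe recycle apppool "SharePoint Central Administration v4"
```

Scripting it also makes it easy to bake the change into the build process for future farm deployments.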

One to modify for all future/existing deployments.