Category Archives: General

There is still a Sting in the Tail for Cloud Service Providers

This week it gave me great pleasure to see my former employer, Zettagrid, announce a significant expansion of their operations, with the addition of three new hosting zones to go along with their existing four zones in Australia and Indonesia. They also announced the opening of operations in the US. Apart from the fact that I still have a lot of good friends working at Zettagrid, the announcement vindicates the position and role of the boutique Cloud Service Provider in the era of the hyper-scale public cloud providers.

When I decided to leave Zettagrid, I’ll be honest and say that one of the reasons was that I wasn’t sure where the IaaS industry would be placed in five years. That is now more than three years ago, and in that time the industry has pulled back significantly from the previously inferred position of total and complete hyper-scale dominance of the cloud and hosting market.

Cloud is not a Panacea:

The industry no longer talks about the cloud as a holistic destination for workloads, and more and more over the past couple of years the move has been towards multi and hybrid cloud platforms. VMware has (in my eyes) been the leader of this push, but the inflection point came at AWS re:Invent last year, when AWS Outposts was announced. That shift in mindset, driven by the undisputed leader in the public cloud space, is all about consuming an on-premises resource in a cloud way.

I’ve always been a big supporter of boutique Service Providers and Managed Service Providers… it’s in my blood, and my role at Veeam allows me to continue to work with top innovative service providers around the world. Over the past three years, I’ve seen the really successful ones thrive by pivoting to offer their partners and tenants differentiated services… going beyond just traditional IaaS.

These might be in the form of enhancing their IaaS platform by adding more avenues to consume services. Examples of this are adding APIs, or the ability for the new wave of Infrastructure as Code tools to provision and manage workloads. vCloud Director is a great example of continued enhancement that, with every release, offers something new to the service provider tenant. The Pluggable Extension Architecture now allows service providers to offer new services for backup, Kubernetes and Object Storage.

Backup and Disaster Recovery is Driving Revenue:

A lot of service providers have also transitioned to offering Backup and Disaster Recovery solutions, which in many cases has been their biggest growth area over the past few years… even with the aggressively cheap pricing the hyper-scalers offer on their cloud object storage platforms.

All this leads me to believe that there is still a very significant role to be had for Service Providers in conjunction with other cloud platforms for a long time to come. The service providers that are succeeding and growing are not sitting on their hands and expecting what once worked to continue working. The successful service providers are looking at ways to offer more services and continue to be that trusted provider of IT.

I was once told in the early days of my career that if a client has 2.3 products with you, then they are sticky and the likelihood is that you will have them as a customer for a number of years. I don’t know the actual accuracy of that, but I’ve always carried that belief. This flies in the face of modern thinking around service mobility, which has been reinforced by improvements in underlying network technologies that allow the portability and movement of workloads. This also extends to the ease with which a modern application can be provisioned, managed and ultimately migrated. That said, all service providers want their tenants to be sticky and not move.

There is a Future!

Whether it be through continuing to evolve existing service offerings, adding more ways to consume their platform, becoming a broker for public cloud services or being a trusted final destination for backup and Disaster Recovery, the talk about the hyper-scalers dominating the market is currently not a true reflection of the industry… and that is a good thing!

Veeam Vault #11 – VBR, Veeam ONE, VAC Releases plus Important Update for Service Providers

Welcome to the 11th edition of Veeam Vault and the first one for 2019! It’s been more than a year since the last edition, however in light of some important updates that have been released over the past couple of weeks and months, I thought it was time to open up the Vault again! Getting stuck into this edition, I’ll cover the releases of Veeam Backup & Replication 9.5 Update 4b and Veeam ONE 9.5 Update 4a, as well as an update for Veeam Availability Console and some supportability announcements.

Backup & Replication 9.5 Update 4b and Veeam ONE 9.5 Update 4a:

In July we released Update 4b for Veeam Backup & Replication 9.5. It brought with it a number of fixes for common support issues as well as some important platform supportability milestones. If you haven’t moved to 4b yet, it’s worth getting there as soon as possible. You will need to be on at least 9.0 Update 2 (build 9.0.0.1715) or later prior to installing this update. After the successful upgrade, your build number will be 9.5.4.2866.

Veeam ONE 9.5 Update 4a was released in early September and contains similar platform supportability to Backup & Replication, as well as a number of fixes. Details can be found in this VeeamKB.

Backup & Replication Platform Support:

  • VMware vCloud Director 9.7 compatibility at the existing Update 4 feature levels.
  • VMware vSphere 6.5 U3 and 6.7 U3 supportability – the GA releases of both are officially supported with Update 4b.
  • Microsoft Windows 10 May 2019 Update and Microsoft Windows Server version 1903 support, both as guest VMs and for the installation of Veeam Backup & Replication and its components, as well as Veeam Agent for Windows 3.0.2 (included in the update).
  • Linux kernel version 5.0 support with the updated Veeam Agent for Linux 3.0.2 (included in the update).

For a full list of updates and bug fixes, head to the official VeeamKB. Update 4b is a cumulative update, meaning it includes all enhancements delivered as part of Update 4a. There are also a number of fixes specifically for Veeam Cloud & Service Providers that offer Cloud Connect services. For the full change log, please see this topic on the private VCSP forum.

https://www.veeam.com/kb2970

VAC 3.0 Patch:

Update 3 for Veeam Availability Console v3 (build 2762) was released last week, and contains a number of important fixes and enhancements. The VeeamKB lists all the resolved issues, but I’ve summarized the main ones below. It is suggested that all VAC installations be updated as soon as possible. As a reminder, don’t forget to ensure you have a backup of the VAC server before applying the update.

  • UI – Site administrators can select Public IP Addresses belonging to a different site when creating a company. Under certain conditions, “Used Storage” counter may display incorrect data on the “Overview” tab.
  • Server – Veeam.MBP.Service fails to start when managed backup agents have duplicate IDs (due to cloning operation) in the configuration database.
  • Usage Reporting – Under certain conditions, usage report for managed Veeam Backup & Replication servers may not be created within the first ten days of a month.
  • vCloud Director – Under certain conditions, the management agent may connect to a VAC server without authentication.
  • Reseller – A reseller can change his or her backup quota to “unlimited” when creating a managed company with “unlimited” quota.
  • RESTful APIs – Querying “v2/tenants/{id}” and “/v2/tenants/{id}/backupResources” information may take a considerable amount of time.

https://www.veeam.com/kb3003

Veeam Cloud Connect Replication Patch:

Probably one of the more important patches we have released of late has to do with a bug found in the stored procedure that generates automated monthly license usage reports for Cloud Connect Replication VMs. The report displays an unexpected number of replicated VMs and licensed instances, which has been throwing off some VCSP license usage reporting. If VCSPs use the PowerShell command Get-VBRCloudTenant -Name “TenantName”, the correct information is returned.

To fix this right now, VCSPs offering Cloud Connect Replication can visit this VeeamKB, download an SQL script and apply it to the MSSQL server as instructed. There will also be an automated patch released, and the fix will be baked into future updates of Backup & Replication.

https://www.veeam.com/kb3004

Quick Round Up:

Along with a number of platform supportability announcements at VMworld 2019, it’s probably important to reiterate that we now have a patch available that allows us to support restores into NSX-T for VMware Cloud on AWS SDDC environments. This also means that NSX-T is supported on all vSphere environments. The patch will be baked into the next major release of Backup & Replication.

Finally, the Dell EMC SC storage plug-in is now available, which I know will be popular among our VCSP community who leverage SC arrays in their Service Provider platforms. Being able to offload the data transfer of backup and replication jobs to the storage layer introduces a performance advantage. In this way, backups from storage array snapshots provide a fast and efficient way for the Veeam backup proxy to move data to a Veeam backup repository.

Quick Fix – OS Not Found Deploying Windows Template with Terraform

During the first plan execution of a new VM based on a Windows Server Core VM Template, my Terraform plan timed out on Guest Customizations. The same plan had worked without issue previously with an existing Windows Template, so I was a little confused as to what had gone wrong. When I checked the console of the cloned VM in vSphere, I found that it was stuck at the boot screen, unable to find the Operating System.

Operating System not found – Obviously having issues booting into the templated disk.

After a little digging around, I came across this post, which describes the error as being related to the VM Template being configured with EFI firmware, which is now the default for vSphere 6.7 VMs. Upon cloning, Terraform deploys the new VM with BIOS firmware, resulting in a disk that is unable to boot.

Checking the VM Template, it did in fact have EFI set.

An option was to reconfigure the Template and make it default to BIOS; however, the Terraform vSphere Provider was updated last year to include an option to set the firmware on deployment.

In the instance declaration file, we can set the firmware as shown below.
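
Something along these lines does the trick (a cut-down sketch rather than my full plan – the resource name is arbitrary and the data sources for the template, resource pool, datastore and network are assumed to be declared elsewhere in the config):

    resource "vsphere_virtual_machine" "vm" {
      name             = "win-core-01"
      resource_pool_id = "${data.vsphere_resource_pool.pool.id}"
      datastore_id     = "${data.vsphere_datastore.datastore.id}"

      num_cpus = 2
      memory   = 4096
      guest_id = "${data.vsphere_virtual_machine.template.guest_id}"

      # Match the firmware of the source Template (efi or bios) so the cloned disk can boot
      firmware = "${var.firmware}"

      network_interface {
        network_id = "${data.vsphere_network.network.id}"
      }

      disk {
        label = "disk0"
        size  = "${data.vsphere_virtual_machine.template.disks.0.size}"
      }

      clone {
        template_uuid = "${data.vsphere_virtual_machine.template.id}"
      }
    }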

If we set it up so that it reads that value from a variable, we only have to configure the efi or bios setting once in the terraform.tfvars file.
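
For example, in terraform.tfvars (the value shown here is illustrative, matching my EFI Template):

    firmware = "efi"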

In the variables.tf file, the variable is declared with a default value of bios.
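
A minimal declaration along these lines does the job (description text is my own wording):

    variable "firmware" {
      description = "Firmware interface for the cloned VM (bios or efi)"
      default     = "bios"
    }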

Once this was configured, the plan was able to successfully deploy the new VM from the Windows Template without issue, and Guest Customizations were able to continue.

Terraform Version: 0.11.7

Resources:

https://github.com/terraform-providers/terraform-provider-vsphere/issues/441

https://github.com/terraform-providers/terraform-provider-vsphere/pull/485

https://www.terraform.io/docs/providers/vsphere/r/virtual_machine.html#firmware

VMworld 2019 – Still Time To Go for FREE*!

VMworld is rapidly approaching, and for those who have not secured their place at the event in San Francisco, and for whatever reason have been hindered in purchasing an event ticket… there is still time and there is still a way!

Over the course of the last few months we (Veeam) have been running a competition giving away three FULL conference passes, but it ends on the 19th of August, so time is running out!

Head here to register for the chance to win a FULL conference pass.

For a quick summary of what is happening at VMworld from a Veeam perspective including sessions, parties and more, click here to head to the main event page that contains details on what Veeam is doing at VMworld 2019.

*The Prize does not include any associated costs including but not limited to expenses, insurance, travel, food or additional accommodation costs unless otherwise specified above.

I went on Holiday (Vacation) and Managed to Switch Off!!

The jet lag has almost passed… I’ve nearly caught up with the backlog of messages on my various social platforms… expenses are almost done… and I’m about to hit Outlook and clear my email inbox. Yes, I’ve just come back from nearly 4.5 weeks away on holiday (vacation) and, barring a few conversations on Teams with my team, I managed to switch off almost 100%… In fact this is my first blog post in more than a month!

To be honest, this is something that I thought I would never do, and on one side I feel somewhat ok about it… while on the other I have a case of mild agita over needing to catch up and get back into the groove of work life.

“Don’t F*cking Tell me to Switch Off!!!”

This was the original title of the post (modified to soften the blow) and mainly relates to the bucket load of messages and comments telling me to switch off while on leave as I started this trip. This is something I have experienced throughout my career and I see it all the time when people make the “mistake” of checking into Twitter, or posting in Slack when they are on vacation.

There is almost nothing I find more frustrating than people telling me (or others) how to spend my time… be it on holiday or otherwise and especially when it comes to work related matters. I’ve written before about work/life balance and how I have struggled to achieve that over the years and in fact the whole work/life balance in IT has become a real topic since then and there have been many people that have written about their own personal struggles.

To that end, when people tell me to switch off, I tend to respond with what is stated above and the immediate thought that resonates in my mind is that I’ll switch off when and if I damn well please! And if I don’t, then that is ok as well! If I feel balance and I am ok in myself, then it’s something that is in my control and not the place for others to try and dictate to me.

Regardless… I Did Switch Off

When it comes to my thoughts around switching off… it comes down to the fact that my hobby is also my job and my career. Tinkering is how I learn and an important component of learning is staying connected and engaged with the various online communities and content sources. This is why I find it hard to completely switch off. I don’t deny that there is a physiological side to this which equates to an addiction… it’s well documented that we thrive on the hits of dopamine that come from social reward.

For us as techies, that social reward is linked to emails, messages, Tweets, likes, hits, views etc. I’ll be honest and admit that I do crave all those things as well as social interaction with my workmates. However as I settled into my holiday I began to replace the need for technical reward with that of personal and family rewards that generated different types of dopamine hits.

 

The max hit came while at a local village feast in Gozo, where memories of my childhood trips to Malta came flooding back… and as I ate my third Imaret I was at max switch off level and knew that I had succeeded in doing something I thought not possible! Total disconnect!

I captured that moment below in the fourth picture… this is for me as a reminder of where I can get to if I ever feel the need to switch off again.

Ultimately I was able to not touch my MBP for work all holiday and I let myself drift away from my connected world without much thought or fear of missing out… for the most part 🙂

I still did a bit here and there, but not nearly as much as I had thought. Now that I am back, it’s time to get into the connected world and get back to what I do… stay engaged, stay connected and stay switched on!

Kubernetes Everywhere…Time to Take off the Blinkers!

This is more or less a follow up post to the one I wrote back in 2015 about the state of containers in the IT World as I saw it at the time. I started off that post talking about the freight train that was containerization along with a cheeky meme… fast forward four years and the narrative around containers has changed significantly, and now there is new cargo on that freight train… and it’s all about Kubernetes!

In my previous role working at a Cloud Provider, shortly after writing that 2015 post I started looking at ways to offer containers as a service. At the time there wasn’t much out there, but I dabbled a bit in Docker and, if you remember it, VMware’s AppCatalyst… which I used to deploy basic Docker images on my MBP (I think it’s still installed, actually), with the biggest highlight for me being able to play Docker Doom!

I was also involved in some of the very early alphas for what was at the time vSphere Integrated Containers (Docker containers as VMs on vCenter), which didn’t catch on compared to what is currently out there for the mass deployment and management of containers. VMware did evolve its container strategy with Pivotal Container Service, however those outside the VMware world were already looking elsewhere as the reality of containerised development, along with serverless and cloud, took hold and became accepted as mainstream IT practice.

Even four or five years ago I was hearing the word Kubernetes often. I remember sitting in my last VMware vChampion session where Kit Colbert was talking about Kuuuuuuuurbenites (the American pronunciation stuck in my mind) and how we all should be ready to understand how it works, as it was about to take over the tech world. I didn’t listen… and now I realise that I should have started looking into Kubernetes and container management in general more seriously, sooner.

Not because it’s fundamental to my career path… not because I feel like I was lagging technically, and not because there have been those saying for years that Kubernetes will win the race. There is an opportunity to take off the blinkers and learn something that is being widely adopted, by understanding the fundamentals of what makes it tick. In terms of discovery and learning, I see this much like what I have done over the past eighteen months with automation and orchestration.

From a backup and recovery point of view, we have been seeing an increase in the field in customers and partners asking how they back up containers and Kubernetes. For a long time the standard response was “why?”. But it’s becoming more obvious that the initially stateless nature of containers is making way for more stateful, persistent workloads. So now, it’s not only about backing up the management plane… but also understanding that we need to protect the data that sits within the persistent volumes.

What I’ll Be Doing:

I’ve been superficially interested in Kubernetes for a long time, reading blogs here and there and trying to absorb information where possible. But as with most things in life, you best learn by doing! My intention is to create a series of blog posts that describe my experiences with different Kubernetes platforms, ultimately deploying a simple web application with persistent storage.

These posts will not be how-tos on setting up a Kubernetes cluster etc. Rather, I’ll look at general config, application deployment, usability, cost and whatever else becomes relevant as I go through the process of getting the web application online.

Off the top of my head, I’ll look to work with these platforms:

  • Google Kubernetes Engine (GKE)
  • Amazon Elastic Container Service for Kubernetes (EKS)
  • Azure Kubernetes Service (AKS)
  • Docker
  • Pivotal Container Service (PKS)
  • vCloud Director CSE
  • Platform9

The usual suspects are there in terms of the major public cloud providers. From a Cloud and Service Provider point of view, the ability to offer Kubernetes via vCloud Director is very exciting, and if I were still in my previous role I would be looking to productize that ASAP. For a different approach, I have always liked what Platform9 has done, and I was also an early tester of their initial managed vSphere support, which has now evolved into managed OpenStack and Kubernetes. They also recently announced Managed Applications through the platform, which I’ve been playing with today.

Wrapping Up:

This follow up post isn’t really about the state of containers today, or what I think about how and where they are being used in IT today. The reality is that we live in a hybrid world and workloads are created as-is for specific platforms on a need-by-need basis. At the moment there is nothing to say that virtualization, in the form of Virtual Machines running on hypervisors on-premises, is being replaced by containers. Between on-premises, public clouds and everything in between, workloads are being deployed in a variety of fashions… Kubernetes seems to have come to the fore and has reached a level of maturity that makes it a viable option… and that could not be said four years ago!

It’s time for me (maybe you) to dig underneath the surface!

Link:

https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/

Kubernetes is mentioned 18 times in this post and on that page.

First Look: On Demand Recovery with Cloud Tier and VMware Cloud on AWS

Since Veeam Cloud Tier was released as part of Backup & Replication 9.5 Update 4, I’ve written a lot about how it works and what it offers in terms of offloading data from more expensive local storage to what is fundamentally cheaper remote Object Storage. As with most innovative technologies, if you dig a little deeper… different use cases start to present themselves and unintended use cases find their way to the surface.

Such was the case when, together with AWS and VMware, we looked at how Cloud Tier could be used as a way to allow on demand recovery into a cloud platform like VMware Cloud on AWS. By way of a quick overview, the solution shown below has Veeam backing up to a Scale-out Backup Repository which has a Capacity Tier backed by an Object Storage repository in Amazon S3. There is a minimal operational restore window set, which means data is offloaded to the Capacity Tier more quickly.

Once there, if disaster happens on premises, an SDDC is spun up and a Backup & Replication server is deployed and configured in that SDDC. From there, a SOBR is configured with the same Amazon S3 credentials and connects to the Object Storage bucket, which detects the backup data and starts a resync of the metadata back to the local Performance Tier (as described here). Once the resync has finished, workloads can be recovered, streamed directly from the Capacity Tier.

The diagram above has been published on the AWS Reference Architecture page, and while this post has been brief, there is more to come by way of an official AWS Blog Post co-authored by myself and Frank Fan from AWS around this solution. We will also look to automate the process as much as possible to make this a truly on demand solution that can be actioned with the click of a button.

For now, the concept has been validated, and the hope is that people looking to leverage VMware Cloud on AWS as a target for disaster recovery will look to leverage Veeam and the Cloud Tier to make that happen.

References: AWS Reference Architecture

Quick Fix: Unable to Login to WordPress Site

I’ve just had a mild scare in that I was unable to log into this WordPress site even after trying a number of different ways to gain access by resetting the password via the methods listed on a number of WordPress help sites. The standard reset my password via email option was also not working. I have access directly to the web server and also have access to the backend MySQL database via PHPMyAdmin. Even with all that access, and having apparently changed the password value successfully, I was still getting failed logins.


I had recently enabled Two Factor Authentication using Google Authenticator and the WordPress plugin of the same name. I suspected that this might be the issue, as one of the suggestions on the troubleshooting pages was to disable all plugins.

Luckily, I remembered that through the WordPress website you have administrative access back to your blog site. So rather than go down a more complex and intrusive route, I went in and remotely disabled the plugin in question.

Disabling that plugin worked and I was able to log in. I’m not sure yet if there were general issues with Google Authenticator, or if the plugin had some sort of issue, however the end result was that I could log in and my slight panic was over.

An interesting note is that most things can be done through the WordPress website, including publishing blog posts and general site administration. In this case it saved me a lot of time trying to work out why I was not able to log in. So if you do have issues with your login, and you suspect it’s a plugin, make sure you have access to WordPress.com and remotely handle the activation status of the plugin.

The Reality of Disaster Recovery Planning and Testing

As recent events have shown, outages and disasters are a fact of life in this modern world. Given the number of different platforms that data sits on today, we know that disasters can equally come in many shapes and sizes and lead to data loss and impact business continuity. Because major, wide-scale disasters occur far less often than smaller disasters within a datacenter, it’s important to plan and test cloud disaster recovery models for smaller disasters that can happen at different levels of the platform stack.

Because disasters can lead to revenue, productivity and reputation loss, it’s important to understand that having cloud based backup is just one piece of the data protection puzzle. Here at Veeam, we empower our cloud and service providers to offer services based on Veeam Cloud Connect Backup and Replication. However, the planning and testing of what happens once disaster strikes is ultimately up to either the organizations purchasing the services or the services company offering Disaster Recovery as a Service (DRaaS) that is wrapped around backup and replication offerings.

Why it’s Important to Plan:

In theory, planning for a disaster should be completed before selecting a product or solution. In reality, it’s common for organizations to purchase cloud DR services without an understanding of what needs to be put in place prior to workloads being backed up or replicated to a cloud provider or platform. Concepts like recovery time and recovery point objectives (RTPO) need to be understood and planned so that, if a disaster strikes and failover occurs, applications are not only recovered within SLAs, but the data on those recovered workloads is also useful in terms of its age.

Smaller RTPO values go hand-in-hand with increased complexity and administrative services overhead. When planning ahead, it’s important to size your cloud disaster platform and build the right disaster recovery model that’s tailored to your needs. When designing your DR plan, you will want to target strategies that relate to your core line of business applications and data.

A staged approach to recovery means that you recover tier-one applications first so the business can still function. A common tier-one application example is the mail server. Another is the payroll system, the loss of which could result in an organization being unable to pay its suppliers. Once your key applications and services are recovered, you can move on to recovering data, keeping in mind that archival data generally doesn’t need to be recovered first. Again, being able to categorize the systems where your data sits and then working those categories into your recovery plan is important.

Planning should also include specific tasks and controls that need to be followed up on and adhered to during a disaster. It’s important to have specific run books executed by specific people for a smoother failover. Finally, it is critical to make sure that all IT staff know how to access applications and services after failover.

Why it’s Important to Test:

When talking about cloud based disaster recovery models, there are a number of factors to consider before a final sign-off and validation of the testing process. Once your plan is in place, test it regularly and make adjustments if issues arise from your tests. Partial failover testing should be treated with the same level of criticality as full failover testing.

Testing your DR plan ensures that business continuity can be achieved in a partial or full disaster. Beyond core backup and replication services testing, you should also test networking, server and application performances. Testing should even include situational testing with staff to be sure that they are able to efficiently access key business applications.

Cloud Disaster Recovery Models:

There are a number of different cloud disaster recovery models that can be broken down into three main categories:

  • Private cloud
  • Hybrid cloud
  • Public cloud

Veeam Cloud Connect technology works for hybrid and public cloud models, while Veeam Backup & Replication works across all three models. The Veeam Cloud & Service Provider (VCSP) program offers Veeam Cloud Connect backup and replication, classified as hybrid clouds offering RaaS (recovery-as-a-service). Public clouds, such as AWS and Azure, can be used with Veeam Backup & Replication to restore VM workloads. Private clouds are generally internal to organizations and leverage Veeam Backup & Replication to replicate, back up, or create backup copies of VMs between datacenter locations.

The ultimate goal here is to choose a cloud recovery model that best suits your organization. Each of the models above offers technological diversity and a different price point. Each also involves different planning and testing in order to, ultimately, execute a disaster recovery plan.

When a partial or full disaster strikes, a thoroughly planned and well-tested DR plan, backed by the right disaster recovery model, will help you avoid a negative impact on your organization’s bottom line. Veeam and its cloud partners, service-provider partners and public cloud partners can help you build a solution that’s right for you.

First published on veeam.com by me – modified and updated for republishing today.

In Defence of Qantas Dreamliner’s Premium Economy

I don’t usually use this blog to write about things other than technology, but seeing as a big part of my professional life is spent in the air flying, I felt compelled to write a quick post in rebuttal of this article on Channel News Australia. The article, in my opinion, unfairly paints the Premium Economy seats on the Qantas 787 Dreamliners in a bad light, and I wanted to share my experience travelling in the seats while countering some of the claims made by the author.

Updated: Qantas Dreamliner Premium Economy Seating Should Face ACCC Probe

Indeed it is a bit dramatic to be calling for ACCC intervention in a case where it is more about one’s personal experience of what I and many others think to be one of the best flying experiences that exists in global aviation today. The author has every right to his opinion, but the fact is that he was a little short-sighted in his review and, I felt, unnecessarily harsh on the seat… which in turn painted the whole Qantas Premium Economy experience as not as advertised and substandard.

My Experience:

JetItUp tells me that I have been on eight Qantas Dreamliner flights, totalling nearly 64 thousand kilometres of distance travelled. Of those eight flights, six have been in Premium Economy, one in Business and one in Economy. I’ve sat in a number of different rows and seats on those Premium Economy flights, with the latest (flying Perth to London on QF9) being in a similar bulkhead seat (20F) to the one complained about in the article.

While it is not a 100% positive experience being in the bulkhead seats of the Qantas 787s… compared to other carriers and other aircraft, the seats are amazing and allow for maximum comfort on any long haul flight. The service on the Dreamliners is impeccable, the food is above average for airline food, and the cabin and seats are modern and comfortable. Unlike in other Premium Economy seats, at the bulkhead with the extra leg room I was able to almost sleep flat for most of the 17-hour journey from Perth to London.

SeatGuru also lists the bulkhead seats as being highly sought after… with the only downside of those seats being the proximity to the baby bassinet.

Directly Addressing Some Comments:

Instead of a Premium Economy seat with a screen in front of you and above all a footrest that allow one to stretch out I got bulkhead seat with no screen, along with nowhere to put a pair of headphones or tablet other than on the floor.

It shouldn’t be a surprise that a bulkhead seat doesn’t have a screen in front of it and that the screen is stowed away inside the seat… this is true of any bulkhead seat on any airline around the world. It is also incorrect that there is no footrest in the bulkhead row (more about that below), and there was more than enough room for me to slide a 13″ MacBook Pro plus some other bits and bobs into the pocket at the front.

The so called leather and very flexible footrest that the bulk of other Premium Economy passengers are afforded is a footrest which is a hard metal fixture that has a simple extender and a soft mesh centre that results in one having to try and sleep with your knees bent while at the same time trying to prevent the metal bar at the top of the fixture cutting into your feet as you try to sleep.

This is the one section that made me want to write this post. I felt that the flexible footrest at the bulkhead was the best feature of the seat and allowed me to relax at an almost business class level. With the added leg room, you are able to almost lie flat (I am 178cm and of average build) once the Premium seat is reclined (38 inches of pitch) and settled into a comfortable spot.

In addition to that, I was able to stow some items below my feet, however this is not as secure or safe as in a non-bulkhead seat… but that is a minor thing (see below).

I was not even given a choice of accepting or declining a bulkhead seat when checking in.

This is completely false, as any Qantas customer can choose their seating online up to three hours before the flight takes off. As soon as the author knew he was in Premium Economy he should have checked the seat allocation and chosen to his liking. It’s true that at check-in there might not have been the option to change, as every other Premium seat in the cabin might have been pre-allocated, but even the most basic flyer would have had the option to choose.

A Few Cons:

The one downside I will agree on is the lack of room to stow personal items and drinks. The bulkheads need to remain clear for takeoff and landing, meaning you need to place most items in the bag holders above. This was highlighted to me on one of my more recent flights on the Qantas A380 Premium Economy upper deck (Seat 24J), which is an aisle, emergency exit seat. There was literally no bulkhead and no place to store anything, and I must admit this was frustrating… a similar experience might be had in the equivalent 747 Premium Economy seats… however every bulkhead on the 787 Dreamliner has the footrest and at least minimal space to stow items. As I mentioned above, I was able to stow the laptop in front of me no worries.

Final Word:

Don’t take my word for it… I’ve embedded a YouTube review below that looks at the Premium Economy seats and talks about the pros and cons of the bulkhead and rear seats of the Premium Cabin.

Overall, my experience of the Premium Economy Dreamliner cabin is that it is up there with the best flying experience in the world. The ChannelNewsAU article is grossly inaccurate and in fact sensationalist in its review of the bulkhead seats… the seats which should be, and are, most coveted by any frequent flyer who travels for a living.

Disclosure: I travel for business regularly and obtain upgrades through Qantas Classic Points Rewards.

References:

Updated: Qantas Dreamliner Premium Economy Seating Should Face ACCC Probe

https://www.seatguru.com/airlines/Qantas_Airways/Qantas_Airways_Boeing_789.php
