Given this was my first AWS re:Invent I didn’t know what to expect from the keynotes. While Wednesday’s keynote focused on new release announcements, Thursday’s keynote with Werner Vogels was geared more towards thought leadership on where AWS wants to take, over the next two to five years, the industry it has enabled. He titled this 21st Century Architecture and talked about how AWS doesn’t go about building its platforms by itself in an isolated environment…it takes feedback from clients, which allows it to radically change the way it builds its systems.
The goal is to design very nimble and fast tools, from which customers can decide exactly how to use them. The sheer number of new tools and services I’ve seen AWS release since I first used them back in 2011 is actually quite daunting. As someone who is not a developer but has come from a hosting and virtualization background, I sometimes look at AWS as offering complex simplicity. In fact I wrote about that very thing in this post from 2015. In that post I was a little cynical of AWS, and while I still don’t hold the opinion that AWS is the be all and end all of all things cloud, I have come around to understanding the way they go about things…
Treating the Machine as Human:
I wanted to take some time to comment on Vogels’ thoughts on voice and speech recognition. The premise was that all past and current interactions with computers have been driven by the machine…screen, keyboard, mouse and fingers are all common, however up to this point it could be argued that this is not the way we naturally interact with other people. Because this interaction is driven by the machine, we know not only how to interact with machines, but also how to manipulate the inputs so we get what we want as efficiently as possible.
If I look at the example of Siri or Alexa today…when I ask them to answer a query, I know to fashion the question in such a way that will allow the technology to respond…this works most of the time because I know how to structure the questions to get the right answer. I treat the machine as a machine! If I look at how my kids interact with the same devices, their way of asking questions is not crafted as if they were talking to a computer…they ask Alexa a question as if she was real. They treat the machine as a person.
This is where Vogels started talking about his vision for the interfaces of the future to be more human centric, all based around advances in neural network technology that allow for near real-time responses and will drive the future of interfaces to these digital systems. The first step is going to be voice, and Amazon has looked to lead the way in how home users interact with Amazon.com through Alexa. With the release of Alexa for Business, this will extend beyond the home.
For IT pros there is a future in voice interfaces that allow you not only to get feedback on the current status of systems, but also (like in many SciFi movies of the last 30 to 40 years) to command functions and dictate through voice the configuration, setup and management of core systems. This is already happening today with a few projects that I’ve seen using Alexa to interact with VMware vCenter, or like the video below showing Alexa interacting with a Veeam API to get the status of backups.
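To give a rough feel for what a skill like that Veeam example might look like under the hood, here’s a minimal sketch of an Alexa-style handler that turns a backup-status payload into a spoken summary. To be clear, this is my own illustration: the JSON shape, field names (`jobs`, `result`, `name`) and the `build_backup_status_speech` helper are assumptions for the sketch, not the real Veeam REST API schema or the actual code from the video.

```python
# Hypothetical sketch: turning a backup-status API payload into an
# Alexa-style spoken response. The payload shape is an assumption for
# illustration, not the real Veeam REST API schema.

def build_backup_status_speech(payload):
    """Summarise backup job results as a sentence Alexa could speak."""
    jobs = payload.get("jobs", [])
    if not jobs:
        return "I could not find any backup jobs."
    succeeded = sum(1 for j in jobs if j.get("result") == "Success")
    failed = [j["name"] for j in jobs if j.get("result") == "Failed"]
    speech = f"{succeeded} of {len(jobs)} backup jobs succeeded."
    if failed:
        speech += " The following jobs failed: " + ", ".join(failed) + "."
    return speech

def handle_intent(event, status_payload):
    """Wrap the spoken text in a minimal Alexa-style response envelope."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {
                "type": "PlainText",
                "text": build_backup_status_speech(status_payload),
            },
            "shouldEndSession": True,
        },
    }

# Example payload as the skill might receive it from the backup API
sample = {
    "jobs": [
        {"name": "SQL Nightly", "result": "Success"},
        {"name": "File Share", "result": "Failed"},
        {"name": "VM Images", "result": "Success"},
    ]
}

response = handle_intent({"request": {"type": "IntentRequest"}}, sample)
print(response["response"]["outputSpeech"]["text"])
# → 2 of 3 backup jobs succeeded. The following jobs failed: File Share.
```

In a real skill the payload would come from an authenticated call to the backup server’s API inside a Lambda function, but the interesting part is the same: translating machine-shaped status data into something a person would actually say out loud.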
There are negatives to voice interfaces, with the potential for voice-triggered mistakes high. However, as these systems become more human centric, voice should allow us a more normal and natural way of interacting with systems…at that point we may stop being able to manipulate the machine, because the interaction will have become natural. AWS is trying to lead the way with products like Alexa, but almost every leading computer software company is toying with voice and AI, which means we are quickly nearing an inflection point from which we will see an acceleration of the technology, leading it to become a viable alternative to today’s more commonly used interfaces.