Consumer-facing voice assistants like Amazon Alexa and Google Assistant point toward an M2M-dominated internet, with unprecedented fluctuations in workload. With the rise of this voice-powered Internet of Things (IoT), telcos must be ready to allocate resources efficiently with software-defined infrastructure.

Amazon Alexa and Google Assistant explode onto the scene

In a hundred years, when students study the history of the IoT, they’re likely to have an early lesson on Amazon Alexa and Google Assistant. Although they are not the first connected “things,” they are the first widely used, consumer-facing “things” that are not phones, desktops, or laptops.

Image: Big data and voice-activated assistants

Alexa and Assistant have a (for now) distinctive user interface: they converse with the user by voice. They then send data over the internet on the user’s behalf, just as a phone or PC does. But there’s a significant difference: these voice-activated assistants are the ones making the online requests, not the human being. The user doesn’t know whether the assistant is using a browser, an app, or something else; the whole transaction happens at a remove from what humans typically experience when using the internet. In other words, when you’re looking at a screen and using a browser, it can be said that you are online. But when you ask Alexa to make a purchase, you’re not going online. Alexa is going online for you.

Is Amazon Alexa a first step towards the movie Her?

At this point, Alexa and Assistant both seem to be designed for short-term interaction. You speak a keyword and they wake up; you give them an order and they immediately follow it. But similar devices that think in the longer term can’t be far off. Imagine a proactive device that recommends purchases, activities, even human relationships for you, based on 24/7 (or as much as you’re comfortable with) monitoring of your life. Perhaps this sort of entity is best characterized in the popular imagination by the Scarlett Johansson-voiced “operating system” in the movie Her.

Once we reach the point where the device is the one deciding when to connect, and for how long, it’s effectively using machine-to-machine (M2M) communication. And as M2M communication increases, the landscape of the internet is going to change drastically. Compared with M2M, human activity online is slow and involves relatively little data.

M2M data traffic - an undiscovered country

In fact, we really don’t know what the throughput of a worldwide network dominated by M2M will look like. Like human activity, M2M activity will probably wax and wane, with peaks and lulls, seasonal fluctuations, and some types of resources (memory, compute, network) needed more than others during various periods. Right now, a lot of smart people are spending a lot of time speculating on what kind of traffic patterns that ecosystem might create. But it’s just that: speculation. We won’t know for sure until we actually see it happen.

Datacenter flexibility requires software-defined infrastructure

So what does this mean for telcos that run their own datacenters? It means that fluctuations in demand from enterprise customers (the ones receiving the data from the devices making up the IoT) will be more unpredictable than ever. Right now, traffic spikes can be predicted within a certain margin of error based on known human-oriented events like Black Friday. But throw in a few million devices that effectively make their own decisions (or make decisions together, based on a shared neural network), and we may discover that, for some reason to be determined later, they all wanted to submit orders on a certain Wednesday, between 2 and 3 a.m.

To handle this ebb and flow, the datacenter of the near future must be agile, utilizing software-defined infrastructure (SDI) to make the most efficient use of its resources. Memory, compute, networking, and storage allocated to answering questions asked of Assistant, or to processing orders submitted through devices like Alexa, will be pulled from a pool of available resources.

Then, when the peak period is over, the virtual performance-optimized datacenters (VPODs) that handled the workloads can be dissolved or reconfigured, releasing the components they used back into the pool, ready to be used by IoT devices on a different schedule from personal assistants.

Exercise monitors, perhaps.
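To make the allocate-and-release pattern concrete, here is a minimal sketch in Python. It is purely illustrative: the ResourcePool and Resources names, the resource units, and the numbers are assumptions for the example, not any particular SDI product’s API.

```python
from dataclasses import dataclass

# Illustrative resource units; a real SDI system tracks far finer-grained inventory.
@dataclass
class Resources:
    compute: int   # e.g. CPU cores
    memory: int    # e.g. GB of RAM
    network: int   # e.g. Gbps of bandwidth

class ResourcePool:
    """A shared pool that virtual performance-optimized datacenters (VPODs) draw from."""

    def __init__(self, capacity: Resources):
        self.free = capacity

    def allocate(self, req: Resources) -> bool:
        # Carve out a VPOD only if every resource type is available.
        if (req.compute <= self.free.compute and
                req.memory <= self.free.memory and
                req.network <= self.free.network):
            self.free = Resources(self.free.compute - req.compute,
                                  self.free.memory - req.memory,
                                  self.free.network - req.network)
            return True
        return False

    def release(self, req: Resources) -> None:
        # When the peak passes, the VPOD dissolves and its components return to the pool.
        self.free = Resources(self.free.compute + req.compute,
                              self.free.memory + req.memory,
                              self.free.network + req.network)

# Hypothetical scenario: a 2-3 a.m. surge of voice-assistant orders claims a VPOD,
# then hands it back for other IoT workloads on a different schedule.
pool = ResourcePool(Resources(compute=1000, memory=4096, network=400))
alexa_spike = Resources(compute=600, memory=2048, network=200)
if pool.allocate(alexa_spike):
    pass  # ... serve the surge ...
pool.release(alexa_spike)
```

The point of the sketch is the lifecycle, not the bookkeeping: resources are claimed from a common pool when an unpredictable M2M surge arrives and returned the moment it subsides, so the same hardware can serve whichever devices wake up next.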

To explore the possibilities of hyperscale datacenters now and in the future, please check out our white paper, Hyperscale cloud: reimagining datacenters from hardware to application.

Download the white paper



Michael Bennett Cohn

Michael Bennett Cohn was head of digital product and revenue operations at Condé Nast, where he created the company's first dynamic system for digital audience cross-pollination. At a traditional boutique ad agency, he founded and ran the digital media buying team, during which time he planned and executed the digital ad campaign that launched the first Amazon Kindle. At Federated Media, where he was the first head of east coast operations, he developed and managed conversational marketing campaigns for top clients including Dell, American Express, and Kraft. He also has a master's degree in cinema-television from the University of Southern California. He lives in Brooklyn.

