Someday, humans might be connected directly to the internet. It will be the final step of a process that has been going on for a long time. But if we're smart, we'll have the proper infrastructure built far in advance.

The final countdown

We approach the final interface. The agency that hatched the internet, the U.S. Defense Advanced Research Projects Agency (DARPA), has partnered with Paradromics Inc. to create an implant that will, if it works, transmit data between a human brain and a computer. The technology involves cords of tiny wires produced by a technique “similar to the one that produces fiber-optic strands,” according to Technology Review.

There’s plenty of room to be skeptical. Even if the interface works exactly as hoped, what will the data look like? How will it be represented in whatever machine analyzes it? What other methodology will be necessary to turn it into real, actionable insights? And most importantly, what challenges will we face in formatting data to send in the other direction: from the machine to the brain?

Taking the long view

But if we take a step back, we have to acknowledge that, from a Big Data point of view, the success or failure of this particular project doesn’t really matter (although we wish our friends at Paradromics all the best). Sooner or later, there will be a usable, direct, human-computer interface, available at scale and connected to the internet, effectively turning its human users into nodes from which data can be collected.

When that happens, we’ll have an Internet of Carbon (IoC). The amount of data collected in the IoC, how that data will be used, and how fast it will have to be routed in order to be useful, are difficult to predict. But suffice it to say that there will be a lot of data, with many uses, and it will have to be routed very fast, all over the world. And consider the traffic surges. Workload fluctuations might depend on how many people are awake, or whether they’re experiencing stress, or just whether they happen to be concentrated in one place.

The IoT will prepare us for the IoC

The infrastructure needed to support that kind of massive, unpredictable, high-demand workload will have to be very robust indeed, and very flexible. Fortunately, the world of datacenter infrastructure has already gotten started on the intermediate step of supporting the Internet of Things (IoT): the vast data network made up of the smart devices that currently serve as non-surgical extensions of the human body (smartphones, exercise trackers, and VR/AR headsets), along with the quickly proliferating smart devices that will soon inhabit every aspect of our lives: smart cars, smart planes, smart home appliances, smart drones, and so forth.

The IoT is new, but the use of technology as a people connector has been around for a while. In fact, all early communication networks were designed to connect people to each other, going back to telegraphs and carrier pigeons. If we think just in terms of electricity-based systems in which the end user is actually touching the nearest node, then perhaps the first generation of the human network was voice calling. After that came mobile voice, then the internet (email, VoIP, video conferencing, and so on), and then personal data collected from non-phone mobile devices, such as personal health data collected from a Fitbit. With the IoT, we're going to see such personal data collectors proliferate endlessly. The IoC will be the final step, wherein carbon-based life forms are integrated with our silicon-based brethren directly, rather than through proxy devices.

The amount of data flowing to and from those devices may not reach the level of the data that will flow through the IoC, but it’s going to get a heck of a lot closer than we have ever been before. Certainly, IoT traffic threatens to overwhelm the infrastructure of traditional datacenters, which were primarily built to handle traffic generated by (non-cyborg) users sitting in offices, browsing the web, and clicking on ads. (Ah, the good old days.)

SDI paves the way for networks of the future

The solution is software-defined infrastructure (SDI): pools of hardware resources (compute, memory, storage, and networking) that can be brought together as needed to form virtual performance-optimized datacenters (vPODs).
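To make the idea concrete, here is a minimal sketch of the pooling concept behind SDI: hardware capacity sits in shared pools, and a vPOD is composed on demand from slices of each pool. All names, units, and numbers here are hypothetical illustrations, not an actual Ericsson or Intel API.

```python
from dataclasses import dataclass

@dataclass
class ResourcePool:
    """A shared pool of one hardware resource type (hypothetical model)."""
    name: str
    capacity: int          # abstract units: cores, GB, TB, Gbps, ...
    allocated: int = 0

    def claim(self, amount: int) -> int:
        """Reserve `amount` units from the pool; fail if exhausted."""
        if self.allocated + amount > self.capacity:
            raise RuntimeError(f"{self.name} pool exhausted")
        self.allocated += amount
        return amount

@dataclass
class VPod:
    """A virtual performance-optimized datacenter: a slice of each pool."""
    compute: int
    memory: int
    storage: int
    network: int

def compose_vpod(pools: dict, compute: int, memory: int,
                 storage: int, network: int) -> VPod:
    """Carve a vPOD out of the shared hardware pools."""
    return VPod(
        compute=pools["compute"].claim(compute),
        memory=pools["memory"].claim(memory),
        storage=pools["storage"].claim(storage),
        network=pools["network"].claim(network),
    )

# Example pools (capacities are made-up figures for illustration).
pools = {
    "compute": ResourcePool("compute", capacity=1024),   # cores
    "memory":  ResourcePool("memory",  capacity=8192),   # GB
    "storage": ResourcePool("storage", capacity=500),    # TB
    "network": ResourcePool("network", capacity=400),    # Gbps
}

# A bursty IoT workload arrives: compose a vPOD sized for it.
vpod = compose_vpod(pools, compute=64, memory=512, storage=20, network=40)
```

The point of the sketch is the decoupling: capacity lives in pools, not in fixed server boxes, so a traffic surge can be met by composing a larger vPOD rather than by over-provisioning every machine.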

Hardware pools set up for SDI require backwards-compatible CPUs built for the purpose. And to handle the workloads that are coming, they have to be darn fast. Which is why Ericsson is excited about our deployment of the new Intel® Xeon® Scalable processor in our Hyperscale Datacenter System 8000. Using that processor, we recently tested our evolved packet core (EPC) solution, the Ericsson virtual Evolved Packet Gateway (vEPG), at a throughput of 40 Gbps per processor. Download the exclusive report below.

Download now

Learn more about the Intel® Xeon® Scalable processor.


header image: Kevin Dooley



Geoff Hollingworth

Geoff is Head of Product Marketing Cloud Systems, responsible for the global positioning, promotion, and education of Ericsson’s next-generation cloud infrastructure offerings. He was previously embedded with AT&T in Silicon Valley, leading Ericsson’s innovation efforts toward the AT&T Foundry initiative. He has also served as Head of IP Services Strategy for North America, overseen the Ericsson brand in North America, and held other roles in software R&D and mobile network deployment. Since joining Ericsson more than 20 years ago, Geoff has been based in London, Stockholm, Dallas, and Palo Alto. He holds a First Class Honors Bachelor's degree in Computing Science and won the Computing Science Prize of Excellence from Aston University in Birmingham, United Kingdom.