There could be a prize-winning answer to the question: “What is the next big thing?” In this chapter, we will try to read the crystal ball to determine, for the technologies we have discussed throughout this book, the likelihood that we will see some form of evolution in the coming decades and which new and disruptive technologies could still revolutionize the cloud century.

All the technology areas we have discussed throughout this book, such as computing, networking, the Internet, virtualization, the IoT and big data analytics, will see significant changes as well as significant improvements, which will form the basis of how our lives might look some decades from now. Among all these evolutions we will certainly find some of the next big things that will revolutionize our future.

The Ever-Increasing Computing Power

The future of computing brings exciting new possibilities. Considering how computing has evolved over the past 50–60 years, from the revolutions of personal computers and laptops to the processing power available today on mobile devices like tablets or smartphones, we might have only seen the tip of the iceberg in terms of what is feasible in computing.

For example, the Apple iPhone 4,Footnote 1 introduced in 2010, offered the same processing power as the Cray-2Footnote 2 supercomputer from 1985, roughly 1.6 GFLOPS (giga floating point operations per second). Considering this evolution, it is pretty obvious that these numbers will increase even faster over the next decades, while miniaturization will allow for the design of even smaller devices (see Fig. 1).

Fig. 1 Increase of supercomputing power over time

If we perform the comparison in the opposite direction, the Tianhe-2Footnote 3 supercomputer (2013) achieves 33.86 PFLOPS (peta floating point operations per second) versus the still impressive 1.84 TFLOPS (tera floating point operations per second) of the PlayStation 4sFootnote 4 (2016); a clear sign of how fast the performance limits are being pushed today.
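
To put these units into perspective, here is a minimal back-of-the-envelope sketch (my own arithmetic, not from the book) that reproduces the ratios implied by the figures quoted above; the device numbers are simply the ones cited in the text.

```python
# Performance figures quoted in the text, expressed in FLOPS.
GFLOPS = 1e9
TFLOPS = 1e12
PFLOPS = 1e15

iphone4_and_cray2 = 1.6 * GFLOPS   # iPhone 4 (2010) ~ Cray-2 (1985)
playstation4 = 1.84 * TFLOPS       # PlayStation 4 figure quoted above (2016)
tianhe2 = 33.86 * PFLOPS           # Tianhe-2 supercomputer (2013)

print(f"PlayStation 4 vs. iPhone 4 / Cray-2: {playstation4 / iphone4_and_cray2:,.0f}x")
print(f"Tianhe-2 vs. PlayStation 4:          {tianhe2 / playstation4:,.0f}x")
```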

1 Trillion-Fold Increase in Computing Power from 1956 to 2015

This example also shows that computing power has been steadily increasing and, even more compelling, the processing power of today’s supercomputers will be reached by stand-alone computers and mobile devices in a few years. We can expect enormous processing power to be available in mobile devices like smartphones or tablets in the next 10 years, which will easily equal the power of large stationary supercomputers operating today. This trend has brought us a 1 trillion-fold increase in performance from 1956 to 2015 and, since technology innovation has accelerated significantly over time, the expected capabilities of future computing devices will also grow dramatically (see Fig. 2).

Fig. 2 Dhrystone MIPS of processors

As a result, the possibilities for future devices that we will use in our everyday lives are endless, ranging from smart mobile devices to any kind of computer used in cars, buildings, households, hospitals, etc.Footnote 5

The conclusion from this trend is that computing power is constantly increasing and will keep doing so over the next decades, even if we only continue to use semiconductor-based technologies. As soon as newer technologies like graphene-based semiconductors or quantum computing become commercially available, this trend will be disrupted even further.

Parallel CPUs

Parallel computation is nothing new (Ananth Grama, 2003) and future computer architectures (Novoselvo, 2015) will probably expand parallel computation even more using microprocessors with a large number of cores.Footnote 6 Several CPUs will then work together on the same chip, thereby increasing the number of tasks running simultaneously and, in doing so, increasing the overall system performance.
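
As a simple illustration (my own sketch, not from the book), the snippet below splits one CPU-bound task across all available cores using Python's standard multiprocessing module; with N cores the chunks run simultaneously, which is exactly the effect multi-core processors exploit.

```python
# A minimal sketch of splitting one CPU-bound task over several cores.
from multiprocessing import Pool, cpu_count

def partial_sum(bounds):
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))   # an arbitrary CPU-bound chunk of work

if __name__ == "__main__":
    n, cores = 10_000_000, cpu_count()
    step = n // cores
    # Split the range [0, n) into one chunk per core.
    chunks = [(i * step, n if i == cores - 1 else (i + 1) * step) for i in range(cores)]

    with Pool(processes=cores) as pool:         # one worker process per core
        total = sum(pool.map(partial_sum, chunks))

    print(f"Sum of squares below {n:,} computed on {cores} cores: {total}")
```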

At the same time, more specialized computers will start to appear because the costs will be less prohibitive than in the past, a trend we can already see today in the early stages of the Internet of Things.

Parallel computation will definitely become a very effective way of increasing computing power, but how parallel the computations can become and how many computers will be linked into a large network or into the cloud is a different question. With advances in telecommunications and ever-increasing communication speeds, it becomes much easier to link many computers into a large network. This is already happening today, as a lot of our data is stored not on desktops but in the cloud. Cloud computing will become more and more popular, while remaining based on microprocessors, electronics and current computing architectures.

As soon as new materials like graphene become available on a manufacturing scale and replace silicon as the basic technology of computers, new functions will also be designed and new architectures optimized for these new functions, opening up new, exciting paradigm changes.

New Materials in Computing

One of the most important driving, but at the same time limiting, factors in computing has always been the minimum size of transistor structures, the semiconductor devices forming the building blocks of modern computers. Silicon will likely remain the predominant material for transistors over the next 10 years, but people are already experimenting with alternative materials and technologies to replace silicon, given its limits at ever smaller transistor dimensions. A graphene-based transistorFootnote 7 is one of these alternatives.

Graphene (Skakalova, 2014), along with other materials, allows for the building of one-atom-thick 2-D materials and hetero-structures based on those 2-D crystals.Footnote 8 They could potentially provide an alternative to silicon technologies, but we are talking about completely new architectures here, rather than just introducing a new material into an existing system. It is hard to predict how this will develop, because introducing even a single new material into a process is already quite a complicated step. Needless to say, changing the whole architecture would require years of research, but these changes need to come and the signs of change look good.

The Networking Revolution

The need for new networking architectures comes from the sheer growing number of mobile devices, new advancements in content and virtualization, and the fact that state-of-the-art cloud services have become common today. As already discussed, conventional hierarchical but static network architectures are becoming more and more obsolete because they cannot fulfill the requirements of modern networks, especially the dynamically changing requirements imposed by the new types of applications we see today. These requirements include supporting quickly changing traffic patterns, enabling easier use of mobile devices in private as well as corporate environments, cloud services and big data analytics, all while providing high-level security.

Constantly Expanding Infrastructure

One of the main driving factors of networking and the Internet has been the ever-increasing transmission speed, from the access layer via distribution to the core.

Fixed Line Bandwidth Increase

Looking back 20 years to 1996, the 9.6 kbps offered by analog dial-up modems (the last mile) was already called blazingly fast, followed shortly thereafter by 64 kbps. By the mid-2000s, access speeds were in the Mbps range. This is a 10- to 20-fold increase in speed, and it was accelerated by the introduction of ADSL2Footnote 9 offering 24 Mbps or VDSL2Footnote 10 offering 200 Mbps, all still running over the last mile, i.e. century-old telephone lines.

With the introduction of fiber to the home (FTTHFootnote 11), speeds could easily increase to the 10 Gbps range and, if this evolution continues, fixed line bandwidth will be in the petabit-per-second range 20 years from now (see Fig. 3).
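
A quick back-of-the-envelope check (my own arithmetic, not from the book) shows why the petabit claim is plausible: the quoted jump from 9.6 kbps dial-up in 1996 to 10 Gbps FTTH in 2016 corresponds to roughly a doubling of bandwidth per year, and carrying that rate forward another 20 years lands in the Pbps range.

```python
# Historical last-mile growth rate and a naive 20-year extrapolation.
dialup_1996 = 9.6e3     # bps, analog modem (value quoted above)
ftth_2016 = 10e9        # bps, fiber to the home (value quoted above)

years = 2016 - 1996
cagr = (ftth_2016 / dialup_1996) ** (1 / years)   # ~2.0, i.e. roughly doubling per year
projection_2036 = ftth_2016 * cagr ** 20

print(f"Historical growth 1996-2016: {cagr:.2f}x per year")
print(f"Projected 2036 last-mile bandwidth: {projection_2036 / 1e15:.1f} Pbps")
```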

Fig. 3 Fixed line bandwidth increase, logarithmic plotting

Mobile Bandwidth Increase

Similar to fixed lines, wireless access technologies have also evolved over the past 15 years. Not long ago we were happy with the 56 kbps bandwidth offered via GPRS,Footnote 12 which quickly grew to several Mbps with the introduction of HSPA.Footnote 13 Today these speeds already reach the range of 150 Mbps using LTE,Footnote 14 and wireless access will easily offer several Gbps as soon as 5GFootnote 15 technologies become available.

Over the past 20 years, these mobile access speed improvements have paved the way for the very advanced Internet experience we know today. We can expect further growth in this area, driven by mobile devices as well as IoT and M2M communication.

Core Bandwidth Increase

Of course, distribution and core networking speeds also had to increase to handle the new traffic volume coming from users, devices and sensors or actors (aka IoT and fog). While 40 Gbps was the realistic maximum speed per Internet backbone channel for a long time, this speed has already increased today using lasers that operate close to a single frequency and are thus able to carry far greater amounts of data through fiber-optic cables.

This results in 160 Gbps per Internet backbone channel, and it can be assumed that 400 Gbps or even 1600 Gbps will be achievable with some further tweaking, so we can rest assured that the backbone speeds required by the cloud age are guaranteed.

Traffic Increase

According to Cisco’s VNI (Visual Networking Index) in 2016,Footnote 16 the overall IP traffic will be around 194 Exabytes per month in 2020 which is triple the total amount of 2015. This equates to 511 Tbps (terabit per second) being sent or equal to 142 million people streaming high definition Internet videos simultaneously (see Fig. 4).

Fig. 4 Global IP traffic per month 2015–2020

It is even more interesting to look at the device types behind these impressive numbers, which grow at a CAGR of 10% from 2015 to 2020. M2M communication has a strong lead with 30% by 2020 and is the fastest growing segment, a clear sign of the disruptive nature of the IoT as well as fog computing. M2M is followed by smartphones and video, while classic PCs and tablets make up a minority of devices. This, of course, points to another important future trend: how we use and access the Internet is constantly changing and will continue to be dominated by mobility.

Changing Applications Means Changing Traffic Patterns

We have already discussed why traffic patterns change frequently with today's applications, which access different databases and servers. This results in east-west traffic flows before the results can be returned to end users in the classic north-south traffic patterns. The new traffic from corporate content applications and mobile devices is added on top of other traffic patterns, which by definition implies very dynamic behavior. In addition, the enterprise cloud is usually split into private, public and hybrid clouds, creating new traffic types across wide area networks as well.

In the past, access to corporate networks was mainly via quasi-static personal computers and later moved towards mobile laptops; today we see an increasing use of all kinds of mobile devices, including smartphones, tablets and, still, notebooks. Access and security for all these personal devices need to comply with corporate rules and fully protect corporate data in this highly dynamic and constantly changing environment (Zhang, Hu, & Fujise, 2006).

Challenges for Legacy IP Networks

The evolution towards IP-only based devices was originally a great idea and made multiprotocol networking a thing of the past. Over the past decade, however, it quickly became difficult to scale, manage, secure and adapt IP-based networks to meet new challenges from the usage and business side, and this trend is poised to increase in the years to come.

There were many reasons for this, but one of the most ubiquitous was the increased use of mobile devices, which introduced totally different working styles, as users suddenly wanted to connect anywhere with the same security, privacy, quality of service, etc. guaranteed (Ulrich Dolata, 2013). The deployment of cloud computing and the use of cloud-based services (Eric Bauer, 2014) from the end of the 2000s onwards brought another major shift in the way enterprises and users work and connect today.

The possibility for smaller enterprises to conduct business on a global scale by building and supporting the necessary networks also changed the landscape for bad guys and hackers, who shifted their attacks from large and medium-sized enterprises to smaller businesses that lacked the right staff to thwart them.

Finally, the increasing complexity of today’s networks with all the fancy but necessary features like more security, application intelligence and mobile orientation in addition to old IP-based networks became a constant challenge to address appropriately and in a timely manner.

SDN and Cloud Based Networking Services

Cloud computing, network virtualization and new approaches for managing these future networks radically altered the economics of business networking. A number of companies offer SDN solutions today, all with the goal of changing the face and future of networkingFootnote 17 and, in so doing, reshuffling the incumbent vendor deck. Network-as-a-Service (NaaS) and Infrastructure-as-a-Service (IaaS) have already become very prominent cloud services, allowing users to connect to the IT resources they need anywhere and whenever they need them. There are still significant differences among the various products offered which, if implemented in an effective way, allow for the management not only of the data center, servers, storage and LAN but also of the WAN, also known as Network Function Virtualization (NFV).Footnote 18

Cloud services have become an integral and constantly growing part of today's businesses via private, public and hybrid clouds (Bloomberg, 2013). It is imperative that these services are easily accessible over the new corporate infrastructures for applications, meeting the additional security, compliance and auditing requirements. It is also important that they support frequent business reorganizations and mergers, which have become a constant part of modern corporate life. Thus, elasticity of all parts of the cloud infrastructure, like computing, storage and network resources, has become mandatory and, in these modern environments, will only be achieved through the consistent use of SDN and common management tools playing seamlessly together. This is one of the main reasons why SDN will become an even greater base technology for cloud based networking services.

A good implementation of SDN-based cloud-networking servicesFootnote 19 needs to offer WAN overlay architectures that provide an end-to-end abstraction of the underlying carrier IP infrastructure. The data forwarding and control planes need to be cleanly separated in order to create dynamic and resilient network topologies and be able to run on standard virtual machines (merchant VMs) within major, geographically dispersed cloud datacenters. This in turn will further help to provide an efficient overlay network infrastructure that can adapt in real time as needed, reflect the quickly changing demands of end users, route around underlying infrastructure failures and withstand future threats.
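
The sketch below is a deliberately simplified illustration of this control/data plane split, written for this text and not modeled on any specific controller API: a central controller holds the topology, computes paths and installs forwarding rules, while the switches only forward on the tables they are given; on a link failure the controller reroutes around it.

```python
# Toy separation of SDN control plane (Controller) and data plane (Switch).
from collections import deque

class Switch:
    """Data plane: forwards purely on the table installed by the controller."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}               # destination -> next hop

    def forward(self, dst):
        return self.flow_table.get(dst)    # None means "no rule installed"

class Controller:
    """Control plane: knows the topology, computes paths, installs rules."""
    def __init__(self, switches, links):
        self.switches = switches                            # name -> Switch
        self.links = {n: set(peers) for n, peers in links.items()}

    def shortest_path(self, src, dst):
        # Plain BFS as a stand-in for a real routing algorithm.
        queue, seen = deque([[src]]), {src}
        while queue:
            path = queue.popleft()
            if path[-1] == dst:
                return path
            for nxt in self.links[path[-1]] - seen:
                seen.add(nxt)
                queue.append(path + [nxt])
        return None

    def install_path(self, src, dst):
        path = self.shortest_path(src, dst)
        if path:
            for here, nxt in zip(path, path[1:]):
                self.switches[here].flow_table[dst] = nxt
        return path

    def handle_link_failure(self, a, b, src, dst):
        # Route around the failed link by recomputing and re-installing rules.
        self.links[a].discard(b)
        self.links[b].discard(a)
        return self.install_path(src, dst)

switches = {n: Switch(n) for n in "ABCD"}
controller = Controller(switches, {"A": {"B", "C"}, "B": {"A", "D"},
                                   "C": {"A", "D"}, "D": {"B", "C"}})
print(controller.install_path("A", "D"))                 # e.g. ['A', 'B', 'D']
print("A forwards traffic for D to:", switches["A"].forward("D"))
print(controller.handle_link_failure("B", "D", "A", "D"))  # reroutes, e.g. ['A', 'C', 'D']
```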

Ideally, most or all of the topological and policy-based complexity of traditional networks, such as IP addressing, network address assignment and translation, access control, security certificates, authentication, DNS, etc., will reside in a cloud-based control plane.

The result will be a radically simplified network building experience, which is one of the most important requirements for future networks and network service deployments. It should be seamless, enabling dynamic and user-friendly configuration of cloud networks on demand, including users and resources, and allowing for additional future network services without any issues. The deployment of these technologies and methods marks the era of programmable networking, which will finally fully arrive over the next decade.

Big Data Analytics Networking Requirements

The huge increase in data used by many modern applications generally referred to as “big data” requires not only new computing architectures including multi-core and parallel processing of up to several thousands of servers, but also additional, highly dynamic networking capacitiesFootnote 20 that must be based on SDNFootnote 21 for flexibility reasons. As a result, we need hyper-scale data center networks,Footnote 22 supporting not only new magnitudes of scalability, but also assuring any-to-any connectivity on demand without any failures (Xu, Lin, Misic, & Shen, 2015).

Wireless Future

WiFi has been around for almost two decades, but so far it has been optimized for wireless communication over medium to longer distances. It seems that WiGig,Footnote 23 a new standard by the Wireless Gigabit AllianceFootnote 24 for wireless gigabit data links, is currently being pushed by HP and Dell to enable the connection of monitors, hard drives and other PC peripherals without any cable, and should be available for deployment very soon. This will allow a short-range network around the computer, similar to Bluetooth but capable of much higher speeds of up to 7 Gbps within a range of up to 10 m, using beam-forming antenna technology (Poole, 2015). In addition, Intel is pioneering a wireless charging method for laptops, smartphones and other devices (Lowe, 2014a). According to a Cisco white paper,Footnote 25 Internet traffic will increase to more than 180 PByte per month by 2020, with fixed Internet still maintaining a share of more than 60%. Mobile traffic is predicted to be around 15% in 2020, but with a yearly increase of more than 50% (see Fig. 5).

Fig. 5 Internet traffic projection by type

Low Power Wide Area (LPWA) networks are specifically optimized for the further development of the IoT and will be one of the key components in the success of this fast-growing field. Today, three standards developed by the GSMAFootnote 26 exist for different requirements in IoT networking. These are Extended Coverage GSM for IoT (EC-GSM-IoTFootnote 27), Long Term Evolution for Machines (LTE-MFootnote 28) and Narrow-Band Internet of Things (NB-IoTFootnote 29). It is very likely that new features and releases will supplement these wireless, low power wide area standards over the next years, especially as there will be big demand for the use of the 5G spectrum for enhanced communication features.

IPv6

IPv6 is a prerequisite for the adoption of the IoT. IPv6 has already made substantial progress, mainly because the latest devices fully support the standard and operators continuously upgrade their networks to IPv6. We have already seen the exhaustion of IPv4 addresses in many regions worldwide.

According to Cisco's VNI 2016, there will be around 13 billion IPv6-capable devices by 2020, including both mobile and fixed devices. This is more than a threefold increase from the four billion devices in 2015. A total of 90% of all smartphones and tablets will be IPv6 capable by 2020. Extrapolating these numbers into the next decades shows how right and necessary the decision for IPv6 was more than two decades ago, as it enables many options of the future we will be living in. We are talking about roughly 120 billion devices by 2030, and it is anyone's guess as to where these numbers will head (see Fig. 6).
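
To see why IPv6 removes the address bottleneck for good, the short sketch below (my own illustration, using Python's standard ipaddress module and the IPv6 documentation prefix 2001:db8::/64) compares the IPv4 and IPv6 address spaces with the device counts quoted above.

```python
import ipaddress

# IPv4 offers 2**32 addresses; IPv6 offers 2**128.
ipv4_total = 2 ** 32
ipv6_total = 2 ** 128
print(f"IPv4 addresses: {ipv4_total:,}")     # ~4.3 billion, already exhausted in many regions
print(f"IPv6 addresses: {ipv6_total:.3e}")   # ~3.4e38

# Even the ~120 billion devices projected above for 2030 barely scratch the space.
devices_2030 = 120e9
print(f"Fraction of IPv6 space used by 120 billion devices: {devices_2030 / ipv6_total:.2e}")

# A single standard /64 subnet already holds 2**64 interface addresses.
subnet = ipaddress.ip_network("2001:db8::/64")
print(f"Addresses in one /64 subnet: {subnet.num_addresses:,}")
```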

Fig. 6 Number of IPv6 capable devices and connections 2015–2020

The Future Internet

How the Internet evolves will undeniably have a huge impact on industrial, economic and, of course, humanitarian sectors. This has proven true since the Internet's beginnings in the late 1980s and early 1990s. It was 1989 when Tim Berners-Lee wrote the proposal that would eventually lead to the World Wide Web. From then until today, the Internet has grown into a multidimensional communications network and will surely evolve to cover several new dimensions over the next decades. The burning question is what this future Internet might look like.

We will try to answer this question by looking into requirements for future coverage, speed, capacity, new and advanced security standards, next generation user interfaces, resilience as well as new dimensions imposed by the IoT, embedded systems and swarming.

Coverage for Several Billion Nodes

In 2015, around three billion users already used the Internet. While this is a massive increase from around 15 million users 20 years ago, according to Cisco’s VNI from 2016Footnote 30 we can expect connectivity numbers to continue to grow even more dramatically, fueled by the growth in mobile data traffic followed by video traffic and the growing availability of cheap and easy to use smart devices. A number of companies like Apple, Google, Facebook and others are accelerating this trend through the proposed use of the Internet by satellites, smart vehicles, drones, balloon systems and, in a wider context, the Internet of Things. This will result in a projected seven billion nodes connecting to the Internet, requiring a solid infrastructure to handle all this data collection and necessary communication by 2020 (see Fig. 7).

Fig. 7 Global mobile data traffic growth 2014–2019 (Source: Cisco)

Speed as Never Seen Before

We all know Moore’s Law for computer speed, which claims a roughly 60% increase in performance per year. There is also a law for the Internet Bandwidth defined by Nielsen,Footnote 31 which says that the typical user bandwidth will grow by around 50% per year, roughly 10% less than growth of computer speed. These numbers have proven to be correct from 1983 to 2016 and are shown below in the Figure below, which charts the respective bandwidth offered by slow modems in 1983 to typical access speeds in 2016. Keep in mind that this diagram shows an exponential growth curve in a logarithmic scale, so if we simply extrapolate the data to 2040 we get well above 1 Tbps for access speed (Frey, 2011)—some might find this shocking (see Fig. 8).

Fig. 8 Law of internet bandwidth, logarithmic plotting

Zettabyte Capacity

Cisco projects in its VNI from 2016 that global IP traffic will almost triple from 2015 to 2020 (from 72.5 exabytes per month to 194 exabytes per month). This means a total of 2.3 zettabytes per year by 2020. To visualize the size of a zettabyte: if each terabyte in a zettabyte were a kilometer, it would equal 1300 round trips to the moon, and if every gigabyte in a zettabyte were a brick, this would be enough bricks for 258 Great Walls of China (see Fig. 9).

Fig. 9 Increase of internet traffic

Overall, this shows that a yearly growth rate of 20–25% is safe to assume, which would result in a total of 7 zettabytes in 2025, and so forth. Just to put these numbers into context once more, 1 exabyte is equal to 1 billion gigabytes and 1 zettabyteFootnote 32 is equal to 1000 exabytes.

Big data has become a very popular technology today, dealing with huge amounts of data generated from different sources with different meanings, fueled by new social structures, social media, new business experiences, etc. The availability of large data sets from web pages, genome datasets and the output of scientific instruments requires big data analytics to take on a new form of large-scale computing today. Over the next decade, these systems will have to manage and analyze data sets of exabyte size, sometimes even bordering on the region of zettabytes.

In order to handle these amounts of data, new supercomputers are needed as well. This is also the reason why, in 2015, President Obama signed an order to fund the world's first exascale computer,Footnote 33 which should be operational by 2025, thus overtaking China, which is currently leading in this field. A new National Strategic Computing Initiative (NSCI)Footnote 34 was established to develop this exascale computer, which can handle at least one exaflops, the equivalent of a billion times a billion calculations per second, a 1000-fold increase over the petascale computer of 2008. Even if an exascale computer is an impressive machine today, it will be outdated as soon as it is operational in 2025.

From the above we see that requirements are not only increasing for transmission speeds, but also for storage and computing, fueled by new ways of using the Internet like the IoT and many others.

Based on the exponential growth stated in Moore's law, by 2030 a micro SD card will have a storage capacity equivalent to 20,000 human brains and, by 2043, a capacity of more than 500 billion gigabytes, equal to the entire content of the Internet in 2009. By 2050 this storage capacity will be three times the brain capacity of the entire human race, as projected by FutureTimeline.net.Footnote 35

Balance Between Privacy and Security

In today's networked and connected world, security has become a major concern, and the number of bad actors intending to harm major businesses or even entire countries is increasing exponentially. Of course, there are surveillance technologies to ensure secure operation, but it is a constant race between evil-minded and good-minded actors. On the other hand, everybody wants privacy, which is often in contradiction to security, just as more convenience usually means less security.

While, in theory, a radically transparent world would ensure a safer environment, it also means that every detail about every person would be known, like bank accounts, credit card numbers, passwords etc., which is hard to envisage based on the principles on which we have built our existing world.

The future Internet will need to strike a balance between privacy, security, trust and ethics. President Obama's 2015 proposal for a Privacy Bill of RightsFootnote 36 reflects this, but it is just a US initiative; we actually require a global initiative and solution to address these issues. This needs to be pushed by the UN and should take the form of a kind of Geneva Convention for Privacy, serving as the basis for global standards and practical implementation guidelines, as well as legal definitions, monitoring tools and tools to handle cases of abuse. Without this, and without more philosophical work and preparation, the future Internet will suffer and will be unlikely to reach its full potential.

Next Generation User Interfaces

User interfaces have come a long way since the use of keyboards, given that the Internet was mainly text based in its beginning stages. The mouse and the invention of hyperlinks can be seen as the start of the surfing experience, but graphics were still pretty poor due to the small bandwidth allowed by connection links in those days. As bandwidth increased, animated graphics and videos became very popular and can no longer be excluded from a modern Internet experience. Speech recognition allows for further improvements in how users communicate with input devices, but there is still a gap between what humans think and what devices can fulfill, as we still have to articulate our thoughts before a machine can understand what we want.

Before we can take the next big and revolutionary step by finally allowing a brain interface, we will see many minor but important steps in that direction, like optical interface devices for augmented reality and virtual reality such as Google Glass,Footnote 37 Facebook's Oculus VR,Footnote 38 Microsoft's HoloLens,Footnote 39,Footnote 40 or Samsung's Gear VR.Footnote 41

Resilience and Survivability

For a network to become as popular, important and vitally necessary as the Internet, which is just at the beginning of its evolution if we take into account future developments like the IoT, fog computing or big data analytics, the network needs to be extremely resilient to any kind of attack. It needs to withstand not only hacker attacks, but also survive military strikes and potential downturns in the global economy.

Even more serious than these external threats is the constant aging of the systems of which the Internet is composed. It needs features like distributed intelligence among its core root servers. In mid-2015, ten root servers were located in the US, two in Europe and one in Japan,Footnote 42 and all of these root servers operate in multiple geographical locations via anycast addressing, using redundant equipment to provide uninterrupted service in case of hardware or software failures. To address the growing demands of the future Internet, this root server architecture will no longer be sufficient, and enhancing it requires technical understanding as well as the ability to envisage economic, military and environmental disasters.

In the years to come, a special emphasis will have to be put on the durability and long term survivability of the future Internet in order to ensure its stable and secure operation given the requirements of the coming decades. Especially when we are looking at security vulnerabilities in the area of the IoT, it becomes pretty obvious that there will be a huge need for better security protection.Footnote 43

New Dimensions Through IoT and Embedded Systems

The original Internet mainly linked simple computers with one another. Smaller and more capable laptops, smartphones, smart watches, tablets and a number of other devices allowing humans new ways of accessing the Internet have replaced personal computers. In addition, the IoT with all its sensors, actuators and embedded devices continues to constantly change the dimensions that the Internet must support.

Janus Bryzek, a Fairchild executive, organized the Trillion Sensor SummitFootnote 44 in 2013 with 200 executives from around the world in attendance. The numbers coming out of this event are breathtaking. Sensors will exceed the trillion devices threshold by 2020 and could reach 100 trillion sensors by 2030,Footnote 45 adding many new data streams from virtually anywhere. Not only will these sensors be embedded in almost every article we can imagine like cars, houses or clothing etc., but the computational power of these things will increase immensely as seen in skin sensors and body sensors that will be able to constantly monitor a person’s condition or health.

The advancements in 3D printing will soon allow us to embed sensors, microchips and transmitters into printed objects.Footnote 46 We already have solar cells that are printed on roof foil, connected vehicles will use sensors in the pavement and at crossroads and, for travelers, identity devices will allow automatic security and customs checks. Wireless energy will provide cordless charging,Footnote 47 much as wireless networking removed data cables during the 2010s.

Swarming and Collaboration

Today the advancements in the Internet have enabled us to collaborate much more efficiently than in the 1990s. Collaboration tools allow efficient pairing of human intelligence to specific projects and, in this way, groups of people can easily become dynamic unified systems, solving problems unsolvable before. Nature demonstrates repeatedly that creatures functioning together in systems can easily outperform individual creatures in problem solving and decision-making tasks. Internet technologies allow humans to build these groups, often called swarms, as seen in studies at the California State University.Footnote 48

Swarming will evolve to encompass powerful tools in the next decades, enabling groups to unleash intelligence in many fields and applications. Furthermore, the future Internet will help us to understand intelligence in order to make better use of it.Footnote 49

Storage Virtualization Future

Software Defined Data Center

To consider the future of storage virtualization, we first need to understand one of the latest evolutions in data centers, the so-called Software Defined Data Center (SDDC).Footnote 50 The SDDC (Darin, 2014), which is also referred to as the Virtual Data Center (VDC), is an IT infrastructure vision extending virtualization concepts like abstraction, pooling and automation to all data center resources and services, in order to finally achieve something like IT as a Service (ITaaS).

In an SDDC, all infrastructure elements such as networking, CPU, storage and security are virtualized and can be delivered on demand as a service.Footnote 51 The deployment, provisioning, configuration, operation, monitoring and automation of the infrastructure is abstracted from hardware and implemented in software. This means that the entire infrastructure is virtualized and delivered as a service, hence Infrastructure as a Service (IaaS). SDDC targets integrators and data center builders rather than tenants, as the software-defined nature of the data center infrastructure should not be visible to tenants.

SDDC is not yet universally accepted, as many critics see it as a marketing term more than a new vision with many future implementation scenarios. On the other side of the camp, there are those who are sure that software will define future data centers and see this trend as a work in progress.Footnote 52 The growth potential for SDDC nevertheless looks pretty promising, according to many analysts who expect that some components of SDDC will see strong growth soon. The Software Defined Storage (SDS) market alone should grow to $22.56 billion by 2021,Footnote 53 driven by the exponential growth of data volumes across enterprises and a general rise in software defined concepts.

Storage Virtualization as Last Missing Link in SDDC

Meanwhile, the believers in SDDC have started to see storage as the final missing link in the complete virtualization and SDN story. What is needed now is a clear separation of a storage control plane, where software controls and manages data, from a storage data plane for storing, copying and retrieving data, both working seamlessly with storage infrastructures.

One of the most important reasons to separate the control and data planes is to free the storage control software from the hardware. Software defined storage (Lowe, 2014b) enables offloading of the computationally heavy parts of storage management functions, as seen in RDMA (remote direct memory access) protocol handling, data lifecycle management, compression and caching. This computational power can come from the vast amounts of CPU power available in private and public clouds and opens up previously unknown possibilities for both network and storage management, options which were not feasible before.

Revolutionized Non-volatile Memory Design

Advancements in non-volatile memory (NVM) technology make solid-state memory, especially flash memory, more affordable, while numerous new possibilities are promised by next generation storage technologies like Phase Change Memory (PCM)Footnote 54 (Moinuddin K. Quershi, 2011) and Spin Transfer Torque Random Access Memory (STT-RAM)Footnote 55 (Xiaobin Wang, 2010). Both PCM and STT-RAM offer the access speeds as well as the byte-addressable characteristics of Dynamic Random Access Memory (DRAM), which is mainly used in servers today, but they additionally offer the advantage of solid-state persistence, like flash memory.

As soon as these two prototype technologies become cheaper than flash memory, one or both of them will revolutionize memory design and within a few years most server storage will be based on solid-state cache within the server itself, which will have huge implications on storage design in combination with evolving network technologies and software supporting distributed architectures. If every server had terabytes of super-fast solid-state memory connected via ultra-fast and low latency networking, this would make today’s implementation of shared storage for critical applications a thing of the past, an evolution which is definitely necessary.

Optimized Capacity Large Disk Drives

Today we are increasingly faced with the demand for massive amounts of data storage and processing, mainly fueled by big data analytics and coinciding with a dramatic cost reduction for data storage. Meanwhile, drive capacity has started to exceed 10 TB per disk using conventional magnetic storage technologies. With new storage techniques, for example those based on quantum physicsFootnote 56 (Vathsan, 2015), future storage characteristics will improve significantly compared to those used today.

Large cloud providers have totally new needs in terms of scalability of computing power in combination with close data proximity, which means the value proposition for storing an organization’s cold data becomes very compelling. This opens up new requirements for securing and finding this data as well as managing the lifecycles of this huge amount of information, which requires new structures that are more capable than today’s methods of files, folders and directories. Thus, new management and access technologies are required based on object-based access to data, like we have already seen in Amazon S3Footnote 57 and the open standards based Cloud Data Management Interface (CDMI )Footnote 58 (SNIA, 2015).

Why Software Defined Storage (SDS) Infrastructure

Using Software Defined Storage (SDS)Footnote 59 infrastructures is the only way to ensure the effective utilization of the performance and speed of solid-state storage as well as the scalability advantages of capacity-optimized storage. The separation of the control plane in particular enables data center designs to make effective use of these new storage trends.

Server Virtualization Evolution

Organizations can achieve big impacts and a greater level of benefits by appropriately virtualizing their server infrastructures. The more sophisticated server virtualization technologies are deployed, the greater the final value derived from these activities will be. It is safe to assume that server virtualization will continue to grow and accelerate as it has already done over the past decade, so that participating organizations can achieve greater value. We will look into the future of server virtualization in this section, which is closely related to the evolution of storage virtualization.

The Overall Virtualization Problem

Back in the days of mainframes, computing was centralized and expensive, but it was also predictable and controllable and, moreover, it was manageable. One of the major drivers for decentralized, distributed computing was the reduction of CAPEX due to the introduction of low-cost commodity servers. However, these servers still had to be connected to proprietary, expensive, monolithic storage boxes, keeping the overall virtualization solution for computing and storage proprietary and expensive. Servers have meanwhile become reasonably cheap, but storage today is still complex, incompatible and highly priced.

The big question remains, how all this virtualization will evolve if we are to create data centers based on virtualized assets. In order to enable this, all virtual layers need to coexist supporting the same functionalities, while appropriately reacting to changing conditions within their areas of discipline.

Future Server Virtualization Requirements

We need elements (boxes) that are able to self-optimize and reconfigure themselves to changing workload requirements, and self-healing infrastructures that can deal with fault scenarios autonomously and rebuild themselves without affecting the applications. This involves self-scaling infrastructures that extend virtually to meet all requirements imposed by workloads, and self-managing infrastructures that automatically adapt to changing scenarios based on policies. These are the requirements that need to be addressed in future virtualized infrastructures but, in 2017, we are still far from such a solution, although there is hope when looking at some of the solutions that are already around.

From Fiber Channel to Ethernet

While servers are connected to Ethernet today, storage is still largely connected via Fiber Channel (FC) networks,Footnote 60 which were used in mainframe times to carry ESCON and FICON traffic.Footnote 61 It would require further price drops in 10 Gbps Ethernet and cheaper offers of 40 Gbps and 100 Gbps Ethernet to transfer all intra-data-center connectivity to Ethernet.

There are, at least, no longer "religious" fights between FC and Ethernet, but we are still ages away from a complete Ethernet connectivity solution. Meanwhile, FC vendors push Fiber Channel over Ethernet (FCoE)Footnote 62,Footnote 63 in order to at least preserve the existing protocol over the Ethernet network layer, allowing them to run FCoE on Ethernet switches. This should not be seen as anything more than an interim step towards a full Ethernet implementation, as FCoE reduces neither complexity nor cost.

iSCSI, defined in RFC 3720,Footnote 64 and NAS already allow full Ethernet connectivity, nicely supporting a number of modern environments, while pure forms of Ethernet storage have appeared, like ATA over Ethernet (AoE).Footnote 65 This enables the design and support of flexible and cost-efficient virtualized and cloud-based architectures.

The Single Data Center Networking Solution

Ultimately, the future data centers need to converge on a single networking solution. There are examples in nature of this, like the human body’s nervous system, which shares a common structure while supporting a network of different functions. The same must become true for the future data center and virtualization networking solution.

Ethernet Based Storage Area Networks (SANs)

One of the secrets of future data centers is the support of Scale Out Storage.Footnote 66 In an Ethernet connected network, virtual servers can be established everywhere allowing virtual workloads to run almost instantly and wherever needed. However, with the monolithic storage that we often still use today, the overall advantage of total flexibility stays pretty limited. The answer to this is Commodity Based Storage,Footnote 67 which supports full native Ethernet connectivityFootnote 68 and allows for the scaling out of capabilities either matching or exceeding the capabilities of the server layer.

Full Data Mobility for Storage and Server Virtualization

The ESG (Enterprise Strategy Group) has pushed for similar mobility and ease of use in storage since the early 2010s, in order to make data mobility as seamless as virtual machine mobility. While server mobility is an instant task, moving a virtual machine to another server to immediately gain CPU power, data mobility has not yet achieved the same speed or efficiency.

For this to happen, storage needs to become a fully virtualized complement to server and network layers, running on commodity hardware components, supporting full Ethernet connectivity, remaining self-managing and self-healing and, finally, able to scale to whatever demand is required. Most of the technologies to make this happen exist already and will soon be required by users. Vendors not jumping on this bandwagon will lose a significant market share and probably disappear.

Storage administration will become something fully automated, allowing performance, protection and recovery policies to be enforced by a virtualization orchestrating layer while assets are provisioned almost instantaneously in order to allow for seamless operation of server and storage virtualization.

Network Virtualization of the Next Decade

Virtualization became the foundation for, and enabled, many new technology trends like cloud computing, the IoT, fog computing and big data analytics, to mention just the most important. If we want to understand the evolution and future of virtualization, we need to understand why virtualization and cloud computing are so closely related.

The cloud has come a long way since the end of the 2000s and it is still constantly evolving, but just as cloud computing evolves, virtualization must too. Most clouds still use 10-year-old virtualization technology, but new forms of virtualization are needed to build the clouds of the future.

Future virtualization needs to support more IO-intensive network and storage workloads, while fully ensuring support for open industry standards and making these standards applicable to new hypervisor designs. Cloud computing software like OpenStack, sitting on the highest layer, will manage cloud infrastructures and whatever is needed for virtualization.

SDDC and Networking

This evolution of virtualization and cloud computing is a continuous process and finds a new settling point in the Software Defined Data Center (SDDC) that we have already discussed in the context of storage virtualization. Here, however, we will concentrate on the networking aspects of SDDC. Virtualization and cloud evolution are tightly intertwined and, while virtualization was already used decades ago, the latest revolutions in virtualization and cloud will significantly shape the future of cloud services and offerings (see Fig. 10).

Fig. 10 Virtualization and cloud evolution

Virtualization and Cloud Computing Based on SDDC

There is a continuous evolution of virtualization happening today, driven by cloud computing and culminating in the SDDC. Cloud computing introduced a new operational model for IT services based on virtualization technologies and new IaaS approaches. SDDC will allow for the delivery of even more intelligent services in combination with advanced management solutions for cloud and standard virtualization technologies.

The most significant changes in cloud environments today are driven by consolidation and the need for private as well as hybrid cloud environments. One of the secrets of SDDC is to look into applications and how they run, or how better to run them, in the data center. Most applications are network distributed (aka multi-tier apps) and require instant distribution, which is fine as long as we are dealing with servers. For tasks like reconfiguring network elements such as firewalls and load balancers, or setting up new virtual LANs (VLANs) and new IP addresses for all applications, provisioning network-distributed applications is still very slow, taking hours, days and even weeks.

Network Distributed Applications and SDDC

SDDC provides the foundation for this new, flexible data center by defining a container, a virtual data center or a virtual application for network-distributed applications. This container can be easily manipulated in almost the same way as virtual machines, but now with the capability to manipulate the complete application. This technology was first driven by VMware,Footnote 69 which used the ideas of the SDN implementation it gained from the acquisition of NiciraFootnote 70 in July 2012.

With SDDC, virtualization is expanded from servers to include storage and networking, while separating applications from the infrastructure, all encapsulated in a container. As soon as applications are in containers, the lifecycle of these containers is automated and, likewise, that of the applications in these containers. This concept of lifecycles sounds similar to OVF [see Open Virtualization Format (OVF)]Footnote 71 and this is no surprise, as the same standards organization is behind and driving both OVF and SDDC. VMware, again no surprise in this context, is also a heavy driver of the DMTF.

This container conceptFootnote 72,Footnote 73 becomes especially important for large enterprises running thousands of applications, which would be impossible to provision and manage individually. With the application-container concept, provisioning and management of applications become very similar to classical virtualization operations, including provisioning, moving to scale up or down, moving for availability and moving to and from the cloud.

New Demands on Hypervisors

With all this additional flexibility, it is pretty obvious that SDDC places a lot of new demands on hypervisors, as these will now have to handle IO-intensive storage and network virtual appliances plus traditional applications. This, in turn, requires increased processing and thread capabilities, resulting in increasing demand for processor cores.

There are already some hypervisor vendors entering this market, like ZeroVM.Footnote 74 This hypervisor is specifically designed for the new SDDC model, allowing application isolation and efficiency combined with the necessary deployment speed, and can separate every single task into its own container, only virtualizing the parts of the server that are required to do the work. This approach is more efficient than existing clouds, where giant server farms waste precious resources by virtualizing things that are not needed.

This hypervisor creates a new VM for every incoming request, and its UNIX-style processes communicate through pipes. As with hypervisors such as VMware, XENFootnote 75 and KVM,Footnote 76 multiple physical servers can be aggregated and represented as a single virtual system, just as it is possible to represent a number of virtual systems backed by any number of virtual servers. This enables dividing the hypervisor into a huge number of processes, thereby allowing totally new levels of virtualization and opening up a new concept of virtualizing down to the application, process and user levels.

SDN Controller Future

Future SDN controllers can evolve from where they are today in several possible directions: by becoming the network operating system, evolving into single-function solutions, becoming cloud orchestration platforms or evolving into policy renderers.

If controllers become the network operating system, they need to evolve into generic platforms that form the basis for network applications, with support for services analogous to the file systems or memory management of traditional operating systems. They also need to offer APIs for application developers, etc. This means that controllers will, in general, become bigger and much more complex.

If controllers end up as single-function solutions tied to specific applications, like network virtualization or software-defined WAN applications, the scope and size of the controller would likely be limited, because functionality would be implemented within the applications and no longer as a separate entity.

If controllers become cloud orchestration platforms, with OpenStack being a very prominent example, the network would become an abstracted view of the cloud infrastructure. In this scenario, applications need to deal with the cloud orchestration platform, which moves control and agility to the orchestration layer and can easily limit proactive adjustments the network could otherwise have made. These controllers would ultimately be parts of the orchestration platform and no longer stand-alone entities.

If controllers become pure policy renderers, they turn into policy- and intent-based systems that translate higher-level policies into lower-level device configurations. This, in turn, places all the controls within the policy platform and reduces the controller to a pure translator of network configuration.

At this time, it is not completely certain that all the described scenarios will become reality, but it is likely that they will happen to some degree, and even more likely that we will see mixed implementations depending on the infrastructure parts involved. It is easily feasible that, even in the same data center, different instantiations could be deployed on demand.

One fact does not change: the controller will remain the strategic control point of the SDN network and will keep the role of being the network's brain. It is also unlikely that networking vendors will voluntarily hand over control of their networking equipment to others, because otherwise they would most likely be pushed out of business sooner rather than later.

From Closed to Open SDN Environments

Finally, we are left with two possible ways to go in this virtualized networking ecosystem: either follow the dominant networking vendors and use the controllers they provide to orchestrate their proprietary, although somewhat open, equipment, or trust vendors using open controllers supported by many vendors; in the best case we can do both. We discussed this open approach in the form of the OpenDaylightFootnote 77 (ODL) Project, which has gained tremendous vendor support over recent years. Open controllers allow end users as well as cloud and service providers to operate, develop and deploy in an open environment, making it much easier and faster to add proprietary implementations and offer differentiated solutions for their specific needs.

IoT and Fog: The Next Big Disruption?

The IoT has become very popular over the last few years, but we are still far from where this technology, which connects intelligent devices, can lead us. Fog computing, which extends cloud computing and services to the edge of the network, can be seen as an enhancement of the overall IoT movement and will hence be part of the greater IoT ecosystem that will evolve.

We have discussed many cases and solution samples and have seen that the IoT requires a vast range of new technologies and skills, which are, unfortunately, still absent in many organizations and not yet mastered by many vendors in the area. Hence, how this immaturity is handled and the many risks managed will be key to the success of the IoT.

This also means that the picture we have of the IoT today will change massively over the coming years, which is of course a good thing, as we are dealing with a greenfield market where new players making use of new business models and solutions can easily outperform incumbents. This market is big, ranging from wearable devices, implants, connected homes, smart cities and healthcare to new businesses and enterprises, and as soon as the latter start deploying IoT solutions the growth will multiply even further.

Things in the IoT will become increasingly inexpensive and connectivity will become extraordinarily cheap. As this happens, more applications will exist, further propelling the IoT market. This will all contribute significantly to the emergence of new ecosystems. Think about the trillions of devices that will generate zettabytes of data; not only will big data analytics and cloud computing reach new heights, but millions of new apps will also fight with each other for success and market share, all of this translating into enormous economic possibilities.

From Internet Age to IoT Age

Since we are able to identify things in a unique way, we are able to use Internet technology (put simply: the Internet protocol stack) to connect them and to build new types of solutions based on the smart interaction between things and applications. In the old Internet, connections were built between components like servers, PCs, routers and switches as typical computer devices. In the IoT, we are dealing with the integration and connectivity of any type of item, like cars, house infrastructure, production machines or other types of things (Greengard, 2015).

The fast-growing technology of microminiaturization allows the design and production of items that are easy to integrate into things, making them a potential part of the IoT. The basic functions making a "thing" a part of the IoT are simple: sensor, actor and connectivity. Sensor functions, like temperature, position, movement or any other kind of valuable information, are used to collect the important data and information used by applications. Actor functions are necessary to let the device react to new information by changing its status, influencing its environment or informing users. Last but not least, connectivity to the Internet is necessary to integrate things and applications; a minimal sketch of these three functions follows below. Gartner says that 8.4 billion connected things will be used in 2017, an impressive 31% increase compared to 2016.Footnote 78 If that increase "only" stays at 30% per year, we end up with 18.4 billion connected things in 2020, but this yearly increase is only the conservative assumption, as estimates from Cisco and Intel for connected IoT devices in 2020 range from 50 to 200 billion (we already discussed this in the chapter "IoT") (see Fig. 11).
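
The toy model below is my own illustration of those three basic functions, not an implementation from the book; the device name, the cloud endpoint URL and the command format are all hypothetical.

```python
# A toy "thing" combining the three basic IoT functions: sensor, actor, connectivity.
import json
import random

class Thing:
    def __init__(self, thing_id, endpoint):
        self.thing_id = thing_id
        self.endpoint = endpoint          # hypothetical cloud endpoint
        self.heater_on = False            # local state changed by the actor function

    def sense(self):
        """Sensor function: collect a temperature reading (simulated here)."""
        return {"thing": self.thing_id, "temperature_c": round(random.uniform(15, 30), 1)}

    def act(self, command):
        """Actor function: react to new information by changing local state."""
        self.heater_on = command.get("heater") == "on"

    def connect_and_report(self, reading):
        """Connectivity function: a real device would send this via HTTP or MQTT."""
        print(f"POST {self.endpoint} {json.dumps(reading)}")

thing = Thing("greenhouse-42", "https://iot.example.com/telemetry")
reading = thing.sense()
thing.connect_and_report(reading)
thing.act({"heater": "on" if reading["temperature_c"] < 18 else "off"})
print("Heater on:", thing.heater_on)
```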

Fig. 11 IoT units installed base by category

Compared to the old Internet, these IoT functions need to be seen in a much wider context and related to the real world, more than was the case with typical computing devices. Sensor functions will react to different parameters in the real world. Actor functions will control not only digital data but also analog and technical parameters like power level, water drainage, position and movement of mechanical structures and so on. Connectivity will include not only classical Internet access but also all types of wireless as well as low bandwidth connections.

Self-Driving and Flying Cars

Hand in hand with these developments, devices will gain immense levels of intelligence and make many decisions for us. Examples are driverless cars, as we have seen in many trials by Google,Footnote 79 BMW,Footnote 80 Apple,Footnote 81 etc. since the mid-2010s, or smart homes.

There are even flying cars in the pipeline, due to be tested by AirbusFootnote 82 at the end of 2017, and passenger drones coming from the Chinese company E-Hang.Footnote 83

The big revolutionary step will come as soon as we can pair advanced input devices that scan our brains directly with contextual computing that translates this information into meaningful commands for machines. This step is closely aligned with the evolution of mobile technologies, as these allow for devices that are always with us, in whatever wearable or implanted form.Footnote 84

Enormous Economic Benefits Through IoT

The IoT has great potential to be the next big thing and to initiate an unprecedented disruption that might be even bigger and more impactful than the invention of the Internet and the Internet browser put together. The estimated number of objects that the IoT will consist of in 2020 is 50 billion! The McKinsey Global Institute sees the potential for IoT applications to create an economic impact of $11.1 trillion per year by 2025.Footnote 85, Footnote 86 These numbers alone explain the hype around the IoT as the next big revolution, arriving some decades after the Internet and the Internet browser (see Fig. 12).

Fig. 12
figure 12

Projected global revenue of IoT 2007–2020

Next Big Disruption Through IoT

It is obvious that the real value created by the IoT results from the intersection of collecting data through sensors and using this data in a meaningful way, through machines, to influence our environment.Footnote 87 If we only collected information using all the sensors available at a given point in time, all this data would be pretty meaningless without the right real-time infrastructure to analyze it. The only way we know how to achieve this today is with cloud-based applications, which interpret all the data coming from the sensors and in turn instruct machines (which we call actuators) to take action as needed, such as controlling temperature, pressure or light, with whatever electromechanical means are required.
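A minimal sketch of this sensor-to-actuator loop might look as follows; the window size, set point and topic-style actuator name are illustrative assumptions, and a real cloud application would of course run far richer analytics than a moving average.

```python
# Cloud-side sketch: sensor readings arrive as messages, the application
# interprets them (here: a moving average to smooth noise), and actuator
# commands are issued in response.

from collections import deque
from statistics import mean

WINDOW = 5          # number of recent readings to average
SET_POINT = 22.0    # desired temperature in degrees Celsius

recent = deque(maxlen=WINDOW)


def on_sensor_message(reading: float) -> dict:
    """Interpret one incoming reading and return an actuator command."""
    recent.append(reading)
    smoothed = mean(recent)
    # Simple control decision; richer analytics would go here.
    return {
        "actuator": "hvac/cooling",
        "command": "on" if smoothed > SET_POINT else "off",
        "smoothed_value": round(smoothed, 2),
    }


if __name__ == "__main__":
    for sample in [21.5, 22.4, 23.1, 22.9, 21.8]:
        print(on_sensor_message(sample))
```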

As a result, the IoT not only enables the collection and analysis of data to allow better control of business processes for certain enterprises or user groups; it also offers fully automated processes to control a smarter future world, as we discussed in IoT solution samples like Smart Connected Vehicles, Precision Agriculture, etc. This means enabling smart cities, smart ports, smart cars, smart roads and so on, with sensors monitoring and tracking all sorts of data, and cloud-based applications that translate this data into useful intelligence and communicate with actuators (machines), enabling mobile real-time responses. It is not just about business optimization and cost savings; it is a much larger and more fundamental shift introduced and enabled by the IoT, as making things intelligent is the major engine for creating new products, services and possibilities.

This is why the IoT is probably the biggest and most disruptive technology trend right now and will remain so for years to come. It is fueled by many supporting technologies and new methods enabled by recent evolutions like cloud, big data, analytics, virtualization, mobility and more.Footnote 88 For these reasons, the IoT will most likely give us the most disruptive possibilities and opportunities over the next decade!Footnote 89

Big Data Analytics Changing All Our Lives

Big data analytics has gained impressive momentum in the past few years and is poised to gain even more over the next decade. As such, big data analytics, which is per se nothing new, can easily become another one of the next big things, influencing the way societyFootnote 90 evolves and the way we do business. It may even be the only way to handle and keep under control new trends like social media, tens of millions of connected people, many billions of sensors, trillions of transactions, etc.Footnote 91 and, in so doing, turn them into big revenue sources and successes.

Triumph of Open Source Tools

While this momentum grows, there is naturally a big emphasis today on open source tools to break down and analyze data. Clearly, HadoopFootnote 92 and NoSQLFootnote 93 databases seem to be the winners in this game, and proprietary technologies look destined to disappear quickly. One of the main goals is to unlock data from proprietary data silos and keep big data as open and accessible as possible.Footnote 94
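To illustrate the programming model behind tools like Hadoop, the following self-contained Python sketch expresses a word count as a map phase and a reduce phase; the same two functions could be wired to Hadoop Streaming, but here they run on their own against a small, made-up input, so everything shown is an assumption for illustration only.

```python
# A classic word count in the MapReduce style that Hadoop popularized:
# the map phase breaks the data down into (key, value) pairs and the
# reduce phase aggregates them per key.

from collections import defaultdict
from typing import Iterable, Iterator, Tuple


def map_phase(lines: Iterable[str]) -> Iterator[Tuple[str, int]]:
    """Emit (word, 1) pairs: the 'break down' step."""
    for line in lines:
        for word in line.lower().split():
            yield word, 1


def reduce_phase(pairs: Iterable[Tuple[str, int]]) -> dict:
    """Aggregate counts per key: the 'analyze' step."""
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)


if __name__ == "__main__":
    logs = ["sensor online", "sensor offline", "sensor online again"]
    print(reduce_phase(map_phase(logs)))  # {'sensor': 3, 'online': 2, ...}
```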

Big Data Analytics and New Market Segments

Meanwhile, a large number of big data analytics platforms have already hit the market, and this is only the beginning, as impressively shown by all the new companies emerging to cover specific niches of the big data analytics market (see also big data analytics use cases and market).Footnote 95 Currently, not many vertical-specific applications are available on top of the general analytics platforms; the market is still not mature enough, and it could still be a bit risky to bet on Hadoop or NoSQL as the general-purpose platforms for the underlying databases.

Thus, we can expect more vertical tools to emerge, targeting specific analytic challenges common to business sectors like marketing, online shopping, shipping, social media and more. Small-scale analytic engines are being built into software suites, such as social media management tools like HootsuiteFootnote 96 and Nimble,Footnote 97 which include data analysis as a key feature and are also interesting for future market segmentation.

Predictive Analytics

It has always been a great desire of mankind to predict future events and behaviors. While some things are easy to predict (like bad weather suppressing voter turnout), others are much harder, such as swing voters who are alienated rather than influenced by push polls. This is also the reason why machine learning, modeling, statistical analysis and big data are often combined in the hope of better predicting future events and behaviors.Footnote 98

Of course, we now have the ability to run large-scale experiments on our accumulated data in a continuous way, as when online retailers redesign their shopping carts to find out which specific design yields the most sales, or when doctors predict future disease risks based on data about family history, diet or the amount of exercise one gets every day.
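As a flavor of how such a risk prediction could be built, here is a minimal sketch using scikit-learn's logistic regression on an entirely synthetic data set; the feature choice, numbers and resulting probability are illustrative assumptions, not medical guidance.

```python
# Predictive analytics in miniature: fit a logistic regression on a tiny,
# synthetic data set (family history, weekly exercise hours, diet score)
# and predict a risk probability for a new person.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: family_history (0/1), exercise_hours_per_week, diet_score (0-10)
X = np.array([
    [1, 0.5, 3], [1, 1.0, 4], [0, 5.0, 8], [0, 4.0, 7],
    [1, 2.0, 5], [0, 6.0, 9], [1, 0.0, 2], [0, 3.5, 6],
])
y = np.array([1, 1, 0, 0, 1, 0, 1, 0])  # 1 = developed the condition

model = LogisticRegression().fit(X, y)

new_patient = np.array([[1, 1.5, 4]])
risk = model.predict_proba(new_patient)[0, 1]
print(f"Predicted risk: {risk:.2f}")
```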

The desire for such predictions dates back to the beginnings of human history, but instead of basing them on gut feeling or incomplete data sets, predictive analytics has made a lot of progress in areas like fraud detection, risk management for insurance companies and customer retention, to name just a few. We will see many new possibilities in the field of predictive analytics in the years to come as the tools for collecting, storing and analyzing big data become better and more stable.

Is Human Decision Making Still Necessary?

In times of extensive machine-to-machine communication and constantly improving machine learning, the human factor seems, at first glance, to become less important. This is only natural, because eliminating human error has always been a very critical goal. Consider the simple mistakes humans usually make in the area of security: using weak passwords, getting caught up in phishing attacks or, even worse, clicking links they should not have. There is much hope that once machines can take over critical actions, many of these weaknesses introduced by human beings might go away.

On the downside, this is only half of the truth, as machines will only be able to do what human beings have initially taught them.Footnote 99 Relate this to big data and it becomes instantly clear that there are limits to what we can learn from machines and to how much we can rely on their conclusions; at the end of the day, the human element will always remain important.

Nate Silver, considered one of the big data pioneers (Silver, 2012), points out that what matters most for predictions is not the machinery used to collect data and run the initial analysis, but the human intelligence needed to work out what all the results from big data analytics, and even more so from predictive analytics, actually mean. Silver himself analyzes reams of data, looks at historical results, factors in influences on margins of error and finally emerges with very accurate predictions.

As soon as big data analytics becomes a state-of-the-art technology, it will be seen as just another tool to help human beings derive the best possible decisions for whatever business, research, etc. requires. It is important to understand that what one does with the results of big data analytics is what really matters, and the success of this task will remain with human beings for a long time.Footnote 100