BLOG@CACM
Computing Profession

HiPEAC’s Vision for the Future

Envisioning the next computing paradigm.

Credit: Roger Castro Monzón, HiPEAC Vision 2024 Next Computing Paradigm

What will be the future of computing systems (hardware, software, and infrastructure)? The vision of HiPEAC, an EU-funded European network of over 2,000 experts in computing systems, is the “next computing paradigm.” This post summarizes the main concepts; for a more detailed description, read the HiPEAC Vision 2024 summary and visit the “next computing paradigm” chapter in the HiPEAC Vision 2024 rationale.

Fusing trends from information and communication technologies of the last few decades, the ‘next computing paradigm’ (NCP) refers to the convergence of key technologies from the web, cyber-physical systems (CPS), the cloud, the Internet of Things (IoT), digital twins, the continuum of computing, the metaverse, and artificial intelligence (AI) into a coherent, federated ecosystem.

Envisioning the NCP starts from anticipating the evolution towards a 4D computing paradigm, one that elevates the computing space from the two dimensions of document-based resources to a full-fledged 3D spatial representation, plus time. This will be further enhanced by a coherent continuum of computing that intertwines the real world and its constraints with the cyberworld, incorporates generative AI, and enables dynamic orchestration of resources to achieve what users request. This evolution will create a seamless, multi-level networked cooperative structure where resources are accessed and manipulated as needed with streamlined web-type protocols, and where programs (or ‘services’) and data flow smoothly onto computing resources that cooperate with each other, enhancing context awareness and efficiency in digital interactions.

A seamless flow of compute and data across the continuum

Cloud computing has become the dominant model for most end users. Through the offers of ‘software-as-a-service’, ‘platform-as-a-service’, and ‘infrastructure-as-a-service’, it facilitates access to rich applications without the need for significant capital investment and has allowed digital businesses to thrive.

Encompassing the bulk of computing resources, the cloud has therefore become the center of gravity for computing, with users and data being drawn into its pull. However, vast amounts of computing resources are also available, cumulatively, at the edge of the network and in intermediate layers between datacenters and the edge, where users, usage, and data are located. If those resources were pooled together seamlessly, à la cloud, innumerable value-added computations could take place in this continuum of computing rather than in the cloud. This would offer latency and energy reductions, decentralization, personalization, privacy and context awareness in a way the cloud could not possibly match.

Pooling edge resources and joining these with cloud resources gives rise to the edge-cloud continuum, a compute infrastructure where computation may be deployed opportunistically and dynamically, wherever it is most convenient for the user.

Extending the cloud service model to ‘everything-as-a-service’ is another important vector of innovation that shifts the center of gravity towards the edge. Enabling the ‘everything-as-a-service’ model requires the ability to orchestrate services that execute at various places along the computing continuum from edge to cloud, both in the physical world via IoT sensing and actuating, and in the digital-twin sphere.

This orchestration will need to maintain a balance between resource availability (associated with the center of the cloud) and privacy, performance, latency, energy, decentralization, personalization, and context awareness (all of which are more favorable at the edge). This will need to be more dynamic and adaptive than traditional orchestration at centralized resources in the cloud, and should mean that associated computations are able to move opportunistically across the continuum in search of the optimal temporary residence.
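As a toy illustration of this placement trade-off, the following sketch scores candidate nodes along the continuum against a task's requirements. All node names, metrics, and weights are hypothetical, invented for illustration only; they are not part of the HiPEAC proposal.

```python
from dataclasses import dataclass

@dataclass
class Node:
    """A compute resource somewhere along the edge-cloud continuum."""
    name: str
    latency_ms: float    # round-trip latency to the user
    energy_cost: float   # relative energy per unit of work
    privacy_local: bool  # data stays on user-controlled hardware

def place(task_weights: dict, nodes: list) -> Node:
    """Score each node against the task's priorities and pick the best.

    Lower latency and energy are better; keeping data local earns a bonus.
    """
    def score(n: Node) -> float:
        s = -task_weights["latency"] * n.latency_ms
        s -= task_weights["energy"] * n.energy_cost
        if n.privacy_local:
            s += task_weights["privacy"]
        return s
    return max(nodes, key=score)

nodes = [
    Node("cloud-dc", latency_ms=80.0, energy_cost=1.0, privacy_local=False),
    Node("edge-gw", latency_ms=5.0, energy_cost=3.0, privacy_local=True),
]
# A latency- and privacy-sensitive task should land at the edge.
best = place({"latency": 1.0, "energy": 0.1, "privacy": 10.0}, nodes)
print(best.name)  # edge-gw
```

In a real NCP orchestrator these scores would be re-evaluated continuously, letting the computation migrate as conditions change rather than being fixed at deployment time.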

The envisioned orchestration would embed intelligence, including generative AI, to do the bidding of individual users at the edge, prompted by user requirements and returning ad hoc programmatic orchestration engines. The underlying infrastructures would also need intelligence to federate available resources opportunely and adaptively within the right timeframe.

The building blocks

The NCP relies on the key elements of the digital space that surrounds us, many of them well established. These include:

  • The Web: the infrastructure that supports most of our activities over the Internet. If we likened ‘navigating the network’ to moving around a building, the Internet would be its foundation, so far below ground as to be invisible, while the Web would be the visible (and hence navigable) architecture of the building, which holds all the contents together and allows users to move around conveniently.
  • The cloud, probably the most impactful by-product of the Web to date. The concept of the cloud originated from the visionary realization that everything could be exposed and accessed as a Web resource: not only static data, but also computation (apps), and computing resources (CPUs, storage, networking).
  • The Internet of Things (IoT) originated from equipping non-digital items with radio-frequency identification (RFID) devices that would allow them to be interrogated digitally, if only for tracking purposes. It subsequently evolved into requiring such items to become ‘smart’, that is, capable of sensing and actuation, and sometimes even of basic in-place processing, eventually interconnecting them with human-side devices or among themselves.
  • Cyber-physical systems (CPSs), which can be seen as the command-and-control processing part of all sorts of articulations of IoT devices deployed into mission-critical products that help us build ‘intelligent’ industrial and civil infrastructure. This is an essential part of coping with the constraints of the physical world, such as latency, energy consumption, etc.
  • As CPSs control physical devices, safety concerns arise, together with security concerns. The central tenet of modern CPSs is a holistic view of concerns, components, and implementation competences. The range of functionalities required of CPSs increasingly includes Web-enabled components, which conjoins CPSs to the landscape of the NCP.
  • Digital twins, the digital representation of real-world entities. They are the actual vectors of 4D computing because they allow the use of time as a real variable: model simulation can show us what may have happened in the past, whereas model-based predictions fed with the current state can help us analyze future evolution. Digital twins may be realized and exposed as Web resources, thereby becoming part of the general (or specialized) Web space.
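To illustrate the 4D idea in miniature, a toy digital twin can both replay past states and extrapolate future ones from its observation history. This sketch is purely illustrative; real twins would use proper physical or learned models rather than linear extrapolation.

```python
from dataclasses import dataclass, field

@dataclass
class DigitalTwin:
    """A minimal twin: a time-stamped state history plus a naive predictor."""
    history: list = field(default_factory=list)  # (time, value) pairs

    def observe(self, t: float, value: float) -> None:
        self.history.append((t, value))

    def replay(self, t: float) -> float:
        """What may have happened: latest observation at or before time t."""
        past = [(ts, v) for ts, v in self.history if ts <= t]
        return max(past)[1]

    def predict(self, t: float) -> float:
        """What may happen: linear extrapolation from the last two states."""
        (t0, v0), (t1, v1) = self.history[-2], self.history[-1]
        return v1 + (v1 - v0) * (t - t1) / (t1 - t0)

twin = DigitalTwin()
twin.observe(0.0, 20.0)    # e.g. a machine's temperature over time
twin.observe(10.0, 22.0)
print(twin.replay(5.0))    # 20.0 -- reconstructed past state
print(twin.predict(20.0))  # 24.0 -- extrapolated future state
```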

Recently, two further important innovations have arisen:

  • Generative AI, which is able to create digital products of any sort, including computer programs and control commands, using ‘generative models’. Developing task-specific learned models and associated engines that could be deployed on resource-constrained devices would push generative AI to the edge and enable it to render personalized services. In the HiPEAC Vision, these services would be delivered by an AI personal assistant, a form of “guardian angel” [2].
A scene from the 2020 HiPEAC Vision comic book [3] evoking the Guardian Angel concept
The HiPEAC Digel, as showcased in [4]
  • The continuum of computing, as the digital integration of all elements listed above into a seamless networked platform where:
    • all available resources are exposed as as-a-service web resources;
    • individual application services can be federated dynamically into ephemeral aggregations originating from any point of the continuum, possibly constructed by AI engines;
    • the execution of the parts of those orchestrations can move opportunistically from edge devices to the center of the cloud and back, as the need arises.


AI hype personal cloud cartoon

Credit: Arnout Fierens

Key implications

What implications would the convergence of the above elements into the NCP have? First, the integration of the ‘Web of humans’ with the ‘Web of machines’, where all the digital resources represented in that integration expose as-a-service interfaces that can be accessed, manipulated, and aggregated using Web-type protocols. To this end, such protocols will have to be maximally streamlined and interoperable to become viable for use with all types of compute devices. This will require the specification capabilities of the interface points for such Web-type protocols to be augmented to capture an increasing range of non-functional requirements (energy, latency, provenance, service level, etc.).
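As a sketch of what such an augmented interface specification might carry, the following hypothetical service descriptor pairs a web-type interface with non-functional requirements and a matching check. The field names, values, and endpoint URL are invented for illustration; no existing standard schema is implied.

```python
# A hypothetical as-a-service interface description extended with
# non-functional requirements, as the NCP would demand.
descriptor = {
    "service": "object-recognition",
    "interface": "https://example.org/ncp/v0/recognize",  # hypothetical endpoint
    "non_functional": {
        "max_latency_ms": 50,        # real-time bound, e.g. for CPS use
        "energy_budget_mj": 120,     # per-invocation energy ceiling
        "provenance": "eu-edge",     # where the data may be processed
        "service_level": "best-effort",
    },
}

def satisfies(desc: dict, offered: dict) -> bool:
    """Check whether a node's offered guarantees meet the descriptor."""
    req = desc["non_functional"]
    return (offered["latency_ms"] <= req["max_latency_ms"]
            and offered["energy_mj"] <= req["energy_budget_mj"]
            and offered["provenance"] == req["provenance"])

print(satisfies(descriptor, {"latency_ms": 30, "energy_mj": 100,
                             "provenance": "eu-edge"}))  # True
```

An orchestrator could use such checks to filter candidate nodes before deploying a service, making the non-functional requirements first-class citizens of the protocol rather than out-of-band agreements.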

Next, the spatial (and temporal) dimension of the NCP, which will be crucial for context-awareness relating to physical constraints, location-dependent rules (e.g., regulations and legislation), local knowledge, will require stacks of 4D-aware implementation technologies capable of spatial and time-aware computing. The augmentation of Web-type protocols will:
  • require a standard language to encode properties of physical objects and spaces, and the logical concepts and allowable activities associated with them (such as what is being developed in the forthcoming IEEE P2874 standard), and
  • require a suite of standard protocols to expose contract-based interfaces associated with zones and objects, and to support credentialed requests and interrogations.

The envisioned spatial computing will be CPS-like (operating with and for physical systems, coping with time constraints of the real world), swarm-like (supporting opportunistic dynamic and mobile aggregations of compute nodes within variable-size logical regions), and 4D-enabled (fit for extended reality, spatial digital twins involved in time-sensitive operation).

The NCP also will rely on AI engines (models and prompt handlers) in edge devices to construct applications on the fly by orchestrating calls to the interfaces exposed in the logical or physical regions of interest. Such regions could be temporary, self-sufficient, federated clusters of edge devices (offering services) that may even occasionally be partitioned from the internet. The prompts that will trigger the creation, deployment, and execution of these dynamic orchestrations will use ‘natural’ interfaces, including voice for humans, and video imaging for automated requestors.

Conclusion

Drawing together established and recent concepts in computing, and intertwining the digital and physical realms, the NCP as outlined in the HiPEAC Vision 2024 represents a more efficient, context-aware, and user-centric digital ecosystem. To make the NCP a reality, collaboration across multiple domains is required. We welcome comments from the community on this vision and its implications for different research fields.


The HiPEAC project has received funding from the European Union’s Horizon Europe research and innovation funding program under grant agreement number 101069836. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union. Neither the European Union nor the granting authority can be held responsible for them.

References

[1] Licklider, J. C. R. and Taylor, R. W., The Computer as a Communication Device, April 1968. [Online]. Available: https://internetat50.com/references/Licklider_Taylor_The-Computer-As-A-Communications-Device.pdf. [Accessed 17 April 2024].
[2] Vardanega, T. and Duranton, M., Digels, digital genius loci engines to guide and protect users in the next web, HiPEAC Vision 2023, pp. 18-21, https://doi.org/10.5281/zenodo.7461766, 2023.
[3] Vardanega, T., De Bosschere, K., Munk, H., Giorgetti, E., and Duranton, M., Past, present and future of the Internet and digitally-augmented humanity: A HiPEAC Vision, March 2020. [Online]. Available: https://www.hipeac.net/media/public/files/46/7/HiPEAC-2019-Comic-Book.pdf.
[4] HiPEAC, Digels: A journey to the heart of the web, StudioRain, March 2023. [Online]. Available: https://youtu.be/jIjf9G09tNo.

Tullio Vardanega of the University of Padua

Tullio Vardanega, MSc @ Uni Pisa, IT (1986), PhD @ TU Delft, NL (1998), has been at the University of Padua, IT, since Jan 2002. He was PI for a small R&D enterprise (1987-1991), and then at the European Space Agency (NL) until Dec 2001. He specializes in high-integrity real-time systems, edge-to-cloud continuum, software engineering, active learning, informatics education. He has run several international research collaborations around those themes. He is a member of IEEE and ACM. He is the technical expert for Italy in ISO/IEC JTC1/SC22: WG9 (Ada) and WG23 (Programming Language Vulnerabilities). Since 2004, he has been the chairperson of Ada-Europe. Since 2017, he has been a member of the HiPEAC Vision editorial board.

Marc Duranton, editor-in-chief of the HiPEAC Vision

Marc Duranton is a senior fellow of CEA (the French Alternative Energies and Atomic Energy Commission) and a member of the Digital Systems and Integrated Circuits Division of the Research and Technology Department of CEA. Previously, he worked for Philips and NXP where he led the development of the family of L-Neuro chips, digital processors using artificial neural networks and also contributed to the development of other video coprocessors. Since 2009, he has been the editor in chief of the HiPEAC Vision, a roadmap produced by the HiPEAC (High Performance Embedded Architecture and Compilation) project funded by the European Union.
