With the emergence of generative artificial intelligence and the dawn of artificial general intelligence (AGI), we wanted to imagine scenarios for the use of AI in cybersecurity solutions by 2050.
The aim of this fantasy story is to encourage you to think about your use of AI and to offer you some food for thought.
From machine learning to digital intuition
The year is 2049. Alexandre Renaud, Information Systems Security Manager (ISSM), is on his way as usual to the head office of UnityRisk, a market-leading insurance company.
After his workstation switches on by recognising his vital and biometric parameters, he consults his daily security diagnosis with ALFRED (Artificial Logic for Full-spectrum Response and Enterprise Defence), an artificial general intelligence (AGI) designed to support him in managing cybersecurity issues.
Since the late 2040s, the traditional notion of identity has disappeared: the password was abandoned after post-quantum encryption failed to materialise.
To identify a user, the AGI has evolved machine learning into digital intuition. This capability enables artificial intelligence to make decisions even when some of the information needed is missing. The new paradigm significantly improves detection rates, particularly for weak signals, and reduces false positives.
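To make the idea concrete, here is a minimal sketch of what such decision-making under missing information could look like. Every name, signal and threshold here is invented for illustration; it is not a real product's logic. Absent signals fall back to a neutral prior, and overall confidence shrinks toward uncertainty when little is actually observed:

```python
# Hypothetical "digital intuition" scorer. All signal names, weights
# and thresholds are invented for illustration only.

PRIORS = {"typing_rhythm": 0.5, "gait": 0.5, "heart_rate": 0.5}
WEIGHTS = {"typing_rhythm": 0.4, "gait": 0.35, "heart_rate": 0.25}

def intuition_score(signals: dict) -> float:
    """Combine available signals (values in [0, 1], higher = more
    likely the legitimate user); substitute a neutral prior when a
    signal is missing, and discount confidence accordingly."""
    score, known_weight = 0.0, 0.0
    for name, weight in WEIGHTS.items():
        value = signals.get(name)
        if value is None:
            score += weight * PRIORS[name]  # fall back to the prior
        else:
            score += weight * value
            known_weight += weight
    # Shrink toward the neutral 0.5 when few signals were observed
    return 0.5 + (score - 0.5) * known_weight / sum(WEIGHTS.values())

def decide(signals: dict) -> str:
    """Grant, deny, or escalate when the evidence is too weak."""
    s = intuition_score(signals)
    if s >= 0.8:
        return "grant"
    if s <= 0.3:
        return "deny"
    return "escalate"  # weak signal: hand off for deeper checks
```

Note how the design choice mirrors the story: rather than refusing to decide when data is missing, the system decides cautiously, escalating borderline cases instead of raising a hard alert.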
Digital intuition is found everywhere, enabling an adaptive, real-time policy against cyber threats and extortion attempts. In 2050, ‘alert fatigue’ no longer exists: SOC operators have been replaced by software agents that develop judgement through digital intuition, a response to the difficulties in recruiting cybersecurity professionals in the 2030s.
From the automatic configuration of network segmentation rules to the monitoring of standards and regulations, the company’s entire information system evolves in real time, like an immune system adapting its defence mechanisms in response to new pathogens.
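The immune-system analogy can be sketched in a few lines. This is a purely illustrative toy (the class name, TTL and indicator format are all assumptions, not any real platform's API): a newly observed threat indicator immediately becomes a blocking rule, and stale rules decay away, just as antibodies fade once a pathogen is gone:

```python
import time

# Illustrative sketch: segmentation rules that adapt like an immune
# response. All names and parameters here are invented.

class AdaptiveSegmentation:
    def __init__(self, ttl_seconds: float = 3600.0):
        self.ttl = ttl_seconds
        self.block_rules = {}  # threat indicator -> time last seen

    def observe_threat(self, indicator: str) -> None:
        """React to a new 'pathogen': remember it and block it."""
        self.block_rules[indicator] = time.monotonic()

    def is_blocked(self, indicator: str) -> bool:
        """Check a rule, forgetting it once its TTL has expired."""
        seen = self.block_rules.get(indicator)
        if seen is None:
            return False
        if time.monotonic() - seen > self.ttl:
            del self.block_rules[indicator]  # stale rule decays
            return False
        return True
```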
The human element is no longer the driving force: its role is to guide the operation of the AGI.
Against a backdrop of urgent climate change, smart cities have become the norm
UnityRisk’s premises are hyperconnected. Temperature, air and water quality sensors, robots, drones: the company’s environmental impact is continuously monitored by the government and must remain below a given threshold, or the business will be shut down immediately.
For Alexandre Renaud, environmental constraints have joined cybersecurity constraints: each cybersecurity requirement now carries an associated environmental cost ceiling that must not be exceeded.
This is why distributed edge computing infrastructures hosting the artificial intelligence models have replaced cloud-hosted instances, whose cooling consumed too much water.
Controlled by the AGI, this equipment warms each room in cold weather with the heat it generates, reducing energy requirements. In hot weather, the equipment, coordinated on the blockchain principle, distributes the load and switches off in turn so as not to exceed a predefined temperature threshold.
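The hot-weather rotation described above can be sketched as a simple rebalancing step. Everything here is hypothetical (node names, the threshold value, the shedding policy): when the room exceeds the threshold, the hottest node switches off and its load is spread over the remaining nodes:

```python
# Illustrative sketch of thermal-aware load shedding among edge
# nodes. All names and values are invented for illustration.

THRESHOLD_C = 27.0  # assumed room-temperature ceiling

def rebalance(nodes: dict, room_temp: float) -> dict:
    """nodes maps node name -> load share. If the room is too hot,
    switch off the most loaded node and redistribute its share."""
    if room_temp <= THRESHOLD_C or len(nodes) < 2:
        return nodes  # nothing to shed, or nobody to take the load
    nodes = dict(nodes)  # do not mutate the caller's mapping
    hottest = max(nodes, key=nodes.get)
    shed = nodes.pop(hottest)
    share = shed / len(nodes)
    return {name: load + share for name, load in nodes.items()}
```

Calling this repeatedly on each temperature tick makes the nodes "switch off in turn", since a different node tops the load ranking after each redistribution.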
For the information systems security team responsible for supervision, traceability no longer concerns only the company’s human employees: it now includes the machine language of autonomous equipment such as robots, drones and level 5 autonomous vehicles. Each manufacturer has developed its own standard, and the cybersecurity management platform transcribes the data sources received from all connected equipment into a single format on the fly.
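Such on-the-fly transcription is essentially a set of per-vendor adapters mapping each proprietary schema onto one common record. The vendors, field names and schemas below are invented purely to illustrate the pattern:

```python
import json

# Hypothetical normalisation layer: each vendor emits its own
# telemetry schema; an adapter per vendor maps it to a common
# record. All vendor names and fields are invented.

ADAPTERS = {
    "drone_v2": lambda raw: {"source": raw["uid"],
                             "event": raw["evt"],
                             "ts": raw["t"]},
    "robot_x": lambda raw: {"source": raw["robot_id"],
                            "event": raw["action"],
                            "ts": raw["when"]},
}

def normalise(vendor: str, payload: str) -> dict:
    """Parse a vendor's JSON payload and transcribe it into the
    common {source, event, ts} format."""
    return ADAPTERS[vendor](json.loads(payload))
```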
A larger attack surface
The technological innovations of the last 30 years have considerably extended the attack surface. In 2050, the new Neuralink chips incorporate all the business skills a human operator needs to do his or her job.
Although higher education is no longer a prerequisite for taking up a position of responsibility, ensuring the veracity of the information imparted by the chip is a safety and security issue for companies.
That’s why quality control standards for the day-to-day behaviour of company employees and service providers are now part of the new regulations.
Using surveillance cameras, the AGI detects suspicious behaviour and notifies the information systems security team, so that a compliance breach can be recorded and a more thorough check carried out.
And what if 2050 was finally tomorrow?
If our story sounds fantastical, you should know that some of these concepts are already at work in 2024.
- Companies such as cybersecurity publisher Kaspersky are looking into the creation of a concept of cyber immunity that includes a pillar called ‘digital intuition’.
- Cybersecurity vendor Juniper is currently developing functionalities for self-configuring computer networks using generative artificial intelligence.
- INRIA is currently working on two projects: AI@EDGE, a reusable, secure and reliable artificial intelligence hosted in an edge computing instance as an alternative to the cloud; and EDGE, an industrial project in partnership with Qarnot and ADEME that aims to reuse IT waste heat to reduce the energy footprint of tomorrow’s factories.
- In 2024, Elon Musk’s company Neuralink implanted its first chip in a patient’s brain, with the aim of restoring mobility to people with disabilities. Specialists are already asking what additional capabilities Neuralink could offer once its brain-machine interface is fully operational; one possibility is adding skills to a human.
It’s impossible to know what tomorrow will bring, especially in a field as fast-moving as cybersecurity. Nevertheless, it is always useful to look ahead and, in turn, to question the way we interact with AI today.