What does our future with AI hold?
Artificial intelligence (AI) is increasingly becoming part of our daily lives — for example in the form of apps or machine control software — and is raising many questions. Will it develop beyond our control, replace humans in the labour market to a harmful degree, or create new risks in the fields of weapons and research? Commentators discuss the rationale behind regulating this powerful technology and the options for doing so.
Respect and solidarity needed
AI is not just a threat, interjects Corriere della Sera:
“Paradoxically, the centrality of machines will put humans back at the centre. They are called upon to imagine a future in which machines and progress are at the service of the happiness of the individual, their relationships and their freedom. For too long, the majority of humanity has lived like a hamster in a wheel without addressing the big existential questions. ... The future will have to bring a rediscovery of values. ... The real invention we need will be the promotion of solidarity systems as a form of social cohesion. Gratitude, respect and solidarity will have to be our compass.”
The responsibility lies with the user
The consequences of AI depend on how it is used, explains Morgane Soulier, an expert in digital strategies, in Le Point:
“Just as a knife can help you eat or be used to kill, AI is a tool that can be used to build or destroy. The risk lies solely in how humans use the tool, not in the tool itself. ... Let us nurture the hope that, as long as it is used well, artificial intelligence will be the catalyst for a healthier, more educational, democratic and environmentally friendly future. ... It is up to us, as a society, to set clear limits to ensure that it is used ethically. ... This is exactly what the EU’s AI Act is all about.”
The threat of new super-weapons
AI could open a dangerous new chapter in the arms race, fears Index:
“In the battle for hegemony, AI could become the focus of the new arms competition. ... Building a nuclear bomb is expensive, time-consuming, and attempts are easily noticed. Developing destructive artificial intelligence — once the technology is already available — is cheaper, easier to copy and harder to detect. This is especially dangerous when our adversary [China] is already on its way to becoming a techno-dictatorship and won’t be held back by concerns about human rights.”