Should AI research be halted?
More than 1,000 tech industry and research experts — including Elon Musk and Apple co-founder Steve Wozniak — have warned in an open letter about "significant risks" posed by artificial intelligence (AI). They call for a halt of at least six months in the development of the technology and the establishment of a regulatory framework. Commentators discuss whether the technological advances in AI can and should be stopped.
Stop the spread of our failings
The Irish Independent calls for a new approach to AI:
“Machine learning is poised to radically reshape the future of everything for good and for ill, much as the internet did a generation ago. And yet, the transformation under way will probably make the internet look like a warm-up act. AI has the capacity to scale and spread all our human failings, disregarding civil liberties and perpetuating the racism, caste and inequality that are endemic to our society. ... The time has come for new rules and tools that provide greater transparency on both the data sets used to train AI systems and the values built into their decision-making calculus.”
Create regulatory bodies
We urgently need to improve the way we handle AI, warns economist Julien Serre in Les Echos:
“We need to push for the creation of new institutions as quickly as possible. This includes capable and legitimate authorities that can stem the sickening current of disinformation that will flood all social networks and threaten our democracies. Europe can play a leading role here. ... Its priority must be to promote tech industries that are both competitive and responsible. Europe must ensure that the current race to develop and deploy ever-more powerful digital tools does not become impossible to control.”
Implausible and unrealistic
They certainly took their sweet time about it, La Stampa scoffs:
“The best minds of our generation have suddenly awoken from their slumber, probably after falling for the picture of the Pope in a hip white puffer jacket [that went viral on social media] and are finally asking themselves: if someone as brilliant as me could fall for this, should I start worrying? The point is: is there really any way to halt this? Is it realistic to stop this industrial development? ... Moral scruples generally come before the event, not once the horse has bolted. You can’t invent the atom bomb and then say ‘Oops, sorry about that’ when it goes off.”
Let’s not lose our cool
A pause is not what is needed right now, L’Opinion argues:
“In view of the economic, legal and geopolitical challenges, it is clear that what we need now is not a pause but an acceleration. Not necessarily in terms of technological development — even if institutions do tend to be more efficient with their backs up against the wall than when they have lots of planning time — but as regards the structuring of the sector. Yes, the possibilities offered by generative AI are unknown territory. But instead of standing in the way of the pioneers of this new wild west, we should allow the competition to organise itself while the regulators define the limits of the territory. When it comes to AI, we cannot afford to lose our cool.”
Humans are the real danger
The digital revolution can make life better, writes researcher Saša Prešern in Delo:
“The only threat to human existence is humans, not technology. When will politicians realise that they can stop or prevent wars with the help of artificial intelligence, something humans are apparently ‘incapable’ of doing? ... Politicians don’t listen to each other. Their mistakes and provocations affect the whole world. They don’t know how useful artificial intelligence, data and reason would be to them. ... Although we haven’t known each other very long, I think my friend ChatGPT is more reasonable than militant politicians.”
China won’t play along
Handelsblatt sees the proposal as pointless:
“In view of the geopolitical tensions, it is extremely unlikely that China would participate. It is China’s declared goal to be number one in this key technology. The country is already the clear number two behind the US in terms of the number of research papers — and in some areas, ‘computer vision’ for example, it is even number one. A moratorium that is only partially respected carries the risk of us ending up with a Chinese Artificial General Intelligence (AGI) instead of a Western one.”