AI pioneer leaves Google warning about dangers
Concerns about artificial intelligence are growing. Yesterday, Geoffrey Hinton, one of the pioneers of the technology, joined the critical voices. Hinton quit Google saying he is worried that humans will lose control of the technology and warning that people may soon "not be able to know what is true anymore". Commentators echo his fears.
Heed the godfathers of this technology
Geoffrey Hinton is following in the footsteps of Yoshua Bengio, one of the signatories of the petition calling for a pause in the development of artificial intelligence, La Repubblica points out:
“Bengio and Hinton are considered the ‘godfathers’ of AI. And if they are worried, perhaps we should be too. ... This is not just about the primacy of our intelligence; the future of democracies threatened by the disinformation and fake photos that generative AI can produce is also at stake. Robert Oppenheimer, the physicist who helped create the atomic bomb, said: ‘When you see something that is technically sweet, you go ahead and do it and you argue about what to do about it only after you have had your technical success’. That was also Hinton’s credo. It has become his cross.”
No going back
It’s high time to prepare for the impact of the new technologies, writes El País:
“Those who try out these technologies are fascinated by the immense possibilities they offer. But those who are familiar with them see the global chaos they could unleash. ... History shows that efforts to curb the proliferation and misuse of new technologies do not succeed: nuclear weapons continue to spread around the world. ... Tom Siebel, head of a major AI company, has described the risks as terrifying. ... The recent initiative by a group of experts proposing a moratorium shows that even leading experts are thinking what many intuitively suspect: we’re not ready. ... So we’d better get up to speed as quickly as possible, because there’s no going back with these innovations.”
Military technology soon out of control?
The US software company Palantir has released a video on YouTube demonstrating its Palantir Artificial Intelligence Platform, which was developed for military purposes. Highly dangerous, warns Le Temps:
“This is very close to reality, and Palantir seems to be very advanced. ... These systems ‘must be controlled in this highly regulated and sensitive context to ensure that they are used in a legal and ethical way’, the US company promises in its video. But it is by no means certain that all this will be strictly adhered to in reality. So we can expect to see more and more decisions being made by AI, or at least based on AI-generated scenarios, in military operations. With a very real risk of losing control of the situation.”