An expert tells Sky News that it's important to "pay attention" to the possibilities for harm from AI and claims it's "not clear" that governments know how to regulate the technology in a safe way.
The rapid rise of artificial intelligence (AI) is not only raising concerns among societies and lawmakers, but also some tech leaders at the heart of its development.

Some experts, including the 'godfather of AI' Geoffrey Hinton, have warned that AI poses a risk of human extinction comparable to pandemics and nuclear war.

From the boss of the firm behind ChatGPT to the head of Google's AI lab, over 350 people have said that mitigating the "risk of extinction from AI" should be a "global priority".

While AI can perform life-saving tasks, such as algorithms analysing medical images like X-rays, scans and ultrasounds, its fast-growing capabilities and increasingly widespread use have raised concerns.

We take a look at some of the main ones - and why critics say some of those fears go too far.

Disinformation and AI-altered images

AI apps have gone viral on social media sites, with users posting fake images of celebrities and politicians, and students using ChatGPT and other large language models to generate university-grade essays.

One general concern around AI's development is AI-generated misinformation and the confusion it may cause online.

British scientist Professor Stuart Russell has said one of the biggest concerns was disinformation and so-called deepfakes.

These are videos or photos of a person in which their face or body has been digitally altered so they appear to be someone else - typically used maliciously or to spread false information.
