
Artificial intelligence: 12 challenges with AI ‘must be addressed’ – including ‘existential threat’, MPs warn

The potential threat AI poses to human life itself should be a focus of any government regulation, MPs have warned.

Concerns around public wellbeing and national security were listed among a dozen challenges that members of the Science, Innovation and Technology Committee said must be addressed by ministers ahead of the UK hosting a world-first summit at Bletchley Park.

Rishi Sunak and other leaders will discuss the possibilities and risks posed by AI at the event in November, held at Britain’s Second World War codebreaking base.

Image: Bletchley Park, the home of Britain’s WWII codebreakers

The site was crucial to the development of early computing: Alan Turing and other codebreakers worked there, and the Colossus computers were used to decrypt Nazi communications.

Greg Clark, committee chair and a Conservative MP, said he “strongly welcomes” the summit – but warned the government may need to show “greater urgency” to ensure potential legislation doesn’t quickly become outdated as powers like the US, China, and EU consider their own rules around AI.

The 12 challenges the committee said “must be addressed” are:

1. Existential threat – if, as some experts have warned, AI poses a major threat to human life, then regulation must provide national security protections.

2. Bias – AI can introduce new or perpetuate existing biases in society.

3. Privacy – sensitive information about individuals or businesses could be used to train AI models.

4. Misrepresentation – language models like ChatGPT may produce material that misrepresents someone’s behaviour, personal views, and character.

5. Data – the sheer amount of data needed to train the most powerful AI models.

6. Computing power – similarly, the development of the most powerful AI requires enormous computing power.

7. Transparency – AI models often struggle to explain why they produce a particular result, or where the information comes from.

8. Copyright – generative models, whether they produce text, images, audio, or video, typically make use of existing content, which must be protected so as not to undermine the creative industries.

9. Liability – if AI tools are used to do harm, policy must establish whether the developers or providers are liable.

10. Employment – politicians must anticipate the likely impact that embracing AI will have on existing jobs.

11. Openness – the computer code behind AI models could be made openly available to allow for more dependable regulation and promote transparency and innovation.

12. International coordination – the development of any regulation must be an international undertaking, and the November summit must welcome “as wide a range of countries as possible”.



‘Exciting’ potential for AI in NHS

Mr Clark also highlighted healthcare as the area where the “most exciting” opportunities for AI lie.

It’s already being used in the NHS to read X-rays, while researchers are looking into how it could be used to predict damaging long-term conditions like diabetes.

He said AI could be used to help make treatment “increasingly personalised”, but reiterated the report’s concerns around potential biases being incorporated into any AI model’s training data.

“If you’re conducting medical research on a particular sample or ethnic minority, then the data on which AI is trained may mean the recommendations are inaccurate,” he added.

Government wants ‘proportionate’ approach

The committee said it would publish a finalised set of recommendations for the government “in due course”.

It wants any proposed AI legislation to be put before MPs in the next parliamentary session, which begins in September after the summer recess.

A government spokesperson said it was committed to a “proportionate and adaptable approach to regulation”, and pointed towards an initial £100m fund set aside for the safe development of AI models in the UK.

They added: “AI has enormous potential to change every aspect of our lives, and we owe it to our children and our grandchildren to harness that potential safely and responsibly.”


