GovWire

Press release: New innovation challenge launched to tackle bias in AI systems

Equality and Human Rights Commission

16 October 2023, 11:01

  • up to £400,000 in investment up for grabs as the Fairness Innovation Challenge opens for submissions
  • new scheme funds innovative solutions to tackle bias and discrimination in AI
  • scheme to focus on healthcare and other real-world use cases

UK companies can apply for up to £400,000 in government investment from today to fund innovative new solutions which tackle bias and discrimination in AI systems. The competition will look to support up to three ground-breaking homegrown solutions, with successful bids securing a funding boost of up to £130,000 each.

It comes ahead of the UK hosting the world's first major AI Safety Summit to consider how to best manage the risks posed by AI while harnessing the opportunities in the best long-term interest of the British people.

The first round of submissions to the Department for Science, Innovation and Technology's Fairness Innovation Challenge, delivered through the Centre for Data Ethics and Innovation, will nurture new approaches which ensure fairness underpins the development of AI models.

The challenge will tackle the threats of bias and discrimination by encouraging new approaches which will see participants building a wider social context into the development of their models from the off.

Fairness in AI systems is one of the government's key principles for AI, as set out in the AI Regulation White Paper. AI is a powerful tool for good, presenting near limitless opportunities to grow the global economy and deliver better public services.

In the UK, the NHS is already trialling AI to help clinicians identify cases of breast cancer, and the technology offers enormous potential to develop new drugs and treatments, and help us tackle pressing global challenges like climate change. These opportunities though cannot be realised without first addressing risks, in this instance tackling bias and discrimination.

Minister for AI, Viscount Camrose, said:

The opportunities presented by AI are enormous, but to fully realise its benefits we need to tackle its risks.

This funding puts British talent at the forefront of making AI safer, fairer and more trustworthy. By making sure AI models do not reflect bias found in the world, we can not only make AI less potentially harmful, but ensure the AI developments of tomorrow reflect the diversity of the communities they will help to serve.

While there are a number of technical bias audit tools on the market, many of these are developed in the United States, and although companies can use these tools to check for potential biases in their systems, they often fail to fit alongside UK laws and regulations. The challenge will promote a new UK-led approach which puts the social and cultural context at the heart of how AI systems are developed, alongside wider technical considerations.
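As a loose illustration of what such a bias audit tool measures, the sketch below (in Python) computes a demographic parity difference: the gap between two groups' selection rates under the same model. The data, group labels and 5% tolerance are hypothetical assumptions for illustration, not details of any specific tool.

    # Hypothetical sketch of one check a bias audit tool might run:
    # the demographic parity difference between two groups' selection
    # rates. All data and the 5% tolerance below are illustrative.

    def selection_rate(outcomes):
        """Fraction of positive (1) decisions in a list of 0/1 outcomes."""
        return sum(outcomes) / len(outcomes)

    def demographic_parity_difference(outcomes_a, outcomes_b):
        """Absolute gap in selection rates between group A and group B."""
        return abs(selection_rate(outcomes_a) - selection_rate(outcomes_b))

    # Example: model decisions (1 = shortlisted) for two demographic groups.
    group_a = [1, 0, 1, 1, 0, 1, 1, 0]
    group_b = [0, 0, 1, 0, 0, 1, 0, 0]

    gap = demographic_parity_difference(group_a, group_b)
    print(f"Demographic parity difference: {gap:.2f}")
    if gap > 0.05:  # illustrative tolerance only
        print("Gap exceeds tolerance; flag the system for review.")

Run against the example data, the gap is 0.38, so the check would flag the system; real audit tools apply the same idea across many metrics and protected characteristics.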

The Challenge will focus on two areas. First, a new partnership with King's College London will offer participants from across the UK's AI sector the chance to work on potential bias in their generative AI model. The model, developed with Health Data Research UK with the support of NHS AI Lab, is trained on the anonymised records of more than 10 million patients to predict possible health outcomes.

Second is a call for open use cases. Applicants can propose new solutions which tackle discrimination in their own models and areas of focus, including tackling fraud, building new law enforcement AI tools, or helping employers build fairer systems to analyse and shortlist candidates during recruitment.

Companies currently face a range of challenges in tackling AI bias, including insufficient access to data on demographics, and ensuring potential solutions meet legal requirements. The CDEI is working in close partnership with the Information Commissioner's Office (ICO) and the Equality and Human Rights Commission (EHRC) to deliver this Challenge. This partnership allows participants to tap into the expertise of regulators to ensure their solutions marry up with data protection and equality legislation.

Stephen Almond, Executive Director of Technology, Innovation and Enterprise at the ICO, said:

The ICO is committed to realising the potential of AI for the whole of society, ensuring that organisations develop AI systems without unwanted bias.

We're looking forward to supporting the organisations involved in the Fairness Challenge with the aim of mitigating the risks of discrimination in AI development and use.

The challenge will also offer companies guidance on how assurance techniques can be applied in practice to AI systems to achieve fairer outcomes. Assurance techniques are methods and processes which are used to verify and ensure systems and solutions meet certain standards, including those related to fairness.
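As a rough sketch of how one assurance technique could be applied in practice, the Python below compares true positive rates across two groups against an agreed tolerance, in the spirit of an equal opportunity check. The predictions, labels and 10% tolerance are hypothetical assumptions, not a method prescribed by the challenge.

    # Hypothetical assurance check: does the model find actual positives
    # at similar rates for two groups? Data and tolerance are illustrative.

    def true_positive_rate(predictions, labels):
        """Share of actual positives (label 1) predicted positive."""
        positives = [(p, y) for p, y in zip(predictions, labels) if y == 1]
        return sum(p for p, _ in positives) / len(positives)

    # Illustrative model outputs and ground truth for two groups.
    preds_a, labels_a = [1, 1, 0, 1, 0], [1, 1, 1, 1, 0]
    preds_b, labels_b = [1, 0, 0, 1, 0], [1, 1, 1, 1, 0]

    tpr_gap = abs(true_positive_rate(preds_a, labels_a)
                  - true_positive_rate(preds_b, labels_b))
    print(f"True positive rate gap: {tpr_gap:.2f}")
    if tpr_gap <= 0.10:  # illustrative standard only
        print("Assurance check passed.")
    else:
        print("Assurance check failed: investigate before deployment.")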

Baroness Kishwer Falkner, Chairwoman of the Equality and Human Rights Commission, said:

Without careful design and proper regulation, AI systems have the potential to disadvantage protected groups, such as people from ethnic minority backgrounds and disabled people.

Tech developers and suppliers have a responsibility to ensure that AI systems do not discriminate.

Public authorities also have a legal obligation under the Public Sector Equality Duty to understand the risk of discrimination with AI, as well as its capacity for mitigating bias and its potential to support people with protected characteristics.

The Fairness Innovation Challenge will be instrumental in supporting the development of solutions to mitigate bias and discrimination in AI, ensuring that the technology of the future is used for the good of all. I wish all participants the best of luck in the challenge.

The Fairness Innovation Challenge closes for submissions at 11am on Wednesday 13 December 2023, with successful applicants notified of their selection on 30 January 2024.
