Warning over public sector AI ‘discrimination’

Nicholas Cecil
Visitors looking at AI (artificial intelligence) security cameras with facial recognition technology at the China International Exhibition Center in Beijing in 2018 (AFP via Getty Images)

Artificial intelligence being used to deliver public services may discriminate against some people, a watchdog warned today.

The Committee on Standards in Public Life, chaired by former MI5 boss Lord Evans, highlighted the risk of “bias” in machines being used by Whitehall.

It said data bias was a “serious concern” and called for further work to measure and mitigate “the impact of bias to prevent discrimination via algorithm”.

Lord Evans added: “AI and, in particular, machine learning, will transform the way public-sector organisations make decisions and deliver services.

“Public-sector organisations are not sufficiently transparent about their use of AI and it is too difficult to find out where machine learning is currently being used in government.”

Healthcare and policing currently have the most developed AI programmes, the committee added.

But it highlighted concerns that algorithmic tools to screen visa applications risked replicating “historic bias” if officials had previously discriminated against applicants from certain countries.

Similarly, AI filters for job applications could be discriminatory if, for example, the data fed into them by humans had in the past favoured men over women.

“The UK does not need a new AI regulator but regulators must adapt to the challenges that AI poses to their sectors,” added Lord Evans.

He also defended the Government’s decision to allow Chinese tech giant Huawei a limited role in Britain’s 5G network, a move which has angered Donald Trump.

“It’s right that the UK should take its own decision. The process, from what I can see, has been thought through and that is encouraging,” he said.
