A Nigerian health worker has been barred from professional practice in the United Kingdom (UK) after a tribunal found that she used an artificial intelligence tool, ChatGPT, during a remote NHS job interview.
The individual, a UK-registered dietician identified as Aiwanehi Aigbokhaevbo, was applying for a specialist oncology dietician role at the Royal Surrey County Hospital.
The interview was conducted remotely over Microsoft Teams because she was in Nigeria at the time. Panel members said they became suspicious of her “unusually fluent” and highly detailed responses.
The tribunal noted that her answers sounded like they were being read directly from a clinical manual, containing advanced terminology that exceeded the expectations for the position.
The tribunal also contrasted her responses to personal questions, which it said she answered with ‘great enthusiasm and spontaneity’, with her responses to clinical questions, during which it said she often hesitated and asked for the question to be repeated.
The tribunal’s report added that the registered dietician kept asking the interview panel to repeat questions, before ‘slowly and deliberately’ repeating each question back herself, in an effort to buy time until she had ‘model’ answers.
It was gathered that during the interview, one of the panel members put the same questions to ChatGPT and noted similarities to the answers she had provided.
Three different members of the panel suggested the Nigerian dietician was cheating by using ChatGPT during the NHS interview.
Aigbokhaevbo has now been struck off following a hearing at the Health and Care Professions Tribunal Service (HCPTS).
The tribunal panel said: “When asked questions of a clinical nature, Miss Aigbokhaevbo’s response altered noticeably: she became very hesitant, she asked the panel to repeat the questions several times and she herself then repeated the questions back very slowly and deliberately.
“After much hesitation and repetition of the questions, she would then articulate with great fluency a model answer, rather than answering from her own knowledge and experience.
“The answers she provided were as if from a textbook, indicating specialist knowledge and experience of an advanced level, referring to medical terminology that was beyond the expectations of the job role.”
The panel noted during the interview that her eyes were moving ‘from side to side’, as if she was reading from the screen.
They found that she was repeating the questions being put to her, so that she could input them into an AI and then read them out.
After the interview, Miss Aigbokhaevbo was asked to write a case study in 45 minutes.
The panel said she had also completed this using AI, because the answers were ‘too detailed and perfect’ to be her own work.
The interview panel concluded that she had used ChatGPT during the NHS interview.
However, at the tribunal hearing, Aigbokhaevbo defended herself, denying using any AI tool during the interview.
She explained that her insistence on repeating the questions back was a ‘reflex’ to ensure that she had understood the question correctly.
The hearing nonetheless concluded that her responses were inconsistent and that she had used AI during the interview.
The panel added: “About the personal component, (she) denied the allegation and has expressed no remorse or apology.
“Her dishonest use of AI was compounded by her subsequent lies and by seeking to cast doubt on the professional integrity and veracity of the HCPC witnesses.
“Whilst she acknowledged in her evidence that using AI to assist her in her job interview would be cheating and “a great offence”, she has shown no insight as how cheating in a job interview would undermine the integrity of the recruitment process; have a potentially negative impact on the Trust in having to address deficits in the knowledge and expertise of the person recruited; potentially jeopardise patient care in being treated by a Dietitian who didn’t have the knowledge and skills she claimed to have.
“Given the attitudinal nature of (her) misconduct, and the absence of any evidence of remorse, insight, or remediation, the Panel considered that there was a significant risk of repetition.
“The Panel found (her) to be untrustworthy in her willingness to tell lies in the course of her evidence and impugn the professional integrity of the HCPC’s witnesses.
“The Panel therefore concluded that (her) fitness to practise is impaired at a personal level.”
The panel concluded that she would be struck off the register and that an interim suspension order of 18 months would be imposed to cover the period during which she can still appeal.
This development comes a few months after the UK government released guidelines on how AI tools can be used in the health sector.
The government held that while AI can serve as a supportive tool, it is not a replacement for human judgment.
The guideline reads in part: “Artificial intelligence (AI) is being used in many ways to help health providers and to assist people with their health. Some examples include:
Chatbots and Apps: Nearly 1 in 10 people now use AI-powered chatbots to get health advice. Many apps use AI to analyse health data from devices like smartwatches.
Admin Support: Some hospitals use automated systems to invite patients to appointments or screenings.
Voice Technology: AI can record and summarise doctor-patient conversations, helping doctors spend less time taking notes. Some services even give patients a summary of their visit and advice based on it.
Screening and Diagnosis: AI can help spot diseases (like cancer), support doctors in making treatment decisions, and assist with therapies. Some AI tools focus on specific tasks, while others have a broader range of applications.”
