Conversations With Industry Leaders: Dr Sheuli Porkess

By Dr Stephanie Jones
It was lovely to catch up with Sheuli today in our virtual meeting. We covered some powerful topics, such as the use of new technologies including AI, and the challenges of encouraging diversity in STEM careers such as ours.

New technologies
Sheuli is particularly interested in the intersection between technology and society, and is focused on how to make sure the use of technology is meaningful. With her background in medical affairs and clinical research, she has recently been supporting start-ups and SMEs with medical devices and digital health. The key question we have before us is: “What is technology going to do to make the world a better place?”
Much of this recent technology is associated with AI, and we talked about Sheuli’s involvement across both the data and the ethical aspects.
AI in the shadows or out in the open?
I have found that business leaders in our industry are investing more in new technologies and upgrading their systems. I recently spoke at a conference about how these technologies are changing the way we work, and I have spoken with friends in other companies about their experiences of learning to use AI tools.
In her recent role as Chief Medical Officer (CMO) for a life sciences company, Sheuli has been helping to build concepts around digital healthcare and clinical data. In this role she has gained hands-on experience of building capabilities and has become more familiar with AI as a device.
We talked about “Shadow AI” where staff could be using AI without the organisation’s awareness, and the problems this can cause, including the leaking of confidential information into the public domain and security issues. This is why it is so important to encourage the right questions to be asked so that technologies can be used responsibly.
Ethics and AI:
Sheuli is perfectly placed to talk about the ethics of AI, considering her vast experience in this field. We talked about her time on the Pandemic Ethics Accelerator during the early days of the COVID-19 pandemic. As well as bringing the pharmaceutical medicine perspective to the discussions, Sheuli and others discussed critical decisions during the evolving crisis, such as school closures.
Sheuli also sat on the IFPMA bioethics working group talking about ethical issues for the pharmaceutical industry.
Sheuli is currently working on the ethical and practical aspects of AI for the industry, and we talked about some of the related issues, including the rapid pace of development in this area and the need for regulatory guidance.
What can regulatory agencies do?
We discussed the EU AI Act, which provides a helpful, searchable guide to adopting AI and maintaining compliance with the regulations.
But we both fear that as AI gathers momentum, the regulations are not quite keeping up with its exponential evolution. It is unlikely that they will ever be able to cover every type of system or algorithm, so we need to return to core principles to help anyone using (or creating) AI understand how to make the right decisions.
AI literacy is defined in the EU AI Act (which also provides a database of literacy programmes) and can help people to understand this area so that they can ask the right questions.

Don’t let the tail wag the dog!
Technology has a very broad definition, and is evolving in multiple directions in an uncontrolled manner. It is easy to be swept along in the wrong direction, but Sheuli noted we must not let the tail wag the dog! We agreed that we need to consider the direction we want healthcare to be driven rather than let ourselves be driven by the tech.
Chatty friend or friendly expert?
We have both heard lots of ways to describe AI, and my understanding of this area has increased immeasurably over the past 18 months that I’ve been learning about it. AI is not one thing.
Sheuli described a new way of looking at AI, and we broke it down into three categories:
- Chatty friend: someone who gives you ideas and helps you reach creative solutions
- New graduate: full of enthusiasm and some knowledge but with no experience, waiting for you to train and lead them
- Colleague who is an expert in one field: someone with deep knowledge who can give you insights in one area, but who is not equipped to understand other areas.
Fears around AI
I confessed that around two years ago I had no idea what AI meant; even 18 months ago my understanding was very basic, and I have learned a great deal about these tools since then. Others who have not put as much work into understanding this area may still feel fears and concerns that inhibit them from using the technology. I asked Sheuli how she thinks we might improve AI literacy, helping people understand the right questions to ask so that they can use the tools responsibly.
How to improve AI literacy:
Sheuli talked about the need to be able to experiment and ask questions. People need to play with a new system to find out how it works. However, in our tightly regulated industry it is not possible to put data into just any tool to see what it can do, as the data may be company-confidential, and there are security risks in trying out external tools. So organisations need to consider how they support their staff to experience these tools in a safe way.
Not just button-pushing!
Experimenting and asking questions to improve understanding of a new AI tool is not simply about learning which buttons to press! Before people can trust the output from a new tool they need to be able to critically question the system, find out how it is creating the outputs, look for potential sources of bias, and consider the unintended consequences.
To do this they need to be furnished with information so that they know which questions to ask, and empowered to ask those questions.
Creating the right culture:
It can be challenging to encourage people to ask questions. Our industry is filled with data-driven scientists who may be reluctant to raise questions in person, particularly to someone who is senior to them, and they may instead accept the outputs rather than look for failings. But in order to use new technologies effectively we require critical appraisal and testing, which necessitates feedback from the users. It is so important to empower those users so that they can raise the right questions with confidence. We need to find ways to create this culture within our workplaces, maybe with training, small focus groups, and anonymous feedback opportunities.
Automation bias:
Asking questions is more important than ever now that we have systems and tools that can invent information (known as "AI hallucinations"). As a society we have grown up trusting computers (such as calculators) more than we trust ourselves, a phenomenon known as automation bias.
We need to change this mindset to become adept at challenging the outputs and putting the right checks in place to maintain the right level of oversight.

Human in the loop:
Regulators are talking about human oversight, also called "human in the loop" or "expert in the loop". An output may be appropriate in some contexts, but that judgment still needs to be made by a human, and that person needs to be empowered to challenge and even override the AI if needed. Sheuli described an example where a tool produced a nice-looking list of references, but when she tried to find them, some turned out to be false. We need to build checks into people's work with such tools. She mentioned the Post Office scandal, a widely publicised case in which people were convicted in the courts because a computer made errors (the inquiry report was published on 8 July 2025). There were miscarriages of justice; people lost their jobs, and even their lives, because their leaders believed the computer rather than the people.
There is a joke in the UK: "Computer says no!" We need to start trusting humans and providing appropriate challenge to the outputs from computers. We need to recognise humans as sitting higher up the hierarchy than computers if we are to make the best use of the technologies emerging today.
Diversity in STEM:
This topic is important to both of us and we discussed some of the key topical issues.
Why is our industry still led mostly by white males, and are we seeing a change?
Glacial speed of change
We agreed that we are seeing a change, but it seems glacially slow. In fact, Sheuli highlighted some shocking data on gender inequalities, identified as recently as 2021. While reviewing the differential effect of lockdowns and home schooling on male and female researchers, she found an interesting article: Muric et al. conducted a retrospective observational study and found gender disparity in the authorship of biomedical research publications during the COVID-19 pandemic. Although men and women can now, in principle, occupy similar roles in the workplace, during the lockdowns male researchers published more than usual while female researchers published less. This is one marker of the continuing issue of gender bias around specific roles.
What are the challenges?
We feel that in many sectors there is a tradition whereby those in senior roles naturally tend to favour the people with whom they feel most comfortable, including those who attended the same schools or clubs. This perpetuates the lack of diversity.
At the same time, when we look at the people in a particular role, we may be put off pursuing that career if we do not see anyone who resembles us.
Combined, these factors tend to limit the diversity we observe in the workplace.
What can we do?
We need more role models in whom people can see themselves reflected: for example, more female role models and role models from different ethnicities.
Sheuli recently searched for famous women in life sciences and found that the results tended to come from North America; she had to specify a country to retrieve results from elsewhere. Does this reflect how people are valued and recognised by their peer community? It was an eye-opener that even if you are numerically more famous or successful, you may still not appear in the list of outputs. Search engines can only find what is recognised and published, so we need to shout more about our achievements and publicly recognise those from diverse backgrounds.
Importance of diversity
It is also important to emphasise, whenever possible, why we need diversity. An organisation that includes people from different backgrounds will benefit from the fresh ideas and perspectives they bring. Specific skill sets, particularly in niche professions such as pharmaceutical medicine, are rare, and we need to look beyond the limitations of the people we already know to identify such skills.
It was lovely to catch up with Sheuli and great to address such important topics with her.