Artificial intelligence is infiltrating health care. We shouldn’t let it make all the decisions.

by Masha Gessen
April 21, 2023
in Artificial Intelligence
Reading Time: 5 mins read

AI paternalism could put patient autonomy at risk—if we let it.

Stephanie Arnett/MITTR | Getty Images

This article is from The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, sign up here.

Would you trust medical advice generated by artificial intelligence? It’s a question I’ve been thinking over this week, in view of yet more headlines proclaiming that AI technologies can diagnose a range of diseases. The implication is often that they’re better, faster, and cheaper than medically trained professionals.

Many of these technologies have well-known problems. They’re trained on limited or biased data, and they often don’t work as well for women and people of color as they do for white men. Not only that, but some of the data these systems are trained on are downright wrong.

There’s another problem. As these technologies begin to infiltrate health-care settings, researchers say we’re seeing a rise in what’s known as AI paternalism. Paternalism in medicine has been problematic since the dawn of the profession. But now, doctors may be inclined to trust AI at the expense of a patient’s own lived experiences, as well as their own clinical judgment.

AI is already being used in health care. Some hospitals use the technology to help triage patients. Some use it to aid diagnosis, or to develop treatment plans. But the true extent of AI adoption is unclear, says Sandra Wachter, a professor of technology and regulation at the University of Oxford in the UK.

“Sometimes we don’t actually know what kinds of systems are being used,” says Wachter. But we do know that their adoption is likely to increase as the technology improves and as health-care systems look for ways to reduce costs, she says.

Research suggests that doctors may already be putting a lot of faith in these technologies. In a study published a few years ago, oncologists were asked to compare their diagnoses of skin cancer with the conclusions of an AI system. Many of them accepted the AI’s results, even when those results contradicted their own clinical opinion.

There’s a very real risk that we’ll come to rely on these technologies to a greater extent than we should. And here’s where paternalism could come in.

“Paternalism is captured by the idiom ‘the doctor knows best,’” write Melissa McCradden and Roxanne Kirsch of the Hospital for Sick Children in Ontario, Canada, in a recent scientific journal paper. The idea is that medical training makes a doctor the best person to make a decision for the person being treated, regardless of that person’s feelings, beliefs, culture, and anything else that might influence the choices any of us make.

“Paternalism can be recapitulated when AI is positioned as the highest form of evidence, replacing the all-knowing doctor with the all-knowing AI,” McCradden and Kirsch continue. They say there is a “rising trend toward algorithmic paternalism.” This would be problematic for a whole host of reasons.

For a start, as mentioned above, AI isn’t infallible. These technologies are trained on historical data sets that come with their own flaws. “You’re not sending an algorithm to med school and teaching it how to learn about the human body and illnesses,” says Wachter.

As a result, “AI cannot understand, only predict,” write McCradden and Kirsch. An AI could be trained to learn which patterns in skin cell biopsies have been associated with a cancer diagnosis in the past, for example. But the doctors who made those past diagnoses and collected that data might have been more likely to miss cases in people of color.
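
A minimal, hypothetical sketch of that dynamic (not part of the original article; every name and number below is invented for illustration): if past clinicians systematically missed true cases in one group, a model trained on those historical labels tends to reproduce the same blind spot, flagging a smaller share of truly ill patients in that group.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Synthetic patients: four "biology" features plus a group indicator
# (a stand-in for something like skin tone that the model can see).
group = rng.integers(0, 2, size=n)                      # 0 = group A, 1 = group B
biology = rng.normal(size=(n, 4))
features = np.column_stack([biology, group])

# Ground-truth illness depends only on biology, not on group.
truly_ill = biology[:, 0] + 0.5 * biology[:, 1] + rng.normal(scale=0.5, size=n) > 1.0

# Historical labels: past clinicians missed 40% of true cases in group B
# but only 5% in group A, so the training labels inherit that bias.
miss_rate = np.where(group == 1, 0.40, 0.05)
historical_label = truly_ill & (rng.random(n) > miss_rate)

# Train on the biased historical labels, as a deployed system would.
model = LogisticRegression(max_iter=1000).fit(features, historical_label.astype(int))
flagged = model.predict(features).astype(bool)

# The model predicts past diagnostic behavior, bias included:
# it flags a smaller fraction of truly ill patients in group B.
for g, name in [(0, "group A"), (1, "group B")]:
    ill = (group == g) & truly_ill
    print(f"{name}: model flags {flagged[ill].mean():.0%} of truly ill patients")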

And identifying past trends won’t necessarily tell doctors everything they need to know about how a patient’s treatment should continue. Today, doctors and patients should collaborate in treatment decisions. Advances in AI use shouldn’t diminish patient autonomy.

So how can we prevent that from happening? One potential solution involves designing new technologies that are trained on better data. An algorithm could be trained on information about the beliefs and wishes of various communities, as well as diverse biological data, for instance. Before we can do that, we need to actually go out and collect that data—an expensive endeavor that probably won’t appeal to those who are looking to use AI to cut costs, says Wachter. 

Designers of these AI systems should carefully consider the needs of the people who will be assessed by them. And they need to bear in mind that technologies that work for some groups won’t necessarily work for others, whether that’s because of their biology or their beliefs. “Humans are not the same everywhere,” says Wachter.

The best course of action might be to use these new technologies in the same way we use well-established ones. X-rays and MRIs are used to help inform a diagnosis, alongside other health information. People should be able to choose whether they want a scan, and what they would like to do with their results. We can make use of AI without ceding our autonomy to it.

Read more from Tech Review’s archive

Philip Nitschke, otherwise known as “Dr. Death,” is developing an AI that can help people end their own lives. My colleague Will Douglas Heaven explored the messy morality of letting AI make life-and-death decisions in this feature from the mortality issue of our magazine.

In 2020, hundreds of AI tools were developed to aid the diagnosis of covid-19 or predict how severe specific cases would be. None of them worked, as Will reported a couple of years ago.

Will has also covered how AI that works really well in a lab setting can fail in the real world.

My colleague Melissa Heikkilä has explored whether AI systems need to come with cigarette-pack-style health warnings in a recent edition of her newsletter, The Algorithm.

Tech companies are keen to describe their AI tools as ethical. Karen Hao put together a list of the top 50 or so words companies can use to show they care without incriminating themselves.

From around the web

Scientists have used an imaging technique to reveal the long-hidden contents of six sealed ancient Egyptian animal coffins. They found broken bones, a lizard skull, and bits of fabric. (Scientific Reports)

Genetic analyses can suggest targeted treatments for people with colorectal cancer—but people with African ancestry have mutations that are less likely to benefit from these treatments than those with European ancestry. The finding highlights how important it is for researchers to use data from diverse populations. (American Association for Cancer Research)

Sri Lanka is considering exporting 100,000 endemic monkeys to a private company in China. A cabinet spokesperson has said the monkeys are destined for Chinese zoos, but conservationists are worried that the animals will end up in research labs. (Reuters)

Would you want to have electrodes inserted into your brain if they could help treat dementia? Most people who have a known risk of developing the disease seem to be open to the possibility, according to a small study. (Brain Stimulation)

A gene therapy for a devastating disease that affects the muscles of some young boys could be approved following a decision due in the coming weeks—despite not having completed clinical testing. (STAT)
