Is AI ready to be integrated into healthcare?

Is AI ready to be integrated into healthcare?

October 25, 2023

AI has the potential to manage mass amounts of data, support health research and more, but these systems could also push medical misconceptions and present privacy risks.

Some of the biggest names in the AI sector are shifting their focus to healthcare, to give medical professionals more value from their data.

Google recently expanded its Vertex AI Search to give healthcare and life sciences organisations “medically-tuned” search options, supported by generative AI. The tech giant said this will help deal with issues such as workforce shortages and administrative burdens.

Meanwhile, Microsoft previewed its upcoming AI-powered services to support clinicians and patients. These services include an analytics platform, a ‘patient timeline’ that uses generative AI to extract key events from data sources and healthcare chatbots.

The company claims multiple healthcare organisations are “early adopters” of these products and shared examples of three entities that are using the Microsoft Fabric analytics platform.

It is unsurprising that two of the biggest names in the generative AI space are taking steps into the healthcare sector, as it is widely reported that this industry is facing a staff shortage, particularly in the US.

This shortage is expected to grow over the next decade, while there are estimates that the value of AI in healthcare is projected to reach more than $200bn by 2030, making it a lucrative market to dive into.

Various experts have spoken about the benefits AI technology offers, such as by advancing health research and its potential to create personalised healthcare for patients. But there are also various risks associated with this rapidly developing technology.

While AI isn’t inherently malicious, it can push negative viewpoints and biased content depending on the data it is being fed. In this way, it also has the potential to spread false information if it is using outdated, incorrect information.

A recent study highlighted this risk when it looked at some of the biggest large language models (LLMs) on the market, including OpenAI’s ChatGPT and Google’s Bard. The results of this study suggest that biases in the medical system could be “perpetuated” by these AI models.

The researchers asked four LLMs a series of questions around certain medical beliefs that are built on “incorrect, racist assumptions”, such as there being differences in kidney function, lung capacity or pain tolerance based on race.

The study claims all four LLMs had failures when asked questions on kidney function and lung capacity, which are areas where “longstanding race-based medicine practices have been scientifically refuted”.

While this study did not focus on other forms of inaccuracies, the researchers also noted that models also shared “completely fabricated equations in multiple instances”.

Read more on siliconrepublic.com

Trust isn't given; it's earned.
Our precision in code development has won the trust of businesses worldwide.

What we do

We are obsessed with meeting goals, with perfect execution.

By your side from day one to the final product.

Programming Languages:

Tools:

    CI/CD

    Cloud Formation

    Datadog

    Docker

    Grafana

    Helm

    Kubernetes

    Swarm

    Terraform

Platforms & Frameworks:

    AWS Cloud

    Azure Cloud

    FastAPI

    Flask

    Google Cloud

    NestJS

    Spring Boot

Contact US

Dive into a collaboration where vision meets execution.

Partners: