Bayesian Health

Bayesian Health and Johns Hopkins University Announce Ground-Breaking Results

Press

07/22/22 AI Thority – Bayesian Health and Johns Hopkins University Announce Ground-Breaking Results With a Clinically Deployed Artificial Intelligence Platform

A new health AI system from Bayesian detects sepsis earlier. Where early AI deployments have failed to produce real-world results, Bayesian demonstrates reduced mortality, long-term efficacy, high adoption, and fewer false alerts in a trio of prospective, peer-reviewed studies.

LINKS


Read the AI Thority article

Learn more about Bayesian’s peer-reviewed research on sepsis here


STAT – Race, Bias in Machine Learning

Press, Whitepapers/Case Studies

“People have this misconception that if they just include race as a variable or don’t include race as a variable, it’s enough to deem a model to be fair or unfair,” said Bayesian Health CEO Suchi Saria.

Read the article here


Bias Checklist – JAMIA: Exploring Deployment of AI to Improve Patient Outcomes

Research

For anyone exploring deployment of healthcare AI, a key challenge has been the lack of a comprehensive assessment for measuring bias within a solution. Further complicating matters, most scientific papers focus on only one or two aspects of bias, while meta-reviews and industry toolkits simply survey or summarize existing quantitative measures.

In a first-of-its-kind research paper in JAMIA (a leading informatics journal), our very own Suchi Saria brings together a team of experts in health disparities, health services, machine learning, and informatics, providing a rare, end-to-end perspective on bias.

If you are exploring deployment of AI to improve patient outcomes, lower readmissions and decrease alert fatigue, this checklist provides a solid foundation for identifying and overcoming sources of bias.

Download the full checklist here to better identify and understand bias in healthcare AI solutions.

Modern Healthcare: How Analytics, AI Tools can Overlook Multiracial Patients

Press

Hospitals and health systems are rolling out more tools that analyze and crunch data to try to improve patient care, raising questions about when and how it’s appropriate to integrate race and ethnicity data… READ MORE


Becker’s Hospital Review: Lessons learned and best practices for an effective AI strategy

Press

The healthcare sector has met artificial intelligence with a mixture of excitement and apprehension. In a world where clinicians are overwhelmed by data, many view predictive AI solutions as necessary tools that can be paired with human expertise and judgment. When done ‘right’, these tools… READ MORE


Modern Healthcare: Why Capturing Patient Race Data is So Difficult

Press

Race can sound like straightforward information to collect from patients—but changes to how race has been categorized over time, how consistently demographic information is asked of patients and how patients think about race make it a data point worth taking with a grain of salt in patient records, experts say… READ MORE


The Essential Checklist for Predictive AI Solutions

Whitepapers/Case Studies

The development of predictive AI tools in healthcare shows tremendous promise in accelerating more accurate diagnoses and improving the safety and quality of healthcare.

However, what’s been lacking is a standard way to evaluate whether an AI tool will do what it claims. As a result, health systems are often left to develop their own way of evaluating competing solutions from scratch. It is easy to spend precious hours researching available products, as there are many technical and logistical components to understand.

We created this checklist together with leading clinicians and informaticists to detail the 10 essential components every predictive tool needs to have.

Download the full checklist here to learn about the non-negotiable features, capabilities and requirements predictive AI tools need to be safe and effective.


Bayesian Health Announces LifeBridge Health Partnership and Creation of Advisory Board

Press

LifeBridge Health Selects Bayesian Health’s Research-Backed AI Platform to Help Diagnose and Treat Pressure Injury, Sepsis, and Patient Deterioration

NEW YORK, OCTOBER 20, 2021—Bayesian Health today announced that LifeBridge Health will deploy its AI-based clinical decision support platform to help diagnose and treat pressure injury, sepsis, and patient deterioration. Bayesian Health’s platform operates within Epic and Cerner electronic medical records (EMRs), deploying state-of-the-art artificial intelligence and machine learning (AI/ML) strategies to detect patient complications early.

Bayesian Health’s research-based AI platform builds on existing EMR data, analyzing patient, clinical, and third-party data with its industry-leading AI/ML models. The platform sends accurate, actionable clinical signals within existing workflows when a critical condition is detected, helping physicians and care team members accurately diagnose, intervene, and deliver timely care. The technology also includes a performance optimization engine that helps secure long-term physician, nurse, and care team engagement with the tool.

“Like all health systems, our physicians must navigate large amounts of health data as they make decisions about patient care. As we look for ways to support our teams, we are interested to see how this new tool may work within the current workflow while allowing for some customization to augment the decision-making process for each provider or practice,” said Tressa Springmann, senior vice president and chief information officer at LifeBridge Health.

“LifeBridge has always looked for new and innovative ways to deliver high quality, compassionate care, and Bayesian Health’s research-backed platform will help LifeBridge Health’s physicians and care team members deliver on this mission,” said Suchi Saria, CEO of Bayesian Health. “I’m also thrilled to welcome Lee and Gary to Bayesian Health as founding members of our advisory board. They recognize the immense opportunity for health systems to improve patient outcomes and save lives by leveraging the Bayesian platform to augment physician and nurse decision-making with crucial data.” 

Lee Sacks, MD, is the former Chief Medical Officer at Advocate Aurora, responsible for safety, quality, population health, insurance, claims, risk management, research, and medical education. He previously served as Executive Vice President, Chief Medical Officer of Advocate Health Care as well as the founding CEO of Advocate Physician Partners.

“Health systems remain under market pressure to improve outcomes while reducing costs,” said Lee Sacks, MD. “Though health systems have made substantial investments in their EMRs over the last decades, most are struggling to leverage their data in a way that supports clinicians in improving outcomes. Bayesian Health’s evidence-based AI/machine learning platform applied to sepsis outperforms what has been in the marketplace, and as a platform solution applied to many clinical areas, I believe it will enable health systems to achieve the dual goals of improving outcomes while reducing costs.”

Gary E. Bisbee, Jr., PhD, is founder, chairman and CEO of Think Medium, a digital media company focused on healthcare leadership, and sits on the Cerner board of directors. Prior to Think Medium, Bisbee was co-founder, chairman and CEO of The Health Management Academy, and he served as CEO and in various board of directors roles at ReGen Biologics, Inc., Aros Corporation, and APACHE Medical Systems, Inc.

Said Gary E. Bisbee, Jr., PhD, “Building technology that can adeptly analyze and manage messy EMR and healthcare data is hard. Building technology that physicians and nurses want to use day in and day out is even harder. Bayesian’s platform is uniquely positioned to bridge the gap between raw data and the information a clinician needs to support a care decision. Accurate and actionable clinical decision support that physicians want to use is long overdue in the space, and the impact Bayesian Health will have on patient outcomes will be substantial.”

With a research-first foundation of over 21 patents and peer-reviewed research papers, Bayesian Health is approaching the market with a transparent, results-focused strategy. It recently published a large, five-site study analyzing use and practice impact over two years for Bayesian’s sepsis module. The platform drove antibiotic treatment 1.85 hours faster and demonstrated high, sustained adoption by physicians and nurses (89% adoption), driven by the sensitivity and precision of its insights and the user experience of the software.

Bayesian Health’s technology overcomes common hurdles faced by many in the field by using cutting-edge strategies to increase precision, make the models more robust, and encourage behavior change and ongoing use. As a result, the accuracy of Bayesian’s technology is 10x higher than that of other solutions in the marketplace, driving tangible patient outcomes. To learn more about Bayesian Health, visit bayesianhealth.com.

Bayesian Health is on a mission to make healthcare proactive by empowering physicians and care team members with real-time data to save lives. Just like the best physicians continually incorporate new data to refine their prognostication of what’s going on with a patient, Bayesian Health’s research-based AI platform integrates every piece of available data to equip physicians and nurses with accurate and actionable clinical signals that empower them to accurately diagnose, intervene, and deliver proactive, higher quality care.



How Health Systems can Provide Safer Care by Leveraging AI/Machine Learning Technology

Active Learning (Blog)

The current practice of medicine is incredibly biased, because its policies, procedures, technologies, and people are all implicitly biased. Though there has been ongoing attention to explicitly biased individuals and processes in healthcare, there are also long-standing policies, procedures, and technologies with ingrained implicit bias.

Recently, many have wondered whether the introduction of artificial intelligence and machine learning (AI/ML) technologies in healthcare settings will result in increased bias and harm. It is possible: when AI/ML solutions use inherently biased studies, policies, or processes as inputs, the technology will, of course, produce biased outputs. However, AI/ML technology can also be key to making the practice of medicine fairer and more equitable. When done right, it has the potential to greatly reduce bias in medicine by flagging insights or critical moments that a clinician might not see. To create technology that better serves at-risk and underserved individuals and communities, technologists and healthcare organizations must actively work to minimize bias when creating and deploying AI/ML solutions. They can do so by leveraging the following three strategies:

  1. Creating a checklist that evaluates potential sources of bias and what groups may be at risk for inequity;
  2. Proactively evaluating models for bias and robustness; and
  3. Continuously monitoring results and outputs over time.

Understanding why healthcare is biased and the sources of bias

Bias enters healthcare in a variety of ways. Depending on how medical instruments were developed, they may not account for patients of all races. For example, pulse oximetry is more likely to miss hypoxemia (as measured by arterial blood gas) in Black patients than in white patients. This is because pulse oximeters were developed and calibrated on light-skinned individuals; since a pulse oximeter reads light passing through the skin, it is not surprising that skin color can affect readings.

Policies and processes can also hold inherent bias. Many organizations prioritize patients for care management using models that predict a patient’s future cost, on the assumption that the patients with the highest healthcare costs also have the greatest needs. The issue with this assumption is that Black patients tend to generate lower healthcare costs than white patients with the same level of comorbidities, likely because they face more barriers to accessing healthcare. As a result, resources may be misallocated to patients with lower needs (but higher predicted costs).
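To see the mechanism concretely, here is a deliberately tiny Python sketch; the patients, fields, and dollar figures are invented purely for illustration and are not drawn from any real dataset.

```python
# Illustrative only: how ranking by predicted cost can misallocate care.
# Two hypothetical patients have the same comorbidity burden, but one faces
# access barriers and therefore generates lower healthcare costs.
patients = [
    {"id": "A", "comorbidities": 5, "predicted_cost": 12_000},  # good access
    {"id": "B", "comorbidities": 5, "predicted_cost": 8_000},   # access barriers
]

# A cost-based priority list ranks A above B despite equal clinical need.
by_cost = sorted(patients, key=lambda p: -p["predicted_cost"])
print([p["id"] for p in by_cost])  # ['A', 'B'] -> B is deprioritized

# Ranking on a more direct measure of need treats the two patients equally.
by_need = sorted(patients, key=lambda p: -p["comorbidities"])
print([p["id"] for p in by_need])  # ['A', 'B'] -> a tie: equal priority
```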

Historical studies have also led to inequities in care. Interpretation of spirometry data (which measures lung capacity) creates unfairness because Black people are assumed to have 15% lower lung capacity than white people, and Asian people 5% lower. These “correction factors” are based on historical studies that conflated average lung capacity with healthy lung capacity, without accounting for socioeconomic factors. Lung capacity tends to be reduced for individuals who live near roads, and living near roads is correlated with membership in disadvantaged ethnic groups.

These care disparities have a significant impact. Sepsis, for example, a condition that causes over 300,000 deaths per year, disproportionately impacts minority communities. According to the Sepsis Alliance, Black and Hispanic patients have a higher incidence of severe sepsis than white patients; Black children are 30% more likely than white children to develop sepsis after surgery; and Black women have more than twice the risk of severe maternal sepsis as white women.

For health systems, creating tools that actively work to combat these disparities in care isn’t a nice-to-have but a mission-critical must-have. Health systems have a responsibility to provide equitable, safe care, and AI/ML technologies hold promise to help them do so.

What can be done to combat bias and promote equity in AI/ML technology?

Health organizations can implement these three strategies when launching AI/ML technologies to drive better, more equitable care outcomes.

Create a checklist that evaluates potential sources of bias and what groups may be at risk for inequity. Prior to validating or deploying a predictive model, it is worthwhile to clearly describe the clinical/business driver(s) for the intended predictive model and how the model will be used. Given the intended use, is there a risk that the model might perform unequally across subgroups and/or result in an unequal allocation of resources or outcomes for specific subgroups? If the prediction target is only a proxy for the outcome of interest, could that lead to unintended disparities between subgroups?

Once the objectives are clearly determined, it is possible to identify potential sources of bias in a given model. Some example questions to address include the following (the sketch after this list shows one way to probe the missingness and target-measurement questions programmatically):

  • Are there inputs that might be predictive of the outcome for some subgroups (e.g., socioeconomic status) that are not included in the model?
  • Is the prediction target measured in the same way for all subgroups?
  • Are input variables more likely to be missing in one subgroup than another?
  • Could end users use the model outputs differently for specific subgroups?
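As one concrete way to act on two of these questions, the sketch below audits input-variable missingness rates and outcome base rates per subgroup in a pandas DataFrame. This is a minimal illustration, not Bayesian Health’s tooling; the default column names (race, outcome) and the five-percentage-point gap threshold are hypothetical.

```python
# Minimal per-subgroup audit sketch (hypothetical columns and threshold).
import pandas as pd

def audit_subgroups(df: pd.DataFrame, group_col: str = "race",
                    outcome_col: str = "outcome",
                    missing_gap: float = 0.05) -> None:
    features = [c for c in df.columns if c not in (group_col, outcome_col)]

    # "Are input variables more likely to be missing in one subgroup than another?"
    miss = df.groupby(group_col)[features].apply(lambda g: g.isna().mean())
    gaps = miss.max() - miss.min()  # max-minus-min missingness rate per feature
    print("Features with unequal missingness across subgroups:")
    print(gaps[gaps > missing_gap].sort_values(ascending=False))

    # "Is the prediction target measured in the same way for all subgroups?"
    # A large spread in measured base rates is a prompt for clinical review,
    # not proof of bias by itself.
    print("\nOutcome base rate by subgroup:")
    print(df.groupby(group_col)[outcome_col].mean())
```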


Proactively evaluate models for bias and robustness. Identifying subgroups at risk of bias or inequity makes it possible to explicitly test for differences in model performance between subgroups. Understanding differences in performance is necessary to avoid and mitigate bias, but it is not sufficient, because the validation data may still differ in important ways from the environment in which the model is ultimately deployed. Fortunately, new machine learning techniques can evaluate whether models are robust to differences in data and can identify the conditions under which a model will no longer perform well and may become unsafe.
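For the explicit subgroup testing this describes, one standard approach is to compute a discrimination metric, such as AUROC, separately for each subgroup and compare. Below is a minimal sketch assuming scikit-learn; the inputs (y_true, y_score, groups) are hypothetical placeholders.

```python
# Per-subgroup performance comparison (minimal sketch; inputs are hypothetical).
import numpy as np
from sklearn.metrics import roc_auc_score

def auroc_by_subgroup(y_true: np.ndarray, y_score: np.ndarray,
                      groups: np.ndarray) -> dict:
    """Map each subgroup label to its AUROC; NaN where AUROC is undefined."""
    results = {}
    for g in np.unique(groups):
        mask = groups == g
        # AUROC is undefined when a subgroup contains only one outcome class.
        if np.unique(y_true[mask]).size < 2:
            results[g] = float("nan")
            continue
        results[g] = roc_auc_score(y_true[mask], y_score[mask])
    return results
```

Because small subgroups yield noisy estimates, a performance gap found this way should be judged with confidence intervals (for example, bootstrapped) before concluding that the model truly performs worse for a subgroup.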

Continuously monitor results and outputs over time. Done incorrectly, deployment risks harming patients, making care less safe, and potentially exacerbating bias. Even if models are free from bias when initially validated and deployed, it is essential to continue monitoring model performance to ensure it does not degrade over time. Models are particularly susceptible to failure after unanticipated changes in technology (e.g., new devices, new code sets), population (e.g., demographic shifts, new diseases), or behavior (e.g., practice patterns, reimbursement incentives). These changes are collectively referred to as dataset shift, because the data seen in clinical practice differ from the data used to train the predictive model. Although clinicians, administrators, and IT teams can mitigate changes in performance by explicitly identifying scenarios where dataset shift is likely, it is equally important that solution vendors monitor model performance on an ongoing basis and update models when needed.
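A simple, common form of such monitoring compares each input’s recent production distribution against its training-time distribution, for example with the population stability index (PSI). The sketch below is one standard formulation, offered as an illustration rather than Bayesian Health’s monitoring engine; the 0.2 alert threshold is a widely used rule of thumb, not a universal constant.

```python
# Dataset-shift check via the population stability index (illustrative sketch).
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, n_bins: int = 10) -> float:
    """PSI between a training-time sample and a recent production sample."""
    # Bin edges from training-data quantiles; np.unique guards against
    # duplicate edges when the feature has many repeated values.
    edges = np.unique(np.quantile(expected, np.linspace(0, 1, n_bins + 1)))
    # Clip both samples into the training range so every value lands in a bin.
    e = np.clip(expected, edges[0], edges[-1])
    a = np.clip(actual, edges[0], edges[-1])
    e_frac = np.histogram(e, bins=edges)[0] / len(expected)
    a_frac = np.histogram(a, bins=edges)[0] / len(actual)
    e_frac = np.clip(e_frac, 1e-6, None)  # avoid log(0)
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

# Rule of thumb: PSI > 0.2 on a key input suggests shift worth investigating.
```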

As more health systems and healthcare organizations implement AI/ML technology to enable patient-specific insights and drive improved care, they need to work actively to reduce bias and provide better, more equitable care by implementing these three key strategies. Understanding the potential sources of bias, proactively evaluating models for bias, and monitoring results over time will help reduce differential treatment of patients by race, gender, weight, age, language, and income.
