Race can sound like straightforward information to collect from patients, but changes in how race has been categorized over time, how consistently demographic information is asked of patients, and how patients themselves think about race make it a data point worth taking with a grain of salt in patient records, experts say.
The development of predictive AI tools in healthcare shows tremendous promise in accelerating more accurate diagnoses and improving the safety and quality of healthcare.
However, what’s been lacking is a standard way to evaluate whether an AI tool does what it claims to do. As a result, health systems are often left to develop their own way of evaluating competing solutions from scratch, and it is easy to spend precious hours researching available products given the many technical and logistical components to understand.
We created this checklist together with leading clinicians and informaticists to detail the 10 components every predictive tool needs to have.
Download the full checklist here to learn about the non-negotiable features, capabilities and requirements predictive AI tools need to be safe and effective.
LifeBridge Health Selects Bayesian Health’s Research-Backed AI Platform to Help Diagnose and Treat Pressure Injury, Sepsis, and Patient Deterioration
NEW YORK, OCTOBER 20, 2021—Bayesian Health today announced that LifeBridge Health will be deploying its AI-based clinical decision support platform to help diagnose and treat pressure injury, sepsis, and patient deterioration. Bayesian Health’s platform operates within Epic and Cerner electronic medical records (EMR), deploying state-of-the-art artificial intelligence and machine learning (AI/ML) strategies to detect patient complications early.
Bayesian Health’s research-based AI platform takes existing EMR data, analyzing patient, clinical and third-party data with its industry-leading AI/ML models. The platform sends accurate and actionable clinical signals within existing workflows when a critical condition is detected, helping physicians and care team members accurately diagnose, intervene, and deliver timely care. The technology also includes a performance optimization engine which helps secure long-term physician, nurse and care team engagement with the tool.
“Like all health systems, our physicians must navigate large amounts of health data as they make decisions about patient care. As we look for ways to support our teams, we are interested to see how this new tool may work within the current workflow while allowing for some customization to augment the decision-making process for each provider or practice,” says Tressa Springmann, senior vice president and chief information officer at LifeBridge Health.
“LifeBridge has always looked for new and innovative ways to deliver high quality, compassionate care, and Bayesian Health’s research-backed platform will help LifeBridge Health’s physicians and care team members deliver on this mission,” said Suchi Saria, CEO of Bayesian Health. “I’m also thrilled to welcome Lee and Gary to Bayesian Health as founding members of our advisory board. They recognize the immense opportunity for health systems to improve patient outcomes and save lives by leveraging the Bayesian platform to augment physician and nurse decision-making with crucial data.”
Lee Sacks, MD, is the former Chief Medical Officer at Advocate Aurora, responsible for safety, quality, population health, insurance, claims, risk management, research, and medical education. He previously served as Executive Vice President, Chief Medical Officer of Advocate Health Care as well as the founding CEO of Advocate Physician Partners.
“Health Systems remain under market pressure to improve outcomes while reducing costs,” said Lee Sacks, MD. “Though health systems have made substantial investments in their EMR over the last decades, most are struggling to leverage their data in a way that supports clinicians in improving outcomes. Bayesian Health’s evidence-based AI/machine learning platform applied to sepsis outperforms what has been in the marketplace, and as a platform solution applied to many clinical areas, I believe it will enable health systems to achieve the dual goals of improving outcomes while reducing costs.”
Gary E. Bisbee, Jr., PhD, is founder, chairman and CEO of Think Medium, a digital media company focused on healthcare leadership, and sits on the Cerner board of directors. Prior to Think Medium, Bisbee was co-founder, chairman and CEO of The Health Management Academy, and he served as CEO and in various board of directors roles at ReGen Biologics, Inc., Aros Corporation, and APACHE Medical Systems, Inc.
Said Gary E. Bisbee, Jr, PhD, “Building technology that can adeptly analyze and manage messy EMR and healthcare data is hard. Building technology that physicians and nurses want to use day in and day out is even harder. Bayesian’s platform is uniquely positioned to bridge the gap between basic data and information a clinician needs to support a care decision. Accurate and actionable clinical decision support–that physicians want to use–is long overdue in the space, and the impact Bayesian Health will have on patient outcomes will be substantial.”
With a research-first foundation of over 21 patents and peer-reviewed research papers, Bayesian Health is approaching the market with a transparent, results-focused strategy. It recently published a large, five-site study analyzing use and practice impact over two years for Bayesian’s sepsis module. The platform drove 1.85 hours faster antibiotic treatment and demonstrated high, sustained adoption by physicians and nurses (89% adoption), driven by the sensitivity and precision of the insights and the user experience of the software.
Bayesian Health’s technology overcomes common hurdles faced by many in the field by using cutting-edge strategies to increase precision, make the models stronger, and encourage behavior change and ongoing use. As a result, Bayesian’s technology is 10x more accurate than other solutions in the marketplace, driving tangible patient outcomes. To learn more about Bayesian Health, visit bayesianhealth.com.
Bayesian Health is on a mission to make healthcare proactive by empowering physicians and care team members with real-time data to save lives. Just like the best physicians continually incorporate new data to refine their prognostication of what’s going on with a patient, Bayesian Health’s research-based AI platform integrates every piece of available data to equip physicians and nurses with accurate and actionable clinical signals that empower them to accurately diagnose, intervene, and deliver proactive, higher quality care.
The current practice of medicine is incredibly biased — because its policies, procedures, technologies and people are all implicitly biased. Though there has been ongoing attention to explicitly biased individuals and processes in healthcare, there are also long-standing policies, procedures, and technologies that have ingrained implicit bias.
Recently, many have wondered if the introduction of artificial intelligence and machine learning (AI/ML) technologies in the healthcare setting will result in increased bias and harm. It is possible — when AI/ML solutions use inherently biased studies, policies or processes as inputs, the technology, of course, will serve biased outputs. However, AI/ML technology can be key to making the practice of medicine more fair and equitable. When done right, AI/ML technology has the potential to greatly reduce bias in medicine by flagging insights or critical moments that a clinician might not see. In order to create technology that better serves at-risk and underserved individuals and communities, technologists and healthcare organizations must actively work to minimize bias when creating and deploying AI/ML solutions. They can do so by leveraging the following three strategies:
- Creating a checklist that evaluates potential sources of bias and what groups may be at risk for inequity,
- Proactively evaluating models for bias and robustness; and
- Continuously monitoring results and outputs over time.
Understanding why healthcare is biased and the sources of bias
Bias enters healthcare in a variety of ways. Depending on the way medical instruments were developed, they may not account for a variety of races. For example, pulse oximetry is more likely to miss hypoxemia (as measured by arterial blood gas) in Black patients than in White patients. This is because pulse oximeters were developed and calibrated with light-skinned individuals, and since a pulse ox reads light passing through the skin, it’s not surprising that skin color could impact readings.
Policies and processes can also hold inherent bias. Many organizations prioritize patients for care management using models that predict a patient’s future cost based on the assumption that patients with the highest healthcare costs also have the greatest needs. The issue with this assumption is Black patients tend to generate lower healthcare costs than White patients with the same level of comorbidities, likely because they have more barriers to accessing health care. As a result, resources might be mis-allocated to patients with lower needs (but higher predicted cost).
Historical studies have also led to inequities in care. Interpretation of spirometry data (for lung capacity) creates unfairness because Black people are assumed to have 15% lower lung capacity than White people, and Asian people are assumed to have 5% lower. These “correction factors” are based on historical studies that conflated average lung capacity with healthy lung capacity, without accounting for socioeconomic distinctions. Lung capacity tends to be reduced for individuals who live near busy roads, and living near roads is itself correlated with belonging to disadvantaged ethnic groups.
These care disparities have a significant impact. For example, sepsis, a condition that causes over 300,000 deaths per year, disproportionately impacts minority communities. According to the Sepsis Alliance, Black and Hispanic patients have a higher incidence of severe sepsis than White patients; Black children are 30% more likely than White children to develop sepsis after surgery; and Black women have more than twice the risk of severe maternal sepsis compared to White women.
For health systems, creating tools that actively work to combat these disparities in care isn’t a nice-to-have, but a mission-critical must-have. Health systems have a responsibility to provide equitable, safe care, and AI/ML technologies hold promise to help them do so.
What can be done to combat bias and promote equity in AI/ML technology?
Health organizations can implement these three strategies when launching AI/ML technologies to drive better, more equitable care outcomes.
Create a checklist that evaluates potential sources of bias and what groups may be at risk for inequity. Prior to validating or deploying a predictive model, it is worthwhile to clearly describe the clinical/business driver(s) for the intended predictive model and how the model will be used. Given the intended use, is there a risk that the model might perform unequally across subgroups and/or result in an unequal allocation of resources or outcomes for specific subgroups? If the prediction target is only a proxy for the outcome of interest, could that lead to unintended disparities between subgroups?
Once the objectives are clearly determined, it is possible to identify potential sources of bias in a given model. Some example questions to address include (a minimal code sketch for auditing one of them follows the list):
- Are there inputs that might be predictive of the outcome for some subgroups (e.g., socioeconomic status) that are not included in the model?
- Is the prediction target measured in the same way for all subgroups?
- Are input variables more likely to be missing in one subgroup than another?
- Could end users use the model outputs differently for specific subgroups?
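One way to make these questions concrete is to audit the development data directly. The sketch below checks whether input variables are missing at different rates across subgroups (the third question above); it assumes a hypothetical pandas DataFrame with a `subgroup` column, and the input column names are invented for illustration.

```python
# Minimal sketch: audit differential missingness of model inputs across subgroups.
# Column names ("subgroup", "lactate", "insurance_type") are hypothetical examples.
import pandas as pd

def missingness_by_subgroup(df: pd.DataFrame, group_col: str, input_cols: list) -> pd.DataFrame:
    """Fraction of missing values for each input column within each subgroup."""
    return df.groupby(group_col)[input_cols].apply(lambda g: g.isna().mean())

# Toy example: a large gap between rows flags inputs that are
# disproportionately missing for one group and deserve a closer look.
df = pd.DataFrame({
    "subgroup": ["A", "A", "B", "B", "B"],
    "lactate": [2.1, None, None, None, 3.0],
    "insurance_type": ["ppo", "hmo", None, "hmo", "ppo"],
})
print(missingness_by_subgroup(df, "subgroup", ["lactate", "insurance_type"]))
```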
Proactively evaluate models for bias and robustness. Identifying subgroups at risk of bias or inequity facilitates explicit testing for differences in model performance between subgroups. Understanding differences in performance is necessary to avoid and mitigate bias, but it is not sufficient, because the validation data may still differ in important ways from the environment in which the model is ultimately deployed. Fortunately, new machine learning techniques can evaluate whether models are robust to differences in data and can also identify the conditions under which the model will no longer perform well and may become unsafe.
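As a hedged illustration of what subgroup testing can look like in practice, the sketch below computes discrimination (AUROC) and sensitivity separately for each subgroup on a held-out validation set. The function and variable names are hypothetical and not tied to any particular vendor’s tooling.

```python
# Minimal sketch: compare model performance across subgroups before deployment.
# y_true, y_score, groups, and the threshold are illustrative assumptions.
import numpy as np
from sklearn.metrics import roc_auc_score, recall_score

def performance_by_subgroup(y_true, y_score, groups, threshold=0.5):
    """Report AUROC and sensitivity per subgroup so performance gaps are visible."""
    results = {}
    for g in np.unique(groups):
        mask = groups == g
        results[g] = {
            "n": int(mask.sum()),
            "auroc": roc_auc_score(y_true[mask], y_score[mask]),
            "sensitivity": recall_score(y_true[mask], y_score[mask] >= threshold),
        }
    return results

# Example with synthetic data; a large gap between groups is a red flag to investigate.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 1000)
score = np.clip(0.4 * y + 0.6 * rng.random(1000), 0, 1)
group = rng.choice(["A", "B"], 1000)
print(performance_by_subgroup(y, score, group))
```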
Continuously monitor results and outputs over time. Done incorrectly, these technologies risk harming patients, making care less safe and potentially exacerbating bias. Even if models are free from bias when initially validated and deployed, it is essential to continue monitoring model performance to ensure it does not degrade over time. Models are particularly susceptible to failure after unanticipated changes in technology (e.g., new devices, new code sets), population (e.g., demographic shifts, new diseases), or behavior (e.g., practice patterns, reimbursement incentives). These changes are collectively referred to as dataset shift, because the data seen in clinical practice differ from the data used to train the predictive model. Although clinicians, administrators, or IT teams can mitigate changes in performance by explicitly identifying scenarios in which dataset shift is likely, it is equally important that solution vendors monitor model performance on an ongoing basis and update the models when needed.
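To make ongoing monitoring concrete, here is a minimal sketch, with hypothetical data columns and an illustrative alerting threshold, that tracks a model’s discrimination month by month and flags a sustained drop that could indicate dataset shift.

```python
# Minimal sketch: track model discrimination over time and flag possible dataset shift.
# The monthly grouping and the 0.05 AUROC drop threshold are illustrative choices.
import pandas as pd
from sklearn.metrics import roc_auc_score

def monthly_auroc(df: pd.DataFrame, baseline_auroc: float, max_drop: float = 0.05) -> pd.DataFrame:
    """AUROC per calendar month, flagging months well below the validation baseline."""
    rows = []
    for month, g in df.groupby(df["timestamp"].dt.to_period("M")):
        if g["label"].nunique() < 2:          # need both outcomes to compute AUROC
            continue
        auroc = roc_auc_score(g["label"], g["score"])
        rows.append({"month": str(month), "n": len(g), "auroc": auroc,
                     "degraded": auroc < baseline_auroc - max_drop})
    return pd.DataFrame(rows)

# Usage: df has one row per prediction with columns timestamp, score, label.
# A run of "degraded" months should trigger investigation and, if needed, retraining.
```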
As more health systems and healthcare organizations implement AI/ML technology to enable patient-specific insights and drive improved care, they need to actively work to reduce bias and provide better, more equitable care by implementing these three key strategies. Understanding the potential sources of bias, proactively looking for and evaluating bias in models, and monitoring results over time will help reduce differential treatment of patients by race, gender, weight, age, language and income.
Clinical AI can reduce harm, improve patient outcomes and deliver financial benefits by augmenting physician and nurse decision-making at the bedside, making care more proactive. But with so many potential areas — sepsis, stroke, patient deterioration, medication adherence — knowing where to begin applying AI can be tricky.
Reducing pressure injuries is one clinical area where many health systems are applying AI to improve in-the-moment decision-making, reduce the burden on nurses and reduce hospital acquired pressure injuries (HAPIs).
Why tackle pressure injuries with AI?
HAPIs hurt patients, prolong hospital stays, consume resources, reflect poorly on quality of care, and are costly to hospitals. To avoid these harms, hospitals need to catch pressure injuries early — and also have efficient and effective ways of documenting pressure injuries that are present on admission. However, current approaches to pressure injury risk prediction are not based on robust evidence, creating a need — and an opportunity — for better data-driven decision-making. Integrating an AI platform with a dedicated pressure injury module can give health systems the efficiency they need to prevent pressure injuries and improve patient outcomes. Specifically, there are three benefits health systems are seeing by applying AI to pressure injuries:
1. Clinical AI can more accurately predict a patient’s risk level for pressure injuries
Current methods of pressure injury risk prediction use the standard Braden or Norton scales. These scales cover only a limited range of factors and may suffer from poor interobserver reliability. The Braden model alone catches 40 percent of patients with pressure injuries at 90 percent specificity. By incorporating AI with the Braden model, health systems can catch 60 percent of pressure injuries at the same level of specificity. This provides a significant opportunity for health systems to prevent pressure injuries before they develop. For example, Bayesian Health’s AI platform can accurately predict pressure injuries a median of 6.2 days prior to development, equipping nurses and physicians with the time they need to intervene, conduct screening, and take preventative action. Better predictions and focused interventions mean fewer HAPIs.
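The “catch X percent at 90 percent specificity” comparison above is a sensitivity-at-fixed-specificity measurement. Below is a minimal sketch of how that number can be computed for any risk score, whether a Braden total or a model’s predicted probability; the variable names are hypothetical.

```python
# Minimal sketch: sensitivity at a fixed specificity (e.g., 90%) for any risk score.
import numpy as np
from sklearn.metrics import roc_curve

def sensitivity_at_specificity(y_true, y_score, target_specificity=0.90):
    """Highest sensitivity achievable while keeping specificity at or above the target."""
    fpr, tpr, _ = roc_curve(y_true, y_score)
    ok = (1.0 - fpr) >= target_specificity        # specificity = 1 - false positive rate
    return float(tpr[ok].max()) if ok.any() else 0.0

# Hypothetical usage: compare two scores on the same labeled cohort.
# Lower Braden totals indicate higher risk, so the score is negated to act as a risk score.
# braden_sens = sensitivity_at_specificity(labels, -braden_total)
# model_sens  = sensitivity_at_specificity(labels, model_probability)
```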
2. Clinical AI can enable nurses to act faster, preventing hospital acquired pressure injuries
Using an AI tool targeted at catching pressure injuries early can also make triaging assessments faster and more efficient. On average, only one in eight patients is at high risk of developing a pressure injury, but nurses are required to complete lengthy assessments on all patients. AI can help nurses prioritize these high-risk patients from the minute they start their shift. With high-risk alerts, AI helps ensure that nurses care for their highest-risk patients first, leading to earlier interventions and improved patient outcomes.
3. Clinical AI can improve nursing documentation efficiency and compliance
Pressure injury documentation can be lengthy and repetitive, and is often spread across a combination of electronic and paper charts. AI can provide a consolidated documentation system, charting a comprehensive assessment into the EMR that meets CMS and coding standards. The time saved increases documentation efficiency and improves compliance from nurses, allowing for more lifesaving, face-to-face patient care. Similarly, it can be incredibly difficult for staff to consistently identify and document pressure injuries that are present on admission, leading to penalties. AI can help identify a pressure injury present on arrival and ensure appropriate documentation. For example, Bayesian Health’s pressure injury module identifies 95% of pressure injuries that are present on admission and facilitates the appropriate documentation from nurses, leading to improved clinical, quality and financial outcomes.
A tangible clinical and financial impact
Clinical AI provides health systems with a huge opportunity to focus on preventing and reducing pressure injuries. With better risk predictions, faster identification, and improved documentation, health systems can improve patient outcomes and reduce costs by an estimated $400k a year. This impact is significant. To learn more about what makes a pressure injury AI tool safe, effective, and impactful, see the checklist we developed together with leading informaticists and clinicians.
Building and deploying AI predictive tools in healthcare isn’t easy. The data are messy and challenging from the start, and building models that can integrate, adapt, and analyze this type of data requires a deep understanding of the latest AI/ML strategies and an ability to employ these strategies effectively. Recent studies and reporting have shown how hard it is to get it right, and how important it is to be transparent with what’s “under the hood” and the effectiveness of any predictive tool.
What makes this even harder is that the industry is still learning how to evaluate these types of solutions. While there are many entities and groups (such as the FDA) working diligently on creating guidelines and regulations to evaluate AI and predictive tools in healthcare, at the moment there is no governing body explaining the right way to do predictive tool evaluations, leaving a gap in understanding what a solution should look like and how it should be measured.
As a result, many are making mistakes when evaluating AI and predictive solutions. These mistakes can lead to health systems choosing predictive tools that aren’t effective or appropriate for their population. As a long-time researcher in the field, I have seen these common mistakes made, and I have also been guiding health systems on how to overcome them to end up with a safe, robust, and reliable tool.
Here are the seven most common mistakes made when evaluating an AI/predictive healthcare tool, and how to overcome them to ensure an effective tool:
- Only the workflow is evaluated, not the models: The models are just as important as the workflow. Look for high-performing models (e.g., with both high sensitivity and high precision) before implementing them within a workflow. Not evaluating whether the models work before implementation, and assuming you can obtain efficacy through optimizing workflows alone, is like not knowing whether a drug works and changing its label to try to increase its effectiveness.
- The models are evaluated, but with the wrong metrics: The models should be evaluated, but the metrics should be determined based on the mechanism of action for each condition area. For example, in sepsis, lead time (the median time an alert fires prior to antibiotics administration) is critical. But you also don’t want to alert on too many people, because low-quality alerts that are not actionable will lead to provider burnout and over-treatment. The key criteria to look for in a sepsis tool are high sensitivity, significant lead time, and a low false-alerting rate (a minimal sketch of these metrics follows this list).
- Adoption isn’t measured on a granular level: Typically, end user adoption isn’t measured. However, to obtain sustained outcome improvements, a framework for measuring adoption (at varying levels of granularity) and improving adoption is critical. Look to see if the tool also comes with an infrastructure that continuously monitors use, and provides strategies to improve and increase adoption.
- The impact on outcomes isn’t measured correctly: Many studies rely on coded data to identify cases and measure outcome impact. These are not reliable because coding is highly dependent on documentation practices, and often a surveillance tool itself impacts documentation. In fact, a common flawed design is a pre/post study in which the post period leverages a surveillance tool that dramatically increases the number of coded cases, in turn leading to the perception that outcomes have improved because the adverse event rate (e.g., sepsis mortality rate among coded cases) has decreased. Look for rigorous studies of the tool that account for these types of issues.
- The ability to detect and tackle shifts isn’t identified: If a model doesn’t proactively tackle the issue of shifts and transportability, it is at risk of being “unsafe.” Strategies to reduce bias and adapt for dataset shift are critical because practice patterns are frequently changing (see what happened at one hospital during Covid-19, for example). Look for evidence of high performance across diverse populations to see if the solution is detecting and tuning appropriately for shifts (read more about best practices for combating dataset shift in this recent New England Journal of Medicine article).
- “Apples to oranges” outcome studies are compared: A common mistake is to overlook what the standard of care was in the environment where the outcome studies were done. For example, a 10% improvement in outcomes at a high reliability organization may be just as much or more impressive than similar improvement at a different organization with historically poor outcomes. Understanding the populations in which the studies were done and the standard of care in those environments will help you understand how and why the tool worked.
- Assuming a team of informaticists can tune any model to success: Keeping models tuned to be high-performing over time is a significant lift. Further, a common mistake is to assume any model can be made to work in your environment with enough rules and configurations added on top. The predictive AI tool should come with its own ability to tune, with an understanding of when and how to tune. Starting with a rudimentary model is akin to being given the names of molecules and being asked to create the right drug by mixing the ingredients correctly.
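As a concrete, hedged illustration of the metrics described in the second point above, the sketch below computes sensitivity, precision, false alerts, and median lead time before antibiotics from per-encounter alert and treatment timestamps. All field names and the data structure are hypothetical.

```python
# Minimal sketch: evaluate a sepsis alerting tool on metrics matching its mechanism of action.
# Each record is one patient encounter; the fields below are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime
from statistics import median
from typing import Optional

@dataclass
class Encounter:
    septic: bool                          # adjudicated / provider-confirmed sepsis case
    alert_time: Optional[datetime]        # first alert fired, if any
    antibiotics_time: Optional[datetime]  # first antibiotic administration, if any

def evaluate(encounters):
    alerted = [e for e in encounters if e.alert_time is not None]
    true_alerts = [e for e in alerted if e.septic]
    septic = [e for e in encounters if e.septic]
    lead_hours = [(e.antibiotics_time - e.alert_time).total_seconds() / 3600
                  for e in true_alerts if e.antibiotics_time is not None]
    return {
        "sensitivity": len(true_alerts) / len(septic) if septic else None,
        "precision": len(true_alerts) / len(alerted) if alerted else None,
        "false_alerts_per_100_encounters": 100 * (len(alerted) - len(true_alerts)) / len(encounters),
        "median_lead_time_hours": median(lead_hours) if lead_hours else None,
    }
```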
When dealing with predictive AI tools in the healthcare space, the stakes could not be higher. As a result, predictive solutions need to be monitored and evaluated to ensure effectiveness; otherwise it’s likely the tools will have no impact or, worse, harm patients. Understanding the common mistakes made, as well as the best practices for evaluation, will help health systems identify solutions that are safe, robust, and reliable, and ultimately help physicians and care team members deliver safer, higher-quality care.
Learn more about Bayesian Health’s research-first mentality, recent evaluations and outcome studies here.
Healthcare providers around the country are struggling due to a severe shortage of workers. The massive healthcare demand caused by the pandemic has inflicted burnout and stress on healthcare workers in all sectors, driving many to their wits’ end. According to the Bureau of Labor Statistics, the healthcare sector has already lost at least 500,000 workers since February 2020. This spells disaster for healthcare providers today, especially since many more healthcare workers are considering leaving the workforce for reasons such as reduced benefits, salary cuts, and grueling working conditions.
A recent report projects that around 6.5 million employees in the healthcare sector will leave their jobs by 2026. To address the shortage of workers, healthcare providers should leverage various tools and technologies. One such technological innovation that can help prepare healthcare providers is artificial intelligence (AI). In this post, let’s explore how AI can assist in reducing the impact of the current staffing shortage in healthcare.
Assist staff in triaging patients
Triaging has always been an essential process in healthcare institutions, but its role has been further highlighted by COVID-19. Triage staff need to stay alert and successfully differentiate COVID-19 from similar respiratory illnesses such as the flu. However, healthcare workers who are tasked with triaging patients may find it difficult to effectively and efficiently do their jobs, especially if cases soar and more people visit the hospital.
Healthcare providers can lighten the load on triage staff by using AI. Patients can answer a series of questions that are then evaluated against an algorithm, and an AI-powered program can help a healthcare worker accurately and quickly respond to the needs of a patient — whether that’s further testing or emergency health services. To use AI effectively for this purpose, healthcare providers need a solid, reliable IT infrastructure, so that triage staff can use the tool continuously and accurately assess patients who need immediate attention.
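As a purely illustrative sketch, and not any specific product’s algorithm, a questionnaire-based triage score might look like the following. The questions, weights, and cutoffs are invented for illustration; a real tool would be clinically validated and far more sophisticated.

```python
# Hypothetical sketch of questionnaire-based triage scoring; not a validated clinical tool.
def triage_level(answers: dict) -> str:
    # Illustrative symptom weights; a real system would use validated criteria.
    weights = {
        "shortness_of_breath": 3,
        "chest_pain": 3,
        "loss_of_taste_or_smell": 2,
        "fever": 1,
        "cough": 1,
    }
    score = sum(weights[q] for q, yes in answers.items() if yes and q in weights)
    if score >= 5:
        return "emergency evaluation"
    if score >= 2:
        return "testing / clinician follow-up"
    return "self-care guidance"

print(triage_level({"fever": True, "cough": True, "shortness_of_breath": False}))
# -> testing / clinician follow-up
```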
Streamline healthcare documentation
Electronic health records (EHRs) are vital healthcare documentation that inform the doctor about the patient’s current medical status and contain notes on how to move forward with a patient’s treatment plan. Aside from that, EHRs also act as a report card for the government, as they include billing records, insurance records, and other crucial documentation that may be legally required. Overseeing EHRs and making sure the information in them remains updated can be a cumbersome task for healthcare workers. In addition, some healthcare executives say that while EHRs are a necessary tool today, they did nothing to improve patient encounters and instead added more time to a clinician’s workday.
Through AI, EHR documentation for clinicians and other healthcare workers can be made more efficient and accurate. AI technology can listen to clinician-patient conversations, then interpret and transform them into salient content for orders, referrals, and notes. The AI can also input data directly into the EHR, reducing burdensome administrative tasks for already limited healthcare staff.
Improve patient outcomes
The shortage of healthcare workers worsens patient outcomes at an already delicate time. As the global health crisis continues to send more people to the hospital, it is important that healthcare leaders employ tools that boost patient outcomes and reduce readmissions. AI can improve patient outcomes and reduce the healthcare burden by automating routine tasks, allowing healthcare workers to focus on patient care. In addition, AI-assisted solutions such as predictive analytics can help by determining which individuals are most at risk from a particular disease, identifying patients who are likely to skip scheduled appointments, and even flagging early warning signs of diseases before they become severe.
Expand healthcare access to underserved regions
Despite the advancement of healthcare technologies, there are still many medically underserved areas in the country, and the current shortage of healthcare workers will surely exacerbate this problem. Thankfully, AI can reduce the impact of the current staffing shortage by fulfilling some diagnostic duties usually assigned to specialists and other healthcare workers. For example, healthcare providers in regions that have a scarcity of ultrasound technicians and radiologists can use AI imaging tools to assess chest x-rays for signs of illnesses such as tuberculosis and pneumonia, with a level of accuracy comparable to that of human specialists. In this way, AI can enhance the availability and accessibility of crucial healthcare processes in areas that don’t have sufficient healthcare workers.
Indeed, AI can be effectively leveraged by healthcare providers to reduce the strain on the healthcare system caused by the current staffing shortage. In the long run, AI can decrease the rate of burnout and stress that healthcare workers experience, as well as drive more equitable and safer health systems, both of which can improve the quality of healthcare provided to today’s patients.
Written for bayesianhealth.com by Jamie Rose
Bayesian Health’s platform drove faster treatment for sepsis, with high provider adoption.
A large, five-site study analyzing use and practice impact over two years for Bayesian Health’s sepsis module showed high sensitivity (80%+) with high precision (1 in 3 alerts was provider confirmed). Sepsis is a needle-in-a-haystack problem, so it is hard to achieve high precision at high sensitivity, and harder still when you are also optimizing for earlier detection times. But all three of these things are critical to cracking the code on earlier sepsis recognition and prevention.
Bayesian’s high-quality, timely clinical signals resulted in 1.85 hours faster life-saving patient treatment, driven by high provider adoption (89%).
With research showing the average adoption of clinical decision support tools to be in the low double digits, our high adoption is exciting news, and it shows physician confidence and trust in the Bayesian insights, leading to proactive, earlier care. This is great news for patient outcomes: in sepsis, every hour of delayed treatment directly impacts the mortality rate.
Bayesian Health, a health data startup created by Johns Hopkins researcher Suchi Saria, PhD, launched its artificial intelligence-powered clinical decision support platform on the commercial market July 12, according to a news release.