The Government Accountability Office has issued a report (GAO-21-7SP) entitled "Artificial Intelligence in Health Care: Benefits and Challenges of Technologies to Augment Patient Care".
The report was sent on Nov. 30, 2020, to Sen. Lamar Alexander, R-Tennessee, chairman of the Senate Health, Education, Labor, and Pensions Committee; Rep. Greg Walden, R-Oregon, ranking member of the House Energy and Commerce Committee; Rep. Michael Burgess, R-Texas, ranking member of the House Energy and Commerce subcommittee on Health; and Rep. S. Brett Guthrie, R-Kentucky, ranking member of the House Energy and Commerce subcommittee on Oversight and Investigations. Here are excerpts of summaries associated with the report.
What GAO Found: "Artificial Intelligence (AI) tools have shown promise for augmenting patient care in the following two areas:
* Clinical AI tools have shown promise in predicting health trajectories of patients, recommending treatments, guiding surgical care, monitoring patients, and supporting population health management (i.e., efforts to improve the health outcomes of a community). These tools are at varying stages of maturity and adoption, but many of those we describe, with the exception of population health management tools, have not achieved widespread use.
* Administrative AI tools have shown promise in reducing provider burden and increasing efficiency by recording digital notes, optimizing operational processes, and automating laborious tasks. These tools are also at varying stages of maturity and adoption, ranging from emerging to widespread.
GAO identified the following challenges surrounding AI tools, which may impede their widespread adoption:
* Data access. Developers experience difficulties obtaining the high-quality data needed to create effective AI tools.
* Bias. Limitations and bias in data used to develop AI tools can reduce their safety and effectiveness for different groups of patients, leading to treatment disparities.
* Scaling and integration. AI tools can be challenging to scale up and integrate into new settings because of differences among institutions and patient populations.
* Lack of transparency. AI tools sometimes lack transparency--in part because of the inherent difficulty of determining how some of them work, but also because of more controllable factors, such as the paucity of evaluations in clinical settings.
* Privacy. As more AI systems are developed, large quantities of data will be in the hands of more people and organizations, adding to privacy risks and concerns.
* Uncertainty over liability. The multiplicity of parties involved in developing, deploying, and using AI tools is one of several factors that have rendered liability associated with the use of AI tools uncertain. This may slow adoption and impede innovation.
GAO developed six policy options that could help address these challenges or enhance the benefits of AI tools. The first five policy options identify possible new actions by policymakers, which include Congress, elected officials, federal agencies, state and local governments, academic and research institutions, and industry. The last is the status quo, whereby policymakers would not intervene with current efforts. See below for details of the policy options and relevant opportunities and considerations.
See table here: https://www.gao.gov/products/GAO-21-7SP"
Why GAO Did This Study: "The U.S. health care system is under pressure from an aging population; rising disease prevalence, including from the current pandemic; and increasing costs. New technologies, such as AI, could augment patient care in health care facilities, including outpatient and inpatient care, emergency services, and preventative care. However, the use of AI-enabled tools in health care raises a variety of ethical, legal, economic, and social concerns.
GAO was asked to conduct a technology assessment on the use of AI technologies to improve patient care, with an emphasis on foresight and policy implications. This report discusses (1) current and emerging AI tools available for augmenting patient care and their potential benefits, (2) challenges surrounding the use of these tools, and (3) policy options to address challenges or enhance benefits of the use of these tools.
GAO assessed AI tools developed for or used in health care facilities; interviewed a range of stakeholder groups including government, health care, industry, academia, and a consumer group; convened a meeting of experts in collaboration with the National Academy of Medicine; and reviewed key reports and scientific literature. GAO is identifying policy options in this report."
* * *
November 30, 2020
The U.S. health care system is under pressure. People age 65 and older are projected to make up one-fifth of the U.S. population by 2030. The overall prevalence of disease is increasing, even setting aside the recent Coronavirus Disease 2019 (COVID-19) pandemic.[4] Further, individuals, health insurers, and federal and state governments spent approximately $3.5 trillion in 2018 on health consumption expenditures, representing 16.9 percent of the nation's gross domestic product. One potential tool for addressing concerns surrounding the quality and cost of health care is emerging from the massive volume of health data, which is increasing at an unprecedented rate. Humans alone are not capable of meaningfully analyzing this flood of data on a reasonable time scale. Artificial intelligence (AI) is a promising alternative that can rapidly process and analyze large amounts of complex data.
New AI tools have the potential to both reduce administrative burdens and improve treatment, including outpatient and inpatient care, emergency services, and preventative care. Examples of AI being used to augment patient care include systems that provide personalized treatment recommendations, software that interprets vital signs to monitor patients in intensive care units (ICU), and smart speakers that convert words spoken during a medical appointment into electronic health records (EHR) and codes for insurance billing.[5] However, the use of AI technologies in health care raises a variety of ethical, legal, economic, and social concerns. For example, AI tools developed using historical data could perpetuate biases such as underrepresentation of certain groups based on race, socioeconomic status, or gender.
In view of the potential for AI to help improve patient care, you asked us to conduct a technology assessment in this area, with an emphasis on foresight and policy implications. This report discusses (1) current and emerging AI tools available for augmenting patient care and their potential benefits, (2) challenges surrounding the use of these tools, and (3) policy options to address challenges or enhance benefits of the use of these tools.
To address these objectives, we assessed available and developing AI technologies for clinical or administrative purposes that companies or health care providers may use to augment patient care as well as the challenges associated with using such technologies.[6] We focused our review on selected technologies used at locations that employ health care providers, including but not limited to physicians, registered nurses, medical assistants, and physical therapists. We excluded technologies used in other environments, such as the home, and excluded technologies that are exclusively focused on diagnostics.[7]
We reviewed key reports and scientific literature describing current and emerging technologies and interviewed a variety of stakeholders, including agency officials, industry members, academic researchers, and a consumer group. We also collaborated with the National Academy of Medicine to convene a 2-day expert meeting on current and emerging AI technologies for augmenting patient care. The meeting included experts from academia and industry, as well as legal scholars, with expertise covering all significant areas of our review. Following the meeting, we continued to use the experts' advice to clarify and expand on what we heard. We then identified six policy options in response to the challenges identified during our work and examined potential opportunities and considerations of each. Consistent with our quality assurance framework, we provided the experts and relevant agencies with a draft of our report and solicited their feedback, which we incorporated as appropriate. See appendix I for additional information on our scope and methodology.
We conducted our work from November 2019 through November 2020 in accordance with all sections of GAO's Quality Assurance Framework that are relevant to technology assessments. The framework requires that we plan and perform the engagement to obtain sufficient and appropriate evidence to meet our stated objectives and to discuss any limitations to our work. We believe that the information and data obtained, and the analysis conducted, provide a reasonable basis for any findings and conclusions in this product.
See footnotes here: https://www.gao.gov/assets/720/710920.pdf
* * *
Agency and Expert Comments
We provided a draft of this report to the Department of Health and Human Services (Food and Drug Administration, National Institutes of Health, and the Centers for Medicare and Medicaid Services) and the Department of Veterans Affairs with a request for technical comments. We incorporated agency comments into this report as appropriate.
We also provided a draft of this report to 14 participants from our expert meeting, and incorporated comments received as appropriate.
* * *
The text of the GAO report is available at https://www.gao.gov/products/GAO-21-7SP
TARGETED NEWS SERVICE (founded 2004) features non-partisan 'edited journalism' news briefs and information for news organizations, public policy groups and individuals; as well as 'gathered' public policy information, including news releases, reports, speeches. For more information contact MYRON STRUCK, editor, [email protected], Springfield, Virginia; 703/304-1897; https://targetednews.com