Symposium Report on the Next Decade of Artificial Intelligence (2024)

Opportunities and risks

Over the course of the day, speakers identified several areas where AI technology, including generative AI, may provide meaningful benefits for the public, as well as the major risks that the technology poses.

Healthcare uses

AI technology has the potential to improve healthcare. Participants at the symposium discussed how AI can be used for early disease detection; drug discovery; monitoring trends in public health; administrative tasks that can alleviate physician burnout; and precision medicine, which involves the creation of personalized treatment plans based on information like genetic and clinical profiles.

AI tools have already been used to assist with medical imaging, making scans faster and less expensive. These tools can help clinicians triage by screening medical images to identify potentially urgent issues for priority review by a physician. AI models are now trained to go a step further and help detect disease. A speaker discussed an AI tool that can review mammograms and identify abnormalities signaling elevated breast cancer risk up to five years before cancer develops, allowing for earlier intervention and potentially better outcomes.[3] Speakers agreed that such AI tools should be used to augment clinicians’ work rather than replace it.

On the administrative front, AI is now used to help ease the burden on clinicians, such as by transcribing patient conversations. A physician discussed attempts to use generative AI technology to summarize patient histories to help ensure clinicians see relevant information that might otherwise get lost in extensive notes. This speaker noted that generative AI tools can also draft responses to simple patient questions via chat and can provide translation services. As the technology develops, he observed, AI tools could run continuously in hospital settings: recording tools could transcribe patient conversations, for example, or monitoring tools could continuously observe vital signs in patients’ rooms. Such tools could potentially extend into patients’ homes, such as video monitoring of patient activity.

However, these developments come with risks. Healthcare data is especially sensitive. Patients may not understand what data is being collected or how it is being used by AI tools, especially when such tools are continuously running in their hospital rooms or even homes. In addition to these privacy concerns, there are also serious concerns about unequal access. Minority groups are underrepresented in clinical data used to create personalized treatment plans, and AI transcription services currently do not cover a broad range of languages or accents. To effectively use AI tools in such a sensitive context, speakers noted, there must be a human involved who has ultimate responsibility and who is prepared to make decisions on when to trust AI tools and when to challenge them.

Information and misinformation

AI tools, including chatbots powered by generative AI, can help people easily find information. For example, they are already being used to supplement some phone lines, such as 311 public non-emergency services and corporate customer service. This use of chatbots can free up phone operators to focus on providing specific services and addressing complicated questions. In addition, generative AI tools can automate translation, allowing governments and businesses to better communicate with people in their native languages and provide better access to information.

However, as multiple speakers noted, the technology is far from perfect. Generative AI is notoriously prone to arriving at faulty conclusions, or “hallucinations,” and providing false responses. Generative AI chatbots can therefore share incorrect information with people, making them a flawed tool for providing information to the public. These chatbots can also fabricate stories about people, which could cause emotional and reputational harm.

In addition, generative AI can be used by bad actors to intentionally create misinformation materials, such as deepfakes. Laws around defamation and fraud provide some recourse but do not address the full scope of the problem, particularly as deepfakes become increasingly realistic and harder to detect. Speakers noted that the use of generative AI in misinformation would be a major concern in the months ahead of the general election, as bad actors may create a deluge of misinformation that cannot be adequately fact-checked in time. They cited examples of audio and visual deepfakes that could have serious repercussions if people believed they were true, such as robocalls imitating presidential candidates that encouraged people not to vote in primary elections,[4] images of former President Trump embracing Dr. Fauci,[5] and an image of an explosion at the Pentagon that briefly interrupted markets.[6]

Administrative tasks and automated decision-making

AI tools may be helpful to streamline a host of administrative tasks, particularly for government agencies. For example, a government official outlined opportunities to use generative AI to calculate tax liability, generate public education materials, and write computer code.

One common use case for AI technology is to assist with reviewing applications, which can significantly streamline those processes. For example, by using AI tools to automatically identify people eligible for services or benefits, government agencies can distribute those services and benefits to constituents more quickly and efficiently.

Of course, using AI tools to prescreen applications also comes with risks. Many companies use AI screening tools for hiring, potentially introducing algorithmic bias. One researcher noted that some companies may have started to use AI tools in hiring with the goal of addressing the unfairness and implicit bias inherent in human review. However, speakers cited ample evidence that AI tools often amplify, rather than correct, bias. For example, algorithms trained on data from past hiring can amplify human biases reflected in past hiring decisions and entrench existing norms. The black-box nature of AI algorithms makes it difficult to understand whether and how these tools work, which in turn makes it hard to ensure fairness in decision making. In fact, a speaker argued that it is best to assume that AI tools discriminate by default.

Data concerns

As generative AI models are trained on unprecedentedly vast data sets, the quality, quantity, and fair use of training data raise several concerns. A key issue is copyright, as companies are using copyrighted articles, images, and videos collected from across the internet in their models without compensating the creators for their work. Copyright concerns have received much public attention and are currently being litigated. Another key issue, discussed in the context of healthcare in a previous section, is the underrepresentation of minority groups in training data. As a result, generative AI tools may create outputs that benefit only certain groups.

There are also other data concerns that have not received as much attention, such as the availability of data used to train AI models. Generative AI models need vast amounts of data for training. Consequently, companies that had been scraping the web for years for free have an enormous advantage over newer entrants to the AI market. This is particularly true as platforms and content providers have started to lock up their data and enter into exclusive licensing agreements. This situation raises concerns that the market will become concentrated around just a few players, suppressing competition and further innovation while the technology is still in its infancy.

“Data democratization,” or encouraging the free flow of data, may allow for greater innovation. Of course, any such initiatives should be balanced with privacy concerns, especially concerning sensitive data. As companies seek additional data for training, models are increasingly trained on model-generated outputs, known as “synthetic data.” The use of synthetic data may reinforce existing issues, particularly hallucinations, and ultimately cause models to become more error-prone (“model collapse”).

There are also concerns about generative AI tools outputting content that is false, biased, or otherwise problematic because the model was trained on data that was itself flawed. This is often referred to as the “garbage in, garbage out” problem. Because there is little transparency into how AI models operate, one speaker noted concerns with outputs that may have been trained on inaccurate data (e.g., farcical articles), inappropriate data (e.g., protected classes like race or sex), or secret data (e.g., trade secrets). Another speaker warned that inadequate privacy protections on training data may allow generative AI tools to leak personal data or reidentify deidentified data in their outputs.

Figure 2: Garbage data in produces garbage data out.
