Officials launch artificial intelligence research initiative

Officials announced an interdisciplinary research initiative on artificial intelligence’s real-life applications earlier this month.

The Trustworthy AI initiative, a plan to engage GW researchers across multiple fields, aims to improve existing AI models and research strategies for AI’s societal applications, with the goal of increasing user trust. Zoe Szajnfarber, the director of strategic initiatives for GW Engineering and a faculty director for GW TAI, said GW TAI seeks to unite GW faculty who research AI under one initiative to create opportunities for research collaborations across disciplines.

Szajnfarber said many faculty members at GW have ongoing research projects related to AI, as well as larger programs like the Institute for Trustworthy AI in Law and Society and the Co-Design of Trustworthy AI Systems. She said it can be difficult for faculty to find joint research opportunities because AI research at the University spans numerous schools, and no program like GW TAI previously existed to connect AI researchers.

“The challenge is that the work is so diverse and distributed across many disciplines that it’s sometimes hard to keep up, let alone find and connect with relevant collaborators on any given project,” Szajnfarber said in an email.

Szajnfarber said GW TAI will serve as a means to facilitate research collaborations and joint projects across disciplines at GW and to share AI-related events and opportunities in one place.

“I see GW TAI as a platform to bring together researchers who want to contribute to this important problem space of TAI in systems and for society,” Szajnfarber said.

Faculty involved in the initiative said they hope to bolster current AI models and study the implications of AI use in areas like consumer behavior, social justice issues and medical decisions.

Erica Wortham — the director of the GW Innovation Center and a co-principal investigator of Designing Trustworthy AI Systems, a program for doctoral students to conduct AI research — said she teaches a summer course for computer science and systems engineering doctoral students on designing AI solutions to solve real-world problems, like AI use for cashierless grocery stores. She said the partnership between students in two fields is an example of the Trustworthy AI initiative’s multidisciplinary approach and allows those designing AI to focus on addressing problems for those who use AI.

“You have the folks making the models and building the algorithms talking to folks that study technical systems in context,” Wortham said.

Douglas Crawford, an assistant professor of interior architecture and a GW TAI faculty member, said he hopes to collaborate with faculty members who create AI to develop architecture-specific models through the initiative.

He said architecture students use AI’s graphic design capabilities to create “inspirational imagery” and to generate quick mock-ups for their designs. But because graphic AI is not specifically tailored to architecture, its outputs include “hallucinations” like staircases that lead to a wall without a doorway, he said.

“I’m excited to be included amongst that and be able to offer up the unique perspective of someone in the Corcoran School who is working the graphic AI side of things,” Crawford said.

Nils Olsen, an assistant professor of organizational sciences and a GW TAI faculty member, said he looks forward to further examining AI’s impacts on consumer decisions and its uses in the medical field, like determining diagnoses, as a researcher in the initiative.

“Certainly there are a lot of opportunities,” Olsen said. “My real value to add there would be on the cognitive underpinnings, how people make decisions, literally in their brain.”

Olsen said he has been conducting consumer behavior research since 2019 using AI bots designed as cartoon versions of people from various racial groups to analyze how consumers negotiate with the different bots over Airbnb prices. He said researchers aimed to assess whether consumers would show different levels of aggression when negotiating with a Black, Asian or white individual and found that consumers perceived the bot resembling a Black individual as the most competent, likable and human.

Olsen said researchers are now thinking about the implications of those findings, as AI bots could begin to facilitate negotiations and customer service more frequently.

“They also understand where AI already is being implemented and where there could be opportunities for future kind of introductions of AI,” Olsen said.

Alexa Alice Joubin, the director of the Digital Humanities Institute and a professor of English, said she studies societal biases using AI because she found that the biases embedded in AI algorithms surface in their responses and reflect larger societal issues.

“My conclusion is that current AI is actually a social surveillance tool,” Joubin said. “Do you want to know about biases in society? Test it on AI. If you curate it correctly, what comes out actually reflects what the society collectively thinks.”

She said coders often think linearly about AI algorithms, while those in the humanities often consider alternative approaches to AI use, which she said demonstrates the value of researchers in different fields collaborating through the initiative.

“It’s so that you don’t lose sight of what it is for, it’s for humans,” Joubin said. “That’s why humanities are here.”

Doug Evans, the founder of the Behavioral Research Insights and Digital Health Technology Institute and a professor of prevention and community health, said he hopes to explore how researchers can use AI to influence health-related behaviors through GW TAI.

“There may be developments or collaboration opportunities that arise that could benefit my work,” Evans said. “So I was very interested in that sort of thing.”

Source: gwhatchet.com
