Faculty Research Spotlight | Jie Ren, Ph.D., Associate Professor, Information, Technology, and Operations; Director, M.S. in Business Analytics
Faculty | Dec 22, 2025 | Gabelli School of Business
A Recently Published Study on Leveraging Crowdsourced Creativity to Innovate and Solve Problems, and a New Study on AI’s Role in Amplifying Bias
Jie Ren, Ph.D., has always been fascinated by the idea of the crowd — specifically how the collective wisdom and creativity of large groups can be harnessed to innovate and solve problems. Ren, an associate professor at Fordham University’s Gabelli School of Business, recently co-authored “Crowdsourcing Creativity: Support Architectures and Task-Knowledge Intensity.” Published in the journal Technovation, the research advances the understanding of how organizations can design better systems to foster creativity, collaboration, and problem-solving at scale.
She and her co-authors, Pinar Ozturk from Duquesne University and Yue Han from Adelphi University, are among the few scholars who are trying to organize the crowd. “We are focusing on the organizational mechanism in order to tap into thousands of people’s minds,” she said. “And for societal issues that do not belong to one organization or one individual, it should be a collective effort.”
Grounded in the componential theory of creativity, the study compares two approaches—remixing and external stimuli—to find the most effective way to generate ideas in crowdsourcing environments. The research has practical applications for drawing on collective intelligence to address societal challenges ranging from global issues like climate change and public health to local, community-based problems.
Ren’s current research project is focused on how AI can amplify bias. She and her co-authors, Tom Mattson from the University of Richmond and Qin Weng from Baylor University, are exploring how AI models absorb impressions of stereotypes and societal roles, and how those impressions can shape responses to AI prompts in ways that affect users. “If the biases or stereotypes are systematic, they could be dominating the words floating around the internet, and this bias can creep into the training data, contaminating AI models,” she argued.
Ren explains that machines have been trained on societal expectations, so some stereotypical ideas are reinforced, such as the notion that men showing emotion can be considered a weakness, and the machine reproduces those same biases. “If someone is venting to ChatGPT or its equivalent, and that person discloses his or her gender, men will tend to get less empathy than women,” she explained.
While the expression of these biases may seem subtle, Ren’s research suggests that the solitary nature of people’s interactions with AI can amplify them significantly. “Without external feedback, it’s you versus the machine. You are absorbing information from the machine as if it is true and it is the information to follow,” she noted. “Over time, this could influence how you see many things, amplifying the bias. It could create an information echo chamber, one that can spread bias.”
Written by: Kimberly Volpe-Casalino