How We Work


As a learning community, the Molina Lab Group (MLG) fosters collaboration and growth among students and research partners. 

  • Members develop and refine research ideas across all stages, from initial concepts to publication-ready work. 
  • The lab supports scholarly development through peer feedback, presentation practice, and methodological training. 

We engage in academic life through discussions on authorship, ethics, and the responsible use of emerging technologies. 

Current Research Topics


Our lab examines how people perceive, trust, and interact with AI systems that act as sources of communication, such as content moderation tools and generative AI systems. We study the cognitive heuristics users rely on when evaluating AI, which can lead to over-trust or distrust that is misaligned with a system’s actual capabilities. Our findings show that design features like interactive transparency can help calibrate trust by increasing user agency and understanding. This line of work is supported by an NSF CAREER grant, which investigates how design strategies can mitigate unfounded heuristics about generative AI and promote responsible information sharing in human–AI interactions.

Our lab studies how technological affordances—such as interactivity, customization, and agency—can motivate positive, socially beneficial behaviors, including physical activity, healthy eating, and learning. Using computational and experimental methods, we examine how digital features that support autonomy, competence, and relatedness shape sustained engagement over time. Our work also highlights that these effects are not one-size-fits-all and can vary across cultural and social contexts. By centering diverse user experiences, our research informs the design of inclusive, psychologically grounded health and learning technologies.

We investigate how technological affordances influence the spread and credibility of negative online content, particularly misinformation. Our research examines how modality (e.g., text, memes, video), interface cues, and emotional responses shape users’ judgments of credibility and decisions to share content. We find that identity alignment and the modality of information presentation play a strong role in how misinformation is processed and evaluated. This work advances understanding of misinformation by emphasizing user psychology, communication processes, and the role of platform design.