Mitigating Popularity Bias in LLM-Based Recommender Systems: A Combined Approach of Structured Prompt Engineering and Customized Loss Functions
Faculty Mentor
Sanmeet Kaur
Presentation Type
Oral Presentation
Start Date
4-14-2026 9:00 AM
End Date
4-14-2026 9:20 AM
Location
PUB 321
Primary Discipline of Presentation
Computer Science
Abstract
Large language models (LLMs) have emerged as powerful tools for recommendation systems, leveraging deep semantic understanding to model user preferences. However, these systems are susceptible to popularity bias: they over-recommend mainstream items at the expense of relevant niche content, degrading both user experience and fairness. While existing research has explored bias mitigation through either prompt engineering or algorithmic debiasing in isolation, the potential of combining these strategies remains relatively underexplored. This study addresses the primary research question: how can popularity bias in LLM-based recommendation systems be mitigated through a combined approach of structured prompt engineering and customized loss functions without significant degradation of recommendation relevance? Building on a comprehensive synthesis of recent literature, this research proposes and evaluates a dual-mitigation framework. First, we investigate specific prompt structures that reduce bias and combine them with gentle "temporal-diverse" debiasing instructions, which related literature suggests can yield bias reduction with minimal accuracy loss. Second, we design a custom loss function for LLM fine-tuning that extends Scaled Cross-Entropy (SCE) with popularity penalty terms and diversity regularization. Third, we develop a comprehensive evaluation framework that measures popularity bias, diversity, and fairness. Expected contributions include a loss function with provable bias-mitigation properties and a rigorous evaluation protocol that enables systematic comparison of mitigation strategies.
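To make the proposed loss concrete, the following is a minimal NumPy sketch of one plausible form of the extended objective. The abstract specifies only that SCE is augmented with popularity penalty terms and diversity regularization; the exact formulation here (an expected-popularity penalty over the predicted distribution and an entropy-based diversity bonus, with hypothetical weights `lambda_pop` and `lambda_div`) is an illustrative assumption, not the study's actual loss.

```python
import numpy as np

def debiased_sce_loss(logits, target, item_popularity,
                      lambda_pop=0.1, lambda_div=0.05, scale=1.0):
    """Hypothetical debiased objective: Scaled Cross-Entropy plus a
    popularity penalty and an entropy (diversity) regularizer.

    logits          : unnormalized scores over the item catalog
    target          : index of the ground-truth next item
    item_popularity : per-item popularity in [0, 1]
    lambda_pop/div  : assumed penalty weights (not from the source)
    scale           : SCE temperature-style scaling factor
    """
    z = scale * np.asarray(logits, dtype=float)
    z = z - z.max()                            # numerical stability
    probs = np.exp(z) / np.exp(z).sum()
    sce = -np.log(probs[target] + 1e-12)       # scaled cross-entropy term
    # Penalize expected popularity of the predicted distribution.
    pop_penalty = float(np.dot(probs, item_popularity))
    # Reward entropy, i.e. spreading probability mass across items.
    entropy = float(-np.sum(probs * np.log(probs + 1e-12)))
    return sce + lambda_pop * pop_penalty - lambda_div * entropy
```

Under this sketch, setting `lambda_pop=0` and `lambda_div=0` recovers plain SCE, which makes the two mitigation terms easy to ablate in experiments.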
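The evaluation framework could likewise be sketched with standard metrics from the popularity-bias literature. The abstract does not name specific metrics, so the three below (Average Recommendation Popularity for bias, catalog coverage for diversity, and a Gini coefficient over item exposure for fairness) are common illustrative choices, not the study's confirmed protocol.

```python
import numpy as np

def avg_rec_popularity(rec_lists, item_popularity):
    """ARP: mean popularity of all recommended items (lower = less bias)."""
    pops = [item_popularity[i] for recs in rec_lists for i in recs]
    return float(np.mean(pops))

def catalog_coverage(rec_lists, n_items):
    """Fraction of the catalog recommended at least once (higher = more diverse)."""
    return len({i for recs in rec_lists for i in recs}) / n_items

def exposure_gini(exposure_counts):
    """Gini coefficient over per-item exposure counts (0 = perfectly uniform)."""
    x = np.sort(np.asarray(exposure_counts, dtype=float))
    n = len(x)
    cum = np.cumsum(x)
    return float((n + 1 - 2 * np.sum(cum) / cum[-1]) / n)
```

Reporting these alongside a relevance metric (e.g. hit rate or NDCG) would support the abstract's stated goal of comparing mitigation strategies without losing sight of recommendation accuracy.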
Recommended Citation
Espinoza, Jesus, "Mitigating Popularity Bias in LLM-Based Recommender Systems: A Combined Approach of Structured Prompt Engineering and Customized Loss Functions" (2026). 2026 Symposium. 1.
https://dc.ewu.edu/srcw_2026/op_2026/o3_2026/1
Creative Commons License

This work is licensed under a Creative Commons Attribution-NonCommercial-No Derivative Works 4.0 International License.