Mitigating Popularity Bias in LLM-Based Recommender Systems: A Combined Approach of Structured Prompt Engineering and Customized Loss Functions

Faculty Mentor

Sanmeet Kaur

Presentation Type

Oral Presentation

Start Date

4-14-2026 9:00 AM

End Date

4-14-2026 9:20 AM

Location

PUB 321

Primary Discipline of Presentation

Computer Science

Abstract

Large language models (LLMs) have emerged as powerful tools for recommendation systems, leveraging deep semantic understanding to model user preferences. However, these systems are susceptible to popularity bias: they over-recommend mainstream items at the expense of relevant niche content, degrading both user experience and fairness. While existing research has explored bias mitigation through either prompt engineering or algorithmic debiasing in isolation, the potential of combining these strategies remains underexplored. This study addresses the primary research question: how can popularity bias in LLM-based recommendation systems be mitigated through a combined approach of structured prompt engineering and customized loss functions without significantly degrading recommendation relevance? Building on a comprehensive synthesis of recent literature, this research proposes and evaluates a dual-mitigation framework. First, we identify prompt structures that reduce bias and combine them with gentle "temporal-diverse" debiasing instructions, which related literature suggests can reduce bias with minimal accuracy loss. Second, we design a custom loss function for LLM fine-tuning that extends Scaled Cross-Entropy (SCE) with popularity penalty terms and diversity regularization. Finally, we develop a comprehensive evaluation framework that measures popularity bias, diversity, and fairness. Expected contributions include a loss function with provable bias-mitigation properties and a rigorous evaluation protocol that enables systematic comparison of mitigation strategies.
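To illustrate the loss-function component described above, the following is a minimal sketch of one plausible formulation: a standard cross-entropy term (standing in for the SCE base, whose exact scaling is not specified here) augmented with a popularity penalty and an entropy-based diversity regularizer. The function name, the normalized `item_popularity` vector, and the weights `alpha` and `beta` are illustrative assumptions, not the study's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over recommendation logits.
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def popularity_penalized_loss(logits, targets, item_popularity,
                              alpha=0.1, beta=0.05):
    """Hypothetical debiased loss (names and weights are illustrative).

    logits          -- (batch, num_items) recommendation scores
    targets         -- (batch,) ground-truth item indices
    item_popularity -- (num_items,) popularity normalized to [0, 1]
    alpha           -- weight of the popularity penalty term
    beta            -- weight of the diversity (entropy) regularizer
    """
    probs = softmax(logits)
    n = logits.shape[0]

    # Base relevance term: plain cross-entropy on the target items.
    ce = -np.log(probs[np.arange(n), targets] + 1e-12).mean()

    # Popularity penalty: probability mass placed on popular items.
    pop_penalty = (probs * item_popularity).sum(axis=-1).mean()

    # Diversity regularizer: reward higher entropy over the catalog
    # (subtracted, so more diverse distributions lower the loss).
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=-1).mean()

    return ce + alpha * pop_penalty - beta * entropy
```

Setting `alpha` and `beta` to zero recovers plain cross-entropy, which makes the trade-off between relevance and bias mitigation explicit and tunable during fine-tuning.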

