A list of selected papers from our survey, A Survey of Direct Preference Optimization.
If you find a missing paper or a possible mistake in our survey, please feel free to open an issue or submit a pull request here. I would be more than glad to receive your advice. Thanks!
In this survey, we introduce a novel taxonomy that categorizes existing DPO works into four key dimensions based on different components of the DPO loss: data strategy, learning framework, constraint mechanism, and model property.
This taxonomy provides a systematic framework for understanding the methodological evolution of DPO and highlights the key distinctions among its variants.
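For quick reference, the DPO loss whose components anchor this taxonomy is the standard objective of Rafailov et al. (2023), reproduced below, where $\pi_\theta$ is the policy being trained, $\pi_{\mathrm{ref}}$ is the frozen reference model, $\beta$ controls the strength of the implicit KL constraint, $\sigma$ is the logistic function, and $(x, y_w, y_l) \sim \mathcal{D}$ is a prompt with its preferred and dispreferred responses:

$$
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}}) = -\,\mathbb{E}_{(x, y_w, y_l) \sim \mathcal{D}} \left[ \log \sigma\!\left( \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)} \right) \right]
$$

Loosely, the data-strategy dimension concerns $\mathcal{D}$ and how the pairs $(y_w, y_l)$ are constructed, the learning-framework dimension concerns the objective and training loop built around the loss, the constraint-mechanism dimension concerns $\pi_{\mathrm{ref}}$ and $\beta$, and the model-property dimension concerns the behavior of the resulting $\pi_\theta$.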
RRHF: Rank responses to align language models with human feedback without tears
SLiC-HF: Sequence Likelihood Calibration with Human Feedback
Direct Preference Optimization: Your Language Model is Secretly a Reward Model
Preference Ranking Optimization for Human Alignment
MallowsPO: Fine-Tune Your LLM with Preference Dispersions
Direct Preference Optimization With Unobserved Preference Heterogeneity
Group Robust Preference Optimization in Reward-free RLHF
No Preference Left Behind: Group Distributional Preference Optimization
Direct Preference Optimization with an Offset
Enhancing Alignment using Curriculum Learning & Ranked Preferences
sDPO: Don't Use Your Data All at Once
Mixed Preference Optimization: Reinforcement Learning with Data Selection and Better Reference Model
Filtered Direct Preference Optimization
Direct Alignment of Language Models via Quality-Aware Self-Refinement
Adaptive Preference Scaling for Reinforcement Learning with Human Feedback
$\beta$-DPO: Direct Preference Optimization with Dynamic $\beta$
Reward Difference Optimization for Sample Reweighting in Offline RLHF
Geometric-Averaged Preference Optimization for Soft Preference Labels
$\alpha$-DPO: Adaptive Reward Margin is What Direct Preference Optimization Needs
Plug-and-Play Training Framework for Preference Optimization
Gap-Aware Preference Optimization: Enhancing Model Alignment with Perception Margin
Provably Robust DPO: Aligning Language Models with Noisy Feedback
ROPO: Robust Preference Optimization for Large Language Models
Impact of Preference Noise on the Alignment Performance of Generative Language Models
Spread Preference Annotation: Direct Preference Judgment for Efficient LLM Alignment
Towards Robust Alignment of Language Models: Distributionally Robustifying Direct Preference Optimization
Understanding Generalization of Preference Optimization Under Noisy Feedback
Combating inherent noise for direct preference optimization
Perplexity-aware Correction for Robust Alignment with Noisy Preferences
ULMA: Unified Language Model Alignment with Human Demonstration and Point-wise Preference
KTO: Model Alignment as Prospect Theoretic Optimization
Noise Contrastive Alignment of Language Models with Explicit Rewards
Binary Classifier Optimization for Large Language Model Alignment
Offline Regularised Reinforcement Learning for Large Language Models Alignment
Distributional Preference Alignment of LLMs via Optimal Transport
General Preference Modeling with Preference Representations for Aligning Language Models
Cal-DPO: Calibrated Direct Preference Optimization for Language Model Alignment
Direct Preference Optimization: Your Language Model is Secretly a Reward Model
A General Theoretical Paradigm to Understand Learning from Human Preferences
Negating negatives: Alignment without human positive samples via distributional dispreference optimization
Negative preference optimization: From catastrophic collapse to effective unlearning
Self-Augmented Preference Optimization: Off-Policy Paradigms for Language Model Alignment
Bridging and Modeling Correlations in Pairwise Data for Direct Preference Optimization
On Extending Direct Preference Optimization to Accommodate Ties
Preference Optimization as Probabilistic Inference
Simplicity Prevails: Rethinking Negative Preference Optimization for LLM Unlearning
IOPO: Empowering LLMs with Complex Instruction Following via Input-Output Preference Optimization
LiPO: Listwise Preference Optimization through Learning-to-Rank
Panacea: Pareto Alignment via Preference Adaptation for LLMs
Relative Preference Optimization: Enhancing LLM Alignment through Contrasting Responses across Identical and Diverse Prompts
LIRE: listwise reward enhancement for preference alignment
Ordinal Preference Optimization: Aligning Human Preferences via NDCG
Preference Optimization with Multi-Sample Comparisons
Optimizing Preference Alignment with Differentiable NDCG Ranking
TODO: Enhancing LLM Alignment with Ternary Preferences
Token-level Direct Preference Optimization
From r to Q∗: Your Language Model is Secretly a Q-Function
DPO Meets PPO: Reinforced Token Optimization for RLHF
Selective Preference Optimization via Token-Level Reward Function Estimation
EPO: Hierarchical LLM Agents with Environment Preference Optimization
TIS-DPO: Token-level Importance Sampling for Direct Preference Optimization With Estimated Weights
SparsePO: Controlling Preference Alignment of LLMs via Sparse Token Masks
Earlier Tokens Contribute More: Learning Direct Preference Optimization From Temporal Decay Perspective
Monte Carlo Tree Search Boosts Reasoning via Iterative Preference Learning
Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs
Step-DPO: Step-wise Preference Optimization for Long-chain Reasoning of LLMs
Step-Controlled DPO: Leveraging Stepwise Error for Enhanced Mathematical Reasoning
Data-Centric Human Preference Optimization with Rationales
TPO: Aligning Large Language Models with Multi-branch & Multi-step Preference Trees
Improving Multi-Step Reasoning Abilities of Large Language Models with Direct Advantage Policy Optimization
Direct Preference Optimization: Your Language Model is Secretly a Reward Model
MAPO: Advancing Multilingual Reasoning through Multilingual-Alignment-as-Preference Optimization
Advancing LLM Reasoning Generalists with Preference Trees
Iterative Reasoning Preference Optimization
FactAlign: Long-form factuality alignment of large language models
Trial and Error: Exploration-Based Trajectory Optimization for LLM Agents
Direct Multi-Turn Preference Optimization for Language Agents
Agent Q: Advanced Reasoning and Learning for Autonomous AI Agents
Building Math Agents with Multi-Turn Iterative Preference Learning
SDPO: Segment-Level Direct Preference Optimization for Social Agents
Direct Preference Optimization: Your Language Model is Secretly a Reward Model
ULMA: Unified Language Model Alignment with Human Demonstration and Point-wise Preference
Contrastive preference optimization: Pushing the boundaries of LLM performance in machine translation
Linear Alignment: A Closed-form Solution for Aligning Human Preferences without Tuning and Feedback
ICDPO: Effectively Borrowing Alignment Capability of Others via In-context Direct Preference Optimization
Enhancing Alignment using Curriculum Learning & Ranked Preferences
ORPO: Monolithic Preference Optimization without Reference Model
sDPO: Don't Use Your Data All at Once
PAFT: A Parallel Training Paradigm for Effective LLM Fine-Tuning
Statistical rejection sampling improves preference optimization
Iterative Preference Learning from Human Feedback: Bridging Theory and Practice for RLHF under KL-constraint
Some things are more cringe than others: Iterative preference optimization with the pairwise cringe loss
Self-Rewarding Language Models
Direct Language Model Alignment from Online AI Feedback
RS-DPO: A Hybrid Rejection Sampling and Direct Preference Optimization Method for Alignment of Large Language Models
Direct large language model alignment through self-rewarding contrastive prompt distillation
Trial and Error: Exploration-Based Trajectory Optimization for LLM Agents
Mixed Preference Optimization: Reinforcement Learning with Data Selection and Better Reference Model
ROPO: Robust Preference Optimization for Large Language Models
Is DPO Superior to PPO for LLM Alignment? A Comprehensive Study
Iterative Reasoning Preference Optimization
Monte Carlo Tree Search Boosts Reasoning via Iterative Preference Learning
D2PO: Discriminator-Guided DPO with Response Evaluation Models
Understanding the performance gap between online and offline alignment algorithms
Value-Incentivized Preference Optimization: A Unified Approach to Online and Offline RLHF
Self-Exploring Language Models: Active Preference Elicitation for Online Alignment
Exploratory Preference Optimization: Provably Sample-Efficient Exploration in RLHF with General Function Approximation
The Importance of Online Data: Understanding Preference Fine-tuning via Coverage
Online DPO: Online Direct Preference Optimization with Fast-Slow Chasing
OPTune: Efficient Online Preference Tuning
BPO: Supercharging Online Preference Learning by Adhering to the Proximity of Behavior LLM
Self-training with direct preference optimization improves chain-of-thought reasoning
Building Math Agents with Multi-Turn Iterative Preference Learning
AIPO: Improving Training Objective for Iterative Preference Optimization
The Crucial Role of Samplers in Online Direct Preference Optimization
Accelerated Preference Optimization for Large Language Model Alignment
SeRA: Self-Review & Alignment with Implicit Reward Margins
CREAM: Consistency Regularized Self-Rewarding Language Models
COMAL: A Convergent Meta-Algorithm for Aligning LLMs with General Preferences
Online Preference Alignment for Language Models via Count-based Exploration
Active Preference Learning for Large Language Models
Reinforcement Learning from Human Feedback with Active Queries
Active Preference Optimization for Sample Efficient RLHF
Active Preference Optimization via Maximizing Learning Capacity
Beyond one-preference-fits-all alignment: Multi-objective direct preference optimization
Controllable Preference Optimization: Toward Controllable Multi-Objective Alignment
SPO: Multi-Dimensional Preference Sequential Alignment With Implicit Reward Modeling
Hybrid Preference Optimization: Augmenting Direct Preference Optimization with Auxiliary Objectives
Simultaneous Reward Distillation and Preference Learning: Get You a Language Model Who Can Do Both
MOSLIM: Align with diverse preferences in prompts through reward classification
Nash learning from human feedback
Self-play fine-tuning converts weak language models to strong language models
A minimaximalist approach to reinforcement learning from human feedback
Human Alignment of Large Language Models through Online Preference Optimisation
Direct nash optimization: Teaching language models to self-improve with general preferences
Self-Play Preference Optimization for Language Model Alignment
BoNBoN Alignment for Large Language Models and the Sweetness of Best-of-n Sampling
Self-Improving Robust Preference Optimization
Dynamic Noise Preference Optimization for LLM Self-Improvement via Synthetic Data
Enhancing Alignment using Curriculum Learning & Ranked Preferences
Mixed Preference Optimization: Reinforcement Learning with Data Selection and Better Reference Model
Learn Your Reference Model for Real Good Alignment
Building Math Agents with Multi-Turn Iterative Preference Learning
Contrastive preference optimization: Pushing the boundaries of LLM performance in machine translation
ORPO: Monolithic Preference Optimization without Reference Model
SimPO: Simple Preference Optimization with a Reference-Free Reward
Understanding reference policies in direct preference optimization
SimPER: Simple Preference Fine-Tuning without Hyperparameters by Perplexity Optimization
Beyond Reverse KL: Generalizing Direct Preference Optimization with Diverse Divergence Constraints
Diverse Preference Learning for Capabilities and Alignment
Towards Efficient Exact Optimization of Language Model Alignment
Generalized Preference Optimization: A Unified Approach to Offline Alignment
Soft Preference Optimization: Aligning Language Models to Expert Distributions
Improving Conversational Abilities of Quantized Large Language Models via Direct Preference Alignment
Correcting the Mythos of KL-Regularization: Direct Alignment without Overoptimization via Chi-Squared Preference Optimization
FlipGuard: Defending Preference Alignment against Update Regression with Constrained Optimization
Direct Preference Optimization Using Sparse Feature-level Constraints
DPO Kernels: A Semantically-Aware, Kernel-Enhanced, and Divergence-Rich Paradigm for Direct Preference Optimization
A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity
Stepwise Alignment for Constrained Language Model Policy Optimization
Enhancing LLM Safety via Constrained Direct Preference Optimization
Adversarial DPO: Harnessing Harmful Data for Reducing Toxicity with Minimal Impact on Coherence and Evasiveness in Dialogue Agents
Backtracking Improves Generation Safety
SafeDPO: A Simple Approach to Direct Preference Optimization with Enhanced Safety
Preference Fine-Tuning of LLMs Should Leverage Suboptimal, On-Policy Data
Robust Preference Optimization through Reward Model Distillation
Self-Exploring Language Models: Active Preference Elicitation for Online Alignment
Self-Improving Robust Preference Optimization
Right Now, Wrong Then: Non-Stationary Direct Preference Optimization under Preference Drift
On the limited generalization capability of the implicit reward model induced by direct preference optimization
RRHF: Rank responses to align language models with human feedback without tears
A Long Way to Go: Investigating Length Correlations in RLHF
Disentangling Length from Quality in Direct Preference Optimization
SimPO: Simple Preference Optimization with a Reference-Free Reward
Eliminating Biased Length Reliance of Direct Preference Optimization via Down-Sampled KL Divergence
Direct Multi-Turn Preference Optimization for Language Agents
Following length constraints in instructions
The Hitchhiker’s Guide to Human Alignment with *PO
Length Desensitization in Direct Preference Optimization
Understanding the Logic of Direct Preference Alignment through Logic
LongPO: Long Context Self-Evolution of Large Language Models through Short-to-Long Preference Optimization
Earlier Tokens Contribute More: Learning Direct Preference Optimization From Temporal Decay Perspective
Smaug: Fixing Failure Modes of Preference Optimisation with DPO-Positive
Towards Analyzing and Understanding the Limitations of DPO: A Theoretical Perspective
From r to Q∗: Your Language Model is Secretly a Q-Function
Robust Preference Optimization through Reward Model Distillation
3D-Properties: Identifying Challenges in DPO and Charting a Path Forward
Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment
Unintentional Unalignment: Likelihood Displacement in Direct Preference Optimization
Understanding Likelihood Over-optimisation in Direct Alignment Algorithms
A Common Pitfall of Margin-based Language Model Alignment: Gradient Entanglement
Mitigating the Alignment Tax of RLHF
Online Merging Optimizers for Boosting Rewards and Mitigating Tax in Alignment
Preference Learning Algorithms Do Not Learn Preference Rankings
A Deep Dive into the Trade-Offs of Parameter-Efficient Preference Alignment Techniques
PAFT: A Parallel Training Paradigm for Effective LLM Fine-Tuning
Reward Model Learning vs. Direct Policy Optimization: A Comparative Analysis of Learning from Human Preferences
Insights into Alignment: Evaluating DPO and its Variants Across Multiple Tasks
Discovering Preference Optimization Algorithms with and for Large Language Models
Unpacking DPO and PPO: Disentangling Best Practices for Learning from Preference Feedback
RainbowPO: A Unified Framework for Combining Improvements in Preference Optimization
When is RL better than DPO in RLHF? A Representation and Optimization Perspective
A Survey of Preference-Based Reinforcement Learning Methods
Bridging the Gap: A Survey on Integrating (Human) Feedback for Natural Language Generation
Aligning Large Language Models with Human: A Survey
Large Language Model Alignment: A Survey
The Past, Present and Better Future of Feedback Learning in Large Language Models for Subjective Human Preferences and Values
AI Alignment: A Comprehensive Survey
A Survey of Reinforcement Learning from Human Feedback
On the Essence and Prospect: An Investigation of Alignment Approaches for Big Models
A Survey on Human Preference Learning for Large Language Models
A Comprehensive Survey of LLM Alignment Techniques: RLHF, RLAIF, PPO, DPO and More
Towards a Unified View of Preference Learning for Large Language Models: A Survey
Preference Tuning with Human Feedback on Language, Speech, and Vision Tasks: A Survey
A Comprehensive Survey of Direct Preference Optimization: Datasets, Theories, Variants, and Applications