DecisionFocusedLearningBenchmarks.jl


What is Decision-Focused Learning?

Decision-focused learning (DFL) is a paradigm that integrates machine learning prediction with combinatorial optimization to make better decisions under uncertainty. Unlike traditional "predict-then-optimize" approaches that optimize prediction accuracy independently of downstream decision quality, DFL directly optimizes end-to-end decision performance.
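To see why optimizing prediction accuracy alone can be misleading, here is a hedged toy example in base Julia (all numbers are made up for illustration, and the one-item "solver" is a stand-in, not the package's API): of two predicted cost vectors, the one with the lower mean squared error induces the worse decision.

```julia
# Toy example: lower prediction error does not imply a better decision.
true_theta = [2.0, 1.0]                  # true objective coefficients
pick(theta) = argmax(theta)              # trivial "solver": choose one item
regret(theta) = maximum(true_theta) - true_theta[pick(theta)]
mse(theta) = sum(abs2, theta .- true_theta) / length(theta)

theta_A = [1.4, 1.5]   # small prediction error, but picks the wrong item
theta_B = [3.0, 0.0]   # large prediction error, but picks the right item

println((mse(theta_A), regret(theta_A)))  # low MSE, positive regret
println((mse(theta_B), regret(theta_B)))  # high MSE, zero regret
```

A DFL training signal targets the regret-like quantity directly, rather than the prediction error.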

A typical DFL algorithm involves training a parametrized policy that combines a statistical predictor with an optimization component:

$$\xrightarrow[\text{Instance}]{x} \boxed{\text{Statistical model } \varphi_w} \xrightarrow[\text{Parameters}]{\theta} \boxed{\text{CO algorithm } f} \xrightarrow[\text{Solution}]{y}$$

Where:

  • Instance $x$: input data (e.g., features, context)
  • Statistical model $\varphi_w$: machine learning predictor (e.g., neural network)
  • Parameters $\theta$: predicted parameters for the optimization problem
  • CO algorithm $f$: combinatorial optimization solver
  • Solution $y$: final decision/solution
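The pipeline above is just a function composition. A minimal base-Julia sketch, with a hypothetical linear predictor and a one-hot argmax standing in for the CO solver (toy weights, not the package's API):

```julia
onehot(i, n) = [j == i ? 1.0 : 0.0 for j in 1:n]

phi(W, x) = W * x                                 # statistical model φ_w: θ = φ_w(x)
f(theta) = onehot(argmax(theta), length(theta))   # CO algorithm f: y = f(θ)
policy(W, x) = f(phi(W, x))                       # end-to-end policy

W = [1.0 0.0; 0.0 -1.0]   # hypothetical learned weights
x = [0.5, 2.0]            # instance features
y = policy(W, x)          # one-hot decision
```

Training adjusts `W` so that the decisions `y`, not the intermediate parameters `θ`, are good.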

Package Overview

DecisionFocusedLearningBenchmarks.jl provides a comprehensive collection of benchmark problems for evaluating decision-focused learning algorithms. The package offers:

  • Standardized benchmark problems spanning diverse application domains
  • Common interfaces for datasets, statistical models, and optimization components
  • Ready-to-use pipelines compatible with InferOpt.jl and the whole JuliaDecisionFocusedLearning ecosystem
  • Evaluation tools for comparing algorithm performance

Benchmark Categories

The package organizes benchmarks into three main categories based on their problem structure:

Static Benchmarks (AbstractBenchmark)

Single-stage optimization problems with no randomness involved.

Stochastic Benchmarks (AbstractStochasticBenchmark)

Single-stage problems with random noise affecting the objective.

Dynamic Benchmarks (AbstractDynamicBenchmark)

Multi-stage sequential decision-making problems.
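As a hedged illustration of why the categories are expressed as abstract types (the package's actual type hierarchy and methods may differ from this sketch), multiple dispatch lets generic code specialize per category:

```julia
# Hypothetical sketch; names mirror the docs, actual package code differs.
abstract type AbstractBenchmark end
abstract type AbstractStochasticBenchmark <: AbstractBenchmark end
abstract type AbstractDynamicBenchmark <: AbstractBenchmark end

struct StaticToy <: AbstractBenchmark end
struct StochasticToy <: AbstractStochasticBenchmark end

# Dispatch picks the behavior from the category:
describe(::AbstractBenchmark) = "single-stage, deterministic"
describe(::AbstractStochasticBenchmark) = "single-stage, noisy objective"
describe(::AbstractDynamicBenchmark) = "multi-stage, sequential"
```

A function written against `AbstractBenchmark` then works for every benchmark, while stochastic or dynamic benchmarks can override where needed.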

Getting Started

In a few lines of code, you can create benchmark instances, generate datasets, initialize learning components, and evaluate performance using the same syntax across all benchmarks:

```julia
using DecisionFocusedLearningBenchmarks

# Create a benchmark instance for the argmax problem
benchmark = ArgmaxBenchmark()

# Generate training data
dataset = generate_dataset(benchmark, 100)

# Initialize policy components
model = generate_statistical_model(benchmark)
maximizer = generate_maximizer(benchmark)

# Plug in the training algorithm of your choice
# ... your training code here ...

# Evaluate performance
gap = compute_gap(benchmark, dataset, model, maximizer)
```
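For intuition, an optimality gap typically compares the objective values of a policy's decisions against the optimal ones. The exact definition used by `compute_gap` lives in the package documentation; the following base-Julia sketch only conveys the concept, with made-up data:

```julia
obj(theta, y) = sum(theta .* y)   # linear objective θ·y

# Average relative gap between predicted and optimal decisions
function average_gap(thetas, ys_pred, ys_opt)
    gaps = [(obj(θ, yo) - obj(θ, yp)) / abs(obj(θ, yo))
            for (θ, yp, yo) in zip(thetas, ys_pred, ys_opt)]
    return sum(gaps) / length(gaps)
end

thetas  = [[2.0, 1.0], [1.0, 3.0]]   # true objective coefficients
ys_opt  = [[1, 0], [0, 1]]           # optimal decisions
ys_pred = [[0, 1], [0, 1]]           # decisions from a learned policy
average_gap(thetas, ys_pred, ys_opt)
```

A gap of zero means the learned policy's decisions are objective-equivalent to the optimal ones on that dataset.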

Related Packages

This package is part of the JuliaDecisionFocusedLearning organization and is built to be compatible with the other packages in the ecosystem, such as InferOpt.jl.
