Added a decision-focused learning example #621


Open
wants to merge 1 commit into base: master

Conversation

senneberden
Collaborator

No description provided.

@senneberden senneberden requested a review from tias April 10, 2025 11:02
@senneberden senneberden self-assigned this Apr 10, 2025
@tias
Collaborator

tias commented Apr 10, 2025

I don't know how to edit this pull request in my local repo, because it is somehow not a local branch but your master branch...

I'll give my feedback through this comment then.

I think re-implementing SPO is not a good idea; you are not using enough of the capabilities of the PyEPO library... If that means we should use its dataloader etc., then so be it.

I started from the code in their README, cleaned it up, played around, and figured out we can actually create a generic cpmpyModel that inherits from pyepo's optModel and works for any CPMpy model. That way we can nicely separate out the CPMpy modelling, instead of having to create a new class each time, as PyEPO otherwise forces you to...

The result is very clean imho:

#!/usr/bin/env python
# coding: utf-8

import torch
import pyepo
import cpmpy as cp
from tqdm import tqdm

# Generic PyEPO wrapper for CPMpy models
class cpmpyModel(pyepo.model.opt.optModel):
    # prec: precision if it must be scaled to integer, e.g. 0.001
    def __init__(self, dvars, model, sense, solver=None, prec=None):
        self.dvars = dvars
        self.s = cp.SolverLookup.get(solver, model)
        self.modelSense = sense
        self.prec = prec

    def _getModel(self):
        pass  # created by constructor

    def setObj(self, coef):
        if isinstance(coef, torch.Tensor):
            coef = coef.detach().cpu().numpy()
        # round coefficients to integer at the given precision
        if self.prec is not None:
            coef = (coef/self.prec).astype(int)

        if self.modelSense == pyepo.EPO.MAXIMIZE:
            self.s.maximize(cp.sum(self.dvars*coef))
        else:
            self.s.minimize(cp.sum(self.dvars*coef))

    def solve(self):
        self.s.solve()
        # solution must be numeric (not bool but int)
        return self.dvars.value().astype(int), self.s.objective_value()

def train_test_pyepo(optmodel, feats, costs, num_epochs):
    # Construct PyEPO dataset (with opt model for ground truth solutions) and Torch dataloader
    dataset = pyepo.data.dataset.optDataset(optmodel, feats, costs)
    dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

    # Init Torch prediction model, simple linear regression
    predmodel = torch.nn.Linear(feats.shape[1], costs.shape[1])
    optimizer = torch.optim.Adam(predmodel.parameters(), lr=1e-2)
    # Init Torch loss: PyEPO's SPO+ loss over the PyEPO optimisation model
    spop = pyepo.func.SPOPlus(optmodel, processes=1)

    # Training: gradient descent with Torch
    for epoch in tqdm(range(num_epochs)):
        for data in dataloader:
            feat, cost, sol, obj = data
            # forward pass
            costpred = predmodel(feat)
            loss = spop(costpred, cost, sol, obj)
            # backward pass
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

    # Eval (on same training set for now)
    regret = pyepo.metric.regret(predmodel, optmodel, dataloader)
    print("Regret on Training Set: {:.4f}".format(regret))


if __name__ == "__main__":
    # Generate data
    num_data = 1000  # number of optimisation instances
    num_feat = 5     # number of features
    num_item = 10    # number of items in each instance
    weights, feats, costs = pyepo.data.knapsack.genData(num_data, num_feat, num_item,
                                                        dim=3, deg=4, noise_width=0.5, seed=135)
    weights = (weights*100).astype(int)  # to integer, precision is 0.01 anyway

    # Initialize PyEPO-wrapped optimisation model
    # just the constraints, works for any CPMpy model
    m = cp.Model()
    x = cp.boolvar(shape=num_item, name="x")
    for i,rhs in [(0,700), (1,800), (2,900)]:
        m += cp.sum(weights[i]*x) <= rhs  # knapsack capacity per dimension
    optmodel = cpmpyModel(x, m, sense=pyepo.EPO.MAXIMIZE, solver="ortools")
    #optmodel = cpmpyModel(x, m, sense=pyepo.EPO.MAXIMIZE, solver="gurobi")

    train_test_pyepo(optmodel, feats, costs, num_epochs=10)

Could you adopt this style instead?

I think what your script does better is the train/test splitting (hard to believe their example doesn't do that...). There may be existing torch functions that make that less tedious, though... And the above should open the door to trying all/multiple solvers...
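For the splitting, a minimal sketch of how `torch.utils.data.random_split` could replace a hand-rolled split (the `make_loaders` helper and its parameters are hypothetical, not part of this PR; it would work on a PyEPO `optDataset` just as on any torch `Dataset`):

```python
import torch

def make_loaders(dataset, test_frac=0.2, batch_size=32, seed=0):
    # Split any torch Dataset (e.g. pyepo's optDataset) into train/test
    # subsets with a fixed seed for reproducibility.
    n_test = int(len(dataset) * test_frac)
    n_train = len(dataset) - n_test
    gen = torch.Generator().manual_seed(seed)
    train_set, test_set = torch.utils.data.random_split(
        dataset, [n_train, n_test], generator=gen)
    # Wrap each subset in a DataLoader; only the training set is shuffled.
    train_loader = torch.utils.data.DataLoader(
        train_set, batch_size=batch_size, shuffle=True)
    test_loader = torch.utils.data.DataLoader(
        test_set, batch_size=batch_size, shuffle=False)
    return train_loader, test_loader
```

The training loop would then iterate over `train_loader`, and the regret evaluation would be computed on `test_loader` instead of the training data.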

@tias tias added this to the v0.9.26 milestone Jun 13, 2025