Releases: khalil-research/PyEPO
v1.1.1
🎉 We're happy to announce the PyEPO 1.1.1 release. 🎉
What's New
Bug Fixes
- Fixed numerical stability: added epsilon guards to prevent division by zero in gradient computation (surrogate, blackbox, perturbed, rank)
- Fixed `adaptiveImplicitMLE` numerical issue
- Fixed `MVar` indexing in `optGRBModel`
- Fixed `solpool` device transfer
- Fixed COPT `solve()` returns
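As an illustration of the epsilon-guard idea behind the numerical stability fix, here is a minimal sketch (the guard constant and function name are hypothetical, not PyEPO's actual code):

```python
import numpy as np

EPS = 1e-7  # hypothetical guard value; the constant PyEPO uses may differ

def safe_normalize(grad, eps=EPS):
    # add a small epsilon to the denominator so a zero-norm gradient
    # yields zeros instead of NaNs during backpropagation
    return grad / (np.linalg.norm(grad) + eps)
```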
Tests
- Added CUDA unit tests
Docs
- Updated docstring types and descriptions
v1.1.0
🎉 We're happy to announce the PyEPO 1.1.0 release. 🎉
What's New
Google OR-Tools Solver Backend
- Added Google OR-Tools as a new solver backend, supporting both pywraplp (linear programming) and CP-SAT (constraint programming) solvers
- New optimization models: `optOrtModel`, `optOrtCpModel`
- Built-in models: `shortestPathModel`, `shortestPathCpModel`, `knapsackModel`, `knapsackModelRel`, `knapsackCpModel`
New Models for COPT and Pyomo
- Added Portfolio optimization model for COPT and Pyomo (OMO)
- Added TSP (Traveling Salesman Problem) model for COPT and Pyomo (OMO)
Improvements
Compatibility
- Standard DLPack protocol support for better tensor interoperability in JAX (for MPAX)
Bug Fixes
- Fixed `addConstr` ignoring new constraints due to stale JIT cache in JAX (for MPAX)
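The failure mode here is general to caching by call signature. A pure-Python analogy using `functools.lru_cache` (an analogy only, not the JAX mechanism itself): state mutated after the first call is invisible until the cache is invalidated, just as a new constraint is invisible to a stale JIT trace.

```python
from functools import lru_cache

rhs = [5]  # stands in for model state mutated after the first solve

@lru_cache(maxsize=None)
def solve(n):
    # cached by argument only, so later mutations of `rhs` are invisible,
    # the same failure mode as a stale JIT trace ignoring a new constraint
    return n + rhs[0]

first = solve(1)   # 6, now cached
rhs[0] = 10        # "add a constraint"
stale = solve(1)   # still 6: the cache must be cleared to see the change
```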
v1.0.5
🎉 We're happy to announce the PyEPO 1.0.5 release. 🎉
What's Changed
Bug Fixes
- Fixed: exclude self in KNN dataset
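A minimal sketch of the self-exclusion fix (function name and shapes are illustrative, not PyEPO's internals): without masking, each sample's nearest neighbor is itself at distance zero.

```python
import numpy as np

def nearest_neighbors(feats, k):
    # pairwise squared distances via broadcasting; setting the diagonal
    # to inf excludes each sample from its own neighbor set
    diff = feats[:, None, :] - feats[None, :, :]
    dist2 = (diff ** 2).sum(axis=-1)
    np.fill_diagonal(dist2, np.inf)
    return np.argsort(dist2, axis=1)[:, :k]
```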
Refactoring & Performance
- Centralized solution pool management into `utils.py` standalone functions
- Split `_solve_in_pass` into `_solve_in_pass` (with pool update) + `_solve_batch` (pure solver)
- Optimized solution pool dedup hashing (`tobytes()` instead of `tuple`/`tolist`)
- Unified device checks into solve/cache utility functions
- Cleaned up unused `device` variable across modules
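The `tobytes()` dedup idea can be sketched as follows (a simplified illustration, not PyEPO's actual pool code; note the byte keys assume a consistent dtype across rows):

```python
import numpy as np

def update_pool(pool, seen, sols):
    # hash each solution row by its raw bytes: a single tobytes() call is
    # far cheaper than materializing a Python tuple or list per row
    for sol in sols:
        key = sol.tobytes()
        if key not in seen:
            seen.add(key)
            pool.append(sol)
    return pool
```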
v1.0.4
🎉 We're happy to announce the PyEPO 1.0.4 release. 🎉
Performance
- Cache optmodel in worker processes to avoid redundant model rebuilds during parallel solving
- Vectorize _getKNN to replace Python-level loops with batch matrix operations
- Replace torch.unique with hash-set deduplication for solution pool updates
- Optimize tensor conversion with device-aware checks in solution pool
- Pre-convert dataset arrays to tensors in optDataset / optDatasetKNN
Bug Fixes
- Fix NameError in omo module when Pyomo is not installed
- Fix the solution pool device mismatch when using GPU tensors
- Fix sigma parameter handling for Implicit MLE
- Fix package import error
Compatibility
- Update autograd.Function to class-method .apply() pattern for PyTorch >= 2.1 deprecation
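The modern pattern looks like this toy example (a generic `torch.autograd.Function`, not one of PyEPO's modules): define static `forward`/`backward` and invoke via the class method `.apply()` rather than instantiating the function.

```python
import torch

class Square(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return x ** 2

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        return 2 * x * grad_output

x = torch.tensor(3.0, requires_grad=True)
y = Square.apply(x)  # class-method call; Square()(x) is the deprecated form
y.backward()         # x.grad == 6.0
```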
Tests & CI
- Add unit test suite: test_data, test_model, test_metric, test_func, test_utils, test_integration
- Add GitHub Actions CI for Python 3.9 – 3.14
- Support graceful skip for Gurobi / Pyomo dependent tests
Docs
- Fix typos and grammar in docstrings
v1.0.0
🎉 We're happy to announce the PyEPO 1.0.0 release. 🎉
We are excited to announce support for MPAX, a PDHG-based optimization framework. With batched linear programming on either CPU or GPU, MPAX leverages JIT compilation for faster execution and enhanced scalability. Unlike traditional solvers, MPAX can run entirely on the GPU, eliminating costly CPU-GPU communication overhead during training.
MPAX is particularly efficient for solving large-scale optimization problems. To see it in action, check out our Jupyter Notebook Tutorial.
Additional Updates in PyEPO 1.0.0
- Further Vectorization for Computation: Eliminated unnecessary `for`-loops, enhancing training efficiency.
- Bug Fixes in Perturbation Algorithms: Resolved issues with solution caching in `perturbedOpt`, `perturbedFenchelYoung`, `implicitMLE`, and `adaptiveImplicitMLE`.
We're eager for you to test these out and share your feedback with us. As always, thank you for being a part of our growing community!
v0.4.0
🎉 We're happy to announce the PyEPO 0.4.0 release. 🎉
Happy Holiday! We're thrilled to bring you an exciting new feature in this release:
We are excited to announce the addition of a new module, perturbationGradient, designed to implement Perturbation Gradient (PG) loss. This module provides flexibility for various optimization tasks with configurable parameters such as sigma (step size) and two_sides (differencing type).
This feature is based on the paper "Decision-Focused Learning with Directional Gradients". The PG loss is a surrogate for the objective value, which measures the decision quality of the optimization problem. Building on Danskin's Theorem, it is derived from zeroth-order approximations and has an informative gradient, which makes training with stochastic gradient descent possible.
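To illustrate the differencing idea, here is a rough toy sketch with a one-hot "solver"; the exact PG loss definition follows the paper, and the `sigma`/`two_sides` usage below is an assumption about how the parameters map onto central versus one-sided differences:

```python
import numpy as np

def argmin_solver(cost):
    # toy minimization "solver": choose the single cheapest item (one-hot)
    w = np.zeros_like(cost)
    w[np.argmin(cost)] = 1.0
    return w

def pg_loss(pred_cost, true_cost, sigma=0.1, two_sides=True):
    # zeroth-order differencing of optimal objective values; a sketch of
    # the idea only -- see the paper for the exact PG loss definition
    z_plus = argmin_solver(pred_cost + sigma * true_cost) @ true_cost
    if two_sides:
        z_minus = argmin_solver(pred_cost - sigma * true_cost) @ true_cost
        return (z_plus - z_minus) / (2.0 * sigma)
    z_zero = argmin_solver(pred_cost) @ true_cost
    return (z_plus - z_zero) / sigma
```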
In addition, thank you to @RuoyuChen615 for providing her version of the PG loss; her implementation offered valuable insights that helped us refine and enhance this module.
We're eager for you to test these out and share your feedback with us. As always, thank you for being a part of our growing community!
v0.3.9
🎉 We're happy to announce the PyEPO 0.3.9 release. 🎉
We're thrilled to bring you an exciting new feature in this release:
We are excited to announce the addition of a new module, optDatasetKNN, thanks to @NoahJSchutte. This module is designed to implement k-nearest neighbors (kNN) robust loss in decision-focused learning. The implementation introduces a new class, optDatasetKNN, in dataset.py with the parameters k and weight.
This feature is based on the paper Robust Losses for Decision-Focused Learning by Noah Schutte, which has been accepted at IJCAI. You can explore this feature in our Google Colab tutorial for hands-on guidance.
We're eager for you to test these out and share your feedback with us. As always, thank you for being a part of our growing community!
v0.3.8a
v0.3.8
🎉 We're happy to announce the PyEPO 0.3.8 release. 🎉
We're thrilled to bring you some exciting new features in this release:
We add a data generator pyepo.data.portfolio.genData for portfolio optimization and the corresponding Gurobi model pyepo.model.grb.portfolioModel. See details in our docs for data and optimization model.
This synthetic dataset comes from Smart "Predict, then Optimize", with detailed implementation guidelines provided in Appendix D of the supplemental material.
Additionally, we have addressed several minor bugs to ensure a smoother user experience.
We're eager for you to test these out and share your feedback with us. As always, thank you for being a part of our growing community!
v0.3.7
🎉 We're happy to announce the PyEPO 0.3.7 release. 🎉
We're thrilled to bring you some exciting new features in this release:
We add an autograd module pyepo.func.adaptiveImplicitMLE, which uses the perturb-and-MAP framework and adaptively chooses the interpolation step size. This module samples noise perturbations from a Sum-of-Gamma distribution, then interpolates the loss function for a more precise finite-difference approximation. The corresponding paper is Adaptive Perturbation-Based Gradient Estimation for Discrete Latent Variable Models. See details in our docs.
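For intuition, Sum-of-Gamma noise can be sampled roughly as follows (a NumPy sketch; the parameterization is an assumption based on the I-MLE line of work, not PyEPO's implementation):

```python
import numpy as np

def sum_of_gamma(shape, k, s=10, rng=None):
    # Sum-of-Gamma noise: a sum of s Gamma(1/k, k/i) variables,
    # recentered by log(s) and rescaled by 1/k (check the paper for
    # the exact parameterization; this is illustrative only)
    rng = np.random.default_rng(0) if rng is None else rng
    g = sum(rng.gamma(1.0 / k, k / i, size=shape) for i in range(1, s + 1))
    return (g - np.log(s)) / k
```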
We're eager for you to test these out and share your feedback with us. As always, thank you for being a part of our growing community!