
Deterministic Optimization with Pyomo (Python) and Gurobi (Solver)

By: Mansur Arief

In this notebook, we will show how to solve optimization problems using pyomo and the gurobi solver. This is an introductory notebook before we move on to more advanced topics such as stochastic optimization and robust optimization. You will need pyomo and gurobi installed on your system to follow along. Note that gurobi is a commercial solver, but a free academic license is available.

Introduction

Optimization is a mathematical discipline that deals with finding the best solution among all feasible solutions. The best solution is defined by an objective function, which we want to minimize or maximize. The feasible solutions are defined by a set of constraints. In this notebook, we will focus on linear optimization problems, which are optimization problems with a linear objective function and linear constraints. In a later notebook, we will add complexity to the problem by considering random parameters and solve it using stochastic optimization methods.

To begin, consider this deterministic linear optimization problem:

\begin{align*} \text{minimize}~ \quad & Ax + b \\ \text{s.t.} \quad & Cx \leq D \\ & x \in \mathbb{R}^n \end{align*}

where $b$ is a known constant, $A$ is a known row vector, and $C$ and $D$ are known matrices (with $D$ a column vector). Note that both the objective function and the constraints are linear. The feasible set consists of all $x$ that satisfy the constraints, which is a polyhedron: a set of points defined by a system of linear inequalities, which in 2D is simply a polygon.

Problem Example

For concreteness, let's assume a 2D problem, i.e. $n=2$, with the following parameters: $A=\begin{bmatrix}1 & 2\end{bmatrix}$, $b=-1$, $C=\begin{bmatrix}-1 & -2 \\ -2 & 1 \\ 0 & -1 \\ 2 & 1\end{bmatrix}$, and $D=\begin{bmatrix}8\\1\\-1\\20\end{bmatrix}$. Let's store these parameters in a dictionary for later use.

In [ ]:
# Given data
num_vars = 2
num_constraints = 4

A = {1: 1, 2: 2}
b = -1
C = {
    (1, 1): -1, (1, 2): -2, 
    (2, 1): -2, (2, 2): 1, 
    (3, 1): 0, (3, 2): -1, 
    (4, 1): 2, (4, 2): 1
}
D = {1: 8, 2: 1, 3: -1, 4: 20}


# Model parameters for pyomo instance later
model_params = {
    None: {
        'A': A,
        'b': {None: b},
        'C': C,
        'D': D
    }
}
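Before handing the data to a solver, it can help to sanity-check it by hand. The helper below is an illustrative addition (the function `evaluate_point` is a made-up name, not part of the notebook): it evaluates the objective $Ax + b$ and checks the constraints $Cx \leq D$ at a candidate point, using the same dictionary encoding as above.

```python
# Problem data (same values as above, with C's first row [-1, -2])
A = {1: 1, 2: 2}
b = -1
C = {(1, 1): -1, (1, 2): -2,
     (2, 1): -2, (2, 2): 1,
     (3, 1): 0,  (3, 2): -1,
     (4, 1): 2,  (4, 2): 1}
D = {1: 8, 2: 1, 3: -1, 4: 20}

def evaluate_point(x):
    """Return (objective value, feasibility) of a point x given as {1: x1, 2: x2}."""
    objective = sum(A[i] * x[i] for i in A) + b
    feasible = all(sum(C[j, i] * x[i] for i in A) <= D[j] for j in D)
    return objective, feasible

print(evaluate_point({1: 0.0, 2: 1.0}))  # (1.0, True)
```

Checking a point this way before solving often catches sign or indexing mistakes in the data dictionaries.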

The full problem is therefore as follows:

\begin{align*} \text{minimize}~ \quad & x_1 + 2x_2 - 1 \\ \text{s.t.} \quad & -x_1 - 2x_2 \leq 8 \\ & -2x_1 + x_2 \leq 1 \\ & -x_2 \leq -1 \\ & 2x_1 + x_2 \leq 20 \\ & x_1, x_2 \in \mathbb{R} \end{align*}

The feasible set is the intersection of the half-planes defined by the constraints, shown in the figure below. Note that the following code is not part of the optimization problem; it is used only for visualization.

In [ ]:
from utils import plot_constraints_and_feasible_set

plot_constraints_and_feasible_set(C, D)
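If you don't have the companion `utils` module, the feasible polygon can still be recovered without plotting: intersect each pair of constraint boundary lines and keep the intersection points that satisfy every constraint. This is a minimal sketch assuming `numpy` is available; `feasible_vertices` is a hypothetical helper name.

```python
import itertools
import numpy as np

# Constraint data C x <= D as arrays (rows match the dictionary above)
C_mat = np.array([[-1, -2], [-2, 1], [0, -1], [2, 1]], dtype=float)
D_vec = np.array([8, 1, -1, 20], dtype=float)

def feasible_vertices(C, D, tol=1e-9):
    """Intersect every pair of constraint boundaries; keep points inside the polyhedron."""
    vertices = []
    for j, k in itertools.combinations(range(len(D)), 2):
        M = C[[j, k], :]
        if abs(np.linalg.det(M)) < tol:
            continue  # parallel boundaries never intersect
        x = np.linalg.solve(M, D[[j, k]])
        if np.all(C @ x <= D + tol):
            vertices.append(tuple(float(v) for v in np.round(x, 6)))
    return sorted(set(vertices))

print(feasible_vertices(C_mat, D_vec))  # [(0.0, 1.0), (4.75, 10.5), (9.5, 1.0)]
```

Only three of the six pairwise intersections survive the feasibility check, so the feasible set here is a triangle (the first constraint is never binding).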

It is clear that the feasible set is bounded and nonempty, so an optimal solution exists. Let's use the pyomo library to find it.

Model Formulation in Pyomo

We begin by importing the necessary libraries. Then, we define the sets and parameters $A$, $b$, $C$, and $D$. We also define the model and the decision variables $x$.

In [ ]:
from pyomo.environ import AbstractModel, Var, Param, Objective, Constraint, SolverFactory, minimize, RangeSet, value, Reals, SolverStatus, TerminationCondition

# Define the abstract model
model = AbstractModel()

# Declare index sets for variables and constraints
model.I = RangeSet(1, num_vars)  # Index set for variables 
model.J = RangeSet(1, num_constraints)  # Index set for constraints

# Declare parameters
model.A = Param(model.I, within=Reals)
model.b = Param(within=Reals)
model.C = Param(model.J, model.I, within=Reals)
model.D = Param(model.J, within=Reals)

# Declare model variables with correct domain
model.x = Var(model.I, domain=Reals)

Finally, we define the objective function and the constraints.

In [ ]:
# Objective function
def objective_rule(model):
    return sum(model.A[i] * model.x[i] for i in model.I) + model.b
model.obj = Objective(rule=objective_rule, sense=minimize)

# Constraints
def constraint_rule(model, j):
    return sum(model.C[j, i] * model.x[i] for i in model.I)  <= model.D[j]
model.constraint = Constraint(model.J, rule=constraint_rule)

Note that we use the AbstractModel class to define the model. This allows us to define the model structure without specifying the parameter values just yet, which is useful when we want to solve the same problem with different parameters, or with random parameters.

We then create an instance of the model by passing the values of the parameters we stored earlier.

In [ ]:
# Create a concrete instance of the model
instance = model.create_instance(model_params)

Solving the Model

Now that the problem instance is created, we can solve it using the gurobi solver. You may use other solvers such as cplex or glpk by changing the solver name. If the solver executable is not on your PATH, point pyomo to it explicitly via the executable argument, as shown below.

In [ ]:
# Solve the concrete instance of the model
solver = SolverFactory('gurobi', executable='/Library/gurobi1100/macos_universal2/bin/gurobi.sh') #change the path to the gurobi.sh file in your system
result = solver.solve(instance, tee=True)
Set parameter Username
Academic license - for non-commercial use only - expires 2024-12-23
Read LP format model from file /var/folders/d5/r2v0z0z17nnfzysx1xgwf9jc0000gn/T/tmpmn34wctr.pyomo.lp
Reading time = 0.00 seconds
x3: 5 rows, 3 columns, 8 nonzeros
Gurobi Optimizer version 11.0.0 build v11.0.0rc2 (mac64[x86] - Darwin 23.2.0 23C71)

CPU model: Intel(R) Core(TM) i5-1038NG7 CPU @ 2.00GHz
Thread count: 4 physical cores, 8 logical processors, using up to 8 threads

Optimize a model with 5 rows, 3 columns and 8 nonzeros
Model fingerprint: 0x049fad8e
Coefficient statistics:
  Matrix range     [1e+00, 2e+00]
  Objective range  [1e+00, 2e+00]
  Bounds range     [0e+00, 0e+00]
  RHS range        [1e+00, 2e+01]
Presolve removed 5 rows and 3 columns
Presolve time: 0.01s
Presolve: All rows and columns removed
Iteration    Objective       Primal Inf.    Dual Inf.      Time
       0    1.0000000e+00   0.000000e+00   0.000000e+00      0s

Solved in 0 iterations and 0.02 seconds (0.00 work units)
Optimal objective  1.000000000e+00
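Since gurobi is commercial, it can be reassuring to cross-check the result with an open-source solver. The sketch below uses `scipy.optimize.linprog` (HiGHS backend) on the same data; note the explicit `bounds`, because `linprog` defaults to non-negative variables while our $x$ is free.

```python
from scipy.optimize import linprog

c = [1, 2]                                    # objective coefficients A (constant b added afterwards)
C_mat = [[-1, -2], [-2, 1], [0, -1], [2, 1]]  # constraint matrix C
D_vec = [8, 1, -1, 20]                        # right-hand side D

# bounds=(None, None) makes both variables free, matching x in R^n
res = linprog(c, A_ub=C_mat, b_ub=D_vec, bounds=[(None, None)] * 2, method="highs")
print(res.x, res.fun - 1)                     # optimal point and objective including b = -1
```

The reported optimum matches gurobi's: $x^\star = (0, 1)$ with objective value $1$.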

Also, it is good practice to check the status of the solution. If the solver status is ok and the termination condition is optimal, the solution is valid; otherwise, it may not be.

In [ ]:
# Print the results
if result.solver.status == SolverStatus.ok and result.solver.termination_condition == TerminationCondition.optimal:
    x_optimal = {i: value(instance.x[i]) for i in instance.I}
    optimal_value = instance.obj()    
    print(f'Optimal solution: {x_optimal}, optimal value: {optimal_value}')
else:
    print('Solver status:', result.solver.status)
    print('Termination condition:', result.solver.termination_condition)
Optimal solution: {1: 0.0, 2: 1.0}, optimal value: 1.0

Extracting the Solution

Now that we have the solution, we can extract the optimal point and the optimal value of the objective function, and visualize them by plotting the feasible set together with a level line of the objective. For a linear program, the optimum lies at a vertex of the feasible polygon, where the objective's level line last touches the feasible set.

In [ ]:
from utils import plot_feasible_set_and_objective

x_star = [x_optimal[i] for i in instance.I]
f_star = optimal_value

plot_feasible_set_and_objective(C, D, A, b, f_star, x_star)
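Complementing the plot, we can verify numerically that the optimum sits at a vertex by checking which constraints are active (hold with equality) at $x^\star = (0, 1)$. This snippet is an illustrative addition, not part of the original notebook.

```python
# Same constraint data as above (first row of C is [-1, -2])
C = {(1, 1): -1, (1, 2): -2, (2, 1): -2, (2, 2): 1,
     (3, 1): 0, (3, 2): -1, (4, 1): 2, (4, 2): 1}
D = {1: 8, 2: 1, 3: -1, 4: 20}
x_star = {1: 0.0, 2: 1.0}

# A constraint j is binding when C_j . x* equals D_j
binding = [j for j in D if abs(sum(C[j, i] * x_star[i] for i in (1, 2)) - D[j]) < 1e-9]
print(binding)  # constraints 2 and 3 intersect at the optimal vertex
```

Two constraints are binding, confirming that the optimum is the vertex where their boundary lines cross.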

Concluding Remarks

A couple of concluding remarks here:

  • You can easily load the parameters from a file, a database, or a web service. This is useful when you want to solve the same problem with different parameters.
  • You can also use the pyomo library to solve more complex problems, such as mixed-integer linear programming (MILP), quadratic programming (QP), and nonlinear programming (NLP).
  • You can also use the pyomo library to model problems with random parameters, and use the gurobi solver to solve the resulting stochastic optimization problems. In the next notebook, we will show how to solve stochastic optimization problems using simulation-based and polytope-based methods.

The full notebook with the visualization code and practice set included is available here.
