By: Mansur Arief
In the previous notebook, we learned about deterministic optimization problems. In this notebook, we add complexity by considering random parameters and solve optimization under uncertainty problems. We will use the same problem as before, but with random parameters added.
Optimization under uncertainty involves random parameters. These parameters can be modeled as random variables, and the goal of the optimization problem is to find the best decision that minimizes or maximizes the expected objective value (if the objective parameters are stochastic) while satisfying the constraints for the possible parameter values (if the constraint parameters are stochastic). There are several methods for solving this class of problems, such as simulation-based approaches and robust optimization. In this notebook, we begin with the simulation-based approach and first treat optimization under uncertain constraints.
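In generic form (using an objective $f$, constraint function $g$, and random parameter $\xi$ introduced here only for illustration), such a problem can be written as
\begin{align*} \text{minimize} \quad & \mathbb{E}_{\xi}\left[f(x, \xi)\right] \\ \text{s.t.} \quad & g(x, \xi) \leq 0 \quad \text{for the possible realizations of } \xi \\ & x \in \mathbb{R}^n \end{align*}
In the simulation-based approach below, the expectation and the constraints are approximated using a finite set of sampled realizations.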
Recall our deterministic linear optimization problem:
\begin{align*} \text{minimize} \quad & Ax + b \\ \text{s.t.} \quad & Cx \leq D \\ & x \in \mathbb{R}^n \end{align*} where $b$ is a known constant and $A$, $C$, and $D$ are known matrices (of appropriate dimensions). In this notebook, we add randomness by assuming that $A$, $b$, and $D$ remain deterministic, while $C$ is random and its entries come from a bounded set. We will also assume that the random entries are independent and identically distributed (i.i.d.).
For concreteness, let's assume a 2D problem, i.e. $n=2$, with the following parameters: $A=\begin{bmatrix}1 & 2\end{bmatrix}$, $b=-1$, and $D=\begin{bmatrix}8\\1\\-1\\20\end{bmatrix}$. Meanwhile, $C=\begin{bmatrix}c_{11} & c_{12} \\ c_{21} & c_{22} \\ c_{31} & c_{32} \\ c_{41} & c_{42}\end{bmatrix}$, where
# Define the intervals for each element of C
intervals = {
    'c11': (-1.2, -0.8),
    'c12': (1.8, 2.2),
    'c21': (-2.2, -1.8),
    'c22': (0.8, 1.2),
    'c31': (-0.2, 0.2),
    'c32': (-1.2, -0.8),
    'c41': (1.8, 2.2),
    'c42': (0.8, 1.2),
}
In this approach, we generate a number of scenarios for the random parameters and solve the optimization problem while requiring the solution to be feasible in every scenario; the resulting sampled program is written out below. We begin by setting up the deterministic parameters.
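Concretely, with $N$ sampled realizations $C^{(1)}, \dots, C^{(N)}$ of the random matrix $C$ (where $N$ is num_samples below), the sampled program keeps the original objective and enforces one copy of the constraint per realization:
\begin{align*} \text{minimize} \quad & Ax + b \\ \text{s.t.} \quad & C^{(k)} x \leq D, \quad k = 1, \dots, N \\ & x \in \mathbb{R}^n \end{align*}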
# Given data
num_vars = 2
num_constraints = 4
A = {1: 1, 2: 2}
b = -1
D = {1: 8, 2: 1, 3: -1, 4: 20}
For the random parameters, we will sample num_samples scenarios. For simplicity, we will use a uniform distribution for each parameter.
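The sampling is done by sample_C from a local utils module, which is not shown in this notebook. A minimal sketch of what such a helper might look like, assuming each entry of $C$ is drawn independently and uniformly from its interval and the result has shape (num_samples, num_constraints, num_vars):
import numpy as np

def sample_C(num_samples, num_vars, num_constraints, intervals):
    # Hypothetical re-implementation of utils.sample_C (the actual helper is not shown here).
    # Each entry of C is drawn i.i.d. and uniformly from its interval, e.g. intervals['c11'].
    rng = np.random.default_rng()
    C = np.empty((num_samples, num_constraints, num_vars))
    for j in range(num_constraints):
        for i in range(num_vars):
            lo, hi = intervals[f'c{j+1}{i+1}']
            C[:, j, i] = rng.uniform(lo, hi, size=num_samples)
    return C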
from utils import sample_C, plot_constraints_and_feasible_set
num_samples = 30
C_samples = sample_C(num_samples, num_vars, num_constraints, intervals)
# Format the samples into a dictionary
flattened_C = C_samples.reshape((-1, num_vars))
C_s = {(i, j): flattened_C[i-1, j-1] for i in range(1, flattened_C.shape[0]+1) for j in range(1, num_vars+1)}
# Duplicate the D values for each sample
D_s = {i+1: D[i % num_constraints + 1] for i in range(num_constraints*num_samples)}
# Model parameters for pyomo instance later
model_params = {
    None: {
        'A': A,
        'b': {None: b},
        'C': C_s,
        'D': D_s
    }
}
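Because all scenario constraints are stacked into a single index set, stacked row $r$ corresponds to scenario $k = \lfloor (r-1)/\text{num\_constraints} \rfloor + 1$ and to original constraint $j = ((r-1) \bmod \text{num\_constraints}) + 1$. A small optional sanity check on the dictionaries built above:
# Optional sanity check on the stacked indexing
assert len(D_s) == num_constraints * num_samples
assert len(C_s) == num_constraints * num_samples * num_vars
assert D_s[1] == D_s[num_constraints + 1] == D[1]  # D repeats every num_constraints rows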
We can now plot the feasible region for each scenario.
plot_constraints_and_feasible_set(C_s, D_s, False)
It is clear that the feasible region differs from scenario to scenario. The goal of the simulation-based optimization is to find the best decision that is feasible for all scenarios. As in the previous problems, we will use the pyomo library to solve this problem.
We begin by importing the necessary libraries. Then, we define the index sets and the parameters $A$, $b$, $C$, and $D$. Here, we include all realizations of $C$ in the constraint set; the idea is to ensure that the solution is feasible for every realization of $C$. Finally, we define the model and the decision variables $x$.
from pyomo.environ import AbstractModel, Var, Param, Objective, Constraint, SolverFactory, minimize, RangeSet, value, Reals, SolverStatus, TerminationCondition
# Define the abstract model
model = AbstractModel()
# Declare index sets for variables and constraints
model.I = RangeSet(1, num_vars)
model.J = RangeSet(1, num_constraints*num_samples)  # Note that here we need to multiply by the number of samples
# Parameters
model.A = Param(model.I, within=Reals)
model.b = Param(within=Reals)
model.C = Param(model.J, model.I, within=Reals)
model.D = Param(model.J, within=Reals)
# Decision Variables
model.x = Var(model.I, domain=Reals)
Finally, we define the objective function and the constraints.
def objective_rule(model):
    return sum(model.A[i] * model.x[i] for i in model.I) + model.b
model.obj = Objective(rule=objective_rule, sense=minimize)
def constraint_rule(model, j):
    return sum(model.C[j, i] * model.x[i] for i in model.I) <= model.D[j]
model.constraint = Constraint(model.J, rule=constraint_rule)
We use the AbstractModel class to define the model, which allows us to defer specifying the values of the parameters until later. We then create an instance of the model by passing the parameter values we stored earlier.
# Create a concrete instance of the model
instance = model.create_instance(model_params)
Now that the problem instance is created, we can solve it using the gurobi solver. You may use other solvers such as cplex or glpk by changing the solver name. You may also need to find the path to the solver executable and set it in the solver object.
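For example, a sketch of switching to GLPK (assuming GLPK is installed; the executable argument is only needed if glpsol is not already on your PATH):
# Alternative solver (sketch): GLPK instead of Gurobi
# solver = SolverFactory('glpk')
# solver = SolverFactory('glpk', executable='/path/to/glpsol')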
# Solve the concrete instance of the model
solver = SolverFactory('gurobi', executable='/Library/gurobi1100/macos_universal2/bin/gurobi.sh') #change the path to the gurobi.sh file in your system
result = solver.solve(instance, tee=False)
Checking the status of the solution, we can see that the solver has found an optimal solution. We can also print the value of the decision variables and the objective function.
# Print the results
if result.solver.status == SolverStatus.ok and result.solver.termination_condition == TerminationCondition.optimal:
    x_optimal = {i: value(instance.x[i]) for i in instance.I}
    optimal_value = instance.obj()
    print(f'Optimal solution: {x_optimal}, optimal value: {optimal_value}')
else:
    print('Solver status:', result.solver.status)
    print('Termination condition:', result.solver.termination_condition)
Optimal solution: {1: 0.23077942964942114, 2: 1.235076775806934}, optimal value: 1.7009329812632892
We can extract the values of the decision variables and the objective function, and visualize the solution by plotting the feasible region together with the optimal point.
from utils import plot_feasible_set_and_objective
x_star = [x_optimal[i] for i in instance.I]
f_star = optimal_value
plot_feasible_set_and_objective(C_s, D_s, A, b, f_star, x_star, False)
A couple of remarks here:
In the previous example, we considered uncertainty in the constraints. We can also consider uncertainty in the objective function. In this case, we consider the parameter $A$ to be random, with entries coming from a bounded set. We will again assume that the random parameters are independent and identically distributed (i.i.d.).
For concreteness, let's assume a 2D problem, i.e. $n=2$, with the following parameters: $b=-1$, $C=\begin{bmatrix}-1 & 2 \\ -2 & 1 \\ 0 & -1 \\ 2 & 1\end{bmatrix}$, and $D=\begin{bmatrix}8\\1\\-1\\20\end{bmatrix}$. Meanwhile, $A=\begin{bmatrix}a_1 & a_2\end{bmatrix}$, where
# Define the intervals for each element of A
intervals = {
    'a1': (0, 2),
    'a2': (1, 3),
}
Similar to the previous case, we will generate a number of scenarios for the random parameters, and optimize for the expected value of the objective function.
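Concretely, with $N$ sampled realizations $A^{(1)}, \dots, A^{(N)}$, the sample-average version of the problem is
\begin{align*} \text{minimize} \quad & \frac{1}{N}\sum_{k=1}^{N}\left(A^{(k)}x + b\right) \\ \text{s.t.} \quad & Cx \leq D \\ & x \in \mathbb{R}^n \end{align*}
Since $b$ is deterministic, the averaged objective reduces to $\left(\frac{1}{N}\sum_{k=1}^{N}A^{(k)}\right)x + b$.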
from utils import sample_A
# Given data
num_vars = 2
num_constraints = 4
num_samples = 30
A_samples = sample_A(num_samples, num_vars, intervals)
# Format the samples into a dictionary
A_s = {(i, j): A_samples[i-1, j-1] for i in range(1, A_samples.shape[0]+1) for j in range(1, num_vars+1)}
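As before, sample_A comes from the local utils module, which is not shown. A minimal sketch under the assumption that each entry of $A$ is drawn independently and uniformly from its interval, returning an array of shape (num_samples, num_vars):
import numpy as np

def sample_A(num_samples, num_vars, intervals):
    # Hypothetical re-implementation of utils.sample_A (the actual helper is not shown here).
    rng = np.random.default_rng()
    A = np.empty((num_samples, num_vars))
    for i in range(num_vars):
        lo, hi = intervals[f'a{i+1}']
        A[:, i] = rng.uniform(lo, hi, size=num_samples)
    return A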
We can also set the deterministic parameters and format them into the model_params dictionary for pyomo.
C = {
    (1, 1): -1, (1, 2): 2,
    (2, 1): -2, (2, 2): 1,
    (3, 1): 0, (3, 2): -1,
    (4, 1): 2, (4, 2): 1
}
D = {1: 8, 2: 1, 3: -1, 4: 20}
# Model parameters for pyomo instance later
model_params = {
    None: {
        'A': A_s,
        'b': {None: b},
        'C': C,
        'D': D
    }
}
With this new setting, we can now define the model that optimizes the expected (average) objective value.
# Define the abstract model
model_avg = AbstractModel()
# Declare index sets for variables and constraints
model_avg.I = RangeSet(1, num_vars)
model_avg.J = RangeSet(1, num_constraints)
model_avg.K = RangeSet(1, num_samples) # Index set for samples
# Parameters
model_avg.A = Param(model_avg.K, model_avg.I, within=Reals)
model_avg.b = Param(within=Reals)
model_avg.C = Param(model_avg.J, model_avg.I, within=Reals)
model_avg.D = Param(model_avg.J, within=Reals)
# Decision variables
model_avg.x = Var(model_avg.I, domain=Reals)
The objective is now the sample average (expected value) of the original objective, and the constraints are the same as before.
def objective_rule(model_avg):
    # Average the sampled objective coefficients; b is deterministic, so it is added once outside the average
    return sum(model_avg.A[k, i] * model_avg.x[i] for i in model_avg.I for k in model_avg.K) / num_samples + model_avg.b
model_avg.obj = Objective(rule=objective_rule, sense=minimize)
def constraint_rule(model_avg, j):
    return sum(model_avg.C[j, i] * model_avg.x[i] for i in model_avg.I) <= model_avg.D[j]
model_avg.constraint = Constraint(model_avg.J, rule=constraint_rule)
We can now instantiate the model and solve the problem.
instance_avg = model_avg.create_instance(model_params)
result_avg = solver.solve(instance_avg, tee=False)
The solution can be extracted and visualized as before.
from utils import plot_feasible_set_and_objective_avg
# Print the results
if result_avg.solver.status == SolverStatus.ok and result_avg.solver.termination_condition == TerminationCondition.optimal:
    x_optimal_avg = {i: value(instance_avg.x[i]) for i in instance_avg.I}
    optimal_value_avg = instance_avg.obj()
    print(f'Optimal solution: {x_optimal_avg}, optimal value: {optimal_value_avg}')
else:
    print('Solver status:', result_avg.solver.status)
    print('Termination condition:', result_avg.solver.termination_condition)
# Visualize
x_star = [x_optimal_avg[i] for i in instance_avg.I]
f_star = optimal_value_avg
plot_feasible_set_and_objective_avg(C, D, A_s, b, f_star, x_star, num_samples, False)
Optimal solution: {1: 0.0, 2: 1.0}, optimal value: 0.9069008828251868
In this notebook, we learned about optimization under uncertainty. We considered two cases: uncertainty in the constraints and uncertainty in the objective function. We mainly used the simulation-based approach to solve the problems. In the next notebook, we will learn about robust optimization, which is another approach to solve optimization under uncertainty problems.
The full notebook with the visualization code and practice set included is available here.