aerosandbox.optimization#

Submodules#

Package Contents#

Classes#

Opti

The base class for mathematical optimization. For detailed usage, see the docstrings in its key methods:

OptiSol

class aerosandbox.optimization.Opti(variable_categories_to_freeze=None, cache_filename=None, load_frozen_variables_from_cache=False, save_to_cache_on_solve=False, ignore_violated_parametric_constraints=False, freeze_style='parameter')[source]#

Bases: casadi.Opti

The base class for mathematical optimization. For detailed usage, see the docstrings in its key methods:
  • Opti.variable()

  • Opti.subject_to()

  • Opti.parameter()

  • Opti.solve()

Example usage is as follows:

>>> opti = asb.Opti() # Initializes an optimization environment
>>> x = opti.variable(init_guess=5) # Initializes a new variable in that environment
>>> f = x ** 2 # Evaluates a (in this case, nonlinear) function based on a variable
>>> opti.subject_to(x > 3) # Adds a constraint to be enforced
>>> opti.minimize(f) # Sets the objective function as f
>>> sol = opti.solve() # Solves the problem using CasADi and IPOPT backend
>>> print(sol(x)) # Prints the value of x at the optimum.
Parameters:
  • variable_categories_to_freeze (Union[List[str], str]) –

  • cache_filename (str) –

  • load_frozen_variables_from_cache (bool) –

  • save_to_cache_on_solve (bool) –

  • ignore_violated_parametric_constraints (bool) –

  • freeze_style (str) –

variable(init_guess=None, n_vars=None, scale=None, freeze=False, log_transform=False, category='Uncategorized', lower_bound=None, upper_bound=None, _stacklevel=1)[source]#

Initializes a new decision variable (or vector of decision variables). You should pass an initial guess (init_guess) when defining a new variable. Dimensionality is inferred from this initial guess, but it can be overridden; see below for syntax.

It is highly, highly recommended that you provide a scale (scale) for each variable, especially for nonconvex problems, although this is not strictly required.

Usage notes:

When using vector variables, individual components of this vector of variables can be accessed via normal indexing. Example:

>>> opti = asb.Opti()
>>> my_var = opti.variable(n_vars = 5)
>>> opti.subject_to(my_var[3] >= my_var[2])  # This is a valid way of indexing
>>> my_sum = asb.sum(my_var)  # This will sum up all elements of `my_var`
Parameters:
  • init_guess (Union[float, aerosandbox.numpy.ndarray]) –

    Initial guess for the optimal value of the variable being initialized. This is where in the design space the optimizer will start looking.

    This can be either a float or a NumPy ndarray; the dimension of the variable (i.e. scalar, vector) that is created will be automatically inferred from the shape of the initial guess you provide here. (Although it can also be overridden using the n_vars parameter; see below.)

    For scalar variables, your initial guess should be a float:

    >>> opti = asb.Opti()
    >>> scalar_var = opti.variable(init_guess=5) # Initializes a scalar variable at a value of 5
    

    For vector variables, your initial guess should be either:

    • a float, in which case you must pass the length of the vector as n_vars, otherwise a scalar variable will be created:

    >>> opti = asb.Opti()
    >>> vector_var = opti.variable(init_guess=5, n_vars=10) # Initializes a vector variable of length
    >>> # 10, with all 10 elements set to an initial guess of 5.
    
    • a NumPy ndarray, in which case each element will be initialized to the corresponding value in the given array:

    >>> opti = asb.Opti()
    >>> vector_var = opti.variable(init_guess=np.linspace(0, 5, 10)) # Initializes a vector variable of
    >>> # length 10, with all 10 elements initialized to linearly vary between 0 and 5.
    

    In the case where the variable is to be log-transformed (see log_transform), the initial guess should not be log-transformed as well - just supply the initial guess as usual. (Log-transform of the initial guess happens under the hood.) The initial guess must, of course, be a positive number in this case.

  • n_vars (int) –

    [Optional] Used to manually override the dimensionality of the variable to create; if not provided, the dimensionality of the variable is inferred from the initial guess init_guess.

    The only real case where you need to use this argument would be if you are initializing a vector variable to a scalar value, but you don’t feel like using init_guess=value * np.ones(n_vars). For example:

    >>> opti = asb.Opti()
    >>> vector_var = opti.variable(init_guess=5, n_vars=10) # Initializes a vector variable of length
    >>> # 10, with all 10 elements set to an initial guess of 5.
    

  • scale (float) –

    [Optional] Approximate scale of the variable.

    For example, if you're optimizing the design of an automobile and setting the tire diameter as an optimization variable, you might choose scale=0.5, corresponding to 0.5 meters.

    Properly scaling your variables can have a huge impact on solution speed (or even if the optimizer converges at all). Although most modern second-order optimizers (such as IPOPT, used here) are theoretically scale-invariant, numerical precision issues due to floating-point arithmetic can make solving poorly-scaled problems really difficult or impossible. See here for more info: https://web.casadi.org/blog/nlp-scaling/

    If not specified, the code will try to pick a sensible value by defaulting to the init_guess.

  • freeze (bool) –

    [Optional] This boolean tells the optimizer to "freeze" the variable at a specific value. To determine the value at which to freeze the variable, the optimizer will use the following logic:

    • If you initialize a new variable with the parameter freeze=True: the optimizer will freeze the variable at the value of the initial guess.

    >>> opti = Opti()
    >>> my_var = opti.variable(init_guess=5, freeze=True) # This will freeze my_var at a value of 5.
    
    • If the Opti instance is associated with a cache file, and you told it to freeze a specific category(s) of variables that your variable is a member of, and you didn't manually specify to freeze the variable: the variable will be frozen based on the value in the cache file (and ignore the init_guess). Example:

    >>> opti = Opti(cache_filename="my_file.json", variable_categories_to_freeze=["Wheel Sizing"])
    >>> # Assume, for example, that `my_file.json` was from a previous run where my_var=10.
    >>> my_var = opti.variable(init_guess=5, category="Wheel Sizing")
    >>> # This will freeze my_var at a value of 10 (from the cache file, not the init_guess)
    
    • If the Opti instance is associated with a cache file, and you told it to freeze a specific category(s) of variables that your variable is a member of, but you then manually specified that the variable should be frozen: the variable will once again be frozen at the value of init_guess:

    >>> opti = Opti(cache_filename="my_file.json", variable_categories_to_freeze=["Wheel Sizing"])
    >>> # Assume, for example, that `my_file.json` was from a previous run where my_var=10.
    >>> my_var = opti.variable(init_guess=5, category="Wheel Sizing", freeze=True)
    >>> # This will freeze my_var at a value of 5 (`freeze` overrides category loading.)
    

    Motivation for freezing variables:

    The ability to freeze variables is exceptionally useful when designing engineering systems. Let's say we're designing an airplane. In the beginning of the design process, we're doing "clean-sheet" design - any variable is up for grabs for us to optimize on, because the airplane doesn't exist yet! However, the farther we get into the design process, the more things get "locked in" - we may have ordered jigs, settled on a wingspan, chosen an engine, et cetera. So, if something changes later (let's say we discover that one of our assumptions was too optimistic halfway through the design process), we have to make up for that lost margin using only the variables that are still free. To do this, we would freeze the variables that are already decided on.

    By categorizing variables, you can also freeze entire categories of variables. For example, you can freeze all of the wing design variables for an airplane but leave all of the fuselage variables free.

    This idea of freezing variables can also be used to look at off-design performance - freeze a design, but change the operating conditions.

  • log_transform (bool) – [Optional] Advanced use only. A flag of whether to internally-log-transform this variable before passing it to the optimizer. Good for known positive engineering quantities that become nonsensical if negative (e.g. mass). Log-transforming these variables can also help maintain convexity.

  • category (str) – [Optional] What category of variables does this belong to? # TODO expand docs

  • lower_bound (float) – [Optional] If provided, defines a bounds constraint on the new variable that keeps the variable above a given value.

  • upper_bound (float) – [Optional] If provided, defines a bounds constraint on the new variable that keeps the variable below a given value.

  • _stacklevel (int) – Optional and advanced, purely used for debugging. Allows users to correctly track where variables are declared in the event that they are subclassing aerosandbox.Opti. Modifies the stacklevel of the declaration tracked, which is then presented using aerosandbox.Opti.variable_declaration().

Returns:

The variable itself as a symbolic CasADi variable (MX type).

Return type:

casadi.MX
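The log_transform option above can be pictured with a small sketch. This is a conceptual simplification in plain NumPy, not the aerosandbox internals: the optimizer works on y = log(x), which can roam over the whole real line, while the physical quantity x = exp(y) is guaranteed positive.

```python
import numpy as np

# Conceptual sketch of a log-transformed variable (illustrative only):
# the optimizer sees y = log(x); the user-facing quantity is x = exp(y).
init_guess = 5.0               # supplied *untransformed*, as the docs describe
y0 = np.log(init_guess)        # the transform happens under the hood
x0 = np.exp(y0)                # recovering the value reproduces the guess

print(x0)                      # ~5.0
print(np.exp(-50.0) > 0.0)     # True: x stays positive no matter how low y goes
```

This is why the initial guess must be a positive number when log_transform=True: log of a non-positive guess is undefined.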

subject_to(constraint, _stacklevel=1)[source]#

Initialize new equality or inequality constraint(s).

Parameters:
  • constraint (Union[casadi.MX, bool, List]) –

    A constraint that you want to hold true at the optimum.

    Inequality example:

    >>> x = opti.variable()
    >>> opti.subject_to(x >= 5)
    

    Equality example; also showing that you can directly constrain functions of variables:

    >>> x = opti.variable()
    >>> f = np.sin(x)
    >>> opti.subject_to(f == 0.5)
    

    You can also pass in a list of multiple constraints using list syntax. For example:

    >>> x = opti.variable()
    >>> opti.subject_to([
    >>>     x >= 5,
    >>>     x <= 10
    >>> ])
    

  • _stacklevel (int) – Optional and advanced, purely used for debugging. Allows users to correctly track where constraints are declared in the event that they are subclassing aerosandbox.Opti. Modifies the stacklevel of the declaration tracked, which is then presented using aerosandbox.Opti.constraint_declaration().

Returns:

The dual variable associated with the new constraint. If the constraint input is a list, returns a list of dual variables.

Return type:

Union[casadi.MX, None, List[casadi.MX]]
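The dual variable that subject_to() returns can be understood with a small worked example in plain arithmetic (no solver, and no aerosandbox calls involved): minimize x² subject to x ≥ 5.

```python
# Worked example of what a dual variable (Lagrange multiplier) means, for:
#     minimize x**2   subject to   x >= 5.
# The constraint is active at the optimum (x* = 5), and stationarity of the
# Lagrangian L(x, lam) = x**2 - lam * (x - 5) gives dL/dx = 2*x - lam = 0.
x_opt = 5.0
lam = 2 * x_opt   # the dual: sensitivity of the optimal objective to the bound
print(lam)        # 10.0: relaxing the bound by eps improves f* by about 10*eps
```

In other words, the dual variable measures how much the optimal objective would improve per unit of constraint relaxation.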

minimize(f)[source]#

[INTERNAL]

minimize(self, MX f)

Set objective.

Objective must be a scalar. Default objective: 0. When the method is called multiple times, the last call takes effect.

Extra doc: https://github.com/casadi/casadi/wiki/L_1a

Doc source: https://github.com/casadi/casadi/blob/develop/casadi/core/optistack.hpp#L133

Implementation: https://github.com/casadi/casadi/blob/develop/casadi/core/optistack.cpp#L82-L88

Parameters:

f (casadi.MX) –

Return type:

None

maximize(f)[source]#
Parameters:

f (casadi.MX) –

Return type:

None
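maximize(f) is the mirror image of minimize(f). A quick numerical check of the underlying identity (maximizing f is the same as minimizing -f), in plain NumPy and independent of aerosandbox:

```python
import numpy as np

# Numerical check of the identity behind maximize(): argmax f == argmin (-f).
x = np.linspace(-2.0, 2.0, 401)
f = -(x - 1.0) ** 2                    # concave, with its peak at x = 1
x_from_max = x[np.argmax(f)]
x_from_min_of_neg = x[np.argmin(-f)]
print(x_from_max, x_from_min_of_neg)   # both near 1.0
```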

parameter(value=0.0, n_params=None)[source]#

Initializes a new parameter (or vector of parameters). You must pass a value (value) upon defining a new parameter. Dimensionality is inferred from this value, but it can be overridden; see below for syntax.

Parameters:
  • value (Union[float, aerosandbox.numpy.ndarray]) –

    Value to set the new parameter to.

    This can either be a float or a NumPy ndarray; the dimension of the parameter (i.e. scalar, vector) that is created will be automatically inferred from the shape of the value you provide here. (Although it can be overridden using the n_params parameter; see below.)

    For scalar parameters, your value should be a float:

    >>> opti = asb.Opti()
    >>> scalar_param = opti.parameter(value=5) # Initializes a scalar parameter and sets its value to 5.

    For vector parameters, your value should be either:

    • a float, in which case you must pass the length of the vector as n_params, otherwise a scalar parameter will be created:

    >>> opti = asb.Opti()
    >>> vector_param = opti.parameter(value=5, n_params=10) # Initializes a vector parameter of length
    >>> # 10, with all 10 elements set to value of 5.
    
    • a NumPy ndarray, in which case each element will be set to the corresponding value in the given array:

    >>> opti = asb.Opti()
    >>> vector_param = opti.parameter(value=np.linspace(0, 5, 10)) # Initializes a vector parameter of
    >>> # length 10, with all 10 elements set to a value varying from 0 to 5.
    

  • n_params (int) –

    [Optional] Used to manually override the dimensionality of the parameter to create; if not provided, the dimensionality of the parameter is inferred from value.

    The only real case where you need to use this argument would be if you are initializing a vector parameter to a scalar value, but you don't feel like using value=my_value * np.ones(n_params). For example:

    >>> opti = asb.Opti()
    >>> vector_param = opti.parameter(value=5, n_params=10) # Initializes a vector parameter of length
    >>> # 10, with all 10 elements set to a value of 5.
    

Returns:

The parameter itself as a symbolic CasADi variable (MX type).

Return type:

casadi.MX

solve(parameter_mapping=None, max_iter=1000, max_runtime=1e+20, callback=None, verbose=True, jit=False, detect_simple_bounds=False, options=None, behavior_on_failure='raise')[source]#

Solve the optimization problem using CasADi with IPOPT backend.

Parameters:
  • parameter_mapping (Dict[casadi.MX, float]) –

    [Optional] Allows you to specify values for parameters. Dictionary where the key is the parameter and the value is the value to be set to.

    Example: # TODO update syntax for required init_guess
    >>> opti = asb.Opti()
    >>> x = opti.variable()
    >>> p = opti.parameter()
    >>> opti.minimize(x ** 2)
    >>> opti.subject_to(x >= p)
    >>> sol = opti.solve(
    >>>     {
    >>>         p: 5 # Sets the value of parameter p to 5, then solves.
    >>>     }
    >>> )
    

  • max_iter (int) – [Optional] The maximum number of iterations allowed before giving up.

  • max_runtime (float) – [Optional] Gives the maximum allowable runtime before giving up.

  • callback (Callable[[int], Any]) –

    [Optional] A function to be called at each iteration of the optimization algorithm. Useful for printing progress or displaying intermediate results.

    The callback function func should have the syntax func(iteration_number), where iteration_number is an integer corresponding to the current iteration number. In order to access intermediate quantities of optimization variables (e.g. for plotting), use the Opti.debug.value(x) syntax for each variable x.

  • verbose (bool) – Controls the verbosity of the solver. If True, IPOPT will print its progress to the console.

  • jit (bool) – Experimental. If True, the optimization problem will be compiled to C++ and then JIT-compiled using the CasADi JIT compiler. This can lead to significant speedups, but may also lead to unexpected behavior, and may not work on all platforms.

  • options (Dict) – [Optional] A dictionary of options to pass to IPOPT. See the IPOPT documentation for a list of available options.

  • behavior_on_failure (str) –

    [Optional] What should we do if the optimization fails? Options are:

    • "raise": Raise an exception. This is the default behavior.

    • "return_last": Return the solution from the last iteration, and raise a warning.

      NOTE: The returned solution may not be feasible! (It also may not be optimal.)

  • detect_simple_bounds (bool) –

Return type:

OptiSol

Returns: An OptiSol object that contains the solved optimization problem. To extract values, use my_optisol(variable).

Example:
>>> sol = opti.solve()
>>> x_opt = sol(x) # Get the value of variable x at the optimum.
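The callback parameter expects a callable of one integer argument. A minimal sketch of the hook's shape, with a stand-in loop playing the role of the real IPOPT iterations (in an actual run, you would read intermediate variable values inside the callback via Opti.debug.value(x), as described above):

```python
# Sketch of the callback signature solve() expects: func(iteration_number).
# A trivial stand-in loop takes the place of the optimizer's iterations here,
# purely to show the shape of the hook.
history = []

def my_callback(iteration_number: int) -> None:
    history.append(iteration_number)
    # In a real run, intermediate quantities would be read here,
    # e.g. via opti.debug.value(x), for printing or live plotting.

for i in range(3):     # stand-in for IPOPT's iteration loop
    my_callback(i)

print(history)         # [0, 1, 2]
```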
solve_sweep(parameter_mapping, update_initial_guesses_between_solves=False, verbose=True, solve_kwargs=None, return_callable=False, garbage_collect_between_runs=False)[source]#
Parameters:
  • parameter_mapping (Dict[casadi.MX, aerosandbox.numpy.ndarray]) –

  • solve_kwargs (Dict) –

  • return_callable (bool) –

  • garbage_collect_between_runs (bool) –

Return type:

Union[aerosandbox.numpy.ndarray, Callable[[casadi.MX], aerosandbox.numpy.ndarray]]

find_variable_declaration(index, use_full_filename=False, return_string=False)[source]#
Parameters:
  • index (int) –

  • use_full_filename (bool) –

  • return_string (bool) –

Return type:

Union[None, str]

find_constraint_declaration(index, use_full_filename=False, return_string=False)[source]#
Parameters:
  • index (int) –

  • use_full_filename (bool) –

  • return_string (bool) –

Return type:

Union[None, str]

set_initial_from_sol(sol, initialize_primals=True, initialize_duals=True)[source]#

Sets the initial value of all variables in the Opti object to the solution of another Opti instance. Useful for warm-starting an Opti instance based on the result of another instance.

Args:

sol: Takes in the solution object. Assumes that sol corresponds to exactly the same optimization problem as this Opti instance, perhaps with different parameter values.

Returns: None (in-place)

Parameters:

sol (casadi.OptiSol) –

Return type:

None

save_solution()[source]#
get_solution_dict_from_cache()[source]#
derivative_of(variable, with_respect_to, derivative_init_guess, derivative_scale=None, method='trapezoidal', explicit=False, _stacklevel=1)[source]#

Returns a quantity that is either defined or constrained to be a derivative of an existing variable.

For example:

>>> opti = Opti()
>>> position = opti.variable(init_guess=0, n_vars=100)
>>> time = np.linspace(0, 1, 100)
>>> velocity = opti.derivative_of(position, with_respect_to=time)
>>> acceleration = opti.derivative_of(velocity, with_respect_to=time)
Parameters:
  • variable (casadi.MX) – The variable or quantity that you are taking the derivative of. The "numerator" of the derivative, in colloquial parlance.

  • with_respect_to (Union[aerosandbox.numpy.ndarray, casadi.MX]) – The variable or quantity that you are taking the derivative with respect to. The "denominator" of the derivative, in colloquial parlance. In a typical example case, this with_respect_to parameter would be time. Please make sure that the value of this parameter is monotonically increasing, otherwise you may get nonsensical answers.

  • derivative_init_guess (Union[float, aerosandbox.numpy.ndarray]) – Initial guess for the value of the derivative. Should be either a float (in which case the initial guess will be a vector equal to this value) or a vector of initial guesses with the same length as variable. For more info, see the docstring of opti.variable()'s init_guess parameter.

  • derivative_scale (Union[float, aerosandbox.numpy.ndarray]) – Scale factor for the value of the derivative. For more info, see the docstring of opti.variable()'s scale parameter.

  • method (str) –

    The type of integrator to use to define this derivative. Options are:

    "forward euler", "backward euler", and "midpoint" are all (lower-order) Runge-Kutta methods…

    • "runge-kutta-3/8" – A modified version of the Runge-Kutta 4 method proposed by Kutta in 1901. Also fourth-order-accurate, but all of the error coefficients are smaller than in the standard Runge-Kutta 4 method. The downside is that more floating-point operations are required per timestep, as the Butcher tableau is denser (i.e. not banded).

    Citation: Kutta, Martin (1901), "Beitrag zur näherungsweisen Integration totaler Differentialgleichungen", Zeitschrift für Mathematik und Physik, 46: 435–453

  • explicit (bool) – If true, returns an explicit derivative rather than an implicit one. In other words, this defines the output to be a derivative of the input, rather than constraining the output to be a derivative of the input.

    Explicit derivatives result in smaller, denser systems of equations that are more akin to shooting-type methods. Implicit derivatives result in larger, sparser systems of equations that are more akin to collocation methods. Explicit derivatives are better for simple, stable systems with few states, while implicit derivatives are better for complex, potentially-unstable systems with many states.

    # TODO implement explicit

  • _stacklevel (int) – Optional and advanced, purely used for debugging. Allows users to correctly track where constraints are declared in the event that they are subclassing aerosandbox.Opti. Modifies the stacklevel of the declaration tracked, which is then presented using aerosandbox.Opti.variable_declaration().

Return type:

casadi.MX

Returns: A vector consisting of the derivative of variable with respect to with_respect_to.
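The practical difference among the method options is integration accuracy. A quick comparison in plain NumPy (illustrative only, not aerosandbox code): reconstruct x(t) = sin(t) from its derivative v(t) = cos(t) using the update rules behind two of the options.

```python
import numpy as np

# Compare the update rules behind two integrator options on a known pair:
# x(t) = sin(t) and its derivative v(t) = cos(t).
t = np.linspace(0.0, 1.0, 101)
dt = np.diff(t)
v = np.cos(t)
x_exact = np.sin(t)

# "forward euler": x[i+1] = x[i] + dt[i] * v[i]          (first-order-accurate)
x_fe = np.concatenate([[0.0], np.cumsum(dt * v[:-1])])

# "trapezoidal": x[i+1] = x[i] + dt[i] * (v[i] + v[i+1]) / 2   (second-order)
x_tr = np.concatenate([[0.0], np.cumsum(dt * (v[:-1] + v[1:]) / 2.0)])

err_fe = np.max(np.abs(x_fe - x_exact))
err_tr = np.max(np.abs(x_tr - x_exact))
print(err_fe, err_tr)  # the trapezoidal error is orders of magnitude smaller
```

Halving the timestep roughly halves the forward-Euler error but quarters the trapezoidal error, which is why the higher-order options pay off for smooth dynamics.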

constrain_derivative(derivative, variable, with_respect_to, method='trapezoidal', _stacklevel=1)[source]#

Adds a constraint to the optimization problem such that:

d(variable) / d(with_respect_to) == derivative

Can be used directly; also called indirectly by opti.derivative_of() for implicit derivative creation.

Parameters:
  • derivative (casadi.MX) – The derivative that is to be constrained here.

  • variable (casadi.MX) – The variable or quantity that you are taking the derivative of. The "numerator" of the derivative, in colloquial parlance.

  • with_respect_to (Union[aerosandbox.numpy.ndarray, casadi.MX]) – The variable or quantity that you are taking the derivative with respect to. The "denominator" of the derivative, in colloquial parlance. In a typical example case, this with_respect_to parameter would be time. Please make sure that the value of this parameter is monotonically increasing, otherwise you may get nonsensical answers.

  • method (str) –

    The type of integrator to use to define this derivative. Options are:

    "forward euler", "backward euler", and "midpoint" are all (lower-order) Runge-Kutta methods…

    • "runge-kutta-3/8" – A modified version of the Runge-Kutta 4 method proposed by Kutta in 1901. Also fourth-order-accurate, but all of the error coefficients are smaller than in the standard Runge-Kutta 4 method. The downside is that more floating-point operations are required per timestep, as the Butcher tableau is denser (i.e. not banded).

    Citation: Kutta, Martin (1901), "Beitrag zur näherungsweisen Integration totaler Differentialgleichungen", Zeitschrift für Mathematik und Physik, 46: 435–453

    Note that all methods are expressed as integrators rather than differentiators; this prevents singularities from forming in the limit of timestep approaching zero. (For those coming from the PDE world, this is analogous to using finite volume methods rather than finite difference methods to allow shock capturing.)

  • _stacklevel (int) – Optional and advanced, purely used for debugging. Allows users to correctly track where constraints are declared in the event that they are subclassing aerosandbox.Opti. Modifies the stacklevel of the declaration tracked, which is then presented using aerosandbox.Opti.variable_declaration().

Return type:

None

Returns: None (adds constraint in-place).
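The constraint above, with the default method='trapezoidal', requires that between each pair of adjacent samples the change in variable equals the timestep times the average of the endpoint derivatives. A plain-NumPy sketch of that algebra (illustrative, not aerosandbox code):

```python
import numpy as np

# The trapezoidal constraint ties adjacent samples of a quantity x(t) to its
# claimed derivative v(t):
#     x[i+1] - x[i] == (t[i+1] - t[i]) * (v[i+1] + v[i]) / 2
# For a derivative that is linear in t, the trapezoid rule is exact, so the
# residual of that constraint vanishes (up to floating-point roundoff).
t = np.linspace(0.0, 1.0, 100)
x = t ** 2          # the "variable"
v = 2.0 * t         # its exact derivative
residual = (x[1:] - x[:-1]) - np.diff(t) * (v[1:] + v[:-1]) / 2.0
max_residual = np.max(np.abs(residual))
print(max_residual)  # effectively zero (machine precision)
```

Inside the solver, each of those per-interval equations becomes one row of the constraint system, which is what makes the implicit (collocation-style) formulation large but sparse.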

class aerosandbox.optimization.OptiSol(opti, cas_optisol)[source]#
Parameters:
  • opti (Opti) –

  • cas_optisol (casadi.OptiSol) –

__call__(x)[source]#

A shorthand alias for sol.value(x). See OptiSol.value() documentation for details.

Parameters:

x (Union[casadi.MX, aerosandbox.numpy.ndarray, float, int, List, Tuple, Set, Dict, Any]) – A Python data structure to substitute values into, using the solution in this OptiSol object.

Returns:

A copy of x, where all symbolic optimization variables (recursively substituted at unlimited depth) have been converted to float or array values.

Return type:

Any

_value_scalar(x)[source]#

Gets the value of a variable at the solution point. For developer use - see following paragraph.

This method is basically a less-powerful version of calling sol(x) - if you're a user and not a developer, you almost certainly want to use that method instead, as it is less fragile with respect to various input data types. This method exists only as an abstraction to make it easier for other developers to subclass OptiSol, if they wish to intercept the variable substitution process.

Parameters:

x (Union[casadi.MX, aerosandbox.numpy.ndarray, float, int]) –

Return type:

Union[float, aerosandbox.numpy.ndarray]

Returns:

The value of x at the solution point.

value(x, recursive=True, warn_on_unknown_types=False)[source]#
Gets the value of a variable (or a data structure) at the solution point. This solution point is the optimum, if the optimization process solved successfully.

On a computer science level, this method converts a symbolic optimization variable to a concrete float or array value. More generally, it converts any Python data structure (along with any of its contents, recursively, at unlimited depth), replacing any symbolic optimization variables it finds with concrete float or array values.

Note that, for convenience, you can simply call:

>>> sol(x)

if you prefer. This is equivalent to calling this method with the syntax:

>>> sol.value(x)

(these are aliases of each other)

Parameters:
  • x (Union[casadi.MX, aerosandbox.numpy.ndarray, float, int, List, Tuple, Set, Dict, Any]) – A Python data structure to substitute values into, using the solution in this OptiSol object.

  • recursive (bool) – If True, the substitution will be performed recursively. Otherwise, only the top-level data structure will be converted.

  • warn_on_unknown_types (bool) – If True, a warning will be issued if a data type that cannot be converted or parsed as definitively un-convertable is encountered.

Returns:

A copy of x, where all symbolic optimization variables (recursively substituted at unlimited depth) have been converted to float or array values.

Return type:

Any
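The recursive substitution that value() performs can be pictured with a toy model. This is a hypothetical stand-in, not the aerosandbox implementation: plain strings play the role of symbolic casadi.MX variables, and a dict plays the role of the solved values.

```python
# Toy model of recursive value substitution, in the spirit of
# OptiSol.value(x, recursive=True). Strings stand in for symbolic
# casadi.MX variables; `solution` stands in for the solver's result.
solution = {"x": 3.0, "y": 4.0}

def substitute(obj):
    if isinstance(obj, str):                 # stand-in for a symbolic variable
        return solution[obj]
    if isinstance(obj, dict):
        return {k: substitute(v) for k, v in obj.items()}
    if isinstance(obj, (list, tuple)):
        return type(obj)(substitute(v) for v in obj)
    return obj                               # concrete values pass through

print(substitute({"pos": ["x", "y"], "mass": 1.2}))
# {'pos': [3.0, 4.0], 'mass': 1.2}
```

With recursive=False, only a top-level symbolic object would be converted; nested containers would be left untouched.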


stats()[source]#
Return type:

Dict[str, Any]

value_variables()[source]#
value_parameters()[source]#
show_infeasibilities(tol=0.001)[source]#

Prints a summary of any violated constraints in the solution.

Parameters:

tol (float) – The tolerance for violation. If the constraint is violated by less than this amount, it will not be printed.

Return type:

None

Returns: None (prints to console)