aerosandbox.numpy.surrogate_model_tools#

Module Contents#

Functions#

softmax(*args[, softness, hardness])

An element-wise softmax between two or more arrays. Also referred to as the logsumexp() function.

softmin(*args[, softness, hardness])

An element-wise softmin between two or more arrays. Related to the logsumexp() function.

softmax_scalefree(*args[, relative_softness, ...])

softmin_scalefree(*args[, relative_softness, ...])

softplus(x[, beta, threshold])

A smooth approximation of the ReLU function, applied elementwise to an array x.

sigmoid(x[, sigmoid_type, normalization_range])

A sigmoid function. From Wikipedia (https://en.wikipedia.org/wiki/Sigmoid_function):

swish(x[, beta])

A smooth approximation of the ReLU function, applied elementwise to an array x.

blend(switch, value_switch_high, value_switch_low)

Smoothly blends between two values on the basis of some switch function.

aerosandbox.numpy.surrogate_model_tools.softmax(*args, softness=None, hardness=None)[source]#

An element-wise softmax between two or more arrays. Also referred to as the logsumexp() function.

Useful for optimization because it’s differentiable and preserves convexity!

Great writeup by John D Cook here:

https://www.johndcook.com/soft_maximum.pdf

Note: You may provide either hardness or softness, but not both, as they are inverses of each other. If neither is provided, hardness defaults to 1.

Parameters:
  • *args (Union[float, aerosandbox.numpy.ndarray]) – Provide any number of arguments as values to take the softmax of.

  • hardness (float) – Hardness parameter. Higher values make this closer to max(x1, x2).

  • softness (float) –

    Softness parameter. (Inverse of hardness.) Lower values make this closer to max(x1, x2).

    Setting softness is particularly useful because it has the same units as each of the function’s inputs. For example, if you’re taking the softmax of two values that are lengths in units of meters, then softness is also in units of meters. In this case, softness has the rough meaning of “an amount of discrepancy between the input values that would be considered physically significant”.

Returns:

Soft maximum of the supplied values.

Return type:

Union[float, aerosandbox.numpy.ndarray]
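
For illustration, a minimal usage sketch (the array inputs and broadcasting follow ordinary NumPy elementwise semantics; with hardness h, the returned value is (1/h) * log(sum(exp(h * x_i))), i.e. a scaled logsumexp, per the relation noted above):

>>> import numpy as np
>>> from aerosandbox.numpy.surrogate_model_tools import softmax
>>> a = np.linspace(0, 10, 6)
>>> b = 5.0 * np.ones(6)
>>> smooth = softmax(a, b, softness=1.0)   # smooth elementwise max; ~max(a, b) where |a - b| >> 1
>>> sharp = softmax(a, b, hardness=10.0)   # same as softness=0.1: much closer to the true max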

aerosandbox.numpy.surrogate_model_tools.softmin(*args, softness=None, hardness=None)[source]#

An element-wise softmin between two or more arrays. Related to the logsumexp() function.

Useful for optimization because it’s differentiable and preserves convexity!

Great writeup by John D Cook here:

https://www.johndcook.com/soft_maximum.pdf

Note: You may provide either hardness or softness, but not both, as they are inverses of each other. If neither is provided, hardness defaults to 1.

Parameters:
  • *args (Union[float, aerosandbox.numpy.ndarray]) – Provide any number of arguments as values to take the softmin of.

  • hardness (float) – Hardness parameter. Higher values make this closer to min(x1, x2).

  • softness (float) –

    Softness parameter. (Inverse of hardness.) Lower values make this closer to min(x1, x2).

    Setting softness is particularly useful because it has the same units as each of the function’s inputs. For example, if you’re taking the softmin of two values that are lengths in units of meters, then softness is also in units of meters. In this case, softness has the rough meaning of “an amount of discrepancy between the input values that would be considered physically significant”.

Returns:

Soft minimum of the supplied values.

Return type:

Union[float, aerosandbox.numpy.ndarray]
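
A minimal sketch, mirroring the softmax() example above (mathematically, softmin is softmax reflected through negation: softmin(x1, x2, hardness=h) == -softmax(-x1, -x2, hardness=h)):

>>> import numpy as np
>>> from aerosandbox.numpy.surrogate_model_tools import softmin
>>> a = np.linspace(0, 10, 6)
>>> capped = softmin(a, 5.0 * np.ones(6), softness=1.0)   # smoothly caps a at ~5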

aerosandbox.numpy.surrogate_model_tools.softmax_scalefree(*args, relative_softness=None, relative_hardness=None)[source]#
Parameters:
  • *args (Union[float, aerosandbox.numpy.ndarray]) –

  • relative_softness (float) –

  • relative_hardness (float) –

Return type:

Union[float, aerosandbox.numpy.ndarray]

aerosandbox.numpy.surrogate_model_tools.softmin_scalefree(*args, relative_softness=None, relative_hardness=None)[source]#
Parameters:
  • *args (Union[float, aerosandbox.numpy.ndarray]) –

  • relative_softness (float) –

  • relative_hardness (float) –

Return type:

Union[float, aerosandbox.numpy.ndarray]
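
Neither function carries a docstring. From the names and signatures, they appear to behave like softmax() / softmin() with the smoothing specified relative to the scale of the inputs; the sketch below rests on that assumption (in particular, that relative_softness is dimensionless):

>>> import numpy as np
>>> from aerosandbox.numpy.surrogate_model_tools import softmax_scalefree
>>> a = 1e6 * np.linspace(0, 10, 6)
>>> smooth = softmax_scalefree(a, 5e6 * np.ones(6), relative_softness=0.1)   # assumed: same relative smoothing at any input scale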

aerosandbox.numpy.surrogate_model_tools.softplus(x, beta=1, threshold=40)[source]#

A smooth approximation of the ReLU function, applied elementwise to an array x.

Softplus(x) = 1/beta * log(1 + exp(beta * x))

Often used as an activation function in neural networks.

Parameters:
  • x (Union[float, aerosandbox.numpy.ndarray]) – The input

  • beta – A parameter that controls the “softness” of the function. Higher values of beta make the function approach ReLU.

  • threshold – Values above this threshold are approximated as linear.

Returns: The value of the softplus function.
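
For example (values follow directly from the formula above; default beta = 1):

>>> from aerosandbox.numpy.surrogate_model_tools import softplus
>>> softplus(0.0)            # == log(2) ~= 0.693
>>> softplus(100.0)          # ~= 100.0; beta * x exceeds the threshold, so treated as linear
>>> softplus(2.0, beta=50)   # ~= 2.0; large beta approaches ReLU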

aerosandbox.numpy.surrogate_model_tools.sigmoid(x, sigmoid_type='tanh', normalization_range=(0, 1))[source]#
A sigmoid function. From Wikipedia (https://en.wikipedia.org/wiki/Sigmoid_function):

A sigmoid function is a mathematical function having a characteristic “S”-shaped curve or sigmoid curve.

Parameters:
  • x – The input

  • sigmoid_type (str) – Type of sigmoid function to use. Can be one of:
    • “tanh” or “logistic” (same thing)
    • “arctan”
    • “polynomial”

  • normalization_range (Tuple[Union[float, int], Union[float, int]]) –

    Range in which to normalize the sigmoid, shorthanded here in the documentation as “N”. This parameter is given as a two-element tuple (min, max).

    After normalization:
    >>> sigmoid(-Inf) == normalization_range[0]
    >>> sigmoid(Inf) == normalization_range[1]
    
    • In the special case of N = (0, 1):
      >>> sigmoid(-Inf) == 0
      >>> sigmoid(Inf) == 1
      >>> sigmoid(0) == 0.5
      >>> d(sigmoid)/dx at x=0 == 0.5
      
    • In the special case of N = (-1, 1):
      >>> sigmoid(-Inf) == -1
      >>> sigmoid(Inf) == 1
      >>> sigmoid(0) == 0
      >>> d(sigmoid)/dx at x=0 == 1
      

Returns: The value of the sigmoid.
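
For example, exercising the documented special cases:

>>> from aerosandbox.numpy.surrogate_model_tools import sigmoid
>>> sigmoid(0.0)                                # == 0.5 for the default N = (0, 1)
>>> sigmoid(0.0, normalization_range=(-1, 1))   # == 0.0 for N = (-1, 1)
>>> sigmoid(1e9, sigmoid_type="arctan")         # ~= 1.0; approaches normalization_range[1]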

aerosandbox.numpy.surrogate_model_tools.swish(x, beta=1)[source]#

A smooth approximation of the ReLU function, applied elementwise to an array x.

Swish(x) = x / (1 + exp(-beta * x)) = x * logistic(beta * x) = x * (0.5 + 0.5 * tanh(beta * x / 2))

Often used as an activation function in neural networks.

Parameters:
  • x – The input

  • beta – A parameter that controls the “softness” of the function. Higher values of beta make the function approach ReLU.

Returns: The value of the swish function.
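
For example (values follow from the formula above):

>>> from aerosandbox.numpy.surrogate_model_tools import swish
>>> swish(0.0)              # == 0.0
>>> swish(10.0, beta=10)    # ~= 10.0; large beta approaches ReLU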

aerosandbox.numpy.surrogate_model_tools.blend(switch, value_switch_high, value_switch_low)[source]#

Smoothly blends between two values on the basis of some switch function.

This function is similar in usage to numpy.where (documented here: https://numpy.org/doc/stable/reference/generated/numpy.where.html), except that instead of using a boolean to switch between the two values, a float is used to transition smoothly and differentiably between them.

Before using this function, be sure to understand the difference between this and softmax(), and choose the correct one.

Parameters:
  • switch (float) – A value that acts as a “switch” between the two values:
    • If switch is -Inf, value_switch_low is returned.
    • If switch is Inf, value_switch_high is returned.
    • If switch is 0, the mean of value_switch_low and value_switch_high is returned.
    • If switch is 1, the return value is roughly (0.88 * value_switch_high + 0.12 * value_switch_low).
    • If switch is -1, the return value is roughly (0.88 * value_switch_low + 0.12 * value_switch_high).

  • value_switch_high – Value to be returned when switch is high. Can be a float or an array.

  • value_switch_low – Value to be returned when switch is low. Can be a float or an array.

Returns: A value that is a blend between value_switch_low and value_switch_high, with the weighting dependent on the value of the ‘switch’ parameter.
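
For example, using the switch values described above:

>>> from aerosandbox.numpy.surrogate_model_tools import blend
>>> lo, hi = 0.0, 10.0
>>> blend(0, hi, lo)     # == 5.0, the mean of the two values
>>> blend(1, hi, lo)     # ~= 8.8, roughly 0.88 * hi + 0.12 * lo
>>> blend(-20, hi, lo)   # ~= 0.0, effectively fully switched low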