axtreme.utils.transforms¶
This module determines the input and output transformations applied by ax and creates the equivalent botorch transformations.
This allows the user to work directly with the botorch model while using inputs and outputs in the original space (e.g. the problem space).
Todo
It would be nice to be able to return an identity transform so we don’t have to deal with Nones (sw 2024-11-21).
Functions

- ax_to_botorch_transform_input_output: Determines the input and output transforms applied by Ax, and creates the equivalent transforms in botorch.
- check_transform_not_applied: Used to ensure a transform has not been applied.
- translate_cast: Make sure that Cast has not flattened a HierarchicalSearchSpace.
- translate_derelativize: Handle Derelativize (relative constraints to non-relative constraints).
- translate_ivw: Handle IVW (inverse variance weight transform).
- translate_standardisey: Translate ax standardisation into botorch standardisation.
- translate_unitx: Converts a trained UnitX to the botorch equivalent.
- axtreme.utils.transforms.ax_to_botorch_transform_input_output(transforms: list[Transform], outcome_names: list[str]) tuple[InputTransform, OutcomeTransform] ¶
Determines the input and output transforms applied by Ax, and creates the equivalent transforms in botorch.
This allows the botorch model internal to ax (which operates in a standard “model” space) to be used in the problem space (e.g. non-standardised input and output). This is useful when calculating QoIs.
- Parameters:
transforms – the TRAINED transforms that have been applied by ax. - Typically found at TorchModelBridge.transforms.
outcome_names – the order of the output columns used to train the internal ax.Model. - Typically found at TorchModelBridge.outcomes.
- Returns:
A tuple (input_transform, output_transform).
- Return type:
tuple[InputTransform, OutcomeTransform]
- Using them in the following way allows input and output in the outcome/problem space:
Assume model is a trained botorch.models model (such as TorchModelBridge.model.surrogate.model):
> model.posterior(input_transform(X), posterior_transform=output_transform.untransform_posterior)
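The composition above can be illustrated with a plain-Python sketch (no ax or botorch; every function below is a hypothetical stand-in): inputs are mapped INTO model space before prediction, and predictions are mapped back OUT to the problem space afterwards.

```python
# Stand-in for an input transform (e.g. a UnitX-style map from the problem
# space [lower, upper] onto the unit interval [0, 1]).
def input_transform(x, lower=0.0, upper=2.0):
    return (x - lower) / (upper - lower)

# Stand-in for undoing an output transform (e.g. a StandardizeY-style map),
# returning predictions to the problem-space scale.
def untransform_output(y, mean=10.0, std=4.0):
    return y * std + mean

# Stand-in for the trained botorch model, which predicts in model space.
def model(x_model_space):
    return 2.0 * x_model_space

# Problem-space input goes in, problem-space prediction comes out.
x_problem = 1.0
y_problem = untransform_output(model(input_transform(x_problem)))
print(y_problem)  # -> 14.0
```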
Todo
- Ideally ax_to_botorch_transform_conversion would be a config within the root of this module, so it could easily
be extended. This is challenging because the ‘translate_standardisey’ function needs the specific outcome_names of the problem to be passed. This is because ax does not maintain the order of the metrics in the transform itself (it stores the names/order internally; see ax.modelbridge.base.ModelBridge._transform_data for details).
- axtreme.utils.transforms.check_transform_not_applied(transform: Transform, parameter_names_store: str) tuple[InputTransform | None, OutcomeTransform | None] ¶
Used to ensure a transform has not been applied.
Many transforms store an internal list of the ax parameters (input) they should be applied to. Whether a parameter is included is determined by it being of a specific type and having specific attributes, as checked within the transform. This helper function double checks that the transform is not being applied to anything, which means a translation from ax to botorch is not required.
- Parameters:
transform – the transform to check
parameter_names_store – The attribute on the transform that should be empty (falsey) if the transform is not applied to anything.
- Returns:
Input and output transforms required (will be None). Raises an error if the transform has actually been applied.
Note
- This should be instantiated with functools.partial, e.g.
>>> from functools import partial
>>> log_checker = partial(check_transform_not_applied, parameter_names_store="transform_parameters")
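A minimal plain-Python sketch of the behaviour described above (not the real axtreme implementation; FakeLogTransform is a hypothetical stand-in for an ax Transform): the checker looks up the attribute that lists the parameters a transform applies to, and raises if it is non-empty.

```python
from functools import partial

def check_transform_not_applied(transform, parameter_names_store):
    # The named attribute holds the parameters the transform applies to.
    applied_to = getattr(transform, parameter_names_store)
    if applied_to:  # falsey (None / empty list) means "not applied"
        msg = f"{type(transform).__name__} was applied to {applied_to!r}"
        raise ValueError(msg)
    # No botorch input/output transform is required.
    return (None, None)

class FakeLogTransform:  # stand-in for an ax Transform
    transform_parameters = []

log_checker = partial(check_transform_not_applied, parameter_names_store="transform_parameters")
print(log_checker(FakeLogTransform()))  # -> (None, None)
```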
- axtreme.utils.transforms.translate_cast(transform: Cast) tuple[InputTransform | None, OutcomeTransform | None] ¶
Make sure that Cast has not flattened a HierarchicalSearchSpace.
- Cast changes the parameter (e.g. RangeParameter), casting the VALUE to the data type it should be.
e.g. RangeParameter values should be floats, so Cast ensures each value is a float.
- It also deals with HierarchicalSearchSpace (essentially a tree that navigates you to a more specific search space,
e.g. if ‘parameter_a’ > 2 -> use SearchSpace1).
The .flatten_hss flag indicates if this has been used.
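A hypothetical sketch of the flatten_hss check described above (FakeCast is a stand-in, not the real ax Cast transform):

```python
def translate_cast(transform):
    # If the flag is set, a HierarchicalSearchSpace was flattened and there
    # is no botorch equivalent to construct, so refuse to translate.
    if getattr(transform, "flatten_hss", False):
        raise NotImplementedError("HierarchicalSearchSpace flattening is not supported")
    # Otherwise Cast only enforces value dtypes; no botorch transform needed.
    return (None, None)

class FakeCast:  # stand-in for an ax Cast transform
    flatten_hss = False

print(translate_cast(FakeCast()))  # -> (None, None)
```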
- axtreme.utils.transforms.translate_derelativize(transform: Derelativize) tuple[InputTransform | None, OutcomeTransform | None] ¶
Handle Derelativize (relative constraints to non-relative constraints).
Derelativize transforms optimisation configs and untransforms constraints. As we are only interested in input and output transformations, this can be ignored.
Todo
Is there a safer way to ensure this is not being used? Difficult because it doesn’t store anything internally
Todo
This needs some additional work.
- axtreme.utils.transforms.translate_ivw(transform: IVW) tuple[InputTransform | None, OutcomeTransform | None] ¶
Handle IVW (inverse variance weight transform).
IVW is used when, at the same location (x), there are multiple measurements of the same metric. It combines these into a single measurement of the metric and passes this on to the botorch model for training. As this is only used during training (transforming the y input to the model), we can ignore it, because we currently use these transforms for prediction only.
Note
It is hard to tell if this transformation has been applied because no attributes are stored on the object.
Todo
Check if botorch supports multiple measurements of a single metric at a single point (suspect not). If not, it is reasonable to ignore this transformation, as a standard botorch model shouldn’t be used in that way.
- axtreme.utils.transforms.translate_standardisey(transform: StandardizeY, col_names: list[str]) tuple[InputTransform | None, OutcomeTransform | None] ¶
Translate ax standardisation into botorch standardisation.
Note
Ax does not maintain the order of the metrics, so the order needs to be passed explicitly.
Todo
Can there be some workaround for this? It would be good not to have to pass constraints.
Note
- col_names should be passed using functools.partial
>>> from functools import partial
>>> standardise_y = partial(translate_standardisey, col_names=["loc", "scale"])
- Parameters:
transform – StandardizeY to translate
col_names – the order of the columns in the data being passed in. This is required so the correct transformation can be applied to the correct column.
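A plain-Python sketch of why col_names matters (ordered_standardise_params is a hypothetical helper, not the real implementation): the per-metric standardisation parameters are stored by name, so they must be re-ordered to match the column order the model was trained with.

```python
def ordered_standardise_params(means, stds, col_names):
    # means/stds: dicts keyed by metric name (mimicking StandardizeY's
    # internal per-metric storage). Rebuild them in column order.
    ordered_means = [means[name] for name in col_names]
    ordered_stds = [stds[name] for name in col_names]
    return ordered_means, ordered_stds

means = {"loc": 1.0, "scale": 2.0}
stds = {"loc": 0.5, "scale": 0.1}
print(ordered_standardise_params(means, stds, ["loc", "scale"]))
# -> ([1.0, 2.0], [0.5, 0.1])
```

Passing a different col_names order yields differently ordered parameters, which is exactly why the order must come from the caller rather than from the transform itself.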
- axtreme.utils.transforms.translate_unitx(transform: UnitX) tuple[InputTransform | None, OutcomeTransform | None] ¶
Converts a trained UnitX to the botorch equivalent.
Ax bounds look like this: {'x1': (0.0, 2.0), 'x2': (0.0, 3.0)}
BoTorch bounds look like: tensor([[0., 0.], [2., 3.]])
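The bounds conversion above can be sketched in plain Python (ax_bounds_to_botorch is a hypothetical helper; the real translation also constructs the botorch input transform from these bounds). Ax stores per-parameter (lower, upper) tuples in a dict, while botorch expects a 2 x d array with row 0 holding the lower bounds and row 1 the upper bounds, in a fixed parameter order.

```python
def ax_bounds_to_botorch(bounds, parameter_order):
    # Row 0: lower bounds, row 1: upper bounds, columns in parameter_order.
    lowers = [bounds[name][0] for name in parameter_order]
    uppers = [bounds[name][1] for name in parameter_order]
    return [lowers, uppers]

ax_bounds = {"x1": (0.0, 2.0), "x2": (0.0, 3.0)}
print(ax_bounds_to_botorch(ax_bounds, ["x1", "x2"]))
# -> [[0.0, 0.0], [2.0, 3.0]]
```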