LocalMetadataMetric¶
- class axtreme.metrics.LocalMetadataMetric(name: str, lower_is_better: bool | None = None, properties: Dict[str, Any] | None = None)¶
Bases: Metric
This metric retrieves its results from the trial metadata.
Simple examples run the simulation within this function call itself. In general, however, this method should only ‘fetch’ results from somewhere else where they have already been run. For example, a Runner deploys a simulation on a remote machine, and the metric connects to that machine and collects the result. This class is a local implementation of that pattern, where results are stored on the trial metadata.
This is useful when:
- Running a single simulation produces multiple output metrics (meaning the simulation doesn’t need to be run as many times).
- You want to execute the simulation when trial.run() is called.
Note
This object is coupled with LocalMetadataRunner through SimulationPointResults.
Background:
Flow:
- trial.run() is called; internally this calls the runner and puts the resulting data into the trial metadata.
- Later, Metric.fetch_trial_data(trial) is called.

Therefore, the Metric has access to all the “bookkeeping” trial information directly; the only thing that needs to be stored in the metadata is the run result.
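The following minimal sketch illustrates this runner/metric split using only the Ax base classes. The runner class InMemorySimRunner, the metadata key "simulation_results", and the output names "loc"/"scale" are illustrative assumptions; they are not axtreme's actual LocalMetadataRunner or SimulationPointResults API.

```python
from typing import Any

import pandas as pd
from ax.core.base_trial import BaseTrial
from ax.core.data import Data
from ax.core.metric import Metric, MetricFetchE, MetricFetchResult
from ax.core.runner import Runner
from ax.utils.common.result import Err, Ok


class InMemorySimRunner(Runner):
    """Hypothetical runner: runs the simulation locally in run() and returns
    its outputs as run metadata, which Ax stores on the trial."""

    def run(self, trial: BaseTrial) -> dict[str, Any]:
        results = {}
        for arm in trial.arms:
            # Placeholder for the real simulation; a single run can yield
            # several named outputs.
            x = sum(float(v) for v in arm.parameters.values())
            results[arm.name] = {"loc": x, "scale": abs(x) + 1.0}
        # Everything returned here becomes trial.run_metadata.
        return {"simulation_results": results}


class MetadataBackedMetric(Metric):
    """Hypothetical metric: reads one named output back out of the metadata
    the runner stored, instead of re-running the simulation."""

    def fetch_trial_data(self, trial: BaseTrial, **kwargs: Any) -> MetricFetchResult:
        try:
            stored = trial.run_metadata["simulation_results"]
            records = [
                {
                    "arm_name": arm_name,
                    "metric_name": self.name,
                    "mean": outputs[self.name],
                    "sem": float("nan"),
                    "trial_index": trial.index,
                }
                for arm_name, outputs in stored.items()
            ]
            return Ok(value=Data(df=pd.DataFrame.from_records(records)))
        except Exception as e:  # noqa: BLE001
            return Err(
                MetricFetchE(message=f"Failed to fetch {self.name}", exception=e)
            )
```

Because the runner returns all outputs at once, several such metrics (e.g. one per output name) can be attached to the experiment without triggering additional simulation runs.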
- __init__(name: str, lower_is_better: bool | None = None, properties: Dict[str, Any] | None = None) → None¶
Inits Metric.
- Parameters:
name – Name of metric.
lower_is_better – Flag for metrics which should be minimized.
properties – Dictionary of this metric’s properties
Methods

- __init__(name[, lower_is_better, properties]): Inits Metric.
- bulk_fetch_experiment_data(experiment, metrics): Fetch multiple metrics data for multiple trials on an experiment, using instance attributes of the metrics.
- bulk_fetch_trial_data(trial, metrics, **kwargs): Fetch multiple metrics data for one trial, using instance attributes of the metrics.
- clone(): Create a copy of this Metric.
- deserialize_init_args(args[, ...]): Given a dictionary, deserialize the properties needed to initialize the object.
- fetch_data_prefer_lookup(experiment, metrics): Fetch or lookup (with fallback to fetching) data for given metrics, depending on whether they are available while running.
- fetch_experiment_data_multi(experiment, metrics): Fetch multiple metrics data for an experiment.
- fetch_trial_data(trial, **kwargs): Fetch data for one trial.
- fetch_trial_data_multi(trial, metrics, **kwargs): Fetch multiple metrics data for one trial.
- is_available_while_running(): Whether metrics of this class are available while the trial is running.
- maybe_raise_deprecation_warning_on_class_methods()
- period_of_new_data_after_trial_completion(): Period of time metrics of this class are still expecting new data to arrive after trial completion.
- serialize_init_args(obj): Serialize the properties needed to initialize the object.
Attributes

- db_id
- fetch_multi_group_by_metric: Metric class with which to group this metric in Experiment._metrics_by_class, which is used to combine metrics on an experiment into groups and then fetch their data via Metric.fetch_trial_data_multi for each group.
- name: Get name of metric.
- summary_dict: Returns a dictionary containing the metric's name and properties.
- fetch_trial_data(trial: BaseTrial, **kwargs: Any) → Result[Data, MetricFetchE]¶
Fetch data for one trial.
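As a hedged usage sketch, continuing the hypothetical classes above: once the trial has been run (so the runner has populated trial.run_metadata), the metric can fetch without touching the simulation again. The experiment setup and the unwrap() call are assumptions based on the standard Ax Result interface, not a documented axtreme workflow.

```python
# Hypothetical usage: trial.run() invokes the attached runner, which stores
# the simulation output in trial.run_metadata; the metric then reads it back.
trial = experiment.new_trial(generator_run=generator_run)
trial.run()
trial.mark_completed()

result = MetadataBackedMetric(name="loc").fetch_trial_data(trial)
data = result.unwrap()  # assumed: Ok.unwrap() returns the Data, Err.unwrap() raises
print(data.df[["arm_name", "metric_name", "mean"]])
```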