zero.metrics¶
Tiny ecosystem for metrics.
TL;DR: with this module, evaluation looks like this:
metrics = metric_fn.calculate_iter(map(predict_batch, val_loader))
In order to create your own metric, inherit from Metric and implement its interface
(see Metric’s docs for examples). The API throughout the module intentionally follows
that of ignite.metrics, so Ignite metrics are supported almost everywhere
Metric is supported. To give Ignite metrics the full functionality of Metric,
use IgniteMetric.
Warning
Distributed settings are not supported out-of-the-box. In such cases, you have the following options:

- wrap a metric from Ignite in IgniteMetric
- use ignite.metrics.metric.sync_all_reduce and ignite.metrics.metric.reinit__is_reduced
- manually take care of everything
Metric¶
class zero.metrics.Metric[source]¶

The base class for metrics.
In order to create your own metric, inherit from this class and implement all methods marked with
@abstractmethod. High-level functionality (Metric.calculate, Metric.calculate_iter) is already implemented.

Tutorial
import torch
import zero

class Accuracy(Metric):
    def __init__(self):
        self.reset()

    def reset(self):
        self.n_objects = 0
        self.n_correct = 0

    def update(self, y_pred, y):
        self.n_objects += len(y)
        self.n_correct += (y_pred == y).sum().item()

    def compute(self):
        assert self.n_objects
        return self.n_correct / self.n_objects

metric_fn = Accuracy()
y_pred = torch.tensor([0, 0, 0, 0])
y = torch.tensor([0, 1, 0, 1])
assert metric_fn.calculate(y_pred, y) == 0.5

y = torch.randint(2, size=(10,))
X = torch.randn(len(y), 3)
batches = zero.iter_batches((X, y), batch_size=2)

def perfect_prediction(batch):
    X, y = batch
    y_pred = y
    return y_pred, y

score = metric_fn.calculate_iter(map(perfect_prediction, batches), star=True)
assert score == 1.0
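The high-level methods can be understood through a minimal pure-Python sketch of their assumed semantics (inferred from the tutorial above; the real implementation lives in zero.metrics.Metric, and the Mean metric here is hypothetical): calculate is reset, one update, compute; calculate_iter is reset, one update per item, compute.

```python
class Mean:
    """A toy running-mean metric following the reset/update/compute interface
    (hypothetical, for illustration only)."""

    def __init__(self):
        self.reset()

    def reset(self):
        self.total = 0.0
        self.count = 0

    def update(self, value):
        self.total += value
        self.count += 1

    def compute(self):
        assert self.count
        return self.total / self.count

    def calculate(self, *args):
        # reset -> one update -> compute
        self.reset()
        self.update(*args)
        return self.compute()

    def calculate_iter(self, iterable, star=False):
        # reset -> one update per item -> compute
        # star=True unpacks each item into update's arguments
        self.reset()
        for item in iterable:
            if star:
                self.update(*item)
            else:
                self.update(item)
        return self.compute()


metric_fn = Mean()
assert metric_fn.calculate(4.0) == 4.0
assert metric_fn.calculate_iter([1.0, 2.0, 3.0]) == 2.0
```

This also shows why the tutorial passes star=True to calculate_iter: each batch result is a tuple that must be unpacked into update's arguments.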
reset
  Reset the metric’s state.
update
  Update the metric’s state.
compute
  Compute the metric.
calculate
  Calculate the metric for a single input.
calculate_iter
  Calculate the metric for an iterable.
MetricsDict¶
class zero.metrics.MetricsDict(metrics)[source]¶

Dictionary for metrics.
The container is suitable when all metrics take input in the same form.
- Parameters
  metrics – the dictionary of metrics
Examples
metric_fn = MetricsDict({'first': FirstMetric(), 'second': SecondMetric()})
Tutorial
from ignite.metrics import Precision

class MyMetric(Metric):
    ...

a = MyMetric()
b = IgniteMetric(Precision())
metric_fn = MetricsDict({'a': a, 'b': b})
metric_fn.reset()      # reset all metrics
metric_fn.update(...)  # update all metrics
metric_fn.compute()    # {'a': <my metric>, 'b': <precision>}
assert metric_fn['a'] is a and metric_fn['b'] is b
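The container's behavior can be sketched in pure Python (assumed semantics; SimpleMetricsDict and the Count metric are hypothetical names for illustration): reset and update are broadcast to every underlying metric, and compute collects the results into a dictionary under the same keys.

```python
class Count:
    """A toy metric counting how many times update was called
    (hypothetical, for illustration only)."""

    def __init__(self):
        self.reset()

    def reset(self):
        self.n = 0

    def update(self, *args):
        self.n += 1

    def compute(self):
        return self.n


class SimpleMetricsDict:
    """A sketch of the dictionary-of-metrics pattern: broadcast
    reset/update to all metrics; compute returns a dict of results."""

    def __init__(self, metrics):
        self._metrics = dict(metrics)

    def __getitem__(self, key):
        return self._metrics[key]

    def reset(self):
        for metric in self._metrics.values():
            metric.reset()

    def update(self, *args):
        for metric in self._metrics.values():
            metric.update(*args)

    def compute(self):
        return {key: metric.compute() for key, metric in self._metrics.items()}


metric_fn = SimpleMetricsDict({'a': Count(), 'b': Count()})
metric_fn.reset()
metric_fn.update('some batch')
assert metric_fn.compute() == {'a': 1, 'b': 1}
```

Because every metric receives the same arguments, the pattern only works when all metrics take input in the same form, which is exactly the restriction stated above.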
reset
  Reset all underlying metrics.
update
  Update all underlying metrics.
compute
  Compute the results.
__getitem__
  Access a metric by key.
IgniteMetric¶
class zero.metrics.IgniteMetric(ignite_metric)[source]¶

Wrapper for metrics from ignite.metrics.

- Parameters
  ignite_metric (ignite.metrics.Metric) – the metric to wrap
Examples
from ignite.metrics import Precision

metric_fn = IgniteMetric(Precision())
metric_fn.calculate(...)
metric_fn.calculate_iter(...)
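The wrapper is an adapter: it forwards reset/update/compute to the wrapped metric and adds the high-level methods on top. A minimal pure-Python sketch of that pattern (assumed semantics; SimpleWrapper and ExternalPrecisionLike are hypothetical names, with the latter standing in for a third-party metric such as ignite.metrics.Precision):

```python
class ExternalPrecisionLike:
    """Stand-in for a third-party metric exposing only
    reset/update/compute (hypothetical, for illustration only)."""

    def reset(self):
        self.true_pos = 0
        self.pred_pos = 0

    def update(self, output):
        y_pred, y = output
        self.true_pos += sum(p == t == 1 for p, t in zip(y_pred, y))
        self.pred_pos += sum(p == 1 for p in y_pred)

    def compute(self):
        return self.true_pos / self.pred_pos if self.pred_pos else 0.0


class SimpleWrapper:
    """Adapter sketch: forward reset/update/compute to the wrapped
    metric and add a high-level calculate on top."""

    def __init__(self, metric):
        self._metric = metric

    @property
    def metric(self):
        # expose the underlying metric
        return self._metric

    def reset(self):
        self._metric.reset()

    def update(self, output):
        self._metric.update(output)

    def compute(self):
        return self._metric.compute()

    def calculate(self, y_pred, y):
        # reset -> one update -> compute
        self.reset()
        self.update((y_pred, y))
        return self.compute()


metric_fn = SimpleWrapper(ExternalPrecisionLike())
assert metric_fn.calculate([1, 1, 0, 0], [1, 0, 0, 1]) == 0.5
```

The same forwarding idea is what lets wrapped Ignite metrics participate anywhere a Metric is expected, including inside MetricsDict.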
metric
  Get the underlying metric.
reset
  Reset the underlying metric.
update
  Update the underlying metric.
compute
  Compute the result.