Evaluation Metrics
Evaluation metrics measure the performance of a learned model. They are typically used during training to monitor performance on the validation set.
# MXNet.mx.ACE
— Type.
ACE
Calculates the averaged cross-entropy (log-loss) for classification.
Arguments:

eps::Float64
: Prevents returning `Inf` if `p == 0`.
source
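As a rough illustration of the formula (a minimal sketch with assumed names, not MXNet.jl's internal implementation), averaged cross-entropy over integer class labels can be computed like this, with `eps` guarding against `log(0)`:

```julia
# `probs` holds one column of class probabilities per sample;
# `labels[i]` is the true class index for sample i.
function averaged_cross_entropy(labels::Vector{Int}, probs::Matrix{Float64};
                                eps::Float64 = 1e-12)
    n = length(labels)
    total = 0.0
    for i in 1:n
        p = probs[labels[i], i]  # predicted probability of the true class
        total -= log(p + eps)    # eps prevents returning Inf when p == 0
    end
    return total / n
end
```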
# MXNet.mx.AbstractEvalMetric
— Type.
AbstractEvalMetric
The base type for all evaluation metrics. Subtypes should implement the interfaces documented below (`update!`, `reset!`, and `get`).
source
# MXNet.mx.Accuracy
— Type.
Accuracy
Multiclass classification accuracy.
Calculates the mean accuracy per sample for softmax in one dimension. For a multidimensional softmax the mean accuracy over all dimensions is calculated.
source
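For the one-dimensional softmax case, the computation amounts to the following sketch (assumed names, not MXNet.jl's code): predictions are probability columns, one per sample, and the predicted class is the row index with the highest probability.

```julia
function accuracy(labels::Vector{Int}, probs::Matrix{Float64})
    # argmax over each column picks the predicted class per sample
    preds = [argmax(probs[:, i]) for i in 1:size(probs, 2)]
    return count(preds .== labels) / length(labels)
end
```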
# MXNet.mx.MSE
— Type.
MSE
Mean Squared Error.
Calculates the mean squared error regression loss. Requires that label and prediction have the same shape.
source
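A minimal sketch of the computation (hypothetical helper, not MXNet.jl's implementation); as the docstring notes, label and prediction must have the same shape:

```julia
function mse(labels::AbstractArray, preds::AbstractArray)
    size(labels) == size(preds) ||
        throw(DimensionMismatch("label and prediction shapes must match"))
    return sum(abs2, preds .- labels) / length(labels)
end
```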
# MXNet.mx.MultiACE
— Type.
MultiACE
Calculates the averaged crossentropy per class and overall (see ACE
). This can be used to quantify the influence of different classes on the overall loss.
source
# MXNet.mx.MultiMetric
— Type.
MultiMetric(metrics::Vector{AbstractEvalMetric})
Combine multiple metrics in one and get a result for all of them.
Usage
To calculate both accuracy (`Accuracy`) and log-loss (`ACE`):
mx.fit(..., eval_metric = mx.MultiMetric([mx.Accuracy(), mx.ACE()]))
source
# MXNet.mx.NMSE
— Type.
NMSE
Normalized Mean Squared Error
Note that there are various ways to do the normalization, and the right choice depends on your context, so judge your problem setting first. If the current implementation does not suit your needs, feel free to file an issue on GitHub.
Let me show you a use case for this kind of normalization:
Bob is training a network for option pricing, a regression problem (price prediction). There are lots of option contracts on the same underlying stock but with different strike prices. For example, there is a stock S; its market price is 1000, and there are two call option contracts with different strike prices. Assume Bob obtains the outcomes in the following table:
|      | Strike Price | Market Price | Pred Price |
|:-----|:-------------|:-------------|:-----------|
| Op 1 | 1500         | 100          | 80         |
| Op 2 | 500          | 10           | 8          |
Now, obviously, Bob will calculate the normalized MSE as ((100 - 80)/100)^2 = ((10 - 8)/10)^2 = 0.04: both of the predicted prices have the same degree of error.
For more discussion about normalized MSE, please also see issue #211.
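The per-sample normalization used in the example above can be checked in a couple of lines (the division by the label is the normalization assumed here; other schemes exist, as noted):

```julia
# Squared error divided by the squared label (market price)
nmse_term(label, pred) = ((pred - label) / label)^2

nmse_term(100.0, 80.0)  # Op 1
nmse_term(10.0, 8.0)    # Op 2: the same relative error, 0.04
```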
source
# MXNet.mx.SeqMetric
— Type.
SeqMetric(metrics::Vector{AbstractEvalMetric})
Apply a different metric to each output. This is especially useful for mx.Group
.
Usage
Calculate accuracy Accuracy
for the first output and logloss ACE
for the second output:
mx.fit(..., eval_metric = mx.SeqMetric([mx.Accuracy(), mx.ACE()]))
source
# MXNet.mx.NullMetric
— Type.
NullMetric()
A metric that calculates nothing. Can be used to ignore an output during training.
source
# Base.get
— Method.
get(metric)
Get the accumulated metrics.
Returns `Vector{Tuple{Base.Symbol, Real}}`, a list of name-value pairs. For example, `[(:accuracy, 0.9)]`.
source
# MXNet.mx.hasNDArraySupport
— Method.
hasNDArraySupport(metric) -> Val{true/false}
Trait for `_update_single_output`: should return `Val{true}()` if the metric can handle `NDArray` directly, and `Val{false}()` if it requires `Array`. Metrics that work with NDArrays can be asynchronous, while native Julia arrays require that we copy the output of the network, which is a blocking operation.
source
# MXNet.mx.reset!
— Method.
reset!(metric)
Reset the accumulation counter.
source
# MXNet.mx.update!
— Method.
update!(metric, labels, preds)
Update and accumulate metrics.
Arguments:

metric::AbstractEvalMetric
: the metric object. 
labels::Vector{NDArray}
: the labels from the data provider. 
preds::Vector{NDArray}
: the outputs (predictions) of the network.
source