Added part about metrics
Signed-off-by: Jim Martens <github@2martens.de>
@@ -393,6 +393,20 @@ performance and computational performance under open set conditions
using the SceneNet RGB-D data set with the MS COCO classes as
"known" object classes.

The computational performance is measured by the time in milliseconds
that every test run takes. Of interest are not the absolute numbers,
as these vary from machine to machine and are influenced by a
plethora of uncontrollable factors, but the relative difference
between the two approaches and whether that difference is significant.

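As an illustration of how this comparison could be evaluated, the
following sketch takes the paired per-run times of both approaches and
reports the relative difference of their means together with the
\(p\)-value of a Wilcoxon signed-rank test; the function name and the
choice of test are assumptions for illustration, not part of the
experimental setup.

\begin{verbatim}
import numpy as np
from scipy.stats import wilcoxon

def compare_run_times(times_a_ms, times_b_ms):
    """Compare paired per-run times (in milliseconds) of two approaches.

    Returns the relative difference of the mean run times and the
    p-value of a Wilcoxon signed-rank test on the paired runs.
    """
    times_a = np.asarray(times_a_ms, dtype=float)
    times_b = np.asarray(times_b_ms, dtype=float)
    # relative difference instead of absolute numbers, since the latter
    # depend on the machine and other uncontrollable factors
    rel_diff = (times_a.mean() - times_b.mean()) / times_b.mean()
    # paired significance test on the per-run times
    _, p_value = wilcoxon(times_a, times_b)
    return rel_diff, p_value
\end{verbatim}
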
Object detection performance is measured by precision, recall,
F1-score, and an open set error. While the first three metrics are
standard, the last is adapted from Miller et al. It is defined
as the number of observations (for dropout sampling) or detections
(for GPND) that pass the respective false positive test (entropy or
novelty), fall on unknown objects (there are no overlapping ground
truth objects with IoU \(\geq 0.5\) and a known true class label),
and do not have a winning class label of "unknown".

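For reference, precision, recall, and F1-score follow their usual
definitions,
\[
\text{precision} = \frac{TP}{TP + FP}, \qquad
\text{recall} = \frac{TP}{TP + FN}, \qquad
F_1 = 2 \cdot \frac{\text{precision} \cdot \text{recall}}
                   {\text{precision} + \text{recall}},
\]
and the open set error could be counted per image as in the following
sketch; the function and variable names are illustrative assumptions
and do not refer to either implementation.

\begin{verbatim}
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def count_open_set_errors(candidates, known_gt_boxes, iou_threshold=0.5):
    """Count open set errors among the candidates of one image.

    candidates: (box, winning_label) pairs that already passed the
    respective false positive test (entropy or novelty).
    known_gt_boxes: ground truth boxes with a known true class label.
    """
    errors = 0
    for box, winning_label in candidates:
        # the candidate falls on an unknown object: no ground truth box
        # with a known true class label overlaps it with IoU >= 0.5
        on_unknown = all(iou(box, gt) < iou_threshold
                         for gt in known_gt_boxes)
        # and its winning class label is not "unknown"
        if on_unknown and winning_label != "unknown":
            errors += 1
    return errors
\end{verbatim}
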
\subsection*{Technical Contribution}

\chapter{Thesis as a project}