Added another section to allow covering the GPND in detail
Signed-off-by: Jim Martens <github@2martens.de>
parent 9a1bd269fd
commit 6a64b736cf
detections for unknown object classes have a higher label
uncertainty. A threshold on the entropy \(H(\mathbf{q}_i)\) can then
be used to identify and reject these false positive cases.
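To make the rejection rule concrete, the following is a minimal sketch of such an entropy threshold. The function names (`entropy`, `reject_uncertain`) and the threshold value are illustrative choices, not taken from the source; the idea is only that a near-uniform class distribution \(\mathbf{q}_i\) has high entropy and is discarded.

```python
import numpy as np

def entropy(q):
    """Shannon entropy (in nats) of a categorical distribution q."""
    q = np.asarray(q, dtype=float)
    return -np.sum(q * np.log(q + 1e-12))  # small epsilon avoids log(0)

def reject_uncertain(detections, threshold):
    """Keep only detections whose class distribution falls below the entropy threshold."""
    return [q for q in detections if entropy(q) < threshold]

confident = np.array([0.97, 0.01, 0.01, 0.01])  # near one-hot: low entropy
uncertain = np.array([0.25, 0.25, 0.25, 0.25])  # uniform: maximal entropy
kept = reject_uncertain([confident, uncertain], threshold=0.5)
```

For four classes the maximal entropy is \(\log 4 \approx 1.39\) nats, so a threshold of 0.5 rejects the uniform case while keeping the confident one.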
\section{Generative Probabilistic Novelty Detection}
For the theoretical underpinning of Generative Probabilistic
Novelty Detection (GPND), the reader is referred to the paper by
Pidhorskyi et al.~\cite{Pidhorskyi2018}. This section only
covers the key aspects of the adversarial auto-encoder required
to understand their method.
% TODO Write about GPND in understandable terms
\section{Adversarial Auto-encoder}
This section explains the adversarial auto-encoder used by
Pidhorskyi et al.~\cite{Pidhorskyi2018}, in a slightly modified
form to make it easier to follow.
The training data points \(x_i \in \mathbb{R}^m\) are the input
of the auto-encoder. An encoding function
\(e: \mathbb{R}^m \rightarrow \mathbb{R}^n\) maps each data point
to a latent representation \(z_i = e(x_i) \in \mathbb{R}^n\).
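As an illustration of the dimensions involved, here is a minimal linear sketch of an encoder/decoder pair. This is only a toy under stated assumptions: the actual adversarial auto-encoder uses trained neural networks and an adversarial objective, and the weight matrices and dimensions below are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 8, 2  # input dimension m and latent dimension n (illustrative values)

# Hypothetical linear encoder e: R^m -> R^n and decoder g: R^n -> R^m
W_e = rng.normal(size=(n, m))
W_g = rng.normal(size=(m, n))

def encode(x):
    return W_e @ x  # latent code z in R^n

def decode(z):
    return W_g @ z  # reconstruction x_hat in R^m

x = rng.normal(size=m)        # one training data point x_i in R^m
z = encode(x)                 # its latent representation
x_hat = decode(z)             # its reconstruction
reconstruction_error = np.linalg.norm(x - x_hat)
```

The reconstruction error shows why the latent dimension matters: with \(n < m\), the encoder is forced to compress, and points unlike the training data tend to reconstruct poorly.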