Added missing citations

Signed-off-by: Jim Martens <github@2martens.de>
2019-09-09 14:14:11 +02:00
parent d165299df8
commit 22d328e959
2 changed files with 65 additions and 5 deletions


@@ -188,9 +188,9 @@ allows a probabilistic interpretation. Pidhorskyi et al.~\cite{Pidhorskyi2018}
combine a probabilistic approach to novelty detection with auto-encoders.
Distance-based novelty detection uses either nearest neighbour-based approaches
-(e.g. ) %TODO citations)
+(e.g. \(k\)-nearest neighbour \cite{Hautamaki2004})
or clustering-based approaches
-(e.g. ). % TODO citations
+(e.g. the \(k\)-means clustering algorithm \cite{Jordan1994}).
Both methods are similar to estimating the
pdf of the data in that they use well-defined distance metrics to compute the distance
between two data points.
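The nearest neighbour-based idea described above can be sketched in a few lines: score a point by its distance to the \(k\)-th nearest training point, so that points far from all known data receive high novelty scores. This is a hypothetical, minimal illustration (function names and data are invented, not taken from the cited work):

```python
import math

def knn_novelty_score(train, x, k=3):
    """Distance to the k-th nearest training point; larger = more novel."""
    dists = sorted(math.dist(x, t) for t in train)
    return dists[k - 1]

# Normal data clustered near the origin; a far-away point scores higher.
train = [(0.0, 0.0), (0.1, 0.2), (-0.1, 0.1), (0.2, -0.1), (0.0, 0.3)]
print(knn_novelty_score(train, (0.05, 0.05)))  # small score: inside the known region
print(knn_novelty_score(train, (5.0, 5.0)))    # large score: novel
```

Thresholding this score yields a binary novel/known decision; the choice of \(k\) and of the distance metric is what distinguishes concrete methods.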
@@ -198,7 +198,7 @@ between two data points.
Domain-based novelty detection describes the boundary of the known data rather
than the data itself. Unknown data is identified by its position relative to
the boundary. A common implementation of this approach is the support vector machine
-(e.g. implemented by ). % TODO citations
+(e.g. implemented by Song et al. \cite{Song2002}).
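The domain-based principle can be illustrated with a crude stand-in for a learned boundary: enclose the training data in a sphere and flag everything outside it. This is a hypothetical sketch of the boundary idea only, not the SVM formulation used in the cited work:

```python
import math

def fit_boundary(train):
    """Spherical boundary around the known data: centre = mean of the
    points, radius = largest distance from the centre to any point."""
    n, dim = len(train), len(train[0])
    centre = tuple(sum(p[i] for p in train) / n for i in range(dim))
    radius = max(math.dist(centre, p) for p in train)
    return centre, radius

def is_novel(boundary, x):
    """A point outside the boundary is classified as novel."""
    centre, radius = boundary
    return math.dist(centre, x) > radius

boundary = fit_boundary([(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)])
print(is_novel(boundary, (0.5, 0.5)))  # False: inside the known region
print(is_novel(boundary, (3.0, 3.0)))  # True: outside the boundary
```

Support vector machines replace this rigid sphere with a flexible, kernel-defined boundary, but the decision rule is the same: novelty is membership outside the learned region.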
Information-theoretic novelty detection computes the information content
of a data set, for example, with metrics like entropy. Such metrics assume
@@ -206,8 +206,8 @@ that novel data inside the data set significantly alters the information
content of an otherwise normal data set. First, the metrics are calculated over the
whole data set. Afterwards, a subset is identified that causes the biggest
difference in the metric when removed from the data set. This subset is considered
-to consist of novel data. For example, xyz provide a recent approach.
-% TODO citations
+to consist of novel data. For example, Filippone and Sanguinetti \cite{Filippone2011} provide
+a recent approach.
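The procedure described above (compute a metric over the whole set, then find the subset whose removal changes it most) can be sketched with Shannon entropy over a discrete data set. This is a hypothetical toy illustration, not the cited method; the entropy drop is normalised per removed point so that removing a small anomalous subset outweighs removing half the data:

```python
import math
from collections import Counter

def entropy(data):
    """Shannon entropy of the value distribution in a data set."""
    counts, n = Counter(data), len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def most_novel_value(data):
    """Value whose removal yields the biggest entropy drop per removed
    point -- that subset is considered to consist of novel data."""
    base, counts = entropy(data), Counter(data)
    def drop_per_point(v):
        return (base - entropy([d for d in data if d != v])) / counts[v]
    return max(counts, key=drop_per_point)

data = ["a"] * 50 + ["b"] * 48 + ["z"] * 2  # "z" is the rare, novel value
print(most_novel_value(data))
```

In this toy example the two "z" items inflate the entropy of an otherwise near-binary data set, so their removal gives the largest per-point change in the metric.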
\subsection{Reconstruction-based novelty detection}