Removed dummy discussion
Signed-off-by: Jim Martens <github@2martens.de>
\label{chap:discussion}
To recap, the hypothesis is restated here:
\begin{description}
\item[Hypothesis] Novelty detection using auto-encoders delivers similar or better object detection performance under open set conditions while being computationally less expensive than dropout sampling.
\end{description}
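The novelty-detection half of this hypothesis can be illustrated with a minimal sketch, which is not the thesis implementation: an auto-encoder is trained on the known classes, and a sample is flagged as novel when its reconstruction error exceeds a threshold. The \texttt{reconstruct} stand-in below is a toy function chosen so the example is self-contained; a trained encoder/decoder would take its place.

```python
import numpy as np

def reconstruction_error(x, reconstruct):
    """Per-sample mean squared reconstruction error."""
    x_hat = reconstruct(x)
    return np.mean((x - x_hat) ** 2, axis=1)

def is_novel(x, reconstruct, threshold):
    """Flag samples whose reconstruction error exceeds the threshold."""
    return reconstruction_error(x, reconstruct) > threshold

# Toy stand-in for a trained auto-encoder: reconstructs in-distribution
# samples (values in [-1, 1]) exactly, but fails on out-of-distribution ones.
reconstruct = lambda x: np.clip(x, -1.0, 1.0)

known = np.array([[0.1, -0.2], [0.3, 0.0]])  # in-distribution samples
novel = np.array([[5.0, 4.0]])               # out-of-distribution sample

print(is_novel(known, reconstruct, threshold=0.5))  # -> [False False]
print(is_novel(novel, reconstruct, threshold=0.5))  # -> [ True]
```

The threshold value here is arbitrary; in practice it would be calibrated on held-out in-distribution data.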
Based on the reported results, no clear answer can be given to the research question; rather, new questions emerge: ``Can auto-encoders work on realistic data sets like COCO with multiple different classes in one image?'' In other words: ``Is my experience due to implementation issues or a general theoretical problem of auto-encoders?''
Despite best efforts, the results of Miller et al.~\cite{Miller2018} could not be replicated. By itself, however, this failed replication proves nothing: to disprove Miller's work, every possible way of replicating it would have to fail, whereas a single successful replication is enough to establish reproducibility. On the surface, both Miller et al. and I used the same weights, the same network, and the same data sets. The only notable difference is that they used a Caffe implementation of SSD, whereas this thesis used the Tensorflow implementation with eager mode.
\chapter{Closing}
\label{chap:closing}