Added related works

Signed-off-by: Jim Martens <github@2martens.de>
Jim Martens 2019-08-01 16:52:59 +02:00
parent 8fa51535af
commit 1a1b6937f6
1 changed file with 78 additions and 0 deletions

@@ -130,6 +130,84 @@ with MS COCO classes.
\chapter{Background and Contribution}
This chapter will begin with an overview of previous work
in the field of this thesis. Afterwards, the theoretical foundations
of the work of Miller et al.~\cite{Miller2018} and of auto-encoders
will be explained. The chapter concludes with more details about the
research question and the intended contribution of this thesis.
\section{Related Works}
Novelty detection for object detection is intricately linked with
open-set conditions: the test data can contain unknown classes.
Bishop~\cite{Bishop1994} investigates the relationship between
the degree of novelty of input data and the reliability of network
outputs. Pimentel et al.~\cite{Pimentel2014} provide a review
of novelty detection methods published over the previous decade.
There are two primary pathways that deal with novelty: novelty
detection using auto-encoders and uncertainty estimation with
Bayesian networks.

Japkowicz et al.~\cite{Japkowicz1995} introduce a novelty detection
method based on the hippocampus model of Gluck and Myers~\cite{Gluck1993}
and use an auto-encoder to recognize novel instances.
Thompson et al.~\cite{Thompson2002} show that auto-encoders
can learn ``normal'' system behaviour implicitly.
Goodfellow et al.~\cite{Goodfellow2014} introduce generative
adversarial networks: a generator attempts to trick a discriminator
by producing samples that are indistinguishable from the real data.
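Formally, the generator $G$ and the discriminator $D$ play the
following minimax game over the data distribution
$p_{\mathrm{data}}$ and a noise prior
$p_{\mathbf{z}}$~\cite{Goodfellow2014}:
\[
\min_G \max_D \;
\mathbb{E}_{\mathbf{x} \sim p_{\mathrm{data}}}\left[\log D(\mathbf{x})\right]
+ \mathbb{E}_{\mathbf{z} \sim p_{\mathbf{z}}}\left[\log\left(1 - D(G(\mathbf{z}))\right)\right] .
\]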
Makhzani et al.~\cite{Makhzani2015} build on the work of Goodfellow
et al. and propose adversarial auto-encoders. Richter and
Roy~\cite{Richter2017} use an auto-encoder to detect novelty.
Wang et al.~\cite{Wang2018} also build upon the work of Goodfellow
et al. and use a generative adversarial network for novelty detection.
Sabokrou et al.~\cite{Sabokrou2018} implement an end-to-end
architecture for one-class classification: it consists of two
deep networks, with one being the novelty detector and the other
enhancing inliers and distorting outliers.
Pidhorskyi et al.~\cite{Pidhorskyi2018} take a probabilistic approach
and compute how likely it is that a sample is generated by the
inlier distribution.
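Schematically, and simplifying the actual method of
Pidhorskyi et al., such a probabilistic test flags a sample
$\mathbf{x}$ as novel when its estimated likelihood under the inlier
distribution falls below a threshold $\tau$:
% schematic simplification, not the exact formulation of \cite{Pidhorskyi2018}
\[
\hat{p}_{\mathrm{in}}(\mathbf{x}) < \tau .
\]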

Kendall and Gal~\cite{Kendall2017} provide a Bayesian deep learning
framework that combines input-dependent
aleatoric\footnote{captures noise inherent in the observations}
uncertainty with epistemic\footnote{captures uncertainty in the model}
uncertainty. Lakshminarayanan et al.~\cite{Lakshminarayanan2017}
estimate predictive uncertainty using deep ensembles
rather than Bayesian networks. Geifman et al.~\cite{Geifman2018}
introduce an uncertainty estimation algorithm for non-Bayesian deep
neural classification that estimates the uncertainty of highly
confident points using earlier snapshots of the trained model.
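For the regression case, Kendall and Gal~\cite{Kendall2017}
approximate the combined predictive uncertainty from $T$ stochastic
forward passes, where pass $t$ yields a predicted mean
$\hat{\mathbf{y}}_t$ and a predicted observation noise
$\hat{\sigma}_t^2$:
\[
\mathrm{Var}(\mathbf{y}) \approx
\underbrace{\frac{1}{T}\sum_{t=1}^{T} \hat{\mathbf{y}}_t^2
- \left(\frac{1}{T}\sum_{t=1}^{T} \hat{\mathbf{y}}_t\right)^{2}}_{\mathrm{epistemic}}
+ \underbrace{\frac{1}{T}\sum_{t=1}^{T} \hat{\sigma}_t^2}_{\mathrm{aleatoric}} .
\]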
Miller et al.~\cite{Miller2018a} compare merging strategies
for sampling-based uncertainty techniques in object detection.
Sensoy et al.~\cite{Sensoy2018} treat the predictions of a
multi-class classifier as subjective opinions: they place a
Dirichlet distribution over the class probabilities and train the
network to predict the parameters of this distribution.
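In their formulation, the network outputs an evidence value
$e_k \geq 0$ for each of the $K$ classes, which yields Dirichlet
parameters $\alpha_k = e_k + 1$, expected class probabilities
$\hat{p}_k$, and a scalar uncertainty $u$~\cite{Sensoy2018}:
\[
S = \sum_{k=1}^{K} \alpha_k, \qquad
\hat{p}_k = \frac{\alpha_k}{S}, \qquad
u = \frac{K}{S} .
\]
A sample with little total evidence thus receives an uncertainty
close to one.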
Gal and Ghahramani~\cite{Gal2016} show how dropout can be used
as a Bayesian approximation. Miller et al.~\cite{Miller2018}
build upon the work of Miller et al.~\cite{Miller2018a} and of
Gal and Ghahramani: they use dropout sampling under open-set
conditions for object detection. Mukhoti and Gal~\cite{Mukhoti2018}
contribute metrics to measure uncertainty for semantic
segmentation. Wu et al.~\cite{Wu2019} introduce two innovations
that turn variational Bayes into a robust tool for Bayesian neural
networks: a deterministic method to approximate moments in neural
networks, which eliminates gradient variance, and a hierarchical
prior for the parameters together with an empirical Bayes procedure
to select the prior variances.
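The dropout sampling used in this line of work approximates the
predictive distribution of a classifier by averaging $T$ forward
passes with dropout kept active at test time, each pass $t$ using
sampled weights $\hat{\mathbf{W}}_t$~\cite{Gal2016}:
\[
p(y = c \mid \mathbf{x}) \approx
\frac{1}{T} \sum_{t=1}^{T}
\mathrm{softmax}\left(f^{\hat{\mathbf{W}}_t}(\mathbf{x})\right)_c .
\]
The spread of the sampled softmax outputs then serves as an
estimate of the model's uncertainty.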
% SSD: \cite{Liu2016}
% ImageNet: \cite{Deng2009}
% COCO: \cite{Lin2014}
% YCB: \cite{Xiang2017}
% SceneNet: \cite{McCormac2017}
\chapter{Methods}
\section{Design of Source Code}