Moved software and source code design to appendices
Additionally, the entire LaTeX setup was adapted for this use case.

Signed-off-by: Jim Martens <github@2martens.de>
parent e5662ed48d
commit 15aabfa9ac
@@ -0,0 +1,45 @@
\chapter{Software and Source Code Design}
The source code of many published papers is either not available
or seems like an afterthought: it is poorly documented, difficult
to integrate into your own work, and often does not follow common
software development best practices. Moreover, with TensorFlow,
PyTorch, and Caffe there are at least three competing machine learning
frameworks. Every research team seems to prefer a different framework
and sometimes even develops its own, which makes it difficult
to combine the work of different authors.
In addition, most papers do not provide proper information
about the implementation details, making it difficult to
replicate them accurately if their source code is not available.

Therefore, it was clear to me: I will release my source code and
make it available as a Python package on the PyPI package index.
This makes it possible for other researchers to simply install
the package and use its API to interact with my code. Additionally,
the code has been designed to be future-proof and to work with
the announced TensorFlow 2.0 by supporting eager mode.
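The compatibility idea behind supporting eager mode can be sketched as follows. This is an illustrative assumption, not code from the thesis package: TensorFlow 1.x requires an explicit opt-in to eager execution, while TensorFlow 2.0 runs eagerly by default, so a small guard keeps one code base working on both. Stand-in modules are used below so the sketch runs without a TensorFlow installation.

```python
import types

# Hypothetical sketch: enable eager mode on TensorFlow 1.x, do nothing on 2.x.
# The helper name and the stand-in modules are illustrative assumptions.
def enable_eager(tf_module):
    if hasattr(tf_module, "enable_eager_execution"):
        # TF 1.x exposes an explicit opt-in to eager mode.
        tf_module.enable_eager_execution()
        return "eager enabled (TF 1.x)"
    # TF 2.0 runs eagerly by default, so there is nothing to do.
    return "eager already default (TF 2.x)"

# Stand-ins so the sketch runs without TensorFlow installed:
tf1 = types.SimpleNamespace(enable_eager_execution=lambda: None)
tf2 = types.SimpleNamespace()
```

Calling such a guard once at start-up lets the rest of the code assume eager semantics regardless of the installed TensorFlow version.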
Furthermore, it is configurable, well documented, and conforms
to clean code guidelines, among them evolvability and extensibility.
%Unit tests are part of the code as well to identify common
%issues early on, saving time in the process.
% TODO: Unit tests (!)

The code was designed to be modular:

\begin{itemize}
	\item main.py creates the command line interface,
	\item cli.py implements the actions chosen in the CLI,
	\item definitions.py contains the MS COCO to SceneNet RGB-D mapping,
	\item data.py groups the preparation of the data sets and the retrieval of data,
	\item evaluation.py provides the evaluation metrics,
	\item config.py accesses and handles the configuration,
	\item debug.py holds debug-only code, and
	\item ssd.py contains the code to train the SSD and later predict with it.
\end{itemize}

All code relating to the auto-encoder can be found in its own
subdirectory.
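The split between main.py (building the CLI) and cli.py (implementing the chosen actions) can be sketched in a single file as follows. The subcommand names, the program name, and the action functions are illustrative assumptions, not the package's actual interface:

```python
import argparse

# Sketch of the main.py side: build the command line interface.
def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(prog="masterthesis")
    subparsers = parser.add_subparsers(dest="command", required=True)
    subparsers.add_parser("train", help="train the SSD")
    subparsers.add_parser("evaluate", help="compute the evaluation metrics")
    return parser

# Sketch of the cli.py side: implement the actions chosen in the CLI.
def dispatch(args: argparse.Namespace) -> str:
    actions = {
        "train": lambda: "training started",
        "evaluate": lambda: "evaluation started",
    }
    return actions[args.command]()
```

In the real package the dispatched actions would call into data.py, ssd.py, and evaluation.py rather than return strings; the point of the split is that the argument parsing and the implementation of the actions evolve independently.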
Lastly, the SSD implementation from a third-party repository
has been modified to work inside a Python package architecture and
with eager mode. It is stored as a Git submodule inside the package
repository.
body.tex (51 lines changed)
@@ -366,9 +366,8 @@ be used to identify and reject these false positive cases.
\label{chap:methods}
-This chapter explains the functionality of the Bayesian SSD, the
-decoding pipelines, and
-provides some information on the software and source code design.
+This chapter explains the functionality of the Bayesian SSD and the
+decoding pipelines.

\section{Bayesian SSD for Model Uncertainty}

@@ -516,52 +515,6 @@ observation for removal.
Per class confidence thresholding, non-maximum suppression, and
top \(k\) selection happen like in vanilla SSD.
-\section{Software and Source Code Design}
\chapter{Experimental Setup and Results}
\label{chap:experiments-results}
@@ -3,14 +3,15 @@
% Fallback, pass all unknown options to base class
\DeclareOption*{
-  \PassOptionsToClass{\CurrentOption}{scrreprt}
+  \PassOptionsToClass{\CurrentOption}{scrbook}
}

% Process given options
\ProcessOptions\relax

% Load base class
-\LoadClass[a4paper]{scrreprt}
+\LoadClass[a4paper,twoside=false,%
+  appendixprefix=true,numbers=noenddot]{scrbook}

% do some stuff
@@ -111,6 +111,7 @@
\input{./private/definitions.tex}
}{}

+\usepackage[titletoc]{appendix}

\MakeOuterQuote{"}

@@ -175,10 +176,11 @@
\tableofcontents
}

-\newcommand{\finish}{%
-%\clearpage
+\newcommand{\references}{%
\printbibliography[heading=bibintoc]
+}
+
+\newcommand{\finish}{%
%\clearpage
\printglossary

@@ -189,10 +191,12 @@
\else
\clearpage
\selectlanguage{ngerman}
+\chapter*{Eidesstattliche Versicherung}
\input{declaration.tex}
\fi
\if@library
\clearpage
+\chapter*{Erklärung zu Bibliothek}
\input{library.tex}
\else\fi
}

thesis.tex (20 lines changed)
@@ -33,13 +33,31 @@
% specify bib resource
\addbibresource{ma.bib}

+\makeatletter
+\g@addto@macro\appendix{%
+  \renewcommand*{\chapterformat}{%
+    {\appendixname\nobreakspace\thechapter\autodot\enskip}%
+  }
+  \renewcommand*{\chaptermarkformat}{%
+    {}%
+  }
+}
+\makeatother
+
% begin document
\begin{document}
% invoke start command(s) from masterthesis package
+\frontmatter
\start
+\mainmatter
\input{body.tex}
+\references
+\begin{appendices}
+\appendix
+\input{appendix.tex}
+\end{appendices}

% invoke finish command(s) from masterthesis package
+\backmatter
\finish
\end{document}