Moved software and source code design to appendices

Additionally, the LaTeX setup was adapted for this: the base class
was switched to scrbook, the appendix package is loaded, and the
appendix heading format was adjusted.

Signed-off-by: Jim Martens <github@2martens.de>
Jim Martens 2019-08-14 16:40:38 +02:00
parent e5662ed48d
commit 15aabfa9ac
5 changed files with 75 additions and 54 deletions

appendix.tex (new file, 45 lines added)
View File

@@ -0,0 +1,45 @@
\chapter{Software and Source Code Design}
The source code of many published papers is either not available
or seems like an afterthought: it is poorly documented, difficult
to integrate into one's own work, and often does not follow common
software development best practices. Moreover, with TensorFlow,
PyTorch, and Caffe there are at least three widely used machine
learning frameworks. Every research team seems to prefer a different
framework and sometimes even develops its own; this makes it
difficult to combine the work of different authors.
In addition to all this, most papers do not provide proper
information about the implementation details, which makes it
difficult to replicate them accurately if their source code is not
available.
Therefore, it was clear to me: I will release my source code and
make it available as a Python package on the PyPI package index.
This makes it possible for other researchers to simply install
the package and use its API to interact with my code. Additionally,
the code has been designed to be future-proof and to work with
the announced TensorFlow 2.0 by supporting eager mode.
Furthermore, it is configurable, well documented, and conforms
to clean code guidelines such as evolvability and extensibility.
%Unit tests are part of the code as well to identify common
%issues early on, saving time in the process.
% TODO: Unit tests (!)
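To illustrate the eager mode support mentioned above, the following
minimal sketch (illustrative only, not taken from the actual
package) shows how eager execution is enabled in TensorFlow 1.x;
this opt-in mechanism is what keeps the code compatible with the
announced TensorFlow 2.0, where eager execution becomes the default:
\begin{verbatim}
import tensorflow as tf

# In TensorFlow 1.x eager execution is opt-in and has to be
# enabled at program startup, before any other TensorFlow call.
tf.enable_eager_execution()

# operations now run immediately and return concrete values
x = tf.constant([[2.0, 0.0], [0.0, 2.0]])
print(tf.matmul(x, x))
\end{verbatim}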
The code was designed to be modular: main.py creates the command
line interface, cli.py implements the actions chosen in the CLI,
definitions.py contains the MS COCO to SceneNet RGB-D mapping,
data.py groups the preparation of the data sets and the retrieval
of data, evaluation.py provides the evaluation metrics, config.py
accesses and handles the configuration, debug.py holds debug-only
code, and ssd.py contains the code to train the SSD and later
predict with it. All code relating to the auto-encoder resides in
its own subdirectory.
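The following simplified sketch (hypothetical code, not the actual
implementation; the program name and options are placeholders)
illustrates the split between the command line interface in main.py
and the actions implemented in cli.py:
\begin{verbatim}
# main.py (illustrative sketch)
import argparse


def train(args: argparse.Namespace) -> None:
    # in the real package this action lives in cli.py
    print("training with configuration", args.config)


def evaluate(args: argparse.Namespace) -> None:
    # in the real package this action lives in cli.py
    print("evaluating with configuration", args.config)


def main() -> None:
    # build the command line interface and map every sub-command
    # to the function that implements the chosen action
    parser = argparse.ArgumentParser(prog="masterthesis")
    parser.add_argument("--config", default="config.ini")
    subparsers = parser.add_subparsers(dest="action", required=True)
    subparsers.add_parser("train").set_defaults(func=train)
    subparsers.add_parser("evaluate").set_defaults(func=evaluate)

    args = parser.parse_args()
    args.func(args)


if __name__ == "__main__":
    main()
\end{verbatim}
This separation keeps argument parsing independent of the actual
functionality, which makes the individual actions easier to test
and to reuse.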
Lastly, the SSD implementation from a third-party repository
has been modified to work inside a Python package architecture and
with eager mode. It is stored as a Git submodule inside the package
repository.

View File

@@ -366,9 +366,8 @@ be used to identify and reject these false positive cases.
\label{chap:methods}
This chapter explains the functionality of the Bayesian SSD, the
decoding pipelines, and
provides some information on the software and source code design.
This chapter explains the functionality of the Bayesian SSD and the
decoding pipelines.
\section{Bayesian SSD for Model Uncertainty}
@@ -516,52 +515,6 @@ observation for removal.
Per-class confidence thresholding, non-maximum suppression, and
top \(k\) selection happen as in vanilla SSD.
\section{Software and Source Code Design}
The source code of many published papers is either not available
or seems like an afterthought: it is poorly documented, difficult
to integrate into one's own work, and often does not follow common
software development best practices. Moreover, with TensorFlow,
PyTorch, and Caffe there are at least three widely used machine
learning frameworks. Every research team seems to prefer a different
framework and sometimes even develops its own; this makes it
difficult to combine the work of different authors.
In addition to all this, most papers do not provide proper
information about the implementation details, which makes it
difficult to replicate them accurately if their source code is not
available.
Therefore, it was clear to me: I will release my source code and
make it available as a Python package on the PyPI package index.
This makes it possible for other researchers to simply install
the package and use its API to interact with my code. Additionally,
the code has been designed to be future-proof and to work with
the announced TensorFlow 2.0 by supporting eager mode.
Furthermore, it is configurable, well documented, and conforms
to clean code guidelines such as evolvability and extensibility.
%Unit tests are part of the code as well to identify common
%issues early on, saving time in the process.
% TODO: Unit tests (!)
The code was designed to be modular: main.py creates the command
line interface, cli.py implements the actions chosen in the CLI,
definitions.py contains the MS COCO to SceneNet RGB-D mapping,
data.py groups the preparation of the data sets and the retrieval
of data, evaluation.py provides the evaluation metrics, config.py
accesses and handles the configuration, debug.py holds debug-only
code, and ssd.py contains the code to train the SSD and later
predict with it. All code relating to the auto-encoder resides in
its own subdirectory.
Lastly, the SSD implementation from a third-party repository
has been modified to work inside a Python package architecture and
with eager mode. It is stored as a Git submodule inside the package
repository.
\chapter{Experimental Setup and Results}
\label{chap:experiments-results}

View File

@@ -3,14 +3,15 @@
% Fallback, pass all unknown options to base class
\DeclareOption*{
\PassOptionsToClass{\CurrentOption}{scrreprt}
\PassOptionsToClass{\CurrentOption}{scrbook}
}
% Process given options
\ProcessOptions\relax
% Load base class
\LoadClass[a4paper]{scrreprt}
\LoadClass[a4paper,twoside=false,%
appendixprefix=true,numbers=noenddot]{scrbook}
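% twoside=false keeps a one-sided layout, appendixprefix=true adds
% the appendix name before appendix chapter headings, and
% numbers=noenddot drops the trailing dot after heading numbers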
% do some stuff

View File

@@ -111,6 +111,7 @@
\input{./private/definitions.tex}
}{}
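% the titletoc option adds the appendix name to the appendix
% entries in the table of contents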
\usepackage[titletoc]{appendix}
\MakeOuterQuote{"}
@@ -175,10 +176,11 @@
\tableofcontents
}
\newcommand{\finish}{%
%\clearpage
\newcommand{\references}{%
\printbibliography[heading=bibintoc]
}
\newcommand{\finish}{%
%\clearpage
\printglossary
@@ -189,10 +191,12 @@
\else
\clearpage
\selectlanguage{ngerman}
\chapter*{Eidesstattliche Versicherung}
\input{declaration.tex}
\fi
\if@library
\clearpage
\chapter*{Erklärung zu Bibliothek}
\input{library.tex}
\else\fi
}

View File

@@ -33,13 +33,31 @@
% specify bib resource
\addbibresource{ma.bib}
\makeatletter
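% patch \appendix so that appendix headings carry the appendix name
% before the letter and the running-head mark omits the number prefix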
\g@addto@macro\appendix{%
\renewcommand*{\chapterformat}{%
{\appendixname\nobreakspace\thechapter\autodot\enskip}%
}
\renewcommand*{\chaptermarkformat}{%
{}%
}
}
\makeatother
% begin document
\begin{document}
% invoke start command(s) from masterthesis package
\frontmatter
\start
\mainmatter
\input{body.tex}
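% print the bibliography before the appendices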
\references
\begin{appendices}
\appendix
\input{appendix.tex}
\end{appendices}
% invoke finish command(s) from masterthesis package
\backmatter
\finish
\end{document}