Moved software and source code design to appendices

Additionally, the entire LaTeX setup was adapted for this use case.

Signed-off-by: Jim Martens <github@2martens.de>
This commit is contained in:
body.tex
@@ -366,9 +366,8 @@ be used to identify and reject these false positive cases.
 
 \label{chap:methods}
 
-This chapter explains the functionality of the Bayesian SSD, the
-decoding pipelines, and
-provides some information on the software and source code design.
+This chapter explains the functionality of the Bayesian SSD and the
+decoding pipelines.
 
 \section{Bayesian SSD for Model Uncertainty}
 
@@ -516,52 +515,6 @@ observation for removal.
 Per class confidence thresholding, non-maximum suppression, and
 top \(k\) selection happen like in vanilla SSD.
 
-\section{Software and Source Code Design}
-
-The source code of many published papers is either not available
-or seems like an afterthought: it is poorly documented, difficult
-to integrate into your own work, and often does not follow common
-software development best practices. Moreover, with Tensorflow,
-PyTorch, and Caffe there are at least three machine learning
-frameworks. Every research team seems to prefer another framework
-and sometimes even develops their own; this makes it difficult
-to combine the work of different authors.
-In addition to all this, most papers do not contain proper information
-regarding the implementation details, making it difficult to
-accurately replicate them if their source code is not available.
-
-Therefore, it was clear to me: I will release my source code and
-make it available as Python package on the PyPi package index.
-This makes it possible for other researchers to simply install
-a package and use the API to interact with my code. Additionally,
-the code has been designed to be future proof and work with
-the announced Tensorflow 2.0 by supporting eager mode.
-
-Furthermore, it is configurable, well documented, and conforms
-to the clean code guidelines: evolvability and extendability among
-others.
-%Unit tests are part of the code as well to identify common
-%issues early on, saving time in the process.
-% TODO: Unit tests (!)
-
-The code was designed to be modular: One module creates the command
-line interface (main.py), another implements the actions
-chosen in the CLI (cli.py), the MS COCO to SceneNet RGB-D mapping can
-be found in the definitions.py module,
-preparation of the data sets and retrieval of data is
-grouped in the data.py module, evaluation metrics have
-their separate module (evaluation.py), the configuration is
-accessed and handled by the config.py module, debug-only code
-can be found in debug.py, and the ssd.py module contains
-code to train the SSD and later predict with it. All
-code relating to the auto-encoder can be found in its own
-sub directory.
-
-Lastly, the SSD implementation from a third party repository
-has been modified to work inside a Python package architecture and
-with eager mode. It is stored as a Git submodule inside the package
-repository.
-
 \chapter{Experimental Setup and Results}
 
 \label{chap:experiments-results}
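The removed section describes a modular layout in which main.py builds the command line interface and cli.py implements the actions chosen there. A minimal sketch of that wiring, using only the standard library, might look as follows; the sub-command names (`train`, `evaluate`), the `config` argument, and the program name are illustrative assumptions, not taken from the actual package:

```python
import argparse

# Sketch of the CLI wiring described in the section: main.py builds the
# parser, cli.py-style functions implement the chosen actions.
# Sub-command names and arguments are assumptions for illustration.

def train(args: argparse.Namespace) -> str:
    # cli.py would kick off SSD training here (via the ssd module)
    return f"training with config {args.config}"

def evaluate(args: argparse.Namespace) -> str:
    # cli.py would compute evaluation metrics here (via the evaluation module)
    return f"evaluating with config {args.config}"

def build_parser() -> argparse.ArgumentParser:
    # main.py: one sub-command per action, each dispatching to a function
    parser = argparse.ArgumentParser(prog="bayesian-ssd")  # name assumed
    subparsers = parser.add_subparsers(dest="command", required=True)
    for name, func in (("train", train), ("evaluate", evaluate)):
        sub = subparsers.add_parser(name)
        sub.add_argument("config")
        sub.set_defaults(func=func)
    return parser

if __name__ == "__main__":
    args = build_parser().parse_args()
    print(args.func(args))
```

Keeping the parser construction separate from the action implementations is what makes the layout testable and extendable: new sub-commands only touch the dispatch table, not the actions.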