Added paragraph about training of the auto-encoder

Signed-off-by: Jim Martens <github@2martens.de>
This commit is contained in:
2019-03-07 15:32:46 +01:00
parent dfd9d36744
commit edec4937b2

@ -327,7 +327,9 @@ updates of each of the aforementioned components:
\item Minimize \(\mathcal{L}_{error}\) and \(\mathcal{L}_{adv-d_z}\) by updating weights of \(e\) and \(g\).
\end{itemize}
In practice, the auto-encoder is trained separately for every
object class that is considered ``known''. Pidhorskyi et al.\ trained
it on the MNIST\cite{Lecun1998} data set, once for every digit.
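The per-class training setup can be sketched as follows. This is a minimal illustration, not the authors' implementation: \texttt{split\_by\_class} and the \texttt{train\_autoencoder} stub are hypothetical names, and the stub merely stands in for the alternating adversarial updates listed above (minimizing \(\mathcal{L}_{error}\) and the adversarial losses with respect to \(e\), \(g\), \(d_z\), and \(d_x\)).

```python
import numpy as np

def split_by_class(images, labels, known_classes):
    """Group samples by class label, so one auto-encoder is trained per class."""
    return {c: images[labels == c] for c in known_classes}

def train_autoencoder(samples, steps=1):
    # Hypothetical stand-in for the actual training loop: each step would
    # alternate the updates listed above (discriminator updates for d_z and
    # d_x, then minimizing L_error and L_adv-d_z w.r.t. e and g).
    return {"n_samples": len(samples), "steps": steps}

# Toy data standing in for MNIST (28x28 grey-scale images, labels 0-9).
rng = np.random.default_rng(0)
images = rng.random((100, 28, 28))
labels = rng.integers(0, 10, size=100)

per_class = split_by_class(images, labels, known_classes=range(10))
models = {c: train_autoencoder(x) for c, x in per_class.items()}
print(len(models))  # one model per "known" digit class
```

At inference time, a sample would then be scored against the model of each known class; the per-class split above is what makes that one-model-per-digit scheme possible.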
\section{Contribution}