updates of each of the aforementioned components:
\item Minimize \(\mathcal{L}_{error}\) and \(\mathcal{L}_{adv-d_z}\) by updating weights of \(e\) and \(g\).
\end{itemize}
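Written as a gradient step, the update of \(e\) and \(g\) above adjusts both sets of parameters jointly (the learning rate \(\eta\) and the parameter notation \(\theta\) are introduced here for illustration only and do not appear in the source):
\[
(\theta_e, \theta_g) \leftarrow (\theta_e, \theta_g) - \eta \, \nabla_{\theta_e, \theta_g} \bigl( \mathcal{L}_{error} + \mathcal{L}_{adv-d_z} \bigr)
\]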
Practically, the auto-encoder is trained separately for every
object class that is considered ``known''. Pidhorskyi et al.\ trained
it on the MNIST~\cite{Lecun1998} data set, once for every digit.
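To make the per-class setup concrete, the following is a minimal sketch in which one tiny linear auto-encoder is fitted per ``known'' class on synthetic data; the linear architecture, plain gradient-descent loop, and data are illustrative assumptions, not the model or losses of Pidhorskyi et al.:

```python
import numpy as np

# Toy sketch of the per-class scheme described above: one auto-encoder
# per "known" class.  The linear encoder/decoder and the synthetic data
# are illustrative stand-ins for the actual architecture and data set.

rng = np.random.default_rng(0)

def train_autoencoder(x, latent_dim=2, lr=0.005, steps=300):
    """Fit encoder weights e and generator weights g to x by reducing
    the mean squared reconstruction error with gradient descent."""
    n, d = x.shape
    e = rng.normal(scale=0.1, size=(d, latent_dim))   # encoder weights
    g = rng.normal(scale=0.1, size=(latent_dim, d))   # generator weights

    def mse():
        return float(np.mean((x @ e @ g - x) ** 2))

    initial = mse()
    for _ in range(steps):
        z = x @ e                       # encode
        err = z @ g - x                 # reconstruction error
        grad_g = z.T @ err / n          # descent direction for g
        grad_e = x.T @ (err @ g.T) / n  # descent direction for e
        e -= lr * grad_e
        g -= lr * grad_g
    return e, g, initial, mse()

# One model per known class, mirroring the per-digit MNIST training.
data_per_class = {c: rng.normal(loc=c, size=(64, 8)) for c in range(3)}
models = {c: train_autoencoder(x) for c, x in data_per_class.items()}
for c, (_, _, before, after) in models.items():
    print(f"class {c}: reconstruction MSE {before:.3f} -> {after:.3f}")
```

Keeping one model per class means each auto-encoder only ever sees inliers of its own class, which is what later allows reconstruction quality to separate known from novel inputs.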
\section{Contribution}