Auto-Encoder-CODEC
This is a continuation of the Auto-Encoder (Auto-Associative) How-To Examples series.
Example 3 - MNIST Digit ConvAutoencoder
This "how-to" example comes from a competition held on the kaggle.com website.
WHAT IS THE OVERARCHING GOAL? Recognize handwritten digits from the MNIST dataset. Do it with CNN-generated "features" - the "encoded" version of the input that sits at the center of an autoencoder.
The subject matter expert (SME) can recognize digits, and can qualitatively rate how much data is lost to encoded compression: if the SME can no longer make a classification from a decoded input, they judge that too much data has been removed.
Is this true? No. It is possible to train a CNN to classify using internal representations so distorted and simplified that they can no longer be used to recreate the input. I believe this is because the SME does not merely classify the examples - they perceive far more than classification alone requires.
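To make the bottleneck idea concrete, here is a minimal sketch in plain NumPy. It uses a single dense encode/decode pair rather than the example's CNN, and every size and name here is illustrative, not taken from the repository: a flattened 28x28 "digit" is squeezed through a 32-value code (roughly 24x compression) and reconstructed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for one MNIST digit: a 28x28 image flattened to 784 values.
x = rng.random((1, 784))

# Hypothetical bottleneck: 32 code values in place of 784 inputs.
code_size = 32
W_enc = rng.normal(0.0, 0.05, (784, code_size))
W_dec = rng.normal(0.0, 0.05, (code_size, 784))

def encode(x):
    # The compressed "encoded" representation at the autoencoder's center.
    return np.tanh(x @ W_enc)

def decode(z):
    # Reconstruction of the input from the code.
    return z @ W_dec

# Train with plain gradient descent on squared reconstruction error.
lr = 0.01
for step in range(2000):
    z = encode(x)
    err = decode(z) - x                    # reconstruction error, shape (1, 784)
    dz = (err @ W_dec.T) * (1.0 - z**2)    # backprop through tanh
    W_dec -= lr * z.T @ err
    W_enc -= lr * x.T @ dz

mse = float(np.mean((decode(encode(x)) - x) ** 2))
print(f"code size: {code_size} of 784, reconstruction MSE: {mse:.4f}")
```

If the code layer is made too small, the reconstruction error stays high and a human judge (the SME above) would see an unrecognizable decoded digit, even though a classifier trained on that same code might still label it correctly.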
All Examples:
https://github.com/prof-nussbaum/Auto-Encoder-CODEC/
This Example (Example 3):
https://github.com/prof-nussbaum/Auto-Encoder-CODEC/tree/main/Example_3