Auto-Encoder-CODEC

This is a continuation of the Auto-Encoder (Auto-Associative) How-To Examples series.

This repository includes "how to" examples of auto-associative neural networks, sometimes called auto-encoders. These networks learn to encode and decode (CODEC) sample data efficiently and with low loss. The reconstruction loss is minimized using gradient-descent training algorithms.
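As a quick illustration, here is a minimal sketch of such a network, assuming a Keras/TensorFlow setup; the layer sizes, optimizer, and placeholder data are illustrative assumptions, not the repository's exact code:

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Auto-associative network: the target output is the input itself.
inputs = keras.Input(shape=(784,))                          # flattened 28x28 image
encoded = layers.Dense(32, activation="relu")(inputs)       # compact code (encoder)
decoded = layers.Dense(784, activation="sigmoid")(encoded)  # reconstruction (decoder)

autoencoder = keras.Model(inputs, decoded)
autoencoder.compile(optimizer="sgd", loss="mse")  # gradient descent on reconstruction loss

x = np.random.rand(100, 784).astype("float32")    # placeholder data for the sketch
autoencoder.fit(x, x, epochs=5, batch_size=32)    # the input doubles as the target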

Example 3 - MNIST Digit ConvAutoencoder

This "how-to" example comes from a competition held on the kaggle.com website.

WHAT IS THE OVERARCHING GOAL?

Recognize handwritten digits from the MNIST dataset. Do it using the CNN-generated "features", i.e., the "encoded" version of the input taken from the center of an autoencoder.
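A minimal sketch of what such a convolutional autoencoder might look like in Keras is shown below; the filter counts, pooling sizes, and 7x7x8 bottleneck are assumptions for illustration, not the competition notebook's exact architecture:

from tensorflow import keras
from tensorflow.keras import layers

inputs = keras.Input(shape=(28, 28, 1))

# Encoder: convolution plus downsampling produces the compact code.
x = layers.Conv2D(16, 3, activation="relu", padding="same")(inputs)
x = layers.MaxPooling2D(2)(x)                       # 14x14
x = layers.Conv2D(8, 3, activation="relu", padding="same")(x)
encoded = layers.MaxPooling2D(2)(x)                 # 7x7x8 "features" - the center

# Decoder: upsampling plus convolution attempts to recreate the input.
x = layers.Conv2D(8, 3, activation="relu", padding="same")(encoded)
x = layers.UpSampling2D(2)(x)
x = layers.Conv2D(16, 3, activation="relu", padding="same")(x)
x = layers.UpSampling2D(2)(x)
decoded = layers.Conv2D(1, 3, activation="sigmoid", padding="same")(x)

autoencoder = keras.Model(inputs, decoded)
encoder = keras.Model(inputs, encoded)   # exposes the center for later classification
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")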

A subject matter expert (SME) can recognize digits. They can qualitatively rate the amount of information lost to the encoding's compression: if the SME can no longer classify a decoded input, they judge that too much information has been removed.

Is this judgment reliable? No. It is possible to train a CNN to classify using internal representations that are so distorted and simplified that they can no longer be used to recreate the input. I believe this is because the SME does not merely classify the examples; they perceive much more in them, so a reconstruction can fail their inspection while the code behind it still carries enough information for classification.
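To make the point concrete, here is a sketch, again assuming Keras and reusing the hypothetical 7x7x8 encoder shape from the sketch above, of a classifier trained directly on the encoded features; such a head can learn the ten digit classes even from a code too lossy for a decoder to recreate the input:

from tensorflow import keras
from tensorflow.keras import layers

# Hypothetical encoder matching the earlier sketch (7x7x8 bottleneck).
inputs = keras.Input(shape=(28, 28, 1))
x = layers.Conv2D(16, 3, activation="relu", padding="same")(inputs)
x = layers.MaxPooling2D(2)(x)
x = layers.Conv2D(8, 3, activation="relu", padding="same")(x)
encoded = layers.MaxPooling2D(2)(x)

# Classification head on the code: 10 digit classes need far less
# information than a faithful 784-pixel reconstruction does.
x = layers.Flatten()(encoded)
outputs = layers.Dense(10, activation="softmax")(x)

classifier = keras.Model(inputs, outputs)
classifier.compile(optimizer="adam",
                   loss="sparse_categorical_crossentropy",
                   metrics=["accuracy"])
# classifier.fit(x_train, y_train, epochs=5)  # trained on MNIST labels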


All Examples:

https://github.com/prof-nussbaum/Auto-Encoder-CODEC/


This Example (Example 3):

https://github.com/prof-nussbaum/Auto-Encoder-CODEC/tree/main/Example_3


