Posts

Showing posts from April, 2021
Auto-Encoder-CODEC
This is a continuation of Auto-Encoder or Auto-Associative How To Examples. This repository includes "how to" examples of auto-associative neural networks, sometimes called auto-encoders. These networks can encode and decode (CODEC) sample data efficiently and with low loss; the loss function is minimized using gradient-descent training algorithms. Example 3 - MNIST Digit ConvAutoencoder: this "how-to" example comes from a competition held on the kaggle.com website. WHAT IS THE OVERARCHING GOAL? Recognize handwritten digits from the MNIST dataset. Do it with CNN-generated "features" - the "encoded" version of the input at the center of an autoencoder. The subject matter expert (SME) can recognize digits and qualitatively rates the amount of data lost due to encoded compression. If the SME can no longer make a classification from a decoded input, they qualitativel...
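To make the preview concrete, here is a minimal sketch of a convolutional autoencoder for 28x28 MNIST digits, assuming a Keras/TensorFlow setup. The layer sizes, epochs, and loss are illustrative assumptions, not the repository's actual architecture.

```python
# A minimal convolutional autoencoder sketch for 28x28 MNIST digits.
# Layer sizes are illustrative; the repository's actual network may differ.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_conv_autoencoder():
    inputs = layers.Input(shape=(28, 28, 1))
    # Encoder: compress the image into a small "feature" code
    x = layers.Conv2D(16, 3, activation="relu", padding="same")(inputs)
    x = layers.MaxPooling2D(2, padding="same")(x)          # 14x14
    x = layers.Conv2D(8, 3, activation="relu", padding="same")(x)
    encoded = layers.MaxPooling2D(2, padding="same")(x)    # 7x7 code (center)
    # Decoder: reconstruct the image from the code
    x = layers.Conv2D(8, 3, activation="relu", padding="same")(encoded)
    x = layers.UpSampling2D(2)(x)                          # 14x14
    x = layers.Conv2D(16, 3, activation="relu", padding="same")(x)
    x = layers.UpSampling2D(2)(x)                          # 28x28
    outputs = layers.Conv2D(1, 3, activation="sigmoid", padding="same")(x)
    model = models.Model(inputs, outputs)
    # Gradient-descent training minimizes the reconstruction loss
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model

(x_train, _), (x_test, _) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0
x_test = x_test[..., None] / 255.0

autoencoder = build_conv_autoencoder()
# Input and target are the same image: the network learns to encode and decode
autoencoder.fit(x_train, x_train, epochs=5, batch_size=128,
                validation_data=(x_test, x_test))
```

After training, an SME can compare `autoencoder.predict(x_test)` against the originals to judge, qualitatively, how much information the compressed code loses.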
Reverse Inference Example Software
This points to example software on GitHub where you can see some of these concepts in action. Here is the link: https://github.com/prof-nussbaum/Reverse_Inference
Reverse_Inference: working backwards through a deep convolutional network to recreate the input image, and to identify areas for improvement. Please see this article for more details on the technique, which I call "Reading the Robot Mind": https://readingtherobotmind.blogspot.com/2021/03/reading-robot-mind-deep-convolution.html
What if we could read the robot's mind? It sounds like a silly question, but if you could read a person's mind, you could "see" what they were thinking of when they mention a classification. If they say "I am seeing a dog," reading their mind would give you additional details about the dog, or perhaps even let you see what they are seeing. The premise is the same for Artificial Intelligence and Machine Learning. I know that deep conv...
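The repository's exact method is its own; as a rough sketch of the "work backwards" idea, the snippet below runs gradient ascent on the *input* of a trained classifier so a chosen class score rises, recreating what the network "pictures" for that class. The classifier `clf` and the helper name `reverse_infer` are hypothetical, assumed only for illustration.

```python
# Sketch of working backwards through a network: start from noise and nudge
# the input (not the weights) so the classifier's chosen class score rises.
# The actual Reverse_Inference code on GitHub may differ in detail.
import tensorflow as tf

def reverse_infer(model, class_index, steps=200, lr=0.1):
    # Begin with a random image the same shape as the model's input
    image = tf.Variable(tf.random.uniform((1, 28, 28, 1)))
    for _ in range(steps):
        with tf.GradientTape() as tape:
            score = model(image, training=False)[0, class_index]
        grad = tape.gradient(score, image)
        # Normalize the gradient, then step the input toward a higher score
        grad /= tf.norm(grad) + 1e-8
        image.assign_add(lr * grad)
        image.assign(tf.clip_by_value(image, 0.0, 1.0))
    return image.numpy()[0]

# Usage: given a trained MNIST classifier `clf` (hypothetical),
# visualize the network's "mental image" of the digit 3:
# mental_image = reverse_infer(clf, class_index=3)
```

Where the recreated image looks wrong to a human observer, that mismatch points to areas of the network worth improving.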