
DeepLearning.TV's video: Autoencoders - Ep 10 Deep Learning SIMPLIFIED

Autoencoders - Ep. 10 (Deep Learning SIMPLIFIED)
Autoencoders are a family of neural nets that are well suited for unsupervised learning, a method for detecting inherent patterns in a data set. These nets can also be used to label the resulting patterns.

Deep Learning TV on Facebook: https://www.facebook.com/DeepLearningTV/ Twitter: https://twitter.com/deeplearningtv

Essentially, autoencoders reconstruct a data set and, in the process, figure out its inherent structure and extract its important features. An RBM is a type of autoencoder that we have previously discussed, but there are several others. Autoencoders are typically shallow nets, the most common of which have one input layer, one hidden layer, and one output layer. Some nets, like the RBM, have only two layers instead of three. Input signals are encoded along the path to the hidden layer, and those same signals are decoded along the path to the output layer. Like the RBM, the autoencoder can be thought of as a two-way translator.

Autoencoders are trained with backpropagation and a new concept known as loss. Loss measures the amount of information about the input that is lost through the encoding-decoding process. The lower the loss value, the stronger the net. (A minimal code sketch of this training setup appears after the credits below.)

Some autoencoders have a very deep structure, with an equal number of layers for encoding and decoding. A key application for deep autoencoders is dimensionality reduction. For example, these nets can transform a 256x256 pixel image into a representation with only 30 numbers. The image can then be reconstructed with the appropriate weights and biases; in addition, some nets inject random noise at this stage in order to make the discovered patterns more robust. The reconstructed image won't be perfect, but it will be a decent approximation, depending on the strength of the net. The purpose of this compression is to reduce the size of the input before feeding it to a deep classifier. Smaller inputs lead to large computational speedups, so this preprocessing step is worth the effort.

Have you ever used an autoencoder to reduce the dimensionality of your data? Please comment and share your experiences.

Deep autoencoders are much more powerful than their predecessor, principal component analysis (PCA). In the video, you'll see a comparison of the codes each model produces for news stories on different topics; of the two models, the deep autoencoder proves far superior.

Credits:
Nickey Pickorita (YouTube art) - https://www.upwork.com/freelancers/~0147b8991909b20fca
Isabel Descutner (Voice) - https://www.youtube.com/user/IsabelDescutner
Dan Partynski (Copy Editing) - https://www.linkedin.com/in/danielpartynski
Marek Scibior (Prezi creator) - http://brawuroweprezentacje.pl/
Jagannath Rajagopal (Creator, Producer and Director) - https://ca.linkedin.com/in/jagannathrajagopal
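To make the encode-decode-reconstruct loop concrete, here is a minimal sketch of a shallow autoencoder. It is not the video's own code: the PyTorch framework, the 784-input/30-unit layer sizes (echoing the "image down to 30 numbers" example), and the noise level are all illustrative assumptions.

```python
# A minimal sketch (assumed sizes, not the video's code): a shallow
# autoencoder with one input layer, one hidden (bottleneck) layer, and
# one output layer, trained with backpropagation on a reconstruction loss.
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, n_inputs=784, n_code=30):
        super().__init__()
        # Encoder: input -> hidden code (the "encoding" path)
        self.encoder = nn.Sequential(nn.Linear(n_inputs, n_code), nn.Sigmoid())
        # Decoder: hidden code -> reconstruction (the "decoding" path)
        self.decoder = nn.Sequential(nn.Linear(n_code, n_inputs), nn.Sigmoid())

    def forward(self, x):
        code = self.encoder(x)     # compress to 30 numbers
        return self.decoder(code)  # reconstruct the input

model = Autoencoder()
loss_fn = nn.MSELoss()  # "loss" = how much input information was lost
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.rand(64, 784)  # stand-in batch; real image data would go here
for step in range(100):
    # Optional noise injection, as in denoising autoencoders
    x_noisy = x + 0.1 * torch.randn_like(x)
    x_hat = model(x_noisy)
    loss = loss_fn(x_hat, x)  # lower loss = stronger net
    optimizer.zero_grad()
    loss.backward()           # backpropagation
    optimizer.step()

# After training, model.encoder(x) yields the compact 30-number code,
# which can be fed to a downstream classifier for a computational speedup.
```

Once the reconstruction loss is low, the encoder half alone serves as the dimensionality-reduction step the video describes: the 30-number code carries most of the information in the original input.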

This video was published on 2015-12-23 01:10:51 GMT by @DeepLearning.TV on YouTube. DeepLearning.TV has 81.1K subscribers and 31 videos in total, averaging 128K views per video. This video received 758 likes and 33 comments, both below the channel's averages, and its view count was also below the channel average.
