I am loving Term 2. It starts with the basics of neural networks and an introduction to Multilayer Perceptrons (MLPs), where you build a basic neural network that you can train and use to predict either categorical values (classification) or numerical values (regression). Its power and limitations are presented through very fun exercises analyzing IMDB and image data.
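To give a flavor of what "predicting categorical or numerical values" means mechanically, here is a minimal NumPy sketch (not the course's code, and untrained random weights) of an MLP forward pass with a classification head and a regression head sharing the same hidden layer:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    # Subtract the row max for numerical stability before exponentiating.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Toy MLP: 4 input features -> 8 hidden units -> output head.
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)   # 3 classes
Wr, br = rng.normal(size=(8, 1)), np.zeros(1)   # 1 numeric output

def forward_classify(X):
    h = relu(X @ W1 + b1)
    return softmax(h @ W2 + b2)   # class probabilities per sample

def forward_regress(X):
    h = relu(X @ W1 + b1)
    return h @ Wr + br            # raw numeric prediction per sample

X = rng.normal(size=(5, 4))       # batch of 5 samples
probs = forward_classify(X)
preds = forward_regress(X)
print(probs.shape)                # (5, 3); each row sums to ~1.0
print(preds.shape)                # (5, 1)
```

Training (backpropagation) is what the course exercises add on top; the only difference between the two tasks here is the output head and the loss you would pair with it.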
MLPs are followed by an introduction to Convolutional Neural Networks (CNNs), by far the most fun I’ve had in this course. CNNs are mostly used for image recognition, and you get to experience building image recognition networks while working on a project to classify dog breeds from a set of provided images. Really cool and fun project. It barely scratches the surface of what’s possible, but you get a very nice intro and can then branch out on your own.
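The core operation that makes CNNs good at images is the convolution: sliding a small kernel over the image so the same pattern detector is reused at every location. A hand-rolled NumPy sketch (a toy, not how a real framework implements it) with a vertical-edge kernel:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (really cross-correlation, as in most DL libraries)."""
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge detector: responds where brightness changes left to right.
edge_kernel = np.array([[1.0, 0.0, -1.0],
                        [1.0, 0.0, -1.0],
                        [1.0, 0.0, -1.0]])

# Toy 6x6 "image": dark left half, bright right half.
img = np.zeros((6, 6))
img[:, 3:] = 1.0

fmap = conv2d(img, edge_kernel)
print(fmap.shape)   # (4, 4) feature map; strongest response along the edge
```

A CNN stacks many such learned kernels with pooling and nonlinearities, so early layers find edges and later layers find dog-shaped combinations of them.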
The CNN intro was followed by Recurrent Neural Networks (RNNs). The application here focuses on language and character prediction tasks, with a project asking you to implement a model capable of generating English words. It was really cool how effective the model was, and how it was able to generate English-looking words after just a few hours of toying around with training and parameters. This article http://karpathy.github.io/2015/05/21/rnn-effectiveness/ contains really cool info on RNNs.
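The character-prediction idea is simple at its core: an RNN keeps a hidden state, reads one character at a time, and predicts the next one; generation just feeds each prediction back in. A vanilla RNN cell in NumPy (untrained random weights, so the output is gibberish until you train it — this is only the recurrence, not the course's actual model):

```python
import numpy as np

rng = np.random.default_rng(1)
chars = list("abcdefghijklmnopqrstuvwxyz ")
V = len(chars)                       # vocabulary size
H = 16                               # hidden state size

# Randomly initialized weights -- a toy, just to show the recurrence.
Wxh = rng.normal(scale=0.1, size=(H, V))
Whh = rng.normal(scale=0.1, size=(H, H))
Why = rng.normal(scale=0.1, size=(V, H))
bh, by = np.zeros(H), np.zeros(V)

def step(h, x_idx):
    """One RNN time step: update the hidden state, return scores for the next char."""
    x = np.zeros(V)
    x[x_idx] = 1.0                   # one-hot encode the current character
    h = np.tanh(Wxh @ x + Whh @ h + bh)
    return h, Why @ h + by

# Generate 10 characters by feeding each prediction back in (greedy decoding).
h, idx, out = np.zeros(H), chars.index("t"), []
for _ in range(10):
    h, scores = step(h, idx)
    idx = int(np.argmax(scores))
    out.append(chars[idx])
print("".join(out))
```

Training adjusts the three weight matrices so that `step` assigns high scores to plausible next characters; the Karpathy post linked above shows how far this same loop can go once trained on real text.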
After these topics, you reach a point in the degree where you have to select a specialization. I went with Computer Vision, as that is the area I find really fascinating and exciting. The other choices are Natural Language Processing (NLP) and Voice User Interfaces. NLP is probably the most useful, but I am trying to stay away from it because I find everything to do with “language” very boring. It’s something I will get over, but right now I just want to have a lot of fun and learn, and I know Computer Vision will provide both.
The plan right now is to finish up the lectures and get the project done (I hear it’s short, a weekend type of endeavor), and then I’m done! Can’t wait to graduate!