Overview of Andrew Ng’s deeplearning.ai courses

In this post, I’ll briefly write about Andrew Ng’s recently launched Coursera courses on neural networks and deep learning, which I just finished with certificates. I want to argue that there’s merit in taking these courses even if you’re already familiar with a good portion of the syllabi.

First, a short relevant intro about myself, then the overview.

Relevant background

Back in 2014, the first machine learning course I took was Andrew Ng’s very famous introductory Coursera course. I became very interested, mainly because with simple math and programming one could build models that solve various important tasks. Later, by taking more academic courses, I started entertaining the idea of working in the ML field. I have now been working in the ML and Big Data areas for quite some time. I am also a deep learning enthusiast, starting to get a firm grasp of the practical and theoretical aspects of DL. My current position requires switching between two hats frequently: the researcher hat and the software engineer hat.

In short, the research role requires staying up to date with ML research trends and being able to assess many academic results. The software engineer role requires judging which results are valuable in a real production environment, in order to bring them into existing software ecosystems / platforms and ultimately create / enhance ML products.

Given that, I’ve had some previous exposure to DL and to training deep NNs, but I want to emphasize 1) going back to basics frequently to close some of the learning gaps, and 2) never missing the opportunity to learn from the key people in the area you’re working in. These two were my motivations for taking Andrew Ng’s DL Coursera courses.

My overview

At the time of writing this post, 3 out of the 5 courses had been launched. I’ll update this post when courses 4 and 5 become available and I finish them. I should also mention that I skipped the lectures of course 1 and watched the lectures of courses 2 and 3 at 1.25x / 1.5x speed.

  • Course 1 (Neural networks and deep learning): What I really liked and would definitely recommend is that when you want to learn ML/DL, you start by coding everything at the low level of NumPy and don’t jump straight into Keras (as opposed to fast.ai’s approach; I’ll come back to that later). This is exactly how the assignments were designed, though with an inevitable amount of boilerplate code; see the short NumPy sketch after this list for the kind of code I mean.
  • Course 2 (Improving deep neural networks): It gives you the necessary intuitions about improving and tuning deep networks from different perspectives. Again, I liked the practical aspects of implementing various tuning, regularization, and optimization techniques in the assignments. The last assignment introduces TensorFlow; I was expecting to get to TensorFlow much sooner, though.
  • Course 3 (Structuring machine learning projects): I enjoyed this course more than the first two, mainly because it taught me things that don’t exist in any literature yet are extremely important from both the research and engineering sides. The course offers techniques for the critical problems that arise when you want to design/architect or assess ML/DL projects, as well as how to prioritize which directions to take in different scenarios.
  • One highlight of the courses that I also enjoyed is the series of interviews with DL heroes, from Geoffrey Hinton to other key researchers.
  • I like the fact that Andrew Ng brings in his own terminology and notation.
  • Assignments were straightforward and nicely designed in Python.
  • There are a few typos and solution mismatches in the assignments that will be corrected over time.
  • A nitpick in terms of Python coding style: I always advocate keeping PEP 8 in mind, e.g. keyword arguments and default parameter values shouldn’t have extra whitespace around the equals sign; write f(x=1), not f(x = 1).
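
To illustrate what I mean by working at the NumPy level in course 1, here is a minimal sketch of my own (not the assignment code) of one forward/backward pass of logistic regression, using the course’s convention of stacking examples as columns of X:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def propagate(w, b, X, Y):
    """One forward/backward pass of logistic regression.
    X has shape (n_features, m_examples), Y has shape (1, m_examples)."""
    m = X.shape[1]
    A = sigmoid(np.dot(w.T, X) + b)                             # forward: predictions
    cost = -np.mean(Y * np.log(A) + (1 - Y) * np.log(1 - A))    # cross-entropy loss
    dw = np.dot(X, (A - Y).T) / m                               # backward: gradient w.r.t. w
    db = np.mean(A - Y)                                         # backward: gradient w.r.t. b
    return dw, db, cost
```

Writing even this much by hand forces you to get the shapes, the vectorization, and the gradients right, which is exactly the point of staying at this level before moving to a framework.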

Comparison with the fast.ai DL course

I finished watching the lectures of the first fast.ai course when they launched earlier this year, and recently their latest course as well. I can understand the value of their approach and why it works for those with less exposure to math; however, for me it was rather disappointing and not satisfying at all. From early on in part 1, building a cat vs. dog classifier using VGG16 in Keras in a few lines of code showed me how many important details were being kept away from me (behind so many abstractions), and that worried me more than it gave me confidence. Part 2 of their lectures, however, is more appealing to me.
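
For reference, this is roughly what I mean by “a few lines of code”: a hypothetical Keras sketch in the spirit of that lesson-1 exercise (not the course’s actual notebook; the data path and layer choices are my own), where almost everything interesting is hidden behind library calls:

```python
# Hypothetical VGG16 cats-vs-dogs classifier in Keras 2 (2017-era API).
from keras.applications.vgg16 import VGG16
from keras.layers import Dense, Flatten
from keras.models import Model
from keras.preprocessing.image import ImageDataGenerator

# Pretrained convolutional base plus a small classification head.
base = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
x = Flatten()(base.output)
out = Dense(2, activation='softmax')(x)            # cats vs. dogs
model = Model(inputs=base.input, outputs=out)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

# 'data/train' is a placeholder directory with one subfolder per class.
train = ImageDataGenerator(rescale=1. / 255).flow_from_directory(
    'data/train', target_size=(224, 224), batch_size=32)
model.fit_generator(train, steps_per_epoch=100, epochs=1)
```

It works, but the architecture, the pretraining, the preprocessing, and the optimization are all invisible, and that is exactly the abstraction trade-off I’m describing.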

So I think, in the end, it boils down to whether you know from the start that you want a deeper understanding (Andrew Ng’s courses), or you’d rather begin with very simplified applications without caring about the details and then gradually learn the techniques to better understand DL (fast.ai courses).

Recommendations

Finally, here is my list of other DL courses/books that I enjoyed:

 
