YouTube: DeepLearning.AI

Neural Networks and Deep Learning (Course 1 of the Deep Learning Specialization)

  • Welcome (Deep Learning Specialization C1W1L01)
  • What is a Neural Network? (C1W1L02)
  • Supervised Learning with a Neural Network (C1W1L03)
  • Why is deep learning taking off? (C1W1L04)
  • About This Course (C1W1L05)
  • Course Resources (C1W1L06)
  • Binary Classification (C1W2L01)
  • Logistic Regression (C1W2L02)
  • Logistic Regression Cost Function (C1W2L03)
  • Gradient Descent (C1W2L04)
  • Derivatives (C1W2L05)
  • More Derivative Examples (C1W2L06)
  • Computation Graph (C1W2L07)
  • Derivatives With Computation Graphs (C1W2L08)
  • Logistic Regression Gradient Descent (C1W2L09)
  • Gradient Descent on m Examples (C1W2L10)
  • Vectorization (C1W2L11)
  • More Vectorization Examples (C1W2L12)
  • Vectorizing Logistic Regression (C1W2L13)
  • Vectorizing Logistic Regression’s Gradient Computation (C1W2L14)
  • Broadcasting in Python (C1W2L15)
  • A Note on Python/Numpy Vectors (C1W2L16)
  • Quick Tour of Jupyter/iPython Notebooks (C1W2L17)
  • Explanation of Logistic Regression’s Cost Function (C1W2L18)
  • Neural Network Overview (C1W3L01)
  • Neural Network Representations (C1W3L02)
  • Computing Neural Network Output (C1W3L03)
  • Vectorizing Across Multiple Examples (C1W3L04)
  • Explanation For Vectorized Implementation (C1W3L05)
  • Activation Functions (C1W3L06)
  • Why Non-linear Activation Functions (C1W3L07)
  • Derivatives Of Activation Functions (C1W3L08)
  • Gradient Descent For Neural Networks (C1W3L09)
  • Backpropagation Intuition (C1W3L10)
  • Random Initialization (C1W3L11)
  • Deep L-Layer Neural Network (C1W4L01)
  • Forward Propagation in a Deep Network (C1W4L02)
  • Getting Matrix Dimensions Right (C1W4L03)
  • Why Deep Representations? (C1W4L04)
  • Building Blocks of a Deep Neural Network (C1W4L05)
  • Forward and Backward Propagation (C1W4L06)
  • Parameters vs Hyperparameters (C1W4L07)
  • What does this have to do with the brain? (C1W4L08)
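The Week 2 videos above (C1W2L11 through C1W2L14) are about replacing explicit Python loops with NumPy matrix operations. As a minimal, illustrative sketch (not code from the course), here is one vectorized gradient-descent step for logistic regression over all m examples at once, using the column-per-example convention the lectures adopt:

```python
import numpy as np

def sigmoid(z):
    """Logistic sigmoid, applied elementwise."""
    return 1.0 / (1.0 + np.exp(-z))

def logistic_grad_step(w, b, X, y, lr=0.1):
    """One vectorized gradient-descent step over all m examples.

    X has shape (n_features, m) and y has shape (1, m), i.e. one
    column per example, as in the lectures. No loop over examples.
    """
    m = X.shape[1]
    a = sigmoid(w.T @ X + b)      # predictions, shape (1, m)
    dz = a - y                    # dL/dz for the cross-entropy loss
    dw = (X @ dz.T) / m           # shape (n_features, 1)
    db = float(np.sum(dz)) / m
    return w - lr * dw, b - lr * db
```

Broadcasting (C1W2L15) is what lets the scalar `b` be added to the whole (1, m) row of scores without an explicit loop.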

Improving Deep Neural Networks: Hyperparameter Tuning, Regularization and Optimization (Course 2 of the Deep Learning Specialization)

  • Train/Dev/Test Sets (C2W1L01)
  • Bias/Variance (C2W1L02)
  • Basic Recipe for Machine Learning (C2W1L03)
  • Regularization (C2W1L04)
  • Why Regularization Reduces Overfitting (C2W1L05)
  • Dropout Regularization (C2W1L06)
  • Understanding Dropout (C2W1L07)
  • Other Regularization Methods (C2W1L08)
  • Normalizing Inputs (C2W1L09)
  • Vanishing/Exploding Gradients (C2W1L10)
  • Weight Initialization in a Deep Network (C2W1L11)
  • Numerical Approximations of Gradients (C2W1L12)
  • Gradient Checking (C2W1L13)
  • Gradient Checking Implementation Notes (C2W1L14)
  • Mini Batch Gradient Descent (C2W2L01)
  • Understanding Mini-Batch Gradient Descent (C2W2L02)
  • Exponentially Weighted Averages (C2W2L03)
  • Understanding Exponentially Weighted Averages (C2W2L04)
  • Bias Correction of Exponentially Weighted Averages (C2W2L05)
  • Gradient Descent With Momentum (C2W2L06)
  • RMSProp (C2W2L07)
  • Adam Optimization Algorithm (C2W2L08)
  • Learning Rate Decay (C2W2L09)
  • Tuning Process (C2W3L01)
  • Using an Appropriate Scale (C2W3L02)
  • Hyperparameter Tuning in Practice (C2W3L03)
  • Normalizing Activations in a Network (C2W3L04)
  • Fitting Batch Norm Into Neural Networks (C2W3L05)
  • Why Does Batch Norm Work? (C2W3L06)
  • Batch Norm At Test Time (C2W3L07)
  • Softmax Regression (C2W3L08)
  • Training Softmax Classifier (C2W3L09)
  • The Problem of Local Optima (C2W3L10)
  • TensorFlow (C2W3L11)
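The Week 2 videos above (C2W2L03 through C2W2L05) build momentum, RMSProp, and Adam on top of exponentially weighted averages. A minimal sketch of the running average with the bias correction the lectures describe (illustrative code, not from the course):

```python
def ewa(xs, beta=0.9, bias_correct=True):
    """Exponentially weighted average of a sequence of numbers.

    v_t = beta * v_{t-1} + (1 - beta) * x_t, optionally divided by
    (1 - beta**t) to correct the zero-initialization bias at early t.
    """
    v = 0.0
    out = []
    for t, x in enumerate(xs, start=1):
        v = beta * v + (1.0 - beta) * x
        out.append(v / (1.0 - beta ** t) if bias_correct else v)
    return out
```

On a constant sequence the corrected average is exact from the first step, whereas the uncorrected one starts far too low; that is precisely the warm-up problem bias correction addresses.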

Structuring Machine Learning Projects (Course 3 of the Deep Learning Specialization)

  • Why ML Strategy (C3W1L01)
  • Orthogonalization (C3W1L02)
  • Single Number Evaluation Metric (C3W1L03)
  • Satisficing and Optimizing Metrics (C3W1L04)
  • Train/Dev/Test Set Distributions (C3W1L05)
  • Size of Dev and Test Sets (C3W1L06)
  • When to Change Dev/Test Sets (C3W1L07)
  • Why Human-Level Performance? (C3W1L08)
  • Avoidable Bias (C3W1L09)
  • Understanding Human-Level Performance (C3W1L10)
  • Surpassing Human-Level Performance (C3W1L11)
  • Improving Model Performance (C3W1L12)
  • Carrying Out Error Analysis (C3W2L01)
  • Cleaning Up Incorrectly Labelled Data (C3W2L02)
  • Build First System Quickly, Then Iterate (C3W2L03)
  • Training and Testing on Different Distributions (C3W2L04)
  • Bias and Variance With Mismatched Data (C3W2L05)
  • Addressing Data Mismatch (C3W2L06)
  • Transfer Learning (C3W2L07)
  • Multitask Learning (C3W2L08)
  • What is end-to-end deep learning? (C3W2L09)
  • Whether to Use End-To-End Deep Learning (C3W2L10)
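The Week 1 videos above (Avoidable Bias in particular) compare human-level, training, and dev error: training error minus human-level error is "avoidable bias", dev error minus training error is variance, and the larger gap suggests where to focus. A tiny illustrative helper (the function name and return format are ours, not the course's):

```python
def diagnose(human_err, train_err, dev_err):
    """Decompose errors into avoidable bias and variance, as in C3W1.

    Returns the two gaps and which one is larger, i.e. which problem
    the course's heuristic says to work on first.
    """
    avoidable_bias = train_err - human_err
    variance = dev_err - train_err
    focus = "avoidable bias" if avoidable_bias >= variance else "variance"
    return avoidable_bias, variance, focus
```

For example, with 1% human error, 8% training error, and 10% dev error, the 7% avoidable bias dwarfs the 2% variance, so bias-reduction tactics (bigger model, longer training) come first.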

Convolutional Neural Networks (Course 4 of the Deep Learning Specialization)

  • Computer Vision (C4W1L01)
  • Edge Detection Examples (C4W1L02)
  • More Edge Detection (C4W1L03)
  • Padding (C4W1L04)
  • Strided Convolutions (C4W1L05)
  • Convolutions Over Volumes (C4W1L06)
  • One Layer of a Convolutional Net (C4W1L07)
  • Simple Convolutional Network Example (C4W1L08)
  • Pooling Layers (C4W1L09)
  • CNN Example (C4W1L10)
  • Why Convolutions (C4W1L11)
  • Why look at case studies? (C4W2L01)
  • Classic Network (C4W2L02)
  • ResNets (C4W2L03)
  • Why ResNets Work (C4W2L04)
  • Network In Network (C4W2L05)
  • Inception Network Motivation (C4W2L06)
  • Inception Network (C4W2L07)
  • Using Open Source Implementation (C4W2L08)
  • Transfer Learning (C4W2L09)
  • Data Augmentation (C4W2L10)
  • State of Computer Vision (C4W2L11)
  • Object Localization (C4W3L01)
  • Landmark Detection (C4W3L02)
  • Object Detection (C4W3L03)
  • Convolutional Implementation of Sliding Windows (C4W3L04)
  • Bounding Box Predictions (C4W3L05)
  • Intersection Over Union (C4W3L06)
  • Nonmax Suppression (C4W3L07)
  • Anchor Boxes (C4W3L08)
  • YOLO Algorithm (C4W3L09)
  • Region Proposals (C4W3L10)
  • What is face recognition? (C4W4L01)
  • One Shot Learning (C4W4L02)
  • Siamese Network (C4W4L03)
  • Triplet Loss (C4W4L04)
  • Face Verification (C4W4L05)
  • What is neural style transfer? (C4W4L06)
  • What are deep CNs learning? (C4W4L07)
  • Cost Function (C4W4L08)
  • Content Cost Function (C4W4L09)
  • Style Cost Function (C4W4L10)
  • 1D and 3D Generalizations (C4W4L11)
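The object-detection videos above (Intersection Over Union, Nonmax Suppression) score predicted bounding boxes by their overlap ratio with ground truth. A small illustrative implementation for axis-aligned boxes in (x1, y1, x2, y2) form (our convention for this sketch, not necessarily the course's):

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero so disjoint boxes contribute no intersection area.
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

Identical boxes give 1.0, disjoint boxes give 0.0, and a threshold such as 0.5 is the usual "correct detection" cutoff the lectures mention.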

Sequence Models (Course 5 of the Deep Learning Specialization)

  • Basic Models (C5W3L01)
  • Picking the most likely sentence (C5W3L02)
  • Bleu Score (Optional) (C5W3L06)
  • Attention Model Intuition (C5W3L07)
  • Attention Model (C5W3L08)
  • Speech Recognition (C5W3L09)
