{"id":11012,"date":"2020-04-30T09:08:50","date_gmt":"2020-04-30T09:08:50","guid":{"rendered":"http:\/\/blog.bachi.net\/?p=11012"},"modified":"2020-04-30T09:13:30","modified_gmt":"2020-04-30T09:13:30","slug":"youtube-deeplearning-ai","status":"publish","type":"post","link":"https:\/\/blog.bachi.net\/?p=11012","title":{"rendered":"YouTube: Deeplearning.ai"},"content":{"rendered":"<p><!-- ---------------------------------------------------------------------------------------------- --><\/p>\n<h1>Neural Networks and Deep Learning (Course 1 of the Deep Learning Specialization)<\/h1>\n<p><a href=\"https:\/\/www.youtube.com\/playlist?list=PLkDaE6sCZn6Ec-XTbcX1uRg2_u4xOEky0\">Neural Networks and Deep Learning (Course 1 of the Deep Learning Specialization)<\/a><\/p>\n<ul>\n<li>Welcome (Deep Learning Specialization C1W1L01)<\/li>\n<li>What is a Neural Network? (C1W1L02)<\/li>\n<li>Supervised Learning with a Neural Network (C1W1L03)<\/li>\n<li>Why is deep learning taking off? (C1W1L04)<\/li>\n<li>About This Course (C1W1L05)<\/li>\n<li>Course Resources (C1W1L06)<\/li>\n<li>Binary Classification (C1W2L01)<\/li>\n<li>Logistic Regression (C1W2L02)<\/li>\n<li>Logistic Regression Cost Function (C1W2L03)<\/li>\n<li>Gradient Descent (C1W2L04)<\/li>\n<li>Derivatives (C1W2L05)<\/li>\n<li>More Derivative Examples (C1W2L06)<\/li>\n<li>Computation Graph (C1W2L07)<\/li>\n<li>Derivatives With Computation Graphs (C1W2L08)<\/li>\n<li>Logistic Regression Gradient Descent (C1W2L09)<\/li>\n<li>Gradient Descent on m Examples (C1W2L10)<\/li>\n<li>Vectorization (C1W2L11)<\/li>\n<li>More Vectorization Examples (C1W2L12)<\/li>\n<li>Vectorizing Logistic Regression (C1W2L13)<\/li>\n<li>Vectorizing Logistic Regression&#8217;s Gradient Computation (C1W2L14)<\/li>\n<li>Broadcasting in Python (C1W2L15)<\/li>\n<li>A Note on Python\/Numpy Vectors (C1W2L16)<\/li>\n<li>Quick Tour of Jupyter\/iPython Notebooks (C1W2L17)<\/li>\n<li>Explanation of Logistic Regression&#8217;s Cost Function (C1W2L18)<\/li>\n<li>Neural Network Overview (C1W3L01)<\/li>\n<li>Neural Network Representations (C1W3L02)<\/li>\n<li>Computing Neural Network Output (C1W3L03)<\/li>\n<li>Vectorizing Across Multiple Examples (C1W3L04)<\/li>\n<li>Explanation For Vectorized Implementation (C1W3L05)<\/li>\n<li>Activation Functions (C1W3L06)<\/li>\n<li>Why Non-linear Activation Functions (C1W3L07)<\/li>\n<li>Derivatives Of Activation Functions (C1W3L08)<\/li>\n<li>Gradient Descent For Neural Networks (C1W3L09)<\/li>\n<li>Backpropagation Intuition (C1W3L10)<\/li>\n<li>Random Initialization (C1W3L11)<\/li>\n<li>Deep L-Layer Neural Network (C1W4L01)<\/li>\n<li>Forward Propagation in a Deep Network (C1W4L02)<\/li>\n<li>Getting Matrix Dimensions Right (C1W4L03)<\/li>\n<li>Why Deep Representations? (C1W4L04)<\/li>\n<li>Building Blocks of a Deep Neural Network (C1W4L05)<\/li>\n<li>Forward and Backward Propagation (C1W4L06)<\/li>\n<li>Parameters vs Hyperparameters (C1W4L07)<\/li>\n<li>What does this have to do with the brain? 
## Improving Deep Neural Networks: Hyperparameter Tuning, Regularization and Optimization (Course 2 of the Deep Learning Specialization)

[YouTube playlist](https://www.youtube.com/playlist?list=PLkDaE6sCZn6Hn0vK8co82zjQtt3T2Nkqc)

- Train/Dev/Test Sets (C2W1L01)
- Bias/Variance (C2W1L02)
- Basic Recipe for Machine Learning (C2W1L03)
- Regularization (C2W1L04)
- Why Regularization Reduces Overfitting (C2W1L05)
- Dropout Regularization (C2W1L06)
- Understanding Dropout (C2W1L07)
- Other Regularization Methods (C2W1L08)
- Normalizing Inputs (C2W1L09)
- Vanishing/Exploding Gradients (C2W1L10)
- Weight Initialization in a Deep Network (C2W1L11)
- Numerical Approximations of Gradients (C2W1L12)
- Gradient Checking (C2W1L13)
- Gradient Checking Implementation Notes (C2W1L14)
- Mini-Batch Gradient Descent (C2W2L01)
- Understanding Mini-Batch Gradient Descent (C2W2L02)
- Exponentially Weighted Averages (C2W2L03)
- Understanding Exponentially Weighted Averages (C2W2L04)
- Bias Correction of Exponentially Weighted Averages (C2W2L05)
- Gradient Descent With Momentum (C2W2L06)
- RMSProp (C2W2L07)
- Adam Optimization Algorithm (C2W2L08)
- Learning Rate Decay (C2W2L09)
- Tuning Process (C2W3L01)
- Using an Appropriate Scale (C2W3L02)
- Hyperparameter Tuning in Practice (C2W3L03)
- Normalizing Activations in a Network (C2W3L04)
- Fitting Batch Norm Into Neural Networks (C2W3L05)
- Why Does Batch Norm Work? (C2W3L06)
- Batch Norm At Test Time (C2W3L07)
- Softmax Regression (C2W3L08)
- Training Softmax Classifier (C2W3L09)
- The Problem of Local Optima (C2W3L10)
- TensorFlow (C2W3L11)
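Videos C2W2L03 through C2W2L08 build Adam out of exponentially weighted averages, bias correction, momentum, and RMSProp. A minimal NumPy sketch of one parameter update using the commonly cited defaults (beta1 = 0.9, beta2 = 0.999); the function name `adam_update` and the toy objective are assumptions for illustration:

```python
import numpy as np

def adam_update(param, grad, m, v, t, lr=0.001,
                beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam step. m, v: running first/second moments; t: step count from 1."""
    m = beta1 * m + (1 - beta1) * grad       # EWA of gradients (momentum)
    v = beta2 * v + (1 - beta2) * grad**2    # EWA of squared gradients (RMSProp)
    m_hat = m / (1 - beta1**t)               # bias correction (C2W2L05)
    v_hat = v / (1 - beta2**t)
    param = param - lr * m_hat / (np.sqrt(v_hat) + eps)
    return param, m, v

# toy usage: minimize f(w) = ||w||^2, whose gradient is 2w
w = np.ones(5)
m, v = np.zeros_like(w), np.zeros_like(w)
for t in range(1, 201):
    w, m, v = adam_update(w, 2 * w, m, v, t, lr=0.05)
```

The bias-correction terms matter mainly in early steps, when the zero-initialized moving averages would otherwise underestimate the true moments.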
## Structuring Machine Learning Projects (Course 3 of the Deep Learning Specialization)

[YouTube playlist](https://www.youtube.com/playlist?list=PLkDaE6sCZn6E7jZ9sN_xHwSHOdjUxUW_b)

- Improving Model Performance (C3W1L01)
- Orthogonalization (C3W1L02)
- Single Number Evaluation Metric (C3W1L03)
- Satisficing and Optimizing Metrics (C3W1L04)
- Train/Dev/Test Set Distributions (C3W1L05)
- Size of Dev and Test Sets (C3W1L06)
- When to Change Dev/Test Sets (C3W1L07)
- Why Human-Level Performance? (C3W1L08)
- Avoidable Bias (C3W1L09)
- Understanding Human-Level Performance (C3W1L10)
- Surpassing Human-Level Performance (C3W1L11)
- Improving Model Performance (C3W1L12)
- Carrying Out Error Analysis (C3W2L01)
- Cleaning Up Incorrectly Labelled Data (C3W2L02)
- Build First System Quickly, Then Iterate (C3W2L03)
- Training and Testing on Different Distributions (C3W2L04)
- Bias and Variance With Mismatched Data (C3W2L05)
- Addressing Data Mismatch (C3W2L06)
- Transfer Learning (C3W2L07)
- Multitask Learning (C3W2L08)
- What is end-to-end deep learning? (C3W2L09)
- Whether to Use End-To-End Deep Learning (C3W2L10)

## Convolutional Neural Networks (Course 4 of the Deep Learning Specialization)

[YouTube playlist](https://www.youtube.com/playlist?list=PLkDaE6sCZn6Gl29AoE31iwdVwSG-KnDzF)

- Computer Vision (C4W1L01)
- Edge Detection Examples (C4W1L02)
- More Edge Detection (C4W1L03)
- Padding (C4W1L04)
- Strided Convolutions (C4W1L05)
- Convolutions Over Volumes (C4W1L06)
- One Layer of a Convolutional Net (C4W1L07)
- Simple Convolutional Network Example (C4W1L08)
- Pooling Layers (C4W1L09)
- CNN Example (C4W1L10)
- Why Convolutions (C4W1L11)
- Why look at case studies? (C4W2L01)
- Classic Network (C4W2L02)
- ResNets (C4W2L03)
- Why ResNets Work (C4W2L04)
- Network In Network (C4W2L05)
- Inception Network Motivation (C4W2L06)
- Inception Network (C4W2L07)
- Using Open Source Implementation (C4W2L08)
- Transfer Learning (C4W2L09)
- Data Augmentation (C4W2L10)
- State of Computer Vision (C4W2L11)
- Object Localization (C4W3L01)
- Landmark Detection (C4W3L02)
- Object Detection (C4W3L03)
- Convolutional Implementation of Sliding Windows (C4W3L04)
- Intersection Over Union (C4W3L06)
- Nonmax Suppression (C4W3L07)
- Anchor Boxes (C4W3L08)
- YOLO Algorithm (C4W3L09)
- Region Proposals (C4W3L10)
- What is face recognition? (C4W4L01)
- One Shot Learning (C4W4L02)
- Siamese Network (C4W4L03)
- Triplet Loss (C4W4L04)
- Face Verification (C4W4L05)
- What is neural style transfer? (C4W4L06)
- What are deep CNs learning? (C4W4L07)
- Cost Function (C4W4L08)
- Content Cost Function (C4W4L09)
- Style Cost Function (C4W4L10)
- 1D and 3D Generalizations (C4W4L11)
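The week-1 videos of course 4 (C4W1L02 through C4W1L05) define the convolution operation through edge-detection filters, padding, and strides. A naive single-channel NumPy sketch of that operation, written with explicit loops for clarity rather than speed; the function name `conv2d_single` is an assumption, while the kernel is the vertical-edge detector used in the edge-detection lectures:

```python
import numpy as np

def conv2d_single(image, kernel, stride=1, pad=0):
    """Strided cross-correlation of one 2-D channel (C4W1L04-L05).
    Output size per dim: (n + 2*pad - f) // stride + 1."""
    if pad:
        image = np.pad(image, pad)     # zero-pad both spatial dims
    kh, kw = kernel.shape
    out_h = (image.shape[0] - kh) // stride + 1
    out_w = (image.shape[1] - kw) // stride + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = image[i*stride:i*stride+kh, j*stride:j*stride+kw]
            out[i, j] = (patch * kernel).sum()  # elementwise product, then sum
    return out

# vertical-edge detector from C1W1L02-L03 of this course
vertical_edge = np.array([[1., 0., -1.],
                          [1., 0., -1.],
                          [1., 0., -1.]])
rng = np.random.default_rng(0)
edges = conv2d_single(rng.standard_normal((6, 6)), vertical_edge, stride=1, pad=1)
```

With `pad=1` and `stride=1` this is the "same" convolution from the padding video: a 6x6 input and 3x3 filter give a 6x6 output.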
## Sequence Models (Course 5 of the Deep Learning Specialization)

[YouTube playlist](https://www.youtube.com/playlist?list=PLkDaE6sCZn6F6wUI9tvS_Gw1vaFAx6rd6)

- Basic Models (C5W3L01)
- Picking the most likely sentence (C5W3L02)
- Bleu Score (Optional) (C5W3L06)
- Attention Model Intuition (C5W3L07)
- Attention Model (C5W3L08)
- Speech Recognition (C5W3L09)
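C5W3L07 and C5W3L08 present the attention model, in which the decoder consumes a context vector formed as a softmax-weighted sum of encoder activations. A tiny NumPy sketch of just that weighting step; the names `scores` and `values` and the toy shapes are assumptions for illustration:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over the last axis."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention_context(scores, values):
    """Context vector c = sum_t alpha_t * a_t  (C5W3L07-L08).
    scores: (T,) unnormalized relevance of each encoder step,
    values: (T, d) encoder activations."""
    alphas = softmax(scores)   # attention weights, nonnegative and summing to 1
    return alphas @ values     # weighted sum, shape (d,)

# toy usage: 4 encoder steps with 3-dimensional activations
rng = np.random.default_rng(0)
c = attention_context(rng.standard_normal(4), rng.standard_normal((4, 3)))
```

In the full model the scores themselves come from a small learned network over the decoder state and each encoder activation; here they are left as given inputs.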