{"id":11038,"date":"2020-05-05T18:55:41","date_gmt":"2020-05-05T18:55:41","guid":{"rendered":"http:\/\/blog.bachi.net\/?p=11038"},"modified":"2020-05-05T18:55:41","modified_gmt":"2020-05-05T18:55:41","slug":"youtube-nptel-noc-iitm","status":"publish","type":"post","link":"https:\/\/blog.bachi.net\/?p=11038","title":{"rendered":"YouTube: NPTEL-NOC IITM"},"content":{"rendered":"<h2>Machine Learning for Engineering and Science Applications<\/h2>\n<p><a href=\"https:\/\/www.youtube.com\/playlist?list=PLyqSpQzTE6M-SISTunGRBRiZk7opYBf_K\">Machine Learning for Engineering and Science Applications<\/a><\/p>\n<ul>\n<li>Machine Learning for Engineering and Science Applications &#8211; Intro Video<\/li>\n<li>Introduction to the Course History of Artificial Intelligence<\/li>\n<li>Overview of Machine Learning<\/li>\n<li>Why Linear Algebra ? Scalars, Vectors, Tensors<\/li>\n<li>Basic Operations<\/li>\n<li>Norms<\/li>\n<li>Linear Combinations Span Linear Independence<\/li>\n<li>Matrix Operations Special Matrices Matrix Decompositions<\/li>\n<li>Introduction to Probability Theory Discrete and Continuous Random Variables<\/li>\n<li>Conditional &#8211; Joint &#8211; Marginal Probabilities Sum Rule and Product Rule Bayes&#8217; Theorem<\/li>\n<li>Bayes&#8217; Theorem &#8211; Simple Examples<\/li>\n<li>Independence Conditional Independence Chain Rule Of Probability<\/li>\n<li>Expectation<\/li>\n<li>Variance Covariance<\/li>\n<li>Some Relations for Expectation and Covariance (Slightly Advanced)<\/li>\n<li>Machine Representation of Numbers, Overflow, Underflow, Condition Number<\/li>\n<li>Derivatives,Gradient,Hessian,Jacobian,Taylor Series<\/li>\n<li>Matrix Calculus (Slightly Advanced)<\/li>\n<li>Optimization \u2013 1 Unconstrained Optimization<\/li>\n<li>Introduction to Constrained Optimization<\/li>\n<li>Introduction to Numerical Optimization Gradient Descent &#8211; 1<\/li>\n<li>Gradient Descent \u2013 2 Proof of Steepest Descent Numerical Gradient Calculation Stopping Criteria<\/li>\n<li>Introduction to Packages<\/li>\n<li>The Learning Paradigm<\/li>\n<li>A Linear Regression Example<\/li>\n<li>Linear Regression Least Squares Gradient Descent<\/li>\n<li>Coding Linear Regression<\/li>\n<li>Generalized Function for Linear Regression<\/li>\n<li>Goodness of Fit<\/li>\n<li>Bias-Variance Trade Off<\/li>\n<li>Gradient Descent Algorithms<\/li>\n<li>Feedforward Neural Network<\/li>\n<li>Structure of an Artificial Neuron<\/li>\n<li>Multinomial Classification &#8211; One Hot Vector<\/li>\n<li>Multinomial Classification- Introduction<\/li>\n<li>XOR Gate<\/li>\n<li>NOR, AND, NAND Gates<\/li>\n<li>OR Gate Via Classification<\/li>\n<li>Logistic Regression<\/li>\n<li>Summary of Week 05<\/li>\n<li>Introduction to back prop<\/li>\n<li>Biological neuron<\/li>\n<li>Schematic of multinomial logistic regression<\/li>\n<li>Multinomial Classification &#8211; Softmax<\/li>\n<li>Code for Logistic Regression<\/li>\n<li>Gradient of logistic regression<\/li>\n<li>Differentiating the sigmoid<\/li>\n<li>Binary Entropy cost function<\/li>\n<li>Introduction to Week 5 (Deep Learning)<\/li>\n<li>Introduction to Convolution Neural Networks (CNN)<\/li>\n<li>Types of convolution<\/li>\n<li>CNN Architecture Part 1 (LeNet and Alex Net)<\/li>\n<li>CNN Architecture Part 2 (VGG Net)<\/li>\n<li>CNN Architecture Part 3 (GoogleNet)<\/li>\n<li>CNN Architecture Part 4 (ResNet)<\/li>\n<li>CNN Architecture Part 5 (DenseNet)<\/li>\n<li>Train Network for Image Classification<\/li>\n<li>Semantic Segmentation<\/li>\n<li>Hyperparameter optimization<\/li>\n<li>Transfer Learning<\/li>\n<li>Segmentation of Brain Tumors from MRI using Deep Learning<\/li>\n<li>Introduction to RNNs<\/li>\n<li>Summary of RNNs<\/li>\n<li>Deep RNNs and Bi- RNNs<\/li>\n<li>Why LSTM Works<\/li>\n<li>LSTM<\/li>\n<li>RNN Architectures<\/li>\n<li>Vanishing Gradients and TBPTT<\/li>\n<li>Training RNNs &#8211; Loss and BPTT<\/li>\n<li>Example &#8211; Sequence Classification<\/li>\n<li>Batch Normalizing<\/li>\n<li>Data Normalization<\/li>\n<li>Learning Rate decay, Weight initialization<\/li>\n<li>Activation Functions<\/li>\n<li>Introduction- Week 09<\/li>\n<li>Knn<\/li>\n<li>Binary decision trees<\/li>\n<li>Binary regression trees<\/li>\n<li>Bagging<\/li>\n<li>Random Forest<\/li>\n<li>Boosting<\/li>\n<li>Gradient boosting<\/li>\n<li>Unsupervised learning &#038; Kmeans<\/li>\n<li>Agglomerative clustering<\/li>\n<li>Probability Distributions Gaussian, Bernoulli<\/li>\n<li>Covariance Matrix of Gaussian Distribution<\/li>\n<li>Central Limit Theorem<\/li>\n<li>Na\u00efve Bayes<\/li>\n<li>MLE Intro<\/li>\n<li>PCA part 1<\/li>\n<li>PCA part 2<\/li>\n<li>Support Vector Machines<\/li>\n<li>MLE, MAP and Bayesian Regression<\/li>\n<li>Introduction to Generative model<\/li>\n<li>Generative Adversarial Networks (GAN)<\/li>\n<li>Variational Auto-encoders (VAE)<\/li>\n<li>Applications: Cardiac MRI &#8211; Segmentation &#038; Diagnosis<\/li>\n<li>Applications: Cardiac MRI Analysis &#8211; Tensorflow code walkthrough<\/li>\n<li>Introduction to Week 12<\/li>\n<li>Application 1 description &#8211; Fin Heat Transfer<\/li>\n<li>Application 1 solution<\/li>\n<li>Application 2 description &#8211; Computational Fluid Dynamics<\/li>\n<li>Application 2 solution<\/li>\n<li>Application 3 description &#8211; Topology Optimization<\/li>\n<li>Application 3 solution<\/li>\n<li>Application 4 &#8211; Solution of PDE\/ODE using Neural Networks<\/li>\n<li>Summary and road ahead<\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>Machine Learning for Engineering and Science Applications Machine Learning for Engineering and Science Applications Machine Learning for Engineering and Science Applications &#8211; Intro Video Introduction to the Course History of Artificial Intelligence Overview of Machine Learning Why Linear Algebra ? Scalars, Vectors, Tensors Basic Operations Norms Linear Combinations Span Linear Independence Matrix Operations Special Matrices [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-11038","post","type-post","status-publish","format-standard","hentry","category-uncategorized"],"_links":{"self":[{"href":"https:\/\/blog.bachi.net\/index.php?rest_route=\/wp\/v2\/posts\/11038","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blog.bachi.net\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blog.bachi.net\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blog.bachi.net\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/blog.bachi.net\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=11038"}],"version-history":[{"count":1,"href":"https:\/\/blog.bachi.net\/index.php?rest_route=\/wp\/v2\/posts\/11038\/revisions"}],"predecessor-version":[{"id":11039,"href":"https:\/\/blog.bachi.net\/index.php?rest_route=\/wp\/v2\/posts\/11038\/revisions\/11039"}],"wp:attachment":[{"href":"https:\/\/blog.bachi.net\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=11038"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blog.bachi.net\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=11038"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blog.bachi.net\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=11038"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}