Machine Learning Open Course
A comprehensive machine learning education, from foundational theory to practical application. The course covers key areas such as data representation, feature engineering, model evaluation and validation, and optimization techniques, and demonstrates these concepts through concrete case studies and hands-on practice. It is suitable for learners from beginner to advanced levels, helping students build a deep understanding of these topics.
-
Introduction to Machine Learning (Intro)
Starting from basic definitions, this series explores where machine learning is applied, the basic components of learning, how machine learning relates to other fields, and the feasibility of learning together with the role probability theory plays in it. It also covers the different types of learning, including supervised, unsupervised, and reinforcement learning, illustrating each concept with examples. The videos are presented by well-known professors and experts such as 林軒田, Siraj Raval, the creator of StatQuest, and 莫烦Python, whose clear explanations suit anyone who wants a solid grounding in the foundations of machine learning.
-
Machine Learning Tutorial Python -1: What is Machine Learning? - codebasics
What is Machine Learning? This is an introduction to machine learning that begins the Python machine learning tutorial series. The video describes what machine learning and deep learning are, and how machine learning is applied in real life. In the next tutorial we start writing Python code to solve a simple problem using machine learning. To download the CSV files and code for all tutorials, go to https://github.com/codebasics/py, click the green button to clone or download the entire repository, and then open the relevant folder to access the specific file. Topics covered in this video: 0:00 Introduction 1:06 What is machine learning? 3:55 What is deep learning? 5:09 Machine learning implementation in real life
-
IAML2.2: What is machine learning? - Victor Lavrenko
-
What Is Machine Learning @ Machine Learning Foundations (機器學習基石) - 林軒田
-
Applications of Machine Learning @ Machine Learning Foundations (機器學習基石) - 林軒田
-
Components of Learning @ Machine Learning Foundations (機器學習基石) - 林軒田
-
Machine Learning and Other Fields @ Machine Learning Foundations (機器學習基石) - 林軒田
-
Feasibility of Learning :: Probability to the Rescue @ Machine Learning Foundations (機器學習基石) - 林軒田
-
Machine Learning Foundations/Techniques: Types of Learning - 林軒田
-
Machine Learning Foundations/Techniques: Learning to Answer Yes/No - 林軒田
-
Intro - The Math of Intelligence - Siraj Raval
Welcome to The Math of Intelligence! In this 3 month course, we'll cover the most fundamental math concepts in Machine Learning. In this first lesson, we'll go over a very popular optimization technique called gradient descent to help us predict how many calories a cyclist would burn given just their distance traveled. We'll also follow the story of 2 data scientists as they attempt to find the Higgs-Boson (God particle) via anomaly detection. No collaborations, this is an independent course.
-
A Gentle Introduction to Machine Learning - StatQuest
Machine Learning is one of those things that is chock full of hype and confusing terminology. In this StatQuest, we cut through all of that to get at the most basic ideas that make a foundation for the whole thing. These ideas are simple and easy to understand. After watching this StatQuest, you'll be ready to learn all kinds of new and exciting things about Machine Learning.
-
什么是机器学习? What is machine learning? - 莫烦Python
Here we introduce what machine learning is and the kinds of methods it includes. Generally speaking, machine learning methods include supervised learning, unsupervised learning, semi-supervised learning, and reinforcement learning...
-
【機器學習2021】預測本頻道觀看人數 (上) - 機器學習基本概念簡介 - Hung-yi Lee
slides: https://speech.ee.ntu.edu.tw/~hylee/ml/ml2021-course-data/regression%20(v16).pdf
-
【機器學習2021】預測本頻道觀看人數 (下) - 深度學習基本概念簡介 - Hung-yi Lee
slides: https://speech.ee.ntu.edu.tw/~hylee/ml/ml2021-course-data/regression%20(v16).pdf
-
【機器學習 2022】淺談機器學習原理 - Hung-yi Lee
Revisiting the Pokémon and Digimon classifier: a brief look at the principles of machine learning.
-
-
Machine Learning Algorithms
This series focuses on the core concepts and applications of machine learning algorithms, covering everything from basic algorithmic principles to concrete application scenarios, and is suitable for learners interested in machine learning algorithms. Edureka's tutorials provide a comprehensive machine learning course, from fundamentals to practical case studies, for anyone who wants a complete overview of the field. The videos by Noureddin Sadawi and Victor Lavrenko concentrate on basic concepts such as classification, regression, and clustering, and discuss the differences between supervised and unsupervised learning. They also cover the distinction between binary and multiclass classifiers, as well as the contrast between generative and discriminative learning.
-
Machine Learning Algorithms | Machine Learning Tutorial | Data Science Training | Edureka
This Machine Learning Algorithms Tutorial shall teach you what machine learning is, and the various ways in which you can use machine learning to solve a problem! Towards the end, you will learn how to prepare a data-set for model creation and validation and how you can create a model using any machine learning algorithm! In this Machine Learning Algorithms Tutorial video you will understand: 00:00:00 Introduction 00:00:12 Agenda for Today's Session 00:01:11 What is an Algorithm? 00:02:07 Algorithm - Example 00:04:19 What is Machine Learning? 00:06:14 Unsupervised Learning 00:12:07 Reinforcement Learning 00:16:19 How a problem is solved using Machine Learning? 00:19:08 Classification Algorithms 00:20:33 Anomaly Detection Algorithms 00:21:44 Regression Algorithms 00:22:49 Clustering Algorithms 00:24:10 Reinforcement Algorithms 00:27:02 Dataset
-
Machine Learning Full Course - 12 Hours | Machine Learning Roadmap [2024] | Edureka
This Edureka Machine Learning Full Course video will help you understand and learn Machine Learning Algorithms in detail. This Machine Learning Tutorial is ideal for both beginners and professionals who want to master Machine Learning Algorithms. Below are the topics covered in this Machine Learning Roadmap course: 00:00:00 Introduction to Machine Learning Full Course 00:01:08 Agenda of Machine Learning Full Course 00:02:45 What is Machine learning? 00:06:28 Supervised Machine Learning 00:11:49 Un-Supervised Machine Learning 00:16:03 Reinforcement Machine Learning 00:32:21 How to Become a Machine Learning Engineer? 00:41:53 Machine Learning Algorithm 01:03:46 Linear Regression Algorithm 01:06:40 What is Linear Regression 01:11:13 Linear Regression Use Cases 01:12:24 Use Case- How to Implement Linear Regression using Python 01:30:22 Logistic Regression Algorithm 01:35:44 Logistic Regression Use cases 02:17:36 Linear Regression Vs Logistic Regression 02:21:05 Decision Tree Algorithm 02:25:53 Types of Classification 02:34:57 What is Decision Tree? 02:58:25 What is Pruning? 02:58:36 Hands-on 03:06:42 Random Forest 03:10:46 Working of Random Forest 03:17:45 Splitting Methods 03:20:32 Advantages & Disadvantages of Random Forest 03:23:52 Hands-on Random Forest 03:35:18 KNN Algorithm 03:37:39 Features of the KNN Algorithm 03:45:54 How KNN works 03:51:21 Hands-on KNN Algorithm 04:07:45 Naive Bayes Classifier 04:29:25 Support Vector Machine 04:31:13 How do SVM work 04:55:00 K- Means Clustering Algorithm 04:58:26 K Means Clustering 05:07:16 Agglomerative Clustering 05:09:16 Division Clustering 05:09:41 Mean shift Clustering 05:18:21 Hierarchical Clustering 05:25:10 How Agglomerative Clustering Works 05:32:59 Applications of Hierarchical Clustering 05:38:34 Apriori Algorithm Explained 05:52:58 Demo 06:30:26 Linear Algebra Application 06:54:00 Probability 07:07:01 Statistics 07:12:47 Types of Statistics 07:38:40 How to select the correct predictive modeling techniques 07:50:54 ML Model Deployment with Flask on Heroku 08:28:32 Azure Machine Learning 08:54:49 AWS Machine Learning 09:35:24 Machine learning Engineer Skills 09:43:30 Machine Learning Engineer Job Trend, Salary & Resume 09:59:20 Top Machine Learning Tools & Frameworks 10:09:12 Machine Learning Roadmap 10:22:20 Machine Learning Interview Question & Answers
-
What is Classification? What is a Classifier? - Noureddin Sadawi
My web page:www.imperial.ac.uk/people/n.sadawi
-
The OneR Classifier .. What it is and How it Works - Noureddin Sadawi
My web page:www.imperial.ac.uk/people/n.sadawi
-
The ZeroR Classifier .. What it is and How it Works - Noureddin Sadawi
My web page:www.imperial.ac.uk/people/n.sadawi
-
IAML2.3: What is classification? - Victor Lavrenko
-
IAML2.4: What is regression? - Victor Lavrenko
-
IAML2.5: What is clustering? - Victor Lavrenko
-
IAML2.20: Supervised vs unsupervised learning - Victor Lavrenko
-
IAML2.21: Binary vs. multiclass classifiers - Victor Lavrenko
-
IAML2.23: Generative vs. discriminative learning - Victor Lavrenko
-
如果大数定律失效,机器学习还能学吗?幂律分布可以告诉你答案 - 王木头学科学
(Reflections and summary from studying 林軒田's Machine Learning Foundations course, part 1.) Deduction, induction, and evolution: these three ways of learning correspond to the three major schools of thought on building artificial intelligence. When machine learning is realized through induction, the law of large numbers is the only theory we can rely on. What if the law of large numbers breaks down? The power-law distribution tells us that, in that case, evolution becomes the only effective way to learn.
-
Machine Learning for Everybody – Full Course - freeCodeCamp.org
Learn Machine Learning in a way that is accessible to absolute beginners. You will learn the basics of Machine Learning and how to use TensorFlow to implement many different concepts. ✏️ Kylie Ying developed this course. Check out her channel: / ycubed ⭐️ Code and Resources ⭐️ 🔗 Supervised learning (classification/MAGIC): https://colab.research.google.com/dri... 🔗 Supervised learning (regression/bikes): https://colab.research.google.com/dri... 🔗 Unsupervised learning (seeds): https://colab.research.google.com/dri... 🔗 Datasets (note: for the bikes dataset, you may have to open the downloaded csv file and remove special characters) 🔗 MAGIC dataset: https://archive.ics.uci.edu/ml/datase... 🔗 Bikes dataset: https://archive.ics.uci.edu/ml/datase... 🔗 Seeds/wheat dataset: https://archive.ics.uci.edu/ml/datase... 🏗 Google provided a grant to make this course possible. ⭐️ Contents ⭐️ ⌨️ (0:00:00) Intro ⌨️ (0:00:58) Data/Colab Intro ⌨️ (0:08:45) Intro to Machine Learning ⌨️ (0:12:26) Features ⌨️ (0:17:23) Classification/Regression ⌨️ (0:19:57) Training Model ⌨️ (0:30:57) Preparing Data ⌨️ (0:44:43) K-Nearest Neighbors ⌨️ (0:52:42) KNN Implementation ⌨️ (1:08:43) Naive Bayes ⌨️ (1:17:30) Naive Bayes Implementation ⌨️ (1:19:22) Logistic Regression ⌨️ (1:27:56) Log Regression Implementation ⌨️ (1:29:13) Support Vector Machine ⌨️ (1:37:54) SVM Implementation ⌨️ (1:39:44) Neural Networks ⌨️ (1:47:57) Tensorflow ⌨️ (1:49:50) Classification NN using Tensorflow ⌨️ (2:10:12) Linear Regression ⌨️ (2:34:54) Lin Regression Implementation ⌨️ (2:57:44) Lin Regression using a Neuron ⌨️ (3:00:15) Regression NN using Tensorflow ⌨️ (3:13:13) K-Means Clustering ⌨️ (3:23:46) Principal Component Analysis ⌨️ (3:33:54) K-Means and PCA Implementations
-
-
Data Representation and Feature Engineering
This section focuses on data representation and feature engineering in machine learning. The videos take a close look at how to process and represent different types of data effectively, including categorical, ordinal, and numeric data, and how to handle outliers and skewed distributions. They also cover how to represent complex objects such as images, handwritten digits, text, and music in machine learning models. Each lesson describes the characteristics of a specific kind of data and its use in machine learning, emphasizing the importance of feature engineering for successful machine learning applications.
-
IAML2.6: Attribute-value representation - Victor Lavrenko
-
IAML2.7: Categorical (nominal) attributes - Victor Lavrenko
-
IAML2.8: Ordinal attributes - Victor Lavrenko
-
IAML2.9: Numeric attributes and outliers - Victor Lavrenko
-
IAML2.10: Skewed and non-monotonic attributes - Victor Lavrenko
-
IAML2.11: Credit scoring example - Victor Lavrenko
-
IAML2.12: How to represent images - Victor Lavrenko
-
IAML2.13: Representing handwritten digits - Victor Lavrenko
-
IAML2.14: Why blurring helps machine learning - Victor Lavrenko
-
IAML2.15: When pixels work as attributes and when they don't - Victor Lavrenko
-
IAML2.16: Attributes for object recognition - Victor Lavrenko
-
IAML2.17: Representing text with categorical attributes - Victor Lavrenko
-
IAML2.18: Representing text with numeric attributes - Victor Lavrenko
-
IAML2.19: Representing music with Fourier coefficients - Victor Lavrenko
-
IAML2.24: How to represent structured objects in machine learning - Victor Lavrenko
-
One-Hot, Label, Target and K-Fold Target Encoding, Clearly Explained!!! - StatQuest
In theory, discrete variables, or features, are easy to use with machine learning algorithms. However, in practice, it's not always so easy and we often have...
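As a rough companion to the encoding methods this video covers, here is a minimal sketch of one-hot and label encoding with pandas and scikit-learn; the toy "color" column and its values are invented purely for illustration.

```python
import pandas as pd
from sklearn.preprocessing import LabelEncoder

# Toy categorical column (hypothetical data, just for illustration).
df = pd.DataFrame({"color": ["red", "green", "blue", "green", "red"]})

# One-hot encoding: one 0/1 column per category.
one_hot = pd.get_dummies(df["color"], prefix="color")

# Label encoding: each category mapped to an integer.
label_encoded = LabelEncoder().fit_transform(df["color"])

print(one_hot)
print(label_encoded)
```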
-
StatQuest: Principal Component Analysis (PCA), Step-by-Step - StatQuest
Principal Component Analysis, is one of the most useful data analysis and machine learning methods out there. It can be used to identify patterns in highly c...
-
Machine Learning Tutorial Python - 19: Principal Component Analysis (PCA) with Python Code - codebasics
PCA, or principal component analysis, is a dimensionality reduction technique that can help us reduce the number of dimensions of the dataset we use in machine learning for training. It helps with the famous curse of dimensionality. In this video we will understand what PCA is all about, write Python code for handwritten digits dataset classification, and then use PCA to train the same model. Code: https://github.com/codebasics/py/blob... Exercise: https://github.com/codebasics/py/blob... ⭐️ Timestamps ⭐️ 00:00 Theory 09:12 Coding 23:04 Exercise
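To make the PCA workflow described above concrete, here is a minimal scikit-learn sketch on the bundled digits dataset; the choice of logistic regression and of keeping 95% of the variance are assumptions for illustration, not the exact code from the video.

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Scale, then keep enough components to explain ~95% of the variance.
scaler = StandardScaler().fit(X_train)
pca = PCA(n_components=0.95).fit(scaler.transform(X_train))

X_train_pca = pca.transform(scaler.transform(X_train))
X_test_pca = pca.transform(scaler.transform(X_test))

model = LogisticRegression(max_iter=1000).fit(X_train_pca, y_train)
print(pca.n_components_, model.score(X_test_pca, y_test))
```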
-
-
Regression Models
This series covers many aspects of regression analysis, from basic theory to the application of a variety of regression models, and is suitable for learners who want to understand regression deeply and apply it to data analysis and machine learning projects. It includes introductory material on linear regression, covering both the single-variable and multivariate cases, and shows how to implement these techniques in Python and R. 林軒田's lectures dig into the theory of regression, including least squares, design matrices, and linear models for classification. The series also explores regularized regression techniques such as ridge regression, lasso regression, and elastic net regression. In addition, it introduces support vector regression (SVR) and other more complex regression models, such as decision tree regression and k-nearest neighbor regression.
-
Machine Learning Tutorial Python - 3: Linear Regression Multiple Variables - codebasics
In this machine learning tutorial with Python, we will write Python code to predict home prices using multivariate linear regression (using sklearn linear_model). Home prices depend on 3 independent variables: area, bedrooms, and age. A pandas dataframe is used to fill missing values first, and that dataset is then used to train a multivariate regression model. You can use the exercise at the end to consolidate your understanding of what you have learnt in this tutorial. Code: https://github.com/codebasics/py/blob... (Exercise is at the end of the ipynb notebook, so just open that file and read through) Exercise solution: https://github.com/codebasics/py/blob... Topics that are covered in this Machine Learning Video: 0:00 Linear Regression With Multiple Variables 0:48 Data set 2:07 Linear Equation 3:28 Load Data in Pandas Data Frame 4:16 Data preprocessing (Handle Missing Values) 6:17 Train Linear Model 8:18 Predict home prices using trained model 11:35 Exercise to predict hired candidates salary based on few parameters Topic Highlights: 1) Data Preprocessing: Handle Missing Values 2) Linear Regression Using Multiple Variables 3) Train Linear Model 4) Exercise to predict hired candidates salary based on few parameters
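A minimal sketch of the multivariate linear regression workflow the tutorial describes, assuming an invented stand-in for the home-price data (area, bedrooms, age); the values and the median fill are illustrative only.

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical stand-in for the home-price data used in the video.
df = pd.DataFrame({
    "area":     [2600, 3000, 3200, 3600, 4000],
    "bedrooms": [3,    4,    None, 3,    5],
    "age":      [20,   15,   18,   30,   8],
    "price":    [550000, 565000, 610000, 595000, 760000],
})

# Fill the missing value before training (here: column median).
df["bedrooms"] = df["bedrooms"].fillna(df["bedrooms"].median())

model = LinearRegression()
model.fit(df[["area", "bedrooms", "age"]], df["price"])
print(model.coef_, model.intercept_)

# Predict the price of a 3000 sqft, 3-bedroom, 40-year-old home.
new_home = pd.DataFrame({"area": [3000], "bedrooms": [3], "age": [40]})
print(model.predict(new_home))
```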
-
Machine Learning Tutorial Python - 2: Linear Regression Single Variable - codebasics
In this tutorial we will predict home prices using linear regression. We use training data that has home areas in square feet and corresponding prices, and train a linear regression model using the sklearn LinearRegression class. Later the predict method is used on the linear regression object to make the actual forecast. Exercise CSV file is here: https://github.com/codebasics/py/tree... Code in this tutorial is here: https://github.com/codebasics/py/tree... (check the .ipynb file) To download the CSV files and code for all tutorials, go to https://github.com/codebasics/py, click the green button to clone or download the entire repository, and then open the relevant folder to access the specific file. Topics that are covered in this Machine Learning Video: 0:00 Simple linear regression 1:59 Linear equation 2:22 Import data in dataframe 2:43 Import sklearn library 3:52 Plot scatter plot 5:26 Create Linear Regression object 13:35 Exercise at the end to predict Canada's per capita income Topic Highlights: 1) What is linear regression 2) Mean squared error 3) Predict home prices by minimizing mean squared error (or MSE) 4) Exercise at the end to predict Canada's per capita income
-
Machine Learning Foundations/Techniques: Linear Regression - 林軒田
-
The Main Ideas of Fitting a Line to Data (The Main Ideas of Least Squares and Linear Regression.)
Fitting a line to data is actually pretty straightforward. For a complete index of all the StatQuest videos, check out: https://statquest.org/video-index/
-
Linear Regression, Clearly Explained!!!
The concepts behind linear regression, fitting a line to data with least squares and R-squared, are pretty darn simple, so let's get down to it! NOTE: This S...
-
Linear Regression in R, Step-by-Step
This video, which walks you through a simple regression in R, is a companion to the StatQuest on Linear Regression https://youtu.be/nk2CQITm_eo If you want t...
-
Multiple Regression, Clearly Explained!!!
This video directly follows part 1 in the StatQuest series on General Linear Models (GLMs) on Linear Regression https://youtu.be/nk2CQITm_eo . This StatQuest...
-
Using Linear Models for t-tests and ANOVA, Clearly Explained!!!
This StatQuest shows how the methods used to determine if a linear regression is statistically significant (covered in part 1) can be applied to t-tests and ...
-
Design Matrices For Linear Models, Clearly Explained!!!
In order to use general linear models (GLMs) you need to create design matrices. At first, these can seem intimidating, but this StatQuest puts together a bunch of examples and illustrates them all so that they are clearly explained. The examples in this video are worked out in R in this video: • Design Matrix Examples in R, Clearly ...
-
Design Matrix Examples in R, Clearly Explained!!!
This StatQuest complements the StatQuest: GLMs Pt.3 - Design Matrices https://youtu.be/2UYx-qjJGSs with examples given in R. If you would like the code, you ...
-
Multiple Regression in R, Step-by-Step!!!
This StatQuest is a companion to the StatQuest on Multiple Regression https://youtu.be/zITIFTsivN8 It starts with a simple regression in R and then shows how...
-
Saturated Models and Deviance
This video follows from where we left off in Part 3 of the Logistic Regression series, but the ideas are more general, so I decided not to make it just Part ...
-
Deviance Residuals
This video follows up on the StatQuest on Saturated Models and Deviance Statistics: https://youtu.be/9T0wlKdew6I For a complete index of all the StatQuest vid...
-
Regularization Part 1: Ridge (L2) Regression - StatQuest
Ridge Regression is a neat little way to ensure you don't overfit your training data - essentially, you are desensitizing your model to the training data. It...
-
Regularization Part 2: Lasso (L1) Regression - StatQuest
Lasso Regression is super similar to Ridge Regression, but there is one big, huge difference between the two. In this video, I start by talking about all of ...
-
Ridge vs Lasso Regression, Visualized!!! - StatQuest
People often ask why Lasso Regression can make parameter values equal 0, but Ridge Regression can not. This StatQuest shows you why.NOTE: This StatQuest assu...
-
Regularization Part 3: Elastic Net Regression - StatQuest
Elastic-Net Regression combines Lasso Regression with Ridge Regression to give you the best of both worlds. It works well when there are lots of useless v...
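As a rough illustration of the three regularized models discussed above, the sketch below fits Ridge, Lasso, and Elastic-Net on synthetic data with many useless features; the alpha and l1_ratio values are arbitrary choices for the example. Lasso and Elastic-Net can drive some coefficients exactly to zero, which is the difference these videos emphasize.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import ElasticNet, Lasso, Ridge
from sklearn.model_selection import train_test_split

# Synthetic data with many uninformative features, where regularization helps.
X, y = make_regression(n_samples=100, n_features=50, n_informative=10,
                       noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for model in (Ridge(alpha=1.0),
              Lasso(alpha=1.0),
              ElasticNet(alpha=1.0, l1_ratio=0.5)):
    model.fit(X_train, y_train)
    zero_coefs = np.sum(model.coef_ == 0)  # Lasso/Elastic-Net can zero out coefficients
    print(type(model).__name__, round(model.score(X_test, y_test), 3),
          "zero coefficients:", zero_coefs)
```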
-
Ridge, Lasso and Elastic-Net Regression in R
The code in this video can be found on the StatQuest GitHub:https://github.com/StatQuest/ridge_lasso_elastic_net_demo/blob/master/ridge_lass_elastic_net_demo...
-
Support Vector Regression :: Kernel Ridge Regression @ Machine Learning Techniques (機器學習技法) - 林軒田
-
Support Vector Regression :: Support Vector Regression Primal @ Machine Learning Techniques (機器學習技法) - 林軒田
-
Support Vector Regression :: Support Vector Regression Dual @ Machine Learning Techniques (機器學習技法) - 林軒田
-
Support Vector Regression :: Summary of Kernel Models @ Machine Learning Techniques (機器學習技法) - 林軒田
-
Machine Learning Foundations/Techniques: Linear Models for Classification - 林軒田
-
Regression with Decision Trees - Noureddin Sadawi
My web page:www.imperial.ac.uk/people/n.sadawi
-
Multiple Linear Regression (MLP) 1/2 - Noureddin Sadawi
My web page:www.imperial.ac.uk/people/n.sadawi
-
Multiple Linear Regression (MLP) 2/2 ... with an example - Noureddin Sadawi
My web page:www.imperial.ac.uk/people/n.sadawi
-
Regression with the k-Nearest Neighbor (kNN) Algorithm - Noureddin Sadawi
Regression with the kNN AlgorithmMy web page:www.imperial.ac.uk/people/n.sadawi
-
-
Logistic Regression
An introduction to and in-depth look at logistic regression, a classification algorithm widely used in machine learning. From binary to multiclass classification, these videos cover its many applications and technical details. The codebasics videos give a practical guide to binary and multiclass logistic regression, helping beginners pick up the basics quickly. 林軒田's lectures explore the theoretical foundations of logistic regression, including error analysis and kernel logistic regression, as well as comparisons with support vector machines (SVM). The StatQuest and Siraj Raval videos focus more on the mathematics behind logistic regression and how to implement it in practice. The series also shows how to do logistic regression in R and analyzes details such as the coefficients, maximum likelihood estimation, R-squared, and p-values.
-
Logistic Regression (Binary Classification) - codebasics
Logistic regression is used for classification problems in machine learning. This tutorial will show you how to use the sklearn LogisticRegression class to solve a binary classification problem: predicting whether a customer will buy life insurance. At the end we have an interesting exercise for you to solve. Usually there are two types of machine learning problems: (1) linear regression, where the predicted value is continuous, and (2) classification, where the predicted value is categorical. Logistic regression is mainly used for classification problems. Code: https://github.com/codebasics/py/blob... Exercise: Open above notebook from github and go to the end. Exercise solution: https://github.com/codebasics/py/blob... Topics that are covered in this Video: 0:00 - Theory (Explain difference between regression and classification) 1:18 - What is logistic regression? 1:26 - Classification types (Binary vs multiclass classification) 1:53 - Explanation of logistic regression using the example of whether a person will buy insurance based on their age 5:38 - Sigmoid or Logit function 8:18 - Coding (for coding we are using an example of whether a person will buy insurance or not based on their age) 14:36 - sklearn predict_proba() function 15:49 - Exercise (Solve a problem of predicting employee retention based on salary, distance to work, promotion, department etc)
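A minimal sketch of the binary logistic regression workflow described above, using a small invented age/bought_insurance table in place of the tutorial's dataset.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Invented stand-in for the age / bought_insurance data in the tutorial.
df = pd.DataFrame({
    "age":              [22, 25, 47, 52, 46, 56, 28, 30, 62, 18],
    "bought_insurance": [0,  0,  1,  1,  1,  1,  0,  0,  1,  0],
})

X_train, X_test, y_train, y_test = train_test_split(
    df[["age"]], df["bought_insurance"], test_size=0.3, random_state=1)

model = LogisticRegression()
model.fit(X_train, y_train)

print(model.score(X_test, y_test))   # accuracy on the held-out rows
print(model.predict_proba(X_test))   # class probabilities from the sigmoid
```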
-
Logistic Regression (Multiclass Classification) - codebasics
Logistic regression is used for classification problems in machine learning. This tutorial will show you how to use the sklearn LogisticRegression class to solve a multiclass classification problem: predicting handwritten digits. We will use sklearn load_digits to load the readily available dataset from the sklearn library and train our classifier using that information. Code: https://github.com/codebasics/py/blob... Exercise: Open above notebook from github and go to the end. Topics that are covered in this Video: 0:00 - Theory (Binary classification vs multiclass classification) 0:26 - How to identify handwritten digits? 1:02 - Coding (Solve a problem of handwritten digit recognition) 11:24 - Confusion Matrix (sklearn confusion_matrix) 12:42 - Plot confusion matrix using seaborn library 14:00 - Exercise (Use sklearn iris dataset to predict flower type based on different features using logistic regression)
-
Machine Learning Foundations/Techniques: Logistic Regression (1/2) - 林軒田
-
Machine Learning Foundations/Techniques: Logistic Regression (2/2) - 林軒田
-
Logistic Regression :: Logistic Regression Error @ Machine Learning Foundations (機器學習基石) - 林軒田
-
StatQuest: Logistic Regression
Logistic regression is a traditional statistics technique that is also very popular as a machine learning tool. In this StatQuest, I go over the main ideas ...
-
Logistic Regression Details Pt1: Coefficients
When you do logistic regression you have to make sense of the coefficients. These are based on the log(odds) and log(odds ratio), but, to be honest, the easi...
-
Logistic Regression Details Pt 2: Maximum Likelihood
This video follows from where we left off in Part 1 in this series on the details of Logistic Regression. This time we're going to talk about how the squiggl...
-
Logistic Regression Details Pt 3: R-squared and p-value
This video follows from where we left off in Part 2 in this series on the details of Logistic Regression. Last time we saw how to fit a squiggly line to the...
-
Logistic Regression in R, Clearly Explained!!!!
This video describes how to do Logistic Regression in R, step-by-step. We start by importing a dataset and cleaning it up, then we perform logistic regressio...
-
Logistic Regression - The Math of Intelligence (Week 2) - Siraj Raval
We're going to use logistic regression to predict if someone has diabetes or not given 3 body metrics! We'll use Newton's method to help us optimize the mode...
-
Binary Logistic Regression Tutorial - Siraj Raval
Binary logistic regression is a machine learning algorithm most useful when we want to model the event probability for a categorical response variable with two outcomes (yes/no, true/false, etc.). In this video we'll build a sentiment classifier app that uses binary logistic regression to classify tweets as either happy, sad, or neutral. I'll use animations, code, rap, skits, and equations to explain how it all works. Enjoy!
-
Kernel Logistic Regression :: Soft-Margin SVM as Regularized @ Machine Learning Techniques (機器學習技法) - 林軒田
-
Kernel Logistic Regression :: SVM versus Logistic Regression @ Machine Learning Techniques (機器學習技法) - 林軒田
-
Kernel Logistic Regression :: Kernel Logistic Regression @ Machine Learning Techniques (機器學習技法) - 林軒田
-
Kernel Logistic Regression :: SVM for Soft Binary @ Machine Learning Techniques (機器學習技法) - 林軒田
-
Classification with Logistic Regression 1/2 - Noureddin Sadawi
My web page:www.imperial.ac.uk/people/n.sadawi
-
Classification with Logistic Regression 2/2 - Noureddin Sadawi
My web page:www.imperial.ac.uk/people/n.sadawi
-
-
Bayesian Classifiers
An introduction to and in-depth analysis of Bayesian classifiers and their use in machine learning, aimed at learners who want to understand and apply Bayesian methods to classification problems. The 王木头学科学 video examines the Bayesian interpretation of L1 and L2 regularization, showing that they are, at heart, maximum a posteriori estimation. The codebasics and Noureddin Sadawi videos give practical guides to both basic and more advanced concepts of the Naive Bayes classifier, including how to handle numeric attributes. The StatQuest series uses clear explanations and examples to introduce Naive Bayes and Gaussian Naive Bayes in depth. In addition, Siraj Raval's video introduces the foundations of probability theory while explaining its role in intelligent algorithms, and Edureka's video provides a comprehensive explanation of Bayesian classifiers, including the principles and applications of the algorithm.
-
贝叶斯解释“L1和L2正则化”,本质上是最大后验估计。如何深入理解贝叶斯公式? - 王木头学科学
Understanding L1 and L2 regularization through Bayesian probability: they are, at heart, maximum a posteriori estimation. How do we intuitively understand prior and posterior probabilities? What is maximum a posteriori estimation, and how does it differ from maximum likelihood estimation? Understanding machine learning from a Bayesian point of view.
-
Naive Bayes Classifier Algorithm Part 1 - codebasics
This is part 1 of the naive bayes classifier algorithm machine learning tutorial. Naive Bayes uses Bayes' theorem for conditional probability, with the naive assumption that the features are not correlated to each other, and tries to find the conditional probability of the target variable given the probabilities of the features. We will use the titanic survival dataset here and, using a naive bayes classifier, find out the survival probability of titanic travellers. We use the sklearn library and Python for this beginners machine learning tutorial. GaussianNB is the classifier we use to train our model. There are other classifiers such as MultinomialNB, but we will use that in part 2 of the tutorial.
-
Naive Bayes Classifier Algorithm Part 2 - codebasics
In this Python machine learning tutorial for beginners we will build an email spam classifier using the naive bayes algorithm. We will use sklearn CountVectorizer to convert the email text into a matrix of numbers and then use the sklearn MultinomialNB classifier to train our model. The model score with this approach comes out to be very high (around 98%). The sklearn pipeline allows us to handle preprocessing transformations easily with its convenient API. In the end there is an exercise where you need to classify the sklearn wine dataset using naive bayes. Dataset: https://github.com/codebasics/py/blob... Exercise: https://github.com/codebasics/py/blob... Code: https://github.com/codebasics/py/blob... Exercise solution: https://github.com/codebasics/py/blob... Topics that are covered in this Video: 00:00 explore spam email dataset 02:33 sklearn CountVectorizer 04:30 types of naive bayes classifiers 05:23 sklearn MultinomialNB classifier 06:48 sklearn pipeline 09:35 Exercise
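To make the CountVectorizer + MultinomialNB pipeline described above concrete, here is a minimal sketch with a few invented messages standing in for the spam dataset from the video.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline

# Tiny invented messages standing in for the spam dataset in the video.
messages = ["win a free prize now", "lowest price guaranteed, click here",
            "are we still meeting tomorrow", "please review the attached report"]
labels   = [1, 1, 0, 0]  # 1 = spam, 0 = ham

clf = Pipeline([
    ("vectorizer", CountVectorizer()),  # text -> word-count matrix
    ("nb", MultinomialNB()),            # multinomial Naive Bayes on the counts
])
clf.fit(messages, labels)

print(clf.predict(["free prize, click now", "see you at the meeting"]))
```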
-
How Naive Bayes Classifier Works 1/2.. Understanding Naive Bayes and Example - Noureddin Sadawi
My web page:www.imperial.ac.uk/people/n.sadawi
-
Naive Bayes Classifier 2/2 .. Naive Bayes and Numerical Attributes - Noureddin Sadawi
My web page:www.imperial.ac.uk/people/n.sadawi
-
Naive Bayes, Clearly Explained!!! - StatQuest
When most people want to learn about Naive Bayes, they want to learn about the Multinomial Naive Bayes Classifier - which sounds really fancy, but is actually quite simple. This video walks you through it one step at a time and by the end, you'll no longer be naive about Naive Bayes!!!
-
Gaussian Naive Bayes, Clearly Explained!!! - StatQuest
Gaussian Naive Bayes takes care of all your Naive Bayes needs when your training data are continuous. If that sounds fancy, don't sweat it! This StatQuest wil...
-
Probability Theory - The Math of Intelligence #6 - Siraj Raval
We'll build a Spam Detector using a machine learning model called a Naive Bayes Classifier! This is our first real dip into probability theory in the series; I'll talk about the types of probability, then we'll use Bayes Theorem to help us build our classifier.
-
Naive Bayes Classifier Explained | Naive Bayes Algorithm | Edureka
This Edureka video will provide you with a detailed and comprehensive knowledge of Naive Bayes Classifier Algorithm in python. At the end of the video, you will learn from a demo example on Naive Bayes. Below are the topics covered in this tutorial: 00:00:00 Introduction 00:00:25 Agenda 00:01:35 What is Machine Learning 00:13:47 Introduction to Classification 00:17:28 Classification Algorithms 00:18:30 What is Naïve Bayes 00:31:36 Mathematical Working of Naïve Bayes 00:39:24 Example
-
-
Support Vector Machines (SVM)
A comprehensive introduction to and in-depth analysis of support vector machines (SVM), for learners who want a thorough understanding of SVMs and their use in machine learning. Siraj Raval's video introduces SVMs from the ground up and explains their role in intelligent algorithms. The StatQuest and codebasics series use examples and clear explanations to cover the basic concepts of SVMs, the polynomial kernel, the RBF kernel, and how to implement an SVM in Python from start to finish. Noureddin Sadawi's videos focus on how SVMs work, including linear and nonlinear SVMs and the kernel trick. 林軒田's series moves from the linear SVM to the dual SVM, and then to kernel SVMs and soft-margin SVMs, exploring their motivation, theoretical foundations, and practical applications. The 王木头学科学 videos take a more theoretical angle, covering the soft margin, the hinge loss, comparisons with other machine learning algorithms, and the role of the VC dimension in understanding SVMs.
-
Support Vector Machines - The Math of Intelligence (Week 1) - Siraj Raval
Support Vector Machines are a very popular type of machine learning model used for classification when you have a small dataset. We'll go through when to use...
-
Support Vector Machines Part 1 (of 3): Main Ideas!!!
Support Vector Machines are one of the most mysterious methods in Machine Learning. This StatQuest sweeps away the mystery to let you know how they work. Part 2: ...
-
Support Vector Machines Part 2: The Polynomial Kernel (Part 2 of 3)
Support Vector Machines use kernel functions to do all the hard work and this StatQuest dives deep into one of the most popular: The Polynomial Kernel. We ta...
-
Support Vector Machines Part 3: The Radial (RBF) Kernel (Part 3 of 3)
Support Vector Machines use kernel functions to do all the hard work and this StatQuest dives deep into one of the most popular: The Radial (RBF) Kernel. We ...
-
Support Vector Machines in Python from Start to Finish.
NOTE: You can support StatQuest by purchasing the Jupyter Notebook and Python code seen in this video here: http://statquest.gumroad.com/l/iulnea This webinar...
-
Machine Learning Tutorial Python - 10 Support Vector Machine (SVM) - codebasics
Support vector machine (SVM) is a popular classification algorithm. This tutorial covers some theory first and then goes over Python coding to solve the iris flower classification problem using SVM and the sklearn library. We also cover different parameters such as gamma and regularization, and how to fine tune an SVM classifier using these parameters. Basically, the way a support vector machine works is that it draws a hyperplane in n-dimensional space such that it maximizes the margin between classification groups.
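A minimal scikit-learn sketch of the SVM classification workflow described above, on the bundled iris dataset; the C and gamma values are illustrative, not tuned.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# C controls regularization strength, gamma the reach of the RBF kernel.
model = SVC(kernel="rbf", C=10, gamma="scale")
model.fit(X_train, y_train)

print(model.score(X_test, y_test))
```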
-
How Support Vector Machine (SVM) Works 1/2 - Noureddin Sadawi
My web page:www.imperial.ac.uk/people/n.sadawi
-
How Support Vector Machine (SVM) Works 2/2 - Noureddin Sadawi
My web page:www.imperial.ac.uk/people/n.sadawi
-
Nonlinear Support Vector Machine (SVM) .. The Kernel Trick - Noureddin Sadawi
My web page:www.imperial.ac.uk/people/n.sadawi
-
Linear Support Vector Machine (SVM) :: Course Introduction @ Machine Learning Techniques (機器學習技法) - 林軒田
-
Linear SVM :: Large-Margin Separating Hyperplane @ Machine Learning Techniques (機器學習技法) - 林軒田
-
Linear SVM :: Standard Large-Margin Problem @ Machine Learning Techniques (機器學習技法) - 林軒田
-
Linear SVM :: Support Vector Machine @ Machine Learning Techniques (機器學習技法) - 林軒田
-
Linear SVM :: Reasons behind Large-Margin Hyperplane @ Machine Learning Techniques (機器學習技法) - 林軒田
-
Dual Support Vector Machine :: Motivation of Dual SVM @ Machine Learning Techniques (機器學習技法) - 林軒田
-
Dual Support Vector Machine :: Lagrange Dual SVM @ Machine Learning Techniques (機器學習技法) - 林軒田
-
Dual Support Vector Machine :: Solving Dual SVM @ Machine Learning Techniques (機器學習技法) - 林軒田
-
Dual Support Vector Machine :: Messages behind Dual SVM @ Machine Learning Techniques (機器學習技法) - 林軒田
-
Kernel Support Vector Machine :: Kernel Trick @ Machine Learning Techniques (機器學習技法) - 林軒田
-
Kernel Support Vector Machine :: Polynomial Kernel @ Machine Learning Techniques (機器學習技法) - 林軒田
-
Kernel Support Vector Machine :: Gaussian Kernel @ Machine Learning Techniques (機器學習技法) - 林軒田
-
Kernel Support Vector Machine :: Comparison of Kernels @ Machine Learning Techniques (機器學習技法) - 林軒田
-
Soft-Margin Support Vector Machine :: Motivation and Primal @ Machine Learning Techniques (機器學習技法) - 林軒田
-
Soft-Margin Support Vector Machine :: Dual Problem @ Machine Learning Techniques (機器學習技法) - 林軒田
-
Soft-Margin Support Vector Machine :: Messages @ Machine Learning Techniques (機器學習技法) - 林軒田
-
Soft-Margin Support Vector Machine :: Model Selection @ Machine Learning Techniques (機器學習技法) - 林軒田
-
什么是SVM,如何理解软间隔?什么是合叶损失函数、铰链损失函数?SVM与感知机横向对比,挖掘机器学习本质 - 王木头学科学
Compares SVMs with the perceptron side by side as a mid-course summary of machine learning. Why was the SVM so popular before deep learning? Why does the SVM come with a built-in regularization term? What is the hinge loss function? How should we understand the view that machine learning = model + strategy + algorithm?
-
用VC维度理解SVM的结构风险最小化 & VC维是理解正则化的第4个角度 - 王木头学科学
Re-understanding regularization from the perspective of model complexity. What is the VC dimension? Why is the SVM said to perform structural risk minimization? What is empirical risk minimization? Why can the SVM reduce the VC dimension?
-
-
k-Nearest Neighbors (kNN)
A comprehensive introduction to the k-Nearest Neighbors (kNN) algorithm, covering everything from basic concepts to practical application and implementation, for learners who want a solid understanding of kNN. The StatQuest and Noureddin Sadawi videos give an intuitive explanation of how the kNN classifier works and show how to implement kNN in Java and Python. Victor Lavrenko's series of short videos examines kNN from many angles, including the intuition behind the algorithm, decision boundaries, sensitivity to outliers, the classification and regression variants, choosing the number of neighbors, similarity and distance measures, strategies for breaking ties between neighbors, and the pros and cons of kNN. The videos also introduce data structures used to speed up kNN (such as k-d trees and locality sensitive hashing, LSH) and its connections to other machine learning methods (such as support vector machines).
-
StatQuest: K-nearest neighbors, Clearly Explained
Machine learning and Data Mining sure sound like complicated things, but that isn't always the case. Here we talk about the surprisingly simple and surprisin...
-
How K-Nearest Neighbors (kNN) Classifier Works - Noureddin Sadawi
My web page:www.imperial.ac.uk/people/n.sadawi
-
Java Implementation of K-Nearest Neighbors (kNN) Classifier 1/2 - Noureddin Sadawi
The code can be found here:www.imperial.ac.uk/people/n.sadawiGo to Tutorials and then Machine Learning section!
-
Java Implementation of K-Nearest Neighbors (kNN) Classifier 2/2 - Noureddin Sadawi
My web page:www.imperial.ac.uk/people/n.sadawi
-
Machine Learning Tutorial Python - 18: K nearest neighbors classification with python code - codebasics
In this video we will understand how the K nearest neighbors algorithm works, then write Python code using the sklearn library to build a KNN (K nearest neighbors) model. At the end, I have an exercise for you to practice the concepts you learnt in this video. Code: https://github.com/codebasics/py/blob... Exercise: https://github.com/codebasics/py/blob... ⭐️ Timestamps ⭐️ 00:00 Theory 03:51 Coding 14:09 Exercise
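A minimal sketch of the kNN workflow described above, looping over a few values of k on the bundled iris dataset; the particular k values are arbitrary.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Try a few values of k and report the held-out accuracy for each.
for k in (1, 3, 5, 10):
    knn = KNeighborsClassifier(n_neighbors=k)
    knn.fit(X_train, y_train)
    print(k, knn.score(X_test, y_test))
```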
-
kNN.1 Overview - Victor Lavrenko
-
kNN.2 Intuition for the nearest-neighbor method - Victor Lavrenko
-
kNN.3 Voronoi cells and decision boundary - Victor Lavrenko
-
kNN.4 Sensitivity to outliers - Victor Lavrenko
-
kNN.5 Nearest-neighbor classification algorithm - Victor Lavrenko
-
kNN.6 MNIST digit recognition - Victor Lavrenko
-
kNN.7 Nearest-neighbor regression algorithm - Victor Lavrenko
-
kNN.8 Nearest-neighbor regression example - Victor Lavrenko
-
kNN.9 Number of nearest neighbors to use - Victor Lavrenko
-
kNN.10 Similarity / distance measures - Victor Lavrenko
-
kNN.11 Breaking ties between nearest neighbors - Victor Lavrenko
-
kNN.12 Parzen windows, kernels and SVM - Victor Lavrenko
-
kNN.13 Pros and cons of nearest-neighbor methods - Victor Lavrenko
-
kNN.14 Computational complexity of finding nearest-neighbors - Victor Lavrenko
-
kNN.15 K-d tree algorithm - Victor Lavrenko
-
kNN.16 Locality sensitive hashing (LSH) - Victor Lavrenko
-
kNN.17 Inverted index - Victor Lavrenko
-
-
Decision Trees
This series gives a comprehensive introduction to and in-depth analysis of decision trees, for learners who want to understand and apply them to classification and regression problems. Victor Lavrenko's short videos start from the basics and cover every aspect of decision trees, including the ID3 algorithm, split purity, entropy, information gain, overfitting and pruning, the information gain ratio, and how decision trees handle real-valued data and regression. The StatQuest videos explain in depth how decision and classification trees work, including feature selection and handling missing data. The codebasics and Noureddin Sadawi videos provide hands-on implementation examples, including building classification trees in Python. 林軒田's lectures dig into the decision tree algorithm, its heuristics, and their use in C&RT.
-
Decision and Classification Trees, Clearly Explained!!! - StatQuest
Decision trees are part of the foundation for Machine Learning. Although they are quite simple, they are very flexible and pop up in a very wide variety of s...
-
Regression Trees, Clearly Explained!!!
Regression Trees are one of the fundamental machine learning techniques that more complicated methods, like Gradient Boost, are based on. They are useful for...
-
Machine Learning Tutorial Python - 9 Decision Tree - codebasics
The decision tree algorithm is used to solve classification problems in the machine learning domain. In this tutorial we will solve an employee salary prediction problem using a decision tree. First we will go over some theory and then do coding practice. In the end I have a very interesting exercise for you to solve. Code: https://github.com/codebasics/py/blob... csv file for exercise: https://github.com/codebasics/py/blob... Exercise solution: https://github.com/codebasics/py/blob... Topics that are covered in this Video: 0:00 - How to solve a classification problem using the decision tree algorithm? 0:26 - Theory (Explain rationale behind decision tree using a use case of predicting salary based on department, degree and company that a person is working for) 2:10 - How do you select ordering of features? High vs low information gain and entropy 3:52 - Gini impurity 4:28 - Coding (start) 9:11 - Create sklearn model using DecisionTreeClassifier 13:32 - Exercise (Find out survival rate of titanic ship passengers using decision tree)
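A minimal sketch of the decision-tree workflow described above; the company/degree/salary rows are invented stand-ins for the tutorial's data, and label encoding plus an entropy (information gain) criterion are illustrative choices.

```python
import pandas as pd
from sklearn.preprocessing import LabelEncoder
from sklearn.tree import DecisionTreeClassifier

# Invented rows standing in for the company/degree/salary data in the video.
df = pd.DataFrame({
    "company": ["google", "google", "abc pharma", "facebook", "abc pharma"],
    "degree":  ["bachelors", "masters", "masters", "bachelors", "bachelors"],
    "salary_more_than_100k": [0, 1, 1, 1, 0],
})

# Decision trees in sklearn need numeric inputs, so label-encode each column.
X = df[["company", "degree"]].apply(LabelEncoder().fit_transform)
y = df["salary_more_than_100k"]

model = DecisionTreeClassifier(criterion="entropy")  # split on information gain
model.fit(X, y)
print(model.predict(X), model.score(X, y))
```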
-
How Decision Trees Work 1/2 .. an Introduction + What is Entropy - Noureddin Sadawi
My web page:www.imperial.ac.uk/people/n.sadawi
-
How Decision Trees Work 2/2 .. An Example - Noureddin Sadawi
My web page:www.imperial.ac.uk/people/n.sadawi
-
IAML7.1 Decision Trees: an introduction - Victor Lavrenko
-
IAML7.2 Decision tree example - Victor Lavrenko
-
IAML7.3 Quinlan's ID3 algorithm - Victor Lavrenko
-
IAML7.4 Decision tree: split purity - Victor Lavrenko
-
IAML7.5 Decision tree entropy - Victor Lavrenko
-
IAML7.6 Information gain - Victor Lavrenko
-
IAML7.7 Overfitting in decision trees - Victor Lavrenko
-
IAML7.8 Decision tree pruning - Victor Lavrenko
-
IAML7.9 Information gain ratio - Victor Lavrenko
-
IAML7.10 Decision trees are DNF formulas - Victor Lavrenko
-
IAML7.11 Decision trees and real-valued data - Victor Lavrenko
-
IAML7.12 Decision tree regression - Victor Lavrenko
-
IAML7.13 Pros and cons of decision trees - Victor Lavrenko
-
IAML7.15 Summary - Victor Lavrenko
-
StatQuest: Decision Trees, Part 2 - Feature Selection and Missing Data
This is just a short follow up to last week's StatQuest where we introduced decision trees. Here we show how decision trees deal with variables that don't im...
-
How to Prune Regression Trees, Clearly Explained!!!
Pruning Regression Trees is one the most important ways we can prevent them from overfitting the Training Data. This video walks you through Cost Complexity ...
-
Classification Trees in Python from Start to Finish
NOTE: You can support StatQuest by purchasing the Jupyter Notebook and Python code seen in this video here: https://statquest.gumroad.com/l/tzxoh This webinar...
-
Decision Tree :: Decision Tree Hypothesis @ Machine Learning Techniques (機器學習技法) - 林軒田
-
Decision Tree :: Decision Tree Algorithm @ Machine Learning Techniques (機器學習技法) - 林軒田
-
Decision Tree :: Decision Tree Heuristics in C&RT @ Machine Learning Techniques (機器學習技法) - 林軒田
-
Decision Tree :: Decision Tree in Action @ Machine Learning Techniques (機器學習技法) - 林軒田
-
-
Ensemble Learning
An introduction to and close look at several key techniques in ensemble learning: bagging, boosting, blending, and stacking. All of these methods are important parts of the ensemble strategy, aiming to improve overall model performance by combining multiple learners. The codebasics and 林軒田 videos give a thorough introduction to bagging, including its motivation, the different forms of blending, and the specific method and rationale behind bagging (bootstrap aggregation). The StatQuest video and 林軒田's series dig into adaptive boosting (AdaBoost), covering the motivation for boosting, how re-weighting creates diversity, and the details and practical applications of the AdaBoost algorithm. Sebastian Raschka's video introduces stacking, a technique that combines the predictions of several different models and is commonly used to improve prediction accuracy.
-
Machine Learning Tutorial Python - 21: Ensemble Learning - Bagging - codebasics
Ensemble learning is all about using multiple models and combining their prediction power to get better predictions that have low variance. Bagging and boosting are two popular techniques that allow us to tackle the high variance issue. In this video we will learn about bagging with a simple visual demonstration. We will also write Python code in sklearn to use BaggingClassifier. And oh yes, in the end we have the exercise for you, as always! Code: https://github.com/codebasics/py/blob... Exercise: https://github.com/codebasics/py/blob... ⭐️ Timestamps ⭐️ 00:00 Theory 08:01 Coding 22:25 Exercise
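A minimal sketch of bagging with scikit-learn's BaggingClassifier on the bundled breast cancer dataset; the number of estimators and the sample fraction are arbitrary, and the out-of-bag score is included to illustrate the bootstrap idea.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Bag 100 base learners (decision trees by default), each trained on a
# bootstrap sample of 80% of the training rows.
bag = BaggingClassifier(n_estimators=100, max_samples=0.8,
                        oob_score=True, random_state=0)
bag.fit(X_train, y_train)

print(bag.oob_score_)             # out-of-bag estimate from the training data
print(bag.score(X_test, y_test))  # accuracy on the held-out test set
```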
-
Blending and Bagging :: Motivation of Aggregation @ Machine Learning Techniques (機器學習技法) - 林軒田
-
Blending and Bagging :: Uniform Blending @ Machine Learning Techniques (機器學習技法) - 林軒田
-
Blending and Bagging :: Linear and Any Blending @ Machine Learning Techniques (機器學習技法) - 林軒田
-
Blending and Bagging :: Bagging (Bootstrap Aggregation) @ Machine Learning Techniques (機器學習技法) - 林軒田
-
AdaBoost, Clearly Explained - StatQuest
AdaBoost is one of those machine learning methods that seems so much more confusing than it really is. It's really just a simple twist on decision trees and ...
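A minimal scikit-learn sketch of AdaBoost as described above, boosting decision stumps on the bundled breast cancer dataset; the number of estimators and the learning rate are illustrative defaults, not tuned values.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# By default each weak learner is a depth-1 decision tree (a "stump");
# each round re-weights the samples the previous stumps got wrong.
model = AdaBoostClassifier(n_estimators=50, learning_rate=1.0, random_state=0)
model.fit(X_train, y_train)

print(model.score(X_test, y_test))
```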
-
Adaptive Boosting :: Motivation of Boosting @ Machine Learning Techniques (機器學習技法) - 林軒田
-
Adaptive Boosting :: Diversity by Re-weighting @ Machine Learning Techniques (機器學習技法) - 林軒田
-
Adaptive Boosting :: Adaptive Boosting Algorithm @ Machine Learning Techniques (機器學習技法) - 林軒田
-
Adaptive Boosting :: Adaptive Boosting in Action @ Machine Learning Techniques (機器學習技法) - 林軒田
-
Stacking (L07: Ensemble Methods) - Sebastian Raschka
This video explains Wolpert's stacking algorithm (stacked generalization) and shows how to use stacking classifiers in mlxtend and scikit-learn.
-
-
Advanced Tree-Based Models: Random Forests, XGBoost, and Gradient Boosting
This series gives a comprehensive introduction to and in-depth analysis of advanced tree-based models, in particular random forests, XGBoost, and gradient boosting, for learners who want to understand and apply these algorithms to data analysis problems. The StatQuest videos clearly explain the principles and practical use of AdaBoost and random forests and show how to implement them in R. 林軒田's lectures explore random forests and gradient boosted decision trees from both theory and practice, including AdaBoost decision trees, the optimization view of gradient boosting, and a summary of aggregation methods, while Victor Lavrenko covers the random forest algorithm. The series also includes a complete introduction to the XGBoost algorithm, from regression and classification to the mathematical details and optimization tricks. The Siraj Raval and Edureka videos give additional explanations and hands-on examples of the random forest algorithm, and the codebasics video shows how to implement random forests in Python.
-
Machine Learning Tutorial Python - 11 Random Forest - codebasics
Random forest is a popular regression and classification algorithm. In this tutorial we will see how it works for a classification problem in machine learning. It uses decision trees underneath, forms multiple trees, and eventually takes a majority vote out of them. We will go over some theory first and then solve a digits classification problem using sklearn RandomForestClassifier. In the end we have an exercise for you to solve. Code: https://github.com/codebasics/py/blob... Exercise: Exercise description is available in the above notebook towards the end Exercise solution: https://github.com/codebasics/py/blob... Topics that are covered in this Video: 0:00 Random forest algorithm 0:50 How to build multiple decision trees based on single data set? 2:34 Use of sklearn digits data set to make a classification using random forest 3:04 Coding (Start) (Use sklearn digits dataset for classification using random forest) 7:10 sklearn.ensemble RandomForestClassifier 10:36 Confusion Matrix (sklearn.metrics confusion_matrix) 12:04 Exercise (Classify iris flower using sklearn iris flower dataset and random forest classifier)
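A minimal sketch of the random forest workflow described above, on the bundled digits dataset; the number of trees is an arbitrary choice for illustration.

```python
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Each of the 100 trees sees a bootstrap sample and random feature subsets;
# the forest takes a majority vote across trees.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

print(model.score(X_test, y_test))
print(confusion_matrix(y_test, model.predict(X_test)))
```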
-
StatQuest: Random Forests Part 1 - Building, Using and Evaluating
Random Forests make a simple, yet effective, machine learning method. They are made out of decision trees, but don't have the same problems with accuracy. In...
-
StatQuest: Random Forests Part 2: Missing data and clustering
NOTE: This StatQuest is the updated version of the original Random Forests Part 2 and includes two minor corrections.Last time we talked about how to create,...
-
StatQuest: Random Forests in R
Random Forests are an easy to understand and easy to use machine learning technique that is surprisingly powerful. Here I show you, step by step, how to use ...
-
IAML7.14 Random forest algorithm - Victor Lavrenko
-
Gradient Boost Part 1 (of 4): Regression Main Ideas
Gradient Boost is one of the most popular Machine Learning algorithms in use. And get this, it's not that complicated! This video is the first part in a series that walks through it one step at a time. This video focuses on the main ideas behind using Gradient Boost to predict a continuous value, like someone's weight. We call this, "using Gradient Boost for Regression". In the next video, we'll work through the math to prove that Gradient Boost for Regression really is this simple. In part 3, we'll walk though how Gradient Boost classifies samples into two different categories, and in part 4, we'll go through the math again, this time focusing on classification.
-
Gradient Boost Part 2 (of 4): Regression Details
Gradient Boost is one of the most popular Machine Learning algorithms in use. And get this, it's not that complicated! This video is the second part in a ser...
-
Gradient Boost Part 3 (of 4): Classification
This is Part 3 in our series on Gradient Boost. At long last, we are showing how it can be used for classification. This video focuses on the main idea...
-
Gradient Boost Part 4 (of 4): Classification Details
At last, part 4 in our series of videos on Gradient Boost. This time we dive deep into the details of how it is used for classification, going through algori...
-
Gradient Boosted Decision Tree :: AdaBoost Decision Tree @ Machine Learning Techniques (機器學習技法) - 林軒田
-
Gradient Boosted Decision Tree :: Optimization of AdaBoost @ Machine Learning Techniques (機器學習技法) - 林軒田
-
Gradient Boosted Decision Tree :: Gradient Boosting @ Machine Learning Techniques (機器學習技法) - 林軒田
-
Gradient Boosted Decision Tree :: Summary of Aggregation @ Machine Learning Techniques (機器學習技法) - 林軒田
-
XGBoost Part 1 (of 4): Regression
XGBoost is an extreme machine learning algorithm, and that means it's got lots of parts. In this video, we focus on the unique regression trees that XGBoost ...
-
XGBoost Part 2 (of 4): Classification
In this video we pick up where we left off in part 1 and cover how XGBoost trees are built for Classification.NOTE: This StatQuest assumes that you are alrea...
-
XGBoost Part 3 (of 4): Mathematical Details
In this video we dive into the nitty-gritty details of the math behind XGBoost trees. We derive the equations for the Output Values from the leaves as well as the Similarity Score. Then we show how these general equations are customized for Regression or Classification by their respective Loss Functions. If you make it to the end, you will be approximately 22% smarter than you are now! :)
-
XGBoost Part 4 (of 4): Crazy Cool Optimizations
This video covers all kinds of extra optimizations that XGBoost uses when the training dataset is huge. So we'll talk about the Approximate Greedy Algorithm,...
-
XGBoost in Python from Start to Finish
NOTE: You can support StatQuest by purchasing the Jupyter Notebook and Python code seen in this video here: https://statquest.org/product/jupyter-notebook-xgboost-in-python/
-
Random Forests - The Math of Intelligence (Week 6) - Siraj Raval
This is one of the most used machine learning models ever. Random Forests can be used for both regression and classification, and our use case will be to ass...
-
Explore Random Forest Algorithm | Machine Learning Training | Edureka
This Edureka video on Random Forest in Machine Learning explains the concept of the Random Forest algorithm in Python and how is it used. Topics Covered: 00:00:00 Introduction 00:01:16 What is Classification 00:02:34 What is Random Forest? 00:03:11 Why Random Forest? 00:07:36 Ensemble Learning 00:10:22 Random Forest Analogy 00:15:00 How good is your model?
-
Random Forest :: Random Forest Algorithm @ Machine Learning Techniques (機器學習技法) - 林軒田
-
Random Forest :: Out-of-bag Estimate @ Machine Learning Techniques (機器學習技法) - 林軒田
-
Random Forest :: Feature Selection @ Machine Learning Techniques (機器學習技法) - 林軒田
-
Random Forest :: Random Forest in Action @ Machine Learning Techniques (機器學習技法) - 林軒田
-
-
Clustering Analysis
A comprehensive introduction to and analysis of clustering, for learners who want to understand and apply a range of clustering techniques. The codebasics and Noureddin Sadawi videos introduce partitive clustering methods such as K-means and the self-organizing map (SOM). Siraj Raval takes a deeper look at K-means clustering and Gaussian mixture models, techniques suited to more complex datasets. The StatQuest videos explain hierarchical clustering and DBSCAN in an accessible way. Victor Lavrenko's series of short videos starts from the basic concepts and walks through the different types of clustering algorithms, including how K-means works, its objective function, how to choose the number of clusters, and how to evaluate clustering systems.
-
Machine Learning Tutorial Python - 13: K Means Clustering Algorithm - codebasics
The K Means clustering algorithm is an unsupervised machine learning technique used to cluster data points. In this tutorial we will go over some theory behind how k means works and then solve an income group clustering problem using sklearn, kmeans and Python. The elbow method is a technique used to determine the optimal value of k; we will review that method as well. Code: https://github.com/codebasics/py/blob... data link: https://github.com/codebasics/py/tree... Exercise solution: https://github.com/codebasics/py/blob... Topics that are covered in this Video: 0:00 introduction 0:08 Theory - Explanation of Supervised vs Unsupervised learning and how kmeans clustering works (kmeans is unsupervised learning) 5:00 Elbow method 7:33 Coding (start) (Cluster people income based on age) 9:38 sklearn.cluster KMeans model creation and training 14:56 Use MinMaxScaler from sklearn 24:07 Exercise (Cluster iris flowers using their petal width and length)
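A minimal sketch of K-means with MinMaxScaler and the elbow method as described above; the (age, income) points are invented for illustration and the final choice of three clusters is arbitrary.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import MinMaxScaler

# Invented (age, income) points standing in for the data used in the tutorial.
X = np.array([[27, 70000], [29, 90000], [34, 61000], [42, 150000],
              [40, 155000], [56, 160000], [25, 45000], [46, 143000]])

# Scale both features to [0, 1] so income does not dominate the distances.
X_scaled = MinMaxScaler().fit_transform(X)

# Elbow method: inertia (sum of squared distances to centroids) for each k.
for k in range(1, 6):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X_scaled)
    print(k, km.inertia_)

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_scaled)
print(labels)
```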
-
Partitive Clustering .. Self-Organizing Map (SOM) - Noureddin Sadawi
My web page:www.imperial.ac.uk/people/n.sadawi
-
Partitive Clustering (K-Means Clustering) - Noureddin Sadawi
My web page:www.imperial.ac.uk/people/n.sadawi
-
Hierarchical Clustering (Agglomerative and Divisive Clustering) - Noureddin Sadawi
My web page:www.imperial.ac.uk/people/n.sadawi
-
Clustering Algorithms .. What is Clustering? - Noureddin Sadawi
My web page:www.imperial.ac.uk/people/n.sadawi
-
StatQuest: Hierarchical Clustering
Hierarchical clustering is often used with heatmaps and with machine learning type stuff. It's no big deal, though, and based on just a few simple concepts. ...
-
StatQuest: K-means clustering
K-means clustering is used in all kinds of situations and it's crazy simple. The R code is on the StatQuest GitHub: https://github.com/StatQuest/k_means_clus...
-
Clustering with DBSCAN, Clearly Explained!!! - StatQuest
DBSCAN is a super useful clustering algorithm that can handle nested clusters with ease. This StatQuest shows you exactly how it works. BAM!For a complete in...
-
K-Means Clustering - The Math of Intelligence (Week 3) - Siraj Raval
Let's detect the intruder trying to break into our security system using a very popular ML technique called K-Means Clustering! This is an example of learnin...
-
Gaussian Mixture Models - The Math of Intelligence (Week 7) - Siraj Raval
We're going to predict customer churn using a clustering technique called the Gaussian Mixture Model! This is a probability distribution that consists of multiple Gaussian distributions, very cool. I also have something important but unrelated to say in the beginning of the video.
-
Clustering 1: Overview - Victor Lavrenko
-
Clustering 2: Monothetic vs polythetic - Victor Lavrenko
-
Clustering 3: Types of clustering algorithms - Victor Lavrenko
-
Clustering 4: Overview of k-means clustering - Victor Lavrenko
-
Clustering 5: The K-means algorithm - Victor Lavrenko
-
Clustering 6: The k-means algorithm visually - Victor Lavrenko
-
Clustering 7: K-means objective function - Victor Lavrenko
-
Clustering 8: Optimal number of clusters - Victor Lavrenko
-
Clustering 9: Evaluating clustering systems - Victor Lavrenko
-
Clustering 10: Intrinsic evaluation and alignment - Victor Lavrenko
-
Clustering 11: Paired evaluation and Rand index - Victor Lavrenko
-
Clustering 12: K-means for image representation - Victor Lavrenko
-
Clustering 13: Summary - Victor Lavrenko
-
IAML19.1 Flat vs hierachical clustering - Victor Lavrenko
-
IAML19.2 Divisive clustering (top-down) - Victor Lavrenko
-
IAML19.3 Agglomerative clustering (bottom-up) - Victor Lavrenko
-
IAML19.4 Agglomerative clustering: dendrogram - Victor Lavrenko
-
IAML19.5 Single-link, complete-link, Ward's method - Victor Lavrenko
-
IAML19.6 Lance-Williams algorithm - Victor Lavrenko
-
IAML19.7 Summary - Victor Lavrenko
-
-
Model Evaluation and Validation Methods
This section focuses on model evaluation and validation techniques in machine learning, including how to use a validation set to avoid overfitting, how to run K-fold cross validation, how to separate training and test data, and how bias and variance affect model training. The videos provide solid theoretical background and practical examples, helping learners evaluate model performance effectively and avoid the common problem of overfitting. This playlist suits learners who want to improve the generalization ability and robustness of their models.
-
為什麼用了驗證集 (validation set) 結果卻還是過擬合(overfitting)了呢? - Hung-yi Lee
-
Machine Learning Tutorial Python 12 - K Fold Cross Validation - codebasics
Many times we get into a dilemma about which machine learning model we should use for a given problem. KFold cross validation allows us to evaluate the performance of a model by creating K folds of a given dataset. This is better than a traditional train_test_split. In this tutorial we will cover the basics of cross validation and kfold. We will also look into the cross_val_score function of the sklearn library, which provides a convenient way to run cross validation on a model. Code: https://github.com/codebasics/py/blob... Exercise: Exercise description is available in the above notebook towards the end Exercise solution: https://github.com/codebasics/py/blob... Topics that are covered in this Video: 0:00 Introduction 0:21 Cross Validation 1:02 Ways to train your model (use all available data for training and test on same dataset) 2:08 Ways to train your model (split available dataset into training and test sets) 3:26 Ways to train your model (k fold cross validation) 4:26 Coding (start) (Use hand written digits dataset for kfold cross validation) 8:23 sklearn.model_selection KFold 9:10 KFold.split method 12:21 StratifiedKFold 19:45 cross_val_score
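A minimal sketch of K-fold cross validation with cross_val_score and StratifiedKFold as described above; the three candidate models compared are arbitrary choices for the example.

```python
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

# Compare several models on the same folds instead of a single train/test split.
for model in (LogisticRegression(max_iter=5000), SVC(), RandomForestClassifier()):
    scores = cross_val_score(model, X, y, cv=cv)
    print(type(model).__name__, scores.mean())
```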
-
Machine Learning Fundamentals: Cross Validation - StatQuest
One of the fundamental concepts in machine learning is Cross Validation. It's how we decide which machine learning method would be best for our dataset. Check out the video to find out how! For a complete index of all the StatQuest videos, check out: https://statquest.org/video-index/
-
Scikit-Learn 8 cross validation 交叉验证1 (机器学习 sklearn 教学教程tutorial) - 莫烦Python
python3 sklearn 基础 教学教程本节练习代码: https://github.com/MorvanZhou/tutorials/tree/master/sklearnTUT/sk8_cross_validationsklearn 中的 cross validation 交叉验证 对于我们选择正确的 ...
-
Scikit-Learn 9 cross validation 交叉验证2 (机器学习 sklearn 教学教程tutorial) - 莫烦Python
python3 sklearn 基础 教学教程本节练习代码: https://github.com/MorvanZhou/tutorials/blob/master/sklearnTUT/sk9_cross_validation2.pysklearn.learning_curve 中的 learning curv...
-
Scikit-Learn 10 cross validation 交叉验证3 (机器学习 sklearn 教学教程tutorial) - 莫烦Python
python3 sklearn 基础 教学教程本节练习代码: https://github.com/MorvanZhou/tutorials/blob/master/sklearnTUT/sk10_cross_validation3.py连续三节的 cross validation让我们知道在机器学习中 vali...
-
Machine Learning Foundations/Techniques: Validation - 林軒田
-
Overfitting 1: over-fitting and under-fitting - Victor Lavrenko
When building a learning algorithm, we want it to work well on the future data, not on the training data. Many algorithms will make perfect predictions on the training data, but perform poorly on the future data. This is known as overfitting. In this video we provide formal definitions of over-fitting and under-fitting and give examples for classification and regression tasks.
-
Overfitting 2: training vs. future error - Victor Lavrenko
Training error is something we can always compute for a (supervised) learning algorithm. But what we want is the error on the future (unseen) data. We define the generalization error as the expected error of all possible data that could come in the future. We cannot compute it, but can approximate it with error computed over a testing set.
-
Overfitting 3: confidence interval for error - Victor Lavrenko
The error on the test set is an approximation of the true future error. How close is it? We show how to compute a confidence interval [a,b] such that the error of our classifier in the future is between a and b (with high probability, and under the assumption that future data looks similar to our test set).
-
Overfitting 4: training, validation, testing - Victor Lavrenko
When building a learning algorithm, we need to have three disjoint sets of data: the training set, the validation set and the testing set. This video explains why we need all three.
-
Machine Learning Foundations/Techniques: Training versus Testing - 林軒田
-
Machine Learning Tutorial Python - 7: Training and Testing Data - codebasics
Data Science Full Course For Beginners | Python Data Science Tutorial | Data Science With Python The sklearn.model_selection.train_test_split method is used in machine learning projects to split the available dataset into training and test sets. This way you can train and test on separate datasets. When you test your model using a dataset that the model didn't see during the training phase, you get a better idea of the model's accuracy. #MachineLearning #PythonMachineLearning #MachineLearningTutorial #Python #PythonTutorial #PythonTraining #MachineLearningCource #MachineLearningMethod #DataTraining #sklearntutorials #scikitlearntutorials Code: https://github.com/codebasics/py/blob... Topics that are covered in this Video: 0:00 - Theory behind why we need to split the given dataset into training and test sets using sklearn's train_test_split method. 0:54 - Coding (Here we use a car price prediction problem to demonstrate train test split) 2:14 - Use train_test_split from sklearn 3:54 - Use of random state method 4:49 - Use of fit() method to train your model 5:35 - Score() method (to check the accuracy of the model) Do you want to learn technology from me? Check https://codebasics.io/?utm_source=des... for my affordable video courses.
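For reference, a minimal train_test_split sketch is shown below; the synthetic one-feature "car age vs price" data is an assumption made up for illustration, not the tutorial's actual dataset.

```python
# train_test_split sketch (synthetic data; the video uses a real car price dataset).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.uniform(0, 10, size=(200, 1))                    # made-up feature, e.g. car age
y = 30000 - 2500 * X[:, 0] + rng.normal(0, 1500, 200)    # noisy made-up price

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=10)

model = LinearRegression()
model.fit(X_train, y_train)            # train only on the training split
print(model.score(X_test, y_test))     # R^2 measured on unseen test data
```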
-
IAML8.8 Why we need validation sets - Victor Lavrenko
-
IAML8.9 Cross-validation - Victor Lavrenko
-
IAML8.10 Leave-one-out cross-validation - Victor Lavrenko
-
-
機器學習- 優化與調參數
機器學習中的優化技術和超參數調優,涵蓋了從基礎的梯度下降算法到更進階的優化策略。
-
Hyperparameter Optimization - The Math of Intelligence #7 - Siraj Raval
Hyperparameters are the magic numbers of machine learning. We're going to learn how to find them in a more intelligent way than just trial-and-error. We'll g...
-
Machine Learning Tutorial Python - 16: Hyper parameter Tuning (GridSearchCV) - codebasics
In this python machine learning tutorial for beginners we will look into, 1) how to tune machine learning model hyperparameters 2) how to choose the best model for a given machine learning problem We will start by comparing the traditional train_test_split approach with k fold cross validation. Then we will see how GridSearchCV helps run K Fold cross validation with its convenient api. GridSearchCV helps find the best parameters that give maximum performance. RandomizedSearchCV is another class in the sklearn library that does the same thing as GridSearchCV but without running an exhaustive search, which helps with computation time and resources. We will also see how to find the best model among several classification algorithms using GridSearchCV. At the end we have an interesting exercise for you to solve. #MachineLearning #PythonMachineLearning #MachineLearningTutorial #Python #PythonTutorial #PythonTraining #MachineLearningCource #HyperParameter #GridSearchCV #sklearntutorials #scikitlearntutorials Exercise: https://github.com/codebasics/py/blob... Code in this tutorial: https://github.com/codebasics/py/blob... Do you want to learn technology from me? Check https://codebasics.io/?utm_source=des... for my affordable video courses. Exercise solution: https://github.com/codebasics/py/blob... Topics that are covered in this Video: 00:00 Introduction 00:45 train_test_split to find model performance 01:37 K fold cross validation 04:44 GridSearchCV for hyperparameter tuning 10:18 RandomizedSearchCV 12:35 Choosing best model 15:25 Exercise
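A minimal GridSearchCV sketch is shown below; the SVM model, the parameter grid, and the iris dataset are illustrative assumptions rather than the tutorial's exact code.

```python
# GridSearchCV sketch: exhaustive search over a small SVM parameter grid (illustrative).
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
param_grid = {"C": [1, 10, 20], "kernel": ["rbf", "linear"]}

search = GridSearchCV(SVC(gamma="auto"), param_grid, cv=5)
search.fit(X, y)                 # runs k-fold cross validation for every parameter combination
print(search.best_params_)       # best combination found
print(search.best_score_)        # its mean cross-validated accuracy
```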
-
Second Order Optimization - The Math of Intelligence #2 - Siraj Raval
Gradient Descent and its variants are very useful, but there exists an entire other class of optimization techniques that aren't as widely understood. We'll ...
-
Gradient Descent, Step-by-Step - StatQuest
Gradient Descent is the workhorse behind most of Machine Learning. When you fit a machine learning method to a training dataset, you're probably using Gradie...
-
Stochastic Gradient Descent, Clearly Explained!!! - StatQuest
Even though Stochastic Gradient Descent sounds fancy, it is just a simple addition to "regular" Gradient Descent. This video sets up the problem that Stochas...
-
Machine Learning Tutorial Python - 4: Gradient Descent and Cost Function - codebasics
In this tutorial, we are covering a few important concepts in machine learning such as cost function, gradient descent, learning rate and mean squared error. We will use a home price prediction use case to understand gradient descent. After going over the math behind these concepts, we will write python code to implement gradient descent for linear regression in python. At the end I have an exercise for you to practice gradient descent #MachineLearning #PythonMachineLearning #MachineLearningTutorial #Python #PythonTutorial #PythonTraining #MachineLearningCource #CostFunction #GradientDescent Code: https://github.com/codebasics/py/blob... Exercise csv file: https://github.com/codebasics/py/blob... Topics that are covered in this Video: 0:00 Overview 1:23 - What is prediction function? How can we calculate it? 4:00 - Mean squared error (ending time) 4:57 - Gradient descent algorithm and how it works? 11:00 - What is derivative? 12:30 - What is partial derivative? 16:07 - Use of python code to implement gradient descent 27:05 - Exercise is to come up with a linear function for given test results using gradient descent Topic Highlights: 1) Theory (We will talk about MSE, cost function, global minima) 2) Coding - (Plain python code that finds out a linear equation for given sample data points using gradient descent) 3) Exercise - (Exercise is to come up with a linear function for given test results using gradient descent)
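The core of the lesson can be sketched in plain Python as below; the toy data and learning rate are illustrative assumptions (the true relation is y = 2x + 3, which the loop should recover).

```python
# Plain-Python gradient descent for simple linear regression (illustrative data and settings).
import numpy as np

x = np.array([1, 2, 3, 4, 5], dtype=float)
y = np.array([5, 7, 9, 11, 13], dtype=float)   # generated from y = 2x + 3

m, b = 0.0, 0.0              # slope and intercept, initialised at zero
learning_rate = 0.02
n = len(x)

for _ in range(10000):
    y_pred = m * x + b
    cost = np.mean((y - y_pred) ** 2)            # mean squared error (the cost function)
    dm = -(2 / n) * np.sum(x * (y - y_pred))     # partial derivative of cost w.r.t. m
    db = -(2 / n) * np.sum(y - y_pred)           # partial derivative of cost w.r.t. b
    m -= learning_rate * dm                      # step against the gradient
    b -= learning_rate * db

print(m, b, cost)   # m should approach 2 and b should approach 3
```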
-
-
模型性能指標與評估
深入探討了機器學習中的各種模型性能評估指標,如靈敏度、特異性、混淆矩陣、ROC和AUC曲線。這些視頻講解了如何利用這些指標來評估和改善分類和回歸模型的性能。此外,還包括了對均方誤差、平均絕對誤差和相關係數等重要統計量的詳細解釋。這個播放列表適合想要深入理解機器學習模型評估方法和提高評估技巧的學習者。
-
Precision, Recall, F1 score, True Positive|Deep Learning Tutorial 19 (Tensorflow2.0, Keras & Python) - codebasics
In this video we will go over the following concepts: what is a true positive, false positive, true negative and false negative; what is precision and recall; and what is the F1 score. We will also write simple code to compare dog vs non-dog labels and print all of the above measures on them #Whatistruepositive #falsepositive #truenegative #falsenegative #precisionandrecall #F1score #deeplearning
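A tiny sketch of these metrics on made-up dog vs non-dog labels (the labels and predictions below are invented purely for illustration):

```python
# Precision / recall / F1 sketch on toy dog-vs-not-dog labels (made-up data).
from sklearn.metrics import precision_score, recall_score, f1_score

y_true = [1, 1, 1, 0, 0, 0, 1, 0, 1, 0]   # 1 = dog, 0 = not dog
y_pred = [1, 0, 1, 0, 1, 0, 1, 0, 1, 0]   # a hypothetical model's predictions

print(precision_score(y_true, y_pred))    # TP / (TP + FP)
print(recall_score(y_true, y_pred))       # TP / (TP + FN)
print(f1_score(y_true, y_pred))           # harmonic mean of precision and recall
```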
-
Machine Learning Fundamentals: Bias and Variance - StatQuest
Bias and Variance are two fundamental concepts for Machine Learning, and their intuition is just a little different from what you might have learned in your ...
-
IAML2.22: Classification accuracy and imbalanced classes - Victor Lavrenko
-
IAML2.25: Detect outliers by visualising the data - Victor Lavrenko
-
Machine Learning Tutorial Python - 20: Bias vs Variance In Machine Learning - codebasics
Bias variance trade off is a popular term in statistics. In this video we will look into what bias and variance mean in the field of machine learning. We will understand this concept by going through a simple example of house price prediction and also cover overfitting and underfitting.
-
IAML8.1 Generalization in machine learning - Victor Lavrenko
-
IAML8.2 Overfitting and underfitting - Victor Lavrenko
-
IAML8.3 Examples of overfitting and underfitting - Victor Lavrenko
-
IAML8.4 How to control overfitting - Victor Lavrenko
-
IAML8.5 Generalization error - Victor Lavrenko
-
IAML8.6 Estimating the generalization error - Victor Lavrenko
-
IAML8.7 Confidence interval for generalization - Victor Lavrenko
-
Machine Learning Foundations/Techniques: Regularization - 林軒田
-
Machine Learning Tutorial Python - 17: L1 and L2 Regularization | Lasso, Ridge Regression - codebasics
In this python machine learning tutorial for beginners we will look into, 1) What is overfitting, underfitting 2) How to address overfitting using L1 and L2 regularization 3) Write code in python and sklearn for housing price prediction where we will see a model overfit when we use simple linear regression. Then we will use Lasso regression (L1 regularization) and ridge regression (L2 regularization) to address this overfitting issue #MachineLearning #PythonMachineLearning #MachineLearningTutorial #Python #PythonTutorial #PythonTraining #MachineLearningCource #L1andL2Regularization #Regularization #sklearntutorials #scikitlearntutorials
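A compact sketch of the idea follows; the synthetic many-features dataset and the alpha values are illustrative assumptions, chosen only to show how L1 and L2 penalties can improve test performance when plain linear regression overfits.

```python
# Lasso (L1) and Ridge (L2) regularization sketch on synthetic, easy-to-overfit data.
import numpy as np
from sklearn.linear_model import LinearRegression, Lasso, Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 30))            # few samples, many features -> prone to overfitting
true_coef = np.zeros(30)
true_coef[:3] = [4.0, -2.0, 1.5]         # only three features actually matter
y = X @ true_coef + rng.normal(0, 1.0, 80)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, model in [("plain linear", LinearRegression()),
                    ("lasso (L1)", Lasso(alpha=0.1)),
                    ("ridge (L2)", Ridge(alpha=10.0))]:
    model.fit(X_tr, y_tr)
    print(name, round(model.score(X_te, y_te), 3))   # R^2 on the held-out test split
```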
-
“L1和L2正则化”直观理解(之一),从拉格朗日乘数法角度进行理解 - 王木头学科学
L1和L2正则化,可以从3个角度分别理解:拉格朗日乘数法角度权重衰减角度贝叶斯概率角度3种方法意义各不相同,却又殊途同归
-
“L1和L2正则化”直观理解(之二),为什么又叫权重衰减?到底哪里衰减了? - 王木头学科学
L1和L2正则化,可以从3个角度分别理解:拉格朗日乘数法角度权重衰减角度贝叶斯概率角度3种方法意义各不相同,却又殊途同归
-
什么是 L1 L2 正规化 正则化 Regularization (深度学习 deep learning) - 莫烦Python
今天我们会来说说用于减缓过拟合问题的 L1 和 L2 regularization 正规化手段.更多内容在我的教学网站: https://morvanzhou.github.io/tutorials/Theano 使用 L1 L2 正规化: https://morvanzhou.github.io/tutoria...
-
Machine Learning Fundamentals: Sensitivity and Specificity - StatQuest
In this StatQuest we talk about Sensitivity and Specificity - two key concepts for evaluating Machine Learning methods. These make it easier to choose which m...
-
Machine Learning Fundamentals: The Confusion Matrix - StatQuest
One of the fundamental concepts in machine learning is the Confusion Matrix. Combined with Cross Validation, it's how we decide which machine learning method would be best for our dataset. Check out the video to find out how! NOTE: This video illustrates the confusion matrix concept as described in the Introduction to Statistical Learning, page 145. For a complete index of all the StatQuest videos, check out: https://statquest.org/video-index/ If you're not already familiar with cross validation, check out the Quest: • Machine Learning Fundamentals: Cross ... If you'd like to support StatQuest, please consider... Buying The StatQuest Illustrated Guide to Machine Learning!!! PDF - https://statquest.gumroad.com/l/wvtmc Paperback - https://www.amazon.com/dp/B09ZCKR4H6
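A minimal confusion matrix sketch in scikit-learn is shown below; the iris dataset and logistic regression classifier are illustrative stand-ins, not the example used in the video.

```python
# Confusion matrix sketch for a small multi-class classifier (illustrative setup).
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
cm = confusion_matrix(y_te, clf.predict(X_te))
print(cm)   # rows = true classes, columns = predicted classes
```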
-
ROC and AUC in R - StatQuest
This tutorial walks you through, step-by-step, how to draw ROC curves and calculate AUC in R. We start with basic ROC graph, learn how to extract thresholds ...
-
Design Matrix Examples in R, Clearly Explained!!! - StatQuest
This StatQuest complements the StatQuest: GLMs Pt.3 - Design Matrices https://youtu.be/2UYx-qjJGSs with examples given in R. If you would like the code, you ...
-
R-squared, Clearly Explained!!! - StatQuest
R-squared is one of the most useful metrics in statistics. It can give you a sense of how good your model is. For a complete index of all the StatQuest videos, check out: https://statquest.org/video-index/
-
ROC and AUC, Clearly Explained! - StatQuest
ROC (Receiver Operator Characteristic) graphs and AUC (the area under the curve), are useful for consolidating the information from a ton of confusion matric...
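The same ideas can be sketched in Python with scikit-learn (the StatQuest videos demonstrate them in R); the synthetic binary classification data below is an illustrative assumption.

```python
# ROC curve and AUC sketch with scikit-learn (synthetic binary problem, illustrative).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
scores = clf.predict_proba(X_te)[:, 1]           # predicted probability of the positive class

fpr, tpr, thresholds = roc_curve(y_te, scores)   # one (FPR, TPR) point per threshold
print(roc_auc_score(y_te, scores))               # area under that ROC curve
```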
-
IAML8.11 Stratified sampling - Victor Lavrenko
-
IAML8.12 Evaluating classification and regression - Victor Lavrenko
-
IAML8.13 False positives and false negatives - Victor Lavrenko
-
IAML8.14 Classification error and accuracy - Victor Lavrenko
-
IAML8.15 When classification error is wrong - Victor Lavrenko
-
IAML8.16 Recall, precision, miss and false alarm - Victor Lavrenko
-
IAML8.17 Classification cost and utility - Victor Lavrenko
-
IAML8.18 Receiver Operating Characteristic (ROC) curve - Victor Lavrenko
-
IAML8.19 Evaluating regression: MSE, MAE, CC - Victor Lavrenko
-
IAML8.20 Mean squared error and outliers - Victor Lavrenko
-
IAML8.21 Mean absolute error (MAE) - Victor Lavrenko
-
IAML8.22 Correlation coefficient - Victor Lavrenko
-
-
實戰教程
這系列視頻涵蓋了多種機器學習應用的實戰教程,從基礎到進階,適合對實際建模和應用感興趣的學習者。 莫烦Python的一系列視頻專注於介紹和教學Scikit-Learn這個廣泛使用的Python機器學習庫,並示範如何使用機器學習和強化學習算法來構建和完善機械手臂,從搭建靜態和動態環境到實現強化學習算法。 同樣,該系列還包括了構建汽車狀態分類器的教程,從數據分析到搭建測試網絡。 Siraj Raval的“Machine Learning for Hackers”系列提供了一系列快節奏的項目,涵蓋了從創建聊天機器人到股市預測等各種主題。 Edureka提供了更長形式的教程,包括使用機器學習預測COVID-19疫情和房價預測等項目。
-
Scikit-learn Crash Course - Machine Learning Library for Python - freeCodeCamp.org
Scikit-learn is a free software machine learning library for the Python programming language. Learn how to use it in this crash course. ✏️ Course created by Vincent D. Warmerdam. ⭐️ Course Contents ⭐️ ⌨️ (0:00:00) introduction ⌨️ (0:03:08) introducing scikit-learn ⌨️ (0:34:36) preprocessing ⌨️ (0:53:36) metrics ⌨️ (1:24:49) meta-estimators ⌨️ (1:45:34) human-learn ⌨️ (2:06:17) wrap-up ⭐️ Code ⭐️ 💻 Full code: https://github.com/koaning/calm-noteb... 💻 Notebook per section: 🖥 introducing scikit-learn: https://github.com/koaning/calm-noteb... 🖥 preprocessing: https://github.com/koaning/calm-noteb... 🖥 metrics: https://github.com/koaning/calm-noteb... 🖥 meta estimators: https://github.com/koaning/calm-noteb... 🖥 human-learn: https://github.com/koaning/calm-noteb...
-
Scikit-Learn Course - Machine Learning in Python Tutorial - freeCodeCamp.org
Scikit-learn is a free software machine learning library for the Python programming language. Learn about machine learning using scikit-learn in this full course. 💻 Code: https://github.com/DL-Academy/Machine... 🔗 Scikit-learn website: https://scikit-learn.org ✏️ Course from DL Academy. Check out their YouTube channel: / @betahex3589 🔗 View more courses here: https://thedlacademy.com/ ⭐️ Course Contents ⭐️ Chapter 1 - Getting Started with Machine Learning ⌨️ (0:00) Introduction ⌨️ (0:22) Installing SKlearn ⌨️ (3:37) Plot a Graph ⌨️ (7:33) Features and Labels_1 ⌨️ (11:45) Save and Open a Model Chapter 2 - Taking a look at some machine learning algorithms ⌨️ (13:47) Classification ⌨️ (17:28) Train Test Split ⌨️ (25:31) What is KNN ⌨️ (33:48) KNN Example ⌨️ (43:54) SVM Explained ⌨️ (51:11) SVM Example ⌨️ (57:46) Linear regression ⌨️ (1:07:49) Logistic vs linear regression ⌨️ (1:23:12) Kmeans and the math behind it ⌨️ (1:31:08) KMeans Example Chapter 3 - Artificial Intelligence and the science behind it ⌨️ (1:42:02) Neural Network ⌨️ (1:56:03) Overfitting and Underfitting ⌨️ (2:03:05) Backpropagation ⌨️ (2:18:16) Cost Function and Gradient Descent ⌨️ (2:26:24) CNN ⌨️ (2:31:46) Handwritten Digits Recognizer
-
Scikit-Learn 1 Why? (机器学习 sklearn 教学教程tutorial) - 莫烦Python
python3 sklearn 基础 教学教程Scikit learn 也简称 sklearn, 是机器学习领域当中最知名的 python 模块之一. 视频中提到了我们为什么要学习 sklearn, 还有用 sklearn 可以解决哪些问题. sklearn 网站: http://scikit-learn.org...
-
Scikit-Learn 2 安装 (机器学习 sklearn 教学教程tutorial) - 莫烦Python
python3 sklearn 基础 教学教程安装 Scikit-learn (sklearn) 最简单的方法就是使用 pip 安装它.在你的终端上执行 (pip install scikit-learn) 就好啦~ 注意 python3.x版本的用户要使用 (pip3 install scikit-learn)...
-
Scikit-Learn 3 如何选择机器学习方法 (机器学习 sklearn 教学教程tutorial) - 莫烦Python
python3 sklearn 基础 教学教程 处理不同问题的时候呢, 我们会要用到不同的机器学习-学习方法. Sklearn 提供了一张非常有用的流程图,供我们选择合适的学习方法. 流程图网址: http://scikit-learn.org/stable/tutori... 播放列表: • Scikit-learn (sklearn) 优雅地学会机器学习 教学教程
-
Scikit-Learn 4 通用学习模式 (机器学习 sklearn 教学教程tutorial) - 莫烦Python
python3 sklearn 基础 教学教程 本节练习代码: https://github.com/MorvanZhou/tutoria... 我们用 sklearn 自己的 iris 的例子实现了一次 KNeighborsClassifier 学习. 说明了所有 sklearn的编程结构和过程都是极度类似的.所以只需要先定义 用什么model学习,然后再 model.fit(数据), 这样 model 就能从数据中学到东西. 最后还可以用 model.predict() 来预测值.
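The generic define / fit / predict pattern described above can be sketched as follows (KNeighborsClassifier on the iris dataset, with an illustrative train/test split):

```python
# The common sklearn workflow: choose a model, fit it, then predict (illustrative sketch).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=4)

knn = KNeighborsClassifier()    # 1) define which model to learn with
knn.fit(X_tr, y_tr)             # 2) model.fit(data) learns from the training data
print(knn.predict(X_te[:5]))    # 3) model.predict() on new samples
print(y_te[:5])                 # compare against the true labels
```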
-
Scikit-Learn 5 sklearn 的 datasets 数据库 (机器学习 sklearn 教学教程tutorial) - 莫烦Python
python3 sklearn 基础 教学教程 本节练习代码:https://github.com/MorvanZhou/tutoria... Sklearn 提供了很多的有用的数据库,既有真实数据也有你可以编造的数据!特别的强大. datasets 网址: http://scikit-learn.org/stable/module...
-
Scikit-Learn 6 model 常用属性和功能 (机器学习 sklearn 教学教程tutorial) - 莫烦Python
python3 sklearn 基础 教学教程本节练习代码: https://github.com/MorvanZhou/tutorials/blob/master/sklearnTUT/sk6_model_attribute_method.pysklearn 的 model 属性和功能都是高度统一的. 你可以运...
-
Scikit-Learn 7 normalization 标准化数据 (机器学习 sklearn 教学教程tutorial) - 莫烦Python
python3 sklearn 基础 教学教程 本节练习代码: https://github.com/MorvanZhou/tutoria... normalization 在数据跨度不一的情况下对机器学习有很重要的作用.特别是各种数据属性还会互相影响的情况之下. Scikit-learn 中标准化的语句是 preprocessing.scale() . scale 以后, model 就更能从标准化数据中学到东西.
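A tiny sketch of preprocessing.scale on a made-up matrix whose columns have very different ranges (the numbers are purely illustrative):

```python
# Feature scaling sketch with preprocessing.scale (toy matrix, illustrative values).
import numpy as np
from sklearn import preprocessing

X = np.array([[10.0, 0.001, 2000.0],
              [20.0, 0.004, 1000.0],
              [30.0, 0.002, 3000.0]])

X_scaled = preprocessing.scale(X)   # each column is rescaled to zero mean and unit variance
print(X_scaled)
print(X_scaled.mean(axis=0), X_scaled.std(axis=0))
```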
-
Scikit-Learn 11 Save (机器学习 sklearn 教学教程tutorial) - 莫烦Python
python3 sklearn 基础 教学教程本节练习代码: https://github.com/MorvanZhou/tutorials/blob/master/sklearnTUT/sk11_save.py总算到了最后一次的课程了, 我们练习好了一个 model 以后总需要保存和再次预测, 所以保存和读取我...
-
#1 从零开始做一个汽车状态分类器: 分析数据 (机器学习实战 教程教学 tutorial) - 莫烦Python
这个实践很简单, 我们需要训练出来一个神经网络的分类器, 用来对数据中的类别进行分类. 其实在现实生活中, 我们有很多形式的数据, 纯数字或文本的数据是最多的. 所以我选择了这样一个分类的练习来做实战.
-
#2 从零开始做一个汽车状态分类器: 搭建测试网络 (机器学习实战 教程教学 tutorial) - 莫烦Python
这节内容, 就是非常容易, 我们搭建一个模型, 训练, 并可视化它. code: https://github.com/MorvanZhou/train-c... "莫烦Python" 实战 教学目录: https://morvanzhou.github.io/tutorial... 通过 "莫烦 Python" 支持我做出更好的视频: https://morvanzhou.github.io/support/ 莫烦 Python 更多有趣的教程: https://morvanzhou.github.io 翻译帮助其他语言的观看者:http://www.youtube.com/timedtext_cs_p...
-
#1 机械手臂从零开始 搭建结构 (机器学习实战 教程教学 tutorial) - 莫烦Python
做这个实践的主要目的就是让我们活学活用, 从0开始搭建一个强化学习框架. Code: https://github.com/MorvanZhou/train-r... "莫烦Python" 实战 教学目录: https://morvanzhou.github.io/tutorial... 通过 "莫烦 Python" 支持我做出更好的视频: https://morvanzhou.github.io/support/ 莫烦 Python 更多有趣的教程: https://morvanzhou.github.io 翻译帮助其他语言的观看者:http://www.youtube.com/timedtext_cs_p...
-
#2 机械手臂从零开始 静态环境 (机器学习实战 教程教学 tutorial) - 莫烦Python
有一个机器人在你屏幕上跑来跑去, 你能看见它, 根据他的行为来调整程序, 比看不见任何东西, 不知道是哪除了问题要好得多. 所以做一个可视化的环境变得重要起来. Code: https://github.com/MorvanZhou/train-r... "莫烦Python" 实战 教学目录: https://morvanzhou.github.io/tutorial... 通过 "莫烦 Python" 支持我做出更好的视频: https://morvanzhou.github.io/support/ 莫烦 Python 更多有趣的教程: https://morvanzhou.github.io 翻译帮助其他语言的观看者:http://www.youtube.com/timedtext_cs_p...
-
#3 机械手臂从零开始 写动态环境 (机器学习实战 教程教学 tutorial) - 莫烦Python
上次我们搭建好了一个静态的环境, 整个环境还没有动起来. 这次我们结合手臂的运动部分和手臂的成像部分来写全整个手臂的摆动规则. 并且通过不断地可视化来测试是否写错.Code: https://github.com/MorvanZhou/train-robot-arm-from-scratch"莫烦Python" ...
-
#4 机械手臂从零开始 加入强化学习算法 (机器学习实战 教程教学 tutorial) - 莫烦Python
在上节中, 我们的环境已经基本建设完成了, 现在我们需要的就是一个强化学习的学习方法. 学习方法有很多, 而且也分很多类型. 我们需要按照自己环境的要求挑选适合于这个环境的学习方法.Code: https://github.com/MorvanZhou/train-robot-arm-from-scratch"莫...
-
#5 机械手臂从零开始 完善测试 (机器学习实战 教程教学 tutorial) - 莫烦Python
上节我们加上了强化学习的方法, 来测试了一下整个学习的流程. 不过也发现了一些问题, 而这种发现问题的方式也肯定会出现在你的强化学习项目中.Code: https://github.com/MorvanZhou/train-robot-arm-from-scratch"莫烦Python" 实战 教学目录: htt...
-
Your First ML App - Machine Learning for Hackers #1 - Siraj Raval
This video will get you up and running with your first ML app in just 7 lines of Python. The app will be able to recognize Iris flowers. I created a Slack ch...
-
Build an AI Composer - Machine Learning for Hackers #2 - Siraj Raval
This video will get you up and running with your first AI composer in just 10 lines of Python. The app can compose british folk songs after training on an ex...
-
Build a Game AI - Machine Learning for Hackers #3 - Siraj Raval
This video will get you up and running with your first game AI in just 10 lines of Python. The AI can theoretically learn to master any game you train it on,...
-
Build a Movie Recommender - Machine Learning for Hackers #4 - Siraj Raval
This video will get you up and running with your first movie recommender system in just 10 lines of C++. We train a neural network on a MovieLens dataset of movie ratings by different users to generate a top 10 recommendation list for the default user ID.
-
Build an AI Artist - Machine Learning for Hackers #5 - Siraj Raval
This video will get you up and running with your first AI Artist using the deep learning library Keras!The code for this video is here:https://github.com/llS...
-
Build a Chatbot - ML for Hackers #6 - Siraj Raval
This video will get you up and running with your first Chatbot using the deep learning library Torch!The code for this video is here:https://github.com/llSou...
-
Build an AI Reader - Machine Learning for Hackers #7 - Siraj Raval
This video will get you up and running with your first AI Reader using Google's newly released pre-trained text parser, Parsey McParseface.The code for this ...
-
Build an AI Writer - Machine Learning for Hackers #8 - Siraj Raval
This video will get you up and running with your first AI Writer able to write a short story based on an image that you input. The code for this video is her...
-
Build a Chatbot w/ an API - ML for Hackers #9 - Siraj Raval
This video will get you up and running with your first API-based chatbot able to converse with a user around a topic of your choosing!The code for this video...
-
Reinforcement Learning for Stock Prediction - Siraj Raval
Can we actually predict the price of Google stock based on a dataset of price history? I’ll answer that question by building a Python demo that uses an under...
-
Machine Learning API Tutorial (LIVE) - Siraj Raval
Lets build a simple machine learning API together! I'll use the now classic neural style transfer algorithm to create a simple API that takes in an image and returns a stylized version of it. We'll use the FloydHub cloud service to both train and serve our model in the cloud. We can easily turn a deep neural network into a REST API that anyone can use, i'll detail those steps in this live stream and we'll build it using Tensorflow.
-
COVID - 19 Outbreak Prediction using Machine Learning | Machine Learning | Edureka
This Edureka Session explores and analyses the spread and impact of the novel coronavirus pandemic which has taken the world by storm with its rapid growth. In this session, we shall develop a machine learning model in Python to analyze its impact so far, analyze the outbreak of COVID 19 across various regions, visualize the data using charts and tables, and predict the number of upcoming confirmed cases. Finally, we'll conclude with a few safety measures that you can take to save yourself and your loved ones from getting adversely affected in the hour of crisis. 00:00:00 Introduction 00:02:53 Introduction to COVID 19 00:05:49 Case Study: the outbreak of COVID 19 00:57:20 Conclusion
-
House Price Prediction using ML | Machine Learning Projects | Edureka
This Edureka video on 'House Price Prediction' will help you predict House Prices with Machine Learning concepts. The following pointers are covered in this House Price Prediction session: 00:00:00 Agenda 00:00:47 Problem Statement 00:03:37 Tools and Frameworks 00:04:36 Project
-
No Black Box Machine Learning Course – Learn Without Libraries - freeCodeCamp.org
In this No Black Box Machine Learning Course in JavaScript, you will gain a deep understanding of machine learning systems by coding without relying on libraries. This unique approach not only demystifies the inner workings of machine learning but also significantly enhances software development skills. ✏️ Course created by @Radu (PhD in Computer Science) 🎥 Watch part two: • Machine Learning & Neural Networks wi... HOMEWORK 🏠 1st assignment spreadsheet: https://docs.google.com/spreadsheets/... 🏠 Submit all other assignments to Radu's Discord Server: / discord GITHUB LINKS 💻 Drawing App: https://github.com/gniziemazity/drawi... 💻 Data: https://github.com/gniziemazity/drawi... 💻 Custom Chart Component: https://github.com/gniziemazity/javas... 💻 Full Course Code (In Parts): https://github.com/gniziemazity/ml-co... PREREQUISITES 🎥 Interpolation: • Linear Interpolation (Lerp) - Math an... 🎥 Linear Algebra: • Learn 2D Vectors with JavaScript 🎥 Trigonometry: • Learn Trigonometry with JavaScript LINKS 🔗 Check out the Recognizer we'll build in this course: https://radufromfinland.com/projects/... 🔗 Draw for Radu, Call for help video: • Help me make a NEW Machine Learning C... 🔗 Draw for Radu, Data collection tool: https://radufromfinland.com/projects/ml 🔗 Radu's Self-driving Car Course: • Self-driving Car - JavaScript Course 🔗 Radu's older Machine Learning video: • Learn Machine Learning (!BB - Ep1) 🔗 CHART TUTORIAL (mentioned at 01:45:27): • Build a Chart using JavaScript (No Li... 🔗 CHART CODE: https://github.com/gniziemazity/javas... TOOLS 🔧 Visual Studio Code: https://code.visualstudio.com/download 🔧 Google Chrome: https://www.google.com/chrome 🔧 Node JS: https://nodejs.org/en/download (make sure you add 'node' and 'npm' to the PATH environment variables when asked!) TIMESTAMPS ⌨️(0:00:00) Introduction ⌨️(0:05:04) Drawing App ⌨️(0:46:46) Homework 1 ⌨️(0:47:05) Working with Data ⌨️(1:08:54) Data Visualizer ⌨️(1:29:52) Homework 2 ⌨️(1:30:05) Feature Extraction ⌨️(1:38:07) Scatter Plot ⌨️(1:46:12) Custom Chart ⌨️(2:01:03) Homework 3 ⌨️(2:01:35) Nearest Neighbor Classifier ⌨️(2:43:21) Homework 4 (better box) ⌨️(2:43:53) Data Scaling ⌨️(2:54:45) Homework 5 ⌨️(2:55:23) K Nearest Neighbors Classifier ⌨️(3:04:18) Homework 6 ⌨️(3:04:49) Model Evaluation ⌨️(3:21:29) Homework 7 ⌨️(3:22:01) Decision Boundaries ⌨️(3:39:26) Homework 8 ⌨️(3:39:59) Python & SkLearn ⌨️(3:50:35) Homework 9
-
AlphaZero from Scratch – Machine Learning Tutorial - freeCodeCamp.org
In this machine learning course, you will learn how to build AlphaZero from scratch. AlphaZero is a game-playing algorithm that uses artificial intelligence ...
-
-
進階主題(Advanced Topics)
這系列視頻涵蓋了機器學習的多個重要主題,包括學習的可行性、梯度下降、模型保存方法、處理數據不平衡和過擬合等問題,以及非線性變換和過擬合的危險等。這些主題涉及了機器學習理論的核心概念和實際應用技術,非常適合對機器學習理論和實踐有深入興趣的學習者。
-
How Linear Discriminant Analysis (LDA) Classifier Works 1/2 - Noureddin Sadawi
My web page: www.imperial.ac.uk/people/n.sadawi
-
How Linear Discriminant Analysis (LDA) Classifier Works 2/2 - Noureddin Sadawi
My web page: www.imperial.ac.uk/people/n.sadawi
-
The Next Step for Machine Learning - Hung-yi Lee
這是「機器學習」2019 年春季班的上課錄影,只有之前在同一門課沒有講過的新內容會被上傳到 YouTube 頻道上。請見以下撥放清單:https://www.youtube.com/watch?v=XnyM3-xtxHs&list=PLJV_el3uVTsOK_ZK5L0Iv_EQoL1JefRL4這門課過去的內...
-
VC维是如何推导出来的?为什么说它是机器学习理论最重要的发明? - 王木头学科学
(林轩田老师 《机器学习基石课》 学习感悟与总结 part 2)如何才能利用大数定律实现机器学习?第一步,得到霍夫丁不等式;第二步,构造Ein和Eout,形成PAC机器学习框架;第三步,在PAC基础上,利用Breakpoint得到VC界;第四步,用VC维度帮助选择机器学习模型
-
Machine Learning Foundations/Techniques: Feasibility of Learning (1/2) - 林軒田
-
Machine Learning Foundations/Techniques: Feasibility of Learning (2/2) - 林軒田
-
Machine Learning Tutorial Python - 5: Save Model Using Joblib And Pickle - codebasics
Data Science Full Course For Beginners | Python Data Science Tutorial | Data Science With Python Training a machine learning model can be quite time consuming if the training dataset is very big. In this case it makes sense to train a model once and save it to a file, so that later on while making predictions you can just load that model from the file and you don't need to train it every time. The pickle and sklearn joblib modules can be used for this purpose. Joblib tends to be more efficient with big numpy arrays, hence it is preferred when you have many numpy objects involved in your training step. #MachineLearning #PythonMachineLearning #MachineLearningTutorial #Python #PythonTutorial #PythonTraining #MachineLearningCource #TrainingDataset #sklearntutorials #scikitlearntutorials Code: https://github.com/codebasics/py/blob... Topics that are covered in this Video: 0:00 - Saving trained model to the file 0:14 - How to solve a problem using machine learning? 1:33 - Coding 2:06 - Save model using pickle module 5:10 - Save model using joblib module 5:13 - Difference between pickle and joblib?
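A minimal save-and-reload sketch is shown below; the iris model and the file names are illustrative, and it uses the standalone joblib package (older sklearn versions exposed it as sklearn.externals.joblib).

```python
# Saving and reloading a trained model with pickle and joblib (illustrative paths).
import pickle
import joblib
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Option 1: pickle
with open("model.pickle", "wb") as f:
    pickle.dump(model, f)
with open("model.pickle", "rb") as f:
    restored = pickle.load(f)

# Option 2: joblib (often preferred when the model holds large numpy arrays)
joblib.dump(model, "model.joblib")
restored2 = joblib.load("model.joblib")

print(restored.predict(X[:3]), restored2.predict(X[:3]))   # both reloaded models still predict
```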
-
Machine Learning Tutorial Python - 6: Dummy Variables & One Hot Encoding - codebasics
Data Science Full Course For Beginners | Python Data Science Tutorial | Data Science With Python Machine learning models work very well on datasets containing only numbers. But how do we handle text information in a dataset? A simple approach is to use integer or label encoding, but when categorical variables are nominal, simple label encoding can be problematic. One hot encoding is the technique that can help in this situation. In this tutorial, we will use the pandas get_dummies method to create dummy variables, which allows us to perform one hot encoding on the given dataset. Alternatively we can use sklearn.preprocessing OneHotEncoder as well to create dummy variables. #MachineLearning #PythonMachineLearning #MachineLearningTutorial #Python #PythonTutorial #PythonTraining #MachineLearningCource #OneHotEncoding #sklearntutorials #scikitlearntutorials Code in tutorial: https://github.com/codebasics/py/blob... Exercise csv file: https://github.com/codebasics/py/blob... Exercise solution: https://github.com/codebasics/py/blob... Topics that are covered in this Video: 0:00 Introduction 0:47 How to handle text data in machine learning model? 1:38 Nominal vs Ordinal Variables 2:44 Theory (Explain one hot encoding using home prices in different townships) 3:39 Coding (Start) 3:51 Pandas get_dummies method 7:48 Create a model that uses dummy columns 12:45 Label Encoder 13:29 fit_transform() method 15:40 sklearn OneHotEncoder 19:59 Exercise (To predict prices of car based on car model, age, mileage)
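A compact get_dummies sketch is shown below; the town names and prices are made up for illustration and are not the tutorial's dataset.

```python
# One hot encoding sketch with pandas get_dummies (made-up towns and prices).
import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.DataFrame({
    "town": ["monroe", "monroe", "west windsor", "robinsville", "robinsville"],
    "area": [2600, 3000, 2600, 2900, 3100],
    "price": [550000, 565000, 585000, 600000, 620000],
})

dummies = pd.get_dummies(df["town"], dtype=int)              # one 0/1 column per town
X = pd.concat([df[["area"]], dummies.iloc[:, 1:]], axis=1)   # drop one dummy column (dummy variable trap)
y = df["price"]

model = LinearRegression().fit(X, y)
print(model.predict(X[:2]))   # predicted prices for the first two rows
```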
-
Machine Learning Foundations/Techniques: Noise and Error - 林軒田
-
Loss Functions Explained - Siraj Raval
Which loss function should you use to train your machine learning model? The huber loss? Cross entropy loss? How about mean squared error? If all of those seem confusing, this video will help. I'm going to explain the origin of the loss function concept from information theory, then explain how several popular loss functions for both regression and classification work. Using a combination of mathematical notation, animations, and code, we'll see how and when to use certain loss functions for certain types of problems.
-
Mutual Information, Clearly Explained!!! - StatQuest
Mutual Information is a metric that quantifies how similar or different two variables are. This is a lot like R-squared, but R-squared only works for continuou...
-
Entropy (for data science) Clearly Explained!!! - StatQuest
Entropy is a fundamental concept in Data Science because it shows up all over the place - from Decision Trees, to similarity metrics, to state of the art dim...
-
【機器學習2021】概述領域自適應 (Domain Adaptation) - Hung-yi Lee
slides: https://speech.ee.ntu.edu.tw/~hylee/ml/ml2021-course-data/da_v6.pdf
-
[TA 補充課] More about Domain Adaptation (1/2) (由助教趙崇皓同學講授)
slides: https://drive.google.com/file/d/15wlfUtTmnb4cEAHZtNJ9_jJE26nSNhAX/view
-
[TA 補充課] More about Domain Adaptation (2/2) (由助教楊晟甫同學講授)
slides: https://drive.google.com/file/d/15wlfUtTmnb4cEAHZtNJ9_jJE26nSNhAX/view
-
处理不均衡数据 (深度学习)! Dealing with imbalanced data (deep learning) - 莫烦Python
今天我们会来聊聊在机器学习中常会遇到的问题. 满手都是不均衡数据.很多数据中,正反数据量都是不均衡的,比如在一千个人中预测一个得癌症的人. 有时候只要一直预测多数派, model 的预测误差也能很小, 形成"已经学习好了"的假象. 今天我们来看看如何避免这种情况的发生. 机器学习-简介系列 播放列表: https...
-
什么是过拟合 (深度学习)? What is overfitting (deep learning)? - 莫烦Python
今天我们会来聊聊机器学习中的过拟合 overfitting 现象, 和解决过拟合的方法.机器学习-简介系列 播放列表: https://www.youtube.com/playlist?list=PLXO45tsB95cIFm8Y8vMkNNPPXAtYXwKinTensorflow dropout: https...
-
直观解释:为什么噪声不是过拟合的原因?又什么只要没有过拟合就一定有噪声? - 王木头学科学
(林轩田老师 《机器学习基石课》 学习感悟与总结 part 3)为什么说过拟合是机器学习的癌症?为什么说噪声不是过拟合发生的前提?为什么说没有过拟合就一定有噪声?决定模型复杂度的因素有哪些?哪个才是最重要的?
-
Machine Learning Foundations/Techniques: Nonlinear Transformation - 林軒田
-
Machine Learning Foundations/Techniques: Hazard of Overfitting - 林軒田
-
Machine Learning Foundations/Techniques: Three Learning Principles - 林軒田
-
并行计算与机器学习(1/3)(中文) Parallel Computing for Machine Learning (Part 1/3) - Shusen Wang
大数据时代的机器学习面临计算量的挑战。这节课里,我给大家讲解并行计算的基础,以及如何用MapReduce实现并行梯度下降。 这节课的主要内容: 0:28 Motivation:并行计算有什么用?为什么机器学习的人需要懂并行计算。 2:42 最小二乘回归(如果已经懂最小二乘,建议跳到 6:43 )。 6:43 用并行计算来解最小二乘回归。 11:14 并行计算中的通信问题。 13:10 MapReduce,以及如何用MapReduce实现并行梯度下降,以及通信、同步的问题
-
并行计算与机器学习(2/3)(中文) Parallel Computing for Machine Learning (Part 2/3) - Shusen Wang
上节课我讲了并行计算的基础,以及如何用MapReduce实现同步的并行计算。这节课我讲另外两种编程模型和架构:Parameter Server(参数服务器)和Decentralized Network(去中心化网络),以及如何用这两种架构实现梯度下降。 这节课的主要内容: 1:02 Parameter Server(参数服务器),以及用Parameter Server实现异步梯度下降。 8:27 Decentralized Network(去中心化网络), 以及用Decentralized Network实现梯度下降。 15:39 Parallel Computing(并行计算)和Distributed Computing(分布式计算)的异同。
-
并行计算与机器学习(3/3)(中文) Parallel Computing for Machine Learning (Part 3/3) - Shusen Wang
这节课主要介绍TensorFlow中的并行计算库,以及其中Ring All-Reduce的原理。 这节课的主要内容: 1:09 如何应用TensorFlow的并行计算库训练神经网络 7:41 Ring All-Reduce的技术原理。
-
联邦学习:技术角度的讲解(中文)Introduction to Federated Learning - Shusen Wang
这节课的内容是联邦学习。联邦学习是一种特殊的分布式机器学习,是最近两年机器学习领域的一个大热门。联邦学习和传统分布式学习有什么区别呢?什么是Federated Averaging算法?联邦学习有哪些研究方向呢?我将从技术的角度进行解答。 这节课的主要内容: 3:13 分布式机器学习 6:07 联邦学习和传统分布式学习的区别 12:46 联邦学习中的通信问题 15:24 Federated Averaging算法 21:24 联邦学习中的隐私泄露和隐私保护 27:52 联邦学习中的安全问题(拜占庭错误、data poisoning、model poisoning) 33:00 总结
-
One-Shot Learning - Fresh Machine Learning #1 - Siraj Raval
Welcome to Fresh Machine Learning! This is my new course dedicated to making bleeding edge machine learning accessible to developers everywhere. The demo cod...
-
Generative Adversarial Nets - Fresh Machine Learning #2 - Siraj Raval
This episode of Fresh Machine Learning is all about a relatively new concept called a Generative Adversarial Network. A model continuously tries to fool another model, until it can do so with ease. At that point, it can generate novel, authentic looking data! Very exciting stuff. The demo code for this video is a set of adversarial Gaussian Distribution Curves in Python using Theano and PyPlot:
-
Tone Analysis - Fresh Machine Learning #3 - Siraj Raval
This episode of Fresh Machine Learning is all Tone Analysis. Tone analysis consists of not just analyzing sentiment (positive or negative), but also analyzin...
-
Generate Rap Lyrics - Fresh Machine Learning #4 - Siraj Raval
This episode of Fresh Machine Learning is about generating rap lyrics! Lyrical generation is possible using either Hidden Markov Models or deep learning. In ...
-
Build an Autoencoder in 5 Min - Fresh Machine Learning #5 - Siraj Raval
This video is all about autoencoders! I start off explaining what an autoencoder is and how it works. Then I talk about some use cases for autoencoders and t...
-
Build a Self Driving Car in 5 Min - Fresh Machine Learning #6 - Siraj Raval
Let's build a self driving car! In this video, I talk about how self driving cars work, then dive into 2 fresh papers that add modern improvements to autonomous vehicles. The self driving car that we build is in a simulated environment and is built using PyGame and the Keras machine learning library.
-
Build an Antivirus in 5 Min - Fresh Machine Learning #7 - Siraj Raval
In this video, we talk about how machine learning is used to create antivirus programs! Specifically, a classifier can be trained to detect whether or not so...
-
Resume for Machine Learning - Siraj Raval
Welcome to my new course, Machine Learning Journey! If you’re a student, or between jobs, or in a different field, this 10 week course will help you learn everything you need from marketing your skills to building a solid mathematical foundation in order to get a job or start your own venture as a machine learning engineer or data scientist. I'm going to show you how to write a great resume in this first video. There are some key things to keep in mind and it depends on the company you're applying to. I'll cover it all, enjoy!
-
AlphaGo Zero 为什么更厉害? - 莫烦Python
在2017年10月19日, Google Deepmind 推出了新一代的围棋人工智能 AlphaGo Zero. AlphaGo zero 被放出的当天...通过 "莫烦 Python" 支持我做出更好的视频: https://morvanzhou.github.io/support/通过翻译,帮助其他语言的观...
-
Research to Code - Machine Learning tutorial - Siraj Raval
A lot of times, research papers don't have an associated codebase that you can browse and run yourself. In cases like that, you'll have to code up the paper yourself. That is easier said than done, and in this video i'll show you how you should read and dissect a research paper so you can quickly implement it programmatically. The paper we'll be implementing in this video is called Neural Style transfer, that applies artistic filters to an image using 3 loss functions. Its a great starting point, i'll demo it using code, animations, and math. Enjoy!
-
-
強化學習 Reinforcement Learning
播放列表提供了一系列深入淺出的視頻,從基礎概念到高級技術,全面介紹了增強式學習(Reinforcement Learning, RL)。播放列表包括了對增強式學習的基本原理、策略梯度、Actor-Critic方法、Q學習、Sarsa算法等關鍵概念的講解。此外,還涵蓋了諸如DQN、DDPG和A3C等先進技術。適合對增強式學習感興趣的初學者和中級學習者。
-
什么是强化学习? (Reinforcement Learning) - 莫烦Python
今天我们会来聊聊机器学习中的另一大家族, 强化学习 reinforcement learning.详细的文字教程: https://mofanpy.com强化学习教程: https://www.youtube.com/playlist?list=PLXO45tsB95cJYKCSATwh1M4n8cUnUv6lT...
-
强化学习方法汇总 (Reinforcement Learning) - 莫烦Python
强化学习包括了很多种方法, 我们来对比一下各种不同的方法, 让你有大概的了解.(q learning, sarsa, sarsa lambda, policy gradients, deep q network, model-based RL, model-free RL, value-based, policy...
-
#1 why? (强化学习 Reinforcement Learning 教学) - 莫烦Python
我们为什么要学习强化学习. 强化学习是让计算机自己学着处理事情, 而且不通过人为的训练.详细的文字教程: https://mofanpy.com/tutorials/machine-learning/reinforcement-learning/If you like this, please like my c...
-
#2 要求准备 (强化学习 Reinforcement Learning 教学) - 莫烦Python
因为强化学习的复杂性, 强化学习并没有什么好的, 通用的 python 模块, 所以我们手动写出来的强化学习方法要好很多. 我们就来看看手动编写强化学习要有哪些前期准备.详细的文字教程: https://mofanpy.com/tutorials/machine-learning/reinforcement-le...
-
什么是 Q Learning (Reinforcement Learning 强化学习) - 莫烦Python
强化学习 Reinforcement learning 中有很多种算法, 都是让机器人一步步学着处理问题, 这次我们说说比较有名的 Q learning.详细的文字教程: https://mofanpy.com/tutorials/machine-learning/reinforcement-learning/有...
-
#2.1 简单例子 (强化学习 Reinforcement Learning 教学) - 莫烦Python
这一次我们会用 tabular Q-learning 的方法实现一个小例子, 例子的环境是一个一维世界, 在世界的右边有宝藏, 探索者只要得到宝藏尝到了甜头, 然后以后就记住了得到宝藏的方法, 这就是他用强化学习所学习到的行为.详细的文字教程: https://mofanpy.com/tutorials/mach...
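The one-dimensional treasure-hunt example described above can be sketched with a tabular Q-learning loop like the one below; the world size, rewards, and hyperparameters are illustrative assumptions, not the tutorial's exact code.

```python
# Tabular Q-learning sketch for a tiny 1-D world with a treasure at the right end (illustrative).
import numpy as np

N_STATES = 6                  # positions 0..5, treasure at state 5
ACTIONS = [0, 1]              # 0 = move left, 1 = move right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
q_table = np.zeros((N_STATES, len(ACTIONS)))
rng = np.random.default_rng(0)

def step(state, action):
    """Return (next_state, reward); reaching the rightmost state gives reward 1."""
    next_state = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward

for episode in range(200):
    state = 0
    while state != N_STATES - 1:
        # epsilon-greedy action selection
        if rng.random() < EPSILON:
            action = int(rng.choice(ACTIONS))
        else:
            action = int(np.argmax(q_table[state]))
        next_state, reward = step(state, action)
        # Q-learning update: move Q(s, a) toward r + gamma * max_a' Q(s', a')
        target = reward + GAMMA * q_table[next_state].max()
        q_table[state, action] += ALPHA * (target - q_table[state, action])
        state = next_state

print(q_table.round(3))   # the "move right" column should dominate after training
```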
-
#2.2 Q Learning 算法更新 (强化学习 Reinforcement Learning 教学) - 莫烦Python
Q learning 强化学习 学习走迷宫.详细的文字教程: https://morvanzhou.github.io/tutorials/machine-learning/reinforcement-learning/If you like this, please like my code on Github...
-
#2.3 Q Learning 思维决策 (强化学习 Reinforcement Learning 教学) - 莫烦Python
我们介绍完 Q learning 的算法如何更新, 这次就说说 Q Learning 的主算法是如何决策.详细的文字教程: https://morvanzhou.github.io/tutorials/machine-learning/reinforcement-learning/If you like this...
-
什么是 Sarsa (Reinforcement Learning 强化学习) - 莫烦Python
今天我们会来说说强化学习中一个和 Q learning 类似的算法, 叫做 Sarsa.Q learning 介绍: https://www.youtube.com/watch?v=HTZ5xn12AL4&index=17&list=PLXO45tsB95cIFm8Y8vMkNNPPXAtYXwKin详细的文字教...
-
#3.1 Sarsa 算法更新 (强化学习 Reinforcement Learning 教学) - 莫烦Python
Sarsa 是一种 on-policy 在线学习法, 它和 Q-learning 这种离线学习法有些不同, 我们来编写并对比吧.详细的文字教程: https://morvanzhou.github.io/tutorials/machine-learning/reinforcement-learning/If you...
-
#3.2 Sarsa 思维决策 (强化学习 Reinforcement Learning 教学) - 莫烦Python
Sarsa 是一种 on-policy 在线学习法, 它和 Q-learning 这种离线学习法有些不同, 我们来编写并对比吧.详细的文字教程: https://morvanzhou.github.io/tutorials/machine-learning/reinforcement-learning/If you...
-
什么是 Sarsa(lambda) (Reinforcement Learning 强化学习) - 莫烦Python
今天我们会来说说强化学习中基于 Sarsa 的一种提速方法, 叫做 sarsa lambda.Sarsa 简介: https://youtu.be/AANzrFOQIiM详细的文字教程: https://morvanzhou.github.io/tutorials/machine-learning/reinfor...
-
#3.3 Sarsa(lambda) (强化学习 Reinforcement Learning 教学) - 莫烦Python
Sarsa lambda 是 Sarsa 的一种提速更新方法, 也有很广的应用. 他结合了 单步更新和回合更新的优势.详细的文字教程: https://morvanzhou.github.io/tutorials/machine-learning/reinforcement-learning/If you lik...
-
什么是 DQN (Reinforcement Learning 强化学习) - 莫烦Python
今天我们会来说说强化学习中的一种强大武器, Deep Q Network 简称为 DQN. Google DeepMind 团队就是靠着这 DQN 使计算机玩电动玩得比我们还厉害.详细的文字教程: https://mofanpy.com/tutorials/machine-learning/reinforcem...
-
#4.1 DQN 算法更新 using Tensorflow (强化学习 Reinforcement Learning 教学) - 莫烦Python
DQN 是 Deep Q Networks 的简称, 也是一种结合了神经网络和强化学习的工具, 这种工具能够应对现实生活中更复杂的强化学习问题.详细的文字教程: https://mofanpy.com/tutorials/machine-learning/reinforcement-learning/If you...
-
#4.2 DQN 神经网络 using Tensorflow (强化学习 Reinforcement Learning 教学) - 莫烦Python
DQN 是 Deep Q Networks 的简称, 也是一种结合了神经网络和强化学习的工具, 这种工具能够应对现实生活中更复杂的强化学习问题.详细的文字教程: https://morvanzhou.github.io/tutorials/machine-learning/reinforcement-learni...
-
#4.3 DQN 思维决策 using Tensorflow (强化学习 Reinforcement Learning 教学) - 莫烦Python
DQN 是 Deep Q Networks 的简称, 也是一种结合了神经网络和强化学习的工具, 这种工具能够应对现实生活中更复杂的强化学习问题.详细的文字教程: https://morvanzhou.github.io/tutorials/machine-learning/reinforcement-learni...
-
#4.4 OpenAI Gym using Tensorflow (强化学习 Reinforcement Learning 教学) - 莫烦Python
这次我们用 gym 模拟器来训练我们的强化学习方法详细的文字教程: https://morvanzhou.github.io/tutorials/machine-learning/reinforcement-learning/If you like this, please like my code on Git...
-
#4.5* Double DQN using Tensorflow (强化学习 Reinforcement Learning 教学) - 莫烦Python
Double DQN 是 DQN 的升级版, 合理地运用了两个神经网络,解决了 DQN 中的 overestimate 的问题.详细的文字教程: https://morvanzhou.github.io/tutorials/machine-learning/reinforcement-learning/If yo...
-
#4.6* DQN with Prioritised Replay using Tensorflow (强化学习 Reinforcement Learning 教学) - 莫烦Python
Prioritised replay 是 DQN 的又一种改进方法, 让它能更有效率的抽取到需要被学习的 samples.详细的文字教程: https://morvanzhou.github.io/tutorials/machine-learning/reinforcement-learning/If you l...
-
#4.7* Dueling DQN using Tensorflow (强化学习 Reinforcement Learning 教学) - 莫烦Python
Dueling DQN 将原始的 DQN Q 值输出改写成了 Value + Advantage, 这种改写能使 DQN 更有效率地从经验中学习.详细的文字教程: https://morvanzhou.github.io/tutorials/machine-learning/reinforcement-lea...
-
什么是 策略梯度 Policy Gradients (Reinforcement Learning 强化学习) - 莫烦Python
今天我们会来说说强化学习家族中另一类型算法, 叫做 Policy Gradients. 详细的文字教程: https://morvanzhou.github.io/tutorials/machine-learning/reinforcement-learning/Code in Github: https://g...
-
#5.1 Policy Gradients 算法更新 (强化学习 Reinforcement Learning 教学) - 莫烦Python
Policy Gradient 代码学习, 第一部分, 算法更新详细的文字教程: https://morvanzhou.github.io/tutorials/machine-learning/reinforcement-learning/If you like this, please like my code...
-
#5.2 Policy Gradients 思维决策 (强化学习 Reinforcement Learning 教学) - 莫烦Python
Policy Gradient 代码学习, 第二部分, 思维决策详细的文字教程: https://morvanzhou.github.io/tutorials/machine-learning/reinforcement-learning/If you like this, please like my code...
-
什么是 Actor Critic (Reinforcement Learning 强化学习) - 莫烦Python
今天我们会来说说强化学习中的一种结合体 Actor Critic (演员评判家), 它合并了 以值为基础 (比如 Q learning) 和 以动作概率为基础 (比如 Policy Gradients) 两类强化学习算法.详细的文字教程: https://morvanzhou.github.io/tutorial...
-
#6.1 Actor Critic 演员评论家 (强化学习 Reinforcement Learning 教学) - 莫烦Python
结合了 Policy Gradient (Actor) 和 Function Approximation (Critic) 的方法. Actor 基于概率选行为, Critic 基于 Actor 的行为评判行为的得分, Actor 根据 Critic 的评分修改选行为的概率.详细的文字教程: https://mo...
-
什么是 Deep Deterministic Policy Gradient (DDPG) (Reinforcement Learning 强化学习) - 莫烦Python
今天我们会来说说强化学习中的一种actor critic 的提升方式 Deep Deterministic Policy Gradient (DDPG), DDPG 最大的优势就是能够在连续动作上更有效地学习.详细的文字教程: https://morvanzhou.github.io/tutorials/mach...
-
#6.2 DDPG (Deep Deterministic Policy Gradient) (强化学习 Reinforcement Learning 教学) - 莫烦Python
Google DeepMind 提出的一种使用 Actor Critic 结构, 但是输出的不是行为的概率, 而是具体的行为, 用于连续动作 (continuous action) 的预测. DDPG 结合了之前获得成功的 DQN 结构, 提高了 Actor Critic 的稳定性和收敛性.详细的文字教程: ht...
-
什么是 A3C (Asynchronous Advantage Actor-Critic) (Reinforcement Learning 强化学习) - 莫烦Python
今天我们会来说说强化学习中的一种有效利用计算资源, 并且能提升训练效用的算法, Asynchronous Advantage Actor-Critic, 简称 A3C.详细的文字教程: https://morvanzhou.github.io/tutorials/machine-learning/reinforc...
-
#6.3 A3C (Asynchronous Advantage Actor-Critic) (强化学习 Reinforcement Learning 教学) - 莫烦Python
Google DeepMind 提出的一种解决 Actor-Critic 不收敛问题的算法. 它会创建多个并行的环境, 让多个拥有副结构的 agent 同时在这些并行环境上更新主结构中的参数. 并行中的 agent 们互不干扰, 而主结构的参数更新受到副结构提交更新的不连续性干扰, 所以更新的相关性被降低, 收敛...
-
#6.4 PPO/DPPO Proximal Policy Optimization (强化学习 Reinforcement Learning with tensorflow 教学) - 莫烦Python
根据 OpenAI 的官方博客, PPO 已经成为他们在强化学习上的默认算法. 如果一句话概括 PPO: OpenAI 提出的一种解决 Policy Gradient 不好确定 Learning rate (或者 Step size) 的问题. 因为如果 step size 过大, 学出来的 Policy 会一直...
-
Deep Q Learning for Video Games - The Math of Intelligence #9 - Siraj Raval
We're going to replicate DeepMind's Deep Q Learning algorithm for Super Mario Bros! This bot will be able to play a bunch of different video games by using reinforcement learning. This is the first video in this series that uses libraries (Keras & Gym) because if it didn't, the code would be way too long for a short video. I'll make a longer, in-depth version without libraries soon.
-
【機器學習2021】概述增強式學習 (Reinforcement Learning, RL) (一) – 增強式學習跟機器學習一樣都是三個步驟 - Hung-yi Lee
slides: https://speech.ee.ntu.edu.tw/~hylee/ml/ml2021-course-data/drl_v5.pdf
-
【機器學習2021】概述增強式學習 (Reinforcement Learning, RL) (二) – Policy Gradient 與修課心情 - Hung-yi Lee
slides: https://speech.ee.ntu.edu.tw/~hylee/ml/ml2021-course-data/drl_v5.pdf
-
【機器學習2021】概述增強式學習 (Reinforcement Learning, RL) (三) - Actor-Critic - Hung-yi Lee
slides: https://speech.ee.ntu.edu.tw/~hylee/ml/ml2021-course-data/drl_v5.pdf
-
【機器學習2021】概述增強式學習 (Reinforcement Learning, RL) (四) - 回饋非常罕見的時候怎麼辦?機器的望梅止渴 - Hung-yi Lee
slides: https://speech.ee.ntu.edu.tw/~hylee/ml/ml2021-course-data/drl_v5.pdf
-
【機器學習2021】概述增強式學習 (Reinforcement Learning, RL) (五) - 如何從示範中學習?逆向增強式學習 (Inverse RL) - Hung-yi Lee
slides: https://speech.ee.ntu.edu.tw/~hylee/ml/ml2021-course-data/drl_v5.pdf
-
[TA 補充課] RL - Model-based, Large-scale, Meta, Multi-agent, Hide-and-seek, Alpha Star (由助教林義聖同學講授)
slides: https://docs.google.com/presentation/d/1uMEtjj2hblAY76FDa4yBTSEv8HFk7h_R53x0KUkc9DU/edit?usp=sharing
-
-
異常檢測(Anomaly Detection)
專注於異常檢測(Anomaly Detection)的多個方面,涵蓋了從基本概念到進階技術的全面介紹。這些視頻適合於想要深入理解異常檢測在數據分析和機器學習中應用的學習者。 視頻系列從基礎知識開始,逐步介紹了異常檢測的各種方法和技術,包括統計方法、監督學習和非監督學習方法。 接下來的視頻深入探討了異常檢測的實際應用,包括如何識別和處理異常數據,以及異常檢測在不同領域中的實踐案例。
-
Anomaly Detection (1/7) - Hung-yi Lee
-
Anomaly Detection (2/7) - Hung-yi Lee
-
Anomaly Detection (3/7) - Hung-yi Lee
-
Anomaly Detection (4/7) - Hung-yi Lee
-
Anomaly Detection (5/7) - Hung-yi Lee
上課聽的歌:https://www.youtube.com/watch?v=l_VsevrFHLc
-
Anomaly Detection (6/7) - Hung-yi Lee
-
Anomaly Detection (7/7) - Hung-yi Lee
-
[TA 補充課] More about Anomaly Detection (由助教林政豪同學講授)
slides: https://docs.google.com/presentation/d/1kpAp9k_cDJ-lGASgsY4VGKy7Q8oab3nbn0P8e9pN6lk/edit?usp=sharing
-