Lecture 3: Logistic Regression
Lecture 3: Logistic Regression. Feng Li, Shandong University, fli@sdu.edu.cn, September 20, 2023.
Outline: 1. Classification; 2. Logistic Regression; 3. Newton's Method; 4. Multiclass Classification.
Classification problems: Email: Spam / Not Spam; 0: "Negative Class" (e.g., benign tumor), 1: "Positive Class" (e.g., malignant tumor).
Warm-Up: What happens if we apply linear regression to a classification problem? ...
0 码力 | 29 pages | 660.51 KB | 1 year ago
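The warm-up question above is easiest to answer numerically: a linear hypothesis h_θ(x) = θ^T x can return values far outside [0, 1], while the sigmoid used by logistic regression always stays inside (0, 1). The NumPy sketch below illustrates this contrast; the feature values and parameters are made up for illustration and are not taken from the lecture.

```python
import numpy as np

def linear_hypothesis(theta, x):
    # h(x) = theta^T x -- unbounded, so awkward to read as a probability
    return x @ theta

def sigmoid(z):
    # g(z) = 1 / (1 + e^(-z)) -- maps any real value into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical 1-D feature with an intercept column and made-up parameters.
X = np.array([[1.0, -10.0], [1.0, 0.0], [1.0, 10.0]])
theta = np.array([0.5, 0.3])

print(linear_hypothesis(theta, X))           # [-2.5  0.5  3.5] -- not valid probabilities
print(sigmoid(linear_hypothesis(theta, X)))  # values strictly between 0 and 1
```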
Experiment 2: Logistic Regression and Newton's Method
August 29, 2018. 1 Description: In this exercise, you will use Newton's Method to implement logistic regression on a classification problem. 2 Data: [scatter plot: Exam 1 score vs. Exam 2 score]. ... 4 Logistic Regression: Recall that in logistic regression, the hypothesis function is h_θ(x) = g(θ^T x) = 1 / (1 + e^(−θ^T x)) = P(y = 1 | x; θ) ... iterate until the change in the objective between successive iterations is less than (or equal to) some threshold ε, i.e., |L(θ⁺) − L(θ)| ≤ ε. Try to solve the logistic regression problem using the gradient descent method with the initialization θ = 0, and answer the ...
0 码力 | 4 pages | 196.41 KB | 1 year ago
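As a concrete companion to the excerpt above, here is a minimal NumPy sketch of the gradient-descent variant it describes: θ is initialized to 0 and iteration stops once the change in the objective between successive iterations is at most ε. The dataset, learning rate, and tolerance below are made-up assumptions, not values from the experiment handout.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def objective(theta, X, y):
    # Negative average log-likelihood of logistic regression.
    h = np.clip(sigmoid(X @ theta), 1e-12, 1 - 1e-12)   # clip to avoid log(0)
    return -np.mean(y * np.log(h) + (1 - y) * np.log(1 - h))

def fit_logistic_gd(X, y, alpha=1e-4, eps=1e-6, max_iter=100_000):
    """Gradient descent with theta initialized to 0, stopping once
    |L(theta_new) - L(theta)| <= eps between successive iterations."""
    theta = np.zeros(X.shape[1])        # initialization theta = 0
    prev = objective(theta, X, y)
    for _ in range(max_iter):
        grad = X.T @ (sigmoid(X @ theta) - y) / X.shape[0]
        theta -= alpha * grad
        cur = objective(theta, X, y)
        if abs(prev - cur) <= eps:      # convergence test from the excerpt
            break
        prev = cur
    return theta

# Tiny made-up dataset: intercept column plus two exam-score-like features.
X = np.array([[1.0, 34.0, 78.0], [1.0, 60.0, 86.0], [1.0, 79.0, 75.0], [1.0, 45.0, 56.0]])
y = np.array([0.0, 1.0, 1.0, 0.0])
print(fit_logistic_gd(X, y))
```

Newton's method differs only in the update step, replacing the fixed-step gradient move with a step scaled by the inverse Hessian; the same stopping rule applies.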
Logistic Regression
Lecturer: 龙良曲. Recap: for continuous output, y = xw + b; for probability output, y = σ(xw + b), where σ is the sigmoid or logistic function. Binary Classification: interpret the network as f: x → p(y | x; θ); the output lies in [0, 1], which is exactly where the logistic function comes in. Goal vs. Approach: for regression, the goal is pred = y and the approach is to minimize dist(pred, y); for classification, the goal is to maximize the benchmark ... issues: ... 2. the gradient is not continuous, since the number of correct predictions is not continuous. Q2: why is it called logistic regression? It uses the sigmoid; the naming is controversial: MSE => regression, Cross Entropy => classification.
0 码力 | 12 pages | 798.46 KB | 1 year ago
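To make the MSE-versus-cross-entropy point above concrete, the short NumPy sketch below evaluates both losses on three made-up predictions for positive examples; the logit values are illustrative assumptions only.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mse(p, y):
    return np.mean((p - y) ** 2)

def binary_cross_entropy(p, y, eps=1e-12):
    p = np.clip(p, eps, 1 - eps)          # avoid log(0)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

# Made-up logits for three examples whose true labels are all 1.
logits = np.array([4.0, 0.0, -4.0])       # confident right, unsure, confident wrong
y = np.array([1.0, 1.0, 1.0])
p = sigmoid(logits)

print("predictions:", p.round(3))
print("MSE:", mse(p, y).round(4))
print("cross entropy:", binary_cross_entropy(p, y).round(4))
# Cross entropy penalizes the confidently wrong prediction far more heavily
# than MSE does, which is one practical reason it is preferred for classification.
```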
PaddleDTX 1.0.0 Chinese Documentation
TaskID: fdc5b7e1-fc87-4e4b-86ee-b139a7721391. The command-line parameters are described as follows: -a: the training algorithm, either linear regression 'linear-vl' or logistic regression 'logistic-vl'; -l: the target feature to train on; --keyPath: defaults to './keys', the folder from which the computation requester's private key is read (the key identifies the requester; it can also be specified directly with -k); -t: the task type: 'start_vl_linear_predict', 'start_vl_logistic_train', 'start_vl_logistic_predict', 'tasklist', 'gettaskbyid'. ... 'upload_sample_files' - save linear and logistic sample files into XuperDB; ... - start vertical linear prediction task; 'start_vl_logistic_train' - start vertical logistic training task; 'start_vl_logistic_predict' - start vertical logistic prediction task; 'tasklist' - list ...
0 码力 | 53 pages | 1.36 MB | 1 year ago
PaddleDTX 1.0.0 Chinese Documentation
TaskID: fdc5b7e1-fc87-4e4b-86ee-b139a7721391. The command-line parameters are described as follows: -a: the training algorithm, either linear regression 'linear-vl' or logistic regression 'logistic-vl'; -l: the target feature to train on; --keyPath: defaults to './keys', the folder from which the computation requester's private key is read (the key identifies the requester; it can also be specified directly with -k); -t: the task type: 'start_vl_linear_predict', 'start_vl_logistic_train', 'start_vl_logistic_predict', 'tasklist', 'gettaskbyid'. ... 'upload_sample_files' - save linear and logistic sample files into XuperDB; 'start_vl_linear_train' - start vertical linear training task; 'start_vl_linear_predict' - start vertical linear prediction task; 'start_vl_logistic_train' - start vertical logistic training task; 'start_vl_logistic_predict' - start vertical logistic prediction task; 'tasklist' - list tasks in ...
0 码力 | 57 pages | 624.94 KB | 1 year ago
PaddleDTX 1.1.0 Chinese Documentation
TaskID: fdc5b7e1-fc87-4e4b-86ee-b139a7721391. The command-line parameters are described as follows: -a: the training algorithm, either linear regression 'linear-vl' or logistic regression 'logistic-vl'; -l: the target feature to train on; --keyPath: defaults to './keys', the folder from which the computation requester's private key is read (the key identifies the requester; it can also be specified directly with -k); -t: the task type: 'start_vl_linear_predict', 'start_vl_logistic_train', 'start_vl_logistic_predict', 'tasklist', 'gettaskbyid'. ... 'upload_sample_files' - save linear and logistic sample files into XuperDB; ... - start vertical linear prediction task; 'start_vl_logistic_train' - start vertical logistic training task; 'start_vl_logistic_predict' - start vertical logistic prediction task; 'tasklist' - list ...
0 码力 | 57 pages | 1.38 MB | 1 year ago
PaddleDTX 1.1.0 Chinese Documentation
TaskID: fdc5b7e1-fc87-4e4b-86ee-b139a7721391. The command-line parameters are described as follows: -a: the training algorithm, either linear regression 'linear-vl' or logistic regression 'logistic-vl'; -l: the target feature to train on; --keyPath: defaults to './keys', the folder from which the computation requester's private key is read (the key identifies the requester; it can also be specified directly with -k); -t: the task type: 'start_vl_linear_predict', 'start_vl_logistic_train', 'start_vl_logistic_predict', 'tasklist', 'gettaskbyid'. ... 'upload_sample_files' - save linear and logistic sample files into XuperDB; 'start_vl_linear_train' - start vertical linear training task; 'start_vl_linear_predict' - start vertical linear prediction task; 'start_vl_logistic_train' - start vertical logistic training task; 'start_vl_logistic_predict' - start vertical logistic prediction task; 'tasklist' - list tasks in ...
0 码力 | 65 pages | 687.09 KB | 1 year ago
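The four PaddleDTX excerpts above describe the same requester-side flags. As a rough illustration of how they fit together, the Python sketch below assembles a training-task command line from those flags and prints it. The binary name REQUESTER_CMD and the feature name 'Label' are placeholders invented for this sketch; only the flags -a, -l, --keyPath, -t and the algorithm/task names come from the documentation excerpts.

```python
import shlex

# Placeholder binary name -- NOT taken from the PaddleDTX docs; adjust to the real CLI.
REQUESTER_CMD = "./requester-cli"

def build_train_task_args(algorithm, label, key_path="./keys"):
    """Assemble the requester-side flags described in the excerpts
    for a vertical-federated training task."""
    assert algorithm in ("linear-vl", "logistic-vl")   # the two algorithms listed
    task_type = ("start_vl_logistic_train" if algorithm == "logistic-vl"
                 else "start_vl_linear_train")
    return [
        REQUESTER_CMD,
        "-a", algorithm,        # training algorithm
        "-l", label,            # target feature to train on
        "--keyPath", key_path,  # folder holding the requester's private key
        "-t", task_type,        # task type
    ]

# 'Label' is a made-up feature name used only for illustration.
print(shlex.join(build_train_task_args("logistic-vl", "Label")))
```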
Lecture 4: Regularization and Bayesian Statistics
Feng Li, SDU, September 20, 2023. Outline: 1. Overfitting Problem; 2. Regularized Linear Regression; 3. Regularized Logistic Regression; 4. MLE and MAP.
Regularized Logistic Regression: Recall the cost function for logistic regression, J(θ) = −(1/m) Σ_{i=1..m} [ y^(i) log(h_θ(x^(i))) + (1 − y^(i)) log(1 − h_θ(x^(i))) ], to which a regularization penalty on the parameters θ_j is added.
Regularized Logistic Regression (Contd.): Gradient descent: repeat θ_0 := θ_0 − α (1/m) Σ_{i=1..m} (h_θ(x^(i)) − y^(i)) x_0^(i) ...
0 码力 | 25 pages | 185.30 KB | 1 year ago
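Below is a minimal NumPy sketch of the regularized gradient-descent update the excerpt above describes, assuming the usual convention that the intercept θ_0 is not regularized; the data, learning rate α, and regularization strength λ are made-up values.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def regularized_gd_step(theta, X, y, alpha=0.1, lam=1.0):
    """One gradient-descent step for L2-regularized logistic regression.
    The intercept theta[0] is updated without the regularization term."""
    m = X.shape[0]
    grad = X.T @ (sigmoid(X @ theta) - y) / m
    reg = (lam / m) * theta
    reg[0] = 0.0                      # do not regularize the intercept term
    return theta - alpha * (grad + reg)

# Made-up data: intercept column plus one feature.
X = np.array([[1.0, 0.5], [1.0, 1.5], [1.0, -1.0], [1.0, -2.0]])
y = np.array([1.0, 1.0, 0.0, 0.0])
theta = np.zeros(X.shape[1])
for _ in range(200):
    theta = regularized_gd_step(theta, X, y)
print(theta)
```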
Lecture Notes on Gaussian Discriminant Analysis, Naive ...
... have to compute are only P(X = x | Y = y) and P(Y = y). Recall that, in linear regression and logistic regression, we use the hypothesis function y = h_θ(x) to model the relationship between the feature vector ... p(x̃), where ỹ = 0, 1. 3 Gaussian Discriminant Analysis and Logistic Regression: So far, we have introduced two classification algorithms, Logistic Regression (LR) and GDA. We now dive into investigating ... (μ_0^T Σ^(−1) μ_0 − μ_1^T Σ^(−1) μ_1)/2 + log(ψ/(1 − ψ)). Therefore, we conclude that the GDA model can be reformulated as logistic regression. But the question is: which one is better? GDA makes stronger modeling assumptions, and ...
0 码力 | 19 pages | 238.80 KB | 1 year ago
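The reformulation mentioned in the excerpt above can be spelled out in a few lines: with shared covariance Σ, class means μ_0 and μ_1, and prior ψ = P(y = 1), the posterior P(y = 1 | x) is a sigmoid of a linear function of x whose intercept is exactly the term quoted above. The NumPy sketch below uses made-up parameter values to illustrate this.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Made-up GDA parameters for a 2-D feature space.
mu0 = np.array([0.0, 0.0])
mu1 = np.array([2.0, 1.0])
Sigma = np.array([[1.0, 0.2], [0.2, 1.0]])
psi = 0.4                                   # P(y = 1)

Sigma_inv = np.linalg.inv(Sigma)
theta = Sigma_inv @ (mu1 - mu0)             # linear coefficients
theta0 = (mu0 @ Sigma_inv @ mu0 - mu1 @ Sigma_inv @ mu1) / 2 + np.log(psi / (1 - psi))

def gda_posterior(x):
    """P(y = 1 | x) implied by the GDA parameters, written as a logistic function."""
    return sigmoid(x @ theta + theta0)

print(gda_posterior(np.array([1.0, 0.5])))
```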
Lecture 5: Gaussian Discriminant Analysis, Naive Bayes
Feng Li, SDU, September 27, 2023. Warm Up (Contd.): In linear regression and logistic regression, x and y are linked through a (deterministic) hypothesis function y = h_θ(x). How to model ...
GDA and Logistic Regression: The GDA model can be reformulated as logistic regression. Which one is better? GDA makes stronger modeling assumptions ... (requiring less data to learn "well") when the modeling assumptions are correct or at least approximately correct. Logistic regression makes weaker assumptions and is significantly more robust to deviations from the modeling assumptions.
0 码力 | 122 pages | 1.35 MB | 1 year ago
75 results in total.