Rogerspy's Home
Foundations of Machine Learning
1. Introduction
1.1 What is machine learning?
1.2 What kind of problems can be tackled using machine learning?
1.3 Some standard learning tasks
1.4 Learning stages
1.5 Learning scenarios
1.6 Generalization
2. The PAC Learning Framework
2.1 The PAC learning model
2.2 Guarantees for finite hypothesis sets — consistent case
2.3 Guarantees for finite hypothesis sets — inconsistent case
2.4 Generalities
3. Rademacher Complexity and VC-Dimension
3.1 Rademacher complexity
3.2 Growth function
3.3 VC-dimension
3.4 Lower bounds
4. Model Selection
4.1 Estimation and approximation errors
4.2 Empirical risk minimization (ERM)
4.3 Structural risk minimization (SRM)
4.4 Cross-validation
4.5 n-Fold cross-validation
4.6 Regularization-based algorithms
4.7 Convex surrogate losses
5. Support Vector Machines
5.1 Linear classification
5.2 Separable case
5.3 Non-separable case
5.4 Margin theory
6. Kernel Methods
6.1 Introduction
6.2 Positive definite symmetric kernels
6.3 Kernel-based algorithms
6.4 Negative definite symmetric kernels
6.5 Sequence kernels
6.6 Approximate kernel feature maps
7. Boosting
7.1 Introduction
7.2 AdaBoost
7.3 Theoretical results
7.4 L1-regularization
7.5 Discussion
8. On-Line Learning
8.1 Introduction
8.2 Prediction with expert advice
8.3 Linear classification
8.4 On-line to batch conversion
8.5 Game-theoretic connection
9. Multi-Class Classification
9.1 Multi-class classification problem
9.2 Generalization bounds
9.3 Uncombined multi-class algorithms
9.4 Aggregated multi-class algorithms
9.5 Structured prediction algorithms
10. Ranking
10.1 The problem of ranking
10.2 Generalization bound
10.3 Ranking with SVMs
10.4 RankBoost
10.5 Bipartite ranking
10.6 Preference-based setting
10.7 Other ranking criteria
11. Regression
11.1 The problem of regression
11.2 Generalization bounds
11.3 Regression algorithms
12. Maximum Entropy Models
12.1 Density estimation problem
12.2 Density estimation problem augmented with features
12.3 Maxent principle
12.4 Maxent models
12.5 Dual problem
12.6 Generalization bound
12.7 Coordinate descent algorithm
12.8 Extensions
12.9 L2-regularization
13. Conditional Maximum Entropy Models
13.1 Learning problem
13.2 Conditional Maxent principle
13.3 Conditional Maxent models
13.4 Dual problem
13.5 Properties
13.6 Generalization bound
13.7 Logistic regression
13.8 L2-regularization
13.9 Proof of the duality theorem
14. Algorithmic Stability
14.1 Definitions
14.2 Stability-based generalization guarantee
14.3 Stability of kernel-based regularization algorithms
15. Dimensionality Reduction
15.1 Principal component analysis
15.2 Kernel principal component analysis (KPCA)
15.3 KPCA and manifold learning
15.4 Johnson-Lindenstrauss lemma
16. Learning Automata and Languages
16.1 Introduction
16.2 Finite automata
16.3 Efficient exact learning
16.4 Identification in the limit
17. Reinforcement Learning
17.1 Learning scenario
17.2 Markov decision process model
17.3 Policy
17.4 Planning algorithms
17.5 Learning algorithms