Fall 2024 – PHS 597: Machine Learning with Applications to Omics Data

Welcome to our Machine Learning/Deep Learning course for the 2024-2025 academic year. In the Fall semester we cover classical machine learning, and in the Spring semester we cover deep learning (primarily neural networks). The course, originally developed as a data mining course in 2016, is now in its 8th edition, and the material is approaching a stable form. This year’s course is jointly taught with Prof. Sen Yang.

Key References

This course covers the basics of statistical learning, with an emphasis on applications to genomic data. As a first course in statistical learning, it largely follows the flow of the first two books below (both are free to download; the 2nd book, ISL, is essentially a gentler version of the 1st book, ESL):
1. The Elements of Statistical Learning (ESL), by Hastie, T., Tibshirani, R., and Friedman, J.
2. An Introduction to Statistical Learning with Applications in R (ISL), by James, G., Witten, D., Hastie, T., and Tibshirani, R. (The 2nd edition is out and is also free to download.)

The coverage of a few topics in 1 and 2 (e.g., boosting algorithms and support vector machines) has become somewhat dated, so we will supplement it with more up-to-date material from other books and original papers.

A few other references are helpful to have, including
3. Deep Learning, by Goodfellow, I., Bengio, Y., and Courville, A. (The first part of the book covers the mathematical background as well as classical machine learning.)
4. Deep Medicine, by Topol, E. (A non-technical book describing many compelling applications of AI to medicine.)
5. Statistical Learning with Sparsity, by Hastie, T., Tibshirani, R., and Wainwright, M. (A technical book covering the theoretical aspects of the LASSO and its extensions.)

Covered topics:

Tentative topics include classification, resampling methods, linear models with regularization (e.g., the LASSO), Bayesian variable selection, additive models, classification and regression trees, random forests, support vector machines, boosting, and the basics of unsupervised learning, such as k-means clustering, hierarchical clustering, association rules, factor analysis, and nonnegative matrix factorization.

We will discuss applications of machine learning methods to genomics, including (but not limited to) variant annotation, genetic association analysis, fine mapping, risk prediction, and variant calling and filtering from next-generation sequencing data.

We will illustrate the methods with examples in Python (scikit-learn); a small sketch of the kind of workflow we will use appears below.
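As a taste of what these illustrations look like, here is a minimal, self-contained sketch (not taken from the course notes) that fits a cross-validated LASSO, one of the regularized linear models listed above, using scikit-learn. The data are randomly simulated placeholders standing in for an "omics-like" setting with many more features than samples; all dimensions and settings are arbitrary choices for illustration only.

import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.model_selection import train_test_split

# Simulated high-dimensional data: many more features than samples,
# as is typical for omics data. (Placeholder sizes for illustration only.)
rng = np.random.default_rng(0)
n_samples, n_features = 200, 1000
X = rng.standard_normal((n_samples, n_features))
beta = np.zeros(n_features)
beta[:10] = 1.0                      # only the first 10 features carry signal
y = X @ beta + rng.standard_normal(n_samples)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# LASSO with the penalty strength chosen by 5-fold cross-validation.
model = LassoCV(cv=5).fit(X_train, y_train)
print("number of selected features:", int(np.sum(model.coef_ != 0)))
print("test-set R^2:", model.score(X_test, y_test))

In class we will use the same pattern (simulate or load data, split, fit with cross-validation, evaluate) for most of the supervised methods listed above.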

Lecture notes:

  1. Introduction: slides (ppt)
  2. Python introduction (HTML and HTML)
  3. Linear models: slides (ppt)
  4. Linear discriminant analysis (ppt)
  5. Basis expansion (ppt pdf)
  6. Model assessment (ppt)