IE 663 Advanced Topics in Deep Learning

Course Code: IE 663

Course Name: Advanced Topics in Deep Learning

Prerequisite:

At least one of the following courses: IE 643, CS 725, EE 769, GNR 638, IE 613, or a fundamental course in machine learning (or deep learning) equivalent to the listed prerequisite courses.

Course Contents:

- Optimization for Deep Learning: non-convex optimization, convergence rates, and acceleration effects of algorithms (a perturbed gradient descent sketch follows this list)
- Learning Theory for Deep Learning: generalization properties, efficient learning, guarantees on learning
- Adversarial Learning Techniques and Robustness (an FGSM sketch follows this list)
- Advanced Reinforcement Learning aspects of Deep Learning
- Advanced topics in transfer learning, curriculum learning, cognitive learning, AutoML, and edge computing using deep neural networks
- Dynamics of Deep Learning: relations to continuous and discrete dynamical systems
- New and Emerging Applications of Deep Learning, including statistical physics and bio-inspired applications
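
To give a flavor of the optimization topic, here is a minimal sketch of perturbed gradient descent in the spirit of reference 3 (Jin et al., 2017): when the gradient is nearly zero, a small random perturbation helps the iterate escape strict saddle points. The test function, step size, perturbation radius, and tolerance below are illustrative choices for this sketch, not the parameters analyzed in the paper.

```python
import numpy as np

# Sketch of perturbed gradient descent (cf. Jin et al., 2017).
# Test function f(x, y) = x^2 + y^4/4 - y^2/2 has a strict saddle
# at the origin and minima at (0, +1) and (0, -1).

def grad(v):
    x, y = v
    return np.array([2.0 * x, y**3 - y])

def perturbed_gd(v0, eta=0.1, radius=1e-3, tol=1e-6, steps=500, seed=0):
    rng = np.random.default_rng(seed)
    v = np.asarray(v0, dtype=float)
    for _ in range(steps):
        g = grad(v)
        if np.linalg.norm(g) < tol:
            # Tiny gradient: possibly a saddle point, so jitter the
            # iterate to pick up a descent direction.
            v = v + radius * rng.standard_normal(v.shape)
        else:
            v = v - eta * g
    return v

# Plain gradient descent started exactly at the saddle stays stuck;
# the perturbed variant drifts to one of the minima (0, +-1).
print(perturbed_gd([0.0, 0.0]))
```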
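
For the adversarial learning topic, the sketch below applies the standard fast gradient sign method (FGSM) to a hand-rolled logistic regression model. The weights, input, and attack budget are hypothetical placeholders for illustration; in the course setting the target would be a trained deep network.

```python
import numpy as np

# FGSM sketch on logistic regression: perturb the input in the
# direction that increases the loss, within an L-infinity budget.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_grad_x(w, b, x, y):
    # Gradient of the binary cross-entropy loss w.r.t. the *input* x.
    p = sigmoid(w @ x + b)
    return (p - y) * w

w = np.array([1.5, -2.0, 0.5])   # assumed trained weights
b = 0.1
x = np.array([0.2, -0.1, 0.4])   # a clean input with true label y = 1
y = 1.0

eps = 0.25                        # attack budget
x_adv = x + eps * np.sign(loss_grad_x(w, b, x, y))

print("clean prob:", sigmoid(w @ x + b))      # ~0.69, correct side
print("adv prob  :", sigmoid(w @ x_adv + b))  # ~0.45, flipped
```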

References:
Since this course is about advanced topics, the main references provided are recent research papers.
1. C. Zhang, S. Bengio, M. Hardt, B. Recht, and O. Vinyals. Understanding deep learning requires rethinking generalization. arXiv preprint, 2017. https://arxiv.org/abs/1611.03530
2. B. Neyshabur, R. Tomioka, R. Salakhutdinov, and N. Srebro. Geometry of optimization and implicit regularization in deep learning. arXiv preprint, 2017. https://arxiv.org/abs/1705.03071
3. C. Jin, R. Ge, P. Netrapalli, S. M. Kakade, and M. I. Jordan. How to escape saddle points efficiently. In International Conference on Machine Learning (ICML), pp. 1724–1732, 2017. https://arxiv.org/abs/1703.00887
4. R. Tachet des Combes, M. Pezeshki, S. Shabanian, A. Courville, and Y. Bengio. On the learning dynamics of deep neural networks. arXiv preprint, 2018. https://arxiv.org/abs/1809.06848
5. Q. Nguyen and M. Hein. Optimization landscape and expressivity of deep CNNs. In International Conference on Machine Learning (ICML), pp. 3727–3736, 2018. https://arxiv.org/abs/1710.10928
6. Y. Zhou and Y. Liang. Critical points of neural networks: Analytical forms and landscape properties. In International Conference on Learning Representations, 2018. https://arxiv.org/abs/1710.11205
7. X. Li, S. Ling, T. Strohmer, and K. Wei. Rapid, robust, and reliable blind deconvolution via nonconvex optimization. Applied and Computational Harmonic Analysis, 2018. https://arxiv.org/abs/1606.04933
8. C. Yun, S. Sra, and A. Jadbabaie. Small nonlinearities in activation functions create bad local minima in neural networks. In International Conference on Learning Representations, 2019. https://openreview.net/pdf?id=rke_YiRct7