MILA

Presentation

Mission:

  • Federate researchers in the area of deep learning and machine learning for AI
  • Provide a platform for collaboration and co-supervision 
  • Share human resources as well as infrastructure and computer networks
  • Provide unique access to the state of the art for a pool of companies that can benefit from the opportunities opened up by machine learning

Scientific Mission:

  • Understanding of fundamental principles
  • Supervised learning and pattern recognition
  • Unsupervised and semi-supervised learning
  • Representation learning and deep learning of representations
  • Computer vision applications
  • Applications in natural language processing
  • Applications in modelling signals such as speech
  • Applications to healthcare
  • Applications to large-scale data (big data)

Expertise

Researchers from MILA have pioneered the field of deep learning and deep neural networks (both discriminative and generative) and their applications to vision, speech and language. MILA is world-renowned for many breakthroughs in developing novel deep learning algorithms and applying them to various domains. They include neural language modelling, neural machine translation, object recognition, structured output generative modelling and neural speech recognition.

Faculty

The machine learning laboratory at the University of Montreal is led by seven professors: Prof. Yoshua Bengio, Prof. Aaron Courville, Prof. Pascal Vincent, Prof. Roland Memisevic, Prof. Christopher Pal, Prof. Laurent Charlin, and Prof. Simon Lacoste-Julien, all of whom are leading world experts in machine learning, especially in the rapidly growing field of deep learning.

In addition to the faculty, the MILA consists of a large number of researchers: at the beginning of 2016, there were 6 post-doctoral researchers, 42 doctoral students and 22 master’s students, as well as 6 scientific staff members working full time (one executive director, one chief of software development and four specialist programmers in deep learning), making it one of the largest academic labs focused entirely on deep neural networks and their applications.

Innovations and publications

Many of the innovations that have led to the recent surge of popularity and success in deep learning were invented or co-invented at this lab. Some of those innovations include, but are not limited to, major contributions to layer-wise unsupervised pre-training, deep supervised rectifier networks, generative neural networks, theory and advances in recurrent neural networks, automatic hyperparameter tuning, neural machine translation and theoretical analysis of deep neural networks. See our publications repository for a more complete list.

Lab Founder: Yoshua Bengio

Yoshua Bengio founded the lab which became the MILA, and his main research ambition is to understand the principles of learning that yield intelligence. His research is widely cited (over 36,000 citations found by Google Scholar in early 2016, with an h-index of 76). He is currently an action editor for the Journal of Machine Learning Research, an associate editor for Neural Computation and for Foundations and Trends in Machine Learning, and has been an associate editor for the Machine Learning Journal and the IEEE Transactions on Neural Networks. Yoshua Bengio was Program Chair for NIPS’2008 and General Chair for NIPS’2009 (NIPS is the flagship conference in the areas of learning algorithms and neural computation). Since 1999, he has been co-organizing the Learning Workshop with Yann LeCun, with whom he also created the International Conference on Learning Representations (ICLR). He has also organized or co-organized numerous other events, including most of the deep learning workshops at ICML and NIPS since 2007.

Notable recent publications

  1. Y. Bengio, I. Goodfellow and A. Courville (2015). Deep Learning. MIT Press (to appear).
  2. Y. Bengio, A. Courville and P. Vincent (2013). Representation Learning: A Review and New Perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence.
  3. I. Goodfellow, D. Warde-Farley, M. Mirza, A. Courville and Y. Bengio (2013). Maxout Networks. In Proceedings of ICML.
  4. K. Cho, B. van Merrienboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk and Y. Bengio (2014). Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation. In Proceedings of EMNLP.
  5. Y. Bengio, E. Thibodeau-Laufer, G. Alain and J. Yosinski (2014). Deep Generative Stochastic Networks Trainable by Backprop. In Proceedings of ICML.