I am a PhD student at MILA, where I also completed my Master’s.
I aim to understand the most general principles of intelligence and life in terms of knowledge, learning, and survival. I’m interested in work that helps clarify the problems facing real-world intelligent agents, a perspective I tend to associate with the AGI community. I believe that reward-directed discrete decision-making (e.g. in model specification) is essential, and that Reinforcement Learning is the right framework for studying it. Despite all of this, I still entertain (with low probability) the hypothesis that deep nets will somehow magically solve AI 😀
Recently, I developed a novel regularization technique for RNNs and applied it to text and speech data, achieving state-of-the-art results. My past work involved unsupervised learning, generative modelling, and attention applied to image data.
I have a strong interest in the societal impacts of AI and ML technology, especially existential risk. I founded and run the Montreal AI Ethics Group to discuss these issues with members of MILA and RLLAB. I was an intern at the Future of Humanity Institute in Oxford during the summer of 2016.