Can Programming Languages Research impact Deep Learning 2.0?
Over the last few years, deep neural networks have made remarkable progress, matching or surpassing humans on tasks ranging from visual perception and natural language processing to disease prediction. Despite these advances, it is increasingly evident that current (version 1.0) deep learning systems suffer from a number of key issues, including reliance on large amounts of labeled data, inability to incorporate world knowledge via background priors, poor generalization to out-of-distribution samples, and lack of safety guarantees, among others. To sustain progress, next-generation (2.0) deep learning systems must address many of these issues.
In this talk, I will discuss some of the latest research from my group exploring possible solutions to these challenges. I will also outline a number of open problems where I believe the programming languages community can make a substantial impact.