WIRED: The science at the heart of this is actually quite old, isn't it? People like you and Geoff Hinton, who's now at Google, first developed these deep learning methods, known as "back-propagation" algorithms, in the mid-1980s.

LeCun: That's the root of it. But we've gone way beyond that. Back-propagation allows us to do what's called "supervised learning." So, you have a collection of images together with labels, and you can train the system to map new images to labels. This is what Google and Baidu are currently using for tagging images in user photo collections. That we know works. But then you have things like video and natural language, for which we have very little labeled data.
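To make the supervised-learning setup LeCun describes concrete, here is a minimal sketch: labeled examples go in, a small network predicts labels, and back-propagation adjusts the weights from the prediction error. This is not the code used at Google, Baidu, or Facebook; the data is synthetic and all names and sizes below are illustrative assumptions.

```python
# Minimal illustrative sketch of supervised learning with back-propagation.
# Synthetic "labeled images": 200 samples, 16 features, 3 classes.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))
true_W = rng.normal(size=(16, 3))
y = np.argmax(X @ true_W, axis=1)     # synthetic class labels
Y = np.eye(3)[y]                      # one-hot targets

# One hidden layer with ReLU, softmax output.
W1 = rng.normal(scale=0.1, size=(16, 32))
W2 = rng.normal(scale=0.1, size=(32, 3))
lr = 0.1

for step in range(500):
    # Forward pass: map inputs to predicted label probabilities.
    H = np.maximum(0, X @ W1)
    logits = H @ W2
    logits -= logits.max(axis=1, keepdims=True)
    P = np.exp(logits)
    P /= P.sum(axis=1, keepdims=True)

    # Backward pass (back-propagation): gradients of cross-entropy loss.
    dlogits = (P - Y) / len(X)
    dW2 = H.T @ dlogits
    dH = dlogits @ W2.T
    dH[H <= 0] = 0                    # ReLU gradient
    dW1 = X.T @ dH

    # Gradient step: nudge weights toward better label predictions.
    W1 -= lr * dW1
    W2 -= lr * dW2

print(f"training accuracy: {(np.argmax(P, axis=1) == y).mean():.2f}")
```

The key point of the sketch is the dependence on labels: the gradients are computed from the gap between predicted and true labels, which is exactly why domains like video and natural language, where labeled data is scarce, are harder to attack this way.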