************ UPDATE May 6 ************
Go to https://github.com/DoAndroidsDream for
presentations and additional resources.
**************************************

Astonishing images, activated from deep inside machine learning models, lead us to speculate about the roots of human visual imagination.
This panel will survey leading edge applications and research directions, summarize open source tools and resources, and explore how our understanding of human visual experience may be furthered.
The panel will bring together experts in the visual arts and sciences, including animal and computer vision researchers, art historians, and visual design practitioners and academics.
An intense curiosity about science and human experience is the only prerequisite for attending this panel.
ABSTRACT
In New York in the 1980s, inspired by biological models of primate vision, Yann LeCun developed Convolutional Neural Networks (CNNs), a machine learning technique that enabled fast, robust, and practical automated image and speech recognition.
Now, thirty years later, with computing power far cheaper and faster, very deep and flexible CNNs are routinely learning to categorize vast varieties and quantities of images and videos on the Web.
Deep CNNs learn by applying simple rules over and over again to training inputs consisting of enormous sets of images and metadata. While deep CNNs routinely produce useful, easy-to-understand outputs, the full nature of their inner workings was, until recently, considered by most scientists to be beyond the reach of human understanding. This unfathomable property of CNNs was thought to be a necessary consequence of the probabilistic nature of the inputs and the exponential complexity of repeating the innermost steps billions of times in seemingly random order.
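The "simple rule applied over and over" at the heart of a CNN is convolution: sliding one small filter across an image and computing the same weighted sum at every position. The following toy sketch (not from the panel; the function name `convolve2d` and the example filter are illustrative assumptions) shows that single rule detecting a vertical edge in a tiny image:

```python
# Toy sketch of the convolution step inside a CNN: one small filter,
# one simple multiply-and-sum rule, applied at every image position.

def convolve2d(image, kernel):
    """Valid-mode 2D convolution of a grayscale image with a small kernel."""
    kh, kw = len(kernel), len(kernel[0])
    ih, iw = len(image), len(image[0])
    out = []
    for r in range(ih - kh + 1):
        row = []
        for c in range(iw - kw + 1):
            # The same simple rule at every position: multiply and sum.
            s = sum(image[r + i][c + j] * kernel[i][j]
                    for i in range(kh) for j in range(kw))
            row.append(s)
        out.append(row)
    return out

# A 4x4 image with a vertical edge between columns 1 and 2,
# and a 1x2 horizontal-difference filter.
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
edge_filter = [[-1, 1]]
print(convolve2d(image, edge_filter))  # responds only where the edge is
```

A real deep CNN stacks many such filters in many layers and *learns* the filter weights from data, rather than hand-coding them as here.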
Then, unexpectedly, in 2012, Andrew Y. Ng, then at Stanford, along with Google scientists, reported finding a somewhat abstract image of a cat buried deep within a deep learning model that had been running on 16,000 computers [1]. The remarkable thing was that the computer training, . .