One Day Meeting: Visual Image Interpretation in Humans and Machines: Machines that see like us?
Wednesday 10 April 2019
Chair: Andrew Schofield
Keynote Speakers
- Charles Leek, University of Liverpool, “Deep Neural Networks: The new black box of human vision research?”
- Andrew Glennerster, University of Reading, “Policy networks with and without brains”
- Tim Kietzmann, MRC Cognition and Brain Sciences Unit, University of Cambridge, “Understanding vision at the interface of computational neuroscience and artificial intelligence”
Videos of Talks
Recordings of the talks from the day, showing both slides and speaker, are available on our BMVA YouTube channel here.
Meeting Report
A short summary of the meeting, prepared by the organisers, is available here.
Programme
Both the object-recognition and game-playing performance of deep convolutional neural networks now equals or surpasses that of humans. Deep neural networks share some features with the human visual system, including multiple layers of processing with the early layers being convolutional in nature. Moreover, techniques such as representational similarity analysis show that appropriately trained neural networks develop representation spaces similar to that of inferior temporal cortex, which is known to support object recognition in humans. These results, and the superficial similarity in network structure between artificial and biological neural networks, lead some to conclude that the former are a good functional model for the latter.
In contrast, others note that deep neural networks are easily fooled by image manipulations that are barely noticeable to humans, or by specially constructed image elements that trick artificial networks but are seen, and ignored, by humans. There are also stark differences between artificial and biological networks, with many features of the latter omitted from artificial systems. There are differences too in the style and rate of learning and generalisation between humans and machines. Such findings suggest that models of human vision should be quite different from deep neural networks.
This one-day meeting will consider these issues in human and machine vision, discussing how artificial neural networks might be augmented with more biologically plausible features with the aim of making them more robust, as well as alternatives to neural network models and how their performance compares to the state of the art and to human vision.
9:10 | Keynote: Charles Leek, University of Liverpool. “Deep Neural Networks: The new black box of human vision research?” |
10:00 | Thomas Tanay, University College London. “Built-in Vulnerabilities to Imperceptible Adversarial Perturbations”. |
10:15 | Coffee. |
10:30 | Marin Dujmovic, University of Bristol. “Human performance on classification of fooling images”. |
10:45 | Gaurav Malhotra, University of Bristol. “The contrasting roles of shape in human vision and convolutional neural networks”. |
11:00 | Poster Session A Spotlights: |
11:15 | Posters and discussion. |
12:00 | Keynote: Tim Kietzmann, MRC CBU, Cambridge. “Understanding vision at the interface of computational neuroscience and artificial intelligence”. |
12:45 | Kai Kiwitz, Heinrich-Heine University, Dusseldorf. “Deep Learning Based Brain Mapping Resembles Human Brain Mapping”. |
13:00 | Lunch |
13:30 | Ryan Blything, University of Bristol. “Translation Invariance in Vision: Evidence for On-line Generalization in Humans and Convolutional Neural Networks”. |
13:45 | Javier Vazquez-Corral, University of East Anglia. “Are convolutional neural networks fooled by visual illusions?” |
14:00 | Poster Session B Spotlights. |
14:15 | Posters and discussion. |
15:00 | Keynote: Andrew Glennerster, University of Reading. “Policy networks with and without brains”. |
15:45 | Julian Forrester, University of Essex. “Genetic Programming as an alternative to Neural Networks for Computer Vision”. |
16:00 | Coffee |
16:15 | Marek Pedziwiatr, Cardiff University. “Meaning maps and deep neural networks are insensitive to semantic information when predicting human eye movements in natural scene viewing”. |
16:30 | Ethan Harris, University of Southampton. “A Biologically Inspired Visual Working Memory for Deep Networks”. |
Poster session A:
- Fraser Smith, University of East Anglia, “Early visual regions in the human brain contain information about occluded parts of human faces”.
- Laszlo Talas, University of Bristol, “Modelling an evolutionary arms race with Generative Adversarial Networks”
- John Harston, Imperial College London, “Body dynamics in ongoing tasks are predictive of visual attention”
- Kofi Appiah, Sheffield Hallam University, “Mimicking the honeybee eyes for visual scene recognition”.
- Maija Filipovica, University of Birmingham, “Performance and scene area focus of human participants and neural networks in a visual stability discrimination task”.
- Alex Wade, University of York, “A neural correlate of DNN image classification confidence”
Poster session B:
- Adar Pelah, University of York, “Do machines “see” like us? A comparative study on classification of gender from gait between human, bio-inspired and non bio-inspired learning systems”.
- Frederick Stentiford, University College London, “Visual Recognition without Features or Training Data”.
- Wenshu Zhang, University of Southampton, “Understanding genetic variation by using automated measurement of shape”.
- Xiaoyue Jiang, Northwestern Polytechnical University, “Deep Shadow Detection and Removal”
- Lindsay MacDonald, University College London, “Neural Networks for Colour Space Transformations”
Meeting Location
British Computer Society (BCS), 5 Southampton St, London WC2E 7HA