These videos explain the concepts behind the Dynamic Vision Sensor (DVS).
An introduction to DVS
An introductory video about the Dynamic Vision Sensor.
Prof. Delbruck’s talk at the IBM Research symposium 2015
Prof. Tobi Delbruck gives a detailed talk about DVS technology at the IBM Research symposium 2015. Video credit: IBM Research.
High Performance Characteristics
These videos demonstrate the high performance characteristics of the Dynamic Vision Sensor.
Spinning coin
A coin spins across a desk at about 750 rpm. The video shows playback of the recording in jAER at about 100x slower than real time, with each rendered frame showing the events produced in about 300 µs of real time. Video credit: Sim Bamford.
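As a rough sanity check on those numbers (assuming playback renders at roughly 30 frames per second, which is an assumption about the jAER settings rather than a figure from the video):

```latex
% 100x slowdown at ~30 fps playback (assumed rendering rate):
\[
  \Delta t_{\text{frame}} \approx \frac{1}{30\,\text{fps} \times 100} \approx 330\,\mu\text{s},
  \qquad
  750\,\text{rpm} = 12.5\,\text{rev/s}
  \;\Rightarrow\;
  12.5 \times 360^{\circ} \times 330\,\mu\text{s} \approx 1.5^{\circ}\text{ of rotation per frame.}
\]
```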
Milk drops
Drops of milk are recorded with the DVS128. Slow playback demonstrates the high temporal precision. Video credit: Tobi Delbruck.
10 kHz blinking LED
An LED blinks at 10 kHz in front of the DVS128. The space-time visualisation (from 7 to 12 seconds) shows the individual ON and OFF events of the LED captured by a small group of pixels. Video credit: Greg Burman.
Catching a ball
The DAVIS240A captures a ball being caught. ON and OFF events, in green and red, are overlaid over the grayscale frames. Video credit: Christian Braendli and Tobi Delbruck.
Man with ball in rain – rolling space-time mode
The DAVIS240C captures a man throwing a ball around in the rain (sped up). The second play-through is in jAER’s rolling space-time mode, showing the inherently sparse data in the event stream. Video credit: Sim Bamford.
Blinking through sunglasses
The DAVIS240C is being used here in APS + DVS mode (static image frames plus dynamic events). Although the eyes are difficult to see through the sunglasses in the standard exposure, the dynamic event stream clearly picks out the six eye blinks. Video credit: Sim Bamford.
Seeing into the shadows
Another demonstration of high dynamic range with the DAVIS240C. A desk scene is viewed, looking inside a box in which half of a logo is obscured in the normal video stream. In the second sweep over the desk, the dynamic events are overlaid and the obscured half of the logo is clearly visible, as well as the rat’s nest of cables on the desk. Video credit: Sim Bamford.
Eclipse
In March 2015 we had the chance to play with the high dynamic range of the DVS. The partial solar eclipse is in the background, while a hand is waving in front. You can see lens flare, which is expected due to the optics. There is also blooming around the sun in the frames due to over-exposure, but the DVS picks out the real edge of the sun against the moon and sky, while also outlining the fingers. Video credit: Marc Osswald, Tobi Delbruck and Sim Bamford.
Steadicam
Stabilization of the dynamic event output of a DAVIS240B neuromorphic event-based camera using the onboard IMU rate gyros. Ref: T. Delbruck, V. Villanueva, and L. Longinotti, “Integration of dynamic vision sensor with inertial measurement unit for electronically stabilized event-based vision,” in ISCAS, 2014, pp. 2636–2639. pdf Since this work, iniLabs has integrated the IMU data stream into the FPGA so that IMU and DVS timestamps are synchronized, allowing even better derotation. Video credit: Tobi Delbruck.
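As a rough illustration of the derotation idea, here is a minimal Python sketch; it is not the jAER implementation from the paper, and the pinhole focal length, optical center, sign conventions and small-angle model are all assumptions. Each event's pixel coordinates are shifted to cancel the rotation integrated from the rate-gyro signal.

```python
# Minimal sketch of IMU-based event derotation (assumptions: pinhole model with
# focal length f_px, small angles, gyro and event timestamps already synchronized).
# Illustrative only, not the jAER implementation from the paper.
import numpy as np

def derotate_events(events, gyro_rate, f_px=200.0, center=(120.0, 90.0)):
    """events: (N, 4) array of (t [s], x, y, polarity), sorted by time.
    gyro_rate: callable t -> (wx, wy, wz) pan/tilt/roll rates in rad/s.
    Returns a copy with x, y shifted to cancel camera rotation since events[0]."""
    out = events.copy()
    t_prev = events[0, 0]
    ax = ay = az = 0.0                       # accumulated pan/tilt/roll angles
    for i, (t, x, y, _) in enumerate(events):
        wx, wy, wz = gyro_rate(t)
        dt = t - t_prev
        ax, ay, az = ax + wx * dt, ay + wy * dt, az + wz * dt
        t_prev = t
        # Roll rotates pixels about the optical center; pan/tilt shift them by f * angle.
        u, v = x - center[0], y - center[1]
        u, v = (u * np.cos(az) + v * np.sin(az),
                -u * np.sin(az) + v * np.cos(az))
        out[i, 1] = u + center[0] - f_px * ay    # compensate pan (yaw)
        out[i, 2] = v + center[1] + f_px * ax    # compensate tilt (pitch)
    return out
```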
Technology Demos
RoboGoalie
Robotic goalie. From 20 s onwards you see what the DVS sees, together with the real-time tracking algorithm implemented in jAER. Ref: Tobi Delbruck and Manuel Lang. “Robotic goalie with 3 ms reaction time at 4% CPU load using event-based dynamic vision sensor.” Neuromorphic Engineering Systems and Applications (2015): 16. pdf Video credit: Tobi Delbruck.
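To give a feel for why an event-driven tracker can react within milliseconds, here is a minimal single-cluster tracker sketch in Python (illustrative only, not jAER's actual tracker; the radius and mixing weight are made-up values). Every incoming event nudges the cluster center, so the ball estimate is refreshed continuously instead of once per camera frame.

```python
# Toy event-driven cluster tracker: each DVS event that falls near the current
# cluster center pulls the center slightly towards it, giving a per-event
# (microsecond-scale) position update.
import math

class EventClusterTracker:
    def __init__(self, x0, y0, radius=10.0, mix=0.05):
        self.x, self.y = x0, y0     # current cluster center (pixels)
        self.radius = radius        # events farther than this are ignored
        self.mix = mix              # per-event update weight

    def update(self, ex, ey):
        """Feed one DVS event at pixel (ex, ey); returns the current ball estimate."""
        if math.hypot(ex - self.x, ey - self.y) < self.radius:
            self.x += self.mix * (ex - self.x)
            self.y += self.mix * (ey - self.y)
        return self.x, self.y
```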
Slot Car Racer
Slot car racer project at the 2014 Capo Caccia Neuromorphic Engineering Workshop. It uses the DAVIS240 to track the car with less than 1 ms latency and less than 5% laptop CPU utilization. Extending previous work from 2010, this allows racing a computer against a human driver. The computer-controlled car is tracked by masking out all DVS events that do not come from pixels on the computer-controlled track. Contributions from Alejandro Linares-Barranco, Elias Mueggler, Marc Osswald and Tobi Delbruck. Video credit: Tobi Delbruck.
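The track-masking step can be sketched in a few lines of Python; this is an illustration only, and in practice the mask for the computer-controlled lane would be drawn or calibrated for the specific track layout.

```python
# Sketch of the track-mask idea: discard every DVS event whose pixel does not
# lie on the computer-controlled lane, so only the computer's car is tracked.
import numpy as np

def filter_events_by_track(events, track_mask):
    """events: (N, 4) array of (t, x, y, polarity).
    track_mask: HxW boolean array, True where the pixel lies on the computer's lane."""
    xs = events[:, 1].astype(int)
    ys = events[:, 2].astype(int)
    keep = track_mask[ys, xs]
    return events[keep]
```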
Predator-prey robots from the Visualise EU project, with convolutional neural network
DAVIS240 event and frame input (left) and the convolutional neural network activity of the predator robot (right). The inputs are 36×36 APS or DVS frames, and the network has 3 convolutional layers and one 40-neuron fully connected layer before the 4 output units. Ref: Moeys, Diederik Paul, et al. “Steering a Predator Robot using a Mixed Frame/Event-Driven Convolutional Neural Network.” pdf. Video credit: Tobi Delbruck.
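For readers who want to see the stated shape of the network in code, here is a hedged PyTorch sketch. The 36×36 input, three convolutional layers, 40-neuron fully connected layer and 4 outputs follow the description above; the channel counts, kernel sizes, pooling and activations are assumptions, not the values from the paper.

```python
# Sketch of a network with the described shape (36x36 input, three conv layers,
# one 40-neuron fully connected layer, four outputs). Channel counts, kernel
# sizes and pooling are assumptions.
import torch
import torch.nn as nn

class PredatorNetSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 36 -> 18
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 18 -> 9
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(3), # 9 -> 3
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 3 * 3, 40), nn.ReLU(),  # the 40-neuron fully connected layer
            nn.Linear(40, 4),                      # four steering outputs
        )

    def forward(self, x):  # x: (batch, 1, 36, 36) APS or DVS frame
        return self.classifier(self.features(x))
```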
Embedded Pencil Balancer Robot
Jörg Conradt’s embedded, visually guided pencil-balancer robot. It balances an ordinary pencil using two DVS silicon retinas and 600 mW of microcontroller computation. Video credit: Tobi Delbruck.
Human Tracking – Tennis
The DVS outperforms traditional video technology for tracking body movements. A tennis player returns a shot in front of the DAVIS240B. In the first play-through only the frames are shown; the ball appears in only two frames and its outward path is not captured at all. In the second play-through the dynamic events are overlaid; the fine temporal details of the body movements are clearly visible, as is the trajectory of the ball. Video credit: Tobi Delbruck.
Algorithms and Methods
Various algorithms and methods have been developed by us and our collaborators for working with the asynchronous stream of events from Dynamic Vision Sensors.
Pulsed laser line 3D reconstruction
The DVS is used together with a pulsed laser line for 3D terrain reconstruction. Credit: Prof. Tobi Delbruck and students at the Institute of Neuroinformatics, Zurich. This video is supplemental material for the paper C. Brandli, T. Mantel, M. Hutter, M. Hopflinger, R. Berner, R. Siegwart, and T. Delbruck, “Adaptive Pulsed Laser Line Extraction for Terrain Reconstruction using a Dynamic Vision Sensor,” Frontiers in Neuromorphic Engineering, accepted 2013. pdf
Motion contrast 3D scanning
Motion Contrast 3D scanning, the work of Nathan Matsuda and Oliver Cossairt at the Computational Photography Lab at Northwestern University. This structured-light technique makes efficient use of sensor bandwidth and light-source power to avoid the usual performance trade-offs, allowing 3D vision systems to be deployed in challenging, previously inaccessible real-world scenarios that require high performance with limited power and bandwidth. Learn more
Simultaneous optical flow and intensity estimation
Joint estimation of smooth per-pixel velocity and intensity from pure event data. The method reconstructs HDR, video-like intensity and optical flow at arbitrary frame rates. Ref: P. A. Bardow, A. J. Davison and S. Leutenegger. “Simultaneous Optical Flow and Intensity Estimation from an Event Camera.” CVPR 2016. pdf
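For context, the event-generation model that this and the following reconstruction methods build on can be summarised as follows (generic notation, not the paper's exact formulation):

```latex
% Event-generation model and brightness-constancy relation (generic notation):
\begin{align*}
  L(\mathbf{x}, t) - L(\mathbf{x}, t_{\mathrm{prev}}) &= p\, C,
    \qquad p \in \{+1, -1\}, \\
  \frac{\partial L}{\partial t} + \nabla L \cdot \mathbf{v} &\approx 0,
\end{align*}
% where L = log I is the log intensity, C the pixel contrast threshold, t_prev the
% time of the previous event at pixel x, and v the optical flow estimated jointly with L.
```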
Real-Time Intensity-Image Reconstruction for Event Cameras Using Manifold Regularisation
A variational model that accurately describes the behaviour of event cameras, enabling reconstruction of intensity images with arbitrary frame rate in real time. Ref: Reinbacher C, Graber G, Pock T. “Real-Time Intensity-Image Reconstruction for Event Cameras Using Manifold Regularisation.” arXiv preprint arXiv:1607.06283 (2016). pdf
Ultimate SLAM? Combining Events, Images, and IMU for Visual SLAM in HDR and High-Speed Scenarios
High-speed SLAM using both frames and DVS events in difficult lighting conditions. Ref: Rosinol et al., 2018. pdf
Neuromorphic Sensing and Processing
The DVS interfaces to many popular neuromorphic processors, enabling extremely low-latency, power-efficient systems.
DVS + SpiNNaker: Line-following robot
A robot from the Technische Universität München is controlled by a neural network running on the SpiNNaker system (University of Manchester). Visual input is fed into the system through a DVS onto a retinotopic map of neurons, while motor control is derived from the firing rates of simulated motor neurons. Video credit: Francesco Galluppi.
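As a toy illustration of the control principle (plain Python, not SpiNNaker code; the left/right split and the steering formula are assumptions), the steering command can be thought of as the normalised difference in event-driven activity between the two halves of the retinotopic map:

```python
# Toy sketch of retinotopic steering from DVS events (not SpiNNaker code):
# events from the left half of the sensor push the robot one way, events from
# the right half push it the other way.
def steering_from_events(events, width=128):
    """events: list of (t, x, y, polarity) tuples from a short time window.
    Returns a value in [-1, 1]: negative steers left, positive steers right."""
    left = sum(1 for _, x, _, _ in events if x < width / 2)
    right = len(events) - left
    total = left + right
    return 0.0 if total == 0 else (right - left) / total
```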
DVS + Brainchip: Unsupervised learning for highway monitoring
The output of a DAVIS240B monitoring a highway is fed into BrainChip’s spiking neural network emulator, which performs unsupervised learning. Video credit: BrainChip.
DVS + IBM TrueNorth: Gesture recognition for Samsung
Eric Ryu of Samsung shows a dynamic vision sensor and IBM’s TrueNorth spiking neural network hardware working together to demonstrate low-power gesture recognition. Video credit: VentureBeat.