Prophesee’s Event-Based Approach to Image Sensing

Prophesee’s event-based approach to image sensing means that vision system designers who want to capture fast events no longer need to make a trade-off between running their cameras at high frame rates and dealing with large amounts of redundant data. The volume of data the sensor produces is now governed by the amount of change in its field of view, automatically adjusting as the scene conditions evolve.

A static scene generates no events, but if there is a burst of action, the camera adapts automatically to capture it instantly. This makes it easier and more cost-effective to acquire and analyze very fast motion, even when it is interleaved with times or areas in which motion is absent.

Each pixel provides information at the rate of change in its field of view, not at an arbitrary, pre-set and fixed frame rate. An event-based approach also means that dynamic scenes can be analyzed as a highly resolved sequence of events that form spatiotemporal patterns representing features such as the edges, trajectories, or velocities of objects. The figure above represents how the Prophesee sensor would record a rotating robotic arm.
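To make the contrast with frame-based capture concrete, the sketch below models event generation in Python: a pixel emits an event only when its log-intensity has changed by more than a contrast threshold since that pixel last fired. The (x, y, timestamp, polarity) event format and the threshold value are illustrative assumptions for this sketch, not a description of Prophesee’s actual sensor pipeline.

```python
import numpy as np

def events_from_frames(frames, timestamps, threshold=0.2):
    """Illustrative model of event generation: emit an (x, y, t, polarity)
    event whenever a pixel's log-intensity has changed by more than
    `threshold` since the last event at that pixel.
    Static regions produce no output at all."""
    log_ref = np.log1p(frames[0].astype(np.float64))  # per-pixel reference level
    events = []
    for frame, t in zip(frames[1:], timestamps[1:]):
        log_now = np.log1p(frame.astype(np.float64))
        diff = log_now - log_ref
        ys, xs = np.nonzero(np.abs(diff) >= threshold)  # pixels that changed enough
        for x, y in zip(xs, ys):
            polarity = 1 if diff[y, x] > 0 else -1      # brighter or darker
            events.append((int(x), int(y), t, polarity))
            log_ref[y, x] = log_now[y, x]                # reset reference at that pixel
    return events
```

Fed a sequence of identical frames, the function returns an empty list; a sudden burst of motion produces a dense cluster of events, which is exactly the behavior described above.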

Prophesee’s event-based approach to dynamic sensing achieves temporal resolutions of tens of kHz, where a frame-based approach struggles to reach 60 Hz.

This is possible because each visual event is handled as an incremental change to a continuous signal, which can be analyzed at low computational cost compared to the simultaneous analysis of all the pixels in many complete frames. An event-based approach also makes it easier to correlate multiple views of a scene. This eases tasks such as 3D depth reconstruction in multi-camera stereoscopy setups, because if two or more cameras sense an event at once, it is likely they are observing the same point.
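As an illustration of how temporal coincidence can drive stereo matching, the sketch below pairs events from two cameras when they occur within a small time window and share the same polarity. The event tuple format, microsecond timestamps, and the 100 µs window are assumptions made for the example; a real stereo pipeline would additionally enforce the epipolar geometry between the two views.

```python
def match_events(left_events, right_events, max_dt_us=100):
    """Naive temporal-coincidence matching between two event streams, each a
    list of (x, y, t, polarity) tuples sorted by timestamp t (microseconds).
    Events that occur within `max_dt_us` of each other and share polarity
    are treated as candidate views of the same 3D point."""
    matches = []
    j = 0
    for x_l, y_l, t_l, p_l in left_events:
        # advance the right-stream cursor past events too old to match
        while j < len(right_events) and right_events[j][2] < t_l - max_dt_us:
            j += 1
        k = j
        # scan forward through right-stream events inside the time window
        while k < len(right_events) and right_events[k][2] <= t_l + max_dt_us:
            x_r, y_r, t_r, p_r = right_events[k]
            if p_r == p_l:
                matches.append(((x_l, y_l, t_l), (x_r, y_r, t_r)))
            k += 1
    return matches
```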

Recent research also suggests that analyzing the way in which the illuminance of a single pixel changes over time will enable the development of new ways to solve key vision challenges, such as object recognition, obstacle avoidance, and the simultaneous localization and mapping processes that are vital to enabling vehicle autonomy.

Potential advantages:

  1. Efficient streaming of visual data for real-time communication and surveillance, thanks to the data compression inherent in not repeatedly reporting data from static parts of an image.
  2. High-speed tracking of fast-moving objects, thanks to the fact that the data stream from the sensor can be directly processed to measure object speed and predict trajectories (a minimal velocity-estimation sketch follows this list).
  3. Detection and real-time analysis of high-speed transient visual events, thanks to each pixel’s ability to dynamically adjust its sampling rate to the scene’s needs.
  4. Efficient stereoscopy, through event-based correlation between multiple cameras simultaneously reporting a change in their fields of view when looking at the same 3D object.
  5. Higher temporal-resolution inputs for real-time control systems.
  6. Enhanced machine-learning strategies, because event-based sensing adds time as another dimension of information to vision analysis tasks.
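As a rough illustration of point 2, the sketch below estimates an object’s image-plane velocity directly from an event stream by tracking the centroid of events over successive time windows. The event format and the 1 ms window are illustrative assumptions for this example, not parameters of the Prophesee sensor.

```python
import numpy as np

def velocity_from_events(events, window_us=1000):
    """Estimate image-plane velocity (pixels per microsecond) by tracking the
    centroid of events over successive time windows. `events` is an iterable
    of (x, y, t, polarity) tuples with t in microseconds; the 1 ms window is
    an illustrative choice."""
    events = sorted(events, key=lambda e: e[2])
    if not events:
        return None
    centroids, window, start = [], [], events[0][2]
    for x, y, t, _ in events:
        if t - start >= window_us and window:
            pts = np.asarray(window, dtype=float)
            centroids.append((pts[:, 0].mean(), pts[:, 1].mean(), start + window_us / 2))
            window, start = [], t
        window.append((x, y))
    if len(centroids) < 2:
        return None
    # velocity between the last two centroids
    (x0, y0, t0), (x1, y1, t1) = centroids[-2], centroids[-1]
    return (x1 - x0) / (t1 - t0), (y1 - y0) / (t1 - t0)
```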

For more information, visit www.prophesee.ai and https://www.prophesee.ai/2020/01/24/prophesee-gen1-automotive-detection-dataset/
