The neuromorphic camera is a biologically inspired camera that continually tracks an object within its field of view, much like the peripheral vision of the human eye. The camera is implemented in hardware and consists mainly of an integrated circuit mounted with a lens on a PCB. It was fabricated in Austria Micro Systems’ 0.6 µm CMOS process and tested in the laboratory. In testing, the camera was only able to track bright spots. This was caused by an implementation fault that did not appear in simulations; the fault has since been identified and verified through simulation.
The camera has 51x42 pixels, each sensitive to changes in illumination. Each pixel operates as follows: light first reaches the photoreceptor, which transduces it into a photocurrent. The current is then amplified by a photo circuit that adapts to the ambient light level. The circuitry that follows detects changes in the photocurrent, which indicate changes in the perceived light.
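The pixel behaviour described above can be sketched in software as an adapting baseline followed by a change detector. This is a hypothetical model, not the actual circuit: the class name, the logarithmic transduction, and the adaptation rate and threshold values are all illustrative assumptions.

```python
import math

class PixelModel:
    """Illustrative software model of one pixel: a photoreceptor that
    adapts to the light level, followed by a change detector."""

    def __init__(self, adapt_rate=0.1, threshold=0.05):
        self.adapted = None          # slowly adapting baseline (assumed log-domain)
        self.adapt_rate = adapt_rate # assumed adaptation speed
        self.threshold = threshold   # assumed change-detection threshold

    def step(self, illumination):
        """Return True if the pixel detects a change in illumination."""
        # Logarithmic transduction is an assumption about the photoreceptor.
        photocurrent = math.log(1.0 + illumination)
        if self.adapted is None:
            self.adapted = photocurrent
        change = photocurrent - self.adapted
        # The baseline adapts slowly toward the current level, so a
        # sustained illumination eventually stops producing events.
        self.adapted += self.adapt_rate * change
        return abs(change) > self.threshold
```

Under this model a constant scene produces no events once the pixel has adapted, while a sudden change in brightness does, mirroring the sensitivity to change rather than absolute intensity.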
The pixels are arranged in a 2-dimensional array, with each pixel connected to a row and a column. At the ends of the rows and columns sit two winner-take-all circuits, which find the row and the column with the most changes. The chosen row and column roughly indicate the center of an object moving across the field of view. When no objects are moving, the camera sends out a special timeout signal.
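The selection logic of the two winner-take-all circuits can be sketched as follows. The real circuits are analogue and clockless; this sketch assumes, purely for illustration, that each pixel contributes a boolean "change seen" flag per readout, and the function name is hypothetical.

```python
def locate_object(change_map):
    """Sketch of the row/column winner-take-all readout.

    change_map: 2-D list of booleans, True where a pixel detected a change.
    Returns (row, col) of the most active row and column, or None to
    stand in for the camera's timeout signal when nothing is moving.
    """
    # Sum the change events along each row and each column.
    row_activity = [sum(row) for row in change_map]
    col_activity = [sum(col) for col in zip(*change_map)]
    if max(row_activity) == 0:
        return None                      # no moving objects -> timeout
    # Each winner-take-all picks the single strongest input.
    winner_row = row_activity.index(max(row_activity))
    winner_col = col_activity.index(max(col_activity))
    return (winner_row, winner_col)
```

The returned pair corresponds to the rough object center mentioned above: a compact moving object lights up a cluster of pixels, and the busiest row and column intersect near its middle.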
The camera’s output is in the form of spikes, modeled after the nervous system. Spike signals are digital in value and analogue in time, since no clocks are used. To compensate for the vast number of connections found in the nervous system, the outputs share a common bus and follow the address-event protocol. As a consequence, the camera can be connected to any equipment that understands this protocol.
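The idea behind address-event communication can be sketched as packing each spike's source coordinates into a single word on the shared bus. This is a minimal illustration, not the camera's actual protocol: the row-major addressing scheme and the function names are assumptions, and the real bus uses asynchronous handshaking rather than function calls.

```python
# Assumed array dimensions from the text: 51x42 pixels
# (taken here as 51 columns by 42 rows).
ROWS, COLS = 42, 51

def encode_event(row, col):
    """Pack a spiking pixel's (row, col) into one bus address word.
    Row-major numbering is an assumption for this sketch."""
    assert 0 <= row < ROWS and 0 <= col < COLS
    return row * COLS + col        # unique address per pixel

def decode_event(address):
    """Recover the source pixel's coordinates from a bus word,
    as a receiver that understands the protocol would."""
    return divmod(address, COLS)
```

Because every event carries its source address, many pixels can share one bus, which is how the protocol compensates for the point-to-point wiring that biology uses; timing is conveyed implicitly by when the event appears on the bus.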