Two-Camera System

In robotics it is essential to have adequate sensors, and one of the most important sensors for a robot is the vision sensor. For vision sensors, speed and size are of great importance. Vision applications based on standard techniques are widely used in industry today, but these systems are bulky and slow. The latest developments in reconfigurable hardware, together with new sensor chips, enable a new type of high-performance vision system to be built.

The research question that must be answered for this new type of vision system to become useful in robotic applications (industrial robots or autonomous service robots) is how to implement vision algorithms in reconfigurable hardware. The approach taken in this work is to find algorithms that operate on a continuous data flow using sliding windows, thus eliminating the need for external memory to store a complete image.
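The sketch below illustrates the sliding-window idea in plain C rather than HDL: two buffered lines plus the incoming line expose a 3×3 neighbourhood to a per-window operation, so no full frame ever needs to be stored in external memory. The process_window callback and the fixed 640-pixel width are assumptions for the example; on an FPGA the line buffers would map to on-chip block RAM.

```c
#include <stdint.h>
#include <string.h>

#define WIDTH 640            /* image width in pixels (example value) */
#define WIN   3              /* 3x3 sliding window                    */

/* Two buffered lines plus the incoming line give access to a 3x3
 * neighbourhood without storing the full frame off-chip.            */
static uint16_t line0[WIDTH];   /* oldest buffered line  */
static uint16_t line1[WIDTH];   /* previous line         */

/* Hypothetical per-window operation, e.g. a filter kernel. */
extern void process_window(const uint16_t win[WIN][WIN], int x, int y);

/* Called once per incoming pixel of the continuous data stream. */
void push_pixel(uint16_t pixel, int x, int y)
{
    static uint16_t cur[WIDTH];    /* line currently being received */

    cur[x] = pixel;

    /* A full 3x3 window is available once two lines are buffered
     * and at least three pixels of the current line have arrived.  */
    if (y >= 2 && x >= 2) {
        uint16_t win[WIN][WIN] = {
            { line0[x-2], line0[x-1], line0[x] },
            { line1[x-2], line1[x-1], line1[x] },
            { cur[x-2],   cur[x-1],   cur[x]   },
        };
        process_window(win, x - 1, y - 1);   /* centre of the window */
    }

    /* At the end of a line, rotate the line buffers. */
    if (x == WIDTH - 1) {
        memcpy(line0, line1, sizeof line0);
        memcpy(line1, cur,   sizeof line1);
    }
}
```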

The Two-Camera System can be used as a stereo system, but it can also be configured to use only one camera, or two cameras where the second camera is set up differently (shutter speed, white balance or other parameters).
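A minimal configuration sketch, using hypothetical field names rather than the sensors' actual register map, showing how the second camera could be given its own exposure and white-balance settings:

```c
#include <stdint.h>

/* Hypothetical per-sensor settings; the real register map of the
 * 5-megapixel CMOS sensors on GIMME will differ.                  */
struct camera_config {
    uint32_t exposure_us;     /* shutter / exposure time in microseconds */
    uint16_t gain_red;        /* white-balance gains (fixed-point)       */
    uint16_t gain_green;
    uint16_t gain_blue;
    uint8_t  enabled;         /* 0 = sensor unused                       */
};

/* One configuration per sensor: here the second camera uses a shorter
 * exposure than the first, e.g. to cover a wider dynamic range.       */
static const struct camera_config cam_cfg[2] = {
    { .exposure_us = 10000, .gain_red = 256, .gain_green = 256,
      .gain_blue = 256, .enabled = 1 },
    { .exposure_us =  2000, .gain_red = 300, .gain_green = 256,
      .gain_blue = 220, .enabled = 1 },
};
```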

The main problems addressed are the algorithms and the software support systems needed to build complex algorithms on reconfigurable hardware. Some algorithms are proprietary, but the research at the university focuses on the general platform and develops methods and IP-blocks to support implementations.

The Two-Camera project is a three-way collaboration between Mälardalen University, industry and the Knowledge Foundation. The industrial partners are Hectronic, Sensor Control and MEEQ.


GIMME – A General Image Multiview Manipulation Engine

GIMME is the hardware platform that emerged from the Two-Camera System collaboration. It is a highly flexible, reconfigurable, stand-alone mobile two-camera vision platform with stereo-vision capability. GIMME relies on reconfigurable hardware (an FPGA) to perform application-specific low- to medium-level image processing, and the Qseven extension provides additional processing power. Thanks to its compact design, low power consumption and standardized interfaces (power and communication), GIMME is an ideal vision platform for autonomous and mobile robot applications.

GIMME features two 5-megapixel CMOS sensors, an FPGA (Spartan-3A DSP 1800), 32 MB SDRAM, 16 MB flash memory, a Qseven interface, 100 Mbit Ethernet and USB communication.

[Image: the GIMME system]

Streaming images via Ethernet is an important test case, as it involves acquiring and forwarding sensor information and hence requires a framework of IP-blocks for handling the communication between the FPGA and the other hardware components.
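A minimal sketch of what the sending side of such a frame-based streaming protocol could look like. The header layout and the eth_send primitive are assumptions for the example; on GIMME that role is played by the Ethernet IP-block inside the FPGA.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical packet layout: each Ethernet frame carries one image
 * line plus a small header so the receiver can reassemble images
 * and detect dropped packets.                                       */
struct stream_header {
    uint16_t frame_id;      /* running image counter   */
    uint16_t line_no;       /* 0 .. image height - 1   */
    uint16_t line_len;      /* payload length in bytes */
    uint16_t flags;         /* e.g. end-of-frame mark  */
};

/* Assumed send primitive (e.g. implemented by the Ethernet IP-block). */
extern void eth_send(const void *buf, int len);

void send_line(uint16_t frame_id, uint16_t line_no,
               const uint8_t *pixels, uint16_t nbytes)
{
    uint8_t packet[sizeof(struct stream_header) + 1472];
    struct stream_header hdr = {
        .frame_id = frame_id,
        .line_no  = line_no,
        .line_len = nbytes,
        .flags    = 0,
    };

    if (nbytes > 1472)      /* keep the payload within a standard MTU */
        return;

    memcpy(packet, &hdr, sizeof hdr);
    memcpy(packet + sizeof hdr, pixels, nbytes);
    eth_send(packet, (int)(sizeof hdr + nbytes));
}
```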

A frame rate of 5.5 fps is achievable when streaming 640×480 RGB images with 12-bit color depth, using an Ethernet frame-based protocol.
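The bandwidth this implies can be sanity-checked with a short back-of-envelope calculation. The sketch below assumes 12 bits per color channel (36 bits per pixel), which is one possible reading of the color-depth figure; if 12 bits is the total per pixel, the results are a third of the printed values.

```c
#include <stdio.h>

int main(void)
{
    const double width  = 640, height = 480;
    const double bits_per_pixel = 3 * 12;   /* assumption: 12 bits per channel */
    const double fps    = 5.5;

    double bits_per_frame = width * height * bits_per_pixel;   /* ~11.1 Mbit  */
    double mbit_per_s     = bits_per_frame * fps / 1e6;        /* ~60.8 Mbit/s */

    printf("payload: %.1f Mbit/frame, %.1f Mbit/s at %.1f fps\n",
           bits_per_frame / 1e6, mbit_per_s, fps);
    return 0;
}
```

At roughly 60 Mbit/s of raw pixel data, the quoted rate sits plausibly below the 100 Mbit Ethernet limit once protocol overhead is added.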

[Image: a streamed image]

The main idea behind the hardware design is to implement image-processing algorithms within the FPGA in order to extract important information from the image data stream. In the Malta project, features from the Harris-Stephens combined corner and edge detector are used for autonomous navigation.
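As a rough software illustration (not the streaming FPGA implementation used on GIMME), the Harris-Stephens response for a pixel is R = det(M) - k * trace(M)^2, where M is the structure tensor accumulated from image gradients in a small neighbourhood:

```c
#include <stdint.h>

/* Harris-Stephens corner response for the pixel at (x, y) of an
 * intensity image. A software sketch only; a streaming FPGA version
 * would work on a sliding window instead of a full frame.            */
double harris_response(const uint16_t *img, int width, int height,
                       int x, int y, double k /* typically 0.04..0.06 */)
{
    double sxx = 0.0, syy = 0.0, sxy = 0.0;

    /* Accumulate the structure tensor over a 3x3 neighbourhood. */
    for (int dy = -1; dy <= 1; dy++) {
        for (int dx = -1; dx <= 1; dx++) {
            int px = x + dx, py = y + dy;
            if (px < 1 || py < 1 || px >= width - 1 || py >= height - 1)
                continue;
            /* Central-difference image gradients. */
            double ix = (double)img[py * width + px + 1]
                      - (double)img[py * width + px - 1];
            double iy = (double)img[(py + 1) * width + px]
                      - (double)img[(py - 1) * width + px];
            sxx += ix * ix;
            syy += iy * iy;
            sxy += ix * iy;
        }
    }

    /* R = det(M) - k * trace(M)^2; a large positive R indicates a corner. */
    double det   = sxx * syy - sxy * sxy;
    double trace = sxx + syy;
    return det - k * trace * trace;
}
```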

The frame rate is 44.4 fps on intensity images derived from a 640×480 RGB image stream of 12-bit color depth.

[Image: streamed Harris corner-detection output]
