In this update, we simulate some of the image processing stages our FPGA performs before handing processed image data to our Host. This is a follow-up to our earlier FPGA Image Processing Pipeline post.
Setting up our test
With our camera connected to an evaluation board, we capture 9 frames of raw image data so that we can simulate the following stages:
- Demosaicing (& Bit-Depth Reduction)
- Auto White Balance
- Gamma Correction
Stage 0: Raw Input
First, we capture the raw (10-bit) image data, which doesn't yet contain full color information:
NOTE: This is just a 10 FPS sample video for illustration purposes. The real AR Mode will operate at 90-120 FPS.
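As a sketch of how we get this data into a form we can manipulate, here's one way to load a frame and perform the bit-depth reduction in NumPy. The frame geometry and the 16-bit container layout are assumptions for illustration, not the actual sensor format:

```python
import numpy as np

# Hypothetical frame geometry -- the real sensor resolution isn't shown here.
WIDTH, HEIGHT = 1280, 800

def load_raw_frame(path: str) -> np.ndarray:
    """Read one raw frame, assuming each 10-bit pixel is stored in a
    little-endian 16-bit container word (an assumption for this sketch)."""
    data = np.fromfile(path, dtype="<u2", count=WIDTH * HEIGHT)
    return data.reshape(HEIGHT, WIDTH)

def reduce_bit_depth(raw10: np.ndarray) -> np.ndarray:
    """Bit-depth reduction: drop the two least-significant bits,
    mapping 10-bit values (0..1023) onto 8 bits (0..255)."""
    return (raw10 >> 2).astype(np.uint8)
```

Truncating the two low bits is the simplest reduction; a pipeline could instead round or dither, but a plain shift is essentially free in hardware.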
Stage 1: Debayering
In the raw data from the previous step, each pixel encodes only one of red, green, or blue intensity, laid out in a Bayer pattern. The two missing channels at each pixel must be interpolated (or "demosaiced"/"debayered") to recover full color data. Here's the result of this interpolation across the 9 frames:
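As a rough model of what happens in this stage, here's a minimal bilinear demosaic in NumPy. The RGGB pattern is an assumption for illustration, and the FPGA's actual interpolation scheme may differ:

```python
import numpy as np
from scipy.signal import convolve2d

def demosaic_bilinear(raw: np.ndarray) -> np.ndarray:
    """Bilinear demosaic of an 8-bit Bayer mosaic (RGGB assumed).
    Missing channels at each pixel are averaged from nearby samples."""
    h, w = raw.shape
    raw = raw.astype(np.float32)

    # Boolean masks marking where each channel was actually sampled.
    r_mask = np.zeros((h, w), bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)

    # Interpolation kernels: green averages its 4 axis-aligned neighbors;
    # red/blue also borrow from the diagonals.
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]], np.float32) / 4
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], np.float32) / 4

    # Edge pixels are approximate due to zero padding at the borders.
    r = convolve2d(raw * r_mask, k_rb, mode="same")
    g = convolve2d(raw * g_mask, k_g,  mode="same")
    b = convolve2d(raw * b_mask, k_rb, mode="same")
    return np.clip(np.dstack([r, g, b]), 0, 255).astype(np.uint8)
```

At each pixel the kernel weights sum to one over the sampled neighbors, so known samples pass through unchanged while the missing channels become local averages.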
Stage 2: Auto White Balance
Auto White Balance refers to the process of adjusting pixel intensities in an image to neutralize the color cast introduced by different lighting conditions. For this we use the "Gray World Assumption": averaged over the entire FOV, the three sensor channels ought to be equal, i.e., the scene should average out to gray. Here's the result:
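A gray-world correction amounts to scaling each channel so its mean matches the overall mean. A minimal sketch, clipping back to the 8-bit range afterwards:

```python
import numpy as np

def gray_world_awb(rgb: np.ndarray) -> np.ndarray:
    """Gray World Assumption: averaged over the whole frame, the three
    channels should be equal, so scale each channel's mean up or down
    to the overall mean."""
    img = rgb.astype(np.float32)
    channel_means = img.reshape(-1, 3).mean(axis=0)   # mean of R, G, B
    gains = channel_means.mean() / channel_means      # e.g. boost a weak channel
    return np.clip(img * gains, 0, 255).astype(np.uint8)
```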
Stage 3: Gamma Correction (+ Auto White Balance)
Gamma correction refers to the process of correcting for the discrepancy between the way image sensors and human eyes perceive luminance. A sensor's response to light is linear, while human brightness perception is not, so we pass the linear data through a power-law transfer function, out = in^(1/γ):
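Here's a minimal sketch of that transfer function using a precomputed lookup table. The value γ = 2.2 is an assumption here, and hardware pipelines commonly store such a curve in a small ROM/LUT rather than exponentiating per pixel:

```python
import numpy as np

GAMMA = 2.2  # a common display gamma; the pipeline's actual value isn't stated

# Precomputed 256-entry table mapping each 8-bit input to in^(1/gamma).
_GAMMA_LUT = np.round(
    255.0 * (np.arange(256) / 255.0) ** (1.0 / GAMMA)
).astype(np.uint8)

def gamma_correct(rgb: np.ndarray) -> np.ndarray:
    """Apply out = in^(1/gamma) via table lookup."""
    return _GAMMA_LUT[rgb]
```

Indexing the table with the image array applies the curve to every pixel at once.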
Here is the result of applying gamma correction through our simulated FPGA:
Putting It All Together
Here's how a frame changes over the course of the full transformation:
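For reference, here's a compact, self-contained sketch chaining the simulated stages in order: bit-depth reduction, debayering, gray-world white balance, then gamma. The RGGB pattern and γ = 2.2 remain illustration-only assumptions:

```python
import numpy as np
from scipy.signal import convolve2d

def simulate_pipeline(raw10: np.ndarray, gamma: float = 2.2) -> np.ndarray:
    """Run one 10-bit Bayer frame through the whole simulated chain."""
    # Stages 0-1: reduce to 8 bits, then bilinear-demosaic (RGGB assumed).
    img = (raw10 >> 2).astype(np.float32)
    h, w = img.shape
    r_m = np.zeros((h, w), bool); r_m[0::2, 0::2] = True
    b_m = np.zeros((h, w), bool); b_m[1::2, 1::2] = True
    g_m = ~(r_m | b_m)
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]], np.float32) / 4
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], np.float32) / 4
    rgb = np.dstack([convolve2d(img * r_m, k_rb, mode="same"),
                     convolve2d(img * g_m, k_g,  mode="same"),
                     convolve2d(img * b_m, k_rb, mode="same")])

    # Stage 2: gray-world white balance.
    means = rgb.reshape(-1, 3).mean(axis=0)
    rgb *= means.mean() / means

    # Stage 3: gamma correction, out = in^(1/gamma).
    rgb = 255.0 * (np.clip(rgb, 0, 255) / 255.0) ** (1.0 / gamma)
    return np.round(rgb).astype(np.uint8)
```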