How to convert camera RGB/YUV to standard display data?


Introduction

Maxim serializers connect and control camera ICs, including the MAX9257 (with a half-duplex UART/I²C control channel), MAX9259, and MAX9263 (both with a full-duplex synchronous control channel). The MAX9263 also supports high-bandwidth digital content protection (HDCP). This application note describes how to convert the camera’s RGB or YUV output to RGB data accepted by standard monitors.

Camera output data format

A camera chip, such as the OmniVision® OV10630, can be connected through a serializer. The interface pins of the OV10630 include the pixel clock (PCLK), line active (HREF), frame sync (VSYNC), and parallel data bits D[9:0]. Data bits are stable on the rising edge of the pixel clock.

YUV and raw RGB data formats

CMOS camera sensors contain millions of light-sensitive cells, each of which responds to a wide range of wavelengths. Optical filters restrict individual cells to respond only to red, green, or blue light. Adjacent cells are usually arranged in a Bayer-pattern color filter array, in which the number of green filters is twice the number of red or blue filters; this arrangement mimics the sensitivity of the human eye. Reading the sensor output from left to right and top to bottom, the raw RGB data sequence is blue, green, …, blue, green (end of the first line), then green, red, …, green, red (end of the second line), and so on, as shown in Figure 1.


Figure 1. Raw RGB data arrangement
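The read-out order described above can be sketched in a few lines of Python (an illustration added here, not part of the original app note):

```python
# Illustrative sketch: emit the raw Bayer read-out order described above
# for a small sensor. Even rows alternate blue/green, odd rows alternate
# green/red, matching the arrangement in Figure 1.

def bayer_readout(rows, cols):
    """Return the color of each cell, read left-to-right, top-to-bottom."""
    order = []
    for i in range(rows):
        for j in range(cols):
            if i % 2 == 0:                 # even rows: B G B G ...
                order.append("B" if j % 2 == 0 else "G")
            else:                          # odd rows:  G R G R ...
                order.append("G" if j % 2 == 0 else "R")
    return order

print(bayer_readout(2, 4))  # ['B', 'G', 'B', 'G', 'G', 'R', 'G', 'R']
```

Note that any 2×2 tile contains two green cells and one each of red and blue, giving the 2:1 green weighting mentioned above.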

RGB data with the same density as the sensor cells is generated by interpolating adjacent cells; the image can then be restored according to certain rules using the colors of neighboring cells. One rule for forming each pixel’s RGB data set is to use adjacent cells in the same row, plus the green adjacent cell in the next (or previous) row. The interpolated RGB data sequence is …, red(i−1), green(i−1), blue(i−1), red(i), green(i), blue(i), red(i+1), green(i+1), blue(i+1), …, as shown in Figure 2. Each pixel requires a full RGB triple to drive a color display while maintaining the camera sensor’s full resolution. The luminance resolution of the interpolated RGB data is close to that of the sensor array, but the chrominance resolution is lower. Since the human eye is more sensitive to each pixel’s grayscale than to its color components, the perceived resolution is essentially the same as the sensor resolution.


Figure 2. RGB data arrangement
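As a rough illustration of this interpolation (hypothetical code, and deliberately simpler than the same-row-plus-adjacent-green rule described above), a nearest-neighbour sketch assigns each pixel the closest cell of each color:

```python
# Deliberately simple nearest-neighbour demosaic sketch. The app note's
# actual rule uses same-row neighbours plus a green cell from the adjacent
# row; this minimal version just grabs the closest cell of each colour.

def color_at(i, j):
    """Bayer colour of cell (i, j): BGBG / GRGR pattern."""
    if i % 2 == 0:
        return "B" if j % 2 == 0 else "G"
    return "G" if j % 2 == 0 else "R"

def demosaic_nearest(raw):
    """raw: 2-D list of cell values -> 2-D list of (R, G, B) per pixel."""
    rows, cols = len(raw), len(raw[0])
    out = []
    for i in range(rows):
        row = []
        for j in range(cols):
            rgb = {}
            # search the cell itself, then its immediate neighbours
            for di in (0, -1, 1):
                for dj in (0, -1, 1):
                    y, x = i + di, j + dj
                    if 0 <= y < rows and 0 <= x < cols:
                        rgb.setdefault(color_at(y, x), raw[y][x])
            row.append((rgb["R"], rgb["G"], rgb["B"]))
        out.append(row)
    return out
```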

However, this interpolation triples the data rate. To reduce the rate, especially where the image must be transmitted, the YUV color space can be used (originally devised to compress the color television signal into the bandwidth of analog black-and-white television). In the following formulas, luminance is represented by Y, the color difference between blue and luminance by U, and the color difference between red and luminance by V:

Y = WR × R + WG × G + WB × B
U = UMAX × (B − Y) / (1 − WB)
V = VMAX × (R − Y) / (1 − WR)

where the typical color weights are WR = 0.299, WB = 0.114, and WG = 1 − WR − WB = 0.587, and the normalization constants are UMAX = 0.436 and VMAX = 0.615.
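These definitions can be checked numerically; the following sketch (illustrative only, using the weights above) computes Y, U, and V for normalized RGB inputs:

```python
# Sketch of the Y/U/V definitions above with the standard weights.

WR, WB = 0.299, 0.114
WG = 1 - WR - WB            # 0.587
UMAX, VMAX = 0.436, 0.615   # normalisation constants

def rgb_to_yuv(r, g, b):
    """r, g, b in [0, 1] -> (y, u, v)."""
    y = WR * r + WG * g + WB * b
    u = UMAX * (b - y) / (1 - WB)
    v = VMAX * (r - y) / (1 - WR)
    return y, u, v
```

Sanity checks: pure white gives Y = 1 with zero chroma, pure blue reaches U = UMAX, and pure red reaches V = VMAX.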
For camera sensors with Bayer filters, adjacent pixels have approximately the same U or V data, which depends on the row index i and the pixel index j (when the adjacent-colors rule is used). Using this property, YUV data can be generated directly from the raw RGB data according to the following formula.

[Equation image not reproduced: direct Bayer RGB-to-YUV conversion formula]

To reduce the data rate, only even-pixel-indexed U data and odd-pixel-indexed V data are kept, while Y data is kept for both even and odd pixel indices. The compressed YUV data is sent in the arrangement shown in Figure 3: Y1, U0, and V1 are the data of pixel 1; Y2, U2, and V1 are the data of pixel 2; and so on.


Figure 3. YUV422 data arrangement

The "422" denotes the Y:U:V sampling ratio. The 4:x:x notation dates back to the early NTSC color standard, in which chroma was resampled at 4:1:1, so the color resolution of the image was only a quarter of the luminance resolution. Today, only high-end devices that process uncompressed signals use 4:4:4 resampling, which keeps the same resolution for luminance and color information.
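The 4:2:2 packing and unpacking described above can be sketched as follows (hypothetical helper names, assuming an even number of pixels):

```python
# Sketch of 4:2:2 packing: per-pixel (Y, U, V) triples are compressed to
# two bytes per pixel by keeping U on even pixels and V on odd pixels;
# unpacking reuses the shared chroma for each pixel pair.

def pack_yuv422(pixels):
    """pixels: list of (y, u, v); returns interleaved U0 Y0 V1 Y1 ... stream."""
    stream = []
    for n, (y, u, v) in enumerate(pixels):
        stream.append(u if n % 2 == 0 else v)  # chroma alternates U/V
        stream.append(y)                       # luma is kept for every pixel
    return stream

def unpack_yuv422(stream):
    """Rebuild (y, u, v) per pixel; each pixel pair shares its U and V."""
    pixels = []
    for k in range(0, len(stream), 4):         # one pixel pair per 4 bytes
        u, y0, v, y1 = stream[k:k + 4]
        pixels += [(y0, u, v), (y1, u, v)]
    return pixels
```

Luma survives the round trip exactly; chroma is shared across each pixel pair, which is the source of the halved color resolution.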

Serializer Input Format

The parallel interface of Maxim serializers is designed for 24-bit RGB data. The MAX9259, for example, has a pixel clock input (PCLK) and 29 data bits: 24 bits of RGB plus horizontal sync, vertical sync, and 3 control bits. In addition to the parallel data interface, the DRS and BWS pins must be tied high or low to select the data rate and bus width, respectively.

Maxim Serializers/Deserializers

The MAX9257/MAX9258 serializer/deserializer (SerDes) provides 18-bit parallel I/O for YUV data transfer; the MAX9259/MAX9260 chipset provides 28-bit parallel I/O for RGB data transfer; the MAX9263/MAX9264 SerDes also has 28-bit parallel I/O and adds HDCP. In addition, the MAX9265 and MAX9268 28-bit SerDes feature a Camera Link interface instead of a parallel I/O interface. All 28-bit Maxim serializers and deserializers use the same parallel/serial data mapping and can be used interchangeably. For example, the MAX9259 serializer can be paired with the MAX9268 deserializer to transmit RGB data (with the help of an FPGA) from a CMOS camera over the serial link to a display with a Camera Link interface.

Serializer Mapping

To match the Camera Link output interface of the MAX9268 deserializer, the parallel RGB data should be mapped as follows. Figure 4 shows the mapping between the MAX9268’s parallel bits and its Camera Link output, and Figure 5 shows the RGB data mapping for Camera Link. Table 1 shows the corresponding content map for the MAX9259 serializer.


Figure 4. MAX9268 Internal Parallel-to-Output Mapping


Figure 5. Camera Link Content Mapping

Table 1. MAX9259 Serializer RGB Content Bitmap


Color Conversion: YUV to RGB

An FPGA can convert the compressed (reduced-rate) YUV camera data into RGB data for the MAX9259 serializer. Using 8-bit fixed-point arithmetic, the color space conversion formulas are as follows; in Equation 2 and Equation 3, the index n of Dn and En is even.

[Equation images not reproduced: Equations 2 and 3, fixed-point YUV-to-RGB conversion]
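Because the original equation images are not reproduced here, the following is a hedged stand-in: a commonly used 8-bit fixed-point BT.601-style YUV-to-RGB approximation with integer multiplies and a shift back by 8, not necessarily the app note’s exact Equations 2 and 3:

```python
# Common 8-bit fixed-point YUV -> RGB approximation (BT.601-style
# coefficients scaled by 256). Offered as an illustration only; the app
# note's own Equations 2 and 3 may differ in coefficients and notation.

def clamp8(x):
    """Saturate to the 8-bit range [0, 255]."""
    return max(0, min(255, x))

def yuv_to_rgb(y, u, v):
    """y in [16, 235], u/v in [16, 240] (offset binary, 128 = zero chroma)."""
    c = y - 16
    d = u - 128
    e = v - 128
    r = clamp8((298 * c + 409 * e + 128) >> 8)
    g = clamp8((298 * c - 100 * d - 208 * e + 128) >> 8)
    b = clamp8((298 * c + 516 * d + 128) >> 8)
    return r, g, b
```

The `+ 128` term rounds before the `>> 8` scale-back, which matters when the result is later truncated to 8 bits.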

FPGA solution

Input buffer

The input buffer circuit comprises a counter, three registers, and combinational logic that converts a one-byte-per-clock input into a three-byte output at half the input clock rate. The combinational logic simply enables the register corresponding to the current Y, U, or V byte.


Figure 6. Input Buffer Circuit
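A behavioural model of this buffer (hypothetical Python, not the app note’s HDL) shows the counter steering each byte into the right register and a full word appearing every other input clock:

```python
# Behavioural model of the input buffer: a counter enables the U, Y or V
# register on each input clock; a full (y, u, v) word is valid on every
# other input clock, i.e. at half the input byte rate.

class InputBuffer:
    def __init__(self):
        self.count = 0
        self.y = self.u = self.v = 0   # register reset values

    def clock(self, byte):
        """Apply one input clock; return a (y, u, v) word when valid."""
        phase = self.count
        self.count = (self.count + 1) % 4
        if phase == 0:
            self.u = byte              # enable U register
        elif phase == 2:
            self.v = byte              # enable V register
        else:
            self.y = byte              # enable Y register (phases 1 and 3)
            return (self.y, self.u, self.v)
        return None
```

As in the hardware, the very first output word still holds the V register’s reset value, since that pixel’s V byte has not yet arrived.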

Clock switch

The FPGA’s output pixel clock runs at half the camera pixel clock rate and drives the serializer’s pixel clock input. However, the camera does not output a pixel clock until it has been initialized. The solution is a 2:1 clock multiplexer (mux) and a clock detector inside the FPGA, with the mux controlled by the detector. At power-up, the mux selects the clock from the local crystal oscillator, allowing the SerDes chipset to bring up the control channel and start the camera. The clock detector counts field sync pulses; after several pulses, the mux switches to half the camera pixel clock rate. With an HD camera sensor such as the OV10630, each field sync period contains more than 100k pixel clocks, so a few field sync periods give the camera’s phase-locked loop (PLL) ample time to stabilize. Counting field syncs instead of pixel clocks requires a much smaller counter and saves FPGA logic cells.
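The detector-controlled mux can be modelled behaviourally as follows (hypothetical sketch; the threshold `N_VSYNC` and the clock names are assumptions, not values from the app note):

```python
# Behavioural sketch of the clock-switch logic: the mux stays on the local
# oscillator until the detector has counted N field sync (VSYNC) pulses,
# then selects the divided camera pixel clock.

class ClockSwitch:
    N_VSYNC = 4                        # assumed settling threshold

    def __init__(self):
        self.count = 0
        self.use_camera_clock = False

    def on_vsync(self):
        """Called once per field sync pulse from the camera."""
        if not self.use_camera_clock:
            self.count += 1
            if self.count >= self.N_VSYNC:
                self.use_camera_clock = True   # mux: camera PCLK / 2

    def selected_clock(self):
        return "camera_pclk_div2" if self.use_camera_clock else "local_osc"
```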

Intermediate buffer

Hardware delays do not appear in the format conversion expressions. Generating RGB data from YUV input requires two to three multiplications and three to four additions. Although the delay of each FPGA logic gate is only a few nanoseconds, carry propagation, the adders, and the shift-based multipliers each add delay, increasing the total. To minimize delay, each constant multiplier is approximated by an adder with two shifted inputs (representing the 2 non-zero MSBs of the constant). At an input YUV byte rate of about 100MHz, the accumulated delay would otherwise cross the timing boundary into the adjacent pixel, adding image noise. The excess delay is removed by intermediate registers inserted after each multiplier.
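The two-shifted-input approximation can be sketched as follows (illustrative Python; the constant 298 is chosen only as an example of a conversion coefficient):

```python
# Sketch of the approximation described above: each constant multiplier is
# replaced by an adder whose two inputs are the operand shifted by the
# positions of the constant's two most significant set bits.

def top2_shifts(k):
    """Bit positions of the two highest set bits of constant k."""
    bits = [i for i in range(k.bit_length()) if (k >> i) & 1]
    return bits[-1], bits[-2]

def approx_mul(x, k):
    """x * k approximated with one adder and two shifts."""
    s1, s2 = top2_shifts(k)
    return (x << s1) + (x << s2)

# Example: 298 = 0b100101010, so it is approximated as 256 + 32 = 288.
print(approx_mul(10, 298))   # 2880, vs exact 2980
```

The error depends on the bits discarded below the top two, so the approximation is coarse but keeps the critical path to a single adder.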

The YUV to RGB color conversion mentioned above has been used in the Actel® ProASIC3 A3PN125Z FPGA, and Figure 7 shows the schematic for implementing this FPGA.

Figure 7. FPGA Implementation of YUV to RGB Converter

Application circuit

The camera chip provided by the manufacturer may be mounted on a PCB daughterboard. Figure 8 shows the functional block diagram of the camera daughterboard module. Inputs include power (PWR) and the crystal clock (XCLK). Output signals include the parallel data bits (D0..D9), the I²C bus (SDA, SCL), video sync (HREF, VSYNC), and the pixel clock (PCLK).


Figure 8. Camera Module Functional Block Diagram

Figure 9 shows the schematic of the FPGA and serializer portions of the application circuit. The circuit is powered over a serial cable with two twisted pairs: one pair carries the serial signal and the other carries power. Separate LDO regulator ICs power the serializer and the FPGA. The camera module uses bypass capacitors and its own LDO regulator to further reduce potential interference. The data lines between the FPGA and the serializer use damping resistors.


Figure 9a. FPGA portion of application circuit


Figure 9b. Serializer Section of Application Circuit

The MAX9259 can also be connected directly to a camera sensor such as the OV10630 to build a smaller camera, with the color-space-conversion FPGA placed after the deserializer instead. Because this application requires a Camera Link output that the MAX9268 can drive directly, the color conversion FPGA is placed between the camera sensor and the serializer (MAX9259).

Video capture example

The camera application circuit shown in Figure 10 was built using these circuits.


Figure 10. Camera Application Circuit

Conclusion

This application note describes a typical application in which Maxim’s camera serializer and deserializer ICs work together with an FPGA. Application schematics and FPGA code are provided for use with existing reference designs. An update to this application note, covering a raw RGB to 24-bit RGB FPGA converter, is planned.
