Image acquisition platform based on S3C2410 with ARM-Linux as operating system

In the study of remote digital image processing and transmission, image processing must often be performed at the remote site. This paper proposes an embedded digital image processing system that integrates image acquisition, processing, and display. The target image is acquired through an image acquisition device and processed with the relevant algorithms; thanks to modular programming, the original image and the processing results are displayed independently, and the image processing module can be replaced to verify and analyze different digital image processing algorithms. The system established in this article can greatly improve the efficiency of remote digital image processing, and its network capability can be used for remote transmission. This article only briefly analyzes the image acquisition and processing functions of the system.

1 System composition

The hardware is built around the S3C2410. The S3C2410 integrates a USB host controller, so it can act as a USB host without an external USB controller chip. A video capture camera with a USB port is connected to the USB host controller, and image capture is performed through this camera. The processed image can be displayed on a 320×240 LCD screen, and Flash memory is used to store the system software and the results of image acquisition and processing. Figure 1 shows the block diagram of the system's hardware, and Figure 2 shows the system's software hierarchy.

The software can be divided into three parts: the boot loader, the operating system, and the image acquisition and processing program.

The boot loader initializes the hardware devices and establishes a map of the memory space, bringing the system's software and hardware environment to a suitable state and preparing the correct environment for eventually calling the operating system kernel or user applications. In this application, ViVi is used as the boot loader to start the operating system.

This system uses ARM-Linux as the operating system. For the specific application of this system, the kernel needs to be configured to support Video For Linux and the USB OV511 camera, and some unnecessary modules are removed to reduce the size of the kernel. The operating system kernel is downloaded to the Flash memory through the boot loader ViVi, and ViVi itself is downloaded to the Flash memory through JTAG.

The image acquisition, processing, and display program runs on the ARM-Linux operating system. The image acquisition and image display modules are closely tied to the operating system platform, while the image processing algorithms are platform-independent.

2 Image acquisition

In this design, a USB camera is used to collect images. The video interface in the Linux kernel is Video For Linux (V4L). The V4L standard defines a set of interfaces that the kernel, drivers, and applications can follow so that video devices work correctly. V4L currently covers video and audio stream capture and processing, and USB cameras fall within the scope it supports.

In ARM-Linux, the camera device is mapped to /dev/v4l/video0, so the device only needs to be operated according to the procedure specified by the V4L standard. The program code for the key steps follows.

Step 1: Open the acquisition device

Fd_video = open("/dev/v4l/video0", O_RDWR);

Step 2: Query device properties

rc = ioctl(Fd_video, VIDIOCGCAP, &vc); The relevant attributes of the device are queried through the ioctl interface provided by the device driver, and the attributes are stored in the variable vc of type struct video_capability, one of the structure types Linux defines for describing video devices.
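As a concrete illustration, the following sketch combines steps 1 and 2 under the legacy V4L1 interface of that era (the header name, the error handling, and the open_and_query wrapper are assumptions for illustration, not the article's original code):

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/videodev.h>              /* legacy V4L1 header */

/* Open the capture device and query its capabilities (V4L1 sketch). */
int open_and_query(const char *dev, struct video_capability *vc)
{
    int fd = open(dev, O_RDWR);          /* step 1: open the device */
    if (fd < 0) { perror("open"); return -1; }
    if (ioctl(fd, VIDIOCGCAP, vc) < 0) { /* step 2: query attributes */
        perror("VIDIOCGCAP");
        close(fd);
        return -1;
    }
    printf("device %s: up to %dx%d\n", vc->name, vc->maxwidth, vc->maxheight);
    return fd;
}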

Step 3: Query and set the format of the image to be collected, including its size, color depth, and data representation format. In Linux, this information is stored in a variable of type struct video_picture. In this design, the global variable vp is used to save the queried information.

ret = ioctl(Fd_video, VIDIOCGPICT, &vp); // image-related information is stored in the vp variable

The image type can be set by an ioctl call with VIDIOCSPICT as the parameter. For example, the system supports the following image types: 24-bit RGB888 color, 16-bit RGB565 color, and 256-level grayscale. To set the type, first assign the member vp.palette one of VIDEO_PALETTE_RGB24, VIDEO_PALETTE_RGB565, or VIDEO_PALETTE_GREY, and then call ioctl(Fd_video, VIDIOCSPICT, &vp). Other parameters such as resolution can be set in a similar way, so the details are not repeated here.
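A minimal sketch of this read-modify-write pattern for selecting 16-bit RGB565, assuming the V4L1 headers shown above (set_rgb565 is a hypothetical helper name):

/* Select 16-bit RGB565 as the capture format (V4L1 sketch). */
int set_rgb565(int fd)
{
    struct video_picture vp;
    if (ioctl(fd, VIDIOCGPICT, &vp) < 0)   /* read current settings */
        return -1;
    vp.palette = VIDEO_PALETTE_RGB565;     /* data representation format */
    vp.depth   = 16;                       /* color depth in bits */
    return ioctl(fd, VIDIOCSPICT, &vp);    /* write the settings back */
}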

Step 4: Establish a memory mapping that maps the data buffer of the video device into the user process address space. This is much faster than reading the data with the read() function. In this design, a pointer variable memoryMap of type unsigned char * is defined to hold the mapped address of the data buffer.

memoryMap = (unsigned char *)mmap(0, memoryBuffer.size, PROT_READ | PROT_WRITE, MAP_SHARED, Fd_video, 0); // establish the data buffer mapping
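In V4L1 the buffer size is normally obtained beforehand with the VIDIOCGMBUF command, which fills a struct video_mbuf with the total size and the per-frame offsets; the article's memoryBuffer presumably holds this result. A sketch under that assumption:

#include <sys/mman.h>

/* Query the capture buffer layout and map it into user space (V4L1 sketch). */
unsigned char *map_capture_buffer(int fd, struct video_mbuf *mb)
{
    if (ioctl(fd, VIDIOCGMBUF, mb) < 0)    /* total size + frame offsets */
        return NULL;
    void *p = mmap(0, mb->size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    return (p == MAP_FAILED) ? NULL : (unsigned char *)p;
}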

Step 5: Read the data. At this point the image data can be collected. As soon as one image has been collected, acquisition of the next image can be triggered, and image processing can be performed during that acquisition, so that the acquisition device is used to the maximum extent and system efficiency improves. The camera driver provides this double-buffering mechanism. Sending the VIDIOCMCAPTURE control command to the camera triggers the collection of image data into the specified buffer; this does not block the current process, so the program executes the next instruction instead of waiting for the acquisition to complete. Sending the VIDIOCSYNC control command blocks the current process until the specified buffer is filled with data. The processing flow is as follows:

VIDIOCMCAPTURE buffer 0     // trigger buffer 0 to start acquisition
while (1)
{
    VIDIOCMCAPTURE buffer 1 // trigger buffer 1 to start acquisition
    VIDIOCSYNC buffer 0     // wait for the data in buffer 0

    // process buffer 0     // process the data obtained in buffer 0

    VIDIOCMCAPTURE buffer 0 // trigger buffer 0 to start acquisition
    VIDIOCSYNC buffer 1     // wait for the data in buffer 1

    // process buffer 1     // process the data obtained in buffer 1
}
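The flow above might be rendered in C roughly as follows, assuming a 320×240 RGB565 capture and a hypothetical process_frame() handler (V4L1 passes a struct video_mmap to VIDIOCMCAPTURE and a frame index to VIDIOCSYNC):

/* Double-buffered capture loop: acquisition of one frame overlaps
 * the processing of the other (V4L1 sketch). */
void capture_loop(int fd, unsigned char *map, struct video_mbuf *mb)
{
    struct video_mmap vm;
    int frame = 0;

    vm.width  = 320;
    vm.height = 240;
    vm.format = VIDEO_PALETTE_RGB565;

    vm.frame = 0;
    ioctl(fd, VIDIOCMCAPTURE, &vm);              /* trigger buffer 0 */
    for (;;) {
        vm.frame = 1 - frame;
        ioctl(fd, VIDIOCMCAPTURE, &vm);          /* trigger the other buffer */
        ioctl(fd, VIDIOCSYNC, &frame);           /* block until this one is full */
        process_frame(map + mb->offsets[frame]); /* hypothetical handler */
        frame = 1 - frame;                       /* swap buffer roles */
    }
}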

3 Image processing

Image processing is a complex technology involving image preprocessing, image analysis, image understanding, and many other aspects. It is closely related to many other disciplines; in particular, recent developments in pattern recognition, artificial intelligence, fractal theory, and wavelet analysis have provided a solid theoretical basis and new analytical tools for image processing research. The processing procedures and algorithms differ for different applications, but the image data must always be obtained first, after which the procedures and related algorithms are selected as needed. This article does not introduce the image processing flow and algorithms in detail; only the results of the image processing algorithms implemented on this platform are given. Figure 3 is a color image collected by the camera, and Figure 4 is the result after grayscale conversion, binarization, edge detection, thinning, and connected-component measurement.
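As an illustration of the first two stages of that chain (grayscale conversion and binarization), a minimal sketch operating on an RGB565 frame is given below; the weighting constants are the usual luminance coefficients, and the threshold value is an assumption:

/* Convert an RGB565 image to grayscale and binarize it (illustrative sketch). */
void gray_and_binarize(const unsigned short *rgb565, unsigned char *out,
                       int npix, unsigned char threshold)
{
    int i;
    for (i = 0; i < npix; i++) {
        unsigned short p = rgb565[i];
        unsigned r = ((p >> 11) & 0x1F) << 3;   /* expand 5 bits to 8 */
        unsigned g = ((p >>  5) & 0x3F) << 2;   /* expand 6 bits to 8 */
        unsigned b = ( p        & 0x1F) << 3;   /* expand 5 bits to 8 */
        unsigned char gray =
            (unsigned char)((299 * r + 587 * g + 114 * b) / 1000);
        out[i] = (gray >= threshold) ? 255 : 0; /* binarization */
    }
}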

4 Image display

This design uses a 240×320 16-bit LCD screen, which can be controlled directly by operating the relevant registers inside the S3C2410. Since ARM-Linux is used as the operating system, the image display can be completed by operating the Linux framebuffer device directly. The framebuffer is the interface Linux provides for display devices: it abstracts the video memory and allows upper-level applications to read and write the display buffer directly in graphics mode. The operation is abstract and unified; the user does not need to care about details such as the physical location of the video memory or the paging mechanism, which are all handled by the framebuffer device driver.

In the application, the framebuffer device must first be opened. In a Linux system, framebuffer devices are generally mapped to /dev/fb*, and there can be several of them. Then the ioctl interface is called to obtain device information, mainly the resolution, color depth, and number of bytes per line of the current framebuffer device. The key step is to map the screen buffer into user space. The framebuffer device can be seen as an image of the video memory, but all device drivers in Linux work in kernel mode and therefore cannot be accessed directly from the current process space. Instead, the mapping mechanism maps the starting address of the video memory into the address space of the current process, so that display can be performed quickly and conveniently. The mapping is established as follows:

pfb = mmap(0, fb_finfo.smem_len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0); // map with read, write, and share

The position of the point with coordinates (x, y) on the screen in the video memory is:

pfb + x * (fb_vinfo.bits_per_pixel >> 3) + y * fb_finfo.line_length. Assigning the corresponding color value to this position displays the point on the screen. Note that when the color depth (fb_vinfo.bits_per_pixel) differs, the format of the color value also differs.

After the processed image data, or the original collected data, has been converted into the color format of the framebuffer device (RGB565, RGB888, etc.), copying the data into memory starting at address pfb realizes the image display.
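Putting the framebuffer steps together, the following sketch opens the device, queries its parameters, maps the video memory, and writes a single RGB565 pixel at (x, y) using the offset formula above (it assumes the device node /dev/fb0 and a 16-bit display mode):

#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/fb.h>

/* Plot one RGB565 pixel at (x, y) on the framebuffer (sketch). */
int draw_pixel(int x, int y, unsigned short color)
{
    struct fb_var_screeninfo vinfo;              /* resolution, color depth */
    struct fb_fix_screeninfo finfo;              /* smem_len, line_length */
    unsigned char *pfb;
    int fd = open("/dev/fb0", O_RDWR);

    if (fd < 0)
        return -1;
    if (ioctl(fd, FBIOGET_VSCREENINFO, &vinfo) < 0 ||
        ioctl(fd, FBIOGET_FSCREENINFO, &finfo) < 0)
        return -1;
    pfb = mmap(0, finfo.smem_len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (pfb == MAP_FAILED)
        return -1;
    /* offset = x * bytes per pixel + y * bytes per line */
    *(unsigned short *)(pfb + x * (vinfo.bits_per_pixel >> 3)
                            + y * finfo.line_length) = color;
    return 0;
}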

5 Concluding remarks

This paper designs a platform for image acquisition, processing, and display that uses the ARM-core processor S3C2410 as the computing and control core and ARM-Linux as the operating system, intended for verifying image processing algorithms. Debugging and actual use show that the system works well and operates efficiently; it has been used in a demonstration of a battlefield reconnaissance system. The system also features low power consumption, small size, and strong networking capability, so it can easily be extended to reconnaissance, security, and video capture applications.

The innovation of this article: by using the ARM platform combined with an embedded operating system for image acquisition and processing, the main functions of an imaging system are realized in a smaller system, with good results and good application prospects.
