
Surveillance systems rely heavily on the capabilities of embedded vision systems to accelerate their deployment across a wide range of markets and applications. These monitoring systems serve many purposes, including event and traffic monitoring, safety and security applications, ISR, and business intelligence. This diversity of uses brings several major challenges that system designers must address in their solutions:
Multi-camera vision - the ability to connect multiple sensors of the same type or of heterogeneous types.
Computer vision techniques - the ability to develop using high-level libraries and frameworks such as OpenCV and OpenVX.
Machine learning techniques - the ability to implement machine learning inference engines using frameworks such as Caffe.
Increasing resolutions and frame rates - which increase the data processing required for each image frame.
Depending on its purpose, a monitoring system implements the appropriate algorithms, such as optical flow to detect motion within the image. Stereo vision provides depth perception within the image, and machine learning techniques are used to detect and classify objects in the image.
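The idea behind motion detection can be sketched in a few lines of plain Python. This is an illustrative frame-differencing example, far simpler than optical flow and not reVISION code; frames are 2D lists of grayscale values, and all names and thresholds are made up for the sketch.

```python
# Minimal sketch of motion detection by frame differencing (a simpler
# alternative to optical flow). Frames are 2D lists of grayscale pixel
# values (0-255); names and thresholds are illustrative only.

def abs_diff(frame_a, frame_b):
    """Per-pixel absolute difference between two frames."""
    return [[abs(a - b) for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(frame_a, frame_b)]

def motion_detected(frame_a, frame_b, threshold=30, min_pixels=2):
    """Flag motion when enough pixels change by more than `threshold`."""
    diff = abs_diff(frame_a, frame_b)
    changed = sum(1 for row in diff for p in row if p > threshold)
    return changed >= min_pixels

# Example: a bright object moves between two otherwise static frames.
prev_frame = [[10, 10, 10],
              [10, 200, 10],
              [10, 10, 10]]
next_frame = [[10, 10, 200],
              [10, 10, 10],
              [10, 10, 10]]
print(motion_detected(prev_frame, next_frame))  # True: two pixels changed
```

In a real pipeline the differencing stage would run in the PL at line rate; only the decision logic would need the processor.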
Figure 1 - Example applications (top: face detection and classification; bottom: optical flow)
Heterogeneous system-on-chip devices such as the All Programmable Zynq®-7000 SoC and Zynq® UltraScale+™ MPSoC are increasingly used in the development of monitoring systems. These devices combine programmable logic (PL) fabric with a high-performance ARM® processing system (PS).
The tight coupling of PL and PS makes the resulting system more responsive, more reconfigurable, and more power efficient than traditional solutions. Traditional CPU/GPU-based SoCs must use system memory to transfer images from one processing stage to the next. This reduces determinism and increases power consumption and system response latency, because multiple resources contend for the same memory, creating a bottleneck in the processing algorithm. The bottleneck worsens as frame rates and image resolutions increase.
Implementing the solution on a Zynq-7000 or Zynq UltraScale+ MPSoC device removes this bottleneck. These devices let designers implement the image processing pipeline in the device's PL, creating a truly parallel image pipeline in which the output of one stage is passed directly to the input of the next. This yields a deterministic response time, reduced latency, and a more power-efficient solution.
Implementing the image processing pipeline in PL also provides broader interfacing capability than traditional CPU/GPU SoC solutions, which offer only fixed interfaces. The flexible nature of the PL I/O structure allows any-to-any connectivity and supports industry-standard interfaces such as MIPI, Camera Link, and HDMI. This flexibility also enables custom legacy interfaces and allows upgrades to support the latest interface standards. With PL, the system can also connect to multiple cameras in parallel.
Most important of all, however, is the ability to implement the application algorithms without having to rewrite all of the high-level algorithms in a hardware description language such as Verilog or VHDL. This is where the reVISION™ stack comes in.
Figure 2 - Comparison of a traditional CPU/GPU solution with the Zynq-7000/Zynq UltraScale+ MPSoC
The reVISION Stack
The reVISION stack enables developers to implement computer vision and machine learning techniques using high-level frameworks and libraries, targeting the Zynq-7000 and Zynq UltraScale+ MPSoC. To do this, reVISION combines multiple resources supporting platform, application, and algorithm development. The stack is divided into three distinct levels:
Platform development - the bottom layer of the stack and the foundation on which the remaining layers are built. This layer provides the platform definition for the SDSoC™ tool.
Algorithm development - the middle layer of the stack, which supports implementation of the required algorithms. This layer helps accelerate image processing and machine learning inference engines by moving them into the programmable logic.
Application development - the highest layer of the stack, which provides support for industry-standard frameworks. This layer is used to develop applications that exploit the platform and algorithm development layers.
The algorithm and application layers of the stack support both traditional image processing flows and machine learning flows. In the algorithm layer, the OpenCV library is supported for developing image processing algorithms; this includes the ability to accelerate a large number of OpenCV functions (including a subset of the OpenVX core functions) in programmable logic. To support machine learning, the algorithm development layer provides several predefined hardware functions that can be placed in the PL to implement a machine learning inference engine. The application development layer then accesses these image processing algorithms and machine learning inference engines to create the final application, with support for high-level frameworks such as OpenVX and Caffe.
Figure 3 - The reVISION stack
The reVISION stack provides all the elements necessary to implement the algorithms required by a high-performance monitoring system.
Accelerating OpenCV in reVISION
One of the most significant advantages of the algorithm development layer is its ability to accelerate a wide range of OpenCV functions. The OpenCV functions that can be accelerated in this layer fall into four high-level categories.
Computation - includes functions such as the absolute difference between two frames, pixel-wise operations (addition, subtraction, and multiplication), and gradient and integral operations.
Input processing - supports bit-depth conversion, channel operations, histogram equalization, remapping, and resizing.
Filtering - supports multiple filters, including Sobel, custom convolution, and Gaussian filters.
Others - provides a range of functions, including Canny/FAST/Harris detection, thresholding, and SVM and HoG classifiers.
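As an illustration of the filtering category, the sketch below applies a 3x3 Sobel kernel by custom convolution in pure Python. This mirrors the computation an accelerated PL kernel would perform but is illustrative only, not the accelerated library implementation; the image and function names are made up for the example.

```python
# Sketch of the "filtering" category: a 3x3 convolution applied in
# software, analogous to what an accelerated PL kernel computes.
# Pure Python and illustrative only.

SOBEL_X = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]

def convolve3x3(image, kernel):
    """Apply a 3x3 kernel to the interior pixels of a 2D image."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = 0
            for ky in range(3):
                for kx in range(3):
                    acc += kernel[ky][kx] * image[y + ky - 1][x + kx - 1]
            out[y][x] = acc
    return out

# A vertical edge: dark on the left, bright on the right.
img = [[0, 0, 255, 255]] * 4
edges = convolve3x3(img, SOBEL_X)
print(edges[1][1], edges[1][2])  # → 1020 1020 (strong edge response)
```

In the PL, each output pixel's multiply-accumulates run in parallel every clock cycle, which is why moving such filters into logic gives the performance gains described above.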
These functions form the core of the OpenVX subset and integrate closely with the application development layer's OpenVX support. Development teams can use them to build algorithm pipelines in programmable logic; implementing these functions in logic can significantly increase the performance of the algorithm implementation.
Machine Learning in reVISION
reVISION provides integration with Caffe to enable machine learning inference engines. This integration takes place at both the algorithm development layer and the application development layer. The Caffe framework gives developers a wide range of libraries, models, and pre-trained weights within a C++ library, along with Python™ and MATLAB® bindings. The framework lets users create and train networks to perform the required operations without starting from scratch. To ease model reuse, Caffe users can share models through the model zoo, which provides multiple network models that users can implement and update for their specific tasks. The network and its weights are defined in a prototxt file, which is used to define the inference engine when it is deployed in the machine learning environment.
reVISION's Caffe integration makes implementing a machine learning inference engine very straightforward: the developer supplies the prototxt file, and the framework handles the rest. The prototxt file is then used to configure the processing system and the hardware-optimized libraries in the programmable logic. The programmable logic implements the inference engine and includes functions such as Conv, ReLU, and Pooling.
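For readers unfamiliar with the format, a Caffe prototxt file describes the network layer by layer in protobuf text form. A minimal, LeNet-style fragment covering the Conv, ReLU, and Pooling functions mentioned above might look like this (the layer names and parameters are illustrative, not a reVISION-specific model):

```protobuf
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  convolution_param {
    num_output: 20
    kernel_size: 5
    stride: 1
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "conv1"
  top: "conv1"
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
```

Each `layer` block names its inputs (`bottom`) and outputs (`top`), which is the information the framework uses to map the network onto the hardware functions in the PL.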
Figure 4 - Caffe flow integration
The numerical representation used in a machine learning inference engine also plays an important role in its performance. Machine learning increasingly uses more efficient, reduced-precision fixed-point number systems such as INT8. Compared with the traditional 32-bit floating-point (FP32) approach, reduced-precision fixed-point representation causes little loss of accuracy. Fixed-point arithmetic is also easier to implement than floating-point, so switching to INT8 yields a more efficient, faster solution. Programmable logic is ideally suited to fixed-point arithmetic, and reVISION can use INT8 representation in the PL. Using INT8 also enables the dedicated DSP blocks in the PL: the architecture of these DSP blocks allows two INT8 multiply-accumulate operations to be performed simultaneously when they share the same kernel weights. This not only delivers a high-performance implementation but also reduces power consumption. The flexible nature of programmable logic also makes it easy to implement even lower-precision fixed-point representations.
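The accuracy claim can be made concrete with a small sketch of symmetric INT8 quantization: FP32 weights and activations are mapped to integers in [-127, 127], the multiply-accumulates run in integer arithmetic (as the DSP blocks would), and a single scale factor restores the result. The weight values here are invented for illustration; this is not reVISION's quantization scheme.

```python
# Sketch of symmetric INT8 quantization: FP32 values are mapped to
# integers in [-127, 127], dot products run in integer arithmetic, and
# one scale factor per tensor restores the result. Illustrative only.

def quantize(values, num_bits=8):
    """Return (integer values, scale) for a symmetric fixed-point mapping."""
    qmax = 2 ** (num_bits - 1) - 1          # 127 for INT8
    scale = max(abs(v) for v in values) / qmax
    return [round(v / scale) for v in values], scale

weights = [0.52, -0.31, 0.08, -0.77]
activations = [0.25, 0.50, -0.125, 1.0]

w_q, w_scale = quantize(weights)
a_q, a_scale = quantize(activations)

# Integer multiply-accumulate (the part a DSP block would perform),
# followed by a single floating-point rescale at the end.
acc = sum(w * a for w, a in zip(w_q, a_q))
result = acc * w_scale * a_scale

exact = sum(w * a for w, a in zip(weights, activations))
print(f"exact={exact:.4f} int8={result:.4f}")  # error well under 1%
```

The integer accumulation is exact; the only error comes from rounding during quantization, which is why reduced precision costs so little accuracy in practice.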
Conclusion
reVISION enables developers to exploit the capabilities of Zynq-7000 and Zynq UltraScale+ MPSoC devices, allowing even those who are not programmable-logic experts to implement their algorithms. These algorithms and machine learning applications can be implemented through high-level, industry-standard frameworks, reducing system development time. The result is systems that are more responsive, more reconfigurable, and more power efficient.
Author: Mr. Simon Feng
July 14, 2023
July 06, 2023