SDB:Install OpenVINO

OpenVINO™ is an open-source toolkit for optimizing and deploying AI inference.
  • Boost deep learning performance in computer vision, automatic speech recognition, natural language processing, and other common tasks
  • Use models trained with popular frameworks like TensorFlow, PyTorch, and more
  • Reduce resource demands and efficiently deploy on a range of Intel® platforms from edge to cloud

This open-source version includes several components: namely Model Optimizer, OpenVINO™ Runtime, Post-Training Optimization Tool, as well as CPU, GPU, GNA, multi-device and heterogeneous plugins to accelerate deep learning inference on Intel® CPUs and Intel® Processor Graphics. It supports pre-trained models from Open Model Zoo, along with 100+ open-source and public models in popular formats such as TensorFlow, ONNX, PaddlePaddle, MXNet, Caffe, and Kaldi.


Requirements

CPU Processor Requirements

Systems based on the Intel® 64 architectures below are supported as both host and target platforms.

  • 6th to 13th generation Intel® Core™ processors
  • 1st to 4th generation Intel® Xeon® Scalable processors
  • Intel® Pentium® processor N4200/5, N3350/5, N3450/5 with Intel® HD Graphics
  • Intel Atom® processor with Intel® Streaming SIMD Extensions 4.2 (Intel® SSE4.2)

A newer version of the operating system kernel may be required for 10th and 11th generation Intel Core processors, 11th generation Intel Core S-Series processors, 12th and 13th generation Intel Core processors, or 4th generation Intel Xeon Scalable processors in order to support CPU, GPU, Intel GNA, or hybrid-core CPU capabilities.
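
To check the kernel version on your system (and, for the Intel Atom® requirement above, confirm that the CPU reports SSE4.2 support), you can run:

$ uname -r
$ grep -m1 -o 'sse4_2' /proc/cpuinfo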

Intel® Gaussian & Neural Accelerator (Intel® GNA)

Supported Hardware

  • Intel® GNA

GPU Processors Supported

  • Intel® HD Graphics
  • Intel® UHD Graphics
  • Intel® Iris® Pro Graphics
  • Intel® Iris® Xe Graphics
  • Intel® Iris® Xe MAX Graphics

Discrete Graphics Supported

  • Intel® Data Center GPU Flex Series (formerly code-named Arctic Sound)
  • Intel® Arc™ GPU (formerly code-named DG2)
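
To identify which Intel graphics device is present in your machine, you can inspect the PCI bus (a quick check, not part of the official requirements):

$ lspci | grep -Ei 'vga|display'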

Additional Software Requirements

  • GNU Compiler Collection (GCC)*
  • CMake
  • Python* 3.7-3.11
  • OpenCV

Package Requirements

Important note about Python: Python 3.11 was used as the base for this document, so to guarantee full functionality I suggest using the same version. This is not mandatory, however.
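
You can check which Python version your system provides with:

$ python3 --version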

Install CMake*, pkg-config, and the GNU* development tools to build the samples. Although CMake and pkg-config are not required by the OpenVINO tools and toolkits themselves, many examples are provided as CMake projects and require CMake to build them. In some cases, pkg-config is necessary to find the libraries needed to complete the application build.

Intel compilers leverage existing GNU build toolchains to provide a complete C/C++ development environment. If your Linux distribution does not include the full set of GNU development tools, you will need to install them. To install CMake, pkg-config, the OpenCL development headers, and the GNU development tools on your openSUSE system, open a terminal session and enter the following commands:

$ sudo zypper update
$ sudo zypper --non-interactive install cmake pkg-config ade-devel \
                                        patterns-devel-C-C++-devel_C_C++ \
                                        opencl-headers opencl-cpp-headers \
                                        ocl-icd-devel opencv-devel \
                                        pugixml-devel patchelf \
                                        python311-devel ccache nlohmann_json-devel \
                                        ninja scons git git-lfs fdupes \
                                        rpm-build ShellCheck tbb-devel libva-devel \
                                        snappy-devel zlib-devel \
                                        gflags-devel-static protobuf-devel

Verify the installation by displaying the installation location with this command:

$ which cmake pkg-config make gcc g++

One or more of these locations will be displayed:

/usr/bin/cmake
/usr/bin/pkg-config
/usr/bin/make
/usr/bin/gcc
/usr/bin/g++


Another option: the install_build_dependencies.sh script, found in the OpenVINO source code on GitHub, installs all packages necessary for building and compiling OpenVINO.

Compiling from source (.rpm coming soon).

We will begin by compiling OpenVINO from source code obtained directly from GitHub (RPM packages will be available soon). As an openSUSE member and Intel Edge Innovator, I have personally taken on the packaging and publication of RPM packages for the openSUSE Linux platform, so it will soon be possible to perform the installation solely through the zypper command.

Download: GitHub instructions

Below are the commands to download version 2024.0.0 (the latest release at the time of writing):

$ git clone -b 2024.0.0 https://github.com/openvinotoolkit/openvino.git
$ cd openvino && git submodule update --init --recursive

If you chose the alternative option, install the package dependencies with the script:

$ sudo ./install_build_dependencies.sh 

As before, verify the installation by displaying the installation locations with this command:

$ which cmake pkg-config make gcc g++

One or more of these locations will be displayed:

/usr/bin/cmake
/usr/bin/pkg-config
/usr/bin/make
/usr/bin/gcc
/usr/bin/g++

Install the Python dependencies for building the Python wheels:

$ python3 -m pip install -U pip 
$ python3 -m pip install -r ./src/bindings/python/wheel/requirements-dev.txt
$ python3 -m pip install -r ./thirdparty/onnx/onnx/requirements-dev.txt
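
Optionally, you can verify that the installed Python packages have mutually compatible dependencies with pip's built-in checker:

$ python3 -m pip check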

Now we will compile and install OpenVINO with the commands below:

$ mkdir build && mkdir openvino_dist && cd build

$ cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=../openvino_dist  \
        -DBUILD_SHARED_LIBS=ON -DENABLE_OV_ONNX_FRONTEND=ON \
        -DENABLE_OV_PADDLE_FRONTEND=ON -DENABLE_OV_PYTORCH_FRONTEND=ON \
        -DENABLE_OV_IR_FRONTEND=ON -DENABLE_INTEL_GNA=OFF \
        -DENABLE_OV_TF_FRONTEND=ON -DENABLE_OV_TF_LITE_FRONTEND=ON \
        -DENABLE_PYTHON=ON -DENABLE_WHEEL=ON \
        -DPYTHON_EXECUTABLE=`which python3.11` \
        -DPYTHON_LIBRARY=/usr/lib64/libpython3.11.so \
        -DPYTHON_INCLUDE_DIR=/usr/include/python3.11 ..
$ make --jobs=$(nproc --all)
$ make install

Install the built Python wheels for the OpenVINO runtime and the OpenVINO development tools:

$ python3 -m pip install openvino-dev --find-links ../openvino_dist/tools
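
As a quick sanity check (a suggestion of mine, not an official step), you can import the runtime and list the devices OpenVINO detects:

$ python3 -c "from openvino.runtime import Core; print(Core().available_devices)"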

Quick test of the built OpenVINO runtime

OpenVINO environment

Now, with OpenVINO compiled and installed in the distribution folder, we must initialize the OpenVINO development environment with the commands below in order to test it:

$ cd ../openvino_dist/
$ source ./setupvars.sh
[setupvars.sh] OpenVINO environment initialized
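
To confirm that the environment is active, you can print the INTEL_OPENVINO_DIR variable that setupvars.sh exports:

$ echo $INTEL_OPENVINO_DIR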

Add the Open Model Zoo (omz) tools and the OpenVINO build output to your environment variables:

export PATH=$PATH:/home/cabelo/.local/bin
export PYTHONPATH=$PYTHONPATH:<openvino_repo>/openvino/bin/intel64/Release/python/
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:<openvino_repo>/openvino/bin/intel64/Release/
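
These settings only last for the current shell session. To make them persistent, you can append the same lines to your shell profile, adjusting the paths to your own setup, for example:

$ echo 'export PATH=$PATH:/home/cabelo/.local/bin' >> ~/.bashrc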

Now create a directory for the models and install the Model Optimizer dependencies:

$ mkdir ~/ov_models
$ pip3 install onnxruntime protobuf==3.19.0 openvino-dev[pytorch]

Important notes: in this tutorial we used the versions onnx==1.15.0, onnxruntime==1.16.3, and protobuf==3.19.0 (or 3.20.2); using exactly the same versions is not mandatory.

Download the ResNet-50 PyTorch model with omz_downloader:

$ omz_downloader --name resnet-50-pytorch -o ~/ov_models/
################|| Downloading resnet-50-pytorch ||################

========== Downloading /home/cabelo/ov_models/public/resnet-50-pytorch/resnet50-19c8e357.pth
... 100%, 100100 KB, 383 KB/s, 261 seconds passed

Now we will convert the ResNet-50 PyTorch model into OpenVINO IR (FP16 and FP32) with the omz_converter utility:

$ omz_converter --name resnet-50-pytorch -o ~/ov_models/ -d ~/ov_models/
========== Converting resnet-50-pytorch to ONNX
Conversion to ONNX command: /usr/bin/python3 -- /home/cabelo/.local/lib/python3.11/site-packages/openvino/model_zoo/internal_scripts/pytorch_to_onnx.py --model-name=resnet50 --weights=/home/cabelo/ov_models/public/resnet-50-pytorch/resnet50-19c8e357.pth --import-module=torchvision.models --input-shape=1,3,224,224 --output-file=/home/cabelo/ov_models/public/resnet-50-pytorch/resnet-v1-50.onnx --input-names=data --output-names=prob

ONNX check passed successfully.

========== Converting resnet-50-pytorch to IR (FP16)
Conversion command: /usr/bin/python3 -- /home/cabelo/.local/bin/mo --framework=onnx --output_dir=/home/cabelo/ov_models/public/resnet-50-pytorch/FP16 --model_name=resnet-50-pytorch --input=data '--mean_values=data[123.675,116.28,103.53]' '--scale_values=data[58.395,57.12,57.375]' --reverse_input_channels --output=prob --input_model=/home/cabelo/ov_models/public/resnet-50-pytorch/resnet-v1-50.onnx '--layout=data(NCHW)' '--input_shape=[1, 3, 224, 224]' --compress_to_fp16=True

Find more information about API v2.0 and IR v11 at https://docs.openvino.ai/2023.0/openvino_2_0_transition_guide.html
[ SUCCESS ] Generated IR version 11 model.
[ SUCCESS ] XML file: /home/cabelo/ov_models/public/resnet-50-pytorch/FP16/resnet-50-pytorch.xml
[ SUCCESS ] BIN file: /home/cabelo/ov_models/public/resnet-50-pytorch/FP16/resnet-50-pytorch.bin

========== Converting resnet-50-pytorch to IR (FP32)
Conversion command: /usr/bin/python3 -- /home/cabelo/.local/bin/mo --framework=onnx --output_dir=/home/cabelo/ov_models/public/resnet-50-pytorch/FP32 --model_name=resnet-50-pytorch --input=data '--mean_values=data[123.675,116.28,103.53]' '--scale_values=data[58.395,57.12,57.375]' --reverse_input_channels --output=prob --input_model=/home/cabelo/ov_models/public/resnet-50-pytorch/resnet-v1-50.onnx '--layout=data(NCHW)' '--input_shape=[1, 3, 224, 224]' --compress_to_fp16=True '--layout=data(NCHW)' '--input_shape=[1, 3, 224, 224]' --compress_to_fp16=False

Find more information about API v2.0 and IR v11 at https://docs.openvino.ai/2023.0/openvino_2_0_transition_guide.html
[ SUCCESS ] Generated IR version 11 model.
[ SUCCESS ] XML file: /home/cabelo/ov_models/public/resnet-50-pytorch/FP32/resnet-50-pytorch.xml
[ SUCCESS ] BIN file: /home/cabelo/ov_models/public/resnet-50-pytorch/FP32/resnet-50-pytorch.bin
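
You can confirm that the XML and BIN files of the FP32 IR model were generated:

$ ls ~/ov_models/public/resnet-50-pytorch/FP32/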

Run the benchmark app with the ResNet-50 FP32 IR model on the CPU:

$ benchmark_app -m ~/ov_models/public/resnet-50-pytorch/FP32/resnet-50-pytorch.xml -d CPU
[Step 1/11] Parsing and validating input arguments
[ INFO ] Parsing input parameters
[Step 2/11] Loading OpenVINO Runtime
[ INFO ] OpenVINO:
[ INFO ] Build ................................. 2023.2.0-13089-cfd42bd2cb0-HEAD
[ INFO ] 
[ INFO ] Device info:
[ INFO ] CPU
[ INFO ] Build ................................. 2023.2.0-13089-cfd42bd2cb0-HEAD
[ INFO ] 
[ INFO ] 
[Step 3/11] Setting device configuration
[ WARNING ] Performance hint was not explicitly specified in command line. Device(CPU) performance hint will be set to PerformanceMode.THROUGHPUT.
[Step 4/11] Reading model files
[ INFO ] Loading model files
[ INFO ] Read model took 10.81 ms
[ INFO ] Original model I/O parameters:
[ INFO ] Model inputs:
[ INFO ]     data (node: data) : f32 / [N,C,H,W] / [1,3,224,224]
[ INFO ] Model outputs:
[ INFO ]     prob (node: prob) : f32 / [...] / [1,1000]
[Step 5/11] Resizing model to match image sizes and given batch
[ INFO ] Model batch size: 1
[Step 6/11] Configuring input of the model
[ INFO ] Model inputs:
[ INFO ]     data (node: data) : u8 / [N,C,H,W] / [1,3,224,224]
[ INFO ] Model outputs:
[ INFO ]     prob (node: prob) : f32 / [...] / [1,1000]
[Step 7/11] Loading the model to the device
[ INFO ] Compile model took 178.74 ms
[Step 8/11] Querying optimal runtime parameters
[ INFO ] Model:
[ INFO ]   NETWORK_NAME: main_graph
[ INFO ]   OPTIMAL_NUMBER_OF_INFER_REQUESTS: 4
[ INFO ]   NUM_STREAMS: 4
[ INFO ]   AFFINITY: Affinity.CORE
[ INFO ]   INFERENCE_NUM_THREADS: 8
[ INFO ]   PERF_COUNT: False
[ INFO ]   INFERENCE_PRECISION_HINT: <Type: 'float32'>
[ INFO ]   PERFORMANCE_HINT: PerformanceMode.THROUGHPUT
[ INFO ]   EXECUTION_MODE_HINT: ExecutionMode.PERFORMANCE
[ INFO ]   PERFORMANCE_HINT_NUM_REQUESTS: 0
[ INFO ]   ENABLE_CPU_PINNING: True
[ INFO ]   SCHEDULING_CORE_TYPE: SchedulingCoreType.ANY_CORE
[ INFO ]   ENABLE_HYPER_THREADING: True
[ INFO ]   EXECUTION_DEVICES: ['CPU']
[ INFO ]   CPU_DENORMALS_OPTIMIZATION: False
[ INFO ]   CPU_SPARSE_WEIGHTS_DECOMPRESSION_RATE: 1.0
[Step 9/11] Creating infer requests and preparing input tensors
[ WARNING ] No input files were given for input 'data'!. This input will be filled with random values!
[ INFO ] Fill input 'data' with random values 
[Step 10/11] Measuring performance (Start inference asynchronously, 4 inference requests, limits: 60000 ms duration)
[ INFO ] Benchmarking in inference only mode (inputs filling are not included in measurement loop).
[ INFO ] First inference took 48.68 ms

[Step 11/11] Dumping statistics report
[ INFO ] Execution Devices:['CPU']
[ INFO ] Count:            1172 iterations
[ INFO ] Duration:         60360.89 ms
[ INFO ] Latency:
[ INFO ]    Median:        224.46 ms
[ INFO ]    Average:       205.72 ms
[ INFO ]    Min:           106.17 ms
[ INFO ]    Max:           296.32 ms
[ INFO ] Throughput:   19.42 FPS
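
If your machine has one of the supported Intel GPUs listed above and the GPU drivers are installed, the same model can be benchmarked on the GPU simply by changing the device flag:

$ benchmark_app -m ~/ov_models/public/resnet-50-pytorch/FP32/resnet-50-pytorch.xml -d GPU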

Before running the hello_classification.py test, we must download the AlexNet model with the commands below:

$ cd samples/python/hello_classification/
$ omz_downloader --name alexnet
################|| Downloading alexnet ||################

========== Downloading /opt/intel/openvino_2023.1.0/samples/python/hello_classification/public/alexnet/alexnet.prototxt
... 100%, 3 KB, 18505 KB/s, 0 seconds passed

========== Downloading /opt/intel/openvino_2023.1.0/samples/python/hello_classification/public/alexnet/alexnet.caffemodel
... 100%, 238146 KB, 5134 KB/s, 46 seconds passed

========== Replacing text in /opt/intel/openvino_2023.1.0/samples/python/hello_classification/public/alexnet/alexnet.prototxt


Now we will convert the AlexNet Caffe model into OpenVINO IR format:

$ omz_converter --name alexnet
========== Converting alexnet to IR (FP16)
Conversion command: /usr/bin/python3 -- /home/cabelo/.local/bin/mo --framework=caffe --output_dir=/dados/fontes/openvino/samples/python/hello_classification/public/alexnet/FP16 --model_name=alexnet --input=data '--mean_values=data[104.0,117.0,123.0]' --output=prob --input_model=/dados/fontes/openvino/samples/python/hello_classification/public/alexnet/alexnet.caffemodel --input_proto=/dados/fontes/openvino/samples/python/hello_classification/public/alexnet/alexnet.prototxt '--layout=data(NCHW)' '--input_shape=[1, 3, 227, 227]' --compress_to_fp16=True

Please expect that Model Optimizer conversion might be slow. You are currently using Python protobuf library implementation. 
Check that your protobuf package version is aligned with requirements_caffe.txt.


 For more information please refer to Model Conversion API FAQ, question #80. (https://docs.openvino.ai/2023.0/openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html?question=80#question-80)
Find more information about API v2.0 and IR v11 at https://docs.openvino.ai/2023.0/openvino_2_0_transition_guide.html
[ SUCCESS ] Generated IR version 11 model.
[ SUCCESS ] XML file: /dados/fontes/openvino/samples/python/hello_classification/public/alexnet/FP16/alexnet.xml
[ SUCCESS ] BIN file: /dados/fontes/openvino/samples/python/hello_classification/public/alexnet/FP16/alexnet.bin

========== Converting alexnet to IR (FP32)
Conversion command: /usr/bin/python3 -- /home/cabelo/.local/bin/mo --framework=caffe --output_dir=/dados/fontes/openvino/samples/python/hello_classification/public/alexnet/FP32 --model_name=alexnet --input=data '--mean_values=data[104.0,117.0,123.0]' --output=prob --input_model=/dados/fontes/openvino/samples/python/hello_classification/public/alexnet/alexnet.caffemodel --input_proto=/dados/fontes/openvino/samples/python/hello_classification/public/alexnet/alexnet.prototxt '--layout=data(NCHW)' '--input_shape=[1, 3, 227, 227]' --compress_to_fp16=True '--layout=data(NCHW)' '--input_shape=[1, 3, 227, 227]' --compress_to_fp16=False

Find more information about API v2.0 and IR v11 at https://docs.openvino.ai/2023.0/openvino_2_0_transition_guide.html
[ SUCCESS ] Generated IR version 11 model.
[ SUCCESS ] XML file: /dados/fontes/openvino/samples/python/hello_classification/public/alexnet/FP32/alexnet.xml
[ SUCCESS ] BIN file: /dados/fontes/openvino/samples/python/hello_classification/public/alexnet/FP32/alexnet.bin

If everything is working correctly, run the command below to test the classification example in Python:

$ python3 hello_classification.py public/alexnet/FP32/alexnet.xml /dados/openvino/banana.jpg CPU
[ INFO ] Creating OpenVINO Runtime Core
[ INFO ] Reading the model: public/alexnet/FP32/alexnet.xml
[ INFO ] Loading the model to the plugin
[ INFO ] Starting inference in synchronous mode
[ INFO ] Image path: /dados/openvino/banana.jpg
[ INFO ] Top 10 results: 
[ INFO ] class_id probability
[ INFO ] --------------------
[ INFO ] 954      0.9988611
[ INFO ] 951      0.0003525
[ INFO ] 950      0.0002846
[ INFO ] 666      0.0002556
[ INFO ] 502      0.0000543
[ INFO ] 945      0.0000491
[ INFO ] 659      0.0000155
[ INFO ] 600      0.0000136
[ INFO ] 953      0.0000134
[ INFO ] 940      0.0000102
[ INFO ] 
[ INFO ] This sample is an API example, for any performance measurements please use the dedicated benchmark_app tool
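
For reference, the core of what hello_classification.py does can be reproduced in a few lines of the OpenVINO Python API. The sketch below is only illustrative: the model path matches the one converted above, but the random input tensor is an assumption that stands in for the sample's real image preprocessing.

import numpy as np
from openvino.runtime import Core

core = Core()                                               # create the OpenVINO Runtime core
model = core.read_model("public/alexnet/FP32/alexnet.xml")  # IR generated by omz_converter above
compiled = core.compile_model(model, "CPU")                 # compile the model for the CPU device

# AlexNet expects a 1x3x227x227 tensor; a random array stands in for a real image
data = np.random.rand(1, 3, 227, 227).astype(np.float32)
result = compiled([data])[compiled.output(0)]
print("Top class id:", int(np.argmax(result)))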

This text was written by Intel Edge Innovator and openSUSE member Alessandro de Oliveira Faria, based on Intel tutorials. For more information, see the official page HERE.


See also