https://software.intel.com/en-us/articles/get-started-with-the-openvino-toolkit-and-aws-greengrass
Hardware Accelerated Function-as-a-Service (FaaS) enables cloud developers to deploy inference functionalities on Intel® IoT edge devices with accelerators such as Intel® Processor Graphics, Intel® FPGA, and Intel® Movidius™ Neural Compute Stick. These functions provide a great developer experience and seamless migration of visual analytics from cloud to edge in a secure manner using a containerized environment. Hardware-accelerated FaaS provides the best-in-class performance by accessing optimized deep learning libraries on Intel IoT edge devices with accelerators.
This section describes the implementation of FaaS inference samples (based on Python* 2.7) using Amazon Web Services (AWS) Greengrass* and AWS Lambda* software. AWS Lambda functions (Lambdas) can be created, modified, or updated in the cloud and deployed from cloud to edge using AWS Greengrass. This document covers creating and packaging the Lambda function, configuring and deploying it with AWS Greengrass, and consuming the inference output.
Sample File: greengrass_object_detection_sample_ssd.py
This AWS Greengrass sample detects objects in a video stream and classifies them using single-shot multi-box detection (SSD) networks such as SSD SqueezeNet, SSD MobileNet, and SSD300. The sample publishes detection outputs such as class label, class confidence, and bounding box coordinates to the AWS IoT* Cloud every second.
Note: Python* 2.7 with opencv-python, numpy, and boto3 is required for use with AWS Greengrass. Install the packages with sudo pip2 install so they land in locations accessible by AWS Greengrass. Python* 3.0+ is required for the Intel® Distribution of OpenVINO™ toolkit Model Optimizer.
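A quick way to confirm the packages resolve under the same interpreter that AWS Greengrass will use is a short import check (a minimal sketch; run it with python2, the Greengrass runtime interpreter):

import cv2
import numpy
import boto3

# Print the versions so a mismatch with the pip2-installed packages is obvious.
print("opencv-python: %s" % cv2.__version__)
print("numpy: %s" % numpy.__version__)
print("boto3: %s" % boto3.__version__)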
The CPU extension libraries are located in /opt/intel/computer_vision_sdk/deployment_tools/inference_engine/lib/Ubuntu_16.04/intel64/ (a sketch of how an application registers one of these libraries follows the list):
- libcpu_extension_sse4.so – for use with Intel Atom® processors
- libcpu_extension_avx2.so – for use with Intel® Core™ and Intel® Xeon® processors
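For reference, this is roughly how an application built on the 2018-era Inference Engine Python API registers one of these extensions before loading a network (a hedged sketch; IEPlugin and add_cpu_extension are the API names from that OpenVINO release, and the path is one of the libraries listed above):

from openvino.inference_engine import IEPlugin

# Path to one of the CPU extension libraries listed above.
cpu_ext = ("/opt/intel/computer_vision_sdk/deployment_tools/"
           "inference_engine/lib/Ubuntu_16.04/intel64/libcpu_extension_sse4.so")

plugin = IEPlugin(device="CPU")
plugin.add_cpu_extension(cpu_ext)  # registers custom layers needed by SSD models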
The Model Optimizer command (shown below) creates the IR (Intermediate Representation) files with .xml and .bin file extensions. If it fails, install the prerequisites for the Model Optimizer:
cd install_prerequisites/
./install_prerequisites.sh
Copy the SqueezeNet 5-Class model files from the downloaded Edge-optimized-models repository into the Model Optimizer directory:
cd Edge-optimized-models/
cd SqueezeNet\ 5-Class\ detection/
sudo cp SqueezeNetSSD-5Class.* <INSTALL_DIR>/deployment_tools/model_optimizer/
Note: For CPU, models must use data type FP32 for best performance; for GPU and FPGA, models must use data type FP16. For more information on how to use the Model Optimizer, follow the instructions at Intel® Distribution of OpenVINO™ toolkit Model Optimizer.
cd <INSTALL_DIR>/deployment_tools/model_optimizer/
sudo python3 mo.py --input_model SqueezeNetSSD-5Class.caffemodel --input_proto SqueezeNetSSD-5Class.prototxt --data_type FP16
mkdir ~/greengrass-input-files
cp SqueezeNetSSD-5Class.xml SqueezeNetSSD-5Class.bin ~/greengrass-input-files
Add an input video to the ~/greengrass-input-files folder. Upload a custom video or select one of the sample videos. This demo uses the SqueezeNet 5-Class model, which detects the Bicycle, Bus, Car, Motorbike, and Person classes, so choose a video in which these objects appear and the model can make valid inferences.
For each Intel edge platform, create a new AWS Greengrass group and install the AWS Greengrass core software to establish the connection between cloud and edge. Follow the instructions in the AWS Greengrass Developer Guide. To create an AWS Greengrass group, see Configure AWS Greengrass on AWS IoT. To install and configure an AWS Greengrass core on the edge platform, see AWS Greengrass core on edge platform.
After configuring Greengrass on the edge device, set group and user permissions to start the daemon. Run the following commands:
sudo adduser --system ggc_user
sudo addgroup --system ggc_group
Start the daemon by typing the following:
cd /greengrass/ggc/core/
sudo ./greengrassd start
This section describes how to create and package the Lambda function. First, create a working directory for the package:
mkdir ~/greengrass_project
sudo tar -xvf <Download_Location>/greengrass-core-python-sdk-1.2.0.tar.gz
cd aws_greengrass_core_sdk/examples/HelloWorld
sudo unzip greengrassHelloWorld.zip
cd greengrassHelloWorld
cp -r greengrasssdk/ ~/greengrass_project
cp <INSTALL_DIR>/deployment_tools/inference_engine/samples/python_samples/greengrass_samples/greengrass_object_detection_sample_ssd.py ~/greengrass_project |
cd /greengrass/ggc/packages/1.6.0/runtime/python2.7/
cp -r greengrass_common/ ~/greengrass_project
cp -r greengrass_ipc_python_sdk/ ~/greengrass_project
cd ~/greengrass_project
zip -r greengrass_sample_python_lambda.zip greengrass_common greengrass_ipc_python_sdk greengrasssdk greengrass_object_detection_sample_ssd.py
This demo creates the Lambda using the AWS CLI. The CLI lets you update an alias that points to a specific version of the Lambda code, which is useful if you change the code frequently.
$ aws lambda create-function --region region --function-name greengrass_object_detection --zip-file fileb://~/greengrass_project/greengrass_sample_python_lambda.zip --role role-arn --handler greengrass_object_detection_sample_ssd.function_handler --runtime python2.7 --profile default
Note: For this demo we set --region to us-east-1 and --role to the ARN of the IAM role we wish to apply. You may have to create an IAM role for Lambda first. Make sure --handler is in the format <mainfile_name>.function_handler and that the region matches your Greengrass group's region.
aws lambda create-alias \
--region region \
--function-name greengrass_object_detection \
--description "Alias for Greengrass" \
--function-version 1 \
--name GG_Alias \
--profile default
If you experience issues creating the Lambda function, see AWS Greengrass Developer Guide, Tutorial: Using AWS Lambda Aliases.
aws lambda publish-version \
--region region \
--function-name greengrass_object_detection \
--profile default
After creating the AWS Greengrass group and the Lambda function, configure the Lambda function for AWS Greengrass. Follow the instructions in the AWS Greengrass Developer Guide, Configure the Lambda Function for AWS Greengrass, Steps 1-8.
Use the name of the Lambda and the alias from the instructions you followed previously. Additionally, in step 8, change the memory limit to 2048 MB to accommodate large input video streams.
Add the environment variables in Table 1 as key-value pairs when editing the Lambda configuration and click Update. See Table 2 for the key-value pairs used in the demo.
Table 1. Environment Variables: Key-value Pairs
Key | Value |
LD_LIBRARY_PATH | <INSTALL_DIR>/opencv/share/OpenCV/3rdparty/lib: |
PYTHONPATH | <INSTALL_DIR>/deployment_tools/inference_engine/python_api/Ubuntu_1604/python2 |
PARAM_MODEL_XML | <MODEL_DIR>/<IR.xml>, where <MODEL_DIR> is user specified and contains IR.xml, the Intermediate Representation file from the Model Optimizer |
PARAM_INPUT_SOURCE | <DATA_DIR>/input.mp4, to be specified by user. <DATA_DIR> holds both input and output data. |
PARAM_DEVICE | For CPU, specify `CPU`. For GPU, specify `GPU`. For FPGA, specify `HETERO:FPGA,CPU`. |
PARAM_CPU_EXTENSION_PATH | <INSTALL_DIR>/deployment_tools/inference_engine/lib/Ubuntu_16.04/intel64/<CPU_EXTENSION_LIB>, where <CPU_EXTENSION_LIB> is libcpu_extension_sse4.so (Intel Atom® processors) or libcpu_extension_avx2.so (Intel® Core™ and Intel® Xeon® processors) |
PARAM_OUTPUT_DIRECTORY | <DATA_DIR>, to be specified by user. Holds both input and output data. |
PARAM_NUM_TOP_RESULTS | User specified, for the classification sample (e.g. 1 for top-1 result, 5 for top-5 results) |
Note: Table 1 lists the general paths for environment variables accessed during Greengrass deployment. Environment variable paths depend on the version of Intel® Distribution of OpenVINO™ toolkit installed. When running an Intel® Distribution of OpenVINO™ toolkit application without AWS Greengrass, the <INSTALL_DIR>/bin/setupvars.sh script is sourced first. With Greengrass deployment, however, the environment variables are sourced through the Lambda configuration.
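Inside the Lambda, the PARAM_* values arrive as ordinary environment variables; the following is a minimal sketch of how the sample reads them (os.environ.get is the mechanism the sample uses, per the webcam note later in this document; the exact handling in the shipped sample may differ):

import os

# Environment variables set in the Lambda configuration (Table 1).
PARAM_MODEL_XML = os.environ.get("PARAM_MODEL_XML")
PARAM_INPUT_SOURCE = os.environ.get("PARAM_INPUT_SOURCE")
PARAM_DEVICE = os.environ.get("PARAM_DEVICE", "CPU")  # e.g. CPU, GPU, HETERO:FPGA,CPU
PARAM_OUTPUT_DIRECTORY = os.environ.get("PARAM_OUTPUT_DIRECTORY")

# The IR weights file sits next to the .xml file with a .bin extension.
model_bin = os.path.splitext(PARAM_MODEL_XML)[0] + ".bin"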
This demo uses Intel® Distribution of OpenVINO™ toolkit R3 on the Up Squared* platform. Table 2 lists the environment variables for the Lambda configuration.
Table 2. Environment Variables: Key-value Pairs for Demo
Key | Value |
LD_LIBRARY_PATH | /opt/intel/computer_vision_sdk_2018.3.343/opencv/share/OpenCV/3rdparty/lib:/opt/intel/computer_vision_sdk_2018.3.343/opencv/lib:/opt/intel/opencl:/opt/intel/computer_vision_sdk_2018.3.343/deployment_tools/inference_engine/external/cldnn/lib:/opt/intel/computer_vision_sdk_2018.3.343/deployment_tools/inference_engine/external/gna/lib:/opt/intel/computer_vision_sdk_2018.3.343/deployment_tools/inference_engine/external/mkltiny_lnx/lib:/opt/intel/computer_vision_sdk_2018.3.343/deployment_tools/inference_engine/lib/ubuntu_16.04/intel64:/opt/intel/computer_vision_sdk_2018.3.343/deployment_tools/model_optimizer/model_optimizer_caffe/bin:/opt/intel/computer_vision_sdk_2018.3.343/openvx/lib: |
PYTHONPATH | /opt/intel/computer_vision_sdk_2018.3.343/python/python2.7:/opt/intel/computer_vision_sdk_2018.3.343/python/python2.7/ubuntu16:/opt/intel/computer_vision_sdk_2018.3.343/deployment_tools/model_optimizer: |
PARAM_MODEL_XML | /home/upsquared/greengrass-input-files/SqueezeNetSSD-5Class.xml |
PARAM_INPUT_SOURCE | /home/upsquared/greengrass-input-files/sample-videos/inputvideo.mp4 |
PARAM_DEVICE | GPU |
PARAM_CPU_EXTENSION_PATH | /opt/intel/computer_vision_sdk_2018.3.343/deployment_tools/inference_engine/lib/ubuntu_16.04/intel64/libcpu_extension_sse4.so |
PARAM_OUTPUT_DIRECTORY | /home/upsquared/greengrass-output |
PARAM_NUM_TOP_RESULTS | 3 |
Table 3 lists the LD_LIBRARY_PATH and additional environment variables for Intel® Arria® 10 GX FPGA Development Kit.
Table 3. Environment Variables: Additional Key-value Pairs for Intel® Arria® 10 GX FPGA Development Kit
Key | Value |
LD_LIBRARY_PATH | /opt/altera/aocl-pro-rte/aclrte-linux64/board/a10_ref/linux64/lib:/opt/altera/aocl-pro-rte/aclrte-linux64/host/linux64/lib:<INSTALL_DIR>/opencv/share/OpenCV/3rdparty/lib:<INSTALL_DIR>/opencv/lib:/opt/intel/opencl:<INSTALL_DIR>/deployment_tools/inference_engine/external/cldnn/lib:<INSTALL_DIR>/deployment_tools/inference_engine/external/mkltiny_lnx/lib:<INSTALL_DIR>/deployment_tools/inference_engine/lib/ubuntu_16.04/intel64:<INSTALL_DIR>/deployment_tools/model_optimizer/model_optimizer_caffe/bin:<INSTALL_DIR>/openvx/lib |
DLA_AOCX | <INSTALL_DIR>/a10_devkit_bitstreams/0-8-1_a10dk_fp16_8x48_arch06.aocx |
CL_CONTEXT_COMPILER_MODE_INTELFPGA | 3 |
Figure 1. Environment Variable Example
To subscribe or publish messages from the AWS Greengrass Lambda function, follow the AWS Greengrass Developer Guide, Configure the Lambda Function for AWS Greengrass, Steps 10-14.
The Optional topic filter field must match the topic used inside the Lambda function. For example, openvino/ssd is the topic used in greengrass_object_detection_sample_ssd.py, as in the sketch below.
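A minimal sketch of how a Greengrass Lambda publishes on that topic with the Greengrass Core SDK (client("iot-data") and publish are the SDK's API; the payload fields shown here are illustrative, not the sample's exact message format):

import json
import greengrasssdk

client = greengrasssdk.client("iot-data")

# Publish one detection result on the topic configured in the subscription.
detection = {"label": "Person", "confidence": 0.92,
             "xmin": 34, "ymin": 12, "xmax": 160, "ymax": 300}
client.publish(topic="openvino/ssd", payload=json.dumps(detection))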
To grant Greengrass access to the hardware resources and the environment variable paths, follow the AWS Greengrass Developer Guide section on accessing local resources from Lambda functions. Add the resources listed in Tables 4 through 6.
Table 4. Resource Access
Name | Resource Type | Local Path | Access |
InputDir | Volume | /home/<username>/greengrass-input-files | Read-Only |
Webcam | Device | /dev/video0 | Read-Only |
OutputDir | Volume | /home/<username>/greengrass-output | Read and Write |
OpenVINOPath | Volume | <INSTALL_DIR> (OpenVINO install location) | Read-Only |
Note: If using a webcam rather than a pre-recorded video, modify the PARAM_INPUT_SOURCE line in greengrass_object_detection_sample_ssd.py:
From: PARAM_INPUT_SOURCE = os.environ.get("PARAM_INPUT_SOURCE")
To: PARAM_INPUT_SOURCE = 0
The value 0 is the suffix of the video device node in the /dev folder (i.e. /dev/video0), as shown in context below.
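An integer works here because OpenCV's VideoCapture accepts either a device index or a file path; a short sketch (hedged, the sample's actual capture loop may differ):

import cv2

# 0 opens /dev/video0; a string such as "input.mp4" opens a file instead.
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ret, frame = cap.read()  # ret is False when no frame is available
    if not ret:
        break
    # ... run inference on `frame` here ...
cap.release()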
Table 5. Resource Access for GPU
Name | Resource Type | Local Path | Access |
GPU | Device | /dev/dri/renderD128 | Read and Write |
Table 6. Resource Access for FPGA
Name | Resource Type | Local Path | Access |
FPGA | Device | /dev/acla10_ref0 | Read and Write |
FPGA_DIR1 | Volume | /opt/Intel/OpenCL/Boards | Read and Write |
FPGA_DIR2 | Volume | /etc/OpenCL/vendors | Read and Write |
Figure 2. Resource Access Example
Lastly, add a role to the Greengrass group.
1. Go to the Greengrass Console > Groups and select your group name.
2. Choose Settings, then Add Role in the Group Role section.
Note: You may have to create a Greengrass IAM role before following the Add Role instructions. Adding a role is required to upload images to S3 and to access other AWS resources.
To deploy the Lambda function to the AWS Greengrass core device, select Deployments on the group page and follow the instructions in Deploy Cloud Configurations to AWS Greengrass Core Device.
Upon first deployment, an error may occur.
Figure 3. First Deployment Error
If the error relates to access permissions on the OpenVINO install directory, give the Greengrass user and group ownership of it:
chown ggc_user:ggc_group /opt/intel/computer_vision_sdk
This section describes how to deploy a new version of the Lambda to AWS Greengrass after changing the code in the Lambda Console. For example, modifying greengrass_object_detection_sample_ssd.py requires deploying a new version.
aws lambda update-alias \
--region region \
--function-name greengrass_object_detection \
--function-version 2 \
--name GG_Alias \
--profile default
For --function-version, specify the function version that you published in the Lambda Console.
There are four options available for output consumption:
These options report, stream, upload, and store inference output at an interval defined by the reporting_interval variable in the AWS Greengrass samples.
AWS IoT Cloud Output
The AWS Greengrass samples enable AWS IoT Cloud output by default through the enable_iot_cloud_output variable. This option is used to verify that the Lambda is running on the edge device and to publish messages to the AWS IoT Cloud using the subscription topics specified in the Lambda. The classification samples publish the top class label on the topic openvino/classification; the SSD object detection sample publishes bounding box coordinates, class label, and class confidence on the topic openvino/ssd.
To view the output on AWS IoT cloud, follow the AWS Greengrass Developer Guide, Verify the Lambda Function is Running on the Device.
AWS Kinesis Streaming
The AWS Kinesis Streaming option streams inference output from the edge device to the cloud using AWS Kinesis streams when enable_kinesis_output is set to True. The edge device acts as a data producer and continually pushes processed data to the cloud. Set up and specify the AWS Kinesis stream name, AWS Kinesis shard, and AWS region in the AWS Greengrass samples, along the lines of the sketch below.
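A minimal sketch of the producer side with boto3 (put_record is the standard Kinesis API; the stream name, region, partition key, and record fields here are illustrative):

import json
import boto3

# Illustrative names; configure your own stream, shard count, and region.
kinesis = boto3.client("kinesis", region_name="us-east-1")

record = {"label": "Car", "confidence": 0.88, "timestamp": "2018-10-01T12:00:00"}
kinesis.put_record(StreamName="openvino-inference-stream",
                   Data=json.dumps(record),
                   PartitionKey="shard-1")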
Cloud Storage using AWS S3* Bucket
The Cloud Storage Using AWS S3 Bucket option uploads and stores processed frames (in JPEG format) in an AWS S3* bucket when the enable_s3_jpeg_output variable is set to True. Set up and specify the AWS S3 bucket name in the AWS Greengrass samples to store the JPEG images. The images are named using the timestamp and uploaded to AWS S3, roughly as sketched below.
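A minimal sketch of the upload step with boto3 (put_object is the standard S3 API; the bucket name and local frame path are illustrative, and the timestamp-based key mirrors the naming described above):

import datetime
import boto3

s3 = boto3.client("s3")

# Name the image by timestamp, as described above; bucket name is illustrative.
key = datetime.datetime.utcnow().strftime("%Y-%m-%d_%H-%M-%S") + ".jpeg"
with open("/tmp/frame.jpeg", "rb") as f:
    s3.put_object(Bucket="my-openvino-output-bucket", Key=key, Body=f)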
Local Storage
The Local Storage option stores processed frames (in JPEG format) on the edge device when the enable_local_jpeg_output variable is set to True. The images are named using the timestamp and stored in the directory specified by PARAM_OUTPUT_DIRECTORY, roughly as sketched below.
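A minimal sketch of the local write (cv2.imwrite is the standard OpenCV call; the frame here is a placeholder, since in the sample it is the processed video frame):

import os
import time
import cv2
import numpy

# Placeholder frame; in the sample this is the annotated frame from inference.
frame = numpy.zeros((300, 300, 3), dtype=numpy.uint8)

# Name the JPEG by timestamp and write it to the configured output directory.
output_dir = os.environ.get("PARAM_OUTPUT_DIRECTORY", "/tmp")
filename = os.path.join(output_dir, "%d.jpeg" % int(time.time()))
cv2.imwrite(filename, frame)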