Getting Started with the ADLINK Edge AWS Lookout for Vision Edge Solution


Solution Architecture

The ADLINK Edge AWS Lookout for Vision Solution comprises several applications that communicate via the ADLINK Data River. Together these applications ingest video from cameras, perform inference on the captured images, and publish the inference results for other applications to consume.

Optionally, additional ADLINK Edge applications can be added to the solution to enable bi-directional communication with OT devices such as PLCs.

The core applications are:

Additionally, the VMLink desktop application forms part of the solution and can be used to:

ADLINK Edge Profile Builder can be installed on either the edge device or the PC used for development. To install ADLINK Edge Profile Builder, follow the relevant guide below for your OS.

VMLink is a useful tool for viewing video streams along with their associated inference results. It can also be used to collect images for training AI models. VMLink is currently available for Ubuntu 18.04 on x86 and NVIDIA JetPack on ARM systems.

For more information on installing VMLink please see How to Install VMLink on Ubuntu 18.04 and NVIDIA Jetpack.

Using the Automated Solution Installer

ADLINK provides an install script (available from here) to assist with the deployment of the ADLINK Edge AWS Lookout for Vision Solution onto the edge device. The installer can be used to automate many of the steps below including the deployment of the following components onto the edge device:

Deploying AWS IoT Greengrass onto the Edge Device

The steps below document the process for setting up AWS IoT Greengrass on an edge device with support for deploying the ADLINK Edge AWS Lookout for Vision Edge solution.

Before the ADLINK Edge applications that make up the solution can be deployed to the edge device via AWS IoT Greengrass, they must first be configured using ADLINK Edge Profile Builder.

To assist with this, ADLINK provides the following template profiles for download within Profile Builder:

Downloading the Profiles

From within a project in ADLINK Edge Profile Builder click on Add profile and select Download a profile. Click the Next button.

Ensure the ADLINK-Templates repository is selected and select the profiles you wish to download. Click Next.

The selected profiles will now be downloaded.

When the download is complete the profiles will be available to edit in Profile Builder.

Deploying a Streamer Profile

The following sections describe how to configure and deploy a streamer profile to facilitate the collection of images for use in training a model. Before describing how the profile is configured, the concept of a Stream Id is introduced.

A Stream Id identifies a unique stream of video frames from a camera and as such is a key parameter used in the configuration of several applications to direct the flow of data. It is recommended that Stream Ids are set in a way that identifies the context of the stream in the system. For example, a stream from a camera on a machine in an ADLINK factory in Taipei may be assigned a hierarchical stream Id such as taiwan.taipei.lineA.machine1.camera1.

Please note that VMLink does not support multi-segment stream Ids at this time. Multi-segment Ids can instead be represented by replacing the . characters with - characters, for example taiwan-taipei-lineA-machine1-camera1.
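To make the convention concrete, below is a minimal Python sketch converting between the hierarchical and VMLink-compatible forms of a stream Id. The helper names are hypothetical and not part of the solution itself:

```python
# Hypothetical helpers: convert a hierarchical (dotted) stream Id into the
# single-segment (dashed) form that VMLink can display, and back again.

def to_vmlink_id(stream_id: str) -> str:
    """Replace '.' separators with '-' so VMLink treats the Id as one segment."""
    return stream_id.replace(".", "-")

def to_hierarchical_id(vmlink_id: str) -> str:
    """Restore the dotted form (assumes segments contain no '-' characters)."""
    return vmlink_id.replace("-", ".")

print(to_vmlink_id("taiwan.taipei.lineA.machine1.camera1"))
# taiwan-taipei-lineA-machine1-camera1
```

Note the round trip is only lossless when the individual segments themselves contain no hyphens.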

Configuring the Streamer Profile

A Streamer Profile serves two purposes:

  1. Allows images to be streamed from a camera for the purposes of collecting training images with VMLink.
  2. Allows images to be streamed to the inference engine application (configured as part of the Inference profile) for live inferencing at the edge.

An example of a Streamer Profile is shown below (aws-lfv-edge-genicam-streamer):

Clicking on an application within a profile opens the configuration editor for that application:

An application, and the container it will be deployed in, can be configured through the Configuration, Files and Docker tabs in the editor. Documentation for the application is also available by clicking on the Documentation tab.

In general, the template streamer profiles are configured with default parameters suitable for deployments consisting of a single camera with the Stream Id pre-configured as camera1. In certain cases, device configuration and Docker settings may need to be updated.

Exporting the Profile

For deployment via AWS IoT Greengrass, profiles are exported as a docker-compose bundle. To export a profile from Profile Builder, open the profile to be exported and click the Deploy button. Select Download a docker-compose and click Next.

The profile will then be prepared and a download triggered.

Creating a Greengrass Component from the Profile

An exported profile must be uploaded to an AWS S3 bucket in the same region as the AWS IoT Greengrass deployment. Once uploaded, an AWS IoT Greengrass Generic component that references the file in AWS S3 can be created for the profile. Example recipes for each of the streamer template profiles are given below:

Note that the path to the profile in AWS S3 will need to be modified before creating a component from the example recipes. For the above example recipes, the <bucket> and <path> segments of the URI will need to be changed as highlighted below:
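For orientation, the fragment below is a hedged sketch of where those segments sit within a Greengrass v2 component recipe. The archive name is a placeholder, and the Lifecycle command is an assumption about how the exported docker-compose bundle would be started; consult the actual template recipes for the authoritative values:

```yaml
# Illustrative recipe fragment only -- <bucket>, <path> and the archive name
# must be replaced to match where the exported profile bundle was uploaded.
Manifests:
  - Platform:
      os: linux
    Lifecycle:
      Run: docker-compose -f {artifacts:decompressedPath}/profile/docker-compose.yml up
    Artifacts:
      - URI: s3://<bucket>/<path>/profile.zip
        Unarchive: ZIP
```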

For further information on how to create an AWS IoT Greengrass Component from a recipe see here.

Please ensure you have completed the step to allow Greengrass to access the component artifacts in AWS S3. If this step is omitted, the Greengrass core device will be unable to deploy the component onto the device.
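As a minimal sketch of that step, an IAM policy statement of the following shape can be attached to the Greengrass device role to grant read access to the artifacts; the bucket name is a placeholder for your own bucket:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::<bucket>/*"
    }
  ]
}
```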

Deploying the Streamer Profile to a Device

The various camera streamer profiles have no additional dependencies. To deploy the profile, add the Greengrass component created in the previous section to a Greengrass deployment targeted at your device.

For further information on how to deploy a component see here.

Collecting Images for Training a Model

To view the camera stream first open the VMLink application. The link can usually be found on the desktop and alongside other applications in the Ubuntu 18.04 launcher menu. The icon is shown below:

On the App Select screen click the LAUNCH button below the Streamer application.

As video streams are discovered by the Streamer application they will be added to the Stream Select window. Click the LAUNCH STREAM button below a discovered stream to view that stream.

Video frames will be displayed as they arrive.

Deploying the Inference Profile

The Inference Profile can be deployed once training data has been captured and an initial model trained.

Configuring the Profile

The Inference Profile includes the following applications:

Other ADLINK Edge and Docker applications can be added to the profile as required. For example, the ADLINK Edge Modbus Connect application can be added to support connections to Modbus devices. Please note these additional applications may require separate licenses to be purchased.

Lookout for Vision Inference Engine Application

The Lookout for Vision (aws-lookout-vision) application performs inferencing on received video frames. The key fields to configure in the application are:

Training Streamer Application

The Training Streamer application captures frames automatically based on configured triggering conditions. The key fields to configure in the application are:

In the template profile, the Training Streamer application is configured to acquire AWS credentials through the Greengrass Token Exchange Service. In order to access the configured S3 bucket, an appropriate IAM policy must be attached to the token exchange role. More information can be found here.
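As a sketch only, an IAM policy statement of the kind that might be attached to the token exchange role so the Training Streamer can upload captured frames is shown below. The bucket name is a placeholder, and s3:PutObject is assumed to be the minimal action required; check the linked documentation for the definitive policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::<training-bucket>/*"
    }
  ]
}
```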

Greengrass Connect Application

The Greengrass Connect application uses the AWS IoT Greengrass Core IPC service to relay messages from the Data River to other locally deployed Greengrass components or the AWS IoT broker in the cloud.

The key fields to configure in the application are:

In the template profile, the Greengrass Connect application is configured to send messages to the AWS IoT Cloud broker. This requires appropriate access control policies to be set as part of the component recipe or component configuration in a deployment. Such a policy is included in the sample component recipe provided within the Creating a Greengrass Component from the Profile section. More information can be found here.
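To illustrate the shape of such a policy, below is a hedged sketch of an IPC access control entry expressed as component configuration. The component name and policy Id are hypothetical placeholders, and the wildcard resource should be tightened to specific topics for production:

```yaml
# Illustrative fragment only -- component name, policy Id and topic filter
# are placeholders; restrict the resources list for production deployments.
accessControl:
  aws.greengrass.ipc.mqttproxy:
    com.example.GreengrassConnect:mqttproxy:1:
      policyDescription: Allow the profile to publish messages to AWS IoT Core.
      operations:
        - aws.greengrass#PublishToIoTCore
      resources:
        - "*"
```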

Node-RED application

The Node-RED application does not need to be configured. By default, it comes complete with an example flow file that receives video frames and inference results from the default camera1 stream and displays them on a local web dashboard accessible at http://<Device IP>:1880/ui. If users subsequently change the flow, it can be exported and added back to the application's profile as the file /adlinkedge/config/flows.json using the Files tab.

Exporting the Profile

Once the inference profile has been configured it can be exported from Profile Builder as a docker-compose bundle.

Creating a Greengrass Component from the Profile

As with the streamer profile, the exported profile must be uploaded to an AWS S3 bucket in the same region as the AWS IoT Greengrass deployment. Once uploaded, a Greengrass component can be created for the profile using the following example recipe file:

Deploying the Inference Profile

As with the camera streamer profiles, the inference profile can be deployed to a Greengrass Core by creating an AWS IoT Greengrass component and adding that component to a Greengrass deployment. However, additional Greengrass components must be installed along with the inference profile component:

Viewing the Inference Results

In addition to displaying video frames from frame streamer applications, VMLink automatically overlays any inference results associated with the stream on top of the images.

Video streams can be viewed within VMLink using the Streamer application.

Viewing the Inference Results in Node-RED

An example Node-RED Dashboard is included with the inference profile and can be viewed by navigating to the following URL in a web browser:

http://<Device IP>:1880/ui

The dashboard shows the video stream from camera1 as well as the inference results.