
ert build guide

pierre drezet edited this page Aug 20, 2025 · 1 revision

Build Machine & System Requirements

In general, inxware runtimes are built on Ubuntu 22.04 machines, or any machine that can provide the following packages:

  • build-essential* (GNU Make)
  • Git* & Git-LFS
  • Docker (a standard installation that can mount the host with synchronised permission groups)

Other dependencies and tools should be installed using specific methods, e.g. the ert-runtime environment is created by running make prepdeps in the ert-components repo (see below). This installs the remaining requirements on a Debian machine, though we aim to use Docker images as much as possible to contain tools (BUT NOT CODE!). There may be other dependencies needed for building the tools etc. on Windows.
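Before starting, it can help to confirm the base tools are actually available. The following is a minimal sketch (the helper function is hypothetical; the tool names are the usual Debian/Ubuntu command names for the packages listed above):

```shell
# check_tool NAME - prints "ok: NAME" if NAME is on the PATH, otherwise
# "missing: NAME" and returns non-zero.
check_tool() {
    if command -v "$1" >/dev/null 2>&1; then
        echo "ok: $1"
    else
        echo "missing: $1"
        return 1
    fi
}

# The base tools this guide lists:
for t in make git git-lfs docker; do check_tool "$t" || true; done
```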

eRT Build Overview

If you are looking to build an existing production target, go to this section. Otherwise, to build a target from scratch and understand each step, please read on.
A typical sequence of building a complete and configured eRT package consists of the following steps:

./configure <TARGET>
make prepdeps          (Optional: downloads dependencies needed for some targets)
make all_docker        (Compiles & links eRT C/C++ code to an exe or dll)
make targetenv         (Aggregates all deployed files into a staging directory)
make targetenv_version (Optional: only after QA, and mandatory for release)
make targetenv_package (Optional: calls the relevant packager per platform config)
make upload_xxxx       (Optional: deployment to a server for OTA update)
make install_via_xxxx  (Optional: deployment directly to a device)
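The sequence above can be sketched as a single script. This is only an illustration: the TARGET value is an example, and the script defaults to a dry run that prints each step rather than executing it (set DRY_RUN=0 to run it from an ert-components checkout):

```shell
# Sketch of the typical build sequence. Defaults to a dry run.
TARGET=${TARGET:-linux_x86_64_gtk_gst_debian11}
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = "1" ]; then echo "would run: $*"; else "$@"; fi
}

run ./configure "$TARGET"
run make prepdeps            # optional: download dependencies for some targets
run make all_docker          # compile & link the eRT C/C++ code
run make targetenv           # aggregate deployed files into the staging directory
run make targetenv_version   # optional: only after QA, mandatory for release
run make targetenv_package   # optional: call the relevant packager
```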

Software Dependency Locations

| Purpose | Repo Path | Notes |
|---|---|---|
| eRT source | | |
| eRT porting | /target/ | |
| eRT build | ./Makefile + specific .mks | |

Also see the following documents for more details on eRT’s architecture:

eRT Architecture & Porting Guide - Public

Inxware Build Processes

Building a Pre-configured eRT Platform

This section provides only the information necessary for an internal inx technical product manager (i.e. not just a developer) to create a release of inxware-based products. It gives a general overview of the process, the components involved, and how things SHOULD work.

Starting from Scratch

System requirements: Ubuntu 22.04 Recommended (64 bit essential)

Warning! Check your ~/.ssh/ for the xxxx.pub file. If it is rsa256 you can either remove it (if it is not already in use) or configure an additional key for the URL below. A new key can be generated with ssh-keygen -t ed25519
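To see which key types you currently have, you can inspect the first field of each .pub file. The helper below is a hypothetical sketch, not part of the build system; it simply reads the algorithm name from an OpenSSH public key line:

```shell
# key_type PUBKEY_LINE - prints the algorithm part of an OpenSSH public key
# line, e.g. "ed25519" for an "ssh-ed25519 ..." key.
key_type() {
    set -- $1                 # split the key line into fields
    echo "${1#ssh-}"          # strip the "ssh-" prefix of the algorithm name
}

# Typical usage against your existing keys:
#   for k in ~/.ssh/*.pub; do echo "$k: $(key_type "$(cat "$k")")"; done
# If none reports "ed25519", generate one:
#   ssh-keygen -t ed25519
```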

The initial system preparation involves the following steps:

mkdir inxware
cd inxware
sudo apt-get install git build-essential
git clone ssh://tech-data@dev.inx-systems.net:8822/home/inx-data/data/Repos/ert-components.git
cd ert-components
make prepdeps

The final step will check out two other (large) git-lfs enabled repos into the inxware directory. It may take 10 minutes or more and requires >20GB of storage.

Once your environment is set up you can test a build for your development environment (e.g. Debian 11 / Ubuntu 22) using:

./configure linux_x86_64_gtk_gst_debian11
make all_docker   # Build source and create binaries
make targetenv    # Add misc resources to staging directory
cd ../TARGET_TREES/ehs-env_linux_x86_64_gtk_gst_debian11/bin
./ehs.exe

The final step runs eRT on a Debian/Ubuntu host, where it waits to accept an application from the inxware tools. See here for how to build and run the tools on Linux.

eRT Build System General Reference

The ert-components repository contains the core build system needed to create deployable executables and system images for any of the hardware targets eRT supports. The eRT build system does not build operating system images such as Linux or Android, but it will build full “flash” images for targets where runtime-installable packages are not supported (e.g. FreeRTOS or bare metal).

To build for a particular target, the first step is to configure the build system for one of the available targets. The list of supported targets can be shown with the ./configure help command.
Once you have identified your target, the build tree is configured as follows:

./configure <your chosen target platform>

Retrieve dependencies by checking out toolchains & library dependencies (including the C library) from github or inx internal git-lfs repos. (Note this typically only needs to be run once on a particular machine, but it will pick up updates from github if repeated.)

make prepdeps

The next step is to build the eRT source code, which is typically done within a docker environment. The docker environment may be needed only to support running a particular toolchain or it may contain the toolchain and all dependencies itself.

make all_docker

Developer Tip: If your host linux environment already supports the toolchains required for your target or you are running in an interactive docker environment (e.g. after running make target_buildenv) then you can use the default “all” target e.g. by running make -j 8 directly.

Finally, you will typically need to build a runtime directory structure for eRT and assemble any other files needed during execution or for device management into a staging directory found at ../TARGET_TREES/<your chosen target platform>/… using the following command:

make targetenv

The directories under TARGET_TREES are staging directories that usually contain software artifacts in the format required for a runtime deployment. The structure of this is defined in the directories under ert-components/target/envtree/***/

Target types with special environment initialisations:

  • Unity (e.g. tellisign - not needed for Ambifier2) - see Unity & eRT Supervisor section below. Unity needs to be run interactively to set up the license. (A personal license is OK for now?)
    • Can we create a license Unity tools Docker image?

Production Builds {#production-builds}

See relevant directory in scripts/build-deploy/. These scripts will typically build all variants of the product and optionally upload these to the relevant deployment Devman server instance.

Build Types (from Scratch)

=============================================================

Android Builds, e.g. Ambifier2

Note: These steps are usually done in the following scripts for specific products:
e.g.
./scripts/build-deploy/moodsonic-tsa/makeEhs-android-ambifier-server-upload.sh

The steps needed to build/upload a specific Android platform, e.g. ambifier for the A6 board, are:

./configure linux_android_arm_p64_a6_ambifier #use ./configure for options
make prepdeps # pull from all dependencies repos
make clean
make all_docker # Build the source code in docker so we know the toolchain runs
make targetenv # Assemble the various files into the staging directory
make targetenv_version # optionally update the global release version number
make targetenv_apk_docker #Create the main app APK from the staging directory.
make targetenv_android_dep_pack # Package bits not included in the APK. e.g. device supervisor scripts and APK.
# Now we can either flash a device running Android locally or upload to Devman
# upload to device via adb (optionally setenv ADB_IP=<device ip>)
make upload_ehs_via_adb

# AND/OR deploy to Devman distribution server
make upload_ehs_sys_patch

Use ‘make help’ for details on what each step means.

======================================================================

The above Android target is built in the following stages:

  1. make all_docker - builds the NDK ehs.so plugin
  2. make targetenv - aggregates the eRT file system structure, Lucid apps and the above plugin into TARGET_TREES/ehs_env-<target>
  3. make targetenv_apk_docker - copies the Android Studio template project (ert-components/target/os-arch/android_ALL/android_studio_ehs) into TARGET_TREES/ehs_env-<target> and uses gradle to create the deployable .apk file
  4. make targetenv_android_dep_pack - aggregates all supervisor scripts and creates a deployable package (with APKs) for this target in TARGET_TREES/ehs_env-<target>
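Before the packaging stages it can be useful to confirm that stage 2 actually produced the staging tree. The helper below is a hypothetical sketch; the TARGET_TREES/ehs_env-<target> layout follows the stage list above:

```shell
# check_staging TREE_ROOT TARGET - confirm "make targetenv" has produced the
# staging tree before running the packaging steps.
check_staging() {
    dir="$1/ehs_env-$2"
    if [ -d "$dir" ]; then
        echo "staging ready: $dir"
    else
        echo "missing staging tree: $dir (run 'make targetenv' first)" >&2
        return 1
    fi
}

# e.g. check_staging ../TARGET_TREES linux_android_arm_p64_a6_ambifier
```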

Unity (e.g. signage) Android Builds

Set up Unity Hub and the licence. (Skip this!!! Only do it if the step below fails, which probably means Unity Hub is not yet installed and the licence not signed.)

make targetenv_unity_export

(The following needs to be done only once, if Unity Hub is not already present on your machine!)

  1. Install Unity3d Hub (TODO: we probably want to do this in docker)
    • Install the following dependencies:
      1. sudo apt-get install gconf2
    • Download the Unity Hub from https://unity3d.com/get-unity/download
    • chmod +x ./UnityHub.AppImage
    • Run the hub, log in and set up your licence (use the inx developer account) developer@inx-systems.com:HelloUnity101
    • Press the “activate new license” button if no licenses are shown (choose personal use if this is the case).

Build Entire Platform (including Supervisor)

Commands for building Android Unity targets with supervisor and updates.

# build 64-bit eRT plugin required by all Unity targets
./configure linux_android_arm64_unity-lib
make clean
make all_docker
make targetenv

# build 32-bit eRT plugin and apk
./configure linux_android_arm_unity-tellisign
make clean
make targetenv_cleanall # needed ONLY when Unity C# project needs updating
make all_docker
make targetenv
make targetenv_unity_export
make targetenv_apk (targetenv_apk_docker doesn’t seem to work for some reason??)

# bundles supervisor and updates for deployment (only used for managed devices)
make targetenv_android_dep_pack

# deploying to server
make upload_ehs_sys_patch

Note the above seems to be broken when building with docker.

============================================================
Notes:

  1. Make libraries (e.g. 64 bit)
  2. Make base .so (usually 32 bit)
  3. Make (do unity thing) -> We need a new make targetenv_unity_export
    1. Exports a Unity IDE project containing everything (compiles the C# code to a Mono binary) -> ….xxx.so. (Potentially these could be stored in ert-contrib-middleware.)
    2. Exports this as an android studio project.
    3. We then add JNI / Java script to the android studio project.
  4. Make targetenv_apk

Tellisign - needs both 32 and 64 bit versions.

  1. Prior step builds the 64 bit plugin - should be in ../TARGET-TREE/…plugins/… libehs.so (e.g.).
    1. This is checked during make targetenv - but not used - just a warning is issued before targetenv_version etc. is called.
  2. Why do we obtain Android Studio differently? It should be the same, or in docker.
    1. E.g. to avoid the gradle version problem.
  3. We need to export Unity’s project to our own Android Studio project so we can add more code to it when it is built.
    1. Potentially this should be in ert-build-support?
    2. Unzipped to ../ next to ert-components.
  4. Unity has android gcc & gradle toolchains (all of the Android SDK) in the zip file and we need to use this.
  5. Things we copy into the vanilla Unity project (where do the following live?):
    1. Knows about the ehs plugins
    2. Adds certificates
  6. Uses mono to build the Unity app code.
  7. Updates the build version information.
  8. Android has JNI stuff and Java - which is not needed for Windows below.

Windows Unity Builds

Example of creating installer for Windows Unity running on Sandbox server

./configure win_x86_unity_sandbox
make all_docker
make targetenv
make targetenv_nsis_docker

Updating Windows 32-bit Unity 3d Template

At the moment Windows Unity needs to be built using IDE and added as a template to

ert-contrib-middleware/contrib/Unity3D/SignageWindowsBuild

Building Win32 Unity 3d Template

Make sure you have Unity Hub with a license and Unity 2019.4.40 (LTS) installed on your windows device.

Next, from the Unity Hub open this project “EHS/target/os-arch/android_ALL/Unity_EHS”

Open the build settings dialog from File->Build Settings… Next, make sure that you set the Target Platform to Windows, and Architecture to x86. (Do not use x86 64-bit, as the plugin dll is built for 32-bit Windows for now.) See the screenshot below for reference.

Click build and navigate to the folder where you’d like the project to be built (e.g create SignageWindowsBuild folder somewhere in your file system).

Unity build roadmap ideas

  • Move the Unity toolchain from a tarball to ert-build-support
  • C# remains in ert-components (or possibly contrib-middleware would make more sense from the following step’s PoV).
  • Generated prebuilt dependencies and the output Android Studio project should go into ert-contrib-middleware.
  • Windows version - Kamil Wieczorek - this is manual ATM; see how it would fit into the above.

Plain Android

We need to do the following steps so that we are issuing a command to do an update (not describing how to do the update):

  1. Copy the devman deployed script functions from ./target/envbuildscripts/installers/android-adb/* to things that go into the supervisor scripts and get run in a simple way
  2. Then we need to do a migration release to all devices (or save one on Devman if some devices are offline).
  3. … Then remove ./target/envbuildscripts/installers/android-adb/

For ambifier updates we download two APKs in the dldata.tgz, along with the ambifier installer that unpacks and installs both.

It also has a zip file for the supervisor scripts, which is downloaded separately.

  1. Proposed new, more generic method:
    The supervisor, downloader & optional ambifier are all in one zip file, and the respective installer on the device will do the right thing.
  2. ONLY THE LATEST VERSION for each variant needs to be on the server, and it should have the same path/URL.

The update script unzips the dldata.tgz and looks for specific APKs in it - if they are there it will install them, including

The product is EHS/Ambifier/Telesign - this will be consumed in the above generic method.

make install_via_adb

This make target may modify the target’s init scripts and install the supervisor code also.

Things we do now as root:

  • H6 - set a new MAC address
  • Rock64 - creates a MAC address for a new hardware device; doesn’t work once
    • Can we change the initscripts on the vanilla image? Possibly need to re-sign / CRC.
  • Volume? Some need root some don’t (may also depend on SE Linux)
  • Install?
  • …/
  • Adb is needed for automatically granting permissions to apps, to avoid user interaction.

AOSP - option

  1. ADB as root installs the initscript and downloader to connect to Devman to do updates.
    1. This needs to know what kind of image to download (i.e. we want some permanent file installed via the ert-components make upload_via_adb script; the ${TARGET} value from ert-components would be the type identifier).
  2. Ideal way of installing an APK - e.g. Supervisor + downloader.

Uploading eRT to Appland

There are a couple of targets which can be uploaded to the appland after building. The following structure needs to be used for the target to support appland upload.

./target/platform/<platform>/appland
info
INSTALLER.html
res - directory with resources e.g. images, html etc.

To upload the target package and the above to appland, you need to do the following:

make targetenv_upload_appland

Make sure you fully build the target before uploading to appland, e.g. targetenv, targetenv_apk_docker, targetenv_esp32s3_docker etc. (depending on the target).

This script can be used for building and uploading all targets which are placed in the appland:

./scripts/build-deploy/appland/build_upload_all.sh

Merge ‘master’ into ‘Release’ branch

cd apps/
# make sure master is up to date (in sync with remote)
git checkout RELEASE-PRODUCTION
git pull              # in case behind
git merge master
git push
git checkout master   # back to master so we don't accidentally modify RELEASE-PRODUCTION

Configuring eRT Build Targets

Configuring Devman IoT Server Connections

./target/devman-config/

export DEVMAN_SERVER_DOMAIN=devman.inx-systems.net

export DEVMAN_SERVER_PROTOCOL=http

#export DEVMAN_SERVER_CERTS_FULL_CA_BUNDLE=yes

#Server config & credentials for uploading OTA updates

export DEVMAN_UNAME="inx"

#export EHS_PRODUCT_NAME="ambifier"

export DEVMAN_SERVER_NAME=sandbox

DevmanSecurity

./certs/client/

Notes

Broken builds:

linux_arm64_gtk_gst_gg_debian10
linux_arm64_gtk_gst_gg_debian11
linux_x86_64_clang_gg_debian10
linux_x86_64_clang_gg_debian11
linux_x86_64_clang_gtk_gst_gg_debian11
TARGET=linux_x86_64_clang_gtk_gst_gg_debian11

eRT Software Structure Overview

Software module rationale

Common

  • Most Components should be implemented here unless they are target specific.
    • These should only reference all library code (including clib) via the HAL prefixed API. There are exceptions here such as the YAJL json parser I think.
  • All the business logic of the kernel/framework should be implemented here
    • These should only reference all library code (including clib) via the HAL prefixed API .
  • HAL
    • Provides all the header (API) prototypes and code that Components and the Kernel reference.
    • Should USE (and should check for) EHS Target prefixed versions of all 3rd-party libraries
      • Note this probably hasn’t been done for some complex libraries, such as Libcurl, which are referenced directly, largely because there is little likelihood of using an alternative library for the features (though this is subjective).

Kernel

  • Prebuilt kernels are found in ert-build-support repo.
  • The source (inx only) shares the same build system script structures (duplicated), and a platform for each os-arch is required, as the kernel SHOULD depend only on the host OS and ARCH and not on any middleware or other IO dependencies.

Target

Todo
Describe the rationale for when we use different HAL prefixes:

  • EhsT_XXXXX
    • EhsH_XXXXX

eRT Build Dependencies

Ert-build-support

This is tool chains, libc and basic

See above

ert-contrib-middleware

Some eRT components have 3rd-party library dependencies. A limited number of contributed library sources live within the ert-components repository, but the vast majority are contained in git large-file-support repos: ert-build-support holds toolchains, libc & optional kernel headers, while the remaining middleware libraries are maintained in ert-contrib-middleware (e.g. networking, media or ML libraries).

More details of this can be found below, with reference manuals in the documents below:

CI System - Design & Implementation Notes [Archive]
inxware Software Build Release (Products )

The intention of the EHS-build-support binary structure is that the HOST identifier should be selected when doing an EHS build on a supported build machine architecture.

The target output architecture is selected automatically from the canonical names given in the target directory platform and os-arch make scripts (e.g. OS type, CPU architecture and, optionally, any SDK-specific bits). These are typically the tail end of the directory name structures.

Overview of Key config.mk build parameters

The details of the remaining build parameters are provided below, and a curated list of build parameters is maintained here: eRT Build System Parameters.

A list of currently active (regression tested) targets is maintained in the following spreadsheet : inxware-ert target status.

| Platform Name | OS / Arch | Onprem Repos | Public (github) Repos |
|---|---|---|---|
| linux_amd64 | linux amd64 (64 bit) | working | Working (*generic docker) |
| linux_amd64_gtk_gst | linux_amd64 (64 bit) | working | Working (*generic docker) |
| linux_x86_gtk_gst | linux / x86 (32 bit) | working | No - Needs Docker file |
| linux_x86_64_clang | linux_x86_64 (64 bit) | | |
| linux_armv7l_clang | linux_armv7l | working | |
| linux_armv7l_gtk_gst | linux_armv7l | Not checked since refactor | |
| nxp_arm_inx_hrcdispv1_ehs_debug | FreeRTOS_arm | working | |
| android | linux-android_arm | | |
| unity | unity | | |
| linux_armv7l_clang_gtk | linux_armv7l | Working | |
| linux_x86_64_clang-host | linux_x86_64_clang-host | Not working | |
| esp32_freertos-xtensor-base | xtensor-esp32_freertos | | |
| linux_android_arm64_unity-lib | linux_android_arm64 | | |
| linux_android_arm | linux_android_arm64 | | |
| linux_android_arm_p64_a6_ambifier | linux_android_arm64 | | |
| linux_android_arm_p64_h6_player-sandbox | linux_android_arm64 | | |
| linux_android_arm_p64_h6_player-sandbox-debug | linux_android_arm_p64_h6_player-sandbox-debug | | |
| linux_android_arm_p64_h6_unity-tellisign | linux_android_arm (32/64bit) | | |
| linux_x86_64_clang_gtk | linux_x86 (64 bit) | | |
| linux_x86_gtk_gst_ambifier2_debian11 | linux_x86 (64 bit) | | |
| win_x86_gtk_gst | win_x86 (32 bit) | | |

(NOTE THIS TABLE NEEDS POPULATING!!)

Configuring Debug Levels

eRT-components

The following variables can be set in config.mk:

The debugger console is required for targets that will be used with the Lucid tools for local-connection app updates and debugging:

EHS_DEBUG_TCPIP_CONSOLE=yes

See the EHS Console Specification for more information on the Lucid-console logging system.
To enable local logging on the device, the following should be set to log to stdio or to a local file.
Todo: we need an additional setting to enable/disable file logging, as we don’t want this left on accidentally:

EHS_RUNTIME_LOGGER_ENABLED=yes

The AV systems can be debugged with:

EHS_DEBUG_AV=yes

EHS-Kernel

Entry/exit tracing for all functions in the kernel, and for some in the eRT HAL, can be enabled with

EHS_DEBUG_TRACE=y

This enables all logging functions, e.g. EhsError().

Adding New Target

To add a new target to the EHS-kernel, one needs to create a platform and an os-arch (if not already defined for this target):

target/platform/<target specific>/

target/os-arch/<target specific>

Note that there can be multiple variants of the os-arch specified in the platform.
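The two directories above can be scaffolded with a small script. This is a hypothetical helper, not part of the build system; the stub parameter names mirror the “A Typical config.mk File” section of this guide:

```shell
# new_target PLATFORM OSARCH ARCH OS - create the platform and os-arch
# directories for a new target, with a minimal config.mk stub.
new_target() {
    mkdir -p "target/platform/$1" "target/os-arch/$2"
    {
        echo "# Platform: $1"
        echo "EHS_ARCH=$3"
        echo "EHS_OS=$4"
    } > "target/platform/$1/config.mk"
    echo "created target/platform/$1/config.mk"
}

# e.g. from the ert-components checkout:
#   new_target my_board_gtk linux_myarch myarch linux
```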

Configuring New eRT Platforms

eRT porting can often be achieved using existing target support, largely by configuring the ert-build system as described in this document. To support new target architectures, peripherals and operating systems it may be necessary to “port” eRT’s source code using the porting layer (HAL), as described in this document: eRT Architecture & Porting Guide - Public

Each platform supported by ert has a directory under ./ert-components/target/platforms/

Each directory contains a few configuration files, but the most important is the config.mk file.

The parameters defined here not only provide conditional build directives in the ert source code, but also help the build system identify the correct toolchain and libraries that the build should be carried out with.

  • The goal of any level of build configuration is to achieve “orthogonality”, i.e. we avoid conflating configuration parameters with other happenstance factors by making configuration items very specific to their particular variant, especially for conditional-build C preprocessor macros.
  • A second goal is to minimise complexity, which can conflict with the above orthogonality if configurations are made more detailed than necessary. We attempt to reduce configuration complexity by aggregating
  • NOTE: The build paths constructed from the config.mk file parameters have often been overridden, which often hides the underlying automatic methods that were initially intended. We should try to revert to the “preferred” automatic method in this project.

Key Platform config.mk Parameters Overview

A detailed description of all eRT build configuration parameters is (should be) maintained in the following spreadsheet:

eRT Build System Parameters

  • We need to review the above, in both presentation and content, and identify any anomalies in our build system configurations.

The most generic conditional build configuration in the eRT source code is controlled by the OS and Architecture parameters EHS_OS and EHS_ARCH, which may have the following values:

$EHS_ARCH-$EHS_OS : Selects the ./target/os-arch/ resources.
: Provides default path for the toolchain (ert-build-support)
: Provides default path for middleware (ert-contrib-middleware)

Some possible values:
EHS_ARCH=x86,amd64,arm,arm7x
EHS_OS=none,freertos,nxp,linux,win32

Because gcc and many open-source middleware libraries are built and identified using slightly more specific CPU and OS designators, the parameters EHS_GNU_ARCH and EHS_GNU_OS can be used to be more selective within the set of toolchains and middleware options available.

$EHS_GNU_ARCH-$EHS_GNU_OS : overrides the toolchain and middleware paths if a specific GNU naming-convention path is used.

E.g. the conventions of this on linux platforms can be found using the following:

EHS_GNU_ARCH (see uname -m)
EHS_GNU_OS (see uname -i)

However the GNU arch conventions are also used for gcc, clang and mingw and libc for non-linux targets too.

It is also possible to select different contributed middleware libraries to build against with further config.mk override parameters, which are discussed in more detail below.

Toolchain Selection

As mentioned above, the default toolchains are selected on the basis of the target OS, but they are also arranged by which host they can run on. <HOST_OS> is the build system’s architecture string, as given by uname -i on the build host.

Default Toolchain

If TOOLCHAIN_NAME, CC_OVERRIDE, or EHS_GNU_* options are not set the following base toolchain will be used:

../ert-build-support/toolchains/<HOST_OS>/<EHS_ARCH>_<EHS_OS>/

The base toolchain may be soft-linked in the toolchains directory to a more specific version so that the default can be easily changed to more up to date compilers if required (TODO in ert-build-support).

If either EHS_GNU_OS or EHS_GNU_ARCH is set, it overrides the respective EHS_OS or EHS_ARCH parameter in the toolchain path. This allows matching of toolchains to the GNU-specific formats used by compilers, and also when building middleware packages with autotools, for example.
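The path selection described above can be sketched as follows. This helper is only an illustration of the override rule (the real logic lives in the makefiles); the variable names come from this guide:

```shell
# toolchain_path HOST_OS EHS_ARCH EHS_OS [EHS_GNU_ARCH] [EHS_GNU_OS]
# The GNU parameters, when set, replace EHS_ARCH / EHS_OS in the default
# toolchain path described above.
toolchain_path() {
    arch=${4:-$2}
    os=${5:-$3}
    echo "../ert-build-support/toolchains/$1/${arch}_${os}/"
}
```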

Overriding Toolchains

There are cases where a specific toolchain is used for a target. Some toolchains use different naming conventions to the standard gcc format, particularly for cross-compilation.

The path to the toolchain binaries within ert-build-support/toolchains/ can be explicitly set by setting the variable TOOLCHAIN_NAME to the path.
E.g.

TOOLCHAIN_NAME=arm-none-linux-gnueabi-4.4.6

If the build is to take place in a specific Docker or Vagrant environment and the default host toolchain in the PATH should be used, then set TOOLCHAIN_NAME to “HOST”:
TOOLCHAIN_NAME=HOST

The filename of the compiler and linker can also be explicitly set using CC_OVERRIDE if it is not gcc:
CC_OVERRIDE=arm-none-linux-gnueabi-gcc

libc Selection

libc defaults to using the sysroot directory of the toolchain; however, for toolchains without this support, a specific directory can be defined for the libc headers and libraries.

EHS_CLIB_OVERRIDE_PATH can be used to choose a different sysroot for a target under the ./ert-build-support/support_libs/target_libs/${EHS_CLIB_OVERRIDE_PATH} path.

E.g.
EHS_CLIB_OVERRIDE_PATH=arm-linux-gnu-glibc-2.12.1-ti-blaze-ubuntu-10_10

This may, for example, be copied from a 3rd-party distro target’s filesystem to build against, if build headers and libraries cannot be reproduced any other way.

EHS_SPECIAL_CLIB_EXT (this should start with a delimiter, e.g. -v2. TODO: this is for special extensions to middleware build libraries - add these if needed). This applies to both libc in ert-build-support and the ert-contrib-middleware.

TODO: add a flag in platform.mk to stop the targetenv scripts adding the target libraries to core lib and/or cslib/.

Feature Selection

See eRT Build Variant Management section in eRT Porting Guide - Public for more details.

ert-component builds may include different features for different targets and satisfy certain features with different technologies and 3rd party middleware support.

During porting certain features and function blocks can be included as “Stubbed” versions in cases where supporting the features is difficult or deferrable, but you still want other dependent features and apps to operate without that functionality.

Make files such as os-arch/target.mk and platforms/../config.mk files will typically use the following method for different features:

EHS_<FEATURE>_SUPPORT = {<specific technology>,none,stubbed}

The ehs.mk and components.mk files should generate C preprocessor macros accordingly, with the format:

EHS_<FEATURE>_SUPPORT__{<specific technology>,NONE,STUBBED}
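The mapping from makefile setting to preprocessor macro can be sketched as a one-liner. This helper is only an illustration of the naming rule above; the real generation is done by ehs.mk / components.mk:

```shell
# feature_macro VAR VALUE - e.g. a makefile setting EHS_GUI_SUPPORT=gtk
# becomes the C preprocessor define EHS_GUI_SUPPORT__GTK.
feature_macro() {
    echo "-D$1__$(echo "$2" | tr 'a-z' 'A-Z')"
}
```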

Examples

Graphics

EHS_GUI_SUPPORT=gtk/gdi/OpenGLE2/OpenGLE1_1/android_stub/fb

Audio Visual

EHS_AV_SUPPORT=gst,vlc
EHS_VIDEO_SUPPORT=yes (If you only want audio then unset this.)
EHS_MEDIA_SUPPORT=all (enables the media content handling toolbox, such as devman media and the SMIL parser. TODO: decide whether to combine this with the AV toolbox, in which case this can be removed from the config.mk files and the EHS_AV_SUPPORT variable tested instead.)

Networking

EHS_NETWORKING_SUPPORT=all (Enables system networking, not toolbox).
EHS_COMPONENT_NETWORKING_SUPPORT=all (enables the networking toolbox).

EHS_DEVMAN_SUPPORT=all (enables the core devman code)
EHS_DEVMAN_MON_SUPPORT=yes (enables the core device management system)

EHS_COMMS_API_SUPPORT=bsdsockets/winsock/lwip (Chooses the type of socket library. TODO this should be moved into the os-arch/*/target.mk, where it is implicitly set for all platforms )

EHS_COMMS_TASK=tcp_server_common (Always set this if you want debug capability)

Miscellaneous

EHS_TOOLKIT_DEPRECATED=yes (enables non-current component versions in case you want to deploy old apps to new devices without updating the app’s components.)

EHS_PERIPHERAL_DEVICE_SUPPORT=all

A Typical config.mk File

########################################################################################
# Target: x86 Linux Media Enabled Device
########################################################################################
#MUST SET the following for any component config:
EHS_ARCH=x86
EHS_OS=linux
#Optional Settings:
EHS_GNU_ARCH=i686
EHS_GNU_OS=linux-gnu
#use a specific legacy toolchain and kernel headers
KERNEL_VERSION=linux/2.6.35.9
TOOLCHAIN_NAME=i686-pc-linux-gnu-4.4.6
CC_OVERRIDE=i686-pc-linux-gnu-gcc
# Component Toolbox Options:
EHS_GUI_SUPPORT=gtk
EHS_AV_SUPPORT=gst
EHS_VIDEO_SUPPORT=yes
EHS_MEDIA_SUPPORT=none
EHS_NETWORKING_SUPPORT=stubbed
EHS_COMPONENT_NETWORKING_SUPPORT=stubbed
EHS_DEVMAN_SUPPORT=all
EHS_DEVMAN_MON_SUPPORT=yes
EHS_TOOLKIT_DEPRECATED=yes
EHS_COMMS_API_SUPPORT=bsdsockets
EHS_COMMS_TASK=tcp_server_common
EHS_PERIPHERAL_DEVICE_SUPPORT=all
# Optional Logging Settings:
EHS_DEBUGALL=true

To compile specific target_platform libraries, ert-contrib-middleware/target_libs/ should be present in the directory; otherwise the program will not be compiled.

Contributed Middleware Selection

For features that depend on contributed software that is built with its own build system and provides a C header & library stored in ert-contrib-middleware/target/libs/, the default path to the dependencies can be modified using additional makefile variables as follows:

COMPONENT_VARIANT - MUST be set if there are middleware dependencies. It can usually be set to component library sources that have more features than needed. (TODO: work out a way that, if we package DLLs into cslib/corelib, this can be a subset of all those possible for some targets. At the moment this is done by having different variants in ert-contrib-middleware/ with specific /target_libs, but soft links to a common ./build/ directory. The build directory is where the compiler looks for headers and libs to link against.)

Building with Docker

Each platform can (and should) be provided with a docker image identified in
./target/platform/<platform name>/Dockerimagename

This file contains a name of a dockerhub hosted docker image that should be used when running certain ert-component make commands such as:

make all_docker # same as make -j 8 all but in docker
make targetenv_<package type>_docker # uses the packager found in the dockerimage
# Or
make target_buildenv # Start DOCKER environment shell in pwd.

If a new (or updated) Dockerimage needs to be created, then the platform must also have a “Dockerfile” located in the platform directory. These can be published to Dockerhub to share them with other platform targets and other users, to ensure consistency across build systems and to reduce build times using Docker’s local caching. Working from Dockerhub-published images also avoids the uncertainties of building Dockerfiles at different times and geographical locations, where differences can be observed.

make publish_docker_image # Build new docker image and publish to Dockerhub

The script will publish to Dockerhub using inx's Dockerhub account (TBC!) and hence images can only be published by inx. The pushed images are however public AND SHOULD NOT CONTAIN ANY CODE!
Naming conventions for inxware Dockerhub images are:

inxware/<target arch-os>_<hosted distribution version>-<added packages>
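A small helper showing how an image name is composed per this convention; the three field values are placeholders, not real published images.

```shell
# Compose a Dockerhub image name following the convention
# inxware/<target arch-os>_<hosted distribution version>-<added packages>.
image_name() {
  arch_os=$1 dist=$2 pkgs=$3
  echo "inxware/${arch_os}_${dist}-${pkgs}"
}
image_name x86_64-linux ubuntu-22.04 gstreamer
```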

eRT Initialisation Sequence

  • [See porting guide information and notes from Xiaosheng]
  1. Ehs-Main()
    1. Init KernelHAL [EhsHSys_Init()] - IO only - files, tcpip devman,
    2. Init Kernel [EhsKSys_Init()] - Initialise data tables, memory management, timers.
    3. Load Version Information [];
    4. ----- Identify Components Available
      1. Add Component Modules [ EhsAddStaticModules() & EhsAddDynamicModules() ] Iterates through the static and dynamic modules
      2. Initialise Modules [EhsInitStaticModules() & EhsInitDynamicModules()] These functions are currently hardwired to specific toolbox init functions. Future Implementations should allow for anonymous module iteration.
    5. --- Initialise Application Runtime Environment
      1. Initialise Application Tables (app_data::EhsDataConnectionTable_init())
      2. CD to App Directory (Default Working Directory)
      3. EhsDataConnectionTable_resetMonitorFlags(void) (If debugging)
    6. Load SODL (parse_sodl.c::???) - populates the runtime tables
    7. -- Execute Application
      1. If new base app - Application::start() (Currently done always - aggregate apps are unimplemented)
      2. Start main execution Loop - (exits on new app?)
    8. Case: exiting for a new app start (redo from step v. in this state):
      1. (Possibly, in the future, re-iterate the component library for new dynamically linked components - will require closing of components.)
      2. Reset Modules to state 1.iv.b.
      3. Wait for tear down to complete EhsApplicationWaitForTearDown()(for Components that create their own threads)
      4. Reset the Application Runtime Environment: app_data::EhsApplicationReset():
        1. Ehs_cdTOaPP()
        2. Start Groups (looped in here!!!!)
        3. Create an initial event::EhsFunctionInstanceDataTable_triggerInitialEvent();
        4. Function instance data should be reset here??
      5. app_data::EhsDataConnectionTable_applicationReset();
      6. Loop to 1.v.

EHS Runtime File & Asset Structure

EHS bin


This is the main container for ehs executable code. It contains the subdirectories listed below plus the following files:

Entry

ehs/bin

Contents

run_ehs.sh (or run_ehs.bat)

This is the start script that sets up the OS environment, starts ehs, and also starts the aggressive OS-level devman update system.

Arguments

#1 NO_RESTART - stops the restart mechanism if ehs.exe crashes
#2 LIB_HOST - uses the host's clib DLLs rather than those in corelib
#3 DEBUG or GDB - starts ehs with logging to disk or under GDB, respectively.
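A hypothetical sketch of how a launch script could interpret these flags. The flag names are from the list above; the parsing logic itself is illustrative, NOT the real run_ehs.sh.

```shell
# Illustrative argument parser mirroring the documented run_ehs.sh flags.
parse_runehs_args() {
  restart=yes libs=corelib mode=normal
  for arg in "$@"; do
    case "$arg" in
      NO_RESTART) restart=no ;;    # disable the crash-restart loop
      LIB_HOST)   libs=host ;;     # prefer host DLLs over corelib
      DEBUG)      mode=debug ;;    # log to disk
      GDB)        mode=gdb ;;      # run under gdb
    esac
  done
  echo "restart=$restart libs=$libs mode=$mode"
}
parse_runehs_args NO_RESTART DEBUG
```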

ehs.exe

This is the ehs core executable (user-space kernel). It should be run as root, in which case it will gain high thread priority (in fact it may run at a Linux REAL_TIME_THREAD_PRIORITY).

Arguments

DEBUG (see run_ehs.sh)
SODL_PATH - path to new SODL directory (TBD - shouldn't this be at the same level as appdata?)

Modules loaded dynamically by EHS - TBC
EHS will reject applications with incorrect Module dependencies.

Entry

ehs/bin/cscore/
eRT-build-support files such as any libc overrides. Very seldom used these days!

Component Support Directories (Required on load)

---

Entry

ehs/bin/cslib/
contents are copied (on make targetenv) from
ert-contrib-middleware/target_libs/<target contrib middleware path> target-packages/*

runtime libraries, including for example plugin directories for middleware such as gstreamer. Dynamic libs are the typical 3rd-party Module support components.
These are typically standalone .so or .dll files required for EHS components.
EHS core should not be dependent on these libs.
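An illustrative version of the staging copy described above. Both directory paths here are stand-ins (the real source path is abbreviated in the text), and the copy rule is an assumption about what `make targetenv` does for cslib.

```shell
# Stand-in cslib staging step: copy middleware .so files into the target tree.
SRC=ert-contrib-middleware/target_libs/demo/build/lib   # hypothetical source
DST=staging/ehs/bin/cslib                               # hypothetical staging dir
mkdir -p "$SRC" "$DST"
touch "$SRC/libexample.so"        # stand-in middleware library
cp "$SRC"/*.so "$DST"/            # roughly what the targetenv rule does here
ls "$DST"
```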

Component Support Directories (Plugins)


Many 3rd-party support libraries require directories to contain plugin libs, extensions, metadata or scripting to run. Examples include VLC and LUA.
This may not be used at all any more. Plugins are usually included as subdirectories of cslib.

Entry

ehs/bin/csdir

eRT Runtime Tree - Linux example

Degrees of persistence:

Runtime Dynamic
Restart Application
Restart ehs.exe
Reboot Updatable

|-- **appdata/**
|   |-- default/
|   |   |-- t.sdl               [ default application if no other has been selected ]
|   |   `-- XXXX.XX             [ other application metadata files and resources, e.g. GUI files ]
|   |-- temp/                   [ like default, but where debugger applications are installed and run ]
|-- **bin/**
|   |-- runehs.sh               [ launch script that sets up the environment for eRT (ehs.exe) ]
|   |-- restartehs              [ OS-level script that should restart ehs ]
|   |-- reboot                  [ OS-level script for rebooting ]
|   |-- sys.crons               [ obsolete here? ]
|   |-- ehs.exe
|   |-- **inxlib/**
|   |   |-- custom_module1.inx  [ future option for toolbox plugin DLLs - not currently used ]
|   |   |-- custom_module2.inx  [ future option for toolbox plugin DLLs - not currently used ]
|   |-- **cslib/**              [ ert-contrib middleware ./target_support/ - for Linux, .so files ]
|   |-- **csdir/**              [ ert-build-support libraries - ideally this is empty ]
|-- devman/                     [ devman configuration and OS-level management scripts ]
|   |-- core/                   [ this needs updating with the config, certs etc. structure ]
|   |   |-- HWID_NETIP.inx
|   |   |-- devman_update.inx
|   |   |-- download
|   |   |-- getHWID-NETIP.sh
|   |   |-- run_dldata
|   |   |-- sys-timer.sh
|   |   `-- update.sh
|   `-- plugins/                [ these aren't really used any more, TBC ]
|       |-- 0
|       |   |-- devman_mon.inx
|       |   `-- download
|       |-- 1
|       |   |-- dev-x.sh
|       |   |-- devman_player.inx
|       |   `-- download
|       |-- 2
|       |   |-- dev-x.sh
|       |   `-- download
.
.
.
|-- sysdata/
|   |-- EHSVersion.nfo          [ this has been renamed version.nfo now? ]
|   |-- devman.crons            [ this doesn't seem to be used any more? ]
|   |-- ehs_tcpip.log           [ a log file that may be deleted now, TBC ]
|   |-- platform                [ a platform-type identifier like x86/linux/<somethingspecific> ]
|   |-- sys.crons               [ we shouldn't use this any more! ]
|   `-- version.nfo             [ version info created at build that is reported in various ways ]
`-- userdata/
    |-- configs/
    |   `-- devman-player/
    `-- media/

---

eRT Operating System Support (Supervisors)

Unity & eRT Supervisor

How to Build the Application (taken from Signage notes)

The application can be built using either the Unity IDE or command-line. For a full build, including build of the scheduler plugins, the command line method should be used.

Linux Command-Line Build

  1. Set up the source repositories. Clone the following repositories and make sure that all of them are located in the same directory.
  2. Install Unity3d Hub
    • Install the following dependencies:
      1. sudo apt-get install gconf2
    • Download the Unity Hub from https://unity3d.com/get-unity/download
    • chmod +x ./UnityHub.AppImage
    • Run the hub, log in and set up your licence (use the inx developer account) developer@inx-systems.com:HelloUnity101
    • Press the "activate new license" button if there are no licenses shown (choose personal use if this is the case).
  3. Navigate to EHS project and run ./makeEhs-unity.sh
  • NOTE: it takes a long time to build the first time, so allow at least ~30 min for the first build.
  • You can track the progress of the Unity build by tailing a log file, e.g.
    1. tail -f EHS/../TARGET_TREE/ehs_<your target>/log

Archive (Old Instructions to cherry pick from! And delete)

[ This needs updating but is a good format and has some good pictures! It is roughly right, but we now have all the elements documented better elsewhere that need bringing in. ]

Name changes (new name : as found in this document)
ert-components : EHS
ert-build-support : EHS-build-support
ert-contrib-middleware : ert-contrib-middleware

Feel free to update the text below as above and add any new structures for bare-metal OS builds (e.g. esp32, nxp-arm, etc.) that we discover.

Multi-Target Build System

EHS Code Structure

./Common/ - HW independent
./target/ - HW/library dependent
./target/platform/ - config. management is in target_build_scripts (bash) and target_platform (make configs) - largely config.mk

Build Tools

./ert-contrib-middleware

Contains toolchains and some core libraries such as libc - this should be sufficient to build eRT where no specific component dependencies are required (e.g. stubbed or excluded from the build).

3rd-party Source and Libraries

Git repo located at:

./ert-contrib-middleware/

Build against pre-built libraries in the relevant directory in target_libs.

These are typically built with a script for each component that will bring in any specific libs required. However, binary dependencies may also be added to this repository under the canonically named path:

The eRT build system should generate a library path to the prebuilds from the information in config.mk and os-arch make files such as:

ert-contrib-middleware/target_libs/arm-linux-androideabi-9-arm-none-linux-android-9-headless/build/

This may be included as a sysroot or as separate -I and -L compiler flags (see the build config report, produced with:

make chkconfig
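The path-generation step described above can be sketched as a small helper that composes the prebuilt-library directory from the target fields, following the directory pattern shown in the example path; the function name itself is an assumption.

```shell
# Compose the prebuilt-library search path from the arch-os and toolchain
# fields, matching the target_libs/<arch-os>-<toolchain>/build/ pattern.
target_libs_path() {
  echo "ert-contrib-middleware/target_libs/$1-$2/build/"
}
target_libs_path arm-linux-androideabi-9 arm-none-linux-android-9-headless
```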

Building Dependencies in the ert-contrib-middleware

The scripts are canonically named:
./inx_build_scripts/build_all_pkgs.sh
called manually when a new target or code change occurs (rarely).
This is given two parameters - HW and OS - and options, such as:

./inx_build_scripts/create_cs_rt.sh - called (this may be more specific than needed)
./inx_build_scripts/commit_cs_pkg.sh - which is called manually after verification.

and these produce the target library binaries.

Generally GNU autotools:
./configure --prefix=[abs staging directory] - located in ../target_lib_builds/$TARGET
make
make install

Target Library Binaries

../ert-build-support/[$CSPACKAGE]/

Figure: Process options to identify and create dependencies when building a new ert-components platform.

The EHS build system (EHS.git) usually requires the support of two other repos: EHS-build-support and ert-contrib-middleware, except when the host's installed compiler is used or a suitably pre-configured cross-compiler is in the search path. These repos are very large and will be downloaded automatically when certain build commands are executed, as described in the following sections.

Build Configuration

Each target type defines a variant. To produce a build variant, a platform file must be created in EHS/target/platform/config.mk using the following format:

Target variants have the following configuration fields, defined as environment variables in the

EHS make environment:
EHS_HW=(x86|SH4|PPC|...)
EHS_OS=(linux|win32|mingw|...)
EHS_RFSSIZE=(KBs)
EHS_RAM=(KBs)
EHS_COMMS_API=(NONE|bsdsockets|winsock|serial)
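A hypothetical config.mk fragment using these fields; all values are placeholders chosen for illustration, not taken from a real platform file.

```make
# Hypothetical target/platform/<name>/config.mk - values are illustrative only
EHS_HW=x86
EHS_OS=linux
EHS_RFSSIZE=65536
EHS_RAM=262144
EHS_COMMS_API=bsdsockets
```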

General HAL

Currently we have EHS_NETWORKING_SUPPORT=all; this is quite broad and limiting, and doesn't allow us to be fine-grained about what networking services are supported and what type of target technology is used to make them work.

What we should move to:

  • Start to implement this as a ticket - hopefully less than a couple of hours work

EHS_NETWORKING_SUPPORT=(all|none|or something specific) - "none" will remove all code and dependencies on networking and no networking support of any kind will be provided (probably just limited to IP networking for now, but might include LoRaWAN and Thread/Matter eventually).

We don't currently support "none" for no networking, which we probably should. We only support leaving it undefined.

We also need to review the following and refer only to the spreadsheet
eRT Build System Parameters rather than these probably-wrong duplicates:
EHS_NETWORKING_HTTP_SOCKET=(bsd,winsock,lwip,stub,none)
EHS_NETWORKING_HTTP_CLIENT=(libcurl,stub,none)
EHS_NETWORKING_HTTP_SERVER=(lwip,stub,none)
EHS_NETWORKING_MQTT_CLIENT=(lwip,stub,none)

If any of these are set while EHS_NETWORKING_SUPPORT is disabled we should generate a #error build failure to avoid a spammy compile error.
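One way the proposed guard could be realised at the make level (the document proposes a preprocessor #error; this make-time equivalent fails even earlier and is purely an illustrative sketch):

```make
# Illustrative make-level guard: fail fast if an HTTP client backend is
# selected while networking support is explicitly disabled.
ifeq ($(EHS_NETWORKING_SUPPORT),none)
  ifneq ($(strip $(EHS_NETWORKING_HTTP_CLIENT)),)
    $(error EHS_NETWORKING_HTTP_CLIENT is set but EHS_NETWORKING_SUPPORT is 'none')
  endif
endif
```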

Other notes: the curl-specific code in Common/HAL/curl/ should be moved to /target/Component-HAL/url/curl/ (to be done at a later date!) and a stub version made available for completeness. The /Common/HAL/url/ code should provide the same API as it does now (perhaps the CURL object needs to be abstracted as a class typedef so it is build-time polymorphic, or a void*?).

Component HAL

EHS_GRAPHICS_SUPPORT={NONE,GTK,GDI,STAPI,DIRECTFB,SDL,LVGL}
EHS_AV_SUPPORT={NONE,VLC,GSTREAMER,STAPI}
EHS_VECTORGRAPH_SUPPORT={NONE,SVG}
EHS_DATABASE_SUPPORT={NONE,SQLITE}
EHS_DEVMAN_SUPPORT={NONE,DEVUPDATE,DEVMANMON,ALL}
EHS_MEDIA_SUPPORT={NONE,all,smil,dlna}
and as configuration strings for dependency builds:

OS_HW_GRAPHICS

Dependencies on components are required for dependency builds.

These strings are typically also identified in platform descriptors, e.g.:
OS_HW_GRAPHICS_NETWORK - the order of these fields is not critical, and missing entries should be treated as equivalent to none.

Build Outputs

Linux & Windows

ehs.exe

Build steps

make prepdeps
make all (or make all_docker)
make targetenv

Variables: $TARGET defines the target type

upload_sypatch_devman_server

Target ehs tree

Boot Image

TARGETENVTREE

Legacy & Migration

C-code Module Responsibilities

app_data

Data (Component) Connection Table (EhsDataConnectionTable) creating, initialising, resetting.
Group Processing Table (EhsKEGroupTable) - Scheduling Spec for the Application
Ehs Trigger Table (EhsTriggerTable) initialise the FIFO buffers

parse_sodl no longer calls the init function.
Component-generated threads (indexing start and stop, and tear-down).

Target Platform Packaging

Android APK

TODO - the following needs to be moved somewhere else?

│ └── system
│ └── utils
│ ├── downloader
│ ├── downloader.apk
│ ├── downloader.jks
│ └── password.txt

Debian .deb

ESP32 IDF

ESP32 and ESP32-S3 builds are for the following target environments.

Espressif-series (i.e. esp32, esp32s3) software is based on `FreeRTOS`. The filesystem is based on littlefs.

ESP32

build

./configure esp32_freertos-xtensor-base
make

Flashing

  1. In the EHS-kernel and ert-components repos, set the target to esp32_freertos-xtensor-base.
  2. Run ./build-esp32-freertos-ehs.sh under the “ert-contrib-middleware/inx_build_scripts” directory
  3. Under the “ert-components” repo, run make clean; make && make targetenv_esp32_docker
  4. Then you can flash the built image to the esp32 with sudo ./esp32_flash.sh.

ESP32S3

Build

# configure and build esp32s3 base target

./configure esp32s3_freertos-xtensa-base
make clean
make targetenv_prebuild
make targetenv_littlefs
make all_docker
make targetenv_esp32s3_docker

./configure esp32s3_freertos-xtensa-hrdcv2B-ehs-caravan-Willerbys-inx-devman-debug
make prepdeps (optional)
make targetenv_version (optional)
make clean ; make targetenv_prebuild && make targetenv_littlefs && make all_docker && make targetenv_esp32s3_docker
make targetenv_upload_ota # upload to the server

# flash and log to file
./scripts/build-deploy/esp32s3/esp32_flash.sh && screen -L -Logfile logfile /dev/ttyACM0 115200

Flashing

  1. Pull the latest changes into ert-components and ert-contrib-middleware.
  2. Under the ert-components repo, run ./configure esp32s3_freertos-xtensa-base.
  3. Under the ert-components repo, run make clean ; make all_docker ; make targetenv.
  4. Connect the Micro-USB cable to the connector labelled as “USB”. Your device node should be /dev/ttyACM0
  5. Under ert-contrib-middleware/contrib/esp-idf/esp-idf-4.4.4/, run . export.sh.
    (If you are doing this for the first time, make sure to run ./install.sh first.)
  6. Under TARGET_TREES/ehs_env-esp32s3_freertos-xtensa-base/bin, run
esptool.py --chip esp32s3 elf2image --min-rev-full 0 --max-rev-full 9999 -ff 80m -fm qio -fs 8MB -o ehs.bin ehs.exe; esptool.py --chip esp32s3 --port /dev/ttyACM0 -b 460800 --before default_reset --after hard_reset write_flash -fm dio -fs 8MB -ff 80m 0x0 /path/to/ert-contrib-middleware/target_libs/xtensa-esp32s3_freertos-xtensa-esp32s3-elf-4.4.4/build/lib/bootloader.bin 0x9000 /path/to/ert-contrib-middleware/target_libs/xtensa-esp32s3_freertos-xtensa-esp32s3-elf-4.4.4/build/lib/partition-table.bin 0x10000 ehs.bin && screen /dev/ttyACM0 115200

(If it fails to flash, press “BOOT” button and “RESET” button, then release “RESET” button, finally release “BOOT” button to enter the bootloader mode. After the flashing success, click “RESET” to reset the board.)

# add this to screen command in order to log to a file
screen -L -Logfile logfile /dev/ttyACM0 115200

Erase flash memory

esptool.py --chip esp32s3 --port /dev/ttyACM0 erase_flash

Debugging crashes

xtensa-esp32s3-elf-addr2line -pfiaC -e ehs.exe ADDRESS # e.g 0x4200b062

see:
https://docs.espressif.com/projects/esp-idf/en/latest/esp32s3/api-guides/tools/idf-monitor.html

Config info

| Description | Memory allocated | Location |
|---|---|---|
| Main Loop | 3584 | contrib/esp…/ert…/sdkconfig |
| System Event | 2304 | contrib/esp…/ert…/sdkconfig |
| Timer Task (Optional) | 3584 | contrib/esp…/ert…/sdkconfig |
| EHS MAIN | 20000 | target_main.c |
| TCPIP | 4096 | target_main.c |

in ert-contrib-middleware/contrib/esp-idf/esp-idf-4.4.4/ert_config_files/esp32s3_freertos:
CONFIG_ESP_INT_WDT_TIMEOUT_MS=300
ESP32-S3 Series Datasheet

Windows 10/11

Windows Unity Build (this is the old way! New way is described at the start of this doc)

Build Windows Unity EHS plug-in

./configure win_x86_unity
make prepdeps
make all
make targetenv

This produces the .so file that gets renamed later.

If you have changes to the C# Unity code you will need to rebuild the Unity project.

We currently configure server and certificates manually - not in ert-components.

Add EHS DLL plug-in to Unity 3d Project

Make sure you have Unity Hub with a license and Unity 2019.4.40 (LTS) installed on your windows device.

Next, from the Unity Hub open this project “EHS/target/os-arch/android_ALL/Unity_EHS”

To replace the ehs plugin simply make sure it’s renamed from ./TARGET_TREES/ehs_env-win_x86_unity/bin/ehs.exe to libnative-activity.dll , then drag and drop dll file to Assets/Libs/win_x86 as shown in the image below.

Build Unity 3d Project Windows

Before doing the following you need to:

  1. Download an example Windows build: https://drive.google.com/drive/folders/1T-BRbE6N3U7zZbWF6IBFOygUGugPicXM

Packaging the unity build
Open the build settings dialog from File->Build Settings… Next, make sure that you set the Target Platform to Windows and Architecture to x86. (Do not use x86 64-bit, as the plugin DLL is built for 32-bit Windows for now.) See the screenshot below for reference.

Click Build and navigate to the folder where you'd like the project to be built (e.g. create a SignageWindowsBuild folder somewhere in your file system).

Copy DLLs and EHS Resources to the project folder

After building the Windows Unity project, your project folder will contain the following files and folders.

Make sure you copy all DLLs required by EHS plugin from ./TARGET_TREES/ehs_env-win_x86_unity/bin/ to the root of your app folder (e.g. SignageWindowsBuild)

Assuming that your EHS plugin is configured to work from any directory, create this folder in the root folder (e.g. SignageWindowsBuild)

ehs_data

Next copy following directories

appdata devman sysdata userdata

from ./TARGET_TREES/ehs_env-win_x86_unity/ to ehs_data/ in your build root directory.

Next, copy Signage app from
apps/customer-apps/SimpleSignOn/sso-unity-v1.0.0/export
to
SignageWindowsBuild/ehs_data/appdata/default

Next, copy certs from DevmanSecurity repo to SignageWindowsBuild/ehs_data/devman/core/certs

In theory you should now be able to run TELLISIGN.exe and upload playlists from devman to it.

Issues that may need to be fixed

Make sure the URL is https, not http, in
SignageWindowsBuild/ehs_data/devman/core/config/DEVMANURL.000
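A hedged helper for checking this: only the config file path comes from the text above; the check function and URLs are illustrative.

```shell
# Report whether a devman URL uses https, as required above.
url_is_https() {
  case "$1" in
    https://*) echo ok ;;
    *)         echo "insecure: $1" ;;
  esac
}
url_is_https "http://devman.example.com"    # hypothetical URL, flagged
url_is_https "https://devman.example.com"   # hypothetical URL, accepted
```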

=======================================================
Other info on the signage should be specified in this document

https://docs.google.com/document/d/1pdd-2uhXRIFGtSfz114ihcWe4u_1_dZaxC08jW3iOno/edit#heading=h.bqhjzp48ygz9

=======================================================

Target Flashing

Xiaosheng An - please see this section and improve it for flashing code onto the NXP devices.

NXP-Kinetis

NXP parts can be built with NXP's IDE (an Eclipse-based tool), but we also have a full set of command-line flashing and building options in eRT (and the HRDCv1's monolithic firmware).

Unit QA & calibration Guide HRDx

Other background information (we should extract any relevant information for flashing binaries here):
HRDx Test and Production Software Setup

Windows MINGW32 remote debug using Linux Visual Code

  1. Install WSL (Windows Subsystem for Linux) on your Windows machine, then install the following packages:
    sudo apt install gdbserver

eRT Regression Testing

TODO

See also the eRT Component Test System, which was written up in parallel while the below was being implemented. The file-based method below is along the same lines, and we may migrate it to a function-block method that will allow more reporting and less complex applications in the future.

  • Make the regression testing scripts more uniform:
    - [ ] Multi-platform build tests are run as a script from ./SystemTests/CI/ - should any bash scripts we use for per-target runtime testing on Linux also be in here? (Not sure where they are currently - it's not really a targetenv process, so they shouldn't be in there really.)
    - [ ] make targetenv_run_tests should be called make test_run_components
    - [ ] Create a new make target as a place holder for now for running a single profile-specific smoke test that includes some level of stress and testing a full app, including kernel and platform features. make test_run_smoke
    - [ ] Move the function blocks specific tests apps in ./tests/root/ to the function block’s test directories
  • Should we implement the test-reporting function block as above before creating too many more regression test apps? The main advantage of the proposed approach is that it can run on any target type and we can get it to report status to Devman, as reading flag files off targets will not generally be possible or likely to succeed in most cases.
    - [ ] Same applies to the multi-target build regression system. This should also build all the test targets with a multi-functional smoke test app that runs as many function blocks as possible (test cases for each target profile), which can also report the results to Devman if we can find a good way of deploying the new builds to test devices (e.g. using their OTA update methods).

eRT Regression testing is carried out in up to 3 stages:

  • Multi-target build regression tests (a relatively quick test to run during development and refactoring):

./SystemTests/CI/regression_test-published-only.sh # we will change this to accept a profile argument for the targets to run instead of the “published only”.

  • Single target component regression tests - runs all the component tests

make targetenv_run_tests # (see proposed change of name above!)

Run Tests

At the moment the tests can only be run for the Linux eRT build-host targets.

./configure linux_x86_64-lucid-debian11
make clean; make all_docker
make targetenv_run_tests

The test output in the terminal should look like this:

Check Results

All test results are saved to this directory with the following structure:

../TARGET_TREES/TEST_RESULTS/results/

|-- <test name>
|   |-- expected_result.txt (contains pre-generated expected results of the test)
|   |-- test_result.txt (result generated by the test, provided eRT runs OK)
|   |-- test_stdout.txt (eRT stdout logs)
|   |-- passed (this flag gets created when expected and generated test results match)
|   `-- timeout (the test timed out before any results were available)
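The flag files above lend themselves to simple scripting. This is a sketch of how a wrapper might classify one result directory; the classifier function and the demo directory are assumptions, only the flag-file names come from the structure above.

```shell
# Classify a result directory using the passed/timeout flag files.
classify_result() {
  if   [ -f "$1/passed" ];  then echo PASS
  elif [ -f "$1/timeout" ]; then echo TIMEOUT
  else echo FAIL
  fi
}
mkdir -p demo_results/sometest        # stand-in result directory
touch demo_results/sometest/passed    # simulate a passing test
classify_result demo_results/sometest
```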

Create New Test

  1. Using Lucid open a template project located in ert-components/tests/TEST-TEMPLATE
  2. Use ‘Save Project As’ to create a new test with a unique name from this template. For now store it in ert-components/tests/root/core or any other category e.g. network. WARNING! - this is a temporary location and all tests will be moved to Common/Components (once all old tests placed in there are sorted out)
  3. Close the TEST-TEMPLATE project; now you can add function blocks to your new test project. Make sure the following events are fired for the start, write, and end of your test.
  4. To generate your project results, make sure the directory ~/inxware/inx-tests/ is present (it gets created when running targetenv_run_tests, or it can be added manually) and empty (clear it if it has old data). Next, 'Run' the project in Lucid and, once successful, check the results file content and copy it to the root of your new project, e.g. ~/inxware/inx-tests/results/test_result.txt -> ./ert-components/tests/root/core/NewTest/test_result.txt
  5. That should be it. Run the tests using targetenv_run_tests to see if your new test passes.

Example of a working test

Core Components

Events & Triggers

State Management

  • STATE - Represents a State in Lucid
  • state_condition - Event driven state condition --> transition and actions
  • state_debug - To debug state machines this function block is required.
  • state_manager - Each state machine is defined by a State Manager

Array & Data Structures

Buffers & Queues

Primitive Data Constants

Data Converters

Boolean Logic



Mathematics

Algebraic Evaluation

Mathematical Operators

Inequalities

Trigonometry

Other Functions

Data Selection

  • indexed_mux_int - Indexed Mux Int
  • indexed_mux_str - Indexed Mux String
  • map_int - Map Int
  • mux_1b - mux_1b
  • mux_1i - mux_1i
  • mux_1r - mux_1r
  • mux_1s - mux_1s
  • mux_2b - Multiplex Two Input Bool
  • mux_2i - Multiplex Two Input Int
  • mux_2r - Multiplex Two Input Real
  • mux_2s - Multiplex Two Input String
  • mux_3b - Multiplex Three Input Bool
  • mux_3i - Multiplex Three Input Int
  • mux_3r - Multiplex Three Input Real
  • mux_3s - Multiplex Three Input String
  • mux_4b - Multiplex Four Input Bool
  • mux_4i - Multiplex Four Input Int
  • mux_4r - Multiplex Four Input Real
  • mux_4s - Multiplex Four Input String
  • mux_8b - 8-Input Indexed Boolean Multiplexer Function Block
  • mux_8i - 8-Input Indexed Integer Multiplexer Function Block
  • mux_8r - 8-Input Real Number Multiplexer Function Block
  • mux_8s - Number Multiplexer Function Block
  • num_mux - Numeric Multiplexer Function Block

Data Processing & Parsers

Database & Storage

Demultiplexers

String Functions

File Operations



Basic IO Components

GPIO & Hardware I/O



Graphics and UI Components

GUI & User Interface

User Input

  • keypress - Reads key presses & control keys

Unity & Web Integration

  • inx-unity - Provides media and animation widget interface
  • unity2 - Unity 3D
  • webkit - JavaScript/WebKit Interface (Obsolete)

Language & Localization



Media Components

Audio & Media

Digital TV & Media Control



Communications Components

TCPIP Network & Communication

Wireless & LPWAN Networks

Fieldbus Comms



Digital Signal Processing

  • ADC Polled Analogue to Digital converter.
  • ADC_continuous Advanced ADC supporting clocked ISR modes and advanced signal averaging.
  • FFT8 Fast Fourier Transform of 8 bit binary input data
  • FIR8 Finite Impulse Response filter for 8 bit binary data.
  • IIR8 Infinite Impulse Response filter for 8 bit binary data.
  • [calibrate](ADC calibrate) - Calibrates the ADCs


Control Systems Components

PID Controllers



Machine Learning & Machine Vision

  • mv_camera - Provides access to camera input data image streams
  • mv_idsplay - Renders camera image streams.
  • mv_resize - Resizes an image using a given interpolation method
  • mv_crop - Crops an image to a given width and height at a given offset
  • mv_apriltag_reader - Plain Old Programming AprilTag Reader
  • ml_tflite_inference - Machine learning model inference.
  • ml_osvm - Online iterative machine learning (training & inference).


Platform Components

System Utilities

  • reboot - Reboot the device
  • rtc - RealTimeClock - provides date/time from RTC device or OS.
  • rtinfo - RuntimeInfo (e.g. MAC/IP address, memory,... )
  • scheduler - Weekly Scheduler
  • system_exec - Executes linux shell commands
  • rng - Random Number Generator (may use hardware RNG)

Time Components

Application Management

Over the Air Update (OTA)

  • ota - OTA Function Block for updating firmware
  • ota_data_parser - Assembles OTA data files from Devman.


Non-functional Components

Some function blocks that can be used in a Lucid app are for visual/organisational purposes only and do not translate into any executable ert-components.

Sub System Input/Output Ports

Note: this file is autogenerated from ert-config help files and may not currently be complete or properly categorised!
