# Build Machine & System Requirements

In general, inxware runtimes are built on Ubuntu 22.04 machines, or on any machine that can support the following packages:

* **build-essential** (GNU Make)
* **Git** & Git-LFS
* **Docker** (a standard installation that can mount the host with synchronised permission groups)

*Other dependencies and tools should be installed using specific methods, e.g. the eRT runtime environment is created by running* **make prepdeps** *in the ert-components repo (see below). This installs the remaining requirements on a Debian machine, though we aim to use Docker images as much as possible to contain tools (BUT NOT CODE!). There may be other dependencies needed for building the tools etc. on Windows.*

# eRT Build Overview

If you are looking to build an existing production target, go to this [section](#production-builds). Otherwise, to build a target from scratch and understand each step, read on.

A typical sequence for building a complete and configured eRT package consists of the following steps:

```
./configure <target>
make prepdeps           # Optional: downloads dependencies needed for some targets
make all_docker         # Compiles & links the eRT C/C++ code to an exe or dll
make targetenv          # Aggregates all deployed files into a staging directory
make targetenv_version  # Optional: only after QA, and mandatory for release
make targetenv_package  # Optional: calls the relevant packager per the platform config
make upload_xxxx        # Optional: deployment to a server for OTA update
make install_via_xxxx   # Optional: deployment directly to a device
```

## Software Dependency Locations

| Purpose | Repo | Path | Notes |
| :---- | :---- | :---- | :---- |
| eRT source | | | |
| eRT porting | | /target/ | |
| eRT build | | ./Makefile + specific .mks | |

![][image1]

Also see the following document for more details on eRT's architecture: [eRT Architecture & Porting Guide - Public](https://docs.google.com/document/d/1cD-U7T4-0Wmf99GcKDhZS0xlJb0kGqrtDSc0TjGL9D8/edit#)

# Inxware Build Processes

# Building a Pre-configured eRT Platform

This section should provide only the information necessary for an internal inx technical product manager (i.e. not just a developer) to be able to create a release of inxware-based products. It gives a general overview of the process and components involved, and of how things SHOULD work.

### Starting from Scratch

System requirements: Ubuntu 22.04 recommended (64-bit essential).

**Warning!** Check your ~/.ssh/ for the xxxx.pub file. If it is rsa256 you can either remove it (if not already in use) or configure an additional key for the URL below. A new key can be generated with `ssh-keygen -t ed25519`.

The initial system preparation involves the following steps:

```
mkdir inxware
cd inxware
sudo apt-get install git build-essential
git clone ssh://tech-data@dev.inx-systems.net:8822/home/inx-data/data/Repos/ert-components.git
cd ert-components
make prepdeps
```

The final step will check out two other (large) git-lfs-enabled repos into the inxware directory. It may take 10 minutes or more and requires >20GB of storage.
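For convenience, the same preparation can be collected into a one-shot script. This is a minimal sketch assuming a fresh Ubuntu 22.04 host and the repository URL above; the key handling is simplified (see the warning about rsa256 keys):

```bash
#!/usr/bin/env bash
# One-shot environment preparation (sketch; steps as documented above).
set -euo pipefail

# The clone URL below needs an ed25519 key (see the warning above).
[ -f ~/.ssh/id_ed25519 ] || ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 -N ""

sudo apt-get install -y git build-essential

mkdir -p ~/inxware && cd ~/inxware
git clone ssh://tech-data@dev.inx-systems.net:8822/home/inx-data/data/Repos/ert-components.git
cd ert-components
make prepdeps   # checks out the two large git-lfs repos; needs >20GB free
```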
Once your environment is set up you can test a build for your development environment (e.g. Debian 11 / Ubuntu 22) using:

```
./configure linux_x86_64_gtk_gst_debian11
make all_docker   # Build the source and create binaries
make targetenv    # Add misc resources to the staging directory
cd ../TARGET_TREES/ehs-env_linux_x86_64_gtk_gst_debian11/bin
./ehs.exe
```

The final step runs eRT on a Debian/Ubuntu host, where it waits to accept an application from the inxware tools. See here to build and run the [tools](https://docs.google.com/document/u/0/d/16p4iZMkgj_46SH9fl4PSG2FGREKYv0jCDF66OfWDnoM/edit) on Linux.

### eRT Build System General Reference

The ert-components repository contains the core build system needed to create deployable executables and system images for any of the hardware targets eRT supports. The eRT build system does not support building operating system images such as Linux or Android, but it will build full "flash" images for targets where runtime-installable packages are not supported (e.g. FreeRTOS or bare metal).

To build for a particular target, the first step is to configure the build system for one of the available targets. The list of supported targets can be identified using the **./configure help** command. Once you have identified your target, the build tree is configured as follows:

```
./configure <target>
```

Next, retrieve dependencies by checking out the toolchains & library dependencies (including the C library) from GitHub or from inx internal git-lfs repos. (**Note**: this typically only needs to be run once on a particular machine, but it will pick up updates from GitHub if repeated.)

```
make prepdeps
```

The next step is to build the eRT source code, which is typically done within a Docker environment. The Docker environment may be needed only to support running a particular toolchain, or it may contain the toolchain and all dependencies itself.

```
make all_docker
```

**Developer Tip:** If your host Linux environment already supports the toolchains required for your target, or you are running in an interactive Docker environment (e.g. after running **make target_buildenv**), then you can use the default "all" target, e.g. by running **make -j 8** directly.

Finally, you will typically need to build a runtime directory structure for eRT and assemble any other runtime files needed during execution or for device management into a staging directory, found at ../TARGET_TREES/<target>/…, to run within. Use the following command: **make targetenv**

The directories under TARGET_TREES are staging directories that usually contain software artifacts in the format required by a runtime deployment. Their structure is defined by the directories under ert-components/target/envtree/***/

Target types with special environment initialisations:

* Unity (e.g. tellisign - not needed for Ambifier2) - see the [Unity & eRT Supervisor](#unity-&-ert-supervisor) section below. Unity needs to be run interactively to set up the license. (A personal license is OK for now?)

- [ ] Can we create a licensed Unity tools Docker image?

# Production Builds {#production-builds}

See the relevant directory in **scripts/build-deploy/**. These scripts will typically build all variants of a product and optionally upload these to the relevant deployment Devman server instance.
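These per-product scripts vary, but most are thin wrappers around the make targets described above. A minimal sketch of such a wrapper (the target name, packaging and upload steps are illustrative; see a real script such as the ambifier one referenced below for the authoritative sequence):

```bash
#!/usr/bin/env bash
# Sketch of a build-deploy wrapper script (illustrative target and steps).
set -euo pipefail
cd "$(dirname "$0")/../../.."   # assumed layout: scripts/build-deploy/<product>/ under the repo root

./configure linux_android_arm_p64_a6_ambifier   # the product's build target
make prepdeps
make clean
make all_docker
make targetenv
make targetenv_version      # stamp the release version
make targetenv_apk_docker   # product-specific packaging step
make upload_ehs_sys_patch   # push the update to the Devman server
```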
# Build Types (from Scratch)

## Android Builds

e.g. Ambifier2.

Note: These steps are usually scripted for **specific products**, e.g. **./scripts/build-deploy/moodsonic-tsa/makeEhs-android-ambifier-server-upload.sh**

The steps needed to build/upload a specific Android platform (e.g. ambifier for the A6 board):

```
./configure linux_android_arm_p64_a6_ambifier   # use ./configure for options
make prepdeps                     # pull from all dependency repos
make clean
make all_docker                   # build the source code in Docker so we know the toolchain runs
make targetenv                    # assemble the various files into the staging directory
make targetenv_version            # optionally update the global release version number
make targetenv_apk_docker         # create the main app APK from the staging directory
make targetenv_android_dep_pack   # package bits not included in the APK, e.g. device supervisor scripts and APK

# Now we can either flash a device running Android locally or upload to Devman.
# Upload to a device via adb (optionally set ADB_IP=<device IP>):
make upload_ehs_via_adb
# AND/OR deploy to the Devman distribution server:
make upload_ehs_sys_patch
```

Use **make help** for details on what each step means.

The above Android target build runs in the following stages:

1. **make all_docker** - builds the NDK **ehs.so** plugin.
2. **make targetenv** - aggregates the eRT file system structure, the Lucid apps and the above plugin into TARGET_TREES/ehs_env-<target>.
3. **make targetenv_apk_docker** - copies the Android Studio template project (**ert-components/target/os-arch/android_ALL/android_studio_ehs**) into TARGET_TREES/ehs_env-<target> and uses Gradle to create the deployable .apk file.
4. **make targetenv_android_dep_pack** - aggregates all supervisor scripts and creates a deployable package (with APKs) for this target in TARGET_TREES/ehs_env-<target>.

## Unity (e.g. signage) Android Builds

#### Setup UnityHub and Licence

(Skip this!!! Only do it if **make targetenv_unity_export** fails, which probably means Unity Hub is not yet installed and licensed on your machine. It needs to be done only once.)

1. Install Unity3d Hub (TODO: we probably want to do this in Docker):
   * Install the following dependency: `sudo apt-get install gconf2`
   * Download the Unity Hub from [https://unity3d.com/get-unity/download](https://unity3d.com/get-unity/download)
   * `chmod +x ./UnityHub.AppImage`
   * Run the hub, log in and set up your licence (use the inx developer account [developer@inx-systems.com](mailto:developer@inx-systems.com):HelloUnity101).
   * Press the "activate new license" button if no licenses are shown (choose personal use if this is the case).

## Build Entire Platform (including Supervisor)

Commands for building Android Unity targets with supervisor and updates:

```
# Build the 64-bit eRT plugin required by all Unity targets
./configure linux_android_arm64_unity-lib
make clean
make all_docker
make targetenv

# Build the 32-bit eRT plugin and APK
./configure linux_android_arm_unity-tellisign
make clean
make targetenv_cleanall   # needed ONLY when the Unity C# project needs updating
make all_docker
make targetenv
make targetenv_unity_export
make targetenv_apk        # targetenv_apk_docker doesn't seem to work for some reason??

# Bundle the supervisor and updates for deployment (only used for managed devices)
make targetenv_android_dep_pack

# Deploy to the server
make upload_ehs_sys_patch
```

Note: the above seems to be broken when building with Docker.
Notes:

1. Make the libraries (e.g. 64-bit).
2. Make the base .so (usually 32-bit).
3. Make (the Unity step) -> we need a new **make targetenv_unity_export** that:
   1. Exports a Unity IDE project containing everything (compiles the C# code to a mono binary -> ….xxx.so). (**Potentially these could be stored in ert-contrib-middleware.**)
   2. Exports this as an Android Studio project.
   3. We then add the JNI / Java script to the Android Studio project.
   4. make targetenv_apk

Tellisign needs both 32- and 64-bit versions:

1. A prior step builds the 64-bit plugin - it should be in ../TARGET-TREE/…plugins/… (e.g. libehs.so).
   1. This is checked during make targetenv, but not used; just a warning is issued before targetenv_version etc. is called.
2. Why do we get Android Studio differently? It should be the same, or done in Docker.
   1. E.g. to avoid the Gradle version problem.
3. We need to export Unity's project into our own Android Studio project so we can add more code to it when it is built.
   1. Potentially this should be in ert-build-support?
   2. Unzipped to ../ next to ert-components.
4. Unity has the Android gcc & Gradle toolchains (all of the Android SDK) in the zip file, and we need to use these.
5. Things we copy into the vanilla Unity project (where are the following?):
   1. Knowledge of the ehs plugins.
   2. Certificates.
6. Uses mono to build the Unity app code.
7. Updates the build version information.
8. Android has the JNI and Java pieces, which are not needed for Windows below.

## Windows Unity Builds

Example of creating an installer for Windows Unity running on the Sandbox server:

```
./configure win_x86_unity_sandbox
make all_docker
make targetenv
make targetenv_nsis_docker
```

#### Updating the Windows 32-bit Unity 3d Template

At the moment Windows Unity needs to be built using the IDE and added as a template to ***ert-contrib-middleware/contrib/Unity3D/SignageWindowsBuild***.

### Building the Win32 Unity 3d Template

Make sure you have Unity Hub with a license and **Unity 2019.4.40 (LTS)** installed on your Windows device. Next, from the Unity Hub open the project "EHS/target/os-arch/android_ALL/Unity_EHS". Open the build settings dialog from **File -> Build Settings…**, then make sure that you set the **Target Platform** to **Windows** and the **Architecture** to **x86**. (Do not use x86 64-bit, as the plugin dll is built for Windows 32-bit for now.) See the screenshot below for reference.

![][image2]

Click build and navigate to the folder where you'd like the project to be built (e.g. create a SignageWindowsBuild folder somewhere in your file system).

## Unity build roadmap ideas

- [ ] Move the Unity toolchain from a tarball to ert-build-support.
- [ ] C# remains in ert-components (or possibly contrib middleware would make more sense from the following step's PoV).
- [ ] Generated prebuilt dependencies and the output Android Studio project should go into ert-contrib-middleware.
- [ ] Windows version - [Kamil Wieczorek](mailto:k.wieczorek@inx-systems.com) - this is manual ATM; see how it would fit into the above.

## Plain Android

We need to do the following steps so that we are issuing a command to do an update (not describing how to do the update):

1. Copy the Devman-deployed script functions from ./target/envbuildscripts/installers/android-adb/* into things that go into the supervisor scripts and get run in a simple way.
2. Then we need to do a migration release to all devices (or keep one on Devman if some devices are offline).
3. … Then remove ./target/envbuildscripts/installers/android-adb/.
4. For ambifier updates we download two APKs in dldata.tgz, together with the ambifier installer that unpacks and installs both. It also has a zip file for the supervisor scripts, which is downloaded separately.
   1. Proposed new, more generic method: supervisor & downloader & optional ambifier are all in one zip file, and the respective installer on the device does the right thing.
   2. ONLY THE LATEST VERSION of each variant needs to be on the server, and it should have the same path/URL. The update script unzips dldata.tgz and looks for specific APKs in it; if they are there it will install them. The product is EHS/Ambifier/Telesign; this will be consumed in the above generic method.

### make install_via_adb

This make target may modify the target's init scripts and also install the supervisor code. Things we currently do as root:

* H6 - set a new MAC address.
* Rock64 - creates a MAC address for a new hardware device; doesn't work more than once.
* Can we change the init scripts on the vanilla image? Possibly needs re-signing / CRC.
* Volume? Some need root, some don't (may also depend on SELinux).
* Install?
* …/
* adb is needed for automatically granting permissions to apps, to avoid user interaction.

AOSP - option 1: use ADB as root to install an init script and a downloader that connects to Devman to do updates.

1. This needs to know what kind of image to download (i.e. we want some permanent file installed via the ert-components make upload_via_adb script; i.e. the ${TARGET} value from ert-components would be the type identifier).
2. The ideal way is installing an APK, e.g. supervisor + downloader.

# Uploading eRT to Appland

There are a couple of targets which can be uploaded to the appland after building. The following structure needs to be used for a target to support appland upload:

```
./target/platform/<target>/appland
|-- info
|-- INSTALLER.html
`-- res/    [directory with resources, e.g. images, html etc.]
```

To upload the target package and the above to the appland, run:

**make targetenv_upload_appland**

Make sure you fully build the target before uploading to appland, e.g. targetenv, targetenv_apk_docker, targetenv_esp32s3_docker etc. (depending on the target).

This script can be used for building and uploading all targets which are placed in the appland:

**./scripts/build-deploy/appland/build_upload_all.sh**

# Merge 'master' into 'Release' Branch

```
cd apps/
# 'master' up to date (in sync with remote)
git checkout RELEASE-PRODUCTION
git pull              # in case we are behind
git merge master
git push
git checkout master   # back to master so we don't accidentally modify RELEASE-PRODUCTION
```

# Configuring eRT Build Targets

## Configuring Devman IoT Server Connections

Configuration lives in **./target/devman-config/**:

```
export DEVMAN_SERVER_DOMAIN=devman.inx-systems.net
export DEVMAN_SERVER_PROTOCOL=http
#export DEVMAN_SERVER_CERTS_FULL_CA_BUNDLE=yes

# Server config & credentials for uploading OTA updates
export DEVMAN_UNAME="inx"
#export EHS_PRODUCT_NAME="ambifier"
export DEVMAN_SERVER_NAME=sandbox
```

Client certificates come from the DevmanSecurity repo, under ./certs/client/.
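How these settings are picked up by the build is not fully documented here; as a sketch, they can simply be exported into the shell before running the relevant make targets (the file name below is hypothetical - use whatever file in ./target/devman-config/ matches your server):

```bash
# Sketch: apply a Devman server configuration before an OTA upload.
. ./target/devman-config/sandbox.conf   # hypothetical file containing exports like those above
make targetenv && make upload_ehs_sys_patch
```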
Notes: the following builds are currently broken:

```
linux_arm64_gtk_gst_gg_debian10
linux_arm64_gtk_gst_gg_debian11
linux_x86_64_clang_gg_debian10
linux_x86_64_clang_gg_debian11
linux_x86_64_clang_gtk_gst_gg_debian11
TARGET=linux_x86_64_clang_gtk_gst_gg_debian11
```

# eRT Software Structure Overview

Software module rationale:

## Common

* Most components should be implemented here unless they are target-specific.
* All the business logic of the kernel/framework should be implemented here.
* These should only reference library code (including the C library) via the HAL-prefixed API. There are exceptions here, such as the YAJL JSON parser, I think.

## HAL

* **Provides** all the header (API) prototypes and code that the Components and the Kernel reference.
* Should **USE** (and should check for) EHS Target-prefixed versions of all 3rd-party libraries.
* Note this probably hasn't been done for some complex libraries, such as libcurl, which are referenced directly, largely because there is little likelihood of using an alternative library for those features (though this is subjective).

## Kernel

* Prebuilt kernels are found in the ert-build-support repo.
* The source (inx only) shares the same build-system script structures (duplicated), and a platform for each os-arch is required, as the kernel SHOULD depend only on the host OS and ARCH and not on any middleware or other IO dependencies.

## Target

TODO: Describe the rationale for when we use the different HAL prefixes:

* EhsT_XXXXX
* EhsH_XXXXX

# eRT Build Dependencies

## ert-build-support

This contains the toolchains, libc and other basics. See above.

## ert-contrib-middleware

Some eRT components have 3rd-party library dependencies. There are a limited number of contributed library sources within the ert-components repository, but the vast majority are contained in git large-file-support repos: ert-build-support holds the toolchains, libc & optional kernel headers, while the remaining middleware libraries are maintained in ert-contrib-middleware (e.g. networking, media or ML libraries). More details can be found below, and reference manuals in the documents below:

[CI System - Design & Implementation Notes [Archive]](https://docs.google.com/document/d/1sFktCxBcCHgBxjGzdZvwYfoNrB9K5VymTeg5oxYVM64/edit#heading=h.xmq4cany0dy6)

[inxware Software Build Release (Products)](https://docs.google.com/document/d/1UXMSBRWBSyAun8D2ndP9ghKR51nwI7omBsLV4nAsGyg)

The intention of the ert-build-support binary structure is that the HOST identifier should be selected when doing an EHS build on a supported build-machine architecture. **The target output architecture is selected automatically from the canonical names given in the target directory platform and os-arch make scripts (e.g. OS type, CPU architecture and, optionally, any SDK-specific bits). These are typically the tail end of the directory name structures.**

## Overview of Key config.mk Build Parameters

The details of the remaining build parameters are provided below, and a curated list of build parameters is maintained here: [eRT Build System Parameters](https://docs.google.com/spreadsheets/d/1iLa3ac19vAp6ZYZzBp0nRdMIfbQVcbHgvP6oHVGF95U). A list of currently active (regression-tested) targets is maintained in the following spreadsheet: [inxware-ert target status](https://docs.google.com/spreadsheets/d/1GhCxv2CQzBMFypJ9X54-AcerGu48goKubdTyHEQ3H3g).
| Platform Name | OS / Arch | Onprem Repos | Public (github) Repos |
| :---- | :---- | :---- | :---- |
| linux_amd64 | linux amd64 (64-bit) | working | working (*generic docker) |
| linux_amd64_gtk_gst | linux_amd64 (64-bit) | working | working (*generic docker) |
| linux_x86_gtk_gst | linux / x86 (32-bit) | working | no - needs Dockerfile |
| linux_x86_64_clang | linux_x86_64 (64-bit) | | |
| linux_armv7l_clang | linux_armv7l | working | |
| linux_armv7l_gtk_gst | linux_armv7l | not checked since refactor | |
| nxp_arm_inx_hrcdispv1_ehs_debug | FreeRTOS_arm | working | |
| android | linux-android_arm | | |
| unity | unity | | |
| linux_armv7l_clang_gtk | linux_armv7l | working | |
| linux_x86_64_clang-host | linux_x86_64_clang-host | not working | |
| esp32_freertos-xtensor-base | xtensor-esp32_freertos | | |
| linux_android_arm64_unity-lib | linux_android_arm64 | | |
| linux_android_arm | linux_android_arm64 | | |
| linux_android_arm_p64_a6_ambifier | linux_android_arm64 | | |
| linux_android_arm_p64_h6_player-sandbox | linux_android_arm64 | | |
| linux_android_arm_p64_h6_player-sandbox-debug | linux_android_arm_p64_h6_player-sandbox-debug | | |
| linux_android_arm_p64_h6_unity-tellisign | linux_android_arm (32/64-bit) | | |
| linux_x86_64_clang_gtk | linux_x86 (64-bit) | | |
| linux_x86_gtk_gst_ambifier2_debian11 | linux_x86 (64-bit) | | |
| win_x86_gtk_gst | win_x86 (32-bit) | | |

(NOTE: THIS TABLE NEEDS POPULATING!!)

# Configuring Debug Levels

### eRT-components

The following variables can be set in **config.mk**.

The debugger console is required for targets that will be used with the Lucid tools for local-connection app updates and debugging:

**EHS_DEBUG_TCPIP_CONSOLE=yes**

See the [EHS Console Specification](https://docs.google.com/document/d/1plJ9_A_l35WiEq0_b4RlH6JW9414tpNpgVTUDavutLg/edit#) for more information on the Lucid-console logging system.

To enable local logging on the device, the following should be set; it logs to stdio or to a local file. (TODO: we need an additional setting to enable/disable file logging, as we don't want it accidentally left on.)

**EHS_RUNTIME_LOGGER_ENABLED=yes**

The AV systems can be debugged with:

**EHS_DEBUG_AV=yes**

### EHS-Kernel

All function entry/exit tracing in the kernel, and some in the eRT HAL, can be enabled with:

EHS_DEBUG_TRACE=y

This also enables all the EhsError() logging functions.
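For example, to switch on the debug console and local logging for a development build, the flags above can be appended to the target's config.mk (a sketch; the platform path is a placeholder):

```bash
# Sketch: enable debug options for a development target (flags as documented above).
PLATFORM=./target/platform/my-target    # placeholder: your platform directory
cat >> "$PLATFORM/config.mk" <<'EOF'
EHS_DEBUG_TCPIP_CONSOLE=yes     # required for Lucid tools local connections
EHS_RUNTIME_LOGGER_ENABLED=yes  # local logging to stdio / file
EHS_DEBUG_AV=yes                # only while debugging the AV pipeline
EOF
```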
#### Adding a New Target

To add a new target to EHS-kernel, one needs to create a platform and an os-arch (if not already defined for this target):

```
target/platform/<platform>/
target/os-arch/<os-arch>/
```

Note that there can be multiple variants of the os-arch specified in the platform.

# Configuring New eRT Platforms

eRT porting can often be achieved using existing target support, largely by configuring the ert build system as described in this document. To support new target architectures, peripherals and operating systems it may be necessary to "port" eRT's source code using the porting layer (HAL), as described in this document: [eRT Architecture & Porting Guide - Public](https://docs.google.com/document/d/1cD-U7T4-0Wmf99GcKDhZS0xlJb0kGqrtDSc0TjGL9D8)

Each platform supported by eRT has a directory under **./ert-components/target/platforms/**. Each directory contains a few configuration files, but the most important is the config.mk file. The parameters defined here not only provide conditional build directives for the eRT source code, but also help the build system identify the correct toolchain and libraries that the build should be carried out with.

* The goal at any level of build configuration is "orthogonality", i.e. we avoid conflating configuration parameters with other happenstance factors by making configuration items very specific to their particular variant, especially for conditional-build C preprocessor macros.
* A second goal is to minimise complexity, which can conflict with the above orthogonality when it makes configurations more detailed than necessary. We attempt to reduce configuration complexity by aggregating common settings.

- [ ] NOTE: The build paths constructed from the config.mk file parameters have often been overridden, which hides the underlying automatic methods that were originally intended. We should try to revert to the "preferred" automatic method in this project.

## Key Platform config.mk Parameters Overview

A detailed description of all eRT build configuration parameters is (or should be) maintained in the following spreadsheet: [eRT Build System Parameters](https://docs.google.com/spreadsheets/d/1iLa3ac19vAp6ZYZzBp0nRdMIfbQVcbHgvP6oHVGF95U/edit#gid=0)

- [ ] We need to review the above, in both presentation and content, and identify any anomalies in our build-system configurations.

The most generic conditional build configuration in the eRT source code is controlled by the OS and architecture parameters EHS_OS and EHS_ARCH:

$**EHS_ARCH**-$**EHS_OS**
: selects the ./target/**os-arch**/ resources;
: provides the default path for the toolchain (ert-build-support);
: provides the default path for middleware (ert-contrib-middleware).

Some possible values:

**EHS_ARCH**=x86,amd64,arm,arm7*x*
**EHS_OS**=none,freertos,nxp,linux,win32

Because gcc and many open-source middleware libraries are built and identified using slightly more specific CPU and OS designators, the parameters EHS_GNU_ARCH and EHS_GNU_OS can be used to be more selective within the set of toolchains and middleware options available:

$**EHS_GNU_ARCH**-$**EHS_GNU_OS**
: overrides the toolchain and middleware paths if a specific GNU naming-convention path is used.

The conventions for this on Linux platforms can be found using the following:

**EHS_GNU_ARCH** (see uname -m)
**EHS_GNU_OS** (see uname -i)

However, the GNU arch conventions are also used by gcc, clang, mingw and libc for non-Linux targets too. It is also possible to select different contributed middleware libraries to build against with further config.mk override parameters, which are discussed in more detail below.

### Toolchain Selection

As mentioned above, the default toolchains are selected on the basis of the target OS, but they are also arranged by which host they can run on. \<HOST\> is the build system's architecture string as defined by **uname -i** on the build host.

#### Default Toolchain

If the **TOOLCHAIN_NAME**, **CC_OVERRIDE**, or **EHS_GNU_*** options are not set, the following base toolchain will be used:

**../ert-build-support/toolchains/\<HOST\>/\<EHS_ARCH\>_\<EHS_OS\>/**

The base toolchain may be soft-linked in the toolchains directory to a more specific version, so that the default can easily be changed to a more up-to-date compiler if required (**TODO in ert-build-support**).
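A quick way to see which base toolchain a configuration would pick up (a sketch; the variable values are illustrative and the path pattern is as described above):

```bash
# Sketch: compose and inspect the default toolchain path (pattern as above).
HOST="$(uname -i)"   # build host architecture string
EHS_ARCH=arm         # illustrative values; normally set by the target's config.mk
EHS_OS=linux
ls "../ert-build-support/toolchains/${HOST}/${EHS_ARCH}_${EHS_OS}/"
```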
If either **EHS_GNU_OS** or **EHS_GNU_ARCH** is set, it will override the respective **EHS_OS** or **EHS_ARCH** parameter for the toolchain path. This allows toolchains to be matched to the GNU-specific formats used by compilers, and also helps when building middleware packages with autotools, for example.

#### Overriding Toolchains

There are cases where a specific toolchain is used for a target. Some toolchains use different naming conventions to the standard gcc format, particularly for cross-compilation. The path to the toolchain binaries within **ert-build-support/toolchains/** can be set explicitly by setting the variable **TOOLCHAIN_NAME** to the path, e.g.:

TOOLCHAIN_NAME=arm-none-linux-gnueabi-4.4.6

If the build is to take place in a specific Docker or Vagrant environment and the default host toolchain in the PATH should be used, then set TOOLCHAIN_NAME to "HOST":

TOOLCHAIN_NAME=HOST

The filename of the compiler and linker can also be set explicitly using CC_OVERRIDE if it is not gcc:

CC_OVERRIDE=arm-none-linux-gnueabi-gcc

### libc Selection

libc defaults to using the sysroot directory of the toolchain; however, for toolchains without this support a specific directory can be defined for the libc headers and libraries. **EHS_CLIB_OVERRIDE_PATH** can be used to choose a different sysroot for a target under the **./ert-build-support/support_libs/target_libs/${EHS_CLIB_OVERRIDE_PATH}** path, e.g.:

EHS_CLIB_OVERRIDE_PATH=arm-linux-gnu-glibc-2.12.1-ti-blaze-ubuntu-10_10

This may, for example, be copied from a 3rd-party distro target's filesystem to build against, if build headers and libraries cannot be reproduced any other way.

EHS_SPECIAL_CLIB_EXT (this should start with a delimiter, e.g. -v2; TODO: this is for special extensions to middleware build libraries; add these if needed). This applies to both libc in ert-build-support and to ert-contrib-middleware. (TODO: add a flag in platform.mk to stop the targetenv scripts adding the target libraries to corelib and/or cslib/.)

# Feature Selection

See the eRT Build Variant Management section in the [eRT Porting Guide - Public](https://docs.google.com/document/d/1cD-U7T4-0Wmf99GcKDhZS0xlJb0kGqrtDSc0TjGL9D8/edit?tab=t.0) for more details.

ert-components builds may include different features for different targets, and may satisfy a given feature with different technologies and 3rd-party middleware. During porting, certain features and function blocks can be included as "stubbed" versions in cases where supporting the feature is difficult or deferrable, but you still want other dependent features and apps to operate without that functionality.

Make files such as os-arch/target.mk and platforms/../config.mk typically use the following pattern for each feature:

EHS_**\<FEATURE\>**_SUPPORT = {\<technology\>,none,stubbed}

The ehs.mk and components.mk files should generate C preprocessor macros accordingly, with the format:

EHS_**\<FEATURE\>**_SUPPORT__{\<TECHNOLOGY\>,NONE,STUBBED}

### Examples

#### Graphics

**EHS_GUI_SUPPORT=gtk/gdi/OpenGLE2/OpenGLE1_1/android_stub/fb**

#### Audio Visual

**EHS_AV_SUPPORT=gst,vlc**

**EHS_VIDEO_SUPPORT=yes** (if you only want audio then unset this)

**EHS_MEDIA_SUPPORT=all** (enables the media content handling toolbox, such as devman media and the SMIL parser; TODO: decide whether to combine this with the AV toolbox, in which case this can be removed from the config.mk files and the EHS_AV_SUPPORT variable tested instead)
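To make the feature-to-macro mapping concrete, this shell sketch mimics what the make files do with a feature setting, i.e. turning a config.mk value into a conditional-build macro. It is an illustration of the naming scheme only, not the actual ehs.mk logic:

```bash
# Illustration only: how a feature value maps to its preprocessor macro.
EHS_GUI_SUPPORT=gtk
FEATURE=GUI
VALUE="$(echo "$EHS_GUI_SUPPORT" | tr '[:lower:]' '[:upper:]')"
echo "-DEHS_${FEATURE}_SUPPORT__${VALUE}"   # prints: -DEHS_GUI_SUPPORT__GTK
```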
#### Networking

**EHS_NETWORKING_SUPPORT=all** (enables system networking, not the toolbox)

**EHS_COMPONENT_NETWORKING_SUPPORT=all** (enables the networking toolbox)

**EHS_DEVMAN_SUPPORT=all** (enables the core devman code)

**EHS_DEVMAN_MON_SUPPORT=yes** (enables the core device management system)

**EHS_COMMS_API_SUPPORT=bsdsockets/winsock/lwip** (chooses the type of socket library; **TODO: this should be moved into os-arch/*/target.mk, where it is implicitly set for all platforms**)

**EHS_COMMS_TASK=tcp_server_common** (always set this if you want debug capability)

#### Miscellaneous

**EHS_TOOLKIT_DEPRECATED=yes** (enables non-current component versions, in case you want to deploy old apps to new devices without updating the app's components)

EHS_PERIPHERAL_DEVICE_SUPPORT=all

### A Typical config.mk File

```
################################################################################
# Target: x86 Linux Media Enabled Device
################################################################################

# MUST SET the following for any component config:
EHS_ARCH=x86
EHS_OS=linux

# Optional settings:
EHS_GNU_ARCH=i686
EHS_GNU_OS=linux-gnu

# Use a specific legacy toolchain and kernel headers
KERNEL_VERSION=linux/2.6.35.9
TOOLCHAIN_NAME=i686-pc-linux-gnu-4.4.6
CC_OVERRIDE=i686-pc-linux-gnu-gcc

# Component toolbox options:
EHS_GUI_SUPPORT=gtk
EHS_AV_SUPPORT=gst
EHS_VIDEO_SUPPORT=yes
EHS_MEDIA_SUPPORT=none
EHS_NETWORKING_SUPPORT=stubbed
EHS_COMPONENT_NETWORKING_SUPPORT=stubbed
EHS_DEVMAN_SUPPORT=all
EHS_DEVMAN_MON_SUPPORT=yes
EHS_TOOLKIT_DEPRECATED=yes
EHS_COMMS_API_SUPPORT=bsdsockets
EHS_COMMS_TASK=tcp_server_common
EHS_PERIPHERAL_DEVICE_SUPPORT=all

# Optional logging settings:
EHS_DEBUGALL=true
```

To compile specific target_platform libraries, **ert-contrib-middleware/target_libs/** should be present in the directory; otherwise the program will not be compiled.

## Contributed Middleware Selection

For features that depend on contributed software that is built with its own build system and provides a C header & library stored in ert-contrib-middleware/target_libs/, the default path to the dependencies can be modified using additional make file variables as follows:

**COMPONENT_VARIANT** - MUST be set if there are middleware dependencies. It can usually be set to a component library source that has more features than needed. (TODO: work out a way that, if we package DLLs into cslib/corelib, this can be a subset of all those possible for some targets. At the moment this is done by having different variants in ert-contrib-middleware/ with specific /target_libs, but soft links to a common ./build/ directory. The build directory is where the compiler looks for headers and libs to link against.)

# Building with Docker

Each platform can (and should) be provided with a Docker image identified in **./target/platform/\<target\>/Dockerimagename**. This file contains the name of a Dockerhub-hosted Docker image that should be used when running certain ert-components make commands, such as:

```
make all_docker                    # same as make -j 8 all, but in Docker
make targetenv_<packager>_docker   # uses the packager found in the Docker image
# Or
make target_buildenv               # start a Docker environment shell in pwd
```
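A typical interactive loop when iterating on code, using the targets above (a sketch; the target name is illustrative):

```bash
# Sketch: iterate inside the platform's Docker build environment.
./configure linux_x86_64_gtk_gst_debian11   # illustrative target
make target_buildenv    # drops you into the platform's Docker image
# ...now inside the container, with the toolchain available:
make -j 8               # plain "all" target; no need for all_docker here
make targetenv
exit
```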
If a new (or updated) Docker image needs to be created, then the platform must also have a "Dockerfile" located in the platform directory. These can be published to Dockerhub to share them with other platform targets and other users, to ensure consistency across build systems and also to reduce build times using Docker's local caching. Working from Dockerhub-published images also avoids the uncertainties of building Dockerfiles at different times and in different geographical locations, where differences can be observed.

```
make publish_docker_image   # build a new Docker image and publish it
```

The script will publish to Dockerhub using inx's Dockerhub account (TBC!) and hence images can only be published by inx. The pushed images are, however, public AND SHOULD NOT CONTAIN ANY CODE!

The naming convention for inxware Dockerhub images is **inxware/\<os\>_\<arch\>-\<variant\>** (indicative placeholders).

# eRT Initialisation Sequence

- [ ] See porting guide information and notes from Xiaosheng.

1. Ehs-Main()
   1. Init KernelHAL [EhsHSys_Init()] - IO only: files, tcpip, devman.
   2. Init Kernel [EhsKSys_Init()] - initialise data tables, memory management, timers.
   3. Load version information [].
   4. --- Identify the components available:
      1. **Add component modules** [EhsAddStaticModules() & EhsAddDynamicModules()] - iterates through the static and dynamic modules.
      2. **Initialise modules** [EhsInitStaticModules() & EhsInitDynamicModules()] - these functions are currently hardwired to specific toolbox init functions. Future implementations should allow for anonymous module iteration.
   5. **--- Initialise the application runtime environment:**
      1. **Initialise the application tables (app_data::EhsDataConnectionTable_init())**
      2. **cd to the app directory (default working directory)**
      3. **EhsDataConnectionTable_resetMonitorFlags(void) (if debugging)**
   6. Load SODL (parse_sodl.c::???) - populates the runtime tables.
   7. --- Execute the application:
      1. If a new base app: Application::start() (currently done always - aggregate apps unimplemented).
      2. Start the main execution loop (exits on new app?).
   8. **Case: exiting for a new app start (redo from step v.):**
      1. **(Possibly, in the future, re-iterate the component library for newly dynamically-linked components - this will require closing components.)**
      2. **Reset modules to state 1.iv.b.**
      3. **Wait for tear-down to complete: EhsApplicationWaitForTearDown() (for components that create their own threads).**
      4. **Reset the application runtime environment: app_data::EhsApplicationReset():**
         1. **Ehs_cdToApp()**
         2. **Start groups (looped in here!!!!)**
         3. **Create an initial event: EhsFunctionInstanceDataTable_triggerInitialEvent();**
         4. **Function instance data should be reset here??**
         5. **app_data::EhsDataConnectionTable_applicationReset();**
         6. **Loop to 1.v.**

# EHS Runtime File & Asset Structure

### EHS bin

This is the main container for ehs executable code. It contains the subdirectories listed below plus the following files:

#### Entry

ehs/bin

#### Contents

##### run_ehs.sh (or run_ehs.bat)

This is the start script that sets up the OS environment, starts ehs, and also starts the aggressive OS-level devman update system.

###### Arguments

#1 NO_RESTART - stops the restart mechanism if ehs.exe crashes
#2 LIB_HOST - uses the host's clib dlls rather than those in corelib
#3 DEBUG or GDB - starts ehs with logging to disk or under GDB, respectively
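For example (a sketch of typical invocations, passing the positional arguments listed above):

```bash
cd ehs/bin
./run_ehs.sh                           # normal start, with crash-restart enabled
./run_ehs.sh NO_RESTART                # don't restart ehs.exe if it crashes
./run_ehs.sh NO_RESTART LIB_HOST GDB   # host clibs, run under gdb
```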
##### ehs.exe

This is the ehs core executable (user-space kernel). It should be run as root, in which case it will gain high thread priority (in fact it may run as a Linux REAL_TIME_THREAD_PRIORITY).

###### Arguments

DEBUG (see run_ehs.sh)
SODL_PATH - path to a new SODL directory (TBD - shouldn't this be at a different level to appdata?)

Modules are loaded dynamically by EHS - TBC. EHS will reject applications with incorrect module dependencies.

#### Entry

ehs/bin/cscore/

ert-build-support files such as any libc overrides. Very seldom used these days!

### Component Support Directories (Required on load)

#### Entry

ehs/bin/cslib/

Contents are copied (on make targetenv) from **ert-contrib-middleware**/target-Libs/\<target\>/target-packages/*: runtime libraries, including for example plugin directories for middleware such as gstreamer. These dynamic libs are the typical 3rd-party module support components - typically standalone .so or .dll files required by EHS components. The EHS core should not be dependent on these libs.

### Component Support Directories (Plugins)

Many 3rd-party support libraries require directories containing plugin libs, extensions, metadata or scripting to run. Examples include VLC and LUA. This may not be used at all any more; plugins are usually included as subdirectories of cslib.

#### Entry

ehs/bin/csdir

## eRT Runtime Tree - Linux Example

Degrees of persistence: Runtime Dynamic / Restart Application / Restart ehs.exe / Reboot / Updatable.

```
|-- appdata/
|   |-- default/
|   |   |-- t.sdl            [default application if no other has been selected]
|   |   `-- XXXX.XX          [other application metadata files and resources, e.g. gui files]
|   |-- temp/                [like default, but where debugger applications are installed and run]
|-- bin/
|   |-- runehs.sh            [launch script that sets up for eRT (ehs.exe)]
|   |-- restartehs           [OS-level script that should restart ehs]
|   |-- reboot               [OS-level script for rebooting]
|   |-- sys.crons            [obsolete here?]
|   |-- ehs.exe
|   |-- inxlib/
|   |   |-- custom_module1.inx   [future option for toolbox plugin DLLs - not currently used]
|   |   |-- custom_module2.inx   [future option for toolbox plugin DLLs - not currently used]
|   |-- cslib/               [ert-contrib-middleware ./target_support/ for linux .so's]
|   |-- csdir/               [ert-build-support libraries - ideally this is empty]
|-- devman/                  [devman configuration and OS-level management scripts]
|   |-- core/                [this needs updating with the config, certs etc. structure]
|   |   |-- HWID_NETIP.inx
|   |   |-- devman_update.inx
|   |   |-- download
|   |   |-- getHWID-NETIP.sh
|   |   |-- run_dldata
|   |   |-- sys-timer.sh
|   |   `-- update.sh
|   `-- plugins/             [these aren't really used any more, TBC]
|       |-- 0
|       |   |-- devman_mon.inx
|       |   `-- download
|       |-- 1
|       |   |-- dev-x.sh
|       |   |-- devman_player.inx
|       |   `-- download
|       |-- 2
|       |   |-- dev-x.sh
|       |   `-- download
|       ...
|-- sysdata/
|   |-- EHSVersion.nfo       [this has been renamed version.nfo now?]
|   |-- devman.crons         [this doesn't seem to be used any more?]
|   |-- ehs_tcpip.log        [a log file that may be deleted now, TBC]
|   |-- platform             [a platform type identifier like x86/linux/...]
|   |-- sys.crons            [we shouldn't use this any more!]
|   `-- version.nfo          [version info created at build time that is reported in various ways]
`-- userdata/
    |-- configs/
    |   `-- devman-player/
    `-- media/
```
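A quick sanity check of a deployed tree (a sketch using the files shown above):

```bash
# Sketch: verify the key runtime files on a deployed device.
cd /path/to/ehs-root        # wherever the tree above is installed
cat sysdata/version.nfo     # build version reported by the device
cat sysdata/platform        # platform type identifier
ls bin/cslib/               # middleware runtime libraries present?
```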
# eRT Operating System Support (Supervisors)

## Unity & eRT Supervisor {#unity-&-ert-supervisor}

## How to Build the Application (taken from Signage notes)

The application can be built using either the Unity IDE or the command line. For a full build, including the scheduler plugins, the command-line method should be used.

### Linux Command-Line Build

1. Set up the source repositories. Clone the following repositories and make sure that all of them are located in the same directory:
   * ssh://tech-data@dev.inx-systems.net:8822/home/inx-data/data/Repos/ert-components.git
   * ssh://tech-data@dev.inx-systems.net:8822/home/inx-data/data/Repos/ert-contrib-middleware.git
   * ssh://tech-data@dev.inx-systems.net:8822/home/inx-data/data/Repos/ert-build-support.git
   * ssh://tech-data@dev.inx-systems.net:8822/home/inx-data/data/Repos/DevmanSecurity.git
     * For this last repository (DevmanSecurity) you must also switch your branch to 'master' (git checkout master).
2. Install Unity3d Hub:
   * Install the following dependency: `sudo apt-get install gconf2`
   * Download the Unity Hub from [https://unity3d.com/get-unity/download](https://unity3d.com/get-unity/download)
   * `chmod +x ./UnityHub.AppImage`
   * Run the hub, log in and set up your licence (use the inx developer account [developer@inx-systems.com](mailto:developer@inx-systems.com):HelloUnity101).
   * Press the "activate new license" button if no licenses are shown (choose personal use if this is the case).
3. Navigate to the EHS project and run ./makeEhs-unity.sh
   * NOTE: it takes a long time to build the first time, so allow at least ~30 min for the first build.
   * You can track the progress of the Unity build by tailing a log file, e.g. `tail -f EHS/../TARGET_TREE/ehs_<target>/log`

# Archive (Old Instructions to cherry-pick from! And delete)

[This needs updating, but it is a good format and has some good pictures! It is roughly right, but we now have all the elements documented better elsewhere, and that needs bringing in.]

Name changes (new name : as found in this document):

* ert-components : EHS
* ert-build-support : EHS-build-support
* ert-contrib-middleware : ert-contrib-middleware

Feel free to update the text below as above, and add any new structures for bare-metal OS builds (e.g. esp32, nxp-arm, etc.) as we discover them.

# Multi-Target Build System

### EHS Code Structure

**./Common/** - HW independent
**./target/** - HW/library dependent
**./target/platform/**

Configuration management is in target_build_scripts (bash) and target_platform (make configs) - largely **config.mk**.

#### Build Tools

**./ert-build-support** contains toolchains and some core libraries such as libc - this should be sufficient to build eRT where no specific component dependencies are required (e.g. stubbed or excluded from the build).

#### 3rd-party Source and Libraries

Git repo located at **./ert-contrib-middleware/**. Build against pre-built libraries in the relevant directory in **target_libraries**. These are typically built with a script for each component that will bring in any specific libs required. However, binary dependencies may also be added to this repository under the canonically named path.

The eRT build system should generate a **library path** to the prebuilds from the information in the config.mk and os-arch make files, such as:

ert-contrib-middleware/target_libs/**arm-linux-androideabi-9-arm-none-linux-android-9-headless**/build/

This may be included as a sysroot or as separate -I and -L compiler instructions. (See the build config report produced by:

```
make chkconfig
```
)
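To check which prebuild paths and flags a configured target actually resolves to, something like the following works (a sketch; the chkconfig output format is not specified here):

```bash
# Sketch: inspect the resolved build configuration for the current target.
./configure linux_android_arm_p64_a6_ambifier   # illustrative target
make chkconfig | tee /tmp/chkconfig.txt
grep -i 'target_libs' /tmp/chkconfig.txt        # show the middleware library path(s)
```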
### Building Dependencies in ert-contrib-middleware

The scripts are canonically named:

* **./inx_build_scripts/build_all_pkgs.sh** - called manually when a new target or code change occurs (rarely). This is given two parameters, HW and OS, plus options.
* ./inx_build_scripts/create_cs_rt.sh - called (this may be more specific than the above).
* ./inx_build_scripts/commit_cs_pkg.sh - called manually after verification.

These generally run GNU autotools:

```
./configure --prefix=[abs staging directory]   # located in ../target_lib_builds/$TARGET
make
make install
```

### Target Library Binaries

../ert-build-support/[$CSPACKAGE]/

**Figure:** Process options to identify and create dependencies when building a new ert-components platform.

The EHS build system (EHS.git) usually requires the support of two other repos, EHS-build-support and ert-contrib-middleware, except when the host's installed compiler is used or a suitably pre-configured cross-compiler is in the search path. These repos are very large and will be downloaded automatically when certain build commands are executed, as described in the following sections.

## Build Configuration

Each target type defines a variant. To produce a build variant, a platform file must be created at EHS/target/platform/\<target\>/config.mk using the following format. Target variants have the following configuration fields, defined as environment variables in the EHS make environment:

```
EHS_HW=(x86|SH4|PPC,..)
EHS_OS=(linux|win32,mingw,..)
EHS_RFSSIZE=(KBs)
EHS_RAM=(kBs)
EHS_COMMS_API={NONE,bsdsockets,winsock,serial}
```

### General HAL

Currently we have EHS_NETWORKING_SUPPORT=all. This is quite broad and limiting, and doesn't allow us to be fine-grained about which networking services are supported and which target technology is used to provide them.

**What we should move to:**

- [ ] **Start to implement this as a ticket - hopefully less than a couple of hours' work.**

EHS_NETWORKING_SUPPORT=(all|none, or something specific) - "none" will remove all code and dependencies on networking, and no networking support of any kind will be provided (probably just IP networking for now, but this might include LoRaWAN and possibly Thread/Matter). We don't currently support "none" for no networking, which we probably should; we only support leaving the variable undefined.

We also need to review the following and refer only to the spreadsheet [eRT Build System Parameters](https://docs.google.com/spreadsheets/d/1iLa3ac19vAp6ZYZzBp0nRdMIfbQVcbHgvP6oHVGF95U/edit?gid=505500840#gid=505500840), rather than these probably-wrong duplicates:

```
EHS_NETWORKING_HTTP_SOCKET=(bsd,winsock,lwip,stub,none)
EHS_NETWORKING_HTTP_CLIENT=(libcurl,stub,none)
EHS_NETWORKING_HTTP_SERVER=(lwip,stub,none)
EHS_NETWORKING_MQTT_CLIENT=(lwip,stub,none)
```

If any of these are set when EHS_NETWORKING_SUPPORT is disabled, we should generate a #error build failure to avoid a spammy compile error.
Other notes: the curl-specific code **(to be done at a later date!)** in Common/HAL/curl/ should be moved to /target/Component-HAL/url/curl/ and a stub version made available for completeness. The /Common/HAL/url/ code should provide the same API as it does now (perhaps the CURL object needs to be abstracted as a class typedef so it is build-time polymorphic, or a void*?).

### Component HAL

```
EHS_GRAPHICS_SUPPORT={NONE,GTK,GDI,STAPI,DIRECTFB,SDL,LVGL}
EHS_AV_SUPPORT={NONE,VLC,GSTREAMER,STAPI}
EHS_VECTORGRAPH_SUPPORT={NONE,SVG}
EHS_DATABASE_SUPPORT={NONE,SQLITE}
EHS_DEVMAN_SUPPORT={NONE,DEVUPDATE,DEVMANMON,ALL}
EHS_MEDIA_SUPPORT={NONE,all,smil,dlna}
```

and, as configuration strings for dependency builds: OS_HW_GRAPHICS. Dependencies on components are required for dependency builds. These strings are typically also identified in platform descriptors, e.g. OS_HW_GRAPHICS_NETWORK - the order of these is not critical and missing entries should be equivalent to none.

## Build Outputs

### Linux & Windows

**ehs.exe**

##### Build steps

* make all (or make all_docker)
* prepdeps, targetenv, all
* Variables: $TARGET defines the target type
* upload_sypatch_devman_server

### Target ehs tree

### Boot Image

TARGETENVTREE

## Legacy & Migration

### C-code Module Responsibilities

##### app_data

* Data (Component) Connection Table (EhsDataConnectionTable): creating, initialising, resetting.
* Group Processing Table (EhsKEGroupTable): the scheduling spec for the application.
* Ehs Trigger Table (EhsTriggerTable): initialises the FIFO buffers. parsesld does not call the init function any more.
* Component-generated threads (indexing start and stop, and tear-down).

# Target Platform Packaging

## Android APK

TODO - the following need to be moved somewhere else?

```
│ └── system
│     └── utils
│         ├── downloader
│         ├── downloader.apk
│         ├── downloader.jks
│         └── password.txt
```

## Debian .deb

## ESP32 IDF

ESP32 and ESP32-S3 builds are for the following target environments. Espressif-series (i.e. esp32, esp32s3) software is based on `FreeRTOS`; the filesystem is based on `littlefs`.

### ESP32

#### Build

```
./configure esp32_freertos-xtensor-base
make
```

#### Flashing

1. In the EHS-kernel and ert-components repos, set the target to `esp32_freertos-xtensor-base`.
2. Run `./build-esp32-freertos-ehs.sh` under the "ert-contrib-middleware/inx_build_scripts" directory.
3. Under the "ert-components" repo, run `make clean; make && make targetenv_esp32_docker`.
4. Then you can flash the built image to the esp32 with `sudo ./esp32_flsh.sh`.

### ESP32S3

#### Build

```
# Configure and build the esp32s3 base target
./configure esp32s3_freertos-xtensa-base
make clean
make targetenv_prebuild
make targetenv_littlefs
make all_docker
make targetenv_esp32s3_docker
```

A fuller example for a specific product target:

```
./configure esp32s3_freertos-xtensa-hrdcv2B-ehs-caravan-Willerbys-inx-devman-debug
make prepdeps            # optional
make targetenv_version   # optional
make clean ; make targetenv_prebuild && make targetenv_littlefs && make all_docker && make targetenv_esp32s3_docker
make targetenv_upload_ota   # upload to the server

# Flash and log to file
./scripts/build-deploy/esp32s3/esp32_flash.sh && screen -L -Logfile logfile /dev/ttyACM0 115200
```
#### Flashing

1. Pull the latest changes into `ert-components` and `ert-contrib-middleware`.
2. Under the `ert-components` repo, run `./configure esp32s3_freertos-xtensa-base`.
3. Under the `ert-components` repo, run `make clean ; make all_docker ; make targetenv`.
4. Connect the micro-USB cable to the connector labelled "USB". Your device node should be /dev/ttyACM0.
5. Under `ert-contrib-middleware/contrib/esp-idf/esp-idf-4.4.4/`, run `. export.sh`. (If you are doing this for the first time, make sure to run `./install.sh` first.)
6. Under `TARGET_TREES/ehs_env-esp32s3_freertos-xtensa-base/bin`, run:

```
esptool.py --chip esp32s3 elf2image --min-rev-full 0 --max-rev-full 9999 -ff 80m -fm qio -fs 8MB -o ehs.bin ehs.exe
esptool.py --chip esp32s3 --port /dev/ttyACM0 -b 460800 --before default_reset --after hard_reset write_flash -fm dio -fs 8MB -ff 80m \
    0x0 /path/to//ert-contrib-middleware/target_libs/xtensa-esp32s3_freertos-xtensa-esp32s3-elf-4.4.4/build/lib/bootloader.bin \
    0x9000 /path/to/ert-contrib-middleware/target_libs/xtensa-esp32s3_freertos-xtensa-esp32s3-elf-4.4.4/build/lib/partition-table.bin \
    0x10000 ehs.bin \
&& screen /dev/ttyACM0 115200
```

(If it fails to flash: press the "BOOT" and "RESET" buttons, then release "RESET", and finally release "BOOT" to enter bootloader mode. After flashing succeeds, press "RESET" to reset the board.)

```
# Add this to the screen command in order to log to a file
screen -L -Logfile logfile /dev/ttyACM0 115200
```

#### Erase flash memory

```
esptool.py --chip esp32s3 --port /dev/ttyACM0 erase_flash
```

#### Debugging crashes

```
xtensa-esp32s3-elf-addr2line -pfiaC -e ehs.exe ADDRESS   # e.g. 0x4200b062
```

See: https://docs.espressif.com/projects/esp-idf/en/latest/esp32s3/api-guides/tools/idf-monitor.html

#### Config info

| Description | Memory allocated (bytes) | Location |
| :---- | :---- | :---- |
| Main Loop | 3584 | contrib/esp…/ert…/sdkconfig |
| System Event | 2304 | contrib/esp…/ert…/sdkconfig |
| Timer Task (Optional) | 3584 | contrib/esp…/ert…/sdkconfig |
| EHS MAIN | 20000 | target_main.c |
| TCPIP | 4096 | target_main.c |

In `ert-contrib-middleware/contrib/esp-idf/esp-idf-4.4.4/ert_config_files/esp32s3_freertos`:

CONFIG_ESP_INT_WDT_TIMEOUT_MS=300

[ESP32-S3 Series Datasheet](https://www.espressif.com/sites/default/files/documentation/esp32-s3_datasheet_en.pdf)
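To decode a whole crash backtrace from a captured monitor log, rather than one address at a time as in the "Debugging crashes" step above, a small loop over addr2line helps (a sketch; it assumes the toolchain from `. export.sh` is on the PATH and that `logfile` is the screen capture shown earlier):

```bash
# Sketch: decode every code address found in a crash log (esp32s3 app code
# is typically flash-mapped at 0x42xxxxxx; adjust the pattern for your layout).
grep -oE '0x42[0-9a-f]{6}' logfile | sort -u | while read -r addr; do
    xtensa-esp32s3-elf-addr2line -pfiaC -e ehs.exe "$addr"
done
```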
## Windows 10/11

## Windows Unity Build

(This is the old way! The new way is described at the start of this doc.)

### Build the Windows Unity EHS plug-in

```
./configure win_x86_unity
make prepdeps
make all
make targetenv
```

This produces the .so file that gets renamed later. If you have changes to the C# Unity code you will need to rebuild the Unity project. We currently configure the server and certificates manually, not in ert-components.

### Add the EHS DLL plug-in to the Unity 3d Project

Make sure you have Unity Hub with a license and Unity 2019.4.40 (LTS) installed on your Windows device. Next, from the Unity Hub open the project "EHS/target/os-arch/android_ALL/Unity_EHS". To replace the ehs plugin, simply make sure it is renamed from ***./TARGET_TREES/ehs_env-win_x86_unity/bin/ehs.exe*** to ***libnative-activity.dll***, then drag and drop the dll file into ***Assets/Libs/win_x86*** as shown in the image below.

![][image3]

### Build the Unity 3d Project for Windows

Before doing the following you need to:

1. Download an example Windows build: [https://drive.google.com/drive/folders/1T-BRbE6N3U7zZbWF6IBFOygUGugPicXM](https://drive.google.com/drive/folders/1T-BRbE6N3U7zZbWF6IBFOygUGugPicXM)

Packaging the Unity build: open the build settings dialog from **File -> Build Settings…**, then make sure that you set the **Target Platform** to **Windows** and the **Architecture** to **x86**. (Do not use x86 64-bit, as the plugin dll is built for Windows 32-bit for now.) See the screenshot below for reference.

![][image4]

Click build and navigate to the folder where you'd like the project to be built (e.g. create a SignageWindowsBuild folder somewhere in your file system).

### Copy DLLs and EHS Resources to the Project Folder

After building Windows Unity, your project folder will contain the following files and folders:

![][image5]

Make sure you copy all DLLs required by the EHS plugin from ***./TARGET_TREES/ehs_env-win_x86_unity/bin/*** to the root of your app folder (e.g. SignageWindowsBuild).

Assuming that your EHS plugin is configured to work from any directory, create an ***ehs_data*** folder in the root folder (e.g. SignageWindowsBuild). Next, copy the directories ***appdata devman sysdata userdata*** from ***./TARGET_TREES/ehs_env-win_x86_unity/*** to ***ehs_data/*** in your build root directory.

Next, copy the Signage app from ***apps/customer-apps/SimpleSignOn/sso-unity-v1.0.0/export*** to ***SignageWindowsBuild/ehs_data/appdata/default***.

Next, copy the certs from the DevmanSecurity repo to ***SignageWindowsBuild/ehs_data/devman/core/certs***.

In theory you should now be able to run ***TELLISIGN.exe*** and upload playlists to it from devman.

### Issues that may need to be fixed

Make sure the URL is ***https***, not ***http***, in ***SignageWindowsBuild/ehs_data/devman/core/config/DEVMANURL.000***.

Other info on the signage is specified in this document: [https://docs.google.com/document/d/1pdd-2uhXRIFGtSfz114ihcWe4u_1_dZaxC08jW3iOno/edit#heading=h.bqhjzp48ygz9](https://docs.google.com/document/d/1pdd-2uhXRIFGtSfz114ihcWe4u_1_dZaxC08jW3iOno/edit#heading=h.bqhjzp48ygz9)

# Target Flashing

[Xiaosheng An](mailto:x.an@inx-systems.com) - please see this section and improve it for flashing code onto the NXP devices.

## NXP-Kinesis

NXP parts can be built with NXP's IDE (an Eclipse derivative), but we also have a full set of command-line flashing and building options in eRT (and the HRDCv1's monolithic firmware).

[Unit QA & calibration Guide HRDx](https://docs.google.com/presentation/d/1HTV-dHJEesui9uUNseXK_fMKOm2k0D4yiw1yCWPAvzY/edit#slide=id.p)

Other background information (we should extract any relevant information for flashing binaries here): [HRDx Test and Production Software Setup](https://docs.google.com/document/d/1jv5WtCB9TNM53Ei1bA5pWDUrV25D9-Z37Xx6nxzKf2Q/edit#heading=h.cwzb7d5p1bmf)

### Windows MINGW32 remote debug using Linux Visual Code

1) Install Linux WSL on your Windows machine, then install the following packages:

```
sudo apt install gdbserver
```

# eRT Regression Testing

## TODO

See also [eRT Component Test System](https://docs.google.com/document/d/1SfMc0sSg_HMddXJK0WqKrK9S98owZaQ2VCPb6wmN1u0/edit#heading=h.79ler92qcb4p), which was written up in parallel while the system below was being implemented. The file-based method below is along the same lines, and we may migrate it to a function-block method that will allow more reporting and less complex applications in the future.
- [ ] Make the regression testing scripts more uniform:
  - [ ] Multi-platform build tests are run as a script from ./SystemTests/CI/. Should any bash scripts we use for per-target runtime testing on Linux also be in here? (Not sure where they are currently; it's not really a targetenv process, so they shouldn't really be in there.)
  - [ ] make **targetenv_run_tests** should be called **make test_run_components**.
  - [ ] Create a new make target, as a placeholder for now, for running a single profile-specific smoke test that includes some level of stress and tests a full app, including kernel and platform features: **make test_run_smoke**.
  - [ ] Move the function-block-specific test apps in ./tests/root/ to the function blocks' test directories.
  - [ ] Should we implement the test reporting function block as above before creating too many more regression test apps? The main advantage of the proposed approach is that it can run on any target type and we can get it to report the status to Devman, as reading flag files off targets will not generally be possible or likely to succeed in most cases.
  - [ ] The same applies to the multi-target build regression system. This should also build all the test targets with a multi-functional smoke test app that runs as many function blocks as possible (test cases for each target profile), which can also report the results to Devman if we can find a good way of deploying the new builds to test devices (e.g. using their OTA update methods).

eRT regression testing is carried out in up to 3 stages:

* Multi-target build regression tests (a relatively quick test to run during development and refactoring):
  ./SystemTests/CI/regression_test-published-only.sh
  (We will change this to accept a profile argument for the targets to run, instead of "published only".)
* Single-target component regression tests - runs all the component tests:
  make targetenv_run_tests
  (See the proposed change of name above!)

## Run Tests

At the moment the tests can only be run for the Linux eRT build host targets:

```
./configure linux_x86_64-lucid-debian11
make clean; make all_docker
make targetenv_run_tests
```

The tests in the terminal should look like this:

![][image6]

## Check Results

All test results are saved to this directory with the following structure:

```
../TARGET_TREES/TEST_RESULTS/results/
|-- <test name>/
    |-- expected_result.txt   (contains pre-generated expected results of the test)
    |-- test_result.txt       (result generated by the test, provided eRT runs OK)
    |-- test_stdout.txt       (eRT stdout logs)
    |-- passed                (this flag is created when expected and generated test results match)
    |-- timeout               (the test timed out before any results were available)
```
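A quick way to summarise a run (a sketch against the directory structure above):

```bash
# Sketch: summarise pass/fail/timeout flags after a test run.
cd ../TARGET_TREES/TEST_RESULTS/results/
for t in */; do
    if   [ -e "$t/passed"  ]; then echo "PASS    ${t%/}"
    elif [ -e "$t/timeout" ]; then echo "TIMEOUT ${t%/}"
    else                           echo "FAIL    ${t%/}"
    fi
done
```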
## Create New Test

1. Using Lucid, open the template project located in **ert-components/tests/TEST-TEMPLATE**.
2. Use 'Save Project As' to create a new test with a unique name from this template. For now, store it in **ert-components/tests/root/core** or any other category, e.g. **network**. (WARNING! This is a temporary location, and all tests will be moved to Common/Components once the old tests placed there are sorted out.)
3. Close the TEST-TEMPLATE project; you can now add function blocks to your new test project. Make sure the following events are fired for the start, write and end of your test.
   ![][image7]
4. To generate your project's results, make sure the directory **~/inxware/inx-tests/** is present (it gets created when running targetenv_run_tests, or it can be added manually) and **empty** (clear it if it has some old data). Next, 'Run' the project in Lucid and, once successful, check the results file content and copy it to the root of your new project, e.g. **~/inxware/inx-tests/results/test_result.txt -> ./ert-components/tests/root/core/NewTest/test_result.txt**.
5. That should be it. Run the tests using targetenv_run_tests to see if your new test is passing.

## Example of a working test

![][image8]