Tests for LVGL

Test types available

  • Unit Tests: Standard functional tests in src/test_cases/ with screenshot comparison capabilities
  • Performance Tests: ARM-emulated benchmarks in src/test_cases_perf/ running in a QEMU/SO3 environment
  • Emulated Benchmarks: Automated lv_demo_benchmark runs in ARM emulation to prevent performance regressions

All of these tests run automatically in LVGL's CI.

Quick start

  • Local Testing: Run ./tests/main.py test (after scripts/install-prerequisites.sh)
  • Docker Testing: Build with docker build . -f tests/Dockerfile -t lvgl_test_env then run
  • Performance Testing: Use ./tests/perf.py test (requires Docker + Linux)
  • Benchmark Testing: Use ./tests/benchmark_emu.py run for emulated performance benchmarks (requires Docker + Linux)

Running locally

Local

  1. Install the requirements:
scripts/install-prerequisites.sh
  2. Run all executable tests with ./tests/main.py test.
  3. Build all build-only tests with ./tests/main.py build.
  4. Clean the prior test build, build all build-only tests, run the executable tests, and generate a code coverage report with ./tests/main.py --clean --report build test.
  5. You can re-generate the reference images by adding the --update-image option: ./tests/main.py --update-image test. Image generation relies on scripts/LVGLImage.py, which requires pngquant and pypng; run the command first and follow the instructions in the logs to install them. Note that different versions of pngquant may generate different images. Currently the images on CI are generated with pngquant 2.13.1-1.

For full information on running tests run: ./tests/main.py --help.

Docker

To run the tests in an environment matching the CI setup:

  1. Build the image:
docker build . -f tests/Dockerfile -t lvgl_test_env
  2. Run the tests:
docker run --rm -it -v $(pwd):/work lvgl_test_env "./tests/main.py"

This ensures you are testing in a consistent environment with the same dependencies as the CI pipeline.

Running automatically

GitHub's CI automatically runs these tests on pushes and pull requests to master and release/v8.* branches.

Directory structure

  • src - Source files of the tests
    • test_cases - The written tests
    • test_cases_perf - The performance tests
    • test_runners - Generated automatically from the files in test_cases
    • other - Miscellaneous files and folders
  • ref_imgs - Reference images for screenshot comparison
  • report - Coverage report; generated if the --report flag was passed to ./tests/main.py
  • unity - Source files of the test engine

Add new tests

Create new test file

New tests need to be added to the src/test_cases folder. File names should look like test_<name>.c. For the basic skeleton of a test file, copy _test_template.c.

Asserts

See the full list of asserts in the Unity documentation.

There are some custom, LVGL specific asserts:

  • TEST_ASSERT_EQUAL_SCREENSHOT("image1.png") Render the active screen and compare its content with an image in the ref_imgs folder.
    • If the reference image is not found, it will be created automatically from the rendered screen.
    • If the comparison fails, an <image_name>_err.png file containing the rendered content will be created next to the reference image.
  • TEST_ASSERT_EQUAL_COLOR(color1, color2) Compare two colors.

Performance Tests

Requirements

  • Docker
  • Linux host machine (WSL may work but is untested)

Running Tests

The performance tests are run inside a Docker container that launches an ARM emulated environment using QEMU to ensure consistent timing across machines. Each test runs on a lightweight ARM-based OS (SO3) within this emulated environment.

To run the tests:

./perf.py [--clean] [--auto-clean] [--test-suite <suite>] [--build-options <option>] [build|generate|test]
  • build and generate: generate all necessary build and configuration files
  • test: launch Docker with the appropriate volume mounts and run the tests inside the container

Note

The build step doesn't actually compile the source files, because the current Docker image doesn't separate building from running; instead, it does both when the tests are run.

You can specify different build configurations via --build-options, and optionally filter tests using --test-suite.

For full usage options, run:

./perf.py --help

You can also run these tests by passing a performance test config to the main.py script. The performance test configs can be found inside the perf.py file.

Emulated benchmarks

In addition to the unit and performance tests, LVGL's CI automatically runs lv_demo_benchmark inside the same ARM emulated environment mentioned in the previous section, to prevent unintentional slowdowns.

Requirements

  • Docker
  • Linux host machine (WSL may work but is untested)

To run these benchmarks in the emulated setup described above, you can use the provided Python script:

./benchmark_emu.py [-h] [--config {perf32b,perf64b}] [--pull] [--clean] [--auto-clean]
                        [{generate,run} ...]

The following command runs all available configurations:

./benchmark_emu.py run 

You can also request a specific configuration:

./benchmark_emu.py --config perf32b run