test(perf): add perf test framework (#8064)

This commit is contained in:
André Costa
2025-06-10 13:56:46 +02:00
committed by GitHub
parent 6bcdf3c806
commit f512bda660
13 changed files with 2031 additions and 9 deletions
+35
@@ -0,0 +1,35 @@
name: Performance Tests CI
on:
push:
branches: [ master ]
pull_request:
branches: [ master ]
# https://docs.github.com/en/actions/writing-workflows/workflow-syntax-for-github-actions#concurrency
# Ensure that only one commit will be running tests at a time on each PR
concurrency:
group: ${{ github.ref }}-${{ github.workflow }}
cancel-in-progress: true
jobs:
test-perf:
runs-on: ubuntu-24.04
strategy:
fail-fast: false
matrix:
# A valid option parameter to the cmake file.
# See BUILD_OPTIONS in tests/perf.py.
build_option: ['OPTIONS_TEST_PERF_32B',
'OPTIONS_TEST_PERF_64B']
name: Perf Tests ${{ matrix.build_option }} - Ubuntu
steps:
- uses: actions/checkout@v4
- uses: ammaraskar/gcc-problem-matcher@master
- name: Setup Python
uses: actions/setup-python@v5
with:
python-version: '3.12'
- name: Building ${{ matrix.build_option }}
run: python tests/perf.py --build-option=${{ matrix.build_option }} test
+35
@@ -44,6 +44,40 @@ docker run --rm -it -v $(pwd):/work lvgl_test_env "./tests/main.py"
This ensures you are testing in a consistent environment with the same dependencies as the CI pipeline.
## Performance Tests
### Requirements
- **Docker**
- **Linux host machine** (WSL *may* work but is untested)
### Running Tests
The performance tests are run inside a Docker container that launches an ARM emulated environment using QEMU to ensure consistent timing across machines. Each test runs on a lightweight ARM-based OS ([SO3](https://github.com/smartobjectoriented/so3)) within this emulated environment.
To run the tests:
```bash
./perf.py [--clean] [--auto-clean] [--test-suite <suite>] [--build-options <option>] [build|generate|test]
```
- `build` and `generate`: generate all necessary build and configuration files
- `test`: launches Docker with the appropriate volume mounts and runs the tests inside the container
> [!NOTE]
> The `build` step doesn't actually compile the source files, because the current Docker image doesn't separate building from running: the container does both in a single step.
You can specify different build configurations via `--build-options`, and optionally filter tests using `--test-suite`.
For full usage options, run:
```sh
./perf.py --help
```
You can also run this script through the `main.py` script by passing it a performance test config. The performance test configs can be found inside the [`perf.py`](./perf.py) file.
## Running automatically
GitHub's CI automatically runs these tests on pushes and pull requests to `master` and `releasev8.*` branches.
@@ -51,6 +85,7 @@ GitHub's CI automatically runs these tests on pushes and pull requests to `maste
## Directory structure
- `src` Source files of the tests
- `test_cases` The written tests,
- `test_cases_perf` The performance tests,
- `test_runners` Generated automatically from the files in `test_cases`.
- other miscellaneous files and folders
- `ref_imgs` - Reference images for screenshot compare
+12 -3
@@ -1,7 +1,6 @@
#!/usr/bin/env python3
import argparse
import errno
import shutil
import subprocess
import sys
@@ -17,6 +16,7 @@ sys.path.append(lvgl_script_path)
wayland_dir = os.path.join(lvgl_test_dir, "wayland_protocols")
wayland_protocols_dir = os.path.realpath("/usr/share/wayland-protocols")
from perf import perf_test_options
from LVGLImage import LVGLImage, ColorFormat, CompressMethod
# Key values must match variable names in CMakeLists.txt.
@@ -223,7 +223,7 @@ if __name__ == "__main__":
parser = argparse.ArgumentParser(
description='Build and/or run LVGL tests.', epilog=epilog)
parser.add_argument('--build-options', nargs=1,
choices=list(chain(build_only_options, test_options)),
choices=list(chain(build_only_options, test_options, perf_test_options)),
help='''the build option name to build or run. When
omitted all build configurations are used.
''')
@@ -258,6 +258,15 @@ if __name__ == "__main__":
for options_name in options_to_build:
is_test = options_name in test_options
is_perf_test = options_name in perf_test_options
if is_perf_test:
perf_test_script = os.path.join(lvgl_test_dir, "perf.py")
try:
subprocess.check_call([perf_test_script, *(sys.argv[1:])])
except subprocess.CalledProcessError as e:
sys.exit(e.returncode)
continue
build_type = 'Debug'
build_tests(options_name, build_type, args.clean)
if is_test:
@@ -268,7 +277,7 @@ if __name__ == "__main__":
if args.auto_clean:
build_dir = get_build_dir(options_name)
print("Removing " + build_dir)
shutil. rmtree(build_dir)
shutil.rmtree(build_dir)
if args.report:
generate_code_coverage_report()
Executable
+552
File diff suppressed because it is too large
+1 -1
@@ -1,4 +1,4 @@
#if LV_BUILD_TEST
#if LV_BUILD_TEST || LV_BUILD_TEST_PERF
#include "lv_test_init.h"
#include <stdio.h>
#include <stdlib.h>
File diff suppressed because it is too large
+33
@@ -0,0 +1,33 @@
#if LV_BUILD_TEST_PERF
#include "unity/unity.h"
static lv_obj_t * active_screen = NULL;
static lv_obj_t * chart = NULL;
static lv_color_t red_color;
void setUp(void)
{
active_screen = lv_screen_active();
chart = lv_chart_create(active_screen);
red_color = lv_palette_main(LV_PALETTE_RED);
}
void test_chart(void)
{
lv_chart_add_series(chart, red_color,
LV_CHART_AXIS_SECONDARY_Y);
TEST_ASSERT_MAX_TIME(lv_chart_add_series, 1, chart, red_color,
LV_CHART_AXIS_SECONDARY_Y);
for(size_t i = 0; i < 10; ++i) {
uint16_t points_in_series = lv_chart_get_point_count(chart);
uint16_t new_point_count = points_in_series * 2;
TEST_ASSERT_MAX_TIME(lv_chart_set_point_count, 1, chart,
new_point_count);
}
}
#endif
+9
@@ -0,0 +1,9 @@
#if LV_BUILD_TEST_PERF
#include "unity/unity.h"
void test_cubic_bezier(void)
{
TEST_ASSERT_MAX_TIME_ITER(lv_cubic_bezier, 10, 60000, 1, 2, 3, 4,
5);
}
#endif
+2 -2
@@ -3,7 +3,8 @@
Copyright (c) 2007-21 Mike Karlesky, Mark VanderVoord, Greg Williams
[Released under MIT License. Please refer to license.txt for details]
============================================================================ */
#if LV_BUILD_TEST
#if LV_BUILD_TEST || LV_BUILD_TEST_PERF
#include "unity.h"
#ifndef UNITY_PROGMEM
@@ -2486,4 +2487,3 @@ int UnityTestMatches(void)
#endif /* UNITY_USE_COMMAND_LINE_ARGS */
/*-----------------------------------------------*/
#endif /*LV_BUILD_TEST*/
+1 -1
@@ -3,7 +3,7 @@
Copyright (c) 2007-21 Mike Karlesky, Mark VanderVoord, Greg Williams
[Released under MIT License. Please refer to license.txt for details]
========================================== */
#if LV_BUILD_TEST
#if LV_BUILD_TEST || LV_BUILD_TEST_PERF
#define UNITY_INCLUDE_PRINT_FORMATTED 1
#ifndef UNITY_FRAMEWORK_H
+1 -1
@@ -3,7 +3,7 @@
Copyright (c) 2007-21 Mike Karlesky, Mark VanderVoord, Greg Williams
[Released under MIT License. Please refer to license.txt for details]
========================================== */
#if LV_BUILD_TEST
#if LV_BUILD_TEST || LV_BUILD_TEST_PERF
#ifndef UNITY_INTERNALS_H
#define UNITY_INTERNALS_H
+32 -1
@@ -1,4 +1,3 @@
#ifndef LV_UNITY_SUPPORT_H
#define LV_UNITY_SUPPORT_H
@@ -37,6 +36,38 @@ extern "C" {
# define TEST_ASSERT_MEM_LEAK_LESS_THAN(prev_usage, threshold) TEST_ASSERT_LESS_OR_EQUAL(threshold, LV_ABS((int64_t)(prev_usage) - (int64_t)lv_test_get_free_mem()));
#ifdef LV_BUILD_TEST_PERF
#include <time.h>
#define TEST_ASSERT_MAX_TIME(fn, max_time_ms, ...) \
do { \
clock_t t = clock(); \
fn(__VA_ARGS__); \
t = clock() - t; \
const double time_taken = \
((double)t * 1000.) / CLOCKS_PER_SEC; \
TEST_ASSERT_LESS_OR_EQUAL_DOUBLE((max_time_ms), time_taken); \
} while (0)
#define TEST_ASSERT_MAX_TIME_ITER(fn, max_time_ms, iterations, ...) \
do { \
clock_t t = clock(); \
for (size_t i = 0; i < iterations; ++i) \
fn(__VA_ARGS__); \
t = clock() - t; \
const double time_taken = \
((double)t * 1000.) / CLOCKS_PER_SEC; \
TEST_ASSERT_LESS_OR_EQUAL_DOUBLE((max_time_ms), time_taken); \
} while (0)
#else
#define TEST_ASSERT_MAX_TIME(fn, max_time_ms, ...) fn(__VA_ARGS__);
#define TEST_ASSERT_MAX_TIME_ITER(fn, max_time_ms, ...) fn(__VA_ARGS__);
#endif /* LV_BUILD_TEST_PERF */
#ifdef __cplusplus
} /*extern "C"*/
#endif