# Tests for LVGL
## Test types available

- **Unit Tests**: Standard functional tests in `src/test_cases/` with screenshot comparison capabilities
- **Performance Tests**: ARM-emulated benchmarks in `src/test_cases_perf/` running on a QEMU/SO3 environment
- **Emulated Benchmarks**: Automated `lv_demo_benchmark` runs in ARM emulation to prevent performance regressions

All of the tests are run automatically in LVGL's CI.
## Quick start

- **Local Testing**: Run `./tests/main.py test` (after `scripts/install-prerequisites.sh`)
- **Docker Testing**: Build with `docker build . -f tests/Dockerfile -t lvgl_test_env`, then run the tests in the container (see below)
- **Performance Testing**: Use `./tests/perf.py test` (requires Docker + Linux)
- **Benchmark Testing**: Use `./tests/benchmark_emu.py run` for emulated performance benchmarks (requires Docker + Linux)
## Running locally

### Local
- Install the requirements with `scripts/install-prerequisites.sh`.
- Run all executable tests with `./tests/main.py test`.
- Build all build-only tests with `./tests/main.py build`.
- Clean the prior test build, build all build-only tests, run the executable tests, and generate a code coverage report with `./tests/main.py --clean --report build test`.
- You can re-generate the test images by adding the `--update-image` option: `./tests/main.py --update-image test`. This relies on `scripts/LVGLImage.py`, which requires `pngquant` and `pypng`; run the command first and follow the instructions in the logs to install them. Note that different versions of `pngquant` may generate different images. As of now, the images on CI are generated with pngquant 2.13.1-1.
For full information on running tests, run `./tests/main.py --help`.
### Docker
To run the tests in an environment matching the CI setup:
- Build the image:

  ```bash
  docker build . -f tests/Dockerfile -t lvgl_test_env
  ```

- Run the tests:

  ```bash
  docker run --rm -it -v $(pwd):/work lvgl_test_env "./tests/main.py"
  ```
This ensures you are testing in a consistent environment with the same dependencies as the CI pipeline.
## Running automatically
GitHub's CI automatically runs these tests on pushes and pull requests to the `master` and `release/v8.*` branches.
## Directory structure
- `src` Source files of the tests
  - `test_cases` The written tests
  - `test_cases_perf` The performance tests
  - `test_runners` Generated automatically from the files in `test_cases`
  - other miscellaneous files and folders
- `ref_imgs` Reference images for the screenshot comparison
- `report` Coverage report, generated if the `--report` flag was passed to `./main.py`
- `unity` Source files of the test engine
## Add new tests

### Create new test file
New tests need to be added into the `src/test_cases` folder. The names of the files should look like `test_<name>.c`. For the basic skeleton of a test file, copy `_test_template.c`.
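For orientation, here is a minimal sketch of what such a file typically looks like, following the Unity conventions used in this repo (see `_test_template.c` for the authoritative skeleton; the test body below is illustrative only):

```c
#if LV_BUILD_TEST
#include "../lvgl.h"

#include "unity/unity.h"

void setUp(void)
{
    /* Runs before every test */
}

void tearDown(void)
{
    /* Runs after every test */
}

/* Functions named test_* are picked up by the auto-generated runners */
void test_example(void)
{
    TEST_ASSERT_EQUAL(2, 1 + 1);
}

#endif
```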
### Asserts
See the list of standard asserts in the Unity documentation.
There are some custom, LVGL specific asserts:
- `TEST_ASSERT_EQUAL_SCREENSHOT("image1.png")` Render the active screen and compare its content with an image in the `ref_imgs` folder.
  - If the reference image is not found, it will be created automatically from the rendered screen.
  - If the comparison fails, an `<image_name>_err.png` file with the rendered content will be created next to the reference image.
- `TEST_ASSERT_EQUAL_COLOR(color1, color2)` Compare two colors.
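As a rough usage sketch (widget API names such as `lv_label_create` and `lv_screen_active` assume a recent LVGL version, and the reference image name is hypothetical):

```c
void test_label_render(void)
{
    /* Build a small screen to verify */
    lv_obj_t * label = lv_label_create(lv_screen_active());
    lv_label_set_text(label, "Hello tests");
    lv_obj_center(label);

    /* Compared against ref_imgs/label_hello.png; the reference image
     * is created automatically on the first run if it doesn't exist */
    TEST_ASSERT_EQUAL_SCREENSHOT("label_hello.png");

    /* LVGL-specific color comparison */
    TEST_ASSERT_EQUAL_COLOR(lv_color_hex(0x0000FF), lv_color_hex(0x0000FF));
}
```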
## Performance Tests

### Requirements
- Docker
- Linux host machine (WSL may work but is untested)
### Running Tests
The performance tests are run inside a Docker container that launches an ARM emulated environment using QEMU to ensure consistent timing across machines. Each test runs on a lightweight ARM-based OS (SO3) within this emulated environment.
To run the tests:
```bash
./perf.py [--clean] [--auto-clean] [--test-suite <suite>] [--build-options <option>] [build|generate|test]
```

- `build` and `generate`: generate all necessary build and configuration files
- `test`: launches Docker with the appropriate volume mounts and runs the tests inside the container
> [!NOTE]
> Building doesn't actually compile the source files, because the current Docker image doesn't separate the build and run steps. Instead, running the tests does both.
You can specify different build configurations via `--build-options`, and optionally filter tests using `--test-suite`.
For full usage options, run:

```bash
./perf.py --help
```
You can also run these tests by passing a performance test config to the `main.py` script. The performance test configs can be found inside the `perf.py` file.
## Emulated benchmarks
In addition to the unit and performance tests, LVGL's CI automatically runs `lv_demo_benchmark` inside the same ARM-emulated environment mentioned in the previous section to prevent unintentional slowdowns.
### Requirements
- Docker
- Linux host machine (WSL may work but is untested)
To run these benchmarks in the emulated setup described above, you can use the provided Python script:

```bash
./benchmark_emu.py [-h] [--config {perf32b,perf64b}] [--pull] [--clean] [--auto-clean] [{generate,run} ...]
```
The following command runs all available configurations:

```bash
./benchmark_emu.py run
```
You can also request a specific configuration:

```bash
./benchmark_emu.py --config perf32b run
```