- Introduction
- Development guidelines
- Getting involved into development
- Technologies
- Folder structure
- Development setup
- Handling of dependencies
- Managing the database
- Adding new authentication module
- Running tests of openQA itself
- CircleCI workflow
- Building plugins
- Checking for JavaScript problems
- Profiling the web UI
- Making documentation changes
openQA is an automated test tool that makes it possible to test the whole installation process of an operating system. It’s free software released under the GPLv2 license. The source code and documentation are hosted in the os-autoinst organization on GitHub.
This document provides the information needed to start contributing to openQA development: improving the tool, fixing bugs and implementing new features. For information about writing or improving openQA tests, refer to the Tests Developer Guide. Both documents assume that the reader is already familiar with openQA and has already read the Starter Guide. All those documents are available at the official repository.
As mentioned, the central point of development is the os-autoinst organization on GitHub where several repositories can be found.
- os-autoinst: https://github.com/os-autoinst/os-autoinst
  - the "backend" (the component that executes tests and starts/controls the SUT, e.g. using QEMU)
- openQA: https://github.com/os-autoinst/openQA
  - mainly the web UI and accompanying daemons like the scheduler
  - the worker (the component that starts the backend and uploads results to the web UI)
  - documentation
  - miscellaneous support scripts
- test distribution: e.g. https://github.com/os-autoinst/os-autoinst-distri-opensuse for openSUSE
  - the actual tests; in the case of os-autoinst-distri-opensuse conducted on http://openqa.opensuse.org
- needles: e.g. https://github.com/os-autoinst/os-autoinst-needles-opensuse for openSUSE
  - reference images if not already included in the test distribution
- empty example test distribution: https://github.com/os-autoinst/os-autoinst-distri-example
  - meant to be used to start writing tests (and creating the corresponding needles) from scratch for a new operating system
- As in most projects hosted on GitHub, pull requests are always welcome and are the right way to contribute improvements and fixes.
- Every commit is checked in CI as soon as you create a pull request, but you should run the tidy scripts locally, i.e. before every commit, call:
  make tidy
  to ensure your Perl and JavaScript code changes are consistent with the style rules.
- All tests must pass. This is ensured by a CI system. You can also run tests locally in your development environment to verify everything works as expected, see Conducting tests.
- For git commit messages use the rules stated in How to Write a Git Commit Message as a reference.
- Every pull request is peer-reviewed to give feedback on possible implications and how we can help each other to improve. If this is too much hassle for you, feel free to provide incomplete pull requests for consideration or create an issue with a code change proposal.
- In Perl files:
  - Sort the use statements in this order from top to bottom (see the sketch after this list):
    - strict, warnings or other modules that provide static checks
    - all external modules and modules from the "lib" folder
    - use FindBin; use lib "$FindBin::Bin/lib"; or similar to resolve internal modules
    - internal test modules which provide early checks before other modules
    - other internal test modules
  - When using signatures try to follow these rules:
    - Activate the feature with modules we already use if possible, e.g. use Mojo::Base 'Something', -signatures;
    - Use positional parameters whenever possible, e.g. sub foo ($first, $second) {
    - Use default values when appropriate, e.g. sub foo ($first, $second = 'some value') {
    - Use slurpy parameters when appropriate (hash and array), e.g. sub foo ($first, @more) {
    - Use nameless parameters when appropriate (very uncommon), e.g. sub foo ($first, $, $third) {
    - Do not get too creative with computational default values, e.g. sub foo ($first, $second = rand($first)) {
    - Do not combine sub attributes with signatures (requires Perl 5.28+), e.g. sub foo :lvalue ($first) {
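To illustrate the use statement ordering, the preamble of a test file might look like the following sketch (the concrete module names are merely examples):

use strict;
use warnings;

# external modules
use Test::Most;
use Mojo::File 'tempdir';

# resolve internal modules
use FindBin;
use lib "$FindBin::Bin/lib";

# internal test module providing early checks
use OpenQA::Test::TimeLimit '10';

# other internal test modules
use OpenQA::Test::Case;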
Developers willing to get really involved in the development of openQA, or people interested in following the always-changing roadmap, should take a look at the openQAv3 project in openSUSE’s project management tool. This Redmine instance is used to coordinate the main development effort, organizing the existing issues (bugs and desired features) into 'target versions'.
The 'Future improvements' version groups features that are on the developers' and users' wish list but have little chance of being addressed in the short term, normally because they are out of the current scope of the development. Developers looking for a place to start contributing are encouraged to simply go to that list and assign any open issue to themselves.
openQA and os-autoinst repositories also include test suites aimed at preventing bugs and regressions in the software. codecov is configured in the repositories to encourage contributors to raise the tests coverage with every commit and pull request. New features and bug fixes are expected to be backed with the corresponding tests.
Everything in openQA, from os-autoinst to the web frontend and from the tests to the support scripts, is written in Perl. So having some basic knowledge about that language is really desirable in order to understand and develop openQA. Of course, in addition to bare Perl, several libraries and additional tools are required. The easiest way to install all needed dependencies is using the available os-autoinst and openQA packages, as described in the Installation Guide.
In the case of os-autoinst, only a few CPAN modules are required: basically Carp::Always, Data::Dump, JSON and YAML. On the other hand, several external tools are needed, including QEMU, Tesseract and OptiPNG. Last but not least, the OpenCV library is the core of the openQA image matching mechanism, so it must be available on the system.
The openQA package is built on top of Mojolicious, an excellent Perl framework for web development that will be extremely familiar to developers coming from other modern web frameworks like Sinatra, and that has nice and comprehensive documentation available at its home page.
In addition to Mojolicious and its dependencies, several other CPAN modules are required by the openQA package. See Dependencies below.
openQA relies on PostgreSQL to store the information. It used to support SQLite, but that is no longer possible.
As stated in the previous section, every feature implemented in both packages
should be backed by proper tests.
Test::Most is used to implement those
tests. As usual, tests are located under the /t/
directory. In the openQA
package, one of the tests consists of a call to
Perltidy to ensure that the contributed code
follows the most common Perl style conventions.
The meaning and purpose of the most important folders within openQA are:

- public: static assets published to users over the web UI or API
- t: self-tests of openQA
- assets: 3rd-party JavaScript and CSS files
- docs: documentation, including this document
- etc: configuration files including template branding specializations
- lib: main Perl module library folder
- script: main applications and startup files
- .circleci: CircleCI definitions
- dbicdh: database schema startup and migration files
- container: container definitions
- profiles: AppArmor profiles
- systemd: systemd service definitions
- templates: HTML templates delivered by the web UI
- tools: development tools
For developing openQA and os-autoinst itself it makes sense to check out the Git repositories and either execute existing tests or start the daemons manually.
Have a look at the packaged version (e.g. dist/rpm/openQA.spec
within the
root of the openQA repository) for all required dependencies. For development, build-time dependencies need to be installed as well. Recommended dependencies such as logrotate can be ignored. For openSUSE there is also the openQA-devel meta-package which pulls in all required dependencies for development.
You can find all required Perl modules in the form of a cpanfile that enables you to install them with a CPAN client. They are also defined in dist/rpm/openQA.spec.
To execute all existing checks and tests simply call:
make test
for style checks, unit and integration tests.
To execute single tests call make
with the selected tests in the TESTS
variable specified as a white-space separated list, for example:
make test TESTS=t/config.t
or
make test TESTS="t/foo.t t/bar.t"
To run only unit tests without other tests (perltidy or database tests):
make test-unit-and-integration TESTS=t/foo.t
Or use prove after pointing to a local test database via the environment variable TEST_PG. Also, if you set a custom base directory, be sure to unset it when running tests. Example:
TEST_PG='DBI:Pg:dbname=openqa_test;host=/dev/shm/tpg' OPENQA_BASEDIR= LC_ALL=C.utf8 LANGUAGE= prove -v t/14-grutasks.t
When tweaking tests as above, you can speed up the test initialization by starting PostgreSQL using t/test_postgresql instead of the system service, e.g.:
t/test_postgresql /dev/shm/tpg
To easily check the coverage of individual test files, call e.g.:
make coverage TESTS=t/24-worker-engine.t
and take a look into the generated coverage HTML report in
cover_db/coverage.html
.
We use annotations in some places to mark "uncoverable" code such as this:
# uncoverable subroutine
See the Devel::Cover documentation for details: https://metacpan.org/pod/Devel::Cover
There are some ways to save some time when executing local tests:
- One option is selecting individual tests to run as explained above.
- Set the make variable KEEP_DB=1 to keep the test database process spawned for faster re-runs, or run tests with prove manually after the test database has been created.
- Run tools/tidyall --git to tidy up modified code before committing in git.
- Set the environment variable DIE_ON_FAIL=1 from Test::Most for faster aborts from failed tests.
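For example, a combined invocation using these options might look like this (t/foo.t is a placeholder):

make test KEEP_DB=1 TESTS=t/foo.t
DIE_ON_FAIL=1 prove -v t/foo.t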
For easier debugging of t/full-stack.t one can set the environment variable
OPENQA_FULLSTACK_TEMP_DIR
to a clean directory (relative or absolute path)
to be used for saving temporary data from the test, for example the log files
from individual test job runs within the full stack test.
It is possible to customize the openQA base directory (which is for instance used to store test results) by setting the environment variable OPENQA_BASEDIR. The default value is /var/lib. For a development setup, set OPENQA_BASEDIR to a directory the user you are going to start openQA with has write access to. Additionally, take into account that the test results and assets can require a large amount of disk space.
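For example, assuming a hypothetical directory under your home:

export OPENQA_BASEDIR=$HOME/openqa-basedir
mkdir -p "$OPENQA_BASEDIR"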
Warning: Be sure to clear that variable when running unit tests locally.
It can be necessary during development to change the configuration.
For example you have to edit etc/openqa/database.ini
to use another database.
It can also be useful to set the authentication method to Fake and to increase the log level in etc/openqa/openqa.ini.
To avoid these changes getting in your Git workflow, copy them to a new
directory and set the environment variable OPENQA_CONFIG
:
cp -ar etc/openqa etc/mine
export OPENQA_CONFIG=$PWD/etc/mine
Note: OPENQA_CONFIG needs to point to the directory containing openqa.ini, database.ini, client.conf and workers.ini (and not to a specific file).
Setting up a PostgreSQL database for openQA takes the following steps:
1. Install PostgreSQL. Under openSUSE the following packages are required: postgresql-server postgresql-init
2. Start the server: systemctl start postgresql
3. The next two steps need to be done as the user postgres: sudo su - postgres
4. Create a user: createuser your_username, where your_username must be the same as the UNIX user you start your local openQA instance with. For a development instance that is normally your regular user.
5. Create a database: createdb -O your_username openqa-local, where openqa-local is the name you want to use for the database.
6. Configure openQA to use PostgreSQL as described in the section Database of the installation guide. User name and password are not required. Of course you need to change the database.ini file under your custom config directory (if you created one as described in the previous section).
7. openQA will default-initialize the new database on the next startup.
The script openqa-setup-db can be used to conduct steps 4 and 5. You must still specify the user and database name and run it as the user postgres:

sudo -u postgres openqa-setup-db your_username openqa-local
Note: To remove the database again, you can use e.g. dropdb openqa-local as your regular user.
Assuming you have already followed steps 1. to 4. above:
1. Create a separate database: createdb -O your_username openqa-o3, where openqa-o3 is the name you want to use for the database.
2. The next steps must be run as the user you start your local openQA instance with, i.e. the your_username user.
3. Import the dump: pg_restore -c -d openqa-o3 path/to/dump. Note that errors of the form ERROR: role "geekotest" does not exist are due to the users in the production setup and can safely be ignored. Everything will be owned by your_username.
4. Configure openQA to use that database as described in the database configuration step above.
This section should give you a general idea how to start daemons manually for development after you setup a PostgreSQL database as mentioned in the previous section.
You have to install/update web-related dependencies first using npm install
.
To start the web server for development, use script/openqa daemon. The other
. The other
daemons (mentioned in the architecture diagram)
are started in the same way, e.g. script/openqa-scheduler daemon
.
You can also have a look at the systemd unit files. Although it likely does not make much sense to use them directly, they show how the different daemons are started. They are found in the systemd directory of the openQA repository. You can substitute /usr/share/openqa/ with the path of your openQA Git checkout.
Of course you can ignore the user specified in these unit files and instead start everything as your regular user as mentioned above. However, you need to ensure that your user has write access to the "openQA base directory". That is not the case by default, so it makes sense to customize it.
You do not need to set up an additional web server because the daemons already provide one. The port under which a service is available is logged on startup (the main web UI port is 9526 by default). Local workers need to be configured to connect to the main web UI port (add HOST = http://localhost:9526 to workers.ini).
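A minimal sketch of the corresponding workers.ini entry, assuming the default port:

[global]
HOST = http://localhost:9526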
Note that you can also start services using a temporary database using the unit test database setup and data directory:
t/test_postgresql /dev/shm/tpg
TEST_PG='DBI:Pg:dbname=openqa_test;host=/dev/shm/tpg' OPENQA_DATABASE=test OPENQA_BASEDIR=t/data script/openqa daemon
This creates an empty temporary database and starts the web application using
that specific database (ignoring the configuration from database.ini
). Be
aware that this may cause unwanted changes in the t/data
directory.
Also find more details in Run tests without Container.
- It is also useful to start openQA with morbo, which allows applying changes without restarting the server:
  morbo -m development -w assets -w lib -w templates -l http://localhost:9526 script/openqa daemon
- In case you have problems with broken rendering of the web page it can help to delete the asset cache and let the web server regenerate it on first startup. For this delete the subdirectories .sass-cache/ and assets/cache/ and the file assets/assetpack.db. Make sure to look for error messages on startup of the web server and to force the refresh of the web page in your browser.
- If you get errors like "ERROR: Failed to build gem native extension." make sure you have all listed dependencies including the "sass" application installed.
- For a concrete example of a development setup some developers use under openSUSE Tumbleweed, have a look at the openQA-helper repository.
Install third-party JavaScript and CSS files via their corresponding npm
packages and add the paths of those files to assets/assetpack.def
.
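For illustration, an entry in assets/assetpack.def might look like the following hypothetical excerpt, assuming the usual AssetPack notation where a line starting with ! names an asset topic and lines starting with < pull in source files:

! app.js
< node_modules/some-lib/dist/some-lib.js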
If a dependency is not available on npm you may consider adding those files under assets/3rdparty. Additionally, add the license(s) for the newly added third-party code to the root directory of the repository. Do not duplicate common/existing licenses; extend the Files: section at the beginning of those files instead.
In openQA, there is a dependencies.yaml file containing a list of dependencies, separated into groups. For example the openQA client does not need all modules required to run openQA. Edit this file to add or change a dependency and run make update-deps. This will generate the cpanfile and dist/rpm/openQA.spec files.
The same applies to os-autoinst, where make update-deps will generate the cpanfile, os-autoinst.spec and container/os-autoinst_dev/Dockerfile.
When changing any package dependencies, make sure the new or updated packages are available in openSUSE Factory and whatever current Leap version is in development. New package dependencies can be submitted. Before merging the according change into the main openQA repository the dependency should be published as part of openSUSE Tumbleweed.

- The CI of os-autoinst and openQA uses the container made from container/devel:openQA:ci/base/Dockerfile and the further dependencies listed in tools/ci/ci-packages.txt (see the CircleCI documentation).
- There is an additional check running on OBS to check builds of packages against openSUSE Tumbleweed and openSUSE Leap.
During the development process there are cases in which the database schema needs to be changed. There are some steps that have to be followed so that new database instances and upgrades include those changes.
The steps below are needed after modifying files in lib/OpenQA/Schema/Result. However, not all changes require updating the schema: adding just another method or altering/adding functions like has_many does not require an update, whereas adding new columns, or modifying or removing existing ones, does. When in doubt, just follow the instructions below. If an empty migration has been emitted (the SQL file produced in step 3 does not contain any statements) you can just drop the migration again.
1. First, you need to increase the database version number in the $VERSION variable in the lib/OpenQA/Schema.pm file. Note that it is recommended to notify the other developers before doing so, to synchronize in case more developers want to increase the version number at the same time.
2. Then you need to generate the deployment files for new installations by running ./script/initdb --prepare_init.
3. Afterwards you need to generate the deployment files for existing installations by running ./script/upgradedb --prepare_upgrade. After doing so, the directories dbicdh/$ENGINE/deploy/<new version> and dbicdh/$ENGINE/upgrade/<prev version>-<new version> for PostgreSQL should have been created with some SQL files inside, containing the statements to initialize the schema and to upgrade from one version to the next in the corresponding database engine.
4. Custom migration scripts to upgrade from previous versions can be added under dbicdh/_common/upgrade. Create a <prev_version>-<new_version> directory and put some files there with DBIx commands for the migration (see the hedged sketch after this list). For examples just have a look at the migrations which are already there. The custom migration scripts are executed in addition to the automatically generated ones. If the name of a custom migration script comes before 001-auto.sql in alphabetical order, it will be executed before the automatically created migration script, which is most of the time not desired.
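As a hedged sketch, such a custom migration script could be an SQL file with plain statements, e.g. a hypothetical dbicdh/_common/upgrade/99-100/002-fill-new-column.sql (table and column names are made up):

-- populate a newly added column for existing rows
UPDATE jobs SET priority = 50 WHERE priority IS NULL;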
The above steps are only for preparing the required SQL statements for the migration.
The migration itself (which alters your database!) is done automatically the first time the web UI is (re)started. So be sure to backup your database before restarting to be able to downgrade again if something goes wrong or you just need to continue working on another branch. To do so, the following command can be used to create a copy:
createdb -O ownername -T originaldb newdb
To initialize or update the database manually before restarting the web UI you can run
either ./script/initdb --init_database
or ./script/upgradedb --upgrade_database
.
Migrations that affect possibly big tables should be tested against a local import of a production database to see how much time they need. Check out the Importing production data section for details.
A migration can cause the query analyzer to regress so that it produces worse query plans, leading to impaired performance. Check out the Working on database-related performance problems section for how to tackle this problem.
Note: This section is not about the fixtures for the testsuite. Those are located under t/fixtures.
Note: This section might not be relevant anymore. At least, currently none of the mentioned directories with files containing SQL statements are present.
Fixtures (initial data stored in tables at installation time) are stored in files in the dbicdh/_common/deploy/_any/<version> and dbicdh/_common/upgrade/<prev_version>-<next_version> directories.
You can create as many files as you want in each directory. These files contain SQL statements that will be executed when initializing or upgrading a database. Note that those files (and directories) have to be created manually.
Executed SQL statements can be traced by setting the DBIC_TRACE
environment
variable.
export DBIC_TRACE=1
openQA comes with two authentication modules providing authentication methods: OpenID and Fake (see User authentication).
All authentication modules reside in the lib/OpenQA/Auth directory. During openQA start, the [auth]/method section of /etc/openqa/openqa.ini is read and, according to its value (or the default OpenID), openQA tries to require OpenQA::WebAPI::Auth::$method. If successful, the module for the given method is imported; otherwise openQA ends with an error.
Each authentication module is expected to export auth_login and auth_logout functions. In the case of a request-response mechanism (as in OpenID), auth_response is imported on demand.
Currently there is no login page because all implemented methods use either a 3rd-party page or none.
The authentication module is expected to return a hash:

%res = (
    # error = 1 signals an auth error
    error => 0|1,
    # where to redirect the user
    redirect => '',
);
The authentication module is expected to create or update the user entry in the openQA database after user validation. See the included modules for inspiration.
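A minimal sketch of such a module, assuming only the contract described above (the method name "MyMethod" and the validation details are hypothetical):

package OpenQA::WebAPI::Auth::MyMethod;
use Mojo::Base -base, -signatures;
use Exporter 'import';

our @EXPORT_OK = qw(auth_login auth_logout);

sub auth_login ($controller) {
    # validate the user here, e.g. via a hypothetical request parameter,
    # and create or update the user entry in the openQA database
    my $username = $controller->param('user');
    return (error => 1) unless defined $username;
    return (error => 0, redirect => $controller->url_for('index'));
}

sub auth_logout ($controller) {
    return (error => 0);
}

1;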
Besides simply running the testsuite, it is also possible to use containers. With containers, tests are executed in the same environment as on CircleCI, which allows reproducing issues specific to that environment.
Be sure to install all required dependencies. The package openQA-devel
will
provide them.
If the package is not available, the dependencies can also be found in the file dist/rpm/openQA.spec in the openQA repository. In this case the package perl-Selenium-Remote-Driver is also required to run UI tests. You also need to install chromedriver and either chrome or chromium for the UI tests.
To execute the testsuite use make test
. This will also initialize a
temporary PostgreSQL database used for testing. To do this step manually run
t/test_postgresql /dev/shm/tpg
to initialize a temporary PostgreSQL database
and export the environment variable as instructed by that script.
It is also possible to run a particular test, for example
prove t/api/01-workers.t
. When using prove
directly, make sure an English
locale is set (e.g. export LC_ALL=C.utf8 LANGUAGE=
before initializing the database
and running prove
).
To keep the test database running after executing tests with the Makefile
, add
KEEP_DB=1
to the make arguments. To access the test database, use
psql --host=/dev/shm/tpg openqa_test
.
To watch the execution of the UI tests, set the environment variable NOT_HEADLESS
.
The container used in this section of the documentation is not identical with the container used within the CI. To run tests within the CI environment locally, checkout the CircleCI documentation below.
To run tests in a container, make sure that a container runtime environment, for example podman, is installed. To launch the test suite it is first required to pull the container image:
podman pull registry.opensuse.org/devel/openqa/containers/opensuse/openqa_devel:latest
This container image is provided by the OBS repository https://build.opensuse.org/package/show/devel:openQA/openQA-devel-container and is based on the Dockerfile within the container/devel subdirectory of the openQA repository.
Run tests by spawning a container manually, e.g.:
podman run --rm -v OPENQA_LOCAL_CODE:/opt/openqa -e VAR1=1 -e VAR2=1 openqa_devel:latest make run-tests-within-container
Replace OPENQA_LOCAL_CODE
with the location where you have the openQA code.
The command line to run tests manually reveals that the Makefile target
run-tests-within-container
is used to run the tests inside the container.
It does some preparations to be able to run the full stack test within a
container and considers a few environment variables defining our test matrix:
- CHECKSTYLE=1
- FULLSTACK=0 UITESTS=0
- FULLSTACK=0 UITESTS=1
- FULLSTACK=1
- HEAVY=1
- GH_PUBLISH=true
So by replacing VAR1 and VAR2 with those values one can trigger the different tests of the matrix.
Of course it is also possible to run (specific) tests directly via prove
instead of using the Makefile targets.
Running UI tests in non-headless mode is also possible, e.g.:

xhost +local:root
podman run --rm -ti --name openqa-testsuite -v /tmp/.X11-unix:/tmp/.X11-unix:rw -e DISPLAY="$DISPLAY" -e NOT_HEADLESS=1 openqa_devel:latest prove -v t/ui/14-dashboard.t
xhost -local:root
It is also possible to use a custom os-autoinst checkout using the following arguments:
podman run … -e CUSTOM_OS_AUTOINST=1 -v /path/to/your/os-autoinst:/opt/os-autoinst make run-tests-within-container
By default, configure and make are still executed (so a clean checkout is expected). If your checkout is already prepared to use, set CUSTOM_OS_AUTOINST_SKIP_BUILD to prevent this. Be aware that a build produced outside of the container might not work inside the container if both environments provide different, incompatible library versions (e.g. OpenCV).
In general, if starting the tests via a container seems to hang, it is a good idea to inspect the process tree to see which command is currently executed.
Logs are redirected to a logfile when running tests within the CI. The output can therefore not be asserted using Test::Output. This can be worked around by temporarily assigning a different Mojo::Log object to the application. To test locally under the same conditions, set the environment variable OPENQA_LOGFILE.
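A minimal sketch of that workaround inside a test, assuming $app is the application under test (Mojo::Log logs to STDERR by default, which Test::Output can capture):

use Mojo::Log;
use Test::Output 'combined_like';

my $original_log = $app->log;
$app->log(Mojo::Log->new);    # bypass the logfile and log to STDERR again
combined_like { $app->log->info('hello') } qr/hello/, 'log output is assertable';
$app->log($original_log);     # restore the previous logger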
Note that redirecting the logs to a logfile only works for tests which run OpenQA::Log::setup_log. In other tests the log is just printed to standard output. This makes using Test::Output simple, but take care that the test output is not cluttered by log messages, which can be quite irritating.
The test modules use OpenQA::Test::TimeLimit
to introduce a test module
specific timeout. The timeout is automatically scaled up based on environment
variables, e.g. CI
for continuous integration environments, as well as when
executing while test coverage data is collected as longer runtimes should be
expected in these cases. Consider lowering the timeout value based on usual
local execution times whenever a test module is optimized in runtime. If the
timeout is hit the test module normally aborts with a corresponding message.
To disable the timeout globally, set the environment variable OPENQA_TEST_TIMEOUT_DISABLE=1.
Be aware that the timeout can also trigger after the actual test part of a test module has finished, while involved processes are still running or END blocks are processed. In this case the output can look like:
t/my_test.t .. All 1 subtests passed
Test Summary Report
-------------------
t/my_test.t (Wstat: 14 Tests: 1 Failed: 0)
Non-zero wait status: 14
Files=1, Tests=1, 2 wallclock secs ( 0.03 usr 0.00 sys + 0.09 cusr 0.00 csys = 0.12 CPU)
Result: FAIL
where "Wstat: 14" and "Non-zero wait status: 14" mean that the test process received the "ALRM" signal (signal number 14).
In case of problems with timeouts, look into OpenQA::Test::TimeLimit to find environment variables that can be tweaked to disable or change timeout values or timeout scale factors. If you want to disable the timeout for indefinite manual debugging, set the environment variable OPENQA_TEST_TIMEOUT_DISABLE=1. The option OPENQA_TEST_TIMEOUT_SCALE_CI is only effective if the environment variable CI is set, which it is e.g. in CircleCI and OBS but not in local development environments. When running with coverage analysis enabled, the scaling factor OPENQA_TEST_TIMEOUT_SCALE_COVER is applied to account for the runtime overhead.
In case Selenium-based UI tests time out while trying to find a local chromedriver instance, the variable OPENQA_SELENIUM_TEST_STARTUP_TIMEOUT can be set to a higher value. See https://metacpan.org/pod/Selenium::Chrome#startup_timeout for details.
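For example, to give chromedriver a minute to start (the concrete value is arbitrary):

export OPENQA_SELENIUM_TEST_STARTUP_TIMEOUT=60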
The goal of the following workflow is to provide a way to run tests with a pre-approved list of dependencies both in the CI and locally.
- ci-packages.txt lists the dependencies to test against.
- autoinst.sha contains the SHA of the os-autoinst commit used for integration testing. The testing will run against the latest master if it is empty.
ci-packages.txt and autoinst.sha are meant to represent those dependencies which change often. In the normal workflow these files are generated automatically by a dedicated bot, then go through CI in a PR, and are then reviewed and accepted by a human.
So, in the normal workflow it is guaranteed that everyone always works on a list of correct and approved dependencies (unless they explicitly tell CI to use custom dependencies).
The bot tracks dependencies only in the master branch by default, but this may be extended in the CircleCI config file.
The bot uses the tools/ci/build_dependencies.sh script to detect any changes. This script can be used manually as well.
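For example, to regenerate the pinned package list locally:

tools/ci/build_dependencies.sh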
Alternatively, just add newly introduced dependencies into ci-packages.txt, so CI will run the tests with them.
Occasionally it may be a challenge to work with ci-packages.txt (e.g. a package version is not available anymore). In such a case you can either try to rebuild ci-packages.txt using tools/ci/build_dependencies.sh or just remove all entries and put only openQA-devel into it.
The tools/ci/build_dependencies.sh script can also be modified when major changes are performed, e.g. a different OS version or packages from a forked OBS project.
One way is to build an image using the build_local_container.sh
script, start a
container and then use the same commands one would use to test locally.
Pull the latest base image (otherwise it may be outdated):
podman pull registry.opensuse.org/devel/openqa/ci/containers/base:latest
Create an image called localtest based on the contents of ci-packages.txt and autoinst.sha:
tools/ci/build_local_container.sh
Mount the openQA checkout under /opt/testing_area
within the container and run
tests as usual, e.g.:
podman run -it --rm -v $PWD:/opt/testing_area localtest bash -c 'make test TESTS=t/ui/25*'
Alternatively, start the container and execute commands via podman exec
, e.g.:
podman run --rm --name t1 -v $PWD:/opt/testing_area localtest tail -f /dev/null & sleep 1
podman exec -it t1 bash -c 'make test TESTS=t/ui/25-developer_mode.t'
podman stop -t 0 t1
After installing the circleci tool, the following commands will be available. They will build the container and use the committed changes from the current local branch.
circleci local execute --job test1
circleci local execute --job testui
circleci local execute --job testfullstack
circleci local execute --job testdeveloperfullstack
Not all code needs to be included in openQA itself. openQA also supports the use of 3rd party plugins that follow the standards for plugins used by the Mojolicious web framework. These can be distributed as normal CPAN modules and installed as such alongside openQA.
Plugins are a good choice especially for extensions to the UI and HTTP API, but also for notification systems listening to various events inside the web server.
If your plugin was named OpenQA::WebAPI::Plugin::Hello, you would install it in one of the include directories of the Perl used to run openQA, and then configure it in openqa.ini. The plugins setting in the global section tells openQA what plugins to load.
# Tell openQA to load the plugin
[global]
plugins = Hello
# Plugin specific configuration (optional)
[hello_plugin]
some = value
The plugin specific configuration is optional, but if defined it is available in $app->config->{hello_plugin}.
To extend the UI or HTTP API there are various named routes already defined that will take care of authentication for your plugin. You just attach the plugin routes to them and only authenticated requests will get through.
package OpenQA::WebAPI::Plugin::Hello;
use Mojo::Base 'Mojolicious::Plugin';
sub register {
my ($self, $app, $config) = @_;
# Only operators may use our plugin
my $ensure_operator = $app->routes->find('ensure_operator');
my $plugin_prefix = $ensure_operator->any('/hello_plugin');
# Plain text response (under "/admin/hello_plugin/")
$plugin_prefix->get('/' => sub {
my $c = shift;
$c->render(text => 'Hello openQA!');
})->name('hello_plugin_index');
# Add a link to the UI menu
$app->config->{plugin_links}{operator}{'Hello'} = 'hello_plugin_index';
}
1;
The plugin_links
configuration setting can be modified by plugins to add links
to the operator
and admin
sections of the openQA UI menu. Route names or
fully qualified URLs can be used as link targets. If your plugin uses templates,
you should reuse the bootstrap
layout provided by openQA. This will ensure a
consistent look, and make the UI menu available everywhere.
% layout 'bootstrap';
% title 'Hello openQA!';
<div>
<h2>Hello openQA!</h2>
</div>
For UI plugins there are two named authentication routes defined:

- ensure_operator: under /admin/, only allows logged-in users with operator privileges
- ensure_admin: under /admin/, only allows logged-in users with admin privileges
And for HTTP API plugins there are four named authentication routes defined:

- api_public: under /api/v1/, allows access to everyone
- api_ensure_user: under /api/v1/, only allows authenticated users
- api_ensure_operator: under /api/v1/, only allows authenticated users with operator privileges
- api_ensure_admin: under /api/v1/, only allows authenticated users with admin privileges
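By analogy with the UI example above, attaching an API route to one of these could look like the following sketch (the route path and response are made up):

# in the plugin's register method
my $api_operator = $app->routes->find('api_ensure_operator');
$api_operator->get('/hello_plugin' => sub {
    my $c = shift;
    $c->render(json => {hello => 'openQA'});
});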
To generate a minimal installable plugin with a CPAN distribution directory structure you can use the Mojolicious tools. It can be packaged just like any other Perl module from CPAN.
$ mojo generate plugin -f OpenQA::WebAPI::Plugin::Hello
...
$ cd OpenQA-WebAPI-Plugin-Hello/
$ perl Makefile.PL
...
$ make test
...
And if you need code examples, there are some plugins included with openQA.
One can use the tool jshint to check for problems within JavaScript code. It can be installed easily via npm:
npm install jshint
node_modules/jshint/bin/jshint path/to/javascript.js
- Install NYTProf, under openSUSE Tumbleweed:
  zypper in perl-Devel-NYTProf perl-Mojolicious-Plugin-NYTProf
- Put profiling_enabled = 1 in openqa.ini.
- Optionally import production data as described in the official contributors documentation.
- Restart the web UI and browse some pages. Profiling is done in the background.
- Access the profiling data via the /nytprof route.
After changing documentation, consider generating documentation locally to
verify it is rendered correctly using tools/generate-docs
. It is possible to
do that inside the provided development container by invoking:
podman run --rm -v OPENQA_LOCAL_CODE:/opt/openqa openqa_devel:latest make generate-docs
Replace OPENQA_LOCAL_CODE
with the location where you have the openQA code.
The documentation will be built inside the container and put into docs/build/
subfolder.
You can also utilize the make serve-docs
target which will additionally spawn a
simple Python HTTP server inside the target folder, so you can just point your
browser to port 8000 to view the documentation. That can be handy, for example, in situations where you do not have the filesystem directly accessible (e.g. remote development). The magic line in this case would be:
podman run --rm -it -p 8000:8000 -v .:/opt/openqa openqa_devel:latest make serve-docs