Testing suite for LÖVE's APIs #1745

Closed · slime73 opened this issue Nov 25, 2021 · 5 comments
Labels: feature (New feature or request), help wanted (Extra attention is needed)

slime73 (Member) commented Nov 25, 2021

It would help the long-term stability of LÖVE to have a set of tests that verify whether LÖVE's APIs function correctly, along with the ability to add new tests or change existing ones with as little friction as possible.

We don't need something overengineered to start with - simple is good in my opinion, as long as it fits our needs. We can also start with a very small number of tests and add to them over time, rather than worrying about building a comprehensive suite immediately.

Primary goals:

  • Ability to have simple pass/fail tests written in Lua with minimal setup code (a rough sketch follows this list).
  • Ability to run all tests with a simple command.
  • Ability to see how many tests are passing and how many are failing.
  • No platform-specific dependencies / scripts without strong justification.

Stretch goals:

  • Automatic testing that happens after every commit.
  • Ability to run a subset of tests (for example all image module tests).
  • Ability to easily run an individual test.
  • Ability to test loading different combinations of modules.
  • Tests can compare visual results to a reference image, possibly with an optional tolerance value in situations where different graphics drivers don't guarantee bit-exact results (a sketch of such a comparison follows this list).
  • Ability to see all visual results at a glance (for example an output folder of screenshots and maybe a generated HTML file that shows thumbnails of them all).
  • Performance tests (doesn't necessarily need to output a pass/fail, but can be useful for tracking numbers over time or between changes, on the same system).
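
For the reference-image bullet above, a rough sketch of a per-pixel comparison with a tolerance, assuming both inputs are ImageData objects (getPixel returns channels in the 0-1 range since 11.0):

-- Returns true if every channel of every pixel differs by at most `tolerance`.
local function imagesMatch(a, b, tolerance)
    local w, h = a:getWidth(), a:getHeight()
    if w ~= b:getWidth() or h ~= b:getHeight() then
        return false
    end
    for y = 0, h - 1 do
        for x = 0, w - 1 do
            local r1, g1, b1, a1 = a:getPixel(x, y)
            local r2, g2, b2, a2 = b:getPixel(x, y)
            if math.abs(r1 - r2) > tolerance
                or math.abs(g1 - g2) > tolerance
                or math.abs(b1 - b2) > tolerance
                or math.abs(a1 - a2) > tolerance then
                return false
            end
        end
    end
    return true
end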

Some questions I have:

  • Should all tests run in the same instance of LÖVE, should each one be isolated, or should certain subsections be isolated from each other? The former is easiest, but it prevents certain kinds of tests, and if there's a crash the run can't continue. Isolating everything is "safest" but might make a large suite take much longer than it needs to.
  • What testing tools are out there already? Do any suit our needs well enough?
slime73 added the "help wanted" and "feature" labels Nov 25, 2021
slime73 added this to the 13.0 milestone Jan 30, 2022
IsometricShahil commented

What do you think of using a pure Lua solution such as bjornbytes/lust? I think it achieves all the primary goals along with the stretch goals, except for generating an HTML file, which might be tricky in Lua.
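
For reference, a lust test looks roughly like this (adapted from its README; the LÖVE calls are only an example):

local lust = require "lust"
local describe, it, expect = lust.describe, lust.it, lust.expect

describe("love.math", function()
    it("keeps random values in the requested range", function()
        local n = love.math.random(1, 10)
        expect(n >= 1 and n <= 10).to.be.truthy()
    end)
end)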

zorggn commented Feb 11, 2022

Generating a basic HTML file is probably not that hard, though, if you don't want too many bells and whistles in it (e.g. rxi's lovebird comes to mind).
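
For example, a bare-bones report generator could look like this (the shape of the `results` table and the output path are made up for illustration):

-- `results` is assumed to be a list of
-- { name = "...", passed = true/false, screenshot = "path.png" } entries.
local function writeHtmlReport(results, path)
    local f = assert(io.open(path, "w"))
    f:write("<html><body><h1>Test results</h1><table>\n")
    for _, r in ipairs(results) do
        local thumb = r.screenshot
            and ('<img src="' .. r.screenshot .. '" width="128">') or ""
        f:write(("<tr><td>%s</td><td>%s</td><td>%s</td></tr>\n")
            :format(r.name, r.passed and "PASS" or "FAIL", thumb))
    end
    f:write("</table></body></html>\n")
    f:close()
end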

I looked at lust; it seems useful, though the full-suite/group/individual test selection would probably need to be implemented on top of it. (And aside from the toggleable ANSI console colour codes, it looks like it has no platform-specific dependencies.)

In my opinion, if a test makes LÖVE crash, that's already a problem in and of itself, since that shouldn't happen at all (if such a crash is fixed, the test run can continue; if another crash happens, repeat until none remain). I do believe that using the same instance is probably good for most tests.

What would be an easy way to help out with this issue, as in, contribute code?
I mean, I'd rather not clone the löve repo and make a PR if I don't need to; I feel that might have too slow a turnaround for suggestions...

gcmartijn commented

Well, I use this folder structure in combination with luaunit.

This helps me with two things at the moment:

  1. unit tests
  2. integration tests (using the actual game/program)
- scripts
-- unittest.sh
-- integrationtest.sh
- test
-- integration
--- assets
--- data
--- example.lua
-- unittest
--- example.lua
  • unittest/example.lua
-- https://luaunit.readthedocs.io/en/latest/

local lu = require("luaunit")
local Math = require "framework.helper.math"

TestClass = {}

function TestClass:testRound()
    lu.assertEquals(Math.round(50.30), 50)
    lu.assertEquals(Math.round(50.50), 51)
    lu.assertEquals(Math.round(50.70), 51)
    lu.assertEquals(Math.round(50), 50)
end

function TestClass:testNearest()
    lu.assertEquals(Math.nearest({10, 13}, 14), 13)
    lu.assertEquals(Math.nearest({10, 13}, 13), 13)
    lu.assertEquals(Math.nearest({10, 13}, 12), 13)
    lu.assertEquals(Math.nearest({10, 13}, 11), 10)
    lu.assertEquals(Math.nearest({10, 13}, 10), 10)
    lu.assertEquals(Math.nearest({10, 13}, 9), 10)
end

os.exit(lu.LuaUnit.run())
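
(For what it's worth, luaunit's runner also accepts command-line flags, so a single file can be run verbosely with something like lua test/unittest/example.lua -v, or emit TAP output with -o tap.)
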
  • unittest.sh
#!/bin/bash

source config

cd ../src/

# Find all test files, skipping the data directory (-prune -false keeps it out of the output).
for testFile in $(find "${UNITTEST_DIR}" -path "${UNITTEST_DIR}/data" -prune -false -o -type f); do
    echo "$testFile"
    OUTPUT=$(${LUA} "$testFile")

    echo "$OUTPUT"
    
    # stop testing if one thing failed
    if [[ "$OUTPUT" == *"stack"* ]]; then
        break
    fi
done
  • integrationtest.sh
#!/bin/bash

source config

cd ../
$LOVE_EXE src/test/integration
  • src/test/integration/main.lua
local lu = require("luaunit")
local runner = lu.LuaUnit.new()

function love.load()
    runner:runSuiteByInstances(
        {
            {"rect-helper", require "rect-helper"},
            {"image-assets", require "image-assets"},
            {"audio-assets", require "audio-assets"},
            {"entity-manager", require "entity-manager"},
            {"callback", require "callback"},
            {"frameset", require "frameset"}
        }
    )

    love.event.push("quit", 0)
end

Another thing I really like is luacheck:
https://luacheck.readthedocs.io/en/0.23.0/config.html#config-options
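
As an example, a minimal .luacheckrc for a LÖVE project could be as small as this (the exact settings are just a starting point):

-- .luacheckrc
std = "luajit"         -- LÖVE embeds LuaJIT
globals = { "love" }   -- the love table is provided by the engine
max_line_length = 120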

premek (Contributor) commented Oct 21, 2022

Should the tests be more like:

  1. unit tests
    • tests individual API functions separately
    • somehow uses the C/Lua code, not the compiled binary
    • could use mocking/spying etc. to inspect whether the called API method internally did what it's supposed to do? (a small spy sketch follows this list)
    • should be OK for love.math, for example?
  2. integration testing
    • would run the compiled love executable; tests could be written as (one or multiple) love "games"
    • tests would generate some output data (e.g. screenshots for love.graphics, some other files for love.filesystem etc) and those would be compared to some expected results
  3. combination of both?
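
On the mocking/spying point above, a minimal sketch of wrapping an existing function so its calls can be inspected afterwards (the wrapped function and the assertion are just for illustration):

-- Replace t[name] with a wrapper that records each call's arguments
-- before delegating to the original function; returns the call log.
local function spy(t, name)
    local original = t[name]
    local calls = {}
    t[name] = function(...)
        calls[#calls + 1] = { ... }
        return original(...)
    end
    return calls
end

-- Hypothetical usage:
local calls = spy(love.graphics, "rectangle")
love.graphics.rectangle("fill", 0, 0, 10, 10)
assert(#calls == 1 and calls[1][1] == "fill")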

slime73 (Member, Author) commented Nov 30, 2023

@ellraiser implemented pretty much all of this in #1974, which is now merged into the 12.0-development branch in 36783d3

slime73 closed this as completed Nov 30, 2023