Make code more robust with good tests #6

Status: Open · lukegre opened this issue Apr 17, 2020 · 6 comments
Labels: help wanted (Extra attention is needed)

lukegre commented Apr 17, 2020

I've written a few tests as examples. They are not robust, but they show how to implement a test. The only requirement for a test is that it raises an Exception when the behaviour it checks is broken (a plain assert statement is enough for that). There is some testing data in ./tests/data/.

I highly recommend doing this in VS Code with the correct Python environment enabled.
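
For concreteness, here is a minimal sketch of what such a test could look like with pytest. The function being exercised (np.nanmean) is only a stand-in; swap in the actual glidertools function you want to check.

```python
# Minimal pytest-style sketch: pytest collects any function named test_*
# and marks it as failed if the function raises an Exception.
import numpy as np


def test_nanmean_ignores_nans():
    data = np.array([1.0, 2.0, np.nan, 4.0])
    # stand-in for a glidertools processing step
    result = np.nanmean(data)
    # a failing assert raises AssertionError, which is all a test
    # needs to do to signal failure
    assert np.isclose(result, (1.0 + 2.0 + 4.0) / 3)
```

Running pytest from the repository root will pick this up automatically if the file lives under ./tests/.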

dhruvbalwada commented

@jbusecke is there good documentation online that one can read about how to write tests for packages like GT? Or do you think it makes sense to add a simple example of how to write a test to https://glidertools.readthedocs.io/en/latest/contributing.html?

jbusecke commented

This is really one of the hardest topics to give "general" advice on. I have some starting tips in my cookie-cutter template repo.

For this sort of project I would strongly recommend working with a combination of synthetic data and some simple real-world datasets. Set up some simple data and check that the function you are testing produces the desired results.
For the real-world data you could keep a 'completely reprocessed' ground truth and compare a full processing run against it each time the code changes; see the sketch below.
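
As a hedged sketch of that ground-truth idea (the file paths and the full_processing function here are hypothetical placeholders, not the actual glidertools API):

```python
import numpy as np


def full_processing(raw):
    # placeholder for the real end-to-end processing pipeline
    return raw - np.nanmean(raw)


def test_full_processing_matches_ground_truth():
    raw = np.load("tests/data/raw_example.npy")              # hypothetical input
    expected = np.load("tests/data/processed_example.npy")   # stored ground truth
    result = full_processing(raw)
    # raises an AssertionError if the pipeline output drifts
    np.testing.assert_allclose(result, expected, rtol=1e-6)
```

Whenever the processing code changes, this test flags any unintended change in the output; if the change is intended, the stored ground truth is regenerated deliberately.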

I would start very simple and then expand as you find bugs. Having some tests is always better than having none while waiting for the 'perfect test setup'.

One general piece of advice for debugging: first implement a test that fails because of the bug, then fix the code until the test passes. This ensures that the error can never go undetected again in the future.
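
A small illustration of that workflow, with an entirely hypothetical bug (a smoothing function that used to choke on NaNs):

```python
import numpy as np


def smooth(x, window=3):
    # the fixed version: replace NaNs before convolving so they
    # no longer propagate through the running mean
    return np.convolve(np.nan_to_num(x), np.ones(window) / window, mode="same")


def test_smooth_handles_nans():
    x = np.array([1.0, np.nan, 3.0, 4.0, 5.0])
    # this assertion failed before the fix; keeping the test around
    # guarantees the bug cannot silently return
    assert not np.isnan(smooth(x)).any()
```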

A good way to guide the testing work is to look at the code coverage. We can implement this as part of the CI, so you can see which lines of code are being 'touched' by the tests and which aren't, and then expand the tests accordingly.
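
For reference, coverage can also be checked locally before the CI runs, assuming the pytest-cov plugin is installed:

```
pytest --cov=glidertools --cov-report=term-missing
```

The term-missing report lists the line numbers that no test currently reaches.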

jbusecke commented

I'll add some tips to the docs, because this will be the first thing that presents a friction point.

jbusecke commented

#53 will help with browsing the coverage of the code.

jbusecke commented

OK, I have set up a rudimentary report to Codecov. We can now see the code coverage (the lines that are actually run during testing) in a nice visualization: https://codecov.io/gh/GliderToolsCommunity/GliderTools/tree/86f02bcd17764b2ad816a8a8ea4b331e2d428682/glidertools.

This will be generated automatically for each PR going forward. We need to do two things:

  1. Exclude the files that should not be tested (e.g. __init__.py and the test files themselves); a sketch of how this could look follows below.
  2. Actually write tests to increase the coverage, which is pretty bad and will look even worse after 1. 😬

To proceed with 1. I would need some help from the folks more in touch with the actual code (@lukegre @sarahnicholson @marcelduplessis?). I assume the files in glidertools/load/... are only for loading example datasets? Or are they also used in the package code itself? If it's the former I would exclude them from the coverage; if the latter, they need to be included and tested.
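
To illustrate 1., here is a minimal sketch of a .coveragerc, assuming coverage.py is what generates the report; the omitted paths are examples to be adjusted:

```
[run]
omit =
    glidertools/__init__.py
    tests/*
```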

jbusecke commented

I started on 1. in #63.
