It’s been a little while since I’ve written up unit tests for my code, as I’ve been working on projects that don’t have much need for continuous integration (CI). The last (and only) CI tool I used was Travis CI, back when it had its .org domain name, without credit limits or pricing plans. While looking into various CI services, I happened upon Github Actions mentioned in a third-party Github repo, and it seems to be just the thing I’m looking for! It’s already integrated into my repository (no need to set up webhooks), and it’s super easy to get started.
There are already many workflows available for Github Actions, from both Github and the wider developer community. Using those resources, I set up a workflow which (1) lints my code, (2) runs unit tests, and (3) generates a coverage report that I can access on Codecov. If you haven’t yet used Github Actions, check out their docs, or at the very least, skim their quick start guide. Here are some tips I picked up while setting up continuous integration via Github Actions.
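Before getting into the tips, here is roughly what a finished Pip-based workflow file looks like. This is only a sketch based on Github’s stock python-app.yml template; the Python version and the lint/test commands are examples to adapt:

name: Python application

on: [push, pull_request]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v2
    - name: Set up Python
      uses: actions/setup-python@v2
      with:
        python-version: "3.8"
    - name: Install dependencies
      run: |
        python -m pip install --upgrade pip
        pip install flake8 pytest
        pip install -r requirements.txt
    - name: Lint with flake8
      run: |
        flake8 .
    - name: Test with pytest
      run: |
        pytest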
Use the appropriate workflow based on your Python package manager.
In your Python setup, if you use Pip as your package manager, then start with python-app.yml for your workflow. If you use Conda, then use python-package-conda.yml instead. While the requirements.txt files generated by Conda and Pip look similar, they cannot be used interchangeably. Specifically:
- Pip cannot successfully install packages from a Conda requirements.txt file. Running
pip freeze > requirements.txt
will give something that looks like this:
amqp==2.6.1
whereas running
conda list --export > requirements.txt
will give something that looks like this:
amqp=2.6.1=pypi_0
- If most of your environment’s packages were installed with Conda, generating a requirements.txt file using Pip will give file references pointing at local build paths, which will be an issue when building the test environment on the CI runner. For example, this was generated with Pip in a Conda environment:
attrs @ file:///home/conda/feedstock_root/build_artifacts/attrs_1605083924122/work
beautifulsoup4 @ file:///tmp/build/80754af9/beautifulsoup4_1601924105527/work
whereas this was generated with Conda in the same environment:
attrs=20.3.0=pyhd3deb0d_0
beautifulsoup4=4.6.0=pypi_0
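If your environment is Conda-based, a cleaner route is to describe it in an environment.yml file (for example via conda env export), which is what the Conda starter workflow typically installs from. A minimal hand-written sketch could look like this; the environment name, package names, and versions are placeholders:

name: test-env
channels:
  - conda-forge
  - defaults
dependencies:
  - python=3.8
  - pytest
  - pip
  - pip:
    - codecov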
Put your file with the list of package dependencies in the root folder.
Whether you’re using a requirements.txt file or an environment.yml file, it goes in the root folder of the Github repo. This is in contrast to the YAML workflow file, which goes into the .github/workflows folder.
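Concretely, the layout would be something like this; the package, test, and workflow file names are just examples:

my-repo/
├── .github/
│   └── workflows/
│       └── python-app.yml
├── requirements.txt  (or environment.yml)
├── mypackage/
└── tests/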
Define all branches on which the workflows should run.
If you’re working in a branch other than main/master, make sure it is specified in the branches list. The default is:
on:
  push:
    branches: [ $default-branch ]
  pull_request:
    branches: [ $default-branch ]
I wanted to trigger my workflow with any change, so I changed it to the following:
on: [push, pull_request]
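If you later want to narrow it down again, you can instead list the branches explicitly; the branch names below are just examples:

on:
  push:
    branches: [ main, dev ]
  pull_request:
    branches: [ main ]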
Add code coverage.
This is less of a tip and more of a how-to. You can generate a coverage report after running the tests and send it to Codecov by adding the following steps to the workflow:
# connect to codecov
- name: Generate coverage report
  run: |
    conda install pytest-cov
    pytest --cov=./ --cov-report=xml
- name: Upload coverage to Codecov
  uses: codecov/codecov-action@v1
  with:
    flags: unittests
    env_vars: OS,PYTHON
    name: codecov-umbrella
    fail_ci_if_error: true
    verbose: true
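The snippet above assumes a Conda-based job; in a Pip-based workflow the only change would be the install line, e.g.:

- name: Generate coverage report
  run: |
    pip install pytest-cov
    pytest --cov=./ --cov-report=xml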
Make all modules accessible for tests.
This one is not specific to Github Actions but applies to testing in general. If the code under test isn’t installed as a package, you can add the following lines to your test files as a workaround to tell Python where to look for the modules:
import sys, os

# Add the directory containing the code under test to the module search path
sys.path.append(os.path.realpath(os.path.dirname(__file__) + "/.."))
Replace "/.."
with the path, relative to your tests, of where the code-to-be-imported is located.
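As a concrete (and hypothetical) example, a test file using this workaround could look like the following, where mypackage and add are stand-ins for your own module and function:

# tests/test_mypackage.py
import sys, os

# Make the repo root importable before pulling in the code under test
sys.path.append(os.path.realpath(os.path.dirname(__file__) + "/.."))

import mypackage  # imported after the sys.path tweak

def test_add():
    assert mypackage.add(2, 3) == 5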
And that’s it! Hit me up if you have any questions.