Run doctest in CI and fix errors in the documentation examples #1193

Open · wants to merge 3 commits into base: main
82 changes: 82 additions & 0 deletions .github/workflows/docs.yml
@@ -0,0 +1,82 @@
name: Tests

on:
push:
branches:
- main
- checks
pull_request:
branches:
- main
Comment on lines +5 to +10 (Member):
Are these settings preventing the workflow from running in this PR?
# Cancels all previous workflow runs for pull requests that have not completed.
concurrency:
# The concurrency group contains the workflow name and the branch name for pull requests
# or the commit hash for any other events.
group: ${{ github.workflow }}-${{ github.event_name == 'pull_request' && github.head_ref || github.sha }}
cancel-in-progress: true
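
The `&&`/`||` pair in the `group` expression emulates a ternary; a plain-Python sketch of how the group name resolves (with hypothetical event values, not part of the workflow itself):

```python
def concurrency_group(workflow: str, event_name: str, head_ref: str, sha: str) -> str:
    # Mirrors ${{ github.workflow }}-${{ github.event_name == 'pull_request' && github.head_ref || github.sha }}:
    # pull requests share one group per branch, every other event gets one per commit.
    suffix = head_ref if event_name == "pull_request" else sha
    return f"{workflow}-{suffix}"

print(concurrency_group("Tests", "pull_request", "fix-doctests", "deadbeef"))  # -> Tests-fix-doctests
print(concurrency_group("Tests", "push", "", "deadbeef"))                      # -> Tests-deadbeef
```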

jobs:
test:
name: "Doctest${{ matrix.python-version }}: ${{ matrix.part }}"
runs-on: ubuntu-latest
strategy:
fail-fast: true
matrix:
python-version: ["3.7", "3.9"]
fast-compile: [0]
float32: [0]
install-numba: [1]
steps:
- uses: actions/checkout@v2
with:
fetch-depth: 0
- name: Set up Python ${{ matrix.python-version }}
uses: conda-incubator/setup-miniconda@v2
with:
mamba-version: "*"
channels: conda-forge,defaults
channel-priority: true
python-version: ${{ matrix.python-version }}
auto-update-conda: true

- name: Create matrix id
id: matrix-id
env:
MATRIX_CONTEXT: ${{ toJson(matrix) }}
run: |
echo $MATRIX_CONTEXT
export MATRIX_ID=`echo $MATRIX_CONTEXT | md5sum | cut -c 1-32`
echo $MATRIX_ID
echo "::set-output name=id::$MATRIX_ID"

- name: Install dependencies
shell: bash -l {0}
run: |
mamba install --yes -q "python~=${PYTHON_VERSION}=*_cpython" mkl numpy scipy pip mkl-service graphviz cython pytest coverage pytest-cov sympy
if [[ $INSTALL_NUMBA == "1" ]]; then mamba install --yes -q -c conda-forge "python~=${PYTHON_VERSION}=*_cpython" "numba>=0.55" numba-scipy; fi
mamba install --yes -q -c conda-forge "python~=${PYTHON_VERSION}=*_cpython" jax "jaxlib!=0.3.15"
pip install -e ./
pip install -r requirements-rtd.txt
mamba list && pip freeze
python -c 'import aesara; print(aesara.config.__str__(print_doc=False))'
python -c 'import aesara; assert(aesara.config.blas__ldflags != "")'
env:
PYTHON_VERSION: ${{ matrix.python-version }}
INSTALL_NUMBA: ${{ matrix.install-numba }}

- name: Run doctests
shell: bash -l {0}
run: |
if [[ $FAST_COMPILE == "1" ]]; then export AESARA_FLAGS=$AESARA_FLAGS,mode=FAST_COMPILE; fi
if [[ $FLOAT32 == "1" ]]; then export AESARA_FLAGS=$AESARA_FLAGS,floatX=float32; fi
export AESARA_FLAGS=$AESARA_FLAGS,warn__ignore_bug_before=all,on_opt_error=raise,on_shape_error=raise,gcc__cxxflags=-pipe
python ./doc/scripts/docgen.py --test
env:
MATRIX_ID: ${{ steps.matrix-id.outputs.id }}
MKL_THREADING_LAYER: GNU
MKL_NUM_THREADS: 1
OMP_NUM_THREADS: 1
PART: ${{ matrix.part }}
FAST_COMPILE: ${{ matrix.fast-compile }}
FLOAT32: ${{ matrix.float32 }}
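
Note that the `::set-output` workflow command used in the matrix-id step above has been deprecated by GitHub Actions; a hedged sketch of the same step written against the newer `$GITHUB_OUTPUT` file (an assumption about a possible follow-up, not part of this PR):

```shell
# Compute a stable id for one matrix entry, as the matrix-id step does.
MATRIX_CONTEXT='{"python-version": "3.9", "fast-compile": 0}'
MATRIX_ID=$(echo "$MATRIX_CONTEXT" | md5sum | cut -c 1-32)
# On a real runner GITHUB_OUTPUT is provided; fall back to a temp file here.
GITHUB_OUTPUT="${GITHUB_OUTPUT:-/tmp/github_output}"
echo "id=$MATRIX_ID" >> "$GITHUB_OUTPUT"
```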
2 changes: 1 addition & 1 deletion aesara/gradient.py
Expand Up @@ -2319,7 +2319,7 @@ def grad_clip(x, lower_bound, upper_bound):
>>> z2 = aesara.gradient.grad(x**2, x)
>>> f = aesara.function([x], outputs = [z, z2])
>>> print(f(2.0))
[array(1.0), array(4.0)]
[array(1.), array(4.)]

Notes
-----
Expand Down
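
The expected-output change reflects how NumPy has printed float scalars since its 1.14 repr changes (`array(1.0)` became `array(1.)`); a quick check:

```python
import numpy as np

# Zero-dimensional float arrays drop the trailing zero in their repr.
print(repr(np.array(1.0)))  # -> array(1.)
print(repr(np.array(4.0)))  # -> array(4.)
```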
1 change: 1 addition & 0 deletions aesara/graph/basic.py
Expand Up @@ -926,6 +926,7 @@ def orphans_between(

Examples
--------
>>> from aesara.graph.basic import orphans_between
>>> orphans_between([x], [(x+y).out])
[y]

Expand Down
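
For readers without an Aesara session at hand, the idea behind `orphans_between` can be sketched with a toy graph walk (a simplified, hypothetical stand-in, not the real implementation):

```python
def orphans_between_sketch(inputs, outputs, parents):
    # parents maps a computed variable to the variables it is computed from;
    # leaves reached during the walk that are not in `inputs` are the orphans.
    seen, orphans, stack = set(inputs), [], list(outputs)
    while stack:
        v = stack.pop()
        if v in seen:
            continue
        seen.add(v)
        if v in parents:
            stack.extend(parents[v])
        else:
            orphans.append(v)
    return orphans

# z = x + y: with x given as an input, y is an orphan.
print(orphans_between_sketch(["x"], ["z"], {"z": ["x", "y"]}))  # -> ['y']
```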
4 changes: 2 additions & 2 deletions aesara/tensor/rewriting/math.py
Expand Up @@ -648,9 +648,9 @@ class AlgebraicCanonizer(NodeRewriter):
--------
>>> import aesara.tensor as at
>>> from aesara.tensor.rewriting.math import AlgebraicCanonizer
>>> add_canonizer = AlgebraicCanonizer(add, sub, neg, \\
>>> add_canonizer = AlgebraicCanonizer(add, sub, neg, \
... lambda n, d: sum(n) - sum(d))
>>> mul_canonizer = AlgebraicCanonizer(mul, true_div, inv, \\
>>> mul_canonizer = AlgebraicCanonizer(mul, true_div, inv, \
... lambda n, d: prod(n) / prod(d))

Examples of rewrites `mul_canonizer` can perform:
Expand Down
7 changes: 5 additions & 2 deletions doc/extending/creating_an_op.rst
Expand Up @@ -516,14 +516,17 @@ We can test this by running the following segment:

.. testcode:: properties

import numpy as np
import aesara.tensor as at

mult4plus5op = AXPBOp(4, 5)
another_mult4plus5op = AXPBOp(4, 5)
mult2plus3op = AXPBOp(2, 3)

assert mult4plus5op == another_mult4plus5op
assert mult4plus5op != mult2plus3op

x = aesara.tensor.matrix()
x = at.matrix()
f = aesara.function([x], mult4plus5op(x))
g = aesara.function([x], mult2plus3op(x))
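
The equality behaviour this snippet exercises comes from the `Op`'s `__props__`; a minimal stand-alone sketch of the pattern in plain Python (a hypothetical class, not the real `AXPBOp`):

```python
class AXPBOpSketch:
    # Instances whose __props__ values match compare (and hash) equal.
    __props__ = ("a", "b")

    def __init__(self, a, b):
        self.a, self.b = a, b

    def _key(self):
        return (type(self),) + tuple(getattr(self, p) for p in self.__props__)

    def __eq__(self, other):
        return isinstance(other, type(self)) and self._key() == other._key()

    def __hash__(self):
        return hash(self._key())

assert AXPBOpSketch(4, 5) == AXPBOpSketch(4, 5)
assert AXPBOpSketch(4, 5) != AXPBOpSketch(2, 3)
```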

Expand Down Expand Up @@ -667,8 +670,8 @@ For instance, to verify the :meth:`Rop` method of the ``DoubleOp``, you can use

.. testcode:: tests

import numpy
import tests
import numpy as np
from tests.test_rop import RopLop_checker
class TestDoubleRop(RopLop_checker):
def setUp(self):
Expand Down
2 changes: 1 addition & 1 deletion doc/extending/ctype.rst
Expand Up @@ -459,7 +459,7 @@ Final version

.. testcode::

from aesara.graph.type import
from aesara.graph.type import Type

class Double(Type):

Expand Down
100 changes: 49 additions & 51 deletions doc/extending/graph_rewriting.rst
Expand Up @@ -382,7 +382,7 @@ results as follows:
>>> res.evaled_obj
add.0
>>> aesara.dprint(res.evaled_obj)
add [id A] ''
add [id A]
|y [id B]
|y [id B]

Expand All @@ -400,10 +400,10 @@ varying number of arguments:
>>> args_lv = var()
>>> s = unify(cons(op_lv, args_lv), add(x, y))
>>> s
{~_2: <aesara.scalar.basic.Add at 0x7f54dfa5a350>, ~_3: e(x, y)}
{~_2: <aesara.scalar.basic.Add object at 0x7fc277b841c0>, ~_3: ExpressionTuple((x, y))}
>>> s = unify(cons(op_lv, args_lv), add(x, y, z))
>>> s
{~_2: <aesara.scalar.basic.Add at 0x7f54dfa5a350>, ~_3: e(x, y, z)}
{~_2: <aesara.scalar.basic.Add object at 0x7fc277b841c0>, ~_3: ExpressionTuple((x, y, z))}
Comment on lines -403 to +406 (Member):
Looks like we need to take another approach altogether here (e.g. one that doesn't print local locations/addresses).
From here, we can check ``s[op_lv] == add`` to confirm that we have the correct :class:`Op` and
proceed with our rewrite.
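
One common way to keep such doctests stable despite the memory addresses in the expected output is doctest's `ELLIPSIS` directive; a sketch of the idea (an assumption about a possible fix, not what this PR implements):

```python
import doctest

def example():
    """
    >>> object()  # doctest: +ELLIPSIS
    <object object at 0x...>
    """

# Run the docstring example; the ellipsis matches any address.
runner = doctest.DocTestRunner()
for test in doctest.DocTestFinder().find(example):
    runner.run(test)
print(runner.failures)  # -> 0
```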
Expand All @@ -412,7 +412,7 @@ proceed with our rewrite.
>>> res
e(<aesara.scalar.basic.Mul at 0x7f54dfa5ae10>, x, y, z)
>>> aesara.dprint(res.evaled_obj)
mul [id A] ''
mul [id A]
|x [id B]
|y [id C]
|z [id D]
Expand Down Expand Up @@ -440,52 +440,49 @@ turning :mod:`kanren` relations into :class:`NodeRewriter`\s; however,

The following is an example that distributes dot products across additions.

.. code::

import aesara
import aesara.tensor as at
from aesara.graph.rewriting.kanren import KanrenRelationSub
from aesara.graph.rewriting.basic import EquilibriumGraphRewriter
from aesara.graph.rewriting.utils import rewrite_graph
from aesara.tensor.math import _dot
from etuples import etuple
from kanren import conso, eq, fact, heado, tailo
from kanren.assoccomm import assoc_flatten, associative
from kanren.core import lall
from kanren.graph import mapo
from unification import vars as lvars


# Make the graph pretty printing results a little more readable
aesara.pprint.assign(
_dot, aesara.printing.OperatorPrinter("@", -1, "left")
)

# Tell `kanren` that `add` is associative
fact(associative, at.add)


def dot_distributeo(in_lv, out_lv):
"""A `kanren` goal constructor relation for the relation ``A.dot(a + b ...) == A.dot(a) + A.dot(b) ...``."""
A_lv, add_term_lv, add_cdr_lv, dot_cdr_lv, add_flat_lv = lvars(5)

return lall(
# Make sure the input is a `_dot`
eq(in_lv, etuple(_dot, A_lv, add_term_lv)),
# Make sure the term being `_dot`ed is an `add`
heado(at.add, add_term_lv),
# Flatten the associative pairings of `add` operations
assoc_flatten(add_term_lv, add_flat_lv),
# Get the flattened `add` arguments
tailo(add_cdr_lv, add_flat_lv),
# Add all the `_dot`ed arguments and set the output
conso(at.add, dot_cdr_lv, out_lv),
# Apply the `_dot` to all the flattened `add` arguments
mapo(lambda x, y: conso(_dot, etuple(A_lv, x), y), add_cdr_lv, dot_cdr_lv),
)

.. testcode::

dot_distribute_rewrite = EquilibriumGraphRewriter([KanrenRelationSub(dot_distributeo)], max_use_ratio=10)
import aesara
import aesara.tensor as at
from aesara.graph.rewriting.kanren import KanrenRelationSub
from aesara.graph.rewriting.basic import EquilibriumGraphRewriter
from aesara.graph.rewriting.utils import rewrite_graph
from aesara.tensor.math import _dot
from etuples import etuple
from kanren import conso, eq, fact, heado, tailo
from kanren.assoccomm import assoc_flatten, associative
from kanren.core import lall
from kanren.graph import mapo
from unification import vars as lvars

# Make the graph pretty printing results a little more readable
aesara.pprint.assign(
_dot, aesara.printing.OperatorPrinter("@", -1, "left")
)

# Tell `kanren` that `add` is associative
fact(associative, at.add)

def dot_distributeo(in_lv, out_lv):
"""A `kanren` goal constructor relation for the relation ``A.dot(a + b ...) == A.dot(a) + A.dot(b) ...``."""
A_lv, add_term_lv, add_cdr_lv, dot_cdr_lv, add_flat_lv = lvars(5)

return lall(
# Make sure the input is a `_dot`
eq(in_lv, etuple(_dot, A_lv, add_term_lv)),
# Make sure the term being `_dot`ed is an `add`
heado(at.add, add_term_lv),
# Flatten the associative pairings of `add` operations
assoc_flatten(add_term_lv, add_flat_lv),
# Get the flattened `add` arguments
tailo(add_cdr_lv, add_flat_lv),
# Add all the `_dot`ed arguments and set the output
conso(at.add, dot_cdr_lv, out_lv),
# Apply the `_dot` to all the flattened `add` arguments
mapo(lambda x, y: conso(_dot, etuple(A_lv, x), y), add_cdr_lv, dot_cdr_lv),
)

dot_distribute_rewrite = EquilibriumGraphRewriter([KanrenRelationSub(dot_distributeo)], max_use_ratio=10)


Below, we apply `dot_distribute_rewrite` to a few example graphs. First we create a simple test graph:
Expand All @@ -506,9 +503,10 @@ Next we apply the rewrite to the graph:
We see that the dot product has been distributed, as desired. Now, let's try a
few more test cases:

>>> import aesara.tensor as at
>>> z_at = at.vector("z")
>>> w_at = at.vector("w")
>>> test_at = A_at.dot((x_at + y_at) + (z_at + w_at))
>>> test_at = at.dot((x_at + y_at) + (z_at + w_at))
>>> print(aesara.pprint(test_at))
(A @ ((x + y) + (z + w)))
>>> res = rewrite_graph(test_at, include=[], custom_rewrite=dot_distribute_rewrite, clone=False)
Expand All @@ -517,7 +515,7 @@ few more test cases:

>>> B_at = at.matrix("B")
>>> w_at = at.vector("w")
>>> test_at = A_at.dot(x_at + (y_at + B_at.dot(z_at + w_at)))
>>> test_at = at.dot(x_at + (y_at + B_at.dot(z_at + w_at)))
>>> print(aesara.pprint(test_at))
(A @ (x + (y + ((B @ z) + (B @ w)))))
>>> res = rewrite_graph(test_at, include=[], custom_rewrite=dot_distribute_rewrite, clone=False)
Expand Down
6 changes: 3 additions & 3 deletions doc/extending/graphstructures.rst
Expand Up @@ -108,9 +108,9 @@ All of the above can be succinctly summarized with the :func:`aesara.dprint`
function:

>>> aesara.dprint(y)
Elemwise{mul,no_inplace} [id A] ''
Elemwise{mul,no_inplace} [id A]
|x [id B]
|InplaceDimShuffle{x,x} [id C] ''
|InplaceDimShuffle{x,x} [id C]
|TensorConstant{2.0} [id D]

Starting from this graph structure it is easier to understand how
Expand Down Expand Up @@ -353,7 +353,7 @@ Consider the following example of rewrites:
>>> b = a + a ** 10 # build symbolic expression
>>> f = aesara.function([a], b) # compile function
>>> print(f([0, 1, 2])) # prints `array([0,2,1026])`
[ 0. 2. 1026.]
[ 0. 2. 1026.]
>>> aesara.printing.pydotprint(b, outfile="./pics/symbolic_graph_no_rewrite.png", var_with_name_simple=True) # doctest: +SKIP
The output file is available at ./pics/symbolic_graph_no_rewrite.png
>>> aesara.printing.pydotprint(f, outfile="./pics/symbolic_graph_rewite.png", var_with_name_simple=True) # doctest: +SKIP
Expand Down
4 changes: 2 additions & 2 deletions doc/extending/type.rst
Expand Up @@ -349,12 +349,12 @@ must define ``filter`` and ``values_eq_approx``.

# note that we shadow python's function ``filter`` with this
# definition.
def filter(x, strict=false, allow_downcast=none):
def filter(x, strict=False, allow_downcast=None):
if strict:
if isinstance(x, float):
return x
else:
raise typeerror('expected a float!')
raise TypeError('expected a float!')
elif allow_downcast:
return float(x)
else: # covers both the False and None cases.
Expand Down
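
The corrected `filter` logic can be exercised standalone (a simplified sketch of the doc's example, not Aesara's actual `Type.filter`):

```python
def filter_float(x, strict=False, allow_downcast=None):
    # strict: accept only genuine floats; otherwise optionally downcast.
    if strict:
        if isinstance(x, float):
            return x
        raise TypeError("expected a float!")
    elif allow_downcast:
        return float(x)
    else:  # covers both the False and None cases.
        if isinstance(x, float):
            return x
        raise TypeError("expected a float, got %r" % (x,))

print(filter_float(2.0, strict=True))        # -> 2.0
print(filter_float(2, allow_downcast=True))  # -> 2.0
```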
2 changes: 1 addition & 1 deletion doc/library/compile/function.rst
Expand Up @@ -22,7 +22,7 @@ You've already seen example usage in the basic tutorial... something like this:
>>> x = aesara.tensor.dscalar()
>>> f = aesara.function([x], 2*x)
>>> f(4)
array(8.0)
array(8.)

The idea here is that we've compiled the symbolic graph (``2*x``) into a function that can be called on a number and will do some computations.

Expand Down