Commit e9354b1

Merge branch 'main' into 462-numpy-logical-aliases-lm
2 parents: 37da12f + 89fe939

172 files changed: +5080 / -4266 lines


.github/PULL_REQUEST_TEMPLATE.md

Lines changed: 29 additions & 31 deletions

@@ -1,33 +1,31 @@
 <!-- !! Thank your for opening a PR !! -->
 
-### Motivation for these changes
-...
-
-### Implementation details
-...
-
-
-### Checklist
-+ [ ] Explain motivation and implementation 👆
-+ [ ] Make sure that [the pre-commit linting/style checks pass](https://docs.pymc.io/en/latest/contributing/python_style.html).
-+ [ ] Link relevant issues, preferably in [nice commit messages](https://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html).
-+ [ ] The commits correspond to [_relevant logical changes_](https://wiki.openstack.org/wiki/GitCommitMessages#Structural_split_of_changes). Note that if they don't, we will [rewrite/rebase/squash the git history](https://git-scm.com/book/en/v2/Git-Tools-Rewriting-History#_rewriting_history) before merging.
-+ [ ] Are the changes covered by tests and docstrings?
-+ [ ] Fill out the short summary sections 👇
-
-
-## Major / Breaking Changes
-- ...
-
-## New features
-- ...
-
-## Bugfixes
-- ...
-
-## Documentation
-- ...
-
-## Maintenance
-- ...
-
+<!--- Provide a self-contained summary of your changes in the Title above -->
+<!--- This is what will be shown in the automatic release notes: https://github.com/pymc-devs/pytensor/releases -->
+
+## Description
+<!--- Describe your changes in detail -->
+
+## Related Issue
+<!--- It is good practice to first open an issue explaining the bug / new feature that is addressed by this PR -->
+<!--- Please type an `x` in one of the boxes below and provide the issue number after the # sign: -->
+- [ ] Closes #
+- [ ] Related to #
+
+## Checklist
+<!--- Make sure you have completed the following steps before submitting your PR -->
+<!--- Feel free to type an `x` in all the boxes below to let us know you have completed the steps: -->
+- [ ] Checked that [the pre-commit linting/style checks pass](https://docs.pymc.io/en/latest/contributing/python_style.html)
+- [ ] Included tests that prove the fix is effective or that the new feature works
+- [ ] Added necessary documentation (docstrings and/or example notebooks)
+- [ ] If you are a pro: each commit corresponds to a [relevant logical change](https://wiki.openstack.org/wiki/GitCommitMessages#Structural_split_of_changes)
+<!--- You may find this guide helpful: https://mainmatter.com/blog/2021/05/26/keeping-a-clean-git-history/ -->
+
+## Type of change
+<!--- Select one of the categories below by typing an `x` in the box -->
+- [ ] New feature / enhancement
+- [ ] Bug fix
+- [ ] Documentation
+- [ ] Maintenance
+- [ ] Other (please specify):
+<!--- Additionally, if you are a maintainer or reviewer, please make sure that the appropriate labels are added to this PR -->

.github/workflows/pypi.yml

Lines changed: 1 addition & 1 deletion

@@ -88,7 +88,7 @@ jobs:
          name: artifact
          path: dist

-      - uses: pypa/[email protected]
+      - uses: pypa/[email protected]
        with:
          user: __token__
          password: ${{ secrets.pypi_password }}

.github/workflows/test.yml

Lines changed: 2 additions & 2 deletions

@@ -120,7 +120,7 @@ jobs:
        with:
          fetch-depth: 0
      - name: Set up Python ${{ matrix.python-version }}
-        uses: conda-incubator/setup-miniconda@v2
+        uses: conda-incubator/setup-miniconda@v3
        with:
          miniforge-variant: Mambaforge
          miniforge-version: latest
@@ -191,7 +191,7 @@ jobs:
        with:
          fetch-depth: 0
      - name: Set up Python 3.9
-        uses: conda-incubator/setup-miniconda@v2
+        uses: conda-incubator/setup-miniconda@v3
        with:
          miniforge-variant: Mambaforge
          miniforge-version: latest

doc/extending/creating_a_numba_jax_op.rst

Lines changed: 3 additions & 3 deletions

@@ -135,16 +135,16 @@ Here's a small example of a test for :class:`Eye`:

 .. code:: python

-    import pytensor.tensor as at
+    import pytensor.tensor as pt

     def test_jax_Eye():
         """Test JAX conversion of the `Eye` `Op`."""

         # Create a symbolic input for `Eye`
-        x_at = at.scalar()
+        x_at = pt.scalar()

         # Create a variable that is the output of an `Eye` `Op`
-        eye_var = at.eye(x_at)
+        eye_var = pt.eye(x_at)

         # Create an PyTensor `FunctionGraph`
         out_fg = FunctionGraph(outputs=[eye_var])
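The test above exercises the JAX conversion of `pt.eye`. For readers without PyTensor or JAX installed, a minimal NumPy sketch of the reference value such a test would compare against (`expected_eye` is a hypothetical helper, not part of the diff):

```python
import numpy as np

# Hypothetical reference helper: the value a compiled `pt.eye(n)` graph
# is expected to produce for a concrete integer input.
def expected_eye(n):
    return np.eye(n)

assert expected_eye(3).shape == (3, 3)
assert expected_eye(3).trace() == 3.0  # ones on the diagonal
```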

doc/extending/creating_an_op.rst

Lines changed: 5 additions & 5 deletions

@@ -786,7 +786,7 @@ signature:
 .. testcode:: asop

     import pytensor
-    import pytensor.tensor as at
+    import pytensor.tensor as pt
     import numpy as np
     from pytensor import function
     from pytensor.compile.ops import as_op
@@ -797,17 +797,17 @@ signature:
         return [ashp[:-1] + bshp[-1:]]


-    @as_op(itypes=[at.matrix, at.matrix],
-           otypes=[at.matrix], infer_shape=infer_shape_numpy_dot)
+    @as_op(itypes=[pt.matrix, pt.matrix],
+           otypes=[pt.matrix], infer_shape=infer_shape_numpy_dot)
     def numpy_dot(a, b):
         return np.dot(a, b)

 You can try it as follows:

 .. testcode:: asop

-    x = at.matrix()
-    y = at.matrix()
+    x = pt.matrix()
+    y = pt.matrix()
     f = function([x, y], numpy_dot(x, y))
     inp1 = np.random.random_sample((5, 4))
     inp2 = np.random.random_sample((4, 7))
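The shape-inference rule passed to `as_op` above keeps every axis of `a` except the last and appends the last axis of `b`. A standalone NumPy check of that rule (a sketch outside PyTensor; `dot_output_shape` is a name chosen here for illustration):

```python
import numpy as np

# Same rule as the docs' infer_shape_numpy_dot, on plain shape tuples.
def dot_output_shape(ashp, bshp):
    return ashp[:-1] + bshp[-1:]

a = np.random.random_sample((5, 4))
b = np.random.random_sample((4, 7))
# The predicted shape matches the actual dot-product shape.
assert dot_output_shape(a.shape, b.shape) == np.dot(a, b).shape  # (5, 7)
```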

doc/extending/extending_pytensor_solution_1.py

Lines changed: 13 additions & 17 deletions

@@ -14,8 +14,8 @@

 class ProdOp(Op):
     def make_node(self, x, y):
-        x = at.as_tensor_variable(x)
-        y = at.as_tensor_variable(y)
+        x = pt.as_tensor_variable(x)
+        y = pt.as_tensor_variable(y)
         outdim = x.type.ndim
         output = TensorType(
             dtype=pytensor.scalar.upcast(x.dtype, y.dtype), shape=(None,) * outdim
@@ -39,8 +39,8 @@ def grad(self, inputs, output_grads):

 class SumDiffOp(Op):
     def make_node(self, x, y):
-        x = at.as_tensor_variable(x)
-        y = at.as_tensor_variable(y)
+        x = pt.as_tensor_variable(x)
+        y = pt.as_tensor_variable(y)
         outdim = x.type.ndim
         output1 = TensorType(
             dtype=pytensor.scalar.upcast(x.dtype, y.dtype), shape=(None,) * outdim
@@ -62,20 +62,16 @@ def infer_shape(self, fgraph, node, i0_shapes):
     def grad(self, inputs, output_grads):
         og1, og2 = output_grads
         if og1 is None:
-            og1 = at.zeros_like(og2)
+            og1 = pt.zeros_like(og2)
         if og2 is None:
-            og2 = at.zeros_like(og1)
+            og2 = pt.zeros_like(og1)
         return [og1 + og2, og1 - og2]


 # 3. Testing apparatus
-
-import numpy as np
-
 from tests import unittest_tools as utt
-from pytensor import tensor as at
+from pytensor import tensor as pt
 from pytensor.graph.basic import Apply
-from pytensor.graph.op import Op
 from pytensor.tensor.type import dmatrix, matrix


@@ -182,8 +178,8 @@ def infer_shape_numpy_dot(fgraph, node, input_shapes):


 @as_op(
-    itypes=[at.fmatrix, at.fmatrix],
-    otypes=[at.fmatrix],
+    itypes=[pt.fmatrix, pt.fmatrix],
+    otypes=[pt.fmatrix],
     infer_shape=infer_shape_numpy_dot,
 )
 def numpy_add(a, b):
@@ -197,17 +193,17 @@ def infer_shape_numpy_add_sub(fgraph, node, input_shapes):


 @as_op(
-    itypes=[at.fmatrix, at.fmatrix],
-    otypes=[at.fmatrix],
+    itypes=[pt.fmatrix, pt.fmatrix],
+    otypes=[pt.fmatrix],
     infer_shape=infer_shape_numpy_add_sub,
 )
 def numpy_add(a, b):
     return np.add(a, b)


 @as_op(
-    itypes=[at.fmatrix, at.fmatrix],
-    otypes=[at.fmatrix],
+    itypes=[pt.fmatrix, pt.fmatrix],
+    otypes=[pt.fmatrix],
     infer_shape=infer_shape_numpy_add_sub,
 )
 def numpy_sub(a, b):
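The `SumDiffOp` touched above returns two outputs, `x + y` and `x - y`, and its `grad` method combines the upstream output gradients as `[og1 + og2, og1 - og2]`. A quick NumPy check (independent of PyTensor, with arbitrarily chosen values) that this is the correct rule:

```python
import numpy as np

# For outputs o1 = x + y and o2 = x - y with upstream gradients og1, og2,
# differentiating cost = sum(og1 * o1) + sum(og2 * o2) by hand gives
# d(cost)/dx = og1 + og2 and d(cost)/dy = og1 - og2.
og1 = np.array([1.0, 2.0])
og2 = np.array([0.5, -1.0])

grad_x = og1 + og2
grad_y = og1 - og2
assert grad_x.tolist() == [1.5, 1.0]
assert grad_y.tolist() == [0.5, 3.0]
```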

doc/extending/graph_rewriting.rst

Lines changed: 14 additions & 14 deletions

@@ -443,7 +443,7 @@ The following is an example that distributes dot products across additions.
 .. code::

     import pytensor
-    import pytensor.tensor as at
+    import pytensor.tensor as pt
     from pytensor.graph.rewriting.kanren import KanrenRelationSub
     from pytensor.graph.rewriting.basic import EquilibriumGraphRewriter
     from pytensor.graph.rewriting.utils import rewrite_graph
@@ -462,7 +462,7 @@ The following is an example that distributes dot products across additions.
     )

     # Tell `kanren` that `add` is associative
-    fact(associative, at.add)
+    fact(associative, pt.add)


     def dot_distributeo(in_lv, out_lv):
@@ -473,13 +473,13 @@ The following is an example that distributes dot products across additions.
             # Make sure the input is a `_dot`
             eq(in_lv, etuple(_dot, A_lv, add_term_lv)),
             # Make sure the term being `_dot`ed is an `add`
-            heado(at.add, add_term_lv),
+            heado(pt.add, add_term_lv),
             # Flatten the associative pairings of `add` operations
             assoc_flatten(add_term_lv, add_flat_lv),
             # Get the flattened `add` arguments
             tailo(add_cdr_lv, add_flat_lv),
             # Add all the `_dot`ed arguments and set the output
-            conso(at.add, dot_cdr_lv, out_lv),
+            conso(pt.add, dot_cdr_lv, out_lv),
             # Apply the `_dot` to all the flattened `add` arguments
             mapo(lambda x, y: conso(_dot, etuple(A_lv, x), y), add_cdr_lv, dot_cdr_lv),
         )
@@ -490,10 +490,10 @@ The following is an example that distributes dot products across additions.

 Below, we apply `dot_distribute_rewrite` to a few example graphs. First we create simple test graph:

->>> x_at = at.vector("x")
->>> y_at = at.vector("y")
->>> A_at = at.matrix("A")
->>> test_at = A_at.dot(x_at + y_at)
+>>> x_at = pt.vector("x")
+>>> y_at = pt.vector("y")
+>>> A_at = pt.matrix("A")
+>>> test_at = A_pt.dot(x_at + y_at)
 >>> print(pytensor.pprint(test_at))
 (A @ (x + y))

@@ -506,18 +506,18 @@ Next we apply the rewrite to the graph:
 We see that the dot product has been distributed, as desired. Now, let's try a
 few more test cases:

->>> z_at = at.vector("z")
->>> w_at = at.vector("w")
->>> test_at = A_at.dot((x_at + y_at) + (z_at + w_at))
+>>> z_at = pt.vector("z")
+>>> w_at = pt.vector("w")
+>>> test_at = A_pt.dot((x_at + y_at) + (z_at + w_at))
 >>> print(pytensor.pprint(test_at))
 (A @ ((x + y) + (z + w)))
 >>> res = rewrite_graph(test_at, include=[], custom_rewrite=dot_distribute_rewrite, clone=False)
 >>> print(pytensor.pprint(res))
 (((A @ x) + (A @ y)) + ((A @ z) + (A @ w)))

->>> B_at = at.matrix("B")
->>> w_at = at.vector("w")
->>> test_at = A_at.dot(x_at + (y_at + B_at.dot(z_at + w_at)))
+>>> B_at = pt.matrix("B")
+>>> w_at = pt.vector("w")
+>>> test_at = A_pt.dot(x_at + (y_at + B_pt.dot(z_at + w_at)))
 >>> print(pytensor.pprint(test_at))
 (A @ (x + (y + ((B @ z) + (B @ w)))))
 >>> res = rewrite_graph(test_at, include=[], custom_rewrite=dot_distribute_rewrite, clone=False)
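The rewrite in this file is justified by linearity of the matrix product: `A @ (x + y) == A @ x + A @ y`. A quick numeric sanity check of that identity in plain NumPy (values chosen arbitrarily):

```python
import numpy as np

# The algebraic identity that `dot_distribute_rewrite` exploits.
rng = np.random.default_rng(0)
A = rng.random((3, 4))
x = rng.random(4)
y = rng.random(4)

assert np.allclose(A @ (x + y), A @ x + A @ y)
```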

doc/extending/graphstructures.rst

Lines changed: 3 additions & 3 deletions

@@ -28,10 +28,10 @@ The following illustrates these elements:

 .. testcode::

-    import pytensor.tensor as at
+    import pytensor.tensor as pt

-    x = at.dmatrix('x')
-    y = at.dmatrix('y')
+    x = pt.dmatrix('x')
+    y = pt.dmatrix('y')
     z = x + y

 **Diagram**
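For readers without PyTensor at hand, the graph structure this snippet builds can be sketched with hypothetical stand-in classes (NOT the real `Variable`/`Apply` API): two variable nodes feeding one apply node.

```python
# Hypothetical stand-ins for PyTensor's graph elements, for illustration only.
class Var:
    def __init__(self, name):
        self.name = name

class ApplyNode:
    def __init__(self, op, inputs):
        self.op = op          # the operation, e.g. "add"
        self.inputs = inputs  # the variables it consumes

x = Var("x")
y = Var("y")
z_apply = ApplyNode("add", [x, y])  # corresponds to z = x + y

assert [v.name for v in z_apply.inputs] == ["x", "y"]
```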

doc/extending/tips.rst

Lines changed: 2 additions & 2 deletions

@@ -20,10 +20,10 @@ simple function:

 .. code::

-    from pytensor import tensor as at
+    from pytensor import tensor as pt

     def sum_square_difference(a, b):
-        return at.sum((a - b)**2)
+        return pt.sum((a - b)**2)

 Even without taking PyTensor's rewrites into account, it is likely
 to work just as well as a custom implementation. It also supports all
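The NumPy counterpart of the snippet above computes the same value on concrete arrays; a quick check of the arithmetic:

```python
import numpy as np

# Same arithmetic as the PyTensor version, applied to concrete arrays.
def sum_square_difference(a, b):
    return np.sum((a - b) ** 2)

a = np.array([1.0, 2.0, 3.0])
b = np.array([1.0, 0.0, 1.0])
assert sum_square_difference(a, b) == 8.0  # 0^2 + 2^2 + 2^2
```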

doc/extending/unittest.rst

Lines changed: 6 additions & 6 deletions

@@ -98,13 +98,13 @@ Example:
 .. code-block:: python

     import numpy as np
-    import pytensor.tensor as at
+    import pytensor.tensor as pt


     def test_dot_validity():
-        a = at.dmatrix('a')
-        b = at.dmatrix('b')
-        c = at.dot(a, b)
+        a = pt.dmatrix('a')
+        b = pt.dmatrix('b')
+        c = pt.dot(a, b)

         c_fn = pytensor.function([a, b], [c])

@@ -187,7 +187,7 @@ symbolic variable:

     def test_verify_exprgrad():
         def fun(x,y,z):
-            return (x + at.cos(y)) / (4 * z)**2
+            return (x + pt.cos(y)) / (4 * z)**2

         x_val = np.asarray([[1], [1.1], [1.2]])
         y_val = np.asarray([0.1, 0.2])
@@ -207,7 +207,7 @@ Here is an example showing how to use :func:`verify_grad` on an :class:`Op` inst
         """
         a_val = np.asarray([[0,1,2],[3,4,5]], dtype='float64')
         rng = np.random.default_rng(42)
-        pytensor.gradient.verify_grad(at.Flatten(), [a_val], rng=rng)
+        pytensor.gradient.verify_grad(pt.Flatten(), [a_val], rng=rng)

 .. note::
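`verify_grad` compares an analytic gradient against finite differences. A self-contained sketch of that idea in plain NumPy (not the real `verify_grad` implementation), using `f(x) = sum(x**2)`, whose gradient `2x` is known in closed form:

```python
import numpy as np

# Central finite differences, one coordinate at a time.
def finite_diff_grad(f, x, eps=1e-6):
    g = np.zeros_like(x)
    for i in range(x.size):
        step = np.zeros_like(x)
        step.flat[i] = eps
        g.flat[i] = (f(x + step) - f(x - step)) / (2 * eps)
    return g

f = lambda x: np.sum(x ** 2)
x = np.array([1.0, -2.0, 0.5])
analytic = 2 * x  # known gradient of sum(x**2)
assert np.allclose(analytic, finite_diff_grad(f, x), atol=1e-4)
```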

doc/glossary.rst

Lines changed: 3 additions & 3 deletions

@@ -6,7 +6,7 @@ Glossary
 .. testsetup::

     import pytensor
-    import pytensor.tensor as at
+    import pytensor.tensor as pt

 .. glossary::

@@ -31,7 +31,7 @@ Glossary
         A variable with an immutable value.
         For example, when you type

-        >>> x = at.ivector()
+        >>> x = pt.ivector()
         >>> y = x + 3

         Then a `constant` is created to represent the ``3`` in the graph.
@@ -151,7 +151,7 @@ Glossary
         The the main data structure you work with when using PyTensor.
         For example,

-        >>> x = at.ivector()
+        >>> x = pt.ivector()
         >>> y = -x**2

         ``x`` and ``y`` are both :class:`Variable`\s, i.e. instances of the :class:`Variable` class.

doc/introduction.rst

Lines changed: 3 additions & 3 deletions

@@ -66,11 +66,11 @@ its features, but it illustrates concretely what PyTensor is.
 .. code-block:: python

     import pytensor
-    from pytensor import tensor as at
+    from pytensor import tensor as pt

     # declare two symbolic floating-point scalars
-    a = at.dscalar()
-    b = at.dscalar()
+    a = pt.dscalar()
+    b = pt.dscalar()

     # create a simple expression
     c = a + b
