@@ -34,25 +34,25 @@ First, we need to define symbolic variables for our inputs (this
is similar to e.g. SymPy's `Symbol`)::

import pytensor
- import pytensor.tensor as at
+ import pytensor.tensor as pt
# We don't specify the dtype of our input variables, so it
# defaults to using float64 without any special config.
- a = at.scalar('a')
- x = at.vector('x')
- # `at.ivector` creates a symbolic vector of integers.
- y = at.ivector('y')
+ a = pt.scalar('a')
+ x = pt.vector('x')
+ # `pt.ivector` creates a symbolic vector of integers.
+ y = pt.ivector('y')

Next, we use those variables to build up a symbolic representation
of the output of our function. Note that no computation is actually
being done at this point. We only record what operations we need to
do to compute the output::

inner = a * x**3 + y**2
- out = at.exp(inner).sum()
+ out = pt.exp(inner).sum()

.. note::

- In this example we use `at.exp` to create a symbolic representation
+ In this example we use `pt.exp` to create a symbolic representation
of the exponential of `inner`. Somewhat surprisingly, it
would also have worked if we used `np.exp`. This is because NumPy
gives objects it operates on a chance to define the results of
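
As a hedged aside (not the document's own code), a minimal sketch of compiling
such a graph with `pytensor.function` and calling it on concrete arrays,
reusing the variable names defined above::

    import numpy as np
    import pytensor
    import pytensor.tensor as pt

    a = pt.scalar('a')
    x = pt.vector('x')
    y = pt.ivector('y')

    inner = a * x**3 + y**2
    out = pt.exp(inner).sum()

    # Compile the symbolic graph into a callable function.
    func = pytensor.function([a, x, y], out)

    # Call it with concrete NumPy values, as often as we like.
    print(func(2.0, np.array([1.0, 2.0]), np.array([3, 4], dtype='int32')))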
@@ -77,8 +77,8 @@ We can call this function with actual arrays as many times as we want::

For the most part the symbolic PyTensor variables can be operated on
like NumPy arrays. Most NumPy functions are available in `pytensor.tensor`
- (which is typically imported as `at`). A lot of linear algebra operations
- can be found in `at.nlinalg` and `at.slinalg` (the NumPy and SciPy
+ (which is typically imported as `pt`). A lot of linear algebra operations
+ can be found in `pt.nlinalg` and `pt.slinalg` (the NumPy and SciPy
operations respectively). Some support for sparse matrices is available
in `pytensor.sparse`. For a detailed overview of available operations,
see :mod:`the pytensor api docs <pytensor.tensor>`.
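
As a hedged aside (not from the document), a small sketch of using one of
those helpers, assuming `pt.slinalg.solve`, which mirrors `scipy.linalg.solve`::

    import numpy as np
    import pytensor
    import pytensor.tensor as pt

    A = pt.matrix('A')
    b = pt.vector('b')

    # Symbolic solution z of the linear system A @ z = b.
    z = pt.slinalg.solve(A, b)

    solve_fn = pytensor.function([A, b], z)
    print(solve_fn(np.array([[3.0, 1.0], [1.0, 2.0]]),
                   np.array([9.0, 8.0])))    # -> [2. 3.]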
@@ -88,9 +88,9 @@ NumPy arrays are operations involving conditional execution.

Code like this won't work as expected::

- a = at.vector('a')
+ a = pt.vector('a')
if (a > 0).all():
- b = at.sqrt(a)
+ b = pt.sqrt(a)
else:
b = -a

@@ -100,28 +100,28 @@ and according to the rules for this conversion, things that aren't empty
containers or zero are converted to `True`. So the code is equivalent
to this::

- a = at.vector('a')
- b = at.sqrt(a)
+ a = pt.vector('a')
+ b = pt.sqrt(a)

- To get the desired behaviour, we can use `at.switch`::
+ To get the desired behaviour, we can use `pt.switch`::

- a = at.vector('a')
- b = at.switch((a > 0).all(), at.sqrt(a), -a)
+ a = pt.vector('a')
+ b = pt.switch((a > 0).all(), pt.sqrt(a), -a)
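
As a hedged aside (not from the document), a sketch of how the `switch`
version behaves once compiled; the condition is evaluated at run time on
each call and picks which branch's values are returned::

    import numpy as np
    import pytensor
    import pytensor.tensor as pt

    a = pt.vector('a')
    b = pt.switch((a > 0).all(), pt.sqrt(a), -a)
    f = pytensor.function([a], b)

    # Both branches are part of the graph; the condition selects the result.
    print(f(np.array([1.0, 4.0])))     # -> [1. 2.]
    print(f(np.array([-1.0, 4.0])))    # -> [1. -4.]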

Indexing also works similarly to NumPy::

- a = at.vector('a')
+ a = pt.vector('a')
# Access the 10th element. This will fail when a function built
# from this expression is executed with an array that is too short.
b = a[10]

# Extract a subvector
b = a[[1, 2, 10]]

- Changing elements of an array is possible using `at.set_subtensor`::
+ Changing elements of an array is possible using `pt.set_subtensor`::

- a = at.vector('a')
- b = at.set_subtensor(a[:10], 1)
+ a = pt.vector('a')
+ b = pt.set_subtensor(a[:10], 1)

# is roughly equivalent to this (although pytensor avoids
# the copy if `a` isn't used anymore)
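
As a separate hedged aside (not from the document), compiling the
`set_subtensor` expression illustrates that the original input array is
left untouched::

    import numpy as np
    import pytensor
    import pytensor.tensor as pt

    a = pt.vector('a')
    b = pt.set_subtensor(a[:10], 1)
    f = pytensor.function([a], b)

    a_val = np.zeros(12)
    print(f(a_val))    # first ten entries are 1, the rest unchanged
    print(a_val)       # the input array itself is not modified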
@@ -167,7 +167,7 @@ this is happening::
# in exactly this way!
model = pm.Model()

- mu = at.scalar('mu')
+ mu = pt.scalar('mu')
model.add_free_variable(mu)
model.add_logp_term(pm.Normal.dist(0, 1).logp(mu))

@@ -195,15 +195,15 @@ is roughly equivalent to this::

# For illustration only, not real code!
model = pm.Model()
- mu = at.scalar('mu')
+ mu = pt.scalar('mu')
model.add_free_variable(mu)
model.add_logp_term(pm.Normal.dist(0, 1).logp(mu))

- sd_log__ = at.scalar('sd_log__')
+ sd_log__ = pt.scalar('sd_log__')
model.add_free_variable(sd_log__)
model.add_logp_term(corrected_logp_half_normal(sd_log__))

- sigma = at.exp(sd_log__)
+ sigma = pt.exp(sd_log__)
model.add_deterministic_variable(sigma)

model.add_logp_term(pm.Normal.dist(mu, sigma).logp(data))
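
As a hedged aside, an actual PyMC model along the lines this pseudocode
paraphrases might look roughly like the following (a guess at the shape of
the model, not the document's snippet; the `HalfNormal` scale is assumed)::

    import numpy as np
    import pymc as pm

    data = np.random.normal(size=100)

    with pm.Model() as model:
        mu = pm.Normal('mu', 0, 1)
        sd = pm.HalfNormal('sd', 1)
        pm.Normal('y', mu=mu, sigma=sd, observed=data)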
@@ -214,8 +214,8 @@ PyTensor operation on them::

design_matrix = np.array([[...]])
with pm.Model() as model:
- # beta is a at.dvector
+ # beta is a pt.dvector
beta = pm.Normal('beta', 0, 1, shape=len(design_matrix))
- predict = at.dot(design_matrix, beta)
+ predict = pt.dot(design_matrix, beta)
sigma = pm.HalfCauchy('sigma', beta=2.5)
pm.Normal('y', mu=predict, sigma=sigma, observed=data)
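
Once a model like this is defined, sampling works as usual; a brief hedged
usage sketch, assuming `design_matrix` and `data` hold concrete values::

    with model:
        idata = pm.sample()

    # The posterior draws for `beta` and `sigma` live in the returned
    # InferenceData object.
    print(idata.posterior['beta'].mean(dim=('chain', 'draw')))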