garch example fails #1491


Closed
mikemac8888 opened this issue Nov 1, 2016 · 32 comments

Comments

@mikemac8888

Running examples/garch_example.py fails with error:

Traceback (most recent call last):
  File "garch_example.py", line 40, in <module>
    beta1 = BoundedNormal('beta1', 0, sd=1e6)
  File "/Users/**/anaconda3/envs/py35/lib/python3.5/site-packages/pymc3/distributions/continuous.py", line 1102, in __call__
    *args, **kwargs)
  File "/Users/**/anaconda3/envs/py35/lib/python3.5/site-packages/pymc3/distributions/distribution.py", line 27, in __new__
    return model.Var(name, dist, data)
  File "/Users/**/anaconda3/envs/py35/lib/python3.5/site-packages/pymc3/model.py", line 288, in Var
    transform=dist.transform)
  File "/Users/**/anaconda3/envs/py35/lib/python3.5/site-packages/pymc3/model.py", line 689, in __init__
    transformed_name = "{}_{}_".format(name, transform.name)
AttributeError: 'int' object has no attribute 'name'

Python 3.5.2
pymc3 (3.0rc2)

@twiecki
Member

twiecki commented Nov 1, 2016

There are actually two issues at play:

  1. Normal needs mu=0 explicitly.
  2. That leads to the problem of passing a tensor to Bounded which infers the bounds using np.isinf() which chokes on the tensor. @ColCarroll any ideas?
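The second point can be reproduced without pymc3 or theano at all: np.isinf is a numpy ufunc, so it raises TypeError on any object it cannot coerce to a numeric array, which is exactly what happens when a symbolic tensor is passed as a bound. A minimal sketch (SymbolicBound is a hypothetical stand-in for a theano TensorVariable, not pymc3 code):

```python
import numpy as np

class SymbolicBound:
    """Hypothetical stand-in for a theano TensorVariable passed as a bound."""

try:
    np.isinf(SymbolicBound())
except TypeError as exc:
    # the ufunc cannot coerce an arbitrary object to a numeric dtype
    print("np.isinf choked:", exc)
```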

@twiecki
Member

twiecki commented Nov 1, 2016

Also not sure why the tests pass.

@springcoil
Contributor

This is interesting - did I write this example originally?

@springcoil
Contributor

Is this related to #1170? I think it might be, because sampling defaults to ADVI, and Bound might not be working properly. Any ideas @ColCarroll ?

@springcoil
Contributor

I can confirm that changing the sampler to NUTS, for example, makes no difference.

@ColCarroll
Member

The stuff in the examples folder is not tested -- we moved some of these to the test suite, and recommend moving the others to full-fledged notebooks. I'm looking at why this doesn't run, though!

@ColCarroll
Member

Actually, there are a few funny things going on here -- I haven't touched much of the transform code, but here's what I've got:

-- As pointed out, you have to supply mu= in a few places to get things running
-- You also have to change all the checks in Bounded to use tt.isinf instead of np.isinf
-- The example hits Bounded.__init__ with lower = -inf and upper being a theano tensor, but falls through all three checks (we should change those transform checks to if...elif...elif...else)
-- the call to self.__dict__.update(locals()) means the "default" transform is the string "infer"
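That last point is a general Python pitfall, not anything theano-specific: `self.__dict__.update(locals())` copies every local name into the instance, including the `"infer"` sentinel and even `self` itself. A minimal sketch with a hypothetical class (not pymc3's actual Bounded):

```python
class Bounded:
    """Hypothetical stand-in illustrating the locals()-update pitfall."""
    def __init__(self, lower, upper, transform="infer"):
        # copies every local name into the instance wholesale
        self.__dict__.update(locals())

b = Bounded(0, 1)
print(b.transform)  # "infer" -- the sentinel string leaks through as an attribute
print(b.self is b)  # True -- even `self` gets copied in
```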

@springcoil
Contributor

springcoil commented Dec 13, 2016 via email

@ColCarroll
Member

Oy, it is a little more complicated than all that -- I was fooled because tt.isinf is truthy for any input. I think it can work for this example still -- I'll take a look.
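The truthiness trap is plain Python behaviour: an object that defines neither `__bool__` nor `__len__` is truthy by default, so a check like `if tt.isinf(x):` always takes the true branch on a symbolic result, whatever the graph would actually evaluate to. A sketch with a hypothetical stand-in for the returned tensor:

```python
class SymbolicIsInf:
    """Hypothetical stand-in for the tensor returned by tt.isinf(x)."""

result = SymbolicIsInf()
if result:
    # always reached: objects without __bool__/__len__ are truthy
    print("truthy, regardless of what the graph would evaluate to")
```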

@kris-singh

kris-singh commented Feb 10, 2017

@twiecki @ColCarroll I think I figured it out. We could check whether the bound is a float or not, and then use np.isinf only for the float case.
But I am having trouble understanding this:

  1. In the given example, upper evaluates to (TensorConstant{1} - alpha1); the alpha1 value will be available only when we sample from the normal distribution. So how can we compare default >= upper?
  2. Is default also a theano variable?

Does this make any sense?

@kris-singh

if isinstance(upper, tt.TensorVariable):
    upperisinf = tt.isinf(upper).eval()
else:
    upperisinf = np.isinf(upper)

if isinstance(lower, tt.TensorVariable):
    lowerisinf = tt.isinf(lower).eval()
else:
    lowerisinf = np.isinf(lower)

if not lowerisinf and not upperisinf:
    self.transform = transforms.interval(lower, upper)
    if default <= lower or default >= upper:
        self.testval = 0.5 * (upper + lower)
elif not lowerisinf and upperisinf:
    self.transform = transforms.lowerbound(lower)
    if default <= lower:
        self.testval = lower + 1
elif lowerisinf and not upperisinf:
    self.transform = transforms.upperbound(upper)
    if default >= upper:
        self.testval = upper - 1

Error: theano.gof.fg.MissingInputError: An input of the graph, used to compute Elemwise{sub,no_inplace}(DimShuffle{x}.0, alpha1), was not provided and not given a value. Use the Theano flag exception_verbosity='high' for more information on this error.

@kris-singh

kris-singh commented Feb 11, 2017

Is there any code that actually compiles the theano function? I tried to look for it in the sampling code but couldn't find it. Could anybody help?

@kris-singh

This gist might help clarify what I am asking:
https://gist.github.com/kris-singh/218542ddab4b62248731926c22da27b5

@twiecki
Member

twiecki commented Feb 13, 2017

@kris-singh The code is not run like python code. Maybe reading a bit on the theano docs clears things up: http://deeplearning.net/software/theano/

@kris-singh

@twiecki if you could take a look at the gist, you would understand my doubt better.

@twiecki
Member

twiecki commented Feb 13, 2017

@kris-singh I looked at the gist. What output do you expect?

@kris-singh

@twiecki since we get theano variables, say var1 and var2, and var3 = var1 * var2, theano would have only symbolic variables. We need to supply inputs and then compile var3 to get its output. Where in the pymc3 code is this happening?

@kris-singh

@twiecki something like this

import theano.tensor as T
from theano import function

a = T.dscalar('a')
b = T.dscalar('b')
c = a * b
f = function([a, b], c)  # inputs as a list: [a, b], not [a.b]
f(2, 3)                  # returns array(6.0)

@twiecki
Member

twiecki commented Feb 13, 2017

That all happens in model.py.

@twiecki
Member

twiecki commented Feb 13, 2017

@kris-singh

kris-singh commented Feb 13, 2017

@twiecki and the inputs to these functions are provided using the sample function, right?

@twiecki
Member

twiecki commented Feb 13, 2017

Exactly, in step.

@kris-singh

Any ideas on how to solve this? I am stuck. If I use something like tt.le(default, upper), should the upper value come from model.var, or how?

@kris-singh

Can anybody help?

@twiecki
Member

twiecki commented Feb 15, 2017

This is quite a tricky issue. Can we evaluate the tensor as it is passed in? We don't need to do the inference during sampling, only during instantiation.

@twiecki
Member

twiecki commented Feb 15, 2017

Oh, that's what you are doing above with isinf. I think we can do the same with lower and upper, no?

@kris-singh

@twiecki but doing this with isinf also produces a theano error: the inputs are not provided. So we need to figure out how to provide the input values to these variables.

@twiecki
Member

twiecki commented Feb 15, 2017

Ah, I think if a tensor is passed in we can safely assume that it's not inf.

@twiecki
Member

twiecki commented Feb 17, 2017

@kris-singh I think we just need to remove the checks if a tensor is passed. We will assume that it's not inf and not set a test_val (and not check for its value). Does that make sense?
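One way to sketch that rule, using purely hypothetical names (this is not pymc3's actual code): only run np.isinf on plain numbers, assume any symbolic bound is finite, and let the transform choice fall out of an if/elif chain.

```python
import numbers
import numpy as np

def infer_transform(lower, upper):
    """Hypothetical sketch of the proposed rule: a symbolic (non-numeric)
    bound cannot be inf-checked at build time, so assume it is finite."""
    def is_finite(bound):
        if not isinstance(bound, numbers.Number):
            return True  # symbolic tensor: assume finite, skip the check
        return not np.isinf(bound)

    if is_finite(lower) and is_finite(upper):
        return "interval"
    elif is_finite(lower):
        return "lowerbound"
    elif is_finite(upper):
        return "upperbound"
    return None

class FakeTensor:
    """Stand-in for a theano expression such as 1 - alpha1."""

print(infer_transform(0.0, 1.0))               # interval
print(infer_transform(-np.inf, FakeTensor()))  # upperbound, as in the garch example
```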

@kris-singh

Yes, it does. But how do we evaluate a tensor like upper when doing default >= upper?

@twiecki
Member

twiecki commented Feb 19, 2017 via email

@twiecki
Member

twiecki commented Feb 19, 2017 via email


6 participants