Loading a JSON file with large integers (greater than 2^63 - 1, the int64 maximum) results in "Value is too big". I have tried changing the orient to "records" and also passing in dtype={'id': numpy.dtype('uint64')}; the error is the same.
Expected Output

```
                          id
count                      1
unique                     1
top     10254939386542155531
freq                       1
```
Actual Output (even with dtype passed in)
```
  File "./parse_dispatch_table.py", line 34, in <module>
    print(pandas.read_json('''{"id": 10254939386542155531}''', dtype=dtype_conversions).describe())
  File "/users/XXX/.local/lib/python3.4/site-packages/pandas/io/json.py", line 234, in read_json
    date_unit).parse()
  File "/users/XXX/.local/lib/python3.4/site-packages/pandas/io/json.py", line 302, in parse
    self._parse_no_numpy()
  File "/users/XXX/.local/lib/python3.4/site-packages/pandas/io/json.py", line 519, in _parse_no_numpy
    loads(json, precise_float=self.precise_float), dtype=None)
ValueError: Value is too big
```
I'm running into this same issue, where a 64-bit integer is being used as an id. Is there any workaround for overriding? It would have been nice if the dtype specification drove an override, but type coercion must be occurring after the default inferred-type loading.
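One possible workaround (a sketch, not an official fix): bypass `read_json` and parse with the standard-library `json` module, which returns arbitrary-precision Python ints, then build the DataFrame and cast explicitly:

```python
import json

import pandas as pd

raw = '{"id": 10254939386542155531}'

# Stdlib json returns a Python int of arbitrary precision, so the value
# survives parsing; pandas initially stores it as object dtype because it
# does not fit in int64.
df = pd.DataFrame([json.loads(raw)])

# Explicit cast to uint64 (the value fits, since it is below 2**64).
df["id"] = df["id"].astype("uint64")
```

This sidesteps pandas' ujson-based parser entirely, at the cost of the conversion conveniences `read_json` provides.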
This comes up during system log archive collection on macOS High Sierra, executed from a bash shell and later rendered to text with JSON styling:

```
log collect
log show --style json > ~/syslogarchive.json
python
>>> import pandas
>>> dfSysLog = pandas.read_json( '~/syslogarchive.json' )
...
ValueError: Value is too big
```
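For an archive like this, one hedged workaround is to parse the file with the standard-library `json` module, passing `parse_int=str` so every integer survives losslessly as a string, then cast back only the columns that need it. The sample record and the `machTimestamp` field below are illustrative stand-ins for the real syslog archive contents:

```python
import json

import pandas as pd

# Stand-in for the contents of ~/syslogarchive.json: `log show --style json`
# emits a JSON array of event records. The oversized value is illustrative.
sample = '[{"machTimestamp": 10254939386542155531, "eventMessage": "boot"}]'

# parse_int=str keeps every integer as a lossless string, sidestepping the
# int64 overflow in pandas' JSON parser.
records = json.loads(sample, parse_int=str)
df = pd.DataFrame(records)

# Cast back only the columns known to hold unsigned 64-bit values.
df["machTimestamp"] = df["machTimestamp"].astype("uint64")
```

The trade-off is that *all* integer fields arrive as strings, so each numeric column has to be converted explicitly afterwards.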
There is no problem loading the same data with read_csv.
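A minimal check of the read_csv path (the column name and value mirror the report; the dtype argument requests uint64 up front):

```python
import io

import pandas as pd

# Same value that makes read_json fail; read_csv honors the dtype request
# during parsing, so the uint64 value round-trips without overflow.
csv_data = "id\n10254939386542155531\n"
df = pd.read_csv(io.StringIO(csv_data), dtype={"id": "uint64"})
```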
Output of `pd.show_versions()`:

```
commit: None
python: 3.4.3.final.0
python-bits: 64
OS: Linux
OS-release: 3.10.0-327.el7.x86_64
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8
pandas: 0.19.0
nose: None
pip: 8.1.2
setuptools: 28.6.0
Cython: None
numpy: 1.11.2
scipy: None
statsmodels: None
xarray: None
IPython: None
sphinx: None
patsy: None
dateutil: 2.5.3
pytz: 2016.7
blosc: None
bottleneck: None
tables: None
numexpr: None
matplotlib: None
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: None
html5lib: None
httplib2: None
apiclient: None
sqlalchemy: None
pymysql: None
psycopg2: None
jinja2: None
boto: None
pandas_datareader: None
```