python 3 return single collection of JSONs #165
Comments
Could you provide the driver and server versions with which you are seeing this issue?
Server version is 3.1.3; I will post the driver version when I get home.
Driver version is 1.2.1.
@zhenlineo I've seen a similar issue when trying to fetch entities from the graph:

Exception BufferError: 'Existing exports of data: object cannot be re-sized' in 'neo4j.bolt._io.ChunkedInputBuffer.receive' ignored

I tracked it down to nodes in the graph that had a property containing a fairly large JSON-encoded string, so I wonder whether the initial buffer isn't large enough and/or doesn't support resizing. The temporary workaround was simply to remove the property. Our configuration: Python 2.7, Neo4j 3.1.0, neo4j-driver 1.2.1.
@tomasonjo / @johnbodley - could you provide information on how big the oversized property needs to be to trigger this error?
@technige It's hard for me to say exactly, as we've now removed the problematic data. My rough estimate is that the JSON blob was between 1 and 10 MB.
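Since the failure threshold is only roughly known, one defensive option is to audit property values before writing them to the graph. Below is a hypothetical helper sketch; the 1 MB limit and the function name are assumptions drawn from the rough estimate above, not a documented driver constant or API:

```python
import json

# Hypothetical guard: flag node properties whose JSON encoding exceeds a
# size budget. The 1 MB default is an assumption based on the rough
# 1-10 MB estimate above, not a documented driver limit.
MAX_PROPERTY_BYTES = 1 * 1024 * 1024

def oversized_properties(props, limit=MAX_PROPERTY_BYTES):
    """Return names of properties whose JSON encoding exceeds `limit` bytes."""
    return [
        key for key, value in props.items()
        if len(json.dumps(value).encode("utf-8")) > limit
    ]

# Example: a small property passes, a large blob is flagged.
props = {"name": "node-1", "blob": "x" * (2 * 1024 * 1024)}
print(oversized_properties(props))  # ['blob']
```

Flagged properties could then be trimmed, split, or stored outside the graph before the write, avoiding the oversized records that appear to trigger the buffer error.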
I am having this exact problem when using a COLLECT that returns a large list of objects. If more info is needed I can open a separate issue.
I am also having this problem using a COLLECT statement that collects more than a few thousand relationships. The query runs without any problems in the web client, but through the Python Bolt interface I get the BufferError described above. According to the profiler it happens when the query tries to collect 200,000 nodes/relationships.

EDIT: The resizing of the Python array probably works without any problems; however, construction of the new memoryview possibly fails. With my query returning about 2 MB, the overflow part of the code is triggered and then fails, resulting in said BufferError with a subsequent connection breakup.

I hope this helps with resolving the issue. I am currently using a workaround of iterative querying to be able to use Neo4j in my project. However, it seems that I still hit the wall of the limited buffer size and cannot proceed further with the existing problem. If there is anything more I can do to help resolve this issue, please let me know and I will provide all the information I possibly can.
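The class of error reported in this thread can be reproduced in plain Python, independent of the driver: a bytearray cannot be resized while a memoryview of it is still exported, which matches the behaviour described for the chunked input buffer. A minimal sketch:

```python
# Reproduce the class of error seen above: resizing a bytearray while a
# memoryview of it is still alive raises BufferError.
buf = bytearray(16)
view = memoryview(buf)   # "exports" the underlying buffer

try:
    buf += b"x" * 16     # attempt to grow the buffer while exported
except BufferError as err:
    print(err)           # Existing exports of data: object cannot be re-sized

view.release()           # once released, resizing works again
buf += b"x" * 16
print(len(buf))          # 32
```

This suggests why the resize path could fail: if the receive code grows its backing bytearray while an earlier memoryview over it is still held, CPython refuses the resize and raises exactly this BufferError.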
From what I could test with my current setup and database, all my problems with large queries/results in the driver seem to have been resolved by the recent 1.5.3 update. As far as I am concerned, this issue can be closed.
@palladionIT, thank you for the feedback. I also consider this issue solved. @tomasonjo @john-bodley @holmrenser The recent 1.5.3 release should have addressed this issue, so I am closing this thread now. If anyone still has an issue with handling big records, please feel free to open a new issue. Zhen
I have this query I am running:

from neo4j.v1 import GraphDatabase

This works fine on my Mac, which uses Python 2.7, but on my Windows laptop, where I have Anaconda3 and Python 3, I get the following error:
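A workaround along the lines of the iterative querying mentioned earlier is to page through the result set with SKIP/LIMIT instead of collecting everything into one oversized record. This is a hedged sketch, not the driver's own API: `fetch_in_pages`, the query text, and the connection details are all illustrative, and `run` is passed in so the paging logic is shown without a live server.

```python
# Hypothetical workaround sketch: page through a large result set with
# SKIP/LIMIT instead of collect()-ing everything into one record, which
# is what appears to trigger the buffer error. `run` stands in for
# session.run; all query text here is illustrative.
def fetch_in_pages(run, match_clause, batch_size=1000):
    """Yield records page by page until a page comes back short."""
    skip = 0
    while True:
        records = list(run(
            "%s RETURN n SKIP $skip LIMIT $limit" % match_clause,
            skip=skip, limit=batch_size,
        ))
        for record in records:
            yield record
        if len(records) < batch_size:
            break
        skip += batch_size

# With the real driver (assumed setup, not taken from this thread):
# from neo4j.v1 import GraphDatabase
# driver = GraphDatabase.driver("bolt://localhost:7687",
#                               auth=("neo4j", "password"))
# with driver.session() as session:
#     for record in fetch_in_pages(session.run, "MATCH (n:Item)"):
#         process(record)
```

Each round trip then stays well under the problematic size, at the cost of extra queries; for an unordered MATCH, adding an ORDER BY would make the paging deterministic.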