This repository was archived by the owner on Jan 18, 2019. It is now read-only.

Keep fetching on empty pages #46

Merged · 1 commit merged into taskcluster:master on Oct 11, 2018

Conversation

@helfi92 (Contributor) commented Oct 5, 2018

@helfi92 self-assigned this Oct 5, 2018
@helfi92 requested a review from djmitche on October 5, 2018 at 12:52
@djmitche (Contributor) left a comment


It looks like this file is intended to allow paging both forward and backward through results. The issue with the result property not always having the same name means that this won't work as written. But I think you might also benefit from divorcing the REST API's pagination from the GraphQL pagination. Azure can happily return empty pages, but also pages with fewer than LIMIT items. Users paging through a table 100 items at a time, but only seeing 2-3 items per page, will still be confused, just as they are when seeing 0 items at a time. So basically this fixes the "0 results" case without fixing the "1 result" case.

To solve the first issue, I think the best fix is to pass the result property name (or maybe a function for extracting items from result) when building the ConnectionLoader.
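As a rough illustration of that suggestion, a hypothetical constructor shape (the option names below are made up for illustration, not the actual ConnectionLoader API):

```js
// Hypothetical shape: either name the property on the REST result that holds
// the page's entries, or pass a function that knows how to extract them.
const loader = new ConnectionLoader({
  fetch: fetchPage,
  resultProperty: 'artifacts',
  // or: extractItems: result => result.artifacts,
});
```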

For the second, you could construct cursors of the form b64("<contToken>/<offset>") (being careful on decoding to split on the last / in case that character appears in the continuation token). Then, when fetching, continue doing so until you've seen a full page of results, even if that takes a few REST API calls. When starting from a cursor, split it into continuation token and offset, fetch with the continuation token, and then discard results based on the offset. This will have the advantage of behaving much more like a GraphQL consumer expects (ask for 17 items, get 17 items if that many exist).
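A rough sketch of that cursor scheme, assuming a fetchPage function that wraps the REST call and returns { items, continuationToken }; all helper names here are made up for illustration:

```js
// Cursor = b64("<contToken>/<offset>"); the offset records how many items of the
// REST page fetched with that token were already handed out.
const encodeCursor = (continuationToken, offset) =>
  Buffer.from(`${continuationToken}/${offset}`).toString('base64');

const decodeCursor = cursor => {
  const decoded = Buffer.from(cursor, 'base64').toString();
  // split on the *last* "/" in case the continuation token itself contains one
  const idx = decoded.lastIndexOf('/');

  return {
    continuationToken: decoded.slice(0, idx),
    offset: parseInt(decoded.slice(idx + 1), 10),
  };
};

// Keep calling the REST API until a full GraphQL page has been collected,
// even if Azure hands back empty or short pages along the way.
const fetchFullPage = async (fetchPage, cursor, limit) => {
  let { continuationToken, offset } = cursor
    ? decodeCursor(cursor)
    : { continuationToken: '', offset: 0 };
  const items = [];

  while (true) {
    const result = await fetchPage({ continuationToken, limit });
    const taken = result.items.slice(offset, offset + (limit - items.length));

    items.push(...taken);
    offset += taken.length;

    if (items.length === limit && offset < result.items.length) {
      // stopped partway through this REST page; resume at the same token/offset
      return { items, nextCursor: encodeCursor(continuationToken, offset) };
    }

    if (!result.continuationToken) {
      // upstream is exhausted
      return { items, nextCursor: null };
    }

    continuationToken = result.continuationToken;
    offset = 0;

    if (items.length === limit) {
      return { items, nextCursor: encodeCursor(continuationToken, 0) };
    }
  }
};
```

Decoding splits on the last "/" so a continuation token that itself contains that character still round-trips.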

On the other hand, as we start using postgres, the API is likely to be more faithful to the limit option, so another approach is just to leave all of this alone and wait until that happens.

```js
  ? await singleConnectionHandler({ connection, options, ...props })
  : await singleConnectionHandler({ options, ...props });

if (result.continuationToken && !result.items.length) {
```
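The diff context above cuts off at the condition. A minimal sketch of the keep-fetching behavior the PR title describes; the wrapper name and the way the continuation token is fed back into singleConnectionHandler are assumptions, not the code from this PR:

```js
// Sketch only: the `cursor` field used to thread the continuation token back
// into the handler is hypothetical.
const loadNonEmptyPage = async ({ connection, options, ...props }) => {
  let result = connection
    ? await singleConnectionHandler({ connection, options, ...props })
    : await singleConnectionHandler({ options, ...props });

  // Azure can return an empty page that still carries a continuation token;
  // keep asking for the next page until items show up or the pages run out.
  while (result.continuationToken && !result.items.length) {
    result = await singleConnectionHandler({
      connection: { ...connection, cursor: result.continuationToken },
      options,
      ...props,
    });
  }

  return result;
};
```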
djmitche (Contributor):

The returned items are not always in a property named "items". Bug 1436212 has some background on why this is hard.

helfi92 (Contributor, Author):

Every connection loader in taskcluster-web-server returns an items property (https://github.com/taskcluster/taskcluster-web-server/search?utf8=%E2%9C%93&q=items%3A&type=). I guess we can write tests to assert that each connection handler returns an object with an items field.
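Such a test could look roughly like this; connectionHandlers is a hypothetical stand-in for however the real handlers would be imported and exercised:

```js
// Sketch only: the real test would import the actual connection handlers from
// taskcluster-web-server and call them with proper fixtures.
const connectionHandlers = {
  // exampleConnection: async () => ({ items: [], continuationToken: undefined }),
};

describe('connection handlers', () => {
  Object.entries(connectionHandlers).forEach(([name, handler]) => {
    it(`${name} resolves to an object with an items array`, async () => {
      const result = await handler({});

      expect(Array.isArray(result.items)).toBe(true);
    });
  });
});
```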

djmitche (Contributor):

Oh, awesome. I didn't know these were results from a loader. One of the downsides of this heavily-functional style of programming is that it's not easy to follow control flow!

@helfi92 (Contributor, Author) commented Oct 11, 2018

> To solve the first issue, I think the best fix is to pass the result property name (or maybe a function for extracting items from result) when building the ConnectionLoader.

I'm not sure I understand what you mean. The items are always under result.items. See earlier comment.

Regarding the second issue, I agree that we shouldn't sweat over it right now. Maybe later, if the migration to postgres starts taking too long.

@djmitche (Contributor)

OK, sounds good :)

@helfi92 merged commit 7aa380d into taskcluster:master on Oct 11, 2018