diff --git a/cspell.yml b/cspell.yml
index e8aa73355..4326d677a 100644
--- a/cspell.yml
+++ b/cspell.yml
@@ -4,6 +4,7 @@ ignoreRegExpList:
  - /[a-z]{2,}'s/
words:
  # Terms of art
+ - deprioritization
  - endianness
  - interoperation
  - monospace
@@ -11,6 +12,7 @@ words:
  - parallelization
  - structs
  - subselection
+ - errored
  # Fictional characters / examples
  - alderaan
  - hagrid
diff --git a/spec/Section 3 -- Type System.md b/spec/Section 3 -- Type System.md
index 7c116bf81..cf0cd42b7 100644
--- a/spec/Section 3 -- Type System.md
+++ b/spec/Section 3 -- Type System.md
@@ -758,8 +758,9 @@ And will yield the subset of each object type queried:

When querying an Object, the resulting mapping of fields is conceptually
ordered in the same order in which they were encountered during execution,
excluding fragments for which the type does not apply and fields or fragments
that are skipped via `@skip` or `@include` directives or temporarily skipped
via `@defer`. This ordering is correctly produced when using the
{CollectFields()} algorithm.

Response serialization formats capable of representing ordered maps should
maintain this ordering. Serialization formats which can only represent
unordered

@@ -1901,6 +1902,11 @@ by a validator, executor, or client tool such as a code generator.

GraphQL implementations should provide the `@skip` and `@include` directives.

GraphQL implementations are not required to implement the `@defer` and
`@stream` directives. If either or both of these directives are implemented,
they must be implemented according to this specification. GraphQL
implementations that do not support these directives must not make them
available via introspection.

GraphQL implementations that support the type system definition language must
provide the `@deprecated` directive if representing deprecated portions of the
schema.

@@ -2116,3 +2122,99 @@ to the relevant IETF specification.

```graphql example
scalar UUID @specifiedBy(url: "https://tools.ietf.org/html/rfc4122")
```

### @defer

```graphql
directive @defer(
  label: String
  if: Boolean! = true
) on FRAGMENT_SPREAD | INLINE_FRAGMENT
```

The `@defer` directive may be provided for fragment spreads and inline
fragments to inform the executor to delay execution of the current fragment,
indicating that it may be deprioritized. A query with the `@defer` directive
may cause the request to return multiple responses, where non-deferred data is
delivered in the initial response and deferred data is delivered in one or
more subsequent responses. `@include` and `@skip` take precedence over
`@defer`.

```graphql example
query myQuery($shouldDefer: Boolean) {
  user {
    name
    ...someFragment @defer(label: "someLabel", if: $shouldDefer)
  }
}
fragment someFragment on User {
  id
  profile_picture {
    uri
  }
}
```

#### @defer Arguments

- `if: Boolean! = true` - When `true`, fragment _should_ be deferred (See
  [related note](#note-088b7)). When `false`, fragment will not be deferred
  and data will be included in the initial response. Defaults to `true` when
  omitted.
- `label: String` - May be used by GraphQL clients to identify the data from
  responses and associate it with the corresponding defer directive. If
  provided, the GraphQL service must add it to the corresponding payload.
  `label` must be a unique label across all `@defer` and `@stream` directives
  in a document. `label` must not be provided as a variable.
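As a non-normative illustration, a service executing `myQuery` above (with
`$shouldDefer` set to `true`) might deliver the deferred fragment in a
follow-up payload. The sketch below (TypeScript) only assumes the `pending`,
`incremental`, `completed`, and `hasNext` fields used later in this proposal;
the concrete values are hypothetical.

```ts
// Hypothetical response sequence for myQuery; field names follow the
// incremental delivery format described elsewhere in this proposal.
const initialResponse = {
  data: { user: { name: "Luke" } },
  pending: [{ path: ["user"], label: "someLabel" }],
  hasNext: true, // more payloads will follow
};

const subsequentResponse = {
  incremental: [
    {
      // Data for the deferred fragment; clients merge it at `path`.
      data: { id: "1", profile_picture: { uri: "https://example.com/p.jpg" } },
      path: ["user"],
    },
  ],
  completed: [{ path: ["user"], label: "someLabel" }],
  hasNext: false, // the response stream is complete
};
```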
### @stream

```graphql
directive @stream(
  label: String
  if: Boolean! = true
  initialCount: Int = 0
) on FIELD
```

The `@stream` directive may be provided for a field of `List` type so that the
backend can leverage technology such as asynchronous iterators to provide a
partial list in the initial response, and additional list items in subsequent
responses. `@include` and `@skip` take precedence over `@stream`.

```graphql example
query myQuery($shouldStream: Boolean) {
  user {
    friends(first: 10) {
      nodes @stream(label: "friendsStream", initialCount: 5, if: $shouldStream)
    }
  }
}
```

#### @stream Arguments

- `if: Boolean! = true` - When `true`, field _should_ be streamed (See
  [related note](#note-088b7)). When `false`, the field will not be streamed
  and all list items will be included in the initial response. Defaults to
  `true` when omitted.
- `label: String` - May be used by GraphQL clients to identify the data from
  responses and associate it with the corresponding stream directive. If
  provided, the GraphQL service must add it to the corresponding payload.
  `label` must be a unique label across all `@defer` and `@stream` directives
  in a document. `label` must not be provided as a variable.
- `initialCount: Int` - The number of list items the service should return as
  part of the initial response. If omitted, defaults to `0`. A field error
  will be raised if the value of this argument is less than `0`.

Note: The ability to defer and/or stream parts of a response can have a
potentially significant impact on application performance. Developers
generally need clear, predictable control over their application's
performance. It is highly recommended that GraphQL services honor the `@defer`
and `@stream` directives on each execution. However, the specification allows
advanced use cases where the service can determine that it is more performant
to not defer and/or stream. Therefore, GraphQL clients _must_ be able to
process a response that ignores the `@defer` and/or `@stream` directives. This
also applies to the `initialCount` argument on the `@stream` directive.
Clients _must_ be able to process a streamed response that contains a
different number of initial list items than what was specified in the
`initialCount` argument.
diff --git a/spec/Section 5 -- Validation.md b/spec/Section 5 -- Validation.md
index dceec126b..d51c39ace 100644
--- a/spec/Section 5 -- Validation.md
+++ b/spec/Section 5 -- Validation.md
@@ -422,6 +422,7 @@ FieldsInSetCanMerge(set):

  {set} including visiting fragments and inline fragments.
- Given each pair of members {fieldA} and {fieldB} in {fieldsForName}:
  - {SameResponseShape(fieldA, fieldB)} must be true.
  - {SameStreamDirective(fieldA, fieldB)} must be true.
  - If the parent types of {fieldA} and {fieldB} are equal or if either is not
    an Object Type:
    - {fieldA} and {fieldB} must have identical field names.

@@ -455,6 +456,16 @@ SameResponseShape(fieldA, fieldB):

  - If {SameResponseShape(subfieldA, subfieldB)} is false, return false.
- Return true.

SameStreamDirective(fieldA, fieldB):

- If neither {fieldA} nor {fieldB} has a directive named `stream`:
  - Return true.
- If both {fieldA} and {fieldB} have a directive named `stream`:
  - Let {streamA} be the directive named `stream` on {fieldA}.
  - Let {streamB} be the directive named `stream` on {fieldB}.
  - If {streamA} and {streamB} have identical sets of arguments, return true.
- Return false.

**Explanatory Text**

If multiple field selections with the same response names are encountered
during

@@ -463,7 +474,7 @@ unambiguous.

Therefore any two field selections which might both be encountered for the
same object are only valid if they are equivalent.

During execution, the simultaneous execution of fields with the same response
name is accomplished by {CollectFields()}.

For simple hand-written GraphQL, this rule is obviously a clear developer
error, however nested fragments can make this difficult to detect manually.
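A non-normative sketch of the new {SameStreamDirective()} check above,
assuming a simplified AST shape (the `ArgumentNode`, `DirectiveNode`, and
`FieldNode` interfaces here are illustrative, not part of any real API):

```ts
// Minimal sketch of SameStreamDirective over a simplified AST.
interface ArgumentNode { name: string; value: string; }
interface DirectiveNode { name: string; args: ArgumentNode[]; }
interface FieldNode { directives: DirectiveNode[]; }

function sameStreamDirective(fieldA: FieldNode, fieldB: FieldNode): boolean {
  const streamA = fieldA.directives.find((d) => d.name === "stream");
  const streamB = fieldB.directives.find((d) => d.name === "stream");
  if (!streamA && !streamB) return true; // neither field is streamed
  if (!streamA || !streamB) return false; // only one field is streamed
  // Both are streamed: their argument sets must be identical.
  const key = (d: DirectiveNode) =>
    JSON.stringify([...d.args].sort((a, b) => a.name.localeCompare(b.name)));
  return key(streamA) === key(streamB);
}
```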
@@ -1517,6 +1528,174 @@ query ($foo: Boolean = true, $bar: Boolean = false) {

}
```

### Defer And Stream Directives Are Used On Valid Root Field

**Formal Specification**

- For every {directive} in a document:
  - Let {directiveName} be the name of {directive}.
  - Let {mutationType} be the root Mutation type in {schema}.
  - Let {subscriptionType} be the root Subscription type in {schema}.
  - If {directiveName} is "defer" or "stream":
    - The parent type of {directive} must not be {mutationType} or
      {subscriptionType}.

**Explanatory Text**

The defer and stream directives are not allowed to be used on root fields of
the mutation or subscription type.

For example, the following document will not pass validation because `@defer`
has been used on a root mutation field:

```raw graphql counter-example
mutation {
  ... @defer {
    mutationField
  }
}
```

### Defer And Stream Directives Are Used On Valid Operations

**Formal Specification**

- Let {subscriptionFragments} be the empty set.
- For each {operation} in a document:
  - If {operation} is a subscription operation:
    - Let {fragments} be every fragment referenced by that {operation}
      transitively.
    - For each {fragment} in {fragments}:
      - Let {fragmentName} be the name of {fragment}.
      - Add {fragmentName} to {subscriptionFragments}.
- For every {directive} in a document:
  - Let {directiveName} be the name of {directive}.
  - If {directiveName} is not "defer" or "stream":
    - Continue to the next {directive}.
  - Let {ancestor} be the ancestor operation or fragment definition of
    {directive}.
  - If {ancestor} is a fragment definition:
    - If the fragment name of {ancestor} is not present in
      {subscriptionFragments}:
      - Continue to the next {directive}.
  - Otherwise, if {ancestor} is not a subscription operation:
    - Continue to the next {directive}.
  - Let {if} be the argument named "if" on {directive}.
  - {if} must be defined.
  - Let {argumentValue} be the value passed to {if}.
  - {argumentValue} must be a variable, or the boolean value "false".

**Explanatory Text**

The defer and stream directives cannot be used to defer or stream data in
subscription operations. If these directives appear in a subscription
operation, they must be disabled using the "if" argument. This rule will not
permit any defer or stream directives on a subscription operation that cannot
be disabled using the "if" argument.

For example, the following document will not pass validation because `@defer`
has been used in a subscription operation with no "if" argument defined:

```raw graphql counter-example
subscription sub {
  newMessage {
    ... @defer {
      body
    }
  }
}
```
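A non-normative sketch of this rule's core check, assuming a visitor has
already reported each `@defer`/`@stream` use together with whether it is
transitively reachable from a subscription operation (the `DirectiveUse` shape
is hypothetical):

```ts
// Sketch: a @defer/@stream directive reachable from a subscription must be
// statically disableable via its `if` argument.
interface DirectiveUse {
  name: "defer" | "stream";
  ifArgument?: { kind: "Variable" | "BooleanValue"; value?: boolean };
  reachableFromSubscription: boolean;
}

function validateSubscriptionUse(use: DirectiveUse): string | null {
  if (!use.reachableFromSubscription) return null;
  const arg = use.ifArgument;
  if (!arg) {
    return `@${use.name} in a subscription must define an "if" argument`;
  }
  const disableable =
    arg.kind === "Variable" ||
    (arg.kind === "BooleanValue" && arg.value === false);
  return disableable
    ? null
    : `@${use.name} in a subscription must be disableable via "if"`;
}
```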
### Defer And Stream Directive Labels Are Unique

**Formal Specification**

- Let {labelValues} be an empty set.
- For every {directive} in the document:
  - Let {directiveName} be the name of {directive}.
  - If {directiveName} is "defer" or "stream":
    - For every {argument} in {directive}:
      - Let {argumentName} be the name of {argument}.
      - Let {argumentValue} be the value passed to {argument}.
      - If {argumentName} is "label":
        - {argumentValue} must not be a variable.
        - {argumentValue} must not be present in {labelValues}.
        - Append {argumentValue} to {labelValues}.

**Explanatory Text**

The `@defer` and `@stream` directives each accept an argument "label". This
label may be used by GraphQL clients to uniquely identify response payloads.
If a label is passed, it must not be a variable and it must be unique across
all other `@defer` and `@stream` directives in the document.

For example, the following document is valid:

```graphql example
{
  dog {
    ...fragmentOne
    ...fragmentTwo @defer(label: "dogDefer")
  }
  pets @stream(label: "petStream") {
    name
  }
}

fragment fragmentOne on Dog {
  name
}

fragment fragmentTwo on Dog {
  owner {
    name
  }
}
```

The following document will not pass validation because the same label is used
in different `@defer` and `@stream` directives:

```raw graphql counter-example
{
  dog {
    ...fragmentOne @defer(label: "MyLabel")
  }
  pets @stream(label: "MyLabel") {
    name
  }
}

fragment fragmentOne on Dog {
  name
}
```

### Stream Directives Are Used On List Fields

**Formal Specification**

- For every {directive} in a document:
  - Let {directiveName} be the name of {directive}.
  - If {directiveName} is "stream":
    - Let {adjacent} be the AST node the directive affects.
    - {adjacent} must be a List type.

**Explanatory Text**

GraphQL directive locations do not provide enough granularity to distinguish
the type of fields used in a GraphQL document. Since the stream directive is
only valid on list fields, an additional validation rule must be used to
ensure it is used correctly.

For example, the following document will only pass validation if `field` is
defined as a List type in the associated schema:

```graphql counter-example
query {
  field @stream(initialCount: 0)
}
```

## Variables

### Variable Uniqueness

diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md
index 28862ea89..3c90ab5ab 100644
--- a/spec/Section 6 -- Execution.md
+++ b/spec/Section 6 -- Execution.md
@@ -31,6 +31,10 @@ request is determined by the result of executing this operation according to
the "Executing Operations" section below.

ExecuteRequest(schema, document, operationName, variableValues, initialValue):

Note: this algorithm assumes that the implementing language supports
coroutines. Alternatively, a socket can provide a write buffer pointer to
allow {ExecuteRequest()} to directly write payloads into the buffer.

- Let {operation} be the result of {GetOperation(document, operationName)}.
- Let {coercedVariableValues} be the result of {CoerceVariableValues(schema,
  operation, variableValues)}.

@@ -131,12 +135,8 @@

ExecuteQuery(query, schema, variableValues, initialValue):

- Let {queryType} be the root Query type in {schema}.
- Assert: {queryType} is an Object type.
- Let {selectionSet} be the top level Selection Set in {query}.
- Return {ExecuteRootSelectionSet(variableValues, initialValue, queryType,
  selectionSet)}.
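As a non-normative illustration of the coroutine assumption noted above, an
implementation might model the overall request as an async generator that
yields the initial response followed by any subsequent payloads. The
`ExecutionResponse` type and both parameters are hypothetical stand-ins:

```ts
// Sketch: incremental execution modeled as an async generator.
type ExecutionResponse = { data?: unknown; hasNext?: boolean };

async function* executeRequest(
  executeInitial: () => Promise<ExecutionResponse>,
  subsequentPayloads: AsyncIterable<ExecutionResponse>,
): AsyncGenerator<ExecutionResponse> {
  yield await executeInitial(); // initial result; hasNext true if more follow
  for await (const payload of subsequentPayloads) {
    yield payload; // deferred/streamed payloads; the last has hasNext: false
  }
}
```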
### Mutation

@@ -153,11 +153,8 @@

ExecuteMutation(mutation, schema, variableValues, initialValue):

- Let {mutationType} be the root Mutation type in {schema}.
- Assert: {mutationType} is an Object type.
- Let {selectionSet} be the top level Selection Set in {mutation}.
- Return {ExecuteRootSelectionSet(variableValues, initialValue, mutationType,
  selectionSet, true)}.

### Subscription

@@ -256,15 +253,17 @@

CreateSourceEventStream(subscription, schema, variableValues, initialValue):

- Let {subscriptionType} be the root Subscription type in {schema}.
- Assert: {subscriptionType} is an Object type.
- Let {selectionSet} be the top level Selection Set in {subscription}.
- Let {fieldsByTarget} be the result of calling
  {AnalyzeSelectionSet(subscriptionType, selectionSet, variableValues)}.
- Let {groupedFieldSet} be the first entry in {fieldsByTarget}.
- If {groupedFieldSet} does not have exactly one entry, raise a _request
  error_.
- Let {fieldGroup} be the value of the first entry in {groupedFieldSet}.
- Let {fieldDetails} be the first entry in {fieldGroup}.
- Let {node} be the corresponding entry on {fieldDetails}.
- Let {fieldName} be the name of {node}. Note: This value is unaffected if an
  alias is used.
- Let {argumentValues} be the result of
  {CoerceArgumentValues(subscriptionType, node, variableValues)}.
- Let {fieldStream} be the result of running
  {ResolveFieldEventStream(subscriptionType, initialValue, fieldName,
  argumentValues)}.

@@ -301,15 +300,19 @@

ExecuteSubscriptionEvent(subscription, schema, variableValues, initialValue):

- Let {subscriptionType} be the root Subscription type in {schema}.
- Assert: {subscriptionType} is an Object type.
- Let {selectionSet} be the top level Selection Set in {subscription}.
- Let {fieldsByTarget} be the result of calling
  {AnalyzeSelectionSet(subscriptionType, selectionSet, variableValues)}.
- Let {groupedFieldSet} be the first entry in {fieldsByTarget}.
- Let {data} be the result of running {ExecuteGroupedFieldSet(groupedFieldSet,
  subscriptionType, initialValue, variableValues)} _normally_ (allowing
  parallelization).
- Let {errors} be the list of all _field error_ raised while executing the
  {groupedFieldSet}.
- Return an unordered map containing {data} and {errors}.

Note: The {ExecuteSubscriptionEvent()} algorithm is intentionally similar to
{ExecuteQuery()} since this is how each event result is produced. Incremental
delivery, however, is not supported within {ExecuteSubscriptionEvent()}.
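A non-normative sketch of the resulting subscription response stream,
emphasizing that each source event is executed to completion with no
incremental payloads (both type parameters and the `executeEvent` callback are
assumptions of this sketch):

```ts
// Sketch: one source event in, one complete response out; @defer/@stream
// are disabled within subscription execution.
async function* mapSourceToResponseStream<TEvent, TResponse>(
  sourceStream: AsyncIterable<TEvent>,
  executeEvent: (event: TEvent) => Promise<TResponse>,
): AsyncGenerator<TResponse> {
  for await (const event of sourceStream) {
    yield await executeEvent(event); // ExecuteSubscriptionEvent()
  }
}
```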
#### Unsubscribe

Unsubscribe(responseStream):

- Cancel {responseStream}

@@ -322,44 +325,686 @@

## Incremental Delivery

If an operation contains `@defer` or `@stream` directives, execution may also
result in a Subsequent Result stream in addition to the initial response.
Because execution of Subsequent Results may begin prior to the completion of
the initial result, an Incremental Publisher may be required to manage the
ordering and potentially filtering of the Subsequent Result stream, as
described below.

The Incremental Publisher is responsible for providing:

1. a message handler for the different Execution Events described below.
2. a method for introspecting the current delivery state ({HasNext()}).
3. an iterator that resolves to a Subsequent Result stream, when appropriate.

### Create Incremental Publisher

A Publisher Record consists of:

- {released}: the set of Subsequent Result records for this response that are
  currently available to subscribers.
- {pending}: the set of Subsequent Result records for this response that are
  pending, whether or not they have been released to subscribers.
- {signal}: an asynchronous signal that can be awaited and triggered.

CreatePublisher():

- Let {publisherRecord} be a new publisher record.
- Initialize {released} on {publisherRecord} to an empty set.
- Initialize {pending} on {publisherRecord} to an empty set.
- Initialize {signal}.
- Return {publisherRecord}.

### Incremental Delivery Records

Internal records are used to manage the internal publishing state as follows:

#### Incremental Result Records

An Incremental Result Record is either an Initial Result Record or a
Subsequent Result Record. A Subsequent Result Record is either a Deferred
Fragment Record or a Stream Items Record.

An Initial Result Record is a structure containing:

- {children}: the set of children Subsequent Result Records that may be
  introduced when the initial result is pushed.

A Deferred Fragment Record is a structure containing:

- {label}: value derived from the corresponding `@defer` directive.
- {path}: a list of field names and indices from root to the location of the
  corresponding `@defer` directive.
- {children}: the set of children Subsequent Result Records that may be
  introduced when this record is pushed.
- {deferredGroupedFieldSetRecords}: the set of Deferred Grouped Field Set
  Records that compose this Deferred Fragment Record.
- {pending}: the set of still pending Deferred Grouped Field Set Records; all
  of these must complete for the Deferred Fragment Record to be pushed.
- {errors}: a list of unrecoverable errors encountered when attempting to
  deliver this record, or {undefined} if no such errors were encountered.
- {isCompleted}: a boolean value indicating whether this record is complete.
- {filtered}: a boolean value indicating whether this record has been
  filtered.
A Stream Items Record is a structure containing:

- {path}: a list of field names and indices from root to the location of the
  corresponding list item contained by this Stream Items Record.
- {children}: the set of children Subsequent Result Records that may be
  introduced when this record is pushed.
- {streamRecord}: the Stream Record which this Stream Items Record partially
  fulfills.
- {items}: a list that will contain the streamed items.
- {errors}: a list of all _field error_ raised while executing this record.
- {isCompleted}: a boolean value indicating whether this record is complete.
- {filtered}: a boolean value indicating whether this record has been
  filtered.
- {isCompletedIterator}: a boolean value indicating whether this record
  represents completion of the iterator without any actual items.
- {sent}: a boolean value indicating whether this record has been previously
  pushed to the client.

A Stream Record is a structure containing the following:

- {label}: value derived from the corresponding `@stream` directive.
- {path}: a list of field names and indices from root to the location of the
  corresponding `@stream` directive.
- {streamedFieldGroup}: a Field Group record for completing stream items.
- {iterator}: the underlying iterator.
- {errors}: a list of unrecoverable errors encountered when attempting to
  deliver this record, or {undefined} if no such errors were encountered.

#### Incremental Data Records

An Incremental Data Record is either an Initial Result Record, a Deferred
Grouped Field Set Record, or a Stream Items Record.

A Deferred Grouped Field Set Record is a structure containing:

- {path}: a list of field names and indices from root to the location of this
  deferred grouped field set.
- {deferredFragmentRecords}: a list of Deferred Fragment Records containing
  this record.
- {data}: an ordered map that will contain the result of execution for this
  fragment on completion.
- {errors}: a list of all _field error_ raised while executing this record.
- {shouldInitiateDefer}: a boolean value indicating whether implementation
  specific deferral of execution should be initiated.
- {sent}: a boolean value indicating whether this data has been previously
  pushed to the client.

Deferred Grouped Field Set Records may fulfill multiple Deferred Fragment
Records due to overlapping fields. Initial Result Records and Stream Items
Records each always fulfill a single result record, and so each represents
both a unit of Incremental Data as well as an Incremental Result.

### Internal Record Creation

#### Prepare Initial Result Record

PrepareInitialResultRecord():

- Let {initialResultRecord} be a new Initial Result Record.
- Return {initialResultRecord}.

#### Prepare New Deferred Fragment Record

PrepareNewDeferredFragmentRecord(path, label, parent):

- Let {deferredFragmentRecord} be a new Deferred Fragment record created from
  {path}, {label}, and {parent}.
- Let {children} be the corresponding entry on {parent}.
- Add {deferredFragmentRecord} to {children}.
- Return {deferredFragmentRecord}.
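A non-normative sketch of these internal records as TypeScript interfaces;
names and exact field types are illustrative only:

```ts
// Illustrative shapes for the incremental delivery records defined above.
interface SubsequentResultRecordBase {
  children: Set<SubsequentResultRecord>;
  isCompleted: boolean;
  filtered: boolean;
}

interface DeferredFragmentRecord extends SubsequentResultRecordBase {
  label: string | undefined;
  path: Array<string | number>;
  deferredGroupedFieldSetRecords: Set<DeferredGroupedFieldSetRecord>;
  pending: Set<DeferredGroupedFieldSetRecord>;
  errors: unknown[] | undefined;
}

interface StreamItemsRecord extends SubsequentResultRecordBase {
  path: Array<string | number>;
  streamRecord: StreamRecord;
  items: unknown[];
  errors: unknown[];
  isCompletedIterator: boolean;
  sent: boolean;
}

type SubsequentResultRecord = DeferredFragmentRecord | StreamItemsRecord;

interface StreamRecord {
  label: string | undefined;
  path: Array<string | number>;
  iterator: AsyncIterator<unknown> | undefined;
  errors: unknown[] | undefined;
}

interface DeferredGroupedFieldSetRecord {
  path: Array<string | number>;
  deferredFragmentRecords: DeferredFragmentRecord[];
  data: Record<string, unknown> | undefined;
  errors: unknown[];
  shouldInitiateDefer: boolean;
  sent: boolean;
}
```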
#### Prepare New Deferred Grouped Field Set Record

PrepareNewDeferredGroupedFieldSetRecord(path, deferredFragmentRecords,
groupedFieldSet, shouldInitiateDefer):

- Let {deferredGroupedFieldSetRecord} be a new Deferred Grouped Field Set
  record created from {path}, {deferredFragmentRecords}, {groupedFieldSet},
  and {shouldInitiateDefer}.
- For each {deferredFragmentRecord} of {deferredFragmentRecords}:
  - Let {deferredGroupedFieldSetRecords} and {pending} be the corresponding
    entries on {deferredFragmentRecord}.
  - Add {deferredGroupedFieldSetRecord} to both
    {deferredGroupedFieldSetRecords} and {pending}.
- Return {deferredGroupedFieldSetRecord}.

#### Prepare New Stream Record

PrepareNewStreamRecord(fieldGroup, label, path, iterator):

- Let {streamedFields} be an empty list.
- Let {fields} be the corresponding entry on {fieldGroup}.
- For each {fieldDetails} in {fields}:
  - Let {node} be the corresponding entry on {fieldDetails}.
  - Let {newFieldDetails} be a new Field Details record created from {node}
    and {undefined}.
  - Append {newFieldDetails} to {streamedFields}.
- Let {targets} be a set containing the value {undefined}.
- Let {streamedFieldGroup} be a new Field Group record created from
  {streamedFields} and {targets}.
- Let {streamRecord} be a new Stream Record created from {streamedFieldGroup},
  {label}, {path}, and {iterator}.
- Return {streamRecord}.

#### Prepare New Stream Items Record

PrepareNewStreamItemsRecord(streamRecord, path, incrementalDataRecord):

- Let {streamItemsRecord} be a new Stream Items Record created from
  {streamRecord} and {path}.
- If {incrementalDataRecord} is a Deferred Grouped Field Set Record:
  - Let {deferredFragmentRecords} be the corresponding entry on
    {incrementalDataRecord}.
  - For each {parent} in {deferredFragmentRecords}:
    - Let {children} be the corresponding entry on {parent}.
    - Add {streamItemsRecord} to {children}.
- Otherwise:
  - Let {children} be the corresponding entry on {incrementalDataRecord}.
  - Add {streamItemsRecord} to {children}.
- Return {streamItemsRecord}.

### Publisher State Manipulation

Multiple methods manipulate publisher state.

#### Complete Deferred Grouped Field Set Record

CompleteDeferredGroupedFieldSet(publisherRecord, deferredGroupedFieldSetRecord,
data, errors):

- Set the corresponding entries on {deferredGroupedFieldSetRecord} to {data}
  and {errors}.
- Let {deferredFragmentRecords} be the corresponding entry on
  {deferredGroupedFieldSetRecord}.
- For each {deferredFragmentRecord} in {deferredFragmentRecords}:
  - Let {pending} be the corresponding entry on {deferredFragmentRecord}.
  - Remove {deferredGroupedFieldSetRecord} from {pending}.
  - If {pending} is not empty:
    - Continue to the next entry in {deferredFragmentRecords}.
  - Call {ReleaseSubsequentResult(publisherRecord, deferredFragmentRecord)}.

ReleaseSubsequentResult(publisherRecord, subsequentResult):

- Let {released} and {pending} be the corresponding entries on
  {publisherRecord}.
- If {subsequentResult} is not within {pending}, return.
- Add {subsequentResult} to {released}.

#### Mark Errored Deferred Grouped Field Set Record

MarkErroredDeferredGroupedFieldSetRecord(publisherRecord,
deferredGroupedFieldSetRecord, errors):

- Let {deferredFragmentRecords} be the corresponding entry on
  {deferredGroupedFieldSetRecord}.
- For each {deferredFragmentRecord} in {deferredFragmentRecords}:
  - Set the {errors} entry on {deferredFragmentRecord} to {errors}.
  - Set {isCompleted} on {deferredFragmentRecord} to {true}.
  - Call {ReleaseSubsequentResult(publisherRecord, deferredFragmentRecord)}.
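A non-normative sketch of the completion and release flow above, reusing the
illustrative record shapes from the earlier sketch (the `PublisherLike` shape
is an assumption of this sketch):

```ts
// Sketch: completing a deferred grouped field set may release the fragments
// it belongs to once all of their grouped field sets have completed.
interface PublisherLike {
  pending: Set<object>;
  released: Set<object>;
}

function completeDeferredGroupedFieldSet(
  publisher: PublisherLike,
  record: {
    data: unknown;
    errors: unknown[];
    deferredFragmentRecords: Array<{ pending: Set<object> }>;
  },
  data: unknown,
  errors: unknown[],
): void {
  record.data = data;
  record.errors = errors;
  for (const fragment of record.deferredFragmentRecords) {
    fragment.pending.delete(record); // this grouped field set is done
    if (fragment.pending.size > 0) continue; // others still executing
    // ReleaseSubsequentResult(): only release fragments still pending.
    if (publisher.pending.has(fragment)) {
      publisher.released.add(fragment);
    }
  }
}
```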
#### Complete Stream Items Record

CompleteStreamItemsRecord(publisherRecord, streamItemsRecord, items, errors):

- Set the corresponding entries on {streamItemsRecord} to {items} and
  {errors}.
- Set {isCompleted} on {streamItemsRecord} to {true}.
- Call {ReleaseSubsequentResult(publisherRecord, streamItemsRecord)}.

#### Complete Final Empty Stream Items Record

CompleteFinalEmptyStreamItemsRecord(publisherRecord, streamItemsRecord):

- Set {isCompletedIterator} on {streamItemsRecord} to {true}.
- Set {isCompleted} on {streamItemsRecord} to {true}.
- Call {ReleaseSubsequentResult(publisherRecord, streamItemsRecord)}.

#### Mark Errored Stream Items Record

MarkErroredStreamItemsRecord(publisherRecord, streamItemsRecord, errors):

- Let {streamRecord} be the corresponding entry on {streamItemsRecord}.
- Set the {errors} entry on {streamRecord} to {errors}.
- Set {isCompleted} on {streamItemsRecord} to {true}.
- Call {ReleaseSubsequentResult(publisherRecord, streamItemsRecord)}.

#### Build Response

If an operation contains subsequent result records resulting from `@stream`
or `@defer` directives, the {BuildResponse} algorithm will return an initial
result as well as a stream of incremental results.

BuildResponse(publisherRecord, initialResultRecord, data, errors):

- Let {children} be the corresponding entry on {initialResultRecord}.
- For each {child} in {children}:
  - Let {filtered} be the corresponding entry on {child}.
  - If {filtered} is {true}:
    - Continue to the next {child} in {children}.
  - Otherwise:
    - Call {PublishSubsequentResult(publisherRecord, child)}.
- Initialize {initialResult} to an empty unordered map.
- If {errors} is not empty:
  - Set {errors} on {initialResult} to {errors}.
- Set {data} on {initialResult} to {data}.
- Let {pending} be the corresponding entry on {publisherRecord}.
- If {pending} is empty:
  - Return {initialResult}.
- Set {pending} on {initialResult} to the result of
  {PendingRecordsToResults(pending)}.
- Set {hasNext} on {initialResult} to {true}.
- Let {iterator} be the result of running
  {YieldSubsequentResults(publisherRecord)}.
- Return {initialResult} and {iterator}.

PublishSubsequentResult(publisherRecord, subsequentResult):

- Let {isCompleted} be the corresponding entry on {subsequentResult}.
- If {isCompleted} is {true}:
  - Call {PushSubsequentResult(publisherRecord, subsequentResult)}.
  - Return.
- If {subsequentResult} is a Stream Items Record:
  - Call {IntroduceSubsequentResult(publisherRecord, subsequentResult)}.
  - Return.
- Let {pending} be the corresponding entry on {subsequentResult}.
- If {pending} is empty:
  - Set {isCompleted} on {subsequentResult} to {true}.
  - Call {PushSubsequentResult(publisherRecord, subsequentResult)}.
  - Return.
- Call {IntroduceSubsequentResult(publisherRecord, subsequentResult)}.

IntroduceSubsequentResult(publisherRecord, subsequentResult):

- Let {pending} be the corresponding entry on {publisherRecord}.
- Add {subsequentResult} to {pending}.

PushSubsequentResult(publisherRecord, subsequentResult):

- Let {released}, {pending}, and {signal} be the corresponding entries on
  {publisherRecord}.
- Add {subsequentResult} to both {released} and {pending}.
- Trigger {signal}.
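The {signal} entry can be modeled as a resettable deferred promise. A
non-normative sketch:

```ts
// Sketch: an awaitable, manually-triggered signal, as assumed by the
// publisher record's {signal} entry.
function createSignal(): { wait: () => Promise<void>; trigger: () => void } {
  let resolve!: () => void;
  let promise = new Promise<void>((r) => (resolve = r));
  return {
    wait: () => promise,
    trigger: () => {
      resolve(); // wake all current waiters
      promise = new Promise<void>((r) => (resolve = r)); // re-arm
    },
  };
}
```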
PendingRecordsToResults(records):

- Initialize {pendingResults} to an empty list.
- For each {record} in {records}:
  - Set {pendingSent} on {record} to {true}.
  - Let {path} and {label} be the corresponding entries on {record}.
  - Let {pendingResult} be an unordered map containing {path} and {label}.
  - Append {pendingResult} to {pendingResults}.
- Return {pendingResults}.

#### Yield Subsequent Results

If an operation contains subsequent result records resulting from `@stream`
or `@defer` directives, the {YieldSubsequentResults} algorithm defines how the
payloads are produced.

YieldSubsequentResults(publisherRecord):

- Let {pending} be the corresponding entry on {publisherRecord}.
- While {pending} is not empty:
  - If a termination signal is received:
    - Initialize {streams} to the empty set.
    - Let {pendingRecords} be the {pending} entry on {publisherRecord}.
    - Let {descendants} be the result of {GetDescendants(pendingRecords)}.
    - For each {record} in {descendants}:
      - If {record} is a Stream Items Record:
        - Let {streamRecord} be the corresponding entry on {record}.
        - Add {streamRecord} to {streams}.
    - For each {streamRecord} in {streams}:
      - If {streamRecord} contains {iterator}:
        - Send a termination signal to {iterator}.
    - Return.
  - Let {released} be the corresponding entry on {publisherRecord}.
  - If {released} is empty:
    - Let {signal} be the corresponding entry on {publisherRecord}.
    - Wait for {signal} to be triggered.
    - Reinitialize {signal} on {publisherRecord}.
  - Otherwise:
    - Initialize {current} to the empty set.
    - For each {record} in {released}:
      - Add {record} to {current}.
      - Remove {record} from both {released} and {pending}.
    - Let {subsequentResponse} be the result of
      {GenerateSubsequentResponse(publisherRecord, current)}.
    - Yield {subsequentResponse}.

GetDescendants(children):

- Let {descendants} be the empty set.
- For each {child} in {children}:
  - Add {child} to {descendants}.
  - Let {grandchildren} be the value for {children} on {child}.
  - Let {grandDescendants} be the result of {GetDescendants(grandchildren)}.
  - For each {grandDescendant} in {grandDescendants}:
    - Add {grandDescendant} to {descendants}.
- Return {descendants}.

GenerateSubsequentResponse(publisherRecord, current):

- Initialize {pendingRecords} to the empty set.
- Initialize {incremental} to an empty list.
- Initialize {completedRecords} to the empty set.
- For each {record} in {current}:
  - Let {children} be the corresponding entry on {record}.
  - For each {child} in {children}:
    - If {child} is a Stream Items Record:
      - Let {streamRecord} be the corresponding entry on {child}.
      - Let {pendingSent} be the corresponding entry on {streamRecord}.
      - If {pendingSent} is not {true}:
        - Add {streamRecord} to {pendingRecords}.
    - Otherwise:
      - Let {pendingSent} be the corresponding entry on {child}.
      - If {pendingSent} is not {true}:
        - Add {child} to {pendingRecords}.
    - Call {PublishSubsequentResult(publisherRecord, child)}.
  - If {record} is a Stream Items Record:
    - Let {sent} be the corresponding entry on {record}.
    - If {sent} is {true}:
      - Continue to the next {record} in {current}.
    - Set {sent} on {record} to {true}.
    - Let {streamRecord} be the corresponding entry on {record}.
    - Let {isCompletedIterator} be the corresponding entry on {record}.
    - If {isCompletedIterator} is {true}:
      - Remove {streamRecord} from {pendingRecords}, if present.
      - Add {streamRecord} to {completedRecords}.
      - Let {streamErrors} be the entry for {errors} on {streamRecord}.
      - If {streamErrors} is not empty:
        - Continue to the next {record} in {current}.
    - Let {items} be the corresponding entry on {record}.
    - Let {path} and {errors} be the corresponding entries on {record}.
    - Let {incrementalResult} be an unordered map containing {items}, {path},
      and {errors}.
    - Append {incrementalResult} to {incremental}.
  - Otherwise:
    - Remove {record} from {pendingRecords}, if present.
    - Add {record} to {completedRecords}.
    - Let {errors} be the corresponding entry on {record}.
    - If {errors} is not empty:
      - Continue to the next {record} in {current}.
    - Let {deferredGroupedFieldSetRecords} be the corresponding entry on
      {record}.
    - For each {deferredGroupedFieldSetRecord} in
      {deferredGroupedFieldSetRecords}:
      - Let {sent} be the corresponding entry on
        {deferredGroupedFieldSetRecord}.
      - If {sent} is {true}:
        - Continue to the next {record} in {current}.
      - Set {sent} on {deferredGroupedFieldSetRecord} to {true}.
      - Let {data}, {path}, and {errors} be the corresponding entries on
        {deferredGroupedFieldSetRecord}.
      - Let {incrementalResult} be an unordered map containing {data},
        {path}, and {errors}.
      - Append {incrementalResult} to {incremental}.
- Let {pending} be the corresponding entry on {publisherRecord}.
- Let {hasNext} be {false} if {pending} is empty; otherwise, let it be
  {true}.
- Let {subsequentResult} be an unordered map containing {hasNext}.
- If {pendingRecords} is not empty:
  - Let {pending} be {PendingRecordsToResults(pendingRecords)}.
  - Set the corresponding entry on {subsequentResult} to {pending}.
- If {incremental} is not empty:
  - Set the corresponding entry on {subsequentResult} to {incremental}.
- If {completedRecords} is not empty:
  - Let {completed} be {CompletedRecordsToResults(completedRecords)}.
  - Set the corresponding entry on {subsequentResult} to {completed}.
- Return {subsequentResult}.

CompletedRecordsToResults(records):

- Initialize {completedResults} to an empty list.
- For each {record} in {records}:
  - Let {path}, {label}, and {errors} be the corresponding entries on
    {record}.
  - Let {completedResult} be an unordered map containing {path}, {label}, and
    {errors}.
  - Append {completedResult} to {completedResults}.
- Return {completedResults}.

#### Filter Subsequent Results

When a field error is raised, there may be unpublished Subsequent Result
records with a path that points to a location that has been removed or set to
null due to null propagation. These subsequent results must not be sent to
clients.

In {FilterSubsequentResults}, {nullPath} is the path which has resolved to
null after propagation as a result of a field error.
{erroringIncrementalDataRecord} is the Incremental Data record where the
field error was raised. {erroringIncrementalDataRecord} will not be set for
field errors that were raised during the initial execution outside of
{ExecuteDeferredGroupedFieldSets} or {ExecuteStreamField}.

FilterSubsequentResults(publisherRecord, nullPath,
erroringIncrementalDataRecord):

- Let {children} be the result of
  {GetChildren(erroringIncrementalDataRecord)}.
- Let {streams} be an empty set of Stream Records.
- Let {descendants} be the result of {GetDescendants(children)}.
- For each {descendant} in {descendants}:
  - If {NullsSubsequentResultRecord(descendant, nullPath)} is not {true}:
    - Continue to the next {descendant} in {descendants}.
  - Set the entry for {filtered} on {descendant} to {true}.
  - If {descendant} is a Stream Items Record:
    - Let {streamRecord} be the corresponding entry on {descendant}.
    - Add {streamRecord} to {streams}.
- For each {stream} in {streams}:
  - Let {iterator} be the corresponding entry on {stream}.
  - Optionally, notify the {iterator} that no additional items will be
    requested.

GetChildren(incrementalDataRecord):

- Initialize {subsequentResultRecords} to an empty list.
- If {incrementalDataRecord} is an Initial Result Record or a Stream Items
  Record:
  - Append {incrementalDataRecord} to {subsequentResultRecords}.
- Otherwise:
  - Let {deferredFragmentRecords} be the corresponding entry on
    {incrementalDataRecord}.
  - For each {deferredFragmentRecord} in {deferredFragmentRecords}:
    - Append {deferredFragmentRecord} to {subsequentResultRecords}.
- Let {children} be the empty set.
- For each {subsequentResultRecord} in {subsequentResultRecords}:
  - For each {child} in the {children} entry on {subsequentResultRecord}:
    - Add {child} to {children}.
- Return {children}.

NullsSubsequentResultRecord(subsequentResultRecord, nullPath):

- If {subsequentResultRecord} is a Stream Items Record:
  - Let {incrementalDataRecords} be a list containing
    {subsequentResultRecord}.
- Otherwise:
  - Let {incrementalDataRecords} be the value corresponding to the entry for
    {deferredGroupedFieldSetRecords} on {subsequentResultRecord}.
- Let {matched} equal {false}.
- For each {incrementalDataRecord} in {incrementalDataRecords}:
  - Let {path} be the corresponding entry on {incrementalDataRecord}.
  - If {MatchesPath(path, nullPath)} is {true}:
    - Set {matched} equal to {true}.
    - Optionally, cancel any incomplete work in the execution of
      {incrementalDataRecord}.
- Return {matched}.

MatchesPath(testPath, basePath):

- Initialize {index} to zero.
- While {index} is less than the length of {basePath}:
  - Initialize {basePathItem} to the element at {index} in {basePath}.
  - Initialize {testPathItem} to the element at {index} in {testPath}.
  - If {basePathItem} is not equivalent to {testPathItem}:
    - Return {false}.
  - Increment {index} by one.
- Return {true}.
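A non-normative sketch of this path-prefix test: a record is filtered when the
nulled path is a prefix of the record's path.

```ts
// Sketch: MatchesPath() as a simple prefix comparison.
type PathSegment = string | number;

function matchesPath(
  testPath: PathSegment[],
  basePath: PathSegment[],
): boolean {
  for (let index = 0; index < basePath.length; index++) {
    if (basePath[index] !== testPath[index]) {
      return false; // diverges before basePath ends: not under the null
    }
  }
  return true; // basePath is a prefix of testPath
}

// e.g. matchesPath(["user", "friends", 3], ["user"]) === true
```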
For example, assume the field `alwaysThrows` is a `Non-Null` type that always
raises a field error:

```graphql example
{
  myObject {
    ... @defer {
      name
    }
    alwaysThrows
  }
}
```

In this case, only one response should be sent. The subsequent result record
associated with the `@defer` directive should be removed and its execution may
be cancelled.

```json example
{
  "data": { "myObject": null },
  "hasNext": false
}
```

## Executing the Root Selection Set

To execute the root selection set, the object value being evaluated and the
object type need to be known, as well as whether it must be executed serially,
or may be executed in parallel.

Executing the root selection set works similarly for queries (parallel),
mutations (serial), and subscriptions (where it is executed for each event in
the underlying Source Stream).

First, the selection set is turned into a grouped field set; then, we execute
this grouped field set and return the resulting {data} and {errors}.
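Before the formal algorithm, a non-normative sketch of this orchestration,
assuming a Promise-based executor (the `AnalyzedOperation` shape and its
callbacks are assumptions of this sketch, not part of the specification):

```ts
// Sketch: root execution kicks off the initial grouped field set and, in
// parallel, any deferred grouped field sets discovered during analysis.
interface AnalyzedOperation {
  executeInitial: () => Promise<Record<string, unknown>>;
  executeDeferred: Array<() => Promise<void>>;
}

async function executeRootSelectionSet(op: AnalyzedOperation) {
  // Deferred work may start before the initial result completes; results
  // are delivered through the incremental publisher as they settle.
  const deferred = op.executeDeferred.map((run) => run());
  const data = await op.executeInitial();
  void Promise.allSettled(deferred);
  return { data }; // BuildResponse() attaches pending/hasNext as needed
}
```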
ExecuteRootSelectionSet(variableValues, initialValue, objectType,
selectionSet, serial):

- Let {fieldsByTarget}, {targetsByKey}, and {newDeferUsages} be the result of
  calling {AnalyzeSelectionSet(objectType, selectionSet, variableValues)}.
- Let {groupedFieldSet} and {newGroupedFieldSetDetails} be the result of
  calling {BuildGroupedFieldSets(fieldsByTarget, targetsByKey)}.
- Let {publisherRecord} be the result of {CreatePublisher()}.
- Let {initialResultRecord} be the result of {PrepareInitialResultRecord()}.
- Let {newDeferMap} be the result of {AddNewDeferFragments(publisherRecord,
  newDeferUsages, initialResultRecord)}.
- Let {newDeferredGroupedFieldSetRecords} be the result of
  {AddNewDeferredGroupedFieldSets(publisherRecord, newGroupedFieldSetDetails,
  newDeferMap)}.
- Let {data} be the result of running {ExecuteGroupedFieldSet(groupedFieldSet,
  objectType, initialValue, variableValues, publisherRecord,
  initialResultRecord)} _serially_ if {serial} is {true}, _normally_ (allowing
  parallelization) otherwise.
- In parallel, call {ExecuteDeferredGroupedFieldSets(objectType, initialValue,
  variableValues, publisherRecord, newDeferredGroupedFieldSetRecords,
  newDeferMap)}.
- Let {errors} be the list of all _field error_ raised while executing the
  {groupedFieldSet}.
- Return the result of {BuildResponse(publisherRecord, initialResultRecord,
  data, errors)}.

AddNewDeferFragments(publisherRecord, newDeferUsages, incrementalDataRecord,
deferMap, path):

- If {newDeferUsages} is empty:
  - Let {newDeferMap} be {deferMap}.
- Otherwise:
  - Let {newDeferMap} be a new empty unordered map of Defer Usage records to
    Deferred Fragment records.
  - For each {deferUsage} and {deferredFragmentRecord} in {deferMap}:
    - Set the entry for {deferUsage} in {newDeferMap} to
      {deferredFragmentRecord}.
- For each {deferUsage} in {newDeferUsages}:
  - Let {label} be the corresponding entry on {deferUsage}.
  - Let {parent} be the result of {GetParentTarget(deferUsage, deferMap,
    incrementalDataRecord)}.
  - Let {deferredFragmentRecord} be the result of
    {PrepareNewDeferredFragmentRecord(path, label, parent)}.
  - Set the entry for {deferUsage} in {newDeferMap} to
    {deferredFragmentRecord}.
- Return {newDeferMap}.

GetParentTarget(deferUsage, deferMap, incrementalDataRecord):

- Let {ancestors} be the corresponding entry on {deferUsage}.
- Let {parentDeferUsage} be the first member of {ancestors}.
- If {parentDeferUsage} is not defined, return {incrementalDataRecord}.
- Let {parentRecord} be the corresponding entry in {deferMap} for
  {parentDeferUsage}.
- Return {parentRecord}.

AddNewDeferredGroupedFieldSets(publisherRecord, newGroupedFieldSetDetails,
deferMap, path):

- Initialize {newDeferredGroupedFieldSetRecords} to an empty list.
- For each {deferUsageSet} and {groupedFieldSetDetails} in
  {newGroupedFieldSetDetails}:
  - Let {groupedFieldSet} and {shouldInitiateDefer} be the corresponding
    entries on {groupedFieldSetDetails}.
  - Let {deferredFragmentRecords} be the result of
    {GetDeferredFragmentRecords(deferUsageSet, deferMap)}.
  - Let {deferredGroupedFieldSetRecord} be the result of
    {PrepareNewDeferredGroupedFieldSetRecord(path, deferredFragmentRecords,
    groupedFieldSet, shouldInitiateDefer)}.
  - Append {deferredGroupedFieldSetRecord} to
    {newDeferredGroupedFieldSetRecords}.
- Return {newDeferredGroupedFieldSetRecords}.

GetDeferredFragmentRecords(deferUsageSet, deferMap):

- Let {deferredFragmentRecords} be an empty list of Deferred Fragment records.
- For each {deferUsage} in {deferUsageSet}:
  - Let {deferredFragmentRecord} be the entry for {deferUsage} in {deferMap}.
  - Append {deferredFragmentRecord} to {deferredFragmentRecords}.
- Return {deferredFragmentRecords}.
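A non-normative sketch of the defer-map construction above; record shapes are
illustrative and `makeFragment` stands in for
{PrepareNewDeferredFragmentRecord()}:

```ts
// Sketch: map each new defer usage to a Deferred Fragment record, parenting
// it on the enclosing deferred fragment or on the given data record.
interface DeferUsage {
  label: string | undefined;
  ancestors: Array<DeferUsage | undefined>; // first entry is the direct parent
}

function addNewDeferFragments<R>(
  newDeferUsages: DeferUsage[],
  deferMap: Map<DeferUsage, R>,
  incrementalDataRecord: R,
  makeFragment: (label: string | undefined, parent: R) => R,
): Map<DeferUsage, R> {
  const newDeferMap = new Map(deferMap);
  for (const deferUsage of newDeferUsages) {
    const parentUsage = deferUsage.ancestors[0];
    // GetParentTarget(): fall back to the initial/erroring data record.
    const parent =
      parentUsage === undefined
        ? incrementalDataRecord
        : newDeferMap.get(parentUsage)!;
    newDeferMap.set(deferUsage, makeFragment(deferUsage.label, parent));
  }
  return newDeferMap;
}
```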
## Executing a Grouped Field Set

To execute a grouped field set, the object value being evaluated and the
object type need to be known, as well as whether it must be executed serially,
or may be executed in parallel.

ExecuteGroupedFieldSet(groupedFieldSet, objectType, objectValue,
variableValues, path, deferMap, incrementalDataRecord, publisherRecord):

- If {path} is not provided, initialize it to an empty list.
- Initialize {resultMap} to an empty ordered map.
- For each {groupedFieldSet} as {responseKey} and {fieldGroup}:
  - Let {fieldDetails} be the first entry in {fieldGroup}.
  - Let {node} be the corresponding entry on {fieldDetails}.
  - Let {fieldName} be the name of {node}. Note: This value is unaffected if
    an alias is used.
  - Let {fieldType} be the return type defined for the field {fieldName} of
    {objectType}.
  - If {fieldType} is defined:
    - Let {responseValue} be {ExecuteField(objectType, objectValue,
      fieldType, fieldGroup, variableValues, path, publisherRecord,
      incrementalDataRecord)}.
    - Set {responseValue} as the value for {responseKey} in {resultMap}.
- Return {resultMap}.

Note: {resultMap} is ordered by which fields appear first in the operation.
This is explained in greater detail in the Selection Set Analysis section
below.

**Errors and Non-Null Fields**

If during {ExecuteGroupedFieldSet()} a field with a non-null {fieldType}
raises a _field error_ then that error must propagate to this entire grouped
field set, either resolving to {null} if allowed or further propagated to a
parent field.

If this occurs, any sibling fields which have not yet executed or have not
yet yielded a value may be cancelled to avoid unnecessary work.

Additionally, unpublished Subsequent Result records must be filtered if their
path points to a location that has resolved to {null} due to propagation of a
field error. This is described in
[Filter Subsequent Results](#sec-Filter-Subsequent-Results). These subsequent
results must not be sent to clients. If these subsequent results have not yet
executed or have not yet yielded a value they may also be cancelled to avoid
unnecessary work.

Note: See [Handling Field Errors](#sec-Handling-Field-Errors) for more about
this behavior.

@@ -459,15 +1104,21 @@ A correct executor must generate the following result for that selection set:

}
```

When subsections contain a `@stream` or `@defer` directive, these subsections
are no longer required to execute serially. Execution of the deferred or
streamed sections of the subsection may be executed in parallel, as defined
in {ExecuteDeferredGroupedFieldSets} and {ExecuteStreamField}.

### Selection Set Analysis

Before execution, the selection set is converted to a grouped field set by
calling {AnalyzeSelectionSet()} and {BuildGroupedFieldSets()}. Each entry in
the grouped field set is a Field Group record describing all fields that
share a response key (the alias if defined, otherwise the field name). This
ensures all fields with the same response key (including those in referenced
fragments) are executed at the same time.
As an example, analysis of the fields of this selection set would return two
instances of the field `a` and one of field `b`:

```graphql example
{
  a {
    subfield1
  }
  ...ExampleFragment
}

fragment ExampleFragment on Query {
  a {
    subfield2
  }
  b
}
```

@@ -486,14 +1137,65 @@

The depth-first-search order of the field groups produced by selection set
processing is maintained through execution, ensuring that fields appear in
the executed response in a stable and predictable order.

{AnalyzeSelectionSet()} also returns a list of references to any new deferred
fragments encountered in the selection set. {BuildGroupedFieldSets()} also
potentially returns additional deferred grouped field sets related to new or
previously encountered deferred fragments. Additional grouped field sets are
constructed carefully so as to ensure that each field is executed exactly
once and so that fields are grouped according to the set of deferred
fragments that include them.

Information derived from the presence of a `@defer` directive on a fragment
is returned as a Defer Usage record, unique for each label, a structure
containing:

- {label}: the value of the corresponding argument to the `@defer` directive.
- {ancestors}: a list, where the first entry is the parent Defer Usage record
  corresponding to the deferred fragment enclosing this deferred fragment and
  the remaining entries are the values included within the {ancestors} entry
  of that parent Defer Usage record, or, if this Defer Usage record is
  deferred directly by the initial result, a list containing the single value
  {undefined}.

A Field Group record is a structure containing:

- {fields}: a list of Field Details records for each encountered field.
- {targets}: the set of Defer Usage records corresponding to the deferred
  fragments enclosing this field, as well as possibly the value {undefined}
  if the field is included within the initial response.

A Field Details record is a structure containing:

- {node}: the field node itself.
- {target}: the Defer Usage record corresponding to the deferred fragment
  enclosing this field, or the value {undefined} if the field was not
  deferred.

Additional deferred grouped field sets are returned as Grouped Field Set
Details records, which are structures containing:

- {groupedFieldSet}: the grouped field set itself.
- {shouldInitiateDefer}: a boolean value indicating whether the executor
  should defer execution of {groupedFieldSet}.

Deferred grouped field sets do not always require initiating deferral. For
example, when a parent field is deferred by multiple fragments, deferral is
initiated on the parent field. New grouped field sets for child fields will
be created if the child fields are not all present in all of the deferred
fragments, but these new grouped field sets, while representing deferred
fields, do not require additional deferral.
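A non-normative sketch of these analysis records as TypeScript interfaces;
names and field types are illustrative only:

```ts
// Illustrative shapes for the selection set analysis records defined above.
interface DeferUsage {
  label: string | undefined;
  // First entry is the direct parent; undefined marks the initial result.
  ancestors: Array<DeferUsage | undefined>;
}

interface FieldDetails {
  node: unknown; // the field's AST node
  target: DeferUsage | undefined; // undefined: not deferred
}

interface FieldGroup {
  fields: FieldDetails[];
  targets: Set<DeferUsage | undefined>;
}

interface GroupedFieldSetDetails {
  groupedFieldSet: Map<string, FieldGroup>; // keyed by response key
  shouldInitiateDefer: boolean;
}
```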
For +example, when a parent field is deferred by multiple fragments, deferral is +initiated on the parent field. New grouped field sets for child fields will be +created if the child fields are not all present in all of the deferred +fragments, but these new grouped field sets, while representing deferred fields, +do not require additional deferral. + +Similar algorithms govern root and sub-selection set processing: + +AnalyzeSelectionSet(objectType, selectionSet, variableValues, visitedFragments, +parentTarget, newTarget): + +- If {visitedFragments} is not defined, initialize it to the empty set. +- Initialize {targetsByKey} to an empty unordered map of sets. +- Initialize {fieldsByTarget} to an empty unordered map of ordered maps. +- Initialize {newDeferUsages} to an empty list of Defer Usage records. - For each {selection} in {selectionSet}: - If {selection} provides the directive `@skip`, let {skipDirective} be that directive. @@ -508,14 +1210,28 @@ CollectFields(objectType, selectionSet, variableValues, visitedFragments): - If {selection} is a {Field}: - Let {responseKey} be the response key of {selection} (the alias if defined, otherwise the field name). - - Let {groupForResponseKey} be the list in {groupedFields} for + - Let {target} be {newTarget} if {newTarget} is defined; otherwise, let + {target} be {parentTarget}. + - Let {targetsForKey} be the list in {targetsByKey} for {responseKey}; if no + such list exists, create it as an empty set. + - Add {target} to {targetsForKey}. + - Let {fieldsForTarget} be the map in {fieldsByTarget} for {responseKey}; if + no such map exists, create it as an unordered map. + - Let {groupForResponseKey} be the list in {fieldsForTarget} for {responseKey}; if no such list exists, create it as an empty list. - Append {selection} to the {groupForResponseKey}. - If {selection} is a {FragmentSpread}: - Let {fragmentSpreadName} be the name of {selection}. - - If {fragmentSpreadName} is in {visitedFragments}, continue with the next - {selection} in {selectionSet}. - - Add {fragmentSpreadName} to {visitedFragments}. + - If {fragmentSpreadName} provides the directive `@defer` and its {if} + argument is not {false} and is not a variable in {variableValues} with the + value {false}: + - Let {deferDirective} be that directive. + - If this execution is for a subscription operation, raise a _field + error_. + - If {deferDirective} is not defined: + - If {fragmentSpreadName} is in {visitedFragments}, continue with the next + {selection} in {selectionSet}. + - Add {fragmentSpreadName} to {visitedFragments}. - Let {fragment} be the Fragment in the current Document whose name is {fragmentSpreadName}. - If no such {fragment} exists, continue with the next {selection} in @@ -524,31 +1240,80 @@ CollectFields(objectType, selectionSet, variableValues, visitedFragments): - If {DoesFragmentTypeApply(objectType, fragmentType)} is false, continue with the next {selection} in {selectionSet}. - Let {fragmentSelectionSet} be the top-level selection set of {fragment}. - - Let {fragmentGroupedFieldSet} be the result of calling - {CollectFields(objectType, fragmentSelectionSet, variableValues, - visitedFragments)}. - - For each {fragmentGroup} in {fragmentGroupedFieldSet}: - - Let {responseKey} be the response key shared by all fields in - {fragmentGroup}. - - Let {groupForResponseKey} be the list in {groupedFields} for - {responseKey}; if no such list exists, create it as an empty list. - - Append all items in {fragmentGroup} to {groupForResponseKey}. 
    - If {deferDirective} is defined:
      - Let {label} be the value of {deferDirective}'s {label} argument, or
        the variable provided to it.
      - Let {ancestors} be an empty list.
      - Append {parentTarget} to {ancestors}.
      - If {parentTarget} is defined:
        - Let {parentAncestors} be the {ancestors} entry on {parentTarget}.
        - Append all items in {parentAncestors} to {ancestors}.
      - Let {target} be a new Defer Usage record created from {label} and
        {ancestors}.
      - Append {target} to {newDeferUsages}.
    - Otherwise:
      - Let {target} be {newTarget}.
    - Let {fragmentTargetsByKey}, {fragmentFieldsByTarget}, and
      {fragmentNewDeferUsages} be the result of calling
      {AnalyzeSelectionSet(objectType, fragmentSelectionSet, variableValues,
      visitedFragments, parentTarget, target)}.
    - For each {target} and {fragmentMap} in {fragmentFieldsByTarget}:
      - Let {mapForTarget} be the ordered map in {fieldsByTarget} for
        {target}; if no such map exists, create it as an empty ordered map.
      - For each {responseKey} and {fragmentList} in {fragmentMap}:
        - Let {listForResponseKey} be the list in {mapForTarget} for
          {responseKey}; if no such list exists, create it as an empty list.
        - Append all items in {fragmentList} to {listForResponseKey}.
    - For each {responseKey} and {targetSet} in {fragmentTargetsByKey}:
      - Let {setForResponseKey} be the set in {targetsByKey} for
        {responseKey}; if no such set exists, create it as the empty set.
      - Add all items in {targetSet} to {setForResponseKey}.
    - Append all items in {fragmentNewDeferUsages} to {newDeferUsages}.
  - If {selection} is an {InlineFragment}:
    - Let {fragmentType} be the type condition on {selection}.
    - If {fragmentType} is not {null} and {DoesFragmentTypeApply(objectType,
      fragmentType)} is false, continue with the next {selection} in
      {selectionSet}.
    - Let {fragmentSelectionSet} be the top-level selection set of
      {selection}.
    - If {selection} provides the directive `@defer` and its {if} argument is
      not {false} and is not a variable in {variableValues} with the value
      {false}:
      - Let {deferDirective} be that directive.
      - If this execution is for a subscription operation, raise a _field
        error_.
    - If {deferDirective} is defined:
      - Let {label} be the value of {deferDirective}'s {label} argument, or
        the variable provided to it.
      - Let {ancestors} be an empty list.
      - Append {parentTarget} to {ancestors}.
      - If {parentTarget} is defined:
        - Let {parentAncestors} be the {ancestors} entry on {parentTarget}.
        - Append all items in {parentAncestors} to {ancestors}.
      - Let {target} be a new Defer Usage record created from {label} and
        {ancestors}.
      - Append {target} to {newDeferUsages}.
    - Otherwise:
      - Let {target} be {newTarget}.
    - Let {fragmentTargetsByKey}, {fragmentFieldsByTarget}, and
      {fragmentNewDeferUsages} be the result of calling
      {AnalyzeSelectionSet(objectType, fragmentSelectionSet, variableValues,
      visitedFragments, parentTarget, target)}.
+    - For each {target} and {fragmentMap} in {fragmentFieldsByTarget}:
+      - Let {mapForTarget} be the ordered map in {fieldsByTarget} for {target};
+        if no such map exists, create it as an empty ordered map.
+      - For each {responseKey} and {fragmentList} in {fragmentMap}:
+        - Let {listForResponseKey} be the list in {mapForTarget} for
+          {responseKey}; if no such list exists, create it as an empty list.
+        - Append all items in {fragmentList} to {listForResponseKey}.
+    - For each {responseKey} and {targetSet} in {fragmentTargetsByKey}:
+      - Let {setForResponseKey} be the set in {targetsByKey} for {responseKey};
+        if no such set exists, create it as the empty set.
+      - Add all items in {targetSet} to {setForResponseKey}.
+    - Append all items in {fragmentNewDeferUsages} to {newDeferUsages}.
+- Return {fieldsByTarget}, {targetsByKey}, and {newDeferUsages}.
+
+Note: The steps in {AnalyzeSelectionSet()} evaluating the `@skip` and
+`@include` directives may be applied in either order since they apply
+commutatively.

DoesFragmentTypeApply(objectType, fragmentType):

@@ -562,8 +1327,138 @@ DoesFragmentTypeApply(objectType, fragmentType):
- if {objectType} is a possible type of {fragmentType}, return {true}
  otherwise return {false}.

-Note: The steps in {CollectFields()} evaluating the `@skip` and `@include`
-directives may be applied in either order since they apply commutatively.
+BuildGroupedFieldSets(fieldsByTarget, targetsByKey, parentTargets):
+
+- If {parentTargets} is not provided, initialize it to a set containing the
+  value {undefined}.
+- Let {keysWithParentTargets} and {targetSetDetailsMap} be the result of
+  {GetTargetSetDetails(targetsByKey, parentTargets)}.
+- Initialize {remainingFieldsByTarget} to an empty unordered map of ordered
+  maps.
+- For each {target} and {fieldsForTarget} in {fieldsByTarget}:
+  - Initialize {remainingFieldsForTarget} to an empty ordered map.
+  - For each {responseKey} and {fieldList} in {fieldsForTarget}:
+    - Set {responseKey} on {remainingFieldsForTarget} to {fieldList}.
+  - Set {target} on {remainingFieldsByTarget} to {remainingFieldsForTarget}.
+- Initialize {groupedFieldSet} to an empty ordered map.
+- If {keysWithParentTargets} is not empty:
+  - Let {firstTarget} be the first entry in {parentTargets}.
+  - Let {firstFields} be the entry for {firstTarget} in
+    {remainingFieldsByTarget}.
+  - For each {responseKey} in {firstFields}:
+    - If {keysWithParentTargets} does not contain {responseKey}, continue to
+      the next member of {firstFields}.
+    - Let {fieldGroup} be the Field Group record in {groupedFieldSet} for
+      {responseKey}; if no such record exists, create a new such record from
+      the empty list {fields} and the set of {parentTargets}.
+    - Let {targets} be the entry in {targetsByKey} for {responseKey}.
+    - For each {target} in {targets}:
+      - Let {remainingFieldsForTarget} be the entry in {remainingFieldsByTarget}
+        for {target}.
+      - Let {nodes} be the list in {remainingFieldsForTarget} for {responseKey}.
+      - Remove the entry for {responseKey} from {remainingFieldsForTarget}.
+      - For each {node} of {nodes}:
+        - Let {fieldDetails} be a new Field Details record created from {node}
+          and {target}.
+        - Append {fieldDetails} to the {fields} entry on {fieldGroup}.
+- Initialize {newGroupedFieldSetDetails} to an empty unordered map.
+- For each {maskingTargets} and {targetSetDetails} in {targetSetDetailsMap}:
+  - Initialize {newGroupedFieldSet} to an empty ordered map.
+  - Let {keys} be the corresponding entry on {targetSetDetails}.
+  - Let {firstTarget} be the first entry in {maskingTargets}.
+  - Let {firstFields} be the entry for {firstTarget} in
+    {remainingFieldsByTarget}.
+  - For each {responseKey} in {firstFields}:
+    - If {keys} does not contain {responseKey}, continue to the next member of
+      {firstFields}.
+    - Let {fieldGroup} be the Field Group record in {newGroupedFieldSet} for
+      {responseKey}; if no such record exists, create a new such record from
+      the empty list {fields} and the set of {maskingTargets}.
+    - Let {targets} be the entry in {targetsByKey} for {responseKey}.
+    - For each {target} in {targets}:
+      - Let {remainingFieldsForTarget} be the entry in {remainingFieldsByTarget}
+        for {target}.
+      - Let {nodes} be the list in {remainingFieldsForTarget} for {responseKey}.
+      - Remove the entry for {responseKey} from {remainingFieldsForTarget}.
+      - For each {node} of {nodes}:
+        - Let {fieldDetails} be a new Field Details record created from {node}
+          and {target}.
+        - Append {fieldDetails} to the {fields} entry on {fieldGroup}.
+  - Let {shouldInitiateDefer} be the corresponding entry on {targetSetDetails}.
+  - Let {details} be a new Grouped Field Set Details record created from
+    {newGroupedFieldSet} and {shouldInitiateDefer}.
+  - Set {maskingTargets} on {newGroupedFieldSetDetails} to {details}.
+- Return {groupedFieldSet} and {newGroupedFieldSetDetails}.
+
+GetTargetSetDetails(targetsByKey, parentTargets):
+
+- Initialize {keysWithParentTargets} to the empty set.
+- Initialize {targetSetDetailsMap} to an empty unordered map.
+- For each {responseKey} and {targets} in {targetsByKey}:
+  - Initialize {maskingTargets} to an empty set.
+  - For each {target} in {targets}:
+    - If {target} is not defined:
+      - Add {target} to {maskingTargets}.
+      - Continue to the next entry in {targets}.
+    - Let {ancestors} be the corresponding entry on {target}.
+    - For each {ancestor} of {ancestors}:
+      - If {targets} contains {ancestor}, continue to the next member of
+        {targets}.
+    - Add {target} to {maskingTargets}.
+  - If {IsSameSet(maskingTargets, parentTargets)} is {true}:
+    - Add {responseKey} to {keysWithParentTargets}.
+    - Continue to the next entry in {targetsByKey}.
+  - For each {key} in {targetSetDetailsMap}:
+    - If {IsSameSet(maskingTargets, key)} is {true}, let {targetSetDetails} be
+      the map in {targetSetDetailsMap} for {key}.
+  - If {targetSetDetails} is defined:
+    - Let {keys} be the corresponding entry on {targetSetDetails}.
+    - Add {responseKey} to {keys}.
+  - Otherwise:
+    - Initialize {keys} to the empty set.
+    - Add {responseKey} to {keys}.
+    - Let {shouldInitiateDefer} be {false}.
+    - For each {target} in {maskingTargets}:
+      - If {parentTargets} does not contain {target}:
+        - Set {shouldInitiateDefer} equal to {true}.
+    - Create {newTargetSetDetails} as a map containing {keys} and
+      {shouldInitiateDefer}.
+    - Set {newTargetSetDetails} as the entry within {targetSetDetailsMap} for
+      {maskingTargets}.
+- Return {keysWithParentTargets} and {targetSetDetailsMap}.
+
+IsSameSet(setA, setB):
+
+- If the size of {setA} is not equal to the size of {setB}:
+  - Return {false}.
+- For each {item} in {setA}:
+  - If {setB} does not contain {item}:
+    - Return {false}.
+- Return {true}.
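+
+Note: {BuildGroupedFieldSets()} and {GetTargetSetDetails()} key unordered maps
+by _sets_ of targets, so implementations written in languages whose built-in
+sets and maps compare by identity must supply this value-based set equality
+themselves. The following non-normative TypeScript sketch shows one way to do
+so; `findBySetKey` is an illustrative helper, not part of this specification.
+
+```ts
+// Value-based set equality, mirroring IsSameSet().
+function isSameSet<T>(setA: ReadonlySet<T>, setB: ReadonlySet<T>): boolean {
+  if (setA.size !== setB.size) {
+    return false;
+  }
+  for (const item of setA) {
+    if (!setB.has(item)) {
+      return false;
+    }
+  }
+  return true;
+}
+
+// Illustrative helper: find the entry of a map whose key is an equal set,
+// as in the lookup performed by GetTargetSetDetails().
+function findBySetKey<K, V>(
+  map: ReadonlyMap<ReadonlySet<K>, V>,
+  key: ReadonlySet<K>,
+): V | undefined {
+  for (const [candidate, value] of map.entries()) {
+    if (isSameSet(candidate, key)) {
+      return value;
+    }
+  }
+  return undefined;
+}
+```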
+
+## Executing Deferred Grouped Field Sets
+
+ExecuteDeferredGroupedFieldSets(objectType, objectValue, variableValues,
+publisherRecord, path, newDeferredGroupedFieldSetRecords, deferMap):
+
+- If {path} is not provided, initialize it to an empty list.
+- For each {deferredGroupedFieldSetRecord} of
+  {newDeferredGroupedFieldSetRecords}:
+  - Let {shouldInitiateDefer} and {groupedFieldSet} be the corresponding entries
+    on {deferredGroupedFieldSetRecord}.
+  - If {shouldInitiateDefer} is {true}:
+    - Initiate implementation-specific deferral of further execution, resuming
+      execution of this grouped field set at an implementation-specific time.
+  - Let {data} be the result of calling {ExecuteGroupedFieldSet(groupedFieldSet,
+    objectType, objectValue, variableValues, path, deferMap, publisherRecord,
+    deferredGroupedFieldSetRecord)}.
+  - Let {errors} be the list of all _field error_ raised while executing the
+    {groupedFieldSet}.
+  - If a field error was raised, causing a {null} to be propagated to {data}:
+    - Call {MarkErroredDeferredGroupedFieldSetRecord(publisherRecord,
+      deferredGroupedFieldSetRecord, errors)}.
+  - Otherwise:
+    - Call {CompleteDeferredGroupedFieldSet(publisherRecord,
+      deferredGroupedFieldSetRecord, data, errors)}.

## Executing Fields

@@ -573,16 +1468,19 @@ coerces any provided argument values, then resolves a value for the field, and
finally completes that value either by recursively executing another selection
set or coercing a scalar value.

-ExecuteField(objectType, objectValue, fieldType, fields, variableValues):
+ExecuteField(objectType, objectValue, fieldType, fieldGroup, variableValues,
+path, deferMap, publisherRecord, incrementalDataRecord):

-- Let {field} be the first entry in {fields}.
-- Let {fieldName} be the field name of {field}.
+- Let {fieldDetails} be the first entry in the {fields} entry on {fieldGroup}.
+- Let {node} be the corresponding entry on {fieldDetails}.
+- Let {fieldName} be the field name of {node}.
+- Append {fieldName} to {path}.
- Let {argumentValues} be the result of {CoerceArgumentValues(objectType,
  node, variableValues)}
- Let {resolvedValue} be {ResolveFieldValue(objectType, objectValue, fieldName,
  argumentValues)}.
- Return the result of {CompleteValue(fieldType, fieldGroup, resolvedValue,
-  variableValues)}.
+  variableValues, path, deferMap, publisherRecord, incrementalDataRecord)}.

### Coercing Field Arguments

@@ -651,6 +1549,12 @@ As an example, this might accept the {objectType} `Person`, the {field}
{"soulMate"}, and the {objectValue} representing John Lennon. It would be
expected to yield the value representing Yoko Ono.

+List values are resolved similarly. For example, {ResolveFieldValue} might also
+accept the {objectType} `MusicBand`, the {field} {"members"}, and the
+{objectValue} representing the Beatles. It would be expected to yield a
+collection of values representing John Lennon, Paul McCartney, Ringo Starr, and
+George Harrison.
+
ResolveFieldValue(objectType, objectValue, fieldName, argumentValues):

- Let {resolver} be the internal function provided by {objectType} for
@@ -661,30 +1565,122 @@ ResolveFieldValue(objectType, objectValue, fieldName, argumentValues):
Note: It is common for {resolver} to be asynchronous due to relying on reading
an underlying database or networked service to produce a value. This
necessitates the rest of a GraphQL executor to handle an asynchronous execution
-flow.
+flow. In addition, an implementation for collections may leverage asynchronous
+iterators or asynchronous generators provided by many programming languages.
+This may be particularly helpful when used in conjunction with the `@stream`
+directive.
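+
+For example, a resolver might expose a list as an asynchronous generator so
+that later list items can be produced after earlier ones have already been
+delivered. A non-normative TypeScript sketch follows, in which `Film` and
+`fetchFilm` are illustrative stand-ins for an application's own types and data
+access:
+
+```ts
+interface Film {
+  title: string;
+}
+
+// Stand-in for a database or network call.
+async function fetchFilm(id: number): Promise<Film> {
+  return { title: `Film #${id}` };
+}
+
+// A list resolver returning an async generator pairs naturally with
+// `@stream`: the executor may consume the first `initialCount` items eagerly
+// and the remainder incrementally.
+async function* resolveFilms(): AsyncGenerator<Film> {
+  for (const id of [1, 2, 3]) {
+    yield await fetchFilm(id);
+  }
+}
+```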

### Value Completion

After resolving the value for a field, it is completed by ensuring it adheres
to the expected return type. If the return type is another Object type, then the
-field execution process continues recursively.
+field execution process continues recursively. If the return type is a List
+type, each member of the resolved collection is completed using the same value
+completion process. In the case where `@stream` is specified on a field of list
+type, value completion iterates over the collection until the number of items
+yielded satisfies the `initialCount` argument specified on the `@stream`
+directive.
+
+#### Execute Stream Field
+
+ExecuteStreamField(streamRecord, index, innerType, variableValues,
+publisherRecord, parentIncrementalDataRecord):
+
+- Let {path} and {iterator} be the corresponding entries on {streamRecord}.
+- Let {itemPath} be {path} with {index} appended.
+- Let {streamItemsRecord} be the result of
+  {PrepareNewStreamItemsRecord(streamRecord, itemPath,
+  parentIncrementalDataRecord)}.
+- Wait for the next item from {iterator}.
+- Let {errors} be the corresponding entry on {streamRecord}.
+- If {errors} is not empty, a {null} bubbling from a different item has caused
+  the entire stream to error:
+  - Return, avoiding unnecessary work for items that will not be published.
+- If an item is not retrieved because {iterator} has completed:
+  - Call {CompleteFinalEmptyStreamItemsRecord(streamItemsRecord)}.
+  - Return.
+- Or, if an item is not retrieved because {iterator} raised an error:
+  - Let {error} be that error.
+  - Let {errors} be an empty list.
+  - Append {error} to {errors}.
+  - Call {FilterSubsequentResults(publisherRecord, path, streamItemsRecord)}.
+  - Call {MarkErroredStreamItemsRecord(publisherRecord, streamItemsRecord,
+    errors)}.
+  - Optionally, notify the {iterator} that no additional items will be
+    requested.
+  - Return.
+- Otherwise:
+  - Let {item} be the item retrieved from {iterator}.
+  - Let {streamedFieldGroup} be the corresponding entry on {streamRecord}.
+  - Let {newDeferMap} be an empty unordered map.
+  - Let {errors} be the corresponding entry on {streamRecord}.
+  - Let {data} be the result of calling {CompleteValue(innerType,
+    streamedFieldGroup, item, variableValues, itemPath, newDeferMap,
+    publisherRecord, streamItemsRecord)}.
+  - Append any encountered field errors to {errors}.
+  - Increment {index}.
+  - Call {ExecuteStreamField(streamRecord, index, innerType, variableValues,
+    publisherRecord, streamItemsRecord)}.
+  - If a field error was raised, causing a {null} to be propagated to {data} and
+    {innerType} is a Non-Nullable type:
+    - Call {FilterSubsequentResults(publisherRecord, path, streamItemsRecord)}.
+    - Call {MarkErroredStreamItemsRecord(publisherRecord, streamItemsRecord,
+      errors)}.
+    - Optionally, notify the {iterator} that no additional items will be
+      requested.
+    - Return.
+  - Otherwise:
+    - Let {items} be the corresponding entry on {streamItemsRecord}.
+    - Append {data} to {items}.
+    - Call {CompleteStreamItemsRecord(publisherRecord, streamItemsRecord, items,
+      errors)}.

-CompleteValue(fieldType, fields, result, variableValues):
+CompleteValue(fieldType, fieldGroup, result, variableValues, path, deferMap,
+publisherRecord, incrementalDataRecord):

- If the {fieldType} is a Non-Null type:
  - Let {innerType} be the inner type of {fieldType}.
  - Let {completedResult} be the result of calling {CompleteValue(innerType,
-    fields, result, variableValues)}.
+    fieldGroup, result, variableValues, path, deferMap, publisherRecord,
+    incrementalDataRecord)}.
  - If {completedResult} is {null}, raise a _field error_.
  - Return {completedResult}.
- If {result} is {null} (or another internal value similar to {null} such as
  {undefined}), return {null}.
- If {fieldType} is a List type:
  - If {result} is not a collection of values, raise a _field error_.
+  - Let {fieldDetails} be the first entry in the {fields} entry on
+    {fieldGroup}.
+  - Let {field} be the {node} entry on {fieldDetails}.
  - Let {innerType} be the inner type of {fieldType}.
-  - Return a list where each list item is the result of calling
-    {CompleteValue(innerType, fields, resultItem, variableValues)}, where
-    {resultItem} is each item in {result}.
+  - If {field} provides the directive `@stream` and its {if} argument is not
+    {false} and is not a variable in {variableValues} with the value {false} and
+    {innerType} is the outermost return type of the list type defined for
+    {field}:
+    - Let {streamDirective} be that directive.
+    - If this execution is for a subscription operation, raise a _field error_.
+    - Let {initialCount} be the value or variable provided to
+      {streamDirective}'s {initialCount} argument.
+    - If {initialCount} is less than zero, raise a _field error_.
+    - Let {label} be the value or variable provided to {streamDirective}'s
+      {label} argument.
+  - Let {iterator} be an iterator for {result}.
+  - Let {items} be an empty list.
+  - Let {index} be zero.
+  - While {result} is not closed:
+    - If {streamDirective} is defined and {index} is greater than or equal to
+      {initialCount}:
+      - Let {streamRecord} be the result of {PrepareNewStreamRecord(fieldGroup,
+        label, path, iterator)}.
+      - Call {ExecuteStreamField(streamRecord, index, innerType, variableValues,
+        publisherRecord, incrementalDataRecord)}.
+      - Return {items}.
+    - Otherwise:
+      - Wait for the next item from {result} via the {iterator}.
+      - If an item is not retrieved because of an error, raise a _field error_.
+      - Let {resultItem} be the item retrieved from {result}.
+      - Let {itemPath} be {path} with {index} appended.
+      - Let {resolvedItem} be the result of calling {CompleteValue(innerType,
+        fieldGroup, resultItem, variableValues, itemPath, deferMap,
+        publisherRecord, incrementalDataRecord)}.
+      - Append {resolvedItem} to {items}.
+      - Increment {index}.
+  - Return {items}.
- If {fieldType} is a Scalar or Enum type:
  - Return the result of {CoerceResult(fieldType, result)}.
- If {fieldType} is an Object, Interface, or Union type:
@@ -692,10 +1688,52 @@ CompleteValue(fieldType, fields, result, variableValues):
  - If {fieldType} is an Object type.
    - Let {objectType} be {fieldType}.
  - Otherwise if {fieldType} is an Interface or Union type.
    - Let {objectType} be {ResolveAbstractType(fieldType, result)}.
-  - Let {subSelectionSet} be the result of calling {MergeSelectionSets(fields)}.
-  - Return the result of evaluating {ExecuteSelectionSet(subSelectionSet,
-    objectType, result, variableValues)} _normally_ (allowing for
-    parallelization).
+  - Let {groupedFieldSet}, {newGroupedFieldSetDetails}, and {newDeferUsages} be
+    the result of {ProcessSubSelectionSets(objectType, fieldGroup,
+    variableValues)}.
+  - Let {newDeferMap} be the result of {AddNewDeferFragments(publisherRecord,
+    newDeferUsages, incrementalDataRecord, deferMap, path)}.
+  - Let {newDeferredGroupedFieldSetRecords} be the result of
+    {AddNewDeferredGroupedFieldSets(publisherRecord, newGroupedFieldSetDetails,
+    newDeferMap, path)}.
+  - Let {completed} be the result of evaluating
+    {ExecuteGroupedFieldSet(groupedFieldSet, objectType, result, variableValues,
+    path, newDeferMap, publisherRecord, incrementalDataRecord)} _normally_
+    (allowing for parallelization).
+  - In parallel, call {ExecuteDeferredGroupedFieldSets(objectType, result,
+    variableValues, publisherRecord, path, newDeferredGroupedFieldSetRecords,
+    newDeferMap)}.
+  - Return {completed}.
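+
+The list branch above can be summarized as: complete items inline until
+`initialCount` items have been yielded, then hand the iterator off for
+incremental delivery. The following non-normative TypeScript sketch captures
+this flow; `completeItem` and `executeStreamField` stand in for the
+{CompleteValue()} and {ExecuteStreamField()} algorithms:
+
+```ts
+// Non-normative sketch of list completion under `@stream`. When `@stream` is
+// absent, callers may pass Infinity as initialCount so that every item is
+// completed inline.
+async function completeListValue<T, R>(
+  iterator: AsyncIterator<T>,
+  initialCount: number,
+  completeItem: (item: T, index: number) => Promise<R>,
+  executeStreamField: (iterator: AsyncIterator<T>, index: number) => void,
+): Promise<R[]> {
+  const items: R[] = [];
+  let index = 0;
+  while (true) {
+    if (index >= initialCount) {
+      // Remaining items are executed and delivered incrementally.
+      executeStreamField(iterator, index);
+      return items;
+    }
+    const next = await iterator.next();
+    if (next.done) {
+      // The iterator closed before initialCount was reached.
+      return items;
+    }
+    items.push(await completeItem(next.value, index));
+    index += 1;
+  }
+}
+```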
+
+ProcessSubSelectionSets(objectType, fieldGroup, variableValues):
+
+- Initialize {targetsByKey} to an empty unordered map of sets.
+- Initialize {fieldsByTarget} to an empty unordered map of ordered maps.
+- Initialize {newDeferUsages} to an empty list.
+- Let {fields} and {parentTargets} be the corresponding entries on
+  {fieldGroup}.
+- For each {fieldDetails} within {fields}:
+  - Let {node} and {target} be the corresponding entries on {fieldDetails}.
+  - Let {fieldSelectionSet} be the selection set of {node}.
+  - If {fieldSelectionSet} is null or empty, continue to the next field.
+  - Let {subfieldFieldsByTarget}, {subfieldTargetsByKey}, and
+    {subfieldNewDeferUsages} be the result of calling
+    {AnalyzeSelectionSet(objectType, fieldSelectionSet, variableValues,
+    visitedFragments, target)}.
+  - For each {target} and {subfieldMap} in {subfieldFieldsByTarget}:
+    - Let {mapForTarget} be the ordered map in {fieldsByTarget} for {target};
+      if no such map exists, create it as an empty ordered map.
+    - For each {responseKey} and {subfieldList} in {subfieldMap}:
+      - Let {listForResponseKey} be the list in {mapForTarget} for
+        {responseKey}; if no such list exists, create it as an empty list.
+      - Append all items in {subfieldList} to {listForResponseKey}.
+  - For each {responseKey} and {targetSet} in {subfieldTargetsByKey}:
+    - Let {setForResponseKey} be the set in {targetsByKey} for {responseKey};
+      if no such set exists, create it as the empty set.
+    - Add all items in {targetSet} to {setForResponseKey}.
+  - Append all items in {subfieldNewDeferUsages} to {newDeferUsages}.
+- Let {groupedFieldSet} and {newGroupedFieldSetDetails} be the result of calling
+  {BuildGroupedFieldSets(fieldsByTarget, targetsByKey, parentTargets)}.
+- Return {groupedFieldSet}, {newGroupedFieldSetDetails}, and {newDeferUsages}.

**Coercing Results**

@@ -758,17 +1796,9 @@ sub-selections.
}
```

-After resolving the value for `me`, the selection sets are merged together so
-`firstName` and `lastName` can be resolved for one value.
-
-MergeSelectionSets(fields):
-
-- Let {selectionSet} be an empty list.
-- For each {field} in {fields}:
-  - Let {fieldSelectionSet} be the selection set of {field}.
-  - If {fieldSelectionSet} is null or empty, continue to the next field.
-  - Append all selections in {fieldSelectionSet} to {selectionSet}.
-- Return {selectionSet}.
+After resolving the value for `me`, the selection sets are merged together by
+calling {ProcessSubSelectionSets()} so `firstName` and `lastName` can be
+resolved for one value.

### Handling Field Errors

@@ -803,6 +1833,160 @@ resolves to {null}, then the entire list must resolve to
{null}. If the `List` type is also wrapped in a `Non-Null`, the field error
continues to propagate upwards.

+When a field error is raised inside `ExecuteDeferredGroupedFieldSets` or
+`ExecuteStreamField`, the defer and stream payloads act as error boundaries.
+That is, the null resulting from a `Non-Null` type cannot propagate outside of
+the boundary of the defer or stream payload.
+
+If a field error is raised while executing the selection set of a fragment with
+the `@defer` directive, causing a {null} to propagate to the object containing
+this fragment, the {null} should not be sent to the client, as this will
+overwrite existing data.
In this case, the associated Defer Payload's
+`completed` entry must include the causative errors, whose presence indicates
+the failure of the payload to be included within the final reconcilable object.
+
+For example, assume the `month` field is a `Non-Null` type that raises a field
+error:
+
+```graphql example
+{
+  birthday {
+    ... @defer(label: "monthDefer") {
+      month
+    }
+    ... @defer(label: "yearDefer") {
+      year
+    }
+  }
+}
+```
+
+Response 1, the initial response is sent:
+
+```json example
+{
+  "data": { "birthday": {} },
+  "pending": [
+    { "path": ["birthday"], "label": "monthDefer" },
+    { "path": ["birthday"], "label": "yearDefer" }
+  ],
+  "hasNext": true
+}
+```
+
+Response 2, the defer payload for label "monthDefer" is completed with errors.
+Incremental data cannot be sent, as this would overwrite previously sent values.
+
+```json example
+{
+  "completed": [
+    {
+      "path": ["birthday"],
+      "label": "monthDefer",
+      "errors": [...]
+    }
+  ],
+  "hasNext": true
+}
+```
+
+Response 3, the defer payload for label "yearDefer" is sent. The data in this
+payload is unaffected by the previous null error.
+
+```json example
+{
+  "incremental": [
+    {
+      "path": ["birthday"],
+      "data": { "year": "2022" }
+    }
+  ],
+  "completed": [
+    {
+      "path": ["birthday"],
+      "label": "yearDefer"
+    }
+  ],
+  "hasNext": false
+}
+```
+
+If the `@stream` directive is present on a list field with a Non-Nullable inner
+type, and a field error has caused a {null} to propagate to the list item, the
+{null} similarly should not be sent to the client, as this will overwrite
+existing data. In this case, the associated Stream's `completed` entry must
+include the causative errors, whose presence indicates the failure of the
+stream to complete successfully. For example, assume the `films` field is a
+`List` type with a `Non-Null` inner type. In this case, the second list item
+raises a field error:
+
+```graphql example
+{
+  films @stream(initialCount: 1)
+}
+```
+
+Response 1, the initial response is sent:
+
+```json example
+{
+  "data": { "films": ["A New Hope"] },
+  "pending": [{ "path": ["films"] }],
+  "hasNext": true
+}
+```
+
+Response 2, the stream is completed with errors. Incremental data cannot be
+sent, as this would overwrite previously sent values.
+
+```json example
+{
+  "completed": [
+    {
+      "path": ["films"],
+      "errors": [...]
+    }
+  ],
+  "hasNext": false
+}
+```
+
+In this alternative example, assume the `films` field is a `List` type without a
+`Non-Null` inner type. In this case, the second list item also raises a field
+error:
+
+```graphql example
+{
+  films @stream(initialCount: 1)
+}
+```
+
+Response 1, the initial response is sent:
+
+```json example
+{
+  "data": { "films": ["A New Hope"] },
+  "hasNext": true
+}
+```
+
+Response 2, the first stream payload is sent; the stream is not completed. The
+{items} entry has been set to a list containing {null}, as this {null} has only
+propagated as high as the list item.
+
+```json example
+{
+  "incremental": [
+    {
+      "path": ["films", 1],
+      "items": [null],
+      "errors": [...]
+    }
+  ],
+  "hasNext": true
+}
+```

If all fields from the root of the request to the source of the field error
return `Non-Null` types, then the {"data"} entry in the response should be
{null}.
diff --git a/spec/Section 7 -- Response.md b/spec/Section 7 -- Response.md
index 8dcd9234c..0bea49561 100644
--- a/spec/Section 7 -- Response.md
+++ b/spec/Section 7 -- Response.md
@@ -10,7 +10,7 @@ the case that any _field error_ was raised on a field and was replaced with

## Response Format

-A response to a GraphQL request must be a map.
+A response to a GraphQL request must be a map or a response stream of maps.

If the request raised any errors, the response map must contain an entry with
key `errors`. The value of this entry is described in the "Errors" section. If
@@ -22,14 +22,40 @@ key `data`. The value of this entry is described in the "Data" section. If the
request failed before execution, due to a syntax error, missing information, or
validation error, this entry must not be present.

+When the response of the GraphQL operation is a response stream, the first value
+will be the initial response. All subsequent values may contain an `incremental`
+entry, containing a list of Defer or Stream payloads.
+
+The `label` and `path` entries on Defer and Stream payloads are used by clients
+to identify the `@defer` or `@stream` directive from the GraphQL operation that
+triggered this response to be included in an `incremental` entry on a value
+returned by the response stream. When a label is provided, the combination of
+these two entries will be unique across all Defer and Stream payloads returned
+in the response stream.
+
+If the response of the GraphQL operation is a response stream, each response map
+must contain an entry with key `hasNext`. The value of this entry is `true` for
+all but the last response in the stream. The value of this entry is `false` for
+the last response of the stream. This entry must not be present for GraphQL
+operations that return a single response map.
+
+The GraphQL service may determine there are no more values in the response
+stream after a previous value with `hasNext` equal to `true` has been emitted.
+In this case, the last value in the response stream should be a map without
+`data` and `incremental` entries, and a `hasNext` entry with a value of `false`.
+
The response map may also contain an entry with key `extensions`. This entry,
if set, must have a map as its value. This entry is reserved for implementors
to extend the protocol however they see fit, and hence there are no additional
-restrictions on its contents.
+restrictions on its contents. When the response of the GraphQL operation is a
+response stream, implementors may send subsequent response maps containing only
+`hasNext` and `extensions` entries. Defer and Stream payloads may also contain
+an entry with the key `extensions`, likewise reserved for implementors to extend
+the protocol however they see fit.

To ensure future changes to the protocol do not break existing services and
clients, the top level response map must not contain any entries other than the
-three described above.
+entries described in this section.

Note: When `errors` is present in the response, it may be helpful for it to
appear first when serialized to make it more clear when errors are present in a
@@ -107,14 +133,8 @@ syntax element.

If an error can be associated to a particular field in the GraphQL result, it
must contain an entry with the key `path` that details the path of the response
field which experienced the error. This allows clients to identify whether a
-`null` result is intentional or caused by a runtime error.
-
-This field should be a list of path segments starting at the root of the
-response and ending with the field associated with the error. Path segments
-that represent fields should be strings, and path segments that represent list
-indices should be 0-indexed integers. If the error happens in an aliased field,
-the path to the error should use the aliased name, since it represents a path
-in the response, not in the request.
+`null` result is intentional or caused by a runtime error. The value of this
+field is described in the [Path](#sec-Path) section.

For example, if fetching one of the friends' names fails in the following
operation:

@@ -244,6 +264,172 @@ discouraged.
}
```

+### Incremental Delivery
+
+The `pending` entry in the response is a non-empty list of references to pending
+Defer or Stream results. If the response of the GraphQL operation is a response
+stream, this field should appear on the initial and possibly subsequent
+payloads.
+
+The `incremental` entry in the response is a non-empty list of data fulfilling
+Defer or Stream results. If the response of the GraphQL operation is a response
+stream, this field may appear on the subsequent payloads.
+
+The `completed` entry in the response is a non-empty list of references to
+completed Defer or Stream results. If errors were raised during the execution of
+the associated Defer or Stream, the corresponding `completed` entry will contain
+an `errors` entry.
+
+For example, a query containing both defer and stream:
+
+```graphql example
+query {
+  person(id: "cGVvcGxlOjE=") {
+    ...HomeWorldFragment @defer(label: "homeWorldDefer")
+    name
+    films @stream(initialCount: 1, label: "filmsStream") {
+      title
+    }
+  }
+}
+fragment HomeWorldFragment on Person {
+  homeWorld {
+    name
+  }
+}
+```
+
+The response stream might look like:
+
+Response 1, the initial response does not contain any deferred or streamed
+results.
+
+```json example
+{
+  "data": {
+    "person": {
+      "name": "Luke Skywalker",
+      "films": [{ "title": "A New Hope" }]
+    }
+  },
+  "pending": [
+    { "path": ["person"], "label": "homeWorldDefer" },
+    { "path": ["person", "films"], "label": "filmsStream" }
+  ],
+  "hasNext": true
+}
+```
+
+Response 2 contains the defer payload and the first stream payload.
+
+```json example
+{
+  "incremental": [
+    {
+      "path": ["person"],
+      "data": { "homeWorld": { "name": "Tatooine" } }
+    },
+    {
+      "path": ["person", "films"],
+      "items": [{ "title": "The Empire Strikes Back" }]
+    }
+  ],
+  "completed": [{ "path": ["person"], "label": "homeWorldDefer" }],
+  "hasNext": true
+}
+```
+
+Response 3 contains the final stream payload. In this example, the underlying
+iterator does not close synchronously so {hasNext} is set to {true}. If this
+iterator did close synchronously, {hasNext} would be set to {false} and this
+would be the final response.
+
+```json example
+{
+  "incremental": [
+    {
+      "path": ["person", "films"],
+      "items": [{ "title": "Return of the Jedi" }]
+    }
+  ],
+  "hasNext": true
+}
+```
+
+Response 4 contains no incremental payloads. {hasNext} set to {false} indicates
+the end of the response stream. This response is sent when the underlying
+iterator of the `films` field closes.
+
+```json example
+{
+  "completed": [{ "path": ["person", "films"], "label": "filmsStream" }],
+  "hasNext": false
+}
+```
+
+#### Streamed data
+
+Streamed data may appear as an item in the `incremental` entry of a response.
+Streamed data is the result of an associated `@stream` directive in the
+operation. A stream payload must contain `items` and `path` entries and may
+contain `errors` and `extensions` entries.
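+
+The shape of these payloads, together with the `data`, `path`, and `label`
+entries described in the subsections below, can be summarized with the
+following non-normative TypeScript sketch (the field types are illustrative
+approximations, not normative definitions):
+
+```ts
+type PathSegment = string | number;
+
+// Simplified error shape; locations and extensions omitted for brevity.
+interface GraphQLError {
+  message: string;
+  path?: PathSegment[];
+}
+
+// The result of an associated `@stream` directive.
+interface StreamPayload {
+  items: unknown[]; // required; see "Items"
+  path: PathSegment[]; // required; ends in a 0-indexed integer
+  label?: string; // present when the directive was given a label
+  errors?: GraphQLError[];
+  extensions?: Record<string, unknown>;
+}
+
+// The result of an associated `@defer` directive.
+interface DeferPayload {
+  data: Record<string, unknown> | null; // required; see "Data"
+  path: PathSegment[]; // required
+  label?: string;
+  errors?: GraphQLError[];
+  extensions?: Record<string, unknown>;
+}
+```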
+
+##### Items
+
+The `items` entry in a stream payload is a list of results from the execution of
+the associated `@stream` directive. This output will be a list of the same type
+as the field with the associated `@stream` directive. If an error has caused a
+`null` to bubble up to a field higher than the list field with the associated
+`@stream` directive, then the stream will complete with errors.
+
+#### Deferred data
+
+Deferred data is a map that may appear as an item in the `incremental` entry of
+a response. Deferred data is the result of an associated `@defer` directive in
+the operation. A defer payload must contain `data` and `path` entries and may
+contain `errors` and `extensions` entries.
+
+##### Data
+
+The `data` entry in a Defer payload will be of the type of a particular field in
+the GraphQL result. The adjacent `path` field will contain the path segments of
+the field this data is associated with. If an error has caused a `null` to
+bubble up to a field higher than the field that contains the fragment with the
+associated `@defer` directive, then the fragment will complete with errors.
+
+#### Path
+
+A `path` field allows for association with a particular field in a GraphQL
+result. This field should be a list of path segments starting at the root of the
+response and ending with the field to be associated with. Path segments that
+represent fields should be strings, and path segments that represent list
+indices should be 0-indexed integers. If the path is associated with an aliased
+field, the path should use the aliased name, since it represents a path in the
+response, not in the request.
+
+When the `path` field is present on a Stream payload, it indicates that the
+`items` field represents the partial result of the list field containing the
+corresponding `@stream` directive. All but the final path segment must refer to
+the location of the list field containing the corresponding `@stream`
+directive. The final segment of the path list must be a 0-indexed integer. This
+integer indicates that the `items` entry represents a contiguous range of list
+items, beginning at the index given by this integer and with a length equal to
+the length of `items`.
+
+When the `path` field is present on a Defer payload, it indicates that the
+`data` field represents the result of the fragment containing the corresponding
+`@defer` directive. The path segments must point to the location of the result
+that contains the fragment with the associated `@defer` directive.
+
+When the `path` field is present on an error result, it indicates the response
+field which experienced the error.
+
+#### Label
+
+Stream and Defer payloads may contain a string field `label`. This `label` is
+the same label passed to the `@defer` or `@stream` directive associated with the
+response. This allows clients to identify which `@defer` or `@stream` directive
+is associated with this value. `label` will not be present if the corresponding
+`@defer` or `@stream` directive is not passed a `label` argument.
+
## Serialization Format

GraphQL does not require a specific serialization format. However, clients