Clarify execution section. #221


Merged
merged 5 commits into from Oct 24, 2016
Changes from 1 commit
Follow up improvements thanks to @jjergus feedback
leebyron committed Oct 22, 2016
commit 237a9cf9f32a8de8382e6c35dc012947dfbdcecc
24 changes: 23 additions & 1 deletion spec/Section 3 -- Type System.md
@@ -786,7 +786,8 @@ not provided, or for which the value {null} was provided, an error should
be thrown.
Contributor

With the introduction of a null literal, do we need a separate concept of "non-nullable" and "required"? For example, should it be possible to declare an input object field which must be provided, but its value may be null?

```graphql
{ required_field: 42 }   # valid
{ required_field: null } # valid
{ }                      # invalid
{ required_field: $var } # valid if $var is required but $var may be nullable...I guess?
```

Collaborator Author

I'll make sure to specifically address this point in the null literal proposal.

I think it's actually not worth separating these notions. I think there probably isn't a ton of value beyond fairly pedantic specificity in the cases of required but nullable and optional but non-nullable.


The result of coercion is an environment-specific unordered map defining slots
for each field both defined by the input object type and provided by the
original value.

For each field of the input object type, if the original value has an entry with
the same name, and the value at that entry is a literal value or a variable
@@ -797,6 +798,27 @@ The value of that entry in the result is the outcome of input coercing the
original entry value according to the input coercion rules of the
type declared by the input field.

Following are examples of Input Object coercion for the type:

```graphql
input ExampleInputObject {
a: String
b: Int!
}
```

Original Value          | Variables       | Coerced Value
----------------------- | --------------- | -----------------------------------
`{ a: "abc", b: 123 }`  | `{}`            | `{ a: "abc", b: 123 }`
`{ a: 123, b: "123" }`  | `{}`            | `{ a: "123", b: 123 }`
`{ a: "abc" }`          | `{}`            | Error: Missing required field {b}
`{ b: $var }`           | `{ var: 123 }`  | `{ b: 123 }`
`{ b: $var }`           | `{ var: null }` | Error: {b} must be non-null.
`{ b: $var }`           | `{}`            | Error: {b} must be non-null.
`{ a: $var, b: 1 }`     | `{ var: null }` | `{ a: null, b: 1 }`
`{ a: $var, b: 1 }`     | `{}`            | `{ b: 1 }`
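The coercion behavior the table describes can be sketched in Python. This is a hand-rolled illustration, not spec text: `Var`, `FIELDS`, and `coerce_example_input` are invented names, and `str`/`int` stand in for the String and Int input coercion rules.

```python
class CoercionError(Exception):
    pass

class Var:
    """Placeholder for a $variable reference in the original value."""
    def __init__(self, name):
        self.name = name

# Hypothetical field table for ExampleInputObject:
# name -> (coerce function, is_non_null)
FIELDS = {
    "a": (str, False),  # a: String
    "b": (int, True),   # b: Int!
}

def coerce_example_input(original, variables):
    coerced = {}
    for name, (coerce, non_null) in FIELDS.items():
        if name not in original:
            if non_null:
                raise CoercionError("Missing required field {%s}" % name)
            continue  # omitted optional fields produce no slot at all
        value = original[name]
        if isinstance(value, Var):
            if value.name not in variables:
                if non_null:
                    raise CoercionError("{%s} must be non-null." % name)
                continue  # absent variable: omit the slot entirely
            value = variables[value.name]
        if value is None:
            if non_null:
                raise CoercionError("{%s} must be non-null." % name)
            coerced[name] = None
            continue
        coerced[name] = coerce(value)
    return coerced
```

Note how an omitted optional field and an unprovided variable both leave no slot in the result, matching "both defined by the input object type and provided by the original value" above, while an explicit {null} does produce a slot.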

#### Input Object type validation

1. An Input Object type must define one or more fields.
104 changes: 60 additions & 44 deletions spec/Section 6 -- Execution.md
@@ -11,14 +11,36 @@ A request for execution consists of a few pieces of information:
* Optionally: An initial value corresponding to the root type being executed.
Contributor

I don't know about this -- so the same query with the same variables could return different results if executed with different "initial values"? What would that be useful for?

Collaborator Author

This matches current reality for the semantics. The root type and executing the top level selection set is almost exactly the same as executing any other selection set (with the exception of mutations being executed serially).

You can think of initialValue as the universe over which you're running the query. For pure functional environments this value is critical. It's also totally reasonable for a server to only ever have one of these.


Given this information, the result of {ExecuteRequest()} produces the response,
to be formatted according to the Response section below.


## Validating Requests

As explained in the Validation section, only requests which pass all validation
rules should be executed. If validation errors are known, they should be
reported in the list of "errors" in the response and the request must fail
without execution.

Typically validation is performed in the context of a request immediately
before execution, however a GraphQL service may execute a request without
immediately validating it if that exact same request is known to have been
validated before. A GraphQL service should only execute requests which *at some
point* were known to be free of any validation errors, and have since
not changed.

For example: the request may be validated during development, provided it does
not later change, or a service may validate a request once and memoize the
result to avoid validating the same request again in the future.
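The memoize-the-result strategy described above might look like the following sketch. All names are illustrative (the spec prescribes no cache design), and a real service would more likely key on the parsed document than on raw text.

```python
import hashlib

class ValidationCache:
    """Memoize validation results keyed by the exact request text, so an
    unchanged request is never re-validated. Illustrative only."""

    def __init__(self, validate):
        self._validate = validate  # callable returning a list of validation errors
        self._results = {}

    def errors_for(self, request_text):
        key = hashlib.sha256(request_text.encode("utf-8")).hexdigest()
        if key not in self._results:
            self._results[key] = self._validate(request_text)
        return self._results[key]

    def may_execute(self, request_text):
        # Only execute requests known (at some point) to be free of
        # validation errors and unchanged since.
        return not self.errors_for(request_text)
```

Because the cache key is the exact request text, any edit to the request produces a new key and forces re-validation, preserving the "have since not changed" requirement.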


## Executing Requests

To execute a request, the executor must have a parsed `Document` (as defined
in the “Query Language” part of this spec) and a selected operation name to
run if the document defines multiple operations, otherwise the document is
expected to only contain a single operation. The result of the request is
determined by the result of executing this operation according to the “Executing
Operations” section below.

ExecuteRequest(schema, document, operationName, variableValues, initialValue):

@@ -29,12 +51,6 @@
* Otherwise if {operation} is a mutation operation:
* Return {ExecuteMutation(operation, schema, coercedVariableValues, initialValue)}.

The executor should find the `Operation` in the `Document` with the given
operation name. If no such operation exists, the executor should throw an
error. If the operation is found, then the result of executing the request
should be the result of executing the operation according to the “Executing
Operations” section.

GetOperation(document, operationName):

* If {operationName} is not {null}:
@@ -47,24 +63,6 @@ GetOperation(document, operationName):
* Produce a query error requiring a non-null {operationName}.
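The visible first and last steps of {GetOperation()} suggest roughly the following Python sketch. The document representation — a plain list of operation dicts with a `"name"` key — is an assumption made for illustration, not a spec definition.

```python
class QueryError(Exception):
    pass

def get_operation(operations, operation_name):
    """Select the operation to run from a document's list of operations."""
    if operation_name is not None:
        # A name was given: it must match exactly one operation.
        for op in operations:
            if op.get("name") == operation_name:
                return op
        raise QueryError("No operation named %r in the document." % operation_name)
    # No name given: only valid when the document defines a single operation.
    if len(operations) == 1:
        return operations[0]
    raise QueryError("A non-null operationName is required when the "
                     "document defines multiple operations.")
```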




## Coercing Variable Values

If the operation has defined any variables, then the values for
@@ -73,10 +71,28 @@ of variable's declared type. If a query error is encountered during
input coercion of variable values, then the operation fails without
execution.

CoerceVariableValues(schema, operation, variableValues):

* Let {coercedValues} be an empty unordered Map.
* Let {variables} be the variables defined by {operation}.
* For each {variables} as {variableName} and {variableType}:
* If no value was provided in {variableValues} for the name {variableName}:
* If {variableType} is a Non-Nullable type, throw a query error.
* Continue to the next variable.
* Let {value} be the value provided in {variableValues} for the name {variableName}.
* If {value} cannot be coerced according to the
input coercion rules of {variableType}, throw a query error.
* Let {coercedValue} be the result of coercing {value} according to the
input coercion rules of {variableType}.
* Add an entry to {coercedValues} named {variableName} with the value {coercedValue}.
* Return {coercedValues}.
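The algorithm above transcribes nearly directly into Python. This sketch assumes each variable definition carries a coercion function and a non-null flag — an illustrative simplification of the variable's declared type, not the spec's representation.

```python
class QueryError(Exception):
    pass

def coerce_variable_values(variable_defs, variable_values):
    # variable_defs: name -> (coerce function, is_non_null), standing in for
    # the variables defined by the operation.
    coerced_values = {}  # the empty unordered Map from the algorithm
    for name, (coerce, non_null) in variable_defs.items():
        if name not in variable_values:
            if non_null:
                raise QueryError(
                    "Variable $%s of non-null type was not provided." % name)
            continue  # continue to the next variable
        value = variable_values[name]
        try:
            coerced = coerce(value)
        except (TypeError, ValueError):
            # A value that cannot be coerced is a hard query error here,
            # unlike in argument coercion.
            raise QueryError("Variable $%s has an invalid value." % name)
        coerced_values[name] = coerced
    return coerced_values
```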

Note: This algorithm is very similar to {CoerceArgumentValues()}, however it is
less forgiving of non-coerceable values.
Contributor

Is it less forgiving by design, or should we change that?

nit: I honestly don't know which is the correct spelling, but we use "coercible" instead of "coerceable" in the rest of the spec.


Note: If any variable defined as non-null is not provided, or is provided the
value {null}, then the operation fails without execution.


## Executing Operations

@@ -110,8 +126,6 @@ mutations ensures against race conditions during these side-effects.

ExecuteMutation(mutation, schema, variableValues, initialValue):

* Let {variableValues} be the set of variable values to be used by any
field argument value coercion.
* Let {mutationType} be the root Mutation type in {schema}.
* Assert: {mutationType} is an Object type.
Contributor

Same as above (schema validation).

* Let {selectionSet} be the top level Selection Set in {mutation}.
@@ -135,9 +149,8 @@ response map.

ExecuteSelectionSet(selectionSet, objectType, objectValue, variableValues):

* Let {groupedFieldSet} be the result of
{CollectFields(objectType, selectionSet, variableValues)}.
* Initialize {resultMap} to an empty ordered map.
* For each {groupedFields} in {groupedFieldSet}:
* Let {entryTuple} be {GetFieldEntry(objectType, objectValue, groupedFields, variableValues)}.
Contributor

This tuple thing seems hacky and non-intuitive. The "responseKey" is just the key for the current entry in "groupedFieldSet", so there is no reason to have GetFieldEntry return it. GetFieldEntry doesn't, and shouldn't have the ability to affect the response key of the current field.

It's not very clear from just looking at the algorithm, but as far as I can tell the only reason we're returning a tuple is so that we can distinguish between the "field should be omitted" case and the "field should have null value" case. I think we could find a more elegant way to do that.

I would rewrite this part to be something like:

  • For each {responseKey: groupedFields} in {groupedFieldSet}:
    • Let {fieldName} be ...
    • Let {fieldType} be the result of calling {GetFieldTypeFromObjectType(objectType, fieldName)}
    • If {fieldType} is not null:
      • Let {responseValue} be {GetFieldValue(...)}
      • Set {responseValue} as the value for {responseKey} in {resultMap}.

Another possibility would be to at least change the first value in the tuple to be a bool flag "should include field" instead of making it the response key just so that we can check if it's null.

Collaborator Author

The tuple approach was decided when @dschafer and I were writing this section last year. Namely we need to distinguish between a field value of null vs a field not being included at all. Otherwise as you're pointing out, a larger part of the GetFieldEntry would end up in this algorithm instead. It aligns to the reference implementation (which in JS returns either null as a value or undefined as a skip).

I like this suggestion though - I'd like to explore this as a follow up since it might also suggest changing the reference implementation as well.


Can we have a bit more explanation or an example regarding {responseKey: groupedFields}? In what case might a responseKey have multiple grouped fields, and in such a case how is the result for that responseKey determined?

@@ -151,7 +164,7 @@ is explained in greater detail in the Response section below.

Note: Normally, each call to {GetFieldEntry()} in the algorithm above is
performed in parallel. However there are conditions in which each call must be
done in serial, such as for mutations. This is explained in more detail in the
sections below.

Before execution, the selection set is converted to a grouped field set by
@@ -161,8 +174,9 @@ fields that share a response key.
This ensures all fields with the same response key (alias or field name)
included via referenced fragments are executed at the same time.

CollectFields(objectType, selectionSet, variableValues, visitedFragments):

* If {visitedFragments} is not provided, initialize it to the empty set.
* Initialize {groupedFields} to an empty ordered list of lists.
* For each {selection} in {selectionSet}:
* If {selection} provides the directive `@skip`, let {skipDirective} be that directive.
Contributor

We should make it clear (if it isn't already) somewhere in the spec that each selection item can only have one @skip/@include directive. Probably add a validation rule?

e.g. `some_field @skip(if: $var1) @skip(if: $var2)` is not valid

Collaborator Author

This actually isn't already a validation rule and is valid, though it does weird things. Changing this is probably not appropriate in an editing PR, but we should definitely fix this.

Filed #223

@@ -203,7 +217,7 @@ CollectFields(objectType, selectionSet, visitedFragments, variableValues):
* If {fragmentType} is not {null} and {DoesFragmentTypeApply(objectType, fragmentType)} is false, continue
with the next {selection} in {selectionSet}.
* Let {fragmentSelectionSet} be the top-level selection set of {selection}.
* Let {fragmentGroupedFieldSet} be the result of calling {CollectFields(objectType, fragmentSelectionSet, variableValues, visitedFragments)}.
* For each {fragmentGroup} in {fragmentGroupedFieldSet}:
* Let {responseKey} be the response key shared by all fields in {fragmentGroup}
* Let {groupForResponseKey} be the list in {groupedFields} for
@@ -502,22 +516,24 @@ CoerceArgumentValues(objectType, field, variableValues)
* Let {fieldName} be the name of {field}.
* Let {argumentDefinitions} be the arguments defined by {objectType} for the
field named {fieldName}.
* Let {coercedValues} be an empty unordered Map.
* For each {argumentDefinitions} as {argumentName} and {argumentType}:
* If no value was provided in {argumentValues} for the name {argumentName}:
* If {argumentType} is a Non-Nullable type, throw a field error.
* Otherwise, continue to the next argument definition.
* Let {value} be the value provided in {argumentValues} for the name {argumentName}.
* If {value} is a Variable:
* If a value exists in {variableValues} for the Variable {value}:
* Add an entry to {coercedValues} named {argName} with the
value of the Variable {value} found in {variableValues}.
* Otherwise if {value} can be coerced according to the input coercion rules
of {argType}:
* Let {coercedValue} be the result of coercing {value} according to the
input coercion rules of {argType}.
* Add an entry to {coercedValues} named {argName} with the
value {coercedValue}.
* Return {coercedValues}.

Note: Variable values are not coerced because they are expected to be coerced
before executing the request in {CoerceVariableValues()}, and valid queries must
only allow usage of variables of appropriate types.
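For contrast with variable coercion, the argument-coercion algorithm can be sketched as follows; note how a non-coercible value is silently skipped rather than raising an error — the "more forgiving" behavior noted earlier. The `(coerce, non_null)` argument shape and the `Var` placeholder are illustrative assumptions, not spec definitions.

```python
class FieldError(Exception):
    pass

class Var:
    """Placeholder for a $variable reference in an argument value."""
    def __init__(self, name):
        self.name = name

def coerce_argument_values(argument_defs, argument_values, variable_values):
    # argument_defs: name -> (coerce function, is_non_null), an illustrative
    # stand-in for the arguments defined by the object type for this field.
    coerced_values = {}
    for name, (coerce, non_null) in argument_defs.items():
        if name not in argument_values:
            if non_null:
                raise FieldError(
                    "Argument %s of non-null type was not provided." % name)
            continue  # omitted optional argument: no entry in the result
        value = argument_values[name]
        if isinstance(value, Var):
            if value.name in variable_values:
                # Variable values were already coerced by CoerceVariableValues().
                coerced_values[name] = variable_values[value.name]
            continue
        try:
            coerced_values[name] = coerce(value)
        except (TypeError, ValueError):
            pass  # non-coercible literal values are skipped, not errors
    return coerced_values
```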