GraphQL Composite Schemas Spec

Introduction

The GraphQL Composite Schemas Spec introduces a comprehensive specification for creating distributed GraphQL systems that seamlessly merge multiple GraphQL schemas. This specification describes the process of composing a federated GraphQL schema and outlines algorithms for executing GraphQL queries on the federated schema effectively by using query plans. This specification was originally created by ChilliCream and was transferred to the GraphQL Foundation.

The GraphQL Foundation was formed in 2019 as a neutral focal point for organizations who support the GraphQL ecosystem, and the GraphQL Specification Project was also established in 2019 as the Joint Development Foundation Projects, LLC, GraphQL Series.

If your organization benefits from GraphQL, please consider becoming a member and helping us to sustain the activities that support the health of our neutral ecosystem.

The GraphQL Specification Project has evolved and may continue to evolve in future editions of this specification. Previous editions of the GraphQL specification can be found at permalinks that match their release tag. The latest working draft release can be found at https://spec.graphql.org/draft.

Conformance

A conforming implementation of the GraphQL Composite Schemas Spec must fulfill all normative requirements. Conformance requirements are described in this document via both descriptive assertions and key words with clearly defined meanings.

The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” in the normative portions of this document are to be interpreted as described in IETF RFC 2119. These key words may appear in lowercase and still retain their meaning unless explicitly declared as non-normative.

A conforming implementation of the GraphQL Composite Schemas Spec may provide additional functionality, but must not where explicitly disallowed or would otherwise result in non-conformance.

Non-Normative Portions

All contents of this document are normative except portions explicitly declared as non-normative.

Examples in this document are non-normative, and are presented to aid understanding of introduced concepts and the behavior of normative portions of the specification. Examples are either introduced explicitly in prose (e.g. “for example”) or are set apart in example or counter-example blocks, like this:

Example № 1 This is an example of a non-normative example.
Counter Example № 2 This is an example of a non-normative counter-example.

Notes in this document are non-normative, and are presented to clarify intent, draw attention to potential edge-cases and pitfalls, and answer common questions that arise during implementation. Notes are either introduced explicitly in prose (e.g. “Note: “) or are set apart in a note block, like this:

Note This is an example of a non-normative note.

1 Overview

The GraphQL Composite Schemas specification describes how to construct a single unified GraphQL schema, the composite schema, from multiple GraphQL schemas, each termed a source schema.

The composite schema presents itself as a regular GraphQL schema; the implementation details and complexities of the underlying distributed systems are not visible to clients, and all observable behavior is the same as described by the GraphQL specification.

The GraphQL Composite Schemas specification is guided by a number of design principles.

Note Although the GraphQL Composite Schemas specification does not describe how to combine arbitrary schemas, tooling may be built to transform existing or external schemas into compliant source schemas. Details of building such tooling are beyond the scope of this specification.

To enable greater interoperability between different implementations of tooling and gateways, this specification focuses on two core components: schema composition and distributed execution.

2 Source Schema

A source schema is a GraphQL schema that is part of a larger composite schema. Source schemas use directives to express intent and requirements for the composition process as well as to describe runtime behavior. The following chapters describe the directives that are used to annotate a source schema.

2.1 @lookup

directive @lookup on FIELD_DEFINITION

The @lookup directive is used within a source schema to specify output fields that can be used by the distributed GraphQL executor to resolve an entity by a stable key.

The stable key is defined by the arguments of the field. Each argument must match a field on the return type of the lookup field.

Source schemas can provide multiple lookup fields for the same entity that resolve the entity by different keys.

In this example, the source schema specifies that the Product entity can be resolved with the productById field or the productByName field. Both lookup fields are able to resolve the Product entity but do so with different keys.

Example № 3
type Query {
  version: Int # NOT a lookup field.
  productById(id: ID!): Product @lookup
  productByName(name: String!): Product @lookup
}

type Product @key(fields: "id") @key(fields: "name") {
  id: ID!
  name: String!
}

The arguments of a lookup field must correspond to fields specified as an entity key with the @key directive on the entity type.

Example № 4
type Query {
  node(id: ID!): Node @lookup
}

interface Node @key(fields: "id") {
  id: ID!
}

Lookup fields may return object, interface, or union types. In case a lookup field returns an abstract type (interface type or union type), all possible object types are considered entities and must have keys that correspond with the field’s argument signature.

Example № 5
type Query {
  product(id: ID!, categoryId: Int): Product @lookup
}

union Product = Electronics | Clothing

type Electronics @key(fields: "id categoryId") {
  id: ID!
  categoryId: Int
  name: String
  brand: String
  price: Float
}

type Clothing @key(fields: "id categoryId") {
  id: ID!
  categoryId: Int
  name: String
  size: String
  price: Float
}

The following example shows an invalid lookup field as the Clothing type does not declare a key that corresponds with the lookup field’s argument signature.

Counter Example № 6
type Query {
  product(id: ID!, categoryId: Int): Product @lookup
}

union Product = Electronics | Clothing

type Electronics @key(fields: "id categoryId") {
  id: ID!
  categoryId: Int
  name: String
  brand: String
  price: Float
}

# Clothing does not have a key that corresponds
# with the lookup field's argument signature.
type Clothing @key(fields: "id") {
  id: ID!
  categoryId: Int
  name: String
  size: String
  price: Float
}
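The correspondence between a lookup field's arguments and the declared entity keys can be sketched as a small check. This is purely illustrative and not part of the specification; it handles only flat keys, so nested key selections such as "product { id }" are out of scope:

```python
# Illustrative check (not part of the spec): a lookup field's argument names
# must match one of the key field sets declared via @key on the entity type.

def lookup_matches_key(argument_names, keys):
    """keys: list of sets, one set per @key directive on the object type."""
    return any(set(argument_names) == key for key in keys)

def abstract_lookup_valid(argument_names, member_keys):
    """For abstract return types, every possible object type must declare a
    matching key (member_keys: mapping type name -> list of key sets)."""
    return all(
        lookup_matches_key(argument_names, keys)
        for keys in member_keys.values()
    )
```

In the counter-example above, Clothing declares only the key "id", so a lookup taking (id, categoryId) would fail this check while Electronics would pass it.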

If the lookup returns an interface, the interface must also be annotated with a @key directive and declare its keys.

Example № 7
interface Node @key(fields: "id") {
  id: ID!
}

Lookup fields must be accessible from the Query type. If not directly on the Query type, they must be accessible via fields that do not require arguments, starting from the Query root type.

Example № 8
type Query {
  lookups: Lookups!
}

type Lookups {
  productById(id: ID!): Product @lookup
}

type Product @key(fields: "id") {
  id: ID!
}

Lookups can also be nested within other lookups, allowing nested entities that are part of an aggregate to be resolved. In the following example, the Product can be resolved by its ID, and the ProductPrice can be resolved by passing a composite key containing the product ID and the region name of the product price.

Example № 9
type Query {
  productById(id: ID!): Product @lookup
}

type Product @key(fields: "id") {
  id: ID!
  price(regionName: String!): ProductPrice @lookup
}

type ProductPrice @key(fields: "regionName product { id }") {
  regionName: String!
  product: Product
  value: Float!
}

Nested lookups must immediately follow the parent lookup and cannot be nested with fields in between.

Counter Example № 10
type Query {
  productById(id: ID!): Product @lookup
}

type Product @key(fields: "id") {
  id: ID!
  details: ProductDetails
}

type ProductDetails {
  price(regionName: String!): ProductPrice @lookup
}

type ProductPrice @key(fields: "regionName product { id }") {
  regionName: String!
  product: Product
  value: Float!
}

2.2 @internal

directive @internal on OBJECT | FIELD_DEFINITION

The @internal directive is used to mark types and fields as internal within a source schema. Internal types and fields do not appear in the final client-facing composite schema and are internal to the source schema they reside in.

Example № 11
# Source Schema
type Query {
  productById(id: ID!): Product
  productBySku(sku: ID!): Product @internal
}

# Composite Schema
type Query {
  productById(id: ID!): Product
}

Internal types and fields do not participate in the normal schema-merging process.

Example № 12
# Source Schema A
type Query {
  # this field follows the standard field merging rules
  productById(id: ID!): Product

  # this field is internal and does not follow any field merging rules.
  productBySku(sku: ID!): Product @internal
}

# Source Schema B
type Query {
  productById(id: ID!): Product
  productBySku(sku: ID!, name: String!): Product @internal
}

# Composite Schema
type Query {
  productById(id: ID!): Product
}

Internal fields may be used by the distributed GraphQL executor as lookup fields for entity resolution or to supply additional data.

Example № 13
# Source Schema A
type Query {
  productById(id: ID!): Product @lookup
  lookups: InternalLookups! @internal
}

# all lookups within this internal type are hidden from the public API
# but can be used for entity resolution.
type InternalLookups @internal {
  productBySku(sku: ID!): Product @lookup
}

# Composite Schema
type Query {
  productById(id: ID!): Product
}

In contrast to @inaccessible, the effect of @internal is local to its source schema.

Example № 14
# Source Schema A
type Query {
  # this field follows the standard field merging rules
  productById(id: ID!): Product

  # this field is internal and does not follow any field merging rules.
  productBySku(sku: ID!): Product @internal
}

# Source Schema B
type Query {
  # this field follows the standard field merging rules
  productById(id: ID!): Product

  # this field follows the standard field merging rules
  productBySku(sku: Int!): Product
}

# Composite Schema
type Query {
  productById(id: ID!): Product
  productBySku(sku: Int!): Product
}

2.3 @inaccessible

directive @inaccessible on OBJECT | FIELD_DEFINITION

The @inaccessible directive is used to prevent specific objects or fields from being accessible through the client-facing composite schema, even if they are accessible in the underlying source schemas.

This directive is useful for restricting access to fields or objects that are either irrelevant to the client-facing composite schema or sensitive in nature, such as internal identifiers or fields intended only for backend use.

In the following example, the key field sku is inaccessible from the composite schema. However, type system members marked as @inaccessible can still be used by the distributed executor to fulfill requirements.

Example № 15
type Product @key(fields: "id") @key(fields: "sku") {
  id: ID!
  sku: String! @inaccessible
  note: String
}

type Query {
  productById(id: ID!): Product
  productBySku(sku: String!): Product @inaccessible
}

In contrast to the @internal directive, @inaccessible hides an object type or output field from the composite schema even if the same type system member carries no @inaccessible directive in other source schemas.

Example № 16
# Source Schema A
type Product @key(fields: "id") @key(fields: "sku") {
  id: ID!
  sku: String! @inaccessible
  note: String
}

# Source Schema B
type Product @key(fields: "sku") {
  sku: String!
  price: Float!
}

# Composite Schema
type Product {
  id: ID!
  note: String
  price: Float!
}

2.4 @is

directive @is(field: FieldSelectionMap!) on ARGUMENT_DEFINITION

The @is directive is utilized on lookup fields to describe how the arguments can be mapped from the entity type that the lookup field resolves. The mapping establishes semantic equivalence between disparate type system members across source schemas and is used in cases where the argument does not 1:1 align with a field on the entity type.

In the following example, the directive specifies that the id argument on the field Query.personById and the field Person.id on the return type of the field are semantically the same.

Note In this case the @is directive could also be omitted as the argument and field names match.
Example № 17
extend type Query {
  personById(id: ID! @is(field: "id")): Person @lookup
}

The @is directive also allows referring to nested fields relative to Person.

Example № 18
extend type Query {
  personByAddressId(id: ID! @is(field: "address.id")): Person
}

The @is directive is not limited to a single argument.

Example № 19
extend type Query {
  personByAddressId(
    id: ID! @is(field: "address.id")
    kind: PersonKind @is(field: "kind")
  ): Person
}

The @is directive can also be used in combination with @oneOf to specify lookup fields that can resolve entities by different keys.

Example № 20
extend type Query {
  person(
    by: PersonByInput
      @is(field: "{ id } | { addressId: address.id } | { name }")
  ): Person
}

input PersonByInput @oneOf {
  id: ID
  addressId: ID
  name: String
}
Arguments:
  • field: Represents a selection path map syntax.

2.5 @require

directive @require(field: FieldSelectionMap!) on ARGUMENT_DEFINITION

The @require directive is used to express data requirements against other source schemas. Arguments annotated with the @require directive are removed from the composite schema, and their values are resolved by the distributed executor.

Example № 21
type Product {
  id: ID!
  delivery(
    zip: String!
    size: Int! @require(field: "dimension.size")
    weight: Int! @require(field: "dimension.weight")
  ): DeliveryEstimates
}

The above example would translate to the following in the composite schema.

Example № 22
type Product {
  id: ID!
  delivery(zip: String!): DeliveryEstimates
}
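The argument-removal step described above can be sketched as a small helper. The representation of arguments is an assumption of this sketch, not part of the specification:

```python
# Illustrative sketch (not part of the spec): arguments annotated with
# @require are dropped when projecting a field into the composite schema,
# because the executor supplies their values at runtime.

def composite_arguments(arguments):
    """arguments: list of (name, requirement) pairs, where requirement is the
    @require selection map or None for a regular client-facing argument."""
    return [name for name, requirement in arguments if requirement is None]
```

For the delivery field above, only zip survives into the composite schema.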

This can also be done by using input types. The selection path map specifies which data is required and needs to be resolved from other source schemas. If the input type is only used to express requirements it is removed from the composite schema.

Example № 23
type Product {
  id: ID!
  delivery(
    zip: String!
    dimension: ProductDimensionInput!
      @require(field: "{ size: dimension.size, weight: dimension.weight }")
  ): DeliveryEstimates
}

If the input types do not match the output type structure, the selection map syntax can be used to specify how requirements translate to the input object.

Example № 24
type Product {
  id: ID!
  delivery(
    zip: String!
    dimension: ProductDimensionInput!
      @require(field: "{ productSize: dimension.size, productWeight: dimension.weight }")
  ): DeliveryEstimates
}

type ProductDimension {
  size: Int!
  weight: Int!
}

input ProductDimensionInput {
  productSize: Int!
  productWeight: Int!
}
Arguments:
  • field: Represents a selection path map syntax.

2.6 @key

directive @key(fields: SelectionSet!) repeatable on OBJECT | INTERFACE

The @key directive is used to designate an entity’s unique key, which identifies how to uniquely reference an instance of an entity across different source schemas. It allows a source schema to indicate which fields form a unique identifier, or key, for an entity.

Example № 25
type Product @key(fields: "id") {
  id: ID!
  sku: String!
  name: String!
  price: Float!
}

Each occurrence of the @key directive on an object or interface type specifies one distinct unique key for that entity, which enables a gateway to perform lookups and resolve instances of the entity based on that key.

Example № 26
type Product @key(fields: "id") @key(fields: "sku") {
  id: ID!
  sku: String!
  name: String!
  price: Float!
}

While multiple keys define separate ways to reference the same entity based on different sets of fields, a composite key allows for uniquely identifying an entity by using a combination of multiple fields.

Example № 27
type Product @key(fields: "id sku") {
  id: ID!
  sku: String!
  name: String!
  price: Float!
}

The directive is applicable to both OBJECT and INTERFACE types. This allows entities that implement an interface to inherit the key(s) defined at the interface level, ensuring consistent identification across different implementations of that interface.

Arguments:
  • fields: Represents a selection set syntax.

2.7 @shareable

directive @shareable repeatable on OBJECT | FIELD_DEFINITION

By default, only a single source schema is allowed to contribute a particular field to an object type. This prevents source schemas from inadvertently defining similarly named fields that are not semantically equivalent.

Counter Example № 28
# Schema A
type Product {
  name: String!
  description: String!
}

# Schema B
type Product {
  name: String!
  variation: ProductVariation!
}

Fields must be explicitly marked as @shareable to allow multiple source schemas to define them, ensuring that the decision to serve a field from more than one source schema is intentional and coordinated.

Example № 29
# Schema A
type Product {
  name: String! @shareable
  description: String!
}

# Schema B
type Product {
  name: String! @shareable
  variation: ProductVariation!
}

If multiple source schemas define the same shareable field, they are assumed to be semantically equivalent, and the executor is free to choose between them as it sees fit.
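A minimal sketch of this collision check, assuming a flat map of field coordinates to per-schema definitions (all names here are illustrative, and the key-field exemption described below is ignored):

```python
# Illustrative sketch (not part of the spec): a field contributed by more than
# one source schema must be marked @shareable in every schema that defines it.
# Key fields, which are shareable by default, are not modeled here.

def shareable_violations(field_definitions):
    """field_definitions: mapping 'Type.field' -> list of
    (schema_name, is_shareable) pairs, one entry per defining schema."""
    violations = []
    for coordinate, definitions in field_definitions.items():
        if len(definitions) > 1 and not all(s for _, s in definitions):
            violations.append(coordinate)
    return violations
```

A field defined by a single schema never conflicts; only multi-schema definitions must all carry @shareable.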

The @shareable directive can also be applied at the object-type level, having the same effect as if @shareable were applied to each field of the type.

Example № 30
# Schema A
type Product @shareable {
  name: String!
  description: String!
}

# Schema B
type Product {
  name: String! @shareable
  variation: ProductVariation!
}

Key fields of an object type are considered shareable by default and do not need to be explicitly marked with @shareable.

Example № 31
# Schema A
type Product @key(fields: "id") {
  id: ID!
  name: String! @shareable
  description: String!
}

# Schema B
type Product @key(fields: "id") {
  id: ID!
  name: String! @shareable
  variation: ProductVariation!
}

2.8 @provides

directive @provides(fields: SelectionSet!) on FIELD_DEFINITION

The @provides directive indicates that a field can provide certain subfields of its return type from the same source schema, without requiring an additional resolution step elsewhere.

Example № 32
type Review {
  id: ID!
  body: String!
  author: User @provides(fields: "email")
}

type User @key(fields: "id") {
  id: ID!
  email: String! @external
  name: String!
}

type Query {
  reviews: [Review!]
  users: [User!]
}

When a field annotated with @provides returns an object, interface or union type that may also be contributed by other source schemas, this directive declares which of that type’s subfields the current source schema can resolve directly.

Example № 33
{
  reviews {
    body
    author {
      name
      email
    }
  }
}

If a client tries to fetch the same subfield (User.email) through a different path (e.g., users query field), the source schema will not be able to resolve it and will throw an error.

Counter Example № 34
{
  users {
    # The source schema does NOT provide email in this context,
    # and this field will fail at runtime.
    email
  }
}

The @provides directive may reference multiple fields or nested fields:

Example № 35
type Review {
  id: ID!
  product: Product @provides(fields: "sku variation { size }")
}

type Product @key(fields: "sku variation { id }") {
  sku: String! @external
  variation: ProductVariation!
  name: String!
}

type ProductVariation {
  id: String!
  size: String! @external
}

When a field annotated with the @provides directive has an abstract return type, the fields syntax can leverage inline fragments to express fields that can be resolved locally.

Example № 36
type Review {
  id: ID!
  # The @provides directive tells us that this source schema can supply different
  # fields depending on which concrete type of Product is returned.
  product: Product
    @provides(
      fields: """
      ... on Book { author }
      ... on Clothing { size }
      """
    )
}

interface Product @key(fields: "id") {
  id: ID!
}

type Book implements Product {
  id: ID!
  title: String!
  author: String! @external
}

type Clothing implements Product {
  id: ID!
  name: String!
  size: String! @external
}

type Query {
  reviews: [Review!]!
}
Arguments:
  • fields: Represents a selection set syntax describing the subfields of the returned type that can be provided by the current source schema.

2.9 @external

directive @external on FIELD_DEFINITION

The @external directive indicates that a field is recognized by the current source schema but is not directly contributed (resolved) by it. Instead, this schema references the field for specific composition purposes.

Entity Keys

When combined with one or more @key directives, an external field can serve as an entity identifier (or part of a composite identifier).

Example № 37
type Query {
  productBySku(sku: String!): Product @lookup
  productByUpc(upc: String!): Product @lookup
}

type Product @key(fields: "sku") @key(fields: "upc") {
  sku: String! @external
  upc: String! @external
  name: String
}

Field Resolution

An @external field may also be referenced when another field in the same source schema uses @provides to declare that it can resolve that field in a single data-fetching step.

Example № 38
type Review {
  id: ID!
  text: String
  author: User @provides(fields: "email")
}

extend type User {
  id: ID!
  email: String! @external
}

When a field is marked @external, the composition process understands that the field is provided by another source schema. The current source schema references it only for entity identification (via @key) or for providing a field through @provides. If no such usage exists, the presence of an @external field produces a composition error.
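The usage rule above can be sketched as a simple predicate; the representation of @key and @provides references as flat sets of field names is an assumption of this sketch:

```python
# Illustrative sketch (not part of the spec): an @external field must be
# referenced by at least one @key or @provides selection in the same source
# schema; otherwise composition must report an error.

def external_usage_valid(field, key_selections, provides_selections):
    """field: name of the @external field; key_selections and
    provides_selections: sets of field names referenced by @key and
    @provides directives within this source schema."""
    return field in key_selections or field in provides_selections
```

For Example № 37, sku and upc are valid because both appear in @key selections; an @external field referenced nowhere would fail this check.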

2.10 @override

directive @override(from: String!) on FIELD_DEFINITION

The @override directive is used to migrate a field from one source schema to another. When a field in the local schema is annotated with @override(from: "Catalog"), it signals that the local schema overrides the field previously contributed by the Catalog source schema. As a result, the composite schema will source this field from the local schema, rather than from the original source schema.

The following example shows how a field can be migrated from the Catalog schema to the new Payments schema. By using @override, a field can be moved to a new schema without requiring any change to the original Catalog schema.

Example № 39
# The original "Catalog" schema:
type Product @key(fields: "id") {
  id: ID!
  name: String!
  price: Float!
}

# The new "Payments" schema:
extend type Product @key(fields: "id") {
  id: ID! @external
  price: Float! @override(from: "Catalog")
  tax: Float!
}

Fields that are annotated with @override can themselves be migrated again by yet another source schema.

Example № 40
# The original "Catalog" schema:
type Product @key(fields: "id") {
  id: ID!
  name: String!
  price: Float!
}

# The new "Payments" schema:
extend type Product @key(fields: "id") {
  id: ID! @external
  price: Float! @override(from: "Catalog")
  tax: Float!
}

# The new "Pricing" schema:
extend type Product @key(fields: "id") {
  id: ID! @external
  price: Float! @override(from: "Payments")
  tax: Float!
}

If the composition detects cyclic overrides, it must throw a composition error.

Example № 41
# The original "Catalog" schema:
type Product @key(fields: "id") {
  id: ID!
  name: String!
  price: Float! @override(from: "Pricing")
}

# The new "Payments" schema:
extend type Product @key(fields: "id") {
  id: ID! @external
  price: Float! @override(from: "Catalog")
  tax: Float!
}
Arguments:
  • from: The name of the source schema that originally provided this field.
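The cyclic-override check can be sketched as follows. The overrides mapping is a hypothetical per-field data structure (each schema mapped to the value of its @override(from: ...) argument), not something the specification defines:

```python
# Illustrative sketch (not part of the spec): detect cycles in the @override
# relationships for a single field across source schemas.

def has_override_cycle(overrides):
    """overrides: mapping schema_name -> name given in @override(from: ...)
    for one field. Returns True if following the chain revisits a schema."""
    for start in overrides:
        seen = {start}
        current = overrides.get(start)
        while current is not None:
            if current in seen:
                return True
            seen.add(current)
            current = overrides.get(current)
    return False
```

A simple chain such as Payments overriding from Catalog is fine; two schemas overriding the field from each other forms a cycle and must be rejected.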

3 Schema Composition

Schema composition describes the process of merging multiple source schemas into a single GraphQL schema, known as the composite execution schema, which is a valid GraphQL schema annotated with execution directives. This composite execution schema is the output of the schema composition process. The schema composition process is divided into three main steps: Validate Source Schemas, Merge Source Schemas, and Validate Satisfiability, which are run in sequence to produce the composite execution schema.

Although this chapter describes schema composition as a sequence of phases, an implementation is not required to implement these steps exactly as presented. Implementations may interleave or reorder the specified checks, or introduce additional processing stages, provided that the final composed schema complies with the requirements set forth in this specification. The composition rules and resulting schema must remain consistent, but the specific structure or timing of each validation step is left to the implementer.
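The three phases described above can be sketched as a simple pipeline. Every name and data structure in this sketch is illustrative, standing in for the real, far more involved checks:

```python
# Illustrative sketch (not part of the spec) of the composition pipeline:
# validate each source schema, merge, then check satisfiability.

class CompositionError(Exception):
    """Raised when any composition phase fails."""

def is_valid_source_schema(schema):
    # Placeholder for syntax, type-definition, and directive-usage checks.
    return "types" in schema

def merge_source_schemas(schemas):
    # Placeholder: a naive union of type names stands in for the real merge
    # (pre-merge validation, merge, post-merge validation).
    merged = {}
    for schema in schemas:
        for name, type_def in schema["types"].items():
            merged.setdefault(name, type_def)
    return {"types": merged}

def validate_satisfiability(composite):
    # Placeholder: the real check verifies every composite field is resolvable.
    pass

def compose(source_schemas):
    for schema in source_schemas:
        if not is_valid_source_schema(schema):
            raise CompositionError(f"invalid source schema: {schema['name']}")
    composite = merge_source_schemas(source_schemas)
    validate_satisfiability(composite)
    return composite
```

As the paragraph below notes, an implementation may interleave or reorder these phases as long as the composed result is the same.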

3.1 Validate Source Schemas

In this phase, each source schema is validated in isolation to ensure that it satisfies the GraphQL specification and composition requirements. No cross-schema references are considered here. Each source schema must have valid syntax, well-formed type definitions, and correct directive usage. If any source schema fails these checks, composition does not proceed.

3.2 Merge Source Schemas

Once all source schemas have passed individual validation, they are merged into a single composite schema. This merging process is subdivided into three stages: pre-merge validation, merge, and post-merge validation.

3.2.1 Pre Merge Validation

Prior to merging the schemas, additional validations are performed that require visibility into all source schemas but treat them as separate entities. This step detects conflicts such as incompatible fields or default argument values that would render the merged schema unusable. Detecting such conflicts early prevents errors that would otherwise be discovered during the merge process.

3.2.1.1 Enum Type Default Value Uses Inaccessible Value

Error Code

ENUM_TYPE_DEFAULT_VALUE_INACCESSIBLE

Formal Specification
ValidateArgumentDefaultValues()
  1. Let arguments be all arguments of fields and directives across all source schemas
  2. For each argument in arguments
    1. If IsExposed(argument) is true and argument has a default value:
      1. Let defaultValue be the default value of argument
      2. If ValidateDefaultValue(defaultValue) is false
        1. return false
  3. return true
ValidateInputFieldDefaultValues()
  1. Let inputFields be all input fields across all source schemas
  2. For each inputField in inputFields:
    1. If IsExposed(inputField) is true and inputField has a default value:
      1. Let defaultValue be the default value of inputField
      2. If ValidateDefaultValue(defaultValue) is false
        1. return false
  3. return true
ValidateDefaultValue(defaultValue)
  1. If defaultValue is a ListValue:
    1. For each valueNode in defaultValue:
      1. If ValidateDefaultValue(valueNode) is false
        1. return false
  2. If defaultValue is an ObjectValue:
    1. Let objectFields be a list of all fields of defaultValue
    2. Let fields be a list of all fields objectFields are referring to
    3. For each field in fields:
      1. If IsExposed(field) is false
        1. return false
    4. For each objectField in objectFields:
      1. Let value be the value of objectField
      2. If ValidateDefaultValue(value) is false
        1. return false
  3. If defaultValue is an EnumValue:
    1. If IsExposed(defaultValue) is false
      1. return false
  4. return true
Explanatory Text

This rule ensures that inaccessible enum values are not exposed in the composed schema through default values. Output field arguments, input fields, and directive arguments must only use an enum value as their default value when that enum value is not annotated with the @inaccessible directive.

In this example, the FOO value in the Enum1 enum is not marked with @inaccessible, so it does not violate the rule.

type Query {
  field(type: Enum1 = FOO): [Baz!]!
}

enum Enum1 {
  FOO
  BAR
}

The following example violates this rule because the default value FOO, used for a field argument, an input field, and a directive argument, references an enum value that is marked as @inaccessible.

Counter Example № 42
type Query {
  field(arg: Enum1 = FOO): [Baz!]!
}

input Input1 {
  field: Enum1 = FOO
}

directive @directive1(arg: Enum1 = FOO) on FIELD_DEFINITION

enum Enum1 {
  FOO @inaccessible
  BAR
}
Counter Example № 43
type Query {
  field(arg: Input1 = { field2: "ERROR" }): [Baz!]!
}

directive @directive1(arg: Input1 = { field2: "ERROR" }) on FIELD_DEFINITION

input Input1 {
  field1: String
  field2: String @inaccessible
}
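The ValidateDefaultValue algorithm above can be sketched in runnable form. The value representation (plain lists and dicts plus an EnumValue wrapper) and the single inaccessible set covering both enum values and input field names are assumptions of this sketch:

```python
# Illustrative, runnable sketch of ValidateDefaultValue (not part of the
# spec). ListValues are Python lists, ObjectValues are dicts, enum values are
# wrapped in EnumValue; `inaccessible` holds the names of all @inaccessible
# enum values and input fields.

class EnumValue:
    def __init__(self, name):
        self.name = name

def validate_default_value(value, inaccessible):
    if isinstance(value, list):
        # ListValue: every element must itself be valid.
        return all(validate_default_value(v, inaccessible) for v in value)
    if isinstance(value, dict):
        # ObjectValue: every referenced input field must be exposed,
        # and every field value must itself be valid.
        if any(name in inaccessible for name in value):
            return False
        return all(
            validate_default_value(v, inaccessible) for v in value.values()
        )
    if isinstance(value, EnumValue):
        return value.name not in inaccessible
    return True
```

With field2 marked inaccessible, the default { field2: "ERROR" } from the counter-example above fails this check.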

3.2.1.2 Output Field Types Mergeable

Error Code

OUTPUT_FIELD_TYPES_NOT_MERGEABLE

Severity

ERROR

Formal Specification
  • Let typeNames be the set of all output type names from all source schemas.
  • For each typeName in typeNames
    • Let types be the set of all types with the name typeName from all source schemas.
    • Let fieldNames be the set of all field names from all types.
    • For each fieldName in fieldNames
      • Let fields be the set of all fields with the name fieldName from all types.
      • FieldsAreMergeable(fields) must be true.
FieldsAreMergeable(fields)
  1. Given each pair of members fieldA and fieldB in fields:
    1. Let typeA be the type of fieldA
    2. Let typeB be the type of fieldB
    3. SameTypeShape(typeA, typeB) must be true.
Explanatory Text

Fields on objects or interfaces that have the same name are considered semantically equivalent and mergeable when they have a mergeable field type.

Fields with the same type are mergeable.

Example № 44
type User {
  birthdate: String
}

type User {
  birthdate: String
}

Fields with different nullability are mergeable, resulting in a merged field with a nullable type.

Example № 45
type User {
  birthdate: String!
}

type User {
  birthdate: String
}
Example № 46
type User {
  tags: [String!]
}

type User {
  tags: [String]!
}

type User {
  tags: [String]
}
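The nullability behavior shown in the two examples above can be captured in a small runnable sketch. The parser and the tuple representation are assumptions of this sketch, not part of the specification:

```python
# Illustrative sketch (not part of the spec): merge two output field type
# references. The merged type keeps the named type and list shape, and is
# non-null only where every source schema declares non-null.

def parse_type(ref):
    ref = ref.strip()
    if ref.endswith("!"):
        return ("nonnull", parse_type(ref[:-1]))
    if ref.startswith("[") and ref.endswith("]"):
        return ("list", parse_type(ref[1:-1]))
    return ("named", ref)

def merge_types(a, b):
    a_nonnull, b_nonnull = a[0] == "nonnull", b[0] == "nonnull"
    inner_a = a[1] if a_nonnull else a
    inner_b = b[1] if b_nonnull else b
    if inner_a[0] != inner_b[0] or (
        inner_a[0] == "named" and inner_a[1] != inner_b[1]
    ):
        # Different kind or different named type: not mergeable.
        raise ValueError("OUTPUT_FIELD_TYPES_NOT_MERGEABLE")
    if inner_a[0] == "named":
        merged = inner_a
    else:
        merged = ("list", merge_types(inner_a[1], inner_b[1]))
    return ("nonnull", merged) if a_nonnull and b_nonnull else merged

def print_type(t):
    if t[0] == "nonnull":
        return print_type(t[1]) + "!"
    if t[0] == "list":
        return "[" + print_type(t[1]) + "]"
    return t[1]
```

Merging String! with String yields String, and merging [String!], [String]!, and [String] yields [String], matching the two examples above.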

Fields are not mergeable if the named types are different in kind or name.

Counter Example № 47
type User {
  birthdate: String!
}

type User {
  birthdate: DateTime!
}
Counter Example № 48
type User {
  tags: [Tag]
}

type Tag {
  value: String
}

type User {
  tags: [Tag]
}

scalar Tag

3.2.1.3 Disallowed Inaccessible Elements

Error Code

DISALLOWED_INACCESSIBLE

Severity

ERROR

Formal Specification
  • Let types be the set of all types from all source schemas.
  • For each type in types:
    • If type is a built-in scalar type or introspection type:
      • IsAccessible(type) must be true.
      • For each field in type:
        • IsAccessible(field) must be true.
        • For each argument in field:
          • IsAccessible(argument) must be true.
  • Let directives be the set of all directives from all source schemas.
  • For each directive in directives:
    • If directive is a built-in directive:
      • For each argument in directive:
        • IsAccessible(argument) must be true.
Explanatory Text

This rule ensures that certain essential elements of a GraphQL schema, particularly built-in scalars, directive arguments, and introspection types, cannot be marked as @inaccessible. These types are fundamental to GraphQL. Making these elements inaccessible would break core GraphQL functionality.

Here, the String type is not marked as @inaccessible, which adheres to the rule:

Example № 49
type Product {
  price: Float
  name: String
}

In this example, the String scalar is marked as @inaccessible. This violates the rule because String is a required built-in type that cannot be inaccessible:

Counter Example № 50
scalar String @inaccessible

type Product {
  price: Float
  name: String
}

In this example, the introspection type __Type is marked as @inaccessible. This violates the rule because introspection types must remain accessible for GraphQL introspection queries to work.

Counter Example № 51
type __Type @inaccessible {
  kind: __TypeKind!
  name: String
  fields(includeDeprecated: Boolean = false): [__Field!]
}

3.2.1.4 External Argument Default Mismatch

Error Code

EXTERNAL_ARGUMENT_DEFAULT_MISMATCH

Severity

ERROR

Formal Specification
  • Let typeNames be the set of all output type names from all source schemas.
  • For each typeName in typeNames
    • Let types be the set of all types with the name typeName from all source schemas.
    • Let fieldNames be the set of all field names from all types in types.
    • For each fieldName in fieldNames
      • Let fields be the set of all fields with the name fieldName from all types in types.
      • Let externalFields be the set of all fields in fields that are marked with @external.
      • If externalFields is not empty
        • Let argumentNames be the set of all argument names from all fields in fields.
        • For each argumentName in argumentNames
          • Let arguments be the set of all arguments with the name argumentName from all fields in fields.
          • Let defaultValue be the first default value found in arguments.
          • Let externalArguments be the set of all arguments with the name argumentName from all fields in externalFields.
          • For each externalArgument in externalArguments
            • The default value of externalArgument must be equal to defaultValue.
Explanatory Text

This rule ensures that arguments on fields marked as @external have default values compatible with the corresponding arguments on fields from other source schemas where the field is defined (non-@external). Since @external fields represent fields that are resolved by other source schemas, their arguments and defaults must match to maintain consistent behavior across different source schemas.
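A simplified version of this comparison can be expressed as follows. The data shapes (a list of per-schema field records with an `external` flag and an `args` map) are assumptions of the sketch, not part of the specification:

```python
# Sketch of EXTERNAL_ARGUMENT_DEFAULT_MISMATCH: every argument on an
# @external field must carry the same default as the defining field.

def check_external_argument_defaults(field_definitions):
    """field_definitions: one dict per source schema, e.g.
    {"external": bool, "args": {"language": {"default": "en"}}}"""
    errors = []
    non_external = [f for f in field_definitions if not f["external"]]
    external = [f for f in field_definitions if f["external"]]
    for base in non_external:
        for arg_name, arg in base["args"].items():
            for ext in external:
                ext_arg = ext["args"].get(arg_name)
                if ext_arg is None:
                    continue  # covered by EXTERNAL_ARGUMENT_MISSING instead
                if ext_arg.get("default") != arg.get("default"):
                    errors.append(f"EXTERNAL_ARGUMENT_DEFAULT_MISMATCH: {arg_name}")
    return errors

# Schema A defines the default "en"; schema B's @external copy says "de".
defs = [
    {"external": False, "args": {"language": {"default": "en"}}},
    {"external": True, "args": {"language": {"default": "de"}}},
]
print(check_external_argument_defaults(defs))
# → ['EXTERNAL_ARGUMENT_DEFAULT_MISMATCH: language']
```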

Here, the name field on Product is defined in one source schema and marked as @external in another. The argument language has the same default value in both source schemas, satisfying the rule:

Example № 52
# Source schema A
type Product {
  name(language: String = "en"): String
}

# Source schema B
type Product {
  name(language: String = "en") @external: String
}

Here, the name field on Product is defined in one source schema and marked as @external in another. The argument language has different default values in the two source schemas, violating the rule:

Counter Example № 53
# Source schema A
type Product {
  name(language: String = "en"): String
}

# Source schema B
type Product {
  name(language: String = "de") @external: String
}

In the following counter example, the name field on Product is defined in one source schema and marked as @external in another. The argument language has a default value in the source schema where the field is defined, but it does not have a default value in the source schema where the field is marked as @external, violating the rule:

Counter Example № 54
# Source schema A
type Product {
  name(language: String = "en"): String
}

# Source schema B
type Product {
  name(language: String): String @external
}

3.2.1.5 External Argument Missing

Error Code

EXTERNAL_ARGUMENT_MISSING

Severity

ERROR

Formal Specification
  • Let typeNames be the set of all output type names from all source schemas.
  • For each typeName in typeNames
    • Let types be the set of all types with the name typeName from all source schemas.
    • Let fieldNames be the set of all field names from all types in types.
    • For each fieldName in fieldNames
      • Let fields be the set of all fields with the name fieldName from all types in types.
      • Let externalFields be the set of all fields in fields that are marked with @external.
      • Let nonExternalFields be the set of all fields in fields that are not marked with @external.
      • If externalFields is not empty
        • Let argumentNames be the set of all argument names from all fields in nonExternalFields
        • For each argumentName in argumentNames:
          • For each externalField in externalFields
            • argumentName must be present in the arguments of externalField.
Explanatory Text

This rule ensures that fields marked with @external have all the necessary arguments that exist on the corresponding field definitions in other source schemas. Each argument defined on the base field (the field definition in the source schema where the field is resolved) must be present on the @external field in other source schemas. If an argument is missing on an @external field, the field cannot be resolved correctly, which is an inconsistency.
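The presence check can be sketched as below. Again, the record layout is an assumption made for illustration:

```python
# Sketch of EXTERNAL_ARGUMENT_MISSING: every argument on a non-external
# (defining) field must also appear on each @external copy of the field.

def check_external_argument_missing(field_definitions):
    errors = []
    non_external = [f for f in field_definitions if not f["external"]]
    external = [f for f in field_definitions if f["external"]]
    required = set()
    for base in non_external:
        required.update(base["args"])  # argument names on the base field
    for ext in external:
        for arg_name in sorted(required):
            if arg_name not in ext["args"]:
                errors.append(f"EXTERNAL_ARGUMENT_MISSING: {arg_name}")
    return errors

defs = [
    {"external": False, "args": {"language": {}}},  # schema A defines the field
    {"external": True, "args": {}},                 # schema B's copy lacks the argument
]
print(check_external_argument_missing(defs))
# → ['EXTERNAL_ARGUMENT_MISSING: language']
```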

In this example, the language argument is present on both the @external field in source schema B and the base field in source schema A, satisfying the rule:

Example № 55
# Source schema A
type Product {
  name(language: String): String
}

# Source schema B
type Product {
  name(language: String): String @external
}

Here, the @external field in source schema B is missing the language argument that is present in the base field definition in source schema A, violating the rule:

Counter Example № 56
# Source schema A
type Product {
  name(language: String): String
}

# Source schema B
type Product {
  name: String @external
}

3.2.1.6 External Argument Type Mismatch

Error Code

EXTERNAL_ARGUMENT_TYPE_MISMATCH

Severity

ERROR

Formal Specification
  • Let typeNames be the set of all output type names from all source schemas.
  • For each typeName in typeNames
    • Let types be the set of all types with the name typeName from all source schemas.
    • Let fieldNames be the set of all field names from all types in types.
    • For each fieldName in fieldNames
      • Let fields be the set of all fields with the name fieldName from all types in types.
      • Let externalFields be the set of all fields in fields that are marked with @external.
      • Let nonExternalFields be the set of all fields in fields that are not marked with @external.
      • If externalFields is not empty
        • Let argumentNames be the set of all argument names from all fields in nonExternalFields
        • For each argumentName in argumentNames:
          • For each externalField in externalFields
            • Let externalArgument be the argument with the name argumentName from externalField.
            • The type of externalArgument must strictly equal the type of each argument with the name argumentName from the fields in nonExternalFields.
Explanatory Text

This rule ensures that arguments on fields marked as @external have types compatible with the corresponding arguments on the fields defined in other source schemas. The arguments must have the exact same type signature, including nullability and list nesting.

Here, the @external field’s language argument has the same type (Language) as the base field, satisfying the rule:

Example № 57
# Source schema A
type Product {
  name(language: Language): String
}

# Source schema B
type Product {
  name(language: Language): String @external
}

In this counter-example, the @external field’s language argument type does not match the base field’s language argument type (Language vs. String), violating the rule:

Counter Example № 58
# Source schema A
type Product {
  name(language: Language): String
}

# Source schema B
type Product {
  name(language: String): String @external
}

3.2.1.7 External Missing on Base

Error Code

EXTERNAL_MISSING_ON_BASE

Severity

ERROR

Formal Specification
  • Let typeNames be the set of all output type names from all source schemas.
  • For each typeName in typeNames
    • Let types be the set of all types with the name typeName from all source schemas.
    • Let fieldNames be the set of all field names from all types in types.
    • For each fieldName in fieldNames
      • Let fields be the set of all fields with the name fieldName from all types in types.
      • Let externalFields be the set of all fields in fields that are marked with @external.
      • Let nonExternalFields be the set of all fields in fields that are not marked with @external.
      • If externalFields is not empty
        • nonExternalFields must not be empty.
Explanatory Text

This rule ensures that any field marked as @external in a source schema is actually defined (non-@external) in at least one other source schema. The @external directive indicates that a field is not resolved by the source schema in which it is declared, implying it must be resolvable by at least one other source schema.

Here, the name field on Product is defined in source schema A and marked as @external in source schema B, which is valid because there is a base definition in source schema A:

Example № 59
# Source schema A
type Product {
  id: ID
  name: String
}

# Source schema B
type Product {
  id: ID
  name: String @external
}

In this example, the name field on Product is marked as @external in source schema B but has no non-@external declaration in any other source schema, violating the rule:

Counter Example № 60
# Source schema A
type Product {
  id: ID
}

# Source schema B
type Product {
  id: ID
  name: String @external
}

3.2.1.8 External Type Mismatch

Error Code

EXTERNAL_TYPE_MISMATCH

Severity

ERROR

Formal Specification
  • Let typeNames be the set of all output type names from all source schemas.
  • For each typeName in typeNames
    • Let types be the set of all types with the name typeName from all source schemas.
    • Let fieldNames be the set of all field names from all types in types.
    • For each fieldName in fieldNames
      • Let fields be the set of all fields with the name fieldName from all types in types.
      • Let externalFields be the set of all fields in fields that are marked with @external.
      • Let nonExternalFields be the set of all fields in fields that are not marked with @external.
      • For each externalField in externalFields
        • The type of externalField must strictly equal the types of all fields in nonExternalFields.
Explanatory Text

This rule ensures that a field marked as @external has a return type compatible with the corresponding field defined in other source schemas. Fields with the same name must represent the same data type to maintain schema consistency.
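Because the spec requires strict equality, the comparison can be done on the rendered type string. A minimal sketch, with the record layout assumed for illustration:

```python
# Sketch of EXTERNAL_TYPE_MISMATCH: an @external field's return type
# must strictly equal the return type on every defining schema.

def check_external_type(field_definitions):
    """field_definitions: [{"external": bool, "type": "String"}, ...],
    where "type" is the printed type reference (including ! and [])."""
    errors = []
    base_types = {f["type"] for f in field_definitions if not f["external"]}
    for ext in (f for f in field_definitions if f["external"]):
        if any(ext["type"] != base for base in base_types):
            errors.append(f"EXTERNAL_TYPE_MISMATCH: {ext['type']}")
    return errors

defs = [
    {"external": False, "type": "String"},       # schema A defines name: String
    {"external": True, "type": "ProductName"},   # schema B declares a different type
]
print(check_external_type(defs))
# → ['EXTERNAL_TYPE_MISMATCH: ProductName']
```

Comparing printed type references keeps nullability (`!`) and list nesting (`[]`) in the comparison, matching the strict-equality requirement.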

Here, the @external field name has the same return type (String) as the base field definition, satisfying the rule:

Example № 61
# Source schema A
type Product {
  name: String
}

# Source schema B
type Product {
  name: String @external
}

In this example, the @external field name has a return type of ProductName that doesn’t match the base field’s return type String, violating the rule:

Counter Example № 62
# Source schema A
type Product {
  name: String
}

# Source schema B
type Product {
  name: ProductName @external
}

3.2.1.9 External Unused

Error Code

EXTERNAL_UNUSED

Severity

ERROR

Formal Specification
  • For each schema in all source schemas
    • Let types be the set of all composite types (object, interface) in schema.
    • For each type in types:
      • Let fields be the set of fields for type.
      • For each field in fields:
        • If field is marked with @external:
          • Let referencingFields be the set of fields in schema that reference type.
          • referencingFields must contain at least one field that references field in @provides.
Explanatory Text

This rule ensures that every field marked as @external in a source schema is actually used by that source schema in a @provides directive.
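If @external fields and @provides selections are flattened into `Type.field` coordinates, the rule reduces to a set check. The coordinate format and function name are assumptions of this sketch:

```python
# Sketch of EXTERNAL_UNUSED: every @external field in a schema must be
# referenced by at least one @provides directive in that same schema.

def check_external_unused(external_fields, provided_fields):
    """external_fields: set of "Type.field" coordinates marked @external.
    provided_fields: set of "Type.field" coordinates referenced by any
    @provides(fields: ...) argument in the same source schema."""
    return [
        f"EXTERNAL_UNUSED: {coord}"
        for coord in sorted(external_fields)
        if coord not in provided_fields
    ]

# Product.name is @external and referenced by @provides; Product.title is not.
errors = check_external_unused(
    external_fields={"Product.name", "Product.title"},
    provided_fields={"Product.name"},
)
print(errors)
# → ['EXTERNAL_UNUSED: Product.title']
```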

Examples

In this example, the name field is marked with @external and is used by the @provides directive, satisfying the rule:

Example № 63
# Source schema A
type Product {
  id: ID
  name: String @external
}

type Query {
  productByName(name: String): Product @provides(fields: "name")
}

In this example, the name field is marked with @external but is not used by the @provides directive, violating the rule:

Counter Example № 64
# Source schema A
type Product {
  title: String @external
  author: Author
}

3.2.1.10 Root Mutation Used

Error Code

ROOT_MUTATION_USED

Severity

ERROR

Formal Specification
  • Let schemas be the set of all source schemas.
  • For each schema in schemas:
    • Let rootMutation be the root mutation type defined in schema, if it exists.
    • Let namedMutationType be the type with the name Mutation in schema, if it exists.
    • If rootMutation is defined:
      • rootMutation must be named Mutation.
    • Otherwise, namedMutationType must not be defined.
Explanatory Text

This rule enforces that, for any source schema, if a root mutation type is defined, it must be named Mutation. Defining a root mutation type with a name other than Mutation or using a differently named type alongside a type explicitly named Mutation creates inconsistencies in schema design and violates the composite schema specification.
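The same check applies to all three root operation types (see also the Root Query Used and Root Subscription Used rules), so it can be written once and parameterized. The schema representation here is a simplified assumption:

```python
# Sketch of the ROOT_*_USED rules: if a root operation type is defined it
# must use the conventional name; if it is not defined, no type may use
# that conventional name.

def check_root_operation_name(schema, operation):
    """schema: {"root_types": {"mutation": "RootMutation", ...},
                "type_names": {"RootMutation", "Product", ...}}"""
    expected = operation.capitalize()          # "mutation" → "Mutation"
    code = f"ROOT_{operation.upper()}_USED"
    root = schema["root_types"].get(operation)
    if root is not None:
        return [] if root == expected else [code]
    # No root operation of this kind: the conventional name must be unused.
    return [code] if expected in schema["type_names"] else []

bad = {
    "root_types": {"mutation": "RootMutation"},
    "type_names": {"RootMutation", "Mutation", "Product"},
}
print(check_root_operation_name(bad, "mutation"))
# → ['ROOT_MUTATION_USED']
```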

Examples

Valid example:

Example № 65
schema {
  mutation: Mutation
}

type Mutation {
  createProduct(name: String): Product
}

type Product {
  id: ID!
  name: String
}

The following counter-example violates the rule because RootMutation is used as the root mutation type, but a type named Mutation is also defined.

Counter Example № 66
schema {
  mutation: RootMutation
}

type RootMutation {
  createProduct(name: String): Product
}

type Mutation {
  deprecatedField: String
}

3.2.1.11 Root Query Used

Error Code

ROOT_QUERY_USED

Severity

ERROR

Formal Specification
  • Let schemas be the set of all source schemas.
  • For each schema in schemas:
    • Let rootQuery be the root query type defined in schema, if it exists.
    • Let namedQueryType be the type with the name Query in schema, if it exists.
    • If rootQuery is defined:
      • rootQuery must be named Query.
    • Otherwise, namedQueryType must not be defined.
Explanatory Text

This rule enforces that the root query type in any source schema must be named Query. Defining a root query type with a name other than Query or using a differently named type alongside a type explicitly named Query creates inconsistencies in schema design and violates the composite schema specification.

Examples

Valid example:

Example № 67
schema {
  query: Query
}

type Query {
  product(id: ID!): Product
}

type Product {
  id: ID!
  name: String
}

The following counter-example violates the rule because RootQuery is used as the root query type, but a type named Query is also defined.

Counter Example № 68
schema {
  query: RootQuery
}

type RootQuery {
  product(id: ID!): Product
}

type Query {
  deprecatedField: String
}

3.2.1.12 Root Subscription Used

Error Code

ROOT_SUBSCRIPTION_USED

Severity

ERROR

Formal Specification
  • Let schemas be the set of all source schemas.
  • For each schema in schemas:
    • Let rootSubscription be the root subscription type defined in schema, if it exists.
    • Let namedSubscriptionType be the type with the name Subscription in schema, if it exists.
    • If rootSubscription is defined:
      • rootSubscription must be named Subscription.
    • Otherwise, namedSubscriptionType must not be defined.
Explanatory Text

This rule enforces that, for any source schema, if a root subscription type is defined, it must be named Subscription. Defining a root subscription type with a name other than Subscription or using a differently named type alongside a type explicitly named Subscription creates inconsistencies in schema design and violates the composite schema specification.

Examples

Valid example:

Example № 69
schema {
  subscription: Subscription
}

type Subscription {
  productCreated: Product
}

type Product {
  id: ID!
  name: String
}

The following counter-example violates the rule because RootSubscription is used as the root subscription type, but a type named Subscription is also defined.

Counter Example № 70
schema {
  subscription: RootSubscription
}

type RootSubscription {
  productCreated: Product
}

type Subscription {
  deprecatedField: String
}

3.2.1.13 Key Fields Select Invalid Type

Error Code

KEY_FIELDS_SELECT_INVALID_TYPE

Severity

ERROR

Formal Specification
  • Let schemas be the set of all source schemas.
    • Let types be the set of all object or interface types in schemas that are annotated with the @key directive.
    • For each type in types:
      • Let keyDirectives be the set of all @key directives on type.
      • For each keyDirective in keyDirectives
        • Let keyFields be the set of all fields (including nested) referenced by the fields argument of keyDirective.
        • For each field in keyFields:
          • Let fieldType be the type of field.
          • fieldType must not be a List, Interface, or Union type.
Explanatory Text

The @key directive is used to define the set of fields that uniquely identify an entity. These fields must reference scalars or object types to ensure a valid and consistent representation of the entity across schemas. Fields of types List, Interface, or Union cannot be part of a @key because they do not have a well-defined unique value.

Examples

In this valid example, the Product type has a valid @key directive referencing the scalar field sku.

Example № 71
type Product @key(fields: "sku") {
  sku: String!
  name: String
}

In the following counter-example, the Product type has an invalid @key directive referencing a field (featuredItem) whose type is an interface, violating the rule.

Counter Example № 72
type Product @key(fields: "featuredItem { id }") {
  featuredItem: Node!
  sku: String!
}

interface Node {
  id: ID!
}

In this counter example, the @key directive references a field (tags) of type List, which is also not allowed.

Counter Example № 73
type Product @key(fields: "tags") {
  tags: [String!]!
  sku: String!
}

In this counter example, the @key directive references a field (relatedItems) of type Union, which violates the rule.

Counter Example № 74
type Product @key(fields: "relatedItems") {
  relatedItems: Related!
  sku: String!
}

union Related = Product | Service

type Service {
  id: ID!
}

3.2.1.14 Key Directive in Fields Argument

Error Code

KEY_DIRECTIVE_IN_FIELDS_ARG

Severity

ERROR

Formal Specification
  • Let schemas be the set of all source schemas.
    • Let types be the set of all object and interface types in schemas.
    • For each type in types:
      • Let keyDirectives be the set of all @key directives on type.
      • For each keyDirective in keyDirectives:
        • Let fields be the string value of the fields argument of keyDirective.
        • fields must not contain a directive application.
Explanatory Text

The @key directive specifies the set of fields used to uniquely identify an entity. The fields argument must consist of a valid GraphQL selection set that does not include any directive applications. Directives in the fields argument are not supported.

Examples

In this example, the fields argument of the @key directive does not include any directive applications, satisfying the rule.

Example № 75
type User @key(fields: "id name") {
  id: ID!
  name: String
}

In this counter-example, the fields argument of the @key directive includes a directive application @lowercase, which is not allowed.

Counter Example № 76
directive @lowercase on FIELD_DEFINITION

type User @key(fields: "id name @lowercase") {
  id: ID!
  name: String
}

In this example, the fields argument includes a directive application @lowercase nested inside the selection set, which is also invalid.

Counter Example № 77
directive @lowercase on FIELD_DEFINITION

type User @key(fields: "id name { firstName @lowercase }") {
  id: ID!
  name: FullName
}

type FullName {
  firstName: String
  lastName: String
}

3.2.1.15 Key Fields Has Arguments

Error Code

KEY_FIELDS_HAS_ARGS

Severity

ERROR

Formal Specification
  • Let schemas be the set of all source schemas.
    • Let types be the set of all object types in schemas that are annotated with the @key directive.
    • For each type in types:
      • Let keyFields be the set of fields referenced by the fields argument of the @key directive on type.
      • For each field in keyFields:
        • HasKeyFieldsArguments(field) must be true.
HasKeyFieldsArguments(field)
  1. If field has arguments:
    1. return false
  2. If field has a selection set:
    1. Let subFields be the set of all fields in the selection set of field.
    2. For each subField in subFields:
      1. If HasKeyFieldsArguments(subField) is false:
        1. return false
  3. return true
Explanatory Text

The @key directive is used to define the set of fields that uniquely identify an entity. These fields must not include any field that is defined with arguments, as arguments introduce variability that prevents consistent and valid entity resolution across schemas. Fields included in the fields argument of the @key directive must be static and consistently resolvable.
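The recursive walk over the key selection set can be sketched as follows; the pre-parsed selection shape (name plus nested sub-selections) and field metadata are assumptions of the sketch:

```python
# Sketch of KEY_FIELDS_HAS_ARGS: no field referenced by @key(fields:),
# at any nesting depth, may declare arguments.

def key_fields_argument_free(selections, type_fields):
    """selections: [(field_name, sub_selections), ...] parsed from the
    fields argument; type_fields: {name: {"args": [...], "fields": {...}}}."""
    for name, subs in selections:
        field = type_fields[name]
        if field["args"]:
            return False  # a key field must not declare arguments
        if subs and not key_fields_argument_free(subs, field["fields"]):
            return False  # recurse into nested selections
    return True

user_fields = {
    "id": {"args": [], "fields": {}},
    "tags": {"args": ["limit"], "fields": {}},  # tags(limit: Int = 10)
}
print(key_fields_argument_free([("id", []), ("tags", [])], user_fields))  # → False
print(key_fields_argument_free([("id", [])], user_fields))               # → True
```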

Examples

In this example, the User type has a valid @key directive that references the argument-free fields id and name.

Example № 78
type User @key(fields: "id name") {
  id: ID!
  name: String
  tags: [String]
}

In this counter-example, the @key directive references a field (tags) that is defined with arguments (limit), which is not allowed.

Counter Example № 79
type User @key(fields: "id tags") {
  id: ID!
  tags(limit: Int = 10): [String]
}

3.2.1.16 Key Invalid Syntax

Error Code

KEY_INVALID_SYNTAX

Severity

ERROR

Formal Specification
  • Let schemas be the set of all source schemas.
    • Let types be the set of all object or interface types in each schema.
    • For each type in types:
      • Let keyDirectives be the set of all @key directives on type.
      • For each keyDirective in keyDirectives:
        • Let fieldsArg be the string value of the fields argument of keyDirective.
        • Attempt to parse fieldsArg as a valid GraphQL selection set.
        • Parsing must not fail (e.g., missing braces, invalid tokens, unbalanced curly braces, or other syntax errors).
Explanatory Text

Each @key directive must specify the fields that uniquely identify an entity using a valid GraphQL selection set in its fields argument. If the fields argument string is syntactically incorrect (missing closing braces, containing invalid tokens, or otherwise malformed), it cannot be composed into a valid schema and triggers the KEY_INVALID_SYNTAX error.
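A real implementation would run the full GraphQL parser over the fields string. The sketch below checks only brace balance, which is a simplification sufficient to catch the unbalanced-selection-set case shown in the examples:

```python
# Simplified KEY_INVALID_SYNTAX check: verify that every "{" in the
# fields argument has a matching "}" (a full parser would also reject
# invalid tokens and other syntax errors).

def key_fields_syntax_ok(fields_arg: str) -> bool:
    depth = 0
    for ch in fields_arg:
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth < 0:
                return False  # closing brace with no matching opener
    return depth == 0          # every opened brace must be closed

print(key_fields_syntax_ok("sku featuredItem { id }"))  # → True
print(key_fields_syntax_ok("featuredItem { id"))        # → False
```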

Examples

In this valid scenario, the fields argument is a correctly formed selection set: "sku featuredItem { id }" is properly balanced and contains no syntax errors.

Example № 80
type Product @key(fields: "sku featuredItem { id }") {
  sku: String!
  featuredItem: Node!
}

interface Node {
  id: ID!
}

Here, the selection set "featuredItem { id" is missing the closing brace }. It is thus invalid syntax, causing a KEY_INVALID_SYNTAX error.

Counter Example № 81
type Product @key(fields: "featuredItem { id") {
  featuredItem: Node!
  sku: String!
}

interface Node {
  id: ID!
}

3.2.1.17 Key Invalid Fields

Error Code

KEY_INVALID_FIELDS

Severity

ERROR

Formal Specification
  • Let schemas be the set of all source schemas.
    • Let types be the set of all object and interface types in schemas.
    • For each type in types:
      • Let keyDirectives be the set of all @key directives on type.
      • For each keyDirective in keyDirectives:
        • Let fieldsArg be the string value of the fields argument of keyDirective.
        • Let selections be the set of fields in the selection set of fieldsArg.
        • For each selection in selections:
          • IsValidKeyField(selection, type) must be true.
IsValidKeyField(selection, type)
  1. If selection is not defined on type:
    1. return false
  2. If selection has a selection set:
    1. Let subType be the return type of selection.
    2. Let subFields be the set of all fields in the selection set of selection.
    3. For each subField in subFields:
      1. IsValidKeyField(subField, subType) must be true.
  3. return true
Explanatory Text

Even if the selection set for @key(fields: "…") is syntactically valid, field references within that selection set must also refer to actual fields on the annotated type. This includes nested selections, which must appear on the corresponding return type. If any referenced field is missing or incorrectly named, composition fails with a KEY_INVALID_FIELDS error because the entity key cannot be resolved correctly.

Examples

In this valid example, the fields argument of the @key directive is properly defined with valid syntax and references existing fields.

Example № 82
type Product @key(fields: "sku featuredItem { id }") {
  sku: String!
  featuredItem: Node!
}

interface Node {
  id: ID!
}

In this counter-example, the fields argument of the @key directive references a field id, which does not exist on the Product type.

Counter Example № 83
type Product @key(fields: "id") {
  sku: String!
}

3.2.1.18 Provides Directive in Fields Argument

Error Code

PROVIDES_DIRECTIVE_IN_FIELDS_ARG

Severity

ERROR

Formal Specification
  • Let schemas be the set of all source schemas.
    • Let fieldsWithProvides be the set of all fields annotated with the @provides directive in schemas.
    • For each field in fieldsWithProvides:
      • Let fields be the selected fields of the fields argument of the @provides directive on field.
      • For each selection in fields:
        • HasProvidesDirective(selection) must be false.
HasProvidesDirective(selection)
  1. If selection has a directive application:
    1. return true
  2. If selection has a selection set:
    1. Let subSelections be the selections in selection
    2. For each subSelection in subSelections:
      1. If HasProvidesDirective(subSelection) is true
        1. return true
  3. return false
Explanatory Text

The @provides directive is used to specify the set of fields on an object type that a resolver provides for the parent type. The fields argument must consist of a valid GraphQL selection set without any directive applications, as directives within the fields argument are not supported.

Examples

In this example, the fields argument of the @provides directive does not have any directive applications, satisfying the rule.

Example № 84
type User @key(fields: "id name") {
  id: ID!
  name: String
  profile: Profile @provides(fields: "name")
}

type Profile {
  id: ID!
  name: String
}

In this counter-example, the fields argument of the @provides directive has a directive application @lowercase, which is not allowed.

Counter Example № 85
directive @lowercase on FIELD_DEFINITION

type User @key(fields: "id name") {
  id: ID!
  name: String
  profile: Profile @provides(fields: "name @lowercase")
}

type Profile {
  id: ID!
  name: String
}

3.2.1.19 Provides Fields Has Arguments

Error Code

PROVIDES_FIELDS_HAS_ARGS

Severity

ERROR

Formal Specification
  • Let schemas be the set of all source schemas.
    • Let fieldsWithProvides be the set of all fields annotated with the @provides directive in schemas.
    • For each field in fieldsWithProvides:
      • Let selections be the field selections of the fields argument of the @provides directive on field.
      • Let type be the return type of field
      • For each selection in selections:
        • ProvidesHasArguments(selection, type) must be false.
ProvidesHasArguments(selection, type)
  1. Let field be the field of type selected by selection
  2. If field has arguments:
    1. return true
  3. If selection has a selection set:
    1. Let subSelections be the selections in selection
    2. Let subType be the return type of field
    3. For each subSelection in subSelections:
      1. If ProvidesHasArguments(subSelection, subType) is true
        1. return true
  4. return false
Explanatory Text

The @provides directive specifies fields that a resolver provides for the parent type. The fields argument must reference fields that do not have arguments, as fields with arguments introduce variability that is incompatible with the consistent behavior expected of @provides.

Examples
In this example, the tags field referenced in the fields argument of the @provides directive is defined without arguments, satisfying the rule:

Example № 86
type User @key(fields: "id") {
  id: ID!
  tags: [String]
}

type Article @key(fields: "id") {
  id: ID!
  author: User! @provides(fields: "tags")
}

This violates the rule because the tags field referenced in the fields argument of the @provides directive is defined with arguments (limit: UserType = ADMIN).

Counter Example № 87
type User @key(fields: "id") {
  id: ID!
  tags(limit: UserType = ADMIN): [String]
}

enum UserType {
  REGULAR
  ADMIN
}

type Article @key(fields: "id") {
  id: ID!
  author: User! @provides(fields: "tags")
}

3.2.1.20 Provides Fields Missing External

Error Code

PROVIDES_FIELDS_MISSING_EXTERNAL

Severity

ERROR

Formal Specification
  • Let schemas be the set of all source schemas.
  • For each schema in schemas
    • Let objectTypes be the set of all object types in schema.
    • For each objectType in objectTypes:
      • Let providingFields be the set of fields on objectType annotated with @provides.
      • For each field in providingFields:
        • Let referencedFields be the set of fields referenced by the fields argument of the @provides directive on field.
        • For each referencedField in referencedFields:
          • If referencedField is not marked as @external
            • Produce a PROVIDES_FIELDS_MISSING_EXTERNAL error.
Explanatory Text

The @provides directive indicates that an object type field will supply additional fields belonging to the return type in this execution-specific path. Any field listed in the @provides(fields: ...) argument must therefore be external in the local schema, meaning that the local schema itself does not provide it.

This rule disallows selecting non-external fields in a @provides selection set. If a field is already provided by the same schema in all execution paths, there is no need to use @provides for it.
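Once the @provides selection has been resolved against the return type, the rule is a per-field flag check. The field-name list and metadata map below are assumptions of this sketch:

```python
# Sketch of PROVIDES_FIELDS_MISSING_EXTERNAL: every field named in a
# @provides(fields: ...) argument must be marked @external on the
# return type in the same source schema.

def check_provides_missing_external(provided_names, return_type_fields):
    """provided_names: field names listed in @provides(fields: ...).
    return_type_fields: {name: {"external": bool}} for the return type."""
    return [
        f"PROVIDES_FIELDS_MISSING_EXTERNAL: {name}"
        for name in provided_names
        if not return_type_fields[name].get("external")
    ]

# User.address is defined locally (not @external), so providing it is invalid.
user_fields = {"id": {"external": False}, "address": {"external": False}}
print(check_provides_missing_external(["address"], user_fields))
# → ['PROVIDES_FIELDS_MISSING_EXTERNAL: address']
```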

Examples

Here, the Order type from this schema is providing fields on User through @provides. The name field of User is not defined in this schema; it is declared with @external indicating that the name field comes from elsewhere. Thus, referencing name under @provides(fields: "name") is valid.

Example № 88
type Order {
  id: ID!
  customer: User @provides(fields: "name")
}

type User @key(fields: "id") {
  id: ID!
  name: String @external
}

In this counter-example, User.address is not marked as @external in the same schema that applies @provides. This means the schema already provides the address field in all possible paths, so using @provides(fields: "address") is invalid.

Counter Example № 89
type User {
  id: ID!
  address: String
}

type Order {
  id: ID!
  buyer: User @provides(fields: "address")
}

3.2.1.21 Query Root Type Inaccessible

Error Code

QUERY_ROOT_TYPE_INACCESSIBLE

Severity

ERROR

Formal Specification
  • Let schemas be the set of all source schemas.
  • For each schema in schemas:
    • Let queryType be the query operation type defined in schema.
    • If queryType is annotated with @inaccessible:
      • Produce a QUERY_ROOT_TYPE_INACCESSIBLE error.
Explanatory Text

Every source schema that contributes to the final composite schema must expose a public (accessible) root query type. Marking the root query type as @inaccessible makes it invisible to the gateway, defeating its purpose as the primary entry point for queries and lookups.

Examples

In this example, no @inaccessible annotation is applied to the query root, so the rule is satisfied.

Example № 90
extend schema {
  query: Query
}

type Query {
  allBooks: [Book]
}

type Book {
  id: ID!
  title: String
}

Since the schema marks the query root type as @inaccessible, the rule is violated. QUERY_ROOT_TYPE_INACCESSIBLE is raised because a schema’s root query type cannot be hidden from consumers.

Counter Example № 91
extend schema {
  query: Query
}

type Query @inaccessible {
  allBooks: [Book]
}

type Book {
  id: ID!
  title: String
}

3.2.1.22 Require Directive in Fields Argument

Error Code

REQUIRE_DIRECTIVE_IN_FIELDS_ARG

Severity

ERROR

Formal Specification
  • Let schemas be the set of all source schemas.
  • For each schema in schemas:
    • Let compositeTypes be the set of all composite types in schema.
    • For each composite in compositeTypes:
      • Let fields be the set of fields on composite
      • Let arguments be the set of all arguments on fields
      • For each argument in arguments:
        • If argument is not marked with @require:
          • Continue
        • Let fieldsArg be the value of the fields argument of the @require directive on argument.
        • If fieldsArg contains a directive application:
          • Produce a REQUIRE_DIRECTIVE_IN_FIELDS_ARG error.
Explanatory Text

The @require directive is used to specify fields on the same type that an argument depends on in order to resolve the annotated field. When using @require(fields: "…"), the fields argument must be a valid selection set string without any additional directive applications. Applying a directive (e.g., @lowercase) inside this selection set is not supported and triggers the REQUIRE_DIRECTIVE_IN_FIELDS_ARG error.
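As a rough illustration, the check can be sketched as a character scan that flags any directive token in the fields string; a real composer would parse the selection map and inspect its AST, and the function name here is an assumption:

```python
def require_fields_has_directive(fields_arg: str) -> bool:
    """Return True if the @require fields selection contains a
    directive application (an '@' token outside quoted strings).

    Minimal sketch only; a real composer parses the selection map
    and walks the resulting AST instead of scanning characters.
    """
    in_string = False
    for ch in fields_arg:
        if ch == '"':
            in_string = not in_string
        elif ch == "@" and not in_string:
            return True
    return False
```

With this sketch, `"name @lowercase"` is flagged while `"name"` passes.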

Examples

In this valid usage, the @require directive’s fields argument references name without any directive applications, avoiding the error.

Example № 92
type User @key(fields: "id name") {
  id: ID!
  profile(name: String! @require(fields: "name")): Profile
}

type Profile {
  id: ID!
  name: String
}

Because the @require selection (name @lowercase) includes a directive application (@lowercase), this violates the rule and triggers a REQUIRE_DIRECTIVE_IN_FIELDS_ARG error.

Counter Example № 93
type User @key(fields: "id name") {
  id: ID!
  name: String
  profile(name: String! @require(fields: "name @lowercase")): Profile
}

type Profile {
  id: ID!
  name: String
}

3.2.1.23 Require Invalid Fields Type

Error Code

REQUIRE_INVALID_FIELDS_TYPE

Severity

ERROR

Formal Specification
  • Let schemas be the set of all source schemas.
  • For each schema in schemas:
    • Let compositeTypes be the set of all composite types in schema.
    • For each composite in compositeTypes:
      • Let fields be the set of fields on composite.
      • Let arguments be the set of all arguments on fields.
      • For each argument in arguments:
        • If argument is not annotated with @require:
          • Continue
        • Let fieldsArg be the value of the fields argument of the @require directive on argument.
        • If fieldsArg is not a string:
          • Produce a REQUIRE_INVALID_FIELDS_TYPE error.
Explanatory Text

When using the @require directive, the fields argument must always be a string that defines a (potentially nested) selection set of fields from the same type. If the fields argument is provided as a type other than a string (such as an integer, boolean, or enum), the directive usage is invalid and will cause schema composition to fail.

Examples

In the following example, the @require directive’s fields argument is a valid string and satisfies the rule.

Example № 94
type User @key(fields: "id") {
  id: ID!
  profile(name: String! @require(fields: "name")): Profile
}

type Profile {
  id: ID!
  name: String
}

Since fields is set to 123 (an integer) instead of a string, this violates the rule and triggers a REQUIRE_INVALID_FIELDS_TYPE error.

Counter Example № 95
type User @key(fields: "id") {
  id: ID!
  profile(name: String! @require(fields: 123)): Profile
}

type Profile {
  id: ID!
  name: String
}

3.2.1.24 Require Invalid Syntax

Error Code

REQUIRE_INVALID_SYNTAX

Severity

ERROR

Formal Specification
  • Let schemas be the set of all source schemas.
  • For each schema in schemas
    • Let compositeTypes be the set of all composite types in schema.
    • For each composite in compositeTypes:
      • Let fields be the set of fields on composite.
      • Let arguments be the set of all arguments on fields.
      • For each argument in arguments:
        • If argument is not annotated with @require:
          • Continue
        • Let fieldsArg be the string value of the fields argument of the @require directive on argument.
        • fieldsArg must be parsable as a valid selection map.
Explanatory Text

The @require directive’s fields argument must be syntactically valid GraphQL. If the selection map string is malformed (e.g., missing closing braces, unbalanced quotes, invalid tokens), then the schema cannot be composed correctly. In such cases, the error REQUIRE_INVALID_SYNTAX is raised.

Examples

In the following example, the @require directive’s fields argument is a valid selection map and satisfies the rule.

Example № 96
type User @key(fields: "id") {
  id: ID!
  profile(name: String! @require(fields: "name")): Profile
}

type Profile {
  id: ID!
  name: String
}

In the following counter-example, the @require directive’s fields argument has invalid syntax because it is missing a closing brace.

This violates the rule and triggers a REQUIRE_INVALID_SYNTAX error.

Counter Example № 97
type Book {
  id: ID!
  title(lang: String! @require(fields: "author { name ")): String
}

type Author {
  name: String
}

3.2.1.25 Type Definition Invalid

Error Code

TYPE_DEFINITION_INVALID

Severity

ERROR

Formal Specification
  • Let schema be one of the source schemas.
  • Let types be the set of built-in types (for example, FieldSelectionMap) defined by the composition specification.
  • For each type in types:
    • type must strictly equal the built-in type defined by the composition specification.
Explanatory Text

Certain types are reserved in the composite schema specification for specific purposes and must adhere to the specification’s definitions. For example, FieldSelectionMap is a built-in scalar that represents a selection of fields as a string. Redefining these built-in types with a different kind (e.g., an input object, enum, union, or object type) is disallowed and makes the composition invalid.

This rule ensures that built-in types maintain their expected shapes and semantics so the composed schema can correctly interpret them.

Examples

In the following counter-example, FieldSelectionMap is declared as an input type instead of the required scalar. This leads to a TYPE_DEFINITION_INVALID error because the defined scalar FieldSelectionMap is being overridden by an incompatible definition.

Counter Example № 98
directive @require(field: FieldSelectionMap!) on ARGUMENT_DEFINITION

input FieldSelectionMap {
  fields: [String!]!
}

3.2.1.26 Type Kind Mismatch

Error Code

TYPE_KIND_MISMATCH

Severity

ERROR

Formal Specification
  • Let schemas be the set of all source schemas.
  • For each type name typeName defined in at least one of these schemas:
    • Let types be the set of all types named typeName across all source schemas.
    • Let typeKinds be the set of type kinds in types
    • typeKinds must contain exactly one element.
Explanatory Text

Each named type must represent the same kind of GraphQL type across all source schemas. For instance, a type named User must consistently be an object type, or consistently be an interface, and so forth. If one schema defines User as an object type, while another schema declares User as an interface (or input object, union, etc.), the schema composition process cannot merge these definitions coherently.

This rule ensures semantic consistency: a single type name cannot serve multiple, incompatible purposes in the final composed schema.
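The check reduces to grouping type kinds by type name; the sketch below uses a deliberately simplified schema model (a dict of type name to kind string), which is an assumption for illustration:

```python
from collections import defaultdict

def find_type_kind_mismatches(schemas):
    """Report type names that appear with more than one kind across
    the source schemas (the TYPE_KIND_MISMATCH condition).

    `schemas` is a simplified, hypothetical model: each schema is a
    dict mapping type name -> kind string ("OBJECT", "INTERFACE",
    "INPUT_OBJECT", "UNION", ...).
    """
    kinds_by_name = defaultdict(set)
    for schema in schemas:
        for type_name, kind in schema.items():
            kinds_by_name[type_name].add(kind)
    # Valid composition requires exactly one kind per type name.
    return {name: sorted(kinds)
            for name, kinds in kinds_by_name.items()
            if len(kinds) > 1}
```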

Examples

All schemas agree that User is an object type:

# Schema A
type User {
  id: ID!
  name: String
}

# Schema B
type User {
  id: ID!
  email: String
}

# Schema C
type User {
  id: ID!
  joinedAt: String
}

In the following counter-example, User is defined as an object type in one of the schemas and as an interface in another. This violates the rule and results in a TYPE_KIND_MISMATCH error.

# Schema A: `User` is an object type
type User {
  id: ID!
  name: String
}

# Schema B: `User` is an interface
extend interface User {
  id: ID!
  friends: [User!]!
}

# Schema C: `User` is an input object
extend input User {
  id: ID!
}

3.2.1.27 Provides Invalid Syntax

Error Code

PROVIDES_INVALID_SYNTAX

Severity

ERROR

Formal Specification
  • Let schemas be the set of all source schemas.
  • For each schema in schemas
    • Let fieldsWithProvides be the set of all fields annotated with the @provides directive in schema.
    • For each field in fieldsWithProvides:
      • Let fieldsArg be the string value of the fields argument of the @provides directive on field.
      • fieldsArg must be a valid selection set string
Explanatory Text

The @provides directive’s fields argument must be a syntactically valid selection set string, as if you were selecting fields in a GraphQL query. If the selection set is malformed (e.g., missing braces, unbalanced quotes, or invalid tokens), the schema composition fails with a PROVIDES_INVALID_SYNTAX error.
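A minimal syntax screen can be sketched as a balance check over braces and quotes; this is only an approximation of the full GraphQL selection-set parser that a real composer would use:

```python
def is_balanced_selection_set(fields_arg: str) -> bool:
    """Cheap syntax screen for a @provides fields string: braces must
    balance and string quotes must close. Catches only the common
    failure modes named in the rule; the real check is a full parse
    of the selection set.
    """
    depth = 0
    in_string = False
    for ch in fields_arg:
        if ch == '"':
            in_string = not in_string
        elif not in_string:
            if ch == "{":
                depth += 1
            elif ch == "}":
                depth -= 1
                if depth < 0:       # closing brace with no opener
                    return False
    return depth == 0 and not in_string
```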

Examples

Here, the @provides directive’s fields argument is a valid selection set:

Example № 99
type User @key(fields: "id") {
  id: ID!
  address: Address @provides(fields: "street city")
}

type Address {
  street: String
  city: String
}

In this counter-example, the fields argument is missing a closing brace. It cannot be parsed as a valid GraphQL selection set, triggering a PROVIDES_INVALID_SYNTAX error.

Counter Example № 100
type User @key(fields: "id") {
  id: ID!
  address: Address @provides(fields: "{ street city ")
}

3.2.1.28 Invalid GraphQL

Error Code

INVALID_GRAPHQL

Severity

ERROR

Formal Specification
  • Let schemas be the set of all source schemas to be composed.
  • For each schema in schemas:
    • schema must be syntactically valid.
    • schema must be a semantically valid GraphQL schema according to the GraphQL specification.
Explanatory Text

Before composition, every individual source schema must be valid as per the official GraphQL specification. Common reasons a schema may be considered “invalid GraphQL” include:

  • Syntax Errors: Missing braces, invalid tokens, or misplaced punctuation.
  • Unknown Types: Referencing types that are not defined within the schema or imported from elsewhere.
  • Invalid Directive Usage: Omitting required arguments to directives or using directives in disallowed locations.
  • Invalid Default Values: Providing default values for arguments or fields that do not conform to the type (e.g., a default of null for a non-null field, an invalid enum value, etc.).
  • Conflicting Type Definitions: Defining or overriding a built-in type or directive incorrectly.

When any of these validation checks fail for a particular source schema, that schema does not meet the baseline requirements for composition, and the composition process cannot proceed. An INVALID_GRAPHQL error is raised, prompting the schema owner to correct the GraphQL violations before retrying composition.

Examples

In the following counter-example, the schema is invalid because the type User is referenced in the Query type but never defined:

Counter Example № 101
type Query {
  user: User
}

# The type "User" is never defined; this is invalid GraphQL.

In this counter-example, "INVALID_VALUE" is not a valid Role, causing INVALID_GRAPHQL.

Counter Example № 102
enum Role {
  ADMIN
  USER
}

type Query {
  users(role: Role = "INVALID_VALUE"): [String]
}

The GraphQL spec requires all non-null directive arguments to be supplied. The omission of the fields argument in the @provides directive triggers INVALID_GRAPHQL.

Counter Example № 103
directive @provides(fields: String!) on FIELD_DEFINITION

type Product {
  price: Float @provides
  # "fields" argument is required, but not provided.
}

3.2.1.29 Override Collision with Another Directive

Error Code

OVERRIDE_COLLISION_WITH_ANOTHER_DIRECTIVE

Severity

ERROR

Formal Specification
  • Let schemas be the set of all source schemas to be composed.
  • For each schema in schemas:
    • Let types be the set of all composite types in schema.
    • For each type in types:
      • Let fields be the set of fields on type.
      • For each field in fields:
        • If field is annotated with @override:
          • field must not be annotated with @external
Explanatory Text

The @override directive designates that ownership of a field is transferred from one source schema to another in the resulting composite schema. When such a transfer occurs, that field cannot also be annotated @external. A field declared as @external is originally defined in a different source schema. Overriding a field and simultaneously claiming it is external to the local schema is contradictory.

In this case composition fails with an OVERRIDE_COLLISION_WITH_ANOTHER_DIRECTIVE error.

Examples

In this scenario, User.fullName is defined in Schema A but overridden in Schema B. Since @override is not combined with @external on the same field, no collision occurs.

Example № 104
# Source Schema A
type User {
  id: ID!
  fullName: String
}

# Source Schema B
type User {
  id: ID!
  fullName: String @override(from: "SchemaA")
}

Here, amount is marked with both @override and @external. This violates the rule because the field is simultaneously labeled as “override from another schema” and “external” in the local schema, producing an OVERRIDE_COLLISION_WITH_ANOTHER_DIRECTIVE error.

Counter Example № 105
# Source Schema A
type Payment {
  id: ID!
  amount: Int
}

# Source Schema B
type Payment {
  id: ID!
  amount: Int @override(from: "SchemaA") @external
}

3.2.1.30 Override from Self

Error Code

OVERRIDE_FROM_SELF

Severity

ERROR

Formal Specification
  • Let schemas be the set of all source schemas to be composed.
  • For each schema in schemas:
    • Let types be the set of all composite types in schema.
    • For each type in types:
      • Let fields be the set of fields on type.
      • For each field in fields:
        • If field is annotated with @override:
          • Let from be the value of the from argument of the @override directive on field.
          • from must not be the same as the name of schema.
Explanatory Text

When using @override, the from argument indicates the name of the source schema that originally owns the field. Overriding from the same schema creates a contradiction, as it implies both local and transferred ownership of the field within one schema. If the from value matches the local schema name, it triggers an OVERRIDE_FROM_SELF error.

Examples

In the following example, Schema B overrides the field amount from Schema A. The two schema names are different, so no error is raised.

Example № 106
# Source Schema A
type Bill {
  id: ID!
  amount: Int
}

# Source Schema B
type Bill {
  id: ID!
  amount: Int @override(from: "SchemaA")
}

In the following counter-example, the local schema is also "SchemaA", and the from argument is "SchemaA". Overriding a field from the same schema is not allowed, causing an OVERRIDE_FROM_SELF error.

Counter Example № 107
# Source Schema A (named "SchemaA")
type Bill {
  id: ID!
  amount: Int @override(from: "SchemaA")
}

3.2.1.31 Override on Interface

Error Code

OVERRIDE_ON_INTERFACE

Severity

ERROR

Formal Specification
  • Let schemas be the set of all source schemas to be composed.
  • For each schema in schemas:
    • Let types be the set of all interface types in schema.
    • For each type in types:
      • Let fields be the set of fields on type.
      • For each field in fields:
        • field must not be annotated with @override
Explanatory Text

The @override directive designates that ownership of a field is transferred from one source schema to another. In the context of interface types, fields are abstract—objects that implement the interface are responsible for providing the actual fields. Consequently, it is invalid to attach @override directly to an interface field. Doing so leads to an OVERRIDE_ON_INTERFACE error because there is no concrete field implementation on the interface itself that can be overridden.

Examples

In this valid example, @override is used on a field of an object type, ensuring that the field definition is concrete and can be reassigned to another schema.

Since @override is not used on any interface fields, no error is produced.

Example № 108
# Source Schema A
type Order {
  id: ID!
  amount: Int
}

# Source Schema B
type Order {
  id: ID!
  amount: Int @override(from: "SchemaA")
}

In the following counter-example, Bill.amount is declared on an interface type and annotated with @override. This violates the rule because the interface field itself is not eligible for ownership transfer. The composition fails with an OVERRIDE_ON_INTERFACE error.

Counter Example № 109
# Source Schema A
interface Bill {
  id: ID!
  amount: Int @override(from: "SchemaB")
}

3.2.1.32 Override Source Has Override

Error Code

OVERRIDE_SOURCE_HAS_OVERRIDE

Severity

ERROR

Formal Specification
  • Let schemas be the set of all source schemas to be composed.
  • Let groupedTypes be a map grouping all object types from schemas by their type name.
  • For each typeGroup in groupedTypes:
    • Let types be the set of object types in typeGroup.
    • Let groupedFields be a map grouping every field across all types by their field name.
    • For each fieldGroup in groupedFields:
      • Let fields be the set of field definitions in fieldGroup.
      • Let overrides be the list of @override directives present among those fields.
      • If overrides has fewer than 2 elements:
        • Continue
      • Let firstOverride be the first directive in overrides.
      • Let from be the value of the from argument on firstOverride.
      • Let sourceSchema be the schema defining firstOverride.
      • Let visited be an empty set.
      • Add sourceSchema to visited.
      • While from is not null:
        • from must not be in visited.
        • Add from to visited.
        • Let sourceField be the field in fields that belongs to the schema named from.
        • If sourceField does not exist:
          • Break
        • If sourceField is not annotated with @override:
          • Break
        • Let from be the value of the from argument on that @override directive.
      • The size of visited must be equal to the size of overrides.
Explanatory Text

A field marked with @override signifies that its ownership is being taken over by another schema. If multiple schemas try to override the same field, or if the ownership chain loops back on itself, the composed schema has more than one @override for a single field. This creates ambiguity about which schema ultimately owns that field.

Hence, only one @override may ever apply to a particular field across all source schemas. Attempting multiple overrides, or forming any cycle of overrides for the same field, triggers the OVERRIDE_SOURCE_HAS_OVERRIDE error.
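The single-owner requirement can be sketched compactly. The schema model below, a map from schema name to the @override from argument, is an assumption for illustration:

```python
def check_override_ownership(field_overrides):
    """Detect ambiguous @override usage for a single field.

    `field_overrides` is a hypothetical model mapping schema name ->
    the `from` argument of that schema's @override on the field;
    schemas without an @override on the field are absent.
    """
    # Only one @override may apply to a field across all source
    # schemas; two or more (a fork onto one origin, or a cycle such
    # as A -> B -> C -> A) leave ownership ambiguous.
    if len(field_overrides) > 1:
        return "OVERRIDE_SOURCE_HAS_OVERRIDE"
    return None
```

Both the circular and the forked counter-examples above collapse to the same condition: more than one @override targeting the same field.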

Examples

In this scenario, Bill.amount is originally owned by Schema A but is overridden in Schema B. No other schema further attempts to override the same field, so the composition is valid.

Example № 110
# Source Schema A
type Bill {
  id: ID!
  amount: Int
}

# Source Schema B
type Bill {
  id: ID!
  amount: Int @override(from: "SchemaA")
}

Here, Schema A overrides Bill.amount from Schema B, while Schema B also overrides the same field from Schema A. This circular override makes it impossible to discern a single “owner” of the field Bill.amount, raising an OVERRIDE_SOURCE_HAS_OVERRIDE error.

Counter Example № 111
# Source Schema A (named "SchemaA")
type Bill {
  id: ID!
  amount: Int @override(from: "SchemaB")
}

# Source Schema B (named "SchemaB")
type Bill {
  id: ID!
  amount: Int @override(from: "SchemaA")
}

In this case, the same field Bill.amount is overridden successively by A, then B, then C. Tracing these overrides forms a cycle (A → B → C → A). This again produces an OVERRIDE_SOURCE_HAS_OVERRIDE error.

Counter Example № 112
# Source Schema A (named "A")
type Bill {
  id: ID!
  amount: Int @override(from: "B")
}

# Source Schema B (named "B")
type Bill {
  id: ID!
  amount: Int @override(from: "C")
}

# Source Schema C (named "C")
type Bill {
  id: ID!
  amount: Int @override(from: "A")
}

In the following counter-example, the field Bill.amount is overridden by multiple schemas. The overrides do not form a cycle, hence there are multiple overrides for the same field, triggering an OVERRIDE_SOURCE_HAS_OVERRIDE error.

Counter Example № 113
# Source Schema A
type Bill {
  id: ID!
  amount: Int @override(from: "SchemaC")
}

# Source Schema B
type Bill {
  id: ID!
  amount: Int @override(from: "SchemaC")
}

# Source Schema C
type Bill {
  id: ID!
  amount: Int
}

3.2.1.33 External Collision with Another Directive

Error Code

EXTERNAL_COLLISION_WITH_ANOTHER_DIRECTIVE

Severity

ERROR

Formal Specification
  • Let schemas be the set of all source schemas to be composed.
  • For each schema in schemas:
    • Let types be the set of all composite types in schema.
    • For each type in types:
      • Let fields be the set of fields on type.
      • For each field in fields:
        • If field is annotated with @external:
          • For each argument in field:
            • argument must not be annotated with @require
          • field must not be annotated with @provides
Explanatory Text

The @external directive indicates that a field is defined in a different source schema, and the current schema merely references it. Therefore, a field marked with @external must not simultaneously carry directives that assume local ownership or resolution responsibility, such as:

  • @provides: Declares that the field can supply additional nested fields from the local schema, which conflicts with the notion of an external field whose definition resides elsewhere.
  • @require: Specifies dependencies on other fields to resolve this field. Since @external fields are not locally resolved, there is no need for @require.
  • @override: Transfers ownership of the field’s definition from one schema to another, which is incompatible with an already-external field definition. Yet this is covered by the OVERRIDE_COLLISION_WITH_ANOTHER_DIRECTIVE rule.

Any combination of @external with either @provides or @require on the same field results in inconsistent semantics. In such scenarios, an EXTERNAL_COLLISION_WITH_ANOTHER_DIRECTIVE error is raised.
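A sketch of the collision check, assuming a simplified field model that lists directive names per field and per argument:

```python
def external_collision_errors(field):
    """Check one field for the @external collisions named above.

    `field` is a hypothetical model:
    {"directives": [...directive names...],
     "argument_directives": {arg_name: [...directive names...]}}.
    """
    errors = []
    if "external" in field["directives"]:
        # @external + @provides on the same field is contradictory.
        if "provides" in field["directives"]:
            errors.append("EXTERNAL_COLLISION_WITH_ANOTHER_DIRECTIVE")
        # @require on any argument of an @external field is likewise invalid.
        if any("require" in directives
               for directives in field["argument_directives"].values()):
            errors.append("EXTERNAL_COLLISION_WITH_ANOTHER_DIRECTIVE")
    return errors
```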

Examples

In this example, method is only annotated with @external in Schema B, without any other directive. This usage is valid.

Example № 114
# Source Schema A
type Payment {
  id: ID!
  method: String
}

# Source Schema B
type Payment {
  id: ID!
  # This field is external, defined in Schema A.
  method: String @external
}

In this counter-example, description is annotated with @external and also with @provides. Because @external and @provides cannot co-exist on the same field, an EXTERNAL_COLLISION_WITH_ANOTHER_DIRECTIVE error is produced.

Counter Example № 115
# Source Schema A
type Invoice {
  id: ID!
  description: String
}

# Source Schema B
type Invoice {
  id: ID!
  description: String @external @provides(fields: "length")
}

The following example is invalid, since title is marked with both @external and has an argument that is annotated with @require. This conflict leads to an EXTERNAL_COLLISION_WITH_ANOTHER_DIRECTIVE error.

Counter Example № 116
# Source Schema A
type Book {
  id: ID!
  title: String
  subtitle: String
}

# Source Schema B
type Book {
  id: ID!
  title(subtitle: String @require(fields: "subtitle")): String @external
}

3.2.1.34 Key Invalid Fields Type

Error Code

KEY_INVALID_FIELDS_TYPE

Severity

ERROR

Formal Specification
  • Let schemas be the set of all source schemas to be composed.
  • For each schema in schemas:
    • Let types be the set of all composite types in schema.
    • For each type in types:
      • If type is annotated with @key:
        • Let fieldsArg be the value of the fields argument in the @key directive.
        • fieldsArg must be a string.
Explanatory Text

The @key directive designates the fields used to identify a particular object uniquely. The fields argument accepts a string that represents a selection set (for example, "id", or "id otherField"). If the fields argument is provided as any non-string type (e.g., Boolean, Int, Array), the schema fails to compose correctly because it cannot parse a valid field selection.

Examples

In this example, the @key directive’s fields argument is the string "id uuid", identifying two fields that form the object key. This usage is valid.

Example № 117
type User @key(fields: "id uuid") {
  id: ID!
  uuid: ID!
  name: String
}

type Query {
  users: [User]
}

Here, the fields argument is provided as a boolean (true) instead of a string. This violates the directive requirement and triggers a KEY_INVALID_FIELDS_TYPE error.

Counter Example № 118
type User @key(fields: true) {
  id: ID
}

3.2.1.35 Provides Invalid Fields Type

Error Code

PROVIDES_INVALID_FIELDS_TYPE

Severity

ERROR

Formal Specification
  • Let schemas be the set of all source schemas to be composed.
  • For each schema in schemas:
    • Let types be the set of all composite types in schema.
    • For each type in types:
      • Let fields be the set of fields on type.
      • For each field in fields:
        • If field is annotated with @provides:
          • Let fieldsArg be the value of the fields argument on the @provides directive.
          • fieldsArg must be a string.
Explanatory Text

The @provides directive indicates that a field is providing one or more additional fields on the returned (child) type. The fields argument accepts a string representing a GraphQL selection set (for example, "title author"). If the fields argument is given as a non-string type (e.g., Boolean, Int, Array), the schema fails to compose because it cannot interpret a valid selection set.

Examples

In this valid example, the @provides directive on details uses the string "features specifications" to specify that both fields are provided in the child type ProductDetails.

Example № 119
type Product {
  id: ID!
  details: ProductDetails @provides(fields: "features specifications")
}

type ProductDetails {
  features: [String]
  specifications: String
}

type Query {
  products: [Product]
}

Here, the @provides directive includes a numeric value (123) instead of a string in its fields argument. This invalid usage raises a PROVIDES_INVALID_FIELDS_TYPE error.

Counter Example № 120
type Product {
  id: ID!
  details: ProductDetails @provides(fields: 123)
}

type ProductDetails {
  features: [String]
  specifications: String
}

3.2.1.36 Provides on Non-Composite Field

Error Code

PROVIDES_ON_NON_COMPOSITE_FIELD

Severity

ERROR

Formal Specification
  • Let schemas be the set of all source schemas to be composed.
  • For each schema in schemas:
    • Let types be the set of all object and interface types in schema.
    • For each type in types:
      • Let fields be the set of fields on type.
      • For each field in fields:
        • If field is annotated with @provides:
          • Let fieldType be the base return type of field (i.e., unwrapped of any [ ] or !).
          • fieldType must be an object or interface type.
Explanatory Text

The @provides directive allows a field to “provide” additional nested fields on the composite type it returns. If a field’s base type is not an object or interface type (e.g., String, Int, Boolean, Enum, Union, or an Input type), it cannot hold nested fields for @provides to select. Consequently, attaching @provides to such a field is invalid and raises a PROVIDES_ON_NON_COMPOSITE_FIELD error.

Examples

Here, profile has an object base type Profile. The @provides directive can validly specify sub-fields like settings { theme }.

Example № 121
type Profile {
  email: String
  settings: Settings
}

type Settings {
  notificationsEnabled: Boolean
  theme: String
}

type User {
  id: ID!
  profile: Profile @provides(fields: "settings { theme }")
}

In this counter-example, email has a scalar base type (String). Because scalars do not expose sub-fields, attaching @provides to email triggers a PROVIDES_ON_NON_COMPOSITE_FIELD error.

Counter Example № 122
type User {
  id: ID!
  email: String @provides(fields: "length")
}

3.2.1.37 External on Interface

Error Code

EXTERNAL_ON_INTERFACE

Severity

ERROR

Formal Specification
  • Let schemas be the set of all source schemas to be composed.
  • For each schema in schemas:
    • Let types be the set of all composite types in schema.
    • For each type in types:
      • If type is an interface type:
        • Let fields be the set of fields on type.
        • For each field in fields:
          • field must not be annotated with @external
Explanatory Text

The @external directive indicates that a field is defined and resolved elsewhere, not in the current schema. In the case of an interface type, fields are abstract - they do not have direct resolutions at the interface level. Instead, each implementing object type provides the concrete field implementations. Marking an interface field with @external is therefore nonsensical, as there is no actual field resolution in the interface itself to “borrow” from another schema. Such usage raises an EXTERNAL_ON_INTERFACE error.

Examples

Here, the interface Node merely describes the field id. Object types User and Product implement and resolve id. No @external usage occurs on the interface itself, so no error is triggered.

Example № 123
interface Node {
  id: ID!
}

type User implements Node {
  id: ID!
  name: String
}

type Product implements Node {
  id: ID!
  price: Int
}

Since id is declared on an interface and marked with @external, the composition fails with EXTERNAL_ON_INTERFACE. An interface does not own the concrete field resolution, so it is invalid to mark any of its fields as external.

Counter Example № 124
interface Node {
  id: ID! @external
}

3.2.1.38 Lookup Returns Non-Nullable Type

Error Code

LOOKUP_RETURNS_NON_NULLABLE_TYPE

Severity

WARNING

Formal Specification
  • Let fields be the set of all field definitions annotated with @lookup in the schema.
  • For each field in fields:
    • Let type be the return type of field.
    • type must be a nullable type.
Explanatory Text

Fields annotated with the @lookup directive are intended to retrieve a single entity based on provided arguments. To properly handle cases where the requested entity does not exist, such fields should have a nullable return type. This allows the field to return null when an entity matching the provided criteria is not found, following the standard GraphQL practices for representing missing data.

In a distributed system, it is likely that some entities will not be found on other schemas, even when those schemas contribute fields to the type. Ensuring that @lookup fields have nullable return types also avoids GraphQL errors on schemas and prevents result erasure through non-null propagation. By allowing null to be returned when an entity is not found, the system can gracefully handle missing data without causing exceptions or unexpected behavior.

Ensuring that @lookup fields have nullable return types allows gateways to distinguish between cases where an entity is not found (receiving null) and other error conditions that may have to be propagated to the client.
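The check itself is a shallow inspection of the outermost type wrapper; the wrapped-type model below is an assumption for illustration:

```python
def lookup_nullability_warning(return_type):
    """Warn when a @lookup field's return type is non-nullable.

    `return_type` is a hypothetical wrapped-type model, e.g.
    {"kind": "NON_NULL", "ofType": {"kind": "NAMED", "name": "User"}}.
    Severity is WARNING, so composition proceeds either way.
    """
    if return_type["kind"] == "NON_NULL":
        return "LOOKUP_RETURNS_NON_NULLABLE_TYPE"
    return None
```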

For example, the following usage is recommended:

Example № 125
extend type Query {
  userById(id: ID!): User @lookup
}

type User {
  id: ID!
  name: String
}

In this example, userById returns a nullable User type, aligning with the recommendation.

Examples

This counter-example demonstrates an invalid usage:

Counter Example № 126
extend type Query {
  userById(id: ID!): User! @lookup
}

type User {
  id: ID!
  name: String
}

Here, userById returns a non-nullable User!, which does not align with the recommendation that a @lookup field should have a nullable return type.

3.2.1.39 Lookup Returns List

Error Code

LOOKUP_RETURNS_LIST

Severity

ERROR

Formal Specification
  • Let fields be the set of all field definitions annotated with @lookup in the schema.
  • For each field in fields:
    • Let type be the return type of field.
    • IsListType(type) must be false.
IsListType(type)
  1. If type is a Non-Null type:
    1. Let innerType be the inner type of type.
    2. Return IsListType(innerType).
  2. Else if type is a List type:
    1. Return true.
  3. Else:
    1. Return false.
Explanatory Text

Fields annotated with the @lookup directive are intended to retrieve a single entity based on provided arguments. To avoid ambiguity in entity resolution, such fields must return a single object and not a list. This validation rule enforces that any field annotated with @lookup must have a return type that is NOT a list.
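The IsListType() algorithm above can be transcribed almost directly; the wrapped-type model used here is an assumption for illustration:

```python
def is_list_type(type_node):
    """Transcription of IsListType() over a hypothetical wrapped-type
    model: {"kind": "NON_NULL" | "LIST" | "NAMED", "ofType": ...}.
    """
    if type_node["kind"] == "NON_NULL":
        # Unwrap the Non-Null wrapper and re-check the inner type.
        return is_list_type(type_node["ofType"])
    if type_node["kind"] == "LIST":
        return True
    return False
```

For example, `[User!]!` unwraps to a List and is rejected for @lookup, while a bare `User` passes.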

Examples

For example, the following usage is valid:

Example № 127
extend type Query {
  userById(id: ID!): User @lookup
}

type User {
  id: ID!
  name: String
}

In this example, userById returns a User object, satisfying the requirement.

This counter-example demonstrates an invalid usage:

Counter Example № 128
extend type Query {
  usersByIds(ids: [ID!]!): [User!] @lookup
}

type User {
  id: ID!
  name: String
}

Here, usersByIds returns a list of User objects, which violates the requirement that a @lookup field must return a single object.

3.2.1.40 Input Field Default Mismatch

Error Code

INPUT_FIELD_DEFAULT_MISMATCH

Formal Specification
  • Let inputFieldsByName be a map where the key is the name of an input field and the value is a list of input fields from different source schemas from the same type with the same name.
  • For each inputFields in inputFieldsByName:
    • InputFieldsHaveConsistentDefaults(inputFields) must be true.
InputFieldsHaveConsistentDefaults(inputFields)
  1. Given each pair of input fields inputFieldA and inputFieldB in inputFields:
    1. If the default value of inputFieldA is not equal to the default value of inputFieldB:
      1. return false
  2. return true
Explanatory Text

Input fields in different source schemas that have the same name are required to have consistent default values. This ensures that there is no ambiguity or inconsistency when merging input fields from different source schemas.

A mismatch in default values for input fields with the same name across different source schemas will result in a schema composition error.
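The pairwise default comparison in InputFieldsHaveConsistentDefaults can be sketched as follows. The dict-based field model and the sentinel for "no default declared" are illustrative assumptions:

```python
# Sentinel distinguishing "no default declared" from an explicit default of None.
_NO_DEFAULT = object()

def input_fields_have_consistent_defaults(input_fields):
    """Return True when every definition of the field agrees on its default value.

    input_fields: list of dicts such as {"default": "FANTASY"}; a definition
    without a "default" key declares no default value.
    """
    defaults = [f.get("default", _NO_DEFAULT) for f in input_fields]
    # All defaults must equal the first one; otherwise composition must fail
    # with INPUT_FIELD_DEFAULT_MISMATCH.
    return all(d == defaults[0] for d in defaults)
```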

Examples

In the following example, both source schemas define the input field genre with the same default value. This is valid:

Example № 129
# Schema A

input BookFilter {
  genre: Genre = FANTASY
}

enum Genre {
  FANTASY
  SCIENCE_FICTION
}

# Schema B
input BookFilter {
  genre: Genre = FANTASY
}

enum Genre {
  FANTASY
  SCIENCE_FICTION
}

In the following example both source schemas define an input field minPageCount with different default values. This is invalid:

Counter Example № 130
# Schema A

input BookFilter {
  minPageCount: Int = 10
}

# Schema B

input BookFilter {
  minPageCount: Int = 20
}

3.2.1.41 Input Field Types Mergeable

Error Code

INPUT_FIELD_TYPES_NOT_MERGEABLE

Formal Specification
  • Let fieldsByName be a map of field lists where the key is the name of a field and the value is a list of fields from mergeable input types from different source schemas with the same name.
  • For each fields in fieldsByName:
    • InputFieldsAreMergeable(fields) must be true.
InputFieldsAreMergeable(fields)
  1. Given each pair of members fieldA and fieldB in fields:
    1. Let typeA be the type of fieldA.
    2. Let typeB be the type of fieldB.
    3. SameTypeShape(typeA, typeB) must be true.
Explanatory Text

The input fields of input objects with the same name must be mergeable. This rule ensures that input objects with the same name in different source schemas have fields that can be merged consistently without conflicts.

Input fields are considered mergeable when they share the same name and have compatible types. The compatibility of types is determined by their structure (e.g., lists), excluding nullability. Mergeable input fields with different nullability are considered mergeable, and the resulting merged field will be the most permissive of the two.
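The structural comparison that ignores nullability can be sketched in Python over the same illustrative tuple encoding of types (an assumption for this sketch, not spec notation):

```python
def _strip_non_null(t):
    # Nullability is ignored for mergeability, so drop any Non-Null wrapper.
    return t[1] if t[0] == "NON_NULL" else t

def input_field_types_mergeable(a, b):
    """True when two input field types share the same list structure and named type."""
    a, b = _strip_non_null(a), _strip_non_null(b)
    if a[0] == "LIST" and b[0] == "LIST":
        # Compare the element types, again ignoring their nullability.
        return input_field_types_mergeable(a[1], b[1])
    # At this point both must be the identical named type (or both the same list depth).
    return a == b
```

Under this model, [String!] and [String]! are mergeable, while String and DateTime are not.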

In this example, the field name in AuthorInput has compatible types across source schemas, making them mergeable:

Example № 131
input AuthorInput {
  name: String!
}

input AuthorInput {
  name: String
}

The following example shows that fields are mergeable if they have different nullability but the named type is the same and the list structure is the same.

Example № 132
input AuthorInput {
  tags: [String!]
}

input AuthorInput {
  tags: [String]!
}

input AuthorInput {
  tags: [String]
}

In this example, the field birthdate on AuthorInput is not mergeable as the field has different named types (String and DateTime) across source schemas:

Counter Example № 133
input AuthorInput {
  birthdate: String!
}

input AuthorInput {
  birthdate: DateTime!
}

3.2.1.42 Enum Values Mismatch

Error Code

ENUM_VALUES_MISMATCH

Formal Specification
  • Let enumNames be the set of all enum type names across all source schemas.
  • For each enumName in enumNames:
    • Let enums be the list of all enum types from different source schemas with the name enumName.
    • EnumsAreMergeable(enums) must be true.
EnumsAreMergeable(enums)
  1. If enums has fewer than 2 elements:
    1. Return true.
  2. Let inaccessibleValues be the set of values that are declared as @inaccessible in enums.
  3. Let requiredValues be the set of values in enums that are not in inaccessibleValues.
  4. For each enum in enums
    1. Let enumValues be the set of all values of enum that are not in inaccessibleValues.
    2. requiredValues must be equal to enumValues
Explanatory Text

This rule ensures that enum types with the same name across different source schemas in a composite schema have identical sets of values. Enums must be consistent across source schemas to avoid conflicts and ambiguities in the composite schema.

When an enum is defined with differing values, it can lead to confusion and errors in query execution. For instance, a value valid in one schema might be passed to another where it’s unrecognized, leading to unexpected behavior or failures. This rule prevents such inconsistencies by enforcing that all instances of the same named enum across schemas have an exact match in their values.
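The EnumsAreMergeable check can be sketched in Python. Modeling each enum as a dict mapping value names to their directive flags is an assumption for illustration:

```python
def enums_are_mergeable(enums):
    """enums: list of dicts mapping value name -> {"inaccessible": bool}.

    All definitions must expose the same set of values once values that are
    declared @inaccessible anywhere are excluded.
    """
    if len(enums) < 2:
        return True
    # Values declared @inaccessible in any source schema.
    inaccessible = {name for e in enums for name, v in e.items() if v.get("inaccessible")}
    # The values every definition must expose.
    required = {name for e in enums for name in e} - inaccessible
    return all(set(e) - inaccessible == required for e in enums)
```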

In this example, both source schemas define Genre with the same value FANTASY, satisfying the rule:

Example № 134
enum Genre {
  FANTASY
}

enum Genre {
  FANTASY
}

Here, the two definitions of Genre have different values (FANTASY and SCIENCE_FICTION), violating the rule:

Counter Example № 135
enum Genre {
  FANTASY
}

enum Genre {
  SCIENCE_FICTION
}

Here, the two definitions of Genre have shared values and additional values declared as @inaccessible, satisfying the rule:

Example № 136
enum Genre {
  FANTASY
  SCIENCE_FICTION @inaccessible
}

enum Genre {
  FANTASY
}

3.2.1.43 Input With Missing Required Fields

Error Code

INPUT_WITH_MISSING_REQUIRED_FIELDS

Severity

ERROR

Formal Specification
  • Let typeNames be the set of all input object type names from all source schemas that are not declared as @inaccessible.
  • For each typeName in typeNames:
    • Let types be the list of all input object types from different source schemas with the name typeName.
    • AreTypesConsistent(types) must be true.
AreTypesConsistent(types)
  1. Let requiredFields be the set of all field names that have a non-nullable type in at least one input object in types and are not marked as @inaccessible in any schema.
  2. For each type in types:
    1. For each requiredField in requiredFields:
      1. If requiredField is not defined in type:
        1. Return false
  3. Return true
Explanatory Text

Input types are merged by intersection, meaning that the merged input type will have all fields that are present in all input types with the same name. This rule ensures that input object types with the same name across different schemas share a consistent set of required fields.
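The consistency check can be sketched in Python. Each source schema's input type is modeled as a dict mapping field names to flags; this model is an illustrative assumption:

```python
def input_types_are_consistent(schemas):
    """schemas: list of {field_name: {"required": bool, "inaccessible": bool}}.

    A field that is required (non-nullable) in any schema and not declared
    @inaccessible anywhere must be present in every schema.
    """
    inaccessible = {n for s in schemas for n, f in s.items() if f.get("inaccessible")}
    required = {n for s in schemas for n, f in s.items() if f.get("required")} - inaccessible
    return all(n in s for s in schemas for n in required)
```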

Examples

If all schemas define BookFilter with the required field title, the rule is satisfied:

# Schema A
input BookFilter {
  title: String!
  author: String
}

# Schema B
input BookFilter {
  title: String!
  yearPublished: Int
}

If title is required in one source schema but missing in another, this violates the rule:

# Schema A
input BookFilter {
  title: String!
  author: String
}

# Schema B
input BookFilter {
  author: String
  yearPublished: Int
}

In this invalid case, title is mandatory in Schema A but not defined in Schema B, causing inconsistency in required fields across schemas.

3.2.1.44 Field Argument Types Mergeable

Error Code

FIELD_ARGUMENT_TYPES_NOT_MERGEABLE

Severity

ERROR

Formal Specification
  • Let typeNames be the set of all output type names from all source schemas.
  • For each typeName in typeNames
    • Let types be the set of all types with the typeName from all source schemas.
    • Let fieldNames be the set of all field names from all types.
    • For each fieldName in fieldNames
      • Let fields be the set of all fields with the fieldName from all types.
      • For each field in fields
        • Let argumentNames be the set of all argument names from all fields.
        • For each argumentName in argumentNames
          • Let arguments be the set of all arguments with the argumentName from all fields.
          • For each pair of argumentA and argumentB in arguments
            • ArgumentsAreMergeable(argumentA, argumentB) must be true.
ArgumentsAreMergeable(argumentA, argumentB)
  1. Let typeA be the type of argumentA
  2. Let typeB be the type of argumentB
  3. InputTypesAreMergeable(typeA, typeB) must be true.
Explanatory Text

When multiple schemas define the same field name on the same output type (e.g., User.field), these fields can be merged if their arguments are compatible. Compatibility extends not only to the output field types themselves, but to each argument’s input type as well. The schemas must agree on each argument’s name and have compatible types, so that the composed schema can unify the definitions into a single consistent field specification.

Nullability

Different nullability requirements on arguments are still considered mergeable. For example, if one schema accepts String! and the other accepts String, these schemas can merge; the resulting argument type typically adopts the least restrictive (nullable) version.

Lists

Lists of different nullability (e.g., [String!] vs. [String]! vs. [String]) remain mergeable as long as they otherwise refer to the same inner type. Essentially, the same principle of “least restrictive” nullability merges them successfully.

Incompatible Types

If argument types differ on the named type itself (for example, one uses String while the other uses DateTime), this causes a FIELD_ARGUMENT_TYPES_NOT_MERGEABLE error. Similarly, if one schema has [String] but another has [DateTime], they are incompatible.

The following example shows arguments with identical types, which are mergeable:

Example № 137
type User {
  field(argument: String): String
}

type User {
  field(argument: String): String
}

Arguments that differ only in the nullability of their type are mergeable.

Example № 138
type User {
  field(argument: String!): String
}

type User {
  field(argument: String): String
}
Example № 139
type User {
  field(argument: [String!]): String
}

type User {
  field(argument: [String]!): String
}

type User {
  field(argument: [String]): String
}

Arguments are not mergeable if the named types are different in kind or name.

Counter Example № 140
type User {
  field(argument: String!): String
}

type User {
  field(argument: DateTime): String
}
Counter Example № 141
type User {
  field(argument: [String]): String
}

type User {
  field(argument: [DateTime]): String
}

3.2.2 Merge

During this stage, all definitions from each source schema are combined into a single schema. This section defines the rules for merging schema definitions. The goal is to create a composite schema that includes all type system members from each source schema that are publicly accessible.

MergeSchemas(schemas)
  1. Let mergedSchema be an empty schema.
  2. Let memberNames be the set of all object, interface, union, enum and input type names in schemas.
  3. For each memberName in memberNames:
    1. Let types be the set of all types named memberName across all source schemas.
    2. Let mergedType be the result of MergeTypes(types).
    3. If mergedType is not null:
      1. Add mergedType to mergedSchema.
  4. Return mergedSchema.
MergeTypes(types)
  1. Let firstType be the first type in types.
  2. Let kind be the kind of firstType.
  3. Assert: All types in types have the same kind.
  4. If kind is SCALAR:
    1. Return the result of MergeScalarTypes(types).
  5. If kind is INTERFACE:
    1. Return the result of MergeInterfaceTypes(types).
  6. If kind is ENUM:
    1. Return the result of MergeEnumTypes(types).
  7. If kind is UNION:
    1. Return the result of MergeUnionTypes(types).
  8. If kind is INPUT_OBJECT:
    1. Return the result of MergeInputTypes(types).
  9. If kind is OBJECT:
    1. Return the result of MergeObjectTypes(types).

3.2.2.1 Merge Scalar Types

Formal Specification
MergeScalarTypes(scalars)
  1. If any scalar in scalars is marked with @inaccessible
    1. Return null
  2. Let firstScalar be the first scalar in scalars.
  3. Let description be the description of firstScalar.
  4. For each scalar in scalars:
    1. If description is null:
      1. Set description to the description of scalar.
  5. Return a new scalar type with the name of firstScalar and description of description.
Explanatory Text

MergeScalarTypes(scalars) merges multiple scalar definitions that share the same name into a single scalar type. It filters out scalars marked with @inaccessible and unifies descriptions so that the final type retains the first available non-null description.

Inaccessible Scalars

If any scalar is labeled with @inaccessible, the merge immediately returns null. A scalar that cannot be exposed to consumers renders the entire type unusable.

Combining Descriptions

The final description is determined by the first non-null description found in the list of scalars. If no descriptions are found, the final description is null.
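This first-non-null rule, which recurs throughout the merge algorithms, can be sketched as a small helper. The dict-based member model is an illustrative assumption:

```python
def merge_description(members):
    """Return the first non-null description among the merged members, else None.

    members: list of dicts such as {"description": "..."} in source-schema order.
    """
    return next(
        (m.get("description") for m in members if m.get("description") is not None),
        None,
    )
```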

Examples

Here, two Date scalar types from different schemas are merged into a single composed Date scalar type.

Example № 142
# Schema A

scalar Date

# Schema B

"A scalar representing a calendar date."
scalar Date

# Composed Result

"A scalar representing a calendar date."
scalar Date

3.2.2.2 Merge Interface Types

Formal Specification
MergeInterfaceTypes(types)
  1. If any type in types is marked with @inaccessible
    1. Return null
  2. Let firstType be the first type in types.
  3. Let typeName be the name of firstType.
  4. Let description be the description of firstType.
  5. Let fields be an empty set.
  6. For each type in types:
    1. If description is null:
      1. Set description to the description of type.
  7. Let fieldNames be the set of all field names in types.
  8. For each fieldName in fieldNames:
    1. Let fieldsWithName be the set of fields with the name fieldName in types.
    2. Let mergedField be the result of MergeFieldDefinitions(fieldsWithName).
    3. If mergedField is not null:
      1. Add mergedField to fields.
  9. Return a new interface type with the name of typeName, description of description, and fields of fields.
Explanatory Text

MergeInterfaceTypes(types) unifies multiple interface definitions (all sharing the same name) into a single composed interface type. If any one of these interfaces is marked @inaccessible, the merge immediately returns null, preventing inclusion of that interface in the final schema.

Inaccessible Interfaces

A type marked @inaccessible disqualifies the entire merge, ensuring no references to inaccessible types appear in the final schema.

Combining Descriptions

Among the valid interfaces, the description is taken from the first non-null description encountered. If all interfaces lack a description, the resulting interface has none.

Merging Fields

Each interface contributes its fields. Those fields that share the same name across multiple interfaces are reconciled via MergeFieldDefinitions(fields). This ensures any differences in type, nullability, or other constraints are resolved before appearing in the final interface.

By applying these steps, MergeInterfaceTypes(types) produces a coherent interface type definition that reflects the fields from all compatible sources while adhering to accessibility constraints.

Examples

Here, two Product interface types from different schemas are merged into a single composed Product interface type.

Example № 143
# Schema A

interface Product {
  id: ID!
  name: String
}

# Schema B

interface Product {
  id: ID!
  createdAt: String
}

# Composed Result

interface Product {
  id: ID!
  name: String
  createdAt: String
}

In this example, the Product interface type from two schemas is merged. The id field is shared across both schemas, while name and createdAt fields are contributed by the individual source schemas. The resulting composed type includes all fields.

The following example shows how the description is retained when merging interface types:

Example № 144
# Schema A

"""
First description
"""
interface Product {
  id: ID!
}

# Schema B

"""
Second description
"""
interface Product {
  id: ID!
}

# Composed Result

"""
First description
"""
interface Product {
  id: ID!
}

3.2.2.3 Merge Enum Types

Formal Specification
MergeEnumTypes(enums)
  1. If any enum in enums is marked with @inaccessible
    1. Return null
  2. Let firstEnum be the first enum in enums.
  3. If enums contains only one enum
    1. Return a new enum type with the name of firstEnum, description of firstEnum, and enum values of firstEnum excluding any marked with @inaccessible.
  4. Let typeName be the name of firstEnum.
  5. Let description be the description of firstEnum.
  6. Let enumValues be the set of all enum values in enums.
  7. For each enum in enums:
    1. If description is null:
      1. Set description to the description of enum.
    2. For each enumValue in the enum values of enum:
      1. If enumValue is marked with @inaccessible
        1. Remove enumValue from enumValues.
  8. Return a new enum type with the name of typeName, description of description, and enum values of enumValues.
Explanatory Text

MergeEnumTypes(enums) consolidates multiple enum definitions (all sharing the same name) into one final enum type, while filtering out any parts marked with @inaccessible. If an entire enum is inaccessible, the merge returns null.

Inaccessible Enums

If any enum in the input set is marked @inaccessible, the entire merge operation is invalid. The algorithm immediately returns null, since that type cannot appear in the composed schema.

Single vs. Multiple Enum Definitions

When only one enum definition is present (after removing any inaccessible ones), it is used as is, except that any values marked with @inaccessible are excluded.

However, if an enum appears in multiple schemas, the enums must match exactly in their values and structure unless some values are excluded using the @inaccessible directive. This behavior is enforced by prior validation but is important to note as it determines how mismatched enums are handled.
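A minimal sketch of the value-merging behavior, assuming prior validation has already ensured the accessible value sets match (the dict-based model is illustrative, not spec notation):

```python
def merge_enum_types(enums):
    """enums: list of {"name": str, "values": [{"name": str, "inaccessible": bool}]}.

    Returns None if any whole enum is @inaccessible; otherwise returns the
    merged enum with @inaccessible values excluded.
    """
    if any(e.get("inaccessible") for e in enums):
        return None
    values, seen = [], set()
    for e in enums:
        for v in e["values"]:
            # Skip @inaccessible values and duplicates already collected.
            if v.get("inaccessible") or v["name"] in seen:
                continue
            seen.add(v["name"])
            values.append(v)
    return {"name": enums[0]["name"], "values": values}
```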

Combining Descriptions

The first non-null description encountered among the enums is used for the final definition. If no definitions supply a description, the merged enum will have none.

Examples

Here, two Status enums from different schemas are merged into a single composed Status enum. The enums are identical, so the composed enum exactly matches the source enums.

Example № 145
# Schema A

enum Status {
  ACTIVE
  INACTIVE
}

# Schema B

enum Status {
  ACTIVE
  INACTIVE
}

# Composed Result

enum Status {
  ACTIVE
  INACTIVE
}

If the enums differ in their values, the source schemas must define their unique values as @inaccessible to exclude them from the composed enum.

Example № 146
# Schema A

enum Status {
  ACTIVE @inaccessible
  INACTIVE
}

# Schema B

enum Status {
  PENDING @inaccessible
  INACTIVE
}

# Composed Result

enum Status {
  INACTIVE
}

3.2.2.4 Merge Union Types

Formal Specification
MergeUnionTypes(unions)
  1. If any union in unions is marked with @inaccessible
    1. Return null
  2. Let firstUnion be the first union in unions.
  3. Let name be the name of firstUnion.
  4. Let description be the description of firstUnion.
  5. Let possibleTypes be an empty set.
  6. For each union in unions:
    1. If description is null:
      1. Set description to the description of union.
    2. For each possibleType in the possible types of union:
      1. If possibleType is not marked with @inaccessible or @internal:
        1. Add possibleType to possibleTypes.
  7. If possibleTypes is empty:
    1. Return null
  8. Return a new union with the name of name, description of description, and possible types of possibleTypes.
Explanatory Text

MergeUnionTypes(unions) aggregates multiple union type definitions that share the same name into one unified union type. This process skips any union marked with @inaccessible and excludes possible types marked with @inaccessible or @internal.

Inaccessible Unions

If any union in the input list is marked @inaccessible, the merged result must be null and cannot appear in the final schema.

Combining Descriptions

The first non-empty description that is found is used as the description for the merged union. If no descriptions are found, the merged union will have no description.

Combining Possible Types

Each union’s possible types are considered in turn. Only those that are not marked @internal or @inaccessible are included in the final composed union. This preserves the valid types from all sources while systematically filtering out anything inaccessible or intended for internal use only.

In case there are no possible types left after filtering, the merged union is considered @inaccessible and cannot appear in the final schema.
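The filtering and unification of possible types can be sketched as follows. The dict-based union model is an illustrative assumption:

```python
def merge_union_types(unions):
    """unions: list of {"name": str, "types": [{"name": str, "inaccessible": bool,
    "internal": bool}]} sharing the same union name.

    Returns None if any union is @inaccessible or no possible types survive.
    """
    if any(u.get("inaccessible") for u in unions):
        return None
    possible, seen = [], set()
    for u in unions:
        for t in u["types"]:
            # Exclude @inaccessible and @internal members; deduplicate by name.
            if t.get("inaccessible") or t.get("internal") or t["name"] in seen:
                continue
            seen.add(t["name"])
            possible.append(t)
    if not possible:
        # A union with no exposed members cannot appear in the composed schema.
        return None
    return {"name": unions[0]["name"], "types": possible}
```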

Examples

Here, two SearchResult union types from different schemas are merged into a single composed SearchResult type.

Example № 147
# Schema A

union SearchResult = Product | Order

# Schema B

union SearchResult = User | Order

# Composed Result

union SearchResult = Product | Order | User

In this example, the SearchResult union type from two schemas is merged. The Order type is shared across both schemas, while Product and User types are contributed by the individual source schemas. The resulting composed type includes all valid possible types.

Another example shows how @inaccessible on a possible type affects the merge:

Example № 148
# Schema A

union SearchResult = Product | Order

type Product @inaccessible {
  id: ID!
}

# Schema B

union SearchResult = User | Order

# Composed Result

union SearchResult = Order | User

In this case, the Product type is marked with @inaccessible in the first schema. As a result, the Product type is excluded from the composed SearchResult union.

3.2.2.5 Merge Input Types

Formal Specification
MergeInputTypes(types)
  1. If any type in types is marked with @inaccessible
    1. Return null
  2. Let firstType be the first type in types.
  3. Let typeName be the name of firstType.
  4. Let description be the description of firstType.
  5. Let fields be an empty set.
  6. For each type in types:
    1. If description is null:
      1. Set description to the description of type.
  7. Let fieldNames be the set of all field names in types.
  8. For each fieldName in fieldNames:
    1. Let fieldsWithName be the set of fields with the name fieldName in types.
    2. Let mergedField be the result of MergeInputField(fieldsWithName).
    3. If mergedField is not null:
      1. Add mergedField to fields.
  9. Return a new input object type with the name of typeName, description of description, and fields of fields.
Explanatory Text

The MergeInputTypes(types) algorithm produces a single input type definition by unifying multiple input types that share the same name. Each of these input types may come from different sources, yet must align into one coherent definition. Any type marked @inaccessible disqualifies the entire merge result from inclusion in the composed schema.

Inaccessible Types

If an input type is annotated with @inaccessible, the algorithm immediately returns null. Including an inaccessible type would mean exposing a type that is not allowed in the composed schema.

Combining Descriptions

The first non-null description encountered is used for the final input type. If no such description exists among the source types, the resulting input type definition has no description.

Merging Fields

After filtering out inaccessible types, the algorithm merges each input field name found across the remaining types. For each field, MergeInputField(fields) is called to reconcile differences in type, nullability, default values, etc. If a merged field ends up being null (for instance, because one of its underlying definitions was inaccessible), that field is not included in the final definition. The end result is a single input type that correctly unifies every compatible field from the various sources.

Examples

Here, two OrderInput input types from different schemas are merged into a single composed OrderInput type.

Example № 149
# Schema A

input OrderInput {
  id: ID!
  description: String
}

# Schema B

input OrderInput {
  id: ID!
  total: Float
}

# Composed Result

input OrderInput {
  id: ID!
  description: String
  total: Float
}

In this example, the OrderInput type from two schemas is merged. The id field is shared across both schemas, while description and total fields are contributed by the individual source schemas. The resulting composed type includes all fields.

Another example demonstrates preserving descriptions during merging:

Example № 150
# Schema A

"""
First Description
"""
input OrderInput {
  id: ID!
}

# Schema B

"""
Second Description
"""
input OrderInput {
  id: ID!
}

# Composed Result

"""
First Description
"""
input OrderInput {
  id: ID!
}

In this case, the description from the first schema is retained, while the fields are merged from both schemas to create the final OrderInput type.

3.2.2.6 Merge Object Types

Formal Specification
MergeObjectTypes(types)
  1. If any type in types is marked with @inaccessible
    1. Return null
  2. Remove all types marked with @internal from types.
  3. Let firstType be the first type in types.
  4. Let typeName be the name of firstType.
  5. Let description be the description of firstType.
  6. Let fields be an empty set.
  7. For each type in types:
    1. If description is null:
      1. Set description to the description of type.
  8. Let fieldNames be the set of all field names in types.
  9. For each fieldName in fieldNames:
    1. Let fieldsWithName be the set of fields with the name fieldName in types.
    2. Let mergedField be the result of MergeOutputField(fieldsWithName).
    3. If mergedField is not null:
      1. Add mergedField to fields.
  10. Return a new object type with the name of typeName, description of description, fields of fields.
Explanatory Text

The MergeObjectTypes(types) algorithm combines multiple object type definitions (all sharing the same name) into a single composed type. It processes each candidate type, discarding any that are inaccessible or internal, and then unifies their descriptions and fields.

Inaccessible Types

If an object type is marked with @inaccessible, the entire merged result must be null; we cannot include that type in the composed schema. Inaccessible types are disqualified at the outset.

Internal Types

Any type marked with @internal is removed from consideration before merging begins. None of its fields or descriptions will factor into the final composed type.

Combining Descriptions

The first non-null description encountered is used for the final object type’s description. If no non-null description is found, the resulting object type simply has no description.

Merging Fields

All remaining object types contribute their fields. The algorithm gathers every field name across these types, then calls MergeOutputField(fields) for each name to reconcile any differences. If MergeOutputField(fields) returns null (for instance, because a field is marked @inaccessible), that field is excluded from the final object type. The result is a unified set of fields that reflects each source definition while maintaining compatibility across them.

Examples

Here, two Product object types from different schemas are merged into a single composed Product type.

Example № 151
# Schema A

type Product @key(fields: "id") {
  id: ID!
  name: String
}

# Schema B

type Product @key(fields: "id") {
  id: ID!
  price: Int
}

# Composed Result

type Product {
  id: ID!
  name: String
  price: Int
}

In this example, the Product type from two schemas is merged. The id field is shared across both schemas, while name and price fields are contributed by the individual source schemas. The resulting composed type includes all fields.

Another example demonstrates preserving descriptions during merging:

Example № 152
# Schema A

"""
First Description
"""
type Order @key(fields: "id") {
  id: ID!
}

# Schema B

"""
Second Description
"""
type Order @key(fields: "id") {
  id: ID!
  total: Float
}

# Composed Result

"""
First Description
"""
type Order {
  id: ID!
  total: Float
}

In this case, the description from the first schema is retained, while the fields are merged from both schemas to create the final Order type.

In the following example, one of the Product types is marked with @internal. All its fields are excluded from the composed type.

Example № 153
# Schema A

type Product @key(fields: "id") {
  id: ID!
  name: String
}

# Schema B

type Product @key(fields: "id") @internal {
  id: ID!
  price: Int
}

# Composed Result

type Product {
  id: ID!
  name: String
}

3.2.2.7 Merge Output Field

Formal Specification
MergeOutputField(fields)
  1. If any field in fields is marked with @inaccessible
    1. Return null
  2. Filter out all fields marked with @internal from fields.
  3. If fields is empty:
    1. Return null
  4. Let firstField be the first field in fields.
  5. Let fieldName be the name of firstField.
  6. Let fieldType be the type of firstField.
  7. Let description be the description of firstField.
  8. For each field in fields:
    1. Set fieldType to be the result of LeastRestrictiveType(fieldType, the type of field).
    2. If description is null:
      1. Let description be the description of field.
  9. Let arguments be an empty set.
  10. Let argumentNames be the set of all argument names in fields.
  11. For each argumentName in argumentNames:
    1. Let argumentsWithName be the set of arguments with the name argumentName in fields.
    2. Let mergedArgument be the result of MergeArgumentDefinitions(argumentsWithName).
    3. If mergedArgument is not null:
      1. Add mergedArgument to arguments.
  12. Return a new field with the name of fieldName, type of fieldType, arguments of arguments, and description of description.
Explanatory Text

The MergeOutputField(fields) algorithm is used when multiple fields across different object or interface types share the same field name and must be merged into a single composed field. This algorithm ensures that the final composed schema has one definitive definition for that field, resolving differences in type, description, and arguments.

Inaccessible Fields

If any of the fields is marked with @inaccessible, the entire merged field is discarded by returning null. A field that cannot be exposed in the composed schema prevents the merged field from appearing at all.

Internal Fields

Any field marked with @internal is removed from consideration before merging begins. This ensures that internal fields do not appear in the final composed schema and do not affect the merging process. Because internal fields are intended for internal use only and never appear in the composed schema, their definitions are allowed to collide across source schemas.

In the case where all fields are marked with @internal, the field will not appear in the composed schema.

Combining Descriptions

The first field that defines a description is used as the description for the merged field. If no description is found, the merged field will have no description.

Determining the Field Type

The return type of the composed field is determined by invoking LeastRestrictiveType(typeA, typeB). This helper function computes a type that is compatible with all the provided field types, ensuring that the composed schema does not break schemas expecting any of those types. For example, LeastRestrictiveType(typeA, typeB) might unify String! and String into String.
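The nullability-unification behavior of LeastRestrictiveType can be sketched over the illustrative tuple encoding of types (an assumption for this sketch; the real helper is defined elsewhere in the spec):

```python
def least_restrictive(a, b):
    """Unify two output types, keeping Non-Null only when both types are Non-Null."""
    a_nn, b_nn = a[0] == "NON_NULL", b[0] == "NON_NULL"
    inner_a = a[1] if a_nn else a
    inner_b = b[1] if b_nn else b
    if inner_a[0] == "LIST" and inner_b[0] == "LIST":
        # Recurse into the element types of lists.
        merged = ("LIST", least_restrictive(inner_a[1], inner_b[1]))
    else:
        # Named types must match; prior validation guarantees this.
        assert inner_a == inner_b
        merged = inner_a
    return ("NON_NULL", merged) if a_nn and b_nn else merged
```

For example, this unifies String! and String into String, and [String!] with [String]! into [String].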

Merging Arguments

Each field can declare arguments. The algorithm collects all argument names across these fields and merges them using MergeArgumentDefinitions(arguments), ensuring argument definitions remain compatible. If any of the arguments for a particular name is @inaccessible, then that argument is removed from the final set of arguments. Otherwise, any differences in argument type, default value, or description are resolved via the merging rules in MergeArgumentDefinitions(arguments).

This algorithm preserves as much information as possible from the source fields while ensuring they remain mutually compatible. It also systematically excludes fields or arguments deemed inaccessible.
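The merging steps above can be sketched in Python. This is a minimal illustration and not the normative algorithm: fields are modeled as plain dicts, argument merging is elided, and `least_restrictive` is a simplified stand-in for LeastRestrictiveType that handles only the nullability of string-rendered types such as "Int" vs. "Int!".

```python
def least_restrictive(a, b):
    # Simplified stand-in for LeastRestrictiveType: with types rendered as
    # strings such as "Int" / "Int!", the result keeps "!" only if both
    # inputs are non-null.
    return a if a.endswith("!") and b.endswith("!") else a.rstrip("!")

def merge_output_field(fields):
    # Any @inaccessible definition discards the whole merged field.
    if any(f.get("inaccessible") for f in fields):
        return None
    # @internal definitions are removed before merging begins.
    fields = [f for f in fields if not f.get("internal")]
    if not fields:
        return None  # all definitions were @internal
    merged = {
        "name": fields[0]["name"],
        "type": fields[0]["type"],
        # The first non-empty description wins.
        "description": next(
            (f.get("description") for f in fields if f.get("description")), None
        ),
    }
    for f in fields[1:]:
        merged["type"] = least_restrictive(merged["type"], f["type"])
    # Argument merging (MergeArgumentDefinitions) is elided in this sketch.
    return merged
```

Running this against the discountPercentage example below reproduces the composed result: the type relaxes to Int and the description from Schema A is kept.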

Example

Imagine two schemas with a discountPercentage field on a Product type that slightly differ in return type:

Example № 154
# Schema A

type Product {
  """
  Computes a discount as a percentage of the product's list price.
  """
  discountPercentage(percent: Int = 10): Int!
}

# Schema B

type Product {
  discountPercentage(percent: Int): Int
}

# Composed Result

type Product {
  """
  Computes a discount as a percentage of the product's list price.
  """
  discountPercentage(percent: Int): Int
}

3.2.2.8 Merge Input Field

Formal Specification
MergeInputField(fields)
  1. If any field in fields is marked with @inaccessible
    1. Return null
  2. Let firstField be the first field in fields.
  3. Let fieldName be the name of firstField.
  4. Let fieldType be the type of firstField.
  5. Let description be the description of firstField.
  6. Let defaultValue be the default value of firstField or undefined if none exists.
  7. For each field in fields:
    1. Set fieldType to be the result of MostRestrictiveType(fieldType, the type of field).
    2. If defaultValue is undefined:
      1. Set defaultValue to the default value of field or undefined if none exists.
    3. If description is null:
      1. Let description be the description of field.
  8. Return a new input field with the name of fieldName, type of fieldType, and description of description and default value of defaultValue.
Explanatory Text

The MergeInputField(fields) algorithm merges multiple input field definitions, all sharing the same field name, into a single composed input field. This ensures the final input type in a composed schema maintains a consistent type, description, and default value for that field. Below is a breakdown of how MergeInputField(fields) operates:

Inaccessible Fields

If any of the fields is marked with @inaccessible, we cannot include the field in the composed schema, and the merge algorithm returns null.

Combining Descriptions

The name of the merged field is taken from the first field in the list. The description is set to the first non-null description encountered among the fields. If no description is found, the merged field will have no description.

Combining Field Types

The merged field type is computed by calling MostRestrictiveType(typeA, typeB). Unlike output fields, where LeastRestrictiveType(typeA, typeB) is used, input fields often follow stricter constraints. If one source schema defines a field as non-nullable and another as nullable, the merged field type must be non-nullable to satisfy both schemas. MostRestrictiveType(typeA, typeB) ensures a final input type that is compatible with all definitions of that field.

Inheriting Default Values

If multiple fields define default values, whichever appears first in the list effectively wins. Pre-merge validation has already asserted that any differing default values are compatible.
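The algorithm can be sketched in Python as follows. This is a non-normative illustration: fields are plain dicts, None models "undefined", and `most_restrictive` is a simplified stand-in for MostRestrictiveType that handles only the nullability of string-rendered types such as "Int" vs. "Int!".

```python
def most_restrictive(a, b):
    # Simplified stand-in for MostRestrictiveType: with types rendered as
    # strings such as "Int" / "Int!", the result is non-null if either
    # input is non-null.
    base = a.rstrip("!")
    return base + "!" if (a.endswith("!") or b.endswith("!")) else base

def merge_input_field(fields):
    # Any @inaccessible definition removes the field from the composed schema.
    if any(f.get("inaccessible") for f in fields):
        return None
    first = fields[0]
    merged = {
        "name": first["name"],
        "type": first["type"],
        "description": first.get("description"),
        "default": first.get("default"),  # None models "undefined"
    }
    for f in fields[1:]:
        merged["type"] = most_restrictive(merged["type"], f["type"])
        # First defined default value / description wins.
        if merged["default"] is None:
            merged["default"] = f.get("default")
        if merged["description"] is None:
            merged["description"] = f.get("description")
    return merged
```

Applied to the OrderFilter example below, this yields the most restrictive type Int!, the default value 0, and the description from Schema A.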

Examples

Suppose we have two input type definitions for the same OrderFilter input field, defined in separate schemas:

Example № 155
# Schema A

input OrderFilter {
  """
  Filter by the minimum order total
  """
  minTotal: Int = 0
}

# Schema B

input OrderFilter {
  minTotal: Int!
}

# Composed Result

input OrderFilter {
  """
  Filter by the minimum order total
  """
  minTotal: Int! = 0
}

In the final schema, minTotal is defined using the most restrictive type (Int!), has a default value of 0, and includes the description from the original field in Schema A.

3.2.2.9 Merge Argument Definitions

Formal Specification
MergeArgumentDefinitions(arguments)
  1. If any argument in arguments is marked with @inaccessible
    1. Return null
  2. Let mergedArgument be the first argument in arguments that is not marked with @require
  3. If mergedArgument is null
    1. Return null
  4. For each argument in arguments:
    1. If argument is marked with @require
      1. Continue
    2. Set mergedArgument to the result of MergeArgument(mergedArgument, argument)
  5. Return mergedArgument
Explanatory Text

MergeArgumentDefinitions(arguments) merges multiple arguments that share the same name across different field definitions into a single composed argument definition.

Inaccessible Arguments

If any argument in the set is marked with @inaccessible, the entire argument definition is discarded by returning null. An inaccessible argument should not appear in the final composed schema.

Handling @require

The @require directive indicates that the argument is required for the field to be resolved, but it specifies the argument as a dependency that is resolved at runtime. Therefore, such arguments should not affect the merge process. If there are only @require arguments in the set, the merge algorithm returns null.

Merging Arguments

All arguments that are not marked with @require are merged using the MergeArgument algorithm. This algorithm ensures that the final composed argument is compatible with all definitions of that argument, resolving differences in type, default value, and description.

By selectively including or excluding certain arguments (via @inaccessible or @require), and merging differences where possible, this algorithm ensures that the resulting composed argument is both valid and compatible with the source definitions.
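The filtering and folding described above can be sketched in Python. This is a non-normative illustration: arguments are plain dicts, and the pairwise MergeArgument algorithm is passed in as a function so the sketch stays self-contained.

```python
def merge_argument_definitions(arguments, merge_argument):
    # merge_argument is the pairwise MergeArgument algorithm.
    # Any @inaccessible definition discards the argument entirely.
    if any(a.get("inaccessible") for a in arguments):
        return None
    # @require arguments are runtime dependencies and take no part
    # in the merge.
    candidates = [a for a in arguments if not a.get("require")]
    if not candidates:
        return None  # only @require definitions exist
    merged = candidates[0]
    for arg in candidates[1:]:
        merged = merge_argument(merged, arg)
    return merged
```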

Example

Consider two field definitions that share the same filter argument, but with slightly different types and descriptions:

Example № 156
# Schema A

type Query {
  searchProducts(
    """
    Filter to apply to the search
    """
    filter: ProductFilter!
  ): [Product]
}

# Schema B

type Query {
  searchProducts(
    """
    Search filter to apply
    """
    filter: ProductFilter
  ): [Product]
}

# Composed Result

type Query {
  searchProducts(
    """
    Filter to apply to the search
    """
    filter: ProductFilter!
  ): [Product]
}

In the merged schema, the filter argument is defined with the most restrictive type (ProductFilter!), includes the description from the original field in Schema A, and is marked as required.

3.2.2.10 Merge Argument

Formal Specification
MergeArgument(argumentA, argumentB)
  1. Let typeA be the type of argumentA.
  2. Let typeB be the type of argumentB.
  3. Let type be MostRestrictiveType(typeA, typeB).
  4. Let description be the description of argumentA or undefined if none exists.
  5. If description is undefined:
    1. Let description be the description of argumentB.
  6. Let defaultValue be the default value of argumentA or undefined if none exists.
  7. If defaultValue is undefined:
    1. Set defaultValue to the default value of argumentB or undefined if none exists.
  8. Return a new argument with the name of argumentA, type of type, description of description, and default value of defaultValue.
Explanatory Text

MergeArgument(argumentA, argumentB) takes two arguments with the same name but possibly differing in type, description, or default value, and returns a single, unified argument definition.

Unifying the Type

The algorithm uses MostRestrictiveType(typeA, typeB) to determine the final argument type. For input positions (like arguments), the most restrictive type is needed to ensure that the merged argument type accepts all values the sources demand. For instance, if one argument type is String! and the other is String, the merged type must be String! so that it remains valid from both perspectives.

Choosing the Description

The description of the first argument is used if it is defined, otherwise the description of the second argument is used.

Inheriting the Default Value

The algorithm takes the first defined default value it encounters. Pre-merge validation has already asserted that any differing defaults are compatible.
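The pairwise merge can be sketched in Python. This is a non-normative illustration: arguments are plain dicts, None models "undefined", and `most_restrictive` is a simplified stand-in for MostRestrictiveType on string-rendered types such as "Int" vs. "Int!".

```python
def most_restrictive(a, b):
    # Simplified stand-in for MostRestrictiveType: non-null wins.
    base = a.rstrip("!")
    return base + "!" if (a.endswith("!") or b.endswith("!")) else base

def merge_argument(a, b):
    return {
        "name": a["name"],
        "type": most_restrictive(a["type"], b["type"]),
        # The first defined description wins.
        "description": a.get("description") or b.get("description"),
        # The first defined default value wins (None models "undefined").
        "default": a.get("default") if a.get("default") is not None
                   else b.get("default"),
    }
```

On the limit example below this produces the composed result limit: Int! = 10 with the description taken from Schema B.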

Examples

Suppose we have two variants of the same argument, limit, from different services:

Example № 157
# Schema A

limit: Int = 10

# Schema B

"""
Number of items to fetch
"""
limit: Int!

# Composed Result

"""
Number of items to fetch
"""
limit: Int! = 10

3.2.2.11 Least Restrictive Type

Formal Specification
LeastRestrictiveType(typeA, typeB)
  1. Let isNullable be true.
  2. If typeA and typeB are non nullable types:
    1. Set isNullable to false.
  3. If typeA is a non nullable type:
    1. Set typeA to the inner type of typeA.
  4. If typeB is a non nullable type:
    1. Set typeB to the inner type of typeB.
  5. If typeA is a list type:
    1. Assert: typeB is a list type.
    2. Let innerTypeA be the inner type of typeA.
    3. Let innerTypeB be the inner type of typeB.
    4. Let innerType be LeastRestrictiveType(innerTypeA, innerTypeB).
    5. If isNullable is true:
      1. Return innerType as a nullable list type.
    6. Otherwise:
      1. Return innerType as a non nullable list type.
  6. Otherwise:
    1. Assert: typeA is equal to typeB
    2. If isNullable is true:
      1. Return typeA as a nullable type.
    3. Otherwise:
      1. Return typeA as a non nullable type.
Explanatory Text

LeastRestrictiveType(typeA, typeB) identifies a single type that safely handles all possible runtime values produced by the sources defining typeA and typeB. If one source can return null while another cannot, the merged type becomes nullable to avoid runtime exceptions – because a strictly non-null signature would be violated whenever null appears. Similarly, if both sources enforce non-null, the result remains non-null.

Nullability

When merging types of differing nullability (e.g., one String! vs. another String), the presence of a nullable type in one source effectively dictates that the final type must accept null. If either source can produce null, a strictly non-null field would break the contract if null were ever returned.

Lists

If both sources provide a list type, then the function unifies those list types by merging their inner types (e.g., the element type of the list). Whether the list itself is nullable depends on whether both sources treat the list as non-null. In other words, if any source can return null for the list, the final list type must also be nullable.

Scalar Types

When neither source specifies a list type, the algorithm confirms that both sources refer to the same underlying named type (e.g., String vs. String). If they differ (e.g., String vs. Int), the schemas are fundamentally incompatible for merging; pre-merge validation should have already caught this issue.
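The formal specification above can be sketched in Python. This is a non-normative illustration in which a type is modeled as either a named-type string ("Int"), ("non_null", inner), or ("list", inner).

```python
def is_non_null(t):
    return isinstance(t, tuple) and t[0] == "non_null"

def least_restrictive_type(type_a, type_b):
    # The merged type is nullable unless BOTH inputs are non-null.
    is_nullable = not (is_non_null(type_a) and is_non_null(type_b))
    if is_non_null(type_a):
        type_a = type_a[1]
    if is_non_null(type_b):
        type_b = type_b[1]
    if isinstance(type_a, tuple) and type_a[0] == "list":
        # Pre-merge validation guarantees both sides are lists here.
        assert isinstance(type_b, tuple) and type_b[0] == "list"
        merged = ("list", least_restrictive_type(type_a[1], type_b[1]))
    else:
        # Pre-merge validation guarantees equal named types here.
        assert type_a == type_b
        merged = type_a
    return merged if is_nullable else ("non_null", merged)
```

For instance, merging String! with String yields String, and merging [Int]! with [Int!] yields [Int], matching the examples below.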

Examples

In the following scenario, one source might return null, so the resulting merged type must allow null.

Example № 158
# Schema A
typeA: String!

# Schema B
typeB: String

# Merged Result
type: String

Here, both sources use lists of Int, but they differ in nullability. Consequently, the merged list type is [Int], which permits a null list or null elements.

Example № 159
# Schema A
typeA: [Int]!

# Schema B
typeB: [Int!]

# Merged Result
type: [Int]

3.2.2.12 Most Restrictive Type

Formal Specification
MostRestrictiveType(typeA, typeB)
  1. Let isNullable be false.
  2. If typeA and typeB are nullable types:
    1. Set isNullable to true.
  3. If typeA is a non nullable type:
    1. Set typeA to the inner type of typeA.
  4. If typeB is a non nullable type:
    1. Set typeB to the inner type of typeB.
  5. If typeA is a list type:
    1. Assert: typeB is a list type.
    2. Let innerTypeA be the inner type of typeA.
    3. Let innerTypeB be the inner type of typeB.
    4. Let innerType be MostRestrictiveType(innerTypeA, innerTypeB).
    5. If isNullable is true:
      1. Return innerType as a nullable list type.
    6. Otherwise:
      1. Return innerType as a non nullable list type.
  6. Otherwise
    1. Assert: typeA is equal to typeB
    2. If isNullable is true:
      1. Return typeA as a nullable type.
    3. Otherwise:
      1. Return typeA as a non nullable type.
Explanatory Text

MostRestrictiveType(typeA, typeB) determines a single input type that strictly honors the constraints of both sources. If either source requires a non-null value, the merged type also becomes non-null so that no invalid (e.g., null) data can be introduced at runtime. Conversely, if both sources allow null, the merged type remains nullable. The same principle applies to list types, where the more restrictive setting (non-null list or non-null elements) is used.

Nullability

For input fields, if either source is non-null, it’s unsafe to allow null in the merged schema. Consequently, when one type is non-nullable (String!) and the other is nullable (String), the resulting type is non-nullable (String!). Only if both types are explicitly nullable does the merged type remain nullable (e.g., String).

Lists

When merging list types, both sources must be lists. Inside the list, the same merging logic applies: if either source disallows null elements (e.g., [Int!] vs. [Int]), the final merged list also disallows null elements to avoid unexpected runtime failures. If both lists can have null elements, then the merged list similarly allows null.

Scalar Types

Like other merging steps, if the underlying base types (e.g., String vs. Int) differ, the types cannot be reconciled. A merged schema cannot reinterpret String as Int, so the process fails if there’s a fundamental mismatch. This should already be caught by the pre merge validation.
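Mirroring the LeastRestrictiveType sketch, the algorithm can be illustrated in Python with the same non-normative type model: a named-type string ("Int"), ("non_null", inner), or ("list", inner). The only difference is the flipped nullability rule.

```python
def is_nullable_type(t):
    return not (isinstance(t, tuple) and t[0] == "non_null")

def most_restrictive_type(type_a, type_b):
    # The merged type stays nullable only when BOTH inputs are nullable;
    # otherwise non-null wins.
    is_nullable = is_nullable_type(type_a) and is_nullable_type(type_b)
    if not is_nullable_type(type_a):
        type_a = type_a[1]
    if not is_nullable_type(type_b):
        type_b = type_b[1]
    if isinstance(type_a, tuple) and type_a[0] == "list":
        assert isinstance(type_b, tuple) and type_b[0] == "list"
        merged = ("list", most_restrictive_type(type_a[1], type_b[1]))
    else:
        assert type_a == type_b  # guaranteed by pre-merge validation
        merged = type_a
    return merged if is_nullable else ("non_null", merged)
```

For instance, merging [Int!] with [Int]! yields [Int!]!, matching the example below.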

Examples

Here, because one source disallows null, the final merged type must also disallow null to avoid a situation where a null could be passed where it isn’t allowed:

Example № 160
# Schema A
typeA: String!

# Schema B
typeB: String

# Merged Result
type: String!

In the following example, since one definition mandates non-null items ([Int!]), it is more restrictive and prevents null elements in the list. Additionally, the other source mandates a non-null list ([Int]!). The merged result, [Int!]!, preserves these constraints to ensure the field does not accept or produce values that violate either source.

Example № 161
# Schema A
typeA: [Int!]

# Schema B
typeB: [Int]!

# Merged Result
type: [Int!]!

3.2.3 Post Merge Validation

After the schema is composed, there are certain validations that are only possible in the context of the fully merged schema. These validations verify overall consistency: for example, ensuring that no type is left without accessible fields, or that interfaces and their implementors remain compatible. This stage confirms that the combined schema remains coherent when considered as a whole.

3.2.3.1 Empty Merged Object Type

Error Code

EMPTY_MERGED_OBJECT_TYPE

Severity

ERROR

Formal Specification
  • Let types be the set of all object types in the composite schema.
  • For each type in types:
    • Let fields be a set of all fields in type.
    • fields must not be empty.
Explanatory Text

For object types defined across multiple source schemas, the merged object type is the superset of all fields defined in these source schemas. However, any field marked with @inaccessible in any source schema is hidden and not included in the merged object type. An object type with no fields, after considering @inaccessible annotations, is considered empty and invalid.

Examples

In the following example, the merged object type Author is valid. It includes all fields from both source schemas, with age being hidden due to the @inaccessible directive in one of the source schemas:

# Schema A

type Author {
  name: String
  age: Int @inaccessible
}

# Schema B
type Author {
  age: Int
  registered: Boolean
}

If the @inaccessible directive is applied to an object type itself, the entire merged object type is excluded from the composite execution schema, and it is not required to contain any fields.

# Schema A

type Author @inaccessible {
  name: String
  age: Int
}

# Schema B
type Author {
  registered: Boolean
}

This counter-example demonstrates an invalid merged object type. In this case, Author is defined in two source schemas, but all fields are marked as @inaccessible in at least one of the source schemas, resulting in an empty merged object type:

Counter Example № 162
# Schema A

type Author {
  name: String @inaccessible
  registered: Boolean
}

# Schema B

type Author {
  name: String
  registered: Boolean @inaccessible
}

3.2.3.2 No Queries

Error Code

NO_QUERIES

Severity

ERROR

Formal Specification
  • Let fields be the set of all fields in the Query type of the merged schema.
  • HasPublicField(fields) must be true.
HasPublicField(fields)
  1. For each field in fields:
    1. If IsExposed(field) is true
      1. return true
  2. return false
Explanatory Text

This rule ensures that the composed schema includes at least one accessible field on the root Query type.

In GraphQL, the Query type is essential as it defines the entry points for read operations. If none of the source schemas expose any query fields, the composed schema would lack a root query, making it an invalid GraphQL schema.
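The HasPublicField check can be sketched in Python. This is a non-normative illustration: query fields are plain dicts, and the `is_exposed` predicate passed in stands in for the spec's IsExposed(field).

```python
def check_no_queries(query_fields, is_exposed):
    # Return the list of composition errors for this rule: empty when at
    # least one root query field is exposed, otherwise NO_QUERIES.
    if any(is_exposed(f) for f in query_fields):
        return []
    return ["NO_QUERIES"]
```

With an `is_exposed` that treats @inaccessible fields as hidden, a merged Query type whose only fields are inaccessible produces the NO_QUERIES error.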

Examples

In this example, at least one schema provides accessible query fields, satisfying the rule.

# Schema A
type Query {
  product(id: ID!): Product
}

type Product {
  id: ID!
}

# Schema B
type Query {
  review(id: ID!): Review
}

type Review {
  id: ID!
  content: String
  rating: Int
}

Even if some query fields are marked as @inaccessible, as long as there is at least one accessible query field in the composed schema, the rule is satisfied.

In this case, Schema A exposes an internal query field internalData marked with @inaccessible, making it hidden in the composed schema. However, Schema B provides an accessible product query field. Therefore, the composed schema has at least one accessible query field, adhering to the rule.

# Schema A
type Query {
  internalData: InternalData @inaccessible
}

type InternalData {
  secret: String
}

# Schema B
type Query {
  product(id: ID!): Product
}

type Product {
  id: ID!
  name: String
}

If all query fields in all schemas are marked as @inaccessible, the composed schema will lack accessible query fields, violating the rule.

In the following counter-example, both schemas have query fields, but all are marked as @inaccessible.

This means there are no accessible query fields in the composed schema, triggering the NO_QUERIES error.

# Schema A
type Query {
  internalData: InternalData @inaccessible
}

type InternalData {
  secret: String
}

# Schema B
type Query {
  adminStats: AdminStats @inaccessible
}

type AdminStats {
  userCount: Int
}

3.2.3.3 Implemented by Inaccessible

Error Code

IMPLEMENTED_BY_INACCESSIBLE

Severity

ERROR

Formal Specification
  • Let schema be the merged composite execution schema.
  • Let types be the set of all object types in schema.
  • For each type in types:
    • If type is not marked with @inaccessible:
      • Let implementedInterfaces be the set of all interfaces implemented by type.
      • For each field in type‘s fields:
        • If field is marked with @inaccessible:
          • For each implementedInterface in implementedInterfaces:
            • Let interfaceField be the field on implementedInterface that has the same name as field
            • If interfaceField exists:
              • IsExposed(interfaceField) must be false
Explanatory Text

This rule ensures that inaccessible fields (@inaccessible) on an object type are not exposed through an interface. An object type that implements an interface must provide public access to each field defined by the interface. If a field on an object type is marked as @inaccessible but implements an interface field that is visible in the composed schema, this creates a contradiction: the interface contract requires that field to be accessible, yet the object type implementation hides it.

This rule prevents inconsistencies in the composed schema, ensuring that every interface field visible in the composed schema is also publicly visible on all types implementing that interface.
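The validation loop from the formal specification can be sketched in Python. This is a non-normative illustration: the object type and its interfaces are plain dicts, and the "exposed" flag on interface fields stands in for the spec's IsExposed predicate.

```python
def implemented_by_inaccessible_errors(object_type, interfaces):
    # object_type: {"name", "inaccessible", "fields": {name: {"inaccessible": bool}}}
    # interfaces:  the interfaces the object type implements, each
    #              {"name", "fields": {name: {"exposed": bool}}}
    errors = []
    if object_type.get("inaccessible"):
        return errors  # the rule only applies to visible object types
    for field_name, field in object_type["fields"].items():
        if not field.get("inaccessible"):
            continue
        # An @inaccessible field must not implement a visible interface field.
        for iface in interfaces:
            iface_field = iface["fields"].get(field_name)
            if iface_field is not None and iface_field.get("exposed"):
                errors.append(("IMPLEMENTED_BY_INACCESSIBLE",
                               object_type["name"], field_name))
    return errors
```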

Examples

In the following example, User.id is accessible and implements Node.id, which is also accessible, so no error occurs.

# The interface field `id` is visible and provided by `User` without @inaccessible.
interface Node {
  id: ID!
}

type User implements Node {
  id: ID!
  name: String
}

Since Auditable and its field lastAudit are @inaccessible, the Order.lastAudit field is allowed to be @inaccessible because it does not implement any visible interface field in the composed schema.

# The entire interface is @inaccessible, thus its fields are not publicly visible.
interface Auditable @inaccessible {
  lastAudit: DateTime!
}

type Order implements Auditable {
  lastAudit: DateTime! @inaccessible
  orderNumber: String
}

In this example, Node.id is visible in the public schema (no @inaccessible), but User.id is marked @inaccessible. This violates the interface contract because User claims to implement Node, yet does not expose the id field to the public schema.

Counter Example № 163
interface Node {
  id: ID!
}

type User implements Node {
  id: ID! @inaccessible
  name: String
}

3.2.3.4 Interface Field No Implementation

Error Code

INTERFACE_FIELD_NO_IMPLEMENTATION

Severity

ERROR

Formal Specification
  • Let schema be the merged composite execution schema.
  • Let objectTypes be the set of all object types defined in schema.
  • For each objectType in objectTypes:
    • Let interfaces be the set of interface types that objectType implements.
    • For each interface in interfaces:
      • Let interfaceFields be the set of fields defined on interface that are visible in the merged schema.
      • For each field in interfaceFields:
        • If field is not present on objectType:
          • Produce an INTERFACE_FIELD_NO_IMPLEMENTATION error.
Explanatory Text

In GraphQL, any object type that implements an interface must provide a field definition for every field declared by that interface. If an object type fails to implement a particular field required by one of its interfaces, the composite schema becomes invalid because the resulting schema breaks the contract defined by that interface.

This rule checks that object types merged from different sources correctly implement all interface fields. In scenarios where a schema defines an interface field, but the implementing object type in another schema omits that field, an error is raised.
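The check can be sketched in Python. This is a non-normative illustration in which types are reduced to their name plus the set of visible field names.

```python
def interface_field_no_implementation_errors(object_type, interfaces):
    # object_type: {"name": str, "fields": set of field names}
    # interfaces:  [{"name": str, "fields": set of visible field names}]
    errors = []
    for iface in interfaces:
        for field_name in sorted(iface["fields"]):
            if field_name not in object_type["fields"]:
                errors.append(("INTERFACE_FIELD_NO_IMPLEMENTATION",
                               object_type["name"], field_name))
    return errors
```

Run against the counter-example below, GuestUser is reported for the missing email field while RegisteredUser passes.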

Examples

In this valid example, the User interface has three fields: id, name, and email. Both the RegisteredUser and GuestUser types implement all three fields, satisfying the interface contract.

Example № 164
# Schema A
interface User {
  id: ID!
  name: String!
  email: String
}

type RegisteredUser implements User {
  id: ID!
  name: String!
  email: String
  lastLogin: DateTime
}

# Schema B
interface User {
  id: ID!
  name: String!
  email: String
}

type GuestUser implements User {
  id: ID!
  name: String!
  email: String
  temporaryCartId: String
}

In this counter-example, the User interface is defined with three fields, but the GuestUser type omits one of them (email), causing an INTERFACE_FIELD_NO_IMPLEMENTATION error.

Although GuestUser implements User, it does not provide the email field. Since the merged schema sees that the interface User has email but GuestUser does not provide it, the schema composition fails with the INTERFACE_FIELD_NO_IMPLEMENTATION error.

Counter Example № 165
# Schema A
interface User {
  id: ID!
  name: String!
  email: String
}

type RegisteredUser implements User {
  id: ID!
  name: String!
  email: String
  lastLogin: DateTime
}

# Schema B
interface User {
  id: ID!
  name: String!
}

type GuestUser implements User {
  id: ID!
  name: String!
  temporaryCartId: String
}

3.2.3.5 Invalid Field Sharing

Error Code

INVALID_FIELD_SHARING

Severity

ERROR

Formal Specification
  • Let schema be the merged composite execution schema.
  • Let types be the set of all object and interface types in schema.
  • For each type in types:
    • If type is the Subscription type:
      • Let fields be the set of all fields in type.
      • For each field in fields:
        • If field is marked with @shareable:
          • Produce an INVALID_FIELD_SHARING error.
    • Otherwise:
      • Let fields be the set of all fields on type.
      • For each field in fields:
        • If field is not part of a @key directive:
          • Let fieldDefinitions be the set of all field definitions for field across all source schemas excluding fields marked with @external or @override.
          • If fieldDefinitions has more than one element:
            • field must be marked as @shareable in at least one schema.
Explanatory Text

A field in a federated GraphQL schema may be marked @shareable, indicating that the same field can be resolved by multiple schemas without conflict. When a field is not marked as @shareable (sometimes called “non-shareable”), it cannot be provided by more than one schema.

Field definitions marked as @external or @override are excluded when validating whether a field is shareable. These annotations indicate specific cases where field ownership lies with another schema or has been replaced.

Additionally, subscription root fields cannot be shared (i.e., they are effectively non-shareable), as subscription events from multiple schemas would create conflicts in the composed schema. Attempting to mark a subscription field as shareable or to define it in multiple schemas triggers the same error.

Examples

In this example, the User type field fullName is marked as shareable in both schemas, allowing them to serve consistent data for that field without conflict.

Example № 166
# Schema A
type User @key(fields: "id") {
  id: ID!
  username: String
  fullName: String @shareable
}

# Schema B
type User @key(fields: "id") {
  id: ID!
  fullName: String @shareable
  email: String
}

In the following example, User.fullName is overridden in one schema and therefore the field can be defined in multiple schemas without being marked as @shareable.

Example № 167
# Schema A
type User @key(fields: "id") {
  id: ID!
  fullName: String @override(from": "B")
}

# Schema B
type User @key(fields: "id") {
  id: ID!
  fullName: String
}

In the following example, User.fullName is marked as @external in one schema and therefore the field can be defined in the other schema without being marked as @shareable.

Example № 168
# Schema A
type User @key(fields: "id") {
  id: ID!
  fullName: String @external
}

# Schema B
type User @key(fields: "id") {
  id: ID!
  fullName: String
}

In the following counter-example, User.profile is non-shareable but is defined and resolved by two different schemas, resulting in an INVALID_FIELD_SHARING error.

Counter Example № 169
# Schema A
type User @key(fields: "id") {
  id: ID!
  profile: Profile
}

type Profile {
  avatarUrl: String
}

# Schema B
type User @key(fields: "id") {
  id: ID!
  profile: Profile
}

type Profile {
  avatarUrl: String
}

By definition, root subscription fields cannot be shared across multiple schemas. In this example, both schemas define a subscription field newOrderPlaced:

Counter Example № 170
# Schema A
type Subscription {
  newOrderPlaced: Order @shareable
}

type Order {
  id: ID!
  items: [String]
}

# Schema B
type Subscription {
  newOrderPlaced: Order @shareable
}

3.2.3.6 Invalid Shareable Usage

Error Code

INVALID_SHAREABLE_USAGE

Severity

ERROR

Formal Specification
  • Let schema be one of the composed schemas.
  • Let types be the set of types defined in schema.
  • For each type in types:
    • If type is an interface type:
      • For each field definition field in type:
        • If field is annotated with @shareable, produce an INVALID_SHAREABLE_USAGE error.
Explanatory Text

The @shareable directive is intended to indicate that a field on an object type can be resolved by multiple schemas without conflict. As a result, it is only valid to use @shareable on fields of object types (or on the entire object type itself).

Applying @shareable to interface fields is disallowed and violates the valid usage of the directive. This rule prevents schema composition errors and data conflicts by ensuring that @shareable is used only in contexts where shared field resolution is meaningful and unambiguous.

Examples

In this example, the field orderStatus on the Order object type is marked with @shareable, which is allowed. It signals that this field can be served from multiple schemas without creating a conflict.

Example № 171
type Order {
  id: ID!
  orderStatus: String @shareable
  total: Float
}

In this example, the InventoryItem interface has a field sku marked with @shareable, which is invalid usage. Marking an interface field as shareable leads to an INVALID_SHAREABLE_USAGE error.

Counter Example № 172
interface InventoryItem {
  sku: ID! @shareable
  name: String
}

3.2.3.7 Only Inaccessible Children

Error Code

ONLY_INACCESSIBLE_CHILDREN

Severity

ERROR

Formal Specification
HasObjectTypeAccessibleChildren(type)
  1. Let fields be the set of all fields in type.
  2. For each field in fields:
    1. If field is marked with neither @inaccessible nor @internal:
      1. return true
  3. return false
HasEnumAccessibleChildren(type)
  1. Let values be the set of all values in type.
  2. For each value in values:
    1. If value is not marked with @inaccessible:
      1. return true
  3. return false
HasInputObjectAccessibleChildren(type)
  1. Let fields be the set of all fields in type.
  2. For each field in fields:
    1. If field is not marked with @inaccessible:
      1. return true
  3. return false
HasInterfaceAccessibleChildren(type)
  1. Let fields be the set of all fields in type.
  2. For each field in fields:
    1. If field is not marked with @inaccessible:
      1. return true
  3. return false
HasUnionAccessibleChildren(type)
  1. Let members be the set of all member types in type.
  2. For each member in members:
    1. Let type be the type of member.
    2. If type is not marked with @inaccessible:
      1. return true
  3. return false
Explanatory Text

A type that is not annotated with @inaccessible is expected to appear in the composed schema. If, however, all of its child elements (fields in an object or interface, values in an enum, fields in an input object or all types of a union) are individually marked @inaccessible (or @internal), then there are no accessible sub-parts of that type for consumers to query or reference.

Allowing such a type to remain in the composed schema despite having no publicly visible fields or values leads to an invalid schema. This rule enforces that a type visible in the composed schema must have at least one accessible child. Otherwise, it triggers an ONLY_INACCESSIBLE_CHILDREN error, prompting the user to either make the entire type @inaccessible or expose at least one child element.

Additionally, the rule applies to all types except the query, mutation, and subscription root types.
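The family of Has…AccessibleChildren helpers shares one shape and can be sketched in Python with a single non-normative function. Here "children" are the fields of an object, interface, or input object type, the values of an enum, or the member types of a union; per the rule, @internal is only relevant for object-type fields, so other kinds simply never set that flag.

```python
def has_accessible_children(type_def):
    # type_def: {"kind": str,
    #            "children": [{"inaccessible": bool, "internal": bool}]}
    # A type passes when at least one child is neither @inaccessible
    # nor @internal.
    return any(not c.get("inaccessible") and not c.get("internal")
               for c in type_def["children"])
```

A visible type for which this returns False triggers the ONLY_INACCESSIBLE_CHILDREN error.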

Examples

In the following example, the Profile type is included in the composed schema, and Profile.email is not marked with @inaccessible. This satisfies the rule, as there is at least one accessible child element.

type User {
  id: ID!
  profile: Profile
}

type Profile {
  name: String @inaccessible
  email: String
}

In the following example, all fields of the Profile type are marked with @inaccessible. But since Profile itself is marked with @inaccessible, it is not required to have any accessible children.

type User {
  id: ID!
  profile: Profile @inaccessible
}

type Profile @inaccessible {
  name: String @inaccessible
  email: String @inaccessible
}

The Profile type is included in the composed schema (no @inaccessible on the type), but all of its fields are marked @inaccessible, triggering an ONLY_INACCESSIBLE_CHILDREN error.

Counter Example № 173
type User {
  id: ID!
  profile: Profile
}

type Profile {
  name: String @inaccessible
  email: String @inaccessible
}

In this example, the DeliveryStatus enum is not annotated with @inaccessible, yet all of its values are.

Since there are no publicly visible values, an ONLY_INACCESSIBLE_CHILDREN error is produced.

Counter Example № 174enum DeliveryStatus {
  PENDING @inaccessible
  SHIPPED @inaccessible
  DELIVERED @inaccessible
}

3.2.3.8 Require Invalid Fields

Error Code

REQUIRE_INVALID_FIELDS

Severity

ERROR

Formal Specification
  • Let schema be the merged composite execution schema.
  • Let compositeTypes be the set of all composite types in schema.
  • For each composite in compositeTypes:
    • Let fields be the set of fields on composite.
    • Let arguments be the set of all arguments on fields.
    • For each argument in arguments:
      • If argument is not annotated with @require:
        • Continue
      • Let fieldsArg be the string value of the fields argument of the @require directive on argument.
      • Let parsedFieldsArg be the parsed selection map from fieldsArg.
      • ValidateSelectionMap(parsedFieldsArg, composite) must be true.
ValidateSelectionMap(selectionMap, parentType)
  1. For each selection in selectionMap:
    1. Let field be the field selected by selection on parentType.
    2. If field is not defined on parentType:
      1. return false
    3. Let fieldType be the type of field.
    4. If fieldType is not a scalar type
      1. Let subSelections be the selections in selection
      2. If subSelections is empty
        1. return false
      3. If ValidateSelectionMap(subSelections, fieldType) is false
        1. return false
  2. return true
Explanatory Text

Even if the selection map for @require(fields: "…") is syntactically valid, its contents must also be valid within the composed schema. Fields must exist on the parent type for them to be referenced by @require. In addition, referencing unknown fields in the selection map breaks the valid usage of @require, leading to a REQUIRE_INVALID_FIELDS error.
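
The ValidateSelectionMap algorithm above can be sketched in code. The schema representation (type name to a dict of field name to field type name) and the parsed selection shape (a list of (field name, sub-selections) pairs) are illustrative assumptions, not part of the specification.

```python
# Minimal sketch of ValidateSelectionMap, assuming the schema and parsed
# selection map shapes described in the lead-in.
SCALARS = {"ID", "String", "Int", "Float", "Boolean"}

SCHEMA = {
    "User": {"id": "ID", "name": "String", "profile": "Profile"},
    "Profile": {"id": "ID", "name": "String"},
}

def validate_selection_map(selections, parent_type):
    fields = SCHEMA.get(parent_type, {})
    for name, sub_selections in selections:
        if name not in fields:            # field must exist on the parent type
            return False
        field_type = fields[name]
        if field_type not in SCALARS:     # composite fields need sub-selections
            if not sub_selections:
                return False
            if not validate_selection_map(sub_selections, field_type):
                return False
    return True

# @require(fields: "name") on a User-declared field is valid:
assert validate_selection_map([("name", [])], "User")
# @require(fields: "unknownField") triggers REQUIRE_INVALID_FIELDS:
assert not validate_selection_map([("unknownField", [])], "User")
```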

Examples

In the following example, the @require directive’s fields argument is a valid selection set and satisfies the rule.

Example № 175type User @key(fields: "id") {
  id: ID!
  name: String!
  profile(name: String! @require(fields: "name")): Profile
}

type Profile {
  id: ID!
  name: String
}

In this counter-example, the @require directive does not have a valid selection set and triggers a REQUIRE_INVALID_FIELDS error.

Counter Example № 176type Book {
  id: ID!
  title(lang: String! @require(fields: "author { }")): String
}

type Author {
  name: String
}

In this counter-example, the @require directive references a field (unknown) that does not exist on the parent type (Book), causing a REQUIRE_INVALID_FIELDS error.

Counter Example № 177type Book {
  id: ID!
  pages(pageSize: Int @require(fields: "unknownField")): Int
}

3.2.3.9 Provides Invalid Fields

Error Code

PROVIDES_INVALID_FIELDS

Severity

ERROR

Formal Specification
  • Let schema be the merged composite execution schema.
  • Let fieldsWithProvides be the set of all fields annotated with the @provides directive in schema.
  • For each field in fieldsWithProvides:
    • Let fieldsArg be the string value of the fields argument of the @provides directive on field.
    • Let parsedSelectionSet be the parsed selection set from fieldsArg.
    • Let returnType be the return type of field.
    • ValidateSelectionSet(parsedSelectionSet, returnType) must be true.
ValidateSelectionSet(selectionSet, parentType)
  1. For each selection in selectionSet:
    1. Let selectedField be the field named by selection in parentType.
    2. If selectedField does not exist on parentType:
      1. return false
    3. If selectedField returns a composite type:
      1. Let subSelections be the selections in selection
      2. If subSelections is empty
        1. return false
      3. Let fieldType be the return type of selectedField
      4. If ValidateSelectionSet(subSelections, fieldType) is false
        1. return false
  2. return true
Explanatory Text

Even if the @provides(fields: "…") argument is well-formed syntactically, the selected fields must actually exist on the return type of the field. Invalid field references (e.g., selecting non-existent fields, referencing fields on the wrong type, or incorrectly omitting required nested selections) lead to a PROVIDES_INVALID_FIELDS error.
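
A sketch of ValidateSelectionSet applied to the @provides examples below. The type table and the (name, children) selection shape are assumptions for illustration only.

```python
# Illustrative PROVIDES_INVALID_FIELDS check: the selection set must resolve
# against the return type of the annotated field.
SCALARS = {"ID", "String", "Int"}

TYPES = {
    "User": {"id": "ID", "details": "UserDetails"},
    "UserDetails": {"hobbies": "String"},
}

def validate_selection_set(selections, parent_type):
    for name, children in selections:
        field_type = TYPES.get(parent_type, {}).get(name)
        if field_type is None:            # unknown field on the parent type
            return False
        if field_type not in SCALARS:     # composite: must have sub-selections
            if not children or not validate_selection_set(children, field_type):
                return False
    return True

# @provides(fields: "hobbies") on User.details (return type UserDetails):
assert validate_selection_set([("hobbies", [])], "UserDetails")
# @provides(fields: "unknownField") must be rejected:
assert not validate_selection_set([("unknownField", [])], "UserDetails")
```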

Examples

In the following example, the @provides directive references a valid field (hobbies) on the UserDetails type.

Example № 178type User @key(fields: "id") {
  id: ID!
  details: UserDetails @provides(fields: "hobbies")
}

type UserDetails {
  hobbies: [String]
}

In the following counter-example, the @provides directive specifies a field named unknownField which is not defined on UserDetails. This raises a PROVIDES_INVALID_FIELDS error.

Counter Example № 179type User @key(fields: "id") {
  id: ID!
  details: UserDetails @provides(fields: "unknownField")
}

type UserDetails {
  hobbies: [String]
}

3.2.3.10 Empty Merged Input Object Type

Error Code

EMPTY_MERGED_INPUT_OBJECT_TYPE

Severity

ERROR

Formal Specification
  • Let inputTypes be the set of all input object types in the composite schema.
  • For each inputType in inputTypes:
    • Let fields be a set of all fields in inputType.
    • fields must not be empty.
Explanatory Text

For input object types defined across multiple source schemas, the merged input object type is the intersection of all fields defined in these source schemas. Any field marked with the @inaccessible directive in any source schema is hidden and not included in the merged input object type. An input object type with no fields, after considering @inaccessible annotations, is considered empty and invalid.
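
The intersection-based merge can be sketched as follows, assuming each source schema's definition of the input object is modeled as a dict mapping field name to an accessibility flag (an illustrative shape, not a normative one).

```python
# Sketch of the EMPTY_MERGED_INPUT_OBJECT_TYPE check: merge by field-name
# intersection, then drop any field marked @inaccessible in any source schema.
def merge_input_fields(definitions):
    common = set(definitions[0])
    for definition in definitions[1:]:
        common &= set(definition)       # intersection of fields across schemas
    # a field survives only if it is accessible in every source schema
    return {f for f in common if all(d[f] for d in definitions)}

# valid: "name" is defined and accessible everywhere
assert merge_input_fields([{"name": True}, {"name": True}]) == {"name"}

# invalid: every common field is @inaccessible in at least one schema,
# so the merged input object type would be empty
a = {"name": False, "paperback": True}
b = {"name": True, "paperback": False}
assert merge_input_fields([a, b]) == set()
```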

Examples

In the following example, the merged input object type BookFilter is valid.

input BookFilter {
  name: String
}

input BookFilter {
  name: String
}

If the @inaccessible directive is applied to an input object type itself, the entire merged input object type is excluded from the composite execution schema, and it is not required to contain any fields.

input BookFilter @inaccessible {
  name: String
  minPageCount: Int
}

input BookFilter {
  name: Boolean
}

This counter-example demonstrates an invalid merged input object type. In this case, BookFilter is defined in two source schemas, but all fields are marked as @inaccessible in at least one of the source schemas, resulting in an empty merged input object type:

Counter Example № 180input BookFilter {
  name: String @inaccessible
  paperback: Boolean
}

input BookFilter {
  name: String
  paperback: Boolean @inaccessible
}

Here is another counter-example where the merged input object type is empty because no fields intersect between the two source schemas:

Counter Example № 181input BookFilter {
  paperback: Boolean
}

input BookFilter {
  name: String
}

3.2.3.11 Non-Null Input Fields cannot be inaccessible

Error Code

NON_NULL_INPUT_FIELD_IS_INACCESSIBLE

Formal Specification
  • Let fields be the set of all fields across all input types in all source schemas.
  • For each field in fields:
    • If field is a non-null input field:
      • Let coordinate be the coordinate of field.
      • coordinate must be in the composite schema.
Explanatory Text

When an input field is declared as non-null in any source schema, it imposes a hard requirement: queries or mutations that reference this field must provide a value for it. If the field is then marked as @inaccessible or removed during schema composition, the final schema would still implicitly demand a value for a field that no longer exists in the composed schema, making it impossible to fulfill the requirement.

As a result:

  • Nullable (optional) fields can be hidden or removed without invalidating the composed schema, because the user is never required to supply a value for them.
  • Non-null (required) fields, however, must remain exposed in the composed schema so that users can provide values for those fields. Hiding a required input field breaks the schema contract and leads to an invalid composition.
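
The two bullets above amount to one check over all source-schema input fields. The (coordinate, is_non_null) pair shape is an assumption made for this sketch, not part of the specification.

```python
# Hypothetical sketch of the NON_NULL_INPUT_FIELD_IS_INACCESSIBLE rule.
def check_required_fields_exposed(source_fields, composite_fields):
    """source_fields: (coordinate, is_non_null) pairs from all source schemas.
    composite_fields: set of field coordinates in the composite schema."""
    errors = []
    for coordinate, non_null in source_fields:
        if non_null and coordinate not in composite_fields:
            # a required input field was hidden or removed during merging
            errors.append(("NON_NULL_INPUT_FIELD_IS_INACCESSIBLE", coordinate))
    return errors

# mirrors the counter-example: age is non-null in Schema A but is not
# exposed by the composite schema
source = [("BookFilter.author", True), ("BookFilter.age", True)]
composite = {"BookFilter.author"}
assert check_required_fields_exposed(source, composite) == [
    ("NON_NULL_INPUT_FIELD_IS_INACCESSIBLE", "BookFilter.age")
]
```
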
Examples

The following is valid because the age field, although @inaccessible in one source schema, is nullable and can be safely omitted in the final schema without breaking any mandatory input requirement.

Example № 182# Schema A
input BookFilter {
  author: String!
  age: Int @inaccessible
}

# Schema B
input BookFilter {
  author: String!
  age: Int
}

# Composite Schema
input BookFilter {
  author: String!
}

Another valid case is when a nullable input field is removed during merging:

Example № 183# Schema A
input BookFilter {
  author: String!
  age: Int
}

# Schema B
input BookFilter {
  author: String!
}

# Composite Schema
input BookFilter {
  author: String!
}

An invalid case is when a non-null input field is inaccessible:

Counter Example № 184# Schema A
input BookFilter {
  author: String!
  age: Int!
}

# Schema B
input BookFilter {
  author: String!
  age: Int @inaccessible
}

# Composite Schema
input BookFilter {
  author: String!
}

Another invalid case is when a non-null input field is removed during merging:

Counter Example № 185# Schema A
input BookFilter {
  author: String!
  age: Int!
}

# Schema B
input BookFilter {
  author: String!
}

# Composite Schema
input BookFilter {
  author: String!
}

3.2.3.12 Input Fields cannot reference inaccessible type

Error Code

INPUT_FIELD_REFERENCES_INACCESSIBLE_TYPE

Formal Specification
  • Let fields be the set of all fields of the input types
  • For each field in fields:
    • If field is not declared as @inaccessible
      • Let namedType be the named type that field references
      • namedType must not be declared as @inaccessible
Explanatory Text

In a composed schema, a field within an input type must only reference types that are exposed. This requirement guarantees that public types do not reference inaccessible structures which are intended for internal use.
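
A sketch of this check, assuming input types are modeled as a dict of type name to an accessibility flag plus fields mapping to (named type, field-inaccessible) pairs. These shapes are illustrative only.

```python
# Illustrative INPUT_FIELD_REFERENCES_INACCESSIBLE_TYPE check.
def inaccessible_type_references(input_types):
    errors = []
    for type_name, type_def in input_types.items():
        for field_name, (named_type, field_hidden) in type_def["fields"].items():
            if field_hidden:
                continue  # an @inaccessible field may reference a hidden type
            referenced = input_types.get(named_type)
            if referenced is not None and referenced["inaccessible"]:
                # a visible field must not reference a hidden type
                errors.append(f"{type_name}.{field_name}")
    return errors

# mirrors the counter-example: Input1.field2 references hidden Input2
schema = {
    "Input1": {"inaccessible": False,
               "fields": {"field1": ("String", False),
                          "field2": ("Input2", False)}},
    "Input2": {"inaccessible": True,
               "fields": {"field3": ("String", False)}},
}
assert inaccessible_type_references(schema) == ["Input1.field2"]
```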

A valid case where a public input field references another public input type:

Example № 186input Input1 {
  field1: String!
  field2: Input2
}

input Input2 {
  field3: String
}

Another valid case is where the field is not exposed in the composed schema:

Example № 187input Input1 {
  field1: String!
  field2: Input2 @inaccessible
}

input Input2 @inaccessible {
  field3: String
}

An invalid case is when an input field references an inaccessible type:

Counter Example № 188input Input1 {
  field1: String!
  field2: Input2!
}

input Input2 @inaccessible {
  field3: String
}

3.3 Validate Satisfiability

The final step confirms that the composite schema supports executable queries without leading to invalid conditions. Each query path defined in the merged schema is checked to ensure that every field can be resolved. If any query path is unresolvable, the schema is deemed unsatisfiable, and composition fails.

4 Executor

A distributed GraphQL executor acts as an orchestrator that uses schema metadata to rewrite a GraphQL request into a query plan. This plan resolves the required data from subgraphs and coerces this data into the result of the GraphQL request.

4.1 Configuration

The supergraph is a GraphQL IDL document that contains metadata for the query planner that describes the relationship between type system members and the type system members on subgraphs.

5 Shared Types

In this section we outline directives and types that are shared between the subgraph configuration and the gateway configuration document.

5.1 Name

scalar Name

The scalar Name represents a valid GraphQL type name.

5.2 FieldSelection

scalar FieldSelection

The scalar FieldSelection represents a GraphQL field selection syntax.

Example № 189
abc(def: 1) { ghi }

6 Appendix A: Specification of FieldSelectionMap Scalar

6.1 Introduction

This appendix focuses on the specification of the FieldSelectionMap scalar type. FieldSelectionMap is designed to express semantic equivalence between arguments of a field and fields within the result type. Specifically, it allows defining complex relationships between input arguments and fields in the output object by encapsulating these relationships within a parsable string format. It is used in the @is and @require directives.

To illustrate, consider a simple example from a GraphQL schema:

type Query {
  userById(userId: ID! @is(field: "id")): User! @lookup
}

In this schema, the userById query uses the @is directive with FieldSelectionMap to declare that the userId argument is semantically equivalent to the User.id field.

An example query might look like this:

query {
  userById(userId: "123") {
    id
  }
}

Here, it is expected that the userId “123” corresponds directly to User.id, resulting in the following response if correctly implemented:

{
  "data": {
    "userById": {
      "id": "123"
    }
  }
}

The FieldSelectionMap scalar is represented as a string that, when parsed, produces a SelectedValue.

A SelectedValue must exactly match the shape of the argument value to be considered valid. For non-scalar arguments, you must specify each field of the input type in SelectedObjectValue.

Example № 190extend type Query {
  findUserByName(user: UserInput! @is(field: "{ firstName: firstName }")): User
    @lookup
}
Counter Example № 191extend type Query {
  findUserByName(user: UserInput! @is(field: "firstName")): User @lookup
}

6.1.1 Scope

The FieldSelectionMap scalar type is used to establish semantic equivalence between an argument and fields within a specific output type. This output type is always a composite type, but the way it’s determined can vary depending on the directive and context in which the FieldSelectionMap is used.

For example, when used with the @is directive, the FieldSelectionMap maps between the argument and fields in the return type of the field. However, when used with the @require directive, it maps between the argument and fields in the object type on which the field is defined.

Consider this example:

type Product {
  id: ID!
  delivery(
    zip: String!
    size: Int! @require(field: "dimension.size")
    weight: Int! @require(field: "dimension.weight")
  ): DeliveryEstimates
}

In this case, "dimension.size" and "dimension.weight" refer to fields of the Product type, not the DeliveryEstimates return type.

Consequently, a FieldSelectionMap must be interpreted in the context of a specific argument, its associated directive, and the relevant output type as determined by that directive’s behavior.

Examples

Scalar fields can be mapped directly to arguments.

This example maps the Product.weight field to the weight argument:

Example № 192type Product {
  shippingCost(weight: Float @require(field: "weight")): Currency
}

This example maps the Product.shippingWeight field to the weight argument:

Example № 193type Product {
  shippingCost(weight: Float @require(field: "shippingWeight")): Currency
}

Nested fields can be mapped to arguments by specifying the path. This example maps the nested field Product.packaging.weight to the weight argument:

Example № 194type Product {
  shippingCost(weight: Float @require(field: "packaging.weight")): Currency
}

Complex objects can be mapped to arguments by specifying each field.

This example maps the Product.width and Product.height fields to the dimension argument:

Example № 195type Product {
  shippingCost(
    dimension: DimensionInput @require(field: "{ width: width height: height }")
  ): Currency
}

The shorthand equivalent is:

Example № 196type Product {
  shippingCost(
    dimension: DimensionInput @require(field: "{ width height }")
  ): Currency
}

In case the input field names do not match the output field names, explicit mapping is required.

Example № 197type Product {
  shippingCost(
    dimension: DimensionInput @require(field: "{ w: width h: height }")
  ): Currency
}

Even if Product.dimension has all the fields needed for the input object, an explicit mapping is always required.

This example is NOT allowed because it lacks explicit mapping:

Counter Example № 198type Product {
  shippingCost(dimension: DimensionInput @require(field: "dimension")): Currency
}

Instead, you can traverse into output fields by specifying the path.

This example shows how to map nested fields explicitly:

Example № 199type Product {
  shippingCost(
    dimension: DimensionInput
      @require(field: "{ width: dimension.width height: dimension.height }")
  ): Currency
}

The path does NOT affect the structure of the input object. It is only used to traverse the output object:

Example № 200type Product {
  shippingCost(
    dimension: DimensionInput
      @require(field: "{ width: size.width height: size.height }")
  ): Currency
}

To avoid repeating yourself, you can prefix the selection with a path that ends in a dot to traverse INTO the output type.

This affects how fields get interpreted but does NOT affect the structure of the input object:

Example № 201type Product {
  shippingCost(
    dimension: DimensionInput @require(field: "dimension.{ width height }")
  ): Currency
}

This example is equivalent to the previous one:

Example № 202type Product {
  shippingCost(
    dimension: DimensionInput @require(field: "size.{ width height }")
  ): Currency
}

The path syntax is required for lists because list-valued path expressions would be ambiguous otherwise.

This example is NOT allowed because it lacks the dot syntax for lists:

Counter Example № 203type Product {
  shippingCost(
    dimensions: [DimensionInput]
      @require(field: "{ width: dimensions.width height: dimensions.height }")
  ): Currency
}

Instead, use the path syntax and brackets to specify the list elements:

Example № 204type Product {
  shippingCost(
    dimensions: [DimensionInput] @require(field: "dimensions[{ width height }]")
  ): Currency
}

With the path syntax it is possible to also select fields from a list of nested objects:

Example № 205type Product {
  shippingCost(partIds: [ID!] @require(field: "parts[id]")): Currency
}

For more complex input objects, all these constructs can be nested. This allows for detailed and precise mappings.

This example nests the weight field and the dimension object with its width and height fields:

Example № 206type Product {
  shippingCost(
    package: PackageInput
      @require(field: "{ weight, dimension: dimension.{ width height } }")
  ): Currency
}

This example nests the weight field and the size object with its width and height fields:

Example № 207type Product {
  shippingCost(
    package: PackageInput
      @require(field: "{ weight, size: dimension.{ width height } }")
  ): Currency
}

The label can be used to nest values that aren’t nested in the output.

This example nests Product.width and Product.height under dimension:

Example № 208type Product {
  shippingCost(
    package: PackageInput
      @require(field: "{ weight, dimension: { width height } }")
  ): Currency
}

In the following example, dimensions are nested under dimension in the output:

Example № 209type Product {
  shippingCost(
    package: PackageInput
      @require(field: "{ weight, dimension: dimension.{ width height } }")
  ): Currency
}

6.2 Language

According to the GraphQL specification, an argument is a key-value pair in which the key is the name of the argument and the value is a Value.

The Value of an argument can take various forms: it might be a scalar value (such as Int, Float, String, Boolean, Null, or Enum), a list (ListValue), an input object (ObjectValue), or a Variable.

Within the scope of the FieldSelectionMap, the relationship between input and output is established by defining the Value of the argument as a selection of fields from the output object.

Yet only certain types of Value have a semantic meaning. ObjectValue and ListValue are used to define the structure of the value. Scalar values, on the other hand, do not carry semantic importance in this context.

While variables may have legitimate use cases, they are considered out of scope for the current discussion.

However, it’s worth noting that there could be potential applications for allowing them in the future.

Given that these potential values do not align with the standard literals defined in the GraphQL specification, a new literal called SelectedValue is introduced, along with SelectedObjectValue.

Beyond these literals, an additional literal called Path is necessary.

6.2.1 Name

The Name is equivalent to the Name defined in the GraphQL specification.

6.2.2 Path

FieldName :
  Name

TypeName :
  Name

The Path literal is a string used to select a single output value from the return type by specifying a path to that value. This path is defined as a sequence of field names, each separated by a period (.) to create segments.

Example № 210
book.title

Each segment specifies a field in the context of the parent, with the root segment referencing a field in the return type of the query. Arguments are not allowed in a Path.

To select a field when dealing with abstract types, the segment selecting the parent field must specify the concrete type of the field using angle brackets after the field name if the field is not defined on an interface.

In the following example, the path mediaById<Book>.isbn specifies that mediaById returns a Book, and the isbn field is selected from that Book.

Example № 211
mediaById<Book>.isbn

6.2.3 SelectedValue

A SelectedValue is defined as either a Path or a SelectedObjectValue

A Path is designed to point to only a single value, although it may reference multiple fields depending on the return type. To allow selection from different paths based on type, a SelectedValue can include multiple Paths separated by a pipe (|).

In the following example, the value could be title when referring to a Book and movieTitle when referring to a Movie.

Example № 212
mediaById<Book>.title | mediaById<Movie>.movieTitle

The | operator can be used to match multiple possible SelectedValue. This operator is applied when mapping an abstract output type to a @oneOf input type.

Example № 213
{ movieId: <Movie>.id } | { productId: <Product>.id }
Example № 214
{ nested: { movieId: <Movie>.id } | { productId: <Product>.id } }

6.2.4 SelectedObjectValue

A SelectedObjectValue is an unordered list of keyed input values wrapped in curly braces {}. It must be used when the expected input type is an object type.

This structure is similar to the ObjectValue defined in the GraphQL specification, but it differs by allowing the inclusion of Path values within a SelectedValue, thus extending the traditional ObjectValue capabilities to support direct path selections.

A SelectedObjectValue following a Path is scoped to the type of the field selected by the Path. This means that the root of all SelectedValue inside the selection is no longer scoped to the root (defined by @is or @require) but to the field selected by the Path. The Path does not affect the structure of the input type.

This allows for reducing repetition in the selection.

The following example is valid:

Example № 215type Product {
  dimension: Dimension!
  shippingCost(
    dimension: DimensionInput! @require(field: "dimension.{ size weight }")
  ): Int!
}

The following example is equivalent to the previous one:

Example № 216type Product {
  dimensions: Dimension!
  shippingCost(
    dimensions: DimensionInput!
      @require(field: "{ size: dimensions.size weight: dimensions.weight }")
  ): Int! @lookup
}

6.2.5 SelectedListValue

A SelectedListValue is an ordered list of SelectedValue wrapped in square brackets []. It is used to express semantic equivalence between an argument expecting a list of values and the values of a list field within the output object.

The SelectedListValue differs from the ListValue defined in the GraphQL specification by only allowing one SelectedValue as an element.

The following example is valid:

Example № 217type Product {
  parts: [Part!]!
  partIds(partIds: [ID!]! @require(field: "parts[id]")): [ID!]!
}

In this example, the partIds argument is semantically equivalent to the id fields of the parts list.

The following example is invalid because it uses multiple SelectedValue as elements:

Counter Example № 218type Product {
  parts: [Part!]!
  partIds(parts: [PartInput!]! @require(field: "parts[id name]")): [ID!]!
}

input PartInput {
  id: ID!
  name: String!
}

A SelectedObjectValue can be used as an element of a SelectedListValue to select multiple object fields as long as the input type is a list of structurally equivalent objects.

Similar to SelectedObjectValue, a SelectedListValue following a Path is scoped to the type of the field selected by the Path. This means that the root of all SelectedValue inside the selection is no longer scoped to the root (defined by @is or @require) but to the field selected by the Path. The Path does not affect the structure of the input type.

The following example is valid:

Example № 219type Product {
  parts: [Part!]!
  partIds(parts: [PartInput!]! @require(field: "parts[{ id name }]")): [ID!]!
}

input PartInput {
  id: ID!
  name: String!
}

In case the input type is a nested list, the shape of the input object must match the shape of the output object.

Example № 220type Product {
  parts: [[Part!]]!
  partIds(
    parts: [[PartInput!]]! @require(field: "parts[[{ id name }]]")
  ): [ID!]!
}

input PartInput {
  id: ID!
  name: String!
}

The following example is valid:

Example № 221type Query {
  findLocation(
    location: LocationInput!
      @is(field: "{ coordinates: coordinates[{lat: x lon: y}]}")
  ): Location @lookup
}

type Coordinate {
  x: Int!
  y: Int!
}

type Location {
  coordinates: [Coordinate!]!
}

input PositionInput {
  lat: Int!
  lon: Int!
}

input LocationInput {
  coordinates: [PositionInput!]!
}

6.3 Validation

Validation ensures that FieldSelectionMap scalars are semantically correct within the given context.

Validation of FieldSelectionMap scalars occurs during the composition phase, ensuring that all FieldSelectionMap entries are syntactically correct and semantically meaningful relative to the context.

Composition is only possible if the FieldSelectionMap is validated successfully; an invalid FieldSelectionMap leads to undefined behavior and makes composition impossible.

In this section, we will assume the following type system in order to demonstrate examples:

type Query {
  mediaById(mediaId: ID!): Media
  findMedia(input: FindMediaInput): Media
  searchStore(search: SearchStoreInput): [Store]!
  storeById(id: ID!): Store
}

type Store {
  id: ID!
  city: String!
  media: [Media!]!
}

interface Media {
  id: ID!
}

type Book implements Media {
  id: ID!
  title: String!
  isbn: String!
  author: Author!
}

type Movie implements Media {
  id: ID!
  movieTitle: String!
  releaseDate: String!
}

type Author {
  id: ID!
  books: [Book!]!
}

input FindMediaInput @oneOf {
  bookId: ID
  movieId: ID
}

input SearchStoreInput {
  city: String
  hasInStock: FindMediaInput
}

6.3.1 Path Field Selections

Each segment of a Path must correspond to a valid field defined on the current type context.

Formal Specification
  • For each segment in the Path:
    • If the segment is a field
      • Let fieldName be the field name in the current segment.
      • fieldName must be defined on the current type in scope.
Explanatory Text

The Path literal is used to reference a specific output field from an input field. Each segment in the Path must correspond to a field that is valid within the current type scope.

For example, the following Path is valid in the context of Book:

Example № 222
title
Example № 223
<Book>.title

Incorrect paths, where the field does not exist on the specified type, result in validation errors. For instance, if <Book>.movieId is referenced but movieId is not a field of Book, the Path is invalid.

Counter Example № 224
movieId
Counter Example № 225
<Book>.movieId

6.3.2 Path Terminal Field Selections

Each terminal segment of a Path must follow the rules regarding whether the selected field is a leaf node.

Formal Specification
  • For each segment in the Path:
    • Let selectedType be the unwrapped type of the current segment.
    • If selectedType is a scalar or enum:
      • There must not be any further segments in Path.
    • If selectedType is an object, interface, or union:
      • There must be another segment in Path.
Explanatory Text

A Path that refers to scalar or enum fields must end at those fields. No further field selections are allowed after a scalar or enum. On the other hand, fields returning objects, interfaces, or unions must continue to specify further selections until you reach a scalar or enum field.

For example, the following Path is valid if title is a scalar field on the Book type:

Example № 226
book.title

The following Path is invalid because title should not have subselections:

Counter Example № 227
book.title.something

For non-leaf fields, the Path must continue to specify subselections until a leaf field is reached:

Example № 228
book.author.id

Invalid Path where non-leaf fields do not have further selections:

Counter Example № 229
book.author

6.3.3 Type Reference Is Possible

Each segment of a Path that references a type, must be a type that is valid in the current context.

Formal Specification
  • For each segment in a Path:
    • If segment is a type reference:
      • Let type be the type referenced in the segment.
      • Let parentType be the type of the parent of the segment.
      • Let applicableTypes be the intersection of GetPossibleTypes(type) and GetPossibleTypes(parentType).
      • applicableTypes must not be empty.
GetPossibleTypes(type)
  1. If type is an object type, return a set containing type.
  2. If type is an interface type, return the set of types implementing type.
  3. If type is a union type, return the set of possible types of type.
Explanatory Text

Type references inside a Path must be valid within the context of the surrounding type. A type reference is only valid if the referenced type could logically apply within the parent type.
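
GetPossibleTypes and the overlap test can be sketched against the example type system from the start of this section (Media, Book, Movie, Store). The dict-based schema shape is an assumption for illustration.

```python
# Sketch of the "Type Reference Is Possible" rule: a segment such as
# mediaById<Book> is valid only when the referenced type shares at least one
# possible type with the parent position's type.
OBJECTS = {"Book": {"Media"}, "Movie": {"Media"}, "Store": set()}
UNIONS = {}  # union name -> set of member types (none in this example)

def get_possible_types(type_name):
    if type_name in OBJECTS:                     # object type: itself
        return {type_name}
    if type_name in UNIONS:                      # union type: its members
        return UNIONS[type_name]
    # otherwise treat it as an interface: all objects implementing it
    return {obj for obj, ifaces in OBJECTS.items() if type_name in ifaces}

def type_reference_is_possible(referenced, parent):
    return bool(get_possible_types(referenced) & get_possible_types(parent))

# <Book> is valid in a Media-typed position; <Store> is not
assert type_reference_is_possible("Book", "Media")
assert not type_reference_is_possible("Store", "Media")
```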

6.3.4 Values of Correct Type

Formal Specification
  • For each SelectedValue value:
    • Let type be the type expected in the position value is found.
    • value must be coercible to type.
Explanatory Text

Literal values must be compatible with the type expected in the position they are found.

The following examples are valid use of value literals in the context of FieldSelectionMap scalar:

Example № 230type Query {
  storeById(id: ID! @is(field: "id")): Store! @lookup
}

type Store {
  id: ID
  city: String!
}

Non-coercible values are invalid. The following examples are invalid:

Counter Example № 231type Query {
  storeById(id: ID! @is(field: "id")): Store! @lookup
}

type Store {
  id: Int
  city: String!
}

6.3.5 Selected Object Field Names

Formal Specification
  • For each Selected Object Field field in the document:
    • Let fieldName be the Name of field.
    • Let fieldDefinition be the field definition provided by the parent selected object type named fieldName.
    • fieldDefinition must exist.
Explanatory Text

Every field provided in a selected object value must be defined in the set of possible fields of that input object’s expected type.

For example, the following is valid:

Example № 232type Query {
  storeById(id: ID! @is(field: "id")): Store! @lookup
}

type Store {
  id: ID
  city: String!
}

In contrast, the following is invalid because it uses a field “address” which is not defined on the expected type:

Counter Example № 233extend type Query {
  storeById(id: ID! @is(field: "address")): Store! @lookup
}

type Store {
  id: ID
  city: String!
}

6.3.6 Selected Object Field Uniqueness

Formal Specification
  • For each selected object value selectedObject:
    • For every field in selectedObject:
      • Let name be the Name of field.
      • Let fields be all Selected Object Fields named name in selectedObject.
      • fields must be the set containing only field.
Explanatory Text

Selected objects must not contain more than one field with the same name, as it would create ambiguity and potential conflicts.
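
The uniqueness rule reduces to detecting duplicate names among a selected object's fields, sketched here with an assumed list-of-names input shape:

```python
from collections import Counter

def duplicate_field_names(selected_object_fields):
    """Return names appearing more than once in a selected object; each
    duplicate violates Selected Object Field Uniqueness."""
    counts = Counter(selected_object_fields)
    return sorted(name for name, n in counts.items() if n > 1)

# mirrors the counter-example @is(field: "id id")
assert duplicate_field_names(["id", "id"]) == ["id"]
assert duplicate_field_names(["id", "city"]) == []
```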

For example, the following is invalid:

Counter Example № 234extend type Query {
  storeById(id: ID! @is(field: "id id")): Store! @lookup
}

type Store {
  id: ID
  city: String!
}
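The uniqueness rule amounts to detecting duplicate names among a selected object's fields, which can be sketched as follows (the list-of-names representation is an illustrative assumption):

```python
# Hypothetical sketch of the "Selected Object Field Uniqueness" rule:
# a selected object must not contain two fields with the same name.
from collections import Counter

def duplicate_field_names(selected_fields: list) -> list:
    """Return the field names that appear more than once."""
    return [name for name, count in Counter(selected_fields).items() if count > 1]

# Mirrors the counter example above: "id id" selects "id" twice.
assert duplicate_field_names(["id", "id"]) == ["id"]
assert duplicate_field_names(["id", "city"]) == []
```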

6.3.7 Required Selected Object Fields

Formal Specification
  • For each Selected Object:
    • Let fields be the fields provided by that Selected Object.
    • Let fieldDefinitions be the set of input object field definitions of that Selected Object.
    • For each fieldDefinition in fieldDefinitions:
      • Let type be the expected type of fieldDefinition.
      • Let defaultValue be the default value of fieldDefinition.
      • If type is Non-Null and defaultValue does not exist:
        • Let fieldName be the name of fieldDefinition.
        • Let field be the input object field in fields named fieldName.
        • field must exist.
Explanatory Text

Input object fields may be required: a selected object field is required whenever the corresponding input field is required. Otherwise, the selected object field is optional.

For instance, if the UserInput type requires the id field:

Example № 235
input UserInput {
  id: ID!
  name: String!
}

Then, an invalid selection would be missing the required id field:

Counter Example № 236
extend type Query {
  userById(user: UserInput! @is(field: "{ name: name }")): User! @lookup
}

If the UserInput type requires the name field, but the User type has an optional name field, the following selection would be valid.

Example № 237
extend type Query {
  findUser(input: UserInput! @is(field: "{ name: name }")): User! @lookup
}

type User {
  id: ID
  name: String
}

input UserInput {
  id: ID
  name: String!
}

However, if the UserInput type requires the name field and that field is not defined on the User type, the selection would be invalid.

Counter Example № 238
extend type Query {
  findUser(input: UserInput! @is(field: "{ id: id }")): User! @lookup
}

type User {
  id: ID
}

input UserInput {
  id: ID
  name: String!
}
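The required-fields check can be sketched as follows. This is a minimal illustration under stated assumptions: the `InputField` record is a hypothetical stand-in for the spec's input field definitions, not its actual data model.

```python
# Hypothetical sketch of the "Required Selected Object Fields" rule:
# for every non-null input field without a default value, the selected
# object must provide a field of the same name.
from dataclasses import dataclass

@dataclass
class InputField:
    name: str
    non_null: bool
    has_default: bool = False

# Mirrors the UserInput type from the examples above: id is optional,
# name is required (non-null, no default).
USER_INPUT = [
    InputField("id", non_null=False),
    InputField("name", non_null=True),
]

def missing_required_fields(field_defs, selected_names):
    """Names of required input fields absent from the selected object."""
    return [
        f.name
        for f in field_defs
        if f.non_null and not f.has_default and f.name not in selected_names
    ]

# Mirrors the counter example above: "{ id: id }" omits the required "name".
assert missing_required_fields(USER_INPUT, {"id"}) == ["name"]
assert missing_required_fields(USER_INPUT, {"id", "name"}) == []
```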

§Index

  1. AreTypesConsistent
  2. ArgumentsAreMergeable
  3. EnumsAreMergeable
  4. FieldName
  5. FieldsAreMergeable
  6. GetPossibleTypes
  7. HasEnumAccessibleChildren
  8. HasInputObjectAccessibleChildren
  9. HasInterfaceAccessibleChildren
  10. HasKeyFieldsArguments
  11. HasObjectTypeAccessibleChildren
  12. HasProvidesDirective
  13. HasPublicField
  14. HasUnionAccessibleChildren
  15. InputFieldsAreMergeable
  16. InputFieldsHaveConsistentDefaults
  17. IsListType
  18. IsValidKeyField
  19. LeastRestrictiveType
  20. MergeArgument
  21. MergeArgumentDefinitions
  22. MergeEnumTypes
  23. MergeInputField
  24. MergeInputTypes
  25. MergeInterfaceTypes
  26. MergeObjectTypes
  27. MergeOutputField
  28. MergeScalarTypes
  29. MergeSchemas
  30. MergeTypes
  31. MergeUnionTypes
  32. MostRestrictiveType
  33. Path
  34. PathSegment
  35. ProvidesHasArguments
  36. SelectedListValue
  37. SelectedObjectField
  38. SelectedObjectValue
  39. SelectedValue
  40. TypeName
  41. ValidateArgumentDefaultValues
  42. ValidateDefaultValue
  43. ValidateInputFieldDefaultValues
  44. ValidateSelectionMap
  45. ValidateSelectionSet
  1. 1 Overview
  2. 2 Source Schema
    1. 2.1 @lookup
    2. 2.2 @internal
    3. 2.3 @inaccessible
    4. 2.4 @is
    5. 2.5 @require
    6. 2.6 @key
    7. 2.7 @shareable
    8. 2.8 @provides
    9. 2.9 @external
    10. 2.10 @override
  3. 3 Schema Composition
    1. 3.1 Validate Source Schemas
    2. 3.2 Merge Source Schemas
      1. 3.2.1 Pre Merge Validation
        1. 3.2.1.1 Enum Type Default Value Uses Inaccessible Value
        2. 3.2.1.2 Output Field Types Mergeable
        3. 3.2.1.3 Disallowed Inaccessible Elements
        4. 3.2.1.4 External Argument Default Mismatch
        5. 3.2.1.5 External Argument Missing
        6. 3.2.1.6 External Argument Type Mismatch
        7. 3.2.1.7 External Missing on Base
        8. 3.2.1.8 External Type Mismatch
        9. 3.2.1.9 External Unused
        10. 3.2.1.10 Root Mutation Used
        11. 3.2.1.11 Root Query Used
        12. 3.2.1.12 Root Subscription Used
        13. 3.2.1.13 Key Fields Select Invalid Type
        14. 3.2.1.14 Key Directive in Fields Argument
        15. 3.2.1.15 Key Fields Has Arguments
        16. 3.2.1.16 Key Invalid Syntax
        17. 3.2.1.17 Key Invalid Fields
        18. 3.2.1.18 Provides Directive in Fields Argument
        19. 3.2.1.19 Provides Fields Has Arguments
        20. 3.2.1.20 Provides Fields Missing External
        21. 3.2.1.21 Query Root Type Inaccessible
        22. 3.2.1.22 Require Directive in Fields Argument
        23. 3.2.1.23 Require Invalid Fields Type
        24. 3.2.1.24 Require Invalid Syntax
        25. 3.2.1.25 Type Definition Invalid
        26. 3.2.1.26 Type Kind Mismatch
        27. 3.2.1.27 Provides Invalid Syntax
        28. 3.2.1.28 Invalid GraphQL
        29. 3.2.1.29 Override Collision with Another Directive
        30. 3.2.1.30 Override from Self
        31. 3.2.1.31 Override on Interface
        32. 3.2.1.32 Override Source Has Override
        33. 3.2.1.33 External Collision with Another Directive
        34. 3.2.1.34 Key Invalid Fields Type
        35. 3.2.1.35 Provides Invalid Fields Type
        36. 3.2.1.36 Provides on Non-Composite Field
        37. 3.2.1.37 External on Interface
        38. 3.2.1.38 Lookup Returns Non-Nullable Type
        39. 3.2.1.39 Lookup Returns List
        40. 3.2.1.40 Input Field Default Mismatch
        41. 3.2.1.41 Input Field Types Mergeable
        42. 3.2.1.42 Enum Values Mismatch
        43. 3.2.1.43 Input With Missing Required Fields
        44. 3.2.1.44 Field Argument Types Mergeable
      2. 3.2.2 Merge
        1. 3.2.2.1 Merge Scalar Types
        2. 3.2.2.2 Merge Interface Types
        3. 3.2.2.3 Merge Enum Types
        4. 3.2.2.4 Merge Union Types
        5. 3.2.2.5 Merge Input Types
        6. 3.2.2.6 Merge Object Types
        7. 3.2.2.7 Merge Output Field
        8. 3.2.2.8 Merge Input Field
        9. 3.2.2.9 Merge Argument Definitions
        10. 3.2.2.10 Merge Argument
        11. 3.2.2.11 Least Restrictive Type
        12. 3.2.2.12 Most Restrictive Type
      3. 3.2.3 Post Merge Validation
        1. 3.2.3.1 Empty Merged Object Type
        2. 3.2.3.2 No Queries
        3. 3.2.3.3 Implemented by Inaccessible
        4. 3.2.3.4 Interface Field No Implementation
        5. 3.2.3.5 Invalid Field Sharing
        6. 3.2.3.6 Invalid Shareable Usage
        7. 3.2.3.7 Only Inaccessible Children
        8. 3.2.3.8 Require Invalid Fields
        9. 3.2.3.9 Provides Invalid Fields
        10. 3.2.3.10 Empty Merged Input Object Type
        11. 3.2.3.11 Non-Null Input Fields Cannot Be Inaccessible
        12. 3.2.3.12 Input Fields Cannot Reference Inaccessible Type
    3. 3.3 Validate Satisfiability
  4. 4 Executor
    1. 4.1 Configuration
  5. 5 Shared Types
    1. 5.1 Name
    2. 5.2 FieldSelection
  6. 6 Appendix A: Specification of FieldSelectionMap Scalar
    1. 6.1 Introduction
      1. 6.1.1 Scope
    2. 6.2 Language
      1. 6.2.1 Name
      2. 6.2.2 Path
      3. 6.2.3 SelectedValue
      4. 6.2.4 SelectedObjectValue
      5. 6.2.5 SelectedListValue
    3. 6.3 Validation
      1. 6.3.1 Path Field Selections
      2. 6.3.2 Path Terminal Field Selections
      3. 6.3.3 Type Reference Is Possible
      4. 6.3.4 Values of Correct Type
      5. 6.3.5 Selected Object Field Names
      6. 6.3.6 Selected Object Field Uniqueness
      7. 6.3.7 Required Selected Object Fields
  7. §Index