ExecutionGraphQlService is the main Spring abstraction for calling GraphQL Java to execute requests. Underlying transports, such as HTTP, delegate to ExecutionGraphQlService to handle requests.
The main implementation, DefaultExecutionGraphQlService, is configured with a GraphQlSource for access to the graphql.GraphQL instance to invoke.
GraphQlSource
GraphQlSource is a contract to expose the graphql.GraphQL instance to use, and it also includes a builder API to build that instance. The default builder is available via GraphQlSource.schemaResourceBuilder().
The Boot Starter creates an instance of this builder and further initializes it to load schema files from a configurable location, to expose properties to apply to GraphQlSource.Builder, and to detect RuntimeWiringConfigurer beans, Instrumentation beans for GraphQL metrics, and DataFetcherExceptionResolver and SubscriptionExceptionResolver beans for exception resolution. For further customizations, you can also declare a GraphQlSourceBuilderCustomizer bean, for example:
@Configuration(proxyBeanMethods = false)
class GraphQlConfig {
@Bean
public GraphQlSourceBuilderCustomizer sourceBuilderCustomizer() {
return (builder) ->
builder.configureGraphQl(graphQlBuilder ->
graphQlBuilder.executionIdProvider(new CustomExecutionIdProvider()));
}
}
Schema Resources
GraphQlSource.Builder
can be configured with one or more Resource
instances to be
parsed and merged together. That means schema files can be loaded from just about any
location.
By default, the Boot starter
looks for schema files with extensions
".graphqls" or ".gqls" under the location classpath:graphql/**
, which is typically
src/main/resources/graphql
. You can also use a file system location, or any location
supported by the Spring Resource
hierarchy, including a custom implementation that
loads schema files from remote locations, from storage, or from memory.
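For example, to load a schema file from the file system instead of the classpath, you can pass a FileSystemResource to the builder. This is a minimal sketch; the path is hypothetical, and in a Boot application you would more typically configure this through the spring.graphql.schema.locations property or a GraphQlSourceBuilderCustomizer:
// A minimal sketch: the schema file path below is hypothetical
GraphQlSource.schemaResourceBuilder()
        .schemaResources(new FileSystemResource("/etc/my-app/schema.graphqls"));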
Use classpath*:graphql/**/ to find schema files across multiple classpath locations, e.g. across multiple modules.
Schema Creation
By default, GraphQlSource.Builder
uses the GraphQL Java SchemaGenerator
to create the
graphql.schema.GraphQLSchema
. This works for typical use, but if you need to use a
different generator, e.g. for federation, you can register a schemaFactory
callback:
GraphQlSource.Builder builder = ...
builder.schemaResources(..)
.configureRuntimeWiring(..)
.schemaFactory((typeDefinitionRegistry, runtimeWiring) -> {
// create GraphQLSchema
})
The GraphQlSource section explains how to configure that with Spring Boot.
For an example with Apollo Federation, see federation-jvm-spring-example.
RuntimeWiringConfigurer
You can use RuntimeWiringConfigurer
to register:
- Custom scalar types.
- Directives handling code.
- Default TypeResolver for interface and union types.
- DataFetcher for a field, although applications will typically use Annotated Controllers, and those are detected and registered as DataFetchers by AnnotatedControllerConfigurer, which is a RuntimeWiringConfigurer. The Boot Starter automatically registers AnnotatedControllerConfigurer.
In GraphQL Java server applications, Jackson is used only for serialization to and from maps of data. Client input is parsed into a map. Server output is assembled into a map based on the field selection set. This means you can’t rely on Jackson serialization/deserialization annotations. Instead, you can use custom scalar types.
The Boot Starter detects beans of type RuntimeWiringConfigurer and registers them in the GraphQlSource.Builder. That means in most cases, you’ll have something like the following in your configuration:
@Configuration
public class GraphQlConfig {
@Bean
public RuntimeWiringConfigurer runtimeWiringConfigurer(BookRepository repository) {
GraphQLScalarType scalarType = ... ;
SchemaDirectiveWiring directiveWiring = ... ;
DataFetcher dataFetcher = QuerydslDataFetcher.builder(repository).single();
return wiringBuilder -> wiringBuilder
.scalar(scalarType)
.directiveWiring(directiveWiring)
.type("Query", builder -> builder.dataFetcher("book", dataFetcher));
}
}
If you need to add a WiringFactory
, e.g. to make registrations that take into account
schema definitions, implement the alternative configure
method that accepts both the
RuntimeWiring.Builder
and an output List<WiringFactory>
. This allows you to add any
number of factories that are then invoked in sequence.
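For example (a minimal sketch; MyWiringFactory stands in for your own WiringFactory implementation):
@Bean
public RuntimeWiringConfigurer wiringFactoryConfigurer() {
    return new RuntimeWiringConfigurer() {

        @Override
        public void configure(RuntimeWiring.Builder builder) {
            // no per-type registrations needed here
        }

        @Override
        public void configure(RuntimeWiring.Builder builder, List<WiringFactory> container) {
            container.add(new MyWiringFactory()); // hypothetical WiringFactory implementation
        }
    };
}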
TypeResolver
GraphQlSource.Builder
registers ClassNameTypeResolver
as the default TypeResolver
to use for GraphQL Interfaces and Unions that don’t already have such a registration
through a RuntimeWiringConfigurer
. The purpose of
a TypeResolver
in GraphQL Java is to determine the GraphQL Object type for values
returned from the DataFetcher
for a GraphQL Interface or Union field.
ClassNameTypeResolver
tries to match the simple class name of the value to a GraphQL
Object Type and if it is not successful, it also navigates its super types including
base classes and interfaces, looking for a match. ClassNameTypeResolver
provides an
option to configure a name extracting function along with Class
to GraphQL Object type
name mappings that should help to cover more corner cases:
GraphQlSource.Builder builder = ...
ClassNameTypeResolver classNameTypeResolver = new ClassNameTypeResolver();
classNameTypeResolver.setClassNameExtractor((klass) -> {
// Implement Custom ClassName Extractor here
});
builder.defaultTypeResolver(classNameTypeResolver);
The GraphQlSource section explains how to configure that with Spring Boot.
Directives
The GraphQL language supports directives that "describe alternate runtime execution and type validation behavior in a GraphQL document". Directives are similar to annotations in Java but declared on types, fields, fragments and operations in a GraphQL document.
GraphQL Java provides the SchemaDirectiveWiring
contract to help applications detect
and handle directives. For more details, see
Schema Directives in the
GraphQL Java documentation.
In Spring GraphQL you can register a SchemaDirectiveWiring
through a
RuntimeWiringConfigurer
. The Boot Starter detects
such beans, so you might have something like:
@Configuration
public class GraphQlConfig {
@Bean
public RuntimeWiringConfigurer runtimeWiringConfigurer() {
return builder -> builder.directiveWiring(new MySchemaDirectiveWiring());
}
}
For an example of directives support check out the Extended Validation for Graphql Java library.
ExecutionStrategy
An ExecutionStrategy
in GraphQL Java drives the fetching of requested fields.
To create an ExecutionStrategy
, you need to provide a DataFetcherExceptionHandler
.
By default, Spring for GraphQL creates the exception handler to use as described in
Exceptions and sets it on the
GraphQL.Builder
. GraphQL Java then uses that to create AsyncExecutionStrategy
instances with the configured exception handler.
If you need to create a custom ExecutionStrategy
, you can detect
DataFetcherExceptionResolver
s and create an exception handler in the same way, and use
it to create the custom ExecutionStrategy
. For example, in a Spring Boot application:
@Bean
GraphQlSourceBuilderCustomizer sourceBuilderCustomizer(
ObjectProvider<DataFetcherExceptionResolver> resolvers) {
DataFetcherExceptionHandler exceptionHandler =
DataFetcherExceptionResolver.createExceptionHandler(resolvers.stream().toList());
AsyncExecutionStrategy strategy = new CustomAsyncExecutionStrategy(exceptionHandler);
return sourceBuilder -> sourceBuilder.configureGraphQl(builder ->
builder.queryExecutionStrategy(strategy).mutationExecutionStrategy(strategy));
}
Schema Transformation
You can register a graphql.schema.GraphQLTypeVisitor
via
builder.schemaResources(..).typeVisitorsToTransformSchema(..)
if you want to traverse
and transform the schema after it is created, and make changes to the schema. Keep in mind
that this is more expensive than Schema Traversal so generally
prefer traversal to transformation unless you need to make schema changes.
Schema Traversal
You can register a graphql.schema.GraphQLTypeVisitor
via
builder.schemaResources(..).typeVisitors(..)
if you want to traverse the schema after
it is created, and possibly apply changes to the GraphQLCodeRegistry
. Keep in mind,
however, that such a visitor cannot change the schema. See
Schema Transformation, if you need to make changes to the schema.
Schema Mapping Inspection
If a query, mutation, or subscription operation does not have a DataFetcher
, it won’t
return any data, and won’t do anything useful. Likewise, fields on schema types returned
by an operation that are covered neither explicitly through a DataFetcher
registration, nor implicitly by the default PropertyDataFetcher
, which looks for a
matching Java object property, will always be null
.
GraphQL Java does not perform checks to ensure every schema field is covered, and that
can result in gaps that might not be discovered depending on test coverage. At runtime
you may get a "silent" null
, or an error if the field is not nullable. As a lower level
library, GraphQL Java simply does not know enough about DataFetcher
implementations and
their return types, and therefore can’t compare schema type structure against Java object
structure.
Spring for GraphQL defines the SelfDescribingDataFetcher
interface to allow a
DataFetcher
to expose return type information. All Spring DataFetcher
implementations
implement this interface. That includes those for Annotated Controllers, and those for
Querydsl and Query by Example Spring Data repositories. For annotated
controllers, the return type is derived from the declared return type on a
@SchemaMapping
method.
On startup, Spring for GraphQL can inspect schema fields, DataFetcher
registrations,
and the properties of Java objects returned from DataFetcher
implementations to check
if all schema fields are covered either by an explicitly registered DataFetcher
, or
a matching Java object property. The inspection also performs a reverse check looking for
DataFetcher
registrations against schema fields that don’t exist.
To enable inspection of schema mappings:
GraphQlSource.Builder builder = ...
builder.schemaResources(..)
.inspectSchemaMappings(report -> {
logger.debug(report);
})
Below is an example report:
GraphQL schema inspection:
    Unmapped fields: {Book=[title], Author=[firstName, lastName]} (1)
    Unmapped registrations: {Book.reviews=BookController#reviews[1 args]} (2)
    Skipped types: [BookOrAuthor] (3)
1 | List of schema fields and their source types that are not mapped |
2 | List of DataFetcher registrations on fields that don’t exist |
3 | List of schema types that are skipped, as explained next |
There are limits to what schema field inspection can do, in particular when there is
insufficient Java type information. This is the case if an annotated controller method is
declared to return java.lang.Object
, or if the return type has an unspecified generic
parameter such as List<?>
, or if the DataFetcher
does not implement
SelfDescribingDataFetcher
and the return type is not even known. In such cases, the
Java object type structure remains unknown, and the schema type is listed as skipped in
the resulting report. For every skipped type, a DEBUG message is logged to indicate why
it was skipped.
Schema union types are always skipped because there is no way for a controller method to declare such a return type in Java, and the Java type structure is unknown.
Schema interface types are supported only as far as fields declared directly, which are
compared against properties on the Java type declared by a SelfDescribingDataFetcher
.
Additional fields on concrete implementations are not inspected. This could be improved
in a future release to also inspect schema interface
implementation types and to try
to find a match among subtypes of the declared Java return type.
Operation Caching
GraphQL Java must parse and validate an operation before executing it. This may impact
performance significantly. To avoid the need to re-parse and validate, an application may
configure a PreparsedDocumentProvider
that caches and reuses Document instances. The
GraphQL Java docs provide more details on
query caching through a PreparsedDocumentProvider
.
In Spring GraphQL you can register a PreparsedDocumentProvider
through
GraphQlSource.Builder#configureGraphQl
:
// Typically, accessed through Spring Boot's GraphQlSourceBuilderCustomizer
GraphQlSource.Builder builder = ...
// Create provider
PreparsedDocumentProvider provider =
new ApolloPersistedQuerySupport(new InMemoryPersistedQueryCache(Collections.emptyMap()));
builder.schemaResources(..)
.configureRuntimeWiring(..)
.configureGraphQl(graphQLBuilder -> graphQLBuilder.preparsedDocumentProvider(provider))
The GraphQlSource section explains how to configure that with Spring Boot.
Thread Model
Most GraphQL requests benefit from concurrent execution in fetching nested fields. This is
why most applications today rely on GraphQL Java’s AsyncExecutionStrategy
, which allows
data fetchers to return CompletionStage
and to execute concurrently rather than serially.
Java 21 and virtual threads add an important ability to use more threads efficiently, but it is still necessary to execute concurrently rather than serially in order for request execution to complete more quickly.
Spring for GraphQL supports:
- Reactive data fetchers, which are adapted to CompletionStage as expected by AsyncExecutionStrategy.
- CompletionStage as a return value.
- Controller methods that are Kotlin coroutine methods.
- @SchemaMapping and @BatchMapping methods that return Callable, which is submitted to an Executor such as the Spring Framework VirtualThreadTaskExecutor. To enable this, you must configure an Executor on AnnotatedControllerConfigurer (see the sketch after this list).
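The following sketch shows one way to set such an Executor, assuming you declare the AnnotatedControllerConfigurer bean yourself rather than relying on the one auto-configured by the Boot Starter:
@Bean
public AnnotatedControllerConfigurer annotatedControllerConfigurer() {
    AnnotatedControllerConfigurer configurer = new AnnotatedControllerConfigurer();
    // Submit Callable return values to virtual threads (Java 21+)
    configurer.setExecutor(new VirtualThreadTaskExecutor("graphql-"));
    return configurer;
}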
Spring for GraphQL runs on either Spring MVC or WebFlux as the transport. Spring MVC
uses async request execution, unless the resulting CompletableFuture
is done
immediately after the GraphQL Java engine returns, which would be the case if the
request is simple enough and did not require asynchronous data fetching.
Reactive DataFetcher
The default GraphQlSource
builder enables support for a DataFetcher
to return Mono
or Flux
which adapts those to a CompletableFuture
where Flux
values are aggregated
and turned into a List, unless the request is a GraphQL subscription request,
in which case the return value remains a Reactive Streams Publisher
for streaming
GraphQL responses.
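For example, a controller method can return Mono directly. This is a sketch; Book and BookService are hypothetical application types:
@Controller
public class BookController {

    private final BookService bookService;

    public BookController(BookService bookService) {
        this.bookService = bookService;
    }

    @QueryMapping
    public Mono<Book> bookById(@Argument Long id) {
        // The returned Mono is adapted to a CompletableFuture by the GraphQlSource builder
        return this.bookService.findById(id);
    }
}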
A reactive DataFetcher can rely on access to Reactor context propagated from the transport layer, such as from WebFlux request handling; see WebFlux Context.
Context Propagation
Spring for GraphQL provides support to transparently propagate context from the
HTTP transport, through GraphQL Java, and to
DataFetcher
and other components it invokes. This includes both ThreadLocal
context
from the Spring MVC request handling thread and Reactor Context
from the WebFlux
processing pipeline.
WebMvc
A DataFetcher
and other components invoked by GraphQL Java may not always execute on
the same thread as the Spring MVC handler, for example if an asynchronous
WebGraphQlInterceptor
or DataFetcher
switches to a
different thread.
Spring for GraphQL supports propagating ThreadLocal values from the Servlet container thread to the thread on which a DataFetcher and other components invoked by GraphQL Java execute. To do this, an application needs to implement io.micrometer.context.ThreadLocalAccessor for the ThreadLocal values of interest:
public class RequestAttributesAccessor implements ThreadLocalAccessor<RequestAttributes> {
@Override
public Object key() {
return RequestAttributesAccessor.class.getName();
}
@Override
public RequestAttributes getValue() {
return RequestContextHolder.getRequestAttributes();
}
@Override
public void setValue(RequestAttributes attributes) {
RequestContextHolder.setRequestAttributes(attributes);
}
@Override
public void reset() {
RequestContextHolder.resetRequestAttributes();
}
}
You can register a ThreadLocalAccessor
manually on startup with the global
ContextRegistry
instance, which is accessible via
io.micrometer.context.ContextRegistry#getInstance()
. You can also register it
automatically through the java.util.ServiceLoader
mechanism.
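For example, manual registration at startup might look like the line below; alternatively, list the accessor class in a META-INF/services/io.micrometer.context.ThreadLocalAccessor file to have it picked up via ServiceLoader:
// Register the accessor from the previous example with the global ContextRegistry
ContextRegistry.getInstance().registerThreadLocalAccessor(new RequestAttributesAccessor());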
WebFlux
A Reactive DataFetcher
can rely on access to Reactor context that
originates from the WebFlux request handling chain. This includes Reactor context
added by WebGraphQlInterceptor components.
Exceptions
In GraphQL Java, DataFetcherExceptionHandler
decides how to represent exceptions from
data fetching in the "errors" section of the response. An application can register a
single handler only.
Spring for GraphQL registers a DataFetcherExceptionHandler that provides default handling and enables the DataFetcherExceptionResolver contract. An application can register any number of resolvers via the GraphQlSource builder, and those are invoked in order until one of them resolves the Exception to a List<graphql.GraphQLError>.
The Spring Boot starter detects beans of this type.
DataFetcherExceptionResolverAdapter
is a convenient base class with protected methods
resolveToSingleError
and resolveToMultipleErrors
.
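For example, the following sketch resolves a hypothetical BookNotFoundException to a NOT_FOUND error and leaves all other exceptions to the default handling:
@Bean
public DataFetcherExceptionResolver exceptionResolver() {
    return new DataFetcherExceptionResolverAdapter() {

        @Override
        protected GraphQLError resolveToSingleError(Throwable ex, DataFetchingEnvironment env) {
            if (ex instanceof BookNotFoundException) { // hypothetical application exception
                return GraphqlErrorBuilder.newError(env)
                        .errorType(ErrorType.NOT_FOUND)
                        .message(ex.getMessage())
                        .build();
            }
            return null; // not resolved; try the next resolver or apply default handling
        }
    };
}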
The Annotated Controllers programming model enables handling data fetching exceptions with
annotated exception handler methods with a flexible method signature, see
@GraphQlExceptionHandler
for details.
A GraphQLError
can be assigned to a category based on the GraphQL Java
graphql.ErrorClassification
, or the Spring GraphQL ErrorType
, which defines the following:
- BAD_REQUEST
- UNAUTHORIZED
- FORBIDDEN
- NOT_FOUND
- INTERNAL_ERROR
If an exception remains unresolved, by default it is categorized as an INTERNAL_ERROR
with a generic message that includes the category name and the executionId
from
DataFetchingEnvironment
. The message is intentionally opaque to avoid leaking
implementation details. Applications can use a DataFetcherExceptionResolver
to customize
error details.
Unresolved exceptions are logged at ERROR level along with the executionId to correlate with the error sent to the client. Resolved exceptions are logged at DEBUG level.
Request Exceptions
The GraphQL Java engine may run into validation or other errors when parsing the request, and those in turn prevent request execution. In such cases, the response contains a "data" key with null and one or more request-level "errors" that are global, i.e. not having a field path.
DataFetcherExceptionResolver
cannot handle such global errors because they are raised
before execution begins and before any DataFetcher
is invoked. An application can use
transport level interceptors to inspect and transform errors in the ExecutionResult
.
See examples under WebGraphQlInterceptor
.
Subscription Exceptions
The Publisher
for a subscription request may complete with an error signal in which case
the underlying transport (e.g. WebSocket) sends a final "error" type message with a list
of GraphQL errors.
DataFetcherExceptionResolver cannot resolve errors from a subscription Publisher, since the DataFetcher only creates the Publisher initially. After that, the transport subscribes to the Publisher, which may then complete with an error.
An application can register a SubscriptionExceptionResolver in order to resolve exceptions from a subscription Publisher to GraphQL errors to send to the client.
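For example, a sketch using the SubscriptionExceptionResolverAdapter base class:
@Bean
public SubscriptionExceptionResolver subscriptionExceptionResolver() {
    return new SubscriptionExceptionResolverAdapter() {

        @Override
        protected GraphQLError resolveToSingleError(Throwable ex) {
            // Map any subscription error to a single GraphQL error sent to the client
            return GraphqlErrorBuilder.newError()
                    .errorType(ErrorType.INTERNAL_ERROR)
                    .message("Subscription failed: " + ex.getMessage())
                    .build();
        }
    };
}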
Pagination
The GraphQL Cursor Connection specification defines a way to navigate large result sets by returning a subset of items at a time where each item is paired with a cursor that clients can use to request more items before or after the referenced item.
The specification calls the pattern "Connections". A schema type with a name that ends with Connection is a Connection Type that represents a paginated result set. All ~Connection types contain an "edges" field, where the ~Edge type pairs the actual item with a cursor, as well as a "pageInfo" field with boolean flags to indicate whether there are more items forward and backward.
Connection Types
Connection
type definitions must be created for every type that needs pagination, adding
boilerplate and noise to the schema. Spring for GraphQL provides
ConnectionTypeDefinitionConfigurer
to add these types on startup, if not already
present in the parsed schema files. That means in the schema you only need this:
type Query {
    books(first:Int, after:String, last:Int, before:String): BookConnection
}

type Book {
    id: ID!
    title: String!
}
Note the spec-defined forward pagination arguments first
and after
that clients can use
to request the first N items after the given cursor, while last
and before
are backward
pagination arguments to request the last N items before the given cursor.
Next, configure ConnectionTypeDefinitionConfigurer
as follows:
GraphQlSource.schemaResourceBuilder()
        .schemaResources(..)
        .typeDefinitionConfigurer(new ConnectionTypeDefinitionConfigurer())
and the following type definitions will be transparently added to the schema:
type BookConnection {
    edges: [BookEdge]!
    pageInfo: PageInfo!
}

type BookEdge {
    node: Book!
    cursor: String!
}

type PageInfo {
    hasPreviousPage: Boolean!
    hasNextPage: Boolean!
    startCursor: String
    endCursor: String
}
The Boot Starter registers ConnectionTypeDefinitionConfigurer
by default.
ConnectionAdapter
Once Connection Types are available in the schema, you also need
equivalent Java types. GraphQL Java provides those, including generic Connection
and
Edge
, as well as a PageInfo
.
One option is to populate a Connection
and return it from your controller method or
DataFetcher
. However, this requires boilerplate code to create the Connection
,
creating cursors, wrapping each item as an Edge
, and creating the PageInfo
.
Moreover, you may already have an underlying pagination mechanism such as when using
Spring Data repositories.
Spring for GraphQL defines the ConnectionAdapter
contract to adapt a container of items
to Connection
. Adapters are applied through a DataFetcher
decorator that is in turn
installed through a ConnectionFieldTypeVisitor
. You can configure it as follows:
ConnectionAdapter adapter = ... ;
GraphQLTypeVisitor visitor = ConnectionFieldTypeVisitor.create(List.of(adapter)); (1)
GraphQlSource.schemaResourceBuilder()
.schemaResources(..)
.typeDefinitionConfigurer(..)
.typeVisitors(List.of(visitor)) (2)
1 | Create the type visitor with one or more ConnectionAdapters. |
2 | Register the type visitor. |
There are built-in ConnectionAdapters for Spring Data’s Window and Slice. You can also create your own custom adapter. ConnectionAdapter implementations rely on a CursorStrategy to create cursors for returned items. The same strategy is also used to support the Subrange controller method argument that contains pagination input.
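For example, a controller method can return a Spring Data Window and accept a ScrollSubrange carrying the decoded pagination input. This is a sketch; BookRepository, Book, and the probe object are hypothetical:
@QueryMapping
public Window<Book> books(ScrollSubrange subrange) {
    ScrollPosition position = subrange.position().orElse(ScrollPosition.offset());
    int count = subrange.count().orElse(20);
    // Scroll the repository; the returned Window is adapted to a Connection by the built-in adapter
    return this.repository.findBy(Example.of(new Book()), query -> query.limit(count).scroll(position));
}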
CursorStrategy
CursorStrategy
is a contract to encode and decode a String cursor that refers to the
position of an item within a large result set. The cursor can be based on an index or
on a keyset.
A ConnectionAdapter
uses this to encode cursors for returned items.
Annotated Controller methods, Querydsl repositories, and Query by Example repositories use it to decode cursors from pagination requests and create a Subrange.
CursorEncoder
is a related contract that further encodes and decodes String cursors to
make them opaque to clients. EncodingCursorStrategy
combines CursorStrategy
with a
CursorEncoder
. You can use Base64CursorEncoder
, NoOpEncoder
or create your own.
There is a built-in CursorStrategy for the Spring Data ScrollPosition. The Boot Starter registers a CursorStrategy<ScrollPosition> with Base64CursorEncoder when Spring Data is present.
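A sketch of what the equivalent manual wiring might look like, assuming you are not relying on the Boot Starter registration:
// Combine the ScrollPosition-based strategy with Base64 encoding to keep cursors opaque
CursorStrategy<ScrollPosition> cursorStrategy =
        CursorStrategy.withEncoder(new ScrollPositionCursorStrategy(), CursorEncoder.base64());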
Sort
There is no standard way to provide sort information in a GraphQL request. However, pagination depends on a stable sort order. You can use a default order, or otherwise expose input types and extract sort details from GraphQL arguments.
There is built-in support for Spring Data’s Sort
as a controller
method argument. For this to work, you need to have a SortStrategy
bean.
Batch Loading
Given a Book
and its Author
, we can create one DataFetcher
for a book and another
for its author. This allows selecting books with or without authors, but it means books
and authors aren’t loaded together, which is especially inefficient when querying multiple
books as the author for each book is loaded individually. This is known as the N+1 select
problem.
DataLoader
GraphQL Java provides a DataLoader
mechanism for batch loading of related entities.
You can find the full details in the
GraphQL Java docs. Below is a
summary of how it works:
- Register DataLoaders in the DataLoaderRegistry that can load entities, given unique keys.
- DataFetchers can access DataLoaders and use them to load entities by id.
- A DataLoader defers loading by returning a future, so loading can be done in a batch.
- DataLoaders maintain a per-request cache of loaded entities that can further improve efficiency.
BatchLoaderRegistry
The complete batch loading mechanism in GraphQL Java requires implementing one of several BatchLoader interfaces, then wrapping and registering those as DataLoaders with a name in the DataLoaderRegistry.
The API in Spring GraphQL is slightly different. For registration, there is only one,
central BatchLoaderRegistry
exposing factory methods and a builder to create and
register any number of batch loading functions:
@Configuration
public class MyConfig {
public MyConfig(BatchLoaderRegistry registry) {
registry.forTypePair(Long.class, Author.class).registerMappedBatchLoader((authorIds, env) -> {
// return Mono<Map<Long, Author>>
});
// more registrations ...
}
}
The Boot Starter declares a BatchLoaderRegistry bean that you can inject into your configuration, as shown above, or into any component such as a controller in order to register batch loading functions. In turn the BatchLoaderRegistry is injected into DefaultExecutionGraphQlService where it ensures DataLoader registrations per request.
By default, the DataLoader
name is based on the class name of the target entity.
This allows an @SchemaMapping
method to declare a
DataLoader argument with a generic type, and
without the need for specifying a name. The name, however, can be customized through the
BatchLoaderRegistry
builder, if necessary, along with other DataLoaderOptions
.
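For example, given the registration shown earlier, a controller method can declare a DataLoader argument and the loader is resolved by its generic types. This is a sketch; Book and its authorId property are hypothetical:
@SchemaMapping
public CompletableFuture<Author> author(Book book, DataLoader<Long, Author> loader) {
    // The DataLoader is looked up by the Author class name used at registration
    return loader.load(book.getAuthorId());
}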
To configure default DataLoaderOptions
globally, to use as a starting point for any
registration, you can override Boot’s BatchLoaderRegistry
bean and use the constructor
for DefaultBatchLoaderRegistry
that accepts Supplier<DataLoaderOptions>
.
For many cases, when loading related entities, you can use
@BatchMapping controller methods, which are a shortcut
for and replace the need to use BatchLoaderRegistry
and DataLoader
directly.
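For example, the registration shown earlier could be replaced by a @BatchMapping controller method. This is a sketch; AuthorService is a hypothetical application service:
@Controller
public class AuthorController {

    private final AuthorService authorService;

    public AuthorController(AuthorService authorService) {
        this.authorService = authorService;
    }

    @BatchMapping
    public Mono<Map<Book, Author>> author(List<Book> books) {
        // Load the author for every book in one batch
        return this.authorService.findAuthorsForBooks(books);
    }
}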
BatchLoaderRegistry
provides other important benefits too. It supports access to
the same GraphQLContext
from batch loading functions and from @BatchMapping
methods,
as well as ensures Context Propagation to them. This is why applications are expected
to use it. It is possible to perform your own DataLoader
registrations directly but
such registrations would forgo the above benefits.
Testing Batch Loading
Start by having BatchLoaderRegistry
perform registrations on a DataLoaderRegistry
:
BatchLoaderRegistry batchLoaderRegistry = new DefaultBatchLoaderRegistry();
// perform registrations...
DataLoaderRegistry dataLoaderRegistry = DataLoaderRegistry.newRegistry().build();
batchLoaderRegistry.registerDataLoaders(dataLoaderRegistry, graphQLContext);
Now you can access and test individual DataLoaders as follows:
DataLoader<Long, Book> loader = dataLoaderRegistry.getDataLoader(Book.class.getName());
loader.load(1L);
loader.loadMany(Arrays.asList(2L, 3L));
List<Book> books = loader.dispatchAndJoin(); // actual loading
assertThat(books).hasSize(3);
assertThat(books.get(0).getName()).isEqualTo("...");
// ...