Since 1.40.0

Subgraph entity caching for the Apollo Router

Redis-backed caching for entities


This feature is only available with a GraphOS Enterprise plan.
You can test it out by signing up for a free Enterprise trial.

This feature is in preview. Your questions and feedback are highly valued; don't hesitate to get in touch with your Apollo contact or on the official Apollo GraphQL Discord.

Learn how the router can cache subgraph responses using Redis to improve your query latency for entities in the supergraph.

Overview

An entity gets its fields from one or more subgraphs. To respond to a client request for an entity, the router must make multiple subgraph requests. Different clients requesting the same entity can trigger redundant, identical subgraph requests.

Entity caching enables the router to respond to identical queries with cached subgraph responses. The router uses Redis to cache data from subgraph responses. Because cached data is keyed per subgraph and entity, different clients making the same client query (with the same or different query arguments) hit the same cache entries of response data.
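To illustrate the keying idea, here is a minimal, non-normative sketch in Python: an in-memory dict stands in for Redis, and the key scheme (subgraph name plus a hash of the entity representation and query shape) is a hypothetical simplification, not the router's actual format.

```python
import hashlib
import json

# Toy in-memory stand-in for Redis: maps cache key -> subgraph response.
cache = {}

def cache_key(subgraph, representation, query_shape):
    """Hypothetical key scheme: per subgraph and per entity representation."""
    payload = json.dumps({"rep": representation, "query": query_shape}, sort_keys=True)
    return f"{subgraph}:{hashlib.sha256(payload.encode()).hexdigest()}"

def fetch_entity(subgraph, representation, query_shape, resolver):
    key = cache_key(subgraph, representation, query_shape)
    if key not in cache:
        cache[key] = resolver(representation)  # subgraph is only called on a miss
    return cache[key]

resolver_calls = 0

def products_resolver(rep):
    global resolver_calls
    resolver_calls += 1
    return {"name": "Lamp", "price": 12}

rep = {"__typename": "Product", "id": "1"}
# Two different clients request the same entity with the same query shape:
a = fetch_entity("products", rep, "{ name price }", products_resolver)
b = fetch_entity("products", rep, "{ name price }", products_resolver)
assert a == b and resolver_calls == 1  # second request is served from the cache
```

Because the key depends only on the subgraph, the entity, and the query shape, it doesn't matter which client sent the request: identical subgraph fetches land on the same entry.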

Benefits of entity caching

Compared to caching entire client responses, entity caching supports finer control over:

  • the time to live (TTL) of cached data
  • the amount of data being cached

When caching an entire client response, the router must store it with a shorter TTL because application data can change often. Real-time data needs more frequent updates.

A client-response cache might not be shareable between users, because the application data might contain personal and private information. A client-response cache might also duplicate a lot of data between client responses.

For example, consider the Products and Inventory subgraphs from the Entities guide:

Products subgraph
type Product @key(fields: "id") {
  id: ID!
  name: String!
  price: Int
}
Inventory subgraph
type Product @key(fields: "id") {
  id: ID!
  inStock: Boolean!
}

Assume the client for a shopping cart application requests the following for each product in the cart:

  • The product's name and price from the Products subgraph.
  • The product's availability in inventory from the Inventory subgraph.

Caching the entire client response would require a short TTL because the cart data can change often and the real-time inventory has to be up to date. A client-response cache couldn't be shared between users, because each cart is personal. A client-response cache might also duplicate data because the same products might appear in multiple carts.

With entity caching enabled for this example, the router can:

  • Store each product's name and price separately with a long TTL.
  • Minimize the number of requests made for each client request, with some client requests fetching all product data from the cache and requiring no subgraph requests.
  • Share the product cache between all users.
  • Cache the cart per user, with a small amount of data.
  • Cache inventory data with a short TTL or not cache it at all.
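The split between a long-lived product cache and a short-lived inventory cache can be sketched as follows. This is an illustrative Python model (the TTL values mirror the example; the dict-based cache and injected clock are assumptions for the sketch, not router internals):

```python
# Hypothetical per-subgraph TTLs, in seconds, mirroring the example above.
TTLS = {"products": 120.0, "inventory": 5.0}

cache = {}  # (subgraph, entity_id) -> (expires_at, value)

def put(subgraph, entity_id, value, now):
    """Store a subgraph response with that subgraph's TTL."""
    cache[(subgraph, entity_id)] = (now + TTLS[subgraph], value)

def get(subgraph, entity_id, now):
    """Return the cached value, or None if missing or expired."""
    entry = cache.get((subgraph, entity_id))
    if entry is None or now >= entry[0]:
        return None
    return entry[1]

put("products", "1", {"name": "Lamp", "price": 12}, now=0.0)
put("inventory", "1", {"inStock": True}, now=0.0)

# 10 seconds later: product data is still cached, inventory has expired.
assert get("products", "1", now=10.0) == {"name": "Lamp", "price": 12}
assert get("inventory", "1", now=10.0) is None
```

With this split, stable product data can be served from cache for minutes while volatile inventory data is refreshed from the subgraph within seconds.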

Use entity caching

Follow this guide to enable and configure entity caching in the router.

Prerequisites

To use entity caching in the router, you must set up:

  • A Redis instance or cluster that your router can connect to.
  • A GraphOS Enterprise plan (see the note above).

Configure router for entity caching

In router.yaml, configure preview_entity_cache:

  • Enable entity caching globally.
  • Configure Redis using the same conventions described in distributed caching.
  • Configure entity caching per subgraph, with per-subgraph overrides to disable caching or change the TTL.

For example:

router.yaml
# Enable entity caching globally
preview_entity_cache:
  enabled: true

  # Configure Redis
  redis:
    urls: ["redis://..."]
    timeout: 5ms # Optional, by default: 2ms
    ttl: 24h # Optional, by default no expiration

  # Configure entity caching per subgraph
  subgraphs:
    products:
      ttl: 120s # overrides the global TTL
    inventory:
      enabled: false # disable for a specific subgraph

Configure time to live (TTL)

Besides configuring a global TTL for all the entries in Redis, the router also honors the Cache-Control header returned with the subgraph response. It generates a Cache-Control header for the client response by aggregating the TTL information from all response parts.
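The intuition behind this aggregation is that the most restrictive part wins. The exact rules are the router's; the sketch below only models the "smallest max-age wins" idea for two headers, with the no-store fallback as a simplifying assumption:

```python
import re

def parse_max_age(header):
    """Extract max-age from a Cache-Control header (simplified parser)."""
    m = re.search(r"max-age=(\d+)", header)
    return int(m.group(1)) if m else None

def aggregate_cache_control(headers):
    """Combine per-subgraph Cache-Control headers into one client header,
    keeping the most restrictive (smallest) max-age. Illustrative only."""
    ages = [parse_max_age(h) for h in headers]
    if any(a is None for a in ages):
        # Assumption for this sketch: an uncacheable part makes the
        # whole client response uncacheable.
        return "no-store"
    return f"max-age={min(ages)}"

# Products allows 120s, Inventory only 30s -> the client gets 30s.
combined = aggregate_cache_control(["max-age=120", "max-age=30"])
assert combined == "max-age=30"
```

This way the client never caches a combined response longer than its most volatile part allows.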

Customize Redis cache key

If you need to store data for a particular request in different cache entries, you can configure the cache key through the apollo_entity_cache::key context entry.

This entry contains an object with an all field to affect all subgraph requests under one client request, and fields named after each subgraph to affect that subgraph's queries individually. A field's value can be any valid JSON value (object, string, etc.).

{
  "all": 1,
  "subgraph_operation1": "key1",
  "subgraph_operation2": {
    "data": "key2"
  }
}
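To see why this separates cache entries, consider a sketch of how such a context entry could be folded into the Redis key. The derivation below is hypothetical (the router's real key format is an implementation detail); it only demonstrates that changing the all part or a per-subgraph part yields a different key:

```python
import hashlib
import json

def redis_key(subgraph, base_key, context_entry):
    """Hypothetical derivation: mix the `all` part and the per-subgraph part
    of the apollo_entity_cache::key context entry into the cache key."""
    extra = {
        "all": context_entry.get("all"),
        "subgraph": context_entry.get(subgraph),
    }
    digest = hashlib.sha256(json.dumps(extra, sort_keys=True).encode()).hexdigest()[:12]
    return f"{subgraph}:{base_key}:{digest}"

ctx_a = {"all": 1, "products": {"data": "key2"}}
ctx_b = {"all": 2, "products": {"data": "key2"}}

k1 = redis_key("products", "Product:1", ctx_a)
k2 = redis_key("products", "Product:1", ctx_b)
assert k1 != k2  # changing the `all` entry changes the subgraph's cache key
```

A request that sets a different value in the context entry therefore reads and writes its own cache entries, without touching entries created under other values.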

Implementation notes

Responses with errors not cached

To prevent transient errors from affecting the cache for a long duration, responses with errors are not cached.

Authorization and entity caching

When used alongside the router's authorization directives, cache entries are separated by authorization context. If a query contains fields that need a specific scope, requests providing that scope have different cache entries from those not providing it. This means that data requiring authorization can still be safely cached, and even shared across users, without needing invalidation when a user's roles change, because their requests are automatically directed to a different part of the cache.

Schema updates and entity caching

On schema updates, the router ensures that queries unaffected by the changes keep their cache entries. Queries with affected fields must be cached again so the router doesn't serve invalid data from before the update.

Entity cache invalidation not supported

Cache invalidation is not yet supported and is planned for a future release.


© 2024 Apollo Graph Inc.
