Adam's Blog
A Different Approach to GraphQL Caching
GraphQL is an incredibly exciting innovation for web development. You specify your data, queries, and mutations in a statically typed way, and get an endpoint users can consume, requesting whatever slice of data they happen to need at any given moment. If you're old enough to remember OData, it's sort of like that, but good.
What makes GraphQL especially exciting is how standardized and structured the requests and responses are. For example, a request (query) might look like this
query getSubjects {
allSubjects(name_contains: "History") {
Subjects {
_id
name
}
}
}
which would return something like this
{
"data": {
"allSubjects": {
"Subjects": [
{
"_id": "573d1b97120426ef0078aa93",
"name": "History"
},
{
"_id": "583087e18d9110f200d29f5c",
"name": "American History"
},
{
"_id": "595c4c69c7a7831100692e26",
"name": "Military History"
},
{
"_id": "59594ef40ea2691100021550",
"name": "English History"
}
]
}
}
}
Caching
Since the data come back in such a structured way, it's feasible, in theory, to keep track of things on the client and cache data as they come in. The old joke about the two difficult problems in computer science being cache invalidation and naming things has some truth to it, though; this is hard to get right.
I'll spend the rest of this post describing how existing GraphQL libraries approach this problem and the tradeoffs they incur, and then discuss my own GraphQL library (and the tradeoffs it incurs). There's no silver bullet here. I created micro-graphql-react because the existing options didn't fit well with what I need from a GraphQL library. It's likely others will have different requirements; I'll do my best to articulate the tradeoffs so readers can choose for themselves.
Apollo
The crown jewel of the GraphQL ecosystem. Apollo does some client-side massaging of your GraphQL queries, mainly to force the __typename metadata to come back for all your data. This allows it to analyze and cache every piece of data you get back. For example, let's say you run this query
query getTasks {
tasks(assignedTo: "Adam") {
Tasks {
id
description
assignedTo
}
}
}
and get back
{
"tasks": {
"Tasks": [
{
"id": 1,
"description": "Adam's Task 1",
"assignedTo": "Adam"
},
{
"id": 2,
"description": "Adam's Task 2 ABC",
"assignedTo": "Adam"
}
]
}
}
of course, in real life, Apollo would have silently changed your query to something like this
query getTasks {
tasks(assignedTo: "Adam") {
Tasks {
id
description
assignedTo
__typename
}
__typename
}
}
which would cause you to get back this
{
"tasks": {
"Tasks": [
{
"id": 1,
"description": "Adam's Task 1",
"assignedTo": "Adam",
"__typename": "Task"
},
{
"id": 2,
"description": "Adam's Task 2 ABC",
"assignedTo": "Adam",
"__typename": "Task"
}
],
"__typename": "TaskQueryResult"
}
}
With this information, Apollo can do some amazing things. Later on, if you run something like this
query getTask {
task(id: 2) {
Task {
id
description
assignedTo
}
}
}
Apollo can reach into its cache, see that your needed data are all present, and just return you the following, with no network requests needed (I'll omit the __typename values for simplicity)
{
"task": {
"Task": {
"id": 2,
"description": "Adam's Task 2 ABC",
"assignedTo": "Adam"
}
}
}
It gets even better, though. If you run a mutation like this
mutation {
updateTask(id: 2, description: "Adam's Task 2") {
Task {
id
description
}
}
}
GraphQL will return the new task's values, and Apollo will update its cache under the covers. Now, if you run
query getTasks {
tasks(assignedTo: "Adam") {
Tasks {
id
description
assignedTo
}
}
}
you'll get back
{
"tasks": {
"Tasks": [
{
"id": 1,
"description": "Adam's Task 1",
"assignedTo": "Adam"
},
{
"id": 2,
"description": "Adam's Task 2",
"assignedTo": "Adam"
}
]
}
}
again, without any network requests.
Apollo's Tradeoff
The above may seem like perfection, but consider a similar starting point. This query:
query getTasks {
tasks(assignedTo: "Adam") {
Tasks {
id
description
assignedTo
}
}
}
with these results
{
"tasks": {
"Tasks": [
{
"id": 1,
"description": "Adam's Task 1",
"assignedTo": "Adam"
},
{
"id": 2,
"description": "Adam's Task 2",
"assignedTo": "Adam"
}
]
}
}
now, if the user runs this mutation
mutation {
updateTask(id: 2, description: "Bob's Task", assignedTo: "Bob") {
Task {
id
assignedTo
description
}
}
}
we're changing the assignedTo value instead of the description. Apollo will update its normalized cache as before, updating that task's fields, but this time modifying the assignedTo property instead of the description property.
This means that later, if the user re-runs this query
query getTasks {
tasks(assignedTo: "Adam") {
Tasks {
id
description
assignedTo
}
}
}
they'll get back
{
"tasks": {
"Tasks": [
{
"id": 1,
"description": "Adam's Task 1",
"assignedTo": "Adam"
},
{
"id": 2,
"description": "Bob's Task",
"assignedTo": "Bob"
}
]
}
}
which is horribly wrong. We requested tasks assigned to "Adam", but got back results assigned to "Bob". What happened?
Well, Apollo inspected our query, saw that it already had a cached match for that same query, and promptly returned it for us. The problem is, the mutation we ran happened to invalidate one of the results for that query, and Apollo had no way of knowing that. Apollo can't know that changes to the description field on this task have no effect on the correctness of the results for this particular query, while changes to assignedTo do. Of course, we can just as easily imagine a query against the description field with the reverse problem: changes to assignedTo would have no effect on correctness, while changes to description would.
Apollo has ways of fixing this, of course. It provides the ability to manually update the specific results for a particular query (which requires you to match the query text and the identical variable values). It provides a refetchQueries method that can refetch a specific query (i.e. with specific variable values), or even all instances of a certain query text; here's a sandbox demonstrating this, from Apollo engineer Hugh Wilson. Lastly, Apollo also gives you the ability to wipe away your entire cache.
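To make that concrete, here's roughly what refetchQueries looks like when passed to Apollo's useMutation hook. The UPDATE_TASK and GET_TASKS documents are hypothetical, and the exact import path depends on your Apollo version:
import { useMutation } from "@apollo/client";
import { UPDATE_TASK, GET_TASKS } from "./queries"; // hypothetical documents

const [updateTask] = useMutation(UPDATE_TASK, {
  refetchQueries: [
    // refetch one specific query, with specific variable values...
    { query: GET_TASKS, variables: { assignedTo: "Adam" } },
    // ...or every active instance of the query, by operation name
    "getTasks"
  ]
});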
Apollo is a brilliant work of engineering, but these caching options weren't quite flexible enough for me, and I found them a tad complex to implement. I wanted something simpler and more flexible, which is why I created micro-graphql-react. But before I get to that, I'd like to talk about:
Urql
Urql is the creation of Bourbon connoisseur, professional Twitter Shit-poster, and one of my favorite people: Ken Wheeler. His library is a considerable improvement on Apollo for solving this problem. Urql does the same __typename modification Apollo does, but caches things at the query level and keeps track of which types are returned for each query. If any data modifications are performed on a particular type, it clears the cache for all queries that hold that type. For example, if you run
mutation {
updateTask(id: 2, assignedTo: "Bob") {
Task {
id
assignedTo
}
}
}
with Urql, the metadata returned will show that a task was modified, and so all queries holding task results will be invalidated, and run against the network the next time they're needed.
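To make the idea concrete, here's a rough sketch of type-level invalidation. This is not Urql's actual implementation, just an illustration of the bookkeeping: record the __typenames each cached query contained, then drop any cached query whose typenames overlap with those returned by a mutation.
// A conceptual sketch of type-based invalidation; not Urql's actual code.
const queryCache = new Map(); // queryKey -> { data, typenames: Set }

function collectTypenames(obj, acc = new Set()) {
  if (Array.isArray(obj)) obj.forEach(o => collectTypenames(o, acc));
  else if (obj && typeof obj === "object") {
    if (obj.__typename) acc.add(obj.__typename);
    Object.values(obj).forEach(v => collectTypenames(v, acc));
  }
  return acc;
}

function cacheQueryResult(queryKey, data) {
  queryCache.set(queryKey, { data, typenames: collectTypenames(data) });
}

function onMutationResult(mutationData) {
  const touched = collectTypenames(mutationData);
  for (const [key, entry] of queryCache) {
    if ([...entry.typenames].some(t => touched.has(t))) {
      queryCache.delete(key); // this query goes to the network next time
    }
  }
}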
This completely solves the problems from above; however, even this approach is not perfect. For example, if you run
query getTasks {
tasks(assignedTo: "Fred") {
Tasks {
id
description
assignedTo
}
}
}
and get back
{
"tasks": {
"Tasks": [],
"__typename": "TaskQueryResult"
}
}
Urql has no way of knowing that this query holds Tasks, since it has no way of knowing what TaskQueryResult is. This means that if you run a mutation creating a task assigned to Fred, the mutation result will not be able to indicate that this particular query needs to be cleared.
Interestingly, this is a solvable problem with a build step, which could introspect the entire GraphQL endpoint, figure out that TaskQueryResult contains Task objects, and fix this problem.
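As a rough sketch of what that build step could do, the GraphQL introspection query (via graphql-js) can tell you which object types each result type contains; the endpoint URL and output file here are just placeholders:
// Hedged sketch of a build step: introspect the schema, then record which
// object types each result type (e.g. TaskQueryResult) contains.
import { getIntrospectionQuery, buildClientSchema, getNamedType, isObjectType } from "graphql";
import fetch from "node-fetch";
import fs from "fs";

async function buildTypeMap(endpoint) {
  const resp = await fetch(endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query: getIntrospectionQuery() })
  });
  const { data } = await resp.json();
  const schema = buildClientSchema(data);

  const typeMap = {};
  for (const type of Object.values(schema.getTypeMap())) {
    if (!isObjectType(type) || type.name.startsWith("__")) continue;
    typeMap[type.name] = Object.values(type.getFields())
      .map(f => getNamedType(f.type).name)
      .filter(name => isObjectType(schema.getType(name)));
  }

  // e.g. typeMap.TaskQueryResult would include "Task"
  fs.writeFileSync("typeMap.json", JSON.stringify(typeMap, null, 2));
}
With a map like that, a client cache could know that a mutation touching Task should also invalidate cached TaskQueryResult queries, even ones whose results came back empty.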
micro-graphql-react
micro-graphql-react was written with the assumption that managing cache invalidation transparently is a problem that's too difficult to solve. Instead, it's designed to make it easy to manage yourself. To be clear, there's no cache management out of the box. By default, mutations will not update previously cached query results at all. You can easily turn caching off entirely, but to get something intelligent working, like Apollo does, you need to implement it yourself. Fortunately, the structured nature of GraphQL makes this surprisingly easy; this is one of the many reasons I like GraphQL so much.
A side effect of the above is that, unlike Apollo and Urql, this library does no client-side parsing of your queries (or mutations). This not only keeps the library tiny (2.8K min+gzip), but also lets you omit the GraphQL queries themselves from your bundles entirely, if you use something like my generic-persistgraphql library.
The remainder of this post will go over how micro-graphql-react handles some common caching use cases. It'll paint with broad strokes, so be sure to check out the docs if you want more detail on anything. As usual, all the code samples herein are from my booklist project.
Note: All of the code below was written for my particular application, based on what its GraphQL endpoint looks like. Don't expect this code to work in your application, which will almost certainly have some differences in its GraphQL endpoint. In my case, the endpoint was created with my mongo-graphql-starter project.
Use case 1 - non-searched data
Let's start with data that is not filtered or searched. In my booklist project, that's the hierarchical subjects which can be applied to books. A query to fetch all the subjects is fired at startup, and the results are then kept in sync as updates are made.
Here's what the application code looks like
import SubQuery from "graphQL/subjects/allSubjects.graphql";
import { graphqlClient } from "./appRoot";
graphqlClient.subscribeMutation([
{
when: /updateSubject/,
run: (op, r /* r is the response */) =>
syncUpdates(SubQuery, r.updateSubject, "allSubjects", "Subjects")
},
{
when: /deleteSubject/,
run: (op, r) =>
syncDeletes(SubQuery, r.deleteSubject, "allSubjects", "Subjects")
}
]);
I'm grabbing my GraphQL client (created elsewhere) and telling it to call my syncUpdates method on any mutation whose result set matches /updateSubject/, and similarly syncDeletes for /deleteSubject/. Those subscriptions are global and will span the application's lifetime; they make sure our cache is always correct.
For the finishing touch, let's see how we tell our query's React hook to sync up these changes when relevant mutations happen
let { loading, loaded, data } = useQuery(
buildQuery(
AllSubjectsQuery,
{ publicUserId, userId },
{
onMutation: {
when: /(update|delete)Subject/,
run: ({ refresh }) => refresh()
}
}
)
);
The React hook allows us to subscribe to mutations, and the callback receives methods to do things like hard reset the results, soft reset them, or, as in this case, just refresh from what's already in the cache. Subscriptions run in order, so the global sync is guaranteed to run before our refresh.
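For contrast, if we didn't want to keep the cache in sync at all, the same subscription could just wipe this query's cached results and reload from the network. A minimal sketch; the exact helper name is assumed from the "hard reset" capability described above, so check the docs:
let { loading, loaded, data } = useQuery(
  buildQuery(
    AllSubjectsQuery,
    { publicUserId, userId },
    {
      onMutation: {
        when: /(update|delete)Subject/,
        // Assumed helper name: clear this query's cache and re-fetch
        run: ({ hardReset }) => hardReset()
      }
    }
  )
);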
That boilerplate, tho!
If you're thinking that that amount of boilerplate, while not huge, still wouldn't scale in a large web application, then take a step back and look more broadly. More specifically, replace any and all references to the word "subject", no matter the casing or plurality, with X. Then replace references to SubQuery (the query that reads the subjects) with Y. It should look something like this
graphqlClient.subscribeMutation([
{
when: /updateX/,
run: (op, res) => syncUpdates(Y, res.updateX, "allX", "X")
},
{ when: /deleteX/, run: (op, r) => syncDeletes(Y, r.deleteX, "allX", "X") }
]);
let { loading, loaded, data } = useQuery(
buildQuery(
Y,
{ publicUserId, userId },
{
onMutation: { when: /(update|delete)X/, run: ({ refresh }) => refresh() }
}
)
);
You've now got a prime refactoring opportunity. It would be straightforward to replace those two chunks of code with two function calls, passing in the query name (or even an array of query names, if you need), and the type name.
In fact, the beauty of hooks is how well they compose. Those two global mutation handlers could absolutely live inside the hook (onMutation can take a single object, or an array of them). If your application is big enough to justify it, a custom hook like this would work fine
const useSyncdQuery = (Query, Type, variables) => {
let plural = Type + "s";
let updateOp = `update${Type}`;
let deleteOp = `delete${Type}`;
return useQuery(
buildQuery(Query, variables, {
onMutation: [
{
when: new RegExp(updateOp),
run: (op, res) =>
syncUpdates(Query, res[updateOp], `all${plural}`, plural)
},
{
when: new RegExp(deleteOp),
run: (op, res) =>
syncDeletes(Query, res[deleteOp], `all${plural}`, plural)
},
{
when: new RegExp(`(update|delete)${Type}`),
run: ({ refresh }) => refresh()
}
]
})
);
};
And now anytime we want to use a query that syncs up, we can just do
let { loaded, data } = useSyncdQuery(SubQuery, "Subject", {
publicUserId,
userId
});
Use case 2 - soft resetting search results
Let's look at a use case similar to what I started with: running mutations against data that was searched, where the mutations may (or may not) invalidate the current search results. We'll assume books can be searched with any number of filters and sorted in any number of directions. As a result, it's exceedingly difficult to track all the ways in which modifying a book might cause it to either no longer show up in the current results, or show up in a different place (i.e. sorted differently).
Let's assume that for any mutation on a book, we'll just clear the entire cache of book search results, but update the current results on screen. So if you're searching all History books and change one of their subjects from History to Literature, the UI will change to reflect that, but the next time that search is run, a new network request will fire. This is a UI choice, and certainly subject to debate. Some might prefer that the new network request fire immediately, with the recently updated book vanishing as soon as the new results come back. I happen to prefer the former, but the library supports either; check the docs for more info and examples.
Here's the code we'll start with
import BooksQuery from "graphQL/books/getBooks.graphql";
//...
const variables = getBookSearchVariables(searchState);
const onBooksMutation = [
{
when: /updateBooks?/,
run: ({ currentResults, softReset }, r /* response */) => {
syncResults(
currentResults.allBooks,
"Books",
r.updateBooks ? r.updateBooks.Books : [r.updateBook.Book]
);
softReset(currentResults);
}
},
{
when: /deleteBook/,
run: ({ refresh }, res, req) => {
syncDeletes(BooksQuery, [req._id], "allBooks", "Books");
refresh();
}
}
];
const { data, loading, loaded, currentQuery } = useQuery(
buildQuery(BooksQuery, variables, { onMutation: onBooksMutation })
);
This time, updates and deletes can only ever happen while the query hook is active, so those mutation subscriptions are placed right in the hook. Here, though, when an update or delete comes in, we sync it with the results that are currently shown, while removing absolutely everything from cache, including those same currently-showing results; if the user runs a new query, then comes back, the original query will re-fire. Of course, softReset is a helper that does just that. The code works, but as before, it's a bit cumbersome; and as before, if this is a common use case in our application, we can refactor it reasonably easily. Let's see about making a custom hook that wraps useQuery while performing this cache invalidation.
const useSoftResetQuery = (Query, Type, variables) => {
let plural = Type + "s";
return useQuery(
buildQuery(Query, variables, {
onMutation: [
{
when: new RegExp(`update${plural}?`),
run: ({ currentResults, softReset }, resp) => {
syncResults(
currentResults[`all${plural}`],
plural,
resp[`update${plural}`]
? resp[`update${plural}`][plural]
: resp[`update${Type}`][Type]
);
softReset(currentResults);
}
},
{
when: new RegExp(`delete${Type}`),
run: ({ refresh }, res, req) => {
syncDeletes(Query, [req._id], `all${plural}`, plural);
refresh();
}
}
]
})
);
};
The hook itself is hardly more readable, but it's not terribly complex, and it's code you'd write once and reuse everywhere. Here's what the application code querying data with this cache strategy would now look like.
const variables = getBookSearchVariables(searchState);
const { data, loading, loaded, currentQuery } = useSoftResetQuery(
BooksQuery,
"Book",
variables
);
You could drop that anywhere you needed data that respected this particular cache invalidation strategy.
The point of this library is not to figure out a way to solve the cache invalidation problem; the point of this library is to make it easy for you to solve the cache invalidation problem yourself, in a way that's tailored to your own app.
Don't overdo your abstractions
This library was designed to provide useful primitives which can be tailored to your particular application. But don't overdo your abstractions. As always, try to wait until you have three identical pieces of code before you move them into one shared call. If you never get to three, it might mean your application is too small to bother (like my booklist project); or it might mean you have a lot of special cases in your code, in which case a one-size-fits-all solution would never have suited you anyway. Of course, it might also mean you have a lot of inconsistencies in your GraphQL endpoint's naming scheme, which you should look to standardize regardless.
Helpers implementation
Let's check out syncUpdates and syncDeletes
import { getDefaultClient } from "micro-graphql-react";

let graphqlClient = getDefaultClient();

// Merge updated items into every cached result set for the given query, and
// optionally force subscribed components to re-render.
export const syncUpdates = (cacheName, newResults, resultSet, arrName, options = {}) => {
  const cache = graphqlClient.getCache(cacheName);
  [...cache.entries].forEach(([uri, currentResults]) =>
    syncResults(currentResults.data[resultSet], arrName, newResults, options)
  );
  if (options.force) {
    graphqlClient.forceUpdate(cacheName);
  }
};

// Merge new or updated items into an existing array of results.
export const syncResults = (resultSet, arrName, newResults, { sort } = {}) => {
  if (!Array.isArray(newResults)) {
    newResults = [newResults];
  }
  const lookupNew = new Map(newResults.map(o => [o._id, o]));
  // Copy the array, then overwrite any items that were updated
  resultSet[arrName] = resultSet[arrName].concat();
  resultSet[arrName].forEach((o, index) => {
    if (lookupNew.has(o._id)) {
      resultSet[arrName][index] = Object.assign({}, o, lookupNew.get(o._id));
    }
  });
  // Append anything that's brand new
  const existingLookup = new Set(resultSet[arrName].map(o => o._id));
  resultSet[arrName].push(...newResults.filter(o => !existingLookup.has(o._id)));
  return sort ? resultSet[arrName].sort(sort) : resultSet[arrName];
};

// Remove deleted items from every cached result set for the given query.
export const syncDeletes = (cacheName, _ids, resultSet, arrName, { sort } = {}) => {
  const cache = graphqlClient.getCache(cacheName);
  const deletedMap = new Set(_ids);
  [...cache.entries].forEach(([uri, currentResults]) => {
    let res = currentResults.data[resultSet];
    res[arrName] = res[arrName].filter(o => !deletedMap.has(o._id));
    sort && res[arrName].sort(sort);
  });
};
It's not the simplest code, but neither is it terribly complex. It's mostly boring boilerplate that runs through result sets, updating what's needed. These are centralized helper methods my app uses with micro-graphql-react's hooks.
Wrapping up
If you think micro-graphql-react might be a good fit for your app, check out the docs. There are more examples, and fuller explanations of how everything works.
Happy Coding!