
Eleventy and GraphQL sitting in a tree

Consuming a headless CMS GraphQL API with Eleventy

With Eleventy, consuming data coming from a GraphQL API to generate static pages is as straightforward as using Markdown files.

The many flavours of headless CMSes

If you want to add a headless CMS to a JAMstack website, you have the choice between two main approaches: Git-backed or API driven.

Both will present content creators with a familiar graphical interface, but what happens behind the scenes when content is created, modified or deleted is quite different.

Git-backed headless CMSes

Git-backed CMSes like Netlify CMS or Forestry will save your content in text files and commit them to your git repository. This is my favourite approach for the following reasons:

  • content and code share the same workflow
  • content is version controlled by git with a clear history
  • content in the form of text files (markdown, YAML, etc.) is highly portable

API driven headless CMSes

API driven CMSes like Contentful or DatoCMS will save your content in a database in the cloud and make it available through an API. If you want to host your own data, Craft CMS, with its first party GraphQL API, is a great option, too. GraphQL is quickly becoming a popular way to query and consume those APIs. In my opinion this approach is interesting when:

  • content is consumed by various platforms
  • the project needs highly relational content models

Project structure

Eleventy (11ty), which is quickly becoming my static site generator of choice, can handle both approaches fairly elegantly and with a minimal amount of effort. Querying a GraphQL API and using the returned data to generate static pages is actually a straightforward process. Who knew?

DatoCMS is a headless CMS I have recommended to clients in the past. Pricing and options are fair, it is very flexible, it handles locales elegantly and has good developer and user experiences.

Although this blogpost is geared towards DatoCMS, this methodology is applicable to any headless CMS offering a GraphQL API.

Here is the folder architecture we will be working with in Eleventy, which is a fairly basic one:

+-- src
  +-- _data
      +-- blogposts.js
  +-- _includes
      +-- layouts
          +-- base.njk
  +-- blogposts
      +-- entry.njk
      +-- list.njk
+-- .eleventy.js
+-- .env
+-- .env.example
+-- package-lock.json
+-- package.json

DatoCMS configuration

After getting a DatoCMS account, we need a data model and some entries in DatoCMS. For this example, I created a data model called blogposts with a series of fields and a few entries.

We can then use our API token to connect to the GraphQL API Explorer and see what queries and options are available and what JSON is returned.

Again, most headless CMSes with a GraphQL API offer this functionality in some form or fashion.

Eleventy configuration

We will need our API token to authenticate with the DatoCMS GraphQL server. We can use dotenv to store it in a .env file that we add to our .gitignore so it does not end up in our repository. After installing the package, we create a .env file at the root of the project and add our DatoCMS API token to it:
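For example, the `.env` file only needs one line (the token value here is a placeholder, the variable name matches what we read from `process.env` later):

```text
DATOCMS_TOKEN=your-api-token-here
```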


Then, we just need to add the following line at the top of our .eleventy.js file:
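Assuming the standard dotenv setup, that line is:

```javascript
require("dotenv").config();
```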


Since that file is processed really early by Eleventy, our token will be available anywhere in our templates using process.env.DATOCMS_TOKEN.

Using JavaScript data files

Instead of getting our data using collections and markdown files with YAML front matter, we are going to use Eleventy's JavaScript data files. We will use src/_data/blogposts.js to connect to DatoCMS' Content Delivery API at build time and export a JSON file containing a list of all blogposts with all the fields we need. The content of that file will be available in our templates under the blogposts key.

Eleventy will then be able to use that single JSON file to build all detail and list pages for our blog.

Here is the full file required to retrieve all our blogposts. The code is based on the Vanilla JS request example available in the DatoCMS documentation.

I went with node-fetch rather than Apollo and friends to minimise dependencies.

The GraphQL API from DatoCMS has a hard limit: you can only get 100 records per query (thanks to Dan Fascia for pointing that out to me initially). If we have a large blog of more than 100 posts, we just have to make multiple queries and concatenate the results to make sure we get all blogposts.

Update June 8th 2020: don't be a dummy like me and use await in a while loop; there are more performant ways to fetch data from an API.

// required packages
const fetch = require("node-fetch");

// DatoCMS token
const token = process.env.DATOCMS_TOKEN;

// get blogposts
// see the Vanilla JS request example in the DatoCMS documentation
async function getAllBlogposts() {
  // max number of records to fetch per query
  const recordsPerQuery = 100;

  // number of records to skip (start at 0)
  let recordsToSkip = 0;

  // do we make a query?
  let makeNewQuery = true;

  // blogposts array
  let blogposts = [];

  // make queries until makeNewQuery is set to false
  while (makeNewQuery) {
    try {
      // initiate fetch
      const dato = await fetch("https://graphql.datocms.com/", {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          Accept: "application/json",
          Authorization: `Bearer ${token}`,
        },
        body: JSON.stringify({
          query: `{
            allBlogposts(
              first: ${recordsPerQuery},
              skip: ${recordsToSkip},
              orderBy: _createdAt_DESC,
              filter: {
                _status: { eq: published }
              }
            ) {
              id
              _createdAt
              title
              slug
              intro
              body(markdown: true)
              image {
                url
                alt
              }
              relatedBlogs {
                id
              }
            }
          }`,
        }),
      });

      // store the JSON response when promise resolves
      const response = await dato.json();

      // handle DatoCMS errors
      if (response.errors) {
        response.errors.forEach((error) => console.error(error.message));
        throw new Error("Aborting: DatoCMS errors");
      }

      // update blogposts array with the data from the JSON response
      blogposts = blogposts.concat(response.data.allBlogposts);

      // prepare for next query
      recordsToSkip += recordsPerQuery;

      // stop querying if we got back fewer records than we asked for
      if (response.data.allBlogposts.length < recordsPerQuery) {
        makeNewQuery = false;
      }
    } catch (error) {
      throw new Error(error);
    }
  }

  // format blogposts objects
  const blogpostsFormatted = blogposts.map((item) => {
    return {
      id: item.id,
      date: item._createdAt,
      title: item.title,
      slug: item.slug,
      image: item.image.url,
      imageAlt: item.image.alt,
      summary: item.intro,
      body: item.body,
      relatedBlogs: item.relatedBlogs,
    };
  });

  // return formatted blogposts
  return blogpostsFormatted;
}

// export for 11ty
module.exports = getAllBlogposts;
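As the update above notes, awaiting each query inside a while loop fetches pages sequentially. One more performant approach is to get the total record count first, then request all pages concurrently. The sketch below shows the parallel part only; `fetchPage` is a placeholder for a function that POSTs one paginated GraphQL query (first/skip) and returns an array of records:

```javascript
// Sketch of parallel pagination: given the total record count and a
// page-fetching function, request all pages concurrently with Promise.all
// instead of one by one. `fetchPage(first, skip)` is a hypothetical helper
// that resolves to one page of records.
async function fetchAllRecords(totalCount, perPage, fetchPage) {
  // how many pages we need to cover every record
  const pageCount = Math.ceil(totalCount / perPage);

  // kick off all page requests at once
  const pages = await Promise.all(
    Array.from({ length: pageCount }, (_, i) => fetchPage(perPage, i * perPage))
  );

  // flatten the array of pages into a single array of records
  return pages.flat();
}
```

With DatoCMS, the total count could come from a meta query such as `_allBlogpostsMeta { count }` before calling this helper.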

Instead of directly using data from the JSON response, I generally reformat it to future proof my templates a little. If something changes at the CMS level, I know I only have to fiddle with data files, not with all the templates that are using them.

Images and thumbnails

Every file or image uploaded to DatoCMS is stored on Imgix, which means we can simply add some parameters to any image URL to resize, crop, and manipulate them in various ways. These transformations happen on-the-fly and get cached on the CDN as well for future reuse.
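For example, with a hypothetical uploaded asset, a few query string parameters are all it takes (the path is made up; `w`, `h` and `fit` are standard Imgix parameters):

```text
https://www.datocms-assets.com/12345/photo.jpg                       original
https://www.datocms-assets.com/12345/photo.jpg?w=600                 resized to 600px wide
https://www.datocms-assets.com/12345/photo.jpg?fit=crop&w=600&h=338  cropped to 600x338
```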

Most headless CMSes out there will offer you similar functionalities, either by integrating with third party services like Cloudinary or Uploadcare or by having their own images API.

Relational fields

DatoCMS' GraphQL API deals very well with deep data structures and will easily let you retrieve the data you need from your relational fields. However, I generally rely on a simpler approach:

  • Create a big JSON file for each data type (blogposts, projects, sponsors, etc.), where each content item has a unique ID
  • For relational fields, only get the IDs of related items
  • Use nested loops at the template level to get the data we need using IDs

Since fast static site generators like Hugo or Eleventy have a very low performance penalty for loops at the template level, I have never encountered major performance problems with this solution. It gives you a lot of flexibility and keeps your queries simple and flat.

Generate a paginated list of blogposts with 11ty

Using the pagination feature of Eleventy, we can easily walk through our JSON file (accessible via the blogposts key) and generate a paginated list of blogposts. In this case, we are going to generate a paginated list with 12 items on each page, as specified by the size key.

Here is the full code for src/blogposts/list.njk:

---
pagination:
  data: blogposts
  size: 12
permalink: blog{% if pagination.pageNumber > 0 %}/page{{ pagination.pageNumber + 1 }}{% endif %}/index.html
---

{% extends "layouts/base.njk" %}
{% set htmlTitle = "Blog" %}

{% block content %}

  {# loop through paginated items #}
  {% for item in pagination.items %}
    {% if loop.first %}<ul>{% endif %}
      <li>
        <p><img src="{{ item.image }}?fit=crop&amp;w=200&amp;h=200" alt="{{ item.imageAlt }}"></p>
        <h2><a href="/blog/{{ item.slug }}">{{ item.title }}</a></h2>
        <p><time datetime="{{ item.date | date('Y-MM-DD') }}">{{ item.date | date("MMMM Do, Y") }}</time></p>
        <p>{{ item.summary }}</p>
      </li>
    {% if loop.last %}</ul>{% endif %}
  {% endfor %}

  {# pagination #}
  {% if pagination.hrefs | length > 0 %}
    <ul>
      {% if pagination.previousPageHref %}
        <li><a href="{{ pagination.previousPageHref }}">Previous page</a></li>
      {% endif %}
      {% if pagination.nextPageHref %}
        <li><a href="{{ pagination.nextPageHref }}">Next page</a></li>
      {% endif %}
    </ul>
  {% endif %}

{% endblock %}

Generate individual posts with 11ty

Using the same pagination feature, we can also easily generate all our individual pages. The only trick here is to use pagination with a size of 1, combined with dynamic permalinks. Here is the full code for src/blogposts/entry.njk:

---
pagination:
  data: blogposts
  size: 1
  alias: blogpost
permalink: blog/{{ blogpost.slug }}/index.html
---
{% extends "layouts/base.njk" %}
{% set htmlTitle = blogpost.title %}

{% block content %}
  {# blogpost #}
  <img src="{{ blogpost.image }}?fit=crop&amp;w=1024&amp;h=576"
       srcset="{{ blogpost.image }}?fit=crop&amp;w=600&amp;h=338 600w,
               {{ blogpost.image }}?fit=crop&amp;w=800&amp;h=450 800w,
               {{ blogpost.image }}?fit=crop&amp;w=1024&amp;h=576 1024w"
       alt="{{ blogpost.imageAlt }}">

  <h1>{{ blogpost.title }}</h1>
  <p><time datetime="{{ blogpost.date | date('Y-MM-DD') }}">{{ blogpost.date | date("MMMM Do, Y") }}</time></p>
  <p>{{ blogpost.intro }}</p>
  {{ blogpost.body | safe }}

  {# related blogposts #}
  {% if blogpost.relatedBlogs|length %}
    <h2>You might also like</h2>
    {% for item in blogpost.relatedBlogs %}
      {% for post in blogposts %}
        {% if == %}
            <a href="/blog/{{ post.slug }}">{{ post.title }}</a>
        {% endif %}
      {% endfor %}
    {% endfor %}
  {% endif %}
{% endblock %}

Trigger builds automatically

Most headless CMSes will provide webhooks that will send a request to a URL when data changes. If you are hosting your site on Netlify (why wouldn't you, they're awesome), creating a build hook is a couple of clicks away. That build hook will give us a URL that will trigger a site build when hit by a POST request.
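For instance, hitting the build hook URL with a simple POST is enough to kick off a build (the hook ID below is a placeholder):

```text
curl -X POST -d '{}' https://api.netlify.com/build_hooks/YOUR_HOOK_ID
```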

DatoCMS offers a one-click deployment integration with Netlify. We just have to activate it and voilà, our blog is rebuilt every time the data changes.

We now have a blog that combines the power of a relational database with the speed and reliability of a static, CDN hosted website.