Analyzing Notion app performance


Web performance isn’t going to save you in this crisis.

But if you’re building a software product, chances are you’re relatively unaffected. And in this case, having a quick app is more important than ever. The internet is slowing down due to increased demand, and people are holding on to their phones for longer – so if your app is slow, your users will feel it.

And a slow app means worse business.

Today, let’s take a look at Notion – an advanced note-taking web app. Notion is a great product, but one thing customers complain about is its startup time:

Notion shipped tremendous speed enhancements recently, but there’s still a lot of room for improvement. Let’s reverse-engineer the app and see what else can be optimized!

Notion is a React web app embedded into a native shell. This means its “startup time” is largely the “loading time of the web part”.

To be concrete: on the desktop, the Notion app is a web app wrapped in Electron. On mobile, from my understanding, the Notion app runs both React Native parts (likely responsible for some mobile experiences) and web parts (likely responsible for the overall editing UI).

Due to (apparently) HTTPS certificate pinning in the Android Notion app, I wasn’t able to verify whether the mobile app runs the same bundles the desktop app does. But even if the bundles are different, the issues they have are likely similar.

To understand how the web part is loading, let’s create a public Notion page:

[Screenshot: a public Notion page]

and run a WebPageTest audit over it. (This works because public pages run the same code the whole app runs.)

WebPageTest is an advanced performance testing tool.

WebPageTest returns a lot of useful information, but the most interesting part is the loading waterfall:

[Screenshot: the WebPageTest loading waterfall]

Woah, that’s a lot of info. What’s going on there?

[Screenshot: the loading waterfall, annotated step by step]

Here’s what’s going on:

  1. You open the page. The page loads a few stylesheets – and two JavaScript bundles, vendor and app.
  2. Once both bundles are loaded, they start to execute – and spend a whole second doing so.
  3. Once the app has been initialized, it starts sending API requests for page data, loading analytics…
  4. and executing more code…
  5. until, at 5.6 seconds, the first paint happens:

    [Screenshot: the first paint – a spinner]

    …but it’s just a spinner.

  6. At 6.2 seconds, the page content actually gets rendered.

    [Screenshot: the rendered page content]

    It takes a couple more seconds to finish loading the hero image.

6.2 seconds on a desktop computer is quite a lot. On a medium-tier phone like a Nexus 5, this time increases to 12.6 seconds. Here’s how that feels:

Let’s see how we can improve it.

When one talks about “loading speed”, one typically means networking performance. From the networking standpoint, Notion is doing well: they’re using HTTP/2, they’re gzipping files, and they’re using Cloudflare as a proxying CDN.

However, another part of “loading speed” that people talk less about is processing performance. All downloaded resources have a processing cost: gzip archives need to be decompressed; images need to be decoded; JS needs to be executed.

Unlike networking performance, processing performance doesn’t improve with better networks – it’s only as fast as the user’s CPU. And the CPUs in phones – especially Android phones – are slow:

[Chart: JavaScript processing times on different devices]

For Notion, processing performance is even more significant. It’s easy to avoid networking costs by caching network resources in the app. But processing costs are paid every single time the app starts – which means a phone user may see a 10-second splash screen multiple times a day.

On our test Nexus 5, executing vendor and app bundles takes 4.9 seconds. This whole time, the page – and the app – stay non-interactive and empty:

[Screenshot: the page stays blank while bundles execute]

What’s happening there? WebPageTest doesn’t record JS traces, but if we go to DevTools and run a local audit, we’ll see this:

[Screenshot: a DevTools performance trace of the startup]

First, the vendor bundle is being compiled (for 0.4s). Second, the app bundle is being compiled (for 1.2s). Third, both bundles start executing – and spend 3.3s doing so.

So how can we reduce that time?

Let’s take a look at the bundle execution phase. What are all these functions?

[Screenshot: the bundle execution part of the trace]

Turns out that’s bundle initialization:

  • Functions with four-character names, like bkwR or Cycz, are application modules.

    | | When webpack builds a bundle, it wraps each module with a function – and assigns it an ID. That ID becomes the function name. In the bundle, this looks as follows: | | Before: | | js | import formatDate from './formatDate.js'; | | // ... | | | After: | | js | fOpr: function(module, __webpack_exports__, __webpack_require__) { | "use strict"; | | __webpack_require__.r(__webpack_exports__); | | var _formatDate__WEBPACK_IMPORTED_MODULE_0__ = | __webpack_require__("xN6P"); | | // ... | }, |

  • And the s function is actually __webpack_require__.

    __webpack_require__ is webpack’s internal function for requiring modules. Whenever you write an import, webpack transforms it into a __webpack_require__() call.

Bundle initialization takes so much time because it executes all the modules. Each module may take just a few milliseconds to execute, but with Notion’s 1100+ modules, this adds up.

The only way to fix it is to execute fewer modules upfront.

Use code splitting

The best way to improve the startup time is to code-split away features that aren’t needed immediately. In webpack, this is done using import():


Before:

<Button onClick={openModal} />

After:

<Button
  onClick={() => import('./Modal').then(m => m.openModal())}
/>

Code splitting is the best first optimization you can do. It brings huge performance benefits: after doing code splitting, Tinder reported a 60% decrease in the complete load time; and our client, Framer, managed to cut 40-45% off the CPU Idle time.

There are several common approaches to code splitting:

  • splitting the bundle by pages,
  • splitting away below-the-fold code,
  • and splitting away conditional content (any dynamic UIs that are not visible immediately)

The Notion app doesn’t have pages, and code-splitting below-the-fold content is hard because pages are very dynamic. This means the only useful approach is code-splitting conditional content. The following parts may be good candidates for that:

  • Settings, Import, Trash – all UIs that are rarely used
  • Sidebar, Share, Page options – all UIs that are frequently used but are not needed immediately when the app starts. Those could be preloaded and initialized right after the app starts
  • Heavy page blocks. Some page blocks are quite heavy – e.g. the Code block supports highlighting 68 languages, which bundles 120+ minified KBs of language definitions from Prism.js. Notion seems to be code-splitting some blocks already (e.g., Math equation), but it might make sense to extend it to others as well.

Check that module concatenation is working

In webpack, the module concatenation feature is responsible for merging multiple small ES modules into a single larger one. This reduces the module processing overhead and makes removing unused code more effective.

To confirm that module concatenation is working, check that webpack’s optimization.concatenateModules option is enabled (it’s on by default in production mode) and that your code reaches webpack as ES modules – e.g., that Babel isn’t transpiling imports into CommonJS first.

Fun fact. Remember that all imports are transformed into the __webpack_require__ function?

Well, what happens when the same function is called 1100 times throughout initialization? Right, it becomes a hot path taking 26.8% of the total time:

[Screenshot: DevTools bottom-up view with the s function on top]

(s is the minified name of __webpack_require__.)

Unfortunately, apart from concatenating more modules, there’s not much to optimize there.

Try the lazy option of Babel’s plugin-transform-modules-commonjs

Note: this suggestion relies on disabling module concatenation. Because of that, it’s incompatible with the previous one.

@babel/plugin-transform-modules-commonjs is an official Babel plugin that transforms ES imports into CommonJS require()s:


Before:

import formatDate from './formatDate.js';
export function getToday() {
  return formatDate(new Date());
}

After:

const formatDate = require('./formatDate.js');
exports.getToday = function getToday() {
  return formatDate(new Date());
};

And with its lazy option enabled, it also inlines all requires right into where they’re used:


exports.getToday = function getToday() {
  return require('./formatDate.js')(new Date());
};

Thanks to this transformation, if the getToday() function is never called, ./formatDate.js is never imported! And we don’t pay the import cost.
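Enabling this is a small Babel configuration change – a sketch, assuming Babel 7:

```js
// babel.config.js — a sketch. With `lazy: true`, only bare imports
// (like 'lodash') are deferred; local ./foo imports stay eager because
// they're more likely to have circular dependencies. Pass a function
// to defer every module:
module.exports = {
  plugins: [
    ['@babel/plugin-transform-modules-commonjs', { lazy: () => true }],
  ],
};
```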

There are a few drawbacks, however:

  • Switching the existing codebase to lazy might be tricky. Some modules may rely on side effects from other modules, which we’re delaying here. Also, the plugin docs warn that the lazy option breaks cyclic dependencies
  • Switching to CommonJS modules disables module concatenation. This means the module processing overhead will be higher

These drawbacks make this option riskier compared to others – but if it plays right, its benefits might far outweigh its costs.

How many modules could be deferred like this? Chrome DevTools gives us an easy answer. Open any JS-heavy page (e.g., the Notion one), go to DevTools, press Ctrl+Shift+P (Windows) or ⌘⇧P (macOS), type “start coverage”, and press Enter. The page will reload, and you’ll see how much code was executed during the initial render.

In Notion, 39% of the vendor bundle and 61% of the app bundle are unused after the page renders:

[Screenshot: DevTools code coverage for the Notion bundles]

Let’s take a look at the bundle initialization trace again:

[Screenshot: the DevTools performance trace again]

A significant part here is “Compile Script” (parts 1 and 2), which takes 1.6s in total. What is that?

V8 (Chrome’s JS engine), just like other JS engines, uses just-in-time compilation to run JavaScript. This means all code it executes has to be compiled into machine code first.

And the more code there is, the more time it takes to compile it. In 2018, on average, V8 was spending 10-30% of total execution time in parsing and compiling JavaScript. In our case, the compilation step takes 1.6s out of a total of 4.9s – a whopping 32%.

The only way to reduce the compilation time is to serve less JavaScript.

Another great approach would be to precompile JavaScript into machine code – and avoid parsing costs altogether by running compiled JavaScript. However, this is not currently possible.

Use code splitting

Yes, this again. By code-splitting unused functionality, you not only reduce bundle init time, but also decrease the compilation time. The less JS code there is, the faster it compiles.

Check out the previous section on code splitting where we’ve talked about common code-splitting approaches & how Notion could benefit from it.

Remove unused vendor code

As we saw, when a page loads, almost 40% of Notion’s vendor bundle stays unused:

[Screenshot: DevTools code coverage for the Notion bundles]

Some of that code will be needed later when the user does something in the app. But how much?

Notion doesn’t publish source maps, which means we can’t use source-map-explorer to explore the bundle and see the largest modules. However, we can still guess libraries from their minified source – by looking at non-minified strings and searching for them on GitHub.

Based on my analysis, here are the 10 largest modules in the vendor bundle:

  1. fingerprintjs2 → 29 KB
  2. moment-timezone → 32 KB
  3. chroma-js → 35 KB
  4. tinymce → 48 KB
  5. diff-match-patch → 54 KB
  6. amplitude-js → 55 KB
  7. lodash → 71 KB
  8. libphonenumber-js/metadata.min.json → 81 KB
  9. react-dom → 111 KB
  10. moment with all locales → 227 KB

This list does not include libraries that are composed of multiple small files.
For example, the bundle also includes core-js, which occupies 154 KB but consists of 300+ small files.

Out of all these modules, the most significant and easy to optimize ones are moment, lodash and libphonenumber-js.

moment, a JS library for manipulating dates, bundles 160+ minified KBs of localization files. Given that Notion is only available in English, these are hardly needed.

What can one do here?

  • First, drop unused moment locales using moment-locales-webpack-plugin.
  • Second, consider switching from moment to date-fns. Unlike with moment, when you’re using date-fns, you’re importing only specific date manipulation methods you need. So if you only use addDays(date, 5), you won’t end up bundling the date parser.

    You-Dont-Need-Momentjs: a list of functions you can use to replace moment.js
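The first option is a one-line webpack configuration change. A sketch, assuming moment-locales-webpack-plugin is installed:

```js
// webpack.config.js — a sketch. With no options,
// MomentLocalesPlugin strips all locales except `en`
// (which moment always bundles anyway)
const MomentLocalesPlugin = require('moment-locales-webpack-plugin');

module.exports = {
  plugins: [new MomentLocalesPlugin()],
};
```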

lodash, a set of data manipulation utilities, bundles 300+ functions for working with data. That’s too much – from what I’ve seen, apps typically use 5-30 of those methods at most.

The easiest way to drop unused methods is to use babel-plugin-lodash. Apart from that, lodash-webpack-plugin supports removing some lodash features (like caching or Unicode support) from inside these methods.
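Both are configuration-level changes. A sketch, assuming both packages are installed:

```js
// webpack.config.js — a sketch. babel-plugin-lodash rewrites
// `import { debounce } from 'lodash'` into cherry-picked
// `lodash/debounce` imports; lodash-webpack-plugin then strips
// unused internal features (like caching or Unicode support)
const LodashModuleReplacementPlugin = require('lodash-webpack-plugin');

module.exports = {
  module: {
    rules: [
      {
        test: /\.js$/,
        loader: 'babel-loader',
        options: { plugins: ['lodash'] },
      },
    ],
  },
  plugins: [new LodashModuleReplacementPlugin()],
};
```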

libphonenumber-js, a library for parsing and formatting phone numbers, bundles an 81 KB JSON file with phone number metadata.

I can’t see any places where phone numbers are used, so it’s likely this library supports a single use case somewhere deep in the Notion UI. It’d be great to replace it with another library or custom code – and drop the whole dependency.

Remove polyfills

Another major dependency present in the vendor bundle is polyfills from the core-js library:

[Screenshot: core-js modules inside the vendor bundle]

There are two problems with it.

It’s unnecessary. We’re testing Notion in Chrome 81, which supports all the modern JS features. However, the bundle still includes polyfills for Symbol, Object.assign, and many other methods. These polyfills have to be downloaded, parsed, and compiled – all for nothing.

This also affects Notion apps. In the desktop app (and probably in the mobile one as well), the JS engine version is modern, fixed, and well-known. There’s zero chance Symbol or Object.assign would be absent there – however, the app still downloads the same polyfills.

What should we do instead? Ship polyfills for older browsers, but skip them for modern ones. See “How to load polyfills only when needed” for a few ways to do this.
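One common pattern is the module/nomodule trick: browsers that understand type="module" are modern enough to skip most polyfills, and they ignore nomodule scripts, while legacy browsers do the opposite. A sketch (the filenames are hypothetical):

```html
<!-- Modern browsers load this and ignore the nomodule script -->
<script type="module" src="/app.modern.js"></script>
<!-- Legacy browsers ignore type="module" and load this instead -->
<script nomodule src="/app.legacy-with-polyfills.js"></script>
```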

It’s bundled multiple times. The vendor bundle includes the core-js copyright 3 times. Each time, the copyright is identical, but is shipped in a different module and with different dependencies:

[Screenshot: three identical core-js copyright strings in the bundle]

This means core-js itself is bundled 3 times. But why? Let’s dig deeper.

In a non-minified form, the module with the copyright looks like this:

var core = require('./_core');
var global = require('./_global');
var SHARED = '__core-js_shared__';
var store = global[SHARED] || (global[SHARED] = {});

(module.exports = function (key, value) {
  return store[key] || (store[key] = value !== undefined ? value : {});
})('versions', []).push({
  version: core.version,
  mode: require('./_library') ? 'pure' : 'global',
  copyright: '© 2019 Denis Pushkarev (zloirock.ru)',
});

Here, we have two bits that describe the library:

  • var core = require('./_core'); core.version for the library version, and
  • require('./_library') ? 'pure' : 'global' for the library mode

In the minified code, that corresponds to:

  • var r=n(<MODULE_ID>);r.version for the library version, and
  • n(<MODULE_ID>)?"pure":"global" for the mode

If we follow these module IDs in the bundle, we’ll see this:

[Screenshot: the version and mode of each core-js copy]

Woah. This means these three versions of core-js are:

  • 2.6.9 in the global mode,
  • 2.6.11 in the global mode, and
  • 2.6.11 in the pure mode

Turns out, this is a common issue. This happens when your app depends on one version of core-js, but some of your dependencies depend on another.

How to solve it? Run yarn why core-js to figure out what depends on the remaining two versions. Then either remove or reconfigure the dependencies that bundle extra core-js copies, or deduplicate all three versions into one using webpack’s resolve.alias.
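The resolve.alias deduplication could look like this sketch (note: this is only safe if the copies are API-compatible, as these three 2.6.x copies appear to be):

```js
// webpack.config.js — a sketch: make every `core-js` import, including
// ones from nested node_modules, resolve to a single copy
const path = require('path');

module.exports = {
  resolve: {
    alias: {
      'core-js': path.resolve(__dirname, 'node_modules/core-js'),
    },
  },
};
```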

Let’s take another look at how Notion is loading:

[Screenshot: the loading waterfall]

A few things stand out here:

  • API requests don’t start happening until the bundle is fully downloaded
  • First contentful paint (when the actual content becomes visible) doesn’t happen until most of the API requests are done. (Specifically, it waits for request 35, loadPageChunk)
  • API requests are mixed with third parties: Intercom, Segment, and Amplitude

Here’s how to optimize that.

Defer third parties

In real life, we can’t simply remove all of Notion’s third parties. But we can defer them – like this:


Before:

async function installThirdParties() {
  if (state.isIntercomEnabled) intercom.installIntercom();

  if (state.isSegmentEnabled) segment.installSegment();

  if (state.isAmplitudeEnabled) amplitude.installAmplitude();
}

After:

async function installThirdParties() {
  setTimeout(() => {
    if (state.isIntercomEnabled) intercom.installIntercom();

    if (state.isSegmentEnabled) segment.installSegment();

    if (state.isAmplitudeEnabled) amplitude.installAmplitude();
  }, 15 * 1000);
}

This would make sure they are not loaded until the app has fully initialized.

setTimeout vs requestIdleCallback vs events. setTimeout is not the best approach (hard-coding the timeout is hacky), but it’s good enough.

The best approach would be to listen for some kind of “page fully rendered” in-app event, but I’m not sure whether Notion has one.

requestIdleCallback might sound like the perfect tool for the job, but it’s not. In my tests in Chromium, it triggers too early – merely 60 ms after the main thread becomes idle.

Loading analytics on interaction. Another great approach to defer analytics is to avoid loading it until the first user’s interaction – the first click or tap.

However, note that this makes analytics invisible for synthetic tests (like Lighthouse or PageSpeed Insights). To measure the real JavaScript cost for users, you should install a Real User Monitoring library – e.g. LUX from SpeedCurve or Browser Insights from Cloudflare.
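A sketch of the load-on-interaction approach (loadAnalytics is a hypothetical function that would inject the analytics scripts):

```javascript
// Run `callback` once, on the first interaction on `target`
// (window, in the browser). A sketch — the event list may need tuning.
function onFirstInteraction(target, callback) {
  const events = ['click', 'touchstart', 'keydown', 'scroll'];
  const opts = { capture: true, passive: true };
  const handler = () => {
    // Unsubscribe everything after the first event fires
    events.forEach((e) => target.removeEventListener(e, handler, opts));
    callback();
  };
  events.forEach((e) => target.addEventListener(e, handler, opts));
}

// Usage (in the browser): onFirstInteraction(window, loadAnalytics);
```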

Preload API data

In Notion, before the page is rendered, the browser has to send 9 requests to the API:

[Screenshot: the API requests sent before render]

Each request may take from 70 ms (on a cable connection) to 300-500 ms (on a 4G connection with a medium-tier phone). And some of these requests seem sequential – they aren’t sent until previous requests complete.

This means slow API requests can easily result in significant latency. In my tests, removing this latency makes the page render 10% faster.
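Part of that latency comes from the sequential requests: independent requests can be sent in parallel instead. A sketch (the API methods are hypothetical stand-ins for Notion’s endpoints):

```javascript
// Sequential: each await waits for the previous round-trip to finish
async function loadSequential(api) {
  const analytics = await api.getUserAnalyticsSettings();
  const experiments = await api.getExperiments();
  const page = await api.loadPageChunk();
  return { analytics, experiments, page };
}

// Parallel: all three requests are in flight at once, so the total
// latency equals the slowest request, not the sum of all three
async function loadParallel(api) {
  const [analytics, experiments, page] = await Promise.all([
    api.getUserAnalyticsSettings(),
    api.getExperiments(),
    api.loadPageChunk(),
  ]);
  return { analytics, experiments, page };
}
```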

But how can we remove the latency in the real app?

Inline page data into the HTML. The best approach would be to calculate the API data on the server side – and include it directly into the HTML response. E.g., like this:

app.get('*', async (req, res) => {
  // Send the app shell and the bundle tags first,
  // so the browser can start downloading the bundles right away
  res.write(`
    <div id="notion-app"></div>
    <script src="/vendors-2b1c131a5683b1af62d9.js" defer></script>
    <script src="/app-c87b8b1572429828e701.js" defer></script>
  `);

  // Then compute the state and stream it after the shell
  const stateJson = await getStateAsJsonObject();
  res.write(`
    <script>
      window.__INITIAL_STATE__ = JSON.parse(${stateJson})
    </script>
  `);
})

Make sure to:
a) encode data as JSON for best performance;
b) escape data with jsesc (json: true, isScriptContext: true) to avoid XSS attacks.

Also, note that the bundles have the defer attribute. We need it so the bundles execute after the __INITIAL_STATE__ script.

With this approach, the app won’t need to wait for API responses. It will retrieve the initial state from the window and start rendering immediately.

Cloudflare workers. Notion uses Cloudflare as a CDN provider. If Notion’s HTML pages are static (e.g., they’re served by AWS S3), Cloudflare workers might be useful instead.

With Cloudflare workers, you can intercept the page, fetch dynamic data straight from the CDN worker, and append the data to the end of the page.

Inline a script to prefetch page data. Another approach is to write an inline script that will request the data ahead of time:

<div id="notion-app"></div>
<script>
  fetchAnalytics();
  fetchExperiments();
  fetchPageChunk();

  function fetchAnalytics() {
    window._analyticsSettings = fetch(
      '/api/v3/getUserAnalyticsSettings',
      {
        method: 'POST',
        body: '{"platform": "web"}',
      }
    ).then((response) => response.json());
  }

  async function fetchExperiments() { /* … */ }

  async function fetchPageChunk() { /* … */ }
</script>
<script src="/vendors-2b1c131a5683b1af62d9.js"></script>
<script src="/app-c87b8b1572429828e701.js"></script>

The app can then simply await on window._analyticsSettings (and similar promises). If the data is loaded by that time, the app will get it near-immediately.
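On the app side, consuming the prefetched data could look like this sketch (the fallback mirrors the inline script above):

```javascript
// A sketch: prefer the promise the inline script created; fall back
// to fetching now if the inline script didn't run (or failed early).
// In the real app, `prefetched` would be window._analyticsSettings.
function getAnalyticsSettings(prefetched) {
  if (prefetched) return prefetched;
  return fetch('/api/v3/getUserAnalyticsSettings', {
    method: 'POST',
    body: '{"platform": "web"}',
  }).then((response) => response.json());
}

// Usage: const settings = await getAnalyticsSettings(window._analyticsSettings);
```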

The important bit: the script should start sending requests as soon as possible. This will increase the chance that responses will arrive – and be handled – while the bundles are still loading and the main thread is idle.

Optimizations above should bring the most benefits. But there are a few other things that are worth paying attention to.

Cache-Control on responses

Notion doesn’t set the Cache-Control header on its responses. This doesn’t disable caching – but it means each browser will cache the responses differently, which could lead to unexpected client-side bugs.

To avoid this, set an explicit Cache-Control header on bundle assets and API responses.
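A sketch of sensible values (assuming the bundles keep their content hashes in the filenames, as they do now):

```
# For /vendors-[hash].js and /app-[hash].js — the hash changes whenever
# the content does, so the files can be cached "forever":
Cache-Control: public, max-age=31536000, immutable

# For API responses — allow storing, but revalidate on every use:
Cache-Control: private, no-cache
```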

Loading skeleton

The Notion app has a spinner that’s shown while the page is loading:

[Screenshot: the loading spinner]

The spinner helps to signify that “something is loading”. However, sometimes, the spinner actually worsens the perceived performance. Users see the spinner and pay attention to the fact that something is loading – which makes the app feel slower.

What could be done instead is showing a skeleton of the UI:

[Screenshot: a skeleton of the Notion UI]

It’s small enough to be inlineable, and it prepares the user for the actual UI.

So, how much time can all these optimizations save us?

In total, based on this (very rough) calculation, we save 3.9 out of 12.6 seconds – a 30% improvement just from tuning some configs and deferring some loading. And this is on top of the great speed improvements the Notion team has already shipped.

It turns out, almost every app has low-hanging fruit that can be picked just by tuning the bundler configuration and making a few precise code changes. And if you read this far and liked this case study, consider spreading the word about it:

Thanks to Radion Chernyakov, Semyon Muravyov, Victor Kolb, Nikolay Kost for their draft reviews and helpful suggestions.


This post was originally posted here
