How to use Reassure with Vega

:bulb: Editor’s note: This knowledge base article is the first in a series of guides from our verified Vega development agencies. Special thanks to Callstack for sharing how to implement their popular perf testing library for Vega.

Welcome! In this tutorial we will focus on monitoring our app’s performance by using Reassure with Vega! We will start with a brief explanation of what Reassure is and how it works, followed by the steps to integrate it and effortlessly monitor Vega performance.

Why do we need Reassure?

In the development cycle, we start by implementing new features and writing tests to gain confidence that they work as expected, both now and after future refactors. Once that’s all completed, we pass it to QA. Naturally, this is a very simplified version of the cycle, as we have completely ignored the wild world of bugs.

When we spot a bug and identify its root cause, we (hopefully) update our test suite to make sure it does not break again - nobody likes regressions! However, there is a big piece of the puzzle missing here: what if, by quickly fixing the bug, we also made the application slower?

The problem is that you will likely not notice this performance change while actively developing the feature. Running our test suite will not bring it to our attention either, as we are not testing performance, and the CI/CD pipelines don’t care about test execution time. Usually, once we get a green “tests passed” message, we move on to the next item on our to-do list. Unfortunately, by the time someone notices performance issues, it’s usually too late; these issues pile up over time until they cross some threshold, and then it’s the classic “death by a thousand papercuts” scenario!

This is where Reassure kicks in. It solves the problem by continuously monitoring the performance vitals of both your app and your components!

How does it work?

Simply put, Reassure magically measures both the execution time and render counts of your components in a way that can be compared later. Taking two measurements - before and after a change - gives us a nice summary of the performance cost of that change. The beauty lies in its simplicity: it wraps your component with React’s Profiler, collects the aforementioned metrics over multiple runs, and performs statistical analysis for you, separating the signal from the noise to produce a nice summarized list of tests with significant changes versus those that are negligible.
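To build a bit of intuition, here is a minimal sketch (not Reassure’s actual implementation) of how render duration and render count can be collected with React’s Profiler; the onRender callback, counters, and measureOnce helper below are ours, purely for illustration:

import React, { Profiler, ProfilerOnRenderCallback } from 'react';
import { render } from '@testing-library/react-native';

// Illustration only - not Reassure's internals.
const durations: number[] = [];
let renderCount = 0;

// React calls this after every committed render of the wrapped tree,
// passing the time it actually took to render.
const onRender: ProfilerOnRenderCallback = (_id, _phase, actualDuration) => {
  durations.push(actualDuration);
  renderCount += 1;
};

// One measurement pass: render the component wrapped in the Profiler.
// Reassure repeats this many times and runs statistics over the samples.
const measureOnce = (element: React.ReactElement) =>
  render(<Profiler id="measured" onRender={onRender}>{element}</Profiler>);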

Installation

Let’s assume you’ve already created a Vega project, either via Vega Studio or Vega CLI.
Before you write your first performance test, you need to install Reassure, so open your terminal of choice, navigate to the project’s directory (if needed) and issue the following command:

npm install --save-dev reassure

# or if you use Yarn instead:
yarn add --dev reassure

We assume you are already familiar with React Native Testing Library. There is no need to install it, as every Vega project ships with it out of the box!
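If you want to double-check that the testing library is indeed wired up, a tiny sanity test like the one below should pass out of the box (the tst/Smoke.test.tsx file name and component are ours, purely hypothetical):

import React from 'react';
import { Text } from 'react-native';
import { render } from '@testing-library/react-native';

// Hypothetical sanity check - not part of the tutorial's required files.
test('React Native Testing Library is available', () => {
  const { getByText } = render(<Text>Hello Vega</Text>);
  expect(getByText('Hello Vega')).toBeTruthy();
});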

Write your component under test

If you created a brand new Vega project, you need to have something to test, right?

To keep things simple, we decided to build on the “Build a Vega Video App” guide, from the end of the first part, right after you created and rendered a Video List.

To mix things up a little bit, we added three buttons to LandingScreen to allow users to sort the videos by title, duration, or channel_id.
So let’s see what our src/screens/LandingScreen.tsx looks like:

import React, { useState } from 'react';
import { StyleSheet, View } from 'react-native';
import { IVideo, useVideos } from '../VideoApi';
import Button from '../components/Button';
import Header from '../components/Header';
import VideoList from '../components/VideoList';

type SortOrder = 'title' | 'duration' | 'channel';

const sortFunctions: {[key in SortOrder]: (a: IVideo, b: IVideo) => number} = {
  'title': (a, b) => a.title.localeCompare(b.title),
  'duration': (a, b) => a.duration.localeCompare(b.duration, undefined, { numeric: true }),
  'channel': (a, b) => a.channel_id.localeCompare(b.channel_id),
};

function sortVideos(videos: IVideo[], sortOrder: SortOrder) {
  return [...videos].sort(sortFunctions[sortOrder]);
}

const LandingScreen = () => {
  const url = 'https://d2ob7xfxpe6plv.cloudfront.net/TestData.json';
  const { videos } = useVideos(url);
  const [sortOrder, setSortOrder] = useState<SortOrder>('title');
  const sortedVideos = sortVideos(videos, sortOrder);
  const colorForSortButton = (order: SortOrder) =>
    order === sortOrder ? '#FC4C02' : 'gray';

  return (
    <>
      <Header />
      <View style={styles.buttonContainer}>
        <Button
          style={styles.sortButton}
          title='By name'
          onPress={() => setSortOrder('title')}
          color={colorForSortButton('title')}
        />
        <Button
          style={styles.sortButton}
          title='By duration'
          onPress={() => setSortOrder('duration')}
          color={colorForSortButton('duration')}
        />
        <Button
          style={styles.sortButton}
          title='By channel'
          onPress={() => setSortOrder('channel')}
          color={colorForSortButton('channel')}
        />
      </View>
      <VideoList
        title='All videos'
        videos={sortedVideos}
      />
    </>
  );
};

const styles = StyleSheet.create({
  buttonContainer: {
    width: '100%',
    flexDirection: 'row',
  },
  sortButton: {
    flex: 1,
  },
});

export default LandingScreen;

In addition, we extracted the video list from LandingScreen to a separate component, which is helpful for the homework.

Here’s the revised src/components/VideoList.tsx file:

import React from 'react';
import { FlatList, StyleSheet, Text, View } from 'react-native';
import { IVideo } from '../VideoApi';
import VideoCard from './VideoCard';

interface IProps {
  title: string;
  videos: IVideo[];
}

const renderVideoCard = ({ item }: { item: IVideo }) => (
  <View
    key={item.id}
    style={styles.itemContainer}
  >
    <VideoCard
      title={item.title}
      description={item.description}
      imgURL={item.imgURL}
    />
  </View>
);

const VideoList = ({ title, videos }: IProps) => (
  <>
    <Text style={styles.labelText}>{title}</Text>
    <FlatList
      style={styles.flatList}
      horizontal
      data={videos}
      renderItem={renderVideoCard}
    />
  </>
);

const styles = StyleSheet.create({
  labelText: {
    fontSize: 30,
    fontWeight: '700',
    color: 'white',
    paddingHorizontal: 10,
  },
  flatList: {
    padding: 10,
  },
  itemContainer: {
    margin: 10,
  },
});

export default VideoList;

We also extracted the video fetching logic out of LandingScreen into a separate file (./src/VideoApi.ts), which now has the following content:

import { useEffect, useState } from 'react';

export interface IVideo {
  id: string;
  title: string;
  description: string;
  duration: string;
  thumbURL: string;
  imgURL: string;
  videoURL: string;
  categories: string[];
  channel_id: string;
}

const initialVideoState = {
  videos: [] as IVideo[],
  isLoading: true,
  isError: false,
};

export const useVideos = (url: string) => {
  const [queryState, setQueryState] = useState(initialVideoState);

  useEffect(() => {
    fetch(url)
    .then((response) => response.json())
    .then((data) => setQueryState({
      videos: data.testData,
      isLoading: false,
      isError: false,
    }))
    .catch((error) => {
      setQueryState({
        videos: [],
        isLoading: false,
        isError: true,
      });
      console.log(error);
    });
  }, [url]);

  return queryState;
};

For this example, we created a sample (primitive) hook to fetch the videos. Keep in mind that for your production code we highly recommend using an existing solution with proper caching and refetching, such as TanStack Query.
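As a minimal sketch of what that could look like, here is an equivalent hook built on TanStack Query; it assumes @tanstack/react-query is installed and a QueryClientProvider wraps the app, and the useVideosQuery name is ours:

import { useQuery } from '@tanstack/react-query';
import { IVideo } from './VideoApi';

// Assumes @tanstack/react-query is installed and a QueryClientProvider
// wraps the app. Same data shape as useVideos, but caching and refetching
// are handled by the library.
export const useVideosQuery = (url: string) => {
  const { data, isLoading, isError } = useQuery({
    queryKey: ['videos', url],
    queryFn: () => fetch(url).then((response) => response.json()),
  });

  return {
    videos: (data?.testData ?? []) as IVideo[],
    isLoading,
    isError,
  };
};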

Writing your first performance test

Now go ahead and create a new file ./tst/VideoList.perf-test.tsx with the following content:

import React from 'react';
import { fireEvent, RenderAPI } from '@testing-library/react-native';
import { measurePerformance } from 'reassure';
import LandingScreen from '../src/screens/LandingScreen';

jest.mock('../src/VideoApi', () => ({
  useVideos: (_url: string) => {
    const testVideos = require('./TestVideos.json');
    return {
      videos: testVideos.testData,
      isLoading: false,
      isError: false,
    };
  },
}));

test('Sorts the video lists', async () => {
  const scenario = async (screen: RenderAPI) => {
    const byDurationButton = screen.getByText('By duration');
    const byChannelButton = screen.getByText('By channel');
    await screen.findByText('All videos');

    fireEvent.press(byDurationButton);
    fireEvent.press(byChannelButton);
    fireEvent.press(byChannelButton);
  };

  await measurePerformance(<LandingScreen />, { scenario });
});

You may have noticed we named our file with the .perf-test.tsx suffix. Reassure (by default) will match filenames using Jest’s --testMatch option with the value <rootDir>/**/*.perf-test.[jt]s?(x). If you would like to customize or simply change it, you can pass your own glob pattern via the --testMatch option of the reassure measure script.
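For example, if your performance tests lived in a hypothetical perf/ directory instead of tst/, the override could look like this (the glob is ours, purely illustrative):

npx reassure --testMatch "<rootDir>/perf/**/*.perf-test.[jt]s?(x)"

# or if you use Yarn instead:
yarn reassure --testMatch "<rootDir>/perf/**/*.perf-test.[jt]s?(x)"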

Running performance tests

Reassure works by comparing performance measurements: the baseline, measured on the unoptimized version (or before the change), and the current, measured after the modifications. We can get reliable and insightful information only by comparing those two measurements!

As this is our first measurement and we just installed Reassure, go ahead and run the following command:

npx reassure check-stability

# or if you use Yarn instead:
yarn reassure check-stability

This command will run both measurements on the same code base. Ideally, we will get similar results, but as Murphy’s law suggests, sometimes this may not be the case and we will need to investigate the discrepancies before progressing any further.

:warning: WARNING :warning:

Fixing Vega’s default Jest configuration

If the first test run failed with a message similar to:

no native wasm support detected
FAIL tst/Counter.perf-test.tsx
  ● Test suite failed to run

    ReferenceError: WebAssembly is not defined

      at Object.<anonymous> (node_modules/hermes-parser/dist/HermesParserWASM.js:6:2656)
      at Module._compile (node_modules/pirates/lib/index.js:117:24)
      at Object.newLoader [as .js] (node_modules/pirates/lib/index.js:121:7)
      at Object.<anonymous> (node_modules/hermes-parser/dist/HermesParser.js:13:24)
      at Module._compile (node_modules/pirates/lib/index.js:117:24)
      at Object.newLoader [as .js] (node_modules/pirates/lib/index.js:121:7)
      at Object.<anonymous> (node_modules/hermes-parser/dist/index.js:12:20)
      at Module._compile (node_modules/pirates/lib/index.js:117:24)
      at Object.newLoader [as .js] (node_modules/pirates/lib/index.js:121:7)
      at Object.<anonymous> (node_modules/metro-react-native-babel-transformer/src/index.js:13:22)
      at Module._compile (node_modules/pirates/lib/index.js:117:24)
      at Object.newLoader [as .js] (node_modules/pirates/lib/index.js:121:7)
      at Object.<anonymous> (node_modules/react-native/jest/preprocessor.js:31:21)
      at ScriptTransformer._getTransformer (node_modules/@jest/transform/build/ScriptTransformer.js:347:21)
      at ScriptTransformer.transformSource (node_modules/@jest/transform/build/ScriptTransformer.js:427:28)
      at ScriptTransformer._transformAndBuildScript (node_modules/@jest/transform/build/ScriptTransformer.js:569:40)
      at ScriptTransformer.transform (node_modules/@jest/transform/build/ScriptTransformer.js:607:25)

Test Suites: 1 failed, 1 total
Tests: 0 total
Snapshots: 0 total
Time: 0.958 s, estimated 5 s

then you have to update the configuration in jest.config.js. Simply remove the following line from the "transform" section:

"^.+\\.jsx?$": "<rootDir>/node_modules/react-native/jest/preprocessor.js"

It might take some time. We should end up with a similar-looking message in our terminal:

➡️  Significant changes to render duration

➡️  Meaningless changes to render duration
 - Sorts the video lists: 21.6 ms → 21.1 ms (-0.5 ms, -2.3%)  | 4 → 4

➡️  Render count changes

➡️  Added scenarios

➡️  Removed scenarios

✅  Written JSON output file .reassure/output.json
🔗 ~/KeplerReassured/.reassure/output.json

✅  Written output markdown output file .reassure/output.md
🔗 ~/KeplerReassured/.reassure/output.md

After this command finishes, you might also notice that a new directory has appeared: .reassure, containing four files: baseline.perf, current.perf, output.json and output.md. The first two (with the .perf extension) are the measurements mentioned above, while the last two contain the generated performance report.
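Whether you commit these files is up to you; our suggestion (not a Vega requirement) is to keep the generated measurements out of version control:

# .gitignore
.reassure/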

Get your baseline measurements

We would not recommend using check-stability all the time, especially to acquire baseline measurements. It is meant to check whether our metaphorical “thermometer” yields similar readings in the same weather.
Instead, if you want only a baseline measurement, run the dedicated command for it:

npx reassure --baseline

# or if you use Yarn instead:
yarn reassure --baseline

You will receive a similar looking message in the terminal:

 PASS  tst/Counter.perf-test.tsx
  ✓ Count increments on press (1133 ms)

Test Suites: 1 passed, 1 total
Tests:       1 passed, 1 total
Snapshots:   0 total
Time:        3.691 s, estimated 4 s
Ran all test suites.


✅  Written Baseline performance measurements to .reassure/baseline.perf
🔗 ~/KeplerReassured/.reassure/baseline.perf

Hint: You can now run 'reassure' to measure & compare performance against modified code.

Let’s make a more ‘drastic’ change

Let’s modify the code a little to see how Reassure not only helps you write more performant code and keep the number of re-renders low, but also warns you (and possibly the reviewers of your PR) when a change introduces a performance penalty.

The simplest way to demonstrate this is to swap the FlatList for a ScrollView in the VideoList component, which is a really, really bad idea (a ScrollView renders every child eagerly, while FlatList virtualizes off-screen items), so please don’t do this at home:

// Remember to add ScrollView to the react-native import at the top of VideoList.tsx
const VideoList = ({ title, videos }: IProps) => {
  return (
    <>
      <Text style={styles.labelText}>{title}</Text>
      <ScrollView
        style={styles.flatList}
        horizontal
      >
        {videos.map((item) => renderVideoCard({ item }))}
      </ScrollView>
    </>
  );
};

When we ask Reassure to get a new measurement by running:

npx reassure

# or if you use Yarn instead:
yarn reassure

we should see something like this:

➡️  Significant changes to render duration
 - Sorts the video lists: 21.6 ms → 35.7 ms (+14.1 ms, +65.3%) 🔴🔴 | 4 → 4 

➡️  Meaningless changes to render duration

➡️  Render count changes

➡️  Added scenarios

➡️  Removed scenarios

✅  Written JSON output file .reassure/output.json
🔗 ~/KeplerReassured/.reassure/output.json

✅  Written output markdown output file .reassure/output.md
🔗 ~/KeplerReassured/.reassure/output.md

As you can see, this command compares its measurements against the previously saved baseline. In most cases, the baseline measurement is captured once, before introducing a change or on a different branch, while the current measurements are re-run as the code changes. You can also push it further: integrate it with Danger.js and include Reassure’s report in the PR for your reviewers, but that’s a story for another time.
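As a rough sketch of what that flow could look like in a CI job (the branch name and exact steps are assumptions, not a prescribed setup), one common pattern is to measure the baseline on the target branch and then the current code:

git switch main            # or whatever your target branch is (assumption)
npx reassure --baseline    # capture the baseline measurements
git switch -               # back to your feature branch
npx reassure               # measure the current code and compare against the baseline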

Summary

That’s all, folks! In this introductory article, we described how Reassure works, how to set it up, and how tracking rendering telemetry is beneficial for spotting performance issues early on.

In addition, if you are already using react-native-testing-library (which you should!), then writing performance tests should be a breeze!

Thanks for reading!

~ Mario
Software Engineer @ Callstack
