Consistently Improve App Performance With DMAIC and Reassure

In short

This article discusses why you should continuously monitor and improve the performance of React Native apps to ensure a seamless user experience. It introduces DMAIC, a structured methodology consisting of the Define, Measure, Analyze, Improve, and Control phases, and highlights tools like Reassure for performance regression testing, emphasizing the value of proactive performance optimization in app development.

Originally published in February 2023, updated in March 2024.

Use the DMAIC process to prevent regressions in app performance

Problem: Every once in a while, after fixing a performance issue, the app gets slow again.

Customers have very little patience for slow apps. There is so much competition on the market that they can quickly switch to another app. According to an Unbounce report, nearly 70% of consumers admit that page speed influences their willingness to buy. Walmart and Amazon are good examples here—both companies saw revenue increase by up to 1% for every 100 milliseconds of load-time improvement. The performance of websites and mobile apps can thus noticeably impact business results.

It's becoming increasingly important to not only fix performance issues but also make sure they don't happen again. You want your React Native app to perform well and fast at all times.

Solution: Use the DMAIC methodology to help you solve performance issues consistently.

From the technical perspective, we should begin by avoiding guesswork and base all decisions on data. Poor assumptions lead to false results. We should also remember that improving performance is a process, so it's impossible to fix everything at once. Small steps can provide big results.

One of the most effective ways of doing that is using the DMAIC methodology, which stands for Define, Measure, Analyze, Improve, and Control. It's a solid data-driven approach that can be used to improve React Native apps. Let's see how we can do it!


Define

In this phase, we should focus on defining the problem. It's important to listen to the customer's voice – their expectations and feedback. This helps us better understand the needs, preferences, and problems they are facing.

Next, it is very important to measure it somehow. Let's say the customer wants a fast checkout. After analyzing the components, we know that to achieve this we need a swift checkout process, a short wait time, and smooth animations and transitions. All of these points can be decomposed into CTQs (Critical-to-Quality characteristics) that are measurable and can be tracked. For example, a short wait time can be decomposed into a quick server response and a low number of server errors.

Another handy tool is analyzing common user paths. With good tracking in place, we can understand which parts of the app are used the most.

At this stage, it's very important to set priorities. The outcome should be a defined order in which we will optimize things. Any tools and techniques for prioritizing will help here.

Ultimately, we need to define where we want to go – we should define our goals and what exactly we want to achieve. Keep in mind that it all should be measurable! It's a good practice to put the goals in the project scope.


Measure

Since we already know where we want to go, it's time to assess the starting point. It's all about collecting as much data as possible to get the actual picture of the problem. We need to ensure the measurement process is precise. It's really helpful to create a data collection plan and engage the development team to build the metrics. After that, it's time to do some profiling.

When profiling in React Native, the main question is whether to do this on JavaScript or the native side. It heavily depends on the architecture of the app, but most of the time it's a mix of both.

One of the most popular tools is React Profiler, which allows us to wrap a component to measure its render time and the number of renders. It's very helpful because many performance issues come from unnecessary re-renders.
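A minimal usage sketch looks like this (the `Checkout` component and log format are placeholders for illustration; the `onRender` callback signature comes from React's Profiler API):

```typescript
import React, {Profiler} from 'react';
import {Text, View} from 'react-native';

// Placeholder component standing in for a real screen.
function Checkout() {
  return (
    <View>
      <Text>Checkout</Text>
    </View>
  );
}

// Called by React on every committed render of the profiled subtree.
function onRender(
  id: string,
  phase: 'mount' | 'update' | 'nested-update',
  actualDuration: number,
) {
  // Logs e.g. "Checkout (mount) rendered in 12.4 ms"
  console.log(`${id} (${phase}) rendered in ${actualDuration.toFixed(1)} ms`);
}

export function ProfiledCheckout() {
  return (
    <Profiler id="Checkout" onRender={onRender}>
      <Checkout />
    </Profiler>
  );
}
```

Watching how often `onRender` fires, and with which `phase`, quickly reveals components that re-render without any visible change.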

The second tool is a library created by Shopify – react-native-performance. It allows you to place some markers in the code and measure the execution time. There is also a pretty nice Flipper plugin that helps to visualize the output:

Flipper plugin visualizing output
Source: https://shopify.github.io/react-native-performance/docs/guides/flipper-react-native-performance
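The marker-and-measure idea can be sketched with the standard `performance.mark`/`performance.measure` API. This is an illustration of the concept only, not the Shopify library's exact interface, and `submitOrder` is a stand-in for whatever work you want to time:

```typescript
// Placeholder for the operation being measured.
async function submitOrder(): Promise<void> {
  await new Promise((resolve) => setTimeout(resolve, 50));
}

async function measureCheckout(): Promise<number> {
  // Drop markers around the code path of interest...
  performance.mark('checkout-start');
  await submitOrder();
  performance.mark('checkout-end');

  // ...then compute the elapsed time between them.
  performance.measure('checkout', 'checkout-start', 'checkout-end');
  const [entry] = performance.getEntriesByName('checkout');
  console.log(`checkout took ${entry.duration.toFixed(1)} ms`);
  return entry.duration;
}
```

The same pattern works in Node and modern browsers; the Shopify library wraps this idea in React-friendly APIs, so check its docs for the exact interface.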

Speaking of Flipper, it has more plugins that help us measure app performance and speed up the development process. We can use, e.g., the React Native Performance Monitor plugin for a Lighthouse-like experience or the React Native Performance Lists Profiler plugin.

On the native side, the most common method is using the native IDEs – Xcode and Android Studio. Both offer profiling tools that provide plenty of useful insights to analyze and draw conclusions from.

The most important aspect of this phase is measurement variation. Due to different environments, we have to be very careful when profiling. Even if the app is run on the same device, there might be some external factors that affect performance measurements. That’s why we should base all the measurements on release builds.


Analyze

The goal of this phase is to find the root cause of our problem. It's a good idea to start with a list of things that could potentially cause it. A little brainstorming with the team is really helpful here.

One of the most popular tools for pinpointing a problem is the cause and effect diagram, also known as a fishbone diagram. It looks like a fish, and we draw it from right to left. We start from the head, which should contain the problem statement – at this stage, we should already have it from the Define phase. Then, we identify all the potential major causes of the problem and assign them to the fish bones. After that, we assign all the potential causes to each major cause. Many things can impact performance, and the list can get really long, so it's important to narrow it down: outline the most important factors and focus on them.

Finally, it’s time to test the hypothesis. For example, if the main problem is low FPS, and the potential major cause is related to list rendering, we can think of some improvements in the area of images in the list items. We need to design a test that will help us accept or reject the hypothesis - it will probably be some kind of proof of concept. Next, we interpret the results and decide if it was improved or not. Then we make a final decision.

Cause and effect diagram example


Improve

Now that we know what our goal is and how we want to achieve it, it's time to make some improvements. This is the phase where optimization techniques start to make sense.

Before starting, it's a good idea to hold another brainstorming session and identify potential solutions. Depending on the root cause, there might be many of them. Going back to the example with images in list items, we can think about implementing proper image caching and reducing unnecessary re-renders.

After outlining the solutions, it's time to pick the best one. Sometimes the solution that gives the best results might be extremely costly, e.g., when it requires architectural changes.

Then it's time to implement the solution, test it properly, and we're done!


Control

The last step is the control phase. We need to make sure that everything works well now. Performance will degrade if it is not kept under control. People tend to blame devices, the technology used, or even users when performance is bad. So what do we need to do to keep our performance at a high level?

We need to make sure that we have a control plan. We can use some of our work from the previous phases to make it. We should identify focal points, some measurement characteristics, acceptable ranges for indicators, and testing frequency. Additionally, it is a good practice to write down some procedures and what to do if we spot issues.

The most important aspect of the control phase is monitoring regressions. Until recently it was quite difficult to do that in React Native, but now we have plenty of options to improve our monitoring.

Real-time user monitoring

One way to keep the performance improvements we introduce in our apps is through real-time monitoring tools such as Firebase Performance Monitoring, a service that gives us insights into performance issues in production, or Sentry Performance Monitoring, which tracks application performance, collects metrics like throughput and latency, and displays the impact of errors across multiple services.

It's a great addition for any app builder that wants insights, based on real user data, into how performance is distributed across all the devices that install their app.

Testing regressions as a part of the development process

Another way to keep performance regressions under control is through automated testing. Profiling, measuring, and running the app on various devices is manual and time-consuming, which is why developers tend to avoid it. However, it's all too easy to unintentionally introduce performance regressions that only get caught during QA, or worse, by your users. Thankfully, there is a way to write automated performance regression tests in JavaScript for React and React Native.

Reassure allows you to automate React Native app performance regression testing on CI or a local machine. In the same way you write integration and unit tests that automatically verify your app is still working correctly, you can write performance tests that verify your app is still performing well. You can think of it as a React performance testing library. Reassure is designed to reuse as much of your React Native Testing Library tests and setup as possible, as it's built by the same maintainers and creators.

It works by measuring certain characteristics—render duration and render count—of the testing scenario you provide and comparing them to a stable version measured beforehand. It repeats the scenario multiple times to reduce the impact of random variations in render times caused by the runtime environment. Then it applies statistical analysis to figure out whether the code changes are statistically significant. As a result, it generates a human-readable report summarizing the results and displays it on CI or as a comment to your pull request.

The simplest test you can write would look like this:
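A minimal sketch of such a test (assuming Reassure v1's `measureRenders` API; in older versions the function was called `measurePerformance`, and `Component` is a placeholder for your own component):

```typescript
import React from 'react';
import {measureRenders} from 'reassure';

// Placeholder import for the component you want to measure.
import {Component} from './Component';

test('mounts Component', async () => {
  // Renders the component repeatedly and records render duration and count.
  await measureRenders(<Component />);
});
```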

This test will measure the render times of the Component during mounting and the resulting sync effects. Let's take a look at a more complex example, though. Here we have a component with a counter and a slow list component:
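A sketch of such a component might look like this (`SlowList` stands in for any expensive, prop-less list component; the names match the `AsyncComponent` and `SlowList` discussed below, but the implementation details are illustrative):

```typescript
import React, {useState} from 'react';
import {Pressable, Text, View} from 'react-native';

// Hypothetical expensive list component defined elsewhere.
import {SlowList} from './SlowList';

export function AsyncComponent() {
  const [count, setCount] = useState(0);

  return (
    <View>
      <Pressable onPress={() => setCount((c) => c + 1)}>
        <Text>Action</Text>
      </Pressable>
      <Text>Count: {count}</Text>
      {/* Re-rendered on every count change, even though its props never change */}
      <SlowList />
    </View>
  );
}
```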

And the performance test looks as follows:
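A sketch of the scenario-based test, assuming the same `measureRenders` API and React Native Testing Library's `screen`/`fireEvent` helpers (button label `Action` matches the sketch above):

```typescript
import React from 'react';
import {measureRenders} from 'reassure';
import {fireEvent, screen} from '@testing-library/react-native';

import {AsyncComponent} from './AsyncComponent';

test('AsyncComponent handles counter presses', async () => {
  // The scenario is replayed on each measurement run; Reassure records
  // how many renders it triggers and how long they take.
  const scenario = async () => {
    const button = screen.getByText('Action');
    fireEvent.press(button);
    fireEvent.press(button);
    fireEvent.press(button);
  };

  await measureRenders(<AsyncComponent />, {scenario});
});
```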

When run through its CLI, Reassure will generate a performance comparison report. It's important to note that to get a diff of measurements, we need to run it twice: the first time with a `--baseline` flag, which collects the measurements under the `.reassure/` directory.
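Assuming Reassure is installed in the project, the two runs look like this:

```shell
# First run: record baseline measurements into .reassure/
yarn reassure --baseline

# ...change the code, then run again to compare against the baseline:
yarn reassure
```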

Performance comparison report created by Reassure

After running this command, we can start optimizing our code and see how it affects the performance of our component. Normally, we would keep the baseline measurement and wait for regressions to be caught and reported by Reassure. In this case, we'll skip that step and jump straight into optimizing, because we've just noticed a nice opportunity to do so. And since we have our baseline measurement for reference, we can verify whether the improvement is real or only subjective.

The possibility we noticed is that the `<SlowList/>` component can be memoized, as it doesn't depend on any external variables. We can leverage `useMemo` for that case:
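One way to apply that memoization, sketched against the hypothetical component above (wrapping `SlowList` in `React.memo` would achieve a similar effect):

```typescript
import React, {useMemo, useState} from 'react';
import {Pressable, Text, View} from 'react-native';

// Hypothetical expensive list component defined elsewhere.
import {SlowList} from './SlowList';

export function AsyncComponent() {
  const [count, setCount] = useState(0);

  // SlowList takes no props, so the element can be created once and
  // reused across the re-renders triggered by the counter.
  const slowList = useMemo(() => <SlowList />, []);

  return (
    <View>
      <Pressable onPress={() => setCount((c) => c + 1)}>
        <Text>Action</Text>
      </Pressable>
      <Text>Count: {count}</Text>
      {slowList}
    </View>
  );
}
```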

Once we're done, we can run Reassure a second time, now without the `--baseline` flag.

Performance comparison report created with Reassure

Now that Reassure has two test runs to compare—the current and the baseline—it can prepare a performance comparison report. As you can see, thanks to applying memoization to the `SlowList` component rendered by `AsyncComponent`, the render duration went from 78.4 ms to 26.3 ms, which is roughly a 66% improvement.

Test results are assigned to certain categories:

  • Significant Changes To Render Duration shows a test scenario where the change is statistically significant and should be looked into as it marks a potential performance loss/improvement.
  • Meaningless Changes To Render Duration shows test scenarios where the change is not statistically significant.
  • Changes To Render Count shows test scenarios where the render count did change.
  • Added Scenarios show test scenarios that do not exist in the baseline measurements.
  • Removed Scenarios show test scenarios that do not exist in the current measurements.

When connected with Danger JS, Reassure can output this report as a GitHub comment, which helps catch the regressions during code review.

Report generated by Reassure with Danger JS

You can discover more use cases and examples in the docs.

Benefits of the DMAIC approach

Advantage: A well-structured and organized optimization process.

When working on an app, regardless of its size, it’s important to have a clear path for reaching our goals. The main benefit of using DMAIC when optimizing React Native applications is a structured and direct approach. Without it, it may be difficult to verify what works (and why). Sometimes our experience and intuition are just enough. But that’s not always the case.

Having a process like this allows us to focus on problem-solving and constantly increase productivity. Thanks to the DMAIC approach, performance optimization becomes part of your normal development workflow, making your app closer to performant by default and catching performance issues before they ever hit your users.

No software is flawless. Bugs and performance issues will happen even if you're the most experienced developer on the team. But we can take action to mitigate those risks by using automated tools like Sentry, Firebase, or Reassure. Use them in your project and enjoy the additional confidence they bring, and the improved UX they bring to your users in turn. If you're looking for a tech partner who can improve the performance of your application in a well-thought-out process, contact us.

