Vega App Performance Certification Guidebook - App Latency

Introduction

App Latency refers to how quickly an app starts up after the user selects it for launch and becomes ready for interaction. There are 2 main App Latency KPIs: Time to First Frame (TTFF) and Time to Fully Drawn (TTFD), also called Ready to Use (RTU). This KPI is important from a user experience perspective because app launch gives the user their first impression of app performance. In this guide we outline how developers can measure the App Latency KPIs using the tools in the Vega SDK, investigate performance KPI failures, and self-certify their apps.


App Latency KPI Targets

App Latency is divided into 2 measurements, Time to First Frame (TTFF) and Time to Fully Drawn (TTFD). Below are the KPI targets defined for performance certification on Vega devices. Each KPI is measured for 2 launch use cases:

  • Cool Start (also known as a cold start): the app is launched fresh and is not already running in the background, typically after a device restart or after the app has previously been terminated.
  • Warm Start: the app being launched is already running in the background.

KPI Targets
  • Cool TTFF: 1.5s
  • Warm TTFF: 0.5s
  • Cool TTFD: 8s
  • Warm TTFD: 1.5s

:light_bulb: For more details on the App Latency KPI Targets for Vega apps refer to the documentation here.


Measuring App Latency

Developers can use the Vega Studio performance tools provided in the Vega SDK to measure the App Latency KPIs for app certification. The performance tools can be used from the command line or through the Vega Studio plugin for the VS Code IDE (also installed as part of the SDK installation). Use the CLI tool to run performance certification tests on the app; the VS Code extension is useful when you have the app's source project set up and want to investigate its performance.

The VS Code plugin cannot be used to run performance tests on an app that is already installed on the device.

Pre-Requisites:

  • Vega device: Run the performance certification testing on a real Vega device rather than the simulator. Set up the device in network conditions similar to a production environment, with the correct backend configuration.
  • App Under Test: The app to be tested can be sideloaded onto the device prior to running the tests, using the VDA command line tools or the “Vega device” CLI tools in the SDK.
  • Vega performance CLI tools: These tools are installed as part of the Vega SDK, so make sure that all the relevant tools are installed during SDK installation. This can be confirmed by running “kepler platform doctor”.
  • reportFullyDrawn(): Integrate the reportFullyDrawn() marker into your app; it is used to measure the TTFD/RTU KPI. The marker should be set when your app finishes loading its main page and is ready for user interaction (see the sketch after this list). Refer to the documentation here for more details on how to integrate reportFullyDrawn().
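
For illustration, below is a minimal sketch of how the marker could be wired into a React Native for Vega (TypeScript) screen. The module path and the useReportFullyDrawn() hook name are assumptions made for this sketch; confirm the exact import and API against the reportFullyDrawn() documentation linked above.

// MainScreen.tsx - minimal sketch; the import path and hook name below are
// assumptions, verify them against the Vega SDK documentation.
import React, { useEffect } from 'react';
import { Text, View } from 'react-native';
import { useReportFullyDrawn } from '@amazon-devices/react-native-kepler';

export const MainScreen = ({ titles }: { titles: string[] }) => {
  const reportFullyDrawn = useReportFullyDrawn();

  useEffect(() => {
    // Fire the marker only once the main page content has loaded and the
    // screen is ready for user interaction; this is the point the perf
    // tools use for the TTFD/RTU measurement.
    if (titles.length > 0) {
      reportFullyDrawn();
    }
  }, [titles, reportFullyDrawn]);

  return (
    <View>
      {titles.map((title) => (
        <Text key={title}>{title}</Text>
      ))}
    </View>
  );
};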

How to Measure:
Use the Vega SDK perf CLI tool to run performance measurements on your app. You can use the kpi-visualizer command in the perf CLI to measure the performance KPIs.

bash-3.2$ perf kpi-visualizer --help

NAME:

perf kpi-visualizer

DESCRIPTION:

KPI Visualizer tool.

SYNOPSYS:

perf kpi-visualizer [parameters]

Use 'perf command --help' for information of the specific command.

PARAMETERS:

--iterations ITERATIONS
Set number of test iterations, this overrides .conf setting
--record-cpu-profiling
Allow recording for CPU profiling
--sourcemap-file-path SOURCEMAP_FILE_PATH
Specify the path to source map file
--grpc-port <port>
Port on which grpc server is started.
--certification
Configures iterations to 30 and aggregation percentile to 90 in certification mode.
--expected-video-fps EXPECTED_VIDEO_FPS
Set the expected frames per second by the application under test.
--kpi KPI

Scenario to measure application performance KPI (Optional).
If not specified, the cool-start-latency scenario will be
picked up which measures the time to first frame generated
(TTFF) and time to first frame fully drawn on display (TTFD).

Following scenarios are supported:
1. cool-start-latency: Measure the latency for the first
frame being generated and displayed.
2. ui-fluidity: Measure UI fluidity.
3. warm-start-latency: Measure the latency for the first
frame being generated and displayed after the application
has been warmed up and put in foreground.
4. foreground-memory: Measure memory used by the application
when put in foreground
5. background-memory: Measure memory used by the application
when put in background
6. video-fluidity: Measure video playback fluidity. A test
scenario (--test-scenario) to start video playback from
application is required for the test case.

--test-scenario TEST_SCENARIO
Python script defining UI test scenario. Please use generate-test-template command to generate the template for the test scenario.
--monitor-processes MONITOR_PROCESSES [MONITOR_PROCESSES ...]
Specify any dependent services to monitor. For example: webview.renderer_service
--ignore-trace-loss
Disables trace loss checks.
--help, -h
Show this help message.

We will use the “cool-start-latency” scenario listed above to measure TTFF and TTFD for the cool start use case. Also configure the iterations to 50 for certification testing.

perf kpi-visualizer --kpi cool-start-latency --iterations 50 --app-name com.amazondeveloper.keplervideoapp.main

NOTE: replace the “--app-name” value in the above command with the app ID/package ID of the application you want to test.

At the end of the test run, the tool provides a summary of the results and the performance measurement values, which indicates whether the application is within the latency targets.

Warm Start:
You can use the same method as above, specifying “warm-start-latency” instead of “cool-start-latency”, to run the measurements for the warm start App Latency KPIs.

perf kpi-visualizer --kpi warm-start-latency --iterations 50 --app-name com.amazondeveloper.keplervideoapp.main

:light_bulb: For more details on the KPI measurements using the perf CLI for Vega apps, refer to the documentation here.


Analysing Results and Best Practices for Improving App Startup Performance

  • Vega Studio VS Code extension: You can use the Vega Studio VS Code extension to run the KPI measurements while monitoring the app. Refer to the details here to see how to use the VS Code plugin to run performance tests.
  • Monitor app performance: Use the Vega Studio extension in VS Code to monitor and record app performance. You can investigate the app launch scenario and CPU utilization while the performance measurements are running. You can find more information on how to do this in the tech docs here.
  • Splash screen: To improve the TTFF KPI you can make use of the SplashScreenManager API in the Vega SDK. This allows you to put up a splash screen image for your app when the launch is triggered, which eases the requirement on your application to render its first frame, since the splash screen is drawn by Vega OS itself at launch. Your app can then remove the splash screen when it is ready to render its first screen (see the sketch after this list). More details on the SplashScreenManager API can be found here.
  • Thread analysis: You can investigate your app's threads and their states to identify performance issues. Information on how to use the Vega Studio tools for investigating thread states can be found here.
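
As a rough illustration of the splash screen hand-off described above, the sketch below dismisses the splash screen once the app has loaded enough data to render its first screen. The import path and the hide() method name are hypothetical placeholders, not the confirmed SplashScreenManager interface; refer to the SplashScreenManager documentation linked above for the actual API.

// App.tsx - hypothetical sketch of the splash screen hand-off. The module
// path and method name below are placeholders for illustration; use the
// real SplashScreenManager API from the Vega SDK documentation.
import React, { useEffect, useState } from 'react';
import { Text, View } from 'react-native';
import { SplashScreenManager } from '@amazon-devices/kepler-splash-screen'; // assumed path

// Placeholder for the app's own minimal first-screen data loading.
const loadInitialContent = async (): Promise<void> => {
  // fetch only what is needed to render the first screen
};

export const App = () => {
  const [ready, setReady] = useState(false);

  useEffect(() => {
    loadInitialContent().then(() => {
      setReady(true);
      // Remove the splash screen that Vega OS put up at launch, now that
      // the app is ready to render its first frame of real content.
      SplashScreenManager.hide(); // assumed method name
    });
  }, []);

  if (!ready) {
    // The OS splash screen is still covering the app at this point.
    return null;
  }

  return (
    <View>
      <Text>Home</Text>
    </View>
  );
};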

:light_bulb: More details on best practices for performance in Vega apps can be found here.