Using the User Timing API to Record Custom Performance Metrics

Intro

The User Timing API, an extension of the Performance API, provides read-only access to a high-resolution timer, allowing you to measure the time between any two “marks” in an application via its measure() method.

A “mark” is basically just a timestamp, but a high-resolution one, meaning it is very precise.

A “measure” is the calculation of the difference between any two marks.

Marks and measures receive names, but these are only for us humans, so we can easily refer to them.

Browser support for the User Timing API is quite good: it works in all modern browsers, including IE10+ and Safari/iOS Safari 11+.
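
If you need to run in environments where support is uncertain, a quick feature check (a minimal sketch; the mark name is hypothetical) avoids runtime errors:

// Feature-detect the User Timing API before relying on it.
if ('performance' in window &&
    typeof performance.mark === 'function' &&
    typeof performance.measure === 'function') {
  performance.mark('app-init'); // 'app-init' is a hypothetical mark name
}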

A Simple Example

A simple example would be:

performance.mark('mark-1');
/* Stuff happens... */
performance.mark('mark-2');
const timeElapsed = performance.measure(
    'measurement-1', // name of measure (can be any text string, for us humans only)
    'mark-1', // name of mark to measure FROM
    'mark-2' // name of mark to measure TO
);
console.log(timeElapsed.duration); // time elapsed between `mark-1` and `mark-2`

The above example creates one mark, does some stuff, then creates another mark.

After the second mark is created, a measure is performed, which calculates the time between the two marks; the result is available via the measure object’s duration property. (Note that older implementations of performance.measure() returned undefined rather than the measure object; there, you can retrieve the entry via performance.getEntriesByName('measurement-1') instead.)

Note that the measure can happen at any time after the second mark is created, including after the page has finished loading.

This means that numerous marks could be created during the page load, then all measurements could be calculated after the page load has completed.
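
For instance (a minimal sketch, with hypothetical mark names), marks could be set as things happen during the load, with all measuring deferred until the window load event:

performance.mark('render-start'); // set early during load
/* ...page continues loading... */
performance.mark('render-end');   // set later during load

window.addEventListener('load', () => {
  // All measuring is deferred until the page has finished loading.
  const renderMeasure = performance.measure('render-time', 'render-start', 'render-end');
  console.log(`render-time: ${renderMeasure.duration} ms`);
});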

A Specific Example

A more specific example could be to measure the time between a page starting to load in the browser and a client-side-rendered product grid starting to load into that page:

  1. As close as possible to the top of the HTML, create a “start” mark:
    performance.mark('page-load-start');
  2. When the product grid is added to the DOM, create an “end” mark:
    performance.mark('product-grid-added');
  3. Then, at any point after the second mark is created, perform a “measure” between those two marks:
    const gridMeasure = performance.measure(
        'product-grid-loading', // name of measurement
        'page-load-start',      // name of "start" mark
        'product-grid-added'    // name of "end" mark
    );
    
  4. The duration of that measure could then be output to your browser console, sent to some API that collects metrics (such as Google Analytics), collected into your synthetics tooling, etc., as sketched below.
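
As a sketch of that last step, assuming a hypothetical /metrics collection endpoint, the duration could be sent off the page with navigator.sendBeacon():

// Send the measure's duration to a (hypothetical) metrics collector.
navigator.sendBeacon(
    '/metrics',
    JSON.stringify({ name: gridMeasure.name, duration: gridMeasure.duration })
);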

A local version of the above example can be seen via Chrome’s DevTools, logging the page-start and grid-load events to the console. (Note that the label is wrong in these screenshots, showing “ms” when it should be “seconds”.)

Here is the completed output of metrics, showing each mark and each measure:
[Screenshot: user-timing-2-console-output-2]

In the above example, TTFB happened at 0.42 seconds and the grid loaded 7.17 seconds after TTFB.

Advanced Use Cases

Monitoring marks and measures

The recommended method for monitoring marks and measures is to use a PerformanceObserver, such as:

function observeUserTimingAPI(list, observer) {
  list.getEntries().forEach((entry) => {
    if (entry.entryType === 'mark') {
      console.log(`${entry.name}: ${entry.startTime} ms`);
    }
    if (entry.entryType === 'measure') {
      console.log(`${entry.name} duration: ${entry.duration} ms`);
    }
  });
}
const perfObserver = new PerformanceObserver(observeUserTimingAPI);
perfObserver.observe({ entryTypes: ['measure', 'mark'] });

Note that the above merely sends the output to the Console, but it could just as easily send it to Splunk, GA, an API that stores it in a database, etc.
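
Also note that with entryTypes, entries recorded before the observer was created are not delivered. To replay earlier entries as well, a separate observer can watch each type individually with the buffered flag (a sketch; an observer must use either entryTypes or type, not both):

// Replays entries recorded before the observer existed.
const bufferedObserver = new PerformanceObserver(observeUserTimingAPI);
bufferedObserver.observe({ type: 'mark', buffered: true });
bufferedObserver.observe({ type: 'measure', buffered: true });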

Dynamically added content

When monitoring content that is added dynamically, you need to either add the mark to the JS that fetches and appends that content, or create a MutationObserver to “watch” for the content to be added to the page, such as:

const targetNode = document.querySelector('.product-grid');
const config = { attributes: false, childList: true, subtree: false };
const callback = (mutationList, observer) => {
  for (const mutation of mutationList) {
    if (mutation.type === 'childList') {
      // Stop observing as soon as the first children arrive.
      observer.disconnect();
      performance.mark('product-grid-add');
      const gridMeasure = performance.measure(
          'product-grid-load',
          'page-load-start',
          'product-grid-add'
      );
      console.log(`The product grid loaded ${gridMeasure.duration} ms after the page started loading in the browser.`);
    }
  }
};
const mutationObserver = new MutationObserver(callback);
mutationObserver.observe(targetNode, config);

First, identify the element to observe for mutations (a parent element); in the code above, that is targetNode.

Next, we create a config for the Observer; in the above case, it watches only for child nodes to change, but an Observer can also watch for attribute changes, or for changes anywhere in the subtree of the target element.

A callback function is then created which checks all mutations to see if they match our criteria, then creates a mark and a measure, and gets the duration before reporting it to the Console.

XHR Requests

It is also possible to set marks and measures for XHR requests, such as:

var reqCnt = 0;
var url = '/some/endpoint'; // hypothetical endpoint; substitute your own
var myReq = new XMLHttpRequest();
myReq.open('GET', url, true);
myReq.onload = function (e) {
    // Mark the end of the request, then measure from the start mark.
    window.performance.mark('mark_end_xhr');
    reqCnt++;
    window.performance.measure('measure_xhr_' + reqCnt, 'mark_start_xhr', 'mark_end_xhr');
    do_something(myReq.responseText); // do_something() is your own response handler
};
window.performance.mark('mark_start_xhr');
myReq.send();

By setting a global variable (or object parameter, or localStorage value, etc.), and setting a mark just before sending the XHR request, we can then set another mark and perform the measure within the success callback, storing or sending that value as needed.
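
A modern variant of the same pattern (a sketch using fetch, with unique mark names so concurrent requests do not overwrite one another's marks):

let fetchCount = 0;

async function timedFetch(url) {
  const id = ++fetchCount; // unique suffix per request
  performance.mark(`fetch-start-${id}`);
  const response = await fetch(url);
  const body = await response.text();
  performance.mark(`fetch-end-${id}`);
  const fetchMeasure = performance.measure(
      `fetch-${id}`,
      `fetch-start-${id}`,
      `fetch-end-${id}`
  );
  console.log(`${url} took ${fetchMeasure.duration} ms`);
  return body;
}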

Retrieving Marks and/or Measures

It is possible to simply ignore marks and measures as they are created, then fetch and do something with them when it is convenient or needed.

  • performance.getEntries() returns all entries in the performance timeline; check each entry’s entryType for either “mark” or “measure”.
  • performance.getEntriesByName(name, entryType) returns all entries in the performance timeline with the specified name; the optional entryType further filters the results to only “mark” or “measure” entries.
  • performance.getEntriesByType(entryType) returns all entries in the performance timeline with the specified entryType (so “measure” retrieves all, and only, measure entries).
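
For example, to log the duration of every measure recorded so far:

// Retrieve all "measure" entries and log their durations.
performance.getEntriesByType('measure').forEach((entry) => {
  console.log(`${entry.name}: ${entry.duration} ms`);
});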

Removing Marks and/or Measures

To keep the browser environment clean and help preserve resources, you may want to remove your marks and/or measures when you are done with them.

performance.clearMarks() and performance.clearMeasures() each delete all of their respective entries.

Both of the above methods also take an optional name parameter, which removes all entries with that name from the performance timeline.

A combination of a good naming convention, the above “retrieve” methods, a loop, and the respective “clear” method should allow you to remove exactly the marks or measures you no longer need, as sketched below.
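
For example, assuming a hypothetical “grid-” naming prefix for a group of marks:

// Remove only the marks that use the (hypothetical) "grid-" prefix.
performance.getEntriesByType('mark')
  .filter((entry) => entry.name.startsWith('grid-'))
  .forEach((entry) => performance.clearMarks(entry.name));

// Remove a single named measure.
performance.clearMeasures('product-grid-loading');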

KeyCDN offers several other common User Timing “recipes” such as measuring blocking CSS or JS, hero image loads, and custom font loads.

How to Use

All of the above is great, but how do you actually put this into use and then see the results? Well, that depends on what you need.

Local Workspace

If you just want to test something out yourself, you can implement marks and measures right in Chrome DevTools by using their Overrides feature.

To learn how Overrides work, you can read this article or watch this video.

Once you have this set up, you can edit any website’s HTML or JS to add marks and get measures.

Sharing with others

The above works fine if you just need to test something yourself, maybe as a rough proof of concept. But what if you want to show your work to someone else?

One option would be to do the above and capture a video of your personal browser.

Otherwise, you would need access to the actual HTML files on a server that you and others can access.

Using the Metrics

In the above examples, we have used simple console.log calls to view the resulting metrics. But in reality, we would want to see these “in the wild”, via our synthetic and RUM systems. I have used these with both Splunk and mPulse, but they should be available in any comparable tool.

Splunk

This couldn’t be easier in Splunk: Any performance measure that you record is automatically added to the Run and is available as a KPI for RBCs, Dashboards and Reports.

Note that Splunk only tracks User Timing until the Fully Loaded Time. Any actions after that, such as clicks, swipes, form submissions, etc., will not be recorded by Splunk.

However, RBCs can be written with Steps that include clicking something, waiting for a page element to be visible in the page, filling in form elements, submitting forms, etc., and the RBC can be told to wait for all of those steps to be completed.

Here you see a list of all of the User Timing metrics being collected for this RBC:
[Screenshot: user-timing-3-splunk-ui]
Here you see the metrics available within the RBC:
[Screenshot: user-timing-4-splunk-metrics]
And here is the metric being added to a Custom Report:
[Screenshot: user-timing-5-splunk-report]

mPulse

mPulse also automatically collects and stores custom metrics during a page load, then sends that data to their system as part of the Boomerang beacon.

In order to view that data, however, you must first set up the custom metric in the App in mPulse. To do so, follow these instructions:
https://techdocs.akamai.com/mpulse/docs/use-metrics#how-to

And for details specific to User Timing metrics, follow these instructions:
https://techdocs.akamai.com/mpulse/docs/use-metrics#define-a-timer-with-user-timing

Note that mPulse only allows up to 10 custom metrics per page, and mPulse stops collecting data upon page load, when it sends all beacon info to the system. You would need to manually send any data collected after the beacon is sent. For more info on manually sending beacon data, follow these instructions:
https://techdocs.akamai.com/mpulse-boomerang/docs/how-mpulse-xhr-and-spa-monitoring-works

Happy testing,
Atg
