The overwhelming evidence indicates a website's performance (speed) correlates directly to its success, across industries and business metrics. With such a clear correlation (and even proven causation), it is important to monitor how your website performs. So, how fast is your website?
First, it is important to understand that no single number will answer that question. Even if you have defined exactly what you are trying to measure on your website, performance will vary widely across your user base and across the different pages on your site.
Here, I discuss active testing techniques that have been traditionally used and then explain newer technologies that permit the browser to report back to the server accurate timing data.
Traditionally, monitoring tools are used in active testing to measure the performance of a website, and the results end up being plotted on a time-series chart. You may choose specific pages to measure and geographic locations from which to measure, and then the test machines load the pages periodically from the various locations and the performance gets reported. The results are usually quite consistent and provide a great baseline for identifying variations, but the measurements are not representative of actual users.
The main limiting factors in using active testing for understanding Web performance are the connectivity and the locations from which the tests are run.
The impact of connectivity cannot be overstated. As illustrated in Figure 1, bandwidth can have a significant impact on Web performance, particularly at the slower speeds (less than 2Mbps), and latency has a near-linear correlation with Web performance.
Testing from within a data center skews results heavily: bandwidth is effectively unlimited and there is no last-mile latency (which for actual users can range anywhere from 20 to 200-plus milliseconds, depending on the type of connection). The connectivity skew is exaggerated further by CDNs (content distribution networks), because these usually have edge nodes collocated in the same data centers as the test machines for many monitoring providers, reducing latency to close to zero.
When loading a page, the browser generally resolves the server's name in DNS, connects to the server, issues the request, waits for the response, and then parses the HTML, fetching and rendering the referenced resources as it discovers them.
In 2010, the browser vendors got together under the W3C banner and formed the Web Performance Working Group15 to standardize the interfaces and work toward improving the state of Web-performance measurement and APIs in the browsers. (As of this writing, browser support for the various timing and performance information is somewhat varied, as shown in the accompanying table.)
In late 2010 the group released the Navigation Timing specification,9 which has since been implemented in Internet Explorer (9+ desktop and mobile), Chrome (15+ all platforms), Firefox (7+), and Android (4+).
The largest benefit navigation timing provides is that it exposes the timings that lead up to the HTML loading, as shown in Figure 2. In addition to providing a good start time, it exposes information about any redirects, DNS lookup times, time to connect to the server, and how long it takes the Web server to respond to the request, for every user and for every page the user visits.
The following code calculates the page load time:
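A minimal version might look like the following sketch, which uses the Navigation Timing interface; the helper name `pageLoadTime` is our own, and the guard reflects the fact that `loadEventStart` stays 0 until the load event has fired:

```javascript
// Compute page-load time (in ms) from a Navigation Timing record.
// In the browser, pass window.performance.timing.
function pageLoadTime(timing) {
  // Both fields are epoch milliseconds; loadEventStart remains 0
  // until the load event actually happens.
  if (!timing.loadEventStart || !timing.navigationStart) {
    return null; // not measurable yet
  }
  return timing.loadEventStart - timing.navigationStart;
}

// Browser usage: read it once the load event has fired, e.g.
// window.addEventListener('load', function () {
//   setTimeout(function () {
//     var t = pageLoadTime(performance.timing);
//     // report t back to the server
//   }, 0);
// });
```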
Just make sure to record measurements after the point you are measuring (loadEventStart will not be set until the actual load event happens, so attaching a listener to the load event is a good way of knowing it is available for measurement).
What is interesting about users' clocks is that they don't always move forward and are not always linear, so error checking and data filtering are necessary to make sure only reasonable values are being reported. For example, if a user's computer connects to a time server and adjusts the clock, it can jump forward or backward in between measurements, resulting in negative times. If a user switches apps on a phone, then the page can be paused midload until the user opens the browser again, leading to load times that span days.
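A sanity filter of this sort might look like the following sketch; the thresholds are illustrative assumptions, not values from the article:

```javascript
// Reject measurements corrupted by clock adjustments or by the page
// being paused in the background.
function isReasonableTiming(ms) {
  var TEN_MINUTES = 10 * 60 * 1000; // arbitrary upper bound
  // Negative values: the clock stepped backward between samples.
  // Huge values: the page likely sat paused for hours or days.
  return Number.isFinite(ms) && ms >= 0 && ms <= TEN_MINUTES;
}
```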
To help with these clock issues, the working group also created the High Resolution Timing specification,7 which exposes a monotonically increasing, sub-millisecond timer through performance.now() and is relative to the page navigation start time.
Sites can now report on the actual performance for all of their users, and monitoring and analytics providers are also providing ways to mine the data (Google Analytics reports the navigation timing along with business metrics, for example2), so websites may already be getting real user performance-measurement data. Google Analytics also makes an interface available for reporting arbitrary performance data so website operators can add timings for anything else they would like to measure.3
Unlike the results from active testing, the data from real users is noisy, and averages tend not to work well at all. Performance data turns into regular analytics like the rest of a company's business metrics, and managers have to do a lot of similar analysis such as looking at users from specific regions, looking at percentiles instead of averages, and looking at the distribution of the results.
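A percentile computation over collected load times might be sketched as follows (a simple nearest-rank method; the function name is our own):

```javascript
// Nearest-rank percentile: p is in (0, 1], e.g. 0.5 for the median.
function percentile(values, p) {
  var sorted = values.slice().sort(function (a, b) { return a - b; });
  var idx = Math.min(sorted.length - 1, Math.floor(p * sorted.length));
  return sorted[idx];
}

// var times = [/* load times in ms collected from real users */];
// percentile(times, 0.5)  -> median
// percentile(times, 0.95) -> 95th percentile
```

Percentiles resist the long tail of real-user data, where a handful of multi-minute outliers can drag an average far away from the typical experience.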
For example, Figure 3 shows the distribution of page-load times for recent visitors to a website (approximately two million data points). In this case U.S. and European traffic skews toward faster load times, while Asia and Africa skew slower with a significant number of users experiencing page-load times of more than 11 seconds. Demographic issues drive some of the skew (particularly around Internet connectivity in different regions), but it is also a result of the CDN used to distribute static files. The CDN does not have nodes in Eastern Asia or Africa, so one experiment would be to try a different CDN for a portion of the traffic and watch for a positive impact on the page-load times from real users.
One thing to consider when measuring the performance of a site is when to stop the timing. Just about every measurement tool or service will default to reporting the time until the browser's load event fires. This is largely because it is the only well-defined point that is consistent across sites and reasonably consistent across browsers.8 In the world of completely static pages, the load event is the point where the browser has finished loading all of the content referred to by the HTML (including all style sheets and images, among others).
All of the browsers implement the basic load event consistently as long as scripts on the page do not change the content. The HTML5 specification clarified the behavior for when scripts modify the DOM (adding images, other scripts, and so on), and the load event now also includes all of the resources that are added to the DOM dynamically. The notable exception to this behavior is Internet Explorer prior to version 10, which did not block the load event for dynamically inserted content (since it predated the spec that made the behavior clear).
The time until the load event is an excellent technical measurement, but it may not convey how fast or slow a page was for the user. The main issues are that it measures the time until every request completes, even those that are not visible to the user (content below the currently displayed page, invisible tracking pixels, among others), and it does not tell you anything about how quickly the content was displayed to the user. As illustrated in Figure 4, a page that displays a completely blank screen right up until the load event and a page that displays all visible content significantly earlier can both have the same time as reported by measuring the load event.
Several attempts have been made to find a generic measurement that can accurately reflect the user experience:
First paint. Internet Explorer exposes an msFirstPaint time on the performance.timing interface, and Chrome exposes a time under chrome.loadTimes().firstPaintTime; in both cases, however, it is possible that the first thing the browser painted was still a blank white screen (though this is better than nothing and good for trending).
Speed index.14 After experimenting with several point-in-time measurements, we evolved our visible measurement frameworks to capture the visual progress of displaying a page over time. This current best effort for representing the user experience in a single number takes the visual progress as a page is loading and calculates a metric based on how quickly content is painted to the screen. Figure 5 shows the example from Figure 4 but with visual progress included at each step of the way.
Figure 6 shows how to plot the progress and calculate the unrendered area over time to boil the overall experience down to a single number. The speed index is an improvement in the ability to capture the user experience, but it is only measurable in a lab. It still requires that you determine what the endpoint of the measurement is, and it does not deal well with pages that have large visual changes as they load.
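The calculation can be sketched as follows, given (time, visual-completeness) samples such as those captured by a video-based test; the sample format here is an assumption of ours:

```javascript
// Speed index: integrate the unrendered fraction of the page over
// time. samples = [{ time: ms, complete: 0..1 }, ...], sorted by
// time, with the last sample at visual completeness 1.
function speedIndex(samples) {
  var index = 0;
  for (var i = 1; i < samples.length; i++) {
    var interval = samples[i].time - samples[i - 1].time;
    // Area above the visual-progress curve during this interval.
    index += interval * (1 - samples[i - 1].complete);
  }
  return index;
}
```

A page that paints most of its visible content early accumulates little unrendered area and therefore earns a low (better) speed index, even if its load event fires at the same time as a page that stays blank until the end.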
As browsers go through a page and build the DOM, they also execute inline scripts when they get to them. They cannot execute the scripts until all previous scripts have been executed. If you want to know when the browser made it to the main article, then you can just add a tiny inline script that records the current time right after the HTML for the main article. When you report the performance metrics, you can gather the different custom measurements and include them as well.
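Such an inline marker might look like the following sketch; the variable name is our own, and in the page this would sit immediately after the article's markup:

```javascript
// Inlined right after the main article's HTML; the browser runs it
// as soon as parsing reaches this point.
var articleVisibleAt = (typeof performance !== 'undefined' && performance.now)
  ? performance.now() // sub-millisecond, relative to navigation start
  : Date.now();       // coarse fallback for older browsers
// Include articleVisibleAt when reporting the page's metrics.
```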
The W3C Web Performance Working Group considered this use case as well and created the User Timing specification.13 It provides a simple way of marking points in time using performance.mark() and of measuring the intervals between marks using performance.measure().
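A sketch of that usage, where the mark and measure labels are our own choices:

```javascript
// Mark two points with the User Timing interface and measure the
// interval between them.
performance.mark('article-start');  // e.g., inlined near the top of the page
/* ... the main article's HTML would be parsed between the marks ... */
performance.mark('article-parsed'); // inlined right after the article
performance.measure('article-render', 'article-start', 'article-parsed');

var articleTiming = performance.getEntriesByName('article-render')[0];
// articleTiming.duration is the elapsed time between the two marks.
```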
This still isn't perfect because there is no guarantee the browser actually painted the content to the screen, but it is a good proxy for it. To get even closer to the actual painting event, the animation timing interface12 can request a callback for when the browser is ready to paint, through requestAnimationFrame().
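A sketch of that pattern; the helper name is ours, and the setTimeout fallback is an assumption for browsers without animation timing:

```javascript
// Run a callback at the next point the browser is ready to paint.
function onNextPaint(callback) {
  if (typeof requestAnimationFrame === 'function') {
    requestAnimationFrame(callback); // fires just before the paint
  } else {
    setTimeout(callback, 16); // rough fallback (~one 60Hz frame)
  }
}

// Browser usage, paired with an inline marker after key content:
// onNextPaint(function () { /* record performance.now() here */ });
```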
One specification the community has been eager to see implemented is the resource timing interface.11 It exposes timing information about every network request the browser had to make to load a page and what triggered the resource to be requested (whether stylesheet, script, or image).
Some security provisions are in place to limit the information that is provided across domains, but in all cases the list (and count) of requests that were made on the network are available, as are the start and end times for each of them. For requests where you have permission to see the granular timing information, you get full visibility into the individual component timings: DNS lookup, time to connect, redirects, SSL (Secure Sockets Layer) negotiation time, server response time, and time to download.
By default the granular timing is visible for all of the resources served by the same domain as the current page, but cross-domain visibility can also be enabled by including a "Timing-Allow-Origin" response header for requests served by other domains (the most common case being a separate domain from which static content can be served).
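For example, a static-content host could build the opt-in header as in this sketch (the function name and the choice of "*" versus a specific origin are ours):

```javascript
// Build the response header that opts a resource's granular timing
// in for cross-origin pages. Pass a specific origin, or '*' to
// allow any origin to read the timings.
function timingAllowOrigin(origin) {
  return { 'Timing-Allow-Origin': origin || '*' };
}

// e.g. merge timingAllowOrigin('https://www.example.com') into the
// headers the static host sends with its responses.
```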
You can query the list of resources through performance.getEntriesByType("resource").
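A sketch of pulling a few of the component timings out of those entries (the field names come from the Resource Timing interface; the summary shape is our own):

```javascript
// Summarize resource-timing entries. For cross-origin resources
// served without Timing-Allow-Origin, the granular fields are zero,
// so the dns/connect values below simply come out as 0.
function summarizeResources(entries) {
  return entries.map(function (e) {
    return {
      name: e.name,
      duration: e.duration, // total time for the request
      dns: e.domainLookupEnd - e.domainLookupStart,
      connect: e.connectEnd - e.connectStart
    };
  });
}

// Browser usage:
// summarizeResources(performance.getEntriesByType('resource'));
```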
The possibilities for diagnosing issues in the field with this interface are huge.
The default buffer configuration will store up to 150 requests, but there are interfaces available to clear the buffer (performance.clearResourceTimings()), change its size (performance.setResourceTimingBufferSize()), and receive notification when it fills.
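A defensive sketch of that housekeeping, feature-tested since support varies; the 500-entry size is an arbitrary choice of ours:

```javascript
// Enlarge the resource-timing buffer, report what has accumulated,
// and clear it so later requests are not silently dropped.
function manageResourceBuffer(perf, reportFn) {
  if (perf.setResourceTimingBufferSize) {
    perf.setResourceTimingBufferSize(500);
  }
  if (perf.getEntriesByType && reportFn) {
    reportFn(perf.getEntriesByType('resource'));
  }
  if (perf.clearResourceTimings) {
    perf.clearResourceTimings();
  }
}

// Browser usage: manageResourceBuffer(performance, sendToServer);
```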
Theoretically, you could report all of the timings for all of your users and be able to diagnose any session after the fact, but that would involve a significant amount of data. More likely, you will want to identify specific things to measure or diagnose in the field and report only the results of analysis done in the client (and, this being the Web, you can refine what you collect whenever you need more information about specific issues).
If you run a website, make sure you are regularly looking at its performance. The data has never been more readily available, and it is often quite surprising.
The W3C Web Performance Working Group has made great progress over the past two years in defining interfaces that will be useful for getting performance information from real users and for driving the implementation in the actual browsers. The working group is quite active and plans to continue to improve the ability to measure performance, as well as standardize on solutions for common performance issues (most notably, a way to fire tracking beacons without holding up a page).5 If you have thoughts or ideas, I encourage you to join the mailing list10 and share them with the working group.
1. Brutlag, J., Abrams, Z. and Meenan, P. Above-the-fold time: measuring Web page performance visually (2011); http://cdn.oreillystatic.com/en/assets/1/event/62/Above%20the%20Fold%20Time_%20Measuring%20Web%20Page%20Performance%20Visually%20Presentation.pdf.
2. Google Analytics. Measure your website's performance with improved Site Speed reports (2012); http://analytics.blogspot.com/2012/03/measure-your-websites-performance-with.html.
3. Google Developers. User timings Web tracking (ga.js) (2013); https://developers.google.com/analytics/devguides/collection/gajs/gaTrackingTiming.
4. Hallock, A. Some interesting performance statistics; http://torbit.com/blog/2012/09/19/some-interesting-performance-statistics/.
5. Mann, J. W3C Web performance: continuing performance investments. IEBlog; http://blogs.msdn.com/b/ie/archive/2012/11/27/w3c-web-performance-continuing-performance-investments.aspx.
6. Souders, S. The performance golden rule; http://www.stevesouders.com/blog/2012/02/10/the-performance-golden-rule/.
7. W3C. High Resolution Timing (2012); http://www.w3.org/TR/hr-time/.
8. W3C. HTML5 (2012); http://www.w3.org/TR/2012/CR-html5-20121217/webappapis.html#handler-window-onload.
9. W3C. Navigation timing (2012); http://www.w3.org/TR/navigation-timing/.
10. W3C. public-web-perf mailing list archives (2012); http://lists.w3.org/Archives/Public/public-web-perf/.
11. W3C. Resource timing (2012); http://www.w3.org/TR/resource-timing/.
12. W3C. Timing control for script-based animations (2012); http://www.w3.org/TR/animation-timing/.
13. W3C. User timing (2012); http://www.w3.org/TR/user-timing/.
14. WebPagetest documentation; https://sites.google.com/a/webpagetest.org/docs/using-webpagetest/metrics/speed-index.
15. Web Performance Working Group. W3C; http://www.w3.org/2010/webperf/.
©2013 ACM 0001-0782/13/04
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and full citation on the first page. Copyright for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or fee. Request permission to publish from firstname.lastname@example.org or fax (212) 869-0481.
The Digital Library is published by the Association for Computing Machinery. Copyright © 2013 ACM, Inc.