
Friday, April 28, 2017

Browser fingerprints – the invisible cookies you can’t delete – Naked Security

Dear reader, it seems that you are causing headaches in dark corners of the web.
I pinpoint you specifically, as a reader of Naked Security, because I assume that if you’re a regular to this site then you’re more likely than most to care about who’s watching you online.
For the people trying to track you, profile you and sell to you, you’re a problem.
Historically, techniques for tracking people’s movements around the web have relied on HTTP cookies – small messages that ‘tag’ your browser so it can be uniquely identified.
Unfortunately for snoopers, profilers and marketers, cookie-based tracking leaves the final decision about whether you’re followed or not in your hands because you can delete their cookies and disappear.
It’s no secret that some vendors have moved on from cookies – local storage, Flash cookies and ETags have all been used in the wild, either as cookie replacements or as backups from which cookies can be ‘respawned’.
These techniques have been successful because they’re obscure but they all have the same fundamental weakness as cookies – they rely on things that you can delete.
The holy grail for tracking is to find a unique ID that you can’t delete, something that identifies you uniquely based on who or what you are, not what you have.

FINGERPRINTING BROWSERS

In July I wrote about Panopticlick, a fingerprinting tool that does exactly that. It was created by the Electronic Frontier Foundation (EFF) for its research paper How Unique Is Your Web Browser?
Panopticlick asks your browser a few questions, such as what fonts you have installed, what HTTP headers your browser sends, your screen size and your timezone.
That collection of information varies so much from one browser to the next that it’s enough to tell any two browsers apart with startling accuracy.
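To make the idea concrete, here is a minimal sketch of the technique (an illustration, not Panopticlick’s actual code) that gathers a handful of browser attributes and hashes them into a short ID; the djb2-style hash is an arbitrary choice for the example:

    // Minimal attribute-based fingerprinting sketch (illustrative only).
    function simpleHash(str) {
        var h = 5381; // djb2-style string hash, returned as unsigned hex
        for (var i = 0; i < str.length; i++) {
            h = ((h * 33) ^ str.charCodeAt(i)) >>> 0;
        }
        return h.toString(16);
    }

    var attributes = [
        navigator.userAgent,                    // browser and OS details
        screen.width + 'x' + screen.height,     // screen size
        new Date().getTimezoneOffset(),         // timezone
        navigator.language                      // preferred language
    ].join('|');

    console.log('fingerprint:', simpleHash(attributes));

A real fingerprinter folds in many more measurements – installed fonts, plugins, HTTP headers – each one adding entropy to the ID.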
The EFF used Panopticlick to show that in the population of internet users it tested (a group likely to be more privacy conscious than average) users had a 1 in 286,777 chance of sharing their fingerprint with somebody else.
That’s certainly good enough to use as a fall-back ‘respawning’ technique but perhaps not quite good enough to work as a cookie replacement.
Since Panopticlick was only designed to show that fingerprinting was viable it didn’t exhaust all the possible browser features that might be exploited for truly bomb-proof fingerprinting.
That such unexplored features exist was alluded to by the authors in their conclusion (my emphasis):
We implemented and tested one particular browser fingerprinting method. It appeared, in general, to be very effective, though as noted in Section 3.1 there are many measurements that could be added to strengthen it.

FINGERPRINTING BEYOND THE BROWSER

As chance would have it, at the same time as I was writing about Panopticlick, a well-known internet company with a foothold on 13 million websites was caught experimenting with one of those ‘missing’ techniques: canvas fingerprinting.
AddThis is the internet’s premier purveyor of social media sharing widgets.
Its code is embedded in millions of websites, which gives it a huge platform on which to run its anonymous personalization and audience technology.
Between February and July 2014 that technology included a live test for a canvas fingerprinting technique.
The <canvas> element is a feature of HTML5, the language used to build web pages. It’s a ‘drawing surface’ on to which small computer programs, written in JavaScript and embedded in the same page, can paint pictures, animations and other visual elements (our Asteroids game is a fine example – just search our site for Asteroids.)
Often the most sensible and efficient way for web browsers to handle canvas graphics is to hand over font rendering and 2D compositing to the underlying operating system and hardware GPU.
Different graphics cards and operating systems work slightly differently, which means that different browsers given identical instructions on what to draw will draw slightly different pictures.
To illustrate the point, here are the SHA1 hashes of two renderings of the letter T: one produced by Firefox 33 on OS X and the other by Safari 8 on the same machine.
T rendered by Firefox 33 on OS X: 55b2257ad0f20ecbf927fb66a15c61981f7ed8fc
T rendered by Safari 8 on OS X: 17bc79f8111e345f572a4f87d6cd780b445625d3
In 2012, researchers Keaton Mowery and Hovav Shacham published a research paper entitled Pixel Perfect: Fingerprinting Canvas in HTML5 which showed that there was enough variation to create a reliable browser fingerprint.
In their own words:
...the behavior of <canvas> text and WebGL scene rendering on modern browsers forms a new system fingerprint. The new fingerprint is consistent, high-entropy, orthogonal to other fingerprints, transparent to the user, and readily obtainable.
Remarkably, they didn’t have to try very hard to tease out the differences between graphics cards…
Our experiments show that graphics cards leave a detectable fingerprint while rendering even the simplest scenes.
…nor the way that even common fonts are rendered.
Even Arial, a font which is 30 years old, renders in new and interesting ways depending on the underlying operating system and browser. In the 300 samples collected for the text_arial test, there are 50 distinct renderings.
Since the technique relies on rendering pictures you might think that there would be something you could see that gives the game away, right? Not so.
Our tests can be performed, offscreen, in a fraction of a second. There is no indication, visual or otherwise, that the user's system is being fingerprinted.
Finally, the messy business of comparing pictures is neatly accomplished by converting the picture rendered on the canvas into a string of base64 data (using the toDataURL() method) and running it through a hashing function to create a short, fixed length ID.
This makes dealing with canvas fingerprints almost as easy as dealing with cookies.
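As a sketch of how those pieces fit together (an illustration of the technique, not any vendor’s code), the following draws text on an offscreen canvas, serializes it with toDataURL() and hashes the result with the browser’s built-in SHA-1 digest:

    // Minimal canvas-fingerprint sketch (illustrative only).
    function canvasFingerprint() {
        var canvas = document.createElement('canvas'); // never attached to the page
        canvas.width = 200;
        canvas.height = 50;
        var ctx = canvas.getContext('2d');
        ctx.font = '20px Arial';
        ctx.fillText('How quickly daft jumping zebras vex', 2, 30);

        var data = canvas.toDataURL(); // base64-encoded PNG of the rendered pixels
        return crypto.subtle.digest('SHA-1', new TextEncoder().encode(data))
            .then(function (buf) {
                // Render the digest as a short hex ID, like the hashes above
                return Array.from(new Uint8Array(buf))
                    .map(function (b) { return b.toString(16).padStart(2, '0'); })
                    .join('');
            });
    }

    canvasFingerprint().then(function (id) { console.log('canvas fingerprint:', id); });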
Mowery and Shacham estimated the entropy of their fingerprint to be about 10 bits, which is impressive but fewer than the 18.1 bits found in the Panopticlick fingerprint.
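(The figures are consistent with each other: the 1 in 286,777 chance quoted earlier corresponds to log2(286,777) ≈ 18.1 bits of identifying information, while 10 bits distinguishes only about 2^10 ≈ 1,000 different configurations.)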
Just as the Panopticlick researchers did, they conclude that there’s more entropy to be found:
We were surprised at the amount of variability we observed in even very simple tests ... We conjecture that it is possible to distinguish even systems for which we obtained identical fingerprints, by rendering complicated scenes that come closer to stressing the underlying hardware.

FINGERPRINTING IN THE WILD

The potential for canvas fingerprinting was obvious but Mowery and Shacham had only shown that it was possible, not that it was being used in the real world.
In 2014, a group of researchers from Princeton and the University of Leuven set out to see if canvas fingerprinting was being used in the wild.
They crawled the home pages of the 100,000 most popular websites and found 20 distinct implementations of canvas fingerprinting.
Nine of them appeared to be home-brewed implementations unique to a single site while 11 of them were third party scripts shared across a number of sites.
The lion’s share, though – some 95% of the 5,542 unique sites found to be using canvas fingerprinting – were running code provided by AddThis.
I should be absolutely clear that neither site owners nor users were aware that they were part of an AddThis test bed.
The AddThis code that the researchers found was there to provide social media sharing functionality; the fingerprinting code bundled with it, unannounced, was being used by AddThis for its own ends, not by its customers.
The results of the research were published in a paper, The Web Never Forgets, in July 2014, and caused a bit of a stir in the computer security press.
By a happy and remarkable coincidence, the six-month “preliminary initiative to evaluate alternatives to browser cookies” ended at exactly the same time.
AddThis came clean in a blog post shortly after concluding the test and was at pains to reassure users that their privacy had been protected.
... this data was never used for personalization or targeted advertising.
... We don't identify individuals ... and we honor user opt-out preferences any time we act on our data.
... We adhere to industry standards, and have an opt-out process that complies with our membership in the NAI and the DAA. We honored our opt-out policy during this test, and the data was only used for internal research.
In the comments, a representative from AddThis revealed that the test wasn’t wrapped up as a matter of conscience, or even damage limitation, but because it didn’t work very well.
Had the identification actually been good, we would have kicked off a whole new investigation ... But given the results, we're halting the project.
Disappointingly, the post also seeks to justify the company’s actions by invoking an excuse familiar to parents of teenagers the world over – everyone else is doing it, so why can’t we:
Many other companies are working on cookie alternatives, and we wanted to see if this approach worked.

THE BOTTOM LINE

What AddThis didn’t address in its mea culpa is the fundamental thing that makes fingerprinting and other exotic tracking techniques so obnoxious:
They only exist to rob users of the ability to control who tracks them.
Cookies provide a perfectly decent way to identify users – they’re reliable, benign, well understood by users, easy to implement and easy for users to control.
The only ‘problem’ that super cookies, evercookies, fingerprints and other methods ‘solve’ is that of users having opinions about who tracks them.
Users who delete cookies are sending out a clear message that they don’t wish to be tracked. Vendors who use fingerprinting are looking for ways to drown out that message.

HOW TO PROTECT YOURSELF

Fingerprinting is a viable alternative to cookies that’s being used in the wild.
The techniques shown by Mowery, Shacham and the EFF are individually useful but both sets of researchers pointed to ways their techniques might be made better still. The most obvious way to strengthen either technique is to combine it with the other since the two don’t overlap.
That work has already been done and an off-the-peg fingerprinting library that incorporates both techniques is available for free on GitHub.
Existing counter-measures are of limited use: Private Browsing and Incognito mode don’t alter a browser’s fingerprint and so, according to the author of the fingerprinting library mentioned above, they have no effect.
Privacy conscious users who deploy browser plugins to manage cookies and other tracking mechanisms are also likely to make their fingerprints more distinct, not less.
There is no single, good way to protect yourself but there are things that you can do to make your fingerprint less distinct.
Turning off Flash, Java, WebGL and JavaScript will reduce your fingerprint massively but you may find the web unusable if you do. A reasonable compromise would be to disable Flash and Java and use a plugin like NoScript.
Privacy plugins like Ghostery should protect you from fingerprinting code served from known, third party domains used for advertising or tracking.
According to the EFF the browser most resistant to fingerprinting is the Tor browser because of its bland User-Agent string and aggressive approach to blocking JavaScript.
Tor also asks for a user’s permission before giving websites access to data on canvas elements, which completely disrupts canvas fingerprinting. The same functionality is available in plugins for Chrome and Firefox.
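To see why gating canvas reads is so disruptive, here is a minimal sketch of the idea behind such plugins (the approach in outline, not any particular extension’s code):

    // Sketch: require consent before canvas pixel data can be read.
    var realToDataURL = HTMLCanvasElement.prototype.toDataURL;
    HTMLCanvasElement.prototype.toDataURL = function () {
        if (!window.confirm('Allow this page to read canvas data?')) {
            // Return a blank canvas of the same size, so the fingerprinter
            // sees a constant, information-free image.
            var blank = document.createElement('canvas');
            blank.width = this.width;
            blank.height = this.height;
            return realToDataURL.apply(blank, arguments);
        }
        return realToDataURL.apply(this, arguments);
    };

Because every browser returns the same blank image when permission is refused, the hash of the canvas no longer distinguishes one machine from another.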
The EFF is also promising that future versions of its Privacy Badger plugin will include countermeasures against fingerprinting.

Wednesday, April 12, 2017

A Comprehensive Guide To HTTP/2 Server Push – Smashing Magazine

The landscape for the performance-minded developer has changed significantly in the last year or so, with the emergence of HTTP/2 being perhaps the most significant of all. No longer is HTTP/2 a feature we pine for. It has arrived, and with it comes server push!
Aside from solving common HTTP/1 performance problems (e.g., head of line blocking and uncompressed headers), HTTP/2 also gives us server push! Server push allows you to send site assets to the user before they’ve even asked for them. It’s an elegant way to achieve the performance benefits of HTTP/1 optimization practices such as inlining, but without the drawbacks that come with that practice.
In this article, you’ll learn all about server push, from how it works to the problems it solves. You’ll also learn how to use it, how to tell if it’s working, and its impact on performance. Let’s begin!


What Is Server Push, Exactly?

    Accessing websites has always followed a request and response pattern. The user sends a request to a remote server, and with some delay, the server responds with the requested content.
    The initial request to a web server is commonly for an HTML document. In this scenario, the server replies with the requested HTML resource. The HTML is then parsed by the browser, where references to other assets are discovered, such as style sheets, scripts and images. Upon their discovery, the browser makes separate requests for those assets, which are then responded to in kind.
Typical web server communication.
    The problem with this mechanism is that it forces the user to wait for the browser to discover and retrieve critical assets until after an HTML document has been downloaded. This delays rendering and increases load times.
    With server push, we have a solution to this problem. Server push lets the server preemptively “push” website assets to the client without the user having explicitly asked for them. When used with care, we can send what we know the user is going to need for the page they’re requesting.
    Let’s say you have a website where all pages rely on styles defined in an external style sheet named styles.css. When the user requests index.html from the server, we can push styles.css to the user just after we begin sending the response for index.html.
Web server communication with HTTP/2 server push.
    Rather than waiting for the server to send index.html and then waiting for the browser to request and receive styles.css, the user only has to wait for the server to respond with both index.html and styles.css on the initial request. This means that the browser can begin rendering the page faster than if it had to wait.
    As you can imagine, this can decrease the rendering time of a page. It also solves some other problems, particularly in front-end development workflows.

What Problems Does Server Push Solve?

    While reducing round trips to the server for critical content is one of the problems that server push solves, it’s not the only one. Server push acts as a suitable alternative for a number of HTTP/1-specific optimization anti-patterns, such as inlining CSS and JavaScript directly into HTML, as well as using the data URI scheme to embed binary data into CSS and HTML.
    These techniques found purchase in HTTP/1 optimization workflows because they decrease what we call the “perceived rendering time” of a page, meaning that while the overall loading time of a page might not be reduced, the page will appear to load faster for the user. It makes sense, after all. If you inline CSS into an HTML document within <style> tags, the browser can begin applying styles immediately to the HTML without waiting to fetch them from an external source. This concept holds true with inlining scripts and inlining binary data with the data URI scheme.
Web server communication with inlined content.
    Seems like a good way to tackle the problem, right? Sure — for HTTP/1 workflows, where you have no other choice. The poison pill we swallow when we do this, however, is that the inlined content can’t be efficiently cached. When an asset like a style sheet or JavaScript file remains external and modular, it can be cached much more efficiently. When the user navigates to a subsequent page that requires that asset, it can be pulled from the cache, eliminating the need for additional requests to the server.
Optimal caching behavior.
    When we inline content, however, that content doesn’t have its own caching context. Its caching context is the same as the resource it’s inlined into. Take an HTML document with inlined CSS, for instance. If the caching policy of the HTML document is to always grab a fresh copy of the markup from the server, then the inlined CSS will never be cached on its own. Sure, the document that it’s a part of may be cached, but subsequent pages containing this duplicated CSS will be downloaded repeatedly. Even if the caching policy is more lax, HTML documents typically have limited shelf life. This is a trade-off that we’re willing to make in HTTP/1 optimization workflows, though. It does work, and it’s quite effective for first-time visitors. First impressions are often the most important.
    These are the problems that server push addresses. When you push assets, you get the practical benefits that come with inlining, but you also get to keep your assets in external files that retain their own caching policy. There is a caveat to this point, though, and it’s covered toward the end of this article. For now, let’s continue.
    I’ve talked enough about why you should consider using server push, as well as the problems that it fixes for both the user and the developer. Now let’s talk about how it’s used.

How To Use Server Push

    Using server push usually involves using the Link HTTP header, which takes on this format:
    Link: </css/styles.css>; rel=preload; as=style
    
    Note that I said usually. What you see above is actually the preload resource hint in action. This is a separate and distinct optimization from server push, but most (not all) HTTP/2 implementations will push an asset specified in a Link header containing a preload resource hint. If either the server or the client opts out of accepting the pushed resource, the client can still initiate an early fetch for the resource indicated.
    The as=style portion of the header is not optional. It informs the browser of the pushed asset’s content type. In this case, we use a value of style to indicate that the pushed asset is a style sheet. You can specify other content types. It’s important to note that omitting the as value can result in the browser downloading the pushed resource twice. So don’t forget it!
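For instance, a script and a web font could be pushed the same way (the paths here are placeholders; note that preloaded fonts also need the crossorigin attribute):
Link: </js/app.js>; rel=preload; as=script
Link: </fonts/body.woff2>; rel=preload; as=font; crossorigin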
    Now that you know how a push event is triggered, how do we set the Link header? You can do so through two routes:
    • your web server configuration (for example, Apache httpd.conf or .htaccess);
    • a back-end language function (for example, PHP’s header function).
    Here’s an example of configuring Apache (via httpd.conf or .htaccess) to push a style sheet whenever an HTML file is requested:
<FilesMatch "\.html$">
    Header set Link "</css/styles.css>; rel=preload; as=style"
</FilesMatch>
    
Here, we use the FilesMatch directive to match requests for files ending in .html. When a request comes along that matches these criteria, we add a Link header to the response that tells the server to push the resource at /css/styles.css.
    Side note: Apache’s HTTP/2 module can also initiate a push of resources using the H2PushResource directive. The documentation for this directive states that this method can initiate pushes earlier than if the Link header method is used. Depending on your specific setup, you may not have access to this feature. The performance tests shown later in this article use the Link header method.
    As of now, Nginx doesn’t support HTTP/2 server push, and nothing so far in the software’s changelog has indicated that support for it has been added. This may change as Nginx’s HTTP/2 implementation matures.
    Another way to set a Link header is through a server-side language. This is useful when you aren’t able to change or override the web server’s configuration. Here’s an example of how to use PHP’s header function to set the Link header:
    header("Link: </css/styles.css>; rel=preload; as=style");
    If your application resides in a shared hosting environment where modifying the server’s configuration isn’t an option, then this method might be all you’ve got to go on. You should be able to set this header in any server-side language. Just be sure to do so before you begin sending the response body, to avoid potential runtime errors.
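Server push can also be initiated directly from application code on platforms that expose an HTTP/2 API. As a rough sketch of the idea (not from this article’s test setup; the certificate and file paths are placeholders), here is how a push might look with Node.js’s built-in http2 module:

    const http2 = require('http2');
    const fs = require('fs');

    const server = http2.createSecureServer({
        key: fs.readFileSync('key.pem'),   // placeholder TLS files
        cert: fs.readFileSync('cert.pem')
    });

    server.on('stream', (stream, headers) => {
        if (headers[':path'] === '/' && stream.pushAllowed) {
            // Preemptively push the style sheet alongside the HTML response.
            stream.pushStream({ ':path': '/css/styles.css' }, (err, pushStream) => {
                if (err) return; // the client may have disabled push
                pushStream.respondWithFile('css/styles.css', { 'content-type': 'text/css' });
            });
        }
        stream.respondWithFile('index.html', { 'content-type': 'text/html' });
    });

    server.listen(8443);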

PUSHING MULTIPLE ASSETS

    All of our examples so far only illustrate how to push one asset. What if you want to push more than one? Doing that would make sense, right? After all, the web is made up of more than just style sheets. Here’s how to push multiple assets:
    Link: </css/styles.css>; rel=preload; as=style, </js/scripts.js>; rel=preload; as=script, </img/logo.png>; rel=preload; as=image
    
When you want to push multiple resources, just separate each push directive with a comma. Because resource hints are added via the Link header, this syntax is how you can mix in other resource hints with your push directives. Here’s an example of mixing a push directive with a preconnect resource hint:
    Link: </css/styles.css>; rel=preload; as=style, <https://fonts.gstatic.com>; rel=preconnect
    
    Multiple Link headers are also valid. Here’s how you can configure Apache to set multiple Link headers for requests to HTML documents:
<FilesMatch "\.html$">
    Header add Link "</css/styles.css>; rel=preload; as=style"
    Header add Link "</js/scripts.js>; rel=preload; as=script"
</FilesMatch>
    
    This syntax is more convenient than stringing together a bunch of comma-separated values, and it works just the same. The only downside is that it’s not quite as compact, but the convenience is worth the few extra bytes sent over the wire.
    Now that you know how to push assets, let’s see how to tell whether it’s working.

How To Tell Whether Server Push Is Working

    So, you’ve added the Link header to tell the server to push some stuff. The question that remains is, how do you know if it’s even working?
    This varies by browser. Recent versions of Chrome will reveal a pushed asset in the initiator column of the network utility in the developer tools.
Chrome indicating that an asset has been pushed by the server.
    Furthermore, if we hover over the asset in the network request waterfall, we’ll get detailed timing information on the asset’s push:
Chrome showing detailed timing information of the pushed asset.
    Firefox is less obvious in identifying pushed assets. If an asset has been pushed, its status in the browser’s network utility in the developer tools will show up with a gray dot.
Firefox indicating that an asset has been pushed by the server.
    If you’re looking for a definitive way to tell whether an asset has been pushed by the server, you can use the nghttp command-line client to examine a response from an HTTP/2 server, like so:
    nghttp -ans https://jeremywagner.me
    This command will show a summary of the assets involved in the transaction. Pushed assets will have an asterisk next to them in the program output, like so:
    id  responseEnd requestStart  process code size request path
     13     +50.28ms      +1.07ms  49.21ms  200   3K /
      2     +50.47ms *   +42.10ms   8.37ms  200   2K /css/global.css
      4     +50.56ms *   +42.15ms   8.41ms  200  157 /css/fonts-loaded.css
      6     +50.59ms *   +42.16ms   8.43ms  200  279 /js/ga.js
      8     +50.62ms *   +42.17ms   8.44ms  200  243 /js/load-fonts.js
     10     +74.29ms *   +42.18ms  32.11ms  200   5K /img/global/jeremy.png
     17     +87.17ms     +50.65ms  36.51ms  200  668 /js/lazyload.js
     15     +87.21ms     +50.65ms  36.56ms  200   2K /img/global/book-1x.png
     19     +87.23ms     +50.65ms  36.58ms  200  138 /js/debounce.js
     21     +87.25ms     +50.65ms  36.60ms  200  240 /js/nav.js
     23     +87.27ms     +50.65ms  36.62ms  200  302 /js/attach-nav.js
    
    Here, I’ve used nghttp on my own website, which (at least at the time of writing) pushes five assets. The pushed assets are marked with an asterisk on the left side of the requestStart column.
    Now that we can identify when assets are pushed, let’s see how server push actually affects the performance of a real website.

Measuring Server Push Performance

    Measuring the effect of any performance enhancement requires a good testing tool. Sitespeed.io is an excellent tool available via npm; it automates page testing and gathers valuable performance metrics. With the appropriate tool chosen, let’s quickly go over the testing methodology.

TESTING METHODOLOGY

I wanted to measure the impact of server push on website performance in a meaningful way. In order for the results to be meaningful, I needed to establish points of comparison across six separate scenarios. These scenarios are split across two facets: whether HTTP/2 or HTTP/1 is used. On HTTP/2 servers, we want to measure the effect of server push on a number of metrics. On HTTP/1 servers, we want to see how asset inlining affects performance in the same metrics, because inlining is supposed to be roughly analogous to the benefits that server push provides. Specifically, these scenarios are the following:
    • HTTP/2 without server push
      In this state, the website runs on the HTTP/2 protocol, but nothing whatsoever is pushed. The website runs “stock,” so to speak.
    • HTTP/2 pushing only CSS
      Server push is used, but only for the website’s CSS. The CSS for the website is quite small, weighing in at a little over 2 KB with Brotli compression applied.
    • Pushing the kitchen sink
      All assets in use on all pages across the website are pushed. This includes the CSS, as well as 1.4 KB of JavaScript spread across six assets, and 5.9 KB of SVG images spread across five assets. All quoted file sizes are, again, after Brotli compression has been applied.
    • HTTP/1 with no assets inlined
      The website runs on HTTP/1, and no assets are inlined to reduce the number of requests or increase rendering speed.
    • Inlining only CSS
      Only the website’s CSS is inlined.
    • Inlining the kitchen sink
      All assets in use on all pages across the website are inlined. CSS and scripts are inlined, but SVG images are base64-encoded and embedded directly into the markup. It should be noted that base64-encoded data is roughly 1.37 times larger than its unencoded equivalent.
    In each scenario, I initiated testing with the following command:
    sitespeed.io -d 1 -m 1 -n 25 -c cable -b chrome -v https://jeremywagner.me
    If you want to know the ins and outs of what this command does, you can check out the documentation. The short of it is that this command tests my website’s home page at https://jeremywagner.me with the following conditions:
    • The links in the page are not crawled. Only the specified page is tested.
    • The page is tested 25 times.
    • A “cable-like” network throttling profile is used. This translates to a round trip time of 28 milliseconds, a downstream speed of 5,000 kilobits per second and an upstream speed of 1,000 kilobits per second.
    • The test is run using Google Chrome.
    Three metrics were collected and graphed from each test:
    • first paint time
      This is the point in time at which the page can first be seen in the browser. When we strive to make a page “feel” as though it is loading quickly, this is the metric we want to reduce as much as possible.
    • DOMContentLoaded time
      This is the time at which the HTML document has completely loaded and has been parsed. Synchronous JavaScript code will block the parser and cause this figure to increase. Using the async attribute on <script> tags can help to prevent parser blocking.
    • page-loading time
      This is the time it takes for the page and its assets to fully load.
    With the parameters of the test determined, let’s see the results!

TEST OUTCOMES

    Tests were run across the six scenarios specified earlier, with the results graphed. Let’s start by looking at how first paint time is affected in each scenario:
First paint time.
    Let’s first talk a bit about how the graph is set up. The portion of the graph in blue represents the average first paint time. The orange portion is the 90th percentile. The grey portion represents the maximum first paint time.
    Now let’s talk about what we see. The slowest scenarios are both the HTTP/2- and HTTP/1-driven websites with no enhancements at all. We do see that using server push for CSS helps to render the page about 8% faster on average than if server push is not used at all, and even about 5% faster than inlining CSS on an HTTP/1 server.
    When we push all assets that we possibly can, however, the picture changes somewhat. First paint times increase slightly. In HTTP/1 workflows where we inline everything we possibly can, we achieve performance similar to when we push assets, albeit slightly less so.
    The verdict here is clear: With server push, we can achieve results that are slightly better than what we can achieve on HTTP/1 with inlining. When we push or inline many assets, however, we observe diminishing returns.
    It’s worth noting that either using server push or inlining is better than no enhancement at all for first-time visitors. It’s also worth noting that these tests and experiments are being run on a website with small assets, so this test case may not reflect what’s achievable for your website.
    Let’s examine the performance impacts of each scenario on DOMContentLoaded time:
DOMContentLoaded time.
The trends here aren’t much different from what we saw in the previous graph, except for one notable departure: the instance in which we inline as many assets as practical on an HTTP/1 connection yields a very low DOMContentLoaded time. This is presumably because inlining reduces the number of assets that need to be downloaded, which allows the parser to go about its business without interruption.
    Now, let’s look at how page-loading times are affected in each scenario:
Page-loading time.
The established trends from earlier measurements generally persist here as well. I found that pushing only the CSS realized the greatest benefit to loading time. Pushing too many assets could, on some occasions, make the web server a bit sluggish, but it was still better than not pushing anything at all. Server push also yielded better overall loading times than inlining did.
    Before we conclude this article, let’s talk about a few caveats you should be aware of when it comes to server push.

Caveats On Using Server Push

    Server push isn’t a panacea for your website’s performance maladies. It has a few drawbacks that you need to be cognizant of.

YOU CAN PUSH TOO MUCH STUFF

In one of the scenarios above, I am pushing a lot of assets, but all of them together represent only a small portion of the overall data. Pushing a lot of very large assets at once could actually delay your page from painting or becoming interactive, because the browser needs to download not only the HTML, but all of the other assets that are being pushed alongside it. Your best bet is to be selective in what you push. Style sheets are a good place to start (so long as they aren’t massive). Then evaluate what else makes sense to push.

YOU CAN PUSH SOMETHING THAT’S NOT ON THE PAGE

    This is not necessarily a bad thing if you have visitor analytics to back up this strategy. A good example of this may be a multi-page registration form, where you push assets for the next page in the sign-up process. Let’s be crystal clear, though: If you don’t know whether you should force the user to preemptively load assets for a page they haven’t seen yet, then don’t do it. Some users might be on restricted data plans, and you could be costing them real money.

CONFIGURE YOUR HTTP/2 SERVER PROPERLY

Some servers give you a lot of server push-related configuration options. Apache’s mod_http2 has some options for configuring how assets are pushed. The H2PushPriority setting should be of particular interest, although in the case of my server, I left it at the default setting. Some experimentation could yield additional performance benefits. Every web server has a different set of switches and dials for you to experiment with, so read the manual for yours and find out what’s available!

PUSHES MAY NOT BE CACHED

    There has been some gnashing of teeth over whether server push could hurt performance in that returning visitors may have assets needlessly pushed to them again. Some servers do their best to mitigate this. Apache’s mod_http2 uses the H2PushDiarySize setting to optimize this somewhat. H2O Server has a feature called Cache Aware server push that uses a cookie mechanism to remember pushed assets.
    If you don’t use H2O Server, you can achieve the same thing on your web server or in server-side code by only pushing assets in the absence of a cookie. If you’re interested in learning how to do this, then check out a post I wrote about it on CSS-Tricks. It’s also worth mentioning that browsers can send an RST_STREAM frame to signal to a server that a pushed asset is not needed. As time goes on, this scenario will be handled much more gracefully.
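As a rough sketch of that cookie technique (building on the Node.js example from earlier in this article, with an assumed cookie name), the push can be gated on the absence of a cookie like this:

    server.on('stream', (stream, headers) => {
        // Node's http2 module joins incoming cookie headers into one string.
        const cookies = headers['cookie'] || '';
        const alreadyPushed = cookies.includes('h2pushed=1');

        if (headers[':path'] === '/' && stream.pushAllowed && !alreadyPushed) {
            // First visit: no cookie yet, so push the style sheet.
            stream.pushStream({ ':path': '/css/styles.css' }, (err, pushStream) => {
                if (err) return;
                pushStream.respondWithFile('css/styles.css', { 'content-type': 'text/css' });
            });
        }

        // Set the cookie so returning visitors aren't pushed to again.
        stream.respondWithFile('index.html', {
            'content-type': 'text/html',
            'set-cookie': 'h2pushed=1; Max-Age=2592000; Path=/'
        });
    });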
As sad as it may seem, we’re nearing the end of our time together. Let’s wrap things up and talk a bit about what we’ve learned.

Final Thoughts

    If you’ve already migrated your website to HTTP/2, you have little reason not to use server push. If you have a highly complex website with many assets, start small. A good rule of thumb is to consider pushing anything that you were once comfortable inlining. A good starting point is to push your site’s CSS. If you’re feeling more adventurous after that, then consider pushing other stuff. Always test changes to see how they affect performance. You’ll likely realize some benefit from this feature if you tinker with it enough.
    If you’re not using a cache-aware server push mechanism like H2O Server’s, consider tracking your users with a cookie and only pushing assets to them in the absence of that cookie. This will minimize unnecessary pushes to known users, while improving performance for unknown users. This not only is good for performance, but also shows respect to your users with restricted data plans.
    All that’s left for you now is to try out server push for yourself. So get out there and see what this feature can do for you and your users! If you want to know more about server push, check out the following resources:
    Thanks to Yoav Weiss for clarifying that the as attribute is required (and not optional as the original article stated), as well as a couple of other minor technical issues. Additional thanks goes to Jake Archibald for pointing out that the preload resource hint is an optimization distinct from server push.
    This article is about an HTTP/2 feature named server push. This and many other topics are covered in Jeremy’s book Web Performance in Action. You can get it or any other Manning Publications book for 42% off with the coupon code sswagner!