I finally read the book I mentioned in my previous post. Here are the 14 rules presented by Steve Souders, each with my short description.
Make fewer HTTP requests
The main rule, which several of the later rules build on. It's also the most effective one for first-time visitors to your web site and still pretty good for subsequent views. The chapter has sub-chapters with more detailed rules focused on particular components of a page. If you stick to all of them you'll make fewer HTTP requests and your page will be faster. Those more detailed rules are: image maps (56% better performance on the sample page), CSS sprites (57% better performance), inline images (50% better performance) and combined scripts and stylesheets.
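As a quick illustration of two of those techniques, here is a minimal sketch of combined files and of a CSS sprite; the file names, class names and coordinates are made up for the example.

```html
<!-- One combined stylesheet and one combined script instead of many small files -->
<link rel="stylesheet" href="combined.css">
<script src="combined.js"></script>

<!-- CSS sprite: a single download (sprite.png) serves several icons -->
<style>
  .icon        { display: inline-block; width: 16px; height: 16px;
                 background-image: url(sprite.png); }
  .icon-home   { background-position: 0 0; }
  .icon-search { background-position: -16px 0; } /* second icon in the same image */
</style>
<span class="icon icon-home"></span>
<span class="icon icon-search"></span>
```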
Use a content delivery network
A CDN is a collection of web servers distributed across multiple geographic locations. They all hold the same data and deliver it to users more efficiently than a single web server could. The author's example page loaded 18% faster with a CDN than without one. I'm surprised Steve Souders put this rule so near the beginning. In my opinion the savings here are smaller than those of the next rule, and sometimes it doesn't make any sense to buy a CDN service; it's something from the nice-to-have category once your page finally gets a bigger audience. Interestingly, in Poland I found only one company on Google that offers a CDN service (but I only spent five minutes searching, so maybe the market is bigger). My feeling is that Polish companies usually build their own CDN architecture.
Add an expires header
As I mentioned before, implementing this rule gave better results than the previous one. What's important, though, is that the improvement doesn't help the first time you visit the page; it pays off on subsequent views. How to achieve it was shown in my previous post: an easy reconfiguration of your web server. The example provided by the author showed a 57% reduction in page load time.
And it’s all based on the caching system. The only problem with this you can meet is when you change already cached by user’s browser file. It’s quite likely the user won’t get updated file because the browser wouldn’t even ask for the file because its cached version didn’t expired. The solution is really simple: include a version number in your files’ names. Every time you want to be sure users will get the newest version of a file you’ve just changed, change the number everywhere the file was included and the browser will download it.
Gzip components
Once again, the sample page loaded 53% faster when its components were compressed. However, we need to be careful here. We have to remember that gzipping costs a higher CPU usage on our server and also on our users' computers. But it's possible to gzip only some chosen files, i.e. the bigger ones: the author presents the Apache directive mod_gzip_minimum_file_size, which we can set to 1 or 2K so the server compresses only files above that size.

Another issue is proxy servers sitting between our web server and users. One user visits our page through a proxy server; his browser supports gzip, so the response from our server is compressed content. Another user visits our page through the same proxy server, but her browser doesn't support gzip and still gets the compressed content. To solve this problem we need to include Accept-Encoding in the server's Vary response header.
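A sketch of the corresponding Apache configuration; the exact modules depend on the Apache version (mod_gzip for 1.3, mod_deflate for 2.x), and the 1K threshold is just the value discussed above:

```apache
# Apache 1.3 with mod_gzip: compress only text components above roughly 1K
<IfModule mod_gzip.c>
    mod_gzip_on                Yes
    mod_gzip_minimum_file_size 1024
    mod_gzip_item_include      mime "^text/.*"
    mod_gzip_item_include      file "\.js$"
    mod_gzip_item_include      file "\.css$"
</IfModule>

# Tell proxies to cache the compressed and uncompressed variants separately
<IfModule mod_headers.c>
    Header append Vary Accept-Encoding
</IfModule>
```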
Put stylesheets at the top
By going through this chapter you'll end up with a more detailed rule: put your stylesheets in the document HEAD using the LINK tag.
This rule is more about user experience. Steve Souders explains two phenomena: the blank white screen and the Flash of Unstyled Content (FOUC). By putting our stylesheets at the top we risk less of that unwanted experience and we don't block progressive rendering of the page.
Put scripts at the bottom
This chapter is very similar to the previous one; again it's more about good user experience. Scripts block parallel downloading, and they also block progressive rendering for all content below the script. That's slightly different from CSS, where progressive rendering is blocked until the stylesheets have been downloaded.
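Put together, the page skeleton these two rules recommend looks roughly like this (file names are placeholders):

```html
<!DOCTYPE html>
<html>
<head>
  <!-- Stylesheets go in the HEAD, via LINK, so the page can render progressively -->
  <link rel="stylesheet" href="styles.css">
</head>
<body>
  <p>Page content renders while the rest of the document streams in...</p>

  <!-- Scripts go last so they don't block downloads and rendering of the content above -->
  <script src="app.js"></script>
</body>
</html>
```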
Avoid CSS expressions
Another user-experience rule, but this time it's one of the few rules that addresses the performance of a page after it has been loaded. I have never used CSS expressions, because I never found a problem that could only be solved with one. For a different page background we can use something like SASS on the server side; for a different width of the page or of an element on it we use JavaScript.
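For reference, this is roughly what a CSS expression looks like (an IE-only feature; the selector and breakpoint here are made up). The expression is re-evaluated very frequently, for example on mouse moves and resizes, which is exactly why the rule says to avoid it:

```css
/* IE-only CSS expression: recomputed on many browser events */
#container {
  width: expression(document.body.clientWidth < 600 ? "600px" : "auto");
}
```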
Make JavaScript and CSS external
This one is quite tricky. The test at the beginning of the chapter shows that inlining scripts and CSS gives a page better performance. But that is only true as long as you don't have many page views. If you make your scripts and CSS external, it's more likely they'll be cached by the user's browser, and that can result in more than 50% faster page loading, as described in the previous chapters.
However, the author does find one exception where inlining is preferable: the home page. Three metrics are presented by Steve Souders in this chapter: page views, empty cache vs. primed cache, and component reuse. For a home page, analyzing these metrics creates an inclination toward inlining over using external files.
Reduce DNS Lookups
The chapter starts with an introduction to the Internet, IP addresses and the Domain Name System. Then we read about the DNS caches kept by our (or our users') operating systems and browsers, and finally we learn that each new DNS lookup can take 20-120 milliseconds. It's better to avoid making many of those lookups.
Minify JavaScript
We learn here about minification and obfuscation. There are also two tests, one with small files and one with bigger files. In the first case the page loads 17-19% faster than the unminified baseline, and in the bigger-files test it loads 30-31% faster.
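To make the difference concrete, here is a tiny made-up example of the same JavaScript before and after minification (obfuscation would additionally rename toggleMenu and its variables):

```javascript
// Before minification: comments and whitespace are sent to every visitor
function toggleMenu(menuId) {
    var menu = document.getElementById(menuId);
    menu.style.display = (menu.style.display === "none") ? "block" : "none";
}

// After minification: the same behaviour in far fewer bytes
function toggleMenu(e){var t=document.getElementById(e);t.style.display=t.style.display==="none"?"block":"none"}
```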
Avoid redirects
Redirects are worse than CSS at the bottom of a page or JavaScript at the top of it. The chapter describes a few alternatives that help us avoid redirects. We should especially remember the missing trailing slash, which causes the most wasteful kind of redirect.
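The trailing-slash case is the easiest one to fix at the source: when the target is a directory, link to it with the slash already in place so the server never has to answer with a redirect first (example.com stands in for any host):

```html
<!-- Triggers an extra 301 redirect to http://example.com/blog/ before the page loads -->
<a href="http://example.com/blog">Blog</a>

<!-- No redirect: the trailing slash is already there -->
<a href="http://example.com/blog/">Blog</a>
```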
Remove duplicate scripts
Duplicate scripts happen even on big portals, or rather mostly on big portals. The factors vary, but the bigger the development team and the bigger the codebase, the bigger the chance that a script will be duplicated. Duplicates hurt performance in two ways: unnecessary HTTP requests and wasted JavaScript execution.
As a way of avoiding duplicate scripts, the author proposes implementing a script management module in our templating system.
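A minimal sketch of what such a module could look like, written here in JavaScript for illustration; insertScript and alreadyInserted are hypothetical names, not taken from the book:

```javascript
// Remembers which scripts the current page has already referenced
var alreadyInserted = {};

// Templates call this instead of writing <script> tags by hand
function insertScript(src) {
  if (alreadyInserted[src]) {
    return "";                // duplicate: emit nothing, so no extra HTTP request
  }
  alreadyInserted[src] = true;
  return '<script src="' + src + '"></script>';
}

// Only the first call produces a script tag
var html = insertScript("menu.js") + insertScript("menu.js");
```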
Configure ETags
ETags are a mechanism that web servers and browsers use to validate cached components. The problem is that they are typically constructed from attributes that make them unique to the specific server hosting a site, so ETags won't match when a browser gets the original component from one server and later makes a conditional GET request that goes to a different server. If the Expires header is enough for you and you don't need ETags, just remove them in your server configuration. If you have components that have to be validated based on something other than the last-modified date, ETags are a powerful way of doing that, and you can change the way they are constructed.
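If you decide to drop them, removing ETags on Apache takes only a couple of lines (a sketch; the second part assumes mod_headers is available):

```apache
# Stop generating ETags for static files
FileETag None

# Also strip any ETag header that still gets set
<IfModule mod_headers.c>
    Header unset ETag
</IfModule>
```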
Make AJAX cacheable
We learn some AJAX basics at the beginning of this chapter. The author tells us that the best way to optimize AJAX requests is to follow some of the rules introduced before. The most important one is to make the response cacheable, but we can also gzip the response, make sure there are no additional DNS lookups, minify the response if it's JavaScript, avoid redirects, and configure or remove ETags.
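A small sketch of the "make it cacheable" idea: put something that identifies the current version of the data into the URL, so the response can be served with a far future Expires header and a new URL is only requested when the data actually changes. The endpoint and the lastModified value are made up for the example:

```javascript
// lastModified would be rendered into the page by the server,
// e.g. the timestamp of the user's last data change.
var lastModified = 1390777600;

// The version in the query string makes the cached response safe to reuse;
// when the data changes, the URL changes and the browser fetches a fresh copy.
var xhr = new XMLHttpRequest();
xhr.open("GET", "/contacts.json?v=" + lastModified, true);
xhr.onload = function () {
  var contacts = JSON.parse(xhr.responseText);
  console.log(contacts.length + " contacts loaded");
};
xhr.send();
```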
Summary
Below is the list of all the rules. As you noticed, I mentioned Steve Souders' tests a few times; all of his examples can be found here.
- Make fewer HTTP requests.
- Use a content delivery network.
- Add a far future Expires header to your components.
- Gzip your scripts and stylesheets.
- Put your stylesheets in the document HEAD using the LINK tag.
- Move scripts to the bottom of the page.
- Avoid CSS expressions.
- Put your JavaScript and CSS in external files.
- Reduce DNS lookups by using Keep-Alive and fewer domains.
- Minify your JavaScript source code.
- Find ways to avoid redirects.
- Make sure scripts are included only once.
- Reconfigure or remove ETags.
- Make sure your AJAX requests follow the performance guidelines, especially having a far future Expires header.
I still hold my opinion that not all of the rules can be followed with a front-end engineer's work alone. The rules I mentioned in the previous post, adding Expires headers and gzipping our page's components, can only be done properly (and easily) with changes in the server configuration. The same goes for further rules such as reducing DNS lookups or configuring ETags. But I also still believe it does no harm for a front-end engineer to learn about these ways of making her pages perform better. She can take care of it and set the right configuration herself, or pass it to someone who knows better how to configure the web server.
I'm looking forward to reading the second part and I'll probably write about it too.
Stay tuned!
1 comment
andrew, 2014/01/27 at 00:28 (UTC 1)
More interesting reading about web performance (some of the ideas have been already covered by Steve Souders):
1) http://www.sitepoint.com/ten-quick-fixes-reduce-page-weight/
2) http://www.sitepoint.com/10-tougher-tasks-reduce-page-weight/