The browser is ubiquitous, contentious, and the one app that everybody uses on every kind of hardware imaginable. Here's how we gauge performance.
(Credit: Screenshot by Seth Rosenblatt/CNET)
The Web browser is the most-used kind of software in the world, having become the de facto way that people access the Internet. Today, most everyday computing tasks can be completed in the browser. Testing browsers can veer from incredibly complex to shockingly simple, depending on what you're looking for and why. At CNET, we prefer a holistic approach to browser benchmarking, combining synthetic benchmarks that measure general browser behavior with several "real-world" tests that look at browser performance in common scenarios.
Note about mobile testing: We are still finalizing our standards for mobile browser testing, and will update this post as soon as they're ready. For now, the following procedures apply only to desktop browsers.
Is your favorite browser on our test list?
Unless your favorite browser is some obscure remixed version of Netscape, chances are good we test it. However, browser testing is complicated by the fact that two of the five major browsers, Firefox and Chrome, update on a six-week release cycle. Sometimes those updates bring dramatic changes, but often they don't. Because of the sadly human limitations of your humble editors, CNET will not be testing all browsers simultaneously. Instead, we will conduct quarterly tests of the most-used and best-known browsers, and biannual tests of a wider range of competitors. Tests are also staggered by platform, so Windows browsers are not tested at the same time as Mac, Android, or iOS browsers.
Desktop browsers tested, both Windows and Mac unless otherwise noted:
Chrome
Firefox
Internet Explorer 9 (Windows 7 only)
Internet Explorer 10 (Windows 8 only)
Opera
Safari (Mac only)

Desktop browsers tested biannually will include:
Maxthon
Avant (Windows only)
How we test desktop browsers
We run each of the following tests three times, restarting the computer before each run so that the browser starts "cold." We also wait 30 seconds after launching the browser to ensure that any background processes have finished. We then average the three runs.
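To make the averaging step concrete, here is a minimal sketch of that procedure. The runBenchmark helper is hypothetical; it stands in for one complete cold run of whichever benchmark is being tested on a freshly rebooted machine, after the 30-second settling period.

```typescript
// Minimal sketch of the run-three-times-and-average step. runBenchmark is a
// hypothetical stand-in for one complete "cold" run of a benchmark; rebooting
// the machine between runs is only represented by this comment.
async function averageOfThreeRuns(
  runBenchmark: () => Promise<number>
): Promise<number> {
  const results: number[] = [];
  for (let i = 0; i < 3; i++) {
    results.push(await runBenchmark());
  }
  return results.reduce((sum, r) => sum + r, 0) / results.length;
}
```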
(Credit: Screenshot by Seth Rosenblatt/CNET)
Performance benchmarks
The Acid3 test from the Web Standards Project checks browser compliance with accepted Web standards. Slightly outdated because it doesn't cover HTML5, it remains a good way to establish a baseline. Browsers that don't hit 100 out of 100 on the Acid3 are behind the times in a fundamental, crucial way.
Google Octane, the successor to Google's V8 benchmark test, looks at JavaScript performance by testing such areas as code optimization, encryption and decryption, emulation, and array manipulation, and assigning each sub-test a number. The higher the final score, the better.
Mozilla Kraken is another JavaScript performance test; it measures how long the browser takes to complete workloads in audio processing, imaging, AI, JSON parsing, and encryption. A smaller final score is better.
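The two suites score in opposite directions, which is easy to illustrate. The sketch below is only a schematic contrast, not how either suite actually computes its numbers; the workload function and iteration count are invented for illustration.

```typescript
// Schematic contrast between time-based scoring (Kraken-style, lower is
// better) and throughput-based scoring (Octane-style, higher is better).
// workload() is an invented stand-in for a JavaScript-heavy task.
function workload(): void {
  JSON.parse(JSON.stringify({ items: Array.from({ length: 1000 }, (_, i) => i) }));
}

// Kraken-style: elapsed milliseconds; a smaller number is better.
function timeScore(iterations: number): number {
  const start = performance.now();
  for (let i = 0; i < iterations; i++) workload();
  return performance.now() - start;
}

// Octane-style: runs per second; a larger number is better.
function throughputScore(iterations: number): number {
  return (iterations / timeScore(iterations)) * 1000;
}
```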
The HTML5 Test assigns points for each HTML5 feature that the browser supports, out of a total of 500. This is a rough way to gauge how forward-looking the browser is.
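Feature-support tests of this kind boil down to probing for APIs and awarding points for each one that is present. The sketch below is a toy version; the feature list, point values, and total are invented and far smaller than the real test's 500-point catalogue.

```typescript
// Toy illustration of a feature-support score: probe for a few HTML5 APIs
// in the browser and award points for each one that exists. The features
// and point values here are invented for illustration.
const checks: Array<{ name: string; points: number; supported: () => boolean }> = [
  { name: "canvas", points: 5, supported: () => !!document.createElement("canvas").getContext },
  { name: "geolocation", points: 5, supported: () => "geolocation" in navigator },
  { name: "localStorage", points: 5, supported: () => typeof window.localStorage !== "undefined" },
  { name: "web workers", points: 5, supported: () => typeof Worker !== "undefined" },
];

const earned = checks.reduce((sum, c) => sum + (c.supported() ? c.points : 0), 0);
const possible = checks.reduce((sum, c) => sum + c.points, 0);
console.log(`Supported ${earned} of ${possible} points`);
```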
JSGameBench, GUIMark 3 (gaming test, text test), and Microsoft FishIE Tank look at HTML5 Canvas performance in several different game-like environments. Canvas is an important part of HTML5 to test because it renders the nifty 2D images and shapes that can move across your screen.
(Credit: Screenshot by Seth Rosenblatt/CNET)
Each of these tests uses a different metric. FishIE Tank, for example, lets the tester set the number of fish on the screen, then shows how many frames per second the browser can render them at. Microsoft Chalkboard runs a series of timed tests on HTML5 panning, zooming, and scaling; faster is better.
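A frames-per-second test of this kind typically draws a fixed number of sprites on a canvas every frame and counts how many frames finish each second. The sketch below shows the general idea only; the drawing code and function names are invented and are not FishIE Tank's actual implementation.

```typescript
// Rough sketch of a fish-tank-style FPS measurement: draw spriteCount
// placeholder rectangles each frame and report frames completed per second.
function measureCanvasFps(canvas: HTMLCanvasElement, spriteCount: number,
                          report: (fps: number) => void): void {
  const maybeCtx = canvas.getContext("2d");
  if (!maybeCtx) return;
  const ctx: CanvasRenderingContext2D = maybeCtx;
  let frames = 0;
  let windowStart = performance.now();

  function drawFrame(now: number): void {
    ctx.clearRect(0, 0, canvas.width, canvas.height);
    for (let i = 0; i < spriteCount; i++) {
      // Stand-in for drawing one animated fish sprite.
      ctx.fillRect((now / 10 + i * 15) % canvas.width, (i * 7) % canvas.height, 10, 10);
    }
    frames++;
    if (now - windowStart >= 1000) {
      report(frames); // frames rendered in the past second
      frames = 0;
      windowStart = now;
    }
    requestAnimationFrame(drawFrame);
  }
  requestAnimationFrame(drawFrame);
}
```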
Facebook Ringmark checks HTML5 feature support and is geared toward the needs of mobile browsers. Nevertheless, it works well on the desktop and provides a good and rare point of comparison between desktop and mobile.
Real-world tests
We also perform four "real-world" tests to see how the browser behaves under everyday conditions. These tests look at specific browser behaviors that you're likely to encounter: startup from a cold boot, memory used while open, shutdown time, and wake from sleep.
Like the benchmark testing, each test is performed three times and then averaged. Unlike those tests, which are run with only the tab containing the test open, our real-world tests are conducted twice: once with five tabs open, and once with 50. This replicates the real-world scenario of keeping many tabs open simultaneously, something that many people do (even if you don't).
Tabs are chosen according to categories based on realistic use cases: search engine, streaming media, news site, gaming site, and Web mail. The five tabs we open in the less-intensive test are: Google.com, CNET.com, Outlook.com, YouTube.com, and Pandora.com. We're not going to list all 50 sites here for space considerations.
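As one illustration of how the startup measurement could be automated, the Node.js sketch below serves a blank "ready" page locally, launches the browser with the five test tabs plus that page, and measures the time from launch until the page is requested. The browser path, port, and measurement approach are assumptions for illustration, not necessarily how CNET runs the test; memory, shutdown, and wake-from-sleep measurements are not shown.

```typescript
// Sketch of timing browser startup (Node.js): launch the browser pointed at
// the five test tabs plus a local "ready" page, and measure the delay from
// process launch until the local page is requested. The browser path and
// port are assumptions for illustration only.
import { createServer } from "node:http";
import { spawn } from "node:child_process";

const BROWSER = "/usr/bin/google-chrome"; // assumed install location
const TABS = [
  "https://www.google.com", "https://www.cnet.com", "https://www.outlook.com",
  "https://www.youtube.com", "https://www.pandora.com",
];

let launchedAt = 0;

const server = createServer((_req, res) => {
  // The browser requests this page once it is up and loading its tabs;
  // treat its arrival as "startup complete."
  console.log(`Startup took ${Date.now() - launchedAt} ms`);
  res.end("<html><body>ready</body></html>");
  server.close();
});

server.listen(8080, () => {
  launchedAt = Date.now();
  spawn(BROWSER, [...TABS, "http://localhost:8080/"], { stdio: "ignore" });
});
```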