Actual JavaScript Engine Performance (crockford.com)
107 points by riffraff on April 21, 2011 | hide | past | favorite | 62 comments


If I were trying to think of something less representative of a large, well-written JavaScript application, I would have a hard time coming up with something better than JSLint.

It doesn't touch the DOM, is not event-driven, doesn't involve loading lots of files, and does not render anything.


> It doesn't touch the DOM, is not event-driven, doesn't involve loading lots of files, and does not render anything.

It can be argued what constitutes the JS engine and what belongs to other parts of the browser. I'd tend toward a 'pure JS' view of the JS engine, excluding the DOM document and DOM events. [0]

Measuring the speed of a 'naked, bare' JS engine makes some sense, even if not directly for web developers and web users. As other posters pointed out, JS may be used for server-side scripting or general data processing, where access to the DOM is irrelevant: just raw JS and some custom API for I/O.

However, I don't fully trust this method of benchmarking. There are optimization techniques that may skew results quite a bit. I suspect dead-code elimination could end up removing whole loops from JSLint's code if they have no side effects. That was implicated in discussions about some earlier benchmarks by other authors.
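A hypothetical illustration of the dead-code concern (mine, not from the article; function names and loop bodies are invented):

```javascript
// If a loop's result is never used, an optimizing engine is free to
// remove the work being "measured", so the timing reflects almost
// nothing.
function timeLoop() {
  var t0 = Date.now();
  for (var i = 0; i < 1e6; i++) {
    var unused = Math.sqrt(i); // no side effects: eligible for elimination
  }
  return Date.now() - t0; // may measure (almost) nothing
}

// A common countermeasure is to keep a live result the engine cannot
// prove is unused:
function timeLoopKeepAlive() {
  var t0 = Date.now();
  var sum = 0;
  for (var i = 0; i < 1e6; i++) {
    sum += Math.sqrt(i);
  }
  return { ms: Date.now() - t0, sum: sum }; // sum keeps the loop live
}
```

Whether a given engine actually eliminates the first loop depends on its optimizer; the point is that a benchmark harness should not leave the question open.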

----

[0] Not sure if it makes a strong case, but here goes: when Apple forked Konqueror's KHTML into WebKit, they used their own JS engine instead of KJS.


I agree that it's a good idea to test raw JS performance, particularly for browser developers. However, it's extremely disingenuous to represent it as a typical workload of a JavaScript browser application when it's pretty much the opposite.


You are 100% correct. Benchmarking JavaScript on "well-written JavaScript applications" doesn't even make sense. When benchmarking, we should focus on cases where speed actually matters.

For instance:

What FPS can your browser get when performing certain rendering operations on an HTML5 canvas?

Or how well does it perform with WebGL? Oh wait, IE10 won't even have WebGL.


True.

Let's be honest here. Most people are using jQuery to write new applications. Shouldn't we mostly be benchmarking jQuery and some common plugins? Maybe Prototype as well, and benchmark the most common functions, etc.


Chrome should just pre-compile jQuery using SSE and the like, and provide an ABI/API.


There's http://dromaeo.com/?jslib by John Resig. Chrome is pretty good at it.


There are many server-side applications that do exactly what you just described and JavaScript is starting to move into that space. So this is quite relevant and will become even more so over time.


The original post was comparing client-side browsers, not server-side javascript engines.


Exactly; it specifically tests execution in browsers.

And that aside, I don't think it's anywhere near a good test for server-side JavaScript either. Server-side JS bottlenecks in particular are mostly about shifting bits: pulling data from the database or reading from the HTTP socket. They usually do some string manipulation as well, but it's fairly minor compared to the I/O.


It is worth taking this with a liberal sprinkling of salt.

The first problem is that there is no documentation of the method. It appears that the results were obtained by simply running JSLint on itself once. Without some indication of how the results are obtained and how stable they are, it is essentially meaningless. Of course this is quite fixable, but until it is fixed the data presented is basically worthless.
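A minimal sketch (mine, not the article's) of the kind of method that would address this: discard warm-up runs, time many runs, and report the mean and spread rather than a single number. `workload` here is a placeholder for running JSLint on its own source.

```javascript
// Run `workload` repeatedly and summarize the timings. Warm-up runs
// let JIT compilation settle before measurement begins.
function measure(workload, warmup, runs) {
  for (var i = 0; i < warmup; i++) workload();
  var times = [];
  for (var j = 0; j < runs; j++) {
    var t0 = Date.now();
    workload();
    times.push(Date.now() - t0);
  }
  var mean = times.reduce(function (a, b) { return a + b; }, 0) / times.length;
  return {
    mean: mean,
    min: Math.min.apply(null, times),
    max: Math.max.apply(null, times)
  };
}
```

Reporting min/max alongside the mean at least shows whether the result is stable enough to compare across engines.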

The second, and arguably larger, problem is that the page makes grandiose claims about the applicability of the benchmark that it doesn't even attempt to back up. In particular, it claims that performance on JSLint will be a better proxy for "other large, well-written JavaScript applications" than existing benchmarks. If we examine the Microsoft paper linked, it says:

"Specific common behaviors of real web sites that are underemphasized in the benchmarks include event-driven execution, instruction mix similarity, cold-code dominance, and the prevalence of short functions"

It is not demonstrated, nor is it obviously apparent, that JSLint will be any more typical in these respects than other benchmarks. I haven't examined the JSLint source code but I assume it isn't event-driven, deals mainly with string manipulation and makes many calls to the same few functions during parsing. If my guesses are correct it sounds like it will not, on its own, be a significantly better proxy for real-world performance than existing benchmarks. Of course it may be that it exercises a different subset of the ECMAScript engine than existing benchmarks; in this case a test like this would be a good addition to, rather than replacement for, an existing benchmark suite.


V8/Chrome uses the constructor as a big part of the heuristic that determines the 'hidden class' of an object. The other part of the heuristic is the names and ordering of the properties on the object.

Unfortunately, JSLint creates all its objects with Object.create rather than with a constructor function. This causes objects with a similar structure to have different 'hidden classes', which makes most of the optimizations in V8 break down.

This is definitely a fixable problem, and the benchmark is a good illustration of the issue. I'm not sure if the benchmark illustrates 'actual performance' more than any other benchmark, but it does seem to illustrate something that should go fast. In this respect it is ahead of Kraken 1.0, which illustrates how fast you can multiply NaN by undefined.
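A rough sketch of the contrast being described (property names and the engine behavior noted in the comments are illustrative; actual V8 heuristics may differ):

```javascript
// With a constructor, instances get the same property names in the
// same order, so an engine like V8 can give them a shared hidden class
// and optimize property access.
function Token(id, value) {
  this.id = id;
  this.value = value;
}
var a = new Token("(", "paren");
var b = new Token("+", "plus");

// JSLint-style creation: Object.create plus per-object assignment.
// Per the comment above, each such object may end up with its own
// hidden class, defeating those optimizations even though the
// resulting structure is identical.
var proto = { report: function () { return this.id + ":" + this.value; } };
var c = Object.create(proto);
c.id = "(";
c.value = "paren";
```

Both styles produce objects that behave the same; the difference is only in what the engine can infer about their shape.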


Interestingly, the second argument to Object.create was specifically designed (by me) so that an object's "shape" could be statically determined. If the descriptor is all literals (they usually are), then each Object.create call site is essentially a construction site for a "class" of objects that share a common structure.

As Erik says, this should be fixable. It sounds to me like Crock's coding style is just ahead of the engine implementation curve. Hopefully it will help push this optimization into the actual implementations.
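For illustration (the property names are made up), the descriptor form being described looks like this; when the second argument is all literals, the call site pins down the shape of every object it produces:

```javascript
// Every object built at this call site has exactly the properties
// x and y with the same attributes: a statically knowable shape.
function makePoint(x, y) {
  return Object.create(Object.prototype, {
    x: { value: x, writable: true, enumerable: true, configurable: true },
    y: { value: y, writable: true, enumerable: true, configurable: true }
  });
}
var p = makePoint(1, 2);
```

An engine could, in principle, treat `makePoint` like a constructor and give all its results one hidden class, which is the optimization opportunity the comment points at.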


Crockford's code style is very peculiar:

https://github.com/douglascrockford/JSLint/blob/master/jslin...

(I really hope that this is not coding-style-of-the-future)

As you can see, a single Object.create call site becomes a construction site for objects that might _not_ share a common structure (they potentially have different prototypes).


I don't see why it's an issue that Object.create is used; as long as attributes are assigned in a consistent order, the optimization should still hold, since each assignment (or read) site would be monomorphic with respect to the current map (to use the Self terminology), even if the type isn't a useful indicator.

(If I'm horribly off let me know, I'm using my knowledge of PyPy to try to make some guesses about how precisely V8 applies these techniques).


In V8, Object.create(arg) is implemented roughly as var o = {}; o.__proto__ = arg. The first statement sets the default hidden class for the newly created object. Since hidden classes in V8 capture the prototype structure as well, the class has to change in the second statement. The problem is that, unlike with normal properties, no hidden-class transition happens, so we get a brand-new hidden class each time.
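Spelled out, the rough desugaring described above looks like this (a sketch for illustration, not V8's actual implementation):

```javascript
// Object.create(proto) behaves roughly like:
function createSketch(proto) {
  var o = {};          // 1. o gets the default hidden class for {}
  o.__proto__ = proto; // 2. prototype swap: no recorded transition,
                       //    so each call yields a fresh hidden class
  return o;
}

var proto = { kind: "token" };
var o = createSketch(proto);
```

The observable behavior matches Object.create(proto); the cost is internal, in how the engine classifies the resulting object.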


"So I have come up with a benchmark that should be more representative of large, well-written JavaScript applications. It is in fact a popular, large, well-written JavaScript application: JSLint."

Translation From Crockford Speak to English:

So I have come up with a benchmark that should be more representative of my code. It is in fact my code.


That's kind of mean.


I think even this test fails to capture true JS performance due to its lack of DOM performance testing. This may not be terribly fair (is the performance of the DOM API still "JavaScript"?), but then again, I'd prefer "useful" over "fair." Most of my JS tends to interact with the DOM in some manner, so it won't really matter if the pure JS stuff screams if the DOM is slow.

Another question: Crockford claims that JSLint is more indicative of true JS performance, but doesn't really explain why. I think we're all predisposed to take him at his word, but I'd still like an explanation.


> Crockford claims that JSLint is more indicative of true JS performance, but doesn't really explain why. I think we're all predisposed to take him at his word, but I'd still like an explanation.

"So I have come up with a benchmark that should be more representative of large, well-written JavaScript applications. It is in fact a popular, large, well-written JavaScript application"


That doesn't fly. It's missing an explanation of the characteristics of JSLint, and how those characteristics are often shared with other large JavaScript applications. My own personal intuition is that JSLint is not representative of typical web applications, so I need convincing.

To get an idea of where I'm coming from on this, check out "The Landscape of Parallel Computing Research: A View from Berkeley": http://www.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-18... They systematically classified several different kinds of computational patterns and explained how those patterns appear in application areas. I don't expect a blog post to have that level of rigor, but I expect something at least in that form.


I would recommend that anybody interested in parallelism check out the article you linked to. It can be a tad verbose, but it's got a lot of really good insights.


At the same time, "Programming Language Analysis Tool With Little To No Front-End or Real-Time Interaction" is not terribly representative of the average web application.

(what exactly is a "Javascript application" anyway? This seems like an attempt to pretend that JS is the only thing required to create a web application...)


Sure, but all "large, well-written JavaScript applications" are not created equal. This test in particular tests JSLint's performance at parsing its own JS source file. How many large Javascript applications spend most of their runtime parsing Javascript rather than traversing the DOM? I suspect not many.


I study visualization websites frequently, and found Chrome much faster than other browsers (among all stable releases). Example page (not sure how well written): http://vis.stanford.edu/protovis/ex/force.html


Chrome 10.0.648 certainly does better than Firefox 4.0.1, when running on Ubuntu, for the example you gave.

This has also been my experience; I don't know what these benchmarks are doing, but Chrome is still the fastest in my perception.

Also, the article doesn't mention the OS used for the tests. That matters, since Firefox 4.0.1 performance suffers on Linux and Windows XP versus Windows 7, which makes the comparison unfair; a good benchmark would report results on multiple OSes.


It shouldn't matter much.

The test exercises only the JavaScript engine; it does not do any rendering by itself.


I agree; throughout the development of Clojure Atlas (which uses Raphael and arborjs heavily), I've observed this rough ranking:

Chrome > IE >= Safari > Firefox


Make sure you're using mozRequestAnimationFrame() and not setInterval(). This is by far the most common reason for slow performance in Firefox 4.x. I looked through your source and found only setInterval(), so that may be the problem.
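A sketch of the suggested change (the setTimeout fallback is my addition so the snippet is self-contained; in a browser you'd use window.requestAnimationFrame or the prefixed window.mozRequestAnimationFrame):

```javascript
// Prefer the browser's frame scheduler; fall back to a ~60fps timer.
var raf = (typeof window !== "undefined" &&
           (window.requestAnimationFrame || window.mozRequestAnimationFrame)) ||
          function (cb) { return setTimeout(cb, 1000 / 60); };

// Instead of setInterval(draw, 16), re-request a frame after each draw
// so the browser can pace rendering and skip work in background tabs.
function startLoop(draw) {
  function frame() {
    draw();      // render one frame
    raf(frame);  // ask for the next frame when the browser is ready
  }
  raf(frame);
}
```

With setInterval the callback fires on a fixed timer regardless of the paint schedule; requestAnimationFrame ties the loop to the browser's actual frame timing, which is why it tends to fix sluggish animation in Firefox 4.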


More than that: from studying various VM techniques, it seems to me that most optimizations that make JavaScript fast for computation actually harm the performance of calling into foreign code (like the DOM) even more, because in most browsers there is often a quite significant interface mismatch between the JS VM and the DOM code (e.g. plain JS objects living on a separately managed heap from DOM objects).


Wow, IE10 has some legs. I'm shocked to see how Chrome performance compares.

If Chrome is on the slower side, I think the real winner is everyone who uses the web, because that means all of these modern browsers are FAST.


Comparing IE10 to Chrome 10 isn't exactly apples to apples. Chrome 11 or 12 should be compared to IE10 since those are the development versions.


> Comparing IE10 to Chrome 10 isn't exactly apples to apples. Chrome 11 or 12 should be compared to IE10 since those are the development versions.

That is true - comparing an unreleased IE to a released Chrome isn't fair.

However, the released Chrome was much slower than all other released browsers - Firefox, Safari, Opera, even IE9. Chrome usually does well on benchmarks, so it is interesting to see it doing so poorly on real-world code.


I wonder if this will/can affect Node.js. Will they ditch V8 for the faster, free JavaScript engine?


I agree; your point is probably the most interesting part. At the time of my comment, IE10 beating Chrome seemed to be stealing the show, hence my mention of the apples-to-oranges comparison.


Could you provide links to recommended builds? If Crockford is reading maybe he can add them.


The Chromium authors provide information on how to use newer builds here: http://www.chromium.org/getting-involved/dev-channel

There's also an AppleScript to download the latest 'stable' continuous build of Chromium (full disclosure: I wrote/modified parts of it) here: https://gist.github.com/370298


You can get installers for whatever the latest build is in each channel here: http://dev.chromium.org/getting-involved/dev-channel

It's updated so frequently I'm not sure there's a better way to get a specific build.


For IE10, MS pushed out a CTP and said it was a good build to test with. Is there a build in the Chrome dev branches that is more vetted than the others? I just wouldn't want someone wasting their time posting results and someone else saying, "Duh, it's obvious that Chrome 12.0.0.12.a had JavaScript loop optimization turned off, didn't you read the commit logs? Those results are no good. You should go back to Chrome 12.0.0.5.c to get one that's usable for JS testing."


Yeah, I certainly understand and appreciate the concern. I simply don't know if there's a set of specific builds for that purpose or I would have certainly included them.


You can get specific Chromium builds for most platforms here: http://build.chromium.org/f/chromium/snapshots/


What about comparing Chrome 10 against IE6? They are both installed on my work computer. I think that is more fair.


Browsers have been optimizing the hell out of their JavaScript engines over the past several years, which is awesome for both client-side web apps and server-side JavaScript. Five years ago JavaScript performance may have been the bottleneck, but now it's more often DOM operations.

Dromaeo (http://dromaeo.com/) from John Resig is one of the few JavaScript benchmarks that includes DOM performance tests. We need more of these.

I think the only way to get a good picture of JavaScript (and DOM) performance is to put each benchmark result for each engine in a matrix and draw conclusions from there. Browsers will inevitably optimize for popular benchmarks (see: Acid3) or have benchmarks that suit their engine. If we aggregate everyone's benchmarks and that is the standard for measuring JavaScript/browser performance there is less incentive to do these micro-optimizations.
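As a sketch of the "matrix" idea (engine names and scores below are invented for illustration): collect each engine's per-benchmark scores and combine them with a geometric mean, so no single benchmark, however heavily an engine has optimized for it, dominates the aggregate.

```javascript
// Geometric mean is the usual choice for combining benchmark scores:
// unlike the arithmetic mean, it is insensitive to which engine or
// benchmark you normalize against.
function geometricMean(xs) {
  var logSum = xs.reduce(function (s, x) { return s + Math.log(x); }, 0);
  return Math.exp(logSum / xs.length);
}

// Rows: engines; columns: scores on each benchmark (higher = better).
var matrix = {
  engineA: [100, 250, 80],  // invented numbers
  engineB: [120, 200, 90]
};

var combined = {};
Object.keys(matrix).forEach(function (name) {
  combined[name] = geometricMean(matrix[name]);
});
```

An engine that games one benchmark moves one factor of the product; only broad improvement moves the combined score much.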


Definitely a valid point, that benchmarks can be misleading and real-world code is more important.

One complaint, though - the tested browsers are all released versions, except for IE10. Why test a single unreleased browser, and not unreleased versions of all the browsers?

With the unreleased IE10 included, it comes out fastest - but then perhaps other unreleased browsers would have done better. Ignoring IE10, which would have been more fair, Firefox 4 is the fastest, closely followed by Safari.


I think it's more interesting than anything that an IE release is that fast. I'm glad he included it and I don't think it's unfair at all.


Now who is going to mix this with DOM manipulation, so we can actually see how the end user is affected?

If the JS engine is crazy fast and the DOM renderer blows, it won't make a bit of difference, unless its purpose is as a server technology.


Exactly. In my experience, the real bottleneck in real world websites is not pure JavaScript execution speed but DOM manipulation from JavaScript.


John-David Dalton put JSLint into a proper benchmark:

http://jsperf.com/jslint


This is really fascinating, although a blow for the Chrome team.

It would be even more interesting to see performance based on the JavaScript that, say, Twitter and Facebook use, but I would guess it's very difficult to feed the scripts the correct sort of data for benchmarking.


This is not a blow to the Chrome team. Chrome came out to raise the water level and lift all boats, and they're achieving that. Sure, a nice bonus is to be the fastest at all times, but leapfrogging is probably even better, as it keeps everyone heads-down rather than just conceding to Chrome.


Needs IE7 and IE8. It would be great to see how far we've come.


Tests are generally biased. Unless you're going to run each engine against a weighted distribution of operating systems (as relevant to your existing client base), on a weighted distribution of hardware architectures (ditto), using your application's library (weighted for real world use, like your clients do) - you're not going to see anything actually applicable. They're still going to be using the browser they use. The only applicable take away is 'module/class/function $x is performing badly in case $y on $environment/configuration $z and we should fix it'.

I'm assuming the use case of desktop, browser usage here, and not server usage. Regardless, there is no ultimate $engineA > $engineB that applies globally.


I am not seeing any links to either a working instance of jsmeter or to its source code. I find research papers full of words and pretty pictures, but no source code to examine and test myself.

Does anyone know where we can see this in action?


There's already a javascript testing tool called jsmeter at http://jsmeter.info/. Not the same developers.


There's a pretty lively discourse on the internet about the various benchmarking systems, many of which are grounded in that discourse and have solid, nuanced assumptions behind them. There's also an accepted way to measure performance: averaging hundreds or thousands of runs and meticulously documenting the environment. Unfortunately, neither of these seems to inform this claim to "actual" performance.


You can debate the validity of Crockford's claims all you want, but I think you're missing the point.

This exercise just shows how silly benchmarking is, especially when all the contenders are "sufficiently good."


Why is it centered around IE10? Seems biased. A researcher wouldn't do that.


It's not; IE10 is just the fastest in the benchmark.


It would be better to show the percent difference averaged across many inputs.


It is a Microsoft paper, a Microsoft test... be careful.


The paper (http://research.microsoft.com/pubs/118663/paper_tr.pdf) is an academic-style tech report by researchers, not ad-copy by people in HR.


We all know that academic papers are biased toward whoever funds them. There is even a paper on that.



