Hacker News

> webassembly is faster than javascript

Everyone says this, but I would dispute it as misleading in a lot of cases. I've been experimenting a lot with wasm lately. Yes, it is faster than javascript, but not by all that much.

It's the speed of generic 32 bit C. It leaves a lot to be desired in the way of performance. My crypto library, when compiled to web assembly, is maybe 2-3x the speed of the equivalent javascript code. Keep in mind this library is doing integer arithmetic, and fast integer arithmetic does not explicitly exist in javascript -- JS is at a _huge_ disadvantage here and is still producing comparable numbers to WASM.
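To make the integer-arithmetic disadvantage concrete, here is a minimal sketch (my own illustration, not code from the library above) of a full unsigned 32x32 -> 64-bit multiply. WASM and C get this as a single instruction; plain JS has to emulate it with 16-bit limbs because every number is a double (BigInt gives wide integers, but is typically far slower in hot loops):

```javascript
// Hypothetical sketch: unsigned 32x32 -> 64-bit multiply in plain JS.
// Each 16x16-bit partial product fits exactly in a double; the full
// 64-bit product would not, so we assemble it from halves.
function mul32x32(a, b) {
  const aL = a & 0xffff, aH = a >>> 16;
  const bL = b & 0xffff, bH = b >>> 16;
  const ll = aL * bL;                            // low x low, < 2^32
  const mid = aH * bL + aL * bH + (ll >>> 16);   // middle partials + carry
  const hi = (aH * bH + Math.floor(mid / 0x10000)) >>> 0; // high 32 bits
  const lo = (((mid & 0xffff) << 16) | (ll & 0xffff)) >>> 0; // low 32 bits
  return { hi, lo };
}
```

The low word agrees with `Math.imul(a, b) >>> 0`; the high word is what JS gives you no direct access to, and what a bignum or ECC field implementation spends most of its time computing.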

This same library is maybe 15 or 16 times faster than JS when compiled natively, as it is able to utilize 128 bit arithmetic, SIMD, inline asm, and so on.

Maybe once WASM implementations are optimized more the situation will be different, but I am completely unimpressed with the speed of WASM at the moment.



I also have a WASM crypto library, focused on hashing algorithms: https://www.npmjs.com/package/hash-wasm#benchmark

I was able to achieve 10x-60x speedups compared to the performance of the most popular JS-only implementations.

You can make your own measurements here: https://csb-9b6mf.daninet.now.sh/


Yeah, hashing in WASM seems to be fine in terms of speed, though 60x faster does still sound surprising to me. Hashes with 32 bit words (e.g. sha256) can be optimized fairly well in javascript due to the SMI optimization in engines like v8. I should play around with hashing more.

I was in particular benchmarking ECC, which is much harder to optimize in JS (and in general).

Code is here:

JS: https://github.com/bcoin-org/bcrypto/tree/master/lib/js

C: https://github.com/bcoin-org/libtorsion

To benchmark:

    $ git clone https://github.com/bcoin-org/bcrypto
    $ cd bcrypto
    $ npm install
    $ node bench/ec.js -f 'secp256k1 verify' -B js

    $ git clone https://github.com/bcoin-org/libtorsion
    $ cd libtorsion
    $ cmake . && make
    $ ./torsion_bench
    $ make -f Makefile.wasi SDK=/path/to/wasi-sdk
    $ ./scripts/run-wasi.sh torsion_bench.wasm ecdsa


I don't think he's complaining about WASM speed vs JS at all.

He wants WASM to be closer to C performance. I.e. to move further along this line:

    |JS|========>|WASM|=========>=========>========>|C|


>> webassembly is faster than javascript

> Everyone says this, but I would dispute it as misleading in a lot of cases. I've been experimenting a lot with wasm lately. Yes, it is faster than javascript, but not by all that much.

I think even if it is faster in general, you might lose all that advantage as soon as you have to cross the WASM <-> JS boundary and have to create new object instances (and associated garbage) that you never would have needed to create if you had used only one language.

Therefore moving to WASM for performance reasons on a project which crosses the language boundaries very often due to browser API access doesn't seem too promising to me.


I am writing an app that needs worker threads both on the backend and also on the frontend (because of some heavy processing of large amounts of "objects") and my experience with TS so far is very poor. JS runtimes are just not suitable for heavy concurrent/parallel processing. Serialization/deserialization overhead between threads is probably (much) worse than it would be if the worker threads were in Rust.

It's not a matter of speed here. It's a matter of enabling certain types of programs which are borderline impossible with pure JS runtimes.

So I will probably move most of the logic to Rust.


Javascript runtimes do fine with concurrent operations, but obviously are not intended for parallelism.

On the WASM side: Does WASM support real threads yet? Otherwise moving to Rust wouldn't really help you? If it's just "WebWorker" like multiple runtimes, you might still pay serialization costs to move objects between workers.
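On the serialization-cost point, a sketch of the options (using Node's MessageChannel, which behaves like the browser's): postMessage structured-clones its argument by default, so a big buffer means a full copy plus fresh objects on the other side. Listing it as a transferable hands over ownership with no copy, and SharedArrayBuffer avoids even the handoff.

```javascript
// Clone vs transfer semantics when moving data between workers/ports.
const { port1 } = new MessageChannel();

const big = new Float64Array(1_000_000); // 8 MB of data

// Default: structured clone. 'big' stays usable here; the receiver
// gets an 8 MB copy plus a new ArrayBuffer object (GC pressure).
port1.postMessage(big);
const afterClone = big.buffer.byteLength; // still 8000000

// Transfer list: zero-copy, but the buffer is detached on this side.
port1.postMessage(big.buffer, [big.buffer]);
console.log(big.buffer.byteLength); // 0 -- ownership moved to the port

port1.close();
```

So even without real threads, the choice of clone vs transfer vs shared memory dominates the cost of "moving objects between workers".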


No, JS runtimes don't do "fine" with concurrent operations, unless you are "waiting". If you are doing heavy processing, the whole service freezes. That's indeed the primary reason I need worker threads.

Erlang's runtime does "fine" with its preemptive concurrency model, JS runtimes are a joke in this regard.


Have you tried async generator functions?
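For what the suggestion looks like in practice, a sketch (names are mine): chunk the heavy work inside an async generator and yield to the event loop between batches. This keeps the service responsive, though it's still cooperative single-threaded scheduling, not the parallelism worker threads provide.

```javascript
// Heavy loop split into batches; the await hands control back to the
// event loop after each batch so other requests/events can run.
async function* heavyWork(items, batch = 4096) {
  let sum = 0;
  for (let i = 0; i < items.length; i += batch) {
    const end = Math.min(i + batch, items.length);
    for (let j = i; j < end; j++) sum += items[j] * items[j]; // stand-in work
    yield sum;                                // progress so far
    await new Promise(r => setImmediate(r));  // let pending events run
  }
}

(async () => {
  const items = Float64Array.from({ length: 10_000 }, (_, i) => i % 7);
  let total = 0;
  for await (const partial of heavyWork(items)) total = partial;
  console.log('sum of squares:', total);
})();
```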


> Yes, it is faster than javascript, but not by all that much. ... My crypto library, when compiled to web assembly, is maybe 2-3x the speed of the equivalent javascript code.

2-3x may not be the 15-16x you see in native code, but it's still a massive speedup over already optimized code, and is likely enough to make a bunch of applications that weren't quite feasible on the web feasible now.


I think the point is that only certain use cases (usually number crunching like crypto, but undoubtedly games too) may see substantial improvements; they still aren't close to "native" speed, and nearly all other use cases won't see much if any benefit, especially compared to the additional complexity of another language, compiling to wasm, etc.

Plus, things like competitive games and what I'll call "pretty" games have to squeeze out as much performance as possible, and no hitching is acceptable in competitive games, which IMO means WASM is still a no-go for those types of games (although games without this requirement undoubtedly benefit).


Another consideration here is that you can use languages that offer control over data layout as a natural feature, which matters a lot for cache utilization and, in many cases, for performance overall.

You can do this in JS too with TypedArray and whatnot but the key word here is 'natural.'
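A sketch of that 'unnatural' version (structure-of-arrays with typed arrays, my own toy example): positions and velocities live contiguously in flat Float32Arrays instead of as per-entity objects, which is cache-friendly and GC-free, at the cost of hand-managed indices.

```javascript
// Structure-of-arrays layout: one flat typed array per field.
const N = 1024;
const px = new Float32Array(N), py = new Float32Array(N); // positions
const vx = new Float32Array(N), vy = new Float32Array(N); // velocities
vx.fill(1); vy.fill(2);

function step(dt) {
  // one tight, allocation-free loop over contiguous memory; px/py can
  // also be handed to WebGL/WebGPU as a vertex buffer without repacking
  for (let i = 0; i < N; i++) {
    px[i] += vx[i] * dt;
    py[i] += vy[i] * dt;
  }
}

step(0.5); // px[0] is now 0.5, py[0] is 1
```

In C++ or Rust the equivalent `struct` array gets this layout for free, which is the 'natural' part.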

I've been working on a game engine as a side project with C++ and WASM -- and there are already many improvements over what I was getting with JS due to less GC, better data layout (the data layout thing is also esp. helpful for managing buffers that you drop into the GPU). I don't think it was about 'pure compute' as much as these things. C++ and Rust give you tools to manage deterministic resource utilization automatically which really helps.

A bonus is that the game runs with the same code on native desktop and mobile.


Chrome already has experimental support for SIMD, have you tried that as well?

Otherwise, eventually I expect WebAssembly to match what Flash Crossfire and PNaCL were capable of 10 years ago.


No, but I've been meaning to test this. I did notice it was available in node.js with --experimental-wasm-simd. I hope this proves me wrong about wasm, but I'll have to try it.


Note that Emscripten has an implementation of C++ intrinsics based on wasm simd: https://emscripten.org/docs/porting/simd.html


Have you tried with firefox? Last I heard they've got far and away the fastest wasm implementation.


No. I've just been building with the WASI SDK and running the resulting binary with a small node.js wrapper script. So, I've only tested v8's WASM implementation so far.

Does firefox have a headless mode, a standalone implementation, or some CLI tool I can use to run a WASM binary? Running stuff in the browser is cumbersome.


You can use jsvu to grab command-line shell binaries for spidermonkey (firefox's JS engine), v8, and jsc (safari's JS engine) to toy around with.


I think there should be a headless mode. For example, you can run unit tests using the gecko driver (?) without Firefox showing up; it runs as a background process that performs the testing steps and reports results.


On algorithmic code, idiomatic Rust + WASM is often about 10 to 40 times faster than idiomatic JS. The problem however is that each call between WASM and JS has a hefty cost of about 750ns. So your algorithm needs to be doing a significant amount of independent calculations before you will see these performance differences.
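To see why the per-call cost matters, here is a toy illustration (a hand-assembled module exporting `add(i32, i32) -> i32`; the 750ns figure above is the commenter's, and exact costs vary by engine): calling across the boundary once per element pays the overhead a million times, whereas a real module would take a pointer and length into linear memory and loop inside, crossing once.

```javascript
// Minimal valid WASM binary: (module (func (export "add")
//   (param i32 i32) (result i32) local.get 0 local.get 1 i32.add))
const wasmBytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic + version
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type (i32,i32)->i32
  0x03, 0x02, 0x01, 0x00,                               // one func, type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // body: add, end
]);
const { add } =
  new WebAssembly.Instance(new WebAssembly.Module(wasmBytes)).exports;

// Chatty pattern: one JS <-> WASM crossing per element.
let chatty = 0;
for (let i = 0; i < 1_000_000; i++) chatty = add(chatty, 1);
console.log(chatty); // 1000000 -- correct, but a million boundary crossings
```

Wrap this loop in timers against a plain JS `chatty + 1` and the boundary overhead, not the add itself, dominates.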


> each call between WASM and JS has a hefty cost of about 750ns

Why is this the case? Can we expect it to go away as browsers mature?


But 2-3x speedup over a well written Javascript version running in a modern Javascript engine is quite impressive, isn't it?

One thing that's often overlooked is that even though "idiomatic Javascript" using lots of objects and properties is fairly slow, it can be made fast (within 2x of native code compiled from 'generic' C code) by using the same tricks as asm.js (basically, use numbers and typed arrays for everything). But the resulting Javascript code will be much less readable and maintainable than cross-platform C code that's compiled to WASM.
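A small sketch of what those asm.js-style tricks look like (my own toy example): the same sum written idiomatically versus with the `| 0` coercions that let the engine keep every value an unboxed 32-bit integer. The second version is faster in hot paths and noticeably harder to read.

```javascript
// Idiomatic: clear, but the engine has to assume doubles and may
// allocate along the way.
function sumIdiomatic(arr) {
  return arr.reduce((a, b) => a + b, 0);
}

// asm.js-style: every value is pinned to int32 with | 0, and the input
// is expected to be an Int32Array so loads are typed too.
function sumAsmStyle(arr) {
  let s = 0;
  const n = arr.length | 0;
  for (let i = 0; (i | 0) < (n | 0); i = (i + 1) | 0) {
    s = (s + arr[i]) | 0; // stays an int32, never a heap number
  }
  return s | 0;
}
```

Multiply this style across a whole codebase and the maintainability argument for just writing C and compiling to WASM becomes clear.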


> maybe 2-3x the speed of the equivalent javascript code.

That is an insane difference even if nowhere close to native code.

Some engineering fields would go crazy over a 20% gain... 200% is huge!


I agree it's still significant, but this is not how wasm is being touted. Look at Wikipedia: https://en.wikipedia.org/wiki/WebAssembly#History

They even say asm.js is supposed to deliver "near-native code execution speeds". In my experience, it is nowhere close to native speed. People should avoid this kind of deceptive marketing.


On that front, I agree. Wasm is being sold as the greatest innovation ever, when it is just another virtual ISA with lacking features and performance.


WebAssembly with Rust is sometimes bigger and "just as fast as JS".

But it's way more predictable, no more crazy 95th percentiles.


I'd be interested to see how current-day WASM stacks up against current-day Java. Don't suppose you've ported your crypto code to Java? Other than the maturity of the JIT compilers, are there reasons we should expect WASM to be any slower?


In some ways it should be faster because it isn't garbage-collected. But I agree that would be a much better benchmark for what should be possible.



