Ameo 19 hours ago

The GoL example it loaded with seemed to be running way slower than I expected it to. It turns out that there's actually a `usleep(1000 * 100)` call in the code which was inserted to make it easier to see the output; the actual kernels execute quickly and take up very little GPU time.

When I looked at the profiler, I was confused to see that one worker thread was at 100% usage the whole time it was running. At first, I thought that maybe it was actually running the code via Wasm on the CPU rather than on the GPU like it said.

Instead, it turns out that the worker was just running `emscripten_futex_wait` - which as far as I can tell is implemented by busy waiting in a loop. Probably doesn't matter for performance since I imagine that's just for the sleep call anyway.
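
To illustrate what I mean, here's a rough sketch of that kind of busy-wait in JS terms (not Emscripten's actual code, just the pattern):

```
// Sketch: a futex-style wait emulated by polling a shared value in a loop.
// A thread "sleeping" this way still shows 100% CPU in the profiler.
// (Real code would use an Int32Array over a SharedArrayBuffer shared between
// threads; a plain Int32Array keeps this snippet self-contained.)
const state = new Int32Array(1);

function futexWaitBusy(arr, index, expectedValue, timeoutMs) {
  const deadline = performance.now() + timeoutMs;
  while (performance.now() < deadline) {
    // Another thread changing the value counts as a wake-up.
    if (Atomics.load(arr, index) !== expectedValue) return "woken";
    // No blocking wait available, so the loop just spins.
  }
  return "timed-out";
}

futexWaitBusy(state, 0, 0, 100); // spins for ~100 ms at full CPU
```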

----

Altogether this is an incredibly cool tool. I'm sure there is some performance gap compared to native, but even so this is extremely impressive and likely has a ton of potential use cases.

  • btown 17 hours ago

    Thank you so much for this! I was a bit concerned that the performance on my Mac was nearly identical to my new 3090 on PC and thought I might have messed up the setup there!

_nalply 19 hours ago

Firefox supports WebGPU, but needs a setting in about:config. I enabled the setting, but HipScript still refuses to run on Firefox, showing the message: "Please try a Chromium-based browser like Google Chrome or Microsoft Edge."

Please do feature detection, not browser detection.
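
For example, a minimal feature-detection check could look something like this (just a sketch, not HipScript's actual code):

```
// Sketch: detect WebGPU by probing the API instead of sniffing the browser.
async function hasWebGPU() {
  if (!("gpu" in navigator)) return false;           // API not exposed at all
  const adapter = await navigator.gpu.requestAdapter();
  return adapter !== null;                           // null means no usable adapter
}

hasWebGPU().then((ok) => {
  if (!ok) console.warn("No WebGPU adapter; show a fallback message instead.");
});
```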

  • lights0123 18 hours ago

    I do do feature detection—WebGPU is blocked on release Firefox regardless of config; you'll need Nightly. For example, it does support Safari with its experimental mode enabled.

    • _nalply 5 hours ago

      OK, I installed Firefox Nightly and enabled dom.webgpu.enabled and gfx.webrender.all as described in this article: https://hacks.mozilla.org/2020/04/experimental-webgpu-in-fir..., but I still get this error message. Which config options are needed for HipScript to run on Firefox?

      Edit: on Nightly `navigator.gpu` is available; I checked that in the console.

    • doctoboggan 18 hours ago

      I enabled WebGPU in Safari on my M1 Mac and got this error when running the GoL demo:

      ```
      TypeError: B.values().some is not a function. (In 'B.values().some(r=>r.args.length)', 'B.values().some' is undefined)
      ```

      EDIT: I got the same error with all three sample scripts

      • lights0123 15 hours ago

        I believe I just fixed this—JavaScriptCore doesn't have support for a recently-introduced function.
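
        (For the curious: `Map.prototype.values()` returns an iterator, and `.some()` on an iterator is one of the newer iterator-helper methods; here's roughly the kind of fallback that avoids it, with a hypothetical `B` for illustration:)

        ```
        // Relies on iterator helpers, which some engines lack:
        //   B.values().some(r => r.args.length)

        // Portable fallback: materialize the iterator into an array first.
        const B = new Map([["kernel", { args: [1, 2] }]]); // hypothetical shape
        const anyHasArgs = [...B.values()].some((r) => r.args.length);
        ```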

        • doctoboggan 10 hours ago

          I can confirm it is working for me now, thanks!

bagels 19 hours ago

What an incredible demo/hack. This is actually the simplest way to execute CUDA code that I've seen.

  • btown 19 hours ago

    Not to mention that it Just Works on Apple devices! Really, really cool.

grconner 15 hours ago

This is just amazing. So clean and straightforward. On a Mac, just run it in Chrome and see things work automagically. For real fun, change the generations constant to 2000 and delete the usleep line to see this thing really cook.

samagra14 11 hours ago

Sounds interesting! I love all these edge experiments. But as long as there is architecture-dependent code for models, I feel these edge experiments can't fully play to their strengths.

You try to run something and voilà, you need Ampere or Hopper or Lovelace for flash attention.

Cieric 17 hours ago

I love the idea of this, but sadly I can't get it to work in Firefox, Chrome, or Edge on my work PC, probably because I can't find a "--enable-features=Vulkan" equivalent in about:flags and the argument doesn't appear to work on Windows. I'm actually a bit more curious about a standalone application that skips the WebGPU part and goes straight to Vulkan, as I would love to be able to experiment with some CUDA-only applications.

  • lights0123 17 hours ago

    I don't currently expose the option to run the compiler if a GPU isn't detected, but on systems that do have one, there's a download option that lets you grab the SPIR-V kernel to run with Vulkan.

  • sroussey 16 hours ago

    I got an `Uncaught (in promise) TypeError: WebAssembly compilation aborted: Network error: Response body loading was aborted` error the first time, but after a reload it worked.

punnerud 19 hours ago

How performant is the CUDA code in the browser compared to a standalone program?

Could we have PyTorch / ML training with CUDA through the browser that performs OK?

  • saagarjha 6 hours ago

    Well the bigger issue is that even natively I doubt there is enough performance to train anything interesting.

  • mordechai9000 18 hours ago

    Distributed large model training with web clients?

    • sroussey 16 hours ago

      With 10Tb/s Internet, then sure!

  • machinekob 6 hours ago

    WebGPU doesn't have access to most of the accelerators in the GPU, so it would be super slow compared to a cuDNN version, but pure CUDA would only be x times faster.

JackYoustra 19 hours ago

I don't know why this never occurred to me. What a great website, glad you made it!

mlepath 10 hours ago

This is really awesome. Tested it on both Nvidia and Mac GPUs.

Interested to know how debugging in a real application would work since WASM is pretty hard to debug and GPU code is pretty hard to debug. I assume WASM GPU is ... very difficult to debug.

  • saagarjha 6 hours ago

    You can download the intermediate files fwiw

uncheckederror 15 hours ago

Impressed that this runs on my RX 6900XT (an RDNA2 GPU) in Chrome without any trouble. Very cool demo, excited to see how people leverage this capability.

JonChesterfield 17 hours ago

This feels like more stages than should be necessary (something should be able to go from LLVM IR directly to WebGPU), but it's great to see it running, very nice!

ryanmerket 19 hours ago

Very cool. Thank you for creating this!

bloomingkales 18 hours ago

How is this different than web-llm?

  • lights0123 18 hours ago

    web-llm provides optimized kernels for neural network operations, and a convenient API for it. This project provides a place to experiment with CUDA, for any purpose—not necessarily for anything related to machine learning.