AMD GPU Debugger (thegeeko.me)
148 points by ibobev | 18 comments




Non-AMD, but Metal actually has a [relatively] excellent debugger and general dev tooling. It's why I prefer to do all my GPU work Metal-first and then adapt/port to other systems after that: https://developer.apple.com/documentation/Xcode/Metal-debugg...

I'm not like an AAA game developer or anything, so I don't know how it holds up in intense 3D environments, but for my use cases it's been absolutely amazing. To the point where I recommend that people dabbling in GPU work grab a Mac (Apple Silicon often required), since it's such a better learning and experimentation environment.

I'm sure it's linked somewhere in there, but in addition to traditional debugging, you can actually emit formatted log strings from your shaders and they show up interleaved with your app logs. Absolutely bonkers.

The app I develop is GPU-powered on both Metal and OpenGL systems, and I haven't been able to find anything that comes near the quality of Metal's tooling in the OpenGL world. A lot of stuff is claimed to be equivalent, but as someone who has actively used both, I strongly feel it doesn't hold a candle to what Apple has done.


There's also cuda-gdb[1], a first-party GDB for NVIDIA's CUDA. I've found it to be pretty good. Since CUDA uses a threading model, it maps well onto GDB's thread ergonomics (though IIRC you can only single-step at warp granularity, by the nature of SM execution).

[1] https://docs.nvidia.com/cuda/cuda-gdb/index.html
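
To make the warp-granularity point concrete, here's a rough sketch of a cuda-gdb session (toy saxpy kernel written for illustration, not from the article; the cuda-gdb commands are the standard ones from the docs above):

    // saxpy.cu -- build with: nvcc -g -G saxpy.cu -o saxpy
    // (-G embeds device-side debug info so cuda-gdb can step inside the kernel)
    #include <cstdio>

    __global__ void saxpy(int n, float a, const float *x, float *y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            y[i] = a * x[i] + y[i];   // a breakpoint here stops a whole warp
    }

    int main() {
        const int n = 1 << 20;
        float *x, *y;
        cudaMallocManaged(&x, n * sizeof(float));
        cudaMallocManaged(&y, n * sizeof(float));
        for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }
        saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, x, y);
        cudaDeviceSynchronize();
        printf("y[0] = %f\n", y[0]);
        cudaFree(x); cudaFree(y);
        return 0;
    }

    // Example session:
    //   (cuda-gdb) break saxpy
    //   (cuda-gdb) run
    //   (cuda-gdb) info cuda threads        # which blocks/threads are resident
    //   (cuda-gdb) cuda block 0 thread 5    # switch focus to one device thread
    //   (cuda-gdb) print i
    //   (cuda-gdb) next                     # advances the focused warp, not a single lane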


For NVIDIA cards, you can use Nsight. There's also RenderDoc, which works on a wide range of GPUs.

nsys and nvtx are awesome.

Many don't know this, but you can use them without a GPU :)
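
In case it saves someone a search: NVTX is just a small annotation API, so you can mark up purely CPU-side code and view the ranges on the nsys timeline with no GPU involved. A minimal sketch (file/function/range names are made up; nvtxRangePushA/nvtxRangePop are the standard NVTX calls):

    // ranges.cpp -- uses the header-only NVTX3 bundled with recent CUDA toolkits;
    // older setups include <nvToolsExt.h> and link against -lnvToolsExt instead.
    #include <nvtx3/nvToolsExt.h>
    #include <chrono>
    #include <thread>

    static void fake_work(int ms) {
        std::this_thread::sleep_for(std::chrono::milliseconds(ms));  // stand-in for real CPU work
    }

    int main() {
        nvtxRangePushA("load_data");   // opens a named range on the timeline
        fake_work(50);
        nvtxRangePop();

        nvtxRangePushA("process");
        fake_work(120);
        nvtxRangePop();
        return 0;
    }

    // Profile the CPU-only binary, no GPU required:
    //   nsys profile --trace=nvtx,osrt ./ranges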


Is there not an official tool from AMD?


It's worth noting that upstream gdb (and clang) are somewhat limited in GPU debugging support because they only use (and emit) standardized DWARF debug information. The DWARF standard will need updates before gdb and clang can reach parity with the AMD forks, rocgdb and amdclang, in terms of debugging support. It's nothing fundamental, but the AMD forks use experimental DWARF features and the upstream projects do not.

It's a little out of date now, but Lancelot Six had a presentation about the state of AMD GPU debugging in upstream gdb at FOSDEM 2024. https://archive.fosdem.org/2024/events/attachments/fosdem-20...


AMD's gdb is an actual debugger, but it only works with applications that emit DWARF and use the amdkfd KMD, i.e. it doesn't work with graphics. All of the rest are not actual debuggers: UMR does support wave stepping, but it doesn't try to be a shader debugger, rather a tool for driver developers, and the AMD tools don't have any debugging capabilities.

> After searching for solutions, I came across rocgdb, a debugger for AMD’s ROCm environment.

It's like the 3rd sentence in the blog post.......


To be fair, it wasn't clear that was an official AMD debugger, and besides, that's only for debugging ROCm applications.

This sentence doesn't make any sense: a) ROCm is an AMD product, b) ROCm "applications" are GPU "applications".

But not all GPU applications are ROCm applications (I would think).

I can certainly understand OP's confusion. Navigating parts of the GPU ecosystem that are new to you can be incredibly confusing.


There are two AMD KMDs (kernel-mode drivers) on Linux: amdkfd and amdgpu. Graphics applications use amdgpu, which is not supported by AMD's gdb. AMD's gdb also has the limitation of requiring DWARF, and the Mesa/AMD UMDs don't generate that.

Tangent: is anyone using a 7900 XTX for local inference/diffusion? I finally installed Linux on my gaming PC, and about 95% of the time it just sits powered off, collecting dust. I would love to put this card to work in some capacity.

I tested some image and text generation models, and generally things just worked after replacing the default torch libraries with AMD's ROCm variants.

I've done it with a 6800XT, which should be similar. It's a little trickier than with an Nvidia card (because everything is designed for CUDA) but doable.

You'd be much better off with any decent Nvidia card than with the 7900 series.

AMD doesn't have a unified architecture across graphics and compute the way Nvidia does.

AMD's compute cards are sold under the Instinct line and are vastly more powerful than their consumer GPUs.

Supposedly, they are moving back to a unified architecture in the next generation of GPU cards.


Try it with ramalama[1]. Worked fine here with a 7840U and a 6900 XT.

[1] https://ramalama.ai/



