kentonv a day ago

If you have a reproducible test case that runs reasonably quickly, then I think printf debugging is usually just as good as a "real" debugger, and a lot easier. I typically have my test output log open in one editor frame. I make a change to the code in another frame, save it, my build system immediately re-runs the test, and the test log immediately refreshes. So when the test isn't working, I just keep adding printfs to get more info out of the log until I figure it out.

This sounds so dumb but it works out to be equivalent to some very powerful debugger features. You don't need a magical debugger that lets you modify code on-the-fly while continuing the same debug session... just change the code and re-run the test. You don't need a magical record-replay debugger that lets you go "back in time"... just add a printf earlier in the control flow and re-run the test. You don't need a magical debugger that can break when a property is modified... just temporarily modify the property to have a setter function and printf in the setter.
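
For concreteness, here is a rough C sketch of that last trick (all names are hypothetical):

    /* Emulate a data watchpoint by funneling writes through a setter. */
    #include <stdio.h>

    static int g_retry_count;  /* the field we suspect is being clobbered */

    static void set_retry_count(int v, const char *where) {
        fprintf(stderr, "retry_count: %d -> %d (%s)\n", g_retry_count, v, where);
        g_retry_count = v;
    }

    /* ...then temporarily replace every `g_retry_count = x;` with
       `set_retry_count(x, __func__);` and re-run the test. */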

Most importantly, though, this sort of debugging is performed using the same language and user interface I use all day to write code in the first place, so I don't have to spend time trying to remember how to do stuff in a debugger... it's just code.

BUT... this is all contingent on having fast-running automated tests that can reproduce your bugs. But you should have that anyway.

  • whatevertrevor a day ago

    > Most importantly, though, this sort of debugging is performed using the same language and user interface I use all day to write code in the first place, so I don't have to spend time trying to remember how to do stuff in a debugger... it's just code.

    Absolutely! Running a command-line debugger adds extra context I have to keep in my head (syntax for all the commands, output format, etc.) that actively competes with the context required to debug my code. Printfs work just fine without incurring that penalty. Granted, this argument applies less to IDE debuggers, since their UX is usually intuitive.

  • eru a day ago

    > BUT... this is all contingent on having fast-running automated tests that can reproduce your bugs. But you should have that anyway.

    Ideally, yes. But for many bugs, getting to a reproduction is already more than half the battle. And a debugger can help you with that.

    • skissane a day ago

      > But for many bugs, getting to a reproduction is already more than half the battle.

      Today I am working on a bug where a token expires after 2 hours and then we fail to request a new one; instead we just keep using the now-expired one, which (of course) doesn’t work. I have a script to reproduce it, but it takes 2 hours to run. It would be great if there were some configuration knob to turn down the expiry just for this test so we could reproduce it faster - but there isn’t, because nobody thought of that.

      • atq2119 a day ago

        Maybe it takes less than 2 hours to add such a configuration knob?

        • skissane a day ago

          That’s a good idea except it is in another team’s service and I’m not very familiar with their code. But I can try

          Well, the hard part isn’t actually the code change (reasonably obvious) it is deploying my own copy of their service…

          • pantalaimon a day ago

            Can you just set the local time two hours into the future?

            Or modify the expiry time locally / not use the one from the token at all.

            • skissane a day ago

              The token isn’t being verified in my service, it is being verified in a remote service. I can’t easily change the clock on the remote service. It runs in a K8S pod and (AFAIK) K8S doesn’t support time namespaces. And even if it did, the pod is deployed by a custom K8S operator, so I’m unsure whether I could make the operator turn that on even if it were available. And the remote service is a complex monolith which uses lots of RAM, so trying to run it on my own laptop would be painful.

              • majewsky a day ago

                Reminds me of the classic adage that "everything can be solved by another layer of abstraction, except for the problem of too many abstraction layers".

            • kmoser a day ago

              Or modify the token itself so it's no longer valid? (Although that might come back with "invalid token" rather than "expired token.")

  • roca a day ago

    Adding print statements, rebuilding, rerunning the program and figuring out where in the logs the bug showed up this time can be a lot more tedious than setting a (possibly conditional) breakpoint in a reverse-execution debugger and doing "reverse-continue".

  • seanmcdirmid a day ago

    The crashes/bugs I deal with are rarely straight down failures, they are often the 1 out of 100 runs kind, so printf debugging is the only way to go really. And I used to be big on using debuggers, but now I’m horribly out of practice.

    • skissane a day ago

      > The crashes/bugs I deal with are rarely straight down failures, they are often the 1 out of 100 runs kind, so printf debugging is the only way to go really.

      Another thing I’ve found helpful is to write out a “system state dump” (say in JSON) to a file whenever certain errors happen. For example, we had a production system that was randomly hanging and running out of DB connections. So now, whenever the DB connection pool is exhausted, it dumps a JSON file to S3 listing the status of every DB connection, including the stack dump of the thread that owns it, the HTTP request that thread is serving, the user account, etc. Once we did that, it went from “we don’t understand why this application is randomly falling over” to “oh, this is the thing that is consistently triggering it”.

      When it writes a dump, it then starts a “lockout period” in which it won’t write any further dumps even if the error reoccurs. Don’t want to make a meltdown worse by getting bogged down endlessly writing out diagnostics.
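
      A minimal sketch of that lockout logic, in C for illustration (the window length and dump function are hypothetical):

          #include <time.h>

          void write_state_dump_json(void);      /* hypothetical: dumps connections, stacks, etc. */

          #define DUMP_LOCKOUT_SECS (15 * 60)    /* hypothetical cool-down window */
          static time_t last_dump_at;            /* 0 = no dump written yet */

          void maybe_write_state_dump(void) {
              time_t now = time(NULL);
              if (last_dump_at != 0 && now - last_dump_at < DUMP_LOCKOUT_SECS)
                  return;                        /* still locked out: don't make the meltdown worse */
              last_dump_at = now;
              write_state_dump_json();
          }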

      • alextingle a day ago

        Why not just dump a core, rather than going to all that effort?

        • skissane a day ago

          Because it is a lot easier to analyse a few megabytes of JSON than a heap dump which is gigabytes in size.

          Not my original idea, I got it from some Oracle products which have a similar feature (most notably Oracle Database)

    • roca a day ago

      Record-and-replay debuggers are often better than anything else for debugging intermittent failures: run the program with recording many times until you eventually get the bug, then debug that recording at your leisure; you'll never have to waste time reproducing it again.

      • johnisgood a day ago

        I'm very rusty when it comes to debuggers.

        Where may I read about particular workflows involving debuggers, e.g. gdb?

        (Mainly for programs written in C.)

        Most of my issues are related to issues with concurrency though, deadlocks and whatnot.

    • throwaway2037 a day ago

      > they are often the 1 out of 100 runs kind

      Can you share an example? In my whole career, I have only seen one or two of them, but most of my work is CRUD-type stuff, not really gaming or systems programming where such a thing might happen.

      • arter4 a day ago

        Let's say your application talks to a database.

        You reuse connections with a connection pool, but you accidentally reuse connections with different privileges and scopes. As a result, sometimes you get to read some data you shouldn't read and sometimes you don't.

        Or, concurrency bugs.

        You don't properly serialize transactions and sometimes two transactions overlap in time leading to conflicts.

      • seanmcdirmid a day ago

        In Android they are way too common. Like an animation that decides to fire after the UI element it was made for has been destroyed, or a race condition in destroying a resource, or VE logging.

  • rtpg a day ago

    I would say that a flow like "OK, so `a.b` looks like this; what does `a.b.c` look like?" is very nice with deep and messy objects (especially when working in languages with absolutely garbage default serialization logic, like JavaScript).

    But even with a debugger, there's still loads of value in sitting up, moving away from writing a bunch of single-line statements all over the place, and writing a real test harness ASAP so you don't have to rely on the debugger.

    For any non-trivial problem, you'll quickly come to appreciate properly formatted output, shaped to the problem you are looking at. It's very hard for an in-process debugger to have that answer for you immediately.

    Being serious about debugging can even get into things like writing actual new entrypoints (scripts with arguments and whatnot) and things like adding branches togglable through environment variables (a sketch of this is at the end of this comment).

    I think a lot of people's mindset in debugging is "if I walk the tightrope and put in _one more hack_ I'll find my answer" and it gets more and more precarious. Debugging is always a bit of an exercise of mental gymnastics, but if you practice being really good at printf debugging _and_ configuring program flow easily, you can be a lot less stressed out.

    Like if you think your problem is going to take more than 20 minutes to debug, you probably should start writing a couple helper functions to get you on the right foot.
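
    As a hedged sketch of the environment-variable idea above (the variable name and details are made up):

        #include <stdlib.h>
        #include <string.h>

        /* Gate a debug-only branch behind an environment variable. */
        static int debug_enabled(void) {
            static int cached = -1;                      /* -1 = not checked yet */
            if (cached < 0) {
                const char *v = getenv("MYAPP_DEBUG");   /* hypothetical variable */
                cached = (v != NULL && strcmp(v, "0") != 0);
            }
            return cached;
        }

        /* usage: if (debug_enabled()) fprintf(stderr, "taking retry path\n"); */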

  • NikkiA a day ago

    Things like variable watchpoints mean that debuggers are still the better option.

    • guelo a day ago

      You can add the condition to the print statement and grep for it if it gets too verbose.
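
      For example, a greppable conditional print as a stand-in for a watchpoint (the condition and field are hypothetical):

          /* inside the code that modifies the value: */
          if (conn->refcount < 0)   /* the condition you'd have put on the watchpoint */
              fprintf(stderr, "WATCH %s:%d refcount=%d\n", __FILE__, __LINE__, conn->refcount);

      Then `grep WATCH` over the test log.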

      • NikkiA a day ago

        Watchpoints can tell you which function is causing the variable to change; making printfs conditional cannot.

victorNicollet a day ago

One of the hardest bugs I've investigated required the extreme version of debugging with printf: sprinkling the code with dump statements to produce about 500GiB of compressed binary trace, and writing a dedicated program to sift through it.

The main symptom was a non-deterministic crash in the middle of a 15-minute multi-threaded execution that should have been 100% deterministic. The debugger revealed that the contents of an array had been modified incorrectly, but stepping through the code prevented the crash, and it was not always the same array or the same position within that array. I suspected that the array writes were somehow dependent on a race, but placing a data breakpoint prevented the crash. So, I started dumping trace information. It was a rather silly game of adding more traces, running the 15-minute process 10 times to see if the overhead of producing the traces made the race disappear, and trying again.

The root cause was a "read, decompress and return a copy of data X from disk" method which was called with the 2023 assumption that a fresh copy would be returned every time, but was written with the 2018 optimization that if two threads asked for the same data "at the same time", the same copy could be returned to both to save on decompression time...

  • overfl0w a day ago

    Those are the kind of bugs one remembers for life.

    • victorNicollet a day ago

      Indeed. Worst week of 2023!

      But I consider myself lucky that the issue could be reproduced on a local machine (admittedly, one with 8 cores and 64GiB RAM) and not only on the 32-core, 256GiB-RAM server. Having to work remotely on a server would have easily added another week of investigation.

onre 2 days ago

I've gotten an OS to run on a new platform with a debugging tool portfolio consisting of a handful of LEDs and a pushbutton. Getting to the point where I could printf() to the console felt like more than anyone could ever ask for.

Anecdote aside, it certainly doesn't hurt to be able to debug things without a debugger if it comes to that.

  • shadowgovt 2 days ago

    Since most of my work is in distributed systems, I find the advice to never printf downright laughable.

    "Oh sure, lemme just set a breakpoint on this network service. Hm... Looks like my error is 'request timed out', how strange."

    That having been said: there are some very clever solutions in cloud-land for "printf" debugging. (Edit: forgot this changed names) Snapshot Debugger (https://github.com/GoogleCloudPlatform/snapshot-debugger) can set up a system where some percentage of your instances are run in a breakpointed mode, and for some percentage of requests passing through the service, they can log relevant state. You can change what you're tracking in realtime by adding listeners in the source code view. Very slick.

    • mark_undoio a day ago

      > "Oh sure, lemme just set a breakpoint on this network service. Hm... Looks like my error is 'request timed out', how strange."

      Time travel debugging (https://en.wikipedia.org/wiki/Time_travel_debugging) can help with this because it separates "recording" (i.e. reproducing the bug) from "replaying" (i.e. debugging).

      Breakpoints only need to be set in the replay phase, once you've captured a recording of the bug.

pryelluw 2 days ago

Whatever works so I can fix it and be home on time. I will even print (in paper) the code and step through it with a pen. Again, whatever works.

Also, will we ever move forward from these sorts of discussions? Back when I was a mechanic, no one argued about basic troubleshooting strategies. We just aimed to learn them and apply them all (as necessary).

  • chrsig 2 days ago

    Eternal September; new people are continually learning these strategies.

    You're correct, though. The real issue comes down to:

    1. making sure those printf statements don't wind up in prod, spilling potentially sensitive data or corrupting a data stream

    2. making sure that non-printf tooling is built, so that printf debugging isn't the only approach used

    We tend to get caught up in false dichotomies.

    • torstenvl a day ago

      I use a DBUG() macro that wraps fprintf to stderr when DEBUG is defined and NDEBUG is not defined, and is otherwise nothing.
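
      Roughly, as a sketch (the real macro may differ in detail):

          #include <stdio.h>

          #if defined(DEBUG) && !defined(NDEBUG)
          #  define DBUG(...) fprintf(stderr, __VA_ARGS__)
          #else
          #  define DBUG(...) ((void)0)   /* expands to nothing */
          #endif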

    • nxobject a day ago

      Re 1: the recent iTerm2 fracas. Yikes.

      https://news.ycombinator.com/item?id=42579472

      • Cthulhu_ a day ago

        Oof, iTerm2 has shipped a few bad releases recently, first with the AI integration and now this sloppiness. I think they're feeling the heat from competitors like Warp, especially now that Warp has removed the "create an account" barrier to entry.

      • chrsig a day ago

        ouch, i didn't even know about that

  • skissane 2 days ago

    > Whatever works so I can fix it and be home on time. I will even print (in paper) the code and step through it with a pen.

    I’ve found before that sometimes I can’t see what’s wrong with the code on my screen but I can when I print it out. I think the printed page activates different regions of the brain compared to looking at a computer screen

    • jclulow a day ago

      You should try other things that force the brain to start interpreting something as new input again; e.g., I have found success with changing the font, or the colour scheme, or looking at a diff rendition of a change, or looking at the file with less(1), or even reading the thing in reverse order. Each provokes a new context in which I see stuff I wouldn't have seen after staring at the editor for an hour. Another thing printing usually induces is that you simply stop looking at the code for at least a few minutes before looking again, and a break often helps too!

      Printing is obviously fine if you like doing that, but I've found there are lots of ways to shake yourself loose and more thoroughly review your own writing.

      • __MatrixMan__ a day ago

        If I'm looking for something and can't find it despite all signs pointing to it being nearby, I'll try to continue looking while upside down. It's often helpful for I think the same reasons that you describe.

        One time, I was looking for a set of keys which had been tossed over the aisle partition in a grocery store and promptly disappeared. We tore that place apart and still couldn't find them. So I lay on my back on the aisle floor so I could see the search area with my head upside down, and there they were. They had bounced and were hanging from the bottom of one of the shelves.

        When in doubt, do something weird.

      • pryelluw a day ago

        I increase the font size and just stare. Tends to work, too. Maybe we should be compiling them somewhere …

  • johnnyanmac 2 days ago

    Based on the "lucky 10k" and the size of the internet, likely not. I'm sure if you find a mechanics enthusiast forum you will in fact find these relatively trivial arguments. It's just the nature of the beast.

    • pryelluw 2 days ago

      I’d say programmers are taught to nitpick away at arguments by default. The nature of the code review is to verify that the code I wrote meets certain (opinionated) standards. Hence nitpicking is built into the profession. Maybe one day it’ll improve. Till then, I’ll continue arguing about everything there is :-)

      • bee_rider a day ago

        I wouldn’t say programmers are taught to nitpick, so much as look for non-obvious problems. The nitpicking comes as a side effect. Maybe it is cargo-culting.

        • bee_rider 21 hours ago

          (This is a meta-nitpicking joke).

    • shadowgovt 2 days ago

      I find the key thing to avoid arguments is to not make the options adversarial.

      Present them as options. "If you like X, you may also like Y." Leave it to the audience to discriminate when to apply the tool.

      • pryelluw a day ago

        Yes, good approach. Do you ever feel like the internet has been turned adversarial? Flame wars are as old as email, but they didn't use to be the default.

malkia 2 days ago

25 years ago I worked on a PC -> PlayStation 1 port of a game. We did not have proper devkits, just the Yaroze model (the "amateur" devkit, allowing for "printf" debugging of sorts).

Long story short, our game worked as long as the printfs were kept in. We had a macro to remove them (in "Release/Ship" builds), but then the game crashed.

The crash was due to a side effect of printf clearing some math errors... so there you go!

  • ykonstant a day ago

    Seems like printf debugging worked! \(≧▽≦)/

recursivedoubts 2 days ago

agree entirely that both techniques are useful and should be learned by programmers

one thing I think the "just do print debugging" folks miss is what a good teaching tool a visual debugger is: you can learn about what a call stack really is, step through conditionals, iterations, closures, etc and get a feel for how they really work

at some level being a good programmer means you can emulate the code in your head, and a good visual developer can really help new programmers develop that skill, especially if they aren't naturals

i emphasize the debugger in all the classes i teach for this reason

  • Stratoscope a day ago

    > at some level being a good programmer means you can emulate the code in your head, and a good visual developer can really help new programmers develop that skill [emphasis added]

    I think you meant "a good visual debugger". And I agree completely.

    A visual debugger isn't just a tool for fixing bugs.

    It's also a tool for understanding the code.

    At my last job, the codebase was so complicated that you could spend hours scratching your head over what was going on in a function, especially how the code got to that function and what data the calling functions had that led to this point.

    Of course you could add print or log statements, but then the question is what to print! And which calling functions needed more print statements.

    With a visual debugger, I could just set a breakpoint in the confusing function, see all the data it had, and also move up the stack to see what data all the calling functions had.

    There are cases where you need print debugging. I added one feature that worked perfectly locally and on a test server, but failed in a Jenkins job (ironically running the job on that same test server).

    That was a case where I added print statements throughout the code, just to see how far it got when running locally vs. under Jenkins.

    There are many ways to debug a problem. It is wise to be familiar with all of them and know what to use when.

saagarjha a day ago

I find that a lot of this discussion just melts away if you are aware of the options that are available and then take your pick. Why, sometimes I've switched between debuggers and printfs as many as six times before breakfast.

If you don't know what a debugger does, though, that's something you should really get on ASAP. Likewise if you can't figure out how to get log messages out of your thing. That's really all there is to it: figure out what you want to do, and then spend your time actually doing something productive instead of getting into a stupid holy war on the internet about it.

GuB-42 a day ago

I have changed my mind a couple of times on that subject. My opinion now is that printf debugging is not ok, but that it is not your fault.

printf debugging is a symptom of poor tooling. It is like saying that driving in nails with a rock is fine. It works, but the truth is that if you are using a rock, that's probably because you don't have a good hammer. And if on every job site, there are seasoned workers banging rocks like cavemen, maybe hammer manufacturers should take notice.

  • poincaredisk a day ago

    Maybe, but it's also the only method that works everywhere. I write very different things - Python scripts, scripts for weird programs that use ancient or obscure languages, web applications, binary exploits, small patches to large projects - most recently a custom LLVM pass. With maybe the exception of web dev, getting a debugging toolkit for every one of those kinds of software would be hard to impossible.

  • lolc a day ago

    Often I want to know the sequence of some events, what points are reached. Adding print statements is a quick way to trace that. How can tooling make that more convenient?

  • K0balt a day ago

    The nice thing about printf debugging is the preemptive debugging. You can leave the printfs in the source, conditionally compiled, and they make the code self-documenting at runtime. I find this useful as a step before source review when reopening a project after a few years: documentation at runtime gives a faster mental map of the execution than reviewing the source without seeing the runtime debug flow.

    So my debug builds often self-document at runtime, and in production the printfs become comments.

bluenose69 a day ago

Balance is the key, but if I only had a debugger, and not the printf option, I'd be working more slowly and swearing a bit more.

I work in a natural science, and use computing for numerical simulations and data analyses. For coding problems, a debugger is pretty handy. But for finding errors in the underlying mathematics, I tend to rely on a printf -> grep -> analysis chain. This might produce files several hundred MB in size. The last part is generally done in R, because that lets me apply graphical and statistical methods to discover problematic conditions. Often these conditions do not crop up for quite a long time, and my task will be to find out just what I did wrong - something like a problematic formulation of a boundary condition that created an anomalous signal near an edge of the domain, but only once a signal had propagated from a forcing zone into that region of state space.

gghoop a day ago

If using the debugger is less efficient than using printf, that's a symptom of a wider problem. To be clear, though, "printf debugging" is not the same thing as adding structured debug logs to your service and enabling them on demand via some config. Most production services should never log unstructured output to stdout.

Printf debugging is just throwing out some text to stdout, and it generally means running unit tests locally and iterating through an (add printf, run test, add more printfs) loop until the bug is found. Unfortunately, it's the default way to debug locally reproducible bugs for many engineers. So while I don't see the point in refusing printf for purely ideological reasons, I avoid building software where it is the simplest option, and I use the service's structured logger when a bug is not easily reproducible. I also think it's generally bad to default to printf debugging as defined here, and find that competent use of a debugger is more often the faster way to debug.

  • hot_gril a day ago

    That wider problem may simply be that you're using C or C++ with optimized binaries and can't reproduce the bug in the unoptimized/debug build.

omgJustTest a day ago

An interesting note: printf'ing a variable can sometimes alter how that variable is cached. This is particularly relevant for memory locations that can change as a result of hardware operations (like a status bit being flipped by a hardware element).

printf'ing effectively enforces a condition similar to `volatile` on the underlying memory when it is read.

So one can encounter tersely written code that works perfectly with printf statements in place, but where, without the printf, the status bits never get "updated" (the stale cached value keeps being used) and the program hangs.
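
For contrast, a hedged sketch of the `volatile` version (the address and bit are made up):

    #include <stdint.h>

    #define STATUS_REG ((volatile uint32_t *)0x40001000u)  /* hypothetical MMIO address */
    #define READY_BIT  (1u << 0)                           /* hypothetical status bit */

    void wait_until_ready(void) {
        /* volatile forces a fresh load on each iteration; without it (and without
           an intervening call such as printf acting as a barrier), the compiler
           may hoist the load out of the loop and spin on a stale value forever. */
        while ((*STATUS_REG & READY_BIT) == 0)
            ;
    }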

  • dennis_moore a day ago

    Ah yes, it can be quite fun chasing down heisenbugs like that, half sarcastically :)

bradley13 a day ago

If it's a one-off situation, as opposed to something that should be part of your long-term test cases? Then yes, using print-statements is fine. Everyone does it, this isn't (or shouldn't be) controversial.

Sometimes a series of print-statements is better for helping you understand exactly when and how a bug occurs. This is particularly true in situations where the debugger is difficult or impossible to use (e.g., multi-threading). Of course, in that situation logging may be better, but that's just glorified printf.

  • mobilemidget a day ago

    The glorified printf :) I prefer it over the non-glorified kind, as it goes to a log for future reference etc.; IMHO it has advantages over regular printf.

    • XorNot a day ago

      Most of the time these days I'm starting to just want compile-in profiling or something. At some point I'm just logging every line of code executed + locals, and I wish that was easier to just ask for.

overfl0w a day ago

And then there are those rare cases where inserting a print, or a new condition to use for a conditional breakpoint, forces the compiler to output slightly different code which does not produce the bug. Essentially this is similar to the observer effect in quantum mechanics, where the system is disturbed simply by observing it. And on top of that, the bug cannot be reproduced with optimizations disabled.

How are those cases debugged then? By enabling the debug symbols AND the optimizations and using the debugger, looking at the code and the disassembly side by side and trying to keep your sanity as the steps hop back and forth through the code. Telling yourself that the bug is real and it just cannot be reproduced easily because it depends on multiple factors + hardware states. Ah! I sometimes miss those kinds of bugs which make you question your reality.

  • K0balt a day ago

    In those cases it's almost never that the printf prevents the bug from being created; it's that the bug only manifests overtly when the printf isn't there. The actual bug is almost always present in both situations.

    It’s almost never the compiler. It’s almost never an error in the bare metal.

    Almost.

    • overfl0w a day ago

      The bug in question was an out-of-bounds write to a stack-allocated buffer. The compiler would choose to keep some variables in registers for optimization purposes. When calling a function, these registers' contents would get pushed to the stack. The faulty called function would modify those same register contents on the stack, so when the parent function's context was restored on return, the registers would hold faulty values.

      When adding a print or a check, the compiler would choose different variables to keep in registers. They would still get overwritten by the faulty function, but the bug would not be observed.

      I agree that it's almost never the compiler's fault though - but sometimes its optimization choices make it harder to reproduce a bug.

      Edit: The faulty function was a somewhat standard function, part of the SDK. This taught me that the standard functions are almost never faulty. Until they are :-)

      • K0balt a day ago

        Sounds like a fun one. I know I'm a broken man because I actually *like* tracking down those kinds of bugs lol.

  • sixthDot 7 hours ago

    Yeah, printf is not "pure"; it can modify CPU flags, so it's not always an adequate tool.

mark_undoio a day ago

I found this interesting:

> I know for some people this is often a terrible UX because of the performance of debug builds, so a prerequisite here is fast debug builds.

The reasons debug builds perform badly are kind of mixed, in my experience looking at other people's setups:

Building without optimisations

It's fairly common to believe that debug builds have to be built with -O0 (no optimisations) but this isn't true (at least, not on the most common platforms out there). There's no need to build something that's too slow to be useful.

You can always add debug info by using -g (on gcc / clang). Use -g3 to get the maximum level. This is independent of optimisation.

You can build with any level of optimisation you want and the debugger will do its best to provide you a sensible interpretation - at higher optimisation levels this can give some unintuitive behaviours but, fundamentally, the debugger will still work.

GCC provides the "-Og" optimisation level, which attempts to balance reasonably intuitive behaviour under the debugger with decent performance (clang also accepts this flag but, last I checked, it's just an alias for -O1).
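
Concretely, for gcc (clang accepts the same flags):

    gcc -g3 -O2 -c foo.c   # full debug info on an optimised build
    gcc -g3 -Og -c foo.c   # optimise, but favour the debugging experience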

Doing a ton of self-checks

People often add a load of self-checking, stress testing behaviours and other things "I might want when looking for a bug" to their code and gate it on the NDEBUG macro.

The logic here is reasonable - you have a build that people use for debugging, so over time that build accumulates loads of behaviours that might help find a bug.

The trouble is, this can give you a build that's too slow / weird in its behaviours to actually be representative. And then it's no use for finding some of your bugs anymore!

I think it would be better here to have a separate "dimension" for self-checking (e.g. have a separate macro you define to activate it), rather than forcing "debug build" to mean so many things.

mark_undoio a day ago

> You could use a conditional breakpoint within the debugger, but they can be slow and for me historically unreliable, but this little snippet gives you your own conditional breakpoint that works without fail.
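
(Presumably the snippet in question is something along these lines - a hedged sketch, with a made-up condition:)

    #include <signal.h>

    /* A hand-rolled "conditional breakpoint": the process evaluates the
       condition at full speed, and the debugger only gets involved when it
       fires. Goes inside the function you suspect. */
    if (request_id == 4242)   /* hypothetical condition */
        raise(SIGTRAP);       /* stops here under a debugger; without one attached
                                 this will typically kill the process, so guard it */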

In theory, you can make conditional breakpoints very fast using an in-process agent. For GDB (for instance) this gives the ability to evaluate conditional breakpoints within the process itself, rather than switching back to GDB: https://sourceware.org/gdb/current/onlinedocs/gdb.html/In_00...

I've always found the GDB documentation to be a bit vague about how you set up the in-process agent but I found this: (see 20.3.4 "Tracepoints support in gdbserver") https://sourceware.org/gdb/current/onlinedocs/gdb.html/Serve...

When we implemented in-process evaluation of conditional breakpoints in UDB (https://undo.io/products/udb/) it made the software run about 3000x faster than bare GDB with a frequently-hit conditional breakpoint in place. In principle, with a setup that just uses GDB and in-process evaluation you should do even better.

thrdbndndn 2 days ago

I have a very specific technical (UX) reason for using `print()` to debug sometimes.

In VS Code, if you want to run debugger with arguments (especially for CLI programs), you have to put these arguments in launch.json and then run the debugger.

This is often tedious, because I've usually already typed these arguments and tested them in the terminal, and now I have to convert them into JSON format, which is annoying.

To make it worse, VS Code uses a separate window for debug console than your main terminal, so they don't share history/output.

So if I already know what to look at and don't really need a full debugger, I often just use print() temporarily.

  • skissane 2 days ago

    I have the same issue. The tool I’m working on has a CLI with multiple subcommands. So I just added a REPL subcommand to my CLI. That way, I can start the CLI REPL in VSCode debug and then paste in the subcommand/arguments I want to debug.

  • shlomo_z a day ago

    Yes, this is extremely frustrating sometimes.

ok123456 2 days ago

A trace log of single-character print statements or equivalent is sometimes the simplest and most effective way to debug flow. This is particularly true for recursive functions, where anything more becomes too much, too fast.
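
For example, a sketch for a tree walk:

    #include <stdio.h>

    struct node { struct node *left, *right; };   /* hypothetical tree type */

    /* One-character trace: the log reads like "((..)(.(..)))", making the
       call shape visible at a glance. */
    void walk(const struct node *n) {
        if (n == NULL) { putchar('.'); return; }  /* '.' marks an empty branch */
        putchar('(');                             /* '(' marks entering a node */
        walk(n->left);
        walk(n->right);
        putchar(')');                             /* ')' marks leaving it */
    }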

frou_dh a day ago

You shouldn't have to go back and modify a program's source-code just to find out what some part does when run. That's just plainly obvious to me as a principle.

If we have no other option then sometimes we have to use non-ideal approaches, but I don't get the impulse to start saying that tooling/observability poverty is actually good.

seba_dos1 2 days ago

The main advantage of printf is that it's usually just there, one line of code away. Setting up a useful debugger session can be much more involved (it varies widely depending on what you're working on), so my ADHD mind will absolutely try to get away with printf first if it's possible. Sometimes it ends up being counter-productive, and you just learn to recognize when to bother with experience.

That said, there are some contexts where this is reversed - printing something useful without affecting the debugged code may actually be more involved than, say, attaching a JTAG probe and stepping through with a debugger. Though sometimes both of those are a luxury you can't afford, so you better be able to manage without them anyway (and this may happen to you regardless of whether you're working on low-level hardware bring-up or some shiny new JavaScript framework).

binary_slinger a day ago

Languages like Python and Java could benefit greatly from adopting the kind of powerful console logging you see in browsers. The ability to inspect an object by simply logging a reference to it is incredibly useful for debugging and understanding program state.

  • cha42 a day ago

    Logging is static in the CLI world, but you can reproduce this easily with pdb in Python.

oreally a day ago

The ideal default go-to for quick inspection of program flow and variables should always be quick access to a good debugger, when possible (i.e. one single keystroke).

Every printf is a slowdown, and you can't argue away the fact that you have to type up to 20-ish keystrokes and excite a number of neurons trying to remember what that printf syntax or variable name was, compared to a debugger that automatically prints out your call stack and local variables without even being prompted.

bArray a day ago

Honestly, it's just about doing what is easier at the time. Recompiling an application in debug mode can be a pain sometimes, as can getting back to a specific state within the application to inspect what is going on. Some have mentioned hardware (which marches on with or without your software running); similarly, you may only control one sub-system/interface within a larger system.

My print statements normally come out when "I have no idea why this is breaking", and I start sanity-checking the basics to make sure that the things I previously thought were true remain so.

Just recently I was doing something in C after a long time, and had something like this (simplified):

    #include <stdio.h>
    int main(){
      int a = 0; // Input from elsewhere
      switch(a){
        case 0 :
          printf("0\n");
          break;
        defult :
          printf("?\n");
          break;
      }
      return 0;
    }

It was late at night and it compiled, so I knew it wasn't a syntax issue. But after testing with printf()'s I realised that the default case was never being hit and performing the core action. It turns out 'defult' highlights in my editor and compiles fine in GCC; in fact, any word in that location compiles fine in GCC. Nasty!

  • amszmidt a day ago

    Well, yes... it is a label, and a label can be put almost anywhere. This is normal, expected C behaviour, and any standard-conformant compiler will accept it, not just GCC.

    • bArray a day ago

      I think it should at least warn by default that the label is unused, or recognise that unused labels within switch statements are likely to be mistakes.

      Anyway, the point was, my tired eyes did not easily grep the spelling mistake, hence the print statements.

pjmlp a day ago

Printf debugging is definitely OK, when that is the only tool available.

Other than that, people should spend time learning the ins and outs of their debugging tools, like they do for the compiler switches and language details.

Additionally, when having the option to pick the programming language, it isn't only the syntax sugar of the grammar, or its semantics that matter, it is the whole ecosystem, including debugging tools.

Personally, I'd rather have a great debugging experience than fewer characters per line of code.

hot_gril a day ago

I joined a research project a while back that was mostly about squeezing performance out of something and testing various configurations. The previous researcher had set up a fancy metric collector that was cumbersome, besides also affecting the performance a little. I replaced it with printf + a Python script that parses logs.

That said, I was pleasantly surprised I was able to attach a debugger to that system. Some bugs really needed it.

giancarlostoro a day ago

In my eyes Kibana / Elastic logging is even better. Basic logging is useful for local-only dev work, but once something is deployed, a more serious logging setup is infinitely more useful. You can log all relevant data down to a specific event or request and really dig into things as needed. And if you use log levels correctly, you can get drastically more detail by pulling all the "debug" logs. This was the bread and butter at a former employer.

sixthDot 20 hours ago

That post does not mention how contracts can help as preliminary clues to the nature of a bug.

Sure, you can printf or run gdb or whatever, but if something like a contract has failed first, it will be easier.

chmod775 a day ago

Why even obsess over how others ought to get stuff done? Judge the results.

atq2119 a day ago

Regarding the first paragraph, I have the impression that gamedev and graphics folks have largely moved to Mastodon. There's a gamedev Mastodon instance.

gringow 2 days ago

I also use something like printf() from the beginning, just with a shorter name, and its printing is conditional, so I usually don't remove it from the code. :)

konschubert a day ago

Print debugging gives you more of a bird's-eye view of the bug search.

Stepping through code is more like having your nose on the ground.

Both have their merits.

petabyt a day ago

As somebody who hacks cameras and does reverse engineering and firmware development, log debugging is actually a luxury :D

coolgoose a day ago

I usually call this ddd, debug driven development, or in case of php die() driven development :)

nejsjsjsbsb a day ago

Do people think it is not?

The only time it is not OK is when breakpoint debugging is overall faster but you are avoiding the hassle of setting up the debugger.

Also OK: adding a console.log or print in your node modules or python package cache.

And btw, Splunk, Datadog etc. are just printf at scale.

forrestthewoods 2 days ago

It’s shocking how many people never use a debugger. Like sure printf debugging is good enough sometimes. But never? That’s wild.

Honestly I think attaching a debugger should be the first debugging tool you reach for. Sometimes printf debugging is required. Sometimes printf debugging is even better. But a debugger should always be the tool of first choice.

And if your setup makes it difficult to attach a debugger then you should re-evaluate your setup. I definitely jump through hoops to make sure that all the components can be run locally so I can debug. Sure you’ll have some “only in production” type bugs. But the more bugs you can trivially debug locally the better.

Of course, I also primarily write C++ code. If you're stuck with JavaScript, maybe the calculus is different. I wouldn't know.

  • seba_dos1 a day ago

    To be frank, I actually find a debugger to rarely be useful. When it is useful, it is super useful - but those occasions aren't that numerous, so I can totally believe that many people may get away without learning how to use one. It feels like 90% of my time in gdb is spent reading backtraces after segfaults, which I wouldn't really consider to be a use of a debugger. I'm still glad I can use it for the other 10%, but it's just 10%.

    • forrestthewoods a day ago

      GDB is a dog shit trash debugger. I keep forgetting that Linux and Mac have shit tools. I suppose if the only debugger available to me were GDB I would also prefer printf debugging!

      The typical modern dev loves to shit on Windows. But Visual Studio (the adult version, not VSCode) is still a best-in-class debugger. Xcode is bloated as hell but did help me last week. Linux has… poor bastards.

      • seba_dos1 a day ago

        I've spent plenty of time with various debuggers for various platforms and languages, both graphical and CLI-based, and it really doesn't matter to what I said. A debugger may be someone's preferred tool anyway, sure, but the cases where it's actually non-trivially improving workflow are relatively rare. Most of the time it just makes you a bit faster, if at all.

        • forrestthewoods a day ago

          I could not possibly disagree more strongly. But hey you do you.

      • throwaway2037 a day ago

        What is wrong with JetBrains' CLion? It is a cross-platform C & C++ IDE. I have used it on Linux, and the GUI-based debugger is so much easier to use compared to GDB from the command line.

        Also: do you say that LLDB is a "dog shit trash debugger" too?

      • fragmede a day ago

        Linux has VSCode, which also runs on Mac.

        Does Visual Studio do time traveling yet? https://www.replay.io/ is cross-platform.

        • forrestthewoods a day ago

          VSCode is an extremely mediocre debugger. It’s better than nothing. But it’s not great.

          WinDbg has a time traveling debugger.

          replay.io does not support a single language or environment that I care about.

0xDEAFBEAD 2 days ago

>The main arguments are “if you need to use a debugger you’re an idiot and you don’t understand the code you are writing” (that’s not an actual quote but there was a similar take along those lines). Then there is “If you can’t use a debugger you’re an idiot”. The hating on the ‘printf’ crew is omnipresent.

My holier-than-thou take on this topic is: Whenever possible, debug by adding assert statements and test cases.
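
In C terms, something like this (the types and invariants are hypothetical):

    #include <assert.h>
    #include <stddef.h>

    struct queue { size_t len, cap; };   /* hypothetical type */

    void enqueue(struct queue *q) {
        assert(q != NULL);
        assert(q->len < q->cap);   /* fail loudly at the first corrupted state,
                                      not thousands of instructions later */
        /* ... */
        q->len++;
    }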

from-nibly a day ago

Print debugging is for when you have a quick iteration cycle. I don't know what debuggers are for, because if you don't have a quick iteration cycle you should be killing yourself to get a quick iteration cycle.

  • Too a day ago

    On the contrary: adding print statements increases iteration time, because you need to edit the file, recompile and relaunch the program, whereas adding a breakpoint is just one click in the margin of the editor.

jfoster 2 days ago

Of course it's okay, and at the same time no one will use it as soon as something else is more convenient & effective.

Been waiting for "something else" ~30 years & counting.

  • topspin a day ago

    Merging PREEMPT_RT (real-time support) into the Linux kernel only happened in September 2024, after 20 years of work, in part due to a fundamental conflict with printk(), the traditional "printf" of Linux, widely used for kernel debugging. Linus is unwilling to forego printf debugging in the kernel, and has blocked significant evolutions of Linux on its behalf.

    That counts for a lot to me. Anti-printf people have to go a long way to convince me that it's somehow not ok. I use debuggers every day. I use some form of printf debugging every day. I use whatever affordance I can get my hands on, and I ignore the peanut gallery.

semiinfinitely a day ago

working at google forced me to learn how to code without using a debugger

  • mark_undoio a day ago

    Can I ask what about it forced the issue? Working on network services? The languages you developed in? Culture?

mvdtnz a day ago

We literally just saw a case[0] where an application (iTerm2) mistakenly left verbose logging enabled in a component of a production build and logged sensitive console contents to connected hosts.

Are you diligent enough to remove your sensitive logging/printf statements EVERY time, for the rest of your career? Or should you make a habit of doing things properly from day one?

0 https://news.ycombinator.com/item?id=42579472

  • jaredsohn a day ago

    >Are you diligent enough to remove your sensitive logging/printf statements EVERY time

    Yes, when a linter is set up that fails when printf debugging is found

readthenotes1 2 days ago

TDD is ok, but LDD is better. If you can figure out what went wrong with your code from the logs, there's a non-negative chance that whoever gets the joy of maintaining said code might be able to as well

imron a day ago

> I usually name my variable ‘a’.

Me too!

LordGrignard 2 days ago

all the rest of the stuff is okay, but seriously? Two-finger typing? As a programmer? You're gonna be there for AAAGES, and your keyboard probably runs a side business with how long it takes you to type.

I know I'm stirring up shit here, but there really are benefits to touch typing (I mean, just think about it: using 10 fingers instead of 2 is gonna be so much faster, assuming you have 10 dingers)

  • kadoban 2 days ago

    Touch typing is objectively better, but 2 finger is enough to have a career at programming, it's not what is going to hold you back.

    I'd be more concerned about RSI than speed. You _really_ don't need to type fast for programming. If you do, your tools should be helping you do boilerplate more.

    • Izkata a day ago

      > Touch typing is objectively better

      Even that depends on what exactly you mean. I agree with the literal meaning of the words, but for too many people "touch typing" means "home-row touch typing". Which I find extremely awkward and difficult. But literally just typing by touch, I'm self-taught due to StarCraft multiplayer back in the 90s and early 2000s, and have a style that confuses home-row typists when they realize what I'm doing.

      So it's more like, you need to type smoothly enough that you're not thinking about typing. As long as your hands are pretty much moving automatically to get your thoughts into the computer, you can use any number of fingers at (almost) any speed, and you're good.

    • atq2119 a day ago

      The RSI thing is real. Until my late 20s I would type blindly with what I jokingly called 5 and a half fingers. But I did start to notice some strain, which is when I decided to finally teach myself "proper" touch typing. I don't think my typing got meaningfully faster, but it got smoother, and the strain went away.

      • mark_undoio a day ago

        Getting a split-layout "ergo" keyboard helped me learn to type better back in the day. I could already touch type fast but not with great technique.

        When I got a split-layout keyboard, it became quite apparent that my technique was "weird" - sometimes I'd notice one hand come wandering over to the other side of the keyboard to find a key it was used to pressing. That became easy to correct once it was so visible!

  • dkjaudyeqooe 2 days ago

    I'm a programmer, not a secretary. I don't need more than 2-4 fingers, because I don't need to write code that fast. I don't understand why anyone who writes non-trivial code would need to write it so fast that they needed to touch type.

    There are ergonomic issues too. Hunt-and-peck gives you a lot more flexibility in how you can use a keyboard.

    Most programmers who don't touch type can still type pretty fast. There is more than one way to do it, and I'm not convinced touch typing is the right one.

  • Jtsummers 2 days ago

    > dingers

    I'd suggest practicing your touch typing a bit more.

  • BrouteMinou 2 days ago

    9 fingers... Not 10, 9.

    • johnnyanmac 2 days ago

      I use both my thumbs on the spacebar to assert dominance on every word.

      But now I'm curious, which thumb do most people use on a keyboard?

      • devsda 2 days ago

        My spacebar(s) are very smooth and shiny on the right compared to left.

        • Izkata a day ago

          By the same measure, looks like I use my left thumb by far the most. I'm left-handed.

  • georgemcbay 2 days ago

    I've been programming for decades and I like... three finger type. Two index fingers and a thumb. I don't look at the keyboard though (I know where the keys are relative to each other inherently through years of typing this way) and I type at a very high rate of speed for someone using so few fingers.

mherkender a day ago

I can't imagine being so passionate about something that works, when it's not some philosophical/moral issue, just "real programmers use X".

I use vim, IDEs, debuggers, printf debugging, whatever works. A tool is a tool. I guess my holier-than-thou position is against the idea that there's one right way to do anything.

anal_reactor a day ago

Printf debugging is the best debugging because it's the only technique available in all environments. You've just learned to use the Visual C++ debugger? Great, here is some Python code to debug. Or maybe the bug is in a bash script that only works within a container in the cloud. Or is it the Ansible deployment script that is wrong? IDK, have fun, use your debugging skills.

shadowgovt 2 days ago

As with so many categorical guidelines, there are circumstances to use it and circumstances to not.

The key insight is that printf() is a heavyweight operation ("What, you want to build a string? A human readable string? Okay, one second, lemme just pull in the locale library..."). If you're debugging something at the business-logic layer, it's probably fine.

If you're debugging a memory leak, calling a function that's going to make a deep-dive on the stack and move a lot of memory around is likely to obscure your bug.

  • mark_undoio a day ago

    > If you're debugging a memory leak, calling a function that's going to make a deep-dive on the stack and move a lot of memory around is likely to obscure your bug.

    *shudder* ... memories of my early days in kernel-level programming, where adding a printf would sometimes just "fix" a bug.

    That came down to an uninitialised variable (which calling printf was helpfully initialising via its use of the stack).

    As a result of that early experience, when I'm in the headspace of very low-level bugs, I find it helps to think of printf both as "this will tell me some variable values" and as a source of other clues: if it makes a weird value disappear or change then you might have uninitialised data on your stack, if it makes a flaky behaviour become stable (either disappear or become repeatable) then it's probably a race condition, etc.

    A reference to James Mickens' "The Night Watch" feels appropriate here: https://www.usenix.org/system/files/1311_05-08_mickens.pdf

  • fweimer a day ago

    Isn't localization triggered for specific format strings only? There are many corner cases when typical printf implementations call malloc, but it's usually not too hard to avoid them.

    If you are worried about including <stdio.h> and potential side effects from that, you can use __builtin_printf instead.

    • shadowgovt a day ago

      You're correct, but then you're adding the additional burden to your debugging process of memorizing which format strings trigger localization. I'd have to look it up, but I'm pretty sure numbers do, which means anytime you %d or %f you're inviting the devil in.

      When I say move memory around in this context, I mean do a lot of stack operations. You can leak from the stack too (drop a pointer from the stack without freeing the underlying heap memory it referenced), and it's harder to catch that if printf has come along and completely rewritten your unused stack memory as consequence of reporting on the state of your program.

      That having been said, the point is that context matters: what you're debugging matters for the question of which tool to use. If you're operating in an interpreted language, you can probably trust that the interpreter is making it difficult to leak memory like that. On the other hand, interpreters have bugs too; using a language that is interpreted instead of compiled to machine code makes bugs in the execution layer unlikely, but not impossible...

      • fweimer a day ago

        Admittedly, I forgot about %f/%g and the decimal separator. I think the rest is fairly obscure (such as %lc for wide character to multibyte character translation). For integers you need %Id (variant digits) or %'d (grouping).

        The decimal separator is a bit of a problem for JSON generation, too. Some systems have snprintf_l, but it's not very widespread.

invalidname 2 days ago

Counterpoint: nope.

Logging is great for long-term issue resolution. And there are tracepoints/logpoints, which let you refine the debugging experience at runtime without accidentally committing prints to the repo.
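
(gdb's dprintf is one example of a logpoint: it prints without touching the source. The file, line and expression here are hypothetical:)

    (gdb) dprintf pool.c:142,"pool size=%d\n",pool->size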

There are specific types of projects that are very hard to debug (I'm working on one right now); that's a valid exception, but it also indicates something that should be fixed in the stack. Print debugging is a hack. Do we use it? Sure. Is it OK to hack? Maybe. Should a hack be normalized? Nope.

Print debugging is a crutch that covers up bad logging or problematic debugging infrastructure. If you reach for it too much it probably means you have a problem.

  • bogota 2 days ago

    What is the difference between print debugging and logging? With most deployment setups nowadays you can pipe prints to some logging solution anyway.

    • invalidname 2 days ago

      Print is ephemeral by design.

      Logging lets you refine the level of printing and is designed to make sense in the long term. There are many technical differences (structured logging, MDC etc.) that I won't get into but they are important.

      To me it's mostly about the way you write the logs vs. the way you write a print. A log tries to solve the generic problem so you can deal with it in production if the need arises by dynamically enabling logs. It solves the core issue of a potential problem. A print is a local bandaid. E.g. when print debugging one would write stuff like "f*ck 1 this was reached"... In logs you would write something sensible like "Subsystem X initialized with the following arguments %s". That's a very different concept.

      • ricktdotorg 2 days ago

        totally agree re: structured logging. my intro to that was w/ GCP Logging and it changed my mind on when and what to log. proper structured logging, plus keying metrics/alerts off those logs, is extremely satisfying and legitimately useful.

  • tliltocatl a day ago

    If your code is connected to some external factory, it's impossible to pause on a breakpoint without breaking everything. And no, "just mock it" doesn't work if you have no idea how the factory actually behaves. Printf is good because it doesn't mess up timing.

    • mark_undoio a day ago

      Time travel debugging (https://en.wikipedia.org/wiki/Time_travel_debugging) can be good for this situation too because it separates capturing a bug ("recording") and understanding it (the debugging phase, which happens while "replaying").

      You don't need to pause anything whilst capturing the bug.

      Now, if your code is literally connected to running critical systems in an actual factory then you've probably got additional realtime and safety-critical considerations that might push you towards debugging.

      But (for more conventional use cases) time travel debuggers can handle multiple communicating systems without causing timeouts, capture bugs in software that interacts directly with hardware devices, etc. And you don't have to keep rebuilding / rerunning once you've reproduced the bug.

    • invalidname a day ago

      I suggest re-reading my comment and learning about tracepoints/logpoints.

      Your comment highlights my exact problem with print debugging... you just aren't aware of the tools available to you, so you reach for the rusty old broken hammer.

  • nofunsir 2 days ago

    Agreed. Safety-critical kernels do not even have printf.