292: Debugging with Joël Quenneville
The Bike Shed - A podcast by thoughtbot - Tuesdays
On this week's episode, Steph and Chris are joined by fellow thoughtbotter Joël Quenneville to discuss all things debugging. Joël is helping publish a weekly debugging blog series, and in this conversation, they discuss how the series got started, technology-agnostic debugging strategies, and writing less bug-prone software, and they speculate as to whether Joël moonlights as a hockey coach.

Debugging Blog Series 2021
Classical Reasoning and Debugging
Monodraw
An Elm debugging story
Chelsea Troy - PoSD 2: What causes insidious bugs?
Follow Joël Quenneville on Twitter
Joël Quenneville on the thoughtbot blog

Transcript:

STEPH: All right. And the person who will be editing this episode will be Mandy. So as we run into blunders, which we never do, but if we do, then we can talk to Mandy and ask her to edit things for us. So I will try very hard to do that because I will likely still talk to Thom. [chuckles]

CHRIS: Hello, Mandy. It is a pleasure to meet you. In the last recording that will be going through you, I was referring to you indirectly as our next producer. But now that we know your name, I'm so excited to have you on the team and to know who is on the other side of these, hopefully not too nonsensical recordings. So pleasure to meet you.

STEPH: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Steph Viccari.

CHRIS: And I'm Chris Toomey.

STEPH: And together, we're here to share a bit of what we've learned along the way. So, hey Chris, today is an extra special day as we have a guest. Joining us is Joël Quenneville, a thoughtbot developer extraordinaire and a previous Bike Shed guest. Welcome, Joël!

JOËL: Hi.

CHRIS: It's a pleasure to have you.

JOËL: It's good to be on the show.

STEPH: So, Joël, you and I are on the same client project. And during the past few weeks, we have encountered some very challenging bugs. In fact, if I look back at my developer journal, the last five or so of my entries begin with a very cheesy mystery title, like The Case of the Missing Data or Harry Potter and the Chamber of Errors. But in addition to spending your time bug hunting with me on this project, I understand that you and other thoughtbotters, Jesse Bailey and Louis Antonopoulos, have been publishing weekly blog posts specifically about debugging.

JOËL: Yes, that's correct. This has been a project that we've been doing for a little while. We spent about three months doing research, conducting a bunch of interviews, and gathering a lot of material on debugging. And then, starting in April and running through the middle of the summer, we are publishing an article every week on the topic of debugging, exploring different aspects of it.

STEPH: I'm really curious; what prompted y'all to start talking about debugging? I'm really excited to talk about the specific topics that are included in the series. And you also mentioned all the research; I'd love to know how you went about that research. But before we go there, what prompted this conversation and then led to the creation of the series?

JOËL: Within thoughtbot, we spend a lot of time trying to improve ourselves and the broader community as well. And there was a conversation that got started about how we could get better at debugging. It's a thing that, as developers, we do all the time, and it's not something that's often taught explicitly.
For many of us, our experience with debugging has been very much just learning on the job and picking it up by osmosis and trial and error. And when you're more junior, a lot of that is just random stuff: changing random lines of code and maybe copying something from the internet and hoping things go well. And as you build experience, you tend to start becoming a little bit more methodical because you've seen what works and what doesn't. And so we wondered: within thoughtbot, we have all these different people who've each come up with their own experience. Can we meld that together, share the summary of all thoughtbot debugging knowledge combined, and help everybody level up? Initially, we were wondering, could this be just an internal workshop or something where we get together and exchange the different ways that we've learned or the different techniques that each of us uses? But then this evolved into a project that we wanted to share not just within thoughtbot but with the wider world. And so its final form, or at least its current form, has been a blog post series that's going to run over the course of three to four months.

STEPH: I appreciate how you've taken the opportunity to take all of this knowledge and then turn it outward-facing, so it's available to the public as well. And it really wasn't until you were talking about the series, and I started reading the weekly blog posts that are being published, that I realized how little I had read about concrete strategies around debugging. I think you said it very well earlier: it's something that you pick up on the job, and you learn different strategies as you go. It's really hard to know exactly how to go about debugging unless you've had experience with that type of bug before.

CHRIS: I've also been reading the blog posts as they come out. And I'm similarly very grateful for the general theme at thoughtbot of, hey, let's take this thing and actually make it a shared resource for the world. But specific to debugging, it really is this interesting intersection of practical steps that you can take but also almost an art form. There's like, oh, I use Pry, and I put a debugger statement here, and that's a mechanical approach that we have. But then there's also the more general how do I think about code, and how do I build a model in my head and compare that to the thing that's running on my screen? And that is really so much more something that you learn in passing, or at least it typically is. And this idea of trying to be a little more purposeful and share more of that with the world is something that I absolutely love because it is both one of the harder aspects of the job and probably one of the more common. I'm spending a lot of time being like, why isn't that working the way that I thought it would? I don't know, maybe 90% of my time as a developer, maybe I'm a bad developer, but it's so much of the time that I spend. And so any amount that I can get better at this or learn from others, I'm super open to that. So thank you for producing this and for sharing all the secrets.

JOËL: I liked your example of dropping in Pry to put a debugger at a particular location. And I think that maybe that's something that most people are familiar with and might use. But even something like that, which sounds like a fairly simple, basic concept -- if you're pairing with someone who has a lot of experience, you might ask them, "Why did you put the Pry here and not there?"
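For readers who haven't used it, here is a minimal sketch of the technique under discussion: dropping a Pry breakpoint into a Ruby method. The method and data are hypothetical illustrations, not from the episode:

```ruby
require "pry" # provided by the pry gem (assumed to be installed)

LineItem = Struct.new(:price, :quantity)

def total_price(line_items)
  subtotal = line_items.sum { |item| item.price * item.quantity }
  # Pausing here, rather than after the tax line below, lets you inspect
  # line_items and subtotal before any later code can mask the bug.
  binding.pry
  (subtotal * 1.08).round(2)
end

total_price([LineItem.new(10.0, 2), LineItem.new(5.0, 1)])
```

Where exactly the binding.pry goes is the judgment call being described here: just before the earliest line whose output you no longer trust.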
And that might make the difference between spending all day versus spending half an hour to find the bug. And that's, I think, a lot of where the experience comes in and where being a little bit more structured and having a strategy can really make the difference in being efficient. Because oftentimes, you're using the same tools and doing roughly the same techniques, but one person can be totally flailing, doing random stuff and hoping to get lucky with a solution, and the other is very methodically working towards finding the problem.

CHRIS: I love that framing of the question, but I also love the idea that you ask that of someone, and they're like, "Huh? I don't actually know. But now that I think about it…" and then they sort of discover the answer. But it's very much that they're operating from intuition and just like, well, obviously I put the debugger right before the line that I think has this going on. But that's not necessarily something that's top of mind. So when you ask the question, you almost help them to formalize it. So yeah, there's gold in those hills.

JOËL: Absolutely. Definitely. I think gaining self-awareness about why you do what you do is always such a rich exercise, at least in my experience.

STEPH: There's an interesting comparison here where people will ask me how to do something in Vim. And I'm like, oh, let me do it first because I need the muscle memory for it, and then I can tell you how to do it. And I feel like debugging is often the same way, where I'm like, well, let me hear about the problem, and then I can work through it with you, but I can't necessarily tell you off the top of my head all the different strategies that I would apply. So I really liked the idea of becoming more self-aware as to how you would approach that, so that you can then provide those tips to someone else. You mentioned earlier the research for creating the series, and I'd love to hear more about that. Because I imagine there's so much that you could talk about in terms of debugging while still wanting it to be helpful to people who are working in different technologies. So how did that research go? What did that look like?

JOËL: Initially, this was focused on experience within thoughtbot. So we focused a lot on internal research. We had a survey where people just shared their debugging thoughts, tips, and tricks. And then, we also set up a bunch of interviews with multiple thoughtbotters across varying levels of experience to get their thoughts on debugging, and that was incredibly rich. So we did, I think, 10 or 12 interviews within the company. We captured all of that and then synthesized the output of those interviews, the survey with all of the random tips and tricks, and existing thoughtbot blog content. And then, we also pulled in some external resources that we had found, as well as some podcasts and some other blog posts elsewhere. And we tried to mix all of that together to come up with 10 or 12 high-level topics that we thought were particularly valuable to talk about and then turn those into blog posts.

STEPH: That's really awesome. I love how you took that approach of interviewing different people to take that instinctual knowledge that we have around how we're debugging, then helping people put that into words, capturing it, and sharing it. What types of questions did you ask to help people walk through their debugging strategies?
JOËL: That was actually pretty fun because we came up with a list of questions ahead of time -- not that it was a strict list to be adhered to, but we wanted to have a little bit of commonality between the interviews and then let them go where they would. Most of them we opened up by just asking people, "What does the word debugging make you feel? What is your personal connection to debugging?" And I expected most people to be like, "Oh, dread or frustration." And that was not the case. Most people said, "Actually, I like debugging. It's a challenge. It's a puzzle. It appeals to the analytical side of me." One person mentioned that it's like the code equivalent of an escape room, and escape rooms are one of their favorite things, so they actually really enjoy it. Where a lot of the frustration comes in is when you have deadlines, and this bug is unexpected and therefore is taking time away from things you really need to be doing now or yesterday. And so the timeline can cause pressure, but the bug itself, most people seem to enjoy finding.

STEPH: I love so much that you just used the comparison of an escape room because I was just chatting with someone recently about how my week has been going, and I'm like, I am in an escape room. That is my job this week: to figure out how to get out of this, or to understand what is happening in the system. So that's really funny. That's also very encouraging to hear that so many people have a positive association with debugging. But I certainly understand that it's more the deadline, the timeline, that puts pressure on us to solve something quickly; we actually enjoy the hunt, is what it sounds like.

CHRIS: And in my experience, when it goes well, it can be really fun. But then there are those days, not just under deadline pressure, where I just sort of lost a day to blah. I couldn't figure out what was going on. And especially when it turns out to be something relatively simple, that feeling is somewhat crushing. But when there's this like, oh no, things are not working the way I expect, and then I'm able to dig in, read a bunch of Stack Overflow, put in a bunch of debugger or puts statements, and crack it, that is a fantastic feeling. And there is the optimum flow level where it's just at the edge of your knowledge and skill set, and it doesn't take too long, but it's not too easy. Because, like you said, the thrill of the hunt is there. So there is a sweet spot, I think, and there are ways that it can go wrong on either side. But I definitely resonate with the idea that a medium-scale bug that I'm able to tackle, that's a good day, actually. I feel good at the end of that day.

STEPH: There's a delightful blog post by Chelsea Troy where they share a graph of the time spent debugging versus the probability of a one-line fix. And so the more hours that you've invested into fixing a bug -- Joël, I think I found this particular blog post thanks to the Debugging Series; it's linked to in one of those posts. And so the more hours that you sink into finding a bug, the more likely it is that it's going to be just a one-line fix. And those are the ones that feel the most painful to me. It's something that is small and comes down to an incorrect assumption about the system. And those are the ones where it feels good that I solved it, but I didn't really enjoy the hunt. There's a specific time window in which I'm enjoying myself, after which it just becomes stressful and frustrating.
JOËL: Sometimes, it almost feels like the number of lines of code in the solution needs to match the amount of effort it took to find the bug. And so if I spend a day searching for the bug, it's frustrating that it's only one line and that the solution took 30 seconds, but the search took eight hours.

CHRIS: I think my measure would be the length of the associated commit message. I'm fine with a one-line change as long as there are a couple of paragraphs of explanation of the journey that I went on, and not just self-serving like, hey everyone, listen, because this was rough for me, but look at all the things I learned; let me capture that knowledge here because there's actually a bunch that this one code line change encapsulates. Maybe it's actually looking at the history of HTTP and the way that different headers are handled in different browsers, and, for many reasons, this one-line change. But if it's a one-line change and the commit message is like, "Oh, typo, sorry," then I feel bad. That's a bad day for me. [chuckles]

JOËL: This reminds me of the project, Steph, that you and I are on, where I've definitely had several bugs that I've gone through to fix where the commit message is definitely quite a bit longer than the actual diff. And the commit message includes ASCII diagrams showing the structure of certain database tables and why I had to change a particular query. And it's weird, and you would never understand why without understanding the schema. So yeah, I definitely feel you there, Chris; sometimes you go on a journey, and it's very important to record that for the next person.

STEPH: Yeah, your commit messages have been phenomenal. And I love all the diagrams that you've included because they have helped me have context for what exactly you understand about the system and what appears to be wrong with the system. That has been wonderful. Speaking along those lines, as we were just talking about how ephemeral the strategies that we use for debugging can feel, I'm really curious: what strategies do y'all use for debugging? Do you have particular tools that you use? I know, Joël, you are a fan of diagrams. Is there a particular tool that you use for that?

JOËL: There's a variety of tools that I'll use. Recently I've been using Monodraw, which is a tool for macOS that allows you to make diagrams that can be exported as text using various ASCII characters, which means that you can then draw an entity-relationship diagram of your database and include that in a commit message, where it needs to be text. You can also export it as an image. But the fact that I can use it as text in a commit message has made it particularly valuable recently. So that's my latest go-to diagramming tool.

STEPH: That's really nice. I haven't used that, but I've seen you use it so extensively that I've added it to my list of things to check out very soon. Are there any other particular strategies that you use, since we're on the topic of concrete strategies, for debugging and how you approach debugging?

JOËL: I am a big fan of binary search, which sounds like a fancy computer science term, but I think from a very practical standpoint, it's just a process of elimination. Sometimes finding where the bug is is harder than finding where it's not. And so, can I eliminate roughly half of this file or this project or whatever, know that it's not a concern, and then keep repeating that until I have a pretty small surface area in which to actually find the bug itself?
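As a minimal sketch of that process of elimination in Ruby, suppose a report total comes out wrong somewhere in a chain of steps (the pipeline and bug here are hypothetical, not from the episode):

```ruby
# Hypothetical pipeline: the report total comes out wrong somewhere below.
def report_total(orders)
  rows = orders.reject { |order| order[:cancelled] }
  rows = rows.map { |order| order.merge(amount: order[:amount] * 0.9) }  # discount step
  rows = rows.map { |order| order.merge(amount: order[:amount] * 1.08) } # tax step
  rows.sum { |order| order[:amount] }
end

# Process of elimination: comment out roughly half the steps and rerun.
# If the wrong total persists, the bug is in the half still running;
# if it disappears, it's in the commented-out half. Repeat on that half.
puts report_total([{ amount: 100.0, cancelled: false }, { amount: 50.0, cancelled: true }])
```

Because each round halves the suspect code, even a long pipeline narrows to a line or two in a handful of runs.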
And this can be as simple as just commenting out lines of code. And so, like, I'm going to comment out roughly half the lines in this method and see, can I still reproduce the bug? If so, then I know it's in the lines that haven't been commented out; repeat until I find the bug. And because you remove so much code every time, it takes relatively few steps until you've found a very narrow area in which the bug likely is.

CHRIS: I love that. Binary search is definitely one of my favorite approaches. And I think that was a very welcoming and friendly introduction to the topic for anyone who might not be familiar. But it really is such a simple and yet effective tool for narrowing down the scope. Because when you look at an entire application and you're like, ah, something's wrong, and you start from the outside, that's overwhelming. But if you can start to narrow it down, especially, like you said, in this more methodical, purposeful approach, that is really wonderful. I think one of the tactics that I have been reaching for more and more is using minimal reproduction cases. So rather than actually working in the context of the full application -- this is especially true if I'm working on, say, a JavaScript app; Svelte is the most recent example. Svelte has a REPL on the svelte.dev website. And I find myself more and more reaching for that and just trying to very minimally reproduce code that isolates that bug. And then I try and tease away the pieces, and now I'm left with this minimal reproduction in an executable format, and that ends up being really useful. The same sort of thing on the Ruby side: I might actually do it in a spec, just because that's a really nice way to harness execution and be like, I want to do these things. Here's the setup. It's one of the reasons we love specs, but I find it's actually a really great tool for setting up some data, executing the app in a certain way, and then testing it. And I find, particularly with RSpec and Rails, I feel like I have good control over getting the system into a certain shape. With other applications, I find that's a little more difficult, so other techniques may be necessary. But yeah, that's definitely one of the things that I've been leaning on more and more: minimal reproduction, so that I can really narrow down the scope of what I'm looking at.

JOËL: I like that example, Chris. And that actually is a variant of probably one of my favorite approaches, which is reasoning by analogy. So you have a hard problem, and you can't figure out a way to solve it. So you find a similar easy problem, find the solution to that, and then try to backport the solution to your hard problem. Oftentimes, the easy problem is just a simplified version of your hard problem, with all the unnecessary detail stripped out, and then you can backport the solution. But sometimes, it's just something in a different domain that you understand more easily, and then you can take that solution and backport it. And I had a magical experience with this a while back. For those of you who know me, I'm a big fan of the Elm language, and I spend a lot of time in their Slack just helping out people. And someone ran into a situation with random generators, which are a concept in that language that can generate random values. They were trying to combine a bunch of them and were having really weird bugs. And I tried a bunch of different techniques to figure out what was going on, and I couldn't figure it out.
But the thing that I did figure out is that random generators, in this particular dimension we were trying to understand, work very similarly to functions, which are a much simpler concept. And the moment I realized, oh, in this very particular way, generators and functions are the same, all of a sudden, that unlocked in my mind: wait, I can reason by analogy here, because I know how to solve this problem with functions; I don't know how to solve it with random generators. So I went and solved it by saying, "I don't know how to compose a bunch of generators. I do know how to compose a bunch of functions. Figure that out, and then take that solution and bring it back to generators." And I followed that process through, and it worked, and it was amazing. I came in to work the next day, and I was really excited about this. And I was talking with some colleagues, and everyone was like, "You should write about that." So there's a blog post from a year or two ago where I walk through my whole process and all the different debugging strategies I used, including reasoning by analogy, to solve what was a pretty tricky bug.

STEPH: I love how the typical response from talking to a thoughtbotter about going through something like that is often, "Oh, you should write about that."

CHRIS: And "I'll help you edit it," not just "Please go write that." And Joël, you live that to the extreme. You have been an absolutely prolific author on the thoughtbot blog. And you bring in some of the diagrams and things like that that you're talking about. You really, I think, provide such a great example of paying knowledge forward and sharing better. So if anyone wants to learn about blogging on the internet, just go follow Joël's work for a while, and you'll learn some great things.

STEPH: Yeah, that is so true. I love how much you publish, and I'm a big fan of everything that you write. One of the debugging strategies mentioned in the blog posts that really rang true for me was identifying assumptions, because that is one that I typically fall into, where I will read about a problem, and then I will say, "Okay, I understand exactly what problem they're running into." And once I start troubleshooting, if I'm unable to reproduce it -- because I follow a similar strategy, Chris, to the one you just mentioned, where I will try to replicate the issue, either locally or, ideally, by writing a test, so that I can write a test that fails and then make that test pass. But if I'm unable to reproduce it, then I'm forced to go back and say, "Okay, am I making an incorrect assumption about what's being reported?" And that has been so helpful. Like, there are just little things where I realize I'm on autopilot, where it's like, the user downloads a report, and I'm like, oh well, they mean this report, and then I find out that they actually meant something else. There are also assumptions in the codebase, and that's one that you and I, Joël, have run into so much this past week: assumptions in the codebase as to how many associations a record can have. Is it one? Is it many? It has many, but it really only wants one record. So there are assumptions from the perspective of someone reporting an issue and also assumptions in the codebase. For the first one, I have found that, especially when I'm new to a team, when someone reports an issue, I often like to hop on a quick call with them and say, "Hey, are you able to reproduce this for me?
And so I can watch you, and I can understand your workflow." And I have found that typically speeds me up drastically.

JOËL: The article on listing assumptions opens with what is maybe a bold claim: that all bugs are a form of miscommunication. And this might be human-to-human communication, where you didn't understand the requirements or what was trying to be done. But code is also communication between us and computers. We want computers to do a thing, and if the computer doesn't do what we're telling it to, it's not doing that just to show us who's boss; it's because we didn't communicate correctly what we wanted. And so, yeah, trying to better understand what we mean and what our assumptions are is a key part of debugging. And that's the thing that came up over and over and over in all the interviews: one, build self-awareness about the assumptions you have, and there's a bunch of different techniques for doing that. And then, once you have self-awareness of what your assumptions are, never trust anything; validate, validate, validate, because, yeah, you're often wrong.

CHRIS: There's an interesting parallel to that in my mind. We often end up with these systems behaving in an odd way, and so we have to build this mental map of, okay, what are all of the different states and the workflows that can get us into those various states? And having now debugged a handful of times in my life, I'm trying as much as possible to flip the script and go with: an ounce of prevention is worth a pound of cure. And this is why one of the most common things that I say in a pull request is, "Hey, can we make that null false on that new database column? Hey, can we change this type constraint so that instead of it being a Boolean and then another attribute, it's actually a three-state enum?" and et cetera, et cetera. How can I collapse the states down so that when I'm in debugging mode, I actually can take some things as givens? Still, maybe validate from time to time, but the more I can learn to trust a type system or the database or things like that, things that are a little more trustworthy than I am or other humans, I'm increasingly loving that. And there's obviously a gentle balance there, but that's something that I've been leaning into more and more. And I think it's directly informed by the years of my life that have gone into debugging at this point. Is that accurate? That seems like a high estimate, but it's a lot. It's a bunch of weeks at a minimum.

JOËL: I feel that really strongly. I'm kind of disappointed that, as an industry, we default things to be nullable so often. I wish database columns were non-nullable by default, and you had to opt in to make them nullable. I wish GraphQL didn't make fields nullable by default. And I think oftentimes, when you're working in a dynamic language, you don't care about that distinction, and so you just let it go by. And let's say with GraphQL, again, I hang out a lot in the Elm Slack channel, and I spend a lot of time helping people integrate Elm with GraphQL. And when you get a schema and try to load that into Elm, because it has a type system, it will read your schema and wrap everything in Maybes, because the schema says this field is optional, this thing is optional, this thing is optional. And then people come to the Slack, and they're like, "Why is there Maybe everywhere, with deeply nested -- this is a terrible mess in Elm. What went wrong?"
And then I have to tell them, "It's not the Elm tool that's wrong. If your schema has this implicitly, it's because the default was to make everything nullable. And so it just looks normal, and you want to put all those exclamation marks everywhere." And a lot of the time, people don't believe me. And I have to say, "No, no, you're right. It really is the schema that's the problem. Please go put in all these exclamation points." I'll give an example of a non-null schema, and then they try it, and they're like, "Wow, this makes such a difference."

STEPH: Joël, the defender of Elm.

JOËL: [laughs]

STEPH: And I am with you, and it is interesting. And I've been there myself, too, where there's a fear of over-restriction, in terms of: if I make this not null and something blows up, then that feels like a bad outcome. And so I've seen a number of projects where we let nil get through so easily, because then we just always handle the nil, versus having that restriction earlier and making that decision. It's like we're pushing off the decision: well, that can be nil for now, and then we'll figure it out later. Versus starting with that decision upfront and saying, "No, let's go ahead and make that decision now. What do we do if this is nil?" And I agree that it would be wonderful if we had more restriction upfront and then loosened the requirements as we found we needed to, versus starting with the loose requirements, because walking that backwards is incredibly difficult.

JOËL: Yes, having to backfill nullable columns that we don't have values for in the database, because now we want to restrict them, but for years we didn't collect that value. And so what do we do now? That becomes really tricky.

STEPH: Chris, a minute ago, you were mentioning prevention, which I love so much because then we can avoid a number of these debugging discussions, although this is also delightful. But there's an opposite end of that spectrum that has taken me a while to gain comfort with, and it is when you don't know enough about the bug, and you can't reproduce it, and then you essentially have to let it go. And there are other ways that you can debug, but you can't fix it in the moment. So let's say that you have an error or something that happens every once in a while, but you can't actually find the reason that it's happening, or you have data that's getting created in a certain state, but you don't know what in your application is creating that state. So instead of spending what could be hours or days triaging how your system got into that state, you instead perhaps add some logging around it to say, "Hey, this is the moment where we are causing this to happen," or maybe you add a constraint, so something fails very loudly whenever the system tries to put data in a particular state. And in those cases, it took me a while to become comfortable with the idea that I can't solve this today. I can't solve it now, but I can take steps to then know how this is happening. So then, in the future, we can actually prevent this or apply the fix. But initially, that always felt like a really bad outcome for a ticket that's reporting a bug, where it's like, hey, I can't fix this today. I don't know exactly what's happening, but I've added some logging, or I've added something that's going to raise when this happens. So when we do get notified of it again, then we can more quickly triage and put the right fix in.
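Here is a minimal sketch of what those two preventive moves might look like as a Rails migration, assuming Rails 6.1 or later, where add_check_constraint is available; the table, column, and constraint names are hypothetical, not from the episode:

```ruby
# Tighten the schema so bad states fail loudly at the database level,
# instead of silently accumulating until someone has to debug them.
class TightenSubscriptionConstraints < ActiveRecord::Migration[6.1]
  def change
    # Disallow the nil that the application code "always handles" anyway.
    # (Existing rows would need to be backfilled first, as discussed above.)
    change_column_null :subscriptions, :status, false

    # Make an impossible state raise at insert time, rather than
    # surfacing weeks later as mysterious data.
    add_check_constraint :subscriptions, "quantity > 0",
                         name: "subscriptions_quantity_positive"
  end
end
```

A log line at the suspected write site is the softer variant: it won't stop the bad state, but it records enough context to triage the next occurrence quickly.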
CHRIS: Yeah. If anything, I feel like there can be a -- like we were talking about earlier, there's almost an enjoyment to solving a bug; it's a puzzle. It's a thing that may capture our attention. But sometimes, actually, perhaps often, the correct answer is that's actually not the most important thing right now, or the cost is too high to try and solve it given the information that we have. It's affecting a very small number of users. Maybe we can make that experience better; it's not just a generic 500 error page, but we turn it into a "Hey, sorry, you got into a…" like a more explicit error state, but defer the actual debugging, because there are other things that are slightly more pressing, affecting more users, or, again, we just don't have enough information. And I feel like that can actually be a difficult thing to accept: no, but I want to solve the puzzle now. This is a fun puzzle. Please let me solve the puzzle. But sometimes, we don't get to solve the puzzle today.

JOËL: One of the articles in our series that I'm really excited about takes a look at various categories of reasoning identified in classical philosophy and how we can apply them to debugging, to debug in a more strategic and methodical manner. And one of those categories is inductive reasoning, which is very similar to, say, the scientific method, where you gather a lot of information, and then, based off of those cases, you try to come up with a pattern, have a hypothesis for it, and test it. And then, if you can show that it's actually the case, then you're likely correct. But of course, that depends on having enough test cases for there to actually be a pattern and for it to be statistically significant. And so for some bugs, there's only one instance of it happening, and you just say, oh, somehow a bad value made it from our front-end UI into the database, and we don't know how. But it's not happening to every user. Like, there's only one case, and we can't figure out how it happened. We don't have enough information to build up those test cases to try to find a pattern. And so that's where getting more test cases becomes really important. As Steph mentioned, logging can be really helpful here, or raising, whatever you need to do. Maybe it's adding a constraint or a validation to say, "Blow up the next time this happens." But once you have enough test cases, then you can start seeing patterns. And that's your inductive reasoning to solve the bug.

STEPH: Something that we touched on earlier, but that I don't think I've given enough credit to, and that I really appreciate, is the fact that all of these strategies being talked about in each blog post are applicable across technology stacks and across languages. Because you really highlighted a point just a moment ago: most of the bugs that we're working with are bespoke. They're all special little bugs in their own special way. And so it's really hard to have a blanket strategy that then applies to each one, because they are unique. And the fact that y'all are creating so much content that has general strategies that people can apply when debugging is really impressive to me. So I'm really curious: how are y'all doing that?

JOËL: When we came up with the series idea, we laid down a few principles for how we wanted the outcome to be. And one thing that came up pretty early was that it's important for this to be language agnostic.
We don't want to just teach, like, here are some very specific Ruby tools you can use. That would be a very helpful article, but it's not really the kind of information that we were looking for. So we were trying to find: what are some higher-level techniques and strategies that are usable throughout your career? And then secondly, we wanted to focus on finding bugs rather than how you solve them or how you prevent them. Chris mentioned a little bit earlier some techniques for preventing bugs from happening in the first place, and we might have a bonus episode or article on how to write bug-resistant code. But the focus of the series is: what are some language-agnostic ways that we can improve your search for the root cause of a bug? And a lot of that has just been synthesis, so saying, okay, here's a bunch of different things that different people told us. And we were fine with having language-specific examples in the interviews. But how can we then find what's common and what's not? One thing that I think was really interesting was talking about how different people gain information about a particular point in code. Debuggers are a pretty common way of doing this, but print-line debugging is also a really common way to do it. And every language does this slightly differently. And you can even do this not just via text, but visually. So if you're writing CSS and you are trying to figure out when a particular rule triggers, you might put a 1-pixel red border around something. And that's CSS's equivalent to a print statement, or console.log in JavaScript, or the like in Ruby or some other language. So the idea is to look at everything different people told us and then see: can we extract a general principle out of it? And we're walking a fine line: we want the series to have practical advice that people can use, but also to zoom out just a little bit so that we have some of the big picture, so that you can make some of these big connections and see some of the patterns that can help you in the different situations you run into when debugging, without necessarily getting your head in the clouds with, you know, what is debugging? And I'm sure there's a really fascinating philosophy article in that (not the classical philosophy article I mentioned; that one's actually good). But you can philosophize about debugging in a way that's too abstract to be useful. So we want just a touch of the philosophy to keep it big picture, while also giving very concrete, useful tips and techniques that are language agnostic that folks can use.

STEPH: That's fabulous, how y'all are able to separate that thought process from the direct, specific action that someone takes. So when someone is dropping in that console.log statement or that print statement, it's like, okay, but what thought process took you there, to the point that you then tried that action? I really like that. There's one other trick that was mentioned in one of the articles that I also really enjoy. It's the analogy of taking a ball of yarn with you, so you're always tracking where you've been. And that is the other thing that I do heavily when I'm debugging: I always have a note-taking application open, because I always document what I've looked at and what my findings were. And then I think through what I want to look at next. And sometimes I'll write down a list of three or four different questions, like it could be this, it could be that. And then, I will prioritize those based on: what's the quickest to look at? What can I replicate the fastest?
What can I remove from this list? (Back to earlier, when you were talking about the process of elimination.) And then I walk through each one. That way, when I do find the bug, I also have those steps that I can look back on. So in case someone has a question about it, in case they're like, "Well, what about this?" I can say, "Well, yes, I also checked that." Or there just may be extra helpful tidbits that fall out of that process. At the least, they prevent me from checking something twice.

JOËL: I've been pairing with you, Steph, on several bugs recently, and I've really appreciated the notes that you keep. They're very thorough, and that's something that I've tried to bring into my own practice by seeing you do that.

CHRIS: Something I've been iterating on in my own workflow of late is having a directory of notes associated with each project. And as I'm working on each, it's almost not even just bugs at this point but any new feature, anything that I'm exploring. Because often, the initial exploration of integrating with a library feels a little bit like debugging: sort of poking at the edges, what's true and what's not, and writing up little reproducible steps of okay, run this code, then this code, then this code. And I've now just taken to keeping those forever, because it turns out, oh, I know I integrated that, but what was the step? I feel like there was something I did. And being able to go back and have that artifact now is so useful. And it's actually something that I've only really gotten in the habit of over, I'd say, the last two years, but that archive of notes is now very useful, even to this day.

STEPH: So I think we've covered, or at least hinted at, a number of the wonderful topics included in the series, including some of the different ways that we can identify our assumptions, and ways that we can get unstuck. That was one of my favorite ones: all the different ways to get yourself unstuck when you are in the throes of debugging. There's also the idea that when you encounter a bug you can't fix right away, there are steps you can take to be able to fix it in the future. I would love to know, if you don't mind sharing some spoilers, some of the upcoming topics that will be in the unpublished but soon-to-be-published blog posts.

JOËL: Yeah. So for all of The Bike Shed listeners who want the inside scoop, there are a few that I'm really interested in. So we opened the series by talking about mindset issues, how to approach a bug, how to think about assumptions, and now we're moving into some more concrete techniques and tooling. One that I'm really excited for is ways you can use Git more effectively. There are a couple of people that we interviewed who mentioned just how important it was to their workflow. And we've got some really interesting notes on that. There are also some really interesting ideas around areas in our codebases where bugs tend to accrue, or are more likely to, particularly around the nebulous concept of boundaries. This was a conversation that we had with one of our interviewees where we had our initial list of questions, and then we ended up completely throwing them away and going down this long, random tangent about boundaries; it was so good. We decided to dedicate a whole article to it because there are really interesting things around that.
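One concrete example of using Git this way, not named in the episode but a standard tool, is git bisect, which binary-searches the commit history for the change that introduced a bug. A minimal sketch, where the known-good commit SHA and the spec path are hypothetical:

```sh
# Mark a commit where the bug exists and one where it didn't exist,
# then let Git binary-search the commits in between.
git bisect start
git bisect bad HEAD
git bisect good 4f2a9c1   # a hypothetical known-good commit

# Run a failing spec at each step; the exit status tells Git good/bad.
git bisect run bundle exec rspec spec/models/subscription_spec.rb

git bisect reset          # return to where you started
```

This is the same process-of-elimination idea from earlier, applied to history instead of code: each step halves the range of commits that could have introduced the bug.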
STEPH: All of that sounds really exciting. I love that you mentioned Git because, even in the conversation that we've been having right now, that didn't cross my mind, but yeah, that is such an incredible debugging tool. So I'm really looking forward to that, and also the one that's going to dive into boundaries. All of that sounds really exciting.

JOËL: One thing that I think is really fun with digging into Git is that generally, when we think of debugging, we're trying to find where the bug is. But oftentimes, the real question we need to answer is not just where is the bug; it's when is the bug? So not just debugging through space but debugging through time, and Git is the tool to do that.

STEPH: Oh, that made me laugh but also made me depressed: when is the bug? [laughs]

JOËL: And I feel like this is going to turn into a cheesy sci-fi TV series.

CHRIS: It doesn't need to be cheesy.

JOËL: True.

CHRIS: Yeah, it does. [laughter]

STEPH: For it to be good, I think it has to be a little cheesy. Sci-fi bugs, coming to your application next summer.

CHRIS: Everybody hop in the TARDIS. We're going to find the bug.

JOËL: I feel like there's some variation of that line that shows up in a lot of time travel series. Like, not where is X, but when is X?

STEPH: On that delightful note, thank you, Joël, so much for coming onto The Bike Shed and chatting with Chris and me about debugging. For everyone who would like to follow along with the Debugging Series, where can they find those articles?

JOËL: So you can go to the thoughtbot blog. There is a tag we created specifically for that, Debugging Series 2021, and I'm sure that you'll link to that. All the articles are also going to be linked from the first article of the series, and I'm sure that will be included in the notes as well.

STEPH: Perfect. And where can people follow your work?

JOËL: People can follow me on Twitter @joelquen, J-O-E-L-Q-U-E-N. It's not the hockey coach, although I can neither confirm nor deny that the two are the same person. We've never seen them in the same room together.

STEPH: I didn't know you were a hockey coach in your spare time. Oh wait, this is the part that you can't confirm, right?

JOËL: [laughs] Well, that would be letting the secret out.

STEPH: All right. We will try to maintain your secret identity, or whichever one that is.

CHRIS: It's a really terrible secret identity if it's actually the same name. Joël, you really should have put more effort into this, Coach Q.

JOËL: [laughs]

STEPH: Coach Q. I'm going to start calling you Coach Q. That's wonderful. Well, with your permission.

JOËL: That's the real nickname. For those who don't know, Joël Quenneville is, or formerly was, the coach of the Chicago Blackhawks NHL hockey team, one of the best coaches ever in the National Hockey League. And a couple of years ago, I got to give a conference talk in Chicago. And everybody asked me, "Ooh, any connection to the coach?"

STEPH: To which you replied?

JOËL: Oh, I had a whole slide about the conspiracy that we may or may not be the same person.

STEPH: That's really fun. Well, thank you again so much for coming on our show, Coach, and walking us through the wonderful Debugging Series. The show notes for this episode can be found at bikeshed.fm.

CHRIS: This episode was produced and edited by Mandy Moore.

STEPH: If you enjoy listening, one really easy way to support the show is to leave us a quick rating or a review on iTunes, as it helps other people find the show.
CHRIS: If you have feedback for this or any of our other episodes, you can reach us at @_bikeshed on Twitter. And I'm @christoomey.

STEPH: I'm @SViccari.

JOËL: And I'm @joelquen.

CHRIS: Or you can email us at [email protected].

STEPH: Thanks so much for listening to The Bike Shed, and we'll see you next week.

All: Bye.