#99: AI & Agile Learning with Hunter Hillegas

Agile Mentors Podcast - A podcast by Brian Milner and Guests - Wednesdays


Join Brian Milner and Hunter Hillegas as they unveil Goatbot, Mountain Goat Software's latest AI innovation designed to transform how we access and learn from Agile and Scrum resources. Tune in to hear Hunter delve into the intricate process of developing and testing this AI platform.

Overview

In this episode of the Agile Mentors Podcast, Brian and Hunter Hillegas, Mountain Goat Software's CTO, dive deep into the capabilities of Goatbot, an AI-powered tool that makes reliable Agile and Scrum knowledge easier to access. Developed by Mountain Goat Software, Goatbot answers queries using the company's extensive array of training materials, blog posts, and articles, ensuring that users receive precise and reliable information. The discussion also covers the rapid advancements in AI technology, exploring its burgeoning role in coding and testing. Join Brian and Hunter as they explore how Goatbot was created, its impact on learning Agile methodologies, and the exciting future of AI in software development.

Listen Now to Discover:

[1:05] - Brian welcomes our beloved Chief Technology Officer at Mountain Goat Software, and creator of Goatbot, our Agile & Scrum AI, Hunter Hillegas.
[4:06] - Hunter explains how he and the team built an AI platform that answers Scrum and Agile questions accurately.
[7:07] - Hunter talks about the experience of working with and developing AI from a long-term programmer's perspective.
[10:35] - Hunter and Brian share that Goatbot is available, accessible, and free on the Mountain Goat Software website.
[12:25] - Hunter walks through the process of creating Goatbot and some of the challenges the team faced to bring it to life.
[15:42] - Brian invites you to come on over and test out Goatbot, run it through its paces, and tell us what you think. Goatbot is free to use and specifically programmed to answer all your Agile and Scrum questions. Ask away!
[17:23] - As a technologist, Hunter talks about the parts of AI that are exciting and interesting in the tech world.
[19:05] - Brian points out the pace at which AI technology is improving, underscoring its impact on setting new standards of improvement industry-wide.
[22:36] - Hunter shares his approach to integrating AI tools into his coding and testing processes, highlighting when they're beneficial and when they're not.
[28:46] - Brian shares a big thank you to Hunter for joining him on the show.
[29:30] - Brian shares the array of complimentary tools available at Mountain Goat Software, including Relative Weighting, as well as those tailored for users in the Agile Mentors Community, such as Planning Poker.
[30:41] - If you want to ask a question or provide feedback on Goatbot, email [email protected]
[31:06] - We invite you to subscribe to the Agile Mentors Podcast. Do you have feedback or a great idea for an episode of the show? Great! Just send us an email.
[31:39] - If you'd like to continue this discussion, join the Agile Mentors Community. You get a year of free membership in that site by taking any class with Mountain Goat Software.

References and resources mentioned in the show:

Hunter Hillegas
Goatbot
Mountain Goat Software's Free Tools
Mountain Goat Software's Relative Weighting Tool
Mountain Goat Software's Planning Poker
Mountain Goat Software
Subscribe to the Agile Mentors Podcast
Join the Agile Mentors Community

Want to get involved?

This show is designed for you, and we'd love your input. Enjoyed what you heard today? Please leave a rating and a review. It really helps, and we read every single one. Got an Agile subject you'd like us to discuss or a question that needs an answer? Share your thoughts with us at [email protected]

This episode's presenters are:

Brian Milner is SVP of coaching and training at Mountain Goat Software.
He's passionate about making a difference in people's day-to-day work, influenced by his own experience of transitioning to Scrum and seeing improvements in work/life balance, honesty, respect, and the quality of work.

Hunter Hillegas is CTO at Mountain Goat Software. With over 20 years in software development and a knack for creating high-quality digital solutions, he thrives at a company that values excellence in education and customer satisfaction. He lives in Santa Barbara with his wife and their distinctive Pitsky, Enzo.

Auto-generated Transcript:

Brian (00:00)
Welcome in Agile Mentors. We are back for another episode of the Agile Mentors Podcast. I am with you as always, Brian Milner. And today I have a very special guest, a coworker of mine, Mr. Hunter Hillegas is with us. Welcome in Hunter.

Hunter (00:17)
Hey Brian, thanks for having me.

Brian (00:19)
Absolutely. Hunter is our CTO. He is the guy who's all about technology. And anytime I have technology questions, this is who I go to. And he is very patient with me and helps me to understand things when I don't understand them. But we wanted to have Hunter on to talk about one thing in particular that we felt like maybe not everyone knows about yet, or maybe you've crossed paths with it, but really are kind of interested in the story behind it a little bit. That is Goatbot. If you don't know, actually, I shouldn't be answering this stuff. If you don't know what Goatbot is, Hunter, tell them what Goatbot is.

Hunter (00:54)
Yes. Sure, so as the listeners may know, Mountain Goat, Mike Cohn in particular, but also Brian and other contributors, we generate a lot of content, whether it's training material or blog posts, other articles. And for a long time, we've wanted to make that more accessible to people. There's just so much of it that that's always been a little bit of a challenge, even using search and other technologies.
And like a lot of other people, about I guess 18 months or so ago when ChatGPT launched, I became a lot more interested in large language models and got a better sense of what the state of the art was there. So Goatbot is our attempt to meld those two things together, to try to solve that problem, to make all of our content more accessible using some of that technology. So it is a tool that lets you ask questions about Scrum and Agile topics and it will answer them based on all the stuff that we've written and trained on, etc.

Brian (02:06)
That's a great explanation. Yeah, that's what it is. That's the idea behind it. And I'm sure Hunter will back me up on this. I'll tell you, when we first started doing this, I was trying as much as I could to put it through the paces. And I think I may have even said this on other podcasts before, but my go-to question I used to ask any kind of LLM about Agile and Scrum was to have it tell me the difference between a product owner and a product manager. And in fact, I would say specifically, what does Scrum say the difference is between these two things? And the answers I would typically get, they would just give me an answer. They would say, well, a product manager is this and a product owner is that.

Hunter (02:46)
Hmm.

Brian (03:00)
And I remember telling Hunter, this is wrong. It shouldn't say that, right? Because Scrum doesn't have a product manager. How are we able to handle those kind of one-off exceptions in working with this?

Hunter (03:14)
Sure. So anybody that's used some of the more general tools like ChatGPT, which is, I think, the go-to example that probably most people are familiar with even though there are other chatbots out there, will know that, you know, sometimes it will give you wrong answers. Sometimes it will give you sort of strange answers. It will do what they call hallucinate and make things up essentially, because it really wants to give you an answer, even if it doesn't know what the answer should be.
And, you know, that was something that I was worried about when I started to prototype this. We have, I think, I hope it's fair to say, a great reputation in terms of the stuff that we put out there. And the last thing that I wanted to do was to put that in danger in any way by having some kind of a tool that's spouting nonsense out there. So that was important. And I didn't really know how well it was going to work when I started sort of prototyping this whole idea. And I had some doubts. But what I discovered was that if I gave the model very specific instructions about where it should be pulling its information from, i.e. please only use this information I'm giving you, that we know is from our materials and not sort of maybe some random article that it found on the web somewhere as part of the larger training set. Be very specific. And if you don't know an answer, say you don't know; you don't need to make anything up. So those sorts of what they call system messages, we did a lot of tuning there, and ended up with something that actually works pretty well, I think. I mean, it exceeded my expectations once I got to the point where there was something where I felt like I should share it with the team, that it wasn't embarrassing. And I feel like, you know, the feedback internally was really good and the feedback that we've gotten from people that have been using it has been really good. So I'm happy with how it's going so far.

Brian (05:09)
Yeah, I've been really, really impressed. It's been just a really nice tool. It's done a good job. We initially had it sort of internal, like you said, and we put it through its paces. We had kind of widening circles of people that would test it out and try it and give us feedback. And Hunter kept tweaking and making adjustments to it. But I'll tell you, I don't know if I even told this to you, Hunter, but I was doing an ACSM class a few weeks back.
And one of the opening exercises we do in that class is to really kind of consider some of the other Agile frameworks that are out there, not just Scrum, and see how they compare. And one of the ones I have on my list is one that doesn't get used very often. It's kind of one that died out, but it's called Crystal, or Crystal Clear, if you know that or you're familiar with that.

Hunter (06:00)
Mm-hmm.

Brian (06:07)
But I wanted to see what the Goatbot would say about it, so I asked it specifically, just give me an overview of what Crystal is. And I think I said specifically the Agile framework Crystal, just to make sure it wasn't anything strange. But the response that came back was, I don't have any information on that. I know about Scrum, and I can give you answers about that, but I just don't have any information on anything else. And I...

Hunter (06:21)
Right.

Brian (06:34)
Honestly, it really impressed me. Here's another thing that it could have made something up and said, oh, yeah, yeah, it's this. Or it could have pulled from some general database or something else out there. But it's tuned really well to only pull from our data. And I just think that's awesome.

Hunter (06:50)
One of the things that's been strange slash interesting for me as a long-time programmer in all kinds of different technologies, from the web to native applications to other things, is how different it is working with these LLMs and trying to bend them to your will. There are instructions in the system message that in all capital letters say do not make anything up. And the fact that I'm having to program a computer, I'm doing scare quotes here, "program" a computer by telling it not to invent things is just so bizarre coming from a very two-plus-two-equals-four world of more traditional programming. But it's also been really exciting and interesting.

Brian (07:32)
Yeah. Yeah, it is kind of completely opposite from a programming perspective, right?
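The two guardrails Hunter describes, pulling answers only from known material and telling the model to refuse when that material doesn't cover the question, can be sketched roughly like this. This is a toy illustration, not Goatbot's actual code: the word-overlap "embedding" stands in for a real embedding model and vector database, and the sample chunks and prompt wording are invented.

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy stand-in for an embedding model: a bag-of-words count vector.
    # A real pipeline would use a learned embedding model instead.
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a, b):
    # Similarity between two count vectors in "vector space."
    dot = sum(a[w] * b[w] for w in a)
    norm = lambda v: math.sqrt(sum(n * n for n in v.values()))
    return dot / (norm(a) * norm(b)) if a and b else 0.0

# The "vector database": chunks of trusted content, pre-embedded.
chunks = [
    "Scrum defines a product owner; it does not define a product manager role.",
    "The product owner orders the product backlog to maximize value.",
    "Velocity is the amount of work a team completes in a sprint.",
]
index = [(c, embed(c)) for c in chunks]

def retrieve(question, k=2):
    # Rank chunks by similarity to the question and keep the top k.
    q = embed(question)
    ranked = sorted(index, key=lambda cv: cosine(q, cv[1]), reverse=True)
    return [c for c, _ in ranked[:k]]

def build_messages(question):
    # Ground the model: answer only from retrieved material, or refuse.
    context = "\n".join(retrieve(question))
    system = (
        "Answer ONLY from the reference material below. "
        "If it does not contain the answer, say you don't know. "
        "DO NOT MAKE ANYTHING UP.\n\n" + context
    )
    return [{"role": "system", "content": system},
            {"role": "user", "content": question}]

msgs = build_messages("What does Scrum say about the product manager role?")
```

In the real system these messages would be sent to GPT-4; the refusal instruction in the system message is what lets the model answer "I don't have any information on that" for questions like Brian's Crystal example instead of hallucinating.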
Because we're so used to, oh, it's not going to do anything but exactly what you tell it to do. And it can't fill in gaps at all. And now the problem is it could fill in too many gaps, or try to fill in too many gaps. Yeah.

Hunter (07:44)
Right. Right. Absolutely. And you can think even just beyond, you know, your example of a framework that's not in common use and probably not something that we've talked a lot about on the website and in our own materials. There's all kinds of other instructions that I had to put in, because I didn't want this thing sort of going far afield and, you know, coming up with a really wacky, potentially terrible answer to some of these questions. And so, yeah, we do give it some very specific instructions on how it should behave.

Brian (08:21)
I tell people who come to the class that, you know, I can't a hundred percent guarantee. I can't a hundred percent say, yeah, it's always going to give you a hundred percent the right answer. But what I can tell you is, you know, we've all put it through its paces. We've all asked it things that we feel like, hey, this is kind of tricky. I wonder what it would do with this. And, you know, just my own personal perspective has been, when I ask it a question and it gives me an answer, it's damn close to what my answer would be. It's really close to what I would say on that matter.

Hunter (08:55)
Yeah, it's encouraging to hear that. And I've heard that from you and from Mike as well, and then also from customers that have been using it over the past, whatever, month and change or so it's been more publicly available, that they are really happy with the output. And it's a great way for us to take advantage of all of this material that we've built up over all of these years that otherwise some of it probably would be... something from 10 years ago, still really relevant in a lot of cases, but maybe gathering dust.
Cause it's not, you know, the top blog post on the website or something. So some of this knowledge, it's a little bit more buried. We can resurface it with a tool like this.

Brian (09:33)
Now, this was something that we initially just had in the Agile Mentors Community, right?

Hunter (09:39)
That's how it started. And that was for a few reasons, mostly because, well, a couple of important reasons. One is that with these types of LLM things, there are costs associated with it, in the sense that it does cost us per question, effectively. And we just wanted to make sure that we understood what those costs were before we let it loose on the wider internet. So that was part of it. But also to get a sense of how people would react to it. In those early weeks and months we got a lot of feedback on the responses, just to get a general sense of did people think that this was a good answer or not so good, and we used that to calibrate it. Because, you know, frankly, if it underperformed where we wanted it to be, that would be a good signal that we needed to do some more work on it or give it some more time to bake.

Brian (10:26)
Yeah. Yeah. And now we kind of have opened that up, and it's available to anyone. You can go to Mountain Goat Software and look in our menus. And there in our tools, it's under Tools, right?

Hunter (10:41)
Yeah, so you can get to it from the Mountain Goat site. It is in the navigation, I think right next to the podcast, for those of you that are familiar with where to find that on the website. I think it's right next to it, at least for now. No matter what, you can go to mountaingoatsoftware.com/goatbot and that will take you to the right place. That's G-O-A-T-B-O-T. And it is free right now. So we've couched that with "at least for a limited time." We are, again, sort of experimenting with the model and where it's going to go.
But you can sign up for an account today and use it for free and put it through its paces. And we're pretty happy with what we've got so far. So please do that. And hopefully it's useful. Give us feedback if you find something that you think could be improved, or let us know if it worked out for you; we'd love to hear those as well.

Brian (11:29)
Yeah, yeah, absolutely. And yeah, we kind of buried our lede here just to say that, yeah, it is free, right? We're not selling you on something. We don't have a package of things that we're... Yes, we did originally have it in our Agile Mentors Community. But like Hunter said, there were a lot of reasons for that. We wanted to be safe with it. We wanted to have a smaller audience, see what kind of responses we got.

Hunter (11:38)
Right. Yes. Yeah.

Brian (11:58)
We'd hate to put something out there in the world and then have people say, you know what kind of crazy stuff this thing told me to do? So yeah, kind of a safer audience there to start with. But yeah, it's available to anyone for free. You can just go to our site and use it. And as Hunter said, yeah, please give us feedback. If there's anything that you want to just let us know, that it was useful to you in any way, or maybe you used it for something unusual we wouldn't have anticipated.

Hunter (12:04)
Right. Right. Hahaha.

Brian (12:27)
Yeah, let us know. We're trying to tweak it and make it as useful as we possibly can. What surprised you most in this work of putting this together? Did it go just as you expected or did it throw you for some loops along the way?

Hunter (12:42)
There were definitely some loops. I mean, I sort of alluded to the fact before that it's a little bit different of a mentality in terms of how you get it to do what you want. Goatbot is actually a few technologies glued together. There's the content itself. So as you might imagine, we've got...
all of this various content, whether it's transcripts from training courses and videos, or blog posts that Mike has written, or weekly tips, books, all kinds of stuff. So there's a ton of different content, and it's all in these different places. So, you know, step one was creating a sort of pipeline that could take all of this content that's all in these different places and clean it up a little bit, so that it didn't have, say, you know, editor's notes in it and other things like that that don't make any sense, and then put that content into a vector database. I'm sure that many listeners are familiar with more traditional relational databases. Vector databases are not new, but they have become a lot more popular with the rise of the LLM stuff. Basically, a vector database will chunk up the various content and lets you query to figure out which content is close to the query that you asked, in a mathematical vector space. And so we use that: when you pose a query to Goatbot, it will go and find relevant pieces of information related to your query, so that the LLM (in this case we use GPT-4 as the model underneath Goatbot) can take the content that was retrieved, these sort of chunks of content that by themselves don't read very well and wouldn't be a very good answer, and use it to reformat, to summarize, to put stuff together in a way that makes sense. Putting those pieces together was something that was new to me. I can't remember the last time I had used a vector database for anything. And the LLM bit was new for me as well. But despite the fact that these were new-to-me technologies, at least in those cases, the pieces of it coming together actually was simpler than I imagined, for how well it worked even in its first incarnation. The basic core of it came together more quickly than I thought that it would. There was then a lot of refining, especially around the prompting and the messaging stuff. But...
These technologies, especially if you are a programmer, even if you don't have any background in machine learning or AI stuff, I think are accessible and, in my opinion at least, fun to play with, because it is kind of like a whole new world there. So I guess I'd say for those that are interested and maybe are worried that they don't know anything about these technologies, I would go check them out, because I think you'll find that they're more accessible than you may think.

Brian (15:39)
Yeah, and they're getting better. I mean, the pace of their improvement is just so rapid. You know, you tried something, you know, two weeks ago, and then two weeks later, it's just a completely different experience, because it's just incrementally, you know, nonstop getting better all the time. Gosh, I'm...

Hunter (15:45)
Yep. The models, I'm sorry to interrupt you, Brian, but yeah, I mean, I agree 100%. The models that we've been using have gotten faster and cheaper, I think two or three times, in big step-change moments, since we started the project. I mean, that is a technology that's moving quickly.

Brian (16:13)
Yeah, yeah. No, I was just going to make a joke about the fact that I think we've quoted about two or three songs there, and I just did another one with "getting better all the time." Yeah, so this is a fascinating topic, clearly, for a lot of people. And what I'm kind of curious here, because we're maybe about halfway through our time, I'm just kind of curious if we shift gears a little bit from talking about Goatbot to talking about AI in general, because you've done a lot of work in this area, and you're obviously in technology, and you're an aficionado, a technologist. What have you seen most recently in this area? Where do you think this is headed? What kind of trends have you noticed recently?

Hunter (17:01)
Well, I mean, it's obviously an area of great interest for the development community.
It also, it seems like in the last year, every product that we use as tools or whatever, they are talking about the AI features that they're adding. And at least in my experience, you know, some of those are really interesting. You know, like we use Zoom often for internal meetings and, you know, it has a feature now that can automatically summarize a meeting, and you can ask it, you know, what were the follow-up items and stuff like that. That's great. There's also maybe a little bit of sort of round peg, square hole with some of this. Like, I don't know if every tool in the world needs an AI feature, and there are definitely some where it seems a little bit useless.

Brian (17:40)
Yeah.

Hunter (17:48)
I guess that's to be expected with something like this that's got so much interest. The things that I'm excited about, and it feels like it's still very much early days, but you can see the contours, are what many people would refer to as agents. So basically, AI tools that are going to go out and do things on your behalf. So not just write me a blog post or summarize this email, but... You know, the example that's often used in some of these demos is, like, book me a vacation. I personally, I want to pick the seat I'm sitting in. So I don't know if I'm going to do that, but, you know, when the tools can get good enough to go out and do things for you, right? An example: for this podcast recording, you sent me a link to the calendar tool, and I found a time that was open. But in theory that could be completely automated away, right? My agent could talk to your agent and it could just find the time. And we would just both be told, like, oh, you guys are recording.

Brian (18:21)
Yeah, me too.

Hunter (18:44)
You know, at X time, that sort of thing, right? And then going several steps beyond that. I think that's really interesting. People are starting to build those.
The models need to get, I think, a good bit better before, you know, that really works well. But I can't wait to see where that goes in particular. I think that's gonna be a lot of fun.

Brian (19:05)
Yeah, I agree. I mean, Zoom is a great example, because we were even having conversations about this recently, just that there's a lot of criticism about the quality of those summaries that take place after a meeting. But previously, when we encountered a technology tool like that, you'd see the product and you'd say, oh, it's either useful or it's not useful.

Hunter (19:18)
Mm-hmm. Mm-hmm.

Brian (19:34)
And it's kind of a binary, one or zero. If it wasn't useful, then it really was about how that company implemented that feature. And they weren't going to do a massive overhaul of how it was implemented. It kind of is what it is. So we're kind of conditioned, I think, to have this response. Or at least if you're of a certain age, you're kind of conditioned to have this response of, hey, if it didn't work, it probably is not going to work. I can move on and find something else. But...

Hunter (19:43)
Right. Right.

Brian (20:03)
The pace of how this gets better is such that you try the Zoom tool, you look at the response and think, oh, that wasn't very useful. But you do it again two weeks later, and all of a sudden it's everything that you wanted it to be in the first pass. Because people have been saying, hey, this doesn't work, and I wish it was this way, and then the tool can update and modify. And it's crazy to think about; we have to get our heads wrapped around the pace of improvement being now vastly different.

Hunter (20:13)
Mm-hmm. It is definitely... that's a very good point, and it is different. And, you know, I know that there are some folks that, you know, take exception to calling these things AIs, because they're not actually smart, right? Because of how they work. They're not intelligent.
But they are pretty impressive and they do unlock a whole lot of interesting new categories of stuff. And maybe you could have done some of these tasks before in other ways, procedurally, but these can make some of those problems a lot easier to solve because of how they work. Now, I mean, I don't think I'd want ChatGPT to do my taxes, but...

Brian (21:12)
Yeah.

Hunter (21:14)
But it does have a lot of really interesting use cases. And you are absolutely right that these models are improving so quickly. And it is kind of like that little brain in there, and they can upgrade it with the latest version, and all of a sudden it's just a little bit smarter. And again, I know some people take umbrage at "smart" and "intelligent." But I think you know what I mean.

Brian (21:34)
Yeah, no, and the funny thing there about, I mean, your example about doing your taxes is, you know, I laughed about that and thought, oh yeah, I'm kind of with you. I wouldn't want to have an AI do my taxes, but that's our opinion. There's probably others that would say, no, I'm fine with it doing it. If it's been trained and it's, you know, been programmed to do it a certain way, then yeah, that's fine. And, you know, I'm aware of services that will do that with legal documents now, that will create entire contracts and wills and all sorts of stuff from a legal perspective. And that kind of leap, maybe that's the line for me. I look at that and say, oh, I could see that, because it's kind of a limited knowledge base. But taxes are kind of the same thing. They're... yeah.

Hunter (22:18)
I think it's doable. Yeah. I'm still at the stage with something like that where I'm in the trust-but-verify mode. Like, you know, I would want to check it over and make sure it was correct, but I can easily see, you know, a little bit further down the road where, for a defined problem set like that, you can pretty much guarantee that it's going to, you know, give you a valid answer.

Brian (22:24)
Yeah.
Yeah. Well, so I do want to dip our toe just a little bit in this other area too, because I know there's probably people wondering about this. You're in technology. You're at the levels where you're doing coding work. And I'm sure that you have made use, or tried to make use, of some of the tools that are out there now that assist and help with coding. What's your opinion of the current state of that art?

Hunter (22:56)
Mm. Yeah, no, great question. And I do use those tools. I'll use ChatGPT sometimes to work through a problem, or GitHub Copilot is the other tool that I use often; it's integrated into some of my other tools. I have found it useful in a bunch of different ways. I can find it useful to help me write code that I 100% could have written myself, no problem, but it would have just taken a little bit longer, because I would have to go look something up and then remember some function name that I had forgotten. So, like, a script to reformat a CSV file or something like that, right? Not complicated, not breaking any new ground, something that I'm gonna use once and throw away. It's really good at that sort of thing, and just creating something for me that I, again, could have done myself easily, but it'll save me 10 minutes and I'll take it. I've also used it to help me understand a little bit. Maybe there's code in a language that I don't use very often, or a framework I don't use. It's kind of like, what is happening here exactly? And having it try to explain it to me and walk through certain things. And that's also been useful for me, kind of just like I would talk to a colleague that might know a different area of a system better than I do, have a little bit of a back and forth. And that's been useful. What I do not do, and at least right now would not feel comfortable with, is, I guess, the equivalent of copy and pasting code from Stack Overflow, right?
So just taking huge chunks of code where I have no idea how it works or what it does and saying, it seems to give me the right answer, so I'm just gonna use it. That I wouldn't feel comfortable with in any context, whether it was AI generated or something that I found on a website someplace. So I kind of treat it like, and others have said this too, maybe like a pair programming situation, or like a junior programmer that might not have all of the experience, but is definitely very competent and can help with things. And I do find it saves me time. So I'm glad that it's there.

Brian (25:10)
Yeah, you know, it's funny, because when you said that I wouldn't copy and paste things over, I kind of feel that same way. I'm not doing code, but just with anything I would write or anything I would, you know, kind of come up with in that way. My kind of opinion is it hasn't really helped me as much with brainstorming-type activities. I don't find it to be as creative.

Hunter (25:38)
Mm-hmm.

Brian (25:38)
To give me different ideas. But I do really, really enjoy how I can take a rough draft of something and then put it in and say, help me tweak this or help me make this better. It seems like it does a really good job with something like that.

Hunter (25:42)
Right. Yep, that's been my general experience as well. And, you know, I will take any tool assistance I can find here and there. And I do think even if it's not perfect, it's still an improvement. Maybe it'll make a suggestion for something I wouldn't have thought of, or look at it a slightly different way, which I find useful. So I'm happy to have it.

Brian (26:19)
Have you utilized it in any ways to test any of the code that you work on?

Hunter (26:24)
Oh. Yeah, that's a great question. For many programmers, writing tests can be kind of drudgery. And actually, I do find that it can be pretty good at writing certain kinds of tests for you.
And actually, it's interesting: if it struggles to write a test for something, it may be a sign that what you're trying to test needs to be refactored, because it may not be understandable enough, or it's too...

Brian (26:50)
Ha.

Hunter (26:55)
...big of chunks or whatever. That can also be an interesting indicator that maybe you need to go back and tweak that as well. But yes, I mean, I don't always enjoy writing automated tests, but they are very important. And so it is nice to have any kind of labor saving in that department. It's another area where I welcome it.

Brian (27:17)
Yeah, you threw out the term refactor as well. I'm kind of curious there. If it does a good job of reformatting a couple of paragraphs, how good a job does it do with refactoring code?

Hunter (27:28)
It depends, like a lot of these things, but I definitely have thrown stuff in and said, hey, rewrite this function, tell me how you would do it. And there have been times where it will say, oh, well, this is maybe a little verbose, you could compact this down, you could write this like this. In some cases, I originally wrote it in a certain way on purpose, because sometimes it'll generate code that is correct, but kind of hard to read. And there's a tension there between something that future you will be able to read and understand versus the most compact, terse code possible, right? So I don't always take its suggestions, but it can be good. Also, it's good at finding, you know, silly programmer bugs, off-by-one errors and those sorts of things that are really common, the kinds of mistakes that we all make. And so, yeah, it's another area where I'm happy to get a helping hand and just save me from banging my head into the wall trying to figure out why this thing that should work doesn't, and it turns out I just put a semicolon in the wrong place. You know, something like that.
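The "certain kinds of tests" that assistants handle well are typically the mechanical, boundary-style cases over small pure functions. A sketch of what that output tends to look like; the clamp function and its cases are invented for illustration, not code from Mountain Goat:

```python
import unittest

def clamp(value, low, high):
    """Constrain value to the inclusive range [low, high]."""
    return max(low, min(value, high))

class TestClamp(unittest.TestCase):
    # Boundary-value cases: the easy-to-enumerate checks (including
    # the classic off-by-one edges) that AI assistants generate well.
    def test_inside_range(self):
        self.assertEqual(clamp(5, 0, 10), 5)

    def test_below_low(self):
        self.assertEqual(clamp(-3, 0, 10), 0)

    def test_above_high(self):
        self.assertEqual(clamp(42, 0, 10), 10)

    def test_at_boundaries(self):
        self.assertEqual(clamp(0, 0, 10), 0)
        self.assertEqual(clamp(10, 0, 10), 10)

# Run the suite programmatically.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestClamp)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Hunter's refactoring signal applies here too: if a function resists this kind of direct, input-to-output testing, that is often a hint it is doing too much and should be split up.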
Brian (28:25)
Yeah, I'm always kind of careful with how I phrase this, but I know there's a lot of panic and a lot of concern about these things being able to replace the human element. And so I always try to preface this by saying: right now, the way the thing is right now. And the kinds of examples we both gave, I think, are good examples to show that right now, it can't really do that. It can't really just completely wholesale...

Hunter (28:35)
Yeah. Yeah.

Brian (28:53)
...replace the human element in it. But I think that our examples are good examples of how it can be a beneficial tool to help create these things.

Hunter (29:05)
I definitely see it as a productivity enhancer. I don't think... at least none of the models that I've seen, none of the chatbots that I've seen... I have seen a couple of products that claim they're sort of an AI developer in a box. I have not seen any that are very good. I mean, I saw a ChatGPT demo of a guy that drew a picture of an iPhone app on a piece of paper and it...

Brian (29:23)
Yeah.

Hunter (29:32)
...gave him the code for a working app. But okay, great, now I want to add another feature. And, oh, well, you can't really, because it's a piece of paper. So some of those demos are impressive for sure. But in terms of the kinds of things that working software developers are doing every day, I have not seen anything that could replace the people that I work with on my various teams.

Brian (29:40)
Ha ha ha. Yeah, I mean, who knows where it'll be a year from now or two years from now. But yeah, I think we can only kind of deal with the state of it today. And that's sort of the state of it today. Well, Hunter, I really appreciate you coming on. This has been a fascinating topic. Again, for those who want to check it out, the whole reason we wanted to have this is just to make sure people were aware of this Goatbot tool. And hey, if you want to give someone some thanks for it, this is your guy.

Hunter (29:57)
Right. Yep. Hahaha.
Brian (30:25)
You can, if you want to, send something to hello at mountaingoatsoftware.com. That's our general email address for anything from Mountain Goat. So send something to hello at mountaingoatsoftware.com and tell us what you think of it. And if you have suggestions, let us know, right?

Hunter (30:42)
We're very excited to hear your feedback. I appreciate the kind words, Brian. I will say Goatbot would be nothing without all of the content that you guys write, so I can't take all of the credit, but it is fun to kind of pull all these things together in a way that people seem to enjoy.

Brian (30:50)
Ha ha ha. Awesome. Well, thank you again for coming on, Hunter.

Hunter (30:59)
Thank you.
