Does a GPT future need software engineers?
Oxide and Friends - A podcast by Oxide Computer Company
Bryan and Adam and the Oxide Friends take on GPT and its implications for software engineering. Many aspiring programmers are concerned that the future of the profession is in jeopardy. Spoiler: the Oxide Friends see a bright future for human/GPT collaboration in software engineering.

We've been hosting a live show weekly on Mondays at 5p for about an hour, and recording them all; here is the recording from March 20th, 2023.

In addition to Bryan Cantrill and Adam Leventhal, speakers on March 20th included Josh Clulow, Keith Adams, Ashley Williams, and others. (Did we miss your name and/or get it wrong? Drop a PR!)

Live chat from the show (lightly edited):

ahl: John Carmack's tweet
ahl: ...and the discussion
Wizord: https://twitter.com/balajis/status/1636797265317867520 (the $1M bet on BTC, I take)
dataphract: "prompt engineering" as in "social engineering" rather than "civil engineering"
Grevian: I was surprised at how challenging getting good prompts could be, even if I wouldn't quite label it engineering
TronDD: https://www.aiweirdness.com/search-or-fabrication/
MattCampbell: I tested ChatGPT in an area where I have domain expertise, and it got it very wrong.
TronDD: Also interesting https://www.youtube.com/watch?v=jPhJbKBuNnA
Wizord: the question is, when will it be in competition with people?
Wizord: copilot also can review code and find bugs if you ask it in the right way
ag_dubs: i suspect that a new job will be building tools that help make training sets better, and i strongly suspect that will be a programming job. ai will need tools and data and content, and there's just a whole bunch of jobs to build tools for AI instead of people
Wizord: re "reading the manual and writing DTrace scripts": I think it's possible, if done with a large enough token window.
Wizord: (there are already examples of GPT debugging code, although trivial ones)
flaviusb: The chat here is really interesting to me, as it seems to miss the point of the thing. ChatGPT does not and can not ever 'actually work' - and whether it works is kind of irrelevant. Like, the Jacquard looms and Numerical Control for machining did not 'work', but that didn't stop the roll out.
Columbus: Maybe it has read the dtrace manual 😉
JustinAzoff: I work with a "long tail" language, and chatgpt sure is good at generating code that LOOKS like it might work, but is usually completely wrong
clairegiordano: Some definite fans of DTrace on this show
ag_dubs: a thing i want to chat about is how GPT can affect the "pace" of software development
sudomateo: I also think it's a lot less than 100% of engineers that engage in code review.
Wizord: yes, I've had some good experience with using copilot for code review
ag_dubs: chatgpt is good at things that are already established... it's not good at new things, or things that were just published
Wizord: very few people I know use it for the purpose of comments/docs. just pure codegen/boilerplate
chadbrewbaker: "How would you write a process tree with dtrace?" (ChatGPT4)

    #!/usr/sbin/dtrace -s
    BEGIN { printf("%5s %5s %5s %s\n", "PID", "PPID", "UID", "COMMAND"); }
    proc:::exec-success { printf("%5d %5d %5d %s\n", pid, ppid, uid, execname); }

TronDD: That's interesting, as expensive, specialized code analysis tools have been varying levels of terrible for a long time
JustinAzoff: I did an experiment before where I asked it to write me some php to insert a record into a database. so of course it generated code with sql injection
chiefnoah: Its ability seems to scale with how many times someone has done the exact thing you're trying to do before
JustinAzoff: but then I asked if sql injection was bad, and it explained that it was. then I asked if the code it wrote me was vulnerable to sql injection. it then explained that it was
Columbus: It misses empirical verification; forming a hypothesis, testing it, and learning from the result. There have been some attempts to implement this by feeding back e.g. command output into the prompt
JustinAzoff: so then the crazy part: I asked, if sql injection was bad, why did it give me code that was vulnerable to sql injection? It then went on to say that the first thing it gave me was just for example purposes
JustinAzoff: so no wonder people get into "prompt engineering", since it's clear that you need to do things like ask for code that does something, and is secure, high performance, and does not have buffer overflows or sql injection vulns
MattCampbell: In my test case ("Write a Win32 UI Automation provider in C++"), all it did was produce plausible-sounding crap
ag_dubs: pattern matching over very very large data sets
clairegiordano: Bryan just said this and I wanted to write it down, re GPT-3: "the degree that it changes its answers when you tell GPT-3 to think like someone else"
JustinAzoff: or even just, "do that, but better"
ag_dubs: i think a lot of the awe of gpt is recognizing how simple our own strategies are instead of how complex the AI's strategy is
chadbrewbaker: "How would Bryan Cantrill re-write this script?" (ChatGPT4)

    #!/usr/sbin/dtrace -s
    #pragma D option quiet
    dtrace:::BEGIN { printf("%5s %5s %5s %s\n", "PID", "PPID", "UID", "COMMAND"); self->indent = 0; }
    sched:::off-cpu { self->indent = 0; }
    proc:::exec-success { this->curpid = pid; this->curppid = ppid; this->curuid = uid; this->indent = self->indent; }
    proc:::exec-success /execname != "dtrace"/ { printf("%*s", this->indent * 2, ""); printf("%5d %5d %5d %s\n", this->curpid, this->curppid, this->curuid, execname); }
    proc:::exec-success, sched:::on-cpu /this->curpid == pid/ { self->indent++; }

antranigv: that's pretty hit on!
chiefnoah: Yup. Most people are not doing things that haven't been done before. A significant portion of software is just building blocks of libraries
Wizord: intelligence is compression, in some sense.
dataphract: "critique the epoll API as though you are Bryan Cantrill"
ag_dubs: a brain would be much stranger!!
Wizord: the ability to reduce a large dataset to a coherent set of rules
antranigv: "Explain the issues of epoll, write as if it's a Bryan ...
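
To make the SQL injection anecdote from the chat concrete, here is a minimal sketch of the contrast JustinAzoff describes, written in Python with sqlite3 rather than the PHP from his experiment; the users table and its columns are invented for illustration, and only the difference between string-built SQL and a parameterized query is the point.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, email TEXT)")

    def insert_user_unsafe(name, email):
        # The pattern described in the chat: user input spliced directly into
        # the SQL text, so a value containing a quote (or crafted SQL) changes
        # the statement itself.
        conn.execute(f"INSERT INTO users (name, email) VALUES ('{name}', '{email}')")

    def insert_user_safe(name, email):
        # Parameterized query: the values travel separately from the SQL text
        # and can never be parsed as SQL.
        conn.execute("INSERT INTO users (name, email) VALUES (?, ?)", (name, email))

    insert_user_safe("Ada", "ada@example.com")
    try:
        insert_user_unsafe("O'Brien", "ob@example.com")
    except sqlite3.OperationalError as err:
        # The stray quote already breaks the statement; a crafted value could
        # instead rewrite it.
        print("unsafe insert failed:", err)

The parameterized form is the kind of thing the chat suggests you end up prompting for explicitly, once you notice the first answer interpolates user input into the SQL string.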