Ever since the LLM boom started, I've been saying that AI code generation's biggest impact will be to increase the burden on highly-skilled programmers to wade through the mountains of code created by less-skilled programmers.

The fact that I had to struggle mightily to write code when I was less skilled is how I gained the skill I have. Feels like AI code generation is stunting the development of the next generation of programmers. justin.searls.co/shots/2023-10

roman

@searls It has been both a boon and a crutch at times. When I first pivoted into tech I had Copilot, and I turned it off because I could feel the brain rot from it. I couldn't remember simple things.
I am on a team now without any real mentorship, and without any true senior Rails developers guiding the way on conventions and practices, so I use ChatGPT as my rubber duck. I give it a context that it is my senior developer and have it ask me questions back about my initial query. It is awesome.

Ivan Moscoso

@searls I remember being on a team long ago where an inexperienced developer left a comment:

/* Bruce told me to do this */

I remember thinking, "did he just paste Bruce's (not real name) pseudocode & change it until it compiled?" Yep. Twenty years later, we're in the age of Bruce. 😞

Stephen P. Anderson

@searls This. I’m seeing the same thing in any domain where there is expertise—the experts see the problems in LLM generated content (relevant to that domain); everyone else thinks it works just fine—they don’t see the gaps & errors. 🙄

Dave Peck

@searls Time will tell, but I’ll play the optimist and wager the opposite: that the assistive boost from LLMs will help less-skilled programmers up-level much more quickly. In part, because it should force new programmers to gain the skill of *reading* code sooner than they otherwise might. (And: cargo-culters gonna cargo cult; a wrong answer copied thoughtlessly from ChatGPT and a wrong answer from StackOverflow feel roughly equivalent to me.)

Todd A. Jacobs

@searls I think #LLM can help, but it's best as an assistive tool. It's a lot like using a calculator: it can save time and improve accuracy, but you still have to know enough math to know what keys to press, how to evaluate the validity of a result, and to know when you've fat-fingered something or gotten the order of operations wrong.

Prefab houses still need carpenters and plumbers. #AIassisted #softwaredevelopment still needs architects, engineers, and testers. Q.E.D.

postmodern

@searls I've also noticed flawed PRs containing invalid markdown/documentation syntax during Hacktoberfest. I suspect people are using ChatGPT to generate the PRs, and then maintainers have to go through multiple rounds of review to fix them. While LLMs can generate code, they can't reliably distinguish between different markup syntaxes or documentation formats.

Stewart Sims

@searls a concern I also have. I guess the answer is similar to how any other tool (e.g. StackOverflow, libraries, frameworks) should be approached: with critical thinking. So far the main use I've found for it is automating certain tasks, e.g. as an intelligent 'find and replace' tool that can generate regular expressions to transform code or data. For writing code beyond very basic applications it's not particularly effective or efficient.
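As a concrete illustration of that "intelligent find and replace" use: here is a hypothetical sketch (names and the transform itself are invented for illustration). The regex is exactly the kind of artifact an LLM can draft quickly, and exactly the kind that needs a critical read before it touches a real codebase.

```javascript
// Hypothetical LLM-drafted transform: rewrite one narrow shape of
// function declaration into an arrow-function constant.
const src = 'function double(x) { return x * 2; }';

// Capture the name, parameter list, and returned expression, then
// re-emit them in arrow form. Note how brittle this is: it only
// matches single-expression bodies written on one line, so it must
// be reviewed (critical thinking!) before running it broadly.
const transformed = src.replace(
  /function (\w+)\(([^)]*)\) \{ return (.*?); \}/g,
  'const $1 = ($2) => $3;'
);

console.log(transformed); // 'const double = (x) => x * 2;'
```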

Electric Monk

@searls And now think about schools and the young generation, and how we (used to) learn natural languages and everything else you learn in school: by struggling, trying, failing, and trying again.
I guess there's some work to do to transform traditional schooling into LLM-based schooling…

Kore Nordmann

@searls for me there is another, less immediate, effect:

How are we supposed to teach future generations? You learn through simplified problems first, which LLMs can solve easily, but you'll need that knowledge to solve the real ones later. How much focus, self-awareness, and discipline can we expect from younglings to still walk this path when their peers solve all their problems with LLMs? I have no idea yet how we're going to teach.

magnetichuman

@searls It depends on how it's used, of course, though your example doesn't fill one with hope.
Personally, I find that GPT-3 helps me solve coding problems that might otherwise require hours of trawling through documentation.

Matt Burke

@searls my experience has been that LLMs can be bad with the subtleties. We see it with React's useEffect a lot: it often overuses the hook itself, or adds wrong or missing dependencies. Devs keep relying on it no matter how much you try to educate them.
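For readers outside React: the dependency array is the subtle part. A minimal model of the re-run rule (assuming React's documented behavior of comparing each dependency with Object.is) shows why the "missing dependency" mistake leaves an effect stale; this mocks the comparison rather than using React itself:

```javascript
// Minimal model of React's effect-scheduling rule: an effect re-runs
// only when some entry of its dependency array changed (per Object.is).
function depsChanged(prevDeps, nextDeps) {
  if (prevDeps === null) return true; // first render: always run
  return nextDeps.some((dep, i) => !Object.is(dep, prevDeps[i]));
}

// Render 1: the component receives query = 'cats', but the author
// forgot to list `query` in the dependency array.
let prevDeps = null;
const runsOnMount = depsChanged(prevDeps, []); // true: first render
prevDeps = [];

// Render 2: query is now 'dogs', but the declared deps are still [].
// The comparison sees no change, the effect is skipped, and any fetch
// inside it keeps using the stale 'cats' closure.
const runsOnUpdate = depsChanged(prevDeps, []); // false: effect goes stale

console.log(runsOnMount, runsOnUpdate); // true false
```

This is the failure mode a linter like eslint-plugin-react-hooks flags, and the kind of subtlety an LLM-generated snippet can silently get wrong.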

alfonso adriasola

@searls I have given the tools a fair shake, meaning I have in good faith tried many different scenarios from simple to complex, and the experience has been consistent: the output superficially looks OK, only for me to discover it never works and introduces the worst kind of bugs. Fantastically so, to the point of comedy.
If this is AI, then it's a troll, and it's evil.

Josh Justice

@searls @testdouble LLM boom is my least favorite Saliva song

vaerospace

@searls Not only did I learn Forth, I carried my donated 386 brick laptop with me everywhere (no friends or locker, $0, no help) for YEARS as a completely homeless man in Cape Town. Not exaggerating.
