"ChatGPT Wrote That"


A thesis about LLMs pushing good engineers toward excellence and mediocre engineers toward risk-mongering.

a "new" ubiquitous issue


Not too long ago, I was doing a code review and a small chunk of code caught my eye.


The code repeated an extremely memory- and time-intensive computation hundreds of times, doing the exact same thing on every run, simply to keep the result of the final run and throw away all the others.
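The shape of the problem looked roughly like this. This is a hypothetical reconstruction in Python, not the actual code under review; `expensive_computation` and the run count stand in for the real details:

```python
def expensive_computation(data):
    # Stand-in for the real memory- and time-intensive call.
    return sum(x * x for x in data)

def wasteful(data, runs=100):
    # Re-runs the identical computation on every iteration and
    # overwrites the result each time; only the last run survives.
    for _ in range(runs):
        result = expensive_computation(data)
    return result

def fixed(data):
    # The computation is deterministic, so a single call suffices.
    return expensive_computation(data)
```

Both versions return the same answer; the first just pays for it a hundred times over, which is exactly the kind of waste that looks harmless if you never ask what the code is doing.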


When I asked the engineer about the code chunk, I got a small glimpse into what I expect will be a ubiquitous code-review conversation from here on out:


"Oh, Chat GPT did that," he responded. "I didn't really know what it was doing, so I left it in there."


I don't mean to be hard on this particular situation or person. That's a tendency we all have: quick answers, automated for us, seem (and often are) wonderful. Why inspect the answer if it seems to work? I face this temptation constantly and will likely fall into this trap more than once in the coming years.


root cause


In response to programming questions, ChatGPT and other GPT-4-based LLMs are right a lot. I continue to be blown away by the accuracy and thoroughness of their answers.


LLMs are also wrong quite a bit. And they are wrong in weird ways in which a human would virtually never be wrong. That makes their errors profoundly difficult to detect, let alone debug.


chaotic? robust?


As LLMs get better, telling good output from bad, accurate from inaccurate, will require ever more expertise. Experts and novices alike will produce products with incredible speed. However, one set of those products will break regularly, break strangely, and take unending hours to fix.


If you are an engineer growing up in the industry right now, beware of skipping expertise. LLMs promise the world, but they are just tools. If you attempt to see them as a substitute for needing to learn, iterate, and grow, you will find yourself building products that are at best deeply suboptimal and at worst catastrophic failures.


Pair LLMs with careful expertise, and you may find yourself building products that are more reliable and capable than you ever could before.

