Programming education, tacit knowledge and LLMs

Updated 2023-07-02 / Created 2023-07-02 / 1.53k words

Why programming education isn't very good, and my thoughts on AI code generation.

It seems to be fairly well-known (or at least widely believed amongst the people I regularly talk to about this) that most people are not very good at writing code, even those who really "should" be because of having (theoretically) been taught to. Why is this? In this article, I will describe my wild guesses.

General criticisms of formal education have already been made, probably better than I could manage. I was originally going to write about how the incentives of the system favour testing people not in accurate ways but in easy and standardizable ones, and the easiest and most standardizable ways are to ask about irrelevant surface details rather than to test skill. But this isn't actually true: automated testing of code against problems is scalable enough that sites like Project Euler and Leetcode can test vast numbers of people without human intervention, and it should generally be less effort to do this than to manually mark written exams. It does seem to be the case that programming education preferentially tests bad proxies for actual skill, but the causality probably doesn't flow from testing methods.
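The scalable grading described above doesn't require anything exotic. A minimal sketch of the idea, with a hypothetical toy problem and submission (none of this is from any particular platform's API):

```python
# A minimal sketch of the kind of automated grading that sites like
# Project Euler and Leetcode scale to vast numbers of users: run a
# submitted function against test cases and report how many pass.
# The problem, submission and test cases below are hypothetical.

def grade(submission, test_cases):
    """Return the fraction of test cases the submission passes."""
    passed = 0
    for args, expected in test_cases:
        try:
            if submission(*args) == expected:
                passed += 1
        except Exception:
            pass  # a crash counts as a failed case
    return passed / len(test_cases)

# Toy problem: sum the even numbers in a list.
def student_answer(xs):
    return sum(x for x in xs if x % 2 == 0)

tests = [(([1, 2, 3, 4],), 6), (([],), 0), (([5],), 0)]
print(grade(student_answer, tests))  # → 1.0
```

A real system would additionally sandbox the submission and enforce time and memory limits, but the core loop is outcome-oriented in a way that surface-detail quizzes are not.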

I think it's more plausible that teaching focuses on this surface knowledge because it's much easier and more legible, and because it looks and feels very much like "programming education" to someone without actual domain knowledge (other subjects are usually taught the same way), or who isn't thinking very hard about it. Similar pressures, plus a notion that testing should be "fair" and "cover what students have learned", lead to insufficiently outcome-oriented exams, which then set up incentives biasing students in the same direction. The underlying issue is one of "tacit knowledge": being good at programming requires a set of interlocking and hard-to-describe mental heuristics rather than a long list of memorized rules. Since applying them feels natural and easy - and most people who are now competent don't accurately remember lacking them - this is not immediately obvious, and someone asked how they do something is likely to focus on the things which are, to them, easier to explain and notice.

So why is programming education particularly bad? Shouldn't every field be harmed by tacit knowledge transmission problems? My speculative answer is that they generally are, but it's much less noticeable and plausibly also a smaller problem. The heuristics used in programming are strange and unnatural - I'll describe a few of the important ones later - but the overarching theme is that programming is highly reductionist: you have to model a system very different to your own mind, and every abstraction breaks down in some corner case you will eventually have to know about. The human mind very much likes pretending that other systems are more or less identical to it - animism is no longer a particularly popular explicitly-held belief system, but it's still common to ascribe intention to machinery, "fate" and "karma", animals without very sophisticated cognition, and a wide range of other phenomena. Computers are not at all human: they do exactly what someone has set them up to do, which is often not what that person thought they were doing, while many beginners expect them to "understand what they meant" and act accordingly. Every simple-looking capability is burdened with detail: the computer "knows what time it is" (thanks to some nontrivial engineering with some possible failure points); the out-of-order CPU "runs just like an abstract in-order machine, but very fast" (until security researchers find a difference); DNS "resolves domain names to IPs" (but is frequently intercepted by networks, and can also serve as a covert backchannel); video codecs "make videos smaller" (but are also complex domain-specific programming languages); text rendering "is just copying bitmaps into the right places" (unless you care about Unicode or antialiasing or kerning).
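Two classic small-scale instances of the computer doing exactly what it was set up to do, rather than what a beginner "meant" - generic illustrative examples, not tied to any of the specific systems mentioned above:

```python
# 1. Binary floating point: the "numbers" are not the decimals they
#    print as, so arithmetic that looks obviously true isn't.
print(0.1 + 0.2 == 0.3)  # False
print(0.1 + 0.2)         # 0.30000000000000004

# 2. Mutable default arguments: the default list is created once, at
#    function definition time, not once per call.
def append_item(item, bucket=[]):
    bucket.append(item)
    return bucket

print(append_item(1))  # [1]
print(append_item(2))  # [1, 2] - the "fresh empty list" model is wrong
```

Neither behaviour is a bug; each is a precisely specified abstraction leaking, and a beginner's intuitive model ("the computer understands decimals", "each call starts fresh") fails at exactly these corners.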

The other fields which I think suffer most are maths and physics. Maths education mostly fails to convey what mathematicians actually care about and, despite some attempts to vaguely gesture at it, does not teach "problem-solving" skills so much as occasionally setting nontrivial multistep problems and seeing whether some students manage to solve them. Years of physics instruction fail to stop many students from falling back to Aristotelian mechanics on qualitative questions. This is apparently mostly ignored, perhaps because knowledge without deep understanding is sufficient for many uses and enough people generalize to the interesting parts to supply research, but programming makes the problems more obvious, since essentially any useful work will rapidly run into things like debugging.

So what can be done? I don't know. Formal education is likely a lost cause: incentives aren't aligned enough that a better way to teach would be adopted any time soon, even if I had one, and enough has been invested in existing methods that any change would be extremely challenging. I do, at least, have a rough idea of what good programmers have which isn't being taught well, but I don't know how you would teach these things effectively:

If you have been paying any attention to anything within the past two years or so, you're probably also aware that AI (specifically large language models) will obsolete, augment, change, or do nothing whatsoever to software engineering jobs. My previous list provides some perspective for this: ChatGPT (GPT-3.5 versions; I haven't used the GPT-4 one) can model computers well enough that it can pretend to be a Linux shell quite accurately, tracking decent amounts of state while it does so; big language models have vague knowledge of basically everything on the internet, even if they don't always connect it well; ChatGPT can also find some vulnerabilities in code; tool use is continually being improved (probably their quick-script-writing capability already exceeds most humans'). Not every capability is there yet, of course, and I think LLMs are significantly hampered by issues humans don't have, like context window limitations, lack of online learning, and bad planning ability, but these are probably not that fundamental.

Essentially, your job is probably not safe, as long as development continues (and big organizations actually notice).

You may contend that LLMs lack "general intelligence", and thus can't solve novel problems, devise clever new algorithms, etc. I don't think this is exactly right (it's probably a matter of degree rather than a binary), but my more interesting objection is that most code doesn't involve anything like that. Most algorithmic problems have already been solved somewhere if you can frame them right (which is, in fairness, also a problem of intelligence, but less so than deriving the solution from scratch), and LLMs probably remember more algorithms than you do. More than that, however, most code doesn't even involve sophisticated algorithms: it just has to move some data around, or convert between formats, or call out to libraries or APIs in the right order, or process some forms. I don't really like writing that and try to minimize it, but this only goes so far. You may also have a stronger objection along the lines of "LLMs are just stochastic parrots repeating patterns in their training data": this is wrong, and you may direct complaints regarding this to the comments or microblog, where I will probably ignore them.
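For concreteness, the unglamorous "move data around and convert between formats" work I mean looks something like this (a hypothetical sketch, using only the standard library):

```python
# Typical glue code: read CSV rows, reshape them, emit JSON.
# No clever algorithms anywhere - just plumbing between formats,
# which is exactly the kind of code LLMs already handle well.
import csv
import io
import json

raw = "name,score\nalice,3\nbob,5\n"  # stand-in for a file or API response

records = [
    {"name": row["name"], "score": int(row["score"])}
    for row in csv.DictReader(io.StringIO(raw))
]

print(json.dumps(records))
# → [{"name": "alice", "score": 3}, {"name": "bob", "score": 5}]
```

Almost every line is a direct restatement of the input or output format; nothing in it requires deriving anything novel, which is the point.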