A few years back, there was a big movement pushing the idea that K-12 students should “learn to code”. AFAICT, it was swallowed up by pandemic school closures and has since been overshadowed by the need to recover lost ground.
The emergence of enhanced AI, and in particular the coding assistant Copilot, has changed the picture radically. The article linked above quotes an estimate that “it takes approximately 10 years to turn a novice into an expert coder.” If that’s right, then getting schoolkids to novice standard or a bit better seems like a pointless addition to an already overstuffed curriculum.
But Copilot changes things. In this New Yorker article, expert coder James Somers considers the waning of the craft and suggests that he may not teach his son to code after all.
I have a rather different take. As one of a set of massively over-optimistic New Year goals, I aimed to learn to write useful programs in Python, which seems to be the fashionable language of the day. I wasn’t a total novice. I’ve been around computers most of my life. I played around with Fortran (punched cards which had to be sent away to run!) in high school, and I took a couple of Comp Sci courses 40 years ago. But I wasn’t a coder by any stretch of the imagination.
So, a couple of weeks ago, I skated through an online course. The ideas seemed straightforward enough, based on decades-old memories of Pascal. But of course, I ran into all kinds of syntax errors once the exercises got a bit difficult.
So, it was time for Copilot, the Bing programming assistant. Rather than write the code myself, I asked Copilot to do it, then checked that I could understand the code and that the output was what I expected. I started with toy programs I could have written in Fortran as a schoolboy (finding prime pairs), or where there was an existing package solution (ridge regression).
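I no longer have Copilot’s exact output, but a minimal sketch of the twin-primes exercise (pairs of primes differing by 2) might look like this:

```python
def is_prime(n):
    """Trial division: good enough for a toy program."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def twin_primes(limit):
    """Return all pairs (p, p + 2) of primes with p < limit - 1."""
    return [(p, p + 2) for p in range(2, limit - 1)
            if is_prime(p) and is_prime(p + 2)]

print(twin_primes(50))
# [(3, 5), (5, 7), (11, 13), (17, 19), (29, 31), (41, 43)]
```

The point of the exercise wasn’t efficiency; it was checking that I could read the code and confirm the output matched what I expected.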
That was easy, so I decided to start on something more ambitious: a program that would index manuscripts (of course, these already exist, but it’s a real job). I didn’t want to do the actual work of deciding on index terms, so I went for an index of proper names. That entailed finding all the capitalized words, then deleting those that were just ordinary nouns occurring at the beginning of a sentence, or in a reference.
I could have done the second part with a RegEx (regular expression), but I know from long experience that these are painful. So, I asked Copilot to point me to a dictionary I could use. The most difficult part turned out to be getting past the necessary SSL permission (I don’t know what SSL is, but it’s important).
After that, a proper index would have had the page numbers on which items occur. But Copilot told me that there is no standard way of finding out about pagination, so I would have to use a RegEx. Again, from experience, I knew I could do this, but also that it would be painful. So, I declared victory, and settled for reporting the number of occurrences. Here’s the code, and here’s some sample output, showing that the program still requires a bit of tweaking.
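The core of the approach can be sketched in a few lines. The word list here is a tiny hypothetical stand-in for the dictionary Copilot pointed me to:

```python
import re
from collections import Counter

# Hypothetical stand-in: in practice this would be a full dictionary
# of ordinary English words, loaded from a word list.
COMMON_WORDS = {"the", "a", "this", "these", "after", "but", "so", "report"}

def proper_name_index(text):
    """Count capitalized words that aren't just ordinary words
    capitalized at the start of a sentence."""
    words = re.findall(r"\b[A-Z][a-z]+\b", text)
    names = [w for w in words if w.lower() not in COMMON_WORDS]
    return Counter(names)

sample = "The report cites Keynes. Keynes replied to Hayek. But Hayek disagreed."
print(proper_name_index(sample))
# Counter({'Keynes': 2, 'Hayek': 2})
```

A real run over a manuscript would still need tweaking, as the sample output showed, since some proper names are also ordinary words and vice versa.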
At this point, I’d say I can code, in the same sense as I can make a jam sandwich. I haven’t made the jam (though I did this at one time), or the bread (though I could) or the butter (more of a challenge, and I certainly couldn’t do margarine), but I can easily get access to those ingredients, mixing and matching as I choose.
What remains as a challenge is to code against an application programming interface (API) to LLMs like GPT-4. I still haven’t thought about what I would want such an API to do, let alone how to implement it. But 2024 has just begun!
I don’t have an immediate practical use for these kinds of coding skills, partly because I have decades of experience in using packaged applications: word processors, spreadsheets, databases and so on. But I’m sure uses will come up. And, for kids who haven’t put in that effort, a “native speaker” ability to code would obviate much of the need for it.
My thought here is that rather than learning in the traditional style of language teaching, starting with grammar and a memorised vocab, kids should learn coding just by doing it, with a human teacher and Copilot explaining how it works. The Duolingo model in language teaching is very like this.
What does this example suggest about the broader implications of recent developments in AI? First, there are some pretty big benefits. Second (and I think this is more general), LLM-style AI shortens the distance between keen beginners and average practitioners, while enhancing the abilities of both. Looking at the text that ChatGPT typically generates, it’s as good as lots I read from day to day but (I immodestly claim) wooden and stilted compared to my own work. (Of course, I can’t make the expert judgement wrt code.)
Overall, I think LLMs will prove to be labour-augmenting, making workers more productive, rather than labour-replacing. But that remains to be seen.
Something that seems completely lost in the LLM hype is the fact that coding != software engineering, and unless you are playing around with sandboxed trivialities, code is the easy part. Architecture, scaling, performance and fitness for purpose are the interesting parts, and no LLM will give you code that is the best fit for your particular configuration of these.
So I agree with your prediction on the augmenting aspects. It does, however, bother me that even people who should know better seem to miss the distinction that code is never, ever, ever just code.
The conversation these days in STEM is not necessarily dryly coding lines, but understanding simple logical steps in building code strings and execution tasks. There are some great visual icon-based builders, such as Scratch (https://scratch.mit.edu/about) and STEM/Lego-compatible robot kits, to build and demo the generated code. Or we can just go back to Perl ;-)