Last programming language? For now, maybe

In a couple of posts on the Clean Coder blog dating back to 2016, Uncle Bob Martin (one of the founders of the software craftsmanship movement) explores the productivity gains delivered by programming languages. He says that the first programming language (assembler, of course) increased productivity roughly tenfold compared to coding in binary. Since then, however, the productivity gains from newer programming languages have dwindled. He makes the case that the development of programming languages has followed a logarithmic curve, and that we are approaching an asymptote where additional effort spent on new languages and frameworks brings smaller and smaller productivity improvements. Uncle Bob claimed:

“No future language will give us 50%, or 20%, or even 10% reduction in workload over current languages”.

I agree with Uncle Bob in the short term, but disagree in the long term.

Programming is the craft of translating human intent into machine code. Improvements in our craft come when we make this translation easier, which happens when we communicate with computers at an ever higher level, using ever higher abstractions. In other words, we develop tools that let the interface between computers and humans progressively shift from pure machine code toward more human forms of interaction. Programming the ENIAC was done by plugboard wiring. Later, assembly language was invented as an abstraction above the hardware. C is effectively a layer above that. Java is higher still: it runs in the JVM and abstracts away memory management. Development tools are also productivity enhancers, as all tools are: content assist makes writing code faster, and so do refactoring wizards.
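To make the memory-management rung of that ladder concrete, here is a minimal Java sketch (the class name is mine, purely illustrative): the kind of growing buffer that C code would manage with explicit malloc, realloc, and free calls is a one-liner, with the JVM reclaiming the memory automatically.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative only: the same growing buffer that C code would manage
// with malloc/realloc/free is handled entirely by the Java runtime.
public class AbstractionDemo {
    public static void main(String[] args) {
        List<Integer> numbers = new ArrayList<>(); // backing array managed by the JVM

        for (int i = 0; i < 1_000_000; i++) {
            numbers.add(i); // the list resizes itself; no realloc bookkeeping
        }

        System.out.println("Stored " + numbers.size() + " values");
        // No free() at the end: the garbage collector reclaims the memory
        // once 'numbers' becomes unreachable.
    }
}
```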

From this perspective, it’s clear that we have a very long journey ahead of us, with an ultimate goal we can’t even clearly see. Hopefully, at some point humans will be able to seamlessly augment their brains with computer power and simply think with the aid of their computers. That is the ultimate asymptote. On the way from here to that ultimate simplicity there are many more milestones, and many more programming languages. A programming tool built on a brain-to-computer interface will be far more productive than anything we use today.

Of course, the invention of a direct brain-to-computer interface will not obviate the need for general-purpose programming languages or for programmers. If anything, there will always be devices with highly constrained computing power – what we today call embedded devices. They will exist no matter how much computing power grows, simply because resources will always be scarce, and there will be value in adding some computing capability to devices too constrained for full-blown “standard” runtimes. But that doesn’t mean programming for future embedded devices will stay primitive. Even today, embedded programming is done neither in machine code nor in assembly. IDEs for embedded programming can be just as modern, incorporating all the productivity features yet to be invented.

We’re not approaching the final asymptote in programming efficiency; we’ve just reached a plateau of sorts. Why the plateau? Because hardware resources stagnated. Throughout history, practical improvements in programming tools, and the resulting productivity gains, have been tied to the availability of hardware resources. Compilers arrived when computing resources became cheap enough to spend some of them on building programs. The switch from the long cycle of writing code by hand and running it from punch cards to real-time coding came with abundant CPU power and leaps in I/O technology. IDEs like Eclipse or IntelliJ were impossible with 16 KB of memory and a punch card reader.

But wait – there is a computing resource whose availability has recently improved: TPUs (Tensor Processing Units), usually cloud-based. They have enabled the adoption of Machine Learning as a practical discipline. And ML has delivered improvements in the efficiency of software projects – on a large scale, by enabling techniques such as image recognition. Some problems that were intractable a decade ago are solved today using machine learning. Others were barely possible and took a great deal of effort to code. ML is not a general-purpose programming tool, but it is a productive tool for software development.

This brings up another, older pronouncement of Uncle Bob’s. In his famous “Last Programming Language” speech, Uncle Bob noted that we seem to be going in circles between structured, object-oriented, and functional programming, and posited that these three paradigms exhaust all the possible options. But some programming problems can be solved without writing code at all. Practical Machine Learning has enabled computers to tell cats from dogs and to recognize human faces – something that could not be done reliably in any structured, object-oriented, or functional language.

“Trying to manually program a computer to be clever is far more difficult than programming the computer to learn from experiences”,

said Greg Corrado, a senior research scientist at Google AI. True, the context of his quote was somewhat different (he was arguing that Machine Learning is the pathway to Artificial Intelligence), but the point stands: ML unlocked automation that was previously impossible. Wouldn’t that qualify it as an alternative to the three classical paradigms of general-purpose programming languages?
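Corrado’s contrast can be made concrete with a toy sketch in Java (a hypothetical example of mine, not from any of the talks cited here): instead of hand-writing the rule that separates two classes of inputs, we let a perceptron learn it from labeled examples.

```java
// A toy illustration of "programming the computer to learn from
// experiences": a perceptron learns the rule x + y > 1 from labeled
// examples instead of having the rule written by hand.
// (Hypothetical example code, not from any cited source.)
public class PerceptronDemo {
    public static void main(String[] args) {
        // Training data: points (x, y) labeled 1 if x + y > 1, else 0.
        double[][] inputs = { {0, 0}, {0, 1}, {1, 0}, {1, 1}, {0.9, 0.9}, {0.1, 0.2} };
        int[] labels      = {  0,      0,      0,      1,      1,          0 };

        double w1 = 0, w2 = 0, bias = 0;   // learned parameters
        double learningRate = 0.1;

        // Repeatedly nudge the weights toward the correct answers.
        for (int epoch = 0; epoch < 100; epoch++) {
            for (int i = 0; i < inputs.length; i++) {
                int predicted = (w1 * inputs[i][0] + w2 * inputs[i][1] + bias > 0) ? 1 : 0;
                int error = labels[i] - predicted;  // -1, 0, or +1
                w1   += learningRate * error * inputs[i][0];
                w2   += learningRate * error * inputs[i][1];
                bias += learningRate * error;
            }
        }

        // The decision rule was never written explicitly; it was learned.
        System.out.printf("Learned rule: %.2f*x + %.2f*y + %.2f > 0%n", w1, w2, bias);
    }
}
```

The interesting part is what is absent: no if/else encodes the boundary between the two classes. The rule emerges from the data, which is exactly what makes this style of automation feel like a fourth paradigm rather than a variation on the classical three.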

What’s next? I can’t wait to see how coding could change with VR headsets and gesture-sensing input devices. They could create new possibilities and unlock completely new paradigms. Happy coding!


Disclaimer: opinions expressed in this post are strictly mine and not of my employer.