Last programming language? For now, maybe

In a couple of posts on the Clean Coder blog dating back to 2016, Uncle Bob Martin (one of the founders of the software craftsmanship movement) explores the productivity gains delivered by programming languages. He says that the first programming language (assembler, of course) increased productivity roughly tenfold compared to coding in binary. By now, however, the productivity gains from newer programming languages have dwindled. He makes the case that the development of programming languages has followed a logarithmic curve, and we are approaching an asymptote where additional efforts to develop new languages and frameworks bring smaller and smaller productivity improvements. Uncle Bob claimed:

“No future language will give us 50%, or 20%, or even 10% reduction in workload over current languages”.

I agree with Uncle Bob in the short term, but disagree in the long term.

Programming is the craft of translating human intent into machine code. Improvements in our craft come when we make this translation easier, which we achieve by communicating with computers at an increasingly high level, using ever higher abstractions. In other words, we develop tools that allow the interface between computers and humans to shift progressively from pure machine code to forms closer to human interaction. Programming for ENIAC was done by plugboard wiring. Later, assembly language was invented as an abstraction above the hardware. C is effectively a layer above that. Java is higher still, as it runs in the JVM and abstracts away memory management. Development tools are also productivity enhancers, as all tools are: content assist makes writing code faster, and so do refactoring wizards.

From this perspective, it's clear that we have a very long journey ahead of us, with an ultimate goal we can't even clearly see. Hopefully, at some point humans will be able to seamlessly augment their brains with computing power and simply think with the aid of their computers. That is the ultimate asymptote. On the way from here to that ultimate simplicity there are many more milestones, and many more programming languages. A programming tool based on a brain-to-computer interface will be far more productive than anything we use today.

Of course, the invention of a direct brain-to-computer interface will not obviate the need for general-purpose programming languages or for programmers. For one thing, there will always be devices with highly constrained computing power, what we today call embedded devices. They will exist no matter how much computing power increases, simply because resources will always be scarce, and there will be value in providing some computing capability in devices too constrained for full-blown "standard" runtimes. But that doesn't mean programming for future embedded devices will stay primitive. Even today, embedded programming is not done in machine code, nor in assembly. IDEs for embedded programming will be thoroughly modern, incorporating all the productivity features that are yet to be invented.

We’re not approaching the final asymptote in programming efficiency; we’ve just reached a plateau of sorts. Why the plateau? Because hardware resources have stagnated. Throughout history, practical improvements in programming tools, and the resulting productivity gains, have been linked to the availability of hardware resources. Compilers arrived when computing resources became cheap enough to spend some of them on building programs. The switch from the long cycle of writing code by hand and running it from punch cards to real-time coding came with abundant CPU power and leaps in I/O technology. IDEs like Eclipse or IntelliJ were impossible with 16KB of memory and a punch card reader.

But wait: there is a computing resource whose availability has recently improved dramatically: TPUs (Tensor Processing Units), usually cloud-based. They have enabled the adoption of Machine Learning as a practical discipline, and it has delivered large-scale improvements in the efficiency of software projects by enabling techniques such as image recognition. Some problems that were intractable a decade ago are solved today using machine learning; others were barely possible and required a lot of effort to code. ML is not a general-purpose programming tool, but it is a productive tool for software development.

This brings up another, older pronouncement of Uncle Bob’s. In his famous “Last Programming Language” speech, he noted that we seem to be going in circles between structured, object-oriented and functional programming, and posited that these three paradigms exhaust all the possible options. But some programming problems can be solved without writing code. Practical Machine Learning has enabled computers to tell cats from dogs and to recognize human faces, something that could not be done reliably with structured, object-oriented or functional languages.

“Trying to manually program a computer to be clever is far more difficult than programming the computer to learn from experiences”,

said Greg Corrado, senior research scientist at Google AI. True, the context of his quote was a bit different (he was arguing that Machine Learning is the pathway to Artificial Intelligence), but the point still stands: ML unlocked automation that was previously impossible. Wouldn’t that qualify it as an alternative to the three classical categories of general-purpose programming languages?
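Corrado’s point can be illustrated with a toy sketch (entirely my own, not from his talk or the posts above): instead of hand-coding a decision rule, we let a tiny perceptron discover the hidden rule “label is 1 when x + y > 1” purely from labelled examples. The dataset, the rule, and all the names here are hypothetical illustrations.

```python
import random

def train_perceptron(samples, epochs=50, lr=0.1):
    """Learn weights w and bias b from (features, label) pairs
    using the classic perceptron update rule."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred          # -1, 0, or +1
            w[0] += lr * err * x1       # nudge weights toward the mistake
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# The "experience": 200 random points labelled by the hidden rule x + y > 1.
# Nowhere below do we tell the program that rule explicitly.
random.seed(0)
data = [((x, y), 1 if x + y > 1 else 0)
        for x, y in ((random.random() * 2, random.random() * 2)
                     for _ in range(200))]

w, b = train_perceptron(data)
```

After training, `predict(w, b, 1.5, 1.5)` returns 1 and `predict(w, b, 0.1, 0.1)` returns 0: the program learned the classification from experience rather than from an explicitly programmed rule, which is the shift Corrado describes, in miniature.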

What’s next? I can’t wait to see how coding might change with VR headsets and gesture-sensing input devices. They could create new possibilities and unlock completely new paradigms. Happy coding!

Disclaimer: opinions expressed in this post are strictly mine and not of my employer.  


Machine Learning and Artificial Intelligence

There is a popular line of thinking about Artificial Intelligence (AI) which says that Machine Learning is the pathway to AI. The idea gained momentum partly because Machine Learning is a hot topic (and the buzzword of the decade), but also because we understand ML fairly well. With ML in hand, it is easier to build machines that can learn and become smart, while making a machine smart out of the box is a much more difficult task.

This approach makes sense. After all, the only intelligence we know, human intelligence, is learned, not built. In other words, even if we were to invent Asimov-style positronic brains, robots would still need a period of learning after being fully assembled. Of course, robots will need to learn a whole lot faster than humans; nobody wants a robot that takes 18 years to become functional. Fortunately, robots can learn much faster than humans. But to learn fast, they need a lot of input to process, which means they will have to learn from shared information rather than from personal experience alone. To give a hypothetical example, a robot would become a good driver much faster if it could learn not just from its own driving history, as humans do, but from the experience of all the other robot drivers. Robots will need to remain in constant communication with each other (whether centralized or decentralized) in order to keep up with all the changes in the world around them and get better at whatever they are doing.

Now, how will ML help us achieve AI? By itself, ML is just code performing syntactic manipulation according to statistical rules, which is not intelligence of any kind, not even weak AI. Will it lead a computer system to develop intelligence? The Chinese Room argument says it is impossible to jump from syntactic rules to semantic understanding, so there is still a big phase transition to be made before AI becomes a reality. Patience, please.

Wer wartet mit Besonnenheit

Der wird belohnt zur rechten Zeit

(Roughly: Whoever waits patiently will be rewarded at the right time. © Rammstein).

Disclaimer: opinions expressed in this post are strictly my own – not of my employer.