Broadly, there are two schools of thought on the influence of AI/LLMs, this time around, on software developers and their working practices.
- The “doomsday” or “end-of-programming” view argues that there is less and less need for human involvement in software development, given how capable recent AI models have become. Among the scenarios derived from there is perhaps the one where AI overlords rule human beings, and software developers, as a profession, are among the first to go.
- The “AI-assistant” view acknowledges the capabilities demonstrated by applications such as GitHub Copilot and OpenAI’s ChatGPT, while still maintaining that human developers will always be needed. And who doesn’t want superstar programmers who have never been more productive? Forget 10X; we’re perhaps talking about 100X, fuelled by GPT-X and multi-modal models in the future.
If we step outside the software engineering context, the same two opposing camps are making the same arguments. Replace “developers” with any of a number of professions, many of them white-collar, professional-level roles spanning medicine to law, and you have the same debate and the same predictions.
Back in the software engineering context, software, or code, is a medium humans create to gain productivity, by asking programmable computers to do what we tell them to do. The role of the software developer was created not long ago because, as things stand, computers by design cannot write code to operate themselves. The role has since evolved into more of a king-maker type, thanks to media portrayals of hackers and brilliant minds using computers to achieve great things.
But we’re at an inflection point as a result of the rapid rise of AI/LLMs. Considering the two schools of thought, are we ready to take a U-turn and acknowledge that, just like some of the roles displaced by software itself, it’s time for software developers to be affected and (eventually) replaced?
To me, the underlying factors are twofold:
- What is AI capable of now? And
- What is AI predicted to grow into, given the consensus that it will only become more powerful, more rapidly, over time, following a hockey-stick trajectory?
In Part I, I intend to make observations rather than arguments; the arguments are left for Part II.
(Strong) Signal vs. (Weak) Noise
For all the jaw-dropping abilities they possess, what current LLMs most visibly suffer from is hallucination. Another major, though less obvious, concern is bias, likely a result of the training data. Then there are the mistakes that creep in from time to time, even in generated computer code, which is supposed to behave deterministically. For reference, this paper by researchers at OpenAI details both the risks associated with LLMs and the potential mitigations.
In my view, these shortcomings are the noise in the signal-vs.-noise analogy, where the overwhelming signal is the outstanding performance current LLMs achieve. Any argument built on the weak noise is not strong enough against a future shaped by the strong signal, i.e. an increasingly capable AI force growing further and further into humans’ daily activities and our existing way of life.
Let’s look at a concrete use case. Judging by the predominantly positive feedback for GitHub Copilot, a proxy product (i.e. an application) for what current AI is capable of, it’s fair to say that the “assistant” role has already been delivered with satisfactory results. Will the future of the likes of Copilot merely bring greater accuracy and helpfulness (and, more importantly, keep developers in the game), or will it eliminate the need for developers as the middle-person in creating software at all (a question that applies to both humans and AI as middle-person, but that’s a different topic we may cover later)? There are already early reports of newbies and non-coders putting together functional codebases using only ChatGPT. It’s not unreasonable to think that more and more people will be able to build working software (not just write code) in the future.
The software engineering landscape will change, but the question remains: what does the future hold for software developers as a profession?
Let’s observe how people and professions use these LLM-based applications at the moment, and extrapolate.
Asking the Right Questions, a.k.a. Prompt Engineering
If you’ve ever worked with developers, you’ll have observed that some just know and memorise everything required to complete the job at hand; but there are others, probably the overwhelming majority, whose productivity comes from their use of various sources of knowledge, be it their notes, StackOverflow or Google.
When we look closely at the second group, what sets senior devs apart from average ones is their ability to ask the right questions in the right places, to locate precisely the knowledge needed for what they’re doing.
ChatGPT and Copilot solve the “right place” challenge by acting as single sources of abundant knowledge. Let’s observe how the “right questions” are asked in each scenario:
- In the case of Copilot, the expectation is that devs dictate the structure and flow of the code, i.e. act as the architect, while Copilot assists with filling in the details of particular coding tasks, i.e. acts as a skilled builder (see the sketch after this list). Anecdotal feedback from my peers seems to confirm that Copilot benefits senior devs much more, because of its assistant nature, and junior devs not as much, since they are yet to build enough conviction about architecting working software, which goes beyond writing pieces of code (something that, interestingly, Copilot has by now accomplished). One thing is for sure: a small team of (senior) devs can now have a much bigger impact, thanks to the marriage between their ability to ask the right questions and AI applications’ ability to gather vast amounts of knowledge into easily accessible, single places.
- As for ChatGPT, because of its generic UX, prompting, or prompt engineering, has become a hot topic (or even a new profession, as it appears to be). It is a repeat of what’s described above: senior devs pinpoint the required knowledge quickly by asking the right questions in the right place. Here, the process of coming up with the “right question” is prompt engineering. It’s unclear what the ideal profile of a prompt engineer will be, but presumably they will come from among domain experts; in the software engineering context, the senior devs.
- Again through ChatGPT, it has become obvious that non-coders find themselves able not only to code but to create usable pieces of software. It may still take some time and effort to productionise the software created by the collaboration between ChatGPT and coding amateurs, but the possibilities are already in plain sight: the often-cited lower barrier to entry into coding, and the lower cost of starting a software business, since expensive devs no longer need to be involved from the beginning, as has historically been required.
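To make the architect/builder dynamic concrete, here is a minimal sketch of the division of labour. The human writes the signature, docstring and constraints, which together form the “right question”; everything below is the kind of body a Copilot-style assistant would typically complete. The completion here is written by hand for illustration, not captured from Copilot itself.

```python
# Architect (human): the signature, docstring and constraints are the prompt.
def dedupe_preserving_order(items: list[str]) -> list[str]:
    """Return items with duplicates removed, keeping first occurrences.

    Must run in O(n) and must not mutate the input.
    """
    # Builder (assistant): a typical completion for the prompt above,
    # reproduced by hand for illustration.
    seen: set[str] = set()
    result: list[str] = []
    for item in items:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result


print(dedupe_preserving_order(["a", "b", "a", "c", "b"]))  # ['a', 'b', 'c']
```

The more precisely the human frames the question (types, complexity bounds, invariants), the better the assistant’s answer, which is exactly the senior-dev skill described above.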
Deriving from the above:
- Curiously, what hasn’t been mentioned so far is the mid-level professional dev: more experienced than a newbie, but yet to graduate to senior. I think they’re in danger of being wiped out completely as a group.
- What’s also interesting is that the recent practice of specialising in software development, driven by the increasing complexity of each narrow domain, could be reversed, simply because that complexity no longer requires as many human specialist devs to tackle it. I think we’re witnessing a renaissance of the generalist dev, aided by AI/LLMs.
Furthermore:
- The journey of training to become a software developer could become unrecognisable. For the first time, you’re developing software not as an individual but with a capable companion. What to learn, and what not to learn, becomes a strategic decision that shapes your learning path.
- People will likely question the career prospects of the software developer, where newbies can code without training, the road to senior is dotted with unknowns and, the scary part, there’s nothing in between.
Now we’re clear about what AI can do, and the impact it has on devs, at this moment in time. However, we still don’t know for sure how capable AI will become in the future. One way to judge is to look at the possibility of AI making self-initiated breakthroughs.
Finding Secrets - The AI Breakthroughs
In software development, we’re trained in “thinking in systems”, as coined by Donella Meadows, thanks to constant themes from the history of computer science such as abstraction and computation models. And it has helped. Humans have come a long way from the inception of modern computing to what we have now, as Marc Andreessen penned in “Software is eating the world”, by relentlessly making both incremental improvements and drastic breakthroughs.
On the incremental-improvements front, we’ve just experienced AI working as a more-than-capable assistant or copilot. It’s within comprehension that, with the emergence of AI agents (and later ones based on more advanced models), AI could autonomously achieve even more improvements with little help from humans.
What about making drastic breakthroughs? As Peter Thiel once said, “Every great business is built around a secret that’s hidden from the outside.” So the remaining question is:
Would AI be able to connect the dots and come up with something new in the foreseeable future, i.e. find secrets?
In other words, putting it in the context of software engineering:
Would AI possess the higher-order cognitive abilities exhibited in human critical thinking, abilities that have proven useful throughout the history of computer science, such as deep contextual understanding, creativity and innovation, and even the intuition and tacit knowledge we rely on at times?
And the follow-up question, nevertheless an important one:
Could AI possess the ethical and moral reasoning ability to make the right decisions?
I’ll share my thoughts on all of these questions in Part II (and yes, we’ll be touching on AGI).
The spoiler is that I don’t think the profession is going away; however, it will look drastically different from what it has been over the 40 years since the software crisis.
Recap
- AI is eating software, which has been eating the world up to this point.
- Newbies and expert-level devs look likely to benefit the most.
- For the rest, it could spell an existential crisis.
- As for self-initiated AI breakthroughs, it remains to be seen whether they will happen.
See you in Part II.