May 12, 2023

Y2023 AI versus Devs | Part I: Friend or Foe


There are, broadly speaking, two schools of thought on AI/LLMs’ influence on software developers and their work practices this time around: one sees AI as a friend that augments developers, the other as a foe that will eventually replace them.

If we jump out of the software engineering context, the same two confronting camps are making the same arguments. Replace “developers” with any number of professions, many of them this time white-collar, professional-level roles spanning from medicine to law, and you have the same debate and the same predictions.

Back in the software engineering context, software, or a piece of code, is a medium humans create to gain productivity: we ask programmable computers to do what we tell them to do. The role of the software developer was created not long ago because, as things stand, computers by design cannot write code to operate themselves. The role has since evolved into more of a king-maker type, thanks to media portrayals of hackers and brilliant minds using computers to achieve great things.

But we are at an inflection point as a result of the rapid rise of AI/LLMs. Considering the two schools of thought, are we ready to take a U-turn and acknowledge that, just like some of the roles displaced by software, it is time for software developers to be affected and (eventually) replaced?

To me, the underlying factors are twofold: what AI/LLMs can already do for (and to) developers today, and how capable AI may become in the future.

In Part I, I intend to make observations rather than arguments; the arguments are left for Part II.

(Strong) Signal vs. (Weak) Noise

Other than the jaw-dropping abilities they possess, what current LLMs most visibly suffer from is hallucination. Another major, though less obvious, concern is bias, likely a result of the training data. Then there are the mistakes that creep in from time to time, even in generated computer code, which is supposed to behave deterministically. For reference, this paper by researchers at OpenAI details both the risks associated with LLMs and the potential mitigations.

In my view, these shortcomings are the noise in a signal-vs.-noise analogy, where the overwhelming signal is the outstanding performance current LLMs achieve. Any argument built on the weak noise is not strong enough to stand against a future shaped by the strong signal: an increasingly capable AI that keeps growing into humans’ daily activities and our existing way of life.

Let’s look at a concrete use case. Judging by the predominantly positive feedback for GitHub Copilot, a proxy product (i.e. an application) of what current AI is capable of, it’s fair to say that the “assistant” feature has already been delivered with satisfactory results. Will the future of the likes of Copilot merely bring more accuracy and helpfulness (and, more importantly, let developers stay in the game), or will it eliminate the need for developers as the middle-person in creating software (a question that applies to both humans and AI as the middle-person, but that’s a different topic we may cover later)? There are already early reports of newbies and no-coders putting together functional codebases using only ChatGPT. It’s not unreasonable to think that more and more people will be able to build working software (not just write code) in the future.

The software engineering landscape will change, but the question remains: what does the future hold for software developers as a profession?

Let’s observe how people and professions use these LLM-based applications at the moment, and extrapolate.

Asking the Right Questions, a.k.a. Prompt Engineering

If you’ve ever worked with developers, you’ll have observed that some developers simply know and memorise everything required to complete the job at hand; but there are others, probably the overwhelming majority, whose productivity comes from their use of various data sources, be it their notes, StackOverflow, or Google.

When we look closely at the second group, what sets senior devs apart from average ones is their ability to ask the right questions in the right places to locate the precise knowledge needed for the task at hand.

ChatGPT and Copilot solve the “right place” challenge by acting as single sources of abundant knowledge. That leaves the “right questions” as the differentiator, and asking them well is the emerging skill now called prompt engineering: the more context, constraints, and intent you pack into a question, the better the answer that comes back.
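To make the point concrete, here is a minimal sketch of the difference between a vague question and an engineered one, using the OpenAI Python SDK as it existed around the time of writing (the `openai.ChatCompletion.create` call). The prompts are illustrative examples of my own, not from any benchmark.

```python
import openai  # pip install openai (the 0.27.x-era API)

openai.api_key = "sk-..."  # your API key


def ask(prompt: str) -> str:
    """Send a single-turn question to the chat model and return the reply."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response["choices"][0]["message"]["content"]


# A vague question: the model must guess the language, style, and constraints.
vague = "How do I remove duplicates from a list?"

# An engineered question: context, constraints, and expected output are explicit.
engineered = (
    "In Python 3.10, write a function that removes duplicates from a list of "
    "strings while preserving the original order, without using any imports. "
    "Return only the function definition with a one-line docstring."
)

print(ask(vague))
print(ask(engineered))
```

The engineered prompt pins down the language, the constraints, and the shape of the output, which is exactly the “right questions” skill described above, transplanted from search engines to LLMs.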

Now we’re clear about what AI can do and the impact it has on devs at this moment in time. However, we still don’t know for sure how capable AI can become in the future. One way to judge is to look at the possibility of AI making self-initiated breakthroughs.

Finding Secrets - The AI Breakthroughs

In software development, we’re trained in “thinking in systems”, a term coined by Donella Meadows, thanks to constant themes from the history of computer science such as abstraction and models of computation. And it has helped: humans have come a long way from the inception of modern computing to the point where, as Marc Andreessen penned, “software is eating the world”, by relentlessly making both incremental improvements and drastic breakthroughs.

On the incremental-improvements front, we’ve just experienced AI working as a more-than-capable assistant or copilot. It is within comprehension that, with the emergence of AI agents (and later ones built on more advanced models), AI could autonomously achieve even more improvements with little help from humans.
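What might such autonomous, incremental improvement look like mechanically? Below is a minimal sketch of one plausible shape: a propose-test-keep loop in which an agent only retains changes that the project’s test suite approves. The helpers `propose_patch`, `apply_patch`, and `revert_patch` are hypothetical stand-ins, not a real library, and the `pytest` command is an assumption about the project.

```python
import subprocess


def run_tests() -> bool:
    # Run the project's test suite; a zero exit code means the change is safe.
    # (Assumes a pytest-based project; substitute any test command.)
    return subprocess.run(["pytest", "-q"]).returncode == 0


def propose_patch(goal: str) -> str:
    # Hypothetical: ask an LLM for a unified diff that moves the codebase
    # toward `goal`. Stubbed out here; not a real API.
    raise NotImplementedError


def apply_patch(diff: str) -> None:
    # Hypothetical: apply the diff to the working tree, e.g. via `git apply`.
    raise NotImplementedError


def revert_patch() -> None:
    # Hypothetical: roll the working tree back, e.g. `git checkout -- .`.
    raise NotImplementedError


def improvement_loop(goal: str, max_iterations: int = 10) -> None:
    """Propose-test-keep: only changes the test suite approves are retained."""
    for i in range(max_iterations):
        apply_patch(propose_patch(goal))
        if run_tests():
            print(f"iteration {i}: change kept")
        else:
            revert_patch()
            print(f"iteration {i}: change reverted")
```

The point of the sketch is that the test suite, not a human, is the gatekeeper; incremental improvement becomes a search problem the agent can grind through on its own.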

What about making drastic breakthroughs? As Peter Thiel once said, “Every great business is built around a secret that’s hidden from the outside.” So the remaining question is:

Would AI be able to connect the dots and come up with something new in the foreseeable future, i.e. find secrets?

In other words, and putting it in the context of software engineering:

Would AI possess the higher-order cognitive abilities exhibited in human critical thinking that have proven useful throughout the history of computer science, such as deep contextual understanding, creativity and innovation, and even the intuition and tacit knowledge we rely on at times?

And the follow-up question, no less important:

Could AI possess the ethical and moral reasoning ability to make the right decisions?

I’ll share my thoughts on all of these questions in Part II (and yes we’ll be touching on AGI).

The spoiler is that I don’t think the profession is going away; however, it will look drastically different from what it has been over the past 40 years since the software crisis.

Recap

• AI is eating software, which has been eating the world up to this point.
• Newbies and expert-level devs look likely to benefit the most.
• For the rest, it could spell an existential crisis.
• As for AI self-initiated breakthroughs, it remains to be seen whether they will happen.

See you in Part II.
