Will AI Replace Developers?
Since the release of GitHub Copilot and ChatGPT, I have seen many posts and tweets about how AI will replace, or is already replacing, humans in different tasks. Many have also said that this disruption will come for developers, which concerns me in particular. I have been thinking a lot about this. Will AI replace my job? How?
Personally, I am a user of ChatGPT and GitHub Copilot. Currently, they do not feel like they could replace me; rather, it feels like having a very capable and fast junior developer helping me with menial tasks:
- Want some boilerplate? Done.
- Do I need to create a class very similar to another class I recently created? Explain in the comments what I want, and GH Copilot does it.
- Do I need to use an API I have not used before in my code? GH Copilot can produce the right code from the comments I write and adapt it to my particular application.
- Do I have a repetitive structure in the code that I need to replicate in several places? GH Copilot will learn the pattern and offer me the right autocompletions.
- Do I need to write a helper function with slight variations for my use case? GH Copilot will infer the right implementation just from some comments or the name of the function.
AI acts like a productivity booster for me. It saves time by avoiding context switches to look up documentation, automating repetitive work, and building simple components and functions. If the completion is large, I just do a small code review to make sure that everything is as I wanted.
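To illustrate the comment-driven workflow I describe above, here is the kind of exchange I mean. The helper and its name are my own invented example, not something Copilot actually produced:

```python
# A comment like the one below is often all Copilot needs; the body that
# follows is the kind of completion it suggests. The helper itself
# (snake_to_camel) is a made-up example for illustration.

# Convert a snake_case identifier to camelCase.
def snake_to_camel(name: str) -> str:
    head, *tail = name.split("_")
    return head + "".join(word.capitalize() for word in tail)
```

Reviewing a suggestion like this takes seconds, which is exactly why it feels like delegating to a fast junior developer.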
At the moment, AI is not a threat to my profession, but what about the future? My current view is that, at the pace this is evolving, in 3 to 5 years it will build software directly from a prompt.
Let us do a brief thought experiment and suppose it can do that. The prompt would be the specification of the software: you would write a specification, and ChatGPT would create the full working software for you. That’s it: no coding, no graphic design, nothing. The problem is that writing a specification for software is HARD. It takes time and effort to create a solid spec, and you need to have the skills to do it. It might work for a small piece of software, but for a large one, I do not think that someone with no training can do it.
So, we have shifted the problem from coding to spec writing. One plausible argument is that AI can help you write a consistent spec: you ask it in the prompt to draft your spec, and then refine it little by little. This is also a possibility. The problem is that reviewing and refining a large spec once again requires training. What I see with this spec idea is that code itself is a spec: a low-level one, maybe, but a very precise one. So in the end, GH Copilot becomes GH Copilot for specs. There might even be different ways to approach different problems: you could build your spec little by little, the way you would usually build code (bottom-up), or just ask ChatGPT to write you the spec of Uber (which it already knows or can infer) and then refine it into Uber for Lawn Mowers (clone and refine), or whatever people come up with. It clearly saves time and opens new possibilities, but you will still need to be an expert in the domain of application building to do it.
You will also need specialization within the domain. For example, I recently saw a guy on Twitter create a 3D game just by issuing some prompts. It looks cool, but even I, a software engineer with a lot of experience, cannot replicate that. He is a domain expert in 3D games: he can review the generated code, fix whatever problems arise, and guide the AI toward his intended goal. To reach his level, I would need to do a lot of reading and studying.
However, we might see the productivity of a single developer grow larger and larger, meaning that a developer of the future might produce as much code as 5 developers of today. I see two likely possibilities. First, software will be much more polished: if one developer can achieve what a whole team can today, what would be possible with a team of 5? Features that are nice-to-have today will be mandatory tomorrow, and users’ expectations will be higher. Second, in the same line of nice-to-haves, we might have more time for tasks that usually get neglected, like producing more maintainable code, or lend developers to other areas of the business that could also be automated or improved if the capacity were there.
In the same way that the spreadsheet displaced a lot of accounting clerks but increased the demand for financial analysts (who could work on their own, without an army of clerks to compute the results of their analyses), the job of “coder” might become obsolete, but we might need more software engineers: people who can not only produce code, but design systems that are implemented by generated code.
A final point I would like to make is that sometimes natural language (like a prompt for an AI) might not be the best way of expressing a concept or a program. The solution of the cubic equation comes to mind, which the Italian mathematician Niccolò Tartaglia expressed as a poem. Mathematical notation did not exist back then, so people had to describe mathematical formulae in natural language. Indeed, mathematical notation is much better suited to express that solution than a poem (!) or natural language. Expressing a program in natural language might work well when the concept of the program already exists and we only refer to it (for example, “photo sharing app”). However, if we need to create a new app that we must describe in full, a programming language might be a much better solution.
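To make the contrast concrete: Tartaglia’s verse describes the solution of the depressed cubic x³ + px = q, which in modern notation is the single formula below (the standard Cardano–Tartaglia solution, not a quote from the poem):

```latex
% Solution of the depressed cubic x^3 + px = q
x = \sqrt[3]{\frac{q}{2} + \sqrt{\frac{q^{2}}{4} + \frac{p^{3}}{27}}}
  + \sqrt[3]{\frac{q}{2} - \sqrt{\frac{q^{2}}{4} + \frac{p^{3}}{27}}}
```

One line of notation replaces stanzas of verse, which is the same economy a programming language offers over prose when describing a program in full.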
The possibilities for the future are endless, and I view them with optimism. However, disruption can come from anywhere. AI tools are here to stay, and they keep getting better. They might come after us in the end, but as with all disruptive technologies, we must adapt and use them to create a better world.