Fortunately, software is much more than app ideas fishing for VC investment. A lot of us are building actual tools for nurses, teachers, technicians, artists, students, etc. We have to analyze these people's role in society, their needs, and their situation, which is different from merely preying on their attention span. Programming languages are still the most reliable way to specify how software must behave. And once the software is done, it is merely born: it then lives through a steady flow of continuous adaptation until one day it dies, as all things do. Downplaying the human condition is a mistake.
You missed the point. The point is that almost all software today follows the same general ideas, patterns, etc.
The quality of AI output is not tied to what those patterns are used for. Even if, say, your tool uses a completely new network protocol, an LLM will still “understand” that it is a network protocol, that it serializes and deserializes following whatever rules you give it, and it will write that down in a memory file and be able to work with it.
A new file format? Same. A specialized new kind of NoSQL database that fits your specific tool better? It will likewise write down in a file how it works and be able to use it.
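To make the point concrete, here is a minimal sketch of the kind of thing being described, with an entirely invented wire format (the length-prefix rule below is a hypothetical spec, not any real protocol). Given even a two-line description like the comment here, an LLM can implement both directions:

```python
import struct

# Hypothetical wire format, invented for illustration: a 4-byte big-endian
# length prefix followed by a UTF-8 payload. This comment is the entire
# "documentation" an LLM would need to serialize and deserialize it.

def serialize(message: str) -> bytes:
    payload = message.encode("utf-8")
    return struct.pack(">I", len(payload)) + payload

def deserialize(data: bytes) -> str:
    (length,) = struct.unpack(">I", data[:4])
    return data[4:4 + length].decode("utf-8")

print(deserialize(serialize("hello")))  # round-trips the message
```

The format itself is arbitrary; what matters is that it is fully specified, which is exactly the situation the comment describes.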
It’s only as good as the documentation you give it. For basic things, such as setting up a basic REST API, it has already learned that from its training data. If it hasn’t, it’s up to you to provide the documentation, and it will be perfectly able to use it.
Even if you build some weird, unique assembly language, it will be able to use it if you give it the instruction set and its documentation.
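As an illustration of what "give it the instruction set" means, here is a toy, completely made-up assembly language (the PUSH/ADD opcodes below are invented for this sketch). The comment doubles as the documentation an LLM would work from:

```python
# Toy instruction set, invented for illustration:
#   PUSH <n>  - push the integer n onto the stack
#   ADD       - pop two values, push their sum
# A table like this is all the "documentation" the comment above refers to.

def run(program: str) -> int:
    stack = []
    for line in program.strip().splitlines():
        op, *args = line.split()
        if op == "PUSH":
            stack.append(int(args[0]))
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    return stack[-1]

print(run("PUSH 2\nPUSH 3\nADD"))  # 5
```

An LLM that has never seen this language can still write and debug programs in it, because everything it needs is in the spec.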
A medicine dispenser application for a nurse is still just CRUD operations for the most part. There’s nothing innovative about how the code would be written in an application like that.
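To show how unremarkable that CRUD core is, here is a minimal in-memory sketch (all names and fields are hypothetical, and a real application would add persistence, validation, and audit logging):

```python
# Minimal in-memory CRUD core of a hypothetical medicine dispenser app:
# create, read, update, and delete dose records keyed by an integer id.

dispenses = {}
next_id = 1

def create(patient: str, drug: str, dose_mg: int) -> int:
    global next_id
    record_id = next_id
    dispenses[record_id] = {"patient": patient, "drug": drug, "dose_mg": dose_mg}
    next_id += 1
    return record_id

def read(record_id: int) -> dict:
    return dispenses[record_id]

def update(record_id: int, **fields) -> None:
    dispenses[record_id].update(fields)

def delete(record_id: int) -> None:
    del dispenses[record_id]

rid = create("Ward 3, bed 12", "amoxicillin", 500)
update(rid, dose_mg=250)
print(read(rid)["dose_mg"])  # 250
```

The domain knowledge (dosing rules, safety checks) is where the real work lives; the code shape itself is the same create/read/update/delete pattern an LLM has seen countless times.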