The general opinion on LLMs for code generation
2025-10-16
Today I attended a presentation at work. It wasn't really a presentation, it was more like a panel where people talked about where they come from and took questions from the audience. The subject of the event was "Skills for the future", and it was intended to give the audience a perspective on how to manage their careers in the age of A.I.
The cool thing about the event is that I was able to take a look at how the community at my job, a big bank with hundreds of programmers, sees A.I.
The panel was made up of two people: a C++ developer who also uses Python and Java, and an IT specialist, a woman who has been in the I.T. field for 35 years.
The first interesting question was: "LLMs are good for some languages and not so good for others. How do you see tools like Copilot for C++? Are they reliable?"
The man answered something like: "For C++ it is not reliable at all, especially for big codebases. The reason is that, in the context of using Copilot as a tool for generating code, there are not enough C++ projects on GitHub to properly train the chatbots. A good rule of thumb is that if your language has very few projects on GitHub, chances are the code generated by the LLM will be pretty bad. In big codebases where you need a lot of context to create a piece of code, like a C++ project with thousands of lines of code and dependencies on a lot of custom-made libraries, the LLM has no capacity to take in all of that context. So in my experience, LLMs and A.I. are good for small projects in popular languages. If you want to build, let's say, a wall in your garden, you can ask ChatGPT how many bricks and how much mortar you need and you will get a pretty good estimate. For a C++ project it is of no use."
I'm pretty skeptical about A.I., I usually write everything myself, I'm a grumpy 46-year-old, and I always thought of myself as a black sheep in a world where everyone sees A.I. as the indispensable tool. So listening to seasoned programmers and leaders say that no, it is not that useful, was pretty shocking.
Another question was: "Do you think that modern books written with the help of A.I. are useful?"
The answer was something like: "There are classical books that everyone should read, but the new technical books written after the A.I. revolution cannot be trusted." This confirmed my perception that the information being created nowadays, in print and online, is polluted and unreliable, and that books printed in the past, before the chatbot era, are becoming an important asset.
Carefully crafted information, put together by an experienced professional, cannot be replaced by an A.I.-generated equivalent.