Concerns with AI programming

I’ve been programming with AI on the side for the last few months. While it’s an amazing experience on the one hand, it has also raised some concerns on the other.

I’m not a full-time developer, so I’m looking forward to learning from more experienced developers how they see this…

  1. The ease with which LLMs generate solutions leads to an erosion of critical thinking. Why argue with a PhD-level intelligence? You’ll stop evaluating AI-produced code, accept the first answer, and gradually lose the ability to solve problems and make decisions independently.

  2. LLMs tend to suggest common/popular solutions, even when better/more up-to-date alternatives exist → adoption of suboptimal technologies simply because they are the LLM’s default suggestion, rather than an informed decision based on current best practices.

  3. LLM output is not neutral; it’s influenced by the training data (and its selection… see also point 4 ↓), reinforcement learning, and post-processing steps. LLMs are systems that generate words/code in the most common/likely order. But what if the LLMs are trained on average code which is not that great…?

    Without being aware of these issues, you may overlook superior alternatives → the code ecosystem will become less diverse and innovative.

  4. Companies may use LLMs to promote their own products and services, creating a closed ecosystem where developers may not realize they are opting into (or being locked into) a specific company’s entire stack.

  5. LLMs’ tendency to reinforce what is already popular will greatly hinder the adoption of new libraries, languages, and frameworks. The common denominator that the AI builds upon is never going to be that innovative new framework you don’t yet know about. This could make it more difficult for new technologies to gain traction, as developers may not be exposed to alternatives outside of what the LLM commonly suggests.

  6. I’m not really worried about AI replacing human developers completely just yet: AI is still mainly about the “plumbing”, not about great system design/architecture. But what about the process of training the humans who have the capacity to go beyond the plumbing? Until now, getting to that next/higher level usually involved a lot of plumbing. How do we get humans there without doing the plumbing first? (A fun challenge for our education specialists.)

  7. Contributed by Julliette Reinders Folmer: AIs are often trained on copyrighted code from other people, which - pending many lawsuits - makes the use of AI basically theft of intellectual property. This can cause issues with code licensing: what if the AI is trained on GPL-licensed code? That basically means that YOUR (AI-generated) code MUST now be GPL-licensed as well, as it can be considered a derivative work…

What do you think?