8/12/2025

The Stuff That Actually Worries Programmers About AI Taking Over Coding

Hey everyone, let's talk about something that's on every programmer's mind these days: Large Language Models (LLMs) & how they're changing the game for coding. It seems like every other week there's a new AI tool that promises to write all our code for us. And while that sounds pretty cool on the surface, a lot of us in the trenches have some… concerns.
It's not just about whether an AI can write a function faster than we can. Honestly, the worries are a lot deeper & more nuanced than that. We're talking about the very nature of our jobs, the quality of the software we build, & even our own skills as developers.
So, let's get into it. Here are the biggest concerns that programmers actually have about using LLMs for coding.

Is the Code Even… Good? Accuracy & Those Weird AI "Hallucinations"

This is probably the most immediate & practical concern for anyone writing code. If you ask an LLM to generate a piece of code, you have to ask yourself: is this code correct? Is it efficient? Is it even real?
Turns out, a lot of the time, the answer is a resounding "maybe." We've all seen it: code that looks plausible but has subtle bugs, uses outdated practices, or is just plain inefficient. A recent study even found that nearly half of all code generated by AI contains security flaws. That's a pretty scary statistic when you think about it.
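Just to make that concrete, here's the kind of thing that slips through review all the time. It's a made-up example (the function & table names are hypothetical), but the pattern is real: the first version builds a SQL query with string formatting, which opens the door to SQL injection, while the second uses a parameterized query.

import sqlite3

# Plausible-looking suggestion: builds the query with string formatting,
# which leaves the door wide open to SQL injection.
def find_user_unsafe(conn: sqlite3.Connection, username: str):
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchone()

# What a careful reviewer should insist on: a parameterized query, so the
# database driver handles the input instead of trusting the raw string.
def find_user_safe(conn: sqlite3.Connection, username: str):
    query = "SELECT id, email FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchone()

Both versions "work" in a quick manual test, which is exactly why flaws like this are so easy to wave through.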
Then there's the whole issue of "hallucinations." This is when an LLM just… makes stuff up. It might reference a software library that doesn't exist or use a function in a way that's completely wrong. For a junior developer who doesn't know any better, this can lead to hours of frustration & debugging. It’s like having a super-fast assistant who’s also a compulsive liar.
And here’s the thing: while a human programmer can be held accountable for their mistakes, who do you blame when an LLM introduces a critical bug into your system? The lines of responsibility get really blurry, really fast.

The Elephant in the Room: Will I Still Have a Job?

Okay, let's just address it head-on: job security. It's a major concern, & for good reason. When you see stats saying that over half of developers think LLMs can code better than most humans, it's hard not to feel a little anxious.
The fear isn't necessarily that all developers will be replaced overnight. It's more about a shift in the job market. Many believe that AI will lower the barrier to entry for junior developers, potentially saturating the market & making it harder for new programmers to get a foothold. Some even worry that entry-level jobs could disappear altogether, creating a future where there are no experienced senior developers to oversee the AI's work.
It's a weird paradox. While 80% of developers see AI as an enabler, there's still a significant chunk who are worried about being outperformed or replaced. The pressure is on to adapt, but it's not always clear what that adaptation looks like.

The Hidden Dangers: Security Flaws & Data Privacy

This is a BIG one. When you're using an LLM to help you code, you're often feeding it information about your project. But what happens to that data? A whopping 24% of developers cite data privacy as their top concern when using AI. Can you be sure that your company's proprietary code isn't being used to train the next version of the model? It's a legitimate fear.
Beyond data privacy, there's the issue of security vulnerabilities. As mentioned before, AI-generated code can be riddled with security holes. But it gets even more insidious than that. Researchers have found that LLMs can "hallucinate" non-existent software packages. A malicious actor could then create a fake package with the same name, & the next time a developer uses the LLM-generated code, they could be unknowingly installing malware onto their system. It's a terrifyingly simple way to introduce a major security risk.
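One practical guardrail, if you want it: before you add a dependency an LLM suggested, check that the package actually exists & has some history behind it. Here's a rough sketch that queries PyPI's public JSON API; the function name is ours, & this is a sanity check rather than a complete defense.

import json
import urllib.request
from urllib.error import HTTPError

# Minimal sanity check before trusting an LLM-suggested dependency:
# confirm the package exists on PyPI & look at how many releases it has.
# A brand-new package with one release & a name suspiciously close to a
# popular library deserves extra scrutiny.
def check_pypi_package(name: str) -> None:
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url) as resp:
            data = json.load(resp)
    except HTTPError as err:
        if err.code == 404:
            print(f"'{name}' does not exist on PyPI -- possibly hallucinated.")
            return
        raise
    releases = data.get("releases", {})
    print(f"'{name}' exists with {len(releases)} release(s); "
          f"latest version {data['info']['version']}.")

It takes seconds to run, & it catches the most obvious version of the fake-package attack before anything gets installed.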

Are We Forgetting How to Think? The Skill Atrophy Problem

This might be the most philosophical, but also one of the most important, concerns. For years, programmers have honed their skills through a process of struggle & discovery. You encounter a problem, you dig into the documentation, you debug the code, & you eventually come to a deep understanding of how things work.
But what happens when you can just ask an LLM for the answer? The temptation to copy & paste a solution without really understanding it is immense. And that's where the real danger lies. We risk creating a generation of developers who are completely reliant on AI. Take away their ChatGPT subscription, & they're lost.
There's a real fear that this over-reliance will lead to a decline in critical thinking, problem-solving skills, & the overall quality of software engineering. It’s like using a calculator for all your math problems – you might get the right answer, but you never actually learn the underlying principles.
For businesses, this is a ticking time bomb. How do you innovate & build robust, complex systems if your development team doesn't have a deep understanding of the code they're writing? This is where having the right tools becomes crucial. For instance, a platform like Arsturn, which allows businesses to create custom AI chatbots trained on their own data, can be a safer & more controlled way to leverage AI. You can create a knowledgeable assistant for your developers that draws from your internal documentation & best practices, rather than the wild west of the public internet. This helps ensure that the AI is providing accurate, secure, & relevant information, which can actually support skill development rather than hinder it.

The Legal Minefield: Licensing & Copyright Questions

LLMs are trained on vast amounts of data, including billions of lines of code scraped from the internet. But what if some of that code was under a restrictive license? Or what if it was unethically sourced? This is a legal minefield that we're only just beginning to navigate.
When you use AI-generated code, you could be unknowingly incorporating someone else's copyrighted work into your project. This opens up a whole can of legal worms that most companies would rather avoid. Who is liable if you get sued for copyright infringement? The developer? The company? The provider of the LLM? No one really knows for sure.

Is the Fun Gone? The Loss of Creativity & the Joy of Programming

For many of us, programming is more than just a job. It's a creative outlet, a puzzle to be solved. There's a certain joy in wrestling with a difficult problem & coming up with an elegant solution.
But what happens when an LLM can just spit out a solution in seconds? Some programmers worry that it will take the creativity & satisfaction out of their work, turning it into a mundane task of prompting an AI & then cleaning up its mess. It's a valid concern. If you got into programming because you love to build things & solve problems, the idea of becoming a "prompt engineer" might not be all that appealing.

A Tool for Juniors, a Toy for Seniors?

There's also a growing sentiment that the usefulness of LLMs varies depending on your experience level. For a junior developer, an LLM can be an incredible learning tool & a way to get unstuck. But for a senior engineer who is dealing with complex, nuanced problems that require a deep understanding of a specific codebase, an LLM might not be as helpful.
A senior developer's job often involves tasks that an LLM simply can't handle, like deciphering what a customer actually wants, debugging a tricky race condition, or designing a new system from the ground up. In these situations, the LLM can feel more like a distraction than a useful tool.
That's not to say that senior developers can't benefit from AI. But their needs are different. They might use AI for more targeted tasks, like generating boilerplate code or getting a quick refresher on specific syntax. It's less about asking the AI to solve the problem for them, & more about using it as a high-powered assistant.
This is another area where a custom AI solution like Arsturn can be incredibly valuable. Imagine a senior developer being able to ask a chatbot, "What's the standard procedure for deploying a hotfix to the main production server?" & getting an instant, accurate answer based on the company's internal runbooks. That's a much more powerful & practical application of AI for experienced developers than just asking it to write a generic sorting algorithm. By building a no-code AI chatbot trained on your own data, you can create a tool that truly understands your business's unique context & can provide personalized, relevant assistance to your entire team, from junior to senior.

So, Where Do We Go From Here?

Look, the genie is out of the bottle. LLMs are here to stay, & they're only going to get more powerful. The concerns we've talked about are real, but they don't mean that AI is a "bad" thing for programming. It just means we need to be smart about how we use it.
The key is to treat AI as a tool, not a replacement for our own brains. We need to stay curious, keep learning, & never, ever blindly trust the code that an AI generates. The future of programming isn't about being replaced by AI; it's about learning to work alongside it effectively.
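On that note, one cheap habit that goes a long way: before you merge anything an AI wrote, pin down what it's supposed to do with a couple of quick tests. A tiny, hypothetical example (dedupe_emails here stands in for whatever helper the assistant just handed you, not a real API):

# Before merging an AI-generated helper, write down its intended behavior
# as a few tests. 'dedupe_emails' is a hypothetical assistant-written
# function that's supposed to deduplicate addresses case-insensitively.
def dedupe_emails(emails):
    # (imagine this body came straight from the assistant)
    seen, result = set(), []
    for email in emails:
        key = email.strip().lower()
        if key not in seen:
            seen.add(key)
            result.append(email)
    return result

def test_dedupe_emails():
    assert dedupe_emails(["A@x.com", " a@x.com "]) == ["A@x.com"]
    assert dedupe_emails([]) == []

test_dedupe_emails()

It takes two minutes, & it forces you to actually read & understand the code instead of just skimming it.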
Hope this was helpful & gives you some food for thought. Let me know what you think – what are your biggest concerns about using LLMs for coding?

Copyright © Arsturn 2025