Would you use AI?

I decided to write my first blog post about my personal view on, and approach to, programming with AI because, after reading other people’s posts and articles and talking with colleagues at work and across the industry, I got the impression that my perspective on this powerful tool is a bit different. I do agree that AI, particularly Large Language Models (LLMs), has been a breakthrough – it has sped up a lot of tasks, brought us convenience like never before, and opened up exciting possibilities for innovation and efficiency in coding. But did programming actually become easier because of that?

I asked myself this question one evening after watching an episode of the anime “Dungeon Meshi.” In that episode, an elf suggested lighting a fire with magic instead of using a flint, as the dwarf usually did. The dwarf replied that making things easier might lead to losing skills, and added, “convenience and ease are not the same thing.”

I have to admit, that line really hit me and stuck with me, because I have a similar concern about AI to the one he had about magic – I truly don’t want to lose or weaken my valuable programming skills. But to be clear, this doesn’t mean I see AI as something bad or that we shouldn’t use it. In fact, I’m optimistic about its potential to augment our abilities, acting as a collaborative partner rather than a crutch. I just believe we shouldn’t become dependent on it. I think we should strive to be the kind of programmers who, even when cut off from the internet, are still capable of writing good-quality, working software.

Personally, I don’t use the AI tools built into editors. Instead, I interact with AI through terminal-based chats, feeding in files with instructions and project information so the assistance has more context (a rough sketch of what that looks like follows the list below). I treat LLMs more like a pair-programming buddy or an advanced autocompleter – something that suggests simple snippets, proposes refactors, or flags potential issues in the code I’m writing – than a tool that writes the code for me. I typically use them:

  • Often to get an initial understanding of difficult or complex concepts – I also ask for sources I can dive into later, in case surface-level knowledge isn’t enough.
  • Sometimes to compare some of my solutions (especially ones I’m not fully confident in) with those suggested by the chat, to see if there’s a better approach or something I hadn’t even considered.
  • Less frequently, but similarly to the point above – after I’ve planned what and how I want to refactor more complex parts of a project, I ask the chat for suggestions to see if I missed anything or if there’s a better way to do it.
  • Really rarely, but it happens – I ask the chat to explain or translate a piece of code that I’ve been struggling to understand for a while.
  • On the frontend side, I’m more trusting and optimistic – I occasionally use LLMs to generate code snippets or even larger components, as the visual and interactive nature of frontend work makes it easier to verify and iterate quickly. It feels like a boost to creativity without as much risk.
  • For backend work, my trust is more limited: I might use it to generate large fragments like mocks, handle bulk data transformations, or rewrite repetitive, data-heavy sections, but I treat it more as a partner that checks syntax, suggests improvements, and warns about possible errors than as something I rely on for core logic or complex systems where security and reliability are paramount.
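
To make the terminal-based workflow I mentioned above concrete, here’s a minimal sketch of how one might gather a few project files and an instruction into a single prompt before pasting or piping it into a chat. The file names, the instruction text, and the build_prompt helper are hypothetical placeholders, not my actual setup.

    # A minimal, hypothetical sketch – not my actual tooling – of assembling
    # a prompt from project files for a terminal-based chat session.
    import sys
    from pathlib import Path

    # Placeholder instruction and context files; swap in whatever the task needs.
    INSTRUCTIONS = (
        "You are reviewing a planned refactor. Compare my plan against the "
        "current code and point out anything I might have missed."
    )
    CONTEXT_FILES = ["notes/refactor-plan.md", "src/billing/invoice.py"]

    def build_prompt() -> str:
        parts = [INSTRUCTIONS]
        for name in CONTEXT_FILES:
            path = Path(name)
            if not path.exists():
                print(f"skipping missing file: {name}", file=sys.stderr)
                continue
            # Label each file so the model knows where a snippet comes from.
            parts.append(f"\n--- {name} ---\n{path.read_text()}")
        return "\n".join(parts)

    if __name__ == "__main__":
        # Print to stdout so the result can be piped into any chat CLI,
        # e.g. `python build_prompt.py | <your-chat-tool>`.
        print(build_prompt())

The point is less the script itself and more the habit: I decide what context the model sees, rather than letting an editor plugin decide for me.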

What I try to avoid – except in truly exceptional situations – is copying AI-generated code and pasting it directly into my project without thorough review and personalization. Besides my earlier point about skill degradation, I’ve got another reason. When I write code myself, I feel like I’m connected to it – I know every piece of it, every symbol I’ve written, and I can navigate through it with ease. Making changes or adding something new feels simple and even enjoyable. What’s more, sometimes at work when I get a bug report or task, I have this strange sense of where exactly in the files I need to look, like a mental map. But it’s a different story when I’ve pasted code (even when I understood it at the time). After a while, that code becomes a black box. I don’t feel that same connection. It’s harder to modify, and when I return to it after some time, I often need to re-learn how it works – much more than with something I wrote myself from scratch.

How often I use AI also depends on who I’m writing code for. If it’s for work, for a company, I use it much more, leveraging it as that pair-programming partner to stay competitive. After all, no one’s paying me for my personal satisfaction from implementing a feature or fixing a bug. On top of that, I feel the pressure – that I won’t be good enough or fast enough compared to my colleagues who are more eager to use every advantage available. On the other hand, I still believe I’m doing the right thing by limiting myself to those use cases, and I have a strong sense that in the long run, it will pay off with better results, deeper expertise, and greater independence. But even if I turn out to be wrong someday and I get replaced – I won’t regret my choices, because I simply don’t want to work in a way that makes me feel suffocated.

To wrap it up, I’d like to go back to the question I asked at the beginning – did programming become easier? Maybe it’s just a skill issue and I’m the only one struggling with this. But in my opinion, no – programming has not become easier in the core sense. Programming with AI is more convenient and can be genuinely empowering when used thoughtfully, like a smart collaborator that enhances our work. But if we rely on this convenience without boundaries, we can become addicted to it, ending up with tunnel vision and losing some core skills. My approach is to embrace LLMs with optimism while prioritizing my own experience and growth to avoid dependency – that way, AI becomes a tool that elevates me, not one that defines me.
