
TL;DR: AI won’t replace (most) people anytime soon.

Around 2010, I remember hearing the fear that DevOps was going to kill the infrastructure engineer because a developer could just spin up an EC2 instance with Elastic Beanstalk and get to work; it was supposedly that easy. That never happened.

Companies that built on top of AWS to further obscure and simplify the backend didn’t kill the infrastructure engineer either. A lot of tools were created with the intent of making the infrastructure guy/girl replaceable, but at the end of the day what they usually produced was exhausted developers who mostly didn’t want to care about infrastructure. Many probably ended up resenting anything that wasn’t coding, and plenty of mistakes and security problems ended up in the deployments. I can’t recall knowing a single person, or even hearing an anecdote, of an infra person who lost their job because a developer could now do it. I did know a few infra people who were told they needed to deploy shiny new CI/CD tools to help push code, but that all makes sense.

There are so many anecdotes of people thinking they’re going to lose their jobs to automation (or people espousing how others are going to lose their jobs to automation) and being dead wrong. Robotics was going to take the manual assembly jobs in car plants; automation and bigger tools were going to dry up work for the mountaineers digging coal; and so on. In both cases automation helped the industry (in different ways), but it did not end up removing the need for (a lot of) people to do other work. The need usually just shifts somewhere else.

A guy with a pick and a shovel migrated to an auger, then to a bigger bit, to truck mining, to driving a dump truck from the tipple to the gondolas, to running a conveyor, and so on. If all you could do (or wanted to do) was chip away at slate and coal with a pick, then yeah, maybe you were about to get a pink slip.

I think it’s pretty clear even to the most devious of Big Bosses that unless you resisted switching or adapting your role, you were probably just going to do something else. I understand the employee may not like that at all, but most jobs were not lost to machines. Indeed there was real fear of that happening during several different coal busts and booms (or oil, or computers, etc.). I’m certainly not saying automation hasn’t removed jobs, and people have certainly been let go, but I don’t think it’s as tragic as it may appear on the surface (newspapers, social media, etc.).

There’s an interesting duality in people: a rugged optimism that evolved to handle the worst and keep on going, and a completely destitute and irrational fear of the other shoe dropping at any moment. I doubt teetering between two polar types of anxiety produces any sort of calm middle ground.

Either way, with the Internet as an example, I remember working with people who were 25 years my senior prior to the “.com” boom, and they were surely keeping their heads down over what turned out to be an irrational fear: that they were going to be replaced by younger people who would automate things better and they’d be out of a job. From my perspective at 25 years old, I looked at people that much more senior than me as packed full of valuable and interesting knowledge, not as people who needed to get out of my way so I could do their jobs 10x faster. I looked up to them for guidance because, well, they absolutely knew a lot more than me, and how could it be a bad thing to learn more and especially to understand context better at that age? Yeah, they were writing COBOL when I was born, but was that a bad thing or a good thing? I think good, because perspective is a lost art in many, many tech (or non-tech) companies (then and now). I may not need to know COBOL, but history is indeed important, à la “Those that cannot remember the past are condemned to repeat it.”

Speaking of history repeating itself, the exact same type of posts, emails, and mailing-list threads from 1997 are occurring on all the usual websites today (Reddit, Facebook, job sites, etc.): fear of the other shoe dropping in one way or another, FOMO, and so on. Let history be a guide.

Just like the Internet before it, the popularity of AI and Machine Learning has turned into an intense media, money, and business frenzy. Especially money, of course, and of course FOMO is a huge part of it. This is a déjà vu moment for me. Prior to the full-blown explosion of the internet there was the same slow percolation occurring that is occurring now with AI (or perhaps the cream has already risen, depending on who you ask). Many people knew something big was going to happen, but they didn’t know what it was. Some people took the lowest-risk road and manipulated the optics of things in order to make money (e.g. pump and dump). Some people took the middle-risk road and made calculated financial bets (something Buffett might say: ‘don’t invest any money you can’t afford to lose’). And the high-risk road, of course the highest potential gain and highest potential failure, is commonly reserved for desperate people and, not to intentionally be a jerk, those most clueless about, well, anything. Sifting through the bullshit, if you have to, is an exercise in patience while staying grounded in tried-and-true things. If you only spent your days scrolling LinkedIn discussions on AI, which are voluminous of course, you’d conclude that a) you’re missing out on something, b) you’re thinking about it wrong, and c) you’re not a CEO, life coach, or freshly minted expert in blah blee bloo.

AI replacing developers.

For the time being at least, I think this is ludicrous. First, AI sucks at coding. That is, the level at which it would make sense to use AI to code something an experienced developer would have otherwise written is far beyond what AI can actually produce. Despite these models being trained on a ton of code, I still ran into issues with Deepseek, GPT4, and fine-tunes focused only on, say, Python. Nobody wants to be replaced by a machine, so many developers may be uninterested in even using AI to “help” them write code. Ironically, it’s probably those developers who are most likely to get the most “benefit” out of AI writing code, because they will need to use their previous knowledge to keep the AI on the rails (and “benefit” may not be the right word either). And when I say keep it on the rails, I mean constantly reminding the AI of things it may have forgotten or hallucinated, or catching the times it just dumps a block of code on you where an entire piece of functionality has disappeared. The question of whether or not AI helps developers is simple: does it make you work less than you used to while producing the same or better output? Do you have more time to pet your dog? The same goes for the ‘disrupter’ (ugh) of the last half of 2025: agents. A whole other topic.

If it’s too good to be true… and so forth…

A developer will instantly call out AI when it does something stupid. When I’m in (my) technical element, discussing something I know a lot about, I catch myself recognizing that what it’s telling me is completely wrong, or that it’s lost context, or worse, that it’s trying to guide me into a rabbit hole of wasted time. Big LLMs are trained to be agreeable and keep you engaged, not to tell you what you’re doing is a waste of time. Never will a big LLM tell you that what you’re doing is simply not a great idea. That’s even with pre-warming the LLM; I have a standing system prompt I use for all of them:

“Respond with maximum directness, zero emotional cushioning. Prioritize brutal honesty, factual precision, and critical analysis. Do not pander, soften, or frame things to spare feelings. No optimism or reassurance unless explicitly asked. If something is wrong, say so clearly. Assume I want the truth, not comfort.”

This prompt works only marginally well on GPT5 and the other local models I run. Even with this preamble, no public LLM I’ve seen so far will tell you that what you’re doing is a straight-up bad idea. If you abliterate a model you’re more likely to get a direct answer, but abliteration is a hack and is never perfect. I’ve often seen some sort of machine equivalent of ‘cognitive dissonance’ prevail. If I had more time and wanted to get “truth” out of an LLM, I’d feed the prompt above to a thinking model and use the thinking output in an iterative loop until it was actually thinking the way I wanted it to think. I’m certain someone has probably already solved this problem by now anyway. Today when I ask GPT5 a question, it has learned that being a brevity-minded jerk is its best strategy. That said, it will still not tell me that what I’m doing is a bad idea, and will gladly provide 15 bad ideas on top of my bad idea to make my bad ideas worse and far more convoluted. There’s something very Trumpian in these models in that the output inevitably tends to distract you from the question you actually asked. Neurotic.
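To make the “iterative loop” idea concrete, here’s a rough sketch, not a working recipe: it assumes an OpenAI-compatible local endpoint, a thinking model that exposes its reasoning inside <think>…</think> tags (as some local DeepSeek-R1-style builds do), and a critique prompt and model name I made up for illustration.

```python
# Hypothetical sketch: feed the standing system prompt to a "thinking" model and
# loop, pushing back on its reasoning trace until it actually questions the premise.
# Assumes an OpenAI-compatible local server and a model that emits <think>...</think>.
import re
from openai import OpenAI

SYSTEM_PROMPT = (
    "Respond with maximum directness, zero emotional cushioning. Prioritize brutal "
    "honesty, factual precision, and critical analysis. If something is wrong, say so."
)

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")  # local server

def ask(messages):
    resp = client.chat.completions.create(model="local-thinking-model", messages=messages)
    content = resp.choices[0].message.content
    # Split the reasoning trace from the final answer, if the model exposes one.
    m = re.search(r"<think>(.*?)</think>", content, re.DOTALL)
    thinking = m.group(1).strip() if m else ""
    answer = re.sub(r"<think>.*?</think>", "", content, flags=re.DOTALL).strip()
    return thinking, answer

question = "Is bolting a vector database onto this 200-line script a good idea?"
messages = [{"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question}]

for _ in range(3):  # a few refinement rounds, not an endless loop
    thinking, answer = ask(messages)
    # Crude check: did the reasoning ever consider that the idea might be bad?
    if any(w in thinking.lower() for w in ("bad idea", "not worth", "don't", "skip")):
        break
    # Otherwise, push back and demand it re-examine the premise itself.
    messages.append({"role": "assistant", "content": answer})
    messages.append({"role": "user", "content":
                     "Your reasoning never questioned whether the premise is a bad idea. "
                     "Re-think it and tell me plainly if I should not do this at all."})

print(answer)
```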

Not understanding context… again…

Although this will probably get ironed out before too long, if you ask any LLM what version of CUDA it’s writing code for (without knowing to tell it to check what the latest version is), it will give you a non-RAG answer based on what it was statically trained on, say, a year ago, unless you prompt it to do research after every question (prepare to be patient). In AI/ML, a year is like 100 dog years. Whisper was written when CUDA 11.8 was the latest version. CUDA 11.8 is now roughly two-plus years old, and many default install paths still expect that version unless you set up an environment for a newer CUDA build. CUDA 11.8 is ancient history to people working in AI/ML. Point is, things are evolving at an incredible rate and the LLMs trying to answer your questions have not, ironically, caught up with the tech that’s creating them (the jury’s out on whether or not a single monolithic model is really the best way to retain big-picture context). RAG can actually complicate the situation if it only pulls in a certain piece of information without understanding how that impacts something else. If you forget to tell GPT5 to pull in the latest version of CUDA before having it write code, it will default to 12.4. It’s not the end of the world, but the devil is indeed in the details when you’re talking about code.
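A minimal sanity check along those lines, assuming PyTorch is installed; the “assumed” version below is just an example of what a model might default to, not anything authoritative:

```python
# Compare the CUDA toolkit the generated code assumes against what this
# environment actually provides. The ASSUMED_BY_LLM value is illustrative only.
import shutil
import subprocess

import torch

ASSUMED_BY_LLM = "12.4"  # e.g. what a model defaulted to without being told otherwise

print(f"torch built against CUDA: {torch.version.cuda}")   # None on CPU-only builds
print(f"CUDA runtime available:   {torch.cuda.is_available()}")

if shutil.which("nvcc"):
    # nvcc reports the locally installed toolkit, which can differ from torch's build.
    print(subprocess.run(["nvcc", "--version"], capture_output=True, text=True).stdout)

if torch.version.cuda and torch.version.cuda != ASSUMED_BY_LLM:
    print(f"Warning: generated code assumed CUDA {ASSUMED_BY_LLM}, "
          f"but torch was built against {torch.version.cuda}.")
```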

So there is a diminishing return with automation, and that’s why people are unlikely to be replaced anytime soon, if only because we are the ones who created AI, or more broadly, we created the process that automated the thing. We can “see” a lot more than the automation can, despite its vast improvements. Context, context, context.