Todd vs. Axios Round ???
*Is it the hype or your career that's exploding?*
My AI cheerleader, Axios, is at it again after a (relatively) dormant stretch on the benefits and scaries of AI. To give them more credit than I normally do, they've at least turned the lens on themselves rather than consuming corporate press releases and regurgitating them (they're certainly not the only ones who do that, but they seem to relish the mystique of AI more than, say, the NY Times). While none of their claims are verifiable, they would have to be outright lying if there weren't some truth to the statements they're making, even if those claims are exaggerated - intentionally or otherwise. Here's the source article (as far as I know, there's no paywall) for reference.
A few further caveats before I tuck in -
- I believe that AI can help significantly with productivity. Though it's hard to get a numeric measure on the results, I'll wager that, based on my most recent projects involving AI in the process, I'm at least 20% more productive than I was in the pre-LLM days. Having said all of that, the claims in the article (4x productivity over the last two months) seem a little hard to swallow.
- That said, it's difficult to know how complex a given company's stack is and how knowledgeable and experienced its workforce is. There's a potential paradox at play here - if Axios' engineers skew more towards the senior level, it's possible that AI could provide a significant boost, since the engineers have the experience to ignore the cruft gurgled up by the LLMs without the need for copious research. Conversely, if the engineers are inexperienced, AI could be swinging at some of the slow pitches that the team didn't know to take a cut at, also creating a significant increase in productivity. If the engineers are a mix, then it's a little more difficult to believe in the productivity gains. Even if the LLM agents are writing all the code, the engineering teams need to understand it and communicate across the experience divide.
- As noted before, software engineers - both in articles I've read and in discussions I've had with real engineers - often state that agents can't act in isolation and write production-ready, enterprise-grade code. Engineers need to make (a lot of) tweaks. This again makes me double down on my skepticism of the scale claimed, since babysitting an LLM (or an army of agents) takes time. Even assuming it doubles productivity (which I don't believe), that's still [does a quick math calculation in his head and on his fingers] - HALF of the productivity gains claimed in the article.
- Again, there are no numbers or means to back these claims, so I can't formally refute them, which is fitting, because I never do anything formally on this blog.
So, let's look at a few quotes that led me to give a few Marge-like groans.
"Last year, [a similar engineering project by one engineer] took three weeks. This past week, he used AI-based "agent teams" and completed the same amount of work in 37 minutes."
It's hard to gauge what constitutes a "similar" project without details. Also, the extremely specific duration of 37 minutes smells a bit like marketing, especially when weighed against a cumbersome 3 weeks. I can imagine that if we count elapsed time from the start of any prompt to the final delivered code, it could be a literal 37 minutes, but are the comparisons apt?
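For what it's worth, here's the back-of-envelope math on that comparison. The working-time assumptions (a "3-week project" meaning 15 working days at 8 hours each) are mine, not Axios', but under them the implied speedup dwarfs the 4x figure quoted elsewhere in the article:

```python
# Back-of-envelope check on the claimed speedup.
# Assumption (mine): 3 weeks = 15 working days * 8 hours/day.

BASELINE_MINUTES = 3 * 5 * 8 * 60  # 7,200 working minutes
AI_MINUTES = 37                    # the article's figure

speedup = BASELINE_MINUTES / AI_MINUTES
print(f"Implied speedup: {speedup:.0f}x")  # Implied speedup: 195x
```

A 195x gain on one project sitting next to a 4x gain overall is quite the spread, which is exactly why the "definition of done" question below matters.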
As much as I hate when people try to generate a formal "definition of done," I think it warrants inspection here. Was the 3-week project coded, tested, and deployed in that time frame, or was it feature complete (i.e. initial coding was done, but there was a lot of debugging left to do)?
If it were the former, then how can anyone be certain the AI's code is production-ready after 37 minutes? Things that worked for a handful of users often fail at scale with hundreds or thousands.
If it's the latter, then where were the gains? Did this rely on external APIs that the developer needed to research? Did it save on the effort needed to perform the research (seems feasible)? How did the developer know that something that took them 3 weeks to complete was correct in such a short period of time?
Did it finish so quickly because much of the work from the similar project had already been encoded somewhere - via a framework, source control, documentation, etc? If so, still impressive (especially if this type of project is common), but what would the duration of the second iteration have been if left up to the meat puppet again?
"...six months ago, we refocused our product and tech organization, shrinking from 63 to 43 people..."
And a year before that, the team size was 90 people. It's not uncommon for up-and-coming companies (a category I'd put Axios in, given how much their local journalism footprint has expanded over that same period) to overhire when given the budget and then struggle to find work for a team that size. 90 is still a tiny team compared to Big Tech (hence the name), but it would be huge for the lean ship that Coastal Chicago runs. Again, it's hard to gauge without knowing more about the particular department. And, regardless, a bigger team requires more communication overhead, which, even in the most efficient companies, will slow down code output.
I'd look askance at a company that halved its engineering department in 2 years if it ever decides to hire again. Granted, they did admit they're giving this update to show the warts-and-all impact of AI, and I can appreciate the candor, but I imagine that's going to have an impact on morale. If the response were, "Yeah, but we're the cream of the crop that's left and we're turning on the AI afterburners," my questions would be "So, you cut all of the dead weight? And you attributed those gains to AI? Hmmm." and "If you're so invested in a cutthroat atmosphere, how will you feel when you're gone? Do you think it's healthy to invest in a winner-take-all culture? Do you really think a self-made billionaire exists? How do you measure the productivity of an engineer?"
"Tech debt is gone. Not because we solved it, but because AI just made it irrelevant."
Wut? If it's still there, it's still debt. How did AI make it irrelevant? Did it package it as technical credit default swaps in layered tranches? Tech debt often layers on top of years' worth of trade-offs that aren't undone simply by coding. And anything that writes code creates a trade-off (i.e. debt) that doesn't disappear; it just displaces debt, or pays off one type and generates another. Especially when you have a lot of bots writing code that engineers may or may not be vetting.
It's possible that you believe that humans will be entirely ejected from the loop of coding, so code maintenance isn't a problem for our prying eyes anymore. Maybe, but we're not there yet, and there will still need to be some quality control in place for a non-deterministic system.
Ah, you say, again with your clever retort, but humans are non-deterministic in your empathetic call to advocate for the machines' humanity, so what are we losing? Well, determinism. Our entire existence over millions of years has accounted for human fallibility. Our lives are built around it. If a human can't add 2+2, we assume they're tired/drunk/stupid and move on. If a machine can't do it with a near-perfect success rate, well, something's wrong.
"The Axios backlog, which was 12 months long, will be gone in months. Then the work really starts, as engineers become builders and begin to think beyond the limits of a request list."
That seems to be a fairly small backlog, and it's amazing that with quadrupled productivity over only two months, the entire backlog has melted away. Tack on to that the idea that engineers should now, ostensibly, start suggesting features, and you have a strange beast indeed. I guess ideally, the engineers are displaced in favor of PMs who can vibe code the features after another two rounds of layoffs.
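The backlog math is worth sanity-checking too. If we take the 4x multiplier at face value and assume new requests keep arriving at the old rate of one month-of-work per month (both assumptions are mine), each month nets out three months of backlog cleared:

```python
# Rough model of how fast a 12-month backlog drains.
# Assumptions (mine): a constant 4x productivity multiplier, and
# new work still arriving at 1 month-of-work per month.

backlog = 12.0    # months of queued work
multiplier = 4.0  # claimed productivity gain
inflow = 1.0      # new work arriving each month

months = 0
while backlog > 0:
    backlog += inflow - multiplier  # net -3 months of work per month
    months += 1

print(f"Backlog cleared in {months} months")  # Backlog cleared in 4 months
```

So "gone in months" checks out arithmetically, if and only if the 4x figure holds and nothing new of consequence lands in the queue. Those are big ifs.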
Then, we'll have finally destroyed a class of highly skilled, highly paid professionals with no definitive answers for where they should land. Which will then lead to the natural follow-up question -
Who's next?
Until next time, my human and robot friends.
