Agents of Mediocrity
Every time I gear up to write the next planned post in my literary opus, some news article - lately, from Axios - begs me to postpone it and check out their latest bugle call towards AI Hype. Once more unto the Digital Breach, my friends!
I generally like Axios and its concise style. I'm sure many people would disdain its "smart brevity" listicle format, but I appreciate that I can often find relevant points quickly without having to worry if the author's burying the lede.
But I also think that, despite its doubling down on claims of objective reporting, it still has its slants. It's definitely business-friendly and, on AI-related subjects, even more so. It isn't an explicit cheerleader the way some other sites tend to be, but it typically parrots whatever press releases the industry issues without questioning the validity of those statements. In the case of AI, that's understandable: who wants to listen to a 30-minute jeremiad from a software engineer on the benefits of the Rust language when your article is centered on the merits and drawbacks of AI-centered jobs? Still, it would help to gather viewpoints from people other than "tech leaders" whose salaries are tied to the success (or simulacrum of success) of AI vehicles.
This is the latest article to catch my ire. It's entitled "AI Agents will do the grunt work of coding." The article covers the release of GitHub's Copilot Agent and similar agentic wonders. To be honest, I don't know quite what distinguishes it from other versions of Copilot, because Copilot as a coding assistant has been around since before the phrase "ChatGPT" became a canker sore in everyone's mouth. I assume this version is "more autonomous," meaning some marketing division believes it can lie to its users with much greater freedom.
If I'm going to take my own fair and balanced perspective, I'll admit the article dials down the hype around AI tooling somewhat, but there were a few howlers I wanted to point out:
"Tech leaders have sent mixed messages on just how much work they see ahead for programmers."
Let's instead read this as "management hopes to God that it can continue to erode the salaries of these pesky workers who have had the leverage - and the audacity to use that leverage - to demand better working conditions than the average worker over the last decade." How dare you! I am Executitus. Look upon me and despair! (I'm really on a Romantic poet kick today. I've now exhausted every quote I had to memorize in high school.)
I've mentioned it before, and I'll state it again: I use coding assistants. They're useful. They save me scads of time I would otherwise spend walking through documentation that is often nothing more than a tautological exercise in masochism. If I'm really interested in what an ObjectMakingClass is, I don't want the documentation to tell me that it's a class used for making objects. I want to know what it considers an object, what happens when I make that object with this class, and use-case examples ranging from simple snippets to sensible production implementations.
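To make the gripe concrete, here's a minimal sketch of the difference. Everything in it - the keys, the exceptions, the make() method itself - is invented for illustration; only the tautology is real:

```python
# The tautological version, as too many docs actually read:
class ObjectMakingClass:
    """A class used for making objects."""


# The version I actually want (hypothetical, but you get the idea):
class ObjectMakingClass:
    """Builds validated order objects from raw request payloads.

    An "object" here means a dict containing the keys 'sku', 'qty',
    and 'customer_id'. Anything else raises ValueError, so callers
    fail fast instead of persisting garbage.

    Simple use:
        order = ObjectMakingClass().make(
            {"sku": "SSNwMP", "qty": 1, "customer_id": 42}
        )

    In production, inject your own schema validator rather than
    relying on this built-in key check.
    """

    REQUIRED_KEYS = {"sku", "qty", "customer_id"}

    def make(self, payload: dict) -> dict:
        """Return a validated copy of payload, or raise ValueError."""
        if not self.REQUIRED_KEYS <= payload.keys():
            raise ValueError(f"missing keys: {self.REQUIRED_KEYS - payload.keys()}")
        return dict(payload)
```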
A coding assistant will often provide an easy way to use the code and verify its basic usage on the fly, thus saving me somewhere around (licks finger and sticks it in the air) 15-25% of my time. Now, I suppose as Tech Management (and having been in that role, I can attest that we are subject to nonsensical, reductive reasoning), you could say that means we can get rid of 15-25% of our workforce. But, really, that's akin to lopping 15-25% off each of 100 humans and exclaiming, "Look, I made a human from the spare parts."
The gains made individually can't be applied holistically. Maybe, in cases where they genuinely reduce the work, hiring should slow in the long run. But software teams are chronically understaffed relative to the desired feature list, because writing software, like most careers, is deceptively harder than outsiders think it is. So maybe it's worthwhile treating that 15-25% savings as a surplus to improve your business elsewhere.
"But programs, unlike other kinds of language products, have a built-in pass-fail test: Either they run or they don't."
This is, in fact, one of the reasons I opted to go into software over sticking with traditional engineering. The feedback loop for experimentation is so much faster and cheaper. But this quote is a bit reductive. It's similar to dropping an apple and claiming we no longer need scientists to understand the complexities of gravity.
And it isn't the programs themselves that are pass-fail. It's the very concise statements within programs that are pass-fail. When those parts are composed, the whole becomes greater than the sum of its parts, and the systems those programs belong to begin to exhibit emergent behaviors, because complex systems interact in complex ways. You can no more break these components down into something logically digestible by an unreasoning machine than you can map all the cells in your body to determine how to optimize your own health.
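A minimal sketch of what I mean (the functions and numbers are invented for illustration): each piece below passes its own pass-fail test, yet the composition is wildly wrong.

```python
# Each piece is individually "pass-fail" correct.
def subtotal_cents(prices_cents):
    """Sum line-item prices. Input and output are in cents."""
    return sum(prices_cents)

def add_sales_tax(subtotal_dollars, rate=0.08):
    """Apply sales tax. Expects the subtotal in dollars."""
    return subtotal_dollars * (1 + rate)

# Each passes its own tiny test...
assert subtotal_cents([199, 250]) == 449
assert add_sales_tax(100.0) == 108.0

# ...but compose them naively and two "passing" parts produce one
# failing system: cents fed into a function expecting dollars.
total = add_sales_tax(subtotal_cents([199, 250]))
print(f"${total:.2f}")  # $484.92 for what should be a ~$4.85 order
```

Both functions "run." The program "runs." The customer still gets billed a hundredfold.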
I'll construct a thought exercise for the uninitiated that touches just one of the scenarios software engineers need to consider:
Say two orders come in for the last unit of Sunshine Sparkle, Now with More Pony! at the exact same time. How do you write a rule that determines which order wins? What if you don't have a rule in place, each order is unaware of the other, and they double-decrement the inventory? Now, assume you have -1 units of Sunshine Sparkle, Now with More Pony! listed in your system, and someone else attempts to place an order. What if your system can't handle numbers below 0? Does the system blow up? Does it hold indefinitely, waiting for the count to climb back into the natural number range, consuming resources that prevent others from moving forward until the whole site grinds to a halt? What if someone cancels an order, but the cancellation is never acknowledged? Now you actually have 1 SSNwMP! sitting in inventory, but the system still claims -1. What if a bug in the system does this kind of double counting on every 10th order?
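For the curious, here's a minimal sketch of just the first wrinkle, the double decrement. Everything here is hypothetical toy code; a real system would push this check into the database (say, a conditional UPDATE inside a transaction) rather than an in-process lock:

```python
import threading

inventory = {"SSNwMP": 1}  # the last unit of Sunshine Sparkle, Now with More Pony!
lock = threading.Lock()

def buy_naive(sku):
    # Check-then-act race: two threads can both observe qty == 1
    # before either decrements, leaving the count at -1.
    if inventory[sku] > 0:
        inventory[sku] -= 1
        return "sold"
    return "out of stock"

def buy_guarded(sku):
    # The check and the decrement happen atomically under the lock,
    # so exactly one of two simultaneous orders wins.
    with lock:
        if inventory[sku] > 0:
            inventory[sku] -= 1
            return "sold"
        return "out of stock"

# Two "simultaneous" orders for the last unit
# (swap in buy_guarded to close the race):
threads = [threading.Thread(target=buy_naive, args=("SSNwMP",)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(inventory)  # occasionally {'SSNwMP': -1} with buy_naive
```

The race window in a toy script is vanishingly small, so you may have to run it many times to see the -1. Across millions of real orders a day, though, "vanishingly small" becomes "guaranteed."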
You may say these are contrived examples, but something similar happens millions of times a day across the web, so it's difficult to determine what's pass-fail at such a scale.
"The biggest challenges in creating software tend to arise from poorly conceived specifications and misinterpretations of data, both of which are often rooted in confusion over human needs."
That is a big challenge, but my block of text above shows it isn't just a poorly conceived spec that causes headaches. Nothing in the scenarios above would likely appear in even a Product Manager's well-written spec, because they're not functional considerations. They're either taken as givens or are outside the ken of the spec writer.
Let's take a more tangible example to illustrate the complexity that exists outside of the specification. Building architects compose ideas for beautiful and, hopefully, functional buildings. Engineers get annoyed with architects because that eye-catching atrium they've designed has no way to functionally bear the load. Construction workers get annoyed with engineers because the new load-bearing columns they've designed would require driving supports all the way through the Earth's core to hold the weight. Sure, I'm using a silly example, and in this case both the architect and the engineer will know better, but often the practical considerations need to be yielded to the, well, practitioners. Only a trivial system with no value can be designed with no regard to implementation.
Which leads me to...
"But software developers who excel at navigating the boundaries between human desire and machine capability should continue to find themselves in demand."
I understand the stereotypes about software engineers. Hell, I even played into them above with my little Rust quip. But this is literally what all software developers do today. It's been the definition of a good engineer since GeoCities sites dotted our virtual world, if not since punch cards roamed the planet. This isn't a new skill or some finer distinction among engineers waiting to be mined from the AI mediocrity we're so happily careening towards as a collective. It's just a lazy way to restate that maybe AI won't meet all of our needs, without coming clean and admitting as much.
Until next time, my human and robot friends.