For those of us who started with QBASIC and .bat files, then moved on to HTML and Notepad, applications like Dreamweaver and FrontPage were initially sort of an insult. “How dare you try to make what I do easier?!” Of course we eventually tried them, because, well, that’s what we do: we experiment with new stuff. But when I looked at the code that these WYSIWYG apps created, most often it was pure shit. And time and time again, with each new UI-based page-builder app that promised “You don’t need a web developer anymore”, the code it created continued to be, in my opinion, pure shit.
But that’s not to say it wasn’t usable…
If you are trying to create an enterprise-level SaaS product, a “builder” app might be able to create an initial POC, possibly even an MVP, but it is not going to be usable for the final, push-to-prod, ask-customers-for-money code. However, if you are a small business owner who just needs an online presence so people can find and contact you (think local dog walker or neighborhood florist), what you can create with those builders might be completely fine! There is a good chance there will be edge cases where some devices display all or part of a page oddly, or something doesn’t work perfectly, but for those small business owners, maybe that is… okay? You get what you pay for, and if that is all you can afford, then maybe it is good enough.
And so, we web developers did lose some business, as a certain tier of customers were now able to build those types of pages for themselves. Shit or not, they can create what they need. More advanced projects, with more advanced features like contact forms or appointment calendars, would still require us, right? But as AI takes center stage, we find ourselves caught up in the next wave of “You don’t need a web developer anymore”… You just tell the AI what you want to build and push it to prod, right? What could go wrong?
Well, it turns out quite a lot. I have yet to see a single demo work correctly on the first try. But to be fair, I don’t think I have ever coded anything that worked exactly the way I wanted, sans errors, on the first go-round, either. The difference is that when errors pop up in code that I wrote myself, I at least know where to start looking to solve the problem. With AI, as with WYSIWYGs, making changes stands a very good chance of just making things worse, including breaking things that worked just fine before you tried to fix that error.
Now, some of you might say “That is the fault of the prompt, not the AI; garbage in, garbage out”, and that might be fair. AI doesn’t know everything we know, nor has it had all the experiences that we have had, so it can’t have all the built-in guardrails that we have. If we do not include in our prompts all of the boundaries, rules, regulations, cautions, pitfalls, workarounds, etc. that we have collected over the years/decades, then our AI is likely to make the same mistakes we made. It would have to make those mistakes, and correct them, in order to learn, just as we learned. And so, I guess we are supposed to keep using AI, teaching it and correcting it, so that it gets better, learns everything we know, learns from its mistakes, and stops making those mistakes, right?
But AI can never know everything we know, and it cannot avoid mistakes the way we do, because it cannot infer: we do not need to make every mistake ourselves in order to learn how to avoid future mistakes; some mistakes can teach us how to avoid other mistakes, even when the two are not identical, and I have yet to witness AI do that.
And I think this is my biggest gripe: historically, when developers grabbed hold of a new tool and ran with it, it was because that tool solved a problem, and solved it reliably, making us better at our jobs. Notepad++ didn’t work just fine sometimes, then fail to understand what we were typing other times. jQuery didn’t perform DOM-crawling or Ajax calls just fine sometimes, then completely bugger them up other times. DevTools didn’t record page loads just fine sometimes, then miss or add resources in the Network tab other times. These were some of the monumental tools that took our industry by storm in their day, and for good reason: they solved a problem, reliably, making us more efficient and better at our jobs, and so we fell in love with them and used them like crazy! And how did those products earn that? They were tested thoroughly and not released to us until they worked reliably. But AI has not done this. Instead, we are the testing ground; we are QA. And we are being asked/told to use it, and to do this training, on the clock, as part of our jobs, within our existing deadlines.
Certainly, AI is impressive. It can perform amazing feats of engineering and processing, outputting code that might take a developer days or weeks to create and test. But it also might just output pure shit. And it is up to us to figure out which one we got back, and what to do about it. AI promises to solve all the world’s problems with just a few prompts. But what is being created? Does it work? Correctly?? If you don’t know how the code was created, or how it does what it does, how can you be sure it is reliable? And what if you need to make changes? How can you be sure your changes don’t break something else? I can hear some of you saying “That’s why we have tests”, and yes, that might catch things and prevent errors. But it also might not. <sarcasm>I guess we can always ask AI to check for us…</sarcasm>
Maybe training AI (and it training us how to prompt it) is just going to take some time. I just wish it weren’t being forced on us so forcefully. I wish it had been better trained before we had to start using it, so it had better guardrails out of the gate. I wish it didn’t hallucinate with one breath, then totally own up to it and correct itself with the next. And I wish we had time to learn to use it better before we had to use it. Maybe we just need an extremely robust rule book to govern the boundaries and expectations? Asimov’s Three Laws seem like a good place to start…
There are a lot of other reasons to hate AI that I haven’t even gotten into, such as: environmental concerns, as AI server farms eat electricity at alarming rates; the jobs that will be lost to it, justified or not, as companies use “AI can do it” as their excuse for reducing staff in ridiculous numbers; concerns over where the next generation of junior developers will come from, as we lean on AI to do their jobs for us; concerns over who will review and maintain the code that AI generates once senior developers start leaving the workforce and we don’t have any junior developers to take their place; and more.
I know there are people who still use WYSIWYG-type apps, mostly for the same things they were used for decades ago (i.e. small businesses), but we “true” web developers turned our backs on them long, long ago. We quickly realized we could do better with our own hands. (And many of us have since spent countless hours untangling the pure-shit code they generated.) Now, of course, all of those apps are adding the letters “AI” to their Web Builders so they can be trendy, and future generations of non-developers can continue to create their own pages. Is the code any better? Will it be reliable? Will AI have better staying power than WYSIWYG apps did? Or will we be back in a few years to clean up all of the AI-generated code, too? I know the term Vibe Coding Cleanup Specialist already exists, and entire businesses exist to perform that exact task. Sounds like a terrible job to me. But maybe that is where the next generation of junior devs will come from: cleaning up the code left by their AI predecessors?
Happy <s>coding</s> AIing,
Atg