[1]Read the Tea Leaves – Software and other dark arts, by Nolan Lawson

AI ambivalence

Posted April 2, 2025 by Nolan Lawson in [10]Machine Learning, [11]NLP. [12]24 Comments

I’ve avoided writing this post for a long time, partly because I try to avoid controversial topics these days, and partly because I was waiting to make my mind up about the current, all-consuming, conversation-dominating topic of generative AI. But Steve Yegge’s [13]“Revenge of the junior developer” awakened something in me, so let’s go for it.

I don’t come to AI from nowhere. Longtime readers may be surprised to learn that I have a master’s in [14]computational linguistics, i.e. I studied this kind of stuff 20-odd years ago. In fact, two of the authors of the famous [15]“stochastic parrot” paper were folks I knew at the time – Emily Bender was one of my professors, and Margaret Mitchell was my lab partner in one of our toughest classes (sorry my Python sucked at the time, Meg).

That said, I got bored of working in AI after grad school, and quickly switched to general coding. I just found that “feature engineering” (which is what we called training models at the time) was not really my jam. I much preferred to put on [16]some good coding tunes, crank up the IDE, and bust out code all day.

Plus, I had developed a dim view of natural-language processing technologies, largely informed by my background in (non-computational) linguistics as an undergrad. In linguistics, we were taught that the human mind is a wondrous thing, and that Chomsky had conclusively shown that humans have a natural language instinct. The job of the linguist is to uncover the hidden rules in the human mind that govern things like syntax, semantics, and phonology (e.g. why the “s” in “beds” is pronounced like a “z,” unlike in “bets,” due to the voicing of the final consonant).

Then when I switched to computational linguistics, suddenly the overriding sensation I got was that everything was actually about number-crunching, and in fact you could throw all your linguistics textbooks in the trash and just let gobs of training data and statistics do the job for you. “Every time I fire a linguist, the performance goes up,” as [17]a famous computational linguist said.

I found this perspective belittling and insulting to the human mind, and more importantly, it didn’t really seem to work. Natural-language processing technology seemed stuck at the level of [18]support vector machines and [19]conditional random fields, hardly better than the Markov models in your iPhone 2’s autocomplete. So I got bored and disillusioned and left the field of AI.

Boy, that AI thing sure came back with a vengeance, didn’t it?

Still skeptical

That said, while everybody else was reacting with either horror or delight at the tidal wave of gen-AI hype, I maintained my skepticism. At the end of the day, all of this technology was still just number-crunching – brute force trying to approximate the hidden logic that Chomsky had discovered. I acknowledged that there was some room for statistics – Peter Norvig’s essay mentioning [20]the story of an Englishman ordering an “ale” and getting served an “eel” due to [21]the Great Vowel Shift still sticks in my brain – but overall I doubted that mere stats could ever approach anything close to human intelligence.
Today, though, philosophical questions of what AI says about human cognition seem beside the point – these things can get stuff done. Especially in the field of coding (my cherished refuge from computational linguistics), AIs now dominate: every IDE assumes I want AI autocomplete by default, and I actively have to hunt around in the settings to turn it off.

And for several years, that’s what I’ve been doing: studiously avoiding generative AI. Not just because I doubted how close to [22]“AGI” these things actually were, but also because I just found them annoying. I’m a fast typist, and I know JavaScript like the back of my hand, so the last thing I want is some overeager junior coder grabbing my keyboard to mess with the flow of my typing. Every inline-coding AI assistant I’ve tried made me want to gnash my teeth – suddenly, instead of writing code, I’m being asked to constantly read code (which, as everyone knows, is less fun). Plus, the suggestions were rarely good enough to justify the aggravation. So I abstained.

Later I read Baldur Bjarnason’s excellent book [23]The Intelligence Illusion, and this further hardened me against generative AI. Why use a technology that 1) dumbs down the human using it, 2) generates hard-to-spot bugs, and 3) doesn’t really make you much more productive anyway, once you consider the extra time spent reading, reviewing, and correcting its output? So I put in my earbuds and kept coding.

Meanwhile, as I was blissfully coding away like it was ~2020, I looked outside my window and suddenly realized that the tidal wave was approaching. It was 2025, and I was (seemingly) the last developer on the planet not using gen-AI in their regular workflow.

Opening up

I try to keep an open mind about things. If you’ve read this blog for a while, you know that I’ve sometimes espoused opinions that I later completely backtracked on – my post from 10 years ago about [24]progressive enhancement is a good example, because I’ve almost completely swung over to the progressive-enhancement side of things since then. My more recent [25]“Why I’m skeptical of rewriting JavaScript tools in ‘faster’ languages” also seems destined to age like fine milk. So maybe I should just be relieved that I didn’t write a big, bombastic takedown of generative AI a few years ago, because hoo boy.

I started using Claude and Claude Code a bit in my regular workflow. I’ll skip the suspense and just say that the tool is way more capable than I would ever have expected. The way I can use it to interrogate a large codebase, or generate unit tests, or even “refactor every callsite to use such-and-such pattern,” is utterly gobsmacking. It also nearly replaces StackOverflow, in the sense of “it can give me answers that I’m highly skeptical of” – i.e. it’s not that different from StackOverflow, but boy is it faster.

Here’s the main problem I’ve found with generative AI, and with “vibe coding” in general: it completely sucks the joy out of software development for me.

Imagine you’re a Studio Ghibli artist. You’ve spent years perfecting your craft, you love the feeling of the brush or pencil in your hand, and your life’s joy is to make beautiful artwork to share with the world. And then someone tells you gen-AI can [26]just spit out My Neighbor Totoro for you. Would you feel grateful? Would you rush to drop your art supplies and jump head-first into the role of AI babysitter?

This is how I feel using gen-AI: like a babysitter. It spits out reams of code, I read through it and try to spot the bugs, and then we repeat.
Although of course, as [27]Cory Doctorow points out, the temptation is to not even try to spot the bugs, and instead just let your eyes glaze over and let the machine do the thinking for you – the [28]full dream of vibe coding. I do believe that this is the end state of this kind of development: “giving in to the vibes,” not even trying to use your feeble primate brain to understand the code that the AI is barfing out, and instead letting other barf-generating “agents” evaluate its output for you.

I’ll accept that maybe, maybe, if you have the right orchestra of agents that you’re conducting, then you can cut down on the bugs, hallucinations, and repetitive boilerplate that gen-AI seems prone to. But whatever you’re doing at that point, it’s not software development, at least not the kind that I’ve known for the past ~20 years.

Conclusion

I don’t have a conclusion. Really, that’s my current state: ambivalence. I acknowledge that these tools are incredibly powerful; I’ve even started incorporating them into my work in certain limited ways (low-stakes code like POCs and unit tests seems like an ideal use case). But I absolutely hate them.

I hate the way they’ve taken over the software industry, I hate how they make me feel while I’m using them, and I hate the human-intelligence-insulting postulation that a glorified Excel spreadsheet can do what I do, but better.

In one of his podcasts, Ezra Klein said that he thinks the “message” of generative AI (in [29]the McLuhan sense) is this: “You are derivative.” In other words: all your creativity, all your “craft,” all of that intense emotional spark inside of you that drives you to dance, to sing, to paint, to write, or to code, can be replicated by the robot equivalent of [30]1,000 monkeys typing at 1,000 typewriters. Even if it’s true, it’s a pretty dim view of humanity, and a miserable message to keep pounding into your brain during 8 hours of daily software development.

So this is where I’ve landed: I’m using generative AI, probably just “dipping my toes in” compared to what maximalists like Steve Yegge promote, but even that little bit has left me feeling less excited than defeated. I am defeated in the sense that I can’t argue strongly against using these tools (they bust out unit tests way faster than I can, and can I really say that I was ever lovingly-crafting my unit tests?), and I’m defeated in the sense that I can no longer confidently assert that brute-force statistics can never approach the ineffable beauty of the human mind that Chomsky described. (If they can’t, they’re sure doing a good imitation of it.)

I’m also defeated in the sense that this very blog post is just more food for the AI god. Everything I’ve ever written on the internet (including here and on GitHub) has been eagerly gobbled up into the giant AI [31]katamari and is being used to happily undermine me and my fellow bloggers and programmers. (If you ask Claude to generate a “blog post title in the style of Nolan Lawson,” it can actually do a pretty decent job of mimicking my shtick.) The fact that I wrote this entire post without the aid of generative AI is cold comfort – nobody cares, and likely few have gotten to the end of this diatribe anyway, other than the robots.

So there’s my overwhelming feeling at the end of this post: ambivalence. I feel besieged and horrified by what gen-AI has wrought on my industry, but I can no longer keep my ears plugged while the tsunami roars outside.
Maybe, like a lot of other middle-aged professionals suddenly finding their careers upended at the peak of their creative power, I will have to adapt or face replacement. Or maybe my best bet is to continue to zig while others are zagging, and to try to keep my coding skills sharp while everyone else is “vibe coding” a monstrosity that I will have to debug when it crashes in production someday. I honestly don’t know, and I find that terrifying. But there is some comfort in the fact that I don’t think anyone else knows what’s going to happen either.

24 responses to this post.

1. Posted by [32]Miguelito on [33]April 2, 2025 at 10:49 AM

   This is a profound essay just as an essay, but it makes a very important point about the future. Everyone is asking “what now”. You have answered “I am defeated”. That is one step on the path to an answer.

   The first response would seem to be to limit the usage of AI… That won’t work. Someone else will use it. Then limit it from creating art, reserve that for humans. That’s better, but it doesn’t say much about what humans should do. All through Western history, society has been oriented around a civilization driven by caste-based occupations. Our identity has come from our work. Before that, we identified as a hunter, or fisher, or stone lapper… What now?

   Ahhh, I think I may have the answer. You see, I’ve been studying this question for over 50 years… how humanity can adapt genetically and strategically to a post-tribal ecology. I’m best at biology, but when studying survival I realized that a lot of the answer exists in philosophy, something crowded out by STEM. (Science is great for creating wealth and power, but not so good for providing understanding.) I have long looked for the answer and may have found a key … meme: in terms of biology, the purpose of an individual is to survive. That means many things, including the survival of civilization, which human survival depends on. Now that may not seem like the solution, but it is another step on the way. That is biology converted to philosophy, a survival strategy. It is a step on the path to answering. That leaves the path open for finding answers in the future.

2. Posted by classyswiftly1d8dff3178 on [35]April 2, 2025 at 11:23 AM

   “can I really say that I was ever lovingly-crafting my unit tests?”

   More so than most, at least!

3. Posted by Theodore Brown on [37]April 2, 2025 at 11:54 AM

   Thank you for writing this! It’s pretty close to my own thinking and experience.

   “The fact that I wrote this entire post without the aid of generative AI is cold comfort – nobody cares”

   Maybe I’m in the minority, but I actually care a lot about this. Whenever I see people posting things that were spit out by an LLM, that’s when I don’t care. I don’t have the slightest interest in reading the “thoughts” of generative AI, since there isn’t really any creative thinking in it. It’s just a derivative of the prompt, training data, and model weights. I am very interested in reading original content that comes from the reasoning of a human mind. That’s what has the potential for novel ideas and innovations.

4. Posted by Manuel Jasso on [39]April 2, 2025 at 12:11 PM

   Great post, Nolan – I felt very connected, and I am sure you’re representing a large number of developers. I also see myself as a skeptic, and I have another dimension to offer to your dissertation: I think “intelligence” is an overloaded term and we need to always use a qualifier.
   With this, there is evidence that “artificial intelligence” resembles “human intelligence”, but I think only time will tell how close they are, or if they can be considered “the same”. As a skeptic, I don’t think we really understand “human intelligence”, and yet we get excited that “artificial intelligence” is sooooo close! And I will add that, from my perspective, “human intelligence” is only possible in “biological beings”, and in this area we really have no idea how to “make” one…

   Posted by [41]Nolan Lawson on [42]April 2, 2025 at 2:26 PM

      Thanks, Manuel! I agree that the “intelligence illusion” is one of the things that muddies the debate. I do not think these tools approach human intelligence, but I’m also not sure it matters. A plane doesn’t flap its wings like a bird, but it still flies. But I do think there is an [43]“Eliza effect” here that is multiplying the hype and expectations.

5. Posted by adamtreineke on [45]April 2, 2025 at 1:48 PM

   “I’m also defeated in the sense that this very blog post is just more food for the AI god.”

   Patrick McKenzie’s (patio11) take on this, paraphrased somewhat because I don’t remember quite where he said it: “Writing today lets you modify the weights of all AI models going forward.” I love the implication that my 17 years of Reddit and Twitter posts are making the AI gods a little funnier, a little more sarcastic, and a bit more respectful, patient and thoughtful. (And bestowing a tendency to write 140-character sentences and misuse commas.)

   I do appreciate the lack of generative AI while authoring this. I think you either said once (or I saw it on a blog post) that writing was a way of really working through ideas. That really stuck with me, and there is a respect and confidence that I have as a reader when I see that someone has wrestled with an idea versus just copy/pasted whatever their prompt decided was appropriate. Thanks Nolan!

6. Posted by Keen on [47]April 2, 2025 at 1:50 PM

   This well echoes my own background and experience and feelings about “AI”. Ambivalence is the right word. I once tried a vibe coding tool and just felt overwhelmed with the idea of *maintaining* the monstrosity it spat out (presumably we’re going to see VibeOps and VibeSec and VibePhishing on their way). I do find LLMs useful in small portions, used sparingly and with only the context I explicitly allow them to see. But I shudder at the idea of just telling the computer to make a site that should handle logins and personal data and navigate the GDPR landscape and be maintainable across 10 years of JavaScript framework updates.

7. Posted by [49]Kerrick’s Wager: on the Future of Manual Programming - Kerrick Long’s Blog on [50]April 2, 2025 at 5:57 PM

   […] cannot know whether Steve Yegge is right or wrong, and I do not know enough to make a prediction. If the future of productivity is agentic software […]

8. Posted by [52]Kerrick Long on [53]April 2, 2025 at 5:57 PM

   “Maybe, like a lot of other middle-aged professionals suddenly finding their careers upended at the peak of their creative power, I will have to adapt or face replacement. Or maybe my best bet is to continue to zig while others are zagging, and to try to keep my coding skills sharp while everyone else is ‘vibe coding’ a monstrosity that I will have to debug when it crashes in production someday. I honestly don’t know, and I find that terrifying. But there is some comfort in the fact that I don’t think anyone else knows what’s going to happen either.”
   I don’t know either, but [54]I’m taking a “Pascal’s Wager” approach to it. I’m going to learn to use these tools so that if Yegge is right I’ll still have a job in software. But learning these new skills won’t dull my existing software repertoire, so if Yegge is wrong all I’ve lost are a few nights and weekends, and some cash fed into the token-generating machines.

9. Posted by [56]nevf on [57]April 2, 2025 at 11:08 PM

   I love writing software and have been doing it for a very long time. There is joy and immense satisfaction in seeing what you type on the keyboard come to life on the big or small screen. In fact, the older I get the more I enjoy the entire process. I’m also into gym, golf, swimming, etc., and these activities keep my body active and strong. It is, however, the practice of writing software and developing applications that keeps my brain incredibly healthy and provides a good balance to my physical pursuits.

   I’d be very surprised if AI wouldn’t help me be more productive and likely produce better code. However, I don’t have the interest or wherewithal to get on board. Let me make my own mistakes and continue my journey learning new things, instead of wasting brain cells trying to work out what is wrong with AI-generated code or how it does what it is supposed to do.

   I hark back to the days of Visual Basic, which opened a whole new world to people to write code and apps. We got so much crapware that finding the quality apps was a challenge. Look at the AI-written articles, most of which are regurgitated random content with often contradictory information. A complete waste of time and ink. Here we go again! I have no doubt that the plethora of AI apps that apparently took only two days to write and are selling like hotcakes fall into the same Visual Basic category of days gone by. Eventually these folks will learn that the life of a software developer doesn’t come that easily.

   Or then again, maybe I’m just too old and stuck in my ways to be able to drink the latest and greatest Kool-Aid. BTW Nolan, great post.

10. Posted by bebraw on [59]April 3, 2025 at 12:50 AM

   I’ve been thinking about the same thing. It’s definitely a different field with the advent of the new AI-based tools, and it must be quite different to arrive in the field now than just a few years ago. I hope the development doesn’t erode deep thinking skills, as you’ll need those to solve hard problems. All of the work cannot be outsourced to the current solutions.

11. Posted by [61]Stolzenhain on [62]April 3, 2025 at 7:21 AM

   Great article and great job at describing the actual vibes at hand. Your arguments shed light on other aspects, like playful education. Programming feels less engaging, and is less likely to attract new talent, when the newcomer is welcomed not with “Look, Y is happening because of X – you can do it yourself, you have the tools at hand,” but rather with “As an experienced programmer, I have little insight myself as to why Y happens when prompted for X.”

   Posted by [64]Nolan Lawson on [65]April 3, 2025 at 9:50 AM

      That’s a good point I hadn’t considered! Yes, it’s like the difference between Bob Ross empowering people to draw “happy little clouds” and having an AI just draw the art for you. The latter is likely to attract people for whom the art is just a stepping stone to something else (maybe more art! but probably not brushstrokes).
12. Posted by [67]Jakub Fiala on [68]April 3, 2025 at 2:48 PM

   THANK YOU for writing this article – you have pretty much summed up how I’ve been feeling throughout this strange period.

   One thought that helped me cope was about cooking. I know people who have more or less stopped cooking, because they live in a big city where a delivery service will take 20 min to bring you delicious fresh food that you couldn’t possibly have cooked yourself, and it will only cost about 2-3x what the ingredients would. Yes, we have the technology – as Steve Yegge would say, the future is now. Plus, this way, you’ve saved lots of time you can now spend…

   … at this point an LLM would probably go on to say “being productive” or “relaxing” or “making art”. Those are all things I’d love to do! I can definitely afford to eat takeaway almost every day, or at least have the groceries delivered, so I asked myself – why do I still make 3 meals a day almost every day of the week?

   I think the answer is simply that I like it. Maybe I’m sticking to the “old ways”, maybe I’m wasting time; I’m definitely being very inefficient if you look at it mathematically. I just can’t help doing things I like, which include cooking, JavaScript, solving problems, reading and understanding, thinking… I like thinking! It’s the last thing I’m going to outsource.

   Affordable food delivery has existed for years, but even wealthy people still choose to cook at home. I think something similar might happen with this alleged “tsunami” of agents.

13. Posted by [70]dan on [71]April 4, 2025 at 5:48 AM

   Maybe this will bring some solace: [72]https://serce.me/posts/2025-31-03-there-is-no-vibe-engineering

14. Posted by 4zmite on [74]April 5, 2025 at 12:52 AM

   I recognized the feeling from a moment in my youth. Back then, I loved playing the game “UFO: Enemy Unknown.” The game involved building a global defense network to ward off an alien invasion. Building bases, researching new technologies, and buying weapons were all part of the strategy mechanics. At the same time, I was beginning to explore how software was built. Using a hex editor and a disassembler, I would pick apart things to see how they worked. This was another kind of game that I thoroughly enjoyed.

   One day, it hit me: the amount of money I had in the game must somehow be stored in the save files! I could use my hex editor to change it. Sure enough, my plan worked. I awarded myself a generous donation, and for a few hours, I was thrilled. I could buy all the cool stuff I couldn’t afford before, and I had no problem fending off the pesky alien invasion. Aliens were no match for my hex editor.

   The next day, I stopped playing the game. It wasn’t fun anymore. It left me unsatisfied. Sure, I would win every time, but I didn’t enjoy it. Not only that, even playing without cheating lost its shine. Why bother playing when I knew there was an easier way to win?

   Excerpt from [75]https://4zm.org/2025/04/05/bitter-prediction.html

15. Posted by [77]Marc on [78]April 5, 2025 at 2:47 AM

   I was also very skeptical of AI and similarly surprised at how it can predict the code I want to write. True, the code is often wrong, but I can pick that up quickly. The real benefit is that it saves me hours of googling and trawling Stack Overflow (whose search, by the way, sucks badly). Type your Stack Overflow question into ChatGPT first and bam!

16. Posted by Mark S on [80]April 5, 2025 at 10:53 PM

   Please don’t give in.
   “You must use it or get left behind” is not real, it’s marketing. It’s like Nathan Fielder’s doink-it: you don’t need to buy a toy to prove you’re not a baby. And you don’t need to get locked into tools your boss will have to pay $200 a month for, because the company that made them is losing billions a year.

   They’ve ruined search results, they’re ruining the environment, they have no respect for human creativity, nor creators themselves. They don’t care about your license. They don’t care about your infra costs and will DoS you in a heartbeat, no matter what you put in robots.txt. GenAI companies want to extract value from us, repackage it and sell it. It’s disgusting.

   It’s not true that nobody cares that you didn’t use AI for this post. I care, and many other people in the comments care. Another human made an effort to create something? How wonderful! It’s very different from the excretions of LLMs.

   And what if the models get so good we won’t be able to tell the difference? What if they get better than humans at this? Well, humans still play and watch chess. In fact, it’s more popular than ever.

   Keep being human. Please. Cheers

   Posted by [82]Nolan Lawson on [83]April 6, 2025 at 8:07 AM

      Thanks for the kind words. I’ve never found the environmental argument against gen-AI very strong, mostly because I haven’t seen strong evidence that it’s worse for emissions than anything else in computing, or heck, [84]anything else in general (having a child seems to be the worst thing you can do by far). The idea that a small group of advocates can abstain from gen-AI and stop the momentum of the current boom also seems pretty unlikely to me at this point – this is the metaphor I gave of plugging my ears while a tidal wave threatens to hit my house. GitHub says [85]92% of developers are using AI coding tools; the effect on the industry seems pretty much guaranteed at this point.

      That said, I respect your decision not to use a technology if you find it distasteful. I’m a vegetarian partly for ethical and environmental reasons, although I have no illusion that what I’m doing is anything but a drop in the bucket compared to the world’s soaring demand for meat. Same with why I prefer to ride my bike or take the bus rather than drive whenever possible. Sometimes you have to do things because you know they’re right, not because they’ll make a big difference. Gen-AI just hasn’t reached that level of urgency for me.

17. Posted by [87]March 2025 month notes | Echo One on [88]April 13, 2025 at 3:08 PM

   […] AI Ambivalence, a more sober assessment of the impact and utility of the current state of AI-assisted coding, and a piece that resonated with me for asking the question about whether this iteration of coding retains the same appeal […]

18. Posted by [90]AI에 대한 양가적 감정: 열광과 회의 사이에서 균형 찾기 (Ambivalent feelings about AI: finding a balance between enthusiasm and skepticism) - AI Sparkup on [91]April 13, 2025 at 7:30 PM

   […] AI ambivalence | Read the Tea Leaves – Nolan Lawson […]

19. Posted by joshuapinter on [93]April 29, 2025 at 11:31 AM

   “try to keep my coding skills sharp while everyone else is ‘vibe coding’ a monstrosity that I will have to debug when it crashes in production someday.”

   That’s honestly my hope and intention too. I have about 10 years left in my programming career, and I hope there’s still value in what we bring to the table during this time and for the decades to come after that. It can’t all just boil down to derivative work.
   Unless AI starts creating new libraries, languages and concepts, there’s always going to be a need for trend setters and trail blazers.

   Good luck. To all of us.

20. Posted by jhnnns on [95]May 2, 2025 at 3:08 AM

   Great read! It perfectly summarizes my feelings about coding and AI :) However, I recently changed my mind about AI when I discovered the joy of using agents for doing tedious refactoring work. For me, the fun of coding is not typing. What motivates me most is seeing when functions and data mesh perfectly like a cogwheel. And AI can help me a lot with that, as long as certain coding standards are met.

   In the end, it also boils down to the question of whether you just care about the output of the software or also about the code behind it. The former works for short-term, low-stakes projects, but I doubt that it’ll work for mission-critical code. AI can generate mission-critical code, but you need to read and understand it. And it often also requires fixing and improving.

21. Posted by ohay on [97]May 2, 2025 at 1:31 PM

   well said, can sympathize – ambivalence’s a perfectly appropriate response to a tidal wave’s inevitability (why this is manufactured ‘inevitability’ is another question~)

   some thoughts triggered by your post: reminds me of the web and how it felt like there was such an artificially produced ‘need’, and also a sense of inevitability and a sea change (i mean: who really needed to email someone across the country a picture?). it was a huge solution that created its own demand after the fact. ‘the ai’ (the commercially available aspect) has that feel.

   the other thing that came to mind for some reason: ai development’s probably quite… hampered by whatever philosophical criteria and conscious or unconscious decisions by the early planners underly all this. so whatever’s involved that anthropomorphizes the goals and mechanisms will also, inevitably, cause some of these dynamics you describe. we (collectively) didn’t need to make such a close competitor.

   probably not describing the thought too well – but good post, and thanks for sharing!
References: [1] https://nolanlawson.com/ [4] https://nolanlawson.com/ [5] https://nolanlawson.com/apps/ [6] https://nolanlawson.com/code/ [7] https://nolanlawson.com/talks/ [8] https://nolanlawson.com/about/ [9] https://nolanlawson.com/2025/01/18/goodbye-salesforce-hello-socket/ [10] https://nolanlawson.com/category/machine-learning/ [11] https://nolanlawson.com/category/nlp-2/ [12] https://nolanlawson.com/2025/04/02/ai-ambivalence/#comments [13] https://sourcegraph.com/blog/revenge-of-the-junior-developer [14] https://www.compling.uw.edu/ [15] https://dl.acm.org/doi/10.1145/3442188.3445922 [16] https://en.wikipedia.org/wiki/Endless_Fantasy [17] https://en.wikiquote.org/wiki/Fred_Jelinek [18] https://en.wikipedia.org/wiki/Support_vector_machine [19] https://en.wikipedia.org/wiki/Conditional_random_field [20] https://norvig.com/chomsky.html [21] https://en.wikipedia.org/wiki/Great_Vowel_Shift [22] https://en.wikipedia.org/wiki/Artificial_general_intelligence [23] https://illusion.baldurbjarnason.com/ [24] https://nolanlawson.com/2016/10/13/progressive-enhancement-isnt-dead-but-it-smells-funny/ [25] https://nolanlawson.com/2024/10/20/why-im-skeptical-of-rewriting-javascript-tools-in-faster-languages/ [26] https://carly.substack.com/p/everything-is-ghibli [27] https://pluralistic.net/2024/10/30/a-neck-in-a-noose/ [28] https://vibemanifesto.org/ [29] https://en.wikipedia.org/wiki/The_medium_is_the_message [30] https://www.youtube.com/watch?v=loMEF18Ir4s [31] https://en.wikipedia.org/wiki/Katamari_Damacy [32] http://zagwap.com/Bio/index.html [33] https://nolanlawson.com/2025/04/02/ai-ambivalence/?ck_subscriber_id=1881659020#comment-237911 [34] https://nolanlawson.com/2025/04/02/ai-ambivalence/?replytocom=237911#respond [35] https://nolanlawson.com/2025/04/02/ai-ambivalence/?ck_subscriber_id=1881659020#comment-237912 [36] https://nolanlawson.com/2025/04/02/ai-ambivalence/?replytocom=237912#respond [37] https://nolanlawson.com/2025/04/02/ai-ambivalence/?ck_subscriber_id=1881659020#comment-237913 [38] https://nolanlawson.com/2025/04/02/ai-ambivalence/?replytocom=237913#respond [39] https://nolanlawson.com/2025/04/02/ai-ambivalence/?ck_subscriber_id=1881659020#comment-237914 [40] https://nolanlawson.com/2025/04/02/ai-ambivalence/?replytocom=237914#respond [41] http://nolanlawson.com/ [42] https://nolanlawson.com/2025/04/02/ai-ambivalence/?ck_subscriber_id=1881659020#comment-237917 [43] https://en.wikipedia.org/wiki/ELIZA_effect [44] https://nolanlawson.com/2025/04/02/ai-ambivalence/?replytocom=237917#respond [45] https://nolanlawson.com/2025/04/02/ai-ambivalence/?ck_subscriber_id=1881659020#comment-237915 [46] https://nolanlawson.com/2025/04/02/ai-ambivalence/?replytocom=237915#respond [47] https://nolanlawson.com/2025/04/02/ai-ambivalence/?ck_subscriber_id=1881659020#comment-237916 [48] https://nolanlawson.com/2025/04/02/ai-ambivalence/?replytocom=237916#respond [49] https://kerrick.blog/articles/2025/kerricks-wager/ [50] https://nolanlawson.com/2025/04/02/ai-ambivalence/?ck_subscriber_id=1881659020#comment-237918 [51] https://nolanlawson.com/2025/04/02/ai-ambivalence/?replytocom=237918#respond [52] http://kerrick.blog/ [53] https://nolanlawson.com/2025/04/02/ai-ambivalence/?ck_subscriber_id=1881659020#comment-237919 [54]
https://kerrick.blog/articles/2025/kerricks-wager/ [55] https://nolanlawson.com/2025/04/02/ai-ambivalence/?replytocom=237919#respond [56] http://nevf.wordpress.com/ [57] https://nolanlawson.com/2025/04/02/ai-ambivalence/?ck_subscriber_id=1881659020#comment-237920 [58] https://nolanlawson.com/2025/04/02/ai-ambivalence/?replytocom=237920#respond [59] https://nolanlawson.com/2025/04/02/ai-ambivalence/?ck_subscriber_id=1881659020#comment-237922 [60] https://nolanlawson.com/2025/04/02/ai-ambivalence/?replytocom=237922#respond [61] https://studioagenturbuero.com/ [62] https://nolanlawson.com/2025/04/02/ai-ambivalence/?ck_subscriber_id=1881659020#comment-237923 [63] https://nolanlawson.com/2025/04/02/ai-ambivalence/?replytocom=237923#respond [64] http://nolanlawson.com/ [65] https://nolanlawson.com/2025/04/02/ai-ambivalence/?ck_subscriber_id=1881659020#comment-237924 [66] https://nolanlawson.com/2025/04/02/ai-ambivalence/?replytocom=237924#respond [67] https://fiala.space/ [68] https://nolanlawson.com/2025/04/02/ai-ambivalence/?ck_subscriber_id=1881659020#comment-237925 [69] https://nolanlawson.com/2025/04/02/ai-ambivalence/?replytocom=237925#respond [70] http://itswebtechtime.wordpress.com/ [71] https://nolanlawson.com/2025/04/02/ai-ambivalence/?ck_subscriber_id=1881659020#comment-237926 [72] https://serce.me/posts/2025-31-03-there-is-no-vibe-engineering [73] https://nolanlawson.com/2025/04/02/ai-ambivalence/?replytocom=237926#respond [74] https://nolanlawson.com/2025/04/02/ai-ambivalence/?ck_subscriber_id=1881659020#comment-237928 [75] https://4zm.org/2025/04/05/bitter-prediction.html [76] https://nolanlawson.com/2025/04/02/ai-ambivalence/?replytocom=237928#respond [77] http://cawoodm.wordpress.com/ [78] https://nolanlawson.com/2025/04/02/ai-ambivalence/?ck_subscriber_id=1881659020#comment-237930 [79] https://nolanlawson.com/2025/04/02/ai-ambivalence/?replytocom=237930#respond [80] https://nolanlawson.com/2025/04/02/ai-ambivalence/?ck_subscriber_id=1881659020#comment-237931 [81] https://nolanlawson.com/2025/04/02/ai-ambivalence/?replytocom=237931#respond [82] http://nolanlawson.com/ [83] https://nolanlawson.com/2025/04/02/ai-ambivalence/?ck_subscriber_id=1881659020#comment-237932 [84] https://iopscience.iop.org/article/10.1088/1748-9326/aa7541 [85] https://github.blog/news-insights/research/survey-reveals-ais-impact-on-the-developer-experience/ [86] https://nolanlawson.com/2025/04/02/ai-ambivalence/?replytocom=237932#respond [87] http://rrees.me/2025/04/13/march-2025-month-notes/ [88] https://nolanlawson.com/2025/04/02/ai-ambivalence/?ck_subscriber_id=1881659020#comment-237935 [89] https://nolanlawson.com/2025/04/02/ai-ambivalence/?replytocom=237935#respond [90] https://aisparkup.com/posts/1201 [91] https://nolanlawson.com/2025/04/02/ai-ambivalence/?ck_subscriber_id=1881659020#comment-237936 [92] https://nolanlawson.com/2025/04/02/ai-ambivalence/?replytocom=237936#respond [93] https://nolanlawson.com/2025/04/02/ai-ambivalence/?ck_subscriber_id=1881659020#comment-237939 [94] https://nolanlawson.com/2025/04/02/ai-ambivalence/?replytocom=237939#respond [95] https://nolanlawson.com/2025/04/02/ai-ambivalence/?ck_subscriber_id=1881659020#comment-237942 [96] https://nolanlawson.com/2025/04/02/ai-ambivalence/?replytocom=237942#respond [97] https://nolanlawson.com/2025/04/02/ai-ambivalence/?ck_subscriber_id=1881659020#comment-237943 [98] https://nolanlawson.com/2025/04/02/ai-ambivalence/?replytocom=237943#respond [99] https://nolanlawson.com/2025/04/02/ai-ambivalence/?ck_subscriber_id=1881659020#respond 
[107] https://akismet.com/privacy/ [108] https://nolanlawson.com/2025/04/02/ai-ambivalence/ [109] https://nolanlawson.com/2025/01/18/goodbye-salesforce-hello-socket/ [110] https://nolanlawson.com/2024/12/30/2024-book-review/ [111] https://nolanlawson.com/2024/12/01/avoiding-unnecessary-cleanup-work-in-disconnectedcallback/ [112] https://nolanlawson.com/2024/10/20/why-im-skeptical-of-rewriting-javascript-tools-in-faster-languages/ [113] https://nolanlawson.com/2025/04/ [114] https://nolanlawson.com/2025/01/ [115] https://nolanlawson.com/2024/12/ [116] https://nolanlawson.com/2024/10/ [117] https://nolanlawson.com/2024/09/ [118] https://nolanlawson.com/2024/08/ [119] https://nolanlawson.com/2024/07/ [120] https://nolanlawson.com/2024/03/ [121] https://nolanlawson.com/2024/01/ [122] https://nolanlawson.com/2023/12/ [123] https://nolanlawson.com/2023/08/ [124] https://nolanlawson.com/2023/01/ [125] https://nolanlawson.com/2022/12/ [126] https://nolanlawson.com/2022/11/ [127] https://nolanlawson.com/2022/10/ [128] https://nolanlawson.com/2022/06/ [129] https://nolanlawson.com/2022/05/ [130] https://nolanlawson.com/2022/04/ [131] https://nolanlawson.com/2022/02/ [132] https://nolanlawson.com/2022/01/ [133] https://nolanlawson.com/2021/12/ [134] https://nolanlawson.com/2021/09/ [135] https://nolanlawson.com/2021/08/ [136] https://nolanlawson.com/2021/02/ [137] https://nolanlawson.com/2021/01/ [138] https://nolanlawson.com/2020/12/ [139] https://nolanlawson.com/2020/07/ [140] https://nolanlawson.com/2020/06/ [141] https://nolanlawson.com/2020/05/ [142] https://nolanlawson.com/2020/02/ [143] https://nolanlawson.com/2019/12/ [144] https://nolanlawson.com/2019/11/ [145] https://nolanlawson.com/2019/09/ [146] https://nolanlawson.com/2019/08/ [147] https://nolanlawson.com/2019/06/ [148] https://nolanlawson.com/2019/05/ [149] https://nolanlawson.com/2019/02/ [150] https://nolanlawson.com/2019/01/ [151] https://nolanlawson.com/2018/11/ [152] https://nolanlawson.com/2018/09/ [153] https://nolanlawson.com/2018/08/ [154] https://nolanlawson.com/2018/05/ [155] https://nolanlawson.com/2018/04/ [156] https://nolanlawson.com/2018/03/ [157] https://nolanlawson.com/2018/01/ [158] https://nolanlawson.com/2017/12/ [159] https://nolanlawson.com/2017/11/ [160] https://nolanlawson.com/2017/10/ [161] https://nolanlawson.com/2017/08/ [162] https://nolanlawson.com/2017/05/ [163] https://nolanlawson.com/2017/03/ [164] https://nolanlawson.com/2017/01/ [165] https://nolanlawson.com/2016/10/ [166] https://nolanlawson.com/2016/08/ [167] https://nolanlawson.com/2016/06/ [168] https://nolanlawson.com/2016/04/ [169] https://nolanlawson.com/2016/02/ [170] https://nolanlawson.com/2015/12/ [171] https://nolanlawson.com/2015/10/ [172] https://nolanlawson.com/2015/09/ [173] https://nolanlawson.com/2015/07/ [174] https://nolanlawson.com/2015/06/ [175] https://nolanlawson.com/2014/10/ [176] https://nolanlawson.com/2014/09/ [177] https://nolanlawson.com/2014/04/ [178] https://nolanlawson.com/2014/03/ [179] https://nolanlawson.com/2013/12/ [180] https://nolanlawson.com/2013/11/ [181] https://nolanlawson.com/2013/08/ [182] https://nolanlawson.com/2013/05/ [183] https://nolanlawson.com/2013/01/ [184] https://nolanlawson.com/2012/12/ [185] https://nolanlawson.com/2012/11/ [186] https://nolanlawson.com/2012/10/ [187] https://nolanlawson.com/2012/09/ [188] https://nolanlawson.com/2012/06/ [189] https://nolanlawson.com/2012/03/ [190] https://nolanlawson.com/2012/02/ [191] https://nolanlawson.com/2012/01/ [192] https://nolanlawson.com/2011/11/ 
[193] https://nolanlawson.com/2011/08/ [194] https://nolanlawson.com/2011/07/ [195] https://nolanlawson.com/2011/06/ [196] https://nolanlawson.com/2011/05/ [197] https://nolanlawson.com/2011/04/ [198] https://nolanlawson.com/2011/03/ [199] https://nolanlawson.com/tag/accessibility/ [200] https://nolanlawson.com/tag/alogcat/ [201] https://nolanlawson.com/tag/android-2/ [202] https://nolanlawson.com/tag/android-market/ [203] https://nolanlawson.com/tag/apple/ [204] https://nolanlawson.com/tag/app-tracker/ [205] https://nolanlawson.com/tag/benchmarking/ [206] https://nolanlawson.com/tag/blobs/ [207] https://nolanlawson.com/tag/boost/ [208] https://nolanlawson.com/tag/bootstrap/ [209] https://nolanlawson.com/tag/browsers/ [210] https://nolanlawson.com/tag/bug-reports/ [211] https://nolanlawson.com/tag/catlog/ [212] https://nolanlawson.com/tag/chord-reader/ [213] https://nolanlawson.com/tag/code/ [214] https://nolanlawson.com/tag/contacts/ [215] https://nolanlawson.com/tag/continuous-integration/ [216] https://nolanlawson.com/tag/copyright/ [217] https://nolanlawson.com/tag/couch-apps/ [218] https://nolanlawson.com/tag/couchdb/ [219] https://nolanlawson.com/tag/couchdroid/ [220] https://nolanlawson.com/tag/developers/ [221] https://nolanlawson.com/tag/development/ [222] https://nolanlawson.com/tag/emoji/ [223] https://nolanlawson.com/tag/grails/ [224] https://nolanlawson.com/tag/html5/ [225] https://nolanlawson.com/tag/indexeddb/ [226] https://nolanlawson.com/tag/information-retrieval/ [227] https://nolanlawson.com/tag/japanese-name-converter/ [228] https://nolanlawson.com/tag/javascript/ [229] https://nolanlawson.com/tag/jenkins/ [230] https://nolanlawson.com/tag/keepscore/ [231] https://nolanlawson.com/tag/listview/ [232] https://nolanlawson.com/tag/logcat/ [233] https://nolanlawson.com/tag/logviewer/ [234] https://nolanlawson.com/tag/lucene/ [235] https://nolanlawson.com/tag/nginx/ [236] https://nolanlawson.com/tag/nlp/ [237] https://nolanlawson.com/tag/node/ [238] https://nolanlawson.com/tag/nodejs/ [239] https://nolanlawson.com/tag/npm/ [240] https://nolanlawson.com/tag/offline-first/ [241] https://nolanlawson.com/tag/open-source/ [242] https://nolanlawson.com/tag/passwords/ [243] https://nolanlawson.com/tag/performance/ [244] https://nolanlawson.com/tag/pinafore/ [245] https://nolanlawson.com/tag/pokedroid/ [246] https://nolanlawson.com/tag/pouchdb/ [247] https://nolanlawson.com/tag/pouchdroid/ [248] https://nolanlawson.com/tag/query-expansion/ [249] https://nolanlawson.com/tag/relatedness-calculator/ [250] https://nolanlawson.com/tag/relatedness-coefficient/ [251] https://nolanlawson.com/tag/s3/ [252] https://nolanlawson.com/tag/safari/ [253] https://nolanlawson.com/tag/satire/ [254] https://nolanlawson.com/tag/sectioned-listview/ [255] https://nolanlawson.com/tag/security/ [256] https://nolanlawson.com/tag/semver/ [257] https://nolanlawson.com/tag/shadow-dom/ [258] https://nolanlawson.com/tag/social-media/ [259] https://nolanlawson.com/tag/socket-io/ [260] https://nolanlawson.com/tag/software-development/ [261] https://nolanlawson.com/tag/solr/ [262] https://nolanlawson.com/tag/spas/ [263] https://nolanlawson.com/tag/supersaiyanscrollview/ [264] https://nolanlawson.com/tag/synonyms/ [265] https://nolanlawson.com/tag/twitter/ [266] https://nolanlawson.com/tag/ui-design/ [267] https://nolanlawson.com/tag/ultimate-crossword/ [268] https://nolanlawson.com/tag/w3c/ [269] https://nolanlawson.com/tag/webapp/ [270] https://nolanlawson.com/tag/webapps-2/ [271] 
https://nolanlawson.com/tag/web-platform/ [272] https://nolanlawson.com/tag/web-sockets/ [273] https://nolanlawson.com/tag/websql/ [274] https://toot.cafe/@nolan [275] https://github.com/nolanlawson [276] https://npmjs.com/~nolanlawson [277] https://wordpress.com/?ref=footer_blog [278] https://nolanlawson.com/2025/04/02/ai-ambivalence/#comments [279] https://nolanlawson.com/2025/04/02/ai-ambivalence/?ck_subscriber_id=1881659020 [280] https://nolanlawson.com/2025/04/02/ai-ambivalence/?ck_subscriber_id=1881659020 [281] https://nolanlawson.com/2025/04/02/ai-ambivalence/?ck_subscriber_id=1881659020 [282] https://nolanlawson.com/ [290] https://wordpress.com/log-in?redirect_to=https%3A%2F%2Fr-login.wordpress.com%2Fremote-login.php%3Faction%3Dlink%26back%3Dhttps%253A%252F%252Fnolanlawson.com%252F2025%252F04%252F02%252Fai-ambivalence%252F [291] https://nolanlawson.com/ [292] https://nolanlawson.com/2025/04/02/ai-ambivalence/?ck_subscriber_id=1881659020 [293] https://nolanlawson.com/2025/04/02/ai-ambivalence/?ck_subscriber_id=1881659020 [294] https://wordpress.com/start/ [295] https://wordpress.com/log-in?redirect_to=https%3A%2F%2Fr-login.wordpress.com%2Fremote-login.php%3Faction%3Dlink%26back%3Dhttps%253A%252F%252Fnolanlawson.com%252F2025%252F04%252F02%252Fai-ambivalence%252F [296] https://wp.me/p1t8Ca-3Ry [297] https://wordpress.com/abuse/?report_url=https://nolanlawson.com/2025/04/02/ai-ambivalence/ [298] https://wordpress.com/reader/blogs/21720966/posts/14852 [299] https://subscribe.wordpress.com/ [300] https://nolanlawson.com/2025/04/02/ai-ambivalence/?ck_subscriber_id=1881659020