> When I need to write an email, I often bullet-point what I want to write and ask the LLM to write out a coherent, cordial email. I’ve gotten worse at writing emails
Just send the bullet points! Nobody wants the prose. It’s a business email not an art. This is a hill I will die on.
Prose has its uses when you want to transmit vibes/feelings/... For actionable info communication between busy people, terse and to the point is better and more polite.
It’s bad enough when I have to read people waffling. Please don’t make me read LLM waffle.
A long time ago I would write these stupid long, wordy emails to my manager summing up my work week. He finally told me, "please, keep it short and sweet. I don't need to know every wire or line of code you touched. Just summarize it into a few sentences." Best conversation ever. Went from 2 hours of typing Friday afternoon to 10 minutes or so. I'm stumped as to why we went backwards.
Now it has gotten more informal - Slack. Not sure how many still use email for internal communications.
It's like Saitama-sensei said: keep it to twenty words or less.
> Just send the bullet points! Nobody wants the prose. […] It’s bad enough when I have to read people waffling. Please don’t make me read LLM waffle.
I use LLMs to shorten my emails.
I think this is the author's point. The ability to write short and concisely is a skill. So goes the saying: "If I had more time, I would have written a shorter letter."
Using LLMs to do that shortening is potentially hindering that practice.
The author's point, I think, is less about sending LLM waffle; it's more that they can only send something indistinguishable from LLM waffle anyway, due to a skills issue, because the LLM is so often used instead of building that skill.
I think the question is largely, can the LLM results be used for human learning and human training, or is it purely a shortcut for skills - in which case those skills never form or atrophy.
> the question is largely, can the LLM results be used for human learning and human training, or is it purely a shortcut for skills
I agree. And I think everyone who uses them does a combination. The biggest danger lies with new learners who mistake immediate completion of a training task with having learned something.
I will drive a stick-shift once a year, and automatic every other day. Surely my skill will atrophy, but the convenience is worth it.
The exception is when you're sending emails to people who don't have the same background knowledge or assumptions as you do.
Imagine:
Write a coherent but succinct email to Ms Griffin, principal of the school where my 8yo son goes, explaining:
- Quizlet good for short term recollection
- no point memorising stuff if going to forget later
- better to start using anki, and only memorize stuff where worth remembering forever
That seems like an effective way to get Ms Griffin annoyed. Given the prevalence of cheating in education, they might be much more likely to identify that an LLM was used to generate the text, after which they label the email as spam and the parent as someone who would send them such spam.
> Just send the bullet points! Nobody wants the prose.
But the recipient can just ask AI to convert the prose into bullet points.
That's what I can do with my newfound time now that LLMs write my emails for me: use LLMs to convert others' emails into bullet points!
this heavily depends on your interlocutor.
I think it's a fair hill to die on; I'll join you. I'd go so far as to say that if I take a very direct tone with you after an initial formality and you keep up the formalities, it's a bit of a red flag. Gimme just the words with what you want, please.
> No one lamented the advent of calculators.
It's interesting that he lists a number of historical precedents, like the invention of the calculator or the mechanization of labor in the industrial revolution, and explains how they are different from AI. With the exception of chess, I think he's wrong about the effects of all of them.
For instance, people did lament the invention of calculators, saying it would harm kids' ability to do mental arithmetic. And it did. People also said that GPS navigation would hurt people's ability to use a map, or create one in their heads. And I'm old enough to say: it absolutely did. People (in aggregate) are worse at those skills now.
Fortunately for us, we replaced those skills with technology that allowed us to do something analogous to them, but faster and more easily.
The question is: what are the second- and third-order effects of losing those skills, or not learning them in the first place? Is it crazy to think that not memorizing things because we can access printed (and digitized) material might have larger, unforeseen consequences on our brains, or our societies? Could mechanizing menial labor have induced some change in how we think, or have any long term effects on our bodies?
I think we're seeing—and will continue to see—that there are knock-on effects to technology that we can't predict beforehand. We think we're making a simple exchange of an old, difficult skill for a new, easy one, but we're actually causing a more far-reaching cascade of changes that nobody can warn us of in advance.
And, to me, the even scarier thing is that those of us who don't live through those changes will have no basis for comparison to know whether the trade-off was worth it.
> "People also said that GPS navigation would hurt people's ability to use a map, or create one in their heads. And I'm old enough to say: it absolutely did."
Thing is, some people never were good at reading/using maps, much less creating them. Even with GPS at hand I still prefer seeing a map to know where I'm going. Anyway, retaining at least a modicum of "classic" skills is beneficial. After all, GPS isn't infallible. As with all complex technologies, the possibility of failure warrants having alternatives.
I was recently on a cruise, and someone asked the ship's navigator whether officers were trained on using old instruments like the sextant. He replied that they were, and that they continue to drill on their use. Sure, the ship has up-to-date equipment, but knowing the "old ways" is potentially still relevant.
> "The question is: what are the second- and third-order effects of losing those skills, or not learning them in the first place?"
Naturally, old skills fade with the advent of newer methods. There's a shortage of farriers, people who shoe horses. Very few people are being apprenticed in the trade. (Though I'm told the work pays very well.) Owning horses is a niche but robust interest, so farriers have full workloads, and the occupation is not disappearing.
Point is that in real-world terms losing skills diminishes the richness of human lives because there's value in all constructive human endeavor. Similarly an individual's life is enriched by acquiring fundamental skills even if seldom used. Of course we have to parcel our time wisely, but sparing a bit of time to exercise basic capabilities is probably a good idea.
I recommend using Anki (or whatever software does the job) to commit everyday, normal stuff that comes up to long-term memory.
Anki has desktop and phone apps, and if you make an account online, you can connect both to it and sync across the two devices with no effort. I can do my daily review and add cards from laptop or phone whenever something comes up.
I use no subdecks, and zero complex features. Add cards, edit in the "browser", delete sometimes if I've second thoughts. 40 new cards each day, reviewing is ~45 mins and a joy.
All that to say - it's a direct antidote to the issues being described here. I rush to new things less, and spend much more time consolidating and forming links between stuff I know or "knew".
It's directly pushing me towards behaviour that fits the reality of how my brain works. Tabs are being closed with me saying to myself - I'll learn the name of the author and book for now, that's a good start.
Great for birthdays, names, an anecdote you loved, a little idea you had, fleshing out your geography, history, knowledge of plants, lyrics, nuggets from the Common Lisp book you're doing, etc etc.
So for me one huge thing to reclaim your brain and get acquainted with your memory is - flashcards!
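If you're curious what the scheduler is actually doing: Anki's algorithm descends from SuperMemo's SM-2. A minimal Python sketch of the core interval update (simplified; real Anki adds learning steps, fuzz, and lapse handling, so treat this as an illustration, not Anki's actual code):

    # Simplified SM-2: grade is self-rated recall quality (0-5).
    # Returns updated (repetitions, ease factor, interval in days).
    def sm2_update(grade, reps, ease, interval_days):
        if grade < 3:                      # failed recall: start the card over
            return 0, ease, 1
        if reps == 0:
            interval_days = 1              # first successful review
        elif reps == 1:
            interval_days = 6              # second successful review
        else:
            interval_days = round(interval_days * ease)
        # ease drifts up on easy recalls, down on hard ones, floored at 1.3
        ease = max(1.3, ease + 0.1 - (5 - grade) * (0.08 + (5 - grade) * 0.02))
        return reps + 1, ease, interval_days

    state = (0, 2.5, 0)                    # fresh card: no reps, default ease
    for _ in range(3):
        state = sm2_update(4, *state)      # rate it "4" three reviews in a row
        print(state)                       # intervals grow 1 -> 6 -> 15 days

That growth is the whole trick: each successful recall pushes the next review further out, so daily review time stays bounded even as the deck grows.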
> For instance, people did lament the invention of calculators, saying it would harm kids' ability to do mental arithmetic. And it did.
> Fortunately for us, we replaced those skills with technology that allowed us to do something analogous to them, but faster and more easily.
Don't kids still learn to do arithmetic in their head first? I haven't been in a school in decades but I remember doing it all sans calculator in elementary school. When you move on up to higher level stuff you end up using a calculator, but it's not like we skip that step entirely, do we?
Exactly! Steph Ango (Obsidian creator) has said it well in his "Don't delegate understanding" essay: https://stephango.com/understand
I'd argue that using calculators instead of learning how addition is done does hurt kids' ability to do mental arithmetic. It's an experiment we haven't tried, or at least not in places I've lived. Sure, once you get how addition is done, feel free to free up your mind by skipping 2+ digit arithmetic with a calculator. Same as: sure, once you've learned what caching is and implemented a small prototype, feel free to ask Claude to implement caching for you.
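To make "small prototype" concrete, the core of caching fits in a dozen lines. A sketch in Python (dict-based memoization; the standard library's functools.lru_cache does the same job in one line):

    import functools

    def memoize(fn):
        store = {}                         # the cache: args tuple -> result
        @functools.wraps(fn)
        def wrapper(*args):
            if args not in store:          # miss: compute and remember
                store[args] = fn(*args)
            return store[args]             # hit: reuse the stored result
        return wrapper

    @memoize
    def fib(n):
        return n if n < 2 else fib(n - 1) + fib(n - 2)

    print(fib(80))  # instant with the cache; effectively never finishes without it

Once that clicks (keying, hits vs. misses, and the invalidation question the sketch conveniently ignores), delegating the production version is a much safer bet.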
I wonder if, in place of many lower-level skills, one is then freed to explore higher-order skills. We now have very fancy calculators, such as in the form of tools like notebooks that connect to data sources, run transformations, and show visualizations.
Calculators also ruined the ability to understand and use logarithms (slide rules).
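For anyone who never handled one: a slide rule is the addition of logarithms made physical, which is why the two skills faded together. The principle in one identity, with a worked example:

    \log_{10}(ab) = \log_{10} a + \log_{10} b
    \log_{10} 2 + \log_{10} 8 \approx 0.301 + 0.903 = 1.204 = \log_{10} 16

Sliding the scales adds the two lengths; reading the result off the scale undoes the logarithm.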
"While CS undergrads are still required to take classes on assembly, most productive SWEs never interact with assembly. Moving up the ladder of abstraction has consistently been good."
Gotta disagree. Adding abstraction has yielded benefits but it certainly hasn't been consistently good. For example, see the modern web.
The analogy likening LLMs to compilers is extremely specious. In both steps, the text written by the user/programmer is higher-level and thus "easier" but beyond that, the analogy doesn't hold.
- Natural language is not precise and has no spec, unlike programming languages.
- The translation from C (or another higher-level language) to assembly by a given compiler is deterministic in a way that the behavior of an LLM is not.
- On the flip side, the amount of control given to the tool versus what is specified by the programmer is wildly different between the two.
Very subjective but IME, understanding assembly is correlated with being a skilled web developer. Even though you don't actually write assembly while doing web dev.
It’s been overall good. Being able to access a web app or website by entering a URL is impressive!
You can serve web pages and render them in browsers all written in C. I'll concede that that's a useful level of abstraction over assembly.
Many browsers, especially Chrome, have abstracted away direct interaction with URLs. Would you also consider that good?
You can still do that if you want to. Most people don't.
Back before I got a cell phone, I had many many phone numbers memorized. Once I got a cell phone with a contacts list, I just stopped. Now I have my parents and my wife's phone numbers memorized, and that's it.
URLs are much the same. On most websites, if I can see the domain is the one that I expect to be on, that's all I really care about. There's a few pages that I interact with the URL directly, but it's a minority.
A functioning URL is impressive?
Exactly. The industry has encouraged mediocrity and inefficiency with over-abstraction and by abusing technologies in areas where they don't make sense for basic software.
This is what you see with the rise of some of the worst technologies (JavaScript) being used in places where they shouldn't be, because some engineers want to keep using one language for everything.
Which is how you get basic desktop apps written in Electron taking up 500MB each and using 1.2GB of memory. That doesn't scale well on a typical user's 8GB laptop.
Not saying they should be written in assembly either (which also doesn't make sense), but "the SWE is used to one language" is a really poor excuse.
Nothing wrong with using high-level compiled languages to write native desktop apps that compile to an executable.
>This is what you see with the rise of some of the worst technologies (JavaScript) being used in places where they shouldn't be, because some engineers want to keep using one language for everything.
NodeJS was the biggest mistake our industry made and I will die on this hill. It has taken the crown from null. People have been trying to claw it back with TypeScript, but the real solution was to drop JS altogether. JS becoming the language of the browser was an artifact of history from when we didn't know where this internet thing was going. By the time NodeJS was invented we should have known better.
I love LLMs, and actually feel they are making me smarter.
I'll be thinking of something in the car, like: how do torque converters work? And then I start a live talk session with GPT and we start talking about it. Unlike a Wikipedia article that just straight-up tells you how it works, I can dive down into each detail that is confusing to me until I fully understand it. It's incredible, for the curious.
If you're curious about torque converters I suspect you're careful about this, but what's your information vetting process? I use LLMs via text, so I can verify info as it streams in. How do you verify what's spoken to you in a car?
I do the same as GP on a regular 2-hour drive I take up I-5.
The vetting process is the same as if I were driving up I-5 with a gearhead friend of mine, having a conversation as we go.
If something sounds off I just tell it I think it's wrong or to double check itself, similar to what I do with text.
I'd also rather use them as a tutor of sorts than for "please do things for me." I think they're quite useful in that regard, albeit I know not to trust them fully as the only source of information.
> When I need to write an email, I often bullet-point what I want to write and ask the LLM to write out a coherent, cordial email. I’ve gotten worse at writing emails.
Think I'd rather just have the bullet points in the first place, to be honest; they have to be easier and quicker to read than an LLM soup of filler paragraphs.
For sure. If I get an email with 3 dense paragraphs, I'm more likely to mark it unread and come back to it later, after processing the other 20 emails in my inbox.
There’ll be a move to oral ability assessment across the board.
Oral exams, face to face interviews, etc.
If you think of the LLM as a tireless coach and instructor and not a junior employee you’ll have a wonderful opportunity. LLMs have taught me so so much in the last 12 months. They have also removed so many roadblocks and got me to where I wanted to be quicker. Eg: I don’t particularly care about learning Make atm but I do want to compile something to WASM.
Better check if that's really a "hearing aid", then.
"However… even this might still be too slow. Why understand every line of code deeply if you can just build and ship?"
Because the journey is the destination. Using AI extensively so far appears to be a path that mostly allows for a regression to the mean. Caring about what you're doing, being intentional, and having presence of mind is what leads to interesting outcomes, even if every step along the way isn't engaging or yielding the same output as telling an LLM to do it.
I suppose if you don't care about what you're doing, go ahead and get an LLM to do it. But if it isn't worth doing yourself... Why are you doing it?
Really, do you need those Chrome extensions?
Alternatively, though... If you do, but they aren't mission critical, maybe it's fine to have an LLM puke it out.
For something that really matters to you though, I'd recommend being deep in it and taking whatever time it takes.
Also the tutor approach seems great to me. I don't feel like it's making me dumber. Using LLMs to produce code seemed to make me lazy and dumber though, so I've largely backed off. I'll use it to scaffold narrow implementations, but that's it.
I’ve been afraid of this as well.
Which is why I try to treat LLMs like a “calculator” to check my work.
I do things myself, then after I do it myself - ask an LLM to do the same.
That way, I'm still thinking critically, and as a result I actually get more benefit from the LLM, since I can be more specific in having it help me fill in gaps.
> Historical Analogies
I want to add another one to the author's list, which I think is even more relevant:
Writing.
Story goes, the Celtic druids relied on oral tradition and rejected writing, because they figured relying on writing was a crutch that made them weaker. They're gone now and, because of that choice, most of their culture with them.
Like Assembly to C to Python, as the author points out, LLMs allow us to move up the ladder of abstraction. There are obvious downsides to this, but I expect the evolution is inevitable.
The complaints about that evolution are also inevitable. We lose control, and expertise is less valued. We experts stand to lose a lot, especially if we cling to the old ways. We're in the midst of a sea change: we know what we're losing, but we're not sure what we're gaining.
> Story goes, the Celtic druids relied on oral tradition and rejected writing, because they figured relying on writing was a crutch that made them weaker. They're gone now and, because of that choice, most of their culture with them.
Can you help me complete this analogy? By failing to rely on "writing" (read: LLMs), what will fail to be recorded and therefore remembered? Is the idea that if knowledge isn't encompassed by an LLM, in the future it will be inaccessible to anyone?
Sure! I am not the OP, but it seems like the analogy is how being a Luddite and refusing to integrate modern tools leads to being left behind and becoming irrelevant. Another more contemporary example: when intravascular techniques were first being developed, many CT surgeons felt as though those procedures were beneath them and gladly let cardiologists take point for those while they continued to do open procedures. Because of this, they lost a lot of ground in billable procedures, and it negatively affected compensation and demand for CT surgeons. Now, cardiologists can do some minimally invasive valve repairs and ASD closures, which will continue to take business away from CT surgeons. If you refuse to adapt to new technologies, you will be left behind.
For a critical article it lists quite a lot of pro-LLM analogies, which are false in my opinion.
The pocket calculator simplifies exactly one part of math and probably isn't even used that much by research mathematicians.
Chess programs are obviously forbidden in competitions. Should we forbid LLMs for programming? In line with the headline though, Magnus Carlsen said that he does not use chess programs that much and that they do make him dumber. His seconds of course use them in preparation for competitions.
LLMs destroy the essence of humanity: Free thought. We are fortunate that many people have come to the same conclusion and are fighting back.
Social media caused societal decay. Dating apps led to a loneliness epidemic. AI will make us dumber.
Digital applications lead to the opposite of what they were meant to do. This is a very reliable indicator for outcomes.
Wild that students are forced to consciously make a decision not to learn because that seems like the better strategic play.
> Total offloading cripples true learning but maximizes short-term speed and output, and finding the right balance is crucial.
LLMs help you go fast but going fast keeps you shallow.
Luckily you can fractally take apart something the LLM made for you with the help of, you guessed it, an LLM. “Why did you make this a constant?” “Why is this hoisted?” “What makes this more performant than that?” .. and pretty soon, you’re not dumb anymore. At least in that area.
LLMs are the ultimate tool for self-directed learners.
I have used a simple benchmark for productivity-related personal workflow changes that has always served me very well and kept me feeling good about what i choose to use to do my job, and that's what i now call the airplane mode test: can i use this tool completely offline, isolated from the rest of the world?
growing up, wifi was not ubiquitous for me. i didn't live somewhere remote, i was just an early adopter of computers and carried a laptop in school when that was considered a rare piece of kit. learning programming, i always kept the philosophy of "can i do this on the train going home" in mind when selecting how i wrote code. i kept man pages handy and learned how to search them properly (man -k . | grep), learned how to access the gnu info pages on my machine, and even found the glibc manual tucked away safely in my /usr/share directory, which i hadn't known was there. over the years i stayed away from stack overflow and google when i was writing code as much as possible, first looking at the resources available to me on my local machine.
i now have qwen3. it runs locally on my machine. it can vibe code, it can oneshot, it can reason about complex non-code problems and give me tons of background information on the entire world. i used to keep a copy of wikipedia offline - only some few gigabytes for the text version, and even if that is too much, there are reduced-selection versions available in kiwix.
i am fine with llms taking over a lot of that tedious work, because i am confident i will always have this tool, the same as all my other tools and data that i back up and keep for myself. that means it's ok for me to cheat a bit here and there and reach out to the higher-powered models in the cloud, the same way i would sometimes google an error message before even reading it if i am doing work to pay my bills. i have these rungs of the ladder to climb down from, and i feel like i am not falling into oblivion.
i think the phrase that sums this up best is "work smarter, not harder". i'm ok with accepting a smarter way of doing things, as long as i can always depend on it being there for me in an adverse situation.
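For what it's worth, the airplane mode test is easy to make concrete. A minimal sketch, assuming the ollama CLI with a locally pulled qwen3 model plus its Python client (any local runner works the same way; the model name here is just whatever you pulled):

    # assumes `ollama pull qwen3` was done while online and
    # `pip install ollama`; after that, this runs with networking off.
    import ollama

    reply = ollama.chat(
        model="qwen3",
        messages=[{"role": "user", "content": "what does `man -k . | grep socket` do?"}],
    )
    print(reply["message"]["content"])

Same spirit as the offline man pages and the kiwix wikipedia dump: the rung of the ladder lives on your own disk.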
The author mentions it at the end, but continuing to experience long-form content like reading books and listening to multi-hour podcasts — particularly in other knowledge domains — should counteract this.
That's simply consumption and is by no means a stand-in for actual problem solving and learning.
The whole idea of "LLMs replacing thinking" exists because people don't want to understand the output. Just a few prompts like "What are the flaws of the code below?" would improve understanding and allow you to plan ahead better.
> While CS undergrads are still required to take classes on assembly, most productive SWEs never interact with assembly.
You may think this, but the principles are extremely relevant even in much 'higher tiers' of programming, such as the front-end. Performance optimization is always relevant, and understanding the concepts you learn from learning assembly is crucial.
Such courses also generally encourage a depth of understanding of the whole computing stack. This is more and more relevant in the modern age, where we have technologies such as Web*Assembly*.
I learn a lot from asking LLMs to do things especially in areas like front-end development where I don't know most features of CSS, HTML5, or React. All you have to do is read the code the LLM writes and ask it follow-up questions.
LLMs can accelerate learning. Everyone is optimistic about the idea of personalized tutors improving education. You can already use them like that while working on real-world projects.
I think the list of historical analogies is missing the biggest one - the internet.
Memorization used to be a much more important skill than it is today. I am probably worse at rote memorization than I was when I was 13. Am I dumber? I would say no - I've just adapted to the fact that memorization is much less important in a world where I have access to basically the entire recorded history of human knowledge anywhere, anytime.
LLMs are just another very powerful technology that changes what subdomains of intelligence matter. When you have an LLM that can write code better than any human being (and since I know I will get testy HN replies about how LLMs can't do that, I will clarify here that I mean this is a thing that is not true today but will be in the future), the skill that matters shifts from writing code to defining the problem or designing the product.
> Looking at historical examples, successful cases of offloading occurred because the skills are either easily contained (navigation) or we still know how to perform the tasks manually but simply don’t need to anymore (calculator). The difference this time is that intelligence cannot easily be confined.
This is true, but I think it just means we'll see a more extreme kind of the same change we've seen as we've created powerful new tools in the past. I think it's helpful to think of the tool less as intelligence and more as the products of intelligence that are relevant, like generating high quality code or doing financial analysis. You'll have tools that can do those things extremely well, and it'll be up to you to make use of them rather than worrying about the loss of now-irrelevant skills.
> If you do work at the very frontier, LLMs definitely aren’t as helpful and, for very good programmers, their use of these models for coding is fundamentally different.
This needs a mention anytime someone says they struggle to get value from LLMs.
> GPS. It’s so reliable that I’m fine being unable to navigate. I’ve never gotten in a situation where I wish I had learned to navigate without Google Maps beforehand.
I feel like having a dying phone and needing to get back home from a new place late at night is entirely possible, so I think it is worth having at least a basic knowledge of the major highways in your locality and which direction each one goes.
I think another good historical analogy is the invention of writing. In Phaedrus[0] Plato argued that it may make people dumber.
0: https://en.m.wikipedia.org/wiki/Phaedrus_(dialogue)
> I think another good historical analogy is the invention of writing. In Phaedrus[0] Plato argued that it may make people dumber.
No, he doesn't. Plato quotes Socrates quoting a mythical Egyptian king talking with the god that had supposedly created writing and wanted to gift it to the Egyptians. The entire conversation is much more nuanced. For one, writing had existed for three millennia by the time this dialogue was written, and alphabetic Greek writing had existed for several centuries.
Plato does make the point that access to text is not enough to acquire knowledge and that it can foster a sense of false understanding in people who conflate knowing about something with knowing something, which I think is quite relevant when you see people claiming they can learn things from asking LLMs about it.
sounds like φ matched orders
https://dmf-archive.github.io/
> Assembly to C to Python. Almost no one writes assembly by hand anymore (unless you work at Deepseek)
Unless you are maintaining hardware or device drivers, which is done at any company that makes hardware: Apple, Google, Microsoft, Nvidia, SpaceX, Intel, AMD, ARM, Tesla, and the list goes on.
Yep. Or writing video codecs or other performance-critical software. It's amazing that people make blanket statements like this when really they're just not familiar with what other SWEs are doing.
More broadly, there's a lot of value in knowing how to work with constrained systems: things that have to be offline, or radiation-hardened, or low-power, or low-spec, etc. Those tend to be resilient systems; i.e., things that people can quietly rely on instead of being subject to "move fast and break things."
Building web apps that you can update willy-nilly while running on arbitrarily powerful and always-available hardware isn't the entirety of software engineering.
Correct, cream-of-the-crop software engineers doing bleeding-edge work will most likely never be supplanted by an LLM. I think the issue is that 90% of programmers do not do such work. The things most software engineers actually do (front-end web dev using a popular framework, MVC-like apps, gluing together APIs and libraries to make a custom configuration of an otherwise commonly solved problem, etc.) are the things that LLMs excel at, and they will continue to improve as time goes on.
An LLM is just another tool; use it wrong and you'll suffer for it.
I use it as a tutor, to teach me how to do certain tasks and explain things I do not yet understand.
When I create things like articles, I ask it to review and point out grammar/spelling mistakes.
And sometimes I use it as a search engine to find sources where I can validate information.
The vibe coding examples are interesting to me. Okay, you can create chrome extensions and personal apps with these tools. The author seems to take it as a given that that’s the extent of useful programming. How do they work in maintaining huge applications over time that require interactions between dozens or hundreds or thousands of streams of work?
Hey man, it's not necessarily the LLM.
> My first response to most problems is to ask an LLM, and this might atrophy my ability to come up with better solutions since my starting point is already in the LLM-solution space.
People were doing this with Stack Overflow / blogs / forums. It doesn't matter if you look up pre-existing solutions. It matters whether you understand them properly. If you do, that is fine; if you don't, then you will produce poor code.
> Before the rise of LLMs, learning was a prerequisite for output. Learning by building projects has always been the best way to improve at coding, but now, you can build things without deeply understanding the implementation.
People completed whole projects all the time before LLMs without deeply understanding the code. I've had to work with large amounts of code where it was clear people never read the docs and never understood the libraries and frameworks they were working with. Many people seem to do "cargo cult programming", where they just follow what someone else has done and adapt it just enough to solve their problem.
I've seen people take snippets from Stack Overflow wholesale and just fiddle until it works, without really understanding it.
LLMs are just a continuation of this pattern. Many people just want to do their hours and get paid, and are not interested in, or capable of, fully understanding what they are working on.
> GPS. It’s so reliable that I’m fine being unable to navigate. I’ve never gotten in a situation where I wish I had learned to navigate without Google Maps beforehand. But this is also a narrow skill that isn’t foundational to other higher-order ones. Maybe software engineering will be something as obsolete as navigating where you can wholly offload it? However, that seems unlikely given the difference in complexity of the two tasks.
I think the author will learn the hard way. You shouldn't rely on Google Maps. Literally less than two weeks ago, Google Maps was non-functional for me (I ran out of data), and I ended up using road signs and driving towards town names I recognised to navigate back. Learning basic navigational methods is a good idea.
From my perspective, the kind of loss expected with LLMs does not reveal itself in one generation.
What you described is more akin to laziness than loss of knowledge. It is also a trap. Your text almost reads as satire of the notion that AI could be harmful for learning, because we all know we can relearn those things. And we can, for now.
Several generations of it, when people start to forget simple things, is where the danger lies. We don't know if it will come to that or not.