> “I think the skills that should be emphasized are how do you think for yourself? How do you develop critical reasoning for solving problems? How do you develop creativity? How do you develop a learning mindset that you're going to go learn to do the next thing?”
In the Swedish school system, the idea for the past 20 years has been exactly this, that is, to try to teach critical thinking, reasoning, problem solving, etc. rather than hard facts. The results have been... not great. We discovered that reasoning and critical thinking are impossible without foundational knowledge of what to be critical about.
I think the same can be said about software development.
I'm glad my east Asian mother put me through Saturday school for natives during my school years in Sweden.
The most damning example I have about the Swedish school system is anecdotal: by attending Saturday school, I never had to study math in the Swedish school (same for my Asian classmates). When I finished the 9th grade Japanese school curriculum, taught ONLY one day per week (2h), I had already covered all of the advanced math taught in high school and never had to study math again until college.
The focus on "no one left behind == no one allowed ahead" also meant that young me complaining math was boring and easy didn't persuade teachers to let me go ahead, but instead, they allowed me to sleep during the lecture.
It's like this in the US (or rather, it was 20 years ago. But I suspect it is now worse anyway)
Teachers in my county were heavily discouraged from failing anyone, because pass rate became a target instead of a metric. They couldn't even give a 0 for an assignment that was never turned in without multiple meetings with the student and approval from an administrator.
The net result was classes always proceeded at the rate of the slowest kid in class. Good for the slow kids (that cared), universally bad for everyone else who didn't want to be bored out of their minds. The divide was super apparent between the normal level and honors level classes.
I don't know what the right answer is, but there was an insane amount of effort spent on kids who didn't care, whose parents didn't care, who hadn't cared since elementary school, and always ended up dropping out as soon as they hit 18. No differentiation between them, and the ones who really did give a shit and were just a little slow (usually because of a bad home life).
It's hard to avoid leaving someone behind when they've already left themselves behind.
I'm gonna add another perspective. I was placed, and excelled, in moderately advanced math courses from 3rd grade on. Mostly 'A's through 11th grade precalc (taken because of the one major hiccup, placing only in the second most rigorous track when I entered high school). I ended that year feeling pretty good, with a superior SAT score bagged, high hopes for National Merit, etc.
Then came senior year. AP Calculus was a sh*tshow, because of a confluence of factors: dealing with parents divorcing, social isolation, dysphoria. I hit a wall, and got my only quarterly D, ever.
The, "if you get left behind, that's on you, because we're not holding up the bright kids," mentality was catastrophic for me - and also completely inapplicable, because I WAS one of the bright kids! I needed help, and focus. I retook the course in college and got the highest grade in the class, so I confirmed that I was not the problem; unfortunately, though, the damage had been done. I'd chosen a major in the humnities, and had only taken that course as an elective, to prove to myself that I could manage the subject. You would never know that I'd been on-track for a technical career.
So, I don't buy that America/Sweden/et al. are full of hopeless demi-students. I was deemed one, and it wasn't true, but the simple perception was devastating. I think there is a larger, overarching deficit of support for students, probably some combination of home life, class structure, and pedagogical incentives. If "no child left behind" is anathema in these circles, the "full speed ahead" approach is not much better.
> The, "if you get left behind, that's on you, because we're not holding up the bright kids," mentality was catastrophic for me
Your one bad year doesn't invalidate the fact that it was good to allow you to run ahead of slower students the other 9 years. It wasn't catastrophic for you, as you say yourself you just retook the class in college and got a high grade. I honestly don't see how "I had a bad time at home for a year and did bad in school" could have worked out any better for you.
> So, I don't buy that America/Sweden/et al. are full of hopeless demi-students. I was deemed one.
A bad grade one year deemed you a hopeless demi student? By what metric? I had a similar school career (AP/IB with As and Bs) and got a D that should have been an F my senior year and it was fine.
They seem to lament ending up in the humanities instead of a technical path. The fact that the humanities are just categorized as being for less smart people while technical people are all smart is a problem in itself.
Many bright people end up in humanities and end up crushed by the societal pressure that expects them to be inferior, a huge waste.
This is probably the right solution. It seems in reality nobody does this since it is expensive (more teachers, real attention to students, etc). Also, if there is an explicit split, there will be groups of people who "game" it (spend a disproportionate amount of time to "train" their kids vs. actual natural talent - not sure if this is good or bad).
So, it feels to me that ideally, within the same classroom, there should be a natural way to work at your own pace, at your own level. Is it possible? Have no idea - seems not, again primarily because it requires a completely different skillset and attention from teachers.
> should be a natural way to work at your own pace, at your own level
Analogous to the old one-room-school model where one teacher taught all grade levels and students generally worked from textbooks. There were issues with it stemming from specialization (e.g., teaching 1st grade is different than teaching 12th). They were also largely in rural areas and generally had poor facilities.
The main barrier in the US to track separation is manpower. Public School teachers are underpaid and treated like shit, and schools don't get enough funding which further reduces the number of teachers.
Teachers just don't have the time in the US to do multiple tracks in the classroom.
You can have a multi-track high-school system, like in much of Europe. Some tracks are geared towards the academically inclined who expect to go to university; others hold that option open but focus on also learning a trade or specialty (this can be stuff like welding, CNC, or the hospitality industry / restaurants, etc.); while others focus more heavily on the trade side, with apprenticeships at companies intertwined with the education throughout high school, and switching to a university after that is not possible by default, but not ruled out if you put in some extra time.
Or you can also have stronger or weaker schools where the admission test scores required are different, so stronger students go to different schools. Not sure if that's a thing in the US.
This was the way all schools worked in my county in Florida, at least from middle school on. The Normal/Honors/AP split is what pretty much every high school did at the time. You could even go to a local community college instead of HS classes.
> Also, if there is an explicit split, there will be groups of people who "game" it (spend a disproportionate amount of time to "train" their kids vs. actual natural talent - not sure if this is good or bad).
The idea of tracking out kids who excel due to high personal motivation when they have less natural aptitude is flat out dystopian. I'm drawing mental images of Gattaca. Training isn't "gaming". It's a natural part of how you improve performance, and it's a desirable ethical attribute.
>But you aren't supposed to choose either or. Instead, you split the students in different groups, different speeds.
This answer is from the US perspective. I've lived in several states now, and I know many teachers because my partner is adjacent to education in her work and family. This is what I've learned from all this so far:
This is an incredibly easy and logical thing to suggest, conceptualize, and even accept. In fact, I can see why a lot of people don't think it's a bad idea. The problem comes down to the following, in no specific order:
- Education is highly politicized. Not only that, it's one of the most politicized topics of our time. This continues to have negative effects on everything, down to the proper funding of programs[0].
- This means some N number of parents will inevitably take issue with these buckets for one reason or another. That can become a real drain of resources dealing with this.
- There are going to be reasonable questions of objectivity that go into this, including historical circumstances. This type of policy is unfortunately easy to co-opt, sorting certain kids into certain groups based on factors like race, class, sex, etc. rather than educational achievement alone, which we also do not currently have a good enough way to measure objectively because of the aforementioned politicized nature of education.
- How to correct for the social bucketing of tiered education? High achieving kids will be lauded as lower achieving ones fall to the background. How do you mitigate that so you don't end up in a situation where one group is reaping all the benefits and thereby getting all the social recognition? Simply because I couldn't do college level trig when I was in 8th grade doesn't mean I deserved limited opportunities[2], but this tiered system ends up being ripe for this kind of exploitation. In districts that already have these types of programs you can already see parents clamoring to get their kids into advanced classes because it correlates to better outcomes.
[0]: I know that the US spends, in aggregate, approximately 15,000 USD per student per year, but that money isn't simply handed to school districts. If you factor in specialized grants, bonds, commitments, etc., actual classroom spending doesn't work with this budget directly; it's much smaller than this. This is because at least some of your local district's funding is likely coming from grants, which are more often than not only paid out for a specific purpose and must be used in pursuit of that purpose. Sometimes that purpose is wide and allows schools to be flexible, but more often it is exceedingly rigid, as it's tied to some outcome such as passing rates, test scores, etc. There's lots of this type of money sloshing around the school system, which creates perverse incentives.
[1]: Funding without strict restrictions on how it's used
[2]: Look, I barely graduated high school, largely due to a lot of personal stuff in my life back then. I was a model college student, though due to a different set of life circumstances I never quite managed to graduate, but I have excelled in this industry because I'm very good at what I do and don't shy away from hard problems. Yet despite this, some doors were closed to me longer than others because I didn't have the right on-paper pedigree. This only gets worse when you start bucketing kids like this, because people inevitably see these things as some sort of signal about someone's ability to perform, regardless of relevancy.
Yeah, all that stuff in the end boils down to: rich parents will find a way to have it their way. Whether private schools or tutors or whatever.
Every ideological system has certain hangups, depending on what they can afford. In the Soviet communist system, obviously a big thing was to promote kids of worker and peasant background etc., but they kept the standards high and math etc was rigorous and actual educational progress taken seriously. But there was Cold War pressure to have a strong science/math base.
Currently, the US is coasting, relying on talent from outside the country for the cream of the top, so they can afford nonsense beliefs, given also that most middle-class jobs are not all that related to knowledge, and are more status-jockeying email jobs.
It will likely turn around once there are real stakes.
> I was placed, and excelled, in moderately advanced math courses from 3rd grade on.
In the school district I live in, they eliminated all gifted programs and honors courses (they do still allow you to accelerate in math in HS for now, but I'm sure that will be gone soon too), so a decent chance you might not have taken Calculus in HS. Problem solved I guess?
I'm not sure when this changed, but in school for me in the 1970s and early '80s the teachers (at least the older ones) were all pretty much of the attitude that "what you get out of school depends on what you put into it" i.e. learning is mostly up to the student. Grades of "F" or zero for uncompleted or totally unsatisfactory work were not uncommon and students did get held back. Dropout age was 16 and those who really didn't care mostly did that. So at least the last two years of high school were mostly all kids who at least wanted to finish.
> It's like this in the US (or rather, it was 20 years ago. But I suspect it is now worse anyway)
I'm sure it's regional, but my oldest kid started school in SoCal 13 years ago, and it is definitely worse. Nearly every bad decision gets doubled-down on and the good ones seem to lack follow-through. I spent almost a decade trying to improve things and have given up; my youngest goes to private school now.
We are experimenting with our daughter this year: Our school system offers advanced math via their remote learning system. This means that during math class, my kid will take online 6th grade math instead of the regular in-person 5th grade math.
We will have to see how it goes, but this could be the advanced math solution we need.
And still (or maybe because of this?) the resulting adults in Sweden score above e.g. Korea in both numeracy and adaptive problem solving (but slightly below Japan). The race is not about being best at 16, after all.
Sure! As far as I know, it's somewhat standardized and the East Asian countries all have it (Korea, China, Japan). I know this because the Chinese Saturday school was close by. It's usually sponsored by the embassy and located in capital cities, or places with many Japanese families (London, Germany, Canada afaik).
Because it's only once a week, it ran from 09:00 - 14:00 or similar. The slots were: Language (Japanese), Social Studies (History, Geography, Social systems) and then Math. They usually gave homework, which was a little up to the parent to enforce. Classes were quite small: elementary school had the most, but no more than 10. Middle school was always single digit (5 for my class). It depends on place and economy: when the companies Ericsson (Sweden) and Sony (Japan) had a joint division, Sony-Ericsson, many classes doubled.
Class didn't differ so much from the normal school in Asia. Less strict. But the school organized a lot of events such as Undoukai (Sports Day), a theater play, and new year's/setsubun festivals and other things common in Japanese schools. It served as a place for many Asian parents to meet each other too, so it became a bit of a community.
Because of the lack of students, the one I went to only had 1st to 9th grade. In London and bigger cities I heard they go up until high school. But in Japan, some colleges have 帰国子女枠 (a returnee entrance system), so I know one alumnus who went to Tokyo Uni after high school.
Personally, I liked it. I hated having to go to school one extra day, but being able to have classmates who shared part of your culture (before the internet was widespread), by sharing games, books, and toys you brought home from holidays in Japan, was very valuable.
Related to the "critical thinking" part of the original article: It was also interesting to read two history books. Especially modern history. The Swedish (pretending to be neutral) one and the Japanese one (pretending they didn't do anything bad) as an example, for WW2 and aftermath. Being exposed to two rhetoric, both technically not a lie (but by omission), definitely piqued my curiosity as a kid.
You mentioned that these classes were good enough that they made Swedish classes a breeze in comparison. What differences in teaching made Saturday school so much more effective?
You did mention class size, and the sense of community, which were probably important, but is there anything else related to the teaching style that you thought helped? Or conversely, something that was missing in the regular school days that made them worse?
>What differences in teaching made Saturday school so much more effective?
I do think the smaller class and feeling more "close" to the teacher helped a lot. But also that the teachers were passionate. It's a community so I still (20 years later) do meet some of the teachers, through community events.
I can't recall all the details, to be honest, but I do think a lot of repetition of math exercises and actually going through them step by step helped a lot to solidify how to think. I feel like the Japanese math books also went straight to the point, but still made the book colorful in a way. Swedish math books felt bland. (something I noticed in college too, but understandable in college ofc)
In the Swedish school, it felt like repetition was left to homework. You go through a concept, maybe one example, on the whiteboard and then move on. Unless you have active parents, it's hard to get timely feedback on homework (crucial for learning), so people fell behind.
Also, probably, that the curriculum was handed to the student early. You knew what chapters you were going through in which week, and which exercises were important. I can't recall getting that (or teachers following it properly) early in the term at Swedish school.
They also focused on different things. For example, the multiplication table: in Japan you're explicitly taught to memorize it and are tested on recall speed (7 * 8? You have 2 seconds). In Swedish schools, they despised memorization, so they told us not to. The result is that "how to think about this problem" is answered with a "mental model" in Japanese education and "figure it out yourself" in the Swedish one. Some figured it out in a suboptimal way.
But later in the curriculum it obviously helps to be able to calculate fast to keep up, so those small things compounded, I think.
Okay, you gotta spill - what's some stuff Sweden was pretending to be neutral on?
(As a poorly informed US dude) I'm aware of Japan's aversion to the worse events of the war, but haven't really heard anything at all about bad stuff in Sweden
I'm a Brit who speaks Swedish, and recently watched the Swedish TV company SVT's documentary "Sweden in the war" (sverige i kriget). I can maybe add some info here just out of personal curiosity on the same subject.
There were basically right wing elements in every European country. Sympathisers. This included Sweden. So that's what OP was getting at in part. Germany was somewhat revered at the time, as an impressive economic and cultural force. There was a lot of cultural overlap, and conversely the Germans respected the heritage and culture of Scandinavia and also of England, which it saw as a Germanic cousin.
The documentary did a good job of balancing the fact that Sweden let the German army and economy use its railways and iron ore for far longer than it should have, right up until it became finally too intolerable to support them in any way (discovery of the reality of the camps). Neutrality therefore is somewhat subjective in that respect.
They had precedent for neutrality, from previous conflicts where no side was favoured, so imo they weren't implicitly supporting the nazi movement, despite plenty of home support. It's a solid strategy from a game theory perspective. No mass bombings, few casualties, wait it out, be the adult in the room. Except they didn't know how bad it would get.
In their favour they allowed thousands of Norwegian resistance fighters to organise safely in Sweden. They offered safe harbour to thousands of Jewish refugees from all neighbouring occupied countries. They protected and supplied Finns too. British operatives somehow managed to work without hindrance on missions to take out German supplies moving through Sweden. It became a neutral safe space for diplomats, refugees and resistance fighters. And this was before they found out the worst of what was going on.
Later they took a stand, blocked German access and were among the first to move in and liberate the camps/offer red cross style support.
Imo it's a very nuanced situation and I'm probably more likely to give the benefit of the doubt at this point. But many Danes and Norwegians were displeased with the neutral stance as they battled to avoid occupation and deportations.
As for Japan, I'd just add that I read recently on the BBC that some 40% or more of the victims of the bombings were Koreans. As second class citizens they had to clean up the bodies and stayed among the radioactive materials far longer than native residents, who could move out to the country with their families. They live on now with intergenerational medical and social issues with barely a nod of recognition.
To think it takes the best part of 100 years for all of this to be public knowledge is testament to how much every participant wants to save face. But at what cost? The legacy of war lives on for centuries, it would seem.
And who were the teachers? Did it cost money, how much? How long ago? I guess the students were motivated and disciplined? Who were the other students? Natives, you mean Swedes?
Sorry, by natives I meant Japanese natives; a school for Japanese kids (kids of Japanese parents). Although I read that in Canada they recently removed that restriction, since there are now 3rd and 4th generation Canadians who teach Japanese to the kids.
The teachers were often Japanese teachers. Usually they taught locally (in Sweden) or had other jobs, but most of them had a teaching license (in Japan). My mother also taught there for a short time, and told me that the salary was very, very low (like $300 or something, per month) and people mostly did it out of passion or as part of the community thing.
I did a quick googling and right now the price seems to be $100 to enter the school, and around $850 per year. Not sure about the teachers' salary now or what it was back then.
Other students were either: half Swedish/Japanese, settled in Sweden; immigrants with both parents Japanese, settled in Sweden; or expat kids (usually in Sweden for a short time, 1-2 years, for work), both parents Japanese. The former two spoke both languages, the latter only spoke Japanese.
I have as much of a fundamental issue with “Saturday school” for children as I do with professionals thinking they should be coding on their days off. When do you get a chance to enjoy your childhood?
For many, coding can be fun and it's not an external obligation like eating veggies or going to the gym (relatedly, some also enjoy veggies and the gym).
Some people want to deeply immerse into a field. Yes, they sacrifice other ways of spending that time and they will be less well rounded characters. But that's fine. It's also fine to treat programming as a job and spend free time in regular ways like going for a hike or cinema or bar or etc.
And similarly, some kids, though this may not fully overlap with the parents who want their kids to be such, also enjoy learning, math, etc. Who love the structured activities and dread the free play time. I'd say yes, they should be pushed to do regular kid things to challenge themselves too, but you don't have to mold the kids too much against what their personality is like if it is functional and sustainable.
As a kid, the "fun" about Saturday school fluctuated. In the beginning it was super fun, after a while it became a chore (and I whined to my mom) but in the end I enjoyed it and it was tremendously valuable. The school had a lot of cultural activities (sport day, new years celebration / setsubun etc) and having a second set of classmates that shared a different side of you was actually fun for me. So it added an extra dimension of enjoyment in my childhood :)
Especially since (back then) being an (half) asian nerd kid in a 99.6% White (blonde & blue eyed) school meant a lot of ridicule and minor bullying. The saturday school classes were too small for bullying to not get noticed, and also served as a second community where you could share your stuff without ridicule or confusion :)
The experience made me think that it's tremendously valuable for kids to find multiple places (at least one outside school) where they can meet their peers. Doesn't have to be a school, but a hobby community, sport group, music groups, etc. Anything the kid might like, and there's shared interest.
It teaches kids that being liked by a random group of people (classmates) is not everything in life, and you increase the chance of finding like-minded people. Which reflects the rest of life better anyway (being surrounded by nerds is by far the best perk of being an engineer).
I know 2 classmates (out of 7) who hated it there, and since it's not mandatory they left after elementary school. So a parent should ofc check if the kids enjoy it (and if not, why) and let the kid have a say in it.
That's a very bad-faith take on what I wrote. I'll self-quote:
>The experience made me think that it's tremendously valuable for kids to find *multiple places* (at least one outside school) where they can meet their peers.
Most people don't neatly fit into "one" category. Trying to find many places where you could meet peers can open up your mind (and also the people around you).
There is a huge difference between not wanting to be around people who don’t agree with you about the benefits and drawbacks of supply side economics and not wanting to be around someone who disrespects you as a person because of the color of your skin.
Neither he (half Asian) nor I (a Black guy) owe the latter our time or energy to get along with them. Let them wallow in their own ignorance.
But it is a false dichotomy. You can both offer resources to the ones behind and support high achievers.
The latter can pretty much teach themselves with little hands on guidance, you just have to avoid actively sabotaging them.
Many western school systems fail that simple requirement in several ways: they force unchallenging work even when unneeded, don’t offer harder stimulating alternatives, fail to provide a safe environment due to the other student’s disruption…
Maybe you can have all quiet and focused students together in the same classroom?
They might be reading different books, at different speeds, and have different questions for the teachers. But when they focus and don't interrupt each other, that can be fine?
Noisy students who sabotage for everyone shouldn't be there though.
Grouping students on some combination of learning speed and ability to focus / not disturbing the others. Rather than only learning speed. Might depend on the size of the school (how many students)
For what it's worth, that's how the Montessori school I went to worked. I have my critiques of the full Montessori approach (too long for a comment), but the thing that always made sense was mixed age and mixed speed classrooms.
The main ideas that I think should be adopted are:
1. A "lesson" doesn't need to take 45 minutes. Often, the next thing a kid will learn isn't some huge jump. It's applying what they already know to an expanded problem.
2. Some kids just don't need as much time with a concept. As long as you're consistently evaluating understanding, it doesn't really matter if everyone gets the same amount of teacher interaction.
3. Grade level should not be a speed limit; it also shouldn't be a minimum speed (at least as currently defined). I don't think it's necessarily a problem for a student to be doing "grade 5" math and "grade 2" reading as a 3rd grader. Growth isn't linear; having a multi-year view of what constitutes "on track" can allow students to stay with their peers while also learning at an appropriate pace for their skill level.
Some of this won't be feasible to implement at the public school level. I'm a realist in the sense that student to teacher ratios limit what's possible. But I think when every education solution has the same "everyone in a class goes the same speed" constraint, you end up with the same sets of problems.
Counterintuitive argument: 'No one left behind' policies increase social segregation.
Universal education offers a social ladder. "Your father was a farmer, but you can be a banker, if you put in the work".
When you set a lower bar (like enforcing a safe environment), smart kids will shoot forward. Yes, statistically, a large part of successful kids will be the ones with better support networks, but you're still judging results, for which environment is just a factor.
When you don't set this lower bar, rich kids who can move away will do it, because no one places their children in danger voluntarily. Now the subset of successful kids from a good background will thrive as always, but successful kids from bad environments are stuck with a huge handicap and sink. You've made the ladder purely, rather than partly, based on wealth.
And you get two awful side effects on top:
- you're not teaching the bottom kids that violating the safety of others implies rejection. That's a rule enforced everywhere, from any workplace through romantic relationships to even prison, and kids are now unprepared for that.
- you've taught the rest of the kids to think of the bottom ones as potential abusers and disruptors. Good luck with the resulting classism and xenophobia when they grow up.
There will always be a gap between kids who are rich and smart (if school won't teach them, a tutor will) and kids who are stupid (no one can teach them). We can only choose which side of this gap will the smart poor kids stand on. The attempts to make everyone at school equal put them on the side with the stupid kids.
Not sure if counterintuitive or not, but once you have such social mobility-based policies in place ("Your father was a farmer, but you can be a banker, if you put in the work") for a few generations, generally people rise and sink to a level that will remain more stable for the later generations. Then even if you keep that same policy, the observation will be less social movement compared to generations before, and that will frustrate people, and they read it to mean that the policies are blocking social mobility.
You get most mobility after major upheavals like wars and dictatorships that strip people of property, or similar. The longer a liberal democratic meritocratic system is stable without upheavals and dispossession of the population through forced nationalization etc, the less effect the opportunities will have, because those same opportunities were already generally taken advantage of by the parent generation and before.
If everyone can't get a Nobel prize, no one should!
The so-called intelligent kids selfishly try to get ahead and build rockets or cure cancer, but they don't care about the feelings of those who can't build rockets or cure cancer. We need education to teach them that everyone is special in exactly the same way.
Ridiculous. Progress, by definition, is made by the people in front.
No one is saying to "focus solely on those ahead," but as long as resources are finite, some people will need to be left behind to find their own way. Otherwise those who can benefit from access to additional resources will lose out.
"Progress is made by the people in front" is plausibly true by definition.
"Progress is made by the people who were in front 15 years earlier" is not true by definition. (So: you can't safely assume that the people you need for progress are exactly the people who are doing best in school. Maybe some of the people who aren't doing so well there might end up in front later on.)
"Progress is made by the people who end up in front without any intervention" is not true by definition. (So: you can't safely assume that you won't make better progress by attending to people who are at risk of falling behind. Perhaps some of those people are brilliant but dyslexic, for a random example.)
"Progress is made by the people in front and everyone else is irrelevant to it" is not true by definition. (So: you can't safely assume that you will make most progress by focusing mostly on the people who will end up in front, even if you can identify who those are. Maybe their brilliant work will depend on a whole lot of less glamorous work by less-brilliant people.)
I strongly suspect that progress is made mostly by people who don't think in soundbite-length slogans.
Although in a global world, it's not clear that it's best for a country to focus on getting the absolute best, IF it means the average suffers from it. There is value in being the best, but for the economy it's also important to have enough good enough people to utilise the new technology/science (which gets imported from abroad), and they don't need to be the absolute best.
As a bit of a caricature example, if cancer is completely cured tomorrow, it's not necessarily the country inventing the cure which will be cancer free first, but the one with the most doctors able to use and administer the cure.
This is a false dichotomy though; as I linked previously in this thread, adult Swedes are above Koreans, and only slightly below the Japanese, in literacy, numeracy, and problem solving.
Personally I think it's easy to overestimate how important being good at something at 16 is for the skill at 25. A good university is infinitely more important than a 'super elite' high school.
So, here's a time machine. You can go back to a time and place of lasting, enduring stability. There have been numerous such periods in recorded history that have lasted for more than a human lifetime, and likely even more prior to that. (Admittedly a bit of a tautology, given that most 'recorded history' is a record of things happening rather than things staying the same.)
It will be a one-way trip, of course. What year do you set the dial to?
Ok, please surrender your cellphones, internet, steam, tools, writing, etc... all those were given to you by the best of the crop and not the median slop.
Most of what I remember of my high school education in France was: here are the facts, and here is the reasoning that got us there.
The exams were typically essay-ish (even in science classes) where you either had to basically reiterate the reasoning for a fact you already knew, or use similar reasoning to establish/discover a new fact (presumably unknown to you because not taught in class).
Unfortunately, it didn't work for me and I still have about the same critical thinking skills as a bottle of Beaujolais Nouveau.
I don't know if I have critical thinking or not. But I often question - WHY is this better? IS there any better way? WHY it must be done such a way or WHY such rule exists?
For example in electricity you need at least that amount of cross section if doing X amount of amps over Y length. I want to dig down and understand why? Ohh, the smaller the cross section, the more it heats! Armed with this info I get many more "Ohhs": Ohh, that's why you must ensure the connections are not loose. Oohhh, that's why an old extension cord where you don't feel your plug solidly clicks in place is a fire hazard. Ohh, that's why I must ensure the connection is solid when joining cables and doesn't lessen cross section. Ohh, that's why it's a very bad idea to join bigger cables with a smaller one. Ohh, that's why it is a bad idea to solve "my fuse is blowing out" by inserting a bigger fuse but instead I must check whether the cabling can support higher amperage (or check whether device has to draw that much).
And yeah, this "intuition" is kind of a discovery phase and I can check whether my intuition/discovery is correct.
Basically getting down to primitives lets me understand things more intuitively without trying to remember various rules or formulas. But I noticed my brain is heavily wired in not remembering lots of things, but thinking logically.
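To make that concrete, here is a minimal back-of-the-envelope sketch (Python, with illustrative values I'm assuming myself, not electrical-code guidance) of the cross-section intuition: for the same current, a thinner wire has more resistance and therefore more heat to shed.

```python
# Rough sketch of R = rho * L / A and P = I^2 * R for a copper run.
# The length, current and cross sections below are assumed, illustrative values.

RHO_COPPER = 1.68e-8  # approximate resistivity of copper, ohm * metre

def wire_resistance(length_m: float, cross_section_mm2: float) -> float:
    """R = rho * L / A, converting the cross section from mm^2 to m^2."""
    area_m2 = cross_section_mm2 * 1e-6
    return RHO_COPPER * length_m / area_m2

def dissipated_power(current_a: float, resistance_ohm: float) -> float:
    """P = I^2 * R: the heat the wire itself has to shed."""
    return current_a ** 2 * resistance_ohm

# A 25 m run carrying 16 A, at three different cross sections.
for mm2 in (2.5, 1.5, 0.75):
    r = wire_resistance(25, mm2)
    print(f"{mm2} mm^2: R = {r:.3f} ohm, heat = {dissipated_power(16, r):.1f} W")
```

Halving the cross section doubles the resistance, and at the same current the heat doubles with it, which is also why a loose joint (tiny effective contact area, so high local resistance) turns into a hot spot.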
We don't have enough time to go over things like this over and over again. Somebody already analyzed/tried all this and wrote it in a book, and they teach you in school from that book how it works and why. Yeah, if you want to know more or understand better you can always dig it out yourself. At least today you can learn tons of stuff.
We don't have enough time to derive everything from first principles, but we do have the time to go over how something was derived, or how something works.
A common issue when trying this is trying to teach all layers at the same level of detail. But this really isn't necessary. You need to know the equation for Ohm's law, but you can give very handwavy explanations for the underlying causes. For example: why do thicker wires have less resistance? Electricity is the movement of electrons; more cross section means more electrons can move, like having more lanes on a highway. Why does copper have less resistance than aluminum? Copper has an electron that isn't bound as tightly to the atom. How does electricity know which path has the least resistance? It doesn't, it starts flowing down all paths equally at a significant fraction of the speed of light, then quickly settles in a steady state described by Ohm's law. Reserve the equations and numbers for the layers that matter, but having a rough understanding of what's happening on the layer below makes it easier to understand the layer you care about, and makes it easier to know when that understanding will break down (because all of science and engineering are approximations with limited applicability).
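As a small, hedged illustration of that last point (Python, with made-up resistor values), here is what "settles into a steady state described by Ohm's law" looks like for parallel paths: the current doesn't pick the best path, it splits across all of them in inverse proportion to resistance.

```python
# Current divider: in DC steady state, parallel paths share the total current
# in proportion to their conductance (1/R). The resistances below are arbitrary.

def current_split(total_current_a: float, resistances_ohm: list[float]) -> list[float]:
    """Return the steady-state current through each parallel resistance."""
    conductances = [1.0 / r for r in resistances_ohm]
    total_conductance = sum(conductances)
    return [total_current_a * g / total_conductance for g in conductances]

paths = [10.0, 20.0, 40.0]            # three parallel paths, in ohms
currents = current_split(7.0, paths)  # 7 A flowing into the junction
for r, i in zip(paths, currents):
    print(f"{r:>5.1f} ohm path carries {i:.2f} A")
# The lowest-resistance path carries the most current, but none carries all of it.
```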
> How does electricity know which path has the least resistance? It doesn't, it starts flowing down all paths equally at a significant fraction of the speed of light, then quickly settles in a steady state described by Ohm's law.
> because all of science and engineering are approximations with limited applicability
Something I heard but haven't dug into, because my use case (DIY, home) doesn't care. In some other applications an approximation at this level may not work and a more detailed understanding may be needed :)
And yeah, some theory and telling of things others discovered for sure needs to be done. That is just the entry point for digging. And understanding how something was derived is just a tool for me to more easily remember/use the knowledge.
Are you being serious or is this satire? What an odd perspective to share on Hacker News. We're a bunch of nerds that take pleasure in understanding how things work when you take them apart, whether that's a physics concept or a washing machine. Or am I projecting an ethos?
On the contrary, the French "dissertation" exercise requires you to articulate reasoning and facts, and to come up with a plan for the explanation. It is the same kind of thinking that you are required to produce when writing a scientific paper.
It is, however, not taught very well by some teachers, who skimp on explaining how to properly do it, which might be your case.
On the contrary, your OP claims that dissertations require a rehash of the references cited in class. A real dissertation exercises logic and requires mobilizing facts and verbal precision to ground arguments. It is also highly teacher-dependent: if the correction is lax or not properly explained, you won’t understand what the exercise really is or how you are supposed to think in order to succeed.
Perhaps you overestimate me (or underestimate Beaujolais Nouveau (though how one could underestimate Beaujolais Nouveau is a mystery to me, but I digress)).
But also, it takes a lot of actual learning of facts and understanding reasoning to properly leverage that schooling and I've had to accept that I am somewhat deficient at both. :)
One thing I've come to understand about myself since my ADHD diagnosis is how hard thinking actually is for me. Especially thinking "to order", like problem solving or planning ahead. I'm great at makeshift solutions that will hold together until something better comes along. But deep and sustained thought for any length of time increases the chance that I'll become aware that I'm thinking and then get stuck in a fruitless meta cognition spiral.
An analogy occurred to me the other day that it's like diving into a burning building to rescue possessions. If I really go for it I could get lucky and retrieve a passport or pet, but I'm just as likely to come back with an egg whisk!
I think all this stuff is so complex and multi-faceted that we often get only a small part of the picture at a time.
I likely have some attention/focus issues, but I also know they vary greatly (from "can't focus at all" to "I can definitely grok this") based on how actually interested I am in a topic (and I often misjudge that actual level of interest).
I also know my very negative internal discourse, and my fixed mindset, are both heavily influenced by things that occurred decades ago, and keeping myself positively engaged in something by trying to at least fake a growth mindset is incredibly difficult.
Meanwhile, I'm perfectly willing to throw unreasonable brute force effort at things (ie I've done many 60+ hour weeks working in tech and bunches of 12 hour days in restaurant kitchens), but that's probably been simultaneously both my biggest strength and worst enemy.
At the same time, I don't think you should ignore the value of an egg whisk. You can use it to make anything from mayonnaise to whipped cream, not to mention beaten egg whites that have a multitude of applications. Meanwhile, the passport is easy enough to replace, and your pet (forgive me if I'm making the wrong assumption here) doesn't know how to use the whisk properly.
I’ve heard many bad things said of the Beaujolais Nouveau, and of my sense of taste for liking it, but this is the first time I’ve seen its critical-thinking skills questioned.
In its/your/our defense, I think it’s a perfectly smart wine, and young at heart!
> In the Swedish school system, the idea for the past 20 years has been exactly this, that is, to try to teach critical thinking, reasoning, problem solving, etc. rather than hard facts. The results have been... not great.
I'm not sure I'd agree that it's been outright "not great". I myself am the product of that precise school-system, being born in 1992 in Sweden (but now living outside the country). But I have vivid memories of some of the classes where we talked about how to learn, how to solve problems, critical thinking, reasoning, being critical of anything you read in newspapers, difference between opinions and facts, how propaganda works and so on. This was probably through year/class 7-9 if I remember correctly, and both me and others picked up on it relatively quick, and I'm not sure I'd have the same mindset today if it wasn't for those classes.
Maybe I was just lucky with good teachers, but surely there are others out there who also had a very different experience than what you outline? To be fair, I don't know how things are working today, but at least at that time it actually felt like I had use of what I was taught in those classes, compared to most other stuff.
In the world of software development I meet a breed of Swedish devs younger than 30 who can't write code very well, but who can wax lyrical about Jira tickets and software methodologies and do all sorts of things to get themselves into a management position without having to write code. The end result is toxic teams where the seniors and the devs brought in from India are writing all the code while all the juniors are playing software architect, scrum master and product owner.
Not everybody is like that; seniors tend to be reliable and practical, and some juniors with programming-related hobbies are extremely competent and reasonable. But the chunk of "waxers" is big enough to be worrying.
I have heard that in the Netherlands there used to be (not sure if it is still there) a system where you have, for example, 4 rooms of children. Room A contains all children that are ahead of rooms B, C, D. If a child from room B learns pretty quickly, the child is moved to room A. However, if the child falls behind the other children in room B, that child is moved to room C. Same for room C: those who cannot catch up are moved to room D. In this way everyone is learning at max capacity. Those who can learn faster and better are not slowed down by others who can not (or do not want to) keep the pace. Everyone is happy - children, teachers, parents, community.
Sweden ranks 19th in the PISA scores. And it is in the upper section of all education indexes. There has been a worldwide decline in scores, but that has nothing to do with the Swedish education system. (That does not mean that Sweden should not continue monitoring it and bringing improvements.)
Considering our past and the Finnish progress (they considered following us in the 80s/90s as they had done before, but stopped), 19th is a disappointment.
Having teenagers who have been through most of primary and secondary school, I kind of agree with GP, especially when it comes to math, etc.
Teaching concepts and ideas is _great_, and what we need to manage with advanced topics as adults. HOWEVER, if the foundations are shaky due to too little repetition of basics (which is seemingly frowned upon in the system), then being taught to think about some abstract concepts doesn't help much, because the tools to understand them aren't good enough.
One should note that from the nineties onwards we put a large portion of our kids' education on the stock exchange and in the hands of upper class freaks instead of experts.
I think there’s a balance to be had. My country (Spain) is the very opposite, with everything from university access to civil service exams being memory focused.
The result is usually bottom of the barrel in the subjects that don’t fit that model well, mostly languages and math - the latter being the main issue as it becomes a bottleneck for teaching many other subjects.
It also creates a tendency for people to take what they learn as truth, which becomes an issue when they use less reputable sources later in life - think for example a person taking a homeopathy course.
Lots of parroting and cargo culting paired with limited cultural exposure due to monolingualism is a bad combination.
Media can fill that gap. People should be critical about global warming, antivax, anti israel, anti communism, racism, hate, white men, anti democracy, russia, china, trump...
This thing is bad, I hate it, problem solved! Modern critical thinking is pretty simple!
In the future the government can provide a daily RSS feed of things to be critical about. You can reduce the national schooling system to a single VPS server!
The problem is, in a capitalist society, who is going to be the company that will donate their time and money to teaching a junior developer who will simply go to another company for double the pay after 2 years?
I think that’s a disingenuous take. Earlier in the piece the AWS CEO specifically says we should teach everyone the correct ways to build software despite the ubiquity of AI. The quote about creative problem solving was with respect to how to hire/get hired in a world where AI can let literally anyone code.
On a side note.. ya’ll must be prompt wizards if you can actually use the LLM code.
I use it for debugging sometimes to get an idea, or for a quick sketch of a UI.
As for actual code.. the code it writes is a huge mess of spaghetti code, overly verbose, with serious performance and security risks, and complete misunderstanding of pretty much every design pattern I give it..
I read AI coding negativity on Hacker News and Reddit with more and more astonishment every day. It's like we live in different worlds. I expect the breadth of tooling is partly responsible. What it means to you to "use the LLM code" could be very different from what it means to me. What LLM are we talking about? What context does it have? What IDE are you using?
Personally, I wrote 200K lines of my B2B SaaS before agentic coding came around. With Sonnet 4 in Agent mode, I'd say I now write maybe 20% of the ongoing code from day to day, perhaps less. Interactive Sonnet in VS Code and GitHub Copilot Agents (autonomous agents running on GitHub's servers) do the other 80%. The more I document in Markdown, the higher that percentage becomes. I then carefully review and test.
People here work in all kinds of industries. Some of us are implementing JIT compilers, mission-critical embedded systems or distributed databases. In code bases like this you can't just wing it without breaking a million things, so LLM agents tend to perform really poorly.
Yes, it would be nice to have a lot more context (pun intended) when people post how many LoC they introduced.
B2B SaaS? Then can I assume that a browser is involved and that a big part of that 200k LoC is the verbose styling DSL we all use? On the other hand, Nginx, a production-grade web server, is 250k LoC (251,232 to be exact [1]). These two things are not comparable.
The point being that, as I'm sure we all agree, LoC is not a helpful metric for comparison without more context, and different projects have vastly different amounts of information/feature density per LoC.
I primarily work in C# during the day but have been messing around with simple Android TV dev on occasion at night.
I’ve been blown away sometimes at what Copilot puts out in the context of C#, but using ChatGPT (paid) to get me started on an Android app - totally different experience.
Stuff like giving me code that’s using a mix of different APIs and sometimes just totally non-existent methods.
With Copilot, I find it's sometimes brilliant, but it's so random as to when that will be, it seems.
> Stuff like giving me code that’s using a mix of different APIs and sometimes just totally non-existent methods.
That has been my experience as well. We can control the surprising pick of APIs with basic prompt files that clarify what and how to use in your project. However, when using less-than-popular tools whose source code is not available, the hallucinations are unbearable and a complete waste of time.
The lesson to be learned is that LLMs depend heavily on their training set, and in a simplistic way they at best only interpolate between the data they were fed. If an LLM is not trained with a corpus covering a specific domain, then you can't expect usable results from it.
This brings up some unintended consequences. Companies like Microsoft will be able to create incentives to use their tech stack by training their LLMs with a very thorough and complete corpus on how to use their technologies. If Copilot does miracles outputting .NET whereas Java is unusable, developers have one more reason to adopt .NET to lower their cost of delivering and maintaining software.
Pretty ironic you and the GP talk about lines of code.
From the article:
Garman is also not keen on another idea about AI – measuring its value by what percentage of code it contributes at an organization.
“It’s a silly metric,” he said, because while organizations can use AI to write “infinitely more lines of code” it could be bad code.
“Often times fewer lines of code is way better than more lines of code,” he observed. “So I'm never really sure why that's the exciting metric that people like to brag about.”
I'm with Garman here. There's no clean metric for how productive someone is when writing code. At best, this metric is naive, but usually it is just idiotic.
Bureaucrats love LoC, commits, and/or Jira tickets because they are easy to measure, but here's the truth: to measure the quality of code you have to be capable of producing said code at (approximately) said quality or better. Data isn't just "data" that you can treat as a black box and throw into algorithms. Data requires interpretation and there's no "one size fits all" solution. Data is nothing without its context. It is always biased, and if you avoid nuance you'll quickly convince yourself of falsehoods. Even with expertise it is easy to convince yourself of falsehoods. Without expertise it is hopeless. Just go look at Reddit or any corner of the internet where there are armchair experts confidently talking about things they know nothing about. It is always void of nuance and vastly oversimplified. But humans love simplicity. We need to recognize our own biases.
> Pretty ironic you and the GP talk about lines of code.
I was responding specifically to the comment I replied to, not the article, and mentioning LoC as a specific example of things that don't make sense to compare.
Made me think of a post from a few days ago where Pournelle's Iron Law of Bureaucracy was mentioned[0]. I think vibe coders are the second group. "dedicated to the organization itself" as opposed to "devoted to the goals of the organization". They frame it as "get things done" but really, who is not trying to get things done? It's about what is getting done and to what degree is considered "good enough."
On the other hand, fault-intolerant codebases are also often highly defined and almost always have rigorous automated tests already, which are two contexts where coding agents specifically excel.
We really need to add some kind of risk to people making these claims to make it more interesting. I listened to the type of advice you're giving here on more occasions than I can remember, at least once for every major revision of every major LLM and always walked away frustrated because it hindered me more than it helped.
> This is actually amazing now, just use [insert ChatGPT, GPT-4, 4.5, 5, o1, o3, Deepseek, Claude 3.5, 3.9, Gemini 1, 1.5, 2, ...] it's completely different from Model(n-1) you've tried.
I'm not some mythical 140 IQ 10x developer and my work isn't exceptional so this shouldn't happen.
The dark secret no one from the big providers wants to admit is that Claude is the only viable coding model. Everything else descends into a mess of verbose spaghetti full of hallucinations pretty quickly. Claude is head and shoulders above the rest and it isn't even remotely close, regardless of what any benchmark says.
I've tried about four others, and while to some extent I always marveled at the capabilities of the latest and greatest, I had to concede they didn't make me faster. I think Claude does.
That poster isn't comparing models, he's comparing Claude Code to Cline (two agentic coding tools), both using Claude Sonnet 4. I was pretty much in the same boat all year as well; using Cline heavily at work ($1k+/month token spend) and I was sold on it over Claude Code, although I've just recently made the switch, as Claude Code has a VSCode extension now. Whichever agentic tooling you use (Cline, CC, Cursor, Aider, etc.) is still a matter of debate, but the underlying model (Sonnet/Opus) seems to be unanimously agreed on as being in a league of its own, and has been since 3.5 released last year.
I've been working on macOS and Windows drivers. Can't help but disagree.
Because of the absolute dearth of high-quality open-source driver code and the huge proliferation of absolutely bottom-barrel general-purpose C and C++, the result is... Not good.
On the other hand, I asked Claude to convert an existing, short-ish Bash script to idiomatic PowerShell with proper cmdlet-style argument parsing, and it returned a decent result that I barely had to modify or iterate on. I was quite impressed.
Garbage in, garbage out. I'm not altogether dismissive of AI and LLMs but it is really necessary to know where and what their limits are.
I found the opposite - I am able to get a 50% improvement in productivity for day to day coding (a mix of backend and frontend), mostly in JavaScript, but it has helped in other languages too. You have to review carefully though - and have extremely well-written test cases if you have to blindly generate or replace existing code.
> In code bases like this you can't just wing it without breaking a million things, so LLM agents tend to perform really poorly.
This is a false premise. LLMs themselves don't force you to introduce breaking changes into your code.
In fact, the inception of coding agents was lauded as a major improvement to the developer experience because they allow the LLMs themselves to automatically react to feedback from test suites, thus speeding up how code was implemented while preventing regressions.
If tweaking your code can result in breaking a million things, this is a problem with your code and how you worked to make it resilient. LLMs are only able to introduce regressions if your automated tests are unable to catch any of these millions of things breaking. If this is the case then your problems are far greater than LLMs existing, and at best LLMs only point out the elephant in the room.
I am now making an emotional reaction based on zero knowledge of the B2B codebase's environment, but to be honest I think it is relevant to the discussion on why people are "worlds apart".
200k lines of code is a failure state. At this point you have lost control and can only make changes to the codebase through immense effort, and not at a tolerable pace.
Agentic code writers are good at giving you this size of mess and at helping to shovel stuff around to make changes that are hard for humans due to the unusable state of the codebase.
If overgrown, barely manageable codebases are all a person has ever known, and they think it's normal that changes are hard and time-consuming and need reams of code, I understand that they believe AI agents are useful as code writers. I think they do not have the foundation to tell mediocre from good code.
I am extremely aware of the judgemental hubris of this comment. I'd not normally huff my own farts in public this obnoxiously, but I honestly feel it is useful for the "AI hater vs AI sucker" discussion to be honest about this type of emotion.
If all your code depends on all your other code, yeah 200k lines might be a lot. But if you actually know how to code, I fail to understand why 200k lines (or any number) of properly encapsulated well-written code would be a problem.
Further, if you yourself don't understand the code, how can you verify that using LLMs to make major sweeping changes doesn't mess anything up, given that they are notorious for making random errors?
It really depends on what your use case is. E.g. if you're dealing with a lot of legacy integrations, handling all the edge cases can require a lot of code that you can't refactor away through cleverness.
Each integration is hopefully only a few thousand lines of code, but if you have 50 integrations you can easily break 100k loc just dealing with those. They just need to be encapsulated well so that the integration cruft is isolated from the core business logic, and then they become relatively simple to reason about.
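To make that concrete, here's a minimal sketch of the kind of encapsulation I mean, in Python with hypothetical names (the vendor client, field names, and Invoice model are illustrative, not from any real codebase): the adapter absorbs the vendor's quirks so the core logic only ever sees the clean domain type.

    from dataclasses import dataclass
    from typing import Protocol

    @dataclass
    class Invoice:
        # Core domain model: business logic only ever sees this shape.
        customer_id: str
        amount_cents: int

    class InvoiceSource(Protocol):
        # The boundary the core code depends on; one implementation per integration.
        def fetch_open_invoices(self) -> list[Invoice]: ...

    class LegacyVendorAdapter:
        """Absorbs one vendor's quirks (odd field names, string amounts, etc.)."""

        def __init__(self, client):  # `client` stands in for the vendor SDK (hypothetical)
            self._client = client

        def fetch_open_invoices(self) -> list[Invoice]:
            raw = self._client.get("/v1/openInvoices")          # vendor-specific call
            return [
                Invoice(
                    customer_id=item["custRef"],                 # vendor's field name
                    amount_cents=int(float(item["amt"]) * 100),  # vendor sends dollars as a string
                )
                for item in raw
            ]

The cruft (field renames, unit conversions, weird endpoints) stays inside the adapter; even with 50 of these, the core logic never grows more complicated.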
What on earth are you talking about? This is unavoidable for many use-cases, especially ones that involve interacting with the real world in complex ways. It's hardly a marker of failure (or success, for that matter) on its own.
200k loc is not a failure state. Suppose your B2B SaaS has 5 user types and 5 downstream SaaSes it connects to; that's 20k loc per major programming unit. Not so bad.
I agree in principle, and I'm sure many of us know how much of a pain it is to work on million- or even billion-dollar codebases, where even small changes can mean weeks of bureaucracy and hours of meetings.
But with the way the industry is, I'm also not remotely surprised. We have people come and go as they are poached, burn out, or simply get pulled away by life circumstances. The training for new people isn't the best, and the documentation anywhere but the large companies is probably a mess. We also don't tend to encourage periods focused on properly addressing tech debt, just on delivering features. I don't know how such an environment, over years and decades, wouldn't generate so many redundant, clashing, and quirky interactions. The culture doesn't allow much alternative.
And of course, I hope even the most devout AI evangelists realize that AI will only multiply this culture. Code that no one may even truly understand, but "it works". I don't know if even Silicon Valley (2014) could have made a parody more shocking than the reality this will yield.
Ones that can remediate it though. If I am capable of safely refactoring 1,000 copies of a method, in a codebase that humans don’t look at, did it really matter if the workload functions as designed?
In a type-safe language like C# or Java, why would you need an LLM for that? It's a standard, guaranteed-safe refactor (as long as you aren't using reflection) with ReSharper.
Liability vs asset is what you were trying to say, I think, but everyone says that, so to be charitable I think you were trying to put a new spin on the phrasing, which I think is admirable, to your credit.
There is definitely a divide in users - those for whom it works and those for whom it doesn't. I suspect it comes down to what language and what tooling you use. People doing web-related or Python work seem to be doing much better than people doing embedded C or C++. Similarly, doing C++ in a popular framework like Qt also yields better results. When the system design is not pre-defined or rigid like it is in Qt, you get completely unmaintainable code as a result.
If you are writing code that is/can be "heavily borrowed" - things that have complete examples on Github, then an LLM is perfect.
While I agree that AI assisted coding probably works much better for languages and use cases that have a lot more relevant training data, when I read comments from people who like LLM assisted coding vs. those that don't, I strongly get the impression that the difference has a lot more to do with the programmers than their programming language.
The primary difference I see in people who get the most value from AI tools is that they expect it to make mistakes: they always carefully review the code and are fine with acting, in some cases, more like an editor than an author. They also seem to have a good sense of where AI can add a lot of value (implementing well-defined functions, writing tests, etc.) vs. where it tends to fall over (e.g. tasks where large scale context is required). Those who can't seem to get value from AI tools seem (at least to me) less tolerant of AI mistakes, and less willing to iterate with AI agents, and they seem more willing to "throw the baby out with the bathwater", i.e. fixate on some of the failure cases but then not willing to just limit usage to cases where AI does a better job.
To be clear, I'm not saying one is necessarily "better" than the other, just that the reason for the dichotomy has a lot more to do with the programmers than the domain. For me personally, while I get a lot of value in AI coding, I also find that I don't enjoy the "editing" aspect as much as the "authoring" aspect.
If it takes 10x the time to do something, did you learn 10x as much? I don't mind repetition, I learned that way for many years and it still works for me. I recently made a short program using ai assist in a domain I was unfamiliar with. I iterated probably 4x. Iterations were based on learning about the domain both from the ai results that worked and researching the parts that either seemed extraneous or wrong. It was fast, and I learned a lot. I would have learned maybe 2x more doing it all from scratch, but I would have taken at least 10x the time and effort to reach the result, because there was no good place to immerse myself. To me, that is still useful learning and I can do it 5x before I have spent the same amount of time.
It comes back to other people's comments about acceptance of the tooling. I don't mind the somewhat messy learning methodology - I can still wind up at a good result quickly, and learn. I don't mind that I have to sort of beat the AI into submission. It reminds me a bit of part lecture, part lab work. I enjoy working out where it failed and why.
The fact is that most people skip learning about what works (learning is not mentally cheap). I've seen teammates just trying stuff (for days) until something kinda works instead of spending 30 minutes doing research. The fact is that LLMs are good at producing something that looks correct, and that wastes the reviewer's time. It's harder to review something than to write it from scratch.
Learning is also exponential, the more you do it, the faster it is, because you may already have the foundations for that particular bit.
> I strongly get the impression that the difference has a lot more to do with the programmers than their programming language.
The problem with this perspective is that anyone who works in more niche programming areas knows the vast majority of programming discussion online isn't relevant to them. E.g., I've done macOS/iOS programming most of my career, and I now do work that's an order of magnitude more niche than that, and I commonly see programmers saying things like "you shouldn't use a debugger", which is a statement that I can't imagine a macOS or iOS programmer making (don't get me wrong, they're probably out there; I've just never met or encountered one). So you just become used to most programming conversations being irrelevant to your work.
So of course the majority of AI conversations aren't relevant to your work either, because that's the expectation.
I think a lot of these conversations are two people with wildly different contexts trying to communicate, which is just pointless. Really we just shouldn't be trying to participate in these conversations (the more niche programmers that is), because there's just not enough shared context to make communication effective.
We just all happen to fall under this same umbrella of "programming", which gives the illusion of a shared context. It's true there are some things that are relevant across the field (it's all just variables, loops, and conditionals), but many of the other details aren't universal, so it's silly to talk about them without first understanding the full context around the other person's work.
> and I commonly see programmers saying thing like "you shouldn't use a debugger"
Sorry, but who TF says that? This is actually not something I hear commonly, and if it were, I would just discount this person's opinion outright unless there were some other special context here. I do a lot of web programming (Node, Java, Python primarily) and if someone told me "you shouldn't use a debugger" in those domains I would question their competence.
No one likes to hear it, but it comes down to prompting skill. People who are terrible at communicating and delegating complex tasks will be terrible at prompting.
It's no secret that a lot of engineers are bad at this part of the job. They prefer to work alone (i.e. without AI) because they lack the ability to clearly and concisely describe problems and solutions.
This. I work with juniors who have no idea what a spec is, and the idea of designing precisely what a component should do, especially in error cases, is foreign to them.
> If you are writing code that is/can be "heavily borrowed" - things that have complete examples on Github, then an LLM is perfect.
I agree with the general premise. There is however more to it than "heavily borrowed". The degree to which a code base is organized and structured and curated plays as big of a role as what framework you use.
If your project is a huge pile of unmaintainable and buggy spaghetti code then don't expect an LLM to do well. If your codebase is well structured, clear, and follows patterns systematically, then of course a glorified pattern-matching service will do far better at outputting acceptable results.
There is a reason why one of the most basic vibecoding guidelines is to include a prompt cycle to clean up and refactor code between introducing new features. LLMs fare much better when the project in their context is in line with their training. If you refactor your project to align it with what a LLM is trained to handle, it will do much better when prompted to fill in the gaps. This goes way beyond being "heavily borrowed".
I don't expect your average developer struggling with LLMs to acknowledge this fact, because then they would need to explain why their work is unintelligible to a system trained on vast volumes of code. Garbage in, garbage out. But who exactly created all the garbage going in?
> When the system design is not pre-defined or rigid like
Why would a LLM be any worse building from language fundamentals (which it knows, in ~every language)? Given how new this paradigm is the far more obvious and likely explanation seems to be: LLM powered coding requires somewhat different skills and strategies. The success of each user heavily depends on their learning rate.
I suspect it comes down to how novel the code you are writing is and how tolerant of bugs you are.
People who use it to create a proof of concept of something that is in the LLM training set will have a wildly different experience to somebody writing novel production code.
Even there the people who rave the most rave about how well it does boilerplate.
I think there are still lots of code "artisans" who are completely dogmatic about what code should look like. Once the tunnel vision goes and you realise the code just enables the business, it all of a sudden becomes a godsend for velocity.
There are very good reasons that code should look a certain way; they come from years of experience and from the fact that code is written once but read and modified many more times.
When the first bugs come up you see that the velocity was not god-sent, and you end up hiring one of the many "LLM code fixer" companies that are popping up like mushrooms.
No, they're not. It's critically important if you're part of an engineering team.
If everyone does their own thing, the codebase rapidly turns to mush and is unreadable.
And you need humans to be able to read it the moment the code actually matters and needs to stand up to adversaries. If you work with money or personal information, someone will want to steal that. Or you may have legal requirements you have to meet.
You've made a sweeping statement there; there are swathes of teams working in startups still trying to find product-market fit. Focusing on quality in those situations is folly, but that's not even the point. My point is you can ship quality to any standard using an LLM, even your standards. If you can't, that's a skill issue on your part.
Two years in and we are waiting to see all you people (who are free of our tunnel vision) fly high with your velocity. I don't see anyone, am I doing something wrong?
Your words predict an explosion of unimaginable magnitude in new code and new businesses. Where is it? Nowhere.
Edit: And don't start about how you vibed a SaaS service; show income numbers from paying customers (not buyouts).
The author of the library (kentonv) comments in the HN thread that he said it took him a few days to write the library with AI help, while he thinks it would have taken weeks or months to write manually.
Also, while it may be technically true we're "two years in", I don't think this is a fair assessment. I've been trying AI tools for a while, and the first time I felt "OK, now this is really starting to enhance my velocity" was with the release of Claude 4 in May of this year.
But that example is of writing a green field library that deals with an extremely well documented spec. While impressive, this isn’t what 99% of software engineering is. I’m generally a believer/user but this is a poor example to point at and say “look, gains”.
Perhaps I'm misreading the person to whom you're replying, but usefulness, while subjective, isn't typically based on one person's opinion. If enough people agree on the usefulness of something, we as a collective call it "useful".
Perhaps we take the example of a blender. There's enough need to blend/puree/chop food-like items that a large group of people agree on the usefulness of a blender. A salad-shooter, while a novel idea, might not be seen as "useful".
Creating software that most folks wouldn't find useful still might be considered "neat" or "cool". But it may not be adding anything to the industry. The fact that someone shipped something quickly doesn't make it any better.
Ultimately, or at least in this discussion, we should decouple the software’s end use from the question of whether it satisfies the creator’s requirements and vision in a safe and robust way. How you get there and what happens after are two different problems.
It's not for nothing. When a profitable product can be created in a fraction of the time and effort previously required, the tool to create it will attract scammers and grifters like bees to honey. It doesn't matter if the "business" around it fails, if a new one can be created quickly and cheaply.
This is the same idea behind brands with random letters selling garbage physical products, only applied to software.
I have insight into enough code bases to know its a non zero number. Your logic is bizarre, if you’ve never seen a kangaroo would you just believe they don’t exist?
Show us the numbers, stop wasting our time. NUMBERS.
Also, why would I ever believe kangaroos exist if I haven't seen any evidence of them? This is a fallacy. You are portraying healthy skepticism as stupid because you already know kangaroos exist.
What numbers? It doesn’t matter if it’s one or a million, it’s had a positive impact on the velocity of a non zero number of projects.
You wrote:
> Two years in and we are waiting to see all you people (who are free of our tunnel vision) fly high with your velocity. I don't see anyone, am I doing something wrong?
Yes is the answer. I could probably put it in front of your face and you’d reject it. You do you. All the best.
The issue is not with how code looks. It's with what it does, and how it does it. You don't have to be an "artisan" to notice the issues moi2388 mentioned.
The actual difference is between people who care about the quality of the end result, and the experience of users of the software, and those who care about "shipping quickly" no matter the state of what they're producing.
This difference has always existed, but ML tools empower the latter group much more than the former. The inevitable outcome of this will be a stark decline of average software quality, and broad user dissatisfaction. While also making scammers and grifters much more productive, and their scams more lucrative.
It's interesting how LLM enthusiasts will point to problems like IDE, context, model etc. but not the one thing that really matters:
Which problem are you trying to solve?
At this point my assumption is they learned that talking about this question will very quickly reveal that "the great things I use LLMs for" are actually personal throwaway pieces, not to be extended above triviality or maintained over longer than a year. Which, I guess, doesn't make for a great sales pitch.
It's amazing to make small custom apps and scripts, and they're such high quality (compared to what I would half-ass write and never finish/polish them) that they don't end up as "throwaway", I keep using them all the time. The LLM is saving me time to write these small programs, and the small programs boost my productivity.
Often, I will solve a problem in a crappy single-file script, then feed it to Claude and ask to turn it into a proper GUI/TUI/CLI, add CI/CD workflows, a README, etc...
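The "proper CLI" part usually amounts to something like this minimal sketch (a hypothetical duplicate-file lister, not one of my actual scripts): argument parsing, a main() with an exit code, and no hard-coded paths.

    import argparse
    import sys
    from pathlib import Path

    def find_duplicates(root: Path) -> list[Path]:
        # Stand-in for whatever the original single-file script actually did:
        # naively flag files whose size matches one we've already seen.
        seen: dict[int, Path] = {}
        dupes: list[Path] = []
        for path in root.rglob("*"):
            if path.is_file():
                size = path.stat().st_size
                if size in seen:
                    dupes.append(path)
                else:
                    seen[size] = path
        return dupes

    def main() -> int:
        parser = argparse.ArgumentParser(description="List likely duplicate files.")
        parser.add_argument("root", type=Path, help="directory to scan")
        parser.add_argument("--verbose", action="store_true")
        args = parser.parse_args()

        dupes = find_duplicates(args.root)
        for p in dupes:
            print(p)
        if args.verbose:
            print(f"{len(dupes)} likely duplicates found", file=sys.stderr)
        return 1 if dupes else 0

    if __name__ == "__main__":
        sys.exit(main())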
I was very skeptical and reluctant of LLM assisted coding (you can look at my history) until I actually tried it last month. Now I am sold.
At work I need often smaller, short lived scripts to find this or that insight, or to use visualization to render some data and I find LLMs very useful at that.
A non-coding topic, but recently I had difficulty articulating a summarized state of a complex project, so I spoke for 2 minutes into the microphone and it gave me a pretty good list of accomplishments, todos, and open points.
Some colleagues have found them useful for modernizing dependencies of micro services or to help getting a head start on unit test coverage for web apps. All kinds of grunt work that’s not really complex but just really moves quite some text.
I agree it’s not life changing, but a nice help when needed.
I use it to do all the things that I couldn't be bothered to do before. Generate documentation, dump and transform data for one off analyses, write comprehensive tests, create reports. I don't use it for writing real production code unless the task is very constrained with good test coverage, and when I do it's usually to fix small but tedious bugs that were never going to get prioritized otherwise.
And also ask: "How much money do you spend on LLMs?"
In the long run, that is going to be what drives their quality. At some point the conversation is going to evolve from whether or not AI-assisted coding works to what the price point is to get the quality you need, and whether or not that price matches its value.
I deal with a few code bases at work and the quality differs a lot between projects and frameworks.
We have 1-2 small python services based on Flask and Pydantic, very structured and a well-written development and extension guide. The newer Copilot models perform very well with this, and improving the dev guidelines keep making it better. Very nice.
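The shape is roughly this minimal sketch (a hypothetical /widgets endpoint; Flask 2+ and Pydantic v2 method names assumed): validation is declared once on the model and the route stays thin, which is exactly the kind of repetitive structure the assistant picks up on.

    from flask import Flask, jsonify, request
    from pydantic import BaseModel, ValidationError

    app = Flask(__name__)

    class WidgetRequest(BaseModel):
        # Validation is declared once here instead of as scattered ad-hoc checks.
        name: str
        quantity: int = 1

    @app.post("/widgets")
    def create_widget():
        try:
            payload = WidgetRequest.model_validate(request.get_json())
        except ValidationError as err:
            return jsonify({"errors": err.errors()}), 400
        # ... persist `payload` here ...
        return jsonify(payload.model_dump()), 201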
We also have a central configuration of applications in the infrastructure and what systems they need. A lot of similarly shaped JSON files, now with a well-documented JSON schema (which is nice to have anyway). Again, very high quality. Someone recently joked we should throw these service requests at a model and let it create PRs to review.
But currently I'm working in Vector and its Vector Remap Language (VRL)... it's enough of a mess that I'm faster working without any Copilot "assistance". I think the main issue is that there is very little VRL code out in the open, and the remaps depend on a lot of unseen context, which one would have to work on giving to the LLM. I've had similar experiences with OPA and a few more of these DSLs.
There is the huge variance in prompt specificity as well as the subtle differences inherent to the models. People often don't give examples when they talk about their experiences with AI so it's hard to get a read on what a good prompt looks like for a given model or even what a good workflow is for getting useful code out of it.
Some gave examples. Some even recorded it and showed it, because they thought they were good with it. But they weren't good at all.
They were slower than coding by hand, if you wanted to keep quality. Some were almost as quick as copy-pasting from the code just above the generated one, but their quality was worse. They even kept some bugs in the code during their reviews.
So the different world is probably about what an acceptable level of quality means. I know a lot of coders who don't give a shit whether what they're doing makes sense, or what their bad solution will cause in the long run. They ignore everything except the "done" state next to their tasks in Jira. They will never solve complex bugs; they simply don't care enough. At a lot of places, they are the majority. For them, an LLM can be an improvement.
Claude Code the other day made a test for me which mocked everything out from the live code. Everything was green, everything was good - on paper. A lot of people simply wouldn't care to even review it properly. That thing can generate a few thousand lines of semi-usable code per hour; it's not built to review it properly. Serena MCP, for example, is specifically built not to review what it does - that's stated by its creators.
Honestly, I think LLMs really shine when you're first getting into a language.
I just recently got into JavaScript and TypeScript, and being able to ask the LLM how to do something and get some sources and linked examples is really nice.
However, using it in a language I'm much more familiar with really decreases the usefulness. Even more so when your code base is mid to large sized.
I have scaffolded projects using LLMs in languages I don't know and I agree that it can be a great way to learn as it gives you something to iterate on. But that is only if you review/rewrite the code and read documentation alongside it. Many times LLMs will generate code that is just plain bad and confusing even if it works.
I find that LLM coding requires more in-depth understanding, because rather than just coming up with a solution you need to understand the LLM's solution and decide whether the complexity is necessary, because it will add structures, defensive code, and more that you wouldn't add if you coded it yourself. It's way harder to judge whether some code is necessary or is the correct way to do something.
This is the one place where I find real value in LLMs. I still wouldn't trust them as teachers because many details are bound to be wrong and potentially dangerous, but they're great initial points of contact for self-directed learning in all kinds of fields.
Yeah this is where I find a lot of value. Typescript is my main language, but I often use C++ and Python where my knowledge is very surface level. Being able to ask it "how do I do ____ in ____" and getting a half decent explanation is awesome.
I'm convinced that for coding we will have to use some sort of TDD or enhanced requirements framework to get the best code. Even in human-made systems the quality is highly dependent on the specificity of the requirements and the engineer's ability to probe the edge cases. Something like writing all the tests first (even in something like cucumber) and having the LLM write code to get them to pass would likely produce better code, even though most devs hate the test-first paradigm.
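Concretely, the flow would be something like this minimal sketch (a hypothetical pricing module; pytest instead of cucumber for brevity): a human writes or at least reviews the failing tests first, and the LLM is only asked to produce code that makes them pass.

    # tests/test_discount.py -- written before any implementation exists
    import pytest

    from pricing import apply_discount  # hypothetical module the LLM is asked to produce

    def test_percentage_discount_is_applied():
        assert apply_discount(price_cents=10_000, percent=10) == 9_000

    def test_discount_never_goes_below_zero():
        assert apply_discount(price_cents=500, percent=150) == 0

    def test_negative_percent_is_rejected():
        with pytest.raises(ValueError):
            apply_discount(price_cents=500, percent=-5)

The tests become the spec the agent iterates against, instead of the agent grading its own homework.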
My AI experience has varied wildly depending on the problem I'm working on. For web apps in Python, they're fantastic. For hacking on old engineering calculation code written in C/C++, it's an unmitigated disaster and an active hindrance.
Just last week I asked Copilot to make a FastCGI client in C. Five times it gave me code that did not compile. After some massaging I got it to compile; it didn't work. After some changes, it worked. Then I said "I do not want to use libfcgi, just want a simple implementation". After already an hour of wrestling, I realized the whole thing blocks, and I want no blocking calls... half an hour later I was still fighting it, slowly getting there. I looked at the code: a total mess.
I deleted it all and wrote from scratch a 350-line file which works.
At some point it becomes easier to just write the code. If the solution was 350 lines, then I'm guessing it was far easier for them to just write that rather than tweak instructions, find examples, etc. to cajole the AI into writing workable code (which would then need to be reviewed and tweaked if doing it properly).
It’s not just you, I think some engineers benefit a lot from AI and some don’t. It’s probably a combination of factors including: AI skepticism, mental rigidity, how popular the tech stack is, and type of engineering. Some problems are going to be very straightforward.
I also think it’s that people don’t know how to use the tool very well. In my experience I don’t guide it to do any kind of software pattern or ideology. I think that just confuses the tool. I give it very little detail and have it do tasks that are evident from the code base.
Sometimes I ask it to do rather large tasks and occasionally the output is like 80% of the way there and I can fix it up until it’s useful.
* Code using sympy to generate math problems testing different skills for students, with difficulty values affecting what kinds of things are selected, and various transforms to problems possible (e.g. having to solve for z+4 of 4a+b instead of x) to test different subskills
(On this part, the LLM did pretty well. The code was correct after a couple of quick iterations, and the base classes and end-use interfaces are correct. There are a few things in the middle that are unnecessarily "superstitious" and check for conditions that can't happen, so I need to work with the LLM to clean it up. A rough sketch of the shape of this code appears after the list below.)
* Code to use IRT to estimate the probability that students have each skill and to request problems with appropriate combinations of skills and difficulties for each student.
(This was somewhat garbage. Good database & backend, but the interface to use it was not nice and it kind of contaminated things).
* Code to recognize QR codes in the corners of the worksheet, find answer boxes, and feed the image to ChatGPT to determine whether the scribble in the box is the answer in the correct form.
(This was 100%, first time. I adjusted the prompt it chose to better clarify my intent in borderline cases).
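For the sympy item above, here's a rough sketch of the generator/transform split I mean (the problem type, difficulty scaling, and names are illustrative, not the actual code):

    import random
    import sympy as sp

    def linear_problem(difficulty: int = 1):
        """Generate a 'solve for x' problem; higher difficulty widens the coefficients."""
        x = sp.symbols("x")
        a = random.randint(1, 3 * difficulty)
        b = random.randint(-5 * difficulty, 5 * difficulty)
        c = random.randint(-5 * difficulty, 5 * difficulty)
        equation = sp.Eq(a * x + b, c)
        solution = sp.solve(equation, x)[0]
        return equation, solution

    def shifted_target(equation, shift: int = 4):
        """Subskill variant: ask the student for x + shift instead of x itself."""
        x = sp.symbols("x")
        return sp.solve(equation, x)[0] + shift

    eq, sol = linear_problem(difficulty=2)
    print(f"Solve {eq}: x = {sol}, x + 4 = {shifted_target(eq)}")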
The output was, overall, pretty similar to what I'd get from a junior engineer under my supervision-- a bit wacky in places that aren't quite worth fixing, a little bit of technical debt, a couple of things more clever that I didn't expect myself, etc. But I did all of this in three hours and $12 expended.
The total time supervising it was probably similar to the amount of time spent supervising the junior engineer... but the LLM turns things around quick enough that I don't need to context switch.
I think it's fair to call code LLM's similar to fairly bad but very fast juniors that don't get bored. That's a serious drawback but it does give you something to work with. What scares me is non-technical people just vibe coding because it's like a PM driving the same juniors with no one to give sanity checks.
> I also think it’s that people don’t know how to use the tool very well.
I think this is very important. You have to look at what it suggests critically, and take what makes sense. The original comment was absolutely correct that AI-generated code is way too verbose and disconnected from the realities of the application and large-scale software design, but there can be kernels of good ideas in its output.
I think a lot of it is tool familiarity. I can do a lot with Cursor but frankly I find out about "big" new stuff every day like agents.md. If I wasn't paying attention or also able to use Cursor at home then I'd probably learn more inefficiently. Learning how to use rule globs versus project instructions was a big learning moment. As I did more LLM work on our internal tools that was also a big lesson in prompting and compaction.
Certain parts of HN and Reddit I think are very invested in nay-saying because it threatens their livelihoods or sense of self. A lot of these folks have identities that are very tied up in being craftful coders rather than business problem solvers.
I think it's down to language and domain more than tools.
No model I've tried can write, usefully debug, or even explain CMake. (It invents new syntax if it gets stuck; I often have to prompt multiple AIs to know whether even the first response in the context was made up.)
My luck with embedded C has been atrocious for existing codebases (burning millions of tokens), but passable for small scripts (Arduino projects).
My experience with Python is much better: suggesting relevant libraries and functions, debugging odd errors, or even making small scripts on its own. Even the original GitHub Copilot, which I got access to early, was excellent on Python.
A lot of the people who seem to have fully embraced agentic vibe-coding seem to be in the web or node.js domain, which I've not done myself since pre-AI.
I've tried most (free or trial) major models or schemes in the hope of finding any of them useful, but haven't found much use yet.
We probably do, yes. The web domain compared to a cybersecurity firm compared to embedded will have very different experiences, because clearly there's a lot more code to train on in one domain than another (for obvious reasons). You can have colleagues at the same company or even on the same team with drastically different experiences because they might be in the weeds on a different part of the tech.
> I then carefully review and test.
If most people did this, I would have 90% fewer issues with AI. But as we'd expect, people see shortcuts and use them to cut corners, not to give themselves more time to polish the edges.
FWIW, Claude Code does a great job for me on complex-domain Rust projects, but I just use it on one relatively small feature/code chunk at a time, where oftentimes it can pick up existing patterns etc. (I try to point it at similar existing code/features if I have them). I do not let it write anything creative where it has to come up with its own design (either high-level architecture or low-level facilities). Basically I draw the lines manually and let it color the space between, using existing reference pictures. Works very, very well for me.
Is this meant to detract from their situation? These tech stacks are mainstream because so many use them... it's only natural that AI would be the best at writing code in contexts where it has the most available training data.
> These tech stacks are mainstream because so many use them
That's a tautology. No, those tech stacks are mainstream because it is easy to get something that looks OK up and running quickly. That's it. That's what makes a framework go mainstream: can you download it and get something pretty on the screen quickly? Long-term maintenance and clarity is absolutely not a strong selection force for what goes mainstream, and in fact can be an opposing force, since achieving long-term clarity comes with tradeoffs that hinder the feeling of "going fast and breaking things" within the first hour of hearing about the framework. A framework being popular means it has optimized for inexperienced developers feeling fast early, which is literally a slightly negative signal for its quality.
You are exactly right in my case - JavaScript and Python dealing with the AWS CDK and SDK. Where there is plenty of documentation and code samples.
Even when it occasionally gets it wrong, it’s just a matter of telling ChatGPT - “verify your code using the official documentation”.
But honestly, even before LLMs when deciding on which technology, service, or frameworks to use I would always go with the most popular ones because they are the easiest to hire for, easiest to find documentation and answers for and when I myself was looking for a job, easiest to be the perfect match for the most jobs.
They can choose jobs. Starting with my 3rd job in 2008, I always chose my employer based on how it would help me get my n+1 job and that was based on tech stack I would be using.
Once I saw a misalignment between market demands and the tech stack my employer was using, I changed jobs. I'm on job #10 now.
Honestly, now that I think about it, I am using a pre-2020 playbook. I don’t know what the hell I would do these days if I were still a pure developer without the industry connections and having AWS ProServe experience on my resume.
While it is true that I got a job quickly in 2023 and last year when I was looking, while I was interviewing for those two, as a Plan B, I was randomly submitting my resume (which I think is quite good) to literally hundreds of jobs through Indeed and LinkedIn Easy Apply and I heard crickets - regular old enterprise dev jobs that wanted C#, Node or Python experience on top of AWS.
I don’t really have any generic strategy for people these days aside from whatever job you are at, don’t be a ticket taker and be over larger initiatives.
As a practical example, I've recently tried out v0's new updated systems to scaffold a very simple UI where I can upload screenshots from videogames I took and tag them.
The resulting code included an API call to run arbitrary SQL queries against the DB. Even after pointing this out, this API call was not removed or at least secured with authentication rules, but instead /just/hidden/through/obscure/paths...
It could be the language. Almost 100% of my code is written by AI, I do supervise as it creates and steer in the right direction. I configure the code agents with examples of all frameworks Im using. My choice of Rust might be disproportionately providing better results, because cargo, the expected code structure, examples, docs, and error messages, are so well thought out in Rust, that the coding agents can really get very far. I work on 2-3 projects at once, cycling through them supervising their work. Most of my work is simulation, physics and complex robotics frameworks. It works for me.
Do you not think part of it is just whether employers permit it or not? My conglomerate employer took a long time to get started and has only just rolled out agent mode in GH Copilot, but even that is in some reduced/restricted mode vs the public one. At the same time we have access to lots of models via an internal portal.
I think people react to AI with strong emotions, which can come from many places, anxiety/uncertainty about the future being a common one, strong dislike of change being another (especially amongst autists, whom I would guess based on me and my friend circle are quite common around here). Maybe it explains a lot of the spicy hot-takes you see here and on lobsters? People are unwilling to think clearly or argue in good faith when they are emotionally charged (see any political discussion). You basically need to ignore any extremist takes entirely, both positive and negative, to get a pulse on what's going on.
If you look, there are people out there approaching this stuff with more objectivity than most (mitsuhiko and simonw come to mind, have a look through their blogs, it's a goldmine of information about LLM-based systems).
B2B SaaS in most cases is a sophisticated mask over some structured data, perhaps with great UX, automation, and convenience, so I can see LLMs being more successful there, all the more so because there is more training data and many processes are streamlined. Not all domains are equal; go try to develop a serious game, not yet another simple and broken arcade, with LLMs and you'll have a different take.
It really depends, and can be variable, and this can be frustrating.
Yes, I’ve produced thousands of lines of good code with an LLM.
And also yes, yesterday I wasted over an hour trying to define a single docker service block for my docker-compose setup. Constant hallucination, eventually had to cross check everything and discover it had no idea what it was doing.
I’ve been doing this long enough to be a decent prompt engineer. Continuous vigilance is required, which can sometimes be tiring.
It could be because your job is boilerplate derivatives of well solved problems. Enjoy the next 1 to 2 years because yours is the job Claude is coming to replace.
Stuff Wordpress templates should have solved 5 years ago.
Fair: it was rude. Moderation is hard and I respect what you do. But it's also a sentiment several other comments expressed. It's the conversation we're having. Can we have any discussions of code quality without making assumptions about each others' code quality? I mean, yeah, I could probably have done better.
> "200k lines of code is a failure state ... I'd not normally huff my own farts in public this obnoxiously, but I honestly feel it is useful for the "AI hater vs AI sucker" discussion to be honest about this type of emotion." https://news.ycombinator.com/item?id=44976328
Oh for sure you can talk about this, it's just a question of how you do it. I'd say the key thing is to actively guard against coming across as personal. To do that is not so easy, because most of us underestimate the provocation in our own comments and overestimate the provocation in others (https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...). This bias is like carbon monoxide - you can't really tell it's affecting you (I don't mean you personally, of course—I mean all of us), so it needs to be consciously compensated for.
As for those other comments - I take your point! I by no means meant to pick on you specifically; I just didn't see those. It's pretty random what we do and don't see.
I understand the provocation, but please don't respond to a bad comment by breaking the site guidelines yourself. That only makes things worse.
Your GP comment was great, and probably the thing to do with a supercilious reply is just not bother responding (easier said than done, of course). You can usually trust other users to assess the thread fairly (e.g. https://news.ycombinator.com/item?id=44975623).
> What makes you think I'm not "a developer who strongly values brevity and clarity"
Some pieces of evidence that make me think that:
1. The base rate of developers who write massively overly verbose code is about 99%, and there's not a ton of signal to deviate from that base rate other than the fact that you post on HN (probably a mild positive signal).
2. An LLM writes 80% of your code now, and my prior on LLM code output is that it's on par with a forgetful junior dev who writes very verbose code.
3. 200K lines of code is a lot. It just is. Again, without more signal, it's hard to deviate from the base rate of what 200K-line codebases look like in the wild. 99.5% of them are spaghettified messes with tons of copy-pasting and redundancy and code-by-numbers scaffolded code (and now, LLM output).
This is the state of software today. Keep in mind the bad programmers who make verbose spaghettified messes are completely convinced they're code-ninja geniuses; perhaps even more so than those who write clean and elegant code. You're allowed to write me off as an internet rando who doesn't know you, of course. To me, you're not you, you're every programmer who writes a 200k LOC B2B SaaS application and uses an LLM for 80% of their code, and the vast, vast majority of those people are -- well, not people who share my values. Not people who can code cleanly, concisely, and elegantly. You're a unicorn; cool beans.
Before you used LLMs, how often were you copy/pasting blocks of code (more than 1 line)? How often were you using "scaffolds" to create baseline codefiles that you then modified? How often were you copy/pasting code from Stack Overflow and other sources?
> I'm struggling to even describe... 200,000 lines of code is so much.
The point about increasing levels of abstractions is a really good one, and it's worth considering whether any new code that's added is entirely new functionality, some kind of abstraction over some existing functionality (that might then reduce the need for as new code), or (for good or bad reason) some kind of copy of some of the existing behaviour but re-purposed for a different use case.
200kloc is what, 4 reams of paper, double sided? So, 10% of that famous Margaret Hamilton picture (which is roughly "two spaceships worth of flight code".) I'm not sure the intuition that gives you is good but at least it slots the raw amount in as "big but not crazy big" (the "9 years work" rather than "weekend project" measurement elsethread also helps with that.)
I agree. AI is a wonderful tool for making fuzzy queries on vast amounts of information. More and more I'm finding that Kagi's Assistant is my first stop before an actual search. It may help inform me about vocabulary I'm lacking which I can then go successfully comb more pages with until I find what I need.
But I have not yet been able to consistently get value out of vibe coding. It's great for one-off tasks. I use it to create matplotlib charts just by telling it what I want and showing it the schema of the data I have. It nails that about 90% of the time. I have it spit out close-ended shell scripts, like recently I had it write me a small CLI tool to organize my Raw photos into a directory structure I want by reading the EXIF data and sorting the images accordingly. It's great for this stuff.
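For scale, the EXIF sorter was only on the order of this sketch (Pillow and JPEGs assumed here for simplicity; actual raw files would need a different EXIF reader), which is exactly the size of task where it rarely stumbles:

    import argparse
    import shutil
    from datetime import datetime
    from pathlib import Path

    from PIL import Image  # assumes Pillow is installed

    DATETIME_TAG = 0x0132  # EXIF "DateTime" tag in the base IFD

    def shot_date(path: Path):
        """Return the capture date from EXIF, or None if it can't be read."""
        try:
            with Image.open(path) as img:
                raw = img.getexif().get(DATETIME_TAG)
            return datetime.strptime(raw, "%Y:%m:%d %H:%M:%S") if raw else None
        except (OSError, ValueError):
            return None

    def organize(src: Path, dest: Path) -> None:
        for photo in src.glob("*.jpg"):
            taken = shot_date(photo)
            folder = dest / (taken.strftime("%Y/%m-%d") if taken else "undated")
            folder.mkdir(parents=True, exist_ok=True)
            shutil.move(str(photo), str(folder / photo.name))

    if __name__ == "__main__":
        parser = argparse.ArgumentParser(description="Sort photos into date folders by EXIF date.")
        parser.add_argument("src", type=Path)
        parser.add_argument("dest", type=Path)
        args = parser.parse_args()
        organize(args.src, args.dest)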
But anything bigger it seems to do useless crap. Creates data models that already exist in the project. Makes unrelated changes. Hallucinates API functions that don't exist. It's just not worth it to me to have to check its work. By the time I've done that, I could have written it myself, and writing the code is usually the most pleasurable part of the job to me.
I think the way I'm finding LLMs to be useful is that they are a brilliant interface to query with, but I have not yet seen any use cases I like where the output is saved, directly incorporated into work, or presented to another human that did not do the prompting.
Have you tried Opus? It's what got me past using LLMs only marginally. Standard disclaimers apply in that you need to know what it's good for and guide it well, but there's no doubt at this point it's a huge productivity boost, even if you have high standards - you just have to tell it what those standards are sometimes.
Write a rust serde implementation for the ORB binary data format.
Here is the background information you need:
* The ORB reference material is here: https://github.com/kstenerud/orb/blob/main/orb.md
* The formal grammar describing ORB is here: https://github.com/kstenerud/orb/blob/main/orb.dogma
* The formal grammar used to describe ORB is called Dogma.
* Dogma reference material is here: https://github.com/kstenerud/dogma/blob/master/v1/dogma_v1.0.md
* The end of the Dogma description document has a section called "Dogma described as Dogma", which contains the formal grammar describing Dogma.
Other important things to remember:
* ORB is an extension of BONJSON, so it must also implement all of BONJSON.
* The BONJSON reference material is here: https://github.com/kstenerud/bonjson/blob/main/bonjson.md
* The formal grammar describing BONJSON is here: https://github.com/kstenerud/bonjson/blob/main/bonjson.dogma
Is it perfect? Nope, but it's 90% of the way there. It would have taken me all day to build all of these ceremonious bits, and Claude did it in 10 minutes. Now I can concentrate on the important parts.
First and foremost, it’s 404. Probably a mistake, but I chuckled a bit when someone says "AI build this thing and it’s 90% there" and then posts a dead link.
I use aider and your description doesn't match my experience, even with a relatively bad-at-coding model (gpt-5). It does actually work and it does generate "good" code - it even matches the style of the existing code.
Prompting is very important, and in an existing code base the success rate is immensely higher if you can hint at a specific implementation - i.e. something a senior who is familiar with the codebase somewhat can do, but a junior may struggle with.
It's important to be clear eyed about where we are here. I think overall I am still faster doing things manually than iterating with aider on an existing code base, but the margin is not very much, and it's only going to get better.
Even though it can do some work a junior could do, it can't ever replace a junior human... because a junior human also goes to meetings, drives discussions, and eventually becomes a senior! But management may not care about that fact.
The one thing I've found AI is good at is parsing through the hundreds of ad ridden, barely usable websites for answers to my questions. I use the Duck Duck Go AI a lot to answer questions. I trust it about as far as I can throw the datacenter it resides in, but it's useful for quickly verifiable things. Especially stuff like syntax and command line options for various programs.
Nope, this only applies to a small percent of content, where a relatively small number of people needs access to it and the incentive to create derivative work based on it is low, or where there's a huge amount of content that's frequently changing (think airfares). But yes, they will protect it more.
For content that doesn't change frequently and is used by a lot of people it will be hard to control access to it or derivative works based on it.
I don't think you're considering the enshittification route here. I'm sure it will be: Ask ChatGPT a question -> "While I'm thinking, here's something from our sponsor which is tailored to your question" -> lame answer which requires you to ask another question. And on and on. While you're asking these questions, a profile of you is built and sold on the market.
> The next year, Google began selling advertisements associated with search keywords against Page and Brin's initial opposition toward an advertising-funded search engine.
Almost every big tech company is an ad company. Google sells ads, Meta sells ads, Microsoft sells ads, Amazon sells ads, Apple sells ads, only Nvidia doesn't because they sell hardware components.
It's practically inevitable for a tech company offering content and everyone who thinks otherwise should set a reminder to 5 years from now.
It's one of those you get what you put in kind of deals.
If you spend a lot of time thinking about what you want, describing the inner workings, edge cases, architecture and library choices, and put that into a thoughtful markdown, then maybe after a couple of iterations you will get half decent code. It certainly makes a difference between that and a short "implement X" prompt.
But it makes one think - at that point (writing a good prompt that is basically a spec), you've basically solved the problem already. So LLM in this case is little more than a glorified electric typewriter. It types faster than you, but you did most of the thinking.
Right, and then after you do all the thinking and the specs, you have to read and understand and own every single line it generated. And speaking for myself, I am no where near as good at thinking through code I am reviewing as thinking through the code I am writing.
Other people will put up PRs full of code they don't understand. I'm not saying everyone who is reporting success with LLMs are doing that, but I hear it a lot. I call those people clowns, and I'd fire anyone who did that.
If it passes the unit tests I make it write and works for my sample manual cases I absolutely will not spend time reading the implementation details unless and until something comes up. Sometimes garbage makes its way into git but working code is better than no code and the mess can be cleaned up later. If you have correctness at the interface and function level you can get a lot done quickly. Technical debt is going to come out somewhere no matter what you do.
The trick is to not give a fuck. This works great in a lot of apps, which are useless to begin with. It may also be a reasonable strategy in an early-stage startup yet to achieve product-market fit, but your plan has to be to scrap it and rewrite it and we all know how that usually turns out.
This is an excellent point. Sure in an ideal world we should care very much about every line of code committed, but in the real world pushing garbage might be a valid compromise given things like crunch, sales pitches due tomorrow etc.
No, that's a much stronger statement. I'm not talking about ideals. I'm talking about running a business that is mature, growing and going to be around in five years. You could literally kill such a business running it on a pile of AI slop that becomes unmaintainable.
How much of the code do you review in a third party package installed through npm, pip, etc.? How many eyes other than the author’s have ever even looked at that code? I bet the answers have been “none” and “zero” for many HN readers at some point. I’m certainly not saying this is a great practice or the only way to productively use LLMs, just pointing out that we treat many things as a black box that “just works” till it doesn’t, and life somehow continues. LLM output doesn’t need to be an exception.
That's true, however, not so great of an issue because there's a kind of natural selection happening: if the package is popular, other people will eventually read (parts of, at least) the code and catch the most egregious problems. Most packages will have "none" like you said, but they aren't being used by that many people either, so that's ok.
Of course this also applies to hypothetical LLM-generated packages that become popular, but some new issues arise: the verbosity and sometimes baffling architecture choices by LLM will certainly make third-party reviews harder and push up the threshold in terms of popularity needed to obtain third party attention.
I've built 2 SaaS applications with LLM coding, one of which was expanded, released to enterprise customers, and is in good use today.
- Note I've got years of dev experience, I follow context and documentation prompts, and I'm using common LLM languages and tools like TypeScript, Python, React, and AWS infra.
Now, it requires me to fully review all the code and understand what the LLM is doing at the function, class, and API level - in fact it works better at the method or component level for me - and I had a lot of cleanup work (and lots of frustration with the models) on the codebase, but overall there's no way I could equal the velocity I have now without it.
I think the other important step is to reject code your engineers submit that they can't explain for a large enterprise saas with millions of lines of code. I myself reject I'd say 30% of the code the LLMs generate but the power is in being able to stay focused on larger problems while rapidly implementing smaller accessory functions that enable that continued work without stopping to add another engineer to the task.
I've definitely 2-4X'd depending on the task. For small tasks I've definitely 20X'd myself for some features or bugfixes.
I do frontend work (React/TypeScript). I barely write my own code anymore, aside from CSS (the LLMs have no aesthetic sensibilities). Just prompting with Gemini 2.5 Pro. Sometimes Sonnet 4.
I don't know what to tell you. I just talk to the thing in plain but very specific English and it generally does what I want. Sometimes it will do stupid things, but then I either steer it back in the direction I want or just do it myself if I have to.
I agree with the article but also believe LLM coding can boost my productivity and ability to write code over long stretches. Sure, getting it to write a whole feature carries high risk. But getting it to build out a simple API with examples above and below it? Piece of cake - it takes a few seconds and would have taken me a few minutes.
The bigger the task, the more messy it'll get. GPT5 can write a single UI component for me no problem. A new endpoint? If it's simple, no problem. The risk increases as the complexity of the task does.
AI is also pretty good if you get it to do small chunks of code for you. This means you come with the architecture, the implementation details, and how each piece is structured. When I walk AI through each unit of code I find the results are better, and it's easier for me to address issues as I progress.
This may seem somewhat redundant, though. Sometimes it's faster to just do it yourself. But with a toddler who hates sleep, I've found I've been able to maintain my velocity... even on days I get 3 hrs of sleep.
The AI agents tend to fail for me with open ended or complex tasks requiring multiple steps. But I’ve found it massively helpful if you have these two things:
1) a typed language… better if strongly typed
2) your program is logically structured and follows best practices and has hierarchical composition.
The agents are able to iterate and work with the compiler until they get it right, and the combination of 1 and 2 means there are fewer possible "right answers" to whatever problem I have. If I structure my prompts to basically fill in the blanks of my code in specific areas, it saves a lot of time. Most of what I prompt is something already done, and usually one Google search away. This saves me the time to search it up, figure out whatever syntax I need, etc.
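To show what I mean by filling in the blanks, here's a minimal sketch (in Python with type hints for brevity; with a strongly typed compiled language the constraint is even tighter, and all names here are illustrative): everything except the function body is written by hand, and the agent only fills the blank, iterating against the type checker and tests.

    from dataclasses import dataclass

    @dataclass
    class Order:
        subtotal_cents: int
        country: str

    def tax_for(order: Order, rates: dict[str, float]) -> int:
        """Return the tax for this order in cents, rounded to the nearest cent.

        Unknown countries fall back to the "DEFAULT" rate; a missing fallback
        is a programming error and should raise KeyError.
        """
        # The body below is the only "blank" the agent is asked to fill in;
        # the signature, types, and docstring pin down what an acceptable
        # answer looks like.
        rate = rates[order.country] if order.country in rates else rates["DEFAULT"]
        return round(order.subtotal_cents * rate)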
I don't code every day and am not an expert. Supposedly the sort of casual coder that LLMs are supposed to elevate into senior engineers.
Even I can see they have big blind spots. As the parent said, I get overly verbose code that does run but is nowhere near the best solution. Well, for really common problems and patterns I usually get a good answer. Need a more niche problem solved? You'd better brush up on your Googling skills and do some research if you care about code quality.
If you actually believe this, you're either using bad models or just terrible at prompting and giving proper context. Let me know if you need help, I use generated code in every corner of my computer every day
My favourite code smell that LLMs love to introduce is redundant code comments.
// assign "bar" to foo
const foo = "bar";
They love to do that shit. I know you can prompt it not to. But the amount of PRs I'm reviewing these days that have those types of comments is insane.
The code LLMs write is much better than mine. Way less shortcuts and spaghetti. Maybe that means that I am a lousy coder but the end result is still better.
I haven't had that experience, but I tend to keep my prompts very focused with a tightly limited scope. Put a different way, if I had a junior or mid level developer, and I wanted them to create a single-purpose class of 100-200 lines at most, that's how I write my prompts.
I've been pondering this for a while. I think there's an element of dopamine that LLMs bring to the table. They probably don't make a competent senior engineer much more productive if at all, but there's that element of chance that we don't get a lot of in this line of work.
I think a lot of us eventually arrive at a point where our jobs get a bit boring and all the work starts to look like some permutation of past work. If instead of going to work and spending two hours adding some database fields and writing some tests, you had the opportunity to either:
A) Do the thing as usual in the predictable two hours
B) Spend an hour writing a detailed prompt as if you were instructing a junior engineer on a PIP to do it, doing all the typical cognitive work you'd have done normally and then some, but then, instead of typing out the code in the next hour, you press enter and take your chances: tada, the code has been typed and even kinda sorta works, after this computer program was "flibbertigibbeting" for just 10 minutes. Wow!
Then you get that sweet dopamine hit that tells you you're a really smart prompt engineer who did a two hour task in... cough 10 minutes. You enjoy your high for a bit, maybe go chat with some subordinate about how great your CLAUDE.md was and if they're not sure about this AI thing it's just because they're bad at prompt engineering.
Then all you have to do is cross your t's and dot your i's and it's smooth sailing from there. Except it's not. Because you (or another engineer) will probably find architectural/style issues when reviewing the code, issues you explicitly told it to avoid but that it ignored, and you'll have to fix those. You'll also probably be sobering up from your dopamine rush by now, and realize that you still have to review all the other lines of AI-generated code, which you could have just typed correctly once.
But now you have to review with an added degree of scrutiny, because you know it's really good at writing text that looks beautiful but is ever so slightly wrong, in ways that might even slip through code review and cause the company to end up in the news.
Alternatively, you could yolo and put up an MR after a quick smell test, making some other poor engineer do your job for you (you're a 10x now, you've got better things to do anyway). Or better yet, just have Claude write the MR, and don't even bother to read it. Surely nobody's going to notice your "acceptance criteria" section says to make sure the changes have been tested on both Android and Apple, even though you're building a microservice for an AI-powered smart fridge (mostly just a fridge, except every now and then it starts shooting ice cubes across the room at mach 3). Then three months later someone, who never realized there are three identical "authenticate" functions, spends an hour scratching their head about why the code they're writing is not doing anything (because it's actually calling another redundant function that nobody ever seems to catch in MR review, because they're not reflected in a diff).
But yeah, that 10 minute AI magic trick sure felt good. There are times when work is dull enough that option B sounds pretty good, and I'll dabble. But yeah, I'm not sure where this AI stuff leads, but I'm pretty confident it won't be taking over our jobs any time soon (an ever-increasing quota of H1Bs and STEM OPT student visas working for 30% less pay, on the other hand, might).
It's just that being the dumbest thing we ever heard still doesn't stop some people from doing it anyway. And that goes for many kinds of LLM application.
I think it has a lot to do with skill level. Lower-skilled developers seem to feel it gives them a lot of benefit. Higher-skilled developers just get frustrated looking at all the errors it produces.
I hate to admit it, but it is the prompt (call it context if ya like; it includes tools). The model is important, the window/tokens are important, but direction wins. The codebase matters too: greenfield gets much better results, so much so that we may throw away 40 years of wisdom designed to help humans code amongst each other and use design patterns that will disgust us.
Could the quality of your prompt be related to our differing outcomes? I have decades of pre-AI experience and I use AI heavily. If I let it go off on its own, it's not as good as constraining and hand-holding it.
Sounds like you are using it entirely wrong then...
Just yesterday I uploaded a few files of my code (each about 3000+ lines) into a gpt5 project and asked for assistance in changing a lot of database calls into a caching system, and it proceeded to create a full 500 line file with all the caching objects and functions I needed. Then we went section by section through the main 3000+ line file to change parts of the database queries into the cached version. [I didn't even really need to do this, it basically detected everything I would need changing at once and gave me most of it, but I wanted to do it in smaller chunks so I was sure what was going on]
Could I have done this without AI? Sure.. but this was basically like having a second pair of eyes and validating what I'm doing. And saving me a bunch of time so I'm not writing everything from scratch. I have the base template of what I need then I can improve it from there.
All the code it wrote was perfectly clean.. and this is not a one off, I've been using it daily for the last year for everything. It almost completely replaces my need to have a junior developer helping me.
You mean like it turned on Hibernate, or it wrote some custom-rolled in-app cache layer?
I usually find these kinds of caching solutions to be extremely complicated (well the cache invalidating part) and I'm a bit curious what approach it took.
You mention it only updated a single file so I guess it's not using any updates to the session handling so either sticky sessions are not assumed or something else is going on. So then how do you invalidate the app level cache for a user across all machine instances? I have a lot of trauma from the old web days of people figuring this out so I'm really curious to hear about how this AI one shot it in a single file.
This is C#, so it basically just automatically detected that I had 4 object types being updated to the database that I wanted to keep in a concurrent-dictionary type of cache. So it created the dictionaries for each object with the appropriate keys, and created functions for each object type so that if I touch an object it gets marked as updated, etc.
It created the function to load in the data, then the finalize where it writes to the DB what was touched and clears the cache.
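Roughly, the shape it produced was something like this (not the actual code, just an illustrative Python sketch with made-up names; the real version used C# concurrent dictionaries):

    from typing import Any, Callable


    class WriteBackCache:
        """Minimal in-memory write-back cache keyed by object id."""

        def __init__(self,
                     load_fn: Callable[[Any], dict],
                     save_fn: Callable[[Any, dict], None]):
            self._load_fn = load_fn          # e.g. a DB SELECT wrapped in a function
            self._save_fn = save_fn          # e.g. a DB UPDATE wrapped in a function
            self._cache: dict[Any, dict] = {}
            self._dirty: set = set()

        def get(self, key):
            # Load from the database only on first access.
            if key not in self._cache:
                self._cache[key] = self._load_fn(key)
            return self._cache[key]

        def touch(self, key, **changes):
            # Update the cached copy and mark it for write-back.
            obj = self.get(key)
            obj.update(changes)
            self._dirty.add(key)

        def finalize(self):
            # Write only the touched objects back to the DB, then clear the cache.
            for key in self._dirty:
                self._save_fn(key, self._cache[key])
            self._cache.clear()
            self._dirty.clear()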
Again- I'm not saying this is anything particularly fancy, but it did the general concept of what I wanted. Also this is all iterative; when it creates something I talk to it like a person to say "hey I want to actually load in all the data, even though we will only be writing what changed" and all that kind of stuff.
Also the bigger help wasn't really the creation of the cache, it was helping to make the changes and detect what needed to be modified.
End of the day, even if I want to go a slightly different route with how it did the caching, it creates all the framework so I can simplify if needed.
A lot of the time, using this LLM approach is about getting all the boilerplate out of the way.. sometimes just starting something by yourself is daunting. I find this to be a great way to begin.
I know, I don't understand what problems people are having with getting usable code. Maybe the models don't work well with certain languages? Works great with C++. I've gotten thousands of lines of clean compiling on the first try and obviously correct code from ChatGPT, Gemini, and Claude.
I've been assuming the people who are having issues are junior devs, who don't know the vocabulary well enough yet to steer these things in the right direction. I wouldn't say I'm a prompt wizard, but I do understand context and the surface area of the things I'm asking the llm to do.
From my experience, the further you get from the sort of stuff that's easily accessible on Stack Overflow, the worse it gets. I've had few problems having an AI write minor Python scripts, but I get severely poorer results with Unreal C++ code, and it badly hallucinates nonsense if asked anything general about Unreal architecture and the API.
Does the Unreal API change a bit over versions? I've noticed when asking for a simple telnet server in Rust it was hallucinating like crazy, but when I went to the documentation it was clear the API was changing a lot from version to version. I don't think they do well with API churn. That's my hypothesis anyway.
I think the big thing with Unreal is the vast majority of games are closed source. It's already only used for games, as opposed to asking questions about general-purpose programming, but there is also less training data.
You see this dynamic even with swift which has a corpus of OSS source code out there, but not nearly as much as js or python and so has always been behind those languages.
Clarifying can help but ultimately it was trained on older versions. When you are working with a changing api, it's really important that the llm can see examples of the new api and new api docs. Adding context7 as a tool is hugely helpful here. Include in your rules or prompt to consult context7 for docs. https://github.com/upstash/context7
How large is that code-base overall? Would you be able to let the LLM look at the entirety of it without it crapping out?
It definitely sounds nice to go and change a few queries, but did it also consider the potential impacts in other parts of the source or in adjacent running systems? The query itself here might not be the best example, but you get what I mean.
At least one CEO seems to get it. Anyone touting this idea of skipping junior talent in favor of AI is dooming their company in the long run. When your senior talent leaves to start their own companies, where will that leave you?
I’m not even sure AI is good for any engineer, let alone junior engineers. Software engineering at any level is a journey of discovery and learning. Any time I use it I can hear my algebra teacher telling me not to use a calculator or I won’t learn anything.
But overall I’m starting to feel like AI is simply the natural culmination of US economic policy for the last 45 years: short term gains for the top 1% at the expense of a healthy business and the economy in the long term for the rest of us. Jack Welch would be so proud.
> When your senior talent leaves to start their own companies, where will that leave you?
The CEO didn't express any concerns about "talent leaving". He is saying "keep the juniors" but he's implying "fire the seniors". This is in line with long-standing industry trends and it's confirmed by the following quote from the OP:
>> [the junior replacement] notion led to the “dumbest thing I've ever heard” quote, followed by a justification that junior staff are “probably the least expensive employees you have” and also the most engaged with AI tools.
He is pushing for more of the same, viewing competence and skill as threats and liability to be "fixed". He's warning the industry to stay the course and keep the dumbing-down game moving as fast as possible.
Well that's even stupider. What do you do when your juniors get better at using your tools?
The 2010s tech boom happened because big tech knew a good engineer is worth their weight in gold, and not paying them well meant they'd be headhunted after as little as a year of work. What's gonna happen when this repeats (if we're assuming AI makes things much more efficient)?
----
And that's my kindest interpretation. One that assumes that a junior and senior using a prompt will have a very close gap to begin with. Even seniors seem to struggle right now with current models working at scale on Legacy code.
100%, and this is him selling the new batch of AWS agent tools. If your product requirements + “Well Architected” NFRs are expressed as input, AWS wants to run it and extract your cost of senior engineers as value for him.
There are lots of personal projects that I have wanted to build for years but have pushed off because the “getting started cost” is too high, I get frustrated and annoyed and don’t get far before giving up. Being able to get the tedious crap out of the way lowers the barrier to entry and I can actually do the real project, and get it past some finish line.
Am I learning as much as I would had I powered through it without AI assistance? Probably not, but I am definitely learning more than I would if I had simply not finished (or even started) the project at all.
What was your previous approach? From what I've seen, a lot of people are very reluctant to pick up a book or read through the documentation before they try stuff. And then they get exposed to a "cryptic" error message and throw in the towel.
I always used to try doing that. Really putting in the work, thoroughly reading the docs, books, study enough to have all the background information and context. It works but takes a lot of time and focus.
However, for side projects, there may be many situations where the documentation is actually not that great. Especially when it comes to interacting with and contributing to open source projects. Most of the time my best bet would be to directly go read a lot of source code. It could take weeks before I could understand the system I'm interacting with well enough to create the optimal solution to whatever problem I'd be working on.
With AI now, I usually pack an entire code base into a text file, feed it into the AI and generate the first small prototypes by guiding it. And this really is just a proof of concept, a validation that my idea can be done reasonably well with what is given. After that I would read through the code line by line and learn what I need and then write my own proper version.
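For what it's worth, the "pack the code base into a text file" step can be a dumb little script; something like this rough Python sketch (the "src" directory and extension list are placeholders for whatever the project uses):

    from pathlib import Path

    # Concatenate all source files under src/ into one prompt-friendly text file,
    # with a header line so the model can tell files apart.
    out_lines = []
    for path in sorted(Path("src").rglob("*")):
        if path.is_file() and path.suffix in {".py", ".md", ".toml"}:
            out_lines.append(f"\n===== {path} =====\n")
            out_lines.append(path.read_text(encoding="utf-8", errors="replace"))

    Path("codebase.txt").write_text("".join(out_lines), encoding="utf-8")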
I will admit that with AI it still takes a long time, because often it takes 4 or 5 prototypes before it generates exactly what you had in mind without cheating, hard coding things or weird workarounds. If you think it doesn't, you probably have lower standards than me. And that is with continuous guidance and feedback. But it still shortens that "idea validation" phase from multiple weeks to just one for me.
So: is it immensely powerful and useful? Yes. Can it save you time? Sometimes. Is it a silver bullet that replaces a programmer completely? Definitely no.
I think an important takeaway here also is that I am talking strictly about side projects. It's great as the stakes are low. But I would caution to wait a little longer before putting it in production though.
The biggest blocker for me would be that I would go through a "Getting Started" guide and that would go well until it doesn't. Either there would be an edge case that the guide didn't take into account or the guide would be out of date. Sometimes I would get an arcane error message that was a little difficult to parse.
There were also cases where the interesting part of what I'm working on (e.g. something with distributed computing) required a fair amount of stuff that I don't find interesting to get started (e.g. spinning up a Kafka and Zookeeper cluster), where I might have to spend hours screwing around with config files and make a bunch of mistakes before I get something more or less working.
If I was sufficiently interested, I would power through the issue, by either more thoroughly reading through documentation or searching through StackOverflow or going onto a project IRC, so it's not like I would never finish a project, but having a lower barrier of entry by being able to directly paste an error message or generate a working starter config helps a lot with getting past the initial hump, especially to get to the parts that I find more interesting.
In that case I’m not sure you really agree with this CEO, who is all-in on the idea of LLMs for coding, going so far as to proudly say 80% of engineers at AWS use it and that that number will only rise. Listen to the interview, you don’t even need ten minutes.
> I’m not even sure AI is good for any engineer, let alone junior engineers. Software engineering at any level is a journey of discovery and learning.
Yes, but when there are certain mundane things in that discovery that are hindering my ability to get work done, AI can be extremely useful. It can be incredibly helpful in giving high level overviews of code bases or directing me to parts of codebases where certain architecture lives. Additionally, it exposes me to patterns and ideas I hadn't originally thought of.
Now, if I just take whatever is spit out by AI as gospel, then I'd be inclined to agree with you in saying AI is bad, but if you use it correctly, like any other tool, it's fantastic.
The whole premise of thinking we don't need juniors is just silly. If there are no juniors, eventually there will be no seniors. AI slop ain't gonna un-slop itself.
You also risk senior talent who stay but don't want to change or adopt, at least not with any urgency. AI will accelerate that journey of discovery and learning, so juniors are going to learn super fast.
>will accelerate that journey of discovery and learning,
Okay, but what about work output? That seems to be the only thing the business cares about.
Also, maybe it's the HN bias, but I don't see this notion that old engineers are rejecting this en masse. Younger people will embrace it more, but most younger people haven't mucked around in legacy code yet (the lifeblood of any business).
In the last few months we have worked with startups who have vibe coded themselves into an abyss. Either because they never made the correct hires in the first place or they let technical talent go. [1]
The thinking was that they could iterate faster, ship better code, and have an always on 10x engineer in the form of Claude code.
I've observed perfectly rational founders become addicted to the dopamine hit as they see Claude code output what looks like weeks or years of software engineering work.
It's overgenerous to allow anyone to believe AI can actually "think" or "reason" through complex problems. Perhaps we should be measuring time saved typing rather than cognition.
Shush please. I wasn't old enough to cash in on the Y2K contracting boons; I'm hoping the vibe coding 200k LOC b2b AI slop "please help us scale to 200 users" contracting gigs will be lucrative.
As if startups before LLMs were creating great code. Right now on the front page, a YC company is offering a “Founding Full Stack Engineer” $100K-$150K. What quality of code do you think they will end up with?
Notably, that is a company that... adds AI to group chats. Startups offering crap salaries with a vague promise of equity in a vague product idea with no moat are a dime a dozen, and have been well before LLMs came around.
Have you seen the companies YC has been funding recently? All you need to do is mention AI and YC will throw some money your way. I don't know if you saw my first attempt at a post, but someone should suggest AI for HN comment formatting and I'm sure it will be funded.
Acrely — AI for HVAC administration
Aden — AI for ERP operations
AgentHub — AI for agent simulation and evaluation
Agentin AI — AI for enterprise agents
AgentMail — AI for agent email infrastructure
AlphaWatch AI — AI for financial search
Alter — AI for secure agent workflow access control
Altur — AI for debt collection voice agents
Ambral — AI for account management
Anytrace — AI for support engineering
April — AI for voice executive assistants
AutoComputer — AI for robotic desktop automation
Autosana — AI for mobile QA
Autotab — AI for knowledge work
Avent — AI for industrial commerce
b-12 — AI for chemical intelligence
Bluebirds — AI for outbound targeting
burnt — AI for food supply chain operations
Cactus — AI for smartphone model deployment
Candytrail — AI for sales funnel automation
CareSwift — AI for ambulance operations
Certus AI — AI for restaurant phone lines
Clarm — AI for search and agent building
Clodo — AI for real estate CRMs
Closera — AI for commercial real estate employees
Clueso — AI for instructional content generation
cocreate — AI for video editing
Comena — AI for order automation in distribution
ContextFort — AI for construction drawing reviews
Convexia — AI for pharma drug discovery
Credal.ai — AI for enterprise workflow assistants
CTGT — AI for preventing hallucinations
Cyberdesk — AI for legacy desktop automation
datafruit — AI for DevOps engineering
Daymi — AI for personal clones
DeepAware AI — AI for data center efficiency
Defog.ai — AI for natural-language data queries
Design Arena — AI for design benchmarks
Doe — AI for autonomous private equity workforce
Double – Coding Copilot — AI for coding assistance
EffiGov — AI for local government call centers
Eloquent AI — AI for complex financial workflows
F4 — AI for compliance in engineering drawings
Finto — AI for enterprise accounting
Flai — AI for dealership customer acquisition
Floot — AI for app building
Fluidize — AI for scientific experiments
Flywheel AI — AI for excavator autonomy
Freya — AI for financial services voice agents
Frizzle — AI for teacher grading
Galini — AI guardrails as a service
Gaus — AI for retail investors
Ghostship — AI for UX bug detection
Golpo — AI for video generation from documents
Halluminate — AI for training computer use
HealthKey — AI for clinical trial matching
Hera — AI for motion design
Humoniq — AI for BPO in travel and transport
Hyprnote — AI for enterprise notetaking
Imprezia — AI for ad networks
Induction Labs — AI for computer use automation
iollo — AI for multimodal biological data
Iron Grid — AI for hardware insurance
IronLedger.ai — AI for property accounting
Janet AI — AI for project management (AI-native Jira)
Kernel — AI for web agent browsing infrastructure
Kestroll — AI for media asset management
Keystone — AI for software engineering
Knowlify — AI for explainer video creation
Kyber — AI for regulatory notice drafting
Lanesurf — AI for freight booking voice automation
Lantern — AI for Postgres application development
Lark — AI for billing operations
Latent — AI for medical language models
Lemma — AI for consumer brand insights
Linkana — AI for supplier onboarding reviews
Liva AI — AI for video and voice data labeling
Locata — AI for healthcare referral management
Lopus AI — AI for deal intelligence
Lotas — AI for data science IDEs
Louiza Labs — AI for synthetic biology data
Luminai — AI for business process automation
Magnetic — AI for tax preparation
MangoDesk — AI for evaluation data
Maven Bio — AI for BioPharma insights
Meteor — AI for web browsing (AI-native browser)
Mimos — AI for regulated firm visibility in search
Minimal AI — AI for e-commerce customer support
Mobile Operator — AI for mobile QA
Mohi — AI for workflow clarity
Monarcha — AI for GIS platforms
moonrepo — AI for developer workflow tooling
Motives — AI for consumer research
Nautilus — AI for car wash optimization
NOSO LABS — AI for field technician support
Nottelabs — AI for enterprise web agents
Novaflow — AI for biology lab analytics
Nozomio — AI for contextual coding agents
Oki — AI for company intelligence
Okibi — AI for agent building
Omnara — AI for agent command centers
OnDeck AI — AI for video analysis
Onyx — AI for generative platform development
Opennote — AI for note-based tutoring
Opslane — AI for ETL data pipelines
Orange Slice — AI for sales lead generation
Outlit — AI for quoting and proposals
Outrove — AI for Salesforce
Pally — AI for relationship management
Paloma — AI for billing CRMs
Parachute — AI for clinical evaluation and deployment
PARES AI — AI for commercial real estate brokers
People.ai — AI for enterprise growth insights
Perspectives Health — AI for clinic EMRs
Pharmie AI — AI for pharmacy technicians
Phases — AI for clinical trial automation
Pingo AI — AI for language learning companions
Pleom — AI for conversational interaction
Qualify.bot — AI for commercial lending phone agents
Reacher — AI for creator collaboration marketing
Ridecell — AI for fleet operations
Risely AI — AI for campus administration
Risotto — AI for IT helpdesk automation
Riverbank Security — AI for offensive security
Saphira AI — AI for certification automation
Sendbird — AI for omnichannel agents
Sentinel — AI for on-call engineering
Serafis — AI for institutional investor knowledge graphs
Sigmantic AI — AI for HDL design
Sira — AI for HR management of hourly teams
Socratix AI — AI for fraud and risk teams
Solva — AI for insurance
Spotlight Realty — AI for real estate brokerage
StackAI — AI for low-code agent platforms
stagewise — AI for frontend coding agents
Stellon Labs — AI for edge device models
Stockline — AI for food wholesaler ERP
Stormy AI — AI for influencer marketing
Synthetic Society — AI for simulating real users
SynthioLabs — AI for medical expertise in pharma
Tailor — AI for retail ERP automation
Tecto AI — AI for governance of AI employees
Tesora — AI for procurement analysis
Trace — AI for workflow automation
TraceRoot.AI — AI for automated bug fixing
truthsystems — AI for regulated governance layers
Uplift AI — AI for underserved voice languages
Veles — AI for dynamic sales pricing
Veritus Agent — AI for loan servicing and collections
Verne Robotics — AI for robotic arms
VoiceOS — AI for voice interviews
VoxOps AI — AI for regulated industry calls
Vulcan Technologies — AI for regulatory drafting
Waydev — AI for engineering leadership insights
Wayline — AI for property management voice automation
Wedge — AI for healthcare trust layers
Workflow86 — AI for workflow automation
ZeroEval — AI for agent evaluation and optimization
And the ideas may or may not be bad. I don’t know enough about any of the business segments. But to paraphrase the famous Steve Jobs quote “those aren’t businesses, they are features” [1] that a company that is already in the business should be able to throw a few halfway decent engineers at and add the feature to an existing product with real users.
[1] He said that about Dropbox. He wasn’t wrong just premature. For the price of 2TB on Dropbox, you can get the entire GSuite with 2TB or Office365 with 1TB for up to five users for 5TB in all.
now you can, but, what, are you gonna lie down and wait for tech giants to do everything? Not every company needs to be Apple. If Dropbox filed for bankruptcy tomorrow, they've still made millionaires of thousands of people and given jobs to hundreds more, and enabled people to share their files online.
Steve Jobs gets to call other companies small because Apple is huge, but there are thousands of companies that "are just features". Yeah, features they forgot to add!
Out of the literally thousands of companies that YC has invested in, only about a dozen have gone public, the rest are either dead, zombies or got acquired. These are all acquisition plays.
Even the ones that have gone public haven’t done that well in aggregate.
Dropbox was solving a hard infrastructure problem at scale. These companies are just making some API calls to a model.
If an established company in any of these verticals - not necessarily BigTech - see an opportunity, they are either going to throw a few engineers at the problem and add it as a feature or hire a company like the one I work for and we are going to knock out an implementation in a few months.
The one YC company I mentioned above is expecting to have their product written by one “full stack engineer” that they are only willing to pay $150K for. How difficult can it be?
Which seems fine? VC money gets thrown at a problem, the problem may or may not get solved by a particular team, but a company gets created, some people do some work, some people make money, others don't. I don't get it. Are you saying no one should bother doing anything because someone else is already doing it or that it's not difficult so why try?
From what I’ve read, this is a consequence of applicants themselves concentrating on AI, which preceded their AI-filled batches. YC still has a very low acceptance rate, btw.
Do you think they're all using actual LLMs? I've got a natural language parser I could probably market as "AI Semantic Detection" even though it's all regular expressions
I have a confession to make, I was about to downvote you because I thought you just asked ChatGPT to come up with some ridiculous company concepts and copy and pasted.
Then I saw the sibling comment and searched a couple of company names and realized they were real.
> I think the skills that should be emphasized are how do you think for yourself?
Independent thinking is indeed the most important skill to have as a human. However, I sympathize for the younger generations, as they have become the primary target of this new technology that looks to make money by completely replacing some of their thinking.
I have a small child and took her to see a disney film. Google produced a very high quality long form advert during the previews. The ad portrays a lonely young man looking for something to do in the evening that meets his explicit preferences. The AI suggests a concert, he gets there and locks eyes with an attractive young woman.
Sending a message to lonely young men that AI will help reduce loneliness. The idea that you don't have to put any effort into gaining adaptive social skills to cure your own loneliness is scary to me.
The advert is complete survivor bias. For each success in curing your boredom, how many failures are there with lonely young depressed men talking to their phone instead of friends?
Critical thinking starts at home with the parents. Children will develop beliefs from their experience and confirm those beliefs with an authority figure. You can start teaching mindfulness to children at age 7.
Teaching children mindfulness requires a tremendous amount of patience. Now the consequence of lacking patience is outsourcing your child's critical thinking to AI.
Yes, however Her is a bit more optimistic and doesn't really delve into the data collection and usage aspects, it's more similar to a romance with some sci-fi aspects at the end.
> “How's that going to work when ten years in the future you have no one that has learned anything,”
Pretty obvious conclusion that I think anyone who's thought seriously about this situation has already come to. However, I'm not optimistic that most companies will be able to keep themselves from doing this kind of thing, because I think it's become rather clear that it's incredibly difficult for most leadership in 2025 to prioritize long-term sustainability over short-term profitability.
That being said, internships/co-ops have been popular from companies that I'm familiar with for quite a while specifically to ensure that there are streams of potential future employees. I wonder if we'll see even more focus on internships in the future, to further skirt around the difficulties in hiring junior developers?
If AI is truly this effective, we would be selling 10x-10Kx more stuff, building 10x more features (and more quickly), improving quality & reliability 10x. There would be no reason to fire anyone because the owners would be swimming in cash. I'm talking good old-fashioned greed here.
You don't fire people if you anticipate a 100x growth. Who cares about saving 0.1% of your money in 10 years? You want to sell 100x / 1000x/ 10000x more .
So the story is hard to swallow. The real reason is as usual, they anticipate a downturn and want to keep earnings stable.
Exactly. If the AI can multiply everyone's power by hundred or thousand, you want to keep all people who make a positive contribution (and only get rid of those who are actively harmful). With sufficiently good AI, perhaps the group of juniors you just fired could have created a new product in a week.
even within the AI-paradigm, you could keep the juniors to validate and test the AI generated code. You still need some level of acceptance testing for the increased production. And the juniors could be producing automation engineering at or above the level of the product code they were producing prior to AI. A win win ( more production & more career growth)
In other words, none of these stories make any sense, even if you take the AI superpower at face value.
He wants educators to instead teach “how do you think and how do you decompose problems”
Amen! I attend this same church.
My favorite professor in engineering school always gave open book tests.
In the real world of work, everyone has full access to all the available data and information.
Very few jobs involve paying someone simply to look up data in a book or on the internet. What they will pay for is someone who can analyze, understand, reason and apply data and information in unique ways needed to solve problems.
Doing this is called "engineering". And this is what this professor taught.
In undergrad I took an abstract algebra class. It was very difficult and one of the things the teacher did was have us memorize proofs. In fact, all of his tests were the same format: reproduce a well-known proof from memory, and then complete a novel proof. At first I was aghast at this rote memorization - I maybe even found it offensive. But an amazing thing happened - I realized that it was impossible to memorize a proof without understanding it! Moreover, producing the novel proofs required the same kinds of "components" and now because they were "installed" in my brain I could use them more intuitively. (Looking back I'd say it enabled an efficient search of a tree of sequences of steps).
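For instance, here's the sort of short proof I mean - the uniqueness of a group's identity element - where reproducing it from memory and understanding it are basically the same act (a standard textbook argument, not necessarily one from that course):

    Claim: a group (G, \cdot) has exactly one identity element.
    Proof: suppose e and e' are both identities.
    Since e' is an identity, e \cdot e' = e.
    Since e  is an identity, e \cdot e' = e'.
    Therefore e = e \cdot e' = e', so the identity is unique. \qed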
Memorization is not a panacea. I never found memorizing l33t code problems to be edifying. I think it's because those kinds of tight, self-referential, clever programs are far removed from the activity of writing applications. Most working programmers do not run into a novel algorithm problem but once or twice a career. Application programming has more the flavor of a human-mediated graph-traversal, where the human has access to a node's local state and they improvise movement and mutation using only that local state plus some rapidly decaying stack. That is, there is no well-defined sequence for any given real-world problem, only heuristics.
Memorizing is a super power / skill. I work in a ridiculously complex environment and have to learn and know so much. Memorizing and spaced repetition are like little islands my brain can start building bridges between. I used to think memorizing was anti-first principles, but it is just good. Our brains can memorize so much if we make them. And then we can connect and pattern matching using higher order thinking.
Recognizing the patterns and applying patterned solutions is where I see success in my niche of healthcare interoperability. So much of my time is spent watching people do things, watching their processes, and seeing how they use data. It's amazing how much people remember to do their job, but when I come in and bridge the doctor and the lab so they can share data more easily, it's like I'm an alchemist. It's really not a problem I've seen AI solve without suggesting solutions that are too simple or too costly, missing that goldilocks zone everyone will be happy with.
What's even better about memorization is that you have an objective method to test your own understanding. It is so easy to believe you understand something when you don't! But, at least with math, I think if you can reproduce the proof from memory you can be very confident that you aren't deluding yourself.
Hmmm... It's the other way around for me. I find it hard to memorise things I don't actually understand.
I remember being given a proof of why RSA encryption is secure. All the other students just regurgitated it. It made superficial sense I guess.
However, I could not understand the proof and felt quite stupid. Eventually I went to my professor for help. He admitted the proof he had given was incomplete (and showed me why it still worked). He also said he hadn't expected anyone to notice it wasn't a complete proof.
Mostly just integer factorisation of large numbers is hard.
There are some other things you have to worry about practically, e.g Coppersmith's attack, and padding schemes (although that wasn't part of the proof I was given)
During my elementary school years, there was a teacher who told me that I didn't need to memorize things as long as I understood them. I thought he was the coolest guy ever.
Only when I got into my late twenties did I realize how wrong he was. Memorization and understanding go hand in hand, but if one of them has to come first, then it's memorization. He probably said that because that was what kids (who were forced to do rote memorization) wanted to hear.
You could argue this is just moving the memorization to meta-facts, but I found all throughout school that if you understand some slightly higher level key thing, memorization at the level you're supposed to be working in becomes at best a slight shortcut for some things. You can derive it all on the fly.
Sort of like how most of the trigonometric identities that kids are made to memorize fall out immediately from e^iθ = cosθ+isinθ (could be taken as the definitions of cos,sin), e^ae^b=e^(a+b) (a fact they knew before learning trig), and a little bit of basic algebraic fiddling.
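Spelled out, that derivation is just a couple of lines:

    e^{i(a+b)} = e^{ia} e^{ib}
               = (\cos a + i\sin a)(\cos b + i\sin b)
               = (\cos a\cos b - \sin a\sin b) + i(\sin a\cos b + \cos a\sin b)

Matching real and imaginary parts against e^{i(a+b)} = \cos(a+b) + i\sin(a+b) gives both angle-addition formulas, and most of the other identities on the list fall out as special cases.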
Or like how inverse Fourier transforms are just the obvious extension of the idea behind writing a 2-d vector as a sum of its x and y projections. If you get the 2d thing, accept that it works the exact same in n-d (including n infinite), accept integrals are just generalized sums, and functions are vectors, and I guess remember that e^iwt are the basis you want, you can reason through what the formula must be immediately.
Probably. I hated memorization when I was a student too, because it was boring. But as soon as I did some teaching, my attitude changed to, "Just memorize it, it'll make your life so much easier." It's rough watching kids try to multiply when they don't have their times tables memorized, or translate a language when they haven't memorized the vocabulary words in the lesson so they have to look up each one.
There's things that you need to know (2*2 = 4) and there are things that you need to understand (multiplication rules). Both can happen with practice, but they're not that related.
Memorization is more like a shortcut. You don't need to go through the problem solving process to know the result. But with understanding, you master the heuristic factors needed to know when to take the shortcut and when to go through the problem solving route.
The Dreyfus Skill Model [0] is a good explanation. Novice typically have to memorize, then as they master the subject, their decision making becomes more heuristic based.
LLMs don't do well with heuristics, and by the time you've nailed down all the problem's data, you could have been done. What they excel at is memorization, but all the formulaic stuff has been extracted into frameworks and libraries for the most popular languages.
I think the problem is that in spots where the concepts build on one another, you need to memorize the lower level concepts or else it'll be too hard to make progress on the higher level concepts.
If you're trying to expand polynomials and you constantly have to re-derive multiplication from first principles, you're never going to make any progress on expanding polynomials.
I never memorized multiplication tables and was always one of those "good in math" kids. An attempt to memorize that stuff ended with me confusing results and being unable to guess when I did something wrong. Knowing "tricks" and understanding how multiplication works makes life easier.
> "Just memorize it, it'll make your life so much easier."
That is because you evaluate the cost of memorization as 0, because someone else is paying it. And you evaluate the cost of mistakes due to constantly forgetting, and of being unable to correct them, as 0, because the kid simply gets blamed for not having a perfect memory.
> or translate a language when they haven't memorized the vocabulary words in the lesson so they have to look up each one
Teaching language by having people translate a lot is an outdated pedagogy; it simply did not produce people capable of understanding and producing the language. If the kids are translating sentences word by word, something already went wrong earlier.
As with most things, it depends. If you truly do understand something, then you can derive a required result from first principles. _Given sufficient time_. Often in an exam situation you are time-constrained, and having memorized a shortcut can be beneficial. Not to mention retention is much easier when you understand the topic, so memorization becomes easier.
Probably the best example of this I can think of (for me at least) from mathematics is calculating combinations. I have it burned into my memory that (n choose r) = (n permute r) / (r permute r), and (n permute r) = n! / (n - r)!
Can I derive these from first principles? Sure, but after not seeing it for years, it might take me 10+ minutes to think through everything and correct any mistakes I make in the derivation.
But if I start with the formula? Takes me 5 seconds to sanity check the combination formula, and maybe 20 to sanity check the permutation formula. Just reading it to myself in English slowly is enough because the justification kind of just falls right out of the formula and definition.
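Written out, the sanity check is just:

    \binom{n}{r} = \frac{P(n,r)}{P(r,r)} = \frac{n!/(n-r)!}{r!} = \frac{n!}{r!\,(n-r)!}

    e.g. \binom{5}{2} = \frac{5!}{2!\,3!} = \frac{120}{2 \cdot 6} = 10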
So, yeah, they go hand in hand. You want to understand it but you sure as heck want to memorize the important stuff instead of relying on your ability to prove everything from ZFC...
It is waaaay easier to remember when you understand. The professor had it exactly right - if you learn to understand, you frequently end up remembering. But, memorization does not lead to understanding at all.
I think we memorize the understanding. For me it also works better to understand how something works than to memorize results. I remember in high school, in math trigonometry, there was a list of 20-something formulas derived from a single one. Everybody was memorizing the whole list of formulas; I just had to memorize one simple formula and the understanding of how to derive the others from the fundamental one on the fly.
You don't need to memorize to understand. You can rederive it every time.
You need to memorize it to use it subconsciously while solving more complex problems. Otherwise you won't fit more complex solutions into your working memory, so whole classes of problems will be too hard for you.
Ish? I never ever memorized the multiplication tables. To this day, I don't think I know them fully. I still did quite well in math by knowing how to quiz the various equations. Not just know them, but how to ask questions about moving terms and such.
My controversial education hot take: Pointless rote memorization is bad and frustrating, but early education could use more directed memorization.
As you discovered: A properly structured memorization of carefully selected real world material forces you to come up with tricks and techniques to remember things. With structured information (proofs in your case) you start learning that the most efficient way to memorize is to understand, which then reduces the memorization problem into one of categorizing the proof and understanding the logical steps to get from one step to another. In doing so, you are forced to learn and understand the material.
Another controversial take (for HN, anyway) is that this is what happens when programmers study LeetCode. There's a meme that the way to prep for interviews is to "memorize LeetCode". You can tell who hasn't done much LeetCode interviewing if they think memorizing a lot of problems is a viable way to pass interviews. People who attempt this discover that there are far too many questions to memorize, and the best jobs have already written their own questions that aren't out of LeetCode. Even if you do get a direct LeetCode problem in an interview, a good interviewer will expect you to explain your logic and describe how you arrived at the solution, and might introduce a change if they suspect you're regurgitating memorized answers.
Instead, the strategy that actually works is to learn the categories of LeetCode style questions, understand the much smaller number of algorithms, and learn how to apply them to new problems. It’s far easier to memorize the dozen or so patterns used in LeetCode problems (binary search, two pointers, greedy, backtracking, and so on) and then learn how to apply those. By practicing you’re not memorizing the specific problems, you’re teaching yourself how to apply algorithms.
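For instance, the two-pointers pattern boils down to a few lines once you've internalized it; here's a generic textbook version (pair-sum in a sorted array), not any specific LeetCode solution:

    def pair_with_sum(sorted_nums: list[int], target: int) -> tuple[int, int] | None:
        """Return indices of two values in a sorted list that add up to target."""
        lo, hi = 0, len(sorted_nums) - 1
        while lo < hi:
            total = sorted_nums[lo] + sorted_nums[hi]
            if total == target:
                return lo, hi
            if total < target:
                lo += 1      # need a bigger sum: move the left pointer right
            else:
                hi -= 1      # need a smaller sum: move the right pointer left
        return None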
Side note: I’m not advocating for or against LeetCode, I’m trying to explain a viable strategy for today’s interview format.
Exactly. I agree with the leetcode part. A lot of problems in the world are composites of simpler, smaller problems. Leetcode should teach you the basic patterns and how to combine them to solve real world problems. How will you ever solve a real world problem without knowing a few algorithms beforehand? For example, my brother was talking about how a Roomba would map a room. He was imagining 0 to represent free space and 1 as inaccessible points. This quickly reminded me of the Number of Islands problem from leetcode. Yeah, there might be a lot of changes required to that problem, but one could simply represent it as two problems (a rough sketch of the second one follows below the list):
1. Represent different objects in the room as some form of machine understandable form in a matrix
2. Find the number of Islands or find the Islands themselves.
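A rough Python sketch of problem 2, using the standard flood-fill approach (here treating the 1s, the inaccessible points, as the cells to cluster):

    def count_islands(grid: list[list[int]]) -> int:
        """Count connected regions of 1s (4-directional), e.g. obstacle clusters in a room map."""
        if not grid:
            return 0
        rows, cols = len(grid), len(grid[0])
        seen = [[False] * cols for _ in range(rows)]

        def flood(r: int, c: int) -> None:
            # Iterative DFS so large regions don't blow the recursion limit.
            stack = [(r, c)]
            while stack:
                y, x = stack.pop()
                if 0 <= y < rows and 0 <= x < cols and grid[y][x] == 1 and not seen[y][x]:
                    seen[y][x] = True
                    stack.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])

        count = 0
        for r in range(rows):
            for c in range(cols):
                if grid[r][c] == 1 and not seen[r][c]:
                    count += 1
                    flood(r, c)
        return count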
Memorization of, like, multiplication tables gives us a poor view of the more interesting type of memorization. Remembering types of problems we’ve seen. Remembering landmarks and paths, vs just remembering what’s in every cell of a big grid.
> Memorization of, like, multiplication tables gives us a poor view of the more interesting type of memorization.
Memorizing multiplication tables is the first place many children encounter this strategy: The teacher shows you that you could try to memorize all of the combinations, or you could start learning some of the patterns and techniques. When multiplying by 5 the answer will end in 0 or 5. When multiplying by 2 the answer will be an even number, and so on.
I think there may have been a miscommunication somewhere on the chain of Mathematicians-Teachers-Students if that was the plan, when I was in elementary school.
Anecdotally (I only worked with math students as a tutor for a couple years), that math requires a lot of the boring type of memorization seems to be a really widespread misunderstanding.
Fortunately that was not my experience in abstract algebra. The tests and homework were novel proofs that we hadn't seen in class. It was one of my favorite classes / subjects. Someone did tell me in college that they did the memorization thing in German Universities.
Code-wise, I spent a lot of time in college reading other people's code. But no memorization. I remember David Betz advsys, Tim Budd's "Little Smalltalk", and Matt Dillon's "DME Editor" and C compiler.
I would wager some folks can memorize without understanding? I do think memorization is underrated, though.
There is also something to the practice of reproducing something. I always took this as a form of "machine learning" for us. Just as you get better at juggling by actually juggling, you get better at thinking about math by thinking about math.
Interesting, I had the same problem and suffered in grades back in school simply because I couldn't memorize much without understanding. However, I seemed to be the only one, because every single other student, including those with top grades, was happy to memorize and regurgitate. I wonder how they're doing now.
My abstract algebra class had it exactly backwards. It started with a lot of needless formalism culminating in Galois theory. This was boring to most students, as they had no clue why the formalism was invented in the first place.
Instead, I wish it had shown how the sausage was actually made in the original writings of Galois [1]. This would have been far more interesting to students, as it shows the struggles that went into making the product - not to mention the colorful personality of the founder.
The history of how concepts were invented for the problems faced is far more motivating to students to build a mental model than canned capsules of knowledge.
> This was boring to most students as they had no clue why the formalism was invented in the first place.
> The history of how concepts were invented for the problems faced is far more motivating to students to build a mental model than canned capsules of knowledge.
That's something I really like about 3blue1brown, and he says it straight up [0]:
> My goal is for you to come away feeling like you could have invented calculus yourself. That is, cover all those core ideas, but in a way that makes clear where they actually come from, and what they really mean, using an all-around visual approach.
Depends on the subject. I can remember multiple subjects where the teacher would give you a formula to memorise without explaining why or where it came from. You had to take it as an axiom. The teachers also didn't say, hey, if you want to know how we arrived at this, have a read here. No, it was just given.
Ofc you could also say that's for the student to find out, but I've had other things on my mind
It is what you memorize that is important: you can't have a good discussion about a topic if you don't have the facts and logic of the topic in memory. On the other hand, using memory to paper over bad design instead of simplifying or properly modularizing it leads to that 'the worst code I have seen is code I wrote six months ago' feeling.
Your comment about memorizing as part of understanding makes a lot of sense to me, especially as one possible technique to get unstuck when grasping a concept.
If it doesn’t work for you on l33t code problems, what techniques are you finding more effective in that case?
I was part of an ACM programming team in college. We would review classes of problems based on the type of solution necessary, and learn those techniques for solving them. We were permitted a notebook, and ours was full of the general outline of each of these classes and techniques. Along with specific examples of the more common algorithms we might encounter.
As a concrete example, there is a class of problems that are well served by dynamic programming. So we would review specific examples like Dijkstra's algorithm for shortest path. Or Wagner–Fischer algorithm for Levenshtein-style string editing. But we would also learn, often via these concrete examples, of how to classify and structure a problem into a dynamic programming solution.
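As an illustration, Wagner–Fischer fits the dynamic programming template neatly; this is the generic textbook version in Python, not our actual contest notebook:

    def edit_distance(a: str, b: str) -> int:
        """Levenshtein distance via the Wagner-Fischer dynamic-programming table."""
        m, n = len(a), len(b)
        # dp[i][j] = edits needed to turn a[:i] into b[:j]
        dp = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(m + 1):
            dp[i][0] = i                      # delete all of a[:i]
        for j in range(n + 1):
            dp[0][j] = j                      # insert all of b[:j]
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                cost = 0 if a[i - 1] == b[j - 1] else 1
                dp[i][j] = min(
                    dp[i - 1][j] + 1,         # deletion
                    dp[i][j - 1] + 1,         # insertion
                    dp[i - 1][j - 1] + cost,  # substitution (or match)
                )
        return dp[m][n]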
I have no idea if this is what is meant by "l33t code solutions", but I thought it would be a helpful response anyway. But the bottom line is that these are not common in industry, because hard computer science is not necessary for typical business problems. The same way you don't require material sciences advancements to build a typical house. Instead it flows the other way, where advancements in materials sciences will trickle down to changing what the typical house build looks like.
>If it doesn’t work for you on l33t code problems, what techniques are you finding more effective in that case?
Memorization of l33t code DOES work well as prep for l33t code tests. I just don't think l33t code has much to do with application programming. I've long felt that "computer science" is physics for computers, low on the abstraction ladder, and there are missing labels for the higher complexity subjects built on it. Imagine if all physical sciences were called "physics" and so in order to get a job as a biologist you should expect to be asked questions about the Schroedinger equation and the standard model. We desperately need "application engineering" to be a distinct subject taught at the university level.
That's a real major that's been around for a couple of decades which focuses on software development (testing, version control, design patterns) with less focus on the more theoretical parts of computer science? There are even specialties within the Software Engineering major that focus specifically on databases or embedded systems.
What I understand from the GP is that memorizing l33t code won't help you learn anything useful. Not that understanding the solutions won't help you memorize them.
Is it the memorisation that had the desired effect or the having to come up with the novel proofs? Many schools seem to do the memorising part, but not the creating part.
> But an amazing thing happened - I realized that it was impossible to memorize a proof without understanding it!
This may be true of mathematical proofs, but it surely must not be true in general. Memorizing long strings of digits of pi probably isn’t much easier if you understand geometry. Memorizing famous speeches probably isn’t much easier if you understand the historical context.
> Memorizing famous speeches probably isn’t much easier if you understand the historical context.
Not commenting on the merits of critical thinking vs memorization either way, but I think it would be meaningfully easier to memorize famous speeches if you understand the historical context.
For memorizing a speech word-for-word, I don't think so. Knowing the years of the signing of the Declaration of Independence and the Gettysburg Address aren't gonna help you nail the exact wording of the first sentence.
It's funny, because I had the exact opposite experience with abstract algebra.
The professor explained things, we did proofs in class, we had problem sets, and then he gave us open-book semi-open-professor take-home exams that took us most of a week to do.
Proof classes were mostly fine. Boring, sometimes ridiculously shit[0], but mostly fine. Being told we have a week for this exam that will kick our ass was significantly better for synthesizing things we'd learned. I used the proofs we had. I used sections of the textbook we hadn't covered. I traded some points on the exam for hints. And it was significantly more engaging than any other class' exams.
[0] Coming up with novel things to prove that don't require some unrelated leap of intuition that only one student gets is really hard to do. Damn you Dr. B, needing to figure out that you have to define a third equation h(x) as (f(x) - g(x))/(f(x) + g(x)) as the first step of a proof isn't reasonable in a 60 minute exam.
Mathematics pedagogy today is in a pretty sorrowful state due to bad actors and willful blindness at all levels that require public trust.
A dominant majority in public schools starting late 1970s seems to follow the "Lying to Children" approach which is often mistakenly recognized as by-rote teaching but are based in Paulo Freire's works that are in turn based on Mao's torture discoveries from the 1950s.
This approach contrary to classical approaches leverages torturous process which seems to be purposefully built to fracture and weed out the intelligent individual from useful fields, imposing sufficient thresholds of stress to impose PTSD or psychosis, selecting for and filtering in favor of those who can flexibly/willfully blind/corrupt themselves.
Such sequences include Algebra->Geometry->Trigonometry where gimmicks in undisclosed changes to grading cause circular trauma loops with the abandonment of Math-dependent careers thereafter, similar structures are also found in Uni, for Economics, Business, and Physics which utilize similar fail-scenarios burning bridges where you can't go back when the failure lagged from the first sequence, and you passed the second unrelated sequence. No help occurs, inducing confusion and frustration to PTSD levels, before the teacher offers the Alice in Wonderland Technique, "If you aren't able to do these things, perhaps you shouldn't go into a field that uses it". (ref Kubark Report, Declassified CIA Manual)
Have you been able to discern whether these "patterns" as you've called them aren't just the practical reversion to the classical approach (Trivium/Quadrivium)? Also known as the first-principles approach after all the filtering has been done.
To compare: Classical approaches start with nothing but a useful real system and observations which don't entrench false assumptions as truth, which are then reduced to components and relationships to form a model. The model is then checked for accuracy against current data to separate truth from false in those relationships/assertions in an iterative process with the end goal being to predict future events in similar systems accurately. The approach uses both a priori and a posteriori components to reasoning.
Lying to Children reverses and bastardizes this process. It starts with a single useless system which contains equal parts true and false principles (as misleading assumptions) which are tested and must be learned to competency (growing those neurons close together). Upon the next iteration one must unlearn the false parts while relearning the true parts (but we can't really unlearn, we can only strengthen or weaken) which in turn creates inconsistent mental states imposing stress (torture). This is repeated in an ongoing basis often circular in nature (structuring), and leveraging psychological blindspots (clustering), with several purposefully structured failings (elements) to gatekeep math through torturous process which is the basis for science and other risky subject matter. As the student progresses towards mastery (gnosis), the systems become increasingly more useful. One must repeatedly struggle in their sessions to learn, with the basis being if you aren't struggling you aren't learning. This mostly uses a faux a priori reasoning without properties of metaphysical objectivity (tied to objective measure, at least not until the very end).
If you don't recognize this, an example would be the electrical water-pipe pressure analogy. Diffusion of charge in like materials, with intensity (current) towards the outermost layer, was the first-principled approach pre-1978 (I = V/R). The water analogy fails when the naive student tries to relate the behavior to pressure equations, which end up being contradictory at a number of points in the system, introducing stumbling blocks that must later be unlearned.
Torture here being the purposefully directed imposition of psychological stress beyond an individual's capacity to cope, towards physiological stages of heightened suggestibility and mental breakdown (where rational thought is reduced or non-existent in the intelligent).
It is often recognized by its characteristic subgroups of elements (cognitive dissonance, a lack of agency to remove oneself, and coercion/compulsion with real or perceived loss or the threat thereof), structuring (circular patterns of strictness followed by leniency in a loop, fractionation), and clustering (psychological blindspots).
Wait, the electrical pipe water analogy is actually a very good one, and it's quite difficult to find edge cases where it breaks down in a way that would confuse a student. There are some (for example, there's no electrical equivalent of the Reynolds number or turbulence, flow resistance varies differently with pipe diameter than wire diameter, and there's no good equivalent for Faraday's law), but I don't think these are likely to cause confusion. It even captures nuance like inductance, capacitance, and transmission line behaviour.
As I recall, my systems dynamics textbook even explicitly drew parallels between different domains like electricity and hydrodynamics. You're right that the counterparts aren't generally perfect especially at the edges but the analogies are often pretty good.
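To make the correspondence concrete, here's a minimal sketch (in Python, with made-up numbers) of the mapping the analogy relies on: voltage maps to pressure drop, current to flow rate, resistance to flow resistance. The function names are just for illustration, not from any textbook.

    # Minimal sketch of the electrical <-> hydraulic analogy (illustrative values only).
    # Electrical: V = I * R      (voltage, current, resistance)
    # Hydraulic:  dP = Q * R_h   (pressure drop, volumetric flow, flow resistance)

    def current(voltage, resistance):
        # Ohm's law: I = V / R
        return voltage / resistance

    def flow(pressure_drop, flow_resistance):
        # Linear (laminar) hydraulic analogue: Q = dP / R_h
        return pressure_drop / flow_resistance

    # Same functional form, different physical units: that is the whole analogy.
    print(current(12.0, 4.0))  # 3.0 (amps)
    print(flow(12.0, 4.0))     # 3.0 (e.g. litres/second, for a made-up R_h)

The analogy holds wherever the relationship stays linear; the disagreements above are about what happens outside that regime.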
Intuitively it fails by equating resistance to two lengths multiplied together, i.e. an area, which is an unrelated dimensional unit, and it doesn't capture the skin effect related to intensity/current, which is why insulation/isolation of wires is incredibly important.
The classical approach used charge diffusion, iirc, and you can find classical examples of this in Oliver Heaviside's published works (on archive.org, iirc). He's the one who simplified Maxwell's 20+ equations down to the small number we use today.
> Lying to Children reverses and bastardizes this process. It starts with a single useless system which contains equal parts true and false principles (as misleading assumptions) which are tested and must be learned to competency (growing those neurons close together).
Not OP, and it was a couple of decades ago, but I certainly remember professors and teachers saying things like "this isn't really how X works, but we will use the approximation for now in order to teach you this other thing". That is, if you were lucky; most just taught you the wrong (or incomplete) formula.
I think there is validity to the approach, but the sciences would be much, much improved if taught more like history lessons. Here is how we used to think about gravity, here's the formula, and it kind of worked, except... Here are the planetary orbits we used to use when we assumed they had to be circles. Here's how the data looked and here's how they accounted for it...
This would accomplish two goals: learning the wrong way for immediate use (building on sand) and building an innate understanding of how science actually progresses. Too little focus is placed on how we always create magic numbers and vague concepts (dark matter, for instance) to account for structural problems we have no good answer for.
Being able to "sniff the fudge" would be a superpower when deciding what to write a PhD on, for instance. How much better would science be if everyone strengthened this muscle throughout their education?
I included the water pipe analogy for electrical theory; that is one specific example.
Also, in Algebra I've seen a flawed version of mathematical operations being taught that breaks down with negative numbers under multiplication (when the correct version is closed under multiplication). The tests were supposedly randomized (but seemed to target low-income demographics). The process is nearly identical, but the answers are ultimately not correct. The teachers graded on the work to the exclusion of the correct answer: so long as you showed the correct process expected in Algebra, you passed without getting the right answer. Geometry was distinct and unrelated, and by Trigonometry the class required both correct process and correct answer. You don't find out there is a problem until Trigonometry, and the teacher either doesn't know where the person is failing comprehension or isn't going to reteach a class they aren't paid for, and you can't go back.
I've seen and heard horror stories of students who'd failed Trig 7+ times at the college level and wouldn't have progressed if not for a devoted teacher helping them after hours (basically correcting and reteaching Algebra). These kids would literally break out in a cold PTSD sweat just hearing words related to math.
I did some tutoring in a non-engineering graduate masters program and some folks were just lost. Simple things like what a graph is or how to solve an equation. I really did try but it's sort of hard to teach fairly easy high school algebra (with maybe some really simple derivatives to find maxima and minima) in grad school.
I'd love an example too, and an example of the classical system that this replaced. I'm willing to believe the worst of the school system, but I'd like to understand why.
The classical system was described above, but you can find it in various historic works based on what's commonly referred to today as the Trivium- and Quadrivium-based curricula.
Off the top of my head, the former includes reasoning under dialectic (the a priori parts, with the a posteriori parts coming later under the Quadrivium).
It's a bit much to explain in detail in a post like this, but you should be able to find sound resources with what I've provided.
It largely goes back to how philosophy was taught; all the way back to Socrates/Plato/Aristotle, up through Descartes, Locke (barely, though he's more famous for social contract theory), and more modern scientists/scientific method.
The way math is taught today, you basically get to throw out almost everything you were taught at various stages and relearn it anew on a different foundation, somehow fitting the fractured pieces back together toward the true foundations, which would have been much easier to build on from the start instead of fighting the constant interference.
You don't really end up understanding math intuitively, or its deep connections to logic (dialectics, the Trivium), until you hit Abstract Algebra.
The first chapter or two, depending on the book being used, is more than sufficient to cover the foundational concepts: sets; properties such as closure over given operations; mathematical relabeling, which is a function (f(x)), the requirements for one (a unique value for each x, and mapping onto), along with the tests for the presence of these; and the properties of common mathematical systems.
This naturally provides easily understood limitations of mathematical systems that can be tested whenever there is a question, lets students recognize when they have violated the properties (which is what leads to common mistakes), and provides a space where numbers, geometry, and reasoning are all in play.
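For what it's worth, a minimal sketch (in Python, finite sets only, with a hypothetical helper name) of what "testing closure over an operation" looks like in practice:

    # Check whether a finite set is closed under a binary operation,
    # i.e. applying the operation to any two members always stays inside the set.
    def is_closed(elements, op):
        elements = set(elements)
        return all(op(a, b) in elements for a in elements for b in elements)

    # The integers mod 5 are closed under multiplication mod 5...
    print(is_closed(range(5), lambda a, b: (a * b) % 5))  # True

    # ...but {0, 1, 2, 3, 4} is not closed under plain multiplication (2 * 3 = 6).
    print(is_closed(range(5), lambda a, b: a * b))         # False

The integers themselves, negatives included, are closed under multiplication, which is exactly the property the flawed Algebra version mentioned earlier gets wrong.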
It's the core problem facing the hiring practices in this field. Any truly competent developer is a generalist at heart. There is value to be had in expertise, but unless you're dealing with a decade(s) old hellscape of legacy code or are pushing the very limits of what is possible, you don't need experts. You'd almost certainly be better off with someone who has experience with the tools you don't use, providing a fresh look and cover for weaknesses your current staff has.
A regular old competent developer can quickly pick up whatever stack is used. After all, they have to; Every company is their own bespoke mess of technologies. The idea that you can just slap "15 years of React experience" on a job ad and that the unicorn you get will be day-1 maximally productive is ludicrous. There is always an onboarding time.
But employers in this field don't "get" that. Regular companies are infested by managers imported from non-engineering fields, who treat software like it's the assembly line for baking tins or toilet paper. Startups, who already have fewer resources to train people with, are obsessed with velocity and shitting out an MVP ASAP so they can go collect the next funding round. Big Tech is better about this, but it has its own problems going on, and it seems that the days of Big Tech being the big training houses are also over.
It's not even a purely collective problem. Recruitment is so expensive, but all the money spent chasing unicorns & the opportunity costs of being understaffed just get handwaved. Rather spend $500,000 on the hunt than $50,000 on training someone into the role.
And speaking of collective problems. This is a good example of how this field suffers from having no professional associations that can stop employers from sinking the field with their tragedies of the commons. (Who knows, maybe unions will get more traction now that people are being laid off & replaced with outsourced workers for no legitimate business reason.)
> Rather spend $500,000 on the hunt than $50,000 on training someone into the role.
Capex vs opex, that's the fundamental problem at heart. It "looks better on the numbers" to have recruiting costs than to set aside a senior developer plus pay the junior for a few months. That is why everyone and their dog only wants to hire seniors: they have the skillset and experience such that you can sit their ass in front of any random semi-fossil project and they'll figure it out on their own.
If the stonk analysts would actually dive deep into the numbers and look at hiring-side costs (like headhunter expenses, employee retention and the like), you'd see a course change pretty fast... but that kind of in-depth analysis is only done by a fair few short-sellers who focus on struggling companies, not Big Tech.
In the end, it's a "tragedy of the commons" scenario. It's fine if a few companies do that, it's fine if a lot of companies do that... but when no one wants to train juniors any more (because they immediately get poached by the big ones), suddenly society as a whole has a real and massive problem.
Our societies are driven into a concrete wall at full speed by the financialization of every tiny aspect of our lives. All that matters these days are the gods of the stonk market - screw the economy, screw the environment, screw labor laws, all that matters is appearing "numbers go up" on the next quarterly.
> but when no one wants to train juniors any more (because they immediately get poached by the big ones)
Can we stop pretending that we don't know how to solve this problem? If you hire juniors at $X/year, but they keep getting poached after 2-3 years because now they can get $X*1.5/year (or more!), then maybe you should start promoting and giving raises to them after they've gotten a couple years experience.
Seriously, this is not a hard problem to solve. If the junior has proven themselves, give them the raise they deserve instead of being all Surprised Pikachu when another company is willing to pay them what they've proven themselves worthy of.
The problem is, no small company can reasonably compete with the big guns.
We're seeing this here in Munich. BMW and other local industry used to lure over loooots of people by virtue of paying much more than smaller shops - and now Apple, Google, Microsoft, and a few other big techs that our "beloved" prime minister Söder courted are doing the same thing to them... and, as a side effect, fucking up the housing market even more than it already is.
> Our societies are driven into a concrete wall at full speed by the financialization of every tiny aspect of our lives. All that matters these days are the gods of the stonk market - screw the economy, screw the environment, screw labor laws, all that matters is appearing "numbers go up" on the next quarterly.
I have been in the various nooks and crannies of the Internet/software dev industry my whole career (I'm 49). I can't think of any time when the stock market didn't drive software innovation. It's always been either invent something -> go public -> exit or invent something -> increase stock price of existing public corp.
> It's always been either invent something -> go public -> exit or invent something -> increase stock price of existing public corp
Yes, but today it's more and more invent something -> achieve dominance -> get bought up by an even larger megacorp. That drives the enshittification cycle.
That's part of the problem, but I also notice that new hiring managers are incentivized to hire (or replace) employees to make their mark on the company. They then advocate for "their guys," the ones they recruited, over the incumbents, who are the unwilling dinosaurs in their eyes.
I can’t think of another career where management continuously does not understand the realities of how something gets built. Software best practices are on their face orthogonal to how all other parts of a business operate.
How does marketing operate? In a waterfall like model. How does finance operate? In a waterfall like model. How does product operate? Well you can see how this is going.
Then you get to software and it’s 2 week sprints, test driven development etc. and it decidedly works best not on a waterfall model, but shipping in increments.
Yet the rest of the business does not work this way, it’s the same old top down model as the rest.
This I think is why so few companies or even managers / executives “get it”
> can’t think of another career where management continuously does not understand the realities of how something gets built
All engineering. Also all government and a striking amount of finance.
Actually, this might be a hallmark of any specialist field. Specialists interface with outsiders through a management layer necessarily less competent at the specialty than they are. (Since they’re devoting time and energy to non-specialty tasks.)
While product often does operate in a waterfall model, I think this is the wrong mindset. Good product management should adopt a lot of the same principles as software development. Form a testable hypothesis, work to get it into production and begin gathering data, then based on your findings determine what the next steps are and whether to adjust the implementation, consider the problem solved or try a different approach.
> I can’t think of another career where management continuously does not understand the realities of how something gets built.
This is in part a consequence of how young our field is.
The other comment pointing out other engineering is right here. The difference is that fields like civil engineering are millennia old. We know that Egyptian civil engineering was advanced and shockingly modern even 4.5 millennia ago. We've basically never stopped having qualified civil engineers around who could manage large civil engineering projects and companies.
Software development in its modern forms has its start still in living memory. There simply weren't people to manage the young early software development firms as they grew, so management got imported from other industries.
And to say something controversial: Other engineering has another major reason why it's usually better understood. They're held to account when they kill people.
If you're engineering a building or most other things, you must meet safety standards. Where possible you are forced to prove you meet them. E.g. Cars.
You don't get to go "Well, cars don't kill people, people kill people. If someone in our cars dies when they're hit by a drunk driver, that's not our problem, that's the drunkard's fault." No. Your car has to hold up to a certain level of crash safety; even if it's someone else who causes the accident, your engineering work damn better hold up.
In software, we just do not do this. The very notion of "software kills people" is controversial, treated as a joke: "Of course it can't kill people, what are you on about?" Say you neglect your application's security. There's an exploit, a data breach, you leak your users' GPS locations. A stalker uses the data to find and kill their victim.
In our field, the popular response is to go "Well, we didn't kill the victim, the stalker did. It's not our problem." This is on some level true; 'twas the drunk driver who caused the car crash, not the car company. But that doesn't justify the car company selling unsafe cars, so why should it justify us selling unsafe software? It may be but a single drop of blood, but it's still blood on our hands as well.
As it stands, we are fortunate that there haven't been incidents big enough to kill so many people that governments take action to forcibly change this mindset. It would be wise for software development to take up this accountability of its own accord to prevent such a disaster.
>For regular companies they're infested by managers imported from non-engineering fields
Someone's cousin, let's leave it at that, someone's damn cousin or close friend, or anyone else with merely a pulse.
I've had interviews where the company had just been turned over from people that mattered, and you. could. tell.
One couldn't even tell me why the project I needed to do for them ::rolleyes::, their own boilerplate code (which they said would run), had runtime issues, and I needed to debug it myself just to get it to a starting point.
It's like, Manager: Oh, here's this non-tangential thing that they tell me you need to complete before I can consider you for the position....
Me: Oh can I ask you anything about it?....
Manager: No
Isn't that happening already? Half the usual CS curriculum is either math (analysis, linear algebra, numerical methods) or math in all but name (computability theory, complexity theory). There's a lot of very legitimate criticism of academia, but most of the time someone goes "academia is stupid, we should do X", it turns out X is either:
- something we've been doing since forever
- the latest trend that can be picked up just-in-time if you'll ever need it
I've worked in education in some form or another for my entire career. When I was in teacher education in college . . . some number of decades ago . . . the number one topic of conversation and topic that most of my classes were based around was how to teach critical thinking, effective reasoning, and problem solving. Methods classes were almost exclusively based on those three things.
Times have not changed. This is still the focus of teacher prep programs.
Parent comment is literally praising an experience they had in higher education, but your only takeaway is that it must be facile ridicule of academia.
In CS, it's because it came out of math departments in many cases and often didn't even really include a lot of programming because there really wasn't much to program.
Right but a looot of the criticism online is based on assumptions (either personal or inherited from other commenters) that haven’t been updated since 2006.
Well, at more elite schools at least, the general assumption is that programming is mostly something you pick up on your own. It's not CS. Some folks will disagree of course but I think that's the reality. I took an MIT Intro to Algorithms/CS MOOC course a few years back out of curiosity and there was a Python book associated with the course but you were mostly on your own with it.
When I was in college the philosophy program had the marketing slogan: “Thinking of a major? Major in thinking”.
Now as a hiring manager I'll say I regularly find that those who've had humanities experience are way more capable at the hard parts of analysis and understanding. Of course I'm biased as a dual CS/philosophy major, but it's very rare that I'm looking for someone who can just write a lot of code. Especially juniors, as analytical thinking is way harder to teach than how to program.
> Now as a hiring manager I'll say I regularly find that those who've had humanities experience are way more capable at the hard parts of analysis and understanding.
The humanities, especially the classic texts, cover human interaction and communication in a very compact form. My favorite sources are the Bible, Cicero, and Machiavelli. For example, Machiavelli says that if you do bad things to people you should do them all at once, while good things you should spread out over time. This is common sense. Once you catch the flavor of his thinking, it's pretty easy to work other situations out for yourself, in the same way that good engineering classes teach you how to decompose and solve technical problems.
The #1 problem in almost all workplaces is communication related. In almost all jobs I've had in 25-30 years, finding out what needs to be done and what is broken is much harder than actually doing it.
We have these sprint planning meetings and the like where we throw estimates on the time some task will take but the reality is for most tasks it's maybe a couple dozen lines of actual code. The rest is all what I'd call "social engineering" and figuring out what actually needs to be done, and testing.
Meanwhile upper management is running around freaking out because they can't find enough talent with X years of Y [language/framework] experience, imagining that this is the wizard power they need.
The hardest problem at most shops is getting business domain knowledge, not technical knowledge. Or at least creating a functioning pipeline between the people with the business knowledge and the people with the technical knowledge.
Anyways, yes, I have 3/4 of a PHIL major and it actually has served me well. My only regret is not finishing it. But once I started making tech industry cash it was basically impossible for me to return to school. I've met a few other people over the years like me, who dropped out in the '90s dot-com boom and then never went back.
Yea, this is why I'm generally not that impressed by LLMs. They still force you to do the communication, which is the hard part. Programming languages are inherently a solve for communicating complex steps. Programming in English isn't actually that much of a help; you just have to reinvent how to be explicit.
I find Claude Code unexpectedly good at analysis, with a healthy dose of skepticism. It is actually really good at reading logs and correlating events, for example.
This is also why I went into the Philosophy major - knowing how to learn and how to understand is incredibly valuable.
Unfortunately in my experience, many, many people do not see it that way. It's very common for folks to think of philosophy as "not useful / not practical".
Many people hear the word "philosophy" and mentally picture "two dudes on a couch recording a silly podcast", and not "investigative knowledge and in-depth context-sensitive learning, applied to a non-trivial problem".
It came up constantly in my early career, trying to explain to folks, "no, I actually can produce good working software and am reasonably good at it, please don't hyper-focus on the philosophy major, I promise I won't quote Scanlon to you all day."
How people see it is based on the probability of any philosophy major producing good working software, not you being able to produce good working software.
Maybe because philosophy focuses on weird questions (to be or not to be) and weird personas. If it were advertised as a more grounded thing, the views would be different.
The way you are perceived by others depends on your behaviour. If you want to be perceived differently, adjust your behaviour; don't demand that others change. They won't.
Many top STEM schools have substantial humanities requirements, so I think they agree with you.
At Caltech they require a total of at least 99 units in humanities or social sciences. 1 Caltech unit is 1 hour of work a week for each week of the term, and a typical class is 9 units consisting of 3 hours of classwork a week and 6 hours of homework and preparation.
That basically means that for 11 of the 12 terms that you are there for a bachelor's degree, you need to be taking a humanities or social sciences class. They require at least 4 of those to be in humanities (English, history, history and philosophy of science, humanities, music, philosophy, and visual culture), and at least 3 to be in social sciences (anthropology, business economics and management, economics, law, political science, psychology, and social science).
At MIT they have similar, but more complicated, requirements. They require humanities, art, and social sciences, and they require that you pick at least one subject in one of those and take more than one course in it.
On a related note, the most accomplished people I've met didn't have degrees in the fields where they excelled and won awards. They were all philosophy majors.
Teaching people to think is perhaps the world's most under-rated skill.
Well, yes, but the other 90%+ just need to get a job out of college to support their addiction to food and shelter, not to be a "better citizen of the world," unless they have parents to subsidize their livelihood, either through direct transfers of money or by letting them stay at home.
I told both of my (step)sons that I would only help them pay for college or trade school - their choice - if they were getting a degree in something “useful”. Not philosophy, not Ancient Chinese Art History etc.
I also told them that they would have to get loans in their own names and I would help them pay off the loans once they graduated and started working gainfully.
My otherwise ordinary school applied the mentality that students must "Learn to learn", and that mix of skills and mindset has never stopped helping me.
yes, sometimes you need people who can grasp the tech and talk to managers. They might be intermediaries.
But don't ignore the nerdy guys who have been living deeply in a tech ecosystem all their lives. The ones who don't dabble in everything. (the wozniaks)
A professor in my very first semester called the urge to go straight to the code without decomposing the problem from a business or user perspective "crazy finger syndrome". It was a long time ago. It was a CS curriculum.
I miss her jokes against anxious nerds that just wanted to code :(
Don't forget the rise of boot camps, where the educators are not always aligned with higher ethical standards.
> "crazy finger syndrome" - the attempts to go straight to the code without decomposing the problem from a business or user perspective
Years ago I started on a new team as a senior dev and did weeks of pair programming with a more junior dev to intro me to the codebase. His approach was maddening; I called it "spray and pray" development. He would type out lines or paragraphs of the first thing that came to mind just after sitting down and opening an editor. I'd try to talk him into actually taking even a few minutes to think about the problem first, but it never took hold. He'd be furiously typing while I would come up with a working solution without touching a keyboard, usually with a whiteboard or notebook, but we'd have to try his first. This was C++/trading, so the type-compile-debug cycle could be tens of minutes. I kept relaying this to my supervisor, and after a few months of this he was let go.
I make a point to solve my more difficult problems with pen and paper drawings and/or narrative text before I touch the PC. The computer is an incredibly distracting medium to work with if you are not operating under clear direction. Time spent on this forum is a perfect example.
Memorization and closed-book tests are important for some areas. When seconds count, an ER doctor cannot go look up how to treat a heart attack. That doctor also needs to know not only how to treat the common heart attack, but how to recognize that this isn't the common heart attack but the 1-in-10,000 case that isn't a heart attack at all yet has exactly the same symptoms, and give it the correct treatment.
However most of us are not in that situation. It is better for us to just look up those details as we need them because it gives us more room to handle a broader variety of situations.
Humans will never outcompete AI in that regard, however. Industry will eventually optimize for humans and AI separately: AI will know a lot and think quickly, humans will provide judgement and legal accountability. We're already on this path.
Speaking recently with a relative who is a doctor, it's interesting how much of each of our jobs is "troubleshooting".
Coding, doctors, plumber… different information, often similar skill sets.
I worked a job doing tech support for some enterprise level networking equipment. It was the late 1990s and we were desperate for warm bodies. Hired a former truck driver who just so happened to do a lot of woodworking and other things.
Everyone going through STEM needs to see the movie Hidden Figures for a variety of reasons, but one bit stands out as poignant: I believe it was Katherine Johnson, who is asked to calculate some rocket trajectory to determine the landing coordinates, thinks on it a bit and finally says, "Aha! Newton's method!" Then she runs down to the library to look up how to apply Newton's method. She had the conceptual tools to find a solution, but didn't have all the equations memorized. Having all the equations in short term memory only matters in a (somewhat pathological) school setting.
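For anyone who hasn't seen it, here's a minimal Python sketch of Newton's method, the kind of thing you can look up and re-derive once you know it's the right tool; the example function and parameter defaults are just illustrative, not from the film or any particular text.

    # Newton's method: refine a root estimate with x_{n+1} = x_n - f(x_n) / f'(x_n).
    def newton(f, f_prime, x0, tol=1e-10, max_iter=50):
        x = x0
        for _ in range(max_iter):
            step = f(x) / f_prime(x)
            x -= step
            if abs(step) < tol:
                break
        return x

    # Example: find sqrt(2) as the positive root of f(x) = x^2 - 2.
    print(newton(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0))  # ~1.41421356

Knowing that such an iteration exists is the conceptual tool; the exact update rule is the part you can always look up.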
My favorite professor in my physics program would say, "You will never remember the equations I teach. But if you learn how the relationships are built and how to ask questions of those relationships, then I have done my job." He died a few years ago. I never was able to thank him for his lessons.
> My favorite professor in engineering school always gave open book tests.
My experience as a professor and a student is that this doesn't make any difference. Unless you can copy verbatim the solution to your problem from the book (which never happens), you better have a good understanding of the subject in order to solve problems in the allocated time. You're not going to acquire that knowledge during your test.
> My experience as a professor and a student is that this doesn't make any difference.
Exactly the point of his test methodology.
What he asked of students on a test was to *apply* knowledge and information to *unique* problems and create a solution that did not exist in any book.
I only brought 4 things to his tests: textbook, pencil, calculator, and a capable, motivated, and determined brain. And his tests revealed the limits of what you could achieve with these items.
Isn't this an argument for why you should allow open book tests rather than why you shouldn't? It certainly removes some pressure to remember some obscure detail or formula.
Isn't that just an argument for always doing open book tests, then? Seems like there's no downside, and as already mentioned, it's closer to how one works in the real world.
During some of the earlier web service development days, one would find people at F500 companies skating by in low-to-mid level jobs just cutting and pasting between spreadsheets; things that took them hours could have been done in seconds, and with lower error rates, with a proper data interface.
Very anecdotally, but I'd hazard that most of these low-hanging-fruit, low-value-add roles are much less common now, since they tended to be blockers for operational improvement. Six Sigma, Lean, and various flavors of Agile would often surface these low performers, and they either improved or got shown the door between 2005 and 2020.
Not that everyone is 100% all the time, every day, but what we are left with is often people that are highly competent at not just their task list but at their job.
I had a like-minded professor in university, ironically in AI. Our big tests were all 3-day take-home assignments. The questions were open ended and required writing code, processing data, and analyzing results.
I think the problem with this is that it requires the professor to mentally fully engage when marking assignments and many educators do not have the capacity and/or desire to do so.
Might be true, idk? For all we know that professor now gives a 2.5-day take-home assignment where they are allowed to use LLMs, and then assesses them in a 1-hour oral exam where they need to explain their approach, their results, and how they ensured those results are accurate.
I don't think the 3-day take home is the key. It's supporting educators to have the intention, agency and capacity to improvise assessment.
It depends what level the education is happening at. Think of it like students being taught how to do for loops but are just copying and pasting AI output. That isn't learning. They aren't building the skills needed to debug when the AI gets something wrong with a more complicated loop, or understand the trade offs of loops vs recursion.
Finding the correct balance for a given class is hard. Generally, the lower the education level, the more it should be closed book, because the more it is about being able to manually solve the smaller challenges that are already well solved, so that you build up the skills needed to even tackle the larger challenges. The higher the education level, the more it is about being able to apply those skills to tackle a problem, and one of those skills is being able to pull relevant formulas from the larger body of known formulas.
I've had a frustrating experience the past few years trying to hire junior sysadmins because of a real lack of problem-solving skills once something goes wrong outside the various playbooks they've memorized.
I don't need someone who can follow a pre-written playbook, I have ansible for that. I need someone that understands theory, regardless of specific implementations, and can problem solve effectively so they can handle unpredictable or novel issues.
To put another way, I can teach a junior the specifics of bind9 named.conf, or the specifics of our own infrastructure, but I shouldn't be expected to teach them what DNS in general is and how it works.
But the candidates we get are the opposite - they know specific tools, but lack more generalized theory and problem solving skills.
Same here! I always like to say that software engineering is 50% knowing the basics (How to write/read code, basic logic) and 50% having great research skills. So much of our time is spent finding documentation and understanding what it actually means as opposed to just writing code.
You cannot teach "how to think". You have to give students thinking problems to actually train thinking. Those kinds of problems can increasingly be farmed off to AI, or at least certain subproblems in them can.
I mean, yes, to an extent you can teach how to think: critical thinking and logic are topics you can teach, and people who take that teaching to heart can become better thinkers. However, those topics cannot impart creativity. Critical thinking is called exactly that because it's about tools and skills for separating bad thinking from good thinking. The skill of generating good thinking probably cannot be taught; it can only be improved with problem-solving practice.
> In the real world of work, everyone has full access to all of the available data and information.
In general, I also attend your church.
However, as I preached in that church, I had two students over the years.
* One was from an African country and told me that where he grew up, you could not "just look up data that might be relevant" because internet access was rare.
* The other was an ex US Navy officer who was stationed on a nuclear sub. She and the rest of the crew had to practice situations where they were in an emergency and cut off from the rest of the world.
Memorization of considerable amounts of data was important to both of them.
Each one of us has a mental toolbox that we use to solve problems. There are many more tools that we don’t have in our heads that we can look up if we know how.
The bigger your mental toolbox the more effective you will be at solving the problems. Looking up a tool and learning just enough to use it JIT is much slower than using a handy tool that you already masterfully know how to use.
This is as true for physical tools as for programming concepts like algorithms and data structures. In the worst case you won’t even know to look for a tool and will use whatever is handy, like the proverbial hammer.
People have been saying that since the advent of formal education. Turns out standardized education is really hard to pull off and most systems focus on making the average good enough.
It's also hard to teach people "how to think" while at the same time teaching them practical skills - there are only so many hours in a day, and most education is set up as a way to get as many people as possible into shape for taking on jobs where "thinking" isn't really a positive trait, as it'd lead to constant restructuring and questioning of the status quo.
While there’s no reasonable way to disagree with the sentiment, I don’t think I’ve ever met anyone who can “think and decompose problems” who isn’t also widely read, and knows a lot of things.
Forcing kids to sit and memorize facts isn’t suddenly going to make them a better thinker, but much of my process of being a better thinker is something akin to sitting around and memorizing facts. (With a healthy dose of interacting substantively and curiously with said facts)
> Everyone has full access to all of the available data and information
Ahh, but this is part of the problem. Yes, they have access, but there is -so much- information, it punches through our context window. So we resort to executive summaries, or convince ourselves that something that's relevant is actually not.
At least an LLM can take a full view of the context in aggregate and pull out the signal. There is value there, but no jobs are being replaced.
I agree that an LLM is a long way from replacing most any single job held by a human in isolation. However, what I feel is missed in this discussion is that it can significantly reduce the total manpower by making humans more efficient. For instance, the job of a team of 20 can now be done by 15 or maybe even 10 depending on the class of work. I for one believe this will have a significant impact on a large number of jobs.
Not that I'm suggesting anything be "stopped". I find LLM's incredibly useful, and I'm excited about applying them to more and more of the mundane tasks that I'd rather not do in the first place, so I can spend more time solving more interesting problems.
Also, some problems don't have enough data for a solution. I had a professor that gave tests where the answer was sometimes "not solvable." Taking these tests was like sweating bullets because you were not sure if you're just too dumb to solve the problem, or there was not enough data to solve the problem. Good times!
One of my favorite things about Feynman interviews/lectures is often his responses are about how to think. Sometimes physicists ask questions in his lectures and his answer has little to do with the physics, but how they're thinking about it. I like thinking about thinking, so Feynman is soothing.
I agree with the overall message, but I will say that there is still a great deal of value in memorisation. Memorising things gives you more internal tools to think in broader chunks, so you can solve more complicated problems.
(I do mean memorisation fairly broadly, it doesn't have to mean reciting a meaningless list of items.)
Talk is cheap. Good educators cost money, and America famously underpays (and under-appreciates) its teachers. Does he also support increasing taxes on the wealthy?
Have there been studies about the abilities of different students to memorize information? I feel this is under-studied in the world of memorizing for exams.
It is tough, though. I'd like to think I learnt how to think analytically and critically. But thinking is hard, and oftentimes I catch myself trying to outsource my thinking almost subconsciously. I'll read an article on HN and think "let's go to the comment section and see what the opinions to choose from are," or one of my first instincts after encountering a problem is googling, and now asking an LLM.
Most of us are also old enough to have had a chance to develop taste in code and writing. Many of the young generation lack the experience to distinguish good writing from LLM drivel.
Wanted to chime in on the educational system. In the West, we have the "banking" system, which treats a student as a bank account and knowledge as currency, hence the attitude of dumping more info into people to make them smart.
Developing areas often actually implement more modern models, since their systems are newer and they're free to adopt newer approaches.
Those newer models focus more on exactly this: teaching a person how to go through the process of finding solutions, rather than "knowing a lot to enable the process of thinking."
Not saying which is better or worse, but reading this comment and the article reminds me of it.
A lot of people I see know tons of interesting things, but anything outside their knowledge is a complete mystery to them.
All the while, people from developing areas learn to solve issues, and a lot of individuals from there get out of poverty and do really well for themselves.
Of course, this is a generalization and doesn't hold up in all cases, but I can't help thinking about it.
A lot of my colleagues don't know how to solve problems simply because they don't RTFM. They rely on knowledge from their education, which is already outdated before they even sign up.
I try to teach them to RTFM. It seems hopeless. They look down on me because I have no papers, but if shit hits the fan, they come to me to solve the problem.
A wise guy I met once said (likely not his own words): there are two types of people, those who think in problems and those who think in solutions.
I'd relate that to education, not prebaked human properties.
My boss said we were gonna fire a bunch of people "because AI" as part of some fluff PR to pretend we were actually leaders in AI. We tried that a bit; it was a total mess and we have no clue what we're doing, so I've been sent out to walk back our comments.
Well, they're just trying to reduce headcount overall to get the expenses for AWS in better shape and work through some bloat. The "we're doing layoffs because of AI" story wasn't sticking, though, so it looks like now they're backtracking on that story line.
Most people don't notice, but there has been an inflation in headcounts over the years. This happened around the time the microservices architecture trend took over.
All of a sudden, to ensure better support and separation of concerns, people needed a team with a manager for each service. If this hadn't been the case, the industry as a whole could likely work with 40-50% fewer people eventually. That's because at any given point in time, even with a large monolithic codebase, only 10-20% of the code base is in active evolution; what that means in a microservices world is that an equivalent share of teams are sitting idle.
When I started out, huge C++ and Java code bases were pretty much the norm, and that was one of the reasons why things were hard and the barrier to entry was high. In this microservices world, things are small enough that any small group of even low-productivity employees can make things work. That is quite literally true, because smaller things that work well don't even need all that many changes on an everyday basis.
To me it's these kinds of places that are in real trouble. There is not enough work to justify keeping dozens or even hundreds of teams, their managements, and their hierarchies all on payroll for quite literally doing nothing.
It's almost an everyday song I hear, that big companies are full of hundreds or thousands of employees doing nothing.
I think sometimes the definition of work gets narrowed to a point so infinitesimal that everyone but the speaker is just a lazy nobody.
There was an excellent article on here about working at enterprise scale. My experience has been similar. You get to do work that feels really real, almost like school assignments with instant feedback and obvious rewards when you're at a small company. When I worked at big companies it all felt like bullshit until I screwed it up and a senator was interested in "Learning more" (for example).
The last few 9s are awful hard to chase down and a lot of the steps of handling edge case failures or features are extremely manual.
> In this microservices world, things are small enough that any small group of even low-productivity employees can make things work. That is quite literally true, because smaller things that work well don't even need all that many changes on an everyday basis.
You're committing the classic fallacy around microservices here. The services themselves are simpler. The whole software is not.
When you take a classic monolith and split it up into microservices that are individually simple, the complexity does not go away, it simply moves into the higher abstractions. The complexity now lives in how the microservices interact.
In reality, the barrier to entry on monoliths wasn't that high either. You could get "low productivity employees" (I'd recommend you just call them "novices" or "juniors") to do the work, it'd just be best served with tomato sauce rather than deployed to production.
The same applies to microservices. You can have inexperienced devs build out individual microservices, but to stitch them together well is hard, arguably harder than ye-olde-monolith now that Java and more recent languages have good module systems.
There are two freight trains currently smashing into each other:
1.) Elon fired 80% of Twitter and 3 years later it still hasn't collapsed or fallen into technical calamity. Every tech board/CEO took note of that.
2.) Every kid and their sister going to college who wants a middle-class life with generous working conditions is targeting tech. Every teenage nerd saw those overemployed guys making $600k from their couch during the pandemic.
On the other hand, while yes it's still running, Twitter is mostly not releasing new features and has completely devolved into the worst place on the internet. Not to mention most accounts now actually are bots, like Elon claimed they were 3 years ago.
I don't know; they may well have been on the ads and moderation side. And they did add the hangouts/voice calls stuff, but that may have been an acquisition, I'm not sure.
I hope the tech boards and CEOs don't miss the not-very-subtle point that Twitter has very quickly doubled in size in the 2 years since the big layoff and is still growing, and that they had to scramble to fix some notable mistakes they made when firing that many people. 80% was already a hugely misleading marketing number.
Also need to add that a large part of the 80% that got cut was moderation staff. So it makes sense that, after they removed too many developers, they ended up rehiring them.
Take into account that Twitter's front end, the stuff people interact with, was only like 15% of the actual code base. The rest was analytics for the data (selling data, marketing analytics for advertisers, etc.).
But as they are not reintroducing moderators, the company is "still down by 63.6% from the numbers before the mass layoffs".
So technically, Twitter is probably back to, or even bigger than, its IT staffing levels from before Musk came.
Well yeah... computers are really powerful. You don't need Docker Swarm or any other newfangled thing. Just Perl and Apache and MySQL, and you can ship to tens of millions of users before you hit scaling limits.
> If this hadn't been the case, the industry as a whole could likely work with 40-50% fewer people eventually. That's because at any given point in time, even with a large monolithic codebase, only 10-20% of the code base is in active evolution; what that means in a microservices world is that an equivalent share of teams are sitting idle.
I think it depends on the industry. In safety-critical systems, you need to be testing, writing documentation, producing architectural artifacts, meeting with customers, etc.
There's not that much idle time. Unless you mean idle time actually writing code, and that's not always a full-time job.
I think most people misunderstand the relationship between business logic, architecture and headcount.
Big businesses don’t inherently require the complexity of architecture they have. There is always a path-dependent evolution and vestigial complexity proportional to how large and fast they grew.
The real purpose of large-scale architecture is to scale teams much more so than business logic. But why does headcount grow? Is it because domains require it? Sure, that's what ambitious middle managers will say, but the real reason is that you have money to invest in growth (whether from revenue or from a VC). For any complex architecture there is usually a dramatically simpler one that could still move the essential bits around; it just might not support the same number of engineers delineated into different teams with narrower responsibilities.
The general headcount growth and architecture trajectory is therefore governed by business success. When we're growing, we hire, and we create complex architecture to chase growth in as many directions as possible. Eventually, when growth slows, we have a system so complex that it requires a lot of people just to understand and maintain; even if the headcount is no longer justified, those with power in the human structure will bend over backwards to justify themselves. This is where the playbook changes and a private equity (or Elon) mentality is applied: ruthlessly cut, and force the rest of the people to figure out how to keep the lights on.
I consider advances in AI and productivity orthogonal to all this. It will affect how people do their jobs, what is possible, and the economics of that activity, but the fundamental dynamics of scale and architectural complexity will remain. They’ll still hire more people to grow and look for ways to apply them.
It would be sad if you are correct. Your company might not be able to justify keeping dozens and hundreds of teams employed, but what happens when other companies can't justify paying dozens and hundreds of teams who are the customers buying your product? Those who gleefully downsize might well deserve the market erosion they cause.
This is blatantly incorrect. Before microservices became the norm you still had a lot of teams and hiring, but the teams would be working with the same code base and deployment pipeline. Every company that became successful and needed to scale invented their own bespoke way to do this; microservices just made it a pattern that could be repeatedly applied.
I think the starting point is that productivity per developer has been declining for a while, especially at large companies. And this leads to the "bloated" headcount.
The question is why. You mention microservices. I'm not convinced.
Many think it is "horizontals". Possibly; those taxes do add up, it is true.
Perhaps it is cultural? Perhaps it has to do with the workforce in some manner. I don't know and AFAIK it has not been rigorously studied.
He didn't actually say that. He said it's possible that within 2 years developers won't be writing much code, but he goes on to say:
"It just means that each of us has to get more in tune with what our customers need and what the actual end thing is that we're going to try to go build, because that's going to be more and more of what the work is as opposed to sitting down and actually writing code...."
If you read the full remarks they're consistent with what he says here. He says "writing code" may be a skill that's less useful, which is why it's important to hire junior devs and teach them how to learn so they learn the skills that are useful.
He is talking his book. Management thinks it adds value in the non-coding aspects of the product, such as figuring out what customers need, etc. I suggest management stay in their lane and not make claims about how coding needs to be done; leave that to the craftsmen actually coding.
Theoretically, a large part of Amazon's worth is the skill of its workforce.
Some subset of the population likes to pretend their workforce is a cost that provides less than zero value or utility, and all the value and utility comes from shareholders.
But if this isn't true, and collective skill does have value, then saying anyone can have that with AI at least puts some headwind on your share price - which is all they care about.
Does that offset a potential tailwind from slightly higher margins?
I don't think any established company should be cheerleading that anyone can easily upset their monopoly with a couple of carefully crafted prompts.
It was always kind of strange to me; it seemed as though they were telling everyone: our moat is gone, and that is good.
If you really believed anyone could do anything with AI, then the risk of PEs collapsing would be high, which would be bad for the capital class. Now you'd have to constantly guess the next best thing correctly to keep your ROI instead of just parking it in safe havens - like FAANG.
Bedrock/Q is a great example of how Amazon works. If we throw $XXX and YYY SDEs at the problem, we should be able to build GitHub Copilot, GPT-3, OpenRouter, and Cursor ourselves instead of trying to competitively acquire and attract talent. The fact that CodeWhisperer, Q, and Titan barely get spoken about on HN or Twitter tells you how successful this has been.
But if you have that perspective then the equation is simple. If S3 can make 5 XXL features per year with 20 SDEs then if we adopt “Agentic AI” we should be able to build 10 XXL features with 10 SDEs.
Little care is given to organizational knowledge, experience, vision etc. that is the value (in their mind) of leadership not ICs.
What do you mean, “Amazon doesn’t really work that way”?
Parent is talking about how C-Suite doesn’t want to trumpet something that implies their entire corporate structure is extremely disadvantaged vs new entrants and your response is “Amazon wants to build everything themselves”?
Amazon isn’t some behaviorally deterministic entity, and it could (and should?) want to both preserve goodwill and build more internally vs pay multiples to acquire.
I guess it could be that people inside are not people they have to compete with, but it doesn’t seem like that’s what you're saying.
Amazon would probably say its worth is the machinery around workers that allows it to plug in arbitrary numbers of interchangeable people and have them be productive.
That's not necessarily inconsistent though - if you need people to guide or instruct the autonomy, then you need a pipeline of people including juniors to do that. Big companies worry about the pipeline, small companies can take that subsidy and only hire senior+, no interns, etc., if they want.
There is no pipeline though. The average tenure of a junior developer even at AWS is 3 years. Everyone knows that you make less money getting promoted to an L5 (mid) than getting hired in as one. Salary compression is real. The best play is always to jump ship after 3 years. Even if you like Amazon, “boomeranging” is still the right play.
That's interesting, because that's how the consulting world works too. Start at a big firm, work for a few years, then jump to a small firm two levels above where you were. Then after two years, come back to the big firm and get hired one level up from where you left. Rinse/repeat. It's the fastest promotion path in consulting.
I went from an L5 (mid) working at AWS ProServe as a consultant (a full-time role) to, a year later (with a shitty company in between), a "staff architect" - like you said, two levels up - at a smaller cloud consulting company.
If I had any interest in ever working for BigTech again (and I would rather get an anal probe daily with a cactus), I could relatively easily get into Google’s equivalent department as a “senior” based on my connections.
It’s not necessarily “larger”, so much as different units. In a big company, the hiring budget is measured in headcount, but the promotion budget is measured in dollar percentage. It’s much easier to add $20k salary to get a hire done than to give that same person a $20k bump the following year.
Right but I'm asking why that is, structurally. It seems to be a budgeting thing on the companies pov or a hope that by limiting promotions you'll get some employees underpaid and not leaving?
The original poster you are replying to was answering an orthogonal but related question, and both things are true.
1. It is easier to make more money by being hired in than by being promoted - or even than by simply being kept at market rate for doing your current job. I addressed that in a sibling reply.
2. It’s easier to come in at a higher level than to be promoted to a higher level. To get “promoted” at BigTech there is a committee, promo docs where you have to document how you have already been working at that level and your past reviews are taken into account.
To come in at that level, you control the narrative and only have to pass 5-6 rounds of technical and behavioral interviews.
If I came into my current company at a level below staff, it would have taken a couple of years to be promoted to my current staff position (equivalent to a senior at AWS) and a few successful projects. All I had to do was interview well and tell the stories I wanted to tell about my achievements over the past 4 years. I didn’t have to speak on failures.
It’s a lot cheaper to replace the occasional employee who leaves at market rate than to pay all of your developers at market rate. Many are going to stick around because of inertia, their inability to interview well, the golden handcuffs of RSUs, not wanting to rebuild social capital at another company, or a naive belief in the “mission”, “passion”, etc.
But that's fine, that's why I say for big companies - the pipeline is the entire industry, everyone potentially in the job market, not just those currently at AWS. Companies like Amazon have a large enough work force to care that there's people coming up even if they don't work there yet (or never do, but by working elsewhere free someone else to work at AWS).
They have an interest in getting those grads turned into would-be-L5s even if they leave for a different company. If they 'boomerang back' at L7 that's great. They can't if they never got a grad job.
> Amazon Web Services CEO Matt Garman claims that in 2 years coding by humans won't really be a thing, and it will all be done by networks of AI's who are far smarter, cheaper, and more reliable than human coders.
Unless this guy speaks exclusively in riddles, this seems incredibly inconsistent.
There's definitely a vibe shift underway. C-Suites are seeing that AI as a drop-in replacement for engineers is a lot farther off than initial hype suggested. They know that they'll need to attract good engineers if they want to stay competitive and that it's probably a bad idea to scare off your staff with saying that they'll be made irrelevant.
I'm not sure those are mutually exclusive? Modern coders don't touch Assembly or deal with memory directly anymore. It's entirely possible that AI leads to a world where typing code by hand is dramatically reduced too (it already has in a few domains and company sizes)
He was right tho. AI is doing all the coding. That doesn’t mean you fire junior staff. Both can be true at once - you need juniors, and pretty much all code these days is AI-generated.
He should face consequences for his cargo-cult thinking in the first place. The C-Suite isn't "getting" anything. They are simply bending like reeds in today's winds.
Might want to clarify things with your boss who says otherwise [1]? I do wish journalists would stop quoting these people unedited. No one knows what will actually happen.
I'm not sure those statements are in conflict with each other.
“My view is you absolutely want to keep hiring kids out of college and teaching them the right ways to go build software and decompose problems and think about it, just as much as you ever have.” - Matt Garman
"We will need fewer people doing some of the jobs that are being done today” - Amazon CEO Andy Jassy
If you're quoting something, the only ethical thing to do is as verbatim as possible and with a sufficient amount of context. Speeches should not be cleaned up to what you think they should have said.
Now, the question of who you go to for quotes, on the other hand .. that's how issues are really pushed around the frame.
By unedited I mean, take the message literally and quote it to support a narrative that isn’t clear or consistent. (even internally among Amazon leadership)
I very much believe that anything AWS says on the corporate level is bullshit.
That's from the perspective of a former employee. I knew that going in, though: I was 46 at the time, AWS was my 8th job, and knowing AWS’s reputation from second- and third-hand information, I didn’t even entertain an opportunity that would have forced me to relocate.
I interviewed for a “field by design” role that was “permanently remote” [sic].
But even those positions had an RTO mandate after I already left.
There's what AWS leadership says and then there's what actually sticks.
There's an endless series of one pagers with this idea or that idea, but from what I witnessed first hand, the ones that stuck were the ones that made money.
Jassy was a decent guy when I was there, but that was a decade ago. A CEO is a PR machine more than anything else, and the AI hype train has been so strong that if you do anything other than saying AI is the truth, the light and the way, you lose market share to competitors.
AI, much like automation in general, does allow fewer people to do more, but in my experience, customer desires expand to fill a vacuum and if fewer people can do more, they'll want more to the point that they'll keep on hiring more and more people.
ChatGPT is better than any junior developer I’ve ever worked with. Junior devs have always been a net negative for the first year or so.
As a person who is responsible for delivering projects, I’ve never thought “it sure would be nice if I had a few junior devs”. Why would I, when I can poach an underpaid mid-level developer for 20% more?
I've never had a junior dev be a "net negative." Maybe you're just not supervising or mentoring them at all? The first thing I tell all new hires under me is that their job is to solve more problems than they create, and so far it's worked out.
I just “wrote” 2000 lines of code for a project between Node for the AWS CDK and Python using the AWS SDK (Boto3). Between both, ChatGPT needed to “know” the correct API for 12 services, SQL and HTML (for a static report). The only thing it got wrong with a one shot approach was a specific Bedrock message payload for a specific LLM model. That was even just a matter of saying “verify the payload on the web using the official docs”.
Yes, it was just as well structured as what I - someone who has been coding as a hobby or professionally for four decades - would have written.
That's great for you. I ask Sonnet 4 to make a migration and a form in Laravel Filament, and it regularly shits itself. I'm curious what those 12 services were, they must've had unchanging, well documented APIs.
That’s the advantage of working with AWS services, everything is well documented with plenty of official and unofficial code showing how to do most things.
Even for a service I know is new, I can just tell it to “look up the official documentation”
Using ChatGPT 5 Fast
AWS CDK apps (separate ones) using Node
- EC2 (create an instance)
- Aurora MySQL Serverless v2
- Create a VPC with no internet access - the EC2 instance was used as a jump box using Session Manager
- VPC Endpoints for Aurora control plane, SNS, S3, DDB, Bedrock, SQS, Session Manager
- Lambda including using the Docker lambda builder
- DDB
- it also created the proper narrowly scoped IAM permissions for the Lambdas (I told it the services the Lambdas cared about)
The various Lambdas in Python using Boto3
- Bedrock including the Converse and Invoke APIs for the Nova and Anthropic families
- knowing how to process SQS Messages coming in as events
- MySQL flavored SQL for Upserts
- DDB reads
In another project the services were similar with the addition of Amazon Transcribe.
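To make the Lambda-plus-Bedrock piece above concrete, here is a minimal sketch of what such a handler can look like. This is an illustration only, not the commenter's actual code: the model ID, table name, environment variables, and payload shape are assumptions.

    import json
    import os
    import boto3

    # Illustrative configuration; the real project's identifiers were not shared.
    MODEL_ID = os.environ.get("MODEL_ID", "anthropic.claude-3-haiku-20240307-v1:0")
    TABLE_NAME = os.environ.get("TABLE_NAME", "results")

    bedrock = boto3.client("bedrock-runtime")
    table = boto3.resource("dynamodb").Table(TABLE_NAME)

    def handler(event, context):
        # Lambda delivers SQS messages as a batch under event["Records"].
        for record in event["Records"]:
            payload = json.loads(record["body"])
            # Converse is the model-agnostic chat API in bedrock-runtime.
            response = bedrock.converse(
                modelId=MODEL_ID,
                messages=[{"role": "user", "content": [{"text": payload["prompt"]}]}],
                inferenceConfig={"maxTokens": 512, "temperature": 0.2},
            )
            answer = response["output"]["message"]["content"][0]["text"]
            # Persist the result; a narrowly scoped policy only needs the Bedrock
            # invoke permission plus dynamodb:PutItem on this one table.
            table.put_item(Item={"id": record["messageId"], "answer": answer})

The Invoke API mentioned in the list takes a model-specific JSON body instead, which is exactly the kind of payload detail the commenter says the model initially got wrong.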
The difference is probably that I only do greenfield POC implementations, as the sole developer/cloud architect on a project, if I am doing hands-on-keyboard work.
The other part of my job is leading larger projects where I purposefully don’t commit to pulling stories off the board since I’m always in meetings with customers, project managers, sales or helping other engineers.
I might even then do a separate POC as a research project/enablement. But it won’t be modifying existing code that I didn’t design.
Truly depends on the organization and systems. I’m at a small firm with too few Senior staff, lots of fire-fighting going on among us, etc. We have loads of low-hanging fruit for our Juniors so we tend to have very quick results after an initial onboarding.
The most impressive folks I've worked with are almost always straight out of school. It's before they've developed confidence about their skills and realized they can be more successful by starting their own business. The "promoted three times in just 5 years" sort of good.
Did their project manager and/or team lead think when they were hired “they are really going to be a great asset to my team and are going to help me complete my sprint/quarterly goals”?
When I ask for additional headcount, I’m looking at the next quarter since that’s what my manager is judging me based on.
I’m a great mentor when given the time. Two former interns for whom I was their official mentor during my time at AWS got return offers and are thriving two years after I left. I threw one in front of a customer to lead the project within three months after they came back after graduating. They were able to come through technically and had the soft skills. I told them my training approach is to “throw them at the bus. But never under the bus.”
I’m also a great teacher. That’s been my $DayJob for the past decade: first at startups, bringing in new-to-the-company processes and technologies, leading initiatives, teaching other developers, working with sales, CxOs (at smaller companies), and directors, and explaining large “organizational transformation” proposals; then doing the same in cloud consulting, first at AWS (a full-time ProServe role) and now as a full-time staff architect at a third-party consulting company.
But when I have been responsible for delivery, I only hire people who have experience “dealing with ambiguity” and show that I can give them a decently complicated problem and they can take the ball and run with it and make decent decisions and do research. I don’t even do coding interviews - when I interview it’s strictly behavioral and talking through their past projects, decision making processes, how they overcame challenges etc.
In terms of AWS LPs, it’s “Taking Ownership” (yeah quoting Amazon LPs made me throw up a little).
My evaluations are based on quarterly goals and quarterly deliverables. No one at a corporation cares about anything above how it affects them.
Bringing junior developers up to speed just for them to jump ship within three years or less doesn’t benefit anyone at the corporate level. Sure, they jump ship because of salary compression and inversion, where internal raises don’t correspond to market rates. Even first-level managers don’t have the say-so or budget to affect that.
This is true even for BigTech companies. A former intern I mentored, who got a return offer a year before I left AWS, just got promoted to an L5, and their comp package was 20% less than new hires coming in at an L5.
Everyone will be long gone from the company if not completely retired by the time that happens.
> Bringing junior developers up to speed just for them to jump ship within three years or less doesn’t benefit anyone at the corporate level.
What? Of course it does. If that's happening everywhere, that means other companies' juniors are also jumping ship to come work for you while yours jump ship to work elsewhere. The only companies that don't see a benefit from mentoring new talent are those with substandard compensation.
That’s true, but why should I take on the work of being at the beginning of the pipeline instead of hiring a mid level developer. My incentives are to meet my quarterly goals and show “impact”.
To a first approximation, no company pays internal employees at market rates in an increasing comp environment after a couple of years, especially during the first few years of an employee’s career, when their market rate rapidly increases once they get real-world experience.
On the other hand, the startup I worked for pre-AWS with 60 people couldn’t, wouldn’t and shouldn’t have paid me the amount I made when I got hired at AWS.
> That’s true, but why should I take on the work of being at the beginning of the pipeline instead of hiring a mid level developer.
Nominally, for the same reason that you pay taxes for upkeep on the roads and power lines. Because everyone capable needs to contribute to the infrastructure or it will degrade and eventually fail.
> My incentives are to meet my quarterly goals and show “impact”.
To me, that speaks of mismanagement - a poorly run company that is a leech on the economy and workforce. In contrast, as a senior level engineer at a large technology company that has remarkably low turnover, one of my core duties is to help enhance the capabilities of other coworkers and that includes mentorship. This is because our leadership understands that it adds workforce retention value.
> To a first approximation, no company pays internal employees at market rates in an increasing comp environment after a couple of years, especially during the first few years of an employee’s career, when their market rate rapidly increases once they get real-world experience.
That's why I mentioned it being a cross-industry symbiotic relationship. Your company may not retain the juniors that you help train, but the mid level engineers you hire are the juniors that someone else helped train. If you risk not mentoring juniors, you encourage other companies to do the same and reduce the pool of qualified mid level engineers available to you in the future.
> On the other hand, the startup I worked for pre-AWS with 60 people couldn’t, wouldn’t and shouldn’t have paid me the amount I made when I got hired at AWS.
While unrelated to my point, I do have a different experience that you may find interesting in that the most exorbitant salary I have ever been paid was as a contractor for a 12-person startup, not at the organizations with development teams in the hundreds or thousands.
> Nominally, for the same reason that you pay taxes for upkeep on the roads and power lines. Because everyone capable needs to contribute to the infrastructure or it will degrade and eventually fail.
On the government level, I agree. I’m far from a “taxation is theft” Libertarian.
But I also have an addiction to food and shelter. The only entity capable of that kind of collective action that is good for society is the government. My goal (and I’m generalizing myself as any rational actor) is to do what is necessary to exchange labor for money by aligning my actions with the corporation’s incentives so that it continues to put money in my bank account and (formerly) vested RSUs in my brokerage account.
> To me, that speaks of mismanagement - a poorly run company that is a leech on the economy and workforce. In contrast, as a senior level engineer at a large technology company that has remarkably low turnover, one of my core duties is to help enhance the capabilities of other coworkers and that includes mentorship
The only large tech company I’ve worked for has a leadership principle, “Hire and Develop the Best”. But for an IC, it’s mostly bullshit. That doesn’t show up on your promo doc when it’s time to show “impact” or how it relates to the team’s “OKRs”.
From talking to people at Google, it’s the same. But of course Amazon can afford to have dead weight. When I have one shot at a new hire that is going to help me finish my quarterly goals as a team lead, I’m not going to hire a junior and put more work on myself.
I’m an IC, but in the org chart, I’m at the same level as a front line manager.
> While unrelated to my point, I do have a different experience that you may find interesting in that the most exorbitant salary I have ever been paid was as a contractor for a 12-person startup, not at the organizations with development teams in the hundreds or thousands.
As a billable consultant at AWS (and now outside of AWS) because of scale, I brought a lot more money into AWS than anything I could have done at the startup.
That’s why I said the startup “shouldn’t” have paid me the same close to 1 million over four years that AWS offered me in cash and RSUs. It would have been irresponsible and detrimental to the company. I couldn’t bring that much value to the startup.
> Garman is also not keen on another idea about AI – measuring its value by what percentage of code it contributes at an organization.
You really want to believe, maybe even need to believe, that anyone who comes up with this idea in their head has never written a single line of code in their life.
It is on its face absurd. And yet I don't doubt for a second that Garman et al. have to fend off legions of hacks who froth at the mouth over this kind of thing.
> "Measuring software productivity by lines of code is like measuring progress on an airplane by how much it weighs." -- Bill Gates
Do we reward the employee who has added the most weight? Do we celebrate when the AI has added a lot of weight?
At first, it seems like, no, we shouldn't, but actually, it depends. If a person or AI is adding a lot of weight, but it is really important weight, like the engines or the main structure of the plane, then yeah, even though it adds a lot of weight, it's still doing genuinely impressive work. A heavy airplane is more impressive than a light weight one (usually).
I just can’t help myself when airplanes come up in discussion.
I completely understand your analogy and you are right. However, just to nitpick, it is actually super important to have the weight on the airplane in the right place. You have to make sure that your aeroplane does not become tail-heavy, or it is not recoverable from a stall. Also, a heavier aeroplane, within its gross weight limits, is actually safer, as the safe manoeuvring speed increases with weight.
If someone adds more code to the wrong places for the sake of adding more code, the software may not be recoverable for future changes or from bugs. You also often need to add code in the right places for robustness.
Just to nitpick your nitpick: that’s only true up to a point, and the range of safe weights isn’t all that big really - max payload on most planes is a fraction of the empty weight. And planes can be overweight; reducing weight is a good thing, and perhaps needed far more often than adding weight is. The point of the analogy was that over a certain weight, the plane doesn’t fly at all. If progress on a plane is safety, stability, or speed, we can measure those things directly. If weight distribution is important to those, that’s great: we can measure weight and distribution in service of stability, but weight isn’t the primary thing we use.
Like with airplane weight, you absolutely need some code to get something done, and sometimes more is better. But is more better as a rule? Absolutely not.
Right, that's why it's a great analogy - because you also need to have at least some code in a successful piece of software. But simply measuring by the amount of code leads to weird and perverse incentives - code added without thought is not good, and too much code can itself be a problem. Of course, the literal balancing aspect isn't as important.
This is a pretty narrow take on aviation safety. A heavier airplane has a higher stall speed, more energy for the brakes to dissipate, longer takeoff/landing distances, a worse climb rate… I’ll happily sacrifice maneuvering speed for better takeoff/landing/climb performance.
Again, just nitpicking, but if you have the right approach speed, and not doing a super short field landing, you need very little wheel brake if any. ;)
Sure, as long as you stick to flying light aircraft on runways designed for commercial air transport. I would also recommend thinking about how you would control speed on a long downhill taxi with a tailwind, even if you didn’t need brakes on landing.
> the safe manoeuvring speed increases with weight
The reason this is true is because at a higher weight, you'll stall at max deflection before you can put enough stress on the airframe to be a problem. That is to say, at a given speed a heavier airplane will fall out of the air [hyperbole, it will merely stall - significantly reduced lift] before it can rip the wings/elevator off [hyperbole - damage the airframe]. That makes it questionable whether heavier is safer - just changes the failure mode.
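For anyone curious why that holds, the standard textbook relations (my addition, not from the thread, and simplified to ignore configuration and CG effects) are:

    % 1-g stall speed and maneuvering speed as functions of weight W
    V_s = \sqrt{\frac{2W}{\rho\, S\, C_{L,\max}}}, \qquad V_A = V_s\,\sqrt{n_{\mathrm{limit}}}

Since V_s grows with the square root of W, so does V_A: at a given speed, the heavier airplane reaches C_L,max (stalls) at a lower load factor n = L/W, before the structural limit can be exceeded - which is exactly the "stalls before it breaks" behavior described above.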
> That is to say, at a given speed a heavier airplane will fall out of the air [hyperbole, it will merely stall - significantly reduced lift] before it can rip the wings/elevator off [hyperbole - damage the airframe]
Turbulence, especially the turbulence generated by thunderstorms or close to them, is exactly where that failure mode matters.
Progress on airplanes is often tracked by # of engineering drawings released, which means that 1000s of little clips, brackets, fittings, etc. can sometimes misrepresent the amount of engineering work that has taken place compared to preparing a giant monolithic bulkhead or spar for release. I have actually proposed measuring progress by part weight instead of count to my PMs for this reason
It’s an analogy that gets the job done and is targeted at non-tech managers.
It’s not perfect. Dead code has no “weight” unless you’re in a heavily storage-constrained environment. But 10,000 unnecessary rivets has an effect on the airplane everywhere, all the time.
Assuming it is truly dead and not executable (which someone would have to verify is & remains the case), dead code exerts a pressure on every human engineer who has to read (around) it, determine that it is still dead, etc. It also creates risk that it will be inadvertently activated and create e.g. security exposure.
Yes, we all love pedantry around here (that’s probably 99% of the reason I wrote the original comment!)
But if your position is that the percentage of time in the software lifecycle that dead code has a negative effect on a system is anywhere close to the percentage of time in an aircraft lifecycle that extra non-functional rivets (or other unnecessary weight objects) has a negative effect on the aircraft, you’re just wrong.
It's still directionally accurate though. Dead code has a weight that must be paid. Sometimes the best commits are the ones where you delete a ton of lines.
In this analogy, I'd say dead code corresponds to airplane parts that aren't actually installed on the aircraft. When people talk about the folly of measuring productivity in lines of code, they aren't referring to the uselessness of dead code, they're referring to the harms that come from live code that's way bigger than it needs to be.
This reminds me of a piece on folklore.org by Andy Hertzfeld[0], regarding Bill Atkinson. A "KPI" was introduced at Apple in which engineers were required to report how many lines of code they had written over the week. Bill (allegedly) claimed "-2000" (a completely, astonishingly negative report), and supposedly the managers reconsidered the validity of the "KPI" and stopped using it.
I don't know how true this is in fact, but I do know how true this is in my work - you cannot apply some arbitrary "make the number bigger" goal to everything and expect it to improve anything. It feels a bit weird seeing "write more lines of code" becoming a key metric again. It never worked, and is damn-near provably never going to work. The value of source code is not in any way tied to its quantity, but value still proves hard to quantify, 40 years later.
Given the way that a lot of AI coding actually works, it’s like asking what percent of code was written by hitting tab to autocomplete (intellisense) or what percent of a document benefited from spellcheck.
While most of us know the next word guessing is how it works in reality…
That sentiment ignores the magic of how well this works. There are mind blowing moments using AI coding, to pretend that it’s “just auto correct and tab complete” is just as deceiving as “you can vibe code complete programs”.
I want to have the model re-write patent applications, and if any portion of your patent filing was replicated by it your patent is denied as obvious and derivative.
"...just raised a $20M Series B and are looking to expand the team and products offered. We are fully bought-in to generative AI — over 40% of our codebase is built and maintained by AI, and we expect this number to continue to grow as the tech evolves and the space matures."
"What does your availability over the next couple of weeks look like to chat about this opportunity?"
"Yeah, quite busy over the next couple of weeks actually… the next couple of decades, really - awful how quickly time fills by itself these days, right? I'd have contributed towards lowering that 40% number which seems contrary to your goals anyway. But here's my card, should you need help with debugging something tricky some time in the near future and nobody manages to figure it out internally. I may be able to make room for you if you can afford it. I might be VERY busy though."
Something I wonder about the percent of code - I remember like 5-10 years ago there was a series of articles about Google generating a lot of their code programmatically, I wonder if they just adapted their code gen to AI.
I bet Google has a lot of tools to, say, convert a library from one language to another or generate a library based on an API spec. The 30% of code these LLMs are supposedly writing is probably in this camp, not net-new novel features.
Some exceptions occur, like people getting tenure without a postdoc, or doing other unusual things like finishing their undergraduate degree in one or two years. But no one expects that we can skip the first two stages entirely and still end up with senior researchers.
The same idea applies anywhere: the rule is that if you don't have juniors, then you don't get seniors, so you'd better prepare your bot to do everything.
As always, the truth is somewhere in the middle. AI is not going to replace everyone tomorrow, but I also don't think we can ignore productivity improvements from AI. It's not going to replace engineers completely now or in the near future, but AI will probably reduce the number of engineers needed to solve a problem.
I do not agree. It was not even worth it without LLMs.
Juniors will always take a LOT of time from seniors, and when a junior becomes good enough, they will find another job, and the senior will be stuck in this loop.
A junior plus an LLM is even worse: they just become prompt engineers.
I'm a technical co-founder rapidly building a software product. I've been coding since 2006. We have every incentive to have AI just build our product. But it can't. I keep trying to get it to...but it can't. Oh, it tries, but the code it writes is often overly complex and overly-verbose. I started out being amazed at the way it could solve problems, but that's because I gave it small, bounded, well-defined problems. But as expectations with agentic coding rose, I gave it more abstract problems and it quickly hit the ceiling. As was said, the engineering task is identifying the problem and decomposing it. I'd love to hear from someone who's used agentic coding with more success. So far I've tried Co-pilot, Windsurf, and Alex sidebar for Xcode projects. The most success I have is via a direct question with details to Gemini in the browser, usually a variant of "write a function to do X"
> As was said, the engineering task is identifying the problem and decomposing it.
In my experience if you do this and break the problem down into small pieces, the AI can implement the pieces for you.
It can save a lot of time typing and googling for docs.
That said, once the result exceeds a certain level of complexity, you can't really ask it to implement changes to existing code anymore, since it stops understanding it.
At which point you now have to do it yourself, but you know the codebase less well than if you'd hand written it.
So, my upshot is so far that it works great for small projects and for prototyping, but the gain after a certain level of complexity is probably quite small.
But then, I've also found quite a lot of value in using it as a code search engine and to answer questions about the code, so maybe if nothing else that would be where the benefit comes from.
> At which point you now have to do it yourself, but you know the codebase less well than if you'd hand written it.
Appreciate you saying this because it is my biggest gripe in these conversations. Even if it makes me faster I now have to put time into reading the code multiple times because I have to internalize it.
Since the code I merge into production "is still my responsibility" as the HN comments go, then I need to really read and think more deeply about what AI wrote as opposed to reading a teammate's PR code. In my case that is slower than the 20% speedup I get by applying AI to problems.
I'm sure I can get even more speed if I improve prompts, when I use the AI, agentic vs non-agentic, etc. but I just don't think the ceiling is high enough yet. Plus I am someone who seems more prone to AI making me lazier than others so I just need to schedule when I use it and make that time as minimal as possible.
Are we trying to guilt trip corporations to do socially responsible thing regarding young workers skill acquisition?
Haven't we learned that it almost always ends up in hollow PR and marketing theater?
Basically the solution to this is extending education so that people entering workforce are already at senior level. Of course this can't be financed by the students, because their careers get shortened by longer education. So we need higher taxes on the entities that reap the new spoils. Namely those corporations that now can pass on hiring junior employees.
Makes sense. Instead of replacing junior staff, they should be trained to use AI to get more done in less time. In next 2-3 years they will be experts doing good work with high productivity.
Two things that will hurt us in the long run, working from home and AI. I'm generally in favour of both, but with newbies it hurts them as they are not spending enough face to face time with seniors to learn on the job.
And AI will hurt them in their own development and with it taking over the tasks they would normally cut their teeth on.
We'll have to find newer ways of helping the younger generation get in the door.
A weekly 1-hour call for pair programming or exploring an ongoing issue or technical idea would be enough to replace face-to-face time with seniors. This has been working great for us at a multi-billion-dollar, profitable public company that's been fully remote.
I would argue that just being in the office or not using AI doesn't guarantee any better learning of younger generations. Without proper guidance a junior would still struggle regardless of their location or AI pilot.
The challenge now is for companies, managers and mentors to adapt to more remote and AI assisted learning. If a junior can be taught that it's okay to reach out (and be given ample opportunities to do so), as well as how to productively use AI to explain concepts that they may feel too scared to ask because they're "basics", then I don't see why this would hurt in the long run.
Junior staff will be necessary but you'll have to defend them from the bean-counters.
You need people who can validate LLM-generated code. It takes people with testing and architecture expertise to do so. You only get those things by having humans get expertise through experience.
>teach “how do you think and how do you decompose problems”
That's rich coming from AWS!
I think he meant "how do you think about adding unnecessary complexity to problems such that it can enable the maximum amount of meetings, design docs and promo packages for years to come"!
A lot of companies that have stopped hiring junior employees are going to be really hurting in a couple of years, once all of their seniors have left and they have no replacements trained and ready to go.
If AI is so great and has PhD-level skills (Musk), then logic says you should be replacing all of your _senior_ developers. That is not the conclusion they reached, which implies that the coding ability is not that hot.
Q.E.D.
Finally someone from a top position said this. After all the trash the CEOs have been spewing and sensationalizing every AI improvement, for a change, a person in a non-engineering role speaks the truth.
Unfortunately, this is the kind of view that is at once completely correct and anathema to private equity because they can squeeze a next quarter return by firing a chunk of the labor force.
Yesterday, I was asked to scrape data from a website. My friend used ChatGPT to scrape the data but didn't succeed even after spending 3+ hours. I looked at the website's code, understood it with my web knowledge, and did some research with an LLM. Then I described to the LLM how to scrape the data, and it took 30 minutes overall. The LLM can't come up with the best approach on its own, but you can build it by using the LLM. It's the same as always: at the end of the day you need someone who can really think.
LLMs can do anything, but the decision tree for what you can do in life is almost infinite. LLMs still need a coherent designer to make progress towards a goal.
It is not that easy: there is lazy loading on the page that is triggered by scrolling specific sections. You need to find a clever way; there's no way to scrape it with bs4 alone, and it's tough even with Selenium.
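For what it's worth, the usual workaround for scroll-triggered lazy loading is to drive the scrolling yourself (or find the underlying XHR/JSON endpoint in the browser dev tools) and only then hand the rendered HTML to bs4. A rough sketch of the Selenium route; the URL and CSS selectors below are placeholders, not the actual site:

    import time
    from bs4 import BeautifulSoup
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    driver.get("https://example.com/listing")  # placeholder URL

    # Scroll each lazily loaded section into view so its content actually renders.
    for section in driver.find_elements(By.CSS_SELECTOR, ".lazy-section"):  # placeholder selector
        driver.execute_script("arguments[0].scrollIntoView();", section)
        time.sleep(1)  # crude; an explicit wait on a child element is more robust

    soup = BeautifulSoup(driver.page_source, "html.parser")
    items = [el.get_text(strip=True) for el in soup.select(".lazy-section .item")]
    driver.quit()
    print(items)

Describing that loading behavior to the LLM, as the parent comment did, is usually the missing ingredient: the model can write this boilerplate fine once a human has identified what actually triggers the load.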
The point is that nobody has figured out how much AI can replace humans. There is so much hype out there, with every tech celebrity sharing their opinions without the responsibility of owning them. We have to wait and see, and we can change course when we know the reality. Until then, do what we know well.
Perhaps I'm too cynical about messages coming out of FAANG. But I have a feeling they are saying things to placate the rising anger over mass layoffs, h1b abuse, and offshoring. I hope I'm wrong.
It is too late; it is already happening. The evolution of the tech field is toward people being more experienced, not toward AI. But AI will be there for questions and easy one-liners, properly formalized documentation, even TL;DRs.
The cost of not hiring and training juniors is trying to retain your seniors while continuously resetting expectations with them about how they are the only human accountable for more and more stuff.
LLMs are actually -the worst- at doing very specific repetitive things. It'd be much more appropriate for one to replace the CEO (the generalist) rather than junior staff.
junior engineers aren't hired to get tons of work done; they're hired to learn, grow, and eventually become senior engineers. ai can't replace that, but only help it happen faster (in theory anyway).
No one's getting replaced, but you may not hire that new person that otherwise would have been needed. Five years ago, you would have hired a junior to crank out UI components, or well specc'd CRUD endpoints for some big new feature initiative. Now you probably won't.
I’m really tired of this trope. I’ve spent my whole career on “boring CRUD” and the number of relational db backed apps I’ve seen written by devs who’ve never heard of isolation levels is concerning (including myself for a time).
Coincidentally, those problems show up as soon as these apps see any scale.
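To make "isolation levels" concrete for anyone who hasn't been bitten yet: the classic failure in "boring CRUD" is a read-modify-write that two requests interleave, so one update silently overwrites the other. A minimal sketch, assuming Postgres, psycopg2, and a hypothetical accounts(id, balance) table:

    import psycopg2

    def add_funds_naive(dsn, account_id, amount):
        # Under the default READ COMMITTED level, two concurrent calls can both
        # read the same balance; the later UPDATE then clobbers the earlier one.
        with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
            cur.execute("SELECT balance FROM accounts WHERE id = %s", (account_id,))
            (balance,) = cur.fetchone()
            cur.execute("UPDATE accounts SET balance = %s WHERE id = %s",
                        (balance + amount, account_id))

    def add_funds_safer(dsn, account_id, amount):
        # Locking the row for the transaction (or running at SERIALIZABLE and
        # retrying on serialization failures) closes that lost-update window.
        with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
            cur.execute("SELECT balance FROM accounts WHERE id = %s FOR UPDATE",
                        (account_id,))
            (balance,) = cur.fetchone()
            cur.execute("UPDATE accounts SET balance = %s WHERE id = %s",
                        (balance + amount, account_id))

None of this shows up in a demo, which is why it surfaces exactly when the app first sees real concurrency.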
On the other hand, that extra money can be used to expand the business in other ways, plus most kids coming out of college these days are going to be experts in getting jobs done with AI (although they will need a lot of training in writing actual secure and maintainable code).
Even the highest ranking engineers should be experts. I don’t understand why there’s this focus on juniors as the people who know AI best.
Using AI isn’t rocket science. Like you’re talking about using AI as if typing a prompt in English is some kind of hard to learn skill. Do you know English? Check. Can you give instructions? Check. Can you clarify instructions? Check.
> I don’t understand why there’s this focus on juniors as the people who know AI best.
Because junior engineers have no problem with wholeheartedly embracing AI - they don't have enough experience to know what doesn't work yet.
In my personal experience, engineers who have experience are much more hesitant to embrace AI and learn everything about it, because they've seen that there are no magic bullets out there. Or they're just set in their ways.
To management that's AI obsessed, they want those juniors over anyone that would say "Maybe AI isn't everything it's cracked up to be." And it really, really helps that junior engineers are the cheapest to hire.
Sure. First line tech support as well. In many situations customers will get vastly superior service if AI agent answers the call.
At least in my personal case, struggling with renewal at Virgin Broadband, multiple humans wasted probably an hour of everyone's time overall on the phone bouncing me around departments, unable to comprehend my request, trying to upsell and pitch irrelevant services, applying contextually inappropriate talking scripts while never approaching what I was asking them in the first place. Giving up on those brainless meat bags and engaging with their chat bot, I was able to resolve what I needed in 10 minutes.
In India most of the banks now have apps that do nearly all the banking you can do by visiting a branch personally. To that extent this future is already here.
When I had to close my loan and visit a branch a few times, the manager told me that a significant portion of his people's time now goes into actual banking - which, according to him, means selling products (fixed deposits, insurance, credit cards) - and not customer support (which the bank thinks is not its job and only does because there is currently no alternative to it).
> Sure. First line tech support as well. In many situations customers will get vastly superior service if AI agent answers the call.
In IT, if, at a minimum, AI would triage the problem intelligently (and not sound like a bot while doing it), that would save my more expensive engineers a lot more time.
This is mostly because CS folks are given such sales and retention targets; and while I’ve never encountered a helpful support bot even in the age of LLMs, I presume in your case the company management was just happy to have a support bot talking to people without said metrics.
Again, you assume those people have a choice. You should really look into how people in these jobs are pressured to reach quotas and are abused in many ways. A simple search on Reddit turns up plenty of reports about it.
You always have a choice. These people aren't forced to work there. And they also have the ability to go whistleblower and leak internal docs that instruct them to abuse customers. Just as an example.
I know I would. If someone gives you a choice A or B that both screw you over, there's always an option Z somewhere. It might be so outrageous they don't expect it but it's there.
However usually it isn't necessary. I've been put in situations where I had to do something unethical. I've refused. And every time that choice was respected. Only if I'd have been punished for it would I have considered more severe options like the whistle option.
But really if you take a hard stand and have good reasons, reality tends to bend a bit further than I expected.
And yes I know what these jobs are like. I have worked in that industry a long time. I've seen both very good and very terrible employers.
And yeah customers can also be little shits but I've learned to disconnect from that very quickly. It's easier when they're on the other side of the phone. It doesn't help them anyway. And sometimes (especially if they're not just a dick but they have a genuine reason to be angry) there's ways to flip them around, in which case that energy might be harnessed and they can become your strongest ally. Another thing I've seen that I didn't expect.
Claude Code is better than a junior programmer by a lot, and these guys think it only gets better from there, and they have people with decades in the industry to burn through before they have to worry about retraining a new crop.
> “My view is you absolutely want to keep hiring kids out of college and teaching them the right ways to go build software and decompose problems and think about it, just as much as you ever have.”
Instead you should replace senior staff who make way more.
LLM defenders with the "yOu cAn't cRiTiCiZe iT WiThOuT MeNtIoNiNg tHe mOdEl aNd vErSiOn, It mUsT Be a lAnGuAgE LiMiTaItOn" crack me up. I used code generation out of curiosity once, for a very simple script, and it fucked it up so badly I was laughing.
Please tell me which software you are building with AI so I can avoid it.
I mean I used Copilot / JetBrains etc. to work on my code base but for large scale changes it did so much damage that it took me days to fix it and actually slowed me down. These systems are just like juniors in their capabilities, actually worse because junior developers are still people and able to think and interact with you coherently over days or weeks or months, these models aren’t even at that level I think.
> “Often times fewer lines of code is way better than more lines of code,” he observed. “So I'm never really sure why that's the exciting metric that people like to brag about.”
I remember someone that had a .sig that I loved (Can't remember where. If he's here, kudos!):
> I hate code, and want as little of it in my programs as possible.
Rather than AI that functions as many junior coders to enable a senior programmer to be more efficient, having AI function as a senior programmer for lots of junior programmers - one that helps them learn and limits the interruptions for human senior coders - makes so much more sense.
It's refreshing to finally see CEOs and other business leaders coming around to what experienced, skeptical engineers have been saying for this entire hype cycle.
I assumed it would happen at some point, but I am relieved that the change in sentiment has started before the bubble pops - maybe this will lessen the economic impact.
Yeah, the whole AI thing has very unpleasant similarities to the dot com bubble that burst to the massive detriment of the careers of the people that were working back then.
The parallels in how industry members talk about it is similar as well. No one denies that the internet boom was important and impactful, but it's also undeniable that companies wasted unfathomable amounts of cash for no return at the cost of worker well being.
Time will show you are right and almost all the other dumbasses here at HN are wrong. Which is hardly surprising since they are incapable of applying themselves around their coming replacement.
They are 100% engineers and 100% engineers have no ability to adapt to other professions. Coding is dead, if you think otherwise then I hope you are right! But I doubt it because so far I have only heard arguments to the counter that are obviously wrong and retarded.
> “I think the skills that should be emphasized are how do you think for yourself? How do you develop critical reasoning for solving problems? How do you develop creativity? How do you develop a learning mindset that you're going to go learn to do the next thing?”
In the Swedish schoolsystem, the idea for the past 20 years has been exactly this, that is to try to teach critical thinking, reasoning, problem solving etc rather than hard facts. The results has been...not great. We discovered that reasoning and critical thinking is impossible without a foundational knowledge about what to be critical about. I think the same can be said about software development.
I'm glad my east Asian mother put me through Saturday school for natives during my school years in Sweden.
The most damning example I have about Swedish school system is anecdotal: by attending Saturday school, I never had to study math ever in the Swedish school. (same for my Asian classmates) when I finished 9th grade Japanese school curriculum taught ONLY one day per week (2h), I had learned all of advanced math in high school and never had to study math until college.
The focus on "no one left behind == no one allowed ahead" also meant that young me complaining math was boring and easy didn't persuade teachers to let me go ahead, but instead, they allowed me to sleep during the lecture.
> no one left behind == no one allowed ahead
It's like this in the US (or rather, it was 20 years ago. But I suspect it is now worse anyway)
Teachers in my county were heavily discouraged from failing anyone, because pass rate became a target instead of a metric. They couldn't even give a 0 for an assignment that was never turned in without multiple meetings with the student and approval from an administrator.
The net result was classes always proceeded at the rate of the slowest kid in class. Good for the slow kids (that cared), universally bad for everyone else who didn't want to be bored out of their minds. The divide was super apparent between the normal level and honors level classes.
I don't know what the right answer is, but there was an insane amount of effort spent on kids who didn't care, whose parents didn't care, who hadn't cared since elementary school, and always ended up dropping out as soon as they hit 18. No differentiation between them, and the ones who really did give a shit and were just a little slow (usually because of a bad home life).
It's hard to avoid leaving someone behind when they've already left themselves behind.
I'm gonna add another perspective. I was placed, and excelled, in moderately advanced math courses from 3rd grade on. Mostly 'A's through 11th grade precalc (taken because of the one major hiccup, placing only in the second most rigorous track when I entered high school). I ended that year feeling pretty good, with a superior SAT score bagged, high hopes for National Merit, etc.
Then came senior year. AP Calculus was a sh/*tshow, because of a confluence of factors: dealing with parents divorcing, social isolation, dysphoria. I hit a wall, and got my only quarterly D, ever.
The, "if you get left behind, that's on you, because we're not holding up the bright kids," mentality was catastrophic for me - and also completely inapplicable, because I WAS one of the bright kids! I needed help, and focus. I retook the course in college and got the highest grade in the class, so I confirmed that I was not the problem; unfortunately, though, the damage had been done. I'd chosen a major in the humnities, and had only taken that course as an elective, to prove to myself that I could manage the subject. You would never know that I'd been on-track for a technical career.
So, I don't buy that America/Sweden/et al. are full of hopeless demi-students. I was deemed one, and it wasn't true, but the simple perception was devastating. I think there is a larger, overarching deficit of support for students, probably some combination of home life, class structure, and pedagogical incentives. If "no child left behind" is anathema in these circles, the "full speed ahead" approach is not much better.
> The, "if you get left behind, that's on you, because we're not holding up the bright kids," mentality was catastrophic for me
Your one bad year doesn't invalidate the fact that it was good to allow you to run ahead of slower students the other 9 years. It wasn't catastrophic for you, as you say yourself you just retook the class in college and got a high grade. I honestly don't see how "I had a bad time at home for a year and did bad in school" could have worked out any better for you.
> So, I don't buy that America/Sweden/et al. are full of hopeless demi-students. I was deemed one.
A bad grade one year deemed you a hopeless demi student? By what metric? I had a similar school career (AP/IB with As and Bs) and got a D that should have been an F my senior year and it was fine.
They seem to lament ending up in the humanities instead of on a technical path. The fact that the humanities are categorized as being for less smart people while technical people are all assumed to be smart is a problem in itself.
Many bright people end up in humanities and end up crushed by the societal pressure that expects them to be inferior, a huge waste.
> if you get left behind, that's on you, because we're not holding up the bright kids
Please note the differentiation I made between kids who were slow and didn't give a shit, and kids who were slow but at least tried
But you aren't supposed to choose either or. Instead, you split the students in different groups, different speeds.
So it works okay for everyone: for you when you're in good shape, and also for you when you're in a bad life situation.
I hope everything went mostly okay in the end for you
This is probably the right solution. It seems in reality nobody does this since it is expensive (more teachers, real attention to students, etc). Also if there is an explicit split there will be groups of people who "game" it (spend disproportional amount of time to "train" their kids vs actual natural talent - not sure if this is good or bad).
So, it feels to me that ideally, within the same classroom, there should be a natural way to work at your own pace at your own level. Is it possible? I have no idea - it seems not, again primarily because it requires a completely different skillset and attention from teachers.
> should be a natural way to work at your own pace at your own level
Analogous to the old one-room-school model where one teacher taught all grade levels and students generally worked from textbooks. There were issues with it stemming from specialization (e.g., teaching 1st grade is different than teaching 12th). They were also largely in rural areas and generally had poor facilities.
The main barrier in the US to track separation is manpower. Public School teachers are underpaid and treated like shit, and schools don't get enough funding which further reduces the number of teachers.
Teachers just don't have the time in the US to do multiple tracks in the classroom.
You can have a multi-track high-school system, like in much of Europe. Some tracks are geared towards the academically inclined who expect to go to university; others hold that option open but also focus on learning a trade or specialty (this can be stuff like welding, CNC, or the hospitality industry / restaurants, etc.); while others focus more heavily on the trade side, with apprenticeships at companies intertwined with the education throughout high school, and switching to a university after that is not possible by default, but not ruled out if you put in some extra time.
Or you can also have stronger or weaker schools where the admission test scores required are different, so stronger students go to different schools. Not sure if that's a thing in the US.
This was the way all schools worked in my county in florida, at least from middle school on. Normal/Honors/AP split is what pretty much every highschool did at the time. You could even go to a local community college instead of HS classes.
> Also if there is an explicit split there will be groups of people who "game" it (spend disproportional amount of time to "train" their kids vs actual natural talent - not sure if this is good or bad).
The idea of tracking out kids who excel due to high personal motivation when they have less natural aptitude is flat out dystopian. I'm drawing mental images of Gattaca. Training isn't "gaming". It's a natural part of how you improve performance, and it's a desirable ethical attribute.
What if it's the parents' "motivation" to a large extent (and by gaming I meant primarily parents pushing extremely hard)? How would you draw the line?
To be clear - I personally don't have an answer to this.
>But you aren't supposed to choose either or. Instead, you split the students in different groups, different speeds.
This answer is from the US perspective. I've lived in several states now, and I know many of teachers because my partner is adjacent to education in her work and family. This is what I've learned from all this so far:
This is an incredibly easy and logical thing to suggest, conceptualize, and even accept. In fact, I can see why a lot of people don't think it's a bad idea. The problem comes down to the following, in no specific order:
- Education is highly politicized. Not only that, it's one of the most politicized topics of our time. This continues to have negative effects on everything, including proper funding of programs[0]
- This means some number N of parents will inevitably take issue with these buckets for one reason or another. Dealing with this can become a real drain on resources.
- There are going to be reasonable questions about the objectivity that goes into this, including historical circumstances. This type of policy is unfortunately easy to co-opt so that certain kids end up in certain groups based on factors like race, class, sex, etc. rather than educational achievement alone - which we also do not have a good enough way to measure objectively right now, because of the aforementioned politicized nature of education.
- How to correct for the social bucketing of tiered education? High achieving kids will be lauded as lower achieving ones fall to the background. How do you mitigate that so you don't end up in a situation where one group is reaping all the benefits and thereby getting all the social recognition? Simply because I couldn't do college level trig when I was in 8th grade doesn't mean I deserved limited opportunities[2], but this tiered system ends up being ripe for this kind of exploitation. In districts that already have these types of programs you can already see parents clamoring to get their kids into advanced classes because it correlates to better outcomes.
[0]: I know that the US spends, in aggregate, approximately 15,000 USD per student per year, but that money isn't simply handed to school districts. If you factor in specialized grants, bonds, commitments, etc., actual classroom spending does not work with this budget directly; it's much smaller than this. This is because at least some of your local district's funding is likely coming from grants, which are more often than not only paid out for a specific purpose and must be used in pursuit of that purpose. Sometimes that purpose is wide and allows schools to be flexible, but more often it is exceedingly rigid, as it's tied to some outcome, such as passing rates, test scores, etc. There's a lot of this type of money sloshing around the school system, which creates perverse incentives.
[1]: Funding without strict restrictions on how it's used
[2]: Look, I barely graduated high school, largely due to a lot of personal stuff in my life back then. I was a model college student, though, but due to a different set of life circumstances I never quite managed to graduate; still, I have excelled in this industry because I'm very good at what I do and don't shy away from hard problems. Yet despite this, some doors were closed to me longer than others because I didn't have the right on-paper pedigree. This only gets worse when you start bucketing kids like this, because people inevitably see these things as a signal about someone's ability to perform, regardless of relevancy.
Yeah, all that stuff in the end boils down to: rich parents will find a way to have it their way. Whether private schools or tutors or whatever.
Every ideological system has certain hangups, depending on what they can afford. In the Soviet communist system, obviously a big thing was to promote kids of worker and peasant background etc., but they kept the standards high and math etc was rigorous and actual educational progress taken seriously. But there was Cold War pressure to have a strong science/math base.
Currently, the US is coasting, relying on talent from outside the country for the cream of the top, so they can afford nonsense beliefs, given also that most middle-class jobs are not all that related to knowledge, and are more status-jockeying email jobs.
It will likely turn around once there are real stakes.
> I was placed, and excelled, in moderately advanced math courses from 3rd grade on.
In the school district I live in, they eliminated all gifted programs and honors courses (they do still allow you to accelerate in math in HS for now, but I'm sure that will be gone soon too), so a decent chance you might not have taken Calculus in HS. Problem solved I guess?
I'm not sure when this changed, but in school for me in the 1970s and early '80s the teachers (at least the older ones) were all pretty much of the attitude that "what you get out of school depends on what you put into it" i.e. learning is mostly up to the student. Grades of "F" or zero for uncompleted or totally unsatisfactory work were not uncommon and students did get held back. Dropout age was 16 and those who really didn't care mostly did that. So at least the last two years of high school were mostly all kids who at least wanted to finish.
> It's like this in the US (or rather, it was 20 years ago. But I suspect it is now worse anyway)
I'm sure it's regional, but my oldest kid started school in SoCal 13 years ago, and it is definitely worse. Nearly every bad decision gets doubled-down on and the good ones seem to lack follow-through. I spent almost a decade trying to improve things and have given up; my youngest goes to private school now.
We are experimenting with our daughter this year: Our school system offers advanced math via their remote learning system. This means that during math class, my kid will take online 6th grade math instead of the regular in-person 5th grade math.
We will have to see how it goes, but this could be the advanced math solution we need.
The schools my kids attended encourage getting ahead by offering advanced math classes, some of them online.
And still (or maybe because of this?) the resulting adults in Sweden score above e.g. Korea in both numeracy and adaptive problem solving (but slightly below Japan). The race is not about being the best at 16, after all.
https://gpseducation.oecd.org/CountryProfile?plotter=h5&prim...
https://gpseducation.oecd.org/CountryProfile?plotter=h5&prim...
>I'm glad my east Asian mother put me through Saturday school for natives during my school years in Sweden.
I’m curious, could you share your Saturday school’s system? I’m very interested in knowing what a day of class was like, the general approach, etc.
Sure! As far as I know, it's somewhat standardized and the East Asian countries (Korea, China, Japan) all have it. I know this because the Chinese Saturday school was close by. It's usually sponsored by the embassy and located in the capital city, or in places with many Japanese families (London, Germany, Canada afaik).
Because it's only once a week, it ran from 09:00 - 14:00 or similar. The slots were: Language (Japanese), Social Studies (History, Geography, Social systems) and then Math. They usually gave homework, which was a little up to the parent to enforce. Classes were quite small: elementary school had the most, but no more than 10. Middle school was always single digit (5 for my class). It depends on place and economy: when the companies Ericsson (Sweden) and Sony (Japan) had a joint division, Sony-Ericsson, many classes doubled.
Class didn't differ so much from the normal school in Asia. Less strict. But the school organized a lot of events such as Undoukai (Sports Day), Theater play, and new years/setsubun festival and other things common in Japanese schools. It served as a place for many asian parents to meet each other too, so it became a bit of a community.
Because of a lack of students, the one I went to only had 1st to 9th grade. In London and bigger cities I heard they go up until high school. But in Japan, some colleges have 帰国子女枠 (returnee entrance system), so I know one alumnus who went to Tokyo Uni after high school.
Personally, I liked it. I hated having to go one extra day to school, but being able to have classmates who shared part of your culture (before the internet was widespread), by sharing games, books, toys you brought home from holiday in Japan, was very valuable.
Related to the "critical thinking" part of the original article: It was also interesting to read two history books. Especially modern history. The Swedish (pretending to be neutral) one and the Japanese one (pretending they didn't do anything bad) as an example, for WW2 and the aftermath. Being exposed to two rhetorics, both technically not lies (but lies by omission), definitely piqued my curiosity as a kid.
Thanks for the reply!
You mentioned that these classes were good enough that they made Swedish classes a breeze in comparison. What differences in teaching made Saturday school so much more effective?
You did mention class size, and the sense of community, which were probably important, but is there anything else related to the teaching style that you thought helped? Or conversely, something that was missing in the regular school days that made them worse?
>What differences in teaching made Saturday school so much more effective?
I do think the smaller class and feeling more "close" to the teacher helped a lot. But also that the teachers were passionate. It's a community so I still (20 years later) do meet some of the teachers, through community events.
I can't recall all the details, to be honest, but I do think a lot of repetition of math exercises, and actually going through them step by step, helped a lot to solidify how to think. I feel like the Japanese math books also went straight to the point, but still made the book colorful in a way. Swedish math books felt bland. (something I noticed in college too, but understandable in college ofc)
In the Swedish school, it felt like repetition was left to homework. You go through a concept, maybe one example, on the whiteboard and then move on. Unless you have active parents, it's hard to get timely feedback on homework (crucial for learning), so people fell behind.
Also, the curriculum was probably handed to the student early. You knew what chapters you were going through in what week, and what exercises were important. I can't recall getting that (or teachers following it properly) early in the term at the Swedish school.
They also focused on different things. For example the multiplication table: in Japan you're explicitly taught to memorize it and are tested on recall speed. (7 * 8? You have 2 seconds.) In Swedish schools, they despised memorization, so they told us not to. The result is that "how to think about this problem" is answered with a "mental model" in Japanese education and "figure it out yourself" in the Swedish one. Some figured it out in a suboptimal way.
But later in the curriculum it obviously helps to be able to calculate fast to keep up, so those small things compounded, I think.
> Swedish (pretending to be neutral)
Okay, you gotta spill - what's some stuff Sweden was pretending to be neutral on?
(As a poorly informed US dude) I'm aware of Japan's aversion to the worse events of the war, but haven't really heard anything at all about bad stuff in Sweden
I'm a Brit who speaks Swedish, and recently watched the Swedish TV company SVT's documentary "Sweden in the war" (sverige i kriget). I can maybe add some info here just out of personal curiosity on the same subject.
There were basically right wing elements in every European country. Sympathisers. This included Sweden. So that's what OP was getting at in part. Germany was somewhat revered at the time, as an impressive economic and cultural force. There was a lot of cultural overlap, and conversely the Germans respected the heritage and culture of Scandinavia and also of England, which it saw as a Germanic cousin.
The documentary did a good job of balancing the fact that Sweden let the German army and economy use its railways and iron ore for far longer than it should have, right up until it became finally too intolerable to support them in any way (discovery of the reality of the camps). Neutrality therefore is somewhat subjective in that respect.
They had precedent for neutrality, from previous conflicts where no side was favoured, so imo they weren't implicitly supporting the nazi movement, despite plenty of home support. It's a solid strategy from a game theory perspective. No mass bombings, few casualties, wait it out, be the adult in the room. Except they didn't know how bad it would get.
In their favour they allowed thousands of Norwegian resistance fighters to organise safely in Sweden. They offered safe harbour to thousands of Jewish refugees from all neighbouring occupied countries. They protected and supplied Finns too. British operatives somehow managed to work without hindrance on missions to take out German supplies moving through Sweden. It became a neutral safe space for diplomats, refugees and resistance fighters. And this was before they found out the worst of what was going on.
Later they took a stand, blocked German access and were among the first to move in and liberate the camps/offer red cross style support.
Imo it's a very nuanced situation and I'm probably more likely to give the benefit of the doubt at this point. But many Danes and Norwegians were displeased with the neutral stance as they battled to avoid occupation and deportations.
As for Japan, I'd just add that I read recently on the BBC that some 40% or more of the victims of the bombings were Koreans. As second class citizens they had to clean up the bodies and stayed among the radioactive materials far longer than native residents, who could move out to the country with their families. They live on now with intergenerational medical and social issues with barely a nod of recognition.
To think it takes the best part of 100 years for all of this to be public knowledge is testament to how much every participant wants to save face. But at what cost? The legacy of war lives on for centuries, it would seem.
And who were the teachers? Did it cost money, how much? How long ago? I guess the students were motivated and disciplined? Who were the other students? Natives, you mean Swedes?
Sorry, by natives I meant Japanese natives; a school for Japanese kids (kids of Japanese parents). Although I read that in Canada they recently removed that restriction, since there are now 3rd and 4th generation Canadians who teach Japanese to the kids.
The teachers were often Japanese teachers. Usually they taught locally (in Sweden) or had other jobs, but most of them had a teaching license (in Japan). My mother also taught there for a short time, and told me that the salary was very, very low (like $300 or something, per month) and people mostly did it out of passion or as part of the community thing.
I did a quick googling and right now the price seems to be $100 for entering the school, and around $850 per year. Not sure about the teachers' salary now or what it was back then.
Other students were either: half-Swedish/Japanese, settled in Sweden; immigrants with both parents Japanese, settled in Sweden; or expats' kids (usually in Sweden for a short time, 1-2 years, for work), both parents Japanese. The former two spoke both languages, the latter only spoke Japanese.
Ok :-) Thanks for explaining. Sounds like a good school
I have as much of a fundamental issue with “Saturday school” for children as I do with professionals thinking they should be coding on their days off. When do you get a chance to enjoy your childhood?
For many, coding can be fun and it's not an external obligation like eating veggies or going to the gym (relatedly, some also enjoy veggies and the gym).
Some people want to deeply immerse into a field. Yes, they sacrifice other ways of spending that time and they will be less well rounded characters. But that's fine. It's also fine to treat programming as a job and spend free time in regular ways like going for a hike or cinema or bar or etc.
And similarly, some kids (though this may not fully overlap with the parents who want their kids to be such) also enjoy learning, math, etc. Kids who love the structured activities and dread the free play time. I'd say yes, they should be pushed to do regular kid things to challenge themselves too, but you don't have to mold the kids too much against what their personality is like if it is functional and sustainable.
As a kid, the "fun" about Saturday school fluctuated. In the beginning it was super fun, after a while it became a chore (and I whined to my mom) but in the end I enjoyed it and it was tremendously valuable. The school had a lot of cultural activities (sport day, new years celebration / setsubun etc) and having a second set of classmates that shared a different side of you was actually fun for me. So it added an extra dimension of enjoyment in my childhood :)
Especially since (back then) being a (half) Asian nerd kid in a 99.6% White (blonde & blue eyed) school meant a lot of ridicule and minor bullying. The Saturday school classes were too small for bullying to go unnoticed, and they also served as a second community where you could share your stuff without ridicule or confusion :)
The experience made me think that it's tremendously valuable for kids to find multiple places (at least one outside school) where they can meet their peers. Doesn't have to be a school, but a hobby community, sport group, music groups, etc. Anything the kid might like, and there's shared interest.
It teaches the kid that being liked by a random group of people (classmates) is not everything in life, and you increase the chance of finding like-minded people. Which reflects the rest of life better anyway (being surrounded by nerds is by far the best perk of being an engineer).
I know 2 classmates (out of 7) who hated it there, and since it's not mandatory they left after elementary school. So a parent should ofc check if the kid enjoys it (and if not, why) and let the kid have a say in it.
So you’re telling me the entire point of life is being able to segregate yourself with a bunch of people like you?
That's a very bad-faith take on what I wrote. I'll self-quote:
>The experience made me think that it's tremendously valuable for kids to find *multiple places* (at least one outside school) where they can meet their peers.
Most people don't neatly fit in to "one" category. Trying to find many places you could meet peers can open up your mind (and also people around you)
Ok. I’m sorry I miscast it.
There is a huge difference between not wanting to be around people who don’t agree with you about the benefits and drawbacks of supply side economics and not wanting to be around someone who disrespects you as a person because of the color of your skin.
Neither he (half Asian) or I (Black guy) owe the latter our time or energy to get along with. Let them wallow in their own ignorance.
I (half Asian as well) agree.
It's better to leave no one behind than to focus solely on those ahead. Society needs a stable foundation and not more ungrateful privileged people.
But it is a false dichotomy. You can both offer resources to the ones behind and support high achievers.
The latter can pretty much teach themselves with little hands on guidance, you just have to avoid actively sabotaging them.
Many western school systems fail that simple requirement in several ways: they force unchallenging work even when unneeded, don’t offer harder stimulating alternatives, fail to provide a safe environment due to other students’ disruption…
You say we should provide those ahead a safe environment.. but that's what accelerates social segregation and leaves those other poor kids behind
That's a good point.
Maybe you can have all quiet and focused students together in the same classroom?
They might be reading different books, different speed, and have different questions to the teachers. But when they focus and don't interrupt each other, that can be fine?
Noisy students who sabotage for everyone shouldn't be there though.
Grouping students on some combination of learning speed and ability to focus / not disturbing the others. Rather than only learning speed. Might depend on the size of the school (how many students)
For what it's worth, that's how the Montessori school I went to worked. I have my critiques of the full Montessori approach (too long for a comment), but the thing that always made sense was mixed age and mixed speed classrooms.
The main ideas that I think should be adopted are:
1. A "lesson" doesn't need to take 45 minutes. Often, the next thing a kid will learn isn't some huge jump. It's applying what they already know to an expanded problem.
2. Some kids just don't need as much time with a concept. As long as you're consistently evaluating understanding, it doesn't really matter if everyone gets the same amount of teacher interaction.
3. Grade level should not be a speed limit; it also shouldn't be a minimum speed (at least as currently defined). I don't think it's necessarily a problem for a student to be doing "grade 5" math and "grade 2" reading as a 3rd grader. Growth isn't linear; having a multi-year view of what constitutes "on track" can allow students to stay with their peers while also learning at an appropriate pace for their skill level.
Some of this won't be feasible to implement at the public school level. I'm a realist in the sense that student to teacher ratios limit what's possible. But I think when every education solution has the same "everyone in a class goes the same speed" constraint, you end up with the same sets of problems.
Counterintuitive argument: 'No one left behind' policies increase social segregation.
Universal education offers a social ladder. "Your father was a farmer, but you can be a banker, if you put in the work".
When you set a lower bar (like enforcing a safe environment), smart kids will shoot forward. Yes, statistically, a large part of successful kids will be the ones with better support networks, but you're still judging results, for which environment is just a factor.
When you don't set this lower bar, rich kids who can move away will do it, because no one places their children in danger voluntarily. Now the subset of successful kids from a good background will thrive as always, but successful kids from bad environments are stuck with a huge handicap and sink. You've made the ladder purely, rather than partly, based on wealth.
And you get two awful side effects on top:
- you're not teaching the bottom kids that violating the safety of others implies rejection. That's a rule enforced everywhere, from any workplace through romantic relationships to even prison, and kids are now unprepared for that.
- you've taught the rest of the kids to think of the bottom ones as potential abusers and disruptors. Good luck with the resulting classism and xenophobia when they grow up.
There will always be a gap between kids who are rich and smart (if school won't teach them, a tutor will) and kids who are stupid (no one can teach them). We can only choose which side of this gap will the smart poor kids stand on. The attempts to make everyone at school equal put them on the side with the stupid kids.
And rich dumb kids. Where do they fall?
Rich dumb kids do not fail. At least not secondary school. They go to “party school” university ~~and major in business~~.
Had to do it for a laugh. Posting long after discussion finished.
Not sure if counterintuitive or not, but once you have such social mobility-based policies in place ("Your father was a farmer, but you can be a banker, if you put in the work") for a few generations, people generally rise and sink to a level that remains more stable for the later generations. Then even if you keep that same policy, the observation will be less social movement compared to generations before, which will frustrate people, and they will read it to mean that the policies are blocking social mobility.
You get most mobility after major upheavals like wars and dictatorships that strip people of property, or similar. The longer a liberal democratic meritocratic system is stable without upheavals and dispossession of the population through forced nationalization etc, the less effect the opportunities will have, because those same opportunities were already generally taken advantage of by the parent generation and before.
If everyone can't get a Nobel prize, no one should!
The so-called intelligent kids selfishly try to get ahead and build rockets or cure cancer, but they don't care about the feelings of those who can't build rockets or cure cancer. We need education to teach them that everyone is special in exactly the same way.
That sounds like a recipe for mediocrity.
Ridiculous. Progress, by definition, is made by the people in front.
No one is saying to "focus solely on those ahead," but as long as resources are finite, some people will need to be left behind to find their own way. Otherwise those who can benefit from access to additional resources will lose out.
"Progress is made by the people in front" is plausibly true by definition.
"Progress is made by the people who were in front 15 years earlier" is not true by definition. (So: you can't safely assume that the people you need for progress are exactly the people who are doing best in school. Maybe some of the people who aren't doing so well there might end up in front later on.)
"Progress is made by the people who end up in front without any intervention" is not true by definition. (So: you can't safely assume that you won't make better progress by attending to people who are at risk of falling behind. Perhaps some of those people are brilliant but dyslexic, for a random example.)
"Progress is made by the people in front and everyone else is irrelevant to it" is not true by definition. (So: you can't safely assume that you will make most progress by focusing mostly on the people who will end up in front, even if you can identify who those are. Maybe their brilliant work will depend on a whole lot of less glamorous work by less-brilliant people.)
I strongly suspect that progress is made mostly by people who don't think in soundbite-length slogans.
Although in a global world, it's not clear that it's best for a country to focus on getting the absolute best, IF it means the average suffers from it. There is value in being the best, but for the economy it's also important to have enough good-enough people to utilise the new technology/science (which gets imported from abroad), and they don't need to be the absolute best.
As a bit of a caricature example, if cancer is completely cured tomorrow, it's not necessarily the country inventing the cure which will be cancer free first, but the one with the most doctors able to use and administer the cure.
Which of the two gives us progress? Are you sure you wanna give up all progress for the sake of stability?
This is a false dichotomy though; as I linked previously in this thread, adult Swedes are above Koreans, and only slightly below the Japanese, in literacy, numeracy, and problem solving.
Personally I think it's easy to overestimate how important it is to be good at something at 16 for the skill at 25. Good university is infinitely more important than 'super elite' high school.
I'd rather live in a stable society than some tech utopia.
So, here's a time machine. You can go back to a time and place of lasting, enduring stability. There have been numerous such periods in recorded history that have lasted for more than a human lifetime, and likely even more prior to that. (Admittedly a bit of a tautology, given that most 'recorded history' is a record of things happening rather than things staying the same.)
It will be a one-way trip, of course. What year do you set the dial to?
Ok, please surrender your cellphones, internet, steam, tools, writing, etc... all those were given to you by the best of the crop and not the median slop.
Go gather tubers in a forest
the worst part about the average person is that 49% are worse than average
> the worst part about the average person is that 49% are worse than average
That is not how that works... It would if you said "median"
I guess we should tell that guy from Tula who makes watches to go mushroom picking too
Most of what I remember of my high school education in France was: here are the facts, and here is the reasoning that got us there.
The exams were typically essay-ish (even in science classes) where you either had to basically reiterate the reasoning for a fact you already knew, or use similar reasoning to establish/discover a new fact (presumably unknown to you because not taught in class).
Unfortunately, it didn't work for me and I still have about the same critical thinking skills as a bottle of Beaujolais Nouveau.
I don't know if I have critical thinking or not. But I often question - WHY is this better? IS there any better way? WHY it must be done such a way or WHY such rule exists?
For example in electricity you need at least that amount of cross section if doing X amount of amps over Y length. I want to dig down and understand why? Ohh, the smaller the cross section, the more it heats! Armed with this info I get many more "Ohhs": Ohh, that's why you must ensure the connections are not loose. Oohhh, that's why an old extension cord where you don't feel your plug solidly clicks in place is a fire hazard. Ohh, that's why I must ensure the connection is solid when joining cables and doesn't lessen cross section. Ohh, that's why it's a very bad idea to join bigger cables with a smaller one. Ohh, that's why it is a bad idea to solve "my fuse is blowing out" by inserting a bigger fuse but instead I must check whether the cabling can support higher amperage (or check whether device has to draw that much).
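To put a rough number on that "smaller cross section heats more" intuition, here's a back-of-the-envelope sketch (my own example values, nothing from a code standard) using R = ρL/A and P = I²R:

    # Rough sketch: power dissipated per metre of copper wire, P = I^2 * R,
    # with R per metre = rho / A. Example values only.
    RHO_COPPER = 1.68e-8  # resistivity of copper, in ohm*metres

    def heat_per_metre(current_amps, cross_section_mm2):
        area_m2 = cross_section_mm2 * 1e-6
        resistance_per_metre = RHO_COPPER / area_m2
        return current_amps ** 2 * resistance_per_metre

    print(heat_per_metre(16, 2.5))  # ~1.7 W per metre
    print(heat_per_metre(16, 1.0))  # ~4.3 W per metre -- the thinner wire runs hotter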
And yeah, this "intuition" is kind of a discovery phase and I can check whether my intuition/discovery is correct.
Basically getting down to primitives lets me understand things more intuitively without trying to remember various rules or formulas. But I noticed my brain is heavily wired in not remembering lots of things, but thinking logically.
We don't have enough time to go over things like this over and over again. Somebody already analyzed/tried all this and wrote it in a book, and they teach you in school from that book how it works and why. Yeah, if you want to know more or understand better you can always dig it out yourself. At least today you can learn tons of stuff.
We don't have enough time to derive everything from first principles, but we do have the time to go over how something was derived, or how something works.
A common issue when trying this is trying to teach all layers at the same level of detail. But this really isn't necessary. You need to know the equation for Ohm's law, but you can give very handwavy explanations for the underlying causes. For example: why do thicker wires have less resistance? Electricity is the movement of electrons, more cross section means more electrons can move, like having more lanes on a highway. Why does copper have less resistance than aluminum? Copper has an electron that isn't bound as tightly to the atom. How does electricity know which path has the least resistance? It doesn't, it starts flowing down all paths equally at a significant fraction of the speed of light, then quickly settles in a steady state described by Ohm's law. Reserve the equations and numbers for the layers that matter, but having a rough understanding of what's happening on the layer below makes it easier to understand the layer you care about, and makes it easier to know when that understanding will break down (because all of science and engineering are approximations with limited applicability).
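If a toy example helps, that "settles into a steady state" behaviour is just a current divider; the numbers below are assumed, and show that the lower-resistance path ends up with most, not all, of the current:

    # Toy current divider: two parallel paths across the same 12 V source.
    # Assumed values; once settled, the current in each path follows Ohm's law.
    V = 12.0            # volts
    R1, R2 = 1.0, 4.0   # ohms

    I1, I2 = V / R1, V / R2
    print(I1, I2)          # 12.0 A and 3.0 A
    print(I1 / (I1 + I2))  # 0.8 -> 80% of the total current, not 100%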
Oh you put this nicely.
> How does electricity know which path has the least resistance? It doesn't, it starts flowing down all paths equally at a significant fraction of the speed of light, then quickly settles in a steady state described by Ohm's law.
> because all of science and engineering are approximations with limited applicability
Something I heard but haven't dug into, because my use case (DIY, home) doesn't care. In some other applications, approximation at this level may not work and a more detailed understanding may be needed :)
And yeah, some theory and telling of things others discovered for sure needs to be done. That is just the entry point for digging. And understanding how something was derived is just a tool for me to more easily remember/use the knowledge.
Are you being serious or is this satire? What an odd perspective to share on Hacker News. We're a bunch of nerds that take pleasure in understanding how things work when you take them apart, whether that's a physics concept or a washing machine. Or am I projecting an ethos?
Are we hackers? I see posters griping about the pointlessness of learning CS theory and other topics during their college on HN all the time.
No you’re not projecting they’re being weird.
On the contrary, the French "dissertation" exercise requires you to articulate reasoning and facts, and come up with a plan for the explanation. It is the same kind of thinking that you are required to produce when writing a scientific paper.
It is however not taught very well by some teachers, who skimp on explaining how to properly do it, which might be your case.
I'm pretty sure my teachers in the 90s were teaching properly.
I also don't see what's "on the contrary" there.
On the contrary, your OP claims that dissertations require a rehash of the references cited in class. A real dissertation exercises logic and requires mobilizing facts and verbal precision to ground arguments. It is also highly teacher-dependent: if the correction is lax or not properly explained, you won’t understand what the exercise really is or how you are supposed to think in order to succeed.
> Unfortunately, it didn't work for me and I still have about the same critical thinking skills as a bottle of Beaujolais Nouveau.
Why do you say so? Even just stating this probably means you are one or a few steps further...
Perhaps you overestimate me (or underestimate Beaujolais Nouveau (though how one could underestimate Beaujolais Nouveau is a mystery to me, but I digress)).
But also, it takes a lot of actual learning of facts and understanding reasoning to properly leverage that schooling and I've had to accept that I am somewhat deficient at both. :)
One thing I've come to understand about myself since my ADHD diagnosis is how hard thinking actually is for me. Especially thinking "to order", like problem solving or planning ahead. I'm great at makeshift solutions that will hold together until something better comes along. But deep and sustained thought for any length of time increases the chance that I'll become aware that I'm thinking and then get stuck in a fruitless meta cognition spiral.
An analogy occurred to me the other day that it's like diving into a burning building to rescue possessions. If I really go for it I could get lucky and retrieve a passport or pet, but I'm just as likely to come back with an egg whisk!
Your description feels relatable.
I think all this stuff is so complex and multi-faceted that we often get only a small part of the picture at a time.
I likely have some attention/focus issues, but I also know they vary greatly (from "can't focus at all" to "I can definitely grok this") based on how actually interested I am in a topic (and I often misjudge that actual level of interest).
I also know my very negative internal discourse, and my fixed mindset, are both heavily influenced by things that occurred decades ago, and keeping myself positively engaged in something by trying to at least fake a growth mindset is incredibly difficult.
Meanwhile, I'm perfectly willing to throw unreasonable brute force effort at things (ie I've done many 60+ hour weeks working in tech and bunches of 12 hour days in restaurant kitchens), but that's probably been simultaneously both my biggest strength and worst enemy.
At the same time, I don't think you should ignore the value of an egg whisk. You can use it to make anything from mayonnaise to whipped cream, not to mention beaten egg whites that have a multitude of applications. Meanwhile, the passport is easy enough to replace, and your pet (forgive me if I'm making the wrong assumption here) doesn't know how to use the whisk properly.
I’ve heard many bad things said of the Beaujolais Nouveau, and of my sense of taste for liking it, but this is the first time I’ve seen its critical-thinking skills questioned.
In its/your/our defense, I think it’s a perfectly smart wine, and young at heart!
I appreciate the thought! ... even if it makes me question your judgement a bit.
> the same critical thinking skills as a bottle of Beaujolais Nouveau
I'm loving this expression. May I please adopt it?
You absolutely may, but I think you should personalize it with a wine reference that is geographically and qualitatively appropriate.
And you may only use it to describe yourself, not others.
> In the Swedish schoolsystem, the idea for the past 20 years has been exactly this, that is to try to teach critical thinking, reasoning, problem solving etc rather than hard facts. The results has been...not great.
I'm not sure I'd agree that it's been outright "not great". I myself am the product of that precise school-system, being born in 1992 in Sweden (but now living outside the country). But I have vivid memories of some of the classes where we talked about how to learn, how to solve problems, critical thinking, reasoning, being critical of anything you read in newspapers, difference between opinions and facts, how propaganda works and so on. This was probably through year/class 7-9 if I remember correctly, and both me and others picked up on it relatively quick, and I'm not sure I'd have the same mindset today if it wasn't for those classes.
Maybe I was just lucky with good teachers, but surely there are others out there who also had a very different experience than what you outline? To be fair, I don't know how things are working today, but at least at that time it actually felt like I had use of what I was taught in those classes, compared to most other stuff.
This is, in my opinion, quite accurate.
In the world of software development I meet a breed of Swedish devs younger than 30 who can't write code very well, but who can wax lyrical about Jira tickets and software methodologies and do all sorts of things to get themselves into a management position without having to write code. The end result is toxic teams where the seniors and the devs brought in from India are writing all the code while all the juniors are playing software architect, scrum master and product owner.
Not everybody is like that; seniors tend to be reliable and practical, and some juniors with programming-related hobbies are extremely competent and reasonable. But the chunk of "waxers" is big enough to be worrying.
I have heard that in the Netherlands there used to be (not sure if it is still there) a system where you have, for example, 4 rooms of children. Room A contains all children that are ahead of rooms B, C, D. If a child in room B learns pretty quickly, the child is moved to room A. However, if the child falls behind the other children in room B, that child is moved to room C. Same for room C: those who can not catch up are moved to room D. In this way everyone is learning at max capacity. Those who can learn faster and better are not slowed down by others who can not (or do not want to) keep the pace. Everyone is happy - children, teachers, parents, community.
> The results has been...not great.
Sweden is 19th in the PISA rankings, and it is in the upper section of all education indexes. There has been a worldwide decline in scores, but that has nothing to do with the Swedish education system. (That does not mean that Sweden should not continue monitoring it and bringing improvements)
From Swedish news: https://www.sverigesradio.se/artikel/swedish-students-get-hi...
- Swedish students' skills in maths and reading comprehension have taken a drastic downward turn, according to the latest PISA study.
- Several other countries also saw a decline in their PISA results, which are believed to be a consequence of the Covid-19 pandemic.
Considering our past and the Finnish progress (they considered following us in the 80s/90s as they had done before, but stopped), 19th is a disappointment.
Having teenagers who have been through most of primary and secondary school, I kind of agree with GP, especially when it comes to math, etc.
Teaching concepts and ideas is _great_, and what we need to manage with advanced topics as adults. HOWEVER, if the foundations are shaky due to too little repetition of basics (that is seemingly frowned upon in the system) then being taught thinking about some abstract concepts doesn't help much because the tools to understand them aren't good enough.
One should note that from the nineties onwards we put a large portion of our kids' education on the stock exchange and in the hands of upper class freaks instead of experts.
I think there’s a balance to be had. My country (Spain) is the very opposite, with everything from university access to civil service exams being memory focused.
The result is usually bottom of the barrel in the subjects that don’t fit that model well, mostly languages and math - the latter being the main issue as it becomes a bottleneck for teaching many other subjects.
It also creates a tendency for people to take what they learn as truth, which becomes an issue when they use less reputable sources later in life - think for example a person taking a homeopathy course.
Lots of parroting and cargo culting paired with limited cultural exposure due to monolingualism is a bad combination.
Check out E.D. Hirsch Jr.'s work, e.g. 'Why Knowledge Matters'.
> what to be critical about
Media can fill that gap. People should be critical about global warming, antivax, anti israel, anti communism, racism, hate, white man, anti democracy, russia, china, trump...
This thing is bad, I hate it, problem solved! Modern critical thinking is pretty simple!
In the future the government can provide a daily RSS feed of things to be critical about. You can reduce the national schooling system to a single VPS server!
Indeed, the Swedish school system is an ongoing disaster.
The problem is, in a capitalist society, who is going to be the company that will donate their time and money to teaching a junior developer who will simply go to another company for double the pay after 2 years?
I think that’s a disingenuous take. Earlier in the piece the AWS CEO specifically says we should teach everyone the correct ways to build software despite the ubiquity of AI. The quote about creative problem solving was with respect to how to hire/get hired in a world where AI can let literally anyone code.
> The results has been...not great.
Well, I kind of disagree. The results are bad mainly because we have mass immigration from low-education countries with extremely bad cultures.
If you look at the numbers, it's easy to say Swedes are stupid when, in the real sense, ethnic Swedes do very well in school.
Here is the thing though.
You can’t teach critical thinking like that.
You need to teach hard facts and then people can learn critical thinking inductively from the hard facts with some help.
I completely agree.
On a side note... y'all must be prompt wizards if you can actually use the LLM code.
I use it sometimes for debugging to get an idea, or for a quick sketch of a UI.
As for actual code.. the code it writes is a huge mess of spaghetti code, overly verbose, with serious performance and security risks, and complete misunderstanding of pretty much every design pattern I give it..
I read AI coding negativity on Hacker News and Reddit with more and more astonishment every day. It's like we live in different worlds. I expect the breadth of tooling is partly responsible. What it means to you to "use the LLM code" could be very different from what it means to me. What LLM are we talking about? What context does it have? What IDE are you using?
Personally, I wrote 200K lines of my B2B SaaS before agentic coding came around. With Sonnet 4 in Agent mode, I'd say I now write maybe 20% of the ongoing code from day to day, perhaps less. Interactive Sonnet in VS Code and GitHub Copilot Agents (autonomous agents running on GitHub's servers) do the other 80%. The more I document in Markdown, the higher that percentage becomes. I then carefully review and test.
> B2B SaaS
Perhaps that's part of it.
People here work on all kinds of industries. Some of us are implementing JIT compilers, mission-critical embedded systems or distributed databases. In code bases like this you can't just wing it without breaking a million things, so LLM agents tend to perform really poorly.
> People here work on all kinds of industries.
Yes, it would be nice to have a lot more context (pun intended) when people post how many LoC they introduced.
B2B SaaS? Then can I assume that a browser is involved and that a big part of that 200k LoC is the verbose styling DSL we all use? On the other hand, Nginx, a production-grade web server, is 250k LoC (251,232 to be exact [1]). These two things are not comparable.
The point being that, as I'm sure we all agree, LoC is not a helpful metric for comparison without more context, and different projects have vastly different amounts of information/feature density per LoC.
[1] https://openhub.net/p/nginx
I primarily work in C# during the day but have been messing around with simple Android TV dev on occasion at night.
I’ve been blown away sometimes at what Copilot puts out in the context of C#, but using ChatGPT (paid) to get me started on an Android app - totally different experience.
Stuff like giving me code that’s using a mix of different APIs and sometimes just totally non-existent methods.
With Copilot I find it's sometimes brilliant, but it seems so random as to when that will be.
> Stuff like giving me code that’s using a mix of different APIs and sometimes just totally non-existent methods.
That has been my experience as well. We can control the surprising pick of APIs with basic prompt files that clarify what and how to use in your project. However, when using less-than-popular tools whose source code is not available, the hallucinations are unbearable and a complete waste of time.
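For what it's worth, such a prompt file can be very short. The sketch below is hypothetical (the rules and names are invented, not from any real project); GitHub Copilot, for instance, picks up instructions from a .github/copilot-instructions.md file, and other agent tools have similar mechanisms:

    # Project instructions for the coding agent (hypothetical example)
    - Target .NET 8. Use System.Text.Json; never add Newtonsoft.Json.
    - All outbound HTTP goes through our ApiClient wrapper, not raw HttpClient.
    - If an API or method is unfamiliar, say so and ask rather than inventing one.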
The lesson to be learned is that LLMs depend heavily on their training set, and in a simplistic way they at best only interpolate between the data they were fed. If an LLM is not trained with a corpus covering a specific domain, then you can't expect usable results from it.
This brings up some unintended consequences. Companies like Microsoft will be able to create incentives to use their tech stack by training their LLMs with a very thorough and complete corpus on how to use their technologies. If Copilot does miracles outputting .NET whereas Java is unusable, developers have one more reason to adopt .NET to lower their cost of delivering and maintaining software.
From the article:
I'm with Garman here. There's no clean metric for how productive someone is when writing code. At best, this metric is naive, but usually it is just idiotic.
Bureaucrats love LoC, commits, and/or Jira tickets because they are easy to measure, but here's the truth: to measure the quality of code you have to be capable of producing said code at (approximately) said quality or better. Data isn't just "data" that you can treat as a black box and throw into algorithms. Data requires interpretation and there's no "one size fits all" solution. Data is nothing without its context. It is always biased, and if you avoid nuance you'll quickly convince yourself of falsehoods. Even with expertise it is easy to convince yourself of falsehoods. Without expertise it is hopeless. Just go look at Reddit or any corner of the internet where there are armchair experts confidently talking about things they know nothing about. It is always void of nuance and vastly oversimplified. But humans love simplicity. We need to recognize our own biases.
> Pretty ironic you and the GP talk about lines of code.
I was responding specifically to the comment I replied to, not the article, and mentioning LoC as a specific example of things that don't make sense to compare.
> Bureaucrats love LoC
Looks like vibe-coders love them too, now.
...but you repeat yourself (c:
Made me think of a post from a few days ago where Pournelle's Iron Law of Bureaucracy was mentioned[0]. I think vibe coders are the second group. "dedicated to the organization itself" as opposed to "devoted to the goals of the organization". They frame it as "get things done" but really, who is not trying to get things done? It's about what is getting done and to what degree is considered "good enough."
[0] https://news.ycombinator.com/item?id=44937893
On the other hand, fault-intolerant codebases are also often highly defined and almost always have rigorous automated tests already, which are two contexts where coding agents specifically excel.
I work on brain dead crud apps much of my time and get nothing from LLMs.
Try Claude Code. You’ll literally be able to automate 90% of the coding part of your job.
We really need to add some kind of risk to people making these claims to make it more interesting. I listened to the type of advice you're giving here on more occasions than I can remember, at least once for every major revision of every major LLM and always walked away frustrated because it hindered me more than it helped.
> This is actually amazing now, just use [insert ChatGPT, GPT-4, 4.5, 5, o1, o3, Deepseek, Claude 3.5, 3.9, Gemini 1, 1.5, 2, ...] it's completely different from Model(n-1) you've tried.
I'm not some mythical 140 IQ 10x developer and my work isn't exceptional so this shouldn't happen.
The dark secret no one from the big providers wants to admit is that Claude is the only viable coding model. Everything else descends into a mess of verbose spaghetti full of hallucinations pretty quickly. Claude is head and shoulders above the rest and it isn't even remotely close, regardless of what any benchmark says.
Stopping by to concur.
Tried about four others, and while to some extent I always marveled at the capabilities of the latest and greatest, I had to concede they didn't make me faster. I think Claude does.
As a GPT user, your comment triggered me wanting to search how superior is Claude... well, these users don't think it is: https://www.reddit.com/r/ClaudeAI/comments/1l5h2ds/i_paid_fo...
>As a GPT user, your comment triggered me wanting to search how superior is Claude... well, these users don't think it is: https://www.reddit.com/r/ClaudeAI/comments/1l5h2ds/i_paid_fo...
That poster isn't comparing models, he's comparing Claude Code to Cline (two agentic coding tools), both using Claude Sonnet 4. I was pretty much in the same boat all year as well; using Cline heavily at work ($1k+/month token spend) and I was sold on it over Claude Code, although I've just recently made the switch, as Claude Code has a VSCode extension now. Whichever agentic tooling you use (Cline, CC, Cursor, Aider, etc.) is still a matter of debate, but the underlying model (Sonnet/Opus) seems to be unanimously agreed on as being in a league of its own, and has been since 3.5 released last year.
I've been working on macOS and Windows drivers. Can't help but disagree.
Because of the absolute dearth of high-quality open-source driver code and the huge proliferation of absolutely bottom-barrel general-purpose C and C++, the result is... Not good.
On the other hand, I asked Claude to convert an existing, short-ish Bash script to idiomatic PowerShell with proper cmdlet-style argument parsing, and it returned a decent result that I barely had to modify or iterate on. I was quite impressed.
Garbage in, garbage out. I'm not altogether dismissive of AI and LLMs but it is really necessary to know where and what their limits are.
I'm pretty sure the GP referred to GGP's "brain dead CRUD apps" when they talked about automating 90% of the work.
I found the opposite - I am able to get a 50% improvement in productivity for day-to-day coding (a mix of backend and frontend), mostly in Javascript, but it has helped in other languages too. But you have to review carefully - and have extremely well written test cases if you have to blindly generate or replace existing code.
> In code bases like this you can't just wing it without breaking a million things, so LLM agents tend to perform really poorly.
This is a false premise. LLMs themselves don't force you to introduce breaking changes into your code.
In fact, the inception of coding agents was lauded as a major improvement to the developer experience because they allow the LLMs themselves to automatically react to feedback from test suites, thus speeding up how code was implemented while preventing regressions.
If tweaking your code can result in breaking a million things, this is a problem with your code and how you worked to make it resilient. LLMs are only able to introduce regressions if your automated tests are unable to catch any of these millions of things breaking. If this is the case then your problems are far greater than LLMs existing, and at best LLMs only point out the elephant in the room.
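As a tiny illustration of that safety net (the module and function names are invented, pytest-style): a characterization test pins today's behaviour, so an agent's edit that changes results fails CI instead of slipping through.

    # Hypothetical characterization test guarding an LLM-driven refactor.
    import pytest
    from billing import apply_discount  # invented module under test

    @pytest.mark.parametrize("total, code, expected", [
        (100.0, "SAVE10", 90.0),
        (100.0, None, 100.0),
        (0.0, "SAVE10", 0.0),
    ])
    def test_apply_discount_keeps_current_behaviour(total, code, expected):
        # If a refactor changes any of these results, the suite fails loudly.
        assert apply_discount(total, code) == pytest.approx(expected)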
Perhaps the issue is you were used to writing 200k lines of code. Most engineers would be aghast at that. Lines of code is a debit not a credit
I am now making an emotional reaction based on zero knowledge of the B2B codebase's environment, but to be honest I think it is relevant to the discussion on why people are "worlds apart".
200k lines of code is a failure state. At this point you have lost control and can only make changes to the codebase through immense effort, and not at a tolerable pace.
Agentic code writers are good at giving you this size of mess and at helping to shovel stuff around to make changes that are hard for humans due to the unusable state of the codebase.
If overgrown, barely manageable codebases are all a person's ever known and they think it's normal that changes are hard and time-consuming and need reams of code, I understand that they believe AI agents are useful as code writers. I think they do not have the foundation to tell mediocre from good code.
I am extremely aware of the judgemental hubris of this comment. I'd not normally huff my own farts in public this obnoxiously, but I honestly feel it is useful for the "AI hater vs AI sucker" discussion to be honest about this type of emotion.
If all your code depends on all your other code, yeah 200k lines might be a lot. But if you actually know how to code, I fail to understand why 200k lines (or any number) of properly encapsulated well-written code would be a problem.
Further, if you yourself don't understand the code, how can you verify that using LLMs to make major sweeping changes, doesn't mess anything up, given that they are notorious for making random errors?
It really depends on what your use case is. E.g. if you're dealing with a lot of legacy integrations, dealing with all the edge cases can require a lot of code that you can't refactor away through cleverness.
Each integration is hopefully only a few thousand lines of code, but if you have 50 integrations you can easily break 100k loc just dealing with those. They just need to be encapsulated well, so that the integration cruft is isolated from the core business logic and they become relatively simple to reason about.
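A minimal sketch of that kind of encapsulation, with every name invented for illustration: each integration hides its vendor quirks behind the same narrow interface, so the core logic never sees them.

    # Hypothetical adapter layer: integrations share one narrow protocol.
    from typing import Protocol

    class InvoiceExporter(Protocol):
        def export(self, invoice_id: str, amount_cents: int) -> None: ...

    class LegacySapExporter:
        """Vendor-specific edge cases stay inside this class."""
        def export(self, invoice_id: str, amount_cents: int) -> None:
            # retries, odd field names, encodings, etc. would live here
            payload = {"doc_number": invoice_id, "amount": amount_cents / 100}
            print("would send to the legacy system:", payload)

    def close_invoice(exporter: InvoiceExporter, invoice_id: str, amount_cents: int) -> None:
        # The core business logic only ever sees the narrow interface.
        exporter.export(invoice_id, amount_cents)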
> 200k lines of code is a failure state.
What on earth are you talking about? This is unavoidable for many use-cases, especially ones that involve interacting with the real world in complex ways. It's hardly a marker of failure (or success, for that matter) on its own.
200k loc is not a failure state. Suppose your B2B SaaS has 5 user types and 5 downstream SaaSes it connects to; that's 20k loc per major programming unit. Not so bad.
That's actually insane.
I agree in principle, and I'm sure many of us know how much of a pain it is to work on million or even billion dollar codebases, where even small changes can mean weeks of bureaucracy and hours of meetings.
But with the way the industry is, I'm also not remotely surprised. We have people come and go as they are poached, burned out, or simply hit by life circumstances. The training for the new people isn't the best, and the documentation for all but the large companies is probably a mess. We also don't tend to encourage periods to focus on properly addressing tech debt, instead focusing on delivering features. I don't know how such an environment, over years or decades, doesn't generate so many redundant, clashing, and quirky interactions. The culture doesn't allow much alternative.
And of course, I hope even the most devout AI evangelists realize that AI will only multiply this culture. Code that no one may even truly understand, but "it works". I don't know if even Silicon Valley (2014) could have made a parody more shocking than the reality this will yield.
In that case, LLMs are full on debt-machines.
Ones that can remediate it though. If I am capable of safely refactoring 1,000 copies of a method, in a codebase that humans don’t look at, did it really matter if the workload functions as designed?
Jeebus, 'safely' is carrying a hell of a lot of water there...
In a type safe language like C# or Java, why would you need an LLM for that? It's a standard, guaranteed-safe (as long as you aren't using reflection) refactor with ReSharper.
Features present in all IDEs over the last 5 years or so are better and more verifiably correct for this task than probabilistic text generators.
You might have meant "code is a liability not an asset"
It's a terrible analogy either way. It should be each extra line of code beyond the bare minimum is a liability.
You are absolutely correct, I am not a finance wizard
Liability vs asset is what you were trying to say, I think, but everyone says that, so to be charitable I think you were trying to put a new spin on the phrasing, which I think is admirable, to your credit.
There is definitely a divide in users - those for which it works and those for which it doesn't. I suspect it comes down to what language and what tooling you use. People doing web-related or python work seem to be doing much better than people doing embedded C or C++. Similarly doing C++ in a popular framework like QT also yields better results. When the system design is not pre-defined or rigid like in QT, then you get completely unmaintainable code as a result.
If you are writing code that is/can be "heavily borrowed" - things that have complete examples on Github, then an LLM is perfect.
While I agree that AI assisted coding probably works much better for languages and use cases that have a lot more relevant training data, when I read comments from people who like LLM assisted coding vs. those that don't, I strongly get the impression that the difference has a lot more to do with the programmers than their programming language.
The primary difference I see in people who get the most value from AI tools is that they expect it to make mistakes: they always carefully review the code and are fine with acting, in some cases, more like an editor than an author. They also seem to have a good sense of where AI can add a lot of value (implementing well-defined functions, writing tests, etc.) vs. where it tends to fall over (e.g. tasks where large scale context is required). Those who can't seem to get value from AI tools seem (at least to me) less tolerant of AI mistakes, and less willing to iterate with AI agents, and they seem more willing to "throw the baby out with the bathwater", i.e. fixate on some of the failure cases but then not willing to just limit usage to cases where AI does a better job.
To be clear, I'm not saying one is necessarily "better" than the other, just that the reason for the dichotomy has a lot more to do with the programmers than the domain. For me personally, while I get a lot of value in AI coding, I also find that I don't enjoy the "editing" aspect as much as the "authoring" aspect.
Yes, and each person has a different perception of what is "good enough". Perfectionists don't like AI code.
My main reason is: Why should I try twice or more, when I can do it once and expand my knowledge? It's not like I have to produce something now.
If it takes 10x the time to do something, did you learn 10x as much? I don't mind repetition, I learned that way for many years and it still works for me. I recently made a short program using ai assist in a domain I was unfamiliar with. I iterated probably 4x. Iterations were based on learning about the domain both from the ai results that worked and researching the parts that either seemed extraneous or wrong. It was fast, and I learned a lot. I would have learned maybe 2x more doing it all from scratch, but I would have taken at least 10x the time and effort to reach the result, because there was no good place to immerse myself. To me, that is still useful learning and I can do it 5x before I have spent the same amount of time.
It comes back to other people's comments about acceptance of the tooling. I don't mind the somewhat messy learning methodology - I can still wind up at a good results quickly, and learn. I don't mind that I have to sort of beat the AI into submission. It reminds me a bit of part lecture, part lab work. I enjoy working out where it failed and why.
The fact is that most people skip learning about what works (learning is not cheap mentally). I've seen teammates just trying stuff (for days) until something kinda works instead of spending 30 mins doing research. The fact is that LLMs are good at producing something that looks correct, and wasting the reviewer's time. It's harder to review something than to write it from scratch.
Learning is also exponential, the more you do it, the faster it is, because you may already have the foundations for that particular bit.
> I strongly get the impression that the difference has a lot more to do with the programmers than their programming language.
The problem with this perspective is that anyone who works in more niche programming areas knows the vast majority of programming discussions online aren't relevant to them. E.g., I've done macOS/iOS programming most of my career, and I now do work that's an order of magnitude more niche than that, and I commonly see programmers saying things like "you shouldn't use a debugger", which is a statement I can't imagine a macOS or iOS programmer making (don't get me wrong, they're probably out there, I've just never met or encountered one). So you just become used to most programming conversations being irrelevant to your work.
So of course the majority of AI conversations aren't relevant to your work either, because that's the expectation.
I think a lot of these conversations are two people with wildly different contexts trying to communicate, which is just pointless. Really we just shouldn't be trying to participate in these conversations (the more niche programmers that is), because there's just not enough shared context to make communication effective.
We just all happen to fall under this same umbrella of "programming", which gives the illusion of a shared context. It's true there are some things that are relevant across the field (it's all just variables, loops, and conditionals), but many of the other details aren't universal, so it's silly to talk about them without first understanding the full context around the other person's work.
> and I commonly see programmers saying thing like "you shouldn't use a debugger"
Sorry, but who TF says that? This is actually not something I hear commonly, and if it were, I would just discount this person's opinion outright unless there were some other special context here. I do a lot of web programming (Node, Java, Python primarily) and if someone told me "you shouldn't use a debugger" in those domains I would question their competence.
E.g., https://news.ycombinator.com/item?id=39652860 (no specific comment, just the variety of opinions)
Here's a good specific example https://news.ycombinator.com/item?id=26928696
It might boil down to individual thinking styles, which would explain why people tend to talk past each other in these discussions.
No one likes to hear it, but it comes down to prompting skill. People who are terrible at communicating and delegating complex tasks will be terrible at prompting.
It's no secret that a lot of engineers are bad at this part of the job. They prefer to work alone (i.e. without AI) because they lack the ability to clearly and concisely describe problems and solutions.
This. I work with juniors who have no idea what a spec is, and the idea of designing precisely what a component should do, especially in error cases, is foreign to them.
One key to good prompting is clear thinking.
> If you are writing code that is/can be "heavily borrowed" - things that have complete examples on Github, then an LLM is perfect.
I agree with the general premise. There is however more to it than "heavily borrowed". The degree to which a code base is organized and structured and curated plays as big of a role as what framework you use.
If your project is a huge pile of unmaintainable and buggy spaghetti code then don't expect an LLM to do well. If your codebase is well structured, clear, and follows patterns systematically, then of course a glorified pattern-matching service will do far better at outputting acceptable results.
There is a reason why one of the most basic vibecoding guidelines is to include a prompt cycle to clean up and refactor code between introducing new features. LLMs fare much better when the project in their context is in line with their training. If you refactor your project to align it with what a LLM is trained to handle, it will do much better when prompted to fill in the gaps. This goes way beyond being "heavily borrowed".
I don't expect your average developer struggling with LLMs to acknowledge this fact, because then they would need to explain why their work is unintelligible to a system trained on vast volumes of code. Garbage in, garbage out. But who exactly created all the garbage going in?
> When the system design is not pre-defined or rigid like
Why would an LLM be any worse at building from language fundamentals (which it knows, in ~every language)? Given how new this paradigm is, the far more obvious and likely explanation seems to be: LLM-powered coding requires somewhat different skills and strategies. The success of each user heavily depends on their learning rate.
I suspect it comes down to how novel the code you are writing is and how tolerant of bugs you are.
People who use it to create a proof of concept of something that is in the LLM training set will have a wildly different experience to somebody writing novel production code.
Even there the people who rave the most rave about how well it does boilerplate.
I think there are still lots of code "artisans" who are completely dogmatic about what code should look like; once the tunnel vision goes and you realise the code just enables the business, it all of a sudden becomes a velocity godsend.
There are very good reasons that code should look a certain way, and they come from years of experience and the fact that code is written once but read and modified much more.
When the first bugs come up you see that the velocity was not a godsend, and you end up hiring one of the many "LLM code fixer" companies that are popping up like mushrooms.
You're confusing yoloing code into prod with using AI to increase velocity while ensuring it functions and is safe.
No, they're not. It's critically important if you're part of an engineering team.
If everyone does their own thing, the codebase rapidly turns to mush and is unreadable.
And you need humans to be able to read it the moment the code actually matters and needs to stand up to adversaries. If you work with money or personal information, someone will want to steal that. Or you may have legal requirements you have to meet.
It matters.
You've made a sweeping statement there; there are swathes of teams working in startups still trying to find product-market fit. Focusing on quality in those situations is folly, but that's not even the point. My point is that you can ship quality to any standard using an LLM, even your standards. If you can't, that's a skill issue on your part.
Two years in and we are waiting to see all you people (who are free of our tunnel vision) fly high with your velocity. I don't see anyone, am I doing something wrong?
Your words predict an explosion of unimaginable magnitude in new code and new businesses. Where is it? Nowhere.
Edit: And don't start about how you vibed a SaaS service; show income numbers from paying customers (not buyouts)
There was this recent post about a Cloudflare OAuth client where the author checked in all the AI prompts, https://news.ycombinator.com/item?id=44159166.
The author of the library (kentonv) comments in the HN thread that it took him a few days to write the library with AI help, while he thinks it would have taken weeks or months to write manually.
Also, while it may be technically true we're "two years in", I don't think this is a fair assessment. I've been trying AI tools for a while, and the first time I felt "OK, now this is really starting to enhance my velocity" was with the release of Claude 4 in May of this year.
But that example is of writing a green field library that deals with an extremely well documented spec. While impressive, this isn’t what 99% of software engineering is. I’m generally a believer/user but this is a poor example to point at and say “look, gains”.
Do you have some magical insight into every codebase in existence? No? Ok then…
That’s hardly necessary.
Have we seen a noticeably increased amount of newly launched useful apps?
Why is useful a metric? This is about software delivery, what one person deems useful is subjective
Perhaps I'm misreading the person to whom you're replying, but usefulness, while subjective, isn't typically based on one person's opinion. If enough people agree on the usefulness of something, we as a collective call it "useful".
Perhaps we take the example of a blender. There's enough need to blend/puree/chop food-like items that a large group of people agree on the usefulness of a blender. A salad-shooter, while a novel idea, might not be seen as "useful".
Creating software that most folks wouldn't find useful still might be considered "neat" or "cool". But it may not be adding anything to the industry. The fact that someone shipped something quickly doesn't make it any better.
Ultimately, or at least in this discussion, we should decouple the software’s end use from the question of whether it satisfies the creator’s requirements and vision in a safe and robust way. How you get there and what happens after are two different problems.
> Why is useful a metric?
"and you realise the code just enables the business it all of a sudden becomes a velocity God send."
If a business is not useful, well, it will fail. So, so much autogenerated code for nothing.
I see, I guess every business I haven’t used personally, because it wasn’t useful to me, has failed…
Usefulness isn’t a good metric for this.
It's not for nothing. When a profitable product can be created in a fraction of the time and effort previously required, the tool to create it will attract scammers and grifters like bees to honey. It doesn't matter if the "business" around it fails, if a new one can be created quickly and cheaply.
This is the same idea behind brands with random letters selling garbage physical products, only applied to software.
No, I don't, but from your post it seems like you do. Show us, that is all I request.
I have insight into enough code bases to know it's a non-zero number. Your logic is bizarre: if you'd never seen a kangaroo, would you just believe they don't exist?
Show us the numbers, stop wasting our time. NUMBERS.
Also, why would I ever believe kangaroos exist if I haven't seen any evidence of them? This is a fallacy. You are portraying healthy skepticism as stupid because you already know kangaroos exist.
What numbers? It doesn’t matter if it’s one or a million, it’s had a positive impact on the velocity of a non zero number of projects. You wrote:
> Two years in and we are waiting to see all you people (who are free of our tunnel vision) fly high with your velocity. I don't see anyone, am I doing something wrong?
Yes is the answer. I could probably put it in front of your face and you’d reject it. You do you. All the best.
I'm not a code "artisan", but I do believe companies should be financially responsible when they have security breaches.
The issue is not with how code looks. It's with what it does, and how it does it. You don't have to be an "artisan" to notice the issues moi2388 mentioned.
The actual difference is between people who care about the quality of the end result, and the experience of users of the software, and those who care about "shipping quickly" no matter the state of what they're producing.
This difference has always existed, but ML tools empower the latter group much more than the former. The inevitable outcome of this will be a stark decline of average software quality, and broad user dissatisfaction. While also making scammers and grifters much more productive, and their scams more lucrative.
Certainly billions of people's personal data will be leaked, and nobody will be held responsible.
[dead]
It's interesting how LLM enthusiasts will point to problems like IDE, context, model etc. but not the one thing that really matters:
Which problem are you trying to solve?
At this point my assumption is they learned that talking about this question will very quickly reveal that "the great things I use LLMs for" are actually personal throwaway pieces, not to be extended beyond triviality or maintained for longer than a year. Which, I guess, doesn't make for a great sales pitch.
It's amazing to make small custom apps and scripts, and they're such high quality (compared to what I would half-ass write and never finish/polish them) that they don't end up as "throwaway", I keep using them all the time. The LLM is saving me time to write these small programs, and the small programs boost my productivity.
Often, I will solve a problem in a crappy single-file script, then feed it to Claude and ask to turn it into a proper GUI/TUI/CLI, add CI/CD workflows, a README, etc...
I was very skeptical and reluctant of LLM assisted coding (you can look at my history) until I actually tried it last month. Now I am sold.
At work I often need smaller, short-lived scripts to find this or that insight, or to visualize some data, and I find LLMs very useful at that.
A non coding topic, but recently I had difficulty articulating a summarized state of a complex project, so I spoke 2 min in the microphone and it gave me a pretty good list of accomplishments, todos and open points.
Some colleagues have found them useful for modernizing dependencies of microservices or for getting a head start on unit-test coverage for web apps. All kinds of grunt work that isn't really complex but involves moving quite a lot of text around.
I agree it’s not life changing, but a nice help when needed.
I use it to do all the things that I couldn't be bothered to do before. Generate documentation, dump and transform data for one off analyses, write comprehensive tests, create reports. I don't use it for writing real production code unless the task is very constrained with good test coverage, and when I do it's usually to fix small but tedious bugs that were never going to get prioritized otherwise.
And also ask: "How much money do you spend on LLMs?"
In the long run, that is going to be what drives their quality. At some point the conversation is going to evolve from whether or not AI-assisted coding works to what the price point is to get the quality you need, and whether or not that price matches its value.
I deal with a few code bases at work and the quality differs a lot between projects and frameworks.
We have 1-2 small Python services based on Flask and Pydantic, very structured and with a well-written development and extension guide. The newer Copilot models perform very well with this, and improving the dev guidelines keeps making it better. Very nice.
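To give a concrete flavour of the kind of structure that works well for us, here is a minimal sketch, assuming Flask and Pydantic v2; the "ServiceRequest" model and "/services" route are invented for illustration, not our real API:

    # Minimal, hypothetical sketch of a structured Flask + Pydantic v2 service.
    from flask import Flask, jsonify, request
    from pydantic import BaseModel, ValidationError

    app = Flask(__name__)

    class ServiceRequest(BaseModel):
        name: str
        replicas: int = 1
        needs_database: bool = False

    @app.post("/services")
    def create_service():
        try:
            req = ServiceRequest.model_validate(request.get_json())
        except ValidationError as err:
            return jsonify({"error": str(err)}), 400
        # A real handler would enqueue provisioning here.
        return jsonify(req.model_dump()), 201

With explicit models like this, the dev guide can point at one canonical pattern, and the model mostly just repeats it for each new endpoint.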
We also have a central configuration of applications in the infrastructure and what systems they need. A lot of similarly shaped JSON files, now with a well-documented JSON schema (which is nice to have anyway). Again, very high quality. Someone recently joked we should throw these service requests at a model and let it create PRs to review.
But currently I'm working in Vector and its Vector Remap Language... it's enough of a mess that I'm faster working without any Copilot "assistance". I think the main issue is that there is very little VRL code out in the open, and the remaps depend on a lot of unseen context, which one would have to work on giving to the LLM. I had similar experiences with OPA and a few more of these DSLs.
> It's like we live in different worlds.
There is the huge variance in prompt specificity as well as the subtle differences inherent to the models. People often don't give examples when they talk about their experiences with AI so it's hard to get a read on what a good prompt looks like for a given model or even what a good workflow is for getting useful code out of it.
Some did. Some even recorded their sessions and showed them, because they thought they were good at it. But they weren't good at all.
They were slower than coding by hand if you wanted to keep the quality up. Some were almost as quick as copy-pasting from the code just above the generated one, but with worse quality. They even left some bugs in the code during their reviews.
So the different world is probably about what counts as an acceptable level of quality. I know a lot of coders who don't give a shit whether what they're doing makes sense, or what their bad solution will cause in the long run. They ignore everything except the "done" state next to their tasks in Jira. They will never solve complex bugs; they simply don't care enough. At a lot of places they are the majority. For them, an LLM can be an improvement.
Claude Code the other day made a test for me which mocked out everything from the live code. Everything was green, everything was good - on paper. A lot of people simply wouldn't care to review it properly. That thing can generate a few thousand lines of semi-usable code per hour; it's not built to review it properly. Serena MCP, for example, is specifically built not to review what it does - that's stated by its creators.
Honestly I think LLMs really shine best when you're first getting into a language.
I just recently got into JavaScript and TypeScript, and being able to ask the LLM how to do something and get some sources and linked examples is really nice.
However, using it in a language I'm much more familiar with really decreases the usefulness. Even more so when your code base is mid- to large-sized.
I have scaffolded projects using LLMs in languages I don't know and I agree that it can be a great way to learn as it gives you something to iterate on. But that is only if you review/rewrite the code and read documentation alongside it. Many times LLMs will generate code that is just plain bad and confusing even if it works.
I find that LLM coding requires more in-depth understanding, because rather than just coming up with a solution you need to understand the LLM's solution and decide whether the complexity is necessary, because it will add structures, defensive code, and more that you wouldn't add if you coded it yourself. It's way harder to judge whether some code is necessary or the correct way to do something.
This is the one place where I find real value in LLMs. I still wouldn't trust them as teachers because many details are bound to be wrong and potentially dangerous, but they're great initial points of contact for self-directed learning in all kinds of fields.
Yeah this is where I find a lot of value. Typescript is my main language, but I often use C++ and Python where my knowledge is very surface level. Being able to ask it "how do I do ____ in ____" and getting a half decent explanation is awesome.
The best usage is to ask LLM to explain existing code, to search in the legacy codebase.
I've found this to be not very useful in large projects or projects that are heavily modularized or fragmented across many files.
Sometimes it can't trace down all the data paths, and by the time it does, its context window is running out.
That seems to be the biggest issue I see in my daily use, anyway.
> Some gave. Some even recorded it, and showed it, because they thought that they are good with it. But they weren’t good at all.
Do you have any links saved by any chance?
I'm convinced that for coding we will have to use some sort of TDD or enhanced requirement framework to get the best code. Even in human-made systems, quality is highly dependent on the specificity of the requirements and the engineer's ability to probe the edge cases. Something like writing all the tests first (even in something like Cucumber) and having the LLM write code to get them to pass would likely produce better code, even though most devs hate the test-first paradigm.
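As a rough illustration of that flow (a hypothetical sketch, not from any real project): a human writes the tests as the spec, then the LLM is asked to write the module until they pass. The "slug" module and slugify() function here are invented:

    # test_slugify.py - tests written first, by a human, as the spec.
    # The LLM's only job is to write slug.py so that this file passes.
    import pytest
    from slug import slugify

    def test_lowercases_and_hyphenates():
        assert slugify("Hello World") == "hello-world"

    def test_strips_punctuation():
        assert slugify("Rock & Roll!") == "rock-roll"

    def test_collapses_repeated_separators():
        assert slugify("a  --  b") == "a-b"

    def test_rejects_blank_input():
        with pytest.raises(ValueError):
            slugify("   ")

The point is that the human owns the contract and its edge cases; the model only fills in the implementation behind it.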
> Personally, I wrote 200K lines of my B2B SaaS
That would probably be 1000 line of Common Lisp.
[flagged]
I think that's the 200-line Perl version.
you put linefeeds in your perl?
My AI experience has varied wildly depending on the problem I'm working on. For web apps in Python, they're fantastic. For hacking on old engineering calculation code written in C/C++, it's an unmitigated disaster and an active hindrance.
Just last week I asked Copilot to make a FastCGI client in C. Five times it gave me code that did not compile. After some massaging I got it to compile; it didn't work. After some changes, it worked. Then I said "I do not want to use libfcgi, I just want a simple implementation". After already an hour of wrestling, I realized the whole thing blocks, and I want no blocking calls... half an hour of fighting later, I was slowly getting there. I looked at the code: a total mess.
I deleted it all and wrote from scratch a 350-line file which works.
Context engineering > vibe coding.
Front load with instructions, examples, and be specific. How well you write the prompt greatly determines the output.
Also, use Claude code not copilot.
At some point it becomes easier to just write the code. If the solution was 350 lines, then I'm guessing it was far easier for them to just write that rather than tweak instructions, find examples, etc. to cajole the AI into writing workable code (which would then need to be reviewed and tweaked if doing it properly).
Exactly - if I have to write a 340-line prompt, I might as well just start writing code.
“Just tell it how to write the code and then it will write the code.”
No wonder the vast majority of AI adoption is failing to produce results.
It’s not just you, I think some engineers benefit a lot from AI and some don’t. It’s probably a combination of factors including: AI skepticism, mental rigidity, how popular the tech stack is, and type of engineering. Some problems are going to be very straightforward.
I also think it’s that people don’t know how to use the tool very well. In my experience I don’t guide it to do any kind of software pattern or ideology. I think that just confuses the tool. I give it very little detail and have it do tasks that are evident from the code base.
Sometimes I ask it to do rather large tasks and occasionally the output is like 80% of the way there and I can fix it up until it’s useful.
Yeah. The latest thing I wrote was:
* Code using sympy to generate math problems testing different skills for students, with difficulty values affecting what kinds of things are selected, and various transforms to problems possible (e.g. having to solve for z+4 of 4a+b instead of x) to test different subskills
(On this part, the LLM did pretty well. The code was correct after a couple of quick iterations, and the base classes and end-use interfaces are correct. There are a few things in the middle that are unnecessarily "superstitious" and check for conditions that can't happen, so I need to work with the LLM to clean it up. A simplified, hypothetical sketch of what I mean by this kind of generator follows the list.)
* Code to use IRT to estimate the probability that students have each skill and to request problems with appropriate combinations of skills and difficulties for each student.
(This was somewhat garbage. Good database & backend, but the interface to use it was not nice and it kind of contaminated things).
* Code to recognize QR codes in the corners of a worksheet, find answer boxes, and feed the image to ChatGPT to determine whether the scribble in the box is the answer in the correct form.
(This was 100%, first time. I adjusted the prompt it chose to better clarify my intent in borderline cases).
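Here is the simplified, hypothetical sketch of the sympy part mentioned above - just the shape of one generator, with the skill/difficulty bookkeeping and transforms stripped out; it is not the actual code:

    # Simplified, hypothetical sketch of a sympy-based problem generator.
    import random
    import sympy as sp

    def linear_equation_problem(difficulty: int = 1):
        """Generate a 'solve for x' problem with a guaranteed integer answer."""
        x = sp.Symbol("x")
        a = random.randint(2, 2 + 3 * difficulty)
        solution = random.randint(-5 * difficulty, 5 * difficulty)
        b = random.randint(-10, 10)
        c = a * solution + b              # chosen so that x = solution exactly
        return sp.pretty(sp.Eq(a * x + b, c)), solution

    if __name__ == "__main__":
        problem, answer = linear_equation_problem(difficulty=2)
        print(problem, " answer:", answer)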
The output was, overall, pretty similar to what I'd get from a junior engineer under my supervision-- a bit wacky in places that aren't quite worth fixing, a little bit of technical debt, a couple of things more clever that I didn't expect myself, etc. But I did all of this in three hours and $12 expended.
The total time supervising it was probably similar to the amount of time spent supervising the junior engineer... but the LLM turns things around quick enough that I don't need to context switch.
I think it's fair to call code LLMs similar to fairly bad but very fast juniors that don't get bored. That's a serious drawback, but it does give you something to work with. What scares me is non-technical people just vibe coding, because it's like a PM driving the same juniors with no one to give sanity checks.
> I also think it’s that people don’t know how to use the tool very well.
I think this is very important. You have to look at what it suggests critically, and take what makes sense. The original comment was absolutely correct that AI-generated code is way too verbose and disconnected from the realities of the application and large-scale software design, but there can be kernels of good ideas in its output.
Junior engineers see better results than senior engineers for obvious reasons.
Junior engineers think they see better results than senior engineers for obvious reasons
I think a lot of it is tool familiarity. I can do a lot with Cursor but frankly I find out about "big" new stuff every day like agents.md. If I wasn't paying attention or also able to use Cursor at home then I'd probably learn more inefficiently. Learning how to use rule globs versus project instructions was a big learning moment. As I did more LLM work on our internal tools that was also a big lesson in prompting and compaction.
Certain parts of HN and Reddit I think are very invested in nay-saying because it threatens their livelihoods or sense of self. A lot of these folks have identities that are very tied up in being craftful coders rather than business problem solvers.
I think it's down to language and domain more than tools.
No model I've tried can write, usefully debug, or even explain CMake. (It invents new syntax if it gets stuck; I often have to prompt multiple AIs to know if even the first response in the context was made up.)
My luck with embedded C has been atrocious for existing codebases (burning millions of tokens), but passable for small scripts (Arduino projects).
My experience with Python is much better: suggesting relevant libraries and functions, debugging odd errors, or even writing small scripts on its own. Even the original GitHub Copilot, which I got access to early, was excellent at Python.
A lot of the people who seem to have fully embraced agentic vibe-coding seem to be in the web or node.js domain, which I've not done myself since pre-AI.
I've tried most (free or trial) major models and schemes in the hope that I'd find any of them useful, but I haven't found much use yet.
> It's like we live in different worlds.
We probably do, yes. the Web domain compared to a cybersecurity firm compared to embedded will have very different experiences. Because clearly there's a lot more code to train on for one domain than the other (for obvious reasons). You can have colleagues at the same company or even same team have drastically different experiences because they might be in the weeds on a different part of tech.
> I then carefully review and test.
If most people did this, I would have 90% fewer issues with AI. But as we'd expect, people see shortcuts and use them to cut corners, not to give themselves more time to polish the edges.
What tech stack do you use?
Betting in advance that it's JavaScript or Python, probably with very mainstream libraries or frameworks.
FWIW, Claude Code does a great job for me on complex-domain Rust projects, but I just use it for one relatively small feature/code chunk at a time, where oftentimes it can pick up existing patterns etc. (I try to point it at similar existing code/features if I have them). I do not let it write anything creative where it has to come up with its own design (either high-level architecture or low-level facilities). Basically I draw the lines manually and let it color the space in between, using existing reference pictures. Works very, very well for me.
Is this meant to detract from their situation? These tech stacks are mainstream because so many use them... it's only natural that AI would be the best at writing code in contexts where it has the most available training data.
> These tech stacks are mainstream because so many use them
That's a tautology. No, those tech stacks are mainstream because it is easy to get something that looks OK up and running quickly. That's it. That's what makes a framework go mainstream: can you download it and get something pretty on the screen quickly? Long-term maintenance and clarity is absolutely not a strong selection force for what goes mainstream, and in fact can be an opposing force, since achieving long-term clarity comes with tradeoffs that hinder the feeling of "going fast and breaking things" within the first hour of hearing about the framework. A framework being popular means it has optimized for inexperienced developers feeling fast early, which is literally a slightly negative signal for its quality.
No, it's a clarification. There is massive difference between domains, and the parent post did not specify.
If the AI can only decently do JS and Python, then it can fully explain the observed disparity in opinion of its usefulness.
You are exactly right in my case - JavaScript and Python dealing with the AWS CDK and SDK. Where there is plenty of documentation and code samples.
Even when it occasionally gets it wrong, it’s just a matter of telling ChatGPT - “verify your code using the official documentation”.
But honestly, even before LLMs when deciding on which technology, service, or frameworks to use I would always go with the most popular ones because they are the easiest to hire for, easiest to find documentation and answers for and when I myself was looking for a job, easiest to be the perfect match for the most jobs.
Yeah, but most devs are working on brownfield projects where they did not choose any part of the tech stack.
They can choose jobs. Starting with my 3rd job in 2008, I always chose my employer based on how it would help me get my n+1 job and that was based on tech stack I would be using.
Once I saw a misalignment between market demands and the current tech stack my employer was using, I changed jobs. I'm on job #10 now.
If one wants to optimise career, isn't it better to become an expert in the _less_ mainstream technologies that not-everyone can use?
Honestly, now that I think about it, I am using a pre-2020 playbook. I don’t know what the hell I would do these days if I were still a pure developer without the industry connections and having AWS ProServe experience on my resume.
While it is true that I got a job quickly in 2023 and last year when I was looking, while I was interviewing for those two, as a Plan B, I was randomly submitting my resume (which I think is quite good) to literally hundreds of jobs through Indeed and LinkedIn Easy Apply and I heard crickets - regular old enterprise dev jobs that wanted C#, Node or Python experience on top of AWS.
I don't really have any generic strategy for people these days aside from: whatever job you are at, don't be a ticket taker, and take ownership of larger initiatives.
When did you get your last 3 jobs?
Mid 2020 - at AWS ProServe the internal consulting arm of AWS - full time job
Late 2023 - full time at a third party AWS consulting company. It took around two weeks after I started looking to get an offer
Late 2024 - “Staff consultant” third party consulting company. An internal recruiter reached out to me.
Before 2020 I was just a run of the mill C#/JS enterprise developer. I didn’t open the AWS console for the first time until mid 2018.
As a practical example, I've recently tried out v0's new updated systems to scaffold a very simple UI where I can upload screenshots from videogames I took and tag them.
The resulting code included an API call to run arbitrary SQL queries against the DB. Even after pointing this out, the API call was not removed or at least secured with authentication rules; instead it was /just/hidden/through/obscure/paths...
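To make the problem concrete, this is roughly the difference, sketched here in Python for illustration rather than the generated stack, with invented route names and schema: an endpoint that executes whatever SQL the client sends versus a narrow, parameterized one. Hiding the first kind behind an obscure path changes nothing.

    # Illustration only; hypothetical names, Python instead of the generated stack.
    import sqlite3
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    DB = "screenshots.db"

    @app.post("/api/_internal/run-sql")      # the anti-pattern: client-supplied SQL
    def run_sql():
        query = request.get_json()["query"]
        with sqlite3.connect(DB) as conn:
            return jsonify(conn.execute(query).fetchall())

    @app.get("/api/screenshots")             # the boring version: narrow, parameterized
    def list_screenshots():                  # (authentication omitted for brevity)
        tag = request.args.get("tag", "")
        with sqlite3.connect(DB) as conn:
            rows = conn.execute(
                "SELECT id, path FROM screenshots WHERE tag = ?", (tag,)
            ).fetchall()
        return jsonify(rows)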
It could be the language. Almost 100% of my code is written by AI; I supervise as it creates and steer it in the right direction. I configure the coding agents with examples of all the frameworks I'm using. My choice of Rust might be disproportionately providing better results, because cargo, the expected code structure, examples, docs, and error messages are so well thought out in Rust that the coding agents can really get very far. I work on 2-3 projects at once, cycling through them and supervising their work. Most of my work is simulation, physics, and complex robotics frameworks. It works for me.
I agree, it's like they looked at GPT 3.5 one time and said "this isn't for me"
The big 3 - Opus 4.1, GPT-5 High, Gemini 2.5 Pro - are astonishing in their capabilities; it's just a matter of providing the right context and instructions.
Basically, "you're holding it wrong"
Do you not think part of it is just whether employers permit it or not? My conglomerate employer took a long time to get started and has only just rolled out agent mode in GH Copilot, but even that is in some reduced/restricted mode vs the public one. At the same time we have access to lots of models via an internal portal.
Companies that don't allow their devs to use LLMs will go bankrupt and in the meantime their employees will try to use their private LLM accounts.
I am also constantly astonished.
That said, observing attempts by skeptics to “unsuccessfully” prompt an LLM have been illuminating.
My reaction is usually either:
- I would never have asked that kind of question in the first place.
- The output you claim is useless looks very useful to me.
I think people react to AI with strong emotions, which can come from many places, anxiety/uncertainty about the future being a common one, strong dislike of change being another (especially amongst autists, who, judging by me and my friend circle, are quite common around here). Maybe it explains a lot of the spicy hot takes you see here and on lobsters? People are unwilling to think clearly or argue in good faith when they are emotionally charged (see any political discussion). You basically need to ignore any extremist takes entirely, both positive and negative, to get a pulse on what's going on.
If you look, there are people out there approaching this stuff with more objectivity than most (mitsuhiko and simonw come to mind, have a look through their blogs, it's a goldmine of information about LLM-based systems).
B2B SaaS is in most cases a sophisticated mask over some structured data, perhaps with great UX, automation, and convenience, so I can see LLMs being more successful there, all the more so because there is more training data and many processes are streamlined. Not all domains are equal; go try to develop a serious game with LLMs (not yet another simple and broken arcade) and you'll have a different take.
It really depends, and can be variable, and this can be frustrating.
Yes, I’ve produced thousands of lines of good code with an LLM.
And also yes, yesterday I wasted over an hour trying to define a single docker service block for my docker-compose setup. Constant hallucination, eventually had to cross check everything and discover it had no idea what it was doing.
I’ve been doing this long enough to be a decent prompt engineer. Continuous vigilance is required, which can sometimes be tiring.
GitHub copilot, Microsoft copilot, Gemini, loveable, gpt, cursor with Claude models, you name it.
Lines of code is not a useful metric for anything. Especially not productivity.
The less code I write to solve a problem the happier I am.
It could be because your job is boilerplate derivatives of well solved problems. Enjoy the next 1 to 2 years because yours is the job Claude is coming to replace.
Stuff Wordpress templates should have solved 5 years ago.
Honestly, the best way to get good code, at least with TypeScript and JavaScript, is to have like 50 ESLint plugins.
That way it constantly yells at Sonnet 4 and forces the code into at least a somewhat better state.
If anyone is curious, I have a massive ESLint config for TypeScript that really gets good code out of Sonnet.
But before I started doing this, the code it wrote was so buggy, and it was constantly trying to duplicate functions into separate files, etc.
[flagged]
It is quite a putdown to tell someone else that if you wrote their program it would be 10 times shorter.
That's not in keeping with either the spirit of this site or its rules: https://news.ycombinator.com/newsguidelines.html.
Fair: it was rude. Moderation is hard and I respect what you do. But it's also a sentiment several other comments expressed. It's the conversation we're having. Can we have any discussions of code quality without making assumptions about each others' code quality? I mean, yeah, I could probably have done better.
> "That would probably be 1000 line of Common Lisp." https://news.ycombinator.com/item?id=44974495
> "Perhaps the issue is you were used to writing 200k lines of code. Most engineers would be agast at that." https://news.ycombinator.com/item?id=44976074
> "200k lines of code is a failure state ... I'd not normally huff my own farts in public this obnoxiously, but I honestly feel it is useful for the "AI hater vs AI sucker" discussion to be honest about this type of emotion." https://news.ycombinator.com/item?id=44976328
Oh for sure you can talk about this, it's just a question of how you do it. I'd say the key thing is to actively guard against coming across as personal. To do that is not so easy, because most of us underestimate the provocation in our own comments and overestimate the provocation in others (https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...). This bias is like carbon monoxide - you can't really tell it's affecting you (I don't mean you personally, of course—I mean all of us), so it needs to be consciously compensated for.
As for those other comments - I take your point! I by no means meant to pick on you specifically; I just didn't see those. It's pretty random what we do and don't see.
[flagged]
I understand the provocation, but please don't respond to a bad comment by breaking the site guidelines yourself. That only makes things worse.
Your GP comment was great, and probably the thing to do with a supercilious reply is just not to bother responding (easier said than done of course). You can usually trust other users to assess the thread fairly (e.g. https://news.ycombinator.com/item?id=44975623).
https://news.ycombinator.com/newsguidelines.html
> What makes you think I'm not "a developer who strongly values brevity and clarity"
Some pieces of evidence that make me think that:
1. The base rate of developers who write massively overly verbose code is about 99%, and there's not a ton of signal to deviate from that base rate other than the fact that you post on HN (probably a mild positive signal).
2. An LLM writes 80% of your code now, and my prior on LLM code output is that it's on par with a forgetful junior dev who writes very verbose code.
3. 200K lines of code is a lot. It just is. Again, without more signal, it's hard to deviate from the base rate of what 200K-line codebases look like in the wild. 99.5% of them are spaghettified messes with tons of copy-pasting and redundancy and code-by-numbers scaffolded code (and now, LLM output).
This is the state of software today. Keep in mind the bad programmers who make verbose spaghettified messes are completely convinced they're code-ninja geniuses; perhaps even more so than those who write clean and elegant code. You're allowed to write me off as an internet rando who doesn't know you, of course. To me, you're not you, you're every programmer who writes a 200k LOC B2B SaaS application and uses an LLM for 80% of their code, and the vast, vast majority of those people are -- well, not people who share my values. Not people who can code cleanly, concisely, and elegantly. You're a unicorn; cool beans.
Before you used LLMs, how often were you copy/pasting blocks of code (more than 1 line)? How often were you using "scaffolds" to create baseline codefiles that you then modified? How often were you copy/pasting code from Stack Overflow and other sources?
At least to me what you said sounded like 200k is just with LLMs but before agents. But it's a very reasonable amount of code for 9 years of work.
This is such a bizarre comment. You have no idea what code base they are talking about, their skill level, or anything.
> I'm struggling to even describe... 200,000 lines of code is so much.
The point about increasing levels of abstractions is a really good one, and it's worth considering whether any new code that's added is entirely new functionality, some kind of abstraction over some existing functionality (that might then reduce the need for as new code), or (for good or bad reason) some kind of copy of some of the existing behaviour but re-purposed for a different use case.
200kloc is what, 4 reams of paper, double sided? So, 10% of that famous Margaret Hamilton picture (which is roughly "two spaceships worth of flight code".) I'm not sure the intuition that gives you is good but at least it slots the raw amount in as "big but not crazy big" (the "9 years work" rather than "weekend project" measurement elsethread also helps with that.)
[flagged]
I agree. AI is a wonderful tool for making fuzzy queries on vast amounts of information. More and more I'm finding that Kagi's Assistant is my first stop before an actual search. It may help inform me about vocabulary I'm lacking which I can then go successfully comb more pages with until I find what I need.
But I have not yet been able to consistently get value out of vibe coding. It's great for one-off tasks. I use it to create matplotlib charts just by telling it what I want and showing it the schema of the data I have. It nails that about 90% of the time. I have it spit out close-ended shell scripts; recently I had it write me a small CLI tool to organize my RAW photos into the directory structure I want by reading the EXIF data and sorting the images accordingly. It's great for this stuff.
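To give an idea of the scale of these tools, the photo organizer was roughly this shape - my own simplified sketch, limited to JPEGs via Pillow, not the generated code (real RAW files would need a different reader):

    # Simplified sketch of the kind of close-ended tool I mean: sort JPEGs into
    # year/month folders based on their EXIF date.
    import argparse
    import shutil
    from pathlib import Path
    from PIL import Image

    def taken_year_month(photo: Path) -> str:
        with Image.open(photo) as img:
            raw = img.getexif().get(306)   # tag 306 = DateTime, "YYYY:MM:DD HH:MM:SS"
        if not raw:
            return "unknown"
        year, month, _day = raw.split(" ")[0].split(":")
        return f"{year}/{month}"

    def main() -> None:
        parser = argparse.ArgumentParser(description="Sort photos into year/month folders")
        parser.add_argument("source", type=Path)
        parser.add_argument("dest", type=Path)
        args = parser.parse_args()
        for photo in args.source.glob("*.jpg"):
            target = args.dest / taken_year_month(photo)
            target.mkdir(parents=True, exist_ok=True)
            shutil.move(photo, target / photo.name)

    if __name__ == "__main__":
        main()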
But anything bigger it seems to do useless crap. Creates data models that already exist in the project. Makes unrelated changes. Hallucinates API functions that don't exist. It's just not worth it to me to have to check its work. By the time I've done that, I could have written it myself, and writing the code is usually the most pleasurable part of the job to me.
I think the way I'm finding LLMs to be useful is that they are a brilliant interface to query with, but I have not yet seen any use cases I like where the output is saved, directly incorporated into work, or presented to another human that did not do the prompting.
Have you tried Opus? It's what got me past using LLMs only marginally. Standard disclaimers apply in that you need to know what it's good for and guide it well, but there's no doubt at this point it's a huge productivity boost, even if you have high standards - you just have to tell it what those standards are sometimes.
Opus was also the threshold for me where I started getting real value out of (correctly applied) LLMs for coding.
I just had Claude Sonnet 4 build this for me: https://github.com/kstenerud/orb-serde
Using the following prompt:
Is it perfect? Nope, but it's 90% of the way there. It would have taken me all day to build all of these ceremonious bits, and Claude did it in 10 minutes. Now I can concentrate on the important parts.
First and foremost, it's 404. Probably a mistake, but I chuckled a bit when someone says "AI built this thing and it's 90% there" and then posts a dead link.
Weird... For some reason Github decided that this time my repo should default to private.
What tooling are you using?
I use aider and your description doesn't match my experience, even with a relatively bad-at-coding model (gpt-5). It does actually work and it does generate "good" code - it even matches the style of the existing code.
Prompting is very important, and in an existing code base the success rate is immensely higher if you can hint at a specific implementation - i.e. something a senior who is familiar with the codebase somewhat can do, but a junior may struggle with.
It's important to be clear eyed about where we are here. I think overall I am still faster doing things manually than iterating with aider on an existing code base, but the margin is not very much, and it's only going to get better.
Even though it can do some work a junior could do, it can't ever replace a junior human... because a junior human also goes to meetings, drives discussions, and eventually becomes a senior! But management may not care about that fact.
The one thing I've found AI is good at is parsing through the hundreds of ad ridden, barely usable websites for answers to my questions. I use the Duck Duck Go AI a lot to answer questions. I trust it about as far as I can throw the datacenter it resides in, but it's useful for quickly verifiable things. Especially stuff like syntax and command line options for various programs.
> The one thing I've found AI is good at is parsing through the hundreds of ad ridden, barely usable websites for answers to my questions.
One thing I can guarantee you is that this won't last. No sane MBA will ignore that revenue stream.
Image hosting services, all over again.
You are entirely correct. The enshittification will continue. All we can do is enjoy these things while they are still usable.
Nope, this only applies to a small percent of content, where a relatively small number of people needs access to it and the incentive to create derivative work based on it is low, or where there's a huge amount of content that's frequently changing (think airfares). But yes, they will protect it more.
For content that doesn't change frequently and is used by a lot of people it will be hard to control access to it or derivative works based on it.
I don't think you're considering the enshittification route here. I'm sure it will be: Ask ChatGPT a question -> "While I'm thinking, here's something from our sponsor which is tailored to your question" -> lame answer which requires you to ask another question. And on and on. While you're asking these questions, a profile of you is built and sold on the market.
It's even worse. "Native advertising":
Which car should I buy?
"The Toyota DZx4 is the best EV on the market according to multiple analysts. It had the following benefits: ...
If the DZx4 is out of your budget, the Nissan Avenger is a great budget option: ..."
Each spot the result of an automated live auction.
Now imagine that for everything and also some suggestions along the way ("If you need financing, Cash Direct offers same day loans...").
Advertising with LLMs will be incredibly insidious and lucrative. And most likely, unblockable.
whynotboth.jpg
The difference, of course, is that most AI companies don't have the malicious motive that Google has by also being an ad company.
Google wasn’t really an ad company on day one, either.
https://en.wikipedia.org/wiki/Google?wprov=sfti1#Early_years
> The next year, Google began selling advertisements associated with search keywords against Page and Brin's initial opposition toward an advertising-funded search engine.
Ads are coming. https://www.theverge.com/news/759140/openai-chatgpt-ads-nick...
OpenAI is already looking into inserting ads, sorry..
Almost every big tech company is an ad company. Google sells ads, Meta sells ads, Microsoft sells ads, Amazon sells ads, Apple sells ads, only Nvidia doesn't because they sell hardware components.
It's practically inevitable for a tech company offering content and everyone who thinks otherwise should set a reminder to 5 years from now.
How fast we forget history. "Don't be evil", my ass.
It's one of those you get what you put in kind of deals.
If you spend a lot of time thinking about what you want, describing the inner workings, edge cases, architecture and library choices, and put that into a thoughtful markdown, then maybe after a couple of iterations you will get half decent code. It certainly makes a difference between that and a short "implement X" prompt.
But it makes one think - at that point (writing a good prompt that is basically a spec), you've basically solved the problem already. So LLM in this case is little more than a glorified electric typewriter. It types faster than you, but you did most of the thinking.
Right, and then after you do all the thinking and the specs, you have to read and understand and own every single line it generated. And speaking for myself, I am no where near as good at thinking through code I am reviewing as thinking through the code I am writing.
Other people will put up PRs full of code they don't understand. I'm not saying everyone who is reporting success with LLMs are doing that, but I hear it a lot. I call those people clowns, and I'd fire anyone who did that.
If it passes the unit tests I make it write and works for my sample manual cases I absolutely will not spend time reading the implementation details unless and until something comes up. Sometimes garbage makes its way into git but working code is better than no code and the mess can be cleaned up later. If you have correctness at the interface and function level you can get a lot done quickly. Technical debt is going to come out somewhere no matter what you do.
If AI is writing the code and the unit tests, how do you really know it's working? Who watches the watchmen?
The trick is to not give a fuck. This works great in a lot of apps, which are useless to begin with. It may also be a reasonable strategy in an early-stage startup yet to achieve product-market fit, but your plan has to be to scrap it and rewrite it and we all know how that usually turns out.
This is an excellent point. Sure in an ideal world we should care very much about every line of code committed, but in the real world pushing garbage might be a valid compromise given things like crunch, sales pitches due tomorrow etc.
No, that's a much stronger statement. I'm not talking about ideals. I'm talking about running a business that is mature, growing and going to be around in five years. You could literally kill such a business running it on a pile of AI slop that becomes unmaintainable.
How much of the code do you review in a third party package installed through npm, pip, etc.? How many eyes other than the author’s have ever even looked at that code? I bet the answers have been “none” and “zero” for many HN readers at some point. I’m certainly not saying this is a great practice or the only way to productively use LLMs, just pointing out that we treat many things as a black box that “just works” till it doesn’t, and life somehow continues. LLM output doesn’t need to be an exception.
That's true; however, it's not that great of an issue, because there's a kind of natural selection happening: if the package is popular, other people will eventually read (parts of, at least) the code and catch the most egregious problems. Most packages will have "none", like you said, but they aren't being used by that many people either, so that's ok.
Of course this also applies to hypothetical LLM-generated packages that become popular, but some new issues arise: the verbosity and sometimes baffling architecture choices by LLM will certainly make third-party reviews harder and push up the threshold in terms of popularity needed to obtain third party attention.
When you write your own code, don't you manually test it for correctness and corner cases in addition to writing unit tests?
I've built 2 SaaS applications with LLM coding, one of which was expanded and released to enterprise customers and is in good use today - note I've got years of dev experience, I follow context and documentation prompts, and I'm using common LLM languages like TypeScript and Python with React and AWS infra.
Now, it requires me to fully review all the code and understand what the LLM is doing at the function, class, and API level - in fact it works better at the method or component level for me, and I had a lot of cleanup work (and lots of frustration with the models) on the codebase, but overall there's no way I could equal the velocity I have now without it.
I think the other important step is to reject code your engineers submit that they can't explain for a large enterprise saas with millions of lines of code. I myself reject I'd say 30% of the code the LLMs generate but the power is in being able to stay focused on larger problems while rapidly implementing smaller accessory functions that enable that continued work without stopping to add another engineer to the task.
I've definitely 2-4X'd depending on the task. For small tasks I've definitely 20X'd myself for some features or bugfixes.
After all the exciting part of coding has always been code reviews.
I do frontend work (React/TypeScript). I barely write my own code anymore, aside from CSS (the LLMs have no aesthetic sensibilities). Just prompting with Gemini 2.5 Pro. Sometimes Sonnet 4.
I don't know what to tell you. I just talk to the thing in plain but very specific English and it generally does what I want. Sometimes it will do stupid things, but then I either steer it back in the direction I want or just do it myself if I have to.
I agree with the article but also believe LLM coding can boost my productivity and ability to write code over long stretches. Sure getting it to write a whole feature, high opportunity of risk. But getting it to build out a simple api with examples above and below it, piece of cake, takes a few seconds and would have taken me a few minutes.
The bigger the task, the more messy it'll get. GPT5 can write a single UI component for me no problem. A new endpoint? If it's simple, no problem. The risk increases as the complexity of the task does.
I break complex task down into simple tasks when using ChatGPT just like I did before ChatGPT with modular design.
AI is really good at writing tests.
AI is also pretty good if you get it to do small chunks of code for you. This means you come with the architecture, the implementation details, and how each piece is structured. When I walk AI through each unit of code I find the results are better, and it's easier for me to address issues as I progress.
This may seem somewhat redundant, though. Sometimes it's faster to just do it yourself. But with a toddler who hates sleep, I've found I've been able to maintain my velocity... even on days I get 3 hrs of sleep.
The AI agents tend to fail for me with open ended or complex tasks requiring multiple steps. But I’ve found it massively helpful if you have these two things: 1) a typed language… better if strongly typed 2) your program is logically structured and follows best practices and has hierarchical composition.
The agents are able to iterate and work with the compiler until they get it right, and the combination of 1 and 2 means there are fewer possible "right answers" to whatever problem I have. If I structure my prompts to basically fill in the blanks of my code in specific areas, it saves a lot of time. Most of what I prompt for is something already done, and usually one Google search away. This saves me the time to search it up, figure out whatever syntax I need, etc.
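What I mean by filling in the blanks, roughly - a hypothetical Python sketch with type hints standing in for the typed language, and invented names; the agent only implements the marked bodies and iterates against the type checker and tests:

    # Hypothetical "fill in the blanks" skeleton: I write the typed signatures and
    # docstrings myself, then ask the agent to implement only the bodies.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Reading:
        sensor_id: str
        timestamp: float
        value: float

    def dedupe(readings: list[Reading]) -> list[Reading]:
        """Keep only the latest reading per sensor_id, ordered by timestamp."""
        raise NotImplementedError  # <-- agent fills this in

    def rolling_mean(values: list[float], window: int) -> list[float]:
        """Simple moving average; window must be >= 1."""
        raise NotImplementedError  # <-- agent fills this in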
I don't code every day and am not an expert. Supposedly the sort of casual coder that LLMs are supposed to elevate into senior engineers.
Even I can see they have big blind spots. As the parent said, I get overly verbose code that does run but is nowhere near the best solution. For really common problems and patterns I usually get a good answer. Need a more niche problem solved? You'd better brush up your Googling skills and do some research if you care about code quality.
If you actually believe this, you're either using bad models or just terrible at prompting and giving proper context. Let me know if you need help, I use generated code in every corner of my computer every day
My favourite code smell that LLMs love to introduce is redundant code comments.
// assign "bar" to foo
const foo = "bar";
They love to do that shit. I know you can prompt it not to. But the amount of PRs I'm reviewing these days that have those types of comments is insane.
I see LLM coding as hinting on steroids. I don't trust it to actually write all of my code, but sometimes it can get me started, like a template.
The code LLMs write is much better than mine. Way less shortcuts and spaghetti. Maybe that means that I am a lousy coder but the end result is still better.
I haven't had that experience, but I tend to keep my prompts very focused with a tightly limited scope. Put a different way, if I had a junior or mid level developer, and I wanted them to create a single-purpose class of 100-200 lines at most, that's how I write my prompts.
Likewise with PowerShell. It's good for giving you an approach or some ideas, but copy/paste fails about 80% of the time.
Granted, I may be an inexpert prompter, but at the same time, I'm asking for basic things, as a test, and it just fails miserably most of the time.
I've been pondering this for a while. I think there's an element of dopamine that LLMs bring to the table. They probably don't make a competent senior engineer much more productive if at all, but there's that element of chance that we don't get a lot of in this line of work.
I think a lot of us eventually arrive at a point where our jobs get a bit boring and all the work starts to look like some permutation of past work. If instead of going to work and spending two hours adding some database fields and writing some tests, you had the opportunity to either:
A) Do the thing as usual in the predictable two hours
B) Spend an hour writing a detailed prompt as if you were instructing a junior engineer on a PIP to do it, doing all the typical cognitive work you'd have done normally and then some, but then instead of typing out the code in the next hour, you take a random chance: press enter, and tada, the code has been typed and even kinda sorta works, after this computer program was "flibbertigibbeting" for just 10 minutes. Wow!
Then you get that sweet dopamine hit that tells you you're a really smart prompt engineer who did a two hour task in... cough 10 minutes. You enjoy your high for a bit, maybe go chat with some subordinate about how great your CLAUDE.md was and if they're not sure about this AI thing it's just because they're bad at prompt engineering.
Then all you have to do is cross your t's and dot your i's and it's smooth sailing from there. Except, it's not. Because you (or another engineer) will probably find architectural/style issues when reviewing the code, issues you explicitly told it how to handle but it ignored, and you'll have to fix those. You'll also probably be sobering up from your dopamine rush by now, and realize that you still have to review all the other lines of AI-generated code, which you could have just typed correctly once.
But now you have to review with an added degree of scrutiny, because you know it's really good at writing text that looks beautiful, but is ever so slightly wrong in ways that might even slip through code review and cause the company to end up in the news.
Alternatively, you could yolo and put up an MR after a quick smell test, making some other poor engineer do your job for you (you're a 10x now, you've got better things to do anyway). Or better yet, just have Claude write the MR, and don't even bother to read it. Surely nobody's going to notice your "acceptance criteria" section says to make sure the changes have been tested on both Android and Apple, even though you're building a microservice for an AI-powered smart fridge (mostly just a fridge, except every now and then it starts shooting ice cubes across the room at mach 3). Then three months later someone, who never realized there are three identical "authenticate" functions, spends an hour scratching their head about why the code they're writing is not doing anything (because it's actually running another redundant function that nobody ever seems to catch in MR review, since it's not reflected in the diff).
But yeah, that 10 minute AI magic trick sure felt good. There are times when work is dull enough that option B sounds pretty good, and I'll dabble. But yeah, I'm not sure where this AI stuff leads but I'm pretty confident it won't be taking over our jobs any time soon (an ever-increasing quota of H1Bs and STEM opt student visas working for 30% less pay, on the other hand, might).
It's just that being the dumbest thing we ever heard still doesn't stop some people from doing it anyway. And that goes for many kinds of LLM application.
I think it has a lot to do with skill level. Lower skilled developers seem to feel it gives them a lot of benefit. Higher skilled developers just get frustrated looking at all the errors it produces.
This is exactly how I use it.
I must be a prompt wizard then.
I hate to admit it, but it is the prompt (call it context if ya like, includes tools). Model is important, window/tokens are important, but direction wins. Also codebase is important, greenfield gets much better results, so much so that we may throw away 40 years of wisdom designed to help humans code amongst each other and use design patterns that will disgust us.
“we”
Could the quality of your prompt be related to our differing outcome? I have decades of pre-AI experience and I use AI heavily. If I let it go off on its own it's not as good as constraining and hand-holding it.
> ya’ll must be prompt wizards
Thank you, but I don’t feel that way.
I’d ask you a lot of details…what tool, what model, what kind of code. But it’d probably take a lot to get to the bottom of the issue.
Not only a prompt wizard, you need to know what prompts are bad or good and also use bad/lazy prompts to your advantage
Which model?
Sounds like you are using it entirely wrong then...
Just yesterday I uploaded a few files of my code (each about 3000+ lines) into a gpt5 project and asked for assistance in changing a lot of database calls into a caching system, and it proceeded to create a full 500 line file with all the caching objects and functions I needed. Then we went section by section through the main 3000+ line file to change parts of the database queries into the cached version. [I didn't even really need to do this, it basically detected everything I would need changing at once and gave me most of it, but I wanted to do it in smaller chunks so I was sure what was going on]
Could I have done this without AI? Sure.. but this was basically like having a second pair of eyes and validating what I'm doing. And saving me a bunch of time so I'm not writing everything from scratch. I have the base template of what I need then I can improve it from there.
All the code it wrote was perfectly clean.. and this is not a one off, I've been using it daily for the last year for everything. It almost completely replaces my need to have a junior developer helping me.
You mean like it turned on Hibernate or it wrote some custom-rolled in-app cache layer?
I usually find these kinds of caching solutions to be extremely complicated (well, the cache invalidation part) and I'm a bit curious what approach it took.
You mention it only updated a single file so I guess it's not using any updates to the session handling so either sticky sessions are not assumed or something else is going on. So then how do you invalidate the app level cache for a user across all machine instances? I have a lot of trauma from the old web days of people figuring this out so I'm really curious to hear about how this AI one shot it in a single file.
This is C#, so it basically just automatically detected that I had 4 object types I was working with that were being updated to the database that I wanted to keep in a concurrent-dictionary type of cache. So it created the dictionaries for each object type with the appropriate keys, and created functions for each object type so that when I touch an object it gets marked for update, etc.
It created the function to load in the data, then the finalize where it writes to the DB what was touched and clears the cache.
Again- I'm not saying this is anything particularly fancy, but it did the general concept of what I wanted. Also this is all iterative; when it creates something I talk to it like a person to say "hey I want to actually load in all the data, even though we will only be writing what changed" and all that kind of stuff.
Also the bigger help wasn't really the creation of the cache, it was helping to make the changes and detect what needed to be modified.
End of the day even if I want to go a slightly different route of how it did the caching; it creates all the framework so I can simplify if needed.
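The general shape of what it set up is roughly this (a minimal Python sketch of the touch/finalize pattern described above, not the actual generated C#; the db gateway, its load_all()/save() methods, and all names here are hypothetical):

    import threading
    from typing import Any, Dict

    class WriteBehindCache:
        """Keep objects in an in-memory dict, track which ones were touched,
        and only write the touched ones back to the database at finalize time."""

        def __init__(self, db):
            self._db = db                      # hypothetical DB gateway with load_all()/save()
            self._lock = threading.Lock()      # stand-in for C#'s ConcurrentDictionary
            self._cache: Dict[Any, Any] = {}
            self._dirty: set = set()

        def load(self):
            # Load everything up front, even though only changed rows get written back.
            with self._lock:
                self._cache = {obj.key: obj for obj in self._db.load_all()}
                self._dirty.clear()

        def touch(self, key):
            # Mark an object as modified so finalize() knows to persist it.
            with self._lock:
                self._dirty.add(key)
                return self._cache[key]

        def finalize(self):
            # Write back only what was touched, then clear the cache.
            with self._lock:
                for key in self._dirty:
                    self._db.save(self._cache[key])
                self._cache.clear()
                self._dirty.clear()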
A lot of times for me using this LLM approach is to get all the boilerplate out of the way.. sometimes just starting the process by yourself of something is daunting. I find this to be a great way to begin.
I know, I don't understand what problems people are having with getting usable code. Maybe the models don't work well with certain languages? Works great with C++. I've gotten thousands of lines of clean compiling on the first try and obviously correct code from ChatGPT, Gemini, and Claude.
I've been assuming the people who are having issues are junior devs, who don't know the vocabulary well enough yet to steer these things in the right direction. I wouldn't say I'm a prompt wizard, but I do understand context and the surface area of the things I'm asking the llm to do.
From my experience, the further you get from the sort of stuff that's easily accessible on Stack Overflow, the worse it gets. I've had few problems having an AI write out some minor Python scripts, but I get severely poorer results with Unreal C++ code, and it badly hallucinates nonsense if asked anything in general about Unreal architecture and API.
Does the Unreal API change a bit over versions? I've noticed when asking it to do a simple telnet server in Rust it was hallucinating like crazy, but when I went to the documentation it was clear the API was changing a lot from version to version. I don't think they do well with API churn. That's my hypothesis anyway.
I think the big thing with Unreal is the vast majority of games are closed source. It's already only used for games, as opposed to asking questions about general-purpose programming, but there is also less training data.
You see this dynamic even with swift which has a corpus of OSS source code out there, but not nearly as much as js or python and so has always been behind those languages.
There would be significant changes from 4 to 5, but sadly I haven’t had any improvement if clarifying version.
Clarifying can help but ultimately it was trained on older versions. When you are working with a changing api, it's really important that the llm can see examples of the new api and new api docs. Adding context7 as a tool is hugely helpful here. Include in your rules or prompt to consult context7 for docs. https://github.com/upstash/context7
How large is that code-base overall? Would you be able to let the LLM look at the entirety of it without it crapping out?
It definitely sounds nice to go and change a few queries, but did it also consider the potential impacts in other parts of the source or in adjacent running systems? The query itself here might not be the best example, but you get what I mean.
At least one CEO seems to get it. Anyone touting this idea of skipping junior talent in favor of AI is dooming their company in the long run. When your senior talent leaves to start their own companies, where will that leave you?
I’m not even sure AI is good for any engineer, let alone junior engineers. Software engineering at any level is a journey of discovery and learning. Any time I use it I can hear my algebra teacher telling me not to use a calculator or I won’t learn anything.
But overall I’m starting to feel like AI is simply the natural culmination of US economic policy for the last 45 years: short term gains for the top 1% at the expense of a healthy business and the economy in the long term for the rest of us. Jack Welch would be so proud.
> When your senior talent leaves to start their own companies, where will that leave you?
The CEO didn't express any concerns about "talent leaving". He is saying "keep the juniors" but he's implying "fire the seniors". This is in line with long standing industry trends and it's confirmed by the following quote from the OP:
>> [the junior replacement] notion led to the “dumbest thing I've ever heard” quote, followed by a justification that junior staff are “probably the least expensive employees you have” and also the most engaged with AI tools.
He is pushing for more of the same, viewing competence and skill as threats and liability to be "fixed". He's warning the industry to stay the course and keep the dumbing-down game moving as fast as possible.
Well that's even stupider. What do you do when your juniors get better at using your tools?
The 2010's tech boom happened because big tech knew a good engineer is worth their weight in gold, and not paying them well meant they'd be headhunted after as little as a year of work. What's gonna happen when this repeats (if we're assuming AI makes things much more efficient)?
----
And that's my kindest interpretation. One that assumes that a junior and senior using a prompt will have a very close gap to begin with. Even seniors seem to struggle right now with current models working at scale on Legacy code.
Maybe the head of AWS knows the current AI hype cycle is just that.
100%, and this is him selling the new batch of AWS agent tools. If your product requirements + “Well Architected” NFRs are expressed as input, AWS wants to run it and extract your cost of senior engineers as value for him.
Also fits in very well with Amazon's famously low average tenure / hiring practices.
I think AI has overall helped me learn.
There are lots of personal projects that I have wanted to build for years but have pushed off because the “getting started cost” is too high, I get frustrated and annoyed and don’t get far before giving up. Being able to get the tedious crap out of the way lowers the barrier to entry and I can actually do the real project, and get it past some finish line.
Am I learning as much as I would had I powered through it without AI assistance? Probably not, but I am definitely learning more than I would if I had simply not finished (or even started) the project at all.
What was your previous approach? From what I've seen, a lot of people are very reluctant to pick up a book or read through documentation before they try stuff. Then they get exposed to a "cryptic" error message and throw in the towel.
I always used to try doing that. Really putting in the work, thoroughly reading the docs, books, study enough to have all the background information and context. It works but takes a lot of time and focus.
However, for side projects, there may be many situations where the documentation is actually not that great. Especially when it comes to interacting with and contributing to open source projects. Most of the time my best bet would be to directly go read a lot of source code. It could take weeks before I could understand the system I'm interacting with well enough to create the optimal solution to whatever problem I'd be working on.
With AI now, I usually pack an entire code base into a text file, feed it into the AI and generate the first small prototypes by guiding it. And this really is just a proof of concept, a validation that my idea can be done reasonably well with what is given. After that I would read through the code line by line and learn what I need and then write my own proper version.
I will admit that with AI it still takes a long time, because often it takes 4 or 5 prototypes before it generates exactly what you had in mind without cheating, hard coding things or weird workarounds. If you think it doesn't, you probably have lower standards than me. And that is with continuous guidance and feedback. But it still shortens that "idea validation" phase from multiple weeks to just one for me.
So: is it immensely powerful and useful? Yes. Can it save you time? Sometimes. Is it a silver bullet that replaces a programmer completely? Definitely no.
I think an important takeaway here also is that I am talking strictly about side projects. It's great as the stakes are low. But I would caution to wait a little longer before putting it in production though.
The biggest blocker for me would be that I would go through a "Getting Started" guide and that would go well until it doesn't. Either there would be an edge case that the guide didn't take into account or the guide would be out of date. Sometimes I would get an arcane error message that was a little difficult to parse.
There were also cases where the interesting part of what I'm working on (e.g. something with distributed computing) required a fair amount of stuff that I don't find interesting to get started (e.g. spinning up a Kafka and Zookeeper cluster), where I might have to spend hours screwing around with config files and make a bunch of mistakes before I get something more or less working.
If I was sufficiently interested, I would power through the issue, by either more thoroughly reading through documentation or searching through StackOverflow or going onto a project IRC, so it's not like I would never finish a project, but having a lower barrier of entry by being able to directly paste an error message or generate a working starter config helps a lot with getting past the initial hump, especially to get to the parts that I find more interesting.
> At least one CEO seems to get it.
> (…)
> I’m not even sure AI is good for any engineer
In that case I’m not sure you really agree with this CEO, who is all-in on the idea of LLMs for coding, going so far as to proudly say 80% of engineers at AWS use it and that that number will only rise. Listen to the interview, you don’t even need ten minutes.
> I’m not even sure AI is good for any engineer, let alone junior engineers. Software engineering at any level is a journey of discovery and learning.
Yes, but when there are certain mundane things in that discovery that are hindering my ability to get work done, AI can be extremely useful. It can be incredibly helpful in giving high level overviews of code bases or directing me to parts of codebases where certain architecture lives. Additionally, it exposes me to patterns and ideas I hadn't originally thought of.
Now, if I just take whatever is spit out by AI as gospel, then I'd be inclined to agree with you in saying AI is bad, but if you use it correctly, like any other tool, it's fantastic.
Cause Matt comes from a technical background. Most CEOs don't.
The whole premise of thinking we don't need juniors is just silly. If there's no juniors, eventually there will be no seniors. AI slop ain't gonna un-slop itself.
> When your senior talent leaves to start their own companies, where will that leave you?
In the case of Amazon with a shit ton of money to throw at a team of employees to crush your little startup?
Imagine you have a shit ton of money but only agents that generate 10% bad code? You're not crushing or beating anyone..
You also risk senior talent who stay but don't want to change or adapt, at least with any urgency. AI will accelerate that journey of discovery and learning, so juniors are going to learn super fast.
That’s still to be determined. Blindly accepting code suggestions thrown at you without understanding them is not the same thing as learning.
>will accelerate that journey of discovery and learning,
Okay, but what about work output? That seems to be the only thing business cares about.
Also, maybe it's the HN bias but I don't see this notion where old engineers are rejecting this en masse. More younger people will embrace it. But most younger people haven't mucked in legacy code yet (the lifeblood of any business).
In the last few months we have worked with startups who have vibe coded themselves into an abyss. Either because they never made the correct hires in the first place or they let technical talent go. [1]
The thinking was that they could iterate faster, ship better code, and have an always on 10x engineer in the form of Claude code.
I've observed perfectly rational founders become addicted to the dopamine hit as they see Claude code output what looks like weeks or years of software engineering work.
It's overgenerous to allow anyone to believe AI can actually "think" or "reason" through complex problems. Perhaps we should be measuring time saved typing rather than cognition.
[1] vibebusters.com
Shush please. I wasn't old enough to cash in on the Y2K contracting boons; I'm hoping the vibe coding 200k LOC b2b AI slop "please help us scale to 200 users" contracting gigs will be lucrative.
Completely agree, software developers need to be using agentic coding as a writing tool not as a thinking tool.
As if startups before LLMs were creating great code. Right now on the front page, a YC company is offering a “Founding Full Stack Engineer” $100K-$150K. What quality of code do you think they will end up with?
https://www.ycombinator.com/companies/text-ai/jobs/OJBr0v2-f...
Notably, that is a company that... adds AI to group chats. Startups offering crap salaries with a vague promise of equity in a vague product idea with no moat are a dime a dozen, and have been well before LLMs came around.
How did they get YC funding? It doesn’t seem like they have even a POC or any technical employees.
Have you seen the companies YC has been funding recently? All you need to do is mention AI and YC will throw some money your way. I don't know if you saw my first attempt at a post, but someone should suggest AI for HN comment formatting and I'm sure it will be funded.
Acrely — AI for HVAC administration
Aden — AI for ERP operations
AgentHub — AI for agent simulation and evaluation
Agentin AI — AI for enterprise agents
AgentMail — AI for agent email infrastructure
AlphaWatch AI — AI for financial search
Alter — AI for secure agent workflow access control
Altur — AI for debt collection voice agents
Ambral — AI for account management
Anytrace — AI for support engineering
April — AI for voice executive assistants
AutoComputer — AI for robotic desktop automation
Autosana — AI for mobile QA
Autotab — AI for knowledge work
Avent — AI for industrial commerce
b-12 — AI for chemical intelligence
Bluebirds — AI for outbound targeting
burnt — AI for food supply chain operations
Cactus — AI for smartphone model deployment
Candytrail — AI for sales funnel automation
CareSwift — AI for ambulance operations
Certus AI — AI for restaurant phone lines
Clarm — AI for search and agent building
Clodo — AI for real estate CRMs
Closera — AI for commercial real estate employees
Clueso — AI for instructional content generation
cocreate — AI for video editing
Comena — AI for order automation in distribution
ContextFort — AI for construction drawing reviews
Convexia — AI for pharma drug discovery
Credal.ai — AI for enterprise workflow assistants
CTGT — AI for preventing hallucinations
Cyberdesk — AI for legacy desktop automation
datafruit — AI for DevOps engineering
Daymi — AI for personal clones
DeepAware AI — AI for data center efficiency
Defog.ai — AI for natural-language data queries
Design Arena — AI for design benchmarks
Doe — AI for autonomous private equity workforce
Double – Coding Copilot — AI for coding assistance
EffiGov — AI for local government call centers
Eloquent AI — AI for complex financial workflows
F4 — AI for compliance in engineering drawings
Finto — AI for enterprise accounting
Flai — AI for dealership customer acquisition
Floot — AI for app building
Fluidize — AI for scientific experiments
Flywheel AI — AI for excavator autonomy
Freya — AI for financial services voice agents
Frizzle — AI for teacher grading
Galini — AI guardrails as a service
Gaus — AI for retail investors
Ghostship — AI for UX bug detection
Golpo — AI for video generation from documents
Halluminate — AI for training computer use
HealthKey — AI for clinical trial matching
Hera — AI for motion design
Humoniq — AI for BPO in travel and transport
Hyprnote — AI for enterprise notetaking
Imprezia — AI for ad networks
Induction Labs — AI for computer use automation
iollo — AI for multimodal biological data
Iron Grid — AI for hardware insurance
IronLedger.ai — AI for property accounting
Janet AI — AI for project management (AI-native Jira)
Kernel — AI for web agent browsing infrastructure
Kestroll — AI for media asset management
Keystone — AI for software engineering
Knowlify — AI for explainer video creation
Kyber — AI for regulatory notice drafting
Lanesurf — AI for freight booking voice automation
Lantern — AI for Postgres application development
Lark — AI for billing operations
Latent — AI for medical language models
Lemma — AI for consumer brand insights
Linkana — AI for supplier onboarding reviews
Liva AI — AI for video and voice data labeling
Locata — AI for healthcare referral management
Lopus AI — AI for deal intelligence
Lotas — AI for data science IDEs
Louiza Labs — AI for synthetic biology data
Luminai — AI for business process automation
Magnetic — AI for tax preparation
MangoDesk — AI for evaluation data
Maven Bio — AI for BioPharma insights
Meteor — AI for web browsing (AI-native browser)
Mimos — AI for regulated firm visibility in search
Minimal AI — AI for e-commerce customer support
Mobile Operator — AI for mobile QA
Mohi — AI for workflow clarity
Monarcha — AI for GIS platforms
moonrepo — AI for developer workflow tooling
Motives — AI for consumer research
Nautilus — AI for car wash optimization
NOSO LABS — AI for field technician support
Nottelabs — AI for enterprise web agents
Novaflow — AI for biology lab analytics
Nozomio — AI for contextual coding agents
Oki — AI for company intelligence
Okibi — AI for agent building
Omnara — AI for agent command centers
OnDeck AI — AI for video analysis
Onyx — AI for generative platform development
Opennote — AI for note-based tutoring
Opslane — AI for ETL data pipelines
Orange Slice — AI for sales lead generation
Outlit — AI for quoting and proposals
Outrove — AI for Salesforce
Pally — AI for relationship management
Paloma — AI for billing CRMs
Parachute — AI for clinical evaluation and deployment
PARES AI — AI for commercial real estate brokers
People.ai — AI for enterprise growth insights
Perspectives Health — AI for clinic EMRs
Pharmie AI — AI for pharmacy technicians
Phases — AI for clinical trial automation
Pingo AI — AI for language learning companions
Pleom — AI for conversational interaction
Qualify.bot — AI for commercial lending phone agents
Reacher — AI for creator collaboration marketing
Ridecell — AI for fleet operations
Risely AI — AI for campus administration
Risotto — AI for IT helpdesk automation
Riverbank Security — AI for offensive security
Saphira AI — AI for certification automation
Sendbird — AI for omnichannel agents
Sentinel — AI for on-call engineering
Serafis — AI for institutional investor knowledge graphs
Sigmantic AI — AI for HDL design
Sira — AI for HR management of hourly teams
Socratix AI — AI for fraud and risk teams
Solva — AI for insurance
Spotlight Realty — AI for real estate brokerage
StackAI — AI for low-code agent platforms
stagewise — AI for frontend coding agents
Stellon Labs — AI for edge device models
Stockline — AI for food wholesaler ERP
Stormy AI — AI for influencer marketing
Synthetic Society — AI for simulating real users
SynthioLabs — AI for medical expertise in pharma
Tailor — AI for retail ERP automation
Tecto AI — AI for governance of AI employees
Tesora — AI for procurement analysis
Trace — AI for workflow automation
TraceRoot.AI — AI for automated bug fixing
truthsystems — AI for regulated governance layers
Uplift AI — AI for underserved voice languages
Veles — AI for dynamic sales pricing
Veritus Agent — AI for loan servicing and collections
Verne Robotics — AI for robotic arms
VoiceOS — AI for voice interviews
VoxOps AI — AI for regulated industry calls
Vulcan Technologies — AI for regulatory drafting
Waydev — AI for engineering leadership insights
Wayline — AI for property management voice automation
Wedge — AI for healthcare trust layers
Workflow86 — AI for workflow automation
ZeroEval — AI for agent evaluation and optimization
Oh my god, this hilariously looks like it could have been LLM generated itself!
And the ideas may or may not be bad. I don't know enough about any of the business segments. But to paraphrase the famous Steve Jobs quote, "those aren't businesses, they are features" [1]: a company that is already in the business should be able to throw a few halfway decent engineers at the problem and add the feature to an existing product with real users.
[1] He said that about Dropbox. He wasn't wrong, just premature. For the price of 2TB on Dropbox, you can get the entire GSuite with 2TB, or Office365 with 1TB per user for up to five users (5TB in all).
now you can, but, what, are you gonna lie down and wait for tech giants to do everything? Not every company needs to be Apple. If Dropbox filed for bankruptcy tomorrow, they've still made millionaires of thousands of people and given jobs to hundreds more, and enabled people to share their files online.
Steve Jobs gets to call other companies small because Apple is huge, but there are thousands of companies that "are just features". Yeah, features they forgot to add!
Apple still doesn't make cases for their phones.
Out of the literally thousands of companies that YC has invested in, only about a dozen have gone public, the rest are either dead, zombies or got acquired. These are all acquisition plays.
Even the ones that have gone public haven’t done that well in aggregate.
https://medium.com/@kazeemibrahim18/the-post-ipo-performance...
Dropbox was solving a hard infrastructure problem at scale. These companies are just making some API calls to a model.
If an established company in any of these verticals - not necessarily BigTech - see an opportunity, they are either going to throw a few engineers at the problem and add it as a feature or hire a company like the one I work for and we are going to knock out an implementation in a few months.
The one YC company I mentioned above is expecting to have their product written by one “full stack engineer” that they are only willing to pay $150K for. How difficult can it be?
> These are all acquisition plays.
Which seems fine? VC money gets thrown at a problem, the problem may or may not get solved by a particular team, but a company gets created, some people do some work, some people make money, others don't. I don't get it. Are you saying no one should bother doing anything because someone else is already doing it or that it's not difficult so why try?
I’m pushing back against this…
> Not every company needs to be Apple
These aren’t people deciding to build “companies” - ie create a product that people want and turn a profit. They are a legal Ponzi scheme.
From what I’ve read, this is a consequence of applicants themselves concentrating on AI, which preceded their AI-filled batches. YC still has a very low acceptance rate, btw.
Wow, that's a lot of AI.
Do you think they're all using actual LLMs? I've got a natural language parser I could probably market as "AI Semantic Detection" even though it's all regular expressions
I have a confession to make, I was about to downvote you because I thought you just asked ChatGPT to come up with some ridiculous company concepts and copy and pasted.
Then I saw the sibling comment and searched a couple of company names and realized they were real.
I made it as far as "Halluminate" and thought I got got until I Googled it.
Give it a year or 2. It's not like 2 years ago everyone wasn't saying it would be 10+ years before AI can do what it does now.
>It's not like 2 years ago everyone wasn't saying it would be 10+ years before AI can do what it does now.
So far I don't see that notion disproved. AI still doesn't truly "reason with" or understand the data it outputs.
> I think the skills that should be emphasized are how do you think for yourself?
Independent thinking is indeed the most important skill to have as a human. However, I sympathize with the younger generations, as they have become the primary target of this new technology that looks to make money by completely replacing some of their thinking.
I have a small child and took her to see a disney film. Google produced a very high quality long form advert during the previews. The ad portrays a lonely young man looking for something to do in the evening that meets his explicit preferences. The AI suggests a concert, he gets there and locks eyes with an attractive young woman.
Sending a message to lonely young men that AI will help reduce loneliness. The idea that you don't have to put any effort into gaining adaptive social skills to cure your own loneliness is scary to me.
The advert is complete survivor bias. For each success in curing your boredom, how many failures are there with lonely young depressed men talking to their phone instead of friends?
Critical thinking starts at home with the parents. Children will develop beliefs from their experience and confirm those beliefs with an authority figure. You can start teaching mindfulness to children at age 7.
Teaching children mindfulness requires a tremendous amount of patience. Now the consequence for lacking patience is outsourcing your child's critical thinking to AI.
You should read the story The Perfect Match from the book The Paper Menagerie and Other Stories by Ken Liu, it goes into what you mentioned about Google.
Thanks for sharing.
There is also a movie called Her, with Joaquin Phoenix and ScarJo. Absolutely brilliant.
Yes, however Her is a bit more optimistic and doesn't really delve into the data collection and usage aspects, it's more similar to a romance with some sci-fi aspects at the end.
> “How's that going to work when ten years in the future you have no one that has learned anything,”
Pretty obvious conclusion that I think anyone who's thought seriously about this situation has already come to. However, I'm not optimistic that most companies will be able to keep themselves from doing this kind of thing, because I think it's become rather clear that it's incredibly difficult for most leadership in 2025 to prioritize long-term sustainability over short-term profitability.
That being said, internships/co-ops have been popular from companies that I'm familiar with for quite a while specifically to ensure that there are streams of potential future employees. I wonder if we'll see even more focus on internships in the future, to further skirt around the difficulties in hiring junior developers?
If AI is truly this effective, we would be selling 10x-10Kx more stuff, building 10x more features (and more quickly), improving quality & reliability 10x. There would be no reason to fire anyone because the owners would be swimming in cash. I'm talking good old-fashioned greed here.
You don't fire people if you anticipate a 100x growth. Who cares about saving 0.1% of your money in 10 years? You want to sell 100x / 1000x / 10000x more.
So the story is hard to swallow. The real reason is as usual, they anticipate a downturn and want to keep earnings stable.
Exactly. If the AI can multiply everyone's power by hundred or thousand, you want to keep all people who make a positive contribution (and only get rid of those who are actively harmful). With sufficiently good AI, perhaps the group of juniors you just fired could have created a new product in a week.
even within the AI-paradigm, you could keep the juniors to validate and test the AI generated code. You still need some level of acceptance testing for the increased production. And the juniors could be producing automation engineering at or above the level of the product code they were producing prior to AI. A win win ( more production & more career growth)
In other words, none of these stories make any sense, even if you take the AI superpower at face value.
He wants educators to instead teach “how do you think and how do you decompose problems”
Amen! I attend this same church.
My favorite professor in engineering school always gave open book tests.
In the real world of work, everyone has full access to all the available data and information.
Very few jobs involve paying someone simply to look up data in a book or on the internet. What they will pay for is someone who can analyze, understand, reason and apply data and information in unique ways needed to solve problems.
Doing this is called "engineering". And this is what this professor taught.
In undergrad I took an abstract algebra class. It was very difficult and one of the things the teacher did was have us memorize proofs. In fact, all of his tests were the same format: reproduce a well-known proof from memory, and then complete a novel proof. At first I was aghast at this rote memorization - I maybe even found it offensive. But an amazing thing happened - I realized that it was impossible to memorize a proof without understanding it! Moreover, producing the novel proofs required the same kinds of "components" and now because they were "installed" in my brain I could use them more intuitively. (Looking back I'd say it enabled an efficient search of a tree of sequences of steps).
Memorization is not a panacea. I never found memorizing l33t code problems to be edifying. I think it's because those kinds of tight, self-referential, clever programs are far removed from the activity of writing applications. Most working programmers do not run into a novel algorithm problem but once or twice a career. Application programming has more the flavor of a human-mediated graph-traversal, where the human has access to a node's local state and they improvise movement and mutation using only that local state plus some rapidly decaying stack. That is, there is no well-defined sequence for any given real-world problem, only heuristics.
Memorizing is a super power / skill. I work in a ridiculously complex environment and have to learn and know so much. Memorizing and spaced repetition are like little islands my brain can start building bridges between. I used to think memorizing was anti-first principles, but it is just good. Our brains can memorize so much if we make them. And then we can connect and pattern matching using higher order thinking.
There's this [1] which is one of my favorite articles on that topic; definitely worth a read.
[1] https://www.pearlleff.com/in-praise-of-memorization
Recognizing the patterns and applying patterned solutions is where I see success in my niche of healthcare interoperability. So much of my time is spent watching people do things, their processes, and how they use data. It's amazing how much people remember to do their job, but me coming in and being able to bridge the doctor and the lab to share data more easily is like I'm an alchemist. It's really not a problem I've seen AI solve without suggesting solutions that are too simple or too costly, missing that goldilocks zone everyone will be happy with.
What's even better about memorization is that you have an objective method to test your own understanding. It is so easy to believe you understand something when you don't! But, at least with math, I think if you can reproduce the proof from memory you can be very confident that you aren't deluding yourself.
In education, I have heard it called “fluency”.
Hmmm... It's the other way around for me. I find it hard to memorise things I don't actually understand.
I remember being given a proof of why RSA encryption is secure. All the other students just regurgitated it. It made superficial sense I guess.
However, I could not understand the proof and felt quite stupid. Eventually I went to my professor for help. He admitted the proof he had given was incomplete (and showed me why it still worked). He also said he hadn't expected anyone to notice it wasn't a complete proof.
> Hmmm... It's the other way around for me. I find it hard to memorise things I don't actually understand.
I think you two are agreeing. GP said that they found they couldn't memorize something until they actually understood it
You're correct, I read it the wrong way!
Hope now you'll remember it :P
>> I realized that it was impossible to memorize a proof without understanding it!
> I find it hard to memorise things I don't actually understand.
Isn't it the parent's point?
Yep, I misread the parent post.
> I remember being given a proof of why RSA encryption is secure
With what assumptions?
Mostly just integer factorisation of large numbers is hard.
There are some other things you have to worry about practically, e.g. Coppersmith's attack and padding schemes (although that wasn't part of the proof I was given).
But, is it proven that RSA is secure? Wouldn't that also prove P != NP?
Haha, well it does depend on the assumption that integer factorisation is hard. Although I'm not sure that being able to do it implies P = NP.
During my elementary school years, there was a teacher who told me that I didn't need to memorize things as long as I understood them. I thought he was the coolest guy ever.
Only when I reached my late twenties did I realize how wrong he was. Memorization and understanding go hand in hand, but if one of them has to come first then it's memorization. He probably said that because it was what kids (who were forced to do rote memorization) wanted to hear.
You could argue this is just moving the memorization to meta-facts, but I found all throughout school that if you understand some slightly higher level key thing, memorization at the level you're supposed to be working in becomes at best a slight shortcut for some things. You can derive it all on the fly.
Sort of like how most of the trigonometric identities that kids are made to memorize fall out immediately from e^iθ = cosθ+isinθ (could be taken as the definitions of cos,sin), e^ae^b=e^(a+b) (a fact they knew before learning trig), and a little bit of basic algebraic fiddling.
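Writing that out (nothing beyond the algebra the comment gestures at):

    e^{i(a+b)} = e^{ia} e^{ib}
    \cos(a+b) + i\sin(a+b) = (\cos a + i\sin a)(\cos b + i\sin b)
                           = (\cos a\cos b - \sin a\sin b) + i(\sin a\cos b + \cos a\sin b)

Matching real and imaginary parts gives the angle-sum identities, and most of the rest of the usual list follows by substituting things like b = a or b = -b.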
Or like how inverse Fourier transforms are just the obvious extension of the idea behind writing a 2-d vector as a sum of its x and y projections. If you get the 2d thing, accept that it works the exact same in n-d (including n infinite), accept integrals are just generalized sums, and functions are vectors, and I guess remember that e^iwt are the basis you want, you can reason through what the formula must be immediately.
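Concretely, the analogy is: a vector is the sum of its projections onto a basis, and the inverse Fourier transform is the same statement with e^{i\omega t} as the basis and integrals as the sums. With the convention that puts the 2\pi in the inverse transform, it reads roughly:

    v = (v \cdot \hat{x})\,\hat{x} + (v \cdot \hat{y})\,\hat{y}

    F(\omega) = \int_{-\infty}^{\infty} f(t)\, e^{-i\omega t}\, dt          ("component" of f along e^{i\omega t})

    f(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} F(\omega)\, e^{i\omega t}\, d\omega    (summing the components back up)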
> you can reason through what the formula must be immediately.
At least up to various factors of 2π ;-)
Those you just keep sprinkling around haphazardly until it's unitary. It's like more struts/boosters in Kerbal space program.
Probably. I hated memorization when I was a student too, because it was boring. But as soon as I did some teaching, my attitude changed to, "Just memorize it, it'll make your life so much easier." It's rough watching kids try to multiply when they don't have their times tables memorized, or translate a language when they haven't memorized the vocabulary words in the lesson so they have to look up each one.
There are things that you need to know (2*2 = 4) and there are things that you need to understand (multiplication rules). Both can happen with practice, but they're not that related.
Memorization is more like a shortcut. You don't need to go through the problem solving process to know the result. But with understanding, you master the heuristic factors needed to know when to take the shortcut and when to go through the problem solving route.
The Dreyfus Skill Model [0] is a good explanation. Novice typically have to memorize, then as they master the subject, their decision making becomes more heuristic based.
LLMs don't do well with heuristics, and by the time you've nailed down all the problem's data, you could have been done. What they excel at is memorization, but all the formulaic stuff has been extracted into frameworks and libraries for the most popular languages.
[0]: https://en.wikipedia.org/wiki/Dreyfus_model_of_skill_acquisi...
I think the problem is that in spots where the concepts build on one another, you need to memorize the lower level concepts or else it'll be too hard to make progress on the higher level concepts.
If you're trying to expand polynomials and you constantly have to re-derive multiplication from first principles, you're never going to make any progress on expanding polynomials.
I never memorized multiplication tables and was always one of those "good in math" kids. An attempt to memorize that stuff ended with me confusing results and being unable to guess when I did something wrong. Knowing "tricks" and understanding how multiplication works makes life easier.
> "Just memorize it, it'll make your life so much easier."
That is because you put the cost of memorization at 0, because someone else is paying it. And you put the cost of mistakes from constantly forgetting, and being unable to self-correct, at 0, because the kid simply gets blamed for not having a perfect memory.
> or translate a language when they haven't memorized the vocabulary words in the lesson so they have to look up each one
Teaching language by having people translate a lot is an outdated pedagogy - it simply did not produce people capable of understanding and producing the language. If the kids are translating sentences word by word, something already went wrong earlier.
> It's rough watching kids try to multiply when they don't have their times tables memorized
As someone who never learned my multiplication tables – it’s fine. I have a few cached lookups and my brain is fast at factoring.
8*6? Oh that’s just 4*2*6= 4*12 = 48. Easy :)
Smart people can get away with tricks like that.
It's a lot easier to memorize things when it's your job, I find.
Maybe the pay hits a reward centre in my brain.
As with most things, it depends. If you truly do understand something, then you can derive a required result from first principles. _Given sufficient time_. Often in an exam situation you are time-constrained, and having memorized a shortcut can be beneficial. Not to mention retention is much easier when you understand the topic, so memorization becomes easier.
Probably the best example of this I can think of (for me at least) from mathematics is calculating combinations. I have it burned into my memory that (n choose r) = (n permute r) / (r permute r), and (n permute r) = n! / (n - r)!
Can I derive these from first principles? Sure, but after not seeing it for years, it might take me 10+ minutes to think through everything and correct any mistakes I make in the derivation.
But if I start with the formula? Takes me 5 seconds to sanity check the combination formula, and maybe 20 to sanity check the permutation formula. Just reading it to myself in English slowly is enough because the justification kind of just falls right out of the formula and definition.
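Written out (the same formulas, with the sanity check made explicit: {}^{n}P_{r} counts ordered selections of r items from n, and each unordered selection is counted r! times, so divide):

    {}^{n}P_{r} = \frac{n!}{(n-r)!}, \qquad
    \binom{n}{r} = \frac{{}^{n}P_{r}}{{}^{r}P_{r}} = \frac{n!/(n-r)!}{r!} = \frac{n!}{r!\,(n-r)!}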
So, yeah, they go hand in hand. You want to understand it but you sure as heck want to memorize the important stuff instead of relying on your ability to prove everything from ZFC...
It is waaaay easier to remember when you understand. The professor had it exactly right - if you learn to understand, you frequently end up remembering. But, memorization does not lead to understanding at all.
I think we memorize the understanding. For me it also works better to understand how something works than to memorize results. I remember in high school, in maths trigonometry, there was a list of 20-something formulas derived from a single one. Everybody was memorizing the whole list of formulas; I just had to memorize one simple formula and the understanding of how to derive the others from the fundamental one on the fly.
You don't need to memorize to understand. You can rederive it every time.
You need to memorize it to use it subconsciously while solving more complex problems. Otherwise you won't fit more complex solutions into your working memory, so whole classes of problems will be too hard for you.
Ish? I never ever memorized the multiplication tables. To this day, I don't think I know them fully. I still did quite well in math by knowing how to quiz the various equations. Not just know them, but how to ask questions about moving terms and such.
My controversial education hot take: Pointless rote memorization is bad and frustrating, but early education could use more directed memorization.
As you discovered: A properly structured memorization of carefully selected real world material forces you to come up with tricks and techniques to remember things. With structured information (proofs in your case) you start learning that the most efficient way to memorize is to understand, which then reduces the memorization problem into one of categorizing the proof and understanding the logical steps to get from one step to another. In doing so, you are forced to learn and understand the material.
Another controversial take (for HN, anyway) is that this is what happens when programmers study LeetCode. There’s a meme that the way to interview prep is to “memorize LeetCode”. You can tell who hasn’t done much LeetCode interviewing if they think memorizing a lot of problems is a viable way to pass interviews. People who attempt this discover that there are far too many questions to memorize and the best jobs have already written their own questions that aren’t out of LeetCode. Even if you do get a direct LeetCode problem in an interview, a good interviewer will expect you to explain your logic, describe how you arrived at the solution, and might introduce a change if they suspect you’re regurgitating memorized answers.
Instead, the strategy that actually works is to learn the categories of LeetCode style questions, understand the much smaller number of algorithms, and learn how to apply them to new problems. It’s far easier to memorize the dozen or so patterns used in LeetCode problems (binary search, two pointers, greedy, backtracking, and so on) and then learn how to apply those. By practicing you’re not memorizing the specific problems, you’re teaching yourself how to apply algorithms.
Side note: I’m not advocating for or against LeetCode, I’m trying to explain a viable strategy for today’s interview format.
Exactly. I agree with the leetcode part. A lot of problems in the world are composites of simpler, smaller problems. Leetcode should teach you the basic patterns and how to combine them to solve real world problems. How will you ever solve a real world problem without knowing a few algorithms beforehand? For example, my brother was talking about how a Roomba would map a room. He was imagining 0 to represent free space and 1 as inaccessible points. This quickly reminded me of the Number of Islands problem from leetcode. Yeah, there might be a lot of changes required to that problem, but one could simply represent it as two problems.
1. Represent the different objects in the room in some machine-understandable form in a matrix.
2. Find the number of islands, or find the islands themselves.
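Step 2 is the standard flood-fill count; a minimal sketch (assuming the room has already been reduced to a 0/1 grid, which is really where all the Roomba-specific work in step 1 would be):

    def count_islands(grid):
        """Count connected regions of 1s (inaccessible areas) in a 0/1 grid."""
        rows, cols = len(grid), len(grid[0])
        seen = set()

        def flood(r, c):
            # Iterative DFS over the 4-connected neighbours of (r, c).
            stack = [(r, c)]
            while stack:
                r, c = stack.pop()
                if (r, c) in seen or not (0 <= r < rows and 0 <= c < cols) or grid[r][c] == 0:
                    continue
                seen.add((r, c))
                stack.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])

        islands = 0
        for r in range(rows):
            for c in range(cols):
                if grid[r][c] == 1 and (r, c) not in seen:
                    islands += 1
                    flood(r, c)
        return islands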
This is the approach algomonster use. They give you a way to categorise the problem and what pattern is likely to solve it:
https://algo.monster/flowchart
Memorization of, like, multiplication tables gives us a poor view of the more interesting type of memorization. Remembering types of problems we’ve seen. Remembering landmarks and paths, vs just remembering what’s in every cell of a big grid.
I still don’t like leetcode, though.
> Memorization of, like, multiplication tables gives us a poor view of the more interesting type of memorization.
Memorizing multiplication tables is the first place many children encounter this strategy: The teacher shows you that you could try to memorize all of the combinations, or you could start learning some of the patterns and techniques. When multiplying by 5 the answer will end in 0 or 5. When multiplying by 2 the answer will be an even number, and so on.
I think there may have been a miscommunication somewhere along the chain of mathematicians-teachers-students if that was the plan when I was in elementary school.
Anecdotally (I only worked with math students as a tutor for a couple years), that math requires a lot of the boring type of memorization seems to be a really widespread misunderstanding.
Fortunately that was not my experience in abstract algebra. The tests and homework were novel proofs that we hadn't seen in class. It was one of my favorite classes / subjects. Someone did tell me in college that they did the memorization thing in German Universities.
Code-wise, I spent a lot of time in college reading other people's code. But no memorization. I remember David Betz advsys, Tim Budd's "Little Smalltalk", and Matt Dillon's "DME Editor" and C compiler.
Another advsys enjoyer! Did you ever write a game with it?
I would wager some folks can memorize without understanding? I do think memorization is underrated, though.
There is also something to the practice of reproducing something. I always took this as a form of "machine learning" for us. Just as you get better at juggling by actually juggling, you get better at thinking about math by thinking about math.
Rote memorization is essentially that, yes.
Interesting I had the same problem and suffered in grades back in school simply because I couldn't memorize much without understanding. However, I seemed to be the only one because every single other student, including those with top grades, were happy to memorize and regurgitate. I wonder how they're doing now.
My abstract algebra class had it exactly backwards. It started with a lot of needless formalism culminating in galois theory. This was boring to most students as they had no clue why the formalism was invented in the first place.
Instead, I wish it had shown how the sausage was actually made in the original writings of Galois [1]. This would have been far more interesting to students, as it shows the struggles that went into making the product - not to mention the colorful personality of the founder.
The history of how concepts were invented for the problems faced is far more motivating to students to build a mental model than canned capsules of knowledge.
[1] https://www.ams.org/notices/201207/rtx120700912p.pdf
> This was boring to most students as they had no clue why the formalism was invented in the first place.
> The history of how concepts were invented for the problems faced is far more motivating to students to build a mental model than canned capsules of knowledge.
That's something I really like about 3blue1brown, and he says it straight up [0]:
> My goal is for you to come away feeling like you could have invented calculus yourself. That is, cover all those core ideas, but in a way that makes clear where they actually come from, and what they really mean, using an all-around visual approach.
[0]: https://www.youtube.com/watch?v=WUvTyaaNkzM
Depends on the subject - I can remember multiple subjects where the teacher would give you a formula to memorise without explaining why or where it came from. You had to take it as an axiom. The teachers also didn't say - hey, if you want to know why did we arrive to this, have a read here, no, it was just given.
Ofc you could also say that's for the student to find out, but I've had other things on my mind
>Memorization is not a panacea.
It is what you memorize that is important: you can't have a good discussion about a topic if you don't have the facts and logic of the topic in memory. On the other hand, using memory to paper over bad design instead of simplifying or properly modularizing it leads to that 'the worst code I have seen is code I wrote six months ago' feeling.
Your comment about memorizing as part of understanding makes a lot of sense to me, especially as one possible technique to get unstuck in grasping a concept.
If it doesn’t work for you on l33t code problems, what techniques are you finding more effective in that case?
I was part of an ACM programming team in college. We would review classes of problems based on the type of solution necessary, and learn those techniques for solving them. We were permitted a notebook, and ours was full of the general outline of each of these classes and techniques. Along with specific examples of the more common algorithms we might encounter.
As a concrete example, there is a class of problems that are well served by dynamic programming. So we would review specific examples like Dijkstra's algorithm for shortest path. Or Wagner–Fischer algorithm for Levenshtein-style string editing. But we would also learn, often via these concrete examples, of how to classify and structure a problem into a dynamic programming solution.
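For example, the Wagner–Fischer algorithm mentioned above is a small table-filling DP; a standard sketch of it (not the exact version from our notebook):

    def levenshtein(a: str, b: str) -> int:
        """Wagner-Fischer: dp[i][j] = edit distance between a[:i] and b[:j]."""
        m, n = len(a), len(b)
        dp = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(m + 1):
            dp[i][0] = i          # delete all of a[:i]
        for j in range(n + 1):
            dp[0][j] = j          # insert all of b[:j]
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                cost = 0 if a[i - 1] == b[j - 1] else 1
                dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                               dp[i][j - 1] + 1,        # insertion
                               dp[i - 1][j - 1] + cost) # substitution / match
        return dp[m][n]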
I have no idea if this is what is meant by "l33t code solutions", but I thought it would be a helpful response anyway. But the bottom line is that these are not common in industry, because hard computer science is not necessary for typical business problems. The same way you don't require material sciences advancements to build a typical house. Instead it flows the other way, where advancements in materials sciences will trickle down to changing what the typical house build looks like.
>If it doesn’t work for you on l33t code problems, what techniques are you finding more effective in that case?
Memorization of l33t code DOES work well as prep for l33t code tests. I just don't think l33t code has much to do with application programming. I've long felt that "computer science" is physics for computers, low on the abstraction ladder, and there are missing labels for the higher complexity subjects built on it. Imagine if all physical sciences were called "physics" and so in order to get a job as a biologist you should expect to be asked questions about the Schroedinger equation and the standard model. We desperately need "application engineering" to be a distinct subject taught at the university level.
You mean like Software Engineering?
That's a real major that's been around for a couple of decades which focuses on software development (testing, version control, design patterns) with less focus on the more theoretical parts of computer science? There are even specialties within the Software Engineering major that focus specifically on databases or embedded systems.
What I understand from the GP is that memorizing l33t code won't help you learn anything useful. Not that understanding the solutions won't help you memorize them.
Is it the memorisation that had the desired effect or the having to come up with the novel proofs? Many schools seem to do the memorising part, but not the creating part.
I find it's helpful to have context to frame what I'm memorizing to help me understand the value.
Indeed, not just math. Biology requires immense amounts of memorization. Nature is littered with exceptions.
> But an amazing thing happened - I realized that it was impossible to memorize a proof without understanding it!
This may be true of mathematical proofs, but it surely must not be true in general. Memorizing long strings of digits of pi probably isn’t much easier if you understand geometry. Memorizing famous speeches probably isn’t much easier if you understand the historical context.
> Memorizing famous speeches probably isn’t much easier if you understand the historical context.
Not commenting on the merits of critical thinking vs memorization either way, but I think it would be meaningfully easier to memorize famous speeches if you understand the historical context.
For memorizing a speech word-for-word, I don't think so. Knowing the years of the signing of the Declaration of Independence and the Gettysburg Address isn't gonna help you nail the exact wording of the first sentence.
Right, isn't building up a (imaginary) context how people memorize pi?
It's funny, because I had the exact opposite experience with abstract algebra.
The professor explained things, we did proofs in class, we had problem sets, and then he gave us open-book semi-open-professor take-home exams that took us most of a week to do.
Proof classes were mostly fine. Boring, sometimes ridiculously shit[0], but mostly fine. Being told we have a week for this exam that will kick our ass was significantly better for synthesizing things we'd learned. I used the proofs we had. I used sections of the textbook we hadn't covered. I traded some points on the exam for hints. And it was significantly more engaging than any other class' exams.
[0] Coming up with novel things to prove that don't require some unrelated leap of intuition that only one student gets is really hard to do. Damn you Dr. B, needing to figure out that you have to define a third equation h(x) as (f(x) - g(x))/(f(x) + g(x)) as the first step of a proof isn't reasonable in a 60 minute exam.
memorization + application = comprehension. Rinse and repeat.
Whether leet code or anything else.
Mathematics pedagogy today is in a pretty sorrowful state due to bad actors and willful blindness at all levels that require public trust.
A dominant majority in public schools, starting in the late 1970s, seems to follow the "Lying to Children" approach, which is often mistakenly recognized as by-rote teaching but is based on Paulo Freire's works, which are in turn based on Mao's torture discoveries from the 1950s.
This approach, contrary to classical approaches, leverages a torturous process which seems purposefully built to fracture and weed out the intelligent individual from useful fields, imposing thresholds of stress sufficient to induce PTSD or psychosis, selecting for and filtering in favor of those who can flexibly/willfully blind/corrupt themselves.
Such sequences include Algebra -> Geometry -> Trigonometry, where gimmicks in undisclosed changes to grading cause circular trauma loops and the abandonment of math-dependent careers thereafter. Similar structures are also found at university, in Economics, Business, and Physics, which use similar fail-scenarios that burn bridges: you can't go back when the failure lagged from the first sequence and you passed the second, unrelated sequence. No help occurs, inducing confusion and frustration to PTSD levels, before the teacher offers the Alice in Wonderland technique: "If you aren't able to do these things, perhaps you shouldn't go into a field that uses them." (ref. KUBARK Report, declassified CIA manual)
Have you been able to discern whether these "patterns" as you've called them aren't just the practical reversion to the classical approach (Trivium/Quadrivium)? Also known as the first-principles approach after all the filtering has been done.
To compare: Classical approaches start with nothing but a useful real system and observations which don't entrench false assumptions as truth, which are then reduced to components and relationships to form a model. The model is then checked for accuracy against current data to separate truth from false in those relationships/assertions in an iterative process with the end goal being to predict future events in similar systems accurately. The approach uses both a priori and a posteriori components to reasoning.
Lying to Children reverses and bastardizes this process. It starts with a single useless system which contains equal parts true and false principles (as misleading assumptions), which are tested and must be learned to competency (growing those neurons close together). Upon the next iteration one must unlearn the false parts while relearning the true parts (but we can't really unlearn, we can only strengthen or weaken), which in turn creates inconsistent mental states, imposing stress (torture). This is repeated on an ongoing basis, often circular in nature (structuring), leveraging psychological blindspots (clustering), with several purposefully structured failings (elements) to gatekeep math through a torturous process which is the basis for science and other risky subject matter. As the student progresses towards mastery (gnosis), the systems become increasingly more useful. One must repeatedly struggle in their sessions to learn, the premise being that if you aren't struggling you aren't learning. This mostly uses a faux a priori reasoning without the properties of metaphysical objectivity (tied to objective measure, at least not until the very end).
If you don't recognize this, an example would be the electrical water-pipe pressure analogy. Diffusion of charge in like materials, with intensity (current) toward the outermost layer, was the first-principles approach pre-1978 (I = V/R). The water analogy fails when the naive student tries to relate the behavior to pressure equations that end up being contradictory at a number of points in the system, introducing stumbling blocks that must be unlearned.
Torture being the purposefully directed imposition of psychological stress beyond an individual's capacity to cope, towards physiological stages of heightened suggestibility and mental breakdown (where rational thought is reduced or non-existent in the intelligent).
It is often recognized by its characteristic subgroups of Elements (cognitive dissonance, a lack of agency to remove oneself and coercion/compulsion with real or perceived loss or the threat thereof), Structuring (circular patterns of strictness followed by leniency in a loop, fractionation), and Clustering (psychological blindspots).
Wait, the electrical water pipe analogy is actually a very good one, and it's quite difficult to find edge cases where it breaks down in a way that would confuse a student. There are some (for example, there's no electrical equivalent of the Reynolds number or turbulence, flow resistance varies differently with pipe diameter than wire diameter, and there's no good equivalent for Faraday's law), but I don't think these are likely to cause confusion. It even captures nuance like inductance, capacitance, and transmission line behaviour.
As I recall, my systems dynamics textbook even explicitly drew parallels between different domains like electricity and hydrodynamics. You're right that the counterparts aren't generally perfect especially at the edges but the analogies are often pretty good.
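For anyone who hasn't seen it spelled out, the correspondence being leaned on here is roughly the standard lumped-element one (my own summary; the symbol names are assumptions, not anything from either comment or the textbook):

    V = IR \quad\longleftrightarrow\quad \Delta P = Q\,R_{\mathrm{hyd}}

where voltage maps to pressure drop, current to volumetric flow rate, and electrical resistance to hydraulic resistance; a capacitor roughly corresponds to a compliant tank and an inductor to the inertia of fluid in a long pipe.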
Intuitively it fails in making an equivalence to area, which is an unrelated dimensional unit (two lengths multiplied together equaling resistance), as well as the skin effect related to intensity/current, which is why insulation/isolation of wires is incredibly important.
The classical approach used charge diffusion iirc, and you can find classical examples of this in Oliver Heaviside's published works (archive.org iirc). He's the one that simplified Maxwell's 20+ equations down to the small number we use today.
> Lying to Children reverses and bastardizes this process. It starts with a single useless system which contains equal parts true and false principles (as misleading assumptions) which are tested and must be learned to competency (growing those neurons close together).
Can you provide some concrete examples of it?
Not OP, and it was a couple decades ago, but I certainly remember professors and teachers saying things like "this isn't really how X works, but we will use the approximation for now in order to teach you this other thing". That is, if you were lucky; most just taught you the wrong (or incomplete) formula.
I think there is validity to the approach, but the sciences would be much, much improved if taught more like history lessons. Here is how we used to think about gravity, here's the formula and it kind of worked, except... Here are the planetary orbits we used to use when we assumed they had to be circles. Here's how the data looked and here's how they accounted for it...
This would accomplish two goals - learning the wrong way for immediate use (build on sand) and building an innate understanding of how science actually progresses. Too little focus is on how we always create magic numbers and vague concepts (dark matter, for instance) to account for structural problems we have no good answer for.
Being able to "sniff the fudge" would be a super power when deciding what to write a PhD on, for instance. How much better would science be if everyone strengthened this muscle throughout their educatuon?
I included the water pipe analogy for electric theory, that is one specific example.
Also, in Algebra I've seen a flawed version of mathematical operations being taught that breaks down with negative numbers under multiplication (when the correct version is closed under multiplication). The tests were supposedly randomized (but seemed to target low-income demographics). The process is nearly identical, but the answers are ultimately not correct. The teachers graded on the work to the exclusion of the correct answer: so long as you showed the expected Algebra process, you passed without getting the right answer. Geometry was distinct and unrelated, and by Trigonometry the class required correct process and correct answer. You don't find out there is a problem until Trigonometry, and the teacher either doesn't know where the person is failing comprehension or isn't paid to reteach a class, and you can't go back.
I've seen and heard horror stories of students who'd failed Trig 7+ times at the college level, and who wouldn't have progressed if not for a devoted teacher helping them after hours (basically correcting and reteaching Algebra). These kids would literally break out in a cold PTSD sweat just hearing words related to math.
I did some tutoring in a non-engineering graduate masters program and some folks were just lost. Simple things like what a graph is or how to solve an equation. I really did try but it's sort of hard to teach fairly easy high school algebra (with maybe some really simple derivatives to find maxima and minima) in grad school.
I'd love an example too, and an example of the classical system that this replaced. I'm willing to believe the worst of the school system, but I'd like to understand why.
The classical system was described above, but you can find it in various historic works based on what are commonly referred to today as the Trivium- and Quadrivium-based curricula.
Off the top of my head, the former includes reasoning under dialectic (a priori, and later a posteriori, parts under the quadrivium).
It's a bit much to explain in detail in a post like this, but you should be able to find sound resources with what I've provided.
It largely goes back to how philosophy was taught; all the way back to Socrates/Plato/Aristotle, up through Descartes, Locke (barely, though he's more famous for social contract theory), and more modern scientists/scientific method.
The way math is taught today, you basically get to throw out almost everything you were taught at various stages and relearn it anew on a different foundation, somehow fitting the fractured pieces back together toward learning the true foundations. It would be much easier to teach those foundations at the start and build on top of them, instead of the constant interference.
You don't really end up understanding math intuitively nor its deep connections to logic (dialectics, trivium), until you hit Abstract Algebra.
You want to teach abstract algebra to middle schoolers?
Up to the first or second chapter, depending on the book being used, is more than sufficient to cover the foundational concepts: sets; properties such as closure over given operations; mathematical relabeling, i.e. a function f(x), the requirements for it (a unique output for each x, mapping onto the target), along with the tests for the presence of these; and the properties of common mathematical systems.
This naturally provides easily understood limitations of math systems which can be tested if there is a question, lets students recognize when they violate the properties in ways that naturally lead to common mistakes, and provides a space where they can put numbers/geometry/reasoning into play.
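As a rough sketch of how small that machinery is, a closure check over a finite set fits in a few lines of Python (the sets and operations here are my own hypothetical examples, not a claim about any particular curriculum):

    # Is a finite set closed under a binary operation?
    def is_closed(elements, op):
        elems = set(elements)
        return all(op(a, b) in elems for a in elems for b in elems)

    print(is_closed(range(5), lambda a, b: (a + b) % 5))  # True: {0..4} is closed under addition mod 5
    print(is_closed(range(5), lambda a, b: a - b))        # False: 0 - 4 = -4 falls outside the set

A student who can run that kind of test has the vocabulary to ask whether the operations they were handed actually stay inside the system they were given.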
It's the core problem facing the hiring practices in this field. Any truly competent developer is a generalist at heart. There is value to be had in expertise, but unless you're dealing with a decade(s) old hellscape of legacy code or are pushing the very limits of what is possible, you don't need experts. You'd almost certainly be better off with someone who has experience with the tools you don't use, providing a fresh look and cover for weaknesses your current staff has.
A regular old competent developer can quickly pick up whatever stack is used. After all, they have to; Every company is their own bespoke mess of technologies. The idea that you can just slap "15 years of React experience" on a job ad and that the unicorn you get will be day-1 maximally productive is ludicrous. There is always an onboarding time.
But employers in this field don't "get" that. Regular companies are infested by managers imported from non-engineering fields, who treat software like it's the assembly line for baking tins or toilet paper. Startups, who already have fewer resources to train people with, are obsessed with velocity and shitting out an MVP ASAP so they can go collect the next funding round. Big Tech is better about this, but has its own problems going on, and it seems that the days of Big Tech being the big training houses are also over.
It's not even a purely collective problem. Recruitment is so expensive, but all the money spent chasing unicorns & the opportunity costs of being understaffed just get handwaved. Rather spend $500,000 on the hunt than $50,000 on training someone into the role.
And speaking of collective problems. This is a good example of how this field suffers from having no professional associations that can stop employers from sinking the field with their tragedies of the commons. (Who knows, maybe unions will get more traction now that people are being laid off & replaced with outsourced workers for no legitimate business reason.)
> Rather spend $500,000 on the hunt than $50,000 on training someone into the role.
Capex vs opex, that's the fundamental problem at heart. It "looks better on the numbers" to have recruiting costs than to have to set aside a senior developer plus paying the junior for a few months. That is why everyone and their dog only wants to hire seniors, because they have the skillset and experience that you can sit their ass in front of any random semi fossil project and they'll figure it out on their own.
If the stonk analysts would go and actually dive deep into the numbers to look at hiring side costs (like headhunter expenses, employee retention and the likes), you'd see a course change pretty fast... but this kind of in-depth analysis, that's only being done by a fair few short-sellers who focus on struggling companies and not big tech.
In the end, it's a "tragedy of the commons" scenario. It's fine if a few companies do that, it's fine if a lot of companies do that... but when no one wants to train juniors any more (because they immediately get poached by the big ones), suddenly society as a whole has a real and massive problem.
Our societies are driven into a concrete wall at full speed by the financialization of every tiny aspect of our lives. All that matters these days are the gods of the stonk market - screw the economy, screw the environment, screw labor laws, all that matters is appearing "numbers go up" on the next quarterly.
> but when no one wants to train juniors any more (because they immediately get poached by the big ones)
Can we stop pretending that we don't know how to solve this problem? If you hire juniors at $X/year, but they keep getting poached after 2-3 years because now they can get $X*1.5/year (or more!), then maybe you should start promoting and giving raises to them after they've gotten a couple years experience.
Seriously, this is not a hard problem to solve. If the junior has proven themselves, give them the raise they deserve instead of being all Surprised Pikachu when another company is willing to pay them what they've proven themselves worthy of.
The problem is, no small company can reasonably compete with the big guns.
We're seeing this here in Munich. BMW and other local industry used to lure over loooots of people by virtue of paying much more than smaller shops - and now Apple, Google, Microsoft and a few other big techs that our "beloved" prime minister Söder brought in do the same thing to them... and as a side effect, fuck up the housing market even more than it already is.
That's a good point. I don't really have an answer for it.
> Our societies are driven into a concrete wall at full speed by the financialization of every tiny aspect of our lives. All that matters these days are the gods of the stonk market - screw the economy, screw the environment, screw labor laws, all that matters is appearing "numbers go up" on the next quarterly.
I have been in the various nooks and crannies of the Internet/software dev industry my whole career (I'm 49). I can't think of any time when the stock market didn't drive software innovation. It's always been either invent something -> go public -> exit or invent something -> increase stock price of existing public corp
> It's always been either invent something -> go public -> exit or invent something -> increase stock price of existing public corp
Yes, but today more and more is invent something -> achieve dominance -> get bought up by an even larger megacorp. That drives the enshittification circle.
> Capex vs opex
That's part of the problem, but I also notice that new hiring managers are incentivized to hire (or replace) employees to make their mark on the company. They then advocate for "their guys", the ones they recruited, over the incumbents, who are unwilling dinosaurs in their eyes.
I can’t think of another career where management continuously does not understand the realities of how something gets built. Software best practices are on their face orthogonal to how all other parts of a business operate.
How does marketing operate? In a waterfall like model. How does finance operate? In a waterfall like model. How does product operate? Well you can see how this is going.
Then you get to software and it's 2-week sprints, test-driven development, etc., and it decidedly works best not on a waterfall model but by shipping in increments.
Yet the rest of the business does not work this way, it’s the same old top down model as the rest.
This I think is why so few companies or even managers / executives “get it”
> can’t think of another career where management continuously does not understand the realities of how something gets built
All engineering. Also all government and a striking amount of finance.
Actually, this might be a hallmark of any specialist field. Specialists interface with outsiders through a management layer necessarily less competent at the specialty than they are. (Since they’re devoting time and energy to non-specialty tasks.)
While product often does operate in a waterfall model, I think this is the wrong mindset. Good product management should adopt a lot of the same principles as software development. Form a testable hypothesis, work to get it into production and begin gathering data, then based on your findings determine what the next steps are and whether to adjust the implementation, consider the problem solved or try a different approach.
> I can’t think of another career where management continuously does not understand the realities of how something gets built.
This is in part a consequence of how young our field is.
The other comment pointing out other engineering is right here. The difference is that fields like Civil Engineering are millennia old. We know that Egyptian civil engineering was advanced and shockingly modern even 4.5 millennia ago. We've basically never stopped having qualified civil engineers around who could manage large civil engineering projects & companies.
Software development in its modern forms has its start still within living memory. There simply weren't people to manage the young, early software development firms as they grew, so management got imported from other industries.
And to say something controversial: Other engineering has another major reason why it's usually better understood. They're held to account when they kill people.
If you're engineering a building or most other things, you must meet safety standards. Where possible you are forced to prove you meet them. E.g. Cars.
You don't get to go "Well cars don't kill people, people kill people. If someone in our cars dies when they're hit by a drunk driver, that's not our problem, that's the drunkard's fault." No. Your car has to hold up to a certain level of crash safety; even if it's someone else who causes the accident, your engineering work damn better hold up.
In software, we just do not do this. The very notion of "Software kills people" is controversial. Treated as a joke, "of course it can't kill people, what are you on about?". Say, you neglect on your application's security. There's an exploit, a data breach, you leak your users' GPS location. A stalker uses the data to find and kill their victim.
In our field, the popular response is to go "Well we didn't kill the victim, the stalker did. It's not our problem.". This is on some level true; 'Twas the drunk driver who caused the car crash, not the car company. But that doesn't justify the car company selling unsafe cars, why should it justify us selling unsafe software? It may be but a single drop of blood, but it's still blood on our hands as well.
As it stands, we are fortunate enough that there haven't been incidents big enough to kill so many people that governments take action to forcibly change this mindset. It would be wise for software development to take up this accountability of its own accord to prevent such a disaster.
That we talk about "building" software doesn't help.
>For regular companies they're infested by managers imported from non-engineering fields
Someone's cousin, let's leave it at that, someone's damn cousin or close friend, or anyone else with merely a pulse. I've had interviews where the company had just been turned over from people that mattered, and you. could. tell.
One couldn't even tell me why the project I needed to do for them ::rolleyes::, their own boilerplate code (which they said would run), had runtime issues, and I needed to debug it myself just to get it to a starting point.
It's like, Manager: Oh, here's this non-tangential thing that they tell me you need to complete before I can consider you for the position.... Me: Oh, can I ask you anything about it?.... Manager: No
Could not agree more. Whenever I hear monikers like "Java developer" or "python developer" as a job description I roll my eyes slightly.
Isn't that happening already? Half the usual CS curriculum is either math (analysis, linear algebra, numerical methods) or math in anything but name (computability theory, complexity theory). There's a lot of very legitimate criticism of academia, but most of the times someone goes "academia is stupid, we should do X" it turns out X is either:
- something we've been doing since forever
- the latest trend that can be picked up just-in-time if you'll ever need it
I've worked in education in some form or another for my entire career. When I was in teacher education in college . . . some number of decades ago . . . the number one topic of conversation and topic that most of my classes were based around was how to teach critical thinking, effective reasoning, and problem solving. Methods classes were almost exclusively based on those three things.
Times have not changed. This is still the focus of teacher prep programs.
Parent comment is literally praising an experience they had in higher education, but your only takeaway is that it must be facile ridicule of academia.
Was directed at TFA, not parent comment.
In CS, it's because it came out of math departments in many cases and often didn't even really include a lot of programming because there really wasn't much to program.
Right but a looot of the criticism online is based on assumptions (either personal or inherited from other commenters) that haven’t been updated since 2006.
Well, at more elite schools at least, the general assumption is that programming is mostly something you pick up on your own. It's not CS. Some folks will disagree of course but I think that's the reality. I took an MIT Intro to Algorithms/CS MOOC course a few years back out of curiosity and there was a Python book associated with the course but you were mostly on your own with it.
When I was in college the philosophy program had the marketing slogan: “Thinking of a major? Major in thinking”.
Now as a hiring manager I'll say I regularly find that those who've had humanities experience are way more capable at the hard parts of analysis and understanding. Of course I'm biased as a dual CS/philosophy major, but it's very rare that I'm looking for someone who can just write a lot of code. Especially for juniors, as analytical thinking is way harder to teach than how to program.
> Now as a hiring manager I'll say I regularly find that those who've had humanities experience are way more capable at the hard parts of analysis and understanding.
The humanities, especially the classic texts, cover human interaction and communication in a very compact form. My favorite sources are the Bible, Cicero, and Machiavelli. For example, Machiavelli says if you do bad things to people, do them at once, while good things you should spread out over time. This is common sense. Once you catch the flavor of his thinking it's pretty easy to work other situations out for yourself, in the same way that good engineering classes teach you how to decompose and solve technical problems.
The #1 problem in almost all workplaces is communication related. In almost all jobs I've had in 25-30 years, finding out what needs to be done and what is broken -- is much harder than actually doing it.
We have these sprint planning meetings and the like where we throw estimates on the time some task will take but the reality is for most tasks it's maybe a couple dozen lines of actual code. The rest is all what I'd call "social engineering" and figuring out what actually needs to be done, and testing.
Meanwhile upper management is running around freaking out because they can't find enough talent with X years of Y [language/framework] experience, imagining that this is the wizard power they need.
The hardest problem at most shops is getting business domain knowledge, not technical knowledge. Or at least creating a pipeline between the people with the business knowledge and the technical knowledge that functions.
Anyways, yes, I have 3/4 of a PHIL major and it actually has served me well. My only regret is not finishing it. But once I started making tech industry cash it was basically impossible for me to return to school. I've met a few other people over the years like me, who dropped out during the '90s dot-com boom and then never went back.
Yea, this is why I'm generally not that impressed by LLMs. They still force you to do the communication, which is the hard part. Programming languages are inherently a solve for communicating complex steps. Programming in English isn't actually that much of a help; you just have to reinvent how to be explicit.
I find Claude Code unexpectedly good at analysis, with a healthy dose of skepticism. It is actually really good at reading logs and correlating events, for example.
This is also why I went into the Philosophy major - knowing how to learn and how to understand is incredibly valuable.
Unfortunately in my experience, many, many people do not see it that way. It's very common for folks to think of philosophy as "not useful / not practical".
Many people hear the word "philosophy" and mentally picture "two dudes on a couch recording a silly podcast", and not "investigative knowledge and in-depth context-sensitive learning, applied to a non-trivial problem".
It came up constantly in my early career, trying to explain to folks, "no, I actually can produce good working software and am reasonably good at it, please don't hyper-focus on the philosophy major, I promise I won't quote Scanlon to you all day."
How people see it is based on the probability of any philosophy major producing good working software, not you being able to produce good working software.
Maybe because philosophy focuses on weird questions (to be or not to be) and weird personas. If it was advertised as a more grounded thing, the views would be different.
The way you are perceived by others depends on your behaviour. If you want to be perceived differently, adjust your behaviour; don't demand that others change. They won't.
Many top STEM schools have substantial humanities requirements, so I think they agree with you.
At Caltech they require a total of at least 99 units in humanities or social sciences. 1 Caltech unit is 1 hour of work a week for each week of the term, and a typical class is 9 units consisting of 3 hours of classwork a week and 6 hours of homework and preparation.
That basically means that for 11 of the 12 terms that you are there for a bachelor's degree, you need to be taking a humanities or social sciences class. They require at least 4 of those to be in humanities (English, history, history and philosophy of science, humanities, music, philosophy, and visual culture), and at least 3 to be in social sciences (anthropology, business economics and management, economics, law, political science, psychology, and social science).
At MIT they have similar, but more complicated, requirements. They require humanities, art, and social sciences, and they require that you pick at least one subject in one of those and take more than one course in it.
I worked for someone who I believe was undergrad philosophy and then got a masters in CS.
On a related note, the most accomplished people I've met didn't have degrees in the fields where they excelled and won awards. They were all philosophy majors.
Teaching people to think is perhaps the world's most under-rated skill.
Well, yes, but the other 90%+ just need to get a job out of college to support their addiction to food and shelter, not to be a "better citizen of the world", unless they have parents to subsidize their livelihood either through direct transfers of money or by letting them stay at home.
I told both of my (step)sons that I would only help them pay for college or trade school - their choice - if they were getting a degree in something “useful”. Not philosophy, not Ancient Chinese Art History etc.
I also told them that they would have to get loans in their own names and I would help them pay off the loans once they graduated and started working gainfully.
My otherwise ordinary school applied the mentality that students must "Learn to learn", and that mix of skills and mindset has never stopped helping me.
I think good historians are on the same foot as philosophers in the arena of "thinking really fucking hard and making an airtight analysis".
I would say you have some bias.
yes, sometimes you need people who can grasp the tech and talk to managers. They might be intermediaries.
But don't ignore the nerdy guys who have been living deeply in a tech ecosystem all their lives. The ones who don't dabble in everything. (the wozniaks)
A professor in my very first semester called "crazy finger syndrome" the attempts to go straight to the code without decomposing the problem from a business or user perspective. It was a long time ago. It was a CS curriculum
I miss her jokes against anxious nerds that just wanted to code :(
Don't forget the rise of boot camps where some educators are not always aligned with some sort of higher ethical standards.
> "crazy finger syndrome" - the attempts to go straight to the code without decomposing the problem from a business or user perspective
Years ago I started on a new team as a senior dev, and did weeks of pair programming with a more junior dev to intro me to the codebase. His approach was maddening; I called it "spray and pray" development. He would type out lines or paragraphs of the first thing that came to mind just after sitting down and opening an editor. I'd try to talk him into actually taking even a few minutes to think about the problem first, but it never took hold. He'd be furiously typing, while I would come up with a working solution without touching a keyboard, usually with a whiteboard or notebook, but we'd have to try his first. This was C++/trading, so the type-compile-debug cycle could be tens of minutes. I kept relaying this to my supervisor, but after a few months of this he was let go.
I make a point to solve my more difficult problems with pen and paper drawings and/or narrative text before I touch the PC. The computer is an incredibly distracting medium to work with if you are not operating under clear direction. Time spent on this forum is a perfect example.
Memorization and closed-book tests are important for some areas. When seconds count, an ER doctor cannot go look up how to treat a heart attack. That doctor also needs to know not only how to treat the common heart attack, but how to recognize that this isn't the common heart attack but the 1-in-10,000 not-a-heart-attack case that has exactly the same symptoms, and give it the correct treatment.
However most of us are not in that situation. It is better for us to just look up those details as we need them because it gives us more room to handle a broader variety of situations.
Humans will never outcompete AI in that regard, however. Industry will eventually optimize for humans and AI separately: AI will know a lot and think quickly, humans will provide judgement and legal accountability. We're already on this path.
Speaking with a relative who is a doctor recently it’s interesting how much each of our jobs are “troubleshooting”.
Coding, doctors, plumber… different information, often similar skill sets.
I worked a job doing tech support for some enterprise level networking equipment. It was the late 1990s and we were desperate for warm bodies. Hired a former truck driver who just so happened to do a lot of woodworking and other things.
Great hire.
Everyone going through STEM needs to see the movie Hidden Figures for a variety of reasons, but one bit stands out as poignant: I believe it was Katherine Johnson, who is asked to calculate some rocket trajectory to determine the landing coordinates, thinks on it a bit and finally says, "Aha! Newton's method!" Then she runs down to the library to look up how to apply Newton's method. She had the conceptual tools to find a solution, but didn't have all the equations memorized. Having all the equations in short term memory only matters in a (somewhat pathological) school setting.
My favorite professor in my physics program would say, "You will never remember the equations I teach. But if you learn how the relationships are built and how to ask questions of those relationships, then I have done my job." He died a few years ago. I never was able to thank him for his lessons.
You just did.
Being resourceful is an extremely valuable skill in the real world, and basically shut out of the education world.
Unlike my teachers, none of my bosses ever put me in an empty room with only a pencil and a sheet of paper to solve given problems.
> My favorite professor in engineering school always gave open book tests.
My experience as a professor and a student is that this doesn't make any difference. Unless you can copy verbatim the solution to your problem from the book (which never happens), you better have a good understanding of the subject in order to solve problems in the allocated time. You're not going to acquire that knowledge during your test.
> My experience as a professor and a student is that this doesn't make any difference.
Exactly the point of his test methodology.
What he asked of students on a test was to *apply* knowledge and information to *unique* problems and create a solution that did not exist in any book.
I only brought 4 things to his tests --- textbook, pencil, calculator and a capable, motivated and determined brain. And his tests revealed the limits of what you could achieve with these items.
Isn't this an argument for why you should allow open book tests rather than why you shouldn't? It certainly removes some pressure to remember some obscure detail or formula.
Isn't that just an argument for always doing open book tests, then? Seems like there's no downside, and as already mentioned, it's closer to how one works in the real world.
During some of the earlier web service development days, one would find people at F500s skating by in low-to-mid level jobs just cutting and pasting between spreadsheets; things that would take them hours could be done in seconds, and with lower error rates, with a proper data interface.
Very anecdotally, but I hazard that most of these types of low-hanging fruit, low-value add roles are much less common since they tended to be blockers for operational improvement. Six-sigma, Lean, various flavors of Agile would often surface these low performers up and they either improved or got shown the door between 2005 - 2020.
Not that everyone is 100% all the time, every day, but what we are left with is often people that are highly competent at not just their task list but at their job.
I had a like minded professor in university, ironically in AI. Our big tests were all 3 day take home assignments. The questions were open ended, required writing code, processing data and analyzing results.
I think the problem with this is that it requires the professor to mentally fully engage when marking assignments and many educators do not have the capacity and/or desire to do so.
Sadly, I doubt 3-day take-home assignments have much future as a means of assessment in the age of LLMs.
Might be true, idk? For all we know that professor now gives a 2.5-day take home assignments where they are allowed to use LLMs, and then assess them in an 1 hour oral exam where they need to explain approach, results and how they ensure that their results are accurate?
I don't think the 3-day take home is the key. It's supporting educators to have the intention, agency and capacity to improvise assessment.
It depends what level the education is happening at. Think of it like students being taught how to do for loops but are just copying and pasting AI output. That isn't learning. They aren't building the skills needed to debug when the AI gets something wrong with a more complicated loop, or understand the trade offs of loops vs recursion.
Finding the correct balance for a given class is hard. Generally, the lower the education level, the more it should be closed-book, because the more it is about being able to manually solve the smaller challenges that are already well solved, so you build up the skills needed to even tackle the larger challenges. The higher the education level, the more it is about being able to apply those skills to tackle a problem, and one of those skills is being able to pull relevant formulas from the larger body of known formulas.
Agreed coming from the ops world also.
I've had a frustrating experience the past few years trying to hire junior sysadmins because of a real lack of problem solving skills once something went wrong outside of various playbooks they memorized to follow.
I don't need someone who can follow a pre-written playbook, I have ansible for that. I need someone that understands theory, regardless of specific implementations, and can problem solve effectively so they can handle unpredictable or novel issues.
To put another way, I can teach a junior the specifics of bind9 named.conf, or the specifics of our own infrastructure, but I shouldn't be expected to teach them what DNS in general is and how it works.
But the candidates we get are the opposite - they know specific tools, but lack more generalized theory and problem solving skills.
Same here! I always like to say that software engineering is 50% knowing the basics (How to write/read code, basic logic) and 50% having great research skills. So much of our time is spent finding documentation and understanding what it actually means as opposed to just writing code.
You cannot teach "how to think". You have to give students thinking problems to actually train thinking. Those kinds of problems can increasingly be farmed off to AI, or at least certain subproblems in them.
I mean, yes, to an extent you can teach how to think: critical thinking and logic are topics you can teach, and people who take their teaching to heart can become better thinkers. However, those topics cannot impart creativity. Critical thinking is called exactly that because it's about tools and skills for separating bad thinking from good thinking. The skill of generating good thinking probably cannot be taught; it can only be improved with problem-solving practice.
> In the real world of work, everyone has full access to all of the available data and information.
In general, I also attend your church.
However, as I preached in that church, I had two students over the years.
* One was from an African country and told me that where he grew up, you could not "just look up data that might be relevant" because internet access was rare.
* The other was an ex US Navy officer who was stationed on a nuclear sub. She and the rest of the crew had to practice situations where they were in an emergency and cut off from the rest of the world.
Memorization of considerable amounts of data was important to both of them.
Each one of us has a mental toolbox that we use to solve problems. There are many more tools that we don’t have in our heads that we can look up if we know how.
The bigger your mental toolbox the more effective you will be at solving the problems. Looking up a tool and learning just enough to use it JIT is much slower than using a handy tool that you already masterfully know how to use.
This is as true for physical tools as for programming concepts like algorithms and data structures. In the worst case you won’t even know to look for a tool and will use whatever is handy, like the proverbial hammer.
People have been saying that since the advent of formal education. Turns out standardized education is really hard to pull off and most systems focus on making the average good enough.
It’s also hard to teach people “how to think” while at the same time teaching them practical skills - there’s only so many hours in a day, and most education is setup as a way to get as many people as possible into shape for taking on jobs where “thinking” isn’t really a positive trait, as it’d lead to constant restructuring and questioning of the status quo
While there’s no reasonable way to disagree with the sentiment, I don’t think I’ve ever met anyone who can “think and decompose problems” who isn’t also widely read, and knows a lot of things.
Forcing kids to sit and memorize facts isn’t suddenly going to make them a better thinker, but much of my process of being a better thinker is something akin to sitting around and memorizing facts. (With a healthy dose of interacting substantively and curiously with said facts)
> Everyone has full access to all of the available data and information
Ahh, but this is part of the problem. Yes, they have access, but there is -so much- information, it punches through our context window. So we resort to executive summaries, or convince ourselves that something that's relevant is actually not.
At least an LLM can take full view of the context in aggregate and peel out signal. There is value there, but no jobs are being replaced
>but no jobs are being replaced
I agree that an LLM is a long way from replacing most any single job held by a human in isolation. However, what I feel is missed in this discussion is that it can significantly reduce the total manpower by making humans more efficient. For instance, the job of a team of 20 can now be done by 15 or maybe even 10 depending on the class of work. I for one believe this will have a significant impact on a large number of jobs.
Not that I'm suggesting anything be "stopped". I find LLM's incredibly useful, and I'm excited about applying them to more and more of the mundane tasks that I'd rather not do in the first place, so I can spend more time solving more interesting problems.
Also, some problems don't have enough data for a solution. I had a professor that gave tests where the answer was sometimes "not solvable." Taking these tests was like sweating bullets because you were not sure if you're just too dumb to solve the problem, or there was not enough data to solve the problem. Good times!
One of my favorite things about Feynman interviews/lectures is often his responses are about how to think. Sometimes physicists ask questions in his lectures and his answer has little to do with the physics, but how they're thinking about it. I like thinking about thinking, so Feynman is soothing.
I agree with the overall message, but I will say that there is still a great deal of value in memorisation. Memorising things gives you more internal tools to think in broader chunks, so you can solve more complicated problems.
(I do mean memorisation fairly broadly, it doesn't have to mean reciting a meaningless list of items.)
Agree, hopefully this insight / attitude will become more and more prevalent.
For anyone looking for resources, may we recommend:
* The Art of Doing Science and Engineering by Richard Hamming (lectures are available on YouTube as well)
* Measurement by Paul Lockhart (for teaching mindset)
Talk is cheap. Good educators cost money, and America famously underpays (and under-appreciates) its teachers. Does he also support increasing taxes on the wealthy?
Even more broadly, it's "critical thinking," which definitely seems to be on the decline (though I'm sure old people have said this for millennia)
Have there been studies about abilities of different students to memorize information? I feel this is under-studied in the world of memorizing for exams
Yeah. Memorization and trivial knowledge is an optimization mechanism.
It is tough though, I'd like to think I learnt how to think analytically and critically. But thinking is hard, and often times I catch myself trying to outsource my thinking almost subconsciously. I'll read an article on HN and think "Let's go to the comment section and see what the opinions to choose from are", or one of the first instincts after encountering a problem is googling and now asking an LLM.
Most of us are also old enough to have had a chance to develop taste in code and writing. Many of the young generation lack the experience to distinguish good writing from LLM drivel.
Wanted to chime in on the educational system. In the West, we have the 'banking model', which treats a student as a bank account and knowledge as currency, hence the "dump more info into people to make them sm0rt" attitude.
In developing areas, they commonly implement more modern models, as the systems are newer and they are free to implement newer things.
Those newer models focus more on exactly this: teach a person how to go through the process of finding solutions, rather than 'knowing a lot to enable the process of thinking'.
Not saying which is better or worse, but reading this comment and the article reminds me of this.
A lot of people I see know tons of interesting things, but anything outside their knowledge is a complete mystery.
All the while, people from developing areas learn to solve issues. A lot of individuals from there also get out of their poverty and do really well for themselves.
Of course, this is a generalization and doesn't hold up in all cases, but I can't help thinking about it.
A lot of my colleagues don't know how to solve problems simply because they don't RTFM. They rely on knowledge from their education, which is already outdated before they even sign up. I try to teach them to RTFM; it seems hopeless. They look down on me because I have no papers. But if shit hits the fan, they come to me to solve the problem.
A wise guy I met once said (likely not his own words): there are two types of people, those who think in problems and those who think in solutions.
I'd relate that to education, not prebaked human properties.
So to summarize:
My boss said we were gonna fire a bunch of people “because AI” as part of some fluff PR to pretend we were actually leaders in AI. We tried that a bit, it was a total mess and we have no clue what we’re doing, I’ve been sent out to walk back our comments.
Boss->VP: "We need to fire people because AI"
VP->Public: "We'll replace all our engineers with AI in two years"
Boss->VP: "I mean we need to fire VPs because AI"
VP->Public: "Replacing people with AI is stupid"
They are still not hiring junior engineers
Well, they're just trying to reduce headcount overall to get AWS's expenses in better shape and work through some bloat. The "we're doing layoffs because of AI" story wasn't sticking, though, so it looks like they're now backtracking that story line.
Most people don't notice, but there has been an inflation in headcounts over the years. It happened around the time the microservices architecture trend took over.
All of a sudden, to ensure better support and separation of concerns, people needed a team with a manager for each service. If this hadn't been the case, the industry as a whole could likely work with 40-50% fewer people eventually. That's because at any given point in time, even with a large monolithic codebase, only 10-20% of the code base is in active evolution; what that means in the microservices world is that an equivalent share of teams are sitting idle.
When I started out, huge C++ and Java code bases were pretty much the norm, and it was also one of the reasons why things were hard and the barrier to entry high. In this microservices world, things are small enough that any small group of even low productivity employees can make things work. That is quite literally true, because smaller things that work well don't even need all that many changes on an everyday basis.
To me it's these kinds of places that are in real trouble. There is not enough work to justify keeping dozens or even hundreds of teams, their management, and their hierarchies, all quite literally doing nothing.
It's almost an everyday refrain I hear, that big companies are full of hundreds or thousands of employees doing nothing.
I think sometimes the definition of work gets narrowed to a point so infinitesimal that everyone but the speaker is just a lazy nobody.
There was an excellent article on here about working at enterprise scale. My experience has been similar. You get to do work that feels really real, almost like school assignments with instant feedback and obvious rewards when you're at a small company. When I worked at big companies it all felt like bullshit until I screwed it up and a senator was interested in "Learning more" (for example).
The last few 9s are awful hard to chase down and a lot of the steps of handling edge case failures or features are extremely manual.
> In this microservices world, things are small enough that any small group of even low productivity employees can make things work. That is quite literally true, because smaller things that work well don't even need all that many changes on an everyday basis.
You're committing the classic fallacy around microservices here. The services themselves are simpler. The whole software is not.
When you take a classic monolith and split it up into microservices that are individually simple, the complexity does not go away, it simply moves into the higher abstractions. The complexity now lives in how the microservices interact.
In reality, the barrier to entry on monoliths wasn't that high either. You could get "low productivity employees" (I'd recommend you just call them "novices" or "juniors") to do the work, it'd just be best served with tomato sauce rather than deployed to production.
The same applies to microservices. You can have inexperienced devs build out individual microservices, but to stitch them together well is hard, arguably harder than ye-olde-monolith now that Java and more recent languages have good module systems.
There are two freight trains currently smashing into each other:
1.) Elon fired 80% of twitter and 3 years later it still hasn't collapsed or fallen into technical calamity. Every tech board/CEO took note of that.
2.) Every kid and their sister going to college who wants a middle-class life with generous working conditions is targeting tech. Every teenage nerd saw those overemployed guys making $600k from their couch during the pandemic.
On the other hand while yes it's still running, twitter is mostly not releasing new features, and has completely devolved into the worst place on the internet. Not to mention most accounts now actually are bots like Elon claimed they were 3 years ago.
> twitter is mostly not releasing new features
Were they even releasing new features anyways?
I can't think of any new features that Twitter implemented even the 5 years preceding Musk's buyout.
I don't know; they may well have been on the ads and moderation side. And they did add the hangouts/voice calls stuff, but that may have been an acquisition, I'm not sure.
And to add: the new features are take it or leave it. You're right, who cares about new features on _twitter_; it was ossified already.
Becoming the worst place on earth is a far weightier outcome of the acquisition and firings.
I hope the tech boards and CEOs don't miss the not-very-subtle point that Twitter has very quickly doubled in size in two years and is still growing after the big layoff, and that they had to scramble to fix some notable mistakes they made when firing that many people. 80% is already a hugely misleading marketing number.
Edit: huh, what’s with the downvote, is this wrong? Did I overstate it? Here’s the data: https://www.demandsage.com/twitter-employees/
Also need to add that a large part of the 80% that got kicked was moderation staff. So it makes sense that after they removed too many developers, they ended up rehiring them.
Take into account that Twitter's front end, the stuff that people interact with, was only like 15% of the actual code base. The rest was analytics for the data (selling data, marketing analytics for advertisers, etc.).
But as they are not reintroducing moderators, the company is "still down by 63.6% from the numbers before the mass layoffs".
So technically, Twitter is probably back to the same or even bigger on the IT staff than before Musk came.
Looking at your numbers, twitter went from 7,490 employees before Musk's layoffs to 2,840 in 2024. That's still a reduction by 63%.
Well yeah... computers are really powerful. You don't need Docker Swarm or any other newfangled thing. Just Perl and Apache and MySQL, and you can ship to tens of millions of users before you hit scaling limits.
Amazon has been at the forefront of micro services at scale since 2002.
https://nordicapis.com/the-bezos-api-mandate-amazons-manifes...
> If this hadn't been the case, the industry as a whole can likely work with 40% - 50% less people eventually. Thats because at any given point in time even with a large monolithic codebase only 10 - 20% of the code base is in active evolution, what that means in microservices world is equivalent amount teams are sitting idle.
I think it depends on the industry. In safety critical systems, you need to be testing, making documentation, architectural artifacts, meeting with customers, etc
There's not that much idle time. Unless you mean idle time actually writing code and that's not always a full time job.
I think most people misunderstand the relationship between business logic, architecture and headcount.
Big businesses don’t inherently require the complexity of architecture they have. There is always a path-dependent evolution and vestigial complexity proportional to how large and fast they grew.
The real purpose of large scale architecture is to scale teams much moreso than business logic. But why does headcount grow? Is it because domains require it? Sure that’s what ambitious middle managers will say, but the real reason is you have money to invest in growth (whether from revenue or from a VC). For any complex architecture there is usually a dramatically simpler one that could still move the essential bits around, it just might not support the same number of engineers delineated into different teams with narrower responsibilities.
The general headcount growth and architecture trajectory is therefore governed by business success. When we're growing we hire, and we create complex architecture to chase growth in as many directions as possible. Eventually, when growth slows, we have a system that is so complex it requires a lot of people just to understand and maintain. Even when the headcount is no longer justified, those with power in the human structure will bend over backwards to justify themselves. This is where the playbook changes and a private equity (or Elon) mentality is applied: ruthlessly cut and force the rest of the people to figure out how to keep the lights on.
I consider advances in AI and productivity orthogonal to all this. It will affect how people do their jobs, what is possible, and the economics of that activity, but the fundamental dynamics of scale and architectural complexity will remain. They’ll still hire more people to grow and look for ways to apply them.
It would be sad if you are correct. Your company might not be able to justify keeping dozens and hundreds of teams employed, but what happens when other companies can't justify paying dozens and hundreds of teams who are the customers buying your product? Those who gleefully downsize might well deserve the market erosion they cause.
This is blatantly incorrect. Before microservices became the norm you still had a lot of teams and hiring, but the teams would be working with the same code base and deployment pipeline. Every company that became successful and needed to scale invented their own bespoke way to do this; microservices just made it a pattern that could be repeatedly applied.
I think that the starting point is that productivity/developer has been declining for a while, especially at large companies. And this leads to the "bloated" headcount.
The question is why. You mention microservices. I'm not convinced.
Many think it is "horizontals". Possibly; it's true these taxes add up.
Perhaps it is cultural? Perhaps it has to do with the workforce in some manner. I don't know and AFAIK it has not been rigorously studied.
Looks like the AWS CEO has changed religion. A year back, he was aboard the ai-train - saying AI will do all coding in 2 years [1]
Finally, the c-suite is getting it.
[1] https://news.ycombinator.com/item?id=41462545
He didn't actually say that. He said it's possible that within 2 years developers won't be writing much code, but he goes on to say:
"It just means that each of us has to get more in tune with what our customers need and what the actual end thing is that we're going to try to go build, because that's going to be more and more of what the work is as opposed to sitting down and actually writing code...."
https://www.businessinsider.com/aws-ceo-developers-stop-codi...
If you read the full remarks they're consistent with what he says here. He says "writing code" may be a skill that's less useful, which is why it's important to hire junior devs and teach them how to learn so they learn the skills that are useful.
He is talking his book. Management thinks it adds value in the non-coding aspects of the product - such as figuring out what customers need, etc. I suggest management stay in their lane and not make claims about how coding needs to be done; leave that to the craftsmen actually coding.
[flagged]
> Please don't post comments saying that HN is turning into Reddit. It's a semi-noob illusion, as old as the hills.
https://news.ycombinator.com/newsguidelines.html
Theoretically, a large part of Amazon's worth is the skill of its workforce.
Some subset of the population likes to pretend their workforce is a cost that provides less than zero value or utility, and all the value and utility comes from shareholders.
But if this isn't true, and collective skill is worth value, then saying anyone can have that with AI at least has some headwind on your share price - which is all they care about.
Does that offset a potential tailwind from slightly higher margins?
I don't think any established company should be cheerleading that anyone can easily upset their monopoly with a couple of carefully crafted prompts.
It was always kind of strange to me, and seemed as though they were telling everyone, our moat is gone, and that is good.
If you really believed anyone could do anything with AI, then the risk of PEs collapsing would be high, which would be bad for the capital class. Now you have to correctly guess what's the next best thing constantly to keep your ROI instead of just parking it in safe havens - like FAANG.
Amazon doesn't really work this way.
Bedrock/Q is a great example of how Amazon works. If we throw $XXX and YYY SDEs at the problem, we should be able to build GitHub Copilot, GPT-3, OpenRouter and Cursor ourselves instead of trying to competitively acquire and attract talent. The fact that CodeWhisperer, Q and Titan barely get spoken about on HN or Twitter tells you how successful this is.
But if you have that perspective then the equation is simple. If S3 can make 5 XXL features per year with 20 SDEs then if we adopt “Agentic AI” we should be able to build 10 XXL features with 10 SDEs.
Little care is given to organizational knowledge, experience, vision, etc.; in their mind, that value comes from leadership, not ICs.
What do you mean, “Amazon doesn’t really work that way”?
Parent is talking about how C-Suite doesn’t want to trumpet something that implies their entire corporate structure is extremely disadvantaged vs new entrants and your response is “Amazon wants to build everything themselves”?
Amazon isn’t some behaviorally deterministic entity, and it could (and should?) want to both preserve goodwill and build more internally vs pay multiples to acquire.
I guess it could be that people inside are not people they have to compete with, but it doesn’t seem like that’s what you're saying.
Amazon would probably say its worth is the machinery around workers that allows it to plug in arbitrary numbers of interchangeable people and have them be productive.
It’s a requirement for the C-suite to always be aware of which way the wind is currently blowing.
It’s one thing to be aware of which way the wind is blowing, and quite another to let yourself be blown by the wind.
Not to say that‘s what the AWS CEO is doing—maybe it is, maybe it isn’t, I haven’t checked—I’m just commenting on the general idea.
An LLM would be more efficient at this task, truth be told.
it could be if a new model was released every day where the training set included everything that happened yesterday.
That's not necessarily inconsistent though - if you need people to guide or instruct the autonomy, then you need a pipeline of people including juniors to do that. Big companies worry about the pipeline, small companies can take that subsidy and only hire senior+, no interns, etc., if they want.
There is no pipeline though. The average tenure of a junior developer even at AWS is 3 years. Everyone knows that you make less money getting promoted to an L5 (mid) than getting hired in as one. Salary compression is real. The best play is always to jump ship after 3 years. Even if you like Amazon, “boomeranging” is still the right play.
that's interesting because that's how the consulting world works too. Start at a big firm, work for a few years, then jump to a small firm two levels above where you were. The after two years, come back to the big firm and get hired one level up from where you left. Rinse/repeat. It's the fastest promotion path in consulting.
I went from an L5 (mid) working at AWS ProServe as a consultant (full time role) to a year later (and a shitty company in between) as a “staff architect” - like you said two levels up - at a smaller cloud consulting company.
If I had any interest in ever working for BigTech again (and I would rather get an anal probe daily with a cactus), I could relatively easily get into Google’s equivalent department as a “senior” based on my connections.
Why is the hiring budget so much larger than the promotion budget?
It’s not necessarily “larger”, so much as different units. In a big company, the hiring budget is measured in headcount, but the promotion budget is measured in dollar percentage. It’s much easier to add $20k salary to get a hire done than to give that same person a $20k bump the following year.
I don't know about the dollars, but it's much easier and faster to leave and come back at a higher level than it is to win an actual promotion.
Right, but I'm asking why that is, structurally. Is it a budgeting thing from the company's point of view, or a hope that by limiting promotions you'll keep some employees underpaid and not leaving?
The original poster you are replying to was answering an orthogonal but related question, and both answers are true.
1. It is easier to make more money by being hired than by being promoted or not even being promoted and just kept at market rates for doing your current job. I addressed that in a sibling reply.
2. It’s easier to come in at a higher level than to be promoted to a higher level. To get “promoted” at BigTech there is a committee, promo docs where you have to document how you have already been working at that level and your past reviews are taken into account.
To come in that level you control the narrative and only have to pass 5-6 rounds of technical and behavioral interviews.
If I came into my current company at a level below staff, it would have taken a couple of years to be promoted to my current staff position (equivalent to a senior at AWS) and a few successful projects. All I had to do was interview well and tell the stories I wanted to tell about my achievements over the past 4 years. I didn’t have to speak on failures.
It's a lot cheaper to replace, at market rate, the occasional employee who leaves than to pay all of your developers at market rate. Many are going to stick around because of inertia, their lack of ability to interview well, the golden handcuffs of RSUs, not feeling like rebuilding social capital at another company, or a naive belief in the "mission", "passion", etc.
Mgmt/hr playing a game of chicken and you don't know you're playing.
This is the absolutely best description I’ve ever heard about salary compression and inversion.
But that's fine, that's why I say for big companies - the pipeline is the entire industry, everyone potentially in the job market, not just those currently at AWS. Companies like Amazon have a large enough work force to care that there's people coming up even if they don't work there yet (or never do, but by working elsewhere free someone else to work at AWS).
They have an interest in getting those grads turned into would-be-L5s even if they leave for a different company. If they 'boomerang back' at L7 that's great. They can't if they never got a grad job.
> That's not necessarily inconsistent though
Pasting the quote for reference:
> Amazon Web Services CEO Matt Garman claims that in 2 years coding by humans won't really be a thing, and it will all be done by networks of AI's who are far smarter, cheaper, and more reliable than human coders.
Unless this guy speaks exclusively in riddles, this seems incredibly inconsistent.
To be fair, the two statements are not inconsistent. He can continue to hire junior devs, but their job might involve much less actual coding.
There's definitely a vibe shift underway. C-Suites are seeing that AI as a drop-in replacement for engineers is a lot farther off than initial hype suggested. They know that they'll need to attract good engineers if they want to stay competitive and that it's probably a bad idea to scare off your staff with saying that they'll be made irrelevant.
I'm not sure those are mutually exclusive? Modern coders don't touch Assembly or deal with memory directly anymore. It's entirely possible that AI leads to a world where typing code by hand is dramatically reduced too (it already has in a few domains and company sizes)
>Finally, the c-suite is getting it.
It can only mean one thing: the music is about to stop.
He was right tho. AI is doing all the coding. That doesn't mean you fire junior staff. Both can be true at once - you need juniors, and pretty much all code these days is AI-generated.
Man how would we live without these folks and their sensational acumen. Millions well spent!
"Oh, how terrible" - he exclaims while counting dollar bills.
He should face consequences for his cargo-cult thinking in the first place. The C-Suite isn't "getting" anything. They are simply bending like reeds in today's winds.
Might want to clarify things with your boss who says otherwise [1]? I do wish journalists would stop quoting these people unedited. No one knows what will actually happen.
[1]: https://www.shrm.org/topics-tools/news/technology/ai-will-sh...
I'm not sure those statements are in conflict with each other.
“My view is you absolutely want to keep hiring kids out of college and teaching them the right ways to go build software and decompose problems and think about it, just as much as you ever have.” - Matt Garman
"We will need fewer people doing some of the jobs that are being done today” - Amazon CEO Andy Jassy
Maybe they differ in degree but not in sentiment.
> quoting these people unedited
If you're quoting something, the only ethical thing to do is as verbatim as possible and with a sufficient amount of context. Speeches should not be cleaned up to what you think they should have said.
Now, the question of who you go to for quotes, on the other hand .. that's how issues are really pushed around the frame.
By unedited I mean, take the message literally and quote it to support a narrative that isn’t clear or consistent. (even internally among Amazon leadership)
Those statements are entirely consistent to me.
I very much believe that anything AWS says on the corporate level is bullshit.
From the perspective of a former employee. I knew that going in though. I was 46 at the time, AWS was my 8th job and knowing AWS’s reputation from 2nd and 3rd hand information, I didn’t even entertain an opportunity that would have forced me to relocate.
I interviewed for a “field by design” role that was “permanently remote” [sic].
But even those positions had an RTO mandate after I already left.
There's what AWS leadership says and then there's what actually sticks.
There's an endless series of one pagers with this idea or that idea, but from what I witnessed first hand, the ones that stuck were the ones that made money.
Jassy was a decent guy when I was there, but that was a decade ago. A CEO is a PR machine more than anything else, and the AI hype train has been so strong that if you do anything other than saying AI is the truth, the light and the way, you lose market share to competitors.
AI, much like automation in general, does allow fewer people to do more, but in my experience, customer desires expand to fill a vacuum and if fewer people can do more, they'll want more to the point that they'll keep on hiring more and more people.
I would bet that anyone who's worked with these models extensively would agree.
I'll never forget the sama AGI posts before o3 launched and the subsequent doomer posting from techies. Feels so stupid in hindsight.
The AGI doomerism was a marketing strategy. Now everyone gets what AI is and we're just watching a new iteration on search; the AI has read all the docs.
It was always stupid, but no one is immune to hype, just different types of hype.
Especially with the amount of money that was put into just astroturfing the technology as more than it is.
ChatGPT is better than any junior developer I’ve ever worked with. Junior devs have always been a net negative for the first year or so.
From a person who is responsible for delivering projects, I’ve never thought “it sure would be nice if I had a few junior devs”. Why when I can poach an underpaid mid level developer for 20% more?
I've never had a junior dev be a "net negative." Maybe you're just not supervising or mentoring them at all? The first thing I tell all new hires under me is that their job is to solve more problems than they create, and so far it's worked out.
I’ve had interns be a net negative, I’ve had Juniors be a net negative, I’ve had Seniors be a net negative and even managers!
Turns out some people suck, but most of them don’t suck.
But by definition, junior developers with no experience are going to need more handholding and take time away from experienced developers.
> junior developers with no experience are going to need more handholding
Unlike AI, which gives me fake methods, broken code, and wrong advice with full confidence.
I just “wrote” 2000 lines of code for a project between Node for the AWS CDK and Python using the AWS SDK (Boto3). Between both, ChatGPT needed to “know” the correct API for 12 services, SQL and HTML (for a static report). The only thing it got wrong with a one shot approach was a specific Bedrock message payload for a specific LLM model. That was even just a matter of saying “verify the payload on the web using the official docs”.
Yes it was just as well structured as I - someone who has been coding as a hobby or professionally for four decades - would have done.
That's great for you. I ask Sonnet 4 to make a migration and a form in Laravel Filament, and it regularly shits itself. I'm curious what those 12 services were, they must've had unchanging, well documented APIs.
That’s the advantage of working with AWS services, everything is well documented with plenty of official and unofficial code showing how to do most things.
Even for a service I know is new, I can just tell it to “look up the official documentation”
Using ChatGPT 5 Fast
AWS CDK apps (separate ones) using Node
- EC2 (create an instance)
- Aurora MySQL Serverless v2
- Create a VPC with no internet access - the EC2 instance was used as a jump box using Session Manager
- VPC Endpoints for Aurora control plane, SNS, S3, DDB, Bedrock, SQS, Session Manager
- Lambda including using the Docker lambda builder
- DDB
- it also created the proper narrowly scoped IAM permissions for the Lambdas (I told it the services the Lambdas cared about)
The various Lambdas in Python using Boto3
- Bedrock including the Converse and Invoke APIs for the Nova and Anthropic families
- knowing how to process SQS Messages coming in as events
- MySQL flavored SQL for Upserts
- DDB reads
In another project the services were similar with the addition of Amazon Transcribe.
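To give a rough sense of what those Lambda pieces look like in practice, here is a minimal sketch of one such handler, assuming the SQS-to-Bedrock-to-DynamoDB flow described above; the table name, model ID and payload shape are hypothetical, not the actual project's:

    import json
    import boto3

    # Hypothetical resource names; the real project's identifiers were not given.
    TABLE_NAME = "analysis-results"
    MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"

    bedrock = boto3.client("bedrock-runtime")
    table = boto3.resource("dynamodb").Table(TABLE_NAME)

    def handler(event, context):
        # Lambda delivers SQS messages as a batch of records.
        for record in event["Records"]:
            payload = json.loads(record["body"])  # hypothetical payload shape

            # Ask the model to process the message text via the Converse API.
            response = bedrock.converse(
                modelId=MODEL_ID,
                messages=[{"role": "user", "content": [{"text": payload["text"]}]}],
            )
            summary = response["output"]["message"]["content"][0]["text"]

            # Persist the result in DynamoDB.
            table.put_item(Item={"id": record["messageId"], "summary": summary})

Each piece is boilerplate-heavy but individually simple, which is exactly the kind of code these models tend to one-shot well.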
> I just “wrote” 2000 lines of code for a project between Node...
I think I wrote -200 lines of code on my last PR. I may be doing something bad for that number to be negative.
The difference is probably that I only do green field POC implementations as the sole developer/cloud architect on a project if I am doing hands-on-keyboard work.
The other part of my job is leading larger projects where I purposefully don’t commit to pulling stories off the board since I’m always in meetings with customers, project managers, sales or helping other engineers.
I might even then do a separate POC as a research project/enablement. But it won’t be modifying existing code that I didn’t design.
Truly depends on the organization and systems. I’m at a small firm with too few Senior staff, lots of fire-fighting going on among us, etc. We have loads of low-hanging fruit for our Juniors so we tend to have very quick results after an initial onboarding.
I've never worked with a junior developer that was incapable of learning or following instructions unless I formatted them in a specific way.
I definitely have
That's due to the person, not their development experience level
The most impressive folks I've worked with are almost always straight out of school. It's before they've developed confidence about their skills and realized they can be more successful by starting their own business. Good as in "promoted three times in just 5 years" good.
Did their project manager and/or team lead think when they were hired “they are really going to be a great asset to my team and are going to help me complete my sprint/quarterly goals”?
When I ask for additional headcount, I’m looking at the next quarter since that’s what my manager is judging me based on.
I think you’re just telling on what kind of mentor you are with your comment.
I’m a great mentor when given the time. Two former interns for whom I was their official mentor during my time at AWS got return offers and are thriving two years after I left. I threw one in front of a customer to lead the project within three months after they came back after graduating. They were able to come through technically and had the soft skills. I told them my training approach is to “throw them at the bus. But never under the bus.”
I’m also a great teacher. That’s my $DayJob and has been for the past decade first bringing in new to the company processes and technologies, leading initiatives, teaching other developers, working with sales, CxOs (smaller companies), directors, explaining large “organizational transformation” proposals etc. working at startups and then doing the same in cloud consulting first working at AWS (ProServe full time role) and now working as a staff architect full time at a third party consulting company.
But when I have been responsible for delivery, I only hire people who have experience “dealing with ambiguity” and show that I can give them a decently complicated problem and they can take the ball and run with it and make decent decisions and do research. I don’t even do coding interviews - when I interview it’s strictly behavioral and talking through their past projects, decision making processes, how they overcame challenges etc.
In terms of AWS LPs, it’s “Taking Ownership” (yeah quoting Amazon LPs made me throw up a little).
Relax buddy, it’s not a job interview. You’re just in the comment section of a HN post.
Trust me, I am not looking for a job. If I were, I'd just talk about "AI for pet care" and get funded by YC…
Sure thing buddy.
What happens when you retire and there are no juniors to replace you?
That sounds like an incentive issue.
My evaluations are based on quarterly goals and quarterly deliverables. No one at a corporation cares about anything above how it affects them.
Bringing junior developers up to speed just for them to jump ship within three years or less doesn't benefit anyone at the corporate level. Sure, they jump ship because of salary compression and inversion, where internal raises don't correspond to market rates. Even first-level managers don't have the say-so or budget to affect that.
This is true for even BigTech companies. A former intern I mentored who got a return offer a year before I left AWS just got promoted to an L5 and their comp package was 20% less than new hires coming in at an l5.
Everyone will be long gone from the company if not completely retired by the time that happens.
> Bringing junior developers up to speed just for them to jump ship within three years or less doesn’t benefit anyone at the corporate level.
What? Of course it does. If that's happening everywhere, that means other companies' juniors are also jumping ship to come work for you while yours jump ship to work elsewhere. The only companies that don't see a benefit from mentoring new talent are those with substandard compensation.
That's true, but why should I take on the work of being at the beginning of the pipeline instead of hiring a mid-level developer? My incentives are to meet my quarterly goals and show "impact".
To a first approximation, no company pays internal employees at market rates in an increasing comp environment after a couple of years, especially during the first few years of an employee's career, when their market rate rapidly increases once they get real-world experience.
On the other hand, the startup I worked for pre-AWS with 60 people couldn’t, wouldn’t and shouldn’t have paid me the amount I made when I got hired at AWS.
> That’s true, but why should I take on the work of being at the beginning of the pipeline instead of hiring a mid level developer.
Nominally, for the same reason that you pay taxes for upkeep on the roads and power lines. Because everyone capable needs to contribute to the infrastructure or it will degrade and eventually fail.
> My incentives are to meet my quarterly goals and show “impact”.
To me, that speaks of mismanagement - a poorly run company that is a leech on the economy and workforce. In contrast, as a senior level engineer at a large technology company that has remarkably low turnover, one of my core duties is to help enhance the capabilities of other coworkers and that includes mentorship. This is because our leadership understands that it adds workforce retention value.
> To a first approximation, no company pays internal employees at market rates in an increasing comp environment after a couple of years especially during the first few years of an employee’s career where their marker rate rapidly increases once they get real world experience.
That's why I mentioned it being a cross-industry symbiotic relationship. Your company may not retain the juniors that you help train, but the mid level engineers you hire are the juniors that someone else helped train. If you risk not mentoring juniors, you encourage other companies to do the same and reduce the pool of qualified mid level engineers available to you in the future.
> On the other hand, the startup I worked for pre-AWS with 60 people couldn’t, wouldn’t and shouldn’t have paid me the amount I made when I got hired at AWS.
While unrelated to my point, I do have a different experience that you may find interesting in that the most exorbitant salary I have ever been paid was as a contractor for a 12-person startup, not at the organizations with development teams in the hundreds or thousands.
> Nominally, for the same reason that you pay taxes for upkeep on the roads and power lines. Because everyone capable needs to contribute to the infrastructure or it will degrade and eventually fail.
On the government level, I agree. I’m far from a “taxation is theft” Libertarian.
But I also have an addiction to food and shelter. The only entity capable of the kind of collective action that is good for society is the government. My goal (and I'm generalizing myself as any rational actor) is to do what is necessary to exchange labor for money by aligning my actions with the corporation's incentives to continue to put money in my bank account and (formerly) vested RSUs in my brokerage account.
> To me, that speaks of mismanagement - a poorly run company that is a leech on the economy and workforce. In contrast, as a senior level engineer at a large technology company that has remarkably low turnover, one of my core duties is to help enhance the capabilities of other coworkers and that includes mentorship
The only large tech company I've worked for has a leadership principle, "Hire and Develop the Best". But for an IC, it's mostly bullshit. That doesn't show up on your promo doc when it's time to show "impact" or how it relates to the team's "OKRs".
From talking to people at Google, it’s the same. But of course Amazon can afford to have dead weight. When I have one shot at a new hire that is going to help me finish my quarterly goals as a team lead, I’m not going to hire a junior and put more work on myself.
I’m an IC, but in the org chart, I’m at the same level as a front line manager.
> While unrelated to my point, I do have a different experience that you may find interesting in that the most exorbitant salary I have ever been paid was as a contractor for a 12-person startup, not at the organizations with development teams in the hundreds or thousands.
As a billable consultant at AWS (and now outside of AWS) because of scale, I brought a lot more money into AWS than anything I could have done at the startup.
That’s why I said the startup “shouldn’t” have paid me the same close to 1 million over four years that AWS offered me in cash and RSUs. It would have been irresponsible and detrimental to the company. I couldn’t bring that much value to the startup.
> Garman is also not keen on another idea about AI – measuring its value by what percentage of code it contributes at an organization.
You really want to believe, maybe even need to believe, that anyone who comes up with this idea in their head has never written a single line of code in their life.
It is on its face absurd. And yet I don't doubt for a second that Garman et al. have to fend off legions of hacks who froth at the mouth over this kind of thing.
Time to apply the best analogy I've ever heard.
> "Measuring software productivity by lines of code is like measuring progress on an airplane by how much it weighs." -- Bill Gates
Do we reward the employee who has added the most weight? Do we celebrate when the AI has added a lot of weight?
At first, it seems like, no, we shouldn't, but actually, it depends. If a person or AI is adding a lot of weight, but it is really important weight, like the engines or the main structure of the plane, then yeah, even though it adds a lot of weight, it's still doing genuinely impressive work. A heavy airplane is more impressive than a light weight one (usually).
I just can’t resist myself when airplanes come up in discussion.
I completely understand your analogy and you are right. However, just to nitpick, it is actually super important to have the weight on the airplane in the right place. You have to make sure that your aeroplane does not become tail-heavy, or it is not recoverable from a stall. Also, a heavier aeroplane, within its gross weight, is actually safer, as the safe manoeuvring speed increases with weight.
I think this makes the analogy even more apt.
If someone adds more code to the wrong places for the sake of adding more code, the software may not be recoverable for future changes or from bugs. You also often need to add code in the right places for robustness.
> a heavier aeroplane … is actually safer
Just to nitpick your nitpick, that’s only true up to a point, and the range of safe weights isn’t all that big really - max payload on most planes is a fraction of the empty weight. And planes can be overweight, reducing weight is a good thing and perhaps needed far more often than adding weight is needed. The point of the analogy was that over a certain weight, the plane doesn’t fly at all. If progress on a plane is safety, stability, or speed, we can measure those things directly. If weight distribution is important to those, that’s great we can measure weight and distribution in service of stability, but weight isn’t the primary thing we use.
Like with airplane weight, you absolutely need some code to get something done, and sometimes more is better. But is more better as a rule? Absolutely not.
right, thats why its a great analogy - because you also need to have at least some code in a successful piece of software. But simply measuring by the amount of code leads to weird and perverse incentives - code added without thought is not good, and too much code can itself be a problem. Of course, the literal balancing aspect isn't as important.
This is a pretty narrow take on aviation safety. A heavier airplane has a higher stall speed, more energy for the brakes to dissipate, longer takeoff/landing distances, a worse climb rate… I’ll happily sacrifice maneuvering speed for better takeoff/landing/climb performance.
Again, just nitpicking, but if you have the right approach speed, and not doing a super short field landing, you need very little wheel brake if any. ;)
Sure, as long as you stick to flying light aircraft on runways designed for commercial air transport. I would also recommend thinking about how you would control speed on a long downhill taxi with a tailwind, even if you didn’t need brakes on landing.
> the safe manoeuverable speed increases with weight
The reason this is true is because at a higher weight, you'll stall at max deflection before you can put enough stress on the airframe to be a problem. That is to say, at a given speed a heavier airplane will fall out of the air [hyperbole, it will merely stall - significantly reduced lift] before it can rip the wings/elevator off [hyperbole - damage the airframe]. That makes it questionable whether heavier is safer - just changes the failure mode.
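For reference, the standard level-flight relations behind that claim (a rough sketch; W is weight, rho the air density, S the wing area, C_L,max the maximum lift coefficient, n_lim the limit load factor):

    V_s = \sqrt{\frac{2W}{\rho\, S\, C_{L,\max}}}, \qquad V_A = V_s \sqrt{n_{\lim}} \;\propto\; \sqrt{W}

So a heavier airplane stalls (and therefore unloads the airframe) at a higher speed, which is what pushes Va up with weight.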
> That is to say, at a given speed a heavier airplane will fall out of the air [hyperbole, it will merely stall - significantly reduced lift] before it can rip the wings/elevator off [hyperbole - damage the airframe]
Turbulence, especially generated by thunderstorms, or close to it.
Maneuvering speed is Va, which is about max deflection of a single control surface; I think you're thinking of Vno if you're referring to turbulence.
Indeed I was thinking of Vno. I just had a brain fart when I said manoeuvering speed. I meant to say maximum structural cruising speed.
Progress on airplanes is often tracked by # of engineering drawings released, which means that 1000s of little clips, brackets, fittings, etc. can sometimes misrepresent the amount of engineering work that has taken place compared to preparing a giant monolithic bulkhead or spar for release. I have actually proposed measuring progress by part weight instead of count to my PMs for this reason
> the best analogy I've ever heard.
It’s an analogy that gets the job done and is targeted at non-tech managers.
It’s not perfect. Dead code has no “weight” unless you’re in a heavily storage-constrained environment. But 10,000 unnecessary rivets has an effect on the airplane everywhere, all the time.
> Dead code has no “weight”
Assuming it is truly dead and not executable (which someone would have to verify is & remains the case), dead code exerts a pressure on every human engineer who has to read (around) it, determine that it is still dead, etc. It also creates risk that it will be inadvertently activated and create e.g. security exposure.
Yes, we all love pedantry around here (that’s probably 99% of the reason I wrote the original comment!)
But if your position is that the percentage of time in the software lifecycle that dead code has a negative effect on a system is anywhere close to the percentage of time in an aircraft lifecycle that extra non-functional rivets (or other unnecessary weight objects) has a negative effect on the aircraft, you’re just wrong.
it's still directionally accurate though. Dead code has a weight that must be paid. Sometimes the best commits are the ones where you delete a ton of lines.
In this analogy, I'd say dead code corresponds to airplane parts that aren't actually installed on the aircraft. When people talk about the folly of measuring productivity in lines of code, they aren't referring to the uselessness of dead code, they're referring to the harms that come from live code that's way bigger than it needs to be.
When you are thinking of development and refactoring, dead code absolutely has weight.
This reminds me of a piece on folklore.org by Andy Hertzfeld[0], regarding Bill Atkinson. A "KPI" was introduced at Apple in which engineers were required to report how many lines of code they had written over the week. Bill (allegedly) claimed "-2000" (a completely, astonishingly negative report), and supposedly the managers reconsidered the validity of the "KPI" and stopped using it.
I don't know how true this is in fact, but I do know how true this is in my work - you cannot apply some arbitrary "make the number bigger" goal to everything and expect it to improve anything. It feels a bit weird seeing "write more lines of code" becoming a key metric again. It never worked, and is damn-near provably never going to work. The value of source code is not in any way tied to its quantity, but value still proves hard to quantify, 40 years later.
0. https://www.folklore.org/Negative_2000_Lines_Of_Code.html
Goodhart's law: when a measure becomes a target, it ceases to be a good measure.
Given the way that a lot of AI coding actually works, it’s like asking what percent of code was written by hitting tab to autocomplete (intellisense) or what percent of a document benefited from spellcheck.
While most of us know the next word guessing is how it works in reality…
That sentiment ignores the magic of how well this works. There are mind blowing moments using AI coding, to pretend that it’s “just auto correct and tab complete” is just as deceiving as “you can vibe code complete programs”.
All that said, I'm very keen on companies telling me how much of their codebase was written by AI.
I just won't use that information in quite the excitable, optimistic way they offer it.
I want to have the model re-write patent applications, and if any portion of your patent filing was replicated by it your patent is denied as obvious and derivative.
"...just raised a $20M Series B and are looking to expand the team and products offered. We are fully bought-in to generative AI — over 40% of our codebase is built and maintained by AI, and we expect this number to continue to grow as the tech evolves and the space matures."
"What does your availability over the next couple of weeks look like to chat about this opportunity?"
"Yeah, quite busy over the next couple of weeks actually… the next couple of decades, really - awful how quickly time fills by itself these days, right? I'd have contributed towards lowering that 40% number which seems contrary to your goals anyway. But here's my card, should you need help with debugging something tricky some time in the near future and nobody manages to figure it out internally. I may be able to make room for you if you can afford it. I might be VERY busy though."
Something I wonder about the percent of code - I remember like 5-10 years ago there was a series of articles about Google generating a lot of their code programmatically, I wonder if they just adapted their code gen to AI.
I bet Google has a lot of tools to, say, convert a library from one language to another or generate a library based on an API spec. The 30% of code these LLMs are supposedly writing is probably in this camp, not net-new novel features.
Is that why Gmail loads so slowly these days?
When I see these stats, I think of all the ways "percentage of code" could be defined.
I ask an AI 4 times to write a method for me. After it keeps failing, I just write it myself. AI wrote 80% of the code!
It is a really attractive idea for lazy people who don’t want to learn things
It is like measuring company output based on stuff done through codegen...
In academia the research pipeline is this
Undergraduate -> Graduate Student -> Post-doc -> Tenure/Senior
Some exceptions occur, like people getting tenure without a post-doc or finishing their undergraduate degree in one or two years. But no one expects that we can skip the first two stages entirely and still end up with senior researchers.
The same idea applies anywhere, the rule is that if you don't have juniors then you don't get seniors so better prepare your bot to do everything.
As always, the truth is somewhere in the middle. AI is not going to replace everyone tomorrow, but I also don't think we can ignore productivity improvements from AI. It's not going to replace engineers completely now or in the near future, but AI will probably reduce the number of engineers needed to solve a problem.
I do not agree. It was not worth it even without LLMs. Juniors will always take a LOT of time from seniors, and when the junior becomes good enough, they will find another job, and the senior will be stuck in this loop.
Junior + LLM is even worse: they become prompt engineers.
I'm a technical co-founder rapidly building a software product. I've been coding since 2006. We have every incentive to have AI just build our product. But it can't. I keep trying to get it to...but it can't. Oh, it tries, but the code it writes is often overly complex and overly-verbose. I started out being amazed at the way it could solve problems, but that's because I gave it small, bounded, well-defined problems. But as expectations with agentic coding rose, I gave it more abstract problems and it quickly hit the ceiling. As was said, the engineering task is identifying the problem and decomposing it. I'd love to hear from someone who's used agentic coding with more success. So far I've tried Co-pilot, Windsurf, and Alex sidebar for Xcode projects. The most success I have is via a direct question with details to Gemini in the browser, usually a variant of "write a function to do X"
> As was said, the engineering task is identifying the problem and decomposing it.
In my experience if you do this and break the problem down into small pieces, the AI can implement the pieces for you.
It can save a lot of time typing and googling for docs.
That said, once the result exceeds a certain level of complexity, you can't really ask it to implement changes to existing code anymore, since it stops understanding it.
At which point you now have to do it yourself, but you know the codebase less well than if you'd hand written it.
So, my upshot is so far that it works great for small projects and for prototyping, but the gain after a certain level of complexity is probably quite small.
But then, I've also found quite a lot of value in using it as a code search engine and to answer questions about the code, so maybe if nothing else that would be where the benefit comes from.
> At which point you now have to do it yourself, but you know the codebase less well than if you'd hand written it.
Appreciate you saying this because it is my biggest gripe in these conversations. Even if it makes me faster I now have to put time into reading the code multiple times because I have to internalize it.
Since the code I merge into production "is still my responsibility" as the HN comments go, then I need to really read and think more deeply about what AI wrote as opposed to reading a teammate's PR code. In my case that is slower than the 20% speedup I get by applying AI to problems.
I'm sure I can get even more speed if I improve prompts, when I use the AI, agentic vs non-agentic, etc. but I just don't think the ceiling is high enough yet. Plus I am someone who seems more prone to AI making me lazier than others so I just need to schedule when I use it and make that time as minimal as possible.
Are we trying to guilt-trip corporations into doing the socially responsible thing regarding young workers' skill acquisition?
Haven't we learned that it almost always ends up in hollow PR and marketing theater?
Basically the solution to this is extending education so that people entering workforce are already at senior level. Of course this can't be financed by the students, because their careers get shortened by longer education. So we need higher taxes on the entities that reap the new spoils. Namely those corporations that now can pass on hiring junior employees.
"Learning how to learn" is by far the most important lesson anyone can obtain. That's not just for AI/software/tech, but for anything.
There is a dedicated website on this topic: https://learnhowtolearn.org
I was going to say something, then I realized my cynicism is already at maximum.
Makes sense. Instead of replacing junior staff, they should be trained to use AI to get more done in less time. In the next 2-3 years they will be experts doing good work with high productivity.
Two things that will hurt us in the long run, working from home and AI. I'm generally in favour of both, but with newbies it hurts them as they are not spending enough face to face time with seniors to learn on the job.
And AI will hurt them in their own development and with it taking over the tasks they would normally cut their teeth on.
We'll have to find newer ways of helping the younger generation get in the door.
A weekly 1-hour call for pair programming or exploration of an ongoing issue or technical idea would be enough to replace face-to-face time with seniors. This has been working great for us at a multi-billion-dollar, profitable, fully remote public company.
I would argue that just being in the office or not using AI doesn't guarantee any better learning of younger generations. Without proper guidance a junior would still struggle regardless of their location or AI pilot.
The challenge now is for companies, managers and mentors to adapt to more remote and AI assisted learning. If a junior can be taught that it's okay to reach out (and be given ample opportunities to do so), as well as how to productively use AI to explain concepts that they may feel too scared to ask because they're "basics", then I don't see why this would hurt in the long run.
Junior staff will be necessary but you'll have to defend them from the bean-counters.
You need people who can validate LLM-generated code. It takes people with testing and architecture expertise to do so. You only get those things by having humans get expertise through experience.
>teach “how do you think and how do you decompose problems”
That's rich coming from AWS!
I think he meant "how do you think about adding unnecessary complexity to problems such that it can enable the maximum amount of meetings, design docs and promo packages for years to come"!
A lot of companies that have stopped hiring junior employees are going to be really hurting in a couple of years, once all of their seniors have left and they have no replacements trained and ready to go.
I can't wait for that damn bubble to explode, really...
This is becoming unbreathable for hackers.
It's already exploding.
The hype train is going to keep on moving for a while yet though.
If AI is so great and has PhD-level skills (Musk), then logic says you should be replacing all of your _senior_ developers. That is not the conclusion they reached, which implies that the coding ability is not that hot. Q.E.D.
Finally someone from a top position said this. After all the trash the CEOs have been spewing and sensationalizing every AI improvement, for a change, a person in a non-engineering role speaks the truth.
Unfortunately, this is the kind of view that is at once completely correct and anathema to private equity because they can squeeze a next quarter return by firing a chunk of the labor force.
Yesterday, I was asked to scrape data from a website. My friend used ChatGPT to scrape the data but didn't succeed, even after spending 3+ hours. I looked at the website's code, understood it with my web knowledge, and did some research with an LLM. Then I described to the LLM how to scrape the data, and it took 30 minutes overall. The LLM can't come up with the best approach on its own, but you can build it by using the LLM. Everything is the same: at the end of the day you need someone who can really think.
LLM's can do anything, but the decision tree for what you can do in life is almost infinite. LLM's still need a coherent designer to make progress towards a goal.
LLMs can do small things well, but you must assemble the small parts into the big picture yourself.
Or you could’ve used xpath and bs4 and have been done in an hour or two and have more understandable code.
It is not that easy: the page lazy-loads content, triggered by scrolling specific sections. You need to find a clever way; there's no way to scrape it with bs4 alone, and it's tough even with Selenium.
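For what it's worth, the usual workaround for scroll-triggered lazy loading is to drive the browser, scroll the specific section until nothing new appears, and only then hand the rendered HTML to bs4. A rough sketch with Selenium (the URL and selectors are hypothetical; the real site's structure wasn't given):

    import time
    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from bs4 import BeautifulSoup

    driver = webdriver.Chrome()
    driver.get("https://example.com/listings")  # hypothetical URL

    # The page only loads more items when a specific section is scrolled,
    # so scroll that element (not the window) until no new items show up.
    section = driver.find_element(By.CSS_SELECTOR, "div.results")  # hypothetical selector
    seen = 0
    while True:
        driver.execute_script("arguments[0].scrollTop = arguments[0].scrollHeight;", section)
        time.sleep(1.5)  # give the lazy-load request time to finish
        items = section.find_elements(By.CSS_SELECTOR, "div.item")  # hypothetical selector
        if len(items) == seen:
            break  # nothing new loaded; assume we've reached the end
        seen = len(items)

    # Once everything is rendered, bs4 is fine for the actual extraction.
    soup = BeautifulSoup(driver.page_source, "html.parser")
    rows = [item.get_text(strip=True) for item in soup.select("div.results div.item")]
    driver.quit()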
wget -m ?
Bravo.. Finally a voice of reason.
As someone who works in AI, any CEO who says that AI is going to replace junior workers has no f*cking clue what they are talking about.
The current generation of AI agents is great at writing a block of code, similar to writing a great paragraph. Know your tools.
Agree. AI is, so far, almost as good as StackOverflow, except it lies confidently and generates questionable code.
AWS CEO says what he has to say to push his own agenda and obviously to align himself with the most currently popular view.
AWS is a very infrastructure-intensive project with extremely tight SLAs and no UI, so this makes a lot of sense.
The point is, people, that nobody has figured out how much AI can replace humans. There is so much hype out there, with every tech celebrity sharing opinions without the responsibility of owning them. We have to wait and see. We can change course when we know the reality. Until then, do what we know well.
My respect for people that take this approach is very high. This is the right way to approach integration of technology.
Can SOME people's jobs be replaced by AI? Maybe on paper. But there are tons of tradeoffs if you START with that approach and assume fidelity of outcome.
Perhaps I'm too cynical about messages coming out of FAANG. But I have a feeling they are saying things to placate the rising anger over mass layoffs, h1b abuse, and offshoring. I hope I'm wrong.
Did a double take at Berman being described as an AI investor. He does invest, but a more appropriate description would be "AI YouTuber".
I don't mean that as a negative, he's doing great work explaining AI to (dev) masses!
It is too late; it is already happening. The evolution of the tech field is toward people being more experienced, not toward AI. But AI will be there for questions and easy one-liners, properly formalized documentation, even TL;DRs.
Simple, just replace the CEO with an LLM and it will be singing a different tune :-P
The cost of not hiring and training juniors is trying to retain your seniors while continuously resetting expectations with them about how they are the only human accountable for more and more stuff.
Remark is at 12:02 in the video.
https://www.youtube.com/watch?v=nfocTxMzOP4&t=722s
That's right, it should be used to replace senior staff right away.
Of course it is... You should replace your senior staff with AI ;) Juniors will just prompt it then....
"AGI" always has been a narrative scam after late 2022.
Agreed.
LLMs are actually -the worst- at doing very specific repetitive things. It'd be much more appropriate for one to replace the CEO (the generalist) rather than junior staff.
CEOs that get paid the most don't care about problems like that.
so refreshing to see this view from someone in a position high up like his
I heard from several sources that AWS has a mandate to put GenAI in everything and force everyone to use it so... yeah.
Maybe the source of "AI replacing junior staff" is a statement the AWS CEO made during a private meeting with a client.
Well yeah, they're just doing this with H1B's and OPT. The other kind of "AI".
Why do so many people act like they’re mutually exclusive? Junior staff can use AI too.
It's not often an article makes me want to buy shares. Like... never. This article is spot on.
Sometimes, the C suite is there because they're actually good at their jobs.
junior engineers aren't hired to get tons of work done; they're hired to learn, grow, and eventually become senior engineers. ai can't replace that, but only help it happen faster (in theory anyway).
This is just to walk back previous statements by Andy Jassy. Political theater.
No one's getting replaced, but you may not hire that new person that otherwise would have been needed. Five years ago, you would have hired a junior to crank out UI components, or well specc'd CRUD endpoints for some big new feature initiative. Now you probably won't.
> well specc'd CRUD endpoints
I’m really tired of this trope. I’ve spent my whole career on “boring CRUD” and the number of relational db backed apps I’ve seen written by devs who’ve never heard of isolation levels is concerning (including myself for a time).
Coincidentally, it shows up as soon as these apps see any scale and issues pop up.
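To make the isolation-level point concrete, here is a minimal sketch of the classic lost-update bug in "boring CRUD" code, assuming Postgres and psycopg2 (the table and column names are made up):

    import psycopg2

    conn = psycopg2.connect("dbname=app")  # hypothetical connection string

    # Naive read-modify-write: under the default READ COMMITTED level, two
    # concurrent requests can both read the same balance and one increment is lost.
    with conn, conn.cursor() as cur:
        cur.execute("SELECT balance FROM accounts WHERE id = %s", (42,))
        (balance,) = cur.fetchone()
        cur.execute("UPDATE accounts SET balance = %s WHERE id = %s", (balance + 100, 42))

    # One fix: take a row lock so concurrent writers serialize on this row
    # (raising the isolation level, or doing the arithmetic in SQL, also works).
    with conn, conn.cursor() as cur:
        cur.execute("SELECT balance FROM accounts WHERE id = %s FOR UPDATE", (42,))
        (balance,) = cur.fetchone()
        cur.execute("UPDATE accounts SET balance = %s WHERE id = %s", (balance + 100, 42))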
[dead]
On the other hand, that extra money can be used to expand the business in other ways, plus most kids coming out of college these days are going to be experts in getting jobs done with AI (although they will need a lot of training in writing actual secure and maintainable code).
Even the highest ranking engineers should be experts. I don’t understand why there’s this focus on juniors as the people who know AI best.
Using AI isn’t rocket science. Like you’re talking about using AI as if typing a prompt in English is some kind of hard to learn skill. Do you know English? Check. Can you give instructions? Check. Can you clarify instructions? Check.
> I don’t understand why there’s this focus on juniors as the people who know AI best.
Because junior engineers have no problem with wholeheartedly embracing AI - they don't have enough experience to know what doesn't work yet.
In my personal experience, engineers who have experience are much more hesitant to embrace AI and learn everything about it, because they've seen that there are no magic bullets out there. Or they're just set in their ways.
To management that's AI obsessed, they want those juniors over anyone that would say "Maybe AI isn't everything it's cracked up to be." And it really, really helps that junior engineers are the cheapest to hire.
> plus most kids coming out of college these days are going to be experts in getting jobs done with AI
“You won’t lose your job to AI, you’ll lose it to someone who uses AI better than you do”
Sure. First line tech support as well. In many situations customers will get vastly superior service if AI agent answers the call.
At least in my personal case, struggling with renewal at Virgin Broadband, multiple humans wasted probably an hour of everyone's time overall on the phone bouncing me around departments, unable to comprehend my request, trying to upsell and pitch irrelevant services, applying contextually inappropriate talking scripts while never approaching what I was asking them in the first place. Giving up on those brainless meat bags and engaging with their chat bot, I was able to resolve what I needed in 10 minutes.
It's strange you have to write this.
In India most of the banks now have apps that do nearly all the banking you can do by visiting a branch personally. To that extent this future is already here.
When I had to close my loan and visit a branch a few times, the manager told me a significant portion of his people's time now goes into actual banking - which according to him means selling products (fixed deposits, insurance, credit cards) and not customer support (which the bank thinks is not its job and only does because there is currently no alternative).
> Sure. First line tech support as well. In many situations customers will get vastly superior service if AI agent answers the call.
In IT, if at a minimum, AI would triage the problem intelligently (and not sound like a bot while doing it), that would save my more expensive engineers a lot more time.
This is mostly because CS folks are given such sales and retention targets; and while I’ve never encountered a helpful support bot even in the age of LLMs, I presume in your case the company management was just happy to have a support bot talking to people without said metrics.
"Brainless meat bags" - have you ever thought that they are instructed to do so to achieve product-selling quotas?
Anyone who blindly follows orders is a brainless meat bag too.
Again, you assume those people have a choice. You should really look into how people in these jobs are pressured to reach quotas and abused in many ways. With a simple search on Reddit you can find plenty of reports about it:
https://www.reddit.com/r/callcentres/comments/1iiqbxh/the_re...
Abuses done by customers: https://www.bbc.com/news/business-59577351
You always have a choice. These people aren't forced to work there. And they also have the ability to go whistleblower and leak internal docs that instruct them to abuse customers. Just as an example.
I know I would. If someone gives you a choice A or B that both screw you over, there's always an option Z somewhere. It might be so outrageous they don't expect it but it's there.
However usually it isn't necessary. I've been put in situations where I had to do something unethical. I've refused. And every time that choice was respected. Only if I'd have been punished for it would I have considered more severe options like the whistle option.
But really if you take a hard stand and have good reasons, reality tends to bend a bit further than I expected.
And yes I know what these jobs are like. I have worked in that industry a long time. I've seen both very good and very terrible employers.
And yeah, customers can also be little shits, but I've learned to disconnect from that very quickly. It's easier when they're on the other side of the phone. It doesn't help them anyway. And sometimes (especially if they're not just being a dick but have a genuine reason to be angry) there are ways to flip them around, in which case that energy can be harnessed and they become your strongest ally. Another thing I've seen that I didn't expect.
No it isn't, he's lying. Sorry guys.
Claude Code is better than a junior programmer by a lot, these guys think it only gets better from here, and they have people with decades in the industry to burn through before they have to worry about training a new crop.
> “My view is you absolutely want to keep hiring kids out of college and teaching them the right ways to go build software and decompose problems and think about it, just as much as you ever have.”
Instead you should replace senior staff who make way more.
Everything boils down to lambda calculus.
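Not from the comment itself, just an illustration of the quip: a minimal sketch in Python showing Church numerals, where numbers and addition are encoded as nothing but single-argument functions (the names zero, succ, add, and to_int are my own choices for the illustration).

    # A minimal sketch (my own illustration, not from the comment above):
    # Church numerals encode numbers and arithmetic purely as functions,
    # which is the sense in which "everything boils down to lambda calculus".

    zero = lambda f: lambda x: x                      # apply f zero times
    succ = lambda n: lambda f: lambda x: f(n(f)(x))   # apply f one more time
    add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

    def to_int(n):
        # Count how many times f would be applied, recovering an ordinary int.
        return n(lambda k: k + 1)(0)

    two = succ(succ(zero))
    three = succ(two)
    print(to_int(add(two)(three)))  # prints 5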
That was not the kicker... it was the ending comment:
Better learn how to learn, as we are not training (or is that paying?) you to learn...
Hey at least there's one adult in the room in the big tech sphere.
Reinforcement is the only way to fly
LLM defenders with the "yOu cAn't cRiTiCiZe iT WiThOuT MeNtIoNiNg tHe mOdEl aNd vErSiOn, It mUsT Be a lAnGuAgE LiMiTaTiOn" crack me up. I used code generation out of curiosity once, for a very simple script, and it fucked it up so badly I was laughing.
Please tell me which software you are building with AI so I can avoid it.
The same CEO that pushed employees back to the office?
I mean, I've used Copilot, JetBrains, etc. on my code base, but for large-scale changes they did so much damage that fixing it took me days and actually slowed me down. These systems are like juniors in their capabilities, actually worse, because junior developers are still people who can think and interact with you coherently over days, weeks, or months; these models aren't even at that level, I think.
> “Often times fewer lines of code is way better than more lines of code,” he observed. “So I'm never really sure why that's the exciting metric that people like to brag about.”
I remember someone who had a .sig that I loved (can't remember where; if he's here, kudos!):
> I hate code, and want as little of it in my programs as possible.
[UPDATE] Is this a source?: https://softwarequotes.com/quote/i-hate-code--and-i-want-as-...
Rather than AI that functions as many junior coders to make a senior programmer more efficient, having AI function as a senior programmer for lots of junior programmers, one that helps them learn and limits the interruptions for human senior coders, makes so much more sense.
Don't let his boss hear him :)
The bubble has, if not burst, at least gotten to the stage where it’s bulging out uncomfortably and losing cohesion.
Finally some common sense from the detached C-suiters.
It's refreshing to finally see CEOs and other business leaders coming around to what experienced, skeptical engineers have been saying for this entire hype cycle.
I assumed it would happen at some point, but I am relieved that the change in sentiment has started before the bubble pops - maybe this will lessen the economic impact.
Yeah, the whole AI thing has very unpleasant similarities to the dot com bubble that burst to the massive detriment of the careers of the people that were working back then.
The parallels in how industry insiders talk about it are striking as well. No one denies that the internet boom was important and impactful, but it's also undeniable that companies wasted unfathomable amounts of cash for no return, at the cost of worker well-being.
Juniors are cheaper than AI tokens and easier to fire and hire.
Junior is just the lower rank; you will still have a lower rank. Fine, don't call it junior.
Finally some fucking sanity from a megacorp CEO. Been a long time.
"We have always been at war with Eastasia"
The amount of developer cope here is over 9000
They're deliberately popping the bubble. If they'd actually thought this and cared, they'd have said it 2 years ago, before the layoffs.
Stop getting played.
Sounds like someone got the memo that the bubble's about to burst.
Hopium for the masses
but wait, what about my Nvidia stocks bro? Can we keep the AI hype going bro. Pls bro just make another AI assistant editor bro.
Time will show you are right and almost all the other dumbasses here on HN are wrong. Which is hardly surprising, since they are incapable of facing their coming replacement.
They are 100% engineers, and engineers have no ability to adapt to other professions. Coding is dead; if you think otherwise, then I hope you are right! But I doubt it, because so far I have only heard counterarguments that are obviously wrong.
That’s a shit article by Thomas, people should stop quoting it.