selinkocalar 24 minutes ago

This hits on something we think about constantly at Delve.

The key is building systems that are transparent about their confidence levels and gracefully handle edge cases.

The companies that will win in AI aren't the ones with perfect algorithms - they're the ones who design for human understanding and real-world messiness.

d4rkn0d3z 2 days ago

In physics, the change from classical to quantum theory was a change from determinism to probabilistic determinism. There is not one physicist on earth that would ever exhort you to use quantum theory where classical theory will do. Furthermore, when you study physics you must learn the classical theory first or you will be hopelessly lost, just like the author of this article.

The central analogy of the article is entirely bogus.

This article does not rise to the level of being wrong.

  • hearsathought a day ago

    > In physics, the change from classical to quantum theory was a change from determinism to probabilistic determinism.

    Don't you mean from determinism to nondeterminism?

    > There is not one physicist on earth that would ever exhort you to use quantum theory where classical theory will do.

    That's being practical.

    • d4rkn0d3z a day ago

      > "Don't you mean from determinism to nondeterminism?"

      No, I mean exactly what I said. Given a system's state, one evolves the state using the wave equation du jour; nondeterminism does not occur.
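
      For concreteness, this is the standard textbook picture: the Schrödinger equation evolves the state deterministically, and probability only enters via the Born rule at measurement:

```latex
i\hbar\,\frac{\partial}{\partial t}\lvert\psi(t)\rangle = \hat{H}\lvert\psi(t)\rangle
\;\Longrightarrow\;
\lvert\psi(t)\rangle = e^{-i\hat{H}t/\hbar}\lvert\psi(0)\rangle,
\qquad
P(a) = \bigl|\langle a \mid \psi(t)\rangle\bigr|^{2}.
```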

  • ares623 a day ago

    > This article does not rise to the level of being wrong.

    Amazing

meindnoch 2 days ago

What a bunch of pretentious nonsense. It is always a red flag when an author tries to shoehorn mathematical notation into an article that has nothing mathematical about it whatsoever. Gives off "igon value problem"-vibes.

ankit219 2 days ago

Building with non-deterministic systems isn't new, and it doesn't take a scientist, though people with experience in these systems are fewer in number today. You saw the same thing in TCP/IP development, where we ended up designing systems that assumed the randomness and made sure it wasn't passed on to the next layer. The same goes for games: given the latency of earlier networks, there was no way networked games could be deterministic.
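
A sketch of that layering idea, with a hypothetical lossy channel (`unreliable_send` and its loss rates are invented for illustration): a checksum-and-retry wrapper absorbs the randomness so that the layer above only ever sees reliable delivery.

```python
import random
import zlib

def unreliable_send(payload: bytes):
    """Hypothetical lossy channel: randomly drops or corrupts packets."""
    r = random.random()
    if r < 0.2:
        return None  # packet dropped
    if r < 0.4:
        # flip the last byte to simulate corruption in transit
        return payload[:-1] + bytes([payload[-1] ^ 0xFF])
    return payload

def reliable_send(data: bytes, max_retries: int = 100) -> bytes:
    """Retry until the checksum verifies; callers never see the randomness."""
    framed = data + zlib.crc32(data).to_bytes(4, "big")
    for _ in range(max_retries):
        received = unreliable_send(framed)
        if received is not None and \
           zlib.crc32(received[:-4]).to_bytes(4, "big") == received[-4:]:
            return received[:-4]
    raise TimeoutError("link too lossy")

print(reliable_send(b"hello"))  # deterministic result from a nondeterministic link
```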

  • golergka 2 days ago

    Doesn't any kind of human in the loop make a system non-deterministic?

pdhborges 2 days ago

I will believe this theory if someone shows me that the ratio of scientists to engineers on the leading teams at the leading companies deploying AI products is greater than 1.

  • layer8 2 days ago

    I don’t think the dichotomy between scientists and engineers being established here makes much sense in the first place. Applied science is applied science.

therobots927 2 days ago

This is pure sophistry and the use of formal mathematical notation just adds insult to injury here:

“Think about it: we’ve built a special kind of function F' that for all we know can now accept anything — compose poetry, translate messages, even debug code! — and we expect it to always reply with something reasonable.”

This forms the axiom from which the rest of this article builds its case. At each step further fuzzy reasoning is used. Take this for example:

“Can we solve hallucination? Well, we could train perfect systems to always try to reply correctly, but some questions simply don't have "correct" answers. What even is the "correct" when the question is "should I leave him?".”

Yes of course relationship questions don’t have a “correct” answer. But physics questions do. Code vulnerability questions do. Math questions do. I mean seriously?

The most disturbing part of my tech career has been witnessing the ability that many highly intelligent and accomplished people have to apparently fool themselves with faulty yet complex reasoning. The fact that this article is written in defense of chatbots that ALSO have complex and flawed reasoning just drives home my point. We’re throwing away determinism just like that? I’m not saying future computing won’t be probabilistic but to say that LLMs are probabilistic, so they are the future of computing can only be said by someone with an incredibly strong prior on LLMs.

I’d recommend Baudrillard’s work on hyperreality. This AI conversation could not be a better example of the loss of meaning. I hope this dark age doesn’t last as long as the last one. I mean, just read this conclusion:

“It's ontologically different. We're moving away from deterministic mechanicism, a world of perfect information and perfect knowledge, and walking into one made of emergent unknown behaviors, where instead of planning and engineering we observe and hypothesize.”

I don’t actually think the above paragraph makes any sense, does anyone disagree with me? “Instead of planning we observe and hypothesize”?

That’s called the scientific method. Which is a PRECURSOR to planning and engineering. That’s how we built the technology we have today. I’ll stop now because I need to keep my blood pressure low.

  • bubblyworld 2 days ago

    You seem to be having strong emotions about this stuff, so I'm a little nervous that I'm going to get flamed in response, but my best take at a well-intentioned response:

    I don't think the author is arguing that all computing is going to become probabilistic. I don't get that message at all - in fact they point out many times that LLMs can't be trusted for problems with definite answers ("if you need to add 1+1 use a calculator"). Their opening paragraph was literally about not blindly trusting LLM output.

    > I don’t actually think the above paragraph makes any sense, does anyone disagree with me?

    Yes - it makes perfect sense to me. Working with LLMs requires a shift in perspective. There isn't a formal semantics you can use to understand what they are likely to do (unlike programming languages). You really do need to resort to observation and hypothesis testing, which yes, the scientific method is a good philosophy for! Two things can be true.

    > the use of formal mathematical notation just adds insult to injury here

    I don't get your issue with the use of a function symbol and an arrow. I'm a published mathematician - it seems fine to me? There's clearly no serious mathematics here, it's just an analogy.

    > This AI conversation could not be a better example of the loss of meaning.

    The "meaningless" sentence you quote after this is perfectly fine to me. It's heavy on philosophy jargon, but that's more a taste thing no? Words like "ontology" aren't that complicated or nonsensical - in this case it just refers to a set of concepts being used for some purpose (like understanding the behaviour of some code).

  • AgentMatt 2 days ago

    > I’d recommend Baudrillard’s work on hyperreality.

    Any specific piece of writing you can recommend? I tried reading Simulacra and Simulation (English translation) a while ago and I found it difficult to follow.

    • therobots927 2 days ago

      I would actually recommend the YouTube channel Plastic Pills. This is a great video to start with: https://youtu.be/S96e6TdJlNE?si=gSVzXyyBq7t_q0Xp

      • tucnak 2 days ago

        Name-dropping Baudrillard based on Youtube videos is real rich... in irony.

        • therobots927 a day ago

          In case you didn’t notice, the parent comment specifically said Baudrillard’s writing was hard to follow. This channel is run by a philosophy PhD who explains his work. You think he’s misrepresenting Baudrillard’s work, I’m guessing?

          • tucnak a day ago

            It's just funny; philosophy YouTube is a simulacrum in its own right. On a different note, I don't think Baudrillard is worth reading in 2025. I would rather recommend Mark Fisher, but then again it's all gravy, baby. To call all this pop literature "philosophy," especially post-Wittgenstein is a bit silly after all.

            • therobots927 a day ago

              Yeah I’m gonna read Fisher’s book. I’ve watched a couple of his lectures on YouTube. I don’t really have the mental bandwidth outside work to get really into nitty gritty philosophy. But stuff that’s digestible and helps me contextualize this twilight zone timeline we’re in is nice. I don’t know what good it does to go down the rabbit hole when the vast majority of people aren’t joining you.

  • nutjob2 2 days ago

    > I’m not saying future computing won’t be probabilistic

    Current and past computing has always been probabilistic in part; that doesn't mean it will become 100% so. Almost all of the implementation of LLMs is deterministic except the part that is randomized. Its output is used in the same way. Humans combine the two approaches as well. Even reality is a combination of quantum uncertainty at a low level and very deterministic physics everywhere else.
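
    A toy sketch of that split (invented logits, not a real model): computing the logits is fully deterministic, all the randomness is confined to the final sampling step, and setting temperature to zero removes it entirely.

```python
import math
import random

def sample_token(logits: dict, temperature: float, rng: random.Random) -> str:
    """The forward pass that produced `logits` is deterministic; randomness
    lives entirely in this final sampling step."""
    if temperature == 0.0:
        return max(logits, key=logits.get)  # greedy decoding: fully deterministic
    # softmax with temperature, then one weighted random draw
    weights = {t: math.exp(l / temperature) for t, l in logits.items()}
    r = rng.random() * sum(weights.values())
    for token, w in weights.items():
        r -= w
        if r <= 0:
            return token
    return token  # numerical-edge fallback

logits = {"yes": 2.0, "no": 1.0, "maybe": 0.5}
print(sample_token(logits, 0.0, random.Random(0)))  # always "yes"
print(sample_token(logits, 1.0, random.Random(0)))  # seeded, so reproducible too
```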

    > We're moving away from deterministic mechanicism, a world of perfect information and perfect knowledge, and walking into one made of emergent unknown behaviors, where instead of planning and engineering we observe and hypothesize.

    The hype machine always involves pseudo-scientific babble and this is a particularly cringey example. The idea that seems to be promoted, that AI will be god like and therein we'll find all truth and knowledge is beyond delusional.

    It's a tool, like all other tools. Just as we see faces in everything, we're also very susceptible to language (especially our own, consumed and regurgitated back to us) from a very neat chatbot.

    AI hype is borderline mass hysteria at this point.

    • therobots927 2 days ago

      “The hype machine always involves pseudo-scientific babble and this is a particularly cringey example.”

      Thanks for confirming. As crazy as the chatbot fanatics are, hearing them talk makes ME feel crazy.

  • hgomersall 2 days ago

    There's another couple of principles underlying most uses of science: consistency and smoothness. That is, extrapolation and interpolation make sense. Also, that if an experiment works now, it will work forever. Critically, the physical world is knowable.

  • rexer 2 days ago

    I read the full article (really resonated with it, fwiw), and I'm struggling to understand the issues you're describing.

    > Yes of course relationship questions don’t have a “correct” answer. But physics questions do. Code vulnerability questions do. Math questions do. I mean seriously?

    Can you say more? It seems to me the article says the same thing you are.

    > I don’t actually think the above paragraph makes any sense, does anyone disagree with me? “Instead of planning we observe and hypothesize”?

    I think the author is drawing a connection to the world of science, specifically quantum mechanics, where the best way to make progress has been to describe and test theories (as opposed to math where we have proofs). Though it's not a great analog since LLMs are not probabilistic in the same way quantum mechanics is.

    In any case, I appreciated the article because it talks through a shift from deterministic to probabilistic systems that I've been seeing in my work.

    • voidhorse 2 days ago

      Sure, but it's overblown. People have been reasoning about and building probabilistic systems formally since the birth of information theory back in the 1940s. Many systems we already rely on today are highly stochastic in their own ways.

      Yes, LLMs are a bit of a new beast in terms of the use of stochastic processes as producers—but we do know how to deal with these systems. Half the "novelty" is just people either forgetting past work or being ignorant of it in the first place.

      • Terr_ 2 days ago

        > Half the "novelty" is just people either forgetting past work or being ignorant of it in the first place.

        We also see this in cryptocurrencies. The path to forgetting is greased by the presence of money and fame, and at some later time they are eventually forced to "discover" the same ancient problems they insisted couldn't possibly apply.

      • rexer 2 days ago

        Truly appreciate the perspective. Any pointers to past work on dealing with stochastic systems? Part of my work is securing AI workloads, and it seems like losing determinism throws out a lot of assumptions in previously accepted approaches.

  • voidhorse 2 days ago

    It's already wrong at the first step. A probabilistic system is by definition not a function (it is a relation). This is such a basic mistake I don't know how anyone can take this seriously. Many existing systems are also not strictly functions (internal state can make them return different outputs for a given input). People love to abuse mathematics and employ its concepts hastily and irresponsibly.

    • therobots927 2 days ago

      The fact that the author is a data scientist at Anthropic should start ringing alarm bells for anyone paying attention. Isn’t Claude supposed to be at the front of the pack? To be honest, I have a suspicion that Claude wrote the lion’s share of this essay. It’s that incomprehensible, and soaked in jargon and formulas used completely out of context and incorrectly.

      • stoorafa 2 days ago

        Their job, in this case, is probably more of a signal than a clear indicator

        Plenty of front-running companies have hired plenty of…not-solid or excessively imaginative data scientists

        From what the other comments say, this one seems to lack a grounding in science itself, which frankly is par for the course depending on their background

  • falcor84 2 days ago

    > Yes of course relationship questions don’t have a “correct” answer. But physics questions do. Code vulnerability questions do. Math questions do. I mean seriously?

    But as per Gödel's incompleteness theorem and the Halting Problem, math questions (and consequently physics and CS questions) don't always have an answer.

    • therobots927 2 days ago

      Providing examples of questions without correct answers does not prove that no questions have correct answers. Or that its hallucinations aren’t problematic when they provide explicitly incorrect answers. The author is just avoiding the hallucination problem altogether by saying “well, sometimes there is no correct answer”.

    • layer8 2 days ago

      There is a truth of the matter regarding whether a program will eventually halt or not, even when there is no computable proof for either case. Similar for the incompleteness theorems. The correct response in such cases is “I don’t know”.

      • therobots927 2 days ago

        You know something I don’t hear a lot from chatGPT? “I don’t know”

        • A4ET8a8uTh0_v2 2 days ago

          True. Seemingly, based on my experience, the instructions are to keep the individual engaged for as long as possible. I hate to say it, but it works, too.

  • aredox 2 days ago

    [flagged]

    • therobots927 2 days ago

      Exactly right. LLMs are a natural product of our post truth society. I’ve given up hope that things get better but maybe they will once the decline becomes more tangible. I just hope it involves less famine than previous systemic collapses.

bithive123 2 days ago

It became evident to me while playing with Stable Diffusion that it's basically a slot machine. A Skinner box with a variable reinforcement schedule.

Harmless enough if you are just making images for fun. But probably not an ideal workflow for real work.

  • diggan 2 days ago

    > It became evident to me while playing with Stable Diffusion that it's basically a slot machine.

    It can be, and usually is by default. If you set the seeds to fixed numbers, and everything else remains the same, you'll get deterministic output. A slot machine implies you keep putting in the same thing and get random good/bad outcomes; that's not really true for Stable Diffusion.
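
    A toy illustration of the seeded-generator pattern (`toy_sampler` is invented here as a stand-in for the real sampler; with diffusers you'd pass a seeded `torch.Generator` to the pipeline instead):

```python
import random

def toy_sampler(prompt: str, seed: int, steps: int = 5) -> list:
    """Stand-in for a diffusion sampler: every bit of noise is drawn from
    one seeded generator, so identical (prompt, seed, settings) give
    identical output."""
    rng = random.Random(f"{prompt}:{seed}")  # string seeds are deterministic
    latent = 0.0
    trajectory = []
    for _ in range(steps):
        latent += rng.gauss(0, 1)  # the only source of randomness
        trajectory.append(round(latent, 6))
    return trajectory

a = toy_sampler("a cat in a hat", seed=42)
b = toy_sampler("a cat in a hat", seed=42)
c = toy_sampler("a cat in a hat", seed=43)
assert a == b  # same seed -> same "image"
assert a != c  # new seed -> new draw
```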

    • bithive123 2 days ago

      Strictly speaking, yes, but there is so much variability introduced by prompting that even keeping the seed value static doesn't change the "slot machine" feeling, IMHO. While prompting is something one can get better at, you're still just rolling the dice and waiting to see whether the output is delightful or dismaying.

      • diggan a day ago

        > IMHO. While prompting is something one can get better at, you're still just rolling the dice and waiting to see whether the output is delightful or dismaying.

        You yourself acknowledge that someone can be better than another at getting good results from Stable Diffusion, so how is that in any way similar to a slot machine or rolling the dice? The point of those analogies is precisely that it doesn't matter what skill/knowledge you have; you'll get a random outcome. The same is very much not true for Stable Diffusion usage, something you seem to know yourself.

  • A4ET8a8uTh0_v2 2 days ago

    << But probably not an ideal workflow for real work.

    Hmm. Ideal is rarely an option, so I have to assume you are being careful about phrasing.

    Still, despite it being a black box, one can still tip the odds on one's favor, so the real question is what is considered 'real work'? I personally would define that as whatever they you are being paid to do. If that premise is accepted, then the tool is not the issue, despite its obvious handicaps.

novok 2 days ago

What exactly was rewritten in 3 weeks at replit? Literally everything? The agent part?

mehulashah 2 days ago

I tend to agree with most of what has been said. The only difference is that some workflows need to be deterministic otherwise the penalty for failure is high. In that case, AI is helpful in searching through a space, but something orthogonal needs to verify its output.

weego a day ago

'AI businesses just aren't like anything before them'.

The entire article is a business pseudo-philosophy word salad equivalent of tiktok 'I'm not like other (men/women)' posts.

jmogly 2 days ago

Right now you have this awesome new dynamic capability that doesn’t mesh with how we are used to building software: well defined, constrained, correct. Software products are closed-ended. Imagine if they weren’t. Imagine if you were playing an open-world game like Skyrim or RuneScape and new areas were created as you explored, and new weapons and entirely new game mechanics spontaneously arose as you played. Or imagine an intent-based business analytics app that had a view on your company’s database and, when a user wanted a report or a visual, generated it on the fly. We’re only limited by our imagination here.

  • dingnuts 2 days ago

    and by reality. the tech is nowhere near ready for this

  • thrown-0825 2 days ago

    imagine whatever you like, its your fantasy

    in the real world we expect consistent behavior from engineered systems, and building everything on top of a gpu powered bullshit engine isn't going to give you that

    its a nice narrative though, maybe you should pick up writing sci-fi

    • jmogly a day ago

      Yeah, today we do … what if engineered systems were more like humans, where we trade consistency for more interesting capabilities? Not to say there won’t still be traditional engineering for stable-behavior systems, but these tools do have the ability to add a human touch to our software that wasn’t possible before. I’ll stick to software.

      • rafterydj a day ago

        That's all well and good to ask as a what if, but in terms of practical applications, the vast majority of the time you want to trade the other way around whenever possible - you want your system to work reliably. That's the cornerstone of any project that wants to be useful for others.

failiaf 2 days ago

(unrelated) what's the font used for the cursive in the article? the heading is ibm plex serif and the content dm mono, but the cursive font is simply labeled as dm mono which isn't accurate

  • leutersp 2 days ago

    Chrome's dev console shows that the italics font is indeed named "dm", just like the rest of the content. It is not really a cursive; only a few letters are stylized ("f", "s" and "l").

    It is possible (and often desirable) to use different WOFF fonts for italics, and they can look quite different from the standard font.

patrickscoleman 2 days ago

Great read. We've been seeing some wild emergent behavior at Rime (tts voice ai) too, e.g. training the model to <laugh> and it being able to <sigh>.

thorum 2 days ago

I like this framing, but I don’t think it’s entirely new to LLMs. Humans have been building flexible, multi-purpose tools and using them for things the original inventor or manufacturer didn’t think of since before the invention of the wheel. It’s in our DNA. Our brains have been shaped by a world where that is normal.

The rigidness and near-perfect reliability of computer software is the unusual thing in human history, an outlier we’ve gotten used to.

  • therobots927 2 days ago

    “The rigidness and near-perfect reliability of computer software is the unusual thing in human history, an outlier we’ve gotten used to.”

    Ordered approximately by recency:

    Banking? Clocks? Roman aqueducts? Mayan calendars? The sun rising every day? Predictable rainy and dry season?

    How is software the outlier here?

    • thorum 2 days ago

      My point was more “humans are used to tools that don’t always work and can be used in creative ways” than “no human invention has ever been rigid and reliable”.

      People on HN regularly claim that LLMs are useless if they aren’t 100% accurate all the time. I don’t think this is true. We work around that kind of thing every day.

      With your examples:

      - Before computers, fraud and human error were common in the banking system. We designed a system that was resilient to this and mostly worked, most of the time, well enough for most purposes, even though it was built on an imperfect foundation.

      - Highly precise clocks are a recent invention. For regular people 200 years ago, one person’s clock would often be 5-10 minutes off from someone else’s. People managed to get things done anyway.

      I’ll grant you that Roman aqueducts, seasons and the sun are much more reliable than computers (as are all the laws of nature).

      • skydhash a day ago

        Isn’t technology the invention of more reliable and precise tools?

  • ericwood 2 days ago

    I've always viewed computers as being an obvious complement. Of course we worked so hard to build machines that are good at the things our brains don't take to as naturally.

ipdashc 2 days ago

While this article is a little overenthusiastic for my taste, I think I agree with the general idea of it - and it's always kind of been my pet peeve when it comes to ML. It's a little depressing to think that's probably where the industry is heading. Does anyone feel the same way?

A lot of the stuff the author says resonates deeply, but like, the whole determinism thing is why I liked programming and computers in the first place. They are complicated but simple; they run on straightforward, man-made rules. As the article says:

> Any good engineer will know how the Internet works: we designed it! We know how packets of data move around, we know how bytes behave, even in uncertain environments like faulty connections.

I've always loved this aspect of it. We humans built the entire system, from protocols down to transistors (and the electronics/physics is so abstracted away it doesn't matter). If one wants to understand or tweak some aspect of it, with enough documentation or reverse engineering, there is nothing stopping you. Everything makes sense.

The author is spot on; every time I've worked with ML it feels more like you're supposed to be a scientist than an engineer, running trials and collecting statistics and tweaking the black box until it works. And I hate that. Props to those who can handle real fields like biology or chemistry, right, but I never wanted to be involved with that kind of stuff. But it seems like that's the direction we're inevitably going.
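
At least the "running trials and collecting statistics" part can be made disciplined. A minimal sketch of that loop (the `flaky_system` stand-in is hypothetical): repeat the experiment many times and report a success rate with a confidence interval, rather than eyeballing one run.

```python
import math
import random

def estimate_success_rate(run_trial, n: int = 200, seed: int = 0):
    """Treat the black box as an experiment: repeat it n times, then report
    a pass rate with a 95% normal-approximation confidence interval."""
    rng = random.Random(seed)
    successes = sum(run_trial(rng) for _ in range(n))
    p = successes / n
    half_width = 1.96 * math.sqrt(p * (1 - p) / n)
    return p, (max(0.0, p - half_width), min(1.0, p + half_width))

# Hypothetical stand-in for "prompt the model, check the output":
flaky_system = lambda rng: rng.random() < 0.8  # "passes" ~80% of the time

p, (low, high) = estimate_success_rate(flaky_system)
print(f"pass rate ~ {p:.2f}, 95% CI [{low:.2f}, {high:.2f}]")
```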

  • ACCount37 2 days ago

    ML doesn't work like programming because it's not programming. It just happens to run on the same computational substrate.

    Modern ML is at this hellish intersection of underexplored math, twisted neurobiology and applied demon summoning. An engineer works with known laws of nature - but the laws of machine learning are still being written. You have to be at least a little bit of a scientist to navigate this landscape.

    Unfortunately, the nature of intelligence doesn't seem to yield itself to simple, straightforward, human-understandable systems. But machine intelligence is desirable. So we're building AIs anyway.

    • ath3nd a day ago

      > Modern ML is at this hellish intersection of underexplored math, twisted neurobiology and applied demon summoning

      Nah, it's just a very very big and fancy autocomplete with probabilistic tokenization and some extra tricks thrown in to minimize the shortcomings of the approach.

      > Unfortunately, the nature of intelligence doesn't seem to yield itself to simple, straightforward, human-understandable systems.

      LLMs are maybe artificial but they are not intelligence unless you have overloaded the term intelligence to mean something much less and more trivial. A crow and even a cat is intelligent. An LLM is not.

      • ACCount37 a day ago

        That's copium.

        The proper name for it is "AI effect", but the word "copium" captures the essence perfectly.

        Humans want to feel special, and a lot of them feel like intelligence is what makes them special. So whenever a new AI system shows a new capability that was thought to require intelligence? A capability that was once exclusive to humans? That doesn't mean it's "intelligent" in any way. Surely it just means that this capability was stupid and unimportant and didn't require any intelligence in the first place!

        Writing a simple short story? Solving a college level math problem? Putting together a Bash script from a text description of what it should do? No intelligence required for any of that!

        Copium is one hell of a drug.

        • ath3nd a day ago

          > Copium is one hell of a drug.

          What is the word for creating an account 12 days ago and exclusively defending the LLMs because they can't defend themselves?

          > Writing a simple short story

          Ah, allow me to introduce you to the Infinite Monkey theorem.

          https://en.wikipedia.org/wiki/Infinite_monkey_theorem

          In the case of LLMs it's just the monkey's hand is artificially guided by all the peanut-guided trainings it was trained on but it still didn't use a single ounce of thought or intelligence. Sorry that you get impressed by simple tricks and confuse them for magic.

          • ACCount37 a day ago

            And this proves what exactly? That any task can be solved by pure chance at pass@k, with k blowing out to infinity as the solution space grows?

            We know that. The value of intelligence is being able to outperform that random chance.

            LLMs already outperform a typewriter monkey, a keyboard cat, and a not-insignificant number of humans on a very diverse range of tasks.
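
            For what it's worth, "outperforming random chance at pass@k" has a standard unbiased estimator (the combinatorial form used in code-generation evals), which is easy to sketch:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate given n samples of which c were correct:
    1 - C(n-c, k) / C(n, k), i.e. the chance at least one of k draws passes."""
    if n - c < k:
        return 1.0  # fewer failures than draws: some draw must pass
    return 1.0 - comb(n - c, k) / comb(n, k)

# A system that solves a task 10% of the time per attempt:
print(pass_at_k(n=100, c=10, k=1))   # 0.1: single attempts rarely pass
print(pass_at_k(n=100, c=10, k=10))  # much higher when allowed 10 tries
```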

    • voidhorse 2 days ago

      You should read some of the papers written in the 1940s and learn about the history of cybernetics. Your glowing perception of the "demon summoning" nature of ML might change a bit.

      People want others to think this tech is mysterious. It's not. We've known the theory of these systems since the mid 1900s, we just didn't fully work out the resource arrangements to make them tractable until recently. Yes, there are some unknowns and the end product is a black box insofar as you cannot simply inspect source code, but this description of the situation is pure fantasy.

      • ACCount37 2 days ago

        Good luck trying to use theory from the 1940s to predict modern ML. And if theory has little predictive power, then it's of little use.

        There's a reason why so many "laws" of ML are empirical - curves fitted to experimental observation data. If we had a solid mathematical backing for ML, we'd be able to derive those laws from math. If we had solid theoretical backing for ML, we'd be able to calculate whether a training run would fail without actually running it.

        People say this tech is mysterious because it is mysterious. It's a field where practical applications are running far ahead of theory. We build systems that work, and we don't know how or why.

        • skydhash a day ago

          We have solid backing in maths for it. But what we are seeking is not what the math tells us, but a hope that what it tells us is sufficiently close to the TRUTH. Hence the pervasive presence of error and loss functions.

          We know it’s not the correct answer, but better something close than nothing. (Close can be awfully far, which is worse than nothing.)

          • ACCount37 a day ago

            The math covers the low level decently well, but you run out of it quick. A lot of it fails to scale, and almost all of it fails to capture the high level behavior of modern AIs.

            You can predict how some simple narrow edge case neural networks will converge, but this doesn't go all the way to frontier training runs, or even the kind of runs you can do at home on a single GPU. And that's one of the better covered areas.

            • skydhash a day ago

              You can’t predict because the data is unknown before training. And training is computation based on math. And the results are the weights. And every further computation is also math based. The result can be surprising, but there’s no fairy dust here.

              • ACCount37 a day ago

                There's no fairy dust there, but that doesn't mean we understand how it works. There's no fairy dust in human brain either.

                Today's mathematical background applied to frontier systems is a bit like trying to understand how a web browser works from knowing how a transistor works. The mismatch is palpable.

                Sure, if you descend to a low enough level, you wouldn't find any magic fairy dust - it's transistors as far as eye can see. But "knowing how a transistor works" doesn't come close to capturing the sheer complexity. Low level knowledge does not automatically translate to high level knowledge.

  • crabmusket 2 days ago

    I feel similarly to you.

    Even amidst the pressure of "client X needs feature Y yesterday, get it done with maximum tech debt!" it felt like there were still chances to carve out niches of craft, care and quality.

    The rise of "probabilistic" software feels like it is a gift to "close enough is good enough" and "ship first, ask questions later".

    If only we all became scientists. We'll more likely become closer to tabloid journalists, producing something that sounds just truthy enough to get clicks.

  • Legend2440 2 days ago

    > They are complicated but simple; they run on straightforward, man-made rules.

    The trouble is that many problems simply cannot be solved using rules. They're too complex and undefinable and vague.

    We need systems that are more flexible and expressive to handle the open-ended complexity of the real world.

  • ethan_smith 2 days ago

    The deterministic and probabilistic paradigms will likely coexist rather than fully replace each other - we'll build deterministic interfaces and guardrails around probabilistic cores to get the best of both worlds.
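
    A minimal sketch of that wrapper pattern (the schema and the `ask_llm` stub are hypothetical): a deterministic validator around a stochastic core, with retries and a safe fallback, so only schema-valid output ever escapes.

```python
import json
import random

def ask_llm(prompt: str, rng: random.Random) -> str:
    """Stub for a stochastic model call: sometimes valid JSON, sometimes not."""
    if rng.random() < 0.5:
        return '{"sentiment": "positive", "confidence": 0.9}'
    return "Sure! Here's your answer: positive (ish?)"

def classify(prompt: str, retries: int = 5, seed: int = 0) -> dict:
    """Deterministic guardrail: validate, retry, and fall back."""
    rng = random.Random(seed)
    for _ in range(retries):
        raw = ask_llm(prompt, rng)
        try:
            out = json.loads(raw)
            # deterministic schema checks around the probabilistic core
            if out.get("sentiment") in {"positive", "negative", "neutral"} \
               and 0.0 <= out.get("confidence", -1.0) <= 1.0:
                return out
        except json.JSONDecodeError:
            pass  # malformed output: fall through and retry
    return {"sentiment": "neutral", "confidence": 0.0}  # safe fallback

print(classify("How do users feel about the new UI?"))
```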

drunx a day ago

This article can be taken a little less seriously. It is just an opinion/experience of a person.

I think he has a lot of good points there. Yes he treads on the thin ice with the very big statements and generalisations, but those we don't have to "sign under with blood".

I do like the simple formula concept. It does make sense. It's not an ultimate representation of everything, but it's a nice idea of how to frame the differences between the logics we are dealing with.

I choose not to commit the whole message of the article to my core beliefs, but I'll borrow thoughts and ideas for the debates and work ahead.

AIorNot 2 days ago

From the article:

“We have a class of products with deterministic cost and stochastic outputs: a built-in unresolved tension. Users insert the coin with certainty, but will be uncertain of whether they'll get back what they expect. This fundamental mismatch between deterministic mental models and probabilistic reality produces frustration — a gap the industry hasn't yet learned to bridge.”

And all the news today around AI being a bubble -

We’re still learning what we can do with these models and how to evaluate them, but industry and capitalism force our hand into building sellable products rapidly.

  • CGMthrowaway 2 days ago

    It's like putting money into a (potentially) rigged slot machine

    • ACCount37 2 days ago

      It's like paying a human to do something.

      Anyone who thinks humans are reliable must have never met one.

  • voidhorse 2 days ago

    Precisely. There is nothing inherently wrong with LLMs and "agent" systems. There are certain classes of problems that they might be great for solving.

    The problem is that the tech industry has devolved into a late-capitalist clown market supported on pure wild speculation and absurd "everything machine" marketing. This not only leads to active damage (see people falling into delusional spirals thanks to chat bots) but also inhibits us from figuring out what the actual good uses are and investing into the right applications.

    Radical technology leads to behavior change. Smartphones led to behavior change, but you didn't have to beg people to buy them. LLMs are leading to behavior change, but only because they are being forcibly shoved into everyone's faces, and people are co-gaslighting each other into the collective hysteria that not participating means missing out on something big; yet they can never articulate what that big thing actually is.

olddustytrail a day ago

I vibe like there is a real point trying to escape this article but it sounds like a long form LinkedIn post.

Ask ChatGPT to fix it.

ps. On a side note, I love that vibe has come back as a word. Feels like the 60s.

camillomiller 2 days ago

It seems to me that probabilistic approaches are more akin to magical AI thinking right now, so defending that as the new paradigm sounds quite egregious and reeks of (maybe involuntary?) inevitabilism.

Even if the assumption is correct, forcing a probabilistic system on a strongly deterministic society won't end well. Maybe for society, but mostly for the companies drumming up their probabilistic systems and their investors.

Also, anyone who wants to make money probabilistically is better off going to the casino. Baccarat is a good one. European roulette also has a better house edge than ChatGPT's error margin.

  • ath3nd a day ago

    > It seems to me that probabilistic approaches are more akin to magical AI thinking right now, so defending that as the new paradigm sounds quite egregious and reeks of (maybe involuntary?) inevitabilism

    Thank you for saying that!

    I read it as: "our product is fickle and unreliable and you have to get used to it and love it because we tell you that is the future".

    But it's not the future; it's just one of many possible futures, and not one that I, and a large part of society, want to be a part of. These "leaders" are talking and talking, but they are just salesmen, trying to frame what they are selling you as good or inevitable. It's not.

    Look for example, at the Ex CEO of Github and his clownish statements:

    - 2nd of August: Developers, either embrace AI or leave the industry https://www.businessinsider.com/github-ceo-developers-embrac...

    - 11th of August: Resigns. https://www.techradar.com/pro/github-ceo-resigns-is-this-the...

    Tell me this is not pitiful. Tell me this is the person I gotta believe in, the one who knows the future of tech.

    Tell me I gotta believe Sama when he tells me for the 10th time that AGI is nearly there when his latest features were "study mode", "announcing OpenAI office suite" and ChatGPT5 (aka ChatGPT4.01).

    Or Musk and his full self-driving cars, which he has promised since 2019? The guy who bought Twitter and tanked its value by half so he could win the election for Trump, then got himself kicked out of the government? The guy making Nazi salutes?

    Are those the guys telling us what the future is, and why are we even listening to them?

    • camillomiller a day ago

      Unfortunately the answer is “because they are rich beyond comprehension and no regulation was put in place at the right time to avoid that”.

      • ath3nd a day ago

        While money certainly gives them a bigger ability to influence the future than an ordinary bloke, it still doesn't mean they are able to just wish it true.

        Elon failed at his Boring tunnel.

        Elon is failing at his rocket ventures so badly that he had to try to gut NASA to stay competitive.

        Elon didn't achieve full self-driving cars and is now being sued, hopefully into oblivion, over the fatal crashes his half-baked tech caused.

        Zuck didn't achieve the Metaverse.

        Zuck didn't achieve widespread VR adoption.

        It's extremely funny to see these pitiful billionaires speak with confidence of a future they want and pour billions into, but then fail to achieve. Their failures are multiplied by how much money, resources, time and people they threw at the problem while still failing. Watching these man-children unravel in front of our very eyes is infinite comedy: we are talking going Nazi, getting addicted to drugs, going "bro", MMA training. One can't make this stuff up; their actions read like made-up satire in The Onion.

        They are society's laughing stock, because despite all their vast resources they seemingly don't want to do anything good for society, and instead prefer to race to see who can fly their pitiful rockets to Mars first, sell more crap to people who don't need it, or influence yet another election and elect the next Mussolini wannabe. Those guys are the ultimate clowns.

thrown-0825 2 days ago

this entire endeavor is a fool's errand, and anyone who has used coding agents for anything more complex than a web tut knows it.

it doesn't matter how much jargon and mathematical notation you layer on top of your black-box next-token generator; it will still be unreliable and inconsistent, because fundamentally the output is an approximation of an answer and has no basis in reality.

This is not a limitation you can build around; it's a basic limitation of the underlying models.

Bonus points if you are relying on an LLM for orchestration or agentic state: it's not going to work, just move on to a problem you can actually solve.

  • eru 2 days ago

    You could more or less use the same reasoning to argue for why humans can't write software.

    And you'd be half-right: humans are extremely unreliable, and it takes a lot of safeguards and automated testing and PR reviews etc to get reliable software out of humans.

    (Just to be clear, I agree that current models aren't exactly reliable. But I'm fairly sure with enough resources thrown at the problem, we could get reasonably reliable systems out of them.)
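The safeguards eru describes for unreliable humans (testing, reviews, redundancy) have a direct analogue for unreliable model outputs: run several independent attempts and keep the majority answer. A minimal sketch of that idea, with `flaky_worker` as a made-up stand-in for any unreliable producer, human or model:

```python
import random
from collections import Counter

def flaky_worker(task: str) -> str:
    # Hypothetical unreliable producer: right about 80% of the time.
    return "correct answer" if random.random() < 0.8 else "mistake"

def with_safeguards(task: str, attempts: int = 9) -> str:
    # Reliability layer: gather independent attempts and keep the
    # majority result, trading extra cost for a lower error rate.
    votes = Counter(flaky_worker(task) for _ in range(attempts))
    return votes.most_common(1)[0][0]
```

With enough independent attempts, the majority vote is far more reliable than any single attempt, which is roughly what review processes buy us with humans.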

    • skydhash a day ago

      There are a lot of projects with only one or two people behind them that produce widely used software. I have yet to see big tech produce something actually useful with one of those agents they’re touting.

ath3nd 2 days ago

> Dismissal is a common reaction when witnessing AI’s rate of progress. People struggle to reconcile their world model with what AI can now do, and how.

By probabilistic pattern matching. Next.

> Every model update doesn’t necessarily mean a complete rewrite every time, but it does force you to fundamentally rethink your assumptions each time, making a rewrite a perfectly plausible hypothesis.

Translation: our crap is unreliable and will introduce regressions that you have to manage. It can regress so much that anything you built on this has to be redone. Next.

> I strongly believe that when it comes to AI, something is happening. This time it does feel different.

Said the guy working for the company that is selling you this stuff. Next.

ath3nd a day ago

Dealing with pretentious pseudoscientific blogs in the LLM era. /s

mentalgear 2 days ago

> After decades of technical innovation, the world has (rightfully) developed some anti-bodies to tech hype. Mainstream audiences have become naturally skeptical of big claims of “the world is changing”.

Well, it took about 3 years of non-stop AI hype from the industry and press (and constant ignoring of actual experts) until the perception finally shifted toward recognising it as another bubble. So I wouldn't say any lessons were learned. Get ready for the next bubble, when the crypto grifters who moved to "AI" move on to the NEXT-BIG-THING!

  • brookst 2 days ago

    A technology's long term value has absolutely zero relationship to whether there is a bubble at any moment. Real estate has had many bubbles. That doesn't mean real estate is worthless, or that it won't appreciate.

    Two propositions that can both be true, and which I believe ARE both true:

    1. AI is going to change the world and eat many industries to an even greater degree than software did

    2. Today, AI is often over-hyped and some combo of grifters, the naive, and gamblers are driving a bubble that will pop at some point

  • ACCount37 2 days ago

    There's been non-stop talk of "AI bubble" for 3 years now. Frontier AI systems keep improving in the meanwhile.

    Clearly, a lot of people very desperately want AI tech to fail. And if there is such a strong demand for "tell me that AI tech will fail", then there will be hacks willing to supply. I trust most of those "experts" as far as I can throw them.

  • ivape 2 days ago

    “… finally the perception seems to have shifted in recognising it as another bubble”

    Who recognized this exactly? The MIT article? Give me a break. NVDA was at $90 this year; was that the world recognizing AI was a bubble? No one is privy to anything when it comes to this. Everyone is just going to get blindsided again and again, every time they sleep on this stuff.

  • lacy_tinpot 2 days ago

    Is it really "hype" if hundreds of millions of people are using LLMs on a daily basis?

    • PaulRobinson 2 days ago

      It’s not the usage that’s the problem. It’s the valuations.

    • nutjob2 2 days ago

      Absolutely; it's just that some are kidding themselves as to what it can do now and in the future.

    • pmg101 2 days ago

      The dot-com bubble burst, but I'm betting that today you visited at least one of those "websites" they were hyping.

hodgehog11 2 days ago
  • adidoit 2 days ago

    No it isn't... the author is talking about products, and about building with a completely different mindset from deterministic software.

    The bitter lesson is about model level performance improvements and the futility of scaffolding in the face of search and scaling.

    • hodgehog11 2 days ago

      It isn't clear to me why these are so different. The alternative mindset to deterministic software is to use probabilistic models. The common mentality is that deterministic software takes developer knowledge into account. This becomes less effective in the big-data era; that's the bitter lesson, and that's why the shift is taking place. I'm not saying that the article is the bitter lesson restated, but I am saying that it is a realisation of that lesson.

      • adidoit 2 days ago

        Ah ok yes that makes more sense to me. Thank you for clarifying - I agree this new philosophy of building on probabilistic software is an outcome of the bitter lesson.

        And we will over time have even more capability in the model (that is more general purpose) than our deterministic scaffolds...

        • ath3nd a day ago

          > And we will over time have even more capability in the model (that is more general purpose) than our deterministic scaffolds...

          Who is "we" in that case? Are you the one building the model? Do you have the compute and data capacity to test every corner case that matters?

          In a deterministic system you can review the code and determine what it does under certain conditions. How do you know that the ones who do build non-deterministic systems (because, let's face it, you will use but not build those systems) haven't rigged them for their benefit and not yours?

          > And we will over time have even more capability in the model (that is more general purpose) than our deterministic scaffolds...

          "Our deterministic scaffolds" sounds so dramatic, as if you think of them as chains holding you back: if only those chains were removed, you'd be able to fly. But it's not you who'd be able to fly; it's the ones building the model and having the compute to build it. And because of its non-deterministic nature, a backdoor for their benefit now comes with plausible deniability. Who is "we"? You are a user of those models; you will not be adding anything to them, except maybe circumstantially, through your prompts being mined. You are not "we".

          • hodgehog11 21 hours ago

            This is a genuine concern, which is why it is a very hot topic of research. If you're giving a probabilistic program the potential to do something sinister, using a commercial model, or one that you have not carefully finetuned yourself, would be a terrible idea. The same principle applies to commercial binaries: without decompilation and thorough investigation, can you really trust what they're doing?

  • greymalik 2 days ago

    How so?

    • chaos_emergent 2 days ago

      The author advocates building general-purpose systems that can accomplish goals within some causal boundary, given relevant constraints, versus highly deterministic logical flows created from priors like intuition or user research.

      The parallel with the bitter lesson: general-purpose algorithms that use search and learning, leveraged by increasing computational capacity, tend to beat out specialized methods that exploit intuition about how cognitive processes work.