arp242 16 hours ago

A lot of these "security bugs" are not really "security bugs" in the first place. Denial of service is not resulting in people's bank accounts being emptied or nude selfies being spread all over the internet.

Things like "panics on certain content" like [1] or [2] are "security bugs" now. By that standard anything that fixes a potential panic is a "security bug". I've probably fixed hundreds if not thousands of "security bugs" in my career by that standard.

Barely qualifies as a "security bug" yet it's rated as "6.2 Moderate" and "7.5 HIGH". To say nothing of the gazillion "high severity" "regular expression DoS" nonsense and whatnot.

And the worst part is all of this makes it so much harder to find actual high-severity issues. It's not harmless spam.

[1]: https://github.com/gomarkdown/markdown/security/advisories/G...

[2]: https://rustsec.org/advisories/RUSTSEC-2024-0373.html

  • codedokode 9 hours ago

    Dereferencing a null pointer is an error. It is a valid bug.

    The maintainer claims this is caused by allocator failure (malloc returning null), but it is still a valid bug. If you don't want to deal with malloc failures, just crash when malloc() returns null, instead of not checking the malloc() result at all.

    The maintainer could just write a wrapper around malloc that crashes on failure and replace all calls with the wrapper. It seems like an easy fix. Almost no software can run without heap memory, so it makes no sense for the program to continue.
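
    For illustration, a minimal sketch of what such a wrapper might look like (the name xmalloc is assumed here, not anything from libxml2's API):

      #include <stdio.h>
      #include <stdlib.h>

      /* Crash immediately on allocation failure instead of returning
         NULL, so callers never need to check the result. */
      static void *xmalloc(size_t size) {
          void *p = malloc(size);
          if (p == NULL) {
              fprintf(stderr, "out of memory (%zu bytes)\n", size);
              abort();
          }
          return p;
      }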

    Another solution is to propagate every error back to the caller, but that is difficult, and there is a high probability that the caller won't bother checking the result because of laziness.

    A quote from a bug report [1]:

    > If xmlSchemaNewValue returns NULL (e.g., due to a failure of malloc), xmlSchemaDupVal checks for this and returns NULL.

    [1] https://gitlab.gnome.org/GNOME/libxml2/-/issues/905

    • saurik 6 hours ago

      > It is a valid bug.

      But is it a high-severity security bug?

      • Quekid5 5 hours ago

        Considering that it's Undefined Behavior, quite possibly.

        EDIT: That said, I'm on the maintainer's side here.

        • gpderetta 4 hours ago

          > Considering that it's Undefined Behavior, quite possibly.

          Is it though? Certainly it is according to the C and C++ standards, but POSIX adds:

          > References to unmapped addresses shall result in a SIGSEGV signal

          While time-traveling UB is a theoretical possibility, in practice POSIX-compliant compilers won't reorder around potentially trapping operations (they will do the reverse: they might remove a null check if it is made redundant by a prior potentially trapping dereference).
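
          For illustration, a sketch of the null-check removal mentioned above (assuming a typical optimizing compiler; GCC's name for this optimization is -fdelete-null-pointer-checks):

            /* The check below is dominated by the dereference, so the
               compiler may assume p != NULL and delete it as dead code. */
            int first_byte(const unsigned char *p) {
                int c = p[0];   /* traps on NULL under POSIX */
                if (p == NULL)  /* redundant after the load above; */
                    return -1;  /* commonly removed by optimizers */
                return c;
            }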

          A real concern is if a null pointer is dereferenced with a large attacker-controlled offset that can avoid the trap, but that's more of an issue of failing to bound check.

          • ynik 3 hours ago

            Under your interpretation, neither gcc nor clang are POSIX compliant. Because in practice all these optimizing compilers will reorder memory accesses without bothering to prove that the pointers involved are valid -- the compiler just assumes that the pointers are valid, which is justified because otherwise the program would have undefined behavior.

            • somat 2 hours ago

              I am not so sure. Assuming the program does not do anything undefined is sort of the worst possible take on leaving the behavior undefined in the first place. I mean, the behavior was left undefined so that "something" could be done; the language standard just does not know what that "something" is. Hell, the compiler could do nothing and that would make more sense.

              But to make optimizations pretending it is an invariant that can't happen, when the specification clearly says it could happen? That's wild, and I would argue out of specification.

            • gpderetta 3 hours ago

              Actually you are right, what I said about reordering is nonsense. The compiler will definitely reorder non-aliasing accesses. There are much weaker properties that are preserved.

    • fredilo 7 hours ago

      > The maintainer could just write a wrapper around malloc that crashes on failure and replace all calls with the wrapper. It seems like an easy fix. Almost no software can run without heap memory, so it makes no sense for the program to continue.

      So could the reporter of the bug. Alternatively, he could add an `if(is null){crash}` after the malloc. The fix is easy for anyone that has some knowledge of the code base. The reporter has demonstrated this knowledge in finding the issue.

      If a useful PR/patch diff had been provided with the report, I would have expected it to be merged right away.

      However, instead of doing the obvious thing to actually solve the issue, the reporter hits the maintainer with this bureaucratic monster:

      > We'd like to inform you that we are preparing publications on the discovered vulnerability.

      > Our Researchers plan to release the technical research, which will include the description and details of the discovered vulnerability.

      > The research will be released after 90 days from the date you were informed of the vulnerability (approx. August 5th, 2025).

      > Please answer the following questions:

      >

      > * When and in what version will you fix the vulnerability described in the Report? (date, version)

      > * If it is not possible to release a patch in the next 90 days, then please indicate the expected release date of the patch (month).

      > * Please, provide the CVE-ID for the vulnerability that we submitted to you.

      >

      > In case you have any further questions, please, contact us.

      https://gitlab.gnome.org/GNOME/libxml2/-/issues/905#note_243...

      The main issue here is really one of tone. The maintainer has been investing his free time to altruistically move the state of software forward and the reporter is too lazy to even type up a tone-adjusted individual message. Would it have been so hard for the reporter to write the following?

      > Thank you for your nice library. It is very useful to us! However, we found a minor error that unfortunately might be severely exploitable. Attached is a patch that "fixes" it in an ad-hoc way. If you want to solve the issue in a different way, could we apply the patch first, and then you refactor the solution when you find time? Thanks! Could you give us some insights on when after merging to main/master, the patch will end up in a release? This is important for us to decide whether we need to work with a bleeding edge master version. Thank you again for your time!

      Ultimately, the message content is very similar. However, it feels completely different.

      Suppose you are a maintainer without that much motivation left, and you get hit with such a message. You will feel like the reporter is an asshole. (I'm not saying he is one.) Do you really care if he gets pwned via this bug? It takes some character strength on the side of the maintainer to not just leave the issue open out of spite.

      • sersi 5 hours ago

        > the reporter is too lazy to even type up a tone-adjusted individual message. Would it have been so hard for the reporter to write the following?

        The reporter doesn't care about libxml2 being more secure, they only care about having a CVE-ID to brag about discovering a vulnerability and publishing it on their blog. If the reporter used the second message you wrote, they wouldn't get what they want.

      • yrro an hour ago

        If I received an email like that I'd reply with an invoice.

      • rwmj 3 hours ago

        If someone had reported that on a project I maintain, I'd have told them to get outta here, in somewhat less polite language. They're quite clearly promoting their own company / services and don't care in the slightest about libxml2.

    • sidewndr46 an hour ago

      In the event that malloc returns NULL and it isn't checked, isn't the program going to crash anyway? I usually just use a macro like "must_malloc" that does this. But the outcome is the same, I would think. It's mostly a difference of where it happens.
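
      Something along these lines (a sketch; the helper name and details are assumed):

        #include <stdio.h>
        #include <stdlib.h>

        /* Same idea as a crashing malloc wrapper, plus __FILE__/__LINE__
           so the crash message names the allocation site. */
        static void *must_malloc_at(size_t n, const char *file, int line) {
            void *p = malloc(n);
            if (p == NULL) {
                fprintf(stderr, "%s:%d: malloc(%zu) failed\n", file, line, n);
                abort();
            }
            return p;
        }
        #define must_malloc(n) must_malloc_at((n), __FILE__, __LINE__)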

    • andrewaylett 2 hours ago

      Many systems have (whether you like the idea or not) effectively infallible allocators. If malloc won't ever return null, there's not much point in checking.

    • worthless-trash 8 hours ago

      A while back I remember looking at the kernel source code: when overcommit is enabled, malloc would not fail if it couldn't allocate memory; it would ONLY fail if you attempted to allocate memory larger than the available address space.

      I do not think you can deal with the failure condition the way you think on Linux (and I imagine other operating systems too).
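
      Roughly this pattern, as a sketch (assumes Linux with vm.overcommit_memory=1, i.e. always overcommit; sizes are illustrative):

        #include <stdlib.h>
        #include <string.h>

        int main(void) {
            size_t huge = (size_t)64 << 30;  /* 64 GiB on a smaller machine */
            char *p = malloc(huge);          /* "succeeds": only address space
                                                is reserved, nothing committed */
            if (p == NULL)
                return 1;
            memset(p, 1, huge);              /* touching the pages is what can
                                                summon the OOM killer */
            return 0;
        }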

      • vbezhenar 2 hours ago

        It's very easy to make malloc return NULL:

          % ulimit -v 80000
          
          % cat test.c
          #include <stdio.h>
          #include <stdlib.h>
          
          int main(void) {
            char *p = malloc(100000000);
            printf("%p\n", p);
          }
          
          % cc test.c
          
          % ./a.out
          (nil)
      • codedokode 6 hours ago

        The bug was about the case when malloc returns null, but the library doesn't check for it.

        • bjourne 4 hours ago

          Correct, but the point is that it is difficult to get malloc to return null on Linux. Why litter your code with checks for de facto impossible scenarios?

          • codedokode 30 minutes ago

            First, Linux has thousands of settings that could affect this; second, the library probably works not only on Linux.

          • daef 2 hours ago

            In systems-level programming (the introductory course before operating systems at our university) this was one of the first misconceptions to be eradicated: you cannot trust malloc not to return null.

  • viraptor 15 hours ago

    > Denial of service is not resulting in ...

    DoS results in whatever the system happens to do. It may well result in bad things happening, for example stopping AV from scanning new files, breaking rate limiting systems to allow faster scanning, hogging all resources on a shared system for yourself, etc. It's rarely a security issue in isolation, but libraries are never used in isolation.

    • bastawhiz 14 hours ago

      An AV system stopping because of a bug in a library is bad, but that's not because the library has a security bug. It's a security problem because the system itself does security. It would be wild if any bug that leads to a crash or a memory leak was a "security" bug because the library might have been used by someone somewhere in a context that has security implications.

      A bug in a library that does rate limiting arguably is a security issue because the library itself promises to protect against abuse. But if I make a library for running Lua in redis that ends up getting used by a rate limiting package, and my tool crashes when the input contains emoji, that's not a security issue in my library if the rate limiting library allows emails with punycode emoji in them.

      "Hogging all of the resources on a shared system" isn't a security bug, it's just a bug. Maybe an expensive one, but hogging the CPU or filling up a disk doesn't mean the system is insecure, just unavailable.

      The argument that downtime or runaway resource use is considered a security issue, but only if the problem is in someone else's code, is some Big Brained CTO way of passing the buck onto open source software. If it were true, Postgres autovacuuming due to unpleasant default configuration would be up there with Heartbleed.

      Maybe we need a better way of alerting downstream users of packages when important bugs are fixed. But jamming these into CVEs and giving them severities above 5 is just alert noise and makes it confusing to understand what issues an organization should actually care about and fix. How do I know that the quadratic time regexp in a string formatting library used in my logging code is even going to matter? Is it more important than a bug in the URL parsing code of my linter? It's impossible to say because that responsibility was passed all the way downstream to the end user. Every single person needs to make decisions about what to upgrade and when, which is an outrageous status quo.

      • lmeyerov 11 hours ago

        Traditional security follows the CIA triad: Confidentiality (ex: data leaks), Integrity (ex: data deletion), and Availability (ex: site down). Something like SOC2 compliance typically has you define where you are on these, for example

        Does availability not matter to you? Great. For others, maybe it does: some medical device segfaulting or OOMing in an unmanaged way on a config upload is not good. 'Availability' is a pretty common security concern for maybe 40 years now from an industry view.

        • bastawhiz 10 hours ago

          > some medical device segfaulting or OOMing in an unmanaged way

          Memory safety is arguably always a security issue. But a library segfaulting when NOT dealing with arbitrary external input wouldn't be a CVE in any case, it's just a bug. An external third party would need to be able to push a crafted config to induce a segfault. I'm not sure what kind of medical device, short of a pacemaker that accepts Bluetooth connections, might fall into such a category, but I'd argue that if a crash in your dependencies' code prevents someone's heart from beating properly, relying on CVEs to understand the safety of your system is on you.

          Should excessive memory allocation in OpenCV for certain visual patterns be a CVE because someone might have built an autonomous vehicle with it that could OOM and (literally) crash? Just because you put the code in the critical path of a sensitive application doesn't mean the code has a vulnerability.

          > 'Availability' is a pretty common security concern for maybe 40 years now from an industry view.

          Of course! It's a security problem for me in my usage of a library because I made the failure mode of the library have security implications. I don't want my service to go offline, but that doesn't mean I should be entitled to having my application's exposure to failure modes affecting availability be treated on equal footing to memory corruption or an RCE or permissions bypass.

          • lmeyerov 9 hours ago

            I agree on the first part, but it's useful to be more formal on the latter --

            1. Agreed it's totally fine for a system to have some bugs or CVEs, and likewise fine for OSS maintainers to not feel compelled to address them. If someone cares, they can contribute.

            2. Conversely, it's very useful to divorce some application's use case from the formal understanding of whether third-party components are 'secure', because that's how we stand on the shoulders of giants. First, it lets us make composable systems: if we use CIA parts, with some common definition of CIA, we get to carry that through to bigger parts and applications. Second, on a formal basis, 10-20 years after this stuff was understood to be useful, the program analysis community further realized we can even define them mathematically in many useful ways, where different definitions lead to different useful properties, enabling us to provably verify them rather than just test for them.

            So when I say CIA nowadays, I'm actually thinking both mathematically irrespective of downstream application, and from the choose-your-own-compliance view. If some library is C+I but not A... that can be fine for both the library and the downstream apps, but it's useful to have objective definitions. Likewise, something can have gradations of all this -- like maybe it preserves confidentiality in typical threat models & definitions, but not something like "quantitative information flow" models: also ok, but good for everyone to know what the heck they all mean if they're going to make security decisions on it.

            • holowoodman 6 hours ago

              > So when I say CIA nowadays, I'm actually thinking both mathematically irrespective of downstream application, and from the choose-your-own-compliance view.

              That doesn't help anyone, because it is far too primitive.

              A medical device might have a deadly availability vulnerability. That in itself doesn't tell you anything about the actual severity of the vulnerability, because the exploit path might need "the same physical access as pulling the power plug". So not actually a problem.

              Or the fix might need a long downtime which harms a number of patients. So maybe a problem, but the cure would be worse than the disease.

              Or the vulnerability involves sending "I, Eve Il. Attacker, identified by badge number 666, do want to kill this patient" to the device. So maybe not a problem because an attacker will be caught and punished for murder, because the intent was clear.

              • lmeyerov 5 hours ago

                We're talking about different things. I agree CVE ratings and risk/severity/etc levels in general for third party libraries are awkward. I don't have a solution there. That does not mean we should stop reporting and tracking C+I+A violations - they're neutral, specific, and useful.

                Risk, severity, etc. are careful terms that are typically defined contextually, relative to the application... yet CVEs do want some sort of prioritization level reported too for usability reasons, so it feels shoe-horned. Those words are useful in an operational context where a team can prioritize based on them, and agreed, a third-party rating must be reinterpreted for the application's rating. CVE ratings are an area where it seems "something is better than nothing", and I don't think about it enough to have an opinion on what would be better.

                Conversely, saying a library has a public method with an information flow leak is a statement that we can compositionally track (e.g., dataflow analysis). It's useful info that lets us stand on the shoulders of giants.

                FWIW, in an age of LLMs, both kinds of information will be getting even more accessible and practical for many more people. I can imagine flipping my view on risk/severity to being more useful as the LLM can do the compositional reasoning in places the automated symbolic analyzers cannot.

          • pjmlp 9 hours ago

            Yes it should. Software will eventually be subject to liability, like in any other industry that has been around for centuries.

        • int_19h 11 hours ago

          We're talking about what's reasonable to expect as a baseline. A higher standard isn't wrong, obviously, but those who need it shouldn't be expecting others to provide it by default, and most certainly not for free.

      • viraptor 14 hours ago

        > An AV system stopping because of a bug in a library is bad, but that's not because the library has a security bug.

        (And other examples.) That's the fallacy of looking for the root cause. The library had an issue, the system had an issue, and together they resulted in a problem for you. Some issues will be more likely to result in security problems than others, so we classify them as such. We'll always deal with probabilities here, not clear lines. Otherwise we'll just end up playing a blame game: "sure, this had a memory overflow, but it's the package's fault for not enabling protections that would downgrade it to a crash", "no, it's the deployment's fault for not limiting that exploit to just this user's data partition", "no, it's the OS's fault for not implementing detailed security policies for every process", ...

        • bastawhiz 10 hours ago

          But it's not treated as dealing in probabilities. The CVEs (not that I think they're even worthwhile) are given scores that ignore the likelihood of an issue being used in a security-sensitive context. They're scored for the worst-case scenario. And if we're dealing with probabilities, it puts less onus on people who actually do things where security matters and spams everyone else, for whom those probabilities are unrealistic, which is the huge majority of cases.

          This is worse for essentially everyone except the people who should be doing more diligence around the code that they use. If you need code to be bug-free (setting aside that the notion of "bug free" code is delusional), you're just playing the blame game when you don't put protections in place. And I'm not talking about memory safety, I'm talking about a regexp with pathological edge cases or a panic on user input. If you're not handling unexpected failure modes from code you didn't write and inspect, why does that make it a security issue where the onus is on the library maintainer?

          • viraptor 8 hours ago

            The score assigned to issues has to be the worst case one, because whoever is assessing it will not know how people use the library. The downstream users can then evaluate the issue and say it does/doesn't/kinda affects them with certainty and lower their internal impact. People outside that system would be only guessing. And you really don't want to guess "nobody would use it this way, it's fine" if it turns out some huge private deployment does.

            • tsimionescu 7 hours ago

              > The downstream users can then evaluate the issue and say it does/doesn't/kinda affects them with certainty and lower their internal impact.

              Unfortunately that's not how it happens in practice. People run security scanners, and those report that you're using library X version Y which has a known vulnerability with a High CVSS score or whatever. Even if you provide a reasoned explanation of why that vulnerability doesn't impact your use case and you convince your customer's IT team of this, this is seen as merely a temporary waiver: very likely, you'll have the same discussion next time something is scanned and found to contain this.

              The whole security audit system and industry is problematic, and often leads to huge amounts of busy work. Overly pessimistic CVEs are not the root cause, but they're still a big problem because of this.

      • comex 10 hours ago

        This is a tangent from your main argument about DoS.

        But when you talk about URL parsing in a linter or a regexp in logging code, I think you're implying that the bugs are unimportant, in part, because the code only handles trusted input.

        Which is valid enough. The less likely some component is to receive untrusted input, the lower the severity should be.

        But beware of going all the way and saying "it's not a bug because we assume trusted input". Whenever you do that, you're also passing down a responsibility to the user: the responsibility to segregate trusted and untrusted data!

        Countless exploits have arisen when some parser never designed for untrusted input ended up being exposed to it. Perhaps that's not the parser's fault. But it always happens.

        If you want to build secure systems, the only good approach is to stop using libraries that have those kinds of footguns.

        • scott_w 9 hours ago

          > But when you talk about URL parsing in a linter or a regexp in logging code, I think you're implying that the bugs are unimportant, in part, because the code only handles trusted input.

          It is a bug but it’s not necessarily a security hole in the library. That’s what OP is saying.

          • comex 7 hours ago

            Yes, that’s the OP’s main point, but their choice of examples suggests that they are also thinking about trusted input.

    • ivanjermakov 14 hours ago

      DoSing autonomous vehicle brake controls...

      • bastawhiz 14 hours ago

        I hope my brakes aren't parsing xml

  • nottorp 9 hours ago

    "Security" announcements seem to be of 3 kinds lately:

    1. Serious. "This is a problem and it needs fixing yesterday."

    2. Marketing. "We discovered that if earth had two moons and they aligned right and you had local root already you could blah blah. By the way we are selling this product that will generate a positive feedback loop for your paranoid tendencies, buy buy buy!".

    3. Reputation chasing. Same as above, except they don't sell you a product, they want to establish themselves as an expert in aligning moons.

    Much easier to do 2 or 3 via "AI" by the way.

  • viraptor 6 hours ago

    Unfortunately this is timely news: https://news.sky.com/story/patient-death-linked-to-cyber-att...

    > Denial of service is not resulting in ...

    Turns out they result in deaths. (This was DoS through ransomware)

    • holowoodman 6 hours ago

      Security bugs always have a context-dependent severity. An availability problem in a medical device is far more severe than a confidentiality problem. In a cloud service, the same problems might switch their severity, downtime isn't deadly and just might affect some SLAs, but disclosing sensitive data will yield significant punishment and reputation damage.

      That is why I think that "severity" and the usual kinds of vulnerability scores are BS. Anyone composing a product or operating a system has to do their own assessment, taking into account all circumstances.

      In the context of the original article this means that it is hopeless anyway, and the maintainer's point of view is valid: in some context everything is "EXTREMELY HIGH SEVERITY, PANIC NOW!". So he might as well not care and treat everything equally. An absolutely rational decision that I do agree with.

  • cedws 5 hours ago

    Denial of service is a security bug. It may seem innocuous in the context of a single library, but what happens when that library finds its way into core banking systems, energy infrastructure, and so on? It's a target ripe for exploitation by foreign adversaries. It has the same potential to harm people as other bugs.

    • sigilis 4 hours ago

      The importance of the system in question is not a factor in whether something is a security bug for a dependency. The threat model of the important system should preclude it from using dependencies that are not developed with a similar security paradigm. Libxml2 simply operates under a different regime than, as an arbitrary example, the nuclear infrastructure of a country.

      The library isn't a worm, it does not find its way into anything. If the bank cares about security they will write their own, use a library that has been audited for such issues, sponsor the development, or use the software provided as is.

      You may rejoin with the fact that it could find its way into a project as a dependency of something else. The same arguments apply at any level.

      If those systems crash because they balanced their entire business on code written by randos who contribute to an open source project then the organizations in question will have to deal with the consequences. If they want better, they can do what everyone is entitled to: they can contribute to, make, or pay for something better.

    • arp242 3 hours ago

      By that standard almost any bug could be considered a "security bug", including things like "returns error even though my XML is valid" or "it parses this data wrong".

  • yeyeyeyeyeyeyee 8 hours ago

    A basic definition of a security bug is something that violates confidentiality, integrity or availability.

    A DoS affects the availability of an application, and as such is a real security bug. While its severity might be lower than a bug that allows someone to "empty bank accounts", and fixing it might get a lower priority, that doesn't make it any less real.

    • citrin_ru 7 hours ago

      The problem is that DoS is the most vaguely defined category. If a library processes some inputs 1000x slower than average, one may claim that this is a DoS. What if it is just 10x slower? Where do you draw the line? What if the problem domain is such that some inputs just take more time and there is no way to 'fix' it? What if the input comes only from a trusted source?

    • thinkharderdev 4 hours ago

      The CIA triad is a framework for threat modeling, not a threat model in and of itself. And what those specific terms mean will also be very system-specific.

  • dcow 10 hours ago

    Full disclosure is the only fair and humane way to handle “security” bugs, because as you point out, every bug is a security bug to someone. And adversaries will make their way onto embargo lists anyway. It’s good to see a principled maintainer other than openbsd fighting the fight.

  • icedchai 16 hours ago

    Everything is a "security bug" in the right (wrong?) context, I suppose.

    • cogman10 15 hours ago

      Well, that's sort of the problem.

      It's true that once upon a time, libxml was on the critical path for a lot of applications. Those days are over. Protocols like SOAP are almost dead and there's not really a whole lot of new networking applications using XML in any sort of manner.

      The context where these issues could be security bugs is an ever-vanishing usecase.

      Now, find a similar bug in zlib or zstd and we could talk about it being an actual security bug.

      • tzs 5 hours ago

        Aside from heavy use in the healthcare, finance, banking, retail, manufacturing, transportation, logistics, telecommunications, automotive, publishing, and insurance industries, w̶h̶a̶t̶ ̶h̶a̶v̶e̶ ̶t̶h̶e̶ ̶R̶o̶m̶a̶n̶s̶ who uses XML?

        • cogman10 3 hours ago

          I think you (and others) are misconstruing what I'm saying.

          I'm not saying XML is unused.

          I'm saying that the specific spaces where its use can cause security problems from things like a DoS are rare.

          A legacy backend system that consumes XML docs isn't at risk of a malicious attacker injecting DoS-triggering docs.

          When XML is used for data interchange, it's typically only in circumstances where trusted parties are swapping XML docs. Where it's not typically being used is the open Internet. You aren't going to find many new REST endpoints emitting or consuming XML.

          And the reason it's being used is primarily legacy. The format and parser are static. Swapping them out would be disruptive and gives few benefits.

          That's what it means for something to increasingly become irrelevant. When new use slows or stops and development is primarily on legacy.

      • fires10 14 hours ago

        SOAP is used far more than most people realize. I deal extensively in "cutting edge" industries that rely heavily on SOAP or SOAP based protocols. Supply chain systems and manufacturing.

      • betaby 12 hours ago

        > there's not really a whole lot of new networking applications using XML in any sort of manner.

        Quite the opposite. NETCONF is XML https://en.wikipedia.org/wiki/NETCONF and all modern ISP/Datacenter routers/switches have it underneath and most of the time as a primary automation/orchestration protocol.

      • monocasa 12 hours ago

        Unfortunately stuff like SAML is XML.

        That being said, I don't think that libxml2 has support for the dark fever dream that is XMLDSig, which SAML depends on.

  • pjmlp 9 hours ago

    Any bug that can be used directly, or indirectly alongside others, is a security bug.

    A denial of service in a system related to emergency phone calls can result in people's deaths.

  • nicce 16 hours ago

    > A lot of these "security bugs" are not really "security bugs" in the first place. Denial of service is not resulting in people's bank accounts being emptied or nude selfies being spread all over the internet.

    That is not true at all. Availability is also critical. If nobody can use bank accounts, the bank has no purpose.

    • arp242 16 hours ago

      Many of these issues are not the type of issues that will bring down an entire platform; most are of the "if I send wrong data, the server will return with a 500 for that request" or "my browser runs out of memory if I use a maliciously crafted regexp". Well, whoopdeedoo.

      And even if it somehow could, it's 1) just not the same thing as "I lost all my money" – that literally destroys lives and the bank not being available for a day doesn't. And 2) almost every bug has the potential to do that in at least some circumstances – circumstances which are almost never true in real-world applications.

      • nicce 15 hours ago

        > Many of these issues are not the type of issues that will bring down an entire platform; most are of the "if I send wrong data, the server will return with a 500 for that request" or "my browser runs out of memory if I use a maliciously crafted regexp". Well, whoopdeedoo.

        I wouldn't personally classify these as denial of service. They are just bugs. A 500 status code does not mean that the server uses more resources to process it than it typically does. OOMing your browser has no impact on others. These should be labeled correctly instead of downplaying the significance of denial of service.

        Like I said in my other comment, there are two entities - the end-user and the service provider. The service provider/business loses money too when customers cannot make transactions (maybe they had promised to keep a specific uptime and now they need to pay compensation). Or they simply go bankrupt because they lost their users.

        Even customers may lose money or something else when they can't make transactions. Or maybe identification is based on bank credentials on some other service. The list goes on.

        • bawolff 12 hours ago

          > I wouldn't personally classify these as denial of service. They are just bugs. A 500 status code does not mean that the server uses more resources to process it than it typically does

          Not necessarily. A 500 might indicate the process died, which might take more resources to start up, have a cold cache, whatever. If you spam that repeatedly it could easily take down the site.

          I agree with your broader point though that the risk of such things is grossly overstated, but I think we should be careful about going too far in the opposite direction.

          • nicce 12 hours ago

            > Not necessarily. A 500 might indicate the process died, which might take more resources to start up, have a cold cache, whatever. If you spam that repeatedly it could easily take down the site

            That is true, but the status code 500 alone does not reveal that; it is speculation. Status codes are not always used correctly. It is typically just an indicator to dig deeper. There might be a security issue, but the code itself is not enough.

            Maybe this is just the same general problem of false positives. Proving something requires more effort and more time, and people tend to optimise things.

            • bawolff 11 hours ago

              True, but in the context of the article we are talking about null pointer dereference. That is almost certainly going to cause a segfault and require restarting the process.

    • SchemaLoad 12 hours ago

      If every single bug in libxml is a business ending scenario for the bank, then maybe the bank can afford to hire someone to work on those bugs rather than pestering a single volunteer.

    • bogeholm 16 hours ago

      Security and utility are separate qualities.

      You’re correct that inaccessible money are useless, however one could make the case that they’re secure.

      • nicce 16 hours ago

        I think you are only considering the users - for the business provider, availability has a larger meaning because the lack of it can bankrupt your business. It is about securing operations.

        • arp242 15 hours ago

          If a panic or null pointer deref in some library causes your entire business to go down long enough that you go bankrupt, then you probably deserve to go out of business because your software is junk.

          • nicce 15 hours ago

            I believe you know well that bankruptcy is the worst case. Many business functions can be so critical that a 24h disturbance is enough to cause high financial damages or even loss of life. A bug in a car's brakes that prevents their usage is also denial of service.

            • arp242 15 hours ago

              Almost none of these issues will cause a 24h disturbance either. Or indeed take the entire system down at all.

              And no one is talking about safety-critical systems. You are moving the goalposts. Does a gas pedal use a markdown or XML parser? No.

              • nicce 14 hours ago

                The point was about the importance of availability.

                > Does a gas pedal use a markdown or XML parser? No.

                Cars in general use, extensively: https://en.wikipedia.org/wiki/AUTOSAR

                • int_19h 11 hours ago

                  Great, then we have someone with both resources and an incentive to write and maintain an XML parser with strict availability guarantees.

                  • fodkodrasz 5 hours ago

                    Automotive companies pay big bucks to vendors who supply certified tools/libraries, because getting stuff certified is a lot of work/time. This also means that those tools are often outdated and a pain to work with, yet their vendors are not expected to function as charities, as is often expected of FLOSS authors, especially when they release their code under BSD/MIT licenses and then get eaten by the sharks.

                • fodkodrasz 5 hours ago

                  AUTOSAR XMLs are mostly compile-time/integration-time toolchain metadata, as I recall.

                  Yet this is off topic for the libxml funding/bug debate.

                  For embedded mission-critical C, libxml is surely unsuitable, just like 99.99% of open source third-party code. Also unneeded. If it crashes the app on the developer machine or in the build pipeline when it runs out of memory, who cares (from a safety point of view)? That has nothing to do with the availability of safety-critical systems in the car.

        • em-bee 16 hours ago

          Not paying rent can get you evicted, and not paying your medical bill can get you denied care. (In China most medical care is not very expensive, but every procedure has to be paid for in advance. You probably won't be denied emergency care, so your life would not be in immediate danger, but sometimes an optional scan discovers something life-threatening that you weren't aware of, so not being able to pay for it can put you at risk.)

        • leni536 16 hours ago

          Virtually all bugs have some cost. Security bugs tend to be more expensive than others, but it doesn't mean that all very expensive bugs are security bugs.

      • burnt-resistor 14 hours ago

        Define what you mean by "security".

        Control integrity, nonrepudiation, confidentiality, privacy, ...

        Also, define what you mean by "utility" because there's inability to convert a Word document, inability to stop a water treatment plant from poisoning people, and ability to stop a fire requiring "utility".

      • hsbauauvhabzb 14 hours ago

        The inability of drug dispensers to dispense life-saving drugs due to DoS is failed utility and will cost lives; would you describe that as secure?

    • antonymoose 16 hours ago

      I routinely handle regex DoS complaints on front-end input validation…

      If a hacker wants to DoS their own browser I’m fine with that.

      • nicce 15 hours ago

        This depends on the context, to be fair. A front-end DoS can suddenly expand into a botnet DDoS if you can trigger it by just serving a specific kind of URL. E.g. a search that goes into an endless loop making requests to the backend.

        • talkin 7 hours ago

          No. The regex DoS class of bugs is about catastrophic backtracking or looping inside the regex engine. A completely isolated component, just hogging CPU. It may have ‘DoS’ in its name, but there’s no relation to network (D)DoS attacks.

          It could still be a security error, but only if all availability errors count as security errors for that project. But after triage, the outcome is almost always “user can hang their own browser on input which isn’t likely”. And yes, it’s a pity I wrote ‘almost’, which means having to check 99% false alarms.
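
          For illustration, a toy backtracking matcher (a sketch, not a real regex engine) showing the blowup for a pattern like (a+)+b on a string of 'a's with no 'b':

            #include <stdio.h>
            #include <string.h>

            static long calls;

            /* Naive backtracking: try every split of the run of 'a's,
               recursing on each remainder -- roughly 2^n calls for n 'a's. */
            static int match(const char *s) {
                calls++;
                if (*s == 'b')
                    return s[1] == '\0';
                for (const char *p = s; *p == 'a'; p++)
                    if (match(p + 1))
                        return 1;
                return 0;
            }

            int main(void) {
                char input[30];
                memset(input, 'a', sizeof input - 1);
                input[sizeof input - 1] = '\0';  /* 29 'a's and no 'b' */
                printf("matched=%d after %ld calls\n", match(input), calls);
                return 0;
            }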

      • Onavo 16 hours ago

        Until they use the same library for their "isomorphic" backend...

        • hsbauauvhabzb 14 hours ago

          Server side rendering is all the rage again, so yeah it might be.

    • p1necone 15 hours ago

      I think it's context dependent whether DoS is on par with data loss/extraction, including whether it's actually a security issue or not. I would argue DoS for a bank (assuming it affects backend systems and not just the customer portal) would be a serious security issue given the kinds of things it could impact.

ytfghghvh 7 minutes ago

> It was certainly promoted on the project web site as a capable and portable toolkit for the purpose of parsing XML.

This is a garbage criticism. It’s perfectly adequate for that for almost everyone. If you are shipping it in a browser to billions of people, that’s a very unique situation, and any security issues are a you problem.

Not sure if this is intended to be a “show both sides” journalism thing, but it’s a total asshole throwaway comment.

jonathanlydall 6 hours ago

The breaking point here seems to be security researchers (or maybe just one) essentially “farming” this project for “reputation”. They seem to be approaching it like a computer game against NPCs where you get as much reward as time spent, except in this case they’re imposing a significant amount of work on a real life volunteer maintainer.

I suspect the maintainer would mind less if it was reported by actual users of the library who encountered a real world issue and even better if they offer a patch at the same time, but these bugs are likely the result of scanning tools or someone eyeballing the code for theoretical issues.

In light of the above, the proposed MAINTENANCE-TERMS.md makes a lot of sense, but I think it should also state that security researchers looking for CVEs, or who are concerned about responsible disclosure, should contact the vendor of the software distributing the library.

This would put the onus on the large corporates leveraging the library (at no charge) to use their own resources to deal with addressing security researcher concerns appropriately; they can probably do most of the fix work themselves and then coordinate with the maintainer only to get a release out in a timely manner.

If maintainers find that people coming to them with security issues have done all the work possible beforehand, they’d probably be completely happy to help.

kibwen 16 hours ago

> Ariadne Conill, a long-time open-source contributor, observed that corporations using open source had responded with ""regulatory capture of the commons"" instead of contributing to the software they depend on.

I'm only half-joking when I say that one of the premier selling points of GPL over MIT in this day and age is that it explicitly deters these freeloading multibillion-dollar companies from depending on your software and making demands of your time.

  • c2h5oh 5 hours ago

    With SaaS swallowing a big chunk of the software business, the GPL is much less effective.

    There isn't much difference between MIT and GPL unless you are selling a product that runs locally or on premises, and with the latter some companies try to work around the GPL by renting servers with the software on them - either as physical boxes or as something provided on a cloud provider marketplace.

    Look at what you actually have installed on your computer - odds are that unless your job requires something like CAD, photo/video editing, or other highly specialized software, you have nothing made by a large enterprise with the exception of the OS and Slack/Teams/Zoom.

    • toyg 2 hours ago

      > With SAAS swallowing big chunk of software business GPL is much less effective.

      Which is why we have the AGPL.

  • spott 12 hours ago

    This makes an assumption that a bunch of companies are maintaining their own forks of MIT software with bug fixes and features and not giving it back.

    I find that hard to believe.

    • roryirvine 5 hours ago

      One of the comments on the LWN article is an analysis of exactly that happening with this very library - https://lwn.net/Articles/1026956/

      In short, Apple maintain a 448 kB diff which they 'throw across the wall' in the form of an opaque tarball, shorn of all context. Many of the changes contained within look potentially security-related, but it's been released in a way which would require a huge amount of work to unpick.

      That level of effort is unfeasible for a volunteer upstream developer, but is a nice juicy resource for a motivated attacker. Apple's behaviour, therefore, is going to be a net negative from a security point of view for all other users of this library.

    • adastra22 11 hours ago

      No, they're mostly not. They're throwing the maintenance demand back on the unpaid, understaffed open source developers. That's what TFA is about.

    • baobun 3 hours ago

      Oh, I've seen it plenty. Cultural awareness is just very low in places for some reason.

    • canyp 12 hours ago

      Not really. A company that does not bother contributing to a liberally-licensed project will 100% avoid GPL software like the plague. In either case, they won't contribute. In the latter case, they don't get to free-ride like a parasite.

      • ninjin 8 hours ago

        It is reasonable to assume that this is true. But an equally effective way, other than making your license unpalatable to them, is just to say no and state clearly: "Patches or GTFO". Also, have a homepage to link with your (hefty?) consulting rates?

        I have mentioned this in the past, but there was this weird shift in culture just after 2000 where increasingly open source projects were made to please their users, whether they were corporate or not, and "your project is your CV" became how their maintainers would view their projects. It does not have to be this way and we should (like it seems to be the case with libxml2) maybe try to fix this culture?

        • tzs 5 hours ago

          > It is reasonable to assume that this is true. But an equally effective way, other than making your license unpalatable to them, is just to say no and state clearly: "Patches or GTFO". Also, have a homepage to link with your (hefty?) consulting rates?

          That's fine for feature requests, but the issue in the present case is bug reports.

          • ninjin 2 hours ago

            I fail to see how that is different. Ultimately, you have released a piece of software into the wild with a clause stating: "The software is provided 'as is' and the author disclaims all warranties with regard to this software including all implied warranties of merchantability and fitness". Thus, it is purely cultural that somehow others and yourself expect you to cancel your family time on a Saturday night solely because an issue has been found in a piece of software you have given away for free. This "value add" is wearing people out and if we want this expectation to remain, maybe it is time for those profiting or those with a monopoly on violence to explore ways to support those that kindly provide free labour like this?

      • jenadine 8 hours ago

        > will 100% avoid GPL software like the plague.

        Not true. Many companies use Linux, for example.

        They will just avoid using GPL software in ways that would impact their own intellectual property (linking a GPL library into their proprietary software). Sometimes they will even use it with dubious "workarounds", such as saying "we use a daemon with IPC so that's ok".

        • quietbritishjim 4 hours ago

          > > will 100% avoid GPL software like the plague.

          > Not true. Many companies use Linux, for example.

          I thought it was clear, given that this is a discussion about an open source library, that they were talking about GPL libraries. The way that standalone GPL software is used in companies is qualitatively quite different.

  • tzs 5 hours ago

    From a maintainer's point of view there is no difference between someone from a large company reporting a bug and some random hobby programmer reporting a bug.

  • xxpor 16 hours ago

    Why bother open sourcing if you're not interested in getting people to use it?

    • OkayPhysicist 16 hours ago

      The GPL does not prohibit anyone from using a piece of software. It exclusively limits the actions of bad-faith users. If all people engaged with FOSS in good faith, we wouldn't need licenses, because all that most FOSS licenses require of the acceptors is to do a couple of small, free activities that any decent person would do anyway: thank/give credit to the authors who so graciously allowed you to use their work, and if you make any fixes or improvements, share alike.

      Security issues like this are a prime example of why all FOSS software should be at least LGPLed. If a security bug is found in a FOSS library, who's more motivated to fix it? The dude who hacked the thing together and gave it away, or the actual users? Requesting that those users share their fixes is farrr from unreasonable, given that they have clearly found great utility in the software.

      • charcircuit 15 hours ago

        GPL doesn't force people to share their fixes and improvements. And there is nothing bad faith about not sharing all your hard work for free.

        • OkayPhysicist 14 hours ago

          It does if you then share the resulting software. And I think if you make an improvement just for your own enjoyment, you'd be a better person if you shared it back than if you didn't.

          • ahtihn 9 hours ago

            A lot of software out there runs on servers and is never shared with users in a manner that matters for GPL.

            • jenadine 8 hours ago

              That's why there is AGPL to fix that "bug"

              Anyway, the GPL is there to protect final users, not the maintainer of the project. And if a piece of software is running on someone else's server, you are not the user of that software. (Although you use the service and give it your data, but that's another problem.)

      • SpicyLemonZest 14 hours ago

        The GPL "does not prohibit anyone" in a narrow legalistic sense. In colloquial discussions (see e.g. https://www.gnu.org/licenses/why-not-lgpl.en.html), the Free Software Foundation is quite clear that the GPL exists to stop proprietary software developers from using your code by imposing conditions they can't satisfy.

    • gizmo686 16 hours ago

      A decent part of my job is open source. Our reason for doing it is simple: we would rather have people who are not us do the work instead of us.

      On some of our projects this has been a great success. We have some strong outside contributors doing work on our project without us needing to pay them. In some cases, those contributors are from companies that are in direct competition with us.

      On other projects we've open sourced, we've had people (including competitors) use, without anyone contributing back.

      Guess which projects stay open source.

      • OkayPhysicist 16 hours ago

        We have a solution to this. It's called the (L)GPL. If people would stop acting like asking for basic (zero cost) decency in exchange for their gift is tantamount to armed robbery, we could avoid this whole mess.

    • ben0x539 15 hours ago

      When I, as a little child (or at least that is how it feels now), got excited about contributing to open source, it was not the thought that one day my code might help run some giant web platform's infrastructure or ship as part of some AAA videogame codebase that motivated me. The motivation was the idea that my code might be useful to people even with no corporation or business having to be involved!

    • riedel 9 hours ago

      There are tons of reasons. E.g. public money, public code. We are in research, and we open source because we know that we cannot maintain anything, giving people the chance to pick up stuff without having to buy stuff that is constantly losing value and becomes abandonware very soon these days (at this point we often don't even have the resources to open source). So what you mostly get from us is 'public money, crappy unmaintained code'.

    • bigfatkitten 16 hours ago

      So that if they find it useful, they will contribute their own improvements to benefit the project.

      I don’t think many projects see acquiring unpaying corporate customers as a goal.

    • freeone3000 15 hours ago

      What’s the point in people using it if all that profit ends up in someone else’s pockets?

    • meindnoch 16 hours ago

      Trillion dollar corporations are not "people".

      • eikenberry 14 hours ago

        No, corporations are not people; they are legal constructs. How much money they are worth makes no difference.

    • lelandbatey 16 hours ago

      You can want to be helpful without wanting to have power or responsibility.

      I'm interested in people (not companies, or at least I don't care about companies) being able to read, reference, learn from, or improve the open source software that I write. It's there if folks want it. I basically never promote it, and as such, it has little uptake. It's still useful though, and I use it, and some friends use it. Hooray. But that's all.

    • itsanaccount 16 hours ago

      you seem to have mistaken corporations for people.

      • kortilla 16 hours ago

        You seem to think corporations aren’t made of people

        • dsr_ 16 hours ago

          Sheds are made of wood, but they aren't trees.

        • eikenberry 14 hours ago

          Groups of people are not the same as the people that make them up. They think differently and have different motivations.

        • codedokode 8 hours ago

          Corporations are made of rich stock owners.

    • timewizard 14 hours ago

      People can use it. Corporations won't. I'm entirely unbothered by this outcome.

      This isn't a popularity contest and I'm sick of gamification of literally everything.

djoldman 15 hours ago

I really don’t understand solo unpaid maintainers who feel “pressure” from users. My response would always be: it’s my repo, my code, if you don’t like how I’m doing things, fork the code megashrug.

You owe them nothing. That fact doesn’t mean maintainers or users should be a*holes to each other, it just means that as a user, you should be grateful and you get what you get, unless you want to contribute.

Or, to put it another way: you owe them exactly what they’ve paid for!

  • sysmax 12 hours ago

    Sadly, that stuff backfires. The researcher will publish your response along with some snarky remarks about how you are refusing to fix a "critical issue", and the next time you are looking for a job and HR googles your name, it pops up, and -poof-, we'll call you later.

    I used to work on a kernel debugging tool and had a particularly annoying security researcher bug me about a signed/unsigned integer check that could result in a target kernel panic with a malformed debug packet. As if you couldn't do the same by just writing random stuff at random addresses, since you are literally debugging the kernel with full memory access. Sad.

    • hgs3 11 hours ago

      Just be respectful and not snarky. And be clear about your boundaries.

      What I do is I add the following notice to my GitHub issue template: "X is a passion project and issues are triaged based on my personal availability. If you need immediate or ongoing support, then please purchase a support contract through my software company: [link to company webpage]".

  • kayodelycaon 15 hours ago

    Your solution is exactly right, but let me try to help understanding the problem.

    Many open source developers feel a sense of responsibility for what they create. They are emotionally invested in it. They may want to be liked or not be disliked.

    You’re able to not care about these things. Other people care but haven’t learned how to set boundaries.

    It’s important to remember, if you’re not understanding what a majority of people are doing, you are the different one. The question should be “Why am I different?” not “Why isn’t everyone else like me?”

    “Here’s the solution” comes off far better than, “I don’t understand why you don’t think like me.”

    • atemerev 5 hours ago

      That's a good argument, thank you. It makes open source authors seem even more heroic.

  • michaelt 15 hours ago

    > I really don’t understand solo unpaid maintainers who feel “pressure” from users.

    Some open source projects which are well funded and/or motivated to grow are giddy with excitement at the prospect you might file a bug report [1,2]. Other projects will offer $250,000 bounties for top tier security bugs [3].

    Other areas of society, like retail and food service, take an exceptionally apologetic, subservient attitude when customers report problems. Oh, sir, I'm terribly sorry your burger had pickles when you asked for no pickles. That must have made you so frustrated! I'll have the kitchen fix it right away, and of course I'll get your table some free desserts.

    Some people therefore think doing a good job, as an open source maintainer, means emulating these attitudes. That you ought to be thankful for every bug report, and so very, very sorry to everyone who encounters a crash.

    Needless to say, this isn't a sustainable way to run a one-person project, unless you're a masochist.

    [1] https://llvm.org/docs/Contributing.html#id5 [2] https://dev.java/contribute/test/ [3] https://bughunters.google.com/about/rules/chrome-friends/574...

  • msgodel 15 hours ago

    The correct response to this kind of thing is an invoice IMO.

neilv 4 hours ago

If you skim past the less-interesting project history, there's an interesting description of some dynamics that apply to a lot of open source projects, including:

> Even if it is a valid security flaw, it is clear why it might rankle a maintainer. The report is not coming from a user of the project, and it comes with no attempt at a patch to fix the vulnerability. It is another demand on an unpaid maintainer's time so that, apparently, a security research company can brag about the discovery to promote its services.

> If Wellnhofer follows the script expected of a maintainer, he will spend hours fixing the bugs, corresponding with the researcher, and releasing a new version of libxml2. Sveshnikov and Positive Technologies will put another notch in their CVE belts, but what does Wellnhofer get out of the arrangement? Extra work, an unwanted CVE, and negligible real-world benefit for users of libxml2.

> So, rather than honoring embargoes and dealing with deadlines for security fixes, Wellnhofer would rather treat security issues like any other bug; the issues would be made public as soon as they were reported and fixed whenever maintainers had time. Wellnhofer also announced that he was stepping down as the libxslt maintainer and said it was unlikely that it would ever be maintained again. It was even more unlikely, he said, with security researchers "breathing down the necks of volunteers."

> [...] He agreed that "wealthy corporations" with a stake in libxml2 security issues should help by becoming maintainers. If not, "then the consequence is security issues will surely reach the disclosure deadline (whatever it is set to) and become public before they are fixed".

bryanlarsen 16 hours ago

> The point is that libxml2 never had the quality to be used in mainstream browsers or operating systems to begin with

I think that's seriously over-estimating the quality of software in mainstream browsers and operating systems. Certainly some parts of mainstream OS's and browsers are very well written. Other parts, though...

  • karel-3d 8 hours ago

    I only have experience with the Chrome codebase, and while it's C++ (which I don't personally like), it's pretty solid; most of the weird, hairy stuff was in the externally linked libraries. But I didn't poke around THAT much.

    • prmoustache 3 hours ago

      Using a badly written externally linked library is bad software engineering/quality too, as the responsibility lies with whoever links it.

    • troupo 8 hours ago

      Remember "25000 string allocations per each key stroke"? https://groups.google.com/a/chromium.org/g/chromium-dev/c/EU...

      • quietbritishjim 4 hours ago

        That's certainly interesting, but to give context for those not following the link: this is for typing into the address/search bar, so it covers all the work of searching history, performing network requests to autocomplete search terms, and displaying those results. It's not like entering a character into a regular text box on a page.

        • troupo 3 hours ago

          IMO no amount of context can justify 25000 string allocations per key stroke :)

          Also, if you read the issues in the first post, it has nothing to do with "covering all the work". It's just really bad programming.

  • burnt-resistor 14 hours ago

    That's the problem with abusing and freeloading off critical components of the FOSS supply chain. Megacorps must pay their fair share or bad things happen, just like with unbounded, corrupt crapitalism.

JonChesterfield 16 hours ago

This is an alarming read. Not so much the "security bugs are bugs, go away" sentiment which seems completely legitimate, but that libxml2 and libxslt have been ~ solo dev passion projects. These aren't toys. They're part of the infrastructure computing is built on.

  • stavros 6 hours ago

    You got the timeline wrong: libxml2 has always been a solo-dev passion project; then a bunch of megacorps used it for the infrastructure computing is built on. This is on them.

  • chronid 9 hours ago

    Exactly how OpenSSL was (is?) when Heartbleed happened. It's nothing new, sadly; there are memes all over the internet about the "unknown OSS passion project" holding up the entire stack.

  • jeroenhd 5 hours ago

    These projects are toys. The real problem is that multi billion dollar companies are using toys to keep you safe. Maybe we shouldn't build our core infrastructure with LEGO blocks and silly putty.

  • pabs3 8 hours ago

    The Nebraska project in this diagram isn't just one project; it's pretty much the entirety of our underlying software infrastructure.

    https://xkcd.com/2347/

zppln 16 hours ago

Very sad read. Much of the multi-billion dollar project I work on is built on top of libxml2 and my company doesn't have a clue. Fuck, even most of my colleagues working with XML every day don't even know it because they only interface indirectly with it via lxml.

  • burnt-resistor 14 hours ago

    Well, they need to pony up around $150k or so to keep it alive rather than freeloading off the work of others.

    • sneak 8 hours ago

      It’s not freeloading to accept a gift given freely.

      • KingMob 7 hours ago

        Yet in real life gifting, we expect reciprocity and have norms. (E.g., if little Johnny doesn't bring a present to Sally's birthday party, he never gets invited back.)

        Asymmetrical gifting is only acceptable with a power imbalance; if the boss gives an employee a gift, it need not be reciprocated.

        FOSS actually turns this on its head, since unpaid volunteers are giving billionaires like Bezos gifts. Worse, people argue in favor of it.

      • meepmorp 12 minutes ago

        I've never once expected someone to repair a gift they gave me because I found a flaw in it. Demanding that is when it becomes freeloading.

      • atemerev 5 hours ago

        It is not freeloading, but correspondingly you cannot demand anything from a gifter. Not even "could you please look at it". They might. Or they may ignore you. Or they may delete their repo and go away to the wild. Up to them.

  • ethan_smith 5 hours ago

    Companies should implement dependency audits that identify critical open source components and allocate appropriate support resources proportional to their business impact.

  • mschuster91 16 hours ago

    > Fuck, even most of my colleagues working with XML every day don't even know it because they only interface indirectly with it via lxml.

    Relevant XKCD: https://xkcd.com/2347/

SAI_Peregrinus 14 hours ago

There are two types of responsible disclosure: coordinated disclosure where there's an embargo (ostensibly so that the maintainer can patch the software before the vulnerability is widely known) and full disclosure where there's no embargo (so that users can mitigate the vulnerability on their own, useful if it's already being exploited). There's no reason a maintainer shouldn't be allowed to default to full disclosure. In general, any involved party can disclose fully. Irresponsible disclosure is solely disclosing the vulnerability to groups that will exploit it, e.g. NSO.

  • tptacek 11 hours ago

    Yeah, exactly. And the subtext of all of this is that big companies are going to get burnt by these kinds of decisions. But big companies work around this kind of thing all the time. OpenSSL is a good example.

lukaslalinsky 6 hours ago

As a maintainer of several open source projects over my life, I really hated these so-called security researchers and their CVEs. I routinely fixed more impactful bugs thanks to user reports, but when one of these companies found a bug, they made a whole theater around it, even though the impact was pretty small. By that standard, pretty much any bug, except maybe a typo in the UI, is a security bug. It gets tiring very quickly. And with the CVEs comes a lot of publicity and a lot of demands.

  • mrweasel 3 hours ago

    Do the security researchers provide you with patches, or is it more frequently "there's a bug here"?

    In the latter case, I'm wondering if there's an argument to be made for "show me the code or shut up": simply rejecting security reports that are not accompanied by a patch. I'm thinking: would it devalue the CVE on the researcher's résumé if the project simply said no, on the grounds that there is no fix?

    Probably not.

    • viraptor 9 minutes ago

      CVE is an index of vulnerabilities. Whether there's a patch and who made it is largely irrelevant in that context.

DeepYogurt 16 hours ago

It'd be great if some of these open source security initiatives could dial up the quality of reports. I've seen so, so many reports for totally unreachable code that get a CVE for causing a crash. Maintainers will argue that user input is filtered elsewhere and the "vuln" isn't real, but MITRE doesn't care.

  • selfhoster11 16 hours ago

    Better yet - they could contribute a patch that fixes the issue.

  • mschuster91 16 hours ago

    > I've seen so so many reports for some totally unreachable code and get a cve for causing a crash.

    There have been a lot of cases where something once deemed "unreachable" eventually became reachable, sometimes years later, after a refactoring, and then there was an issue.
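
    A contrived sketch of how that happens (hypothetical code, not from any real report): the flawed path is guarded at its only call site, until a later refactor adds a second caller without the guard.

        #include <stddef.h>
        #include <string.h>

        /* Flawed helper (hypothetical): trusts len, overflows dst
           when len > 16. */
        static void copy_name(char dst[16], const char *src, size_t len)
        {
            memcpy(dst, src, len); /* "unreachable" overflow... */
        }

        void parse_v1(const char *src, size_t len)
        {
            char name[16];
            if (len > sizeof(name)) /* ...because v1 always checks first */
                return;
            copy_name(name, src, len);
        }

        /* Added years later in a refactor: no length check, and the
           dormant overflow in copy_name is suddenly reachable. */
        void parse_v2(const char *src, size_t len)
        {
            char name[16];
            copy_name(name, src, len);
        }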

    • DeepYogurt 16 hours ago

      At what rate though? Is it worth burning out devs we as a community rely upon because maybe someday 0.000001% of these bugs might have real impact? I think we need to ask more of these "security researchers". Either provide a real world attack vector or start patching these bugs along with the reports.

      • bigfatkitten 16 hours ago

        “PoC or GTFO” is an entirely reasonable response.

        • codedokode 8 hours ago

          I wouldn't bother to write a PoC, because it is a waste of time: it is faster to fix the potential bug than to figure out what conditions are necessary to turn it into a vulnerability. I think we should all stop writing PoCs for bugs and spend that lifetime on something more useful.

          • mschuster91 6 hours ago

            That's not easy though, especially not for large and old code bases. As an outsider doing occasional bugfixes when I spot issues in an open-source project, I don't have the time to dig into how exactly I need to set up my computer to even have a minimum viable build setup, adhere to each project's different code standards, deal with the bullshit called "Contributor License Agreement" and associated paperwork, or wrap my head around how this specific project does testing and pipelines.

            What I can and will do, however, is write a bug ticket that says what I think the issue is and where I suspect the cause lies, and provide either a reproduction or a bugfix patch. Dealing with the remainder of the bureaucracy, however, is not something I see as my responsibility.

        • duped 15 hours ago

          "PR or payment to fix or GTFO" is also a reasonable response

        • marcusb 16 hours ago

          Also a wonderful zine!

      • mschuster91 16 hours ago

        IMHO, at least the foundations of what makes the Internet tick - the Linux kernel, but also stuff like SSL libraries, format parsers, virtualization tooling and the standard libraries and tools that come installed by default on Linux systems - should be funded by taxpayers. The EU budget for farm subsidies is about 40 billion euros a year - cut 1% off of it, so 400 million euros, and invest it into the core of open source software, and we'd get an untold amount of progress in return.

        • chronid 9 hours ago

          They should be funded by the companies using them. Do you believe any of the Fortune top 100 would be greatly impacted by funding libxml2? They probably all rely on it, one way or another.

          The foundation of the internet is something that gets bigger and bigger every year. I understand the sentiment and the reasoning of declaring software a "public good", but it won't scale.

          • mschuster91 4 hours ago

            > They should be funded by the companies using them. Do you believe any of the fortune top100 would be greatly impacted by funding libxml2? They probably all rely on it, one way or the other.

            I agree in theory but it's impractical to achieve due to the coordination effort involved, hence using taxes as a proxy.

            > The foundation of the internet is something that gets bigger and bigger every year. I understand the sentiment and the reasoning of declaring software a "public good", but it won't scale.

            For a long time, a lot of foundational development was funded by the government. Of course it can scale - the problem is most people don't believe in capable government any more, after 30-40 years of neoliberal tax cuts and utter incompetence (California HSR comes to mind). We used to do great things funded purely by the government, usually via military funding: lasers, radar, microwaves, and generally a lot of RF technology; even the Internet itself originated from the military ARPANET. Or the federal highways. And that was just what the Americans did.

        • charcircuit 15 hours ago

          It's not the government's job to subsidize people's bad business models.

          • viraptor 5 minutes ago

            It shouldn't be, but it is to a huge degree. Oil companies, corn production, milk subsidies, road network growth, etc. are all subsidies for bad business models in the US, for example.

          • mschuster91 6 hours ago

            Governments funded basic research for decades to provide a common good. Governments fund education, universities, road infrastructure, and other foundational stuff so that companies can work.

    • canyp 11 hours ago

      And whose fault is it? The person who gave their work for free, or the parasitic company that shipped a product with it?

      • mschuster91 2 hours ago

        Often enough such issues also affect a lot of downstream open-source software.

kstrauser 15 hours ago

> It includes a request for Wellnhofer to provide a CVE number for the vulnerability and provide information about an expected patch date.

“Three.”

“Like, the number 3? As in, 1, 2, …?”

“Yes. If you’re expecting me to pick, this will be CVE-3.”

  • viraptor a minute ago

    The project doesn't have to provide one though. The person reporting it can handle it if they care. It's ok to say "I'm not interested in those".

  • mrweasel 3 hours ago

    I think he should just reject reports of vulnerabilities if they aren't accompanied by a patch.

heisenbit 5 hours ago

Bigger companies have policies, or policies derived from regulatory demands, covering the software they use in their products and services. Defects must be fixed within a certain timeframe. Software suppliers and external code must be vetted. Having such a widely used library explicitly unmaintained should, in theory, make it a no-go area, forcing either removal or ongoing explicit security audits - it may well be cheaper for any of these companies to take over the full maintenance load. Will be interesting to watch.

Also, the not-so-relevant security bugs are not just a cost to the developers; the library churn also costs more and more users, when they are forced by policy to track the latest versions in a timely manner in the name of "security".

ZiiS 2 hours ago

It seems perfectly reasonable for any library to take the stance that it is not a security barrier. It is up to the people using libxml2 in applications and OSes, who have the resources, to issue CVEs and track embargoes. I am sure any resulting PRs will be gratefully welcomed.

KingOfCoders 6 hours ago

When a project is on the rise, open source developers are keen to promote it, put it on their CVs, and give conference talks. There is no obligation for companies to sponsor anything; that is not the idea behind open source.

Yes, open source has changed since the early '90s. There are more users, and companies use projects and make millions with other people's work.

I feel for the maintainer, given how ungrateful people are. And demanding, without giving anything back.

Open Source licenses fall short.

Open Source projects should clearly state what they think about fixing security issues, taking on external contributions, and whether they consider the project feature-complete. Just like standard licenses, we should have a standard, parseable maintenance "contract".

"I fix whatever you pay for, I fix nothing, I fix how I see fit. Including disclosure, etc."

So everyone is clear about what to expect.

mjw1007 5 hours ago

It would be better if there were a layer of maintainers between the free software authors and the end users that could act as a buffer in cases like this, in particular to take care of security vulnerabilities that genuinely need dealing with quickly.

Of course that's exactly what traditional Linux distributions signed up to do.

Clearly many people have decided that they're better off without the distributions' packaging work. But maybe they should be thinking about how to get the "buffering" part back, and ideally make it work better than the distributions managed to.

throwaway2037 9 hours ago

    > ...there are currently four bugs marked with the security label in the libxml2 issue tracker. Three of those were opened on May 7 by Nikita Sveshnikov, a security researcher who works for a company called Positive Technologies.

I'm confused. Why doesn't Positive Technologies submit a patch or offer to pay the lead maintainer to implement a fix?

FYI, Wiki tells me:

    > Positive Technologies is a Russian information security research company and a global leader in cybersecurity.

  • jeroenhd 5 hours ago

    The security researcher is paid to find vulnerabilities, not to fix them. These companies are selling code analysis to their customers and the more issues they find, the more they'll be worth.

    When it comes to fixing the issues, their customers will have to beg/spam/threaten the maintainers until the problem is solved. They probably won't write a patch; after all, Apple, Google, and Microsoft are only small companies with limited funds.

  • flomo 7 hours ago

    Perhaps you are imagining some free software bong(o drum) circle?

    The big point is this is a critical component for Apple and Google (and maybe Microsoft), and nobody is paying any attention to it.

  • brazzy 8 hours ago

    Because they don't use libxml2 and don't actually have any need for a fix. They only want to build a reputation as pentesters by finding vulnerabilities in high-profile projects.

  • codedokode 9 hours ago

    Because they have other things to do? Nobody pays them to fix it, either.

firesteelrain 15 hours ago

I understand the stance, but the big corps (Apple, Google, Microsoft) are using it and silently accepting the risk. It's not entirely fair to say they give nothing, though: Google did make a donation.

  • burnt-resistor 14 hours ago

    Like tipping someone a penny. If it's so critical to their business, then they can pay a pittance to sustain it.

  • troupo 8 hours ago

    > It's not entirely fair though, Google did make a donation.

    Yup. $10,000.

    Remind me what the average Google salary is? Or how much profit Google made that year?

    Or better still, what is a livable wage where the libxml2 maintainer lives? You know, the maintainer of the library used in core Google products?

    • firesteelrain 6 hours ago

      I agree that $10,000 isn’t a meaningful investment given the scale of reliance.

      What would a fair model look like? An open-source infrastructure endowment? Ongoing support contracts per critical library?

      At the same time, I think there’s a tension in open source we don’t talk about enough: it’s built to be free and open to all, including the corporations we might wish were more generous. No one signed a contract!

      As the article states, libxml2 was widely promoted (and adopted) as the go-to XML parser. Now the maintainer is understandably tired. There is a sustainability problem that is more systemic than personal. How much did the creator of libxml2 benefit?

      I don’t think we should expect companies to do the right thing just because they benefit; that isn’t how open source was meant to be, and it isn’t how open source is supposed to work.

      But maybe that’s the real problem.

      • troupo 3 hours ago

        Yeah, open source funding is a tricky issue, and there are no good answers or solutions, unfortunately

VMtest 2 hours ago

I'm very sure that if he were well paid by those corporations, he would have no problem maintaining it. Take note, guys.

democracy 8 hours ago

Funny - when Struts 2/Log4j caused a lot of million-dollar problems, how many companies looked for commercial alternatives or invested in developing their own solutions? That's right - zero. Everyone just switched to the next freebie.

sorrythanks 13 hours ago

GPL was a good idea

  • jenadine 8 hours ago

    I agree. But it doesn't fix this particular problem. GPL software also needs maintainers and can still have security issues.

otikik 6 hours ago

I think they are not going far enough.

"All null-pointer-referencing issues should come with an accompanying fix pull request".

  • jeroenhd 5 hours ago

    I don't think the burden of fixing the code should be put on users. However, it also shouldn't be on the developers.

    I'd suggest something like "null-pointer-referencing issues will not be looked at by core maintainers unless someone already provides a patch". That way, someone else who knows how to fix the problem can step in, and users aren't left with the false impression that merely reporting their bug guarantees a solution.

  • tzs 6 hours ago

    So if I find a null pointer dereference issue in something written in a language I don’t know, I shouldn’t report it because I can’t include a fix?

    • otikik 6 hours ago

      If you don't know the language, why are you reporting null pointers?

      • tzs 5 hours ago

        Because the program crashed and the crash dump showed a null pointer dereference, and I found some inputs that reproduce it 100%, so I thought this might be useful to the developer?

        • otikik 5 hours ago

          In the context of libxml2, it does sound like for every hypothetical person like you there are going to be 20 "security researchers" like the ones the article mentions: just running automated tools and trying to use security issues as a way to promote themselves.

          If getting rid of your input also gets rid of the other 20 issues, I would take it.

h43z 5 hours ago

The only obstacle here appears to be the psychology of the maintainers themselves. I know it may be hard to say "fuck off", but they will have to learn to say it to stop being exploited.

benced 15 hours ago

Do we need a more profound solution than what the maintainer is doing here? Any given bug is either:

a) nonsense, in which case nobody should spend any time fixing it (I'm thinking of things like the frontend DDoS CVEs that are common), or

b) an actual problem, in which case a compliance person at one of these mega tech companies will tell the engineers it needs to be fixed. If the maintainer refuses to be the person fixing it (a reasonable choice), the mega tech company will eventually just do it.

I suppose the risk is the mega tech company only fixes it for their internal fork.

Aurornis 16 hours ago

I empathize with some of the frustrations, but I'm puzzled by the attempts to paint the library as low-quality and not suitable for production use:

> The viewpoint expressed by Wellnhofer's is understandable, though one might argue about the assertion that libxml2 was not of sufficient quality for mainstream use. It was certainly promoted on the project web site as a capable and portable toolkit for the purpose of parsing XML. Open-source proponents spent much of the late 1990s and early 2000s trying to entice companies to trust the quality of projects like libxml2, so it is hard to blame those companies now for believing it was suitable for mainstream use at the time.

I think it's very obvious that the maintainer is sick of this project on every level, but the effort to trash-talk its quality and the contributions of all previous developers doesn't sit right with me.

This is yet another case where I fully endorse a maintainer's right to reject requests and even step away from their project, but in my opinion it would have been better to just make an announcement about stepping away than to go down the path of trash talking the project on the way out.

  • rectang 16 hours ago

    I think Wellnhofer is accurate in his assessment of the current state of the library and the institutions supporting it. Software without adequate ongoing maintenance should not be used in production.

    (Disclosure: I'm a past collaborator with Nick on other projects. He's a fantastic engineer and a responsible and kind person.)

    • firesteelrain 6 hours ago

      The crux is these seemingly bogus security “bugs”. If there were quality issues, the sheer amount of software and number of people using libxml2, effectively testing it in production and in the wild, would have found most of them by now.

      There is plenty of closed-source software today that is tested only within cost and schedule constraints and is running in production. I get the point, but libxml2 is not one of those cases.

  • zetafunction 13 hours ago

    A large part of the problem is the legacy burden of libxml2 and libxslt. A lot of the implementation details are exposed in headers, and that makes it hard to write improvements/fixes that don't break ABI compatibility.
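
    A hypothetical illustration of why that is painful (invented names, not actual libxml2 declarations): once a struct layout is visible in a public header, callers may allocate it themselves and touch fields directly, so the layout is frozen into the ABI.

        /* Exposed layout: adding or reordering a member changes sizeof()
           and field offsets, breaking already-compiled callers. */
        typedef struct _xmlThing {
            int type;
            char *name;
        } xmlThing;

        /* Opaque alternative: only the library's .c file defines the
           layout, so fields can be added without an ABI break. */
        typedef struct _xmlThing2 xmlThing2;

        xmlThing2 *xmlThing2New(void);
        int xmlThing2Type(const xmlThing2 *t); /* accessor instead of t->type */
        void xmlThing2Free(xmlThing2 *t);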

  • flomo 16 hours ago

    Recall similar things were said about OpenSSL, and it was effective at getting corps to start funding the project.

    • wbl 15 hours ago

      It was not however effective at getting the project to care about quality or performance.

  • poulpy123 7 hours ago

    I think it's a way to say: "if you don't like what I'm doing, go fuck yourself"

mystified5016 9 minutes ago

Honestly the only permanent solution to this is probably a big string of LeftPad events. Maintainers of projects like this that have been subsumed into corporate infrastructure should pull the plug and nuke the git repo.

Disastrous, apocalyptic consequences are the only way to get the attention of the real decision makers. If libxml2 just vanishes and someone explains to John Chrome or whoever that $150k a year will make the problem go away, it's a non-decision. $150k isn't even a rounding error on a rounding error for Google.

The only way to fight corporations just taking whatever they want is to absolutely wreck their shit when they misbehave.

Call it juvenile, sure, but corporations are not rational adults and usually behave like a child throwing a temper tantrum. There have to be real, painful and ongoing consequences in order to force a corporation to behave.

atemerev 5 hours ago

Don't like something? Fork and fix.

Unhappy with a maintainer? Fork and maintain it yourself.

Some open source code creates issues in your project? Fix it and try to upstream the fix. Upstream doesn't accept it? Fork and announce the fix.

Unpaid open source developers owe you nothing; you can't demand anything; their work is already a huge charitable contribution to humanity. If you can do better, the fork button is universally available. Don't forget to say thank you to the original authors while you stand on the shoulders of giants.

KingMob 8 hours ago

"...the project has received the immense sum of $11,000..."

Is the author being sarcastic? Or is that genuinely an immense sum relative to how little funding most FOSS gets?

  • pabs3 8 hours ago

    Both :/

bjourne 16 hours ago

So the software was released under the MIT license, and the maintainer now complains that corporate users are not helping improve it? I'd file this under "Stallman told you so."

  • kayodelycaon 15 hours ago

    No. He’s complaining about companies demanding he do free work for them.

  • tzs 6 hours ago

    The license used is completely irrelevant here. Corporate users generally aren't making any changes to software like libxml2.

tptacek 15 hours ago

I don't think this trend much matters. Serious vendors concerned about security will simply vendor things like libxml2 and handle security inbounds themselves; they'll become the real upstreams.

  • canyp 11 hours ago

    Serious vendors:

bawolff 11 hours ago

So reading this, it sounds like the maintainer got burned out.

That's reasonable, being a maintainer is a thankless job.

However, I think there is a duty to step aside when that happens. If nobody can take the maintainer's place, then so be it; it's still better than the alternative. Being burned out but continuing anyway just hurts everyone.

It's absolutely not the security researchers' fault for reporting real, albeit low-severity, bugs. (To be clear though, it's entirely reasonable for maintainers to treat low-severity security bugs as public. The security policy is the maintainer's decision; it's not right to blame researchers for following the policy maintainers set.)

  • teddyh 2 hours ago

    Being a free software maintainer, especially for code that you did not yourself write, is in effect a volunteer position in a charity or a non-profit organization. You yourself volunteered to take the position, and when you did, you became responsible for acting in the interests of the project and all its users. The fact that you are not paid does not mean that you can do whatever you please. If you at any time feel that you cannot fulfill your responsibilities to your users and to the development of the project, you should immediately leave your position to more eager and/or capable hands. (You should already have been prepared and have such people ready to take over, which should be possible if the project is popular enough.)

  • firesteelrain 6 hours ago

    Curl has the same issue, and the problem is that these reports are just noise. They waste everyone's time and often lack even a proof of concept.