ProgramMax 19 hours ago

Author here. Hello everyone! Feel free to ask me anything. I'll go ahead and dispel some doubts I already see here:

- It isn't really a "new format". It's an update to the existing format.
- It is very backwards compatible.
  - Old programs will load new PNGs to the best of their capability. A user will still know "that is a picture of a red apple".

There also seems to be some confusion about how PNGs work internally. Short and sweet:

- There are chunks of data.
  - Chunks have a name, which says what data it contains. A program can skip a chunk it doesn't recognize.
- There is only one image stream.
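
A minimal sketch of that chunk layout (the length/name/data/CRC framing and the lowercase-first-letter "ancillary" convention are from the PNG spec; the code itself is just illustrative):

    import struct, zlib

    def walk_png_chunks(path):
        """Print every chunk in a PNG: 4-byte length, 4-byte name, data, 4-byte CRC."""
        with open(path, "rb") as f:
            assert f.read(8) == b"\x89PNG\r\n\x1a\n"       # fixed file signature
            while True:
                header = f.read(8)
                if len(header) < 8:
                    break
                length, ctype = struct.unpack(">I4s", header)
                data = f.read(length)
                (crc,) = struct.unpack(">I", f.read(4))
                assert zlib.crc32(ctype + data) == crc      # CRC covers name + data
                # Lowercase first letter = ancillary chunk, i.e. safe for a reader
                # to skip if it doesn't recognize the name.
                kind = "ancillary" if ctype[0] & 0x20 else "critical"
                print(ctype.decode("latin-1"), length, kind)
                if ctype == b"IEND":
                    break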

  • account42 6 hours ago

    > It isn't really a "new format". It's an update to the existing format.
    > It is very backwards compatible.
    > Old programs will load new PNGs to the best of their capability. A user will still know "that is a picture of a red apple".

    This is great but also has the issue that users might not notice that their setup is giving them a less than optimal result. Of course that is probably still better than not having backwards compatibility.

    Edit: Seems the backwards compatibility isn't as great as it could be. Old programs show a washed-out image instead, which sucks. This should have been avoidable in the same way JPEG gain maps work, so that you only need updated programs to take advantage of the increased gamut on wider-than-sRGB screens, and not just to correctly show colors that already fit into sRGB.

  • nabla9 2 hours ago

    Does it have any advantage over Lossless encoding in JPEG XL?

  • dave8088 11 hours ago

    You’re awesome. Thanks for making things better.

  • fwip 18 hours ago

    Do you have any examples on hand of PNGs that use the new features of the spec? It would be cool to see a little demo page with animated or HDR images, especially to download to test if our programs support them yet.

    • ProgramMax 18 hours ago

      Sure!

      Chris Lilley--one of the original PNG co-authors--has a post with an example HDR image: https://svgees.us/blog/cICP.html It is about half way down, with the birthday cake. Generally, us tech nerds have phones that are capable of displaying it well. So perhaps view the page on your phone.

      What you should look for is the cake, the pink tips in her hair, and the background being more vivid. For me, the pink in the cake was the big give-away.

      There is also the Web Platform Tests (WPT) which we use to validate browser support: https://wpt.fyi/results/png/cicp-chunk.html?label=master&lab...

      Although, that image is just a boring teal. See it live in your browser here: https://wpt.live/png/cicp-chunk.html

      For an example of APNG, you can use Wikipedia's images: https://en.wikipedia.org/wiki/APNG

      But you have a bigger point: I should have live demonstrations of those things to help people understand.

      • cratermoon 20 minutes ago

        I can see a clear difference between the images in Firefox on MacOS with my M1 macbook. Very nice.

      • jacekm 16 hours ago

        Thank you for the examples. I tried the one with a pink cake. Turns out that on my machine only web browsers are capable of displaying the image properly. All viewers (IrfanView, XnView, Nomacs, Windows Photos) and editors (Paint.NET, GIMP) that I've tried only showed the "washed out" picture.

        • ProgramMax 15 hours ago

          Yeah. We were able to get buy-in from some big players. We cannot contact every group, though. My hope is since big players have bought in, others will hear the message and update their programs.

          Sooooo file some bugs :D

          Also, be kind to them. This literally launched yesterday.

          • dave8088 11 hours ago

            The creator of photopea.com is very responsive to user suggestions. I’d recommend contacting him if you haven’t already.

        • account42 6 hours ago

          Huh, for some reason GIMP doesn't even show the usual color space conversion dialog.

      • Nopoint2 9 hours ago

        I never realized how limited sRGB is. I guess this is why people liked CRT TVs, and why you could never watch analog TV properly on a PC screen.

        • account42 6 hours ago

          It's really not that limited; the problem only arises if you reinterpret a larger gamut as sRGB without doing the proper conversion, which is when things look washed out.

          • Nopoint2 25 minutes ago

            That's what I thought too, but the difference is big. You'd think you'd maybe lose some colored lights, or very bright flowers, but no: colors outside sRGB are common.

            There was nothing you could do about the TV, the screen couldn't show all the colors that you needed.

      • fwip 16 hours ago

        Thanks, I appreciate all of these links. :)

  • derefr 17 hours ago

    So, I'm a big fan of metaformats with generalized tooling support. Think of e.g. Office Open XML or ePub — you don't need "an OOXML parser" / "an ePub parser" to parse these; they're both just zipped XML, so you just need a zipfile library and libxml.

    For the lifetime of PNG so far, a PNG file has almost, but just barely not, been a valid Interchange File Format (IFF) file.

    IFF is a great (simple to understand, simple to implement support for, easy to generate, easy to decode, memory-efficient, IO-efficient, relatively compact, highly compressible) metaformat, that more people should be aware of.

    However, up to this point, the usage of IFF has consisted of:

    • some old proprietary game-data and image formats from the 1980s that no modern person has heard of

    • some popular-yet-proprietary AV formats [AIFF, RIFF] that nobody would write a decoder for by hand anyway (because they would need a DSP library to handle the resulting sample-stream data anyway, and that library may as well offer container-format support too)

    • The object files of an open but uncommon language runtime (Erlang .beam files), where that runtime exposes only high-level domain-specific parsing tooling (`beam_lib`) rather than IFF-general decoding tooling

    • An "open-source but corporate-steered" image format that people are wary of allowing to gain ecosystem traction (WebP — which is more-specifically a document in a RIFF container)

    • And PNG... but non-conformantly, such that any generic IFF decoder that could decode the other things above, would choke on a PNG file.

    IMHO, this is a major reason that there is no such thing as "generalized IFF tooling" today, despite the IFF metaformat having all the attributes required to make it the "JSON of the binary world". (Don't tell me about CBOR; ain't nobody hand-rolling a CBOR encoder out of template strings.)

    If you can't guess by now, my wishlist item for PNGv3, is for PNG files to somehow become valid/conformant IFF files — such that the popularity of PNG could then serve as the bootstrap for a real IFF tooling ecosystem, and encourage awareness/use of IFF in new greenfield format-definition use-cases.

    ---

    Now, I've written PNG parsers, and generic IFF parsers too. I've even tried this exact unification trick before (I wanted an Erlang library that could parse both .beam files and PNG files. $10 if you can guess the use-case for that!)

    Because of this, I know that "making PNG valid per IFF" isn't really possible by modifying the PNG format, while ensuring that the resulting format is decodable by existing PNG decoders. If you want all the old [esp. hardware] PNG parsers to be compatible with PNGv3s, then y'all can't exactly do anything in PNGv3 like "move the 4-byte CRC inside the chunk as measured by the 4-byte chunk length" or "make the CRCs into their own chunks that reference the preceding record".

    But I'm not proposing that. I'm actually proposing the opposite.

    Much of what PNGv2 did in contravention of the IFF spec, is honestly a pretty good idea in general. It's all stuff that could be "upstreamed" — from the PNG level, to the IFF level.

    I propose: formalizing "the variant of IFF used in PNG" as its own separate metaformat specification — breaking this metaformat out from the PNG spec itself into its own standards document.

    This would then be the "Interchange File Format specification, version 2.0" (not that there was ever a formal IFFv1 spec; we all just kind of looked at what EA/Commodore had done, and copied it in our own code since it was so braindead-easy to implement.)

    This IFF 2.0 spec would formalize, at least, a version or "profile" of IFF for which PNGv2 images are conformant files. It would have chunk CRCs; chunk attribute bits encoded for purposes of decoders + editors via meaningful chunk-name letter-casing; and an allowance for some number of garbage bytes before the first valid chunk begins (for PNG's leading file signature that is not itself a valid IFF chunk.)

    This could be as far as the IFF 2.0 spec goes — leaving IFFv1 files non-decodable in IFFv2 parsers. But that'd be a shame.

    I would suggest going further — formalizing a second IFFv2 "profile" against which IFFv1 documents (like AIFF or RIFF files) are conformant; and then specifying that "generic" IFFv2-conformant decoders (i.e. a hypothetical "libiff", not a format-specific libpng) MUST implement support for decoding both the IFFv1-conforming and the PNGv2-conforming profiles of IFF.

    It could then be up to the IFF-decoding-tooling user (CLI command user, library caller) to determine which IFFv2 "profile" to apply to a given document... or the IFFv2 spec could also specify some heuristic algorithm for input-document "profile" detection. (I think it'd be pretty easy; find a single chunk, and if what follows its chunk-length is a CRC that validates that chunk, then you have the PNGv2-like profile. Whereas if it's not that, but is instead four bytes of chunk-name-valid character ranges, then you've got the IFFv1-like profile. [And if it's neither, then you've got a file with a corrupted first chunk.])
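
    A rough sketch of that sniffing heuristic (purely illustrative; assumes the buffer starts at the first chunk, or at PNG's 8-byte signature):

        import struct, zlib

        PNG_SIG = b"\x89PNG\r\n\x1a\n"

        def sniff_profile(buf: bytes) -> str:
            if buf.startswith(PNG_SIG):            # allowed leading "garbage" in the PNG-like profile
                buf = buf[len(PNG_SIG):]
            # PNG-like profile: [length][name][data][CRC over name+data]
            (length,) = struct.unpack(">I", buf[:4])
            name, data = buf[4:8], buf[8:8 + length]
            if len(buf) >= 12 + length:
                (crc,) = struct.unpack(">I", buf[8 + length:12 + length])
                if zlib.crc32(name + data) == crc:
                    return "png-like profile"
            # IFFv1-like profile: first four bytes are a printable chunk name, then a length
            if all(0x20 <= b <= 0x7E for b in buf[:4]):
                return "iffv1-like profile"
            return "corrupted first chunk"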

    ---

    And, if you want to go really far, you could then specify a third entirely-novel "profile", for use in greenfield IFF applications:

    • A few bytes of space aren't so precious; we can hash things much faster these days, with hardware-accelerated hashing instructions; and those instructions are for hashes that do much better than CRC at ensuring integrity. So either replace the inline CRCs with CRC chunks, or with nested FORM-like container records (WCRC [len] [CRC4] [interior chunk]). Or just skip per-chunk CRCs and formalize a fHsh chunk for document-level integrity, embedding the output of an arbitrary hash algorithm specified by its registered https://github.com/multiformats/multihash "hash function code".

    • Re-widen the chunk-name-valid character set to those valid in IFFv1 documents, to ensure those can be losslessly re-encoded into this profile. To allow chunks with non-letter characters to have a valid attribute decoding, specify a document-level per-chunk-name "attributes of all chunks of this type" chunk, that can either be included into a given concrete format's header-chunk specification, or allowed at various points in the chunk stream per a concrete format's encoding rules (where it would then be expected to apply to any successor + successor-descendant chunks within its containing chunk's "scope.") Note that the goal here is to keep the attribute bits in some way or another — they're very useful IMHO, and I would expect an IFF decoder lib to always be emitting these boolean chunk-attribute fields as part of each decoded chunk.

    • Formalize the magic signature at the beginning into a valid chunk, that somehow encodes 1. that this is an IFF 2.0 "greenfield profile" document (bytes 0-3); 2. what the concrete format in use is (bytes 4-7). (You could just copy/generalize what RIFF does here [where a RIFF chunk has the semantics of a LIST chunk but with a leading 4-byte chunk-name type], such that the whole document is enclosed by a root chunk — though this is painful in that you need to buffer the entire document if you're going to calculate the root-chunk length.)

    I'm just spitballing; the concrete details of such a greenfield profile don't matter here, just the design goal — having a profile into which both IFFv1 and PNGv2 documents could be losslessly transcoded. Ideally with as minimal change to the "wider and weirder/more brittle ecosystem" side [in this case that's IFFv1] as possible. (Compare/contrast: how HTML5 documents are a profile of HTML that supersedes both HTML4 and XHTML1.1 — supporting both unclosed tags and XML-namespaced element names — allowing HTML4 documents to parse "as" HTML5 without rewrites, and XHTML1.1 documents to be transcoded to HTML5 by just stripping some root-level xmlns declarations and changing the doctype.)

    • ProgramMax 16 hours ago

      Strangely, I was familiar with AIFF and RIFF files but never made the connection that they're both IFF. I hadn't known about IFF before your post. Thank you :)

      W3C requires that we do not break old, conformant specs. Meaning if the next PNG spec would invalidate prior specs, they won't approve it. By extension, an old, conformant program will not suddenly become non-conformant.

      I could see a group of people formalizing IFFv2, and adapting PNG to it. But that would effectively be PNGIFF, not PNG. It would be a new spec. Because we cannot break the old one.

      That might be fine. But it comes with a new set of problems, like adoption.

      Soooo I like the idea but it would probably be a separate thing. FWIW, it would actually be nice to make a formal IFF spec. If there's no governing body that owns it, we can find an org and gather interest.

      I doubt W3C would be the right org for it. ISO subgroup??

      • saintfire 15 hours ago

        They pretty much say the same thing halfway through. Don't change PNG but adapt IFF to work with PNG's flavour of IFF.

        • ProgramMax 14 hours ago

          Right. Sorry, that was supposed to be a "yes, and..." to provide some additional context.

    • account42 6 hours ago

      We really shouldn't be making new standards with big endian byte order.

      It's also questionable how much you actually benefit from common container formats like this, since you need to know the application-specific format contained anyway in order to do anything useful with it. It also causes problems where "smart" programs treat files in ways that make no sense, e.g. by offering to extract a .docx file just because it looks like a .zip.

  • 80x86 12 hours ago

    It would be nice if PNG supported no compression. That is handy in many situations.

joshmarinacci a day ago

A fun trick I do with my web based drawing tools is to save a JSON representation of your document as a comment field inside of a PNG. This way the doc you save is immediately usable as an image but can also be loaded back into the editor. Also means your downloads folder isn’t littered with unintelligible JSON files.
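
A minimal sketch of that trick using Pillow's PNG text-chunk support (the "document" key and the shape data are made up for illustration):

    import json
    from PIL import Image
    from PIL.PngImagePlugin import PngInfo

    doc = {"shapes": [{"type": "rect", "x": 10, "y": 20, "w": 100, "h": 50}]}

    img = Image.new("RGBA", (256, 256))          # stand-in for the actual rendered drawing
    meta = PngInfo()
    meta.add_text("document", json.dumps(doc))   # JSON goes into a text chunk
    img.save("drawing.png", pnginfo=meta)

    # Any viewer treats drawing.png as a plain image; the editor reads the JSON back:
    restored = json.loads(Image.open("drawing.png").text["document"])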

  • dtech 21 hours ago

    A fun trick, but I wouldn't want to explain to users why their things are saved as a .png, nor why their work is lost after they opened and saved the PNG in Paint.

    • account42 5 hours ago

      It can also become a security issue when users inadvertently share layers/history/whatever that isn't visible anymore in the final image but is still in the editable part.

    • KetoManx64 20 hours ago

      If a user is using paint to edit their photos, they're 100% not going to be interested in having the source document to play around with.

  • speps 20 hours ago

    Macromedia Fireworks did it 20 years ago, the PNG was the default save format. Of course, it wasn’t JSON stored in there…

    • usef- 16 hours ago

      I was going to say the same thing. It was nice as their native save format could still be opened anywhere.

      But you did need to remember to export if you didn't want the extra fields increasing the file size. I remember finding Fireworks PNGs on web pages many times back then.

  • IvanK_net 20 hours ago

    Macromedia did this when saving Fireworks files into PNG.

    Also, Adobe saves AI files into a PDF (every AI file is a PDF file), and Photoshop can save PSD files into TIFF files (people wonder why these TIFFs have several layers in Photoshop, but just one layer in all other software).

    • giancarlostoro 19 hours ago

      > Macromedia did this when saving Fireworks files into PNG.

      I forgot about this...

      Fireworks was my favorite image editor, I don't know that I've ever found one I love as much as I loved Fireworks. I'm not a graphics guy, but Fireworks was just fantastic.

      • IvanK_net 19 hours ago

        BTW. I am the author of https://www.photopea.com , which is the only software that can open Fireworks files today :D If you have any files, try to open them (it runs instantly in your browser).

        https://community.adobe.com/t5/fireworks-discussions/open-fi...

        • eigenvalue 3 hours ago

          You’re doing god’s work here, thanks for your service! I use photopea all the time. Probably the most impressive web app I’ve seen in terms of performance.

        • Andrex an hour ago

          Proud paid Photopea user here. I can't understand how you guys overcame my mountain of incredulity but you have saved my ass so much. I was literally looking into dual booting before I found your product.

          (Not many things handle .ai so well either!!)

        • speps 15 hours ago

          Do you have any info on the format used in the PNG chunks? I’d love for someone to recreate Fireworks, it was perfectly adapted to a lot of workflows.

  • neuronexmachina 21 hours ago

    This would be great for things like exported Mermaid diagrams.

  • tomtom1337 a day ago

    Could you expand on this? It sounds a bit preposterous to save a text, as json, inside an image - and then expect it to be immediately usable… as an image?

    • bitpush a day ago

      Not OP, but PNG (and most image/video formats) allows metadata, and most allow arbitrary fields. Good parsers know to ignore/safely skip over fields that they are not familiar with.

      https://dev.exiv2.org/projects/exiv2/wiki/The_Metadata_in_PN...

      This is similar to HTTP request headers, if you're familiar with those. There is a set of standard headers (User-Agent, ETag etc.) but nobody is stopping you from inventing x-tomtom and sending that along with an HTTP request. And on the receiving end, you can parse it and make use of it. Same thing with PNG here.

    • LeifCarrotson 21 hours ago

      They're not saving text, they're saving an idea - a "map" or a "CAD model" or a "video game skin" or whatever.

      Yes, a hypothetical user's sprinkler layout "map" or whatever they're working on is actually composed of a few rectangles that represent their house, and a spline representing the garden border, and a circle representing the tree in the front yard, and a bunch of line segments that draw the pipes between the sprinkler heads. Yes, each of those geometric elements can be concisely defined by JSON text that defines the X and Y location, the length/width/diameter/spline coordinates or whatever, the color, etc. of the objects on the map. And yes, OP has a rendering engine that can turn that JSON back into an image.

      But when the user thinks about the map, they want to think about the image. If a landscaping customer is viewing a dashboard of all their open projects, OP doesn't want to have to run the rendering engine a dozen times to re-draw the projects each time the page loads just to show a bunch of icons on the screen. They just want to load a bunch of PNGs. You could store two objects on disk/in the database, one being the icon and another being the JSON, but why store two things when you could store one?

    • chown a day ago

      The text is saved as JSON in a comment, but the file itself is a PNG, so you can use it as an image (like previewing it) since viewers ignore the comments. However, the OP's editor can load the file back, parse the comments, and get the original data to continue editing. Just one file to maintain. Quite clever actually.

    • woodrowbarlow 21 hours ago

      this is useful for code that renders images (e.g. data-visualization tools). the image is the primary artifact of interest, but maybe it was generated from data represented in JSON format. by embedding the source data (invisibly) in the image, you can extract it later to modify and re-generate.

    • behnamoh a day ago

      no, GP meant they add the JSON text to the metadata of the image as a comment.

    • meindnoch 20 hours ago

      Check what draw.io does when you download a PNG.

  • akx 19 hours ago

    This is what stable-diffusion-webui does too (though the format is unfortunately plaintext); ComfyUI stores the node graph as JSON, etc.

  • dragonwriter 19 hours ago

    This is also what many AI image gen frontends do, saving the generation specs as comments so you can open the image and get the prompt and settings (or, for e.g. ComfyUI, full workflows) loaded to tweak.

    Really, I think it's pretty common for tools that work with images generally.

  • osetnik 19 hours ago

    > save a JSON representation of your document as a comment field inside of a PNG

    Can you compress it? I mean, theoretically there is this 'zTXt' chunk, but it never worked for me, therefore I'm asking.
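
    For what it's worth, a zTXt chunk is just keyword + NUL + a compression-method byte (0 = deflate) + a zlib stream. A hand-rolled sketch (assumes a well-formed PNG whose last 12 bytes are the IEND chunk):

        import struct, zlib

        def add_ztxt(png: bytes, keyword: str, text: str) -> bytes:
            body = (keyword.encode("latin-1") + b"\x00"   # keyword + null separator
                    + b"\x00"                             # compression method 0 = zlib/deflate
                    + zlib.compress(text.encode("latin-1")))
            chunk = (struct.pack(">I", len(body)) + b"zTXt" + body
                     + struct.pack(">I", zlib.crc32(b"zTXt" + body)))
            return png[:-12] + chunk + png[-12:]          # splice in just before IEND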

  • paisawalla 21 hours ago

    Are you the developer of draw.io?

ksec a day ago

It is just a spec on something widely implemented already.

Assuming next-gen PNG will still require a new decoder, they could just call it PNG2.

JPEG-XL already provides everything most people asked for in a lossless codec. If there are any problems, it is its encoding and decoding speed and resources.

The current champion of lossless image codecs is HALIC. https://news.ycombinator.com/item?id=38990568

  • thesz a day ago

    HALIC discussion page [1] says otherwise.

    [1] https://encode.su/threads/4025-HALIC-(High-Availability-Loss...

    It looks like LEA 0.5 is the champion.

    And HALIC is not even close to the top ten in this lossless image compression benchmark [2].

    [2] https://github.com/WangXuan95/Image-Compression-Benchmark

    • poly2it 20 hours ago

      It looks like HALIC offers very impressive decode speeds within its compression range.

      • ksec 19 hours ago

        And not just decoding speed but also encoding speed, with a difference of an order of magnitude. Some new results are further down in the comments in this thread. Had it not been verified, I would have thought it was a scam.

  • Aloisius a day ago

    I'll be honest, I ignored JPEG XL for a couple years because I assumed that it was merely for extra large images.

  • voxleone a day ago

    I'm using PNG in a computer vision image annotation tool [0]. The idea is to store the class labels directly in the image [dispensing with the sidecar text files], taking advantage of the beautiful PNG metadata capabilities. The next step is to build a specialized extension of the format for this kind of task.

    [0]https://github.com/VoxleOne/XLabel

  • illiac786 a day ago

    > If there are any problems it is its encoding and decoding speed and resources.

    And this will improve over time, like jpg encoders and decoders did.

    • ksec a day ago

      I hope I am very wrong, but this isn't a given. In the past, reference encoders and decoders didn't concern themselves with speed and resources, but the last 10 years have shown that most reference encoders and decoders already put considerable effort into speed optimisation. And it seems people are already looking at hardware JPEG XL implementations. (I hope and guess this is for lossless only.)

      • illiac786 a day ago

        I would agree we will see fewer improvements than when comparing a modern JPEG implementation to the reference one.

        When it comes to hardware encoding/decoding, I am not following your point I think. The fact that some are already looking at hardware implementation for JPEG XL means that….?

        I just know JPEG hardware acceleration is quite common, hence I am trying to understand how that makes JPEG XL different/better/worse?

        • ksec 19 hours ago

          In terms of PC usage, JPEG, like most image codecs, is decoded in software and not hardware. AFAIK even AVIF decoding is done in software in browsers.

          Hardware acceleration for lossless makes more sense for JPEG XL because it is currently very slow. As the author of HALIC posted some results below, JPEG XL is about 20 - 50x slower while requiring lots of memory even after memory optimisation, and about 10 - 20 times slower compared to other lossless codecs. JPEG XL is already used by cameras and stored as DNG, but encoding resources are limiting its reach. Hence a hardware encoder would be great.

          For lossy JPEG XL, not so much. Just like with video codecs, hardware encoders tend to focus on speed, and it takes multiple iterations or 5 - 10 years before they catch up on quality. JPEG XL is relatively new, with so many tools and usage optimisations that even the current software encoder is far from reaching the codec's potential. And I don't want a crappy-quality JPEG XL hardware encoder, hence I much prefer an upgradeable software encoder for lossy JPEG XL and a hardware encoder for lossless JPEG XL.

    • account42 5 hours ago

      Or it won't, like JPEG 2000 encoders didn't.

      • illiac786 2 hours ago

        I mean, if jxl becomes mainstream, of course.

  • bla3 a day ago

    WebP lossless is close to state of the art and widely available. It's also not widely used. The takeaway seems to be that absolute best performance for lossless compression isn't that important, or at least it won't get you widely adopted.

    • ProgramMax 21 hours ago

      WebP maxes at 8-bit per channel. For HDR, you really need 10- or 12-bit.

      WebP is amazing. But if I were going to label something "state of the art" I would go with JPEGXL :)

    • mchusma 21 hours ago

      I don't know that I have ever used JPG or PNG lossless in practical usage (e.g. I don't think 99.9% of mobile app or web use cases are for lossless). WebP lossy performance is just not worth it in practice, which is why WebP never took off IMO.

      Are there use cases for lossless other than archival?

      • Inityx 18 hours ago

        Asset pipelines for media creation benefit greatly from better compression of lossless images and video

    • adzm a day ago

      Only downside is that webp lossless requires RGB colorspace so you can't, for example, save direct YUV frames from a video losslessly. AVIF lossless does support this though.

    • account42 5 hours ago

      Last I checked cwebp does not preserve PNG color space information properly so the result isn't actually visually lossless.

  • ChrisMarshallNY 21 hours ago

    Looks like it's basically reaffirming what a lot of folks have been doing, unofficially.

    For myself, I use PNG only for computer-generated still images. I tend to use good ol' JPEG for photos.

  • yyyk a day ago

    When it comes to metadata, a feature not being widely implemented (yet) is not that big a problem. Select tools will do for metadata, so this is an advancement for PNG.

  • HakanAbbas 21 hours ago

    I don't really understand what the new PNG does better. Elements such as speed or compression ratio are not mentioned. Thanks also for your kind thoughts ksec.

    Apart from widespread codec support, there are 3 important elements: processing speed, compression ratio and memory usage. These are taken into account when making a decision (Pareto limit). In other words, being the fastest or achieving the best compression alone does not matter. Otherwise, the situation can be interpreted as insufficient knowledge and experience about the subject.

    HALIC is very good at lossless image compression in terms of speed/compression ratio. It also uses a comically small amount of memory. No one mentioned whether this was necessary or not. However, low memory usage negatively affects both the processing speed and the compression ratio. You can see the real performance of HALIC only on large (20 MPixel+) images, single- and multi-threaded. An example current test is below. During operations, HALIC uses only about 20 MB of memory, while JXL uses more than 1 GB of memory.

    https://www.dpreview.com/sample-galleries/6970112006/fujifil...

    June 2025, i7 3770k, Single Thread Results
    ----------------------------------------------------
    First 4 JPG Images to PPM, Total 1,100,337,479 bytes

      HALIC NORMAL:     5.143s    6.398s   369,448,062 bytes
      HALIC FAST:       3.481s    5.468s   381,993,631 bytes
      JXL 0.11.1 -e1:  17.809s   28.893s   414,659,797 bytes
      JXL 0.11.1 -e2:  39.732s   26.195s   369,642,206 bytes
      JXL 0.11.1 -e3:  81.869s   72.354s   371,984,220 bytes
      JXL 0.11.1 -e4: 261.237s   80.128s   357,693,875 bytes
    ----------------------------------------------------
    First 4 RAW Images to PPM, Total 1,224,789,960 bytes

      HALIC NORMAL:     5.872s    7.304s   400,942,108 bytes
      HALIC FAST:       3.842s    6.149s   414,113,254 bytes
      JXL 0.11.1 -e1:  19.736s   32.411s   457,193,750 bytes
      JXL 0.11.1 -e2:  42.845s   29.807s   413,731,858 bytes
      JXL 0.11.1 -e3:  87.759s   81.152s   402,224,531 bytes
      JXL 0.11.1 -e4: 259.400s   83.041s   396,079,448 bytes
    ----------------------------------------------------

    I had a very busy time with HALAC. Now I've given it a break, too. Maybe I can go back to HALIC, which I left unfinished, and do better. That is, better compression and/or faster. Or I can make it work much better on synthetic images. I can also add a near-lossless mode. But I don't know if it's worth the time I'd have to spend on it.

    • account42 5 hours ago

      > In other words, the fastest or the best compression maker alone does not matter.

      Strictly true, but e.g. for archival or content delivered to many users, the compression speed and memory needed for compression are an afterthought compared to compressed size.

      • HakanAbbas 38 minutes ago

        Storage is cheaper than it used to be. Bandwidth is also cheaper than it used to be (though not as cheap as storage). So high quality lossy techniques and lossless techniques can be adopted more than low quality lossy compression techniques. Today, processor cores are not getting much faster. And energy is still not cheap. So in all my work, processing speed (energy consumption) is a much higher priority for me.

        • boogerlad 22 minutes ago

          You're right, but aren't you forgetting that for each image, the encode cost needs to be paid just once, but the decode time must be paid many many times? Therefore, I think it's important to optimize size and decode time.

  • klabb3 a day ago

    What about transparency? That’s the main benefit of PNG imo.

    • cmiller1 a day ago

      Yes JPEG-XL has an alpha channel.

qwertox a day ago

> Officially supports Exif data

Probably the best news here. While you already can write custom data into a header, having Exif is good.

BTW: Does Exif have a magnetometer (rotation) and acceleration (gravity) field? I often wonder why Google isn't saving this information in the images which the camera app saves. It could help so much with post-processing, like leveling the horizon or creating panoramas.

  • Aardwolf a day ago

    Exif can also cause confusion for how to render the image: should its rotation be applied or not?

    Old decoders and new decoders could now render an image with Exif rotation differently, since it's an optional chunk that can be ignored, and even for new decoders the spec lists no recommendations for how to use the Exif rotation.

    It does say "It is recommended that unless a decoder has independent knowledge of the validity of the Exif data, the data should be considered to be of historical value only.", so hopefully the rotation will not be used by renderers, but it's only a vague recommendation; there's no strict "don't rotate the image", which would be the only backwards-compatible behavior.

    With JPEG's Exif, there have also been bugs with the rotation being applied twice, e.g. the desktop environment and the underlying library both doing it independently.

    • DidYaWipe a day ago

      The stupid thing is that any device with an orientation sensor is still writing images the wrong way and then setting a flag, expecting every viewing application to rotate the image.

      The camera knows which way it's oriented, so it should just write the pixels out in the correct order. Write the upper-left pixel first. Then the next one. And so on. WTF.

      • ralferoo a day ago

        One interesting thing about JPEG is that you can rotate an image with no quality loss. You don't need to convert each 8x8 square to pixels, rotate and convert back, instead you can transform them in the encoded form. So, rotating each 8x8 square is easy, and then rotating the image is just re-ordering the rotated squares.
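
        In practice this is what jpegtran does; a sketch of invoking it (filenames are placeholders):

            import subprocess

            # jpegtran rotates in the DCT domain, so there is no decode/re-encode and no quality loss.
            # -perfect fails with an error if the whole image can't be transformed losslessly
            # (e.g. when the dimensions aren't a multiple of the block size).
            subprocess.run(["jpegtran", "-rotate", "90", "-perfect",
                            "-outfile", "rotated.jpg", "original.jpg"], check=True)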

        • pwdisswordfishz a day ago

          That doesn't seem to apply to images that aren't multiples of 8 in size, does it?

          • justincormack a day ago

            the stored image is always a multiple of 8, with padding that is ignored (and heavily compressed).

            • pwdisswordfishz a day ago

              But can this lossless rotation process account for padding not being in the usual place (lower right corner presumably)?

              • mort96 20 hours ago

                I'm not sure if this is how JPEG implements it, but in H.264, you just have metadata which specifies a crop (since H.264 also encodes in blocks). From some quick Googling, it seems like JPEG also has EXIF data for cropping, so if that's the mechanism that's used to crop off the bottom and right portions today, there's no reason it couldn't also be used to crop off the top and left portions when losslessly rotating an image's blocks.

          • hidroto a day ago

            are there any cameras that take pictures that are not a multiple of 8 in width and height?

        • DidYaWipe 15 hours ago

          Indeed. Whenever I'm using an image browser/manager application that supports rotating images, I wonder if it's doing JPEG rotation properly (as you describe) or just flipping the dumb flag.

        • meindnoch 20 hours ago

          Only if the image width/height is a multiple of 8. See: the manpage of jpegtran, especially the -p flag.

        • dylan604 a day ago

          Slight nitpicking, but you can rotate in 90° increments without loss.

      • klabb3 a day ago

        TIL, and hard agree (on face value). I’ve been struck by this with arbitrary rotation of images depending on application, very annoying.

        What are the arguments for this? It would seem easier for everyone to rotate and then store exif for the original rotation if necessary.

        • kllrnohj a day ago

          > What are the arguments for this? It would seem easier for everyone to rotate and then store exif for the original rotation if necessary.

          Performance. Rotation during rendering is often free, whereas the camera would need an intermediate buffer + copy if it's unable to change the way it samples from the sensor itself.

          • DidYaWipe 15 hours ago

            Given that rotation sensors have been standard equipment on most cameras (AKA phones) for many years now, I would expect pixel-reordering to be built into supporting ASICs and to impose negligible performance penalties.

          • airstrike a day ago

            How is rotation during rendering free?

            • kllrnohj a day ago

              For anything GPU-rendered, applying a rotation matrix to a texture sample and/or frame-buffer write is trivially cheap (see also why Vulkan prerotation exists on Android). Even ignoring GPU-rendering, you always are doing a copy as part of rendering and often have some sort of matrix operation anyway at which point concatenating a rotation matrix often doesn't change much of anything.

              • account42 5 hours ago

                The cost is paid in different memory access patterns, which may or may not be mitigated by the GPU scheduler. It's an insignificant cost either way though, both for the encoder and the renderer. Also, depending on the pixel order in the sensor, file or framebuffer, "rotated" might actually be the native way, and the default is where things get flipped around from source to destination.

                • kllrnohj an hour ago

                  Access pattern is mitigated by texture swizzling which will happen regardless of how it's ultimately rendered. So even if drawn with an identity matrix you're still "paying" for it regardless just due to the underlying texture layout. GPUs can sample from linear textures, but often it comes with a significant performance penalty unless you stay on a specific, and undefined, path.

            • chainingsolid a day ago

              Pretty much every pixel rendered these days was generated by a shader, so GPU-side you probably already have way more transform options than just a 90° rotation (likely already being used for a rotation of 0°). You'd likely have to write more code CPU-side to tell the GPU "rotate this please" and handle the UI layout difference. Honestly not a lot of code.

      • Someone a day ago

        > The camera knows which way it's oriented, so it should just write the pixels out in the correct order. Write the upper-left pixel first. Then the next one. And so on. WTF.

        The hardware likely is optimized for the common case, so I would think that can be a lot slower. It wouldn’t surprise me, for example, if there are image sensors out there that can only be read out in top to bottom, left to right order.

        Also, with RAW images and sensors that aren't rectangular grids, I think that would complicate RAW image parsing. Code for that could have to support up to four different formats, depending on how the sensor is designed.

        • account42 5 hours ago

          Sensors are not read out as JPEG but into intermediate memory. The encoding step can then deal with the needed rotation.

          RAW images aren't JPEGs so not relevant to the discussion.

        • DidYaWipe 15 hours ago

          At this point I expect any camera ASICs to be able to incorporate this logic for plenty-fast processing. Or to do it when writing out the image file, after acquiring it to a buffer.

          Your raw-image idea is interesting. I'm curious as to how photosites' arrangement would play into this.

      • mavhc a day ago

        Because your non-smartphone camera doesn't have enough ram/speed to do that I assume (when in burst mode)

        If a smartphone camera is doing it, then bad camera app!

        • Aardwolf a day ago

          Rotation for speed/efficiency/compression reasons (indeed, with PNG's horizontal line filters it can have a compression reason too) should have been a flag that is part of the compressed image data format, for use by the encoder/decoder only. (That does have caveats for renderers handling partial decoding... but the point is to have the behavior rigorously specified, encoded in the image format itself, and handled in exactly one known place, namely the decoder.) It should not be part of metadata.

          It's basically a shame that the Exif metadata contains things that affect the rendering.

        • account42 5 hours ago

          Burst mode in cameras means the sensor readout is buffered in RAM while the encoding and writing to persistent storage catches up. Rotating the buffer would be part of the latter and would not affect burst speed - and is an insignificant cost anyway.

        • joking a day ago

          the main reason is probably that the chip is already outputting the image in a lossy format, and if you reorder the pixels you must re-encode the image, which means degrading it, so it's much better to just change the exif orientation.

          • DidYaWipe 15 hours ago

            Image sensors don't "output images in a lossy format" as far as I know.

          • lsaferite 20 hours ago

            > the chip is already outputting the image in a lossy format

            Could you explain this one?

        • Joel_Mckay a day ago

          Most modern camera modules have built in hardware codecs like mjpeg, region of interest selection, and frame mirror/flip options.

          This is particularly important on smartphones and battery operated devices. However, most smartphone devices simply save the photo the same way regardless of orientation, and simply add a display-rotated flag to the metadata.

          It can be super annoying sometimes, as one can't really disable the feature on many devices. =3

  • bawolff a day ago

    Personally I wish people just used XMP. Exif is such a bizarre format. It's essentially embedding a TIFF image inside a PNG.

  • jandrese a day ago

    Yes, but websites frequently strip all or almost all Exif data from uploaded images because some fields are used by stalkers to track people down to their real address.

    • johnisgood a day ago

      And I strip Exif data, too, intentionally, for similar reasons.

      • bspammer a day ago

        That makes sense to me for any image you want to share publicly, but for private images having the location and capture time embedded in the image is incredibly useful.

        • jandrese 21 hours ago

          If you are uploading it to a website you are sharing it. Even if the image is supposedly "private" you have to assume it will be leaked at some point. Remember, the cloud is just someone else's computer, and they can do what they want with their computer. They may also not be entirely competent at their job.

          • johnisgood 16 hours ago

            Yes, once something has been shared (or stolen), you lost control over it, be it information or an image. EXIF data is fine, if it never leaves your device or if your device is not compromised.

        • johnisgood a day ago

          If by private you mean "never shared", I agree.

    • sunaookami 10 hours ago

      That reminds me of when I first uploaded a picture to some forum and it showed my full home address together with a map as a "feature".

      • account42 5 hours ago

        It is a feature because now you are aware of what you are sharing and can potentially delete it before too many others see it.

  • pezezin 8 hours ago

    Ages ago I worked on photogrammetry software, and the lack of such information was indeed painful for us. One of the most important parts of the processing pipeline is calculating the position and orientation of each camera; having at least the orientation would have made our life much easier.

  • joshvm 15 hours ago

    There is an acceleration field (Exif.Photo.Acceleration), and Exif.Photo.CameraElevationAngle for elevation, but oddly not 3 axes. Similarly there are fields for ambient environmental conditions, but only whatever specific things the spec writers considered.

    You could store this in Exif.Photo.MakerNote: "A tag for manufacturers of Exif writers to record any desired information. The contents are up to the manufacturer." I think it can be pretty big, certainly more than enough for 9 DoF position data.

  • Findecanor a day ago

    Does the meta-data have support for opting in/out of "AI training"?

    And is being able to read an image without an opt-in tag something that has to be explicitly enabled in the reference implementation's API?

albert_e a day ago

So animated GIFs can be replaced by Animated PNGs with alpha blending with transparent backgrounds and lossless compression! Some nostalgia from 2000s websites can be revived and relived :)

Curious if animated SVGs are also a thing. I remember seeing some JavaScript-based SVG animations (it was an animated chatbot avatar) - but not sure if there is any standard framework.

  • andsoitis a day ago

    > Curious if Animated SVGs are also a thing.

    Yes. Relevant animation elements:

    • <set>

    • <animate>

    • <animateTransform>

    • <animateMotion>

    See https://www.w3schools.com/graphics/svg_animation.asp

    • albert_e a day ago

      Oh TIL - Thanks!

      This could possibly be used to build full fledged games like pong and breakout :)

      • jerf 20 hours ago

        SVG also supports Javascript, which will probably be a lot more useful for games.

        • dveditz_ 19 hours ago

          It supports JavaScript when used as a document, but when used as an "image" by a browser (IMG tag, CSS features) JavaScript and the loading of external resources are disabled.

    • mattigames a day ago

      Overshadowed by CSS animations for almost all use cases.

      • account42 5 hours ago

        *in browsers

        Most other SVG renderers don't support much CSS.

      • lawik a day ago

        But animated gradient outlines on text is the only use-case I care about.

        • mattigames a day ago

          "Use case" is written without hyphen https://en.m.wikipedia.org/wiki/Use_case

          • WorldMaker 21 hours ago

            Hyphenation of multi-word nouns is a process in English that usually happens after some time of usage as separate words. It often happens before an eventual merger into a single compound-word noun. Such as: "Electronic Mail" to "E Mail" to "e-mail" to "email".

            Given how often it is used as a jargon term in software development, I can absolutely see this usage of "use-case" here as a "vote" for the next step in the process. Will we eventually see "usecase" become common? It's possible. I think it might even be a good idea. I'm debating adding my own "votes" for the hyphen moving forward.

          • fkyoureadthedoc a day ago

            I have to differentiate myself from LLMs by using words wrong though

  • riffraff a day ago

    I was under the impression many gifs these days are actually served as soundless videos, as those basically compress better.

    Can animated PNG beat av1 or whatever?

    • layer8 a day ago

      APNG would be for lossless compression, and probably especially for animations without a constant frame rate. Similar to the original GIF format, with APNG you explicitly specify the duration of each individual frame, and you can also explicitly specify looping. This isn’t for video, it’s more for Flash-style animations, animated logos/icons [0], or UI screen recordings.

      [0] like for example these old Windows animations: https://www.randomnoun.com/wp/2013/10/27/windows-shell32-ani...
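
      A minimal sketch of that per-frame timing with Pillow's APNG writer (frame filenames and durations are made up):

          from PIL import Image

          frames = [Image.open("frame0.png"), Image.open("frame1.png"), Image.open("frame2.png")]
          frames[0].save(
              "anim.png",
              save_all=True,
              append_images=frames[1:],
              duration=[100, 750, 40],   # per-frame display time in ms, no fixed frame rate
              loop=0,                    # 0 = loop forever
          )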

      • fc417fc802 a day ago

        All valid points, however AV1 also supports lossless compression and is almost certainly going to win the file size competition against APNG every time.

        https://trac.ffmpeg.org/wiki/Encode/AV1#Losslessencoding

        • meindnoch a day ago

          False, or misleading.

          The AV1 spec [1] does not allow RGB color spaces, therefore AV1 cannot preserve RGB animations in a bit-identical fashion.

          [1] https://aomediacodec.github.io/av1-spec/av1-spec.pdf

          • pornel a day ago

            AV1 supports YCoCg, which encodes RGB losslessly.

            It is a bit-reversible rotation of the RGB cube. It makes the channels look more like luma and chroma that the codec expects.
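
            A quick sketch of that reversible transform (the YCoCg-R lifting form, shown only to illustrate that the integer round trip is exact, not as AV1's actual internals):

                def rgb_to_ycocg_r(r, g, b):
                    co = r - b
                    t = b + (co >> 1)
                    cg = g - t
                    y = t + (cg >> 1)
                    return y, co, cg

                def ycocg_r_to_rgb(y, co, cg):
                    t = y - (cg >> 1)
                    g = cg + t
                    b = t - (co >> 1)
                    r = b + co
                    return r, g, b

                # Exact round trip for sampled 8-bit RGB triples
                assert all((r, g, b) == ycocg_r_to_rgb(*rgb_to_ycocg_r(r, g, b))
                           for r in range(256) for g in (0, 127, 255) for b in (0, 128, 255))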

        • account42 5 hours ago

          > is almost certainly going to win the file size competition against APNG every time

          For video content maybe. Pixel-art gifs are not something video codecs do well at without introducing lots of artifacts.

    • account42 5 hours ago

      Soundless videos cannot be used in environments that expect an image like embeds in forums and similar.

      It's a shame that browser vendors didn't add silent looping video support to the img tag over (imo) baseless concerns.

    • armada651 a day ago

      > Can animated PNG beat av1 or whatever?

      Animated PNGs can't beat GIF, never mind video compression algorithms.

      • Aissen a day ago

        > Animated PNGs can't beat GIF nevermind video compression algorithms.

        Not entirely true; it depends on what's being displayed. See a few simple tests specifically constructed to show how much better APNG can be vs GIF and {,lossy} webp: http://littlesvr.ca/apng/gif_apng_webp.html

        Of course I don't think it generalizes all that well…

        • armada651 20 hours ago

          You're correct and I was considering adding a footnote that if you use indexed colors like a GIF then PNG can beat GIF due to better compression algorithms. But when most people think of APNG they think of lossless compression rather than lossy compression.

          • account42 5 hours ago

            Indexed can be lossless when the source already uses few colors, e.g. because you want to improve the compression of an existing GIF or limit colors for stylistic choice (common in pixel art).

        • bmacho a day ago

          I tried these examples on ezgif, and indeed apng manages to be smaller than webp every single time. Weird, I was under the impression that webp was almost always smaller? Is this because GIF images are already special, or apng uses better compression than png?

          edit: using the same ezgif webp and apng on an H.264 source, apng is suddenly 10x the size of webp. It seems apng is only better if the source is gif

          • fc417fc802 11 hours ago

            I would guess that apng only wins when indexed colors can be used. That guess would match what you saw using an h264 file for the source.

          • account42 5 hours ago

            Almost like video codecs and animated images are different niches that optimize for different content.

          • Aissen a day ago

            I have no idea! I actually hoped someone would show a much more comprehensive and serious benchmark in response, but that has failed to materialize.

      • jeroenhd a day ago

        Once you add more than 256 different colours in total, GIF explodes in terms of file size. It's great for small, compact images with limited colour information, but it can't compete with APNG when the image becomes more detailed than what you'd find on Geocities.

        • pornel a day ago

          No, APNG explodes in size in that case.

          In APNG it's either the same 256 colors for the whole animation, or you have to use 24-bit color. That makes the pixel data 3 times larger, which makes zlib's compression window effectively 3 times smaller, hurting compression.

          OTOH GIF can add 256 new colors with each frame, so it can exceed 256 colors without the cost of switching all the way to 16.7 million colors.

    • bawolff a day ago

      It's also because people like to "pause" animations, and that is not really an option with APNG & GIF.

      • bigfishrunning a day ago

        why not? that's up to the program displaying the animation, not the animation itself -- i'm sure a pausable gif or apng display program is possible

        • pornel a day ago

          It's absolutely possible. Browsers even routinely pause playback when images aren't visible on screen.

          They just don't have a proper UI and JS APIs exposed, and there's nothing stopping them from adding that.

          IMO browsers are just stuck with tech debt, and maintaining a no-longer-relevant distinction between "animations" and "videos". Every supported codec should work wherever GIF/APNG work and vice versa.

          It's not even a performance or complexity issue, e.g. browsers support AVIF "animations" as images, even though they're literally fully-featured AV1 videos, only wrapped in a "pretend I'm an image" metadata.

          • joquarky 21 hours ago

            I wish browsers still paused all animations when the user hits the Esc key. It's hard to read when there are distracting animations all over most pages.

          • nextaccountic 21 hours ago

            > They just don't have a proper UI and JS APIs exposed, and there's nothing stopping them from adding that.

            Browsers should just allow animated gifs and apngs in <video>

            • account42 5 hours ago

              More important would be to allow (silent) videos in <img>.

        • account42 5 hours ago

          Browsers used to support pausing GIFs by pressing the escape key.

    • josephg a day ago

      I doubt it, given PNG is a lossless compression format. For video that's almost never what you want.

      • DidYaWipe a day ago

        For animations with lots of regions of solid color it could do very well.

        • josephg 8 hours ago

          So do most other video formats. I'm not really seeing any advantages, and I see a lot of disadvantages vs h264 and friends.

          • account42 5 hours ago

            Not without lots of artifacts.

    • fc417fc802 a day ago

      > many gifs these days are actually served as soundless videos

      That's not really true. Some websites lie to you by putting .gif in the address bar but then serving a file of a different type. File extensions are merely a convention and an address isn't a file name to begin with so the browser doesn't care about this attempt at end user deception one way or the other.

      • faceplanted a day ago

        You said that's not really true and then described exactly how it's true, what did you mean?

        • fc417fc802 11 hours ago

          I parsed the comment as something along the lines of clever hackers somehow stuffing soundless videos into gif containers which is most certainly not what is going on. I was attempting to convey that they have nothing to do with gifs. Gifs are not involved anywhere in the process.

          I'm not sure why people disagree so strongly with what I wrote. Worst case scenario is that it's a slightly tangential but closely related rant about deceptive web design practices. Best case scenario is that someone who thought some sort of fancy trick involving gifs was in use learns something new.

  • account42 5 hours ago

    > So animated GIFs can be replaced by Animated PNGs with alpha blending with transparent backgrounds and lossless compression!

    Not progressively though, unless browsers add a new MIME type for it, which they did not bother to do with animated WebP.

  • chithanh a day ago

    When it comes to converting small video snippets to animated graphics, I think WebP was much better than APNG from the beginning. Only if you use GIF as an intermediate format was APNG competitive.

    Nowadays, AVIF serves that purpose best I think.

    • account42 5 hours ago

      webm or any other non-gimped video codec would be a much better format for that use case. Unfortunately browsers don't allow those in image contexts so we are stuck with an inferior "state of the art" literally-webm-with-deliberately-worse-compression webp standard.

      AVIF is only starting to become widespread, so it can't be used without a fallback if you care about your users. Not sure how it compares to AV1 quality/compression-wise, but hopefully it's not as gimped as webp, and there will be encoders that aren't as crap as the libwebp that almost everyone uses.

  • qingcharles a day ago

    Almost nowhere that supports uploading GIFs supports APNG or animated WEBP. The back end support is so low it's close to zero. Which is really frustrating.

    • extraduder_ire 21 hours ago

      Do you mean services that reencode gif files to webm/mp4? apng just works everywhere that png works, and will remain animated as long as it's not re-encoded.

      You can even have one frame that gets shown if and only if animation is not supported.

      • qingcharles 7 hours ago

        Yes, most places only show the first frame. They ignore the animation, sadly. Even while accepting GIFs.

  • theqwxas a day ago

    Some years ago I used the Lottie (Bodymovin?) library. It worked great and had a nice integration: you compose your animation in Adobe After Effects, export it to an SVG plus some JSON, and the Lottie JS script would handle the animation for you. Anything else with (vector, web) animations I've tried is missing the tools or the DX for me to adopt. Curious to hear if there are more things like this.

    I'm not sure about the tools and DX around animated PNGs. Is that a thing?

  • bmacho a day ago

    > Curious if Animated SVGs are also a thing.

    SVG is just HTML5: it has full support for CSS, JavaScript with buttons, web workers, arbitrary fetch requests, and so on (obviously not supported by image viewers or allowed by browsers).

    • bawolff a day ago

      Browsers support all that sort of thing, as long as you use an iframe. (Technically there are some subtle differences between that and html5, but you are right, it's mostly the same.)

      If you use an <img> tag, svgs are loaded in "restricted" mode. This disables scripting and external resources. However animation via either SMIL or CSS is still supported.
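
      For example, a throwaway script that writes a minimal SMIL-animated SVG; loaded through an <img> tag it keeps animating even though scripting is off (the file name and sizes are arbitrary):

        # Sketch: SMIL animation survives SVG's restricted <img> mode,
        # where scripting and external resources are disabled.
        svg = """<svg xmlns="http://www.w3.org/2000/svg" width="120" height="120">
          <circle cx="20" cy="60" r="15" fill="teal">
            <animate attributeName="cx" values="20;100;20" dur="2s"
                     repeatCount="indefinite"/>
          </circle>
        </svg>"""

        with open("bounce.svg", "w") as f:
            f.write(svg)

        # Then embed it as an image: <img src="bounce.svg" alt="bouncing circle">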

      • account42 4 hours ago

        And non-browser image renderers support almost none of those advanced totally-still-SVG features (and I don't blame them), while they often do support animated GIFs.

  • jonhohle a day ago

    It seems crazy to think about, but I interviewed with a power company in 2003 that was building a web app with animated SVGs.

  • jokoon a day ago

    Both GIF and PNG use zipping for compressing data, so APNGs are not much better than GIFs

    • bawolff a day ago

      PNG uses deflate (same as zip) but GIF uses LZW. These are different algorithms. You should expect different compression results, I would assume.

      • account42 4 hours ago

        ZIP is theoretically a generic container and theoretically supports a number of different compression formats. Stored (no compression) and deflate are the only ones you can count on being supported everywhere though so in practice you're not wrong.

    • Calzifer a day ago

      (A)PNG supports semi-transparency. In GIF a pixel is either fully transparent or fully opaque.

      Also, while true-color GIFs seem to be possible, they are usually limited to 256 colors per image.

      For those reasons alone APNG is much better than GIF.

      • account42 4 hours ago

        > Also while true color gifs seem to be possible it is usually limited to 256 colors per image.

        No, it's limited to 256 colors per frame, and frames can have a duration of 0, which allows you to combine multiple frames into images with more than 256 colors.

    • 0points a day ago

      Remember when we unwillingly trained the generative AIs of our time with an endless torrent of factoids?

qwertfisch a day ago

Seems a bit too late? Also, JPEG XL supports all these features and already uses advanced compression (finite-state entropy, like Zstandard). It offers lossy and lossless compression, animated pictures, HDR, EXIF, etc.

There is just no need for a PNG update; just adopt JPEG XL.

  • bmn__ a day ago

    > just

    https://caniuse.com/jpegxl

    No one can afford to "just". Five years later and it's only one browser! Crazy.

    Browser vendors must deliver; only then is it okay to admonish an end user or web developer to adopt the format.

    • Dylan16807 18 hours ago

      Adopt it anyway. Add a decoder. Don't let google bully you out of such a good format.

  • Dwedit 7 hours ago

    If JPEG-XL decompressed faster, I'd use it more. For now, I'm sticking with WEBP for lossless, and AVIF for lossy. AVIF's CDEF filter (directional deringing) works wonders, and it's too bad that JPEG-XL lacks such a filter.

    JPEG-XL's lossy modular mode is a very unique feature which needs a lot more exposure than it has. It is well-suited to non-photographic drawings or images that aren't continuous, and have never touched any JPEG-like codecs. It has different kinds of artifacts than what you typically see in a DCT image codec. Rather than ringing, you get slight pixellation.

  • Aachen a day ago

    > advanced compression (finite-state entropy, like ZStandard)

    I've not tried it on images, but wouldn't zstandard be exceedingly bad at gradients? It completely fails to compress numbers that change at a fixed rate

    Bzip2 handles that fine; not sure why: https://chaos.social/@luc/114531687791022934 The two variables (inner and outer loop) could be two color channels that change at different rates. Real-world data will never be a clean i++ like it is here, but more noise surely isn't going to help the algorithm compared to this clean example.

    • wongarsu 20 hours ago

      PNG's basic idea is to store the difference between the current pixel and the pixel above it, to its left, or to the top-left (the filter is chosen once per row), then apply standard deflate compression to that. The first step basically turns gradients into repeating patterns of small numbers, which compress great. You can get decent improvements by just switching deflate for zstd.
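
      To make that concrete, a rough standard-library sketch (synthetic two-channel data rather than a real PNG, so take the exact numbers with a grain of salt):

        # Sketch: two "channels" ticking at different rates (an outer and an
        # inner counter) barely shrink as raw bytes, but after a PNG-style
        # Sub filter (subtract the previous pixel, 2 bytes back) each row
        # becomes a long run of tiny constants that deflate handles easily.
        import zlib

        WIDTH, HEIGHT, BPP = 256, 256, 2
        rows = [b"".join(bytes((y, x)) for x in range(WIDTH)) for y in range(HEIGHT)]
        raw = b"".join(rows)

        def sub_filter(row, bpp):
            return bytes((row[i] - (row[i - bpp] if i >= bpp else 0)) & 0xFF
                         for i in range(len(row)))

        filtered = b"".join(sub_filter(r, BPP) for r in rows)

        print(len(raw), "->", len(zlib.compress(raw, 9)))       # barely shrinks
        print(len(raw), "->", len(zlib.compress(filtered, 9)))  # tiny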

    • adgjlsfhk1 a day ago

      the FSE layer isn't responsible for finding these sorts of patterns in an image codec. The domain modeling turns that sort of pattern into repeated data and then the FSE goes to town on the output.

    • Retr0id a day ago

      zlib/deflate already has the same issue. It is mitigated somewhat by PNG row filters.

  • mikae1 a day ago

    > There is just no need for a PNG update, just adopt JPEG XL.

    Tell that to Google. They gave up on XL in Chrome[1] and essentially killed its adoption.

    [1] https://issues.chromium.org/issues/40168998#comment85

    • rhet0rica 17 hours ago

      From reading that, "gave up" seems to mean "deliberately killed it so their own WebP2 wouldn't have competition." Behold the monopoly at the apex of its power.

      • account42 4 hours ago

        The really weird part is that both webp and jxl development were largely funded by Google, so it's not Google killing a competitor's format in favor of its own, but someone in one part of Google killing a format someone elsewhere in Google developed, in favor of their pet favorite.

  • illiac786 a day ago

    I really don’t get it. Why, but why? It’s already confusing as hell, why create yet another standard (variant) with no unique selling point?

    • pmarreck a day ago

      JPEG XL is not a "variant", it is a completely new algorithm that is also fully backwards-compatible with every single JPEG already out there, of which there are probably billions at this point.

      It also has pretty much every feature desired in an image standard. It is future-proofed.

      You can losslessly re-compress a JPEG into a JPEG-XL file and gain space.

      It is a worthy successor to (while also being vastly superior to) JPEG.

      • BobaFloutist a day ago

        Is there any risk that if I open a JPEG-XL in something that knows what a JPEG is but not what a JPEG-XL is and then save it, it'll get lossy compressed? Backwards compatibility is awesome, but I know that if I save/upload/share/copy a PNG, it shouldn't change without explicit edits, right?

        • illiac786 a day ago

          Software that does not know what JPEG XL is will not be able to open jxl files. How would it?

          Not sure what the previous poster meant by “backward compatible” here. jxl is a different format. It can include every piece of information a jpeg includes, which then maybe qualifies as “backward compatible”, but it still is a different format.

          • liuliu a day ago

            JPEG XL has a mode that, in layman's terms, allows a bit-by-bit round trip with JPEG.

            Original JPEG -> JPEG XL -> Recreated JPEG.

            Sha256(Original JPEG) == Sha256(Recreated JPEG).

            That's what people meant by "backward compatible".
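
            A sketch of checking that yourself, assuming the libjxl reference tools (cjxl/djxl) are installed and keep lossless JPEG transcoding as their default for JPEG input:

              # Sketch: verify the bit-exact JPEG round trip through JPEG XL.
              # Assumes cjxl/djxl are on PATH with their default behaviour of
              # lossless JPEG transcoding and reconstruction.
              import hashlib
              import subprocess

              def sha256(path):
                  with open(path, "rb") as f:
                      return hashlib.sha256(f.read()).hexdigest()

              subprocess.run(["cjxl", "original.jpg", "roundtrip.jxl"], check=True)
              subprocess.run(["djxl", "roundtrip.jxl", "recreated.jpg"], check=True)

              print(sha256("original.jpg") == sha256("recreated.jpg"))  # expect True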

            • colejohnson66 21 hours ago

              That’s not “backwards compatible”, but “round-trippable” or “lossless re-encode”.

          • BobaFloutist 21 hours ago

            Ah, got it. I assumed it was a losslessly compressed JPEG with metadata telling modern software not to compress it differently, but that older software would open as a normal JPEG. I guess they meant something else by "backward compatible".

      • dylan604 a day ago

        > You can losslessly re-compress a JPEG into a JPEG-XL file and gain space.

        Is that gained space enough to account for the fact you now have 2 files? Sure, you can delete the original jpg on the local system, but are you going to purge your entire set of backups?

        • illiac786 a day ago

          If you do not want to delete the original JPEGs, there is no point in converting them to JPEG XL, I would say.

          Unless serving jxl and saving bandwidth, while increasing your total storage, is worth it to you.

        • account42 4 hours ago

          Yes the whole point of lossless re-compression is that you do not need to keep the original JPEGs. Of course you don't need to "purge" backups, just let them rotate out normally.

          Also backup storage is usually cheaper than something that needs to have fast access speeds.

      • illiac786 a day ago

        I was referring to the new PNG, not to JPEG XL.

        • sdenton4 a day ago

          Looking at TFA, it's placing in the spec a few things that are already widely stacked onto the format (such as animation). This is a very sensible update, and backwards compatible with existing PNG.

          • illiac786 a day ago

            Not sure expanding PNG capabilities is sensible, looking at the overall landscape of image formats.

            • dveditz_ 19 hours ago

              The capabilities are already expanded in most common implementations. This update is largely blessing those features as officially "standard".

cptcobalt a day ago

It seems like this new PNG spec just cements what exists already, great! The best codecs are the ones that work on everything. PNG and JPEG work everywhere, reliably.

Try opening a HEIC or AV1 or something on a machine that doesn't natively support it down to the OS-level, and you're in for a bad time. This stuff needs to work everywhere—in every app, in the OS shell for quick-looking at files, in APIs, on Linux, etc. If a codec does not function at that level, it is not functional for wider use and should not be a default for any platform.

  • ecshafer a day ago

    I work with a LOT of images in a lot of image formats, many including extremely niche formats used in specific fields. There is a massive challenge in really supporting all of these, especially when you get down to the fact that some specs are a little looser than others. Even libraries can be very rough: sure, it says on the tin that it supports JPG and TIF and HEIC... but does it support a 30GB JPEG? Does it support all possible metadata in the file?

  • lazide a day ago

    This new spec will make PNG even worse than HEIC or AV1 - you won’t know what codec is actually inside the PNG until you open it.

    • hulitu a day ago

      > you won’t know what codec is actually inside the PNG until you open it.

      But this is a feature. Think about all those exploits made possible by this feature. Sincerely, the CIA, the MI-6, the FSB, the Mossad, etc.

bartwe 4 hours ago

I'm worried that by supporting too many encodings and color spaces this will hamper adoption and lead to unexpectedly unsupported files. Perhaps this is more of an encoder/decoder library issue; hopefully libraries will give us rec2020 rgb32/rgb10a2 encode/decode APIs so we can simply use them without having to know so many details.

ggm a day ago

Somebody needs to manage approximate human times/dates in a way other people in software will align with.

"photo scanned in 2025, is about something in easter, before 1940 and after 1920"

  • luguenth a day ago

    In EXIF, you have DateTimeDigitized [0]

    For ambiguous dates there is the EDTF Spec[1] which would be nice to see more widely adopted.

    [0] https://www.media.mit.edu/pia/Research/deepview/exif.html

    [1] https://www.loc.gov/standards/datetime/

    • ggm a day ago

      I remember reading about this in a web forum mainly for dublin core fanatics. Metadata is fascinating.

      Different software reacts in different ways to partial specifications of yyyy/mm/dd, such that you can try some of the cute tricks, but probably only one software package honours them.

      And the majors ignore almost all fields other than a core set of one or two, disagree about their semantics, and also do weird stuff with file names and atime/mtime.

  • SchemaLoad a day ago

    The issue that gets me is that Google Photos and Apple Photos will let you manually pick a date, but they won't actually set it in the photo EXIF. So when you move platforms, all of the images that came from scans or were sent without EXIF lose their dates.

    • ggm a day ago

      It's in sidecar files. Takeout gets them, some tools read them.

      • kccqzy a day ago

        But there is no standardization of sidecar files, no? Whereas EXIF is pretty standard.

        • jeroenhd a day ago

          EXIF inside of PNGs is new. You can make it work by embedding structured chunks into the file, but it's not official in any way (well, not until the new spec, at least). Sidecar files have some kind of interoperable format that at least doesn't break buggy PNG parsers when you open the image file. The sidecar files themselves differ in format, but at least they're usually formatted according to their extension.

          The usual sidecar files, XMP files, are standardised (in that they follow a certain extensible XML structure) and can (and often do) include EXIF file information.

          • SchemaLoad 15 hours ago

            Pretty much all the photos in Apple/Google Photos are going to be JPEG and HEIF, which do support EXIF. But both services basically will not touch what came out of the camera at all. If you add a description or date, it gets stored externally to the image, so when you export your data those changes are lost, or they get dumped in a JSON file requiring you to use some custom script to handle it.
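
            For what it's worth, a rough sketch of such a script (the Takeout field names and the piexif calls are assumptions, so verify them against your own export):

              # Sketch: copy the date from a Google Takeout sidecar JSON back
              # into the JPEG's EXIF. Assumes the sidecar exposes
              # photoTakenTime.timestamp and that piexif is installed.
              import json
              from datetime import datetime, timezone

              import piexif

              def merge_date(jpeg_path, sidecar_path):
                  with open(sidecar_path) as f:
                      meta = json.load(f)
                  ts = int(meta["photoTakenTime"]["timestamp"])
                  stamp = datetime.fromtimestamp(ts, tz=timezone.utc)
                  text = stamp.strftime("%Y:%m:%d %H:%M:%S").encode("ascii")

                  exif = piexif.load(jpeg_path)
                  exif["Exif"][piexif.ExifIFD.DateTimeOriginal] = text
                  piexif.insert(piexif.dump(exif), jpeg_path)

              merge_date("IMG_0001.jpg", "IMG_0001.jpg.json")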

            • account42 4 hours ago

              Not touching the image for metadata changes is a good thing as that makes backups more efficient/simpler. Embedded metadata is also a security issue as users may share more information than they realize which is why it is common to strip it automatically in many places.

    • mbirth a day ago

      IIRC osxphotos has an option to merge external metadata into the exported file.

LegionMammal978 a day ago

Reading the linked blog post on the new cICP chunk type [0], it looks like the "proper HDR support" isn't something that you couldn't already do with an embedded ICC profile, but instead a much-abbreviated form of the colorspace information suitable for small image files.

[0] https://svgees.us/blog/cICP.html
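
For the curious, here is roughly how small that information is. A sketch that pulls the four one-byte cICP fields out of a PNG (field meanings are the H.273 code points the chunk reuses; treat the layout details as an assumption until you've checked the spec):

    # Sketch: read the cICP chunk of a PNG: four one-byte fields for
    # colour primaries, transfer function, matrix coefficients, full-range flag.
    import struct

    def read_cicp(path):
        with open(path, "rb") as f:
            data = f.read()
        pos = 8                                  # skip the 8-byte PNG signature
        while pos < len(data):
            length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
            if ctype == b"cICP":
                primaries, transfer, matrix, full_range = data[pos + 8:pos + 12]
                return {"primaries": primaries, "transfer": transfer,
                        "matrix": matrix, "full_range": full_range}
            pos += 12 + length                   # length + type + data + CRC
        return None

    # A BT.2100 PQ image would show primaries=9, transfer=16, matrix=0, full_range=1.
    print(read_cicp("hdr.png"))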

  • cormorant 21 hours ago

    "common but not representable RGB spaces like Adobe 1998 RGB or ProPhoto RGB cannot use CICP and have to be identified with ICC profiles instead."

    cICP is 16 bytes for identifying one out of a "list of known spaces" but they chose not to include a couple of the most common ones. Off to a great start...

    I wonder if it's some kind of legal issue with Adobe. That would also explain why EXIF / DCF refer to Adobe RGB only by the euphemism "optional color space" or "option file". [1]

    [1] https://en.wikipedia.org/wiki/Design_rule_for_Camera_File_sy...

  • account42 4 hours ago

    Unfortunately that seems to mean that the backwards compatibility here is a washed-out preview instead of limited-to-sRGB.

  • ProgramMax 19 hours ago

    PNG previously supported ICC v2. That was updated to ICC v4. However, neither of these are capable of HDR.

    Maybe iccMAX supports HDR. I'm not sure. In either case, that isn't what PNG supported.

    So something new was required for HDR.

    • LegionMammal978 17 hours ago

      > However, neither of these are capable of HDR.

      How so? As far as I can tell, the ICCv2 spec is very agnostic as to the gamut and dynamic range of the output medium. It doesn't say anything to the extent of "thou shalt not produce any colors outside the sRGB gamut, nor make the white point too bright".

      Unless HDR support is supposed to be something other than just the primaries, white point, and transfer function. All the breathless blogspam about HDR doesn't make it very clear what it means in terms of colorspaces.

      • ProgramMax 14 hours ago

        IIRC (it's been a while), the reason was that ICCv2/v4 still require a gamma function, and PQ is not a gamma function. Maybe they can cover HLG, but if we want to represent any given HDR content, we need something more than ICCv2/v4.
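
        For reference, the PQ encoding (SMPTE ST 2084) looks roughly like this, which is clearly not a power-law gamma:

          # Sketch: the PQ (SMPTE ST 2084) encoding curve. Unlike a gamma
          # curve (signal = Y ** (1/2.2)), it is a rational function of
          # Y ** m1 defined over absolute luminance up to 10,000 cd/m^2.
          m1 = 2610 / 16384
          m2 = 2523 / 4096 * 128
          c1 = 3424 / 4096
          c2 = 2413 / 4096 * 32
          c3 = 2392 / 4096 * 32

          def pq_encode(nits):
              """Map absolute luminance (cd/m^2) to a PQ signal in [0, 1]."""
              y = max(nits, 0.0) / 10000.0
              return ((c1 + c2 * y ** m1) / (1 + c3 * y ** m1)) ** m2

          for nits in (0.1, 100, 1000, 10000):
              print(nits, round(pq_encode(nits), 4))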

        • LegionMammal978 11 hours ago

          That doesn't sound quite right to me. ICCv2's 'curveType' gives the option of a full lookup table instead of a simple gamma function. Maybe it has to do with ICCv2 saying that the reference viewing condition has an illumination level of 500 lx for the perceptual intent? (But how does that apply to non-reflective media?)

          I don't doubt that there's lots of problems in the chain from RGB samples to display output, but I'm finding this whole thing horribly confusing. Wikipedia tries to distinguish 'HDR' transfer functions like PQ [0] from 'SDR' transfer functions in terms of their absolute luminance, but the ICC specs are just filled with relative values all the way down.

          (Not to mention how much these things get fiddled with in practice. Once, I had the idea of writing a JPEG decoder, so I looked into how exactly to convert between sRGB and Rec. 601 YCbCr coordinates. I thought, "I know, I'll just use the standard-defined XYZ conversions to bridge between them!" But psych, the ICC sRGB profile has its own black point scaling that the standards don't tell you about. I'm still not sure what the correct answer is for "these sRGB coordinates represent the exact same color as these Rec. 601 YCbCr coordinates".)

          [0] https://en.wikipedia.org/wiki/Perceptual_quantizer

          • ProgramMax 9 hours ago

            Agreed that it gets confusing. That's a piece of why I'm unable to give you a solid answer. This isn't my area of expertise.

            Here is what I can tell you confidently: The original plan was to provide an ICC profile that approximates PQ as best as we could. But it wasn't enough. So the proposal was to force the profile name to be a special string. When a PNG decoder saw that name, it would ignore the ICC profile and do actual PQ.

            Here is that original proposal: https://w3c.github.io/png-hdr-pq/

            Possibly more context (I just found this) from Apple. I'm not sure of date: https://www.color.org/hdr/02-Luke_Wallis.pdf Slide 29: "HDR parametric transfer functions not in ICC spec Parametric 3D tone mapping functions not in ICC spec - Neither can be approximated by 1-D or 3-D LUTs"

            I'm not sure why they cannot be approximated by LUT. Maybe because of the inversion problem?

rynop 21 hours ago

This is a false claim in the PR:

> Many of the programs you use already support the new PNG spec: ... Photoshop, ...

Photoshop does NOT support APNGs. The PR calls out APNG recognition as the 2nd bullet point of "What's new?"

Am I missing something? Seems like a pretty big mistake. I was excited that an art tool with some marketshare finally supported it.

  • ProgramMax 21 hours ago

    Photoshop supports the HDR part. But you are right, it does not support the APNG part.

LeoPanthera a day ago

> I know you all immediately wondered, better compression?. We're already working on that.

This worries me. Because presumably, changing the compression algorithm will break backwards compatibility, which means we'll start to see "png" files that aren't actually png files.

It'll be like USB-C but for images.

  • lifthrasiir a day ago

    Better compression can also mean a new set of filter methods or a new interlacing algorithm. But yeah, any of them would cause an instant incompatibility. As noted in the relevant issue [1], we will need a new media type at the very least.

    [1] https://github.com/w3c/png/issues/39#issuecomment-2674690324

    • Arnt a day ago

      We would need a new media type. But the actual new features don't need one, because they don't break compatibility.

      https://svgees.us/blog/img/revoy-cICP-bt.2020.png uses the new colour space. If your software and monitor can handle it, you see better colour than I do; otherwise, you see what I see.

    • snvzz a day ago

      I am hopeful that whatever better compression arrives doesn't end up multiplying memory requirements or increasing the burden on the CPU, especially on decompression.

      Now, PNG datatype for AmigaOS will need upgrading.

      • Arnt a day ago

        I don't see why? If your video output is plain old RGB (like the Amiga hardware), then an unmodified decoder will handle new files without a problem. You only need a new decoder if your video output can handle more vivid colours than RGB can express.

        • Findecanor a day ago

          An image decoded in the wrong colour space for the output will look wrong. It is not using extra bits to express the increased dynamic range: the existing numeric range is stretched and warped.

          • Arnt 21 hours ago

            Yes. But how bad? AIUI the way it's done is more or less the best that can be done with old video hardware, like mine and like the Amiga.

            It could be horrible in principle, but actually isn't.

  • Lerc a day ago

    It has fields to say what compression is used. Existing software should handle the addition of another compression form by recognizing the file as a valid PNG that it can't decompress.

    The PNG format is specifically designed to allow software to read the parts they can understand and to leave the parts they cannot. Having an extensible format and electing never to extend it seems pointless.

    • koito17 a day ago

      > Having an extensible format and electing never to extend it seems pointless.

      This proves OP's analogy regarding USB-C. Having PNG as some generic container for lossless bitmap compression means fragmentation in libraries, hardware support, etc. The reason being that if the container starts to support too many formats, implementations will start restricting themselves to only the subsets the implementers care about.

      For instance, almost nobody fully implements MPEG-4 Part 3; the standard includes dozens of distinct codecs. Most software only targets a few profiles of AAC (specifically, the LC and HE profiles), and MPEG-1 Layer 3 audio. Next to no software bothers with e.g. ALS, TwinVQ, or anything else in the specification. Even libavcodec, if I recall correctly, does not implement encoders for MPEG-4 Part 3 formats like TwinVQ. GP's fear is exactly this -- that PNG ends up as a standard too large to fully implement and people have to manually check which subsets are implemented (or used at all).

      • cm2187 a day ago

        But where the analogy with USB-C is very good is that just like USB-C, there is no way for a user to tell from the look of the port or the file extension what the capabilities are. Which even for a fairly tech savvy user like me is frustrating. I have a bunch of cables, some purchased years ago, how do I know what is fit for what?

        And now think of the younger generation that has grown up with smartphones and have been trained to not even know what a file is. I remember this story about senior high school students failing their school tests during covid because the school software didn't support heif files and they were changing the file extension to jpg to attempt to convert them.

        I have no trust that the software ecosystem will adapt. For instance, the standard libraries of the .NET Framework are fossilised in the world of multimedia as of 2008-ish. I don't believe HEIF is even supported to this day. So that's a whole bunch of code which, unless the developers create workarounds, will never support a newer PNG format.

        • skissane a day ago

          > there is no way for a user to tell from the look of the port or the file extension what the capabilities are

          But that's typical for file extensions. Consider EXE – it is probably an executable, but an executable for what? Most commonly Windows – but which Windows version will this EXE run on? Maybe this EXE only works on Windows 11, and you are still running Windows 10. Or maybe you are running x86-64 Windows, but this EXE is actually for ARM or MIPS or Alpha. Or maybe it is for some other platform which uses that extension for executable files – such as DOS, OS/2, 16-bit Windows, Windows CE, OpenVMS, TOPS-10, TOPS-20, RSX-11...

          .html, .js, .css – suggest to use a web browser, but don't tell you whether they'll work with any particular one. Maybe they use the latest features but you use an old web browser which doesn't support them. Maybe they require deprecated proprietary extensions and so only work on some really old browser. Maybe this HTML page only works on Internet Explorer. Maybe instead of UTF-8 it is in some obscure legacy character set which your browser doesn't support.

          .zip – supports extensible compression and encryption methods, your unzip utility might not support the methods used to compress/encrypt this particular zip file. This is actually normal for very old ZIP files (from the 1980s) – early versions of PKZIP used various deprecated compression mechanisms, which few contemporary unzip utilities support. The format was extended to 64-bit without changing the extension, there's still a lot of 32-bit only implementations out there. ZIP also supports platform-specific file attributes–e.g. PKZIP for z/OS creates ZIP files which contain metadata about mainframe data storage formats, unzip on another platform is going to have no idea what it means, but the metadata is actually essential to interpreting the data correctly (e.g. if RECFM=V you need to parse the RDWs, if RECFM=F there won't be any)

          .xml - okay, it is XML – but that tells you nothing about the actual schema. Maybe you were expecting this xml file to contain historical stock prices, but instead it is DocBook XML containing product documentation, and your market data viewer app chokes on it. Or maybe it really is historical stock prices, but you are using an old version of the app which doesn't support the new schema, so you can't view it. Or maybe someone generated it on a mainframe, but due to a misconfiguration the file came out in EBCDIC instead of ASCII, and your app doesn't know how to read EBCDIC, yet the mainframe version of the same app reads it fine...

          .doc - people assume it is legacy (pre-XML) Microsoft Word: every version of which changed the file format, old versions can't read files created with newer versions correctly or at all, conversely recent versions have dropped support for files created in older versions, e.g. current Office versions can't read DOC files created with Word for DOS any more... but back in the 1980s a lot of people used that extension for plain text files which contained documentation. And it was also used by incompatible proprietary word processors (e.g. IBM DisplayWrite) and also desktop publishing packages (e.g. FrameMaker, Interleaf)

          .xmi – I've seen this extension used for both XML Model Interchange (XML-based standard for exchanging UML diagrams) and XMIT (IBM mainframe file archive format). Because extensions aren't guaranteed to be unique, many incompatible file formats share the same extension

          .com - is it an MS-DOS program, or is it DCL (Digital Command Language)?

          .pic - probably some obscure image format, but there are dozens of possibilities

          .img – could be either a disk image or a visual image, either way dozens of incompatible formats which use that extension

          .db – nowadays most likely SQLite, but a number of completely incompatible database engines have also used this extension. And even if it is SQLite, maybe your version of SQLite is too old to read this file because it uses some features only found in newer versions. And even if SQLite can read it, maybe it has the wrong schema for your app, or maybe a newer version of the same schema which your old version that app doesn't support, or an old version of the schema which the current version of the app has dropped support for...

          • Calzifer a day ago

            Just last week I again had some PDFs that Okular could not open because of some less common form features.

          • cmiller1 a day ago

            > Consider EXE – it is probably an executable, but an executable for what? Most commonly Windows

            Has anyone ever used .exe for anything other than Windows?

            • skissane a day ago

              Prior to Windows 95, the vast majority of PC games were MS-DOS exe files – so anyone who played any of those games (whether back in their heyday, or more recently through DOSBox) has run an MS-DOS exe. Most people who ever used Lotus 1-2-3 or WordPerfect were running an MS-DOS exe. Both products were eventually ported to Windows, but were far less popular under Windows than under DOS.

              Under Windows 95/98/Me, most command line tools were MS-DOS executables. Their support for 32-bit Windows console apps was very poor, to the extent that the input and output of such apps was proxied through a 16-bit MS-DOS executable, conagent.exe

              First time in my life I ever used GNU Emacs, it was an OS/2 exe. That's also true for bash, ls, cat, gcc, man, less, etc... EMX was my gateway drug to Slackware

            • cesarb 17 hours ago

              > Has anyone ever used .exe for anything other than Windows?

              Did you know that Microsoft Windows originally ran on top of the much older MS-DOS, which used EXE files as one of its two executable formats? Most Windows users had lots and lots of EXE files which were not Windows executables, but instead DOS executables. And then came Windows 95, which introduced 32-bit Windows executables, but kept the same file extension as 16-bit Windows executables and 16-bit DOS executables.

            • asgerhb a day ago

              Way back when, my prof was using his Linux machine to demonstrate how to use GCC. He called the end result .exe but that might have been for the benefit of the Windows users in the room. (Though Linux users being considerate to Windows users, or vice versa, is admittedly a rarity)

      • bayindirh a day ago

        JPEG is no different. Only the decoder is specified. As long as the decoder turns what you give it into the image you wanted to see, you can implement anything. This is how imgoptim/squash/aerate/dietJPG work: by (ab)using this flexibility.

        The same is also true for the most advanced codecs. The MPEG-* family and MP3 come to mind.

        Nothing stops PNG from defining a "set of decoders" and letting implementers loose on that spec to develop encoders which generate valid files. Then developers can go to town with their creativity.

        • cm2187 a day ago

          Video files aren't a good analogy. Before God placed VLC and ffmpeg on earth, you had to install a galaxy of codecs on your computer to get a chance to read a video file and you could never tell exactly what codec was stored in a container, nor if you had the right codec version. Unfortunately there is no vlc and ffmpeg for images (I mean there is, the likes of imagemagick, but the vast majority of software doesn't use them).

          • bayindirh a day ago

            I lived through that era (first K-Lite Codec Pack, then CCCP came along), but still it holds.

            Proprietary or open, any visual codec is a battleground. Even in commercial settings, I vaguely remember people saying they prefer the end result of one encoder over another, for the same video/image format, not unlike how photographers judge cameras by their colors.

            So maybe, this flexibility to PNG will enable or encourage people to write better or at least unorthodox encoders which can be decoded by standard compliant ones.

      • fc417fc802 a day ago

        I honestly don't see an issue with the mpeg-4 example.

        Regarding the potential for fragmentation of the png ecosystem the alternative is a new file format which has all the same support issues. Every time you author something you make a choice between legacy support and using new features.

        From a developer perspective, adding support for a new compression type is likely to be much easier than implementing logic for an entirely new format. It's also less surface area for bugs. In terms of libraries, support added to a dependency propagates to all consumers with zero additional effort. Meanwhile adding a new library for a new format is linear effort with respect to the number of programs.

      • 7bit a day ago

        I never once in 25 years encountered an issue with an mp4 container that could not be solved by installing either the DivX or Xvid codec. And I extensively used mp4's metadata for music, even with esoteric tags.

        Not sure what you're talking about.

        • Arnt a day ago

          He's saying that in 25 years, you used only the LC and HE profiles, and didn't encounter TwinVQ even once. I looked at my thousand-odd MPEG-4 files. They're overwhelmingly AAC LC, a little bit of AAC LC SBR, no TwinVQ at all.

          If you want to check yours: mediainfo **/*.mp4 | grep -A 2 '^Audio' | grep Format | sort | uniq -c

          https://en.wikipedia.org/wiki/TwinVQ#TwinVQ_in_MPEG-4 tells the story of TwinVQ in MPEG-4.

    • mort96 a day ago

      > Adding another compression form should be handled by existing software as recognizing it as a valid PNG that they can't decompress.

      Yeah, we know. That's terrible.

    • pvorb a day ago

      Extending the format just because you can – and breaking backwards compatibility along the way – is even more pointless.

      If you've created an extensible file format, but you never need to extend it, you've done everything right, I'd say.

      • jajko a day ago

        What about an extensible format that would have, as part of the header, an algorithm (in some recognized DSL) for how to decompress it (or any other step required for image manipulation)? I know it's not so much about PNG but some future format.

        That's what I would call really extensible, but then there may be no limits and hacking/viruses could easily have a field day.

        • lelanthran a day ago

          > What about an extensible format that would have as part of header an algorithm (in some recognized DSL) of how to decompress it (or any other step required for image manipulation)?

          Will sooner or later be used to implement RCEs. Even if you could do a restriction as is done for eBPF, that code still has to execute.

          Best would be not to extend it.

    • shiomiru a day ago

      The difference between valid PNG you can't decompress and invalid PNG is fairly irrelevant when your aim is to get an image onto the screen.

      And considering we already have plenty of more advanced competing lossless formats, I really don't see why "feed a BMP to deflate" needs a new, incompatible spin in 2025.

      • Arnt a day ago

        It's a new and compatible spin. https://svgees.us/blog/img/revoy-cICP-bt.2020.png uses the important new feature and your old software can display it.

        More generally, PNG has a simple feature to specify what's needed. A file consists of a number of chunks, and one bit in the chunk specifies whether that chunk is required for display. All of the extensions I've seen in the past decades set that bit to "optional".

        For example, this update includes a chunk containing EXIF data. As you'd expect, the exif chunk sets that bit to "optional".
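
        That bit is just the case of the chunk name's first letter, so checking it takes only a few lines (a sketch; only the standard chunk layout is assumed):

          # Sketch: list a PNG's chunks and whether each is required for display.
          # Bit 5 of the first byte of the chunk type is the ancillary bit:
          # uppercase first letter = critical, lowercase = safe to skip.
          import struct

          def list_chunks(path):
              with open(path, "rb") as f:
                  data = f.read()
              pos = 8                               # skip the PNG signature
              while pos < len(data):
                  length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
                  name = ctype.decode("ascii")
                  kind = "ancillary" if ctype[0] & 0x20 else "critical"
                  print(f"{name}: {kind}, {length} bytes")
                  pos += 12 + length                # length + type + data + CRC

          list_chunks("example.png")  # IHDR/IDAT/IEND critical; eXIf, acTL, cICP not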

      • fc417fc802 a day ago

        > plenty of more advanced competing lossless formats

        Other than JXL which still has somewhat spotty support in older software? TIFF comes to mind but AFAIK its size tends to be worse than PNG. Edit: Oh right OpenEXR as well. How widespread is support for that in common end user image viewer software though?

    • chithanh a day ago

      > Adding another compression form should be handled by existing software

      In an ideal world, yes. In practice however, if some field doesn't change often, then software will start to assume that it never changes, and break when it does.

      TLS learned this the hard way when it was discovered that huge numbers of existing web servers have TLS version intolerance. So now TLS 1.2 is forever enshrined in the ClientHello.

    • HelloNurse a day ago

      Extensibility of PNG has been amply used, as intended, for proprietary chunks that hold application specific data (e.g. PICO-8 games) without bothering other software.

    • dooglius a day ago

      > Having an extensible format and electing never to extend it seems pointless.

      So then it was pointless for PNG to be extensible? Not sure what your argument is.

  • jillesvangurp a day ago

    Old PNGs will work just fine. And forward compatibility is much less important.

    The main use case for PNG is web browsers and all of them seem to be on board. Using old web browsers is a bad idea. You do get these relics showing up using some old version of internet explorer. But some images not rendering is the least of their problems. The main challenge is actually going to be updating graphics tools to export the new files. And teaching people that sRGB maybe isn't good enough any more. That's going to be hard since most people have no clue about color spaces.

    Anyway, that gives everybody plenty of time to upgrade. By the time this stuff is widely used, it will be widely supported. So, you kind of get forward compatibility that way. Your browser already supports the new format. Your image editor probably doesn't.

    • hnlmorg a day ago

      Browsers aren't the only software that work with PNGs. Far from it in fact.

    • whywhywhywhy a day ago

      > The main use case for PNG is web browsers

      It's not; most images you encounter on the web need better compression.

      The main PNG use case is to store lossless images locally as master copies that are then compressed, or in workflows where you intend to edit and change them, where lossy formats would degrade the image the more it was edited.

    • AlienRobot a day ago

      >The main use case for PNG is web browsers

      This is news to me. I'm pretty sure the main use case for PNG is lossless transparent graphics.

      • asadotzler 21 hours ago

        Depends on whose use cases you're considering.

        There are about 3.6 billion people surfing the web and experiencing PNGs. That use case, consuming PNGs, seems to dwarf the perhaps 100 million (somewhat wild guess) graphic designers, web developers, and photo editing professionals who manipulate images for publishing (in any medium) or archiving.

        If, on the other hand, you're considering the use cases envisioned by PNG's creators, or the use cases that interest the people processing or publishing images, yes, these people are focused on format itself and its capabilities.

        I suspect this particular use of "use case" isn't terribly clear. Also these two considerations are not incompatible.

  • skywal_l a day ago

    Can't you improve a compression algorithm and still produce valid decompression input? PNG is based on zip; there are certainly ways to improve zip without breaking backwards compatibility.

    That being said, they can also do dumb things. However, right at the end of the sentence you quote they say:

    > we want to make sure we do it right.

    So there's hope.

    • masklinn a day ago

      > Can't you improve a compression algorithm and still produce a still valid decompression input? PNG is based on zip, there's certainly ways to improve zip without breaking backwards compatibility.

      That's just changing an implementation detail of the encoder, and you don't need spec changes for that e.g. there are PNG compressors which support zopfli for extra gains on the DEFLATE (at a non-insignificant cost). This is transparent to the client as the output is still just a DEFLATE stream.

    • vhcr a day ago

      That's what OptiPNG already does.

      • josefx a day ago

        Doesn't OptiPNG just brute force various settings and pick the best result?

  • ProgramMax 20 hours ago

    Worry not! (Well, worry a little.)

    The first bit of our research is "What can we already make use of which requires no spec update? There are plenty of PNG optimizers. How much of that should go into the typical PNG libraries?"

    Same with parallel encoding & decoding. An older image viewer will be able to decode it on one thread without ever knowing parallel decoding was an option.

    Here's the worry-a-little part: Everybody immediately jumps to file size as to what image compression is better or worse. That isn't the best take, but it is what it is. So there is pressure to adopt newer technologies.

    We often do have a way to maintain some degree of backwards compatibility even when we do this. For example, we can store a downsampled image for old viewers. Then extra, new chunks will know "Mix that with this full scale data, using a different compression".

    As you can imagine, this mixing complicates things. It might not be the best option. Sooooo we're researching it :)

    • HexDecOctBin 12 hours ago

      Downsampling will make PNG not be a lossless format. Just leave it alone, and work on a separate PNG2 or PNGX or whatever.

    • ori_b 15 hours ago

      My strong vote is to just not touch it. Stability is a feature.

  • colanderman a day ago

    One could imagine a PNG file which contains a low-resolution version of the image with a traditional compression algorithm, and encodes additional higher-resolution detail using a new compression algorithm.

  • mrheosuper a day ago

    Does the USB-C spec break backward compatibility? A 2018 MacBook works perfectly fine with a 2025 USB-C charger.

    • danielheath a day ago

      Some things don't work unless you use the right kind of USB-C cable.

      EG your GPU and monitor both have a USB-C port. Plug them together with the right USB cable and you'll get images displayed. Plug them together with the wrong USB cable and you won't.

      USB 3 didn't have this issue - every cable worked with every port.

      • mrheosuper a day ago

        That is not a backward compatibility problem. If a cable did 100W charging when using PD 2.0 but only 60W when used with a PD 3.1 device, then I would agree with you.

        • yoz-y a day ago

          The problem is not backward compatibility but labeling. A USB-C cable looks universal but isn’t. Some of them just charge, some do data, some do PD, some give you access to high speed. But there is no way to know.

          I believe the problem here is that you will have PNG images that “look” like you can open them but can’t.

          • voidUpdate a day ago

            That's not just an issue with USB-C. Normal USB-A and USB-B cables can have data or no data depending on how stingy the company wants to be, and you can't know until you test it.

            • Xss3 a day ago

              You can get pretty good guesses just by feel and length. Tiny with a super thin cable? Probably charge only.

          • mystifyingpoi a day ago

            Cable labeling could fix 99% of the issues with USB-C compat. The solution should never be blaming consumer for buying the wrong cable. Crappy two-wire charge-only cables are perfectly fine for something like a night desk lamp. Keep the poor cables, they are okay, just tell me if that's the case.

            • ay a day ago

              Same thing with PNG. Just call the format with the new additions PNGX, so the user can clearly see that the reason their software can't display the image is not file corruption.

              This is just pretending that if you have a cat and a dog in two bags and you call it “a bag”, it’s one and the same thing…

            • lelanthran a day ago

              > Cable labeling could fix 99% of the issues with USB-C compat.

              Labelling is a poor band-aid on the root problem - consumer cables which look identical and fit identically should work wherever they fit.

              There should never have been a power-only spec for USB-C socket dimensions.

              If a cable supports both power and data, it must fit in all sockets. If a cable supports only power it must not fit into a power and data socket. If a cable supports only data, it should not fit into a power and data socket.

              It is possible to have designed the sockets under these constraints, with the caveat that they only go in one way. I feel that that would have been a better trade-off. Making them reversible means that you cannot have a design which enforces cable type.

              • Xss3 a day ago

                So since my vape (example, I don't vape) has a power-and-data slot for charging and firmware updates, I should be limited to only using dual-purpose cables day to day rather than a power-only cable?

                • lelanthran a day ago

                  > So since my vape (example, i dont vape) has a power and data slot for charging and firmware updates, i should be limited to only using dual purpose cables day to day rather than a power only cable?

                  Well, yes.

                  Why can't you use a power+data cable for the vape (or whichever appliance takes both)? What's the deal-breaker here?

                  The alternative is labeling, or plugging cables in to see if they do what you want them to do.

                  Both are a poor user interface.

                  • Xss3 18 hours ago

                    Is the same true for my laptop? Or my soldering plate? Both take over 150W of power. Buying a power-and-data cable is expensive compared to just power, and the length of the cable is severely limited... or the data speed is impaired significantly. How slow does the data have to be for it to be non-compliant?

              • mystifyingpoi a day ago

                > If a cable supports only power it must not fit into a power and data socket

                That's even more confusing than the current state of affairs. If my phone has a power-and-data socket, then I cannot use a power-only cable to just charge it? Presumably with a charger that has a power-only socket. So I need a cable with two different ends anyway. Just go micro-USB at this point :)

                Funnily enough, there is a 100% overkill way to solve such issues. Just use super expensive certified TB cables. Well... plus an A-to-C adapter for noncompliant devices, I guess.

            • kevin_thibedeau 19 hours ago

              Two wire cables are not in the specification, just like A-to-A cables aren't. The whole charging above 100mA with resistor hacks wasn't in the standard either until they had to grandfather it in. The implementers forum isn't responsible for non-members breaking their spec.

          • mrheosuper a day ago

            The parent said "changing the compression algorithm will break backwards compatibility", which I take to mean that something which works now won't work in the future. The USB-C spec intentionally tries to avoid that.

            • danielheath a day ago

              Today, I can save a PNG file off a random website and then open it.

              If PNG gets extended, it's entirely plausible that someone will view a PNG in their browser, save it, and then not be able to open the file they just saved.

              There are those who claim "backwards compatibility" doesn't cover "how you use it" - but roughly none of the people who now have to deal with broken software care about such semantic arguments. It used to work, and now it doesn't.

              • fc417fc802 a day ago

                The alternative is the website operator who wants to save on bandwidth instead adopts JXL or WEBP or what have you and ... the end user with old software still can't open it.

                It's a dichotomy. Either the provider accommodates users with older software or not. The file extension or internal headers don't change that reality.

                Another example: new versions of PDF can adopt all the bells and whistles in the world, but I will still be saving anything intended to be long-lived as 1/a, which means I don't get to use any of those features.

              • mrheosuper a day ago

              Which is what the USB-C spec has been avoiding so far. Even in the USB4 spec, there is a lot of mention that the new spec should be compatible with TB3 devices.

              The USB-C spec is anything but backward incompatible.

              • johnisgood a day ago

                This is what I fear, too.

                Do they mention which C libraries use this spec?

          • globular-toast a day ago

            Some aren't even USB. Thunderbolt and DisplayPort both use USB-C too.

            • Xss3 a day ago

              Thunderbolt meets usbc specs (and exceeds them afaik), so it is still usb...

    • mystifyingpoi a day ago

      Yeah, I also don't think they've broken backwards compat ever. Super high end charger from 2024 can charge old equipment from 2014 just fine with regular 5V.

      What was broken was the promise of a "single cable to rule them all", partly due to manufacturers ignoring the requirements of USB-C (missing resistors or PD chips to negotiate voltages, requiring workarounds with A-to-C adapters), and a myriad of optional stuff, that might be supported or not, without a clear way to indicate it.

    • techpression a day ago

      I don’t know if it’s the spec or just a plethora of vendors that ignore it, but I have many things with a USB-C port that require USB-A as the source. USB-C to A to C works, yay dongles, but not just C to C. So maybe it’s not really breaking backwards compatibility, just a weird mix of a port and the communication being separate standards.

      • mrheosuper a day ago

        Because those USB-C ports do not follow the spec. If they had followed the spec from day one, there would be no problem even now.

      • fragmede a day ago

        It's vendors just changing the physical port but not updating the electronics. Specifically, 5.1 kΩ pull-down resistors on the CC1 and CC2 pins are needed on the device (sink) side in order for a C-to-C cable to work.

    • zirgs a day ago

      Yeah - it's a mess. Some devices only charge with a charger that supports PD. Some other devices need a charger WITHOUT PD support.

      • mrheosuper 12 hours ago

        If those devices follow the spec, they don't need a charger without PD support.

        If you don't follow the spec, you're on your own.

  • altairprime a day ago

    They could, for example, use lossy compression for the compatibility layer and then fill it in the rest of the way to lossless using incompatible new compression objects. Legacy uses will see some fidelity degradation, but they are already being stuck with sRGB downmixes, so that’s fine — and those who are bothered by it can just emit a lossless-pixels (but lossy-color and lossy-range) compatibility layer and reserve the compression benefits for the color and dynamic range.

    I’m not saying this is what will happen — but if I was able to construct a plausible approach to compression in ten minutes, then perhaps it’s a bit early to predict the doom of compatibility.

  • ajnin a day ago

    What backward compatibility are we talking about here? Backwards compatibility of images will be fine, backwards compatibility of decoders might be impacted, but the article says the major image viewers (browsers) and image editors already support the 3rd version. Better compression is only planned for the 5th version of the spec.

    Also, if you forbid evolving existing formats, the only way to improve is to introduce a new format, and I'd argue that would cause even more fragmentation and be more difficult to adopt. Look at all the drama surrounding JPEG XL.

  • bawolff a day ago

    I don't think that will super be an issue. How often has "progressive jpeg" ever caused problems? That's the same thing.

  • bmacho a day ago

    +1. Why not name it png4 or something? It's better if compatibility is obvious upfront.

    • josephg a day ago

      I think if they did that, nobody would use it. And anyway, from the article:

      > Many of the programs you use already support the new PNG spec: Chrome, Safari, Firefox, iOS/macOS, Photoshop, DaVinci Resolve, Avid Media Composer...

      It might be too late to rename png to .png4 or something. It sounds like we're using the new png standard already in a lot of our software.

remram a day ago

So what do we call it? PNG3? The spec is titled "Portable Network Graphics Specification (Third Edition)".

Surely they aren't releasing a new, incompatible version and expecting us to pretend it's the same format...?

> This updates the existing image/png Internet Media type

whyyyyyyy

  • ProgramMax 20 hours ago

    New? Yes. Incompatible? No.

    We went to pretty extreme lengths to make sure old software worked with the new changes. Effectively, the limit will be the software, not the image.

    For example, you can imagine some old software that is unaware of color spaces and treats everything as sRGB. There is nothing we can do to make that software show a Display P3 correctly. However, we can still show the image well enough that a user understands "that is a red apple".

    • account42 3 hours ago

      Having images show up in washed-out colors without any indication is not what I'd consider "working". This mistake has been made many times; please let's not make it again.

369548684892826 a day ago

A fun fact about PNG, the correct pronunciation is defined in the specification

> PNG is pronounced “ping”

See the end of Section 1 [0]

0: https://www.w3.org/TR/REC-png.pdf

  • gred a day ago

    That makes two image format names which I will refuse to pronounce correctly (the other being GIF [1]).

    [1] https://edition.cnn.com/2013/05/22/tech/web/pronounce-gif

    • ziml77 a day ago

      The only logic I ever hear for using a hard G is because that's how Graphics is said. Yet I never hear people saying jay-feg.

      • account42 4 hours ago

        As with all other pronunciations the real reason is because it sounds better (more correct) to most people.

      • gred a day ago

        Also "gift".

    • cmiller1 a day ago

      How do you pronounce PNG?

      • gred a day ago

        Pee En Gee

      • kristopolous a day ago

        I used to call them Nogs claiming the P was silent.

        People believed me. Still funny.

      • illiac786 a day ago

        P&G, stands for Pee & Gloat.

        • gred a day ago

          Portable & Graphical

  • account42 4 hours ago

    That's the correct pronunciation the same way the correct pronunciation for GIF is jiff. Human language is not something you can prescribe.

  • ProgramMax 20 hours ago

    Even though I know about this, I still pronounce it as letters. :)

  • dspillett a day ago

    Because the creator of gifs telling the world how he pronounced it made such a huge difference :)

    Not sure I'll bother to reprogram myself from “png”, “pung”, or “pee-enn-gee”.

    • naikrovek a day ago

      When someone makes a baby, you call that person by their real name with the correct pronunciation, don’t you?

      So why can’t you do that with GIF or PNG? People that create things get to name them.

      • dspillett 4 hours ago

        There is a huge difference between inanimate objects/classes and babies. Don't personify inanimate objects, they hate that!

        On inanimate objects: Aluminium was first ratified by the IUPAC as aluminium⁰, with the agreement of its discoverer Sir Humphry Davy¹, yet one huge nation calls it something else…

        On people: nicknames are a thing; are you saying those are universally wrong? But yes, when a person tells me that they'd prefer their name pronounced a different way, or that they'd prefer a different name entirely, or that they don't like the nickname others use for them, you can bet your arse that I'll make the effort to use their preferred name+pronunciation in future.

        ------

        [0] Though it should be noted that aluminum was, a few years after, officially accepted as an alternate form.

        [1] He initially called it aluminum in the first paper.

        • naikrovek 14 minutes ago

          Let people name their offspring, both biological and technical.

      • AllegedAlec a day ago

        > People that create things get to name them.

        And if they pick something dumb enough other people get to ignore them.

      • pixl97 a day ago

        Depends...

        You'll commonly call someone by their pronounced name out of respect, forced or given.

        In a situation where someone does something really stupid or annoying and the forced respect isn't there, most people don't.

      • eviks a day ago

        First, it's not a baby, that's a ridiculous comparison.

        But also, no, not universally even for babies, especially when the name is something ridiculous like X Æ A-Xii where even parents disagree on pronunciation, or when the person himself uses a "non-specced" variant

      • account42 4 hours ago

        You are aware that kids generally don't get to pick the nicknames they end up being called, and their parents definitely don't either.

      • freeopinion a day ago

        A parent may name their baby Elizabeth. Then even the parent might call them Liz or Beth or Betsy or Bit or Bee.

      • airstrike a day ago

        Because PNGs won't answer back when I call them by some "correct" name.

    • LocalH a day ago

      I've said "jif" for almost 40 years, and I'm not stopping anytime soon.

      Hard-g is wrong, and those who use it are showing they have zero respect for others when they don't have to.

      It's the tech equivalent to the shopping cart problem. What do you do when there is no incentive one way or the other? Do you do the right thing, or do you disrespect others?

      • pwdisswordfishz a day ago

        Linguistic prescriptivism is wrong, and people who promote it are showing they have zero respect for others when they don't have to.

        • LocalH a day ago

          I agree that language is fluid. However, when it comes to names, I think people should have enough respect to pronounce things how the creator (or owner, depending on the situation) of the name says it should be pronounced. Too often people will mispronounce someone's name as a sign of intentional disrespect (see Kamala Harris for a fairly recent prominent example) and I cannot get behind that. You see a similar disrespect in the hard-soft discourse around the pronunciation of GIF. A lot of people use the hard g and mock the creator for thinking that soft g should ever have been right.

          Naming is probably one of the few language areas that I think should be prescriptive, even while language at large is descriptive.

          • Analemma_ a day ago

            I don’t think technical standards merit the same level of “deference to the creator” as personal names. People are wrong about standards they created all the time (ask me what I think about John Gruber’s “stewardship” of Markdown) and should be corrected, a standard is meant for all. Obviously the pronunciation of an acronym isn’t anywhere near as important as technical details, but I think the principle holds.

            • asadotzler 21 hours ago

              People are wrong about the children they create all the time too, and should be corrected.

              • LocalH 21 hours ago

                A child is presumably a sentient being, and at some point in their life should gain control of their name. In fact, they do, to some large degree. There are means to change one's legal name, or one can diverge from their legal name and professionally/publicly use a completely different name.

                A file format is not a sentient being. The creator's intent matters much more. If GIF had sentience and could voice a desire one way or the other, the whole discussion would be moot as it would clearly be disrespectful to intentionally mispronounce the name.

          • mandmandam a day ago

            If the creator insists on a weird pronunciation, because of an inside joke most won't ever get, then I feel no responsibility in humoring them.

            The G in gif is for graphics. Not 'giraffics'. And most people in the world have no idea what Jif even is, much less a particular catchphrase from an old ad campaign that barely even connects.

            • ziml77 a day ago

              And the P in JPEG is for photographic, so you better be saying jay-feg if you want to rely on that logic.

              • joquarky 21 hours ago

                If everyone conformed, then we would have no fun lively debates on things like this. That would be a boring world.

        • xdennis a day ago

          Linguistic prescriptivism has nothing to do with it.

          English has both pronunciations for "gi" based on origin. Giraffe, giant, ginger, etc from Latin; gift, give, (and presumably others) from Germanic roots.

          Using the preferred one is just a matter of politeness.

          Also, it's quite ironic to prescribe "linguistic prescriptivism" as wrong.

          • account42 4 hours ago

            Insisting on one out of multiple possible pronunciations when most people naturally pick a different one is the definition of linguistic prescriptivism. Politeness doesn't have anything to do with it; people are not required to let individuals dictate how our collective language works.

      • bigfishrunning a day ago

        Pronounce the jraphics interchange format any way you want; everyone knows what you're talking about anyway -- try not to get so worked up. It's not the shopping cart problem, because no-one is measurably harmed by not choosing the same pronunciation as you.

        • LocalH a day ago

          i'll start using hard-g gif when you start saying "jfeg" ;)

      • npteljes a day ago

        As much as I hate jif, thinking about it, "GPU" works the same - we say gee-pee-you and not gh-pee-you. Garbage Collection is also gee-cee. So it's only logical that jif is the correct one - even if it's not the widely accepted one.

        Wrt communication, aside from personal preference, one can either respect the creator or the audience. If I stand in front of 10 colleagues, all 10 of them would not understand jif, or would only get it because this issue has some history now. Gif, on the other hand, has no friction.

        Genghis Khan, for example, sounds very different from its original Mongolian pronunciation. And there are myriad others as well.

        • LocalH a day ago

          The whole debate seems to be a modern phenomenon to me - from my anecdotal experience back in the day, it was never questioned by computer enthusiasts that it was pronounced "jif".

          • eCa 20 hours ago

            I (as a non-native English speaker) have pronounced it with a hard g since I first saw it (mid ’90s) and many years before I learned how the creator preferred it to be pronounced.

            I continue to pronounce it how I prefer it, not as a slight, but most people here would be surprised by the soft g.

            If I ever meet him I’ll attempt to pronounce it soft-g.

            On the other hand, even though my name exists and is reasonably common in English, I’m fairly certain neither you nor the GIF creator would address me the way I pronounce my name. I would understand anyway, and wouldn’t care one bit.

          • npteljes 20 hours ago

            I have the same experience - but with gif. Mind you, me and my circle are not native English speakers.

            The debate itself is old. "Since the 90s", Wikipedia says, and keep in mind the format is from 1987 - so I would say the debate has been on from the get-go. Appropriate, too: if you think back, arguing about this kind of stuff was pretty common. Emacs vs vim, browser wars, different kinds of computers, tribalism everywhere.

            https://en.wikipedia.org/wiki/Pronunciation_of_GIF

            • dspillett 4 hours ago

              I think “since the 90s” here is “since the late 90s”. When I was first aware of gif files (in the early 90s IIRC) I only saw the name and meaning in print, so I went with the hard G to match the g's pronunciation in graphics. I don't think I was aware of the original intention to pronounce it jif until somewhere in the early 2000s, at which point the use of the hard g was almost ubiquitous and the soft g idea was presented as an interesting/amusing aside.

              • npteljes 2 hours ago

                One of Wiki's sources dates it back as far as 1994, and that's a news article, so the thing must have been going on for a while.

                Thinking about it, I think I understand why hard G makes sense for people. With GPU, we pronounce the individual letters, as it's clearly an abbreviation - no sane English word starts with "gp". With GIF though, even though it's an abbreviation, it looks a lot like a normal word, "gift", and English also has "give", another one with a hard G, so it feels familiar to say. Moreover, the US, where GIF comes from, already had Jif established as a peanut butter brand, so it makes sense to not pronounce a newly invented, differently written word the same as an already established thing. Well, at least to some it makes sense!

  • eviks a day ago

    Ha, been doing it "wrong" my whole life!

  • yuters a day ago

    Pronouncing it like that would invite confusion as the word ping is often used in messaging.

    • nashashmi 14 hours ago

      let's propose PENJ to avoid the confusion.

adgjlsfhk1 a day ago

I'm very curious to see how this will end up stacking up vs lossless jpegxl

  • Simran-B a day ago

    I doubt it can get anywhere near. What is even the point of a new PNG version if there's something as advanced as JXL that is also royalty-free?

    • layer8 a day ago

      Browser support for JPEG XL is poor (basically only Safari I think), while the new PNG spec is already supported by all mainstream browsers.

      • encom a day ago

        It's poor only because Google is using their stranglehold on browsers to push their own WebP trash. That company can't get broken up soon enough.

        • layer8 a day ago

          Firefox also doesn’t support JPEG XL out of the box, and Chrome does support the new PNG, so ¯\_(ツ)_/¯.

          • account42 4 hours ago

            Firefox is there to prevent/delay the forced breakup of Google's monopoly, not to provide any real competition; thanks for showing another example of that.

          • trallnag a day ago

            How about renaming JPEG XL to PNG or just merging the complete spec into PNG 3.0?

  • LoganDark a day ago

    For starters, you're actually able to use PNG.

iliketrains a day ago

Official support for animations, yes! This feels so nostalgic to me: I wrote an L-system generator with support for exporting animated PNGs 11 years ago! They worked only in Firefox, and Chrome used to have an extension for them. Too bad I had to take the website down.

Back then, there were no libraries in C# for it, but it's actually quite easy to make an APNG from PNGs directly by writing chunks with correct headers, no encoders needed (assuming the PNGs are already encoded as input) - see the sketch below.

https://github.com/NightElfik/Malsys/blob/master/src/Malsys....

https://marekfiser.com/projects/malsys-mareks-lsystems/
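
For anyone curious, here is a rough sketch of that chunk-writing approach in Python rather than C# (a hypothetical helper, not the code linked above). It assumes all frames are already-encoded, truecolor PNGs with identical dimensions, and simply repackages their IDAT data into acTL/fcTL/fdAT chunks; error handling, PLTE, and interlacing are ignored.

    import struct, zlib

    PNG_SIG = b"\x89PNG\r\n\x1a\n"

    def chunk(ctype, data):
        # length + type + data + CRC32 over type and data
        crc = zlib.crc32(ctype + data) & 0xFFFFFFFF
        return struct.pack(">I", len(data)) + ctype + data + struct.pack(">I", crc)

    def chunks_of(png):
        # yield (type, data) for every chunk in an existing PNG
        pos = len(PNG_SIG)
        while pos < len(png):
            length, ctype = struct.unpack(">I4s", png[pos:pos + 8])
            yield ctype, png[pos + 8:pos + 8 + length]
            pos += 12 + length  # 4 length + 4 type + data + 4 CRC

    def make_apng(frames, delay_ms=100):
        ihdr = next(d for t, d in chunks_of(frames[0]) if t == b"IHDR")
        width, height = struct.unpack(">II", ihdr[:8])
        out = bytearray(PNG_SIG)
        out += chunk(b"IHDR", ihdr)
        out += chunk(b"acTL", struct.pack(">II", len(frames), 0))  # 0 = loop forever
        seq = 0
        for i, frame in enumerate(frames):
            fctl = struct.pack(">IIIIIHHBB", seq, width, height, 0, 0,
                               delay_ms, 1000, 0, 0)  # delay = delay_ms/1000 s
            out += chunk(b"fcTL", fctl)
            seq += 1
            idat = b"".join(d for t, d in chunks_of(frame) if t == b"IDAT")
            if i == 0:
                out += chunk(b"IDAT", idat)  # first frame stays in IDAT
            else:
                out += chunk(b"fdAT", struct.pack(">I", seq) + idat)
                seq += 1
        out += chunk(b"IEND", b"")
        return bytes(out)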

  • chithanh a day ago

    > Official support for animations, yes!

    While I welcome that there is now PNG with animations, I am less impressed about how Mozilla chose to push for it.

    Using PNG's magic numbers and pretending to existing software that it is just a normal PNG? That is the same mindset that led to HTML becoming tag soup. After all, HTML with a <blink> tag is still HTML, no?

    I think they could have achieved animated PNG standardization much faster with a more humble and careful approach.

hrydgard a day ago

What about implementations? libpng seems pretty dead, 1.7 has been in development forever but 1.6 is still considered the stable version. Is there a current "canonical" png C/C++ library?

  • vanderZwan a day ago

    I mean, if the spec has been stable for two decades then maybe there just hasn't been much to fix? Especially since PNG is a relatively simple image format.

    • illiac786 a day ago

      Seems that logic does not apply to jpeg though.

  • ethan_smith a day ago

    For modern C/C++ PNG implementations, consider lodepng (header-only), stb_image/stb_image_write (single-file), or libspng (active fork focused on performance and security) as more actively maintained alternatives to libpng.

  • ProgramMax 20 hours ago

    libpng updates are either already landed or nearly landed.

poisonborz a day ago

Not backwards compatible. We just add it to that nice cupboard "great advanced image formats we will forget about".

Society doesn't need a new image format. I'd wager to say not any new multimedia format. Big corporate entities do, and have been churning them out at a steady pace.

Look at poor webp - a format pushed by the largest industry players - and the abysmal everyday use it gets, and the hate it generates.

  • lioeters a day ago

    > Not backwards compatible

    They say it's technically compatible since older image decoders should recognize the PNG file is using a different compression algorithm than the default.

    > Many programs already support the new PNG spec: Chrome, Safari, Firefox, iOS/macOS, Photoshop, DaVinci Resolve, Avid Media Composer...

    This is intentionally ignoring the fact that there are countless PNG decoders out in the wild, many using libpng, the standard decoder, last updated 6 years ago; and they will not be able to read the new PNG v2 files.

    They should have used a different file extension, PNG2, to distinguish this incompatible format. Otherwise, users will be confused why their newly saved PNG file cannot be read by certain existing programs.

    • arp242 a day ago

      libpng seems to get regular updates? A release just a few days ago.

      There's a PR for APNG: https://github.com/pnggroup/libpng/pull/706 – it seems there was some work for HDR in e.g. https://github.com/pnggroup/libpng/pull/635 as well. Related: https://github.com/pnggroup/libpng/issues/507

    • JKCalhoun a day ago

      [flagged]

      • colejohnson66 a day ago

        Those are indeed the "magic" bytes of PNG. It's a very clever choice meant to ensure the transport layer didn't mess with it.

        To start, there's a byte with the upper bit set which ensures an "8-bit clean" transport. If it's stripped, it becomes a harmless tab. Then the literal "PNG" text so you can see it in a text editor. Then a CR-LF pair to check for CR-LF to LF translations. Then, a CTRL-Z to stop display on DOS-like systems. And finally, another LF to check for LF to CR-LF translations.

        It's a clever "magic" that basically verifies that the transport layer was binary-safe. Things that mattered back in 1996.

        https://www.libpng.org/pub/png/spec/1.2/PNG-Rationale.html#R...
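
        For reference, a naive sketch of those eight bytes and a check (Python, purely illustrative):

            PNG_SIGNATURE = bytes([
                0x89,              # high bit set: catches non-8-bit-clean transports
                0x50, 0x4E, 0x47,  # "PNG" in ASCII, visible in a text editor
                0x0D, 0x0A,        # CR LF: catches CRLF -> LF translation
                0x1A,              # Ctrl-Z: stops 'type' output on DOS
                0x0A,              # LF: catches LF -> CRLF translation
            ])

            def looks_like_png(path):
                with open(path, "rb") as f:
                    return f.read(8) == PNG_SIGNATURE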

        • account42 4 hours ago

          It's clever but I'm not so sure it actually mattered - other formats have done just as well with simpler magic numbers. All it does in the end is that you get something that doesn't identify as a PNG file rather than a PNG file with bad data when a non-binary transport is used - both results are bad and immediately apparent.

      • ape4 a day ago

        The bytes 50 4E 47 spell "PNG".

  • michaelmior a day ago

    > and the abysmal everyday use it gets

    Estimates are that 95% of Internet users have a browser that supports WebP and that ~25% of the top million websites serve WebP images. I wouldn't call that abysmal.

    • Geezus_42 a day ago

      Great, so I can download it, but then I have to convert it to a different format before half my apps will be able to use it.

      • PaulHoule a day ago

        Blame Adobe. For what they charge for Creative Suite it ought to have supported it a long time ago.

        My webcrawler sucks down a lot of WebP images, at least it did before it got the smackdown from Cloudflare.

        • martin_a a day ago

          Adobe Photoshop has support for WebP (through "Save as", not "Export") but I don't think WebP is important.

      • lizknope a day ago

        I was about to write that Slack doesn't support webp but I just tested it and it does. For years I have been typing "convert file.webp file.jpg" and then posting that in slack but it looks like they have added support.

      • jeroenhd a day ago

        Everything I've tried supports WebPs. It took Adobe a while but even Photoshop supports the format these days.

        Hell, for some software features (like stickers in some chat apps), WebP is mandatory.

        HEIF files, on the other hand...

      • BeFlatXIII a day ago

        Or convert before you upload because the image host has delusions about fighting the Google monoculture by refusing WebP support. Even more of a head scratcher when WebM is their only video format.

      • wltr a day ago

        Maybe the issue is with your operating system then?

        • jdiff a day ago

          App support has very little to do with the operating system. OSes by and large will preview it just fine.

          • dinkblam a day ago

            On the contrary: on macOS, apps don't have to support image (or movie) formats. It is done by the system and transparently handled by the APIs. Apps automatically gain new formats when the system adds them.

            • reaperducer a day ago

              The unfortunate side effect of this convenience is that apps automatically lose image support when macOS chooses to no longer support a format, too.

              One example is Sony's SRF camera raw format.

              Programs like Photoshop and Affinity have to bring their own decoders where previously none were required.

              • dspillett a day ago

                And having to bring in support for formats that are deprecated by the OS (if they decide to keep supporting a format because there is sufficient demand from their users) is worse than having to bring in support for all formats rather than getting support from the OS?

                Having asked that in a slightly confrontational way: one of the reasons I started using VLC all those years ago, and still use it to this day, was having trouble with other media players that relied on OS support failing to work well (or at all) with some codecs, while VLC brought support for them, and their dog, built in and reliable. Dragging your own format support libraries with you can be beneficial.

          • wltr a day ago

            I meant Windows, as macOS and Linux are usually good with modern things. It’s trivial to add the support if you don’t have it. I have no idea about Windows, but I got this vibe of someone using Win7 in 2025 and complaining the world moved on and keeps moving on.

        • echelon a day ago

          You can't use webp on Reddit, Instagram, and hundreds of other websites. Which is ironic because some of them serve images as webp.

          • socalgal2 a day ago

            Just tested Reddit. It works fine with .webp. I don't have an Instagram account.

            • echelon a day ago

              Try https://www.reddit.com/settings/profile

              There are so many uneven areas of Reddit where WebP doesn't work. Old reddit, profile support, mod tools, etc.

              • kccqzy a day ago

                I'm convinced that this is because of the prevalent MVP culture in modern software engineering. Instead of holistically looking at a new feature request such as "support webp images", we break it down into parts (e.g. "serve webp", "accept webp upload here", "accept webp upload there") and then we call it an MVP when only the highest priority items are done.

          • wltr a day ago

            That doesn’t mean it’s dead, it rather shows sheer incompetence of the web dev departments of these wonderful companies for whom webp or avif aren’t images, I guess.

            • PaulHoule a day ago

              Instagram's image uploading interface is klunky compared to Mastodon which is entirely unfunded.

              • echelon a day ago

                This shows the unfortunate power of distribution.

                It doesn't matter if the alternative is technically superior once the majority use the mainstream thing.

    • whywhywhywhy a day ago

      Completely fails the second you want to do anything more than load it on a webpage.

      Photoshop still won't open it, and macOS Preview opens it but then demands to convert it to TIFF when you try to edit it.

      • account42 4 hours ago

        GIMP and Gwenview have supported webp (the latter via platform image plugins that add support to other applications as well) since before you encountered them online. Maybe choose better tools.

      • asgerhb a day ago

        Maybe using VLC Media Player from an early age has left me with too high expectations. But if I have a program designed to view or edit a certain class of file, and it doesn't support a certain file format, I will blame that program.

    • AlienRobot a day ago

      You can't even upload webp to instagram.

      • bastawhiz a day ago

        Which makes sense for an app made for photos: why would you capture a photograph to disk in a format made for distributing on the web?

        • jdiff a day ago

          Indeed, why might one upload a photo to the web in a format made for distributing images on the web?

          • bastawhiz a day ago

            I could save my photos as BMPs like early digital cameras did but that doesn't make it practical or reasonable. My camera takes pictures as RAW or HEIF files. Why would I save my photos to a primarily lossy codec that's optimized and designed for distribution rather than preserving fidelity?

            We used to do this with JPEG, in fact. And that's why many pictures on Facebook from pre-2018 or so all have a distinctive grainy look. It's artifacts on top of artifacts. Storage on phones isn't tight anymore, we don't need to store photos in a format meant to minimize bytes at the expense of quality.

            • jdiff 13 hours ago

              There's more on Instagram than photos. Lotta meme pages, lots of people just uploading random screenshots and photos they downloaded that have been turned over a million times. Heck, all it takes is someone downloading their own photo from SocialMediaX to reupload on SocialMediaY, or just uploading the WebP that they exported for their website.

        • Sharlin a day ago

          Instagram hasn't even been primarily or even secondarily about photos for a long time. Indeed trying to "just" upload a photo is made super inconvenient these days.

          • sunaookami 10 hours ago

            Tangentially related, but Instagram is really the worst platform for photos. I don't understand why they crop and downsize (!) pictures. Not even Twitter does this; it's unironically a better photo platform.

          • bastawhiz a day ago

            Unless you're uploading memes you've downloaded from elsewhere, this strictly isn't true. I'd consider myself an Instagram power user, and the only thing that I and all the people I interact with post is photos and videos. None of those are webp, or would have been worthwhile to save as webp as an intermediate format.

    • hsbauauvhabzb a day ago

      My file manager can’t handle them but my browser can.

      Edit: and good luck uploading the format to the majority of web forms that aren't FAANG.

      • debugnik a day ago

        Not even Google supports webp uploads in many of their web apps, and it's their format.

        • chillingeffect a day ago

          Could it be a lack of resources? Or some missing expertise? Maybe they could find some interns who are familiar with it? Maybe the entire world is so obsessed w AI, we don't even care about image formats anymore.

          • pixl97 a day ago

            Honestly this kind of stuff happens all the time in large companies.

            Interns won't want to work on a dead end like this. Moreover, they need to be supervised by someone who doesn't want to get removed for being in the lowest X% of usefulness in a company. So all these existing tools that aren't primary revenue generators just sit in coast mode.

      • account42 3 hours ago

        Demand more from your file manager then.

        • hsbauauvhabzb 2 hours ago

          Sure, but then it’s my image viewer, my phone’s image viewer, the website I try to upload pictures to. This isn’t a problem you can solve by patching one application, and it’s not one the world as a whole cares about.

          Better image formats serve entities who store images at scale, not end users.

      • upcoming-sesame a day ago

        If you are using an image optimization service like Imgix / Cloudflare Image Resizing then it doesn't really matter; the image can be uploaded in any supported format and will be sent to the end user according to their "Accept" header.

        • hsbauauvhabzb a day ago

          if you’d like to go and implement that in all the millions of existing web apps, go ahead?

          Let’s also not forget the dependency mess that leaves in applications before we do though..

    • dotancohen a day ago

      5% of people can't view them, yet 25% of top websites use them?

      In what other industry would it be considered acceptable to exclude 5% of visitors/users/clients?

      • pchangr a day ago

        I can tell you, I have personally worked with a global corporation and we estimated that for one of their websites, supporting the 3% that we exclude by using “modern standards” would be more costly than the amount of revenue they get from them. So in that case, it was a rational decision. And up to the 10% cut, management just didn't want to make the extra investment. So if something falls below that 10% threshold, they just don't care to get it fixed.

        • Aachen a day ago

          > it was a rational decision. And up to the 10% cut, management just didn’t want to do the extra investment

          Rational, or economical? I find it rational to help someone in need since I'd want others to do the same to me, even if it's not financially profitable for me. Imo more factors flow into what's rational, but I understand what you mean by corporate greed working this way (less than 10% of people are blind, neither male nor female, run a free operating system or can't afford a new computer, etc., so yep they're not profitable groups and for-profits don't optimise for that)

          • majewsky a day ago

            You are using the notion of rationality wrong. Rational reasoning can only help you find how to achieve goals that align with your values. It is strictly worthless in choosing your values.

            If a corporation has determined that profit maximization is their core tenet, excluding the needs of a minority of users can likely be deduced in a rational manner from that tenet. That is precisely why values need to be forced onto corporate actors through regulation, e.g. in this case through mandatory accessibility guidelines like EU directive 2019/882 that enters into force this very week.

            • account42 3 hours ago

              Rational reasoning also takes into account long-term and second- and higher-order effects, which quarterly profit-driven reasoning often ignores. If you support 95% of users and your competitor supports 100%, then that may help your competitor get 100% of them while you get none.

        • account42 3 hours ago

          Thanks for demonstrating why laws like ADA are needed to force companies to not be bad citizens. We desperately need similar laws to force compatibility with older hardware - one could even champion it under environmental protection.

        • dotancohen 11 hours ago

          In my experience, accessibility features are needed by about 1.5% of users (E-commerce and some internal business tools). So by your logic, the rational choice is to exclude accessibility?

          Or Linux users? Or even Firefox users in our market?

        • eviks a day ago

          Something is off in this calculation, how did they get to such a high cost for such a simple thing as an alternative image format when the web supports multiple???

          • dooglius 20 hours ago

            My guess would be that the users hitting different types of issues are mostly the same; someone who can't view an alternative image format is using an obscure old browser or obscure OS that will inevitably have a ton of other issues too, and fixing only a subset of the issues would not make much difference.

      • 0points a day ago

        > 5% of people can't view them, yet 25% of top websites use them?

        That's not how it works.

        The server declares what versions of media it has, and the client requests a supported media format. The same trick has been used for audio and video for ages too.

        Example:

            <picture>
                <source srcset="a.webp" type="image/webp">
                <img src="fallback.jpg">
            </picture>
        • vbezhenar a day ago

          This problem has been solved by HTTP since forever. The client sends an `Accept` header with supported formats and the server selects the appropriate content, with the corresponding `Content-Type` header. You don't need any HTML tags for it.
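
          A minimal, framework-agnostic sketch of that negotiation (names are illustrative; a real implementation would also parse q-values and set a Vary: Accept header):

              def pick_variant(accept_header):
                  # naive substring check; ignores q-values
                  if "image/webp" in accept_header:
                      return "photo.webp", "image/webp"
                  return "photo.jpg", "image/jpeg"

              # e.g. body_path, content_type = pick_variant(request.headers.get("Accept", ""))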

          • NorwegianDude a day ago

            No, because that's just one of the features.

            Images are often offered at different resolutions too; that way, depending on the pixel density of the device and the physical size, the browser can select a photo that has high enough resolution but is not needlessly large, while also selecting the preferred image format.

          • allendoerfer a day ago

            What about file extensions?

            • georgyo a day ago

              File extensions are just a hint about what the file might be and have nothing to do with what the file actually is. If the server sets the MIME type, the browser will use that as the hint.

              But even beyond that, most file formats have a bit of a header at the start of the file that declares the actual format of the file. Browsers can already understand that and use the correct renderer for a file without an extension.

              • allendoerfer 7 hours ago

                What if the user wants to use the file outside the browser, where they do not have access to the HTTP headers?

                • georgyo 44 minutes ago

                  The same is true: if you rename a .png to .jpg and open it with an image viewer, it will still render.

            • jdiff a day ago

              Sometimes respected, largely ignored. URLs very often don't map directly to files served.

      • sjsdaiuasgdia a day ago

        Not all businesses are attempting to reach a market of "every internet user globally".

      • bawolff a day ago

        Can the 5% view images at all? The number of web crawlers have exploded recently.

        • jdiff a day ago

          Yes, but it's 2% that are still using browsers without full support for WebP according to caniuse, which takes its numbers from StatCounter.

          https://caniuse.com/webp

          Note that I'm looking at "all tracked," which excludes 2% "other" browsers in the data whose featureset is not known.

      • pasc1878 a day ago

        Any industry.

        e.g. cars - not everyone is physically able to drive; books - blind people can't read; music - deaf people can't hear

        It is a form of the 80/20 or 90/10 rule: the last small percentage costs as much as the majority.

        • danillonunes a day ago

          I agree with the point you're trying to make, but your examples are terrible. Music industry doesn't have too much to do to help deaf people. It's not like they're deliberately making deaf-inaccessible music instead of relying on the old good deaf-accessible music formats.

          (Also, the parent comment's example is not so good because, as someone else pointed out, just because the top 25% of websites are serving webp doesn't mean they're not serving alternative formats for those who don't support it, as this is quite trivial to set up)

  • Etheryte a day ago

    I don't really think this is the case here. All major browsers already support the new spec, for example. This isn't a case of "oh, we'll have support for it eventually"; it's already there.

  • Hendrikto a day ago

    > Momentum built, and additional parties became interested. […] we had representation from […] Adobe, Apple, BBC, Comcast / NBCUniversal, Google, MovieLabs, and […] W3C

    > Many […] programs […] already support the new PNG spec: Chrome, Safari, Firefox, iOS/macOS, Photoshop, DaVinci Resolve, Avid Media Composer...

    > Plus, you saw some broadcast companies in that list above. Behind the scenes, hardware and tooling are being updated to support the new PNG spec.

  • 127 a day ago

    There's a big issue that all old popular image formats are 8-bit. 10 or even 12 bits would help a lot with storing more information and maintaining editability.

    • londons_explore a day ago

      If adding more bits to an image format, please make it 'n bit'. Ie. the file could be 8 bit, it could be 10, it could be 12, it could be 60 bit!

      Whilst we're at it, please get rid of RGB and make it N channels too.

      Libraries can choose to render that into a 3 channel, 8 bit buffer for legacy applications - but the data will be there for CMYK or HDR, or depth maps, or transparency, or focus stacking, or any other future feature!

    • Retr0id a day ago

      PNG has supported 16 bits per channel since version 1.0 in 1996 (at least)

  • GuB-42 a day ago

    There are some applications for a new image format, but I agree that what we have is generally good enough.

    We need good video formats however. Video makes up most of the global internet traffic, probably accounts for a good part of global storage capacity too. Even slightly better compression will have a massive impact.

  • LocalH a day ago

    I miss the days of old Amiga OS 3.x, where you had installable "DataTypes" that any program could make use of. If we had that, then all such programs could at least be updated to basic compatibility by simply updating the datatype.

    • account42 3 hours ago

      All operating systems support some kind of shared libraries and plugin architecture.

  • ProgramMax 20 hours ago

    It is very backwards compatible. I'm not sure why you thought that.

    We jumped through quite a lot of hoops to make sure old software will be able to display new images. They simply won't display them optimally. But for the most part, that would be because the old software wouldn't display images optimally anyway. So the limit was the software, not the format.

    What I mean by this is old software that treats everything as sRGB wouldn't correctly show a Display P3 image anyway. But we made sure it will still display the image as correctly as it could.

    • account42 3 hours ago

      The sample HDR images don't show correctly in image viewers even though the colors used fit into the sRGB gamut (or at least have good approximations in there). That's not really backwards compatibility.

  • dev_l1x_be a day ago

    > Look at poor webp

    What about it?

    "Lossless WebP is typically 26% smaller than PNG, while lossy WebP can be 25-34% smaller than JPEG at equivalent quality levels"

    This literally saves hundreds of thousands in cost, bandwidth, and electricity every month on the internet. In fact, I strongly believe that this is one of the greatest contributions from Google to society, just like ZSTD from Facebook.

    https://developers.google.com/speed/webp/docs/webp_study

    • account42 3 hours ago

      > equivalent quality levels

      Therein lies the lie.

      Image and video compression comparisons are like statistics: with the right corpus and evaluation criteria you can show whatever narrative you want to push.

    • Mr_Minderbinder 12 hours ago

      Those numbers are from Google. Third parties have not found WebP to be as good as Google claims.

    • Timwi a day ago

      I don't think the commenter you replied to disagrees with any of that. They were talking about poor rates of adoption, not its feature set.

      • dev_l1x_be a day ago

        The biggest driver of adoption is features.

        "WebP is used by 16.7% of all websites. This means that while it's a popular image format, it's not yet the dominant format, with JPEG still holding the majority share at 73.0%, according to W3Techs. However, WebP offers significant advantages in terms of compression and file size, making it a preferred choice for many web developers. "

    • poisonborz a day ago

      Society wholeheartedly thanks Google for saving costs for Google

      • dev_l1x_be 2 hours ago

        It saved money for our company too.

        ¯\_(ツ)_/¯

  • Retr0id a day ago

    Which aspects are not backwards compatible?

    You'll never be able to faithfully represent an HDR image on a non-HDR system, but you'll still see an image.

    • account42 3 hours ago

      The problem is when "HDR" images that would perfectly fit into the sRGB color space are not rendered correctly on non-HDR systems. This PNGv2 fails that, which means it isn't really any more useful than one of the existing (and much better) HDR-supporting formats like JPEG-XL or the video-codec-based ones pushed by the big guys.

      • Retr0id an hour ago

        If your image fits in sRGB colour space then why not just use sRGB?

jug 15 hours ago

This is the first time I’ve seen HDR used to refer to wider color spaces and not extended brightness and contrast ratios.

  • ProgramMax 14 hours ago

    Yeah. I mentioned this elsewhere but repeating here:

    I designed the article to be accessible and understandable for the average person. So I took some liberties like showing only HDR primaries and not deep diving into HDR transfer functions. People understand the primaries intuitively.

    But you are right that a wide color image could also use those same primaries without being HDR.

    My goal was to be as truthful as possible while still being digestible at a glance.

    In the article, I linked to Chris Lilley's post which explains it more thoroughly for the technical people.

  • account42 3 hours ago

    Don't forget increased bit depths. You need all of these for a full HDR experience.

razorfen a day ago

Can anyone explain how they maintain backwards compatibility on formats like this when adding features? I assume there are byte ranges managed in the format, but with things like compression, wouldn’t compressed images be unrenderable on clients that don’t support it? I suppose it would behoove servers to serve based on what the client would support.

  • gmueckl a day ago

    In my understanding, the actual image data encoding isn't altered in this update. It only introduces an extended color space definition for the encoded data.

    PNG is a highly structured file format internally. It borrows design ideas from formats like EA's Interchange File Format in that it contains lists of chunks with fixed headers encoding chunk type and length. Decoders are expected to parse them and ignore chunk types they do not support.
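
    A rough sketch of walking that chunk structure (Python, assuming a well-formed file). The case of the first letter of a chunk's type is what tells a decoder whether an unknown chunk is safe to skip (ancillary, lowercase) or not (critical, uppercase):

        import struct, zlib

        def walk_chunks(png):
            # yield (type, data) for each chunk; unknown types can simply be skipped
            assert png[:8] == b"\x89PNG\r\n\x1a\n"
            pos = 8
            while pos < len(png):
                length, ctype = struct.unpack(">I4s", png[pos:pos + 8])
                data = png[pos + 8:pos + 8 + length]
                (crc,) = struct.unpack(">I", png[pos + 8 + length:pos + 12 + length])
                if crc != zlib.crc32(ctype + data) & 0xFFFFFFFF:
                    raise ValueError("bad CRC in %r" % ctype)
                yield ctype, data
                pos += 12 + length

        # An old decoder just ignores e.g. b"acTL" or b"cICP":
        #     for ctype, data in walk_chunks(blob):
        #         if ctype not in KNOWN_TYPES:
        #             continue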

    • joquarky 21 hours ago

      The Amiga was quite a platform. Glad to know that it had some long term influence.

  • joshmarinacci a day ago

    The PNG format has chunks with types. So you can add an additional chunk with a new type and existing decoders will ignore it.

    There is also some leeway for how encoding is done as long as you end up with a valid stream of bits at the end (called the bit stream format), so encoders can improve over time. This is common in video formats. I don’t know if a lossless image format would benefit much from that.

    • gmueckl a day ago

      PNG is a bit unusual in that it allows a couple of alternate compressed encodings for the data that are all lossless. It is up to the encoder to choose between them (scanline by scanline, IIRC). So this encoding algorithm leeway is implicit in a way.

  • jdhsddh a day ago

    PNG is specifically designed to support this. Clients will simply skip chunks they do not understand.

    In this case there could be an embedded reduced colour space image next to an extended color space one

snickerbockers a day ago

It was gone??? Was I the only one using it this entire time?

nashashmi 14 hours ago

Time is ripe for audio-included animated PNG files.

nektro a day ago

Cautiously optimistic. The thing that makes PNG so sought after is its status as frozen.

guilbep a day ago

Let's call it PPNG: Pas Portable NetWork Graphic

Dwedit a day ago

If you wanted better compression, it's called Lossless WEBP. Lossless WEBP is such a nice codec. Compared with Lossless JXL, it decompresses many times more quickly, and while JXL usually produces a smaller file, it doesn't always.

Lossless AVIF is not competitive.

However, lossless WEBP does not support indexed color images. If you need palettes, you're stuck with PNG for now.

  • ansgri a day ago

    How's HDR and high bit depth support? One of the things I liked about JXL is wide range of bit depths and arbitrary number of channels.

  • account42 3 hours ago

    Webp is a joke. The reference encoder can't even get color spaces right.

  • altairprime a day ago

    I look forward to seeing what PNG v5 does in the future with compression, especially relative to existing formats.

  • rurban a day ago

    And the JXL api is a nightmare, compared to WEBP.

    • Dwedit a day ago

      Yeah, the whole "subscribe to events then check a status result" thing is pretty bad. This is compounded by "Box" behaving differently than everything else. When I made JxlSharp (C# JXL library wrapper), I had to add a workaround in there to force Box to behave like all the other event subscriptions.

      And buffer sizes aren't handled in a good way. You have to provide pre-allocated memory, guessing how big it is supposed to be. Then you get a "not big enough" error. This is a guessing game, not a good design. You're forced to overshoot, then shrink the buffer afterwards.

      ---

      In different APIs, there tends to be a function you call to get the required buffer size. For example, many Win32 API functions make you call them with a buffer size of 0, then you get the actual required size back. Another possibility is having the library allocate the memory, and return the allocated buffer to you. Since cross-module memory management is hairy (different `malloc` implementations can't interoperate), some APIs let you provide the `malloc`, `realloc`, and `free` functions.

4ad 3 hours ago

HDR is about, well, high dynamic range images, usually expressed with at least 10 bits of precision (although it can also be float, etc), and often, but not always encoding scene-referred data instead of image-referred data (originally it was supposed to only encode scene-referred data, but then other competing formats ignored that). It has nothing to do with the gamut and with the color primaries, although in practice HDR images use a large color space.

But you can absolutely have an SDR image encoded using a large color space. So I am not sure why the author talks about color primaries when trying to justify HDR… I still don’t know what kind of HDR images this new PNG variant can encode.

tonyedgecombe a day ago

>After 20 years of stagnation, PNG is back with renewed vigor!

After 20 years of success, we can't resist the temptation to mess with what works.

  • eviks a day ago

    > [not] Officially supports Exif data

    How can you call this basic fail a success?

    • account42 3 hours ago

      Embedded (and thus often invisible) metadata is a mistake.

    • tonyedgecombe a day ago

      Exif data might be important to you but it clearly hasn't stopped the adoption of png.

      • Retr0id a day ago

        People crammed exif data into PNGs anyway, and now they can continue to do that but in conformance with the spec.

      • eviks a day ago

        Yes, plenty of tech garbage floats at the top; the question is why you would argue that a lack of basic fixes over decades is not stagnation, but something positive.

  • encom a day ago

    Yea I'm mildly concerned about this as well. PNG's age is a feature, in a time where software development has gone to hell.

    • HelloNurse a day ago

      Without the new HDR and color profile handling, PNG was still useful but significantly obsolete. Display hardware has progressed over a few decades, raising the bar for image files.

      • virtualritz a day ago

        There is nothing in display hardware today that TIFF couldn't handle already.

        For example, 16-bit (integer) TIFF files 'with headroom', i.e. where some bits were used to represent data over 1.0 (HDR), were a common approach for VFX work in the 90's.

        16-bit float TIFF has also been a thing for 33 years. Adobe DNG is modeled after TIFF. High-end offline renderers have traditionally been using TIFF (with mip-maps) to store textures.

        TIFF supports tags so primaries and white point or a known color space name can be stored in the file.

        The format is so versatile, it is used everywhere.

        And of course it also supports indexed color, i.e. a non-negotiable feature at the time PNG was introduced.

        PNG was meant to replace GIF. Instead of looking what was already there some group of "experts" and "enthusiasts" (quote Wikipedia) succumbed to their NIH complexes. If licensing/patent woes over compression algorithms had been a motivator, why not just add a new one to TIFF?

        The fact that PNG stores straight/unpremultiplied alpha says everything if you know anything about imaging in computer graphics.

        And the fact that the updated format spec just released didn't address this tells you everything you need to know about the group in charge of that, today.

        PNG is the VHS of image formats. It should have never seen the light of day in the first place, nor gotten the adoption it did.

        • Mr_Minderbinder 11 hours ago

          > The fact that PNG stores straight/unpremultiplied alpha says everything if you know anything about imaging in computer graphics.

          > And the fact that the updated format spec just released didn't address this tells you everything you need to know about the group in charge of that, today.

          What does it say? That they are naive or have the wrong priorities? Their rationale for this seems quite reasonable to me: https://www.w3.org/TR/PNG-Rationale.html#R.Non-premultiplied...

        • tonyedgecombe a day ago

          >The format is so versatile, it is used everywhere.

          Yeah, I love the fact that you can embed a PDF file inside a TIFF.

      • leni536 a day ago

        PNG already supports color profiles, but probably not HDR. I would say that the gamut argument in the article is misleading; you can already encode a wider gamut.

        Not sure how HDR encoding works, but my impression is that you can set a nominal white point other than (1, 1, 1) in your specified colorspace. This is an extension, but orthogonal to specifying the colorspace itself and the gamut.

        • ProgramMax 20 hours ago

          You are correct. I designed the article to be very approachable and understandable for the normal person. As such, I took some liberties like only showing HDR primaries and ignoring transfer function. I linked to Chris Lilley's post to give experts a more correct answer.

          But wide color gamut was already possible in PNG via ICC profiles (HDR was not). And those primaries I showed could have been used in a wide color image.

          So the image is a bit misleading or red-flag-y to experts who know. But to the average person, I think it is as truthful as I can be without getting too deep in the weeds.

      • jeroenhd a day ago

        > Display hardware has progressed

        The continued popularity of non-HDR 1080p screens on laptops is a bleak reminder that most people would rather save a couple hundred bucks than buy HDR capable hardware.

        HDR is great for TVs and a nice-to-have on phones (who mostly get it for free because OLEDs are the norm these days), but display technology only advances as much as its availability in low-cost devices.

        • account42 3 hours ago

          Or maybe the advantage isn't that big for most uses (images with super bright highlights are a nice novelty but not fun to look at all the time) and people don't want to deal with the clusterfuck that is HDR software support.

      • encom a day ago

        >Display hardware has progressed

        It has, but the WWW is still de facto sRGB, and will be for a long time still. But again, I'm not strictly opposed to evolving PNG; I just hope they don't ruin it in the process, because that's usually what happens when something gets updated for a modern audience. I'll be watching with mixed optimism and concern.

        • jeroenhd a day ago

          Plenty of JPGs on the web are already in HDR and you wouldn't notice it if you don't have a HDR capable display. The same is true for PNGs.

b0a04gl a day ago

It's more to do with the obvious economic layer underneath. You give a format new life only if there's tooling and distribution muscle behind it. Adobe, Apple, Chrome, ffmpeg etc. may not all get aligned at the same time. Someone somewhere wants APNG/HDR/PNG to be a standard pipe again for creative chains; maybe because video formats are too bulky for microinteractions, or maybe because SVG is too unsafe in sandboxed renderers. And think of onboarding of animations, embedded previews, rich avatars, system-wide thumbs; all without shipping a separate codec or runtime. Every time a 'dead' format comes back, it's usually because someone needed a way around a gate.

  • ProgramMax 20 hours ago

    In general, I support the "follow the money" idea. But I don't think it applies here.

    I'm retired and making zero money here. (I'm actually losing money on it. Wish I had a company sponsoring me for the flights and hotels for meetups.)

    All participants are required to not patent any piece of it. We work hard to make sure we only reference open standards. (This one is quite tricky. We have to convince other standard orgs to make their stuff free.)

    I could see the argument for getting around a gate. But fwiw I don't think that's the case :)

meindnoch a day ago

Parallel compression/decompression is already possible via Z_SYNC_FLUSH.

  • Retr0id a day ago

    Parallel decompression of Z_SYNC_FLUSH'd data is not possible without additional metadata to tell you where the sync points are.

    • meindnoch 20 hours ago

      True. Although this can be mitigated in a backwards compatible manner, by adding a new PNG chunk that points to the locations of the sync points.
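
      Just to illustrate the idea (this is not an actual proposed chunk): if the encoder compresses each slice of scanlines with a fresh deflate state and ends it with a sync flush, recording the byte offsets lets a decoder hand each slice to a separate thread. A rough Python sketch with raw deflate (a real IDAT stream also carries the zlib header and Adler-32, and the ratio suffers slightly because back-references cannot cross slice boundaries):

          import zlib

          def compress_slices(slices):
              # each slice is compressed independently; boundaries are byte-aligned sync points
              out, offsets = bytearray(), []
              for s in slices:
                  offsets.append(len(out))
                  co = zlib.compressobj(wbits=-15)  # raw deflate, fresh state per slice
                  out += co.compress(s) + co.flush(zlib.Z_SYNC_FLUSH)
              return bytes(out), offsets            # offsets ~ what such a chunk could store

          def decompress_slice(data, offsets, i):
              end = offsets[i + 1] if i + 1 < len(offsets) else len(data)
              return zlib.decompressobj(wbits=-15).decompress(data[offsets[i]:end])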

      • Retr0id 20 hours ago

        Yes, such a chunk is being considered for introduction in future PNG revisions.

jbverschoor a day ago

What if we kind of fit JXL in PNG? That way it's more likely to be supported

kumarvvr a day ago

Never heard about Animated PNGs, and I am a nerd to the core.

Pleasantly surprised.

ccarnino a day ago

I can't believe the standard is 20yo.

NotAnOtter 18 hours ago

Can someone TLDR why I should care as someone who doesn't directly get into the weeds of this type of thing?

Is this written exactly for (1) people who implement/maintain this and, I say this with love, (2) nerds? Or will there be effects beyond a microscopic improvement in storage + latency?

defraudbah a day ago

This is good news. Any packages that support the new PNG standard or are planning to? Rust/Go/Python/JS?

naikrovek a day ago

Doesn’t PNG already support 16 bits per color channel and an arbitrary number of color channels?

  • ProgramMax 20 hours ago

    16-bit, yes. Arbitrary channel count, no. However, HDR is more than just bitcount.

aizk a day ago

20 years?? What took so long.

leviathan1 a day ago

Not backwards compatible I think

  • ProgramMax 20 hours ago

    It is very backwards compatible. :) We worked hard to make sure it would be.

eabeezxjc a day ago

We need transparency (like GIF)

!!!

  • ProgramMax 20 hours ago

    PNGs have supported transparency since day 1 :)

neepi a day ago

Oh no another HEIC!

sylware a day ago

Until everything new is "optional". Hopefully PNG won't be the target of "enshittification". We all know that for file formats there is very strong pressure from developers and vendors for that to happen, since it favors, hard, vendor and developer lock-in. If we're not careful, even a team of PhD devs won't be able to write alternative encoders/decoders that work reasonably, and the world will end up with very few alternative implementations, if not only one.

I did skim through the spec; it seems most of it is related to cleanup and optional blocks, so it seems PNG is still safe. Am I wrong? (Asking those who did dive into the new spec deeply.)

  • ProgramMax 20 hours ago

    Everything new is optional. This is not a breaking change. Old PNGs and software continue to work just fine. And these new changes are backwards-compatible as much as they can be. So old software can display a new PNG and be mostly correct. By that I mean, the user will still say "it is a picture of a red apple". But if the software isn't HDR, they might not get the bright highlights and inky blacks of the HDR PNG.

    • sylware 3 hours ago

      What is the remaining pertinent value of HDR since we are moving towards xrgb16161616 pixel format?

Joel_Mckay a day ago

DaVinci Resolve also supports OpenEXR format with the added magic of LUT.

PNG is popular with some Commercial Application developers, but the exposure and color problems still look 1980's awful in some use-cases.

Even after spending a few grand on seats for a project, one still gets arrogant 3D clown-ware vendors telling people how they should run their pipeline with PNG hot garbage as input.

People should choose EXR more often, and pick a consistent color standard. PNG does not need yet another awful encoding option. =3

  • morjom a day ago

    What are some "consistent color standards" you'd recommend? Honest question.

  • DidYaWipe a day ago

    "PNG is popular with some Commercial Application developers, but the exposure and color problems still look 1980's awful in some use-cases."

    What are you talking about? It's a bitmap. It has nothing to do with "exposure and color problems."

    • Joel_Mckay a day ago

      In general, with some applications people hit the limits pretty quickly with PNG and JPG. In our use-case, the EXR format essentially meant a rendered part of the source image wouldn't be "overexposed" by the render pipeline, and layers could later be adjusted to better match in Resolve. Example: your scene's fireball simulation won't look like a fried egg photo from 1980 due to hitting 0xFF.

      If you've never encountered the use-case, then don't worry about the aesthetics. Seriously, many vendors also just don't care... especially after they already were paid. Best of luck =3

      • ProgramMax 20 hours ago

        0xFF is 8-bit. PNG supports up to 16-bit. It always has. Plus, PNG now supports full HDR so the fireball won't look washed out.

        I think your experience is with some tool that made bad PNGs. That is a problem with the tool, not the format.

        • Joel_Mckay 19 hours ago

          EXR stores the color-space information differently, and you missed the point.

          Have a look at a tutorial that dives into the basic details, and consider learning something:

          https://www.youtube.com/watch?v=pLt1230dtYE

          https://www.youtube.com/watch?v=mb0b83MML78

          https://www.youtube.com/watch?v=egtnkhuUe_E

          PNG has its use-cases, and some people do expect that baked color-space garbage look given it dominates a lot of low-end media. Have a great day =3

          • ProgramMax 19 hours ago

            I'm trying to follow your point. But...there are problems with your claims. Yes, EXR stores color-space differently than PNG. Because EXR doesn't store color space at all.

            In the first video, the person loads the image and manually chooses a gamma transfer function with 2.2. If that was then saved, it would produce the washed-out fireball you mentioned.

            In the second video, the person loads the image and manually chooses rec.709, which is also gamma tf and also produces washed-out fireball. In fact, the EXR image he loads literally has a bright fireball and you see it get washed out.

            If you want to make claims about EXR being better than PNG, you need to say why storing the values as floating point is better than integer. But the blown-out fireball example is just incorrect. As evidence, I'll point to HDR. ANYTHING you see in an HDR movie is now 100% losslessly reproducible in a PNG.

            • Joel_Mckay 18 hours ago

              There are a lot of conflated contexts to unpack there...

              However, I still trust the ILM engineers over your pet project, and maligned post that reeks of LLM slop.

              The argument of making cow from hamburger doesn't hold true under our use-cases. You were shown the path, and it is your choice to put in the work to learn something important.

              Best of luck kid =3

bravesoul2 2 days ago

Papua New Guinea never went away!

antirez a day ago

PNG: doing very little with as much complexity as possible.

  • LeoPanthera a day ago

    You’re going to be shocked when you find out how webp works.

    • qwertfisch a day ago

      Because that’s a video compression format, of which only a single intra-frame is used.

Padriac a day ago

I thought this was about Papua New Guinea.

kfkdjajgjic 21 hours ago

This is just the rebranded MNG format that the PNG group tried to push as a "standard" 20 years ago. Firefox removed MNG for a reason.

  • creatonez 20 hours ago

    I'm confused what aspect of this you're mad about. MNG is an animated format? Well, Firefox has supported APNG (Animated PNG) for the past 17 years without it ever being standardized and it has become extremely widely adopted. And... this new spec attempts to standardize it.