Curiously, what people commonly refer to as 'Wavefront OBJ' is merely a tiny subset of that format, i.e. the part dealing with polygons.
The format supports, e.g., higher-order curves and surfaces, and apps like Maya or Rhino3D can read and write OBJ files containing such data. [1]
Writing a parser for the polygon subset also comes with some caveats.
If your target is a GPU you probably need to care about robust triangulation of n-gons and making per-face-per-vertex data per-vertex on disconnected triangles.
Vice versa, if you are feeding data to an offline renderer you absolutely want to preserve such information.
I believe the tobj Rust crate is one of the few OBJ importers that handles all edge cases. [2] If you think it doesn't, let me know and I will fix that.
This is surprising to people familiar with the requirements of either offline or GPU rendering, but not both.
I.e. if you write an OBJ reader this can become a challenge; see e.g. an issue I opened here [3].
1. https://paulbourke.net/dataformats/obj/
2. https://docs.rs/tobj/latest/tobj/struct.LoadOptions.html
3. https://github.com/assimp/assimp/issues/3677
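To illustrate the per-face-per-vertex point above, here is a minimal C sketch of the flattening step a GPU path typically needs; the type and function names (Vec3, Corner, GpuVertex, flatten_corners) are hypothetical and not taken from the article or from tobj:

```c
#include <stddef.h>

/* Hypothetical types for illustration; names are not from the article. */
typedef struct { float x, y, z; } Vec3;

typedef struct {
    int v;   /* index into positions (0-based) */
    int vn;  /* index into normals   (0-based) */
} Corner;

typedef struct { float px, py, pz, nx, ny, nz; } GpuVertex;

/* Expand indexed per-face-per-vertex data into a flat, per-vertex array:
 * every triangle corner gets its own GpuVertex, so faces that share a
 * position but not a normal no longer share a vertex (disconnected
 * triangles). */
size_t flatten_corners(const Vec3 *positions, const Vec3 *normals,
                       const Corner *corners, size_t ncorners,
                       GpuVertex *out)
{
    for (size_t i = 0; i < ncorners; i++) {
        Vec3 p = positions[corners[i].v];
        Vec3 n = normals[corners[i].vn];
        out[i] = (GpuVertex){ p.x, p.y, p.z, n.x, n.y, n.z };
    }
    return ncorners;  /* number of GPU vertices produced */
}
```

An offline renderer would instead keep the separate position/normal index streams, since that connectivity information is exactly what this step throws away.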
How does this compare to the `obj` crate? I'm assuming that doesn't handle cases beyond the common ones well? I ask because I have a 3D rendering/GUI application lib in Rust (`graphics` crate), and for OBJ files, I thinly wrap that to turn it into a mesh.
In my own applications, it hasn't come up, as I've been mostly using primitives and dynamically-generated meshes, but am wondering if I should switch.
> robust triangulation of n-gons and making per-face-per-vertex data per-vertex on disconnected triangles.
This is a simple post-process step after parsing.
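For convex faces that post-process can be as small as a fan triangulation; the sketch below covers only that simple case, since concave or non-planar n-gons need a real triangulator such as ear clipping:

```c
#include <stddef.h>

/* Simple fan triangulation of one convex n-gon face: corners 0..n-1 become
 * triangles (0,1,2), (0,2,3), ..., (0,n-2,n-1).  Concave or non-planar
 * n-gons need a proper triangulator (e.g. ear clipping); this handles only
 * the easy case.  Returns the number of triangle corner indices written. */
size_t fan_triangulate(const int *face, size_t n, int *out)
{
    size_t w = 0;
    for (size_t i = 1; i + 1 < n; i++) {
        out[w++] = face[0];
        out[w++] = face[i];
        out[w++] = face[i + 1];
    }
    return w;  /* 3 * (n - 2) for n >= 3, 0 otherwise */
}
```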
That's very nice work, with many interesting concepts introduced in the post (for example, arenas, length-bounded strings, the Cut struct).
One caveat though:
> If the OBJ source cannot fit in memory, then the model won’t fit in memory.
I don't think that this is true: the textual representation of a (single-precision) float is typically equal to or larger than its binary representation (4 bytes), single precision being the floating-point type used in the renderer given later in the post. The numbers given in the cube example are unlikely to occur in real-world files, where one would probably expect more than 2 digits of decimal precision. That being said, for double-precision floats it might be true in many scenarios, but I would not make that a cardinal rule anyway.
This corner cut fits within the objective of the post, which, imho, isn't to make the most efficient program, but provide a great foundation in C to build upon.
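As a quick sanity check of the size argument, one can compare a float printed at typical precision with its 4-byte binary form; the value below is just an arbitrary example:

```c
#include <stdio.h>

/* Rough check of the claim: a float printed with realistic precision
 * usually takes at least as many bytes as its 4-byte binary encoding. */
int main(void)
{
    float v = 0.7071068f;            /* a typical normalized coordinate */
    char buf[32];
    int text_len = snprintf(buf, sizeof(buf), "%.7g", v);
    printf("text \"%s\": %d bytes, binary: %zu bytes\n",
           buf, text_len, sizeof(float));   /* e.g. 9 vs 4 */
    return 0;
}
```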
The sentence you quoted must be true because the input file and the output binary model both need to fit in memory at the same time.
I guess that the following statement would be true: if the process cannot load the whole file into memory and allocate memory for the model at the same time, then it won't be able to run successfully. Strictly speaking, the sentence I quoted doesn't derive from that. This is just me quibbling though, because the intended meaning was most likely what you said.
The technique shown can be easily adapted to mmap.
I was thinking about that too. I wasn't so sure, but this SO entry dispelled my doubts:
https://stackoverflow.com/questions/7222164/mmap-an-entire-l...
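A minimal POSIX-only sketch of that adaptation might look like the following; map_file is a hypothetical helper name, and error handling is kept to a minimum:

```c
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/* Map the whole OBJ file read-only; the parser can then treat the mapping
 * as one big in-memory string with no read loop.  POSIX only. */
static const char *map_file(const char *path, size_t *len)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0) return NULL;

    struct stat st;
    if (fstat(fd, &st) < 0) { close(fd); return NULL; }

    void *p = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    close(fd);                      /* the mapping stays valid after close */
    if (p == MAP_FAILED) return NULL;

    *len = (size_t)st.st_size;
    return p;                       /* unmap later with munmap() */
}
```

The parser then walks the returned pointer exactly as it would walk a buffer filled by read() or fread().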
As someone who has written multiple OBJ readers over the years, this is interesting, but it notably seems to ignore texture coords (UV coords), and doesn't support object groups.
Also, OBJ material support is an absolute nightmare if you ever try to support it: there's technically a sort of original standard (made around 30 years ago, so understandably somewhat messy given how far materials and physically-based shading have come in the meantime), but different DCCs do vastly different things, especially for texture paths and for properties like specular/roughness...
Isn't there a common convention with vertex colors, where the color is just listed after the vertex?
(from a quick search) https://paulbourke.net/dataformats/obj/colour.html
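A parser that wants to honor that unofficial convention can treat trailing values on a "v" line as an optional color; the sketch below uses sscanf for brevity rather than the article's hand-rolled number parsing, and parse_vertex_line is a made-up name:

```c
#include <stdio.h>

/* Parse a "v" line that may carry the unofficial vertex-color extension:
 * "v x y z [r g b]".  Returns 1 if a color was present, 0 otherwise.
 * sscanf is used for brevity only; a hand-rolled parser would follow the
 * article's approach instead. */
static int parse_vertex_line(const char *line, float pos[3], float color[3])
{
    color[0] = color[1] = color[2] = 1.0f;   /* default: white */
    int n = sscanf(line, "v %f %f %f %f %f %f",
                   &pos[0], &pos[1], &pos[2],
                   &color[0], &color[1], &color[2]);
    return n == 6;                           /* n == 3 means no color */
}
```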
I think it doesn't support 'vt' because the techniques are adequately demonstrated just with faces and normals, so it would be more code without serving any pedagogical purpose. The author would, I think, not suggest you copy this code and try to use it as a library or something, but that you should develop the skillset to be able to write code like this when you need it.
This was the usual rite of passage into 3D programming in the old days: adding all the things that OpenGL doesn't do out of the box, unlike other 3D frameworks, and naturally the 3D asset loading code was OBJ based.
Nowadays you can have the same fun by rewriting the previous sentence using Vulkan instead of OpenGL, and glTF instead of OBJ.
"The same fun" but also likely orders of magnitude more efforts (and headaches).
Indeed, at least now there is an SDK as starting point.
Heh. I feel that I might be somewhat responsible for the existence of this article. Perhaps merely a coincidence, but this happened a few days ago: https://old.reddit.com/r/C_Programming/comments/1itrhd9/blat... in which the author schools me on the topic of American Fuzzy Lop, which he applied to my homemade OBJ parser.
Note that there's a great C99/C++ single header library, tinyobjloader, that provides robust (in my experience) and feature-full OBJ loading, including triangulation of n-gons and full material parsing.
https://github.com/tinyobjloader/tinyobjloader
It's fairly mature and handles many of the parsing footguns you'll inevitably run into trying to write your own OBJ parser.
This is one of those things where, for literally every 3D tool you test it against, you're going to find new edge cases that break the code.
If anyone is looking for a very fast OBJ loader, this is great: https://github.com/guybrush77/rapidobj
Also: https://aras-p.info/blog/2022/05/14/comparing-obj-parse-libr...
> Str substring(Str s, ptrdiff_t i)
The function has a quite questionable implementation. It fails miserably for strings with length < i.
Only because every other Str-accepting function uses "s.len" instead of "s.len > 0" as the "is s non-empty" test.
Still, this function is called only once, and in that call, its i argument is always <= length, so it's perfectly fine (it's only UB if you actually pass it a bad argument).
> Still, this function is called only once, and in that call, its i argument is always <= length, so it's perfectly fine (it's only UB if you actually pass it a bad argument).
This very mindset is a source of bugs and vulnerabilities. The author has high marks from me on safety and "make it hard to use wrong" and it's quite surprising to see such code.
Satisfying preconditions is a requirement to make functioning programs.
The insanity would be assuming that every function is valid for the Cartesian product of all possible values of its arguments.
What he probably needs is an assert
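Something along these lines, assuming a Str struct with data/len fields similar to the article's (the field names here are assumptions, and this is a sketch of the idea, not the author's actual code):

```c
#include <assert.h>
#include <stddef.h>

typedef struct {
    char      *data;
    ptrdiff_t  len;
} Str;

/* One possible hardened shape of the substring function discussed above:
 * assert the precondition in debug builds, and clamp i so that even a bad
 * argument cannot produce an out-of-bounds pointer or a negative length.
 * (Sketch only; not the article's code.) */
static Str substring(Str s, ptrdiff_t i)
{
    assert(i >= 0 && i <= s.len);
    if (i < 0)     i = 0;
    if (i > s.len) i = s.len;
    s.data += i;
    s.len  -= i;
    return s;
}
```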
> Satisfying preconditions is a requirement to make functioning programs.
> The insanity would be assuming that every function is valid for the Cartesian product of all possible values of its arguments.
Would it? That reminds me of a recent post on HN about proving the long (binary) division algorithm with Hoare's logic. It uses the "d > 0" precondition and proves that, indeed, the algorithm arrives at the required postcondition. However, the algorithm still terminates and produces something even when d == 0. What does it compute in this case? Is it useful? Should such questions even be considered?
> Should such questions even be considered?
Yes, a better understanding of the problem gives you a better understanding of the preconditions. Always ask if you have that right and weaken accordingly.
For this particular case it's trivial to fix the substring function and extend the possible inputs. Your proposition seems to be: "do nothing because it's futile." It's simply wrong.
Will that make the function more useful?
In general you can write better code when you can make assumptions.
Code to handle every possibility is filled with error prone branching, that reduplicates effort at every function.
Reminds me of the time I was chastised for adding a NULL check to keep <program> from segfaulting by the dev responsible for said segfault, because crashing without so much as a warning was "intended behavior". IIRC this was over reading a file from disk and just assuming it existed.
This way of writing programs is also quite a lot faster than depending on fgets/getline and the like. The integer and float parsing is probably slow, though.
My question is: Does the author actually use Windows XP?
> Does the author actually use Windows XP?
I've switched to XP (from Windows 7, on a VM) and the performance is astounding even on limited hardware settings. No bloatware, just good old Win32 x86.
I recently pulled an old laptop out of the closet with a mostly stock image of XP to play with an old device and it felt so snappy.
It's sad how bloated things have gotten.
Is it connected to the network?
Mine is. Why?
Because XP is no longer receiving security updates
And?
> My question is: Does the author actually use Windows XP?
Significant overlap between the types of people who use WinXP and write 3D file format importers in C, I think! Though I prefer 7 myself.
Who can do it as a one liner with a regex?
Been there, done that... Not worth it
This sent me down a rabbit hole reading about the author's style of having an "Arena allocator," [0] which was fascinating. I often did something similar when writing ANSI C back in the day—allocate a big enough chunk of memory to operate, and do your own bookkeeping. But his Arena implementation looks more flexible and robust.
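For readers who haven't seen the pattern, a bump-style arena can be as small as the sketch below; this is my own illustration of the general idea, not the author's implementation, which differs in details such as out-of-memory policy:

```c
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* A minimal bump/arena allocator: grab one big block up front, then hand
 * out aligned slices of it by moving a pointer forward.  Everything is
 * freed at once by discarding the arena. */
typedef struct {
    char *beg;
    char *end;
} Arena;

static Arena arena_new(size_t cap)
{
    Arena a;
    a.beg = malloc(cap);
    a.end = a.beg ? a.beg + cap : NULL;
    return a;
}

/* Allocate size bytes with the given alignment by bumping the pointer;
 * returns NULL (rather than invoking UB) when the arena is exhausted. */
static void *arena_alloc(Arena *a, size_t size, size_t align)
{
    uintptr_t p  = (uintptr_t)a->beg;
    uintptr_t ap = (p + (align - 1)) & ~(uintptr_t)(align - 1);
    if (ap + size > (uintptr_t)a->end) return NULL;
    a->beg = (char *)(ap + size);
    return memset((void *)ap, 0, size);
}
```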
[flagged]
I did some OBJ processing the other day, it's an easy format for working with PlayStation 1 3D models.
Says who?
I think this article serves as a perfect example of why we should consider moving on from C. The first third of this article is "how to do memory allocation and work with strings".
The bit about OBJ parsing is neat, though.
Why isn't the conclusion "There is a far better way of using C, which the stdlib doesn't promote... but could"? The fact of the matter is that any sufficiently large C codebase will do this stuff anyway; it's not a language issue.
Good Rust code will also care about memory allocations to the same degree as the C code; the difference is that Rust will help you out in making sure your thinking is correct. In my experience, good systems programming treats memory allocation not as an annoying side issue but as a main concern.
If you even remotely care about performance you'll need to take care of such details in any language, and some high level 'managed' languages make that actually harder than C because you need to work around or even against builtin language features.
I've spent my entire career working in C++ writing low level code for video games (and a decent chunk of it writing backend services for said games, and the glue between the two).
If you want to talk about performance, you better come armed with numbers. If you don't, you're not writing "high performance" code.
A large part of high performance programming using any language is about memory management.
For stuff that you run only for yourself and that _always_ executes in the blink of an eye, I do agree.
I've spent my career ping-ponging between writing fast low-level code for games and online systems. If you want to talk about high-performance code, benchmarks are a requirement. There are no numbers here. It only talks about "Robust", which OP defines as:
> By robust I mean no undefined behavior for any input, valid or invalid; no out of bounds accesses, no signed overflows. Input is otherwise not validated. Invalid input may load as valid by chance, which will render as either garbage or nothing.
Robust is a baseline for high performance programming.
Whenever an article by this author gets posted on HN, there's a Rust fanatic saying his code doesn't work and how wrong he is for committing the sin of using C in the current year.
I didn't mention rust. I think he's used enough features of C++ to warrant using it instead.