Show HN: BinaryRPC – Lightweight WebSocket-based RPC framework in modern C++

github.com

74 points by efecan0 19 hours ago

Hi HN,

I’m a recent CS graduate. During the past few months I wrote BinaryRPC, an open-source RPC framework in modern C++20 focused on low-latency, binary WebSocket messaging.

Why I built it:

* Wanted first-class session support, pluggable QoS levels and a simple middleware chain (global, specific, multi-handler) without extra JSON/XML parsing
* Easy developer experience

A quick feature list:

* Binary WebSocket frames – minimal overhead
* Built-in session layer (login / reconnect / heartbeat)
* QoS1 / QoS2 with automatic ACK & retry
* Plugin system – rooms, msgpack, etc. can be added in one line
* Thread-safe core: RAII + folly
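
To give a flavor of the API, here is a hypothetical handler sketch in the spirit of the chat demo linked below; the names are illustrative, not the framework's confirmed signatures:

    // Hypothetical sketch only; exact names may differ from the repo.
    app.registerHandler("chat.say", [&](Session& session, Payload payload) {
        std::string msg = payload["message"].template get<std::string>();
        app.broadcast(session.room(), msg);  // fan-out via the rooms plugin
    });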

Still early (solo project), so any feedback on design, concurrency model or missing must-have features would help a lot.

Thanks for reading!

Also see "Chat Server in 5 Minutes with BinaryRPC": https://medium.com/@efecanerdem0907/building-a-chat-server-i...

efecan0 an hour ago

Thank you all for the incredible feedback and thoughtful critique. It genuinely helped shape the direction of the project.

I've just published a detailed Road Map (https://github.com/efecan0/binaryrpc-framework/blob/main/Roa...) based on the discussions here. It includes core cleanup (bye bye Folly, hello absl), a modular transport layer, and better ergonomics for real-world apps.

This was my first open-source release and seeing it hit #1 on HN was surreal. I appreciate everyone who took the time to comment. If you're interested in helping shape the project further, feel free to join the discussion or file issues.

Thanks again — Efecan

jpc0 2 hours ago

> uwebsockets zlib boost folly glog gflags fmt double-conversion openssl usockets

Lightweight is a little bit of an exaggeration. What is the reason for using boost::thread over std::thread for this? I haven’t had time to dig through the code, but most of the time I’ve found it was for compatibility with older compilers, yet you explicitly require C++20 support.

Regarding deps, why not just standardise on vcpkg? It’s already a requirement for Windows. That way you can use a manifest and ensure your dependencies are always the same version.

Better yet I would try to strip some of the more annoying to build libraries (cough folly) and replace them with more “standard” libraries that you can include using CPM and drop the requirement for vcpkg completely.

  • efecan0 2 hours ago

    Hi, author here (new-grad, v0.1.0 is literally the first public cut) – thanks a lot for the detailed dependency review!

    So the real hard deps should end up as: `uWebSockets + usockets + OpenSSL + fmt`. Everything else will be opt-in.

    Road-map update (just added):

    1. Merge `std::thread` rewrite (dev branch)
    2. Remove folly/double-conversion, glog/gflags
    3. Provide single-header client & minimal build script
    4. Add `vcpkg.json` for Windows; Linux/macOS stay pure CMake/FetchContent (manifest sketch below)
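
    For item 4, the manifest could be as small as this (a sketch; on vcpkg the `uwebsockets` port pulls in usockets transitively):

        {
          "name": "binaryrpc",
          "version": "0.2.0",
          "dependencies": [ "uwebsockets", "openssl", "fmt" ]
        }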

    Your feedback is shaping v0.2.0 – please keep it coming! Feel free to open a Discussion or issue if you spot more low-hanging DX wins. Really appreciate the help.

efecan0 19 hours ago

Hi everyone, thanks for checking out BinaryRPC!

I built this project because I needed a simple but fast WebSocket-based RPC layer for my own real-time side projects. Existing options felt heavy or JSON-only, so I wrote something binary-focused and plugin-friendly.

I’d really appreciate any feedback on:

• Overall architecture / design smells
• Concurrency model (thread-pool vs async IO)
• “Must-have” features before this is production-ready

Design notes and a 5-minute chat-server demo are in this short post: https://medium.com/@efecanerdem0907/building-a-chat-server-i...

Any comments, suggestions or PRs are welcome. Thanks again!

jayd16 19 hours ago

My immediate reaction is: why a WebSocket-based design over TCP (?) instead of gRPC with HTTP/3, UDP, multiplexing and such?

  • efecan0 19 hours ago

    I started with WebSocket over TCP for practical reasons:

    * Works everywhere today (browsers, LB, PaaS) with zero extra setup.
    * One upgrade -> binary frames; no gRPC/proto toolchain or HTTP/3 infra needed.
    * Simple reliability: TCP handles ordering; I add optional QoS2 on top.
    * Lets me focus on session/room/middleware features first; transport is swappable later.

    QUIC / gRPC-HTTP/3 is on the roadmap once the higher-level API stabilises.

    • seangrogg 11 hours ago

      Assuming you’re locked in on the browser, WebSockets are about as good as it gets at present. HTTP/3 requires WebTransport, which has been a bit of a shitshow in terms of getting things up and running so far, in my experience.

      • efecan0 4 hours ago

        Thanks, that matches my experience as well. For browser clients WebSocket is still ‘the path of least pain’, so I’m keeping it as the default. When WebTransport and QUIC become easier to deploy I’ll add an optional transport module. If you’ve tried any recent WebTransport builds and have tips or docs, I’d love to see them—feel free to open an issue or drop a link. Appreciate the confirmation!

  • jeffbee 18 hours ago

    Ironically this library is much closer to what Google uses internally than grpc is.

    • efecan0 18 hours ago

      Interesting point, thanks!

  • inetknght 18 hours ago

    I'm not the author but off the top of my head:

    - gRPC is not a library I would trust with safety or privacy. It's used a lot but isn't a great product. I have personally found several fuckups in gRPC and protobuf code resulting in application crashes or risks of remote code execution. Their release tagging is dogshit, their implementation makes you think the standard library and boost libraries are easy to read and understand, and neither takes the SDLC seriously since there aren't sanitizer builds nor fuzzing regime nor static analysis running against new commits last time I checked.

    - HTTP/3 using UDP sends performance into a crater, generally requiring _every_ packet to reach the CPU in userspace instead of being handled in the kernel or even directly by the network interface hardware

    - multiplexing isn't needed by most websocket applications

    • efecan0 18 hours ago

      Thank you for the extra information!

      I am a recent CS graduate and I work on this project alone. I chose WebSocket over TCP because it is small, easy to read, and works everywhere without extra tools. gRPC + HTTP/3 is powerful but adds many libraries and more code to learn.

      When real users need QUIC or multiplexing, I can change the transport later. Your feedback helps me a lot.

      • reactordev 18 hours ago

        The point people are beating around the bush here is that a binary RPC framework has no real need for HTTP handling, even for handshaking, when a more terse protocol of your own design would/could/might? be better.

        I totally understand your reasoning behind leaning on WebSockets. You can test with a data channel in a browser app. But if we are talking low-latency, Superman-fast, modern C++ RPC, forgeddaboutit. Look into handling an initial payload with credential negotiation outside of HTTP 1.1.

        • gr4vityWall 12 hours ago

          Shouldn't WebSockets be comparable to raw TCP + a simple message protocol on top of it once you're done with the initial handshaking and protocol upgrade?

          I wouldn’t expect latency to be an issue for long-lived connections, compared to TCP.

          • reactordev 11 hours ago

            No, but reliability is. And if you need to re-establish the connection, you’ll have to preamble your way through another handshake.

            gRPC uses HTTP/2, which has a Client/Server Stream API, to forgo the preamble. In the end though, ANY HTTP-based protocol could be throttled by infrastructure in between. TCP, on the other hand, can be encrypted and sent without any preamble - just a protocol - and only L2/L3 can throttle.

        • efecan0 17 hours ago

          You’re right: HTTP adds an extra RTT and headers we don’t strictly need.

          My current roadmap is:

          1. Keep WebSocket as the “zero-config / browser-friendly” default.
          2. Add a raw-TCP transport with a single-frame handshake: [auth-token | caps] → ACK → binary stream starts (byte-layout sketch below).
          3. Later, test a QUIC version for mobile / lossy networks.

          So users can choose:

          * plug-and-play (WebSocket)
          * ultra-low-latency (raw TCP)
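
          For item 2, a rough byte-layout sketch of that single-frame handshake; field names and sizes are placeholders, not a finalized wire format:

              #include <array>
              #include <cstdint>

              // Hypothetical wire format; sizes are assumptions, not final.
              #pragma pack(push, 1)
              struct HandshakeFrame {
                  uint8_t version;                // protocol version
                  uint8_t caps;                   // capability bit-flags (QoS level, etc.)
                  std::array<uint8_t, 32> token;  // opaque auth token
              };                                  // 34 bytes, vs ~151+ for an HTTP upgrade
              struct HandshakeAck {
                  uint8_t  status;                // 0 = accepted, else error code
                  uint32_t session_id;            // binary stream starts after this
              };                                  // 5 bytes
              #pragma pack(pop)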

          Thanks for the nudge; this will go on the transport roadmap.

          • reactordev 12 hours ago

            The actual handshake part of WebSockets is good. Send a NONCE/KEY and get back a known hash encoded however you like. This can be as little as 24 bytes or as much as 1024. Just sending the HTTP preamble eats through 151 bytes at least. Imagine that for every connection, on every machine... That’s a lot of wasted bandwidth if one can skip it.

            Compression helps, but I think if you want to win over the embedded crowd, having a pure TCP alternative is going to be a huge win. That said, do NOT abandon the HTTP support; WebSockets are still extremely useful. WebRTC is too. ;)

            • efecan0 4 hours ago

              Agree: for small devices every byte counts. Plan is to keep WebSocket for zero-config use, but add a raw-TCP handshake (~24-40 bytes) so embedded clients can skip the HTTP preamble. I’ll note that on the transport roadmap. Appreciate the insights!

            • inetknght 8 hours ago

              > Compression helps

              It's generally unwise to use compression for encrypted transport such as TLS or HTTP/S.

              https://en.wikipedia.org/wiki/Oracle_attack

              • efecan0 4 hours ago

                Good point, thank you.

                You’re right—no compression over TLS by default. If I add deflate support later it will be opt-in and disabled when the connection is encrypted.
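
                As a sketch, the gate could look like this with uWebSockets underneath (assuming uWS's `CompressOptions` enum; illustrative, not what's in the repo today):

                    #include <uwebsockets/App.h>  // header path as installed by vcpkg

                    struct PerSocketData {};

                    uWS::SSLApp({ .key_file_name = "key.pem", .cert_file_name = "cert.pem" })
                        .ws<PerSocketData>("/*", {
                            .compression = uWS::DISABLED,  // no permessage-deflate over TLS
                            .message = [](auto* ws, std::string_view data, uWS::OpCode op) {
                                ws->send(data, op);  // echo; real handler dispatch goes here
                            }
                        });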

                Appreciate the insights!

    • tgma 17 hours ago

      > I have personally found several fuckups in gRPC and protobuf code resulting in application crashes or risks of remote code execution.

      Would be great if you report such remote code executions to the authors/Google. I am sure they handle CVEs etc. There has been a security audit, e.g. https://github.com/grpc/grpc/tree/master/doc/grpc_security_a...

      > there aren't sanitizer builds nor fuzzing regime nor static analysis running against new commits last time I checked.

      Are you making shit up as you go? I randomly picked a recently merged commit, and this is the list of test suites run on the pull request. As far as I recall, this has been the practice for at least 8+ years (note the MSAN, ASAN, TSAN etc.)

      I can see various fuzzers in the code base, so that claim is also unsubstantiated: https://github.com/grpc/grpc/tree/f5c26aec2904fddffb70471cbc...

        Android (Internal CI) Kokoro build finished
        Basic Tests C Windows Kokoro build finished
        Basic Tests C# Linux Kokoro build finished
        Basic Tests C# MacOS Kokoro build finished
        Basic Tests C# Windows Kokoro build finished
        Basic Tests C++ iOS Kokoro build finished
        Basic Tests C/C++ Linux [Build Only] Kokoro build finished
        Basic Tests ObjC Examples Kokoro build finished
        Basic Tests ObjC iOS Kokoro build finished
        Basic Tests PHP Linux Kokoro build finished
        Basic Tests PHP MacOS Kokoro build finished
        Basic Tests Python Linux Kokoro build finished
        Basic Tests Python MacOS Kokoro build finished
        Bazel Basic Tests for Python (Local) Kokoro build finished
        Bazel Basic build for C/C++ Kokoro build finished
        Bazel C/C++ Opt MacOS Kokoro build finished
        Bazel RBE ASAN C/C++ Kokoro build finished
        Bazel RBE Build Tests Kokoro build finished
        Bazel RBE Debug C/C++ Kokoro build finished
        Bazel RBE MSAN C/C++ Kokoro build finished
        Bazel RBE Opt C/C++ Kokoro build finished
        Bazel RBE TSAN C/C++ Kokoro build finished
        Bazel RBE Thready-TSAN C/C++ Kokoro build finished
        Bazel RBE UBSAN C/C++ Kokoro build finished
        Bazel RBE Windows Opt C/C++ Kokoro build finished
        Bloat Diff Kokoro build finished
        Bloat Difference Bloat Difference
        Clang Tidy (internal CI) Kokoro build finished
        Distribution Tests C# Linux Kokoro build finished
        Distribution Tests C# MacOS Kokoro build finished
        Distribution Tests C# Windows Kokoro build finished
        Distribution Tests Linux (standalone subset) Kokoro build finished
        Distribution Tests PHP Linux Kokoro build finished
        Distribution Tests PHP MacOS Kokoro build finished
        Distribution Tests Python Linux Arm64 Kokoro build finished
        Distribution Tests Ruby MacOS Kokoro build finished
        Distribution Tests Windows (standalone subset) Kokoro build finished
        EasyCLA EasyCLA check passed. You are authorized to contribute.
        Grpc Examples Tests CPP Kokoro build finished
        Memory Difference Memory Difference
        Memory Usage Diff Kokoro build finished
        Mergeable Mergeable Run has been Completed!
        Migration Test MacOS Sonoma Kokoro build finished
        ObjC Bazel Test Kokoro build finished
        Portability Tests Linux [Build Only] (internal CI) Kokoro build finished
        Portability Tests Windows [Build Only] (internal CI) Kokoro build finished
        Sanity Checks (internal CI) Kokoro build finished
        Tooling Tests Python Linux Kokoro build finished
        Windows clang-cl with strict warnings [Build Only] Kokoro build finished
      • efecan0 17 hours ago

        Interesting discussion. My current goal isn’t to replace gRPC but to offer a lighter option for simple real-time apps. I’ll keep following the thread; the security links are useful, thanks.

      • inetknght 8 hours ago

        > Would be great if you report such remote code executions to the authors/Google. I am sure they handle CVEs etc.

        I wasn't getting paid to fix their code, I have no interest in helping Google for free, and don't want to help Google.

        > There has been a security audit like

        A checkbox report from six years ago. That's ancient times at the pace that things are added to gRPC.

        > Are you making shit up as you go?

        No. I used this repo [0] to reproduce a stack smash issue before `main()`. I reported the issue here [1]. I don't get paid to fix Google's things and found a workaround for the purposes I needed.

        [0]: https://github.com/keith-bennett-airmap/grpc-stacksmash

        [1]: https://github.com/protocolbuffers/protobuf/issues/12732

        > I can see various fuzzers in the code base so that claim is also unsubstantiated

        Fuzzers are cool, but they don't cover the whole codebase.

        • tgma 6 hours ago

          > I wasn't getting paid to fix their code, I have no interest in helping Google for free, and don't want to help Google.

          Extraordinary claims need extraordinary evidence. Software can be buggy, for sure, but as you yourself acknowledge, gRPC is widely deployed at many companies that do offer bug bounties. I won't be surprised if folks can occasionally find exploits in it, but if as you suggest it is so easily exploitable to get remote code execution, you in fact should be able to collect many $$$ from not just Google, but Apple, Microsoft, and many more companies who deploy gRPC services at scale. Hard to find a nicer attack target than a network facing library that you have a zero-day RCE for. (Protobuf is an even more popular target and used by virtually all Google services.)

          https://bughunters.google.com

          > No. I used this repo [0] to reproduce a stack smash issue before `main()`. I reported the issue here [1]. I don't get paid to fix Google's things and found a workaround for the purposes I needed.

          As you have figured out yourself in the repo referred to, the bug (not sure whether it's exploitable) is from protobuf, a distinct library from gRPC, and appeared under certain compiler configurations. The gRPC library does not even have a dependency on `libprotobuf`; protobuf just happens to be the most popular format used jointly with gRPC. (It could be argued to be a bug in the compiler configurations where abseil's substitution of absl::string_view for std::string_view happens and is not fully compatible.)

          Google also specifically pays for some open source project vulnerability reports (specifically covering Protobuf as an important target), so repeated claims of “I'm not getting paid, otherwise I'd have dozens of exploits” should be taken with a grain of salt and considered FUD: https://bughunters.google.com/about/rules/open-source/652133...

          > Fuzzers are cool, but they don't cover the whole codebase.

          You just went from the claim "they have no fuzzers or static analysis" to "fuzzers don't cover X". Of course, you cannot prove correctness by testing and fuzzing. Testing is not verification. Tests can only prove the existence of bugs, not their non-existence.

          In any case, I would be really interested to see a comparable RPC stack that is as well-tested as gRPC, or better...

lipovna an hour ago

Wow, it’s a very good project.

gr4vityWall 12 hours ago

Congrats on your project. Did you get to replace the old Java prototype you were using at work? It'd be interesting to see how the performance compares.

  • efecan0 4 hours ago

    We did move the service from the old Java / STOMP prototype to a BinaryRPC stack earlier this quarter, but I’m still gathering formal benchmark data before I publish anything public.

    Informally, on the same hardware and traffic pattern we see:

    • noticeable CPU headroom
    • lower p95 latency
    • higher peak throughput

    Once I have a full week of numbers cleaned up, I’ll add a short performance section to the README and post the graphs. Thanks for the interest; stay tuned.

jeffbee 18 hours ago

Breezy claims of "exactly once" are a red flag for me. Aside from that I think this framework looks fairly promising.

  • efecan0 18 hours ago

    Good catch—let me clarify what QoS 2 in BinaryRPC really does.

    It follows the MQTT-style two-phase handshake:

    1. Sender → `PUBLISH(id, data)`
    2. Receiver → `PUBREC(id)` // stored as “seen but not completed”
    3. Sender → `PUBREL(id)`
    4. Receiver → `PUBCOMP(id)` // marks id as done, then passes data to the app layer

    While an id is in “seen” state the receiver drops duplicates, so the message is delivered to user code exactly once per session even if the socket retries.

    If the client reconnects with the same session-key, the server reloads the in-flight id table, so duplicates are still filtered. If the session is lost (no session-key) we fall back to at-least-once because there is no common store.

    So: “exactly once within a persisted session; effectively once” as long as the application is idempotent. I’ll update the docs to state this more precisely. Thanks for pointing it out!
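
    For the curious, a minimal sketch of that receiver-side bookkeeping (a simplified illustration, not the actual implementation):

        #include <cstdint>
        #include <string>
        #include <unordered_map>

        // Ids in the "seen but not completed" state, with their payloads.
        struct QoS2Receiver {
            std::unordered_map<uint64_t, std::string> inflight;

            void onPublish(uint64_t id, std::string data) {
                inflight.try_emplace(id, std::move(data));  // duplicate PUBLISH -> ignored
                // reply with PUBREC(id)
            }

            template <typename Deliver>
            void onPubrel(uint64_t id, Deliver deliver) {
                if (auto it = inflight.find(id); it != inflight.end()) {
                    deliver(it->second);  // reaches the app layer exactly once
                    inflight.erase(it);
                }
                // reply with PUBCOMP(id) either way, so sender retries stay harmless
            }
        };

    Persisting `inflight` under the session-key is what keeps the duplicate filter alive across reconnects.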

sph87 13 hours ago

Modules my guy. The words “modern” and “C++” don’t go together while using headers. Also, your most basic implementation requires me to write 200+ LOC and add a dozen headers. Then it’s a ton of boilerplate code duplication for every function registered.

Basically what I am saying is - you need to place more abstraction between your code and the end-user API.

Take this line:

std::string sayMessage = payload["message"].template get<std::string>();

Why not make a templated getString<"message"> that pulls from payload? So that would instead just be:

auto sayMessage = payload["message"].as_string() or

auto sayMessage = payload.getString<"message">() or

std::string sayMessage = payload["message"] // We infer type from the assignment!!

It’s way cleaner. Way more effective. Way more intuitive.

When working on this kind of stuff, end-developer experience should always drive the process. Look at your JSON library. Well known and loved. Imagine if instead of:

message["code"] = "JOIN"; it was instead something like:

message.template set<std::string, std::string>("CODE", "JOIN");

Somehow I don’t think the latter would have seen any level of meaningful adoption. It’s weird, obtuse and overly complex. You need to hide all that.

  • efecan0 13 hours ago

    Hi.

    Thank you for the detailed feedback—this is exactly the kind of input that helps the project grow.

    You’re right: developer experience needs to be better. Right now there is too much boilerplate and not enough abstraction. Your example

        std::string msg = payload["message"];  // type inferred
    
    is the direction I want to take. I’ll add a thin wrapper so users can write `payload["key"].as_string()` or even rely on assignment type-inference. Refactoring the basic chat demo to be much shorter is now my next task.
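
    Roughly the wrapper shape I have in mind (a sketch only; it assumes an nlohmann-style node with a `get<T>()` member underneath):

        #include <string>
        #include <type_traits>

        // Hypothetical helper: a thin typed view over one payload field.
        template <typename Node>
        class FieldRef {
            const Node& n_;
        public:
            explicit FieldRef(const Node& n) : n_(n) {}
            std::string as_string() const { return n_.template get<std::string>(); }
            operator std::string() const { return as_string(); }  // inferred on assignment
            template <typename T>
                requires std::is_arithmetic_v<T>  // ints, doubles, bool
            operator T() const { return n_.template get<T>(); }
        };

    Once the payload's `operator[]` returns a `FieldRef`, `std::string msg = payload["message"];` works as you suggested.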

    About C++20 modules: I agree they are the future. The single-header client was a quick MVP, but module support is on the roadmap as compiler tooling matures.

    If you have more DX ideas or want to discuss API design, please open an issue on GitHub; I’d be happy to collaborate.

    Thanks again for the valuable feedback!

    • const_cast 13 hours ago

      On the topic of modules: a single-header template implementation is still the most practical and quick way to distribute a library. Module support is currently iffy - I wouldn't use them.

      • sph87 13 hours ago

        I love modules. Honestly. I advocate usage simply as a forcing function for upstream. Tooling support is iffy because usage is low. Usage is low because tooling is iffy. All of the major players in the build space have reasonably mature levels of support though. So it's one of those things where compilers have outpaced IDEs.

        • jpc0 an hour ago

          > Tooling support is iffy because usage is low. Usage is low because tooling is iffy.

          There’s effectively one developer working on module support in clangd. I have submitted more than one issue with minimal reproducible examples of hard clangd crashes, and every one is still open or I’ve given up on following them.

          I’m all for modules myself, and when you aren’t hitting the edge cases they are absolutely amazing.

        • efecan0 13 hours ago

          Thanks for the great follow-up discussion, everyone. This really highlights the classic "pragmatism vs. vision" debate in the C++ ecosystem.

          You've all made it very clear that from a user's perspective, a single-header library is still the gold standard for ease of use and integration. The ideal scenario is for a developer to just #include "binaryrpc.hpp" and have everything work without touching their build system, and I now see that as a crucial goal for the project. My framework isn't there yet, and the feedback has been a wake-up call that the current multi-header approach creates too much friction for new users.

          So, my path forward is clear:

          1. First, focus on simplifying the core API based on the initial feedback (e.g., creating wrapper objects for payloads).
          2. Then, work towards providing a single-header distribution for maximum compatibility and ease of use.

          I agree that modules are the future. But for now, delivering the most practical and frictionless developer experience seems to be the most important priority.

          Thanks again for guiding me on this.

denizdoktur 18 hours ago

Lightweight, well-designed, and solves a real need. Impressive.

dailker 17 hours ago

Nice, I loved it dude. I hope you’re successful with this.

MuffinFlavored 17 hours ago

> None, AtLeastOnce, ExactlyOnce with retries, ACKs & two‑phase commit, plus pluggable back‑off strategies & per‑session TTL.

Sounds like RabbitMQ/AMQP/similar over WebSocket?

  • efecan0 17 hours ago

    It looks similar on the surface, but scope and goals are different:

    * BinaryRPC = direct request/response calls with optional QoS (per session).
      – No exchanges/queues, no routing keys.
      – One logical stream, messages mapped to handlers.

    * RabbitMQ / AMQP = full message-broker with persistent queues, fan-out, topic routing, etc.

    So you could say BinaryRPC covers the transport/QoS part of AMQP, but stays lightweight and broker-less. If an app later needs full queueing we can still bridge to AMQP, but the core idea here is “RPC first, minimal deps”.