Very cool. Seeing how almost everything, from WiFi to NVMe SSDs (to apparently USB ports sometimes?), is connected to it, is PCIe the only high-speed interconnect we have for peripherals to communicate with modern CPUs?
Probably a good thing SLI fell out of fashion. There are no consumer boards with multiple x16 slots, only a few with two x8 (gated behind a "mode" switch). A few years ago it looked like we were on our way to boards with four full x16 slots. For CUDA/LLM/whatever, does it really matter if the cards are in x1 slots?
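Rough numbers I sketched, assuming PCIe 4.0 at roughly 2 GB/s of usable bandwidth per lane and a hypothetical 40 GB weight blob (both are illustrative assumptions, not real specs):

```python
# Back-of-envelope: time to copy model weights to a GPU over PCIe.
# Assumes PCIe 4.0 at ~2 GB/s usable per lane after encoding/protocol
# overhead; the 40 GB weight size is an invented, illustrative number.
GBPS_PER_LANE = 2.0   # approx. usable GB/s per PCIe 4.0 lane
weights_gb = 40.0     # hypothetical weight blob

for lanes in (1, 4, 8, 16):
    seconds = weights_gb / (GBPS_PER_LANE * lanes)
    print(f"x{lanes:<2} -> ~{seconds:5.1f} s to load {weights_gb:.0f} GB")
```

So for inference, where the weights load once and stay resident, x1 mostly just costs you startup time; anything that shuffles data between cards constantly (multi-GPU training, tensor parallelism) is a different story.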
Nice! One suggestion - please add AM4 socket boards. With current memory prices, AM5 with DDR5 is becoming unattainable for some. DDR4 prices are rising as well. But not nearly as bad as DDR5.
Can anyone recommend a specific, well-made, high-performance motherboard with loads of PCIe lanes and expansion slots, and sensible lane topology?
All the motherboards these days make me feel claustrophobic. My current workstation is pretty old, but feels like it had more expansion capability (relative to its time) than what's on the market today.
You’ll have to be more specific about your price range. There are a lot of server and workstation chipsets/platforms that will have a large number of PCIe lanes, but you will pay for them.
I really suggest not seeking a lot of PCIe lanes unless you really need them right now, though. The price premium for a platform with a lot of extra PCIe is very steep once you get past consumer boards. It would be a shame to spend a huge premium on a server board and settle for slower, older-generation CPUs, only to have all of those slots sit empty.
It’s a good idea to add up the PCIe devices you will use and the actual bandwidth they need. You lose very little by running a GPU in a PCIe x8 slot instead of a full x16 slot, for example. A 10G Ethernet card only needs 1 lane of PCIe 4.0. Even fast SSDs can get away with half of their lanes and you’ll never notice except in rare cases of sustained large file transfers.
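A minimal sketch of that arithmetic (the per-lane figures are approximate, and the per-device bandwidth numbers are my own assumptions, not vendor specs):

```python
# Sanity-check whether devices fit in fewer lanes than their connector.
# Per-lane usable throughput is approximate (encoding/protocol overhead);
# the device figures below are illustrative assumptions.
LANE_GBPS = {3: 0.985, 4: 1.969, 5: 3.938}  # approx. GB/s per lane per PCIe gen

devices = [
    # (name, needed GB/s, PCIe gen, lanes actually wired)
    ("10G Ethernet NIC",  1.25, 4, 1),
    ("Gen4 NVMe SSD",     7.0,  4, 2),   # run at x2 instead of x4
    ("GPU (gaming load)", 12.0, 4, 8),   # x8 instead of x16
]

for name, need, gen, lanes in devices:
    have = LANE_GBPS[gen] * lanes
    verdict = "fits" if have >= need else f"capped at {have:.1f} GB/s"
    print(f"{name:18s} needs {need:4.1f} GB/s, x{lanes} gen{gen} gives {have:5.1f} GB/s -> {verdict}")
```

The SSD case is the tradeoff in action: x2 caps peak sequential around 3.9 GB/s, which only shows up in those long transfers.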
>Even fast SSDs can get away with half of their lanes and you’ll never notice except in rare cases of sustained large file transfers
Sorta yes, but kinda the other way around: you'll mostly notice it in short, high bursts of I/O. This is mostly the case for people who use them to run remotely mounted VMs.
Nowadays most NVMe drives have a cache on board (DDR3 memory is common), which is how they manage to keep up such high speeds. However, once you exhaust the cache, speeds drop dramatically.
But your point is valid that very few people actually notice a difference
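A toy model of the effect (all numbers invented for illustration): the drive looks fast until the cache runs out, then the average sinks toward the sustained rate.

```python
# Toy model of SSD write-cache exhaustion. Numbers are invented for
# illustration: a fast cache absorbs writes until full, then throughput
# falls to the sustained (post-cache) rate.
CACHE_GB = 20.0      # hypothetical cache capacity
FAST_GBPS = 6.0      # write speed while the cache has room
SLOW_GBPS = 1.5      # sustained speed once the cache is exhausted

def effective_gbps(transfer_gb: float) -> float:
    """Average throughput for a single large write of `transfer_gb`."""
    if transfer_gb <= CACHE_GB:
        return FAST_GBPS
    fast_time = CACHE_GB / FAST_GBPS
    slow_time = (transfer_gb - CACHE_GB) / SLOW_GBPS
    return transfer_gb / (fast_time + slow_time)

for gb in (5, 20, 50, 200):
    print(f"{gb:4d} GB write -> ~{effective_gbps(gb):.2f} GB/s average")
```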
Some builds I kept tabs on:
Let's Encrypt documented their early 2021 whitebox that used 128 PCIe 4.0 lanes, mainly for storage: https://letsencrypt.org/2021/01/21/next-gen-database-servers...
Troy Hunt (HaveIBeenPwned) recently solicited upgrade advice from the internet and settled on an Asus Pro WS TRX50-SAGE WIFI (which doesn't appear to be in the MoboMaps database yet): https://gist.github.com/troyhunt/a6e565981e4769976e9cffb705f...
I’ve been struggling to find an AM5 board that can run three MI50s at x4. This is perfect, thank you.
Hmm, are you sure about some of the PCIe slots? I think some marked as x4 get downgraded to x1 on these boards…
Further edit - this may be accurate - how are you getting this / confirming it?
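On Linux you can at least check what your own board actually negotiated; a minimal sketch using the standard PCI sysfs attributes (stock kernel paths, no extra tooling assumed):

```python
# Print negotiated vs. maximum PCIe link width/speed for every device,
# using the standard Linux sysfs attributes. A slot "marked x4" that
# trained at x1 shows up here as current_link_width = 1.
# Note: links can also drop speed at idle to save power, so check under load.
from pathlib import Path

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    try:
        cur_w = (dev / "current_link_width").read_text().strip()
        max_w = (dev / "max_link_width").read_text().strip()
        cur_s = (dev / "current_link_speed").read_text().strip()
    except (FileNotFoundError, OSError):
        continue  # not every PCI function exposes link attributes
    flag = "  <-- downtrained?" if cur_w != max_w else ""
    print(f"{dev.name}: x{cur_w}/x{max_w} @ {cur_s}{flag}")
```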
How can I contribute the data for the boards I own which are not on the site?
I wish all manufacturers clearly gave info like this up front. AM4 boards would be nice.
Yeah, my ASRock has a nice map of every lane and interface and where they are connected on the board. Especially important as some devices go through a second I/O expander.
Whoa. This is so cool and helpful. Too bad my board is Intel. Is there a way to contribute to this?
I dropped a message to the creator. Fingers crossed they open up the motherboard database so we can make contributions.
For disclosure, this was created by "Ronin Wilde" - https://www.youtube.com/watch?v=cgdXj75VSMo
I found it useful and thought others might also like it.
Wow, this is great! I don't know how they generate this, but it's really impressive. One thing that has surprised me is that some older dual-socket workstations have tons of PCIe lanes, but none of them seem to be hooked to the second CPU.
Very nice! Just a note (as the site says on the bottom left): this can vary depending on the CPU you use. It would be nice to be able to select all the different variations of supported CPUs as a future feature.
That is so incredibly useful. Hardware vendors do such a bad job of properly advertising how many GPUs will actually work, and with what combinations of M.2 slots in use.
Warning: addictive site :)
Legendary!