dontoni 9 hours ago

GPT-4 reportedly not only has far more parameters than GPT-3.5 but also a different architecture, using a Mixture of Experts approach rather than a single dense GPT model.

What is interesting to me is that they haven’t developed this idea further (or at least haven’t publicly disclosed doing so). What if you had, say, 37 “experts”, but each one very small? Is it a requirement that each expert be a fully functional LLM on its own? Can’t they interconnect the way the brain’s lobes do?
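
To make the question concrete, here is a rough sketch of what an “expert” usually is in these designs: a small routed feed-forward block that lives inside a single transformer layer, not a standalone LLM. This is a generic top-k routed MoE layer in PyTorch, not anything OpenAI has described; the class name and sizes are made up.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TinyMoELayer(nn.Module):
        # Hypothetical sizes; a real model would tune all of these.
        def __init__(self, d_model=64, n_experts=8, d_hidden=128, top_k=2):
            super().__init__()
            self.top_k = top_k
            self.router = nn.Linear(d_model, n_experts)  # learns which experts get each token
            self.experts = nn.ModuleList(
                nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(),
                              nn.Linear(d_hidden, d_model))
                for _ in range(n_experts)
            )

        def forward(self, x):  # x: (n_tokens, d_model)
            weights, idx = self.router(x).topk(self.top_k, dim=-1)
            weights = F.softmax(weights, dim=-1)  # mixing weights over the chosen experts
            out = torch.zeros_like(x)
            for k in range(self.top_k):
                for e, expert in enumerate(self.experts):
                    mask = idx[:, k] == e  # tokens whose k-th pick is expert e
                    if mask.any():
                        out[mask] += weights[mask, k:k+1] * expert(x[mask])
            return out

In a full MoE transformer the experts all share the same attention layers and residual stream, so no expert ever operates as a model on its own, which is at least part of the “interconnected lobes” picture.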

  • vednig 2 hours ago

    GPT models already have a non-linear approach to model naming conventions. They should discuss those in detail.