The Sparsely-Gated Mixture-of-Experts (MoE) layer was introduced in the paper "Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer" (Shazeer et al., 2017).
Subject
Sparsely-Gated Mixture-of-Experts (MoE)
Predicate
introduced_in_paper
Object
Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer (Shazeer et al., 2017)
Primary source · preprint · 2017-01-23
Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer — arXiv:1701.06538 (Shazeer, Mirhoseini, Maziarz, Davis, Le, Hinton, Dean)
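For context, the paper's core mechanism is noisy top-k gating: the gating network adds tunable Gaussian noise to the expert logits, keeps only the top k, and softmaxes them, so every other expert receives exactly zero weight and is never evaluated. Below is a minimal NumPy sketch of that gating under stated assumptions; the shapes, the value of `k`, and the stand-in experts are illustrative, not the paper's code.

```python
import numpy as np

def softplus(v):
    # log(1 + e^v), computed stably
    return np.logaddexp(0.0, v)

def softmax(v):
    e = np.exp(v - np.max(v))
    return e / e.sum()

def noisy_top_k_gate(x, w_gate, w_noise, k, rng):
    """Noisy top-k gating (Shazeer et al., 2017):
    H(x) = x @ W_g + N(0, 1) * softplus(x @ W_noise);
    keep the top k logits, set the rest to -inf, then softmax."""
    h = x @ w_gate + rng.standard_normal(w_gate.shape[1]) * softplus(x @ w_noise)
    logits = np.full_like(h, -np.inf)
    top_k = np.argpartition(h, -k)[-k:]  # indices of the k largest logits
    logits[top_k] = h[top_k]
    return softmax(logits)               # exactly zero outside the top k

# Toy usage: route one token to k=2 of 8 experts (all sizes illustrative).
rng = np.random.default_rng(0)
d_model, num_experts, k = 16, 8, 2
w_gate = rng.standard_normal((d_model, num_experts))
w_noise = rng.standard_normal((d_model, num_experts))
x = rng.standard_normal(d_model)

gates = noisy_top_k_gate(x, w_gate, w_noise, k, rng)
experts = [lambda v, i=i: v * (i + 1) for i in range(num_experts)]  # stand-in experts
# Only the k experts with nonzero gate run -- the sparsity that lets total
# parameter count grow without growing per-example computation.
y = sum(gates[i] * experts[i](x) for i in np.flatnonzero(gates))
```

In the paper this gate sits between stacked LSTM layers, and the learned noise term helps balance how many examples each expert receives; the sketch above shows only the routing arithmetic for a single token.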