Will a flagship (>60T training bytes) open-weights LLM from Meta which doesn't use a tokenizer be released in 2025?

Resolves YES if, in 2025, Meta releases the weights of an LLM trained on at least 60T bytes of data (roughly equivalent to the 15T tokens used to train the Llama 3.1 models) that does not use standard fixed-vocabulary tokenization.
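The 60T-byte figure follows from a rough bytes-per-token conversion. A quick sanity check, assuming the common rule of thumb of about 4 bytes of UTF-8 text per BPE token (my assumption; the market only states the two totals):

```python
# Sanity check on the 60T-byte threshold, assuming ~4 bytes of UTF-8 text
# per BPE token (an assumed conversion factor, not stated in the market).
llama_tokens = 15 * 10**12      # Llama 3.1 training set, in tokens
bytes_per_token = 4             # rough average for Llama-style vocabularies
print(llama_tokens * bytes_per_token)  # 60_000_000_000_000, i.e. 60T bytes
```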

A qualifying model must be released under a license roughly as permissive as the Llama 3.1 license.

This market was spurred by recent research from Meta showing a proof of concept for a tokenizer-free LLM (the Byte Latent Transformer, linked below). A qualifying model from Meta does not need to use the patching technique from that paper, as long as it does not use tokenization.

https://ai.meta.com/research/publications/byte-latent-transformer-patches-scale-better-than-tokens/
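For readers unfamiliar with the patching idea: the paper groups raw bytes into variable-length patches, placing a patch boundary wherever a small byte-level language model finds the next byte hard to predict (high entropy). The sketch below is a toy illustration only; the function names, the 16-byte window, and the threshold are all invented here, and the empirical window-entropy stands in for the learned byte LM the paper actually uses.

```python
# Toy sketch of entropy-based byte patching in the spirit of the BLT paper.
# The "entropy model" here is a crude empirical proxy, not the paper's method.
import math

def next_byte_entropies(data: bytes, window: int = 16) -> list[float]:
    # Stand-in for a learned byte LM: score each position by the Shannon
    # entropy of byte frequencies in the trailing context window.
    entropies = []
    for i in range(len(data)):
        ctx = data[max(0, i - window):i + 1]
        counts: dict[int, int] = {}
        for b in ctx:
            counts[b] = counts.get(b, 0) + 1
        total = len(ctx)
        h = -sum((c / total) * math.log2(c / total) for c in counts.values())
        entropies.append(h)
    return entropies

def patch(data: bytes, threshold: float = 2.0) -> list[bytes]:
    # Start a new patch wherever the (toy) entropy exceeds the threshold,
    # so unpredictable regions end up in shorter patches.
    entropies = next_byte_entropies(data)
    patches, start = [], 0
    for i, h in enumerate(entropies):
        if i > start and h > threshold:
            patches.append(data[start:i])
            start = i
    patches.append(data[start:])
    return patches

print(patch("the quick brown fox".encode()))
```

The design point this illustrates is why patching can "scale better than tokens": patch boundaries adapt to the data instead of coming from a fixed vocabulary, so easy spans cost fewer transformer steps than they would as tokens.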
