SOTA AI at EOY 2026 a reasoning model?
62% chance

Will any state-of-the-art (SOTA) general-purpose AI system at the end of 2026 be a reasoning model?

Resolution Criteria:

This market will resolve YES if, on December 31, 2026, any AI system widely considered SOTA meets all of the following criteria for a "reasoning model":

  1. It is a Language Model - The system must be able to take language as input and produce language as output. An example of what would not count: AlphaGo.

  2. It uses inference-time compute - The system must be able to perform more than a single forward pass before producing its final output, and it must be able to scale inference compute for better performance (a short illustrative sketch follows these criteria).

  3. The extra inference compute produces an interpretable artifact - The way the model uses extra inference compute must lead to some artifact that is in a format understandable by humans (like a classic chain of thought) or it must be possible to translate this artifact into a format that can be interpreted by humans. For example, a Coconut model counts as a reasoning model here, as long as we have some method to translate the latents it produces during reasoning into text.

This artifact only needs to be in a format that is theoretically understandable by humans (like text). It is not necessary that humans can actually follow the reasoning displayed. This market also resolves YES if the reasoning model develops steganography that renders the text content unreadable to humans, as long as the format remains text-based.
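For illustration only, here is a minimal, hedged sketch of what criteria 2 and 3 describe, using self-consistency-style sampling as one example of scaling inference-time compute. The `generate_with_cot` function is a hypothetical stand-in for a real model call (no specific API is implied); the collected chain-of-thought text is the kind of human-interpretable artifact criterion 3 refers to.

```python
# Toy sketch (assumptions labeled): spending extra inference compute by
# sampling several chain-of-thought traces and majority-voting the answer.
# `generate_with_cot` is a hypothetical stub, not any real model API.

import random
from collections import Counter


def generate_with_cot(prompt: str, seed: int) -> tuple[str, str]:
    """Hypothetical model call: returns (chain_of_thought, final_answer)."""
    random.seed(seed)
    # Toy behavior: pretend the model reasons about 17 + 25.
    steps = [
        "17 + 25: add the tens (10 + 20 = 30),",
        "then the ones (7 + 5 = 12),",
        "so the total is 30 + 12 = 42.",
    ]
    answer = "42" if random.random() > 0.1 else "43"  # occasional error
    return " ".join(steps), answer


def answer_with_scaled_compute(prompt: str, n_samples: int) -> tuple[str, list[str]]:
    """Spend more inference compute (criterion 2) by sampling n_samples traces,
    then majority-vote on the final answer. The collected chains of thought
    are the interpretable artifact (criterion 3)."""
    traces, answers = [], []
    for i in range(n_samples):
        cot, ans = generate_with_cot(prompt, seed=i)
        traces.append(cot)
        answers.append(ans)
    best_answer, _ = Counter(answers).most_common(1)[0]
    return best_answer, traces


if __name__ == "__main__":
    answer, traces = answer_with_scaled_compute("What is 17 + 25?", n_samples=5)
    print("answer:", answer)       # more samples -> more compute, more robust answer
    print("artifact:", traces[0])  # human-readable reasoning text
```

Increasing `n_samples` spends more inference compute and tends to make the voted answer more reliable, while every sampled trace remains available as readable text, which is the sense of "interpretable artifact" used above.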

SOTA Determination:

The determination of which AI system is "state-of-the-art" will be based on a combination of:

  • General consensus in the AI community

  • Performance on major benchmarks

  • My subjective assessment of overall capabilities and impact

The market resolves YES if any of the SOTA AI systems meets this definition of a reasoning model.

