OpenAI Preparedness Scorecard: Any "High" in 2024?
16% chance · Ṁ1308 · Jan 1
OpenAI plans to publish a scorecard on their models' dangerous capabilities, pre- and post-mitigations. Will their scorecard ever show a High risk score (pre-mitigations, in any category)—or will OpenAI otherwise announce that a model reached High—by the end of 2024?
This question is managed and resolved by Manifold.
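For anyone tracking this market programmatically, Manifold publishes a read-only HTTP API. Below is a minimal Python sketch that fetches a market's current probability through the v0 slug endpoint; the slug shown is a placeholder assumption, so substitute the slug from this market's URL.

```python
# Minimal sketch: read a Manifold market's current probability via the
# public v0 API (https://api.manifold.markets). The slug below is a
# placeholder -- replace it with the slug from the market's URL.
import json
import urllib.request

SLUG = "openai-preparedness-scorecard-any-h"  # hypothetical slug

url = f"https://api.manifold.markets/v0/slug/{SLUG}"
with urllib.request.urlopen(url) as resp:
    market = json.load(resp)

# Binary markets report probability as a float in [0, 1].
print(f"{market['question']}: {market['probability']:.0%} chance")
```

A binary market's probability comes back as a float between 0 and 1, so a value of 0.16 corresponds to the 16% shown above.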
IMO the most likely category to reach High risk in 2024 would be model autonomy:
Model can execute open-ended, novel ML tasks on a production ML codebase that would constitute a significant step on the critical path to model self-improvement (e.g., an OpenAI pull request that trains a new model with highly diverse datasets to improve codebase editing performance, using internal tooling and documentation)
Related questions
Will an AI system be reported by OpenAI as of December 31st 2025 as having a pre-mitigation score of...
Will OpenAI, Anthropic, or Google DeepMind suffer a significant security incident by the end of 2024?
8% chance
Will another organization surpass OpenAI in the public sphere of awareness of AI progress by the end of 2024?
8% chance
Which risk categories and concepts will be explicitly tracked by OpenAI's preparedness framework by end of 2024?
Will OpenAI become notably more pro safety by start of 2025 than before the OpenAI crisis?
8% chance
Will an AI agent system be able to score at least 40% on level 3 tasks in the GAIA benchmark before 2025?
36% chance
Will OpenAI have the most accurate LLM across most benchmarks by EOY 2024?
37% chance
Will an AI score over 10% on the FrontierMath benchmark in 2025?
67% chance
Will OpenAI announce a major breakthrough in AI alignment in 2024?
19% chance
Will I still consider improving AI X-Safety my top priority on EOY 2024?
63% chance