Will "AGI Safety and Alignment at Google DeepMind:
..." make the top fifty posts in LessWrong's 2024 Annual Review?
26% chance
As part of LessWrong's Annual Review, the community nominates, writes reviews, and votes on the most valuable posts. Posts are reviewable once they have been up for at least 12 months, and the 2024 Review resolves in February 2026.
This market will resolve to 100% if the post "AGI Safety and Alignment at Google DeepMind: A Summary of Recent Work" is one of the top fifty posts of the 2024 Review, and 0% otherwise. The market was initialized to 14%.
This question is managed and resolved by Manifold.
Related questions
Will "AI alignment researchers don't (seem to) stack
" make the top fifty posts in LessWrong's 2023 Annual Review?
29% chance
Will "There should be more AI safety orgs" make the top fifty posts in LessWrong's 2023 Annual Review?
34% chance
Will "There should be more AI safety orgs" make the top fifty posts in LessWrong's 2023 Annual Review?
37% chance
Will "Cognitive Emulation: A Naive AI Safety Proposal" make the top fifty posts in LessWrong's 2023 Annual Review?
37% chance
Will "The Checklist: What Succeeding at AI Safety W..." make the top fifty posts in LessWrong's 2024 Annual Review?
36% chance
Will "AI catastrophes and rogue deployments" make the top fifty posts in LessWrong's 2024 Annual Review?
41% chance
Will "My May 2023 priorities for AI x-safety: more ..." make the top fifty posts in LessWrong's 2023 Annual Review?
25% chance
Will "Deep atheism and AI risk" make the top fifty posts in LessWrong's 2024 Annual Review?
42% chance
Will "AI Control: Improving Safety Despite Intentio..." make the top fifty posts in LessWrong's 2023 Annual Review?
86% chance
Will "AGI and the EMH: markets are not expecting al..." make the top fifty posts in LessWrong's 2023 Annual Review?
22% chance