Will ELK still be a major problem by Oct 20, 2026 (according to me)?
30 traders · Ṁ967 · closes 2026 · 72% chance

ELK = Eliciting Latent Knowledge.

The spirit of the question is to determine whether we will still need a solution to whatever the hard parts of the ELK problem, as currently understood, turn out to be.

This question will resolve No if any of the following are true:

  • The problem of worst-case ELK is solved AND the solution can be implemented in practice fairly easily

  • It is shown conclusively that the ELK problem is and will continue to be easy to solve in practice up to at least superhuman-level AI, or that solving it will not be necessary for such systems.

  • A substantially simpler subproblem of ELK is identified as the necessary crux for alignment, and efforts shift to this simpler subproblem

This question will still resolve Yes if any of the following are true:

  • The ELK problem is subsumed by another, more general framing, a solution to which would imply all or most of a solution to ELK

  • There is some evidence that ELK is easy in practice in some current models, but there is no strong reason to expect this to generalize to much more powerful models

  • There exists in theory a method for building an AGI that solves the ELK problem, but the method is prohibitively difficult or uncompetitive.

In all ambiguous situations, I will consult alignment researchers and exercise my own judgement in resolving the question. When in conflict, I will prioritize the spirit of the question over the letter.


If you thought Mechanistic Anomaly Detection (https://www.alignmentforum.org/posts/vwt3wKXWaCvqZyF74/mechanistic-anomaly-detection-and-elk) captured the hard and important parts of the alignment problem, would this lead to a No resolution based on:

  • A substantially simpler subproblem of ELK is identified as the necessary crux for alignment, and efforts shift to this simpler subproblem

predicted YES

@datagenproc Note that the primary strategy mentioned in that post for attacking mechanistic anomaly detection involves heuristic arguments (or "explanations").
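(For intuition only: the toy sketch below is a generic distributional anomaly detector, a simplified stand-in rather than the heuristic-arguments approach from the linked post. It fits a Gaussian to a model's hidden activations on trusted inputs and flags new inputs whose activations are far away in Mahalanobis distance; all data here is synthetic and the setup is hypothetical.)

```python
# Toy anomaly detection sketch (NOT the method from the linked post):
# fit a Gaussian to activations gathered on trusted inputs, then flag
# inputs whose activations are unusually far away in Mahalanobis distance.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for hidden activations; a real setup would
# extract these from an intermediate layer of an actual model.
trusted = rng.normal(0.0, 1.0, size=(1000, 16))      # trusted-distribution inputs
normal_test = rng.normal(0.0, 1.0, size=(5, 16))     # same "mechanism"
anomalous_test = rng.normal(3.0, 1.0, size=(5, 16))  # different "mechanism"

# Fit mean and (regularized) covariance on the trusted activations.
mu = trusted.mean(axis=0)
cov = np.cov(trusted, rowvar=False) + 1e-6 * np.eye(trusted.shape[1])
cov_inv = np.linalg.inv(cov)

def mahalanobis(x):
    # Per-row distance sqrt(d^T C^{-1} d) from the trusted distribution.
    d = x - mu
    return np.sqrt(np.einsum("ij,jk,ik->i", d, cov_inv, d))

# Calibrate a threshold on trusted data, then score held-out inputs.
threshold = np.quantile(mahalanobis(trusted), 0.99)
print(mahalanobis(normal_test) > threshold)      # mostly False
print(mahalanobis(anomalous_test) > threshold)   # True: flagged as anomalous
```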

I believe that E.L.K. will be subsumed by W.A.P.I.T.I., Withholding Acknowledged Posited Inter-factual Transitory Information

https://docs.google.com/document/d/1WwsnJQstPq91_Yh-Ch2XRL8H_EpsnjrC1dwZXR37PC8/edit

How do you expect you would resolve this question if on Oct 20, 2026, worst-case ELK remains unsolved, there has been no conclusive demonstration that solving ELK is unnecessary in practice, and you no longer consider ELK or any of its subproblems to be the necessary crux for alignment? (The spirit of this question is meant to be like, what if ELK ends up like CEV?) Resolves to No?

predicted YES

@CharlesFoster I would resolve No, but there is a substantial burden of evidence that would need to be overcome to convince me that it isn't a crux, and I would defer extra-hard to people who still think ELK is a crux in cases where my evidence is weak/inconclusive.

