Will I focus on the AI alignment problem for the rest of my life?
Ṁ529 · Closes 2034 · 60% chance

Background:

I have spent about 2500 to 3800 hours on AI alignment since Feb 2022. This is a rough 75% confidence interval derived from a cursory inspection of activity data I have collected in that period (mostly browser and conversation history).

This works out to around 5.0 to 7.6 hours per day, which seems a little high to me, but I cast a wide net for what counts (anything done with the intention of reducing AI risk: thinking, reading, planning, talking, executing subgoals), so I'm not surprised.
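
For anyone who wants to sanity-check that conversion, here is a minimal sketch of the arithmetic in Python. The end date of the window is assumed to be mid-June 2023 (the exact date of the estimate isn't stated above), so the day count is approximate.

```python
from datetime import date

# Back-of-the-envelope check of the hours-per-day figure.
# Assumption: the window runs from 1 Feb 2022 to ~15 Jun 2023 (end date not stated).
start = date(2022, 2, 1)
end = date(2023, 6, 15)
days = (end - start).days  # ~499 days

low_total, high_total = 2500, 3800  # rough 75% confidence interval for total hours
print(f"{low_total / days:.1f} to {high_total / days:.1f} hours/day over {days} days")
# -> roughly 5.0 to 7.6 hours/day, matching the range above
```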

If this seems off to you, let me know whether you believe it would be worthwhile to perform a more rigorous analysis, and what exactly that would involve.

Resolution criteria:

  • Resolves YES if I continue to be committed to AGI notkilleveryoneism (whether directly or indirectly) till the end of my natural lifespan. Examples:

    • Doing technical research, engineering, advocacy, field-building, etc.

    • Broadly, anything aimed towards increasing "dignity points" counts.

  • Resolves NO if, at any point before I die, I decide to stop treating "what is going to reduce AI-related x-risk?" as a motivating factor in all major career decisions.

    • This would be the case if I no longer consider AI risk a main personal priority.

    • Broadly, "losing will": whether due to lack of hope, interest, money, etc.

  • Resolves Ambiguous if I am alive but rendered incapable of contributing.

    • This probably won't happen, but to be prepared for the worst, I have notified the beneficiaries of my insurance policies of my wishes: to distribute my assets as they see fit towards meeting my family's needs, and to allocate the rest towards funding efforts to mitigate AI x-risk.

Please let me know if these criteria seem unclear or vague, and I'll update them with the help of your suggestions. In particular, "focus on" is hard to judge (what if I'm doing something adjacent or only tangentially linked? what if I'm burnt out and do something unrelated for a few weeks to recharge? what is the upper bound on compromise/trade-off against competing goals? what if I retire but am still casually involved on an ad hoc basis?), so I'm accepting input on how flexible a definition to use for the purposes of this market.

Comments:

I think you should keep the market open much longer.

(predicted YES) @NicoDelon thanks, extended

How will you ensure someone resolves YES if you die while working on the problem?

End of history illusion. Betting NO.

What if AI alignment is solved in your lifetime?

(predicted YES) @ampdot Resolves YEEEES

(predicted YES) @ampdot Btw how do you define "AI alignment solved"? Have you written a post anywhere?
