Which "proper date" will be the most beneficial for @Krantz?
2
Ṁ646
2030
10%
Jamie Joyce (@JustJamieJoyce)
7%
Daniel Hillis
5%
Daniel Sheehan (@danielsheehan45)
5%
Other
4%
@prophet (Metadao)
4%
Danny Jones
4%
Roman Yampolskiy
3%
Balaji Srinivasan
3%
Rob Bensinger
3%
Ben Goertzel
3%
Jack Dorsey
3%
Eliezer Yudkowsky
3%
Matthew Pines
3%
Eric Weinstein
3%
Max Tegmark
3%
Robin Hanson
3%
Curt Jaimungal
3%
Liv Boeree
3%
Adam M. Curry

I want to have a long, infohazardous philosophy talk with several prominent individuals in the AI/crypto/decentralized-information space about how to incentivize and scale the mechanistically interpretable alignment of decentralized symbolic AI fast enough to beat ML to the punch.

I want that more than anything else money can buy, so I will be putting all of my available funding towards incentivizing these sorts of discussions.

I'm using this prediction to survey Manifold on who they believe would be best at either (1) understanding and implementing the solution I'm proposing OR (2) charitably identifying precisely where the solution fails.

This prediction will resolve to whoever takes the time to understand my claims and either (1) helps implement them at scale (more than 100,000 verified humans earning revenue by performing interpretable alignment labor) OR (2) convinces me that this particular approach is not a simple solution to mechanistically interpretable decentralized alignment of humanity.

Here are some of the "proper date" markets for reference:

https://manifold.markets/Krantz/if-aella-and-i-go-on-a-proper-date?r=S3JhbnR6

https://manifold.markets/Krantz/if-krantz-goes-on-a-proper-date-wit?r=S3JhbnR6

https://manifold.markets/Krantz/if-krantz-goes-on-a-proper-date-wit-llS5nI9Etn?r=S3JhbnR6

https://manifold.markets/Krantz/if-eliezer-charitably-reviewed-my-w?r=S3JhbnR6

https://manifold.markets/Krantz/if-krantz-hires-danny-sheehan-as-hi?r=S3JhbnR6

A sincere "thank you" to anyone who takes this seriously.

If you believe it is tautologically impossible for someone you don't know to have a solution to a problem you don't understand, then you can simply view this as a market for predicting who will be the person I give all my money to for interpretably and cheerfully pointing out exactly where my claims fail. That is something I would genuinely love to do. Please consider helping me.

Also, I will not profit from this market. Any profit I receive will be donated to the user who has invested the most mana into this prediction through wagers or liquidity.


OR (2) charitably identifying precisely where the solution fails.

[…]

OR (2) convinces me that this particular approach is not a simple solution to mechanistically interpretable decentralized alignment of humanity.

Credit where it is due for developing enough humility to acknowledge that being disproved is indeed a possibility. Could this be the start of the krantz redemption arc?

To incentivize participation in this market, I will be donating any profit I make from it to whomever I feel did the most to promote it. This will probably just be the person (other than me) who wagers the most or adds the most liquidity. If someone comes up with a clever way to enlist the participation of the folks listed on the market, that will also be considered a major factor.

@Krantz Danny is a national treasure. He is by far the person I've been trying to share information with for the longest. He is one of the few people who understand the need for provenance mapping at scale, and he has probably done more for the interpretable scaling of AI than anyone else. Why do I never hear this community talk about him or his work?

Does anyone have an informed opinion about him or his work?

https://youtu.be/k8mX_prIllI?si=w5CM825HuONG8Fgp

@Krantz Thank you.

If I had an idea which, in my view, would be able to alter someone's P(doom) by more than 1%, I'd probably choose a similar way of trying to get the information across. This is what I imagine Dying with Dignity to look like.

I haven't followed your ideas, apart from some tiny fragments here on Manifold, and probably won't want to engage much further, but it feels like you've earned a chance: Do I understand correctly that your idea refers to a way to get humanity's values into an AI? Along the lines of "build friendly AI the krantz way", thus needing to beat all other AI labs on their way towards AGI/ASI? If I'm wrong, maybe you have written something about "mechanistically interpretable decentralized alignment of humanity" somewhere you can point me to; it feels like that could be a crucial point.

@Primer I'm trying to warn the world that instead of feeding oil to a neural network, we should be feeding human input to a market (defined in krantz demonstration mechanism).

So that will come to life first.

I'm defining how to write social contracts within the market itself and simultaneously print money for the people who check things. In general, I'm recommending a fundamental change in how our economy would have to work: an 'intelligence economy'.

Try to engage with my "krantz demonstration mechanism" market if you want to understand.

https://manifold.markets/Krantz/krantz-mechanism-demonstration?r=S3JhbnR6
@Krantz Thanks!
