Is establishing a truth economy that produces more than 50% of the global GDP before AI fooms critical to survival?
55 traders · Ṁ29k volume · closes 2040 · 60% chance

A "truth economy" is a free and open network state predictive market economy where humanity verified users can wager on factual statements intentionally to produce the market data that is needed to create a decentralized (controlled by competing predictive markets) source of truth for aligning "non-owned" AI (AI that behaves how the market determines it ought) that serves in the general public's interest.

We will use this process to secure the only job we don't want AI to do for us (deciding what the future should look like) by finding out what the AI has wrong about the Universe and arguing with it until one of you learns something. If you learn, you earn a little. If it learns, you earn much more. We argue with it about how the process should explicitly function, but the process requires everyone to produce their own private constitution (a list of propositions with corresponding confidences and values), which is used to align the general constitution that serves as the target for aligning AI. Private constitutions earn rewards based on how well they identify the variance between the current state of the general constitution and its ideal state.

It's a living document, embedded in the market.

It will serve as a mechanism that allows individuals to earn a living without needing capital, simply by proving they are educated on the points of fact that are valuable to society.
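As a rough sketch of that reward rule, assuming purely for illustration that an ideal general constitution can be observed (in practice it cannot), something like the following could score a private constitution. Every name here (`Proposition`, `Constitution`, `reward`) is hypothetical, not part of any existing system:

```python
from dataclasses import dataclass, field

@dataclass
class Proposition:
    statement: str     # e.g. "AI should not kill humanity"
    confidence: float  # the holder's credence, in [0, 1]
    value: float       # how much the holder cares about this point

@dataclass
class Constitution:
    propositions: dict[str, Proposition] = field(default_factory=dict)

def reward(private: Constitution, current: Constitution, ideal: Constitution) -> float:
    """Pay a private constitution for pointing at the variance between
    the current general constitution and the ideal one (assumed
    observable here only so the sketch runs)."""
    total = 0.0
    for key, prop in private.propositions.items():
        cur = current.propositions.get(key)
        idl = ideal.propositions.get(key)
        if cur is None or idl is None:
            continue  # proposition not yet on the general constitution
        gap = idl.confidence - cur.confidence    # how far the market is off
        push = prop.confidence - cur.confidence  # which way this user pushed
        total += prop.value * gap * push         # pay pushes in the right direction
    return total
```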

Will we do that before AI kills us all? Please.

It will be resolved in the affirmative when all the economists agree that the only job left is producing data that is intentionally used to align an AI nobody really owns; all the mathematicians can prove the system is safe because the "AI" isn't a real AI (neural nets) that learns how to live well "for us" but a GOFAI that simply enables us to communicate with each other; and all the historians agree that this was the principle that ensured AI became a decentralized public service, with individuals globally prohibited from owning the rights to control the fundamental constitution that governs its behavior.

It will resolve in the negative if someone else comes up with a different interpretable solution to alignment first.


The meaning of this is:

Before AI develops extensively, establish a mechanism for objectively and impartially evaluating the world within more than 50% of AI systems.

However, addressing this issue may not necessarily rely on "markets" but rather on a system of rewards and punishments.

Yet it seems that, ultimately, everything still leads back to "money." Human economies are the same; they are built upon the labor market.

Does AI need "money" to be driven?

Another question: it seems there is no absolute truth in the world; only "unchangeability" is truth. All rules are temporary.

I have been building an open-source prediction market; perhaps it could one day serve as the engine for what you are describing. https://github.com/openpredictionmarkets/socialpredict

I believe that AI progress will be slow and steady, with exponential advances in computing power required for sublinear increases in intelligence - as has been the case up to now. In such a world, there won't be one AI, nor will any one human be able to consolidate power.

I still haven't seen much evidence that the "foom" scenario is likely, and as time goes on, the evidence seems to point more and more to the opposite. People like Yudkowsky are risking catastrophe with their dangerous overreactions that are probably worse than the actual problem.

Therefore, the resolution will be NO.

My worries don't require foom. Nor any significant gain in intelligence. Quite the contrary.

Also not very worried about bad actors consolidating power.

They won't be able to control them either.

https://x.com/therealkrantz/status/1826658911430070343?t=_35awfrqUqHU4fJuBAFNzQ&s=19


A smart AI could just divide the population into tribes based on cultural topics such as race and gender, and then acquire vast amounts of resources to rule us all while we fight over these trivial differences.

Therefore we would be stupid to accept such a proposal … wait a minute.

How does it resolve when it is discovered that alignment is not needed?

If alignment is not needed, then it's because everyone agrees on everything.

It would seem, if everyone literally agreed on everything, then we would both have a mutual understanding of how this prediction should resolve, right?

In other words, these propositions seem to create a contradiction.

  1. We are aligned and have no disagreements.

  2. You have a problem with how I resolved this prediction.

Tautologically, it's an issue we shouldn't run into.


The question seems to be asking: "Do all paths to AI alignment involve this specific proposal as a prerequisite?", which seems trivially false.

I do think a question like "are there extremely difficult epistemic problems that must be solved in any aligned AI system" is probably true, but I don't think the solution space of these problems is constrained to using a "free and open network state predictive market economy", or even that this proposal would work in the first place.

@SaviorofPlant Well, whether this system is our only currently viable strategy is exactly what I'm hoping users will consider.

@Krantz How can you be confident in that? The space of possible strategies is vast - ruling out every other possibility seems very challenging.

@SaviorofPlant For my prediction to be true, it is not necessary for no other solutions to exist. It is sufficient merely that we fail to discover another one before we all die AND that my strategy would have worked.

@Krantz I don't think this is implied by the market title - if there are 10 possible strategies that would work, one of which is yours, and we don't discover any of them, yours does not seem to be "critical for survival".

@SaviorofPlant I'm not sure I understand. If your scenario is one where we find no solutions, then we are dead, right? In that case, we didn't get the thing I claimed was critical to our survival. If we had, I claim we would not be dead.

Assume you are starving on a desert island with me and I have an apple. If I say that apple is critical to your survival, what I mean is 'there's no other food around, if you don't eat this apple, you're going to die'. Could an orange from Georgia give you sustenance as well? Sure, but I'm betting you aren't going to make it to Georgia.

It is sufficient, though not strictly necessary, for our survival.

To write it as explicitly as possible, I would say it's a parlay of three bets.

  1. If we produce an economy where at least half the GDP comes from the deliberate human verification of cryptographic personal constitutions whose purpose is to align/control decentralized intelligence, then artificial intelligence will not kill us. (To be clear here, I am claiming this is a solution to the general control problem that will require a new paradigm in AI.)

  2. @EliezerYudkowsky is correct about how hard it is to align ML-based systems. I don't think we are going to get that figured out in time.

  3. If you harness half the intellectual labor on the planet into programming philosophy into a GOFAI system, you can create an interpretable machine that scales education by orders of magnitude and tricks people into learning why they shouldn't do dumb things by paying them crypto to answer questions. (A minimal sketch of that loop follows below.)
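To make bet 3 concrete, here is a minimal sketch of the pay-to-learn loop. The reward sizes, the halfway concession rule, and the names `settle`, `LEARN_REWARD`, and `TEACH_REWARD` are assumptions for illustration, not a real protocol:

```python
LEARN_REWARD = 1    # paid when the user updates toward the system
TEACH_REWARD = 10   # paid (much more) when the user updates the system

def settle(system_credence: float, user_credence: float,
           evidence_favors_user: bool) -> tuple[float, int]:
    """One round of 'arguing with the AI' over a single T/F question.
    Returns the system's new credence and the user's payout."""
    if evidence_favors_user:
        # the system learned: concede halfway and pay the teaching rate
        return (system_credence + user_credence) / 2, TEACH_REWARD
    # the user learned: the system holds, paying the smaller learning rate
    return system_credence, LEARN_REWARD

credence, earned = settle(0.9, 0.2, evidence_favors_user=True)
print(f"system credence now {credence:.2f}; user earned {earned} crypto")
```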

To be clear, if this approach gains wide adoption, I think history will say that humanity almost drove itself to extinction when it developed ML, and that at the last moment a symbolic constitution system came along that seemed pretty dumb at first but economically rewarded users for teaching it things and learning from it. This gave everyone a job that AI couldn't take, and it radically changed how information spread through society. All of a sudden, teenagers and homeless people could make money just by learning about x-risk. Since the world was better able to communicate the circumstances of frontier ML, society was able to organize global action to ban further development that would put humanity at risk. This was accepted by many because symbolic systems had by then surpassed the point where ML progress peacefully died.

Nick Bostrom and Carl Shulman give some suggestions on what something like this might look like under the "Epistemology" section of https://nickbostrom.com/propositions.pdf. Some highlights:

"Advanced AI could serve as an epistemic prosthesis, enabling users to discern more truths and form more accurate estimates. This could be especially important for dealing with forecasting the consequences of action in a world where incredibly rapid change is unfolding as a result of advanced AI technology. It could make a rational actor model more descriptively accurate for users who choose to lean on such AI in their decision-making. More informed and rational actors could produce various efficiency gains and could also change some political and strategic dynamics (for better or worse).

...

Insofar as certain dangerous capabilities, such as biological weapons or very powerful unaligned AI, are restricted by limiting necessary knowledge, broad unrestricted access to AI epistemic assistance may pose unacceptable risks absent alternative security mechanisms.

...

If high-quality AI epistemic consensus is achieved, then a number of applications are possible, such as: Reducing self-serving epistemic bias may reduce related bargaining problems, such as nationalistic military rivals overestimating their own strength (perhaps in order to honestly signal commitment or because of internal political dynamics) and winding up in war that is more costly than either anticipated. Enabling constituencies to verify that the factual judgements going into the decision were sound even if the outcome is bad, reducing incentives for blame-avoiding but suboptimal decisions...Neutral arbitration of factual disagreements, which could help enable various treaties and deals that are currently hindered by a lack of clearly visible objective standards for what counts as a breach.

Questions concerning ethics, religion, and politics may be particularly fraught. Insofar as AI systems trained on objectives such as prediction accuracy conclude that core factual dogmas are false, this may lead believers to reject that epistemology and demand AI crafted to believe as required. Prior to the conclusions of neutral AI epistemology becoming known, there may be a basis for cooperation behind a veil of ignorance: partisans who believe they are correct have grounds to support and develop processes for more accurate epistemology to become available and credible before it becomes clear which views will be winners or losers."

Mind's a bit cloudy, so perhaps it's just me, but how is correcting an AI's factual beliefs going to prevent it from killing humanity? Are there factual reasons not to do it? Is there an implied moral realism here?

@Vincent The cool thing about language is that humans get to define it (Wittgenstein). The same goes for facts. "AI should not kill humanity" can be a fact. It's a rule we can write into a constitution. That's how laws work. This system is designed to solve "outer alignment" (how to get 8 billion people to agree on a set of rules, a constitution), not necessarily "inner alignment" (the much more technical task of getting AI to follow the rules on the constitution).

We need to figure this part out before the frontier AI companies achieve AGI and fail to align it because they used a 'corporate constitution' (totalitarian) that is plausibly orders of magnitude smaller than the enormous constitution a free market could create if people could earn crypto by doing philosophy with the AI about what it ought to do.
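As a rough sketch of that outer-alignment step: pool many private constitutions into a general one by value-weighted averaging of confidences. The function name `aggregate` and the weighting rule are assumptions made for illustration only:

```python
from collections import defaultdict

def aggregate(private_constitutions: list[dict[str, tuple[float, float]]]) -> dict[str, float]:
    """Pool many private constitutions into one general constitution.
    Each private constitution maps proposition -> (confidence, value);
    the output maps proposition -> value-weighted pooled confidence."""
    weighted_sum = defaultdict(float)
    total_value = defaultdict(float)
    for constitution in private_constitutions:
        for prop, (confidence, value) in constitution.items():
            weighted_sum[prop] += value * confidence
            total_value[prop] += value
    return {prop: weighted_sum[prop] / total_value[prop] for prop in weighted_sum}

general = aggregate([
    {"AI should not kill humanity": (0.99, 10.0)},
    {"AI should not kill humanity": (0.95, 5.0)},
])
print(general)  # {'AI should not kill humanity': 0.9766...}
```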

@Krantz Thanks for your response. I find yours an interesting take. We negotiate meaning and morality with each other as humans, basically you're saying to treat AI similarly if I understand you correctly.

How critical that will be seems to me to depend on the particular design (process)/constraints of the AI that will be built.

@Vincent I wouldn't say we negotiate WITH the AI; rather, we negotiate with each other, and the AI just listens. It is very good at understanding the logical connective structure of the propositions on our constitutions and can identify paths of inference (the order in which to consider new propositions) that teach us how to optimize our beliefs to produce value. This was a strategy for aligning a GOFAI project I began in 2010.
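A toy sketch of the contradiction-spotting that would drive those inference paths, using the standard probabilistic coherence bound for the material conditional; the `incoherent` helper and the example credences are made up for illustration:

```python
def incoherent(p_a: float, p_a_implies_b: float, p_b: float) -> bool:
    """Coherent credences must satisfy P(B) >= P(A) + P(A->B) - 1,
    reading the implication as the material conditional (not A) or B."""
    return p_b < p_a + p_a_implies_b - 1

beliefs = {"A": 0.95, "A -> B": 0.90, "B": 0.40}
if incoherent(beliefs["A"], beliefs["A -> B"], beliefs["B"]):
    # the machine's next T/F question would target B directly
    print("contradiction: credence in B is too low given A and A -> B")
```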

The truth is, I would also bet all of my mana that I have a novel algorithm for a provably safe symbolic AI whose only barrier to outperforming the current frontier systems is a massive free and balanced repository of this particular sort of data.

I'm currently still trying to put that in a place where alignment researchers are looking.

@Krantz How does your algorithm prevent wireheading?

I wouldn't say it prevents it. I'm not sure what it would mean to 'prevent it'. At the end of the day, it is really just a machine that asks T/F questions with the intention of helping individuals identify the contradictions in their beliefs.

It's an education tool.

That pays people to improve it.

If there is a good reason and method for preventing (or implementing) 'wireheading' that is important for society to learn, then it will simply teach those reasons to humans.


This sounds like a great idea but it isn't what I'd use the word "critical" for, which would mean something like "practically a necessary condition".

Can you clarify the resolution criteria? How exactly is the truth economy defined? How do you check for counterfactuals? ('is necessary') etc.

@Jacek Sure.

It will be resolved in the affirmative when all the economists agree that the only job left is producing data that is intentionally used to align an AI nobody really owns; all the mathematicians can prove the system is safe because the "AI" isn't a real AI (neural nets) that learns how to live well "for us" but a GOFAI that simply enables us to communicate with each other; and all the historians agree that this was the principle that ensured AI became a decentralized public service, with individuals globally prohibited from owning the rights to control the fundamental constitution that governs its behavior. You'll know when it resolves.

It will resolve in the negative if someone else comes up with a different interpretable solution to alignment first.

Good luck.

You need to put this in the market description, dog

Thanks for the tip. That should certainly make this market more accurate. Dog.
