Will human brains be weaker than AI in every way by the end of 2024?
Jan 1 · 0.9% chance

This is a duplicate of /L/will-human-brains-be-weaker-than-ai, but for 2024. It resolves according to my interpretation of the same criteria laid out there, which I have copied here:

Resolves YES if and only if human brains are unambiguously weaker than AI in every domain.

Resolution process:

  • To propose YES resolution, comment with a reference to an AI that appears to be stronger than humans in every way.

  • To counterpropose NO resolution, comment with a domain in which the given AI appears to be weaker than human.

  • I or other commenters check if the AI can prove its capability in the challenged domain.

  • If no counterproposed NO domain can be found in which the AI fails to prove its capability, the question resolves YES (see the sketch just below this list).
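
In code terms, the process above is a simple challenge-response loop. A minimal, purely illustrative Python sketch follows; the "judge" callback is my own label for the manual "prove its capability" check, not anything the market automates:

    # Illustrative sketch of the resolution protocol above.
    # judge(ai, domain) -> bool stands in for the manual capability
    # check performed by me or other commenters; it is hypothetical.
    def resolve(candidate_ai, challenged_domains, judge):
        # One fixed AI must survive every challenged domain.
        for domain in challenged_domains:
            if not judge(candidate_ai, domain):
                return "NO"   # a single standing counterexample blocks YES
        return "YES"          # no counterproposal survived

A single surviving counterexample anywhere blocks YES; the same AI has to win everywhere.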

This does not require AI to have replaced humans in practical tasks, only that at least one single integrated AI exists that can in fact beat us at absolutely everything.

Note: this includes understanding the human experience. An AI that passes this test would understand humans dramatically better than most of us understand ourselves, since it would need to be able to empathize with us across substrates.

Note #2: This is not a per-watt question. The AI is allowed to use arbitrarily more compute than humans, as long as it can produce more accurate results than the best-trained human on a given task. This will likely require the AI to be one that does active exploration.


Surprise: Hassabis said maybe it's time for AI research to slow down a bit. If they slow down intentionally, it seems much less likely to me that this happens by 2024. This was intentionally "it has to solve everything, and have no gaps left at all," and that doesn't seem as plausible without DeepMind's help in two years. I still think it could happen; one in 100 doesn't seem too high.

Apart from the object-level question, I think some bettors here have missed how fundamentally unfair to the AI the given rules are. We could reach a point where everybody agrees that AI has made human thought obsolete, and this question would still resolve no.

predictedYES

@ScottLawrence Sure could. I sure designed them to be this level of unfair on purpose, to clarify just how much I'm claiming. I sure might be wrong. I sure still think it's gonna happen. Do you know you can holographically scan a brain with high-resolution infrared light fields?

predictedYES

@L I believe there was a TED talk about this. Let me see if I can find the video link where they presented an early prototype.

predictedYES

@L It was these folks: https://www.openwater.cc/


I'll leave it as an exercise to the reader to figure out why I think that's proof that a hard ASI would be able to properly exceed our understanding of ourselves in absolutely every domain.

predictedNO

@IsaacKing What's unfair is the challenge-response form (i.e., the ordering of quantifiers): a single AI must best all humans in all fields. Furthermore, the challenges can be designed with knowledge of what the AI is, but not vice versa. Very asymmetric.
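
In symbols (my gloss of this comment, not the market's own wording), the market demands the stronger quantifier ordering:

    % Required by the market: one fixed AI beats the best humans in every domain.
    \exists\, a \in \mathrm{AIs} \;\; \forall\, d \in \mathrm{Domains} : \mathrm{beats}(a, d)

    % Strictly weaker claim that would NOT suffice: for each domain,
    % some (possibly different) AI beats the best humans there.
    \forall\, d \in \mathrm{Domains} \;\; \exists\, a \in \mathrm{AIs} : \mathrm{beats}(a, d)

Swapping the quantifiers is exactly what makes the second form weaker: it lets a different AI win each domain.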

I'd still bet NO on a more symmetric challenge, but not below 1% as I've occasionally bet here. Maybe 10%, depending on exact rules.

predictedYES

@ScottLawrence I'd love to bet against you on the market you'd make!

Another paper came out that made me more confident I'm right to own YES, so I bought some more. Contact me personally for clarification.

Seems much less likely than by 2028, but my maximum-likelihood causal graphs still predict YES. The max-likelihood causal graph puts it somewhere on the order of one in 10, so I'm only buying this up to 7% right now.
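
For the arithmetic implied by "one in 10, buying up to 7%," here is a hedged Python sketch; it assumes the simple 1-if-YES / 0-if-NO payoff per share and ignores Manifold's actual AMM pricing and fees:

    # Expected value per YES share at market price p, given subjective
    # probability q of YES. A share pays 1 if YES, 0 if NO, and costs p.
    def ev_per_share(q: float, p: float) -> float:
        return q * 1.0 - p

    print(ev_per_share(0.10, 0.07))  # ~0.03 > 0: positive EV while price < belief
    print(ev_per_share(0.10, 0.10))  # 0.0: break-even once price reaches belief

So buying up to, but not past, 7% leaves a margin below the stated ~10% belief.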

My self-thought-organizing 5 mana of unwanted input.

Problem: AI's limited access to modalities, and perhaps its lack of human-like built-in physics, moral, and social frameworks, vs. "text is a universal interface" (plus whatever else becomes available besides text).

One answer/restatement, in concrete terms: regardless of whether it has human-like world access, AI will satisfactorily navigate multisensory or moral/social problems, etc., because these are well covered by (currently mostly written or visual) training data and yield to transfer and generalization.

Weak counterexample: in a parallel market, you observed that AIs currently struggle with music.

I lean toward expecting progress to be slow enough that I'm OK/indifferent at 1.8% odds. It's not worth betting down anymore, and too risky and belief-incongruent to bet up.

predictedYES

@yaboi69 Very reasonable. I have specific technical expectations that lead me to disagree extremely confidently with your mechanistic description. My uncertainty is about when the next steps actually get taken.

@L Thanks. Just out of non-combative curiosity and interest – unless you're keeping your competitive advantage secret – can you share a sentence or two about these technical expectations?

predictedYES

@yaboi69 Neural network advancement has, for a long time, mostly been a task of combining insights. There are many more left to be combined, and the difficulty is in getting the insights to work together. Many things have been shown to be possible that are currently assumed not to be, because people just haven't read enough of the research; the reason they think these things aren't possible boils down to an intuitive sense of how quickly new, usably good capabilities get built, which is bottlenecked on integration and scaling, both of which are very hard.

I also know of some very specific techniques that will seem incredibly obvious in retrospect, and people will be baffled they didn't come up with them; the real reason for such things is usually that people have tried the thing and not gotten it to work for years, and at some point someone is going to get it working. Pretty much every single component of AI is like that, besides the ones that already work, and I just feel that I know about some specific ones. If you want to build the strongest general AI, it has to make the very best use of insights from task-specific research in every field of modeling. It's surprising how much of the field of data modeling turns out to be surpassed by simple algorithms like transformers, but there are many insights yet to combine with them, and most of those insights have been worked out far enough that it is clear the capability has already been demonstrated to be possible.

Because of this, many people think that AI can't possibly suddenly automate their field away, because the current-generation thing can't do this next thing. That's fundamentally the wrong way to think about AI progress: scaling the same algorithm mostly gives you more of the same kind of understanding; AI research changes how quality improves.

@L If I may press a bit more, what author with untapped insights would you recommend? E.g. I remember enjoying pieces by Marvin Minsky years ago, and that influence is still present somewhere (maybe to my detriment).

predictedYES

@yaboi69 It really depends on who we're talking about whether there are still untapped insights. I don't think I could suggest anything new to DeepMind, but I might be able to suggest new things to some AI teams or individual researchers who are heads-down on their own focus area. I'm not inclined to go much deeper without personal interaction; I don't want to aid capability unless it's safely individuality-respecting, collective-individuality-respecting, co-protective capability (which we can and will build; DeepMind seems to be on a great track).

Honestly, just skim DeepMind's work.

Nice doin' business with ya'.
