
Resolves YES if, by EoY 2026, there is an attempt to scam someone I personally know using a technique made possible by generative AI, and I learn about it.
The scam attempt can use voice generation, video generation, or any similar form of AI-based generation. The scammer must attempt to extract something (money/resources/info). The attempt need not be successful.
The scam must target this individual specifically, e.g. by faking the voice of a friend, as opposed to, e.g., a mass-circulated deepfake of a famous personality requesting that money be sent to a bitcoin wallet.
I'll only resolve positively if the target is someone I know in a personal capacity (friends/family/colleagues/etc).
Resolves NO if I learn of no such scam by EoY 2026.
There may be some subjective calls in the resolution of this market, so I will not bet (I will resolve YES only if I'm 98%+ sure that it happened).
Update: this has not yet happened. A friend did tell me a story about a phone call their relative received that sounds like it may have been a faked voice, but:
- I don't know the victim in a personal capacity, and
- even if I did, I'm nowhere close to 98% sure.
@galaga Another update: I heard of it happening to another friend's relative. I'm pretty convinced that it was a generated voice this time. However, once again I don't know the target in a personal capacity.
@Quroe I'm hesitant to say it needs to be "unprompted", since I may be the one to bring up the topic in natural conversation, and that would technically be prompted by me. But I don't plan to systematically survey people for the purpose of resolving this market.
I can't think of reasons why their word wouldn't be enough for most of the people I know. Let's just say I'll use my judgment on that, and I'll resolve positively if I'm 98%+ sure that it actually happened.
Thanks for your questions.
@galaga Here is a problematic edge case I've encountered in the last couple days regarding phone calls:
Some people answer a phone call and assume it's from a certain person, without the caller ever stating their own name. This is more of a concern when you account for the age and cognitive faculties of the person answering. Without a third party present (or a recording) who can verify that the voice does indeed mimic that person's voice, it will be difficult to say whether the call was AI-generated or done entirely by a human. There is also a matching problem: the caller may not identify themselves but may happen to sound like someone the recipient knows.
If we allow claims from relatives who merely assumed a certain person was calling, we may not be able to get definitive proof of AI involvement unless they record their phone calls.
@parhizj True, such cases would be hard to adjudicate in general.
However, my resolution (see other comment) is based on whether I am personally 98%+ sure that the conditions are met, and in such a case I probably won't be.