This prediction is specifically about law enforcement, not law as a whole. That is, LLM output is used for something like determining whether someone should be investigated or arrested. Think police, not lawyers.
This prediction will be resolved as "yes" iff reliable media sources or official court documents in any country show that an LLM was used in any capacity by law enforcement to make a decision as described above.
Please keep the following conditions in mind:
It does not include LLM output being used to argue a case, or being used as evidence in a case.
It does not include the use of an LLM in some criminal capacity leading to someone's arrest (as the AI was used by the guilty party, not law enforcement).
It does not include the use of some other kind of AI or machine learning, either known today or developed in the future.
It does not include law enforcement using an LLM in a non-decision-making capacity.
It does not matter whether the usage was decisive (i.e., whether the output was the ultimate cause of the decision), successful, or substantial; it only matters that an LLM was used as part of the decision-making process.
It does not include usage by intelligence services like the CIA/NSA.
Police officers are using LLMs to write their crime reports. Not sure if this counts or not.
@CharlesFoster According to resolution criteria:
"It does not include law enforcement using an LLM in a non-decision-making capacity."
I'm not OP, but I'd say that's a no: writing reports is not decision-making.
@DanPowell Good question; the legality does not matter here. In fact, the existence of a legal dispute over whether prior LLM usage was acceptable would be solid evidence for a "yes" resolution all by itself.
As an interrogation tool… that's more of a special case. My gut says yes, since I don't think I can meaningfully distinguish an interrogator's work from some kind of decision-making.
Keep an eye on this conference. This article is for 2023's con, but I suspect 2024's is when firms will unveil their LLM bullshit. Honestly, people who work on this tech are slimy worms
Does the NSA count? Also, this applies globally, yes? I.e., Chinese law enforcement is included?
@WXTJ Do we have this in reliable media sources or court documents? To my knowledge, everything today is in development and is not actually informing law enforcement activities.
LLMs do not interpret audio; that is a different type of model.