AI Safety: Threat or Solution?
AI Policing: A Risky Experiment
Perplexity’s public safety deal is raising alarms among experts, and the question of who is policing the police, when it comes to artificial intelligence, is gaining urgency. The AI startup asserts its platform can help police officers and public safety officials make more informed decisions, even in seemingly mundane applications such as analyzing crime scene photos. Launched in January, Perplexity for Public Service Organizations offers one free year of the company’s Enterprise Pro tier for up to 200 seats, with discounted options available for larger agencies.
The Allure of AI Assistance in Law Enforcement
In its announcement, the company stated that the initiative is designed to help officers make more informed decisions in real time and to automate routine tasks, such as generating descriptions of crime scene photos, analyzing news stories and body camera transcripts, and transforming collections of investigators’ notes into polished, structured reports. Experts, however, caution that these seemingly innocuous applications are a significant concern. “What’s often pernicious about these use cases is they can be presented as administrative or menial,” explained an AI policy counsel at the Policing Project. “These everyday tasks can have substantial downstream effects on people’s lives.”
Hidden Dangers Within AI-Generated Reports
Hallucinations, in which AI models fabricate details, have contributed to wrongful convictions when they appear in police reports, mirroring a concerning trend of lawyers using AI tools that invent case precedents and other specifics in draft filings. Though the errors involved often seem insignificant, the consequences can be disastrous. Separately, Perplexity recently launched a free AI tool that lets users search patents using natural language.
Post-Training Efforts Don't Guarantee Accuracy
Perplexity builds on models created by other developers, such as OpenAI and Anthropic, and then “post-trains” them to minimize hallucinations. The technology still has limitations, however. A recent study conducted by the European Broadcasting Union and the BBC found that Perplexity, alongside three other leading chatbots, frequently produced responses containing “at least one significant issue,” particularly when queried about current events.
This article is AI-synthesized from public sources and may not reflect original reporting.