Agents criticised for using ChatGPT for immigration reports
Federal agents in Charlotte, North Carolina (Getty Images)
A federal judge criticised immigration agents for using artificial intelligence, specifically ChatGPT, to draft use-of-force reports, citing concerns about accuracy and credibility.
U.S. District Judge Sara Ellis noted factual discrepancies between official law enforcement narratives and body camera footage, including footage that showed an agent asking ChatGPT to compile a report from a brief description and a handful of images.
Experts condemned the practice as the “worst possible use” of AI, warning that it poses serious accuracy risks because AI-generated reports may not reflect the officer's actual experience or perspective.
The Department of Homeland Security did not comment on its policies regarding AI use by agents, and experts noted that few departments have established clear guidelines for this technology.
Concerns were also raised about privacy, particularly if agents use public versions of ChatGPT, and about the difficulty of getting AI tools to interpret visual evidence accurately when generating reports.