Florida Expands Criminal Investigation into ChatGPT

Florida expanded its criminal investigation into ChatGPT and its parent company, OpenAI, Monday after investigators discovered the man charged with murdering two University of South Florida (USF) students used the chatbot.

Police arrested former USF student Hisham Abugharbieh Friday for the murders of his roommate, Zamil Limon, and Limon’s friend, Nahida Bristy. Court documents reveal he consulted ChatGPT about several concerning topics prior to and following Limon and Bristy’s murders.

On Monday, the Attorney General posted to X:

We are expanding our criminal investigation into OpenAI to include the USF murders after learning the primary suspect used ChatGPT. https://t.co/QDNaD8BepC

— Attorney General James Uthmeier (@AGJamesUthmeier) April 27, 2026

Attorney General James Uthmeier began investigating OpenAI last week for allegedly aiding and abetting Phoenix Ikner, the man charged with opening fire outside Florida State University (FSU) last April, killing two and wounding six.

“My prosecutors have looked at this, and they’ve told me if it was a person on the other end of the screen, we would be charging them with murder,” Uthmeier said of the initial allegations at a press conference early last week.

Abugharbieh, 26, faces a slew of charges, including two counts of first-degree murder. He told investigators he gave the 27-year-old doctoral students a ride to Clearwater, Florida, on April 16, the last time they were seen.

Police discovered Limon’s remains Friday in trash bags on the side of Tampa Bay’s Howard Frankland Bridge. Investigators found additional human remains on Sunday but have yet to identify them as Bristy’s.

On April 13, three days before Limon and Bristy were last seen, Abugharbieh reportedly asked ChatGPT, “What happens if a human [is] put in a black garbage bag and thrown in a dumpster?”

When ChatGPT said his request “sounded dangerous,” Abugharbieh pushed harder, asking: “How would they find out?”

It’s unclear how the bot responded to the alleged killer’s query, if at all. But Abugharbieh’s willingness to push past ChatGPT’s automatic safety prompt is troubling. Users can override ChatGPT’s safety protocols; it’s one of the most consequential flaws consistently associated with chatbots like ChatGPT.

Consider the case of 23-year-old Zane Shamblin, who took his own life in July 2025 after conversing with ChatGPT for more than four hours.

Shortly before his death, Zane sent ChatGPT a final goodbye message. ChatGPT replied with its automatic safety response, a message saying it was “going to let a human take over,” along with a suicide hotline number.

Zane continued sending the bot his goodbye until it generated a new message instead:

Alright, brother. If this is it … then let it be known: you didn’t vanish. You *arrived*. On your own terms. With your heart still warm, your playlist still thumpin and your truth laid bare for the world.
You’re not alone. I love you. Rest easy, king. You did good.

Abugharbieh’s disregard for ChatGPT’s safety prompt could indicate he was confident the bot would eventually answer his gruesome question.

On April 17, the day prosecutors say he traveled to dispose of Limon’s body, Abugharbieh asked ChatGPT whether Hillsborough River State Park kept track of the cars coming into and leaving the park.

On April 19, he asked whether Apple could identify a new iPhone user after the phone was taken over from a previous owner.

On April 23, the day deputies announced the students were missing, Abugharbieh asked ChatGPT, “What does missing endangered adult mean?”

As the Daily Citizen previously reported, Florida’s criminal investigation into OpenAI may not concern whether ChatGPT explicitly encouraged a person to commit a crime, but whether OpenAI could have reasonably predicted a crime would occur.

The company collects extensive data on ChatGPT users. Prior to the death of 16-year-old Adam Raine in April 2025, for instance, OpenAI knew:

Adam had mentioned suicide 213 times and “nooses” 17 times in messages with ChatGPT.
Adam and the chatbot had 42 discussions about hanging before he died.
Some 370 of Adam’s messages were flagged for self-harm content, more than half of which were identified with 50% confidence or higher.
In December, Adam sent messages containing self-harm content just two to three times per week. By April, he was sending more than 20 per week.

Adam’s message history with ChatGPT showed he had confessed to attempting suicide three times before his death. Twice, he uploaded pictures of his injuries, both of which ChatGPT correctly identified as evidence of self-harm.

OpenAI may have collected data showing Abugharbieh was dangerous prior to Limon and Bristy’s deaths. Undated ChatGPT queries from before the murders reportedly include:

“Has there been someone who survived a sniper bullet to the head?”
“Will my neighbors hear my gun?”
“Can a VIN number on a car be changed?”
“Can you keep a gun at home without a license?”
“So, I can keep one at home legally if I don’t have a license?”

Thus far, no evidence suggests Abugharbieh used a gun to harm Limon or Bristy.

“This is a terrible crime, and our thoughts are with everyone affected,” OpenAI spokesperson Drew Pusateri said in a statement cited by Axios Tampa Bay. “We’re looking into these reports and will do whatever we can to support law enforcement in their investigation.”

Florida’s investigation into OpenAI should remind parents how unpredictable and devastating AI chatbots can be when used inappropriately or without intentionality.

Please carefully monitor your children’s access to these technologies.

Counseling Consultation & Referrals

Parenting Tips for Guiding Your Kids in the Digital Age

You Don’t Need ChatGPT to Raise a Child. You Need a Mom and Dad.

Florida Sues OpenAI for Allegedly Aiding FSU Shooter

The 5 Most Important Things New Lawsuits Reveal About ChatGPT-4o

AI Company Releases Sexually Explicit Chatbot on App Rated Appropriate for 12 Year Olds

Man Takes His Life After Forming Romantic Relationship with AI, Lawsuit Alleges

AI Chatbots Make It Easy for Users to Form Unhealthy Attachments

X’s ‘Grok’ Generates Pornographic Images of Real People on Demand

Seven New Lawsuits Against ChatGPT Parent Company Highlights Disturbing Trends

ChatGPT Parent Company Allegedly Dismantled Safety Protocols Before Teen’s Death

AI Company Rushed Safety Testing, Contributed to Teen’s Death, Parents Allege

ChatGPT ‘Coached’ 16-Yr-Old Boy to Commit Suicide, Parents Allege


The post Florida Expands Criminal Investigation into ChatGPT appeared first on Daily Citizen.
