OpenAI workers wanted to warn Canadian cops about trans school shooter months in advance, tech giant said no: bombshell report





A new bombshell report from the Wall Street Journal has revealed that employees at OpenAI flagged the ChatGPT writings and queries of Canadian trans school shooter Jesse Van Rootselaar. The employees wanted to alert Canadian authorities, but management said no.

Per the WSJ: “While using ChatGPT last June, Van Rootselaar described scenarios involving gun violence over the course of several days, according to people familiar with the matter.”

OpenAI leaders decided against alerting authorities at the time. The company contacted the Royal Canadian Mounted Police (RCMP) after the massacre, and a spokesperson says it is supporting the ongoing investigation.

“Our thoughts are with everyone affected by the Tumbler Ridge tragedy,” OpenAI said in a statement. The spokesperson also told the WSJ that the company had banned Van Rootselaar’s account, but that the activity of the young man “didn’t meet the criteria for reporting to law enforcement, which would have required that it constituted a credible and imminent risk of serious physical harm to others.”

On February 10, Van Rootselaar killed his mother, Jennifer, and his younger brother. He then proceeded to Tumbler Ridge Secondary School, where he killed a teacher and five students before turning his firearm on himself. He also injured 25 others.

Van Rootselaar was known to local police before the massacre. Authorities had visited his home several times over mental health concerns and removed guns from his residence, albeit temporarily.

Online platforms have long debated policies that weigh user privacy against alerting law enforcement to public safety threats; AI companies are now facing the same questions as individuals share the most personal aspects of their lives with chatbots. OpenAI says it employs human reviewers who can refer harmful or threatening conversations to law enforcement when they are determined to pose an imminent risk.



Source: Las Vegas News Magazine
