Summary
Last week, the Wall Street Journal reported that Apple has restricted some employees' use of ChatGPT and other external AI tools. The news comes on the heels of ChatGPT's arrival on the App Store, where it racked up half a million downloads in less than a week.
The ban allegedly originates from concerns over potential confidential data leaks.
Last month, OpenAI (the company behind ChatGPT) introduced an "incognito mode" that turns off chat history and stores conversations for only 30 days before deletion. Speaking to Reuters, Mira Murati, the company's CTO, said that OpenAI was compliant with European privacy laws and working to appease regulators:
"We'll be moving more and more in this direction of prioritizing user privacy," Murati said, with the goal that "it's completely eyes off and the models are super aligned: they do the things that you want to do." - Mira Murati, OpenAI's chief technology officer (Source: Reuters)
Concerns over AI technologies and their privacy implications are ever-present in the public discourse. During his recent hearing before Congress, OpenAI's CEO notably expressed his own concerns over the technology he is helping to develop, calling for cooperation between lawmakers and technologists and urging AI regulation:
“I think if this technology goes wrong, it can go quite wrong. And we want to be vocal about that (...) We want to work with the government to prevent that from happening.” - Sam Altman, CEO of OpenAI (Source: The New York Times)
In the meantime, other companies, including JPMorgan Chase, Verizon, Samsung, and Amazon, have reportedly restricted their employees' use of ChatGPT as well.