
Workers are secretly using ChatGPT, AI, with big risks for companies


Lionel Bonaventure | Afp | Getty Images

Soaring investment from big tech companies in artificial intelligence and chatbots — amid massive layoffs and a growth decline — has left many chief information security officers in a whirlwind.

With OpenAI’s ChatGPT, Microsoft’s Bing AI, Google’s Bard and Elon Musk’s plan for his own chatbot making headlines, generative AI is seeping into the workplace, and chief information security officers need to approach this technology with caution and prepare with necessary security measures.

The tech behind GPT, or generative pretrained transformer, is powered by large language models (LLMs) — algorithms that produce a chatbot’s human-like conversation. But not every company has its own GPT, so companies need to monitor how workers use this technology.

People are going to use generative AI if they find it useful to do their work, says Michael Chui, a partner at the McKinsey Global Institute, comparing it to the way workers use personal computers or phones.

“Even when it’s not sanctioned or blessed by IT, people are finding [chatbots] useful,” Chui said.

“Throughout history, we’ve found technologies which are so compelling that individuals are willing to pay for it,” he said. “People were buying mobile phones long before businesses said, ‘I will supply this to you.’ PCs were similar, so we’re seeing the equivalent now with generative AI.”

As a result, companies have some “catch up” to do in terms of how they are going to approach security measures, Chui added.

Whether it’s standard business practice like monitoring what information is shared on an AI platform or integrating a company-sanctioned GPT in the workplace, experts think there are certain areas where CISOs and companies should start.

Start with the basics of information security

CISOs — already combating burnout and stress — deal with enough problems, like potential cybersecurity attacks and increasing automation needs. As AI and GPT move into the workplace, CISOs can start with the security basics.

Chui said companies can license use of an existing AI platform, so they can monitor what employees say to a chatbot and make sure that the information shared is protected.

“If you’re a corporation, you don’t want your employees prompting a publicly available chatbot with confidential information,” Chui said. “So, you could put technical means in place, where you can license the software and have an enforceable legal agreement about where your data goes or doesn’t go.”
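The “technical means” Chui describes can be as simple as screening prompts before they ever reach an external chatbot. As a minimal, hypothetical sketch — the patterns and function name here are illustrative assumptions, not any particular vendor’s tooling — a company might check outgoing prompts against a list of confidential-data patterns:

```python
import re

# Hypothetical examples of patterns a company might treat as confidential:
# U.S. Social Security-style numbers and internal project codenames.
CONFIDENTIAL_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),   # SSN-like numbers
    re.compile(r"\bPROJECT-[A-Z]+\b"),      # internal codename convention
]

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt appears safe to send to an external chatbot."""
    return not any(p.search(prompt) for p in CONFIDENTIAL_PATTERNS)
```

A real deployment would pair a filter like this with the licensing and legal controls Chui describes, since pattern matching alone cannot catch every kind of sensitive disclosure.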

Licensing use of software comes with additional checks and balances, Chui said. Protection of confidential information, regulation of where the information gets stored, and guidelines for how employees can use the software — all are standard procedure when companies license software, AI or not.

“If you have an agreement, you can audit the software, so you can see if they’re protecting the data in the ways that you want it to be protected,” Chui said.

Most companies that store information with cloud-based software already do this, Chui said, so getting ahead and offering employees a company-sanctioned AI platform means a business is already in line with existing industry practices.

How to create or integrate a customized GPT

