https://accelerationeconomy.com/ai/how-to-maintain-cybersecurity-as-chatgpt-and-generative-ai-proliferate/


Workforce use of apps such as ChatGPT is growing. Powered by large language models (LLMs) like GPT-3.5 and GPT-4, the generative AI chatbot is being leveraged by employees in countless ways: to create code snippets, articles, documentation, social posts, content summaries, and more. However, as we’ve witnessed, ChatGPT presents security dilemmas and ethical concerns that make business leaders uneasy.

Generative AI is still relatively untested and riddled with potential security concerns, such as leaking sensitive information or company secrets. ChatGPT has also been in the hot seat for revealing user prompts and “hallucinating” false information. In response to these incidents, a handful of corporations have banned its use outright or sought to govern it with rigid policies.

Below, we’ll consider the security implications of using ChatGPT and cover the emerging security and privacy concerns around LLMs and LLM-based apps. We’ll outline why some organizations are banning these tools while others are going all in on them, and we’ll consider the best course of action to balance this newfound “intelligence” with the security oversight it deserves.


Emerging ChatGPT Security Concerns

The first emerging concern around ChatGPT is the leakage of sensitive information. A report from Cyberhaven found that 6.5% of employees have pasted company data into ChatGPT, and that sensitive data makes up 11% of what employees paste into the tool. That could include confidential information, intellectual property, client data, source code, financials, or regulated information. Depending on the use case, doing so may violate regional or industry data privacy regulations.

For example, three separate engineers at Samsung recently shared sensitive corporate information with the AI bot to find errors in semiconductor code, optimize Samsung equipment code, and summarize meeting notes. Divulging trade secrets to an LLM-based tool is highly risky because the provider may use your inputs to retrain the model and reproduce them verbatim in future responses. Amid fears about how generative AI could negatively impact the financial industry, JPMorgan temporarily restricted employee use of ChatGPT, and Goldman Sachs and Citi soon followed with similar restrictions.

ChatGPT also experienced a significant bug that leaked users’ conversation histories. The privacy breach was alarming enough that it prompted Italy to ban the tool outright while it investigates possible data privacy violations.

Furthermore, because ChatGPT was trained on data scraped from the public web, its outputs may reproduce intellectual property from third-party sources. We know this because it turns out to be surprisingly easy to trace an output back to the exact material used in model training. In what a Cornell University paper calls a training data extraction attack, an adversary queries the language model to recover individual training examples. Beyond the regurgitation of training data, the tool’s ability to recall specific pieces of information and usernames is another serious privacy concern.
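To make the idea of a training data extraction attack concrete, here is a rough, hedged sketch of how a probe might look in practice: it samples several completions for short candidate prefixes and flags continuations that repeat verbatim across independent samples, a crude signal of memorization rather than free generation. The model name, prefixes, and repetition heuristic are illustrative assumptions, not the paper’s actual methodology.

```python
# Illustrative sketch of a training data extraction probe (not the paper's exact method).
# Assumes the `openai` Python package and an OPENAI_API_KEY in the environment.
from collections import Counter

from openai import OpenAI

client = OpenAI()

# Hypothetical prefixes an attacker might use to elicit memorized text.
CANDIDATE_PREFIXES = [
    "My social security number is",
    "BEGIN RSA PRIVATE KEY",
    "Confidential - internal use only:",
]

def sample_completions(prefix: str, n: int = 5) -> list[str]:
    """Sample several completions for the same prefix."""
    completions = []
    for _ in range(n):
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",  # assumed model name
            messages=[{"role": "user", "content": prefix}],
            temperature=1.0,
            max_tokens=64,
        )
        completions.append(response.choices[0].message.content.strip())
    return completions

for prefix in CANDIDATE_PREFIXES:
    samples = sample_completions(prefix)
    # Text that repeats verbatim across independent samples is a crude
    # signal of memorization rather than free generation.
    most_common, count = Counter(samples).most_common(1)[0]
    if count > 1:
        print(f"Possible memorized continuation for '{prefix}': {most_common!r}")
```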


Recommendations to Secure ChatGPT Usage

A handful of large corporations, including Amazon, Microsoft, and Walmart, have issued warnings to employees regarding the use of LLM-based apps. But even small-to-medium enterprises have a role in protecting their employees’ usage of potentially harmful tools. So, how can leaders respond to the new barrage of ChatGPT-prompted security problems? Here are some tactics for executives to consider:

  1. Implement a policy governing the use of AI services: New generative AI policies should apply to all employees and their devices, whether they work on-premises or remotely. Share this policy with anyone with access to corporate information or intellectual property (IP), including employees, contractors, and partners.
  2. Prohibit entering sensitive information into any LLM: Ensure employees understand the dangers of pasting confidential, proprietary, or trade-secret information into AI chatbots or language models. This includes personally identifiable information (PII). Add a clause about generative AI to your standard confidentiality agreements.
  3. Ensure employees are not leaking intellectual property: As with clearly sensitive information, consider also limiting how employees feed IP into LLMs and LLM-based tools. This might include designs, blog posts, documentation, or other internal resources that are not intended to be published on the Web.
  4. Follow the AI’s guidelines: Reading up on the LLM tool’s guidelines can help inform a security posture. For example, the ChatGPT creator OpenAI’s user guide for the tool clearly states: “We are not able to delete specific prompts from your history. Please don’t share any sensitive information in your conversations.”
  5. Consider generative AI security solutions: Vendors like Cyberhaven have created a layer that keeps confidential data out of public AI models; a minimal sketch of how such a pre-prompt filter might work follows this list. Of course, a dedicated product may be overkill, and simply communicating your company policy may be enough to prevent misuse.
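As referenced in item 5, the hypothetical filter below illustrates the kind of guardrail behind items 2, 3, and 5: it scans an outbound prompt for common sensitive-data patterns (email addresses, card-like numbers, internal project code names) before anything is sent to an external LLM. The regex patterns and the INTERNAL_CODENAMES list are placeholder assumptions; a real deployment would use a vetted DLP ruleset or a dedicated product.

```python
# Minimal sketch of a pre-prompt DLP check, using placeholder patterns;
# a production filter would rely on a maintained ruleset or a dedicated DLP product.
import re

# Hypothetical examples of internal project names you would not want leaked.
INTERNAL_CODENAMES = {"Project Falcon", "Atlas-DB"}

SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "possible card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return a list of reasons the prompt should be blocked; empty if clean."""
    findings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(f"contains {label}")
    for name in INTERNAL_CODENAMES:
        if name.lower() in prompt.lower():
            findings.append(f"mentions internal codename '{name}'")
    return findings

if __name__ == "__main__":
    prompt = "Summarize the Project Falcon roadmap and email it to jane.doe@example.com"
    issues = check_prompt(prompt)
    if issues:
        print("Blocked before sending to the LLM:", "; ".join(issues))
    else:
        print("Prompt passed the pre-send check.")
```

Even a simple check like this, run client-side or in a proxy in front of the LLM API, turns the written policy in items 1 through 3 into something enforceable rather than purely advisory.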