
The Dark Side of ChatGPT and Implications for Supply Chain Security

Updated: Jun 5, 2023


Introduction:

While AI-powered language models like ChatGPT have revolutionized various applications, they also inadvertently empower less-skilled threat actors to launch sophisticated cyberattacks. This article discusses the impact of these tools on the cyber threat landscape, focusing on supply chain attacks, and outlines the minimum security measures organizations should implement to mitigate such risks.


ChatGPT and Similar Tools: A Double-Edged Sword

  • Lowering the Barrier to Entry: ChatGPT and similar language models make it easier, even for less-skilled attackers, to generate convincing phishing emails, craft social engineering campaigns, or create malware that mimics legitimate software. This lowers the barrier to entry for malicious actors, enabling them to carry out more advanced cyberattacks with minimal effort and expertise.


  • Automating the Attack Process: These AI-driven tools can also be used to automate various stages of the attack process, such as identifying targets, gathering information, and crafting personalized messages to deceive victims. This increases the speed and efficiency of cyberattacks, making it more challenging for organizations to defend against them.


When not properly managed, these tools can pose significant risks to an organization's supply chain; here are three examples:


  • Amplifying Attack Surface

Since less-skilled attackers now have access to advanced AI-driven tools, the potential attack surface for the organization and its supply chain grows. The increased number of adversaries with sophisticated capabilities makes it more difficult to detect and prevent an attack.


  • Disrupting Trust Relationships

Threat actors can amplify their capabilities by using AI-generated communications to impersonate trusted partners, vendors, or suppliers, disrupting the trust relationships within a supply chain. This can lead to unauthorized access to sensitive information or the introduction of malicious software into an organization's network.


  • Causing Chaos

With advanced generative AI tools, even low-skilled attackers can cause chaos and disrupt the daily work of an organization and its supply chain.

Gain access and control - The attacker gains access to the tool and begins gathering information about the company's relationships with its suppliers.

Crafting a Convincing Campaign - The attacker then uses private and public information to generate persuasive, personalized phishing emails requesting an urgent update to the supplier's payment information due to a recent "banking issue."

Here are suggested mitigations and controls an organization can adopt to keep the supply chain as safe as possible. Share these recommendations with your suppliers to create a secure working environment around your organization.

  1. Build an AI URL access policy - Share these popular examples with your suppliers and explain the risks of letting unqualified employees use these tools. A sketch of what such a policy check could look like follows the list of examples below.

    1. ChatGPT – https://chat.openai.com

    2. Google Bard – https://bard.google.com

    3. Bing AI – https://bing.com

    4. ChatGPT API – https://api.openai.com

    5. Wand AI - https://wand.ai/

    6. Glean AI - https://www.glean.ai/

    7. Hugging Face - https://huggingface.co/

    8. Bearly AI - https://bearly.ai/

    9. Base 64 AI - https://base64.ai/

    10. Nanonets AI - https://nanonets.com/
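
As a hedged illustration only, here is a minimal Python sketch of how such an access policy might be expressed, for example inside an egress proxy or secure web gateway rule. The domain list mirrors the examples above; the group name ai-approved-users and the is_request_allowed() helper are assumptions made for this sketch, not a specific product's API.

```python
# Minimal sketch of an egress policy check for generative AI domains.
# The policy logic, group names, and helper are illustrative assumptions.
from urllib.parse import urlparse

# Domains an organization may choose to restrict (from the list above).
RESTRICTED_AI_DOMAINS = {
    "chat.openai.com",
    "api.openai.com",
    "bard.google.com",
    "bing.com",
    "wand.ai",
    "glean.ai",
    "huggingface.co",
    "bearly.ai",
    "base64.ai",
    "nanonets.com",
}

# Hypothetical group of employees cleared to use generative AI tools.
APPROVED_GROUPS = {"ai-approved-users"}


def is_request_allowed(url: str, user_groups: set[str]) -> bool:
    """Allow traffic to restricted AI domains only for approved groups."""
    host = (urlparse(url).hostname or "").lower()
    restricted = any(
        host == d or host.endswith("." + d) for d in RESTRICTED_AI_DOMAINS
    )
    return (not restricted) or bool(user_groups & APPROVED_GROUPS)


if __name__ == "__main__":
    print(is_request_allowed("https://chat.openai.com/", {"finance"}))            # False
    print(is_request_allowed("https://chat.openai.com/", {"ai-approved-users"}))  # True
    print(is_request_allowed("https://example.com/", {"finance"}))                # True
```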

  2. Training and Awareness - Share training with your suppliers and their staff on managing sensitive information such as confidential customer data, PII, details of employment candidates, internal business information, trade secrets, and financial data.

  3. Clipboard Protection - Ask your suppliers to implement clipboard-blocking functionality to prevent copying of sensitive data. This is especially relevant for suppliers that have access to PII, customer data, trade secrets, or financial information.
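
Clipboard blocking is normally enforced through endpoint DLP or browser-isolation tooling rather than custom code, but as a rough conceptual sketch of the idea (assuming the third-party pyperclip package and purely illustrative detection patterns), it could look like this:

```python
# Conceptual sketch of clipboard monitoring for sensitive-looking content.
# Real deployments rely on endpoint DLP tooling; this only illustrates the idea.
# Requires the third-party pyperclip package (pip install pyperclip).
import re
import time

import pyperclip

# Illustrative patterns: 16-digit card-like numbers and "sk-..." style tokens.
SENSITIVE_PATTERNS = [
    re.compile(r"\b(?:\d[ -]?){16}\b"),      # card-like number
    re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),  # secret-key-like token
]


def clipboard_monitor(poll_seconds: float = 1.0) -> None:
    """Clear the clipboard whenever its contents match a sensitive pattern."""
    last_seen = ""
    while True:
        text = pyperclip.paste()
        if text != last_seen:
            last_seen = text
            if any(p.search(text) for p in SENSITIVE_PATTERNS):
                pyperclip.copy("")  # wipe the clipboard
                print("Sensitive-looking content removed from clipboard.")
        time.sleep(poll_seconds)


if __name__ == "__main__":
    clipboard_monitor()
```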

  4. Make sure your suppliers run security audits and code reviews - For technology suppliers that use generative AI to generate code, share these ideas and instructions:

    1. Always Verify - Suppliers using generative AI should treat output produced by ChatGPT as a suggestion. Verify it and check it for accuracy before releasing to the production environment.

    2. Validate with multiple sources, such as the open-source communities you rely on.

    3. Follow best practices, such as the principle of least privilege when granting access to databases and other critical resources.

    4. Check for potential vulnerabilities using tools such as CodeQL and Trivy.

    5. Use established security libraries such as ESAPI, AntiSamy, and Cerberus to cover input validation and sanitization.

    6. Use allowlists (whitelists) rather than blocklists (blacklists) when validating input.

    7. Make server-side input validation a mandatory requirement for your suppliers (a brief sketch follows this list).

    8. Pay attention to what you enter into ChatGPT. How safely the data you enter is handled has yet to be determined, so treat sensitive inputs with care when using these tools.

    9. Remove any secrets from the code. All secrets should be stored in a secure vault.

    10. Be careful about disclosing personal data that could violate compliance rules like GDPR or HIPAA.
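
To make points 6, 7, and 9 above more concrete, here is a minimal sketch of allowlist-based, server-side input validation with secrets kept out of the code. The field names, allowed values, and the DB_PASSWORD environment variable are illustrative assumptions, not a prescribed implementation.

```python
# Sketch of allowlist-based, server-side input validation with secrets kept
# out of the code. Field names, allowed values, and the environment-variable
# name are illustrative assumptions.
import os
import re

# Allowlist: accept only known-good values instead of trying to block bad ones.
ALLOWED_COUNTRIES = {"US", "DE", "IL", "GB"}
USERNAME_PATTERN = re.compile(r"^[A-Za-z0-9_.-]{3,32}$")


def validate_payment_update(payload: dict) -> dict:
    """Validate untrusted input on the server side before it is processed."""
    username = str(payload.get("username", ""))
    country = str(payload.get("country", "")).upper()

    if not USERNAME_PATTERN.fullmatch(username):
        raise ValueError("username does not match the allowed pattern")
    if country not in ALLOWED_COUNTRIES:
        raise ValueError("country is not on the allowlist")
    return {"username": username, "country": country}


def get_database_password() -> str:
    """Read the secret from the environment (populated from a vault), never from code."""
    password = os.environ.get("DB_PASSWORD")
    if not password:
        raise RuntimeError("DB_PASSWORD is not set; fetch it from your secrets vault")
    return password


if __name__ == "__main__":
    # "de" is normalized and accepted because it is on the allowlist.
    print(validate_payment_update({"username": "supplier_42", "country": "de"}))
```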


Conclusion:

The rise of AI-driven tools like ChatGPT has significant implications for the cybersecurity landscape, particularly in the context of supply chain attacks. To defend against the growing threats posed by less-skilled attackers using these advanced tools, organizations must implement security measures and share this knowledge and these skills with their suppliers to maintain a secure working environment.

By taking a proactive approach to secure their supply chains, sharing information, and training their suppliers and vendors, organizations can minimize their exposure to cyberattacks and maintain the trust and integrity of their business operations.

