The Dark Side of ChatGPT and Implications for Supply Chain Security
While AI-powered language models like ChatGPT have revolutionized various applications, they also inadvertently empower less-skilled threat actors to launch sophisticated cyberattacks. The following document discusses the impact of these tools on the cyber threat landscape, specifically focusing on supply chain attacks, and outlines the minimum security measures organizations should implement to mitigate such risks.
ChatGPT and Similar Tools: A Double-Edged Sword
Lowering the Barrier to Entry: ChatGPT and similar language models make it easier, even for less-skilled attackers, to generate convincing phishing emails, craft social engineering campaigns, or create malware that mimics legitimate software. This lowers the barrier to entry for malicious actors, enabling them to carry out more advanced cyberattacks with minimal effort and expertise.
It was recently reported that GPT-4 even managed to fool a person into solving a CAPTCHA on its behalf.
Automating the Attack Process: These AI-driven tools can also be used to automate various stages of the attack process, such as identifying targets, gathering information, and crafting personalized messages to deceive victims. This increases the speed and efficiency of cyberattacks, making it more challenging for organizations to defend against them.
These tools can pose significant risks to an organization's supply chain when they are not properly managed; here are two examples:
Amplifying Attack Surface
Since less-skilled attackers now have access to advanced AI-driven tools, the potential attack surface for the organization and its supply chain grows. The increased number of adversaries with sophisticated capabilities makes it more difficult to detect and prevent an attack.
Disrupting Trust Relationships
Threat actors can amplify their capabilities by using AI-generated communications to impersonate trusted partners, vendors, or suppliers, disrupting the trust relationships within a supply chain. This can allow them to gain unauthorized access to sensitive information or to introduce malicious software into an organization's network.
With advanced generative AI tools, even low-skilled attackers can cause chaos and disrupt the daily work of an organization and its supply chain.
Gain Access and Control - The attacker gains access to the tool and begins gathering information about the company's relationships with its suppliers.
Craft a Convincing Campaign - Using private and public information, the attacker generates persuasive and personalized phishing emails requesting an urgent update to the supplier's payment information due to a recent "banking issue."
Here are suggested mitigations and controls an organization can take to keep the supply chain as safe as possible. Share these recommendations with your suppliers to create a secure working environment surrounding your organization.
Build an AI URL access policy - Share these popular examples with your suppliers and explain the risks of these tools being used by untrained employees.
ChatGPT – https://chat.openai.com
Google Bard – https://bard.google.com
Bing AI – https://bing.com
ChatGPT API – https://api.openai.com
Wand AI - https://wand.ai/
Glean AI - https://www.glean.ai/
Hugging Face - https://huggingface.co/
Bearly AI - https://bearly.ai/
Base 64 AI - https://base64.ai/
Nanonets AI - https://nanonets.com/
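One way to operationalize such a policy is a simple egress check at a proxy or gateway. The sketch below, a minimal illustration only, builds a domain list from the examples above (your actual policy list will differ) and flags URLs whose host falls under any listed domain:

```python
from urllib.parse import urlparse

# Illustrative policy list drawn from the examples above; adjust it to
# your organization's actual access policy (e.g., blocking all of
# bing.com may be too broad for your environment).
GENAI_DOMAINS = {
    "chat.openai.com", "api.openai.com", "bard.google.com",
    "wand.ai", "glean.ai", "huggingface.co", "bearly.ai",
    "base64.ai", "nanonets.com",
}

def is_genai_url(url: str) -> bool:
    """Return True if the URL's host is a listed GenAI domain or a subdomain of one."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in GENAI_DOMAINS)

print(is_genai_url("https://chat.openai.com/c/abc"))  # True
print(is_genai_url("https://example.com/"))           # False
```

In practice this logic would live in your web proxy or DNS filtering product rather than application code; the point is that matching must cover subdomains, not just exact hostnames.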
Training and Awareness - Share training with your suppliers and their staff about managing sensitive information such as confidential customer data, PII, data on employment candidates, internal business information, trade secrets, and financial data.
Clipboard Protection - Ask your suppliers to implement clipboard-blocking functionality to prevent copying of sensitive data. This is especially relevant for suppliers that have access to PII, customer data, trade secrets, or financial information.
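Clipboard blocking is normally enforced by an endpoint DLP product, but the core idea, scanning content for sensitive patterns before allowing a copy, can be sketched in a few lines. The patterns below are illustrative placeholders only; production detectors are far richer:

```python
import re

# Illustrative detectors only; real DLP tools ship validated pattern sets.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def clipboard_copy_allowed(text: str) -> bool:
    """Return False if the text matches any sensitive-data pattern."""
    return not any(p.search(text) for p in SENSITIVE_PATTERNS.values())
```

A hook like this would run before text leaves a managed application, so that an employee cannot paste customer records into a public AI chat window.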
Make sure your suppliers run security audits and code reviews - For technology suppliers that use Generative AI to generate code, share these ideas and instructions:
Always Verify - Suppliers using Generative AI should treat output produced by ChatGPT as a suggestion. Verify it and check it for accuracy before deploying to the production environment.
Validate against multiple sources, such as the open-source communities you rely on.
Follow best practices, such as the principle of least privilege, when providing access to databases and other critical resources.
Check for potential vulnerabilities using tools such as CodeQL and Trivy.
Implement input validation and sanitization in each codebase with frameworks such as ESAPI, AntiSamy, and Cerberus.
Make sure to validate with whitelists (allowlists) rather than blacklists (blocklists).
Make server-side input validation a mandatory requirement for your suppliers.
Pay attention to what you input into ChatGPT. It is still unclear how safely the data you enter into ChatGPT is handled, so treat sensitive inputs to these tools with care.
Remove any secrets from the code. All secrets should be stored in a secured vault.
Be careful about disclosing personal data that could violate compliance rules such as GDPR or HIPAA.
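The whitelist principle above can be sketched with a hypothetical username field: define exactly what is acceptable and reject everything else, instead of trying to enumerate known-bad characters.

```python
import re

# Allowlist: accept only the exact shape we expect for a (hypothetical)
# username field -- 3 to 32 letters, digits, or underscores.
USERNAME_ALLOWED = re.compile(r"[A-Za-z0-9_]{3,32}")

def validate_username(value: str) -> bool:
    """Server-side check: the value must fully match the allowlist pattern."""
    return bool(USERNAME_ALLOWED.fullmatch(value))
```

Because the check is a full match against an allowlist, injection payloads fail by default, with no need to anticipate every dangerous character in a blacklist.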
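Keeping secrets out of source code, as recommended above, typically means injecting them at runtime from a vault or secret manager. This minimal sketch assumes a hypothetical `DB_PASSWORD` environment variable populated by a vault agent or CI pipeline:

```python
import os

def get_db_password() -> str:
    """Fetch the secret at runtime, never from a literal in source code.

    DB_PASSWORD is a hypothetical variable name; in practice it would be
    injected by a vault agent or your CI/CD secret store.
    """
    password = os.environ.get("DB_PASSWORD")
    if not password:
        raise RuntimeError("DB_PASSWORD is not set; check your secret injection")
    return password
```

Failing loudly when the secret is absent is deliberate: it surfaces misconfiguration at startup instead of letting a hardcoded fallback slip into production.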
The rise of AI-driven tools like ChatGPT has significant implications for the cybersecurity landscape, particularly in the context of supply chain attacks. To defend against the growing threats posed by less-skilled attackers using these advanced tools, organizations must implement security measures and share this information and these skills with their suppliers to maintain a secure working environment.
By taking a proactive approach to secure their supply chains, sharing information, and training their suppliers and vendors, organizations can minimize their exposure to cyberattacks and maintain the trust and integrity of their business operations.