The rapid rise of Artificial Intelligence (AI) applications has ushered in a new era of technological disruption, changing our lives forever. Some argue that this technology is a blessing and focus on its advantages, while others, such as Eliezer Yudkowsky, claim it sows the seeds of our destruction. Indeed, despite the numerous benefits, AI technologies present many challenges in various fields, particularly information security. This series will present the evidence and challenges surrounding this complex case while suggesting a new way to face them head-on. This post marks the first installment.
What would you do upon discovering that employees have leaked sensitive information, including Intellectual Property (IP) or Personally Identifiable Information (PII), to one of the popular AI services – ChatGPT, Bard, or GitHub Copilot (as in the Samsung leak)?
How would you react if one of your company's senior executives fell victim to a sophisticated Business Email Compromise (BEC) attack crafted with AI technologies and confirmed a transfer of funds equivalent to last quarter's profits?
Do you know the answer? Or have you yet to fully grasp the magnitude of the issue?
In recent months, AI applications have been sweeping in from all directions. The official technological disruptor of 2023 is changing our lives dramatically, almost as much as the invention of the mobile phone. Although AI technologies like Large Language Models (LLMs), Machine Learning (ML), and Neural Networks have existed for some time, something new happened that made them “break the internet” and disrupt the digital world as we know it. The story begins with OpenAI, which stands at the forefront of AI companies with its ChatGPT language model.
ChatGPT offers a chat-based interface that is as natural as human conversation, rather than relying on programming languages understandable only by experienced programmers and technology professionals. If that didn’t ignite your interest, consider this: ChatGPT broke all-time records by reaching 100 million active users within two months, surpassing the previous record holder, TikTok, which accumulated 100 million users within nine months of its global release (excluding Meta’s new app “Threads,” which conveniently leans on Instagram).
Simple interface: The operation is straightforward and easy, just like conversing with a friend or person through messaging applications—no special language or coding skills are required.
Availability: The application is highly accessible 24/7/365 via any web browser and mobile device. An official mobile app is also available (currently only in the Apple App Store).
Relative confidentiality: By design, no user sees the questions and answers of other users, unlike other platforms (Facebook/Twitter/Instagram) where a question in a forum or group inevitably leads to broader public exposure. Although users know that site operators may have access to this information, it is generally accepted as necessary for operational purposes and ongoing improvement.
Free: There is no need to pay unless you are interested in premium services. And since the basic service provides satisfactory solutions for most use cases, the overall number of users keeps growing.
Reliability: In relative terms, the outputs provided by AI systems are reliable, particularly in the exact sciences. The results also improve and sharpen over time, although some biases and inaccuracies still exist.
High usability: These applications assist and support users across all age groups and in various topics. They are useful to academics, students, business people, researchers, and writers for tasks such as strategy development, solving mathematical problems, personal growth, creating poems, stories, and even images, as well as entertainment, medicine, and more. There isn’t a person on the planet who cannot benefit from using such applications.
Amidst all the remarkable benefits, a caveat inevitably demands our attention and caution: a significant headache of numerous challenges, particularly for the information security community. As the industry primarily focuses on safeguarding against data breaches, emerging solutions attempt to answer the most obvious problems visible to the naked eye, such as data leakage. But what other risks loom, you ask? The list is long, but rather than leave you guessing, I’ll mention a few.
Adversarial Attacks and Manipulation:
Adversaries exploiting vulnerabilities in Generative AI (GAI) models to deceive facial recognition systems and bypass security measures.
Adversaries exploiting vulnerabilities in ML/LLMs to manipulate recommendation systems, causing personalized misinformation or targeted propaganda.
Generation of Malicious Code and Content:
Using generative AI models (such as WormGPT) to create realistic phishing emails or deceptive websites that trick users into revealing sensitive information.
Malicious entities using AI/ML/LLMs to upgrade their skills and create more dangerous, sophisticated tools and cyber attacks.
Unauthorized Access and System Exploitation:
Malicious actors leveraging generative AI to generate synthetic fingerprints or voiceprints to bypass biometric authentication systems.
Exploiting vulnerabilities in ML/LLMs to gain unauthorized access to sensitive information or bypass security controls.
Having covered both the risks and the reasons for the high popularity of AI tools, and why we are so enamored with these applications, it becomes much clearer why they pose a serious headache for the cyber security community. On one side, users are highly motivated and eager to utilize these applications; on the other, organizations are driven to safeguard their assets from potential leaks or harm.
To address the numerous risks involved, major tech companies have resorted to a fundamental approach: implementing access restrictions on various AI tools like ChatGPT, Bard, Copilot, and others. These restrictions are enforced through content filtering mechanisms encompassing website filtering, application restrictions, and even IP address blocking, employing every means available.
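To make the filtering idea concrete, here is a minimal sketch of the domain-based egress rule a web proxy or DNS filter might apply. The domain names and function are illustrative assumptions, not any vendor's actual configuration; real deployments layer this with application and IP-level controls.

```python
# Illustrative blocklist of AI-service hostnames (assumed, not exhaustive).
BLOCKED_AI_DOMAINS = {
    "chat.openai.com",
    "bard.google.com",
    "copilot.github.com",
}

def is_request_allowed(hostname: str) -> bool:
    """Return False if the hostname matches a blocked AI service,
    including any of its subdomains."""
    hostname = hostname.lower().rstrip(".")
    return not any(
        hostname == blocked or hostname.endswith("." + blocked)
        for blocked in BLOCKED_AI_DOMAINS
    )

print(is_request_allowed("chat.openai.com"))      # False (blocked)
print(is_request_allowed("api.chat.openai.com"))  # False (subdomain blocked)
print(is_request_allowed("example.com"))          # True (allowed)
```

As the next paragraphs argue, the weakness of this approach is not technical: a determined user simply reaches the same service from a personal phone or a home network outside the filter's reach.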
While organizational-level blocking may hinder users in their day-to-day work, their strong motivation prompts them to seek creative solutions to facilitate their tasks, even if it means resorting to personal smartphones for access.
The situation becomes even more complex when considering hybrid work modes, including working from home or remotely, as not all organizations have comprehensive end-to-end solutions. Consequently, different access policies are implemented when transitioning between the organizational network, home Wi-Fi, or other public networks. In this context, the risk escalates, as users may inadvertently leak corporate information, and such incidents are not limited to devices managed by the organization alone.
Although the incidents mentioned above may not be prevalent or orchestrated on a large scale, highly motivated users may extract organizational code or introduce external code generated by LLMs or GAI through convoluted means. For instance, they may utilize third-party applications like Gmail or Pastebin or leverage standard interfaces and systems already available in the organizational toolbox, such as GitHub.
History has taught us that high motivation makes it incredibly challenging to restrict certain behaviors through legislation or enforcement, as is evident in cases involving drugs, prostitution, pornography, greed, and alcohol. Therefore, a different approach is necessary to address these challenges effectively.
In the face of the AI revolution, a shift in mindset becomes crucial for organizations planning their long-term strategies. As we move forward, it may be more beneficial for us to embrace the inevitable changes and capitalize on the vast opportunities presented by this revolution rather than resist it. This approach is driven by several key reasons that highlight the futility of opposing the advancements in AI technology.
Firstly, blocking frustrates those who want to improve their skills and grow through innovative solutions. Furthermore, managers who attempt to block or suppress the phenomenon are likely to fail in their endeavors. The unstoppable appetite for AI tools requires a different approach, one that acknowledges their influence and adapts accordingly.
With this in mind, organizations must contemplate the most effective course of action. Will a partial solution to combat data leaks be sufficient? Or should the focus be on securing access to system interfaces using Multi-Factor Authentication (MFA) while implementing robust data encryption in transit?
Similarly, how can organizations address the known and unforeseen risks accompanying the AI revolution? Each passing day brings new capabilities, scenarios, and threats that were unimaginable just yesterday. Therefore, a comprehensive and forward-thinking approach is necessary to tackle the evolving challenges effectively.
Are we entering an era where acquiring a new range of security products is essential to handle the surge of AI applications? I think not. When tackling the challenges presented by AI, it is necessary to acknowledge that many of them can be effectively addressed with existing data security solutions that have been available for years. However, to fully unlock the potential of this new technology, certain adjustments to policies, fine-tuning, or the incorporation of licensing into existing technological solutions may be necessary. Simply granting users browsing access to AI tools might be insufficient. For example, expecting an AI system to generate customized code without providing appropriate data, or requesting a statistical evaluation without specific numbers, poses significant difficulties. Establishing trust in the system becomes a critical factor; alternatively, we may need to filter out certain data during the process, which could result in partially inaccurate outputs, stopping that train in its tracks.
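The "filter out certain data during the process" idea can be sketched as a small redaction step applied to a prompt before it ever reaches an AI service. The patterns below are illustrative assumptions covering only email addresses and phone-like numbers; a real DLP engine uses far richer detection, and, as noted above, aggressive redaction can degrade the quality of the model's answer.

```python
import re

# Illustrative PII patterns (assumed for this sketch, not exhaustive).
REDACTION_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "[REDACTED_EMAIL]"),
    (re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"), "[REDACTED_PHONE]"),
]

def redact_prompt(prompt: str) -> str:
    """Replace PII-like substrings with placeholder tokens
    before the prompt is forwarded to an external AI service."""
    for pattern, placeholder in REDACTION_PATTERNS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

sanitized = redact_prompt("Contact alice@corp.com or 555-123-4567 about the Q3 code.")
print(sanitized)
# Contact [REDACTED_EMAIL] or [REDACTED_PHONE] about the Q3 code.
```

The trade-off is exactly the one described above: the redacted prompt is safer to send, but the model now answers with less context, so its output may be partially inaccurate.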
If your organization’s defense framework consists of a relatively simple array of two or three defense mechanisms, consider exploring a dedicated product. However, it is important to note that this would serve as a stopgap solution and may not comprehensively address all the risks and challenges associated with your required use cases.
The rise of AI applications brings remarkable benefits and significant challenges, particularly to the information security field. While these applications have gained popularity due to their user-friendly interfaces, accessibility, and reliability, they also pose risks that demand our attention. Major tech companies have implemented access restrictions to mitigate these risks, but users’ strong motivation often leads them to find alternative ways to utilize AI tools. To address these challenges effectively, organizations must shift their mindset and embrace the AI revolution rather than resist it. By doing so, they can leverage the potential of AI while implementing robust security measures. It is essential to balance enabling user access and safeguarding sensitive data.