What is it?

Shadow AI refers to the use of AI applications or models within an organization without the explicit consent or knowledge of the firm’s IT organization. Concerns about Shadow AI normally fall into two categories:

  • Internal use of Shadow AI: Employees leveraging AI in internally built models or applications, for example using GenAI to write code or answer questions, or building internal AI tools without the knowledge of IT.
  • AI in 3rd party applications or models: AI embedded in 3rd party applications or models, introduced at installation or through an update, without the knowledge of the firm using the application.

Identifying and mitigating Shadow AI in either case is a matter of increasing concern and importance to firms everywhere as the use of AI proliferates across industries and within organizations.

Why is it important?

According to McKinsey, AI adoption within the financial services industry grew by 2.5x from 2017 to 2022 and will no doubt continue to increase. As the use cases for AI spread, so will the risk associated with it. AI is high risk because its outputs can be much more difficult to predict and understand, and as AI accelerates and improves, this problem will only be exacerbated.

The cost and complexity of AI models can also scale exponentially. For example, experts estimate that the GPT model created by OpenAI costs about $1 million a day, and that in upgrading from GPT-3 to GPT-4 the number of parameters scaled from one billion to 100 billion. This illustrates just how complex AI can be and how quickly that complexity can grow. Generative AI has similarly shown higher rates of hallucination than originally suspected and has made some embarrassing, high-profile errors.

According to The Economist, 77% of bankers report that AI will be the key differentiator between winning and losing banks, so avoiding the use of AI is not a realistic option. The prevalence of Shadow AI shows that even if a firm wanted to avoid AI, keeping members of the organization from adopting it, or keeping it out of the tools leveraged from 3rd party vendors, is harder still.

Regulatory Landscape

The regulatory landscape around managing Shadow AI, particularly within 3rd party applications, is also quickly emerging. Regulators increasingly hold the Senior Management Function (SMF) responsible for identifying all AI models used within their organization, including those in 3rd party applications, and for testing these 3rd party models and applications and their results to the same standards as internally built models and tools. Identifying and mitigating the risk of Shadow AI therefore becomes especially crucial. Regulations that reference the use of models within 3rd party applications include:

  • SS 1/23 (U.K.): This Supervisory Statement from the PRA goes into effect May 17th and sets the expectations for banks and financial firms that operate within the UK. Principle 2.6, covering the use of externally developed models and third-party vendor products, states that firms should “(i) satisfy themselves that the vendor models have been validated to the same standards as their own internal MRM expectations.”
  • The AI Risk Management Framework (U.S.): Released by NIST, part of the U.S. Department of Commerce, on January 26, 2023, this framework guides organizations on how to govern, map, and measure AI risk, including 3rd party Shadow AI risk. GOVERN 6.1: “Policies and procedures are in place that address AI risks associated with third-party entities, including risks of infringement of a third-party’s intellectual property or other rights.”
  • The E.U. AI Act: This legislation passed by the E.U. more broadly regulates uses of AI within firms that may directly impact the safety and well-being of the public, and holds firms accountable for errors or poor practices that lead to public harm.
  • The Artificial Intelligence and Data Act (Canada): Sets expectations for the use of AI within Canada in order to protect the interests of the public and requires that appropriate measures be put in place to identify, assess, and mitigate risks of harm or biased output. 3rd party vendors whose models risk introducing bias or harm are likely covered by the risks the regulation describes.

Mitigating the Risk from Shadow AI

There are many ways to address the risk from Shadow AI. Below are practices that can help; brief illustrative sketches of several of them follow the list:

  • Identifying the internal use of GenAI: EUCs and Models can be generated using GenAI, and these can leak into the public sphere or hallucinate and produce errors. Testing specific Models and EUCs to estimate the probability that GenAI was used to create them can therefore be helpful.
  • Identifying AI Models within 3rd Party Applications: Monitoring the behavior of 3rd party tools and executables and looking for patterns that may indicate the use of AI can be a necessary way to uncover hidden Shadow AI risk. Consistent, scheduled scans are a great way to keep this risk in check (a simple scan is sketched after this list).
  • Interdependency Map: A model’s level of risk is highly dependent on the models and data sources that serve as inputs to that model. With an interdependency map, you can easily visualize these relationships. Paying special attention to 3rd Party Models that feed into high-impact models can help prioritize where to look for Shadow AI (see the interdependency sketch after this list).
  • Security Vulnerabilities: Even when firms are aware of the use of AI within a 3rd party product, it is important to automate checks for security vulnerabilities within 3rd party AI libraries.
  • Monitor 3rd Party Model Performance: Many of these 3rd party models are black boxes, and this is where the risk of Shadow AI is highest, as firms do not know what techniques a vendor is using. Monitoring 3rd party models for sudden changes in performance can indicate the use of Shadow AI (a simple drift check is sketched after this list).
  • AI Testing Validation Suite: Maintain a comprehensive testing suite for models that can similarly pick up strange behavior indicating the use of Shadow AI. An effective testing suite could cover Data Drift, Validity & Reliability, Fairness, Interpretability, and Code Quality, among many others. The results of these tests should be consistently documented in a standardized, easy-to-follow way.
  • Proper Controls, Workflows, and Accountability: Controlling the use of Shadow AI in internally developed tools starts with controlling who has access to which EUCs and Models. This can be done through an Audit Trail, which also tracks who makes changes to which models, and through Approval Workflows, which provide accountability for who approved models that were behaving suspiciously (a minimal audit trail is sketched after this list).
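
The sketches below illustrate how some of these practices might be approached in code. They are minimal, hypothetical examples rather than implementations of any particular product. The first assumes a Python environment and scans source files and installed packages for signs of AI library usage; the watch list of library names (openai, transformers, torch, etc.) is purely illustrative and would need to reflect your own policy.

    import ast
    from pathlib import Path
    from importlib import metadata

    # Illustrative watch list of AI/GenAI libraries; extend it to match your own policy.
    AI_LIBRARIES = {"openai", "anthropic", "transformers", "torch", "tensorflow",
                    "sklearn", "langchain"}

    def imported_ai_libraries(py_file: Path) -> set[str]:
        """Return AI-related top-level imports found in one Python source file."""
        try:
            tree = ast.parse(py_file.read_text(encoding="utf-8", errors="ignore"))
        except SyntaxError:
            return set()
        found = set()
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                found |= {alias.name.split(".")[0] for alias in node.names}
            elif isinstance(node, ast.ImportFrom) and node.module:
                found.add(node.module.split(".")[0])
        return found & AI_LIBRARIES

    def scan_codebase(root: str) -> dict[str, set[str]]:
        """Map each source file under `root` to the AI libraries it imports."""
        return {str(f): hits for f in Path(root).rglob("*.py")
                if (hits := imported_ai_libraries(f))}

    def installed_ai_packages() -> set[str]:
        """List AI-related packages installed in the current environment.
        Note: distribution names can differ from import names (e.g. scikit-learn vs sklearn)."""
        return {d.metadata["Name"] for d in metadata.distributions()
                if d.metadata["Name"] and d.metadata["Name"].lower() in AI_LIBRARIES}

    if __name__ == "__main__":
        print("Installed AI packages:", installed_ai_packages())
        for path, libs in scan_codebase(".").items():
            print(path, "->", sorted(libs))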
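
An interdependency map can start as something as simple as a record of each model’s direct inputs, which can then be walked to find 3rd party components upstream of high-impact models. The inventory below, including its field names and example entries, is hypothetical.

    from collections import deque

    # Hypothetical model/EUC inventory: each entry lists its direct inputs and
    # whether the component comes from a 3rd party vendor.
    inventory = {
        "credit_scorecard":   {"inputs": ["vendor_fraud_score", "customer_data"],
                               "third_party": False, "impact": "high"},
        "vendor_fraud_score": {"inputs": ["transaction_feed"],
                               "third_party": True, "impact": "medium"},
        "customer_data":      {"inputs": [], "third_party": False, "impact": "low"},
        "transaction_feed":   {"inputs": [], "third_party": False, "impact": "low"},
    }

    def upstream_third_parties(model: str) -> set[str]:
        """Return every 3rd party component feeding `model`, directly or indirectly."""
        flagged, seen = set(), set()
        queue = deque(inventory.get(model, {}).get("inputs", []))
        while queue:
            node = queue.popleft()
            if node in seen:
                continue
            seen.add(node)
            entry = inventory.get(node, {})
            if entry.get("third_party"):
                flagged.add(node)
            queue.extend(entry.get("inputs", []))
        return flagged

    # Prioritise review of high-impact models with 3rd party dependencies upstream.
    for name, entry in inventory.items():
        if entry["impact"] == "high":
            print(name, "depends on 3rd party components:", upstream_third_parties(name))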
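
For monitoring 3rd party model outputs, one common and simple signal is a shift in the distribution of the model’s scores relative to a baseline window. The sketch below uses the Population Stability Index (PSI) as an example metric; the 0.2 threshold is a conventional rule of thumb rather than a requirement of any regulation cited above, and the example data is synthetic.

    import numpy as np

    def population_stability_index(baseline, current, bins: int = 10) -> float:
        """PSI between a baseline and a current sample of model outputs.
        Values above roughly 0.2 are often treated as a material shift."""
        edges = np.histogram_bin_edges(baseline, bins=bins)
        base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
        curr_pct = np.histogram(current, bins=edges)[0] / len(current)
        # Clip empty bins so the log term stays finite.
        base_pct = np.clip(base_pct, 1e-6, None)
        curr_pct = np.clip(curr_pct, 1e-6, None)
        return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

    # Synthetic example: vendor model scores from last quarter vs. this week.
    rng = np.random.default_rng(0)
    baseline_scores = rng.beta(2, 5, size=5_000)
    current_scores = rng.beta(3, 4, size=1_000)  # the distribution has shifted

    psi = population_stability_index(baseline_scores, current_scores)
    if psi > 0.2:
        print(f"PSI = {psi:.2f}: investigate the vendor model for unannounced changes")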
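
Accountability for internally developed tools can begin with an append-only audit trail of changes and approvals. The sketch below is a minimal, hypothetical illustration that chains entries with hashes so that after-the-fact tampering is detectable; a production system would also need access controls, approval routing, and durable storage.

    import hashlib
    import json
    from datetime import datetime, timezone

    class AuditTrail:
        """Append-only log of model changes and approvals, chained by hash."""

        def __init__(self):
            self.entries = []

        def record(self, user: str, model: str, action: str) -> dict:
            prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
            entry = {
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "user": user,
                "model": model,
                "action": action,  # e.g. "modified", "approved", "retired"
                "prev_hash": prev_hash,
            }
            entry["hash"] = hashlib.sha256(
                json.dumps(entry, sort_keys=True).encode()).hexdigest()
            self.entries.append(entry)
            return entry

        def verify(self) -> bool:
            """Check that no entry has been altered or removed after the fact."""
            prev_hash = "0" * 64
            for entry in self.entries:
                body = {k: v for k, v in entry.items() if k != "hash"}
                expected = hashlib.sha256(
                    json.dumps(body, sort_keys=True).encode()).hexdigest()
                if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
                    return False
                prev_hash = entry["hash"]
            return True

    trail = AuditTrail()
    trail.record("analyst_a", "credit_scorecard", "modified")
    trail.record("manager_b", "credit_scorecard", "approved")
    print("Audit trail intact:", trail.verify())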

About CIMCON Software

CIMCON Software has been at the forefront of managing AI, EUC, and Model Risk for over 25 years, trusted by over 800 customers worldwide. We are ISO 27001 Certified for Information Security and have offices in the USA, London, and Asia Pacific. Our risk management platform directly supports the automation of best practices and policy, including an EUC & Model Inventory, Risk Assessment, identification of Cybersecurity & Privacy Vulnerabilities, and an EUC Map showing the relationships between EUCs and Models. We also offer an AIValidator tool, available as a no-code tool or a Python package, that automates testing and documentation generation for models and 3rd party applications.

Effective Management of Shadow AI

Shadow AI is already a major problem for firms and organizations, and it is only going to get worse as AI spreads. The greatest risk of Shadow AI is that you don’t know it is a problem until you have the proper tools to identify and mitigate it. Managing Shadow AI is essential not just because of regulatory pressure, but because of the overall increase in the risk of errors that can be quite costly to firms. Leveraging battle-tested tools and a team with over 25 years of experience is the best way to get a handle on this issue and proactively solve problems before they arise.

AI Risk Management Framework

Explore the realm of Artificial Intelligence (AI) with our AI Risk Management Policy. This concise guide covers the spectrum of AI models, including supervised, unsupervised, and deep learning, and emphasizes making AI trustworthy based on the NIST AI Risk Management Framework.

Learn to assess and manage AI Risk, cultivate a culture of risk awareness, and utilize periodic testing with tools like ours. This policy is your essential toolkit for responsible and effective AI utilization in your organization.

 

Request AI Policy