GENERATIVE AI

What is Generative AI

Generative AI, a transformative technology, gained widespread attention with the launch of ChatGPT and GPT-4 in late 2022 and early 2023, respectively. While experts have been fascinated by this technology for years, its consumer-facing applications have now captured the public's imagination.

Generative AI can significantly impact how organizations and individuals operate, with the potential to contribute immensely to the global economy and influence numerous occupations in the US. This technology's ability to produce assets like images or text, make unstructured data accessible, and enable AI use for the average person unlocks new business opportunities and drives organizational advancements. However, it also brings inherent risks that must be carefully managed.

Generative AI concerns can be broadly categorized into two areas:

  • Internally Built Models: Leveraging Generative AI for internal automation or customer interactions (e.g., customer chatbots).
  • Unauthorized Use of External GenAI Tools: Using tools like ChatGPT for tasks such as coding or answering questions without proper data and privacy safeguards or governance, leading to potential leaks of proprietary information.

What are the Risks from GenAI (LLMs)?

Generative AI encompasses various models, including text-to-text, image-to-image, and multi-modal models. These complex models come with unique risks. Key concerns include:

  • Hallucination and False Information: AI generating misleading or incorrect content.
  • Bias and Fairness: Producing biased outputs that may discriminate against protected classes.
  • Privacy Infringement: Leaking private or sensitive information.
  • Intellectual Property Violations: Creating content based on IP-protected material without permission or producing derivative works.
  • Harmful Content: Generating offensive, malicious, or illegal content.
  • Environmental Impact: Training large models increases energy consumption, posing environmental concerns.

AI Risk Management Regulations

Europe

    • European Artificial Intelligence Office (Feb 2024)
    • EU AI Act (Dec 2023)
    • PRA Supervisory Statement SS1/23 (May 2024)

North America

  • White House Executive Order (Oct 2023)
  • NIST AI Risk Management Framework Guidance (Jan 2023)
  • OCC Comptroller’s Handbook (Aug 2021)
  • Canada: The Artificial Intelligence and Data Act (Jun 2022)

How to Manage Risk from GenAI Models

  • Consistent Monitoring for Undetected AI Models: Regularly scheduled scans to detect AI usage in models and EUCs can uncover risks before they result in errors, ensuring regulatory compliance.
  • Comprehensive AI Testing Suite: Implementing a robust testing suite to detect and control AI includes tests for data drift, validity, reliability, fairness, interpretability, and code quality. Consistent documentation of results ensures transparency and accountability.
  • LLM Vulnerability Testing: Testing for biases, fairness, harmful content, and privacy issues helps stress-test models before customer deployment.
  • Explainable LLMs: Using content attribution to trace the origin of data used in AI responses helps mitigate errors and prevent the spread of incorrect information.
  • LLM Hallucination Testing: Monitoring the rates of hallucination, or incorrect responses, in LLMs ensures accuracy and reliability. Leveraging the latest developments in RAG models and Challenger LLMs is crucial.
  • Implementing Controls and Accountability Measures: Managing access to EUC models and tools with audit trails and approval workflows can mitigate risks associated with Shadow AI.

Enforce Policies and Maintain Compliance

  • GenAI Detection Reporting: Using AI detection algorithms to scan the landscape of EUCs and models can provide a better understanding of the risk profile concerning unauthorized uploads into AI generators.
  • Securing Proprietary Code: Flagging the use of Generative AI within code repositories can help identify and mitigate the risk of leaking proprietary code to third parties.
  • Flagging Hallucination: Running AI detection reports can identify documents prone to errors due to LLM hallucinations.
  • Demonstrating Governance and Compliance: Adhering to regulations like the EU AI Act and SS1/23 involves documenting and enforcing policies regarding the internal use of Generative AI.

Who Are We?

CIMCON Software has been a leader in managing AI, EUC, and Model Risk for over 25 years, trusted by over 800 customers worldwide. We are ISO 27001 Certified for Information Security and have offices in the USA, London, and Asia Pacific. Our risk management platform supports best practices and policy automation, including an EUC & Model Inventory, Risk Assessment, Cybersecurity & Privacy Vulnerability Identification, and an EUC Map showing relationships between EUCs and models. Our AIValidator tool automates model testing and documentation, available as a no-code tool or a Python package.

End-to-End GenAI Risk Management

According to McKinsey, generative AI could surpass the added value generated by financial services from other AI technologies, potentially totaling up to $340 billion. Financial institutions are developing use cases in sales, marketing, customer operations, and software development, projecting up to a tenfold increase in productivity. Managing the challenges of GenAI involves proactive monitoring to unleash its benefits while minimizing risks. This includes monitoring hidden AI models, third-party applications, and submissions to third-party AI generators.

AI Risk Management Framework

Explore the realm of Artificial Intelligence (AI) with our AI Risk Management Policy. This concise guide covers the spectrum of AI models, including supervised, unsupervised, and deep learning, and emphasizes making AI trustworthy based on the NIST AI Risk Management Framework.

Learn to assess and manage AI Risk, cultivate a culture of risk awareness, and utilize periodic testing with tools like ours. This policy is your essential toolkit for responsible and effective AI utilization in your organization.