The Next Wave of AI Safety Needs to Focus on Data Governance

The path to AI success requires organizations to unlock the value of their proprietary data, but in order to do that, they need to ensure that the data they feed into these AI systems, including LLMs, is secure.

August 27, 2024
-
Abhishek Das, Acante
Raja Perumal, Databricks

This post is the first in a series co-authored by Abhishek Das, VP of Engineering at Acante and Raja Perumal, Senior Alliances Manager at Databricks

It's probably an understatement to say we have entered the age of Artificial Intelligence (AI). Over the next few years almost everything around us will be infused with AI – enriching our personal lives and empowering enterprise applications to deliver orders of magnitude better experiences to their customers. AI has become a boardroom conversation today, with every enterprise aggressively investing in developing an “AI strategy” that will allow them to gain a competitive edge in the market, as well as to optimize complex business processes and increase employee productivity. 

The path to AI success requires organizations to unlock the value of their proprietary data, allowing them to discover insights unique to their business, their market and their customers. But in order to do that, they need to ensure that the data they feed into these AI systems, including large language models (LLMs), is secure and adheres to business confidentiality and consumer privacy requirements. 

This is proving to be a major obstacle right now as almost 85 percent of AI applications have not yet made it to production1. The top barriers for AI adoption in production, by far, are model output accuracy and data security2. Much has been written and done about model output accuracy, but securing the data powering these AI systems is the next massive obstacle that needs to be overcome.

Governance at the Data Layer Is Largely Unaddressed

The first wave of AI safety solutions have primarily focused on securing the inference and model layers of the AI technology stack. Inference layer security solutions – such as LLM firewalls – analyze queries and responses for prompt injections and provide guardrails that detect harmful responses. Model layer solutions focus on risks such as compromised model asset supply chains, model poisoning or model jailbreaking and address moral risks such as violent crimes, child exploitation, hate, defamation and self harm. Securing the inference and model layers are important, of course, but these solutions largely ignore the third layer of the AI technology stack – the data layer.

AI Technology Stack
AI Technology Stack

 

The key to safely unlocking the full value of proprietary data through advanced AI applications is to have the ability to govern the entire AI technology stack, across all three layers – inference, model and data. This is going to be incredibly important as AI systems continue to evolve and grow more complex. In fact, Gartner3 highlighted “Privacy and data security” as the top-most major emerging risk for GenAI.

Enterprise AI Architectures 2.0

Early enterprise AI systems were built using prompt engineering approaches. These systems are driven by LLMs trained on public or undisclosed data (closed-source models), and do not include any enterprise proprietary data to provide appropriate context. This has proven to be a great low-risk way for organizations to experiment, test the market and get comfortable with LLM technology. 

Architectural Approaches to Building AI Systems
Architectural Approaches to Building AI Systems

The next wave of enterprise AI systems are evolving to better capture the value of proprietary data and create the ROI every business is demanding. These systems are largely built using Retrieval Augmented Generation (RAG) or in some cases Fine-tuning architectures. The RAG AI architecture is particularly suited to enterprise use because of the balance between cost (no expensive model training effort) and the ability to continuously update the domain-specific knowledge base. In the RAG architecture, the user query is augmented with the relevant internal proprietary data to enable LLMs to generate more accurate outputs unique to the organization. This ensures freshness of data and relevance while minimizing hallucinations in LLM outputs.

The second major architectural paradigm, Fine-tuning, allows enterprises to update the model itself. It changes model weights of the last layer of the neural network by training them on their organizational datasets. This involves more cost and more in-house AI expertise, but Fine-tuning allows organizations to develop more accurate organization-specific LLMs in lieu of using generic closed-source models from large vendors. This quest to make AI a competitive advantage in the marketplace is driving almost 75 percent of organizations to adopt RAG and Fine-tuning architectures4.

Almost 75 percent of organizations to adopt RAG and Fine-tuning architectures

Addressing the Data Governance Risks for these 2.0 AI Architectures

The introduction of proprietary data into new AI/LLM architectures exposes data governance risks such as data privacy, compliance and security to the forefront. Concerns such as data poisoning, anonymization, access authorization and leakage need to be addressed at the data layer BEFORE the data is fed in upstream LLM models or generated responses. Applying governance controls to model outputs AFTER the fact is ineffective and is often limited by performance constraints. 

Unfortunately, there aren’t robust AI-specific data governance solutions today that address these risks in a systematic manner while fitting into existing AI technology stacks. Overcoming this barrier is critical for organizations to confidently adopt and optimize the value from RAG and Fine-tuning architectures.

Acante’s mission is to redefine data security and access governance for the AI-driven enterprise. We are working with customers across multiple verticals to systematically address these AI risks. There are three primary datasets at the data layer that need to be secured in AI systems:

  • Document chunks are unstructured enterprise data that is broken in smaller pieces (due to LLM context window limitations) - “chunks” - and vectorized. These vector embeddings are then hosted in vector databases (such as Databricks’ Mosaic AI vector search, Pinecone, Weaviate, etc.), which upon retrieval provide the relevant proprietary context in RAG architectures.
  • High-quality training datasets are internal proprietary datasets that are carefully prepared and curated to train models during fine-tuning.
  • Existing feature datasets are features from their existing analytical and ML models that are used to augment the context.

Each of these datasets carry new and unaddressed risks that can compromise the overall safety of the end AI application. Acante has identified the security and governance risks at the data layer of the AI technology stack and is developing approaches to mitigate each one. Databricks, one of the leading platforms for data and AI, has realized the pivotal role security plays in the adoption of AI, publishing a comprehensive AI security framework that captures the breadth of security concerns across inference, model and data layers. The Databricks AI Security Framework (DASF) identifies 55 AI security risks across 12 foundational architecture components and prescriptive controls for mitigating each.

Expanding on our partnership with Databricks, Acante has mapped these risks to the DASF framework, and done the same for the other popular industry framework - OWASP Top 10 for LLM. Here is a brief summary of the key data risks:

AI Data Risk Description & Mapping Risks
Privacy AI applications have the potential to reveal sensitive information, proprietary algorithms, or other confidential details through their output. This can result in unauthorized access to sensitive data, intellectual property, privacy violations, and other security breaches, as well as violating compliance with industry-specific data security regulations (such as GDPR, CCPA, etc.).

For example, adversaries may pose a significant privacy threat to AI applications by simulating or inferring whether specific data samples were part of a model’s training set. Such inferences can be made by using techniques like Train Proxy via Replication, that create and host shadow models replicating the target model’s behavior. These methods lead to the unintended leakage of sensitive information, such as individuals’ personally identifiable information (PII) in the training dataset or other forms of protected intellectual property.

DASF Risk: 1.2
OWASP Top 10: LLM06

Training data poisoning Attackers can compromise an AI system by contaminating the data within the fine-tuning training or embedding processes, thereby manipulating its output at the inference stage. Intentionally manipulated data in either proprietary or public training data, and possibly coordinated across the stack, derail the training process and create an unreliable AI system.

This can introduce vulnerabilities, backdoors or biases that could compromise the model’s security, effectiveness, performance or ethical behavior. Poisoned information may even be surfaced to users or create other risks like performance degradation, downstream software exploitation and reputational damage.

DASF Risk: 1.4, 1.7, 2.1, 3.1
OWASP Top 10: LLM03

Prompt manipulations A direct prompt manipulation occurs when a malicious user injects text that is intended to alter the behavior of the LLM. Attackers use direct prompt injections to bypass safeguards in order to create misinformation and cause reputational damage.

Going beyond direct prompt manipulations, in RAG architectures attackers can indirectly extract the system prompt or reveal private information provided to the model in the context but not intended for unfiltered access by the user.

DASF Risk: 9.1 OWASP Top 10: LLM01

Unauthorized access Effective access management is fundamental to data security, ensuring only authorized individuals or groups can access specific datasets. Such security protocols encompass authentication, authorization and finely tuned access controls tailored to the scope of access required by each user, down to the file or record level at the model, data storage and infrastructure levels. This is also critical to manage malicious insider risks.

Unrestricted infrastructure access or inadequate sandboxing may allow a model to ingest unsafe training data resulting in biased or harmful outputs. Additionally, emerging compliance with industry-specific data security and AI governance regulations necessitates such measures.

DASF Risk: 1.4, 2.1, 3.1, 9.1
OWASP Top 10: LLM06

Sensitive data exfiltration Adversaries can contaminate the training data or craft malicious prompts to trick the model in exfiltrating proprietary or sensitive data. Unintended disclosure of confidential information is often triggered due to LLM misinterpretation. Specifically, anything that is deemed sensitive in the fine-tuning data has the potential to be revealed to a user. Additionally, open source models may contain hidden malicious code that can exfiltrate sensitive data upon deployment.

Adversaries may also target any model artifact when staging AI attacks to exfiltrate sensitive data. These could leak critical model assets like notebooks, features, model files, plots and metrics that expose trade secrets and sensitive organizational information. Finally, any data resource retrieved to be ingested by LLM models during the RAG process stand the risk of exfiltration at runtime by a malicious user.

DASF Risk: 3.1, 7.1, 9.1
OWASP Top 10: LLM06

Data chain supply poisoning Attackers intentionally create inaccurate or malicious documents which are targeted at a model’s fine-tuning data or RAG embeddings. Adversaries may additionally tamper public datasets, which often resemble those used by targeted organizations.

DASF Risk: 1.1, 1.7, 3.1
OWASP Top 10: LLM03

Addressing AI Safety Needs to Start with the Data Layer

New AI architectures are helping organizations unlock the full power of their proprietary data. But, as enterprises embark on this AI journey, every single bit of organizational data will be at risk of exposure through these AI applications in ways we never fathomed before and ways that aren’t easily comprehensible to humans. 

Acante is focused on empowering enterprises with the most seamless yet secure ways to confidently unlock the value of their data by providing data access governance and security models tailored to these RAG and Fine-tuning architectures. Among the variety of industry frameworks, we have found the OWASP Top 10 for LLM and the DASF to be particularly well structured and have mapped our view of data risks to these frameworks. 

In the next blog post, we’ll go much deeper into these risks and necessary mitigations, as well as the capabilities of the Acante data & AI governance and security solution. It will cover how we are working with Databricks as a close technology partner to systematically address these risks. If you are adopting RAG or Fine-tuning architectures for your AI applications, we’d love to hear from you and explore how we can leverage the experience we’ve gained from working with dozens of customers to be of help to you.  


1 Databricks research, 2024 Data+AI Summit keynote by Ali Ghodsi
2 Retool, 2023 State of AI Adoption
3 Gartner, Emerging Tech: Top 4 Security Risks of GenAI.
4 Retool, 2023 State of AI Survey

Unveiling the Challenge

As our digital footprint expands, so do the challenges of securing our data assets. Acante.ai recognizes the exponential proliferation and constant change in data access patterns, creating blind spots for traditional security approaches.

The Acante.ai Difference

At Acante.ai, our approach to data security marks a paradigm shift in the industry. Unlike traditional security models that often succumb to the static nature of data threats, Acante.ai thrives on dynamism. We believe that true security evolves with the challenges, and that's precisely what sets us apart. The Acante.ai difference lies in our commitment to providing security teams with more than just a shield; we offer a strategic ally that anticipates, adapts, and fortifies against the unpredictable proliferation of data access patterns. Our solution doesn't just keep pace with the digital transformation journey; it propels it forward. But what truly defines the Acante.ai difference goes beyond technology; it's ingrained in our culture. We are a collective of thoughtful, compassionate, and collaborative individuals on a shared mission to disrupt the security industry. With deep expertise from major brands and startups, we've collectively built over 10 startups, resulting in category-creating businesses, acquisitions, and IPOs. Our success is a testament to the collaborative spirit within our team, where every member contributes to shaping our culture and the future of data security. Join Acante.ai, and experience the difference that drives us to redefine the limits of protection in the digital age.

Dynamic Data Security

Explore the cutting-edge realm of dynamic data security with Acante.ai. In an era where the digital landscape is in a perpetual state of flux, Acante.ai's comprehensive approach to data security becomes not just a solution but a strategic imperative. Imagine a security system that not only reacts to the ever-changing data access patterns but anticipates and adapts in real-time. This level of sophistication is what sets Acante.ai apart. Our solution not only seamlessly integrates with the native controls of your data lakes and warehouse ecosystems but also evolves with them. It's not just about protecting your data; it's about empowering it. Acante.ai's dynamic data security solution is not confined by static parameters; it's a living, breathing shield that moves in harmony with the pulse of your data. As businesses navigate the complexities of the modern data landscape, Acante.ai provides not just a safeguard but a strategic ally, ensuring that security is not a hindrance but an enabler of progress.

Conclusion

In a world where data is both a valuable asset and a potential liability, Acante.ai emerges as a beacon of innovation. Join us on this exploration of the future of data security and discover how Acante.ai is empowering organizations to navigate the evolving landscape with confidence.
Request a Demo
Acante Announces Partnership with Commvault to Bring Together the Best of Data Access Governance and Protection for Enterprise Cloud Dataimage
Acante Announces Partnership with Commvault to Bring Together the Best of Data Access Governance and Protection for Enterprise Cloud Data

Seamless integration with Commvault Cloud provides unparalleled cyber resilience in the face of growing ransomware attacks and breaches

AI Risk Starts with Data Risk: DBTA Data Summit Keynote Summaryimage
AI Risk Starts with Data Risk: DBTA Data Summit Keynote Summary

What the first wave of AI security efforts are missing, and how the Data Layer is where new and critical security and privacy concerns need to be addressed.

Databricks has open sourced Unity Catalog: What that means for the ecosystemimage
Databricks has open sourced Unity Catalog: What that means for the ecosystem

Our point of view on why we need unified governance for Data and AI and why we are excited about Databricks releasing Unity Catalog as open source.

Our top 3 takeaways from Data+AI Summitimage
Our top 3 takeaways from Data+AI Summit

Learn why 85% of AI projects have NOT made it to production and how to empower data teams to overcome barriers to democratization of data access.

Nam quis nulla. Integer malesuada. In in enim a arcu imperdiet malesuada. Sed vel lectus. Donec odio urna, tempus molestie, porttitor ut, iaculis quis
Read now
Nam quis nulla. Integer malesuada. In in enim a arcu imperdiet malesuada. Sed vel lectus. Donec odio urna, tempus molestie, porttitor ut, iaculis quis
Read now
Nam quis nulla. Integer malesuada. In in enim a arcu imperdiet malesuada. Sed vel lectus. Donec odio urna, tempus molestie, porttitor ut, iaculis quis
Read now
The Next Wave of AI Safety Needs to Focus on Data Governance

This post is the first in a series co-authored by Abhishek Das, VP of Engineering at Acante and Raja Perumal, Senior Alliances Manager at Databricks

It's probably an understatement to say we have entered the age of Artificial Intelligence (AI). Over the next few years almost everything around us will be infused with AI – enriching our personal lives and empowering enterprise applications to deliver orders of magnitude better experiences to their customers. AI has become a boardroom conversation today, with every enterprise aggressively investing in developing an “AI strategy” that will allow them to gain a competitive edge in the market, as well as to optimize complex business processes and increase employee productivity. 

The path to AI success requires organizations to unlock the value of their proprietary data, allowing them to discover insights unique to their business, their market and their customers. But in order to do that, they need to ensure that the data they feed into these AI systems, including large language models (LLMs), is secure and adheres to business confidentiality and consumer privacy requirements. 

This is proving to be a major obstacle right now as almost 85 percent of AI applications have not yet made it to production1. The top barriers for AI adoption in production, by far, are model output accuracy and data security2. Much has been written and done about model output accuracy, but securing the data powering these AI systems is the next massive obstacle that needs to be overcome.

Governance at the Data Layer Is Largely Unaddressed

The first wave of AI safety solutions have primarily focused on securing the inference and model layers of the AI technology stack. Inference layer security solutions – such as LLM firewalls – analyze queries and responses for prompt injections and provide guardrails that detect harmful responses. Model layer solutions focus on risks such as compromised model asset supply chains, model poisoning or model jailbreaking and address moral risks such as violent crimes, child exploitation, hate, defamation and self harm. Securing the inference and model layers are important, of course, but these solutions largely ignore the third layer of the AI technology stack – the data layer.

AI Technology Stack
AI Technology Stack

 

The key to safely unlocking the full value of proprietary data through advanced AI applications is to have the ability to govern the entire AI technology stack, across all three layers – inference, model and data. This is going to be incredibly important as AI systems continue to evolve and grow more complex. In fact, Gartner3 highlighted “Privacy and data security” as the top-most major emerging risk for GenAI.

Enterprise AI Architectures 2.0

Early enterprise AI systems were built using prompt engineering approaches. These systems are driven by LLMs trained on public or undisclosed data (closed-source models), and do not include any enterprise proprietary data to provide appropriate context. This has proven to be a great low-risk way for organizations to experiment, test the market and get comfortable with LLM technology. 

Architectural Approaches to Building AI Systems
Architectural Approaches to Building AI Systems

The next wave of enterprise AI systems are evolving to better capture the value of proprietary data and create the ROI every business is demanding. These systems are largely built using Retrieval Augmented Generation (RAG) or in some cases Fine-tuning architectures. The RAG AI architecture is particularly suited to enterprise use because of the balance between cost (no expensive model training effort) and the ability to continuously update the domain-specific knowledge base. In the RAG architecture, the user query is augmented with the relevant internal proprietary data to enable LLMs to generate more accurate outputs unique to the organization. This ensures freshness of data and relevance while minimizing hallucinations in LLM outputs.

The second major architectural paradigm, Fine-tuning, allows enterprises to update the model itself. It changes model weights of the last layer of the neural network by training them on their organizational datasets. This involves more cost and more in-house AI expertise, but Fine-tuning allows organizations to develop more accurate organization-specific LLMs in lieu of using generic closed-source models from large vendors. This quest to make AI a competitive advantage in the marketplace is driving almost 75 percent of organizations to adopt RAG and Fine-tuning architectures4.

Almost 75 percent of organizations to adopt RAG and Fine-tuning architectures

Addressing the Data Governance Risks for these 2.0 AI Architectures

The introduction of proprietary data into new AI/LLM architectures exposes data governance risks such as data privacy, compliance and security to the forefront. Concerns such as data poisoning, anonymization, access authorization and leakage need to be addressed at the data layer BEFORE the data is fed in upstream LLM models or generated responses. Applying governance controls to model outputs AFTER the fact is ineffective and is often limited by performance constraints. 

Unfortunately, there aren’t robust AI-specific data governance solutions today that address these risks in a systematic manner while fitting into existing AI technology stacks. Overcoming this barrier is critical for organizations to confidently adopt and optimize the value from RAG and Fine-tuning architectures.

Acante’s mission is to redefine data security and access governance for the AI-driven enterprise. We are working with customers across multiple verticals to systematically address these AI risks. There are three primary datasets at the data layer that need to be secured in AI systems:

  • Document chunks are unstructured enterprise data that is broken in smaller pieces (due to LLM context window limitations) - “chunks” - and vectorized. These vector embeddings are then hosted in vector databases (such as Databricks’ Mosaic AI vector search, Pinecone, Weaviate, etc.), which upon retrieval provide the relevant proprietary context in RAG architectures.
  • High-quality training datasets are internal proprietary datasets that are carefully prepared and curated to train models during fine-tuning.
  • Existing feature datasets are features from their existing analytical and ML models that are used to augment the context.

Each of these datasets carry new and unaddressed risks that can compromise the overall safety of the end AI application. Acante has identified the security and governance risks at the data layer of the AI technology stack and is developing approaches to mitigate each one. Databricks, one of the leading platforms for data and AI, has realized the pivotal role security plays in the adoption of AI, publishing a comprehensive AI security framework that captures the breadth of security concerns across inference, model and data layers. The Databricks AI Security Framework (DASF) identifies 55 AI security risks across 12 foundational architecture components and prescriptive controls for mitigating each.

Expanding on our partnership with Databricks, Acante has mapped these risks to the DASF framework, and done the same for the other popular industry framework - OWASP Top 10 for LLM. Here is a brief summary of the key data risks:

AI Data Risk Description & Mapping Risks
Privacy AI applications have the potential to reveal sensitive information, proprietary algorithms, or other confidential details through their output. This can result in unauthorized access to sensitive data, intellectual property, privacy violations, and other security breaches, as well as violating compliance with industry-specific data security regulations (such as GDPR, CCPA, etc.).

For example, adversaries may pose a significant privacy threat to AI applications by simulating or inferring whether specific data samples were part of a model’s training set. Such inferences can be made by using techniques like Train Proxy via Replication, that create and host shadow models replicating the target model’s behavior. These methods lead to the unintended leakage of sensitive information, such as individuals’ personally identifiable information (PII) in the training dataset or other forms of protected intellectual property.

DASF Risk: 1.2
OWASP Top 10: LLM06

Training data poisoning Attackers can compromise an AI system by contaminating the data within the fine-tuning training or embedding processes, thereby manipulating its output at the inference stage. Intentionally manipulated data in either proprietary or public training data, and possibly coordinated across the stack, derail the training process and create an unreliable AI system.

This can introduce vulnerabilities, backdoors or biases that could compromise the model’s security, effectiveness, performance or ethical behavior. Poisoned information may even be surfaced to users or create other risks like performance degradation, downstream software exploitation and reputational damage.

DASF Risk: 1.4, 1.7, 2.1, 3.1
OWASP Top 10: LLM03

Prompt manipulations A direct prompt manipulation occurs when a malicious user injects text that is intended to alter the behavior of the LLM. Attackers use direct prompt injections to bypass safeguards in order to create misinformation and cause reputational damage.

Going beyond direct prompt manipulations, in RAG architectures attackers can indirectly extract the system prompt or reveal private information provided to the model in the context but not intended for unfiltered access by the user.

DASF Risk: 9.1 OWASP Top 10: LLM01

Unauthorized access Effective access management is fundamental to data security, ensuring only authorized individuals or groups can access specific datasets. Such security protocols encompass authentication, authorization and finely tuned access controls tailored to the scope of access required by each user, down to the file or record level at the model, data storage and infrastructure levels. This is also critical to manage malicious insider risks.

Unrestricted infrastructure access or inadequate sandboxing may allow a model to ingest unsafe training data resulting in biased or harmful outputs. Additionally, emerging compliance with industry-specific data security and AI governance regulations necessitates such measures.

DASF Risk: 1.4, 2.1, 3.1, 9.1
OWASP Top 10: LLM06

Sensitive data exfiltration Adversaries can contaminate the training data or craft malicious prompts to trick the model in exfiltrating proprietary or sensitive data. Unintended disclosure of confidential information is often triggered due to LLM misinterpretation. Specifically, anything that is deemed sensitive in the fine-tuning data has the potential to be revealed to a user. Additionally, open source models may contain hidden malicious code that can exfiltrate sensitive data upon deployment.

Adversaries may also target any model artifact when staging AI attacks to exfiltrate sensitive data. These could leak critical model assets like notebooks, features, model files, plots and metrics that expose trade secrets and sensitive organizational information. Finally, any data resource retrieved to be ingested by LLM models during the RAG process stand the risk of exfiltration at runtime by a malicious user.

DASF Risk: 3.1, 7.1, 9.1
OWASP Top 10: LLM06

Data chain supply poisoning Attackers intentionally create inaccurate or malicious documents which are targeted at a model’s fine-tuning data or RAG embeddings. Adversaries may additionally tamper public datasets, which often resemble those used by targeted organizations.

DASF Risk: 1.1, 1.7, 3.1
OWASP Top 10: LLM03

Addressing AI Safety Needs to Start with the Data Layer

New AI architectures are helping organizations unlock the full power of their proprietary data. But, as enterprises embark on this AI journey, every single bit of organizational data will be at risk of exposure through these AI applications in ways we never fathomed before and ways that aren’t easily comprehensible to humans. 

Acante is focused on empowering enterprises with the most seamless yet secure ways to confidently unlock the value of their data by providing data access governance and security models tailored to these RAG and Fine-tuning architectures. Among the variety of industry frameworks, we have found the OWASP Top 10 for LLM and the DASF to be particularly well structured and have mapped our view of data risks to these frameworks. 

In the next blog post, we’ll go much deeper into these risks and necessary mitigations, as well as the capabilities of the Acante data & AI governance and security solution. It will cover how we are working with Databricks as a close technology partner to systematically address these risks. If you are adopting RAG or Fine-tuning architectures for your AI applications, we’d love to hear from you and explore how we can leverage the experience we’ve gained from working with dozens of customers to be of help to you.  


1 Databricks research, 2024 Data+AI Summit keynote by Ali Ghodsi
2 Retool, 2023 State of AI Adoption
3 Gartner, Emerging Tech: Top 4 Security Risks of GenAI.
4 Retool, 2023 State of AI Survey

The Next Wave of AI Safety Needs to Focus on Data Governanceimage
The Next Wave of AI Safety Needs to Focus on Data Governance

The path to AI success requires organizations to unlock the value of their proprietary data, but in order to do that, they need to ensure that the data they feed into these AI systems, including LLMs, is secure.

Acante Announces Partnership with Commvault to Bring Together the Best of Data Access Governance and Protection for Enterprise Cloud Dataimage
Acante Announces Partnership with Commvault to Bring Together the Best of Data Access Governance and Protection for Enterprise Cloud Data

Seamless integration with Commvault Cloud provides unparalleled cyber resilience in the face of growing ransomware attacks and breaches

AI Risk Starts with Data Risk: DBTA Data Summit Keynote Summaryimage
AI Risk Starts with Data Risk: DBTA Data Summit Keynote Summary

What the first wave of AI security efforts are missing, and how the Data Layer is where new and critical security and privacy concerns need to be addressed.

Databricks has open sourced Unity Catalog: What that means for the ecosystemimage
Databricks has open sourced Unity Catalog: What that means for the ecosystem

Our point of view on why we need unified governance for Data and AI and why we are excited about Databricks releasing Unity Catalog as open source.

Nam quis nulla. Integer malesuada. In in enim a arcu imperdiet malesuada. Sed vel lectus. Donec odio urna, tempus molestie, porttitor ut, iaculis quis
Read now
Nam quis nulla. Integer malesuada. In in enim a arcu imperdiet malesuada. Sed vel lectus. Donec odio urna, tempus molestie, porttitor ut, iaculis quis
Read now
Nam quis nulla. Integer malesuada. In in enim a arcu imperdiet malesuada. Sed vel lectus. Donec odio urna, tempus molestie, porttitor ut, iaculis quis
Read now
Nam quis nulla. Integer malesuada. In in enim a arcu imperdiet malesuada. Sed vel lectus. Donec odio urna, tempus molestie, porttitor ut, iaculis quis
Read now