Top 3 AI Breaches & How to Avoid Them

Solid technology can help organizations avoid major AI integrity breaches
Davi Ottenheimer, VP of Trust and Digital Ethics
June 13, 2024

While AI security risks have seen a meteoric rise, we are also witnessing the emergence of promising new frameworks, like the W3C Solid protocol (“Solid”), that aim to restore control and security while still driving innovation. Created by the inventor of the World Wide Web, Sir Tim Berners-Lee, Solid empowers individuals to own and control their data, thereby enhancing transparency and accountability in ways that enable better, safer AI practices through decentralized data storage and management.

In this blog, we’ll explore how Solid helps protect against the top three types of AI risks today:

  1. Data Manipulation and Biases: Data forms the foundation of AI systems, so manipulated or biased inputs skew outcomes and produce unfair, discriminatory results. Addressing these concerns is crucial to maintaining data integrity.
  2. Algorithmic Manipulation: The algorithms governing AI systems dictate their behavior and decision-making. Tampering with those algorithms can produce unethical or harmful actions, such as redirecting a system toward malicious purposes or distorting outcomes for personal gain. Again, without sufficient controls, data integrity is at risk.
  3. Output Fabrication: AI-generated outputs are expected to be reliable and trustworthy. Fabricated outputs can spread misinformation, enable deception, or manipulate public opinion, and such breaches of data integrity undermine the credibility and effectiveness of AI.

All three types of integrity breaches have the potential to significantly impact society and technology, so they warrant focused attention and proactive mitigation. Fixing the first, data manipulation and biases, is essential to establishing and maintaining the trust in AI needed to uphold fact-based discourse. Biased outcomes undermine trust in any intelligence system.

Even if companies agree not to use certain types of data in advertising, data bias can creep in through other channels: LinkedIn recently agreed to stop allowing advertisers to target users based on their participation in LinkedIn Groups, for fear that this data could reveal sensitive personal information, such as race, political leanings, or sexual orientation.

This is where data ownership and control solutions like Solid come squarely into play. Solid stores data where it remains under the control of those most invested in, and accountable for, its quality. Because data owners can grant and revoke access as needed, Solid offers a robust defense against the risks posed by data manipulation and biases in AI systems. When platforms position users to manage their own data, the risk of manipulation is minimized and data integrity is preserved.
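
To make the grant-and-revoke pattern concrete, here is a minimal sketch using Inrupt’s @inrupt/solid-client library. The resource URL and WebID are hypothetical placeholders, and the authenticated fetch comes from an active Solid session:

```typescript
// Minimal sketch: a Pod owner granting, then revoking, an agent's access
// to a resource via the universal access API in @inrupt/solid-client.
// The resource URL and WebID below are hypothetical placeholders.
import { universalAccess } from "@inrupt/solid-client";
import { fetch } from "@inrupt/solid-client-authn-browser"; // authenticated session fetch

const resource = "https://alice.example.pod/resume.ttl";  // data in Alice's Pod
const recruiter = "https://recruiter.example/profile#me"; // agent's WebID

// Grant read-only access: the recruiter can read, but not modify, the data.
await universalAccess.setAgentAccess(
  resource,
  recruiter,
  { read: true, write: false },
  { fetch }
);

// Later, the owner revokes access just as easily; control never leaves her.
await universalAccess.setAgentAccess(
  resource,
  recruiter,
  { read: false },
  { fetch }
);
```

Because the access rules live with the resource in the owner’s Pod rather than in a vendor’s database, revocation takes effect wherever the data is consumed.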

In 2018, Amazon scrapped its AI recruiting tool because it was biased against women. Instead of training on the tens of thousands of resumes from current applicants, the tool was trained on resumes submitted over the previous ten years, which skewed heavily toward men and legacy technology, reflecting the thinking and mistakes of the past rather than where the company wanted to go. With Solid, such errors and biases are mitigated by training models on current, relevant, and representative data sourced from diverse, individually managed Pods.

Notably, the Flanders government in the EU has already launched its Solid Pod “data utility,” where every citizen in the region can store and manage their resume, meaning employers can expect to request access to millions of current, actively maintained resumes.
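
As a hedged sketch of how an employer might assemble training data from such Pods, assuming each owner has granted read access: the Pod URLs and vocabulary term below are hypothetical, and a real deployment would use an agreed schema and formal access grants.

```typescript
// Minimal sketch: reading consented resume data from individually managed
// Pods to assemble a current, representative training corpus. Pod URLs and
// the predicate IRI are hypothetical placeholders.
import { getSolidDataset, getThing, getStringNoLocale } from "@inrupt/solid-client";
import { fetch } from "@inrupt/solid-client-authn-browser"; // authenticated session fetch

const SCHEMA_DESCRIPTION = "http://schema.org/description";

async function fetchResume(podResumeUrl: string): Promise<string | null> {
  // Fails with a 403 unless the owner has granted this agent read access.
  const dataset = await getSolidDataset(podResumeUrl, { fetch });
  const resume = getThing(dataset, `${podResumeUrl}#resume`);
  return resume ? getStringNoLocale(resume, SCHEMA_DESCRIPTION) : null;
}

// Each resume comes from a Pod its owner actively maintains, so the corpus
// reflects present-day applicants rather than a decade-old archive.
const podUrls = [
  "https://citizen-a.example.pod/cv/resume.ttl",
  "https://citizen-b.example.pod/cv/resume.ttl",
];
const corpus = (await Promise.all(podUrls.map(fetchResume)))
  .filter((text): text is string => text !== null);
```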

As AI becomes more commonly used in employment functions, all employers need to be wary of biased results from AI systems caused by programming errors or inaccurate training data. The American Bar Association emphasizes that organizations’ use of AI tools remains subject to federal employment discrimination laws.

Algorithmic manipulation occurs when AI algorithms are altered or designed to produce deceptive or unethical outcomes. Using the Solid protocol with AI reorients intelligence around the data owner and legitimate purposes for processing, making it far more difficult for malicious actors to alter or manipulate algorithms undetected. Furthermore, by storing AI models and their training datasets in secure, distributed Pods, Solid can provide a transparent and verifiable development environment while also enhancing data confidentiality.

When looking into cases of algorithmic manipulation, issues often emerge from the underlying system itself, not just from a lack of transparency about its formula. In July 2023, for example, BuzzFeed used Midjourney AI to generate Barbie doll images for almost 200 countries. Concerns about an integrity breach surfaced immediately when the algorithm generated a Nazi-style uniform for Germany and a gun for South Sudan.

In 2016, I presented on how easily automobile recognition algorithms can be optically manipulated, changing the meaning of traffic signs for AI systems in ways imperceptible to humans. Such attacks have become so common in security news that robots lacking countermeasures may be dangerous for public use. High-stakes risks like these are why Solid enables developers to better track changes to their algorithms and datasets, providing a transparent, auditable history that shows no unauthorized inputs or modifications have been made. This approach allows for more rational and relevant audit cycles as well as independent verification, reducing the risk of manipulative practices and ensuring algorithms meet ethical and technical safety standards.
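
One way to realize this kind of auditable history is to record a hash of each dataset or model version in a Pod-hosted log. The sketch below assumes a pre-existing log resource and uses hypothetical vocabulary IRIs; it illustrates the pattern rather than a prescribed Solid mechanism.

```typescript
// Minimal sketch: recording a tamper-evident audit entry for a training
// dataset in a Pod-hosted log. The log URL and the sha256 predicate IRI
// are hypothetical; dc:created and dc:description are standard terms.
import { createHash } from "node:crypto";
import {
  getSolidDataset, buildThing, createThing, setThing, saveSolidDatasetAt,
} from "@inrupt/solid-client";
import { fetch } from "@inrupt/solid-client-authn-browser"; // authenticated session fetch

const AUDIT_LOG = "https://team.example.pod/ai/audit-log.ttl"; // assumed to exist

async function logDatasetVersion(datasetBytes: Uint8Array, note: string) {
  const digest = createHash("sha256").update(datasetBytes).digest("hex");

  const log = await getSolidDataset(AUDIT_LOG, { fetch });
  const entry = buildThing(createThing({ name: `entry-${Date.now()}` }))
    .addStringNoLocale("https://example.org/vocab#sha256", digest)
    .addDatetime("http://purl.org/dc/terms/created", new Date())
    .addStringNoLocale("http://purl.org/dc/terms/description", note)
    .build();

  // Append-only in spirit: auditors can later recompute hashes of each
  // dataset version and compare them against this history.
  await saveSolidDatasetAt(AUDIT_LOG, setThing(log, entry), { fetch });
}
```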

Output fabrication involves the deliberate generation of false or misleading outputs by AI systems. Solid enhances the traceability and verification of both data and AI-generated outputs: content that does not trace back to an authenticated source can be flagged as not coming from a Solid Pod and therefore treated as untrusted or anonymous.

A real-world example of harmful fabrication comes from political campaigns, where AI-generated fake text, images, and video can spread misinformation rapidly, manipulate public opinion, and cause real-world harm. For instance, AI algorithms have been used to create and disseminate false news stories during election cycles, manipulating voters and undermining democratic processes. By adopting Solid, every piece of AI-generated content can be traced back to its source data and the algorithms used, making it easier to verify authenticity. This traceability allows independent verification of AI-generated content, ensuring the integrity of the information and combating the spread of disinformation and other integrity risks, as sketched below.
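
Here is a minimal sketch of such a verification check: given a piece of content and a (possibly missing) provenance record URL in a Pod, it recomputes the content hash and compares it with the recorded one. The provenance document layout and vocabulary IRI are hypothetical.

```typescript
// Minimal sketch: checking whether AI-generated content traces back to a
// provenance record stored in a Solid Pod. Content with no resolvable or
// matching record is flagged as untrusted. The #record fragment and the
// contentSha256 predicate IRI are hypothetical placeholders.
import { createHash } from "node:crypto";
import { getSolidDataset, getThing, getStringNoLocale } from "@inrupt/solid-client";

const PROV_HASH = "https://example.org/vocab#contentSha256";

async function verifyProvenance(content: string, provenanceUrl: string | null) {
  if (!provenanceUrl) return "untrusted: no provenance record supplied";

  try {
    // Provenance records are assumed to be publicly readable here.
    const dataset = await getSolidDataset(provenanceUrl);
    const record = getThing(dataset, `${provenanceUrl}#record`);
    const recordedHash = record && getStringNoLocale(record, PROV_HASH);

    const actualHash = createHash("sha256").update(content, "utf8").digest("hex");
    return recordedHash === actualHash
      ? "verified: content matches its recorded source"
      : "untrusted: content does not match the provenance record";
  } catch {
    return "untrusted: provenance record could not be fetched";
  }
}
```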

During the most recent Chicago mayoral race, a video surfaced that appeared to show a candidate making controversial statements. The video was later identified as a digital fabrication likely created using AI, highlighting how generative AI can be manipulated to spread misinformation and potentially influence election outcomes. 

Reviewing these AI integrity breaches brings to light how adopting the W3C Solid protocol can significantly enhance trust by distributing data management logically rather than centrally, and by empowering individuals to take charge of their own data and destiny. This approach mitigates the risks of data manipulation, algorithmic manipulation, and output fabrication while fostering a more transparent, accountable, and trustworthy AI ecosystem.

By moving data closer to those who own and can carefully manage and audit it, Solid provides a robust framework for ensuring the quality and integrity of AI outputs as well as reducing harm from fraud, the spread of disinformation, or misuse of data.
