New Capabilities for Trusted Systems Based on Inrupt’s Solid Technology
Today we announce the general availability of version 2.2 of Inrupt’s Enterprise Solid Server (ESS), along with our new Safe AI demo.
To support Solid’s adoption and Inrupt’s mission to power critical deployments and new applications around the world, v2.2 of the Enterprise Solid Server includes significant improvements to its compliance, audit, and traceability capabilities. These features enhance all deployments of Solid-based services, but they are particularly important for practical AI deployments and the pressing issue of how to pair Safe AI with GenAI services. To that end, with the release of v2.2, Inrupt has created a demonstration of our Safe AI approach, showing organizations how the use of personal data and AI services can coexist with consent, transparency, and trust.
Mission critical deployment readiness
The following features were developed and integrated into version 2.2 to meet the production-grade readiness expected by Inrupt’s enterprise customers and their millions of users.
Compliance, Audit, Traceability, and Consent
With v2.2, ESS includes improved system logging, making it more intuitive to diagnose and debug issues with applications and services. Audit events have been improved to simplify compliance, and applications can now pass metadata with requests and have that metadata propagated and captured in the logs and audit trail. ESS v2.2 also ships with improved defaults: query and access grants are now enabled by default, and access grants now expire after six months by default. Finally, v2.2 introduces the Access Management UI: a modular reference implementation of a user-facing UI for managing and granting access to data on Pods, and a vital piece in helping developers build new data collaboration solutions for users.
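As a rough sketch of how an application might work against these defaults, consider the draft of an access request below. The `AccessRequestDraft` shape, its field names, and the helper functions are illustrative assumptions, not the actual ESS or Inrupt SDK interface; consult Inrupt’s documentation and release notes for the real client API.

```typescript
// Illustrative shape of an access request an application might assemble
// before handing it to Inrupt's access-grant tooling. All names here are
// assumptions for the sake of the example, not the real SDK interface.
interface AccessRequestDraft {
  resources: string[];                      // Pod resources the app wants
  modes: { read: boolean; write: boolean }; // requested access modes
  purpose: string;                          // metadata captured in the audit trail
  expirationDate: Date;                     // v2.2 default lifetime: six months
}

// Mirror the v2.2 default: an access grant expires six months after issuance.
function sixMonthsFrom(start: Date): Date {
  const expiry = new Date(start);
  expiry.setMonth(expiry.getMonth() + 6);
  return expiry;
}

// Draft a read-only request for a single resource, carrying a stated
// purpose so that it can be propagated into the logs and audit trail.
function draftReadRequest(resource: string, purpose: string): AccessRequestDraft {
  return {
    resources: [resource],
    modes: { read: true, write: false },
    purpose,
    expirationDate: sixMonthsFrom(new Date()),
  };
}
```

The point of carrying `purpose` alongside the request is exactly the metadata-propagation feature described above: whatever context the application attaches travels with the request into the audit trail.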
Inrupt’s approach to Safe AI
Web 3.0, personal data, and AI
It’s clear we’re entering the age of AI on the web. But it’s not yet certain whether AI will benefit the many or the few.
If we make the same mistakes we made in the Web 2.0 era, we will end up in the same place — locked into a system where harmful practices shape most digital services and consumer relationships across the web. Businesses and their customers will be worse off as a result.
Customers’ mistrust continues to grow. The latest Edelman Trust Barometer found that in the U.S.:
- Trust in AI companies has fallen over the past 5 years from 50% to 35%
- 52% of respondents worry that AI will compromise privacy
Inrupt has a different vision. One where AI can actually accelerate the open access and shared benefit inherent in Sir Tim’s vision of Web 3.0 (not Web3!). A web where AI is safe for millions of businesses and their customers.
To reach that future, the web as we know it will need to evolve toward user-centricity. But we’ll need to start with basic capabilities around transparency and control of personal data, to prove that it is possible to build an AI system that’s safe to entrust with customer data.
That’s why we built our approach to Safe AI on the core principles and benefits of Solid, which is a W3C standards-based protocol that allows customers to participate directly in data sharing decisions, deciding when data is shared, and with whom.
When AI systems are built to rely on data from a Solid Pod:
- Customers can opt-in or opt-out of AI services
- The original purpose of the data collection remains transparent to the customer, a key requirement of the recent EU AI Act
- Customers grant consent to their personal data usage before it’s actually used
Taken together, these capabilities represent a foundation that enables trustworthy AI solutions between organizations and individuals.
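The capabilities above amount to a gate an AI service must pass through before touching Pod data. A minimal sketch follows; the `ConsentGrant` shape is a hypothetical simplification for illustration, not the Solid data model.

```typescript
// Hypothetical, simplified view of a consent grant a user holds in a Pod.
interface ConsentGrant {
  purpose: string;  // the purpose of data use the user consented to
  optedIn: boolean; // the user can flip this at any time (opt-in/opt-out)
  expires: Date;    // grants are time-bounded
}

// An AI service may only use the data if an unexpired, opted-in grant
// exists for the exact purpose it declares; otherwise access is refused.
// This is the "consent before use" property in the list above.
function mayUseData(grants: ConsentGrant[], purpose: string, now: Date): boolean {
  return grants.some(
    (g) => g.optedIn && g.purpose === purpose && g.expires > now,
  );
}
```

Matching on the declared purpose, rather than granting blanket access, is what keeps the original purpose of collection transparent to the customer.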
How to sign up for the demo
Inrupt is excited to make our Solid and Safe AI demo available to customers and prospects. It demonstrates these capabilities in practice, allowing organizations to reassure their customers that their data is being used for its intended purpose. Please reach out here or sign up here.
What’s unique in the demo is the role of the Pod in managing the customer’s consent. We’ll cover how:
- Access Grants allow users to opt-in and opt-out of services;
- Audit capabilities capture users’ consent and create clear traceability of data usage;
- All data, inputs and outputs, can be stored in a Solid Pod.
Importantly, we’ll highlight the value of these capabilities, and specifically how the Solid Pod, when interacting with AI services, can:
- Capture user consent before data is accessed;
- Manage (opt-in/opt-out) consent for processing, training, and tuning independently;
- Increase end user trust by providing transparency of the data experience and traceability of actions and services.
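The second and third points can be sketched together: per-purpose consent managed independently, with every change appended to an audit trail. The `ConsentLedger` class below is a hypothetical illustration of that behavior, not the demo’s implementation.

```typescript
// Hypothetical sketch: consent for processing, training, and tuning is
// tracked independently, and every change is recorded for traceability.
type Purpose = "processing" | "training" | "tuning";

interface AuditEntry {
  at: Date;
  purpose: Purpose;
  action: "opt-in" | "opt-out";
}

class ConsentLedger {
  private consent = new Map<Purpose, boolean>();
  readonly audit: AuditEntry[] = [];

  // Opting in or out of one purpose never affects the others.
  setConsent(purpose: Purpose, optedIn: boolean): void {
    this.consent.set(purpose, optedIn);
    // Every consent change is captured, creating a clear audit trail.
    this.audit.push({
      at: new Date(),
      purpose,
      action: optedIn ? "opt-in" : "opt-out",
    });
  }

  // Without an explicit opt-in, the default answer is "no consent".
  allows(purpose: Purpose): boolean {
    return this.consent.get(purpose) ?? false;
  }
}
```

The deliberate default of `false` in `allows` reflects the opt-in model described above: absent an explicit grant, no purpose is permitted.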
Solid Movement
Both the v2.2 release and the availability of the demo show that Inrupt’s ESS is designed and built with current and future regulations in mind, and with every organization’s need for Safe AI services, in which trusted systems operate as intended: reliably, securely, and free from harmful consequences.
Solid continues to inspire organizations and individuals alike. Again and again, it proves to be more than just a protocol, a concept, or a technology. Solid is a movement. As the vital piece of the third layer of the web, the momentum of Solid’s adoption will continue with exciting deployments and applications going live around the world.
For a full list of ESS capabilities along with the new v2.2 functionality, visit the Inrupt documentation and release notes.