The adoption of AI is growing rapidly, but so are concerns around data privacy, bias, and control. That’s because many AI systems today are developed and operated in centralized environments, where transparency into training data, model behavior, and data usage is limited.
So, how can you ensure that your organization can integrate AI into its everyday workflows while also maintaining full data control and transparency?
At Nextcloud, we have asked ourselves the same question, leading us to the development of the Ethical AI Rating: a simple way to evaluate how transparent, open, and trustworthy an AI solution really is.
The problem with Big Tech AI platforms and the challenges of open source AI technologies
The development of AI is moving fast, and many of the new capabilities face ethical and even legal challenges. As we explored in our article on Big Tech AI privacy concerns, many AI systems rely on large-scale data collection, often without clear transparency or user control.
This can lead to issues with:
Use of data without permission
Discrimination and biases
Data theft and leakage
What’s more, merely using open source code is no longer enough to claim that you are in control of your data or that the software is safe or ethical.
This is particularly true for neural network-based AI technologies.
The set of data and the software used in the training, as well as the availability of the final model, are all factors that determine the amount of freedom and control a user has.
Your guide to ethical AI: Nextcloud’s Ethical AI Rating
Not all AI solutions are equal: some prioritize openness and user control, while others rely on opaque models and centralized data processing.
To help users and administrators make informed decisions, we developed the Nextcloud Ethical AI Rating: a rating system designed to give a quick insight into the ethical implications of a particular integration of AI in Nextcloud.
This rating aims to provide a quick, transparent overview of how much control and insight you have over an AI system.
This is especially important as organizations evaluate AI tools not just for performance, but for compliance, privacy, and long-term data sovereignty.
Users can still look more deeply into the specific solution they use, but the Nextcloud Ethical AI Rating can simplify the choice for the majority of users and customers.
The Nextcloud Ethical AI rating in practice: What you need to know
The rating has four levels:
Red 🔴
Orange 🟠
Yellow 🟡
Green 🟢
The rating is based on points from these three factors:
Transparency of the code
Is the software open source, both for inferencing and training?
Self-hosting options
Is the trained model freely available for self-hosting?
Availability of training data
Is the training data available and free to use?
This leads us to the following ranking system:
If all three conditions are met, the AI solution receives a green label 🟢
If two conditions are met, the label is yellow 🟡
If one condition is met, it receives an orange label 🟠
If no conditions are met, it gets a red label 🔴
These colors give an immediate overview of the AI solution for factors such as sovereignty, transparency, and data control.
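The mapping from conditions to colors can be sketched in a few lines. This is a hypothetical illustration of the scoring logic described above, not Nextcloud's actual implementation; the function and parameter names are our own:

```python
def ethical_ai_label(open_source: bool,
                     self_hostable: bool,
                     training_data_available: bool) -> str:
    """Map the three Ethical AI Rating criteria to a color label.

    Each criterion that is satisfied counts as one point:
    3 points -> green, 2 -> yellow, 1 -> orange, 0 -> red.
    """
    points = sum([open_source, self_hostable, training_data_available])
    return {3: "green", 2: "yellow", 1: "orange", 0: "red"}[points]


# A fully open, self-hostable model with freely available training data:
print(ethical_ai_label(True, True, True))    # green
# A closed model that can only be used via a hosted API:
print(ethical_ai_label(False, False, False))  # red
```

The point of the sketch is that the rating is purely additive: each criterion is independent, and the color reflects only how many are met, not which ones.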
Caveat: Why critical thinking is still important to ensure ethical AI
We add one additional note to the rating: bias.
Bias remains a known challenge in AI systems. While it is difficult to guarantee complete neutrality, the rating highlights known issues where they exist, helping users make informed decisions.
So when we discover major biases in the data set or in the behavior of the model at the time of our last check, you will see this mentioned in the rating. This includes, for example, racial or gender discrimination in a face recognition technology.
There are other ethical considerations for AI, of course.
Think of the legal challenges around the use of data sets, copyright in particular, or the considerable energy consumption of deep neural networks.
Unfortunately, those concerns are extremely hard to quantify objectively, and while we intend to warn users of any open issues, we cannot (yet) include them in our rating.
For that reason, we recommend that users investigate the consequences of AI use for their individual case, with the Nextcloud Ethical AI Rating as a starting point.
Ethical AI in Nextcloud Assistant
Nextcloud’s core approach to AI is that it should never be tied to any particular provider. In other words, administrators can choose between different providers, including self-hosted options.
What’s more, organizations can decide where their models run, which models are used, and what happens to their data.
By doing so, your organization can benefit from AI-supported collaboration while staying in control of, and responsible for, its data.
AI is still optional and configurable, instead of a mandatory layer imposed on all users or workflows. This allows organizations to adopt AI at their own pace, align it with internal policies, and decide which use cases make sense in their environment.
When AI is enabled, it becomes part of the collaboration environment instead of an external dependency. It integrates into existing workflows without breaking governance or compliance frameworks. For example, Nextcloud Assistant lets you:
Summarize meetings and conversations in Nextcloud Talk
Provide live transcription and translation for multilingual collaboration
Integrate AI capabilities directly into email, chat, meetings, and file workflows
Nextcloud Hub 26 Winter also makes compliance easier: You can generate images and documents in various apps and automatically label content with watermarks. This ensures your organization is in line with the latest regulations, such as the AI Act in the EU.
In short: privacy-first AI solutions such as the Nextcloud Assistant give organizations the efficiency and convenience of AI, while keeping governance, compliance, and data ownership exactly where they belong: under their control.
Regain your digital autonomy with Nextcloud Hub 26 Winter
Our latest release of Nextcloud Hub 26 Winter is here! Discover the latest Nextcloud features.