Another Approach to AI

An approach to ethical AI focused on transparency and running locally.

Jos Poortvliet

Playlists: 'osc24' videos starting here / audio

There are ever more [risks](https://jarnoduursma.com/blog/the-risks-of-artificial-intelligence/) associated with artificial intelligence, and as a transparent software company, we at Nextcloud have a responsibility to intervene and protect our users. [Microsoft laid off its entire ethics and society team](https://www.theverge.com/2023/3/13/23638823/microsoft-ethics-society-team-responsible-ai-layoffs) — the team that taught employees how to build AI tools responsibly. Nextcloud, on the other hand, embraces the ethical challenges of today's AI and aims to take them head on.

The field of AI is moving fast, and many of the new capabilities face ethical and even legal challenges. Moreover, many people ask why you would need it in the first place.

Up until Hub 3, we succeeded in offering features like related resources, recommended files, our priority inbox and even [face and object recognition](https://nextcloud.com/blog/all-you-need-to-know-about-facial-recognition-technology-and-the-nextcloud-recognize-app/) without relying on proprietary blobs or third-party servers.

Yet, while there is a large community developing ethical, safe and privacy-respecting technologies, there are many other relevant technologies users might want to use. We want to provide users with these cutting-edge technologies – but also be transparent. For some use cases, ChatGPT might be reasonable, while for other data, absolutely not. To differentiate these, we developed an Ethical AI Rating.

I will describe how our Ethical AI Rating works and give a number of examples. And, as a bonus, I will show how it is integrated in Nextcloud and how it can help you get work done – without leaking your data!

I look forward to any feedback you wonderful folks in the audience have.
