[-] touzovitch@lemmy.ml 1 points 11 months ago* (last edited 11 months ago)

You're right, app traffic is something we'll need to crack. But as a first step, anything going through a web browser is already significant.

[-] touzovitch@lemmy.ml 2 points 11 months ago

What do you mean by non private platforms?

In this POC, you can only encrypt content using Redakt's public key. That way, other users are guaranteed to see the content, since the key is already installed in the extension.

I intend to add the option to encrypt with a custom sharable key in v2.
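The single-bundled-key model described above can be sketched roughly as follows. This is a minimal illustration, not Redakt's actual implementation: it uses Fernet (AES-128-CBC + HMAC) from Python's `cryptography` package as a stand-in for whatever AES mode Redakt uses, and the `REDAKT[...]` wrapper format is hypothetical.

```python
# Sketch of the "single bundled key" model (assumptions: the Fernet cipher
# and REDAKT[...] wrapper are illustrative, not Redakt's real format).
from cryptography.fernet import Fernet

# In the real extension this key ships with the install;
# here we just generate one for the demo.
BUNDLED_KEY = Fernet.generate_key()
cipher = Fernet(BUNDLED_KEY)

def redact(plaintext: str) -> str:
    """Encrypt text so only users holding the bundled key can read it."""
    token = cipher.encrypt(plaintext.encode("utf-8")).decode("ascii")
    return f"REDAKT[{token}]"

def reveal(wrapped: str) -> str:
    """Zero-click decryption: strip the wrapper, decrypt with the bundled key."""
    token = wrapped.removeprefix("REDAKT[").removesuffix("]")
    return cipher.decrypt(token.encode("ascii")).decode("utf-8")
```

A scraper without the extension (and its key) only ever sees the opaque `REDAKT[...]` token.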

[-] touzovitch@lemmy.ml 3 points 11 months ago* (last edited 11 months ago)

But why? Why do you people hate AI so much?

I don't think it's a question of "hating" AI or not. Personally, I have nothing against it.

As always with privacy, it's a matter of choice: when I publish something online publicly, I would like to have the choice whether or not that content is going to be indexed or used to train models.

It's a dilemma. I want to benefit from the hosting and visibility of big platforms (Reddit, LinkedIn, Twitter, etc.), but I don't want them doing literally anything with my content just because, buried somewhere in their T&Cs, it says "we own your content, we do whatever tf we want with it".

[-] touzovitch@lemmy.ml 8 points 11 months ago

but in general, if google can’t read it–few eyeballs will ever see it.

You bring up a good point. The Internet is full of spider bots that crawl the web to index it and improve search results (e.g. Google). In my case, I don't want any comment I post here or on big platforms like Reddit, Twitter or LinkedIn to be indexed, but I still want to be part of the conversation. At least I would like to have the choice whether or not any text I publish online is indexed.

[-] touzovitch@lemmy.ml 3 points 11 months ago* (last edited 11 months ago)

Exactly!

For example, here's a Medium article with encrypted content: https://redakt.org/demo/

[-] touzovitch@lemmy.ml 1 points 11 months ago* (last edited 11 months ago)

You're right. "Securing" is the wrong word; "obfuscating" might be more appropriate. I actually got the same feedback from Jonah of Privacy Guides.

I use AES encryption with a single, publicly known key at the moment. That way, if I want to give users the option to encrypt with a custom key, I don't have to change the encryption method.

EDIT: Editing the title of this thread: ~~Protect~~

[-] touzovitch@lemmy.ml 1 points 11 months ago

You have a point. Or even malicious links!

We have to be careful with the decrypted output. Redakt is an open source and collaborative project, just saying... 😜

[-] touzovitch@lemmy.ml 2 points 11 months ago

Slowing them down and preventing them from scaling is actually not that bad. We are in the context of public content accessible to anyone, so by definition it cannot be bulletproof.

Online privacy becomes less binary (public vs. private) when the Internet contains content encrypted with various methods, making it challenging to collect data efficiently and at scale.

Thank you so much for your comment though <3

[-] touzovitch@lemmy.ml 6 points 11 months ago

I don't think AI is bad as a whole. At least I would like to choose if the content I post online can be used (or not) to train models.

[-] touzovitch@lemmy.ml 2 points 11 months ago* (last edited 11 months ago)

You are absolutely right! Using a single public encryption key cannot be considered secure. But it is still better than having your content in the clear.

I intend to add more encryption options (sharable custom key, PGP) so that users can choose the level of encryption they want for their public content. Of course, the next versions will still be able to decrypt legacy encrypted content.

In a way, it makes online Privacy less binary:

Instead of having an Internet where we choose to have our content either "public" (in the clear) or "private" (E2E encrypted), we'd have an Internet full of content encrypted with heterogeneous methods (single key, custom key, key pairs). It would be impossible to scale data collection at that rate!
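The legacy-compatibility idea above — newer versions keep decrypting content encrypted under older or different keys — amounts to trying every key you hold. A rough sketch, with an assumed keyring structure and function names, again using Fernet from Python's `cryptography` package as a stand-in for the real AES mode:

```python
from typing import Optional
from cryptography.fernet import Fernet, InvalidToken

# Hypothetical keyring: the default bundled key plus any custom keys the
# user has imported. Names and structure are assumptions for illustration.
default_key = Fernet.generate_key()
keyring = [Fernet(default_key)]

def add_custom_key(key: bytes) -> None:
    """Register a sharable custom key the user received out-of-band."""
    keyring.append(Fernet(key))

def try_decrypt(token: bytes) -> Optional[str]:
    """Try every known key in turn, so legacy content stays readable."""
    for f in keyring:
        try:
            return f.decrypt(token).decode("utf-8")
        except InvalidToken:
            continue
    return None  # encrypted with a key we don't hold
```

From a scraper's perspective, each extra key in circulation is one more variant it has to obtain before bulk decryption becomes possible.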

[-] touzovitch@lemmy.ml 2 points 11 months ago* (last edited 11 months ago)

Captcha was just an example :-)

What I'm trying to say is that any small change we add to the extension will have little or no effect on real users, but will force the scrapers to adapt. That could require significant human and machine resources to keep collecting data at massive scale.

EDIT: And thank you for your feedback <3

28
submitted 11 months ago* (last edited 11 months ago) by touzovitch@lemmy.ml to c/privacy@lemmy.ml

Hey everyone, for the past few months I have been working on this project and I'd love to have your feedback on it.

As we all know, any time we publish something publicly online (on Reddit, Twitter or even this forum), our posts, comments and messages are scraped and read by thousands of bots for various legitimate or illegitimate reasons.

With the rise of LLMs like ChatGPT we know that the "understanding" of textual content at scale is more efficient than ever.

So I created Redakt, an open source, zero-click decryption tool that encrypts any text you publish online, making it understandable only to other users who have the browser extension installed.

Try it! Feel free to install the extension (Chrome/Brave/Firefox): https://redakt.org/browser/

EDIT: For example, here’s a Medium article with encrypted content: https://redakt.org/demo/
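The zero-click flow — the extension scans rendered text for encrypted spans and swaps in the plaintext — might look roughly like this. The `REDAKT[...]` marker and regex are hypothetical (Redakt's real markup isn't specified here), and a real extension would do this in a content script over the DOM rather than over a plain string:

```python
import re
from cryptography.fernet import Fernet

# Illustrative only: Fernet stands in for the actual AES mode, and the
# REDAKT[...] wrapper is an assumed format, not Redakt's real one.
key = Fernet.generate_key()
cipher = Fernet(key)

# Fernet tokens are base64url-encoded, hence this character class.
PATTERN = re.compile(r"REDAKT\[([A-Za-z0-9_\-=]+)\]")

def decrypt_page_text(text: str) -> str:
    """Replace every encrypted span in the text with its plaintext."""
    def _sub(m: re.Match) -> str:
        return cipher.decrypt(m.group(1).encode("ascii")).decode("utf-8")
    return PATTERN.sub(_sub, text)
```

Anyone without the key just sees the raw `REDAKT[...]` tokens inline, which is exactly what a crawler indexing the page would store.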

Before you ask: What if the bots adapt and also use Redakt's extension or encryption key?

Well, first, they don't at the moment (they're too busy gathering billions of data points in the clear). If they do use the extension, then any changes we add to it (captcha, encryption method) will force them to readapt and prevent them from scaling their data collection.

Let me know what you guys think!

