Thoughts on Kagi? (lemmy.world)
submitted 8 months ago by wavydotdot@lemmy.world to c/privacy@lemmy.ml

I've been using this search engine and I have to say I'm absolutely in love with it.

Search results are great, Google-level even. Can't tell you how happy I am after trying multiple privacy-oriented engines and always feeling underwhelmed by them.

Have you tried it? What are your thoughts on it?

[-] sudneo@lemmy.world 3 points 8 months ago

In reality I did not read anywhere that they intend to create a profile on you. What I read is some fuzzy idea about a future in which AIs could be customised at the individual level. So far, Kagi's attitude has been to offer features to do such customisations, rather than doing them on behalf of users, so I don't see why you are reading that and jumping to the conclusion that they want to build a profile on you, rather than give you the tools to create that profile. It's still "data" given to them, but it's a voluntary action, which is much different from data collection in the negative sense we all mean.

[-] LWD@lemm.ee -3 points 8 months ago* (last edited 8 months ago)

It's still data given to them, no scare quotes needed. And if that data includes your political alignment, like they say in their manifesto, a data breach would be catastrophic. Far worse has been done with far less. (And even if there isn't one, using their manifesto to promise a dystopia where you are nestled in a political echo chamber sounds like a nightmare).

And even corporate brand loyalty is mentioned in their manifesto.

When DuckDuckGo complained about Google's filter bubble, even Google had the good sense to downplay it. Kagi seems giddy about it.

[-] sudneo@lemmy.world 1 points 8 months ago

It’s still data given to them, no scare quotes needed.

It is if you decide to give it to them. If it's a voluntary feature and not pure data collection, that's the difference. Which means that if you don't want to take the risk, you don't provide that data. I am sure you understand the difference between this and data collection as a necessary condition for providing the service.

And if that data includes your political alignment, like they say in their manifesto, a data breach would be catastrophic.

Which means you will simply decide not to use that feature and not give them that data?

And even if there isn’t one, using their manifesto to promise a dystopia where you are nestled in a political echo chamber sounds like a nightmare

It depends, really. When you choose which articles and newspapers you consider reputable, do you consider that an echo chamber? I don't. This is different from using profiling and data collection to provide you, without your knowledge or input, with content that matches your preferences. Curating the content that I want to find online is different from Meta pushing only posts that statistically resonate with me, based on the behavioral analysis they have done on top of the data collected, all behind the scenes. I don't see where the dystopia is if I can curate my own content through tools. This is very different from megacorps curating our content for their own profit.

[-] LWD@lemm.ee 1 points 8 months ago* (last edited 8 months ago)

I think there may be a miscommunication here, because I fundamentally also find great distaste with

Meta pushing only posts that statistically resonate with me based on the behavioral analysis they have done on top of the data collected

... Because based on their manifesto, that's exactly what Kagi wants to do with you as a search engine; show you the things it thinks you want to see.

if you don't want to take the risk, you don't provide that data

Every giant corporation has a privacy policy; the same could be said for what Mark Zuckerberg calls the "dumb fucks" who use Facebook.

[-] sudneo@lemmy.world 1 points 8 months ago

… Because based on their manifesto, that’s exactly what Kagi wants to do with you as a search engine; show you the things it thinks you want to see.

No, based on your interpretation of the manifesto. I already mentioned that the direction Kagi has taken so far is to give the user the option to customize the tools they use. So it's not Kagi that shows you the things you want to see, but you who tell Kagi the things you want to see. I imagine a future where you, not the company, tune the AI to be your personal assistant.

Every giant corporation has a privacy policy

It is not having a policy that matters, obviously; it's what's inside it that does. Facebook's privacy policy is exactly what you would expect, in fact.

[-] LWD@lemm.ee 1 points 8 months ago* (last edited 8 months ago)

I've been quoting the Kagi Corp manifesto. In fact, across this entire thread, you've had nothing but total charity for the corporate entity and its leadership, even accusing eyewitnesses to the CEO's bad behavior of being liars.

But your comment did allow me to find another corporate manifesto, so let's take another crack at this.

You said this is bad:

Meta pushing only posts that statistically resonate with me based on the behavioral analysis they have done on top of the data collected

Kagi Corp says this is good:

In this future, instead of everyone sharing the same Siri, we will own our truly own Mike or Julia, or maybe Donald - the AI. And when you ask your own AI a question like "does God exist?" it will answer it relying on biases you preconfigured. When you ask it to recommend a good restaurant nearby, it will do so knowing what kind of food you like to eat. The same will happen when you ask it to recommend a good coffee maker - it will know the brands you like, your likely budget and the kind of coffee you usually drink.

What you say is bad for Facebook, is what Kagi Corp wants to do.

[-] sudneo@lemmy.world 1 points 8 months ago

I’ve been quoting the Kagi Corp manifesto.

Yes, but you have drawn conclusions that are not in the quotes.

Let me quote:

But there will also be search companions with different abilities offered at different price points. Depending on your budget and tolerance, you will be able to buy beginner, intermediate, or expert AIs. They’ll come with character traits like tact and wit or certain pedigrees, interests, and even adjustable bias. You could customize an AI to be conservative or liberal, sweet or sassy!

In the future, instead of everyone sharing the same search engine, you’ll have your completely individual, personalized Mike or Julia or Jarvis - the AI. Instead of being scared to share information with it, you will volunteer your data, knowing its incentives align with yours. The more you tell your assistant, the better it can help you, so when you ask it to recommend a good restaurant nearby, it’ll provide options based on what you like to eat and how far you want to drive. Ask it for a good coffee maker, and it’ll recommend choices within your budget from your favorite brands with only your best interests in mind. The search will be personal and contextual and excitingly so!

There is nothing here that says "we will collect information and build the thing for you". The message seems pretty clearly what I am claiming instead: "you tell the AI what you want". Even if we take this as something that is going to happen (which it is not necessarily), it clearly talks about tools to which we can input data, not tools that collect data. The difference is substantial, because data collection (à la Facebook) is a passive activity built into the functionality of the tool (which I can't use without it). Providing data to get functionality you want is a voluntary act that you as a user can perform when you want, and only for the categories of data that you want, and it does not preclude your use of the service (in fact, if you pay for a service and don't even use the features, it's a net positive for the company, if that's how they make money!).

even accusing eyewitnesses of the CEO’s bad behavior of being liars.

What I witnessed is the ranting of a person in bad faith. You are giving credit to it simply because it fits your preconceptions. I criticized it based on elements within their own argument, and concluded that for me it's not believable. If that's your only proof of "bad behavior" and that's enough for you, good for you.

What you say is bad for Facebook, is what Kagi Corp wants to do.

Let me reiterate on the above:

you will volunteer your data, knowing its incentives align with yours

Now, let's be clear, because I have absolutely no intention of spending my evening repeating the same argument. Do you see the difference between the following:

  • I use a service to connect with people, share thoughts, and read others' thoughts, and the service passively collects data about me so that it can serve me content that helps the company behind it maximize its profits, and
  • I use a service that I can customize and provide data to in order to customize what I see and what is displayed to me, and which has no financial incentive to do anything else with that data, because I - the user - am the paying customer.

?

If you don't, and you don't see the difference between the two scenarios above, there is no point for me to continue this conversation; we fundamentally disagree. If you do see the difference, then you have to appreciate that the nature of the data collection moves the agency from the company to the user, and that a different system of incentives creates an environment in which the company doesn't have to screw you over in order to earn money.

[-] LWD@lemm.ee 1 points 8 months ago

It's pretty clear that you only draw your conclusions from a predetermined trust in Kagi, a brand loyalty.

The CEO is good, therefore when he moved a public conversation to a private Discord server, anything he says about the private conversation is now true, and anyone who disagrees with him is a liar.

Kagi Corp is good, so feeding data to it is done in a good way, but Facebook Corp is bad so feeding data to it is done in a bad way.

Kagi's efforts to show you only things you want to see are good, because Kagi itself is good. When Facebook does it, it is bad.

[-] sudneo@lemmy.world 1 points 8 months ago* (last edited 8 months ago)

It’s pretty clear that you only draw your conclusions from a predetermined trust in Kagi, a brand loyalty.

As I said before, I also draw this conclusion based on the direction they have currently taken. Like the features that actually exist right now, you know. You started this whole thing about a dystopian future when talking about lenses, a feature in which the user chooses to uprank/downrank websites based on their own voluntary decision. I am specifically saying that this has been the general attitude, providing tools so that users can customize stuff, and therefore I am looking at that vision with this additional element in mind. You instead use only your own interpretation of the manifesto.

Kagi Corp is good, so feeding data to it is done in a good way, but Facebook Corp is bad so feeding data to it is done in a bad way.

You are just throwing your cards in the air. If you can't see the difference between me having the ability to submit data, when I want and what I want, and Facebook collecting data, there are only two options: you don't understand how this works, or you are arguing in bad faith. Which one is it?

[-] LWD@lemm.ee 0 points 8 months ago

You started this whole thing about dystopian future when talking about lenses

The "lens" feature isn't mentioned in either Kagi manifesto. That's why I consider the manifesto important: it shows what they want to produce and how willing they are to collect user data in order to produce it.

If you can't see the difference between me having the ability to submit data, when I want, what I want and Facebook collecting data

Let me quote Mark Zuckerberg of Facebook:

People just submitted it. I don't know why. They "trust me". Dumb fucks.

[-] sudneo@lemmy.world 1 points 8 months ago

The “lens” feature isn’t mentioned in either Kagi manifesto.

So? It exists, unlike the vision in the manifesto. Since the manifesto can be interpreted in many ways (despite what you might claim), I think this feature can be helpful to show Kagi's intentions, since they invested work into it, no? They could have built data collection and automated ranking based on your clicks; they didn't.

People just submitted it. I don’t know why. They “trust me”. Dumb fucks.

Not sure what the argument is. The fact that people voluntarily give data (for completely different reasons that do not benefit those users directly, but under the implicit blackmail of needing it to use the service)? I have no objections anyway against Facebook collecting the data that users submit voluntarily and that is disclosed by the policy. The problem is in the data inferred, in the behavioral data collected, which are much more sneaky, and in the data collected about non-users (shadow profiles through the pixel etc.). Putting Facebook and an imaginary future Kagi in the same pot is, in my opinion, completely out of place.

[-] LWD@lemm.ee 0 points 8 months ago

I love how you downplay what Kagi said they want their product to become. Elsewhere, you insist we must trust their privacy policy with blind faith. These two opinions are contradictory; you want people to simultaneously believe and disbelieve Kagi.

It doesn't make sense.... unless all your opinions stem from the presumption that Kagi is unquestionably good.

Regarding "Dumb Fucks": Zuckerberg described exactly what Kagi Corp wants their users to do.

[-] sudneo@lemmy.world 1 points 8 months ago

The manifesto is actually a future vision. And again, you are interpreting it in your own way.

At the same time, you are completely ignoring:

  • what the product already does
  • the features they actually invested to build
  • their documentation, in which they stress and emphasize privacy as a core value
  • their privacy policy, in which they legally bind themselves to that commitment.

Because obviously who cares about facts, right? You have your own interpretation of a sentence which starts with "in the future we will have", and that counts more than anything.

Also, can you please share with me the quote where I say that I need to blindly trust the privacy policy? Thanks.

Because I remember having said in various comments that the privacy policy is a legally binding document, and that I can file a report with a data protection authority if I suspect they are violating it, so that they will be audited. Also, guess what! The manifesto is not a legally binding document that they have to answer for; the privacy policy is. Nobody can hold them accountable if "in the future there will not be" all that stuff mentioned in the manifesto, but they are accountable already today for what they put in the privacy policy.

Do you see the difference?

[-] LWD@lemm.ee 0 points 8 months ago

No, I'm engaging in a good faith effort to find the corporation's words, while you downplay and reinterpret them at every turn.

I know you won't bother to look, but for my own personal amusement: Kagi Corp makes clear, page after page, that they care about AI, not privacy. Here's a third page demonstrating this:

Kagi has long heritage in AI, in fact we started as kagi.ai in 2018 and we've previously published products, research and even a sci-fi story about AI. While generative AI opens a new paradigm of search and a vast search space of queries that never previously existed we have taken special care to ensure a thoughtful user experience guided by this philosophy of AI integration

  • what the corporation did: AI stuff
  • the features they actually invested to build: AI integration

And this is rather ironic too:

At the same time, you are completely ignoring... their privacy policy in which they legally bind themselves to such commitment....

Also, can you please share to me the quote where I say that I need to blindly trust the privacy policy?

[-] sudneo@lemmy.world 1 points 8 months ago

You are really moving the goalposts, eh.

Developing AI features does not mean anything in itself. None of the AI features they built do anything at all in a personalized way. For sure they seem very invested in integrating AI into their product, but so far no user data is used, and all the AI features are simply summarizers and research assistants. What is this supposed to prove?

I will make it simpler anyway:

What they wrote in a manifesto is a vague expression of what will happen in a non-specified future. If the whole AI fad fades in a year, it won't happen. In addition, we have no idea what specifically they are going to build, what the impact on privacy will be, what specific implementation choices they will take, and many other things. Without all of this, your dystopian interpretation is purely arbitrary.

And this is rather ironic too:

Ironic how? Saying that a document is binding doesn't mean blindly trusting it, it means that I know the power it holds, and it means it gives the power to get their ass audited and potentially fined on that basis if anybody doesn't trust them.

Your attempt to mess with the meaning of my sentences is honestly gross. Being aware of the fact that a company is accountable has nothing to do with blind trust.


Just to sum it up, your arguments so far are that:

  • they mention a "future" in which AI will be personalized and can act as our personal assistant, using data, in the manifesto.
  • they integrated AI features in the current offering

This somehow leads you to the conclusion that they are building some dystopian nightmare in which they get your data and build a bubble around you.

My arguments are that:

  • the current AI features are completely stateless and don't depend on user data in any way (this capability is simply not developed; they use external models).
  • the current features are very user-centric, and users have complete agency over what they can customize, hence we can only assume that similar agency will be implemented in AI features (as opposed to data being collected passively).
  • to strengthen the point above, their privacy policy is not only great, but also extremely clear about the implications of the data collected. We can expect that if "personalized" AI features do come up, they will maintain the same standard of clarity, so that users are informed exactly of the implications of disclosing their data. This differentiates the situation from Facebook, where the privacy policy is a book.
  • the company's business model also gives hope. With no customer to serve other than the users, there is no substantial incentive for Kagi to get data for anything else. If they can be profitable just from users paying, then there is no economic advantage in screwing the users (in fact, the opposite). This is also clearly written in their docs, and the emphasis on the business model and incentives is present in the manifesto as well.

The reality is: we don't know. It might be that they will build something like you say, but the current track record doesn't give me any reason to think they will. I, and I am sure a substantial percentage of their user base, use their product specifically because it is good and because they are user-centric and privacy-focused. If they change posture, I would dump them in a second, and a search engine is not inherently something that locks you in (like an email provider). At the moment they deliver, and I am all-in for supporting businesses whose revenue models stand in opposition to ad-driven models and don't rely on free labor. I do believe that economic and systemic incentives are the major reasons why companies are destroying user privacy; I don't think there is any inherent evil. That's why I can't really understand how a business which depends on users paying (Kagi) can be compared to one that depends on advertisers paying (Meta), where users (their data) are just part of the product.

Like, even if we assume that what's written in the manifesto comes to life: if the data is collected by the company and only, exclusively, used to customize the AI in the way I want (not to tune it to sell me shit I don't need), within the scope I need, with the data I choose to give, with full awareness of the implications, where is the problem? This is not a dystopia. The dystopia is if Google builds the same tool and tunes it automatically so that it benefits whoever pays Google (not users, but the ones who want to sell you shit). If a tool truly serves my own interests, and the company's interest is simply that I find the tool useful, without additional goals (ad impressions, visits to pages, products sold), then that's completely acceptable in my view.

And now I will conclude this conversation, because I said what I had to, and I don't see progress.

[-] LWD@lemm.ee 0 points 8 months ago* (last edited 8 months ago)

You're right. We aren't getting anywhere. I'm trying to explain how 2 + 2 = 4, but you keep insisting it's zero.

Kagi Dot AI, with a past, present and future in AI, is the first part of the equation.

Private data consumption and regurgitation, which Kagi is allegedly not injecting into its AI right now, is the other part.

Look at them side by side and you see what the company wants to do, clear as day. But for some reason, you repeatedly insist there's nothing there.

Like, even if we assume that what's written in the manifesto comes to life, if the data is collected by the company and only, exclusively, used to customize the AI in the way I want

To be clear, you want a venture capital corporation to keep you in your filter bubble regarding your political beliefs, your corporate brand choices, your philosophical beliefs, etc?

The dystopia is already here for you.

And even if you feel comfortable feeding all this private data into a soulless corporation, and you're not worried about data breaches, why would you evangelize that kind of product on a privacy forum?

[-] sudneo@lemmy.world 1 points 8 months ago

To be clear, you want a venture capital corporation to keep you in your filter bubble regarding your political beliefs, your corporate brand choices, your philosophical beliefs, etc?

Thankfully, Kagi is not a VC-funded corp. The latest investment round was for 670k, pennies, from 42 investors, which means an average of less than 20k per investor (they also mention that most are Kagi users too, but who knows).

Also, it depends on what "being kept in a filter bubble" means. If I build my own bubble according to my own criteria (I don't want to see blogs filled with trackers; I want articles from reputable sources, i.e. what I consider reputable; if I am searching for code I only want Rust, because that's what I am using right now; etc.) and I have the option to choose when to look outside, then yes, I think it's OK. We all already do that anyway: if I see an article from Fox News, I won't even open it if I can find something from somewhere else on the same topic. That said, there are times when I can choose to read Fox News specifically to see what conservatives think.

The crux of it all is: who is in charge? And what happens with that data? If the answers are "me" and "nothing", then it's something I consider acceptable. It doesn't mean I would use it or that I would use it for everything.

evangelize that kind of product on a privacy forum?

First, I am not evangelizing anything. That product doesn't even exist, I am simply speculating on its existence and the potential scenarios.

Second: privacy means that the data is not accessed or used by unintended parties and is not misused by the intended ones. Focus on unintended. Privacy does not mean that no data is gathered in any case, even though this is often the best way to ensure there is no misuse. This is also completely compatible with the idea that if I can choose which data to give, and whether I want to give it at all (and of course deleting it), and that data is not used for anything else than what I want it to be used for, then my privacy is completely protected.

this post was submitted on 19 Feb 2024
108 points (82.5% liked)
