[-] pglpm@lemmy.ca 5 points 10 months ago

> as evidenced by the rise of the Julius Caesar of our time—Donald Trump

Is it the author of the article who writes such idiocies, or the author of the book?

[-] pglpm@lemmy.ca 5 points 11 months ago

One big, sad problem in machine learning and AI is that many (hopefully not most) practitioners in the field are largely incompetent in statistics and probability. This is why they often evaluate performance incorrectly before deployment.
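
To give a concrete example of the kind of mistake I mean (a toy sketch of mine, not a claim about any particular system): judging a classifier by raw accuracy on an imbalanced test set, without comparing against the trivial base rate.

```python
# Toy sketch: with a 95/5 class imbalance, a "model" that always
# predicts the majority class scores 95% accuracy while detecting
# nothing. Raw accuracy alone says little about performance.
import numpy as np

y_true = np.array([0] * 95 + [1] * 5)   # 5% positive cases
y_pred = np.zeros_like(y_true)          # always predict the majority class
print("accuracy:", (y_true == y_pred).mean())  # 0.95, yet useless
```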

[-] pglpm@lemmy.ca 5 points 1 year ago

Thank you for the suggestion. I tried Kagi a couple of times, but it missed the useful results that DDG or Google were giving, so I dropped it.

It depends of course on what kinds of searches one typically needs. Probably there isn't a universally best search engine.

[-] pglpm@lemmy.ca 5 points 2 years ago

It seems to me these scenes are introduced in films to sexualize them. More often than not they don't add anything to the story. But blood & sex get more viewers. So I find the whole thing hypocritical.

This brings to mind an episode of the hilarious series "Coupling", where Jeff says that the actress in the film "The Piano" (?) was naked for the whole film. His friends say she wasn't: it was only one scene in the film. And Jeff replies, "it depends on how you watch it" 🤣

[-] pglpm@lemmy.ca 5 points 2 years ago

Just realized that I double-posted this. You beat me... to the punch!

[-] pglpm@lemmy.ca 6 points 2 years ago

> They can setup arbitrary rules or ban you without any rules. It’s their service,

Indeed this shows the change in meaning that "service" has undergone in the past 10 or maybe 20 years. Before, the very notion of "service" implied that events of this kind could not happen – otherwise it wasn't a "service". Reliability and reliance were an integral part of the definition of "service".

Today this word doesn't mean anything anymore.

[-] pglpm@lemmy.ca 5 points 2 years ago* (last edited 2 years ago)

Sorry, my reply was badly worded; what I meant is that I agree, and had expressed related concerns in another community, hence the link. I wasn't backing myself up, just expressing my opinion :) Not only should we not throw more tech at the problem; I think we should rethink what we do without the tech.

[-] pglpm@lemmy.ca 5 points 2 years ago* (last edited 2 years ago)

Mathematical language is a language, but mathematics is not just a language. It is a structure with internal rules that are not determined by pure convention (as the rules of natural languages are). We could internationally agree, starting tomorrow, to call "blue" whatever is now called "red" and vice versa; but we couldn't agree to say that "2 + 2 = 5", because that would lead to internal inconsistencies (we could agree to use the symbol "5" for 4, but that's a different matter).
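
To make the point concrete, here is a toy check in the Lean proof assistant (my own illustration, nothing more): under the standard definitions of the symbols, "2 + 2 = 5" is refuted by mere computation, no vote needed.

```lean
-- Under the usual definitions of 2, 4, 5 and + on the natural
-- numbers, both facts below are settled by computation alone.
example : 2 + 2 = 4 := by decide
example : 2 + 2 ≠ 5 := by decide
```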

This is also related to a staple of science: scientific and mathematical truth is not determined by a majority vote, but by internal consistency. Indeed, modern science started with this very paradigm shift. Quoting Galilei:

> But in the natural sciences, whose conclusions are true and necessary and have nothing to do with human will, one must take care not to place oneself in the defense of error; for here a thousand Demostheneses and a thousand Aristotles would be left in the lurch by every mediocre wit who happened to hit upon the truth for himself.

If we want to train an algorithm to infer rules from language, we need to give it samples of language where the rules are obeyed strictly (and even that may not be enough). Otherwise the algorithm will wrongly generalize that the rules aren't strict; in fact it will just see a bunch of mutually inconsistent examples. This is what happens with ChatGPT.

Edit: On top of this, Gödel's theorem and other related theorems have shown that mathematical reasoning cannot be reduced to pure symbol manipulation (Hilbert's unfulfilled dream). So one can't infer mathematical reasoning from language patterns alone. Children learn reasoning not only through language training but also through behaviour training (this was pointed out by Turing). This is why large language models have intrinsic limitations in what they can achieve and be used for.

[-] pglpm@lemmy.ca 5 points 2 years ago* (last edited 2 years ago)

Completely agree! I didn't mention this, but I keep the back-up hard drive in another apartment.

This reminds me of a story from a university in England: they had two backups of a server in two different locations. One day one backup drive failed, and the second failed the day after. Apparently they were the same brand & model. The moral: also use different backup hardware brands or media!

[-] pglpm@lemmy.ca 5 points 2 years ago* (last edited 2 years ago)

P-value-based methods and statistical significance are flawed: even when used correctly (e.g. stopping rule decided beforehand, the various "corrections" for the number of data points, non-Gaussianity, and so on), one can get results that are "statistically non-significant" but clearly significant in every common-sense meaning of the word, and vice versa. There is a steady stream of literature – with mathematical and logical proofs – dating back to the 1940s pointing out the in-principle flaws of "statistical significance" and null-hypothesis testing. The editorial from the American Statistical Association gives an extensive list.
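
As a toy illustration of that "vice versa" (my own sketch with made-up numbers, not an example from the ASA editorial): the same p < 0.05 threshold can flag a negligible effect as "significant" and miss a large one, depending only on the sample size.

```python
# Toy sketch: statistical vs. practical significance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Huge sample, practically irrelevant effect (0.01 standard deviations):
a = rng.normal(0.00, 1.0, 1_000_000)
b = rng.normal(0.01, 1.0, 1_000_000)
print("tiny effect, huge n: p =", stats.ttest_ind(a, b).pvalue)  # typically < 0.05

# Tiny sample, large effect (1 standard deviation):
c = rng.normal(0.0, 1.0, 5)
d = rng.normal(1.0, 1.0, 5)
print("large effect, tiny n: p =", stats.ttest_ind(c, d).pvalue)  # often > 0.05
```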

I'd like to add: I'm saying this not because I read it somewhere (I don't like unscientific, "my football team is better than yours" discussions), but because I personally sat down and patiently went through the proofs and counterexamples, and the (almost non-existent) counter-proofs. That's what made me change methodology. This is something that many researchers using "statistical significance" have not done.
