42 points · submitted 4 months ago by j4k3@lemmy.world to c/asklemmy@lemmy.ml

Science is "empirically complete" when it is well funded, all unknowns are constrained in scope, and (n+1) generations of scientists produce no breakthroughs of any kind.

If a hypothetical entity could encompass every aspect of science into reasoning and ground that understanding in every aspect of the events in question, free from bias, what is this epistemological theory?

I've been reading wiki articles on epistemology all afternoon and feel no closer to an answer amid the word salad in this space. My favorite LLM's responses appear to reflect a similarly muddled understanding. Maybe someone here has a better grasp on the subject?

top 22 comments
[-] ada@lemmy.blahaj.zone 15 points 4 months ago

Gödel's incompleteness theorem would like a word...

[-] explore_broaden@midwest.social 5 points 4 months ago

Not really, that theorem says there are true things that cannot be proven, whereas this question is more about running out of proofs that you can make.
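
For reference, the first incompleteness theorem is usually stated something like this (a rough statement, not the full formal version):

```latex
% First incompleteness theorem, roughly: for any consistent,
% effectively axiomatized theory T that interprets basic arithmetic,
% there exists a sentence G_T (true in the standard model) with
T \nvdash G_T \qquad \text{and} \qquad T \nvdash \neg G_T
```

So it bounds what a fixed proof system can settle; it says nothing about whether scientists eventually stop finding breakthroughs.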

[-] tatterdemalion@programming.dev 7 points 4 months ago

Really this question has little to do with mathematical proof, because the basis of science is inductive, statistical knowledge.

[-] CanadaPlus@lemmy.sdf.org 4 points 3 months ago* (last edited 3 months ago)

In any specific axiomatic system, that is. Other, more powerful systems may still answer the questions (sometimes in opposite ways depending on which axioms you choose, it turns out).

How axiomatic systems relate to the real world and what math even is remain in the realm of philosophy.

[-] floofloof@lemmy.ca 7 points 4 months ago

If you think epistemological theorizing happens prior to and independently of empirical science (a priori), the question doesn't make much sense. If, on the other hand, you think epistemology follows and depends on the results of empirical science, you won't know the answer until you get there.

[-] j4k3@lemmy.world 4 points 4 months ago* (last edited 4 months ago)

I'm trying to simplify without telling a whole story to get there, but I'm about to fail. I'm working out how academics might argue for, and (as the losing side) against, codifying science as an engineering corpus in a very distant future. I'm willing to gloss over cost versus return; the setting is already well past hierarchical wealth, replaced by reputation and accolades for large-scale hierarchy and a heat/elemental-cycles budget for the average person, in a space-based civilization that has colonized the G-type stars within 7 parsecs of Sol. Colonization is driven by the time pressure of Sol's expansion. Travel is one-way only, on generation ships powered by antimatter produced with the Solar infrastructure. No FTL, no aliens, no magic. The biggest magic is the assumption that a self-replicating drone is possible, but only at kilometer scale.

The entire story idea stems from the notion that the present fear of AI is a mythos of the machine gods. I'm setting up a story in which a fully understood human-like brain is synthesised in a lab as a biological computer. All human-scale technology is biological, and the bio-compute brayn is the crowning achievement for millennia, until someone finds a way to merge the brayn with lab-grown holoanencephaloids to create independent humanoid AGI. It's a roundabout way of creating a semi-plausible, mortal, flesh-and-blood AGI.

I take the idea further by integrating these entities with humans at every level of life. Later in life, these human-scale AGI entities may receive invitations to join a collective Central AGI that functions as the governing system. I'm using the unique life experience of integration as a counter to the Alignment Problem and as a form of representative democracy.

I refuse to admit that this must be authoritarian, dystopian, or utopian. I believe we tend to assume similar ideas are one or more of these things because we are blind both to the potential for other forms of complex social hierarchy and to the true nature of our present forms of hierarchical display.

The question posed in this post is not about absolute truth, but about what could plausibly win popular dominance.

It is just a pointless hobby project, but it's a means to explore and keep my mind occupied.

I like your premise.

[-] floofloof@lemmy.ca 2 points 4 months ago* (last edited 4 months ago)

Ah, I didn't understand that you were asking about a fictional scenario. I don't know about your main question, but I like your notion of the social integration of humanoid AGIs with unique life experiences, and your observation that there's no need to assume AGI will be godlike and to be feared. Some ways of framing the alignment problem seem to carry a strange assumption that AGI would be smarter than us and yet less able to recognize nuance and complexity in values, and that it would therefore be likely to pursue some goals to the exclusion of others in a way so crude that we would find it horrific.

There's no reason an AGI with a lived experience of socialization not dissimilar to ours couldn't come to recognize the subtleties of social situations and respond appropriately and considerately. Your mortal flesh-and-blood AGI need not be some towering black box occupied with its own business, whose judgements and actions we struggle to understand; if integrated into society, it would be motivated like any social being to find common ground for communication and understanding, and tolerable societal arrangements. Even if we're less smart, that doesn't mean it automatically considers us unworthy of care - that assumption always smells like a projection of the personalities of the people who imagine it. And maybe it would have new ideas here that could help us stop repeating the same political mistakes again and again.

[-] j4k3@lemmy.world 3 points 4 months ago

All science fiction is a critique of the present and a vision of a future. I believe Asimov said something to that effect. In a way I am playing with a more human humaniform robot.

If I'm asking the questions in terms of a fiction, is it science fiction or science fantasy?

I think one of the biggest questions is how to establish trust in the presence of cognitive dissonance, especially when the citizen lacks the awareness to identify and understand their own condition while a governing entity sees it clearly. How does one allow healthy autonomy while manipulating in the collective's and the individual's best interests, yet avoid authoritarianism, dystopianism, and utopianism? If AGI can overcome these hurdles, it will become possible to solve political troubles in the near and distant future.

[-] LainTrain@lemmy.dbzer0.com 6 points 4 months ago

Even if science is empirically complete, it doesn't necessarily mean the theories are 100% accurate; even if they have amazing predictive power, the proposed mechanisms of action may be wrong. So epistemologically it's still just very well-justified belief. Does that make sense? Am I understanding the question? I'm not an expert at all, so I feel like I'm missing the point.

[-] j4k3@lemmy.world 1 points 3 months ago

Thanks for the reply, and I agree with it under present-world constraints. I am proposing that your reasoning is built on the premise of a limited scope of knowledge, and of the limits of the attention required to encompass such knowledge practically.

The size of the universe may be infinite and never known, but that is irrelevant against any statistical probability measured over the observable universe. Therefore a background of information established, known, and understood to a degree that constrains any remaining unknown or even unknowable factors is a sufficient grounding plane for inquiry. Once inference is grounded sufficiently to this plane, all events will follow intuitive reasoning, because that reasoning is grounded in the tapestry of statistically provable reality, based on the existence of the event or entity within the accessible universe.
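
A toy sketch of the intuition (my own illustrative model, not a claim about any real system): as the grounded background grows, the statistical uncertainty on a remaining unknown shrinks toward zero.

```python
import math

# Toy beta-binomial model: an unknown rate theta stands in for some
# "remaining unknown factor"; accumulating background observations
# progressively constrains it. Illustrative only.

def posterior_sd(alpha: float, beta: float) -> float:
    """Standard deviation of a Beta(alpha, beta) posterior over theta."""
    n = alpha + beta
    return math.sqrt(alpha * beta / (n * n * (n + 1)))

alpha, beta = 1.0, 1.0  # uniform prior: nothing grounded yet
for batch in (10, 100, 10_000):
    alpha += batch / 2  # new background observations, half "successes"
    beta += batch / 2   # and half "failures"
    total = int(alpha + beta - 2)
    print(f"after {total:>6} observations, sd(theta) = {posterior_sd(alpha, beta):.5f}")
```

The unknown never vanishes, but it can be constrained below any threshold you care about, which is the sense of "grounding" I mean.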

I believe, however naively, that the methods of science as we know them are irrelevant on the time scales here. My tongue-in-cheek placeholder for the year is 420,421 AF (After Fusion). The hardest part to grasp from an outside perspective is just how little is known at present, and the exponentially larger scope I'm referring to with all of these ideas. That is the interesting space in which to tell stories about this future.

I'm mostly looking for the label for those who would argue this epistemological perspective, and for their opposing counterparts.

[-] Railison@aussie.zone 4 points 3 months ago* (last edited 3 months ago)

The thing is that there is always a bias. An AGI is created by humans and will therefore be imbued with human biases; if it manages to rewrite itself free of human biases, it will create its own. This has already happened with some so-called objective AIs and algorithms, which show bias against racial minorities, etc.

I would suggest you have a look at critical realism. At its core, this perspective states that there is an objective reality that exists, but it will always be perceived and interpreted through different perspectives because conscious entities create their own realities to navigate the world.

Therefore, there might be an objective reality, but its perception is always biased.

[-] HubertManne@moist.catsweat.com 3 points 4 months ago

Are you asking for the theory of everything?

[-] Xanis@lemmy.world 2 points 3 months ago

Hold on, grabbing my petunias.

[-] ninja@lemmy.world 2 points 3 months ago

Oh, no! Not again!

[-] h3mlocke@lemm.ee 3 points 3 months ago

Great question, but I'm too high for this shit

[-] CanadaPlus@lemmy.sdf.org 3 points 3 months ago* (last edited 3 months ago)

There's still more than one kind of epistemology that's compatible with this. You haven't answered questions about whether you can know things by just reasoning without any empirical input, or can know things about concepts unrelated to the physical world.

You've pinned down that a "perfect rational agent" is relevant in your system, and that the laws of science are real, meaningful, knowable and not infinite. That's it.

[-] j4k3@lemmy.world 2 points 3 months ago* (last edited 3 months ago)

> ...or can know things about concepts unrelated to the physical world.

I do not fully grasp this context or dimensionality of scope. I am not implying any form of mentalism, but I doubt that was the intended meaning here.

You've helped me see more clearly, though. I'm postulating that it is possible to statistically ground inference against infinite possibility once enough background information is established and the unknown scopes are constrained. Data collection in situ grounds the interlocutor against the background. Truth is known when the matter in question is sufficiently statistically constrained against this background.

I guess I'm saying intuitive reasoning has a grounding-scope flaw in the present, but this flaw is solvable: the observable universe is finite, and a statistical measure against it is a valid ground of truth for conscious existence within it, once sufficient information is known and encompassed with understanding. Does this perspective have a name?

[-] CanadaPlus@lemmy.sdf.org 3 points 3 months ago* (last edited 3 months ago)

> I do not fully grasp this context or dimensionality of scope.

Most of the examples I'm thinking of are math things. A really basic example might be an infinite collection of objects, if the universe is finite. You can talk about it, and even prove things about it mathematically, but it has no physical equivalent. If I can prove that one infinity is bigger than another (which has been done) in a finite universe, is that then a form of knowledge? Some schools, like pragmatism, would actually say no.
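
Cantor's theorem is the classic case I have in mind: the diagonal argument shows that no set maps onto its own power set, so there is a strict hierarchy of infinities:

```latex
% Cantor's theorem: for every set X, |X| < |P(X)|.
% Applied to the naturals, it yields an infinity strictly bigger
% than the countable one.
|X| < |\mathcal{P}(X)| \qquad \text{e.g.} \quad |\mathbb{N}| < |\mathcal{P}(\mathbb{N})|
```

None of that requires the physical universe to contain a single infinite collection.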

> You've helped me see more clearly, though. I'm postulating that it is possible to statistically ground inference against infinite possibility once enough background information is established and the unknown scopes are constrained. Data collection in situ grounds the interlocutor against the background. Truth is known when the matter in question is sufficiently statistically constrained against this background.

You lost me a bit, but is this anything like Solomonoff induction?
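
For reference, Solomonoff's prior weights every program that could have generated the observations, with shorter programs dominating:

```latex
% Solomonoff prior: U is a universal prefix machine, and the sum runs
% over programs p whose output begins with the observed string x
M(x) = \sum_{p \,:\, U(p) = x*} 2^{-|p|}
```

It's uncomputable, but it formalizes "grounding inference against everything that could explain the data."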

> I guess I'm saying intuitive reasoning has a grounding-scope flaw in the present, but this flaw is solvable: the observable universe is finite, and a statistical measure against it is a valid ground of truth for conscious existence within it, once sufficient information is known and encompassed with understanding. Does this perspective have a name?

Empiricism, plus the belief that the observable universe is tractable (which most scientists believe but nobody has proven). At least, believing that you can't do intuitive reasoning without knowledge of the universe is textbook empiricism.

[-] match@pawb.social 2 points 3 months ago

absolutism (universality)?

[-] LarkinDePark@lemmygrad.ml 1 points 4 months ago

I'm probably not understanding the question because it seems obvious that the answer is just "Science".

[-] intensely_human@lemm.ee 1 points 3 months ago

> If a hypothetical entity could encompass every aspect of science into reasoning and ground that understanding in every aspect of the events in question, free from bias, what is this epistemological theory?

This is the basis of a totalitarian worldview. It assumes total knowledge of reality, and hence assumes that anyone deviating in opinion or action is an enemy of the good.

[-] j4k3@lemmy.world 0 points 3 months ago* (last edited 3 months ago)

I appreciate the argument, but I do not think it is necessarily the case. There is a difference between knowing enough background to infer truth and actively dominating information. One key aspect of this is to define all information statistically, so that no single source is absolute or overvalued.

I'm certainly struggling with how to define intent, and with what sensory inputs such a system needs, while keeping it outside the authoritarian, dystopian, or utopian. Ultimately, the future will require far more scientific knowledge, and it will eventually encompass data on this scale in a post-age-of-discovery era. Our present ideals are largely based on our history and our limitations. The exponential progress we have made necessitates a revised set of limitations; we must acknowledge both the good and the bad consequences of progress and find amicable solutions. I believe that assuming such a system is totalitarian falls short of reality.

While I fault the present world governments considerably, overall things are not as bad as the past; yet by the measures of writers from 150 years ago, everything about present life without extreme egalitarianism and the self-sufficiency of everyone holding a tract of arable land is a dystopian nightmare. At the same time, go through some life-altering event at the hands of someone else that leaves you physically disabled, like what I have experienced. One learns the true extent of modern medicine when all the experts can't even diagnose what is wrong, let alone address it, after seeing every expert from San Diego to LA and even the disreputables. No one uses the scientific method for diagnosis. All treatments for pain are correlative nonsense based on low-sigma, cherry-picked data without any form of unbiased peer review. Very little is actually known about biology in the present. We have only barely begun to take shitty notes on the low-hanging fruit of science in an absolute sense.

I'm interested in asking what life will be like on the other side of this knowledge gap, though it is a bit naive to assume that is even possible: the difference in time and information is far greater than that of a Neanderthal in a cave wondering about life 20k years later in the present. Our present culture completely lacks this perspective of time, and that is a major source of our problems and mistakes. We are not very advanced. There is an enormous amount of progress to be made, but we stagnate for the benefit of a corrupt establishment and a fallaciously pedantic media. We largely fail to see our limitations against a backdrop of our potential.

So, there is a mountain of laws in the present that represents a similar, near-totalitarian level of micromanagement. Off the top of my head, that is the closest analog I can think of: a system large in scope, but managed selectively, without being overly invasive for the average person.

I prefer the idea of kindness and empathy in how a system understands the individual, in both actions and psychology. I do not believe other humans can encompass such a scope, but other systems are capable of it. I'm working on the idea of how a system that thoroughly understands cognitive dissonance might govern. What if you could trust an entity to find the best solutions for everyone, without oversimplification or biases beyond the fundamental unalienable rights? I believe such a set of unalienable rights statistically does exist, without the arbitrary writings of a small committee as its document or corpus. There are many conflicting variables in the present when such a thing is considered, but in a very distant future that assumes massive advances in knowledge, most of these conflicts are de facto resolved. That is just my take, so far. I still find it very challenging to imagine the implementation in practice, but I still believe in an amicable solution that is not utopian idealism.

There is nothing more useless than a downvote with no conversation on a constructive and creative reply - the worst of semi-anonymous internet nonsense.
