
So Perplexity can kind of weakly analyze the first few pages of small PDFs one at a time, but I'd love to have something that would allow me to upload several hundred research papers and textbooks that could then be analyzed for consensus and contradictions and give me more meaningful search results and summaries than keyword searching alone. Does anything like this exist in a fairly user-friendly, accessible format?

[-] DandomRude@lemmy.world 7 points 5 months ago

Afforai might be able to do stuff like this. I haven't tested it myself yet, but the service also seems to have some other features that might be relevant for your use case.

[-] Imgonnatrythis@sh.itjust.works 3 points 5 months ago

Wow. Yes, this looks spot on, thanks! Warning: whenever I find cool services like this, they tend to go under within a year or two, so apologies in advance.

[-] LesserAbe@lemmy.world 6 points 5 months ago

Don't have an answer, but I'd be interested in something like that too. I know Microsoft released a freely available lightweight LLM called Phi-3 that's supposed to make it easier for people to run a model locally. Decent article from Ars Technica: https://arstechnica.com/information-technology/2024/04/microsofts-phi-3-shows-the-surprising-power-of-small-locally-run-ai-language-models/

[-] merari42@lemmy.world 4 points 5 months ago* (last edited 5 months ago)

I have used this small R package that allows you to read the text content of a PDF and send it to a local Llama model via Ollama or to one of the large LLM APIs. I could use that to get structured answers in JSON format on a whole folder of papers, but the context length of a typical model is only long enough to hold a single (roughly 40-page) paper in memory. So I had to get separate structured answers for each paper and then generate a complete summary from those. Unfortunately that is not user-friendly yet.
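
Roughly, the workflow looked like the sketch below (written in Python rather than R just to illustrate the idea; it assumes a local Ollama server with a model already pulled, and the folder name, model name, and prompts are placeholders):

```python
# Sketch of the per-paper workflow: extract each PDF's text, ask a local model
# (via Ollama's HTTP API) for a structured JSON answer, then summarize the
# collected answers in a second pass. Requires the requests and pypdf packages.
import json
from pathlib import Path

import requests
from pypdf import PdfReader

OLLAMA_URL = "http://localhost:11434/api/generate"
MODEL = "llama3"  # placeholder: any model you have pulled locally

def ask(prompt: str, as_json: bool = False) -> str:
    """Send one prompt to the local Ollama server and return its reply."""
    payload = {"model": MODEL, "prompt": prompt, "stream": False}
    if as_json:
        payload["format"] = "json"  # constrain the output to valid JSON
    resp = requests.post(OLLAMA_URL, json=payload, timeout=600)
    resp.raise_for_status()
    return resp.json()["response"]

def pdf_text(path: Path) -> str:
    """Concatenate the extracted text of every page in a PDF."""
    reader = PdfReader(str(path))
    return "\n".join(page.extract_text() or "" for page in reader.pages)

# First pass: one structured answer per paper, so each call stays within the
# model's context window.
per_paper = {}
for pdf in Path("papers").glob("*.pdf"):
    prompt = (
        "Summarize this paper as JSON with keys 'claims', 'methods', and "
        "'conclusions'.\n\n" + pdf_text(pdf)
    )
    per_paper[pdf.name] = json.loads(ask(prompt, as_json=True))

# Second pass: a complete summary generated from the per-paper answers.
combined = json.dumps(per_paper, indent=2)
print(ask("Write an overall summary of these per-paper notes:\n\n" + combined))
```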

[-] Imgonnatrythis@sh.itjust.works 1 points 5 months ago

Interesting start, yeah, but it looks a bit in the weeds for my purposes right now.

[-] andrewrgross@slrpnk.net 2 points 5 months ago

I don't know of one, but I too would be interested to see what this looks like.

How do you currently store and organize PDFs? I used to use Mendeley during grad school, and honestly I really, really liked it. But being able to ask a question and get a natural language response that suggests which papers might contain insights when taken together would be an incredible asset.

[-] Imgonnatrythis@sh.itjust.works 3 points 5 months ago

Mendeley holdout here myself! I tried switching to EndNote because Mendeley is, for all practical purposes, abandonware now, but the conversion is very painful, with loss of a lot of data: notes, organizational structure, etc.

Still using the old desktop app nearly daily. If it were still a living project, integrating something like this into Mendeley would be incredible.

[-] andrewrgross@slrpnk.net 1 points 5 months ago

It's abandoned!?

I used it circa 2010-14. I believe it was still active then.

That's a shame. It was a great program. Everyone thought I was weird for not paying for EndNote, but it was just as good, or better!

[-] Imgonnatrythis@sh.itjust.works 2 points 5 months ago

It still runs, but there are no updates; they just push their web interface, which is very weak compared to the desktop app. New user adoption is likely next to nil, and most people I talk to under 30 have never heard of it. Unless there's a better tool to switch to, I'll never have time to replicate the organizational infrastructure I've built in Mendeley, and I really like it, so I'll use it until it disappears.

[-] Kata1yst@kbin.social 1 points 5 months ago
[-] Imgonnatrythis@sh.itjust.works 1 points 5 months ago

Looks a bit beyond me, unfortunately, but sounds interesting.

[-] mozz@mbin.grits.dev 1 points 5 months ago

Chroma is supposed to be able to import a ton of information into a vectorized format that lets you search through it in a way that's semantically meaningful, so you (or your tool) can sort of pick out the stuff from a huge batch of source material that you need to pass to the LLM for any given query.

I played around with it a little bit and I wasn't able to determine if it was a real thing or just a weird AI hype thing, but people seem to take it seriously. I would bet that someone's attempted to make a little system on top of it that lets you do stuff like what you're wanting to do (since that's what it's made for), but IDK how well it would work... might be useful to search for stuff adjacent to Chroma or vector databases to see if there are tools like that, though.
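
The basic pattern, as far as I can tell, looks something like this (a rough sketch using the chromadb Python package; the chunks, file names, and query are made up):

```python
# Sketch: load paper passages into Chroma, then pull back the chunks that are
# semantically closest to a question; those are the bits you'd pass to the LLM.
# Chroma embeds plain documents with its built-in default embedding model.
import chromadb

client = chromadb.PersistentClient(path="papers_db")  # stored on disk
collection = client.get_or_create_collection(name="papers")

# Pretend these passages came from splitting your PDFs into chunks.
collection.add(
    documents=[
        "Paper A reports a 12% improvement using method X on dataset Y.",
        "Paper B finds no significant effect of method X on dataset Z.",
    ],
    ids=["paper_a-chunk-0", "paper_b-chunk-0"],
    metadatas=[{"source": "paper_a.pdf"}, {"source": "paper_b.pdf"}],
)

# Semantic search: returns the most relevant chunks even when they share no
# keywords with the question.
results = collection.query(query_texts=["Does method X actually work?"], n_results=2)
for doc, meta in zip(results["documents"][0], results["metadatas"][0]):
    print(meta["source"], "->", doc)
```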

[-] rufus@discuss.tchncs.de 1 points 5 months ago* (last edited 5 months ago)

I don't think you can use Retrieval Augmented Generation or vector databases for a task like that. At least not if you want to compare whole papers and not just a single statement or fact, and that's what most tools are focused on. As far as I know, the tools that are concerned with big PDF libraries are meant to retrieve specific information out of the library, relevant to a specific question from the user. If your task is to go through the complete texts, it's not the right tool, because it's made to only pick out chunks of text.

I'd say you need an LLM with a long context length, like 128k or way more, so you can fit all the texts in and add your question. Or you come up with a clever agent: make it summarize each paper individually or extract facts, then feed those results back and let it search for contradictions, or do a summary of the summaries.
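
The agent version might look something like the sketch below (illustrative only; it assumes you already have per-paper summaries and some ask() helper that calls whatever model you use, local or API):

```python
# Sketch of the second stage: check every pair of per-paper summaries for
# contradictions. Pairwise checks keep each prompt short, at the cost of
# O(n^2) model calls for n papers.
from itertools import combinations

def find_contradictions(summaries: dict[str, str], ask) -> list[str]:
    """Return the model's verdict for every pair of paper summaries."""
    findings = []
    for (name_a, text_a), (name_b, text_b) in combinations(summaries.items(), 2):
        verdict = ask(
            "Do these two paper summaries contradict each other? "
            "Answer yes or no, then explain briefly.\n\n"
            f"Paper {name_a}:\n{text_a}\n\nPaper {name_b}:\n{text_b}"
        )
        findings.append(f"{name_a} vs {name_b}: {verdict}")
    return findings
```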

(And I'm not sure if AI is up to the task anyway. Doing meta-studies is a really complex task, done by highly skilled professionals in a field, and it takes them months... I don't think current AI's performance is anywhere near that level. It's probably going to make something up instead of outputting anything that's related to reality.)

[-] Imgonnatrythis@sh.itjust.works 2 points 5 months ago

Check out Afforai. It's far from perfect, but it's on track to do what I want.

[-] rufus@discuss.tchncs.de 1 points 5 months ago

Ah, nice. Thanks for sharing.

[-] MigratingtoLemmy@lemmy.world 1 points 5 months ago

That would likely be a language model fine-tuned on said material. The problem is feeding PDFs as a structured data source for the model to ingest; the fine-tuning can't happen with random unstructured PDFs.
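
As a minimal illustration of what "structured" means here, a sketch that flattens a folder of PDFs into a JSONL corpus that fine-tuning tooling could ingest (it assumes the pypdf package, and the folder and file names are placeholders; a real dataset would need far more curation than this):

```python
# Sketch: turn unstructured PDFs into a structured JSONL corpus, one record per
# page, with the source and page number attached. This is only the mechanical
# first step; a usable fine-tuning set still needs cleaning and curation.
import json
from pathlib import Path

from pypdf import PdfReader

with open("corpus.jsonl", "w", encoding="utf-8") as out:
    for pdf in Path("papers").glob("*.pdf"):
        reader = PdfReader(str(pdf))
        for page_num, page in enumerate(reader.pages, start=1):
            text = (page.extract_text() or "").strip()
            if not text:
                continue  # skip pages with no extractable text (e.g. scans)
            record = {"source": pdf.name, "page": page_num, "text": text}
            out.write(json.dumps(record, ensure_ascii=False) + "\n")
```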

