[-] nightshade@hexbear.net 10 points 1 month ago* (last edited 1 month ago)

I'm pretty sure some of the newer ChatGPT-like products (the consumer-facing interface, not the raw LLM) do in fact do this. They try to detect certain types of input (e.g. math problems or requests for the current weather), convert it into an API request to some other service, and return that result instead of an LLM output. Frankly, it comes across to me as an attempt to make the "AI" seem smarter than it really is by covering up its weaknesses.
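The routing described here can be sketched in a few lines. This is a hypothetical toy, not any vendor's actual pipeline: `solve_math`, `ask_llm`, and `route` are invented names, and the "math service" is just a safe local evaluation standing in for an external API.

```python
import re

def solve_math(expr: str) -> str:
    # Stand-in for a call to a specialized math service; here we only
    # allow plain arithmetic so a local eval is safe.
    if not re.fullmatch(r"[\d\s+\-*/().]+", expr):
        raise ValueError("not a plain arithmetic expression")
    return str(eval(expr))  # safe given the restricted character set above

def ask_llm(prompt: str) -> str:
    # Stand-in for the raw LLM call.
    return f"[LLM answer to: {prompt}]"

def route(prompt: str) -> str:
    """Dispatch recognized input types to a tool; fall back to the LLM."""
    math = re.fullmatch(r"\s*what is ([\d\s+\-*/().]+)\??\s*", prompt, re.I)
    if math:
        return solve_math(math.group(1))
    return ask_llm(prompt)
```

So `route("What is 2 + 3 * 4?")` hits the math path and returns `"14"`, while anything unrecognized falls through to the LLM stub.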

I think ChatGPT passes mathematical input to Wolfram Alpha.

[-] FunkyStuff@hexbear.net 3 points 1 month ago

Yeah, Siri has been capable of doing that for a long time, but my actual hope would be that, rather than just handing the user the API response, the LLM could keep operating on that response and do more with it, composing several API calls. But that's probably prohibitively expensive to train, since you'd have to do it billions of times to get the plagiarism machine to learn how to delegate work to an API properly.
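The "keep operating on the response" idea is roughly a tool-use loop: the model's output is fed back in along with each tool result until it produces a final answer. A minimal sketch, with a scripted stub playing the model's role (all names and the `TOOLS` table are invented for illustration; a trained model would generate these calls itself):

```python
# Hypothetical tool registry; in reality these would be remote API calls.
TOOLS = {
    "weather": lambda city: {"berlin": "12C, rain"}.get(city.lower(), "unknown"),
}

def scripted_model(history):
    # Stand-in for the LLM: decides the next action from the transcript.
    if not any(role == "tool" for role, _ in history):
        return ("call", "weather", ["Berlin"])       # first, fetch data
    last_result = history[-1][1]
    return ("answer", f"It is {last_result} in Berlin.")  # then compose a reply

def run(prompt, model, max_steps=5):
    history = [("user", prompt)]
    for _ in range(max_steps):
        action = model(history)
        if action[0] == "answer":
            return action[1]
        _, name, args = action
        history.append(("tool", TOOLS[name](*args)))  # feed the result back in
    return "gave up"
```

Here `run("weather in Berlin?", scripted_model)` makes one tool call, sees the result in its history, and composes the final reply; the loop structure is what would let a capable model chain several calls.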

this post was submitted on 18 Oct 2024
165 points (100.0% liked)

technology
