
Not sure if this is the right place to put this, but I wrote a library (MIT) for creating "semantic functions" that use LLMs as their implementation. It's optimized for ergonomics and opacity (the LLM call is hidden behind an ordinary function signature), so you can write your functions like:

from servitor import semantic
@semantic
def list_people(text) -> list[str]:
    """List the people mentioned in the text."""

(That's not a typo - the body of the function is just the docstring. servitor detects that the function returns None and uses the docstring as the prompt instead.)
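For the curious, here's a toy sketch of how a decorator can notice the None return and fall back to the docstring. This is illustrative only, not servitor's actual implementation - `semantic_sketch` is a made-up name, and a real version would send the prompt to an LLM instead of returning it:

```python
import functools
import inspect

def semantic_sketch(fn):
    """Toy decorator (NOT servitor's real code): if the wrapped function
    returns None, treat its docstring as the task description."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        result = fn(*args, **kwargs)
        if result is None:
            # A real implementation would send the docstring plus the
            # arguments to an LLM here; we just surface the prompt parts.
            prompt = inspect.getdoc(fn)
            return prompt, args, kwargs
        return result
    return wrapper

@semantic_sketch
def list_people(text) -> list[str]:
    """List the people mentioned in the text."""

print(list_people("Alice met Bob."))
# -> ('List the people mentioned in the text.', ('Alice met Bob.',), {})
```

A function with a real body still works unchanged, since the decorator only intervenes when the call returns None.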

Basic setup:

$ pip install .[openai]
$ pip install .[gpt4all]
$ cp .env.template .env

Then edit .env to have your API key or model name/path.

I'm hoping for this to be a first step towards people treating LLMs less like agents and more like inference engines - the former framing is currently prevalent because ChatGPT is a chatbot, but the latter is more accurate to what LLMs actually are.

I designed it specifically so it's easy to switch between models and LLM providers without requiring dependencies for all of them. OpenAI is implemented because it's the easiest for me to test with, but I also implemented gpt4all support as a first local-model backend.

What do you think? Can you find any issues? Want to implement any connectors or adapters? Any features you'd like to see? What can you make with this?

this post was submitted on 16 Jun 2023

Free and Open Source Software
