
Discussions Search Results · repo:llmware-ai/llmware language:Python

48 results


Hi everyone 👋 I’ve been experimenting a lot with long-form LLM workflows recently and ended up building Inkfluence AI - a tool that generates full eBooks, guides, and workbooks using AI (with chapter ...

I tried to run the RAG example with sqlite / postgres (biz_bot.py). I ask: what is the annual rate of the base salary? Then everything errors out, and the generated query seems to have no spaces: **SELECTSUM(annual_spend)FROMcustomer_table_1WHEREcustomer_name= ...
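A minimal workaround sketch, not from the thread and not part of llmware: if the generated SQL really does come back with its whitespace stripped, spaces could be re-inserted around a small set of known keywords before the query is executed. The keyword list and function name below are illustrative assumptions; short keywords such as AND/OR would need word-boundary handling on top of this.

```python
import re

# Hypothetical post-processing helper: re-insert spaces around common SQL
# keywords when the generated query has them fused together, e.g.
# "SELECTSUM(annual_spend)FROMcustomer_table_1WHERE..."
SQL_KEYWORDS = ["SELECT", "SUM", "FROM", "WHERE"]

def respace_sql(query: str) -> str:
    for kw in SQL_KEYWORDS:
        # add a space after the keyword if it is glued to the next token
        query = re.sub(rf"({kw})(?=[^\s(])", r"\1 ", query)
        # add a space before the keyword if it is glued to the previous token
        query = re.sub(rf"(?<=[^\s(])({kw})", r" \1", query)
    return query

print(respace_sql("SELECTSUM(annual_spend)FROMcustomer_table_1WHEREcustomer_name='ACME'"))
# -> SELECT SUM(annual_spend) FROM customer_table_1 WHERE customer_name='ACME'
```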

Would it improve latency to be able to do function calls on a list rather than on individual items? response = model.function_call(aList) Are there caching strategies that can be used to speed up inference?
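A hedged sketch of one way to process a list today and cache repeated inputs: model.function_call(...) is the call named in the question (llmware's SLIM tool models expose a function_call method), but whether it accepts a whole list is not confirmed here, so this loops over items and memoizes by input text. The model name and batching helper are assumptions.

```python
from functools import lru_cache
from llmware.models import ModelCatalog

# Example SLIM tool model; any model exposing function_call could be used.
model = ModelCatalog().load_model("slim-sentiment-tool")

@lru_cache(maxsize=1024)
def cached_function_call(text: str):
    # Memoize by input text so repeated items skip a second inference pass.
    return model.function_call(text)

def function_call_batch(items):
    # Simple per-item loop; a true batched API would be needed to cut
    # per-call overhead beyond what caching of duplicates saves here.
    return [cached_function_call(i) for i in items]

# responses = function_call_batch(["passage 1", "passage 2", "passage 1"])
```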

Hi, could you let me know what kind of framework is used for the demo shown in the YouTube video https://www.youtube.com/watch?v=9eXwW6rKfBk

Before submitting a pull request, I just want to check whether there's already a way to get a doc_id and filename mapping. I added the following to the Query class: def get_docid_filename_map(self): ...
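As a hedged sketch outside the Query class: a similar mapping can be assembled from the result dicts a query returns, assuming each result carries doc_ID and file_source entries (both key names, and the helper name, are assumptions, not confirmed llmware API).

```python
def docid_filename_map(results):
    # Build {doc_ID: file_source} from a list of result dicts;
    # "doc_ID" and "file_source" are assumed key names.
    mapping = {}
    for r in results:
        doc_id = r.get("doc_ID")
        if doc_id is not None and doc_id not in mapping:
            mapping[doc_id] = r.get("file_source")
    return mapping
```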

How can I build a custom domain-specific model for LLMware? Is this something that is in the pipeline?

Thank you for this amazing project. I asked ChatGPT if LLMWare supports semantic chunking and it says yes, but I'm not sure. Can someone help clarify, please? "Yes, LLMWare supports semantic chunking ..."

Hello, I would love to browse the library. By that I mean access all the different contents, not just the metadata on the library card. I would like to see and print out the different Blocks with their ...

Hi, I ran into an error while embedding using the library.install_new_embedding function. Here's some simple code I copied from fast start example 2: LLMWareConfig().set_active_db("sqlite") LLMWareConfig().set_vector_db( ...
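For reference, a minimal sketch of the fast start pattern being described, with the quoting restored; the library name, folder path, vector db, and embedding model below are placeholder assumptions, not a fix for the reported error.

```python
from llmware.configs import LLMWareConfig
from llmware.library import Library

# Text-collection and vector-db selection, as in the fast start examples;
# "sqlite" / "chromadb" / "mini-lm-sbert" are example values only.
LLMWareConfig().set_active_db("sqlite")
LLMWareConfig().set_vector_db("chromadb")

lib = Library().create_new_library("example_lib")
lib.add_files(input_folder_path="/path/to/documents")  # placeholder path

# install_new_embedding builds vector embeddings for the library's blocks
lib.install_new_embedding(embedding_model_name="mini-lm-sbert",
                          vector_db="chromadb")
```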

I have used the model llmware/bling-phi-3-gguf for my RAG pipeline, but the accuracy is not good: I got 66% accuracy on my dataset with this model. I have also seen that if we increase the context ...