Is anyone building LLM apps on top of structured data e.g. SQL databases or MongoDB?

@japharish We have LLMs and other AI applications deployed at scale across a global architecture. Our tech is pretty "enterprisey" and usually not relevant to startups, as it comes with higher overhead and annoyingly long development times for reasons having little to do with the tech itself.
 
@dejj1009 I'm building https://useturbine.com to solve exactly that. It lets you create data pipelines that keep your data sources and vector database in sync. It handles everything: reading data, chunking, deduping, creating embeddings, and storing them in a vector database. All of this is massively parallelized and real-time, built for scale.
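For anyone curious what such a sync pipeline involves, here is a minimal sketch of the chunk → dedupe → embed → store steps. This is not Turbine's API; all function names are hypothetical, and the "embedding" is a stand-in for a real model call:

```python
import hashlib

def chunk(text, size=200):
    """Split a document into fixed-size character chunks."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def embed(piece):
    """Stand-in embedding: a real pipeline would call an embedding model here."""
    digest = hashlib.sha256(piece.encode()).digest()
    return [b / 255 for b in digest[:8]]  # toy 8-dimensional vector

def sync(documents, index):
    """Chunk each doc, dedupe by content hash, embed, and upsert into `index`."""
    for doc in documents:
        for piece in chunk(doc):
            key = hashlib.sha256(piece.encode()).hexdigest()
            if key in index:  # dedupe: identical chunks are stored once
                continue
            index[key] = {"text": piece, "vector": embed(piece)}
    return index

index = {}
sync(["hello world", "hello world", "another doc"], index)
```

A production version would replace the dict with a real vector database and run the loop in parallel, but the hash-based dedupe step is what keeps re-syncs cheap.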

How are you doing this currently? Would love to talk to you more about this in the DMs.

P.S. Turbine is free for early adopters at https://console.useturbine.com
 
@japharish We use OpenSearch as a hybrid data store and then create multiple sets of embeddings to support complex contact analysis and QM-style automation for contact centers. The LLMs are used for:

a) generating complex analyses: how was the conversation opening? How would you score this based on the provided rubric and a 5-point scale? What questions can be answered by this section? Then standard analytics for aggregating scores within and across cross-sections;

b) supporting natural language inference and interaction: are there a lot of contacts with Problem X? Can you find some? Where did this conversation go wrong? How would you recommend correcting it given the base script?
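The "standard analytics" step mentioned above can be sketched without any LLM in the loop: once the model has assigned rubric scores per contact section, aggregation is ordinary group-by math. The sample rows and field names here are hypothetical:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical rubric scores (1-5) that an LLM would assign per contact section,
# e.g. via a prompt like "How would you score this based on the provided rubric
# and a 5 point scale?"
scores = [
    {"contact": "c1", "section": "opening", "score": 4},
    {"contact": "c1", "section": "resolution", "score": 3},
    {"contact": "c2", "section": "opening", "score": 5},
    {"contact": "c2", "section": "resolution", "score": 2},
]

def aggregate(rows, key):
    """Average rubric scores within each value of `key` (section, contact, ...)."""
    buckets = defaultdict(list)
    for row in rows:
        buckets[row[key]].append(row["score"])
    return {k: mean(v) for k, v in buckets.items()}

per_section = aggregate(scores, "section")
per_contact = aggregate(scores, "contact")
```

Keeping the scoring (LLM) and the aggregation (plain code) separate also makes it easy to re-run analytics when the rubric changes, without re-scoring every contact.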

I think it’s still extremely important that any production LLM app focus on supplementing human judgment, since LLMs still aren’t reliable enough on their own.
 
@japharish Are there a ton of companies doing RAG solutions for a company’s Confluence docs? I see you mentioned Notion and PDFs. I’m starting to wonder if it’s better to develop RAG products for a particular vertical.
 