Looking for cofounder/CTO

bathgate

New member
Hey, quickly: I've done about 7 B2B SaaS stints, but never as a founder. Started as a low-rung sales guy and have since run major accounts, run small teams for high-performance startups, and been head of sales for a few Series A and B companies. Not always successful, but always learning…

I need a cofounder who's full stack and willing to do at least the MVP dev. I'm learning as fast as I can, but ChatGPT and Harvard CS101 are only so useful…

Will explain the play 1:1, but it leverages some APIs for data + (ideally) Gemini Pro + RAG + whatever more cutting-edge shit we can do w/o over-engineering.

The play (without losing my IP…) is to sell to ICs at companies selling to other companies (big ones). It starts as groundswell, but I have plans to grow it team -> division -> org, etc. If we do it right it could be huge, but tbh we could also test/validate the whole thing in 2 months and either:

1) raise and go for broke
2) sell to ICs to make their job easier à la JetBrains or Tableau // sell the company as soon as possible and go on a fun trip
3) fail fucking miserably but forge a solid business relationship & win on the next one

I'm not lazy. I'm trying to do it myself. However, between Node.js, RAG, fucking DBs, networking, etc., I am dying. And this is just to run locally… even if this works, I'm going to have to use some sort of scaling public cloud. Yep. Can't do that. And I don't know what I don't know…

But I can sell. I have the idea. I love being involved in product & eng. I'll never overpromise / underdeliver to customers. I want a partner here. I want us to grow it. Could be a quick exit - I know some possible avenues there; or it could be worth building.

HMU in DMs. I'd like to do a simple NDA but can be as open as I know how to be thereafter.

Tx
 
I'll add that this is not a "realtime" play per se… it requires recent data, but nothing like waiting for models to retrain.
 
@613jono Same here. The only thing that came to mind is leveraging the massive context breakthrough it has, e.g. 1M+ tokens for in-context learning and knowledge injection. Some are considering this a replacement for RAG in some scenarios, but since OP mentioned RAG, I am curious.
 
@konsidine It's the context window. GPT-4 is better (generally), but the NYT lawsuit has hamstrung it a bit. Gemini has its own issues. But dumping a 1M-token context chunk obviates the need for Pinecone vectorization or clever RAG. Not that those don't have a place, but the combination of all of them with some cleverness is superior.

Also… Gemini is far less costly per query.
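To make the stuff-vs-retrieve tradeoff above concrete, here's a rough back-of-envelope sketch (my illustration, nobody's production code): estimate total tokens with a crude chars/4 heuristic and only fall back to retrieval when the corpus won't fit the window. A real system would count with the model's actual tokenizer.

```python
def plan_context(docs, window_tokens=1_000_000, chars_per_token=4):
    """Decide between stuffing every doc into one huge context window
    vs. retrieving only the relevant chunks. The chars/4 estimate is a
    crude rule of thumb; a real system would use the model tokenizer."""
    est_tokens = sum(len(d) for d in docs) // chars_per_token
    return "stuff-everything" if est_tokens <= window_tokens else "retrieve-then-read"

small_corpus = ["short account note"] * 100   # well under the window
huge_corpus = ["x" * 200_000] * 50            # ~2.5M estimated tokens
print(plan_context(small_corpus))  # stuff-everything
print(plan_context(huge_corpus))   # retrieve-then-read
```

The point of the sketch: long context doesn't eliminate retrieval, it just raises the corpus size at which you need it.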
 
@konsidine RAG I consider more accurate matching vs. whole-chunk parsing (I'm not an AI engineer), if that makes sense. The queries here will be low frequency, but large. Matching Qs to As at higher fidelity is very important.
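For anyone wondering what "more accurate matching" cashes out to in code: the core of the RAG retrieval step is just nearest-neighbor search over embeddings. A toy sketch, where the bag-of-words `embed` is a deliberately fake stand-in for a real embedding model:

```python
import numpy as np

def embed(text, vocab):
    """Toy bag-of-words 'embedding' -- a real system would call an
    embedding model here instead of counting vocabulary words."""
    words = text.lower().split()
    return np.array([words.count(w) for w in vocab], dtype=float)

def top_k(query, docs, vocab, k=1):
    """Return the k docs whose vectors are most cosine-similar to the query."""
    q = embed(query, vocab)
    scored = []
    for d in docs:
        v = embed(d, vocab)
        denom = np.linalg.norm(q) * np.linalg.norm(v)
        sim = float(q @ v / denom) if denom else 0.0
        scored.append((sim, d))
    return [d for _, d in sorted(scored, reverse=True)[:k]]

vocab = ["pricing", "contract", "renewal", "discount", "onboarding"]
docs = [
    "renewal discount terms for enterprise contract",
    "onboarding checklist for new accounts",
]
print(top_k("what discount applies at contract renewal", docs, vocab))
# ['renewal discount terms for enterprise contract']
```

Swapping the toy `embed` for a real embedding API and the list scan for a vector index (Pinecone etc.) is the whole jump from sketch to production.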
 
@haydnp 1M context is a big deal. Whether you have a need for it varies. This is a critical component of my plan. But appreciate you either way.
 
@bathgate This isn't for me, but imo you're explaining too much too fast. Can you answer this question:

Will explain the play 1:1, but it leverages some APIs for data + (ideally) Gemini Pro + RAG + whatever more cutting-edge shit we can do w/o over-engineering.

What core flows do you need? The way you've described it just sounds like a basic OpenAI + RAG wrapper, so can you go into more detail? If you can't because it's too risky, then that makes people feel that your idea is so simple that anyone could steal it and build it themselves.
 
@colinburhart It is exactly that. A wrapper. But most research I've seen (and I sell to the F500 / Global 2K on LLM bs all day) is effectively that data > models. I'm not trying to reinvent the wheel.

I'm solving a problem I know needs to be solved with the current cutting-edge solutions, imbuing specific prompting + guidance of the model & RAG function for a singular purpose. It's niche. It won't be a $bn business. But it will rip.

I know because I have personally signed off on IC expenses 100x over for similar software that doesn't do 1/5 of what we can accomplish. There is no competition as of yet… and the sooner we strike, the more training data we can leverage.

I can explain more, but rest assured I understand the B2B play here & GTM / growth is my specialty. As we all know, moats are increasingly just speed bumps - so I don't want to go into too much detail w/o an NDA.

Fair?
 
I'll add that the two most important innovations here are 1M+ token context & RAG. Memory is the Achilles' heel of LLMs. It's not solved. But I believe these two innovations enable my model for my product.
 
@colinburhart Final thing re: simplicity of the model… it's not simple per se… but there are 30-40 competitors I know of doing the same thing in a regulated market. That hamstrings them. The same insights are equally valuable to another market nobody is selling to, and it's not regulated. It's not a big-brain rocket-science play. It's simply "you're all sprinting to solve the hardest problem - but 86% good enough w/o regulatory oversight is a solution for this market."

Nobody is doing it. But they will soon. I imagine. I’d like to be the first.

Tx.
 
@bathgate After reading through your needs, my advice is to skip a technical cofounder for now and just hire some guys off Fiverr/Upwork to make you an MVP. The technology really isn't that difficult if it's going to be a flow like: Gemini/ChatGPT with RAG + uploadable data/documents.

I'd estimate the cost at around USD 500-2,000, which should be reasonable.

I personally suggest getting a technical cofounder only if that specific person makes sense as a cofounder (e.g. he has expertise/background in LLM apps). If you just need someone to program a simple MVP asap, go hire people first; only if you hit a technological roadblock and need someone who actually knows the technology well should you find the right cofounder.
 
@colinburhart I appreciate that. The nuance here is of course the performance of the model. I'd like to be able to train LoRA or similar for different use cases. Maybe I can get some Fiverr person to do that. But this is a low-volume / high-quality + customization answer play. It's also HITL (human-in-the-loop), so the downside is low for corporates, but the productivity upside is potentially huge for ICs.
 
That all said - thx for thoroughly thinking about this & I am considering your input. Shoot me a DM if you have recs on an actually capable body shop like you describe.
 
@bathgate
The nuance here is of course the performance of the model. I'd like to be able to train LoRA or similar for different use cases.

Take care of that after you get an MVP. Even then, fine-tuning via LoRA or whatever fine-tuning method can be done by a hired developer from Fiverr/Upwork; it isn't so specialized a job.
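For context on why it isn't so specialized: the core LoRA trick is just adding a trainable low-rank update to a frozen weight matrix, W + (alpha/r)·BA. A minimal numpy sketch of the math only (no training loop; libraries like Hugging Face peft wrap this for real models, and all the sizes here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

d, r = 8, 2                          # hidden size, LoRA rank (r << d)
W = rng.normal(size=(d, d))          # frozen pretrained weight
A = rng.normal(size=(r, d)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                 # trainable up-projection, init to 0
alpha = 16                           # scaling hyperparameter

def lora_forward(x):
    """Frozen path plus the low-rank update: x @ (W + (alpha/r) * B @ A).T"""
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

x = rng.normal(size=(1, d))
# With B still zero the adapter is a no-op, so outputs match the frozen model:
print(np.allclose(lora_forward(x), x @ W.T))  # True
```

Training then updates only A and B (2·r·d parameters instead of d²), which is exactly why a hired dev can run it on modest hardware.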

But this is a low-volume / high-quality + customization answer play

My advice: if you know what the output should look like, then do the work to create a really good prompt that actually produces the output you are looking for. If you can't figure it out, hire a prompt engineer from Fiverr and just say "I need it to output something like this…, please make it work with this sample data."
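As an illustration of the kind of deliverable that request would produce, here's a hypothetical template that pins down the output format and slots in the live data (every name and instruction here is made up for the example):

```python
# Hypothetical prompt template: constrain the output shape explicitly,
# then fill in the retrieved context and the user's question.
PROMPT = """You are an assistant that answers account questions.

Answer ONLY from the context below. If the answer is not in the
context, say "Not found in the provided documents."

Output format (follow exactly):
- Answer: <one sentence>
- Source: <document title>

Context:
{context}

Question: {question}"""

def build_prompt(context, question):
    """Fill the template's placeholders with the live data."""
    return PROMPT.format(context=context, question=question)

p = build_prompt("Renewal discounts cap at 15%.", "What is the max renewal discount?")
print("Question: What is the max renewal discount?" in p)  # True
```

The fixed "Output format" section is what makes responses parseable downstream; that, plus a couple of sample inputs, is most of the spec a hired prompt engineer would need.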

It’s also HITL so the downside is low for corporates / but productivity is huge potentially for ICs

I don't know what you mean here, but that's okay; my advice is still the same.
 
