Get pinpoint accuracy with just 1 line of code using our semantic filter technology.
Vector search compresses each document into a single vector, which causes information loss. Combined with a lack of context about the query, this leads to suboptimal search results.
To fix this, our semantic filter analyzes the query and documents together, using multiple models to minimize information loss and hallucinations. The result is significantly higher accuracy than vector or hybrid search alone.
Pongo sits right on top of your existing pipeline, whether you use a vector database or Elasticsearch. Just send us your top 100-200 search results and we’ll return the most relevant ones.
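As a rough sketch of what "send us your top results" could look like in practice: the function below packages a query and your existing top-k hits into one request body. The field names, the `top_k` default, and the endpoint URL in the comment are all illustrative assumptions, not Pongo's documented API.

```python
# Hypothetical integration sketch: hand Pongo the top results from your
# existing vector DB or Elasticsearch pipeline for semantic filtering.
# Field names, top_k, and the endpoint URL are assumptions for illustration.

def build_filter_request(query, candidates, top_k=200):
    """Package the query and the top 100-200 hits from your current
    search stack into a single request body (assumed shape)."""
    return {
        "query": query,
        "docs": [{"id": c["id"], "text": c["text"]} for c in candidates[:top_k]],
    }

# Example: results as they might come back from your current search stack.
candidates = [
    {"id": "doc-1", "text": "Pongo sits on top of your existing pipeline."},
    {"id": "doc-2", "text": "Unrelated text about something else."},
]
payload = build_filter_request("how does Pongo integrate?", candidates)

# In production you would POST this payload to the filtering endpoint, e.g.:
# response = requests.post("https://api.example.com/filter", json=payload)  # hypothetical URL
```

The key design point is that Pongo is a post-processing step: your retrieval stack stays untouched, and only the final candidate list passes through the filter.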
We use a collection of different model types and retrieval methods in conjunction with one another, combining their results into a final score for each document.
This protects against the shortcomings and hallucinations of any single retrieval method, so we return the most relevant results every time.
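To give a flavor of how scores from several retrieval methods can be merged, here is a minimal sketch using reciprocal rank fusion (RRF), a standard fusion technique. This is an illustrative stand-in only: the source does not describe Pongo's actual scoring, just the general idea that no single ranker's failure should dominate.

```python
# Illustrative score fusion via reciprocal rank fusion (RRF) -- a standard
# technique for merging rankings from multiple retrieval methods. This is
# NOT Pongo's actual algorithm, only a sketch of the general idea.

from collections import defaultdict

def reciprocal_rank_fusion(rankings, k=60):
    """rankings: list of ranked doc-id lists, one per retrieval method.
    Returns doc ids sorted by fused score, best first."""
    scores = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)  # earlier rank -> larger share
    return sorted(scores, key=scores.get, reverse=True)

# Two methods partially disagree; fusion favors the doc both rank highly.
vector_ranking = ["a", "b", "c"]
keyword_ranking = ["b", "c", "a"]
fused = reciprocal_rank_fusion([vector_ranking, keyword_ranking])
# "b" wins: it is ranked 2nd by one method and 1st by the other.
```

Because each method only contributes a bounded share per document, a single ranker that hallucinates a match cannot push an irrelevant document to the top on its own.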
Yes, Pongo can be deployed in a VPC. Just book a call with us, and we'll find the best option for you.
The Deploy tier takes 600-650 ms for 100 documents of 512 tokens each, versus 350-400 ms on the Lightning tier. By default, requests are routed to us-west-2 (Oregon); please contact us if you need a deployment in another region.
Yes. Pongo operates only at runtime: we store no data, and no data leaves our VPC in AWS. We are in the process of obtaining SOC 2 compliance.
Yes, though fine-tuning Pongo is a complex process, since we utilize multiple models and it requires a non-trivial amount of quality data samples. We do offer fine-tuned models to enterprise customers.