LLM Search
1️⃣ Real-time Web Search: Breaking LLM Time Limitations for More Accurate and Reliable Outputs
We’ve enhanced OpenAI and Gemini series models with the ability to access the latest information from the web, helping you:
- ✅ Access Latest Information: Get real-time updates on current events, latest research, or live data
- ✅ Eliminate Knowledge Gaps: Overcome the time limitations of LLM training data by accessing post-training information
- ✅ Reduce Hallucinations: Provide fact-based answers through real-time web searches, significantly reducing AI confabulations
- ✅ Improve Decision Quality: Make more confident decisions based on analysis and recommendations grounded in current facts
Supported Models: Currently supporting OpenAI and Gemini model series with two integration methods:
1. Models with Native Search Capabilities
Gemini Series (Grounding with Google Search):
- gemini-2.0-pro-exp-02-05-search
- gemini-2.0-flash-exp-search
- gemini-2.0-flash-search
OpenAI Series (Bing search):
- gpt-4o-search-preview
- gpt-4o-mini-search-preview
2. Parameter-Based Support
Simply add the parameter `web_search_options={}` to enable web connectivity for all Gemini and OpenAI models.
The search fee for Gemini models is $3.5 per thousand searches.
Usage Guide
Before using, run `pip install -U openai` to upgrade the `openai` package.
Example:
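A minimal sketch of a search-enabled request. The gateway base URL, API key, and model id below are placeholders; recent versions of the `openai` package accept `web_search_options` as a keyword argument, and the raw HTTP request shown here carries the same field in the request body.

```python
import json
import urllib.request

# Placeholder gateway endpoint and key -- substitute your own values.
API_BASE = "https://api.example.com/v1"
API_KEY = "sk-..."

# Adding "web_search_options": {} to the request body enables web search;
# an empty object requests the default search behavior.
payload = {
    "model": "gemini-2.0-flash",
    "web_search_options": {},
    "messages": [
        {"role": "user", "content": "What are this week's top AI headlines?"}
    ],
}

req = urllib.request.Request(
    f"{API_BASE}/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",
    },
)
# The actual call requires a valid key:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

The same request works through the `openai` client by passing `web_search_options={}` to `chat.completions.create`.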
2️⃣ Smart Surfing: Allowing AI to Explore the Internet Freely
By appending `:surfing` to the model id, any large language model can be equipped with search capabilities.
- Simply append the suffix, no complex integration is required
- By default, this method forwards the user’s request to the Tavily search service; the LLM then references the search results when generating its response
- Search fee: $0.006 per search
- The fee is currently deducted directly from the “credit change” balance; the “log detail” does not itemize the search fee yet, but it will be shown there in the future
The model id can be copied from the model gallery.
Example:
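A small sketch of the suffix rule. The model id `deepseek-chat` is a hypothetical example; copy a real id from the model gallery.

```python
def with_surfing(model_id: str) -> str:
    """Append the :surfing suffix unless the model id already has it."""
    return model_id if model_id.endswith(":surfing") else model_id + ":surfing"

# Hypothetical model id -- copy a real one from the model gallery.
payload = {
    "model": with_surfing("deepseek-chat"),
    "messages": [
        {"role": "user", "content": "Summarize this week's LLM releases."}
    ],
}
print(payload["model"])  # -> deepseek-chat:surfing
# POST this payload to the gateway's /chat/completions endpoint as usual.
```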
API Response Example:
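An illustrative response in the standard OpenAI chat-completion format. All values are placeholders, and any search-specific fields (such as citations) depend on the gateway and are omitted here.

```json
{
  "id": "chatcmpl-...",
  "object": "chat.completion",
  "created": 1740000000,
  "model": "deepseek-chat:surfing",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "According to the latest search results, ..."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 512,
    "completion_tokens": 128,
    "total_tokens": 640
  }
}
```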