<h1 class="title is-1">JADE: Simple Multi-Model Chatbot</h1>
<br /><br />
<p>
  I often use LLMs and quickly found myself asking GPT-4, Gemini and Claude the
  same question. I wanted to be able to ask one question to multiple models,
  compare their answers and pick the best one. So I built JADE.
</p>

<p>
  JADE is a simple Multi-Model chatbot. The idea is to use multiple models
  within the same conversation. Here are the key points:
</p>
<ol>
  <li>
    When asking a question, you can use multiple models and compare their
    responses to choose the best one.
  </li>
  <li>
    The selected response can be used for the next message across all models.
  </li>
</ol>
<p>For example, a response from GPT-4 Omni can be used by Claude Haiku.</p>
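<p>
  Conceptually, one turn works something like the sketch below (an illustration
  only, not JADE's actual code; <code>askModel</code> and <code>pickBest</code>
  are hypothetical helpers standing in for the provider calls and the user's
  choice in the UI):
</p>
<pre><code>// Conceptual sketch only: ask several models the same question with a shared
// history, let the user pick the best answer, then share it with every model.
const history = [{ role: "user", content: "Explain CORS in one paragraph." }];

// Hypothetical helper that sends the shared history to one model.
const answers = await Promise.all(
  ["gpt-4o", "claude-3-haiku"].map((model) => askModel(model, history))
);

// Hypothetical helper: the user picks the answer they prefer. It is appended
// to the shared history, so every model sees it on the next turn.
const best = pickBest(answers);
history.push({ role: "assistant", content: best });</code></pre>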
<a class="button is-primary mt-2 mb-2" href="/signin">
|
|
Try JADE now for free!
|
|
</a>
|
|
|
|
<br /><br />
|
|
|
|
<h2>More information</h2>
<ul>
  <li>
    <h3>
      Get access to all models.<button
        class="button ml-2 is-small is-primary is-outlined"
        onclick="toggleDetails('all-models-details')"
      >
        <span class="icon is-small"><i class="fa-solid fa-info"></i></span>
      </button>
    </h3>
    <p id="all-models-details" style="display: none">
      With JADE, you can easily switch between models like GPT-3.5 or GPT-4
      Omni, Gemini, Llama, Mistral, Claude, and more, or even a custom
      endpoint. <br /><br />This means you can choose the best model for your
      specific needs, whether it's general knowledge, creative writing, or
      technical expertise. Having access to multiple models allows you to take
      advantage of their unique strengths and weaknesses, ensuring you get the
      most accurate and relevant responses. (See all available models in the
      last section.)<br /><br />
    </p>
  </li>
  <li>
    <h3>
      Get the best answer from multiple models.<button
        class="button ml-2 is-small is-primary is-outlined"
        onclick="toggleDetails('multi-models-details')"
      >
        <span class="icon is-small"><i class="fa-solid fa-info"></i></span>
      </button>
    </h3>
    <p id="multi-models-details" style="display: none">
      You can ask a question and receive responses from several models at once,
      enabling you to compare their answers and choose the most suitable one.
      <br /><br />This feature is particularly useful for complex queries where
      different models might offer unique insights or solutions.<br /><br />
    </p>
  </li>
  <li>
    <h3>
      Even from the same model.<button
        class="button ml-2 is-small is-primary is-outlined"
        onclick="toggleDetails('same-models-details')"
      >
        <span class="icon is-small"><i class="fa-solid fa-info"></i></span>
      </button>
    </h3>
    <p id="same-models-details" style="display: none">
      The core feature of JADE is the bots. Each bot has a name, a model, a
      temperature and a system prompt. <br /><br />You can create as many bots
      as you want and select any number of them to answer each question. For
      example, you can create several bots that use the same model but respond
      in different languages.<br /><br />
    </p>
  </li>
  <li>
    <h3>
      Reduce hallucination.<button
        class="button is-small ml-2 is-primary is-outlined"
        onclick="toggleDetails('reduce-hallucination-details')"
      >
        <span class="icon is-small"><i class="fa-solid fa-info"></i></span>
      </button>
    </h3>
    <p id="reduce-hallucination-details" style="display: none">
      AI models sometimes generate information that is inaccurate or misleading,
      a phenomenon known as "hallucination." <br /><br />By letting you compare
      answers from multiple models, JADE reduces the impact of any single
      model's bias, making the responses you receive more reliable and
      trustworthy.<br /><br />
    </p>
  </li>
  <li>
    <h3>
      Pay only for what you use, or not at all.<button
        class="button ml-2 is-small is-primary is-outlined"
        onclick="toggleDetails('flexible-pricing-details')"
      >
        <span class="icon is-small"><i class="fa-solid fa-info"></i></span>
      </button>
    </h3>
    <p id="flexible-pricing-details" style="display: none">
      JADE uses the providers' APIs, so you get access to free credits or free
      tiers depending on the provider (see the next section). This is
      particularly beneficial for users who may not need to use the chatbot
      extensively. Once the free credits are used, you pay based on the length
      of your message and of the generated response, measured in tokens (a
      token is roughly 3-4 characters). Groq and Google also offer free tiers
      that are enough for conversation.
    </p>
  </li>
  <li>
    <h3>
      All providers and models.<button
        class="button ml-2 is-small is-primary is-outlined"
        onclick="toggleDetails('provider-details')"
      >
        <span class="icon is-small"><i class="fa-solid fa-info"></i></span>
      </button>
    </h3>
<div id="provider-details" style="display: none; overflow-x: hidden">
|
|
<div class="columns">
|
|
<div class="column is-two-thirds">
|
|
<strong>OpenAI</strong> - OpenAI offer 5$ credits when creating an API
|
|
account. Around 10 000 small question to GPT-4 Omni or 100 000 to
|
|
GPT-3.5 Turbo.
|
|
</div>
|
|
<div class="column">
|
|
<ul>
|
|
<li>
|
|
<strong>GPT 4 Omni</strong>
|
|
</li>
|
|
<li>
|
|
<strong>GPT 4 Turbo</strong>
|
|
</li>
|
|
<li>
|
|
<strong>GPT 4</strong>
|
|
</li>
|
|
<li>
|
|
<strong>GPT 3.5 Turbo</strong>
|
|
</li>
|
|
</ul>
|
|
</div>
|
|
</div>
|
|
<br />
|
|
<div class="columns">
|
|
<div class="column is-two-thirds">
|
|
<strong>Anthropic</strong> - Anthropic offer 5$ credits when creating
|
|
an API account. Around 2 000 small question to Claude 3 Opus or 120
|
|
000 to Claude Haiku.
|
|
</div>
|
|
<div class="column">
|
|
<ul>
|
|
<li>
|
|
<strong>Claude 3 Opus</strong>
|
|
</li>
|
|
<li>
|
|
<strong>Claude 3.5 Sonnet</strong>
|
|
</li>
|
|
<li>
|
|
<strong>Claude 3 Haiku</strong>
|
|
</li>
|
|
</ul>
|
|
</div>
|
|
</div>
|
|
<br />
|
|
<div class="columns">
|
|
<div class="column is-two-thirds">
|
|
<strong>Mistral</strong> - Mistral do not offer free credits.
|
|
</div>
|
|
<div class="column">
|
|
<ul>
|
|
<li>
|
|
<strong>Mixtral 8x22b</strong>
|
|
</li>
|
|
<li>
|
|
<strong>Mixtral 8x7b</strong>
|
|
</li>
|
|
<li>
|
|
<strong>Mistral 7b</strong>
|
|
</li>
|
|
<li>
|
|
<strong>Mistral Large</strong>
|
|
</li>
|
|
<li>
|
|
<strong>Mistral Small</strong>
|
|
</li>
|
|
<li>
|
|
<strong>Codestral</strong>
|
|
</li>
|
|
</ul>
|
|
</div>
|
|
</div>
|
|
<br />
|
|
<div class="columns">
|
|
<div class="column is-two-thirds">
|
|
<strong>Groq</strong> - Groq offer a free tier with limit of tokens
|
|
and request per minutes. The rate is plenty for a chatbot. 30 messages
|
|
and between 6 000 and 30 000 tokens per minute. Per tokens coming
|
|
soon.
|
|
</div>
|
|
<div class="column">
|
|
<ul>
|
|
<li>
|
|
<strong>Llama 3 70b</strong>
|
|
</li>
|
|
<li>
|
|
<strong>Llama 3 8b</strong>
|
|
</li>
|
|
<li>
|
|
<strong>Mixtral 8x7b</strong>
|
|
</li>
|
|
<li>
|
|
<strong>Gemma2 9b</strong>
|
|
</li>
|
|
<li>
|
|
<strong>Gemma 7b</strong>
|
|
</li>
|
|
|
|
</ul>
|
|
</div>
|
|
</div>
|
|
<br />
|
|
<div class="columns">
|
|
<div class="column is-two-thirds">
|
|
<strong>Google</strong> - Like Groq, Google offer a free tier with
|
|
limit of tokens and request per minutes. The rate is plenty for a
|
|
chatbot. 15 messages and 1 000 000 tokens per minute. Per tokens also
|
|
available.
|
|
</div>
|
|
<div class="column">
|
|
<ul>
|
|
<li>
|
|
<strong>Gemini 1.5 pro</strong>
|
|
</li>
|
|
<li>
|
|
<strong>Gemini 1.5 flash</strong>
|
|
</li>
|
|
<li>
|
|
<strong>Gemini 1.0 pro</strong>
|
|
</li>
|
|
</ul>
|
|
</div>
|
|
</div>
|
|
<br />
|
|
<div class="columns">
|
|
<div class="column is-two-thirds">
|
|
<strong>Perplexity</strong> - Perplexity do not offer a free tier or
|
|
credits. Perplexity offer what they call 'online' models that can
|
|
search online. So you can ask for the current weather for example.
|
|
Those models have additional cost of 5$ per 1 000 requests.
|
|
</div>
|
|
<div class="column">
|
|
<ul>
|
|
<li>
|
|
<strong>Sonar Large</strong>
|
|
</li>
|
|
<li>
|
|
<strong>Sonar Large Online</strong>
|
|
</li>
|
|
<li>
|
|
<strong>Sonar Small</strong>
|
|
</li>
|
|
<li>
|
|
<strong>Sonar Small Online</strong>
|
|
</li>
|
|
<li>
|
|
<strong>Llama 70b</strong>
|
|
</li>
|
|
<li>
|
|
<strong>Llama 7b</strong>
|
|
</li>
|
|
<li>
|
|
<strong>Mixtral 8x7b</strong>
|
|
</li>
|
|
</ul>
|
|
</div>
|
|
</div>
|
|
<br />
|
|
<div class="columns">
|
|
<div class="column is-two-thirds">
|
|
<strong>Fireworks</strong> - Fireworks AI offer 1$ of free credits
|
|
when creating an account. Firework AI have a lot of open source
|
|
models. I may add fine tuned models in the future.
|
|
</div>
|
|
<div class="column">
|
|
<ul>
|
|
<li>
|
|
<strong>FireLLaVA-13B</strong>
|
|
</li>
|
|
<li>
|
|
<strong>Mixtral MoE 8x7B Instruct</strong>
|
|
</li>
|
|
<li>
|
|
<strong>Mixtral MoE 8x22B Instruct</strong>
|
|
</li>
|
|
<li>
|
|
<strong>Llama 3 70B Instruct</strong>
|
|
</li>
|
|
<li>
|
|
<strong>Bleat</strong>
|
|
</li>
|
|
<li>
|
|
<strong>Chinese Llama 2 LoRA 7B</strong>
|
|
</li>
|
|
<li>
|
|
<strong>DBRX Instruct</strong>
|
|
</li>
|
|
<li>
|
|
<strong>Gemma 7B Instruct</strong>
|
|
</li>
|
|
<li>
|
|
<strong>Hermes 2 Pro Mistral 7b</strong>
|
|
</li>
|
|
<li>
|
|
<strong>Japanese StableLM Instruct Beta 70B</strong>
|
|
</li>
|
|
<li>
|
|
<strong>Japanese Stable LM Instruct Gamma 7B</strong>
|
|
</li>
|
|
<li>
|
|
<strong>Llama 2 13B French</strong>
|
|
</li>
|
|
<li>
|
|
<strong>Llama2 13B Guanaco QLoRA GGML</strong>
|
|
</li>
|
|
<li>
|
|
<strong>Llama 7B Summarize</strong>
|
|
</li>
|
|
<li>
|
|
<strong>Llama 2 13B</strong>
|
|
</li>
|
|
<li>
|
|
<strong>Llama 2 13B Chat</strong>
|
|
</li>
|
|
<li>
|
|
<strong>Llama 2 70B Chat</strong>
|
|
</li>
|
|
<li>
|
|
<strong>Llama 2 7B</strong>
|
|
</li>
|
|
<li>
|
|
<strong>Llama 2 7B Chat</strong>
|
|
</li>
|
|
<li>
|
|
<strong>Llama 3 70B Instruct (HF version)</strong>
|
|
</li>
|
|
<li>
|
|
<strong>Llama 3 8B (HF version)</strong>
|
|
</li>
|
|
<li>
|
|
<strong>Llama 3 8B Instruct</strong>
|
|
</li>
|
|
<li>
|
|
<strong>Llama 3 8B Instruct (HF version)</strong>
|
|
</li>
|
|
<li>
|
|
<strong>LLaVA V1.6 Yi 34B</strong>
|
|
</li>
|
|
<li>
|
|
<strong>Mistral 7B</strong>
|
|
</li>
|
|
<li>
|
|
<strong>Mistral 7B Instruct</strong>
|
|
</li>
|
|
<li>
|
|
<strong>Mistral 7B Instruct v0.2</strong>
|
|
</li>
|
|
<li>
|
|
<strong>Mistral 7B Instruct v0p3</strong>
|
|
</li>
|
|
<li>
|
|
<strong>Mixtral MoE 8x22B</strong>
|
|
</li>
|
|
<li>
|
|
<strong>Mixtral MoE 8x22B Instruct (HF version)</strong>
|
|
</li>
|
|
<li>
|
|
<strong>Mixtral MoE 8x7B</strong>
|
|
</li>
|
|
<li>
|
|
<strong>Mixtral MoE 8x7B Instruct (HF version)</strong>
|
|
</li>
|
|
<li>
|
|
<strong>MythoMax L2 13b</strong>
|
|
</li>
|
|
<li>
|
|
<strong>Nous Hermes 2 - Mixtral 8x7B - DPO (fp8)</strong>
|
|
</li>
|
|
<li>
|
|
<strong>Phi 3 Mini 128K Instruct</strong>
|
|
</li>
|
|
<li>
|
|
<strong>Phi 3 Vision 128K Instruct</strong>
|
|
</li>
|
|
<li>
|
|
<strong>Qwen1.5 72B Chat</strong>
|
|
</li>
|
|
<li>
|
|
<strong>StableLM 2 Zephyr 1.6B</strong>
|
|
</li>
|
|
<li>
|
|
<strong>StableLM Zephyr 3B</strong>
|
|
</li>
|
|
<li>
|
|
<strong>StarCoder 15.5B</strong>
|
|
</li>
|
|
<li>
|
|
<strong>StarCoder 7B</strong>
|
|
</li>
|
|
<li>
|
|
<strong>Traditional Chinese Llama2</strong>
|
|
</li>
|
|
<li>
|
|
<strong>Capybara 34B</strong>
|
|
</li>
|
|
<li>
|
|
<strong>Yi Large</strong>
|
|
</li>
|
|
</ul>
|
|
</div>
|
|
</div>
|
|
<br />
|
|
      <strong>Hugging Face</strong> - You can also use custom endpoints. I have
      only tested Hugging Face, but in theory, as long as the key is valid and
      the endpoint uses the OpenAI API format, it should work (see the sketch
      below). This part still needs some testing and improvement.
      <br />
      <br />
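      <p>
        A minimal sketch of the kind of request an OpenAI-compatible endpoint
        expects (the URL, API key and model name below are placeholders, not
        real values):
      </p>
      <pre><code>// Sketch only: standard OpenAI-style chat completion request to a custom endpoint.
const response = await fetch("https://YOUR-ENDPOINT.example.com/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: "Bearer YOUR_API_KEY", // placeholder key
  },
  body: JSON.stringify({
    model: "your-model-name", // placeholder model id
    messages: [{ role: "user", content: "Hello!" }],
  }),
});
const data = await response.json();
console.log(data.choices[0].message.content);</code></pre>
      <br />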
      <strong>Goose AI</strong> - Chat API will be available soon.
      <br />
    </div>
  </li>
</ul>

<script>
  // Toggle the visibility of one of the collapsible detail sections above.
  function toggleDetails(id) {
    var element = document.getElementById(id);
    if (element.style.display === "none") {
      element.style.display = "block";
    } else {
      element.style.display = "none";
    }
  }
</script>