<h1 class="title is-1">JADE: The First Multi-Model Chatbot</h1>
<br><br>
<p>The world of Large Language Models (LLMs) is vast and exciting, with each model having unique strengths and
  weaknesses. However, this variety presents a challenge: using all available LLMs is practically impossible due to
  cost and complexity. Wouldn't it be incredible to have an easy way to experiment with different models, compare
  their responses, and even choose the best model for a specific task?</p>

<p>This is precisely why JADE was built. With a focus on simplicity, JADE leaves out unnecessary features like file or
  image uploads and lets you concentrate on interacting with a variety of LLMs. This streamlined approach unlocks the
  potential to compare models, leverage their individual strengths, and even mitigate biases through multi-model
  conversations.</p>

<h2>Multi-Models</h2>

<p>JADE is the first Multi-Model chatbot. The idea is to use multiple models within the same conversation. Here are the
  key points:</p>
<ol>
  <li>When asking a question, you can query multiple models and compare their responses to choose the best one.</li>
  <li>The selected response can be used as the basis for the next message across all models.</li>
</ol>

<p>For example, a response from GPT-4 Omni can be used by Claude Haiku in the next interaction.</p>

<p>This approach offers several benefits. First, it ensures you always have access to the best possible response by
  leveraging the strengths of different models. Second, it provides a more comprehensive understanding of a topic by
  considering various perspectives. Finally, using responses from one model as context for another can lead to more
  engaging and insightful conversations.</p>
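<p>For the curious, here is a minimal sketch of what this flow could look like in code. It assumes a hypothetical
  <code>askModel(model, messages)</code> helper that wraps whichever provider API the chosen model belongs to; it
  illustrates the idea rather than JADE's actual implementation.</p>
<pre><code>// Ask the same question to two models, keep the reply you prefer,
// then continue the conversation with that reply as shared context.
async function multiModelTurn() {
  const history = [{ role: "user", content: "Summarize the pros and cons of solar power." }];

  // Hypothetical helper: sends `messages` to the given model and returns its reply text.
  const [gptReply, haikuReply] = await Promise.all([
    askModel("gpt-4o", history),
    askModel("claude-3-haiku", history),
  ]);

  // Pick whichever answer you like best (here, simply the first one).
  history.push({ role: "assistant", content: gptReply });

  // The follow-up question can go to a different model, which now sees
  // the chosen reply as part of the conversation.
  history.push({ role: "user", content: "Turn that summary into three bullet points." });
  return askModel("claude-3-haiku", history);
}
</code></pre>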

<a class="button is-primary mt-2 mb-2" href="/signin">
  Try JADE now for free!
</a>

<br><br>

<h2>More information</h2>
<ul>
  <li>
    <h3>Variety of AI models from different providers.<button class="button ml-2 is-small is-primary is-outlined"
        onclick="toggleDetails('all-models-details')">
        <span class="icon is-small">
          <i class="fa-solid fa-info"></i>
        </span>
      </button></h3>
    <p id="all-models-details" style="display:none;">With JADE, you can easily switch between models like GPT-3.5 or
      GPT-4o, Gemini, Llama, Mistral, Claude, and more, and even use custom endpoints. This means you can choose the
      best model for your specific needs, whether that's general knowledge, creative writing, or technical expertise.
      Having access to multiple models allows you to take advantage of their unique strengths and weaknesses,
      ensuring you get the most accurate and relevant responses. (See all available models in the last
      section.)<br><br></p>
  </li>
  <li>
    <h3>Multiple models in a single conversation.<button class="button ml-2 is-small is-primary is-outlined"
        onclick="toggleDetails('multi-models-details')">
        <span class="icon is-small">
          <i class="fa-solid fa-info"></i>
        </span>
      </button></h3>
    <p id="multi-models-details" style="display:none;">You can ask a question and receive responses from several
      models at once, enabling you to compare their answers and choose the most suitable one. This feature is
      particularly useful for complex queries where different models might offer unique insights or
      solutions.<br><br></p>
  </li>
  <li>
    <h3>Duplicate models in a single conversation.<button class="button ml-2 is-small is-primary is-outlined"
        onclick="toggleDetails('same-models-details')">
        <span class="icon is-small">
          <i class="fa-solid fa-info"></i>
        </span>
      </button></h3>
    <p id="same-models-details" style="display:none;">The way JADE works is that you create custom bots. Each bot has
      a name, a model, a temperature, and a system prompt. You can create as many bots as you want and select as many
      as you like to answer each question. For example, you could create several bots that use the same model but
      respond in different languages.<br><br></p>
  </li>
  <li>
    <h3>Reduce Hallucination.<button class="button is-small ml-2 is-primary is-outlined"
        onclick="toggleDetails('reduce-hallucination-details')"><span class="icon is-small">
          <i class="fa-solid fa-info"></i>
        </span>
      </button></h3>
    <p id="reduce-hallucination-details" style="display:none;">AI models sometimes generate information that is
      inaccurate or misleading, a phenomenon known as "hallucination." By letting you compare responses from multiple
      models, JADE reduces the impact of any single model's biases, making the answers you receive more reliable and
      trustworthy.<br><br></p>
  </li>
  <li>
    <h3>Pay only for what you use.<button class="button ml-2 is-small is-primary is-outlined"
        onclick="toggleDetails('flexible-pricing-details')">
        <span class="icon is-small">
          <i class="fa-solid fa-info"></i>
        </span>
      </button></h3>
    <p id="flexible-pricing-details" style="display:none;">JADE uses provider APIs, so you get access to free credits
      or free tiers depending on the provider (see the next section). This is particularly beneficial for users who
      may not need to use the chatbot extensively. Once the free credits are used up, you pay based on the length of
      your message and of the generated response, measured in tokens (a token is roughly 3 characters).<br><br>JADE
      itself starts with a free tier that allows you to send up to 200 messages a month. For more intensive use, you
      can upgrade for just $0.95/month. For example, you can use Llama 3 70b for free indefinitely by pairing JADE
      with a Groq Cloud account.<br><br></p>
  </li>
  <li>
    <h3>All providers.<button class="button ml-2 is-small is-primary is-outlined"
        onclick="toggleDetails('provider-details')">
        <span class="icon is-small">
          <i class="fa-solid fa-info"></i>
        </span>
      </button></h3>
    <div id="provider-details" style="display:none; overflow-x: hidden;">
      <div class="columns">
        <div class="column is-two-thirds">
          <strong>Providers available:</strong>
        </div>
        <div class="column">
          <strong>Models available:</strong>
        </div>
      </div>
<ul>
        <li>
          <div class="columns">
            <div class="column is-two-thirds">
              <strong>OpenAI</strong> - OpenAI offers $5 in credits when you create an API account,
              enough for around 10 000 small questions to GPT-4 Omni or 100 000 to GPT-3.5 Turbo.
            </div>
            <div class="column">
              <ul>
                <li>
                  <strong>GPT 4 Omni</strong>
                </li>
                <li>
                  <strong>GPT 4 Turbo</strong>
                </li>
                <li>
                  <strong>GPT 4</strong>
                </li>
                <li>
                  <strong>GPT 3.5 Turbo</strong>
                </li>
              </ul>
            </div>
          </div>
        </li>
        <br>
        <li>
          <div class="columns">
            <div class="column is-two-thirds">
              <strong>Anthropic</strong> - Anthropic offers $5 in credits when you create an API
              account, enough for around 2 000 small questions to Claude 3 Opus or 120 000 to Claude 3 Haiku.
            </div>
            <div class="column">
              <ul>
                <li>
                  <strong>Claude 3 Opus</strong>
                </li>
                <li>
                  <strong>Claude 3 Sonnet</strong>
                </li>
                <li>
                  <strong>Claude 3 Haiku</strong>
                </li>
              </ul>
            </div>
          </div>
        </li>
        <br>
        <li>
          <div class="columns">
            <div class="column is-two-thirds">
              <strong>Mistral</strong> - Mistral does not offer free credits.
            </div>
            <div class="column">
              <ul>
                <li>
                  <strong>Mixtral 8x22b</strong>
                </li>
                <li>
                  <strong>Mixtral 8x7b</strong>
                </li>
                <li>
                  <strong>Mistral 7b</strong>
                </li>
                <li>
                  <strong>Mistral Large</strong>
                </li>
                <li>
                  <strong>Mistral Small</strong>
                </li>
                <li>
                  <strong>Codestral</strong>
                </li>
              </ul>
            </div>
          </div>
        </li>
        <br>
        <li>
          <div class="columns">
            <div class="column is-two-thirds">
              <strong>Groq</strong> - Groq offers a free tier with limits on tokens and requests per
              minute. The rate is plenty for a chatbot: 30 messages and between 6 000 and 30 000 tokens
              per minute. Pay-per-token pricing is coming soon.
            </div>
            <div class="column">
              <ul>
                <li>
                  <strong>Llama 3 70b</strong>
                </li>
                <li>
                  <strong>Llama 3 8b</strong>
                </li>
                <li>
                  <strong>Mixtral 8x7b</strong>
                </li>
                <li>
                  <strong>Gemma 7b</strong>
                </li>
              </ul>
            </div>
          </div>
        </li>
        <br>
        <li>
          <div class="columns">
            <div class="column is-two-thirds">
              <strong>Google</strong> - Like Groq, Google offers a free tier with limits on tokens and
              requests per minute. The rate is plenty for a chatbot: 15 messages and 1 000 000 tokens
              per minute. Pay-per-token pricing is also available.
            </div>
            <div class="column">
              <ul>
                <li>
                  <strong>Gemini 1.5 pro</strong>
                </li>
                <li>
                  <strong>Gemini 1.5 flash</strong>
                </li>
                <li>
                  <strong>Gemini 1.0 pro</strong>
                </li>
              </ul>
            </div>
          </div>
        </li>
        <br>
        <li>
          <div class="columns">
            <div class="column is-two-thirds">
              <strong>Perplexity</strong> - Perplexity does not offer a free tier or credits. Perplexity
              offers what it calls 'online' models, which can search the web, so you can ask about the
              current weather, for example. These models carry an additional cost of $5 per 1 000 requests.
            </div>
            <div class="column">
              <ul>
                <li>
                  <strong>Sonar Large</strong>
                </li>
                <li>
                  <strong>Sonar Large Online</strong>
                </li>
                <li>
                  <strong>Sonar Small</strong>
                </li>
                <li>
                  <strong>Sonar Small Online</strong>
                </li>
                <li>
                  <strong>Llama 70b</strong>
                </li>
                <li>
                  <strong>Llama 7b</strong>
                </li>
                <li>
                  <strong>Mixtral 8x7b</strong>
                </li>
              </ul>
            </div>
          </div>
        </li>
        <br>
        <li>
          <strong>Hugging face</strong> - You can also use custom endpoints. Only Hugging Face has been
          tested, but in theory, as long as the key is valid and the endpoint uses the OpenAI API format,
          it should work (see the request sketch after this list). This part still needs some testing and
          improvement.
        </li>
        <br>
      </ul>
    </div>
  </li>
</ul>
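<p>For reference, here is a minimal sketch of the kind of request a custom endpoint is expected to accept. It assumes
  the endpoint exposes an OpenAI-compatible <code>/v1/chat/completions</code> route; the URL, key, and model name
  below are placeholders.</p>
<pre><code>// Minimal OpenAI-compatible chat completion call (placeholder URL, key, and model).
async function testCustomEndpoint() {
  const response = await fetch("https://YOUR-ENDPOINT.example.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": "Bearer YOUR_API_KEY",
    },
    body: JSON.stringify({
      model: "your-model-name",
      messages: [{ role: "user", content: "Hello!" }],
    }),
  });

  // An OpenAI-style response carries the reply in choices[0].message.content.
  const data = await response.json();
  return data.choices[0].message.content;
}
</code></pre>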

<script>
  // Show or hide the details block with the given id.
  function toggleDetails(id) {
    var element = document.getElementById(id);
    if (element.style.display === "none") {
      element.style.display = "block";
    } else {
      element.style.display = "none";
    }
  }
</script>