Better welcome page
commit 0355cf374e
parent b12c6d4ea4
@@ -1,2 +1,3 @@
data
wasm
dbschema

Chat.go
@@ -346,15 +346,19 @@ func generateEnterKeyChatHTML() string {
<li>For 1 large text, like a PDF with 30,000 characters (60-120 pages), you would pay around $0.05 per message for GPT-4o or $0.005 for GPT-3.5 Turbo.</li>
</ul>
<p>Remember, prices and token limits may vary depending on the provider and the specific LLM you're using.</p>
<h2>Start for free</h2>
<p>Groq and Google offer enough free messages per minute for a conversation, allowing you to use their models with JADE for free.</p>
<p>OpenAI and Anthropic offer $5 of free credits when you create an account, so you can try JADE with their models for free.</p>
<h2>Get a key</h2>
<p>To get a key and learn more about the different LLM providers and their offerings, check out their websites:</p>
<ul>
<li><a href="https://platform.openai.com/docs/overview" target="_blank">OpenAI</a></li>
<li><a href="https://console.anthropic.com/" target="_blank">Anthropic</a></li>
<li><a href="https://console.groq.com/login" target="_blank">Groq</a></li>
<li><a href="https://console.mistral.ai/" target="_blank">MistralAI</a></li>
<li><a href="https://console.mistral.ai/" target="_blank">Mistral AI</a></li>
<li><a href="https://aistudio.google.com" target="_blank">Google</a></li>
<li><a href="https://www.perplexity.ai/" target="_blank">Perplexity</a></li>
<li><a href="https://fireworks.ai/" target="_blank">Fireworks AI</a></li>
</ul>
`
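The per-message price in the PDF example above follows from the rule of thumb stated later on this page (a token is roughly 3 characters) multiplied by a per-million-token rate. A minimal Go sketch of that back-of-envelope estimate; the estimateCost helper and the $5 / $0.50 per-million-token rates are illustrative assumptions, not part of Chat.go:

```go
package main

import "fmt"

// estimateCost is a hypothetical helper: it approximates the input cost of a
// message from its character count, assuming roughly 3 characters per token.
// The per-million-token rates passed in below are illustrative, not official prices.
func estimateCost(chars int, dollarsPerMillionTokens float64) float64 {
	tokens := float64(chars) / 3.0 // rough rule of thumb used on the welcome page
	return tokens / 1_000_000 * dollarsPerMillionTokens
}

func main() {
	const pdfChars = 30000 // the 60-120 page PDF example from the welcome page
	fmt.Printf("GPT-4o:        ~$%.3f\n", estimateCost(pdfChars, 5.0))  // ≈ $0.05
	fmt.Printf("GPT-3.5 Turbo: ~$%.4f\n", estimateCost(pdfChars, 0.5)) // ≈ $0.005
}
```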
@@ -10,7 +10,7 @@
<div class="dropdown-item">
<!-- Placeholder for additional text -->
<div class="content" id="usage-content" style="max-height: 30vh; overflow-y: auto;">
<table class="table is-narrow is-fullwidth is-striped">
<table class="table is-narrow is-fullwidth is-striped" style="max-width: 200px;">
<tbody>
{% for usage in usages %}
<tr>
@@ -36,7 +36,7 @@
<h2>More information</h2>
<ul>
<li>
<h3>Variety of AI models from different providers.<button class="button ml-2 is-small is-primary is-outlined"
<h3>Get access to all models.<button class="button ml-2 is-small is-primary is-outlined"
onclick="toggleDetails('all-models-details')">
<span class="icon is-small">
<i class="fa-solid fa-info"></i>
@@ -53,7 +53,7 @@
</p>
</li>
<li>
<h3>Multiple models in a single conversation.<button class="button ml-2 is-small is-primary is-outlined"
<h3>Get the best answer from multiple models.<button class="button ml-2 is-small is-primary is-outlined"
onclick="toggleDetails('multi-models-details')">
<span class="icon is-small">
<i class="fa-solid fa-info"></i>
@@ -66,7 +66,7 @@
complex queries where different models might offer unique insights or solutions.<br><br></p>
</li>
<li>
<h3>Duplicate models in a single conversation.<button class="button ml-2 is-small is-primary is-outlined"
<h3>Even from the same model.<button class="button ml-2 is-small is-primary is-outlined"
onclick="toggleDetails('same-models-details')">
<span class="icon is-small">
<i class="fa-solid fa-info"></i>
@@ -90,7 +90,7 @@
model's bias. This ensures that the responses you receive are more reliable and trustworthy.<br><br></p>
</li>
<li>
<h3>Pay only for what you use.<button class="button ml-2 is-small is-primary is-outlined"
<h3>Pay only for what you use, or not at all.<button class="button ml-2 is-small is-primary is-outlined"
onclick="toggleDetails('flexible-pricing-details')">
<span class="icon is-small">
<i class="fa-solid fa-info"></i>
@@ -99,10 +99,9 @@
<p id="flexible-pricing-details" style="display:none;">JADE uses APIs, so you get access to free credits or tiers
depending on the provider (see next section). This is particularly beneficial for users who may not need to
use the chatbot extensively. Once the free credits are used, you pay based on the length of your message and the
response generated in tokens (a token is around 3 characters). <br><br>JADE starts with a
free tier that allows you to send up to 200 messages a month. For more intensive use, you can upgrade for
just $0.95/month. So you can use Llama 70b for free forever if using JADE with a Groq Cloud account, for
example.<br><br></p>
response generated in tokens (a token is around 3 characters). Groq and Google also offer free tiers that
are enough for a conversation. <br><br>JADE starts with a free tier that allows you to send up to 200 messages
a month. For more intensive use, you can upgrade for just $0.95/month.<br><br></p>
</li>
<li>
<h3>All providers and models.<button class="button ml-2 is-small is-primary is-outlined"
@@ -261,6 +260,149 @@
</div>
</div>
<br>
<div class="columns">
<div class="column is-two-thirds">
<strong>Fireworks</strong> - Fireworks AI offers $1 of free credits when you create an account.
Fireworks AI has a lot of open-source models. I may add fine-tuned models in the future.
</div>
<div class="column">
|
||||
<ul>
|
||||
<li>
|
||||
<strong>FireLLaVA-13B</strong>
|
||||
</li>
|
||||
<li>
|
||||
<strong>Mixtral MoE 8x7B Instruct</strong>
|
||||
</li>
|
||||
<li>
|
||||
<strong>Mixtral MoE 8x22B Instruct</strong>
|
||||
</li>
|
||||
<li>
|
||||
<strong>Llama 3 70B Instruct</strong>
|
||||
</li>
|
||||
<li>
|
||||
<strong>Bleat</strong>
|
||||
</li>
|
||||
<li>
|
||||
<strong>Chinese Llama 2 LoRA 7B</strong>
|
||||
</li>
|
||||
<li>
|
||||
<strong>DBRX Instruct</strong>
|
||||
</li>
|
||||
<li>
|
||||
<strong>Gemma 7B Instruct</strong>
|
||||
</li>
|
||||
<li>
|
||||
<strong>Hermes 2 Pro Mistral 7b</strong>
|
||||
</li>
|
||||
<li>
|
||||
<strong>Japanese StableLM Instruct Beta 70B</strong>
|
||||
</li>
|
||||
<li>
|
||||
<strong>Japanese Stable LM Instruct Gamma 7B</strong>
|
||||
</li>
|
||||
<li>
|
||||
<strong>Llama 2 13B French</strong>
|
||||
</li>
|
||||
<li>
|
||||
<strong>Llama2 13B Guanaco QLoRA GGML</strong>
|
||||
</li>
|
||||
<li>
|
||||
<strong>Llama 7B Summarize</strong>
|
||||
</li>
|
||||
<li>
|
||||
<strong>Llama 2 13B</strong>
|
||||
</li>
|
||||
<li>
|
||||
<strong>Llama 2 13B Chat</strong>
|
||||
</li>
|
||||
<li>
|
||||
<strong>Llama 2 70B Chat</strong>
|
||||
</li>
|
||||
<li>
|
||||
<strong>Llama 2 7B</strong>
|
||||
</li>
|
||||
<li>
|
||||
<strong>Llama 2 7B Chat</strong>
|
||||
</li>
|
||||
<li>
|
||||
<strong>Llama 3 70B Instruct (HF version)</strong>
|
||||
</li>
|
||||
<li>
|
||||
<strong>Llama 3 8B (HF version)</strong>
|
||||
</li>
|
||||
<li>
|
||||
<strong>Llama 3 8B Instruct</strong>
|
||||
</li>
|
||||
<li>
|
||||
<strong>Llama 3 8B Instruct (HF version)</strong>
|
||||
</li>
|
||||
<li>
|
||||
<strong>LLaVA V1.6 Yi 34B</strong>
|
||||
</li>
|
||||
<li>
|
||||
<strong>Mistral 7B</strong>
|
||||
</li>
|
||||
<li>
|
||||
<strong>Mistral 7B Instruct</strong>
|
||||
</li>
|
||||
<li>
|
||||
<strong>Mistral 7B Instruct v0.2</strong>
|
||||
</li>
|
||||
<li>
|
||||
<strong>Mistral 7B Instruct v0p3</strong>
|
||||
</li>
|
||||
<li>
|
||||
<strong>Mixtral MoE 8x22B</strong>
|
||||
</li>
|
||||
<li>
|
||||
<strong>Mixtral MoE 8x22B Instruct (HF version)</strong>
|
||||
</li>
|
||||
<li>
|
||||
<strong>Mixtral MoE 8x7B</strong>
|
||||
</li>
|
||||
<li>
|
||||
<strong>Mixtral MoE 8x7B Instruct (HF version)</strong>
|
||||
</li>
|
||||
<li>
|
||||
<strong>MythoMax L2 13b</strong>
|
||||
</li>
|
||||
<li>
|
||||
<strong>Nous Hermes 2 - Mixtral 8x7B - DPO (fp8)</strong>
|
||||
</li>
|
||||
<li>
|
||||
<strong>Phi 3 Mini 128K Instruct</strong>
|
||||
</li>
|
||||
<li>
|
||||
<strong>Phi 3 Vision 128K Instruct</strong>
|
||||
</li>
|
||||
<li>
|
||||
<strong>Qwen1.5 72B Chat</strong>
|
||||
</li>
|
||||
<li>
|
||||
<strong>StableLM 2 Zephyr 1.6B</strong>
|
||||
</li>
|
||||
<li>
|
||||
<strong>StableLM Zephyr 3B</strong>
|
||||
</li>
|
||||
<li>
|
||||
<strong>StarCoder 15.5B</strong>
|
||||
</li>
|
||||
<li>
|
||||
<strong>StarCoder 7B</strong>
|
||||
</li>
|
||||
<li>
|
||||
<strong>Traditional Chinese Llama2</strong>
|
||||
</li>
|
||||
<li>
|
||||
<strong>Capybara 34B</strong>
|
||||
</li>
|
||||
<li>
|
||||
<strong>Yi Large</strong>
|
||||
</li>
|
||||
</ul>
|
||||
</div>
</div>
<br>
<strong>Hugging Face</strong> - You can also use custom endpoints. I have only tested Hugging Face, but in
theory, as long as the key is valid and the endpoint uses the OpenAI API, it should work. This part needs some
testing and improvement.
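Since a custom endpoint only needs to speak the OpenAI chat-completions format, the request JADE would send can be sketched as a plain HTTP POST. This is a minimal, hypothetical Go example, not the actual code path in Chat.go; the base URL, model name, and environment variable are placeholders:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"os"
)

func main() {
	// Hypothetical OpenAI-compatible endpoint (e.g. a Hugging Face inference URL);
	// replace the base URL, model, and API key with your own values.
	baseURL := "https://example-endpoint.huggingface.cloud/v1"
	apiKey := os.Getenv("CUSTOM_API_KEY")

	body, _ := json.Marshal(map[string]any{
		"model": "my-model", // placeholder model name
		"messages": []map[string]string{
			{"role": "user", "content": "Hello!"},
		},
	})

	req, _ := http.NewRequest("POST", baseURL+"/chat/completions", bytes.NewReader(body))
	req.Header.Set("Authorization", "Bearer "+apiKey)
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()

	// On success, the reply text is in choices[0].message.content.
	var out map[string]any
	json.NewDecoder(resp.Body).Decode(&out)
	fmt.Println(out)
}
```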