About Groq

Groq is the world's fastest AI inference platform powered by the proprietary LPU™ (Language Processing Unit) Inference Engine, purpose-built hardware designed specifically for running large language models at exceptional speed and low cost.

The LPU architecture delivers 300-500 tokens per second with up to 18x faster processing than traditional GPUs through tensor streaming technology optimized for sequential computation and low-latency inference. GroqCloud provides API access to leading open-source models (Llama, Mixtral, Gemma) with Tokens-as-a-Service pricing, enabling developers to build production-ready AI applications with ultra-low latency and high throughput.

Key features include deterministic performance, reduced memory bottlenecks, energy-efficient processing, real-time inference capabilities, and scalable cloud deployment with straightforward API integration.

Step-by-step guide to using your Groq API key to chat with AI

1. Get Your Groq API Key

First, you'll need to obtain an API key from Groq. This key allows you to access their AI models directly and pay only for what you use.

  1. Visit Groq's API console
  2. Sign up or log in to your account
  3. Navigate to the API keys section
  4. Generate a new API key (copy it immediately as some providers only show it once)
  5. Save your API key in a secure password manager or encrypted note
Groq may require you to add a payment method to your account before your API key can be used to chat with AI. To avoid any hiccups, make sure you've got a payment method set up and ready to go!
Important: Keep your API key secure and never share it publicly. Store it safely as you'll need it to connect with TypingMind.
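Once the key is stored (for example in an environment variable rather than in source code), you can exercise it directly against Groq's OpenAI-compatible chat endpoint. The sketch below builds the authenticated request with Python's standard library; `build_chat_request` is an illustrative helper, not part of any Groq SDK:

```python
import json
import os
import urllib.request

# Groq's OpenAI-compatible chat completions endpoint (from this guide).
GROQ_CHAT_URL = "https://api.groq.com/openai/v1/chat/completions"

def build_chat_request(prompt, model="openai/gpt-oss-120b", api_key=None):
    """Return a urllib Request for a single-turn chat completion.

    Reads the key from the GROQ_API_KEY environment variable by default,
    so the secret never appears in source code.
    """
    key = api_key or os.environ.get("GROQ_API_KEY", "")
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        GROQ_CHAT_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# To actually send it (requires a valid key and network access):
# with urllib.request.urlopen(build_chat_request("Hello!")) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

This is the same request shape TypingMind will send on your behalf once the key is connected in the next step.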

2. Connect Your Groq API Key on TypingMind

Once you have your Groq API key, connecting it to TypingMind to chat with AI is straightforward:

  1. Open TypingMind in your browser
  2. Click the "Settings" icon (gear symbol)
  3. Navigate to "Models" section
  4. Click "Add Custom Model"
  5. Fill in the model information:
    Name: openai/gpt-oss-120b via Groq (or your preferred name)
    Endpoint: https://api.groq.com/openai/v1/chat/completions
    Model ID: openai/gpt-oss-120b for example (check Groq model list)
    Context Length: Enter the model's context window (e.g., 32000 for openai/gpt-oss-120b)
  6. Add custom headers by clicking "Add Custom Headers" in the Advanced Settings section:
    Authorization: Bearer <GROQ_API_KEY>
    X-Title: typingmind.com
    HTTP-Referer: https://www.typingmind.com
  7. Enable "Support Plugins (via OpenAI Functions)" if the model supports the "functions" or "tool_calls" parameter, or enable "Support OpenAI Vision" if the model supports vision.
  8. Click "Test" to verify the configuration
  9. If you see "Nice, the endpoint is working!", click "Add Model"
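The settings from steps 5 and 6 boil down to a small configuration payload. Here is a sketch of the same values as a plain dictionary; the field names are illustrative (TypingMind's internal schema may differ), and `<GROQ_API_KEY>` must be replaced with your real key:

```python
import json

# Illustrative sketch of the custom-model configuration from steps 5-6.
# Field names are for illustration only, not TypingMind's internal schema.
model_config = {
    "name": "openai/gpt-oss-120b via Groq",
    "endpoint": "https://api.groq.com/openai/v1/chat/completions",
    "model_id": "openai/gpt-oss-120b",
    "context_length": 32000,
    "custom_headers": {
        "Authorization": "Bearer <GROQ_API_KEY>",  # replace with your real key
        "X-Title": "typingmind.com",
        "HTTP-Referer": "https://www.typingmind.com",
    },
}

print(json.dumps(model_config, indent=2))
```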

3. Start Chatting with Groq Models

Now you can start chatting with Groq models through TypingMind:

  • Select your preferred Groq model from the model dropdown menu
  • Start typing your message in the chat input
  • Enjoy faster responses and better features than the official interface
  • Switch between different AI models as needed

4. Monitor Your AI Usage and Costs

One of the biggest advantages of using API keys with TypingMind is cost transparency and control. Unlike fixed subscriptions, you pay only for what you actually use. Visit https://console.groq.com/dashboard/metrics to monitor your Groq API usage and set spending limits.
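Beyond the dashboard, every OpenAI-compatible response (including Groq's) carries a `usage` object with the token counts for that call, so you can track spend per request. The sketch below computes a cost estimate from a sample response; the per-million-token prices are placeholders, so check Groq's pricing page for real rates:

```python
# Sample of the "usage" object returned by OpenAI-compatible APIs,
# including Groq's. The counts here are made up for illustration.
SAMPLE_RESPONSE = {
    "choices": [{"message": {"role": "assistant", "content": "Hello!"}}],
    "usage": {"prompt_tokens": 120, "completion_tokens": 80, "total_tokens": 200},
}

# Hypothetical prices in USD per million tokens (placeholders only --
# consult Groq's pricing page for actual per-model rates).
PRICE_PER_M_INPUT = 0.15
PRICE_PER_M_OUTPUT = 0.60

def estimate_cost(response):
    """Estimate the USD cost of one call from its usage object."""
    usage = response["usage"]
    return (usage["prompt_tokens"] * PRICE_PER_M_INPUT
            + usage["completion_tokens"] * PRICE_PER_M_OUTPUT) / 1_000_000

print(f"${estimate_cost(SAMPLE_RESPONSE):.8f}")
```

Summing these per-call estimates over a day gives a quick sanity check against the numbers on the metrics dashboard.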

💡 Cost-saving tips:
  • Use less expensive models for simple tasks
  • Keep prompts concise but specific to reduce token usage
  • Use TypingMind's prompt caching to reduce repeat costs (How to enable prompt caching)
  • Use RAG (retrieval-augmented generation) for large documents to reduce repeat costs (How to use RAG)
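For the "keep prompts concise" tip, a rough budget check before sending can help. The ~4 characters-per-token figure below is a common rule of thumb for English text, not an exact tokenizer, so treat the result as a ballpark only:

```python
# Rough token-count sanity check before sending a prompt.
# Assumes ~4 characters per token, a common rule of thumb for
# English text -- not an exact tokenizer.

def rough_token_count(text: str) -> int:
    """Estimate tokens as characters / 4, rounded up."""
    return -(-len(text) // 4)  # ceiling division

prompt = "Summarize the attached report in three bullet points."
print(rough_token_count(prompt))
```

If the estimate is much larger than expected, trim boilerplate from the prompt or switch to RAG rather than pasting whole documents.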