
LiquidAI: LFM2.5-1.2B-Thinking (free)

LFM2.5-1.2B-Thinking is a compact 1.2B-parameter reasoning model from Liquid AI, available at no cost through OpenRouter.

LiquidAI: LFM2.5-1.2B-Thinking (free) — Free API Specifications

Context 33K
Max Output 8K
Modality text, reasoning
Rate Limit See provider page
Card Required Yes
OpenAI Compatible Yes

How to Configure LiquidAI: LFM2.5-1.2B-Thinking (free) for Free

Base URL https://openrouter.ai/api/v1
API Key Create one at https://openrouter.ai/settings/keys
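Because the endpoint is OpenAI-compatible, any HTTP client can call it. Below is a minimal sketch using only Python's standard library; note the model identifier mirrors the display name used on this page — check the model's OpenRouter page for the exact API slug if the display name is rejected:

```python
import json
import os
import urllib.request

BASE_URL = "https://openrouter.ai/api/v1"
# Display name as shown on this page; the API may expect the model's slug instead.
MODEL = "LiquidAI: LFM2.5-1.2B-Thinking (free)"

def build_chat_request(prompt: str, api_key: str) -> urllib.request.Request:
    """Construct a chat-completions request (without sending it)."""
    payload = {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 512,
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

if __name__ == "__main__":
    req = build_chat_request("Hello!", os.environ.get("OPENROUTER_API_KEY", ""))
    # Send with: urllib.request.urlopen(req)
    print(req.full_url)
```

Sending the request with `urllib.request.urlopen(req)` returns the standard chat-completions JSON body.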

One-Click Config for Claude Code, Cursor & More

Claude Code

# Claude Code works via OpenRouter's Anthropic-compatible API.
# Note: Only paid Anthropic Claude models are supported (e.g. claude-sonnet-4.6, claude-opus-4).
# Browse available Claude models at: https://openrouter.ai/models?q=anthropic

# Add to ~/.zshrc or ~/.bashrc
export OPENROUTER_API_KEY="<your-openrouter-api-key>"  # Get at https://openrouter.ai/settings/keys
export ANTHROPIC_BASE_URL="https://openrouter.ai/api"
export ANTHROPIC_AUTH_TOKEN="$OPENROUTER_API_KEY"
export ANTHROPIC_API_KEY=""  # Must be explicitly empty to avoid conflicts

# Optional: pin specific models for each role
# export ANTHROPIC_DEFAULT_SONNET_MODEL="anthropic/claude-sonnet-4.6"
# export ANTHROPIC_DEFAULT_HAIKU_MODEL="anthropic/claude-haiku-4.5"

# Then simply run: claude

Cursor

# Cursor → Settings (⚙️) → Models → Add Model
# Enter the model name exactly as shown, then fill in:
#   Override OpenAI Base URL: https://openrouter.ai/api/v1
#   OpenAI API Key: <your-api-key>   # Get at https://openrouter.ai
# Click "Verify" to confirm the connection, then enable the model.
#
# Model name to add: LiquidAI: LFM2.5-1.2B-Thinking (free)

Codex

# Add to ~/.zshrc or ~/.bashrc
export OPENAI_BASE_URL="https://openrouter.ai/api/v1"
export OPENAI_API_KEY="<your-api-key>"  # Get at https://openrouter.ai

# Then run:
codex --model "LiquidAI: LFM2.5-1.2B-Thinking (free)"

Gemini CLI

# ~/.gemini/settings.json
{
  "apiKey": "<your-api-key>",
  "model": "LiquidAI: LFM2.5-1.2B-Thinking (free)"
}
# Get API key at https://openrouter.ai

OpenCode

// ~/.config/opencode/opencode.json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "free-llm": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Free LLM",
      "options": {
        "baseURL": "https://openrouter.ai/api/v1",
        "apiKey": "<your-api-key>"
      },
      "models": {
        "LiquidAI: LFM2.5-1.2B-Thinking (free)": { "name": "LiquidAI: LFM2.5-1.2B-Thinking (free)" }
      }
    }
  }
}
// Get API key at https://openrouter.ai

Hermes

# Step 1 — Edit config.yaml
# Windows: C:\Users\<you>\AppData\Local\hermes\config.yaml
# macOS/Linux: ~/.config/hermes/config.yaml

model:
  default: "LiquidAI: LFM2.5-1.2B-Thinking (free)"
  provider: custom
  base_url: ${CUSTOM_BASE_URL}
  api_key: ${CUSTOM_API_KEY}
  model_aliases:
    "LiquidAI: LFM2.5-1.2B-Thinking (free)":
      model: "LiquidAI: LFM2.5-1.2B-Thinking (free)"
      provider: "custom"

# Step 2 — Edit .env (same directory as config.yaml)
# Windows: C:\Users\<you>\AppData\Local\hermes\.env
# macOS/Linux: ~/.config/hermes/.env

# ========================
# Custom API (OpenAI-compatible)
# ========================
CUSTOM_API_KEY=<your-api-key>        # Get at https://openrouter.ai
CUSTOM_BASE_URL=https://openrouter.ai/api/v1

OpenClaw

// ~/.openclaw/openclaw.json  (JSON5 format)
{
  "agents": {
    "defaults": {
      "model": {
        "primary": "LiquidAI: LFM2.5-1.2B-Thinking (free)",
      },
    },
  },
  "models": {
    "providers": {
      // Option A — Built-in provider (OpenAI, Anthropic, Google…)
      // Just add apiKey; OpenClaw handles the baseUrl automatically
      // "openai": { "apiKey": "<your-api-key>" },

      // Option B — Custom OpenAI-compatible base URL (e.g. OpenRouter, NVIDIA)
      "free-llm": {
        "baseUrl": "https://openrouter.ai/api/v1",
        "apiKey": "<your-api-key>",  // Get at https://openrouter.ai
        "api": "openai-completions", // openai-completions | anthropic-messages | …
        "models": [
          { "id": "LiquidAI: LFM2.5-1.2B-Thinking (free)", "name": "LiquidAI: LFM2.5-1.2B-Thinking (free)" },
        ],
      },
    },
  },
}
// Apply: openclaw gateway restart
// Verify: openclaw doctor --fix

Frequently Asked Questions about LiquidAI: LFM2.5-1.2B-Thinking (free)

Is LiquidAI: LFM2.5-1.2B-Thinking (free) free to use?

Yes. LiquidAI: LFM2.5-1.2B-Thinking (free) is available on a permanently free tier via OpenRouter. A credit card may be required to activate the free tier, and rate limits apply; see the provider page for current limits.

What is LiquidAI: LFM2.5-1.2B-Thinking (free) best for?

LiquidAI: LFM2.5-1.2B-Thinking (free) is optimized for chat and reasoning tasks. It supports text and reasoning modalities, with a 33K-token context window and a maximum output of 8K tokens.

Is LiquidAI: LFM2.5-1.2B-Thinking (free) OpenAI-compatible?

Yes. LiquidAI: LFM2.5-1.2B-Thinking (free) uses an OpenAI-compatible API endpoint at https://openrouter.ai/api/v1. You can use it with the OpenAI Python/JS SDKs, or any tool that accepts a custom base URL, including Claude Code, Cursor, Codex, and OpenCode.
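A successful call returns the standard OpenAI chat-completions response shape. The sample body below is illustrative (the ID and token counts are made up), but the extraction path is the one OpenAI-compatible clients rely on:

```python
# Hypothetical response body, trimmed to the fields most clients read.
sample_response = {
    "id": "gen-example",
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant", "content": "Hello! How can I help?"},
            "finish_reason": "stop",
        }
    ],
    "usage": {"prompt_tokens": 8, "completion_tokens": 7, "total_tokens": 15},
}

def extract_reply(response: dict) -> str:
    """Pull the assistant's text out of an OpenAI-style chat completion."""
    return response["choices"][0]["message"]["content"]

print(extract_reply(sample_response))  # Hello! How can I help?
```

The same `choices[0].message.content` path is what the OpenAI SDKs expose as `completion.choices[0].message.content`.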

How do I get an API key for LiquidAI: LFM2.5-1.2B-Thinking (free)?

Visit OpenRouter's API key page to register and generate a free API key. Once you have the key, use the configuration snippets above to set up Claude Code, Cursor, or your preferred AI coding tool.