Swipeer Documentation

Welcome to the command center. Swipeer is more than just a chat window—it's a local-first, model-agnostic workspace designed to bridge the gap between AI and your actual files.

Local-First Architecture

Your chat history, notes, and knowledge-base data are stored only locally, on your device.

BYOK Flexibility

Use your own OpenRouter keys or run Swipeer offline with Ollama for unlimited access without credit limits.

Quick Installation

Swipeer is a high-performance, cross-platform desktop application built with Electron. Follow the instructions for your operating system:

Windows (x64 / ARM64)

Run the .exe installer and follow the wizard.

macOS (Intel / Apple Silicon)

Drag Swipeer.app to your Applications folder.

Linux (AppImage)

Make the AppImage executable and launch.

Permissions

On Linux, you may need to run: chmod +x Swipeer-v1.0.0.AppImage
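The full Linux sequence, using the example filename from above, looks like this:

```shell
# Grant execute permission (the filename is the example from this guide;
# match it to the release you actually downloaded).
chmod +x Swipeer-v1.0.0.AppImage

# Launch the app.
./Swipeer-v1.0.0.AppImage
```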

Advanced AI Configuration

Cloud Intelligence: OpenRouter

OpenRouter is our recommended cloud provider: bring your own API key and get pay-as-you-go access to models from many vendors through a single endpoint.

1 Create Account

Register at OpenRouter and add some credits to your balance.

2 Copy API Key

Go to Keys under your profile and generate a new key for Swipeer.

3 Setup in App

Go to Settings ➔ Providers and paste the key in the OpenRouter section. Save and you are ready!

Why OpenRouter?

  • Access models from 10+ different companies
  • Pay only for what you use at wholesale prices
  • No monthly subscriptions for each model
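Swipeer fetches the model list for you, but you can preview OpenRouter's catalog yourself. This sketch uses OpenRouter's public model-list endpoint and assumes your key is in the OPENROUTER_API_KEY environment variable (the endpoint also answers without a key):

```shell
# Fetch the OpenRouter model catalog; the Authorization header is
# optional for this endpoint but mirrors what a chat request sends.
curl -s https://openrouter.ai/api/v1/models \
  -H "Authorization: Bearer $OPENROUTER_API_KEY"
```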

Local Privacy & Cloud: Ollama

Run local models like Qwen or Mistral entirely on your own machine, or connect directly to Ollama Cloud. For more information, visit ollama.com.

Option 1: Complete Local Privacy

Requires a powerful GPU

$ ollama run qwen3
$ ollama run gemma3:4b

Option 2: Ollama Cloud (Local Proxy)

Run massive models without draining your own GPU resources. Sign in via your terminal, and your local Ollama daemon will securely route requests to the cloud.

$ ollama signin
$ ollama run glm-5:cloud

In Swipeer, leave the Endpoint URL as http://localhost:11434 (no API key needed).
Note: Run a model once from the terminal (e.g. 'ollama run gemma3:4b' or 'ollama run glm-5:cloud') before it appears in Swipeer's model list.
Also note that Ollama Cloud has usage limits which you can view at ollama.com/settings.
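To see which models your local daemon will report to Swipeer, you can query Ollama's tags endpoint directly (assuming the daemon is running on the default port):

```shell
# List locally available models, as exposed to clients like Swipeer.
curl -s http://localhost:11434/api/tags
```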

Option 3: Direct API / Custom Remote Servers

If you don't want to run Ollama locally at all, or if you have your own secured remote VPS, you can configure Swipeer to connect directly.

  1. Go to Settings ➔ Providers
  2. Change the Endpoint URL to your server or https://ollama.com
  3. Enter your API Key / Token if the server is secured.

Swipeer automatically fetches all models available to your account!
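As a sketch, assuming the remote server speaks the standard Ollama API (the hostname and token below are placeholders), you can verify connectivity the same way Swipeer does when it fetches your models:

```shell
# Query a secured remote Ollama-compatible server for its model list.
curl -s https://ollama.example.com/api/tags \
  -H "Authorization: Bearer $OLLAMA_API_KEY"
```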

Swipeer automatically pings http://127.0.0.1:11434 by default. If you run Ollama on a different port, update the URL in Settings ➔ Providers.
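If you do run the daemon on a non-default port, Ollama reads its bind address from the OLLAMA_HOST environment variable; port 11500 below is just an example:

```shell
# Serve Ollama on an alternative port, then set Swipeer's Endpoint URL
# to http://127.0.0.1:11500 in Settings ➔ Providers.
OLLAMA_HOST=127.0.0.1:11500 ollama serve
```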

💬 Chat History

Never lose a conversation again. Access your entire chat history via the folder icon in the sidebar. Use the search bar to find specific discussions instantly.

🎭 System Prompt (Persona)

Define the personality and behavior of your AI. Set a global instruction like "You are a Senior React Developer who prefers functional programming" to ensure every new chat starts with the right context and tone.

📋 Smart Templates

Save and reuse your most powerful prompts. Start typing in the chat input to trigger autosuggestions, then press Tab or Enter to insert a template into the chat.

Built-in Presets:

  • Grammar Check
  • Summarize
  • Translate to...
  • Explain Code
📝 Notes & Knowledge Base

A built-in Markdown Editor for your documentation, code snippets, and ideas. But it's more than just notes—it's your AI's long-term memory.

Context Injection

Toggle "Knowledge Base" on any note to inject it into the AI's context window. Perfect for project requirements, API documentation, or coding guidelines.

100% Local & Private

Your notes never leave your machine (unless you choose a cloud model). No vector databases, no complex setup—just pure text context.

HTTP Automations

Swipeer includes a built-in HTTP client (similar to Postman/Insomnia) that allows you to trigger external events or send data to other apps via webhooks and APIs.

Supported Methods

  • GET
  • POST (JSON / Form Data)
  • PUT / PATCH
  • DELETE

Advanced Auth

  • Bearer Token
  • Basic Auth
  • Custom API-Key Headers
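For reference, Basic Auth is nothing more than a base64-encoded user:password pair in the Authorization header; Swipeer assembles it for you. A sketch with hypothetical credentials:

```shell
# The Basic Auth header value is "Basic " + base64("user:password").
# "alice" and "s3cret" are made-up example credentials.
printf 'alice:s3cret' | base64
```

The request would then carry the header Authorization: Basic YWxpY2U6czNjcmV0.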

Integration Examples

  1. n8n / Make.com: Send data to a webhook to start a complex workflow (e.g., creating a JIRA ticket, saving a note to Notion, or emailing a summary).
  2. Local API Testing: Test your developer endpoints without leaving the context of your AI pair programmer.
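A webhook call like the n8n example above boils down to a single POST; the URL and payload here are placeholders for your own workflow:

```shell
# Send a JSON payload to a workflow webhook (placeholder URL).
curl -s -X POST "https://n8n.example.com/webhook/swipeer" \
  -H "Content-Type: application/json" \
  -d '{"event": "summary_ready", "text": "Weekly report ..."}'
```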

Full Settings Reference

  • GUI & Layout: Theme selection (Light/Dark), font scaling (80% - 150%), window stay-on-top behavior, and language selection.
  • License: Enter your key, check credit consumption, and see feature locks for Starter/Pro tiers.
  • Providers: Configure OpenRouter keys and custom Ollama endpoints. This is the heart of your AI engine.
  • Profile (Backup): Export/import your configuration. Perfect for transferring your setup to a new machine or backing up your templates.
⌨️ Keyboard Shortcuts

Speed up your workflow. Click the keyboard icon at the bottom of the sidebar to view a live map of all available shortcuts, including quick-create commands, window toggles, and navigation keys.