Native speech-to-text for Arch / Omarchy - Fast, accurate and easy system-wide dictation
all local | waybar integration | audio feedback | auto-paste | cpu or gpu | easy setup
Demo video (pssst... un-mute!): 2025-08-27.15-22-53.mp4
- Optimized for Arch Linux / Omarchy - Seamless integration with Omarchy / Hyprland & Waybar
- Whisper-powered - State-of-the-art speech recognition via OpenAI's Whisper
- Cross-platform GPU support - Automatic detection and acceleration for NVIDIA (CUDA) / AMD (ROCm)
- Hot model loading - pywhispercpp backend keeps models in memory for fast transcription
- Word overrides - Customize transcriptions, prompt and corrections
- Run as user - Runs in user space, just sudo once for the installer
- Omarchy
- NVIDIA GPU (optional, for CUDA acceleration)
- AMD GPU (optional, for ROCm acceleration)
"Just works" with Omarchy.
AUR:
New!
# Install package
yay -S hyprwhspr
# Setup package
hyprwhspr-setup
Script:
# Clone the repository
git clone https://github.com/goodroot/hyprwhspr.git
cd hyprwhspr
# Run the automated installer
./scripts/install-omarchy.sh
The installer will:
- ✅ Install system dependencies (ydotool, etc.)
- ✅ Copy application files to the system directory (/usr/lib/hyprwhspr)
- ✅ Set up a Python virtual environment in user space (~/.local/share/hyprwhspr/venv)
- ✅ Install the pywhispercpp backend
- ✅ Download the base model to user space (~/.local/share/pywhispercpp/models/ggml-base.en.bin)
- ✅ Set up systemd services for hyprwhspr & ydotool
- ✅ Configure Waybar integration
- ✅ Test that everything works
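If you want to double-check the result, both user services can be inspected directly (the same systemctl commands appear in Troubleshooting below):
# Confirm the main app and the input-injection daemon are active
systemctl --user status hyprwhspr.service ydotool.service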
Ensure your microphone of choice is available in audio settings!
- Log out and back in (for group permissions)
- Press Super+Alt+D to start dictation - beep!
- Speak naturally
- Press Super+Alt+D again to stop dictation - boop!
- Bam! Text appears in the active buffer!

Super+Alt+D - Toggle dictation on/off
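The same toggle is also available from a shell via the tray helper script, should you want to script it or bind it elsewhere (path as used in the Waybar section below):
# Toggle dictation on/off from the command line
/usr/lib/hyprwhspr/config/hyprland/hyprwhspr-tray.sh toggle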
Edit ~/.config/hyprwhspr/config.json:
Minimal config - only 2 essential options:
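The original minimal example isn't reproduced here; as a rough sketch using keys documented later in this README (your generated config.json is the authority on which two options it actually needs):
{
    "model": "base.en",       // which Whisper model to load (see model instructions)
    "audio_feedback": true    // assumed second option for illustration - verify against your own config.json
}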
For choice of model, see model instructions.
Performance options - improve cpu transcription speed:
{
"threads": 4 // thread count for whisper cpu processing
}
Increase the thread count for more parallelism when transcribing on the CPU; on the GPU, modest values are fine.
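A reasonable starting point is your physical core count, which you can check with a standard coreutils command:
# Show how many processing units are available
nproc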
Word overrides - customize transcriptions:
{
"word_overrides": {
"hyperwhisper": "hyprwhspr",
"omarchie": "Omarchy"
}
}
Whisper prompt - customize transcription behavior:
{
"whisper_prompt": "Transcribe with proper capitalization, including sentence beginnings, proper nouns, titles, and standard English capitalization rules."
}
The prompt influences how Whisper interprets and transcribes your audio, e.g.:
- "Transcribe as technical documentation with proper capitalization, acronyms and technical terminology."
- "Transcribe as casual conversation with natural speech patterns."
- "Transcribe as an ornery pirate on the cusp of scurvy."
Audio feedback - optional sound notifications:
{
"audio_feedback": true, // Enable audio feedback (default: false)
"start_sound_volume": 0.3, // Start recording sound volume (0.1 to 1.0)
"stop_sound_volume": 0.3, // Stop recording sound volume (0.1 to 1.0)
"start_sound_path": "custom-start.ogg", // Custom start sound (relative to assets)
"stop_sound_path": "custom-stop.ogg" // Custom stop sound (relative to assets)
}
Default sounds included:
- Start recording: ping-up.ogg (ascending tone)
- Stop recording: ping-down.ogg (descending tone)
Custom sounds:
- Supported formats: .ogg, .wav, .mp3
- Fallback: Uses defaults if custom files don't exist
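To preview the bundled (or your custom) sounds before enabling audio feedback, any of the players checked in Troubleshooting will do - for example, assuming ffplay is installed:
# Play the default start sound once, without opening a window
ffplay -nodisp -autoexit /usr/lib/hyprwhspr/share/assets/ping-up.ogg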
Thanks for the sounds, @akx!
Automatically converts spoken words to symbols and punctuation for natural dictation:
Punctuation:
- "period" → "."
- "comma" → ","
- "question mark" → "?"
- "exclamation mark" → "!"
- "colon" → ":"
- "semicolon" → ";"
Symbols:
- "at symbol" → "@"
- "hash" → "#"
- "plus" → "+"
- "equals" → "="
- "dash" → "-"
- "underscore" → "_"
Brackets:
- "open paren" → "("
- "close paren" → ")"
- "open bracket" → "["
- "close bracket" → "]"
- "open brace" → "{"
- "close brace" → "}"
Special commands:
- "new line" → new line
- "tab" → tab character
Speech-to-text replacement list via WhisperTux - thanks, @cjams!
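As a quick illustration of how these replacements combine during dictation (hypothetical utterance, not from the project docs):
# Spoken: "open paren x comma y close paren equals z period new line"
# Typed:  "(x, y) = z." followed by a new line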
Clipboard behavior - control what happens to clipboard after text injection:
{
"clipboard_behavior": false, // Boolean: true = clear after delay, false = keep (default: false)
"clipboard_clear_delay": 5.0 // Float: seconds to wait before clearing (default: 5.0, only used if clipboard_behavior is true)
}
Clipboard behavior options:
- clipboard_behavior: true - Clipboard is automatically cleared after the specified delay
- clipboard_clear_delay - How long to wait before clearing (only matters when clipboard_behavior is true)
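For example, to have the clipboard wiped ten seconds after the text is injected (keys as documented above):
{
    "clipboard_behavior": true,     // clear the clipboard after injection
    "clipboard_clear_delay": 10.0   // wait ten seconds before clearing
}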
PRIVACY: hyprwhspr never reads your existing - or any - clipboard / audio content
Paste behavior - control how text is pasted into applications:
{
"shift_paste": true // Boolean: true = Ctrl+Shift+V (default), false = Ctrl+V
}
Paste behavior options:
- shift_paste: true (default) - Uses Ctrl+Shift+V for pasting
  - ✅ Works in terminals (bash, zsh, fish, etc.)
  - ✅ Probably works everywhere else
- shift_paste: false - Uses traditional Ctrl+V for pasting
  - ✅ Standard paste behavior
  - ❌ May not work in some terminal applications
Add a dynamic tray icon to your ~/.config/waybar/config:
{
"custom/hyprwhspr": {
"exec": "/usr/lib/hyprwhspr/config/hyprland/hyprwhspr-tray.sh status",
"interval": 2,
"return-type": "json",
"exec-on-event": true,
"format": "{}",
"on-click": "/usr/lib/hyprwhspr/config/hyprland/hyprwhspr-tray.sh toggle",
"on-click-right": "/usr/lib/hyprwhspr/config/hyprland/hyprwhspr-tray.sh start",
"on-click-middle": "/usr/lib/hyprwhspr/config/hyprland/hyprwhspr-tray.sh restart",
"tooltip": true
}
}
Add CSS styling to your ~/.config/waybar/style.css:
@import "/usr/lib/hyprwhspr/config/waybar/hyprwhspr-style.css";
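After editing both files, reload Waybar so it picks up the new module and stylesheet. One common approach (assuming Waybar's default signal handling) is:
# Ask the running Waybar instance to reload its config and CSS
killall -SIGUSR2 waybar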
Click interactions:
- Left-click: Toggle Hyprwhspr on/off
- Right-click: Start Hyprwhspr (if not running)
- Middle-click: Restart Hyprwhspr
- NVIDIA (CUDA) and AMD (ROCm) are detected automatically; pywhispercpp will use the GPU when available
- No manual build steps are required
- If the toolchains are present, the installer can build pywhispercpp with GPU support; otherwise the CPU wheel is used
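If you're unsure whether a GPU will be picked up, the standard vendor tools (not part of hyprwhspr) give a quick answer:
# NVIDIA: should list the GPU and driver version
nvidia-smi
# AMD: should list ROCm-capable agents
rocminfo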
Default model: ggml-base.en.bin (~148MB), installed to ~/.local/share/pywhispercpp/models/
Available models to download:
cd ~/.local/share/pywhispercpp/models/
# Tiny models (fastest, least accurate)
wget https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-tiny.en.bin
wget https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-tiny.bin
# Base models (good balance)
wget https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-base.en.bin
wget https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-base.bin
# Small models (better accuracy)
wget https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-small.en.bin
wget https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-small.bin
# Medium models (high accuracy)
wget https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-medium.en.bin
wget https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-medium.bin
# Large models (best accuracy, requires GPU)
wget https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-large.bin
wget https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-large-v3.bin
Models large and large-v3 require GPU acceleration for reasonable performance; without a GPU, they will be extremely slow (10-30 seconds per transcription).
- tiny.en - Fastest, good for real-time dictation
- base.en - Best balance of speed/accuracy (recommended)
- small.en - Better accuracy, still fast
- medium.en - High accuracy, slower processing
- large - Best accuracy, requires GPU acceleration for reasonable speed
- large-v3 - Latest large model, requires GPU acceleration for reasonable speed
Update your config after downloading:
{
"model": "small.en"
}
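After switching models, restart the service so the new weights are loaded into memory (same command as in Troubleshooting):
# Reload hyprwhspr with the newly configured model
systemctl --user restart hyprwhspr.service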
hyprwhspr is designed as a system package:
- /usr/lib/hyprwhspr/ - Main installation directory
- /usr/lib/hyprwhspr/lib/ - Python application
- ~/.local/share/pywhispercpp/models/ - Whisper models (user space)
- ~/.config/hyprwhspr/ - User configuration
- ~/.config/systemd/user/ - Systemd services
hyprwhspr uses systemd for reliable service management:
- hyprwhspr.service - Main application service with auto-restart
- ydotool.service - Input injection daemon service
- Tray integration - All tray operations use systemd commands
- Process management - No manual process killing or starting
- Service dependencies - Proper startup/shutdown ordering
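Both units respond to the usual systemctl --user verbs, and their logs can be followed together:
# Tail logs from the main app and the input-injection daemon
journalctl --user -u hyprwhspr.service -u ydotool.service -f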
If you're having persistent issues, you can completely reset hyprwhspr:
# Stop services
systemctl --user stop hyprwhspr ydotool
# Remove runtime data
rm -rf ~/.local/share/hyprwhspr/
# Remove user config
rm -rf ~/.config/hyprwhspr/
# Remove system files
sudo rm -rf /usr/lib/hyprwhspr/
And then...
# Then reinstall fresh
./scripts/install-omarchy.sh
I heard the sound, but don't see text!
On Arch and other distros, it's fairly common for the microphone to need to be plugged in and re-selected each time you log in or out of your session, including after a restart. In your sound options, make sure the microphone is actually set; the sound utility will show input feedback from the microphone if it is.
Hotkey not working:
# Check service status for hyprwhspr
systemctl --user status hyprwhspr.service
# Check logs
journalctl --user -u hyprwhspr.service -f
# Check service status for ydotool
systemctl --user status ydotool.service
# Check logs
journalctl --user -u ydotool.service -f
Permission denied:
# Fix uinput permissions
/usr/lib/hyprwhspr/scripts/fix-uinput-permissions.sh
# Log out and back in
No audio input:
Is your mic actually available?
# Check audio devices
pactl list short sources
# Restart PipeWire
systemctl --user restart pipewire
Audio feedback not working:
# Check if audio feedback is enabled in config
cat ~/.config/hyprwhspr/config.json | grep audio_feedback
# Verify sound files exist
ls -la /usr/lib/hyprwhspr/share/assets/
# Check if ffplay/aplay/paplay is available
which ffplay aplay paplay
Model not found:
# Check if model exists
ls -la ~/.local/share/pywhispercpp/models/
# Download a different model
cd ~/.local/share/pywhispercpp/models/
wget https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-base.en.bin
# Verify model path in config
cat ~/.config/hyprwhspr/config.json | grep model
Stuck recording state:
# Check service health and auto-recover
/usr/lib/hyprwhspr/config/hyprland/hyprwhspr-tray.sh health
# Manual restart if needed
systemctl --user restart hyprwhspr.service
# Check service status
systemctl --user status hyprwhspr.service
- Check logs:
journalctl --user -u hyprwhspr.service
journalctl --user -u ydotool.service
- Verify permissions: Run the permissions fix script
- Test components: Check ydotool, audio devices, whisper.cpp
- Report issues: Include logs and system information
MIT License - see LICENSE file.
Create an issue, happy to help!
For pull requests, also best to start with an issue.
Built with ❤️ for the Omarchy community
Integrated and natural speech-to-text.