Building a Personal Fitness Assistant with OpenClaw
Turning "crayfish" into a running partner that truly understands me.

Preface
I've been playing around with OpenClaw recently, and the more I use it, the more I feel it's perfectly suited to become a long-term personal assistant.
Since I enjoy running and hiking, a very specific idea came to mind: could I transform it into a sports partner that truly understands me? Not just passively answering questions, but actively analyzing my running data, helping me generate training plans and pushing them to Garmin, and even providing me with a coach-like analysis report after my runs.
After tinkering with it, I found that not only is this feasible, but once it's up and running smoothly, the experience is incredibly seamless. The real key isn't about "letting AI figure everything out on its own," but rather about solidifying high-frequency operations and leaving the deep analysis to the AI. This way, you save on tokens while ensuring practicality.
Hardware and Tool Selection
My setup is simple:
- Device: Raspberry Pi 5
- Chat Interface: Telegram Bot
The Raspberry Pi 5's advantages are perfect for this scenario: it can run 24/7, has low power consumption and noise, and is virtually unnoticeable at home. The benefit of the Telegram Bot is the ability to customize menu buttons, making interaction very direct. In daily use, I just open Telegram, tap a menu command, and instantly see today's health data, last night's sleep quality, or an analysis of my most recent run.
From a "usability" standpoint, this is crucial. A sports assistant isn't meant to be shown off; it needs to be integrated into daily use.
First Pitfall: Token Consumption is Much Higher Than Expected
By default, OpenClaw generates a large number of context files in the workspace directory, including personality settings, user profiles, work rules, patrol configurations, conversation memories, and skill modules. The problem is that these are constantly brought into the context during runtime, causing token consumption to skyrocket quickly.
The OpenClaw working directory looks something like this (these .md files are all brought into queries as contextual memory):
~/.openclaw/workspace/
├── SOUL.md # AI Personality Definition
├── USER.md # User Profile
├── AGENTS.md # Work Rules and Command Handling
├── HEARTBEAT.md # Active Patrol Configuration
├── memory/ # Session Memory Files
│ ├── 2026-04-20-long-run-analysis.md
│ └── 2026-04-20-weekly-training-plan.md
└── skills/ # Skill Modules
I initially connected to DeepSeek's API, thinking the cost would be low. However, in practice, it burned through a significant amount of money in just a few days. The problem wasn't a single call, but its tendency to enter a high-frequency trial-and-error state when faced with "ambiguous tasks": constantly testing, generating temporary code, and confirming context. Each round consumed tokens, but the actual productive output was low.
Later, I started using Claude to refine my sports assistant requirements. I noticed it was much more focused on this type of task, less likely to go back and forth on the same issue, and its translation from requirements to code was efficient and clear. Of course, for OpenClaw's daily API calls, I still use DeepSeek, mainly because its API is affordable and its Chinese is good.
This made me realize a key point: Don't hand over high-frequency, deterministic tasks for the AI to figure out on the fly. Instead, engineer these tasks first.

At the peak of that vague-requirements phase, daily token consumption hit 80 million. Once the requirements were clear and the tasks had become stable engineering work, overall usage dropped to less than 1% of that.
Core Strategy: Three Steps to Turn OpenClaw into a Truly Usable Sports Assistant
The entire solution can be summarized in three steps:
- Write common commands as clear scripts
- Package the scripts as OpenClaw Skills
- Separate queries and analysis, handling them differently
These three steps seem simple, but they fundamentally determine whether the system is just "fun" or truly "useful."
Step 1: Solidify Common Commands into Scripts
Instead of having the AI think on the fly about "how to get Garmin data" or "how to generate a training plan," it's better to encapsulate these high-frequency operations into scripts.
For example, I wrote a unified entry point garmin_commands.py that organizes the most common health data and running queries into clear commands:
# Today's Health Data
python3 scripts/garmin_commands.py health_data
# Last Night's Sleep Analysis
python3 scripts/garmin_commands.py sleep_analysis
# Most Recent Run
python3 scripts/garmin_commands.py last_run
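I won't paste the whole script here, but the dispatcher pattern is simple. A minimal sketch, assuming the garminconnect package (the same dependency the Skill declares) and credentials in environment variables; the real script adds more formatting and error handling on top:

# Minimal sketch of scripts/garmin_commands.py: one subcommand per query.
# Assumes the garminconnect package and GARMIN_EMAIL / GARMIN_PASSWORD env vars.
import os
import sys
import json
from datetime import date
from garminconnect import Garmin

def main():
    client = Garmin(os.environ["GARMIN_EMAIL"], os.environ["GARMIN_PASSWORD"])
    client.login()
    today = date.today().isoformat()
    commands = {
        "health_data": lambda: client.get_stats(today),
        "sleep_analysis": lambda: client.get_sleep_data(today),
        "last_run": lambda: client.get_last_activity(),
    }
    cmd = sys.argv[1] if len(sys.argv) > 1 else ""
    if cmd not in commands:
        print(f"Unknown command: {cmd}. Options: {', '.join(commands)}")
        sys.exit(1)
    # Print structured output so it can be relayed to Telegram as-is.
    print(json.dumps(commands[cmd](), ensure_ascii=False, indent=2, default=str))

if __name__ == "__main__":
    main()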
Training generation and pushing go through another unified entry point, integrated_openclaw_skill.py:
# Generate and push an easy run to Garmin Connect
python3 ~/.openclaw/workspace/skills/garmin-workout/scripts/integrated_openclaw_skill.py \
--command generate_and_push --args '{"command": "/easy_run 8km tomorrow"}'
The significance of this is straightforward: The AI no longer needs to "figure it out"; it just needs to "call the tool."
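Under the hood, the --command/--args interface is just a thin argparse layer over a handler table. A rough sketch (the actual generate_and_push logic is stubbed out here):

# Sketch of the integrated_openclaw_skill.py entry point. The real
# generate_and_push parses requests like "/easy_run 8km tomorrow",
# builds the workout from a template, and pushes it to Garmin Connect.
import argparse
import json

def generate_and_push(args: dict) -> dict:
    # Stub for illustration only.
    return {"status": "ok", "command": args.get("command", "")}

HANDLERS = {"generate_and_push": generate_and_push}

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--command", required=True, choices=sorted(HANDLERS))
    parser.add_argument("--args", default="{}", help="JSON string of arguments")
    cli = parser.parse_args()
    result = HANDLERS[cli.command](json.loads(cli.args))
    print(json.dumps(result, ensure_ascii=False, indent=2))

if __name__ == "__main__":
    main()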
I currently support 6 training types, each with preset structures and pace zones:
| Command | Type | Structure |
|---|---|---|
| /easy_run | Easy Run | Warm-up → Main Run @6:30–7:00/km → Cool-down |
| /tempo_run | Tempo Run | Warm-up → Main Run @5:20–5:40/km → Cool-down |
| /long_run | Long Run | Warm-up → Main Run @5:50–6:10/km → Cool-down |
| /interval_run | Interval Run | Warm-up → Repeat structure → Cool-down |
| /recovery_run | Recovery Run | Warm-up → Main Run @7:15–7:45/km → Cool-down |
| /fartlek_run | Fartlek Run | Warm-up → Repeat structure → Cool-down |
After this step, most daily commands respond within a few seconds, with almost no additional inference cost.
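Those presets are just data, which is exactly why this part doesn't need an LLM. A sketch of how the table above could be encoded (the real format in generate_workout.py differs):

# Illustrative preset table mirroring the six training types above.
# Paces are (low, high) in seconds per km; "repeat" stands for the interval/fartlek repeat block.
WORKOUT_PRESETS = {
    "easy_run":     ["warmup", ("run", 390, 420), "cooldown"],  # 6:30–7:00/km
    "tempo_run":    ["warmup", ("run", 320, 340), "cooldown"],  # 5:20–5:40/km
    "long_run":     ["warmup", ("run", 350, 370), "cooldown"],  # 5:50–6:10/km
    "recovery_run": ["warmup", ("run", 435, 465), "cooldown"],  # 7:15–7:45/km
    "interval_run": ["warmup", "repeat", "cooldown"],
    "fartlek_run":  ["warmup", "repeat", "cooldown"],
}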
Step 2: Package Scripts as OpenClaw Skills
Scripts alone aren't enough. The key to making this system run stably is packaging them as OpenClaw Skills.
The core of a Skill is a SKILL.md file, placed in the skills/garmin-workout/ directory. It essentially tells the AI:
- What this skill does
- When to call it based on user requests
- What the call command should look like
- How to pass parameters
For example:
---
name: garmin-workout
description: Garmin Connect training integration. Use when the user asks about training plans,
pushing workouts to Garmin, or querying health data (sleep/VO2 Max/heart rate/body battery).
Covers all 6 training types. Requires python3 and garminconnect.
---
This layer is crucial because it solves a practical problem: After an AI restart, it can't rely on "memory" to work; it can only rely on "structured entry points."
A script sitting on disk is just a tool; once the Skill is clearly written, the AI actually knows when and how to use it.
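Below the frontmatter, the body of SKILL.md just spells out the invocations, roughly like this (paraphrased, not the verbatim file):

When the user asks about health or running data, run:
  python3 scripts/garmin_commands.py <health_data|sleep_analysis|last_run>
When the user asks to generate or push a workout, run:
  python3 ~/.openclaw/workspace/skills/garmin-workout/scripts/integrated_openclaw_skill.py \
    --command generate_and_push --args '{"command": "<the user request>"}'
Return the script output as-is; do not write exploratory code.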
Simultaneously, I mapped Telegram commands to specific execution scripts in AGENTS.md, for example:
| Telegram Command | Execution Script |
|---|---|
| /last_run | python3 .../garmin_commands.py last_run |
| /sleep_analysis | python3 .../garmin_commands.py sleep_analysis |
| /easy_run 8km tomorrow | integrated_openclaw_skill.py --command generate_and_push ... |
The rules are also written very directly:
Upon receiving these commands, immediately run the corresponding script and return the output as-is; do not ask the user what data they want, and do not generate exploratory code on the fly.
After doing this, the system's behavior becomes much more stable: no detours, no trial-and-error, no repeated confirmations—just direct execution.
Step 3: Retain AI's Deep Analysis Capability
Scripting high-frequency tasks doesn't mean abandoning the value of AI. Quite the opposite, the real place for AI to shine is "analysis," not "data retrieval."
For example, after a run, I first send /last_run to get a structured data summary:
🏃 2026-04-21 Running Data
📋 Xuhui - Tempo Run 5.0km_2026-04-21
📏 Distance: 5.09 km
⏱️ Time: 33 minutes
⚡ Avg Pace: 6:30/km
🚀 Fastest 1km: 5:38
🚀 Fastest 5km: 32:19
💓 Avg HR: 139 bpm
🔴 Max HR: 162 bpm
📊 HR Zones:
Z1 Warm-up: 0 min (1%)
Z2 Aerobic: 6 min (20%)
Z3 Threshold: 23 min (70%)
Z4 Anaerobic: 2 min (9%)
👟 Cadence: 174 spm
⛰️ Elevation Gain: 16 m
🔥 Calories: 302 kcal
🔋 Energy Expenditure: 9 points
📈 Aerobic Effect: 3.0
🐸 Analyze well, run faster!
This stage is just script execution, consuming almost no tokens. If I then ask, "How was this run? How should I adjust next time?"—that's when I let the AI enter analysis mode.
It can then provide advice more like a real coach, based on metrics like heart rate zones, pace control, cadence, and aerobic effect, for example:
Compared to the tempo run on April 17th (7:03/km, HR 132):
- Pace improved by 33 sec/km
- Heart rate 132 → 139 bpm: intensity appropriately increased
- Training effect 2.8 → 3.0: clear progress
I've increasingly felt that this "two-step" design is particularly critical:
- Queries go through scripts: fast, stable, token-efficient
- Analysis goes through AI: slower is okay, but it needs to be high quality
It's precisely because of this separation that the system truly balances daily usability with the value of AI.
Actual User Experience: It Starts to Feel Like a Real Daily Assistant
My Telegram menu now roughly has these commands.
Training Related
- /training_plan – View the training plan for the upcoming week
- /easy_run 5km today – Generate an easy run and push to Garmin
- /tempo_run 8km tomorrow – Generate a tempo run
- /long_run 10km weekend – Generate a long run
- /interval_run today – Generate an interval run
- /garmin_workouts – View scheduled workouts for this week
Health Data Related
- /health_data – Today's health overview
- /sleep_analysis – Last night's sleep analysis
- /last_run – Most recent run analysis
- /running_stats – Running stats for the past 7 days
- /body_battery – Today's body battery
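For reference, registering these as a menu is a single call to the Telegram Bot API's setMyCommands method (depending on your setup, OpenClaw's Telegram integration may handle this for you). A sketch, assuming the bot token lives in TELEGRAM_BOT_TOKEN:

# One-off registration of the Telegram menu via the Bot API's setMyCommands.
import os
import requests

COMMANDS = [
    {"command": "training_plan", "description": "View the training plan for the upcoming week"},
    {"command": "easy_run", "description": "Generate an easy run and push to Garmin"},
    {"command": "tempo_run", "description": "Generate a tempo run"},
    {"command": "long_run", "description": "Generate a long run"},
    {"command": "interval_run", "description": "Generate an interval run"},
    {"command": "garmin_workouts", "description": "View scheduled workouts for this week"},
    {"command": "health_data", "description": "Today's health overview"},
    {"command": "sleep_analysis", "description": "Last night's sleep analysis"},
    {"command": "last_run", "description": "Most recent run analysis"},
    {"command": "running_stats", "description": "Running stats for the past 7 days"},
    {"command": "body_battery", "description": "Today's body battery"},
]

token = os.environ["TELEGRAM_BOT_TOKEN"]
resp = requests.post(
    f"https://api.telegram.org/bot{token}/setMyCommands",
    json={"commands": COMMANDS},
    timeout=10,
)
resp.raise_for_status()
print(resp.json())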
For example, I send:
/easy_run 8km tomorrow
In a few seconds, a new training session for tomorrow appears in my Garmin Connect calendar, complete with warm-up, main run, and cool-down segments, along with pre-set pace targets.
This feels different. It's no longer just a "Q&A" chat tool; it's starting to become a personal sports assistant that can actually execute actions.
Current Directory Structure
The entire project currently looks roughly like this:
~/.openclaw/workspace/
├── AGENTS.md # Telegram command handling rules (Core)
├── SOUL.md # AI Personality Definition
├── USER.md # User Information
├── HEARTBEAT.md # Active Patrol Tasks
├── scripts/
│ └── garmin_commands.py # Unified entry point for health data
├── memory/
│ ├── 2026-04-20-long-run-analysis.md
│ └── 2026-04-20-weekly-training-plan.md
└── skills/
└── garmin-workout/
├── SKILL.md # Skill description (AI's entry point)
├── scripts/
│ ├── integrated_openclaw_skill.py # Training generation + push
│ ├── garmin_auth.py # Authentication management
│ ├── generate_workout.py # Generate training JSON
│ ├── fitcoach_simple.py # FitCoach AI planning
│ └── ...
└── workouts_template/
├── easy_run_5km.json
├── tempo_run_5km.json
├── long_run_10km.json
└── ...
From an engineering perspective, the advantages of this structure are clear: clear responsibilities, easy maintenance, and convenient for future expansion with more training types or health metrics.
Plan for Backups Early
OpenClaw's configuration, memory, and skills are mostly concentrated in the ~/.openclaw directory. So the simplest and most direct approach is to regularly back up the entire directory, or at least the workspace.
# Backup the entire openclaw configuration
tar -czf openclaw_backup_$(date +%Y%m%d).tar.gz ~/.openclaw
# Or just backup workspace (skills, scripts, memory)
tar -czf workspace_backup_$(date +%Y%m%d).tar.gz ~/.openclaw/workspace
Adding a cron job to automatically sync to external storage or the cloud daily can keep the risk relatively low. This way, even if the SD card fails later, critical content like training memories, Skill configurations, and Telegram menu settings can be restored relatively quickly.
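For example, a crontab entry along these lines (the destination path is just an illustration) produces a dated workspace archive every morning:

# Example crontab entry (crontab -e); adjust the destination to your external storage or sync folder.
# Note the escaped % signs, which cron would otherwise treat as newlines.
0 3 * * * tar -czf /mnt/backup/workspace_backup_$(date +\%Y\%m\%d).tar.gz -C "$HOME" .openclaw/workspace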
For a long-running personal assistant like this, backups aren't a "deal with it later" task; they should be incorporated into the design from the start.
Summary
My biggest takeaway from tinkering with OpenClaw is this: To make it a truly useful personal sports assistant, the key isn't making the AI "smarter," but making the system's division of labor clearer.
The core idea is actually quite simple:
- Solidify high-frequency operations into scripts: Deterministic tasks like Garmin queries and training generation are called directly, not left to the AI's improvisation.
- Use Skills to persist entry points: Ensure the AI knows where the tools are and when to call them every time it starts.
- Separate queries and analysis: Daily data retrieval goes through scripts; deep advice is left for the AI.
With the Raspberry Pi running 24/7, Telegram as the interaction interface, and Garmin handling the training execution, this whole system is no longer just a "chatty AI." It becomes a sports partner truly integrated into daily life.
Related Project
GitHub: sayidhe/openclaw-garmin-workout