Your Codebase Now Has a Telegram Number

· 11 min read
ai · telegram · claude · developer-tools · devops · self-hosted

I spent an afternoon wiring up an AI coding agent that lives in our team's Telegram group. It can read our codebase, answer questions about it, and make edits — all from a @mention in chat. Here's what I actually did, what broke, and why I think this is a more interesting pattern than it looks.

The Problem with AI Coding Tools Today

Most AI coding tools are personal. You open a chat, you paste some code, you get an answer, you close the tab. The next time your teammate has the same question, they start from zero. There's no shared memory, no team context, no continuity.

Claude Code changed some of this by giving the AI direct filesystem access — it can actually read and write files, not just reason about code you paste in. But it still runs on your machine, in your terminal, and only you can talk to it.

What I wanted was something different: one agent, running on shared infrastructure, accessible to my whole team from the chat tool we're already in. Not a new dashboard. Not a new tool to onboard people to. Just a bot in a group chat that actually knows the codebase.

If you've read how I built an AI holding company that runs from my phone, this is the team-facing version of that same idea: Telegram as the control plane, Claude Code as the brain, a VM as the body.

The Architecture

The setup has three layers:

A cloud VM — a small Linux machine running 24/7. This is where the agent lives, where the repos are cloned, and where any code edits actually happen. Nothing runs on anyone's laptop.

OpenClaw — a self-hosted agent gateway. It handles the Telegram integration, manages the agent's identity and sessions, and exposes an ACP (Agent Communication Protocol) interface that Claude Code can talk to. Think of it as the nervous system: it receives messages from Telegram, routes them to the right agent backend, and sends responses back.

Claude Code — the actual AI brain. Running as an ACP sub-agent, it has full filesystem access to the repos on the VM. When someone sends a message in Telegram, OpenClaw receives it and invokes Claude Code, which can read files, make edits, and respond with structured output.

```mermaid
flowchart LR
    A["👤 Teammate"] -->|message| B["💬 Telegram\nGroup Chat"]
    B -->|Bot API| C["🔌 OpenClaw\nGateway (VM)"]
    C -->|ACP| D["🧠 Claude Code\nSub-Agent"]
    D -->|read/write| E["📁 Filesystem\n(Repos)"]
    D -->|API call| F["🔑 Anthropic API"]
    D -->|response| C
    C -->|reply| B
    style A fill:#1a1a2e,stroke:#e94560,color:#fff
    style B fill:#1a1a2e,stroke:#0f3460,color:#fff
    style C fill:#1a1a2e,stroke:#533483,color:#fff
    style D fill:#1a1a2e,stroke:#e94560,color:#fff
    style E fill:#1a1a2e,stroke:#0f3460,color:#fff
    style F fill:#1a1a2e,stroke:#533483,color:#fff
```

Everything except the Anthropic API call is self-hosted. The bot token and API key live on the VM. The repos never leave your infrastructure.

Setting It Up

The VM

I used a standard small instance on Google Cloud — 2 vCPU, 8GB RAM, 30GB disk, Ubuntu 24.04. Cost is roughly $50/month. You could run this on any Linux VPS.

After provisioning, the setup is:

```shell
sudo apt update && sudo apt upgrade -y

# Node 22 via nodesource
curl -fsSL https://deb.nodesource.com/setup_22.x | sudo -E bash -
sudo apt install -y nodejs git
```

SSH keys go on the VM and get added to GitHub so it can clone private repos. The Anthropic API key goes in .bashrc as an environment variable so every process that runs on the machine can pick it up.
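Concretely, those two steps look roughly like this. The key comment, file paths, and the placeholder API key value are illustrative, not prescriptive:

```shell
# Create a deploy key for the VM (no passphrase) and print the public half;
# add that to GitHub so the VM can clone private repos.
ssh-keygen -t ed25519 -C "codebot-vm" -f ~/.ssh/id_ed25519 -N ""
cat ~/.ssh/id_ed25519.pub

# Persist the Anthropic API key for every shell on the machine.
# The key value here is a placeholder.
echo 'export ANTHROPIC_API_KEY="sk-ant-..."' >> ~/.bashrc
source ~/.bashrc
```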

The Gateway

OpenClaw installs as a global npm package and runs as a systemd user service. The onboarding wizard walks you through the key decisions: which LLM provider, which channel (Telegram in this case), what auth mode. It generates a gateway auth token, writes a systemd unit file, and starts the service.

```mermaid
flowchart TD
    subgraph INSTALL ["📦 Install"]
        I1["npm install -g openclaw"]
        I2["openclaw init\n(onboarding wizard)"]
    end
    subgraph CONFIG ["⚙️ Configure"]
        C1["Set LLM provider\n(Anthropic)"]
        C2["Set channel\n(Telegram)"]
        C3["Set auth mode\n(allowlist)"]
        C4["Generate gateway\nauth token"]
    end
    subgraph RUN ["🚀 Run"]
        R1["systemd unit file\ngenerated"]
        R2["systemctl --user\nenable + start"]
        R3["Gateway live\n✅"]
    end
    I1 --> I2 --> C1 --> C2 --> C3 --> C4 --> R1 --> R2 --> R3
    style INSTALL fill:#0d1117,stroke:#238636,color:#c9d1d9
    style CONFIG fill:#0d1117,stroke:#1f6feb,color:#c9d1d9
    style RUN fill:#0d1117,stroke:#a371f7,color:#c9d1d9
```

A few non-obvious things worth knowing:

Privacy mode on the Telegram bot matters. With privacy mode on (the BotFather default), a bot in a group only receives /commands and direct replies to its own messages — plain @mentions in message text never reach it at all. Disable privacy mode in BotFather, then remove the bot from the group and re-add it so the change takes effect.

The groupPolicy config controls who can trigger the agent. Default is pairing mode (only pre-authorized users). For a team setup you want either open (anyone in the group) or allowlist (specific Telegram user IDs). Start with open to test, then lock it down with an allowlist once you confirm it's working.
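Purely as a hypothetical sketch of what the locked-down state might look like — `groupPolicy` and the idea of numeric Telegram IDs come from the setup above, but the `allowFrom` key name, the nesting, and the file layout are my assumptions and will depend on your OpenClaw version:

```json
{
  "channels": {
    "telegram": {
      "groupPolicy": "allowlist",
      "allowFrom": [12345678, 87654321]
    }
  }
}
```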

The group chat ID is a negative integer. Get it by sending a message in the group and hitting the Telegram Bot API's getUpdates endpoint. It looks like -5144360085.
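For example, with the bot token in `$BOT_TOKEN` and `jq` installed, something like this pulls the chat IDs out of the raw response. The jq filter assumes the standard Bot API update shape:

```shell
# Ask the Bot API for recent updates and pull out each chat's id and title.
# $BOT_TOKEN is the token BotFather gave you; send a message in the group
# first so there is an update to read.
curl -s "https://api.telegram.org/bot${BOT_TOKEN}/getUpdates" \
  | jq '.result[].message.chat | {id, title}'
# Group chats come back with a negative id
```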

The Permission Problem

This was the hairiest part. OpenClaw's Claude Code integration (called acpx) tries to install itself into the npm global package directory on first boot. That directory is owned by root. The gateway process runs as your user. Result: EACCES on every startup, the plugin fails to initialize, and the gateway enters a broken state where it accepts WebSocket connections but hangs on RPC calls.

The fix is to give your user ownership of the acpx extension directory before the gateway runs:

```shell
sudo mkdir -p /usr/lib/node_modules/openclaw/extensions/acpx/node_modules
sudo chown -R YOUR_USER:YOUR_USER /usr/lib/node_modules/openclaw/extensions/acpx
```

After that, restart the gateway and watch the logs. You want to see acpx runtime backend ready — not acpx runtime setup failed.
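A quick way to check — the unit name `openclaw` and the flat `/tmp/openclaw/` search path are assumptions, but the two message strings are what the gateway actually prints:

```shell
# Restart the gateway, then look for the plugin status line in the logs.
systemctl --user restart openclaw
grep -rh "acpx runtime" /tmp/openclaw/ | tail -n 5
# healthy: acpx runtime backend ready
# broken:  acpx runtime setup failed
```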

Codebase Context

When you have multiple repos, you need to make intentional decisions about how the agent navigates between them. I settled on setting the agent's working directory to the parent folder that contains both repos, then writing a CLAUDE.md file at that level that explains the project structure.

Claude Code natively reads CLAUDE.md files as project context. So you can tell it: this is the frontend, this is the backend, here's what lives where, here's how to behave when something is ambiguous. It's a simple text file but it meaningfully changes how the agent responds to vague questions.

Mine looks roughly like:

```markdown
# Project Codebase

## Repos
- frontend/ — Next.js app. Pages in pages/, components in components/.
- backend/ — API server.

## Guidelines
- Always confirm which repo is affected before making edits.
- Treat "the frontend" as frontend/ and "the backend" or "API" as backend/.
- After making edits, summarize what files were changed and why.
```

This is the same principle behind context graphs for go-to-market systems — give the AI structured context about the domain so it makes better decisions. The difference is that CLAUDE.md is a static file you write once, while context graphs update themselves.

What It Actually Feels Like

Once it's running, the UX is surprisingly natural. You're in a group chat with your team. Someone mentions a bug. Someone else @mentions the bot and asks what the relevant code looks like. The bot responds in the chat with the actual code and an explanation. Then you can ask it to make the fix. It does. You SSH into the VM and push.

```mermaid
sequenceDiagram
    participant Dev1 as Developer 1
    participant TG as Telegram Group
    participant Bot as @CodeBot
    participant CC as Claude Code
    participant FS as Codebase
    Dev1->>TG: "auth is broken again"
    Dev1->>TG: "@CodeBot what does the auth middleware look like?"
    TG->>Bot: message received
    Bot->>CC: invoke with query
    CC->>FS: read auth files
    FS-->>CC: file contents
    CC-->>Bot: explanation + code snippet
    Bot-->>TG: formatted response
    Note over TG: Everyone in the group sees the answer
    Dev1->>TG: "@CodeBot fix the token expiry check"
    TG->>Bot: message received
    Bot->>CC: invoke with edit request
    CC->>FS: edit auth middleware
    CC-->>Bot: "Updated auth.ts — changed expiry from 1h to 24h"
    Bot-->>TG: edit summary
```

The latency is real — Claude Code takes a few seconds to think and respond, sometimes longer on complex tasks. But the interaction model is asynchronous by nature (it's a chat), so it doesn't feel slow the way it would in a synchronous tool.

The team sharing aspect is more valuable than it sounds. When the agent explains a piece of code in the group chat, everyone sees it. Questions that used to get asked four times get asked once. Context that used to live in one person's head gets surfaced in the chat history. Onboarding new teammates gets easier because they can just ask the bot about unfamiliar parts of the codebase.

What's Still Manual

A few things I haven't automated yet that are worth flagging:

Git push. The agent edits files on the VM but doesn't automatically commit or push. You still need to SSH in and push. Setting up auto-commit after agent edits is straightforward (a git hook or a wrapper script) but adds surface area for pushing broken code. In my phone-based setup, Claude handles the full git push cycle because I'm the only operator. With a team, the blast radius of auto-push is higher.

Security hardening. The allowlist setup (limiting which Telegram user IDs can trigger the agent) matters if you're running with groupPolicy: open. Get your team's numeric Telegram IDs from @userinfobot and set the allowlist. Otherwise anyone who ends up in the group can invoke Claude Code against your codebase.

Context windows. For very large repos, Claude Code's context window becomes a constraint. It handles this reasonably by being selective about what it loads, but there are edge cases where it misses relevant files. Being specific in your requests helps: "in the auth module" is better than "somewhere in the backend."

Why This Matters

I've been thinking about what makes this pattern interesting beyond the immediate convenience.

The boring version is: it's a bot that can look at code. That's not new.

The more interesting version is: this is the beginning of AI agents as shared infrastructure rather than personal tools. The agent has a stable identity, persistent access to the codebase, and is accessible to the whole team from a zero-friction interface. It accumulates context across sessions. It's not a cursor plugin that dies when you close the editor.

```mermaid
mindmap
  root((AI Coding Agent<br/>Evolution))
    Personal Tools
      Copy-paste into ChatGPT
      IDE plugins (Copilot, Cursor)
      CLI tools (Claude Code locally)
    **Shared Infrastructure**
      **Team-accessible agent**
      **Persistent codebase access**
      **Shared context & history**
      **Zero-friction interface**
    Future
      Agents that watch PRs
      Auto-triage incoming bugs
      Cross-repo reasoning
      Institutional memory
```

Most teams have some version of a shared technical brain — usually a senior engineer who everyone asks questions. That person is a bottleneck. An agent like this doesn't replace the senior engineer, but it handles the routine load: explaining code, finding where things are defined, making mechanical changes, writing first drafts of components. That's not nothing.

The other thing worth noting is that the entire system is self-hosted. The LLM calls go to Anthropic's API, but the agent gateway, the repos, and the session state all live on your infrastructure. For teams with code sensitivity concerns, this matters.

The Stack

For reference, the full stack is:

| Component | Choice | Notes |
|-----------|--------|-------|
| Compute | Any Linux VPS | GCP e2-standard-2, ~$50/month |
| Agent gateway | OpenClaw | Self-hosted, runs as systemd service |
| AI backend | Claude Code | Via ACP/acpx plugin |
| LLM provider | Anthropic API | Only external dependency |
| Channel | Telegram Bot API | Group chat with @mention trigger |
| Codebase context | CLAUDE.md | Convention for project structure |

Total setup time from zero to working: about two hours, mostly fighting the acpx permissions issue and the Telegram group policy config.

If You're Going to Try This

Start simple. Get the bot responding to basic questions before you try to wire up code editing. The Telegram integration and the Claude Code integration are separable — test them independently.

Watch the logs. OpenClaw writes structured JSON logs to /tmp/openclaw/. The signal-to-noise is decent and most failures have clear error messages if you look.
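A one-liner I find useful for this, assuming `jq` is installed — the `*.log` filename pattern is a guess, so check what actually lands in the directory:

```shell
# Follow the gateway's structured JSON logs. fromjson? pretty-prints lines
# that parse as JSON and passes anything else through untouched.
tail -f /tmp/openclaw/*.log | jq -R 'fromjson? // .'
```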

Lock down the allowlist before you share the group widely. An open groupPolicy with no sender restrictions is fine for testing but not for production.

And write a good CLAUDE.md. The more context you give the agent about your project structure and conventions, the less you have to spell out in every message.

The dream — AI agents as persistent, shared, infrastructure-grade tooling — is closer than most people's current workflows suggest. This is a working version of it.


This post is part of a series on building with AI agents. See also: building an AI holding company from Telegram for the solo version of this pattern, and self-correcting context graphs for how structured context improves AI decision-making.