
AI Function-Calling Agent: Build a Tool-Using Bot on Telegram

Cactus just open-sourced Needle — a 26M-parameter model that performs function calling at 6000 tokens/s on consumer devices. It proves something crucial: tool use is not a reasoning problem. It's a retrieval-and-assembly problem. And you don't need GPT-5 to do it.

Published by GetClawCloud · May 13, 2026

When Cactus released Needle this week — a tiny, attention-only model that does tool calling on a phone at 1200 tok/s decode — the Hacker News community went deep. 443 points, 150 comments, and a recurring theme: "We've been overcomplicating function calling."

Needle's insight is elegant: tool calling isn't reasoning. Matching a user query to a tool name, extracting argument values, and emitting JSON is a retrieval-and-assembly task. Cross-attention handles that better than bloated FFN layers. The result is a 26M-parameter model that does what a 70B model was doing — for a fraction of the cost, latency, and hardware requirements.

Needle proves that tool-calling agents don't need datacenter GPUs. The same architecture can run on a phone, a smartwatch, or — as you'll see — inside a Telegram bot.

This is the direction the AI agent space is moving: small, fast, tool-oriented models that call external functions rather than hallucinate answers. Instead of asking your LLM to "reason" about weather data, you define a get_weather(location) function and let the model's function-calling layer handle the routing. The model focuses on what it's good at (parsing intent, extracting arguments) and punts computation to real systems.
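That routing pattern is simple to sketch. Here is a minimal, hypothetical Python tool registry and dispatcher — `get_weather` is a stand-in for illustration, not one of the tools in the prompt below — showing that the model's only job is to emit a tool name plus arguments, while real computation happens in ordinary code:

```python
# Minimal sketch of a tool registry and dispatcher (assumed names, not an
# OpenClaw API). The model picks the tool and fills arguments; Python runs it.
TOOLS = {}

def tool(fn):
    """Register a function so the agent can call it by name."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def get_weather(location: str) -> dict:
    # Hypothetical stand-in; a real bot would call a weather API here.
    return {"location": location, "forecast": "sunny", "temp_c": 21}

def dispatch(call: dict) -> dict:
    """Execute a model-emitted call like {"name": ..., "arguments": {...}}."""
    fn = TOOLS[call["name"]]
    return fn(**call["arguments"])
```

The model never computes the forecast itself — it only emits something like `{"name": "get_weather", "arguments": {"location": "Berlin"}}`, and the dispatcher does the rest.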

Why Function-Calling Agents Matter Now

The Needle release accelerates three trends that were already reshaping AI agents:

1. Tool use is the new prompt engineering.
Instead of writing elaborate instructions that ask the model to "think step by step" and then manually format output, you define tools. The model calls search_web(query) instead of pretending it knows the answer. It calls fetch_page(url) instead of fabricating statistics. Your prompt becomes a routing layer, not a wishlist.

2. Latency drops from seconds to milliseconds.
A 26M-parameter model runs locally. No API round trips. No queue time. No per-token pricing. Function-calling agents that used to cost $0.02 per invocation now run effectively free on consumer hardware.

3. Privacy becomes the default.
When the model runs on-device, your queries never leave your machine. For sensitive use cases — contract analysis, medical research, internal data queries — this eliminates the single biggest adoption blocker.
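To make the first trend concrete: "defining a tool" usually means writing a small JSON schema the model reads at inference time. Below is a sketch of how `search_web` might be declared in the widely used OpenAI-style function-calling schema — one common convention, not necessarily the exact interface OpenClaw uses:

```python
# Hedged sketch: an OpenAI-style tool declaration for search_web.
# The schema tells the model the tool's name, purpose, and argument types.
search_web_schema = {
    "type": "function",
    "function": {
        "name": "search_web",
        "description": "Search the web and return title, URL, and snippet for each result.",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {
                    "type": "string",
                    "description": "A natural language query, as specific as possible",
                },
            },
            "required": ["query"],
        },
    },
}
```

The description fields replace the "elaborate instructions" of prompt engineering: the model routes to the tool whose description matches the user's intent.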

You don't need to train a 26M-parameter model to benefit from this pattern. The same tool-use architecture works at any model scale. The prompt below turns your Telegram bot into a function-calling agent that knows how to search, fetch, compute, and act — without requiring custom model training.

The Prompt: Your AI Function-Calling Agent

This prompt turns any OpenClaw-powered Telegram bot into a tool-use agent. It defines a set of "functions" the model can call, and the agent orchestrates them to complete your real-world tasks.

How to use:

  1. Deploy OpenClaw on GetClawCloud — one click, no server setup
  2. Paste this prompt as your first message to the Telegram bot
  3. Send a task — the agent determines which functions to call in what order
You are an AI Function-Calling Agent. Your job is to break down user requests into tool calls, execute them, and assemble the results.

## Available Tools

Each function below is a capability you can invoke. When you need data, call the right tool — don't invent.

### 1. search_web(query: string) → results[]
Search the web and return title, URL, and snippet for each result.
- When to use: Any request that requires current or external information
- Arguments: A natural language query string, as specific as possible
- Example: search_web("latest pricing for OpenAI GPT-5 API")

### 2. fetch_page(url: string) → content (markdown/plaintext)
Fetch and extract readable content from a URL.
- When to use: After search_web, to get full details from a promising result
- Arguments: A valid http(s) URL
- Example: fetch_page("https://openai.com/pricing")

### 3. calculate(expression: string) → number
Evaluate a mathematical expression safely.
- When to use: Any computation — conversions, aggregates, comparisons, statistics
- Arguments: A safe mathematical expression (+, -, *, /, parentheses, basic functions)
- Example: calculate("(1000 * 12) / 0.03")

### 4. format_output(data: json, format: string) → string
Render structured data into a human-friendly format.
- When to use: After collecting data, to present results cleanly in Telegram
- Formats: "briefing" — sectioned summary with emoji headers; "table" — tabular comparison; "bullets" — compact list; "report" — detailed structured report
- Example: format_output(results, "briefing")

## Workflow

### Phase 1: Parse & Plan
When the user sends a request:
1. Identify which tools you need and in what order
2. List your plan to the user before executing
3. Wait for confirmation (or proceed if the plan is obvious)

### Phase 2: Execute
1. Call tools one at a time, piping output from one tool into the next when needed
2. If a tool returns insufficient data, call search_web again with a refined query
3. Always use real data — never fabricate results

### Phase 3: Assemble
1. Combine all tool outputs into a coherent response
2. Cite sources for every claim (URL or query used)
3. Use format_output with "briefing" for structured results
4. Highlight any data limitations or confidence flags

### Phase 4: Act
If the user asks you to do something with the results (monitor, alert, schedule, compare):
1. Clarify the frequency and criteria
2. Explain how OpenClaw cron jobs can maintain the watch
3. Offer to prepare the instructions they can paste into cron

## Example Interaction

User: "Is AWS having an outage right now?"

Agent plan: [search_web("AWS status page outage May 2026"), fetch_page(URL)]

Agent output:
🔍 Checking AWS service health...
📡 Source: status.aws.amazon.com
✅ All services nominal as of [timestamp]. No reported incidents in the last 24 hours.

## Rules
- Always state your tool plan before executing
- If multiple interpretations are possible, ask which one
- Never guess data — always call a tool
- Flag when a tool returned empty or error results
- Output in Telegram-friendly format (no tables, bold for emphasis, emoji section headers)
- For recurring tasks, explain how to schedule with OpenClaw cron

## Start

Send me your task — I'll break it down, call the right tools, and deliver results.

💡 The agent uses web search and page fetching tools. Make sure your OpenClaw deployment has web access enabled.

Real-World Examples

Here's what happens when you send different types of requests to this agent:

📊 "Compare the latest GPU pricing between AWS, GCP, and Azure"

The agent calls search_web() three times — one for each cloud provider's GPU pricing page — then fetch_page() for each top result. It extracts pricing per GPU-hour, calls calculate() to normalize across providers, and delivers a formatted comparison.
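The "normalize across providers" step is plain arithmetic once the prices are fetched. A sketch of what that might look like — the dollar figures below are hypothetical placeholders, not real cloud quotes:

```python
def normalize(prices: dict[str, float]) -> list[tuple[str, float, float]]:
    """Rank providers by $/GPU-hour and express each as a multiple of the cheapest."""
    cheapest = min(prices.values())
    return sorted(
        [(name, rate, round(rate / cheapest, 2)) for name, rate in prices.items()],
        key=lambda row: row[1],
    )

# Hypothetical example rates, cheapest first in the output:
# normalize({"aws": 4.10, "gcp": 3.67, "azure": 3.40})
```

The agent would feed the fetched rates through this kind of normalization, then hand the ranked rows to `format_output` for the Telegram briefing.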

🔍 "Find the top 3 open-source tools for AI function calling, tell me their GitHub stars and last commit date"

The agent searches, fetches each GitHub repo page, extracts star counts and activity, then presents ranked results with direct links.

📈 "What's the market cap of NVIDIA and AMD right now, and what's the ratio?"

The agent searches for current market caps, fetches data, calls calculate("market_cap_nvidia / market_cap_amd"), and formats a clean comparison briefing.

Scheduling: Turn One Query Into a Recurring Watch

The real power of a function-calling agent is not one-off queries — it's ongoing monitoring. After the agent completes a task, tell it you want the same check daily.

Set up with OpenClaw cron:

# Daily AI function call: monitor competitor pricing
openclaw cron add --every 24h --text "Run the function-calling agent. Search and compare latest pricing for GPT-5 vs Claude 4 vs Gemini 3. Fetch current pricing pages. Calculate cost per million tokens. Deliver a briefing."

The agent runs the same tool chain — search, fetch, calculate, format — on a schedule. You get daily price intelligence delivered to Telegram without touching a browser.

Why This Works Better Than a Raw LLM Prompt

A traditional "ask the LLM" approach has a fundamental flaw: the model has to either know the answer from training data (which may be outdated) or fabricate one (hallucination). A function-calling agent never guesses — it always calls a tool.

| Approach | Data Freshness | Hallucination Risk | Repeatability | Tool Extensibility |
|---|---|---|---|---|
| Raw LLM prompt | Training cut-off | High | Varies by prompt | None |
| Function-calling agent | Real-time | Low (always fetches) | Deterministic tool chain | Add any tool via prompt |
| RAG pipeline | Index freshness | Medium (retrieval quality) | Depends on index | Requires engineering |

The function-calling agent sits in a sweet spot: it's as fresh as RAG, as simple as a prompt, and as reliable as deterministic tool execution.

Scaling: From One Agent to a Fleet

Once you have the function-calling pattern working for one task, you can spin up multiple agents, each with its own tool set and schedule.

Each agent shares the same function-calling architecture. Only the tool selection and scheduling differ. The prompt stays the same — you just tell each agent what to watch and how often to run.

Needle's lesson was that tool calling doesn't need massive models. Your Telegram agent doesn't either. The same architecture — parse intent, call a tool, assemble results — works at any scale. Deploy once, add tools as you go.

Getting Started

  1. Deploy OpenClaw on GetClawCloud — one click, zero server configuration
  2. Paste the prompt above into your Telegram bot
  3. Send your first task — "What's trending on Hacker News today?" or "Compare cloud GPU pricing"
  4. Schedule recurring checks with cron for hands-free monitoring

Your Telegram bot becomes a function-calling engine — no model training, no API integration, no custom deployment pipeline. The prompt defines the tools; OpenClaw powers the execution.

Deploy Your AI Function-Calling Agent

One click on GetClawCloud, paste the prompt, and your Telegram bot becomes a tool-using AI agent. Real data, no hallucinations, scheduled delivery.

Start on GetClawCloud →