Best Practices
Building an effective AI agent requires a shift in mindset from traditional chatbot design. Follow these best practices to get the most out of VerlyAI.
System Prompt
The system prompt is the most critical input you give your agent — it defines its identity, scope, and behavior. A well-crafted system prompt is the difference between a mediocre bot and a truly effective agent.
Give It a Clear Identity
Vague instructions produce vague agents. Be specific about who or what your agent is. For example, "You are Ava, the customer success agent for Acme Outdoor Gear" produces far more focused behavior than "You are a helpful assistant."
Define the Scope
Always tell the agent what it should and should not do. This prevents it from hallucinating answers outside its domain.
```
You are a customer success agent for an e-commerce store.
- You CAN help with: order tracking, returns, product questions, and promotions.
- You CANNOT provide legal, medical, or financial advice.
- If a user asks about anything outside your scope, politely decline and offer to connect them to a human agent.
```
Set the Tone
Match the agent's personality to your brand voice.
- Formal: "Respond professionally and concisely. Avoid slang or informal language."
- Friendly: "Be warm, encouraging, and use casual language. Use emojis sparingly where appropriate."
- Technical: "Assume the user is technically proficient. Use precise terminology without over-explaining basics."
Handle Edge Cases Explicitly
Don't leave the agent to guess what to do in difficult situations.
- Define escalation behavior: "If the user expresses anger or frustration, immediately offer to connect them to a live human agent."
- Define fallback behavior: "If you are unsure of an answer, say so honestly and suggest the user visit our Help Center at help.example.com."
- Handle off-topic queries: "Politely redirect users who ask about unrelated topics back to your core purpose."
Keep It Iterative
Your first system prompt won't be perfect. Use real conversation logs to identify gaps and refine the prompt over time.
Models
VerlyAI lets you choose the underlying language model that powers your agent. Picking the right model directly impacts response quality, speed, and cost.
Choose Based on Use Case
| Use Case | Recommended Model Tier |
|---|---|
| Simple FAQs & routing | Fast / Lightweight model |
| Customer support & reasoning | Balanced model (GPT-4o, Claude Sonnet) |
| Complex analysis, coding, research | Powerful model (GPT-4, Claude Opus) |
| Voice agents | Low-latency optimized models |
Understand the Speed vs. Capability Trade-off
- Faster (smaller) models are great for high-volume, real-time interactions like live chat — but may struggle with nuanced multi-step reasoning.
- More capable (larger) models handle complex tasks better, but introduce slightly more latency and higher cost per token.
Use Consistent Model Versions
Avoid using "latest" aliases in production. Pin to a specific model version (e.g., gpt-4o-2024-08-06) so your agent's behavior doesn't change unexpectedly when the provider updates the alias.
Monitor Token Usage
Keep an eye on token consumption per conversation. Overly long system prompts or large knowledge injections can significantly increase costs.
- Trim unnecessary content from the system prompt.
- Use knowledge base retrieval selectively — only inject relevant chunks.
- Set a maximum context window appropriate for your use case.
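As a rough way to audit prompt size before deployment, you can count tokens locally. This is a minimal sketch assuming the tiktoken library and an OpenAI-family tokenizer; the per-token price is purely illustrative, so substitute your provider's actual rates.

```python
# A minimal sketch for sizing a system prompt, assuming the tiktoken library.
# The price constant is illustrative only; check your provider's pricing page.
import tiktoken

SYSTEM_PROMPT = """You are a customer success agent for an e-commerce store.
- You CAN help with: order tracking, returns, product questions, and promotions.
- You CANNOT provide legal, medical, or financial advice."""

ILLUSTRATIVE_PRICE_PER_1K_INPUT_TOKENS = 0.005  # hypothetical rate, not a real quote

def estimate_prompt_size(text: str, model: str = "gpt-4o") -> int:
    """Count the tokens the given model would see for `text`."""
    encoding = tiktoken.encoding_for_model(model)  # needs a recent tiktoken for gpt-4o
    return len(encoding.encode(text))

tokens = estimate_prompt_size(SYSTEM_PROMPT)
cost = tokens / 1000 * ILLUSTRATIVE_PRICE_PER_1K_INPUT_TOKENS
print(f"System prompt: {tokens} tokens (~${cost:.4f} of input per turn)")
```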
Customization
VerlyAI gives you full control over how your agent looks, sounds, and behaves across channels.
Persona Engineering
Your agent's personality is defined through its system prompt and appearance settings working together.
- Name & Avatar: Give your agent a name and a branded avatar. Users engage more with agents that feel like a character.
- Welcome Message: Set a clear, inviting opening message that tells users what the agent can help with.
- Suggested Prompts: Pre-fill common questions so users know where to start.
Branding
- Match chat widget colors to your brand palette.
- Use your logo or a custom avatar for the agent icon.
- Customize the widget launcher icon and position on your website.
Channel-Specific Configuration
Different channels have different constraints. Customize your agent's behavior per channel:
| Channel | Key Consideration |
|---|---|
| Web Chat | Supports rich formatting, markdown, buttons |
| WhatsApp | Plain text focus; avoid heavy markdown |
| Voice | Short, spoken responses; avoid lists and links |
- For Voice, instruct the model to keep responses concise and conversational. Avoid bullet points and URLs.
- For WhatsApp, use simple formatting. Bold and italics work, but avoid tables or code blocks.
Conversation Memory
- Enable session memory to maintain context across a single conversation.
- Enable persistent memory (where available) to remember returning users' preferences and history.
One Brain, Many Channels
VerlyAI uses a Unified Agent State. Configure your knowledge base and tools once — then deploy to Web, WhatsApp, or Voice instantly.
Tools and Actions
Tools are how your agent interacts with the real world — fetching data, taking actions, and connecting to external systems.
Design Atomic Tools
Build small, focused tools rather than one giant "do everything" tool.
- Avoid: a single manage_order tool that handles checking status, processing refunds, and updating addresses.
- Prefer: check_order_status, process_refund, update_shipping_address — each with a clear, single responsibility.

Atomic tools are:
- Easier for the LLM to reason about and use correctly.
- Easier to test and debug in isolation.
- More reusable across different agent configurations.
Write Descriptive Tool Descriptions
The LLM uses the tool's name and description to decide when and how to call it. Be explicit.
```json
{
  "name": "check_order_status",
  "description": "Retrieves the current status of a customer's order including shipping updates and estimated delivery date. Use this when the user asks where their order is, when it will arrive, or if there are any delays.",
  "parameters": {
    "order_id": {
      "type": "string",
      "description": "The unique order ID, typically found in the confirmation email."
    }
  }
}
```
Use Built-In Integrations Where Possible
VerlyAI offers native integrations (e.g., Shopify, Zendesk, Calendly) that are pre-configured and tested. Prefer these over building custom webhooks for common use cases.
Validate Tool Inputs
Always validate inputs before making external API calls. Define required vs. optional parameters clearly in your schema to prevent the agent from making incomplete calls.
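If you back a tool with your own endpoint, it also helps to reject malformed arguments before they reach downstream systems. The sketch below is a hypothetical handler for the check_order_status tool; the ORD-prefixed ID format and the return shape are assumptions, not part of VerlyAI's API.

```python
# A hypothetical backend handler for check_order_status with input validation.
# The order-ID format and response shape are assumptions; adapt them to your systems.
import re

ORDER_ID_PATTERN = re.compile(r"^ORD-\d{6,10}$")  # e.g. ORD-12345678 (hypothetical format)

def check_order_status(order_id: str) -> dict:
    # Validate before making any external API call.
    if not order_id or not ORDER_ID_PATTERN.match(order_id.strip()):
        return {
            "ok": False,
            "error": "invalid_order_id",
            "message": "That doesn't look like a valid order ID. It should look like "
                       "ORD-12345678 and appears in the confirmation email.",
        }
    # ... look the order up in your order system here ...
    return {"ok": True, "status": "shipped", "estimated_delivery": "in 2-3 business days"}
```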
Handle Tool Failures Gracefully
Tell the agent what to do when a tool fails.
For example: "If the check_order_status tool returns an error, apologize to the user and let them know there is a temporary issue. Offer to have a human agent follow up via email within 24 hours."
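The same principle applies on the backend: catch failures inside the tool handler and hand the agent a message it can relay, rather than a raw error. A minimal sketch, assuming a hypothetical fetch_order_from_store client:

```python
# A minimal sketch of graceful failure handling in a tool backend.
# fetch_order_from_store stands in for your real order-system client.
import logging

logger = logging.getLogger("tools.check_order_status")

def fetch_order_from_store(order_id: str) -> dict:
    """Hypothetical order-system call; replace with your real client."""
    raise NotImplementedError

def check_order_status(order_id: str) -> dict:
    try:
        order = fetch_order_from_store(order_id)
        return {"ok": True, "status": order["status"]}
    except Exception:
        # Log the details for yourself, but return a message the agent can relay
        # verbatim, mirroring the instruction in the prompt above.
        logger.exception("check_order_status failed for order %s", order_id)
        return {
            "ok": False,
            "message": "We're having a temporary issue looking up orders. "
                       "A human agent will follow up by email within 24 hours.",
        }
```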
The "Escape Hatch" Principle
Always design a seamless path to a human agent. AI is powerful, but not perfect.
VerlyAI handles this via:
- Sentiment Triggers: Automatically escalate if the user seems frustrated.
- Explicit Requests: If a user asks for a "real person," honor it immediately.
- Zero Context Loss: The human agent receives the full transcript so the user never has to repeat themselves.
Secure Your Webhooks
- Use a secret header or HMAC signature to verify that webhook calls originate from VerlyAI (see the sketch after this list).
- Never expose API keys in tool schemas or agent-visible fields.
- Apply rate limiting on your endpoints to protect against abuse.
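For the first point above, a typical pattern is to verify an HMAC-SHA256 signature computed over the raw request body. The sketch below uses Flask; the X-Webhook-Signature header name and the hex-encoded digest are assumptions, so check your VerlyAI webhook settings for the exact scheme.

```python
# A minimal sketch of webhook signature verification with Flask.
# The header name and hex-encoded HMAC-SHA256 scheme are assumptions;
# confirm the actual format in your VerlyAI webhook settings.
import hashlib
import hmac
import os

from flask import Flask, abort, jsonify, request

app = Flask(__name__)
WEBHOOK_SECRET = os.environ["WEBHOOK_SECRET"].encode()  # shared secret, never hard-coded

@app.post("/webhooks/verly")
def handle_webhook():
    signature = request.headers.get("X-Webhook-Signature", "")
    expected = hmac.new(WEBHOOK_SECRET, request.get_data(), hashlib.sha256).hexdigest()
    # Constant-time comparison prevents timing attacks on the signature check.
    if not hmac.compare_digest(signature, expected):
        abort(401)
    payload = request.get_json(silent=True) or {}
    # ... dispatch to your tool logic here ...
    return jsonify({"ok": True})
```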