Frequently asked questions
A modern chatbox becomes a front‑line assistant across channels (web, WhatsApp, Messenger, email). We connect it to your knowledge base (policies, product sheets, SOPs) using retrieval (RAG), so answers are grounded and cited. It can qualify leads, book calls, create tickets, check order status, and push clean records to your CRM. With role and rate limits, it knows when to hand over to a human and logs every step for audit. Multilingual by default, it adapts tone to your brand voice. You get faster replies, fewer repetitive tasks, and measurable gains in conversion per support hour.
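The grounding step above can be sketched in a few lines. This is a toy illustration only: the bag-of-words similarity stands in for a real embedding model, and the two knowledge-base entries and their ids are invented for the example. The point is the shape of retrieval-augmented generation: rank your own documents against the question, then build a prompt that forces cited, source-bound answers.

```python
from collections import Counter
from math import sqrt

# Toy knowledge base: in production these would be chunks of your
# policies, product sheets, and SOPs, embedded with a real model.
DOCS = {
    "returns-policy": "Items can be returned within 30 days with a receipt.",
    "shipping-sla": "Standard shipping takes 3-5 business days within the EU.",
}

def vectorize(text: str) -> Counter:
    """Bag-of-words stand-in for a real embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(question: str, k: int = 1):
    """Return the top-k (doc_id, text) pairs most similar to the question."""
    q = vectorize(question)
    ranked = sorted(DOCS.items(), key=lambda kv: cosine(q, vectorize(kv[1])), reverse=True)
    return ranked[:k]

def grounded_prompt(question: str) -> str:
    """Build a prompt that makes the model answer from cited sources only."""
    context = "\n".join(f"[{doc_id}] {text}" for doc_id, text in retrieve(question))
    return (
        "Answer ONLY from the sources below and cite the source id in brackets.\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
```

Because every answer carries a source id, support leads can spot-check any reply against the original document.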
Yes. We extend—not rip‑and‑replace. Typical patterns: API/webhook orchestration, secure service accounts with least‑privilege access, read‑only modes for risky tables, and write operations via approved endpoints (e.g., create order, update status, post note). Where APIs are limited, we can use queue‑based bridges or RPA fallbacks for edge cases. Everything is idempotent and fully logged. The result: agents that can fetch stock levels, calculate availability, generate quotes, or update tickets—while your ERP/CRM (SAP B1, Dynamics, Odoo, Pipedrive/HubSpot, etc.) remains the system of record.
We treat an “EYE” setup as a camera/computer‑vision layer. Our pipeline ingests events (frame streams, motion, barcode/label reads), applies vision models (object detection, area occupancy, pallet counting), and correlates them with WMS/ERP data. Use cases: automatic dock/pallet arrival detection, pick/pack verification, misplacement alerts, and safety zone breaches. The agent publishes structured events to your WMS (via API/webhook), opens tasks for staff, and keeps an auditable trail with snapshots. No need to replace cameras—if your EYE system exposes feeds or events, we integrate; if the name refers to your in‑house “eye” system, we align on the signal format during Discovery and build the adapter.
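The "structured events" the vision layer publishes might look like the sketch below. The field names, snapshot path, and the `detection_to_event` mapping are illustrative; the real signal format is exactly what gets agreed during Discovery.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class DockEvent:
    """Structured event the vision layer publishes to the WMS webhook."""
    event_type: str    # e.g. "pallet_arrival", "safety_zone_breach"
    camera_id: str
    confidence: float
    snapshot_url: str  # audit trail: link to the saved frame
    occurred_at: str

def detection_to_event(detection: dict) -> DockEvent:
    """Map a raw model detection into the agreed WMS signal format."""
    return DockEvent(
        event_type=detection["label"],
        camera_id=detection["camera"],
        confidence=detection["score"],
        snapshot_url=f"s3://snapshots/{detection['frame_id']}.jpg",  # illustrative path
        occurred_at=datetime.now(timezone.utc).isoformat(),
    )

def publish(event: DockEvent) -> str:
    """Serialize for the WMS webhook; in production this is an HTTP POST."""
    return json.dumps(asdict(event))
```

Every event carries a snapshot reference and a timestamp, which is what makes the trail auditable after the fact.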
Yes—privacy and compliance are baked into the architecture. We practice data minimisation, purpose-limited processing, role-based access, and short retention by default. Hosting can be region-bound (EU) with private networking and secrets management. Sensitive fields can be redacted before any model sees them.
We document DPIA considerations, sign a DPA where needed, and keep complete audit logs for queries and actions. Human-in-the-loop is standard for critical operations, and we support model/tenant isolation where required. In short: business-grade security with the paperwork and controls your DPO will actually like.
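Redaction before any model call can be as simple as typed placeholders. A minimal sketch, assuming regex patterns that would be tuned per data source in a real deployment (the e-mail and phone number below are made up):

```python
import re

# Patterns are illustrative; a production set is tuned per data source.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace sensitive fields with typed placeholders before any model call."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

The model only ever sees the placeholders, so raw PII never leaves your boundary even when a third-party API is used for generation.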
We aim for value in 4–6 weeks.
Phase 1 (Week 0–2): discovery, data mapping, KPI baseline, and a small proof (e.g., chatbox answering from your docs, read-only ERP queries).
Phase 2 (Week 3–6): production pilot with guardrails—limited users, clear success metrics (average handle time (AHT), deflection, cost per action).
Phase 3 (Week 7+): expand actions, automate approvals where safe, and tune costs (prompt caching, model right-sizing).
Pricing is transparent: a fixed setup for the pilot, then monthly for maintenance/iteration plus pass-through model usage. We keep spend predictable with budgets, alerts, and dashboards. If a workflow doesn’t move the KPI, we fix it or turn it off—no sunk-cost theatre.
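A spend guardrail is one of the simplest of those controls. A sketch with illustrative thresholds (the €500 budget and 80% alert level are examples, not our pricing):

```python
# Monthly model-spend guardrail; thresholds are illustrative.
BUDGET_EUR = 500.0
ALERT_AT = 0.8  # warn at 80% of budget

def check_spend(spent_eur: float) -> str:
    """Classify current spend against the monthly budget."""
    if spent_eur >= BUDGET_EUR:
        return "hard_stop"  # block further model calls, notify the owner
    if spent_eur >= ALERT_AT * BUDGET_EUR:
        return "alert"      # surface on the dashboard before it becomes a problem
    return "ok"
```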
We’re model-agnostic. For generation we select per use case (accuracy, latency, privacy, cost): frontier APIs or open models (e.g., GPT-class, Claude-class, Llama/Mistral family, domain-specific OCR/ASR). For retrieval we use portable stores (Postgres + pgvector, or managed options like Pinecone/Weaviate/Qdrant) so your knowledge base is exportable. Orchestration is kept thin: small services and queues (HTTP, webhooks, worker jobs), with optional frameworks (LangChain/LlamaIndex) behind an internal interface so we can swap them. Observability is standard: metrics, logs, traces, prompt/version tracking. The lock-in avoidance is architectural—open schemas, export paths, and abstraction layers—so we can change models or vendors without breaking your workflows or data.
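The "internal interface" that keeps things swappable is a thin abstraction. A sketch with two made-up backends (the echo bodies stand in for real vendor SDK calls): workflows depend only on the interface, so changing vendors touches one adapter, not your pipelines.

```python
from typing import Protocol

class Generator(Protocol):
    """Internal interface every model provider adapter must satisfy."""
    def generate(self, prompt: str) -> str: ...

class FrontierAPIBackend:
    """Hypothetical wrapper; the real call goes through the vendor SDK."""
    def generate(self, prompt: str) -> str:
        return f"frontier:{prompt}"

class LocalLlamaBackend:
    """Hypothetical wrapper around a self-hosted open model."""
    def generate(self, prompt: str) -> str:
        return f"llama:{prompt}"

def answer(question: str, backend: Generator) -> str:
    """Workflows depend only on the interface, never on a vendor SDK."""
    return backend.generate(question)
```

Swapping models is then a one-line change at the call site, which is what "lock-in avoidance is architectural" means in practice.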
We run a light but structured change-management track.
Week 1: Shadow mode (agents observe and draft; humans decide).
Week 2–3: Assisted mode (agents perform low-risk actions with approvals).
Week 4+: Guardrailed automation where KPIs prove it’s safe.
We deliver a playbook: roles & approvals, “do-not-guess” policy, escalation rules, and example prompts. Teams get a sandbox, short live sessions, and train-the-trainer materials (videos, SOPs, cheat sheets). Adoption is measured (usage, deflection, AHT) and feedback loops are built in (flag, correct, learn). Admins see an audit console with permissions, logs, and cost dashboards. Result: confident users, controlled risks, and faster time-to-value.
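The mode progression can be enforced in code rather than by convention. A sketch, assuming an invented low-risk action list and invented action names: every agent action passes through one gate that knows the current mode.

```python
from enum import Enum

class Mode(Enum):
    SHADOW = "shadow"        # agent drafts, humans decide
    ASSISTED = "assisted"    # low-risk actions with human approval
    AUTOMATED = "automated"  # guardrailed, KPI-proven actions only

# Illustrative; the real vetted set comes from the playbook.
LOW_RISK = {"draft_reply", "post_note"}

def execute(action: str, mode: Mode, approved: bool = False) -> str:
    """Decide whether an agent action runs, waits for approval, or stays a draft."""
    if mode is Mode.SHADOW:
        return "drafted"  # never executed, only logged
    if mode is Mode.ASSISTED:
        if action in LOW_RISK and approved:
            return "executed"
        return "pending_approval"
    # AUTOMATED: still restricted to the vetted low-risk set
    return "executed" if action in LOW_RISK else "escalated_to_human"
```

Because the gate is a single function, moving a team from Assisted to Automated is a config change, and every decision it makes can be logged to the audit console.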
