Aarchid - AI Botanical Intelligence

Forensic plant-health platform: multimodal vision diagnosis grounded by real-time, citation-backed LLM reasoning. Built as co-creator - PM + engineer.

Co-creator · Product + Engineering · Aug 2025 – present
Gemini 1.5 Pro · Exa AI API · Next.js · Cloudflare Workers · Hono · Supabase · TypeScript

TL;DR

Plant owners had no personalized, real-time care guidance. Built with Dilpreet Grover (@dfordp), Aarchid diagnoses plant health from a single photo by combining Gemini 1.5 Pro vision with research-augmented reasoning via the Exa AI API - returning a health score, severity tier, and cited action plan in under 10s on the edge.

92% · Diagnosis Accuracy
90% · Cited Recommendations
<10s · P95 Latency (edge)
$0.25 · Infra Cost / User / Mo

Context & Problem

Plant enthusiasts and collectors struggle to diagnose plant-health issues early. Existing apps (Planta, PictureThis, Greg) lean on static care calendars and species lookups. None correlate visual symptoms with environmental history, and none cite their sources.

The gap: a forensic, auditable diagnosis - look at _your_ plant, factor in _your_ local conditions, and recommend an action grounded in botanical research, not generic templates.

Source: aarchid-rework · aarchid-api · live at aarchid.space.

Research & Discovery

  • Interviewed 10+ plant enthusiasts about failure modes and trust gaps in existing apps
  • Benchmarked top 4 apps on diagnostic specificity, citation quality, and edge-case handling
  • Unmet need crystallised: image-based diagnosis + hyper-local environmental context + cited recommendations

Solution & Approach

1. Architecture - the "Edge Stack"

Next.js PWA → Cloudflare Workers (Hono) orchestrator → Gemini 1.5 Pro (vision) + Exa AI API (research) + OpenWeather (environment) → Supabase (Postgres) + Cloudflare R2 (image storage, zero egress). Result: sub-$5/mo fixed infra at low usage, ~$0.25 per active user at scale.

2. Forensic Health Audit

Upload photo → Gemini 1.5 Pro detects pests, deficiencies, stress → Workers call Exa AI API to pull peer-reviewed citations → OpenWeather adds 7-day local context → response returns a health score (1–100), severity tier (Healthy / Warning / Critical), and a cited action plan.
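The response contract for the audit can be sketched as a small TypeScript type plus a score-to-tier mapping. The field names and the cutoffs (70 / 40) are illustrative assumptions for the sketch, not the production values:

```typescript
// Shape of the audit response (field names are illustrative).
interface HealthReport {
  score: number;                                  // 1–100
  tier: "Healthy" | "Warning" | "Critical";
  actions: { step: string; sourceUrl: string }[]; // each action carries a citation
}

// Map a 1–100 health score onto a severity tier.
// The 70 / 40 cutoffs are assumed for this sketch.
function severityTier(score: number): HealthReport["tier"] {
  if (score >= 70) return "Healthy";
  if (score >= 40) return "Warning";
  return "Critical";
}
```

Keeping the tier derived from the score (rather than asking the model for both) means the two can never disagree in the UI.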

3. Growth Velocity Tracking

Pixel-based measurement across photo logs tracks leaf expansion, stem elongation, and internodal distance over weeks. Species-specific benchmarks tell users whether a plant is thriving, stagnating, or declining.
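The core of the tracking math is simple: normalize a pixel delta into a per-week growth rate, then compare it against the species benchmark. A minimal sketch (function names, the benchmark comparison, and the three-way classification are assumptions for illustration):

```typescript
// Convert two pixel measurements of the same feature (e.g. leaf span)
// taken `weeks` apart into a percentage growth rate per week.
function weeklyGrowthPct(earlierPx: number, laterPx: number, weeks: number): number {
  const totalPct = ((laterPx - earlierPx) / earlierPx) * 100;
  return totalPct / weeks; // % change per week
}

// Classify the trend against a species-specific benchmark (% per week).
function trend(
  pctPerWeek: number,
  benchmark: number,
): "thriving" | "stagnating" | "declining" {
  if (pctPerWeek >= benchmark) return "thriving";
  if (pctPerWeek > 0) return "stagnating";
  return "declining";
}
```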

4. Pro-sumer Asset Management

Batch care actions for 50+ plants at once grouped by "micro-climate zones". Exportable timestamped Health Certificates enable peer-to-peer plant sales with verifiable provenance - the business wedge.
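Zone-grouped batching can be sketched as a plain group-by plus a fan-out. The `Plant` shape, zone names, and action encoding below are assumptions for the sketch:

```typescript
// Group plants by micro-climate zone so one care action
// fans out to every plant in that zone.
interface Plant { id: string; zone: string }

function byZone(plants: Plant[]): Map<string, Plant[]> {
  const zones = new Map<string, Plant[]>();
  for (const p of plants) {
    const group = zones.get(p.zone) ?? [];
    group.push(p);
    zones.set(p.zone, group);
  }
  return zones;
}

// Apply one action (e.g. "water", "mist") to every plant in a zone.
function batchAction(zones: Map<string, Plant[]>, zone: string, action: string): string[] {
  return (zones.get(zone) ?? []).map((p) => `${action}:${p.id}`);
}
```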

Implementation

The edge stack prioritised latency, auditability, and cost ceiling. Three decisions shaped the build:

1. Cloudflare Workers (Hono) as orchestrator, not a monolith backend. Image upload lands on R2 (zero egress). A single Worker fans out three parallel calls - Gemini vision, Exa research, OpenWeather - then composes the response. No cold starts, <50ms overhead, global by default.
2. Every claim cites a source. The Exa AI API returns peer-reviewed citations inline. The frontend renders them as a footnote trail so users can audit why the app recommended neem oil for that particular spider-mite presentation.
3. A golden eval set, not a vibe check. 50 labelled photos across 12 common plant-health failure modes. Every model/prompt change re-runs the set and reports accuracy deltas. Current model + prompt: 92% on the golden set, 90% citation grounding, P95 <10s end-to-end.
// Simplified orchestrator flow inside the Worker
const [diagnosis, research, weather] = await Promise.all([
  gemini.analyze({ image, prompt: DIAGNOSIS_PROMPT }),
  exa.research({ symptoms, species }),
  openweather.local({ lat, lon, days: 7 }),
]);

const report = composeReport({ diagnosis, research, weather });
return c.json(report); // <10s P95 on the edge

Key decision: Keeping the orchestrator stateless pushed all persistence to Supabase (history, growth tracking) and R2 (images). The Worker is cheap to re-run, trivial to version, and the eval harness can replay any historical photo against a new model without touching user data.
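Because the Worker is stateless, the eval harness is just a replay loop over the golden set. A minimal sketch of that loop, where `diagnose` stands in for the model call and the case shape is an assumption:

```typescript
// One labelled photo from the golden set (shape is illustrative).
interface GoldenCase { photoId: string; label: string }

// Replay every golden case against a diagnosis function and
// return accuracy in [0, 1].
async function runEval(
  cases: GoldenCase[],
  diagnose: (photoId: string) => Promise<string>,
): Promise<number> {
  let correct = 0;
  for (const c of cases) {
    if ((await diagnose(c.photoId)) === c.label) correct++;
  }
  return correct / cases.length;
}
```

Swapping the `diagnose` argument is how a new model or prompt gets scored against the same 50 photos without touching user data.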

Outcome & Metrics

  • Diagnosis accuracy: 92% on a 50-sample golden set covering 12 failure modes
  • Citation grounding: 90% of recommendations carry a peer-reviewed source link
  • Latency: P95 <10s end-to-end on Cloudflare's edge
  • Unit economics: ~$0.25 per active user per month at scale; sub-$5/mo fixed infra at low usage
  • Live: aarchid.space - co-built with Dilpreet Grover

Learnings

What worked

The eval harness was the unlock. Once we had a labelled golden set, prompt iteration went from "vibes" to measurable - every change produced an accuracy delta in minutes. The edge stack was the second: pushing orchestration to a Worker (instead of a traditional backend) kept fixed costs at single-digit dollars and made the app globally fast from day one.

What I'd change

A structured feedback loop - let users confirm or correct diagnoses, then feed the deltas back into the golden set. The harness exists; the human-in-the-loop pipe to update it is the next PR.

Related Work


Customer Churn Analysis

Built a predictive churn model that identified at-risk users and reduced monthly churn by 15%.

~15% · Churn Reduction

TCS NQT Prep Hub

Open-source TCS NQT preparation platform - 424+ practice questions, 10 previous papers, and an installable PWA with quizzes, flashcards, analytics, and offline support.

424+ · Practice Questions

KiteEdge - Portfolio Intelligence

Self-hosted analytics platform for Zerodha Kite - 43+ technical indicators, risk analytics with Monte Carlo VaR, and ARIMA/Prophet forecasting.

43+ · Technical Indicators