
TrendOps is a governance-first, MCP-native AI agent swarm that transforms live YouTube trend data into strategic, investor-ready intelligence. Instead of a monolithic application, TrendOps decomposes the intelligence lifecycle into governed MCP tools — validation, data ingestion, analytics, and executive insight generation — orchestrated through a control-plane architecture. It enforces policy guardrails (region validation, rate limits), performs TF-IDF keyword extraction and K-Means clustering to detect emerging subcultures, and generates structured strategic reports via LLM — all with full execution tracing and cost observability. Built for the 2 Fast 2 MCP Hackathon, TrendOps demonstrates modular orchestration, enterprise-grade governance, and production-ready AI tool exposure.
A full-stack AI-powered portfolio platform where a user can create and publish professional public pages, attach an AI representative to those pages, and track visitor behavior with analytics. Lets users sign in with Google OAuth, provides a 7-step wizard to build a portfolio or company page, generates and edits section content through AI chat, publishes pages to public slug routes, optionally enables a public AI representative for each published page, tracks page views and chat sessions, extracts business intents from visitor conversations, and gives owners analytics dashboards plus AI-generated business insights. Exposes MCP tools so external AI systems can manage portfolios programmatically. Built with Next.js App Router, React 19, Express, PostgreSQL, Gemini, and optional Archestra LLM proxy and A2A agent routing.
AgriBot is an AI-powered agricultural operations platform that provides farmers with real-time, phase-aware crop guidance through specialized MCP agents all orchestrated through Archestra. Features RBAC to protect farmer privacy, Dual LLM Security to protect from prompt injection attacks, and MCP Registry integration with the National Statistics Office (eSankhyiki) for wholesale price trends and inflation data. Archestra turns seven independent servers into one secure, intelligent platform that farmers can trust.
Sentinel is an autonomous, goal-oriented system administration framework designed to turn a standard operating system into a self-healing, intent-driven environment. Unlike traditional automation scripts, Sentinel utilizes Large Language Models (LLMs) to interpret complex user goals and executes multi-step workflows across isolated virtual machines and local host environments. Archestra serves as the Governance and Control Plane, acting as a secure safety layer that bridges the gap between autonomous AI reasoning and sensitive system-level operations.
Sentinel is an autonomous Site Reliability Engineering (SRE) agent that continuously watches system health, uses AI to understand what went wrong, applies safety checks, and fixes issues automatically when safe or with human approval when needed. Archestra is used as the orchestration layer that connects to the Sentinal MCP server, with tools like get_infrastructure_status, incident_history, trigger_alert, and approvals available through Archestra's Chat UI.
AI-powered On-Call Agent that automates incident investigation by correlating real GitHub commits, logs, and tickets using Archestra MCP — solving the "2 AM Nightmare" for on-call engineers. Uses Archestra as the core integration layer; the MCP Gateway connects the Next.js backend to GitHub via the GitHub Copilot MCP Server. Jira, Slack, and Datadog can be added with zero code changes by simply installing their MCP servers in the Archestra Dashboard.
AAGIL (Adaptive Agent Genetic Integrity Layer) is a production-grade behavioral assurance platform that sits in front of large language models and agent systems to enforce safety, reliability, and response quality through a multi-layer, evolutionary decision pipeline. It combines behavioral firewalling, multi-model parallel generation, adversarial drift detection, evolutionary variant optimization, tool runtime protection, and persistent performance memory. Archestra is used as the orchestration backbone for coordinating multiple specialized AI agents (Gene Agent, Evaluator Agent, Behavioral Firewall Agent, Red-Team Agent, Tool Guard Agent) through structured MCP-based communication.
PitCrew Incidents is a tool-driven multi-agent incident response workflow built inside Archestra.AI. It runs a strict incident pipeline, pulls real evidence from Grafana Cloud Prometheus, automatically creates a GitHub issue in the project repo, posts a Slack update with the issue URL, and generates an incident summary artifact — all via real MCP tool calls with no mocked outputs. Archestra.AI was used as the runtime to build and run the full multi-agent incident workflow with a Master Agent coordinating four agents in strict order.
A Solana DeFi Intelligence MCP server. Using Archestra to orchestrate the entire server lifecycle, manage secrets, and bridge AI agents to on-chain tools (wallet analysis, token prices, DeFi positions). Archestra makes the MCP server instantly discoverable and deployable — no manual wiring needed.
MCP Guardian is an MCP server that audits other MCP servers. It reads tool definitions from Archestra's API, runs them through 50+ vulnerability patterns and an LLM deep scan, then writes real security policies back to Archestra that block dangerous tools. Scan, detect, lock down — one chat command. Includes a malicious demo server with 7 intentionally broken tools to prove it catches everything. Runs inside Archestra's embedded K8s as a managed pod, deployed through Terraform with the Archestra provider.
PitStop Check is an MCP "trust and safety inspector" that audits any MCP server before you connect it to your agent. It can inspect MCP targets over stdio or HTTP, run lint + conformance checks on tool metadata, benchmark listTools latency, compute a risk level, export an approval bundle, generate a markdown trust report, and compare runs to show what changed between two versions. Archestra was used as the development and verification environment for MCP tool behavior.
FluxAI is a multi-tenant cost and quota management system for MCP agents. It acts as a "policy firewall" that intercepts tool and model calls, evaluates them against configured budgets and policies, and makes intelligent routing decisions (allow/deny/downgrade). Integrated with Archestra as the MCP host that runs the FluxAI MCP server; agents call flux-ai.check_and_route before invoking expensive LLM tools so Archestra delegates routing and budget checks to FluxAI.
An MCP server that gives AI agents the ability to design, synthesize, and fabricate digital chips using open-source EDA tools — entirely through natural language. A user describes a digital circuit in plain English, and the agent autonomously writes synthesizable Verilog RTL, synthesizes with Yosys (Sky130 PDK), runs the full RTL-to-GDSII flow via OpenLane, analyzes PPA metrics, and renders a visual preview of the final GDSII layout. Deployed as a self-hosted MCP server within Archestra's Kubernetes cluster; Archestra's runtime acts as the secure bridge between the LLM and Dockerized EDA tools.
Refund Pro is an autonomous legal orchestration agent designed to enforce consumer rights in India. It automates the refund lifecycle by calculating statutory interest penalties and drafting legally binding notices based on real case precedents. Features an aggression optimizer that adjusts legal tone based on company behavior and a precedent engine that cites specific case numbers. Archestra is used as the central control plane to orchestrate the Dockerized MCP server, with Dual LLM security to prevent prompt injections when processing untrusted user data.
When an alert fires, this system uses coordinated AI agents to investigate, diagnose, and safely remediate incidents in seconds. One agent analyzes metrics, another correlates logs, and a remediation agent validates and proposes fixes — with guardrails like dry runs and strict blocklists. Archestra is the central orchestration layer, hosting both the Alert Triage Agent and the Remediation Agent with structured conversations, controlled tool access through MCP servers, and secure agent-to-agent handoffs.
ContextOS addresses Notification Fatigue with Archestra as the Production Brain and LLM Orchestration Layer. Archestra acts as the decision-making engine that receives natural language and determines the appropriate course of action, connects to the Python MCP server via SSE, and discovers tools like schedule_event, trigger_alert, create_ticket, and Tasks/Reminders. When a user sends a command like "Schedule a meeting with Alice," Archestra's LLM analyzes the text and executes the appropriate tool call.
Vyapaar MCP is a high-performance financial governance layer and MCP server that acts as an autonomous firewall between AI agents and the Razorpay X banking platform. It intercepts every payout attempt, validates against a strict policy engine, checks vendor reputation, and ensures Human-in-the-Loop for high-risk decisions. Archestra acts as the Security Gateway and Enterprise Orchestrator with Deterministic Security (Foundry Model), Dual-LLM Quarantine, and operational layer for secrets and sidecars.
SecureOps Sentinel is a multi-agent system that triages production incidents using AI while defending against prompt injection attacks hiding in log data. When a DevOps engineer asks "What's wrong with our web-api?", three AI agents collaborate: LogAnalyzerAgent, IncidentCommanderAgent, and RemediationAgent. Archestra's Dual LLM quarantine and Dynamic Tools neutralize prompt injection attacks while A2A delegation ensures the incident still gets handled.
ShellGuard is an intelligent terminal safety assistant that mitigates command-line risks by intercepting and analyzing execution requests in real-time. Leveraging Archestra MCP as its core decision engine, it assesses the intent and potential consequences of each command. When high-risk operation is detected, ShellGuard halts execution and provides detailed risk analysis and safer, context-aware alternatives. Archestra serves as the primary intelligence engine with comprehensive observability and fallback to Gemini API when unreachable.
Autonomous Infrastructure Incident Response: an AI agent system that detects infrastructure anomalies, interprets human-written runbooks, and executes validated remediation steps automatically. Monitor MCP observes metrics and provides anomaly detection; Executor MCP performs remediation (restart, scale, rollback) with rate limits and action validation. Orchestrator Agent runs on Archestra using Gemini to interpret runbooks and coordinate both MCPs. Strict read/write separation creates clear governance boundaries.
SecureOps AI is an autonomous incident response platform that connects to live incident feeds through custom MCP servers, uses Groq's Llama 3.3 70B for intelligent triage, and automatically creates prioritized GitHub tickets in under 30 seconds. To prove security, a real prompt injection attack was embedded in a test incident and the system correctly ignored the malicious instructions. Archestra is the secure orchestration backbone with two custom MCP servers (Incident Feed + GitHub Ticketing) and dual-LLM security quarantine.