
AI Agents Assemble
A global showdown where builders unite to create the next generation of intelligent agents. Assemble your skills, assemble your tools, assemble your team.
8 Dec - 14 Dec
AI Agents Assemble

AI Agents Assemble is a global showdown where builders unite to create the next generation of intelligent agents, competing for $20,000 in cash prizes, interview opportunities, Google Summer of Code mentorship, and exclusive swag.
For 7 intense days, you will design agents that think, automate, orchestrate, and evolve. This is your chance to push beyond prompts, unleash your creativity, and build something the world has never seen.
Assemble your skills, assemble your tools, assemble your team and bring the future of AI agents to life.
Assemble Your Tools
Multiple technologies unite
Combine Cline, Kestra, Vercel, Oumi, and CodeRabbit to build powerful AI agent systems. Use multiple sponsor tools in your project to increase your chances of winning by submitting to multiple tracks!
Build the Future
Create intelligent agents
Design agents that think, automate, orchestrate, and evolve. Push beyond prompts and unleash your creativity to build something revolutionary.
Win Big
$20,000+ in prizes
Compete for cash rewards, exclusive swag, Google Summer of Code mentorship, and job interview opportunities. Three Infinity Stone awards await the best projects.
Prizes
Every winner will receive swag and job interview opportunities. Everyone who submits a project will receive a certificate. Use multiple sponsor tools in your project to increase your chances of winning by submitting to multiple tracks!

Wakanda Data Award
Awarded to the best project using Kestra's built-in AI Agent to summarise data from other systems. Bonus credit for enabling the agent to make decisions based on the summarised data.

Infinity Build Award
Awarded to the best project that uses the Cline CLI to build powerful autonomous coding workflows and meaningful capabilities on top of it.

Iron Intelligence Award
Awarded for the most effective and creative use of Oumi to train or evaluate new LLMs/VLMs, or for the most impactful contributions to the open-source Oumi repository that benefit the community.

Stormbreaker Deployment Award
Awarded to the strongest Vercel deployment, showing a smooth, fast, and production-ready experience.

Captain Code Award
Awarded to the team that demonstrates the best open-source engineering using CodeRabbit through clean PRs, documentation, and solid OSS workflows.

All participants will receive mentorship and guidance to help them prepare for and apply to Google Summer of Code.
Social Media Raffle
Post about taking part in the hackathon on social media and tag WeMakeDevs. Ten raffle winners will receive exclusive swag packs from our sponsors.
The Sponsor Stones
Six powerful technologies assemble to power your AI agents
Oumi
The most comprehensive repository for end-to-end research and development with LLMs and VLMs: pretraining, post-training (SFT and RL with FFT or LoRA), evaluations, data synthesis, and more. Build custom AI models and agents to improve quality and reduce latency and costs for your applications or research.
Assemble The Stones
Combine these powerful technologies to create unprecedented AI agent systems. Each stone brings unique capabilities; together they're unstoppable.



Judging Criteria
Your projects will be evaluated based on these six key dimensions
Potential Impact
Evaluates how effectively the project addresses a meaningful problem or unlocks a valuable real-world use case.
Creativity & Originality
Assesses the uniqueness of the idea, novelty in approach, and how creatively sponsor technologies (Cline, Vercel, Kestra, Oumi, CodeRabbit) are applied.
Technical Implementation
Reviews how well the project was executed technically, whether it functions as intended, and the quality of integration with required sponsor technologies.
Learning & Growth
Recognizes the progress made during the hackathon, especially for first-time builders or teams pushing themselves with new tools or concepts.
Aesthetics & User Experience
Considers how intuitive, user-friendly, and polished the final project is, particularly if it has a frontend or user interaction layer.
Presentation & Communication
Evaluates the clarity of the README, the quality of the demo video, and how effectively the team communicates their idea and results.

Pro Tip
Focus on creating a project that not only works well technically but also solves a real problem and provides a great user experience. The best projects often excel in multiple criteria! Assemble your skills and tools to maximize your impact.

Need Answers?
Frequently Asked Questions
Quick responses for the most common questions about AI Agents Assemble.





