AI Agent that handles engineering tasks end-to-end: integrates with developers’ tools, plans, executes, and iterates until it achieves a successful result.
SE-Agent is a self-evolution framework for LLM code agents. It performs trajectory-level evolution, exchanging information across reasoning paths via Revision, Recombination, and Refinement to expand the search space and escape local optima. It achieves state-of-the-art performance on SWE-bench Verified.
An LLM council that reviews your coding agent's every move
Squeeze verbose LLM agent tool output down to only the relevant lines
Model Context Protocol Benchmark Runner
Lean orchestration platform for enterprise AI — where each decision costs hundreds. State machine core, HITL as a first-class state, corrections that accumulate. First use-case being Coding agent. Open research, early stage.
Open benchmark for AI coding agents on SWE-bench Verified. Compare resolution rates, cost, and unique wins.
Do MCP tools serialize in Claude Code? Empirical study: readOnlyHint controls parallelism, IPC overhead is ~5ms/call. Reproduces #14353.
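For readers unfamiliar with the annotation referenced above: readOnlyHint is a standard MCP tool annotation. Below is a minimal TypeScript sketch of a tool definition carrying it; the field names follow the MCP tool schema, but the tool name and input schema are hypothetical and not taken from the study.

```typescript
// Minimal sketch of an MCP tool definition with the readOnlyHint annotation.
// Field names follow the MCP tool schema; the tool itself is hypothetical.

interface ToolAnnotations {
  readOnlyHint?: boolean;    // true: the tool does not modify its environment
  destructiveHint?: boolean; // true: the tool may perform destructive updates
}

interface ToolDefinition {
  name: string;
  description: string;
  inputSchema: { type: "object"; properties?: Record<string, unknown> };
  annotations?: ToolAnnotations;
}

// Per the study's finding, a client such as Claude Code can dispatch calls to a
// tool marked read-only in parallel; without the hint, calls are serialized.
const searchTool: ToolDefinition = {
  name: "search_code",
  description: "Search the repository for a pattern without modifying files",
  inputSchema: {
    type: "object",
    properties: { query: { type: "string" } },
  },
  annotations: { readOnlyHint: true },
};
```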
Benchmark suite for evaluating LLMs and SLMs on coding and SE tasks. Features HumanEval, MBPP, SWE-bench, and BigCodeBench with an interactive Streamlit UI. Supports cloud APIs (OpenAI, Anthropic, Google) and local models via Ollama. Tracks pass rates, latency, token usage, and costs.
A technical guide and live-tracking repository for the world's top AI models, categorized by coding, reasoning, and multimodal performance.
A Rust reimplementation of mini-swe-agent with CLI task execution, benchmark runners, trajectory inspection, and multi-environment support.
Recursive Formal Alignment Factory — Autonomous pipeline for generating ultra-high-density vericoding trajectories and training SOTA code models on a single T4 GPU. Multi-agent verification, formal proofs, dense rewards, recursive self-improvement.
Supplementary materials for SRE shadow-mode PR replay experiment
Architecture Decision Quality Benchmark — the first public benchmark for evaluating AI on architecture-level engineering decisions
One-command SWE-bench eval harness in Go. Native ARM64 containers with 6.3x test runner speedup on Apple Silicon and AWS Graviton. Pre-built images on Docker Hub.
A project-structure-aware autonomous software engineer aimed at autonomous program improvement. Resolves 37.3% of tasks (pass@1) on SWE-bench Lite and 51.6% of tasks (pass@3) on SWE-bench Verified, with each task costing less than $0.70.