HLER: Human-in-the-Loop Economic Research via Multi-Agent Pipelines for Empirical Discovery
Why this is worth your attention
This paper matters because it makes one specific part of “AI can automate research” look operationally real: not autonomous genius, but a cheap, structured workflow that turns a dataset into a draft empirical paper, with humans approving the key decisions. The headline result is less about model brilliance than about cutting wasted cycles on bad questions: HLER’s dataset-aware setup sharply reduced infeasible hypotheses, and most runs completed end to end in 20–25 minutes at very low API cost. If that pattern holds beyond this small test, economics, policy, market research, and internal analytics teams could industrialize parts of empirical analysis faster than most current research workflows assume. The catch is readiness: the evidence comes from just 14 runs on three datasets, and some quality claims rely on the same LLM family grading its own output.
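The mechanism behind the drop in wasted cycles is simple enough to sketch. Below is a minimal, hypothetical illustration of the two gates the summary describes: a dataset-aware feasibility check that rejects hypotheses referencing variables the data does not contain, followed by human sign-off before any compute is spent. All names here (`Hypothesis`, `feasible`, `gate`) are my own for illustration, not HLER’s actual API.

```python
# Minimal sketch (not HLER's actual interface): a dataset-aware,
# human-in-the-loop hypothesis gate for an empirical-research pipeline.

from dataclasses import dataclass


@dataclass
class Hypothesis:
    text: str
    required_columns: set[str]  # variables the empirical test would need


def feasible(h: Hypothesis, dataset_columns: set[str]) -> bool:
    """Dataset-aware check: a hypothesis is infeasible if it references
    variables the dataset simply does not contain."""
    return h.required_columns <= dataset_columns


def human_approves(h: Hypothesis) -> bool:
    """Human-in-the-loop gate: a reviewer signs off before the pipeline
    spends API calls on estimation and paper drafting."""
    answer = input(f"Approve hypothesis? [{h.text}] (y/n): ")
    return answer.strip().lower() == "y"


def gate(hypotheses: list[Hypothesis], dataset_columns: set[str]) -> list[Hypothesis]:
    """Cheap filter first, human judgment second: only feasible,
    approved hypotheses reach the expensive analysis stage."""
    candidates = [h for h in hypotheses if feasible(h, dataset_columns)]
    return [h for h in candidates if human_approves(h)]
```

The design point is ordering: the automated feasibility check is nearly free, so running it before the human gate means reviewers only ever see questions the data could in principle answer.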
SWE-CI: Evaluating Agent Capabilities in Maintaining Codebases via Continuous Integration
Why this is worth your attention
This paper matters because it shifts the question from “can an AI fix a bug?” to “can it keep a real codebase healthy as requirements change over months?” That is much closer to where engineering budgets are actually spent, and it pressures agent vendors to prove durability rather than one-shot demo wins. The paper’s main contribution is the benchmark itself, not proof that agents are ready for autonomous maintenance; still, if this style of evaluation catches on, product, engineering, and procurement teams will need to compare coding agents on regression risk and long-horizon maintainability, not just task completion.
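To make the evaluation style concrete, here is a rough sketch of what a longitudinal, CI-flavored harness could look like: an agent applies a sequence of maintenance tasks to one evolving repository, and the test suite runs after each change so that regressions against earlier work are counted, not just per-task success. This is an assumption-laden illustration, not SWE-CI’s actual harness; the `agent.apply_change` interface is hypothetical.

```python
# Illustrative sketch only (not SWE-CI's harness): scoring an agent across
# a sequence of maintenance tasks on a single evolving repo, where tests
# that passed after earlier tasks must keep passing after later ones.

import subprocess


def run_tests(repo_path: str) -> bool:
    """Run the repo's test suite; True means the codebase is still healthy."""
    result = subprocess.run(["pytest", "-q"], cwd=repo_path)
    return result.returncode == 0


def evaluate_agent(agent, repo_path: str, tasks: list[str]) -> dict:
    """Long-horizon evaluation: count completed tasks and regressions,
    i.e. changes that left previously passing tests broken."""
    completed, regressions = 0, 0
    for task in tasks:
        agent.apply_change(repo_path, task)  # hypothetical agent interface
        if run_tests(repo_path):
            completed += 1
        else:
            regressions += 1
            # A real harness might revert here to isolate each failure.
    return {"completed": completed, "regressions": regressions}
```

The key difference from one-shot benchmarks is the shared, mutable repository: every task inherits the consequences of the agent’s earlier edits, which is exactly the maintainability pressure the blurb describes.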