lmeyerov 16 minutes ago [-]
I liked that they did this work and its sister paper, but disliked how it was positioned as basically the opposite of the truth. It set the community up to misinterpret it from a quick read, punishing people for a quick title or abstract scan. So for the next X months, instead of the paper helping, we have to deal with the brain damage.
The good: It shows that on one kind of benchmark, some kinds of agentically-generated docs don't help. So naively generating these, for one kind of task, doesn't work. Thank you, useful to know!
The bad: Some people will assume this means these don't work in general, or that automation can't generate useful ones.
The truth: These files help measurably, and just a bit of engineering lets you guarantee that for the typical case. As soon as you have an objective function, you can flip it into an eval and set an AI coder to editing these files until they work.
Ex: We recently released https://github.com/graphistry/graphistry-skills to make it easier to use graphistry via AI coding, and by having our authoring AI loop a bit against our evals, we jumped the scores from a 30-50% success rate to 90%+. As we encounter more scenarios (and mine them from our chats, etc.), it's pretty straightforward to flip them into evals and ask Claude/Codex to loop until those work well too.
We do these kinds of eval-driven AI coding loops all the time, and IMO how to engineer these should be the message, not that they don't work on average. There's a deeper example near the middle/end of the talk here: https://media.ccc.de/v/39c3-breaking-bots-cheating-at-blue-t...
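To make that loop concrete, here's a minimal sketch in Python. Everything in it is hypothetical scaffolding — run_evals and ask_model_to_revise are stand-ins for your own eval harness and coding agent, not how graphistry-skills actually does it:

    import pathlib

    def run_evals(doc: str) -> float:
        """Hypothetical: run your agent tasks with doc as context, return success rate 0-1."""
        raise NotImplementedError("plug in your own eval harness")

    def ask_model_to_revise(doc: str, score: float) -> str:
        """Hypothetical: ask Claude/Codex to edit the doc, given the current score."""
        raise NotImplementedError("plug in your own coding agent")

    def improve(path: str = "AGENTS.md", target: float = 0.9, max_iters: int = 10) -> float:
        best = pathlib.Path(path).read_text()
        best_score = run_evals(best)
        for _ in range(max_iters):
            if best_score >= target:
                break
            candidate = ask_model_to_revise(best, best_score)
            score = run_evals(candidate)
            if score > best_score:  # keep only revisions that measurably help
                best, best_score = candidate, score
        pathlib.Path(path).write_text(best)
        return best_score

The key design choice is that the objective function, not vibes, decides whether an edit to the file survives.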
nayroclade 1 hour ago [-]
I suspect AGENTS.md files will prove to be a short-lived relic of an era when we had to treat coding agents like junior devs, who often need explicit instructions and guardrails about testing, architecture, repo structure, etc. But once agents have judgement equivalent to (or better than) a senior engineer's, they can make their own calls about these aspects, and trying to "program" their behaviour via an AGENTS.md file becomes as unhelpful as one engineer trying to micro-manage another's approach to solving a problem.
sdenton4 13 minutes ago [-]
Eh, even for a senior engineer, dropping into a new codebase is greatly helped by an orientation from someone who works on the code. What's where, common gotchas, which tests really matter, and so on. The agents file serves a similar role.
CrzyLngPwd 26 minutes ago [-]
I have a legacy codebase of around 300k lines spread across 1.5k files, and have had amazing success with the agents.md file.
It just prevents hallucinations and coerces the AI to use existing files and APIs instead of inventing them. It also has gold-standard tests and APIs as examples.
Before the agents file, it was just chaos: hallucinations, and having to correct the same things over and over.
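For a flavor of the kind of entry that does this — an illustrative excerpt only, with made-up paths and names:

    ## APIs (use these; do not invent new ones)
    - HTTP calls go through the existing client in src/net/client.py.
    - Auth goes through AuthSession in src/auth/session.py; never build tokens by hand.

    ## Gold-standard examples
    - tests/orders/test_checkout.py is the reference style for integration tests.

    ## Gotchas
    - Files under src/models/generated/ are codegen output; edit the schemas, not the files.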
OutOfHere 9 minutes ago [-]
You might have better luck with more focused task-specific instructions if you can be bothered to write them.
noemit 2 hours ago [-]
The research mostly points to LLM-generated context lowering performance. Human-generated context improves performance, but any kind of AGENTS.md file increases token use, spent on what they call "fake thinking." More research is needed.
d1sxeyes 1 hour ago [-]
Agree. Also, sometimes I intentionally want the agent to do something differently from how it would naturally solve the problem. For example, there might be a specific design decision that the agent should adhere to. Obviously, this will lead to slower task completion, higher inference costs, etc., because I'm asking the agent not to take the path of least resistance.
This kind of benchmark completely misses that nuance.
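As a concrete (made-up) illustration, this is the kind of directive that deliberately overrides the path of least resistance, and that a benchmark scoring only task completion would count against you:

    ## Design decisions (non-negotiable)
    - All state changes go through the event log in src/events/, even where
      a direct DB write would be simpler. Do not bypass it.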
stingraycharles 1 hour ago [-]
I’d say that it needs to be maintained and reviewed by a human, but it’s perfectly fine to let an LLM generate it.
sheept 12 minutes ago [-]
If you let an LLM generate it (e.g. Claude's /init), it'll be a lot more verbose than it needs to be, which wastes tokens and deemphasizes any project-specific preferences you actually want the agent to heed.
dev_l1x_be 46 minutes ago [-]
I never use these files; instead, I give the guardrails for the specific task to each short agent run. Having task-specific "agents.md" files works better for me.
AGENTS.md files are extremely helpful if done well.
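A minimal sketch of that per-task pattern: write a small, task-scoped brief and hand it to one short agent run. Note that agent-cli is a placeholder command, not a real tool — substitute your own agent CLI and its actual flags:

    import subprocess
    import tempfile

    def run_task(task: str, guardrails: list[str]) -> None:
        # Build a small, task-scoped brief instead of one big AGENTS.md.
        brief = f"Task: {task}\nGuardrails:\n" + "".join(f"- {g}\n" for g in guardrails)
        with tempfile.NamedTemporaryFile("w", suffix=".md", delete=False) as f:
            f.write(brief)
        # Hand the brief to one short agent run (placeholder command and flag).
        subprocess.run(["agent-cli", "--instructions", f.name, task], check=True)

    run_task(
        "Fix the flaky retry test in tests/net/",
        ["Touch only tests/net/", "Do not change production retry logic"],
    )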