A few months ago I hacked together something similar to the concept they describe (https://github.com/btucker/agentgit), but I ended up not actually finding it that useful and abandoned it.
I feel like the value would be in analyzing those rich traces with another agent to extract (failure) patterns and learnings, as part of a flywheel setup. As a human I would rarely if ever want to look at this -- I don't even have time to look at the final code itself!
> value would be in analyzing those rich traces with another agent to extract (failure) patterns and learnings
Claude Code supports hooks, which let me run an agent skill at the end of every agent execution to automatically determine whether there were any lessons worth learning from the last session. If there were, new agent skills are automatically created or existing ones updated as appropriate.
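For anyone wanting to wire up something similar, here's a minimal sketch of the shape of it: a script registered under the "Stop" event in .claude/settings.json. The payload field names and the headless `claude -p` invocation are my assumptions, not an official recipe; verify them against the hooks docs for your version.

    #!/usr/bin/env python3
    # Sketch of a Claude Code "Stop" hook script. The hook system passes
    # a JSON payload on stdin; field names here are assumptions.
    import json
    import subprocess
    import sys

    payload = json.load(sys.stdin)

    # Don't re-trigger ourselves if this run was itself started by a Stop hook.
    if payload.get("stop_hook_active"):
        sys.exit(0)

    transcript = payload.get("transcript_path", "")
    if transcript:
        # Hand the transcript to a headless reviewing agent; the prompt
        # here is illustrative.
        subprocess.run([
            "claude", "-p",
            f"Review the session transcript at {transcript}. If there are "
            "durable lessons, create or update the relevant agent skills.",
        ])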
Yes, I've done the same. But the issue is that the agent tends to learn too many lessons, or to overfit those lessons to that single session. I think the benefit of a tool like this is that you can give that agent a wider view when formulating recommendations.
Completely agree. But I wonder how much of that could be accomplished with well-placed code comments that explain the why, so future agent interactions don't misunderstand the code. I have something like this in my AGENTS.md.
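Something like this, for instance (a made-up snippet, not from my actual codebase):

    # Why: the retry cap is 3 because the upstream API rate-limits hard;
    # raising it caused cascading 429s in an earlier incident.
    # Agents: don't "fix" this by bumping the number.
    MAX_RETRIES = 3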
There is no such command, according to the docs [0]. /s
I continue to find it painfully ironic that the Claude Code team is unable to leverage their deep expertise and unlimited token budget to keep the docs even close to up-to-date automatically. Either that or they have decided accurate docs aren't important.
I've been working on https://github.com/btucker/selkie, a complete implementation of a Mermaid parser & renderer in Rust, as an experiment in what's possible with Claude Code. It's still rough around the edges, but I've been blown away by what's been possible. (I'm now using it as a test repo for https://github.com/btucker/midtown)
Last year here in Chicago my wife's bike was stolen overnight. It had an AirTag hidden in a bell on the handlebars. When we woke up and noticed it was missing, we traced it to a park not too far away. We ran over there and called the Chicago PD, who showed up in under 10 minutes. We gave them a description of the bike and showed them where Find My said it was. They went and retrieved it. A surprisingly happy ending, and I was impressed the Chicago PD were so helpful!
I haven't dug too deep, but from what I can tell it's using a bubblewrap sandbox inside a VM on the Mac, built on Apple's Virtualization.framework. It then proxies network traffic over Unix sockets via socat.
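I'd guess the proxying looks something like this (an illustrative socat invocation; the socket path and address are made up, not taken from the actual code):

    # guest side: expose a Unix socket that forwards to a host TCP endpoint
    socat UNIX-LISTEN:/run/net-proxy.sock,fork TCP:192.168.64.1:8080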
I disagree with labeling AI a cargo cult. Crypto fits the description, but a cargo cult by definition implies some sort of ultimate end in which its followers' expectations go drastically unmet.
What AI feels like is the early days of the internet. We saw the dot-com bubble, yet we ultimately live in the internet age. There is no doubt that the post-AI-bubble world will be very much AI-oriented.
This is very different from crypto, which isn't by any measure a technological leap, but rather a crowd frenzy aimed at self-enrichment via Ponzi mechanisms.
Great Engineer + AI = Great Engineer++ (Where a great engineer isn't just someone who is a great coder, they also are a great communicator & collaborator, and love to learn)
I recently watched a mid-level engineer use AI to summarize some of our code: he had it put together a big document describing all the various methods in a file, what they're used for, and so forth. It looked to me like a huge waste of time, as the code itself was already very readable (I say this as someone who recently joined the project), and the "documentation" the AI spit out wasn't much different from what you'd get just by running pydoc.
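For comparison, pydoc will render essentially the same thing from the docstrings already in the source ("json" here just stands in for the module in question):

    import pydoc
    # Renders the same documentation text `python -m pydoc` prints.
    print(pydoc.render_doc("json"))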
He took a couple of days doing this, which was shocking to me. That time would have been better spent reading the code, improving any missing documentation, and, most importantly, asking teammates about necessary context that couldn't just be inferred from the code.
I hate to break it to you, but this guy probably wasn’t working at all. That sounds like a pretense to goof off.
Now, I could believe an intern would do such a thing. I've seen a structural engineering intern spend four weeks creating a finite element model of a single concrete vault. He could have treated the top deck as a concrete beam, used conservative assumptions about the loading, and solved it with pen and paper in 30 minutes.
The same people who just copy-pasted Stack Overflow answers without understanding why or how things work are now using AI to create stuff that they also don't understand.
lol. I am SO glad I don't have to go to StackExchange anymore. There is something toxically awful about using advice from a thread that starts with "Why doesn't my code work?".
Another common synonym for mediocre: has no place on a software development team. Not technically correct, admittedly, but that's how I read the word in a software engineering context. Adequate is not good enough.
“Mediocre” is one of those words where common parlance doesn’t quite line up with the textbook definition. e.g. from the Oxford English Dictionary: “Of middling quality; neither bad nor good...”
Yes, this is it. "Logging on" and "Logging off" were explicit actions that you took as part of your day, instead of just being perpetually connected and reachable.
I love Monkeybrains! I had something in the neighborhood of a 600 Mbps symmetric connection through them in the late 2010s when I lived in SF. The only issue was that when it rained hard the speeds would deteriorate.
Interesting you're getting such slow speeds. Ask them if a tech can stop by and troubleshoot with you.
I understand you're trying to "both sides" an argument. What has that achieved for you in the past? Do you actually change people's opinions this way?