Since Anthropic launched Claude Code, we've been using it a lot. It's the best programming agent I've seen so far: it gives concise answers, it can run shell tools a...
I’m someone who uses LLMs, including Claude Code, a lot.
But come on, this could be a tiny script that does the same thing. Write it with Claude Code if you don’t want to write it yourself; instead of having Claude Code run each time, you’ll be deterministic this way!
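For what it’s worth, here is a minimal sketch of what that tiny script could look like, assuming the job is just opening the merge PR. The branch names, PR title, and the use of the `gh` CLI are my assumptions, not details from the post:

```python
#!/usr/bin/env python3
"""Deterministic stand-in for the repeated LLM step: open the merge PR.

Assumes the GitHub CLI (`gh`) is installed and authenticated in CI, and
that the branch names below match the repo -- both are assumptions.
"""
import subprocess
import sys

BASE = "main"     # hypothetical target branch
HEAD = "release"  # hypothetical source branch


def run(*args: str) -> str:
    """Run a command and fail loudly -- the deterministic part."""
    result = subprocess.run(list(args), capture_output=True, text=True)
    if result.returncode != 0:
        sys.exit(f"{' '.join(args)} failed: {result.stderr.strip()}")
    return result.stdout.strip()


# Skip cleanly when there is nothing to merge.
commits = run("git", "log", "--oneline", f"origin/{BASE}..origin/{HEAD}")
if not commits:
    print("Nothing to merge; skipping PR.")
    sys.exit(0)

url = run(
    "gh", "pr", "create",
    "--base", BASE,
    "--head", HEAD,
    "--title", f"Merge {HEAD} into {BASE}",
    "--body", "Automated merge PR.\n\n" + commits,
)
print(f"Opened {url}")
```

Same behavior on every run, greppable when it breaks, and no tokens burned.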
Yeah this should just be a standard GitHub action. It’s a waste of energy to have an LLM do this over and over.
I see this trend happening a lot lately, where instead of getting the LLM to write code that does a thing, the LLM is repeatedly asked to do the thing, which leads to it just writing the same or very nearly the same code over and over.
Their company is an AI assistant for shopping, so trying to put AI everywhere, including places it shouldn’t be, is gonna happen.
I like my build scripts dependable, debuggable, and deterministic. This is wild. When the bot makes a pull request and the user (who may be someone else at some point) doesn’t respond with exactly what the prompt expects, what happens? What happens when Claude Code updates, or has an outage? And don’t rename that GitHub action at the end without remembering to update the prompt as well.
Or worse: a single bad actor (according to the company) poisoned Grok into white supremacy. How many unsupervised, privileged LLM commands could run in a short window if an angry employee at Anthropic poisoned the model to do malicious damage to the servers, environments, or pipelines it has access to?
All they actually need Claude for is to skip QA and hope that reading the code is a good enough substitute.
This really could be a script to create a PR for the merge, request a review from Claude, then automate the rest.
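Sketched out, assuming the PR already exists from a create step like the one in the earlier sketch. The reviewer handle here is made up; it would be whatever account the Claude review integration uses:

```python
#!/usr/bin/env python3
"""Hand an existing PR to a Claude review, then automate the rest.

Takes the PR number or URL as its first argument. The reviewer handle
"claude-reviewer-bot" is hypothetical, and --auto --squash is one
possible merge policy, not necessarily theirs.
"""
import subprocess
import sys

pr = sys.argv[1]  # PR number or URL from the create step


def gh(*args: str) -> None:
    """Thin wrapper over the GitHub CLI that fails the job on error."""
    subprocess.run(["gh", *args], check=True)


# Request the review from the (hypothetical) Claude bot account.
gh("pr", "edit", pr, "--add-reviewer", "claude-reviewer-bot")

# "Automate the rest": GitHub merges once checks and the review pass.
gh("pr", "merge", pr, "--auto", "--squash")
```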