I don’t use AI tools when I code (my work IDE is way too old & I prefer it that way), but my workplace ran a pilot where people tried Cursor for a number of months.
They found it was useful as a first step in the process, but the output almost always had to be checked by hand afterwards. Measured “code efficiency” changes ranged from 10% faster to 30% slower, averaging roughly 20% slower overall. Yet almost all participants reported feeling like they’d gotten about 20% faster. It made them feel like they were working faster than they actually were, even though it seems to have been actively hindering them.



Absolutely agree.
From what I understand of our pilot, most users ended up using it to pregenerate scripts that are effectively “copy > paste > tweak” dozens if not hundreds of times but can’t be automated for one reason or another, then quickly checking the script for errors. That’s different from your PM/eng use cases, but I believe your sentiment holds true.
I don’t use LLMs because I personally don’t like them, so I don’t really know where someone might think they fit best inside a workflow. But I can very easily see myself spending half an hour trying to prompt my way to the perfect result rather than spending 10 minutes doing it myself, because I tend to basically put on blinders once I start a task.