We don't currently have a policy, but there have been more and more PRs and other interactions (comments, issues, etc.) that seem LLM-generated, so I think it's getting to be time to adopt one. @archseer and I were discussing this the other day, and we like Ghostty's policy: https://github.com/ghostty-org/ghostty/blob/main/AI_POLICY.md. I think LLMs are good at certain tasks, like building timelines or helping you dig into unfamiliar code, but it's very easy to abuse them and waste people's time. And I find the output quality pretty questionable most of the time.
Hello!
I've just been checking out Helix recently, and I quite enjoy what I've seen so far! It's fast, intuitive, ships a lot of nice defaults, and has worked well for everything I've used it for.
I apologize if this has been asked before (I was unable to find any previous discussion of it), but I was curious whether the project has any specific guidelines on using LLMs as part of the development process, or whether it would accept PRs that involved Claude, Copilot, ChatGPT, etc., in some fashion.
(My own interest in asking is that I actively avoid software stacks that incorporate LLMs any more than I am forced to.)
Thank you!