Replies: 2 comments 3 replies
Measuring the impact is an inherently difficult task, as you note, so I can only really comment on it from my perspective. When I'm using AI assistants for coding, I generally have an idea of what I'm trying to achieve, so I'm looking for a way to speed up the process. This is where AI comes in: I delegate one or more tasks to it, and when it's complete I can review the result through the lens of "is this what I would have done?"
Hi folks,
I am enjoying using instructions on my projects. However, I would like to assess the actual impact they have on GitHub Copilot's output.
I have read the topic "We need evals or something". But that will take time to implement, and frankly speaking, it will be too general (one project's trash is another project's treasure).
Do you have any criteria to evaluate the impact of instructions on a project?
I am tuning every instruction to the project's needs, but I don't know how to measure real impact.
I have instructed Copilot to respond in various human languages or adopt a Yoda-like tone, and that kind of impact is easy to spot.
Which criteria do you use to measure the impact of technical instructions?
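One low-effort approach I've been considering is a project-specific rubric: run the same prompts with and without the instructions file, then score each response against a small set of pass/fail checks derived from the instructions themselves. The sketch below is purely illustrative — the criteria, helper names, and sample responses are made up for this example, not part of any real eval framework:

```python
# Hypothetical sketch: score Copilot responses against a per-project
# rubric of simple pass/fail checks. All criteria and sample responses
# below are illustrative assumptions, not a real evaluation tool.

CRITERIA = {
    # Each check mirrors one rule from the project's instructions file.
    "uses_type_hints": lambda text: "->" in text,
    "mentions_tests": lambda text: "test" in text.lower(),
    "no_deprecated_api": lambda text: "os.popen" not in text,
}

def score(response: str) -> dict:
    """Return pass/fail per criterion for one generated response."""
    return {name: check(response) for name, check in CRITERIA.items()}

def pass_rate(responses: list[str]) -> float:
    """Fraction of criterion checks passed across all responses."""
    results = [score(r) for r in responses]
    total = sum(len(r) for r in results)
    passed = sum(sum(r.values()) for r in results)
    return passed / total

# Compare the same prompt answered with and without instructions:
baseline = ["def f(x): return x * 2"]
with_instructions = ["def f(x: int) -> int:\n    return x * 2  # tested"]
print(pass_rate(baseline), pass_rate(with_instructions))
```

The pass-rate delta between the two runs then becomes the "impact" number, tuned to whatever the project actually cares about rather than a generic benchmark.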