Question regarding "Accuracy" Metric #171

@litcoderr

Dear Authors.

Based on your evaluation script,
accuracy is credited if and only if the answer string exactly matches the GT.

    def eval_acc(self):
        scores = []
        for i in range(len(self.accuracy["answer"])):
            answer = self.accuracy["answer"][i]
            GT = self.accuracy["GT"][i]
            if answer == GT:
                scores.append(1.0)
            else:
                scores.append(0.0)

        scores = sum(scores) / len(scores)
        return scores

I find this odd, since there could be multiple valid phrasings of a correct answer, for example:

GT: A
answer (model prediction): The correct answer is A. The ego vehicle is steering to the left.

Such a case is not counted as correct, even though the model's prediction is contextually the same as the GT.
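For reference, one common workaround is to extract the option letter from a free-form prediction before comparing. This is only a minimal sketch of what I mean, not code from your repository; `relaxed_match` is a hypothetical helper name:

```python
import re

def relaxed_match(prediction: str, gt: str) -> bool:
    """Hypothetical relaxed comparison: credit a free-form prediction
    if the first standalone option letter it contains equals the GT."""
    # Exact string match still counts, as in the current script.
    if prediction.strip() == gt.strip():
        return True
    # Otherwise, look for standalone option letters (A-E) in the prediction
    # and compare the first one found against the GT letter.
    letters = re.findall(r"\b([A-E])\b", prediction)
    return bool(letters) and letters[0] == gt.strip()

# The example from this issue would then be scored as correct:
relaxed_match("The correct answer is A. The ego vehicle is steering to the left.", "A")
```

Something along these lines (or an LLM-based or rule-based answer extractor) would avoid penalizing predictions that are contextually identical to the GT.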

Since we do not have the ground truth of the validation set, I was not able to verify whether my concern holds. I would really appreciate it if you could look into this inquiry and get back to us.

Thank you very much for your amazing work and your precious time.
