What is the agreement rate, and how is it used to fine-tune an auto-complete evaluation form?

The agreement rate measures how often the answers selected by Virtual Supervisor match those selected by a human evaluator. It is calculated both per question and as an overall metric for the evaluation form. A higher agreement rate indicates stronger alignment between AI-generated evaluations and human judgment, while lower agreement highlights areas that may need refinement.
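As a concrete illustration, the sketch below computes per-question and overall agreement rates from paired answers. This is a minimal, hypothetical example of the arithmetic behind the metric; the data structures and function name are illustrative and are not part of the product, which reports these figures for you.

```python
# Hypothetical illustration of the agreement-rate arithmetic.
# Each evaluation maps a question to the answer chosen by
# Virtual Supervisor (auto) and by the human evaluator (human).
from collections import defaultdict

def agreement_rates(evaluations):
    """Return per-question and overall agreement as fractions between 0 and 1."""
    matches = defaultdict(int)  # question -> answers that matched
    totals = defaultdict(int)   # question -> answers compared

    for evaluation in evaluations:
        for question, (auto_answer, human_answer) in evaluation.items():
            totals[question] += 1
            if auto_answer == human_answer:
                matches[question] += 1

    per_question = {q: matches[q] / totals[q] for q in totals}
    overall = sum(matches.values()) / sum(totals.values())
    return per_question, overall

# Example: two evaluations of a two-question form.
evaluations = [
    {"Q1": ("Yes", "Yes"), "Q2": ("No", "Yes")},
    {"Q1": ("Yes", "Yes"), "Q2": ("No", "No")},
]
per_question, overall = agreement_rates(evaluations)
print(per_question)  # {'Q1': 1.0, 'Q2': 0.5}
print(overall)       # 0.75
```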

Teams use the agreement rate to identify unclear questions, inconsistent scoring logic, or gaps in evaluation guidance, and to improve how well the auto-complete evaluation form performs at scale.

How to calculate and use the agreement rate

  1. Evaluate interactions using the form
    • Go to Menu > Analytics > Analytics Workspace > Interactions.
    • Open an interaction and navigate to the Quality Summary tab.
    • Click Create Evaluation.
    • Select the Agent Auto-Complete evaluation form.
    • Choose a human evaluator to manually review the auto-completed answers.
    • Click Create to generate the evaluation.
  2. Review and update the evaluation
    • Review each question in the evaluation.
    • Use the transcript as evidence to update any incorrect automated responses.
    • Submit the completed evaluation.
  3. Test the form across multiple interactions
    • Repeat this process for at least 20 different interactions to ensure reliable agreement data.
  4. Review agreement metrics
    • Go to Conversation Intelligence > Quality Management > Evaluation Forms.
    • Open the latest published version of the evaluation form.
    • Review the overall agreement rate and the agreement rate for each question.
  5. Refine the form
    • Identify questions with low agreement rates (see the sketch after these steps).
    • Update question wording, answer options, or scoring logic based on observed discrepancies.
    • Retest the form as needed until the desired agreement rate is achieved.
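To make the refinement step concrete, the hypothetical sketch below takes per-question agreement rates such as those reported in step 4 and flags the questions that fall below a chosen target. The question names, rates, and the 80% target are illustrative assumptions, not product defaults.

```python
# Hypothetical example: flag questions whose agreement rate falls below
# a chosen target so they can be reworded or rescored first.
# The rates and the 0.80 target are illustrative, not product defaults.
TARGET = 0.80

per_question_agreement = {
    "Greeting used": 0.95,
    "Issue resolved": 0.90,
    "Empathy shown": 0.65,    # low agreement: wording may be ambiguous
    "Hold procedure": 0.70,   # low agreement: scoring logic may need review
}

needs_refinement = sorted(
    (q for q, rate in per_question_agreement.items() if rate < TARGET),
    key=lambda q: per_question_agreement[q],
)

for question in needs_refinement:
    print(f"Review '{question}': agreement {per_question_agreement[question]:.0%}")
# Review 'Empathy shown': agreement 65%
# Review 'Hold procedure': agreement 70%
```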