FAQs: AI Scoring

What is the agreement rate, and how is it used to fine-tune an auto-complete evaluation form?

The agreement rate measures how often the answers selected by Virtual Supervisor match those selected by a human evaluator. It is calculated both per question and as an overall metric for the evaluation form. A higher agreement rate indicates stronger alignment between AI-generated evaluations and human judgment, while lower agreement highlights areas that may need refinement.

Teams use the agreement rate to identify unclear questions, inconsistent scoring logic, or gaps in evaluation guidance, and to improve how effectively the auto-complete evaluation form performs at scale.
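
The agreement rate for a question is simply the share of evaluated answers where the AI-selected option matches the human-selected option, and the overall rate is the same calculation across all answered questions. The Python sketch below illustrates this; the data layout and field names are hypothetical and are not part of any Genesys API.

Example (Python):

# Illustrative only: compute per-question and overall agreement between
# AI-selected answers and human-reviewed answers.
from collections import defaultdict

def agreement_rates(answer_pairs):
    """answer_pairs: one dict per question per evaluated interaction, e.g.
    {"question_id": "Q1", "ai_answer": "Yes", "human_answer": "Yes"}."""
    per_question = defaultdict(lambda: [0, 0])   # question_id -> [matches, total]
    for pair in answer_pairs:
        counts = per_question[pair["question_id"]]
        counts[0] += int(pair["ai_answer"] == pair["human_answer"])
        counts[1] += 1
    question_rates = {q: matches / total for q, (matches, total) in per_question.items()}
    total_matches = sum(m for m, _ in per_question.values())
    total_answers = sum(t for _, t in per_question.values())
    overall = total_matches / total_answers if total_answers else 0.0
    return question_rates, overall

# Two interactions evaluated against a two-question form.
sample = [
    {"question_id": "Q1", "ai_answer": "Yes", "human_answer": "Yes"},
    {"question_id": "Q2", "ai_answer": "No",  "human_answer": "Yes"},
    {"question_id": "Q1", "ai_answer": "Yes", "human_answer": "Yes"},
    {"question_id": "Q2", "ai_answer": "No",  "human_answer": "No"},
]
rates, overall = agreement_rates(sample)
print(rates)    # {'Q1': 1.0, 'Q2': 0.5}
print(overall)  # 0.75

A low rate for a particular question (Q2 in this example) is the signal to revisit that question's wording, answer options, or scoring guidance.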

How to calculate and use the agreement rate

  1. Evaluate interactions using the form
    • Go to Menu > Analytics > Analytics Workspace > Interactions.
    • Open an interaction and navigate to the Quality Summary tab.
    • Click Create Evaluation.
    • Select the Agent Auto-Complete evaluation form.
    • Choose a human evaluator to manually review the auto-completed answers.
    • Click Create to generate the evaluation.
  2. Review and update the evaluation
    • Review each question in the evaluation.
    • Use the transcript as evidence to update any incorrect automated responses.
    • Submit the completed evaluation.
  3. Test the form across multiple interactions
    • Repeat this process for at least 20 different interactions to ensure reliable agreement data.
  4. Review agreement metrics
    • Go to Conversation Intelligence > Quality Management > Evaluation Forms.
    • Open the latest published version of the evaluation form.
    • Review the overall agreement rate and the agreement rate for each question.
  5. Refine the form
    • Identify questions with low agreement rates.
    • Update question wording, answer options, or scoring logic based on observed discrepancies.
    • Retest the form as needed until the desired agreement rate is achieved.


When Does AI Scoring Generate a Charge?

A charge for AI Scoring is incurred whenever a quality evaluation form includes one or more AI-Scoring-enabled questions and that form is used to evaluate an interaction. Charges apply regardless of whether the AI-Scoring-enabled questions are ultimately answered.

The only exception is when the evaluation encounters an AI Scoring–related error at the evaluation level. In those cases, no charge is generated.
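
Expressed as simple decision logic, the rule reads as in the sketch below. The helper function is purely illustrative and is not part of any Genesys SDK or API.

Example (Python):

# Illustrative restatement of the charging rule described above.
def incurs_ai_scoring_charge(form_has_ai_scoring_questions: bool,
                             evaluation_level_ai_error: bool) -> bool:
    """A charge applies when the evaluation form contains at least one
    AI-Scoring-enabled question, even if those questions go unanswered;
    an evaluation-level AI Scoring error suppresses the charge."""
    if evaluation_level_ai_error:
        return False
    return form_has_ai_scoring_questions

print(incurs_ai_scoring_charge(True, False))   # True: AI questions on the form, no error
print(incurs_ai_scoring_charge(True, True))    # False: evaluation-level AI Scoring error
print(incurs_ai_scoring_charge(False, False))  # False: no AI-Scoring-enabled questions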

Will Reports Include Auto-Complete Evaluation Data?

Q: Do current reports include data from auto-complete evaluations?

A: Not yet. Currently, reports do not include data generated from auto-complete evaluations. Support for this data is planned for both existing Quality Management reports and the new question-level reports, with availability targeted for mid-Q2 2026.

Q: What does this mean for supervisors and analysts?

A: Until reporting support is released, auto-complete evaluation data will not appear in dashboards or exported reports. Once the update becomes available, you’ll be able to review and analyze auto-complete evaluations alongside manually completed evaluations, providing a more complete picture of overall quality performance.

Q: Will any action be required to access this data once it becomes available?

A: No. Once the reporting update is released, auto-complete evaluation data will be included automatically in all applicable reports—no configuration changes or additional setup required.

Can I use Quality Policies to create Agent Auto-Complete evaluations?

No, Quality Policies do not support Agent Auto-Complete evaluations.

For more information on how to generate an Agent Auto-Complete evaluation, see How do I generate an Agent Auto-Complete Evaluation? below.

How do I generate an Agent Auto-Complete Evaluation?

You can generate evaluations using an Agent Auto-Complete evaluation form in one of two ways: manually for a specific interaction from its Quality Summary tab (as described in the steps above), or automatically at scale using the AI Scoring Rules Management API (described below).

Generating Auto-Complete Evaluations Using AI Scoring Rules Management

To automate the generation of evaluations at scale, configure an Agent Scoring Rule using the AI Scoring Rules Management API.

Step 1: Create an Agent Scoring Rule

Use the following API:

POST /api/v2/quality/programs/{programId}/agentscoringrules

Example Request:

POST /api/v2/quality/programs/bd27fab3-6e94-4a93-831e-6f92e664fc61/agentscoringrules HTTP/1.1
Host: api.inindca.com
Authorization: Bearer *******************
Content-Type: application/json

Example JSON body:

{
  "programId": "bd27fab3-6e94-4a93-831e-6f92e664fc61",
  "samplingType": "Percentage",
  "submissionType": "Automated",
  "evaluationFormContextId": "14818b50-88c0-4cc5-8284-4ed0b76e3193",
  "enabled": true,
  "published": true,
  "samplingPercentage": 97
}

Field Explanations

  • programId – ID of the Speech & Text Analytics (STA) program.
  • evaluationFormContextId – The contextId of the automated evaluation form to use.
  • samplingPercentage – Percentage of interactions that should automatically generate evaluations.
  • enabled – Must be true for the scoring rule to be active.
  • published – Must be true for the rule to take effect.
  • submissionType – Set to "Automated" to ensure evaluations are auto-generated.

Once the rule is active, evaluations will automatically be created for interactions that meet the rule’s criteria.
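
If you prefer to script this call rather than send raw HTTP, the sketch below issues the same request with Python's requests library. The region host, bearer token, and IDs are the placeholder values from the example above; replace them with your own, and add whatever error handling your environment requires.

Example (Python):

# Minimal sketch: create an Agent Scoring Rule via the AI Scoring Rules
# Management API. Host, token, and IDs below are placeholders.
import requests

HOST = "https://api.inindca.com"                     # your Genesys Cloud region's API host
TOKEN = "<OAuth bearer token>"                       # obtained through your usual OAuth flow
PROGRAM_ID = "bd27fab3-6e94-4a93-831e-6f92e664fc61"  # Speech and Text Analytics program ID

body = {
    "programId": PROGRAM_ID,
    "samplingType": "Percentage",
    "submissionType": "Automated",
    "evaluationFormContextId": "14818b50-88c0-4cc5-8284-4ed0b76e3193",
    "enabled": True,
    "published": True,
    "samplingPercentage": 97,
}

response = requests.post(
    f"{HOST}/api/v2/quality/programs/{PROGRAM_ID}/agentscoringrules",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=body,                                       # requests sets Content-Type: application/json
    timeout=30,
)
response.raise_for_status()                          # surface HTTP errors
print(response.json())                               # the created agent scoring rule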


Which Genesys Cloud regions support AI scoring, and how are they mapped to AWS Bedrock regions?

The following table shows the AWS region mappings used by Genesys Cloud for AI scoring with Bedrock models.

Genesys Region           Mapped to Bedrock Region for AI Scoring
us-east-1                us-east-1
me-central-1             eu-west-1
eu-west-2                eu-central-1
us-west-2                us-west-2
ap-southeast-2           ap-southeast-2
ap-northeast-2           ap-northeast-2
ap-northeast-1           ap-northeast-1
ap-northeast-3           ap-northeast-1
eu-west-2                eu-west-2
sa-east-1                sa-east-1
ca-central-1             ca-central-1
ap-south-1               ap-south-1
FedRAMP – us-east-2      us-east-1, us-west-2 *
eu-central-1             eu-central-1

* Done via AWS using cross-region inference.

Is there a best practices guide for using AI Scoring?

Yes. To learn how to use AI Scoring effectively and get the most accurate results, see AI scoring best practices.