Responsible AI at Pearson VUE
As technology becomes more powerful and accessible, high-stakes testing methods have evolved in parallel. For Pearson VUE, this includes the adoption of assistive capabilities that employ Artificial Intelligence (AI). We are sensitive to the valid concerns regarding some uses of AI, particularly with respect to privacy, security, and bias, and we are committed not only to fostering equality and fairness, but also to transparency and maintaining the highest ethical standards in testing and proctoring practices.
Our promise to testing candidates and exam owners
We recognize that when using Pearson VUE systems and tools, candidates and test owners place their trust in us. That’s a responsibility we take very seriously, and it’s why we’re committed to designing, developing, and using AI with integrity. Any application of AI technology is responsibly designed to respect and protect data privacy while mitigating the risk of discrimination and fraud for both candidates and testing programs. Above all, we are committed to a human-controlled approach to the use of AI technology for test delivery purposes. The following values and principles guide our efforts to use AI responsibly, together with actionable commitments to implement AI technology wisely, ethically, and fairly.
Privacy and security
Diligently protect test-taker PII data.
Our third-party partners who provide AI services do not retain candidate data, and all third parties we collaborate with must meet stringent requirements and agree to data deletion policies before handling candidate data.
We adhere to local and global data privacy and retention laws and build our systems to enable compliance with required regulations.
System tests, audits, and penetration testing are standard, ongoing practices.
Fairness and anti-bias
Minimize the potential for unfair bias and/or impermissible discriminatory decisions.
To protect the integrity of both test takers and certification/licensure programs, we use AI technology throughout the testing experience to assist human greeters and proctors with detection and notification of potential irregularities that require human evaluation, such as:
- Identification verification: Confirmation that the physical face of the on-screen testing candidate matches the image on the identification document provided during check-in. If a match between the two images cannot be confirmed, human greeters are engaged to evaluate and make the necessary identification confirmations.
- Facial comparison/verification: Continuous identity confirmation during the testing process. A limited type of facial recognition known as “facial comparison” or “facial verification” compares the on-screen image of the exam candidate to the image captured during the check-in process and confirms the on-screen person is the same throughout the testing process. If a match between the two images cannot be confirmed, human intervention is requested to complete the validation.
- Body presence/movement sensing: Monitors single test-taker presence throughout the online testing process. Sensors verify the test-taker is present and in complete view of the observing proctor to protect against fraud.
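The human-in-the-loop pattern behind the checks above can be sketched as follows. This is a minimal, hypothetical illustration, not Pearson VUE's actual implementation: the similarity function, the threshold value, and the function names are all assumptions. The key property it demonstrates is that an uncertain comparison is never auto-rejected; it is routed to a human.

```python
# Hypothetical sketch of human-in-the-loop identity verification.
# Embeddings, threshold, and naming are illustrative assumptions.

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(x * x for x in b) ** 0.5
    return dot / (norm_a * norm_b)

MATCH_THRESHOLD = 0.85  # assumed confidence cutoff

def verify_identity(checkin_embedding, live_embedding):
    """Return 'match' when the comparison is confident; otherwise escalate.

    The automated step never rejects a candidate outright: any
    uncertain comparison is escalated to a human greeter or proctor.
    """
    score = cosine_similarity(checkin_embedding, live_embedding)
    if score >= MATCH_THRESHOLD:
        return "match"
    return "escalate_to_human"
```

The design choice worth noting is that the only two outcomes are "match" and "escalate_to_human"; a "reject" outcome is deliberately absent from the automated path.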
Take responsibility for candidate-impacting decisions and actions.
We use AI systems and processes that prohibit automated decisions that could jeopardize a candidate’s ability to take or complete a test.
AI functionality is limited to exam session observation and, when appropriate, requesting further review by human proctors, who then take any necessary actions. For example, upon observing behaviors most associated with attempted fraud, such as off-screen eye movements or background noise, the AI technology flags the session for a live (human) proctor, ensuring that any potential irregularity receives contextual human observation before a handling decision is made.
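The escalation-only behavior described above can be sketched as a policy in which the automated layer's action space simply does not include any candidate-impacting action. The event names and flag list below are illustrative assumptions, not a real event taxonomy.

```python
# Hypothetical sketch of an escalation-only AI policy: the automated
# layer may observe a session or request human review, but adverse
# actions (pausing or revoking an exam) are reserved for human proctors.
# Event names and the flag list are illustrative assumptions.

FLAGGABLE_EVENTS = {"off_screen_gaze", "background_noise", "second_person"}

def ai_handle_event(event: str) -> str:
    """The automated layer's only possible outputs are observation
    and a request for human review; it cannot suspend a session."""
    if event in FLAGGABLE_EVENTS:
        return "request_human_review"  # a live proctor decides next steps
    return "continue_observing"        # no candidate-impacting action
```

Because "suspend" or "terminate" never appears in the automated layer's return values, any candidate-impacting decision can only originate with the human proctor who receives the review request.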
Transparency and governance
Apply AI best practices broadly, effectively, and consistently, with ethical and technical oversight.
At present, there are no universal ethical and technical standards for the use of AI. Until such standards are implemented, we contract independent, third-party reviews and audits across AI design, development, and operations to provide strong assurance regarding our AI standards and best practices.
A governance committee provides oversight of these engagements and is responsible for 1) monitoring operational performance data and 2) any decisions made in respect of these practices.
We welcome the opportunity to work with partners and customers who share our commitment to managing AI-based services with the utmost integrity and whose ethics align with our own.
Last updated 2023-05-22