CAQE makes audio quality evaluation easier by moving it from the lab to the web, enabling fast subject recruitment and quick results with little effort from investigators.

Crowdsourced Audio Quality Evaluation (CAQE)

Researchers

Mark Cartwright, Bryan Pardo, Gautham Mysore, Matt Hoffman

Overview

Crowdsourced Audio Quality Evaluation (CAQE, pronounced "cake") is a project that aims to make evaluating the audio quality of new audio algorithms a piece of cake for researchers. Automated objective evaluation methods are fast, cheap, and require little effort from the investigator. However, objective methods do not exist for the output of every audio processing algorithm, often correlate poorly with human quality assessments, and typically require ground-truth data. Subjective human ratings of audio quality are the gold standard for many tasks, but they are expensive, slow, and demand considerable effort to recruit subjects and run listening tests. Moving listening tests from the lab to the micro-task labor market of Amazon Mechanical Turk speeds data collection and reduces investigator effort. However, it also reduces investigators' control over the testing environment, introducing new variability and potential biases into the data. This work addresses these concerns and compares crowdsourced evaluations to lab-based evaluations.
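
To make the crowdsourcing step concrete: a web-hosted listening test can be published to Amazon Mechanical Turk as an "External Question" HIT, which embeds an externally hosted test page in the worker's task frame. The sketch below uses boto3 (the AWS SDK for Python) against the MTurk requester sandbox; the test URL, reward, and timing values are illustrative placeholders, and this is a generic example rather than CAQE's own deployment code.

    import boto3

    # Use the requester sandbox while developing; drop endpoint_url for production.
    client = boto3.client(
        "mturk",
        region_name="us-east-1",
        endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
    )

    # An ExternalQuestion points workers at an externally hosted test page
    # (e.g., a listening-test server). The URL below is a placeholder.
    external_question = """
    <ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
      <ExternalURL>https://example.com/listening-test</ExternalURL>
      <FrameHeight>800</FrameHeight>
    </ExternalQuestion>
    """

    response = client.create_hit(
        Title="Rate the quality of short audio clips",
        Description="Listen to short audio clips and rate their quality.",
        Keywords="audio, listening, quality, rating",
        Reward="0.50",                     # payment per assignment, in USD (placeholder)
        MaxAssignments=20,                 # number of distinct workers to recruit
        LifetimeInSeconds=7 * 24 * 3600,   # how long the HIT stays visible
        AssignmentDurationInSeconds=1800,  # time allotted to each worker
        Question=external_question.strip(),
    )
    print("HIT ID:", response["HIT"]["HITId"])

Once the HIT is live, workers complete the hosted test in their browsers and their ratings are collected by the test server, which is what allows recruitment and data collection to proceed with minimal investigator effort.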

Related Papers

[pdf] [poster] Cartwright, M., Pardo, B., Mysore, G., and Hoffman, M. Fast and Easy Crowdsourced Perceptual Audio Evaluation. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2016.

Software

CAQE GitHub repository