With the Help of AQuA, Translation Team Discovers Incomplete Verse Just Before Publishing

AQuA (Augmented Quality Assessment) is a quality-checking co-pilot for translators and consultants, developed by SIL’s AI team. AQuA’s assessments help users identify potential areas for revision in their translation drafts faster and more thoroughly.

AQuA assessments include heat maps designed to help translators and consultants quickly identify verses and chapters that may require a closer look, based on word-correspondence and semantic-similarity scores.
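AQuA’s exact scoring methods aren’t described in this article, but a minimal sketch of a verse-level semantic-similarity check might compare each draft verse to its counterpart in a reference translation using multilingual sentence embeddings. In the sketch below, the LaBSE model and the 0.7 flagging threshold are illustrative assumptions, not AQuA’s actual implementation.

```python
# A minimal sketch of a verse-level semantic-similarity check.
# Assumptions (not AQuA's actual implementation): the LaBSE multilingual
# model and the 0.7 flagging threshold are illustrative choices.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/LaBSE")

def similarity_scores(draft_verses, reference_verses):
    """Return one cosine-similarity score per verse pair (higher = closer meaning)."""
    draft_emb = model.encode(draft_verses, convert_to_tensor=True)
    ref_emb = model.encode(reference_verses, convert_to_tensor=True)
    # Compare each draft verse with its corresponding reference verse.
    return [float(util.cos_sim(d, r)) for d, r in zip(draft_emb, ref_emb)]

draft = ["Au commencement, Dieu créa les cieux et la terre."]
reference = ["In the beginning God created the heavens and the earth."]
for verse_num, score in enumerate(similarity_scores(draft, reference), start=1):
    status = "review" if score < 0.7 else "ok"
    print(f"verse {verse_num}: {score:.2f} ({status})")
```

Per-verse scores like these can then be binned into a heat map, one cell per verse, so that low-scoring passages stand out at a glance.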

The user can also visualize predicted word alignments between their draft and a reference translation, or see highlighting within a verse that flags potentially extraneous or missing words.
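Again purely as an illustration: once an alignment between source and draft tokens has been predicted, words left unaligned on either side are natural candidates for highlighting. The sketch below hard-codes the alignment pairs; AQuA predicts them with a model, which is not reproduced here.

```python
# A toy sketch of flagging candidate extraneous or missing words from a
# word alignment. The alignment pairs are hard-coded for illustration;
# AQuA predicts alignments with a model, which is not reproduced here.
def unaligned_words(source_tokens, draft_tokens, alignment):
    """alignment: set of (source_index, draft_index) pairs."""
    aligned_src = {s for s, _ in alignment}
    aligned_drf = {d for _, d in alignment}
    # Words with no counterpart on the other side get flagged.
    missing = [w for i, w in enumerate(source_tokens) if i not in aligned_src]
    extraneous = [w for j, w in enumerate(draft_tokens) if j not in aligned_drf]
    return missing, extraneous

source = ["God", "created", "the", "heavens", "and", "the", "earth"]
draft = ["Dieu", "créa", "les", "cieux"]  # the end of the verse is untranslated
alignment = {(0, 0), (1, 1), (2, 2), (3, 3)}
missing, extraneous = unaligned_words(source, draft, alignment)
print("possibly missing from draft:", missing)
print("possibly extraneous in draft:", extraneous)
```

Unaligned source words, as in this toy example where the end of the verse has no counterpart in the draft, are exactly the kind of signal that can surface an untranslated portion of a verse.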

The AQuA app is undergoing extensive field testing in Bible translation projects all over the globe and has garnered significant positive feedback.

Local translation teams in Asia recently ran AQuA’s assessments on a draft that had completed the robust quality-checking process and was considered ready to be published.

After reviewing the assessments, the translation team was surprised to discover a portion of a verse that had not been translated from the source text. They also found words they believed had been incorrectly or inconsistently translated. A consultant or consultant-in-training confirmed these issues and stated that, without the insight provided by AQuA, they would not have been found before publication. Because of AQuA, the issues will be remedied in the final published text.

Other AQuA trial participants report that AQuA accelerates translation and quality checking when it is incorporated earlier in the process. One consultant estimated a potential 10-20% increase in overall efficiency.

“I’m excited to see that the months of coding that went into the AQuA tools are now having a tangible impact in real-world translation projects, resulting in an increase in quality in several Bible translations where it has been trialed,” reported Cassie Weishaupt, Data Scientist at SIL. “We are excited to scale up use of AQuA in the coming months, and hopefully see these tools positively impact many more language communities!”

Trial participants have also provided constructive feedback on where they felt the assessments could be improved, helping the AQuA team continue to develop the tool by adjusting current assessments and adding new ones for users to evaluate.

“It’s great to hear positive feedback from AQuA users, but even better is hearing feedback that helps our team improve AQuA’s usefulness in a variety of linguistic contexts,” explained Jeremy Hodes, SIL’s AI Advocate. “We can’t do that without the invaluable knowledge, wisdom, and prayerful insight of those working directly with local language communities. We’re deeply thankful to those who graciously contribute their time testing new tools that can be buggy and imperfect at times.”

AQuA team lead Mark Woodward added, “It is such a blessing to interact face-to-face with the translation teams on the ground and learn more about the wonderful work they’re doing. Any tool we can provide to make their jobs easier is a big win. We are looking forward to continued collaboration with translation teams and hearing about the ways they’re choosing to incorporate AI into their workflow.”