Aya Vision Pushes AI Beyond Language Barriers with Multimodal Breakthroughs
- By Anshika Mathews
- Published on

While models are becoming more sophisticated, a major difficulty remains: ensuring AI understands and responds intuitively across languages and modalities. Cohere For AI, Cohere's open research division, has launched Aya Vision to push the boundaries of multilingual and multimodal AI, offering a glimpse of a more inclusive, context-aware future. Aya Vision is designed to operate fluently in 23 languages spoken by more than half the world's population.

A New Standard for AI Evaluation

Traditionally, AI models have been evaluated using rigid accuracy metrics. However, these methods frequently penalise models for minor deviations, such as punctuation or language, even when the intended meaning remains identical. This strict evaluation approach fails to capture what people genuinely want: AI that is natural, intuitive, and contextually aware.

Recognising this limitation, Cohere created a
