UK-based startup Wayve has introduced LINGO-1, a Vision-Language-Action Model (VLAM) designed to advance the learning and explainability of its AI Driver technology, the software at the core of its autonomous vehicles. LINGO-1 aims to address a long-standing challenge in artificial intelligence by shedding light on the decision-making process within neural networks, offering insight into the "why" and "how" behind their choices.
Trained on real-world data in which expert drivers narrate their reasoning as they drive, LINGO-1 is uniquely positioned to explain the rationale behind driving actions. By incorporating language as a core component, Wayve introduces a new data source for interpreting, explaining, and training AI models, a significant step towards safer and more intelligent self-driving systems.
Notably, LINGO-1 can answer questions about diverse driving scenarios, enabling valuable feedback loops for model improvement. Wayve, which operates in both London and California and tests its fleet of vehicles in multiple UK cities, aims to be the first to deploy autonomous technology across 100 urban centers.
Alex Kendall, Co-founder & CEO of Wayve, expressed his enthusiasm for LINGO-1, stating, “LINGO-1 marks a big step for embodied AI: aligning vision, language, and action to deliver more intelligent and trusted autonomous vehicles. We are excited by the capabilities we observe from LINGO-1 today and we believe natural language will provide a powerful step change in how we understand and interact with robotics.”
Kendall further emphasized Wayve's commitment to advancing the frontiers of science to usher in a safer, smarter, and more sustainable future of transportation. LINGO-1, he asserts, has the potential not only to enhance the intelligence of the company's AI Driver system but also to close the gap in public trust, a promising start in harnessing its full capabilities.