Computer Vision Engineer - Autonomous Driving - London

London, United Kingdom


Lyft

Rideshare with Lyft. Lyft is your friend with a car, whenever you need one. Download the app and get a ride from a friendly driver within minutes.


Level 5, part of Woven Planet, is developing self-driving technology using a machine-learned approach to create safe mobility for everyone. Our goal is to build Level 4 autonomous vehicles to improve personal transportation on a global scale. Woven Planet is a software-first subsidiary of Toyota whose vision is to create mobility of people, goods, and information that everyone can enjoy and trust.

As part of Woven Planet, Level 5 has the backing of one of the world’s largest automakers, the talent to deliver on our goal, and the opportunity for near-term product impact and revenue—a combination rarely seen in the AV industry.

Level 5 is looking for doers and creative problem solvers to join us in improving mobility for everyone with self-driving technology. We’ve built a diverse and talented group of software and hardware engineers, and each has the opportunity to make a meaningful impact on our self-driving stack.

Our team of more than 300 works in brand-new garages and labs in Palo Alto, tests AVs at our dedicated test track in Silicon Valley, and explores the AV industry’s most compelling research problems at our office in London. With support from more than 800 Woven Planet colleagues in Tokyo, Level 5’s work to improve the future of mobility spans the globe. And we’re moving fast: in Level 5’s first 18 months, we launched an employee pilot, and we are now testing our fourth-generation vehicle platform in San Francisco. Learn more at level-5.global.

In London, our team is working on accelerating autonomous driving by exploring novel CV/ML solutions on petabytes of data collected from AVs and our own fleet vehicles. This enables us to tackle some of the hardest problems in self-driving, from building accurate and up-to-date 3D maps, to understanding human driving patterns, to increasing the sophistication of our simulation tests by drawing on rare real-world driving situations. By leveraging this data, Level 5 is uniquely positioned to develop safe, efficient, and intuitive self-driving systems.

Responsibilities:
  • Work with other Computer Vision and Machine Learning engineers on high-impact projects and develop novel solutions to problems in the self-driving space
  • Leverage data derived from one of the largest vehicle networks in the world to accelerate autonomous driving technology
  • Develop cutting-edge mapping, localisation, and perception algorithms that operate on commodity sensors such as cameras
  • Build distributed computer vision and machine learning systems that run these algorithms at unprecedented scale
  • Be a team player; uplift others through mentoring and inclusion efforts
Experience:
  • BSc / MSc / PhD, or industrial experience, in computer vision or a related field, for example:
    • Structure-from-motion
    • SLAM
    • Localisation
    • 3D Perception
    • Sensor fusion
    • Object recognition
    • 3D reconstruction
    • Optical flow
    • Depth estimation
    • Multi-view geometry
  • Experience with Python / C++
  • Ability to work in a fast-paced environment and collaborate across teams and disciplines, with strong interpersonal skills

OUR COMMITMENT

・We are an equal opportunity employer and value diversity.

・We pledge that any information we receive from candidates will be used ONLY for the purpose of hiring assessment.

