Senior Solutions Architect - Autonomous Vehicles

US, CA, Santa Clara

NVIDIA

NVIDIA invented the GPU and drives advances in AI, HPC, gaming, creative design, autonomous vehicles, and robotics.


We are looking for a motivated Solutions Architect or Engineer with experience in designing and building networking fabrics for large supercomputing clusters used to develop and test Deep Learning and Generative AI (e.g., LLM) models for Autonomous Vehicles in Data Center environments. As part of the Automotive Enterprise Solutions Architecture and Engineering team, you would work with the most exciting networking and computing hardware and software, driving the latest breakthroughs for our sophisticated Automotive customers. This role offers an excellent opportunity to grow your career in the rapidly expanding field of AI-accelerated computing while empowering the world's most successful automotive and self-driving car companies.

We offer best-in-class InfiniBand and Ethernet networking solutions, including adapter cards, switches, cables, and software, to support our groundbreaking networking technologies. Our products optimize data center performance and deliver industry-leading bandwidth and scalability. We serve a wide range of sectors including high performance computing, enterprise, data centers, and cloud computing. We constantly reinvent ourselves to stay ahead of the market and bring pioneering products and services to the industry. A Solutions Architect is the first line of technical expertise between NVIDIA and our customers, so you will engage directly with developers, researchers, and data scientists at some of NVIDIA’s most strategic technology customers, and work directly with business and engineering teams on product strategy as it pertains to these customers.

What you'll be doing:

  • Provide technical mentorship to partners and customers on data center GPU server and networking infrastructure projects. Guide the customer journey to server/network/cluster deployments and lead discussions about network topologies, compute, storage, etc.

  • Build demonstrations and POCs for solutions that address critical business needs of our customers. Help draft requirements for missing features to unblock progress at customers/partners.

  • Educate customers on new NVIDIA Networking technologies and platforms. Prepare and deliver technical training presentations and workshops.

  • Build collateral (notebooks, blog posts) applied to Automotive industry use cases.

  • Partner with NVIDIA engineering, product, and sales teams to secure design wins at customers. Enable development and growth of NVIDIA product features through customer feedback and POC evaluations.

What we need to see:

  • MS/PhD in Computer Science, Electrical/Computer Engineering, Physics, Mathematics, or other Engineering fields or equivalent experience.

  • 10+ years of hands-on validated technical customer-facing experience in Networking technologies for HPC or AI computing clusters.

  • Experience writing code in C, C++, Rust, or Python. Comfortable with Linux shell commands.

  • Knowledge of DevOps/MLOps technologies such as Docker containers and Kubernetes.

  • Practical knowledge of networking, including data center topologies, routing and switching protocols, and networking protocols such as Ethernet and InfiniBand.

  • System level understanding of server architecture, PCIe devices, NICs, Linux OS and kernel drivers.

  • Ability to communicate ideas, code, and commands clearly through blog posts, kernels, and GitHub. Effective verbal and written communication and technical presentation skills.

  • Enjoy working with multiple levels and teams across organizations (engineering, research, product, sales and marketing teams).

  • Self-starter with a vision for growth, real passion for continuous learning and sharing findings across the team.

Ways to stand out from the crowd:

  • Networking certifications such as CCIE.

  • Systems engineering, coding, and debugging skills including experience with C/C++, Linux kernels and drivers.

  • Hands-on experience with NVIDIA SDKs (e.g., CUDA, DOCA) and NVIDIA Networking technologies (e.g., RoCE, InfiniBand, DPU).

  • Familiarity with virtualization concepts and environments.

  • Willingness and ability to dive into unfamiliar territories to address sophisticated problems.

The base salary range is 180,000 USD - 339,250 USD. Your base salary will be determined based on your location, experience, and the pay of employees in similar positions.

You will also be eligible for equity and benefits. NVIDIA accepts applications on an ongoing basis.

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.


Tags: Architecture Computer Science CUDA Deep Learning DevOps Docker Engineering Generative AI GitHub GPU HPC InfiniBand Kubernetes Linux LLMs Mathematics MLOps PhD Physics Python Research Rust Testing

Perks/benefits: Career development Equity

Region: North America
Country: United States
Category: Architecture Jobs
