Data Engineer

San Francisco, CA or Austin, TX

Shippo

Shippo is the best multi-carrier shipping software for e-commerce businesses. Find the best shipping rates, integrate with e-commerce platforms, print shipping labels, track package delivery, and verify addresses with either our shipping API or...

Before you read on, take a look around you. Chances are, pretty much everything you see has been shipped, often multiple times, to get where it is. E-commerce is exploding, and with it, parcel shipping is becoming a meaningful factor in a business's ability to succeed. Creating a compelling shipping experience for customers is hard but necessary.
At Shippo, our goal is to level the playing field by providing businesses access to shipping tools and terms that would not be available to them otherwise.
Shippo lowers the barriers to shipping for businesses around the world. As free and fast shipping becomes the norm, better access to shipping is a competitive advantage for businesses. Through Shippo, e-commerce businesses, marketplaces, and platforms are able to connect to multiple shipping carriers around the world from one API and dashboard. Businesses can get shipping rates, print labels, automate international documents, track shipments, and facilitate returns.
Internally, we think of Shippo as the building blocks of shipping. Shippos are a diverse set of individuals. We look for cultural and skill fit in every new person. Join us to build the foundations of something great, roll up your sleeves, and get important work done every day. Founded in 2013, we are a proud team based out of San Francisco. Shippo’s investors include D1 Capital Partners, Bessemer Venture Partners, Union Square Ventures, Uncork Capital, VersionOne Ventures, FundersClub, and others.
We are seeking a new Data Engineer! You will be responsible for building systems to collect and process events at massive scale, providing operational and business insight into the performance and optimization of shipping services. The Data Engineer will work closely with product, engineering, and business leads to generate customer-facing and internal dashboards, ad hoc reports, and models that provide insights and shape platform behavior. This will also include building and maintaining the infrastructure to collect and transform raw data.

Responsibilities and Impact

  • Design, build, scale, and evolve our large-scale data infrastructure and processing workflows to support our business intelligence, data analytics, and data science processes
  • Build robust, efficient, and reliable data pipelines and data integrations spanning diverse data sources and transformation techniques, and ensure the consistency and availability of data insights
  • Collaborate with product, engineering, and business teams to improve the data models that feed business intelligence tools, increasing data accessibility and fostering data-informed decision making across the organization
  • Articulate and present findings and recommendations at different levels, with a clear bias towards impactful learning and results
  • Develop clean, well-designed, reusable, scalable code following TDD practices
  • Champion the engineering organization’s adoption and ongoing use of the data infrastructure
  • Embody Shippo’s cultural values in your everyday work and interactions

Requirements

  • 3+ years of experience in software development
  • Experience designing, building, and maintaining data pipeline systems
  • Coding experience in server-side programming languages (e.g. Python, Scala, Go, Java) as well as database languages (SQL)
  • Experience with data technologies and concepts such as Airflow, Kafka, Hadoop, Hive, Spark, MapReduce, RDBMS, NoSQL, and columnar databases
  • Exceptional verbal, written, and interpersonal communication skills
  • Deep understanding of customer needs and passion for customer success
  • Exhibit core behaviors focused on craftsmanship, continuous improvement, and team success
  • BS or MS degree in Computer Science or equivalent experience

Bonus Points

  • Experience implementing ETL processes
  • Experience with Big Data frameworks such as Hadoop and MapReduce, and their associated tools
  • Experience building stream-processing systems using solutions such as Kinesis Streams, Kafka, or Spark Streaming
  • Experience integrating with APIs that use REST, gRPC, SOAP, and other technologies
  • Experience with cloud environments and DevOps tools; working experience with AWS and its associated products a plus
  • Experience with machine learning a plus

Benefits, Perks, and More

  • Medical, dental, and vision healthcare coverage for you and your dependents. Pet coverage is also available!
  • Flexible PTO and work arrangement policy
  • 3 VTO days for ShippoCares volunteering events
  • $2,500 annual learning stipend for your personal and professional growth
  • Charity donation match up to $100
  • Free daily catered lunch, drinks, and snacks
  • Fun team events outside of work hours - happy hours, “escape room” adventures, hikes, and more!

Tags: Airflow APIs AWS Big Data Business Intelligence Computer Science Data Analytics Data pipelines DevOps E-commerce Engineering ETL Hadoop Kafka Kinesis Machine Learning NoSQL Pipelines Python RDBMS Scala Spark SQL Streaming TDD

Perks/benefits: Career development Flex hours Flex vacation Health care Salary bonus Team events

Region: North America
Country: United States
Category: Engineering Jobs
