Senior Data Engineer

San Francisco, Austin, or Remote

Applications have closed

Shippo

Shippo is the best multi-carrier shipping software for e-commerce businesses. Find the best shipping rates, integrate with e-commerce platforms, print shipping labels, track package delivery, and verify addresses with either our shipping API or...


Before you read on, take a look around you. Chances are, pretty much everything you see has been shipped, often multiple times, to get there. E-commerce and parcel shipping volumes are exploding, but so are customer expectations about shipping speed and cost. Managing shipping and logistics operations to meet increasingly exacting demands is an extremely hard endeavor, especially for SMBs, who can be left in the dust by larger and far more sophisticated competitors. But it does not have to be this way.
At Shippo, our goal is to level the playing field by providing businesses with access to shipping tools and terms that would not be available to them otherwise. We lower the barriers to shipping for businesses around the world, and move shipping from a pain point to a competitive advantage.
Through Shippo, e-commerce businesses, from fast-growing brands to mom-and-pop shops, are able to connect to multiple shipping carriers around the world from one API and dashboard, and seamlessly run every aspect of their shipping operations, from checkout shipping options to returns.
Join us to build the foundations of something hard yet meaningful, roll up your sleeves, and get important work done every day. Founded in 2013, and funded by top-tier investors like D1 Capital Partners, Bessemer Venture Partners, Union Square Ventures, Uncork Capital, VersionOne Ventures, and FundersClub, we are a fast-growing and proudly distributed Unicorn with hubs in San Francisco and Austin. We are also featured in Wealthfront’s Career Launching List and Forbes’ Cloud 100 list of fast-growing startups.
You will be responsible for building systems to collect and process events of massive scale to gain operational and business insight into the performance and optimization of shipping services. 
You will work closely with product, engineering, and business leads to generate customer-facing and internal dashboards, ad hoc reports, and models that provide insight into platform behavior. This also includes building and maintaining the infrastructure to collect and transform raw data.

Responsibilities and Impact

  • Design, build, scale, and evolve our large-scale data infrastructure and processing workflows to support our business intelligence, data analytics, and data science processes
  • Build robust, efficient, and reliable data pipelines and data integrations spanning diverse data sources and transformation techniques, and ensure the consistency and availability of data insights
  • Collaborate with product, engineering, and business teams to improve the data models that feed business intelligence tools, increasing data accessibility and fostering data-informed decision making across the organization
  • Articulate and present findings and recommendations at different levels, with a clear bias towards impactful learning and results
  • Develop clean, well-designed, reusable, scalable code following TDD practices
  • Champion engineering organization’s adoption and ongoing use of the data infrastructure
  • Embody Shippo’s cultural values in your everyday work and interactions

Requirements

  • 3+ years of experience in software development
  • Experience designing, building, and maintaining data pipeline systems
  • Coding experience in server-side programming languages (e.g. Python, Scala, Go, Java) as well as database languages (SQL)
  • Experience with data technologies and concepts such as Airflow, Kafka, Hadoop, Hive, Spark, MapReduce, RDBMS, NoSQL, and columnar databases
  • Exceptional verbal, written, and interpersonal communication skills
  • Deep understanding of customer needs and passion for customer success
  • Exhibit core behaviors focused on craftsmanship, continuous improvement, and team success
  • BS or MS degree in Computer Science or equivalent experience

Bonuses

  • Experience implementing data pipelines and ETL processes
  • Experience with Big Data frameworks such as Hadoop, MapReduce and associated tools
  • Experience building stream-processing systems using solutions such as Kinesis Streams, Kafka, or Spark Streaming
  • Experience integrating with APIs that use REST, gRPC, SOAP, and other technologies
  • Experience with cloud environments and DevOps tools; working experience with AWS and its associated products
  • Experience with machine learning
Benefits and Perks

  • Medical, dental, and vision healthcare coverage for you and your dependents. Pet coverage is also available!
  • Flexible policy for PTO and work arrangements
  • Two 1-week company shutdowns to rest and recharge (summer and winter)
  • 3 VTO days for ShippoCares volunteering events
  • $2,500 annual learning stipend for your personal and professional growth
  • Charity donation match up to $100
  • Fun team events outside of work hours - happy hours, “escape room” adventures, hikes, and more!


Tags: Airflow APIs AWS Big Data Business Intelligence Computer Science Data Analytics Data pipelines DevOps E-commerce Engineering ETL Hadoop Kafka Kinesis Machine Learning NoSQL Pipelines Python RDBMS Scala Spark SQL Streaming TDD

Perks/benefits: Career development Flex hours Flex vacation Health care Team events

Regions: Remote/Anywhere North America
Country: United States
Category: Engineering Jobs
