Sr Data Engineer

Phoenix

Full Time Senior-level / Expert
Upgrade Inc.

Posted 2 weeks ago

Upgrade is a fintech unicorn backed by a top 10 global bank and other leading fintech investors. Founded in 2017, Upgrade has already delivered $5 billion in consumer credit and achieved $125 million in annual revenue run rate and cash profitability.
Upgrade is building a neobank offering exceptional value to mainstream consumers, including affordable and responsible credit through cards and loans. In 4 short years, 12 million people have already applied for an Upgrade Card or loan.
Upgrade has been named a “Best Place to Work in the Bay Area” by the San Francisco Business Times and Silicon Valley Business Journal 3 years in a row, and received “Best Company for Women” and “Best Company for Diversity” awards from Comparably.
We are looking for new team members who get excited about designing and implementing new and better products to join a team of over 500 talented and passionate professionals. Come join us if you like to tackle big problems and make a meaningful difference in people's lives.
Are you a data engineer who loves creating order in the "1s and 0s," has extensive experience designing and organizing data warehouses and applying ETL principles, and enjoys the hustle and bustle of an operations environment? Are you attracted to an entrepreneurial culture and energized by high growth? Do you appreciate the freedom to drive significant value and create your own vision for building an incredible infrastructure and ecosystem? If so, read on.

Upgrade Inc., a fast-growing fintech "unicorn," is looking for a standout Sr Data Engineer to join our Analytical Insights & Decision Optimization team. This key role will increase the organizational value of the Phoenix Service Center by building a world-class infrastructure and ecosystem for leveraging data to make better decisions. You will also team up with data/business analysts and data scientists to build out solutions for capacity planning, workload forecasting, workforce analytics, business impact analytics, call center analytics, and customer analytics. To be successful in this role, you will need extensive experience setting up data infrastructures and ecosystems from the ground up, a self-starter's entrepreneurial drive, and an appetite for a fast-paced, challenging environment.

Primary Responsibilities

  • Quickly gain a deep understanding of the business and how data flows through the organization
  • Work with data architects to immediately begin optimizing the design of the existing operational data mart
  • Rebuild the data mart infrastructure and ecosystem that the analysts rely upon by effectively collecting and integrating data from various sources, building data platforms, and optimizing the operational data mart
  • Increase speed of delivery and maintain the integrity of data sets by applying advanced data engineering techniques and integrating ETL processes
  • Collaborate with analytics team members (data scientists, data analysts, business analysts) to implement the most effective back-end applications, ensuring speed of delivery, high data quality, and ease of maintenance
  • Drive best practices to improve the overall quality of data being organized and ingested to maximize reliability of models / insights
  • Optimize processes for data intake, validation, and engineering 
  • Identify opportunities to automate manual processes and improve efficiencies within the existing data infrastructure by leveraging advanced data engineering principles

Skills, Experience & Qualifications

  • Comfortable with ambiguity, developing creative solutions and delivering against aggressive timelines
  • Obsessed with building automation into everything that is implemented
  • Bachelor's degree required; Computer Science, Information Systems, or a related major preferred
  • Advanced degree preferred
  • 6+ years of data engineering experience designing, building, maintaining, and scheduling efficient, scalable, and reliable batch and real-time data pipelines using SQL and general-purpose data processing languages such as Python
  • Experience with query tuning and optimization, cloud technologies, real-time data processing, and Python frameworks
  • Working knowledge of overall system architecture, scalability, reliability, and performance in a large data warehousing environment such as Redshift, Snowflake, or Teradata
  • Expertise in MDM (master data management), metadata development, API integrations, database design, etc.
  • Exposure to complex data models and BI methodologies
  • Experience in financial services is preferred
  • Strong interpersonal communication and active listening skills
  • Proficiency with the Microsoft Office Suite, particularly MS Excel and PowerPoint

We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.
Job tags: Data Analytics Data pipelines Data Warehousing Engineering ETL Python Redshift SQL
Job region(s): North America
