Staff, Data Engineer
Bengaluru, India
Twilio
Connect with customers on their preferred channels—anywhere in the world. Quickly integrate powerful communication APIs to start building solutions for SMS and WhatsApp messaging, voice, video, and email.
See yourself at Twilio
Join the team as our next Staff, Data Engineer.
Who we are & why we’re hiring
Twilio powers real-time business communications and data solutions that help companies and developers worldwide build better applications and customer experiences.
Although we're headquartered in San Francisco, we have a presence throughout South America, Europe, Asia, and Australia. We're on a journey to becoming a globally anti-racist, anti-oppressive, anti-bias company that actively opposes racism and all forms of oppression and bias. At Twilio, we support diversity, equity & inclusion wherever we do business. We employ thousands of Twilions worldwide, and we're looking for more builders, creators, and visionaries to help fuel our growth momentum.
About the job
In this role, you will design and build data pipelines and services in cloud environments that fuel strategic business decisions across Twilio. You will partner with other engineers and product managers to translate data needs into scalable data platforms and self-service tools. We are looking for someone who is passionate about solving problems with engineering and data, thrives in an evolving environment, brings an enthusiastic and collaborative attitude, and delights in making a difference. As a successful candidate, you must have a deep background in data engineering and a proven track record of solving data problems at scale with distributed data systems. You are a self-starter, embody a growth mindset, and can collaborate effectively across the entire Twilio organization.
Responsibilities
In this role, you’ll:
- Design, develop, and maintain scalable data pipelines and data warehouses to support continuing growth in data volume and complexity.
- Develop standardized processes for analytics, data modeling, and data production.
- Collaborate with Data Architects and business teams to improve data models that feed business intelligence tools, increasing data accessibility and fostering educated decision-making across the organization.
- Implement processes and systems to manage data quality, ensuring production data is always accurate and available for key partners and business processes that depend on it.
- Write unit and integration tests, contribute to the engineering wiki, and document your work.
- Perform the data analysis required to troubleshoot and resolve data issues.
- Work closely with a team of frontend and backend engineers, product managers, and analysts.
- Design data integrations and data quality framework.
- Prepare accurate database design and architecture reports for management and business teams.
- Execute the migration of data from legacy systems to new solutions.
- Maintain system performance through regular testing, troubleshooting, and the integration of new features.
- Recommend solutions to improve new and existing database systems.
- Offer support by responding to system problems in a timely manner.
Qualifications
Not all applicants will have skills that match a job description exactly. Twilio values diverse experiences in other industries, and we encourage everyone who meets the required qualifications to apply. While having “desired” qualifications makes for a strong candidate, we encourage applicants with alternative experiences to also apply. If your career is just starting or hasn't followed a traditional path, don't let that stop you from considering Twilio. We are always looking for people who will bring something new to the table!
Required:
- 7+ years of diverse experience in building data pipelines and in data warehousing
- Experience working with distributed analytics platforms such as Hive and Presto
- Experience working with data processing technologies such as Kafka and Spark
- Expertise in SQL
- Expert coding skills in several languages, including Scala, Java, and Python
- Experience processing various file formats such as Parquet, Avro, and JSON
- Experience with AWS services such as EC2 and S3
Desired:
- Experience with DAG-based open-source workflow management solutions such as Apache Airflow
Location
This role will be based in our Bengaluru office.
Approximately 10% travel is anticipated.
What We Offer
There are many benefits to working at Twilio, including, in addition to competitive pay, things like generous time-off, ample parental and wellness leave, healthcare, a retirement savings program, and much more. Offerings vary by location.
Twilio thinks big. Do you?
We like to solve problems, take initiative, and pitch in when needed, and we're always up for trying new things. That's why we seek out colleagues who embody our values — something we call Twilio Magic. We also empower employees to build positive change in their communities by supporting their volunteering and donation efforts.
So, if you're ready to unleash your full potential, do your best work, and be the best version of yourself, apply now!
If this role isn't what you're looking for, please consider other open positions.