Sr Cloud Data Engineer (R-13417)

Hyderabad - India

Applications have closed

Dun & Bradstreet


Why We Work at Dun & Bradstreet

Dun & Bradstreet unlocks the power of data through analytics, creating a better tomorrow. Each day, we are finding new ways to strengthen our award-winning culture and accelerate creativity, innovation and growth. Our 6,000+ global team members are passionate about what we do. We are dedicated to helping clients turn uncertainty into confidence, risk into opportunity and potential into prosperity. Bold and diverse thinkers are always welcome. Come join us!
Job description
• Experience in Data Engineering, or equivalent, demonstrated through one or a combination of the following: work experience, training, military experience, education
• Acts as a senior developer, providing application design guidance and consultation using a thorough understanding of applicable technology, tools, and existing designs
• Verifies program logic by overseeing the preparation of test data, testing, and debugging of programs
• Troubleshoots applications and works directly with various application, business, and support team partners
• Assembles large, complex data sets that meet functional and non-functional business requirements
• Identifies, designs, and implements internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
• Builds the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources in a multi-cloud environment (AWS and GCP)
• Builds analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics
• Works with stakeholders, including the Executive, Product, Data, and Design teams, to assist with data-related technical issues and support their data infrastructure needs
• Keeps our data separated and secure across national boundaries through multiple data centers and AWS regions
• Creates data tools and custom solutions for analytics and data science team members that assist them in building and optimizing our product into an innovative industry leader
• Works with data and analytics experts to strive for greater functionality in our data systems
• Creates technical design documents and understands the business
• Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement
• Strong analytic skills related to working with structured, semi-structured, and unstructured datasets
Essential Qualifications:
• Bachelor’s degree with 10+ years of hands-on development and support experience in one or multiple Data Engineering projects
• 5+ years of experience building and optimizing ‘big data’ pipelines, architectures, and data sets using Spark with Scala/Python
• 5+ years of experience with the AWS ecosystem (EMR, EC2, S3) and Google Cloud Platform and its capabilities
• 5+ years of Unix shell scripting knowledge
• 5+ years of SQL experience
• 5+ years of experience with Databricks and Snowflake
• 3+ years of experience with AWS QuickSight
• Experience with multi-cloud services (AWS, GCP): EC2, EMR, RDS, Redshift, Glue, Data Pipeline, Lambda, Athena, Elasticsearch, S3, QuickSight, Looker, Compute Engine, Dataflow, Dataproc, BigQuery, GCS, Cloud SQL
• Excellent verbal, written, and interpersonal communication skills
• Strong analytical skills, including analyzing complex data
• Experience supporting and working with cross-functional teams in a dynamic environment
Desired Qualifications:
• Knowledge of cloud platforms
• Build processes supporting data transformation, data structures, metadata, dependency, and workload management
• A successful history of manipulating, processing, and extracting value from large, disconnected datasets
• 3+ years of experience developing applications using REST APIs and microservices
• Knowledge and understanding of emerging data platforms and analytic tools such as Hadoop, Aster, and R
• 5+ years of experience working with agile development methodologies such as Sprint and Scrum
• Experience with relational SQL, NoSQL, and other databases, including Postgres, ClickHouse, Neptune, and Cassandra
• Good understanding of DWH concepts and the SDLC
• Ability to coordinate completion of multiple tasks and meet aggressive time frames
• Experience with:
  ◦ Hadoop ecosystem tools and orchestrating job streams
  ◦ Batch automation tools like AutoSys, Control-M, Tivoli
  ◦ Code promotion tools
  ◦ Support of an enterprise application

Tags: Agile APIs Architecture Athena AWS Big Data BigQuery Cassandra Databricks Dataflow Data pipelines Dataproc EC2 Engineering GCP Google Cloud Hadoop Lambda Looker NoSQL Pipelines PostgreSQL Privacy Python QuickSight R Redshift Scala Scrum SDLC Shell scripting Snowflake Spark SQL Testing

Region: Asia/Pacific
Country: India
Category: Engineering Jobs
