Senior Data Engineer

Atlanta, GA, United States

Kroll

As the leading independent provider of risk and financial advisory solutions, Kroll leverages our unique insights, data and technology to help clients stay ahead of complex demands.

In a world of disruption and increasingly complex business challenges, our professionals bring truth into focus with the Kroll Lens. Our sharp analytical skills, paired with the latest technology, allow us to give our clients clarity—not just answers—in all areas of business. We embrace diverse backgrounds and global perspectives, and we cultivate diversity by respecting, including, and valuing one another. As part of One team, One Kroll, you’ll contribute to a supportive and collaborative work environment that empowers you to excel. 

At Kroll, your work will help deliver clarity to our clients’ most complex governance, risk, and transparency challenges. Apply now to join One team, One Kroll. 

RESPONSIBILITIES:

  • Participate in Scrum teams of data & analytics engineers who build, manage, and support enterprise data and analytics technology infrastructure, tools, and products.
  • Accelerate the building and improvement of data engineering capabilities, with a heavy focus on foundational data assets, data engineering pipelines, data platform initiatives, and data product development.
  • Be accountable for delivery of business unit and Internal Firm Services data that will be made available through the Kroll Connected Ecosystem.
  • Build cross-functional relationships with Business product owners, data engineers, data scientists, and analysts to understand product needs and delivery expectations.
  • Guide the adoption of new products, platforms, and data assets to improve data-informed operations across the organization.
  • Build and grow data engineering capabilities that deliver performance solutions that drive customer value and business outcomes.
  • Establish a product mindset and cross-functional team structure to support the strategy and fast-paced delivery of quality solutions.
  • Understand the cloud ecosystem, markets, competition, and user requirements in depth. Help facilitate the launch of new products and features, test their performance, and iterate quickly.
  • Build scalable, fault-tolerant batch and real-time data pipelines to validate, extract, transform, and integrate large-scale datasets, including location and time-series data, across multiple platforms.
  • Optimize and expand Kroll's data platform, which will span across multiple business units, cloud providers, services, data warehouses, and applications.
  • Help establish an enterprise-level data strategy, including governance, data architecture, big data analytics, delivery leadership, and automation.
  • Lead cross-functional teams across the globe.

REQUIREMENTS:

  • Bachelor's degree with a minimum of 3 years of overall experience, including hands-on experience setting up enterprise-level data lakes using any big data platform (Azure Data Lake and/or Databricks a plus).
  • Able to understand and guide teams in implementing data lake and ELT concepts in the cloud using Databricks, Azure Data Factory, Python, C#, GraphQL, PySpark, Pandas, etc.
  • Expertise with Azure and Databricks across the data lifecycle and AI domain: data migration, data transformation, data modernization, modern data warehousing, analytics, Azure ML, etc.
  • A deep understanding of data architecture, data security, and modern processing techniques using data pipelines.
  • Must be able to interpret and analyze large sets of data for complex business situations and understand the implications for the team.
  • Experience with key platform technologies including APIs & Management, Platform Services, Streaming Systems, Stream Processing, and Persistent Storage for Analytics and Applications at the Enterprise level.
  • Practical experience deploying applications and implementing continuous-integration tools and patterns.
  • Prior experience with analytics and BI tools such as Qlik and Power BI for reporting, and with streaming or messaging technologies such as Kafka, Amazon Kinesis, or SNS.
  • Relevant experience guiding teams on DevOps, analyzing applications, and cloud environment performance.
  • Relevant experience that leverages scientific methods, processes, algorithms, and systems to discover business insights from structured and unstructured financial datasets is a huge plus.
  • Quick to understand business needs and learn domain-specific knowledge.
  • Familiarity with financial statements and valuation methodologies/metrics preferred.

DESIRED SKILLS:

  • Experience working as a Data Engineer with ETL/ELT using various Cloud technologies.
  • Experience with Python/PySpark and Scala.
  • Experience with the MS Office suite, including Python and Copilot integrations.
  • Experience designing ER diagrams and architecting databases.
  • Experience with relational and non-relational databases (Oracle, SQL, PostgreSQL, Cosmos DB, DynamoDB).
  • Comfortable with various flavors of SQL.
  • Hands-on experience using the Databricks platform for lakehouse implementations.
  • Knowledge of Python libraries such as Pandas, NumPy, spaCy, or NLTK.
  • Relevant experience using Jenkins or Synapse for workflow scheduling.
  • Prior experience in CloudFormation and/or Terraform for Code Deployment & Integration.

In order to be considered for a position, you must formally apply via careers.kroll.com.

Kroll is committed to creating an inclusive work environment. We are proud to be an equal opportunity employer and will consider all qualified applicants regardless of gender, gender identity, race, religion, color, nationality, ethnic origin, sexual orientation, marital status, veteran status, age or disability.

The current salary range for this position is $50,000 - $125,000.

#LI-CN1

#LI-Remote

Perks/benefits: Career development

Regions: Remote/Anywhere North America
Country: United States
Category: Engineering Jobs
