Data Engineer

Portland, OR

Applications have closed

Openlogix Corporation

Openlogix is a top-tier MuleSoft and Salesforce partner. Integration is at the core of what we do. It is in our DNA. Customers of all sizes trust us to deliver MuleSoft-powered integration solutions for them. You have an integration challenge?...


Description:
Working job title: Data Engineer
Location: Portland, Oregon (100% remote)
Duration: 6+ months
Brief Job Description:

Key Responsibilities
• Build data pipelines: Architect, design, create, optimize, and maintain data pipelines.
• Drive automation through effective metadata management.
• Collaborate across departments.
• Participate in ensuring compliance and governance during data use.
• Educate and train: The data engineer should be curious and knowledgeable about new data initiatives and how to address them. The data engineer will be required to train counterparts such as data scientists, data analysts, LOB users, and other data consumers in these data pipelining and preparation techniques, making it easier for them to integrate and consume the data they need for their own use cases.
Skillsets and experience
• Strong experience with various data management architectures, such as data warehouse, data lake, and data hub, and the supporting processes: data integration, governance, and metadata management.
• Strong ability to design, build, and manage data pipelines for data structures, encompassing data transformation, data models, schemas, metadata, and workload management.
• Strong experience working with large, heterogeneous datasets to build and optimize data pipelines, pipeline architectures, and integrated datasets using traditional data integration technologies (ETL/ELT, data replication/CDC, message-oriented data movement, API design and access) as well as emerging data ingestion and integration technologies such as stream data integration, complex event processing, and data virtualization.
• Experience working with data governance/data quality and data security teams, specifically information stewards and privacy and security officers, to move data pipelines into production with appropriate data quality, governance, and security standards and certification.
• Ability to build quick prototypes and translate them into data products and services in a diverse ecosystem.
• Strong experience with popular database programming languages, including SQL and PL/SQL, for relational databases.
• Nice to have: some experience with NoSQL/Hadoop-oriented databases such as MongoDB and Cassandra for non-relational data.
• Experience with the Snowflake data warehouse on AWS.
• Nice to have: experience with advanced analytics tools and object-oriented/functional scripting languages such as R, Python, Java, C++, and Scala.
• Nice to have: experience with message queuing technologies such as Amazon Simple Queue Service (SQS) and Kinesis.
• Experience with DevOps capabilities such as version control, automated builds, testing, and release management, using tools like Git, Jenkins, Puppet, and Ansible.
• Nice to have: experience working with data science teams to refine and optimize data science and machine learning models and algorithms.
• Strong experience supporting and working with cross-functional teams in a dynamic business environment.
• A confident, energetic self-starter with strong interpersonal skills.
• Good judgment, a sense of urgency, and a demonstrated commitment to high standards of ethics, regulatory compliance, customer service, and business integrity.
Is this role associated with a project? If yes, please provide a short description of the project: Project 360 Data Strategy
What is the size of the team this resource will be working with? 5-6
Education requirements: Bachelor’s degree in business, computer science, engineering, management or other related field or equivalent experience.
# of Years of Experience: 5+
Approximate hrs/week: 40
WFH or will this person need to be onsite? Remote


Tags: Ansible APIs AWS Cassandra Computer Science Data management Data pipelines Data strategy DevOps ELT Engineering ETL Git Hadoop Kinesis Machine Learning ML models MongoDB NoSQL Pipelines Python R RDBMS Scala Security Snowflake SQL Testing

Region: North America
Country: United States
Category: Engineering Jobs
