Data Architect for Marcel Product

Makati, Philippines

Company Description

Publicis Re:Sources is the technology and business services backbone of Publicis Groupe.

Publicis Groupe is the third largest communications group worldwide and a leader in digital and interactive communication. With activities spanning 104 countries on five continents, Publicis Groupe employs approximately 80,000 professionals and offers local and international clients a complete range of communication services.

About Marcel Project:
Marcel is the AI platform that connects more than 80,000 employees at Publicis Groupe across geographies, agencies and capabilities. Marcel helps our employees learn, share, create, connect and grow more than ever before. It connects employees to our culture, helps them master new skills, inspires them and tackles diversity and inclusion head on to help build a better world together. It’s a place where we come together every day to amplify each other as one global team.
All of this employee engagement creates over 100 million data points that power our AI-enabled knowledge graph, making the experience even more relevant for employees. And for our clients, our knowledge graph makes Marcel one of the most powerful tools ever invented for finding exactly the right expertise, teams and knowledge that we need to win in the Platform World.
Marcel is a strategic investment in our people and is aimed at being their personal growth engine in this hybrid world. This role is joining the dynamic Marcel team in helping build and evolve this product.

Job Description

Key responsibilities:

The key accountabilities for this role include, but are not limited to:

·          Ensure the data models of the Marcel program are managed efficiently and that model enhancements align with data modelling principles, standards and metadata practices

·          Ensure that all data lifecycle events are efficiently managed by the Marcel platform, aligning technology and feature teams accordingly.

·          Ensure that data quality in production is measured, maintained and operationally supported

·          Work closely with feature teams to ensure that all analytics, data and architectures are in alignment with the Data Strategy.

·          Act as a point of contact and advisor on all data-related features of Marcel and, where relevant, drive enhancements from concept through to production delivery.

·          Coach and mentor others on best practices, data principles and performance.

·          Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.

·          Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.

·          Create data tools for analytics and data science team members that help them build and optimize our product into an innovative industry leader.

Specific responsibilities:

·          Responsible for overall Data Architecture of the platform

·          Responsible for leading the team of data engineers to build data pipelines using a combination of Azure Data Factory and Databricks

·          Accountable for delivery of team commitments

·          Responsible for training and development of team members

·          Responsible for the design and architecture of feeds and data integrations

·          Responsible for sign off of deliverables

·          Responsible for establishing best practices and standards

·          Write maintainable and effective data feeds and pipelines

·          Follow best practices for test-driven development and continuous integration.

·          Design, develop, test and implement end-to-end requirements

·          Contribute to all phases of the development life cycle

·          Perform unit testing and troubleshoot applications

Business Compliance

Ensure a sound understanding of, demonstrate commitment to, and comply with all legislation and Publicis Groupe policies (e.g., Janus, GSO and IT policies).

Personal & Team Accountabilities

Actively develop and maintain strong working relationships with all Re:Sources personnel, both at an interpersonal level and across all business processes within the wider business environment.

Actively maintain communication and behaviour standards that foster a culture of customer and service excellence, both within Re:Sources and across all customer and supplier organisations.

Qualifications

Minimum Experience (relevant): 5 years (overall experience of at least 10 years)

Maximum Experience (relevant): 9 years

Must have skills:

·         Strong written and verbal communication skills

·         Strong experience in implementing Graph database technologies (property graph)

·         Strong experience in leading data modelling activities for a production graph database solution

·         Strong experience in Cypher (or TinkerPop Gremlin) with an understanding of query tuning

·         Strong experience working with data integration technologies, specifically Azure services such as ADF, ETL processes, JSON, and Hop or other ETL orchestration tools.

·         Strong experience using PySpark, Scala and Databricks

·         3-5+ years’ experience in design and implementation of complex distributed systems architectures

·         Strong experience with Master Data Management solutions

·         Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.

·         Experience with stream-processing systems: Storm, Spark Streaming, etc.

·         Strong knowledge of Azure-based services

·         Strong understanding of RDBMS data structures, Azure Tables, Blob Storage, and other data sources

·         Experience with GraphQL

·         Experience in high availability and disaster recovery solutions

·         Experience with test driven development

·         Understanding of Jenkins and CI/CD processes using ADF and Databricks.

·         Strong analytical skills related to working with unstructured datasets.

·         Strong analytical skills necessary to triage and troubleshoot issues

·         Results-oriented and able to work across the organization as an individual contributor

 

Good to have skills:

·         Knowledge of graph data science techniques, such as graph embeddings

·         Knowledge of Neo4j HA architecture for critical applications (clustering, multiple data centers, etc.)

·         Experience working with EventHub and streaming data.

·         Experience with big data tools: Hadoop, Spark, Kafka, etc.

·         Experience with Redis

·         Understanding of ML models and experience building ML pipelines with MLflow and Airflow.

·         Bachelor's degree in engineering, computer science, information systems, or a related field from an accredited college or university (Master's degree preferred), or equivalent work experience.

·         Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.

·         Experience building processes supporting data transformation, data structures, metadata, dependency and workload management.

·         A successful history of manipulating, processing and extracting value from large disconnected datasets.

·         Working knowledge of message queuing, stream processing, and highly scalable Azure-based data stores.

·         Strong project management and organizational skills.

·         Experience supporting and working with cross-functional teams in a dynamic environment.

·         Understanding of Node.js is a plus, but not required.

Additional Information

Attributes/behaviours

·         Ability to design, develop and implement complex requirements.

·         Building reusable components and front-end libraries for future use

·         Translating designs and wireframes into high-quality code

·         Pro-active support to the business is a key attribute for this role, with a customer service focus linking systems requirements to business outcomes.
