Data Engineer - Customer Success Pod

San Diego, CA, United States


PlayStation Global

Explore the new generation of PlayStation 4 and PS5 consoles – experience immersive gaming with thousands of hit games across every genre, rewriting the rules for what a PlayStation console can do.


Why PlayStation?

PlayStation isn’t just the Best Place to Play — it’s also the Best Place to Work. Today, we’re recognized as a global leader in entertainment, producing the PlayStation family of products and services including PlayStation®5, PlayStation®4, PlayStation®VR, PlayStation®Plus, PlayStation™Now, acclaimed PlayStation software titles from PlayStation Studios, and more.

PlayStation also strives to create an inclusive environment that empowers employees and embraces diversity. We welcome and encourage everyone who has a passion and curiosity for innovation, technology, and play to explore our open positions and join our growing global team.

The PlayStation brand falls under Sony Interactive Entertainment, a wholly-owned subsidiary of Sony Corporation.

Data Engineer - Customer Success Pod

San Diego, CA

The Future Technology Group (FTG) is leading the cloud gaming revolution, putting console-quality video games on any device! As a Data Engineer with FTG, you will play a key role in the design and development of server-side applications for shaping, managing, and transforming data on a large, geographically distributed infrastructure.

You’ll bring passion and expertise to continuously improve the value our data platform provides to data producers and consumers in the organization, delivering the tooling, best practices, and state-of-the-art technology that empower data-driven decisions across the company.

Responsibilities:

  • Liaise with product domain teams to help them consume and produce data in near real time
  • Identify needs and design distributed architectures to support their data products
  • Build data structures to enable analysts to discover and communicate information
  • Shape solutions to ensure the privacy and security of our players’ data
  • Contribute to building a data catalog that makes knowledge accessible across the organization
  • Implement, measure, and monitor reliability for the services you will provide
  • Understand, advise on, and improve existing ETL pipelines, and help develop new pipelines with streaming frameworks
  • Design complete data pipelines, or parts of them, taking scale, performance, security, and usability into consideration
  • Enjoy working in a fast-paced environment, always chasing the latest state-of-the-art technology
  • Demonstrate strong communication and documentation skills
  • Contribute to valuable idea exchanges and technical conversations with your colleagues, and be able to work independently and remotely most of the time
  • Own the decision space on how to approach a problem, working in accordance with the vision and business objectives
  • Contribute to the open-source community through blog posts and articles, as well as source code and artifacts
  • These values resonate with you: Innovation, Inclusion, Iteration, Customer-centricity, Extreme ownership, Collaboration, Results drive, Embracing failure, Work-life balance

Requirements:

  • Degree in Computer Science or a related quantitative field
  • At least 3 years of experience with any JVM language such as Java or Kotlin
  • Skilled in designing microservices and high-throughput streaming systems
  • Strong understanding of concepts such as concurrency, parallelism, and event-driven architecture
  • Big data experience with ETL, consumption, aggregation, ingestion, and storage

Nice to Have:

  • Experience with AWS in general
  • Experience with Apache Kafka
  • Knowledge of NoSQL, Docker, Kubernetes, Prometheus, Consul, Elasticsearch, Kibana, and other cloud-native technologies
  • Experience with big data stream processing or batch processing technologies such as Apache Flink, Apache Spark and Apache Hadoop
  • Knowledge of serializers across different formats (Avro, Protobuf)
  • Skills related to managing a data lake on AWS (S3, Athena, QuickSight, Kinesis, Glue, Lambda / Step Functions)
  • Knowledge of Web technologies including REST, JSON, gRPC, and WebSockets
  • Familiarity with software development tools and processes like Git and CI/CD (GitLab, Jenkins)
  • Experience or background in Machine Learning projects

#LI-JM2

Equal Opportunity Statement:

Sony is an Equal Opportunity Employer. All persons will receive consideration for employment without regard to gender (including gender identity, gender expression and gender reassignment), race (including colour, nationality, ethnic or national origin), religion or belief, marital or civil partnership status, disability, age, sexual orientation, pregnancy or maternity, trade union membership or membership in any other legally protected category.

We strive to create an inclusive environment, empower employees and embrace diversity. We encourage everyone to respond.


