
Job Title: Sr Data Engineer

Location: Glendale, CA (hybrid onsite schedule)

The Company

Headquartered in Los Angeles, this leader in the Entertainment & Media space is focused on delivering world-class stories and experiences to its global audience. To offer the best entertainment experiences, their technology teams focus on continued innovation and the use of cutting-edge technology.

Platform / Stack

You will work with technologies that include Python, AWS, Airflow, and Snowflake.

Compensation Expectation: $70–90/hr W2

What You'll Do As a Sr Data Engineer:

Contribute to maintaining, updating, and expanding existing Core Data platform data pipelines

Build tools and services to support data discovery, lineage, governance, and privacy

Collaborate with other software/data engineers and cross-functional teams

Work on a tech stack that includes Airflow, Spark, Databricks, Delta Lake, and Snowflake

Collaborate with product managers, architects, and other engineers to drive the success of the Core Data platform

Contribute to developing and documenting both internal and external standards and best practices for pipeline configurations, naming conventions, and more

Engage with and understand our customers, forming relationships that allow us to understand and prioritize both innovative new offerings and incremental platform improvements

Maintain detailed documentation of your work and changes to support data quality and data governance requirements
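The pipeline work described above centers on Airflow-style orchestration: tasks that extract, transform, and load data in dependency order. A minimal stdlib sketch of that idea (task names and bodies are illustrative assumptions, not the company's actual pipelines) might look like:

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline tasks -- placeholders for real extract/transform/load logic.
def extract():
    return "raw_events"

def transform():
    return "cleaned_events"

def load():
    return "warehouse.events"

TASKS = {"extract": extract, "transform": transform, "load": load}

# Dependencies: transform needs extract, load needs transform -- the same
# shape an Airflow DAG expresses with `extract >> transform >> load`.
DEPS = {"transform": {"extract"}, "load": {"transform"}}

def run_pipeline():
    """Run tasks in dependency order and return the execution order."""
    order = list(TopologicalSorter(DEPS).static_order())
    for name in order:
        TASKS[name]()
    return order

print(run_pipeline())  # -> ['extract', 'transform', 'load']
```

In a real Airflow deployment, the scheduler handles this ordering (plus retries, backfills, and monitoring); the sketch only shows the dependency-graph concept behind it.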

Qualifications

You could be a great fit if you have:

5+ years of data engineering experience developing large data pipelines

Proficiency in at least one major programming language (e.g. Python, Java, Scala)

Strong SQL skills and ability to create queries to analyze complex datasets

Hands-on production environment experience with distributed processing systems such as Spark

Hands-on production experience with data pipeline orchestration systems such as Airflow for creating and maintaining data pipelines

Experience with at least one major Massively Parallel Processing (MPP) or cloud database technology (Snowflake, Databricks, BigQuery)

Experience in developing APIs with GraphQL

Deep understanding of AWS or other cloud providers, as well as infrastructure as code

Familiarity with Data Modeling techniques and Data Warehousing standard methodologies and practices
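The "strong SQL skills" bullet above typically implies windowed and partitioned aggregations over large tables. A small self-contained illustration using SQLite's window functions (the table and data are invented for the example; production work would target Snowflake or Databricks SQL):

```python
import sqlite3

# In-memory toy dataset: daily play counts per title (illustrative only).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE plays (title TEXT, day TEXT, views INTEGER);
    INSERT INTO plays VALUES
        ('show_a', '2024-01-01', 100),
        ('show_a', '2024-01-02', 150),
        ('show_b', '2024-01-01', 80),
        ('show_b', '2024-01-02', 60);
""")

# Running total of views per title -- a windowed aggregation of the kind
# used to analyze complex datasets (requires SQLite >= 3.25).
rows = conn.execute("""
    SELECT title, day, views,
           SUM(views) OVER (PARTITION BY title ORDER BY day) AS running_views
    FROM plays
    ORDER BY title, day
""").fetchall()

for row in rows:
    print(row)
```

The same `PARTITION BY ... ORDER BY` pattern carries over directly to Snowflake and BigQuery SQL dialects.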

Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.