Position: Senior Data Engineer
Location: Salt Lake City (MUST be onsite)
Contract: 12+ Months
Job Description:
Your future duties and responsibilities
How you'll make an impact
Play a key role in establishing and implementing migration patterns for the Data Lake Modernization project
Actively migrate use cases from our on-premises Data Lake to Databricks on Google Cloud Platform
Collaborate with Product Management and business partners to understand use case requirements and reporting needs
Adhere to internal development best practices and lifecycle (e.g. testing, code reviews, CI/CD, documentation)
Document and showcase feature designs/workflows
Participate in team meetings and discussions around product development
Stay up to date on the latest industry trends and design patterns
Required qualifications to be successful in this role
What you'll bring
6+ years of development experience with Spark (PySpark), Python, and SQL
Extensive knowledge of building data pipelines
Hands-on experience with Databricks development
Strong experience developing on Linux
Experience with scheduling and orchestration tools (e.g. Databricks Workflows, Airflow, Prefect, Control-M)
Solid understanding of distributed systems, data structures, and design principles
Comfortable communicating with teams via showcases/demos
Experience with Agile development methodologies (e.g. SAFe, Kanban, Scrum)
Bachelor's degree in Computer Science, Computer Engineering, or a related field
Desired qualifications (Nice to Have):
3+ years of experience with Git
3+ years of experience with CI/CD (e.g. Azure Pipelines)
Experience with streaming technologies such as Kafka and Spark
Experience building applications on Docker and Kubernetes
Cloud experience (e.g. Azure, Google Cloud)