Data Engineer - Databricks / AWS
SOFT's client, located in New York, NY (Hybrid), is looking for a Data Engineer - Databricks / AWS for a long-term contract assignment.
Qualifications:
Minimum of 5 years of experience in data engineering roles, with a focus on AWS and Databricks.
Highly proficient with Databricks, Spark, Starburst/Trino, Python, PySpark, and SQL.
Hands-on experience with GitLab CI/CD.
Hands-on experience with AWS services such as S3, RDS, Lambda, SQS, SNS, and MSK is required.
Strong SQL skills for performing data analysis and understanding source data.
Experience with data pipeline orchestration tools.
Responsibilities:
Design, develop, monitor, and maintain data pipelines in an AWS ecosystem with Databricks, Delta Lake, Python, SQL, and Starburst as the technology stack.
Collaborate with cross-functional teams to understand data needs and translate them into effective data pipeline solutions.
Establish data quality checks and ensure data integrity and accuracy throughout the data lifecycle.
Automate testing of the data pipelines and configure it as part of CI/CD.
Optimize data processing and query performance for large-scale datasets within AWS and Databricks environments.
Document data engineering processes, architecture, and configurations.
Troubleshoot and debug data-related issues on the AWS Databricks platform.
Integrate Databricks with other AWS services such as SNS, SQS, and MSK.
Comments/Special Instructions
Please refer to the Position ID when inquiring about a job posting or sending in your resume.
***INDEPENDENT CONSULTANTS ONLY! NO THIRD PARTIES/NO SUBCONTRACTORS***