
Principal Data Engineer
Are you a seasoned Data Engineer with a passion for building robust, scalable, and efficient data pipelines? We're looking for an expert to join our team and drive our analytics, business intelligence, and machine learning initiatives to the next level!
Key Responsibilities:
Design, develop, and maintain ETL pipelines using AWS, Python, and Spark.
Optimize data ingestion, transformation, and storage for high-performance processing.
Work with structured and unstructured data, ensuring integrity and governance.
Write efficient SQL queries for data extraction and manipulation.
Implement data validation and testing frameworks using Pytest.
Collaborate with data scientists, analysts, and engineers for scalable solutions.
Monitor and troubleshoot data pipelines for seamless operation.
Stay updated with industry trends and cloud technologies.
Required Skills & Qualifications:
10+ years of experience in Data Engineering or a related field.
Proficiency in AWS (S3, Glue, Lambda, EMR, Redshift, etc.).
Hands-on experience with Python and Apache Spark.
Strong knowledge of ETL pipelines and data warehousing concepts.
Proficiency in SQL, plus experience building testing frameworks with Pytest.
Familiarity with CI/CD pipelines and version control (e.g., Git).
Preferred Qualifications:
Experience with Terraform, Docker, or Kubernetes.
Knowledge of big data tools like Apache Kafka or Airflow.
Exposure to data governance and security best practices.
Ready to take your Data Engineering career to the next level?
Apply now and be part of our innovative team driving data solutions!