Role: Data Engineer
Location: Plano, TX or Reston, VA (2-3 days a week onsite)
Final round interview IN-PERSON
Duration: 1-2 years
The job consists of setting up Change Data Capture (CDC) for multiple types of databases to hydrate a data lake. Debezium or other CDC knowledge is required.
Along with data hydration, the job requires knowledge of ETL transformations using Apache Spark, covering both streaming and batch processing of data.
The engineer needs to know how to work with Apache Spark DataFrames, ETL jobs, and streaming data pipelines that orchestrate raw CDC data and transform it into usable, queryable data for analytics. Big Data concepts, including performance tuning, are a plus.
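To give a flavor of the CDC work described above, here is a minimal, library-free sketch of how a Debezium-style change envelope (with its "before"/"after"/"op" fields) collapses into current table state before downstream Spark transformations; the payloads and the `apply_change` helper are illustrative, not part of any specific stack.

```python
def apply_change(state, event):
    """Apply one CDC event to an in-memory table keyed by 'id'.

    Debezium marks operations as 'c' (create), 'u' (update),
    'd' (delete), and 'r' (snapshot read).
    """
    payload = event["payload"]
    op = payload["op"]
    if op in ("c", "u", "r"):
        row = payload["after"]          # new row image
        state[row["id"]] = row
    elif op == "d":
        row = payload["before"]         # last row image before delete
        state.pop(row["id"], None)
    return state

# Illustrative stream of raw CDC events, as they might land in the lake.
events = [
    {"payload": {"op": "c", "before": None,
                 "after": {"id": 1, "name": "alice"}}},
    {"payload": {"op": "u", "before": {"id": 1, "name": "alice"},
                 "after": {"id": 1, "name": "alicia"}}},
    {"payload": {"op": "d", "before": {"id": 1, "name": "alicia"},
                 "after": None}},
]

table = {}
for e in events:
    apply_change(table, e)
# After create + update + delete, the table is empty again.
```

In the real pipeline this folding would be expressed as Spark DataFrame or streaming logic rather than a Python loop, but the event semantics are the same.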
Skill set:
Java: mid- to senior-level experience
Python: mid-level experience (PySpark)
Apache Spark: DataFrames, Spark SQL, Spark Streaming, and ETL pipelines
Apache Airflow
Scala not required, but a plus
Apache Hudi not required, but a plus
Apache Griffin not required, but a plus
AWS Skillset:
Extensive knowledge of S3 and S3 operations (CRUD)
EMR & EMR Serverless
Glue Data Catalog
Step Functions
MWAA (Managed Workflows for Apache Airflow)
Lambda (Python)
AWS Batch