Job Description:
- The job consists of setting up Change Data Capture (CDC) for multiple types of databases to hydrate a data lake.
- Debezium or other CDC experience is required.
- Along with data hydration, the job requires knowledge of ETL transformations using Apache Spark, for both streaming and batch processing.
- The engineer needs to know how to work with Apache Spark DataFrames, ETL jobs, and streaming data pipelines that orchestrate raw CDC data and transform it into usable, queryable data for analytics. Big Data concepts, including performance tuning, are a plus.
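The core transformation the description asks for can be sketched in a few lines. This is a minimal pure-Python illustration of the CDC "upsert" reduction, not the actual pipeline: in the role itself this logic would run as a Spark job (e.g., a DataFrame merge into a lake table), and the event shape used here (`op`, `key`, `after`) is an assumption loosely modeled on Debezium's change-event envelope (`c` = create, `u` = update, `d` = delete).

```python
def apply_cdc_events(events):
    """Fold an ordered stream of Debezium-style change events
    into the current state of a table (key -> row image)."""
    state = {}
    for event in events:
        key = event["key"]
        if event["op"] in ("c", "u"):   # create/update carry the new row image
            state[key] = event["after"]
        elif event["op"] == "d":        # delete removes the row
            state.pop(key, None)
    return state

# Illustrative events only; field names are assumptions, not a real Debezium payload.
events = [
    {"op": "c", "key": 1, "after": {"id": 1, "name": "alice"}},
    {"op": "c", "key": 2, "after": {"id": 2, "name": "bob"}},
    {"op": "u", "key": 1, "after": {"id": 1, "name": "alice2"}},
    {"op": "d", "key": 2, "after": None},
]

print(apply_cdc_events(events))  # {1: {'id': 1, 'name': 'alice2'}}
```

In Spark the same reduction is typically expressed as a window over the change stream (latest event per key) followed by a merge into the target table.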
Skill set:
- Java: mid- to senior-level experience
- Python: mid-level experience (PySpark)
- Apache Spark DataFrames, Spark SQL, Spark Streaming, and ETL pipelines
- Apache Airflow
- Scala: not required, but a plus
- Apache Hudi: not required, but a plus
- Apache Griffin: not required, but a plus
AWS Skillset:
- Extensive knowledge of S3 and S3 operations (CRUD)
- EMR and EMR Serverless
- Glue Data Catalog
- Step Functions
- MWAA (Amazon Managed Workflows for Apache Airflow)
- Lambda (Python)
- AWS Batch
- AWS Deequ: not required, but a plus
Must-Have Qualifications (FNMA):
- 5 to 7+ years of software development experience (minimum 5 years)
- Bachelor's degree in Computer Science, Information Systems, or a related field
- Post-graduate degree desired
- Professional certification(s) desired
- Strong knowledge of the Software Development Lifecycle (SDLC)