Data Engineer
Required skills: Databricks, PySpark, SQL
Responsibilities:
Experience designing and implementing operational, production-grade, large-scale data solutions on Microsoft Azure, with some experience on Snowflake Data Warehouse.
Hands-on experience building productionized data ingestion and processing pipelines using Python, Databricks, and SnowSQL.
Excellent understanding of Snowflake internals and of integrating Snowflake with other data processing and reporting technologies.
Excellent written and verbal presentation and communication skills; able to problem-solve and design in an environment with unclear requirements.
Skill set:
Databricks:
Architecture
Unity Catalog
Delta Lake tables
Auto Loader
Delta Live Tables
Single/multiple CSV file data ingestion
SCD Type 1 and Type 2 implementation
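For the SCD items above, the Databricks-native approach is a Delta Lake MERGE INTO against the dimension table; the following is a minimal plain-Python sketch of the underlying Type 1 vs Type 2 logic, with hypothetical field names (customer_id, city, start_date, end_date, is_current).

```python
def scd_type1_upsert(dim, updates, key="customer_id"):
    """SCD Type 1: overwrite attributes in place; no history is kept."""
    by_key = {row[key]: dict(row) for row in dim}
    for upd in updates:
        by_key.setdefault(upd[key], {}).update(upd)
    return list(by_key.values())

def scd_type2_upsert(dim, updates, key="customer_id", as_of="2024-01-01"):
    """SCD Type 2: expire the current version and append a new one."""
    out = [dict(row) for row in dim]
    for upd in updates:
        for row in out:
            if row[key] == upd[key] and row["is_current"]:
                row["is_current"] = False       # close out the old version
                row["end_date"] = as_of
        out.append({**upd, "start_date": as_of, "end_date": None,
                    "is_current": True})        # new current version
    return out

dim = [{"customer_id": 1, "city": "Pune",
        "start_date": "2023-01-01", "end_date": None, "is_current": True}]
history = scd_type2_upsert(dim, [{"customer_id": 1, "city": "Mumbai"}])
```

In production the same effect is achieved with one MERGE INTO statement matching on the business key and the current-row flag; the sketch only shows what that merge must do to each row.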
Azure Services:
ADLS Gen2
Blob storage configuration
Key Vault
Databricks
ADF
SQL:
Analytical/window functions
CTE
Subqueries
Constraints
Joins
Union/Union All
SCD Type 1 and Type 2 implementation
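A quick way to exercise the SQL items above is a throwaway in-memory SQLite session (window functions need SQLite 3.25+, bundled with recent Python builds); the table and column names here are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales (region TEXT, amount INTEGER);
INSERT INTO sales VALUES
  ('East', 100), ('East', 300), ('West', 200), ('West', 200);
""")

# CTE + window function: rank each sale within its region by amount
ranked = conn.execute("""
WITH regional AS (SELECT region, amount FROM sales)
SELECT region, amount,
       RANK() OVER (PARTITION BY region ORDER BY amount DESC) AS rnk
FROM regional
ORDER BY region, rnk
""").fetchall()

# UNION removes duplicate rows, UNION ALL keeps them
distinct_amounts = conn.execute(
    "SELECT amount FROM sales WHERE region = 'West' "
    "UNION SELECT amount FROM sales WHERE region = 'West'").fetchall()
all_amounts = conn.execute(
    "SELECT amount FROM sales WHERE region = 'West' "
    "UNION ALL SELECT amount FROM sales WHERE region = 'West'").fetchall()
```

The two West sales tie on amount, so RANK() gives both rank 1 within their partition, and the UNION query collapses the duplicate 200s to a single row while UNION ALL returns all four.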
Data warehouse:
Dimensional modelling
SCD Types
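Dimensional modelling centres on a fact table of measures joined to descriptive dimension tables via surrogate keys (a star schema). A minimal sketch with hypothetical table and column names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- dimension: one row per customer, descriptive attributes
CREATE TABLE dim_customer (
  customer_sk INTEGER PRIMARY KEY,   -- surrogate key
  customer_id TEXT,                  -- business key
  city        TEXT
);
-- fact: one row per sale, foreign key plus measures
CREATE TABLE fact_sales (
  customer_sk INTEGER REFERENCES dim_customer(customer_sk),
  amount      INTEGER
);
INSERT INTO dim_customer VALUES (1, 'C001', 'Pune'), (2, 'C002', 'Mumbai');
INSERT INTO fact_sales VALUES (1, 100), (1, 50), (2, 75);
""")

# typical star-schema query: join fact to dimension, aggregate by attribute
rows = conn.execute("""
SELECT d.city, SUM(f.amount)
FROM fact_sales f
JOIN dim_customer d ON d.customer_sk = f.customer_sk
GROUP BY d.city
ORDER BY d.city
""").fetchall()
```

The surrogate key is also what makes SCD Type 2 work in this model: each new version of a customer gets a fresh customer_sk, so facts keep pointing at the version that was current when they occurred.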