
Data Engineer
Job Details
Position: Data Engineer
Location: Remote (hybrid option in Charlotte; see notes)
Duration: 3-6 months
Start Date: ASAP
Interview Process/Times: The candidate will be interviewed by three individuals on the Data Engineering and BI team.
Notes:
Possibility of extension
Location - remote or hybrid (Charlotte) / EST time zone
Must-Have Skills:
Strong Azure data engineering skills
Experience with PySpark and SQL
Ability to provide direction to offshore team members doing the data pipelining
Top Skills / Experience / Background: Experience at an enterprise level in a large, complex organization
Years of Experience: 8+
Description:
Essential Qualifications / Requirements:
Bachelor's degree in Computer Engineering, Computer Science, or a related discipline; Master's degree preferred.
7+ years of ETL design, development, and performance tuning using ETL tools such as SSIS/ADF in a multi-dimensional Data Warehousing environment.
3+ years of experience with setting up and operating data pipelines using Python or SQL
7+ years of advanced SQL Programming: PL/SQL, T-SQL
5+ years of Enterprise Data & Analytics solution architecture
3+ years of extensive hands-on experience in Azure, preferably with data-heavy / analytics applications leveraging relational and NoSQL databases, data warehousing, and big data
3+ years of experience with Azure Data Factory, Azure Synapse Analytics (Azure SQL DW), Azure Analysis Services, Azure Databricks/Spark, Blob Storage, and Azure Functions.
2+ years of experience defining and enabling data quality standards for auditing and monitoring.
Strong analytical abilities and intellectual curiosity
In-depth knowledge of relational database design, data warehousing and dimensional data modeling concepts
Deep understanding of REST and good API design.
Strong collaboration and teamwork skills; excellent written and verbal communication skills.
Self-starter, motivated, and able to work in a fast-paced development environment.
Agile experience highly desirable.
Proficiency with the development environment, including IDE, database server, Git, continuous integration, unit-testing tools, and defect management tools.
Working experience with Python, Spark, and PySpark.
Strong Leadership capabilities.
Preferred Skills:
2+ years of experience with Big Data Management (BDM) for relational and non-relational data (formats such as JSON, XML, Avro, Parquet, copybook, etc.)
Knowledge of DevOps processes (CI/CD) and infrastructure as code.
Knowledge of Master Data Management (MDM) and Data Quality tools.
Experience developing REST APIs.
Experience with Kafka.
Knowledge of key machine learning concepts and MLOps.
Technologies we use: Databricks, Azure SQL DW/Synapse, Azure Tabular, Azure Data Factory, Azure Functions, Azure Containers, Docker, DevOps, Python, PySpark, scripting (PowerShell, Bash), Git, Terraform, Power BI, Snowflake