Lead/Sr Azure Data Engineer || W2 Only
Job Title: Lead/Sr Azure Data Engineer
Location: Minneapolis, MN (hybrid from Day 1)
Duration: 6 months, contract-to-hire (CTH)
Employment Type: W2 or Self-corp only
Job Description:
Must-Haves:
- 12+ years in Data Engineering, with a focus on Azure
- 5+ years in a Senior/Lead capacity
- Expertise in Python, Apache Spark, Azure Synapse, and Azure data engineering services
Key Responsibilities
- ETL and Data Pipeline Development
  - Design, develop, and optimize scalable ETL processes using Python, Apache Spark, and Azure Synapse.
  - Build and manage Azure Data Factory pipelines to orchestrate complex data workflows.
  - Use SQL Pools and Spark Pools within Synapse to manage and process large datasets efficiently.
  - Implement Data Warehousing solutions using Azure Synapse Analytics to provide structured and queryable data layers.
  - Ensure the data platform supports real-time and batch AI/ML data requirements.
- Azure Cloud Development & CI/CD Deployment
  - Build, configure, and manage CI/CD pipelines on Azure DevOps for ETL and data processing tasks.
  - Automate infrastructure provisioning, testing, and deployment using Infrastructure-as-Code (IaC) tools like ARM templates or Terraform.
  - Optimize Azure Data Lake Storage (ADLS Gen2) to store and manage raw and processed data efficiently, ensuring proper access control and data security.
- Cross-Functional Collaboration
  - Collaborate with Data Scientists, Data Engineers, ML Engineers, and Business Analysts to translate business requirements into data solutions.
  - Work with the DevOps and Security teams to ensure smooth and secure deployment of applications and pipelines.
  - Act as the technical lead in designing, developing, and implementing data solutions, mentoring junior team members.
- Data Engineering and API Development
  - Develop and integrate with external and internal APIs for data ingestion and exchange.
  - Build, test, and deploy RESTful APIs for secure data access.
  - Use Kubernetes to containerize and deploy data processing applications.
  - Manage data storage and transformation to support advanced Data Science and AI/ML models.
- Agile Project Management
  - Participate in and lead Agile ceremonies, such as sprint planning, daily stand-ups, and retrospectives.
  - Collaborate with cross-functional teams in iterative development to ensure high-quality and timely feature delivery.
  - Adapt to changing project priorities and business needs in an Agile environment.
Required Skills and Qualifications
- Technical Skills:
  - Expertise in Python and Apache Spark for large-scale data processing.
  - Strong experience in Azure Synapse Analytics, including SQL Pools and Spark Pools.
  - Advanced proficiency in Azure Data Factory for ETL pipeline orchestration and management.
  - Knowledge of Data Warehousing principles, with hands-on experience building solutions on Azure.
  - Experience with SQL, including complex queries, optimization, and performance tuning.
  - Familiarity with CI/CD tools like Azure DevOps and managing infrastructure in Azure Cloud.
  - Experience in Java for API integration and microservices architecture.
  - Hands-on knowledge of Kubernetes for containerized data processing environments.
  - Proficiency in working with Azure Data Lake Storage (ADLS) Gen2 for data storage and management.
  - Experience working with APIs (REST, SOAP) and building API-based data integrations.