
Lead Data Engineer FebS25005

Salary undisclosed


Lead Data Engineer

Charlotte, NC

Full-time Opportunity

Hybrid

Job Description:

We are looking for an experienced Lead Data Engineer with 10+ years of expertise in data engineering, cloud technologies, and database solutions. The ideal candidate will have a strong background in designing, building, and optimizing data pipelines, integrating cloud-based solutions, and leading data initiatives that support business intelligence and analytics.

Key Responsibilities:

  • Lead the design, development, and implementation of scalable data engineering solutions in Azure (primary), AWS (secondary), and Google Cloud Platform environments.
  • Develop, optimize, and maintain ETL pipelines using SSIS (2015, 2019) and Azure Data Factory.
  • Design and implement data warehouse solutions that support analytics, reporting, and business intelligence.
  • Architect, optimize, and troubleshoot SQL Server (2016 and above) databases, including writing complex queries, stored procedures, views, and indexing strategies.
  • Integrate and manage data workflows with APIs (REST, SOAP, RPC) to ensure seamless data exchange between systems.
  • Provide technical leadership and mentorship to data engineering teams, ensuring best practices and standards are followed.
  • Perform data modeling (dimensional and relational) to support enterprise-wide data architecture.
  • Enhance database performance by tuning queries, optimizing indexing strategies, and ensuring high availability.
  • Collaborate with DBA, BI, DevOps, and business teams to deliver high-quality, scalable data solutions.
  • Develop and enforce data governance, security, and compliance standards across data engineering processes.
  • Utilize Git repositories, CI/CD pipelines, and automation tools for source control and deployment management.
  • Conduct unit testing, data validation, and QA/UAT processes to ensure data integrity and accuracy.
  • Monitor, troubleshoot, and resolve production issues, providing 3rd-tier technical support as needed.

Required Technical Skills:

  • 10+ years of experience in SQL Server (2016 and above), including database design, query optimization, and stored procedures.
  • Extensive experience in ETL development using SSIS (2015, 2019) and Azure Data Factory.
  • Strong expertise in Azure (Data Factory, Synapse, Databricks), AWS, and Google Cloud Platform.
  • Proficiency in Python for data processing, automation, and scripting.
  • Hands-on experience with data warehousing and data modeling (dimensional and relational).
  • Experience integrating data with REST APIs, SOAP, and RPC-based services.
  • Advanced skills in database performance tuning, indexing strategies, and query optimization.
  • Experience working with source control tools (Git) and CI/CD pipelines for automated deployments.
  • Strong understanding of data security, governance, and compliance best practices.

Preferred Qualifications:

  • Experience with big data technologies (Databricks, Spark, Snowflake, Synapse Analytics).
  • Familiarity with containerization (Docker, Kubernetes) for data pipeline deployment.
  • Knowledge of real-time data streaming.
  • Experience in ML and AI-driven data processing is a plus.
Employers have access to artificial intelligence language tools ("AI") that help generate and enhance job descriptions, and AI may have been used to create this description. The position description has been reviewed for accuracy, and Dice believes it correctly reflects the job opportunity.