Data Engineering Manager
Location: Seattle, WA (remote, with a monthly visit to the client location)
Duration: 3-6 months, converting to full-time
Job Description
Day to Day Responsibilities:
Lead the data engineering team:
Drive the design, build, and testing of new data pipelines and data models on the new Azure Snowflake platform.
Manage the development team, including performance monitoring and reviews, resource allocation, career development, and training.
Provide data architecture leadership across data foundations, including tools and infrastructure, to facilitate the democratization of data consumption by downstream teams.
Drive data warehouse design, architecture, controls, and access principles across the team to effectively establish a data-driven culture.
Collaborate with product owners and key business stakeholders for roadmap planning and prioritization, to deliver robust cloud-based data solutions.
Establish code standards and disciplines, including design and code reviews.
Manage vendor relationships with key data service providers and provide budget recommendations necessary to fulfill ongoing projects and support.
Oversee cloud cost management activities.
Required Qualifications:
- Bachelor's degree in Computer Science, Engineering, Math, IT, or a related discipline, or equivalent experience.
- 7 years of professional experience building data pipelines, platforms, architectures, data structures, and data models.
- Minimum of 5 years' experience providing technical leadership and guiding teams on data engineering best practices.
Preferred Skills:
- Knowledge of healthcare business processes and healthcare data.
- Live in Washington State or be willing to relocate to WA.
- Proficiency in Microsoft Azure.
- Proficiency in Snowflake on Azure.
- Experience with cloud-based analytics platforms, such as Databricks and Synapse.
- Experience in architectural leadership on data pipeline design and development.
- Experience in project management, product development, and process improvement.
- Strong experience building ETL data pipelines and conducting analysis using Python, SQL, and PySpark.
- Experience with data and workflow management tools.
- Advanced working knowledge of SQL and experience working with relational databases.
- Strong analytical skills for working with both unstructured and structured datasets.