Data Architect - Minneapolis, MN - W2 ONLY
Duration: 1 year, with possible extension or conversion
Hybrid: 3 days onsite per week
Location: Charlotte, NC or Minneapolis, MN
W2 ONLY
Job description:
The Lead Engineer will possess knowledge and experience sufficient to participate in the design, development, testing, and implementation of new or modified applications software, including:
- Applications software development principles and methods
- Operating systems installation and configuration procedures
- The organization's operational environment
- Software design principles, methods, and approaches
- Principles, methods, and procedures for designing, developing, optimizing, and integrating new and/or reusable system components
- Pertinent government regulations
- Infrastructure requirements, such as bandwidth and server sizing
- Database management principles and methodologies, including data structures, data modeling, data warehousing, and transaction processing
- Functionality and operability of the current operating environment
- Systems engineering concepts and factors such as structured design, supportability, survivability, reliability, scalability, and maintainability
- Optimization concepts and methods
The Lead Engineer will also establish and maintain cooperative working relationships with those contacted in the course of the work, speak and write effectively, and prepare effective reports.
Skills:
1. Strong understanding of relational database concepts, SQL (Structured Query Language), and data modeling. Knowledge of various databases used in data warehousing, such as Oracle, SQL Server, PostgreSQL, or MySQL.
2. Proficiency in ETL tools like Azure Data Factory, Azure Synapse, or Microsoft SSIS (SQL Server Integration Services) to extract data from various sources, transform it to fit the target schema, and load it into the data warehouse.
3. Ability to design and implement data warehouse data models, including star schema, snowflake schema, and dimension hierarchies for optimized data retrieval and analysis.
4. Expert in data integration techniques and data quality processes to ensure data accuracy, consistency, and reliability in the data warehouse.
5. Expert in data warehouse architecture principles, such as data staging areas, data marts, data lakes, and overall data flow.
6. Proficient with data warehouse development methodologies and the ability to apply best practices in building scalable and maintainable data warehouses.
7. Proficient in scripting languages like Python, Perl, or Shell scripting for automating ETL processes and data manipulation.
8. Understanding of data security principles and compliance regulations to protect sensitive information in the data warehouse.
9. Skills in optimizing data warehouse performance, including query optimization, index creation, and partitioning.
10. Experience in Azure Storage Containers, Key Vault, Log Analytics, and Synapse.
11. Experience in Synapse Integration Runtimes, Linked Services, Serverless SQL Pool, and Spark Pool.
12. Experience in Synapse triggers and pipeline monitoring.
13. Experience in Synapse pipeline activities and notebooks (PySpark, Spark SQL).
14. Experience in Lake Database, Delta Lake, and the Medallion architecture.
Work Experience Required:
This classification must have a minimum of seven (7) years of experience in electronic data processing systems study, design, and programming.
At least four (4) years of that experience must have been in a lead capacity.
1) 3 years of experience in the past 4 years working with SQL database architecture, data modeling, normalization, and performance optimization.
2) 3 years of experience in the past 4 years working with the Microsoft Azure cloud platform, including Azure services such as Azure Data Lake Storage, Azure Databricks, Azure Data Factory, and Azure DevOps.
3) 3 years of experience in the past 4 years designing and developing data warehouses on platforms such as Microsoft SQL Server, Oracle, or Teradata.
4) 3 years of experience in the past 4 years working with big data technologies such as Apache Hadoop, Apache Spark, or Apache Kafka.
5) 3 years of experience in the past 4 years working with data visualization tools such as Power BI to create insightful visualizations and reports based on data stored in a Synapse data warehouse.
6) 3 years of experience in the past 4 years working with data cleansing, data profiling, and data validation techniques to ensure high data integrity in the data warehouse.