AWS Data Engineer
Position: SR AWS Data Engineer
Minneapolis MN - Hybrid 2-3 days a week onsite
Direct Hire
Skill Inventory:
Data Modelling, Data Design, Orchestration, Managing Large Datasets, Data Lake Formats
The project involves hotel transaction data.
Candidates must be able to understand the nature of the data.
Non-local candidates will be considered but must be comfortable relocating (NO EXCEPTIONS).
No relocation expenses will be provided by the client.
Now we just need you!
We are looking for a highly skilled and experienced data engineer to join our dynamic team. In this role, you will play a vital part in building a next-generation data platform on Amazon Web Services (AWS). Your responsibilities will include defining standards for our data pipelines, including technology selection, building CI/CD processes, and implementing new pipelines to move and transform data within our AWS-based platform. This position is hybrid, with an expectation of 3 days a week in our offices in Bloomington, Minneapolis.
What You'll Be Doing...
- Technical Ownership: Own technology selection for ETL/ELT pipelines and define standards for pipeline orchestration, data quality, and CI/CD.
- Engineering: Lead the design of new data pipelines and help educate the team and the larger organization on best practices and common patterns to follow.
- Stakeholder Collaboration: Work with integration and analytics teams to ensure a smooth transition from data capture and collection to refinement and aggregation.
- Governance: Help implement data governance policies and observability in data pipelines, and participate in the refinement of quality and retention policies.
- Security: Ensure all applicable security policies and processes are followed to support the organization's secure software development goals.
What You'll Bring to Us
- 5+ years of experience as a data engineer, having worked up to a senior level or currently serving as a senior data engineer, with experience in data integration, data engineering, and/or data science.
- 5+ years hands-on experience with building data pipelines in AWS.
- Experience working with technologies such as AWS, Java, Snowflake, and Databricks.
- Experience with high-volume batch and streaming analytical applications.
- Experience working with various data platforms and systems (relational databases, data warehouses, data lakes, cloud services, and big data technologies), and with selecting the appropriate platforms and systems for SaaS products or cloud-based applications.
- Familiarity with data virtualization technologies such as Trino or Presto.
- Familiarity with working in a DevSecOps environment, utilizing CI/CD best practices.