Senior AWS Data Engineer (Coding Expert)
Hands-on coding (Python and pandas coding expert)
Experience: 10+ Years
Start: ASAP
Duration: 12 months
Remote role; the consultant can live anywhere in the US.
Citizenship does not matter, but communication does, and it must be excellent.
For this role you must have hard-core Python experience and at least 10 years of experience. We need basic data wrangling with Python and pandas.
Job description:
They DO NOT WANT big data engineers or architects. They want resources who know Python coding inside and out: hands-on Python engineers who can hard-code Python, and specifically they want pandas coding. pandas is a fast, powerful, flexible, and easy-to-use open-source data analysis and manipulation tool, built on top of the Python programming language.
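For illustration only, here is a minimal sketch of the kind of basic pandas data wrangling the client means; the file name and column names (meter_readings.csv, site_id, reading_date, kwh) are hypothetical, not client specifics:

    import pandas as pd

    # Load a CSV into a DataFrame (file and column names are hypothetical).
    df = pd.read_csv("meter_readings.csv", parse_dates=["reading_date"])

    # Typical wrangling steps: deduplicate, fill gaps, derive a column.
    df = df.drop_duplicates(subset=["site_id", "reading_date"])
    df["kwh"] = df["kwh"].fillna(0.0)
    df["month"] = df["reading_date"].dt.to_period("M")

    # Aggregate: total kWh per site per month.
    monthly = df.groupby(["site_id", "month"], as_index=False)["kwh"].sum()
    print(monthly.head())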
We need someone who has done recent projects with Python coding and who is also a seasoned pandas and NumPy coder. "NumPy coding" refers to writing code using the NumPy library in Python. NumPy is an open-source mathematical and scientific computing library for Python programming tasks.
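Again for illustration only, a minimal NumPy sketch of this kind of array work (all values are made up):

    import numpy as np

    # Vectorized math on an array of sample readings (values are illustrative).
    readings = np.array([10.5, 12.0, np.nan, 9.8, 11.2])

    # Replace missing values with the mean of the observed ones.
    cleaned = np.where(np.isnan(readings), np.nanmean(readings), readings)

    # Normalize to z-scores without an explicit Python loop.
    z = (cleaned - cleaned.mean()) / cleaned.std()
    print(z.round(3))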
In short, they need AWS data engineers who are very strong Python coders and also strong with pandas and NumPy coding.
The client needs a few senior AWS data engineers (not architects). They need hands-on developers (roll up your sleeves and work) with STRONG EXPERTISE IN PYTHON. Resources must have at least 10 years of experience. Energy, utilities, and GIS expertise is highly desired. AWS Architect certification is a big plus.
AWS SENIOR TECH LEAD DATA ENGINEER
We don't need architects or big data engineers. We need AWS lead data engineers who can do hands-on coding with these tools: Python, pandas, PySpark, Terraform, AWS Glue, Lambda, S3, Redshift, and EMR. Let's make sure any candidates you submit have most, if not all, of these critical skills.
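As a hedged example of that hands-on level, here is a sketch of a Lambda-style handler that pulls a CSV from S3 into pandas and writes Parquet back; the bucket, keys, and column name are assumptions, and to_parquet requires pyarrow or fastparquet in the deployment package:

    import io
    import boto3
    import pandas as pd

    s3 = boto3.client("s3")

    def handler(event, context):
        # Read a CSV object from S3 into a DataFrame (bucket/key are hypothetical).
        obj = s3.get_object(Bucket="example-bucket", Key="raw/input.csv")
        df = pd.read_csv(io.BytesIO(obj["Body"].read()))

        # Keep only rows with a positive amount (column name is made up).
        df = df[df["amount"] > 0]

        # Write the result back to S3 as Parquet.
        buf = io.BytesIO()
        df.to_parquet(buf, index=False)
        s3.put_object(Bucket="example-bucket", Key="curated/output.parquet", Body=buf.getvalue())
        return {"rows": len(df)}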
About the Role:
We are seeking an experienced AWS Data Engineer to join our dynamic team. The ideal candidate will have 10+ years of experience in data engineering with a strong focus on AWS technologies. This role involves designing, developing, and maintaining scalable data pipelines and processing systems. The candidate should be adept at managing and optimizing data architectures and be passionate about data-driven solutions. Knowledge of machine learning is a plus.
Key Responsibilities:
- Design and implement scalable data pipelines using AWS services such as Glue, Redshift, S3, Lambda, EMR, and Athena (a hedged Glue-style sketch follows this list).
- Develop and maintain ELT processes to transform and integrate data from various sources.
- Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver high-quality data solutions.
- Optimize and tune performance of data pipelines and queries.
- Ensure data quality and integrity through robust testing and validation processes.
- Implement data security and compliance best practices.
- Monitor and troubleshoot data pipeline issues and ensure timely resolution.
- Stay updated with the latest developments in AWS data engineering technologies and best practices.
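To make the first responsibility concrete, here is a hedged sketch of a minimal Glue PySpark job; it only runs inside the AWS Glue job environment, and the database, table, column, and bucket names are illustrative assumptions:

    import sys
    from awsglue.context import GlueContext
    from awsglue.job import Job
    from awsglue.utils import getResolvedOptions
    from pyspark.context import SparkContext

    # Standard Glue job setup: resolve arguments and initialize the job.
    args = getResolvedOptions(sys.argv, ["JOB_NAME"])
    glue = GlueContext(SparkContext.getOrCreate())
    job = Job(glue)
    job.init(args["JOB_NAME"], args)

    # Read from the Glue Data Catalog (database/table names are hypothetical).
    dyf = glue.create_dynamic_frame.from_catalog(database="raw_db", table_name="events")

    # Drop obviously bad records, then write partitioned Parquet to S3.
    df = dyf.toDF().filter("event_ts IS NOT NULL")
    df.write.mode("overwrite").partitionBy("event_date").parquet("s3://example-bucket/curated/events/")

    job.commit()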
Required Skills and Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- 10+ years of experience in data engineering with a focus on AWS technologies.
- Expertise in AWS services such as Glue, Redshift, S3, Lambda, EMR, and Athena.
- Strong programming skills in Python, pandas, and SQL.
- Experience with database systems such as AWS RDS, PostgreSQL, and SAP HANA.
- Knowledge of data modeling, ETL processes, and data warehousing concepts.
- Familiarity with CI/CD pipelines and version control systems (e.g., Git).
- Experience writing infrastructure as code using Terraform.
- Familiarity with Glue Notebooks, SageMaker Notebooks, Textract, Rekognition, Bedrock, and any GenAI/LLM tools.
- Strong problem-solving skills and attention to detail.
- Excellent communication and collaboration skills.
Nice to Have:
- AWS Certification (e.g., AWS Certified Data Analytics, AWS Certified Solutions Architect).
- Experience with machine learning frameworks and libraries (e.g., TensorFlow, PyTorch, Scikit-learn).
- Knowledge of AWS SageMaker and its integration within data pipelines.
- Knowledge of big data technologies such as Apache Spark, Hadoop, or Kafka.
- Experience with data visualization tools like Tableau, Power BI, or AWS QuickSight.
- Familiarity with Azure DevOps and Azure Pipelines.
- Familiarity with data catalog and governance tools such as AWS DQ and Collibra, and with profiling tools such as AWS Glue DataBrew.
Uday Raj, Manager at Onwardpath, 2701 Larsen Rd #BA142, Green Bay, WI 54303. Ph: +1 | Certified WBE & MBE