Python PySpark Developer (Data Transformation/Data Lakes)
Job: Python PySpark Developer (Data Transformation/Data Lakes)
Location: Charlotte, NC (HYBRID)
Long-Term Contract
Job Description:
6+ years of experience designing, developing, and maintaining ETL processes using PySpark for data transformation and loading into data lakes or warehouses (a brief illustrative sketch appears at the end of this description).
6+ years of experience writing efficient, reusable Python scripts for data manipulation, integration, and analysis.
Experience developing and implementing shell scripts to automate data processing tasks.
Experience managing and scheduling data workflows using AutoSys to ensure timely and reliable data availability.
Strong proficiency in SQL and experience with relational databases (e.g., MySQL, SQL Server).
Hands-on experience optimizing data processing performance and troubleshooting issues in data pipelines.
Knowledge of and experience with CI/CD, Agile tools, source control, and design tools.
Experience working with Agile Methodology.
Develop, test, and deploy code, and conduct code reviews.
Provide fixes for issues and bugs reported from test and production environments.
Provide support during system integration testing (SIT) and user acceptance testing (UAT).
Fix defects raised in production.
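
For context only, below is a minimal PySpark sketch of the kind of transformation-and-load work described above. The paths, column names, and schema are hypothetical placeholders, not details of the actual role.

# Minimal PySpark ETL sketch: read raw data, apply a transformation, and
# load the result into a data-lake path. All paths, column names, and the
# table layout here are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: read raw CSV files landed in the lake's raw zone (hypothetical path).
raw = spark.read.option("header", True).csv("s3a://datalake/raw/orders/")

# Transform: cast the amount column, drop rows with bad values, add a load date.
cleaned = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount").isNotNull())
       .withColumn("load_date", F.current_date())
)

# Load: write partitioned Parquet into the curated zone of the lake.
(cleaned.write
        .mode("overwrite")
        .partitionBy("load_date")
        .parquet("s3a://datalake/curated/orders/"))

spark.stop()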