
Big Data PySpark Architect - CG

Salary undisclosed


Role: Big Data PySpark Architect
Location: Charlotte, NC (Hybrid Onsite)
Duration: 12+ Months Contract

Note: The candidate needs to be in the office 3 days every week. Local candidates or candidates from adjacent states only.

Job Description:
The ideal candidate will have 10+ years of experience in software architecture, data engineering, and large-scale data processing systems, with a strong focus on PySpark.
Experience in Finance Technology or Enterprise Function technology domains will be a significant advantage.
This role requires a leader with a strategic mindset who can design, implement, and oversee high-performance, distributed data processing systems.

Key Responsibilities:
Lead the architecture, design, and implementation of large-scale distributed data systems using PySpark.
Collaborate with business stakeholders, technology teams, and data engineers to gather requirements, define objectives, and build scalable data pipelines.
Drive end-to-end solution design, including data acquisition, storage, processing, and analysis (a minimal illustrative sketch follows this list).
Optimize performance of big data processing systems, ensuring low-latency and high-throughput data flows.
Ensure alignment with industry best practices and compliance standards in data security and privacy.
Mentor and guide a team of developers and engineers, promoting best practices in coding, architecture, and design patterns.
Evaluate new tools and technologies, identifying opportunities for innovation and driving their implementation.
Collaborate closely with cross-functional teams, including finance and enterprise functions, to ensure solutions meet business objectives.
Support critical decision-making and roadmapping to enhance the organization's data processing capabilities.
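
As an illustration of the kind of pipeline these responsibilities describe, here is a minimal PySpark batch ETL sketch covering acquisition, processing, and storage. The application name, S3 paths, and column names (account_id, amount, event_ts) are assumptions made for the example, not details taken from the posting.

```python
# Minimal PySpark batch pipeline sketch (paths and column names are hypothetical).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("example-etl-pipeline")  # hypothetical application name
    .getOrCreate()
)

# Acquisition: read raw transaction data (placeholder path).
raw = spark.read.parquet("s3://example-bucket/raw/transactions/")

# Processing: basic cleansing and daily aggregation per account.
daily_totals = (
    raw.filter(F.col("amount").isNotNull())
       .withColumn("trade_date", F.to_date("event_ts"))
       .groupBy("trade_date", "account_id")
       .agg(F.sum("amount").alias("total_amount"),
            F.count("*").alias("txn_count"))
)

# Storage: write partitioned output for downstream analysis (placeholder path).
(daily_totals.write
    .mode("overwrite")
    .partitionBy("trade_date")
    .parquet("s3://example-bucket/curated/daily_totals/"))

spark.stop()
```

Partitioning the output by date is one common way to keep downstream reads selective; the actual layout would depend on the organization's query patterns.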

Qualifications:
10+ years of experience in software architecture, with a strong focus on PySpark and big data processing systems.
Proficient in Apache Spark, Hadoop, and other distributed computing frameworks.
Hands-on experience with streaming technologies such as Kafka and real-time data processing (see the streaming sketch after this section).
Deep understanding of data architecture, ETL/ELT processes, and cloud-based data platforms.
Proven experience in Finance Technology or Enterprise Functions is highly desirable.
Strong knowledge of relational databases, NoSQL databases, and data warehousing solutions.
Solid experience in working with cloud platforms such as AWS, Azure, or Google Cloud.

Excellent problem-solving skills, strategic thinking, and a results-oriented approach.
Proven leadership abilities, with experience managing technical teams in a fast-paced environment.
Strong communication skills, capable of presenting ideas clearly to both technical and non-technical stakeholders.
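
Since the qualifications call out Kafka and real-time processing, the following is a minimal PySpark Structured Streaming sketch. The broker address, topic name, and event schema are assumptions for the example, not details from the posting, and the spark-sql-kafka connector package must be on the classpath.

```python
# Minimal PySpark Structured Streaming sketch reading from Kafka
# (broker, topic, and schema are hypothetical).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import (StructType, StructField, StringType,
                               DoubleType, TimestampType)

spark = SparkSession.builder.appName("example-kafka-stream").getOrCreate()

event_schema = StructType([
    StructField("account_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_ts", TimestampType()),
])

# Read the raw Kafka stream and parse the JSON payload.
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "transactions")               # placeholder topic
    .load()
    .select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

# Windowed aggregation with a watermark to bound state for late-arriving data.
windowed = (
    events.withWatermark("event_ts", "10 minutes")
          .groupBy(F.window("event_ts", "5 minutes"), "account_id")
          .agg(F.sum("amount").alias("total_amount"))
)

# Console sink for illustration; a real pipeline would target a durable sink.
query = (
    windowed.writeStream
    .outputMode("update")
    .format("console")
    .option("truncate", "false")
    .start()
)
query.awaitTermination()
```

The watermark keeps the streaming state bounded, which is central to the low-latency, high-throughput flows this role is expected to optimize.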

Preferred Skills:
Experience with financial systems or enterprise applications.
Familiarity with machine learning frameworks and AI-driven data insights.
Experience with DevOps practices and CI/CD pipelines in data engineering.
