
Hadoop Engineer/ Architect

Salary undisclosed


Hi,

I hope this message finds you well! I am reaching out about an exciting direct-client opportunity with one of our clients. Could you please review the requirements below and let me know if you are interested in this position?

Job Title: Hadoop Engineer/ Architect

Location: Jersey City, NJ

Hybrid (3 days a week onsite)

Experience: 10+ years

Job Description:

We are seeking a talented Hadoop Engineer / Architect to join our team. The ideal candidate will have strong experience designing, building, and maintaining large-scale data solutions using the Hadoop ecosystem. This role will involve working closely with cross-functional teams to architect, implement, and optimize data processing systems for big data analytics and storage.

Key Responsibilities:

Architect and design scalable, reliable, and high-performance Hadoop-based big data solutions.

Manage and maintain Hadoop clusters, ensuring optimal performance, scalability, and security.

Collaborate with data engineers and data scientists to design efficient data pipelines and ETL processes.

Design and develop solutions for data ingestion, processing, and storage using tools within the Hadoop ecosystem such as HDFS, Hive, HBase, MapReduce, Pig, Spark, Flume, and Kafka.

Implement monitoring, tuning, and troubleshooting strategies for performance optimization.

Ensure data integrity and implement security protocols for sensitive data.

Provide thought leadership and recommend enhancements to the existing architecture based on the latest Hadoop technologies and best practices.

Assist with the migration of legacy systems and ensure seamless data integration with the Hadoop ecosystem.

Guide the bank in meeting its product goals with deep focus on big data architecture modernization, data monetization, data availability and data management.

Collaborate with DevOps teams to ensure efficient deployment and automation of Hadoop solutions.

Qualifications:

Bachelor's/Master's degree in Computer Science, Engineering, or a related field.

5+ years of experience working with Hadoop ecosystem components (HDFS, Hive, HBase, etc.).

Proven expertise in data architecture and Hadoop cluster management.

Hands-on experience with Spark, MapReduce, and NoSQL databases.

Proficient in Java, Python, or Scala for data processing and scripting.

Strong understanding of distributed computing and parallel processing.

Experience with cloud platforms (AWS, Azure, Google Cloud Platform) and their big data solutions (e.g., Amazon EMR, Azure HDInsight).

Knowledge of data governance, security protocols, and compliance.

Familiarity with DevOps practices, including automation of deployments and scaling solutions.

Excellent problem-solving skills and ability to work in a fast-paced environment.

Preferred Skills:

Experience with containerization technologies (Docker, Kubernetes).

Knowledge of machine learning tools and integration with Hadoop.

Experience in migrating on-prem Hadoop clusters to cloud platforms.

Familiarity with CI/CD pipelines for big data solutions.

A product mindset is a must; candidates should have played a key role in formulating a product-centric data strategy for a financial services client.

Exposure to treasury areas such as liquidity management, payments, or capital management will be a huge plus.

Technical skills: big data, cloud data management, data lineage, data quality platforms, data distribution architectures, and enterprise data patterns across multiple data layers.

Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.