Job Title: Big Data Developer
Location: Charlotte, NC
Job Type: Contract/Full-time
Key Responsibilities:
- Design, develop, and optimize big data solutions for processing large datasets.
- Implement and maintain ETL pipelines using big data technologies.
- Work with distributed systems to store and process data efficiently.
- Develop and manage real-time and batch data processing applications.
- Ensure data quality, consistency, and security across all platforms.
- Collaborate with data scientists, analysts, and other engineers to meet business requirements.
- Monitor and troubleshoot performance issues in big data applications.
- Stay up to date with emerging big data technologies and best practices.
Required Skills & Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 5+ years of experience in big data development.
- Hands-on experience with Apache Spark, Hadoop, Hive, and Kafka.
- Proficiency in SQL and NoSQL databases (e.g., PostgreSQL, Cassandra, HBase).
- Experience with cloud platforms such as AWS, Azure, or Google Cloud Platform.
- Knowledge of data warehousing and ETL concepts.
- Familiarity with containerization and orchestration tools like Docker and Kubernetes.
Applicant Consent:
By submitting your application, you agree to ApTask's () and , and provide your consent to receive SMS and voice call communications regarding employment opportunities that match your resume and qualifications. You understand that your personal information will be used solely for recruitment purposes and that you can withdraw your consent at any time by contacting us at or . Message frequency may vary. Msg & data rates may apply.