TigerGraph DB Developer
Role: TigerGraph DB Developer
We are seeking a highly motivated and skilled Data Engineer with a strong focus on graph databases to join our growing team. You will play a critical role in designing, developing, and maintaining our data infrastructure, with a particular emphasis on leveraging graph database technologies like Neo4j, TigerGraph, or ArangoDB. This role requires a deep understanding of data engineering principles, experience with big data processing frameworks like Spark, and a passion for building robust and scalable systems.
Responsibilities:
Design and implement data pipelines to ingest, transform, and load data from various sources into our data lake and graph database.
Develop and maintain ETL processes using Spark and PySpark (a brief illustrative sketch follows this list).
Design and implement the schema and data model for our graph database (Neo4j, TigerGraph, or ArangoDB).
Write efficient and performant queries for data retrieval, analysis, and visualization within the graph database.
Work with NoSQL databases to support various data storage and retrieval needs.
Collaborate with data scientists and other stakeholders to understand their data requirements and translate them into effective data solutions.
Monitor and optimize the performance and scalability of the data infrastructure, including the graph database and Spark clusters.
Implement best practices for data governance, security, and quality.
Work with staged environments (development, integration/testing, and production) to ensure smooth deployments.
Contribute to the development of internal tools and libraries for working with data.
Document data pipelines, data models, and other technical aspects of the data infrastructure.
Stay up-to-date with the latest advancements in data engineering, big data technologies, and graph databases.
[Optional: If BOM domain is crucial] Apply knowledge of Bill of Materials (BOM) structures and data to design and optimize data pipelines and graph database models.
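The following is a minimal, hedged sketch of the kind of Spark/PySpark ETL work described in this list. The bucket paths, column names, and staging layout are hypothetical illustrations only, not details of our actual pipelines.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Illustrative only: paths, column names, and the staging layout are hypothetical.
spark = SparkSession.builder.appName("orders-to-graph-staging").getOrCreate()

# Ingest: read raw source data from the data lake.
orders = spark.read.parquet("s3://data-lake/raw/orders/")

# Transform: light cleansing and shaping into graph edge records.
edges = (
    orders
    .filter(F.col("status") == "COMPLETED")
    .select(
        F.col("customer_id").alias("from_id"),
        F.col("product_id").alias("to_id"),
        F.col("order_ts").cast("date").alias("order_date"),
        F.col("quantity"),
    )
    .dropDuplicates(["from_id", "to_id", "order_date"])
)

# Load: write CSV files for a downstream graph loading job (TigerGraph, Neo4j, or ArangoDB) to ingest.
edges.write.mode("overwrite").option("header", True).csv("s3://data-lake/staging/purchased_edges/")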
Qualifications:
5+ years of professional experience in data engineering.
3-5 years of hands-on experience with graph databases (Neo4j, TigerGraph, or ArangoDB).
3-5 years of hands-on experience with Spark.
2-5 years of experience with NoSQL databases.
1+ years of experience with PySpark.
1+ years of experience with data science concepts and practices.
Strong understanding of data engineering principles and best practices.
Proficiency in at least one of the following query languages: Cypher (Neo4j), GSQL (TigerGraph), or AQL (ArangoDB) (an illustrative client-side sketch follows this list).
Experience with data modeling and schema design, especially for graph databases.
Experience with ETL processes and tools.
Solid understanding of database performance tuning and optimization.
Excellent problem-solving and analytical skills.
Strong communication and collaboration skills.
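As a hedged illustration of how the query-language proficiency above is commonly exercised from application code, the sketch below uses the pyTigerGraph Python client to run a pre-installed GSQL query. The host, credentials, graph name, query name, and parameter are assumptions for illustration only.

import pyTigerGraph as tg

# Illustrative only: host, credentials, graph name, query name, and parameters are hypothetical.
conn = tg.TigerGraphConnection(
    host="https://tg.example.com",
    graphname="Retail",
    username="tigergraph",
    password="changeme",
)
conn.getToken(conn.createSecret())  # authenticate against the REST++ endpoint

# Run a previously installed GSQL query and inspect the returned result set.
results = conn.runInstalledQuery("customers_who_bought", params={"product_id": "product_42"})
print(results)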