
Founding Python Engineer

Salary undisclosed


Job Description
About Hyperspell

Hyperspell is building the future of secure, seamless data access for AI applications. Our platform, a Data-as-a-Service solution, will enable AI-powered apps to connect safely and efficiently to users' personal data across emails, documents, calendars, and more. By simplifying data integration, ensuring robust security, and providing user-controlled access, Hyperspell will empower developers to build apps that harness personal data without complex setup.

Role Description

As a Founding Python Generalist Engineer at Hyperspell, you'll be at the forefront of developing our core Data-as-a-Service infrastructure from scratch. We have a working prototype for internal use and are going from "works for us" to "works for everyone"; you'll play a critical role in designing and implementing our architecture, setting best practices, and building the foundation for a rapidly scaling product. You'll design and implement the foundational Transform and Retrieve modules, creating the backbone of Hyperspell's data processing pipeline, which will allow AI applications to access and use personal data with ease and security. This is a hands-on, high-impact role where you'll build APIs, SDKs, and scalable ETL pipelines that enable developers to connect to and query user data seamlessly.

Joining as a founding engineer, you'll have the unique opportunity to influence every aspect of the engineering approach and product direction, setting the tone for future hires. Hyperspell is a product by developers, for developers, so we're looking for someone who understands the importance of intuitive, developer-friendly tools and can champion a product-focused engineering culture.

Role Responsibilities
  • Collaborate on initial architecture and infrastructure: work closely with product and DevOps teams to set up a robust, scalable, and secure infrastructure.

  • Develop developer-friendly APIs and SDKs from scratch: create APIs and SDKs (in Python and TypeScript) that will make it easy for developers to connect to Hyperspell and access its capabilities.

  • Design and build data transformation and retrieval from the ground up: define and implement our data ingestion, transformation, and retrieval processes, ensuring data is processed efficiently and stored securely.

  • Establish scalable ETL pipelines: architect and implement ETL pipelines to handle data ingestion, transformation, and storage from multiple data sources, including APIs, webhooks, and file systems.

  • Implement search and retrieval algorithms: build and optimize the retrieval layer to provide accurate responses for user queries, including vector-based search and reranking algorithms.

  • Ensure data security and compliance: implement security best practices from the start, including token rotation, encryption, and secure storage mechanisms to protect sensitive user data.

  • Contribute to scaling efforts as we grow: design infrastructure with future scaling needs in mind, anticipating high data volumes, continuous ingestion, and complex query handling.

What You Need to Be Successful (a.k.a. Qualifications)
  • Strong proficiency in Python: extensive experience in Python, particularly in data pipeline development and backend engineering.

  • Hands-on experience with retrieval-augmented generation (RAG) and vector databases.

  • Understanding of data security practices: knowledge of secure data storage, token management, OAuth workflows, and compliance standards like SOC2/GDPR.

  • Developer empathy: experience building developer-focused tools or platforms, with a strong understanding of creating intuitive, accessible APIs and SDKs.

  • Comfort with ambiguity and early-stage environments: ability to navigate uncharted territory and build foundational systems in a fast-paced, evolving startup environment.

  • Strong problem-solving skills: willingness to explore new tools and approaches to address complex challenges related to data integration, transformation, and retrieval.

Bonus points if you:
  • Have experience with graph databases (e.g., Neo4j) and search engines (e.g., ElasticSearch) for knowledge graph management and document storage.

  • Have proven expertise in building ETL pipelines from scratch: skilled in designing, developing, and maintaining ETL pipelines to manage ingestion, transformation, and storage for dynamic data sources.

  • Are skilled at balancing development speed with high standards for code quality and security.

  • Are comfortable advocating for best practices and mentoring future hires to establish a high-performance engineering culture.
