Design Engineer
The team comes into our office 5 days a week (within walking distance of the Caltrain station, accessible from San Francisco; commuter benefits provided).
Challenges
How do we develop code generation product(s) capable of solving many everyday developer tasks?
What's the next big interface after autocomplete and chat for interacting with AI?
How do we maintain reliability and scalability of our service across a wide range of IDEs, platforms, hardware, and programming languages?
What we're looking for
Extensive experience with frontend technologies such as TypeScript and React. Familiarity with Next.js and Tailwind CSS is a plus.
Strong product instincts and the ability to iterate on low-, medium-, and high-fidelity mockups with the team in Figma.
Ability to learn and become an expert quickly.
A team player. Communicates well cross-functionally and does what is most important for the company.
A self-starter. Hungry to dream up, plan, design, build, and iterate on AI products independently.
Passion for AI-powered developer tools like Codeium, Copilot, ChatGPT, and others is a strong plus.
What we believe
- Engineers own projects end to end. No one knows the product better than the creator and they should drive brainstorming, design, iteration, and user research. We rarely do "handoffs."
- Research is in service of a better product. While we read many papers, we won't have time to write them. The best AI researchers have excellent software engineering skills and know that infrastructure and evaluation work are critical.
Recent projects
Some of the things that our engineers have worked on recently:
Regularly deploying an extension that scales to hundreds of thousands of daily active users across 40+ IDEs.
Live Chat with popular repositories, directly in your browser.
An internal Kubernetes-native data processing framework to handle petabytes of data across thousands of spot CPUs.
A code attribution service for customers who want to ensure any generated code is licensed properly.
Instruction- and edit-fine-tuned models for Command.
Model inference performance optimization using Nvidia CUTLASS, CUDA C++, and PTX assembly language.
Remote parsing, embedding, and indexing of users' codebases.