🕑 Full Time 📌 Remote (US/CA time zones)
Salary: USD 140,000 - 180,000 per year
Equity: 1.0 - 1.5%
Benefits: 401k match, health insurance, hardware setup of choice, flexible work hours
At pganalyze, we are redefining the user experience of optimizing Postgres database performance. Our product helps customers such as Atlassian, Robinhood, and DoorDash understand complex Postgres problems and performance issues.
Application developers use pganalyze to get deep insights into complex database behaviors. Our product is heavy on automated analysis and custom visualizations, and makes automatic recommendations, such as suggesting the best index to create for a slow query.
You will enjoy working at pganalyze if you are a software craftsperson at heart who cares about writing tools for developers. You will take new features from idea to production deployment, end-to-end, within days. Your work will regularly involve writing or contributing to open-source components as well as the Postgres project itself.
We are a fully remote company, with the core team based in the San Francisco Bay Area. Our company is bootstrapped and profitable. We emphasize autonomy and focus time by having few meetings per week.
Your core responsibility: developing and optimizing our Postgres statistics and analysis pipeline, end-to-end, and working on the processes that generate automated insights from this complex data set. This work requires a detailed understanding of the core data points collected from the source Postgres database as a time series, and involves optimizing how they are retrieved, transported to the pganalyze servers, and then processed and analyzed.
Today, this data pipeline is a combination of open-source Go code (in the pganalyze collector) and statistics processing written in Ruby. You will be responsible for improving this pipeline and introducing new technologies, including a potential rewrite of the statistics processing in Rust.
Some of the work will lead into the depths of Postgres code, and you might need to compile some C code, or understand how the pganalyze parser library, pg_query, works in detail.
Your work is the foundation of the next generation of pganalyze, with a focus on the automatic insights we can derive from the workload of the monitored Postgres databases, and giving fine-tuned recommendations such as which indexes to create, or which config settings to tune.
At pganalyze, you will:
- Collaborate with other engineers on shipping new functionality end-to-end, and ensure features are performant and well implemented
- Be the core engineer for the foundational components of pganalyze, such as the statistics pipeline that processes all data coming into the product
- Develop new functionality that monitors additional Postgres statistics, or derives new insights from the existing time series information
- Write Ruby, Go or Rust code on the pganalyze backend and the pganalyze collector
- Evaluate and introduce new technologies, such as whether we should utilize Rust in more places of the product
- Optimize the performance of pganalyze components, using language-specific profilers, or Linux tools like “perf”
- Scale out our backend, which relies heavily on Postgres itself for statistics storage
- Contribute to our existing open-source projects, such as pg_query, or create new open-source projects in the Postgres space
- Work with upstream communities, such as the Postgres project, and contribute code back
Previously, you have:
- Worked professionally for at least 5 years as a software engineer
- Written complex, data-heavy backend code in Rust, Go, Ruby or Python
- Used Postgres for multiple projects, are comfortable writing SQL, and are familiar with “EXPLAIN”
- Created indexes on a Postgres database based on a query being slow
- Looked at the source of a complex open-source project to chase a hard-to-understand bug
- Written code that fetches data and/or interacts with cloud provider APIs
- Structured your work and set your schedule to optimize for your own productivity
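The EXPLAIN and indexing items above describe a routine workflow in this role. As a minimal sketch (the `orders` table and its columns are hypothetical, and this needs a live database to run against):

```sql
-- Inspect the plan for a slow query; without an index on customer_id,
-- Postgres falls back to a sequential scan over the whole table
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM orders WHERE customer_id = 42;

-- Add a matching index without blocking writes, then re-run the
-- EXPLAIN above to confirm the plan switches to an index scan
CREATE INDEX CONCURRENTLY orders_customer_id_idx
  ON orders (customer_id);
```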
Optionally, you may also have:
- Written low-level C code, for fun
- Used Protocol Buffers, FlatBuffers, msgpack or Cap’n Proto to build your own APIs
- Analyzed patterns in time series data and run statistical analysis on the data
- Experimented with ML frameworks to analyze complex data sets
- Optimized a data-heavy application built on Postgres
- Written your own Postgres extensions
- Used APM and tracing tools to understand slow requests end-to-end
You could also be familiar with:
- Building your own Linux system from scratch
- The many regards of Tom Lane on the Postgres mailing list
- Reproducible builds, and why it would be really nice to have them, like yesterday