Lead Data Pipeline Engineer
Two Six Technologies
Full Time
Senior (5+ years)
Candidates must have 5+ years of experience as a data engineer and 8+ years of total software engineering experience, including data engineering roles. Expertise in SQL and Python is required, along with proficiency in at least one additional data engineering language such as Scala, Java, or Rust. Strong knowledge of data infrastructure and architecture design, as well as hands-on experience with modern orchestration tools like Airflow, Dagster, or Prefect, are essential. Experience developing and scaling dbt projects, working in a SaaS or high-growth tech environment, and familiarity with open table formats like Apache Iceberg are considered advantageous.
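The orchestration tools named above (Airflow, Dagster, Prefect) all model a pipeline as a directed acyclic graph of tasks executed in dependency order. A minimal sketch of that core idea, using only Python's standard library; the task names here are hypothetical and stand in for real extract/transform/publish steps:

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline: task name -> set of upstream dependencies,
# mirroring how orchestrators model work as a DAG.
PIPELINE = {
    "extract_events": set(),
    "extract_users": set(),
    "clean_events": {"extract_events"},
    "build_metrics": {"clean_events", "extract_users"},
    "publish_report": {"build_metrics"},
}

def run_pipeline(dag):
    """Execute tasks in an order that respects their dependencies."""
    order = list(TopologicalSorter(dag).static_order())
    for task in order:
        # A real orchestrator would dispatch and monitor work here;
        # this sketch just records the execution order.
        print(f"running {task}")
    return order

order = run_pipeline(PIPELINE)
```

Real orchestrators add scheduling, retries, and observability on top of this dependency graph, which is where the monitoring responsibilities described below come in.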
The Senior Data Engineer will design, build, and manage scalable and reliable data pipelines for ingesting product and event data. They will develop and maintain canonical datasets for tracking key product and business metrics, and architect robust systems for large-volume batch data processing. This role involves driving decisions on data architecture, tooling, and engineering best practices, as well as enhancing the observability and monitoring of existing workflows. The engineer will also partner cross-functionally with various teams to understand data needs and deliver solutions, and provide product feedback by using new data infrastructure and AI technology.
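"Canonical datasets for tracking key product and business metrics" typically means rolling raw event data up into deduplicated, metric-ready tables. A minimal sketch under assumed inputs (the event tuples and the daily-active-users metric are illustrative, not from the posting):

```python
from collections import defaultdict
from datetime import date

# Hypothetical raw product events as (user_id, event_date) pairs,
# such as a pipeline might ingest before building canonical datasets.
events = [
    ("u1", date(2024, 5, 1)),
    ("u2", date(2024, 5, 1)),
    ("u1", date(2024, 5, 1)),  # duplicate event: same user, same day
    ("u1", date(2024, 5, 2)),
]

def daily_active_users(events):
    """Roll raw events up into a canonical daily-active-users metric."""
    users_by_day = defaultdict(set)
    for user_id, day in events:
        users_by_day[day].add(user_id)  # sets deduplicate repeat events
    return {day: len(users) for day, users in sorted(users_by_day.items())}

dau = daily_active_users(events)
# dau == {date(2024, 5, 1): 2, date(2024, 5, 2): 1}
```

In practice this rollup would live in a dbt model or warehouse SQL rather than application code, but the shape of the work is the same: ingest, deduplicate, aggregate, publish.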
Cloud-based data management platform for analytics
GetDBT.com is a cloud-based data management platform that helps companies streamline their data development processes. It allows users to write business logic more efficiently, enhances code reusability, and ensures data quality through testing and governance features. Unlike its competitors, GetDBT.com emphasizes scaling with growing data complexity, making it suitable for businesses at various stages of data maturity. The company's goal is to empower organizations to manage their data effectively while providing reliable service through a subscription model.