Senior Software Engineer - Data Engineering
Position Overview
The Senior Software Engineer - Data Engineering improves data pipelines, builds monitoring and reliability solutions, and upholds strong data governance. The role is central to the Data Engineering team and leads a revamp of our analytical foundations.
Employment Type: Full-Time
Reports to: Engineering Manager
Responsibilities
- Lead the design, development, and refactoring of critical data pipelines to reduce failures and improve efficiency.
- Implement comprehensive monitoring, alerting, and service level agreement tracking to achieve and maintain high operational uptime.
- Troubleshoot and resolve data pipeline issues to shorten incident resolution times.
- Contribute to the development, implementation, and adoption of data governance standards across critical datasets.
- Ensure data quality and integrity throughout the data lifecycle.
- Enforce governance standards on core pipelines.
- Participate in the redesign and deployment of core subject areas within our analytical data model to improve clarity, utility, and support business reporting needs.
- Establish dashboard curation standards to improve usability and user satisfaction.
- Eliminate unused or inefficient tasks within our data processing frameworks.
- Develop structured data extractors for various application use cases.
- Contribute to compute cost reduction efforts through task reconfiguration and the implementation of efficient incremental data processing strategies.
- Develop mechanisms that alert application engineers to potential breaking changes in data schemas.
Qualifications
- Experience: 5+ years of hands-on experience in data engineering, with a strong focus on building and maintaining scalable and reliable data pipelines (ETL/ELT).
- Data Warehousing & Modeling: Proven experience with data warehousing concepts, data modeling (e.g., dimensional, relational), and building analytical datasets.
- Technical Skills:
- Proficiency in Python and SQL.
- Experience with data pipeline tools such as Spark/PySpark and dbt.
- Experience with platforms like AWS, Databricks, or Google BigQuery.
- Proficiency in version control (GitHub) and CI/CD tools (CircleCI).
- Data Governance & Quality: Experience with data governance principles, data quality management, and data security.
- Monitoring & Alerting: Experience implementing monitoring and alerting for data pipelines.
- BI Tools: Familiarity with BI tools (e.g., Looker and LookML) and how data is consumed for analytics and reporting.
- Collaboration: Experience working with non-technical teammates to identify dataset requirements.
Physical/Cognitive Requirements
- Capability to remain seated in a stationary position for prolonged periods.
- Eye-hand coordination and manual dexterity to operate keyboard, computer, and other office-related equipment.
Compensation & Benefits
- Pay Range: $138,380 - $254,111, plus equity and benefits (United States new-hire base salary target)
- The starting base salary will depend on factors such as education, training, skills, years and depth of experience, certifications, licensure, organizational needs, internal peer equity, and geographic/market data.
- Compensation structures and ranges are tailored to specific geographic zones to ensure fair and equitable compensation. Your Recruiter can share your geographic zone upon inquiry.
- Benefits & Perks:
- Remote-first culture
- 401(k) savings plan through Fidelity
- Comprehensive medical, vision, and dental coverage through multiple medical plan options
- Disability insurance
- Paid Time Off ("PTO")