Who You Are:

- 5+ years of experience in software or data engineering, with a focus on building scalable data infrastructure
- Strong experience with data pipelining and modeling, including tools like Apache Airflow, Databricks, Snowflake, and dbt
- In-depth knowledge of streaming technologies such as Apache Kafka
- Skilled in designing and maintaining ELT/ETL workflows using modern tooling
- Proficient in SQL and comfortable working with both relational and NoSQL databases (e.g., Postgres, Bigtable, Spanner)
- Experience working with cloud platforms, ideally GCP
- Familiarity with JavaScript and front-end tracking concepts, especially in non-browser environments like CTV
- Strong problem-solving and debugging skills, especially with distributed systems and large-scale event data
- Excellent collaboration and communication skills
- Bonus: experience in adtech, martech, or CTV attribution

The approximate compensation range for this position is $145,000–$210,000. The actual offer, reflecting the total compensation package and benefits, will be determined by a number of factors, including the applicant’s experience, knowledge, skills, and abilities, as well as internal equity among our team.

#LI-Remote

We are Madhive

Madhive is a dynamic, diverse, innovative, and friendly place to work. We embrace our differences and believe they fuel our creativity. We come from varied backgrounds and think that’s important. Whether it’s taking ideas from previous lives and applying them in different ways or creating something completely new, we are all trail-blazing team players who think big and want to make an impact.

We are committed to cultivating a culture of inclusion and collaboration. We welcome diversity in education, culture, opinions, race, ethnicity, gender identity, veteran status, religion, disability, sexual orientation, and beliefs.

Please be advised that we will NOT be using third-party recruiting agencies for this search.

Apply: https://jobs.ashbyhq.com/madhive/bdd16cbb-0989-499e-a25b-f919e99e358f

Role summary: Candidates should have over 5 years of experience in software or data engineering, with a focus on building scalable data infrastructure. Strong experience with data pipelining and modeling, including tools like Apache Airflow, Databricks, Snowflake, and dbt, is required.
In-depth knowledge of streaming technologies such as Apache Kafka, proficiency in SQL, and experience with cloud platforms, ideally GCP, are also necessary. Familiarity with JavaScript and front-end tracking concepts, strong problem-solving skills, and excellent collaboration and communication skills are essential. Bonus points for experience in adtech, martech, or CTV attribution.

The Senior Data Engineer will design and implement scalable data pipelines for ingesting, processing, and transforming large volumes of data. They will build and maintain real-time and batch workflows, collaborate with cross-functional teams to ensure accurate event data capture, and own and optimize ELT processes. Responsibilities include developing and maintaining data models, monitoring pipeline health, implementing anomaly detection, maintaining high data quality standards, contributing to cloud data infrastructure evolution, and documenting data pipelines and workflows. The role also involves promoting and enforcing best practices in data engineering, observability, and data governance.
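As a rough illustration of the batch ELT work described above, the sketch below wires an extract step, a warehouse load, and a simple row-count anomaly check into a small Apache Airflow 2.x DAG. The DAG id, task names, and threshold are hypothetical assumptions for illustration, not anything specified in the posting.

```python
# A minimal sketch of a batch ELT DAG, assuming Apache Airflow 2.x.
# The DAG id, task names, and the row-count threshold are hypothetical stand-ins.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_events(**context):
    # Placeholder: pull a batch of raw event data from an upstream source.
    print(f"extracting raw events for {context['ds']}")


def load_to_warehouse(**context):
    # Placeholder: load the extracted batch into a warehouse staging table.
    print(f"loading staging table for {context['ds']}")


def check_row_count(**context):
    # Placeholder anomaly check: in a real pipeline this count would come from
    # a warehouse query, and a too-small batch would halt downstream tasks.
    row_count = 1_000
    if row_count < 100:
        raise ValueError("suspiciously small batch; halting downstream tasks")


with DAG(
    dag_id="event_elt_example",   # hypothetical name
    schedule="@hourly",           # Airflow 2.4+; older versions use schedule_interval
    start_date=datetime(2024, 1, 1),
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_events", python_callable=extract_events)
    load = PythonOperator(task_id="load_to_warehouse", python_callable=load_to_warehouse)
    quality = PythonOperator(task_id="check_row_count", python_callable=check_row_count)

    # Linear dependency: extract, then load, then the data-quality check.
    extract >> load >> quality
```

Keeping the anomaly check as its own task means a bad batch fails visibly in the DAG run rather than silently propagating into downstream models.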