\u003C/p>\u003C/li>\u003Cli>\u003Cp style=\"min-height:1.5em\">Own the design, review, and optimization of \u003Cstrong>production pipelines\u003C/strong>, ensuring \u003Cstrong>high performance, reliability, and maintainability\u003C/strong>.\u003Cbr />\u003C/p>\u003C/li>\u003Cli>\u003Cp style=\"min-height:1.5em\">Drive \u003Cstrong>customer data onboarding projects\u003C/strong>, standardizing external feeds into canonical models.\u003Cbr />\u003C/p>\u003C/li>\u003Cli>\u003Cp style=\"min-height:1.5em\">Collaborate with senior leadership to define \u003Cstrong>team priorities, project roadmaps, and data standards\u003C/strong>, translating objectives into actionable assignments for your team.\u003Cbr />\u003C/p>\u003C/li>\u003Cli>\u003Cp style=\"min-height:1.5em\">Lead sprint planning and work with cross-functional stakeholders to \u003Cstrong>prioritize initiatives that improve customer metrics and product impact\u003C/strong>.\u003Cbr />\u003C/p>\u003C/li>\u003Cli>\u003Cp style=\"min-height:1.5em\">Partner closely with \u003Cstrong>Product, ML, Analytics, Engineering, and Customer teams\u003C/strong> to translate business needs into effective data solutions.\u003Cbr />\u003C/p>\u003C/li>\u003Cli>\u003Cp style=\"min-height:1.5em\">Ensure \u003Cstrong>high data quality, observability, and automated validations\u003C/strong> across all pipelines.\u003Cbr />\u003C/p>\u003C/li>\u003Cli>\u003Cp style=\"min-height:1.5em\">Contribute hands-on when necessary to \u003Cstrong>architecture, code reviews, and pipeline design\u003C/strong>.\u003Cbr />\u003C/p>\u003C/li>\u003Cli>\u003Cp style=\"min-height:1.5em\">Identify and implement \u003Cstrong>tools, templates, and best practices\u003C/strong> that improve team productivity and reduce duplication.\u003Cbr />\u003C/p>\u003C/li>\u003Cli>\u003Cp style=\"min-height:1.5em\">Build \u003Cstrong>cross-functional relationships\u003C/strong> to advocate for data-driven decision-making and solve complex business problems.\u003Cbr 
/>\u003C/p>\u003C/li>\u003Cli>\u003Cp style=\"min-height:1.5em\">Hire, mentor, and develop team members, fostering a \u003Cstrong>culture of innovation, collaboration, and continuous improvement\u003C/strong>.\u003Cbr />\u003C/p>\u003C/li>\u003Cli>\u003Cp style=\"min-height:1.5em\">Communicate \u003Cstrong>technical concepts and strategies\u003C/strong> effectively to both technical and non-technical stakeholders.\u003Cbr />\u003C/p>\u003C/li>\u003Cli>\u003Cp style=\"min-height:1.5em\">Measure team impact through \u003Cstrong>metrics and KPIs\u003C/strong>, ensuring alignment with company goals.\u003Cbr />\u003C/p>\u003C/li>\u003C/ul>\u003Cp style=\"min-height:1.5em\">\u003C/p>\u003Cp style=\"min-height:1.5em\">\u003C/p>\u003Ch2>\u003Cstrong>What You Bring\u003C/strong>\u003C/h2>\u003Cul style=\"min-height:1.5em\">\u003Cli>\u003Cp style=\"min-height:1.5em\">\u003Cstrong>Degree\u003C/strong> in Computer Science, Engineering, or a related field.\u003Cbr />\u003C/p>\u003C/li>\u003Cli>\u003Cp style=\"min-height:1.5em\">\u003Cstrong>3+ years\u003C/strong> of combined technical leadership and engineering management experience, preferably in a startup, with a proven track record of \u003Cstrong>managing data teams and delivering high-impact projects\u003C/strong> from concept to deployment.\u003Cbr />\u003C/p>\u003C/li>\u003Cli>\u003Cp style=\"min-height:1.5em\">\u003Cstrong>10+ years\u003C/strong> of experience in data engineering, including building and maintaining \u003Cstrong>production pipelines\u003C/strong> and distributed computing frameworks.\u003Cbr />\u003C/p>\u003C/li>\u003Cli>\u003Cp style=\"min-height:1.5em\">Strong expertise in \u003Cstrong>Python, Spark, SQL, and Airflow\u003C/strong>.\u003C/p>\u003C/li>\u003Cli>\u003Cp style=\"min-height:1.5em\">Hands-on experience in \u003Cstrong>pipeline architecture, code review, and mentoring junior engineers\u003C/strong>.\u003Cbr />\u003C/p>\u003C/li>\u003Cli>\u003Cp style=\"min-height:1.5em\">Prior experience with 
\u003Cstrong>customer data onboarding\u003C/strong> and standardizing non-canonical external data.\u003Cbr />\u003C/p>\u003C/li>\u003Cli>\u003Cp style=\"min-height:1.5em\">Deep understanding of \u003Cstrong>distributed data processing, pipeline orchestration, and performance tuning\u003C/strong>.\u003Cbr />\u003C/p>\u003C/li>\u003Cli>\u003Cp style=\"min-height:1.5em\">Exceptional ability to \u003Cstrong>manage priorities, communicate clearly, and work cross-functionally\u003C/strong>, with experience building and leading high-performing teams.\u003Cbr />\u003C/p>\u003C/li>\u003Cli>\u003Cp style=\"min-height:1.5em\">Demonstrated experience \u003Cstrong>leading small teams\u003C/strong>, including performance management and career development.\u003Cbr />\u003C/p>\u003C/li>\u003Cli>\u003Cp style=\"min-height:1.5em\">Comfortable with \u003Cstrong>ambiguity\u003C/strong>, taking initiative, thinking strategically, and executing methodically.\u003Cbr />\u003C/p>\u003C/li>\u003Cli>\u003Cp style=\"min-height:1.5em\">Ability to \u003Cstrong>drive change, inspire distributed teams, and solve complex problems with a data-driven mindset\u003C/strong>.\u003Cbr />\u003C/p>\u003C/li>\u003Cli>\u003Cp style=\"min-height:1.5em\">Customer-oriented, ensuring work significantly advances \u003Cstrong>product value and impact\u003C/strong>.\u003C/p>\u003C/li>\u003C/ul>\u003Cp style=\"min-height:1.5em\">\u003Cstrong>Bonus:\u003C/strong>\u003C/p>\u003Cul style=\"min-height:1.5em\">\u003Cli>\u003Cp style=\"min-height:1.5em\">Familiarity with \u003Cstrong>healthcare data\u003C/strong> (837/835 claims, EHR, UB04).\u003Cbr />\u003C/p>\u003C/li>\u003Cli>\u003Cp style=\"min-height:1.5em\">Experience with \u003Cstrong>cloud platforms\u003C/strong> (AWS/GCP), \u003Cstrong>Databricks\u003C/strong>, \u003Cstrong>streaming frameworks\u003C/strong> (Kafka/SQS), and \u003Cstrong>containerized workflows\u003C/strong> (Docker/Kubernetes).\u003Cbr />\u003C/p>\u003C/li>\u003Cli>\u003Cp 
style=\"min-height:1.5em\">Experience building \u003Cstrong>internal DE tooling, frameworks, or SDKs\u003C/strong> to improve team productivity.\u003C/p>\u003C/li>\u003C/ul>\u003Cp style=\"min-height:1.5em\">\u003C/p>\u003Cp style=\"min-height:1.5em\">\u003C/p>\u003Ch2>\u003Cstrong>Why you'll love working here\u003C/strong>\u003C/h2>\u003Cul style=\"min-height:1.5em\">\u003Cli>\u003Cp style=\"min-height:1.5em\">\u003Cstrong>High Impact:\u003C/strong> Your team’s work powers key decisions across product, ML, operations, and customer-facing initiatives.\u003Cbr />\u003C/p>\u003C/li>\u003Cli>\u003Cp style=\"min-height:1.5em\">\u003Cstrong>Ownership & Growth:\u003C/strong> Influence the data platform and pipeline architecture while mentoring a growing team.\u003Cbr />\u003C/p>\u003C/li>\u003Cli>\u003Cp style=\"min-height:1.5em\">\u003Cstrong>Cross-Functional Exposure:\u003C/strong> Work with product, platform, engineering , ML, analytics, and customer teams to solve meaningful problems.\u003Cbr />\u003C/p>\u003C/li>\u003Cli>\u003Cp style=\"min-height:1.5em\">\u003Cstrong>Remote Flexibility:\u003C/strong> Fully remote with opportunities to collaborate across teams.\u003Cbr />\u003C/p>\u003C/li>\u003C/ul>\u003Cp style=\"min-height:1.5em\">\u003Cstrong>Early Builder Advantage:\u003C/strong> Shape processes, standards, and practices as we scale.\u003C/p>\u003Cp style=\"min-height:1.5em\">\u003C/p>\u003Cp style=\"min-height:1.5em\">\u003C/p>\u003Cp style=\"min-height:1.5em\">\u003Cstrong>Equal Employment Opportunity at Machinify\u003C/strong>\u003C/p>\u003Cp style=\"min-height:1.5em\">Machinify is committed to hiring talented and qualified individuals with diverse backgrounds for all of its positions. Machinify believes that the gathering and celebration of unique backgrounds, qualities, and cultures enriches the workplace. 
\u003C/p>\u003Cp style=\"min-height:1.5em\">See our Candidate Privacy Notice at: \u003Ca target=\"_blank\" rel=\"noopener noreferrer nofollow\" href=\"https://www.machinify.com/candidate-privacy-notice/\">https://www.machinify.com/candidate-privacy-notice/\u003C/a>\u003C/p>","https://jobs.ashbyhq.com/machinify/5dfc67d1-5243-4f3f-9cf6-cc5fa649d003",{"id":48,"name":46,"urlSafeSlug":46,"logo":49},[593],{"city":52,"region":53,"country":16},"2025-09-25T09:37:14.413Z","Candidates should possess a degree in Computer Science, Engineering, or a related field, with at least 3 years of technical leadership and engineering management experience, preferably in a startup environment. A minimum of 10 years of experience in data engineering is required, including building and maintaining production pipelines and distributed computing frameworks. Strong expertise in Python, Spark, SQL, and Airflow is essential, along with hands-on experience in pipeline architecture, code review, and mentoring junior engineers. Prior experience with customer data onboarding and standardizing non-canonical external data is necessary, as is a deep understanding of distributed data processing, pipeline orchestration, and performance tuning. Exceptional ability to manage priorities, communicate clearly, and work cross-functionally, with demonstrated experience building and leading high-performing teams, including performance management and career development, is also required.","The Data Engineering Manager will lead, mentor, and grow a high-performing team of Data Engineers, fostering technical excellence, collaboration, and career growth. They will own the design, review, and optimization of production pipelines, ensuring high performance, reliability, and maintainability. This role involves driving customer data onboarding projects, standardizing external feeds into canonical models, and collaborating with senior leadership to define team priorities, project roadmaps, and data standards. 
The manager will lead sprint planning, prioritize initiatives that improve customer metrics and product impact, and partner closely with Product, ML, Analytics, Engineering, and Customer teams to translate business needs into effective data solutions. Responsibilities also include ensuring high data quality, observability, and automated validations across all pipelines, contributing hands-on when necessary to architecture, code reviews, and pipeline design, and identifying and implementing tools, templates, and best practices to improve team productivity. The role requires building cross-functional relationships to advocate for data-driven decision-making, solving complex business problems, hiring, mentoring, and developing team members, and communicating technical concepts and strategies effectively to both technical and non-technical stakeholders, while measuring team impact through metrics and KPIs.",{"employment":598,"compensation":600,"experience":601,"visaSponsorship":604,"location":605,"skills":606,"industries":612},{"type":599},{"id":61,"name":62,"description":124},{"minAnnualSalary":17,"maxAnnualSalary":17,"currency":17,"details":17},{"experienceLevels":602},[603],{"id":136,"name":137,"description":138},{"type":74},{"type":74},[558,607,608,609,79,610,77,196,148,402,144,611],"Pipeline Design","Mentoring","Project Management","Cloud Platforms","Cross-functional Collaboration",[613,615,617],{"id":151,"name":614},"Healthcare Technology",{"id":151,"name":616},"Artificial Intelligence",{"id":151,"name":512},{"id":619,"title":11,"alternativeTitles":620,"slug":633,"jobPostId":619,"description":634,"applyUrl":635,"company":636,"companyOption":637,"locations":641,"listingDate":643,"listingSite":118,"isRemote":15,"requirements":644,"responsibilities":645,"status":18,"expiryDate":17,"summary":646},"58b32094-3a67-48e5-8572-1c1efbcbd929",[621,622,623,624,625,626,217,627,628,629,324,630,631,218,632],"Python SQL Data Engineer","ETL Pipeline Developer","Cloud Data Engineer 
AWS","Real-time Analytics Engineer","Sports Data Engineer","Machine Learning Data Engineer","Data Pipeline Architect","API Data Engineer","Big Data Engineer Python","Predictive Analytics Engineer","AWS Data Pipeline Specialist","Sports Betting Data Scientist","data-engineer-58b32094-3a67-48e5-8572-1c1efbcbd929","\u003Cdiv>\u003Ch3>Company Overview\u003C/h3>\n\u003Cp>\u003Cspan style=\"font-weight: 400;\">Swish Analytics is a sports analytics, betting and fantasy startup building the next generation of predictive sports analytics data products. We believe that oddsmaking is a challenge rooted in engineering, mathematics, and sports betting expertise; not intuition. We're looking for team-oriented individuals with an authentic passion for accurate and predictive real-time data who can execute in a fast-paced, creative, and continually-evolving environment without sacrificing technical excellence. Our challenges are unique, so we hope you are comfortable in uncharted territory and passionate about building systems to support products across a variety of industries and consumer/enterprise clients.\u003C/span>\u003C/p>\n\u003Ch3>\u003Cstrong>Job Description\u003C/strong>\u003C/h3>\n\u003Cp>\u003Cspan style=\"font-weight: 400;\">The Swish Analytics team is seeking Senior Data Engineers to have direct impact on the infrastructure and delivery of our core consumer and enterprise data offerings. We're a team passionate about accurate predictions and real-time data, and hope you find satisfaction in building new products with the latest and greatest technologies. 
\u003Cstrong>This is a remote position.\u003C/strong>\u003C/span>\u003C/p>\n\u003Cp>\u003Cstrong>Duties\u003C/strong>\u003C/p>\n\u003Cul>\n\u003Cli>Architect low-latency, real-time analytics systems including raw data collection, feature development and endpoint production\u003C/li>\n\u003Cli>Build new sports betting data products and predictions offerings\u003C/li>\n\u003Cli>Integrate large and complex real-time datasets into new consumer and enterprise products\u003C/li>\n\u003Cli>Develop production-level predictive analytics into enterprise-grade APIs\u003C/li>\n\u003Cli>Support production systems and help triage issues during live sporting events\u003C/li>\n\u003Cli>Contribute to the design and implementation of new, fully-automated sports data delivery frameworks\u003C/li>\n\u003C/ul>\n\u003Cp>\u003Cstrong>Requirements\u003C/strong>\u003C/p>\n\u003Cul>\n\u003Cli>BS/BA degree in Mathematics, Computer Science, or related STEM field\u003C/li>\n\u003Cli>Minimum of 5 years of demonstrated experience writing production-level code (Python)\u003C/li>\n\u003Cli>Proficiency in Python and SQL (preferably MySQL); minimum of 5 years of experience\u003C/li>\n\u003Cli>Demonstrated experience with Airflow\u003C/li>\n\u003Cli>Demonstrated experience with Kubernetes\u003C/li>\n\u003Cli>Experience building end-to-end ETL pipelines\u003C/li>\n\u003Cli>Experience utilizing REST APIs\u003C/li>\n\u003Cli>Experience with version control (git), continuous integration and deployment, shell scripting, and cloud-computing infrastructures (AWS)\u003C/li>\n\u003Cli>Experience with web scraping and cleaning unstructured data\u003C/li>\n\u003Cli>Knowledge of data science and machine learning concepts\u003C/li>\n\u003Cli>Knowledge of sports betting\u003C/li>\n\u003Cli>Must have knowledge and understanding of the NBA or NFL and the ability to use your knowledge of the sport to inform your work with complex datasets\u003C/li>\n\u003C/ul>\n\u003Cp>Salary: $120,000 - 
165,500\u003C/p>\n\u003Ch6>\u003Cspan style=\"font-weight: 400;\">Swish Analytics is an Equal Opportunity Employer. All candidates who meet the qualifications will be considered without regard to race, color, religion, sex, national origin, age, disability, sexual orientation, pregnancy status, genetic, military, veteran status, marital status, or any other characteristic protected by law. The position responsibilities are not limited to the responsibilities outlined above and are subject to change. At the employer’s discretion, this position may require successful completion of background and reference checks.\u003C/span>\u003C/h6>\u003C/div>","https://job-boards.greenhouse.io/swishanalytics/jobs/4612572005","Swish Analytics",{"id":638,"name":636,"urlSafeSlug":639,"logo":640},"ca0aa144-8bef-48a6-a902-d10d5f2b3c97","SwishAnalytics","w4ks1qx6ryiurq7yzq6z",[642],{"city":435,"region":53,"country":16},"2025-09-26T07:21:45.412Z","Candidates must possess a BS/BA degree in Mathematics, Computer Science, or a related STEM field, with a minimum of 5 years of demonstrated experience writing production-level code in Python and SQL, preferably MySQL. Experience with Airflow, Kubernetes, building end-to-end ETL pipelines, utilizing REST APIs, and version control (git) is required. Proficiency in continuous integration and deployment, shell scripting, cloud computing infrastructures (AWS), web scraping, cleaning unstructured data, and knowledge of data science/machine learning concepts are also necessary. A strong understanding of sports betting, specifically the NBA or NFL, is essential to inform work with complex datasets.","The Data Engineer will be responsible for architecting low-latency, real-time analytics systems, including raw data collection, feature development, and endpoint production. They will build new sports betting data products and predictions offerings, integrating large and complex real-time datasets into new consumer and enterprise products. 
The role involves developing production-level predictive analytics into enterprise-grade APIs, supporting production systems, and triaging issues during live sporting events. Additionally, the Data Engineer will contribute to the design and implementation of new, fully-automated sports data delivery frameworks.",{"employment":647,"compensation":649,"experience":650,"visaSponsorship":653,"location":654,"skills":655,"industries":662},{"type":648},{"id":61,"name":62,"description":124},{"minAnnualSalary":17,"maxAnnualSalary":17,"currency":17,"details":17},{"experienceLevels":651},[652],{"id":132,"name":133,"description":134},{"type":74},{"type":74},[196,77,656,78,657,558,658,659,660,661],"MySQL","Kubernetes","Predictive Analytics","API Development","Real-time Data Processing","Low-latency Systems",[663,665,667,669],{"id":151,"name":664},"Sports Analytics",{"id":151,"name":666},"Betting",{"id":151,"name":668},"Fantasy Sports",{"id":151,"name":670},"Data Products",{"id":672,"title":26,"alternativeTitles":673,"slug":689,"jobPostId":672,"description":690,"applyUrl":691,"company":692,"companyOption":693,"locations":696,"listingDate":698,"listingSite":118,"isRemote":15,"requirements":699,"responsibilities":700,"status":18,"expiryDate":17,"summary":701},"de171dab-d136-4b69-9e57-76f350ed013b",[674,675,676,677,678,679,680,681,682,683,684,685,686,687,688],"Data Engineer (dbt & Snowflake)","BI Engineer (Healthcare Data)","SQL Data Modeler (Snowflake)","Data Transformation Specialist (dbt)","Healthcare Analytics Developer","Data Warehouse Engineer (Snowflake)","Business Intelligence Developer (QuickSight)","Data Pipeline Engineer (dbt)","Cloud Data Engineer (Snowflake)","Senior Analytics Engineer (Healthcare)","Data Analyst (SQL & dbt)","Data Solutions Engineer (Healthcare Tech)","ETL Developer (Snowflake & dbt)","Data Modeler (Healthcare Analytics)","Production SQL Developer (dbt)","analytics-engineer-de171dab-d136-4b69-9e57-76f350ed013b","\u003Cdiv>\u003Cp>\u003Cspan 
style=\"font-family: arial, helvetica, sans-serif; font-size: 10pt;\">Based in San Francisco, \u003Ca href=\"https://www.arine.io/\" target=\"_blank\">Arine\u003C/a> is a rapidly growing healthcare technology and clinical services company with a mission to ensure individuals receive the safest and most effective treatments for their unique and evolving healthcare needs. \u003C/span>\u003C/p>\n\u003Cp>\u003Cspan style=\"font-family: arial, helvetica, sans-serif; font-size: 10pt;\">Frequently, medications cause more harm than good. Incorrect drugs and doses costs the US healthcare system over $528 billion in waste, avoidable harm, and hospitalizations each year. Arine is redefining what excellent healthcare looks like by solving these issues through our software platform (SaaS). We combine cutting edge data science, machine learning, AI, and deep clinical expertise to introduce a patient-centric view to medication management, and develop and deliver personalized care plans on a massive scale for patients and their care teams.\u003C/span>\u003C/p>\n\u003Cp>\u003Cspan style=\"font-family: arial, helvetica, sans-serif; font-size: 10pt;\">Arine is committed to improving the lives and health of complex patients that have an outsized impact on healthcare costs and have traditionally been difficult to identify and address. These patients face numerous challenges including complicated prescribing issues across multiple medications and providers, medication challenges with many chronic diseases, and patient issues with access to care. Backed by leading healthcare investors and collaborating with top healthcare organizations and providers, we deliver recommendations and facilitate clinical interventions that lead to significant, measurable health improvements for patients and cost savings for customers. 
\u003C/span>\u003C/p>\n\u003Cp style=\"line-height: 1.3;\">\u003Cstrong>\u003Cspan style=\"font-family: arial, helvetica, sans-serif; font-size: 10pt;\">\u003Cspan style=\"text-decoration: underline;\">Why is Arine a Great Place to Work?\u003C/span>\u003C/span>\u003C/strong>\u003C/p>\n\u003Cp>\u003C/p>\n\u003Cp style=\"line-height: 1.3;\">\u003Cspan style=\"font-family: arial, helvetica, sans-serif; font-size: 10pt;\">\u003Cstrong>Outstanding Team and Culture -\u003C/strong> Our shared mission unites and motivates us to do our best work. We have a relentless passion and commitment to the innovation required to be the market leader in medication intelligence. \u003C/span>\u003C/p>\n\u003Cp style=\"line-height: 1.3;\">\u003Cspan style=\"font-family: arial, helvetica, sans-serif; font-size: 10pt;\">\u003Cstrong>Making a Proven Difference in Healthcare -\u003C/strong> We are saving patient lives, and enabling individuals to experience improved health outcomes, including significant reductions in hospitalizations and cost of care.\u003C/span>\u003C/p>\n\u003Cp style=\"line-height: 1.3;\">\u003Cspan style=\"font-family: arial, helvetica, sans-serif; font-size: 10pt;\">\u003Cstrong>Market Opportunity -\u003C/strong> Arine is backed by leading healthcare investors and was founded to tackle one of the largest healthcare problems today: non-optimized medication therapies, which cost the US 275,000 lives and $528 billion annually.\u003C/span>\u003C/p>\n\u003Cp style=\"line-height: 1.3;\">\u003Cspan style=\"font-family: arial, helvetica, sans-serif; font-size: 10pt;\">\u003Cstrong>Dramatic Growth -\u003C/strong> Arine is managing more than 18 million lives across prominent health plans after only 4 years in the market, and was ranked 236 on the 2024 Inc. 
5000 list and was named the 5th fastest-growing company in the AI category.\u003C/span>\u003C/p>\u003C/div>\u003Cdiv>\u003Ch3>The Role\u003C/h3>\n\u003Cp>The Analytics Engineer role will report to the Director of Data Services and work within the Analytics team alongside our data operations engineers. This position offers an excellent opportunity to develop your analytics engineering skills while working within our medallion architecture to build data transformations that power business intelligence solutions. You will collaborate closely with senior analytics engineers to translate business needs into technical specifications and deliver high-quality data products.\u003C/p>\n\u003Cp>As an Analytics Engineer, you will work within the intermediate and marts layers of our data platform, building on the solid foundation provided by our data operations team's staging layer. This role provides clear growth opportunities toward more senior analytics responsibilities while contributing to data-driven decision making across the organization.\u003C/p>\n\u003Ch3>\u003Cstrong>What You'll be Doing\u003C/strong>\u003C/h3>\n\u003Cul>\n\u003Cli>Build and maintain dbt data models within our medallion architecture, transforming staging data into intermediate and mart layers for business consumption\u003C/li>\n\u003Cli>Develop dashboards and reports using QuickSight and other BI tools to present healthcare data in clear, actionable formats for stakeholders\u003C/li>\n\u003Cli>Collaborate with product teams and Customer Solutions Architects to understand requirements and translate business needs into data model specifications\u003C/li>\n\u003Cli>Write production-quality SQL transformations in Snowflake, following established patterns and best practices for data modeling\u003C/li>\n\u003Cli>Implement data validation and testing using dbt tests and other quality assurance frameworks to ensure data accuracy and completeness\u003C/li>\n\u003Cli>Support metric standardization by 
contributing to our single source of truth for organizational KPIs and healthcare outcome measurements\u003C/li>\n\u003Cli>Work closely with data operations engineers to optimize data flows from staging through marts layers and provide feedback on staging layer requirements\u003C/li>\n\u003Cli>Create technical documentation for data models, transformations, and analytics solutions to support team knowledge sharing\u003C/li>\n\u003Cli>Assist in user support and training for analytics tools and help stakeholders understand and effectively use data products\u003C/li>\n\u003Cli>Process and analyze healthcare datasets including claims data, clinical outcomes, and medication management metrics while maintaining HIPAA compliance\u003C/li>\n\u003C/ul>\n\u003Ch3>\u003Cstrong>Who You Are and What You Bring\u003C/strong>\u003C/h3>\n\u003Cul>\n\u003Cli>2-4 years of experience in analytics engineering, business intelligence development, or data analysis roles\u003C/li>\n\u003Cli>Strong SQL skills with experience in data warehousing platforms, preferably Snowflake\u003C/li>\n\u003Cli>Experience with dbt or similar transformation frameworks for building data models and pipelines\u003C/li>\n\u003Cli>Proficiency with BI tools such as QuickSight, Tableau, Power BI, or similar visualization platforms\u003C/li>\n\u003Cli>Understanding of data modeling concepts including dimensional modeling, fact/dimension tables, and analytics layer design\u003C/li>\n\u003Cli>Experience with version control (Git) and collaborative development workflows\u003C/li>\n\u003Cli>Python programming skills for data processing, automation, and integration tasks\u003C/li>\n\u003Cli>Familiarity with cloud data platforms and AWS services, particularly those supporting analytics workloads\u003C/li>\n\u003Cli>Healthcare data exposure or strong interest in learning healthcare analytics, claims data, and clinical outcomes\u003C/li>\n\u003Cli>Strong analytical and problem-solving abilities with attention to data 
quality and validation\u003C/li>\n\u003Cli>Excellent communication skills and ability to translate technical concepts for non-technical stakeholders\u003C/li>\n\u003C/ul>\n\u003Cp>\u003Cstrong>Preferred Qualifications:\u003C/strong>\u003C/p>\n\u003Cul>\n\u003Cli>Experience with medallion architecture or layered data platform design (staging → intermediate → marts)\u003C/li>\n\u003Cli>Background in healthcare analytics with understanding of claims data, clinical datasets, or population health metrics\u003C/li>\n\u003Cli>Advanced dbt experience including macros, packages, and performance optimization techniques\u003C/li>\n\u003Cli>AWS analytics services experience including S3, Lambda, Step Functions, or QuickSight administration\u003C/li>\n\u003Cli>Agile development experience and familiarity with sprint-based delivery and stakeholder collaboration\u003C/li>\n\u003Cli>Data visualization design skills with focus on user experience and actionable insights\u003C/li>\n\u003Cli>Understanding of healthcare compliance including HIPAA requirements and data privacy standards\u003C/li>\n\u003Cli>Cross-functional collaboration experience working with clinical, product, and customer success teams\u003C/li>\n\u003C/ul>\n\u003Cp>\u003Cstrong>Technical Skills:\u003C/strong>\u003C/p>\n\u003Cul>\n\u003Cli>Analytics Platforms: Snowflake, dbt, QuickSight, AWS analytics services\u003C/li>\n\u003Cli>Programming: SQL (advanced), Python, basic shell scripting\u003C/li>\n\u003Cli>Data Modeling: Dimensional modeling, fact/dimension design, metrics frameworks\u003C/li>\n\u003Cli>Development: Git, dbt testing frameworks, CI/CD for analytics workflows\u003C/li>\n\u003Cli>Healthcare Data: Claims data analysis, clinical outcomes, medication management metrics\u003C/li>\n\u003C/ul>\n\u003Ch3>\u003Cstrong>Remote Work Requirements\u003C/strong>\u003C/h3>\n\u003Cul>\n\u003Cli>An established private work area that ensures information privacy\u003C/li>\n\u003Cli>A stable high-speed internet connection for remote work\u003C/li>\n\u003Cli>This role is remote, but you will be required to come to on-site 
meetings multiple times per year. This may include the interview process, onboarding, and team meetings\u003C/li>\n\u003C/ul>\n\u003Ch3>\u003Cstrong>Perks\u003C/strong>\u003C/h3>\n\u003Cp>Joining Arine offers you a dynamic role and the opportunity to contribute to the company's growth and shape its future. You'll have unparalleled learning and growth prospects, collaborating closely with experienced Clinicians, Engineers, Data Scientists, Software Architects, and Digital Health Entrepreneurs.\u003C/p>\n\u003Cp>The posted range represents the expected base salary for this position and does not include any other potential components of the compensation package, benefits, and perks. Ultimately, the final pay decision will consider factors such as your experience, job level, location, and other relevant job-related criteria. The base salary range for this position is: \u003Cstrong>$110,000-135,000/year\u003C/strong>.\u003C/p>\n\u003Cp> \u003C/p>\u003C/div>\u003Cdiv>\u003Cp style=\"line-height: 1.3;\">\u003Cstrong>\u003Cspan style=\"font-family: arial, helvetica, sans-serif; font-size: 10pt;\">\u003Cspan style=\"text-decoration: underline;\">\u003Cspan style=\"font-family: arial, helvetica, sans-serif; font-size: 10pt;\">Job Requirements:\u003C/span>\u003C/span>\u003C/span>\u003C/strong>\u003C/p>\n\u003Cul>\n\u003Cli style=\"font-family: arial, helvetica, sans-serif; font-size: 10pt; line-height: 2;\">\u003Cspan style=\"font-family: arial, helvetica, sans-serif; font-size: 10pt;\">Ability to pass a background check\u003C/span>\u003C/li>\n\u003Cli style=\"font-family: arial, helvetica, sans-serif; font-size: 10pt; line-height: 2;\">\u003Cspan style=\"font-family: arial, helvetica, sans-serif; font-size: 10pt;\">Must live in and be eligible to work in the United States\u003C/span>\u003C/li>\n\u003C/ul>\n\u003Cp style=\"line-height: 1.3;\">\u003Cstrong>\u003Cspan style=\"font-family: arial, helvetica, sans-serif; font-size: 10pt;\">\u003Cspan style=\"text-decoration: 
Information">
underline;\">Information Security Roles and Responsibilities:\u003C/span>\u003C/span>\u003C/strong>\u003C/p>\n\u003Cp style=\"line-height: 1.3;\">\u003Cspan style=\"font-family: arial, helvetica, sans-serif; font-size: 10pt;\">All staff at Arine are expected to be part of its Information Security Management Program and undergo periodic training on Information Security Awareness and HIPAA guidelines. Each user is responsible for maintaining a secure working environment and following all policies and procedures. Upon hire, each person is assigned trainings that must be completed before access is granted for their specific role within Arine.\u003C/span>\u003C/p>\n\u003Cp style=\"line-height: 1.3;\">\u003Cspan style=\"font-family: arial, helvetica, sans-serif; font-size: 10pt;\">\u003Cem>Arine is an equal opportunity employer. We are committed to creating a diverse and inclusive workplace where all employees are treated with fairness and respect. We do not discriminate on the basis of race, ethnicity, color, religion, gender, sexual orientation, age, disability, or any other legally protected status. Our hiring decisions and employment practices are based solely on qualifications, merit, and business needs. We encourage individuals from all backgrounds to apply and join us in our mission.\u003C/em>\u003C/span>\u003C/p>\n\u003Cp style=\"line-height: 1.3;\">\u003Cspan style=\"font-family: arial, helvetica, sans-serif; font-size: 10pt;\">Check our website at \u003Cspan style=\"text-decoration: underline;\">\u003Cem>\u003Cstrong>https://www.arine.io\u003C/strong>\u003C/em>\u003C/span>. This is a unique opportunity to join a growing start-up revolutionizing the healthcare industry!\u003C/span>\u003C/p>\n\u003Cp style=\"line-height: 1.3;\">\u003Cem>\u003Cstrong>\u003Cspan style=\"font-family: arial, helvetica, sans-serif; font-size: 8pt;\">Job Offers: Arine uses the arine.io domain and email addresses for all official communications. 
If you received communication from any other domain, please consider it spam. \u003C/span>\u003C/strong>\u003C/em>\u003C/p>\n\u003Cp style=\"line-height: 1.3;\">\u003Cem>\u003Cstrong>\u003Cspan style=\"font-family: arial, helvetica, sans-serif; font-size: 8pt;\">Note to Recruitment Agencies: We appreciate your interest in finding talent for Arine, but please be advised that we do not accept unsolicited resumes from recruitment agencies. All resumes submitted to Arine without a prior written agreement in place will be considered property of Arine, and no fee will be paid in the event of a hire. Thank you for your understanding.\u003C/span>\u003C/strong>\u003C/em>\u003C/p>\u003C/div>","https://job-boards.greenhouse.io/arine/jobs/5645161004","Arine",{"id":694,"name":692,"urlSafeSlug":692,"logo":695},"1076c564-1f2e-40d0-8983-19e249e22cfe","ggceq0tw5pnlncrgxzu5",[697],{"city":17,"region":17,"country":16},"2025-09-09T07:17:31.477Z","Candidates should have experience building and maintaining dbt data models within a medallion architecture, transforming staging data into intermediate and mart layers. Proficiency in writing production-quality SQL transformations in Snowflake and implementing data validation and testing using dbt tests is required. Experience with BI tools like QuickSight and a background in healthcare technology are preferred.","The Analytics Engineer will build and maintain dbt data models, transforming staging data for business consumption. They will develop dashboards and reports using QuickSight and other BI tools to present healthcare data. This role involves collaborating with product teams and Customer Solutions Architects to understand requirements and translate business needs into data model specifications. 
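The medallion layering described in the summary (staging into intermediate and mart layers) can be sketched in plain Python. This is purely illustrative: in practice each layer would be a dbt model materialized in Snowflake, and the sample records, function names, and business rules below are hypothetical, not Arine's actual models.

```python
"""Illustrative medallion-style layering: staging -> intermediate -> mart."""

RAW = [
    {"claim_id": "C1", "drug": " atorvastatin ", "paid": "12.50"},
    {"claim_id": "C2", "drug": "METFORMIN", "paid": "8.00"},
    {"claim_id": "C2", "drug": "METFORMIN", "paid": "8.00"},  # duplicated feed row
]

def staging(rows):
    """Staging layer: light cleanup only (trim, normalize case, cast types)."""
    return [
        {
            "claim_id": r["claim_id"],
            "drug": r["drug"].strip().lower(),
            "paid": float(r["paid"]),
        }
        for r in rows
    ]

def intermediate(rows):
    """Intermediate layer: apply business rules, e.g. de-duplicate on claim_id."""
    seen, out = set(), []
    for r in rows:
        if r["claim_id"] not in seen:
            seen.add(r["claim_id"])
            out.append(r)
    return out

def mart(rows):
    """Mart layer: aggregate for consumption, e.g. total spend per drug."""
    totals = {}
    for r in rows:
        totals[r["drug"]] = totals.get(r["drug"], 0.0) + r["paid"]
    return totals
```

Running `mart(intermediate(staging(RAW)))` collapses the duplicate claim and yields per-drug totals, mirroring how a mart table serves downstream dashboards.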
Additionally, the Analytics Engineer will write SQL transformations in Snowflake and implement data validation and testing.

---

# Staff Data Engineer

## About Life360

Life360's mission is to keep people close to the ones they love. Our category-leading mobile app and Tile tracking devices empower members to protect the people, pets, and things they care about most, with a range of services including location sharing, safe driver reports, and crash detection with emergency dispatch. Life360 serves approximately 83.7 million monthly active users (MAU) as of May 2025, across more than 170 countries.

Life360 delivers peace of mind and enhances everyday family life with seamless coordination for all the moments that matter, big and small.
By continuing to innovate and deliver for our customers, we have become a household name and the must-have mobile-based membership for families (and those friends that basically are family).

Life360 has more than 500 (and growing!) remote-first employees. For more information, please visit life360.com.

Life360 is a Remote First company, which means a remote work environment will be the primary experience for all employees. All positions, unless otherwise specified, can be performed remotely (within the US) regardless of any specified location above.

## About The Team

The Data Platform team's purpose is to design, build, and maintain scalable and efficient data infrastructure that empowers Life360 teams to make data-driven decisions. We transform raw data into reliable, accessible, and actionable insights, ensuring data quality, compliance, security, cost, and performance at every step. By leveraging innovative technologies and best practices, we enable Product, Analytics, and Partners to unlock the full potential of data, driving operational excellence and strategic growth.

### Compensation

* **US Salary Range:** $166,500 - $245,000 USD
* **Canada Salary Range:** 195,500 - 230,000 CAD (Note: Job title will be "Developer" in Canada)

*Note: Base pay offered may vary considerably depending on geographic location, job-related knowledge, skills, and experience. The compensation package includes a wide range of medical, dental, vision, financial, and other benefits, as well as equity.*

## About The Job

At Life360, we collect a lot of data: 60 billion unique location points, 12 billion user actions, and 8 billion miles driven every single month, and so much more. As a Staff Data Engineer, you will contribute to enhancing and modernizing our data platform infrastructure towards a robust and secure data system.
You should have a strong engineering background and, even more importantly, a desire to take ownership of our data systems to make them world-class.

## What You'll Do

Primary responsibilities include, but are not limited to:

* Design, implement, and manage scalable data processing platforms used for real-time analytics and exploratory data analysis.
* Build and manage our core data infrastructure from ingestion through ETL to storage and batch and real-time processing, utilizing the latest tools and tech in the industry.
* Automate, test, and harden all data workflows.
* Architect logical and physical data models to ensure the needs of the business are met.
* Collaborate with multiple stakeholders, such as the mobile app, cloud, product, data science, and analytics teams, in recommending and applying best practices.
* Architect and develop systems and algorithms for distributed real-time analytics and data processing.
* Implement strategies for acquiring data to develop new insights.
* Mentor junior engineers, imparting best practices and institutionalizing efficient processes to foster growth and innovation within the team.

## What We're Looking For

Experiences required for success in this role include:

* A minimum of 5 years of experience working with high-volume data infrastructure.
* Experience with Databricks, AWS, ETL, and job orchestration.

Apply: https://job-boards.greenhouse.io/life360/jobs/8092933002 (listed 2025-07-26)

**Requirements summary:** Candidates must possess a minimum of 5 years of experience working with high-volume data infrastructure and have experience with Databricks, AWS, ETL, and job orchestration.

**Responsibilities summary:** Responsibilities include designing, implementing, and managing scalable data processing platforms for real-time and exploratory analytics.
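The "automate, test, and harden all data workflows" responsibility listed above can be sketched in plain Python. This is illustrative only: the record shape and function names are hypothetical and not from Life360's actual stack, which the posting says runs on Databricks and AWS.

```python
"""Illustrative hardened batch step: parse raw rows, reject malformed
records before they reach downstream storage, and report the reject count."""

from dataclasses import dataclass

@dataclass
class LocationPoint:
    user_id: str
    lat: float
    lon: float
    ts: int  # epoch seconds

def validate(point: LocationPoint) -> bool:
    """Reject records with empty ids, out-of-range coordinates, or bad timestamps."""
    return (
        bool(point.user_id)
        and -90.0 <= point.lat <= 90.0
        and -180.0 <= point.lon <= 180.0
        and point.ts > 0
    )

def run_batch(raw_rows: list) -> tuple:
    """Parse raw dict rows, keep valid points, and count rejects."""
    good, rejected = [], 0
    for row in raw_rows:
        try:
            point = LocationPoint(
                user_id=str(row["user_id"]),
                lat=float(row["lat"]),
                lon=float(row["lon"]),
                ts=int(row["ts"]),
            )
        except (KeyError, TypeError, ValueError):
            rejected += 1  # malformed row: missing key or uncastable value
            continue
        if validate(point):
            good.append(point)
        else:
            rejected += 1
    return good, rejected
```

Counting rejects instead of silently dropping them is what makes a workflow like this observable: the reject rate can be emitted as a metric and alerted on.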
The role involves building and managing core data infrastructure from ingestion through ETL to storage and processing, automating and testing data workflows, and architecting data models to meet business needs. Collaboration with various stakeholders, development of systems for distributed real-time analytics, implementation of data acquisition strategies, and mentoring junior engineers are also key duties.

---

# Data Engineer

## About the Role

As a Data Engineer on the Data Analytics team at Human Interest, you will play a pivotal role in building and maintaining robust, reliable, and scalable data infrastructure. This position is critical for ensuring our data infrastructure keeps pace with our analytical needs, especially as Human Interest scales to meet the demands of a rapidly growing company.
You will contribute to improving our data foundation and future-proofing our capabilities for advanced analytics and analytics self-service.

## About the Data Analytics Team

The Data Analytics team currently consists of Data Analysts focused on analytics engineering, complex data analysis, and data science. This role will be embedded directly within the Data Analytics team, with lines of mentorship to the decentralized data engineers who helped build the analytics infrastructure to where it is today. You will have the opportunity to mentor analysts who want to work further upstream in the data stack, and to learn from their domain expertise, contributing to a collaborative and growth-oriented environment positioned to have great business impact.

## What You Get To Do Everyday

- Build and optimize data models in dbt Core to create reliable, efficient, and accessible data for downstream reporting and analysis, with a strong understanding of end-user needs.
- Design, develop, and maintain scalable data ingestion and orchestration using Meltano, Snowpipe, Airflow, and other tools.
- Manage and automate data infrastructure in AWS using Terraform.
- Collaborate with Data Analysts and Software Engineers to clarify data requirements and translate them into effective data engineering solutions.
- Proactively identify and implement improvements in data orchestration, cost/performance management, and security within Snowflake.
- Develop new data ingestion pipelines from various source systems into Snowflake, including full-stack development for brand-new pipelines from ingestion to data modeling of core user-facing tables.
- Implement efficient testing within dbt to detect system changes and ensure data quality, contributing to the operational health of the data platform.

## What You Bring to the Role

### Base Qualifications

- **3+ years experience** as a Data Engineer with a strong focus on data pipeline development and data warehousing, consistently
delivering high-quality work on a timely basis.
- **Strong hands-on experience** with data modeling, knowledgeable about general design patterns and architectural approaches.
- **Hands-on experience** with cloud data warehouses.
- **Strong Python and SQL skills** and experience with data manipulation and analysis, capable of quickly absorbing and synthesizing complex information.
- **Experience** with data ingestion tools and ETL/ELT processes.
- **Experience** with Airflow.
- A **proactive mindset**, keeping an eye out for areas where our data infrastructure can improve.
- Ability to **independently define projects** and clarify requirements, drawing on mentorship when weighing solutions for complex projects.
- **Excellent problem-solving skills** and attention to detail, with a high-level understanding of how downstream users leverage data.

### Nice to Have

- Experience with **Terraform** or other infrastructure-as-code tools
- Understanding of **data security and governance** best practices and techniques
- Experience with **dbt**
- Experience with **Snowflake**
- Experience with **Meltano**
- Experience curating data and data pipelines for the consumption of **large language models**

## Why You Will Love Working at Human Interest

Human Interest is tackling one of our country's biggest challenges - closing the retirement gap. You'll be instrumental in architecting and scaling solutions that bring financial security to employees at small and medium-sized businesses nationwide. We've made significant progress, but there is still growth ahead, offering you a unique opportunity to solve complex problems, drive innovation, and advance your career alongside a dedicated, mission-driven team.
We value hard work and recognize that our team's contribution

Apply: https://job-boards.greenhouse.io/humaninterest/jobs/7141258 (listed 2025-08-16)

**Requirements summary:** Candidates should have 3+ years of experience as a Data Engineer with a strong focus on data pipeline development and data warehousing, possessing hands-on experience with data modeling, cloud data warehouses, Python, SQL, data ingestion tools, ETL/ELT processes, and Airflow. Experience with Terraform, dbt, Snowflake, and Meltano is considered a plus.

**Responsibilities summary:** The Data Engineer will build and optimize data models using dbt Core, design and develop scalable data ingestion and orchestration using Meltano, Snowpipe, and Airflow, and manage data infrastructure in AWS using Terraform. They will collaborate with analysts and engineers to define data requirements, implement improvements in data orchestration, cost/performance management, and security in Snowflake, and develop new data ingestion pipelines from various source systems into Snowflake. Additionally, they will implement efficient testing within dbt to ensure data quality and contribute to the operational health of the data platform.