3+ years of experience in data engineering or data product operations
Proficiency in Python and SQL, particularly SQL on Snowflake and BigQuery
Experience with web scraping frameworks and data governance
Experience working with unstructured or alternative data sources
Competence in deploying solutions on Google Cloud Platform (GCP), particularly BigQuery and Cloud Functions
Experience with Snowflake for data modeling and performance tuning
Knowledge of frontend/backend development (React, APIs, Python Flask or FastAPI, databases, cloud technologies) is a plus
Skills in ETL/ELT pipeline development and automated workflows
Strong problem-solving skills and attention to detail
Expertise in detecting and handling data anomalies, data drift, and schema changes
A product mindset that treats datasets as products, with defined SLAs, roadmaps, and metrics
Project management experience using Agile, Kanban, or similar methodologies
Excellent documentation skills for technical specs, runbooks, and SOPs
Effective communication skills for collaboration between data engineering and business teams
Leadership readiness, demonstrated through mentoring, roadmap planning, and team coordination
Responsibilities
Build and manage production-grade data pipelines for large-scale unstructured and web-harvested data
Operate data products end-to-end, from raw acquisition through to quality control, delivery, and issue resolution
Establish and maintain best practices for data engineering, with a focus on data quality, security, and scalability
Stay up to date with the latest data management trends and technologies to enhance internal processes and tools
Collaborate closely with investment and technology teams, including researchers, technologists, and portfolio managers, to develop and manage data products aligned with investment needs
Contribute to the design and execution of data operations
Contribute to technical roadmap and team development, with a trajectory toward leadership