Staff Big Data Engineer
Full Time
Expert & Leadership (9+ years)
Candidates should have at least three years of data engineering experience, including exposure to on-premises systems such as Spark, Hadoop, and HDFS. A strong grasp of engineering best practices, proficiency in Python or Java/Scala, and fluency in SQL across multiple database and query dialects are essential. Experience with data pipeline tools such as Airflow, Kafka, Spark, and Hive, along with working knowledge of CI/CD processes and software containerization, is required. Desirable qualifications include architectural and system design experience, technical ownership, and involvement in data governance, data lineage, and data quality initiatives.
The Staff Big Data Engineer will design and build scalable, robust data pipelines using tools such as Airflow, Spark, and Kafka, and will implement monitoring and alerting to safeguard data quality. The role supports data governance and lineage initiatives by designing and implementing solutions for data tracking and management. It also involves collaborating with peers to evolve the shared data platform for a range of use cases and improving system reliability, maintainability, and performance in pursuit of operational excellence.
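As context for candidates less familiar with orchestration tools: the pipeline work described above revolves around running tasks in dependency order, the core idea behind Airflow DAGs. A minimal sketch of that idea, using only Python's standard library (the task names here are hypothetical, not part of this role):

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline: each task maps to the set of tasks it depends on,
# mirroring the DAG dependency model used by orchestrators like Airflow.
pipeline = {
    "extract": set(),
    "validate": {"extract"},
    "transform": {"validate"},
    "load": {"transform"},
    "alert_on_quality": {"validate"},  # data-quality alerting branches off validation
}

def run_order(dag):
    """Return one valid execution order for the pipeline tasks."""
    return list(TopologicalSorter(dag).static_order())

order = run_order(pipeline)
# Extraction must precede validation, which must precede loading.
assert order.index("extract") < order.index("validate") < order.index("load")
```

Real orchestrators add scheduling, retries, and monitoring on top of this ordering, but the dependency graph is the foundation.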
Operates Wikipedia and free knowledge projects
The Wikimedia Foundation operates Wikipedia and other free knowledge projects, working toward a world in which everyone can freely access and share knowledge. It provides a platform where users can read, contribute, and share content, and it supports the volunteer communities that maintain these projects. As a nonprofit, the foundation is funded by donations from individuals and institutions. Unlike many other organizations, it makes knowledge accessible to all without charge and advocates for policies that support free knowledge initiatives. Its goal is to empower individuals to contribute to and benefit from a collective pool of knowledge.