Senior Cloud Data Infrastructure Engineer
ClickHouse
Full Time
Senior (5 to 8 years), Expert & Leadership (9+ years)
Requirements:
- Hands-on experience with AWS, Azure, GCP, or OCI APIs, and a strong understanding of multi-region cloud architecture, scalability patterns, and infrastructure-level security enforcement.
- Proficiency in Terraform, Packer, or Chef, along with experience designing and maintaining GitOps-based CI/CD pipelines using Jenkins, ArgoCD, or Tekton.
- Experience building and operating containerized services with Docker, Kubernetes, ECS, EKS, or Rancher, including familiarity with deployment strategies, service discovery, and multi-cluster environments.
- Ability to write production-grade infrastructure automation in Python, Bash, or similar, and to build tools that interact with REST APIs, system telemetry, and structured data formats such as JSON and XML.
- Proven experience operating mission-critical systems in production: leading incident response, implementing SLOs, and contributing to platform stability and service health.
Responsibilities:
- Design and deliver automation for packaging, testing, deploying, and operating infrastructure globally, building systems that enforce lifecycle patterns and eliminate manual workflows.
- Deploy and manage infrastructure across major cloud providers, architecting for scale, fault tolerance, and operational simplicity while balancing cost, performance, and security.
- Deliver reusable infrastructure using Infrastructure-as-Code tools and architect CI/CD pipelines with GitOps workflows for rapid, safe delivery.
- Design and deploy containerized applications, own service lifecycle patterns, and embed security into platform automation through Infrastructure-as-Code and policy-as-code.
- Leverage AI models and agent-based systems to automate support, detect anomalies, and reduce human effort in operations.
- Build and operate system observability with tools such as ELK and Prometheus, integrating telemetry with signal-aware automation.
- Build tools and frameworks that improve developer experience and infrastructure safety, and maintain high-quality documentation for all infrastructure systems and workflows.
- Partner with other teams to deliver shared tooling, drive adoption of standards, and scale infrastructure patterns across the organization.
Online platform connecting freelancers and clients
Upwork connects freelancers with clients seeking services in the gig economy, where short-term contracts take the place of permanent employment. Freelancers create profiles that showcase their skills, while clients post job listings for specific projects. Freelancers bid on these projects, and clients choose candidates based on proposals and reviews. Upwork earns revenue through service fees charged to freelancers on their earnings, with a tiered structure that rewards long-term client relationships. The platform also offers premium memberships and additional services for enhanced visibility and access to job listings, and it provides tools for time tracking, invoicing, and project management, making it easier for both freelancers and clients to manage their work and payments. Upwork's goal is to facilitate successful project completion by bridging the gap between freelancers and clients.