Data Engineer at ShyftLabs

Noida, Uttar Pradesh, India

Compensation: Not Specified
Experience Level: Senior (5 to 8 years)
Job Type: Full Time
Visa: Unknown
Industries: N/A

Requirements

  • 3-5+ years of hands-on experience in a Data Engineering, Software Engineering, or similar role
  • Strong proficiency in a programming language such as Python or Java for data processing and automation
  • Expert SQL proficiency: Mastery of SQL for complex data manipulation, DDL/DML operations, and query optimization
  • Proven expertise in using Google BigQuery as a data warehouse, including data modeling, performance tuning, and cost management (a brief Python/BigQuery sketch follows this list)
  • Hands-on experience building data pipelines using the GCP ecosystem (e.g., Dataflow, Pub/Sub, Cloud Storage, Cloud Composer/Airflow)
  • Deep understanding of ETL/ELT principles and data warehousing architecture (e.g., Star Schema, Data Lakes)
  • Strong problem-solving and troubleshooting skills with a focus on building scalable, maintainable, and automated systems

Preferred Qualifications

  • Experience building data models that power BI tools like Looker (knowledge of LookML is a strong plus), Tableau, or Power BI
  • Experience with tools like dbt, Dataform, or Fivetran for data transformation and integration
  • Familiarity with tools like Terraform or Deployment Manager for managing cloud infrastructure
  • Knowledge of Docker and Kubernetes
  • Google Cloud Professional Data Engineer certification
  • Proficiency with Git
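
To make the SQL, Python, and BigQuery expectations above concrete, the sketch below shows a parameterized transformation query run through the google-cloud-bigquery client and written to a curated table. It is a minimal illustration rather than part of the job description: the project, dataset, table, and column names are hypothetical placeholders.

    from google.cloud import bigquery

    # Hypothetical project ID; authentication is assumed to be configured.
    client = bigquery.Client(project="example-project")

    # Transform raw order events into an analysis-ready daily revenue table.
    query = """
        SELECT
            customer_id,
            DATE(order_ts) AS order_date,
            SUM(order_total) AS daily_revenue
        FROM `example-project.raw.orders`      -- hypothetical source table
        WHERE DATE(order_ts) >= @start_date
        GROUP BY customer_id, order_date
    """

    job_config = bigquery.QueryJobConfig(
        query_parameters=[
            bigquery.ScalarQueryParameter("start_date", "DATE", "2024-01-01"),
        ],
        destination=bigquery.TableReference.from_string(
            "example-project.curated.daily_revenue"
        ),
        write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
    )

    rows = client.query(query, job_config=job_config).result()
    print(f"Wrote {rows.total_rows} rows to curated.daily_revenue")

In practice a query like this would typically run as a scheduled task in Cloud Composer/Airflow rather than as a standalone script.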

Responsibilities

  • Design, build, and maintain scalable and reliable batch and real-time ETL/ELT data pipelines using GCP services like Dataflow, Cloud Functions, Pub/Sub, and Cloud Composer (see the streaming pipeline sketch after this list)
  • Develop and manage the central data warehouse in Google BigQuery. Implement data models, schemas, and table structures optimized for performance and scalability
  • Write clean, efficient, and robust code (primarily in SQL and Python) to transform raw data into curated, analysis-ready datasets
  • Monitor, troubleshoot, and optimize data infrastructure for performance, reliability, and cost-effectiveness. Implement BigQuery best practices, including partitioning, clustering, and materialized views (see the DDL sketch after this list)
  • Build and maintain curated data models that serve as the "source of truth" for business intelligence and reporting, ensuring data is ready for consumption by BI tools like Looker
  • Implement automated data quality checks, validation rules, and monitoring to ensure the accuracy and integrity of the data pipelines and warehouse (see the quality-check sketch after this list)
  • Work closely with software engineers, data analysts, and data scientists to understand their data requirements and provide the necessary infrastructure and data products
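
The first responsibility mentions real-time pipelines; below is a minimal Apache Beam (Dataflow) streaming sketch that reads JSON events from Pub/Sub and appends them to a raw BigQuery table. The topic, table, schema, and field names are hypothetical placeholders, and apache-beam[gcp] is assumed to be installed.

    import json

    import apache_beam as beam
    from apache_beam.io.gcp.bigquery import BigQueryDisposition, WriteToBigQuery
    from apache_beam.io.gcp.pubsub import ReadFromPubSub
    from apache_beam.options.pipeline_options import PipelineOptions

    # streaming=True marks this as an unbounded (real-time) pipeline.
    options = PipelineOptions(streaming=True)

    with beam.Pipeline(options=options) as pipeline:
        (
            pipeline
            | "ReadEvents" >> ReadFromPubSub(topic="projects/example-project/topics/orders")
            | "ParseJson" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
            | "WriteRaw" >> WriteToBigQuery(
                table="example-project:raw.orders",
                schema="customer_id:STRING,order_total:FLOAT,order_ts:TIMESTAMP",
                write_disposition=BigQueryDisposition.WRITE_APPEND,
                create_disposition=BigQueryDisposition.CREATE_IF_NEEDED,
            )
        )

Run as-is this uses the local DirectRunner; pointing the same pipeline at the DataflowRunner (with project, region, and temp-location options) turns it into a managed Dataflow job.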
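
The BigQuery best practices called out above (partitioning, clustering, materialized views) can be sketched as DDL run through the same Python client; again, every name here is a hypothetical placeholder.

    from google.cloud import bigquery

    client = bigquery.Client(project="example-project")  # hypothetical project ID

    ddl = """
    -- Partition by event date and cluster by customer to prune scans and cut cost.
    CREATE TABLE IF NOT EXISTS `example-project.curated.orders`
    PARTITION BY DATE(order_ts)
    CLUSTER BY customer_id
    AS SELECT * FROM `example-project.raw.orders`;

    -- Precompute a common aggregate so BI queries stay cheap and fast.
    CREATE MATERIALIZED VIEW IF NOT EXISTS `example-project.curated.daily_revenue_mv` AS
    SELECT DATE(order_ts) AS order_date, SUM(order_total) AS daily_revenue
    FROM `example-project.curated.orders`
    GROUP BY order_date;
    """

    # BigQuery scripting lets both statements run as a single query job.
    client.query(ddl).result()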
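
Finally, a minimal sketch of the automated data quality checks described above: each check is a SQL assertion that counts offending rows, and the run fails loudly if any check is not clean, so an orchestrator such as Cloud Composer can alert on it. Table and column names are hypothetical placeholders.

    from google.cloud import bigquery

    client = bigquery.Client(project="example-project")  # hypothetical project ID

    # Each check should return zero bad_rows when the data is healthy.
    CHECKS = {
        "null_customer_ids": """
            SELECT COUNT(*) AS bad_rows
            FROM `example-project.curated.daily_revenue`
            WHERE customer_id IS NULL
        """,
        "duplicate_keys": """
            SELECT COUNT(*) AS bad_rows FROM (
                SELECT customer_id, order_date
                FROM `example-project.curated.daily_revenue`
                GROUP BY customer_id, order_date
                HAVING COUNT(*) > 1
            )
        """,
    }

    def run_quality_checks() -> None:
        failures = []
        for name, sql in CHECKS.items():
            bad_rows = next(iter(client.query(sql).result())).bad_rows
            if bad_rows:
                failures.append(f"{name}: {bad_rows} offending rows")
        if failures:
            raise RuntimeError("Data quality checks failed: " + "; ".join(failures))

    if __name__ == "__main__":
        run_quality_checks()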

Skills

Key technologies and capabilities for this role

GCP, BigQuery, Dataflow, Cloud Functions, Pub/Sub, Cloud Composer, SQL, Python, ETL, ELT

Questions & Answers

Common questions about this position

What are the key required skills for this Data Engineer role?

The role requires 3-5+ years of hands-on experience in Data Engineering or similar, strong proficiency in Python or Java, expert SQL proficiency for complex data manipulation and optimization, proven expertise in Google BigQuery including data modeling and performance tuning, and hands-on experience with GCP data services like Dataflow, Pub/Sub, and Cloud Composer.

What is the salary or compensation for this position?

This information is not specified in the job description.

Is this Data Engineer position remote or does it require office work?

This information is not specified in the job description.

What does the company culture or team environment look like at ShyftLabs?

ShyftLabs is a growing data product company founded in early 2020 that works primarily with Fortune 500 companies, focusing on innovation to accelerate business growth; the role involves close collaboration with software engineers, data analysts, and data scientists.

What makes a strong candidate for this Data Engineer position?

A strong candidate will have 3-5+ years of hands-on Data Engineering experience, expert proficiency in SQL and Python/Java, proven expertise in Google BigQuery for data warehousing and performance tuning, and hands-on experience building scalable data pipelines with GCP services.

ShyftLabs

Data-driven decision-making solutions for organizations

About ShyftLabs

ShyftLabs helps organizations adopt a data-first approach to their decision-making processes. Their services focus on establishing systems that enable companies to make quicker and more informed decisions based on data analysis. This approach allows businesses to gain insights that can keep them ahead of their competitors. Unlike other companies that may offer generic consulting services, ShyftLabs emphasizes the importance of data in driving decisions, ensuring that organizations can leverage their data effectively to enhance their strategic planning and operational efficiency.

Headquarters: Canada
Year Founded: 2018
Company Stage: Venture (stage unknown)
Industries: Data & Analytics, Consulting
Employees: 11-50

Benefits

Health Insurance
Hybrid Work Options
Professional Development Budget

Risks

Increased competition from startups offering innovative, cost-effective solutions.
Growing demand for in-house analytics teams reducing reliance on consultants.
Rapid AI advancements may outpace ShyftLabs' current technology offerings.

Differentiation

ShyftLabs specializes in data governance, warehousing, and predictive analysis services.
The firm empowers organizations with a data-first approach for decision-making.
ShyftLabs establishes processes for faster, insightful decisions to outpace competition.

Upsides

Increased demand for data governance due to stricter privacy regulations.
Growing interest in predictive analytics in retail for inventory optimization.
Rising adoption of cloud-based BI tools among SMEs for cost-effectiveness.
