Our high-performing Data Engineering team plays a critical role in enabling intelligent, data-driven operations across our food delivery platform. From optimizing logistics and delivery efficiency to enhancing the customer experience, our data systems are central to our business success. As we continue to expand and take on more complex challenges, we are seeking a Data Engineering Team Lead to provide technical leadership, guide a skilled team of engineers, and contribute to the design and evolution of our scalable data infrastructure. If you are a strategic thinker with strong technical expertise and a passion for leading high-impact teams, we invite you to join us in shaping the future of data in online food delivery.
Responsibilities:
- Provide technical and strategic leadership to the data engineering team, fostering a high-performance culture rooted in collaboration, accountability, and continuous learning.
- Lead the design, implementation, and maintenance of scalable and efficient batch and streaming data pipelines to support the organization’s data ingestion, transformation, and analytics needs.
- Architect and evolve the organization’s data infrastructure, ensuring its reliability, scalability, security, and cost-effectiveness across containerized environments.
- Develop and manage robust ETL/ELT workflows that integrate data from various internal and external sources, including event-based systems and change data capture (CDC) streams, while ensuring data accuracy, consistency, and lineage.
- Collaborate closely with data science, analytics, product, and engineering teams to understand business requirements and deliver data solutions that drive measurable value.
- Establish and promote best practices in data governance, quality assurance, metadata management, and access control, ensuring compliance with internal standards and external regulations.
- Monitor and troubleshoot data systems to ensure high availability, performance, and observability, proactively addressing issues and optimizing system behavior.
- Stay informed about emerging technologies and methodologies in the data engineering domain, assessing their potential applicability and contributing to the long-term data strategy.
- Manage the execution of data engineering projects, including task prioritization, resource planning, and stakeholder communication, ensuring timely and effective delivery of outcomes.
Requirements:
- Minimum of 5 years of hands-on experience in data engineering, including at least 2 years in a leadership or team lead capacity.
- Bachelor’s degree in Computer Science, Software Engineering, or a related technical field.
- Proven proficiency in big data processing frameworks (e.g., Apache Spark), real-time data streaming platforms (e.g., Apache Kafka), and distributed query engines (e.g., Trino).
- Strong experience with data transformation and modeling tools such as dbt or equivalent.
- Solid understanding of relational databases (e.g., MySQL, PostgreSQL), NoSQL databases (e.g., MongoDB), and OLAP systems (e.g., StarRocks, ClickHouse).
- Practical experience in building and orchestrating data workflows using tools like Apache Airflow.
- Expertise in real-time stream processing technologies, such as Spark Structured Streaming and Kafka Streams.
- Familiarity with modern data lakehouse architectures and table formats (e.g., Apache Iceberg).
- Proficiency in programming and scripting languages, particularly Python and Bash.
- Experience with monitoring, logging, and observability stacks such as Prometheus, Grafana, and the ELK stack.
- Working knowledge of CI/CD practices and tools (e.g., GitLab CI, ArgoCD) for automating data deployments and infrastructure updates.
- Strong background in containerization and orchestration technologies, including Docker and Kubernetes.
- Demonstrated ability to work with large-scale, complex datasets and to design and optimize high-performance data pipelines.
- Excellent communication, collaboration, and stakeholder management skills.
- Strong analytical and problem-solving capabilities with a proactive, solution-oriented mindset.
Preferred Qualifications:
- Experience in building and maintaining machine learning data pipelines and supporting end-to-end ML workflows in collaboration with data science teams.
- Familiarity with modern data visualization and BI tools such as Apache Superset, Tableau, or Microsoft Power BI for enabling self-service analytics.
- Knowledge of data governance principles, data quality management frameworks, and best practices for ensuring data lineage, traceability, and compliance.