Data Engineer (Food Delivery)
Published: 2025-10-21

Coherent Solutions is a digital product engineering company focused on empowering business success. Our global team of 2,000+ talented professionals collaborates seamlessly to deliver innovative solutions that drive measurable business impact. Headquartered in Minneapolis, USA, the company’s core competencies across 10 locations worldwide include product software development, IT consulting, data and analytics, machine learning, mobile app development, DevOps, Salesforce, and ...
Job details
Our client is a leading online and mobile food ordering company operating in over 1,600 U.S. cities and London. With a portfolio of well-known food delivery brands, they serve millions of diners and process hundreds of thousands of orders daily. The company supports a large and growing restaurant network and prioritizes excellent customer experience through 24/7 support, innovative technology, and scalable infrastructure.
Project Description

This role is part of the data engineering team focused on marketing analytics and reporting infrastructure. You’ll enhance and extend a big data platform that supports marketing activities, senior-level metrics dashboards, and customer engagement systems. The role offers a unique opportunity to design and build end-to-end solutions for data storage, transformation, and visualization, ensuring that business decisions are data-driven and scalable.
Technologies

- Python
- PySpark
- SQL
- Spark
- AWS (S3, EMR)
- Git
- Jenkins
- CI/CD
- PyCharm
- Distributed Systems

Responsibilities

- Design, build, and maintain large-scale data pipelines using Spark and PySpark;
- Develop robust ETL processes to support automated analytics, performance monitoring, and campaign reporting;
- Collaborate with marketing and analytics stakeholders to translate business needs into technical requirements;
- Write unit tests and leverage CI/CD tools to ensure high code quality and performance;
- Support and improve existing data infrastructure while planning for long-term scalability;
- Ensure efficient data processing using distributed systems and cloud-native services;
- Contribute to architectural decisions and recommend best practices for data handling, modeling, and storage.

Requirements

- 6+ years of experience in Spark and SQL development;
- Solid Python programming experience with emphasis on data engineering;
- Experience with PySpark in distributed environments;
- Familiarity with cloud services (especially AWS S3 and EMR);
- Knowledge of ETL, data modeling, and performance optimization;
- Understanding of software engineering best practices (e.g., unit testing, CI/CD, Git);
- Strong communication skills and ability to explain technical concepts clearly;
- Attention to detail and a strong sense of ownership;
- English: B2 or higher.
The global benefits package includes:
- Technical and non-technical training for professional and personal growth;
- Internal conferences and meetups to learn from industry experts;
- Support and mentorship from an experienced colleague to help with your professional growth and development;
- Internal startup incubator;
- Health insurance;
- English courses;
- Sports activities to promote a healthy lifestyle;
- Flexible work options, including remote and hybrid opportunities;
- Referral program for bringing in new talent;
- Work anniversary program and additional vacation days.