We're all about connecting hungry diners with our network of over 300,000 restaurants nationwide. User-friendly platforms and streamlined delivery capabilities set us apart in the world of online food ordering. Grubhub is a place where an authentically fun culture meets innovation and teamwork. We believe in empowering people and opening doors to new opportunities. If you're looking for a place that values relationships and embraces diverse ideas, all while having fun together, then Grubhub is the place for you!
The Opportunity
At Grubhub, we treat Data as a Product: a valuable asset and a critical part of our decision-making process. The mission of the Grubhub Data Platform team is to democratize data and provide a cost-efficient, observable, and reliable data platform that empowers teams to move quickly while safely using data to generate actionable insights and product features.
We are looking for a data-focused Senior Engineer to join our Data Platform team and provide ownership, technical expertise, and leadership around core data systems, from data services to the processing frameworks used to build domain-based data products. You will partner with stakeholders to standardize metrics for use across reporting, experimentation, and ad-hoc analysis. This is an opportunity to use the cutting-edge big data capabilities of Grubhub's Data Platform and to collaborate with our partner teams on innovative data solutions, helping Grubhub derive value from data at a rapid pace.
The Impact You Will Make
- You'll write performant and concise code that meets the defined standards here at Grubhub, review peers' code, and ensure the security and scalability of the features you work on.
- Follow engineering best practices to deliver high-quality ETL code and tests, using CI/CD automated testing, data validations, and data observability to ensure high-quality data.
- Work with product feature teams, enabling them to create data pipelines and publish golden datasets.
- Educate teams on and promote best practices around data engineering, data architecture, and new technologies.
- Coach and mentor engineers across the engineering team to build a data culture.
What You Bring to the Table
- Bachelor's degree in a science, programming, or engineering-related field.
- You have 5+ years of engineering experience, particularly in distributed systems, software engineering, and data engineering.
- You have experience in a cloud environment such as AWS, including running, scaling, and deploying data pipelines and microservices.
- Strong computer science fundamentals in data structures and algorithms.
- You have experience with data serialization and table formats such as Avro, Parquet, and Iceberg/Hudi/Delta.
- You have experience with Python or Java/Scala, and experience working on large distributed systems and frameworks like Spark or Flink with high uptime requirements (e.g., four nines of uptime).
- You are passionate about event-driven architecture, microservices, data reliability, observability, and data privacy.
- Proven experience with rapid product development in an agile environment is preferred.
- You are pragmatic without compromising on software quality and standards.
Got These? Even Better
- Experience building data pipelines that process high-volume clickstream data to support product analytics.
- Familiarity with A/B testing concepts and experience building pipelines to automate analysis of experiment results.
- Exposure to metric/semantic layer concepts, related tooling (dbt, Cube), and productionized solutions.