We’re a Barcelona-based startup and the fastest-growing delivery player in Europe, Hispanic America and Africa. With food at the core of the business, Glovo delivers any product within your city at any time of day. We currently deliver over 100M annual orders and operate in 22 countries and more than 300 cities.
Our vision and ambition are not only to make everything immediately available in your city, but also to offer our employees the job of their lives: a job where you'll be challenged and have the most fun working, through tech-enabled experiences.
Your work-life opportunity:
We are looking for a talented and passionate Data Engineer to join the Data Engineering team in our Barcelona HQ.
Glovo has a culture of data-driven decision-making and demands data that is timely, accurate, and actionable. We grow really fast, collecting terabytes of data from dozens of data sources and providing interfaces for our internal customers to access and query the data hundreds of thousands of times per day.
As a Data Engineer you will build and constantly improve Glovo’s reliable and scalable Big Data Platform using technologies like Amazon Web Services (AWS), Spark, Python and many more. Your work will have an immediate influence on the shape of the data consumed by teams across Glovo, including Central BI, Data Scientists and Business Analysts.
Your depth of experience and past achievements speak for themselves: you have helped deliver platform-wide data projects with significant impact. You have developed complex, scalable and well-designed data pipelines and defined engineering standards that have helped different data teams work more efficiently, providing mentoring and technical leadership where necessary. You will work independently at times and within a team at others to achieve a given goal. Others respect you, and you are a sought-after Data Engineer because of your expertise and depth of knowledge.
Be a part of a team where you will:
Design, implement and keep improving the Glovo Data Platform
Build scalable data pipelines using different technologies
Participate in the development of the Data Lake, the Data Warehouse, different methods of data ingestion and Self-Service ETL tools, following best architectural practices.
Mentor & share technical expertise with data engineers, data scientists, BI analysts and other technology colleagues
Stay on top of new technologies and industry trends
You have:
3+ years of software/data engineering experience
2+ years of experience with Python and Spark
Professional experience building complex ETLs/data pipelines
Working experience with Amazon Web Services / Google Cloud Platform
Experience with task orchestration tools (Airflow, Luigi)
Cloud Data Warehousing experience in Redshift or another distributed platform (e.g. Hadoop + Hive/Presto, BigQuery or Snowflake)
Experience in Data Streaming (Spark, Flume, Kafka, Kinesis, Flink, etc.)
Experience importing and transforming data from many third-party APIs
Strong analytical and problem-solving skills
Very good English
Nice to have:
Experience working with AWS EMR, AWS Glue, Databricks
Experience with Docker, Kubernetes
Experience in building Data Lakes
Experience orchestrating Machine Learning pipelines