We are Kaizen Gaming
Kaizen Gaming is the leading GameTech company in Greece and one of the fastest-growing in the world, operating in 13 markets with 2 brands, Betano & Stoiximan.
We always aim to leverage cutting-edge technology, providing the best experience to our millions of customers who trust us for their entertainment.
We are a diverse team of more than 2,200 Kaizeners from 40+ nationalities, spread across 3 continents. Our #oneteam is proud to be among the Best Workplaces in Europe and certified Great Place to Work across our offices. Here, there’ll be no average day for you. Ready to press play on potential?
Let's start with the role
At Kaizen, we aim to make data-driven decisions to automate our services while offering tailored customer experiences. Our machine learning team is dedicated to this mission, building a variety of models, from binary classifiers to recommendation systems. We focus on transforming business needs into production applications, covering a wide range of business sectors, data types, and projects.
Our teams comprise three roles: data scientists, machine learning engineers, and data engineers, giving each team the full skill set to deliver projects to production.
We’re looking for passionate and ambitious Data Engineers who can find creative solutions to challenging problems. You will join a machine learning team and deliver optimized data/feature pipelines, both for individual projects and for our core feature store infrastructure.
As a Data Engineer you will
- Design, implement and operate large-scale, high-volume, high-performance data structures;
- Work closely with the data scientists to optimize data queries both for real-time and batch sources;
- Design big data architectures focused on scalability and performance for data science problems;
- Support the development of our real-time feature generation infrastructure.
What you’ll bring
Must have:
- 3+ years of experience with Spark Core;
- Hands-on experience with Spark Structured Streaming;
- 3+ years of hands-on experience in writing complex, highly-optimized SQL queries across large data sets;
- 3-5 years of experience in Python programming;
- Experience with workflow engines, e.g. Airflow;
- Strong skills in teamwork, communication, and analytical thinking;
- Knowledge of version control tools;
- Fluency in English, both oral and written.
Nice to have:
- Experience with Azure / Databricks;
- Experience with feature engineering;
- Experience with feature store design and implementation;
- Experience with Delta Lake;
- Experience with Redis;
- Experience with Kafka;
- Experience with SQL and NoSQL databases.
Recruitment Privacy Notice
Regarding the data you share with us, you may find and read our recruitment privacy notice here.