Our Data team is growing! During the last year, the Data team has grown significantly and has been a key driver of our business with products like Minerva – and we still have a lot more to do!
We’re on the lookout for a Data Engineer who loves working with the latest technologies and has a passion for programming and maths! Could this be you?
At FACEIT we work to provide gamers worldwide with great competitive experiences for their favourite video games. We develop unique products for players, partners and game developers centred around establishing and building competitive communities and ecosystems for multiplayer video games. Our online platform hosts over 30 million matches a month, and we are proud to work closely with multiple gaming publishers and developers to keep building the next generation of competitive experiences.
About the job:
As a Data Engineer, you will build and maintain data streaming solutions and data pipelines, help implement AI and ML infrastructure and production code to solve problems such as cheat identification, matchmaking and detection of abusive behaviour, and contribute to creating a best-in-class data engineering function, supporting best practices for data and improving team effectiveness.
- Create and maintain data streaming processes
- Create and maintain AI and ML infrastructures on GCP that our data scientists can leverage to train and host ML models
- Create and maintain an optimal data pipeline architecture required for optimal extraction, transformation, and loading of data from a wide variety of data sources
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Work with data and analytics experts to strive for greater functionality in our data systems
- Work closely with analysts, data scientists and technology partners to understand their engineering requirements
About you:
- Advanced working knowledge of SQL
- Experience working with relational and non-relational databases
- A deep understanding of both object-oriented and scripting languages (Java, Golang, Python, etc.)
- Strong analytical skills for working with unstructured datasets
- Experience with microservice architecture, version control (Git) and Continuous Integration
- Experience with stream-processing systems for Big Data: Apache Beam, Google Dataflow
- Extensive experience building and optimising 'big data' pipelines, architectures and data sets
- Exposure to the following software/tools:
  - SQL and NoSQL databases, including Postgres and MongoDB
  - Graph technologies, including Neo4j
  - Data pipeline and workflow management tools: Airflow, Luigi, etc.
  - Standard data science libraries (pandas, NumPy, TensorFlow, PyTorch)
  - Google Cloud services: BigQuery, Dataflow, Cloud AI, Cloud Run, etc.
- Experience working in an Agile environment
Bonus skills:
- A passion for learning new, cutting-edge technologies and finding applicable business cases for them
- A passion for gaming
What we can offer
- Work with the best tech available
- Fully remote
- Company book club and gaming nights
- Flexible working environment
- Company (virtual) drinks session every Friday
- Monthly massages
- Quarterly team outings
- Ongoing training opportunities: your professional growth is important to us