*** Please note that a financial services/trading background is mandatory for this role ***
Do not apply if you do not have the relevant financial systems experience.
This is a unique opportunity to join a leading proprietary trading firm with an entrepreneurial and innovative culture at the heart of its business. We value quick-witted, creative minds and challenge them to make full use of their capabilities.
As a Senior Research Engineer (Machine Learning), you will help lead the development of our trading model research framework and its integration with our platform for deploying and running those models in production. You will expand the framework and platform beyond their current limited scope to become the firm's global standard for training, consuming, combining, and transforming any data source in a data-driven, systematic way.
What you will do
- Lead the development and global rollout of our research framework for defining and training models through various optimization procedures (supervised learning, backtesting, etc.), as well as its integration with our platform for deploying and running those models in production
- Develop tools/frameworks for use by our Quantitative Researchers and Traders to test hypotheses and tune/develop data-driven systematic trading strategies
- Work with the business to refine requirements, collect feedback, and iterate on the design and implementation of research tools
What you need to succeed
- Advanced degree (Master's or PhD) in a relevant field
- 5+ years of hands-on experience in MLOps and Research Engineering
- Strong proficiency in programming languages such as Python, with experience in libraries such as TensorFlow, PyTorch, and scikit-learn
- Demonstrated experience in designing and implementing end-to-end machine learning pipelines, including data preprocessing, model training, deployment, and monitoring
- Prior commercial experience in designing and implementing systematic trading systems
- Experience in developing and deploying RESTful APIs and microservices for model serving
- Familiarity with Big Data technologies such as Hadoop and Spark
- Experience with data visualization and reporting tools
- Familiarity with databases and query languages for data extraction and transformation
- Understanding of and experience with modern software development practices and tools (e.g., Agile, version control, automated testing, CI/CD, observability)
- Solid understanding of cloud platforms (e.g., AWS, Azure, GCP) and containerization technologies (e.g., Docker, Kubernetes)