Welcome to my GitHub! I'm currently a consultant at Netlight within Data Science & Analytics. I have a master's degree in Financial Mathematics and a bachelor's in Applied Mathematics + Industrial Engineering and Management, both from KTH.
Despite my education's focus on mathematics, my biggest interest is in Data Analytics and Data Science, more specifically in Machine Learning. This is reflected in the repos you can find under my handle. Continue reading for a selection of the projects I'm most proud of!
In collaboration with Skandinaviska Enskilda Banken (SEB), a classmate and I wrote our thesis on Reinforcement Learning for Market Making. Market making is the process of quoting buy and sell prices in a financial asset in order to provide liquidity and earn a profit on the spread. Setting these prices "correctly" is essential to drive volume, minimize risk and earn a profit – which historically has been done using analytical methods. However, deriving optimal market making strategies analytically is only possible under limiting and naïve assumptions about how markets work. There is thus an argument for using reinforcement learning to find better strategies, since the learned strategies do not depend on any of these assumptions.
Using two ways of modelling the market, we applied reinforcement learning to find market making strategies. In the first model, for which analytically optimal strategies can be derived, tabular Q-learning found strategies that matched the analytical ones in performance. In the second, significantly more sophisticated model, we compared tabular Q-learning with Double Deep Q-Networks (DDQN) and found that the latter was more suitable for this problem.
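For a flavour of the approach, here is a minimal sketch of a tabular Q-learning loop in Python. The state/action encoding and the toy `step` simulator are hypothetical placeholders for illustration, not our actual market model:

```python
import numpy as np

# Hypothetical, simplified setup: states index e.g. (inventory, time left)
# pairs and actions index (bid depth, ask depth) pairs.
n_states, n_actions = 500, 25
alpha, gamma, epsilon = 0.1, 1.0, 0.1  # learning rate, discount, exploration

Q = np.zeros((n_states, n_actions))

def step(state, action):
    """Toy stand-in for the market simulator (the real one models LOB dynamics)."""
    next_state = np.random.randint(n_states)
    reward = np.random.randn()
    done = np.random.rand() < 0.01
    return next_state, reward, done

def run_episode(initial_state=0):
    state, done = initial_state, False
    while not done:
        # Epsilon-greedy action selection
        if np.random.rand() < epsilon:
            action = np.random.randint(n_actions)
        else:
            action = int(np.argmax(Q[state]))
        next_state, reward, done = step(state, action)
        # Q-learning update toward the bootstrapped target
        target = reward + (0.0 if done else gamma * np.max(Q[next_state]))
        Q[state, action] += alpha * (target - Q[state, action])
        state = next_state

for _ in range(1000):
    run_episode()
```

DDQN replaces the Q-table with a neural network (plus a separate target network to reduce overestimation bias), which is what made it more suitable for the larger second model.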
For more about our results, have a look at our thesis.
Below follows an illustration of a limit order book (LOB), a central concept in market making.
As part of the final project in a Deep Learning course at KTH, three classmates and I got the idea of building a model that translates handwritten mathematical expressions directly to LaTeX code. This could save us a lot of time, since manually entering equations into LaTeX is a very tedious task.
Looking into previous research, we found that an Encoder-Decoder model consisting of a convolutional neural network (CNN) and a long short-term memory (LSTM) network would be most promising for our task. We thus constructed an Encoder consisting of a CNN with batch normalization and max-pooling, and a Decoder consisting of an LSTM with a soft attention mechanism. For better performance, beam search was used during prediction.
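To make the architecture concrete, here is a rough PyTorch sketch, not our exact implementation; all dimensions here are illustrative:

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """CNN encoder: image -> grid of feature vectors for the decoder to attend over."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.BatchNorm2d(128), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(128, feat_dim, 3, padding=1), nn.BatchNorm2d(feat_dim), nn.ReLU(),
        )

    def forward(self, images):                   # (B, 1, H, W)
        feats = self.cnn(images)                 # (B, C, H', W')
        return feats.flatten(2).transpose(1, 2)  # (B, H'*W', C)

class AttentionDecoder(nn.Module):
    """LSTM decoder with soft (additive) attention over the encoder grid."""
    def __init__(self, vocab_size, feat_dim=256, hidden_dim=512, emb_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTMCell(emb_dim + feat_dim, hidden_dim)
        self.att_feat = nn.Linear(feat_dim, hidden_dim)
        self.att_hid = nn.Linear(hidden_dim, hidden_dim)
        self.att_score = nn.Linear(hidden_dim, 1)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, feats, tokens, h, c):
        # Soft attention: score each spatial location against the hidden state
        scores = self.att_score(torch.tanh(self.att_feat(feats) + self.att_hid(h).unsqueeze(1)))
        weights = torch.softmax(scores, dim=1)   # (B, L, 1), sums to 1 over locations
        context = (weights * feats).sum(dim=1)   # (B, feat_dim), weighted image summary
        h, c = self.lstm(torch.cat([self.embed(tokens), context], dim=1), (h, c))
        return self.out(h), h, c                 # logits over LaTeX tokens

# Illustrative shapes: a batch of 2 grayscale images, a vocab of 100 LaTeX tokens
enc, dec = Encoder(), AttentionDecoder(vocab_size=100)
feats = enc(torch.randn(2, 1, 64, 256))
h = c = torch.zeros(2, 512)
logits, h, c = dec(feats, torch.zeros(2, dtype=torch.long), h, c)
```

At prediction time, beam search keeps the k highest-scoring partial token sequences at each step instead of greedily taking the single most likely token.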
While the results weren't very promising for longer expressions, the model performed well on some expressions I wrote myself. Here are some examples!
Working together with a fintech firm and three classmates, we looked into the possibility of using Machine Learning to speed up the firm's operations. The team we worked with had one main task: solving a constrained non-linear optimization problem using an evolutionary optimization algorithm called CMA-ES. There was one problem, however: deciding on the feasibility of the solutions the algorithm suggested was computationally heavy. Our task was thus to see if Machine Learning could be used to filter out infeasible solutions. We took an explorative approach, testing a wide range of Machine Learning algorithms, supervised as well as unsupervised. Unfortunately, no method yielded useful results. We think this is mainly because the evolutionary algorithm advances towards the optimum in incremental steps through the large feature space (~4000 dimensions), which means that the classifiers need to extrapolate.
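For the curious, here is a rough sketch of the idea using the pycma library: a classifier screens candidates inside the CMA-ES loop so that the expensive feasibility check only runs on points predicted feasible. `objective` and `expensive_feasibility_check` are toy stand-ins for the firm's routines, and the dimension is shrunk from the real ~4000 to keep the sketch fast:

```python
import cma
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy stand-ins for the firm's actual (expensive) routines
def objective(x):
    return float(np.sum(np.asarray(x) ** 2))

def expensive_feasibility_check(x):  # the computationally heavy step
    return bool(np.linalg.norm(x) < 10)

DIM, PENALTY, WARMUP = 50, 1e12, 100  # real problem had ~4000 dimensions
clf = RandomForestClassifier()        # one of several supervised models tried
X_seen, y_seen = [], []

es = cma.CMAEvolutionStrategy(np.zeros(DIM), 0.5)
for generation in range(100):  # cap generations for the sketch
    if es.stop():
        break
    candidates = es.ask()
    # After a warm-up, let the classifier screen out likely infeasible points
    if len(X_seen) > WARMUP:
        keep_mask = clf.predict(np.array(candidates)).astype(bool)
    else:
        keep_mask = np.ones(len(candidates), dtype=bool)
    fitnesses = []
    for x, keep in zip(candidates, keep_mask):
        feasible = expensive_feasibility_check(x) if keep else False
        X_seen.append(x)
        y_seen.append(feasible)
        fitnesses.append(objective(x) if feasible else PENALTY)
    clf.fit(np.array(X_seen), np.array(y_seen))  # retrain on all labels so far
    es.tell(candidates, fitnesses)
```

In practice this failed for the reason above: each new generation samples from a region the classifier has barely seen, so its predictions amount to extrapolation.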
Below follows a GIF of how CMA-ES moves during its first 100 iterations, projected down to three dimensions using PCA.
While the projects above are my favourites, I still have more to show. Take a look at the following list if you want to learn more (some are unfortunately in Swedish):
Deep Learning
Machine Learning
Coding challenges
Numerical analysis
Games