Continuous Integration & Experimentation

How our Deep Learning Models Evolve

Predicto
5 min read · Jun 10, 2021

“The only constant in life is change.” — Heraclitus

I’m a big fan of continuity.

The environment around us changes every second. Our bodies and minds continuously learn and adapt every time we experience something new. A new city, a new person, a new recipe.

Technology is another example of continuity. It advances so fast, continuously building on top of every new discovery. We learn, we adapt, and we go one step further, using past knowledge to discover the new.

I’m also a big believer in continuous education, and I advise everyone around me to keep learning. If we don’t adapt and learn, we are left behind. It’s inevitable.

But that’s not what I want to talk about.

Today I want to talk about the importance of continuous experimentation and continuous integration. And about why it is important to invest early on in building a solid infrastructure that supports those concepts.

Introduction

Here at Predicto, we deal with stocks and cryptocurrencies. As you probably know, conditions in those markets change rapidly. What happened a week ago is already old news. Continuous learning and adapting is key.

We currently track more than 150 stocks from the US stock market plus several cryptocurrencies. We maintain at least 3 different Deep Learning models for each stock, which give us daily forecasts for the next 2 to 3 weeks.

Our goal is not to predict the future.

Our goal is to identify complex patterns, provide uncertainty and risk metrics, and generate explainable forecasts that can hint at potential opportunities in a specific stock.

We want our models to always be up to date and absorb the latest market conditions. This means we need to regularly retrain and fine-tune hundreds of them. It is also very important to continuously experiment with new ideas and act on what we learn quickly.

The market doesn’t wait, and opportunity windows are very narrow.

To accomplish those goals and streamline the process, we designed and implemented a general-purpose forecasting platform, datafloat.ai, that allows us to:

  • Perform model training at scale. We can train hundreds of models in a few hours.
  • Experiment with models using zero code. We can design a new deep learning model that forecasts anything we ask it to, in a few minutes and with a few clicks.

Let’s dig into more detail.

Model training at scale

As mentioned earlier, we want our models to always be up to date and absorb the latest market conditions. Our current retrain frequency is once every two months, which means retraining or fine-tuning more than 500 Deep Learning models each cycle. Each model involves many years of data, a large number of features, and several layers.

Here is how this process works:

Our latest training environment is packaged in a Docker container that lives in a private container registry. The container runs an agent that listens for new model-training requests posted to a message queue. When we are ready to start the retraining process, we scale out our container instances on the cloud to process requests in parallel.
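
The snippet below is a simplified sketch of such a queue-driven worker, here assuming RabbitMQ via the pika library; the queue name, message shape, and the train_and_validate helper are all illustrative, not our production code:

```python
import json

import pika  # assumption: RabbitMQ as the broker; any message queue works


def train_and_validate(job):
    # Hypothetical stand-in for the real pipeline: load data for
    # job["ticker"], fit the model, generate validation metrics and graphs.
    print(f"training model for {job['ticker']}")


def on_request(ch, method, properties, body):
    """Handle one training request pulled from the queue."""
    job = json.loads(body)  # e.g. {"ticker": "FB", "model_id": "fb-lstm-3"}
    train_and_validate(job)
    ch.basic_ack(delivery_tag=method.delivery_tag)  # ack only after success


connection = pika.BlockingConnection(pika.ConnectionParameters(host="queue-host"))
channel = connection.channel()
channel.queue_declare(queue="training-requests", durable=True)
channel.basic_qos(prefetch_count=1)  # one job at a time per agent
channel.basic_consume(queue="training-requests", on_message_callback=on_request)
channel.start_consuming()  # every container replica runs this same loop
```

Scaling out is then just running more replicas of the container: the queue load-balances requests across them for free.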

After each model training, agents generate validation metrics and graphs that we can use to automatically validate the model (or manually inspect it later). Below you can see some of the generated graphs for one of our Facebook stock models as an example.

Left: Full training dataset fit + Validation forecasts. Right: Validation dataset forecasts

All models are uploaded to private, versioned file storage on the cloud. If a model passes validation, it is promoted to production.
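
The validation gate itself can be as simple as thresholding a couple of error metrics. A simplified sketch (the thresholds and the versioned-storage client below are illustrative, not our production values):

```python
import numpy as np

MAX_MAPE = 0.05  # illustrative promotion thresholds
MAX_RMSE = 2.0


def passes_validation(y_true, y_pred):
    """Gate a freshly trained model on simple validation-set metrics."""
    mape = np.mean(np.abs((y_true - y_pred) / y_true))
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return mape <= MAX_MAPE and rmse <= MAX_RMSE


def maybe_promote(model_path, y_true, y_pred, storage):
    # "storage" is a hypothetical client for the versioned file store
    version = storage.upload(model_path)  # every model is kept, versioned
    if passes_validation(y_true, y_pred):
        storage.tag(version, "production")  # only validated models go live
```

Models that fail the gate stay in storage for manual inspection; nothing is lost, it just never reaches production.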

Model experimentation with zero code

When we are not retraining models, we like to run quick experiments on our no-code forecasting platform. The platform lets us design our next model directly from a database’s schema, selecting features across tables and then experimenting with the model architecture, the number of layers, dropout rates, and more. We can also vary sample sizes and prediction horizons. We can even ask the platform to predict different things, not just stock price movements. And this is where we get creative.
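
Conceptually, every click-built model boils down to a small declarative spec that a builder turns into a network. The spec fields and builder below are an illustrative sketch using TensorFlow/Keras, not datafloat.ai’s actual internal format:

```python
import tensorflow as tf

# Illustrative spec: features picked from database tables, plus architecture.
spec = {
    "features": ["stock_prices.close", "stock_prices.volume", "news.article_count"],
    "sample_window_days": 21,  # length of each input sample
    "horizon_days": 3,         # how far ahead to forecast
    "layers": [
        {"type": "lstm", "units": 64},
        {"type": "dropout", "rate": 0.2},
    ],
}


def build_model(spec):
    """Turn a declarative spec into a trainable Keras model."""
    model = tf.keras.Sequential()
    model.add(tf.keras.Input(shape=(spec["sample_window_days"], len(spec["features"]))))
    for layer in spec["layers"]:
        if layer["type"] == "lstm":
            model.add(tf.keras.layers.LSTM(layer["units"]))
        elif layer["type"] == "dropout":
            model.add(tf.keras.layers.Dropout(layer["rate"]))
    model.add(tf.keras.layers.Dense(spec["horizon_days"]))  # one output per day
    model.compile(optimizer="adam", loss="mse")
    return model


model = build_model(spec)
```

The point of keeping the definition declarative is that changing a dropout rate or adding a layer is a data edit, not a code change.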

A few model experimentation examples might be:

  • predict the next 3 days’ trading volume for AMZN stock, using 3-week samples with features such as AMZN stock price, the number of daily Amazon news articles, and more (the windowing behind this is sketched after this list).
  • predict the next 15 days’ stock price movements for TSLA stock, using 45-day samples with features such as TSLA volume, competitors’ price and trading-volume movements (e.g. NIO), index prices (such as SPY/QQQ), options data, news data, and more.
  • predict tomorrow’s news coverage for a specific company.
  • predict Bitcoin price movements using stock and cryptocurrency features.
  • predict next week’s stock market volatility using stock prices and options features.

And the list goes on.
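
Mechanically, all of these reduce to the same shape of problem: slice history into fixed-size windows and learn to map each window to the next few days. A minimal NumPy sketch of that windowing for the AMZN volume example above (the column layout and sizes are assumptions):

```python
import numpy as np


def make_windows(series, window=15, horizon=3):
    """Slice a (days, features) array into (samples, window, features) inputs
    and (samples, horizon) targets, where the target is the first column."""
    X, y = [], []
    for start in range(len(series) - window - horizon + 1):
        X.append(series[start : start + window])
        y.append(series[start + window : start + window + horizon, 0])
    return np.array(X), np.array(y)


# ~3 trading weeks of [volume, close, article_count] per sample,
# predicting the next 3 days of volume (column 0).
daily = np.random.rand(500, 3)  # stand-in for real AMZN history
X, y = make_windows(daily, window=15, horizon=3)
print(X.shape, y.shape)  # (483, 15, 3) (483, 3)
```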

To design those models, we use a no-code approach: we describe a new model using our database schema and a few clicks!

Here is a snapshot of a very simple model definition we created in one minute that predicts news coverage for Tesla over the next 2 weeks. For simplicity, we used only recent stock price and volume data. Training time was 10 minutes. When training completes, we are presented with several graphs that help us understand the forecasting power of our concept model.

Model Features section: Our model’s data definition uses data from 2 tables
Training and Validation section: Graphs showing our model’s performance over training and validation datasets
Input data section: Overview of training data

And below we can inspect a sample forecast with explainability graphs from our newly generated model!

As you can see, the main feature influencing this forecast was the actual number of daily articles. A small influence from trading volume is also visible.

Forecast sample: Tesla news articles model forecast and explainability
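
We haven’t detailed the attribution method behind these graphs here; permutation importance is one simple stand-in that yields similar per-feature influence scores. A sketch, assuming a fitted Keras-style model and 3-D validation arrays shaped (samples, window, features):

```python
import numpy as np


def permutation_importance(model, X_val, y_val, feature_names, n_repeats=5, seed=0):
    """Score each feature by how much shuffling it degrades validation MSE."""
    rng = np.random.default_rng(seed)
    base = np.mean((model.predict(X_val) - y_val) ** 2)
    scores = {}
    for i, name in enumerate(feature_names):
        losses = []
        for _ in range(n_repeats):
            Xp = X_val.copy()
            perm = rng.permutation(len(Xp))
            Xp[:, :, i] = Xp[perm, :, i]  # break this feature's link to the target
            losses.append(np.mean((model.predict(Xp) - y_val) ** 2))
        scores[name] = float(np.mean(losses) - base)  # bigger = more influential
    return scores
```

In the Tesla example, a score like this is what would rank the daily article count first, with trading volume a distant second.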

Conclusion

I hope you are now thinking more seriously about continuous experimentation and continuous integration.

We’ve shown you how we approach these concepts using our datafloat.ai platform. The best part is that we don’t need to write a single line of code for any of this. Everything happens with a few clicks in a portal and everyone in the team can experiment and design new models.

No Deep Learning experience is needed.

Just creativity. There is hidden information in every data set waiting to be discovered.

For more information, feel free to contact us or leave a comment!

See you soon and stay safe!

— The Predicto Team


Written by Predicto

Stock & Cryptocurrency Forecasting AI. Based on Options Data. Powered by Intelligible Deep Learning models. https://predic.to