Deep Reinforcement Learning for Bitcoin Trading Strategies

In the evolving landscape of cryptocurrency trading, quantitative strategies powered by artificial intelligence are gaining traction. This article explores how deep reinforcement learning (DRL) can be applied to train an autonomous Bitcoin trading agent using the Proximal Policy Optimization (PPO) algorithm combined with Long Short-Term Memory (LSTM) networks. Unlike traditional price prediction models, this approach directly learns optimal trading actions—buy, sell, or hold—through interaction with a simulated market environment.

By integrating deep learning with reinforcement learning principles, we build a system that adapts to market dynamics and maximizes cumulative returns over time. The implementation is built from scratch using PyTorch and follows a Gym-like environment structure for flexibility and clarity.


Core Concepts: PPO and LSTM in Trading

Deep reinforcement learning bridges neural networks and decision-making under uncertainty. In financial markets, where outcomes are stochastic and feedback delayed, DRL excels by learning policies that map market states to profitable actions.

The PPO algorithm is a policy-gradient method known for stable and efficient training. It optimizes a clipped probability ratio to prevent destructive updates, making it ideal for complex, high-variance domains like crypto trading.

Meanwhile, LSTM networks capture temporal dependencies in sequential data—perfect for modeling candlestick patterns and volume trends in Bitcoin price history.

Together, LSTM-PPO forms a powerful framework: the LSTM processes historical price sequences, while PPO learns to take actions that maximize long-term rewards.


Building the LSTM-PPO Model

Below is a streamlined version of the PPO agent enhanced with LSTM layers. The model takes normalized market data as input and outputs action probabilities (buy/sell/hold).

import torch
import torch.nn as nn
import torch.optim as optim
from torch.distributions import Categorical

class PPO(nn.Module):
    def __init__(self, state_size, action_size):
        super(PPO, self).__init__()
        self.data = []
        self.fc1 = nn.Linear(state_size, 10)     # project market state to hidden size
        self.lstm = nn.LSTM(10, 10)              # capture temporal dependencies
        self.fc_pi = nn.Linear(10, action_size)  # policy head: buy/sell/hold probabilities
        self.fc_v = nn.Linear(10, 1)             # value head: state-value estimate
        self.optimizer = optim.Adam(self.parameters(), lr=0.0005)

    def pi(self, x, hidden):
        x = torch.relu(self.fc1(x)).view(-1, 1, 10)
        x, hidden = self.lstm(x, hidden)
        prob = torch.softmax(self.fc_pi(x), dim=2)
        return prob, hidden

    def v(self, x, hidden):
        x = torch.relu(self.fc1(x)).view(-1, 1, 10)
        x, _ = self.lstm(x, hidden)
        return self.fc_v(x)

    def put_data(self, transition):
        self.data.append(transition)

    def make_batch(self):
        # Batch preparation logic here
        pass

    def train_net(self):
        # Training step with advantage estimation and clipped loss
        pass

The training step (omitted above for brevity) uses a clipped surrogate loss and Generalized Advantage Estimation (GAE) to stabilize learning. Note that the policy and value functions share the fc1 and LSTM layers, with separate output heads for action probabilities and state values.
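As a sketch of what the omitted train_net step could compute, the two core pieces are GAE and the clipped surrogate loss. The helper functions below are hypothetical, and the hyperparameters gamma, lmbda, and eps_clip are illustrative defaults, not values taken from the original implementation:

```python
import torch

def gae(rewards, values, next_values, dones, gamma=0.98, lmbda=0.95):
    """Generalized Advantage Estimation over one trajectory of 1-D tensors."""
    deltas = rewards + gamma * next_values * (1 - dones) - values
    advantages = torch.zeros_like(deltas)
    running = 0.0
    for t in reversed(range(len(deltas))):
        # Accumulate discounted TD errors backwards through time
        running = deltas[t] + gamma * lmbda * (1 - dones[t]) * running
        advantages[t] = running
    return advantages

def clipped_surrogate(new_probs, old_probs, advantages, eps_clip=0.1):
    """PPO clipped policy loss: -E[min(r*A, clip(r, 1-eps, 1+eps)*A)]."""
    ratio = new_probs / old_probs
    surr1 = ratio * advantages
    surr2 = torch.clamp(ratio, 1 - eps_clip, 1 + eps_clip) * advantages
    return -torch.min(surr1, surr2).mean()
```

In a full train_net, the value head would additionally be regressed toward the GAE-based return target, with both losses backpropagated through the shared layers.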


Designing the Bitcoin Trading Environment

To train the agent, we simulate a realistic trading environment inspired by OpenAI Gym. The BitcoinTradingEnv class encapsulates state transitions, reward calculation, and episode termination.

The core structure of the environment:

class BitcoinTradingEnv:
    def __init__(self, df, commission=0.00075, initial_balance=10000):
        self.df = df
        # Percentage change from the previous candle, in percent
        self.norm_df = 100 * (df / df.shift(1) - 1).fillna(0)
        self.commission = commission
        self.initial_balance = initial_balance
        self.mode = False
        self.sample_length = 500  # each episode covers 500 hourly candles

    def reset(self):
        # Initialize episode with random start point
        pass

    def step(self, action):
        # Execute trade, update portfolio; return next_state, reward, done, profit
        pass

The environment returns a composite state vector including both market features and portfolio metrics—enabling the agent to learn context-aware strategies.
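As an illustration of such a composite state, a hypothetical observation builder might concatenate the five normalized OHLCV features with two portfolio metrics, which would match the state_size=7 used later in the training loop. The exact layout below is an assumption for illustration, not confirmed by the original:

```python
import numpy as np

def make_observation(norm_ohlcv, balance, btc_held, price, initial_balance):
    """Hypothetical composite state: 5 normalized market features + 2 portfolio metrics."""
    portfolio = [balance / initial_balance,            # cash exposure
                 btc_held * price / initial_balance]   # BTC exposure at current price
    return np.concatenate([norm_ohlcv, portfolio]).astype(np.float32)
```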


Critical Implementation Details

Several design choices significantly impact performance:

Why does the initial portfolio include Bitcoin?
Using a mixed starting portfolio (cash + BTC) allows the model to evaluate opportunity cost. Profits are calculated relative to a baseline value that includes both assets. This ensures meaningful rewards even during downtrends when selling BTC generates profit despite falling prices.

Why use random sampling instead of full-history training?
Training on random 500-hour segments prevents overfitting to specific market phases. With over 10,000 possible starting points in the dataset, the agent encounters diverse market conditions—including crashes, rallies, and consolidation periods.
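A minimal sketch of how reset() could pick such a segment (the helper name is hypothetical):

```python
import numpy as np

def sample_window(df_len, sample_length=500):
    """Pick a random contiguous training segment from the full history."""
    start = np.random.randint(0, df_len - sample_length)
    return start, start + sample_length
```

With roughly 10,000+ rows of hourly data, this yields thousands of distinct starting points, so consecutive episodes rarely replay the same market phase.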

How are zero-balance scenarios handled?
If insufficient funds or BTC exist for a trade, the action defaults to "hold." While this blurs the distinction between intentional holds and forced inaction, it avoids invalid trades and maintains training stability.
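A hedged sketch of this fallback inside the trade logic. The function name and the all-in position sizing are assumptions for illustration, and actions are encoded here as 0=buy, 1=sell, 2=hold:

```python
def execute_trade(action, price, balance, btc_held, commission=0.00075):
    """Apply an action; silently fall back to hold when funds are insufficient."""
    if action == 0 and balance > 0:        # buy: convert all cash to BTC
        btc_held += (balance / price) * (1 - commission)
        balance = 0.0
    elif action == 1 and btc_held > 0:     # sell: convert all BTC to cash
        balance += btc_held * price * (1 - commission)
        btc_held = 0.0
    # any other case (including forced inaction) is a hold
    net_worth = balance + btc_held * price
    return balance, btc_held, net_worth
```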

Why include portfolio state in observations?
Account balance and BTC holdings inform the value network about current exposure. A bullish signal only has positive expected value if the agent holds cash; bearish signals matter most when holding BTC. This coupling enables smarter risk management.

When should the agent choose “hold”?
The model learns to skip trades when expected gains don’t justify transaction costs. Note: the agent doesn’t predict price direction—it learns which actions yield higher returns given the current state.
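The cost hurdle is easy to quantify: with the 0.075% commission used in the environment, a full buy-then-sell round trip costs about 0.15%, so price moves smaller than that have negative expected value:

```python
commission = 0.00075
# Fraction of value lost to a buy followed by a sell
round_trip_cost = 1 - (1 - commission) ** 2
# ~0.0015, i.e. ~0.15% -- the minimum move a trade must beat
```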


Data Preparation and Training Process

Historical BTC/USD hourly candles from Bitfinex (May 2018 – June 2019) are fetched via API:

import requests
import numpy as np
import pandas as pd

resp = requests.get('https://www.quantinfo.com/API/m/chart/history?symbol=BTC_USD_BITFINEX&resolution=60&from=1525622626&to=1561607596')
data = resp.json()
df = pd.DataFrame(data, columns=['t','o','h','l','c','v'])
df.index = df['t']
df = df.astype(np.float32)

Normalization uses percentage changes rather than absolute scaling to prevent overfitting:

self.norm_df = 100 * (df / df.shift(1) - 1).fillna(0)
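Its effect can be checked on a toy series: each row becomes the percent change from the previous candle, with the first row zeroed by fillna(0):

```python
import pandas as pd

df = pd.DataFrame({'c': [100.0, 101.0, 99.98]})
norm = 100 * (df / df.shift(1) - 1).fillna(0)
# norm['c'] -> [0.0, 1.0, ~-1.0099]: scale-free percent moves
```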

Training loop runs for 10,000 episodes:

env = BitcoinTradingEnv(df)
model = PPO(state_size=7, action_size=3)

for n_epi in range(10000):
    s = env.reset()
    hidden = (torch.zeros(1,1,10), torch.zeros(1,1,10))
    done = False
    while not done:
        prob, hidden = model.pi(torch.tensor(s).float(), hidden)
        m = Categorical(prob)
        a = m.sample().item()
        s_prime, r, done, profit = env.step(a)
        model.put_data((s, a, r/10, s_prime, prob[0, 0, a].item(), hidden, done))
        s = s_prime
    model.train_net()



Training Results and Analysis

The training reward curve (plot omitted here) shows early instability followed by gradual improvement.

Backtesting on the full training dataset reveals signs of mild overfitting: the agent holds near market bottoms but fails to maintain positions during sustained rallies.


Out-of-Sample Test Performance

Tested on unseen data from June 2019 onward, a period in which BTC dropped from roughly $13K to $9K, the agent's performance suggests it adapts well to mean-reverting behavior but struggles in trending or low-volatility regimes.


Lessons Learned and Future Improvements

Key insights from this experiment: random sampling of training windows reduces overfitting to specific market phases; including portfolio state in observations lets the agent weigh exposure when acting on signals; and transaction costs teach the agent when holding beats trading.

Possible future enhancements include regularization such as dropout, broader out-of-sample validation, and extending the framework to other assets such as stocks or forex.


Frequently Asked Questions

Q: Can this model make real-world profits?

A: While promising in simulation, real-world deployment requires handling slippage, latency, and exchange API limits. Paper profits don’t guarantee live success.

Q: Why not use simple LSTM for price prediction instead?

A: Predicting prices doesn’t equate to profitable trading. A DRL agent learns end-to-end optimal actions under transaction costs and risk constraints.

Q: How important is GPU acceleration?

A: Crucial for faster iterations. GPU training runs ~3x faster than CPU—essential when running thousands of episodes.

Q: Is overfitting avoidable in financial DRL?

A: Not entirely—but mitigated via random sampling, dropout layers, and out-of-sample validation.

Q: What are the risks of automated crypto trading?

A: Market volatility, flash crashes, and model failure can lead to significant losses. Always test thoroughly in simulation first.

Q: Can I apply this to other assets?

A: Yes—the framework works for stocks, forex, or any time-series data with volume and price information.
