Your portfolio is like a garden: left unpruned, it grows wild. One month your tech stocks surge while your bonds languish; the next month the pattern reverses. Without systematic rebalancing, your carefully planned asset allocation drifts into a mix you never chose, often with far more risk than you intended.
Portfolio rebalancing with AI-powered tools like Ollama transforms this tedious manual process into an automated, data-driven system. This guide shows you how to build a risk-adjusted asset allocation system that maintains your target portfolio weights while optimizing for returns and minimizing drawdowns.
You'll learn to implement modern portfolio theory using Ollama's local AI capabilities, create automated rebalancing triggers, and build risk assessment models that adapt to market conditions. By the end, you'll have a complete portfolio management system running on your local machine.
## Understanding Portfolio Rebalancing and Risk Management

Asset allocation drives investment outcomes more than individual stock picking. The widely cited Brinson, Hood, and Beebower study found that asset allocation policy explains roughly 90% of the variability of a portfolio's returns over time, not security selection. Yet most investors neglect systematic rebalancing, letting their portfolios drift far from their target allocations.

### Why Traditional Rebalancing Fails

Manual rebalancing creates three critical problems:

- **Emotional interference** - Investors avoid selling winners and buying losers
- **Timing inconsistency** - Rebalancing happens irregularly or not at all
- **Limited analysis** - Decisions rest on gut feelings rather than data

Risk-adjusted asset allocation addresses these issues by using quantitative models to determine optimal portfolio weights based on expected returns, volatility, and correlations.
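To make that concrete, here is a minimal sketch (plain NumPy, with illustrative numbers rather than real market estimates) of how a quantitative model scores a candidate allocation from exactly those three inputs:

```python
import numpy as np

# Illustrative annual figures for a two-asset stock/bond mix (not real estimates)
weights = np.array([0.6, 0.4])        # 60% stocks, 40% bonds
exp_returns = np.array([0.08, 0.03])  # expected annual returns
vols = np.array([0.18, 0.06])         # annual volatilities
corr = 0.2                            # stock/bond correlation

# Covariance matrix built from volatilities and correlation
cov = np.array([
    [vols[0] ** 2,               corr * vols[0] * vols[1]],
    [corr * vols[0] * vols[1],   vols[1] ** 2],
])

port_return = weights @ exp_returns          # w' mu
port_vol = np.sqrt(weights @ cov @ weights)  # sqrt(w' Sigma w)
sharpe = (port_return - 0.02) / port_vol     # vs. a 2% risk-free rate

print(f"return={port_return:.2%} vol={port_vol:.2%} sharpe={sharpe:.2f}")
```

A model compares many candidate weight vectors this way and keeps the one with the best risk-adjusted score; the full optimizer later in this guide automates that search.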
### The Ollama Advantage for Investment Strategy

Ollama runs large language models locally, providing several benefits for portfolio management:

- **Privacy** - Your financial data never leaves your machine
- **Cost efficiency** - No API fees for continuous analysis
- **Customization** - Tailor prompts and models to your specific investment criteria
- **Speed** - Local inference without network round-trips to a cloud API
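Before wiring Ollama into the pipeline, it is worth confirming the local server actually responds. A minimal sketch using only the standard library, assuming Ollama's default endpoint at `localhost:11434` and a model you have already pulled (the `quick_check` helper and its one-line prompt are illustrative, not part of Ollama's API):

```python
import json
from urllib import request, error

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_generate_payload(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

def quick_check(model: str = "llama2:13b") -> str:
    """Send a one-line prompt to the local server; return the model's reply,
    or an error message if the server is not running."""
    payload = json.dumps(build_generate_payload(model, "Reply with OK")).encode()
    req = request.Request(OLLAMA_URL, data=payload,
                          headers={"Content-Type": "application/json"})
    try:
        with request.urlopen(req, timeout=5) as resp:
            return json.loads(resp.read())["response"]
    except (error.URLError, OSError) as exc:
        return f"Ollama not reachable: {exc}"

print(quick_check())
```

If the check prints "Ollama not reachable", start the server with `ollama serve` before continuing.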
## Setting Up Your Ollama Portfolio Management System

### Prerequisites and Installation

First, install Ollama and download a suitable model for financial analysis:

```bash
# Install Ollama (macOS/Linux)
curl -fsSL https://ollama.ai/install.sh | sh

# Pull a capable model for financial analysis
ollama pull llama2:13b

# Verify installation
ollama list
```
Create your project directory and install the required Python packages:

```bash
mkdir portfolio-rebalancer
cd portfolio-rebalancer

# Create virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install pandas numpy yfinance requests matplotlib seaborn scipy
```
### Core Portfolio Data Structure

Create the foundation for your portfolio rebalancing system:

```python
# portfolio_manager.py
import pandas as pd
import numpy as np
import yfinance as yf
from datetime import datetime, timedelta
import requests
import json

class PortfolioManager:
    def __init__(self, target_allocation, ollama_model="llama2:13b"):
        """
        Initialize portfolio manager with target asset allocation

        Args:
            target_allocation: dict with asset symbols and target percentages
            ollama_model: Ollama model name for AI analysis
        """
        self.target_allocation = target_allocation
        self.ollama_model = ollama_model
        self.current_prices = {}
        self.current_allocation = {}

    def fetch_current_prices(self):
        """Get current market prices for all assets"""
        symbols = list(self.target_allocation.keys())
        data = yf.download(symbols, period="1d", interval="1d")

        for symbol in symbols:
            if len(symbols) == 1:
                self.current_prices[symbol] = data['Close'].iloc[-1]
            else:
                self.current_prices[symbol] = data['Close'][symbol].iloc[-1]

        print(f"✅ Fetched prices for {len(symbols)} assets")
        return self.current_prices
```
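Once prices are in hand, turning share counts into allocation percentages is simple arithmetic, and it is the step every later trigger depends on. A self-contained sketch of that calculation, with hypothetical holdings and prices:

```python
def compute_allocation(holdings: dict, prices: dict) -> dict:
    """Convert share counts and per-share prices into portfolio weight fractions."""
    values = {asset: qty * prices[asset] for asset, qty in holdings.items()}
    total = sum(values.values())
    return {asset: value / total for asset, value in values.items()}

# Hypothetical example: 10 shares at $50 and 5 shares at $100
weights = compute_allocation({"SPY": 10, "IEF": 5}, {"SPY": 50.0, "IEF": 100.0})
print(weights)  # each position is $500 of a $1,000 portfolio, so 0.5 / 0.5
```

The weights always sum to 1 by construction, which is the invariant the rebalancing triggers later in this guide rely on.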
### Risk Metrics Calculation Engine

Build a comprehensive risk assessment system:

```python
# risk_calculator.py
import numpy as np
import pandas as pd
from scipy.optimize import minimize

class RiskCalculator:
    def __init__(self, price_data, lookback_days=252):
        """
        Initialize risk calculator with historical price data

        Args:
            price_data: DataFrame with historical prices
            lookback_days: Number of days for risk calculations
        """
        self.price_data = price_data
        self.lookback_days = lookback_days
        self.returns = price_data.pct_change().dropna()

    def calculate_portfolio_metrics(self, weights):
        """Calculate expected return, volatility, and Sharpe ratio"""
        # Expected annual return
        expected_returns = self.returns.mean() * 252
        portfolio_return = np.sum(weights * expected_returns)

        # Portfolio volatility (risk)
        cov_matrix = self.returns.cov() * 252
        portfolio_volatility = np.sqrt(np.dot(weights.T, np.dot(cov_matrix, weights)))

        # Sharpe ratio (assuming 2% risk-free rate)
        risk_free_rate = 0.02
        sharpe_ratio = (portfolio_return - risk_free_rate) / portfolio_volatility

        return {
            'expected_return': portfolio_return,
            'volatility': portfolio_volatility,
            'sharpe_ratio': sharpe_ratio
        }

    def optimize_portfolio(self, target_return=None):
        """
        Optimize portfolio weights using modern portfolio theory

        Args:
            target_return: Target annual return (optional)

        Returns:
            Optimized weights dictionary
        """
        n_assets = len(self.returns.columns)

        # Constraints: weights sum to 1
        constraints = ({'type': 'eq', 'fun': lambda x: np.sum(x) - 1})

        # Bounds: weights between 0 and 1 (no short selling)
        bounds = tuple((0, 1) for _ in range(n_assets))

        # Initial guess: equal weights
        initial_weights = np.array([1 / n_assets] * n_assets)

        if target_return:
            # Optimize for minimum risk at target return
            expected_returns = self.returns.mean() * 252
            constraints = [
                {'type': 'eq', 'fun': lambda x: np.sum(x) - 1},
                {'type': 'eq', 'fun': lambda x: np.sum(x * expected_returns) - target_return}
            ]

            def objective(weights):
                return self.calculate_portfolio_metrics(weights)['volatility']
        else:
            # Optimize for maximum Sharpe ratio
            def objective(weights):
                return -self.calculate_portfolio_metrics(weights)['sharpe_ratio']

        # Run optimization
        result = minimize(objective, initial_weights, method='SLSQP',
                          bounds=bounds, constraints=constraints)

        if result.success:
            optimized_weights = dict(zip(self.returns.columns, result.x))
            return optimized_weights
        else:
            raise ValueError("Portfolio optimization failed")
```
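For intuition about what the SLSQP solver is doing, the two-asset case has a closed form: the minimum-variance weight on asset 1 is (σ₂² − σ₁₂) / (σ₁² + σ₂² − 2σ₁₂), where σ₁₂ is the covariance. A quick numeric check with illustrative volatilities (not real market estimates):

```python
import numpy as np

def min_variance_weight(vol1: float, vol2: float, corr: float) -> float:
    """Closed-form minimum-variance weight on asset 1 of a two-asset portfolio."""
    cov12 = corr * vol1 * vol2
    return (vol2 ** 2 - cov12) / (vol1 ** 2 + vol2 ** 2 - 2 * cov12)

# A volatile asset (18%) paired with a calm one (6%), correlation 0.2
w1 = min_variance_weight(0.18, 0.06, 0.2)
weights = np.array([w1, 1 - w1])
print(f"min-variance weights: {weights.round(3)}")
```

As expected, nearly all of the minimum-variance portfolio sits in the low-volatility asset; the numeric optimizer converges to the same answer for this case, which makes the closed form a useful sanity check.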
## Implementing AI-Driven Rebalancing Logic

### Ollama Integration for Market Analysis

Connect your portfolio system to Ollama for intelligent investment strategy analysis:

```python
# ollama_analyzer.py
import json
import re

import requests

class OllamaAnalyzer:
    def __init__(self, model_name="llama2:13b", base_url="http://localhost:11434"):
        self.model_name = model_name
        self.base_url = base_url

    def analyze_market_conditions(self, portfolio_data, market_data):
        """
        Use Ollama to analyze current market conditions and recommend actions

        Args:
            portfolio_data: Current portfolio allocation and performance
            market_data: Recent market indicators and trends

        Returns:
            AI-generated analysis and recommendations
        """
        prompt = f"""
As a quantitative portfolio analyst, analyze the following data and provide rebalancing recommendations:

CURRENT PORTFOLIO:
{json.dumps(portfolio_data, indent=2)}

MARKET CONDITIONS:
{json.dumps(market_data, indent=2)}

Please provide:
1. Market condition assessment (bullish/bearish/neutral)
2. Risk level evaluation (low/medium/high)
3. Specific rebalancing actions needed
4. Expected impact on portfolio risk/return
5. Recommended rebalancing frequency

Format your response as structured analysis with clear action items.
"""

        try:
            response = requests.post(
                f"{self.base_url}/api/generate",
                json={
                    "model": self.model_name,
                    "prompt": prompt,
                    "stream": False
                },
                timeout=60
            )
            if response.status_code == 200:
                return response.json()['response']
            else:
                return f"Error: {response.status_code} - {response.text}"
        except requests.RequestException as e:
            return f"Connection error: {str(e)}"

    def evaluate_rebalancing_urgency(self, current_weights, target_weights):
        """
        Determine how urgently the portfolio needs rebalancing

        Args:
            current_weights: Current asset allocation percentages
            target_weights: Target asset allocation percentages

        Returns:
            Urgency score and recommended action
        """
        # Calculate allocation drift
        drift_analysis = {}
        max_drift = 0

        for asset in target_weights:
            current = current_weights.get(asset, 0)
            target = target_weights[asset]
            drift = abs(current - target)
            drift_analysis[asset] = {
                'current': current,
                'target': target,
                'drift': drift,
                'drift_pct': (drift / target) * 100 if target > 0 else 0
            }
            max_drift = max(max_drift, drift)

        prompt = f"""
Analyze this portfolio drift and determine rebalancing urgency:

DRIFT ANALYSIS:
{json.dumps(drift_analysis, indent=2)}

Maximum drift: {max_drift:.2%}

Provide:
1. Urgency score (1-10, where 10 = immediate rebalancing needed)
2. Primary concern (which asset is most problematic)
3. Recommended action (hold/rebalance/urgent rebalance)
4. Risk assessment if no action taken

Be concise and actionable.
"""

        try:
            response = requests.post(
                f"{self.base_url}/api/generate",
                json={
                    "model": self.model_name,
                    "prompt": prompt,
                    "stream": False
                },
                timeout=30
            )
            if response.status_code == 200:
                analysis = response.json()['response']

                # Extract the urgency score from the model's free-text reply
                urgency_score = 5  # Default when no score is found
                score_match = re.search(r'urgency score[:\s]*(\d+)', analysis.lower())
                if score_match:
                    urgency_score = int(score_match.group(1))

                return {
                    'urgency_score': urgency_score,
                    'analysis': analysis,
                    'max_drift': max_drift,
                    'drift_details': drift_analysis
                }
            else:
                return {'urgency_score': 5, 'analysis': 'Analysis unavailable', 'max_drift': max_drift}
        except requests.RequestException as e:
            return {'urgency_score': 5, 'analysis': f'Error: {str(e)}', 'max_drift': max_drift}
```
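Because the model's reply is free text, the score-extraction step deserves its own small, testable helper. A sketch of the same regex approach used above, with the score clamped to the 1-10 scale the prompt requests:

```python
import re

def extract_urgency_score(analysis: str, default: int = 5) -> int:
    """Pull an 'Urgency score: N' value out of free-text model output,
    clamped to the 1-10 scale; fall back to a neutral default."""
    match = re.search(r"urgency score[:\s]*(\d+)", analysis.lower())
    if not match:
        return default
    return max(1, min(10, int(match.group(1))))

print(extract_urgency_score("Urgency score: 8 - rebalance soon"))  # parses the 8
print(extract_urgency_score("No clear guidance in this reply"))    # falls back to 5
```

Keeping the parsing isolated like this makes it easy to tighten the regex later if a different model phrases its scores differently.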
### Automated Rebalancing Triggers

Create intelligent triggers that determine when portfolio rebalancing should occur:

```python
# rebalancing_engine.py
from datetime import datetime, timedelta
import numpy as np
import pandas as pd

class RebalancingEngine:
    def __init__(self, portfolio_manager, risk_calculator, ollama_analyzer):
        self.portfolio = portfolio_manager
        self.risk_calc = risk_calculator
        self.ai_analyzer = ollama_analyzer
        self.rebalancing_history = []

    def check_rebalancing_triggers(self, current_holdings, portfolio_value):
        """
        Check multiple triggers to determine if rebalancing is needed

        Args:
            current_holdings: Dict of current asset quantities
            portfolio_value: Total portfolio value

        Returns:
            Dict with trigger status and recommendations
        """
        # Calculate current allocation percentages
        current_allocation = {}
        for asset, quantity in current_holdings.items():
            asset_value = quantity * self.portfolio.current_prices[asset]
            current_allocation[asset] = asset_value / portfolio_value

        triggers = {
            'drift_trigger': self._check_drift_trigger(current_allocation),
            'time_trigger': self._check_time_trigger(),
            'volatility_trigger': self._check_volatility_trigger(),
            'ai_trigger': self._check_ai_trigger(current_allocation)
        }

        # Overall recommendation
        active_triggers = [name for name, trigger in triggers.items() if trigger['active']]
        recommendation = {
            'should_rebalance': len(active_triggers) > 0,
            'active_triggers': active_triggers,
            'urgency': max(trigger.get('urgency', 0) for trigger in triggers.values()),
            'details': triggers
        }
        return recommendation

    def _check_drift_trigger(self, current_allocation):
        """Check if asset allocation has drifted beyond threshold"""
        drift_threshold = 0.05  # 5% drift threshold
        max_drift = 0
        problematic_assets = []

        for asset, target_pct in self.portfolio.target_allocation.items():
            current_pct = current_allocation.get(asset, 0)
            drift = abs(current_pct - target_pct)
            if drift > drift_threshold:
                problematic_assets.append({
                    'asset': asset,
                    'current': current_pct,
                    'target': target_pct,
                    'drift': drift
                })
            max_drift = max(max_drift, drift)

        return {
            'active': len(problematic_assets) > 0,
            'urgency': min(10, int(max_drift * 100)),  # Convert to 1-10 scale
            'max_drift': max_drift,
            'problematic_assets': problematic_assets
        }

    def _check_time_trigger(self):
        """Check if enough time has passed since last rebalance"""
        rebalance_frequency = 30  # Days between rebalances

        if not self.rebalancing_history:
            return {'active': True, 'urgency': 3, 'reason': 'No previous rebalancing'}

        last_rebalance = max(self.rebalancing_history, key=lambda x: x['date'])['date']
        days_since = (datetime.now() - last_rebalance).days

        return {
            'active': days_since >= rebalance_frequency,
            'urgency': min(10, max(1, days_since // 10)),
            'days_since': days_since,
            'threshold': rebalance_frequency
        }

    def _check_volatility_trigger(self):
        """Check if market volatility suggests rebalancing"""
        try:
            # Calculate recent volatility
            recent_returns = self.risk_calc.returns.tail(30)  # Last 30 days
            recent_volatility = recent_returns.std().mean() * np.sqrt(252)  # Annualized

            # Compare to historical average
            historical_volatility = self.risk_calc.returns.std().mean() * np.sqrt(252)
            volatility_ratio = recent_volatility / historical_volatility

            # Trigger if volatility is unusually high
            threshold = 1.5  # 50% above normal

            return {
                'active': volatility_ratio > threshold,
                'urgency': min(10, int(volatility_ratio * 3)),
                'recent_volatility': recent_volatility,
                'historical_volatility': historical_volatility,
                'ratio': volatility_ratio
            }
        except Exception:
            return {'active': False, 'urgency': 0, 'error': 'Volatility calculation failed'}

    def _check_ai_trigger(self, current_allocation):
        """Use AI analysis to determine rebalancing need"""
        try:
            urgency_analysis = self.ai_analyzer.evaluate_rebalancing_urgency(
                current_allocation, self.portfolio.target_allocation
            )
            urgency_score = urgency_analysis.get('urgency_score', 5)

            return {
                'active': urgency_score >= 6,
                'urgency': urgency_score,
                'analysis': urgency_analysis.get('analysis', ''),
                'ai_recommendation': 'rebalance' if urgency_score >= 6 else 'hold'
            }
        except Exception:
            return {'active': False, 'urgency': 0, 'error': 'AI analysis failed'}
```
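The drift trigger is the workhorse of the four, so it is worth sanity-checking in isolation. A stripped-down standalone version using the same 5% threshold as above, with hypothetical allocations:

```python
def drift_trigger(current: dict, target: dict, threshold: float = 0.05):
    """Return (should_rebalance, max_drift) for a current vs. target allocation."""
    max_drift = max(abs(current.get(asset, 0.0) - pct) for asset, pct in target.items())
    return max_drift > threshold, max_drift

target = {"SPY": 0.60, "IEF": 0.40}

# An 8-percentage-point drift trips the 5% threshold
fired, drift = drift_trigger({"SPY": 0.68, "IEF": 0.32}, target)
print(fired, round(drift, 2))
```

A 2-point drift with the same target would leave the trigger inactive, which is exactly the behavior that keeps the system from trading on noise.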
## Risk-Adjusted Allocation Strategies

### Modern Portfolio Theory Implementation

Implement modern portfolio theory for optimal asset allocation:

```python
# mpt_optimizer.py
import numpy as np
import pandas as pd
from scipy.optimize import minimize
import matplotlib.pyplot as plt

class MPTOptimizer:
    def __init__(self, returns_data, risk_free_rate=0.02):
        """
        Modern Portfolio Theory optimizer

        Args:
            returns_data: DataFrame of historical returns
            risk_free_rate: Annual risk-free rate for Sharpe ratio calculation
        """
        self.returns = returns_data
        self.risk_free_rate = risk_free_rate
        self.mean_returns = returns_data.mean() * 252  # Annualized
        self.cov_matrix = returns_data.cov() * 252     # Annualized

    def efficient_frontier(self, num_portfolios=100):
        """
        Generate efficient frontier portfolios

        Args:
            num_portfolios: Number of portfolios to generate

        Returns:
            DataFrame with efficient frontier data
        """
        results = np.zeros((3, num_portfolios))
        weights_array = np.zeros((num_portfolios, len(self.mean_returns)))

        # Generate target returns
        min_ret = self.mean_returns.min()
        max_ret = self.mean_returns.max()
        target_returns = np.linspace(min_ret, max_ret, num_portfolios)

        for i, target_return in enumerate(target_returns):
            # Optimize for minimum volatility at target return
            weights = self._optimize_for_target_return(target_return)
            if weights is not None:
                weights_array[i, :] = weights

                # Calculate portfolio metrics
                portfolio_return = np.sum(weights * self.mean_returns)
                portfolio_volatility = np.sqrt(np.dot(weights.T, np.dot(self.cov_matrix, weights)))
                sharpe_ratio = (portfolio_return - self.risk_free_rate) / portfolio_volatility

                results[0, i] = portfolio_return
                results[1, i] = portfolio_volatility
                results[2, i] = sharpe_ratio

        return pd.DataFrame({
            'Return': results[0],
            'Volatility': results[1],
            'Sharpe': results[2]
        }), weights_array

    def _optimize_for_target_return(self, target_return):
        """Optimize portfolio for a specific target return"""
        num_assets = len(self.mean_returns)
        args = (self.mean_returns, self.cov_matrix)
        constraints = ({'type': 'eq', 'fun': lambda x: portfolio_return(x, self.mean_returns) - target_return},
                       {'type': 'eq', 'fun': lambda x: np.sum(x) - 1})
        bounds = tuple((0, 1) for asset in range(num_assets))

        result = minimize(portfolio_volatility, num_assets * [1. / num_assets], args=args,
                          method='SLSQP', bounds=bounds, constraints=constraints)
        return result.x if result.success else None

    def max_sharpe_portfolio(self):
        """Find the portfolio with maximum Sharpe ratio"""
        num_assets = len(self.mean_returns)
        args = (self.mean_returns, self.cov_matrix, self.risk_free_rate)
        constraints = ({'type': 'eq', 'fun': lambda x: np.sum(x) - 1})
        bounds = tuple((0, 1) for asset in range(num_assets))

        result = minimize(negative_sharpe, num_assets * [1. / num_assets], args=args,
                          method='SLSQP', bounds=bounds, constraints=constraints)
        return result.x

    def min_variance_portfolio(self):
        """Find the minimum variance portfolio"""
        num_assets = len(self.mean_returns)
        args = (self.mean_returns, self.cov_matrix)
        constraints = ({'type': 'eq', 'fun': lambda x: np.sum(x) - 1})
        bounds = tuple((0, 1) for asset in range(num_assets))

        result = minimize(portfolio_volatility, num_assets * [1. / num_assets], args=args,
                          method='SLSQP', bounds=bounds, constraints=constraints)
        return result.x

# Helper functions for optimization
def portfolio_return(weights, mean_returns):
    return np.sum(mean_returns * weights)

def portfolio_volatility(weights, mean_returns, cov_matrix):
    return np.sqrt(np.dot(weights.T, np.dot(cov_matrix, weights)))

def negative_sharpe(weights, mean_returns, cov_matrix, risk_free_rate):
    p_ret = portfolio_return(weights, mean_returns)
    p_vol = portfolio_volatility(weights, mean_returns, cov_matrix)
    return -(p_ret - risk_free_rate) / p_vol
```
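As with minimum variance, the maximum-Sharpe (tangency) portfolio has a direct solution when short sales are allowed: weights proportional to Σ⁻¹(μ − r_f). A small NumPy sketch with illustrative inputs, useful as a cross-check on the bounded SLSQP result:

```python
import numpy as np

def tangency_weights(mean_returns, cov_matrix, risk_free_rate=0.02):
    """Unconstrained tangency portfolio: w proportional to inv(Sigma) @ (mu - rf)."""
    excess = np.asarray(mean_returns) - risk_free_rate
    raw = np.linalg.solve(np.asarray(cov_matrix), excess)
    return raw / raw.sum()  # normalize so the weights sum to 1

# Illustrative annualized inputs (not real market estimates)
mu = np.array([0.08, 0.03])
cov = np.array([[0.0324, 0.00216],
                [0.00216, 0.0036]])

w = tangency_weights(mu, cov)
print(w.round(3), w.sum())
```

Unlike the SLSQP version above, this closed form can produce negative (short) weights for other inputs; when it stays inside the 0-1 bounds, the two answers should agree closely.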
### Dynamic Risk Assessment

Create adaptive risk models that respond to changing market conditions:

```python
# dynamic_risk_model.py
import pandas as pd
import numpy as np
from scipy import stats

class DynamicRiskModel:
    def __init__(self, price_data, lookback_period=252):
        self.price_data = price_data
        self.lookback_period = lookback_period
        self.returns = price_data.pct_change().dropna()

    def calculate_rolling_risk_metrics(self, window=30):
        """
        Calculate rolling risk metrics for dynamic assessment

        Args:
            window: Rolling window size in days

        Returns:
            DataFrame with rolling risk metrics
        """
        rolling_metrics = pd.DataFrame(index=self.returns.index)

        # Rolling volatility (averaged across assets so it fits one column)
        rolling_metrics['volatility'] = self.returns.rolling(window).std().mean(axis=1) * np.sqrt(252)

        # Rolling correlation with market (using first asset as proxy)
        market_proxy = self.returns.iloc[:, 0]
        for col in self.returns.columns:
            rolling_metrics[f'{col}_correlation'] = self.returns[col].rolling(window).corr(market_proxy)

        # Rolling Sharpe ratio
        rolling_returns = self.returns.rolling(window).mean() * 252
        rolling_vol = self.returns.rolling(window).std() * np.sqrt(252)
        rolling_metrics['sharpe_ratio'] = (rolling_returns.mean(axis=1) - 0.02) / rolling_vol.mean(axis=1)

        # Value at Risk (VaR) - 5% confidence level, averaged across assets
        rolling_metrics['var_5pct'] = self.returns.rolling(window).quantile(0.05).mean(axis=1)

        # Maximum drawdown
        rolling_metrics['max_drawdown'] = self._calculate_rolling_drawdown(window)

        return rolling_metrics.dropna()

    def _calculate_rolling_drawdown(self, window):
        """Calculate rolling maximum drawdown"""
        cumulative_returns = (1 + self.returns.mean(axis=1)).cumprod()
        rolling_max = cumulative_returns.rolling(window, min_periods=1).max()
        drawdown = (cumulative_returns - rolling_max) / rolling_max
        return drawdown.rolling(window).min()

    def detect_regime_changes(self, metric='volatility', threshold=1.5):
        """
        Detect regime changes in risk characteristics

        Args:
            metric: Risk metric to monitor for changes
            threshold: Multiplier for detecting significant changes

        Returns:
            List of detected regime change dates
        """
        rolling_metrics = self.calculate_rolling_risk_metrics()
        if metric not in rolling_metrics.columns:
            raise ValueError(f"Metric '{metric}' not found in risk metrics")

        metric_series = rolling_metrics[metric].dropna()
        median_value = metric_series.median()

        # Detect periods where the metric exceeds the threshold
        regime_changes = []
        in_high_regime = False

        for date, value in metric_series.items():
            if not in_high_regime and value > median_value * threshold:
                regime_changes.append({'date': date, 'type': 'high_risk_entry', 'value': value})
                in_high_regime = True
            elif in_high_regime and value < median_value / threshold:
                regime_changes.append({'date': date, 'type': 'high_risk_exit', 'value': value})
                in_high_regime = False

        return regime_changes

    def adjust_allocation_for_risk_regime(self, base_allocation, current_risk_level):
        """
        Adjust portfolio allocation based on the current risk regime

        Args:
            base_allocation: Target allocation in normal conditions
            current_risk_level: Current risk level (1.0 = normal, >1.0 = high risk)

        Returns:
            Adjusted allocation dictionary
        """
        adjusted_allocation = base_allocation.copy()

        if current_risk_level > 1.3:  # High risk regime
            # Increase allocation to safer assets
            for asset in adjusted_allocation:
                if 'bonds' in asset.lower() or 'treasury' in asset.lower():
                    adjusted_allocation[asset] *= 1.2   # Increase safe assets
                else:
                    adjusted_allocation[asset] *= 0.9   # Reduce risky assets
        elif current_risk_level < 0.7:  # Low risk regime
            # Increase allocation to growth assets
            for asset in adjusted_allocation:
                if 'stock' in asset.lower() or 'equity' in asset.lower():
                    adjusted_allocation[asset] *= 1.1   # Increase growth assets
                else:
                    adjusted_allocation[asset] *= 0.95  # Reduce safe assets

        # Normalize to ensure weights sum to 1
        total_weight = sum(adjusted_allocation.values())
        adjusted_allocation = {k: v / total_weight for k, v in adjusted_allocation.items()}

        return adjusted_allocation
```
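The renormalization step at the end matters: scaling some sleeves up and others down leaves the weights summing to something other than 1 until they are rescaled. A compact standalone sketch of the high-risk branch (the asset names and keyword match on "bond"/"treasury" are illustrative):

```python
def tilt_to_safety(allocation: dict, safe_scale: float = 1.2, risky_scale: float = 0.9) -> dict:
    """Scale bond-like sleeves up and everything else down, then renormalize."""
    tilted = {
        asset: pct * (safe_scale if "bond" in asset.lower() or "treasury" in asset.lower()
                      else risky_scale)
        for asset, pct in allocation.items()
    }
    total = sum(tilted.values())  # 1.02 for the example below, so rescaling is required
    return {asset: pct / total for asset, pct in tilted.items()}

base = {"SPY": 0.6, "Treasury_IEF": 0.4}
adjusted = tilt_to_safety(base)
print({k: round(v, 3) for k, v in adjusted.items()})
```

The net effect is a modest shift toward the safe sleeve while the weights stay a valid allocation, which is exactly what the regime-adjustment method above produces.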
## Building the Complete Rebalancing System

### Main Application Controller

Integrate all components into a comprehensive portfolio management system:

```python
# main_app.py
import json
import logging
from datetime import datetime, timedelta
import yfinance as yf

# Configure logging
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
logger = logging.getLogger(__name__)

class SmartPortfolioRebalancer:
    def __init__(self, config_file="portfolio_config.json"):
        """
        Initialize the complete portfolio rebalancing system

        Args:
            config_file: JSON file with portfolio configuration
        """
        self.config = self._load_config(config_file)
        self.setup_components()

    def _load_config(self, config_file):
        """Load portfolio configuration from a JSON file"""
        try:
            with open(config_file, 'r') as f:
                return json.load(f)
        except FileNotFoundError:
            # Create default configuration
            default_config = {
                "target_allocation": {
                    "SPY": 0.4,  # S&P 500 ETF
                    "QQQ": 0.2,  # NASDAQ ETF
                    "IEF": 0.2,  # Treasury bonds
                    "GLD": 0.1,  # Gold ETF
                    "VNQ": 0.1   # Real estate ETF
                },
                "rebalancing_threshold": 0.05,
                "ollama_model": "llama2:13b",
                "risk_lookback_days": 252,
                "rebalancing_frequency_days": 30
            }
            with open(config_file, 'w') as f:
                json.dump(default_config, f, indent=2)
            logger.info(f"Created default configuration file: {config_file}")
            return default_config

    def setup_components(self):
        """Initialize all system components"""
        from portfolio_manager import PortfolioManager
        from ollama_analyzer import OllamaAnalyzer
        from rebalancing_engine import RebalancingEngine
        from risk_calculator import RiskCalculator
        from mpt_optimizer import MPTOptimizer
        from dynamic_risk_model import DynamicRiskModel

        # Initialize core components
        self.portfolio = PortfolioManager(
            self.config["target_allocation"],
            self.config["ollama_model"]
        )
        self.ai_analyzer = OllamaAnalyzer(self.config["ollama_model"])

        # Fetch historical data for risk calculations
        self.historical_data = self._fetch_historical_data()
        self.risk_calculator = RiskCalculator(
            self.historical_data,
            self.config["risk_lookback_days"]
        )
        self.mpt_optimizer = MPTOptimizer(self.historical_data.pct_change().dropna())
        self.dynamic_risk = DynamicRiskModel(self.historical_data)
        self.rebalancer = RebalancingEngine(
            self.portfolio, self.risk_calculator, self.ai_analyzer
        )
        logger.info("✅ All components initialized successfully")

    def _fetch_historical_data(self):
        """Fetch historical price data for all assets"""
        symbols = list(self.config["target_allocation"].keys())
        end_date = datetime.now()
        start_date = end_date - timedelta(days=self.config["risk_lookback_days"] + 100)

        # auto_adjust=False keeps the 'Adj Close' column on recent yfinance versions
        data = yf.download(symbols, start=start_date, end=end_date,
                           auto_adjust=False)['Adj Close']

        if len(symbols) == 1:
            # Handle single asset case
            data = data.to_frame(symbols[0])

        return data.dropna()

    def run_full_analysis(self, current_holdings, portfolio_value):
        """
        Run complete portfolio analysis and generate recommendations

        Args:
            current_holdings: Dict of current asset quantities
            portfolio_value: Total portfolio value

        Returns:
            Comprehensive analysis report
        """
        logger.info("🔍 Starting full portfolio analysis...")

        # 1. Fetch current prices
        current_prices = self.portfolio.fetch_current_prices()

        # 2. Calculate current allocation
        current_allocation = {}
        for asset, quantity in current_holdings.items():
            asset_value = quantity * current_prices[asset]
            current_allocation[asset] = asset_value / portfolio_value

        # 3. Check rebalancing triggers
        trigger_analysis = self.rebalancer.check_rebalancing_triggers(
            current_holdings, portfolio_value
        )

        # 4. Risk assessment
        current_risk_level = self._assess_current_risk_level()

        # 5. Optimize allocation using MPT
        try:
            optimal_weights = self.mpt_optimizer.max_sharpe_portfolio()
            # Map weights back to the optimizer's column order, not the config order
            optimal_allocation = dict(zip(self.mpt_optimizer.returns.columns, optimal_weights))
        except Exception:
            optimal_allocation = self.config["target_allocation"]
            logger.warning("⚠️ MPT optimization failed, using target allocation")

        # 6. Adjust for risk regime
        risk_adjusted_allocation = self.dynamic_risk.adjust_allocation_for_risk_regime(
            optimal_allocation, current_risk_level
        )

        # 7. AI analysis
        market_data = {
            'current_risk_level': current_risk_level,
            'volatility_regime': 'high' if current_risk_level > 1.3 else 'normal',
            'trigger_analysis': trigger_analysis
        }
        ai_analysis = self.ai_analyzer.analyze_market_conditions(
            {
                'current_allocation': current_allocation,
                'target_allocation': self.config["target_allocation"],
                'optimal_allocation': optimal_allocation,
                'risk_adjusted_allocation': risk_adjusted_allocation
            },
            market_data
        )

        # 8. Generate comprehensive report
        report = {
            'timestamp': datetime.now().isoformat(),
            'portfolio_value': portfolio_value,
            'current_allocation': current_allocation,
            'target_allocation': self.config["target_allocation"],
            'optimal_allocation': optimal_allocation,
            'risk_adjusted_allocation': risk_adjusted_allocation,
            'current_risk_level': current_risk_level,
            'trigger_analysis': trigger_analysis,
            'ai_analysis': ai_analysis,
            'recommendations': self._generate_recommendations(
                current_allocation, risk_adjusted_allocation, trigger_analysis
            )
        }
        logger.info("✅ Analysis complete")
        return report

    def _assess_current_risk_level(self):
        """Assess current market risk level"""
        try:
            rolling_metrics = self.dynamic_risk.calculate_rolling_risk_metrics(window=30)
            current_volatility = rolling_metrics['volatility'].iloc[-1]
            historical_volatility = rolling_metrics['volatility'].median()
            return current_volatility / historical_volatility
        except Exception:
            return 1.0  # Default to normal risk level

    def _generate_recommendations(self, current_allocation, target_allocation, trigger_analysis):
        """Generate specific rebalancing recommendations"""
        recommendations = []

        if trigger_analysis['should_rebalance']:
            for asset in target_allocation:
                current_pct = current_allocation.get(asset, 0)
                target_pct = target_allocation[asset]
                difference = target_pct - current_pct

                if abs(difference) > 0.01:  # 1% threshold
                    action = "BUY" if difference > 0 else "SELL"
                    recommendations.append({
                        'asset': asset,
                        'action': action,
                        'current_allocation': current_pct,
                        'target_allocation': target_pct,
                        'difference': difference,
                        'priority': 'HIGH' if abs(difference) > 0.05 else 'MEDIUM'
                    })
        return recommendations

# Example usage script
if __name__ == "__main__":
    # Initialize the rebalancer
    rebalancer = SmartPortfolioRebalancer()

    # Example current holdings
    current_holdings = {
        'SPY': 100,  # 100 shares
        'QQQ': 50,   # 50 shares
        'IEF': 200,  # 200 shares
        'GLD': 30,   # 30 shares
        'VNQ': 40    # 40 shares
    }
    portfolio_value = 50000  # $50,000 total

    # Run analysis
    analysis_report = rebalancer.run_full_analysis(current_holdings, portfolio_value)

    # Print results
    print("\n" + "=" * 60)
    print("SMART PORTFOLIO REBALANCING ANALYSIS")
    print("=" * 60)
    print(f"\nPortfolio Value: ${analysis_report['portfolio_value']:,}")
    print(f"Risk Level: {analysis_report['current_risk_level']:.2f}x normal")
    print(f"Rebalancing Needed: {'YES' if analysis_report['trigger_analysis']['should_rebalance'] else 'NO'}")

    if analysis_report['recommendations']:
        print("\n📋 REBALANCING RECOMMENDATIONS:")
        for rec in analysis_report['recommendations']:
            print(f"  {rec['action']} {rec['asset']}: "
                  f"{rec['current_allocation']:.1%} → {rec['target_allocation']:.1%} "
                  f"({rec['difference']:+.1%}) [{rec['priority']}]")

    print("\n🤖 AI ANALYSIS:")
    ai_text = analysis_report['ai_analysis']
    print(ai_text[:500] + "..." if len(ai_text) > 500 else ai_text)
```
### Configuration and Deployment

Create configuration files and deployment scripts. First, `portfolio_config.json`:

```json
{
  "target_allocation": {
    "SPY": 0.4,
    "QQQ": 0.2,
    "IEF": 0.2,
    "GLD": 0.1,
    "VNQ": 0.1
  },
  "rebalancing_threshold": 0.05,
  "ollama_model": "llama2:13b",
  "risk_lookback_days": 252,
  "rebalancing_frequency_days": 30,
  "max_position_size": 0.5,
  "min_cash_reserve": 0.05,
  "trading_cost_basis_points": 5
}
```
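A quick validation pass when loading this file catches the most common configuration mistake: weights that do not sum to 1. A sketch using the same keys as above (the `validate_config` helper is illustrative, not part of the system's modules):

```python
import json
import math

def validate_config(raw: str) -> dict:
    """Parse portfolio config JSON and check that the allocation sums to 1."""
    config = json.loads(raw)
    total = sum(config["target_allocation"].values())
    if not math.isclose(total, 1.0, abs_tol=1e-6):
        raise ValueError(f"target_allocation sums to {total}, expected 1.0")
    return config

sample = '{"target_allocation": {"SPY": 0.4, "QQQ": 0.2, "IEF": 0.2, "GLD": 0.1, "VNQ": 0.1}}'
config = validate_config(sample)
print("allocation OK:", config["target_allocation"])
```

Failing fast here is much cheaper than discovering a mis-specified allocation after the optimizer and triggers have already run against it.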
Next, a deployment script that installs dependencies and sets up automated runs:

```python
# deployment_script.py
import os
import shutil
import subprocess
import sys

def setup_environment():
    """Set up the complete environment for portfolio rebalancing"""
    print("🚀 Setting up Smart Portfolio Rebalancer...")

    # 1. Install Ollama if not present
    if shutil.which("ollama") is None:
        print("📦 Installing Ollama...")
        # Run through a shell so the curl | sh pipeline works as one command
        subprocess.run("curl -fsSL https://ollama.ai/install.sh | sh",
                       shell=True, check=True)

    # 2. Pull required model
    print("🤖 Downloading AI model...")
    subprocess.run(["ollama", "pull", "llama2:13b"], check=True)

    # 3. Install Python dependencies
    print("📚 Installing Python packages...")
    subprocess.run([
        sys.executable, "-m", "pip", "install",
        "pandas", "numpy", "yfinance", "requests",
        "matplotlib", "seaborn", "scipy"
    ], check=True)

    # 4. Create directory structure
    directories = [
        "data/historical",
        "logs",
        "reports",
        "config"
    ]
    for directory in directories:
        os.makedirs(directory, exist_ok=True)
        print(f"📁 Created directory: {directory}")

    # 5. Create systemd service for automated rebalancing
    create_systemd_service()
    print("✅ Setup complete! Run 'python main_app.py' to start analysis.")

def create_systemd_service():
    """Write systemd unit files for automated daily rebalancing"""
    service_content = """\
[Unit]
Description=Smart Portfolio Rebalancer
After=network.target

[Service]
Type=oneshot
User=portfolio
WorkingDirectory=/home/portfolio/portfolio-rebalancer
ExecStart=/usr/bin/python3 main_app.py --automated
StandardOutput=journal
StandardError=journal

[Install]
WantedBy=multi-user.target
"""

    timer_content = """\
[Unit]
Description=Run portfolio rebalancer daily
Requires=portfolio-rebalancer.service

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
"""

    # Write the unit files; copy them to /etc/systemd/system and run
    # 'systemctl enable --now portfolio-rebalancer.timer' to activate the schedule
    with open("portfolio-rebalancer.service", "w") as f:
        f.write(service_content)
    with open("portfolio-rebalancer.timer", "w") as f:
        f.write(timer_content)
    print("⏰ Created systemd service for automated rebalancing")

if __name__ == "__main__":
    setup_environment()
```
Performance Monitoring and Backtesting
Backtesting Framework
Create comprehensive backtesting to validate your investment strategy:
# backtesting_engine.py
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from datetime import datetime, timedelta
class PortfolioBacktester:
def __init__(self, historical_data, initial_capital=100000):
"""
Initialize backtesting engine
Args:
historical_data: DataFrame with historical prices
initial_capital: Starting portfolio value
"""
self.data = historical_data
self.initial_capital = initial_capital
self.results = {}
    def backtest_strategy(self, target_allocation, rebalance_frequency=30,
                          transaction_cost=0.001, start_date=None, end_date=None):
        """
        Backtest portfolio rebalancing strategy

        Args:
            target_allocation: Dict with target asset allocations
            rebalance_frequency: Days between rebalances
            transaction_cost: Transaction cost as a fraction of traded value
            start_date: Backtest start date
            end_date: Backtest end date

        Returns:
            Dictionary with backtest results
        """
        # Set date range
        if start_date is None:
            start_date = self.data.index[252]  # Skip the first year of data
        if end_date is None:
            end_date = self.data.index[-1]
        backtest_data = self.data.loc[start_date:end_date].copy()

        # Track the dollar value held in each asset; compounding each holding
        # by its daily return keeps allocation drift and rebalancing consistent
        asset_values = {asset: self.initial_capital * weight
                        for asset, weight in target_allocation.items()}
        prev_prices = backtest_data.iloc[0]

        portfolio_value = []
        portfolio_weights = []
        rebalance_dates = []
        transaction_costs = []
        last_rebalance = start_date

        for date in backtest_data.index:
            prices = backtest_data.loc[date]
            # Grow each holding by its daily return
            for asset in asset_values:
                if asset in prices.index and prev_prices[asset] > 0:
                    asset_values[asset] *= prices[asset] / prev_prices[asset]
            prev_prices = prices

            total_value = sum(asset_values.values())
            portfolio_value.append(total_value)
            portfolio_weights.append({a: v / total_value for a, v in asset_values.items()})

            # Rebalance on schedule
            if (date - last_rebalance).days >= rebalance_frequency:
                # Turnover = total absolute weight traded back to target
                turnover = sum(abs(target_allocation[a] - asset_values[a] / total_value)
                               for a in target_allocation)
                cost = total_value * turnover * transaction_cost
                transaction_costs.append(cost)
                total_value -= cost
                # Reset holdings to target weights, net of transaction costs
                asset_values = {a: total_value * w for a, w in target_allocation.items()}
                last_rebalance = date
                rebalance_dates.append(date)

        # Calculate performance metrics
        portfolio_returns = pd.Series(portfolio_value, index=backtest_data.index).pct_change().dropna()
        results = {
            'portfolio_value': pd.Series(portfolio_value, index=backtest_data.index),
            'portfolio_returns': portfolio_returns,
            'final_value': portfolio_value[-1],
            'total_return': (portfolio_value[-1] / self.initial_capital) - 1,
            'annualized_return': self._calculate_annualized_return(portfolio_returns),
            'volatility': portfolio_returns.std() * np.sqrt(252),
            'sharpe_ratio': self._calculate_sharpe_ratio(portfolio_returns),
            'max_drawdown': self._calculate_max_drawdown(portfolio_value),
            'rebalance_dates': rebalance_dates,
            'total_transaction_costs': sum(transaction_costs),
            'number_of_rebalances': len(rebalance_dates)
        }
        return results
    def _calculate_annualized_return(self, returns):
        """Calculate annualized return"""
        cumulative_return = (1 + returns).prod() - 1
        years = len(returns) / 252
        return (1 + cumulative_return) ** (1 / years) - 1

    def _calculate_sharpe_ratio(self, returns, risk_free_rate=0.02):
        """Calculate Sharpe ratio"""
        excess_returns = returns.mean() * 252 - risk_free_rate
        return excess_returns / (returns.std() * np.sqrt(252))

    def _calculate_max_drawdown(self, portfolio_values):
        """Calculate maximum drawdown"""
        peak = np.maximum.accumulate(portfolio_values)
        drawdown = (np.array(portfolio_values) - peak) / peak
        return np.min(drawdown)
    def compare_strategies(self, strategies_config):
        """
        Compare multiple rebalancing strategies

        Args:
            strategies_config: List of strategy configurations

        Returns:
            Comparison DataFrame
        """
        comparison_results = []
        for config in strategies_config:
            name = config['name']
            allocation = config['allocation']
            frequency = config.get('frequency', 30)
            results = self.backtest_strategy(allocation, frequency)
            comparison_results.append({
                'Strategy': name,
                'Total Return': results['total_return'],
                'Annualized Return': results['annualized_return'],
                'Volatility': results['volatility'],
                'Sharpe Ratio': results['sharpe_ratio'],
                'Max Drawdown': results['max_drawdown'],
                'Rebalances': results['number_of_rebalances'],
                'Transaction Costs': results['total_transaction_costs']
            })
        return pd.DataFrame(comparison_results)
    def plot_backtest_results(self, results):
        """
        Plot comprehensive backtest results

        Args:
            results: Backtest results dictionary
        """
        fig, axes = plt.subplots(2, 2, figsize=(15, 10))

        # Portfolio value over time
        axes[0, 0].plot(results['portfolio_value'].index, results['portfolio_value'].values)
        axes[0, 0].set_title('Portfolio Value Over Time')
        axes[0, 0].set_ylabel('Portfolio Value ($)')
        axes[0, 0].grid(True)

        # Mark rebalance dates
        for rebalance_date in results['rebalance_dates']:
            axes[0, 0].axvline(x=rebalance_date, color='red', alpha=0.3, linestyle='--')

        # Rolling Sharpe ratio
        rolling_sharpe = results['portfolio_returns'].rolling(252).apply(
            lambda x: x.mean() / x.std() * np.sqrt(252)
        )
        axes[0, 1].plot(rolling_sharpe.index, rolling_sharpe.values)
        axes[0, 1].set_title('Rolling 1-Year Sharpe Ratio')
        axes[0, 1].set_ylabel('Sharpe Ratio')
        axes[0, 1].grid(True)

        # Drawdown chart (scaled to percent to match the axis label)
        peak = results['portfolio_value'].cummax()
        drawdown = (results['portfolio_value'] - peak) / peak * 100
        axes[1, 0].fill_between(drawdown.index, drawdown.values, 0, alpha=0.3, color='red')
        axes[1, 0].set_title('Portfolio Drawdown')
        axes[1, 0].set_ylabel('Drawdown (%)')
        axes[1, 0].grid(True)

        # Monthly returns heatmap (rows = years, columns = months)
        monthly_returns = results['portfolio_returns'].resample('M').apply(lambda x: (1 + x).prod() - 1)
        monthly_returns_pivot = monthly_returns.groupby(
            [monthly_returns.index.year, monthly_returns.index.month]
        ).first().unstack()
        axes[1, 1].imshow(monthly_returns_pivot.values, cmap='RdYlGn', aspect='auto')
        axes[1, 1].set_title('Monthly Returns Heatmap')
        axes[1, 1].set_xlabel('Month')
        axes[1, 1].set_ylabel('Year')

        plt.tight_layout()
        plt.savefig('backtest_results.png', dpi=300, bbox_inches='tight')
        plt.show()
# Example usage
if __name__ == "__main__":
    # Load historical data; auto_adjust=False keeps the 'Adj Close' column,
    # which recent yfinance versions drop by default in favor of adjusted 'Close'
    import yfinance as yf

    symbols = ['SPY', 'QQQ', 'IEF', 'GLD', 'VNQ']
    data = yf.download(symbols, start='2010-01-01', end='2024-01-01',
                       auto_adjust=False)['Adj Close']

    # Initialize backtester
    backtester = PortfolioBacktester(data)

    # Define strategies to compare
    strategies = [
        {
            'name': 'Balanced 60/40',
            'allocation': {'SPY': 0.6, 'IEF': 0.4},
            'frequency': 90
        },
        {
            'name': 'Growth Portfolio',
            'allocation': {'SPY': 0.4, 'QQQ': 0.3, 'IEF': 0.2, 'VNQ': 0.1},
            'frequency': 30
        },
        {
            'name': 'Conservative Portfolio',
            'allocation': {'SPY': 0.3, 'IEF': 0.5, 'GLD': 0.1, 'VNQ': 0.1},
            'frequency': 60
        }
    ]

    # Compare strategies
    comparison = backtester.compare_strategies(strategies)
    print("\n📊 STRATEGY COMPARISON:")
    print(comparison.round(4))

    # Detailed backtest of the growth strategy
    best_strategy = strategies[1]
    detailed_results = backtester.backtest_strategy(
        best_strategy['allocation'],
        best_strategy['frequency']
    )

    print(f"\n📈 DETAILED RESULTS - {best_strategy['name']}:")
    print(f"Total Return: {detailed_results['total_return']:.2%}")
    print(f"Annualized Return: {detailed_results['annualized_return']:.2%}")
    print(f"Volatility: {detailed_results['volatility']:.2%}")
    print(f"Sharpe Ratio: {detailed_results['sharpe_ratio']:.3f}")
    print(f"Max Drawdown: {detailed_results['max_drawdown']:.2%}")

    # Plot results
    backtester.plot_backtest_results(detailed_results)
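As a quick sanity check on the drawdown math the backtester uses, the running-peak formula can be exercised on a tiny synthetic series (the values below are made up purely for illustration):

```python
import numpy as np

def max_drawdown(values):
    # Running peak so far, then the worst peak-to-trough decline relative to it
    peak = np.maximum.accumulate(values)
    return float(np.min((values - peak) / peak))

values = np.array([100.0, 110.0, 99.0, 120.0, 90.0])
print(max_drawdown(values))  # -0.25: the 120 -> 90 decline
```

The 110 → 99 dip is a 10% drawdown, but the later 120 → 90 decline is 25%, so the function correctly reports the deeper of the two.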
Conclusion
Smart portfolio rebalancing with Ollama transforms manual investment management into an automated, AI-driven system. By combining modern portfolio theory, risk-adjusted asset allocation, and local AI analysis, you create a sophisticated investment strategy that adapts to changing market conditions.
This system provides several key advantages over traditional approaches. The risk management framework continuously monitors market volatility and adjusts allocations accordingly. The AI analysis offers objective, data-driven insights without emotional interference. Local processing with Ollama ensures your financial data remains private while avoiding per-query API costs.
The automated triggers prevent costly allocation drift while the backtesting framework validates strategy performance across different market cycles. Most importantly, the system scales from simple buy-and-hold strategies to complex, multi-factor diversification models.
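A minimal sketch of such a drift trigger: rebalance whenever any asset strays beyond a tolerance band around its target weight (the 5-point band and asset names here are illustrative assumptions, not the system's defaults):

```python
# Hypothetical drift-band trigger: fire when any asset's weight drifts
# more than `band` (in absolute weight terms) from its target.
def needs_rebalance(current, target, band=0.05):
    return any(abs(current.get(asset, 0.0) - weight) > band
               for asset, weight in target.items())

target = {"SPY": 0.60, "IEF": 0.40}
print(needs_rebalance({"SPY": 0.67, "IEF": 0.33}, target))  # True: SPY drifted 7 points
print(needs_rebalance({"SPY": 0.62, "IEF": 0.38}, target))  # False: within the band
```

Band-based triggers like this trade slightly higher monitoring effort for fewer, better-timed trades than fixed-calendar rebalancing.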
Portfolio rebalancing success depends on consistent execution rather than perfect timing. This Ollama-powered system ensures disciplined, systematic rebalancing that maintains your target asset allocation while optimizing for risk-adjusted returns. Start with the basic implementation and gradually add sophisticated features as your confidence grows.
Ready to automate your portfolio management? Download the complete code repository and begin building your intelligent rebalancing system today.