Ever watched Bitcoin crash 10% while you were reading "experts predict new highs" headlines? Welcome to crypto's emotional rollercoaster, where sentiment changes faster than a day trader's portfolio. Traditional news analysis takes hours—but crypto markets move in minutes.
Ollama changes this game completely. This open-source AI platform lets you analyze crypto news sentiment in real-time, giving you market intelligence before the crowd catches on. You'll build automated systems that process hundreds of news articles instantly, extract sentiment scores, and deliver actionable insights for trading decisions.
This guide shows you exactly how to set up Ollama for crypto sentiment analysis, from installation to deployment. You'll create a system that monitors news feeds, processes sentiment data, and generates market intelligence reports automatically.
## Why Crypto Sentiment Analysis Matters for Trading Success
Cryptocurrency markets run far more on emotion than on fundamentals. When Elon Musk tweets about Dogecoin or regulatory news breaks, sentiment drives price action faster than technical analysis can react.
Traditional sentiment analysis fails crypto traders because:
- Manual news reading takes too long
- Human bias skews interpretation
- Market opportunities disappear quickly
- Volume of information overwhelms individual analysis
Automated sentiment analysis with Ollama solves these problems by:
- Processing hundreds of articles per minute
- Maintaining objective sentiment scoring
- Delivering real-time market intelligence
- Scaling analysis across multiple cryptocurrencies
## Understanding Ollama for Cryptocurrency Market Intelligence
Ollama runs large language models locally on your machine without external API dependencies. This approach gives you complete control over your crypto sentiment analysis pipeline while maintaining data privacy.
Key advantages for crypto analysis:
- Privacy: Your trading strategies stay confidential
- Speed: Local processing eliminates API delays
- Cost: No per-request fees for high-volume analysis
- Reliability: Works offline during market volatility
Ollama supports multiple models optimized for different tasks:
- Llama 3.1: Best overall performance for sentiment analysis
- Mistral: Faster processing for real-time applications
- CodeLlama: Tuned for code rather than prose, useful if your pipeline also parses technical documentation or smart-contract code
## Setting Up Ollama for Crypto News Analysis

### Installation and Model Setup

First, install Ollama on your system:

```bash
# Download and install Ollama
curl -fsSL https://ollama.ai/install.sh | sh

# Pull the recommended model for sentiment analysis
ollama pull llama3.1:8b

# Verify installation
ollama list
```
### Basic Sentiment Analysis Test

Test Ollama with a crypto news sample:

```bash
ollama run llama3.1:8b "Analyze the sentiment of this crypto news: 'Bitcoin reaches new all-time high as institutional adoption accelerates. Major banks announce crypto custody services.' Provide a sentiment score from -1 to 1 and an explanation."
```

Typical output (LLM responses vary between runs, so yours won't match word for word):

```text
Sentiment Score: 0.85
Explanation: This news shows strong positive sentiment. Keywords like "all-time high," "institutional adoption," and "major banks" indicate bullish market conditions. The announcement of custody services suggests mainstream acceptance, typically driving positive price action.
```
## Building a Real-Time Crypto Sentiment Analysis System

### Project Structure Setup

Create your sentiment analysis project:

```bash
mkdir crypto-sentiment-analyzer
cd crypto-sentiment-analyzer

# Create the project structure (including the modules built below)
mkdir -p {data,models,scripts,outputs}
touch requirements.txt config.py main.py news_collector.py sentiment_analyzer.py report_generator.py
```
### Core Dependencies Installation

```text
# requirements.txt
requests==2.31.0
pandas==2.1.0
numpy==1.26.0
feedparser==6.0.10
python-dotenv==1.0.0
schedule==1.2.0
psutil==5.9.5
ollama==0.1.7
```

Install the dependencies:

```bash
pip install -r requirements.txt
```
### Configuration Setup

```python
# config.py
from dotenv import load_dotenv

load_dotenv()

# News sources for crypto sentiment analysis
CRYPTO_NEWS_SOURCES = [
    "https://cointelegraph.com/rss",
    "https://cryptonews.com/news/feed/",
    "https://coindesk.com/arc/outboundfeeds/rss/",
    "https://bitcoinmagazine.com/.rss/full/",
    "https://decrypt.co/feed"
]

# Ollama configuration
OLLAMA_MODEL = "llama3.1:8b"
OLLAMA_HOST = "http://localhost:11434"

# Analysis parameters
SENTIMENT_THRESHOLD_POSITIVE = 0.3
SENTIMENT_THRESHOLD_NEGATIVE = -0.3
ANALYSIS_INTERVAL_MINUTES = 15

# Output settings
OUTPUT_DIR = "outputs"
DATA_DIR = "data"
```
### News Data Collection Module

```python
# news_collector.py
import json
from datetime import datetime

import feedparser


class CryptoNewsCollector:
    def __init__(self, sources):
        self.sources = sources
        self.collected_articles = []

    def fetch_news_from_source(self, source_url):
        """Fetch news articles from an RSS feed"""
        try:
            feed = feedparser.parse(source_url)
            articles = []
            for entry in feed.entries:
                article = {
                    'title': entry.title,
                    'summary': entry.summary if hasattr(entry, 'summary') else '',
                    'link': entry.link,
                    'published': entry.published if hasattr(entry, 'published') else '',
                    'source': feed.feed.title if hasattr(feed.feed, 'title') else source_url,
                    'collected_at': datetime.now().isoformat()
                }
                articles.append(article)
            return articles
        except Exception as e:
            print(f"Error fetching from {source_url}: {e}")
            return []

    def collect_all_news(self):
        """Collect news from all configured sources"""
        all_articles = []
        for source in self.sources:
            print(f"Collecting from: {source}")
            articles = self.fetch_news_from_source(source)
            all_articles.extend(articles)
        self.collected_articles = all_articles
        return all_articles

    def save_articles(self, filename):
        """Save collected articles to a JSON file"""
        with open(filename, 'w') as f:
            json.dump(self.collected_articles, f, indent=2)
        print(f"Saved {len(self.collected_articles)} articles to {filename}")
```
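If you want to see what the collector extracts without hitting a live feed, the same fields can be pulled from raw RSS with the standard library. This is a minimal sketch on a made-up feed snippet (feedparser does this and much more in production; the sample XML and `parse_rss` helper are purely illustrative):

```python
# Illustrative only: stdlib RSS parsing on a fabricated feed snippet.
import xml.etree.ElementTree as ET

SAMPLE_RSS = """<rss version="2.0"><channel>
  <title>Demo Crypto Feed</title>
  <item>
    <title>Bitcoin climbs after ETF inflows</title>
    <link>https://example.com/btc-etf</link>
    <description>Spot ETF inflows hit a weekly record.</description>
  </item>
</channel></rss>"""

def parse_rss(xml_text):
    """Extract the same title/link/summary fields the collector stores."""
    root = ET.fromstring(xml_text)
    articles = []
    for item in root.iter("item"):
        articles.append({
            "title": item.findtext("title", default=""),
            "link": item.findtext("link", default=""),
            "summary": item.findtext("description", default=""),
        })
    return articles

print(parse_rss(SAMPLE_RSS)[0]["title"])  # Bitcoin climbs after ETF inflows
```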
### Sentiment Analysis Engine

```python
# sentiment_analyzer.py
import hashlib
import re
from datetime import datetime

import ollama


class CryptoSentimentAnalyzer:
    def __init__(self, model_name="llama3.1:8b"):
        self.model_name = model_name
        self.client = ollama.Client()

    def create_sentiment_prompt(self, title, content):
        """Create an optimized prompt for crypto sentiment analysis"""
        prompt = f"""
Analyze the sentiment of this cryptocurrency news article:

Title: {title}
Content: {content}

Instructions:
1. Provide a sentiment score from -1.0 (very negative) to 1.0 (very positive)
2. Identify key sentiment indicators (words, phrases, events)
3. Assess market impact potential (LOW, MEDIUM, HIGH)
4. Extract mentioned cryptocurrencies

Response format:
SENTIMENT_SCORE: [score]
KEY_INDICATORS: [list of indicators]
MARKET_IMPACT: [LOW/MEDIUM/HIGH]
CRYPTOCURRENCIES: [list of mentioned cryptos]
EXPLANATION: [brief explanation]
"""
        return prompt

    def analyze_article_sentiment(self, article):
        """Analyze sentiment for a single article"""
        try:
            prompt = self.create_sentiment_prompt(
                article['title'],
                article.get('summary', '')
            )
            response = self.client.generate(
                model=self.model_name,
                prompt=prompt,
                stream=False
            )
            return self.parse_sentiment_response(response['response'], article)
        except Exception as e:
            print(f"Error analyzing article: {e}")
            return None

    def parse_sentiment_response(self, response, article):
        """Parse the Ollama response into structured sentiment data"""
        try:
            # Extract sentiment score
            score_match = re.search(r'SENTIMENT_SCORE:\s*([-+]?\d*\.?\d+)', response)
            sentiment_score = float(score_match.group(1)) if score_match else 0.0

            # Extract key indicators
            indicators_match = re.search(r'KEY_INDICATORS:\s*(.+?)(?=MARKET_IMPACT|$)', response, re.DOTALL)
            key_indicators = indicators_match.group(1).strip() if indicators_match else ""

            # Extract market impact
            impact_match = re.search(r'MARKET_IMPACT:\s*(LOW|MEDIUM|HIGH)', response)
            market_impact = impact_match.group(1) if impact_match else "LOW"

            # Extract cryptocurrencies
            crypto_match = re.search(r'CRYPTOCURRENCIES:\s*(.+?)(?=EXPLANATION|$)', response, re.DOTALL)
            cryptocurrencies = crypto_match.group(1).strip() if crypto_match else ""

            # Extract explanation
            explanation_match = re.search(r'EXPLANATION:\s*(.+)', response, re.DOTALL)
            explanation = explanation_match.group(1).strip() if explanation_match else ""

            return {
                # hashlib gives a stable id across runs (the built-in hash()
                # is salted per process)
                'article_id': hashlib.md5((article['title'] + article['link']).encode()).hexdigest(),
                'title': article['title'],
                'source': article['source'],
                'link': article['link'],
                'sentiment_score': sentiment_score,
                'key_indicators': key_indicators,
                'market_impact': market_impact,
                'cryptocurrencies': cryptocurrencies,
                'explanation': explanation,
                'analyzed_at': datetime.now().isoformat(),
                'raw_response': response
            }
        except Exception as e:
            print(f"Error parsing sentiment response: {e}")
            return None

    def analyze_batch(self, articles):
        """Analyze sentiment for multiple articles"""
        results = []
        for i, article in enumerate(articles):
            print(f"Analyzing article {i+1}/{len(articles)}: {article['title'][:50]}...")
            sentiment_result = self.analyze_article_sentiment(article)
            if sentiment_result:
                results.append(sentiment_result)
        return results
```
### Market Intelligence Report Generator

```python
# report_generator.py
import json
from collections import Counter
from datetime import datetime

import pandas as pd


class MarketIntelligenceReporter:
    def __init__(self):
        self.sentiment_data = []

    def load_sentiment_data(self, filepath):
        """Load sentiment analysis results"""
        with open(filepath, 'r') as f:
            self.sentiment_data = json.load(f)

    def calculate_overall_sentiment(self):
        """Calculate overall market sentiment metrics"""
        if not self.sentiment_data:
            return None
        scores = [item['sentiment_score'] for item in self.sentiment_data]
        return {
            'average_sentiment': sum(scores) / len(scores),
            'positive_articles': len([s for s in scores if s > 0.3]),
            'negative_articles': len([s for s in scores if s < -0.3]),
            'neutral_articles': len([s for s in scores if -0.3 <= s <= 0.3]),
            'total_articles': len(scores),
            'sentiment_volatility': pd.Series(scores).std()
        }

    def analyze_cryptocurrency_mentions(self):
        """Analyze which cryptocurrencies are mentioned most"""
        crypto_mentions = []
        common_cryptos = ['bitcoin', 'ethereum', 'btc', 'eth', 'dogecoin',
                          'cardano', 'solana', 'polkadot', 'chainlink']
        for item in self.sentiment_data:
            cryptos = item.get('cryptocurrencies', '').lower()
            for crypto in common_cryptos:
                if crypto in cryptos:
                    crypto_mentions.append(crypto)
        return Counter(crypto_mentions).most_common(10)

    def identify_high_impact_news(self):
        """Find news with high market impact potential"""
        high_impact = [
            item for item in self.sentiment_data
            if item.get('market_impact') == 'HIGH'
        ]
        return sorted(high_impact, key=lambda x: abs(x['sentiment_score']), reverse=True)

    def generate_trading_signals(self):
        """Generate basic trading signals based on sentiment"""
        recent_sentiment = self.calculate_overall_sentiment()
        if not recent_sentiment:
            return {"signal": "HOLD", "confidence": "LOW", "reason": "Insufficient data"}
        avg_sentiment = recent_sentiment['average_sentiment']
        positive_ratio = recent_sentiment['positive_articles'] / recent_sentiment['total_articles']
        if avg_sentiment > 0.4 and positive_ratio > 0.6:
            return {
                "signal": "BUY",
                "confidence": "HIGH",
                "reason": f"Strong positive sentiment: {avg_sentiment:.2f}, {positive_ratio:.1%} positive articles"
            }
        elif avg_sentiment < -0.4 and positive_ratio < 0.3:
            return {
                "signal": "SELL",
                "confidence": "HIGH",
                "reason": f"Strong negative sentiment: {avg_sentiment:.2f}, only {positive_ratio:.1%} positive articles"
            }
        else:
            return {
                "signal": "HOLD",
                "confidence": "MEDIUM",
                "reason": f"Mixed sentiment: {avg_sentiment:.2f}, {positive_ratio:.1%} positive articles"
            }

    def create_html_report(self, output_file):
        """Generate a comprehensive HTML report"""
        overall_sentiment = self.calculate_overall_sentiment()
        crypto_mentions = self.analyze_cryptocurrency_mentions()
        high_impact_news = self.identify_high_impact_news()
        trading_signal = self.generate_trading_signals()

        html_content = f"""<!DOCTYPE html>
<html>
<head>
<title>Crypto Sentiment Analysis Report</title>
<style>
    body {{ font-family: Arial, sans-serif; margin: 40px; }}
    .header {{ background: #1f2937; color: white; padding: 20px; border-radius: 8px; }}
    .metric {{ background: #f3f4f6; padding: 15px; margin: 10px 0; border-radius: 5px; }}
    .positive {{ color: #10b981; font-weight: bold; }}
    .negative {{ color: #ef4444; font-weight: bold; }}
    .neutral {{ color: #6b7280; font-weight: bold; }}
    .signal {{ padding: 20px; margin: 20px 0; border-radius: 8px; }}
    .signal.BUY {{ background: #d1fae5; border: 2px solid #10b981; }}
    .signal.SELL {{ background: #fee2e2; border: 2px solid #ef4444; }}
    .signal.HOLD {{ background: #fef3c7; border: 2px solid #f59e0b; }}
</style>
</head>
<body>
<div class="header">
    <h1>Crypto Market Sentiment Analysis</h1>
    <p>Generated: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}</p>
</div>
<div class="signal {trading_signal['signal']}">
    <h2>Trading Signal: {trading_signal['signal']}</h2>
    <p><strong>Confidence:</strong> {trading_signal['confidence']}</p>
    <p><strong>Reason:</strong> {trading_signal['reason']}</p>
</div>
<h2>Overall Sentiment Metrics</h2>
<div class="metric">
    <strong>Average Sentiment:</strong> <span class="{'positive' if overall_sentiment['average_sentiment'] > 0 else 'negative'}">{overall_sentiment['average_sentiment']:.3f}</span>
</div>
<div class="metric">
    <strong>Article Distribution:</strong>
    <span class="positive">{overall_sentiment['positive_articles']} Positive</span> |
    <span class="neutral">{overall_sentiment['neutral_articles']} Neutral</span> |
    <span class="negative">{overall_sentiment['negative_articles']} Negative</span>
</div>
<div class="metric">
    <strong>Sentiment Volatility:</strong> {overall_sentiment['sentiment_volatility']:.3f}
</div>
<h2>Top Cryptocurrency Mentions</h2>
<ul>
"""
        for crypto, count in crypto_mentions:
            html_content += f"<li><strong>{crypto.title()}:</strong> {count} mentions</li>"

        html_content += """
</ul>
<h2>High Impact News</h2>
<ul>
"""
        for news in high_impact_news[:5]:
            sentiment_class = 'positive' if news['sentiment_score'] > 0 else 'negative'
            html_content += f"""
<li>
    <strong><a href="{news['link']}" target="_blank">{news['title']}</a></strong><br>
    <span class="{sentiment_class}">Sentiment: {news['sentiment_score']:.2f}</span> |
    Source: {news['source']}<br>
    <em>{news['explanation'][:150]}...</em>
</li>
"""
        html_content += """
</ul>
</body>
</html>
"""
        with open(output_file, 'w') as f:
            f.write(html_content)
        print(f"HTML report generated: {output_file}")
```
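Before wiring the reporter into the pipeline, the metric logic in `calculate_overall_sentiment` can be sanity-checked on toy scores. This is a standalone re-implementation (the `overall_metrics` helper is just for this sketch), using `statistics.stdev` in place of pandas' `Series.std`; both compute the sample standard deviation:

```python
# Standalone sanity check of the overall-sentiment metrics on toy scores.
from statistics import stdev

def overall_metrics(scores, threshold=0.3):
    """Same bucketing as calculate_overall_sentiment, stdlib only."""
    return {
        "average_sentiment": sum(scores) / len(scores),
        "positive_articles": sum(1 for s in scores if s > threshold),
        "negative_articles": sum(1 for s in scores if s < -threshold),
        "neutral_articles": sum(1 for s in scores if -threshold <= s <= threshold),
        "total_articles": len(scores),
        "sentiment_volatility": stdev(scores) if len(scores) > 1 else 0.0,
    }

m = overall_metrics([0.6, -0.5, 0.1, 0.4])
print(m["average_sentiment"])  # 0.15
print(m["positive_articles"])  # 2
```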
### Main Application Script

```python
# main.py
import json
import os
import time
from datetime import datetime

import schedule

from config import *
from news_collector import CryptoNewsCollector
from report_generator import MarketIntelligenceReporter
from sentiment_analyzer import CryptoSentimentAnalyzer


def run_sentiment_analysis():
    """Complete sentiment analysis workflow"""
    print(f"Starting crypto sentiment analysis - {datetime.now()}")

    # Step 1: Collect news
    collector = CryptoNewsCollector(CRYPTO_NEWS_SOURCES)
    articles = collector.collect_all_news()
    if not articles:
        print("No articles collected, skipping analysis")
        return

    # Save raw articles
    articles_file = os.path.join(DATA_DIR, f"articles_{datetime.now().strftime('%Y%m%d_%H%M%S')}.json")
    collector.save_articles(articles_file)

    # Step 2: Analyze sentiment
    analyzer = CryptoSentimentAnalyzer(OLLAMA_MODEL)
    sentiment_results = analyzer.analyze_batch(articles[:10])  # Limit for demo

    # Save sentiment results
    sentiment_file = os.path.join(DATA_DIR, f"sentiment_{datetime.now().strftime('%Y%m%d_%H%M%S')}.json")
    with open(sentiment_file, 'w') as f:
        json.dump(sentiment_results, f, indent=2)

    # Step 3: Generate report
    reporter = MarketIntelligenceReporter()
    reporter.sentiment_data = sentiment_results
    report_file = os.path.join(OUTPUT_DIR, f"sentiment_report_{datetime.now().strftime('%Y%m%d_%H%M%S')}.html")
    reporter.create_html_report(report_file)

    print(f"Analysis complete. Report saved to: {report_file}")


def main():
    """Main application entry point"""
    # Ensure directories exist
    os.makedirs(DATA_DIR, exist_ok=True)
    os.makedirs(OUTPUT_DIR, exist_ok=True)

    # Run initial analysis
    print("Running initial sentiment analysis...")
    run_sentiment_analysis()

    # Schedule regular analysis
    schedule.every(ANALYSIS_INTERVAL_MINUTES).minutes.do(run_sentiment_analysis)
    print(f"Scheduled analysis every {ANALYSIS_INTERVAL_MINUTES} minutes")
    print("Press Ctrl+C to stop")

    # Keep the script running
    while True:
        schedule.run_pending()
        time.sleep(60)


if __name__ == "__main__":
    main()
```
## Advanced Sentiment Analysis Techniques

### Multi-Model Ensemble Analysis

Combine multiple Ollama models for more accurate sentiment scoring:

```python
# ensemble_analyzer.py
import pandas as pd

from sentiment_analyzer import CryptoSentimentAnalyzer


class EnsembleSentimentAnalyzer:
    def __init__(self):
        self.models = ["llama3.1:8b", "mistral:7b", "qwen2:7b"]

    def analyze_single_model(self, article, model):
        """Run the shared sentiment prompt against one model,
        reusing CryptoSentimentAnalyzer's prompt and parser"""
        return CryptoSentimentAnalyzer(model_name=model).analyze_article_sentiment(article)

    def analyze_with_ensemble(self, article):
        """Analyze sentiment using multiple models"""
        results = []
        for model in self.models:
            try:
                result = self.analyze_single_model(article, model)
                if result:
                    results.append(result)
            except Exception as e:
                print(f"Error with model {model}: {e}")
        return self.combine_results(results)

    def combine_results(self, results):
        """Combine sentiment scores from multiple models"""
        if not results:
            return None
        # Calculate a weighted average
        scores = [r['sentiment_score'] for r in results]
        confidence_scores = [r.get('confidence', 1.0) for r in results]
        weighted_score = sum(s * c for s, c in zip(scores, confidence_scores)) / sum(confidence_scores)
        return {
            'ensemble_sentiment': weighted_score,
            'model_agreement': self.calculate_agreement(scores),
            'individual_results': results
        }

    def calculate_agreement(self, scores):
        """Calculate how much the models agree on sentiment"""
        if len(scores) < 2:
            return 1.0
        variance = pd.Series(scores).var()
        # Convert variance to an agreement score (0-1)
        return max(0, 1 - (variance * 2))
```
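The variance-to-agreement heuristic is easiest to get a feel for in isolation. This sketch reuses the same formula with `statistics.variance`, which matches pandas' `Series.var` (sample variance):

```python
# The agreement heuristic from calculate_agreement, isolated for inspection.
from statistics import variance

def agreement(scores):
    """1.0 when models agree perfectly, clamped to 0.0 as they diverge."""
    if len(scores) < 2:
        return 1.0
    return max(0.0, 1 - variance(scores) * 2)

print(agreement([0.8, 0.8, 0.8]))  # 1.0  (perfect agreement)
print(agreement([0.9, -0.9]))      # 0.0  (models contradict each other)
```

With mildly dispersed scores like `[0.8, 0.7, 0.9]` the sample variance is 0.01, so the agreement comes out at 0.98.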
### Real-Time Price Correlation Analysis

Connect sentiment scores with actual price movements:

```python
# price_correlation.py
import requests


class PriceCorrelationAnalyzer:
    def __init__(self):
        self.price_data = {}

    def fetch_crypto_prices(self, symbols=('bitcoin', 'ethereum')):
        """Fetch current crypto prices from the CoinGecko API"""
        try:
            url = "https://api.coingecko.com/api/v3/simple/price"
            params = {
                'ids': ','.join(symbols),
                'vs_currencies': 'usd',
                'include_24hr_change': 'true'
            }
            response = requests.get(url, params=params, timeout=10)
            return response.json()
        except Exception as e:
            print(f"Error fetching prices: {e}")
            return {}

    def correlate_sentiment_price(self, sentiment_data, price_data):
        """Pair average sentiment with 24h price changes per coin"""
        correlations = {}
        for crypto in price_data:
            # Filter sentiment records mentioning this crypto
            crypto_sentiment = [
                item for item in sentiment_data
                if crypto.lower() in item.get('cryptocurrencies', '').lower()
            ]
            if crypto_sentiment:
                avg_sentiment = sum(item['sentiment_score'] for item in crypto_sentiment) / len(crypto_sentiment)
                price_change = price_data[crypto].get('usd_24h_change', 0)
                correlations[crypto] = {
                    'sentiment_score': avg_sentiment,
                    'price_change_24h': price_change,
                    'article_count': len(crypto_sentiment)
                }
        return correlations
```
## Deployment and Automation Strategies

### Docker Containerization

Create a containerized deployment for consistent environments:

```dockerfile
# Dockerfile
FROM python:3.11-slim

# The slim image ships without curl, which the Ollama installer needs
RUN apt-get update && apt-get install -y curl && rm -rf /var/lib/apt/lists/*

# Install Ollama
RUN curl -fsSL https://ollama.ai/install.sh | sh

# Set working directory
WORKDIR /app

# Copy requirements and install dependencies
COPY requirements.txt .
RUN pip install -r requirements.txt

# Copy application code
COPY . .

# Create data directories
RUN mkdir -p data outputs

# Expose port for web interface (if added)
EXPOSE 8000

# Start the Ollama service, pull the model, then run the application
CMD ["sh", "-c", "ollama serve & sleep 10 && ollama pull llama3.1:8b && python main.py"]
```
### Cloud Deployment Configuration

```yaml
# docker-compose.yml
version: '3.8'

services:
  crypto-sentiment:
    build: .
    volumes:
      - ./data:/app/data
      - ./outputs:/app/outputs
    environment:
      - OLLAMA_HOST=http://localhost:11434
    ports:
      - "8000:8000"
    restart: unless-stopped

  redis:
    image: redis:alpine
    ports:
      - "6379:6379"

  mongodb:
    image: mongo:latest
    volumes:
      - mongodb_data:/data/db
    ports:
      - "27017:27017"

volumes:
  mongodb_data:
```
### Performance Monitoring

```python
# monitoring.py
import logging
import time
from datetime import datetime

import psutil


class PerformanceMonitor:
    def __init__(self):
        logging.basicConfig(
            filename='performance.log',
            level=logging.INFO,
            format='%(asctime)s - %(levelname)s - %(message)s'
        )

    def monitor_analysis_performance(self, func):
        """Decorator to monitor analysis performance"""
        def wrapper(*args, **kwargs):
            start_time = time.time()
            start_memory = psutil.Process().memory_info().rss / 1024 / 1024  # MB

            result = func(*args, **kwargs)

            end_time = time.time()
            end_memory = psutil.Process().memory_info().rss / 1024 / 1024  # MB

            execution_time = end_time - start_time
            memory_used = end_memory - start_memory
            logging.info(
                f"Function: {func.__name__} | "
                f"Execution Time: {execution_time:.2f} s | "
                f"Memory Usage: {memory_used:.2f} MB | "
                f"Timestamp: {datetime.now()}"
            )
            return result
        return wrapper
```
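The same decorator pattern can be tried without the psutil dependency. This is a reduced, self-contained timing wrapper (`timed` and `fake_analysis` are illustrative names, not part of the project code):

```python
# Minimal timing decorator, same shape as monitor_analysis_performance.
import functools
import time

def timed(func):
    @functools.wraps(func)  # preserves func.__name__ for the log line
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        elapsed = time.perf_counter() - start
        print(f"{func.__name__} took {elapsed:.3f}s")
        return result
    return wrapper

@timed
def fake_analysis(n):
    # Stand-in for an analyze_batch call
    return [s * 0.1 for s in range(n)]

print(len(fake_analysis(5)))  # 5
```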
## Troubleshooting Common Issues

### Ollama Connection Problems

```python
# troubleshooting.py
import requests


def diagnose_ollama_connection():
    """Diagnose common Ollama connection issues"""
    try:
        # Test the Ollama service
        response = requests.get("http://localhost:11434/api/tags", timeout=5)
        if response.status_code == 200:
            print("✓ Ollama service is running")
            models = response.json().get('models', [])
            print(f"✓ Available models: {[m['name'] for m in models]}")
        else:
            print("✗ Ollama service not responding correctly")
    except requests.ConnectionError:
        print("✗ Cannot connect to Ollama service")
        print("Solutions:")
        print("1. Start Ollama: ollama serve")
        print("2. Check if port 11434 is blocked")
        print("3. Verify installation: ollama --version")


def check_model_availability(model_name):
    """Check if the required model is available"""
    try:
        import ollama
        client = ollama.Client()
        # Try a minimal generation with the model
        client.generate(model=model_name, prompt="Test", stream=False)
        print(f"✓ Model {model_name} is working correctly")
        return True
    except Exception as e:
        print(f"✗ Model {model_name} error: {e}")
        print(f"Solution: ollama pull {model_name}")
        return False
```
### Memory and Performance Optimization

```python
# optimization.py
import psutil


def optimize_analysis_batch_size():
    """Determine the optimal batch size for your system"""
    available_memory = psutil.virtual_memory().available / 1024 / 1024 / 1024  # GB

    # Estimate memory per article analysis
    memory_per_article = 0.1  # GB (conservative estimate)
    optimal_batch_size = int(available_memory * 0.7 / memory_per_article)

    # Cap at reasonable limits
    optimal_batch_size = min(max(optimal_batch_size, 5), 50)

    print(f"Available memory: {available_memory:.1f} GB")
    print(f"Recommended batch size: {optimal_batch_size} articles")
    return optimal_batch_size
```
## Extending Your Sentiment Analysis System

### Adding Technical Analysis Integration

```python
# technical_integration.py
import pandas as pd


class TechnicalSentimentFusion:
    def __init__(self):
        self.price_data = pd.DataFrame()
        self.sentiment_data = pd.DataFrame()

    def calculate_sentiment_sma(self, window=14):
        """Calculate a sentiment moving average"""
        return self.sentiment_data['sentiment_score'].rolling(window=window).mean()

    def generate_fusion_signals(self):
        """Combine sentiment and technical indicators"""
        # Simple example: sentiment + RSI
        sentiment_signal = 1 if self.sentiment_data['sentiment_score'].mean() > 0.3 else -1

        # Add your technical analysis here
        # rsi_signal = calculate_rsi_signal()

        fusion_score = sentiment_signal  # * rsi_signal
        return {
            'fusion_signal': fusion_score,
            'sentiment_component': sentiment_signal,
            'confidence': abs(self.sentiment_data['sentiment_score'].mean())
        }
```
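To see what `calculate_sentiment_sma` produces, here is the same rolling-mean call on a toy series (the scores are made up; the first `window - 1` positions are NaN because the window isn't full yet):

```python
# Rolling sentiment average on fabricated scores, window of 3.
import pandas as pd

scores = pd.Series([0.2, 0.5, -0.1, 0.4, 0.6])
sma = scores.rolling(window=3).mean()
print(sma.round(3).tolist())  # [nan, nan, 0.2, 0.267, 0.3]
```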
### Building Alert Systems

```python
# alert_system.py
import smtplib
from datetime import datetime
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

import numpy as np


class SentimentAlertSystem:
    def __init__(self, email_config):
        self.email_config = email_config
        self.alert_thresholds = {
            'extreme_positive': 0.7,
            'extreme_negative': -0.7,
            'high_volatility': 0.5
        }

    def check_alert_conditions(self, sentiment_data):
        """Check if any alert conditions are met"""
        alerts = []
        avg_sentiment = np.mean([item['sentiment_score'] for item in sentiment_data])
        sentiment_volatility = np.std([item['sentiment_score'] for item in sentiment_data])

        if avg_sentiment > self.alert_thresholds['extreme_positive']:
            alerts.append({
                'type': 'EXTREME_POSITIVE',
                'value': avg_sentiment,
                'message': f"Extremely positive sentiment detected: {avg_sentiment:.2f}"
            })
        elif avg_sentiment < self.alert_thresholds['extreme_negative']:
            alerts.append({
                'type': 'EXTREME_NEGATIVE',
                'value': avg_sentiment,
                'message': f"Extremely negative sentiment detected: {avg_sentiment:.2f}"
            })

        if sentiment_volatility > self.alert_thresholds['high_volatility']:
            alerts.append({
                'type': 'HIGH_VOLATILITY',
                'value': sentiment_volatility,
                'message': f"High sentiment volatility detected: {sentiment_volatility:.2f}"
            })
        return alerts

    def send_email_alert(self, alert):
        """Send an email alert for significant sentiment changes"""
        try:
            msg = MIMEMultipart()
            msg['From'] = self.email_config['from_email']
            msg['To'] = self.email_config['to_email']
            msg['Subject'] = f"Crypto Sentiment Alert: {alert['type']}"

            body = (
                f"Alert Type: {alert['type']}\n"
                f"Value: {alert['value']:.3f}\n"
                f"Message: {alert['message']}\n"
                f"Generated at: {datetime.now()}\n"
            )
            msg.attach(MIMEText(body, 'plain'))

            server = smtplib.SMTP(self.email_config['smtp_server'], self.email_config['smtp_port'])
            server.starttls()
            server.login(self.email_config['username'], self.email_config['password'])
            server.sendmail(self.email_config['from_email'], self.email_config['to_email'], msg.as_string())
            server.quit()
            print(f"Alert sent: {alert['type']}")
        except Exception as e:
            print(f"Failed to send alert: {e}")
```
## Running Your Complete System

### System Startup Checklist

1. Start the Ollama service:

   ```bash
   ollama serve
   ```

2. Pull the required models:

   ```bash
   ollama pull llama3.1:8b
   ollama pull mistral:7b  # Optional, for the ensemble
   ```

3. Run the main application:

   ```bash
   python main.py
   ```

4. Monitor the logs:

   ```bash
   tail -f performance.log
   ```
### Expected Output Flow

Initial startup:

```text
Starting crypto sentiment analysis - 2025-07-08 10:30:00
Collecting from: https://cointelegraph.com/rss
Collecting from: https://cryptonews.com/news/feed/
...
Saved 150 articles to data/articles_20250708_103000.json
Analyzing article 1/10: Bitcoin reaches new all-time high as...
Analyzing article 2/10: Ethereum network upgrade shows...
...
Analysis complete. Report saved to: outputs/sentiment_report_20250708_103500.html
Scheduled analysis every 15 minutes
```
## Conclusion: Building Market Intelligence with Ollama
You now have a complete crypto sentiment analysis system using Ollama that processes news feeds, analyzes sentiment, and generates actionable market intelligence. This automated approach gives you a significant advantage in fast-moving crypto markets.
Key benefits of your new system:
- Real-time processing of hundreds of news articles
- Objective sentiment scoring without human bias
- Automated trading signals based on market sentiment
- Privacy-focused analysis with local AI processing
- Customizable alerts for significant sentiment changes
Next steps to enhance your system:
- Add more news sources for broader coverage
- Integrate with trading APIs for automated execution
- Implement backtesting to validate signal accuracy
- Scale to analyze social media sentiment
- Add multi-language news source support
The crypto market never sleeps, but now your sentiment analysis system doesn't either. Start running your automated intelligence gathering today and gain the edge that separates successful traders from the rest of the market.
Ready to deploy your system? Download the complete code package and start analyzing crypto sentiment with Ollama in under 30 minutes.