How to Build Crypto Fear & Greed Index with Ollama: Market Sentiment Gauge

Build a crypto fear & greed index using Ollama AI for real-time market sentiment analysis. Complete guide with code examples and deployment tips.

Ever wonder if crypto markets run on pure emotion? Spoiler alert: they absolutely do. While traders claim logic drives their decisions, fear and greed actually control the steering wheel. Building your own crypto fear & greed index with Ollama lets you measure these emotions and potentially profit from market psychology.

This guide shows you how to create a real-time market sentiment gauge using Ollama's AI capabilities. You'll build a system that analyzes social media, news, and market data to generate accurate fear and greed scores.

What Is the Crypto Fear & Greed Index?

The cryptocurrency fear and greed index measures market emotions on a scale from 0 (extreme fear) to 100 (extreme greed). Originally created by CNNMoney for traditional markets, the index has since been adapted for crypto, where it helps traders identify:

  • Extreme fear (0-24): Potential buying opportunities
  • Fear (25-44): Market uncertainty
  • Neutral (45-55): Balanced sentiment
  • Greed (56-74): Market optimism
  • Extreme greed (75-100): Potential selling opportunities
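These bands translate directly into code. A minimal helper that encodes the thresholds above (the calculator built later in this guide uses the same mapping):

```python
def label_for_score(score: float) -> str:
    """Map a 0-100 index score to its sentiment band."""
    if score <= 24:
        return "Extreme Fear"
    if score <= 44:
        return "Fear"
    if score <= 55:
        return "Neutral"
    if score <= 74:
        return "Greed"
    return "Extreme Greed"

print(label_for_score(18))  # Extreme Fear
print(label_for_score(60))  # Greed
```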

Why Traditional Indices Fall Short

Most existing fear and greed indices update slowly and use limited data sources. Building your own AI-powered market analysis system with Ollama provides:

  • Real-time sentiment updates
  • Custom data source integration
  • Improved accuracy through machine learning
  • Cost-effective local processing

Why Choose Ollama for Market Sentiment Analysis?

Ollama excels at local AI processing without expensive API costs. For cryptocurrency sentiment analysis, Ollama offers:

Key Advantages

  • Privacy: Process sensitive market data locally
  • Cost efficiency: No per-request API fees
  • Customization: Fine-tune models for crypto terminology
  • Speed: Low-latency responses for real-time analysis
  • Reliability: No external service dependencies

Ollama Models for Sentiment Analysis

Best models for trading psychology indicators:

  • Llama 2 7B: Fast processing, good accuracy
  • Mistral 7B: Excellent reasoning capabilities
  • CodeLlama: Helpful for data processing tasks
  • Llama 2 13B: Higher accuracy, slower processing

Setting Up Your Development Environment

Prerequisites

Install these tools before building your blockchain sentiment tracking system:

# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh

# Start Ollama service
ollama serve

# Pull required model
ollama pull llama2:7b

Project Structure

Create your project directory:

mkdir crypto-fear-greed-index
cd crypto-fear-greed-index

# Create directory structure
mkdir -p {src,data,templates,static,config}
touch src/{main.py,sentiment_analyzer.py,data_collector.py}
touch config/settings.py

Python Dependencies

Install required packages:

pip install requests pandas numpy flask ollama tweepy yfinance beautifulsoup4 schedule
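The Dockerfile later in this guide copies a requirements.txt into the image, so it is worth writing one now. An unpinned version listing the same packages (pin exact versions once you have a working combination):

```text
requests
pandas
numpy
flask
ollama
tweepy
yfinance
beautifulsoup4
schedule
```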

Building the Core Sentiment Engine

Sentiment Analyzer Class

Create src/sentiment_analyzer.py:

import ollama
import json
import re
from typing import Dict, List, Tuple

class CryptoSentimentAnalyzer:
    def __init__(self, model_name: str = "llama2:7b"):
        """Initialize the sentiment analyzer with Ollama model."""
        self.model = model_name
        self.client = ollama.Client()
        
    def analyze_text(self, text: str) -> Dict[str, float]:
        """
        Analyze sentiment of crypto-related text.
        Returns scores for fear, greed, and confidence.
        """
        # Clean and prepare text
        cleaned_text = self._clean_text(text)
        
        # Create sentiment analysis prompt
        prompt = self._create_sentiment_prompt(cleaned_text)
        
        try:
            # Get response from Ollama
            response = self.client.chat(model=self.model, messages=[
                {
                    'role': 'user',
                    'content': prompt
                }
            ])
            
            # Parse sentiment scores
            sentiment_data = self._parse_sentiment_response(response['message']['content'])
            return sentiment_data
            
        except Exception as e:
            print(f"Error analyzing sentiment: {e}")
            return {"fear": 0.5, "greed": 0.5, "confidence": 0.3}
    
    def _clean_text(self, text: str) -> str:
        """Remove noise from text data."""
        # Strip URLs (the http pattern matches https too), then mentions and hashtags
        text = re.sub(r'http\S+|www\S+', '', text)
        text = re.sub(r'@\w+|#\w+', '', text)
        text = re.sub(r'[^\w\s]', ' ', text)
        return text.strip()
    
    def _create_sentiment_prompt(self, text: str) -> str:
        """Create specialized prompt for crypto sentiment analysis."""
        return f"""
        Analyze the cryptocurrency market sentiment in this text: "{text}"
        
        Focus on emotions related to:
        - Fear indicators: panic, crash, loss, uncertainty, bearish
        - Greed indicators: FOMO, moon, pump, bullish, gains
        - Market psychology and trading emotions
        
        Respond ONLY with valid JSON in this exact format:
        {{"fear": 0.0-1.0, "greed": 0.0-1.0, "confidence": 0.0-1.0}}
        
        Where:
        - fear: 0.0 (no fear) to 1.0 (extreme fear)
        - greed: 0.0 (no greed) to 1.0 (extreme greed)  
        - confidence: 0.0 (low confidence) to 1.0 (high confidence in analysis)
        """
    
    def _parse_sentiment_response(self, response: str) -> Dict[str, float]:
        """Extract sentiment scores from Ollama response."""
        try:
            # Find JSON in response (DOTALL so multi-line JSON still matches)
            json_match = re.search(r'\{.*\}', response, re.DOTALL)
            if json_match:
                sentiment_json = json.loads(json_match.group())
                
                # Validate and normalize scores
                fear = max(0.0, min(1.0, float(sentiment_json.get('fear', 0.5))))
                greed = max(0.0, min(1.0, float(sentiment_json.get('greed', 0.5))))
                confidence = max(0.0, min(1.0, float(sentiment_json.get('confidence', 0.5))))
                
                return {"fear": fear, "greed": greed, "confidence": confidence}
                
        except (json.JSONDecodeError, ValueError, KeyError):
            pass
            
        # Fallback parsing for non-JSON responses
        return self._fallback_sentiment_parsing(response)
    
    def _fallback_sentiment_parsing(self, response: str) -> Dict[str, float]:
        """Backup sentiment extraction method."""
        response_lower = response.lower()
        
        # Simple keyword-based fallback
        fear_keywords = ['fear', 'panic', 'crash', 'bearish', 'loss']
        greed_keywords = ['greed', 'fomo', 'bullish', 'moon', 'pump']
        
        fear_score = sum(response_lower.count(word) for word in fear_keywords) / 10
        greed_score = sum(response_lower.count(word) for word in greed_keywords) / 10
        
        return {
            "fear": min(1.0, fear_score),
            "greed": min(1.0, greed_score),
            "confidence": 0.3
        }
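You can exercise the parsing path without a running model at all. The sketch below reimplements the extraction step standalone, fed with a hypothetical chatty reply (not real model output):

```python
import json
import re

def extract_scores(reply: str) -> dict:
    """Pull the first {...} block out of a model reply and clamp values to [0, 1]."""
    match = re.search(r'\{.*\}', reply, re.DOTALL)  # DOTALL: JSON may span lines
    if not match:
        return {"fear": 0.5, "greed": 0.5, "confidence": 0.3}
    data = json.loads(match.group())
    return {k: max(0.0, min(1.0, float(data.get(k, 0.5))))
            for k in ("fear", "greed", "confidence")}

# A chatty reply with the JSON buried mid-answer still parses:
reply = 'Sure! Based on the text:\n{"fear": 0.8,\n "greed": 0.1, "confidence": 0.9}\nHope that helps.'
print(extract_scores(reply))  # {'fear': 0.8, 'greed': 0.1, 'confidence': 0.9}
```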

Testing the Sentiment Engine

Test your analyzer:

# Test the sentiment analyzer
from sentiment_analyzer import CryptoSentimentAnalyzer

analyzer = CryptoSentimentAnalyzer()

# Test with different sentiment examples
test_texts = [
    "Bitcoin is crashing! Panic selling everywhere!",
    "Crypto to the moon! HODL and buy more!",
    "Market looks uncertain, waiting for confirmation."
]

for text in test_texts:
    result = analyzer.analyze_text(text)
    print(f"Text: {text}")
    print(f"Sentiment: {result}\n")

Data Collection and Processing

Multi-Source Data Collector

Create src/data_collector.py:

import requests
import pandas as pd
import yfinance as yf
from datetime import datetime, timedelta
from typing import List, Dict
import time

class CryptoDataCollector:
    def __init__(self):
        """Initialize data collector for multiple sources."""
        self.headers = {
            'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36'
        }
        
    def collect_market_data(self, symbols: List[str] = ['BTC-USD', 'ETH-USD']) -> Dict:
        """Collect current market data for major cryptocurrencies."""
        market_data = {}
        
        for symbol in symbols:
            try:
                ticker = yf.Ticker(symbol)
                info = ticker.info
                hist = ticker.history(period="7d")
                
                # Calculate annualized volatility (crypto trades 365 days a year)
                returns = hist['Close'].pct_change().dropna()
                volatility = returns.std() * (365 ** 0.5)
                
                # Price metrics (use .iloc for positional access; fall back to
                # the last close if yfinance's info lacks the price field)
                current_price = info.get('regularMarketPrice') or float(hist['Close'].iloc[-1])
                week_change = ((hist['Close'].iloc[-1] / hist['Close'].iloc[0]) - 1) * 100
                
                market_data[symbol] = {
                    'price': current_price,
                    'volume': info.get('regularMarketVolume', 0),
                    'market_cap': info.get('marketCap', 0),
                    'week_change': week_change,
                    'volatility': volatility,
                    'timestamp': datetime.now()
                }
                
            except Exception as e:
                print(f"Error collecting data for {symbol}: {e}")
                
        return market_data
    
    def collect_social_sentiment(self) -> List[str]:
        """Collect crypto-related social media content."""
        # Note: Replace with actual social media APIs
        # This is a simplified example
        
        sample_social_data = [
            "Bitcoin breaking new resistance levels! #BTC #crypto",
            "Market volatility increasing, be careful with leveraged positions",
            "DCA strategy working well in this market condition",
            "Altcoin season might be starting, watching ETH closely",
            "Fear in the market creates opportunity for patient investors"
        ]
        
        return sample_social_data
    
    def collect_news_headlines(self) -> List[str]:
        """Collect crypto news headlines."""
        try:
            # Example endpoint for a crypto news API (replace with a real
            # service and key; the request itself is omitted in this demo):
            # url = "https://cryptonews-api.com/api/v1/category"
            # params = {'section': 'general', 'items': 50, 'page': 1}
            
            # For demo purposes, return sample headlines
            sample_headlines = [
                "Major institutional investors increase Bitcoin allocations",
                "Regulatory clarity improves cryptocurrency market outlook", 
                "Ethereum network upgrade shows promising results",
                "Market correlation with traditional assets decreases",
                "New DeFi protocols gain significant adoption"
            ]
            
            return sample_headlines
            
        except Exception as e:
            print(f"Error collecting news: {e}")
            return []
    
    def collect_google_trends(self, keywords: List[str] = ['Bitcoin', 'cryptocurrency']) -> Dict:
        """Collect search trend data (simplified version)."""
        # Note: Use pytrends library for actual Google Trends data
        # This returns sample data for demonstration
        
        import random  # simulated scores only; use pytrends for real data
        
        trends_data = {}
        for keyword in keywords:
            # Simulate trend scores (0-100)
            trends_data[keyword] = {
                'interest_score': random.randint(30, 80),
                'timestamp': datetime.now()
            }
            
        return trends_data
    
    def get_fear_greed_components(self) -> Dict:
        """Collect all data components for fear & greed calculation."""
        print("Collecting market data...")
        market_data = self.collect_market_data()
        
        print("Collecting social sentiment...")
        social_data = self.collect_social_sentiment()
        
        print("Collecting news headlines...")
        news_data = self.collect_news_headlines()
        
        print("Collecting search trends...")
        trends_data = self.collect_google_trends()
        
        return {
            'market_data': market_data,
            'social_data': social_data,
            'news_data': news_data,
            'trends_data': trends_data,
            'collection_timestamp': datetime.now()
        }
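Before wiring the collector into the index, it is worth sanity-checking the volatility and momentum math on a synthetic price series. The check below annualizes with √365 since crypto trades year-round; swap in √252 if you prefer the trading-day convention:

```python
import pandas as pd

# Seven closes, +1% every day: a calm, steadily rising market
closes = pd.Series([100 * 1.01 ** i for i in range(7)])

returns = closes.pct_change().dropna()
volatility = returns.std() * (365 ** 0.5)  # annualized std of daily returns
week_change = ((closes.iloc[-1] / closes.iloc[0]) - 1) * 100

print(round(week_change, 2))  # 6.15  (1.01**6 - 1)
print(volatility)             # near zero: constant daily returns have no spread
```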

Creating the Index Calculation

Fear & Greed Index Calculator

Create src/index_calculator.py:

import numpy as np
from datetime import datetime
from typing import Dict, List
from sentiment_analyzer import CryptoSentimentAnalyzer
from data_collector import CryptoDataCollector

class FearGreedIndexCalculator:
    def __init__(self):
        """Initialize the fear & greed index calculator."""
        self.sentiment_analyzer = CryptoSentimentAnalyzer()
        self.data_collector = CryptoDataCollector()
        
        # Component weights (should sum to 1.0)
        self.weights = {
            'price_momentum': 0.25,
            'volatility': 0.25, 
            'social_sentiment': 0.20,
            'news_sentiment': 0.15,
            'search_trends': 0.15
        }
    
    def calculate_price_momentum_score(self, market_data: Dict) -> float:
        """Calculate score based on price momentum (0-100)."""
        scores = []
        
        for symbol, data in market_data.items():
            week_change = data.get('week_change', 0)
            
            # Convert price change to 0-100 scale
            # Extreme fear: < -20%, Extreme greed: > +20%
            if week_change <= -20:
                score = 0
            elif week_change >= 20:
                score = 100
            else:
                # Linear scaling between -20% and +20%
                score = ((week_change + 20) / 40) * 100
                
            scores.append(score)
        
        return np.mean(scores) if scores else 50
    
    def calculate_volatility_score(self, market_data: Dict) -> float:
        """Calculate score based on market volatility (0-100)."""
        volatilities = []
        
        for symbol, data in market_data.items():
            volatility = data.get('volatility', 0)
            volatilities.append(volatility)
        
        if not volatilities:
            return 50
            
        avg_volatility = np.mean(volatilities)
        
        # High volatility = fear, Low volatility = greed
        # Normalize volatility to 0-100 scale (inverted)
        if avg_volatility >= 2.0:  # Very high volatility
            return 0  # Extreme fear
        elif avg_volatility <= 0.5:  # Low volatility  
            return 100  # Extreme greed
        else:
            # Linear scaling between 0.5 and 2.0 (inverted)
            return 100 - ((avg_volatility - 0.5) / 1.5) * 100
    
    def calculate_sentiment_score(self, text_data: List[str]) -> float:
        """Calculate sentiment score from text data (0-100)."""
        if not text_data:
            return 50
            
        sentiment_scores = []
        
        for text in text_data:
            sentiment = self.sentiment_analyzer.analyze_text(text)
            
            # Convert fear/greed to 0-100 scale
            # More fear = lower score, More greed = higher score
            fear_component = sentiment['fear'] * 50  # 0-50 range
            greed_component = sentiment['greed'] * 50  # 0-50 range
            
            # Combine: low fear + high greed = high score
            combined_score = (50 - fear_component) + greed_component
            sentiment_scores.append(max(0, min(100, combined_score)))
        
        return np.mean(sentiment_scores)
    
    def calculate_trends_score(self, trends_data: Dict) -> float:
        """Calculate score based on search trends (0-100)."""
        if not trends_data:
            return 50
            
        trend_scores = []
        
        for keyword, data in trends_data.items():
            interest_score = data.get('interest_score', 50)
            # High search interest can indicate both fear and greed
            # Normalize to 0-100 where 50 is neutral
            trend_scores.append(interest_score)
        
        return np.mean(trend_scores) if trend_scores else 50
    
    def calculate_fear_greed_index(self) -> Dict:
        """Calculate the complete fear & greed index."""
        print("Starting Fear & Greed Index calculation...")
        
        # Collect all data
        data = self.data_collector.get_fear_greed_components()
        
        # Calculate component scores
        print("Calculating component scores...")
        
        price_score = self.calculate_price_momentum_score(data['market_data'])
        volatility_score = self.calculate_volatility_score(data['market_data'])
        social_score = self.calculate_sentiment_score(data['social_data'])
        news_score = self.calculate_sentiment_score(data['news_data'])
        trends_score = self.calculate_trends_score(data['trends_data'])
        
        # Calculate weighted final score
        final_score = (
            price_score * self.weights['price_momentum'] +
            volatility_score * self.weights['volatility'] +
            social_score * self.weights['social_sentiment'] +
            news_score * self.weights['news_sentiment'] +
            trends_score * self.weights['search_trends']
        )
        
        # Determine sentiment label
        sentiment_label = self._get_sentiment_label(final_score)
        
        result = {
            'index_score': round(final_score, 1),
            'sentiment_label': sentiment_label,
            'components': {
                'price_momentum': round(price_score, 1),
                'volatility': round(volatility_score, 1),
                'social_sentiment': round(social_score, 1),
                'news_sentiment': round(news_score, 1),
                'search_trends': round(trends_score, 1)
            },
            'weights': self.weights,
            'timestamp': datetime.now().isoformat(),
            'market_data': data['market_data']
        }
        
        print(f"Fear & Greed Index: {final_score:.1f} ({sentiment_label})")
        return result
    
    def _get_sentiment_label(self, score: float) -> str:
        """Convert numeric score to sentiment label."""
        if score <= 24:
            return "Extreme Fear"
        elif score <= 44:
            return "Fear"
        elif score <= 55:
            return "Neutral"
        elif score <= 74:
            return "Greed"
        else:
            return "Extreme Greed"

# Example usage
if __name__ == "__main__":
    calculator = FearGreedIndexCalculator()
    index_result = calculator.calculate_fear_greed_index()
    print(f"\nFinal Index: {index_result}")
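To see how the weighting behaves, run the aggregation in isolation with made-up component scores (prices up, but volatile markets and fearful chatter):

```python
weights = {
    'price_momentum': 0.25,
    'volatility': 0.25,
    'social_sentiment': 0.20,
    'news_sentiment': 0.15,
    'search_trends': 0.15,
}
# Hypothetical component scores for illustration only
scores = {
    'price_momentum': 70,
    'volatility': 30,
    'social_sentiment': 40,
    'news_sentiment': 55,
    'search_trends': 50,
}

final = sum(scores[k] * weights[k] for k in weights)
print(final)  # 48.75 -> falls in the 45-55 "Neutral" band
```

Note how the two 25%-weighted components (price momentum and volatility) dominate: they pull an otherwise fearful reading back toward neutral.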

Building the Web Interface

Flask Web Application

Create src/main.py:

from flask import Flask, render_template, jsonify
import json
from datetime import datetime
from index_calculator import FearGreedIndexCalculator
import schedule
import threading
import time

app = Flask(__name__)

# Global variable to store latest index data
latest_index_data = None

def update_index():
    """Update the fear & greed index."""
    global latest_index_data
    try:
        calculator = FearGreedIndexCalculator()
        latest_index_data = calculator.calculate_fear_greed_index()
        print(f"Index updated: {latest_index_data['index_score']}")
    except Exception as e:
        print(f"Error updating index: {e}")

@app.route('/')
def index():
    """Main dashboard page."""
    return render_template('dashboard.html')

@app.route('/api/fear-greed-index')
def get_fear_greed_index():
    """API endpoint for current fear & greed index."""
    if latest_index_data:
        return jsonify(latest_index_data)
    else:
        return jsonify({'error': 'Index data not available'}), 503

@app.route('/api/historical')
def get_historical_data():
    """API endpoint for historical index data."""
    # In a real implementation, this would query a database
    # For demo, return sample historical data
    sample_history = [
        {'date': '2025-01-15', 'score': 42, 'label': 'Fear'},
        {'date': '2025-01-14', 'score': 62, 'label': 'Greed'},
        {'date': '2025-01-13', 'score': 38, 'label': 'Fear'},
        {'date': '2025-01-12', 'score': 71, 'label': 'Greed'},
        {'date': '2025-01-11', 'score': 55, 'label': 'Neutral'}
    ]
    return jsonify(sample_history)

def run_scheduler():
    """Run scheduled tasks in background."""
    while True:
        schedule.run_pending()
        time.sleep(60)

if __name__ == '__main__':
    # Schedule index updates every 15 minutes
    schedule.every(15).minutes.do(update_index)
    
    # Initial index calculation
    update_index()
    
    # Start scheduler in background thread
    scheduler_thread = threading.Thread(target=run_scheduler)
    scheduler_thread.daemon = True
    scheduler_thread.start()
    
    # Start Flask app (reloader disabled so the scheduler thread isn't started twice)
    app.run(debug=True, host='0.0.0.0', port=5000, use_reloader=False)
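The /api/historical endpoint above returns canned data. A minimal persistence layer for real history could look like this sqlite3 sketch (the schema and function names are hypothetical; swap the in-memory ':memory:' path for a file in production):

```python
import sqlite3

def init_db(path: str = ":memory:") -> sqlite3.Connection:
    """Open the database and create the history table if needed."""
    conn = sqlite3.connect(path)
    conn.execute("""CREATE TABLE IF NOT EXISTS index_history (
        day TEXT PRIMARY KEY, score REAL, label TEXT)""")
    return conn

def save_score(conn, day: str, score: float, label: str) -> None:
    # Upsert so re-running on the same day overwrites instead of duplicating
    conn.execute("INSERT OR REPLACE INTO index_history VALUES (?, ?, ?)",
                 (day, score, label))
    conn.commit()

def recent_scores(conn, limit: int = 30) -> list:
    """Return the most recent entries, newest first, shaped like the API response."""
    rows = conn.execute(
        "SELECT day, score, label FROM index_history ORDER BY day DESC LIMIT ?",
        (limit,)).fetchall()
    return [{"date": d, "score": s, "label": l} for d, s, l in rows]

conn = init_db()
save_score(conn, "2025-01-14", 62, "Greed")
save_score(conn, "2025-01-15", 42, "Fear")
print(recent_scores(conn))  # newest first
```

Calling save_score from update_index after each calculation would let /api/historical serve real data via recent_scores.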

Dashboard Template

Create templates/dashboard.html:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Crypto Fear & Greed Index</title>
    <script src="https://cdn.jsdelivr.net/npm/chart.js"></script>
    <style>
        body {
            font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
            margin: 0;
            padding: 20px;
            background: linear-gradient(135deg, #1e3c72 0%, #2a5298 100%);
            color: white;
            min-height: 100vh;
        }
        
        .container {
            max-width: 1200px;
            margin: 0 auto;
        }
        
        .header {
            text-align: center;
            margin-bottom: 30px;
        }
        
        .index-display {
            background: rgba(255, 255, 255, 0.1);
            border-radius: 15px;
            padding: 30px;
            text-align: center;
            margin-bottom: 30px;
            backdrop-filter: blur(10px);
        }
        
        .index-score {
            font-size: 4em;
            font-weight: bold;
            margin: 20px 0;
        }
        
        .sentiment-label {
            font-size: 2em;
            margin-bottom: 10px;
        }
        
        .components-grid {
            display: grid;
            grid-template-columns: repeat(auto-fit, minmax(250px, 1fr));
            gap: 20px;
            margin-bottom: 30px;
        }
        
        .component-card {
            background: rgba(255, 255, 255, 0.1);
            border-radius: 10px;
            padding: 20px;
            backdrop-filter: blur(10px);
        }
        
        .component-score {
            font-size: 2em;
            font-weight: bold;
            color: #00ff88;
        }
        
        .chart-container {
            background: rgba(255, 255, 255, 0.1);
            border-radius: 15px;
            padding: 20px;
            margin-bottom: 20px;
            backdrop-filter: blur(10px);
        }
        
        .status {
            position: fixed;
            bottom: 20px;
            right: 20px;
            background: rgba(0, 0, 0, 0.7);
            padding: 10px 20px;
            border-radius: 25px;
            font-size: 0.9em;
        }
        
        /* Sentiment color coding */
        .extreme-fear { color: #ff4757; }
        .fear { color: #ff6b7a; }
        .neutral { color: #ffa502; }
        .greed { color: #2ed573; }
        .extreme-greed { color: #1dd1a1; }
    </style>
</head>
<body>
    <div class="container">
        <div class="header">
            <h1>🚀 Crypto Fear & Greed Index</h1>
            <p>Real-time market sentiment analysis powered by Ollama AI</p>
        </div>
        
        <div class="index-display">
            <div id="index-score" class="index-score">--</div>
            <div id="sentiment-label" class="sentiment-label">Loading...</div>
            <div id="last-update">Last updated: --</div>
        </div>
        
        <div class="components-grid" id="components-grid">
            <!-- Component cards will be populated by JavaScript -->
        </div>
        
        <div class="chart-container">
            <h3>Historical Trend</h3>
            <canvas id="historical-chart" width="400" height="200"></canvas>
        </div>
        
        <div class="chart-container">
            <h3>Component Breakdown</h3>
            <canvas id="components-chart" width="400" height="200"></canvas>
        </div>
    </div>
    
    <div class="status" id="status">
        Connecting...
    </div>

    <script>
        let componentsChart;
        let historicalChart;
        
        // Get sentiment CSS class
        function getSentimentClass(label) {
            return label.toLowerCase().replace(' ', '-');
        }
        
        // Update index display
        function updateIndexDisplay(data) {
            const scoreElement = document.getElementById('index-score');
            const labelElement = document.getElementById('sentiment-label');
            const updateElement = document.getElementById('last-update');
            
            scoreElement.textContent = data.index_score;
            labelElement.textContent = data.sentiment_label;
            labelElement.className = `sentiment-label ${getSentimentClass(data.sentiment_label)}`;
            
            const updateTime = new Date(data.timestamp).toLocaleString();
            updateElement.textContent = `Last updated: ${updateTime}`;
        }
        
        // Update components grid
        function updateComponents(components, weights) {
            const grid = document.getElementById('components-grid');
            grid.innerHTML = '';
            
            for (const [key, score] of Object.entries(components)) {
                const card = document.createElement('div');
                card.className = 'component-card';
                
                const title = key.replace('_', ' ').replace(/\b\w/g, l => l.toUpperCase());
                const weight = (weights[key] * 100).toFixed(0);
                
                card.innerHTML = `
                    <h4>${title}</h4>
                    <div class="component-score">${score}</div>
                    <small>Weight: ${weight}%</small>
                `;
                
                grid.appendChild(card);
            }
        }
        
        // Create components chart
        function createComponentsChart(components) {
            const ctx = document.getElementById('components-chart').getContext('2d');
            
            if (componentsChart) {
                componentsChart.destroy();
            }
            
            const labels = Object.keys(components).map(key => 
                key.replace('_', ' ').replace(/\b\w/g, l => l.toUpperCase())
            );
            const values = Object.values(components);
            
            componentsChart = new Chart(ctx, {
                type: 'radar',
                data: {
                    labels: labels,
                    datasets: [{
                        label: 'Component Scores',
                        data: values,
                        backgroundColor: 'rgba(46, 213, 115, 0.2)',
                        borderColor: 'rgba(46, 213, 115, 1)',
                        borderWidth: 2
                    }]
                },
                options: {
                    responsive: true,
                    scales: {
                        r: {
                            min: 0,
                            max: 100,
                            ticks: {
                                color: 'white'
                            },
                            grid: {
                                color: 'rgba(255, 255, 255, 0.3)'
                            },
                            pointLabels: {
                                color: 'white'
                            }
                        }
                    },
                    plugins: {
                        legend: {
                            labels: {
                                color: 'white'
                            }
                        }
                    }
                }
            });
        }
        
        // Create historical chart
        function createHistoricalChart(historicalData) {
            const ctx = document.getElementById('historical-chart').getContext('2d');
            
            if (historicalChart) {
                historicalChart.destroy();
            }
            
            const labels = historicalData.map(item => item.date);
            const scores = historicalData.map(item => item.score);
            
            historicalChart = new Chart(ctx, {
                type: 'line',
                data: {
                    labels: labels,
                    datasets: [{
                        label: 'Fear & Greed Index',
                        data: scores,
                        borderColor: 'rgba(255, 193, 7, 1)',
                        backgroundColor: 'rgba(255, 193, 7, 0.1)',
                        borderWidth: 3,
                        fill: true,
                        tension: 0.4
                    }]
                },
                options: {
                    responsive: true,
                    scales: {
                        y: {
                            min: 0,
                            max: 100,
                            ticks: {
                                color: 'white'
                            },
                            grid: {
                                color: 'rgba(255, 255, 255, 0.3)'
                            }
                        },
                        x: {
                            ticks: {
                                color: 'white'
                            },
                            grid: {
                                color: 'rgba(255, 255, 255, 0.3)'
                            }
                        }
                    },
                    plugins: {
                        legend: {
                            labels: {
                                color: 'white'
                            }
                        }
                    }
                }
            });
        }
        
        // Fetch and update data
        async function updateData() {
            try {
                // Get current index
                const response = await fetch('/api/fear-greed-index');
                const data = await response.json();
                
                if (data.error) {
                    document.getElementById('status').textContent = 'Error: ' + data.error;
                    return;
                }
                
                updateIndexDisplay(data);
                updateComponents(data.components, data.weights);
                createComponentsChart(data.components);
                
                // Get historical data
                const histResponse = await fetch('/api/historical');
                const histData = await histResponse.json();
                createHistoricalChart(histData);
                
                document.getElementById('status').textContent = 'Connected • Live Data';
                
            } catch (error) {
                console.error('Error fetching data:', error);
                document.getElementById('status').textContent = 'Connection Error';
            }
        }
        
        // Initial load and set up auto-refresh
        updateData();
        setInterval(updateData, 30000); // Update every 30 seconds
    </script>
</body>
</html>

Deployment and Monitoring

Production Configuration

Create config/production.py:

import os
from datetime import timedelta

class ProductionConfig:
    """Production configuration settings."""
    
    # Flask settings
    SECRET_KEY = os.environ.get('SECRET_KEY', 'your-secret-key-here')
    DEBUG = False
    
    # Ollama settings
    OLLAMA_MODEL = os.environ.get('OLLAMA_MODEL', 'llama2:7b')
    OLLAMA_HOST = os.environ.get('OLLAMA_HOST', 'localhost:11434')
    
    # Update intervals
    INDEX_UPDATE_INTERVAL = timedelta(minutes=15)
    DATA_RETENTION_DAYS = 30
    
    # API rate limiting
    RATELIMIT_STORAGE_URL = os.environ.get('REDIS_URL', 'redis://localhost:6379')
    
    # Database (for storing historical data)
    DATABASE_URL = os.environ.get('DATABASE_URL', 'sqlite:///fear_greed_index.db')
    
    # External APIs
    TWITTER_API_KEY = os.environ.get('TWITTER_API_KEY')
    NEWS_API_KEY = os.environ.get('NEWS_API_KEY')
    
    # Monitoring
    ENABLE_METRICS = True
    LOG_LEVEL = 'INFO'
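A common failure mode with environment-variable fallbacks like the ones above is shipping the placeholder SECRET_KEY to production. A minimal startup check could look like this sketch (the `validate_config` helper and `PLACEHOLDERS` set are assumptions, not part of the class above):

```python
# Known placeholder values that should never reach production
PLACEHOLDERS = {'your-secret-key-here', 'changeme', ''}

def validate_config(config_cls, required=('SECRET_KEY',)):
    """Return the names of settings still set to known placeholder values."""
    problems = []
    for name in required:
        if getattr(config_cls, name, '') in PLACEHOLDERS:
            problems.append(name)
    return problems
```

Call it once at startup and refuse to boot if it returns a non-empty list.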

Docker Deployment

Create Dockerfile:

FROM python:3.9-slim

# Install system dependencies
RUN apt-get update && apt-get install -y \
    curl \
    && rm -rf /var/lib/apt/lists/*

# Install Ollama
RUN curl -fsSL https://ollama.ai/install.sh | sh

# Set working directory
WORKDIR /app

# Copy requirements
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy application code
COPY . .

# Expose port
EXPOSE 5000

# Create startup script
# Create startup script (printf interprets the \n escapes reliably; echo may not)
RUN printf '#!/bin/bash\nollama serve &\nsleep 10\nollama pull llama2:7b\nexec python src/main.py\n' > start.sh && chmod +x start.sh

CMD ["./start.sh"]

Docker Compose Setup

Create docker-compose.yml:

version: '3.8'

services:
  crypto-index:
    build: .
    ports:
      - "5000:5000"
    environment:
      - FLASK_ENV=production
      - SECRET_KEY=your-production-secret-key
      # Point the app at the redis service by name; localhost won't resolve between containers
      - REDIS_URL=redis://redis:6379
    volumes:
      - ollama_data:/root/.ollama
    restart: unless-stopped
    
  redis:
    image: redis:alpine
    restart: unless-stopped
    
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
      - ./ssl:/etc/nginx/ssl
    depends_on:
      - crypto-index
    restart: unless-stopped

volumes:
  ollama_data:

Monitoring and Alerts

Create src/monitoring.py:

import logging
import time
from datetime import datetime
from typing import Dict
import smtplib
from email.mime.text import MIMEText

class IndexMonitor:
    def __init__(self, config):
        self.config = config
        self.logger = self._setup_logging()
        self.alerts_enabled = True
        
    def _setup_logging(self):
        """Configure logging for monitoring."""
        logging.basicConfig(
            level=getattr(logging, self.config.LOG_LEVEL),
            format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
            handlers=[
                logging.FileHandler('fear_greed_index.log'),
                logging.StreamHandler()
            ]
        )
        return logging.getLogger(__name__)
    
    def check_index_health(self, index_data: Dict) -> bool:
        """Monitor index calculation health."""
        try:
            # Check if index score is reasonable
            score = index_data.get('index_score', -1)
            if not 0 <= score <= 100:
                self.logger.error(f"Invalid index score: {score}")
                return False
                
            # Check component scores
            components = index_data.get('components', {})
            for component, value in components.items():
                if not 0 <= value <= 100:
                    self.logger.error(f"Invalid {component} score: {value}")
                    return False
            
            # Check data freshness
            timestamp = datetime.fromisoformat(index_data['timestamp'])
            age = datetime.now() - timestamp
            if age.total_seconds() > 1800:  # 30 minutes
                self.logger.warning(f"Stale data detected: {age}")
                
            self.logger.info(f"Index health check passed: {score}")
            return True
            
        except Exception as e:
            self.logger.error(f"Health check failed: {e}")
            return False
    
    def send_alert(self, message: str, severity: str = "WARNING"):
        """Send alert notification."""
        if not self.alerts_enabled:
            return
            
        self.logger.log(getattr(logging, severity), message)
        
        # Add email notifications in production
        # self._send_email_alert(message, severity)
    
    def log_performance_metrics(self, calculation_time: float, data_sources: int):
        """Log performance metrics."""
        self.logger.info(f"Index calculation completed in {calculation_time:.2f}s using {data_sources} sources")
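Wiring the monitor into the update loop might look like the following sketch (the wrapper function and the `data_sources=4` count are illustrative assumptions, not part of the class above):

```python
import time

def run_monitored_calculation(calculator, monitor):
    """Time one index calculation and run the monitor's health checks on the result."""
    start = time.time()
    result = calculator.calculate_fear_greed_index()
    elapsed = time.time() - start

    monitor.log_performance_metrics(elapsed, data_sources=4)
    if not monitor.check_index_health(result):
        monitor.send_alert(f"Unhealthy index result: {result.get('index_score')}")
    return result
```

Because the wrapper only relies on the methods shown earlier, it works unchanged with any calculator/monitor pair that exposes the same interface.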

Optimization and Best Practices

Performance Optimization

Caching Strategy:

import redis
import pickle
from functools import wraps

class IndexCache:
    def __init__(self, redis_url='redis://localhost:6379'):
        self.redis_client = redis.from_url(redis_url)
        
    def cache_result(self, key: str, data: dict, ttl: int = 900):
        """Cache index calculation result."""
        try:
            serialized = pickle.dumps(data)
            self.redis_client.setex(key, ttl, serialized)
        except Exception as e:
            print(f"Cache write error: {e}")
    
    def get_cached_result(self, key: str):
        """Retrieve cached result."""
        try:
            cached = self.redis_client.get(key)
            if cached:
                return pickle.loads(cached)
        except Exception as e:
            print(f"Cache read error: {e}")
        return None
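Cache keys need to stay stable within one update window so repeated requests hit the same entry. One way to do that (the `make_cache_key` helper is an assumption, not a method of `IndexCache`) is to bucket timestamps to the 15-minute update interval:

```python
from datetime import datetime, timezone

def make_cache_key(prefix: str, interval_minutes: int = 15, now: datetime = None) -> str:
    """Build a key that stays constant for one update interval."""
    now = now or datetime.now(timezone.utc)
    # Bucket the minute-of-day so all requests in one window share a key
    bucket = (now.hour * 60 + now.minute) // interval_minutes
    return f"{prefix}:{now.date().isoformat()}:{bucket}"
```

Pair it with `cache_result` by setting the TTL to roughly one interval (900 seconds) so stale buckets expire on their own.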

Model Fine-tuning

Custom Ollama Model:

# Create custom model for crypto sentiment
# Save this as 'Modelfile'
FROM llama2:7b

# Custom system prompt for crypto analysis
SYSTEM """
You are a specialized cryptocurrency sentiment analyzer. 
Focus on crypto-specific emotions, market psychology, and trading behavior.
Key terms: FOMO, FUD, HODL, diamond hands, paper hands, moon, crash, pump, dump.
"""

# Set parameters for better performance
PARAMETER temperature 0.3
PARAMETER top_p 0.9
PARAMETER num_ctx 2048
Then build and run the custom model from the shell:

ollama create crypto-sentiment -f Modelfile
ollama run crypto-sentiment

Error Handling and Resilience

import time
import random
from functools import wraps

def retry_with_backoff(max_retries=3, base_delay=1):
    """Decorator for retrying failed operations."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(max_retries):
                try:
                    return func(*args, **kwargs)
                except Exception as e:
                    if attempt == max_retries - 1:
                        raise e
                    
                    delay = base_delay * (2 ** attempt) + random.uniform(0, 1)
                    print(f"Attempt {attempt + 1} failed: {e}. Retrying in {delay:.2f}s...")
                    time.sleep(delay)
            
        return wrapper
    return decorator

# Usage example
@retry_with_backoff(max_retries=3, base_delay=2)
def analyze_sentiment_with_retry(text):
    return sentiment_analyzer.analyze_text(text)

Advanced Features

Real-time WebSocket Updates

Add WebSocket support for live updates:

from flask_socketio import SocketIO, emit

socketio = SocketIO(app, cors_allowed_origins="*")

@socketio.on('connect')
def handle_connect():
    emit('connected', {'data': 'Connected to Fear & Greed Index'})

def broadcast_index_update(index_data):
    """Broadcast index updates to all connected clients."""
    socketio.emit('index_update', index_data)

# Modify your update function
def update_index():
    global latest_index_data
    try:
        calculator = FearGreedIndexCalculator()
        latest_index_data = calculator.calculate_fear_greed_index()
        
        # Broadcast to connected clients
        broadcast_index_update(latest_index_data)
        
    except Exception as e:
        print(f"Error updating index: {e}")
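Broadcasting every recalculation can flood clients even when the score barely moved. A small change gate helps (the `should_broadcast` helper and the 1-point threshold are assumptions, not part of the code above):

```python
def should_broadcast(previous, current: float, threshold: float = 1.0) -> bool:
    """Broadcast only when the score moved enough to matter to clients."""
    return previous is None or abs(current - previous) >= threshold
```

Call it before `broadcast_index_update` and keep the last score you sent in a module-level variable alongside `latest_index_data`.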

Machine Learning Enhancement

Improve accuracy with historical training:

import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

class MLEnhancedCalculator(FearGreedIndexCalculator):
    def __init__(self):
        super().__init__()
        self.ml_model = None
        self.train_model()
    
    def train_model(self):
        """Train ML model on historical data."""
        # Load historical data (implement based on your data source)
        historical_data = self.load_historical_data()
        
        if len(historical_data) > 100:  # Minimum data for training
            features = ['price_momentum', 'volatility', 'social_sentiment', 
                       'news_sentiment', 'search_trends']
            
            X = historical_data[features]
            y = historical_data['actual_index']
            
            X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
            
            self.ml_model = RandomForestRegressor(n_estimators=100, random_state=42)
            self.ml_model.fit(X_train, y_train)
            
            score = self.ml_model.score(X_test, y_test)
            print(f"ML model trained with R² score: {score:.3f}")
    
    def calculate_enhanced_index(self, component_scores):
        """Use ML model to enhance index calculation."""
        if self.ml_model:
            features = np.array([list(component_scores.values())]).reshape(1, -1)
            ml_prediction = self.ml_model.predict(features)[0]
            
            # Blend traditional calculation with ML prediction
            traditional_score = np.average(list(component_scores.values()), 
                                         weights=list(self.weights.values()))
            
            # 70% traditional, 30% ML
            final_score = 0.7 * traditional_score + 0.3 * ml_prediction
            return max(0, min(100, final_score))
        
        return super().calculate_fear_greed_index()
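The 70/30 blend at the end of `calculate_enhanced_index` can be factored into a standalone, clamped helper (the `blend_scores` name is an assumption) so the weighting is testable on its own:

```python
def blend_scores(traditional: float, ml_prediction: float, ml_weight: float = 0.3) -> float:
    """Weighted blend of the rule-based score and the ML prediction, clamped to 0-100."""
    blended = (1.0 - ml_weight) * traditional + ml_weight * ml_prediction
    return max(0.0, min(100.0, blended))
```

Keeping the clamp inside the helper guarantees an out-of-range ML prediction can never push the published index outside the 0-100 scale.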

Testing and Validation

Unit Tests

Create tests/test_sentiment_analyzer.py:

import unittest
from src.sentiment_analyzer import CryptoSentimentAnalyzer

class TestSentimentAnalyzer(unittest.TestCase):
    def setUp(self):
        self.analyzer = CryptoSentimentAnalyzer()
    
    def test_fear_sentiment(self):
        """Test fear sentiment detection."""
        text = "Bitcoin is crashing! Panic selling everywhere!"
        result = self.analyzer.analyze_text(text)
        
        self.assertGreater(result['fear'], 0.5)
        self.assertLess(result['greed'], 0.5)
    
    def test_greed_sentiment(self):
        """Test greed sentiment detection."""
        text = "Bitcoin to the moon! HODL and buy more!"
        result = self.analyzer.analyze_text(text)
        
        self.assertGreater(result['greed'], 0.5)
        self.assertLess(result['fear'], 0.5)
    
    def test_neutral_sentiment(self):
        """Test neutral sentiment."""
        text = "Market conditions are stable today."
        result = self.analyzer.analyze_text(text)
        
        # Should be relatively balanced
        self.assertLess(abs(result['fear'] - result['greed']), 0.3)

if __name__ == '__main__':
    unittest.main()

Integration Tests

Create tests/test_integration.py:

import unittest
import time
from src.index_calculator import FearGreedIndexCalculator

class TestIntegration(unittest.TestCase):
    def setUp(self):
        self.calculator = FearGreedIndexCalculator()
    
    def test_full_index_calculation(self):
        """Test complete index calculation process."""
        start_time = time.time()
        result = self.calculator.calculate_fear_greed_index()
        calculation_time = time.time() - start_time
        
        # Verify result structure
        self.assertIn('index_score', result)
        self.assertIn('sentiment_label', result)
        self.assertIn('components', result)
        
        # Verify score range
        self.assertGreaterEqual(result['index_score'], 0)
        self.assertLessEqual(result['index_score'], 100)
        
        # Verify performance (should complete within 30 seconds)
        self.assertLess(calculation_time, 30)
        
        print(f"Integration test completed in {calculation_time:.2f}s")

if __name__ == '__main__':
    unittest.main()
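The `sentiment_label` field the integration test asserts on maps the numeric score to the five bands defined at the start of this guide; a minimal sketch of that mapping:

```python
def score_to_label(score: float) -> str:
    """Map a 0-100 index score to its sentiment band."""
    if score <= 24:
        return 'Extreme Fear'
    if score <= 44:
        return 'Fear'
    if score <= 55:
        return 'Neutral'
    if score <= 74:
        return 'Greed'
    return 'Extreme Greed'
```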

Conclusion

Building a crypto fear & greed index with Ollama provides powerful insights into market psychology. This system combines multiple data sources with AI-powered sentiment analysis to create accurate, real-time market sentiment measurements.

Key benefits of this approach:

  • Cost-effective: Local processing eliminates API fees
  • Customizable: Tailored specifically for cryptocurrency markets
  • Real-time: Updates every 15 minutes with fresh data
  • Comprehensive: Multiple sentiment indicators combined
  • Scalable: Easy to add new data sources and components

Your AI-powered market analysis system now processes social media sentiment, news headlines, price movements, and search trends to generate actionable trading insights. The modular design allows for easy enhancements and customization based on your specific needs.

Next steps for improvement:

  • Add more data sources (Reddit, Discord, Telegram)
  • Implement machine learning model training
  • Create mobile app interface
  • Add trading signal generation
  • Integrate with popular exchanges

This market sentiment analysis tool gives you a significant advantage in understanding crypto market emotions and making informed trading decisions based on collective market psychology.


Want to enhance your crypto trading strategy? This fear & greed index provides the market sentiment insights you need for better decision-making.