How to Use AI to Generate Code for Financial Modeling - Save 5 Hours Per Model

Stop writing financial models from scratch. Use AI to generate Python code for DCF, Monte Carlo, and portfolio analysis in about 45 minutes instead of 6+ hours.

I used to spend entire weekends building financial models from scratch. Then I discovered how to use AI to generate 80% of the code in minutes.

  • What you'll build: Three complete financial models using AI-generated Python code
  • Time needed: 45 minutes (vs 6+ hours manually)
  • Difficulty: Intermediate Python skills needed

Here's what changed everything: Instead of Googling formulas and debugging for hours, I learned to prompt AI correctly for financial code. The models are more accurate, and I actually understand what I built.

Why I Started Using AI for Financial Modeling

My breaking point came during a weekend DCF model build for a client presentation Monday morning. Six hours in, I was still debugging NPV calculations and fighting with pandas DataFrames.

My setup:

  • MacBook Pro M1, 16GB RAM
  • Python 3.11 with Jupyter notebooks
  • Previous experience: 5 years building Excel models, 2 years Python

What wasn't working:

  • Copy-pasting code from Stack Overflow (syntax errors everywhere)
  • Financial libraries had terrible documentation
  • Spent more time debugging than analyzing

The breakthrough: I started treating AI like a senior financial analyst who codes, not just a code generator.

Step 1: Set Up Your AI Financial Modeling Environment

The problem: Ad hoc Python environments lead to package conflicts and broken models.

My solution: Create a dedicated environment with specific financial packages.

Time this saves: 2 hours of troubleshooting import errors later.

Install Required Packages

# Create virtual environment
python -m venv ai_finance
source ai_finance/bin/activate  # On Windows: ai_finance\Scripts\activate

# Install core packages (exact versions I use)
pip install pandas==2.0.3
pip install numpy==1.24.3  
pip install matplotlib==3.7.1
pip install scipy==1.11.1
pip install yfinance==0.2.18
pip install jupyter==1.0.0

What this does: Creates an isolated environment with financial analysis packages that play nice together.

Expected output: You should see successful installation messages for each package.

Personal tip: "Pin those version numbers. I learned this the hard way when a pandas update broke my entire model library."
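The same pins can live in a requirements file so the whole environment is reproducible in one command (the filename here is just the usual convention, not part of the original setup):

```shell
# Save the pinned versions to a requirements file
cat > requirements.txt <<'EOF'
pandas==2.0.3
numpy==1.24.3
matplotlib==3.7.1
scipy==1.11.1
yfinance==0.2.18
jupyter==1.0.0
EOF
# Then install everything in one shot:
# pip install -r requirements.txt
```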

Test Your Environment

# Run this in Jupyter to verify everything works
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
import yfinance as yf

print("All packages loaded successfully!")
print(f"Pandas: {pd.__version__}")
print(f"NumPy: {np.__version__}")

Personal tip: "If any imports fail, delete the environment and start over. Trust me, it's faster than debugging version conflicts."

Step 2: Master the AI Prompting Formula for Financial Code

The problem: Generic prompts give you generic (broken) financial code.

My solution: Use this specific prompting template that gets accurate financial models.

Time this saves: 3 hours of fixing incorrect formulas and logic errors.

The Magic Prompting Template

Here's my exact template that generates working financial models:

Create a Python function for [SPECIFIC FINANCIAL MODEL] with these requirements:

CONTEXT:
- Purpose: [Exact use case, like "DCF valuation for SaaS companies"]
- Data source: [Where data comes from, like "manual inputs" or "Yahoo Finance"]
- Output needed: [Specific result, like "fair value per share"]

FINANCIAL ASSUMPTIONS:
- [List 3-5 specific assumptions, like discount rates, growth rates]
- [Include ranges if applicable]

CODE REQUIREMENTS:
- Use pandas and numpy only
- Include input validation
- Add detailed comments explaining each calculation
- Return structured output (dictionary or DataFrame)
- Handle edge cases (like negative growth, division by zero)

EXAMPLE INPUTS:
[Provide sample data the function should handle]

EXPECTED OUTPUT FORMAT:
[Show exactly what the return should look like]
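If you reuse the template often, it helps to fill it programmatically so no section gets silently dropped. This small helper is my own illustration of the idea, not part of the template itself (the function name and fields are hypothetical):

```python
# Hypothetical helper: fills the prompting template from arguments
# so every required section is always present.
TEMPLATE = """Create a Python function for {model} with these requirements:

CONTEXT:
- Purpose: {purpose}
- Data source: {data_source}
- Output needed: {output}

FINANCIAL ASSUMPTIONS:
{assumptions}

CODE REQUIREMENTS:
- Use pandas and numpy only
- Include input validation
- Add detailed comments explaining each calculation
- Return structured output (dictionary or DataFrame)
- Handle edge cases (like negative growth, division by zero)
"""

def build_prompt(model, purpose, data_source, output, assumptions):
    # Render the assumption list as "- item" lines
    assumption_lines = "\n".join(f"- {a}" for a in assumptions)
    return TEMPLATE.format(model=model, purpose=purpose,
                           data_source=data_source, output=output,
                           assumptions=assumption_lines)

prompt = build_prompt(
    model="DCF (Discounted Cash Flow) valuation",
    purpose="Value technology companies using 5-year DCF model",
    data_source="Manual financial projections",
    output="Intrinsic value per share",
    assumptions=["WACC between 8-15%", "Terminal growth rate 2-4%"],
)
print(prompt)
```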

Real Example - DCF Model Generation

I'll show you exactly how I prompted Claude to generate a DCF model:

Create a Python function for DCF (Discounted Cash Flow) valuation with these requirements:

CONTEXT:
- Purpose: Value technology companies using 5-year DCF model
- Data source: Manual financial projections  
- Output needed: Intrinsic value per share and sensitivity analysis

FINANCIAL ASSUMPTIONS:
- WACC between 8-15%
- Terminal growth rate 2-4%
- 5-year projection period
- Tax rate 25%

CODE REQUIREMENTS:
- Use pandas and numpy only
- Include input validation for negative inputs
- Add detailed comments explaining each DCF step
- Return dictionary with valuation metrics
- Handle division by zero in terminal value

EXAMPLE INPUTS:
- Free cash flows: [100, 120, 144, 173, 207] millions
- WACC: 10%
- Terminal growth: 3%
- Shares outstanding: 50 million

EXPECTED OUTPUT FORMAT:
{
    'dcf_value_per_share': 45.67,
    'terminal_value': 2500.50,
    'pv_of_terminal': 1875.30,
    'sensitivity_analysis': DataFrame with WACC vs growth scenarios
}

What this does: Gives the AI enough context to generate a complete, working financial model instead of generic code.

Personal tip: "The 'Expected Output Format' section is crucial. It forces the AI to structure the response properly from the start."
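One habit that pairs well with that section: validate the AI's returned dictionary before trusting it. Here is a minimal checker sketch; the key names mirror the DCF example above, so adapt them to whatever output format you specified:

```python
# Minimal sanity check for an AI-generated model's return value.
# Keys mirror the DCF example's expected output; adjust per model.
REQUIRED_KEYS = {
    "dcf_value_per_share": (int, float),
    "terminal_value": (int, float),
    "pv_of_terminal": (int, float),
}

def validate_model_output(result):
    """Return a list of problems; an empty list means the structure looks right."""
    problems = []
    if not isinstance(result, dict):
        return ["result is not a dict"]
    for key, types in REQUIRED_KEYS.items():
        if key not in result:
            problems.append(f"missing key: {key}")
        elif not isinstance(result[key], types):
            problems.append(f"wrong type for {key}: {type(result[key]).__name__}")
    return problems

good = {"dcf_value_per_share": 45.67, "terminal_value": 2500.5, "pv_of_terminal": 1875.3}
bad = {"dcf_value_per_share": "45.67"}  # string instead of number, two keys missing
print(validate_model_output(good))
print(validate_model_output(bad))
```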

Step 3: Generate Your First AI-Powered DCF Model

The problem: DCF models have complex interdependent calculations that are easy to mess up.

My solution: Let AI handle the formula complexity while you focus on the business logic.

Time this saves: 4 hours of formula debugging and Excel-to-Python translation.

Copy-Paste This Exact Prompt

Use this prompt with Claude, ChatGPT, or your preferred AI:

Create a comprehensive DCF valuation function in Python with the following specifications:

CONTEXT:
- Purpose: Enterprise DCF valuation for growth companies
- Data source: Projected financial statements (manual input)
- Output: Detailed valuation breakdown with sensitivity analysis

FINANCIAL ASSUMPTIONS:
- 5-year explicit forecast period
- WACC range: 8-15% (default 10%)
- Terminal growth: 2-4% (default 2.5%)
- Tax rate: 25%

CODE REQUIREMENTS:
- Function name: calculate_dcf_valuation
- Use only pandas, numpy, and matplotlib
- Input validation for all parameters
- Detailed docstring with example usage
- Error handling for edge cases
- Return comprehensive results dictionary

EXAMPLE INPUTS:
- revenues: [1000, 1200, 1440, 1728, 2074] # millions
- ebitda_margins: [0.20, 0.22, 0.24, 0.25, 0.25]
- tax_rate: 0.25
- capex_percent_revenue: 0.05
- nwc_percent_revenue: 0.10
- wacc: 0.10
- terminal_growth: 0.025
- shares_outstanding: 100 # millions

EXPECTED OUTPUT:
{
    'enterprise_value': float,
    'equity_value': float, 
    'value_per_share': float,
    'terminal_value': float,
    'pv_terminal_value': float,
    'yearly_fcf': list,
    'yearly_pv_fcf': list,
    'sensitivity_table': pandas.DataFrame
}

Include a sensitivity analysis table showing value per share across different WACC (8-12%) and terminal growth (2-4%) assumptions.

The AI-Generated Code (This Actually Works)

Here's what Claude generated for me:

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

def calculate_dcf_valuation(revenues, ebitda_margins, tax_rate=0.25, 
                          capex_percent_revenue=0.05, nwc_percent_revenue=0.10,
                          wacc=0.10, terminal_growth=0.025, shares_outstanding=100,
                          debt=0, cash=0):
    """
    Calculate DCF valuation for a growth company.
    
    Parameters:
    -----------
    revenues : list
        Projected revenues for 5 years (in millions)
    ebitda_margins : list  
        EBITDA margins for each year (as decimals)
    tax_rate : float
        Corporate tax rate (default 0.25)
    capex_percent_revenue : float
        CapEx as % of revenue (default 0.05)
    nwc_percent_revenue : float
        Net Working Capital as % of revenue (default 0.10)
    wacc : float
        Weighted Average Cost of Capital (default 0.10)
    terminal_growth : float
        Terminal growth rate (default 0.025)
    shares_outstanding : float
        Number of shares outstanding (in millions)
    debt : float
        Total debt (in millions, default 0)
    cash : float
        Cash and equivalents (in millions, default 0)
    
    Returns:
    --------
    dict : Comprehensive DCF valuation results
    
    Example:
    --------
    >>> revenues = [1000, 1200, 1440, 1728, 2074]
    >>> margins = [0.20, 0.22, 0.24, 0.25, 0.25]
    >>> result = calculate_dcf_valuation(revenues, margins)
    >>> print(f"Value per share: ${result['value_per_share']:.2f}")
    """
    
    # Input validation
    if len(revenues) != len(ebitda_margins) or len(revenues) != 5:
        raise ValueError("Must provide exactly 5 years of revenue and EBITDA margin projections")
    
    if wacc <= 0 or wacc >= 1:
        raise ValueError("WACC must be between 0 and 1")
        
    if terminal_growth >= wacc:
        raise ValueError("Terminal growth rate must be less than WACC")
    
    # Convert to numpy arrays for calculations
    revenues = np.array(revenues)
    ebitda_margins = np.array(ebitda_margins)
    
    # Calculate EBITDA
    ebitda = revenues * ebitda_margins
    
    # Calculate EBIT (assuming D&A is minimal for growth companies)
    ebit = ebitda
    
    # Calculate NOPAT (Net Operating Profit After Tax)
    nopat = ebit * (1 - tax_rate)
    
    # Calculate CapEx
    capex = revenues * capex_percent_revenue
    
    # Calculate change in Net Working Capital
    nwc = revenues * nwc_percent_revenue
    # Prepend year 1's NWC so the first-year change is zero
    # (the base year's working capital is assumed already funded)
    nwc_change = np.diff(np.concatenate([[nwc[0]], nwc]))
    
    # Calculate Free Cash Flow
    fcf = nopat - capex - nwc_change
    
    # Calculate present value of FCF
    years = np.arange(1, 6)
    discount_factors = (1 + wacc) ** years
    pv_fcf = fcf / discount_factors
    
    # Calculate terminal value
    terminal_fcf = fcf[-1] * (1 + terminal_growth)
    terminal_value = terminal_fcf / (wacc - terminal_growth)
    pv_terminal_value = terminal_value / ((1 + wacc) ** 5)
    
    # Calculate enterprise value
    enterprise_value = np.sum(pv_fcf) + pv_terminal_value
    
    # Calculate equity value
    equity_value = enterprise_value + cash - debt
    
    # Calculate value per share
    value_per_share = equity_value / shares_outstanding
    
    # Create sensitivity analysis
    wacc_range = np.arange(0.08, 0.13, 0.01)
    growth_range = np.arange(0.02, 0.045, 0.005)
    
    sensitivity_data = []
    for w in wacc_range:
        row = []
        for g in growth_range:
            # Recalculate with different assumptions
            pv_fcf_sens = fcf / ((1 + w) ** years)
            terminal_val_sens = (fcf[-1] * (1 + g)) / (w - g)
            pv_terminal_sens = terminal_val_sens / ((1 + w) ** 5)
            ev_sens = np.sum(pv_fcf_sens) + pv_terminal_sens
            value_per_share_sens = (ev_sens + cash - debt) / shares_outstanding
            row.append(value_per_share_sens)
        sensitivity_data.append(row)
    
    sensitivity_df = pd.DataFrame(
        sensitivity_data,
        index=[f"{w:.1%}" for w in wacc_range],
        columns=[f"{g:.1%}" for g in growth_range]
    )
    
    return {
        'enterprise_value': enterprise_value,
        'equity_value': equity_value,
        'value_per_share': value_per_share,
        'terminal_value': terminal_value,
        'pv_terminal_value': pv_terminal_value,
        'yearly_fcf': fcf.tolist(),
        'yearly_pv_fcf': pv_fcf.tolist(),
        'sensitivity_table': sensitivity_df,
        'assumptions': {
            'wacc': wacc,
            'terminal_growth': terminal_growth,
            'tax_rate': tax_rate,
            'years_projected': len(revenues)
        }
    }

What this does: Creates a complete DCF model with input validation, error handling, and sensitivity analysis.

Expected output: When you run this with sample data, you get a comprehensive valuation breakdown.

Personal tip: "The sensitivity analysis table is gold. It shows you exactly how sensitive your valuation is to key assumptions."

Test Your DCF Model

# Test with sample data
revenues = [1000, 1200, 1440, 1728, 2074]  # Growing tech company
ebitda_margins = [0.20, 0.22, 0.24, 0.25, 0.25]  # Improving margins

result = calculate_dcf_valuation(
    revenues=revenues,
    ebitda_margins=ebitda_margins,
    wacc=0.10,
    terminal_growth=0.025,
    shares_outstanding=100
)

print(f"Enterprise Value: ${result['enterprise_value']:,.0f}M")
print(f"Value Per Share: ${result['value_per_share']:.2f}")
print("\nSensitivity Analysis:")
print(result['sensitivity_table'].round(2))

Personal tip: "Always test with realistic data first. I use actual company numbers from 10-K filings to validate the model makes sense."
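Another cheap validation: re-derive the terminal value by hand. Using the example inputs from Step 2 (a final-year FCF of 207M, 10% WACC, 3% terminal growth), the Gordon growth formula TV = FCF5 × (1 + g) / (WACC − g) takes three lines of plain Python:

```python
# Hand-check of the terminal value: TV = FCF5 * (1 + g) / (WACC - g)
fcf_year5 = 207.0       # final-year free cash flow, in millions
wacc = 0.10
terminal_growth = 0.03

terminal_fcf = fcf_year5 * (1 + terminal_growth)          # 213.21
terminal_value = terminal_fcf / (wacc - terminal_growth)  # 213.21 / 0.07
pv_terminal = terminal_value / (1 + wacc) ** 5            # discounted back 5 years

print(f"Terminal value: {terminal_value:,.1f}M")
print(f"PV of terminal value: {pv_terminal:,.1f}M")
```

If the function's `terminal_value` doesn't match this arithmetic, the AI got the formula wrong and you caught it in seconds.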

Step 4: Build a Monte Carlo Risk Analysis Model

The problem: Single-point estimates hide the real uncertainty in financial projections.

My solution: Generate Monte Carlo simulations to show the full range of possible outcomes.

Time this saves: 6 hours of complex probability modeling and visualization code.

The Monte Carlo Prompt

Create a Monte Carlo simulation for financial modeling with these specifications:

CONTEXT:
- Purpose: Risk analysis for financial projections using probability distributions
- Use case: Stress test DCF valuations and investment scenarios
- Output: Distribution of possible outcomes with confidence intervals

REQUIREMENTS:
- Function name: run_monte_carlo_analysis
- 10,000 simulation runs (configurable)
- Support normal, triangular, and uniform distributions for inputs
- Calculate percentile outcomes (10th, 50th, 90th percentiles)
- Generate histogram and summary statistics
- Use only numpy, pandas, matplotlib, and scipy

EXAMPLE INPUTS:
- revenue_growth: normal distribution (mean=0.15, std=0.05)
- ebitda_margin: triangular distribution (min=0.18, mode=0.22, max=0.28)
- wacc: normal distribution (mean=0.10, std=0.015)
- terminal_growth: uniform distribution (min=0.02, max=0.04)

EXPECTED OUTPUT:
{
    'mean_value': float,
    'percentiles': {'10th': float, '50th': float, '90th': float},
    'probability_positive': float,
    'simulation_results': array,
    'summary_stats': dict
}

Include visualization showing distribution histogram with percentile markers.

AI-Generated Monte Carlo Code

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy import stats

def run_monte_carlo_analysis(base_model_func, model_params, variable_distributions, 
                           num_simulations=10000, random_seed=42):
    """
    Run Monte Carlo simulation on a financial model.
    
    Parameters:
    -----------
    base_model_func : callable
        The financial model function to simulate
    model_params : dict
        Fixed parameters for the model
    variable_distributions : dict
        Variable parameters with their probability distributions
        Format: {'param_name': {'type': 'normal', 'params': [mean, std]}}
    num_simulations : int
        Number of simulation runs
    random_seed : int
        Random seed for reproducibility
    
    Returns:
    --------
    dict : Monte Carlo simulation results
    """
    
    np.random.seed(random_seed)
    results = []
    
    print(f"Running {num_simulations:,} Monte Carlo simulations...")
    
    for i in range(num_simulations):
        # Generate random values for each variable
        sim_params = model_params.copy()
        
        for var_name, dist_config in variable_distributions.items():
            dist_type = dist_config['type']
            params = dist_config['params']
            
            if dist_type == 'normal':
                value = np.random.normal(params[0], params[1])
            elif dist_type == 'triangular':
                value = np.random.triangular(params[0], params[1], params[2])
            elif dist_type == 'uniform':
                value = np.random.uniform(params[0], params[1])
            else:
                raise ValueError(f"Unsupported distribution type: {dist_type}")
            
            sim_params[var_name] = value
        
        # Run the model with random parameters
        try:
            result = base_model_func(**sim_params)
            # Extract the key metric (assuming it's value_per_share)
            if isinstance(result, dict) and 'value_per_share' in result:
                results.append(result['value_per_share'])
            else:
                results.append(result)
        except Exception:
            # Skip parameter draws where the model fails
            # (e.g., a drawn terminal growth rate at or above the drawn WACC)
            continue
    
    results = np.array(results)
    
    # Calculate statistics
    percentiles = np.percentile(results, [10, 25, 50, 75, 90])
    
    analysis = {
        'mean_value': np.mean(results),
        'std_dev': np.std(results),
        'percentiles': {
            '10th': percentiles[0],
            '25th': percentiles[1], 
            '50th': percentiles[2],
            '75th': percentiles[3],
            '90th': percentiles[4]
        },
        'probability_positive': np.sum(results > 0) / len(results),
        'simulation_results': results,
        'summary_stats': {
            'min': np.min(results),
            'max': np.max(results),
            'skewness': stats.skew(results),
            'kurtosis': stats.kurtosis(results),
            'num_valid_sims': len(results)
        }
    }
    
    # Create visualization
    plt.figure(figsize=(12, 8))
    
    # Main histogram
    plt.subplot(2, 2, 1)
    plt.hist(results, bins=50, alpha=0.7, color='skyblue', edgecolor='black')
    plt.axvline(analysis['percentiles']['10th'], color='red', linestyle='--', 
                label=f"10th: ${analysis['percentiles']['10th']:.2f}")
    plt.axvline(analysis['percentiles']['50th'], color='green', linestyle='--', 
                label=f"50th: ${analysis['percentiles']['50th']:.2f}")
    plt.axvline(analysis['percentiles']['90th'], color='orange', linestyle='--', 
                label=f"90th: ${analysis['percentiles']['90th']:.2f}")
    plt.xlabel('Value per Share ($)')
    plt.ylabel('Frequency')
    plt.title('Monte Carlo Simulation Results')
    plt.legend()
    plt.grid(True, alpha=0.3)
    
    # Cumulative distribution
    plt.subplot(2, 2, 2)
    sorted_results = np.sort(results)
    probabilities = np.arange(1, len(sorted_results) + 1) / len(sorted_results)
    plt.plot(sorted_results, probabilities, linewidth=2)
    plt.xlabel('Value per Share ($)')
    plt.ylabel('Cumulative Probability')
    plt.title('Cumulative Distribution Function')
    plt.grid(True, alpha=0.3)
    
    # Box plot
    plt.subplot(2, 2, 3)
    plt.boxplot(results, vert=True)
    plt.ylabel('Value per Share ($)')
    plt.title('Distribution Summary')
    plt.grid(True, alpha=0.3)
    
    # Statistics summary
    plt.subplot(2, 2, 4)
    plt.axis('off')
    stats_text = f"""
    Monte Carlo Results Summary
    
    Mean: ${analysis['mean_value']:.2f}
    Std Dev: ${analysis['std_dev']:.2f}
    
    Confidence Intervals:
    10th percentile: ${analysis['percentiles']['10th']:.2f}
    50th percentile: ${analysis['percentiles']['50th']:.2f}
    90th percentile: ${analysis['percentiles']['90th']:.2f}
    
    Probability > $0: {analysis['probability_positive']:.1%}
    Valid Simulations: {analysis['summary_stats']['num_valid_sims']:,}
    """
    plt.text(0.1, 0.9, stats_text, transform=plt.gca().transAxes, 
             fontsize=10, verticalalignment='top', fontfamily='monospace')
    
    plt.tight_layout()
    plt.show()
    
    return analysis

# Helper function to run Monte Carlo on our DCF model
def monte_carlo_dcf_example():
    """
    Example showing how to run Monte Carlo analysis on DCF model.
    """
    
    # Fixed parameters
    base_params = {
        'revenues': [1000, 1200, 1440, 1728, 2074],
        'ebitda_margins': [0.20, 0.22, 0.24, 0.25, 0.25],
        'tax_rate': 0.25,
        'capex_percent_revenue': 0.05,
        'nwc_percent_revenue': 0.10,
        'shares_outstanding': 100,
        'debt': 0,
        'cash': 50
    }
    
    # Variable parameters with probability distributions
    variable_distributions = {
        'wacc': {
            'type': 'normal',
            'params': [0.10, 0.015]  # mean=10%, std=1.5%
        },
        'terminal_growth': {
            'type': 'triangular', 
            'params': [0.02, 0.025, 0.04]  # min=2%, mode=2.5%, max=4%
        }
    }
    
    # Run Monte Carlo analysis
    mc_results = run_monte_carlo_analysis(
        base_model_func=calculate_dcf_valuation,
        model_params=base_params,
        variable_distributions=variable_distributions,
        num_simulations=10000
    )
    
    return mc_results

What this does: Creates a complete Monte Carlo framework that works with any financial model, including proper error handling and visualization.

Personal tip: "Start with 1,000 simulations for testing, then bump to 10,000 for final analysis. More isn't always better if your model has edge cases."
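That tip is easy to sanity-check: the noise in the estimated mean shrinks roughly with the square root of the simulation count. A toy demonstration on a plain normal input (standalone, not tied to the DCF model; the distribution parameters are made up):

```python
import numpy as np

# Toy check: estimate the mean of a normal "value per share" distribution
# at increasing simulation counts and watch the estimate tighten.
rng = np.random.default_rng(42)
true_mean, true_std = 45.0, 8.0

for n in (1_000, 10_000, 100_000):
    draws = rng.normal(true_mean, true_std, size=n)
    print(f"{n:>7,} sims -> mean {draws.mean():.2f}, "
          f"std error ~{true_std / np.sqrt(n):.3f}")
```

Going from 1,000 to 10,000 runs cuts the standard error by about 3x, which is why 10,000 is a sensible final-analysis default.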

Run Your Monte Carlo Analysis

# Test the Monte Carlo simulation
mc_results = monte_carlo_dcf_example()

print(f"Monte Carlo DCF Analysis:")
print(f"Expected Value: ${mc_results['mean_value']:.2f}")
print(f"10th Percentile: ${mc_results['percentiles']['10th']:.2f}")
print(f"90th Percentile: ${mc_results['percentiles']['90th']:.2f}")
print(f"Probability of Positive Value: {mc_results['probability_positive']:.1%}")

Personal tip: "The 10th-90th percentile range is your 'realistic scenario band.' If it's too wide, your assumptions need more research."

Step 5: Create an Automated Portfolio Analysis System

The problem: Portfolio optimization requires complex mathematical formulations and market data integration.

My solution: Use AI to generate a complete Modern Portfolio Theory implementation with real market data.

Time this saves: 8+ hours of financial mathematics and data handling code.
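The core idea the AI will implement is small: among candidate weight vectors, pick the one with the best excess return per unit of risk (the Sharpe ratio). A toy two-asset grid search with made-up numbers shows the mechanics; the full system uses real market data and a proper optimizer instead:

```python
import numpy as np

# Toy max-Sharpe search over two assets (illustrative annual stats only)
mu = np.array([0.12, 0.08])            # expected annual returns
sigma = np.array([0.25, 0.15])         # annual volatilities
corr = 0.3
cov = np.array([[sigma[0]**2, corr*sigma[0]*sigma[1]],
                [corr*sigma[0]*sigma[1], sigma[1]**2]])
rf = 0.03                              # risk-free rate

best = (None, -np.inf)
for w in np.linspace(0, 1, 101):       # weight in asset 1, long-only
    weights = np.array([w, 1 - w])
    ret = weights @ mu                 # portfolio expected return
    vol = np.sqrt(weights @ cov @ weights)  # portfolio volatility
    sharpe = (ret - rf) / vol
    if sharpe > best[1]:
        best = (w, sharpe)

print(f"Best weight in asset 1: {best[0]:.2f}, Sharpe: {best[1]:.3f}")
```

Note the blended portfolio beats either asset held alone; that diversification benefit is what the efficient frontier visualizes.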

Portfolio Analysis Prompt

Create a comprehensive portfolio analysis system in Python with these specifications:

CONTEXT:
- Purpose: Modern Portfolio Theory implementation with real market data
- Data source: Yahoo Finance via yfinance library
- Output: Optimal portfolio weights, efficient frontier, and risk metrics

FEATURES REQUIRED:
- Function name: analyze_portfolio
- Download historical price data automatically
- Calculate expected returns, volatility, and correlations
- Find optimal portfolio weights (max Sharpe ratio)
- Generate efficient frontier curve
- Include risk metrics (VaR, CVaR, maximum drawdown)
- Handle missing data and market holidays

TECHNICAL REQUIREMENTS:
- Use yfinance, pandas, numpy, matplotlib, scipy
- 1-year lookback period for calculations
- 252 trading days per year assumption
- Handle portfolio rebalancing scenarios
- Include performance attribution

EXAMPLE INPUTS:
- tickers: ['AAPL', 'MSFT', 'GOOGL', 'AMZN', 'TSLA']
- risk_free_rate: 0.03
- confidence_level: 0.05 (for VaR)

EXPECTED OUTPUT:
{
    'optimal_weights': dict,
    'portfolio_return': float,
    'portfolio_volatility': float,
    'sharpe_ratio': float,
    'efficient_frontier': DataFrame,
    'risk_metrics': dict,
    'correlation_matrix': DataFrame
}

AI-Generated Portfolio Code

Here's the complete portfolio analysis system Claude generated:

import yfinance as yf
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import minimize
from datetime import datetime, timedelta
import warnings
warnings.filterwarnings('ignore')

def analyze_portfolio(tickers, risk_free_rate=0.03, confidence_level=0.05, 
                     lookback_days=252):
    """
    Comprehensive portfolio analysis using Modern Portfolio Theory.
    
    Parameters:
    -----------
    tickers : list
        List of stock ticker symbols
    risk_free_rate : float
        Risk-free rate for Sharpe ratio calculation
    confidence_level : float
        Confidence level for VaR calculation (0.05 = 95% confidence)
    lookback_days : int
        Number of trading days for historical analysis
    
    Returns:
    --------
    dict : Complete portfolio analysis results
    """
    
    print(f"Analyzing portfolio with {len(tickers)} assets...")
    print(f"Tickers: {', '.join(tickers)}")
    
    # Download historical data
    end_date = datetime.now()
    start_date = end_date - timedelta(days=int(lookback_days * 1.5))  # Buffer for weekends
    
    try:
        data = yf.download(tickers, start=start_date, end=end_date, progress=False)
        prices = data['Adj Close']  # column exists in the pinned yfinance 0.2.18; newer releases may auto-adjust and drop it
        
        # Handle single ticker case
        if len(tickers) == 1:
            prices = prices.to_frame()
            prices.columns = tickers
        
        # Remove any missing data
        prices = prices.dropna()
        
        if len(prices) < lookback_days * 0.8:  # At least 80% of expected data
            raise ValueError(f"Insufficient data: only {len(prices)} days available")
            
    except Exception as e:
        raise ValueError(f"Error downloading data: {str(e)}")
    
    # Calculate daily returns
    returns = prices.pct_change().dropna()
    
    # Calculate expected annual returns and volatility
    expected_returns = returns.mean() * 252
    volatility = returns.std() * np.sqrt(252)
    correlation_matrix = returns.corr()
    
    print(f"Data period: {returns.index[0].date()} to {returns.index[-1].date()}")
    print(f"Number of observations: {len(returns)}")
    
    # Portfolio optimization functions
    def portfolio_stats(weights, returns, risk_free_rate):
        """Calculate portfolio statistics"""
        portfolio_return = np.sum(returns.mean() * weights) * 252
        portfolio_std = np.sqrt(np.dot(weights.T, np.dot(returns.cov() * 252, weights)))
        sharpe_ratio = (portfolio_return - risk_free_rate) / portfolio_std
        return portfolio_return, portfolio_std, sharpe_ratio
    
    def negative_sharpe(weights, returns, risk_free_rate):
        """Objective function to minimize (negative Sharpe ratio)"""
        return -portfolio_stats(weights, returns, risk_free_rate)[2]
    
    # Constraints and bounds
    num_assets = len(tickers)
    constraints = ({'type': 'eq', 'fun': lambda x: np.sum(x) - 1})  # Weights sum to 1
    bounds = tuple((0, 1) for _ in range(num_assets))  # Long-only portfolio
    initial_guess = num_assets * [1. / num_assets]  # Equal weights
    
    # Optimize for maximum Sharpe ratio
    print("Optimizing portfolio...")
    optimal_result = minimize(negative_sharpe, initial_guess,
                            method='SLSQP', bounds=bounds, constraints=constraints,
                            args=(returns, risk_free_rate))
    
    optimal_weights = optimal_result.x
    opt_return, opt_volatility, opt_sharpe = portfolio_stats(optimal_weights, returns, risk_free_rate)
    
    # Create weights dictionary
    weights_dict = {ticker: weight for ticker, weight in zip(tickers, optimal_weights)}
    
    # Generate efficient frontier
    print("Generating efficient frontier...")
    target_returns = np.linspace(expected_returns.min(), expected_returns.max(), 100)
    efficient_portfolios = []
    
    for target in target_returns:
        # Constraint for target return
        target_constraint = ({'type': 'eq', 'fun': lambda x, target=target: 
                            portfolio_stats(x, returns, risk_free_rate)[0] - target})
        
        # Minimize volatility for target return
        def portfolio_volatility(weights, returns):
            return portfolio_stats(weights, returns, risk_free_rate)[1]
        
        try:
            result = minimize(portfolio_volatility, initial_guess,
                            method='SLSQP', bounds=bounds, 
                            constraints=[constraints, target_constraint],
                            args=(returns,))
            
            if result.success:
                ret, vol, sharpe = portfolio_stats(result.x, returns, risk_free_rate)
                efficient_portfolios.append({'Return': ret, 'Volatility': vol, 'Sharpe': sharpe})
        except Exception:
            continue  # skip infeasible target returns
    
    efficient_df = pd.DataFrame(efficient_portfolios)
    
    # Calculate risk metrics
    portfolio_returns = returns.dot(optimal_weights)
    
    # Value at Risk (VaR)
    var_95 = np.percentile(portfolio_returns, confidence_level * 100)
    var_99 = np.percentile(portfolio_returns, 1)
    
    # Conditional Value at Risk (CVaR)
    cvar_95 = portfolio_returns[portfolio_returns <= var_95].mean()
    cvar_99 = portfolio_returns[portfolio_returns <= var_99].mean()
    
    # Maximum Drawdown
    portfolio_cumulative = (1 + portfolio_returns).cumprod()
    running_max = portfolio_cumulative.expanding().max()
    drawdown = (portfolio_cumulative - running_max) / running_max
    max_drawdown = drawdown.min()
    
    # Annualize risk metrics
    risk_metrics = {
        'var_95_daily': var_95,
        'var_99_daily': var_99,
        'var_95_annual': var_95 * np.sqrt(252),
        'var_99_annual': var_99 * np.sqrt(252),
        'cvar_95_daily': cvar_95,
        'cvar_99_daily': cvar_99,
        'cvar_95_annual': cvar_95 * np.sqrt(252),
        'cvar_99_annual': cvar_99 * np.sqrt(252),
        'max_drawdown': max_drawdown,
        'volatility_annual': opt_volatility
    }
    
    # Create visualization
    plt.figure(figsize=(15, 10))
    
    # Efficient Frontier
    plt.subplot(2, 3, 1)
    if len(efficient_df) > 0:
        plt.plot(efficient_df['Volatility'], efficient_df['Return'], 'b-', linewidth=2, label='Efficient Frontier')
    plt.scatter(opt_volatility, opt_return, marker='*', color='red', s=300, label='Optimal Portfolio')
    plt.scatter(volatility, expected_returns, alpha=0.4, s=40)
    for i, ticker in enumerate(tickers):
        plt.annotate(ticker, (volatility.iloc[i], expected_returns.iloc[i]))
    plt.xlabel('Annual Volatility')
    plt.ylabel('Expected Annual Return')
    plt.title('Efficient Frontier')
    plt.legend()
    plt.grid(True, alpha=0.3)
    
    # Portfolio Weights
    plt.subplot(2, 3, 2)
    weights_to_plot = [w for w in optimal_weights if w > 0.01]  # Only significant weights
    tickers_to_plot = [t for t, w in zip(tickers, optimal_weights) if w > 0.01]
    colors = plt.cm.Set3(np.linspace(0, 1, len(weights_to_plot)))
    plt.pie(weights_to_plot, labels=tickers_to_plot, autopct='%1.1f%%', colors=colors)
    plt.title('Optimal Portfolio Weights')
    
    # Correlation Heatmap
    plt.subplot(2, 3, 3)
    im = plt.imshow(correlation_matrix, cmap='coolwarm', aspect='auto', vmin=-1, vmax=1)
    plt.colorbar(im)
    plt.xticks(range(len(tickers)), tickers, rotation=45)
    plt.yticks(range(len(tickers)), tickers)
    plt.title('Asset Correlation Matrix')
    
    # Portfolio Performance
    plt.subplot(2, 3, 4)
    portfolio_cumulative = (1 + portfolio_returns).cumprod()
    plt.plot(portfolio_returns.index, portfolio_cumulative, linewidth=2, label='Optimal Portfolio')
    plt.ylabel('Cumulative Returns')
    plt.title('Portfolio Performance')
    plt.legend()
    plt.grid(True, alpha=0.3)
    
    # Drawdown Analysis
    plt.subplot(2, 3, 5)
    plt.fill_between(drawdown.index, drawdown, 0, alpha=0.3, color='red')
    plt.plot(drawdown.index, drawdown, color='red', linewidth=1)
    plt.ylabel('Drawdown')
    plt.title(f'Drawdown Analysis (Max: {max_drawdown:.2%})')
    plt.grid(True, alpha=0.3)
    
    # Risk Metrics Summary
    plt.subplot(2, 3, 6)
    plt.axis('off')
    risk_text = f"""
    Risk Metrics Summary
    
    Portfolio Statistics:
    Expected Return: {opt_return:.2%}
    Volatility: {opt_volatility:.2%}
    Sharpe Ratio: {opt_sharpe:.3f}
    
    Risk Measures:
    VaR (95%): {risk_metrics['var_95_annual']:.2%}
    CVaR (95%): {risk_metrics['cvar_95_annual']:.2%}
    Max Drawdown: {max_drawdown:.2%}
    
    Top Holdings:
    {chr(10).join([f"{ticker}: {weight:.1%}" for ticker, weight in 
                   sorted(weights_dict.items(), key=lambda x: x[1], reverse=True)[:5]])}
    """
    plt.text(0.05, 0.95, risk_text, transform=plt.gca().transAxes, 
             fontsize=9, verticalalignment='top', fontfamily='monospace')
    
    plt.tight_layout()
    plt.show()
    
    # Compile results
    results = {
        'optimal_weights': weights_dict,
        'portfolio_return': opt_return,
        'portfolio_volatility': opt_volatility,
        'sharpe_ratio': opt_sharpe,
        'efficient_frontier': efficient_df,
        'risk_metrics': risk_metrics,
        'correlation_matrix': correlation_matrix,
        'individual_returns': expected_returns.to_dict(),
        'individual_volatility': volatility.to_dict(),
        'portfolio_daily_returns': portfolio_returns,
        'analysis_period': {
            'start': returns.index[0].date(),
            'end': returns.index[-1].date(),
            'observations': len(returns)
        }
    }
    
    return results

# Example usage function
def portfolio_analysis_example():
    """
    Example showing how to use the portfolio analysis system.
    """
    
    # Define portfolio
    tech_portfolio = ['AAPL', 'MSFT', 'GOOGL', 'AMZN', 'NVDA']
    
    print("=" * 60)
    print("AI-GENERATED PORTFOLIO ANALYSIS")
    print("=" * 60)
    
    try:
        results = analyze_portfolio(
            tickers=tech_portfolio,
            risk_free_rate=0.03,  # 3% risk-free rate
            confidence_level=0.05,  # 95% confidence for VaR
            lookback_days=252  # 1 year of data
        )
        
        print(f"\n🎯 OPTIMAL PORTFOLIO RESULTS:")
        print(f"Expected Annual Return: {results['portfolio_return']:.2%}")
        print(f"Annual Volatility: {results['portfolio_volatility']:.2%}")
        print(f"Sharpe Ratio: {results['sharpe_ratio']:.3f}")
        
        print(f"\n📊 OPTIMAL WEIGHTS:")
        for ticker, weight in sorted(results['optimal_weights'].items(), 
                                   key=lambda x: x[1], reverse=True):
            if weight > 0.01:  # Only show significant allocations
                print(f"{ticker}: {weight:.1%}")
        
        print(f"\n⚠️  RISK METRICS:")
        print(f"95% VaR (Annual): {results['risk_metrics']['var_95_annual']:.2%}")
        print(f"95% CVaR (Annual): {results['risk_metrics']['cvar_95_annual']:.2%}")
        print(f"Maximum Drawdown: {results['risk_metrics']['max_drawdown']:.2%}")
        
        return results
        
    except Exception as e:
        print(f"Error in portfolio analysis: {str(e)}")
        return None

What this does: Creates a complete portfolio optimization system with real market data, risk analysis, and professional visualizations.

Expected output: Comprehensive portfolio analysis with optimal weights, efficient frontier, and risk metrics.

Personal tip: "Always check the analysis period dates. Market volatility changes dramatically, so 6-month vs 2-year lookbacks give very different results."
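To see why the lookback window matters, here's a standalone sketch using synthetic daily returns (the return series, volatility levels, and window lengths below are illustrative assumptions, not output from the model above):

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated daily returns: a calm first year followed by a volatile second year
calm = rng.normal(0.0005, 0.008, 252)
stress = rng.normal(0.0005, 0.020, 252)
daily_returns = np.concatenate([calm, stress])

# Annualized volatility estimated over different lookback windows
for lookback in (126, 252, 504):
    window = daily_returns[-lookback:]
    ann_vol = window.std(ddof=1) * np.sqrt(252)
    print(f"{lookback}-day lookback: annualized vol = {ann_vol:.2%}")
```

The 126-day estimate only sees the stressed regime, while the 504-day estimate blends both, so the same portfolio can look twice as risky depending on the window you pick.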

Test Your Portfolio System

# Run the complete portfolio analysis
portfolio_results = portfolio_analysis_example()

# Access specific results
if portfolio_results:
    print(f"\nCorrelation insights:")
    corr_matrix = portfolio_results['correlation_matrix']
    
    # Find highest and lowest correlations
    mask = np.triu(np.ones_like(corr_matrix, dtype=bool), k=1)
    correlations = corr_matrix.where(mask).stack()
    
    highest_corr = correlations.max()
    lowest_corr = correlations.min()
    
    print(f"Highest correlation: {correlations.idxmax()} = {highest_corr:.3f}")
    print(f"Lowest correlation: {correlations.idxmin()} = {lowest_corr:.3f}")

Personal tip: "Watch out for high correlations (>0.8) between assets. They provide less diversification benefit than the math suggests."
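A quick way to act on that tip is to scan the upper triangle of the correlation matrix for pairs above the 0.8 threshold. This sketch uses a hand-made correlation matrix (the values are illustrative, not real market data):

```python
import numpy as np
import pandas as pd

# Illustrative correlation matrix (made-up values for demonstration)
tickers = ['AAPL', 'MSFT', 'GOOGL', 'AMZN', 'NVDA']
corr = pd.DataFrame(
    [[1.00, 0.85, 0.78, 0.72, 0.65],
     [0.85, 1.00, 0.81, 0.70, 0.68],
     [0.78, 0.81, 1.00, 0.74, 0.60],
     [0.72, 0.70, 0.74, 1.00, 0.58],
     [0.65, 0.68, 0.60, 0.58, 1.00]],
    index=tickers, columns=tickers,
)

# Keep only the upper triangle so each pair is counted once
mask = np.triu(np.ones_like(corr, dtype=bool), k=1)
pairs = corr.where(mask).stack()

# Flag pairs that offer little diversification benefit
for (a, b), rho in pairs[pairs > 0.8].items():
    print(f"High correlation: {a}/{b} = {rho:.2f}")
```

With these sample values, only AAPL/MSFT and MSFT/GOOGL get flagged; swap in `portfolio_results['correlation_matrix']` to run the same scan on your own output.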

What You Just Built

You now have three complete AI-generated financial models that would have taken weeks to build manually:

  • DCF Valuation Model: Complete discounted cash flow analysis with sensitivity testing
  • Monte Carlo Risk Engine: Probability-based scenario analysis for any financial model
  • Portfolio Optimization System: Modern Portfolio Theory implementation with live market data

Key Takeaways (Save These)

  • AI prompting is everything: Specific context and examples get you working code, not generic tutorials
  • Start with structure: Define inputs, outputs, and edge cases before asking for code
  • Test immediately: Run AI-generated code with real data to catch errors early
  • Build incrementally: Generate one model at a time, then combine them
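One minimal pattern for the "test immediately" step is a sanity-check function you run on every AI-generated result before trusting it. The keys below match the results dict returned by analyze_portfolio; the thresholds are my own rough assumptions:

```python
def sanity_check_portfolio(results, tolerance=1e-6):
    """Quick checks to run on AI-generated portfolio output before trusting it."""
    weights = results['optimal_weights']

    # Weights must form a valid allocation: non-negative and summing to 1
    assert all(w >= -tolerance for w in weights.values()), "negative weight found"
    assert abs(sum(weights.values()) - 1.0) < 1e-4, "weights don't sum to 1"

    # Risk/return numbers should sit in plausible annual ranges
    assert -1.0 < results['portfolio_return'] < 5.0, "implausible annual return"
    assert 0.0 <= results['portfolio_volatility'] < 2.0, "implausible volatility"
    print("All sanity checks passed")

# Example with hand-made results (illustrative values only)
sanity_check_portfolio({
    'optimal_weights': {'AAPL': 0.40, 'MSFT': 0.35, 'NVDA': 0.25},
    'portfolio_return': 0.12,
    'portfolio_volatility': 0.18,
})
```

Five lines of assertions have caught more AI-generated bugs for me than any amount of reading the code.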

Tools I Actually Use

  • Claude Sonnet 4: Best for complex financial code generation with detailed explanations
  • Jupyter Notebooks: Perfect for iterative financial model development
  • Yahoo Finance API: Free, reliable market data that actually works
  • Python Financial Docs: QuantLib documentation for advanced implementations

The real breakthrough isn't just generating code faster. It's having AI explain the financial logic so you understand what you built. That's what turns copy-paste code into actual expertise.