Upscale 1080p Videos to 4K/8K with AI in 20 Minutes

Use Topaz Video AI, Real-ESRGAN, or ffmpeg-based tools to upscale old 1080p footage to 4K or 8K with sharp, artifact-free results.

Problem: Your 1080p Footage Looks Terrible on 4K Screens

You have old videos — screen recordings, family footage, archived content — that look soft and blurry on modern 4K and 8K displays. Traditional upscaling just stretches pixels. AI upscaling reconstructs detail.

You'll learn:

  • Which AI upscaling tool fits your workflow (free vs paid)
  • How to run Real-ESRGAN locally for batch processing
  • How to upscale with Topaz Video AI for production-quality results
  • Settings that avoid the "over-sharpened AI look"

Time: 20 min | Level: Intermediate


Why This Happens

Standard upscaling (bicubic, Lanczos) estimates missing pixels by averaging neighbors. It smooths edges slightly but invents no real detail. On a 4K monitor, a 1080p video supplies only a quarter of the pixels the panel can display — every soft edge is magnified.

AI upscalers use convolutional neural networks trained on millions of high/low-res pairs. They recognize textures — skin, foliage, text, fabric — and reconstruct plausible high-frequency detail instead of guessing.

Common symptoms of the original problem:

  • Soft or "watercolor" edges on fast-moving subjects
  • Text in old footage is unreadable when zoomed
  • Banding artifacts become obvious on large screens

The tradeoff to know upfront: AI upscaling adds processing time (minutes to hours per clip) and can hallucinate detail that wasn't there — faces in particular. Always review output before delivery.
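For a concrete baseline to compare against later, conventional (non-AI) upscaling with ffmpeg's Lanczos scaler looks like this. A minimal sketch — the filename is a placeholder, and the command is skipped if ffmpeg or the input is missing:

```shell
# Conventional 2x Lanczos upscale: fast, but invents no detail.
# Useful as an A/B reference against the AI output.

# Target dimensions for an integer scale factor
scaled_dim() { awk -v d="$1" -v f="$2" 'BEGIN { printf "%d", d * f }'; }

W=$(scaled_dim 1920 2)   # 3840
H=$(scaled_dim 1080 2)   # 2160

SRC=input_1080p.mp4
if command -v ffmpeg >/dev/null && [ -f "$SRC" ]; then
  ffmpeg -i "$SRC" -vf "scale=${W}:${H}:flags=lanczos" \
    -c:v libx264 -crf 18 -preset slow -c:a copy baseline_lanczos_4k.mp4
else
  echo "ffmpeg or $SRC not found; skipping baseline render"
fi
```

Keep the result around — it makes the quality difference in the AI path easy to judge.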


Solution

Step 1: Choose Your Tool

Three options cover most use cases:

Real-ESRGAN (free, open source) — best for batch processing, scriptable, runs on GPU or CPU. Ideal for tech content, screen recordings, and animation.

Topaz Video AI (~$300 one-time) — best output quality for live-action footage, faces, and film grain. Includes motion compensation between frames.

ffmpeg + Super Resolution filter — zero cost, but it requires an ffmpeg build with DNN/TensorFlow support, tops out around 2x, and trails the other two on quality.

Pick Real-ESRGAN if you want free and scriptable. Pick Topaz if output quality is the priority.


Step 2: Install Real-ESRGAN (Free Path)

# Clone and install
git clone https://github.com/xinntao/Real-ESRGAN.git
cd Real-ESRGAN
pip install basicsr facexlib gfpgan --break-system-packages
pip install -r requirements.txt --break-system-packages
python setup.py develop

# Download the video-optimized model
wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesr-animevideov3.pth -P experiments/pretrained_models

Expected: No errors during install. If you hit CUDA issues, force CPU inference by hiding the GPU in the next step (prefix the command with CUDA_VISIBLE_DEVICES=); it's slower but works.

If it fails:

  • No module named 'basicsr': Run pip install basicsr --break-system-packages again, then retry.
  • CUDA out of memory: Reduce the tile size, e.g. --tile 256 (see Step 3).
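Before committing to a long batch run, it's worth a quick check that PyTorch can actually see your GPU. A small sketch — it prints True for a usable CUDA device, False for CPU-only, and a notice if torch isn't importable:

```shell
# GPU sanity check: does PyTorch see a CUDA device?
gpu_status=$(python -c "import torch; print(torch.cuda.is_available())" 2>/dev/null \
  || echo "torch unavailable")
echo "CUDA available: $gpu_status"
```

If this prints False on a machine with an NVIDIA GPU, your PyTorch build doesn't match your CUDA driver — fix that before blaming Real-ESRGAN.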

Step 3: Extract Frames and Upscale

Real-ESRGAN works on images, not video directly. Extract frames first, upscale, then recompose.

# Step 3a: Extract frames losslessly (PNG needs no quality flag)
mkdir frames_in frames_out
ffmpeg -i input_1080p.mp4 frames_in/frame%08d.png

# Step 3b: Upscale all frames (2x = 4K from 1080p)
python inference_realesrgan.py \
  -n realesr-animevideov3 \
  -i frames_in \
  -o frames_out \
  --outscale 2 \
  --tile 512

# Step 3c: Recompose to video
ffmpeg -r 30 -i frames_out/frame%08d_out.png \
  -i input_1080p.mp4 \
  -map 0:v -map 1:a \
  -c:v libx264 -crf 18 -preset slow \
  -c:a copy \
  output_4k.mp4

Flag breakdown:

  • --outscale 2 — 2x upscale (1920×1080 → 3840×2160 = 4K; use --outscale 4 for 8K)
  • --tile 512 — processes in 512 px tiles to avoid VRAM overflow
  • Precision: fp16 (half precision) is the default and is ~2x faster on modern GPUs; add --fp32 if you see color shifts or banding, at roughly half the speed

Expected: frames_out/ fills with upscaled PNGs. The final output_4k.mp4 plays at twice the linear resolution of the source (3840×2160).

[Screenshot: terminal showing Real-ESRGAN batch progress. Progress bar per frame; on an RTX 4070, expect ~3-5 seconds per frame at 4x.]

If it fails:

  • RuntimeError: CUDA out of memory: Lower --tile to 256 or 128.
  • Frames out of sync with audio: Make sure -r in Step 3c matches your source frame rate. Check it with ffprobe input_1080p.mp4.
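The three sub-steps can be wrapped into one script, with the source frame rate detected via ffprobe so the recompose step never drifts out of sync with the audio. A sketch under the same assumptions as Steps 2-3 (filenames are examples; Real-ESRGAN installed as above):

```shell
#!/usr/bin/env bash
# Extract -> upscale -> recompose pipeline, with fps auto-detection.
set -eu

SRC=input_1080p.mp4
OUT=output_4k.mp4

# Parse ffprobe's r_frame_rate (e.g. "30000/1001") into decimal fps
parse_fps() {
  echo "$1" | awk -F/ '{ if (NF == 2) printf "%.3f", $1 / $2; else printf "%.3f", $1 }'
}

if command -v ffmpeg >/dev/null && [ -f "$SRC" ]; then
  rate=$(ffprobe -v error -select_streams v:0 \
    -show_entries stream=r_frame_rate -of csv=p=0 "$SRC")
  fps=$(parse_fps "$rate")

  mkdir -p frames_in frames_out
  ffmpeg -i "$SRC" frames_in/frame%08d.png
  python inference_realesrgan.py -n realesr-animevideov3 \
    -i frames_in -o frames_out --outscale 2 --tile 512
  ffmpeg -r "$fps" -i frames_out/frame%08d_out.png -i "$SRC" \
    -map 0:v -map 1:a -c:v libx264 -crf 18 -preset slow -c:a copy "$OUT"
else
  echo "ffmpeg or $SRC missing; nothing to do"
fi
```

The fps parsing matters because ffprobe reports NTSC rates as fractions like 30000/1001; hardcoding -r 30 against 29.97 fps footage is exactly what causes the audio drift described above.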

Step 4: Topaz Video AI Path (Paid, Easier)

If you're using Topaz Video AI, the workflow is simpler — it handles frames internally.

1. Open Topaz Video AI
2. Drag in your 1080p file
3. Under "Upscale", select "Proteus" model → set Output Resolution to 4K or 8K
4. Set "Recover Details" to 40-50 (higher = sharper but risks halos)
5. Enable "Stabilization" only if footage is shaky — it adds render time
6. Export → H.265, CRF 18

Settings that avoid the AI over-sharpening look:

  • Keep "Recover Details" at or below 50
  • Disable "Sharpen" unless the source is exceptionally soft
  • Use "Grain" setting 10-15 to restore film-like texture on live-action

[Screenshot: Topaz Video AI settings panel for a 4K upscale. Proteus model with conservative settings; a good starting point for most footage.]


Verification

# Confirm output resolution
ffprobe -v error -select_streams v:0 \
  -show_entries stream=width,height \
  -of csv=p=0 output_4k.mp4

You should see: 3840,2160 for 4K or 7680,4320 for 8K.

Play back a side-by-side in VLC: open both files, use View → Advanced Controls to sync playback. Text and fine edges should be noticeably crisper.
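If juggling two VLC windows is fiddly, you can instead render a single side-by-side clip with ffmpeg's hstack filter, Lanczos-scaling the original up to match the AI output's height first. A sketch using the example filenames from Step 3:

```shell
# Original (left, Lanczos-scaled to 4K) next to the AI upscale (right).
build_compare_filter() {
  printf '[0:v]scale=%s:%s:flags=lanczos[ref];[ref][1:v]hstack' "$1" "$2"
}

if command -v ffmpeg >/dev/null && [ -f input_1080p.mp4 ] && [ -f output_4k.mp4 ]; then
  ffmpeg -i input_1080p.mp4 -i output_4k.mp4 \
    -filter_complex "$(build_compare_filter 3840 2160)" \
    -c:v libx264 -crf 18 -an compare_sbs.mp4
else
  echo "inputs missing; skipping comparison render"
fi
```

Scrub through compare_sbs.mp4 on text, faces, and fast motion — those are where AI artifacts show first.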


What You Learned

  • Real-ESRGAN is the best free option for batch or scripted pipelines — especially for screen recordings and animation
  • Topaz Video AI gives better results on real faces and organic textures, but costs money
  • Upscaling 1080p → 4K requires --outscale 2; 1080p → 8K requires --outscale 4
  • Frame extraction and recomposition with ffmpeg gives you full control over codec and bitrate

Limitations to know:

  • AI upscaling can't recover motion blur — it will upscale the blur faithfully
  • Faces with heavy compression artifacts can come out "smoothed" with Topaz; Real-ESRGAN handles them more literally
  • Processing time scales with frame count, not clip length — a 10-minute 60fps clip is 36,000 frames
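The frame-count math is worth running before starting a long job. A quick estimate using the ~3-5 s/frame figure measured earlier (numbers are rough):

```shell
# Estimate total frames and wall-clock time for a batch upscale.
total_frames() { awk -v m="$1" -v fps="$2" 'BEGIN { printf "%d", m * 60 * fps }'; }
est_hours()    { awk -v f="$1" -v spf="$2" 'BEGIN { printf "%.1f", f * spf / 3600 }'; }

frames=$(total_frames 10 60)   # 10-minute clip at 60 fps
echo "frames: $frames"                                # -> 36000
echo "hours at 4 s/frame: $(est_hours "$frames" 4)"   # -> 40.0
```

A 10-minute 60 fps clip at 4 seconds per frame is a 40-hour job — plan overnight runs, or drop to 30 fps sources where you can.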

When NOT to use this: If the source has heavy H.264 blocking artifacts, upscaling makes the blocks larger, not smaller. Run a deblock filter first: ffmpeg -i input.mp4 -vf pp=hb/vb/dr output_deblocked.mp4.


Tested with Real-ESRGAN v0.3.0, Python 3.11, CUDA 12.3, ffmpeg 7.1 on Ubuntu 24.04 and macOS Sequoia.