Free Website Screenshot API

Capture full-page screenshots of any website with a single POST request. No signup, no API key, no watermarks. Just send a URL and get back an image.

Same engine, same rate limits as webshot.site. Supports JPG, PNG, and WebP. Built on headless Chrome (Puppeteer).

Why use the Webshot API?

🔓

No signup required

No API key, no account, no credit card. Start sending requests in seconds.

🚫

No watermarks

Every screenshot is clean. Use them anywhere — production, client work, monitoring.

📊

IP-based rate limits

Same fair limits as the website. Honest, predictable, and free for everyone.

🖼️

Three formats

JPG, PNG, or WebP. Pick the format that suits your use case best.

Quick Start

The fastest possible way to get a screenshot. Run this in your terminal right now:

curl -X POST https://webshot.site/api/capture \
  -H "Content-Type: application/json" \
  -d '{"url":"https://example.com","format":"png","mode":"desktop_full"}' \
  --output screenshot.png

That's it. No auth, no API key, no setup beyond the Content-Type header. The response body is the binary image. The mode parameter is optional and defaults to desktop_full.

Mobile viewport example

curl -X POST https://webshot.site/api/capture \
  -H "Content-Type: application/json" \
  -d '{"url":"https://example.com","format":"webp","mode":"mobile_viewport"}' \
  --output mobile-hero.webp

Captures the above-the-fold view at 390×844 (iPhone 15 Pro) with a mobile user-agent — perfect for consistent hero thumbnails.

Endpoint Reference

POST https://webshot.site/api/capture

Request body

Accepts application/json or application/x-www-form-urlencoded.

Parameter   Type     Required   Description
url string Yes The full URL to capture. Must start with http:// or https://. Private and reserved IPs are blocked (SSRF protection).
format string No Output format: jpg (default), png, or webp.
mode string No Capture mode (defaults to desktop_full):
  • desktop_full — entire scrollable page at 1920×1080 desktop viewport (default)
  • desktop_viewport — just the 1920×1080 above-the-fold view
  • mobile_full — entire scrollable page at 390×844 iPhone 15 Pro viewport with mobile UA
  • mobile_viewport — just the 390×844 above-the-fold mobile view
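
Because the endpoint also accepts application/x-www-form-urlencoded, you can skip JSON entirely. A minimal sketch using only the Python standard library; it builds and prints the form-encoded body (uncomment the last two lines to actually send the request):

```python
from urllib import parse, request

# Build a form-encoded body; urlencode percent-escapes the URL value.
body = parse.urlencode({
    "url": "https://example.com",
    "format": "png",
    "mode": "desktop_full",
}).encode()

req = request.Request(
    "https://webshot.site/api/capture",
    data=body,
    headers={"Content-Type": "application/x-www-form-urlencoded"},
)

print(body.decode())
# with request.urlopen(req, timeout=120) as resp, open("screenshot.png", "wb") as f:
#     f.write(resp.read())
```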

Response

On success (HTTP 200), the response body is the raw image binary with the appropriate Content-Type header (image/jpeg, image/png, or image/webp).

Every response includes rate limit + mode headers:

Header                  Description
X-RateLimit-Limit       Maximum captures per window for your IP.
X-RateLimit-Remaining   Captures left in the current window.
X-RateLimit-Reset       Unix timestamp when the next token is released.
X-Webshot-Mode          The capture mode used (echoes back your mode param or the default).
Retry-After             Seconds to wait before retrying (only on 429).

Status codes

Code   Meaning
200    Success — body is the screenshot image.
400    Bad request — invalid URL, missing parameter, or SSRF-blocked target.
405    Method not allowed — only POST is accepted.
429    Rate limit exceeded — wait Retry-After seconds and try again.
500    Capture failed — the target site couldn't be loaded or rendered.

Error response shape

All non-2xx responses return JSON:

{
  "error": "Rate limit exceeded. 5 captures per 900 seconds.",
  "retry_after": 142,
  "limit": 5,
  "available": 0,
  "docs": "https://webshot.site/developers"
}
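
A client can branch on this body directly. For instance, this sketch parses the documented 429 payload above and extracts the wait time (the 60-second fallback is an illustrative default, matching the retry samples further down):

```python
import json

# The documented error shape for a 429 response
payload = """{
  "error": "Rate limit exceeded. 5 captures per 900 seconds.",
  "retry_after": 142,
  "limit": 5,
  "available": 0,
  "docs": "https://webshot.site/developers"
}"""

err = json.loads(payload)
wait = err.get("retry_after", 60)  # fall back to 60s if the field is missing
print(f"limited to {err['limit']} captures per window; retrying in {wait}s")
```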

Rate Limits

The Webshot API is free for everyone, so we use IP-based rate limits to keep the service fair and available. The default limit is 5 captures per 15 minutes per IP address — exactly the same as the website.

Tokens refill continuously: roughly 1 capture every 3 minutes. You don't need to wait for the entire window to expire. Watch the X-RateLimit-Remaining header on every response to track your budget.
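
The refill arithmetic from the paragraph above, sketched in Python (an illustration of the documented numbers, not the server's exact algorithm):

```python
WINDOW_SECONDS = 15 * 60   # 900-second window
LIMIT = 5                  # captures per window per IP

# One token is released every WINDOW_SECONDS / LIMIT seconds.
refill_interval = WINDOW_SECONDS / LIMIT
print(f"one capture every {refill_interval:.0f}s")  # 180s, i.e. 3 minutes

def seconds_to_wait(remaining: int) -> float:
    """Pacing helper: sleep this long before the next capture,
    given the X-RateLimit-Remaining header value."""
    return 0.0 if remaining > 0 else refill_interval
```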

Need higher limits? Get in touch below.

Code Samples

Copy-paste ready examples in cURL, Python, Node.js, PHP, and Go. Every snippet hits the real /api/capture endpoint — no placeholders to fill in. Each language tab includes three patterns: a basic capture, a version with full error handling and rate limit retry, and a batch processor for capturing many URLs while respecting the 5-per-15-minute limit.

cURL

Basic capture

Save a screenshot to a file with one command. The response body is the binary image.

# Capture example.com as PNG and save to screenshot.png
curl -X POST https://webshot.site/api/capture \
  -H "Content-Type: application/json" \
  -d '{"url":"https://example.com","format":"png"}' \
  --output screenshot.png

Read rate-limit headers

Use -D - to dump the response headers to stdout while the image body is saved to disk, then grep for X-RateLimit-* to track your token budget. (With -i instead, the headers would be written into the output file along with the image, and grep would see nothing.)

# Save the image and print the rate-limit headers
curl -sS -D - -X POST https://webshot.site/api/capture \
  -H "Content-Type: application/json" \
  -d '{"url":"https://github.com","format":"webp"}' \
  --output screenshot.webp \
  | grep -i '^x-ratelimit\|^retry-after\|^http/'

# Example output:
#   HTTP/2 200
#   x-ratelimit-limit: 5
#   x-ratelimit-remaining: 4
#   x-ratelimit-reset: 1775789560

Batch capture from a URL list

Read URLs line by line from urls.txt and capture each one, sleeping 180 seconds between calls to stay under the limit.

# urls.txt contains one URL per line
i=0
while IFS= read -r url; do
  i=$((i+1))
  echo "[$i] Capturing $url"
  curl -sS -X POST https://webshot.site/api/capture \
    -H "Content-Type: application/json" \
    -d "{\"url\":\"$url\",\"format\":\"png\"}" \
    --output "shot-$i.png"
  sleep 180  # 5 captures per 15 min = 1 every 180s
done < urls.txt

Python

Basic capture

Use requests.post to fetch the screenshot binary and write it to disk. The mode parameter lets you pick desktop vs mobile and full-page vs viewport.

import requests

response = requests.post(
    'https://webshot.site/api/capture',
    json={
        'url': 'https://example.com',
        'format': 'png',
        'mode': 'desktop_full',  # desktop_full | desktop_viewport | mobile_full | mobile_viewport
    },
    timeout=120,
)
response.raise_for_status()

with open('screenshot.png', 'wb') as f:
    f.write(response.content)

print(f"Saved {len(response.content)} bytes in {response.headers['X-Webshot-Mode']} mode")

With error handling and 429 retry

Read rate-limit headers, retry automatically on 429 using the Retry-After value.

import requests, time

def capture(url, fmt='png', max_retries=3):
    for attempt in range(max_retries):
        r = requests.post(
            'https://webshot.site/api/capture',
            json={'url': url, 'format': fmt},
            timeout=120,
        )
        if r.status_code == 200:
            return r.content
        if r.status_code == 429:
            wait = int(r.headers.get('Retry-After', 60))
            print(f"Rate limited, sleeping {wait}s...")
            time.sleep(wait)
            continue
        raise RuntimeError(f"HTTP {r.status_code}: {r.json().get('error')}")
    raise RuntimeError("Max retries exceeded")

img = capture('https://github.com', 'webp')
open('github.webp', 'wb').write(img)

Batch capture multiple URLs

Process a list of URLs sequentially, sleeping between calls to respect the rate limit.

import requests, time, pathlib

URLS = [
    'https://example.com',
    'https://github.com',
    'https://wikipedia.org',
    'https://news.ycombinator.com',
]

pathlib.Path('shots').mkdir(exist_ok=True)

for i, url in enumerate(URLS, 1):
    print(f"[{i}/{len(URLS)}] {url}")
    r = requests.post(
        'https://webshot.site/api/capture',
        json={'url': url, 'format': 'png'},
        timeout=120,
    )
    if r.ok:
        pathlib.Path(f'shots/{i:03d}.png').write_bytes(r.content)
        print(f"  remaining: {r.headers.get('X-RateLimit-Remaining')}")
    if i < len(URLS):
        time.sleep(180)  # 1 capture per 180s = 5 per 15 min

Node.js

Basic capture

Use Node 18+'s native fetch to grab the screenshot and write it to disk. The mode parameter picks desktop/mobile and full/viewport.

// Node.js 18+ — native fetch, no dependencies
import { writeFile } from 'node:fs/promises';

const res = await fetch('https://webshot.site/api/capture', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    url: 'https://example.com',
    format: 'png',
    mode: 'mobile_viewport',  // desktop_full | desktop_viewport | mobile_full | mobile_viewport
  }),
});

if (!res.ok) throw new Error(`HTTP ${res.status}`);

const buffer = Buffer.from(await res.arrayBuffer());
await writeFile('screenshot.png', buffer);
console.log(`Saved ${buffer.length} bytes`);

With error handling and 429 retry

Wrap the call in an async function with automatic retry on rate limit.

async function capture(url, format = 'png', maxRetries = 3) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const res = await fetch('https://webshot.site/api/capture', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ url, format }),
    });

    if (res.ok) {
      return Buffer.from(await res.arrayBuffer());
    }
    if (res.status === 429) {
      const wait = parseInt(res.headers.get('retry-after') || '60', 10);
      console.log(`Rate limited, waiting ${wait}s...`);
      await new Promise(r => setTimeout(r, wait * 1000));
      continue;
    }
    const err = await res.json();
    throw new Error(err.error || `HTTP ${res.status}`);
  }
  throw new Error('Max retries exceeded');
}

const img = await capture('https://github.com', 'webp');
console.log(`Captured ${img.length} bytes`);

Batch capture multiple URLs

Loop over an array, sleeping 180 seconds between calls so you stay inside the 5/15min budget.

import { writeFile, mkdir } from 'node:fs/promises';

const urls = [
  'https://example.com',
  'https://github.com',
  'https://wikipedia.org',
  'https://news.ycombinator.com',
];

await mkdir('shots', { recursive: true });
const sleep = ms => new Promise(r => setTimeout(r, ms));

for (let i = 0; i < urls.length; i++) {
  console.log(`[${i + 1}/${urls.length}] ${urls[i]}`);
  const res = await fetch('https://webshot.site/api/capture', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ url: urls[i], format: 'png' }),
  });
  if (res.ok) {
    const buf = Buffer.from(await res.arrayBuffer());
    await writeFile(`shots/${String(i + 1).padStart(3, '0')}.png`, buf);
    console.log(`  remaining: ${res.headers.get('x-ratelimit-remaining')}`);
  }
  if (i < urls.length - 1) await sleep(180_000);
}

PHP

Basic capture

Plain curl_exec — no SDKs, no Composer dependencies. Pass mode to pick desktop/mobile and full/viewport.

<?php
$ch = curl_init('https://webshot.site/api/capture');
curl_setopt_array($ch, [
    CURLOPT_POST           => true,
    CURLOPT_HTTPHEADER     => ['Content-Type: application/json'],
    CURLOPT_POSTFIELDS     => json_encode([
        'url'    => 'https://example.com',
        'format' => 'png',
        'mode'   => 'desktop_viewport',  // desktop_full | desktop_viewport | mobile_full | mobile_viewport
    ]),
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_TIMEOUT        => 120,
]);

$body   = curl_exec($ch);
$status = curl_getinfo($ch, CURLINFO_HTTP_CODE);

if ($status === 200) {
    file_put_contents('screenshot.png', $body);
    echo 'Saved ' . strlen($body) . " bytes\n";
} else {
    echo "HTTP {$status}\n";
}

With rate-limit headers and 429 retry

Capture response headers via CURLOPT_HEADER, parse Retry-After, and retry on 429.

<?php
function capture(string $url, string $format = 'png', int $maxRetries = 3): string {
    for ($attempt = 0; $attempt < $maxRetries; $attempt++) {
        $ch = curl_init('https://webshot.site/api/capture');
        curl_setopt_array($ch, [
            CURLOPT_POST           => true,
            CURLOPT_HTTPHEADER     => ['Content-Type: application/json'],
            CURLOPT_POSTFIELDS     => json_encode(['url' => $url, 'format' => $format]),
            CURLOPT_RETURNTRANSFER => true,
            CURLOPT_TIMEOUT        => 120,
            CURLOPT_HEADER         => true,
        ]);
        $response   = curl_exec($ch);
        $headerSize = curl_getinfo($ch, CURLINFO_HEADER_SIZE);
        $status     = curl_getinfo($ch, CURLINFO_HTTP_CODE);
        $headers    = substr($response, 0, $headerSize);
        $body       = substr($response, $headerSize);

        if ($status === 200) return $body;
        if ($status === 429) {
            preg_match('/^retry-after:\s*(\d+)/im', $headers, $m);
            $wait = (int) ($m[1] ?? 60);
            echo "Rate limited, sleeping {$wait}s\n";
            sleep($wait);
            continue;
        }
        throw new RuntimeException("HTTP {$status}");
    }
    throw new RuntimeException('Max retries exceeded');
}

$img = capture('https://github.com', 'webp');
file_put_contents('github.webp', $img);

Batch capture multiple URLs

Process an array of URLs sequentially with a built-in delay between calls.

<?php
$urls = [
    'https://example.com',
    'https://github.com',
    'https://wikipedia.org',
    'https://news.ycombinator.com',
];

@mkdir('shots', 0755, true);

foreach ($urls as $i => $url) {
    $n = $i + 1;
    echo "[{$n}/" . count($urls) . "] {$url}\n";

    $ch = curl_init('https://webshot.site/api/capture');
    curl_setopt_array($ch, [
        CURLOPT_POST           => true,
        CURLOPT_HTTPHEADER     => ['Content-Type: application/json'],
        CURLOPT_POSTFIELDS     => json_encode(['url' => $url, 'format' => 'png']),
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_TIMEOUT        => 120,
    ]);
    $body = curl_exec($ch);

    if (curl_getinfo($ch, CURLINFO_HTTP_CODE) === 200) {
        file_put_contents(sprintf('shots/%03d.png', $n), $body);
    }
    if ($i < count($urls) - 1) sleep(180);
}

Go

Basic capture

Standard library only — no third-party packages required.

package main

import (
    "bytes"
    "encoding/json"
    "fmt"
    "io"
    "net/http"
    "os"
)

func main() {
    payload, _ := json.Marshal(map[string]string{
        "url":    "https://example.com",
        "format": "png",
    })

    resp, err := http.Post(
        "https://webshot.site/api/capture",
        "application/json",
        bytes.NewBuffer(payload),
    )
    if err != nil { panic(err) }
    defer resp.Body.Close()

    if resp.StatusCode != 200 {
        fmt.Printf("Error: HTTP %d\n", resp.StatusCode)
        return
    }

    f, _ := os.Create("screenshot.png")
    defer f.Close()
    n, _ := io.Copy(f, resp.Body)
    fmt.Printf("Saved %d bytes\n", n)
}

With 429 retry and exponential backoff

A reusable capture function that retries on rate limit, using the Retry-After header.

package main

import (
    "bytes"
    "encoding/json"
    "fmt"
    "io"
    "net/http"
    "strconv"
    "time"
)

func capture(url, format string, maxRetries int) ([]byte, error) {
    payload, _ := json.Marshal(map[string]string{"url": url, "format": format})

    for attempt := 0; attempt < maxRetries; attempt++ {
        resp, err := http.Post(
            "https://webshot.site/api/capture",
            "application/json",
            bytes.NewBuffer(payload),
        )
        if err != nil { return nil, err }

        if resp.StatusCode == 200 {
            data, readErr := io.ReadAll(resp.Body)
            resp.Body.Close() // close explicitly: a defer inside the loop would pile up until return
            return data, readErr
        }
        if resp.StatusCode == 429 {
            resp.Body.Close()
            wait, _ := strconv.Atoi(resp.Header.Get("Retry-After"))
            if wait == 0 { wait = 60 }
            fmt.Printf("Rate limited, sleeping %ds\n", wait)
            time.Sleep(time.Duration(wait) * time.Second)
            continue
        }
        resp.Body.Close()
        return nil, fmt.Errorf("HTTP %d", resp.StatusCode)
    }
    return nil, fmt.Errorf("max retries exceeded")
}

Batch capture from a slice

Loop over a slice of URLs, sleeping 180 seconds between captures.

package main

import (
    "bytes"
    "encoding/json"
    "fmt"
    "io"
    "net/http"
    "os"
    "time"
)

func main() {
    urls := []string{
        "https://example.com",
        "https://github.com",
        "https://wikipedia.org",
        "https://news.ycombinator.com",
    }
    os.MkdirAll("shots", 0755)

    for i, url := range urls {
        fmt.Printf("[%d/%d] %s\n", i+1, len(urls), url)
        payload, _ := json.Marshal(map[string]string{"url": url, "format": "png"})
        resp, err := http.Post("https://webshot.site/api/capture", "application/json", bytes.NewBuffer(payload))
        if err != nil { continue }

        if resp.StatusCode == 200 {
            f, _ := os.Create(fmt.Sprintf("shots/%03d.png", i+1))
            io.Copy(f, resp.Body)
            f.Close()
        }
        resp.Body.Close()

        if i < len(urls)-1 {
            time.Sleep(180 * time.Second)
        }
    }
}

Common Use Cases

📈

Website monitoring

Capture nightly snapshots to detect visual regressions or unauthorized changes.

🔬

Competitor research

Track how rival sites evolve over time without manual screenshots.

📰

Social media previews

Generate share images on the fly for any URL your users post.

🗄️

Web archiving

Build a visual archive of pages alongside the HTML for evidence or reference.

🧪

QA & testing

Capture states from staging or production as part of automated test suites.

📋

Client reporting

Embed up-to-date screenshots in dashboards, weekly emails, or PDF reports.

API vs Website

Both endpoints use the exact same screenshot engine and the same per-IP rate limits. The differences are in shape and intended use:

              Website (/)                           API (/api/capture)
Auth          CSRF token (browser session)          None — open POST
CORS          Same-origin only                      Open (Access-Control-Allow-Origin: *)
Response      JSON with download token              Binary image (direct)
Rate limit    5 / 15 min per IP                     5 / 15 min per IP (shared bucket)
Engine        Headless Chrome + Puppeteer Stealth   Headless Chrome + Puppeteer Stealth
Formats       JPG, PNG, WebP                        JPG, PNG, WebP
Best for      Casual one-off captures               Automation, scripts, integrations

Need higher limits?

The free tier covers most use cases, but if you need to capture thousands or millions of URLs per month, we have you covered. Tell us about your project and we'll get back to you.

Frequently Asked Questions

Is the Webshot API really free?

Yes. 100% free for everyone with the same IP-based rate limits as the website. No signup, no API key, no credit card, no watermarks. If you need higher limits for high-volume commercial use, contact [email protected].

Do I need an API key?

No API key is required. Just send a POST request to /api/capture with a JSON body containing the URL. The API returns the screenshot binary directly.

What are the rate limits?

The API uses the same IP-based rate limits as the website — by default, 5 captures per 15-minute window per IP. The current state is returned in the X-RateLimit-Limit and X-RateLimit-Remaining response headers on every API call.

Can I use Webshot commercially?

Yes. Webshot is free for commercial use under the same rate limits. For sustained high-volume commercial usage, please use the form above or email [email protected] to discuss higher limits.

How do I get higher rate limits?

Fill out the contact form above with your expected volume and use case, and we'll get back to you with options.

Does it work with JavaScript-heavy single-page applications?

Yes. Webshot uses headless Chrome via Puppeteer with the stealth plugin to render JavaScript, lazy-loaded images, and dynamic content before capturing the screenshot.

What output formats are supported?

Webshot supports JPG, PNG, and WebP. Pass the format parameter in your POST body. WebP is recommended for the smallest file size with excellent quality.

Can I capture pages behind a login?

The free public API does not support authenticated captures. For pages requiring login, cookies, or custom headers, contact [email protected] about enterprise options.
