Extract OG tags, social card data, favicon, canonical URL, and more from any webpage — one HTTP call, no browser setup required.
Get free API key → 100 calls/month free. No credit card.
One call returns everything. No follow-up requests, no parsing HTML yourself.
Clean JSON. Every field present. Ready to store, display, or pipe into your pipeline.
Sample /v1/metadata response
{
"url": "https://github.com/",
"title": "GitHub: Let's build from here",
"description": "GitHub is where over 100 million developers shape the future of software.",
"og_title": "GitHub: Let's build from here",
"og_description": "GitHub is where over 100 million developers shape the future of software.",
"og_image": "https://github.githubassets.com/images/modules/site/social-cards/homepage.png",
"og_type": "website",
"favicon": "https://github.com/favicon.ico",
"canonical": "https://github.com/",
"language": "en"
}
No SDK required. Plain HTTP works everywhere.
curl
# Fetch metadata for any URL
curl "https://snapapi.tech/v1/metadata?url=https://github.com" \
  -H "x-api-key: YOUR_KEY"

# Pipe into jq to extract just the OG image
curl "https://snapapi.tech/v1/metadata?url=https://github.com" \
  -H "x-api-key: YOUR_KEY" | jq '.og_image'
Python
import requests

# Single URL
res = requests.get(
    "https://snapapi.tech/v1/metadata",
    params={"url": "https://github.com"},
    headers={"x-api-key": "YOUR_KEY"},
)
data = res.json()
print(data["og_image"])  # https://github.githubassets.com/...

# Bulk audit — check OG completeness for a list of pages
pages = [
    "https://yoursite.com",
    "https://yoursite.com/blog",
    "https://yoursite.com/pricing",
]
for url in pages:
    r = requests.get(
        "https://snapapi.tech/v1/metadata",
        params={"url": url},
        headers={"x-api-key": "YOUR_KEY"},
    )
    d = r.json()
    missing = [f for f in ["og_title", "og_image", "og_description"] if not d.get(f)]
    print(f"{url} → {'PASS' if not missing else 'MISSING: ' + ', '.join(missing)}")
Anything that needs to understand a URL before displaying or processing it.
Bulk-check your entire site for missing OG tags, broken canonical URLs, or empty descriptions before a launch or migration.
Build Slack-style unfurl previews in your own app. Fetch title, description, and og:image and render them client-side without scraping.
Enrich a feed of URLs with structured metadata before storing them in a database. No headless browser infra to manage on your end.
Validate that links shared via your scheduling tool will render correctly on Twitter, LinkedIn, and iMessage before they go out.
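For the unfurl-preview use case, the response shape shown in the sample above maps directly onto a preview card. A minimal sketch of that mapping, with the caveat that the fallback order (og_* fields first, then the plain title/description) is a common convention rather than anything the API mandates:

```python
def build_preview(meta: dict) -> dict:
    """Pick the best available fields for a link-preview card."""
    return {
        "title": meta.get("og_title") or meta.get("title") or meta["url"],
        "description": meta.get("og_description") or meta.get("description") or "",
        "image": meta.get("og_image") or meta.get("favicon") or "",
        "href": meta.get("canonical") or meta["url"],
    }

# Using the sample /v1/metadata response from above:
meta = {
    "url": "https://github.com/",
    "title": "GitHub: Let's build from here",
    "description": "GitHub is where over 100 million developers shape the future of software.",
    "og_title": "GitHub: Let's build from here",
    "og_description": "",
    "og_image": "https://github.githubassets.com/images/modules/site/social-cards/homepage.png",
    "favicon": "https://github.com/favicon.ico",
    "canonical": "https://github.com/",
}
card = build_preview(meta)
```

Here the empty og_description falls through to the plain description, so the card always has something to render.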
Start free. Scale when you need to.
What fields does the metadata endpoint return?
Every call returns: url, title, description, og_title, og_description, og_image, og_type, favicon, canonical, and language. Fields present in the HTML will have their value; fields that are missing on the page return an empty string rather than null, so your code can always safely access every key.
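In practice this means you can index fields directly, and empty strings are falsy, so `or` fallbacks work without any None checks. A small sketch using a hypothetical page with no social tags:

```python
EXPECTED_FIELDS = [
    "url", "title", "description", "og_title", "og_description",
    "og_image", "og_type", "favicon", "canonical", "language",
]

# Simulated response for a page with no OG tags (hypothetical example):
# every key is present, missing fields are "" rather than null.
sparse = {f: "" for f in EXPECTED_FIELDS}
sparse["url"] = "https://example.com/bare-page"
sparse["title"] = "Bare page"

# Direct indexing is safe; "" is falsy, so fallbacks are a one-liner:
title = sparse["og_title"] or sparse["title"]
```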
Does it execute JavaScript to get the metadata?
Yes. Every request runs a real Chromium browser (via Puppeteer) and waits for the page to fully render before reading the <head>. This means OG tags injected by JavaScript frameworks (Next.js, Nuxt, SPA routers) are captured correctly — not just what's in the raw HTML source.
What are the rate limits on the free tier?
The free tier allows 100 calls per month with 1 concurrent request. Calls reset on the 1st of each month. There is no per-minute rate limit on the free tier — you can spend your 100 calls however you like. Paid tiers start at $9/month for 1,000 calls and unlock batch processing for higher-volume workloads.
How do I use this for a bulk OG audit?
On the Starter plan and above, use the POST /v1/batch endpoint with "endpoint": "metadata" and a list of URLs. You get back an array of metadata objects — one per URL — in a single response. On the free tier, loop through URLs one at a time with a small delay between requests to stay within the 1-concurrent limit.
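A sketch of what that batch audit could look like. The `"endpoint": "metadata"` key comes from the answer above; the exact key name for the URL list (`"urls"` here) and the simulated response are assumptions for illustration:

```python
# Batch request payload for POST /v1/batch (Starter plan and above).
payload = {
    "endpoint": "metadata",
    "urls": ["https://yoursite.com", "https://yoursite.com/blog"],
}
# resp = requests.post("https://snapapi.tech/v1/batch", json=payload,
#                      headers={"x-api-key": "YOUR_KEY"})
# results = resp.json()  # one metadata object per URL

def audit(results, required=("og_title", "og_image", "og_description")):
    """Map each audited URL to the list of OG fields it is missing."""
    return {r["url"]: [f for f in required if not r.get(f)] for r in results}

# With a simulated batch response:
results = [
    {"url": "https://yoursite.com", "og_title": "Home",
     "og_image": "https://yoursite.com/og.png", "og_description": "Hi"},
    {"url": "https://yoursite.com/blog", "og_title": "Blog",
     "og_image": "", "og_description": ""},
]
report = audit(results)
```

Feeding the real batch response into `audit()` gives you a launch-readiness report in one request instead of one per page.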