# Rank Tracking
Track your website’s keyword rankings across search engines, locations, and devices.
This guide shows you how to build a rank tracking system using the SerpWatch API.
## Overview
Rank tracking involves monitoring where your website appears in search results for specific keywords.
The SerpWatch API makes this easy by providing structured SERP data that you can parse to find your domain’s position.
## Use Cases
- SEO Performance Monitoring – Track how your rankings change over time
- Competitor Analysis – Monitor competitor positions for target keywords
- Local SEO – Track rankings in specific cities or regions
- Mobile vs Desktop – Compare rankings across device types
- Algorithm Updates – Detect ranking changes after Google updates
## How It Works

The rank tracking workflow consists of four steps:

1. **Prepare Keywords** – Organize your target keywords with their associated locations and device settings.
2. **Submit Batch Request** – Send keywords to the batch endpoint for efficient processing.
3. **Receive Results** – Get results via webhook or by polling the task status endpoint.
4. **Extract Rankings** – Parse the SERP data to find your domain's position in organic results.
## Step-by-Step Guide
### Step 1: Prepare Your Keywords
Create a list of keywords you want to track. For each keyword, specify the target location,
device type, and how many results to fetch (depth).
```python
# Define keywords to track
keywords_to_track = [
    {
        "keyword": "project management software",
        "location_name": "United States",
        "iso_code": "US",
        "device": "desktop",
        "depth": 100  # Get top 100 results
    },
    {
        "keyword": "best crm software",
        "location_name": "United States",
        "iso_code": "US",
        "device": "desktop",
        "depth": 100
    },
    {
        "keyword": "task management app",
        "location_name": "United Kingdom",
        "iso_code": "GB",
        "device": "mobile",
        "depth": 50
    }
]

# Your domain to track
TARGET_DOMAIN = "yourdomain.com"
```
```javascript
// Define keywords to track
const keywordsToTrack = [
  {
    keyword: "project management software",
    location_name: "United States",
    iso_code: "US",
    device: "desktop",
    depth: 100 // Get top 100 results
  },
  {
    keyword: "best crm software",
    location_name: "United States",
    iso_code: "US",
    device: "desktop",
    depth: 100
  },
  {
    keyword: "task management app",
    location_name: "United Kingdom",
    iso_code: "GB",
    device: "mobile",
    depth: 50
  }
];

// Your domain to track
const TARGET_DOMAIN = "yourdomain.com";
```
### Step 2: Submit Batch Request
Use the batch endpoint to submit all keywords in a single request. This is more efficient
than making individual requests and helps you stay within rate limits.
```bash
curl -X POST "https://engine.v2.serpwatch.io/api/v2/serp/crawl/google/batch" \
  -H "Authorization: Bearer $SERPWATCH_API_KEY" \
  -H "Content-Type: application/json" \
  -d '[
    {
      "keyword": "project management software",
      "location_name": "United States",
      "iso_code": "US",
      "device": "desktop",
      "depth": 100,
      "postback_url": "https://yourapp.com/webhook/serp"
    },
    {
      "keyword": "best crm software",
      "location_name": "United States",
      "iso_code": "US",
      "device": "desktop",
      "depth": 100,
      "postback_url": "https://yourapp.com/webhook/serp"
    }
  ]'
```
```python
import os

import requests

API_KEY = os.environ.get("SERPWATCH_API_KEY")
BASE_URL = "https://engine.v2.serpwatch.io"

# Prepare batch request
batch_requests = []
for kw in keywords_to_track:
    batch_requests.append({
        "keyword": kw["keyword"],
        "location_name": kw["location_name"],
        "iso_code": kw["iso_code"],
        "device": kw["device"],
        "depth": kw["depth"],
        "language_code": "en",
        "postback_url": "https://yourapp.com/webhook/serp"  # Optional
    })

# Submit batch
response = requests.post(
    f"{BASE_URL}/api/v2/serp/crawl/google/batch",
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json"
    },
    json=batch_requests
)
tasks = response.json()
print(f"Submitted {len(tasks)} tasks")

# Store task IDs for later retrieval
task_ids = [task["id"] for task in tasks]
```
```javascript
const API_KEY = process.env.SERPWATCH_API_KEY;
const BASE_URL = "https://engine.v2.serpwatch.io";

// Prepare batch request
const batchRequests = keywordsToTrack.map(kw => ({
  keyword: kw.keyword,
  location_name: kw.location_name,
  iso_code: kw.iso_code,
  device: kw.device,
  depth: kw.depth,
  language_code: "en",
  postback_url: "https://yourapp.com/webhook/serp" // Optional
}));

// Submit batch
const response = await fetch(
  `${BASE_URL}/api/v2/serp/crawl/google/batch`,
  {
    method: "POST",
    headers: {
      "Authorization": `Bearer ${API_KEY}`,
      "Content-Type": "application/json"
    },
    body: JSON.stringify(batchRequests)
  }
);
const tasks = await response.json();
console.log(`Submitted ${tasks.length} tasks`);

// Store task IDs for later retrieval
const taskIds = tasks.map(task => task.id);
```
> **Webhook vs Polling:** For production systems, use webhooks (`postback_url`) instead of polling. The API will POST results to your webhook URL when each task completes. See the Webhooks guide for details.
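If you go the webhook route, the handler on your side can reuse the same extraction logic shown in Step 4. Below is a minimal sketch of the payload-handling function; it assumes the webhook body matches the task object the polling endpoint returns (`status`, `keyword`, `result.left`), which you should verify against your actual webhook deliveries:

```python
from urllib.parse import urlparse

def handle_webhook_payload(payload, target_domain):
    """Process a task payload POSTed to your postback_url.

    Returns None for unfinished/failed tasks, otherwise a dict with
    the keyword and your domain's organic position (None if unranked).
    """
    if payload.get("status") not in ("success", "completed"):
        return None
    for item in payload.get("result", {}).get("left", []):
        if item.get("type") != "organic":
            continue
        domain = urlparse(item.get("url", "")).netloc.lower().removeprefix("www.")
        # Exact match or subdomain match (e.g. blog.yourdomain.com)
        if domain == target_domain or domain.endswith("." + target_domain):
            return {"keyword": payload.get("keyword"),
                    "position": int(item["position"])}
    return {"keyword": payload.get("keyword"), "position": None}
```

Wire this into whatever HTTP framework you already run (Flask, Express, etc.). The endpoint should return a 2xx quickly and defer any heavy processing, so the API does not retry or time out the delivery.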
### Step 3: Retrieve Results
If not using webhooks, poll the task endpoint until results are ready.
Tasks typically complete within 30-60 seconds.
```python
import time

def wait_for_task(task_id, max_wait=120):
    """Poll for task completion with timeout."""
    start_time = time.time()
    while time.time() - start_time < max_wait:
        response = requests.get(
            f"{BASE_URL}/api/v2/serp/crawl/{task_id}",
            headers={"Authorization": f"Bearer {API_KEY}"}
        )
        result = response.json()
        if result["status"] in ["success", "completed"]:
            return result
        elif result["status"] == "error":
            raise Exception(f"Task failed: {result.get('error_message')}")
        time.sleep(3)  # Wait 3 seconds between polls
    raise TimeoutError(f"Task {task_id} timed out")

# Collect all results
all_results = []
for task_id in task_ids:
    try:
        result = wait_for_task(task_id)
        all_results.append(result)
        print(f"Got results for: {result['keyword']}")
    except Exception as e:
        print(f"Error for task {task_id}: {e}")
```
```javascript
const sleep = (ms) => new Promise(resolve => setTimeout(resolve, ms));

async function waitForTask(taskId, maxWait = 120000) {
  const startTime = Date.now();
  while (Date.now() - startTime < maxWait) {
    const response = await fetch(
      `${BASE_URL}/api/v2/serp/crawl/${taskId}`,
      { headers: { "Authorization": `Bearer ${API_KEY}` } }
    );
    const result = await response.json();
    if (result.status === "success" || result.status === "completed") {
      return result;
    } else if (result.status === "error") {
      throw new Error(`Task failed: ${result.error_message}`);
    }
    await sleep(3000); // Wait 3 seconds between polls
  }
  throw new Error(`Task ${taskId} timed out`);
}

// Collect all results
const allResults = [];
for (const taskId of taskIds) {
  try {
    const result = await waitForTask(taskId);
    allResults.push(result);
    console.log(`Got results for: ${result.keyword}`);
  } catch (e) {
    console.error(`Error for task ${taskId}:`, e.message);
  }
}
```
### Step 4: Extract Your Rankings
Parse the organic results to find your domain’s position. The results include
the full URL and domain for each ranking.
```python
from urllib.parse import urlparse

def find_domain_rank(result, target_domain):
    """
    Find the ranking position for a domain in SERP results.

    Returns:
        dict with position, url, title, domain -- or None if not found
    """
    # Get organic results from the left array
    left_results = result.get("result", {}).get("left", [])
    organic = [r for r in left_results if r.get("type") == "organic"]

    target = target_domain.lower()
    for item in organic:
        url = item.get("url", "")
        domain = urlparse(url).netloc.lower().removeprefix("www.")
        # Exact match or subdomain match (e.g. blog.yourdomain.com);
        # a plain substring check would also match unrelated domains
        if domain == target or domain.endswith("." + target):
            return {
                "position": int(item["position"]),
                "url": url,
                "title": item.get("title", ""),
                "domain": domain
            }
    return None  # Domain not found in results

# Process all results
rankings = []
for result in all_results:
    rank_info = find_domain_rank(result, TARGET_DOMAIN)
    rankings.append({
        "keyword": result["keyword"],
        "location": result.get("location_name", ""),
        "device": result.get("device", ""),
        "position": rank_info["position"] if rank_info else None,
        "url": rank_info["url"] if rank_info else None,
        "found": rank_info is not None
    })

# Display results
print("\n=== Ranking Report ===")
for r in rankings:
    status = f"#{r['position']}" if r["found"] else "Not in top 100"
    print(f"{r['keyword']}: {status}")
```
```javascript
function findDomainRank(result, targetDomain) {
  /**
   * Find the ranking position for a domain in SERP results.
   * Returns an object with position, url, title, domain -- or null if not found.
   */
  // Get organic results from the left array
  const leftResults = result.result?.left || [];
  const organic = leftResults.filter(r => r.type === "organic");

  const target = targetDomain.toLowerCase();
  for (const item of organic) {
    const url = item.url || "";
    if (!url) continue; // new URL() throws on an empty string
    const domain = new URL(url).hostname.toLowerCase().replace(/^www\./, "");
    // Exact match or subdomain match (e.g. blog.yourdomain.com);
    // a plain substring check would also match unrelated domains
    if (domain === target || domain.endsWith("." + target)) {
      return {
        position: parseInt(item.position, 10),
        url,
        title: item.title || "",
        domain
      };
    }
  }
  return null; // Domain not found in results
}

// Process all results
const rankings = allResults.map(result => {
  const rankInfo = findDomainRank(result, TARGET_DOMAIN);
  return {
    keyword: result.keyword,
    location: result.location_name || "",
    device: result.device || "",
    position: rankInfo?.position ?? null,
    url: rankInfo?.url ?? null,
    found: rankInfo !== null
  };
});

// Display results
console.log("\n=== Ranking Report ===");
for (const r of rankings) {
  const status = r.found ? `#${r.position}` : "Not in top 100";
  console.log(`${r.keyword}: ${status}`);
}
```
## Complete Example
Here’s a complete Python script that tracks rankings and exports to CSV:
```python
#!/usr/bin/env python3
"""
SerpWatch Rank Tracker
Tracks keyword rankings and exports to CSV
"""
import csv
import os
import time
from datetime import datetime
from urllib.parse import urlparse

import requests

# Configuration
API_KEY = os.environ.get("SERPWATCH_API_KEY")
BASE_URL = "https://engine.v2.serpwatch.io"
TARGET_DOMAIN = "yourdomain.com"

KEYWORDS = [
    {"keyword": "project management software", "location": "United States", "iso_code": "US", "device": "desktop"},
    {"keyword": "best crm software", "location": "United States", "iso_code": "US", "device": "desktop"},
    {"keyword": "task management app", "location": "United Kingdom", "iso_code": "GB", "device": "mobile"},
]

def submit_batch(keywords):
    """Submit batch crawl request."""
    batch = [{
        "keyword": kw["keyword"],
        "location_name": kw["location"],
        "iso_code": kw["iso_code"],
        "device": kw["device"],
        "depth": 100,
        "language_code": "en"
    } for kw in keywords]
    response = requests.post(
        f"{BASE_URL}/api/v2/serp/crawl/google/batch",
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json"
        },
        json=batch
    )
    response.raise_for_status()
    return response.json()

def wait_for_results(task_ids, timeout=300):
    """Wait for all tasks to complete."""
    results = {}
    pending = set(task_ids)
    start = time.time()
    while pending and (time.time() - start) < timeout:
        for task_id in list(pending):
            response = requests.get(
                f"{BASE_URL}/api/v2/serp/crawl/{task_id}",
                headers={"Authorization": f"Bearer {API_KEY}"}
            )
            data = response.json()
            if data["status"] in ["success", "completed", "error"]:
                results[task_id] = data
                pending.remove(task_id)
        if pending:
            time.sleep(5)
    return results

def find_rank(result, domain):
    """Find domain position in organic results."""
    target = domain.lower()
    left_results = result.get("result", {}).get("left", [])
    for item in left_results:
        if item.get("type") != "organic":
            continue
        url = item.get("url", "")
        item_domain = urlparse(url).netloc.lower().removeprefix("www.")
        # Exact match or subdomain match
        if item_domain == target or item_domain.endswith("." + target):
            return int(item["position"])
    return None

def main():
    print(f"Tracking {len(KEYWORDS)} keywords for {TARGET_DOMAIN}")
    print("-" * 50)

    # Submit batch
    tasks = submit_batch(KEYWORDS)
    task_ids = [t["id"] for t in tasks]
    print(f"Submitted {len(tasks)} tasks")

    # Wait for results
    print("Waiting for results...")
    results = wait_for_results(task_ids)

    # Process and export
    timestamp = datetime.now().isoformat()
    rows = []
    for task in tasks:
        result = results.get(task["id"], {})
        finished = result.get("status") in ["success", "completed"]
        rank = find_rank(result, TARGET_DOMAIN) if finished else None
        rows.append({
            "timestamp": timestamp,
            "keyword": task["keyword"],
            "location": task.get("location_name", ""),
            "device": task.get("device", ""),
            "position": rank,
            "status": result.get("status", "unknown")
        })
        status = f"#{rank}" if rank else "Not ranked"
        print(f"  {task['keyword']}: {status}")

    # Export to CSV
    filename = f"rankings_{datetime.now().strftime('%Y%m%d_%H%M%S')}.csv"
    with open(filename, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=rows[0].keys())
        writer.writeheader()
        writer.writerows(rows)
    print(f"\nExported to {filename}")

if __name__ == "__main__":
    main()
```
## Best Practices
### Optimize for Efficiency

- Use batch endpoints – Submit multiple keywords per request to reduce API calls
- Set appropriate depth – Use `depth: 10` for top-10 tracking, `depth: 100` for deeper analysis
- Use caching – Set the `frequency` parameter to reuse recent results for the same query
- Schedule off-peak – Run large batch jobs during off-peak hours when possible
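The first three points can be combined in a small request-builder helper, sketched below. The `frequency` field is passed through as-is; its exact unit and semantics are not covered in this guide, so treat it as an assumption to verify against the API reference:

```python
def build_crawl_request(keyword, iso_code="US", location_name="United States",
                        device="desktop", depth=10, frequency=None):
    """Build one crawl-request body for the batch endpoint.

    depth defaults to 10 (top-10 tracking); raise it only when you
    actually need deeper results. frequency is optional and hypothetical
    in shape -- check the API reference before relying on it.
    """
    body = {
        "keyword": keyword,
        "location_name": location_name,
        "iso_code": iso_code,
        "device": device,
        "depth": depth,
        "language_code": "en",
    }
    if frequency is not None:
        body["frequency"] = frequency
    return body

# One batch request body per keyword, submitted together
batch = [build_crawl_request(kw) for kw in ["best crm software", "task management app"]]
```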
### Handle Results Properly

- Check for subdomains – Your content might rank on `www.`, `blog.`, or other subdomains
- Track multiple URLs – Different pages may rank for the same keyword over time
- Store raw data – Keep the full SERP data for historical analysis, not just positions
- Handle missing results – "Not found" doesn't always mean not ranking; the domain could simply be beyond the depth limit
> **Position Fluctuations:** Rankings naturally fluctuate throughout the day. For accurate tracking, run checks at consistent times and use averages over multiple data points.
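Averaging over multiple checks can be done directly on rows shaped like the CSV export above; the grouping key and rounding below are illustrative choices, not part of the API:

```python
from collections import defaultdict
from statistics import mean

def average_positions(rows):
    """Average tracked positions per keyword across multiple checks.

    rows are dicts with at least "keyword" and "position" keys, as in
    the CSV export above; unranked checks (position=None) are skipped.
    """
    by_keyword = defaultdict(list)
    for row in rows:
        if row["position"] is not None:
            by_keyword[row["keyword"]].append(row["position"])
    return {kw: round(mean(p), 1) for kw, p in by_keyword.items()}
```

In practice you would group by (keyword, location, device) rather than keyword alone, since the same keyword is tracked in several configurations.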
## Related Topics

- **Batch Processing** – Learn advanced batch techniques for processing thousands of keywords.
- **Webhooks** – Set up webhooks for real-time result delivery without polling.
- **Multi-Location** – Track rankings across different geographic locations.