
Quickstart

This guide walks you through creating your first aggregator and adding a source.

Before you begin, you'll need:

  • A Fetchosaurus account (sign up here)
  • A target website you want to collect data from
  1. Log in to the dashboard

    Go to app.fetchosaurus.com and sign in.

  2. Create an aggregator

    Click New Aggregator and choose a built-in schema:

    • Job - For job listings
    • Event - For events and happenings
    • Listing - For products or real estate
    • Article - For news and blog posts
    • Recipe - For recipes

    Give your aggregator a name (e.g., “Remote DevOps Jobs”).

  3. Add a source

    Click Add Source and paste the URL of a page containing the data you want to extract.

  4. Describe what to extract

    Tell the AI what data you want:

    “Extract all job listings. The title is in the h2 tag, company name is below it, and the location is in the gray text.”

  5. Preview and test

    Review the extracted data preview. If it looks correct, click Test Flight to run a full extraction.

  6. Save and schedule

    Choose a schedule (hourly, daily, weekly, or manual) and save your source.

Once your sources are running, fetch the collected items via the API:

curl -H "Authorization: Bearer YOUR_API_KEY" \
"https://api.fetchosaurus.com/api/v1/aggregators/YOUR_AGGREGATOR_ID/items"
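The same request can be made from Python with only the standard library. This is a sketch: the placeholders stand in for your real API key and aggregator ID, and the network call is left commented out so you can run it without credentials.

```python
import json
import urllib.request

API_KEY = "YOUR_API_KEY"
AGGREGATOR_ID = "YOUR_AGGREGATOR_ID"

# Build the items request with the bearer-token header the API expects.
url = f"https://api.fetchosaurus.com/api/v1/aggregators/{AGGREGATOR_ID}/items"
req = urllib.request.Request(url, headers={"Authorization": f"Bearer {API_KEY}"})

# Uncomment once you have real credentials:
# with urllib.request.urlopen(req) as resp:
#     payload = json.load(resp)
#     for item in payload["items"]:
#         print(item["data"]["title"])
```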

Response:

{
  "items": [
    {
      "id": "clx1abc123",
      "source_id": "clx2def456",
      "schema_version": 1,
      "created_at": "2026-01-20T10:30:00Z",
      "data": {
        "title": "Senior DevOps Engineer",
        "company": "Acme Corp",
        "location": "Remote",
        "url": "https://acme.com/jobs/123"
      }
    }
  ],
  "next_cursor": "48291",
  "has_more": true
}
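The `next_cursor` and `has_more` fields indicate cursor-based pagination: keep requesting pages, passing the previous `next_cursor`, until `has_more` is false. A minimal paging loop, written against a `fetch_page` callable so it stays runnable without network access (how the cursor is actually sent, e.g. as a `cursor` query parameter, is an assumption not documented in this guide):

```python
def fetch_all_items(fetch_page):
    """Collect items across all pages.

    `fetch_page(cursor)` must return a parsed response dict shaped like the
    example above: {"items": [...], "next_cursor": ..., "has_more": bool}.
    Pass cursor=None for the first page.
    """
    items = []
    cursor = None
    while True:
        page = fetch_page(cursor)
        items.extend(page["items"])
        if not page["has_more"]:
            return items
        cursor = page["next_cursor"]

# Stubbed fetcher standing in for the real HTTP call; a real one would
# request .../items with the cursor attached (parameter name assumed).
pages = {
    None:    {"items": [{"id": "a"}], "next_cursor": "48291", "has_more": True},
    "48291": {"items": [{"id": "b"}], "next_cursor": None,    "has_more": False},
}
all_items = fetch_all_items(lambda c: pages[c])
```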