
API for Web Data and
Research Agents

Get clean data for your AI from any website and automate
your web research workflows in a no-code way


One unified API for your AI
< Your AI Agent >
  • Start "research YC" workflow
  • Automate brand protection
  • Research donors in NYC
  • Find local businesses
  • Analyze brand visibility
[-- Data Layer --]
· Research agents
· Parsers - structured data
· Data router
· Automation engine
· Click, fill forms
· Distributed infra
· Map/Crawl
· VM sandboxes
· Batches API
• Output

{
    "id": "request_56is5c9gyw",
    "created": 1317322740,
    "result": {
        "markdown_content": "# Ex",
        "json_content": {},
        "html_content": "<DOC>"
    }
}

Join the best startups in the world building on Olostep

From seed-stage to Series-B startups and scaleups

Gumloop, AthenaHQ, and Openmart are backed by YC. Podqi is backed by Afore and GC


Developer-centric
import requests
import json

API_URL = 'https://api.olostep.com/v1/answers'
API_KEY = '<your_token>'

headers = {
    'Authorization': f'Bearer {API_KEY}',
    'Content-Type': 'application/json'
}

data = {
    "task": "What is the latest book by J.K. Rowling?",
    "json": {
        "book_title": "",
        "author": "",
        "release_date": ""
    }
}

response = requests.post(API_URL, headers=headers, json=data)
result = response.json()

print(json.dumps(result, indent=4))
// Using native fetch API (Node.js v18+)
const API_URL = 'https://api.olostep.com/v1/answers';
const API_KEY = '<your_token>';

fetch(API_URL, {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${API_KEY}`,
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    "task": "What is the latest book by J.K. Rowling?",
    "json": {
        "book_title": "",
        "author": "",
        "release_date": ""
    }
  })
})
  .then(response => response.json())
  .then(result => {
    console.log(JSON.stringify(result, null, 4));
  })
  .catch(error => console.error('Error:', error));
import requests

API_URL = 'https://api.olostep.com/v1/crawls'
API_KEY = '<token>'

headers = {'Authorization': f'Bearer {API_KEY}'}
data = {
    "start_url": "https://docs.stripe.com/api",
    "include_urls": ["/**"],
    "max_pages": 10
}

response = requests.post(API_URL, headers=headers, json=data)
result = response.json()

print(f"Crawl ID: {result['id']}")
print(f"URL: {result['start_url']}")
// Using native fetch API (Node.js v18+)
const API_URL = 'https://api.olostep.com/v1/crawls';
const API_KEY = '<token>';

fetch(API_URL, {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${API_KEY}`,
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    "start_url": "https://docs.stripe.com/api",
    "include_urls": ["/**"],
    "max_pages": 10
  })
})
.then(response => response.json())
.then(result => {
  console.log(`Crawl ID: ${result.id}`);
  console.log(`URL: ${result.start_url}`);
})
.catch(error => console.error('Error:', error));
import requests

API_URL = 'https://api.olostep.com/v1/scrapes'
API_KEY = '<your_token>'

headers = {'Authorization': f'Bearer {API_KEY}'}
data = {"url_to_scrape": "https://github.com"}

response = requests.post(API_URL, headers=headers, json=data)
result = response.json()

print(f"Scrape ID: {result['id']}")
print(f"URL: {result['url_to_scrape']}")
// Using native fetch API (Node.js v18+)
const API_URL = 'https://api.olostep.com/v1/scrapes';
const API_KEY = '<your_token>';

fetch(API_URL, {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${API_KEY}`,
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    "url_to_scrape": "https://github.com"
  })
})
.then(response => response.json())
.then(result => {
  console.log(`Scrape ID: ${result.id}`);
  console.log(`URL: ${result.url_to_scrape}`);
})
.catch(error => console.error('Error:', error));
import requests

API_URL = 'https://api.olostep.com/v1/agents'  # endpoint available to select customers
API_KEY = '<token>'

headers = {'Authorization': f'Bearer {API_KEY}', 'Content-Type': 'application/json'}
data = {
    "prompt": '''
      Search every portfolio company from every fund from
      (https://www.vcsheet.com/funds) and return the results into a google sheet
      with the following columns (Fund Name, Fund Website
      URL, Fund LinkedIn URL, Portfolio Company Name, Portfolio
      Company URL, Portfolio Company LinkedIn URL). Run every week
      on Monday at 9:00 AM. Send an email to steve@example.com when
      new portfolio companies are added to any of these funds.
    ''',
    "model": "gpt-4.1"
}

response = requests.post(API_URL, headers=headers, json=data)
result = response.json()

print(f"Agent ID: {result['id']}")
print(f"Status: {result['status']}")
# You can then schedule this agent
// Using native fetch API (Node.js v18+)
const API_URL = 'https://api.olostep.com/v1/agents'; // endpoint available to select customers
const API_KEY = '<token>';

fetch(API_URL, {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${API_KEY}`,
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    "prompt": `
      Search every portfolio company from every fund from
      (https://www.vcsheet.com/funds) and return the results into a google sheet
      with the following columns (Fund Name, Fund Website
      URL, Fund LinkedIn URL, Portfolio Company Name, Portfolio
      Company URL, Portfolio Company LinkedIn URL). Run every week
      on Monday at 9:00 AM. Send an email to steve@example.com when
      new portfolio companies are added to any of these funds.
    `,
    "model": "gpt-4.1"
  })
})
  .then(response => response.json())
  .then(result => {
    console.log(`Agent ID: ${result.id}`);
    console.log(`Status: ${result.status}`);
    // You can then schedule this agent
  })
  .catch(error => console.error('Error:', error));

Get the data in the format you want

Get Markdown, HTML, PDF or Structured JSON

Pass the URL to the API and retrieve the HTML, Markdown, PDF, or plain text of the website. You can also specify a schema to get only the structured, clean JSON data you want.
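As a minimal sketch, format selection can be expressed as a small request payload. Note that the `formats` field name here is an assumption for illustration; the `url_to_scrape` field comes from the examples above, but consult the Olostep API reference for the exact parameter that selects output formats.

```python
import json

# Hypothetical sketch: build a /v1/scrapes request body asking for
# specific output formats. "formats" is an assumed field name, not
# confirmed API schema; "url_to_scrape" matches the examples above.
def build_scrape_payload(url, formats=("markdown", "html")):
    return {"url_to_scrape": url, "formats": list(formats)}

payload = build_scrape_payload("https://example.com", formats=["markdown", "json"])
print(json.dumps(payload, indent=2))
```

The same payload would be sent with `requests.post`, exactly as in the scrape examples above.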

JS execution + residential IPs

Web-pages rendered in a browser

Full JS support is the norm for every request, along with premium residential IP addresses and proxy rotation to avoid bot detection

Crawl

Get all the data starting from a single URL

Multi-depth crawling enables you to get clean markdown from all the subpages of a website. Works also without a sitemap (e.g. useful for doc websites).

Get clean data

We handle the heavy lifting

Browser infra, rate limits and JS-rendered content

Crawling

Get the data from all subpages of a website. No sitemap required. This is useful if you are building an AI agent that needs to get specific context from a documentation website

Batches

You can submit from 100 to 100k URLs in a batch and have the content (Markdown, HTML, raw PDFs, or structured JSON) back in 5-7 minutes. Useful for deep research agents, monitoring social media, and aggregating data at scale
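A batch submission can be sketched as a payload of URL entries. The endpoint path and the `items`/`url` field names below are assumptions based on the description above, not the documented schema; check the Olostep docs for the real Batches API shape.

```python
# Hypothetical sketch of a batch submission body for the Batches API.
# BATCH_URL and the "items"/"url" field names are assumptions.
BATCH_URL = 'https://api.olostep.com/v1/batches'  # assumed path

def build_batch_payload(urls):
    return {"items": [{"url": u} for u in urls]}

urls = [f"https://example.com/page/{i}" for i in range(100)]
batch_payload = build_batch_payload(urls)
print(f"Submitting {len(batch_payload['items'])} URLs to {BATCH_URL}")
```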

Reliable

Get the content you want when you want it. All requests are done with a premium proxy

PDF parsing

Olostep can parse and output content from web-hosted PDFs, DOCX files, and more.

Actions

Click, type, fill forms, scroll, wait and more dynamically on websites
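One way to picture such an action sequence is as a list of steps attached to a scrape request. The action names and the `actions` field below are illustrative assumptions that mirror the capabilities listed above, not the documented schema.

```python
# Hypothetical sketch of an action sequence for a dynamic page.
# The "actions" field and step shapes are assumptions for illustration.
actions = [
    {"type": "click", "selector": "#accept-cookies"},
    {"type": "fill", "selector": "input[name='q']", "value": "web data"},
    {"type": "scroll", "direction": "down"},
    {"type": "wait", "ms": 1000},
]
action_payload = {"url_to_scrape": "https://example.com/search", "actions": actions}
print(f"{len(action_payload['actions'])} actions queued")
```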

Most cost-effective API on the market

Pricing that Makes Sense

Start for free. Scale with no worries.
We want you to be able to build a business on top of Olostep

Free
$0
No credit card required
500 successful requests
All requests are JS rendered + utilizing residential IP addresses
Low rate limits
Starter
$9
per month
5000 successful requests/month
Everything in Free Plan
100 concurrent requests
Standard
$99 USD
per month
200K successful requests/month
Everything in Starter Plan
Self-Healing LLM Parsers
Scale
$399 USD
per month
1 Million successful requests/month
Everything in Standard Plan
AI-powered Browser Automations
Free
$0
per month
3000 successful scrapes
All requests are JS rendered + utilizing residential IP addresses
Starter
$29
per month
20K successful scrapes
All requests are JS rendered + utilizing residential IP addresses
Standard
$99 USD
per month
200K successful scrapes
All requests are JS rendered + utilizing residential IP addresses
Scale
$399 USD
per month
1 Million successful scrapes
All requests are JS rendered + utilizing residential IP addresses

Top-ups

Need flexibility or have spiky usage? You can buy as many credit packs as you want. They are valid for 6 months.

Credit pack

$25 for 20k credits
Purchase Credit Pack

Credit pack

$375 for 500k credits
Purchase Credit Pack

Credit pack

$1000 for 2M credits
Purchase Credit Pack

A minimum $9/month subscription is required. Credits are added on top of your base plan

Use Cases

Data for every industry

Deep research agents

Enable your agent to conduct deep research on large Web datasets.

Spreadsheet enrichment

Get real-time web data to enrich your spreadsheets and analyze data.

Lead generation

Research, enrich, validate and analyze leads. Enhance your sales data

Vertical AI search

Build industry specific search engines to turn data into an actionable resource.

AI Brand visibility

Monitor brands to help improve their AI visibility (Answer Engine Optimization).

Agentic Web automations

Enable AI Agents to automate tasks on the Web: fill forms, click on buttons, etc.

Sales Research

Search Stack Overflow, coding docs, and GitHub repos. Get access to a model specifically trained on high-accuracy code referencing.

Cursor references repos and docs to output the most accurate code

Questions?

Frequently asked questions

Have other questions? Get in touch via info@olostep.com

What is Olostep?

Olostep is the Web Data API for AI and Research Agents.

The Olostep API is the best web search, scraping and crawling API for AI used by some of the leading startups in the world.

The Olostep Agent allows anyone to automate research workflows and build data pipelines in a no-code way, with just a prompt in natural language

What is counted as a request?

One request corresponds to one webpage or PDF. We don't charge extra for bandwidth or proxies; all those costs are included in the price per request.

Does Olostep charge for failed requests?

We don't charge for failed requests. If you are using the answers endpoint or any endpoint that needs to make LLM calls, we will pass those costs down to you, but on our end we only charge for successful requests. We are also building and improving fallback systems that retry failed requests internally and return the results

Which websites can Olostep access and interact with?

You can access and interact with any website that is publicly accessible. If you are building AI automations and your agent needs to pass cookies or login, get in touch at info@olostep.com

Can Olostep support my high-volume requests?

Yes, the API can scale to billions of requests per month

How can I pay?

You can pay using the Stripe Payment Links.

Why should I use Olostep?

Because it's reliable (99.5% success rate), cost-effective (up to 70% cheaper), scalable, and flexible enough to be compatible with your existing workflows and backend. You can request the features you need and we will try to build them for you. Plus, you can test it for free to see if it fits your needs. Get your free API keys from here.

Can I switch plans after signing up?

Yes, plans are pro-rated, meaning if you've already paid for a previous plan, the remaining credits will be transferred to your next plan. You won't have to pay again for what you've already covered.

Is Olostep free?

Olostep is free for the first 500 requests. Then we have plans that start from $9/month for 5000 credits per month. Just click on the "Get free API keys" to start for free. We'll also help you get up and running so you can test the product until you love it.

Can I ask for a refund if I don't use it?

We’re fully committed to building products that you love. If for whatever reason you’re unsatisfied with the Olostep API, please email us at info@olostep.com to receive a full refund within a few hours. We'll also refund you if it doesn't turn out to be useful. If you stop using it after a certain period, we'll refund the time you don't use.

How does it return the results?

The API returns the ID of the request (for future retrievals), plus the Markdown and the HTML of the page. You can also retrieve JSON with specific parsers, or structured data via LLM extraction. If you are using the /answers endpoint as the search basis for your AI, it will return an answer, a JSON object in the schema you defined, and the sources Olostep has searched.
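A minimal sketch of handling an /answers response with the shape this FAQ describes (an answer string, a JSON object in your schema, and the searched sources). The exact field names are assumptions drawn from this description, not the API spec.

```python
# Hypothetical /answers response shape, assumed from the FAQ description:
# "answer", "json" (your schema), and "sources" field names are assumptions.
sample_response = {
    "answer": "The latest book is ...",
    "json": {"book_title": "", "author": "", "release_date": ""},
    "sources": ["https://example.com/a", "https://example.com/b"],
}

answer = sample_response.get("answer")
structured = sample_response.get("json", {})
sources = sample_response.get("sources", [])
print(f"{answer} (from {len(sources)} sources)")
```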

Can Olostep automate my data pipelines?

Reach out to us at info@olostep.com or contact our sales team at https://www.olostep.com/contact-sales with your use case and we can take a look. Our aim with the Olostep Agent is to automate any business data pipeline and research workflow on the Web, so we will do our best to support your use case.

Who should use Olostep?

Olostep is especially useful for AI startups that rely on Web data to power or improve their services, and for companies that need to enrich data, monitor website changes, analyze historical web data, and equip their AIs with web search capabilities to ground them in real-world data and facts. Olostep can also be used by developers, AI engineers, data scientists, and researchers looking to use web data for market research, LLM fine-tuning, and more. Olostep returns clean, structured data from one single API so that it's compatible with your existing backend.

Can I extract data with a prompt?

Yes, Olostep lets you extract data using natural language prompts. If you know the exact URL containing your data, use the /scrapes endpoint with llm_extract and describe what you want to extract. For more complex tasks, like searching for data, navigating between pages, handling pagination, or validating results, use the /agents endpoint, which automatically finds and extracts data based on your prompt.
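As a sketch, a /scrapes request using llm_extract with a natural language prompt might look like the payload below. llm_extract is named in the answer above, but its exact shape here (a dict with a `prompt` key) is an assumption; check the docs for the real parameter format.

```python
# Hypothetical /scrapes payload using llm_extract with a prompt.
# The dict-with-"prompt" shape of llm_extract is an assumption.
extract_payload = {
    "url_to_scrape": "https://example.com/product",
    "llm_extract": {"prompt": "Extract the product name and price"},
}
print(extract_payload["llm_extract"]["prompt"])
```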