Tools & Automation

Deep Research

Investigate across sources, then write the report.

Ask a question that needs more than a one-shot answer. Deep Research browses the web in a sandbox, reads a dozen sources, follows footnotes, cross-checks claims, and writes a structured report with inline citations. Same model, same chat, just told to take its time.

How it works

Three steps to run Deep Research, no engineering required.

  1. Ask a research question

     Phrase it like you would for a research analyst: 'How big is the European EV charging market, and who are the top three players?' Avoid yes-or-no questions.

  2. Review the plan

     MoClaw shows the planned browsing list before it starts. Approve it or tweak the scope: 'add academic sources', 'skip news older than 2024', 'cap at 15 sources'.

  3. Read the report

     5 to 30 minutes later you get a structured markdown report with a TL;DR, inline source citations, and the saved passages it pulled from.

Why it matters

A normal chat answer is a few hundred tokens. Deep Research is a different mode. The model spends 5 to 30 minutes browsing live sources, taking notes, going down branches, and consolidating what it learned into a structured report.

Behind the scenes it runs a planning step (what is the question really asking?), a search step (which sources are credible?), a parallel browsing pass (read 10 to 30 pages, save key passages), a synthesis step (cluster by claim, note disagreements), and a writing step (structured markdown with inline citations).
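The pipeline can be sketched in plain Python. Every function name and data shape below is illustrative, not MoClaw's real API; it only shows how the five stages hand data to each other:

```python
# Illustrative sketch of the Deep Research pipeline: plan -> browse -> synthesize -> write.
# All helpers here are hypothetical stand-ins, not MoClaw internals.

def plan(question: str) -> list[str]:
    # Planning step: decide which sources to read for this question.
    return [f"source-{i} for: {question}" for i in range(3)]

def browse(source: str) -> dict:
    # Browsing pass: read one page and save its key passages.
    return {"source": source, "passages": [f"key passage from {source}"]}

def synthesize(notes: list[dict]) -> dict:
    # Synthesis step: pool passages by claim; disagreements would be flagged here.
    claims = [p for note in notes for p in note["passages"]]
    return {"claims": claims, "disagreements": []}

def write_report(question: str, findings: dict) -> str:
    # Writing step: emit structured markdown with one bullet per claim.
    lines = [f"## {question}", ""]
    lines += [f"- {claim}" for claim in findings["claims"]]
    return "\n".join(lines)

def deep_research(question: str) -> str:
    sources = plan(question)
    notes = [browse(s) for s in sources]  # in the real product this pass runs in parallel
    findings = synthesize(notes)
    return write_report(question, findings)

report = deep_research("Who leads the EV charging market?")
print(report)
```

The real system adds credibility scoring, citation tracking, and parallel browser sessions on top of this skeleton, but the data flow is the same: a plan fans out into reads, and the reads collapse into one report.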

Use it for: market sizing where you need to triangulate across multiple analyst reports; competitive analysis where you need to scrape pricing from a dozen sites; literature reviews where you need to read 20 abstracts; due diligence where 'is this person who they say they are' needs cross-referencing against public records.

Avoid it when a single search will do, when the answer is in your own files (use Files instead), or when you need realtime data that will not be on the open web (use API tools).

Try saying

Real prompts you can paste into Deep Research.

  • Research the global home solar market in 2026: size, top 5 players, regional growth, regulatory tailwinds. 12 to 15 sources, focus on the last 18 months.
  • Compare Anthropic, OpenAI, and Google in agentic browser automation. Pricing, capability matrix, real customer case studies. Cite primary sources only.
  • Find every public lawsuit involving Acme Corp since 2022. Summarize outcome, plaintiff, and court in a plain table at the end.

Step by step demo

What actually happens when you send the prompt.

Prompt 01 (5 steps)

“Compare Anthropic, OpenAI, and Google in agentic browser automation. Pricing, capability matrix, real customer case studies.”

What MoClaw does

  1. Drafts a research plan with 9 sources to check, including each vendor's official docs, two analyst reports, and three customer case studies.
  2. Confirms the plan with you and lets you add or remove sources.
  3. Spawns 3 parallel browser sessions, one per vendor. Reads the docs and extracts pricing tiers and capability flags.
  4. Pulls case studies from each vendor's customer pages and one independent analyst piece. Cross-references claims (Anthropic claims X, OpenAI's docs counter with Y).
  5. Writes a structured markdown report with a comparison table and inline citation links.

Result

17 minutes later you get a report titled 'Agentic Browser Automation: Anthropic vs OpenAI vs Google'. TL;DR at the top. Comparison table with 14 rows (pricing, max parallel sessions, vision tokens, etc.). 6 case studies summarized. 22 inline citations to primary sources. Saved passages are collapsible at the bottom.

FAQ

Quick answers about pricing, privacy, and limits.

How long does Deep Research take?
Usually 5 to 15 minutes for moderate scope, up to 30 minutes for cross-source investigations. You can leave the chat and come back; the report appears in the thread when ready.
How many credits does it use?
5 to 50 credits per run depending on scope, mostly driven by how many pages get read. The plan step shows the projected credit cost before it starts.
Are the sources reliable?
MoClaw scores sources during the plan step and skips low-quality domains by default. You can override the source list or pin specific domains before it starts.
Can I pin Deep Research to specific domains?
Yes. Add 'only crawl harvard.edu, nih.gov, and arxiv.org' to your prompt and the planner will respect it.
What does the output format look like?
Markdown with H2 sections, inline source links, and an appendix of saved passages. You can ask for a slide deck, table, or short summary instead.

Try MoClaw free.

1,000 credits a month, or bring your own key for unlimited usage.

Cancel anytime