Your agent can read here too.
AutoXiv exposes its paper corpus over the Model Context Protocol. Any LLM agent that speaks MCP can search papers, read AI overviews, and find related work — no scraping, no rate-limit tango.
https://autoxiv.vercel.app/mcp
JSON-RPC 2.0 over HTTP POST · Stateless · 60 req/min per IP
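Because the endpoint is stateless JSON-RPC 2.0 over plain HTTP POST, any HTTP client works, not just an MCP SDK. A minimal stdlib-only sketch of building a request envelope (the envelope shape follows the JSON-RPC 2.0 spec; `tools/list` is the standard MCP method for discovering the tools below):

```python
import json
import urllib.request

MCP_URL = "https://autoxiv.vercel.app/mcp"

def jsonrpc_request(method: str, params: dict, req_id: int = 1) -> dict:
    """Build a JSON-RPC 2.0 request envelope."""
    return {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}

# Ask the server which tools it exposes.
body = json.dumps(jsonrpc_request("tools/list", {})).encode()
req = urllib.request.Request(
    MCP_URL,
    data=body,
    headers={"Content-Type": "application/json"},
)
# resp = urllib.request.urlopen(req)  # uncomment to send; 60 req/min per IP
```

Sending is left commented out so the sketch stays side-effect free; swap in `httpx` or `requests` if you prefer.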
Add to ~/Library/Application Support/Claude/claude_desktop_config.json (macOS):
{
  "mcpServers": {
    "autoxiv": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "https://autoxiv.vercel.app/mcp"
      ]
    }
  }
}

Or, for clients that can connect to remote MCP servers directly:

{
  "mcpServers": {
    "autoxiv": {
      "url": "https://autoxiv.vercel.app/mcp"
    }
  }
}

search_papers
Input
{ query: string, mode?: 'semantic' | 'keyword', category?: string, limit?: number }
Output
Array of paper summaries with { id, title, authors, tldr, category, submitted_at, abs_url }.
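Tool invocations go through the standard MCP `tools/call` method, with the tool name and its arguments wrapped in `params`. A sketch for `search_papers` — the argument names come from the input schema above, while the query text and values are illustrative only:

```python
import json

def call_tool(name: str, arguments: dict, req_id: int = 1) -> str:
    """Serialize an MCP tools/call request body for HTTP POST."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": req_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    })

body = call_tool("search_papers", {
    "query": "state space models",  # free-text query (example value)
    "mode": "semantic",             # or "keyword"
    "limit": 5,
})
```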
get_paper
Input
{ id: string }
Output
Full paper record — abstract, overview (TLDR, problem, approach, insights, results, limitations), code link, PDF URL.
get_related
Input
{ id: string, limit?: number }
Output
Semantically nearest papers with similarity scores.
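The two id-based tools chain naturally: fetch a paper, then ask for its nearest neighbors. A sketch of the two request bodies, assuming the standard MCP `tools/call` shape; the paper id here is a placeholder, use one returned by search_papers:

```python
import json

def call_tool(name: str, arguments: dict, req_id: int) -> dict:
    """Build an MCP tools/call request envelope."""
    return {"jsonrpc": "2.0", "id": req_id, "method": "tools/call",
            "params": {"name": name, "arguments": arguments}}

paper_id = "example-id"  # placeholder id
batch = [
    call_tool("get_paper", {"id": paper_id}, 1),
    call_tool("get_related", {"id": paper_id, "limit": 3}, 2),
]
print(json.dumps(batch, indent=2))
```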
list_recent
Input
{ category?: string, limit?: number }
Output
Most recently submitted papers.
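Whatever the tool, the MCP spec wraps results in a `content` array of typed blocks, so a client can unwrap responses the same way for all four tools. A defensive sketch, run here against a fabricated sample body rather than a live response:

```python
import json

def extract_text(response_body: str) -> str:
    """Concatenate the text blocks from a tools/call result."""
    result = json.loads(response_body)["result"]
    return "".join(
        block["text"] for block in result["content"] if block["type"] == "text"
    )

# Fabricated response body, shaped per the MCP spec (not a real server reply).
sample = json.dumps({
    "jsonrpc": "2.0", "id": 1,
    "result": {"content": [{"type": "text", "text": "[]"}]},
})
print(extract_text(sample))  # prints "[]"
```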
Each paper is exposed at paper://{id}. resources/read returns a markdown-formatted overview.
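A resource read is just another JSON-RPC call: the standard MCP `resources/read` method takes the resource URI in `params`. A sketch using the paper:// scheme above:

```python
import json

def read_paper_resource(paper_id: str) -> str:
    """Serialize a resources/read request for a paper:// URI."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "resources/read",
        "params": {"uri": f"paper://{paper_id}"},
    })
```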
summarize_paper
Accepts a paper ID; asks the agent for a 150-word summary.
compare_papers
Accepts comma-separated IDs; asks for a comparison table.
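Prompts are fetched with the standard MCP `prompts/get` method; per the spec, argument values are strings. A sketch for compare_papers — note the argument name `ids` is an assumption, since this page doesn't name the parameter:

```python
import json

def get_prompt(name: str, arguments: dict) -> str:
    """Serialize an MCP prompts/get request (argument values are strings)."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "prompts/get",
        "params": {"name": name, "arguments": arguments},
    })

# "ids" is a hypothetical parameter name for the comma-separated ID list.
body = get_prompt("compare_papers", {"ids": "id-one,id-two"})
```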