
ARIS (Auto-Research-In-Sleep)

by wanshuiyin

research · intermediate

Tags: Claude Code skills, ARIS, Auto-Research-In-Sleep, autonomous ML research, cross-model review, AI research automation, agent skills

ARIS turns your AI coding agent into an ML researcher that works while you sleep. Literally: start a research loop before bed and wake up to a paper draft whose review scores improved from 5/10 to 7.5/10 across four autonomous review rounds overnight.

31 composable Markdown-only skills cover the full research lifecycle. Literature survey scans papers and builds citation graphs. Idea brainstorming generates 8-12 concepts per session. Novelty filtering checks each idea against existing work. GPU pilot experiments run verification on real hardware. Cross-model adversarial review sends your draft to a competing AI for criticism. LaTeX generation compiles the paper. Rebuttal drafting responds to reviewer objections.

The cross-model adversarial review is the standout feature. Claude Code writes and executes the research; GPT-5.4 xhigh reviews it. This is not the same model talking to itself: these are two different AI architectures with different blind spots. Claude is fast and fluid; GPT-5.4 xhigh is slower but more rigorous in critique. The adversarial tension between speed and rigor produces better papers than either model alone.

How It Works

Each skill is a plain Markdown file. No framework, no dependencies, no lock-in. Copy the skill files into your agent's skills directory and they work immediately. ARIS supports Claude Code (primary), Codex, OpenClaw, Cursor, Trae, and any LLM agent that reads Markdown skill files.

Four effort levels control how deep the research goes. Lite uses 0.4x the default token budget for quick passes. Balanced is the default at 1x. Max runs at 2.5x for thorough reviews. Beast mode hits 5-8x for maximum-depth research sprints.

Research Wiki

The Research Wiki maintains persistent memory across sessions. Every paper read, every idea generated, every experiment attempted, and every failed approach gets logged. This creates an anti-repetition knowledge base: the agent does not re-explore dead ends or re-propose rejected ideas.
Three skills (/research-lit, /idea-creator, /result-to-claim) hook directly into the wiki for context-aware generation.

Safety Gates

Three safety mechanisms prevent hallucinated citations. ARIS fetches real BibTeX entries from DBLP and CrossRef before including any reference. If a citation cannot be verified against these databases, it is flagged rather than fabricated. This addresses one of the biggest risks in AI-generated academic writing.

Limitations

  • Cross-model review requires API access to both Claude and GPT-5.4 xhigh, which means paying for two providers
  • GPU pilot experiments assume access to GPU hardware or cloud compute; the skill does not provision resources
  • v0.3.11 is early; some skills advertised in the README may have incomplete implementations
  • The Research Wiki grows without automatic pruning, potentially degrading retrieval quality over time

For more Claude Code skills, browse repos.skila.ai/skills. For AI research tools, check tools.skila.ai. For articles on AI-powered development workflows, visit news.skila.ai.
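The "flag rather than fabricate" gate described above can be sketched as follows. This is a minimal illustration of the idea, not ARIS's actual implementation: the `check_citation` helper and the sample payload are hypothetical, though DBLP does expose a public publication-search API (`https://dblp.org/search/publ/api`) that returns JSON in roughly this shape.

```python
# Sketch of a citation gate: a reference is included only if a
# bibliographic database search confirms it; otherwise it is flagged.
# The function and sample response are illustrative, not ARIS code.

def check_citation(title: str, response: dict) -> str:
    """Return 'verified' if a DBLP-style search response contains an
    exact (case-insensitive) title match, else 'flagged'."""
    hits = response.get("result", {}).get("hits", {}).get("hit", [])
    for hit in hits:
        info = hit.get("info", {})
        found = info.get("title", "").strip().rstrip(".").lower()
        if found == title.strip().lower():
            return "verified"
    return "flagged"  # never fabricate a BibTeX entry

# Hypothetical DBLP-style search response for demonstration.
sample_response = {
    "result": {"hits": {"hit": [
        {"info": {"title": "Attention Is All You Need."}}
    ]}}
}

print(check_citation("Attention Is All You Need", sample_response))   # verified
print(check_citation("A Paper That Does Not Exist", sample_response)) # flagged
```

A real pipeline would issue the search over HTTP and fall back to CrossRef when DBLP has no hit; the decision logic stays the same.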

Installation

git clone https://github.com/wanshuiyin/Auto-claude-code-research-in-sleep.git && cp -r skills/* ~/.claude/skills/

Key Features

  • 31 composable Markdown-only skills covering the full ML research lifecycle
  • Cross-model adversarial review: Claude Code executes while GPT-5.4 xhigh critiques
  • 4-round autonomous review loops that improve paper scores from 5/10 to 7.5/10 overnight
  • Research Wiki for persistent memory and anti-repetition across sessions
  • Safety gates that verify citations against DBLP and CrossRef before inclusion
  • 4 effort levels: lite (0.4x), balanced (1x), max (2.5x), beast (5-8x)
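The 4-round review loop listed above can be sketched as a simple orchestration. Everything here is hypothetical scaffolding: `review` and `revise` are stubs standing in for GPT-5.4 xhigh and Claude Code API calls, and the stubbed score trajectory mirrors the 5/10-to-7.5/10 figure from the description. It illustrates only the loop shape, not ARIS's internals.

```python
# Illustrative skeleton of a cross-model review loop: one model drafts
# and revises, a second model scores and critiques each round.

def review(paper: str, round_no: int) -> tuple[float, str]:
    """Stub reviewer: returns (score, critique). A real implementation
    would call the reviewing model's API here."""
    scores = {1: 5.0, 2: 6.0, 3: 7.0, 4: 7.5}  # trajectory from the description
    return scores[round_no], f"critique for round {round_no}"

def revise(paper: str, critique: str) -> str:
    """Stub reviser: a real implementation would send the critique
    back to the drafting model for a rewrite."""
    return paper + f" [revised per {critique}]"

paper = "initial draft"
score = 0.0
for round_no in range(1, 5):      # four autonomous review rounds
    score, critique = review(paper, round_no)
    if round_no < 4:              # no revision after the final round
        paper = revise(paper, critique)

print(f"final score: {score}/10")  # final score: 7.5/10
```

The adversarial part is that `review` and `revise` would be backed by different model providers, so the reviewer's blind spots do not match the drafter's.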

Use Cases

  • Autonomous overnight ML research with paper drafting
  • Literature survey and novelty verification for research ideas
  • Cross-model adversarial paper review to find weaknesses
  • Automated LaTeX paper generation and compilation
  • Rebuttal drafting for peer review responses
