ARGUS — Autonomous Deep-Tech Opportunity Screening
How I built a 4-stage AI pipeline scanning 20+ sources for VC-grade deep-tech opportunities. 70K LoC, 875 tests, running 24/7 on local hardware.
ARGUS is a four-stage AI pipeline I built to surface deep-tech investment opportunities continuously. It pulls from 20+ sources — SEC EDGAR, arXiv, PatentsView, EPO, EU procurement, news, Reddit — and scores each candidate across five axes before passing it through an adversarial red-team challenge and writing an IC-grade memo. Verdicts: INVEST / EXPLORE / PASS across 25 monitored verticals (AI, defense, fintech, quantum, drones).
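As a rough sketch of the score-to-verdict step: the five axis names come from the pipeline description, but the averaging scheme and thresholds here are illustrative stand-ins, not ARGUS's actual cut-offs.

```python
AXES = ("team", "traction", "ip", "market", "capital_efficiency")

def verdict(scores: dict[str, float],
            invest_at: float = 0.75,
            explore_at: float = 0.50) -> str:
    """Map five 0-1 axis scores to INVEST / EXPLORE / PASS.

    Thresholds and the unweighted mean are illustrative guesses.
    """
    if set(scores) != set(AXES):
        raise ValueError(f"expected exactly the axes {AXES}")
    mean = sum(scores.values()) / len(scores)
    if mean >= invest_at:
        return "INVEST"
    if mean >= explore_at:
        return "EXPLORE"
    return "PASS"

print(verdict({axis: 0.8 for axis in AXES}))  # INVEST
```

In practice the per-axis scores would be blended from heuristic and LLM signals rather than supplied directly.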
Why I built it
A first-year VC analyst spends weeks learning to source deals manually. ARGUS encodes that workflow in software, running every hour on local hardware: no cloud dependencies, and no investment theses leaking out through third-party APIs. The point isn’t to replace the analyst; it’s to compress the deal-discovery cycle from weeks to hours and free human attention for the calls that actually matter.
Architecture
Four stages, each typed and individually testable:
- Source ingest — async scrapers plus RSS/SERP fanout; documents are deduplicated by content hash (SHA) and entity-linked into the knowledge graph
- 5-axis screening — heuristic + LLM scoring on team / traction / IP / market / capital efficiency
- Adversarial red-team — a separate LLM agent attacks the thesis with bear cases pulled from the same sources
- IC memo — structured Markdown output with citations linked back to source documents
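The hash-based dedup in stage 1 can be sketched like this; `Document` and `dedup` are hypothetical names, and the real ingest is async and far richer:

```python
import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class Document:
    url: str
    text: str

    def sha(self) -> str:
        # Content hash used as the dedup key
        return hashlib.sha256(self.text.encode("utf-8")).hexdigest()

def dedup(docs: list[Document]) -> list[Document]:
    """Keep the first document per content hash, preserving order."""
    seen: set[str] = set()
    unique = []
    for doc in docs:
        h = doc.sha()
        if h not in seen:
            seen.add(h)
            unique.append(doc)
    return unique
```

Entity-linking the surviving documents into the knowledge graph happens after this step, so the same company seen on arXiv and in EDGAR resolves to one node.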
Under the hood, every stage reads from and writes to a Neo4j knowledge graph: new evidence updates priors via confidence-calibrated scoring, and a meta-learning layer flags drift.
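One minimal way to picture a confidence-calibrated update is a linear blend of prior and evidence; `update_prior` is a hypothetical name, and ARGUS's actual calibration layer is more sophisticated than this.

```python
def update_prior(prior: float, evidence: float, confidence: float) -> float:
    """Blend a prior score with new evidence, weighted by a calibrated
    confidence in [0, 1]: confidence=0 keeps the prior unchanged,
    confidence=1 replaces it with the evidence."""
    if not 0.0 <= confidence <= 1.0:
        raise ValueError("confidence must be in [0, 1]")
    return (1.0 - confidence) * prior + confidence * evidence

score = 0.60
score = update_prior(score, 0.90, 0.30)  # bullish evidence, modest confidence
print(round(score, 2))  # 0.69
```

The drift the meta-learning layer watches for is, in this picture, a systematic gap between these updated scores and realized outcomes.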
What’s next
Productionising the calibration layer (the scoring model is currently bootstrapped from 264 historical reports) and shipping a public dashboard at argus.cnob.me. The codebase is open at github.com/builtbycnob/argus.
Stack
Python · MLX · LangGraph · Neo4j · FastAPI · RAG