Description
Motivation
~50% of XGI users build hypergraphs with XGI but leave the ecosystem to do computation. A growing number of developers use AI coding assistants (Copilot, Claude, Cursor, etc.) to write code. These assistants often don't know about XGI's algorithms, stats framework, or visualization capabilities, so they suggest users reach for networkx or scipy instead.
AGENTS.md is an emerging convention where a file in the repo root tells AI coding assistants how to work with a project's API, patterns, and best practices.
Proposal
Add an AGENTS.md to the repository root that covers:
- What XGI is — quick orientation for an agent that's never seen the library
- Core data structures — `Hypergraph`, `SimplicialComplex`, `DiHypergraph`, and how to create/manipulate them
- Stats framework — how `H.nodes.<stat>` and `H.edges.<stat>` work, available stats, filtering
- Algorithms module — centrality, clustering, components, community detection, etc. — the stuff users most often miss
- Visualization — `xgi.draw()` and related functions
- I/O — reading/writing HIF, edge lists, JSON, etc.
- Common patterns & anti-patterns — e.g., "use `H.edges.filterby('order', 2)` instead of manual list comprehensions", "don't convert to networkx for things XGI can do natively"
Why not just better docs?
This complements human-facing docs rather than replacing them. The format is optimized for LLM consumption: concise, example-heavy, structured for quick lookup. Human docs need narrative flow and tutorials; agent docs need dense, correct API references and "prefer X over Y" guidance.
Acknowledging the elephant in the room
This might be controversial. In academia there are legitimate concerns about using LLMs to write research-critical code — questions around reproducibility, correctness, understanding of the tools being used, and whether AI-generated code belongs in scientific workflows at all. Adding an AGENTS.md could be seen as the project endorsing or encouraging AI-assisted coding in research contexts.
To be clear: this file wouldn't change anything about how XGI itself works. It's purely a reference that helps AI assistants give better suggestions to users who are already using these tools. But it's worth discussing whether we're comfortable with that signal, and whether we want to add any framing (e.g., a note encouraging users to review and understand AI-generated code before using it in research).
Would love to hear what other maintainers think before we start writing this.