## Problem
Currently, DeepGraph has no capability to generate publication-ready figures or visualizations. After running the SciForge validation loop, the system only produces text logs and numeric results—no charts, diagrams, or overview figures that could be used in a manuscript.
## Evidence from Codebase
- Zero plotting libraries: No imports of `matplotlib`, `seaborn`, `plotly`, `PIL`, `graphviz`, or any equivalent across the entire codebase.
- No visualization agent: There is no `agents/visualization_agent.py` or similar module.
- Frontend limited to navigation: `web/static/js/app.js` uses D3.js solely for a radial taxonomy tree (navigation UI). It does not render experiment result charts, method comparison plots, or knowledge graph subgraphs.
- Documentation/artifact gap: While `HANDOFF.md` mentions placing figures in `artifacts/`, there is zero code logic to actually generate those figures.
## Missing Scenarios
- Overview / Motivation Diagrams: The first figure in a paper (e.g., "Our Approach") showing existing method limitations vs. the proposed improvement.
- Experimental Result Charts: After the validation loop, there is no automatic generation of bar/line charts comparing baseline vs. proposed metrics.
- Knowledge Graph Subgraph Visualization: The entity-relation graph exists in SQLite but cannot be rendered into a publication-quality graph figure.
- Method Architecture Diagrams: No pipeline to translate a structured method description (from `deep_insights.proposed_method`) into an architecture diagram.
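The knowledge-graph scenario could be served without any heavy dependency by emitting Graphviz DOT text straight from SQLite. The sketch below is illustrative only: the `relations(source, relation, target)` table and its column names are assumptions, not the actual DeepGraph schema.

```python
import sqlite3


def subgraph_to_dot(db_path: str, center_entity: str, limit: int = 20) -> str:
    """Render the relations touching one entity as Graphviz DOT source.

    Assumes a table `relations(source, relation, target)`; the real
    DeepGraph schema may use different table/column names.
    """
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        "SELECT source, relation, target FROM relations "
        "WHERE source = ? OR target = ? LIMIT ?",
        (center_entity, center_entity, limit),
    ).fetchall()
    conn.close()

    # "subgraph" is a reserved word in DOT, so name the graph something else.
    lines = [
        "digraph kg {",
        "  rankdir=LR;",
        '  node [shape=box, fontname="Helvetica"];',
    ]
    for source, relation, target in rows:
        lines.append(f'  "{source}" -> "{target}" [label="{relation}"];')
    lines.append("}")
    return "\n".join(lines)
```

The returned string can be piped to `dot -Tpdf` (or rendered via the `graphviz` Python package) to produce a publication-quality figure, keeping the agent itself free of rendering dependencies.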
## Impact
The SciForge closed loop currently ends with a text verdict (`confirmed`/`refuted`) and a `final_report.md`. Without figure generation, the system cannot produce a complete manuscript-ready artifact. A human researcher would still need to manually create all figures, which breaks the "autonomous scientist" vision.
## Proposed Direction
Introduce a `visualization_agent` module with two tiers:
- Programmatic plots: Use `matplotlib`/`seaborn` to auto-generate experiment comparison charts, learning curves, and heatmaps from the `experiment_iterations` and `results` tables.
- Conceptual diagrams: Use LLM-generated TikZ / Graphviz / diagrams-as-code to produce overview and motivation figures from `deep_insights` content.
Figures should be saved into the experiment workspace (`~/sciforge_runs/exp_N_*/figures/`) and referenced in the generated `final_report.md`.
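A minimal sketch of the programmatic-plots tier, assuming metrics arrive as a plain `{name: (baseline, proposed)}` mapping; the function name, input shape, and `figures/comparison.png` output path are hypothetical, not the actual SciForge interface:

```python
from pathlib import Path

import matplotlib

matplotlib.use("Agg")  # headless backend: the validation loop has no display
import matplotlib.pyplot as plt


def plot_comparison(metrics: dict[str, tuple[float, float]], workspace: str) -> Path:
    """Save a baseline-vs-proposed grouped bar chart into <workspace>/figures/.

    `metrics` maps metric name -> (baseline_score, proposed_score).
    """
    names = list(metrics)
    baseline = [metrics[n][0] for n in names]
    proposed = [metrics[n][1] for n in names]
    x = range(len(names))
    width = 0.38

    fig, ax = plt.subplots(figsize=(6, 4))
    ax.bar([i - width / 2 for i in x], baseline, width, label="baseline")
    ax.bar([i + width / 2 for i in x], proposed, width, label="proposed")
    ax.set_xticks(list(x))
    ax.set_xticklabels(names)
    ax.set_ylabel("score")
    ax.legend()
    fig.tight_layout()

    out_dir = Path(workspace) / "figures"
    out_dir.mkdir(parents=True, exist_ok=True)
    out_path = out_dir / "comparison.png"
    fig.savefig(out_path, dpi=200)
    plt.close(fig)
    return out_path
```

The agent would then embed the returned relative path into `final_report.md` as a standard markdown image reference, closing the loop between numeric results and manuscript figures.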
This is a structural gap that prevents the system from producing complete, publication-ready research outputs.