This repository was archived by the owner on Mar 18, 2026. It is now read-only.

Commit cfa1b29

javdl and claude authored
Add prek git hooks (#1)
* Add prek git hooks
* Add prek.toml config
* Add lefthook commit-msg and pre-push hooks
* Complete lefthook to prek migration: move commit-msg and pre-push hooks from lefthook.yml into prek.toml as local shell hooks; build prek from source in flake.nix instead of using nixpkgs; remove lefthook from dev shell.
* Add prek CI workflow and document nix commands: add GitHub Actions workflow using Determinate Nix to run prek checks on push/PR; document nix develop commands in README.md and AGENTS.md.
* Fix beads-rust x86_64-linux hash for upstream tarball change
* Fix trailing whitespace and end-of-file issues: auto-fixed by prek builtin hooks (trailing-whitespace, end-of-file-fixer).
* Use self-hosted runner with fallback for prek CI
* Trigger CI

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
1 parent bf5e125 commit cfa1b29

File tree

26 files changed (+294 −184 lines)


.beads/metadata.json

Lines changed: 1 addition & 1 deletion
@@ -1,4 +1,4 @@
 {
   "database": "beads.db",
   "jsonl_export": "issues.jsonl"
-}
+}

(end-of-file-fixer: a trailing newline was added after the closing brace; the JSON content is unchanged)

.github/workflows/prek.yml

Lines changed: 39 additions & 0 deletions
@@ -0,0 +1,39 @@
+name: Prek
+
+on:
+  push:
+    branches: [main]
+  pull_request:
+
+concurrency:
+  group: ${{ github.workflow }}-${{ github.ref }}
+  cancel-in-progress: true
+
+permissions:
+  contents: read
+
+jobs:
+  determine-runner:
+    runs-on: ubuntu-latest
+    outputs:
+      runner: ${{ steps.runner.outputs.use-runner }}
+    steps:
+      - name: Determine runner
+        id: runner
+        uses: mikehardy/runner-fallback-action@v1
+        with:
+          github-token: ${{ secrets.GH_RUNNER_TOKEN }}
+          primary-runner: self-hosted-16-cores
+          fallback-runner: ubuntu-latest
+          organization: fuww
+          fallback-on-error: true
+
+  prek:
+    runs-on: ${{ fromJson(needs.determine-runner.outputs.runner) }}
+    needs: [determine-runner]
+    steps:
+      - uses: actions/checkout@v4
+      - uses: DeterminateSystems/nix-installer-action@main
+      - uses: DeterminateSystems/magic-nix-cache-action@main
+      - name: Run prek checks
+        run: nix develop --command prek run --all-files

AGENTS.md

Lines changed: 6 additions & 0 deletions
@@ -87,3 +87,9 @@ All content is markdown and JSON — edit directly, no build step required. When
 - NEVER stop before pushing - that leaves work stranded locally
 - NEVER say "ready to push when you are" - YOU must push
 - If push fails, resolve and retry until it succeeds
+
+## Code Quality
+
+```bash
+nix develop --command prek run --all-files # Run all pre-commit checks
+```

PROMPT_plan.md

Lines changed: 3 additions & 3 deletions
Both hunks strip trailing whitespace only; the affected lines are shown post-fix:

@@ -8,7 +8,7 @@
- For each epic, verify child tasks cover all aspects of the specification
- Check for missing dependencies using `bd dep cycles` (should be empty)
- Identify any tasks that should block others but don't

2. Update the beads database to fix any issues found:
   - Create missing tasks with `bd create "title" -t task -p <priority> -d "description"`
   - Add missing dependencies with `bd dep add <child> <parent> --type blocks`

@@ -20,7 +20,7 @@
- `bd blocked` should show tasks waiting on dependencies
- `bd stats` should show accurate counts

IMPORTANT: Plan only. Do NOT implement anything. Do NOT assume functionality is missing;
use `bd list` and code search to verify first.

ULTIMATE GOAL: Refactor all knowledge-work plugins from generic Anthropic templates to FashionUnited-specific workflows, tools, and domain context. Ensure all necessary tasks exist as beads with proper dependencies so `bd ready` always shows the right next work.

README.md

Lines changed: 7 additions & 0 deletions
@@ -96,3 +96,10 @@ Plugins are just markdown files. Fork the repo, make your changes, and submit a
 ## License

 This fork is licensed under Apache 2.0, same as the [original Anthropic repository](https://github.com/anthropics/knowledge-work-plugins). See [LICENSE](LICENSE) for details.
+
+## Development Environment
+
+```bash
+nix develop # Enter dev shell with all tools
+nix develop --command prek run --all-files # Run pre-commit checks
+```

archived/bio-research/skills/clinical-trial-protocol/SKILL.md

Lines changed: 1 addition & 3 deletions
Whitespace-only changes (trailing whitespace removed from line 24; two trailing blank lines removed at end of file), shown post-fix:

@@ -21,7 +21,7 @@ description: Generate clinical trial protocols for medical devices or drugs. Thi

## Overview

This skill generates clinical trial protocols for **medical devices or drugs** using a **modular, waypoint-based architecture**

## What This Skill Does

@@ -504,5 +504,3 @@ When this skill is invoked:
- **Research Only:** Display research summary location and offer to continue with full protocol
- **Full Protocol:** Congratulate user, display protocol location and next steps
- Remind user of disclaimers
archived/bio-research/skills/clinical-trial-protocol/references/00-initialize-intervention.md

Lines changed: 1 addition & 1 deletion
@@ -198,4 +198,4 @@ If `waypoints/intervention_metadata.json` already exists:
- Ensure the intervention_id is filesystem-safe (no spaces, special chars)
- Validate that required fields are not empty
- Write clean, formatted JSON with proper indentation
- Handle both device and drug interventions appropriately with the right terminology

(trailing whitespace removed from the final line; content unchanged)

archived/bio-research/skills/clinical-trial-protocol/references/02-protocol-foundation.md

Lines changed: 4 additions & 4 deletions
Whitespace-only changes (trailing whitespace removed from lines 253, 255, 257, and 344), shown post-fix:

@@ -250,11 +250,11 @@ STATEMENT OF COMPLIANCE
**Content to Generate:**

STATEMENT OF COMPLIANCE
Provide a statement that the trial will be conducted in compliance with the protocol, International Conference on Harmonisation Good Clinical Practice (ICH GCP) and applicable state, local and federal regulatory requirements. Each engaged institution must have a current Federal-Wide Assurance (FWA) issued by the Office for Human Research Protections (OHRP) and must provide this protocol and the associated informed consent documents and recruitment materials for review and approval by an appropriate Institutional Review Board (IRB) or Ethics Committee (EC) registered with OHRP. Any amendments to the protocol or consent materials must also be approved before implementation. Select one of the two statements below:

(1) [The trial will be carried out in accordance with International Conference on Harmonisation Good Clinical Practice (ICH GCP) and the following:

• United States (US) Code of Federal Regulations (CFR) applicable to clinical studies (45 CFR Part 46, 21 CFR Part 50, 21 CFR Part 56, 21 CFR Part 312, and/or 21 CFR Part 812)

National Institutes of Health (NIH)-funded investigators and clinical trial site staff who are responsible for the conduct, management, or oversight of NIH-funded clinical trials have completed Human Subjects Protection and ICH GCP Training.

@@ -341,7 +341,7 @@ This section contains three major components. Generate each with appropriate det

#### Section 1.2: Schema (30 lines)

**Generate a text-based flow diagram** showing study progression.

**Required Elements:**
- **Screening Period:** Show duration (e.g., "Within 28 days") and key activities (eligibility assessment)

archived/bio-research/skills/instrument-data-to-allotrope/references/flattening_guide.md

Lines changed: 12 additions & 12 deletions
Whitespace-only changes (trailing whitespace stripped from blank lines inside the examples), shown post-fix:

@@ -29,11 +29,11 @@ ASM Hierarchy → Flat Column
device-system-document.
  device-identifier → instrument_serial_number
  model-number → instrument_model

measurement-aggregate-document.
  analyst → analyst
  measurement-time → measurement_datetime

measurement-document[].
  sample-identifier → sample_id
  viable-cell-density.value → viable_cell_density

@@ -185,43 +185,43 @@ import pandas as pd
def flatten_asm(asm_dict, technique="cell-counting"):
    """
    Flatten ASM JSON to pandas DataFrame.

    Args:
        asm_dict: Parsed ASM JSON
        technique: ASM technique type

    Returns:
        pandas DataFrame with one row per measurement
    """
    rows = []

    # Get aggregate document
    agg_key = f"{technique}-aggregate-document"
    agg_doc = asm_dict.get(agg_key, {})

    # Extract device info
    device = agg_doc.get("device-system-document", {})
    device_info = {
        "instrument_serial_number": device.get("device-identifier"),
        "instrument_model": device.get("model-number")
    }

    # Get technique documents
    doc_key = f"{technique}-document"
    for doc in agg_doc.get(doc_key, []):
        meas_agg = doc.get("measurement-aggregate-document", {})

        # Extract common metadata
        common = {
            "analyst": meas_agg.get("analyst"),
            "measurement_datetime": meas_agg.get("measurement-time"),
            **device_info
        }

        # Extract each measurement
        for meas in meas_agg.get("measurement-document", []):
            row = {**common}

            # Flatten measurement fields
            for key, value in meas.items():
                if isinstance(value, dict) and "value" in value:

@@ -232,9 +232,9 @@ def flatten_asm(asm_dict, technique="cell-counting"):
                        row[f"{col}_unit"] = value["unit"]
                else:
                    row[key.replace("-", "_")] = value

            rows.append(row)

    return pd.DataFrame(rows)

# Usage

archived/bio-research/skills/instrument-data-to-allotrope/scripts/export_parser.py

Lines changed: 18 additions & 18 deletions
Whitespace-only changes (trailing whitespace stripped from blank lines; this is a template file, so `{{ }}` are escaped braces), shown post-fix:

@@ -63,16 +63,16 @@
def convert_to_asm(filepath: str) -> Optional[Dict[str, Any]]:
    """
    Convert {instrument_name} file to ASM format.

    Args:
        filepath: Path to input file

    Returns:
        ASM dictionary or None if conversion fails
    """
    if not ALLOTROPY_AVAILABLE:
        raise ImportError("allotropy library required. Install with: pip install allotropy")

    try:
        asm = allotrope_from_file(filepath, Vendor.{vendor})
        return asm

@@ -84,36 +84,36 @@ def convert_to_asm(filepath: str) -> Optional[Dict[str, Any]]:
def flatten_asm(asm: Dict[str, Any]) -> list:
    """
    Flatten ASM to list of row dictionaries for CSV export.

    Args:
        asm: ASM dictionary

    Returns:
        List of flattened row dictionaries
    """
    technique = "{technique}"
    rows = []

    agg_key = f"{{technique}}-aggregate-document"
    agg_doc = asm.get(agg_key, {{}})

    # Extract device info
    device = agg_doc.get("device-system-document", {{}})
    device_info = {{
        "instrument_serial_number": device.get("device-identifier"),
        "instrument_model": device.get("model-number"),
    }}

    doc_key = f"{{technique}}-document"
    for doc in agg_doc.get(doc_key, []):
        meas_agg = doc.get("measurement-aggregate-document", {{}})

        common = {{
            "analyst": meas_agg.get("analyst"),
            "measurement_time": meas_agg.get("measurement-time"),
            **device_info
        }}

        for meas in meas_agg.get("measurement-document", []):
            row = {{**common}}
            for key, value in meas.items():

@@ -125,7 +125,7 @@ def flatten_asm(asm: Dict[str, Any]) -> list:
                else:
                    row[clean_key] = value
            rows.append(row)

    return rows

@@ -134,36 +134,36 @@ def main():
    parser.add_argument("input", help="Input file path")
    parser.add_argument("--output", "-o", help="Output JSON path")
    parser.add_argument("--flatten", action="store_true", help="Also generate CSV")

    args = parser.parse_args()

    input_path = Path(args.input)
    if not input_path.exists():
        print(f"Error: File not found: {{args.input}}")
        return 1

    # Convert to ASM
    print(f"Converting {{args.input}}...")
    asm = convert_to_asm(str(input_path))

    if asm is None:
        print("Conversion failed")
        return 1

    # Write ASM JSON
    output_path = args.output or str(input_path.with_suffix('.asm.json'))
    with open(output_path, 'w') as f:
        json.dump(asm, f, indent=2, default=str)
    print(f"ASM written to: {{output_path}}")

    # Optionally flatten
    if args.flatten and PANDAS_AVAILABLE:
        rows = flatten_asm(asm)
        df = pd.DataFrame(rows)
        flat_path = str(input_path.with_suffix('.flat.csv'))
        df.to_csv(flat_path, index=False)
        print(f"CSV written to: {{flat_path}}")

    return 0
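The whitespace-only hunks above leave the flattening logic untouched. As a self-contained sketch of the pattern `flatten_asm` implements, with the template placeholders filled in using the "cell-counting" technique named in the flattening guide (the sample ASM dict below is invented for illustration only):

```python
def flatten_asm(asm, technique="cell-counting"):
    """Flatten an ASM dict into a list of row dicts, one per measurement."""
    rows = []
    agg_doc = asm.get(f"{technique}-aggregate-document", {})

    # Device metadata is repeated into every row
    device = agg_doc.get("device-system-document", {})
    device_info = {
        "instrument_serial_number": device.get("device-identifier"),
        "instrument_model": device.get("model-number"),
    }

    for doc in agg_doc.get(f"{technique}-document", []):
        meas_agg = doc.get("measurement-aggregate-document", {})
        common = {
            "analyst": meas_agg.get("analyst"),
            "measurement_time": meas_agg.get("measurement-time"),
            **device_info,
        }
        for meas in meas_agg.get("measurement-document", []):
            row = dict(common)
            for key, value in meas.items():
                clean_key = key.replace("-", "_")
                if isinstance(value, dict) and "value" in value:
                    # value/unit pairs become two flat columns
                    row[clean_key] = value["value"]
                    if "unit" in value:
                        row[f"{clean_key}_unit"] = value["unit"]
                else:
                    row[clean_key] = value
            rows.append(row)
    return rows


# Invented sample input, shaped like the ASM hierarchy in the guide
sample_asm = {
    "cell-counting-aggregate-document": {
        "device-system-document": {
            "device-identifier": "SN-001",
            "model-number": "MODEL-X",
        },
        "cell-counting-document": [
            {
                "measurement-aggregate-document": {
                    "analyst": "jdoe",
                    "measurement-time": "2024-01-01T12:00:00Z",
                    "measurement-document": [
                        {
                            "sample-identifier": "S1",
                            "viable-cell-density": {"value": 1.2e6, "unit": "cells/mL"},
                        }
                    ],
                }
            }
        ],
    }
}

rows = flatten_asm(sample_asm)
print(rows[0]["sample_identifier"], rows[0]["viable_cell_density_unit"])  # → S1 cells/mL
```

The generic `key.replace("-", "_")` rule yields `sample_identifier` rather than the guide's friendlier `sample_id`; the guide's column names imply an extra renaming map on top of this sketch.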
