Fix CI (#1)
* Explicitly install pre-commit
* Run tests on Windows
* Cannot fetch the model from a GitHub Actions runner
* Fix the documentation build
* Remove the Gemma test
* Fix typo
* Add the Gemma test back
alessandropalla authored Feb 28, 2024
1 parent 3e16f10 commit d19ae2b
Showing 4 changed files with 7 additions and 4 deletions.

.github/workflows/style.yml (4 additions, 0 deletions)

@@ -20,5 +20,9 @@ jobs:
       run: |
         python -m pip install --upgrade pip
         pip install .[dev]
+    - name: Install pre-commit
+      run: |
+        pip install pre-commit
+        pre-commit install
     - name: Run tests
       run: pre-commit run --all-files

.github/workflows/test.yml (1 addition, 1 deletion)

@@ -9,7 +9,7 @@ on:
 
 jobs:
   build:
-    runs-on: ubuntu-latest
+    runs-on: windows-latest
     strategy:
       matrix:
         python-version: ["3.8", "3.9", "3.10", "3.11", "3.12"]

docs/source/conf.py (0 additions, 1 deletion)

@@ -30,7 +30,6 @@
     # 'sphinx.ext.autodoc',
     "sphinx.ext.napoleon",
     "breathe",
-    "sphinx_rtd_theme",
     "myst_parser",
 ]
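
The documentation fix drops "sphinx_rtd_theme" from the Sphinx extensions list. A plausible reading, not stated in the commit: listing the theme as an extension forces Sphinx to import that package at build time, which fails when it is missing from the docs environment, whereas recent releases of the theme only need to be named in html_theme. A minimal conf.py sketch under that assumption (the html_theme line is illustrative, not quoted from this repository):

# docs/source/conf.py (sketch, assuming a recent sphinx-rtd-theme release)
extensions = [
    # 'sphinx.ext.autodoc',
    "sphinx.ext.napoleon",
    "breathe",
    "myst_parser",
]

# Selecting the theme here is enough; adding "sphinx_rtd_theme" to
# `extensions` triggers an import that breaks the build when the
# package is not installed.
html_theme = "sphinx_rtd_theme"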


test/python/test_optimizations.py (2 additions, 2 deletions)

@@ -68,7 +68,7 @@ def get_model(model_name, hidden_size, intermediate_size, bias):
 
         return LlamaMLP(conf)
     elif model_name == "GemmaMLP":
-        conf = GemmaConfig.from_pretrained("google/gemma-2b-it")
+        conf = GemmaConfig()
         conf.num_hidden_layers = 1
         conf.hidden_size = hidden_size
         conf.head_dim = conf.hidden_size // conf.num_attention_heads
@@ -83,7 +83,7 @@ def get_model(model_name, hidden_size, intermediate_size, bias):
 
         return LlamaModel(conf)
     elif model_name == "GemmaModel":
-        conf = GemmaConfig.from_pretrained("google/gemma-2b-it")
+        conf = GemmaConfig()
         conf.num_hidden_layers = 1
         conf.hidden_size = hidden_size
         conf.head_dim = conf.hidden_size // conf.num_attention_heads
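
This test change pairs with the "Cannot fetch the model from a GitHub Actions runner" note above: GemmaConfig.from_pretrained("google/gemma-2b-it") downloads the config from the Hugging Face Hub, and that repository is gated, so an unauthenticated CI runner cannot fetch it. Constructing GemmaConfig() locally sidesteps the network entirely. A self-contained sketch of the pattern; the sizes are illustrative stand-ins for the values the test parametrizes:

# Build a tiny Gemma config offline instead of fetching it from the Hub.
# Assumes transformers >= 4.38, the release that introduced GemmaConfig.
from transformers import GemmaConfig

conf = GemmaConfig()  # library defaults; no download, no token required
conf.num_hidden_layers = 1     # a single layer keeps the test model small
conf.hidden_size = 512         # illustrative; the real test passes this in
conf.intermediate_size = 1024  # illustrative; the real test passes this in
conf.head_dim = conf.hidden_size // conf.num_attention_heads

# The resulting config can be handed to GemmaMLP(conf) or GemmaModel(conf)
# exactly as the patched test does.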
