
Release/v0.1.0 #6

Open: wants to merge 14 commits into base `master`
101 changes: 97 additions & 4 deletions README.md
@@ -15,16 +15,106 @@ It is recommended that you consult [the current working branch](https://github.c

A Stephen Fox endeavor to become an Applied AI Scientist.

## Background Resources

### Key Ideas

1. Make it simple to refine neural architectures
2. Focus on shrinking the model's parameter count while __keeping performance as high as possible__
3. Make the tools user-friendly and clearly documented

### Project Roadmap

- Please see [the GitHub Project board](https://github.com/stephenjfox/Morph.py/projects/1)

---

## Usage

### Installation

`pip install morph-py`

### Code Example

```python
import logging

import morph

morph_model, morph_optimizer = None, None

# training loop
for e in range(epoch_count):

    for input, target in dataloader:
        optimizer.zero_grad()
        output = model(input)

        loss = loss_fn(output, target)
        loss.backward()
        optimizer.step()

        # once the morphed model exists, train it alongside the original
        if morph_optimizer:
            morph_optimizer.zero_grad()
            morph_loss = loss_fn(morph_model(input), target)

            logging.info(f'Morph loss - Standard loss = {morph_loss - loss}')

            morph_loss.backward()
            morph_optimizer.step()

    # Experimentally supported: initialize the morphing halfway through training
    if e == epoch_count // 2:
        # if you want to override your model
        model = morph.once(model)

        # if you want to compare in parallel
        morph_model = morph.once(model)

        # either way, you need to tell a fresh optimizer about the new parameters
        morph_optimizer = init_optimizer(params=morph_model.parameters())
```

## What is Morph.py?

Morph.py is a Neural Network Architecture Optimization toolkit targeted at Deep Learning researchers
and practitioners.
* It acts outside the current paradigm of [Neural Architecture Search](https://github.com/D-X-Y/awesome-NAS)
while still proving effective
* It helps you model a network's accuracy with respect to its size (measured as the count of model parameters)
* Consequently, you can be nearly as accurate (within some margin of error) with a __much__ smaller
memory footprint
* It provides you, the researcher, with [better insight on how to improve your model](https://github.com/stephenjfox/Morph.py/projects/3)
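Parameter count, the size metric used above, is easy to compute by hand for a stack of fully connected layers. A minimal plain-Python sketch (the helper name is ours, not part of the morph API; the layer sizes are borrowed from the `EasyMnist` model that appears in this PR's notebook):

```python
def linear_param_count(layer_sizes):
    """Count weights + biases for a stack of fully connected layers.

    layer_sizes: e.g. [784, 1000, 30, 10] means 784 -> 1000 -> 30 -> 10.
    Each Linear(n_in, n_out) holds n_in * n_out weights plus n_out biases.
    """
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

# EasyMnist's sizes: 784 -> 1000 -> 30 -> 10
print(linear_param_count([784, 1000, 30, 10]))  # 815340
```

Shrinking any one hidden width directly shrinks this number, which is what morph optimizes against.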

Please enjoy this [Google Slides presentation](https://goo.gl/ZzZrng)

Coming soon:
* A walkthrough of the presentation (more detail than my presenter's notes)
* More [supported model architectures](https://github.com/stephenjfox/Morph.py/projects/2)


### Current support

* Dynamic adjustment of a given layer's size
* Weight persistence across layer resizing
  * To preserve all the hard work you spent training
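Weight persistence across a resize can be pictured as copying the overlapping block of the old weight matrix into the new one. A plain-Python illustration of the idea (a sketch of the concept, not the morph implementation):

```python
import random

def resize_with_persistence(weights, new_rows, new_cols):
    """Return a new_rows x new_cols weight matrix.

    Entries that exist in both the old and new shapes are copied from
    `weights`, so trained values survive the resize; entries that only
    exist in the new shape are freshly initialized.
    """
    old_rows, old_cols = len(weights), len(weights[0])
    return [
        [weights[r][c] if r < old_rows and c < old_cols
         else random.gauss(0.0, 0.01)  # new slot: small random init
         for c in range(new_cols)]
        for r in range(new_rows)
    ]

w = [[1.0, 2.0], [3.0, 4.0]]            # a trained 2x2 layer
print(resize_with_persistence(w, 1, 2))  # [[1.0, 2.0]] -- trained row survives
```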

---

# Contributing

## Setup (to work alongside me)

`git clone https://github.com/stephenjfox/Morph.py.git`

### Requisites

#### [Install Anaconda](https://www.anaconda.com/download/)
* They've made it easier over the years. If you haven't already, please give it a try

#### Install Pip

1. `conda install pip`
2. Proceed as normal
@@ -34,4 +124,7 @@ A Stephen Fox endeavor to become an Applied AI Scientist.
- Jupyter Notebook
* And a few tools to make it better on your local environment like `nb_conda`, `nbconvert`, and `nb_conda_kernels`
- Python 3.6+ because [Python 2 is dying](https://pythonclock.org/)
- PyTorch (`conda install pytorch torchvision -c pytorch`)

All of these and more are covered in the `environment.yml` file:
- Simply run `conda env create -f environment.yml -n <desired environment name>`
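For reference, a minimal `environment.yml` covering the dependencies listed above might look like the following (a sketch based on this README's list, not the file shipped in the repo):

```yaml
name: morph
channels:
  - pytorch
  - defaults
dependencies:
  - python>=3.6
  - pip
  - jupyter
  - nb_conda
  - nbconvert
  - nb_conda_kernels
  - pytorch
  - torchvision
```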
171 changes: 171 additions & 0 deletions check-prune-widen.ipynb
@@ -0,0 +1,171 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"import morph"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"<module 'morph.nn' from '/Users/stephen/Documents/Insight-AI/Insight-AI-Fellowship-Project/src/morph/nn/__init__.py'>"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"morph.nn"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"??morph.nn.once"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"import morph.nn.shrink as ms"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"from morph.testing.models import EasyMnist"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"0"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"some_linear = ms.nn.Linear(3, 2)\n",
"c = [c for c in some_linear.children()]\n",
"len(c)"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"EasyMnist(\n",
" (linear1): Linear(in_features=784, out_features=1000, bias=True)\n",
" (linear2): Linear(in_features=1000, out_features=30, bias=True)\n",
" (linear3): Linear(in_features=30, out_features=10, bias=True)\n",
")"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"EasyMnist()"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Module(\n",
" (linear1): Linear(in_features=784, out_features=700, bias=True)\n",
" (linear2): Linear(in_features=700, out_features=21, bias=True)\n",
" (linear3): Linear(in_features=21, out_features=10, bias=True)\n",
")"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"ms.prune(EasyMnist())"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.2"
},
"toc": {
"base_numbering": 1,
"nav_menu": {},
"number_sections": true,
"sideBar": true,
"skip_h1_title": false,
"title_cell": "Table of Contents",
"title_sidebar": "Contents",
"toc_cell": false,
"toc_position": {},
"toc_section_display": true,
"toc_window_display": false
}
},
"nbformat": 4,
"nbformat_minor": 2
}
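The `ms.prune` output in the notebook above shrinks each hidden layer to roughly 70% of its width (1000 → 700, 30 → 21) while the input and output widths stay fixed. That sizing rule can be sketched in plain Python (the 0.7 factor is inferred from the printed shapes, not read from the morph source):

```python
def shrunk_hidden_sizes(sizes, factor=0.7):
    """Shrink hidden-layer widths by `factor`, keeping input/output fixed.

    sizes: full layer widths, e.g. [784, 1000, 30, 10].
    """
    first, *hidden, last = sizes
    # round to the nearest integer width for each hidden layer
    return [first] + [round(n * factor) for n in hidden] + [last]

print(shrunk_hidden_sizes([784, 1000, 30, 10]))  # [784, 700, 21, 10]
```

This matches the `EasyMnist` shapes before and after `ms.prune` in the notebook output.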
11 changes: 8 additions & 3 deletions demo.py
@@ -6,20 +6,25 @@
import morph.nn as net
from morph.layers.sparse import sparsify

from morph.testing.models import EasyMnist


def random_dataset():
return TensorDataset(torch.randn(2, 28, 28))

def main():
my_model = EasyMnist()
# do one pass through the algorithm
modified = morph.once(my_model)

print(modified) # take a peek at the new layers. You take it from here

my_dataloader = DataLoader(random_dataset())

# get back the class that will do work
morphed = net.Morph(my_model, epochs=5, dataloader=my_dataloader)

# TODO: we need your loss function, but this is currently __unsupported__
morphed.run_training()

