
Commit 4b505ce
Merge pull request #107 from slegroux/81-u-net
update resblock; add preact conv; super-res with autoencoder; and simple u-net; + tuts egs
2 parents 04bdd9b + 1b29850

Note: large commits have some content hidden by default, so several new files below appear without their file names.

75 files changed: +19143 −3069 lines

.gitignore (+7 −1)

@@ -131,18 +131,22 @@ dmypy.json
 *.gz
 recipes/*/logs
 *.bak
+*.bk
 tests/
 .hydra/
 *.png
 *.wav
 *.pt
+*.npy
 *.onnx
 *.ckpt
 *.pkg
 *.lca
 *.wandb
 *tfevents*
 
+
+
 */wandb/*
 *_logs*
 # recipes/image/
@@ -152,6 +156,8 @@ _docs/
 /data/codeparrot
 /data/en/Libri*
 /data/
+/sandbox/
+recipes
 token
 env-file
 Dockerfile.base
@@ -160,7 +166,7 @@ Dockerfile.paperspace
 # wandb stuff
 wandb
 artifacts
-
+Miniforge*
 .vscode
 *.swp
 

README.md (+4 −27)

@@ -1,5 +1,6 @@
 # Nimrod
 
+
 <!-- WARNING: THIS FILE WAS AUTOGENERATED! DO NOT EDIT! -->
 
 [![python](https://img.shields.io/badge/-Python_3.7_%7C_3.8_%7C_3.9_%7C_3.10-blue?logo=python&logoColor=white)](https://github.com/pre-commit/pre-commit)
@@ -19,43 +20,19 @@ you need python \<3.12
 
 ### Install using Pip
 
-Install package:
 ``` sh
 pip install slg-nimrod
 ```
-Install espeak for LM:
-```bash
-brew install espeak #macos
-```
-Install Spacy english model
-```bash
-python -m spacy download en_core_web_sm
-```
-
 
 ## Usage
 
-Download test data on which to run example recipes:
-
-```bash
-# if not already installed on your system
-git lfs install
-# update changes
-git lfs fetch --all
-# copy the actual data
-git lfs checkout
-# or just
-git lfs pull # combing both steps above into one (like usual git pull)
-```
-
 Check recipes in `recipes/` folder. E.g. for a simple digit recognizer
 on MNIST:
 
 ``` bash
 git clone https://github.com/slegroux/nimrod.git
-cd nimrod/recipes/images/mnist
-python train.py datamodule.num_workers=8 trainer.max_epochs=20 trainer.accelerator='mps' loggers='tensorboard'
-head conf/train.yaml
+python train.py experiment=mnist_mlp data.num_workers=8 trainer.max_epochs=20
+head config/train.yaml
 ```
@@ -98,7 +75,7 @@ to compare training results on different model parameters:
 
 ``` bash
 cd nimrod/recipes/images/mnist
-python train.py --multirun model.n_h=16,64,256 loggers='tensorboard' trainer.max_epochs=5
+python train.py --multirun model.n_h=16,64,256 logger='tensorboard' trainer.max_epochs=5
 ```
 
 ## Server
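The new README command drives everything through Hydra-style command-line overrides: `experiment=mnist_mlp` swaps in a whole config group, while dotted keys like `data.num_workers=8` patch individual values in the composed config. A minimal pure-Python sketch of the dotted-override mechanic (illustration only, not Hydra itself; `apply_override` is a hypothetical helper, the key paths come from the command above):

```python
# Sketch of Hydra-style dotted overrides: 'a.b.c=value' walks a nested
# config dict and sets the leaf. Hypothetical helper, for illustration.
def apply_override(cfg: dict, override: str) -> None:
    path, _, raw = override.partition("=")
    keys = path.split(".")
    node = cfg
    for k in keys[:-1]:
        node = node.setdefault(k, {})
    try:                      # naive typing: int if possible, else string
        value = int(raw)
    except ValueError:
        value = raw.strip("'")
    node[keys[-1]] = value

cfg = {"data": {"num_workers": 0}, "trainer": {"max_epochs": 5}}
for ov in ["data.num_workers=8", "trainer.max_epochs=20"]:
    apply_override(cfg, ov)

print(cfg)  # {'data': {'num_workers': 8}, 'trainer': {'max_epochs': 20}}
```

The `--multirun` sweep in the second command simply repeats this composition once per value of `model.n_h`.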

config/callbacks/default.yaml (+3 −2)

@@ -3,6 +3,7 @@ defaults:
   - early_stopping
   - model_summary
   - rich_progress_bar
+  - learning_rate_monitor
   - _self_
 
 model_checkpoint:
@@ -19,5 +20,5 @@ early_stopping:
   patience: 100
   mode: "min"
 
-model_summary:
-  max_depth: -1
+# model_summary:
+#   max_depth: -1
(new file, name hidden in this view) (+3)

@@ -0,0 +1,3 @@
+learning_rate_monitor:
+  _target_: lightning.pytorch.callbacks.LearningRateMonitor
+  logging_interval: step
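The `_target_` key is how these configs name the class to build: Hydra splits the dotted path, imports the module, and calls the class with the remaining keys as keyword arguments. A dependency-free sketch of that resolution (illustration only; `hydra.utils.instantiate` does the real work, and a stdlib class stands in for `LearningRateMonitor` here):

```python
# Sketch of Hydra-style `_target_` resolution: import the module path,
# grab the class, call it with the other config keys as kwargs.
import importlib

def instantiate(conf: dict):
    module_path, _, cls_name = conf["_target_"].rpartition(".")
    cls = getattr(importlib.import_module(module_path), cls_name)
    kwargs = {k: v for k, v in conf.items() if k != "_target_"}
    return cls(**kwargs)

# Demonstrated on a stdlib class so the sketch stays dependency-free:
conf = {"_target_": "fractions.Fraction", "numerator": 3, "denominator": 4}
frac = instantiate(conf)
print(frac)  # 3/4
```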

config/data/image/fashion_mnist.yaml (+3)

@@ -10,6 +10,9 @@ transforms:
   _target_: torchvision.transforms.Compose
   transforms:
     - _target_: torchvision.transforms.ToTensor
+    - _target_: torchvision.transforms.Normalize
+      mean: 0.28
+      std: 0.35
     - _target_: torchvision.transforms.Resize
       size: 32
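`Normalize` maps each pixel to `(x - mean) / std`, so with the Fashion-MNIST statistics added above (mean 0.28, std 0.35) inputs in [0, 1] land roughly in [−0.8, 2.06], centered near zero. A quick pure-Python check of the arithmetic (`normalize` is just an illustrative helper):

```python
# Normalize applies (x - mean) / std per pixel; stats from the config above.
mean, std = 0.28, 0.35

def normalize(x: float) -> float:
    return (x - mean) / std

print(round(normalize(0.0), 4))   # -0.8    (black pixel)
print(round(normalize(1.0), 4))   # 2.0571  (white pixel)
print(normalize(0.28))            # 0.0     (pixel at the dataset mean)
```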

(new file, name hidden in this view) (+30)

@@ -0,0 +1,30 @@
+_target_: nimrod.image.datasets.ImageSuperResDataModule
+
+name: 'fashion_mnist'
+data_dir: '../data/image'
+train_val_split: [0.8, 0.2]
+batch_size: 512
+num_workers: 0
+pin_memory: True
+persistent_workers: False
+
+transform_x:
+  _target_: torchvision.transforms.Compose
+  transforms:
+    - _target_: torchvision.transforms.ToTensor
+    - _target_: torchvision.transforms.Resize
+      size: 32
+    - _target_: torchvision.transforms.Normalize
+      mean: 0.28
+      std: 0.35
+
+transform_y:
+  _target_: torchvision.transforms.Compose
+  transforms:
+    - _target_: torchvision.transforms.ToTensor
+    - _target_: torchvision.transforms.Resize
+      size: 32
+    - _target_: torchvision.transforms.Normalize
+      mean: 0.28
+      std: 0.35
+

config/data/image/mnist.yaml (+2 −1)

@@ -4,7 +4,8 @@ data_dir: "../data/image"
 train_val_split: [0.8, 0.2]
 batch_size: 64
 num_workers: 0
-pin_memory: False
+
+pin_memory: True
 persistent_workers: False
 transforms:
   _target_: torchvision.transforms.Compose

config/data/image/tiny_imagenet.yaml (+18)

@@ -0,0 +1,18 @@
+_target_: nimrod.image.datasets.ImageDataModule
+name: 'slegroux/tiny-imagenet-200-clean'
+data_dir: '../data/image'
+train_val_split: [0.8, 0.2]
+batch_size: 512
+num_workers: 0
+pin_memory: True
+persistent_workers: False
+transforms:
+  _target_: torchvision.transforms.Compose
+  transforms:
+    - _target_: torchvision.transforms.ToTensor
+    - _target_: torchvision.transforms.Normalize
+      mean: [0.4822, 0.4495, 0.3985]
+      std: [0.2771, 0.2690, 0.2826]
+    # - _target_: torchvision.transforms.Resize
+    #   size: [32,32]
+
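The per-channel `mean`/`std` values in this config are dataset statistics: each RGB channel is averaged over every pixel of the training set. A toy pure-Python sketch of how such numbers are computed (a single 2x2 channels-last "dataset" stands in for Tiny ImageNet; real code would stream over the full set with numpy or torch):

```python
# Per-channel normalization stats: mean and std of each channel over all
# training pixels. Toy 2x2 RGB "dataset" for illustration only.
import math

image = [  # 2x2 RGB image, values in [0, 1], channels last
    [[0.2, 0.4, 0.6], [0.4, 0.6, 0.8]],
    [[0.6, 0.8, 1.0], [0.8, 1.0, 0.2]],
]

pixels = [px for row in image for px in row]
n = len(pixels)
mean = [sum(px[c] for px in pixels) / n for c in range(3)]
std = [math.sqrt(sum((px[c] - mean[c]) ** 2 for px in pixels) / n)
       for c in range(3)]

print([round(m, 2) for m in mean])  # [0.5, 0.7, 0.65]
```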
18+
(new file, name hidden in this view) (+23)

@@ -0,0 +1,23 @@
+_target_: nimrod.image.datasets.ImageDataModule
+name: 'slegroux/tiny-imagenet-200-clean'
+data_dir: '../data/image'
+train_val_split: [0.8, 0.2]
+batch_size: 512
+num_workers: 0
+pin_memory: True
+persistent_workers: False
+transforms:
+  _target_: torchvision.transforms.Compose
+  transforms:
+    - _target_: torchvision.transforms.ToTensor
+    - _target_: torchvision.transforms.Normalize
+      mean: [0.4822, 0.4495, 0.3985]
+      std: [0.2771, 0.2690, 0.2826]
+    - _target_: torchvision.transforms.Resize
+      size: 64
+    - _target_: torchvision.transforms.RandomCrop
+      size: 64
+    - _target_: torchvision.transforms.RandomHorizontalFlip
+    - _target_: torchvision.transforms.RandomVerticalFlip
+
+
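This variant adds data augmentation: `RandomHorizontalFlip` and `RandomVerticalFlip` each mirror the image with probability 0.5 every time it is loaded, so the model sees fresh variants each epoch. The deterministic core of a horizontal flip is just reversing each row; a minimal sketch (`hflip` is an illustrative helper, shown on a 2x3 single-channel "image"):

```python
# Core of RandomHorizontalFlip: mirror the image left-right by reversing
# each row (torchvision applies this with probability 0.5 at load time).
def hflip(img):
    return [list(reversed(row)) for row in img]

img = [[1, 2, 3],
       [4, 5, 6]]
print(hflip(img))  # [[3, 2, 1], [6, 5, 4]]
```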
(new file, name hidden in this view) (+28)

@@ -0,0 +1,28 @@
+_target_: nimrod.image.datasets.ImageSuperResDataModule
+name: 'slegroux/tiny-imagenet-200-clean'
+data_dir: '../data/image'
+train_val_split: [0.8, 0.2]
+batch_size: 512
+num_workers: 0
+pin_memory: True
+persistent_workers: False
+transform_x:
+  _target_: torchvision.transforms.Compose
+  transforms:
+    - _target_: torchvision.transforms.ToTensor
+    - _target_: torchvision.transforms.Normalize
+      mean: [0.4822, 0.4495, 0.3985]
+      std: [0.2771, 0.2690, 0.2826]
+    - _target_: torchvision.transforms.Resize
+      size: [32,32]
+    - _target_: torchvision.transforms.Resize
+      size: [64,64]
+
+transform_y:
+  _target_: torchvision.transforms.Compose
+  transforms:
+    - _target_: torchvision.transforms.ToTensor
+    - _target_: torchvision.transforms.Normalize
+      mean: [0.4822, 0.4495, 0.3985]
+      std: [0.2771, 0.2690, 0.2826]
+
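The key trick in this super-resolution datamodule is that `transform_x` chains `Resize [32,32]` then `Resize [64,64]`: the round trip discards high-frequency detail, producing a degraded input whose clean counterpart (`transform_y`, no resizing) is the training target. A dependency-free sketch of the idea, using 2x2 block averaging down and nearest-neighbor up (torchvision's `Resize` uses bilinear interpolation by default; `downsample2x`/`upsample2x` are illustrative helpers):

```python
# Down-then-up resizing destroys detail: the model must learn to restore it.
def downsample2x(img):  # average each 2x2 block
    return [[(img[2*i][2*j] + img[2*i][2*j+1] +
              img[2*i+1][2*j] + img[2*i+1][2*j+1]) / 4
             for j in range(len(img[0]) // 2)]
            for i in range(len(img) // 2)]

def upsample2x(img):  # nearest-neighbor: repeat each pixel 2x2
    return [[img[i // 2][j // 2] for j in range(2 * len(img[0]))]
            for i in range(2 * len(img))]

hi = [[0.0, 1.0], [1.0, 0.0]]              # clean target (transform_y role)
lo_then_up = upsample2x(downsample2x(hi))  # degraded input (transform_x role)
print(lo_then_up)  # [[0.5, 0.5], [0.5, 0.5]] -- the checkerboard is gone
```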
(new file, name hidden in this view) (+19)

@@ -0,0 +1,19 @@
+_target_: nimrod.image.datasets.ImageDataModule
+name: 'zh-plus/tiny-imagenet'
+data_dir: "../data/image"
+exclude_grey_scale: true
+train_val_split: [0.8, 0.2]
+batch_size: 64
+num_workers: 0
+pin_memory: True
+persistent_workers: False
+transforms:
+  _target_: torchvision.transforms.Compose
+  transforms:
+    - _target_: torchvision.transforms.ToTensor
+    - _target_: torchvision.transforms.Normalize
+      mean: [0.4822, 0.4495, 0.3985]
+      std: [0.2771, 0.2690, 0.2826]
+    # - _target_: torchvision.transforms.Resize
+    #   size: [32,32]
+
19+
(new file, name hidden in this view) (+40)

@@ -0,0 +1,40 @@
+# @package _global_
+
+# python train.py experiment=mnist_conv
+
+defaults:
+  - override /data: image/fashion_mnist
+  - override /model: image/convnetx
+  - override /trainer: default
+  - override /logger: wandb
+  - _self_
+
+
+project: "FASHION-MNIST-Classifier"
+tags: ["n_features:${model.nnet.n_features}", "bs:${data.batch_size}", "dev"]
+train: True
+tune_batch_size: False
+tune_lr: True
+plot_lr_tuning: False
+test: True
+ckpt_path: null
+
+
+data:
+  batch_size: 1024
+  num_workers: 0
+  pin_memory: true
+
+model:
+  nnet:
+    n_features: [1, 8, 16, 32, 64, 32]
+
+trainer:
+  max_epochs: 5
+  check_val_every_n_epoch: 1
+  log_every_n_steps: 1
+
+logger:
+  wandb:
+    tags: ${tags}
+    project: ${project}
(new file, name hidden in this view) (+44)

@@ -0,0 +1,44 @@
+# @package _global_
+
+# python train.py experiment=mnist_mlp
+
+defaults:
+  - override /data: image/fashion_mnist
+  - override /model: image/mlpx
+  - override /trainer: default
+  - override /logger: wandb
+  # - override /scheduler: one_cycle_lr
+  - _self_
+
+
+
+
+project: "FASHION-MNIST-Classifier"
+tags: ["n_h:${model.nnet.n_h}", "dropout:${model.nnet.dropout}", "dev"]
+train: True
+tune_batch_size: False
+tune_lr: True
+plot_lr_tuning: False
+test: True
+ckpt_path: null
+
+data:
+  batch_size: 2048
+  num_workers: 0
+  pin_memory: true
+
+model:
+  nnet:
+    n_in: 1024
+    n_h: 512
+    dropout: 0.1
+
+trainer:
+  max_epochs: 5
+  check_val_every_n_epoch: 1
+  log_every_n_steps: 1
+
+logger:
+  wandb:
+    tags: ${tags}
+    project: ${project}

config/experiment/mnist_conv.yaml (+41)

@@ -0,0 +1,41 @@
+# @package _global_
+
+# python train.py experiment=mnist_conv
+
+defaults:
+  - override /data: image/mnist
+  - override /model: image/convnetx
+  - override /trainer: default
+  - override /logger: wandb
+  - _self_
+
+tags: ["mnist", "mlp", "dev"]
+project: "mnist-convnetx"
+name: "test_feats" # name of run
+seed: 42
+train: True
+tune_lr: False
+test: False
+
+
+data:
+  batch_size: 1024
+  num_workers: 0
+  pin_memory: true
+  data_dir: ${data_dir}
+
+model:
+  nnet:
+    n_features: [1, 8, 16, 32, 16]
+
+trainer:
+  max_epochs: 2
+  check_val_every_n_epoch: 1
+  log_every_n_steps: 1
+
+logger:
+  wandb:
+    tags: ${tags}
+    group: "mnist"
+    project: ${project}
+    name: ${name} #bs:${data.batch_size}-lr:${model.optimizer.lr}
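Throughout these experiment files, values like `tags: ${tags}` and `n_features:${model.nnet.n_features}` are OmegaConf interpolations: `${path.to.key}` is replaced by the value found at that dotted path in the composed config. A regex sketch of the resolution (illustration only, not OmegaConf; `resolve` is a hypothetical helper, the keys come from `mnist_conv.yaml` above):

```python
# Sketch of OmegaConf-style ${path.to.key} interpolation used in the
# experiment tags above (OmegaConf does the real work).
import re
from functools import reduce

cfg = {
    "project": "mnist-convnetx",
    "model": {"nnet": {"n_features": [1, 8, 16, 32, 16]}},
    "data": {"batch_size": 1024},
}

def resolve(template: str, cfg: dict) -> str:
    def lookup(m):
        node = reduce(lambda d, k: d[k], m.group(1).split("."), cfg)
        return str(node)
    return re.sub(r"\$\{([^}]+)\}", lookup, template)

print(resolve("n_features:${model.nnet.n_features}", cfg))
# n_features:[1, 8, 16, 32, 16]
print(resolve("bs:${data.batch_size}", cfg))  # bs:1024
```

This is why the wandb run picks up the model and data settings in its tags without repeating them by hand.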
