Implement multitask training #25

Merged
33 commits merged on Apr 13, 2020
Changes from 16 commits

33 commits:
a3323e5 chg: Use lists for y case to allow multiple labels (ivyleavedtoadflax, Mar 24, 2020)
0de34d5 chg: Fix tests and evaluate method (ivyleavedtoadflax, Mar 24, 2020)
af73eb4 chg: revert OOV and NUM tokens (ivyleavedtoadflax, Mar 25, 2020)
a989b50 fix: output_layers count (ivyleavedtoadflax, Mar 25, 2020)
711354f chg: Save predictions for multiple tasks (ivyleavedtoadflax, Mar 25, 2020)
57e7588 chg: Use the max_len sent at init (ivyleavedtoadflax, Mar 25, 2020)
322792e chg: Combine artefacts into indices.pickle (ivyleavedtoadflax, Mar 25, 2020)
1f8da40 fixup indices (ivyleavedtoadflax, Mar 25, 2020)
a65d260 chg: Update CHANGELOG (ivyleavedtoadflax, Mar 26, 2020)
3d1a055 new: Bump version to 2020.3.3 (ivyleavedtoadflax, Mar 26, 2020)
966cba8 chg: fix: missing logging statements (ivyleavedtoadflax, Mar 26, 2020)
c33dd75 chg: Solve issue with quotes in tsv files (ivyleavedtoadflax, Mar 27, 2020)
b61de98 chg: Fix logging messages in split and parse (ivyleavedtoadflax, Mar 27, 2020)
182a9cb new: Add multitask config (ivyleavedtoadflax, Mar 30, 2020)
3e48684 new: Update parser and splitter model (ivyleavedtoadflax, Mar 30, 2020)
f392f9f new: Add multitask split_parse command (ivyleavedtoadflax, Mar 31, 2020)
795679d Add multitask 3.18 tsvs to datasets in Makefile (lizgzil, Apr 2, 2020)
3e1b20b chg: Use lower level weight loading (ivyleavedtoadflax, Apr 5, 2020)
20afa75 chg: Update predict function for multitask scenario (ivyleavedtoadflax, Apr 5, 2020)
77971e5 chg: Update split_parse to deal with multiple predictions (ivyleavedtoadflax, Apr 5, 2020)
b33c2b2 new: Handle no config error (ivyleavedtoadflax, Apr 12, 2020)
5b17587 new: Add logic to handle single task case (ivyleavedtoadflax, Apr 12, 2020)
fdfe5d3 chg: Update to 2020.3.19 multitask model (ivyleavedtoadflax, Apr 12, 2020)
1be3864 chg: Update datasets recipe (ivyleavedtoadflax, Apr 12, 2020)
88f1a24 Merge pull request #30 from wellcometrust/add-multitask-makefile (ivyleavedtoadflax, Apr 12, 2020)
0edf65f Merge branch 'feature/ivyleavedtoadflax/multitask_2' of github.com:we… (ivyleavedtoadflax, Apr 12, 2020)
15306b4 chg: Use output labels to detect output size (ivyleavedtoadflax, Apr 12, 2020)
8e3a155 fix: failing test (ivyleavedtoadflax, Apr 12, 2020)
ad6d6bb new: Add tests for SplitParser (ivyleavedtoadflax, Apr 12, 2020)
fad8d58 chg: Update README.md (ivyleavedtoadflax, Apr 12, 2020)
6db7d8e new: Update CHANGELOG (ivyleavedtoadflax, Apr 12, 2020)
0e6658c new: Add split_parse model config to setup.py (ivyleavedtoadflax, Apr 12, 2020)
fceed1b fix: typo (ivyleavedtoadflax, Apr 13, 2020)
5 changes: 5 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,10 @@
 # Changelog

+## 2020.3.3 - Pre-release
+
+* Adds support for multitask models, as in the original Rodrigues paper
+* Combines artefacts into a single `indices.pickle` rather than the several previous pickles. Now the model just requires the embedding, `indices.pickle`, and `weights.h5`.
+
 ## 2020.3.2 - Pre-release

 * Adds parse command that can be called with `python -m deep_reference_parser parse`
2 changes: 2 additions & 0 deletions deep_reference_parser/__main__.py
@@ -12,11 +12,13 @@
 from .train import train
 from .split import split
 from .parse import parse
+from .split_parse import split_parse

 commands = {
     "split": split,
     "parse": parse,
     "train": train,
+    "split_parse": split_parse,
 }

 if len(sys.argv) == 1:
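The `commands` dict in `__main__.py` maps each sub-command name to its entry point, so `python -m deep_reference_parser split_parse` reaches the new function. A self-contained sketch of that dispatch pattern (the stub `split`/`parse` functions and the `dispatch` helper are illustrative, not the package's actual code):

```python
# Stand-in task functions; in the real package these are split, parse,
# train and split_parse imported from their own modules.
def split(argv):
    return f"split called with {argv}"


def parse(argv):
    return f"parse called with {argv}"


commands = {"split": split, "parse": parse}


def dispatch(argv):
    """Pick the sub-command named by argv[0] and hand it the remaining args."""
    if not argv or argv[0] not in commands:
        return f"usage: choose one of {sorted(commands)}"
    return commands[argv[0]](argv[1:])
```

In the package itself the chosen args come from `sys.argv`; the dict lookup is what lets a new command be added with a single import plus one entry.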
7 changes: 4 additions & 3 deletions deep_reference_parser/__version__.py
@@ -1,9 +1,10 @@
 __name__ = "deep_reference_parser"
-__version__ = "2020.3.2"
+__version__ = "2020.3.3"
 __description__ = "Deep learning model for finding and parsing references"
 __url__ = "https://github.com/wellcometrust/deep_reference_parser"
 __author__ = "Wellcome Trust DataLabs Team"
 __author_email__ = "[email protected]"
 __license__ = "MIT"
-__splitter_model_version__ = "2019.12.0_splitting"
-__parser_model_version__ = "2020.3.2_parsing"
+__splitter_model_version__ = "2020.3.6_splitting"
+__parser_model_version__ = "2020.3.8_parsing"
+__multitask_model_version__ = "2020.3.18_multitask"
9 changes: 3 additions & 6 deletions deep_reference_parser/common.py
@@ -6,7 +6,7 @@
 from urllib import parse, request

 from .logger import logger
-from .__version__ import __splitter_model_version__, __parser_model_version__
+from .__version__ import __splitter_model_version__, __parser_model_version__, __multitask_model_version__


 def get_path(path):
@@ -15,6 +15,7 @@ def get_path(path):

 SPLITTER_CFG = get_path(f"configs/{__splitter_model_version__}.ini")
 PARSER_CFG = get_path(f"configs/{__parser_model_version__}.ini")
+MULTITASK_CFG = get_path(f"configs/{__multitask_model_version__}.ini")
@@ -47,13 +48,9 @@ def download_model_artefacts(model_dir, s3_slug, artefacts=None):
     if not artefacts:

         artefacts = [
-            "char2ind.pickle",
-            "ind2label.pickle",
-            "ind2word.pickle",
-            "label2ind.pickle",
+            "indices.pickle",
-            "maxes.pickle",
             "weights.h5",
-            "word2ind.pickle",
         ]

     for artefact in artefacts:
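`download_model_artefacts` now defaults to the slimmed-down artefact list. A sketch of how the per-artefact download URLs could be assembled from the `s3_slug` used in the configs (the `artefact_urls` helper is hypothetical; the real function also fetches each file rather than just building URLs):

```python
from urllib import parse


def artefact_urls(model_dir, s3_slug, artefacts=None):
    """Build the download URL for each artefact a model needs."""
    if not artefacts:
        # Post-consolidation default: indices.pickle replaces the
        # several separate pickles that were listed before.
        artefacts = ["indices.pickle", "weights.h5"]
    return [parse.urljoin(s3_slug, f"{model_dir}/{a}") for a in artefacts]
```

Because the slug ends in a trailing slash and the artefact paths are relative, `urljoin` appends rather than replaces the path.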
35 changes: 0 additions & 35 deletions deep_reference_parser/configs/2019.12.0_splitting.ini

This file was deleted.

41 changes: 41 additions & 0 deletions deep_reference_parser/configs/2020.3.18_multitask.ini
@@ -0,0 +1,41 @@
[DEFAULT]
version = 2020.3.18_multitask
description = Multitask model trained on a combination of Reach and Rodrigues
data. The Rodrigues data have been concatenated into a single continuous
document and then cut into sequences of length=line_length, so that the
Rodrigues data and Reach data have the same lengths without need for much
padding or truncating.
deep_reference_parser_version = b61de984f95be36445287c40af4e65a403637692

[data]
# Note that test and valid proportion are only used for data creation steps,
# not when running the train command.
test_proportion = 0.25
valid_proportion = 0.25
data_path = data/
respect_line_endings = 0
respect_doc_endings = 1
line_limit = 150
policy_train = data/multitask/2020.3.18_multitask_train.tsv
policy_test = data/multitask/2020.3.18_multitask_test.tsv
policy_valid = data/multitask/2020.3.18_multitask_valid.tsv
s3_slug = https://datalabs-public.s3.eu-west-2.amazonaws.com/deep_reference_parser/

[build]
output_path = models/multitask/2020.3.18_multitask/
output = crf
word_embeddings = embeddings/2020.1.1-wellcome-embeddings-300.txt
pretrained_embedding = 0
dropout = 0.5
lstm_hidden = 400
word_embedding_size = 300
char_embedding_size = 100
char_embedding_type = BILSTM
optimizer = adam

[train]
epochs = 60
batch_size = 100
early_stopping_patience = 5
metric = val_f1

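Configs like `2020.3.18_multitask.ini` above are standard `.ini` files, so typed hyperparameters can be read with Python's `configparser`. A minimal sketch using a trimmed, illustrative copy of the `[DEFAULT]`, `[build]`, and `[train]` sections:

```python
from configparser import ConfigParser

# Trimmed, illustrative excerpt of the multitask config shown above.
cfg_text = """
[DEFAULT]
version = 2020.3.18_multitask

[build]
output = crf
lstm_hidden = 400
dropout = 0.5
optimizer = adam

[train]
epochs = 60
batch_size = 100
"""

cfg = ConfigParser()
cfg.read_string(cfg_text)

# Typed getters convert the raw strings to the types training code expects.
lstm_hidden = cfg.getint("build", "lstm_hidden")
dropout = cfg.getfloat("build", "dropout")
epochs = cfg.getint("train", "epochs")
```

Note that keys under `[DEFAULT]` (like `version`) are visible from every other section, which is why the version string only needs to appear once per config.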
39 changes: 0 additions & 39 deletions deep_reference_parser/configs/2020.3.2_parsing.ini

This file was deleted.

39 changes: 39 additions & 0 deletions deep_reference_parser/configs/2020.3.6_splitting.ini
@@ -0,0 +1,39 @@
[DEFAULT]
version = 2020.3.6_splitting
description = Splitting model trained on a combination of Reach and Rodrigues
data. The Rodrigues data have been concatenated into a single continuous
document and then cut into sequences of length=line_length, so that the
Rodrigues data and Reach data have the same lengths without need for much
padding or truncating.
deep_reference_parser_version = e489f7efa31072b95175be8f728f1fcf03a4cabb

[data]
test_proportion = 0.25
valid_proportion = 0.25
data_path = data/
respect_line_endings = 0
respect_doc_endings = 1
line_limit = 250
policy_train = data/splitting/2020.3.6_splitting_train.tsv
policy_test = data/splitting/2020.3.6_splitting_test.tsv
policy_valid = data/splitting/2020.3.6_splitting_valid.tsv
s3_slug = https://datalabs-public.s3.eu-west-2.amazonaws.com/deep_reference_parser/

[build]
output_path = models/splitting/2020.3.6_splitting/
output = crf
word_embeddings = embeddings/2020.1.1-wellcome-embeddings-300.txt
pretrained_embedding = 0
dropout = 0.5
lstm_hidden = 400
word_embedding_size = 300
char_embedding_size = 100
char_embedding_type = BILSTM
optimizer = rmsprop

[train]
epochs = 30
batch_size = 100
early_stopping_patience = 5
metric = val_f1

38 changes: 38 additions & 0 deletions deep_reference_parser/configs/2020.3.8_parsing.ini
@@ -0,0 +1,38 @@
[DEFAULT]
version = 2020.3.8_parsing
description = Parsing model trained on a combination of Reach and Rodrigues
data. The Rodrigues data have been concatenated into a single continuous
document and then cut into sequences of length=line_length, so that the
Rodrigues data and Reach data have the same lengths without need for much
padding or truncating.
deep_reference_parser_version = e489f7efa31072b95175be8f728f1fcf03a4cabb

[data]
test_proportion = 0.25
valid_proportion = 0.25
data_path = data/
respect_line_endings = 0
respect_doc_endings = 1
line_limit = 100
policy_train = data/parsing/2020.3.8_parsing_train.tsv
policy_test = data/parsing/2020.3.8_parsing_test.tsv
policy_valid = data/parsing/2020.3.8_parsing_valid.tsv
s3_slug = https://datalabs-public.s3.eu-west-2.amazonaws.com/deep_reference_parser/

[build]
output_path = models/parsing/2020.3.8_parsing/
output = crf
word_embeddings = embeddings/2020.1.1-wellcome-embeddings-300.txt
pretrained_embedding = 0
dropout = 0.5
lstm_hidden = 400
word_embedding_size = 300
char_embedding_size = 100
char_embedding_type = BILSTM
optimizer = rmsprop

[train]
epochs = 30
batch_size = 100
early_stopping_patience = 5
metric = val_f1