Adding LSTM algorithm from scratch in neural network algorithm sections #12082
base: master
Conversation
🔗 Relevant Links
Repository:
Python:
Automated review generated by algorithms-keeper. If there's any problem regarding this review, please open an issue about it.

algorithms-keeper commands and options

algorithms-keeper actions can be triggered by commenting on this PR:
@algorithms-keeper review: to trigger the checks for only added pull request files
@algorithms-keeper review-all: to trigger the checks for all the pull request files, including the modified files. As we cannot post review comments on lines not part of the diff, this command will post all the messages in one comment.
NOTE: Commands are in beta and so this feature is restricted only to a member or owner of the organization.
neural_network/lstm.py
Outdated
##### Testing #####
# lstm.test()

# testing can be done by uncommenting the above lines of code.

An error occurred while parsing the file: neural_network/lstm.py
Traceback (most recent call last):
  File "/opt/render/project/src/algorithms_keeper/parser/python_parser.py", line 146, in parse
    reports = lint_file(
              ^^^^^^^^^^
libcst._exceptions.ParserSyntaxError: Syntax Error @ 317:1.
parser error: error at 317:62: expected INDENT
# testing can be done by uncommenting the above lines of code.
^
for more information, see https://pre-commit.ci
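The "expected INDENT" failure usually means a block header (for example an if __name__ == "__main__": line) whose body contains only comments, which matches the commented-out testing section above. A minimal sketch of one possible fix, assuming that is the cause here; the lstm instance below is hypothetical:

if __name__ == "__main__":
    # A block that contains nothing but comments is a syntax error ("expected INDENT"),
    # so keep at least one real statement in the suite.
    pass
    # lstm = LSTM()  # hypothetical instance; uncomment once the class is defined
    # lstm.train()
    # lstm.test()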
neural_network/lstm.py
Outdated
self.char_to_idx = {c: i for i, c in enumerate(self.chars)}
self.idx_to_char = {i: c for i, c in enumerate(self.chars)}

self.train_X, self.train_y = self.data[:-1], self.data[1:]

Variable and function names should follow the snake_case naming convention. Please update the following name accordingly: train_X

self.initialize_weights()

##### Helper Functions #####
def one_hot_encode(self, char: str) -> np.ndarray:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function one_hot_encode

vector[self.char_to_idx[char]] = 1
return vector

def initialize_weights(self) -> None:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function initialize_weights

self.wy = self.init_weights(self.hidden_dim, self.char_size)
self.by = np.zeros((self.char_size, 1))

def init_weights(self, input_dim: int, output_dim: int) -> np.ndarray:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function init_weights
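The requested doctests can be quite small. A minimal sketch of the idea for one_hot_encode, written here as a standalone function with an explicit character-to-index mapping rather than the PR's self.char_to_idx and self.char_size; the standalone signature is an assumption for illustration:

import numpy as np

def one_hot_encode(char: str, char_to_idx: dict[str, int]) -> np.ndarray:
    """
    Return a column one-hot vector for the given character.

    >>> vec = one_hot_encode("b", {"a": 0, "b": 1, "c": 2})
    >>> vec.shape
    (3, 1)
    >>> int(vec[1, 0])
    1
    """
    vector = np.zeros((len(char_to_idx), 1))
    vector[char_to_idx[char]] = 1
    return vector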
neural_network/lstm.py
Outdated
np.sqrt(6 / (input_dim + output_dim))

##### Activation Functions #####
def sigmoid(self, x: np.ndarray, derivative: bool = False) -> np.ndarray:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function sigmoid
Please provide descriptive name for the parameter: x
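A sketch of what a doctested sigmoid with a descriptive parameter name could look like. The derivative branch assumes the value passed in has already been through the sigmoid, which is the convention the later diff context in this PR appears to use:

import numpy as np

def sigmoid(input_array: np.ndarray, derivative: bool = False) -> np.ndarray:
    """
    Sigmoid activation, or its derivative with respect to an already-activated value.

    >>> float(sigmoid(np.array([0.0]))[0])
    0.5
    >>> float(sigmoid(np.array([0.5]), derivative=True)[0])
    0.25
    """
    if derivative:
        return input_array * (1 - input_array)
    return 1 / (1 + np.exp(-input_array))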
neural_network/lstm.py
Outdated
self.input_gates = {}
self.outputs = {}

def forward(self, inputs: list) -> list:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function forward
neural_network/lstm.py
Outdated
return outputs

def backward(self, errors: list, inputs: list) -> None:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function backward

[d_bf, d_bi, d_bc, d_bo, d_by]):
param -= self.lr * grad

def train(self) -> None:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function train
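The diff context above ends with a zipped parameter/gradient loop. A tiny self-contained sketch of that update style; the shapes, values and learning rate are illustrative, not taken from the PR:

import numpy as np

learning_rate = 0.05
params = [np.ones((2, 2)), np.zeros((2, 1))]            # e.g. one weight matrix and one bias
grads = [np.full((2, 2), 0.1), np.full((2, 1), -0.2)]   # matching gradients

# In-place gradient-descent step, mirroring the zip-based loop in the diff.
for param, grad in zip(params, grads):
    param -= learning_rate * grad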
neural_network/lstm.py
Outdated
# Backward pass and weight updates
self.backward(errors, inputs)

def predict(self, inputs: list) -> str:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function predict
neural_network/lstm.py
Outdated
output = self.forward(inputs)[-1]
return self.idx_to_char[np.argmax(self.softmax(output))]

def test(self) -> None:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function test
for more information, see https://pre-commit.ci
neural_network/lstm.py
Outdated
# lstm.train()

# # Test the LSTM network and compute accuracy
# lstm.test()

An error occurred while parsing the file: neural_network/lstm.py
Traceback (most recent call last):
  File "/opt/render/project/src/algorithms_keeper/parser/python_parser.py", line 146, in parse
    reports = lint_file(
              ^^^^^^^^^^
libcst._exceptions.ParserSyntaxError: Syntax Error @ 358:1.
parser error: error at 359:0: expected INDENT
# lstm.test()
^
neural_network/lstm.py
Outdated
self.char_to_idx = {c: i for i, c in enumerate(self.chars)}
self.idx_to_char = {i: c for i, c in enumerate(self.chars)}

self.train_X, self.train_y = self.data[:-1], self.data[1:]

Variable and function names should follow the snake_case naming convention. Please update the following name accordingly: train_X

self.initialize_weights()

##### Helper Functions #####
def one_hot_encode(self, char: str) -> np.ndarray:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function one_hot_encode

vector[self.char_to_idx[char]] = 1
return vector

def initialize_weights(self) -> None:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function initialize_weights

self.wy = self.init_weights(self.hidden_dim, self.char_size)
self.by = np.zeros((self.char_size, 1))

def init_weights(self, input_dim: int, output_dim: int) -> np.ndarray:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function init_weights
neural_network/lstm.py
Outdated
)

##### Activation Functions #####
def sigmoid(self, x: np.ndarray, derivative: bool = False) -> np.ndarray:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function sigmoid
Please provide descriptive name for the parameter: x
neural_network/lstm.py
Outdated
self.input_gates = {}
self.outputs = {}

def forward(self, inputs: list) -> list:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function forward
neural_network/lstm.py
Outdated
return outputs

def backward(self, errors: list, inputs: list) -> None:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function backward

):
param -= self.lr * grad

def train(self) -> None:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function train
neural_network/lstm.py
Outdated
# Backward pass and weight updates
self.backward(errors, inputs)

def predict(self, inputs: list) -> str:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function predict
neural_network/lstm.py
Outdated
output = self.forward(inputs)[-1]
return self.idx_to_char[np.argmax(self.softmax(output))]

def test(self) -> None:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function test
neural_network/lstm.py
Outdated
self.char_to_idx = {c: i for i, c in enumerate(self.chars)}
self.idx_to_char = dict(enumerate(self.chars))

self.train_X, self.train_y = self.data[:-1], self.data[1:]

Variable and function names should follow the snake_case naming convention. Please update the following name accordingly: train_X

self.initialize_weights()

##### Helper Functions #####
def one_hot_encode(self, char: str) -> np.ndarray:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function one_hot_encode

vector[self.char_to_idx[char]] = 1
return vector

def initialize_weights(self) -> None:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function initialize_weights
neural_network/lstm.py
Outdated
self.wy = self.init_weights(self.hidden_dim, self.char_size, rng)
self.by = np.zeros((self.char_size, 1))

def init_weights(

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function init_weights
neural_network/lstm.py
Outdated
)

##### Activation Functions #####
def sigmoid(self, x: np.ndarray, derivative: bool = False) -> np.ndarray:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function sigmoid
Please provide descriptive name for the parameter: x
neural_network/lstm.py
Outdated
return exp_x / exp_x.sum(axis=0)

##### LSTM Network Methods #####
def reset(self) -> None:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function reset
neural_network/lstm.py
Outdated
self.input_gates = {}
self.outputs = {}

def forward(self, inputs: list) -> list:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function forward
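For readers following the review, a minimal sketch of what one time step of the forward pass typically computes in a from-scratch LSTM like this one. All names are illustrative, and the PR's forward may differ in details such as how the hidden state and input are concatenated:

import numpy as np

def sigmoid(values: np.ndarray) -> np.ndarray:
    return 1 / (1 + np.exp(-values))

def lstm_step(x_t, h_prev, c_prev, wf, bf, wi, bi, wc, bc, wo, bo):
    """One LSTM time step on the concatenated input [h_prev; x_t]."""
    z = np.vstack((h_prev, x_t))   # concatenate previous hidden state and current input
    f = sigmoid(wf @ z + bf)       # forget gate
    i = sigmoid(wi @ z + bi)       # input gate
    c_hat = np.tanh(wc @ z + bc)   # candidate cell state
    c = f * c_prev + i * c_hat     # new cell state
    o = sigmoid(wo @ z + bo)       # output gate
    h = o * np.tanh(c)             # new hidden state
    return h, c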
neural_network/lstm.py
Outdated
return outputs

def backward(self, errors: list, inputs: list) -> None:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function backward

self.wy += d_wy * self.lr
self.by += d_by * self.lr

def train(self) -> None:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function train
neural_network/lstm.py
Outdated
self.backward(errors, self.concat_inputs)

def test(self) -> None:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function test
neural_network/lstm.py
Outdated
self.char_to_idx = {c: i for i, c in enumerate(self.chars)}
self.idx_to_char = dict(enumerate(self.chars))

self.train_X, self.train_y = self.data[:-1], self.data[1:]

Variable and function names should follow the snake_case naming convention. Please update the following name accordingly: train_X

self.initialize_weights()

##### Helper Functions #####
def one_hot_encode(self, char: str) -> np.ndarray:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function one_hot_encode

vector[self.char_to_idx[char]] = 1
return vector

def initialize_weights(self) -> None:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function initialize_weights
neural_network/lstm.py
Outdated
self.wy = self.init_weights(self.hidden_dim, self.char_size)
self.by = np.zeros((self.char_size, 1))

def init_weights(

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function init_weights
neural_network/lstm.py
Outdated
)

##### Activation Functions #####
def sigmoid(self, x: np.ndarray, derivative: bool = False) -> np.ndarray:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function sigmoid
Please provide descriptive name for the parameter: x
neural_network/lstm.py
Outdated
return exp_x / exp_x.sum(axis=0)

##### LSTM Network Methods #####
def reset(self) -> None:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function reset
neural_network/lstm.py
Outdated
self.input_gates = {}
self.outputs = {}

def forward(self, inputs: list) -> list:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function forward
neural_network/lstm.py
Outdated
return outputs

def backward(self, errors: list, inputs: list) -> None:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function backward

self.wy += d_wy * self.lr
self.by += d_by * self.lr

def train(self) -> None:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function train
neural_network/lstm.py
Outdated
self.backward(errors, self.concat_inputs)

def test(self) -> None:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function test
self.initialize_weights()

def one_hot_encode(self, char: str) -> np.ndarray:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function one_hot_encode

vector[self.char_to_idx[char]] = 1
return vector

def initialize_weights(self) -> None:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function initialize_weights

self.wy: np.ndarray = self.init_weights(self.hidden_dim, self.char_size)
self.by: np.ndarray = np.zeros((self.char_size, 1))

def init_weights(self, input_dim: int, output_dim: int) -> np.ndarray:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function init_weights
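A possible doctested version of the Xavier/Glorot-style init_weights the diff hints at; the (output_dim, input_dim) shape and the rng argument are assumptions for illustration:

import numpy as np

def init_weights(
    input_dim: int, output_dim: int, rng: np.random.Generator | None = None
) -> np.ndarray:
    """
    Uniform Xavier/Glorot initialisation in [-limit, limit],
    where limit = sqrt(6 / (input_dim + output_dim)).

    >>> weights = init_weights(4, 3, np.random.default_rng(0))
    >>> weights.shape
    (3, 4)
    >>> bool(np.all(np.abs(weights) <= np.sqrt(6 / (4 + 3))))
    True
    """
    if rng is None:
        rng = np.random.default_rng()
    limit = np.sqrt(6 / (input_dim + output_dim))
    return rng.uniform(-limit, limit, size=(output_dim, input_dim))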
neural_network/lstm.py
Outdated
6 / (input_dim + output_dim)
)

def sigmoid(self, x: np.ndarray, derivative: bool = False) -> np.ndarray:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function sigmoid
Please provide descriptive name for the parameter: x
neural_network/lstm.py
Outdated
return x * (1 - x)
return 1 / (1 + np.exp(-x))

def tanh(self, x: np.ndarray, derivative: bool = False) -> np.ndarray:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function tanh
Please provide descriptive name for the parameter: x
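Likewise, a sketch of a doctested tanh with a descriptive parameter name; as with sigmoid, the derivative branch assumes the value has already been passed through tanh (the 1 - x**2 form seen elsewhere in the diff):

import numpy as np

def tanh(input_array: np.ndarray, derivative: bool = False) -> np.ndarray:
    """
    Tanh activation, or its derivative with respect to an already-activated value.

    >>> float(tanh(np.array([0.0]))[0])
    0.0
    >>> float(tanh(np.array([0.5]), derivative=True)[0])
    0.75
    """
    if derivative:
        return 1 - input_array**2
    return np.tanh(input_array)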
neural_network/lstm.py
Outdated
exp_x = np.exp(x - np.max(x))
return exp_x / exp_x.sum(axis=0)

def reset(self) -> None:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function reset
neural_network/lstm.py
Outdated
self.input_gates = {}
self.outputs = {}

def forward(self, inputs: list[np.ndarray]) -> list[np.ndarray]:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function forward
neural_network/lstm.py
Outdated
return outputs

def backward(self, errors: list[np.ndarray], inputs: list[np.ndarray]) -> None:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function backward

self.wy += d_wy * self.lr
self.by += d_by * self.lr

def train(self) -> None:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function train
neural_network/lstm.py
Outdated
self.backward(errors, inputs)

def test(self) -> None:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function test
self.initialize_weights()

def one_hot_encode(self, char: str) -> np.ndarray:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function one_hot_encode

vector[self.char_to_idx[char]] = 1
return vector

def initialize_weights(self) -> None:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function initialize_weights

self.wy: np.ndarray = self.init_weights(self.hidden_dim, self.char_size)
self.by: np.ndarray = np.zeros((self.char_size, 1))

def init_weights(self, input_dim: int, output_dim: int) -> np.ndarray:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function init_weights
neural_network/lstm.py
Outdated
6 / (input_dim + output_dim)
)

def sigmoid(self, x: np.ndarray, derivative: bool = False) -> np.ndarray:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function sigmoid
Please provide descriptive name for the parameter: x
neural_network/lstm.py
Outdated
return x * (1 - x)
return 1 / (1 + np.exp(-x))

def tanh(self, x: np.ndarray, derivative: bool = False) -> np.ndarray:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function tanh
Please provide descriptive name for the parameter: x
neural_network/lstm.py
Outdated
exp_x = np.exp(x - np.max(x))
return exp_x / exp_x.sum(axis=0)

def reset(self) -> None:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function reset
neural_network/lstm.py
Outdated
self.input_gates = {}
self.outputs = {}

def forward(self, inputs: list[np.ndarray]) -> list[np.ndarray]:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function forward
neural_network/lstm.py
Outdated
return outputs

def backward(self, errors: list[np.ndarray], inputs: list[np.ndarray]) -> None:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function backward

self.wy += d_wy * self.lr
self.by += d_by * self.lr

def train(self) -> None:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function train
neural_network/lstm.py
Outdated
self.backward(errors, inputs)

def test(self) -> None:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function test
self.initialize_weights()

def one_hot_encode(self, char: str) -> np.ndarray:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function one_hot_encode

vector[self.char_to_idx[char]] = 1
return vector

def initialize_weights(self) -> None:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function initialize_weights

self.wy: np.ndarray = self.init_weights(self.hidden_dim, self.char_size)
self.by: np.ndarray = np.zeros((self.char_size, 1))

def init_weights(self, input_dim: int, output_dim: int) -> np.ndarray:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function init_weights
neural_network/lstm.py
Outdated
6 / (input_dim + output_dim)
)

def sigmoid(self, x: np.ndarray, derivative: bool = False) -> np.ndarray:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function sigmoid
Please provide descriptive name for the parameter: x
neural_network/lstm.py
Outdated
return x * (1 - x)
return 1 / (1 + np.exp(-x))

def tanh(self, x: np.ndarray, derivative: bool = False) -> np.ndarray:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function tanh
Please provide descriptive name for the parameter: x
neural_network/lstm.py
Outdated
exp_x = np.exp(x - np.max(x))
return exp_x / exp_x.sum(axis=0)

def reset(self) -> None:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function reset
neural_network/lstm.py
Outdated
self.input_gates = {}
self.outputs = {}

def forward(self, inputs: list[np.ndarray]) -> list[np.ndarray]:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function forward
neural_network/lstm.py
Outdated
return outputs

def backward(self, errors: list[np.ndarray], inputs: list[np.ndarray]) -> None:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function backward

self.wy += d_wy * self.lr
self.by += d_by * self.lr

def train(self) -> None:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function train
neural_network/lstm.py
Outdated
self.backward(errors, inputs)

def test(self) -> None:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function test
If it gets accepted, please give me the Hacktoberfest accepted tag. Thank you!
self.initialize_weights()

def one_hot_encode(self, char: str) -> np.ndarray:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function one_hot_encode

vector[self.char_to_index[char]] = 1
return vector

def initialize_weights(self) -> None:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function initialize_weights

)
self.output_layer_bias: np.ndarray = np.zeros((self.vocabulary_size, 1))

def init_weights(self, input_dim: int, output_dim: int) -> np.ndarray:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function init_weights
neural_network/lstm.py
Outdated
6 / (input_dim + output_dim)
)

def sigmoid(self, x: np.ndarray, derivative: bool = False) -> np.ndarray:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function sigmoid
Please provide descriptive name for the parameter: x
neural_network/lstm.py
Outdated
return x * (1 - x)
return 1 / (1 + np.exp(-x))

def tanh(self, x: np.ndarray, derivative: bool = False) -> np.ndarray:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function tanh
Please provide descriptive name for the parameter: x
neural_network/lstm.py
Outdated
return 1 - x**2
return np.tanh(x)

def softmax(self, x: np.ndarray) -> np.ndarray:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function softmax
Please provide descriptive name for the parameter: x
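A sketch of the numerically stable softmax with a doctest and a descriptive parameter name, shifting by the maximum before exponentiating as the diff context below does:

import numpy as np

def softmax(input_array: np.ndarray) -> np.ndarray:
    """
    Column-wise softmax, stabilised by subtracting the maximum value.

    >>> out = softmax(np.array([[1.0], [2.0], [3.0]]))
    >>> bool(np.isclose(out.sum(), 1.0))
    True
    >>> bool(np.all(out > 0))
    True
    """
    shifted = np.exp(input_array - np.max(input_array))
    return shifted / shifted.sum(axis=0)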
exp_x = np.exp(x - np.max(x))
return exp_x / exp_x.sum(axis=0)

def reset_network_state(self) -> None:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function reset_network_state
self.output_gate_activations = {}
self.network_outputs = {}

def forward_pass(self, inputs: list[np.ndarray]) -> list[np.ndarray]:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function forward_pass
return outputs

def backward_pass(self, errors: list[np.ndarray], inputs: list[np.ndarray]) -> None:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function backward_pass
neural_network/lstm.py
Outdated
return output

def test_lstm_workflow():

Please provide return type hint for the function: test_lstm_workflow. If the function does not return a value, please provide the type hint as: def function() -> None:
self.initialize_weights()

def one_hot_encode(self, char: str) -> np.ndarray:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function one_hot_encode

vector[self.char_to_index[char]] = 1
return vector

def initialize_weights(self) -> None:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function initialize_weights

)
self.output_layer_bias: np.ndarray = np.zeros((self.vocabulary_size, 1))

def init_weights(self, input_dim: int, output_dim: int) -> np.ndarray:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function init_weights
neural_network/lstm.py
Outdated
6 / (input_dim + output_dim)
)

def sigmoid(self, x: np.ndarray, derivative: bool = False) -> np.ndarray:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function sigmoid
Please provide descriptive name for the parameter: x
neural_network/lstm.py
Outdated
return x * (1 - x)
return 1 / (1 + np.exp(-x))

def tanh(self, x: np.ndarray, derivative: bool = False) -> np.ndarray:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function tanh
Please provide descriptive name for the parameter: x
exp_x = np.exp(x - np.max(x))
return exp_x / exp_x.sum(axis=0)

def reset_network_state(self) -> None:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function reset_network_state
self.output_gate_activations = {}
self.network_outputs = {}

def forward_pass(self, inputs: list[np.ndarray]) -> list[np.ndarray]:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function forward_pass
return outputs

def backward_pass(self, errors: list[np.ndarray], inputs: list[np.ndarray]) -> None:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function backward_pass
self.output_layer_weights += d_output_layer_weights * self.learning_rate
self.output_layer_bias += d_output_layer_bias * self.learning_rate

def train(self) -> None:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function train
neural_network/lstm.py
Outdated
self.backward_pass(errors, inputs)

def test(self):

Please provide return type hint for the function: test. If the function does not return a value, please provide the type hint as: def function() -> None:
As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function test
)
self.output_layer_bias = np.zeros((self.vocabulary_size, 1))

def init_weights(self, input_dim: int, output_dim: int) -> np.ndarray:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function init_weights
neural_network/lstm.py
Outdated
6 / (input_dim + output_dim)
)

def sigmoid(self, x: np.ndarray, derivative: bool = False) -> np.ndarray:

Please provide descriptive name for the parameter: x
neural_network/lstm.py
Outdated
return x * (1 - x)
return 1 / (1 + np.exp(-x))

def tanh(self, x: np.ndarray, derivative: bool = False) -> np.ndarray:

Please provide descriptive name for the parameter: x
neural_network/lstm.py
Outdated
return 1 - x**2
return np.tanh(x)

def softmax(self, x: np.ndarray) -> np.ndarray:

Please provide descriptive name for the parameter: x
self.output_gate_activations = {}
self.network_outputs = {}

def forward_pass(self, inputs: list[np.ndarray]) -> list[np.ndarray]:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function forward_pass
return outputs

def backward_pass(self, errors: list[np.ndarray], inputs: list[np.ndarray]) -> None:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function backward_pass
self.output_layer_weights += d_output_layer_weights * self.learning_rate
self.output_layer_bias += d_output_layer_bias * self.learning_rate

def train(self) -> None:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function train
neural_network/lstm.py
Outdated
self.backward_pass(errors, inputs)

def test(self):

Please provide return type hint for the function: test. If the function does not return a value, please provide the type hint as: def function() -> None:
As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function test
…names in sigmoid function from x to input array
Describe your change:
Checklist: