
Adding LSTM algorithm from scratch to the neural network algorithms section #12082


Open · wants to merge 28 commits into base: master

Conversation

LEVIII007

Describe your change:

  • Add an algorithm?
  • Fix a bug or typo in an existing algorithm?
  • Add or change doctests? -- Note: Please avoid changing both code and tests in a single pull request.
  • Documentation change?

Checklist:

  • I have read CONTRIBUTING.md.
  • This pull request is all my own work -- I have not plagiarized.
  • I know that pull requests will not be merged if they fail the automated tests.
  • This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
  • All new Python files are placed inside an existing directory.
  • All filenames are in all lowercase characters with no spaces or dashes.
  • All functions and variable names follow Python naming conventions.
  • All function parameters and return values are annotated with Python type hints.
  • All functions have doctests that pass the automated testing.
  • All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
  • If this pull request resolves one or more open issues then the description above includes the issue number(s) with a closing keyword: "Fixes #ISSUE-NUMBER".

@algorithms-keeper (bot) left a comment

Click here to look at the relevant links ⬇️

🔗 Relevant Links

Repository:

Python:

Automated review generated by algorithms-keeper. If there's any problem regarding this review, please open an issue about it.

algorithms-keeper commands and options

algorithms-keeper actions can be triggered by commenting on this PR:

  • @algorithms-keeper review to trigger the checks for only added pull request files
  • @algorithms-keeper review-all to trigger the checks for all the pull request files, including the modified files. As we cannot post review comments on lines not part of the diff, this command will post all the messages in one comment.

NOTE: Commands are in beta and so this feature is restricted only to a member or owner of the organization.

##### Testing #####
# lstm.test()

# testing can be done by uncommenting the above lines of code.


An error occurred while parsing the file: neural_network/lstm.py

Traceback (most recent call last):
  File "/opt/render/project/src/algorithms_keeper/parser/python_parser.py", line 146, in parse
    reports = lint_file(
              ^^^^^^^^^^
libcst._exceptions.ParserSyntaxError: Syntax Error @ 317:1.
parser error: error at 317:62: expected INDENT

# testing can be done by uncommenting the above lines of code.
^
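For reference, "expected INDENT" usually means a compound statement (a function body, `if` block, etc.) whose body consists only of comments, which Python's grammar rejects. A minimal sketch (the function name is hypothetical) of keeping a test hook parseable instead of fully commenting it out:

```python
def run_tests() -> None:
    """Placeholder test hook kept as real, parseable code.

    >>> run_tests()
    """
    # lstm.test()  # enable once an LSTM instance is available


if __name__ == "__main__":
    import doctest

    # Running the module executes the doctest above instead of dead comments.
    doctest.testmod()
```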

@algorithms-keeper (bot) added the "awaiting reviews" (This PR is ready to be reviewed) label on Oct 15, 2024
@algorithms-keeper (bot) added the "tests are failing" (Do not merge until tests pass) label on Oct 15, 2024
@algorithms-keeper (bot) added the "require descriptive names" (This PR needs descriptive function and/or variable names) and "require tests" (Tests [doctest/unittest/pytest] are required) labels on Oct 15, 2024
@algorithms-keeper (bot) left a comment


self.char_to_idx = {c: i for i, c in enumerate(self.chars)}
self.idx_to_char = {i: c for i, c in enumerate(self.chars)}

self.train_X, self.train_y = self.data[:-1], self.data[1:]


Variable and function names should follow the snake_case naming convention. Please update the following name accordingly: train_X

self.initialize_weights()

##### Helper Functions #####
def one_hot_encode(self, char: str) -> np.ndarray:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function one_hot_encode

vector[self.char_to_idx[char]] = 1
return vector
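The requested doctest could look like the following standalone sketch of the snippet above (passing `char_to_idx` as an argument replaces the instance attribute, for illustration only):

```python
import numpy as np


def one_hot_encode(char: str, char_to_idx: dict[str, int]) -> np.ndarray:
    """Return a one-hot column vector for ``char``.

    >>> vec = one_hot_encode("b", {"a": 0, "b": 1, "c": 2})
    >>> vec.shape
    (3, 1)
    >>> int(vec[1, 0])
    1
    """
    vector = np.zeros((len(char_to_idx), 1))
    vector[char_to_idx[char]] = 1
    return vector
```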

def initialize_weights(self) -> None:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function initialize_weights

self.wy = self.init_weights(self.hidden_dim, self.char_size)
self.by = np.zeros((self.char_size, 1))

def init_weights(self, input_dim: int, output_dim: int) -> np.ndarray:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function init_weights

np.sqrt(6 / (input_dim + output_dim))
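The expression above is the Xavier/Glorot uniform limit. A self-contained sketch of how `init_weights` plausibly uses it (the seeded `rng` and the `(output_dim, input_dim)` shape are assumptions for illustration):

```python
import numpy as np


def init_weights(input_dim: int, output_dim: int) -> np.ndarray:
    """Xavier/Glorot uniform initialisation: sample from U(-limit, limit)
    with limit = sqrt(6 / (fan_in + fan_out)), which keeps activation
    variance roughly constant across layers."""
    limit = np.sqrt(6 / (input_dim + output_dim))
    rng = np.random.default_rng(seed=42)  # seeded only for reproducibility
    return rng.uniform(-limit, limit, size=(output_dim, input_dim))
```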

##### Activation Functions #####
def sigmoid(self, x: np.ndarray, derivative: bool = False) -> np.ndarray:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function sigmoid

Please provide descriptive name for the parameter: x
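A standalone sketch of `sigmoid` with the requested descriptive parameter name, assuming the common from-scratch convention that `derivative=True` receives already-activated values:

```python
import numpy as np


def sigmoid(values: np.ndarray, derivative: bool = False) -> np.ndarray:
    """Logistic sigmoid.

    With ``derivative=True`` the input is assumed to be s = sigmoid(x),
    so the derivative is computed as s * (1 - s).

    >>> float(sigmoid(np.array([0.0]))[0])
    0.5
    """
    if derivative:
        return values * (1 - values)  # d/dx sigmoid(x) = s * (1 - s)
    return 1 / (1 + np.exp(-values))
```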

self.input_gates = {}
self.outputs = {}

def forward(self, inputs: list) -> list:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function forward


return outputs

def backward(self, errors: list, inputs: list) -> None:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function backward

[d_bf, d_bi, d_bc, d_bo, d_by]):
param -= self.lr * grad
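The update above walks parameters and gradients in lockstep. A standalone sketch of that zipped in-place SGD step (array shapes and values are illustrative):

```python
import numpy as np

lr = 0.5
params = [np.ones((2, 2)), np.ones((2, 1))]
grads = [np.full((2, 2), 0.1), np.full((2, 1), 0.2)]

for param, grad in zip(params, grads):
    # In-place subtraction mutates the arrays held in `params`,
    # so the model keeps seeing the updated weights.
    param -= lr * grad
```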

def train(self) -> None:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function train

# Backward pass and weight updates
self.backward(errors, inputs)

def predict(self, inputs: list) -> str:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function predict

output = self.forward(inputs)[-1]
return self.idx_to_char[np.argmax(self.softmax(output))]
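A standalone sketch of what `predict` appears to do: softmax the final output and greedily pick the most likely character (the function and argument names are illustrative, not from the PR):

```python
import numpy as np


def predict_char(output: np.ndarray, idx_to_char: dict[int, str]) -> str:
    """Greedy decoding of a single output vector."""
    exp_out = np.exp(output - np.max(output))  # shift for numerical stability
    probs = exp_out / exp_out.sum(axis=0)      # softmax over the vocabulary
    return idx_to_char[int(np.argmax(probs))]
```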

def test(self) -> None:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function test

@algorithms-keeper (bot) removed the "require descriptive names" and "require tests" labels on Oct 15, 2024
@algorithms-keeper (bot) left a comment


# lstm.train()

# # Test the LSTM network and compute accuracy
# lstm.test()


An error occurred while parsing the file: neural_network/lstm.py

Traceback (most recent call last):
  File "/opt/render/project/src/algorithms_keeper/parser/python_parser.py", line 146, in parse
    reports = lint_file(
              ^^^^^^^^^^
libcst._exceptions.ParserSyntaxError: Syntax Error @ 358:1.
parser error: error at 359:0: expected INDENT

    # lstm.test()
^

@algorithms-keeper (bot) added the "require descriptive names" and "require tests" labels on Oct 15, 2024
@algorithms-keeper (bot) left a comment


self.char_to_idx = {c: i for i, c in enumerate(self.chars)}
self.idx_to_char = {i: c for i, c in enumerate(self.chars)}

self.train_X, self.train_y = self.data[:-1], self.data[1:]


Variable and function names should follow the snake_case naming convention. Please update the following name accordingly: train_X

self.initialize_weights()

##### Helper Functions #####
def one_hot_encode(self, char: str) -> np.ndarray:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function one_hot_encode

vector[self.char_to_idx[char]] = 1
return vector

def initialize_weights(self) -> None:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function initialize_weights

self.wy = self.init_weights(self.hidden_dim, self.char_size)
self.by = np.zeros((self.char_size, 1))

def init_weights(self, input_dim: int, output_dim: int) -> np.ndarray:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function init_weights

)

##### Activation Functions #####
def sigmoid(self, x: np.ndarray, derivative: bool = False) -> np.ndarray:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function sigmoid

Please provide descriptive name for the parameter: x

self.input_gates = {}
self.outputs = {}

def forward(self, inputs: list) -> list:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function forward


return outputs

def backward(self, errors: list, inputs: list) -> None:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function backward

):
param -= self.lr * grad

def train(self) -> None:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function train

# Backward pass and weight updates
self.backward(errors, inputs)

def predict(self, inputs: list) -> str:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function predict

output = self.forward(inputs)[-1]
return self.idx_to_char[np.argmax(self.softmax(output))]

def test(self) -> None:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function test

@algorithms-keeper (bot) left a comment


self.char_to_idx = {c: i for i, c in enumerate(self.chars)}
self.idx_to_char = dict(enumerate(self.chars))

self.train_X, self.train_y = self.data[:-1], self.data[1:]


Variable and function names should follow the snake_case naming convention. Please update the following name accordingly: train_X

self.initialize_weights()

##### Helper Functions #####
def one_hot_encode(self, char: str) -> np.ndarray:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function one_hot_encode

vector[self.char_to_idx[char]] = 1
return vector

def initialize_weights(self) -> None:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function initialize_weights

self.wy = self.init_weights(self.hidden_dim, self.char_size, rng)
self.by = np.zeros((self.char_size, 1))

def init_weights(


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function init_weights

)

##### Activation Functions #####
def sigmoid(self, x: np.ndarray, derivative: bool = False) -> np.ndarray:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function sigmoid

Please provide descriptive name for the parameter: x

return exp_x / exp_x.sum(axis=0)

##### LSTM Network Methods #####
def reset(self) -> None:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function reset

self.input_gates = {}
self.outputs = {}

def forward(self, inputs: list) -> list:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function forward


return outputs

def backward(self, errors: list, inputs: list) -> None:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function backward

self.wy += d_wy * self.lr
self.by += d_by * self.lr

def train(self) -> None:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function train


self.backward(errors, self.concat_inputs)

def test(self) -> None:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function test

@algorithms-keeper (bot) left a comment


self.char_to_idx = {c: i for i, c in enumerate(self.chars)}
self.idx_to_char = dict(enumerate(self.chars))

self.train_X, self.train_y = self.data[:-1], self.data[1:]


Variable and function names should follow the snake_case naming convention. Please update the following name accordingly: train_X

self.initialize_weights()

##### Helper Functions #####
def one_hot_encode(self, char: str) -> np.ndarray:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function one_hot_encode

vector[self.char_to_idx[char]] = 1
return vector

def initialize_weights(self) -> None:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function initialize_weights

self.wy = self.init_weights(self.hidden_dim, self.char_size)
self.by = np.zeros((self.char_size, 1))

def init_weights(


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function init_weights

)

##### Activation Functions #####
def sigmoid(self, x: np.ndarray, derivative: bool = False) -> np.ndarray:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function sigmoid

Please provide descriptive name for the parameter: x

return exp_x / exp_x.sum(axis=0)

##### LSTM Network Methods #####
def reset(self) -> None:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function reset

self.input_gates = {}
self.outputs = {}

def forward(self, inputs: list) -> list:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function forward


return outputs

def backward(self, errors: list, inputs: list) -> None:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function backward

self.wy += d_wy * self.lr
self.by += d_by * self.lr

def train(self) -> None:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function train


self.backward(errors, self.concat_inputs)

def test(self) -> None:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function test

@algorithms-keeper (bot) left a comment



self.initialize_weights()

def one_hot_encode(self, char: str) -> np.ndarray:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function one_hot_encode

vector[self.char_to_idx[char]] = 1
return vector

def initialize_weights(self) -> None:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function initialize_weights

self.wy: np.ndarray = self.init_weights(self.hidden_dim, self.char_size)
self.by: np.ndarray = np.zeros((self.char_size, 1))

def init_weights(self, input_dim: int, output_dim: int) -> np.ndarray:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function init_weights

6 / (input_dim + output_dim)
)

def sigmoid(self, x: np.ndarray, derivative: bool = False) -> np.ndarray:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function sigmoid

Please provide descriptive name for the parameter: x

return x * (1 - x)
return 1 / (1 + np.exp(-x))

def tanh(self, x: np.ndarray, derivative: bool = False) -> np.ndarray:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function tanh

Please provide descriptive name for the parameter: x

exp_x = np.exp(x - np.max(x))
return exp_x / exp_x.sum(axis=0)
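Standalone sketches of the activation pair this section covers, assuming the same derivative convention as `sigmoid` (already-activated values passed back in) and the max-shift shown above for numerical stability:

```python
import numpy as np


def tanh(values: np.ndarray, derivative: bool = False) -> np.ndarray:
    """Hyperbolic tangent; with ``derivative=True`` the input is assumed
    to be t = tanh(x), so the derivative is 1 - t**2."""
    if derivative:
        return 1 - values**2
    return np.tanh(values)


def softmax(values: np.ndarray) -> np.ndarray:
    """Numerically stable softmax over axis 0: subtracting the max leaves
    the result unchanged but prevents overflow in np.exp."""
    exp_values = np.exp(values - np.max(values))
    return exp_values / exp_values.sum(axis=0)
```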

def reset(self) -> None:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function reset

self.input_gates = {}
self.outputs = {}

def forward(self, inputs: list[np.ndarray]) -> list[np.ndarray]:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function forward


return outputs

def backward(self, errors: list[np.ndarray], inputs: list[np.ndarray]) -> None:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function backward

self.wy += d_wy * self.lr
self.by += d_by * self.lr

def train(self) -> None:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function train


self.backward(errors, inputs)

def test(self) -> None:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function test

@algorithms-keeper (bot) left a comment



self.initialize_weights()

def one_hot_encode(self, char: str) -> np.ndarray:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function one_hot_encode

vector[self.char_to_idx[char]] = 1
return vector

def initialize_weights(self) -> None:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function initialize_weights

self.wy: np.ndarray = self.init_weights(self.hidden_dim, self.char_size)
self.by: np.ndarray = np.zeros((self.char_size, 1))

def init_weights(self, input_dim: int, output_dim: int) -> np.ndarray:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function init_weights

6 / (input_dim + output_dim)
)
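The `6 / (input_dim + output_dim)` term in this fragment is the Xavier (Glorot) uniform bound. A standalone sketch with a doctest, assuming the `(output_dim, input_dim)` weight shape the PR appears to use:

```python
import numpy as np


def init_weights(input_dim: int, output_dim: int) -> np.ndarray:
    """
    Xavier/Glorot uniform initialization: sample from [-limit, limit]
    with limit = sqrt(6 / (input_dim + output_dim)), which keeps the
    activation variance roughly stable across layers.

    >>> weights = init_weights(64, 32)
    >>> weights.shape
    (32, 64)
    >>> bool((np.abs(weights) <= np.sqrt(6 / 96)).all())
    True
    """
    rng = np.random.default_rng()
    limit = np.sqrt(6 / (input_dim + output_dim))
    return rng.uniform(-limit, limit, (output_dim, input_dim))
```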

def sigmoid(self, x: np.ndarray, derivative: bool = False) -> np.ndarray:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function sigmoid

Please provide descriptive name for the parameter: x

return x * (1 - x)
return 1 / (1 + np.exp(-x))
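A doctest-backed version of this sigmoid might look like the sketch below. Note that the `derivative=True` branch assumes its input is already a sigmoid output (a common convention in from-scratch networks), and the parameter is renamed per the bot's descriptive-name request:

```python
import numpy as np


def sigmoid(values: np.ndarray, derivative: bool = False) -> np.ndarray:
    """
    Logistic sigmoid. With derivative=True, `values` is assumed to be
    a sigmoid output already, so the derivative is values * (1 - values).

    >>> float(sigmoid(np.array([0.0]))[0])
    0.5
    >>> float(sigmoid(np.array([0.5]), derivative=True)[0])
    0.25
    """
    if derivative:
        return values * (1 - values)
    return 1 / (1 + np.exp(-values))
```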

def tanh(self, x: np.ndarray, derivative: bool = False) -> np.ndarray:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function tanh

Please provide descriptive name for the parameter: x

exp_x = np.exp(x - np.max(x))
return exp_x / exp_x.sum(axis=0)

def reset(self) -> None:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function reset

self.input_gates = {}
self.outputs = {}

def forward(self, inputs: list[np.ndarray]) -> list[np.ndarray]:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function forward


return outputs

def backward(self, errors: list[np.ndarray], inputs: list[np.ndarray]) -> None:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function backward

self.wy += d_wy * self.lr
self.by += d_by * self.lr

def train(self) -> None:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function train


self.backward(errors, inputs)

def test(self) -> None:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function test

@algorithms-keeper algorithms-keeper bot removed the tests are failing Do not merge until tests pass label Oct 15, 2024
@LEVIII007 (Author) commented:

If it gets accepted, please give me the hacktoberfest-accepted label. Thank you!

@algorithms-keeper algorithms-keeper bot added the require type hints https://docs.python.org/3/library/typing.html label Oct 15, 2024

@algorithms-keeper (bot) left a comment





self.initialize_weights()

def one_hot_encode(self, char: str) -> np.ndarray:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function one_hot_encode

vector[self.char_to_index[char]] = 1
return vector

def initialize_weights(self) -> None:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function initialize_weights

)
self.output_layer_bias: np.ndarray = np.zeros((self.vocabulary_size, 1))

def init_weights(self, input_dim: int, output_dim: int) -> np.ndarray:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function init_weights

6 / (input_dim + output_dim)
)

def sigmoid(self, x: np.ndarray, derivative: bool = False) -> np.ndarray:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function sigmoid

Please provide descriptive name for the parameter: x

return x * (1 - x)
return 1 / (1 + np.exp(-x))

def tanh(self, x: np.ndarray, derivative: bool = False) -> np.ndarray:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function tanh

Please provide descriptive name for the parameter: x

return 1 - x**2
return np.tanh(x)

def softmax(self, x: np.ndarray) -> np.ndarray:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function softmax

Please provide descriptive name for the parameter: x

exp_x = np.exp(x - np.max(x))
return exp_x / exp_x.sum(axis=0)
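The max-subtraction in this fragment is the standard numerical-stability trick: softmax is invariant to shifting its inputs, so subtracting the maximum keeps `np.exp` from overflowing without changing the result. A self-contained sketch:

```python
import numpy as np


def softmax(logits: np.ndarray) -> np.ndarray:
    """
    Column-wise softmax with max-subtraction for numerical stability.

    >>> probs = softmax(np.array([[1.0], [2.0], [3.0]]))
    >>> bool(abs(probs.sum() - 1.0) < 1e-9)
    True
    >>> bool(np.isfinite(softmax(np.array([[1000.0], [1000.0]]))).all())
    True
    """
    # Shifting by the max leaves the ratios unchanged but bounds exp's input.
    shifted = np.exp(logits - np.max(logits))
    return shifted / shifted.sum(axis=0)
```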

def reset_network_state(self) -> None:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function reset_network_state

self.output_gate_activations = {}
self.network_outputs = {}

def forward_pass(self, inputs: list[np.ndarray]) -> list[np.ndarray]:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function forward_pass


return outputs

def backward_pass(self, errors: list[np.ndarray], inputs: list[np.ndarray]) -> None:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function backward_pass


return output

def test_lstm_workflow():


Please provide return type hint for the function: test_lstm_workflow. If the function does not return a value, please provide the type hint as: def function() -> None:
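The annotation the bot asks for is just `-> None` on the driver. An illustrative stub (the body of the real `test_lstm_workflow` is elided here; only the signature and doctest shape are the point):

```python
def test_lstm_workflow() -> None:
    """
    Drivers that return nothing should be annotated -> None.

    >>> test_lstm_workflow()
    """
    # A real version would build the LSTM, call train(), then test();
    # this stub only demonstrates the required signature.
```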

@algorithms-keeper algorithms-keeper bot added the tests are failing Do not merge until tests pass label Oct 15, 2024
@algorithms-keeper algorithms-keeper bot removed the tests are failing Do not merge until tests pass label Oct 15, 2024

@algorithms-keeper (bot) left a comment





self.initialize_weights()

def one_hot_encode(self, char: str) -> np.ndarray:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function one_hot_encode

vector[self.char_to_index[char]] = 1
return vector

def initialize_weights(self) -> None:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function initialize_weights

)
self.output_layer_bias: np.ndarray = np.zeros((self.vocabulary_size, 1))

def init_weights(self, input_dim: int, output_dim: int) -> np.ndarray:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function init_weights

6 / (input_dim + output_dim)
)

def sigmoid(self, x: np.ndarray, derivative: bool = False) -> np.ndarray:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function sigmoid

Please provide descriptive name for the parameter: x

return x * (1 - x)
return 1 / (1 + np.exp(-x))

def tanh(self, x: np.ndarray, derivative: bool = False) -> np.ndarray:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function tanh

Please provide descriptive name for the parameter: x

exp_x = np.exp(x - np.max(x))
return exp_x / exp_x.sum(axis=0)

def reset_network_state(self) -> None:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function reset_network_state

self.output_gate_activations = {}
self.network_outputs = {}

def forward_pass(self, inputs: list[np.ndarray]) -> list[np.ndarray]:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function forward_pass


return outputs

def backward_pass(self, errors: list[np.ndarray], inputs: list[np.ndarray]) -> None:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function backward_pass

self.output_layer_weights += d_output_layer_weights * self.learning_rate
self.output_layer_bias += d_output_layer_bias * self.learning_rate

def train(self) -> None:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function train


self.backward_pass(errors, inputs)

def test(self):


Please provide return type hint for the function: test. If the function does not return a value, please provide the type hint as: def function() -> None:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function test


@algorithms-keeper (bot) left a comment




)
self.output_layer_bias = np.zeros((self.vocabulary_size, 1))

def init_weights(self, input_dim: int, output_dim: int) -> np.ndarray:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function init_weights

6 / (input_dim + output_dim)
)

def sigmoid(self, x: np.ndarray, derivative: bool = False) -> np.ndarray:


Please provide descriptive name for the parameter: x

return x * (1 - x)
return 1 / (1 + np.exp(-x))

def tanh(self, x: np.ndarray, derivative: bool = False) -> np.ndarray:


Please provide descriptive name for the parameter: x

return 1 - x**2
return np.tanh(x)

def softmax(self, x: np.ndarray) -> np.ndarray:


Please provide descriptive name for the parameter: x

self.output_gate_activations = {}
self.network_outputs = {}

def forward_pass(self, inputs: list[np.ndarray]) -> list[np.ndarray]:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function forward_pass


return outputs

def backward_pass(self, errors: list[np.ndarray], inputs: list[np.ndarray]) -> None:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function backward_pass

self.output_layer_weights += d_output_layer_weights * self.learning_rate
self.output_layer_bias += d_output_layer_bias * self.learning_rate

def train(self) -> None:


As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function train


self.backward_pass(errors, inputs)

def test(self):


Please provide return type hint for the function: test. If the function does not return a value, please provide the type hint as: def function() -> None:

As there is no test file in this pull request nor any test function or class in the file neural_network/lstm.py, please provide doctest for the function test

@algorithms-keeper algorithms-keeper bot removed require descriptive names This PR needs descriptive function and/or variable names require tests Tests [doctest/unittest/pytest] are required require type hints https://docs.python.org/3/library/typing.html labels Oct 16, 2024
Labels: awaiting reviews (this PR is ready to be reviewed)
Projects: none yet
Participants: 1