Added leaky rectified linear algorithm #6260


Closed
wants to merge 3 commits into from

Conversation

atomicsorcerer

@atomicsorcerer atomicsorcerer commented Jul 20, 2022

Describe your change:

Added the leaky rectified linear algorithm (also known as leaky ReLU). Leaky ReLU is an alternative to standard ReLU because it mitigates the dying ReLU problem, an issue in some neural networks where neurons that only receive negative inputs always output zero and stop learning.
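For context, a minimal NumPy sketch of the leaky ReLU formula described above (illustrative only, not the exact code from this PR):

```python
import numpy as np

def leaky_relu(vector: np.ndarray, negative_slope: float = 0.01) -> np.ndarray:
    """Apply leaky ReLU elementwise: x if x >= 0, else negative_slope * x."""
    return np.where(vector >= 0, vector, negative_slope * vector)
```

Negative inputs are scaled by a small slope instead of being zeroed out, so the gradient is never exactly zero.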

  • Add an algorithm?
  • Fix a bug or typo in an existing algorithm?
  • Documentation change?

Checklist:

  • I have read CONTRIBUTING.md.
  • This pull request is all my own work -- I have not plagiarized.
  • I know that pull requests will not be merged if they fail the automated tests.
  • This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
  • All new Python files are placed inside an existing directory.
  • All filenames are in all lowercase characters with no spaces or dashes.
  • All functions and variable names follow Python naming conventions.
  • All function parameters and return values are annotated with Python type hints.
  • All functions have doctests that pass the automated testing.
  • All new algorithms have a URL in their comments that points to Wikipedia or another similar explanation.
  • If this pull request resolves one or more open issues then the commit message contains Fixes: #{$ISSUE_NO}.

@atomicsorcerer atomicsorcerer requested a review from Kush1101 as a code owner July 20, 2022 19:46
@ghost ghost added the awaiting reviews This PR is ready to be reviewed label Jul 20, 2022
@stale

stale bot commented Nov 2, 2022

This pull request has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@stale stale bot added the stale Used to mark an issue or pull request stale. label Nov 2, 2022
Script inspired from its corresponding Wikipedia article
https://en.wikipedia.org/wiki/Rectifier_(neural_networks)
"""
from __future__ import annotations
Contributor

Is the __future__ import needed here?

Comment on lines +15 to +17
def leaky_relu(
vector: float | list[float], negative_slope: float = 0.01
) -> float | list[float]:
Contributor

Suggested change
def leaky_relu(
vector: float | list[float], negative_slope: float = 0.01
) -> float | list[float]:
def leaky_relu(vector: np.ndarray, negative_slope: float = 0.01) -> np.ndarray:

I think just type hinting it as np.ndarray is fine for this function. Using NumPy arrays is pretty standard in NN-related Python programming, and NumPy functions that take in arrays generally also support scalars ("array_like").
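To illustrate the reviewer's point (a sketch, not part of the PR): NumPy ufuncs broadcast over arrays and also accept plain Python scalars, so a single annotation covers both cases in practice.

```python
import numpy as np

# np.maximum is a ufunc, so the same call works for a scalar or an array input.
scalar_out = np.maximum(-3.0, -3.0 * 0.01)          # scalar in, 0-d result out
array_out = np.maximum(np.array([-3.0, 2.0]), 0.0)  # array in, array out
```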

Comment on lines +36 to +39
if isinstance(vector, int):
raise ValueError(
"leaky_relu() only accepts floats or a list of floats for vector"
)
Contributor

Why do we not want to support ints as input? They can all be cast to floats as output

Comment on lines +40 to +41
if not isinstance(negative_slope, float):
raise ValueError("leaky_relu() only accepts a float value for negative_slope")
Contributor

I think the constraints on the possible range for the negative slope should be clearer. Are we restricting it to a float between 0 and 1? If so, that should be the if condition instead.
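One possible shape for such a check (a hypothetical helper based on the 0-to-1 range raised here, not code from the PR):

```python
def validate_negative_slope(negative_slope: float) -> float:
    # Hypothetical check: restrict the slope to the range [0, 1];
    # ints are accepted since they cast cleanly to float.
    if not 0 <= negative_slope <= 1:
        raise ValueError("negative_slope must be between 0 and 1")
    return float(negative_slope)
```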

Comment on lines +43 to +52
if isinstance(vector, float):
if vector < 0:
return vector * negative_slope
return vector

for index, value in enumerate(vector):
if value < 0:
vector[index] = value * negative_slope

return vector
Contributor

Suggested change
if isinstance(vector, float):
if vector < 0:
return vector * negative_slope
return vector
for index, value in enumerate(vector):
if value < 0:
vector[index] = value * negative_slope
return vector
return np.maximum(vector, negative_slope * vector)

numpy functions can handle these cases very easily. Also, leaky ReLU is equivalent to $f(x) = \max(x, ax)$ for negative slopes $0 \leq a \leq 1$ (see the Wikipedia article you cited for more info)
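A quick check of that equivalence (illustrative only): for a slope a in [0, 1], max(x, ax) picks x when x >= 0 and ax when x < 0, matching the piecewise definition.

```python
import numpy as np

a = 0.01  # example slope in [0, 1]
x = np.array([-5.0, -0.5, 0.0, 0.5, 5.0])

piecewise = np.where(x >= 0, x, a * x)  # leaky ReLU by definition
via_max = np.maximum(x, a * x)          # the suggested one-liner

assert np.allclose(piecewise, via_max)
```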

@stale stale bot removed stale Used to mark an issue or pull request stale. labels Jun 18, 2023
@tianyizheng02
Contributor

Closing in favor of #8962
