Added leaky rectified linear algorithm #6260
Conversation
This pull request has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
    Script inspired from its corresponding Wikipedia article
    https://en.wikipedia.org/wiki/Rectifier_(neural_networks)
    """
    from __future__ import annotations
Is the __future__ import needed here?
    def leaky_relu(
        vector: float | list[float], negative_slope: float = 0.01
    ) -> float | list[float]:
Suggested change:

    def leaky_relu(vector: np.ndarray, negative_slope: float = 0.01) -> np.ndarray:
I think just type hinting it as np.ndarray is fine for this function. Using numpy arrays is pretty standard when it comes to NN-related Python programming, and numpy functions that take in arrays generally also support scalars as well ("array_like").
    if isinstance(vector, int):
        raise ValueError(
            "leaky_relu() only accepts floats or a list of floats for vector"
        )
Why do we not want to support ints as input? They can all be cast to floats for the output.
    if not isinstance(negative_slope, float):
        raise ValueError("leaky_relu() only accepts a float value for negative_slope")
I think the constraints on the possible range for the negative slope should be clearer. Are we restricting it to a float between 0 and 1? If so, that should be the if-condition instead
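One way to make that contract explicit, as a minimal sketch. The function name leaky_relu_scalar and the choice of the open interval (0, 1) are assumptions drawn from the reviewer's question, not something the PR itself settles:

```python
def leaky_relu_scalar(value: float, negative_slope: float = 0.01) -> float:
    """Leaky ReLU for a single value, validating the slope range."""
    # Assumed constraint from the review: slope strictly between 0 and 1,
    # checked directly instead of only checking the type.
    if not 0.0 < negative_slope < 1.0:
        raise ValueError("negative_slope must be a float strictly between 0 and 1")
    return value if value >= 0 else value * negative_slope
```

With this check, an out-of-range slope such as 1.5 fails fast with a clear message rather than silently producing a non-leaky activation.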
    if isinstance(vector, float):
        if vector < 0:
            return vector * negative_slope
        return vector

    for index, value in enumerate(vector):
        if value < 0:
            vector[index] = value * negative_slope

    return vector
Suggested change (replacing all of the branching above):

    return np.maximum(vector, negative_slope * vector)
numpy functions can handle these cases very easily. Also, leaky ReLU is equivalent to max(x, negative_slope * x), which np.maximum computes elementwise.
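The reviewer's suggested numpy version can be sketched as a self-contained script (the wrapper around the one-line suggestion is illustrative):

```python
import numpy as np


def leaky_relu(vector: np.ndarray, negative_slope: float = 0.01) -> np.ndarray:
    """Leaky ReLU: identity for non-negative inputs, scaled by the slope otherwise."""
    # For x >= 0, x >= negative_slope * x, so maximum returns x;
    # for x < 0 (with 0 < slope < 1), negative_slope * x > x, so it returns slope * x.
    return np.maximum(vector, negative_slope * vector)


print(leaky_relu(np.array([-5.0, 0.0, 2.0])))  # elementwise: -0.05, 0.0, 2.0
```

Because np.maximum broadcasts, the same function also accepts a plain Python scalar, which is the "array_like" point made earlier in the thread.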
Closing in favor of #8962
Describe your change:
Added the leaky rectified linear algorithm (also known as leaky ReLU). Leaky ReLU is an alternative to normal ReLU because it mitigates the dying ReLU problem, where a neuron whose pre-activations are all negative outputs zero and receives zero gradient, so it stops learning.
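The dying-ReLU point can be seen numerically: plain ReLU has zero (sub)gradient for every negative input, while leaky ReLU keeps a small nonzero slope there. The sample inputs and the 0.01 slope below are illustrative:

```python
import numpy as np

x = np.array([-3.0, -1.0, 0.5, 2.0])

# Subgradients of the activations: ReLU contributes 0 for x < 0
# (the neuron "dies"); leaky ReLU keeps a small nonzero slope.
relu_grad = np.where(x < 0, 0.0, 1.0)
leaky_grad = np.where(x < 0, 0.01, 1.0)

print(relu_grad)   # zero gradient on the negative inputs
print(leaky_grad)  # small but nonzero gradient instead
```

Because the leaky gradient never vanishes, weight updates can still pull a neuron back out of the negative region during training.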
Checklist:
Fixes: #{$ISSUE_NO}