Added leaky rectified linear algorithm #6260

Closed · wants to merge 3 commits
58 changes: 58 additions & 0 deletions maths/leaky_relu.py
@@ -0,0 +1,58 @@
"""
This script implements the leaky rectified linear unit (leaky ReLU).

Leaky ReLU is sometimes used as a substitute for ReLU because it mitigates the
dying ReLU problem. It does this by adding a slight slope to the negative
portion of the function; the default value for the slope is 0.01, and the
slope is fixed before the network is trained.

Script inspired by the corresponding Wikipedia article:
https://en.wikipedia.org/wiki/Rectifier_(neural_networks)
"""
from __future__ import annotations
Contributor:

Is the `__future__` import needed here?



def leaky_relu(
vector: float | list[float], negative_slope: float = 0.01
) -> float | list[float]:
Comment on lines +15 to +17

Contributor:

Suggested change:

-def leaky_relu(
-    vector: float | list[float], negative_slope: float = 0.01
-) -> float | list[float]:
+def leaky_relu(vector: np.ndarray, negative_slope: float = 0.01) -> np.ndarray:

I think just type hinting it as np.ndarray is fine for this function. Using numpy arrays is pretty standard when it comes to NN-related Python programming, and numpy functions that take in arrays generally also support scalars as well ("array_like").

"""
Implements the leaky rectified linear activation function

:param vector: The float or list of floats to apply the algorithm to
:param negative_slope: The multiplier that is applied to every negative value in the list
:return: The modified value or list of values after applying LReLU

>>> leaky_relu([-5])
[-0.05]
>>> leaky_relu([-2, 0.8, -0.3])
[-0.02, 0.8, -0.003]
>>> leaky_relu(-3.0)
-0.03
>>> leaky_relu(2)
Traceback (most recent call last):
...
ValueError: leaky_relu() only accepts floats or a list of floats for vector
"""
if isinstance(vector, int):
raise ValueError(
"leaky_relu() only accepts floats or a list of floats for vector"
)
Comment on lines +36 to +39

Contributor:

Why do we not want to support ints as input? They can all be cast to floats as output.
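A minimal sketch of what this comment suggests (the name `leaky_relu_cast` is illustrative, not part of the PR): accept ints and cast them to floats rather than rejecting them.

```python
def leaky_relu_cast(vector, negative_slope=0.01):
    # Illustrative variant, not part of this PR: int inputs are accepted
    # and cast to float instead of raising a ValueError.
    if isinstance(vector, (int, float)):
        value = float(vector)
        return value * negative_slope if value < 0 else value
    return [float(v) * negative_slope if v < 0 else float(v) for v in vector]
```

With this variant, `leaky_relu_cast(2)` would return `2.0` instead of raising.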

if not isinstance(negative_slope, float):
raise ValueError("leaky_relu() only accepts a float value for negative_slope")
Comment on lines +40 to +41

Contributor:

I think the constraints on the possible range for the negative slope should be clearer. Are we restricting it to a float between 0 and 1? If so, that should be the if-condition instead.
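One way to make that check concrete (a sketch; `validate_negative_slope` is a hypothetical helper name, and the 0-to-1 range is the reviewer's assumption, not something the PR states):

```python
def validate_negative_slope(negative_slope):
    # Hypothetical helper reflecting the review suggestion: restrict the
    # slope to a float in [0, 1] instead of only checking its type.
    if not isinstance(negative_slope, float) or not 0.0 <= negative_slope <= 1.0:
        raise ValueError(
            "leaky_relu() only accepts a float between 0 and 1 for negative_slope"
        )
```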


if isinstance(vector, float):
if vector < 0:
return vector * negative_slope
return vector

for index, value in enumerate(vector):
if value < 0:
vector[index] = value * negative_slope

return vector
Comment on lines +43 to +52

Contributor:

Suggested change:

-    if isinstance(vector, float):
-        if vector < 0:
-            return vector * negative_slope
-        return vector
-
-    for index, value in enumerate(vector):
-        if value < 0:
-            vector[index] = value * negative_slope
-
-    return vector
+    return np.maximum(vector, negative_slope * vector)

numpy functions can handle these cases very easily. Also, leaky ReLU is equivalent to $f(x) = \max(x, ax)$ for negative slopes $0 \leq a \leq 1$ (see the Wikipedia article you cited for more info).
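The suggested one-liner can be tried standalone (a sketch assuming numpy is installed; `leaky_relu_np` is an illustrative name, not the PR's function):

```python
import numpy as np


def leaky_relu_np(vector: np.ndarray, negative_slope: float = 0.01) -> np.ndarray:
    # Elementwise max(x, a * x): equal to x where x >= 0 and to a * x
    # where x < 0, provided 0 <= a <= 1.
    return np.maximum(vector, negative_slope * vector)
```

Because `np.maximum` accepts "array_like" inputs, this version also works on plain Python scalars and lists, which is the point the reviewer makes about the type hints.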



if __name__ == "__main__":
import doctest

doctest.testmod()