
Added sigmoid-like activation functions #9011


Closed
64 changes: 64 additions & 0 deletions neural_network/activation_functions/sigmoid_like.py
@@ -0,0 +1,64 @@
import numpy as np


def _base_activation(vector: np.ndarray, alpha: float, beta: float) -> np.ndarray:


As there is no test file in this pull request, nor any test function or class in the file neural_network/activation_functions/sigmoid_like.py, please provide a doctest for the function _base_activation.


Turn the examples you have kept in comments into a doctest: remove the word "Example" and merge the two statements (i.e. the result assignment and the np.linalg.... check) into one:

>>> np.linalg.norm(np.array([0.5, 0.66666667, 0.83333333])
...     - _base_activation(np.array([0, np.log(2), np.log(5)]), 0, 1)) < 10**(-5)
True

"""
Base activation for sigmoid, swish, and SiLU.
"""
return np.power(vector, alpha) / (1 + np.exp(-beta * vector))
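For orientation (illustrative, not part of the diff): with alpha = 0 the factor np.power(vector, 0) is 1, so _base_activation reduces to the standard sigmoid 1 / (1 + e^(-x)); with alpha = 1 it reduces to swish, x * sigmoid(beta * x). A minimal sanity check, assuming the functions in this file are in scope:

>>> x = np.array([0.0, 1.0, np.log(2)])
>>> np.allclose(_base_activation(x, 0, 1), 1 / (1 + np.exp(-x)))
True
>>> np.allclose(_base_activation(x, 1, 2.0), x / (1 + np.exp(-2.0 * x)))
True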


def sigmoid(vector: np.ndarray) -> np.ndarray:
"""
The standard sigmoid function.
Args:
vector: (np.ndarray): The input array.
Returns:
np.ndarray: The result of the sigmoid activation applied to the input array.
Examples:
>>> result = sigmoid(vector=np.array([0, np.log(2), np.log(5)]))
>>> np.linalg.norm(np.array([0.5, 0.66666667, 0.83333333]) - result) < 10**(-5)
True
"""
return _base_activation(vector, 0, 1)

Suggested change:
-    return _base_activation(vector, 0, 1)
+    return _base_activation(vector, alpha=0, beta=1)
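For reference (not in the original diff), the expected values in the sigmoid doctest follow directly from sigmoid(x) = 1 / (1 + e^(-x)):

sigmoid(0)    = 1 / (1 + 1)   = 0.5
sigmoid(ln 2) = 1 / (1 + 1/2) = 2/3 ≈ 0.66666667
sigmoid(ln 5) = 1 / (1 + 1/5) = 5/6 ≈ 0.83333333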



def swish(vector: np.ndarray, beta: float) -> np.ndarray:
"""
Swish activation: https://arxiv.org/abs/1710.05941v2
Args:
vector: (np.ndarray): The input array.
beta: (float)
Returns:
np.ndarray: The result of the swish activation applied to the input array.
Examples:
>>> result = swish(np.array([1, 2, 3]), 0)
>>> np.linalg.norm(np.array([0.5, 1., 1.5]) - result) < 10**(-5)
True
>>> result = swish(np.array([0, 1, 2]), np.log(2))
>>> np.linalg.norm(np.array([0, 0.66666667, 1.6]) - result) < 10**(-5)
True
"""
return _base_activation(vector, 1, beta)

Suggested change:
-    return _base_activation(vector, 1, beta)
+    return _base_activation(vector, alpha=1, beta=beta)
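Likewise (illustrative only), the swish doctest values can be verified by hand from swish(x, beta) = x * sigmoid(beta * x):

beta = 0:    sigmoid(0) = 0.5, so swish([1, 2, 3], 0) = [0.5, 1.0, 1.5]
beta = ln 2: swish(1, ln 2) = 1 * sigmoid(ln 2) = 2/3 ≈ 0.66666667
             swish(2, ln 2) = 2 * sigmoid(2 ln 2) = 2 / (1 + 1/4) = 1.6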



def sigmoid_linear_unit(vector: np.ndarray) -> np.ndarray:
"""
SiLU activation: https://arxiv.org/abs/1606.08415
Args:
vector: (np.ndarray): The input array.

Returns:
np.ndarray: The result of the sigmoid linear unit applied to the input array.
Examples:
>>> result = sigmoid_linear_unit(np.array([0, 1, np.log(2)]))
>>> np.linalg.norm(np.array([0, 0.7310585, 0.462098]) - result) < 10**(-5)
True
"""
return swish(vector, 1)

Suggested change:
-    return swish(vector, 1)
+    return swish(vector, beta=1)
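And the SiLU doctest values (again just a hand check, not part of the diff), from sigmoid_linear_unit(x) = x * sigmoid(x):

SiLU(0)    = 0
SiLU(1)    = sigmoid(1) = 1 / (1 + e^(-1)) ≈ 0.7310586
SiLU(ln 2) = ln(2) * 2/3 ≈ 0.462098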



if __name__ == "__main__":
    import doctest

    doctest.testmod()
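Assuming the file lands at the path shown in the diff header, the doctests can also be run from the repository root without executing the module directly:

python -m doctest neural_network/activation_functions/sigmoid_like.py -v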