
Commit 153c35e

Added Scaled Exponential Linear Unit Activation Function (#9027)
* Added Scaled Exponential Linear Unit Activation Function
* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
* Update scaled_exponential_linear_unit.py
* Update scaled_exponential_linear_unit.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
* Update scaled_exponential_linear_unit.py
* Update scaled_exponential_linear_unit.py
* Update scaled_exponential_linear_unit.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
* Update scaled_exponential_linear_unit.py
* Update scaled_exponential_linear_unit.py

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
1 parent 9e4f996 commit 153c35e

File tree

1 file changed: +44 -0 lines changed

@@ -0,0 +1,44 @@
"""
Implements the Scaled Exponential Linear Unit or SELU function.
The function takes a vector of K real numbers and two real numbers
alpha (default = 1.6732) & lambda (default = 1.0507) as input and
then applies the SELU function to each element of the vector.
SELU is a self-normalizing activation function. It is a variant
of the ELU. The main advantage of SELU is that we can be sure
that the output will always be standardized due to its
self-normalizing behavior. That means there is no need to
include Batch-Normalization layers.
References :
https://iq.opengenus.org/scaled-exponential-linear-unit/
"""

import numpy as np


def scaled_exponential_linear_unit(
    vector: np.ndarray, alpha: float = 1.6732, lambda_: float = 1.0507
) -> np.ndarray:
    """
    Applies the Scaled Exponential Linear Unit function to each element of the vector.
    Parameters :
    vector : np.ndarray
    alpha : float (default = 1.6732)
    lambda_ : float (default = 1.0507)

    Returns : np.ndarray
    Formula : f(x) = lambda_ * x                    if x > 0
                     lambda_ * alpha * (e**x - 1)   if x <= 0
    Examples :
    >>> scaled_exponential_linear_unit(vector=np.array([1.3, 3.7, 2.4]))
    array([1.36591, 3.88759, 2.52168])

    >>> scaled_exponential_linear_unit(vector=np.array([1.3, 4.7, 8.2]))
    array([1.36591, 4.93829, 8.61574])
    """
    return lambda_ * np.where(vector > 0, vector, alpha * (np.exp(vector) - 1))


if __name__ == "__main__":
    import doctest

    doctest.testmod()
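
For context, a minimal usage sketch (not part of the committed file) showing the negative branch of SELU, where the output saturates toward -lambda_ * alpha ≈ -1.758 for strongly negative inputs. The import path is an assumption based on the file name in the commit message; adjust it to wherever the module lives in the repository.

import numpy as np

# Assumed import path; the actual location of the module inside the repo may differ.
from scaled_exponential_linear_unit import scaled_exponential_linear_unit

# Positive inputs are simply scaled by lambda_; negative inputs are damped
# toward the saturation value -lambda_ * alpha.
print(scaled_exponential_linear_unit(vector=np.array([-5.0, -1.0, 0.0, 1.0])))
# Approximate output: [-1.7462 -1.1113  0.      1.0507]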
