
Commit 0fc24e8

[mypy] Annotates other/scoring_algorithm (#5621)
* scoring_algorithm: Moves doctest into function docstring so it will be run
* [mypy] annotates other/scoring_algorithm
* [mypy] renames temp var to unique value to work around mypy issue in other/scoring_algorithm

  Reusing loop variables with the same name and different types gives this very confusing mypy error response. pyright correctly infers the types without issue.

  ```
  scoring_algorithm.py:58: error: Incompatible types in assignment (expression has type "float", variable has type "List[float]")
  scoring_algorithm.py:60: error: Unsupported operand types for - ("List[float]" and "float")
  scoring_algorithm.py:65: error: Incompatible types in assignment (expression has type "float", variable has type "List[float]")
  scoring_algorithm.py:67: error: Unsupported operand types for - ("List[float]" and "float")
  Found 4 errors in 1 file (checked 1 source file)
  ```

* scoring_algorithm: uses enumeration instead of manual indexing on loop var
* scoring_algorithm: sometimes we look before we leap.
* clean-up: runs `black` to fix formatting
1 parent 5c8a6c8 commit 0fc24e8
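For context, a minimal standalone repro (not part of this commit) of the loop-variable reuse described above: the name `item` is bound first to a row (`list[float]`) and later to a scalar, so mypy keeps the first inferred type and reports the same two kinds of errors quoted in the message, while pyright re-infers the type per loop.

```python
# Hypothetical snippet illustrating the mypy complaint; not taken from the repo.
rows: list[list[float]] = [[20.0, 60.0, 2012.0], [23.0, 90.0, 2015.0]]

for item in rows:  # mypy infers item: list[float]
    print(len(item))

for item in rows[0]:  # error: Incompatible types in assignment ("float" vs "List[float]")
    print(item - 1.0)  # error: Unsupported operand types for - ("List[float]" and "float")
```

Renaming the first loop's variables (as the diff below does with `data` and `el`) gives each name a single consistent type, so no `# type: ignore` is needed.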

1 file changed: +14 −15 lines changed


Diff for: other/scoring_algorithm.py

```diff
@@ -20,39 +20,38 @@
 lowest mileage but newest registration year.
 Thus the weights for each column are as follows:
 [0, 0, 1]
-
->>> procentual_proximity([[20, 60, 2012],[23, 90, 2015],[22, 50, 2011]], [0, 0, 1])
-[[20, 60, 2012, 2.0], [23, 90, 2015, 1.0], [22, 50, 2011, 1.3333333333333335]]
 """
 
 
-def procentual_proximity(source_data: list, weights: list) -> list:
+def procentual_proximity(
+    source_data: list[list[float]], weights: list[int]
+) -> list[list[float]]:
 
     """
     weights - int list
     possible values - 0 / 1
     0 if lower values have higher weight in the data set
     1 if higher values have higher weight in the data set
+
+    >>> procentual_proximity([[20, 60, 2012],[23, 90, 2015],[22, 50, 2011]], [0, 0, 1])
+    [[20, 60, 2012, 2.0], [23, 90, 2015, 1.0], [22, 50, 2011, 1.3333333333333335]]
     """
 
     # getting data
-    data_lists = []
-    for item in source_data:
-        for i in range(len(item)):
-            try:
-                data_lists[i].append(float(item[i]))
-            except IndexError:
-                # generate corresponding number of lists
+    data_lists: list[list[float]] = []
+    for data in source_data:
+        for i, el in enumerate(data):
+            if len(data_lists) < i + 1:
                 data_lists.append([])
-                data_lists[i].append(float(item[i]))
+            data_lists[i].append(float(el))
 
-    score_lists = []
+    score_lists: list[list[float]] = []
     # calculating each score
     for dlist, weight in zip(data_lists, weights):
         mind = min(dlist)
         maxd = max(dlist)
 
-        score = []
+        score: list[float] = []
         # for weight 0 score is 1 - actual score
         if weight == 0:
             for item in dlist:
@@ -75,7 +74,7 @@ def procentual_proximity(source_data: list, weights: list) -> list:
         score_lists.append(score)
 
     # initialize final scores
-    final_scores = [0 for i in range(len(score_lists[0]))]
+    final_scores: list[float] = [0 for i in range(len(score_lists[0]))]
 
     # generate final scores
     for i, slist in enumerate(score_lists):
```
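A small usage sketch of the newly annotated function, mirroring the doctest above; the import path is an assumption that the repository root is on `sys.path` (otherwise copy the function into your own module).

```python
# Values and expected output are taken from the doctest in the diff above.
from other.scoring_algorithm import procentual_proximity  # assumed import path

vehicles: list[list[float]] = [[20, 60, 2012], [23, 90, 2015], [22, 50, 2011]]
weights: list[int] = [0, 0, 1]  # 0: lower is better, 1: higher is better

# Each row gets an extra column holding its combined score; higher is better.
print(procentual_proximity(vehicles, weights))
# [[20, 60, 2012, 2.0], [23, 90, 2015, 1.0], [22, 50, 2011, 1.3333333333333335]]
```

Beyond the annotations, the body swaps the EAFP `try/except IndexError` for a look-before-you-leap length check (the "sometimes we look before we leap" bullet), and moving the doctest into the function docstring is what makes it get run, per the commit message.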

0 commit comments