
Minor improvements in testing, builds and pipeline #707


Merged: 8 commits, merged on Jun 29, 2020
7 changes: 0 additions & 7 deletions .ci/linux_devops_code_coverage_generate.sh

This file was deleted.

2 changes: 1 addition & 1 deletion .ci/linux_devops_e2e_tests.sh
@@ -6,4 +6,4 @@ export AzureWebJobsCosmosDBConnectionString=$LINUXCOSMOSDBCONNECTIONSTRING
export AzureWebJobsEventHubConnectionString=$LINUXEVENTHUBCONNECTIONSTRING
export AzureWebJobsServiceBusConnectionString=$LINUXSERVICEBUSCONNECTIONSTRING

coverage run -p --branch -m pytest tests/endtoend
pytest --instafail --cov=./azure_functions_worker --cov-report xml --cov-branch --cov-append tests/endtoend
2 changes: 1 addition & 1 deletion .ci/linux_devops_unit_tests.sh
@@ -1,4 +1,4 @@
#!/bin/bash

set -e -x
coverage run -p --branch -m pytest tests/unittests
pytest --instafail --cov=./azure_functions_worker --cov-report xml --cov-branch tests/unittests
17 changes: 11 additions & 6 deletions .flake8
@@ -1,8 +1,13 @@
[flake8]
# it's not a bug that we aren't using all of hacking, ignore:
# H402: Module level import not at top of file
# W503: Line break occurred before a binary operator
# E731: Do not assign a lambda expression, use a def
ignore = W503,E402,E731
exclude =
.git, __pycache__, build, dist, .eggs, .github, .local, docs/,
Samples, azure_functions_worker/protos/,
azure_functions_worker/_thirdparty/typing_inspect.py,
tests/unittests/test_typing_inspect.py,
.venv*, .env*, .vscode, venv

exclude = .git, __pycache__, build, dist, .eggs, .github, .local, docs/,
Samples, azure_functions_worker/protos/,
azure_functions_worker/_thirdparty/typing_inspect.py,
tests/unittests/test_typing_inspect.py, .venv*, .env*, .vscode, venv

max-line-length = 80
15 changes: 11 additions & 4 deletions azure-pipelines.yml
@@ -8,6 +8,17 @@ trigger:
variables:
DOTNET_VERSION: '2.2.207'

schedules:
- cron: "0 0 * * *"
displayName: Daily midnight build
branches:
include:
- dev
- master
- release/*
Contributor


So this change will make us run the release/* branches every night. Do you suggest moving the previous releases into releases/ancient/* branches now?

Member Author


I set up the branch patterns following the best practices recommended by Azure DevOps. We can change the daily runs to not target release branches.

exclude:
- releases/ancient/*
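For reference, the complete schedule block under discussion, assembled from the hunks in this diff (the comment thread splits it in two above):

```yaml
schedules:
- cron: "0 0 * * *"
  displayName: Daily midnight build
  branches:
    include:
    - dev
    - master
    - release/*
    exclude:
    - releases/ancient/*
```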

jobs:
- job: Tests
pool:
@@ -60,10 +71,6 @@ jobs:
LINUXEVENTHUBCONNECTIONSTRING: $(linuxEventHub)
LINUXSERVICEBUSCONNECTIONSTRING: $(linuxServiceBus)
displayName: 'E2E Tests'
- bash: |
chmod +x .ci/linux_devops_code_coverage_generate.sh
.ci/linux_devops_code_coverage_generate.sh
displayName: 'Generate Code Coverage XML file'
- task: PublishCodeCoverageResults@1
inputs:
codeCoverageTool: cobertura
32 changes: 18 additions & 14 deletions azure_functions_worker/dispatcher.py
@@ -46,15 +46,17 @@ class Dispatcher(metaclass=DispatcherMeta):

_GRPC_STOP_RESPONSE = object()

def __init__(self, loop, host, port, worker_id, request_id,
grpc_connect_timeout, grpc_max_msg_len=-1):
def __init__(self, loop, host, port: int, worker_id: str, request_id: str,
grpc_connect_timeout: float, grpc_max_msg_len: int = -1):
self._loop = loop
self._host = host
self._port = port
self._request_id = request_id
self._worker_id = worker_id
self._functions = functions.Registry()

self._old_task_factory = None

# A thread-pool for synchronous function calls. We limit
# the number of threads to 1 so that one Python worker can
# only run one synchronous function in parallel. This is
@@ -75,7 +77,8 @@ def __init__(self, loop, host, port, worker_id, request_id,
self._grpc_thread = threading.Thread(
name='grpc-thread', target=self.__poll_grpc)

def load_bindings(self):
@staticmethod
def load_bindings():
"""Load out-of-tree binding implementations."""
services = {}

@@ -88,18 +91,17 @@ def load_bindings(self):
@classmethod
async def connect(cls, host, port, worker_id, request_id,
connect_timeout):
loop = asyncio._get_running_loop()
disp = cls(loop, host, port, worker_id, request_id,
connect_timeout)
loop = asyncio.events.get_event_loop()
disp = cls(loop, host, port, worker_id, request_id, connect_timeout)
disp._grpc_thread.start()
await disp._grpc_connected_fut
logger.info('Successfully opened gRPC channel to %s:%s', host, port)
return disp

async def dispatch_forever(self):
if DispatcherMeta.__current_dispatcher__ is not None:
raise RuntimeError(
'there can be only one running dispatcher per process')
raise RuntimeError('there can be only one running dispatcher per '
'process')

self._old_task_factory = self._loop.get_task_factory()

@@ -131,7 +133,7 @@ async def dispatch_forever(self):
try:
await forever
finally:
# Reenable console logging when there's an exception
# Re-enable console logging when there's an exception
enable_console_logging()
root_logger.removeHandler(logging_handler)
finally:
@@ -152,7 +154,7 @@ def stop(self):
self._sync_call_tp.shutdown()
self._sync_call_tp = None

def _on_logging(self, record: logging.LogRecord, formatted_msg: str):
def on_logging(self, record: logging.LogRecord, formatted_msg: str):
if record.levelno >= logging.CRITICAL:
log_level = protos.RpcLog.Critical
elif record.levelno >= logging.ERROR:
@@ -201,7 +203,9 @@ def request_id(self):
def worker_id(self):
return self._worker_id

def _serialize_exception(self, exc):
# noinspection PyBroadException
@staticmethod
def _serialize_exception(exc: Exception):
try:
message = f'{type(exc).__name__}: {exc}'
except Exception:
@@ -222,8 +226,8 @@ async def _dispatch_grpc_request(self, request):
# Don't crash on unknown messages. Some of them can be ignored;
# and if something goes really wrong the host can always just
# kill the worker's process.
logger.error(
f'unknown StreamingMessage content type {content_type}')
logger.error(f'unknown StreamingMessage content type '
f'{content_type}')
return

resp = await request_handler(request)
@@ -524,7 +528,7 @@ def emit(self, record):
# Since we disable console log after gRPC channel is initiated
# We should redirect all the messages into dispatcher
msg = self.format(record)
Dispatcher.current._on_logging(record, msg)
Dispatcher.current.on_logging(record, msg)


class ContextEnabledTask(asyncio.Task):
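The `Dispatcher.current` attribute used by the logging handler in this file comes from the `DispatcherMeta` metaclass. A minimal sketch of that pattern, simplified from what is visible in this diff (the real worker forwards the log record to the host over gRPC; the `dispatch` method name and echo behavior here are illustrative assumptions):

```python
import logging


class DispatcherMeta(type):
    """Metaclass exposing the single running dispatcher as `cls.current`."""

    __current_dispatcher__ = None

    @property
    def current(cls):
        disp = cls.__current_dispatcher__
        if disp is None:
            raise RuntimeError('no currently running dispatcher')
        return disp


class Dispatcher(metaclass=DispatcherMeta):
    def dispatch(self):
        # Mirrors the guard in dispatch_forever(): only one dispatcher
        # may be active per process.
        if DispatcherMeta.__current_dispatcher__ is not None:
            raise RuntimeError('there can be only one running dispatcher per '
                               'process')
        DispatcherMeta.__current_dispatcher__ = self

    def on_logging(self, record: logging.LogRecord, formatted_msg: str) -> str:
        # The real worker converts the record to a protos.RpcLog and sends it
        # to the host; here we just echo the message to keep the sketch runnable.
        return formatted_msg


d = Dispatcher()
d.dispatch()
record = logging.LogRecord('x', logging.INFO, __file__, 1, 'hello', None, None)
print(Dispatcher.current is d)  # True
```

Because `current` is a property on the metaclass, it is looked up on the class itself, which is why the handler can write `Dispatcher.current.on_logging(...)` without holding a reference to the instance.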
7 changes: 6 additions & 1 deletion setup.py
@@ -290,7 +290,12 @@ def run(self):
'mypy',
'pytest',
'requests==2.*',
'coverage'
'coverage',
'pytest-sugar',
Contributor


Just curious: I don't see these packages used anywhere in the code. Are they pytest extensions that will be picked up automatically?

Member Author


pytest-sugar -> If you run pytest locally, this presents the test results better.

pytest-cov -> This switches coverage collection to pytest-cov, which is better at tracking coverage. The pipeline scripts have been updated to use it, e.g.:

pytest --instafail --cov=./azure_functions_worker --cov-report xml --cov-branch --cov-append tests/endtoend

pytest-xdist -> This can run the tests in parallel, but the catch currently is that some tests rely on state from other tests (or depend on things other tests are changing at the same time); we need to improve that.

pytest-randomly -> This runs the tests in random order, exposing any ordering issues in the tests.

pytest-instafail -> This shows failures and errors instantly instead of waiting until the end of the test session; good for local runs.

Member Author

pytest-sugar's output: (screenshot)

'pytest-cov',
'pytest-xdist',
'pytest-randomly',
'pytest-instafail'
]
},
include_package_data=True,
2 changes: 1 addition & 1 deletion tests/unittests/test_loader.py
@@ -94,7 +94,7 @@ async def _runner():

# Trimming off carriage return character when testing on Windows
stdout_lines = [
l.replace(b'\r', b'') for l in stdout.strip().split(b'\n')
line.replace(b'\r', b'') for line in stdout.strip().split(b'\n')
]
self.assertEqual(stdout_lines, [b'True', b'True'])

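The renames in this file and the next replace the single-letter variable `l`, which flake8 flags as ambiguous (E741) because it is easily confused with `1`. A small standalone illustration of the cleaned-up comprehension, with sample data assumed:

```python
# Using `line` instead of `l` avoids flake8's E741 (ambiguous variable name)
# while stripping Windows carriage returns from captured output.
stdout = b'True\r\nTrue'
stdout_lines = [
    line.replace(b'\r', b'') for line in stdout.strip().split(b'\n')
]
print(stdout_lines)  # [b'True', b'True']
```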
6 changes: 4 additions & 2 deletions tests/unittests/test_mock_http_functions.py
@@ -22,7 +22,8 @@ async def test_call_sync_function_check_logs(self):
self.assertEqual(r.response.result.status,
protos.StatusResult.Success)

user_logs = [l for l in r.logs if l.category == 'my function']
user_logs = [line for line in r.logs
if line.category == 'my function']
self.assertEqual(len(user_logs), 1)

log = user_logs[0]
@@ -48,7 +49,8 @@ async def test_call_async_function_check_logs(self):
self.assertEqual(r.response.result.status,
protos.StatusResult.Success)

user_logs = [l for l in r.logs if l.category == 'my function']
user_logs = [line for line in r.logs if
line.category == 'my function']
self.assertEqual(len(user_logs), 2)

first_msg = user_logs[0]