github-ci: prevent default hosts from use of swap memory #99


Closed
avtikhon opened this issue Mar 22, 2021 · 0 comments · Fixed by tarantool/tarantool#5930

@avtikhon
Contributor

Found a global issue in testing:
#98

which may happen because of swap use on the GitHub Actions default
runners, which were found to have
2 cores + 7 GB memory + 4 GB swap

@avtikhon avtikhon added the teamQ label Mar 22, 2021
avtikhon added a commit to tarantool/tarantool that referenced this issue Mar 23, 2021
GitHub Actions provides hosts for Linux-based runners in the following
configuration:

  2 cores
  7 GB memory
  4 GB swap memory

To avoid issues with tests hanging or slowing down under high memory
use, as in [1], the host configuration must avoid using swap memory.
All of the test workflows run inside Docker containers. This patch
sets memory limits in the docker run configuration based on the
current GitHub Actions hosts: 7 GB of memory with no additional swap.

Closes tarantool/tarantool-qa#99
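As a sketch, the memory limits described in the commit message above
correspond to docker run flags like the following (the image name and
test command here are placeholders, not taken from the patch):

```shell
# Cap the container at 7G of RAM and set the swap limit to the same
# value; when --memory-swap equals --memory, the container gets no
# swap at all.
docker run --init \
  --memory=7G \
  --memory-swap=7G \
  tarantool/testing:latest \
  make test
```

Setting --memory-swap without --memory is rejected by Docker; the two
flags work as a pair, with the swap allowance being the difference
between them.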
avtikhon added a commit to tarantool/tarantool that referenced this issue Mar 23, 2021
GitHub Actions provides hosts for Linux-based runners in the following
configuration:

  2 cores
  7 GB memory
  4 GB swap memory

To avoid issues with tests hanging or slowing down under high memory
use, as in [1], the host configuration must avoid using swap memory.
All of the test workflows run inside Docker containers. This patch
sets memory limits in the docker run configuration based on the
current GitHub Actions hosts: 7 GB of memory with no additional swap.

Closes tarantool/tarantool-qa#99

[1]: tarantool/tarantool-qa#93
avtikhon added a commit to tarantool/tarantool that referenced this issue Mar 23, 2021
avtikhon added a commit to tarantool/tarantool that referenced this issue Mar 24, 2021
avtikhon added a commit to tarantool/tarantool that referenced this issue Mar 24, 2021
GitHub Actions provides hosts for Linux-based runners in the following
configuration:

  2 cores
  7 GB memory
  4 GB swap memory

To avoid issues with tests hanging or slowing down under high memory
use, as in [1], the host configuration must avoid using swap memory.
All of the test workflows run inside Docker containers. This patch
sets memory limits in the docker run configuration based on the
current GitHub Actions hosts: 7 GB of memory with no additional swap.

Checked 4 full runs (29 workflows in each run used the change) and got
a single failed test, in the gevent() routine in test-run. This result
is much better than without this patch, when 3-4 workflows failed on
each full run.

Closes tarantool/tarantool-qa#99

[1]: tarantool/tarantool-qa#93
avtikhon added a commit to tarantool/tarantool that referenced this issue Mar 24, 2021
GitHub Actions provides hosts for Linux-based runners in the following
configuration:

  2 cores
  7 GB memory
  4 GB swap memory

To avoid issues with tests hanging or slowing down under high memory
use, as in [1], the host configuration must avoid using swap memory.
All of the test workflows run inside Docker containers. This patch
sets memory limits in the docker run configuration based on the
current GitHub Actions hosts: 7 GB of memory with no additional swap.

Checked 5 full runs (29 workflows in each run used the change) and got
a single failed test, in the gevent() routine in test-run. This result
is much better than without this patch, when 3-4 workflows failed on
each full run.

Closes tarantool/tarantool-qa#99

[1]: tarantool/tarantool-qa#93
avtikhon added a commit to tarantool/tarantool that referenced this issue Mar 24, 2021
GitHub Actions provides hosts for Linux-based runners in the following
configuration:

  2 cores
  7 GB memory
  4 GB swap memory

To avoid issues with tests hanging or slowing down under high memory
use, as in [1], the host configuration must avoid using swap memory.
All of the test workflows run inside Docker containers. This patch
sets memory limits in the docker run configuration based on the
current GitHub Actions hosts: 7 GB of memory with no additional swap.

Checked 7 full runs (29 workflows in each run used the change) and got
a single failed test, in the gevent() routine in test-run. This result
is much better than without this patch, when 3-4 workflows failed on
each full run.

This could happen because swap began to be used after 60% of RAM was
in use:

  cat /sys/fs/cgroup/memory/memory.swappiness
  60

For GitHub Actions host configurations with 7 GB of RAM this means
that swap began to be used after 4.2 GB of RAM was in use. But some
tests, such as 'box/net_msg_max.test.lua', use 2.5 GB of RAM, and
memory fragmentation could lead to swap use after such a test run [3].
This swappiness value is not well tuned for performance testing, as
suggested in [2]: for performance testing it is better to use lower
values, around 10%, or better yet not to use swap at all.

Closes tarantool/tarantool-qa#99

[1]: tarantool/tarantool-qa#93
[2]: https://linuxhint.com/understanding_vm_swappiness/
[3]: https://unix.stackexchange.com/questions/2658/why-use-swap-when-there-is-more-than-enough-free-space-in-ram
avtikhon added a commit to tarantool/tarantool that referenced this issue Mar 24, 2021
GitHub Actions provides hosts for Linux-based runners in the following
configuration:

  2 cores
  7 GB memory
  4 GB swap memory

To avoid issues with tests hanging or slowing down under high memory
use, as in [1], the host configuration must avoid using swap memory.
All of the test workflows run inside Docker containers. This patch
sets memory limits in the docker run configuration based on the
current GitHub Actions hosts: 7 GB of memory with no additional swap.

Checked 8 full runs (29 workflows in each run used the change) and got
a single failed test, in the gevent() routine in test-run. This result
is much better than without this patch, when 3-4 workflows failed on
each full run.

This could happen because swap began to be used after 40% of RAM was
in use:

  cat /sys/fs/cgroup/memory/memory.swappiness
  60

The default vm.swappiness value of 60 represents the percentage of
free memory left before swap is activated. The lower the value, the
less swapping is used and the more memory pages are kept in physical
memory.

This swappiness value is not well tuned for performance testing, as
suggested in [2]. For performance testing it is better to use lower
values, around 10%, or better yet not to use swap at all.

For GitHub Actions host configurations with 7 GB of RAM this means
that swap began to be used after 2.8 GB of RAM was in use. But some
tests, such as 'box/net_msg_max.test.lua', use 2.5 GB of RAM, and
memory fragmentation could lead to swap use after such a test run [3].

Closes tarantool/tarantool-qa#99

[1]: tarantool/tarantool-qa#93
[2]: https://linuxhint.com/understanding_vm_swappiness/
[3]: https://unix.stackexchange.com/questions/2658/why-use-swap-when-there-is-more-than-enough-free-space-in-ram
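As context for the swappiness numbers discussed above, the current
value can be inspected, and lowered for a performance-testing host,
roughly as follows (the value 10 matches the lower setting suggested
in the commit message; the tuning commands require root and are shown
commented out):

```shell
# Print the host's current swappiness; 60 is the usual Linux default.
cat /proc/sys/vm/swappiness

# To lower it for performance testing, or disable swap entirely:
#   sudo sysctl vm.swappiness=10
#   sudo swapoff -a
```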
avtikhon added a commit to tarantool/tarantool that referenced this issue Mar 25, 2021
GitHub Actions provides hosts for Linux-based runners in the following
configuration:

  2 cores
  7 GB memory
  4 GB swap memory

To avoid issues with tests hanging or slowing down under high memory
use, as in [1], the host configuration must avoid using swap memory.
All of the test workflows run inside Docker containers. This patch
sets memory limits in the docker run configuration based on the
current GitHub Actions hosts: 7 GB of memory with no additional swap.

Checked 9 full runs (29 workflows in each run used the change) and got
a single failed test, in the gevent() routine in test-run. This result
is much better than without this patch, when 3-4 workflows failed on
each full run.

This could happen because swap began to be used after 40% of RAM was
in use:

  cat /sys/fs/cgroup/memory/memory.swappiness
  60

The default vm.swappiness value of 60 represents the percentage of
free memory left before swap is activated. The lower the value, the
less swapping is used and the more memory pages are kept in physical
memory.

This swappiness value is not well tuned for performance testing, as
suggested in [2]. For performance testing it is better to use lower
values, around 10%, or better yet not to use swap at all [3].

For GitHub Actions host configurations with 7 GB of RAM this means
that swap began to be used after 2.8 GB of RAM was in use. But some
tests, such as 'box/net_msg_max.test.lua', use 2.5 GB of RAM, and
memory fragmentation could lead to swap use after such a test run [4].

Closes tarantool/tarantool-qa#99

[1]: tarantool/tarantool-qa#93
[2]: https://linuxhint.com/understanding_vm_swappiness/
[3]: https://docs.docker.com/config/containers/resource_constraints/#--memory-swappiness-details
[4]: https://unix.stackexchange.com/questions/2658/why-use-swap-when-there-is-more-than-enough-free-space-in-ram
@avtikhon avtikhon added the CI/CD label Mar 25, 2021
avtikhon added a commit to tarantool/tarantool that referenced this issue Mar 25, 2021
avtikhon added a commit to tarantool/tarantool that referenced this issue Mar 25, 2021
avtikhon added a commit to tarantool/tarantool that referenced this issue Mar 28, 2021
GitHub Actions provides hosts for Linux-based runners in the following
configuration:

  2 cores
  7 GB memory
  4 GB swap memory

To avoid issues with tests hanging or slowing down under high memory
use, as in [1], the host configuration must avoid using swap memory.
All of the test workflows run inside Docker containers. This patch
sets memory limits in the docker run configuration based on the
current GitHub Actions hosts: 7 GB of memory with no additional swap.

Checked 8 full runs (29 workflows in each run used the change) and got
a single failed test, in the gevent() routine in test-run. This result
is much better than without this patch, when 3-4 workflows failed on
each full run.

This could happen because swap began to be used after 40% of RAM was
in use:

  cat /sys/fs/cgroup/memory/memory.swappiness
  60

The default vm.swappiness value of 60 represents the percentage of
free memory left before swap is activated. The lower the value, the
less swapping is used and the more memory pages are kept in physical
memory.

This swappiness value is not well tuned for performance testing, as
suggested in [2]. For performance testing it is better to use lower
values, around 10%, or better yet not to use swap at all.

For GitHub Actions host configurations with 7 GB of RAM this means
that swap began to be used after 2.8 GB of RAM was in use. But some
tests, such as 'box/net_msg_max.test.lua', use 2.5 GB of RAM, and
memory fragmentation could lead to swap use after such a test run [3].

To fix the issue, 3 changes were made:

 - For jobs that run tests, use actions/environment, and don't use the
   GitHub Actions container tag, the 'swapoff -a' command was added to
   the actions/environment action.

 - For jobs that run tests and use the GitHub Actions container tag,
   the previous solution doesn't work. It was decided to hardcode the
   memory value based on the 7 GB memory size found on the GitHub
   Actions hosts. It was set for the GitHub container tag as
   additional options:
     options: '--init --memory=7G --memory-swap=7G'
   These changes are temporary, until these container tags are removed
   while resolving the tarantool/tarantool-qa#101 issue, for the
   workflows:
     debug_coverage
     release
     release_asan_clang11
     release_clang
     release_lto
     release_lto_clang11
     static_build
     static_build_cmake_linux

 - For OSX, switching off swap was done with the command:

Closes tarantool/tarantool-qa#99

[1]: tarantool/tarantool-qa#93
[2]: https://linuxhint.com/understanding_vm_swappiness/
[3]: https://unix.stackexchange.com/questions/2658/why-use-swap-when-there-is-more-than-enough-free-space-in-ram
avtikhon added a commit to tarantool/tarantool that referenced this issue Mar 28, 2021
avtikhon added a commit to tarantool/tarantool that referenced this issue Mar 29, 2021
avtikhon added a commit to tarantool/tarantool that referenced this issue Mar 29, 2021
GitHub Actions provides hosts for Linux-based runners in the following
configuration:

  2 cores
  7 GB memory
  4 GB swap memory

To avoid issues with tests hanging or slowing down under high memory
use, as in [1], the host configuration must avoid using swap memory.
All of the test workflows run inside Docker containers. This patch
sets memory limits in the docker run configuration based on the
current GitHub Actions hosts: 7 GB of memory with no additional swap.

Checked 8 full runs (29 workflows in each run used the change) and got
a single failed test, in the gevent() routine in test-run. This result
is much better than without this patch, when 3-4 workflows failed on
each full run.

This could happen because swap began to be used after 40% of RAM was
in use:

  cat /sys/fs/cgroup/memory/memory.swappiness
  60

The default vm.swappiness value of 60 represents the percentage of
free memory left before swap is activated. The lower the value, the
less swapping is used and the more memory pages are kept in physical
memory.

This swappiness value is not well tuned for performance testing, as
suggested in [2]. For performance testing it is better to use lower
values, around 10%, or better yet not to use swap at all.

For GitHub Actions host configurations with 7 GB of RAM this means
that swap began to be used after 2.8 GB of RAM was in use. But some
tests, such as 'box/net_msg_max.test.lua', use 2.5 GB of RAM, and
memory fragmentation could lead to swap use after such a test run [3].

It was also found that the disk cache could use some RAM, which
likewise caused memory to fill up quickly and swapping to start. The
cache can be periodically dropped from memory [4] by setting the
'drop_caches' system value, but that won't fix the overall issue with
swap use.

Beyond freeing cached pages in RAM, another kernel option,
'vfs_cache_pressure', can be tuned [5][6]. This percentage value
controls the tendency of the kernel to reclaim the memory used for
caching directory and inode objects. Increasing it significantly
beyond the default value of 100 may have a negative performance
impact: the reclaim code needs to take various locks to find freeable
directory and inode objects, and with 'vfs_cache_pressure=1000' it
will look for ten times more freeable objects than there are. This
patch doesn't make that change, but it can be done as a follow-up.

To fix the issue, the following changes were made:

 - For jobs that run tests, use actions/environment, and don't use the
   GitHub Actions container tag, the 'sudo swapoff -a' command was
   added to the actions/environment action.

 - For jobs that run tests and use the GitHub Actions container tag,
   the previous solution doesn't work. It was decided to hardcode the
   memory value based on the 7 GB memory size found on the GitHub
   Actions hosts. It was set for the GitHub container tag as
   additional options:
     options: '--init --memory=7G --memory-swap=7G'
   These changes are temporary, until these container tags are removed
   while resolving the tarantool/tarantool-qa#101 issue, for the
   workflows:
     debug_coverage
     release
     release_asan_clang11
     release_clang
     release_lto
     release_lto_clang11
     static_build
     static_build_cmake_linux

 - For VMware VMs, such as the FreeBSD ones, the 'sudo swapoff -a'
   command was added before the build commands.

 - For OSX, switching off swap was done with the command:

Closes tarantool/tarantool-qa#99

[1]: tarantool/tarantool-qa#93
[2]: https://linuxhint.com/understanding_vm_swappiness/
[3]: https://unix.stackexchange.com/questions/2658/why-use-swap-when-there-is-more-than-enough-free-space-in-ram
[4]: https://kubuntu.ru/node/13082
[5]: https://www.kernel.org/doc/Documentation/sysctl/vm.txt
[6]: http://devhead.ru/read/uskorenie-raboty-linux
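The cache-related tunables mentioned above can be inspected and
adjusted roughly as follows (the value 200 is an arbitrary
illustration, not a recommendation from the patch; the tuning commands
require root and are shown commented out):

```shell
# Current cache-reclaim tendency; the default is 100.
cat /proc/sys/vm/vfs_cache_pressure

# As root, drop the page cache, dentries and inodes; this frees RAM
# held by the disk cache but does not by itself prevent swap use:
#   sync && echo 3 | sudo tee /proc/sys/vm/drop_caches
#
# As root, make the kernel reclaim dentry/inode caches more eagerly;
# values far above 100 can hurt performance:
#   sudo sysctl vm.vfs_cache_pressure=200
```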
avtikhon added a commit to tarantool/tarantool that referenced this issue Mar 29, 2021
GitHub Actions provides hosts for Linux-based runners in the following
configuration:

  2 cores
  7 GB memory
  4 GB swap memory

To avoid issues with tests hanging or slowing down under high memory
use, as in [1], the host configuration must avoid using swap memory.
All of the test workflows run inside Docker containers. This patch
sets memory limits in the docker run configuration based on the
current GitHub Actions hosts: 7 GB of memory with no additional swap.

Checked 8 full runs (29 workflows in each run used the change) and got
a single failed test, in the gevent() routine in test-run. This result
is much better than without this patch, when 3-4 workflows failed on
each full run.

This could happen because swap began to be used after 40% of RAM was
in use:

  cat /sys/fs/cgroup/memory/memory.swappiness
  60

The default vm.swappiness value of 60 represents the percentage of
free memory left before swap is activated. The lower the value, the
less swapping is used and the more memory pages are kept in physical
memory.

This swappiness value is not well tuned for performance testing, as
suggested in [2]. For performance testing it is better to use lower
values, around 10%, or better yet not to use swap at all.

For GitHub Actions host configurations with 7 GB of RAM this means
that swap began to be used after 2.8 GB of RAM was in use. But some
tests, such as 'box/net_msg_max.test.lua', use 2.5 GB of RAM, and
memory fragmentation could lead to swap use after such a test run [3].

It was also found that the disk cache could use some RAM, which
likewise caused memory to fill up quickly and swapping to start. The
cache can be periodically dropped from memory [4] by setting the
'drop_caches' system value, but that won't fix the overall issue with
swap use.

Beyond freeing cached pages in RAM, another kernel option,
'vfs_cache_pressure', can be tuned [5][6]. This percentage value
controls the tendency of the kernel to reclaim the memory used for
caching directory and inode objects. Increasing it significantly
beyond the default value of 100 may have a negative performance
impact: the reclaim code needs to take various locks to find freeable
directory and inode objects, and with 'vfs_cache_pressure=1000' it
will look for ten times more freeable objects than there are. This
patch doesn't make that change, but it can be done as a follow-up.

To fix the issue, the following changes were made:

 - For jobs that run tests, use actions/environment, and don't use the
   GitHub Actions container tag, the 'sudo swapoff -a' command was
   added to the actions/environment action.

 - For jobs that run tests and use the GitHub Actions container tag,
   the previous solution doesn't work. It was decided to hardcode the
   memory value based on the 7 GB memory size found on the GitHub
   Actions hosts. It was set for the GitHub container tag as
   additional options:
     options: '--init --memory=7G --memory-swap=7G'
   These changes are temporary, until these container tags are removed
   while resolving the tarantool/tarantool-qa#101 issue, for the
   workflows:
     debug_coverage
     release
     release_asan_clang11
     release_clang
     release_lto
     release_lto_clang11
     static_build
     static_build_cmake_linux

 - For VMware VMs, such as the FreeBSD ones, the 'sudo swapoff -a'
   command was added before the build commands.

 - For OSX, switching off swap is currently not possible, because
   System Integrity Protection (SIP) would have to be disabled [7],
   and we don't have such access.

Closes tarantool/tarantool-qa#99

[1]: tarantool/tarantool-qa#93
[2]: https://linuxhint.com/understanding_vm_swappiness/
[3]: https://unix.stackexchange.com/questions/2658/why-use-swap-when-there-is-more-than-enough-free-space-in-ram
[4]: https://kubuntu.ru/node/13082
[5]: https://www.kernel.org/doc/Documentation/sysctl/vm.txt
[6]: http://devhead.ru/read/uskorenie-raboty-linux
[7]: https://osxdaily.com/2010/10/08/mac-virtual-memory-swap/
avtikhon added a commit to tarantool/tarantool that referenced this issue Mar 29, 2021
GitHub Actions provides hosts for Linux-based runners in the following
configuration:

  2 cores
  7 GB memory
  4 GB swap memory

To avoid issues with tests hanging or slowing down under high memory
use, as in [1], the host configuration must avoid using swap memory.
All of the test workflows run inside Docker containers. This patch
sets memory limits in the docker run configuration based on the
current GitHub Actions hosts: 7 GB of memory with no additional swap.

Checked 10 full runs (29 workflows in each run used the change) and
got a single failed test, in the gevent() routine in test-run. This
result is much better than without this patch, when 3-4 workflows
failed on each full run.

This could happen because swap began to be used after 40% of RAM was
in use:

  cat /sys/fs/cgroup/memory/memory.swappiness
  60

The default vm.swappiness value of 60 represents the percentage of
free memory left before swap is activated. The lower the value, the
less swapping is used and the more memory pages are kept in physical
memory.

This swappiness value is not well tuned for performance testing, as
suggested in [2]. For performance testing it is better to use lower
values, around 10%, or better yet not to use swap at all.

For GitHub Actions host configurations with 7 GB of RAM this means
that swap began to be used after 2.8 GB of RAM was in use. But some
tests, such as 'box/net_msg_max.test.lua', use 2.5 GB of RAM, and
memory fragmentation could lead to swap use after such a test run [3].

Also found that disk cache could use some RAM and it also was the cause
of fast memory use and start swapping. It can be periodically dropped
from memory [4] using 'drop_cache' system value setup, but it won't fix
the overall issue with swap use.

After freeing the cached pages in RAM, another kernel option,
'vfs_cache_pressure', can be tuned [5][6]. This percentage value
controls the tendency of the kernel to reclaim the memory used for
caching directory and inode objects. Increasing it significantly
beyond the default value of 100 may have a negative performance
impact: the reclaim code needs to take various locks to find freeable
directory and inode objects, and with 'vfs_cache_pressure=1000' it
will look for ten times more freeable objects than there are. This
patch doesn't make this change, but it can be done as a follow-up.
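The current value can be read the same way (this patch leaves it
untouched):

```shell
# Default is 100; values far above it make dentry/inode reclaim scan
# for more freeable objects than actually exist, as noted in [5].
cat /proc/sys/vm/vfs_cache_pressure 2>/dev/null || echo 100
```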

To fix the issue the following changes were made:

 - For jobs that run tests, use actions/environment and don't use a
   GitHub Actions container tag, the 'sudo swapoff -a' command was
   added to the actions/environment action.

 - For jobs that run tests and use a GitHub Actions container tag the
   previous solution doesn't work. It was decided to hard-code the
   memory value based on the 7 GB memory size found on GitHub Actions
   hosts. It was set for the GitHub container tag as additional
   options:
     options: '--init --memory=7G --memory-swap=7G'
   Setting '--memory-swap' equal to '--memory' leaves the container no
   swap at all. These changes are temporary, until the container tags
   are removed while resolving the tarantool/tarantool-qa#101 issue,
   for the following workflows:
     debug_coverage
     release
     release_asan_clang11
     release_clang
     release_lto
     release_lto_clang11
     static_build
     static_build_cmake_linux

 - For VMware VMs, such as the FreeBSD ones, the 'sudo swapoff -a'
   command was added before the build commands.

 - For OSX on GitHub Actions hosts swapping is already disabled:
     sysctl vm.swapusage
     vm.swapusage: total = 0.00M  used = 0.00M  free = 0.00M  (encrypted)
   Manually switching swap off is currently not possible anyway, since
   that requires disabling System Integrity Protection (SIP) [7], and
   we don't have such access on GitHub Actions hosts. On local hosts
   it must be done manually with [8]:
     sudo nvram boot-args="vm_compressor=2"
   A swap status check was added to make sure the host is configured
   correctly:
     sysctl vm.swapusage
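On the Linux runners the analogous post-'swapoff' check can read
SwapTotal straight from procfs:

```shell
# After 'sudo swapoff -a' this should report 'SwapTotal: 0 kB'; any
# other value means the runner still has swap configured.
grep '^SwapTotal:' /proc/meminfo 2>/dev/null || echo 'SwapTotal: unknown'
```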

Closes tarantool/tarantool-qa#99

[1]: tarantool/tarantool-qa#93
[2]: https://github.com/torvalds/linux/blob/1e43c377a79f9189fea8f2711b399d4e8b4e609b/Documentation/admin-guide/sysctl/vm.rst#swappiness
[3]: https://unix.stackexchange.com/questions/2658/why-use-swap-when-there-is-more-than-enough-free-space-in-ram
[4]: https://kubuntu.ru/node/13082
[5]: https://www.kernel.org/doc/Documentation/sysctl/vm.txt
[6]: http://devhead.ru/read/uskorenie-raboty-linux
[7]: https://osxdaily.com/2010/10/08/mac-virtual-memory-swap/
[8]: https://gist.github.com/dan-palmer/3082266#gistcomment-3667471
avtikhon added a commit to tarantool/tarantool that referenced this issue Mar 29, 2021
avtikhon added a commit to tarantool/tarantool that referenced this issue Mar 29, 2021
avtikhon added a commit to tarantool/tarantool that referenced this issue Mar 29, 2021
@avtikhon avtikhon added the 5sp label Mar 30, 2021
avtikhon added a commit to tarantool/tarantool that referenced this issue Mar 30, 2021
avtikhon added a commit to tarantool/tarantool that referenced this issue Mar 30, 2021
avtikhon added a commit to tarantool/tarantool that referenced this issue Mar 30, 2021
avtikhon added a commit to tarantool/tarantool that referenced this issue Mar 31, 2021
Github Actions provides hosts for Linux base runners in the following
configurations:

  2 Cores
  7 Gb memory
  4 Gb swap memory

To avoid of issues with hanging/slowing tests on high memory use
like [1], hosts configurations must avoid of swap memory use. All
of the tests workflows run inside dockers containers. This patch
sets in docker run configurations memory limits based on current
github actions hosts - 7Gb memory w/o swap memory increase.

Checked 10 full runs (29 workflows in each run used the change) and
got single failed test on gevent() routine in test-run. This result much
better than w/o this patch when 3-4 of workflows fail on each full run.

It could happen because swappiness set to default value:

  cat /sys/fs/cgroup/memory/memory.swappiness
  60

From documentation on swappiness [2]:

  This control is used to define the rough relative IO cost of swapping
  and filesystem paging, as a value between 0 and 200. At 100, the VM
  assumes equal IO cost and will thus apply memory pressure to the page
  cache and swap-backed pages equally; lower values signify more
  expensive swap IO, higher values indicates cheaper.
  Keep in mind that filesystem IO patterns under memory pressure tend to
  be more efficient than swap's random IO. An optimal value will require
  experimentation and will also be workload-dependent.

We may try to tune how often anonymous pages are swapped using the
swappiness parameter, but our goal is to stabilize timings (and make
them as predictable as possible), so the best option is to disable swap
at all and work on descreasing memory consumption for huge tests.

For Github Actions host configurations with 7Gb RAM it means that after
2.8Gb RAM was used swap began to use. But in testing we have some tests
that use 2.5Gb of RAM like 'box/net_msg_max.test.lua' and memory
fragmentation could cause after the test run swap use [3].

Also found that disk cache could use some RAM and it also was the cause
of fast memory use and start swapping. It can be periodically dropped
from memory [4] using 'drop_cache' system value setup, but it won't fix
the overall issue with swap use.

After freed cached pages in RAM another system kernel option can be
tuned [5][6] 'vfs_cache_pressure'. This percentage value controls the
tendency of the kernel to reclaim the memory which is used for caching
of directory and inode objects. Increasing it significantly beyond
default value of 100 may have negative performance impact. Reclaim code
needs to take various locks to find freeable directory and inode
objects. With 'vfs_cache_pressure=1000', it will look for ten times more
freeable objects than there are. This patch won't do this change, but
it can be done as the next change.

To fix the issue the following changes were made:

 - For jobs that run tests, use actions/environment, and don't use a
   Github Actions container tag, the 'sudo swapoff -a' command was
   added to the actions/environment action.

 - For jobs that run tests and use a Github Actions container tag the
   previous solution doesn't work. Instead, the memory limit was
   hard-coded to the 7Gb memory size found on Github Actions hosts,
   set for the Github container tag as additional options:
     options: '--init --memory=7G --memory-swap=7G'
   These changes are temporary until the container tags are removed
   as part of resolving tarantool/tarantool-qa#101, for the following
   workflows:
     debug_coverage
     release
     release_asan_clang11
     release_clang
     release_lto
     release_lto_clang11
     static_build
     static_build_cmake_linux

 - For VMware VMs, such as those running FreeBSD, the 'sudo swapoff -a'
   command was added before the build commands.

 - On OSX Github Actions hosts swapping is already disabled:
     sysctl vm.swapusage
     vm.swapusage: total = 0.00M  used = 0.00M  free = 0.00M  (encrypted)
   Manually switching swap off there is currently not possible, since
   System Integrity Protection (SIP) would have to be disabled [7] and
   we don't have such access on Github Actions hosts. On local hosts
   it can be done manually with [8]:
     sudo nvram boot-args="vm_compressor=2"
   A swap status check was added to be sure the host is correctly
   configured:
     sysctl vm.swapusage
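Putting the first two items together, a hypothetical workflow fragment
could look like the one below (job names, the image name, and the build
command are illustrative placeholders, not the actual Tarantool workflow
files):

```yaml
jobs:
  # Container-tag case: cap the container at host RAM and, by setting
  # --memory-swap equal to --memory, forbid any swap use.
  release:
    runs-on: ubuntu-20.04
    container:
      image: example/testing-image        # illustrative image name
      options: '--init --memory=7G --memory-swap=7G'
    steps:
      - uses: actions/checkout@v2
      - run: make test                    # illustrative test command

  # Non-container case: switch the host's swap off directly.
  release_no_container:
    runs-on: ubuntu-20.04
    steps:
      - uses: actions/checkout@v2
      - run: sudo swapoff -a
      - run: make test
```

With --memory-swap equal to --memory, Docker gives the container no swap
at all, so memory pressure surfaces as an allocation failure or OOM kill
inside the container instead of unpredictable swapping.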

Closes tarantool/tarantool-qa#99

[1]: tarantool/tarantool-qa#93
[2]: https://github.com/torvalds/linux/blob/1e43c377a79f9189fea8f2711b399d4e8b4e609b/Documentation/admin-guide/sysctl/vm.rst#swappiness
[3]: https://unix.stackexchange.com/questions/2658/why-use-swap-when-there-is-more-than-enough-free-space-in-ram
[4]: https://kubuntu.ru/node/13082
[5]: https://www.kernel.org/doc/Documentation/sysctl/vm.txt
[6]: http://devhead.ru/read/uskorenie-raboty-linux
[7]: https://osxdaily.com/2010/10/08/mac-virtual-memory-swap/
[8]: https://gist.github.com/dan-palmer/3082266#gistcomment-3667471
avtikhon added a commit to tarantool/tarantool that referenced this issue Mar 31, 2021
avtikhon added a commit to tarantool/tarantool that referenced this issue Mar 31, 2021
avtikhon added a commit to tarantool/tarantool that referenced this issue Mar 31, 2021
avtikhon added a commit to tarantool/tarantool that referenced this issue Mar 31, 2021
kyukhin pushed a commit to tarantool/tarantool that referenced this issue Mar 31, 2021
kyukhin pushed a commit to tarantool/tarantool that referenced this issue Mar 31, 2021

(cherry picked from commit fd6ee6d)
kyukhin pushed a commit to tarantool/tarantool that referenced this issue Mar 31, 2021
kyukhin pushed a commit to tarantool/tarantool that referenced this issue Mar 31, 2021
iskander232 pushed a commit to tarantool/tarantool that referenced this issue Apr 2, 2021