I have a distributed OpenWhisk setup, and when I try to execute more than 30 requests at once as a single user, I get the following error:
error: Unable to invoke action 'prime-number': Too many concurrent
requests in flight (count: 30, allowed: 30)
Any idea how I could increase this number?
If you are deploying OpenWhisk with the Ansible method, you can deploy with the following ansible-playbook variable overrides:
-e limit_invocations_per_minute=999999 -e limit_invocations_concurrent=999999
If you are doing another type of deployment, the controller container needs to be deployed with the corresponding environment variables set to override any of these related values; see the sketch after the list:
"LIMITS_ACTIONS_INVOKES_PERMINUTE": "{{ limits.invocationsPerMinute }}"
"LIMITS_ACTIONS_INVOKES_CONCURRENT": "{{ limits.concurrentInvocations }}"
"LIMITS_TRIGGERS_FIRES_PERMINUTE": "{{ limits.firesPerMinute }}"
"LIMITS_ACTIONS_SEQUENCE_MAXLENGTH": "{{ limits.sequenceMaxLength }}"
Adding to @csantanapr's answer, you can add them to the openwhisk.yml playbook:
ansible-playbook -i environments/<environment> -e limit_invocations_per_minute=999999 -e limit_invocations_concurrent=999999 openwhisk.yml
If you are using the non-Ansible method to run OpenWhisk, you can refer to the issue here. You can also refer to this.
You can modify the file core/standalone/src/main/resources/standalone.conf. If you look at this part:
config {
  controller-instances = 1
  limits-actions-sequence-maxLength = 50
  limits-triggers-fires-perMinute = 60
  limits-actions-invokes-perMinute = 60
  limits-actions-invokes-concurrent = 30
}
you can modify the value of limits-actions-invokes-concurrent and any of the other limits.
Then, when you run OpenWhisk, you need to supply the file via the -c parameter, passed through Gradle's --args parameter, like so:
sudo ./gradlew core:standalone:bootRun --args='-c /path/to/openwhisk/core/standalone/src/main/resources/standalone.conf'
That's it.
You can also "build then run", like so:
sudo ./gradlew :core:standalone:build
sudo java -jar ./bin/openwhisk-standalone.jar -c /path/to/openwhisk/core/standalone/src/main/resources/standalone.conf
Below is the command generated by the Ansible plugin for Jenkins.
ansible-playbook /app/stop.yml -i /app/my-hosts -l test_west -e app_name=test -e environments=west -v
Here is my Ansible inventory hosts file:
cat my-hosts
[test_west]
10.0.9.88
10.0.9.89
The -l option limits the run to the hosts in the inventory group 'test_west'.
My question is: what do I have to put in the playbook for hosts?
My playbook looks like the below; however, this does not seem correct or needed, as the hosts are matched using the -l parameter passed to ansible-playbook:
---
- hosts: "{{ app_name + '_' + environments }}
Can you please suggest what I should set hosts: to in my Ansible playbook so it matches the -l argument, i.e. 'test_west'?
You should use: - hosts: "{{ app_name }}_{{ environments }}"
Sample output:
[root@greenhat-30 tests]$ ansible-playbook -i hosts test.yml -e app_name=test -e environments=west
[WARNING]: Could not match supplied host pattern, ignoring: test_west
PLAY [test_west] *******************************************************************************************************************************************************************************************************
skipping: no hosts matched
PLAY RECAP *************************************************************************************************************************************************************************************************************
[http_offline@greenhat-30 tests]$
You can see the group selected is PLAY [test_west].
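As a quick sanity check, you can also ask Ansible which hosts a pattern resolves to before running the play; for example, against the inventory above:
# List the hosts matched by the group pattern without running anything.
ansible -i my-hosts test_west --list-hosts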
I'm using Ansible to execute some tasks on a local Docker container, as such:
- hosts: name-of-docker-container
  connection: docker
  tasks:
    - name: setting up ssh_config
      template:
        src: ssh_config.j2
        dest: /home/user/.ssh/ssh_config
        mode: "0600"
        owner: user
        group: user
Something as simple as this takes 2-5 seconds to run. Shouldn't it take less than a second? How can I make Ansible run faster? I've tried pipelining, but it doesn't seem to help:
ansible-playbook -v -e 'pipelining=True' -i inventories/staging/hosts.yml staging-deploy.yml
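For what it's worth, pipelining is a configuration setting rather than a playbook variable, so passing it with -e has no effect; it is normally enabled in ansible.cfg or via an environment variable, and it targets the SSH connection plugin, so it may not apply to connection: docker at all. A sketch of the environment-variable form:
# Enable pipelining for this run via the environment instead of -e.
ANSIBLE_PIPELINING=True ansible-playbook -v -i inventories/staging/hosts.yml staging-deploy.yml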
In my CI chain I execute end-to-end tests after a "docker-compose up". Unfortunately, my tests often fail because even though the containers are properly started, the programs contained in my containers are not.
Is there an elegant way to verify that my setup is completely started before running my tests?
You could poll the required services to confirm they are responding before running the tests.
curl has inbuilt retry logic or it's fairly trivial to build retry logic around some other type of service test.
#!/bin/bash
# Poll a URL until it responds; fail the script if it never comes up
# within the allotted time.
await(){
  local url=${1}
  local seconds=${2:-30}
  curl --max-time 5 --retry 60 --retry-delay 1 \
    --retry-max-time ${seconds} "${url}" \
    || exit 1
}
docker-compose up -d
await http://container_ms1:3000
await http://container_ms2:3000
run-ze-tests
The alternative to polling is an event-based system.
If all your services push notifications to an external service (scaeda gave the example of a log file, or you could use something like Amazon SNS), each service emits a "started" event; you can then subscribe to those events and run whatever you need once everything has started.
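A minimal sketch of the log-file variant, assuming each service appends a marker line to a shared log; the file path and marker strings below are illustrative:
# Block until a service's "started" marker appears in the shared log.
wait_for_event(){
  local marker=${1}
  local logfile=${2:-./startup.log}
  # tail -F keeps following the file; grep -q exits on the first match.
  tail -n +1 -F "${logfile}" | grep -q "${marker}"
}
wait_for_event "ms1 started"
wait_for_event "ms2 started"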
Docker 1.12 did add the HEALTHCHECK build command. Maybe this is available via Docker Events?
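If your images declare a HEALTHCHECK, one way to consume it from the shell is to poll the reported health state; a sketch (the container name is illustrative):
# Wait until Docker reports the container as healthy.
until [ "$(docker inspect --format '{{.State.Health.Status}}' container_ms1)" = "healthy" ]; do
  sleep 1
done
# Health transitions are also emitted on the events stream:
# docker events --filter event=health_status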
If you have control over the Docker engine in your CI setup, you could execute docker logs [Container_Name] and read out the last line, which could be emitted by your application.
RESULT=$(docker logs [Container_Name] 2>&1 | grep [Search_String])
Example log output:
Agent pid 13
Enter passphrase (empty for no passphrase): Enter same passphrase again: Identity added: id_rsa (id_rsa)
#host SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.6
#host SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.6
To parse a specific line:
RESULT=$(docker logs ssh_jenkins_test 2>&1 | grep Enter)
Result:
Enter passphrase (empty for no passphrase): Enter same passphrase again: Identity added: id_rsa (id_rsa)
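To turn that one-shot grep into a readiness gate, you could retry it until the expected line shows up; a sketch using the names from this example:
# Poll the container logs until the marker line appears.
until docker logs ssh_jenkins_test 2>&1 | grep -q "Identity added"; do
  sleep 1
done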
I have a .travis.yml using Trusty Beta VMs which tries to push to Docker Hub. The relevant sections are:
sudo: required
dist: trusty
language: cpp
compiler:
  - gcc
services:
  - docker
env:
  global:
    - secure: "i...=" # DOCKER_EMAIL
    - secure: "Z...=" # DOCKER_USER
    - secure: "p...=" # DOCKER_PASSWORD
<snip>
after_success:
  - docker login -e $DOCKER_EMAIL -u $DOCKER_USERNAME -p $DOCKER_PASSWORD
  - make docker-r-deliver
The log is giving me:
<snip>
Setting environment variables from .travis.yml
$ export DOCKER_EMAIL=[secure]
$ export DOCKER_USER=[secure]
$ export DOCKER_PASSWORD=[secure]
<snip>
$ docker login -e $DOCKER_EMAIL -u $DOCKER_USERNAME -p $DOCKER_PASSWORD
Password:
No output has been received in the last 10m0s, this potentially indicates a stalled build or something wrong with the build itself.
The build has been terminated
There is a similar issue here: https://github.com/travis-ci/travis-ci/issues/5387. But I don't think that is it; my password contains no special characters. I tried the docker login in before_install: same issue, except, weirdly, it prompted for the username.
Edit
docker login -e foo@example.com -u fooo -p barty does not hang (gives the expected Error response from daemon: Wrong login/password, please try again), suggesting something is up with the env vars.
Edit
Well, this is embarrassing, I was setting DOCKER_USER but attempting to use DOCKER_USERNAME! That would do it!
Have you tried the exact syntax given in the TravisCI documentation?
docker login -e="$DOCKER_EMAIL" -u="$DOCKER_USERNAME" -p="$DOCKER_PASSWORD"
My bad! As per the edit, I was setting DOCKER_USER but attempting to use DOCKER_USERNAME!
I was reading a blog post on the Percona Monitoring Plugins and how you can monitor a Galera cluster using the pmp-check-mysql-status plugin. Below is the link to the blog post demonstrating that:
https://www.percona.com/blog/2013/10/31/percona-xtradb-cluster-galera-with-percona-monitoring-plugins/
The commands in this tutorial are run on the command line. I wish to try these commands in a Nagios .cfg file, e.g. monitor.cfg. How do I write the services for the commands used in this tutorial?
This was my attempt, and I cannot figure out the best parameters to use for check_command in the service definition. I suspect that is where the problem is.
So inside my /etc/nagios3/conf.d/monitor.cfg file, I have the following:
define host{
    use        generic-host
    host_name  percona-server
    alias      percona
    address    127.0.0.1
}

## Check for a Primary Cluster
define command{
    command_name  check_mysql_status
    command_line  /usr/lib/nagios/plugins/pmp-check-mysql-status -x wsrep_cluster_status -C == -T str -c non-Primary
}

define service{
    use                  generic-service
    hostgroup_name       mysql-servers
    service_description  Cluster
    check_command        pmp-check-mysql-status!wsrep_cluster_status!==!str!non-Primary
}
When I run Nagios and go to monitor it, I get this message in the Nagios dashboard:
status: UNKNOWN; /usr/lib/nagios/plugins/pmp-check-mysql-status: 31: shift: can't shift that many
Have you verified that:
/usr/lib/nagios/plugins/pmp-check-mysql-status -x wsrep_cluster_status -C == -T str -c non-Primary
works fine on the command line on the target host? I suspect there's a shell escaping issue with the ==.
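To check, you could run the plugin by hand on the target host and inspect the exit status (a minimal sketch):
# Run the same check the service would run, then inspect the exit status
# (Nagios convention: 0=OK, 1=WARNING, 2=CRITICAL, 3=UNKNOWN).
/usr/lib/nagios/plugins/pmp-check-mysql-status -x wsrep_cluster_status -C == -T str -c non-Primary
echo "exit code: $?"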
Does this work well for you? /usr/lib64/nagios/plugins/pmp-check-mysql-status -x wsrep_flow_control_paused -w 0.1 -c 0.9