I am trying to use Pumba to isolate a container from the Docker network. I am on Windows, and the command I am using is the following:
docker run \
-d \
--name pumba \
--network docker_default \
-v //var/run/docker.sock:/var/run/docker.sock \
gaiaadm/pumba netem \
--tc-image="gaiadocker/iproute2" \
--duration 1000s \
loss \
-p 100 \
753_mycontainer_1
I start the container to be isolated using docker-compose, with the restart property set to always. I want Pumba to keep blocking the container's networking after each restart.
How can I achieve this behavior?
Thanks.
I managed to achieve the result by letting Docker restart the Pumba container. I reduced the --duration parameter to 30s, which is the average time it takes my 753_mycontainer_1 container to stop and restart itself.
This way the two containers restart more or less synchronously, producing a real chaos test in which the 753_mycontainer_1 container randomly loses the network.
docker run \
-d \
--name pumba \
--restart always \
--network docker_default \
-v //var/run/docker.sock:/var/run/docker.sock gaiaadm/pumba \
netem \
--tc-image="gaiadocker/iproute2" \
--duration 30s \
loss \
-p 100 \
753_mycontainer_1
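To verify that the packet loss is actually re-applied on each restart cycle, you can follow the Pumba container's logs (a quick sanity check, not part of the original setup):
# watch Pumba re-target 753_mycontainer_1 after every restart
docker logs -f pumba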
The container was created with the following command:
docker run --gpus '"'device=$CUDA_VISIBLE_DEVICES'"' --ipc=host --rm -it \
--mount src=$(pwd),dst=/clipbert,type=bind \
--mount src=$OUTPUT,dst=/storage,type=bind \
--mount src=$PRETRAIN_DIR,dst=/pretrain,type=bind,readonly \
--mount src=$TXT_DB,dst=/txt,type=bind,readonly \
--mount src=$IMG_DIR,dst=/img,type=bind,readonly \
-e NVIDIA_VISIBLE_DEVICES=$CUDA_VISIBLE_DEVICES \
-w /clipbert jayleicn/clipbert:latest \
bash -c "source /clipbert/setup.sh && bash" \
But upon exit and running docker ps -a, the container is not listed, and it seems like the container was only temporarily created. This has not happened in my previous experience with Docker; what may the reason be?
The --rm option tells docker run to automatically remove the container when it exits.
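A minimal illustration, using alpine purely as an example image: with --rm the exited container disappears immediately; without it, the container stays in Exited status until removed by hand.
# with --rm: nothing is left behind after exit
docker run --rm --name tmp_rm alpine true
docker ps -a --filter name=tmp_rm     # prints no container

# without --rm: the exited container is still listed
docker run --name tmp_keep alpine true
docker ps -a --filter name=tmp_keep   # shows tmp_keep as Exited
docker rm tmp_keep                    # manual cleanup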
I create two containers (which provide a RESTful API) via the following docker commands.
The only difference is --cpuset-cpus="0", which pins container B to a specific core.
However, I found that container A crashes if I submit over 150 HTTP requests.
When I check the memory via docker stats for container A, memory usage gradually increases with the number of HTTP requests received, until the container finally crashes.
But if I set the cpuset-cpus argument on container B, memory usage is not affected by the HTTP requests and container B does not crash.
Does anybody know why the container crashes without setting cpuset-cpus?
By the way, my server has 32 CPU cores and 512 GB of RAM.
Container A:
docker run \
-d \
-e PYTHONPATH=/app/custom_component:$PYTHONPATH \
-v /home/hibot_agents/proj.1845.cache:/app:rw \
--name test_1845 \
--memory="512M" \
--memory-swap="1g" \
--cpus="0.25" \
--network="chatbot-network" \
images/chatbot-server:2.0.0-full \
rasa run --enable-api --endpoints endpoints.yml -vv
Container B:
docker run \
-d \
-e PYTHONPATH=/app/custom_component:$PYTHONPATH \
-v /home/hibot_agents/proj.1845.cache:/app:rw \
--name test_1845 \
--memory="512M" \
--memory-swap="1g" \
--cpus="0.25" \
--cpuset-cpus="0" \
--network="chatbot-network" \
images/chatbot-server:2.0.0-full \
rasa run --enable-api --endpoints endpoints.yml -vv
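For reference, a minimal way to reproduce the observation is to fire a batch of requests and take a memory snapshot afterwards (the URL is an assumption; rasa's HTTP API defaults to port 5005, and the container's port would need to be published or reached over chatbot-network):
# send 150 requests, then sample the container's memory once
for i in $(seq 1 150); do
  curl -s -o /dev/null http://localhost:5005/
done
docker stats --no-stream test_1845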
I am setting up ksql-cli with Confluent version 3.3.0 in the following way:
#zookeeper
docker run -d -it \
--net=host \
--name=zookeeper \
-e ZOOKEEPER_CLIENT_PORT=32181 \
confluentinc/cp-zookeeper:3.3.0
#kafka
docker run -d \
--net=host \
--name=kafka \
-e KAFKA_ZOOKEEPER_CONNECT=localhost:32181 \
-e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://localhost:29092 \
-e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
confluentinc/cp-kafka:3.3.0
#schema-registry
docker run -d \
--net=host \
--name=schema-registry \
-e SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL=localhost:32181 \
-e SCHEMA_REGISTRY_HOST_NAME=localhost \
-e SCHEMA_REGISTRY_LISTENERS=http://localhost:8081 \
confluentinc/cp-schema-registry:3.3.0
I am running the ksql-cli docker image in the following manner:
docker run -it \
--net=host \
--name=ksql-cli \
-e KSQL_CONFIG_DIR="/etc/ksql" \
-e KSQL_LOG4J_OPTS="-Dlog4j.configuration=file:/etc/ksql/log4j-rolling.properties" \
-e STREAMS_BOOTSTRAP_SERVERS=localhost:29092 \
-e STREAMS_SCHEMA_REGISTRY_HOST=localhost \
-e STREAMS_SCHEMA_REGISTRY_PORT=8081 \
confluentinc/ksql-cli:0.5
When I run ksql-cli by going into the container's bash in the following way:
docker exec -it ksql-cli bash
and run ksql-cli in the following way:
./usr/bin/ksql-cli local
it gives me the following error:
Initializing KSQL...
Could not fetch broker information. KSQL cannot initialize AdminClient.
By default, the ksql-cli attempts to connect to the Kafka brokers on localhost:9092. It looks like your setup is using a different port, so you'll need to provide this on the command line, e.g.
./usr/bin/ksql-cli local --bootstrap-server localhost:29092
You'll probably also need to specify the schema registry port, so you may want to use a properties file, e.g.:
./usr/bin/ksql-cli local --properties-file ./ksql.properties
Where ksql.properties has:
bootstrap.servers=localhost:29092
schema.registry.url=localhost:8081
Or provide both on the command line:
./usr/bin/ksql-cli local \
--bootstrap-server localhost:29092 \
--schema.registry.url http://localhost:8081
Note: from KSQL version 4.1 onwards the commands and properties change name. ksql-cli becomes just ksql. The local mode disappears; you'll need to run a ksql-server node or two explicitly. --properties-file becomes --config-file, and schema.registry.url becomes ksql.schema.registry.url.
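Before starting the CLI, it can also be worth confirming that the broker is actually listening on the advertised listener port (assuming nc is available on the host):
# quick connectivity check against the advertised listener
nc -vz localhost 29092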
I am trying to run Enlightenment (https://www.enlightenment.org/start) in a Docker container. Previously Enlightenment was based on X11, but the latest version supports Wayland. From what I found, we can use the -v parameter of the docker run command to start a Docker image like this:
# --net host: may as well YOLO
# --cpuset-cpus 0: control the cpu
# --memory 512mb: max memory it can use
# -v /tmp/.X11-unix: mount the X11 socket
# -e DISPLAY: pass the display
# -v $HOME/Downloads: optional, but nice
# -v $HOME/.config/google-chrome/: if you want to save state
# --device /dev/snd: so we have sound
$ docker run -it \
--net host \
--cpuset-cpus 0 \
--memory 512mb \
-v /tmp/.X11-unix:/tmp/.X11-unix \
-e DISPLAY=unix$DISPLAY \
-v $HOME/Downloads:/root/Downloads \
-v $HOME/.config/google-chrome/:/data \
--device /dev/snd \
--name chrome \
jess/chrome
(Reference: https://blog.jessfraz.com/post/docker-containers-on-the-desktop/)
But this is based on X11. Currently I do not use X11; I use the Wayland-based Enlightenment. How can I show my Enlightenment UI from a Docker container?
According to https://unix.stackexchange.com/questions/330366/how-can-i-run-a-graphical-application-in-a-container-under-wayland, you mount the Wayland socket, such as /run/user/1000/wayland-0, in your docker run command. Here is an extract from https://github.com/duzy/docker-wayland/blob/master/run.sh:
docker run \
--name $container \
-v "$(pwd):/home/user/work" \
--device=/dev/dri/card0:/dev/dri/card0 \
--device=/dev/dri/card1:/dev/dri/card1 \
--device=/dev/dri/controlD64:/dev/dri/controlD64 \
--device=/dev/dri/controlD65:/dev/dri/controlD65 \
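The extract above only shows the GPU device mappings; a minimal sketch of the Wayland socket mount itself (the /run/user/1000 path, UID, and image name are assumptions) could look like:
docker run -it \
  -e XDG_RUNTIME_DIR=/run/user/1000 \
  -e WAYLAND_DISPLAY=wayland-0 \
  -v /run/user/1000/wayland-0:/run/user/1000/wayland-0 \
  --user 1000 \
  my-wayland-image   # hypothetical image containing the Wayland client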
I am running the progrium/consul container with the gliderlabs/registrator container. I am trying to create health checks to monitor whether my Docker containers are up or down. However, I noticed some very strange activity with the health check I was able to make. Here is the command I used to create the health check:
curl -v -X PUT http://$CONSUL_IP_ADDR:8500/v1/agent/check/register -d @/home/myUserName/health.json
Here is my health.json file:
{
  "id": "docker_stuff",
  "name": "echo test",
  "docker_container_id": "4fc5b1296c99",
  "shell": "/bin/bash",
  "script": "echo hello",
  "interval": "2s"
}
First, I noticed that this check would automatically delete the service whenever the container was stopped properly, but would do nothing when the container was stopped improperly (i.e. during a node failure).
Second, I noticed that the docker_container_id did not matter at all; this health check would attach itself to every container running on the consul node it was attached to.
I would like to just have a working TCP or HTTP health test run for every Docker container running on a consul node (yes, I know my above JSON file runs a script; I just created that one following the documentation example). I just want consul to be able to tell if a container is stopped or running, and I don't want my services to delete themselves when a health check fails. How would I do this?
Note: I find the consul documentation on Agent Health Checks very lacking, vague and inaccurate. So please don't just link to it and tell me to go read it. I am looking for a full explanation on exactly how to set up docker health checks the right way.
Update: Here is how to start consul servers with the most current version of the official consul container (right now it uses dev mode; soon I'll update this with the production configuration):
#bootstrap server
docker run -d \
-p 8300:8300 \
-p 8301:8301 \
-p 8301:8301/udp \
-p 8302:8302 \
-p 8302:8302/udp \
-p 8400:8400 \
-p 8500:8500 \
-p 53:53/udp \
--name=dev-consul0 consul agent -dev -ui -client 0.0.0.0
#its IP address on the docker bridge will then be used for joining
#let's say it's 172.17.0.2
#start the other two consul servers, without web ui
docker run -d --name=dev-consul1 \
-p 8300:8300 \
-p 8301:8301 \
-p 8301:8301/udp \
-p 8302:8302 \
-p 8302:8302/udp \
-p 8400:8400 \
-p 8500:8500 \
-p 53:53/udp \
consul agent -dev -join=172.17.0.2
docker run -d --name=dev-consul2 \
-p 8300:8300 \
-p 8301:8301 \
-p 8301:8301/udp \
-p 8302:8302 \
-p 8302:8302/udp \
-p 8400:8400 \
-p 8500:8500 \
-p 53:53/udp \
consul agent -dev -join=172.17.0.2
# then here's your client
docker run -d --net=host --name=client0 \
-e 'CONSUL_LOCAL_CONFIG={"leave_on_terminate": true}' \
consul agent -bind=$(hostname -i) -retry-join=172.17.0.2
The progrium/consul image has an old version of Consul (https://hub.docker.com/r/progrium/consul/tags/) and currently seems to be unmaintained.
Please try to use the official image, which has a current version of Consul: https://hub.docker.com/r/library/consul/
You can also use Registrator to register checks in Consul alongside your service, e.g.:
SERVICE_[port_]CHECK_SCRIPT=nc $SERVICE_IP $SERVICE_PORT | grep OK
More examples: http://gliderlabs.com/registrator/latest/user/backends/#consul
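As an illustration of attaching such a check when starting a service container (the image, port, and check values are illustrative; Registrator reads the SERVICE_* environment variables and substitutes $SERVICE_IP and $SERVICE_PORT itself, hence the single quotes):
docker run -d --name web -p 8000:80 \
  -e SERVICE_80_CHECK_SCRIPT='nc $SERVICE_IP $SERVICE_PORT | grep OK' \
  -e SERVICE_80_CHECK_INTERVAL=10s \
  nginx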
So a solution that works around using any version of the consul containers is to just install consul directly on the host machine. This can be done by following these steps from https://sonnguyen.ws/install-consul-and-consul-template-in-ubuntu-14-04/:
sudo apt-get update -y
sudo apt-get install -y unzip curl
sudo wget https://releases.hashicorp.com/consul/0.6.4/consul_0.6.4_linux_amd64.zip
sudo unzip consul_0.6.4_linux_amd64.zip
sudo rm consul_0.6.4_linux_amd64.zip
sudo chmod +x consul
sudo mv consul /usr/bin/consul
sudo mkdir -p /opt/consul
cd /opt/consul
sudo wget https://releases.hashicorp.com/consul/0.6.4/consul_0.6.4_web_ui.zip
sudo unzip consul_0.6.4_web_ui.zip
sudo rm consul_0.6.4_web_ui.zip
sudo mkdir -p /etc/consul.d/
sudo wget https://releases.hashicorp.com/consul-template/0.14.0/consul-template_0.14.0_linux_amd64.zip
sudo unzip consul-template_0.14.0_linux_amd64.zip
sudo rm consul-template_0.14.0_linux_amd64.zip
sudo chmod a+x consul-template
sudo mv consul-template /usr/bin/consul-template
sudo nohup consul agent -server -bootstrap-expect 1 \
-data-dir /tmp/consul -node=agent-one \
-bind=$(hostname -i) \
-client=0.0.0.0 \
-config-dir /etc/consul.d \
-ui-dir /opt/consul/ &
echo 'Done with consul install!!!'
Then, after you do this, create your consul health check JSON files; info on how to do that can be found here. After you create your JSON files, just put them in the /etc/consul.d directory and reload consul with consul reload. If after the reload consul does not add your new health checks, then there is something wrong with the syntax of your JSON files. Go back, edit them, and try again.
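As an illustration, a minimal TCP check definition dropped into /etc/consul.d/ might look like the following (the name and port are assumptions; TCP checks are available from Consul 0.6 onwards):
{
  "check": {
    "id": "web-tcp",
    "name": "web tcp check",
    "tcp": "localhost:8080",
    "interval": "10s"
  }
}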