How to customize the broker.xml file in the standard Apache ActiveMQ Artemis docker image?

Building a docker image from the artemis-docker package in the official GitHub repository produces an image whose docker-run.sh checks on start-up whether the broker.xml file exists and creates a new broker instance if it does not.
My question: given that I want to use this image, either self-built or pulled from amusara on the official Docker Hub, how am I supposed to provide a custom broker.xml?

The docker-run.sh doesn't create a new instance if the /var/lib/artemis-instance/etc/broker.xml file exists. This means that you need to provide a complete broker instance or change the broker instance created by the artemis container:
Create a local empty folder owned by the artemis user (UID 1001)
mkdir broker
chown 1001:1001 broker
Create the artemis container with the local empty folder mounted
docker run -it --rm --name artemis -e ARTEMIS_USER=admin -e ARTEMIS_PASSWORD=admin -v ${PWD}/broker:/var/lib/artemis-instance artemis-ubuntu:latest
Stop the artemis container
docker stop artemis
Change the artemis instance files
ls -al broker
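The generated broker.xml lives in the instance's etc directory, so a custom configuration can be edited in place or copied over it; a minimal sketch (my-custom-broker.xml is a hypothetical file, and sudo may be needed because the instance files belong to UID 1001):
sudo vi broker/etc/broker.xml
# or overwrite it with a prepared file (hypothetical name)
sudo cp my-custom-broker.xml broker/etc/broker.xml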
Restart the container using the local artemis instance files
docker run -it --rm --name artemis -e ARTEMIS_USER=admin -e ARTEMIS_PASSWORD=admin -v ${PWD}/broker:/var/lib/artemis-instance artemis-ubuntu:latest
Docker makes deploying microservice applications very easy, but it has some limitations in a production environment. I would take a look at the ArtemisCloud.io operator, which provides a way to deploy the Apache ActiveMQ Artemis Broker on Kubernetes.

Related

docker: not found after mounting /var/run/docker.sock

I'm trying to use the docker command inside a container.
I use this command to mount /var/run/docker.sock and run the container:
docker run -d --name gitlab-runner --restart always \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /srv/gitlab-runner/config:/etc/gitlab-runner \
gitlab/gitlab-runner:latest
but when I try to use docker inside the container (gitlab-runner) I get an error:
docker: not found
host:
srw-rw---- 1 root docker 0 Mar 23 15:13 docker.sock
container:
0 srw-rw---- 1 root gitlab-runner 0 Mar 23 15:13 docker.sock
This worked fine before I removed the old container and created a new one; now I'm unable to run docker inside the container. Please help.
You should differentiate between the docker daemon and the docker CLI. The first is a service that actually performs all the work - it builds and runs containers. The second is an executable used to send commands to the daemon.
The executable (docker CLI) is lightweight and by default uses /var/run/docker.sock to reach the daemon (other transports exist as well).
When you start your container with -v /var/run/docker.sock:/var/run/docker.sock you actually share your host's docker daemon with the docker CLI in the container. Thus, you still need to install the docker CLI inside the container to make use of Docker, but you don't need to set up a daemon inside (which is pretty complicated and requires privileged mode).
Conclusion
Install the docker CLI inside the container, share the socket, and enjoy. But when using the host's docker daemon, bind-mounting volumes can be confusing: the daemon resolves paths on the host and doesn't see the container's internal file system.
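As a minimal sketch of that conclusion, a Dockerfile that layers the docker CLI on top of the runner image (assuming a Debian/Ubuntu base, which gitlab/gitlab-runner uses):
FROM gitlab/gitlab-runner:latest
# docker.io provides the docker client binary; the daemon it ships stays unused,
# since all commands go to the host daemon through the mounted socket
RUN apt-get update && apt-get install -y docker.io && rm -rf /var/lib/apt/lists/*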

Networking - How to send data from a container to a mosquitto container in the same network

I have a Raspberry Pi which is running a mosquitto docker container. This container holds all the configuration required to send data to an IoT hub. I also have another container, data-container, which performs a set of steps and then needs to send data to that IoT hub via the mosquitto container. data-container runs Python code that uses the paho-mqtt library to publish and subscribe to messages.
I have created my own docker network, mynetwork, using the command below:
sudo docker network create mynetwork
This created the network mynetwork. I then started the mosquitto container, specifying mynetwork:
sudo docker run -ti --net=mynetwork --restart=always -v /mosquitto/mqtt/config:/mqtt/config:ro -v /mosquitto/mqtt/log:/mqtt/log -v /mosquitto/mqtt/data/:/mqtt/data/ --name mqtt pascaldevink/rpi-mosquitto
I also started data-container with --net=mynetwork, so both containers are in the same network. Inside data-container, some information is collected and published using the command below:
publish.single("/machine/machine1/", "<data to send>", hostname=<hostname>)
I am confused as to what to use as hostname in publish.single. Should I use the IP address of the mosquitto container as the hostname?
Thanks
The container name is fine when they are deployed inside the same network. To be sure, you can exec into your container and try to ping the other container by its container name. This should work when they are inside the same docker network.
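A quick check, using the container names mqtt and data-container from the question (assuming ping is available in the image):
docker exec -it data-container ping -c 3 mqtt
If the name resolves, hostname="mqtt" can be passed directly to publish.single.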

Connect to cassandra inside docker from another server

I have two instances on AWS EC2. In one of the instances I have installed Cassandra inside docker.
Now I want to connect to cassandra from the other AWS instance.
Can someone help me do it?
I found this link: https://github.com/nicolasff/docker-cassandra/issues/5
but it is not working for me.
lvthillo's comment would work. Cassandra should expose its port on the node so that the other node can access it.
Another note: if the container is removed and recreated, the Cassandra data will be lost. You should at least mount a local directory of the node into the container:
docker run --name some-cassandra -v /my/own/datadir:/var/lib/cassandra -p 9042:9042 -d cassandra
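With the port published as above, the other instance can connect with standard tools; a sketch (the address is a placeholder, and the EC2 security group must allow inbound TCP 9042):
cqlsh <first-instance-ip> 9042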

Build/push image from jenkins running in docker

I have two docker containers - one running jenkins and one running a docker registry. I want to build/push images from jenkins to the docker registry. How do I achieve this in an easy and secure way (meaning no hacks)?
The easiest would be to make sure the jenkins container and the registry container are on the same host. Then you can mount the docker socket into the jenkins container and use the dockerd from the host machine to push the image to the registry. /var/run/docker.sock is the unix socket the dockerd is listening on.
By mounting the docker socket any docker command you run from that container executes as if it was the host.
$ docker run -dti --name jenkins -v /var/run/docker.sock:/var/run/docker.sock jenkins:latest
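With the socket shared, building and pushing from a Jenkins job is just plain docker commands; a sketch assuming the registry container publishes port 5000 on the same host (the image name is a placeholder):
docker build -t localhost:5000/myapp:latest .
docker push localhost:5000/myapp:latest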
If you use pipelines, you can install the Docker Plugin https://plugins.jenkins.io/docker-workflow, create a credentials resource on Jenkins to access the Docker registry, and do this in your pipeline:
stage("Build Docker image") {
steps {
script {
docker_image = docker.build("myregistry/mynode:latest")
}
}
}
stage("Push images") {
steps {
script {
withDockerRegistry(credentialsId: 'registrycredentials', url: "https://myregistry") {
docker_image.push("latest")
}
}
}
}
Full example at: https://pillsfromtheweb.blogspot.com/2020/06/build-and-push-docker-images-with.html
I use this type of workflow in a Jenkins docker container, and the good news is that it doesn't require any hackery to accomplish. Some people use "docker in docker" to accomplish this, but I can't help you if that is the route you want to go as I don't have experience doing that. What I will outline here is how to use the existing docker service (the one that is running the jenkins container) to do the builds.
I will make some assumptions since you didn't specify what your setup looks like:
you are running both containers on the same host
you are not using docker-compose
you are not running docker swarm (or swarm mode)
you are using docker on Linux
This can easily be modified if any of the above conditions are not true, but I needed a baseline to start with.
You will need the following:
access from the Jenkins container to docker running on the host
access from the Jenkins container to the registry container
Prerequisites/Setup
Setting that up is pretty straightforward. In the case of getting Jenkins access to the running docker service on the host, you can do it one of two ways: 1) over TCP or 2) via the docker unix socket. If you already have docker listening on TCP, you would simply take note of the host's IP address and the default docker TCP port number (2375 or 2376 depending on whether or not you use TLS), along with any TLS configuration you may have.
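For reference, a sketch of starting the daemon with the TCP transport enabled alongside the unix socket (2375 is the unencrypted default; use 2376 with TLS for anything non-local, and in practice these flags usually go into the daemon's service configuration rather than a manual invocation):
dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375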
If you prefer not to enable the docker TCP service it's slightly more involved, but you can use the UNIX socket at /var/run/docker.sock. This requires you to bind mount the socket to the Jenkins container. You do this by adding the following to your run command when you run jenkins:
-v /var/run/docker.sock:/var/run/docker.sock
You will also need to create a jenkins user on the host system with the same UID as the jenkins user in the container and then add that user to the docker group.
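A sketch of that host-side setup (UID 1000 is what the official jenkins image uses for its jenkins user; verify with docker exec jenkins id):
sudo useradd -u 1000 jenkins
sudo usermod -aG docker jenkins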
Jenkins
You'll now need a Docker build/publish plugin like the CloudBees Docker Build and Publish plugin or some other plugin depending on your needs. You'll want to note the following configuration items:
Docker URI/URL will be something like tcp://<HOST_IP>:2375 or unix:///var/run/docker.sock depending on how we did the above setup. If you use TCP and TLS for the docker service you will need to upload the TLS client certificates for your Jenkins instance as "Docker Host Certificate Authentication" to your usual credentials section in Jenkins.
Docker Registry URL will be the URL to the registry container, NOT localhost. It might be something like http://<HOST_IP>:32768 or similar depending on your configuration. You could also link the containers, but that doesn't easily scale if you move the containers to separate hosts later. You'll also want to add the credentials for logging in to your registry as a username/password pair in the appropriate credentials section.
I've done this exact setup so I'll give you a "tl;dr" version of it, as getting into depth here is way outside the scope of something for Stack Overflow:
Install PID 1 handler files in the container (e.g. tini). You need this to handle signaling and process reaping. This will be your entrypoint.
Install some process control service packages (e.g. supervisord). Generally, running multiple services in containers is not recommended, but in this particular case your options are very limited.
Install Java/Jenkins package or base your image from their DockerHub image.
Add a dind (Docker-in-Docker) wrapper script. This is the one I based my config on.
Create the configuration for the process control service to start Jenkins (as jenkins user) and the dind wrapper (as root).
Add jenkins user to docker group in Dockerfile
Run docker container with --privileged flag (DinD requires it).
You're done!
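As a concrete endpoint for the last step, the run command might look like this (the image name myorg/jenkins-dind is hypothetical):
docker run -d --privileged -p 8080:8080 -p 50000:50000 --name jenkins-dind myorg/jenkins-dind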
Thanks for your input! I came up with this after some experimentation.
docker run -d \
-p 8080:8080 \
-p 50000:50000 \
--name jenkins \
-v $(pwd)/data/jenkins:/var/jenkins_home \
-v /Users/.../.docker/machine/machines/docker:/Users/.../.docker/machine/machines/docker \
-e DOCKER_TLS_VERIFY="1" \
-e DOCKER_HOST="tcp://192.168.99.100:2376" \
-e DOCKER_CERT_PATH="/Users/.../.docker/machine/machines/docker" \
-e DOCKER_MACHINE_NAME="docker" \
johannesw/jenkins-docker-cli

why does kafka docker need to listen on unix socket

I am using the docker image for kafka from wurstmeister.
The docker-compose file defines a volume such as /var/run/docker.sock:/var/run/docker.sock
What is the purpose of the above unix socket?
When should a docker image declare the above volume?
The kafka-docker project is making (questionable, see below) use of the docker command run inside the kafka container in order to introspect your docker environment. For example, it will determine the advertised kafka port like this:
export KAFKA_ADVERTISED_PORT=$(docker port `hostname` $KAFKA_PORT | sed -r "s/.*:(.*)/\1/g")
There is a broker-list.sh script that looks for kafka brokers like this:
CONTAINERS=$(docker ps | grep 9092 | awk '{print $1}')
In order to run the docker cli inside the container, it needs access to the /var/run/docker.sock socket on your host.
Okay, that's it for the facts. The following is just my personal opinion:
I think this is frankly a terrible idea and that the only containers that should ever have access to the docker socket are those that are explicitly managing containers. There are other mechanisms available for performing container configuration and discovery that do not involve giving the container root access to your host, which is exactly what you are doing when you give something access to the docker socket.
By default, the Docker daemon listens on unix:///var/run/docker.sock to allow only local connections by the root user. So, generally speaking, if we can access this socket from somewhere else, we can talk to the Docker daemon or extract information about other containers.
If we want some processes inside our container to access information about other containers managed by the Docker daemon (running on our host), we can declare the volume like above.
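A minimal sketch of such a declaration in a docker-compose.yml, with only the relevant lines shown (service and image names follow the kafka-docker project):
services:
  kafka:
    image: wurstmeister/kafka
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock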
Let's see an example from the wurstmeister docker image.
The Dockerfile:
At the end of the file, it will call:
CMD ["start-kafka.sh"]
start-kafka.sh
Let's take a look at line 6:
if [[ -z "$KAFKA_ADVERTISED_PORT" ]]; then
export KAFKA_ADVERTISED_PORT=$(docker port `hostname` $KAFKA_PORT | sed -r "s/.*:(.*)/\1/g")
fi
When starting his Kafka container, he wants to execute the command below inside the Kafka container (to find the port mapping to the container...):
docker port `hostname` $KAFKA_PORT
Note that he mounted the above volume precisely to be able to execute commands like this.
Reference from the Docker website (search for the "Socket" keyword).
What is the purpose of the above unix socket?
Mounting the /var/run/docker.sock socket in a container provides access to the Docker Remote API hosted by the docker daemon. Anyone with access to this socket has complete control of docker and the host running docker (essentially root access).
When should a docker image declare the above volume?
Very rarely. If you are running a docker admin tool that requires API access inside a container then it needs to be mounted (or accessible via TCP) so the tool can manage the hosting docker daemon.
As larsks mentioned, docker-kafka's use of the socket for config discovery is very questionable.
It's not necessary to mount the docker.sock file; this can be avoided by commenting out the appropriate lines in kafka-docker/Dockerfile and start-kafka.sh. There is no need to add broker-list.sh to the kafka container.
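If the advertised values are supplied explicitly, the socket-based introspection becomes unnecessary; a sketch using the image's documented environment variables (host and zookeeper addresses are placeholders):
docker run -d \
  -p 9092:9092 \
  -e KAFKA_ADVERTISED_HOST_NAME=<host-ip> \
  -e KAFKA_ADVERTISED_PORT=9092 \
  -e KAFKA_ZOOKEEPER_CONNECT=<zookeeper-host>:2181 \
  wurstmeister/kafka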
