Logging from one docker container to another - jenkins

I think I am missing something linking docker containers.
I have 2 containers, 1 running Jenkins and 1 running elk stack.
From the host I can easily get logs to flow to ELK, and linking the Jenkins container to the ELK one via --link gets some generic events into the ELK stack.
But what I really want is for the Jenkins container (via the Jenkins Notification plugin) to log builds into ELK. No matter what I try, TCP or HTTP, on the port I use on the Docker host, nothing shows up.
On the host, port 3333 is mapped into the ELK container (3333 is the logstash input port).
From the docker host I can just do something like "echo "hello new World" | nc localhost 3333" and elk picks it up.
I am starting elk first with this:
docker run -d --name elk-docker -p 8686:80 -p 3333:3333 -p 9200:9200 elk-docker
Then Jenkins with this:
docker run -p 8585:8080 -v $PWD/docker/jenkins/jenkins_home:/var/lib/jenkins -t jenkins-docker
I have also tried linking the two, with no success:
docker run -p 8585:8080 --link elk-docker:elk -v $PWD/docker/jenkins/jenkins_home:/var/lib/jenkins -t jenkins-docker
In Jenkins I have the Notification plugin installed, and I was trying to use a simple TCP notification to port 3333, expecting the main events of the Jenkins job to show up in ELK by using the URL 172.17.0.5:3333 (172.17.0.5 is the IP of the logstash container).
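One approach worth trying (my suggestion, not from the post): instead of hard-coding the bridge IP 172.17.0.5, which can change between container restarts, put both containers on a user-defined network and point the Notification plugin at the ELK container by name. A sketch using the images and ports from the question; the network name ci-net is an assumption:

```shell
# Create a user-defined bridge network; containers attached to it can
# resolve each other by container name via Docker's embedded DNS.
docker network create ci-net

# Start ELK and Jenkins on that network (same images/ports as above):
docker run -d --name elk-docker --net ci-net -p 8686:80 -p 3333:3333 -p 9200:9200 elk-docker
docker run --net ci-net -p 8585:8080 -v $PWD/docker/jenkins/jenkins_home:/var/lib/jenkins -t jenkins-docker

# The Notification plugin endpoint can then target elk-docker:3333
# (a stable name) instead of a bridge IP like 172.17.0.5:3333.
```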

Related

Jenkins docker running in the container but not launching in web browser

On my Red Hat 7 Linux Docker host, I created a Jenkins container by pulling the official Jenkins image from Docker Hub, and I was able to bring the Jenkins container up and running by executing the command:
docker run -d -p 50000:8080 -v $PWD/jenkins:/var/lib/jenkins -t jenkins_master
I can see that Jenkins is up when I check the logs using docker logs {containerID}, but when I try to open it in a web browser at {hostip}:50000, I can't access it; it throws "The site can't be reached". Since my container is running inside a company network, should I open/enable port 50000, or do I need to set a proxy on the Docker host?
Am I missing something here?
The official image provides the following command:
docker run -p 8080:8080 -p 50000:50000 -v /your/home:/var/jenkins_home jenkins
It seems that both ports 8080 and 50000 have to be exposed.
Execute the docker run command to run the container, then check the status of your container:
docker container run -p [YOUR PORT]:8080 -v [YOUR VOLUME]:/var/jenkins_home --name jenkins-local jenkins/jenkins:lts
You can then access it using localhost:[YOUR PORT]

What ports do I need to open on Docker container when running Pulse in Tomcat in the container?

I'm running Geode (v1.14.0) servers/locators in Docker containers. I'm trying to run Pulse as a standalone WAR in Tomcat, also in a Docker container. I can connect fine when running the Pulse WAR in Tomcat outside of a container, so I suspect it's a ports or hostname issue. I'm currently mapping port 8080 inside the container to 8081 on the host. I can load the Pulse UI in the browser, but it keeps saying "Connecting..." in a yellowish box at the top of the page and doesn't find the Geode locator.
Looking in pulse.log, I see the following exception repeatedly:
java.net.ConnectException: Connection refused (Connection refused)
pulse.properties is configured to the defaults:
# JMX Locator/Manager Properties
pulse.useLocator=true
pulse.host=localhost
pulse.port=10334
#pulse.useSSL.locator=true
#pulse.useSSL.manager=true
Essentially I'm starting the Geode server/locator nodes with this command:
docker run -it -p 7070:7070 -p 10334:10334 -p 40404:40404 -p 40405:40405 -p 40406:40406 -p 1099:1099 apachegeode/geode
What are the Docker ports that need to be open for Pulse to work and is there any specific hostname config required?
Based on the comment from #greenPadawan (thanks), I solved this as follows:
Create a new bridge network in Docker: docker network create --driver bridge geode
Inside the Pulse WAR file at /WEB-INF/classes/pulse.properties I changed the pulse.host=localhost setting to pulse.host=geode. I then re-built my Pulse Docker container with the updated WAR.
Start my two containers (one container for Geode and one for Pulse/Tomcat) both using that network:
docker run -it --network=geode --name geode -p 7070:7070 -p 10334:10334 -p 40404:40404 -p 40405:40405 -p 40406:40406 -p 1099:1099 apachegeode/geode
docker run -d -P --network=geode pulse
When I now navigate to Pulse in the browser on the host machine, it connects to the Geode locator.
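As a sanity check (my addition, not from the answer): Docker's embedded DNS only resolves container names on user-defined networks, so you can verify that the name geode resolves before rebuilding the WAR. A sketch assuming the network and container names above:

```shell
# Should print an address for the "geode" container if both the network
# and the named container exist; busybox is just a convenient throwaway image.
docker run --rm --network=geode busybox nslookup geode
```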

DNS not working between two linked docker containers - getaddrinfo EAI_AGAIN error

I am attempting to setup a temporary environment where I can execute ATs against a web application. To achieve this I have 3 docker containers:
Container 1: Database (mongo_local)
docker build -t mongo_local ./test/AT/mongo
docker run --name mongo_local -d -p 27017:27017 mongo_local
Container 2 (Web application):
docker run --name mywebapp_local -d -p 4431:4431 --link mongo_local -v /applicationdata:/applicationdata mywebapp
Container 3 (Newman test runner):
docker build -t newman_runner ./test/AT/newman
docker run --name newman_runner --link mywebapp_local newman_runner
The web application can access the database successfully using the following connection string: mongodb://mongo_local:27017/mydb. Note that I am able to reference mongo_local; I don't have to specify an IP address for the mongo_local container.
The newman test runner runs Postman tests against the web application, and all tests execute successfully when I specify the IP address of the mywebapp_local container (i.e. 10.0.0.4) in the URL; however, if I specify the name mywebapp_local in the URL, it does not work.
Hence https://mywebapp_local/api/v1/method1 does not work, but https://10.0.0.4/api/v1/method1 does work.
The error Im getting is
getaddrinfo EAI_AGAIN mywebapp_local mywebapp_local:443 at request ...
I've tried using --add-host in the docker run command and this makes no difference. Is there anything obvious that I'm doing wrong?
As you have it set up, the newman_runner container doesn't --link mongo_local and that's why it can't see it.
Docker has been discouraging explicit inter-container links for a while. If you create a Docker-internal network and attach each container to it
docker network create testnet
docker run --net testnet ...
it will be able to see all of the other containers on the same network by their --name without an explicit --link.
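Applied to the three containers in the question, the network-based setup might look like this (image and container names taken from the question; testnet is the network name suggested above):

```shell
docker network create testnet

# Same containers as before, but attached to the shared network and
# with the explicit --link flags dropped:
docker run --name mongo_local    -d --net testnet -p 27017:27017 mongo_local
docker run --name mywebapp_local -d --net testnet -p 4431:4431 -v /applicationdata:/applicationdata mywebapp
docker run --name newman_runner     --net testnet newman_runner

# newman_runner can now reach the web app by name, e.g.
# https://mywebapp_local/api/v1/method1
```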

How to connect to server on Docker from host machine?

Ok, I am pretty new to Docker world. So this might be a very basic question.
I have a container running in Docker, which is running RabbitMQ. Let's say the name of this container is "Rabbit-container".
RabbitMQ container was started with this command:
docker run -d -t -i --name rmq -p 5672:5672 rabbitmq:3-management
Python script command with 2 args:
python ~/Documents/myscripts/migrate_data.py amqp://rabbit:5672/ ~/Documents/queue/
Now, I am running a Python script from my host machine, which is creating some messages. I want to send these messages to my "Rabbit-container". Hence I want to connect to this container from my host machine (Mac OSX).
Is this even possible? If yes, how?
Please let me know if more details are needed.
So, I solved it by simply mapping the RMQ listening port to host OS:
docker run -d -t -i --name rmq -p 15672:15672 -p 5672:5672 rabbitmq:3-management
I previously had only -p 15672:15672 in my command. This is mapping the Admin UI from Docker container to my host OS. I added -p 5672:5672, which mapped RabbitMQ listening port from Docker container to host OS.
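A quick way to confirm the new mapping from the host (my addition, assuming macOS or Linux with netcat installed):

```shell
# -z only tests whether the port accepts connections; succeeds once the
# container publishes 5672 to the host.
nc -vz localhost 5672
```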
If you're running this container in your local OSX system then you should find your default docker-machine ip address by running:
docker-machine ip default
Then you can change your python script to point to that address and mapped port on <your_docker_machine_ip>:5672.
That happens because docker runs in a virtualization engine on OSX and Windows, so when you map a port to the host, you're actually mapping it to the virtual machine.
You'd need to run the container with port 5672 exposed, perhaps 15672 as well if you want WebUI, and 5671 if you use SSL, or any other port for which you add tcp listener in rabbitmq.
It would also be easier if you had a specific IP and a host name for the rabbitmq container. To do this, you'd need to create your own docker network
docker network create --subnet=172.18.0.0/16 mynet123
After that start the container like so
docker run -d --net mynet123 --ip 172.18.0.11 --hostname rmq1 --name rmq_container_name -p 15673:15672 rabbitmq:3-management
Note that with the rabbitmq:3-management image, port 5672 is (well, was when I used it) already exposed, so no need to do that. --name is for the container name, and --hostname obviously for the host name.
So now, from your host you can connect to rmq1 rabbitmq server.
You said that you have never used docker-machine before, so I assume you are using Docker Beta for Mac (you should see the Docker icon in the menu bar at the top).
Your docker run command for rabbit is correct. If you now want to connect to rabbit, you have two options:
Wrap your python script in a new container and link it to rabbit:
docker run -it --rm --name migration --link rmq:rabbit -v ~/Documents/myscripts:/app -w /app python:3 python migrate_data.py
Note that we have to link rmq:rabbit, because you named your container rmq but use rabbit in the script.
Execute your python script on your host machine and use localhost:5672
python ~/Documents/myscripts/migrate_data.py amqp://localhost:5672/ ~/Documents/queue/

Docker in Docker: Port Mapping

I have found a similar thread, but failed to get it to work. So, the use case is
I start a container on my Linux host
docker run -i -t --privileged -p 8080:2375 mattgruter/doubledocker
When in that container, I want to start another one with GAE SDK devserver running.
At that, I need to access a running app from the host system browser.
When I start a container in the container as
docker run -i -t -p 2375:8080 image/name
I get an error saying that port 2375 is in use. I start the app and can curl 0.0.0.0:8080 from inside both containers (when using another mapping, 8080:8080 for example), but I cannot preview the app from the host system, since localhost:8080 on the host maps to port 2375 in the first container, and that port cannot be used when launching the second container.
I'm able to do that using the image jpetazzo/dind. The test I have done and worked (as an example):
From my host machine I run the container with docker installed:
docker run --privileged -t -i --rm -e LOG=file -p 18080:8080 jpetazzo/dind
Then inside the container I've pulled nginx image and run it with
docker run -d -p 8080:80 nginx
And from the host environment I can browse the nginx welcome page with http://localhost:18080
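To make the chain explicit (ports taken from the commands above), the two -p flags stack:

```shell
# host:18080 → outer dind container:8080   (-p 18080:8080 on the outer run)
# outer:8080 → inner nginx container:80    (-p 8080:80 on the inner run)
curl http://localhost:18080   # from the host: answered by nginx in the inner container
```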
With the image you were using (mattgruter/doubledocker) I had some problems running it (something related to log attach).
