How to configure a dockerfile and docker-compose for Jenkins - docker

I'm absolutely new to Docker and Jenkins. I have a question about the configuration of the Dockerfile and the docker-compose.yml file. I tried to use the simplest configuration so I could set these files up correctly. Building and pushing works, but the Jenkins application is not reachable on my localhost (127.0.0.1).
If I understand it correctly, it should be running on port 50000 by default (ARG agent_port=50000 in the "official" Jenkins Dockerfile). I tried 50000, 8080 and 80 as well; nothing is working. Do you have any advice, please? I'm using these files: https://github.com/fdolsky321/Jenkins_Docker
The second question is: what's the best way to handle container crashes? Let's say that if the container crashes, I want to recreate a new container with the same settings. Is the best way just to create a shell script like "crash.sh" that recreates the container with the same settings, as mentioned here: https://blog.codeship.com/ensuring-containers-are-always-running-with-dockers-restart-policy/?
Thank you for any advice.

docker-compose for Jenkins
docker-compose.yml
version: '2'
services:
  jenkins:
    image: jenkins:latest
    ports:
      - 8080:8080
      - 50000:50000
    # uncomment for docker in docker
    privileged: true
    volumes:
      # enable persistent volume (warning: make sure that the local jenkins_home folder is created)
      - /var/wisestep/data/jenkins_home:/var/jenkins_home
      # mount docker sock and binary for docker in docker (only works on linux)
      - /var/run/docker.sock:/var/run/docker.sock
      - /usr/bin/docker:/usr/bin/docker
Replace the ports 8080 and 50000 as needed on your host.
To recreate a new container with the same settings
The mounted volume jenkins_home is the place where all your jobs, settings, etc. are stored.
Take a backup of the mounted jenkins_home volume whenever you create a job, or on whatever schedule suits you.
Whenever there is a crash, replace the jenkins_home folder with the backup and run Jenkins again with the same docker-compose file.
Rerun/restart Jenkins
List the containers
docker ps -a
Restart the container
docker restart <Required_Container_ID_To_Restart>
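A minimal backup/restore sketch of that approach, assuming the bind-mounted path from the compose file above:
# back up the mounted jenkins_home folder on the host
tar czf jenkins_home_backup.tar.gz -C /var/wisestep/data jenkins_home
# after a crash: restore the backup, then bring Jenkins back up with the same compose file
tar xzf jenkins_home_backup.tar.gz -C /var/wisestep/data
docker-compose up -d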

I've been using a docker-compose.yml that looks like the following:
version: '3.2'
volumes:
  jenkins-home:
services:
  jenkins:
    image: jenkins-docker
    build: .
    restart: unless-stopped
    ports:
      - target: 8080
        published: 8080
        protocol: tcp
        mode: host
    volumes:
      - jenkins-home:/var/jenkins_home
      - /var/run/docker.sock:/var/run/docker.sock
    container_name: jenkins-docker
My image is a locally built Jenkins image, based off of jenkins/jenkins:lts, that adds in some other components like docker itself, and I'm mounting the docker socket to allow me to run commands on the docker host. This may not be needed for your use case. The important parts for you are the ports being published, which for me is only 8080, and the volume for /var/jenkins_home to preserve the Jenkins configuration between image updates.
To recover from errors, I have restart: unless-stopped inside the docker-compose.yml to configure the container to automatically restart. If you're running this in swarm mode, that would be automatic.
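If you do run it in swarm mode, the equivalent is a restart policy under the deploy key; a generic sketch, not something from my file above:
deploy:
  restart_policy:
    condition: on-failure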
I typically avoid defining a container name, but in this scenario, there will only ever be one jenkins-docker container, and I like to be able to view the logs with docker logs jenkins-docker to gather things like the initial administrator login token.
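For example, once the container is up, the initial admin password can be pulled either from the logs or straight from the volume (the secrets path is the standard Jenkins location):
docker logs jenkins-docker
docker exec jenkins-docker cat /var/jenkins_home/secrets/initialAdminPassword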
My Dockerfile and other dependencies for this image are available at: https://github.com/bmitch3020/jenkins-docker

Hyper-V with Docker for Windows.
In that case, you must be sure you port-forward any published port (like 50000).
Open the Hyper-V manager and right-click on the machine defined there: you will be able to add port-forwarding rules so that localhost:50000 reaches your VM:50000.
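If you prefer the command line over the Hyper-V manager dialog, a portproxy rule is one alternative (shown here as an assumption, with <VM-IP> as a placeholder for your VM's address):
netsh interface portproxy add v4tov4 listenaddress=127.0.0.1 listenport=50000 connectaddress=<VM-IP> connectport=50000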

Related

Docker compose up before build [duplicate]

How to access the running containers during new container docker build?
Need to access the database container during the build of the application container
docker-compose
version: '3'
services:
  db:
    build: ./db
    ports:
      - 1433:1433
    networks:
      - mynetwork
  app:
    build: ./app
    ports:
      - 8080:8080
    depends_on:
      - db
    networks:
      - mynetwork
networks:
  mynetwork: {}
I tried to bring up the db prior to building the app container, but it's not working:
docker-compose build db
docker-compose up -d db
docker-compose build app
You can't, and it's not a good idea. For example, if you run:
docker-compose build
docker-compose down -v
docker-compose up
The down step will delete all of the containers and their underlying storage (including the contents of the database); then the up step will create all new containers from existing images without re-running the Dockerfile. Even if you added a --build option, Docker's layer caching would conclude that the filesystem output of your database setup command hasn't changed, and will skip re-running that step.
You can encounter a similar problem if you docker push the built image to some registry and run it on a different host: since the image is reusable, commands from its Dockerfile won't get re-run, but it's not the same database, so the setup won't get done.
Depending on what kind of setup you're trying to do, probably the best approach is to configure your image with an entrypoint script that runs your application's database migrations and then uses exec "$@" to run the main container command. It can also work to put setup commands in the database's /docker-entrypoint-initdb.d directory, though these won't get re-run if your application's database schema changes.
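A minimal entrypoint sketch of that approach (the migration command is a placeholder, not something from your project):
#!/bin/sh
# docker-entrypoint.sh
set -e
# run the application's database setup/migrations first,
# pointing at the db service from the compose file
./run-migrations.sh --host db --port 1433
# then hand control to the image's CMD
exec "$@"
In the Dockerfile you would point ENTRYPOINT at this script and keep the normal application start command in CMD.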
At a technical level, this doesn't work because the docker build environment isn't on any particular Docker network, neither the mynetwork you manually specify nor the default network Compose creates on its own. The build sequence runs separately from running the resulting image, and it ignores most of the Docker Compose settings.

Can docker write logs to an external directory?

We have written our own logger that writes about 10 different log files: SQL log, HTTP request log, a separate log for each client, etc.
If the service is run through Docker, is it possible to tell it to write these logs not inside the container itself, but to an external folder?
From what I've read so far, I've only gathered that Docker captures output to the console, and into one shared file.
You can run a Docker container with volumes that map your expected log directory, settings file, app resources, etc.
Here is a straightforward way to create such a container with volumes:
docker create --name YOUR_SERVICE_NAME -p 80:80 \
  -v /APP_DIR_OUT_SIDE_OF_CONTAINER/settings/appsettings.json:/app/appsettings.json \
  -v /APP_DIR_OUT_SIDE_OF_CONTAINER/Logs:/app/Logs:z \
  YOUR_DOCKER_IMAGE_REPO_URL:IMAGE_TAG
Below is a docker-compose sample that runs a container with volumes.
version: '3.4'
services:
  YOUR_SERVICE_NAME:
    image: IMAGE_URL
    container_name: CONTAINER_NAME
    ports:
      - "80:80"
    volumes:
      - /APP_DIR_OUT_SIDE_OF_CONTAINER/config/appsettings.json:/app/appsettings.json
      - /APP_DIR_OUT_SIDE_OF_CONTAINER/logs:/app/Logs:z
    restart: always
Also, you can find all the ways to run a Docker container with volumes in the Use volumes documentation.

Running an executable inside a docker container from another container

I am trying to run an executable file from another docker container while already inside a docker container. Is this possible?
version: '3.7'
services:
  py:
    build: .
    tty: true
    networks:
      - dataload
    volumes:
      - './src:/app'
      - '~/.ssh:/ssh'
  winexe:
    build:
      context: ./winexe
      dockerfile: Dockerfile
    networks:
      - dataload
    ports:
      - '8001:8001'
    volumes:
      - '~/path/to/winexe:/usr/bin/winexe'
      - '~/.ssh:/ssh'
    depends_on:
      - py
networks:
  dataload:
    driver: bridge
I am trying to access Winexe from 'py'
Assuming you mean running another Docker container from inside a container, this can be done in several ways:
Install the docker command inside your container and:
Contact the hosting Docker instance over TCP/IP. For this you will have to have exposed the Docker host to the network, which is neither default nor recommended.
Map the docker socket (usually /var/run/docker.sock) into your container using a volume. This will allow the docker command inside the container to contact the host instance directly.
Be aware this essentially gives the container root-level access to the host! I'm sure there are many more ways to do the same, but approach number 2 is the one I see most often.
If you mean to run another executable inside another - already running - Docker container, you can do that in the above way as well by using docker exec or run some kind of daemon in the second container that accepts commands and runs the required command for you.
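For instance, with the docker socket mounted into the py container and the docker CLI installed there (both assumptions), you could run:
docker exec <winexe_container_name> <command to run>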
So you need to think of your containers as if they were two separate computers, or servers, and they can interact accordingly.
Happily, docker-compose gives you a url you can use to communicate between the containers. In the case of your docker-compose file, you could access the winexe container from your py container like so:
http://winexe:8001 // or ws://winexe:8001 or postgres://winexe:8001 (you get the idea)
(I've used port 8001 here because that's the port you've made available for winexe – I have no idea if it could be used for this).
So now what you need is something in your winexe container that listens for that request and sends a useful reply (like a server answering a browser's ajax call).
Learn more here:
https://docs.docker.com/compose/networking/
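A quick way to check that kind of call, assuming curl is installed in the py image and something in the winexe container is actually listening on 8001:
# from inside the py container: the service name "winexe" resolves on the shared dataload network
curl http://winexe:8001/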

docker rabbitmq how to expose port and reuse container with a docker file

Hi, I am finding it very confusing how to create a Dockerfile that runs a RabbitMQ container and exposes the port so I can navigate to the management console via localhost and a port number.
I see someone has provided this Dockerfile example, but I'm unsure how to run it:
version: "3"
services:
rabbitmq:
image: "rabbitmq:3-management"
ports:
- "5672:5672"
- "15672:15672"
volumes:
- "rabbitmq_data:/data"
volumes:
rabbitmq_data:
I have got RabbitMQ working fine locally, but everyone tells me Docker is the future; at this rate I don't get it.
Does the above look like a valid way to run a RabbitMQ container? Where can I find a full, understandable example?
Do I need a Dockerfile or am I misunderstanding it?
How can I specify the port? In the example above, what are the first numbers in 5672:5672 and what are the last ones?
How can I be sure that when I run the container again, say after a machine restart, that I get the same container?
Many thanks
Andrew
Docker-compose
What you posted is not a Dockerfile. It is a docker-compose file.
To run that, you need to
1) Create a file called docker-compose.yml and paste the following inside:
version: "3"
services:
rabbitmq:
image: "rabbitmq:3-management"
ports:
- "5672:5672"
- "15672:15672"
volumes:
- "rabbitmq_data:/data"
volumes:
rabbitmq_data:
2) Download docker-compose (https://docs.docker.com/compose/install/)
3) (Re-)start Docker.
4) On a console run:
cd <location of docker-compose.yml>
docker-compose up
Do I need a docker file or am I misunderstanding it?
You have a docker-compose file. rabbitmq:3-management is the Docker image built using the RabbitMQ Dockerfile (which you don't need; the image will be downloaded the first time you run docker-compose up).
How can I specify the port? In the example above what are the first numbers 5672:5672 and what are the last ones?
"5672:5672" specifies the port of the queue.
"15672:15672" specifies the port of the management plugin.
The numbers on the left-hand side are the ports you can access from outside the container. So, if you want to work with different ports, change the ones on the left. The ones on the right are the ports used inside the container.
This means you can access the management plugin at http://localhost:15672 (or more generically at http://<host-ip>:<host port mapped to 15672>).
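For instance, to reach the management UI on a different host port, you would only change the left-hand side (a hypothetical mapping for illustration):
ports:
  - "5672:5672"
  - "8081:15672"
The management UI would then be at http://localhost:8081, while RabbitMQ still listens on 15672 inside the container.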
You can see more info on the RabbitMQ Image on the Docker Hub.
How can I be sure that when I rerun the container, say after a machine restart that I get the same container?
I assume you want the same container because you want to persist the data. You can use docker-compose stop, restart your machine, then run docker-compose start; the same container is then used. However, if the container is ever deleted, you lose the data inside it.
That is why you are using volumes. The data collected in your container is also stored on your host machine. So, if you remove your container and start a new one, the data is still there because it was stored on the host.
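To see where a named volume actually lives on the host, you can inspect it; the <project>_ prefix below depends on the directory Compose was run from, so it is shown here as a placeholder:
docker volume ls
docker volume inspect <project>_rabbitmq_data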

How to link multiple Docker containers and encapsulate the result?

I have a Node.js web-application that connects to a Neo4j database. I would like to encapsulate these in a single Docker image (using also a Neo4j Docker container), but I'm a docker novice and can't seem to figure this out. What's the recommended way to do it in the latest Docker versions?
My intuition would be to run the Neo4j container nested inside the app container. But from what I've read, I think the supported / recommended approach is to link the containers together. What I need is pretty well illustrated in this image. But the article where the image comes from isn't clear to me. Anyway, it's using the soon-to-be-deprecated legacy container linking, while networking is recommended these days. A tutorial or explanation would be much appreciated.
Also, how does docker-compose fit into all this?
Running a container within another container would mean running a Docker engine within a Docker container. This is referred to as dind, for Docker-in-Docker, and I would strongly advise against it. You can search for 'dind' online and discover why it is a bad idea in most cases, but as it is not the main object of your question I won't expand on this subject any further.
Running both a node.js process and a neo4j process in the same container
While most people will tell you to refrain from running more than one process within a Docker container, nothing prevents you from doing so. If you want to follow this path, take a look at the Using Supervisor with Docker article on the Docker documentation website, or at the Phusion baseimage Docker image.
Just be aware that this way of doing things will make your Docker image more and more difficult to maintain over time.
Linking containers
As you found out, keeping Docker images as simple as you can (i.e. running one and only one app per Docker container) will make your life easier in the long term.
Linking containers together is trivial when both containers run on the same Docker engine. It is just a matter of:
having your neo4j container expose the port its service listens on
running your node.js container with the --link <neo4j container name>:<alias> option
setting, within the node.js application configuration, the neo4j host to the <alias> hostname; Docker will take care of forwarding that connection to the IP it assigned to the neo4j container
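A minimal sketch of that legacy --link approach (image names are placeholders):
docker run --detach --name myneo4j neo4j
docker run --detach --link myneo4j:db <your nodejs image>
Inside the node.js application you would then use db as the neo4j hostname.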
When you want to run those two containers on different hosts, things get more difficult.
With Docker Compose, you have to use the links: key to define your links.
The new Docker network feature
You also discovered that linking containers won't be supported in the future and that the new way of making multiple Docker containers communicate is to create a virtual network and attach those 2 containers to that network.
Here's how to proceed:
docker network create mynet
docker run --detach --name myneo4j --net mynet neo4j
docker run --detach --name mynodejs --net mynet <your nodejs image>
Your node application configuration should then use myneo4j as the host to connect to.
To tell Docker Compose to use the new network feature, you would have to use the --x-networking option. Also you would not use the links: key.
Using the new networking feature also means that you won't be able to define an alias for the db. As a result, you have to use the container name. Beware that unless you use the container_name: key in your docker-compose.yml file, Compose will create container names based on the directory that contains your docker-compose.yml file, the service name as found in the yml file, and a number.
For instance, the following docker-compose.yml file, if placed within a directory named "foo", would create two containers named foo_web_1 and foo_db_1:
web:
  build: .
  ports:
    - "8000:8000"
db:
  image: postgres
when started with docker-compose --x-networking up, the web app configuration should then use foo_db_1 as the db hostname.
While if you use container_name:
web:
  build: .
  ports:
    - "8000:8000"
db:
  image: postgres
  container_name: mydb
when started with docker-compose --x-networking up, the web app configuration should then use mydb as the db hostname.
Example of using Docker Compose to run a web app using nodeJS and neo4j
In this example, I will show how to dockerize the example app from the GitHub project aseemk/node-neo4j-template, which uses nodejs and neo4j.
I assume you already have Docker 1.9.0+ and Docker Compose 1.5+ installed.
This project will use 2 docker containers, one to run the neo4j database and one to run the nodeJS web app.
Dockerizing the web app
We need to build a Docker image from which Docker compose will run a container. For that, we will write a Dockerfile.
Create a file named Dockerfile (mind the capital D) with the following content:
FROM node
RUN git clone https://github.com/aseemk/node-neo4j-template.git
WORKDIR /node-neo4j-template
RUN npm install
# ugly 20s sleep to wait for neo4j to initialize
CMD sleep 20s && node app.js
This Dockerfile describes the steps the Docker engine will have to follow to build a docker image for our web app. This docker image will:
be based on the official node docker image
clone the nodeJS example project from Github
change the working directory to the directory containing the git clone
run the npm install command to download and install the nodeJS app dependencies
instruct docker which command to use when running a container of that image
A quick review of the nodeJS code reveals that the author allows us to configure the URL to use to connect to the neo4j database using the NEO4J_URL environment variable.
Dockerizing the neo4j database
Well people took care of that for us already. We will use the official Docker image for neo4j which can be found on the Docker Hub.
A quick review of the readme tells us to use the NEO4J_AUTH environment variable to change the neo4j password. Setting this variable to none disables authentication altogether.
Setting up Docker Compose
In the same directory as the one containing our Dockerfile, create a docker-compose.yml file with the following content:
db:
  container_name: my-neo4j-db
  image: neo4j
  environment:
    NEO4J_AUTH: none
web:
  build: .
  environment:
    NEO4J_URL: http://my-neo4j-db:7474
  ports:
    - 80:3000
This Compose configuration file describes 2 services: db and web.
The db service will produce a container named my-neo4j-db from the official neo4j docker image and will start that container setting up the NEO4J_AUTH environment variable to none.
The web service will produce a container named at Docker Compose's discretion, using a docker image built from the Dockerfile found in the current directory (build: .). It will start that container setting the environment variable NEO4J_URL to http://my-neo4j-db:7474 (note how we use here the name of the neo4j container, my-neo4j-db). Furthermore, docker compose will instruct the Docker engine to expose the web container's port 3000 on the docker host's port 80.
Firing it up
Make sure you are in the directory that contains the docker-compose.yml file and type: docker-compose --x-networking up.
Docker Compose will read the docker-compose.yml file, figure out that it first has to build a docker image for the web service, then create and start both containers, and finally provide you with the logs from both containers.
Once the log shows web_1 | Express server listening at: http://localhost:3000/, everything is ready and you can point your browser to http://<ip of the docker host>/.
To stop the application, hit Ctrl+C.
If you want to start the app in the background, use docker-compose --x-networking up -d instead. Then in order to display the logs, run docker-compose logs.
To stop the service: docker-compose stop
To delete the containers: docker-compose rm
Making neo4j storage persistent
The official neo4j docker image readme says the container persists its data on a volume at /data. We then need to instruct Docker Compose to mount that volume to a directory on the docker host.
Change the docker-compose.yml file with the following content:
db:
  container_name: my-neo4j-db
  image: neo4j
  environment:
    NEO4J_AUTH: none
  volumes:
    - ./neo4j-data:/data
web:
  build: .
  environment:
    NEO4J_URL: http://my-neo4j-db:7474
  ports:
    - 80:3000
With that config file, when you run docker-compose --x-networking up, Docker Compose will create a neo4j-data directory and mount it into the container at /data.
Starting a 2nd instance of the application
Create a new directory and copy over the Dockerfile and docker-compose.yml files.
We then need to edit the docker-compose.yml file to avoid a name conflict for the neo4j container and a port conflict on the docker host.
Change its content to:
db:
  container_name: my-neo4j-db2
  image: neo4j
  environment:
    NEO4J_AUTH: none
  volumes:
    - ./neo4j-data:/data
web:
  build: .
  environment:
    NEO4J_URL: http://my-neo4j-db2:7474
  ports:
    - 81:3000
Now it is ready for the docker-compose --x-networking up command. Note that you must be in the directory with that new docker-compose.yml file to start the 2nd instance up.
