Behat container keeps exiting - docker

I'm using the following code to create a Behat docker container:
version: '3'
services:
  behat:
    container_name: behat.test
    image: docksal/behat
    # command: tail -F anything
    # tty: true
    # ports:
    #   - 4444:4444
    # restart: always
But I'm running into a persistent problem: the container keeps exiting with code 1, so I can't interact with it.
The commented-out lines in the file are the things I have already tried in order to resolve the issue.
Here is the output for the Behat container when I run docker-compose up -d --build:
d8c515771e4d docksal/behat "behat" 6 seconds ago Exited (1) 5 seconds ago behat.test
Update
I found Behat was reporting the following error:
`FeatureContext` context class not found and can not be used.

FeatureContext context class not found and can not be used.
This means Behat cannot find the features it needs; you can run behat --init to generate a skeleton.
$ docker run --rm -v $(pwd):/src docksal/behat --init
+d features - place your *.feature files here
+d features/bootstrap - place your context classes here
+f features/bootstrap/FeatureContext.php - place your definitions, transformations and hooks here
A features folder will now exist on your host; run the command again and it works:
$ docker run --rm -v $(pwd):/src docksal/behat
No scenarios
No steps
0m0.00s (7.73Mb)
For docker-compose it's the same: you need to make sure the features folder is mounted into the container. You can follow the official folder structure for your own layout; a workable set of steps follows.
In the current folder:
docker run --rm -v $(pwd):/src docksal/behat --init
Write a docker-compose.yaml like this:
version: "2.1"
services:
# Behat
behat:
hostname: behat
image: ${BEHAT_IMAGE:-docksal/behat}
volumes:
- .:/src
# Run a built-in web server for access to HTML reports
ports:
- 8000:8000
entrypoint: "php -S 0.0.0.0:8000"
docker-compose up -d
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
55f506afe31a docksal/behat "php -S 0.0.0.0:8000" 29 seconds ago Up 25 seconds 0.0.0.0:8000->8000/tcp 2020121502_behat_1
$ docker logs 2020121502_behat_1
PHP 7.3.25 Development Server started at Tue Dec 15 05:39:00 2020
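With the entrypoint overridden to the PHP web server, the container stays up, so the tests can then be run inside it. This is not part of the original answer, just one possible way, assuming the behat binary is on the PATH in the docksal/behat image (its default command suggests it is):
docker-compose exec behat behat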

Related

How to run multiple docker containers when running docker-compose up (gitlab-ci)

I need to deploy a new container each time I run docker-compose up, because the container runs a SQL Server database in a GitLab pipeline for each merge request created in the repository.
Is there a flag I should pass to do this? I know about --force-recreate, but it recreates the SAME container. I need every call to docker-compose up to create another container with the same configuration.
There is --scale SERVICE=NUM, but it is not what I need. Why? Because when I scale I can't control which host port docker will grab and use.
How do I intend to do this? With an environment variable. Look:
docker-compose file
version: '2'
services:
  db:
    image: mcr.microsoft.com/mssql/server:2019-latest
    container_name: ${CI_PIPELINE_ID}
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=${DATABASE_PASSWORD}
    ports:
      - "${CI_PIPELINE_ID}:1433"
my gitlab-ci:
stages:
  - database_deploy
  - build_and_test
  - database_stop

database_deploy:
  image: docker:latest
  stage: database_deploy
  services:
    - name: docker
  script:
    - apk add py-pip
    - pip install docker-compose==1.8.0
    - cd ./docker; docker-compose up -d; docker ps

build_and_test:
  image: maven:latest
  stage: build_and_test
  script:
    - mvn test -Dquarkus.test.profile=homolog
    - mvn checkstyle:check
  artifacts:
    paths:
      - target

database_stop: &database_stop
  image: docker:latest
  stage: database_stop
  services:
    - name: docker
  script:
    - docker stop $CI_PIPELINE_ID
    - docker rm -f $CI_PIPELINE_ID
    - docker ps

cleanup_deployment_failure:
  needs: ["build_and_test"]
  when: on_failure
  <<: *database_stop
Docker-compose groups your services in "projects". By default, the project name is the name of the directory that contains your docker-compose.yml file. When you run docker-compose up, docker-compose will create any containers in the project that don't already exist.
Since you want docker-compose up to create new containers every time -- with different configurations -- you need to tell docker-compose that it's running in a different project each time. You can do this with the --project-name (-p) flag.
For example, let's say I have this docker-compose.yml:
version: "3"
services:
web:
image: "alpinelinux/darkhttpd"
ports:
- "${HOSTPORT}:8080"
I can bring up multiple instances of this stack by setting HOSTPORT and specifying a project name for each invocation of docker-compose:
$ HOSTPORT=8081 docker-compose -p project1 up -d
$ HOSTPORT=8082 docker-compose -p project2 up -d
After running those two commands, we see:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
825ea98cca55 alpinelinux/darkhttpd "darkhttpd /var/www/…" 4 seconds ago Up 3 seconds 0.0.0.0:8082->8080/tcp, :::8082->8080/tcp project2_web_1
776c12d38bbb alpinelinux/darkhttpd "darkhttpd /var/www/…" 9 seconds ago Up 8 seconds 0.0.0.0:8081->8080/tcp, :::8081->8080/tcp project1_web_1
And I think that's exactly what you're looking for.
Note that with this configuration, you will need to specify the project name and a value for HOSTPORT every time you run docker-compose.
You can also set the project name using the COMPOSE_PROJECT_NAME environment variable. This means you can actually organize things using environment files.
We can reproduce the above behavior by creating project1.env with:
COMPOSE_PROJECT_NAME=project1
HOSTPORT=8081
And project2.env with:
COMPOSE_PROJECT_NAME=project2
HOSTPORT=8082
And then running:
$ docker-compose --env-file project1.env up -d
$ docker-compose --env-file project2.env up -d
As before, you'll need to provide --env-file every time you run docker-compose.
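Applied to the original GitLab setup, this might look roughly like the following sketch. It is not part of the answer: it reuses CI_PIPELINE_ID as the compose project name, and it assumes the container_name line is removed from the compose file, since a fixed container name would still collide across projects:
# database_deploy script (sketch)
cd ./docker
docker-compose -p "db_${CI_PIPELINE_ID}" up -d
# database_stop script (sketch)
cd ./docker
docker-compose -p "db_${CI_PIPELINE_ID}" down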

how to run freeradius using docker-compose

Could you advise how to run freeradius using docker-compose?
Here is my compose file; the container stops automatically after about a second.
version: '3'
services:
  freeradius:
    image: freeradius/freeradius-server
    restart: always
    volumes:
      - ./freeradius:/etc/freeradius
    ports:
      - "1812-1813:1812-1813/udp"
volumes:
  freeradius:
But when I run it with docker directly, it keeps running:
docker run --name my-radius -i -t freeradius/freeradius-server /bin/bash
Inside the container, the configuration files are there:
root@945f7bcb3520:/# ls /etc/freeradius
README.rst clients.conf experimental.conf huntgroups mods-config panic.gdb
proxy.conf sites-available templates.conf users
certs dictionary hints mods-available mods-enabled policy.d
radiusd.conf sites-enabled trigger.conf
But the host folder, ./freeradius, doesn't contain any conf files.
So, how can I make this work properly?
I have gotten a similar setup up and running with my config being loaded. All my configuration has been done according to the docker hub documentation. Here is my docker-compose.yml and Dockerfile for reference.
(I am aware that I could probably avoid the Dockerfile completely, but the advantage of this is that the Dockerfile is basically 1:1 to the official documentation..)
Run docker-compose up -d to start it. Both files should be in the parent directory of raddb.
Dockerfile
FROM freeradius/freeradius-server:latest
COPY raddb/ /etc/raddb/
EXPOSE 1812 1813
docker-compose.yml
version: '2.2'
services:
  freeradius:
    build:
      context: .
    container_name: freeradius
    ports:
      - "1812-1813:1812-1813/udp"
    restart: always
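As noted above, the Dockerfile could probably be avoided entirely. A compose-only variant might look like the sketch below; this is an assumption based on the same image and paths, not something I have tested, and the answer further down reports problems when bind-mounting a whole config directory, so it may not start cleanly:
version: '2.2'
services:
  freeradius:
    image: freeradius/freeradius-server:latest
    container_name: freeradius
    volumes:
      # bind-mount the prepared config instead of copying it in a Dockerfile
      - ./raddb:/etc/raddb
    ports:
      - "1812-1813:1812-1813/udp"
    restart: always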
You don't display your Dockerfile here. But I can guess that you are running a command in the Dockerfile that doesn't persist. It works from the command line, because /bin/bash will persist until you exit.
I have had this problem a couple times recently.
Your command to run the container directly
docker run --name my-radius -i -t freeradius/freeradius-server /bin/bash
is not equivalent to your docker-compose setup.
You are not mounting the config directory (also you are not publishing the container ports to the host - this will prevent you from accessing freeradius from outside a container).
I assume if you run your docker container mounting the volume
docker run --name my-radius -v ./freeradius:/etc/freeradius -i -t freeradius/freeradius-server /bin/bash
it will not work either.
For me, it didn't work when I tried to replace the whole config directory with a volume mount.
I had to mount components of the configuration individually. E.g.
-v ./freeradius/clients.conf:/etc/freeradius/clients.conf
Apparently, when you replace the whole directory something fails when starting freeradius. Excerpt from radius.log when mounting the whole config directory:
Fri Jan 13 10:49:22 2023 : Info: Debug state unknown (cap_sys_ptrace capability not set)
Fri Jan 13 10:49:22 2023 : Info: systemd watchdog is disabled
Fri Jan 13 10:49:22 2023 : Error: rlm_preprocess: Error reading /etc/freeradius/mods-config/preprocess/huntgroups
Fri Jan 13 10:49:22 2023 : Error: /etc/freeradius/mods-enabled/preprocess[13]: Instantiation failed for module "preprocess"
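For completeness, the per-file approach translated into the questioner's docker-compose file might look like this sketch (an assumption; clients.conf is just the example file from above, and any other customised files would be mounted the same way):
version: '3'
services:
  freeradius:
    image: freeradius/freeradius-server
    restart: always
    volumes:
      # mount individual config files instead of the whole /etc/freeradius directory
      - ./freeradius/clients.conf:/etc/freeradius/clients.conf
    ports:
      - "1812-1813:1812-1813/udp"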

Volume data does not fill when running a bamboo container on the server

I am trying to run Bamboo on a server using docker containers. When I run it on my local machine it works normally and the volume saves data successfully. But when I run the same docker-compose file on the server, the volume does not save my data.
docker-compose.yml
version: '3.2'
services:
  bamboo:
    container_name: bamboo-server_test
    image: atlassian/bamboo-server
    volumes:
      - ./volumes/bamboo_test_vol:/var/atlassian/application-data/bamboo
    ports:
      - 8085:8085
volumes:
  bamboo_test_vol:
Run this compose file on local machine
$ docker-compose up -d
Creating network "test_default" with the default driver
Creating volume "test_bamboo_test_vol" with default driver
Creating bamboo-server_test ... done
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
916c98ca1a9d atlassian/bamboo-server "/entrypoint.sh" 24 minutes ago Up 24 minutes 0.0.0.0:8085->8085/tcp, 54663/tcp bamboo-server_test
$ ls
docker-compose.yml volumes
$ cd volumes/bamboo_test_vol/
$ ls
bamboo.cfg.xml logs
localhost:8085
Run this compose file on server
$ ssh <name>@<ip_address>
password for <name>:
$ docker-compose up -d
Creating network "test_default" with the default driver
Creating volume "test_bamboo_test_vol" with default driver
Creating bamboo-server_test ... done
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
38b77e1b736f atlassian/bamboo-server "/entrypoint.sh" 12 seconds ago Up 11 seconds 0.0.0.0:8085->8085/tcp, 54663/tcp bamboo-server_test
$ ls
docker-compose.yml volumes
$ cd volumes/
$ cd bamboo_test_vol/
$ ls
$ # VOLUME PATH IS EMPTY
server_ip:8085
I didn't have this problem when I tried the same process for jira-software. Why doesn't it work for the Bamboo server even though I use the exact same compose file?
I had the same problem when I wanted to upgrade my Bamboo server instance with my mounted host volume for the bamboo-home directory.
The following was in my docker-compose file:
version: '2.2'
services:
  bamboo-server:
    image: atlassian/bamboo-server:${BAMBOO_VERSION}
    container_name: bamboo-server
    environment:
      TZ: 'Europe/Berlin'
    restart: always
    init: true
    volumes:
      - ./bamboo/bamboo-server/data:/var/atlassian/application-data/bamboo
    ports:
      - "8085:8085"
      - "54663:54663"
When I started it with docker-compose up -d bamboo-server, the container never picked up the files from the host system. So I first tried it without docker-compose, following Atlassian's Bamboo instructions, with the following command:
docker run -v ./bamboo/bamboo-server/data:/var/atlassian/application-data/bamboo --name="bamboo-server" --init -d -p 54663:54663 -p 8085:8085 atlassian/bamboo-server:${BAMBOO_VERSION}
The following error message was displayed:
docker: Error response from daemon: create ./bamboo/bamboo-server/data: "./bamboo/bamboo-server/data" includes invalid characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed. If you intended to pass a host directory, use absolute path.
So I followed the error message and used the absolute path:
docker run -v /var/project/bamboo/bamboo-server/data:/var/atlassian/application-data/bamboo --name="bamboo-server" --init -d -p 54663:54663 -p 8085:8085 atlassian/bamboo-server:${BAMBOO_VERSION}
After the successful start, I switched to the docker container via SSH and all files were as usual in the docker directory.
I transferred the whole thing to the docker-compose file and took the absolute path in the volumes section. Subsequently it also worked with the docker-compose file.
My docker-compose file then looked like this:
[...]
    init: true
    volumes:
      - /var/project/bamboo/bamboo-server/data:/var/atlassian/application-data/bamboo
    ports:
[...]
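Pieced together from the snippets above, the complete file presumably ends up looking something like this (a reconstruction, not the author's exact file; the absolute path is the one from the docker run command):
version: '2.2'
services:
  bamboo-server:
    image: atlassian/bamboo-server:${BAMBOO_VERSION}
    container_name: bamboo-server
    environment:
      TZ: 'Europe/Berlin'
    restart: always
    init: true
    volumes:
      # absolute host path instead of a relative one
      - /var/project/bamboo/bamboo-server/data:/var/atlassian/application-data/bamboo
    ports:
      - "8085:8085"
      - "54663:54663"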
Setting up a containerized Bamboo Server is not supported, for these reasons:
Repository-stored Specs (RSS) are no longer processed in Docker by default. Running RSS in Docker was not possible because:
there is no Docker capability added on the Bamboo server by default, and
the setup would require running Docker in Docker.

Why does docker compose exit right after starting?

I'm trying to configure docker-compose to use GreenPlum db in Ubuntu 16.04. Here is my docker-compose.yml:
version: '2'
services:
  greenplum:
    image: "pivotaldata/gpdb-base"
    ports:
      - "5432:5432"
    volumes:
      - gp_data:/tmp/gp
volumes:
  gp_data:
The issue is that when I run it with sudo docker-compose up, the GreenPlum db shuts down immediately after starting. It looks like this:
greenplum_1 | 20170602:09:01:01:000050 gpstart:e1ae49da386c:gpadmin-[INFO]:-Starting Master instance 72ba20be3774 directory /gpdata/master/gpseg-1
greenplum_1 | 20170602:09:01:02:000050 gpstart:e1ae49da386c:gpadmin-[INFO]:-Command pg_ctl reports Master 72ba20be3774 instance active
greenplum_1 | 20170602:09:01:02:000050 gpstart:e1ae49da386c:gpadmin-[INFO]:-No standby master configured. skipping...
greenplum_1 | 20170602:09:01:02:000050 gpstart:e1ae49da386c:gpadmin-[INFO]:-Database successfully started
greenplum_1 | ALTER ROLE
dockergreenplumn_greenplum_1 exited with code 0 <<----- Here
Actually, when I start it with just sudo docker run pivotaldata/gpdb-base it's ok.
What's wrong with the docker compose?
First of all, be cautious running this image: it looks to be badly maintained, and the information on Docker Hub indicates it's neither "official" nor "supported" in any way:
2017-01-09: Toolsmiths reviewed this image; it is not one we create. We make no promises about whether this is up to date or if it works. Feel free to email pa-toolsmiths@pivotal.io if you are the owner and are interested in collaborating with us on this image.
When using images from Docker Hub, it's recommended to either use official images or, when those aren't available, prefer automated builds (in which case the source code of the image can be verified to see what's used to build the image).
I think the image is built from this GitHub repository, which means it has not been updated for over a year and uses an outdated (CentOS 6.7) base image with a huge number of critical vulnerabilities.
Back to your question;
I tried starting the image, both with docker-compose and docker run, and both resulted in the same for me.
Looking at that image, it is designed to be run interactively, or to be used as a base image (and overriding the command).
I inspected the image to find out what the container's command is;
docker inspect --format='{{json .Config.Cmd}}' pivotaldata/gpdb-base
["/bin/sh","-c","echo \"127.0.0.1 $(cat ~/orig_hostname)\" >> /etc/hosts && service sshd start && su gpadmin -l -c \"/usr/local/bin/run.sh\" && /bin/bash"]
So, this is what's executed when the container is started:
echo "127.0.0.1 $(cat ~/orig_hostname)" >> /etc/hosts \
  && service sshd start \
  && su gpadmin -l -c "/usr/local/bin/run.sh" \
  && /bin/bash
Based on the above, there is no "foreground" process in the container, so the moment /usr/local/bin/run.sh finishes, a bash shell is started. A bash shell without a tty attached exits immediately, at which point the container exits.
To run this image
(Again; be cautious running this image)
Either run the image interactively, by passing it stdin and a tty (-i -t, or -it as a shorthand):
docker run -it pivotaldata/gpdb-base
Or run it "detached", as long as a tty is passed as well (add the -d and -t flags, or -dt as a shorthand); doing so keeps the container running in the background:
docker run -dit pivotaldata/gpdb-base
To do the same in docker-compose, add a tty to your service:
tty: true
Your compose file will then look like this:
version: '2'
services:
  greenplum:
    image: "pivotaldata/gpdb-base"
    ports:
      - "5432:5432"
    tty: true
    volumes:
      - gp_data:/tmp/gp
volumes:
  gp_data:

docker exec not working in docker-compose containers

I'm running two docker containers using docker-compose.
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
eef95ca1b59b gogent_qaf "/bin/sh -c ./slave.s" 14 seconds ago Up 12 seconds 4242/tcp, 7000-7005/tcp, 9999/tcp, 0.0.0.0:30022->22/tcp coreqafidm_qaf_1
a01373e893eb gogent_master "/bin/sh -c ./master." 15 seconds ago Up 13 seconds 4242/tcp, 0.0.0.0:27000->7000/tcp, 0.0.0.0:27001->7001/tcp, 0.0.0.0:27002->7002/tcp, 0.0.0.0:27003->7003/tcp, 0.0.0.0:29999->9999/tcp coreqafidm_master_1
When I try to use:
docker exec -it coreqafidm_qaf_1 /bin/bash
I get the error:
docker exec -it coreqafidm_qaf_1 /bin/bash
no such file or directory
Here is the docker-compose file:
version: '2'
services:
  master:
    image: gogent_master
    volumes:
      - .:/d1/apps/qaf
      - ./../core-idm-gogent/:/d1/apps/gogent
    ports:
      - "27000-27003:7000-7003"
      - "29999:9999"
    build:
      context: .
      dockerfile: Dockerfile.master
  qaf:
    image: gogent_qaf
    ports:
      - "30022:22"
    volumes:
      - .:/d1/apps/qaf
      - ./../core-idm-gogent/:/d1/apps/gogent
    depends_on: [master]
    build:
      context: .
      dockerfile: Dockerfile.qaf
Both Dockerfiles involved have this as their last WORKDIR command:
WORKDIR /d1/apps/qaf
If there is a REAL directory /d1/apps/qaf on the host's file system, docker exec works, to some degree: it will open a shell. However, the mapped-in volumes are not available to this shell, and the files I see are the ones in the real host directory, not what should be the mapped-in volume.
$ mkdir /d1/apps/qaf
$ docker exec -it coreqafidm_qaf_1 /bin/bash
root@eef95ca1b59b:/d1/apps/qaf#
root@eef95ca1b59b:/d1/apps/qaf# ls /d1/apps/gogent
ls: cannot access /d1/apps/gogent: No such file or directory
The volumes work correctly from within the docker-compose context. I have scripts executing in there and they work. It's just docker exec that fails to see the volumes.
The error stems from the container not finding /bin/bash, hence the no such file or directory error. The docker exec itself works fine, though.
Try with /bin/sh.
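For example, keeping the container name from the question:
docker exec -it coreqafidm_qaf_1 /bin/sh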
Well, I installed docker-compose etc. on a different machine and this problem was not there. Go figure. This is just one of those things I don't have time to track down.
