docker exec not working in docker-compose containers - docker

I'm executing two docker containers using docker compose.
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
eef95ca1b59b gogent_qaf "/bin/sh -c ./slave.s" 14 seconds ago Up 12 seconds 4242/tcp, 7000-7005/tcp, 9999/tcp, 0.0.0.0:30022->22/tcp coreqafidm_qaf_1
a01373e893eb gogent_master "/bin/sh -c ./master." 15 seconds ago Up 13 seconds 4242/tcp, 0.0.0.0:27000->7000/tcp, 0.0.0.0:27001->7001/tcp, 0.0.0.0:27002->7002/tcp, 0.0.0.0:27003->7003/tcp, 0.0.0.0:29999->9999/tcp coreqafidm_master_1
When I try to use:
docker exec -it coreqafidm_qaf_1 /bin/bash
I get the error:
docker exec -it coreqafidm_qaf_1 /bin/bash
no such file or directory
Here is the docker-compose file:
version: '2'
services:
master:
image: gogent_master
volumes:
- .:/d1/apps/qaf
- ./../core-idm-gogent/:/d1/apps/gogent
ports:
- "27000-27003:7000-7003"
- "29999:9999"
build:
context: .
dockerfile: Dockerfile.master
qaf:
image: gogent_qaf
ports:
- "30022:22"
volumes:
- .:/d1/apps/qaf
- ./../core-idm-gogent/:/d1/apps/gogent
depends_on: [master]
build:
context: .
dockerfile: Dockerfile.qaf
Both Docker files involved have as their last WORKDIR command:
WORKDIR /d1/apps/qaf
If there is a REAL directory /d1/apps/qaf on the machine's native file system, docker exec works, to some degree. It will open up a shell. However, the mapped-in volumes are not available to this shell, and the files I see are the ones in the real, native directory, not what should be the mapped-in volume.
$ mkdir /d1/apps/qaf
$ docker exec -it coreqafidm_qaf_1 /bin/bash
root@eef95ca1b59b:/d1/apps/qaf#
root@eef95ca1b59b:/d1/apps/qaf# ls /d1/apps/gogent
ls: cannot access /d1/apps/gogent: No such file or directory
The volumes work correctly from within the docker-compose context. I have scripts executing in there and they work. It's just docker exec that fails to see the volumes.

The error stems from the container not finding /bin/bash, hence the no such file or directory error. docker exec itself is working fine.
Try with /bin/sh.
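For example (a sketch, assuming the container name from the question), you can fall back to /bin/sh and check which shells the image actually ships:

```shell
# /bin/sh exists in virtually every base image (including busybox/alpine),
# while /bin/bash is often absent from slim images.
docker exec -it coreqafidm_qaf_1 /bin/sh

# Diagnostic: list which shells are actually present in the container
# (ls will report "No such file or directory" for any that are missing).
docker exec coreqafidm_qaf_1 sh -c 'ls -l /bin/sh /bin/bash'
```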

Well, I installed docker-compose etc. on a different machine and this problem was not there. Go figure. This is just one of those things I don't have time to track down.


Docker-compose volume not storing files

I'm having trouble demonstrating that data I generate on a shared volume is persistent, and I can't figure out why. I have a very simple docker-compose file:
version: "3.9"
# Define network
networks:
sorcernet:
name: sorcer_net
# Define services
services:
preclean:
container_name: cleaner
build:
context: .
dockerfile: DEESfile
image: dees
networks:
- sorcernet
volumes:
- pgdata:/usr/share/appdata
#command: python run dees.py
process:
container_name: processor
build:
context: .
dockerfile: OASISfile
image: oasis
networks:
- sorcernet
volumes:
- pgdata:/usr/share/appdata
volumes:
pgdata:
name: pgdata
Running the docker-compose file to keep the containers running in the background:
vscode ➜ /com.docker.devenvironments.code $ docker compose up -d
[+] Running 4/4
⠿ Network sorcer_net Created
⠿ Volume "pgdata" Created
⠿ Container processor Started
⠿ Container cleaner Started
Both images are running:
vscode ➜ /com.docker.devenvironments.code $ docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
oasis latest e2399b9954c8 9 seconds ago 1.09GB
dees latest af09040befd5 31 seconds ago 1.08GB
and the volume shows up as expected:
vscode ➜ /com.docker.devenvironments.code $ docker volume ls
DRIVER VOLUME NAME
local pgdata
Running the docker container, I navigate to the volume folder. There's nothing in the folder -- this is expected.
vscode ➜ /com.docker.devenvironments.code $ docker run -it oasis
[root@049dac037802 opt]# cd /usr/share/appdata/
[root@049dac037802 appdata]# ls
[root@049dac037802 appdata]#
Since there's nothing in the folder, I create a file called "dog.txt" and recheck the folder contents. The file is there. I exit the container.
[root@049dac037802 appdata]# touch dog.txt
[root@049dac037802 appdata]# ls
dog.txt
[root@049dac037802 appdata]# exit
exit
To check the persistence of the data, I re-run the container, but nothing is written to the volume.
vscode ➜ /com.docker.devenvironments.code $ docker run -it oasis
[root@1787d76a54b9 opt]# cd /usr/share/appdata/
[root@1787d76a54b9 appdata]# ls
[root@1787d76a54b9 appdata]#
What gives? I've tried defining the volume as persistent, and I know each of the images has a folder at /usr/share/appdata.
If you want to check the persistence of the data in the containers defined in your docker compose, the --volumes-from flag is the way to go.
When you run
docker run -it oasis
This newly created container shares the same image, but it doesn't know anything about the volumes defined.
In order to link the volume to the new container run this
docker run -it --volumes-from $CONTAINER_NAME_CREATED_FROM_COMPOSE oasis
Now this container shares the volume pgdata.
You can go ahead and create files at /usr/share/appdata and validate their persistence
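Alternatively (a sketch, relying on the named volume pgdata from the compose file above), you can mount the named volume directly instead of borrowing mounts from an existing container:

```shell
# Mount the compose-created named volume straight into the new container.
# This works here because the compose file pins the volume name with
# `name: pgdata`; without that, compose would prefix it with the project name.
docker run -it -v pgdata:/usr/share/appdata oasis
```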

Behat container keeps exiting

I'm using the following code to create a Behat docker container:
version: '3'
services:
behat:
container_name: behat.test
image: docksal/behat
# command: tail -F anything
# tty: true
# ports:
# - 4444:4444
# restart: always
But I'm experiencing a persistent problem: the container keeps exiting with code 1, and as a result I can't interact with it.
All the commented out portions of the code are what I have tried to resolve the issue.
Here is the output for the Behat container when I run docker-compose up -d --build:
d8c515771e4d docksal/behat "behat" 6 seconds ago Exited (1) 5 seconds ago behat.test
Update
I found Behat was reporting the following error :
`FeatureContext` context class not found and can not be used.
FeatureContext context class not found and can not be used.
This means Behat fails to find the features it needs; you can use behat --init to scaffold them.
$ docker run --rm -v $(pwd):/src docksal/behat --init
+d features - place your *.feature files here
+d features/bootstrap - place your context classes here
+f features/bootstrap/FeatureContext.php - place your definitions, transformations and hooks here
Then a features folder will appear on your host; run the command again and it works now:
$ docker run --rm -v $(pwd):/src docksal/behat
No scenarios
No steps
0m0.00s (7.73Mb)
For docker-compose it's the same: you need to make sure the features directory is mounted into the container. You can follow the official folder structure for your design; a workable setup is:
In the current folder:
docker run --rm -v $(pwd):/src docksal/behat --init
Write a docker-compose.yaml like this:
version: "2.1"
services:
# Behat
behat:
hostname: behat
image: ${BEHAT_IMAGE:-docksal/behat}
volumes:
- .:/src
# Run a built-in web server for access to HTML reports
ports:
- 8000:8000
entrypoint: "php -S 0.0.0.0:8000"
docker-compose up -d
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED
STATUS PORTS NAMES
55f506afe31a docksal/behat "php -S 0.0.0.0:8000" 29 seconds ago Up 25 seconds 0.0.0.0:8000->8000/tcp 2020121502_behat_1
$ docker logs 2020121502_behat_1
PHP 7.3.25 Development Server started at Tue Dec 15 05:39:00 2020

how to run freeradius using docker-compose

Could you advise how to run FreeRADIUS using docker-compose?
My compose file is below; the container stops automatically within a second.
version: '3'
services:
freeradius:
image: freeradius/freeradius-server
restart: always
volumes:
- ./freeradius:/etc/freeradius
ports:
- "1812-1813:1812-1813/udp"
volumes:
freeradius:
But when I run it with docker directly, it runs:
docker run --name my-radius -i -t freeradius/freeradius-server /bin/bash
Inside, it displays the configuration files:
root@945f7bcb3520:/# ls /etc/freeradius
README.rst clients.conf experimental.conf huntgroups mods-config panic.gdb
proxy.conf sites-available templates.conf users
certs dictionary hints mods-available mods-enabled policy.d
radiusd.conf sites-enabled trigger.conf
but the volume folder ./freeradius doesn't contain any conf files.
So, how can I make this work properly?
I have gotten a similar setup up and running with my config being loaded. All my configuration has been done according to the docker hub documentation. Here is my docker-compose.yml and Dockerfile for reference.
(I am aware that I could probably avoid the Dockerfile completely, but the advantage of this is that the Dockerfile is basically 1:1 to the official documentation..)
run docker-compose up -d to run it. Both files should be in the parent directory of raddb
Dockerfile
FROM freeradius/freeradius-server:latest
COPY raddb/ /etc/raddb/
EXPOSE 1812 1813
docker-compose.yml
version: '2.2'
services:
freeradius:
build:
context: .
container_name: freeradius
ports:
- "1812-1813:1812-1813/udp"
restart: always
You don't display your Dockerfile here. But I can guess that you are running a command in the Dockerfile that doesn't persist. It works from the command line, because /bin/bash will persist until you exit.
I have had this problem a couple times recently.
Your command to run the container directly
docker run --name my-radius -i -t freeradius/freeradius-server /bin/bash
is not equivalent to your docker-compose setup.
You are not mounting the config directory (also you are not publishing the container ports to the host - this will prevent you from accessing freeradius from outside a container).
I assume if you run your docker container mounting the volume
docker run --name my-radius -v ./freeradius:/etc/freeradius -i -t freeradius/freeradius-server /bin/bash
it will not work either.
For me, it didn't work when I tried to replace the whole config directory with a volume mount.
I had to mount components of the configuration individually. E.g.
-v ./freeradius/clients.conf:/etc/freeradius/clients.conf
Apparently, when you replace the whole directory, something fails when FreeRADIUS starts. Excerpt from radius.log when mounting the whole config directory:
Fri Jan 13 10:49:22 2023 : Info: Debug state unknown (cap_sys_ptrace capability not set)
Fri Jan 13 10:49:22 2023 : Info: systemd watchdog is disabled
Fri Jan 13 10:49:22 2023 : Error: rlm_preprocess: Error reading /etc/freeradius/mods-config/preprocess/huntgroups
Fri Jan 13 10:49:22 2023 : Error: /etc/freeradius/mods-enabled/preprocess[13]: Instantiation failed for module "preprocess"
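Translated into compose syntax, the per-file mounts would look something like this (a sketch based on the original compose file; which files you actually need to override depends on your setup):

```yaml
version: '3'
services:
  freeradius:
    image: freeradius/freeradius-server
    restart: always
    volumes:
      # Mount individual config files instead of replacing /etc/freeradius wholesale,
      # so the rest of the image's default configuration stays intact.
      - ./freeradius/clients.conf:/etc/freeradius/clients.conf
      - ./freeradius/users:/etc/freeradius/users
    ports:
      - "1812-1813:1812-1813/udp"
```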

Volume data does not fill when running a bamboo container on the server

I am trying to run Bamboo on a server using Docker containers. When I run it on my local machine it works normally and the volume saves data successfully. But when I run the same docker-compose file on the server, the volume does not save my data.
docker-compose.yml
version: '3.2'
services:
bamboo:
container_name: bamboo-server_test
image: atlassian/bamboo-server
volumes:
- ./volumes/bamboo_test_vol:/var/atlassian/application-data/bamboo
ports:
- 8085:8085
volumes:
bamboo_test_vol:
Run this compose file on local machine
$ docker-compose up -d
Creating network "test_default" with the default driver
Creating volume "test_bamboo_test_vol" with default driver
Creating bamboo-server_test ... done
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
916c98ca1a9d atlassian/bamboo-server "/entrypoint.sh" 24 minutes ago Up 24 minutes 0.0.0.0:8085->8085/tcp, 54663/tcp bamboo-server_test
$ ls
docker-compose.yml volumes
$ cd volumes/bamboo_test_vol/
$ ls
bamboo.cfg.xml logs
localhost:8085
Run this compose file on server
$ ssh <name>@<ip_address>
password for <name>:
$ docker-compose up -d
Creating network "test_default" with the default driver
Creating volume "test_bamboo_test_vol" with default driver
Creating bamboo-server_test ... done
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
38b77e1b736f atlassian/bamboo-server "/entrypoint.sh" 12 seconds ago Up 11 seconds 0.0.0.0:8085->8085/tcp, 54663/tcp bamboo-server_test
$ ls
docker-compose.yml volumes
$ cd volumes/
$ cd bamboo_test_vol/
$ ls
$ # VOLUME PATH IS EMPTY
server_ip:8085
I didn't have this problem when I tried the same process for Jira Software. Why doesn't it work for the Bamboo server even though I use the exact same compose file?
I had the same problem when I wanted to upgrade my Bamboo server instance with my mounted host volume for the bamboo-home directory.
The following was in my docker-compose file:
version: '2.2'
bamboo-server:
image: atlassian/bamboo-server:${BAMBOO_VERSION}
container_name: bamboo-server
environment:
TZ: 'Europe/Berlin'
restart: always
init: true
volumes:
- ./bamboo/bamboo-server/data:/var/atlassian/application-data/bamboo
ports:
- "8085:8085"
- "54663:54663"
When I started it with docker-compose up -d bamboo-server, the container never took the files from the host system. So I first tried it without docker-compose, following the Atlassian Bamboo instructions, with the following command:
docker run -v ./bamboo/bamboo-server/data:/var/atlassian/application-data/bamboo --name="bamboo-server" --init -d -p 54663:54663 -p 8085:8085 atlassian/bamboo-server:${BAMBOO_VERSION}
The following error message was displayed:
docker: Error response from daemon: create ./bamboo/bamboo-server/data: "./bamboo/bamboo-server/data" includes invalid characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed. If you intended to pass a host directory, use absolute path.
So I followed the error message and used the absolute path:
docker run -v /var/project/bamboo/bamboo-server/data:/var/atlassian/application-data/bamboo --name="bamboo-server" --init -d -p 54663:54663 -p 8085:8085 atlassian/bamboo-server:${BAMBOO_VERSION}
After the successful start, I switched to the docker container via SSH and all files were as usual in the docker directory.
I transferred the whole thing to the docker-compose file and used the absolute path in the volumes section. Subsequently it also worked with the docker-compose file.
My docker-compose file then looked like this:
[...]
init: true
volumes:
- /var/project/bamboo/bamboo-server/data:/var/atlassian/application-data/bamboo
ports:
[...]
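If you want to avoid hard-coding /var/project, one portable option (a sketch; the directory layout is assumed from the answer above) is to build the absolute host path from the current directory before calling docker run:

```shell
# Expand the relative data directory to an absolute host path,
# since `docker run -v` rejects relative paths as volume sources.
HOST_DIR="$(pwd)/bamboo/bamboo-server/data"
echo "$HOST_DIR"

# Hypothetical usage with the image from the answer:
# docker run -v "$HOST_DIR":/var/atlassian/application-data/bamboo \
#   --name="bamboo-server" --init -d -p 54663:54663 -p 8085:8085 \
#   atlassian/bamboo-server:${BAMBOO_VERSION}
```

In a compose file you could similarly write `- ${PWD}/bamboo/bamboo-server/data:/var/atlassian/application-data/bamboo`, since compose performs environment-variable substitution on the volumes section.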
Setting up a containerized Bamboo Server is not supported, for these reasons:
Repository-stored Specs (RSS) are no longer processed in Docker by default. Running RSS in Docker was not possible because:
there is no Docker capability added on the Bamboo server by default,
the setup would require running Docker in Docker.

Docker not mapping changes from local project to the container in windows

I am trying to use a Docker volume/bind mount so that I don't need to rebuild my project after every small change. I do not get any error, but changes in the local files are not visible in the container, so I still have to rebuild the project for the new filesystem snapshot.
The following solution seemed to work for some people, therefore:
I have tried restarting Docker and resetting credentials at Docker Desktop --> Settings --> Shared Drives
Here is my docker-compose.yml file
version: '3'
services:
web:
build:
context: .
dockerfile: Dockerfile.dev
ports:
- "3000:3000"
volumes:
- /app/node_modules
- .:/app
I have tried through the Docker CLI too, but the problem persists
docker build -f Dockerfile.dev .
docker run -p 3000:3000 -v /app/node_modules -v ${pwd}:/app image-id
Windows does copy the files in the current directory to the container, but they are not kept in sync.
I am using Windows 10 PowerShell and Docker version 18.09.2
UPDATE:
I have checked the container contents using the command
docker exec -t -i container-id sh
and printed the file contents using the command
cat filename
From this it is clear that the files the container references have been updated, but I still don't understand why I have to restart the container to see the changes in the browser.
Shouldn't they be apparent after just refreshing the tab?
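If the dev server inside the container relies on inotify file-watching, events often fail to propagate across a Windows bind mount. A common workaround (assuming this is a webpack/Create React App-style dev server that watches files via chokidar, which the compose file above suggests but does not confirm) is to switch the watcher to polling:

```yaml
version: '3'
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile.dev
    environment:
      # chokidar (used by webpack/CRA dev servers) falls back to polling,
      # which detects changes even when the bind mount delivers no
      # filesystem events to the container.
      - CHOKIDAR_USEPOLLING=true
    ports:
      - "3000:3000"
    volumes:
      - /app/node_modules
      - .:/app
```

Polling costs some CPU, but it makes hot reload work reliably on Windows and macOS bind mounts.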
