Trouble with exec command on mariadb container - docker

I want to access the db directly through command prompt.
I run the command:
docker exec -it container_name -u user_name -p
Instead of a line asking me the user password I get the following error message:
OCI runtime exec failed: exec failed: container_linux.go:349: starting container process caused "exec: \"-u\": executable file not found in $PATH": unknown
I've tried several other commands of the same type:
docker exec -it container_name -u -p
docker exec -it container_name -u user_name
docker exec -it container_name -u root
I don't use a Dockerfile but the image directly:
version: '3'
services:
  frontend:
    build: ./frontend
    volumes:
      - ./frontend/:/app
    ports:
      - ${PORT_FRONTEND}:${PORT_FRONTEND}
    container_name: dev_frontend
  database:
    image: mariadb
    environment:
      - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
      - MYSQL_DATABASE=${MYSQL_DATABASE}
      - MYSQL_USER=${MYSQL_USER}
      - MYSQL_PASSWORD=${MYSQL_PASSWORD}
      - ALLOW_EMPTY_PASSWORD=${ALLOW_EMPTY_PASSWORD}
    ports:
      - "${PORT_MARIADB}:${PORT_MARIADB}"
    volumes:
      - ./volumes/database/:/var/lib/mysql:rw
    container_name: dev_mariadb
I use a .env file to store sensitive data instead of placing it in the docker-compose.yml.
If anyone has any idea of how to track down the issue, I'd be grateful.
Thanks!

docker exec treats everything after the container name as the command to run inside the container, so Docker tried to execute a program literally named "-u" (hence the "executable file not found in $PATH" error). Pass mysql as the command and give the credentials to it:
docker exec -it container_name mysql -u{username} -p{userpassword}
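As a variant (a sketch, assuming the dev_mariadb container name and the MYSQL_ROOT_PASSWORD variable from the compose file above), you can avoid retyping the password by reusing the environment variable that is already set inside the container:

```shell
# Open a mysql shell as root inside the dev_mariadb container,
# reading the password from the container's own environment.
docker exec -it dev_mariadb sh -c 'exec mysql -uroot -p"$MYSQL_ROOT_PASSWORD"'
```

Note the single quotes: they prevent your host shell from expanding the variable, so it is resolved inside the container instead.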

Related

How to get a bash terminal in an IPFS docker container

I am running IPFS with docker. The docker-compose.yml is as follows
#version of docker compose
version: '3'
services:
  ipfshost:
    image: ipfs/kubo:latest
    container_name: ipfs
    env_file:
      - .env
    ports:
      - 4001:4001
      - 4001:4001/udp
      - 8081:8080
      - 5001:5001
    volumes:
      - ./ipfs_staging:/export
      - ./ipfs_data:/data/ipfs
      - ./config:/data/ipfs/config
volumes:
  ipfs_staging:
    external: true
  ipfs_data:
    external: true
Currently the only way I can find to run CLI commands is using docker exec ipfs ipfs [command]. It's annoying to have to prepend docker exec ipfs to every command, but docker exec -it ipfs bash does not load a terminal.
The error is:
OCI runtime exec failed: exec failed: unable to start container process: exec: "bash": executable file not found in $PATH: unknown
I also tried docker exec -it ipfs /bin/bash
The resulting error is:
OCI runtime exec failed: exec failed: unable to start container process: exec: "/bin/bash": stat /bin/bash: no such file or directory: unknown
Is there a fairly straightforward way to solve this? The errors seem to indicate there is no bash software in the docker container.
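The errors do point at a missing bash: the ipfs/kubo image is built on a minimal BusyBox-style base that ships /bin/sh but no bash (an assumption worth verifying against the exact image tag you pull). Two sketches, using the container name ipfs from the compose file above:

```shell
# Use the minimal shell that the image does ship:
docker exec -it ipfs /bin/sh

# To stop prepending "docker exec ipfs" to every CLI call,
# shadow the ipfs command with a host-side alias:
alias ipfs='docker exec ipfs ipfs'
```

With the alias in place, `ipfs swarm peers` on the host transparently runs inside the container.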

How to connect local redis to docker container using docker compose

I am facing an issue with connect local redis with docker container, this is my docker compose file.
version: "3"
services:
  test:
    build: .
    stdin_open: true
    tty: true
    command: nodemon --delay 6 index.js
    volumes:
      - .:/opt/test
    ports:
      - "5007:5007"
    links:
      - redis
  redis:
    image: redis:latest
    container_name: qbo_redis
    restart: always
    ports:
      - "6379:6379"
But it is not working.
Getting error:
OCI runtime exec failed: exec failed: container_linux.go:345: starting container process caused "exec: \"3ed84a23c52d\": executable file not found in $PATH": unknown
If i remove the redis entries from docker-compose.yml file, its working without any error.
docker ps -q returns a list of container IDs; when you substitute that list into docker exec, the extra IDs are treated as the command to run inside the container, which is exactly the "3ed84a23c52d": executable file not found error above. Use docker ps -ql to get only the last container created.
redis images don't contain Bash. They're based on Alpine Linux, but you can use /bin/sh. Try this way:
docker exec -it qbo_redis /bin/sh
or
docker exec -it $(docker ps -ql) /bin/sh
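If the app itself then fails to reach Redis, keep in mind that each container has its own localhost: from the test service, the server must be addressed by its compose service name. A quick check (service names assumed from the compose file above):

```shell
# "redis" resolves to the redis container on the compose network:
docker-compose exec test getent hosts redis

# The app should therefore connect to redis:6379, not 127.0.0.1:6379.
```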

docker-compose run commands after up

I have the following docker-compose file
version: '3.2'
services:
  nd-db:
    image: postgres:9.6
    ports:
      - 5432:5432
    volumes:
      - type: volume
        source: nd-data
        target: /var/lib/postgresql/data
      - type: volume
        source: nd-sql
        target: /sql
    environment:
      - POSTGRES_USER="admin"
  nd-app:
    image: node-docker
    ports:
      - 3000:3000
    volumes:
      - type: volume
        source: ndapp-src
        target: /src/app
      - type: volume
        source: ndapp-public
        target: /src/public
    links:
      - nd-db
volumes:
  nd-data:
  nd-sql:
  ndapp-src:
  ndapp-public:
nd-app contains a migrations.sql and seeds.sql file. I want to run them once the container is up.
If I ran the commands manually they would look like this
docker exec nd-db psql admin admin -f /sql/migrations.sql
docker exec nd-db psql admin admin -f /sql/seeds.sql
When you run up with this docker-compose file, it runs the container entrypoint command for both the nd-db and nd-app containers as part of starting them. In the case of nd-db, this does some prep work and then starts the postgres database.
The entrypoint command is defined in the Dockerfile, combining the configured ENTRYPOINT and CMD. What you might do is override the ENTRYPOINT in a custom Dockerfile, or override it in your docker-compose.yml.
Looking at the postgres:9.6 Dockerfile, it has the following two lines:
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["postgres"]
You could add the following to your nd-db configuration in docker-compose.yml to retain the existing entrypoint but also "daisy-chain" a custom migration-script.sh step.
entrypoint: ["docker-entrypoint.sh", "migration-script.sh"]
The custom script needs only one special behavior: it needs to do a passthru execution of any following arguments, so the container continues on to start postgres:
#!/usr/bin/env bash
set -exo pipefail
psql admin admin -f /sql/migrations.sql
psql admin admin -f /sql/seeds.sql
# Hand control back to the remaining arguments so the container
# continues on to start postgres as usual.
exec "$@"
Does docker-compose -f path/to/config.yml exec nd-db psql admin admin -f /sql/migrations.sql work?
I've found that you have to specify the config file and the service name when running commands from the laptop.
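An alternative worth considering: the official postgres image executes any *.sql files mounted into /docker-entrypoint-initdb.d the first time the data directory is initialized, which avoids entrypoint chaining entirely. A sketch against the compose file above (the host-side ./sql paths are an assumption; files run in lexical order, and only on first init of an empty data volume):

```yaml
services:
  nd-db:
    image: postgres:9.6
    volumes:
      # Prefix with numbers to control execution order:
      - ./sql/migrations.sql:/docker-entrypoint-initdb.d/01-migrations.sql
      - ./sql/seeds.sql:/docker-entrypoint-initdb.d/02-seeds.sql
```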

How do I convert this docker command to docker-compose?

I run this command manually:
$ docker run -it --rm \
--network app-tier \
bitnami/cassandra:latest cqlsh --username cassandra --password cassandra cassandra-server
But I don't know how to convert it to a docker compose file, specially the container's custom properties such as --username and --password.
What should I write in a docker-compose.yaml file to obtain the same result?
Thanks
Here is a sample of how others have done it. http://abiasforaction.net/apache-cassandra-cluster-docker/
Running a command at startup maps to the command: key; setting arguments maps to environment:.
Remember, just because you can doesn't mean you should: Compose is not always the best way to launch something, and it can often be the lazy way.
If you're running this as a service, I'd suggest building a Dockerfile to start from and then creating systemd/init scripts to rm/relaunch it.
An example cassandra docker-compose.yml might be
version: '2'
services:
  cassandra:
    image: 'bitnami/cassandra:latest'
    ports:
      - '7000:7000'
      - '7001:7001'
      - '9042:9042'
      - '9160:9160'
    volumes:
      - 'cassandra_data:/bitnami'
volumes:
  cassandra_data:
    driver: local
although this will not pass your command-line arguments, but will start the container with its default CMD or ENTRYPOINT.
Since you are actually running a command other than the default, you might not want to do this with docker-compose. Alternatively, you can create a new Docker image with this command as the default and provide the username and password as ENVs,
e.g. something like this (untested)
FROM bitnami/cassandra:latest
ENV USER=cassandra
ENV PASSWORD=password
# Shell form, so $USER and $PASSWORD are expanded at runtime;
# the JSON (exec) form does not perform variable substitution.
CMD cqlsh --username "$USER" --password "$PASSWORD" cassandra-server
and you can build it
docker build -t mycassandra .
and run it with something like:
docker run -it -e "USER=foo" -e "PASSWORD=bar" mycassandra
or in docker-compose
services:
  cassandra:
    image: 'mycassandra'
    ports:
      - '7000:7000'
      - '7001:7001'
      - '9042:9042'
      - '9160:9160'
    environment:
      USER: user
      PASSWORD: pass
    volumes:
      - 'cassandra_data:/bitnami'
volumes:
  cassandra_data:
    driver: local
You might be looking for something like the following. Not sure if it is going to help you...
version: '3'
services:
  my_app:
    image: bitnami/cassandra:latest
    # The sh -c script must be quoted as a single argument,
    # otherwise only "cqlsh" reaches the shell:
    command: /bin/sh -c "cqlsh --username cassandra --password cassandra cassandra-server"
    ports:
      - "8080:8080"
    networks:
      - app-tier
networks:
  app-tier:
    external: true

Docker-compose Daemon mode logs

I'm running several containers in daemon mode: docker-compose up -d.
One of them recently crashed.
I'd like to investigate what happened. Where can I find the app logs?
Here's the docker-compose.yml (nothing special regarding logging):
mongodb:
image: mongo
command: "--smallfiles --logpath=/dev/null"
web:
build: .
command: npm start
volumes:
- .:/myapp
ports:
- "3001:3000"
links:
- mongodb
environment:
PORT: 3000
NODE_ENV: 'production'
seed:
build: ./seed
links:
- mongodb
You can get logs via docker-compose logs, or you can attach (docker >= 1.3) to the running instance via docker exec:
$ docker exec -i -t 6655b41beef /bin/bash #by ID
or
$ docker exec -i -t my_www /bin/bash #by Name
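Since the container crashed, attaching won't work; its logs do survive a crash, though, as long as the container hasn't been removed. A few hedged variants, with the web service name taken from the compose file above:

```shell
# Logs for one service only:
docker-compose logs web

# Inspect the last lines of the stopped container directly,
# resolving its ID through compose:
docker logs --tail 100 $(docker-compose ps -q web)
```

`docker logs` reads everything the process wrote to stdout/stderr, which is why mongodb's `--logpath=/dev/null` above would leave that service with nothing to show.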
