How to get volume data using an ssh connection - Docker

I'm trying to pull Docker volume data from a remote server. Right now my Docker volume data is on my local machine, but I want to make this data available to everyone, so I copied the volume data to the server. How do I point to the file path of my volume data on that server in the compose file? Like:
docker-compose.yml
version: '3.2'
services:
  jira:
    container_name: jira-software_8.3
    image: atlassian/jira-software:8.3
    volumes:
      # How to get volume data using ssh connection
      - <user_name>@<server_ip>:<server_path>:<container_path>
    ports:
      - '8080:8080'
    environment:
      - 'JIRA_DATABASE_URL=postgresql://jira@postgresql/jira_db'
      - 'JIRA_DB_PASSWORD=docker'
volumes:
  jira_data:
    external: false

You cannot mount remote server files and folders; Docker resolves mount sources in the local context.
So the workaround is to copy the data at run time and mount that directory into the container:
scp -r -i yourkey.pem centos@host.example.com:/home/centos/backup ./app/ && docker run --rm -it -v $PWD/app:/app alpine ash -c "ls /app"
Your current docker-compose contains the following, which will break, because you cannot bind a remote path:
volumes:
  # How to get volume data using ssh connection
  - <user_name>@<server_ip>:<server_path>:<container_path>
Update the docker-compose file to mount the local copy instead:
volumes:
  - ./app/:/app/
Then run the docker-compose up command like:
scp -r -i yourkey.pem centos@host.example.com:/home/centos/backup ./app/ && docker-compose up
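Putting it together, a sketch of the corrected jira service (the ./app path assumes the scp destination used above; the rest of the original file stays the same):
services:
  jira:
    container_name: jira-software_8.3
    image: atlassian/jira-software:8.3
    volumes:
      # local copy of the remote data, fetched by the scp step above
      - ./app/:/app/
    ports:
      - '8080:8080'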

Related

Starting docker containers

I have a docker-compose.yml file that starts two services: amazon/dynamodb-local on port 8000, and django-service. django-service runs tests that depend on dynamodb-local.
Here is working docker-compose.yml:
version: '3.8'
services:
  dynamodb-local:
    image: "amazon/dynamodb-local:latest"
    container_name: dynamodb-local
    ports:
      - "8000:8000"
  django-service:
    depends_on:
      - dynamodb-local
    image: django-service
    build:
      dockerfile: Dockerfile
      context: .
    env_file:
      - envs/tests.env
    volumes:
      - ./:/app
    command: sh -c 'cd /app && pytest tests/integration/ -vv'
Now I need to run this without docker-compose, using only docker itself. I tried the following:
docker network create -d bridge net  # create a network for dynamodb-local and django-service
docker run --network=net --rm -p 8000:8000 -d amazon/dynamodb-local:latest  # run the container attached to the network
docker run --network=net --rm --env-file ./envs/tests.env -v `pwd`:/app django-service /bin/sh -c 'env && cd /app && pytest tests/integration -vv'
I can see that both services start, but I can't connect to dynamodb-local from the tests.
Where is the problem? Any comment or help is appreciated!
Through the docker-compose.yml, the amazon/dynamodb-local container has a name defined (container_name: dynamodb-local; if we do not set this property, docker-compose uses the service's name as the container name). This enables other containers in the same network to address the container through its name.
In the docker run command, we do not set an explicit container name, so other containers cannot resolve the hostname dynamodb-local. We can set an explicit container name by executing docker run ... --name dynamodb-local .... More details can be found in the corresponding docker run documentation.
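A sketch of the corrected commands, assuming the tests reach the database at the hostname dynamodb-local:
docker network create -d bridge net
docker run --network=net --name dynamodb-local --rm -p 8000:8000 -d amazon/dynamodb-local:latest
docker run --network=net --rm --env-file ./envs/tests.env -v `pwd`:/app django-service /bin/sh -c 'cd /app && pytest tests/integration -vv'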

Run docker-compose without installation

I'm trying to run docker-compose without installing it, using the docker/compose image (with docker run).
So I tried this:
docker run -ti --rm -v $PWD:/XXX docker/compose:1.24.1 up -d
The problem is that I don't know the directory name inside the docker/compose container (XXX here) to use as the mount point for my current folder.
Any ideas...?
Thanks!
You can bind mount your local docker-compose.yaml to any place; just remember to tell docker-compose where it is with -f, like this:
docker run -ti --rm -v /var/run/docker.sock:/var/run/docker.sock -v ${PWD}:/code docker/compose:1.24.1 -f /code/docker-compose.yaml up -d
Meanwhile, don't forget to bind mount your host machine's docker.sock into the container.
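As a variation (a convenience not mentioned in the original answer): docker run accepts -w/--workdir, so you can set the working directory to the mount point and drop -f, since docker-compose looks for docker-compose.yaml in the current directory:
docker run -ti --rm -v /var/run/docker.sock:/var/run/docker.sock -v ${PWD}:/code -w /code docker/compose:1.24.1 up -d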
That XXX folder can be anything inside the container. Basically, in the -v option of docker run, the syntax is -v [host-directory]:[container-directory].
If you are trying to run docker-compose up inside the container, then follow these steps:
Create a directory on the host: mkdir /root/test
Create a docker-compose.yaml file with the following contents:
version: '2'
services:
  web:
    build: .
    ports:
      - "5000:5000"
    volumes:
      - .:/code
  redis:
    image: redis
Run the docker run command to execute docker-compose inside the container:
docker run -itd -v /var/run/docker.sock:/var/run/docker.sock -v /root/test/:/var/tmp/ docker/compose:1.24.1 -f /var/tmp/docker-compose.yaml up -d
NOTE: Here the /var/tmp directory inside the container contains the docker-compose.yaml file, so I have used the -f option to specify the complete path of the yaml file.
Hope this helps.

How to copy files to a Docker volume and use that volume with docker-compose

There is a webservice running in a Docker container.
This webservice relies on big JSON files to boot.
I created a Docker volume to store the JSON files with docker volume create my-api-files.
Here is the docker-compose file for the webservice:
version: '3'
services:
  my-api:
    image: node:alpine
    expose:
      - ${NODE_PORT}
    volumes:
      - ./:/api
      - my-api-files:/files
    working_dir: /api
    command: npm run start
volumes:
  my-api-files:
    external: true
Now, how can I copy the JSON files to the my-api-files Docker volume before starting the webservice with docker-compose up?
You could run a temporary container with that volume plus a bind mount to your host files, and copy from there (the wildcard has to be expanded by a shell inside the container, hence ash -c):
docker run --rm -it -v my-api-files:/temporary -v $PWD/jsonFileLocation:/big-data alpine ash -c 'cp /big-data/*.json /temporary'
docker run --rm -it -v my-api-files:/test alpine ls /test
You should see your JSON files in there.
EDIT: Of course, replace $PWD/jsonFileLocation with the location of your JSON files, using your operating system's path syntax.
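To seed the volume and boot the service in one step (a sketch reusing the names above; /src and /files are arbitrary mount points inside the temporary container):
docker run --rm -v my-api-files:/files -v $PWD/jsonFileLocation:/src alpine ash -c 'cp /src/*.json /files' && docker-compose up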

Unable to start container in docker compose using a new image

I have a service in my docker compose file that's based on a different image than my local Dockerfile. I'm able to start it manually with a direct docker run command, but not with docker compose.
docker-compose.yml
...
  omf:
    image: myuser/myapp
    working_dir: /code
    command: python -u myapp.py
    network_mode: host
    ports:
      - "5001:5000"
I get this error:
omf_1 | python: can't open file 'python': [Errno 2] No such file or directory
grip-server_omf_1 exited with code 2
This works:
docker run -p 5001:5000 -it --entrypoint=/bin/bash myuser/myapp
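The thread stops there, but the error message points at a likely cause (an assumption, not confirmed in the thread): the image's ENTRYPOINT is apparently already python, so the command from the compose file gets appended to it and the container effectively runs python python -u myapp.py. Overriding the entrypoint in the compose file, the same way the working docker run command does with --entrypoint, would look like:
  omf:
    image: myuser/myapp
    working_dir: /code
    # setting entrypoint in compose also clears the image's default ENTRYPOINT/CMD
    entrypoint: ["python", "-u", "myapp.py"]
    ports:
      - "5001:5000"
Note also that published ports are ignored under network_mode: host, so ports: and host networking should not be combined.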

Launching docker command from docker-compose v.3 file

I'm learning about Docker and I'm at the first steps.
I have to 'refresh' the postgres image from the compose file to initialize the db scripts, as YOSIFKIT does through the shell here: https://github.com/docker-library/postgres/issues/193.
Here is my Dockerfile:
FROM postgres:9.6.7
COPY docker-postgresql-9.6.7/prova.sql /docker-entrypoint-initdb.d/
and here is my compose file:
version: '3'
services:
  postgresql_rdbms:
    restart: always
    image: postgres-prova
    build:
      context: ../
      dockerfile: docker-postgresql-9.6.7/Dockerfile
    command: bash -c "docker run -it --rm postgres-prova ls -ln /docker-entrypoint-initdb.d && docker run -it --rm postgres-prova && postgres"
    environment:
      PG_PASSWORD: postgres
    ports:
      - "5432:5432"
    volumes:
      - /srv/docker/postgresql:/var/lib/postgresql
HOW can I insert a command in a compose file to do "docker run -it --rm imageToReload"?
Because I've seen that command: in the compose file runs inside the container, but I want to operate ON the container, at a higher level (= manage the container from the compose file, after the container creation).
Thank you very much
From what I understand, you want docker-compose to delete/remove the container after every run, so that the build runs each time and a fresh prova.sql file can be copied into the image each time the service is brought up. The --force-recreate flag is probably what you need.
The command directive within the yaml file only provides the command that is run inside the container; it cannot manage the container itself.
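A sketch of the invocation: --build rebuilds the image so the fresh prova.sql is copied in, and --force-recreate replaces the container even if its configuration has not changed:
docker-compose up --build --force-recreate
Keep in mind that the official postgres image runs the scripts in /docker-entrypoint-initdb.d only when the data directory is empty (that is the point of the linked issue), so you may also need to clear the /srv/docker/postgresql volume for the init script to run again.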
