docker copy file from one container to another? - docker

Here is what I want to do:
docker-compose build
docker-compose $COMPOSE_ARGS run --rm task1
docker-compose $COMPOSE_ARGS run --rm task2
docker-compose $COMPOSE_ARGS run --rm combine-both-task2
docker-compose $COMPOSE_ARGS run --rm selenium-test
And a docker-compose.yml that looks like this:
task1:
  build: ./task1
  volumes_from:
    - task1_output
  command: ./task1.sh

task1_output:
  image: alpine:3.3
  volumes:
    - /root/app/dist
  command: /bin/sh

# essentially I want to copy task1 output into task2 because they each use different images and use different tech stacks...
task2:
  build: ../task2
  volumes_from:
    - task2_output
    - task1_output:ro
  command: /bin/bash -cx "mkdir -p task1 && cp -R /root/app/dist/* ."
So now all the required files are in the task2 container... how would I start up a web server and expose a port serving the content in task2?
I am stuck here... how do I access the stuff from task2_output in my combine-tasks/Dockerfile:
combine-both-task2:
  build: ../combine-tasks
  volumes_from:
    - task2_output

In recent versions of docker, named volumes replace data containers as the easy way to share data between containers.
docker volume create --name myshare
docker run -v myshare:/shared task1
docker run -v myshare:/shared -p 8080:8080 task2
...
Those commands will set up one local volume, and the -v myshare:/shared argument will make that volume available as the folder /shared inside each container.
To express that in a compose file:
version: '2'
services:
  task1:
    build: ./task1
    volumes:
      - 'myshare:/shared'
  task2:
    build: ./task2
    ports:
      - '8080:8080'
    volumes:
      - 'myshare:/shared'
volumes:
  myshare:
    driver: local
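Once that's running you can confirm the volume exists (compose prefixes the volume name with the project directory, so expect something like projectname_myshare):
docker volume ls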
To test this out, I made a small project:
- docker-compose.yml (above)
- task1/Dockerfile
- task1/app.py
- task2/Dockerfile
I used node's http-server as task2/Dockerfile:
FROM node
RUN npm install -g http-server
WORKDIR /shared
CMD http-server
and task1/Dockerfile used python:alpine, to show two different stacks writing and reading.
FROM python:alpine
WORKDIR /app
COPY . .
CMD python app.py
Here's task1/app.py:
import time

count = 0
while True:
    fname = '/shared/{}.txt'.format(count)
    with open(fname, 'w') as f:
        f.write('content {}'.format(count))
    count = count + 1
    time.sleep(10)
Take those four files and run them via docker-compose up in the directory with docker-compose.yml, then visit $DOCKER_HOST:8080 to see a steadily updated list of files.
Also, I'm using docker version 1.12.0 and compose version 1.8.0 but this should work for a few versions back.
And be sure to check out the docker docs for details I've probably missed here:
https://docs.docker.com/engine/tutorials/dockervolumes/

For me, the best way to copy a file from one container to another is docker cp, going through the host. For example, if you want to copy schema.xml from the apacheNutch container to the solr container:
docker cp apacheNutch:/root/nutch/conf/schema.xml /tmp/schema.xml
docker cp /tmp/schema.xml solr:/opt/solr-8.1.1/server/solr/configsets/nutch/conf
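If you'd rather skip the temporary file: recent versions of docker cp can also stream content as a tar archive through stdout/stdin using -, so the two steps above can be piped (same paths as the example):
docker cp apacheNutch:/root/nutch/conf/schema.xml - | docker cp - solr:/opt/solr-8.1.1/server/solr/configsets/nutch/conf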

Related

Adding Volume to docker container's /app Issue

I am having trouble creating a volume that maps to the directory /app in my container.
This is basically so that when I update the code I don't need to build the container again.
This is my Dockerfile:
# stage 1
FROM node:latest as node
WORKDIR /app
COPY . .
RUN npm install
RUN npm run build --prod
# stage 2
FROM nginx:alpine
COPY --from=node /app/dist/my-first-app /usr/share/nginx/html
I use this command to run the container
docker run -d -p 100:80/tcp -v ${PWD}:/app:/app docker-testing:v1
and no volume gets linked to it.
However, if I were to do this
docker run -d -p 100:80/tcp -v ${PWD} docker-testing:v1
I do get a volume at least
Anything obvious that I am doing wrong?
Thanks
The ${PWD}:/app:/app should be ${PWD}/app:/app.
If you explode ${PWD}, you'd obtain something like /home/user/src/thingy:/app:/app which does not make much sense.
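So the corrected command, mounting the ./app directory from the host, would be:
docker run -d -p 100:80/tcp -v ${PWD}/app:/app docker-testing:v1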
EDIT:
I'd suggest using docker-compose to avoid this kind of issue (it also greatly simplifies the commands needed to start containers).
In your case the docker-compose.yml would look like this:
version: "3"
services:
doctesting:
build: .
image: docker-testing:v1
volumes:
- "./app:/app"
ports:
- "100:80"
I didn't really test if it works, there might be typos...
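For reference, with that compose file in place the whole docker run invocation above reduces to:
docker-compose up -d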

container keeps exiting right after running it

docker-compose.yaml:
web:
  build: .
  command: ./main
  ports:
    - "8888:3412"
  volumes:
    - .:/code
  links:
    - redis
redis:
  image: redis
Dockerfile:
FROM golang:1.6
ADD main.go .
EXPOSE 3412
ENTRYPOINT /go
RUN go build main.go
So after running docker run -d imagename, there is no running container, and docker logs containername doesn't show anything.
ENTRYPOINT /go
is equivalent to running /bin/sh -c /go
go is actually a directory in your container, so it will fail because the shell cannot execute a directory.
Remove the background flag -d and use docker run imagename and you will see this error.
What you probably want is:
ENTRYPOINT /usr/local/go/bin/go to use go as an executable from the container.
Or even better:
ENTRYPOINT ["/usr/local/go/bin/go"], so you would be able to pass arguments to go.

Docker - run image from docker-compose

I have set up a docker-compose.yml file that runs a web service along with postgres.
It works nicely when I run it with docker-compose up.
docker-compose.yml:
version: '3'
services:
  db:
    image: postgres
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
Dockerfile:
FROM python:3
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
RUN pip install -r requirements.txt
ADD . /code/
CMD ["python", "manage.py", "runserver"]
Is there any way to construct an image out of the services?
I tried it with docker-compose build, but running the created image simply freezes the terminal.
Thanks!
docker-compose is a container orchestration tool, albeit a simple one, and not a bundler of multiple images and preferences into one. In fact, such a thing does not even exist.
What happens when you run docker-compose up is that it effectively runs docker-compose build for those images that need to be built, web in your example, and then effectively replaces the build: . with image: web and executes the configuration as defined by the compose file.
So if you were to run docker-compose build and then wanted to reproduce the configuration from the compose file by hand, you would need to do something along these lines (in order):
run docker-compose build or docker build -t web . to build the web image
run docker run --name db postgres
run docker run --name web -v "$PWD":/code -p 8000:8000 web python manage.py runserver 0.0.0.0:8000 (docker run requires an absolute host path for bind mounts, hence "$PWD" rather than the .:/code shorthand compose allows)
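One thing docker-compose also does for you here is create a shared network, so web can reach db by hostname. To mimic that manually you would need to create a network and attach both containers to it, something along these lines (the network name app_net is just an example):
docker network create app_net
docker run --name db --network app_net postgres
docker run --name web --network app_net -v "$PWD":/code -p 8000:8000 web python manage.py runserver 0.0.0.0:8000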

Docker swarm did not update service

I use docker stack deploy to deploy my python service.
First, I edit the code, then:
docker build . -f Dockerfile -t my_service:$TAG
docker tag my_service:$TAG register.xxx.com:5000/my_service:$TAG
When I use docker run -p 9000:9000 register.xxx.com:5000/my_service:$TAG, it works.
But when I use docker stack deploy -c docker-compose.yml my_service_stack, the service is still running the old code.
The relevant part of docker-compose.yaml:
web:
  image: register.xxx.com:5000/my_service:v0.0.12
  depends_on:
    - redis
    - db
    - rabbit
  links:
    - redis
    - db
    - rabbit
  volumes:
    - web_service_data:/home
  networks:
    - webnet
v0.0.12 == $TAG
Dockerfile:
```
FROM python:3.6.4
RUN useradd -ms /bin/bash gmt
RUN mkdir -p /home/logs
WORKDIR /home/gmt/src
COPY /src/requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple/
COPY /src .
RUN cat /home/gmt/src/setting/env.yaml
ENV PYTHONPATH=/home/gmt/src
CMD ["gunicorn", "-c", "/home/gmt/src/gunicornconf.py", "run:app"]
```
So, why?
I don't see that you actually pushed your image from your build server to your registry. I'll assume you're doing that after build and before deploy.
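If that step is missing, the swarm nodes have nothing new to pull. The push uses the tag from your build step:
docker push register.xxx.com:5000/my_service:$TAG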
You should not be using a volume for code. That volume will overwrite your /home in the container with the contents of the volume, which are likely stale. Using/storing code in volumes is an anti-pattern.
You don't need links:, they are legacy.
depends_on: is not used in swarm.
You should not store logs in the container, you should have them sent to stdout and stderr.
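With gunicorn (as in your CMD), that can be done with its standard --access-logfile and --error-logfile flags, where - means stdout/stderr:
CMD ["gunicorn", "-c", "/home/gmt/src/gunicornconf.py", "--access-logfile", "-", "--error-logfile", "-", "run:app"]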

docker-compose run returns /bin/ls cannot execute binary file

I have just started learning Docker, and have run into this issue which I don't know how to get around.
My Dockerfile looks like this:
FROM node:7.0.0
WORKDIR /app
COPY app /app
COPY hermes-entry /usr/local/bin
RUN chmod +x /usr/local/bin/hermes-entry
COPY entry.d /entry.d
RUN npm install
RUN npm install -g gulp
RUN npm install gulp
RUN gulp
My docker-compose.yml looks like this:
version: '2'
services:
  hermes:
    build: .
    container_name: hermes
    volumes:
      - ./app:/app
    ports:
      - "4000:4000"
    entrypoint: /bin/bash
    links:
      - postgres
    depends_on:
      - postgres
    tty: true
  postgres:
    image: postgres
    container_name: postgres
    volumes:
      - ~/.docker-volumes/hermes/postgresql/data:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: password
    ports:
      - "2345:5432"
After starting the containers up with:
docker-compose up -d
I tried running a simple bash cmd:
docker-compose run hermes ls
And I got this error:
/bin/ls cannot execute binary file
Any idea on what I am doing wrong?
The entrypoint to your container is bash. By default bash expects a shell script as its first argument, but /bin/ls is a binary, as the error says. If you want to run /bin/ls you need to use -c /bin/ls as your command. -c tells bash that the rest of the arguments are a command line rather than the path of a script, and the command line happens to be a request to run /bin/ls.
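In other words, since arguments after the service name are appended to the entrypoint, this runs /bin/bash -c /bin/ls:
docker-compose run hermes -c /bin/ls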
You can't run Gulp and Node at the same time in one container. Containers should always have one process each.
If you just want node to serve files, remove your entrypoint from the hermes service.
You can add another service to run gulp; if you are having it run tests, you'd have to map the same volume and add command: ["gulp"] (see the sketch below).
And you'd need to remove RUN gulp from your Dockerfile (unless you are using it to build your node files),
then run docker-compose up.
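A minimal sketch of what that extra gulp service could look like, reusing the build and volume from the hermes service (the service name hermes-gulp is hypothetical):
hermes-gulp:
  build: .
  volumes:
    - ./app:/app
  command: ["gulp"]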
