I want to know how to share an application folder from one container to another.
I found articles about how to share a folder between a container and the host, but I could not find anything about container to container.
My goal is to be able to edit the frontend application's code from the backend container, which is why I need to share the folder.
Any solution?
My config is like this:
/
- docker-compose.yml
- backend application
  - Dockerfile
- frontend application
  - Dockerfile
And docker-compose.yml is like this:
version: '3'
services:
  railsApp: # <- backend application
    build: .
    command: bundle exec rails s -p 5000 -b '0.0.0.0'
    volumes:
      - code_share:/var/web/railsApp
    ports:
      - "3000:3000"
  reactApp: # <- frontend application
    build: .
    command: yarn start
    volumes:
      - code_share:/var/web/reactApp
    ports:
      - "3000:3000"
volumes:
  code_share:
You are already mounting a named volume in both your frontend and backend containers.
With this configuration, /var/web/railsApp and /var/web/reactApp will both see the exact same content.
So whenever you write to /var/web/reactApp in the frontend container, the changes will also be reflected under /var/web/railsApp in the backend container.
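For example (a quick check, assuming the compose file above is up and the service names are unchanged), you can create a file through one service and list it through the other:

docker-compose exec reactApp touch /var/web/reactApp/hello.txt
docker-compose exec railsApp ls /var/web/railsApp   # hello.txt shows up here too

Both paths are views of the same code_share volume, so the file appears in both containers.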
To achieve what you want (having railsApp and reactApp as separate folders under /var/web), mount a folder from the host machine into both containers instead, and make sure each application writes into its respective /var/web folder correctly:
mkdir -p /var/web/railsApp /var/web/reactApp
then adjust your compose file:
version: '3'
services:
  railsApp: # <- backend application
    build: .
    command: bundle exec rails s -p 5000 -b '0.0.0.0'
    volumes:
      - /var/web:/var/web
    ports:
      - "3000:3000"
  reactApp: # <- frontend application
    build: .
    command: yarn start
    volumes:
      - /var/web:/var/web
    ports:
      - "3000:3000"
I have successfully containerized my basic Yii2 application with Docker and it runs on localhost:8000. However, I cannot use the app effectively, as most of its data is stored in migration files. Is there a way I could apply the migrations inside Docker after it is running (or during startup)?
This is my docker-compose file:
version: '2'
services:
  php:
    image: yiisoftware/yii2-php:7.1-apache
    volumes:
      - ~/.composer-docker/cache:/root/.composer/cache:delegated
      - ./:/app:delegated
    ports:
      - '8000:80'
    networks:
      - my-network
  db:
    image: mysql:5.7
    restart: always
    environment:
      - MYSQL_DATABASE=my-db
      - MYSQL_PASSWORD=password
      - MYSQL_ROOT_PASSWORD=password
    ports:
      - '3306:3306'
    expose:
      - '3306'
    volumes:
      - mydb:/var/lib/mysql
    networks:
      - my-network
  memcached:
    container_name: memcached
    image: memcached:latest
    ports:
      - "0.0.0.0:11211:11211"
volumes:
  restatdb:
networks:
  my-network:
    driver: bridge
and my Dockerfile
FROM alpine:3.4
ADD . /
COPY ./config/web.php ./config/web.php
COPY . /var/www/html
# Let docker create a volume for the session dir.
# This keeps the session files even if the container is rebuilt.
VOLUME /var/www/html/var/sessions
It is possible to run yii commands in Docker. First, let the yii2 container run in the background or in another terminal tab. The yii commands can then be run using docker exec, which lets us interact with the running container:
sudo docker exec -i <container-ID> php yii migrate/up
You can get the container ID using
sudo docker ps
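Alternatively (a sketch, assuming the compose file above where the service is named php), you can let docker-compose resolve the service name for you and run the migrations non-interactively:

sudo docker-compose exec php php yii migrate --interactive=0

This saves looking up the container ID by hand.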
I have a Docker image in a GitLab registry. When I run the following (after logging in on the target machine):
docker run -d -p 8081:8080/tcp gitlab.somedomain.com:5050/root/app
the Laravel app is available, running, and reachable. Things like php artisan config:clear work, and when I enter the container everything looks fine.
But I don't have any services running. So I had the idea to create a yml file, docker-compose-gitlab.yml, for docker-compose to set things up:
version: '3'
services:
  mysql:
    image: mysql:5.7
    container_name: my-mysql
    environment:
      - MYSQL_ROOT_PASSWORD=***
      - MYSQL_DATABASE=dbname
      - MYSQL_USER=username
      - MYSQL_PASSWORD=***
    volumes:
      - ./data/mysql:/var/lib/mysql
    ports:
      - "3307:3306"
  application:
    image: gitlab.somedomain.com:5050/root/app:latest
    build:
      context: .
      dockerfile: ./Dockerfile
    container_name: my-app
    ports:
      - "8081:8080"
    volumes:
      - .:/application
    env_file: .env.docker
    working_dir: /application
    depends_on:
      - mysql
    links:
      - mysql
Calling docker-compose --verbose -f docker-compose-gitlab.yml up shows me that the mysql service is created and working; the app also seems to be created, but then fails, exiting with code 0 and no further message.
If I add commands to my yml like php artisan config:clear, the error gets even less clear to me: it says it cannot find artisan, and it seems as if the command is executed outside the container, exiting with code 1. (artisan is a helper executed via php.)
When I call docker-compose with -d and then run docker ps, I can only see mysql running, but not the app.
When I use both strategies, the problem is that the two containers do not share a common network and so cannot work together.
What did I miss? Is this the wrong strategy?
The problem is that I had a leftover volume directive which overwrites my entire application with an empty directory.
You can just leave that out:
version: '3'
services:
  mysql:
    image: mysql:5.7
    container_name: my-mysql
    environment:
      - MYSQL_ROOT_PASSWORD=***
      - MYSQL_DATABASE=dbname
      - MYSQL_USER=username
      - MYSQL_PASSWORD=***
    volumes:
      - ./data/mysql:/var/lib/mysql
    ports:
      - "3307:3306"
  application:
    image: gitlab.somedomain.com:5050/root/app:latest
    build:
      context: .
      dockerfile: ./Dockerfile
    container_name: my-app
    ports:
      - "8081:8080"
    ## volumes:
    ##   - .:/application  ## this would overwrite the app
    env_file: .env.docker
    working_dir: /application
    depends_on:
      - mysql
    links:
      - mysql
You can debug the containers' networking by listing the networks with docker network ls,
then, once the list is shown, inspecting the compose network with docker inspect <ComposeNetworkID>.
Once you are sure that your services are not in the same network, remove your containers and recreate them with docker-compose -f docker-compose-gitlab.yml up.
If you notice they are in the same network, try using the container name instead of localhost to reach each other.
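Putting those steps together (a sketch; the network name depends on your project folder):

docker network ls                                   # find the compose network, e.g. <project>_default
docker inspect <ComposeNetworkID>                   # check which containers are attached
docker-compose -f docker-compose-gitlab.yml down    # remove the containers ...
docker-compose -f docker-compose-gitlab.yml up      # ... and recreate them on one network

Inside the compose network, the app reaches the database by the service name mysql, not by localhost.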
I've been developing a service locally against localstack. I've just been running their Docker image via docker run --rm -p 4567-4583:4567-4583 -p 8080:8080 localstack/localstack
And then I manually run a small script to set up my S3 buckets, SQS queues, etc.
Now, I'd like to make this easier for others, so I thought I'd just add a Dockerfile and a docker-compose.yml file. Unfortunately, when I try to get this up and running with docker-compose up, I get an error that the command from my setup script can't connect to the localstack services:
make_bucket failed: s3://localbucket Could not connect to the endpoint URL: "http://localhost:4572/localbucket"
Dockerfile:
FROM localstack/localstack
# Since this is just a local dev setup, localstack doesn't require
# anything specific here.
ENV AWS_DEFAULT_REGION='[useast1]'
ENV AWS_ACCESS_KEY_ID='[lloyd]'
ENV AWS_SECRET_ACCESS_KEY='[christmas]'
COPY bin/localSetup.sh /localSetup.sh
COPY fixtures/notifications.json /notifications.json
RUN ["chmod", "+x", "/localSetup.sh"]
RUN pip install awscli
# expose service & web dashboard ports
EXPOSE 4567-4582 8080
ENTRYPOINT ["/localSetup.sh"]
docker-compose.yml
version: '3'
services:
  localstack:
    build: .
    ports:
      - "8080:8080"
      - "4567-4582:4567-4582"
localSetup.sh
#!/bin/bash
aws --endpoint-url=http://localhost:4572 s3 mb s3://localbucket
#additional similar calls but left off for brevity
I've tried switching localhost to 127.0.0.1 in my script commands, but I wind up with the same error. I'm probably missing something silly here.
There is another way to create your custom AWS resources when localstack freshly starts up. Since you already have a bash script for your resources, you can simply volume mount your script into the init directory: /docker-entrypoint-initaws.d/ on older localstack versions, or /etc/localstack/init/ready.d/ on current ones.
So my docker-compose file would be:
localstack:
  image: localstack/localstack:latest
  container_name: localstack_aws
  ports:
    - '4566:4566'
  volumes:
    - './localSetup.sh:/etc/localstack/init/ready.d/init-aws.sh'
Also, I would prefer awslocal over aws --endpoint in the bash script, as it handles the endpoint and credentials for you.
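With that change, the first call in localSetup.sh shrinks to something like this (a sketch; awslocal is preinstalled in the localstack image and targets the local endpoint automatically):

#!/bin/bash
awslocal s3 mb s3://localbucket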
Try adding a hostname to the docker-compose file and editing your entrypoint script to use that hostname.
docker-compose.yml
version: '3'
services:
  localstack:
    build: .
    hostname: localstack
    ports:
      - "8080:8080"
      - "4567-4582:4567-4582"
localSetup.sh
#!/bin/bash
aws --endpoint-url=http://localstack:4572 s3 mb s3://localbucket
This was the docker-compose-dev.yaml I used for testing an app against localstack. I brought it up with docker-compose -f docker-compose-dev.yaml up, and I used the same localSetup.sh you did.
version: '3'
services:
  localstack:
    image: localstack/localstack
    hostname: localstack
    ports:
      - "4567-4584:4567-4584"
      - "${PORT_WEB_UI-8082}:${PORT_WEB_UI-8082}"
    environment:
      - SERVICES=s3
      - DEBUG=1
      - DATA_DIR=${DATA_DIR- }
      - PORT_WEB_UI=${PORT_WEB_UI- }
      - DOCKER_HOST=unix:///var/run/docker.sock
    volumes:
      - "${TMPDIR:-/tmp/localstack}:/tmp/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"
    networks:
      - backend
  sample-app:
    image: "sample-app/sample-app:latest"
    networks:
      - backend
    links:
      - localstack
    depends_on:
      - "localstack"
networks:
  backend:
    driver: 'bridge'
I'm on Fedora 23 and I'm using docker-compose to build two containers: app and db.
I want to use Docker as my dev environment, but having to run docker-compose build and up every time I change the code isn't nice. So I searched around and tried the volumes option, but my code doesn't get copied into the container.
When I run docker-compose build, a RUN ls command doesn't list the app folder or any of its files.
Note: in the root folder I have docker-compose.yml, .gitignore, app (folder), and db (folder).
Note 2: if I remove the volumes and working_dir options and instead use a COPY . /app command inside app/Dockerfile, it works and my app runs, but I want it to sync my code.
Does anyone know how to make this work?
My docker-compose file is:
version: '2'
services:
  app:
    build: ./app
    ports:
      - "3000:3000"
    depends_on:
      - db
    environment:
      - DATABASE_HOST=db
      - DATABASE_USER=myuser
      - DATABASE_PASSWORD=mypass
      - DATABASE_NAME=dbusuarios
      - PORT=3000
    volumes:
      - ./app:/app
    working_dir: /app
  db:
    build: ./db
    environment:
      - MYSQL_ROOT_PASSWORD=123
      - MYSQL_DATABASE=dbusuarios
      - MYSQL_USER=myuser
      - MYSQL_PASSWORD=mypass
Here you can see my app container Dockerfile:
https://gist.github.com/jradesenv/d3b5c09f2fcf3a41f392d665e4ca0fb9
Here's the output of the RUN ls command inside the Dockerfile:
bin
boot
dev
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
A volume is mounted in a container. The Dockerfile is used to create the image, and that image is used to make the container. That means a RUN ls inside your Dockerfile shows the filesystem before the volume is mounted. If you need these files to be part of the image for your build to complete, they shouldn't be in the volume; copy them with the COPY command as you've described. If you simply want evidence that these files are mounted inside your running container, run a
docker exec $container_name ls -l /
Where $container_name will be something like ${folder_name}_app_1, which you'll see in a docker ps.
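Equivalently (a one-off sketch using the compose file above), you can let compose mount the volume and list the directory in a throwaway container:

docker-compose run --rm app ls -l /app

Unlike RUN ls in the Dockerfile, this runs after the ./app bind mount is in place, so your code should show up.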
Two things: have you tried version: '3'? Version 2 seems to be outdated. Also, try putting working_dir into the Dockerfile rather than docker-compose; maybe it's not supported in version 2.
This is a recent docker-compose I have used with volumes and workdirs in the respective Dockerfiles:
version: '3'
services:
  frontend:
    build:
      context: ./frontend
      dockerfile: Dockerfile.dev
    ports:
      - 3001:3001
    volumes:
      - ./frontend:/app
    networks:
      - frontend
  backend:
    build: .
    ports:
      - 3000:3000
    volumes:
      - .:/app
    networks:
      - frontend
      - backend
    depends_on:
      - "mongo"
  mongo:
    image: mongo
    volumes:
      - ./data/db:/data/db
    ports:
      - 27017:27017
    networks:
      - backend
networks:
  frontend:
  backend:
You can also extend or override your docker-compose configuration. See https://docs.docker.com/compose/extends/ for more info.
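For example (a minimal sketch; docker-compose automatically merges a file named docker-compose.override.yml over docker-compose.yml), you could keep the bind mount out of the base file and add it only for development:

# docker-compose.override.yml
version: '2'
services:
  app:
    volumes:
      - ./app:/app
    working_dir: /app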
I had this same issue on Windows!
volumes:
  - ./src/:/var/www/html
On Windows, the ./src/ syntax might not work in the regular command prompt, so use PowerShell instead and then run docker-compose up -d.
It should work if it's a mounting issue.
I have a Ruby on Rails project which I want to place into containers (there are database, redis, and web containers; the web container includes the Rails project). I want to add a search feature, so I added a sphinx container to my compose file.
docker-compose.yml
web:
  dockerfile: Dockerfile-rails
  build: .
  command: bundle exec rails s -p 3000 -b '0.0.0.0'
  ports:
    - "3000:3000"
  links:
    - redis
    - db
    **- sphinx**
  environment:
    - REDISTOGO_URL=redis://user@redis:6379/
redis:
  image: redis
**sphinx:
  image: centurylink/sphinx**
db:
  dockerfile: Dockerfile-db
  build: .
  env_file: .env_db
docker-compose build works fine, but when I run docker-compose up I get:
ERROR: Cannot start container 096410dafc86666dcf1ffd5f60ecc858760fb7a2b8f2352750f615957072d961: Cannot link to a non running container: /metartaf_sphinx_1 AS /metartaf_web_1/sphinx_1
How can I fix this?
According to https://hub.docker.com/r/centurylink/sphinx/, the Sphinx container needs some configuration files to run properly. See the Daemonized usage section (2): you need data source files and a configuration.
In my test, it fails to start as is with error:
FATAL: no readable config file (looked in /usr/local/etc/sphinx.conf, ./sphinx.conf)
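A hypothetical fix (a sketch; the target path comes from the error message above, and you must supply your own sphinx.conf plus the data sources it references) is to mount a config file into the sphinx service:

sphinx:
  image: centurylink/sphinx
  volumes:
    - ./sphinx.conf:/usr/local/etc/sphinx.conf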
Your docker-compose.yml shouldn't have those ** markers in it.
If you want the latest sphinx version, you can do this:
web:
  dockerfile: Dockerfile-rails
  build: .
  command: bundle exec rails s -p 3000 -b '0.0.0.0'
  ports:
    - "3000:3000"
  links:
    - redis
    - db
    - sphinx
  environment:
    - REDISTOGO_URL=redis://user@redis:6379/
redis:
  image: redis
sphinx:
  image: centurylink/sphinx:latest
db:
  dockerfile: Dockerfile-db
  build: .
  env_file: .env_db
If you want a specific version, write it like this: centurylink/sphinx:2.1.8