I'm on Fedora 23 and I'm using docker-compose to build two containers: app and db.
I want to use this Docker setup as my dev environment, but having to run docker-compose build and up every time I change the code isn't nice. So I searched around and tried the "volumes" option, but my code doesn't get copied into the container.
When I run docker-compose build, a "RUN ls" command doesn't list the "app" folder or any of its files.
Note: in the root folder I have: docker-compose.yml, .gitignore, app (folder), db (folder)
Note¹: If I remove the volumes and working_dir options and instead use a "COPY . /app" command inside app/Dockerfile, it works and my app runs, but I want it to sync my code.
Does anyone know how to make this work?
My docker-compose file is:
version: '2'
services:
  app:
    build: ./app
    ports:
      - "3000:3000"
    depends_on:
      - db
    environment:
      - DATABASE_HOST=db
      - DATABASE_USER=myuser
      - DATABASE_PASSWORD=mypass
      - DATABASE_NAME=dbusuarios
      - PORT=3000
    volumes:
      - ./app:/app
    working_dir: /app
  db:
    build: ./db
    environment:
      - MYSQL_ROOT_PASSWORD=123
      - MYSQL_DATABASE=dbusuarios
      - MYSQL_USER=myuser
      - MYSQL_PASSWORD=mypass
Here you can see my app container Dockerfile:
https://gist.github.com/jradesenv/d3b5c09f2fcf3a41f392d665e4ca0fb9
Here's the output of the RUN ls command inside the Dockerfile:
bin
boot
dev
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
A volume is mounted in a container. The Dockerfile is used to create the image, and that image is used to make the container. That means a RUN ls inside your Dockerfile shows the filesystem before the volume is mounted. If you need these files to be part of the image for your build to complete, they shouldn't be in the volume, and you'll need to copy them with the COPY command as you've described. If you simply want evidence that these files are mounted inside your running container, run:
docker exec $container_name ls -l /
where $container_name will be something like ${folder_name}_app_1, which you'll see in the output of docker ps.
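For example, if the project folder were called myproject (the container name below is just an illustration; use whatever docker ps actually reports):

# find the generated container name
docker ps
# inspect the mount point inside the running container
docker exec myproject_app_1 ls -l /app

The contents of the bind-mounted ./app should show up there, even though they were absent during the image build.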
Two things: have you tried version: '3'? Version 2 seems to be outdated. Also, try putting the working_dir into the Dockerfile (as WORKDIR) rather than into docker-compose.yml; maybe it's not supported in version 2?
This is a recent docker-compose I have used with volumes and workdirs in the respective Dockerfiles:
version: '3'
services:
  frontend:
    build:
      context: ./frontend
      dockerfile: Dockerfile.dev
    ports:
      - 3001:3001
    volumes:
      - ./frontend:/app
    networks:
      - frontend
  backend:
    build: .
    ports:
      - 3000:3000
    volumes:
      - .:/app
    networks:
      - frontend
      - backend
    depends_on:
      - "mongo"
  mongo:
    image: mongo
    volumes:
      - ./data/db:/data/db
    ports:
      - 27017:27017
    networks:
      - backend
networks:
  frontend:
  backend:
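The Dockerfile.dev referenced above isn't shown here; a minimal sketch for a Node-based frontend might look like the following (the base image, port, and start script are assumptions, not part of the original setup):

FROM node:14
# the working directory lives in the Dockerfile rather than in docker-compose.yml
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3001
CMD ["npm", "start"]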
You can also extend or override Docker Compose configuration; see https://docs.docker.com/compose/extends/ for more info.
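As a rough illustration of that override mechanism (the service name and mount simply mirror the compose file above), a docker-compose.override.yml placed next to the main file is merged automatically by docker-compose up and could carry the development-only bind mount:

# docker-compose.override.yml
version: '3'
services:
  frontend:
    volumes:
      - ./frontend:/app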
I had this same issue on Windows!
volumes:
  - ./src/:/var/www/html
On Windows, the ./src/ syntax might not work in the regular command prompt, so use PowerShell instead and then run docker-compose up -d.
It should work if it's a mounting issue.
Dockerfile:
FROM hseeberger/scala-sbt:8u222_1.3.5_2.13.1
WORKDIR /code/SimpleStocks
COPY ./SimpleStocks .
RUN sbt dist
WORKDIR /code/SimpleStocks/target/universal
RUN unzip simplestocks-0.0.1.zip
WORKDIR /code/SimpleStocks/target/universal/simplestocks-0.0.1
CMD ["bin/simplestocks"]
docker-compose.yml:
version: "3.7"
services:
app:
container_name: simple-stocks
image: simple-stocks:1.0.0
build: .
ports:
- '9000:9000'
volumes:
- .:/code
links:
- pgdb1
pgdb1:
image: postgres
environment:
POSTGRES_DB: simple_stocks
POSTGRES_USER: postgres
POSTGRES_PASSWORD: postgres
volumes:
- pgdb1data:/var/lib/postgresql/data/
- ./docker_postgres_init.sql:/docker-entrypoint-initdb.d/docker_postgres_init.sql
ports:
- '5432:5432'
volumes:
pgdb1data:
When I manually run the simple-stocks container using docker run -it {imageId}, I am able to run it successfully; but on doing docker compose up I receive:
Error response from daemon: OCI runtime create failed:
container_linux.go:380: starting container process caused: exec:
"bin/simplestocks": stat bin/simplestocks: no such file or directory:
unknown
Your Dockerfile is building the application in /code/SimpleStocks/target/universal/simplestocks-0.0.1, but then your Compose file bind-mounts a host directory over /code, which hides everything the Dockerfile does. The bind mount is unnecessary and deleting it will resolve this issue.
Bind-mounting a host directory over your entire built application usually is not a best practice. I most often see it trying to convince Docker to emulate a local development environment, but even that approach doesn't make sense for a compiled language like Scala.
You can safely remove the volumes: block. The obsolete links: can also be removed. You don't need to manually specify container_name:, nor do you need to specify both build: and image: unless you're planning to push the built image to a registry. That would reduce the Compose setup to just:
version: '3.8'
services:
  app:
    build: .
    ports:
      - '9000:9000'
  pgdb1:
    # (as in the question originally)
volumes:
  pgdb1data:
I have 4 services to run through docker compose:
version: "3"
services:
billingmock:
build:
context: ./mock/soap/billing
dockerfile: ./Dockerfile
ports:
- 8096:8096
salcusmock:
build:
context: ./mock/soap/salcus
dockerfile: ./Dockerfile
ports:
- 8088:8088
ngocsrestmock:
build:
context: ./mock/rest/ngocs-rest
dockerfile: ./Dockerfile
volumes:
- /test/mock-data/Ngocs-Rest-Mock:/usr/src/ngocs-rest-mock/
ports:
- 8091:8091
kafka:
image: <some-repo>.com/mce/kafka_local_r20-11
ports:
- 9092:9092
- 8080:8080
- 8081:8081
- 8082:8082
but the ngocs container is not running, while all the other containers are. When I check the log of that container I get: Exited (1) 36 seconds ago
Error: Unable to access jarfile mocks-mock-ngocs-rest-executable-1.0.0-SNAPSHOT.jar
The Dockerfile for that service is:
FROM openjdk:8
COPY /executable/target/mocks-mock-ngocs-rest-executable-1.0.0-SNAPSHOT.jar /usr/src/ngocs-rest-mock/
WORKDIR /usr/src/ngocs-rest-mock/
ENTRYPOINT ["java","-jar","mocks-mock-ngocs-rest-executable-1.0.0-SNAPSHOT.jar"]
I have to start the container manually and then it runs, but the volume is not mounted. What seems to be the issue? Also, if I remove the volumes section in docker-compose, it runs.
If you have a volumes: entry that binds a host directory to a container directory, then at container startup time the contents of that host directory always completely hide anything that was in the underlying image. In your case, you're mounting a directory over the directory that contains the jar file, so the actual application gets hidden.
You should restructure your application to keep the data somewhere separate from the application code. Using simple top-level directories like /app and /data is common enough, or you can make the data directory a subdirectory of your application directory.
Once you've done this, you can change the volumes: mount to a different directory:
# for example, a "data" subdirectory of the application directory
volumes:
  - /test/mock-data/Ngocs-Rest-Mock:/usr/src/ngocs-rest-mock/data
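The service's Dockerfile could then create that subdirectory explicitly; this is just a sketch based on the Dockerfile from the question, with the mkdir line as the only addition:

FROM openjdk:8
COPY /executable/target/mocks-mock-ngocs-rest-executable-1.0.0-SNAPSHOT.jar /usr/src/ngocs-rest-mock/
WORKDIR /usr/src/ngocs-rest-mock/
# keep mock data separate from the application jar so the bind mount can't hide it
RUN mkdir -p data
ENTRYPOINT ["java","-jar","mocks-mock-ngocs-rest-executable-1.0.0-SNAPSHOT.jar"]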
I want to know how to share an application folder between containers.
I found articles about how to share a folder between a container and the host, but I could not find anything about sharing between containers.
I want to edit the frontend application's code from the backend, so I need to share the folder <- this is my goal.
Any solution?
My config is like this
/
├── docker-compose.yml
├── backend application
│   └── Dockerfile
└── frontend application
    └── Dockerfile
And docker-compose.yml is like this:
version: '3'
services:
  railsApp: # <- backend application
    build: .
    command: bundle exec rails s -p 5000 -b '0.0.0.0'
    volumes:
      - code_share:/var/web/railsApp
    ports:
      - "3000:3000"
  reactApp: # <- frontend application
    build: .
    command: yarn start
    volumes:
      - code_share:/var/web/reactApp
    ports:
      - "3000:3000"
volumes:
  code_share:
You are already mounting a named volume in both your frontend and backend now.
According to your configuration, both /var/web/railsApp and /var/web/reactApp will see the exact same content.
So whenever you write to /var/web/reactApp in your frontend application container, the changes will also be reflected in /var/web/railsApp in the backend.
To achieve what you want (having railsApp and reactApp under /var/web), try mounting a folder on the host machine into both containers (and make sure each application writes into its respective /var/web folder correctly):
mkdir -p /var/web/railsApp /var/web/reactApp
then adjust your compose file:
version: '3'
services:
  railsApp: # <- backend application
    build: .
    command: bundle exec rails s -p 5000 -b '0.0.0.0'
    volumes:
      - /var/web:/var/web
    ports:
      - "3000:3000"
  reactApp: # <- frontend application
    build: .
    command: yarn start
    volumes:
      - /var/web:/var/web
    ports:
      - "3000:3000"
I am using Docker which is running fine.
I can start a Docker image using docker-compose.
docker-compose rm nodejs; docker-compose rm db; docker-compose up --build
I attached a shell to the Docker container using
docker exec -it nodejs_nodejs_1 bash
I can view files inside the container
(inside container)
cat server.js
Now when I edit the server.js file on the host, I would like the file inside the container to change without having to restart Docker.
I have tried adding volumes to the docker-compose.yml file or to the Dockerfile, but somehow I cannot get it to work.
(Dockerfile, not working)
FROM node:10
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
VOLUMES ["/usr/src/app"]
EXPOSE 8080
CMD [ "npm", "run", "watch" ]
or
(docker-compose.yml, not working)
version: "3.3"
services:
nodejs:
build: ./nodejs-server
ports:
- "8001:8080"
links:
- db:db
env_file:
- ./.env-example
volumes:
- src: /usr/src/app
db:
build: ./mysql-server
volumes:
- ./mysql-server/data:/docker-entrypoint-initdb.d #A folder /mysql-server/data with a .sql file needs to exist
env_file:
- ./.env-example
volumes:
src:
There is probably a simple guide somewhere, but I haven't found it yet.
If you want a copy of the files to be visible in the container, use a bind mount (aka host volume) instead of a named volume.
Assuming your docker-compose.yml file is at the root of the directory tree that you want to appear in /usr/src/app, you can change your docker-compose.yml as follows:
version: "3.3"
services:
nodejs:
build: ./nodejs-server
ports:
- "8001:8080"
links:
- db:db
env_file:
- ./.env-example
volumes:
- .:/usr/src/app
db:
build: ./mysql-server
volumes:
- ./mysql-server/data:/docker-entrypoint-initdb.d #A folder /mysql-server/data with a .sql file needs to exist
env_file:
- ./.env-example
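One caveat worth noting, as an assumption on my part rather than something the answer above covers: if the image runs npm install into /usr/src/app during the build, the bind mount will hide the installed node_modules at runtime. A common workaround is to add an extra anonymous volume for just that path:

    volumes:
      - .:/usr/src/app
      # anonymous volume so the image's node_modules isn't hidden by the bind mount
      - /usr/src/app/node_modules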
I am running all of these operations on a remote server, a VM running Ubuntu 16.04.5 x64.
My Go project's Dockerfile looks like:
FROM golang:latest
ADD . $GOPATH/src/example.com/myapp
WORKDIR $GOPATH/src/example.com/myapp
RUN go build
#EXPOSE 80
#ENTRYPOINT $GOPATH/src/example.com/myapp/myapp
ENTRYPOINT ./myapp
#CMD ["./myapp"]
When I run the docker container using docker-compose up -d, the Go application exits and I see this in the docker logs:
myapp_1 | /bin/sh: 1: ./myapp: Exec format error docker_myapp_1
exited with code 2
If I locate the image using docker images and run the image like:
docker run -it 75d4a95ef5ec
I can see that my golang applications runs just fine:
viper environment is: development HTTP server listening on address:
":3005"
When I googled this error, some people suggested compiling with special flags, but I am running this container on the same Ubuntu host, so I am really confused about why this isn't working under Docker.
My docker-compose.yml looks like:
version: "3"
services:
openresty:
build: ./openresty
ports:
- "80:80"
- "443:443"
depends_on:
- myapp
env_file:
- '.env'
restart: always
myapp:
build: ../myapp
volumes:
- /home/deploy/apps/myapp:/go/src/example.com/myapp
ports:
- "3005:3005"
depends_on:
- db
- redis
- memcached
env_file:
- '.env'
redis:
image: redis:alpine
ports:
- "6379:6379"
volumes:
- "/home/deploy/v/redis:/data"
restart: always
memcached:
image: memcached
ports:
- "11211:11211"
restart: always
db:
image: postgres:9.4
volumes:
- "/home/deploy/v/pgdata:/var/lib/postgresql/data"
restart: always
Your docker-compose.yml file says:
volumes:
  - /home/deploy/apps/myapp:/go/src/example.com/myapp
which means your host system's source directory is mounted over, and hides, everything that the Dockerfile builds. ./myapp is then the host's copy of the myapp executable, and if that copy is different in some way (maybe you have a macOS or Windows host) it will cause this error.
This is a popular setup for interpreted languages where developers want to run their application without running a normal test-build-deploy sequence, but it doesn't really make sense for a compiled language like Go where you don't have a choice. I'd delete this block entirely.
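With the bind mount removed, the myapp service from the compose file above would look roughly like this (everything else in the file stays as it was):

  myapp:
    build: ../myapp
    ports:
      - "3005:3005"
    depends_on:
      - db
      - redis
      - memcached
    env_file:
      - '.env'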
The Go container stops running because of this:
WORKDIR $GOPATH/src/example.com/myapp
RUN go build
#EXPOSE 80
#ENTRYPOINT $GOPATH/src/example.com/myapp/myapp
ENTRYPOINT ./myapp
You are switching directories to $GOPATH/src/example.com/myapp, where you build your app; however, your entrypoint is pointing to the wrong location.
To solve this, either copy the app into the root directory and keep the same ENTRYPOINT command, or copy the application to a different location and pass the full path, such as:
ENTRYPOINT /my/go/app/location
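For instance, reusing the build path from the question's own Dockerfile (this essentially matches the commented-out ENTRYPOINT line there; $GOPATH defaults to /go in the golang image):

ENTRYPOINT /go/src/example.com/myapp/myapp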