I'm wondering how I can create files inside a Docker container so that my host user owns them.
This is my docker-compose:
version: "3"

networks:
  main_network:

services:
  php_8_fpm:
    build:
      context: php8.0-fpm
    volumes:
      - ../src:/var/www/
    networks:
      - main_network

  web_server_nginx:
    build:
      context: webserver
    volumes:
      - ../src:/var/www/
      - ./webserver/conf.d:/etc/nginx/conf.d
    networks:
      - main_network
    ports:
      - "8100:80"
    depends_on:
      - php_8_fpm
I'm using Composer inside the PHP container. The problem is that when I create any file inside the container, my host user has no permissions on it; on my host these files are owned by root.
Is there a good practice to prevent this, instead of changing the owner on my host every time I create a new file inside the container? Even when I run bin/console make:entity, I don't have permission on the generated file.
There is a setting you can use in docker-compose for this, and there are multiple ways to do it. The core idea is to run the container process with your host user's UID/GID, so files created in the bind mount are owned by your user instead of root.
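As a sketch (assuming you export UID and GID in your shell first, since UID is not exported by default, and that your image works when run as a non-root user), you can set the `user:` option on the PHP service:

```yaml
services:
  php_8_fpm:
    build:
      context: php8.0-fpm
    # Run the container process as the host user's UID/GID so files
    # created in the bind mount are owned by your host user, not root.
    # These variables are not exported by default; run e.g.:
    #   UID="$(id -u)" GID="$(id -g)" docker compose up
    user: "${UID}:${GID}"
    volumes:
      - ../src:/var/www/
```

Composer and bin/console make:entity will then create files as that user. An alternative is to build the image with a user whose UID matches your host user.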
I've set up Odoo 15 in a container with Docker Compose, and I'm accessing the container through the Remote Containers extension in VS Code. I've looked everywhere but can't figure out how to access the Odoo files, such as the installed add-ons folder.
I've set up my volumes in the docker-compose file pretty much this way:
version: '3'

services:
  odoo:
    image: odoo:15.0
    env_file: .env
    depends_on:
      - postgres
    ports:
      - "127.0.0.1:8069:8069"
    volumes:
      - data:/var/lib/odoo
      - ./config:/etc/odoo
      - ./extra-addons:/mnt/extra-addons
But since I would like to change the HTML/CSS of non-custom add-ons that already ship with Odoo, I'd have to access the Odoo source code present in the container (if that's doable).
For example, the volume mounted at /mnt/extra-addons is a directory where I could add my custom modules, but what I want is to find the source code of the add-ons already present in Odoo.
Use named volumes: when a named volume is first created, Docker copies the existing data from the container image into it. In docker-compose you do this by defining a volume:
version: '2'

volumes:
  data:

services:
  odoo:
    image: odoo:15.0
    env_file: .env
    depends_on:
      - postgres
    ports:
      - "127.0.0.1:8069:8069"
    volumes:
      - data:/var/lib/odoo
      - ./config:/etc/odoo
      - ./extra-addons:/mnt/extra-addons
If your files reside in the /var/lib/odoo folder, you will be able to view/edit the files that are there by accessing them on the host in /var/lib/docker/volumes/{someName}_data/_data.
I am new to Docker and have a problem that I hope you could help with.
I have defined multiple services (HTTPSERV, IMED, etc.) in my docker-compose file. Each service contains Python code and has a Dockerfile for running it. The Dockerfile also copies the required files into a host path defined in docker-compose. HTTPSERV and IMED must share a text file and expose it to an external user sending a GET request to HTTPSERV.
In docker-compose I have defined a local host directory and bound it to a named volume. The services and their Dockerfiles are meant to share each service's files and run them.
As soon as I run docker-compose, the first service copies its files into the "src" host directory and changes the permissions of the "src" folder, preventing the other services from copying their files. This causes the subsequent services to fail to find the appropriate files, and the whole orchestration fails.
version: "3.9"

networks:
  default:
    ipam:
      config:
        - subnet: 172.28.0.2/20

services:
  httpserv:
    user: root
    container_name: httpserver
    build: ./HTTPSERV
    volumes:
      - myapp:/httpApp:rw
    networks:
      default:
        ipv4_address: 172.28.0.5
    ports:
      - "8080:3000"

  rabitQ:
    user: root
    container_name: rabitQ
    image: rabbitmq:3.8-management
    networks:
      default:
        ipv4_address: 172.28.0.4
    ports:
      - "9000:15672"

  imed:
    user: root
    container_name: IMED-Serv
    build: ./IMED
    volumes:
      - myapp:/imed:rw
    networks:
      - default
    # restart: on-failure

  orig:
    user: root
    container_name: ORIG-Serv
    build: ./ORIG
    volumes:
      - myapp:/orig:rw
    networks:
      - default
    # restart: on-failure

  obse:
    container_name: OBSE-Serv
    build: ./OBSE
    volumes:
      - myapp:/obse:rw
    networks:
      - default
    # restart: on-failure
    depends_on:
      - "httpserv"
    links:
      - httpserv

volumes:
  myapp:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /home/dockerfiles/hj/a3/src
The content of the Dockerfile is similar for most of the services and is as follows:
FROM python:3.8-slim-buster
WORKDIR /imed
COPY . .
RUN pip install --no-cache-dir -r imed-requirements.txt
RUN chmod 777 ./imed.sh
CMD ["./imed.sh"]
The code has root access, and the UserID and GroupID are set.
I also tried anonymous volumes, but the same issue happens.
In Docker it's often better to avoid "sharing files" as a first-class concept. Imagine running this in a clustered system like Kubernetes; if you have three copies of each service, and they're running in a cloud of a hundred systems, "sharing files" suddenly becomes difficult.
Given the infrastructure you've shown here, you have a couple of straightforward options:
Add an HTTP endpoint to update the file. You'd need to protect this endpoint in some way, since you don't want external callers accessing it; maybe filter it in an Nginx reverse proxy, or use a second HTTP server in the same process. The service that has the updated content would then call something like
r = requests.post('http://webserv/internal/file.txt', data=contents)
r.raise_for_status()
Add an HTTP endpoint to the service that owns the file. When the Web server service starts up, and periodically after that, it makes a request
r = requests.get('http://imed/file.txt')
You already have RabbitMQ in this stack; add a RabbitMQ consumer in a separate thread, and post the updated file content to a RabbitMQ topic.
There's some potential trouble with the "push" models if the Web server service restarts, since it won't be functional until the other service sends it the current version of the file.
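To make the push and pull models concrete, here is a stdlib-only sketch (the service layout, file names like file.txt, and the in-memory store are illustrative assumptions; a real deployment would likely use Flask/requests or similar): a tiny HTTP endpoint that owns the file content, a consumer that GETs it, and a producer that POSTs an update.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# In-memory stand-in for the shared file's content.
SHARED = {"file.txt": b"hello from imed\n"}

class FileHandler(BaseHTTPRequestHandler):
    """Minimal endpoint that owns the file and serves it over HTTP."""

    def do_GET(self):
        # Pull model: a consumer asks for the current file content.
        body = SHARED.get(self.path.lstrip("/"))
        if body is None:
            self.send_error(404)
            return
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def do_POST(self):
        # Push model: a producer uploads a new version of the file.
        length = int(self.headers.get("Content-Length", 0))
        SHARED[self.path.lstrip("/")] = self.rfile.read(length)
        self.send_response(204)
        self.end_headers()

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), FileHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = f"http://127.0.0.1:{server.server_port}"

# Consumer pulls the current file over HTTP.
content = urllib.request.urlopen(f"{base}/file.txt").read()

# Producer pushes an updated file.
urllib.request.urlopen(
    urllib.request.Request(f"{base}/new.txt", data=b"updated\n", method="POST")
)
server.shutdown()
```

In a compose network the `base` URL would be the service name (e.g. `http://imed:3000`), and the update endpoint would need to be shielded from external callers as described above.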
If you really do want to do this with a shared filesystem, I'd recommend creating a dedicated directory and dedicated volume to do this. Do not mount a volume over your entire application code. (It will prevent docker build from updating the application, and in the setup you've shown, you're mounting the same application-code volume over every service, so you're running four copies of the same service instead of four different services.)
Adding in the shared volume, but removing a number of other unnecessary options, I'd rewrite the docker-compose.yml file as:
version: '3.9'

services:
  httpserv:
    build: ./HTTPSERV
    ports:
      - "8080:3000"
    volumes:
      - shared:/shared

  rabitQ:
    image: rabbitmq:3.8-management
    hostname: rabitQ # (RabbitMQ specifically needs this setting)
    ports:
      - "9000:15672"

  imed:
    build: ./IMED
    volumes:
      - shared:/shared

  orig:
    build: ./ORIG

  obse:
    build: ./OBSE
    depends_on:
      - httpserv

volumes:
  shared:
I'd probably have the producing service unconditionally copy the file into the volume on startup, and have the consuming service block on it being present. (Don't depend on Docker to initialize the volume; it doesn't work on Docker bind mounts, or on Kubernetes, or if the underlying image has changed.) So long as the files are world-readable the two services' user IDs don't need to match.
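That startup pattern can be sketched as follows (the publish/wait_for names and the temp directory standing in for the shared volume mount are assumptions for illustration): the producer unconditionally copies the file into the shared directory with an atomic rename, and the consumer polls until it appears.

```python
import pathlib
import tempfile
import time

def publish(src: pathlib.Path, shared_dir: pathlib.Path) -> None:
    """Producer: unconditionally copy the file into the shared volume on startup."""
    shared_dir.mkdir(parents=True, exist_ok=True)
    target = shared_dir / src.name
    # Write to a temp name, then atomically rename, so consumers
    # never observe a half-written file.
    tmp = target.with_suffix(".tmp")
    tmp.write_bytes(src.read_bytes())
    tmp.replace(target)

def wait_for(path: pathlib.Path, timeout: float = 5.0, poll: float = 0.05) -> bytes:
    """Consumer: block until the file shows up in the shared volume."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if path.exists():
            return path.read_bytes()
        time.sleep(poll)
    raise TimeoutError(f"{path} never appeared")

# Demo: a temp dir stands in for the /shared volume mount.
with tempfile.TemporaryDirectory() as d:
    root = pathlib.Path(d)
    src = root / "file.txt"
    src.write_bytes(b"current contents\n")
    shared = root / "shared"
    publish(src, shared)
    data = wait_for(shared / "file.txt")
```

In the compose file above, both sides would point at the same mount (`/shared`), and world-readable permissions keep the services' differing user IDs from mattering.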
I have 2 containers in a compose file, and I want to serve the app's static files through nginx.
I have read this: https://stackoverflow.com/a/43560093/7522096 and want to use a host directory to share files between the app container and the nginx container; for some reason I don't want to use a named volume.
===
Using a host directory

Alternately you can use a directory on the host and mount that into the containers. This has the advantage of you being able to work directly on the files using your tools outside of Docker (such as your GUI text editor and other tools).

It's the same, except you don't define a volume in Docker, instead mounting the external directory.

version: '3'
services:
  nginx:
    volumes:
      - ./assets:/var/lib/assets
  asset:
    volumes:
      - ./assets:/var/lib/assets
===
My docker-compose file:
version: "3.7"

services:
  app:
    container_name: app
    restart: always
    ports:
      - 8888:8888
    env_file:
      - ./env/app.env
    image: registry.gitlab.com/app/development
    volumes:
      - ./public/app/:/usr/app/static/
      - app-log:/root/.pm2

  nginx:
    container_name: nginx
    image: 'nginx:1.16-alpine'
    restart: always
    ports:
      - 80:80
      - 443:443
    volumes:
      - /home/devops/config/:/etc/nginx/conf.d/:ro
      - /home/devops/ssl:/etc/nginx/ssl:ro
      - ./public/app/:/etc/nginx/public/app/
    depends_on:
      - app

volumes:
  # app-public:
  app-log:
Yet when I do this in my compose file, the directory always comes up empty in nginx, and the static files in my app container disappear too.
Please help; I've tried a lot of approaches but can't figure it out.
Thanks.
During initialization of the container, Docker binds the ./public/app directory on the host to the /usr/app/static/ directory in the container.
If ./public/app does not exist, it is created. The bind goes from the host to the container, meaning the content of the ./public/app folder is reflected (copied) into the container and not vice versa. That's why after initialization the static app directory is empty.
If my understanding is correct, your goal is to share the application files between the app container and nginx.
Taking the above into consideration, the only solution is to create the files in the volume after the volume is created. Here is an example of the relevant parts:
version: "3"

services:
  app:
    image: ubuntu
    volumes:
      - ./public/app/:/usr/app/static_copy/
    entrypoint: /bin/bash
    command: >
      -c "mkdir /usr/app/static;
      touch /usr/app/static/shared_file;
      mv /usr/app/static/* /usr/app/static_copy;
      rm -r /usr/app/static;
      ln -sfT /usr/app/static_copy/ /usr/app/static;
      exec sleep infinity"

  nginx:
    image: 'nginx:1.16-alpine'
    volumes:
      - ./public/app/:/etc/nginx/public/app/
    depends_on:
      - app
This moves the static files to the static_copy directory and links them back to /usr/app/static. The files are then shared with the host (the public/app directory) and the nginx container (/etc/nginx/public/app/). Adapt it to fit your needs.
Alternatively, you can of course use named volumes.
Hope it helps.
I have a Docker container that runs a simple web application. That container is linked to two other containers by Docker Compose with the following docker-compose.yml file:
version: '2'

services:
  mongo_service:
    image: mongo
    command: mongod
    ports:
      - '27017:27017'

  tomcat_service:
    image: 'bitnami/tomcat:latest'
    ports:
      - '8080:8080'

  web:
    # gain access to linked containers
    links:
      - mongo_service
      - tomcat_service
    # explicitly declare service dependencies
    depends_on:
      - mongo_service
      - tomcat_service
    # set environment variables
    environment:
      PYTHONUNBUFFERED: 'true'
    # use the image from the Dockerfile in the cwd
    build: .
    ports:
      - '8000:8000'
Once the web container starts, I want to write some content to /bitnami/tomcat/data/ on the tomcat_service container. I tried just writing to that disk location from within the web container but am getting an exception:
No such file or directory: '/bitnami/tomcat/data/'
Does anyone know what I can do to be able to write to the tomcat_service container from the web container? I'd be very grateful for any advice others can offer on this question!
You have to use Docker volumes if you want one service to write files that another service can read. If web writes to someFolderName, the same files will exist in tomcat_service.
version: '2'

services:
  tomcat_service:
    image: 'bitnami/tomcat:latest'
    volumes:
      - my_shared_data:/bitnami/tomcat/data/

  web:
    volumes:
      - my_shared_data:/someFolderName

volumes:
  my_shared_data:
Data in volumes persists, and it will still be available the next time you re-create the containers. You should always use Docker volumes when writing data inside containers.
I have 2 applications that are separate codebases, and they each have their own database on the same db server instance.
I am trying to replicate this in docker, locally on my laptop. I want to be able to have both apps use the same database instance.
I would like
both apps to start in docker at the same time
both apps to be able to access the database on localhost
the database data is persisted
be able to view the data in the database using an IDE on localhost
So each of my apps has its own dockerfile and docker-compose file.
On app1, I start the docker instance of the app which is tied to the database. It all starts fine.
When I try to start app2, I get the following error:
ERROR: for app2_mssql_1 Cannot start service mssql: driver failed programming external connectivity on endpoint app2_mssql_1 (12d550c8f032ccdbe67e02445a0b87bff2b2306d03da1d14ad5369472a200620): Bind for 0.0.0.0:1433 failed: port is already allocated
How can I have them both running at the same time? Both apps need to be able to access each other's database tables!
Here is the docker-compose.yml files
app1:
version: "3"

services:
  web:
    build:
      context: .
      args:
    volumes:
      - .:/app
    ports:
      - "3000:3000"
    depends_on:
      - mssql

  mssql:
    image: 'microsoft/mssql-server-linux'
    ports:
      - '1433:1433'
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=P455w0rd!
    volumes:
      - app1_db:/var/lib/mssql/data

volumes:
  app1_db:
and here is app2:
version: "3"

services:
  web:
    build:
      context: .
      args:
    volumes:
      - .:/app
    ports:
      - "3000:3000"
    depends_on:
      - mssql

  mssql:
    image: 'microsoft/mssql-server-linux'
    ports:
      - '1433:1433'
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=P455w0rd!
    volumes:
      - app2_db:/var/lib/mssql/data

volumes:
  app2_db:
Should I be using the same volume in each docker-compose file?
I guess the problem is that each app is spinning up its own db instance, when in reality I just want one instance that is used by all my apps.
The ports section in a docker-compose file binds the container port to the host's port, which causes the port conflict in your case.
You need to remove the ports section from at least one of the compose files. That way docker-compose can bring both up, and you can access both apps at the same time. But remember that the two apps will be placed on separate network bridges.
How docker-compose up works:
Suppose your app is in a directory called myapp, with your docker-compose.yml inside it.
When you run docker-compose up, the following happens:
A network called myapp_default is created.
A container is created using web's configuration. It joins the network myapp_default under the name web.
A container is created using db's configuration. It joins the network myapp_default under the name db.
If you run the second docker-compose.yml from a different folder, myapp2, then the network will be myapp2_default.
Your current configuration creates two volumes, two database containers, and two apps. If you make the apps run on the same network and run the database as a single container, it will work.
I don't think you were expecting two database containers and two volumes.
Approach 1:
docker-compose.yml as a single compose.
version: "3"

services:
  app1:
    build:
      context: .
      args:
    volumes:
      - .:/app # adjust the path depending on app1's Dockerfile.
    ports:
      - "3030:3000"
    depends_on:
      - mssql

  app2:
    build:
      context: .
      args:
    volumes:
      - .:/app # adjust the path depending on app2's Dockerfile.
    ports:
      - "3032:3000"
    depends_on:
      - mssql

  mssql:
    image: 'microsoft/mssql-server-linux'
    ports:
      - '1433:1433'
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=SqlServer1234!
    volumes:
      - app_docker_db:/var/lib/mssql/data

volumes:
  app_docker_db:
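One detail worth noting with the single-file approach: inside the compose network, the apps reach the database via the service name mssql, not localhost. A hedged sketch of what the apps' configuration might look like (these environment variable names are illustrative, not from the original files):

```yaml
services:
  app1:
    environment:
      # Connect to the service name "mssql" on the compose network;
      # "localhost" inside the app container is the container itself.
      - DB_HOST=mssql
      - DB_PORT=1433
      - DB_USER=sa
      - DB_PASSWORD=SqlServer1234!
```

From the host (e.g. an IDE on localhost), you would still connect to localhost:1433, because of the published port on the mssql service.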
Approach 2:
To isolate things further while still running them as separate compose files, create three compose files with a shared network.
docker-compose.yml for the database, with a network:
version: "3"

services:
  mssql:
    image: 'microsoft/mssql-server-linux'
    ports:
      - '1433:1433'
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=SqlServer1234!
    volumes:
      - app_docker_db:/var/lib/mssql/data
    networks:
      - test_network

volumes:
  app_docker_db:

networks:
  test_network:
docker-compose.yml for app1:
remove the database container and add the lines below to your compose file.
version: "3"

services:
  app1:
    build:
      context: .
      args:
    volumes:
      - .:/app # adjust the path depending on app1's Dockerfile.
    ports:
      - "3030:3000"

networks:
  default:
    external:
      name: my-pre-existing-network
Do the same for the other app by adapting its docker-compose file.
There are many other ways to structure these compose files; see the Compose documentation on configuring the default network and using a pre-existing network.
You're exposing the same port (1433) twice to the host machine (this is what "ports:" does). That is not possible, as it would bind the same port on your host twice; that's what the error message says.
I think the most common approach in these cases is to link your DBs to your apps (see https://docs.docker.com/compose/compose-file/#links). By doing this, your applications can still access the databases on their usual port (1433), but the databases are no longer accessible from the host (only from the containers linked to them).
Another error I see in your docker-compose files is that both applications are exposed on the same host port (3000). This is also not possible, for the same reason. I would suggest changing one of them to "3001:3000", so you can access that application on port 3001.