I am new to Docker and still learning.
I have a problem now which has been driving me crazy, because I have been unable to figure out a clean way to solve it for quite some time already.
So I have a very simple and common stack: nginx + php + mariadb + redis.
The idea is to have a shared volume between the php and nginx containers which contains the app, and to run the php and nginx images as a non-root user, say with uid 1001.
Here is the docker-compose.yml that I have come up with:
version: '3.8'
volumes:
  app-data:
    driver: local
    driver_opts:
      type: bind
      o: uid=1001
      device: ./app
services:
  web:
    image: nginx:1.20
    user: "1001:1001"
    volumes:
      - ./nginx/server.conf:/etc/nginx/conf.d/default.conf:ro
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - app-data:/usr/share/nginx/html
    depends_on:
      - php
  php:
    build:
      context: ./
      dockerfile: ./php/Dockerfile
    user: "1001:1001"
    volumes:
      - app-data:/usr/share/nginx/html
    depends_on:
      - db
      - redis
I have omitted mariadb and redis, as they are not relevant to my question. The Dockerfile for the php image is irrelevant as well; it is only used to install a couple of modules that were not included in the default image. If I had a choice, I would avoid having any custom Dockerfiles at all.
This isn't working, because uid is apparently not recognized as a valid option, even though the documentation clearly states that the local driver with bind takes the same options as the mount command.
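For reference, the uid= mount option is only honoured by filesystems that support it (tmpfs, vfat, and the like); a bind mount just re-exposes an existing directory, which is presumably why the local driver rejects it here. A sketch of one common workaround, assuming the host directory ./app can simply be chowned to uid 1001 and bind-mounted into both containers, would be:

# sketch: run `sudo chown -R 1001:1001 ./app` on the host first,
# then bind-mount the directory directly instead of using a named volume
services:
  web:
    image: nginx:1.20
    user: "1001:1001"
    volumes:
      - ./app:/usr/share/nginx/html
  php:
    build:
      context: ./
      dockerfile: ./php/Dockerfile
    user: "1001:1001"
    volumes:
      - ./app:/usr/share/nginx/html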
My goal here is to have a docker-compose file which will:
boot the necessary services, i.e. db, php, nginx and redis
create a volume from a local directory which stores the app
share that volume between the php and nginx containers
run php and nginx as non-root, with the same uid, so that they can access the app directory
use no custom Dockerfiles
Could you please help me achieve this goal? I would also appreciate any links to relevant documentation and/or best practices.
Thank you!
Edit:
I would also like to understand clearly whether Docker/docker-compose best practices assume that the user maintains custom Dockerfiles for the services with the needed adjustments, or whether stock images are supposed to be used with all configuration done in the docker-compose file.
Related
I tried to install Strapi with PostgreSQL following its official doc. I changed the names of the mounted volumes in the YAML file and kept all the rest the same as given in the doc.
It is based on the Strapi PostgreSQL docker-compose.yaml file (see original):
version: '3'
services:
  strapi:
    image: strapi/strapi
    # totally the same as doc
    volumes:
      - ./backend:/srv/app
    # totally the same as doc
  postgres:
    image: postgres
    # totally the same as doc
    volumes:
      - ./database:/var/lib/postgresql/data
Then I pulled the latest images, ran them all, and it worked.
The folder structure now has all the needed files, all functionality works in the GUI provided at http://localhost:1337/admin/, and I could create my first content type.
backend/
    all Strapi files + node_modules
database/
docker-compose.yaml
But the problem is that I can't make any further changes to the files from my editor (VS Code).
I get the following error every time I try to save a file:
Failed to save 'files': Insufficient permissions. Select 'Retry as Sudo' to retry as superuser.
Also, I cannot set up the yarn workspace properly, because it does not have access to remove backend/node_modules.
Git commands are not permitted either:
git clean -f -- something
> failed to remove something: Permission denied
I can save each file via the sudo option that VS Code offers, but I guess I broke something or there is some extra setup step. I'm not an expert in Docker or Strapi, so sorry for not mentioning everything that might be needed.
This docker-compose configuration isn't ideal when files are changed inside a container through a bind mount. In scenarios like that, it's better to use a Docker volume:
[...]
  postgres:
    image: postgres
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
How are you changing the Strapi files with VS Code? I ask because most container images are configured to run as the root user; if possible, during development, make your changes outside the container and copy them inside.
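If you want to keep the bind mounts so that the files stay editable on the host, one common fix is to take ownership of the generated files back; a sketch, assuming your desktop user is the one running VS Code:

# reclaim the files that the root-running containers created in the bind-mounted folders
sudo chown -R "$USER":"$USER" ./backend ./database

Note that the containers may create new root-owned files on the next run unless the image supports running as your uid.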
I've seen many examples of Docker compose and that makes perfect sense to me, but all bundle their frontend and backend as separate containers on the same composition. In my use case I've developed a backend (in Django) and a frontend (in React) for a particular application. However, I want to be able to allow my backend API to be consumed by other client applications down the road, and thus I'd like to isolate them from one another.
Essentially, I envision it looking something like this. I would have a docker-compose file for my backend, which would consist of a PostgreSQL container and a webserver (Apache) container with a volume to my source code. Not going to get into implementation details, but because containers in the same composition exist on the same network, I can refer to the DB in the source code using its alias in the file. That is one environment with two containers.
On my frontend, and any other future client applications that consume the backend, I would have a webserver (Apache) container to serve the compiled static build of the React source. That of course exists in its own environment, so my question is: how do I converge the two such that I can refer to the backend alias in my base URL (axios, fetch, etc.)? How do you ship both "environments" to a registry and then deploy from that registry such that they can continue to communicate?
I feel like I'm probably missing the mark on how the Docker architecture works at large but to my knowledge there is a default network and Docker will execute the composition and run it on the default network unless otherwise specified or if it's already in use. However, two separate compositions are two separate networks, no? I'd very much appreciate a lesson on the semantics, and thank you in advance.
There's a couple of ways to get multiple Compose files to connect together. The easiest is just to declare that one project's default network is the other's:
networks:
  default:
    external:
      name: other_default
(docker network ls will tell you the actual name once you've started the other Compose project.) This is also suggested in the Docker Networking in Compose documentation.
An important architectural point is that your browser application will never be able to use the Docker hostnames. Your fetch() call runs in the browser, not in Docker, and so it needs to reach a published port. The best way to set this up is to have the Apache server that's serving the built UI code also run a reverse proxy, so that you can use a same-server relative URL /api/... to reach the backend. The Apache ProxyPass directive would be able to use the Docker-internal hostnames.
You also mention "volume with your source code". This is not a Docker best practice. It's frequently used to make Docker simulate a local development environment, but it's not how you want to deploy or run your code in production. The Docker image should be self-contained, and your docker-compose.yml generally shouldn't need volumes: or a command:.
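For example, baking the built UI into the frontend image keeps it self-contained; a minimal multi-stage sketch, assuming a create-react-app style build and the official httpd image (adjust names and paths to your project):

# build stage: compile the React app (the npm scripts are an assumption about your project)
FROM node:16 AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# serve stage: copy the static build into Apache's document root
FROM httpd:2.4
COPY --from=build /app/build /usr/local/apache2/htdocs/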
A skeleton layout for what you're proposing could look like:
version: '3'
services:
  db:
    image: postgres:12
    volumes:
      - pgdata:/var/lib/postgresql/data
  backend:
    image: my/backend
    environment:
      PGHOST: db
    # No ports: (not directly exposed) (but it could be)
    # No volumes: or command: (use what's in the image)
volumes:
  pgdata:
version: '3'
services:
  frontend:
    image: my/frontend
    environment:
      BACKEND_URL: http://backend:3000
    ports:
      - 8080:80
networks:
  default:
    external:
      name: backend_default
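To wire the two projects together, you would start the backend project first so that its default network exists, then the frontend. A sketch, assuming the backend compose file lives in a directory named backend (the directory name is what produces the backend_default network name):

cd backend && docker-compose up -d
cd ../frontend && docker-compose up -d
docker network ls   # confirm backend_default exists and is shared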
Trying to use H2O with docker-compose. Their website has instructions on running with Docker, which I'm using as a basis.
I can't work out how to persist the appropriate folders to keep the models accessible in H2O Flow. Which folders do I need to persist locally for this?
I've used the Dockerfile here and the docker-compose.yaml below. I'm able to store models locally by mounting the /tmp folder, but which other folders do I need to mount?
version: '3.1'
services:
  h2o-svc:
    build:
      context: .
      dockerfile: Dockerfile
    image: h2o:latest
    restart: always
    volumes:
      - ./app/h2o_models:/tmp
    ports:
      - 54321:54321
H2O-3 has an in-memory architecture.
It does not write anything to disk unless you ask it to, and the location it persists to (when saving a model, for example) is the location you manually give it.
I suggest you try this without docker first, to get the hang of what to expect when H2O-3 restarts.
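Since H2O-3 only persists to the path you pass explicitly, one approach is to mount a dedicated host directory at that path and use it in your save/load calls. A sketch, assuming you save with something like h2o.save_model(model, path="/h2o_models") (the container path /h2o_models is just an illustrative choice):

version: '3.1'
services:
  h2o-svc:
    build:
      context: .
      dockerfile: Dockerfile
    image: h2o:latest
    ports:
      - 54321:54321
    volumes:
      # pass this same container path to the save/load calls in Flow or the API
      - ./app/h2o_models:/h2o_models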
So I need rolling updates with Docker on my single-node server. Until now I was using docker-compose, but unfortunately I can't achieve what I need with it. Reading around the web, Docker Swarm seems to be the way to go.
I have found how to run an app with multiple replicas on a single node using swarm:
docker service create --replicas 3 --name myapp-staging myapp_app:latest
myapp:latest being built from my docker-compose.yml:
version: "3.6"
services:
  postgres:
    env_file:
      - ".env"
    image: "postgres:11.0-alpine"
    volumes:
      - "/var/run/postgresql:/var/run/postgresql"
  app:
    build: "."
    working_dir: /app
    depends_on:
      - "postgres"
    env_file:
      - ".env"
    command: iex -S mix phx.server
    volumes:
      - ".:/app"
volumes:
  postgres: {}
  static:
    driver_opts:
      device: "tmpfs"
      type: "tmpfs"
Unfortunately, this doesn't work, since it doesn't pick up the config from the docker-compose.yml file: the .env file, the command entry, etc.
Searching deeper, I find that using
docker stack deploy -c docker-compose.yml <name>
will create a service using my docker-compose.yml config.
But then I get the following error message:
failed to update service myapp-staging_postgres: Error response from daemon: rpc error: code = InvalidArgument desc = ContainerSpec: image reference must be provided
So it seems I have to use a registry and push my image there so that it works. I understand this need in the case of a multi-node architecture, but in my case I don't want to do that. (Carrying images around is heavy, I don't want my image to be public, and after all, the image is already here, so why should I move it over the internet?)
How can I set up my docker service using local image and config written in docker-compose.yml?
I could probably manage my way using docker service create options, but that wouldn't use my docker-compose.yml file so it would not be DRY nor maintainable, which is important to me.
docker-compose is a great tool for developers; it is sad that we have to dive into DevOps tools to achieve such a common feature as rolling updates. This whole Swarm architecture seems too complicated for my needs at this stage.
You don't have to use registries in your single-node setup. You can build your "app" image on your node from a local Dockerfile using this command (cd to the directory of your Dockerfile first):
docker build . -t my-app:latest
This will create a local Docker image on your node. This image is only visible to your single node, which is fine for your use case, but I wouldn't recommend this in a production setup.
You can now edit the compose file to be:
version: "3.6"
services:
  postgres:
    env_file:
      - ".env"
    image: "postgres:11.0-alpine"
    volumes:
      - "/var/run/postgresql:/var/run/postgresql"
  app:
    image: "my-app:latest"
    depends_on:
      - "postgres"
    env_file:
      - ".env"
    volumes:
      - ".:/app"
volumes:
  postgres: {}
  static:
    driver_opts:
      device: "tmpfs"
      type: "tmpfs"
And now you can run your stack from this node; it will use your local app image, and you benefit from the image-based workflow (updates, rollbacks, etc.).
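Putting it together, the workflow on the single node would then look roughly like this (image and stack names taken from the question):

docker build . -t my-app:latest
docker stack deploy -c docker-compose.yml myapp-staging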
One side note on your stack file: you are using the same env file for both services. Please mind that Swarm will look for the ".env" file next to the ".yml" file, so if this is not intentional, please revise the location of your env files.
Also note that this solution is only feasible on a single-node cluster. If you scale the cluster, you will have to use a registry, but registries don't have to be public: you can deploy a private registry on your cluster that only your nodes can access, or you can make it public. The accessibility of your registry is your choice.
Hope this will help with your issue.
Instead of a prebuilt image, you can point the service directly at the Dockerfile; please check the example below.
version: "3.7"
services:
  webapp:
    build: ./dir
The error occurs because Compose is unable to find the image on the Docker public registry.
The above method should solve your issue.
Basically, you need to use Docker images in order to make rolling updates work in Docker Swarm. I would also like to clarify that you can host a private registry and use it instead of the public one.
Detailed Explanation:
When you trigger a rolling update, Docker Swarm checks whether the image used for the service has changed; if so, it schedules the service update based on the update criteria you have configured and carries it out.
What happens if there is no change to the image? Docker will simply not apply the rolling update. Technically, you can specify the --force flag to force an update of the service, but it will just redeploy the service.
Hence, create a local registry, store the images in it, and use that image name in the docker-compose file used for the swarm. You can secure the registry with SSL, user credentials, or firewall restrictions; that is up to you. Refer to this for more details on deploying a Docker registry server.
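For reference, a minimal private registry can be run with the official registry image; a sketch (TLS and authentication are left out here and should be added for anything beyond local use):

docker run -d -p 5000:5000 --restart=always --name registry registry:2
docker tag my-app:latest localhost:5000/my-app:latest
docker push localhost:5000/my-app:latest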
Corrections in your compose file:
Since docker stack uses the image to create the service, you need to specify image: "<image name>" in the app service, as done in the postgres service. Since you have a build instruction, the image name is mandatory, because docker-compose otherwise doesn't know what to name the image. Reference.
A registry server is needed if you are going to deploy the application across multiple servers. Since you have mentioned that it's a single-node deployment, just having the image pulled/built on the server is enough, but the private registry approach is the recommended one.
My recommendation is not to club all the services into a single docker-compose file. The reason is that when you deploy/destroy using a docker-compose file, all the services are taken down together; this is a kind of tight coupling. Of course, I understand that all the other services depend on the DB; in such cases, make sure the DB service is brought up before the other services.
Instead of specifying the env file in the compose file, make it part of the Dockerfile: either copy the env file and source it in the entrypoint, or use ENV instructions to define the variables.
Also, just an update: a stack is simply a way to group services in Swarm.
So your compose file should be:
version: "3.6"
services:
  postgres:
    env_file:
      - ".env"
    image: "postgres:11.0-alpine"
    volumes:
      - "/var/run/postgresql:/var/run/postgresql"
  app:
    build: "."
    image: "image-name:tag" # the image built will be tagged as image-name:tag
    working_dir: /app # note: the .env entry has been removed here
    depends_on:
      - "postgres"
    command: iex -S mix phx.server
    volumes:
      - ".:/app"
volumes:
  postgres: {}
  static:
    driver_opts:
      device: "tmpfs"
      type: "tmpfs"
Dockerfile:
FROM baseimage:tag
COPY .env /somelocation
# your further instructions go here
RUN ... && \
    ... && \
    ... && chmod a+x /somelocation/.env
ENTRYPOINT source /somelocation/.env && ./file-to-run
Alternative Dockerfile:
FROM baseimage:tag
# a, b and c must be provided at build time, e.g. via --build-arg or build.args in the compose file
ARG a
ARG b
ARG c
ENV a=$a b=$b c=$c
ENTRYPOINT ./file-to-run
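A hypothetical build invocation for this variant (the variable names are just the placeholders from the Dockerfile above):

docker build --build-arg a="$a" --build-arg b="$b" --build-arg c="$c" -t image-name:tag .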
And you may need to run
docker-compose build
docker-compose push (optional; only needed to push the image to a registry, in case a registry is used)
docker stack deploy -c docker-compose.yml <stackname>
NOTE:
Even though you can create the services as mentioned by @M.Hassan, I have explained the recommended way above.
I want to use docker-compose to maintain containers for a cluster of API servers.
They are all built from the same image. I know that docker-compose scale app=5 will start 5 containers, but they are all identical, including the port settings.
I want to run multiple containers like this:
services:
  # wx_service_cluster
  wx_service_51011:
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - .:/go/src/wx_service
    ports:
      - "51011:8080"
  wx_service_51012:
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - .:/go/src/wx_service
    ports:
      - "51012:8080"
  wx_service_...:
    ....
There are almost 100 services that would need to be written; can anyone help me make this simpler?
For example, could I use something like a shell loop:
for each_port in $( seq 51011 51040 )
{
  wx_service_${each_port}:
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - .:/go/src/wx_service
    ports:
      - "${each_port}:8080"
}
The simple answer to your actual question: use environment variables, probably combined with dotenv: https://docs.docker.com/compose/environment-variables/
services:
  foo_${instance1}:
    ports:
      - "${instance1}:8080"
  foo_${instance2}:
    ports:
      - "${instance2}:8080"
But this will not help you with "generating a docker-compose file with X service entries for wx"; you seem to be planning some kind of hosting.
Alternatives:
You should step back and rather use random port assignment, then use docker inspect to find the port; see an example here: https://github.com/EugenMayer/docker-sync/blob/master/lib/docker-sync/sync_strategy/unison.rb#L199. So basically you use a template system to generate your docker-compose.yml file, e.g. https://github.com/markround/tiller, generating services with a static prefix like wx_service_, and later use a different script (for your nginx / haproxy) to configure an upstream for each of those, finding the name and port dynamically (using inspect).
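As a rough illustration of the generation idea, a plain shell sketch (not tied to any particular template tool; port range taken from your example) could write the file for you:

#!/bin/sh
# generate docker-compose.yml with one wx_service_<port> entry per port
cat > docker-compose.yml <<'EOF'
version: "3.7"
services:
EOF
for each_port in $(seq 51011 51040); do
cat >> docker-compose.yml <<EOF
  wx_service_${each_port}:
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - .:/go/src/wx_service
    ports:
      - "${each_port}:8080"
EOF
done

You would then point nginx / haproxy at the published ports, or switch to the random-port + docker inspect approach described above.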
If I am right and you really are going for some kind of hosting scenario, and you do it commercially, you might even rethink this and add Consul to the game: let every wx service register itself as a service in Consul, and then have an additional proxy like nginx / haproxy reconfigure itself, adding a backend+frontend / upstream+server entry in the proxy using tiller + consul watch.
The last one is next-level stuff, but if you do this "commercially" you should not do what you asked for initially. Nevertheless, if you choose to, use dotenv as outlined above.