Insufficient permissions for local file change in strapi docker

I tried to install Strapi with PostgreSQL from its official doc. I changed the names of the mounted volumes in the YAML file and kept everything else the same as in the doc.
Based on the Strapi PostgreSQL docker-compose.yaml file (see original):
version: '3'
services:
  strapi:
    image: strapi/strapi
    # totally the same as the doc
    volumes:
      - ./backend:/srv/app
    # totally the same as the doc
  postgres:
    image: postgres
    # totally the same as the doc
    volumes:
      - ./database:/var/lib/postgresql/data
Then I pulled the latest image, ran everything, and it worked.
The folder structure now has all the needed files, all functionality works in the GUI at http://localhost:1337/admin/, and I could create my first content type.
backend/
  all_strapi_files + node_modules
database/
docker-compose.yaml
But the problem is that I can't make further changes to the files from my editor (VS Code).
I get the following error on every attempt to save a change:
Failed to save 'files': Insufficient permissions. Select 'Retry as Sudo' to retry as superuser.
Also, I can't set up the yarn workspace properly, because it doesn't have access to remove backend/node_modules.
Git commands aren't permitted either:
git clean -f -- something
> failed to remove something: Permission denied
I can save any file using the sudo option VS Code provides, but I suspect I've broken something or some extra setup is required. I'm not an expert in Docker or Strapi, so sorry for not mentioning everything that might be needed.

This docker-compose configuration isn't a good fit when changes happen inside the container on a bind mount. In scenarios like that it's better to use a named Docker volume:
[...]
  postgres:
    image: postgres
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
How are you editing the Strapi files with VS Code? I ask because most container images are configured to run as the root user; if possible, during development make your changes outside the container and copy them in.
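A common workaround for the permission errors themselves (assuming a Linux host, which the question doesn't confirm): the strapi/strapi container runs as root, so the files it generates in the bind mount end up owned by root on the host. You can take ownership back with:
sudo chown -R "$(id -u):$(id -g)" ./backend
Note that the container may create new root-owned files on its next run, so you may have to repeat this, or run the container with a user matching your host UID.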

Related

how to add the plugin fluent-plugin-opensearch to docker

I'm trying to send logs from fluentd (installed using docker) to opensearch.
In the configuration file there's @type opensearch, which uses the plugin fluent-plugin-opensearch that I installed locally as a Ruby gem.
I get the following error:
2022-04-22 15:47:10 +0000 [error]: config error file="/fluentd/etc/fluentd.conf" error_class=Fluent::NotFoundPluginError error="Unknown output plugin 'opensearch'. Run 'gem search -rd fluent-plugin' to find plugins"
As a solution, I found out that I need to add the plugin to the fluentd docker container, but I couldn't find a way to do that.
Any way to add the plugin to docker or an alternative to this solution would be appreciated.
The comments already gave a hint: you will need to build your own Docker image. Depending on the infrastructure you have available, you can either build the image, store it in some registry and then use it in your compose file, or build it on the machine where you use Docker.
The Dockerfile
Common to both approaches is that you'll need a Dockerfile. I am using Calyptia's Docker image as a base, but you can use whatever fluentd image you like. My Dockerfile looks as follows:
FROM ghcr.io/calyptia/fluentd:v1.14.6-debian-1.0
USER root
RUN gem install fluent-plugin-opensearch
RUN fluent-gem install fluent-plugin-rewrite-tag-filter fluent-plugin-multi-format-parser
USER fluent
ENTRYPOINT ["tini", "--", "/bin/entrypoint.sh"]
CMD ["fluentd"]
As you can see it installs a few more plugins, but the first RUN line is the important one for you.
Option 1
If you have a container registry available, you can build the image and push it there, either using a CI/CD pipeline or simply locally. Then you can reference this custom image instead of whatever other fluentd image you're using today, like so:
fluentd:
  image: registry.your-domain.xyz/public-projects/fluentd-opensearch:<tag|latest>
  container_name: fluentd
  ports:
    - ...
  restart: unless-stopped
  volumes:
    - ...
Adjust the config to your needs.
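If you go this route, the local build-and-push could look like the following (the registry URL and tag are placeholders matching the snippet above):
docker build -t registry.your-domain.xyz/public-projects/fluentd-opensearch:latest ./fluentd
docker push registry.your-domain.xyz/public-projects/fluentd-opensearch:latest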
Option 2
You can also have docker-compose build the container locally for you. For this, create a directory fluentd in the same folder where you store your docker-compose.yml and place the Dockerfile there.
fluentd:
  build: ./fluentd
  container_name: fluentd
  ports:
    - ...
  restart: unless-stopped
  volumes:
    - ...
Instead of referencing the image from some registry, you can reference a local build directory. This should get you started.
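With build: in place, Compose builds the image the first time you bring the stack up; after changing the Dockerfile you can force a rebuild with:
docker-compose up -d --build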

Proper way to build a CICD pipeline with Docker images and docker-compose

I have a general question about DockerHub and GitHub. I am trying to build a pipeline on Jenkins using AWS instances and my end goal is to deploy the docker-compose.yml that my repo on GitHub has:
version: "3"
services:
db:
image: postgres
environment:
POSTGRES_USER: user
POSTGRES_PASSWORD: password
volumes:
- ./tmp/db:/var/lib/postgresql/data
web:
build: .
command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
volumes:
- .:/myapp
ports:
- "3000:3000"
depends_on:
- db
environment:
POSTGRES_USER: user
POSTGRES_PASSWORD: password
POSTGRES_HOST: db
I've read that in CI/CD pipelines people build their images and push them to DockerHub, but what is the point of that?
You would just be pushing an individual image. Even if you pull the image later on a different instance, in order to run the app with its different services you will need to run the containers with docker-compose, and you wouldn't have that file unless you pulled it from the GitHub repo again or created it in the pipeline, right?
Wouldn't it be better and more straightforward to just fetch the repo from GitHub and run docker-compose commands? Is there a "cleaner" or "proper" way of doing it? Thanks in advance!
The only thing you should need to copy to the remote system is the docker-compose.yml file. And even that is technically optional, since Compose just wraps basic Docker commands; you could manually docker network create and then docker run the two containers without copying anything at all.
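To illustrate, a rough sketch of those manual commands (the network name is made up; this is not a drop-in script):
docker network create myapp-net
docker run -d --name db --net myapp-net \
  -e POSTGRES_USER=user -e POSTGRES_PASSWORD=password \
  postgres
docker run -d --name web --net myapp-net -p 3000:3000 \
  -e POSTGRES_HOST=db -e POSTGRES_USER=user -e POSTGRES_PASSWORD=password \
  registry.example.com/me/web:latest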
For this setup it's important to delete the volumes: block that overwrites the image's content with a bind-mounted copy of the application code. You also shouldn't need an override command:. For the deployment you'd need to replace build: with image:.
version: "3.8"
services:
db: *from-the-question
web:
image: registry.example.com/me/web:${WEB_TAG:-latest}
ports:
- "3000:3000"
depends_on:
- db
environment: *web-environment-from-the-question
# no build:, command:, volumes:
In a Compose setup you could put the build: configuration in a parallel docker-compose.override.yml file that wouldn't get copied to the deployment system.
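For example, a docker-compose.override.yml that stays on the development machine might contain only (a sketch):
version: "3.8"
services:
  web:
    build: .
Compose merges docker-compose.override.yml into docker-compose.yml automatically when both are present, so developers get build: locally while the deployed file keeps only image:.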
So what? There are a couple of good reasons to structure things this way.
A forward-looking answer involves clustered container managers like Kubernetes, Nomad, or Amazon's proprietary ECS. In these a container runs somewhere in a cluster of indistinguishable machines, and the only way you have to copy the application code in is by pulling it from a registry. In these setups you don't copy any files anywhere but instead issue instructions to the cluster manager that some number of copies of the image should run somewhere.
Another good reason is to support rolling back the application. In the Compose fragment above, I refer to an environment variable ${WEB_TAG}. Say you push out one build a day and give each a date-stamped tag, e.g. registry.example.com/me/web:20220220. But something has gone wrong with today's build! While you figure it out, you can connect to the deployment machine and run
WEB_TAG=20220219 docker-compose up -d
and instantly roll back, again without trying to check out anything or copy the application.
In general, using Docker, you want to make the image as self-contained as it can be, though still acknowledging that there are things like the database credentials that can't be "baked in". So make sure to COPY the code in, don't override the code with volumes:, do set a sensible CMD. You should be able to start with a clean system with only Docker installed and nothing else, and docker run the image with only Docker-related setup. You can imagine writing a shell script to run the docker commands, and the docker-compose.yml file is just a declarative version of that.
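As a sketch of what "self-contained" means for this Rails app (the base image and version here are assumptions, not taken from the question):
FROM ruby:3.1
WORKDIR /myapp
COPY Gemfile Gemfile.lock ./
RUN bundle install
# bake the application code into the image instead of bind-mounting it
COPY . .
EXPOSE 3000
CMD ["bundle", "exec", "rails", "server", "-b", "0.0.0.0", "-p", "3000"]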
Finally remember that you don't have to use Docker. You can use a general-purpose system-management tool like Ansible, Salt Stack, or Chef to install Ruby on to the target machine and manually copy the code across. This is a well-proven deployment approach. I find Docker simpler, but there is the assumption that the code and all of its dependencies are actually in the image and don't need to be separately copied.

"key cannot contain a space" error while running docker compose

I am trying to deploy my Django app to App Engine using a Dockerfile. After following a few blogs such as these, I created a docker-compose.yml file, but when I run docker compose up or docker-compose -f docker-compose-deploy.yml run --rm gcloud sh -c "gcloud app deploy", I get the error key cannot contain a space. See below:
For example:
$ docker compose up
key cannot contain a space
$ cat docker-compose.yml
version: '3.7'
services:
  app:
    build:
      context: .
    ports: ['8000:8000']
    volumes: ['./app:/app']
Can someone please help me fix this error? I have tried yamllint to validate the YAML file for space/indentation errors and it doesn't show any.
EDIT:
Here is the content of the file used in the longer command:
version: '3.7'
services:
  gcloud:
    image: google/cloud-sdk:338.0.0
    volumes:
      - gcp-creds:/creds
      - .:/app
    working_dir: /app
    environment:
      - CLOUDSDK_CONFIG=/creds
volumes:
  gcp-creds:
OK, this is finally resolved! After beating my head against it, I was able to resolve the issue by doing the following:
Unchecked the option to use "Docker Compose V2" in my Docker Desktop settings.
Closed the Docker Desktop app and restarted it.
Please try these steps in case you face the issue. Thanks!
Just adding an alternative answer here that I confirmed worked for me when the steps above did not. My case is slightly different, but as Google brought me here first I thought I'd leave a note.
Check your env var values for spaces!
This may only apply if you are using env_file files (the OP is not, in the minimal example, hence saying this is different).
Unescaped spaces in variables will cause this cryptic error message.
So, given a compose file like this:
version: '3.7'
services:
gcloud:
image: google/cloud-sdk:338.0.0
volumes:
- gcp-creds:/creds
- .:/app
working_dir: /app
env_file:
- some_env_file.env
If some_env_file.env looks like this:
MY_VAR=some string with spaces
then we get the cryptic key cannot contain a space.
If instead we change some_env_file.env to be like this:
MY_VAR="some string with spaces"
then all is well.
The issue has been reported to docker-compose.
Google brought me here first, and when your suggestion sadly didn't work for me, it then took me to this reddit thread, where I found out the above.
Docker Compose (at least since v2) automatically parses .env files before processing the docker-compose.yml file, regardless of any env_file setting within the yaml file. If any of the variables inside your .env file contains spaces, then you will get the error key cannot contain a space.
Two workarounds exist at this time:
Rename your .env file to something else, or
Create an alternate/empty .env file, e.g. .env.docker, and then explicitly set the --env-file parameter, i.e. docker compose --env-file .env.docker config (see the example below).
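For example, for the second workaround (file names are illustrative):
touch .env.docker
docker compose --env-file .env.docker config
docker compose --env-file .env.docker up -d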
Track the related issues here:
https://github.com/docker/compose/issues/6741
https://github.com/docker/compose/issues/8736
https://github.com/docker/compose/issues/6951
https://github.com/docker/compose/issues/4642
https://github.com/docker/compose/commit/ed18cefc040f66bb7f5f5c9f7b141cbd3afbbc89
https://docs.docker.com/compose/env-file/
One more thing to watch out for: since Compose V2, this error may also be raised if you have inline comments in the env file used by Compose. For example, with
---
version: "3.7"
services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile
    env_file: .app.env
and an .app.env like this:
RABBIT_USER=user # RabbitMQ user
the same error may occur. To fix it, just move the comment to its own line:
# RabbitMQ user
RABBIT_USER=user

docker service with compose file single node and local image

So I need rolling-updates with docker on my single node server. Until now, I was using docker-compose but unfortunately, I can't achieve what I need with it. Reading the web, docker-swarm seems to be the way to go.
I have found how to run an app with multiple replicas on a single node using swarm:
docker service create --replicas 3 --name myapp-staging myapp_app:latest
myapp:latest being built from my docker-compose.yml:
version: "3.6"
services:
postgres:
env_file:
- ".env"
image: "postgres:11.0-alpine"
volumes:
- "/var/run/postgresql:/var/run/postgresql"
app:
build: "."
working_dir: /app
depends_on:
- "postgres"
env_file:
- ".env"
command: iex -S mix phx.server
volumes:
- ".:/app"
volumes:
postgres: {}
static:
driver_opts:
device: "tmpfs"
type: "tmpfs"
Unfortunately, this doesn't work, since it doesn't pick up the config from the docker-compose.yml file: the .env file, the command entry, etc.
Searching deeper, I find that using
docker stack deploy -c docker-compose.yml <name>
will create a service using my docker-compose.yml config.
But then I get the following error message:
failed to update service myapp-staging_postgres: Error response from daemon: rpc error: code = InvalidArgument desc = ContainerSpec: image reference must be provided
So it seems I have to use a registry and push my image there so that it works. I understand this need in the case of a multi-node architecture, but in my case I don't want to do that. (Images are heavy to carry around, I don't want my image to be public, and after all, the image is already here, so why should I move it across the internet?)
How can I set up my docker service using local image and config written in docker-compose.yml?
I could probably manage my way using docker service create options, but that wouldn't use my docker-compose.yml file, so it would be neither DRY nor maintainable, which is important to me.
docker-compose is a great tool for developers, it is sad that we have to dive into DevOps tools to achieve such common features as rolling updates. This whole swarm architecture seems too complicated for my needs at this stage.
You don't have to use registries in your single-node setup. You can build your "app" image on your node from a local Dockerfile using this command (cd to the directory of your Dockerfile first):
docker build . -t my-app:latest
This will create a local Docker image on your node. The image is only visible to your single node, which is beneficial in your use case, but I wouldn't recommend this in a production setup.
You can now edit the compose file to be:
version: "3.6"
services:
postgres:
env_file:
- ".env"
image: "postgres:11.0-alpine"
volumes:
- "/var/run/postgresql:/var/run/postgresql"
app:
image: "my-app:latest"
depends_on:
- "postgres"
env_file:
- ".env"
volumes:
- ".:/app"
volumes:
postgres: {}
static:
driver_opts:
device: "tmpfs"
type: "tmpfs"
And now you can run your stack from this node; it will use your local app image and benefit from image-based features (updates, rollbacks, etc.).
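For example (assuming the stack name myapp-staging from the question):
docker stack deploy -c docker-compose.yml myapp-staging
docker service update --force myapp-staging_app   # rolling redeploy after rebuilding my-app:latest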
I do have a side note on your stack file, though. You are using the same env file for both services; please be aware that swarm will look for the .env file relative to (next to) the .yml file, so if this is not intentional, please check the location of your env files.
Also note that this solution is only feasible on a single-node cluster. If you scale your cluster you will have to use a registry, and registries don't have to be public: you can deploy a private registry on your cluster that only your nodes can access, or make it public; the accessibility of your registry is your choice.
Hope this helps with your issue.
Instead of a prebuilt image, you can use a Dockerfile directly via build:. Please check the example below.
version: "3.7"
services:
webapp:
build: ./dir
The error occurs because Compose is unable to find the image on the Docker public registry.
The above method should solve your issue.
Basically, you need to use Docker images in order to make rolling updates work in Docker Swarm. I would also like to clarify that you can host a private registry and use it instead of the public one.
Detailed explanation:
When you run a rolling update, Docker Swarm checks whether the image used by the service has changed; if so, it schedules the service update according to the update criteria you've configured and carries it out.
Let's say there is no change to the image. Then what happens? Docker simply won't apply the rolling update. Technically you can pass the --force flag to force-update the service, but that just redeploys it.
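For reference, those update criteria live under the service's deploy: key in the compose file; a minimal sketch with illustrative values:
app:
  image: "my-app:latest"
  deploy:
    replicas: 3
    update_config:
      parallelism: 1   # update one task at a time
      delay: 10s       # wait between batches
      failure_action: rollback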
Hence, create a local registry, store the images in it, and use those image names in the docker-compose file for the swarm. You can secure the registry with SSL, user credentials, or firewall restrictions; that's up to you. Refer to this for more details on deploying a Docker registry server.
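On a single node, the simplest registry is Docker's official registry image, for example:
docker run -d -p 5000:5000 --restart always --name registry registry:2
docker tag my-app:latest localhost:5000/my-app:latest
docker push localhost:5000/my-app:latest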
Corrections to your compose file:
Since docker stack uses the image to create the service, you need to specify image: "<image name>" in the app service, as is done in the postgres service. When you use a build instruction, an image name is mandatory because docker-compose otherwise doesn't know what to name the image. Reference.
A registry server is only needed if you are going to deploy the application across multiple servers. Since you mentioned it's a single-node deployment, having the image pulled/built on the server is enough, though the private-registry approach is recommended.
My recommendation is not to club all the services into a single docker-compose file. The reason is that when you deploy/destroy using a docker-compose file, all the services are taken down together, which is a kind of tight coupling. Of course, I understand that the other services depend on the DB; in such cases, make sure the DB service is brought up before the other services.
Instead of specifying the env file in the compose file, make it part of the Dockerfile instructions: either copy the env file in and source it in the entrypoint, or use ENV variables to define the values.
Also, just a note: a stack is simply a way to group services in swarm.
So your compose file should be:
version: "3.6"
services:
postgres:
env_file:
- ".env"
image: "postgres:11.0-alpine"
volumes:
- "/var/run/postgresql:/var/run/postgresql"
app:
build: "."
image: "image-name:tag" #the image built will be tagged as image-name:tag
working_dir: /app # note here I've removed .env file
depends_on:
- "postgres"
command: iex -S mix phx.server
volumes:
- ".:/app"
volumes:
postgres: {}
static:
driver_opts:
device: "tmpfs"
type: "tmpfs"
Dockerfile:
FROM baseimage:tag
COPY .env /somelocation
# your further instructions go here
RUN ... && \
    ... && \
    ... && chmod a+r /somelocation/.env
ENTRYPOINT source /somelocation/.env && ./file-to-run
Alternative Dockerfile:
FROM baseimage:tag
ARG a
ARG b
ARG c
# a, b, and c must be supplied at build time, e.g.
# docker build --build-arg a=$a --build-arg b=$b --build-arg c=$c .
ENV a=$a b=$b c=$c
ENTRYPOINT ./file-to-run
And you may need to run:
docker-compose build
docker-compose push (optional; only needed to push the image into a registry, in case one is used)
docker stack deploy -c docker-compose.yml <stackname>
NOTE:
Even though you can create the services as mentioned by @M.Hassan above, I've explained the recommended way.

File in docker-entrypoint-initdb.d never gets executed when using docker compose

I'm using Docker Toolbox on Windows 10.
I can access the PHP part successfully via http://192.168.99.100:8000; I have been working on the mariadb part but am still having several problems.
I have an SQL file at /mariadb/initdb/abc.sql, so it should be copied into /docker-entrypoint-initdb.d. After the container is created I use docker-compose exec mariadb to access the container; the file is there as /docker-entrypoint-initdb.d/abc.sql, but it never gets executed. I have also tested importing the SQL file into the container manually, which succeeded, so the SQL file is valid.
I don't quite understand the data folder mapping, or what to do to keep the folder in sync with the container. I always get this warning when recreating the container with docker-compose up -d:
WARNING: Service "mariadb" is using volume "/var/lib/mysql" from the previous container. Host mapping "/.../mariadb/data" has no effect. Remove the existing containers (with docker-compose rm mariadb) to use the host volume mapping.
Recreating db ... done
Questions
How do I get the SQL file in /docker-entrypoint-initdb.d to be executed?
What is the right way to map the data folder to the mariadb container?
Please guide
Thanks
This is my docker-compose.yml
version: "3.2"
services:
php:
image: php:7.1-apache
container_name: web
restart: always
volumes:
- /.../php:/var/www/html
ports:
- "8000:80"
mariadb:
image: mariadb:latest
container_name: db
restart: always
environment:
- MYSQL_ROOT_PASSWORD=12345
volumes:
- /.../mariadb/initdb:/docker-entrypoint-initdb.d
- /.../mariadb/data:/var/lib/mysql
ports:
- "3306:3306"
For me the issue was that Docker hadn't cleaned up my mounted volumes from previous runs.
Running:
docker volume ls
will list any volumes; if previous ones exist, run the rm command on each volume to remove it.
As stated in the Docker mysql docs, scripts in the /docker-entrypoint-initdb.d folder are only evaluated the first time the container runs, and if a previous volume remains, the scripts won't run.
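For example, to wipe the stale volume so the init scripts run again (assuming the database contents are safe to delete):
docker-compose down
docker volume ls
docker volume rm <volume_name>
Or, in one step, remove the containers together with their volumes: docker-compose down -v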
As for the mapping, you simply need to mount your script folder to the /docker-entrypoint-initdb.d folder in the image:
volumes:
  - ./db/:/docker-entrypoint-initdb.d
I have a single script file in a folder named db, relative to my docker-compose file.
In your Dockerfile for building the mariadb image, add the abc.sql file to the docker entrypoint directory at the end, like so:
COPY abc.sql /docker-entrypoint-initdb.d/
Remove the - /.../mariadb/initdb:/docker-entrypoint-initdb.d mapping, as any file copied into the entrypoint directory will be executed.
Note: Windows containers do not execute anything in docker-entrypoint-initdb.d/.
