I want to create a development setup for a Blitz.js app with Docker (because it will be deployed and tested with Docker, too). I am developing on Windows; the code resides within WSL2.
After starting up, the container exits with:
ready - started server on 0.0.0.0:3000, url: http://localhost:3000
Environment variables loaded from .env
Prisma schema loaded from db/schema.prisma
Prisma Studio is up on http://localhost:5555
info - Using webpack 5. Reason: Enabled by default https://nextjs.org/docs/messages/webpack5
[Error: EACCES: permission denied, unlink '/home/node/app/.next/server/blitz-db.js'] {
errno: -13,
code: 'EACCES',
syscall: 'unlink',
path: '/home/node/app/.next/server/blitz-db.js'
}
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
error Command failed with exit code 1.
This is what my Dockerfile looks like:
# Create a standard base image that has all the defaults
FROM node:16-slim as base
ENV NODE_ENV=production
ENV PATH /home/node/app/node_modules/.bin:$PATH
ENV TINI_VERSION v0.19.0
WORKDIR /home/node/app
RUN apt-get update && apt-get install -y openssl --no-install-recommends \
&& rm -rf /var/lib/apt/lists/* \
&& chown -R node:node /home/node/app
# Blitz.js recommends using tini, see why: https://github.com/krallin/tini/issues/8
ADD https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini /tini
RUN chmod +x /tini
USER node
COPY --chown=node:node package*.json yarn.lock* ./
RUN yarn config list && yarn install --frozen-lockfile && yarn cache clean --force
# Create a development image
FROM base as dev
ENV NODE_ENV=development
USER node
COPY --chown=node:node . .
RUN yarn config list && yarn install && yarn cache clean --force
CMD ["bash", "-c", "yarn dev"]
Within WSL2, I run docker-compose up -d to make use of the following docker-compose.yml:
version: "3.8"
services:
app:
container_name: itb_app
build: .
image: itb_app:dev
ports:
- 3000:3000
volumes:
# Only needed during development: Container gets access to app files on local development machine.
# Without access, changes made during development would only be reflected
# every time the container's image is built (hence on every `docker-compose up`).
- ./:/home/node/app/
The file in question (blitz-db.js) is generated by yarn dev (see Dockerfile). I checked its owner within WSL2: it seems to be root. But I wouldn't know how to change it under these circumstances, let alone to which user.
I wonder how I can mount the WSL2 directory into my container for Blitz.js to use it.
The issue is that the .next directory and its contents (the compiled code of the Blitz.js app) were created by the host system before the Docker container was introduced. The host system user therefore owned the directory and its files, so the container's user did not have write permissions and couldn't compile its own version of the app into the .next directory, raising the error above.
The solution is to delete the .next folder from the host system and restart the container, giving it the ability to compile the app.
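For example, from the project root inside WSL2 (a minimal sketch, assuming the compose setup shown above):
rm -rf .next          # remove the host-owned build output
docker-compose up -d  # the container recreates .next, owned by its own user
Alternatively, since the node user in the official Node images has UID 1000, sudo chown -R 1000:1000 .next should also work, but deleting the folder is simpler because it is regenerated anyway.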
Related
I'm using Rails in a Docker container, and every once in a while I run into an issue that I have no idea how to solve. When I add a new gem to the Gemfile and rebuild the Docker image and container, the build fails with the common Bundler error Could not find [GEM_NAME] in any of the sources; Run 'bundle install' to install missing gems. This only happens when I try to build the image in Docker; if I run a regular bundle install on my local machine, the gems install correctly and everything works as expected.
I have a fairly standard Dockerfile & docker-compose file.
Dockerfile:
FROM ruby:2.6.3
ARG PG_MAJOR
ARG BUNDLER_VERSION
ARG UID
ARG MODE
# Add POSTGRESQL to the source list using the right version
RUN curl -sSL https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add - \
&& echo 'deb http://apt.postgresql.org/pub/repos/apt/ stretch-pgdg main' $PG_MAJOR > /etc/apt/sources.list.d/pgdg.list
ENV RAILS_ENV $MODE
RUN apt-get update -qq && apt-get install -y postgresql-client-$PG_MAJOR vim
RUN apt-get -y install sudo
RUN mkdir /usr/src/app
WORKDIR /usr/src/app
COPY Gemfile /usr/src/app/Gemfile
COPY Gemfile.lock /usr/src/app/Gemfile.lock
ENV BUNDLER_VERSION $BUNDLER_VERSION
RUN gem install bundler:$BUNDLER_VERSION
RUN bundle install
COPY . /usr/src/app
# Add a script to be executed every time the container starts.
COPY entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]
EXPOSE 3000
# Start the main process.
CMD ["rails", "server", "-b", "0.0.0.0"]
docker-compose.yml:
version: '3'
services:
  backend:
    build:
      dockerfile: Dockerfile
      args:
        UID: ${UID:-1001}
        BUNDLER_VERSION: 2.0.2
        PG_MAJOR: 10
        mode: development
    tty: true
    stdin_open: true
    volumes:
      - ./[REDACTED]:/usr/src/app
      - gem_data_api:/usr/local/bundle:cached
    ports:
      - "3000:3000"
    user: root
I've tried docker system prune -a, docker builder prune -a, reinstalling Docker, multiple rebuilds in a row, restarting my machine and so on, to no avail. The weird part is that it doesn't happen with every new Gem that I decide to add, only for some specific gems. For example I got this issue once again when trying to add gem 'sendgrid-ruby' to my Gemfile. This is the repo for the gem for reference, and the specific error I get with sendgrid-ruby is Could not find ruby_http_client-3.5.1 in any of the sources. I tried specifying ruby_http_client in my Gemfile, and I also tried sshing into the Docker container and running gem install ruby_http_client, but I get the same errors.
What might be happening here?
You're mounting a named volume over the container's /usr/local/bundle directory. The named volume will get populated from the image, but only the very first time you run the container. After that the old contents of the named volume will take precedence over the content of the image: using a volume this way will cause Docker to completely ignore any changes you make in the Gemfile.
You should be able to delete that volumes: line from the docker-compose.yml file. I'm not clear what benefit you would get from keeping the installed gems in a named volume.
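If it helps, here is a sketch of the trimmed service, with the bundle mount removed (the volume name passed to docker volume rm is hypothetical; check docker volume ls for the real one):
    volumes:
      - ./[REDACTED]:/usr/src/app
docker volume rm myproject_gem_data_api   # optionally remove the stale named volume
docker-compose build                      # rebuild so bundle install runs against the new Gemfile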
I am new to Docker and am currently following a book to learn Django.
Is it necessary to be in a virtual environment when running the below command?
I have gone through some Docker basics videos which say that each app is saved as an image. But where are these images saved?
Does this line refer to the current PC's root directory or to the Docker image: 'WORKDIR /usr/src/app'?
ADD is placed before RUN in the Dockerfile.
$ sudo docker-compose build
But I got this error:
ERROR: Service 'app' failed to build: ADD failed: stat /var/lib/docker/tmp/docker-builder912263941/config/requirements.txt: no such file or directory
Dockerfile
FROM python:3
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
mysql-client default-libmysqlclient-dev
WORKDIR /usr/src/app
ADD config/requirements.txt ./
RUN pip3 install --upgrade pip; \
pip3 install -r requirements.txt
RUN django-admin startproject myproject .;\
mv ./myproject ./origproject
docker-compose.yml
version: '2'
services:
  db:
    image: 'mysql:5.7'
  app:
    build: .
    command: python3 manage.py runserver 0.0.0.0:8000
    volumes:
      - './project:/usr/src/app/myproject'
      - './media:/usr/src/app/media'
      - './static:/usr/src/app/static'
      - './templates:/usr/src/app/templates'
      - './apps/external:/usr/src/app/external'
      - './apps/myapp1:/usr/src/app/myapp1'
      - './apps/myapp2:/usr/src/app/myapp2'
    ports:
      - '8000:8000'
    links:
      - db
requirements.txt
Pillow~=5.2.0
mysqlclient~=1.3.0
Django~=2.1.0
Is it necessary to be in a virtual environment when running the below command?
No, the Docker build environment is isolated from the host. Any virtualenv on the host is ignored in the build context and in the resulting image.
I have gone through some Docker basics videos which say that each app is saved as an image. But where are these images saved?
The images are stored somewhere under /var/lib/docker, but that directory isn't meant to be browsed manually. You can send an image somewhere with docker push <image:tag> or export it with docker save <image:tag> -o <image>.tar
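For instance (a minimal sketch; the image name myapp is hypothetical):
docker image ls                         # list locally stored images
docker save myapp:latest -o myapp.tar   # export an image to a tarball
docker load -i myapp.tar                # re-import it, e.g. on another machine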
Does this line refer to the current PC's root directory or to the Docker image: 'WORKDIR /usr/src/app'?
That line changes the current working directory inside the image.
ERROR: Service 'app' failed to build: ADD failed: stat /var/lib/docker/tmp/docker-builder912263941/config/requirements.txt: no such file or directory
This error means that config/requirements.txt does not exist in the directory where the build runs (the build context). Adjust the path in your Dockerfile accordingly.
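A quick check, run from the directory that holds the docker-compose.yml (assuming the layout the Dockerfile expects):
ls config/requirements.txt   # must exist relative to the build context
sudo docker-compose build    # then the ADD step can find the file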
$ docker-compose up -d
This will download the necessary Docker images and create a container for the web service.
I am currently running into a problem trying to set up nginx:alpine in OpenShift.
My build runs just fine, but the deployment fails with permission denied, giving the following error:
2019/01/25 06:30:54 [emerg] 1#1: mkdir() "/var/cache/nginx/client_temp" failed (13: Permission denied)
nginx: [emerg] mkdir() "/var/cache/nginx/client_temp" failed (13: Permission denied)
Now I know OpenShift is a bit tricky when it comes to permissions, as the container runs without root privileges and the UID is generated at runtime, which means it's not available in /etc/passwd. The user is, however, part of the group root. How this is supposed to be handled is described here:
https://docs.openshift.com/container-platform/3.3/creating_images/guidelines.html#openshift-container-platform-specific-guidelines
I even went further and made the whole /var completely accessible (777) for testing purposes, but I still get the error. This is what my Dockerfile looks like:
Dockerfile
FROM nginx:alpine
#Configure proxy settings
ENV HTTP_PROXY=http://my.proxy:port
ENV HTTPS_PROXY=http://my.proxy:port
ENV HTTP_PROXY_AUTH=basic:*:username:password
WORKDIR /app
COPY . .
# Install node.js
RUN apk update && \
apk add nodejs npm python make curl g++
# Build Application
RUN npm install
RUN ./node_modules/@angular/cli/bin/ng build
COPY ./dist/my-app /usr/share/nginx/html
# Configure NGINX
COPY ./openshift/nginx/nginx.conf /etc/nginx/nginx.conf
COPY ./openshift/nginx/conf.d/default.conf /etc/nginx/conf.d/default.conf
RUN chgrp -R root /var/cache/nginx /var/run /var/log/nginx && \
chmod -R 777 /var
RUN sed -i.bak 's/^user/#user/' /etc/nginx/nginx.conf
EXPOSE 8080
It's funny that this problem only seems to affect the alpine version of nginx. nginx:latest (based on Debian, I think) has no issues, and the setup described here
https://torstenwalter.de/openshift/nginx/2017/08/04/nginx-on-openshift.html
works. (But I am having some other issues with that build, so I switched to alpine.)
Any ideas why this is still not working?
I was using OpenShift with limited permissions, so I fixed this problem by using the following nginx image (rather than nginx:latest):
FROM nginxinc/nginx-unprivileged
To resolve this: I think the problem in my Dockerfile was that I used the COPY command to move my build output, which did not exist in the build context. So here is my working
Dockerfile
FROM nginx:alpine
LABEL maintainer="ReliefMelone"
WORKDIR /app
COPY . .
# Install node.js
RUN apk update && \
apk add nodejs npm python make curl g++
# Build Application
RUN npm install
RUN ./node_modules/@angular/cli/bin/ng build --configuration=${BUILD_CONFIG}
RUN cp -r ./dist/. /usr/share/nginx/html
# Configure NGINX
COPY ./openshift/nginx/nginx.conf /etc/nginx/nginx.conf
COPY ./openshift/nginx/conf.d/default.conf /etc/nginx/conf.d/default.conf
RUN chgrp -R root /var/cache/nginx /var/run /var/log/nginx && \
chmod -R 770 /var/cache/nginx /var/run /var/log/nginx
EXPOSE 8080
CMD ["nginx", "-g", "daemon off;"]
Note that under the Build Application section I now do
RUN cp -r ./dist/. /usr/share/nginx/html
instead of
COPY ./dist/my-app /usr/share/nginx/html
The COPY would not work: since I ran ng build inside the container, the dist folder only exists inside the container, so the copy has to be executed there as well.
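An alternative worth considering that avoids shipping the Node toolchain in the final image is a multi-stage build. This is only a sketch under the same assumptions as the Dockerfile above (the dist path and config locations are taken from it):
# Stage 1: build the Angular app
FROM node:alpine AS builder
WORKDIR /app
COPY . .
RUN npm install && ./node_modules/@angular/cli/bin/ng build
# Stage 2: serve the build output with nginx
FROM nginx:alpine
COPY --from=builder /app/dist /usr/share/nginx/html
COPY ./openshift/nginx/nginx.conf /etc/nginx/nginx.conf
COPY ./openshift/nginx/conf.d/default.conf /etc/nginx/conf.d/default.conf
RUN chgrp -R root /var/cache/nginx /var/run /var/log/nginx && \
    chmod -R 770 /var/cache/nginx /var/run /var/log/nginx
EXPOSE 8080
CMD ["nginx", "-g", "daemon off;"]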
I had the same error with my nginx:alpine Dockerfile.
There is already a user called nginx in the nginx:alpine image. My guess is that it's cleaner to use it to run nginx.
Here is how I resolved it:
Set the owner of /var/cache/nginx to nginx (user 101, group 101)
Create a /var/run/nginx.pid and set the owner to nginx as well
Copy all the files to the image using --chown=nginx:nginx
FROM nginx:alpine
RUN touch /var/run/nginx.pid && \
chown -R nginx:nginx /var/cache/nginx /var/run/nginx.pid
USER nginx
COPY --chown=nginx:nginx my/html/files /usr/share/nginx/html
COPY --chown=nginx:nginx config/myapp/default.conf /etc/nginx/conf.d/default.conf
...
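With that in place, a quick smoke test could look like this (the tag is illustrative, and it assumes your default.conf listens on 8080 as in the question):
docker build -t my-nginx .
docker run --rm -p 8080:8080 my-nginx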
If you're here because you failed to deploy an example Helm chart (e.g. helm create mychart), do as @quasipolynomial suggested, but instead change your deployment file to pull the right image, i.e.:
containers:
  - image: nginxinc/nginx-unprivileged
More info on the official unprivileged image: https://github.com/nginxinc/docker-nginx-unprivileged
You may also change the folders nginx writes to via the nginx.conf file; you can read more in the section "Running nginx as a non-root user".
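A minimal sketch of such a config, assuming you point the pid file and the temp paths at /tmp so any non-root user can write there (paths are illustrative):
# fragment of nginx.conf
pid /tmp/nginx.pid;
http {
    client_body_temp_path /tmp/client_temp;
    proxy_temp_path       /tmp/proxy_temp;
    fastcgi_temp_path     /tmp/fastcgi_temp;
    uwsgi_temp_path       /tmp/uwsgi_temp;
    scgi_temp_path        /tmp/scgi_temp;
    # ... the rest of your http block
}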
This may or may not be a step in the right direction (it's especially helpful for those who came here looking for general help with the [emerg] mkdir() ... failed error).
This solution applies when building nginx from source.
It took me about seven hours to realize that the solution is directly related to the prefix path set when compiling nginx.
This is where my configuration threw nginx off (a very brief example), compiled from the nginx source:
sudo ./auto/configure \
--prefix=/usr/local/nginx \
--http-client-body-temp-path=/tmp/nginx/client-body-temp \
--http-fastcgi-temp-path=/var/tmp/nginx/fastcgi_temp
Without realizing it, I was setting the prefix to /usr/local/nginx but pointing the client body temp path and the fastcgi temp path at directories under /tmp/nginx and /var/tmp/nginx.
This breaks nginx's ability to access the correct files, because the temp paths do not correlate with the prefix path.
So I fixed it by (again, a super simple configure as an example):
sudo ./auto/configure \
--prefix=/usr/local/nginx \
--http-client-body-temp-path=/usr/local/nginx/client_body_temp \
--http-fastcgi-temp-path=/usr/local/nginx/fastcgi_temp \
Further simplified (relative temp paths are resolved against --prefix):
sudo ./auto/configure \
--prefix=/usr/local/nginx \
--http-client-body-temp-path=client_body_temp \
--http-fastcgi-temp-path=fastcgi_temp \
Again, not guaranteed to work, but definitely a step in the right direction.
Run the command below to fix the issue; the anyuid security context constraint is required:
oc adm policy add-scc-to-user anyuid system:serviceaccount:<NAMESPACE>:default
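For example (a sketch; myproject is a placeholder namespace and the DeploymentConfig name is illustrative):
oc adm policy add-scc-to-user anyuid system:serviceaccount:myproject:default
oc rollout latest dc/my-nginx   # redeploy so the pod starts under the new SCC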
I have a docker-compose.yml file with the following content:
version: '2'
services:
  MongoDB:
    image: mongo
  Parrot-API:
    build: ./Parrot-API
    image: sails-js:dev
    volumes:
      - "/user/Code/node/Parrot-API:/host"
    command: bash -c "cd /host && sails lift"
    links:
      - MongoDB:MongoDB
    ports:
      - "3050:1337"
The file basically runs two containers: MongoDB and a web app (in the directory ./Parrot-API) built with Sails.js. However, when I run docker-compose up in the terminal, I get this error: Parrot-API_1 | bash: sails: command not found
node_Parrot-API_1 exited with code 127. Note that Sails.js is a Node.js web framework, and sails lift starts the app on port 1337.
I have done some Googling and found some similar questions, but none were helpful in my case.
btw, I have the following Dockerfile in the Parrot-API folder:
FROM sails-js:dev
VOLUME /host
WORKDIR /host
RUN rm -rf node_modules && \
echo "hello world!" && \
pwd && \
ls -lrah
EXPOSE 1337
CMD npm install -g sails && npm install && sails lift
The file structure is as follows:
|- docker-compose.yml
|- Parrot-API/Dockerfile
|- Parrot-API/app.js, etc..
It is clear to me that the Parrot-API container exits immediately because the sails lift command never runs, but how do I make the container work? Thanks!
You showed a docker-compose.yml that builds a sails-js:dev image, and you showed a Dockerfile that is based on the sails-js:dev image. This appears to be recursive.
Your Dockerfile ends with a CMD (in lieu of an ENTRYPOINT) that does the npm install of sails. Since you did this in a CMD instead of a RUN, sails is never installed in your image: the install only happens when the container starts, and only if you don't run the container with arguments of your own, like the custom command in your docker-compose.yml.
The fix is to update the Dockerfile with a proper base image and change the CMD to a RUN. I'm also seeing a few other mistakes, like declaring a volume and then modifying its contents: changes made to a path after its VOLUME instruction are discarded from the image. The FROM node is just a guess based on your npm commands, feel free to adjust:
FROM node
RUN mkdir -p /host && cd /host && npm install -g sails && npm install
EXPOSE 1337
WORKDIR /host
VOLUME /host
CMD sails lift
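With that image, a quick way to verify the fix could be (a sketch, assuming the docker-compose.yml from the question):
docker-compose build
docker-compose up   # sails is now on the PATH at container start, so `sails lift` runs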
I tried to make a simple application with Yesod and PostgreSQL using Docker Compose but RUN yesod init -n myApp -d postgresql didn't seem to work as expected.
I defined Dockerfile and docker-compose.yml as below:
Dockerfile:
FROM shuny/ghc-7.8.4:latest
MAINTAINER shuny
# Create default config
RUN cabal update
# Add stackage remote repo
RUN sed -i 's/^remote-repo: [a-zA-Z0-9_\/:.]*$/remote-repo: stackage:http:\/\/www.stackage.org\/lts/g' /root/.cabal/config
# Update packages
RUN cabal update
# Generate locale otherwise happy (because of tf-random) will fail
RUN locale-gen en_US.UTF-8
ENV LANG en_US.UTF-8
RUN echo $LANG
# Install build tools for yesod
RUN cabal install alex happy yesod-bin
# Install library for yesod-postgres
RUN apt-get update && apt-get install -y libpq-dev
RUN mkdir /code
WORKDIR /code
RUN yesod init -n myApp -d postgresql
WORKDIR /code/myApp
RUN cabal sandbox init && cabal install --only-dependencies --max-backjumps=-1 --reorder-goals
RUN cabal configure && cabal build && cabal install
ADD . /code
WORKDIR /code
# ADD settings.yml /code/myApp/config/
docker-compose.yml:
database:
  image: postgres
  ports:
    - "5432"
web:
  build: .
  tty: true
  command: yesod devel
  volumes:
    - .:/code/
  ports:
    - "3000:3000"
  links:
    - database
and docker-compose build finished as below:
Step 0 : FROM shuny/ghc-7.8.4:latest
...
Step 17 : WORKDIR /code
---> Running in bf99d0aca48c
---> 37c3c94338d7
Removing intermediate container bf99d0aca48c
Successfully built 37c3c94338d7
but when I check like this:
$ docker-compose run web /bin/bash
root@0fe5fb1a3b20:/code# ls
root@0fe5fb1a3b20:/code#
it shows nothing, while this command seems to work as expected:
docker run -ti 37c3c94338d7
root@31e94428de37:/code# ls
docker-compose.yml Dockerfile myApp settings.yml
root@31e94428de37:/code# ls myApp/
app config Handler Model.hs Settings.hs test
Application.hs dist Import myApp.cabal static
cabal.sandbox.config Foundation.hs Import.hs Settings templates
How can I fix it?
I really appreciate any feedback. Thank you.
You are doing strange things with volumes and the ADD instruction.
First you build your application inside the image:
RUN yesod init -n myApp -d postgresql
WORKDIR /code/myApp
RUN cabal sandbox init && cabal install --only-dependencies --max-backjumps=-1 --reorder-goals
RUN cabal configure && cabal build && cabal install
Then you add the content of the folder that contains the Dockerfile into the /code folder of the image. I suspect this step is unnecessary:
ADD . /code
Then, if you run a container without a volume option, everything works fine:
docker run -ti 37c3c94338d7
But in your docker-compose.yml file, you specified a volume option that overrides the /code folder in the container with the folder containing the docker-compose.yml file on the host machine. Therefore, you no longer have the content that was generated during the build of your image.
There are two possibilities:
Don't use the volume instruction in the docker-compose.yml file
Put the content of the /code/myApp/ folder of the image inside the ./myApp folder of the host.
It depends on why you want to use the volume option in docker-compose.yml.
I don't really know what your goal is. But if what you are trying to do is access the files built inside the container from the host machine, this should do what you are looking for (see the sketch after this list):
Remove the build steps from your Dockerfile
Run a shell inside a "web" container: docker-compose run web bash
Launch the build commands
So you will have built your application while the volume was mounted and will see the files on the host machine.
Exit the shell
Launch Docker Compose normally
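A rough transcript of that workflow (a sketch, assuming the docker-compose.yml above):
docker-compose run web bash
# inside the container, with the host folder mounted at /code:
yesod init -n myApp -d postgresql
cd myApp && cabal sandbox init && cabal install --only-dependencies
exit
docker-compose up   # the generated files now also exist on the host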
If you just want to be able to back up the content of the /code/myApp/ folder, maybe you should omit the path on the host machine from the volumes section of docker-compose.yml:
volumes:
  - /code/
And follow this section of the documentation.
I hope this helps.