Does anyone know a way to mount a Next cache in Docker?
I thought this would be relatively simple. I found that BuildKit has a cache mount feature and tried to add it to my Dockerfile:
COPY --chown=node:node . /code
RUN --mount=type=cache,target=/code/.next/cache npm run build
However, I found that I couldn't write to the cache as the node user:
Type error: Could not write file '/code/.next/cache/.tsbuildinfo': EACCES: permission denied, open '/code/.next/cache/.tsbuildinfo'.
Apparently you need root permissions to use the BuildKit cache mount. This is a problem because I cannot build Next as root.
My workaround was to create a cache somewhere else and then copy the files to and from .next/cache. For some reason the cp command does not work in Docker (as node you get a permission error; as root there is no error, but nothing is copied). I eventually came up with this:
# syntax=docker/dockerfile:1.3
FROM node:16.15-alpine3.15 AS nextcache
#create cache and mount it
RUN --mount=type=cache,id=nxt,target=/tmp/cache \
mkdir -p /tmp/cache && chown node:node /tmp/cache
FROM node:16.15-alpine3.15 as builder
USER node
#many lines later
# Build next
COPY --chown=node:node . /code
#copy mounted cache into actual cache
COPY --chown=node:node --from=nextcache /tmp/cache /code/.next/cache
RUN npm run build
FROM builder as nextcachemount
USER root
#update mounted cache
RUN mkdir -p /tmp/cache
COPY --from=builder /code/.next/cache /tmp/cache
RUN --mount=type=cache,id=nxt,target=/tmp/cache \
cp -R /code/.next/cache /tmp
I managed to store something inside the mounted cache, but I have not noticed any performance boost. (I am trying to implement this mounted cache for Next in order to save time on every build. Right now the Next build step takes ~160 seconds, and I'm hoping to bring that down a bit.)
If you are using the node user from an official Node image, which happens to have uid=1000 and gid=1000, you should specify those IDs when mounting the cache so that you have permission to write to it:
RUN --mount=type=cache,target=/code/.next/cache,uid=1000,gid=1000 npm run build
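Applied to the Dockerfile from the question, the multi-stage copy workaround becomes unnecessary. A minimal sketch (the /code layout and the build script are assumed from the question):

```dockerfile
# syntax=docker/dockerfile:1
FROM node:16.15-alpine3.15 AS builder
USER node
WORKDIR /code
COPY --chown=node:node . /code
# uid/gid on the cache mount must match the user running the build step;
# in the official node images, the node user is 1000:1000
RUN --mount=type=cache,target=/code/.next/cache,uid=1000,gid=1000 npm run build
```

The cache contents persist between builds on the same builder, so subsequent builds can reuse Next's incremental build artifacts from .next/cache.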
Related
My friend gave me a project with a dockerfile which seems to work just fine for him but I get a permission error.
FROM node:alpine
RUN mkdir -p /usr/src/node-app && chown -R node:node /usr/src/node-app
WORKDIR /usr/src/node-app
COPY package.json yarn.lock ./
COPY ./api/package.json ./api/
COPY ./iso/package.json ./iso/
USER node
RUN yarn install --pure-lockfile
COPY --chown=node:node . .
EXPOSE 3000
error An unexpected error occurred: "EACCES: permission denied, mkdir '/usr/src/node-app/node_modules/<project_name>/api/node_modules'".
Could it be a Docker version issue?
COPY normally copies things into the image owned by root, and it will create directories inside the image if they don't exist. In particular, when you COPY ./api/package.json ./api/, it creates the api subdirectory owned by root, and when you later try to run yarn install, it can't create the node_modules subdirectory because you've switched users.
I'd recommend copying files into the container and running the build process as root. Don't chown anything; leave all of these files owned by root. Switch to an alternate USER only at the very end of the Dockerfile, where you declare the CMD. This means that the non-root user running the container won't be able to modify the code or libraries in the container, intentionally or otherwise, which is a generally good security practice.
FROM node:alpine
# Don't RUN mkdir; WORKDIR creates the directory if it doesn't exist
WORKDIR /usr/src/node-app
# All of these files and directories are owned by root
COPY package.json yarn.lock ./
COPY ./api/package.json ./api/
COPY ./iso/package.json ./iso/
# Run this installation command still as root
RUN yarn install --pure-lockfile
# Copy in the rest of the application, still as root
COPY . .
# RUN yarn build
# Declare how to run the container -- _now_ switch to a non-root user
EXPOSE 3000
USER node
CMD yarn start
I have this Dockerfile setup:
FROM node:14.5-buster-slim AS base
WORKDIR /app
FROM base AS production
ENV NODE_ENV=production
RUN chown -R node:node /app
RUN chmod 755 /app
USER node
... other copies
COPY ./scripts/startup-production.sh ./
COPY ./scripts/healthz.sh ./
CMD ["./startup-production.sh"]
The problem I'm facing is that I can't execute ./healthz.sh because it's only executable by the node user. When I commented out the two RUN commands and the USER command, I could execute the file just fine. But I want to restrict the execute permission to the node user for security reasons.
I need ./healthz.sh to be externally executable by Kubernetes' liveness and readiness probes.
How can I make it so? Folder restructuring or stuff like that are fine with me.
In most cases, you probably want your code to be owned by root but world-readable, and scripts to be world-executable. The Dockerfile COPY directive copies a file in with its existing permissions from the host system (hidden in the list of bullet points at the end is a note that a file "is copied individually along with its metadata"). So the easiest way to approach this is to make sure the script has the right permissions on the host system:
# mode 0755 is readable and executable by everyone but only writable by owner
chmod 0755 healthz.sh
git commit -am 'make healthz script executable'
Then you can just COPY it in, without any special setup.
# Do not RUN chown or chmod; just
WORKDIR /app
COPY ./scripts/healthz.sh .
# Then when launching the container, specify
USER node
CMD ["./startup-production.sh"]
You should be able to verify this locally by running your container and manually invoking the health-check script:
docker run -d --name app the-image
# possibly with a `docker exec -u` option to specify a different user
docker exec app /app/healthz.sh && echo OK
The important thing to check is that the file is world-executable. You can also double-check this by looking inside the built image:
docker run --rm the-image ls -l /app/healthz.sh
That should print one line starting with a permission string -rwxr-xr-x; the final r-x triplet is the important part. If you can't get the permissions right another way, you can also fix them up during the image build:
COPY ./scripts/healthz.sh .
# If you can't make the permissions on the original file right:
RUN chmod 0755 *.sh
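As a quick sanity check on the host before building (a sketch; healthz.sh is the script name from the question, and `stat -c` is the GNU/Linux form):

```shell
# set the mode bits that COPY will carry into the image, then confirm them
chmod 0755 healthz.sh
stat -c '%a' healthz.sh   # prints 755
```

Note that git tracks only the executable bit, not the full mode, so committing after the chmod is enough for collaborators to check out an executable script.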
You can modify the Dockerfile's CMD to run the script through sh: CMD ["sh", "./startup-production.sh"]
This interprets the script with sh, which can be dangerous if the script uses bash-specific features like [[ ]] and declares #!/bin/bash as its first line.
Moreover, I would use ENTRYPOINT here instead of CMD if you want this to run whenever the container comes up.
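Putting both suggestions together, a minimal sketch (paths are taken from the question; setting the execute bit at build time avoids depending on sh vs. bash):

```dockerfile
COPY ./scripts/startup-production.sh ./
# ensure the execute bit is set regardless of the host file's mode
RUN chmod +x ./startup-production.sh
# ENTRYPOINT runs whenever the container starts
ENTRYPOINT ["./startup-production.sh"]
```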
I get an error when using the COPY --from=reference in my Dockerfile. I created a minimal example:
FROM alpine AS build
FROM scratch
COPY --from=build / /
This causes the following build output:
$ docker build .
Sending build context to Docker daemon 2.048kB
Step 1/3 : FROM alpine AS build
---> b7b28af77ffe
Step 2/3 : FROM scratch
--->
Step 3/3 : COPY --from=build / /
failed to copy files: failed to copy directory: Error processing tar file(exit status 1): Container ID 165578 cannot be mapped to a host ID
The build runs fine in CI, but it fails on my laptop running Ubuntu 18.04. What could be causing this issue?
I've just had this issue. I wanted to copy the binaries of a standard node image to my image in a multi-stage build.
Worked fine locally. Didn't work in BitBucket Pipeline.
As mentioned by @BMitch, the issue was the use of userns.
With BitBucket, the userns setting is 100000:65536, which (as I understand it) means that the "safe" userIDs must be between 100000 and 165536.
The user ID on your source files is outside that range, but that doesn't mean it is actually user ID 165578. Don't ask me why, but the real user ID is 165536 lower than the reported value: 165578 - 100000 - 65536 = 42.
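That offset arithmetic is easy to sanity-check in a shell:

```shell
# reported ID, minus the userns start (100000), minus the range length (65536)
echo $((165578 - 100000 - 65536))   # prints 42
```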
The solution I have is to change the user:group ownership of the source files to root:root, copy them into my image, and then set the user:group ownership back (though as I'm typing this, I've not done that last part yet, as I'm not 100% sure it is necessary).
ARG NODE_VERSION
FROM node:${NODE_VERSION}-stretch as node
# To get the files copied to the destination image in BitBucket, we need
# to set the files owner to root as BitBucket uses userns of 100000:65536.
RUN \
chown root:root -R /usr/local/bin && \
chown root:root -R /usr/local/lib/node_modules && \
chown root:root -R /opt
FROM .... # my image has a load of other things in it.
# Add node - you could also add --chown=<user>:<group> to the COPY commands if you want
COPY --from=node /usr/local/bin /usr/local/bin
COPY --from=node /usr/local/lib/node_modules /usr/local/lib/node_modules
COPY --from=node /opt /opt
That error is indicating that you have enabled userns on your Ubuntu docker host, but that there is no mapping for uid 165578. These mappings should be controlled by /etc/subuid.
Docker's userns documentation contains more examples of configuring this file.
You can also modify the source image, finding any files owned by 165578 and changing them to be within your expected range.
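For reference, a hypothetical /etc/subuid entry (with a matching line in /etc/subgid) uses the format user:start:length, where dockremap is a typical remapping user:

```
# maps container uids 0-65535 onto host uids 100000-165535
dockremap:100000:65536
```

A file owned by container uid 165578 falls outside that 65536-wide range, which is why widening the range, or chowning the offending files into range as suggested above, resolves the error.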
Seems like a basic issue, but I couldn't find any answers so far.
When using ADD/COPY in a Dockerfile and running the image on Linux, the default permission of the copied file is 644 and its owner is root.
However, when the image runs, a non-root user starts the container; any file copied with 644 permissions cannot be executed by that user, and if the file is the ENTRYPOINT the container fails to start with a permission denied error.
I read in one of the posts that COPY/ADD since Docker 17.09 allows --chown, but I don't know which non-root user will start the container, so I cannot set the ownership to that user.
I also saw a workaround that ADDs/COPYs the files to a temporary location and uses RUN to copy them from there to the actual folder, which is what I'm doing below. But this approach doesn't work either: the final image doesn't have the files in /opt/scm.
#Installing Bitbucket and setting variables
WORKDIR /tmp
ADD atlassian-bitbucket-${BITBUCKET_VERSION}.tar.gz .
COPY bbconfigupdater.sh .
#Copying Entrypoint script which will get executed when container starts
WORKDIR /tmp
COPY entrypoint.sh .
RUN ls -lrth /tmp
WORKDIR /opt/scm
RUN pwd && cp /tmp/bbconfigupdater.sh /opt/scm \
&& cp /tmp/entrypoint.sh /opt/scm \
&& cp -r /tmp/atlassian-bitbucket-${BITBUCKET_VERSION} /opt/scm \
&& chgrp -R 0 /opt/ \
&& chmod -R 755 /opt/ \
&& chgrp -R 0 /scm/bitbucket \
&& chmod -R 755 /scm/bitbucket \
&& ls -lrth /opt/scm && ls -lrth /scmdata
Any help is appreciated in figuring out how I can get my entrypoint script copied to the desired path with execute permissions set.
The default file permission is whatever the file permission is in your build context from where you copy the file. If you control the source, then it's best to fix the permissions there to avoid a copy-on-write operation. Otherwise, if you cannot guarantee the system building the image will have the execute bit set on the files, a chmod after the copy operation will fix the permission. E.g.
COPY entrypoint.sh .
RUN chmod +x entrypoint.sh
A better option with newer versions of docker (and which didn't exist when this answer was first posted) is to use the --chmod flag (the permissions must be specified in octal at last check):
COPY --chmod=0755 entrypoint.sh .
You do not need to know who will run the container. The user inside the container is typically configured by the image creator (using USER) and doesn't depend on the user running the container from the docker host. When the user runs the container, they send a request to the docker API which does not track the calling user id.
The only time I've seen the host user matter is if you have a host volume and want to avoid permission issues. If that's your scenario, I often start the entrypoint as root, run a script called fix-perms to align the container uid with the host volume uid, and then run gosu to switch from root back to the container user.
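A sketch of that entrypoint pattern (everything here is illustrative: the app user name, the /data volume path, and the fix-perms logic are assumptions, and gosu must be installed in the image):

```shell
#!/bin/sh
set -e
if [ "$(id -u)" = "0" ]; then
    # align the container user's uid/gid with whoever owns the host volume
    usermod  -u "$(stat -c '%u' /data)" app
    groupmod -g "$(stat -c '%g' /data)" app
    # drop from root to the container user and run the real command
    exec gosu app "$@"
fi
exec "$@"
```

The container then starts as root only long enough to fix ownership, and the application itself always runs as the unprivileged user.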
A --chmod flag was added to ADD and COPY instructions in Docker CE 20.10. So you can now do.
COPY --chmod=0755 entrypoint.sh .
To be able to use it you need to enable BuildKit.
# enable buildkit for docker
DOCKER_BUILDKIT=1
# enable buildkit for docker-compose
COMPOSE_DOCKER_CLI_BUILD=1
Note: It seems to not be documented at this time, see this issue.
I'm pulling a WordPress image and everything is working fine, but when I go to the WordPress editor page the following error appears at the top of the screen:
Autoptimize cannot write to the cache directory (/var/www/html/wp-content/cache/autoptimize/), please fix to enable CSS/ JS optimization!
I assumed RUN chown -R www-data:www-data wp-content/ would solve the issue, but it's not working. Any ideas would be appreciated. My Dockerfile is below.
FROM wordpress:4.9.2-php7.2-apache
RUN chown -R www-data:www-data wp-content/
COPY ./src /var/www/html/
# Install the new entry-point script
COPY secrets-entrypoint.sh /secrets-entrypoint.sh
RUN chmod +x /secrets-entrypoint.sh
ENTRYPOINT ["/secrets-entrypoint.sh"]
EXPOSE 80
CMD ["apache2-foreground"]
I'm not sure of the exact permissions, but you don't want to be writing inside a container, so you should define a volume. Since you don't need the data to persist, you can do this in your Dockerfile like:
VOLUME /var/www/html/wp-content/cache
This sets up an anonymous volume where Docker chooses the location on your host, but you can mount a named volume there instead when the container is created if you like.
You could also use a tmpfs mount, which is a good fit for cache files.
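If you prefer to decide at run time instead of baking a VOLUME into the image, the same thing can be done with docker run flags (the image name here is hypothetical):

```shell
# named volume: persists across container restarts
docker run -d -v autoptimize-cache:/var/www/html/wp-content/cache my-wordpress

# tmpfs mount: kept in memory, discarded when the container stops
docker run -d --tmpfs /var/www/html/wp-content/cache my-wordpress
```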