How to fix 'Permission denied' in Docker sh entrypoint

I'm trying to create an easy-to-use Docker image for the Garry's Mod server. While my Docker image builds just fine, running it as a container always results in a single error: /bin/sh: 1: ./easygmod.sh: Permission denied.
I'm using the cm2network/steamcmd image as a base. I have tried both tags that the aforementioned base image has. I have tried chmod +x, changing users to root, and fiddling with the shebang in the first line of the easygmod.sh script, and I have checked for a number of possible typos, particularly in file names and paths.
I have a GitHub repository for this project which auto-builds to Docker Hub. Currently, the lines of code involving the problematic script are:
# Start main script
ADD easygmod.sh .
RUN chmod +x easygmod.sh
USER steam
CMD ./easygmod.sh
Also, the shebang/first line of the script is currently #!/bin/sh.
Despite there being no logical explanation, the easygmod.sh script refuses to be executed, always throwing the error Permission denied. This is especially confusing given that my only other public GitHub project, which is very similar (a similar style of Docker image with the same base OS as cm2network/steamcmd), never had any issues like this.

The file isn't owned by steam in the container, so the chmod +x was insufficient. Either add --chown=steam to the ADD instruction, or change your chmod from +x to a+rx so the read and execute bits apply to all users.
Also, you didn't set a working directory or a destination path for those files. It's likely that the root version of that image has a working directory that steam can't access. You should use /home/steam/ for that instead.
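Putting both fixes together, a minimal sketch of the relevant Dockerfile lines might look like this (the /home/steam/ path assumes the steam user's home directory in the cm2network/steamcmd image):
# Start main script
WORKDIR /home/steam/
ADD --chown=steam easygmod.sh .
RUN chmod a+rx easygmod.sh
USER steam
CMD ["./easygmod.sh"]
Either the --chown or the a+rx change addresses the ownership problem; the WORKDIR guards against the script landing in a directory steam can't access.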

Related

Can I run scripts from a docker build context without a copy?

I want to build on top of a Windows Docker container by installing a couple of programs. The files total 0.5 GB and I want to keep the layers as small as possible. I am hoping I can run the setup files from the build context and then have the build context swept away at the end, so I don't have a needless copy of the source files for the setup.exe embedded in my container layers. However, I have not found an example where this is the case. Instead, I mostly see people run a COPY command to a temporary build folder, run their setup, then remove the folder. Won't those files still be in the container layers, because the COPY command creates a new layer when it's done?
I don't know if the container can see the build-context directly. I was hoping for some magical folder filled with the build-context files so I could run a script using it, but haven't found anything.
It seems like the alternative is to create a private file server and perform a RUN that downloads the files from that server, unpacks them, runs the install, and removes them (all as one Docker step). I understand this would make the files more available to others who need to rerun the build, but I'm not convinced we'll need to rerun it. It's not likely to change, as the container will build patches for a legacy application. It just seems like a lot to host files on a private, public-facing server for something that will get called once every couple of years, if ever.
So are these my two options?
Make a container with needless copies of source files embedded within
Host the files on a private file server and download/install/remove them
Or am I missing another option or point about how the containers work?
It's a long shot, as Windows is tricky with its file system, but you could do it this way:
1) In your Dockerfile, use a COPY command, install, then RUN del ... to remove the installation files
2) Build your image: docker build -t my-large-image:latest .
3) Run your image: docker run --name my-large-container my-large-image:latest
4) Stop the container
5) Export your container filesystem: docker export my-large-container > my-large-container.tar
6) Import the filesystem to a new image: cat my-large-container.tar | docker import - my-small-image
The caveat is that you need to run the container once, which might not be what you want. Also, I haven't tested this with Windows containers, sorry.
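As a runnable sketch, the whole flatten-the-layers sequence from the steps above (using the image and container names from the list) is:
# Build the image whose Dockerfile copies, installs, then deletes the setup files
docker build -t my-large-image:latest .
# Run it once so there is a container filesystem to export
docker run --name my-large-container my-large-image:latest
# Export the container filesystem and re-import it as a single-layer image
docker export my-large-container > my-large-container.tar
cat my-large-container.tar | docker import - my-small-image
Note that docker export captures only the filesystem, so image metadata such as ENTRYPOINT, CMD, and EXPOSE is lost; docker import accepts --change to re-apply such instructions on the new image.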
I usually do the download or copy in one step, then in the next step I do the silent installation and remove the installer.
# escape=`
FROM mcr.microsoft.com/dotnet/framework/wcf:4.8-windowsservercore-ltsc2016
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]
ADD https://download.visualstudio.microsoft.com/download/pr/6afa582f-fa26-4a73-8cb9-194321e85f8d/ecea51ead62beb7acc73ad9799511ffdb3083ad384fe04ec50e2cbecfb426482/VS_RemoteTools.exe VS_RemoteTools_x64.exe
RUN Start-Process .\\VS_RemoteTools_x64.exe -ArgumentList @('/install','/quiet','/norestart') -NoNewWindow -Wait; `
Remove-Item -Path C:/VS_RemoteTools_x64.exe -Force;
But otherwise, I don't think you can mount a custom volume while it's being built.
I didn't find a satisfactory answer to this. Docker seems designed only for the modern era, assuming you'll be able to download what you need via scripts and tools hitting APIs and file servers. The easiest option I found, and the one I eventually went with, was to host the files on a private file server or service (in my case, AWS S3).
I really wish there were a way to have files hosted by the Docker daemon in some way, e.g. if it acted like a temporary server that you could fetch data from over HTTP instead of needing to COPY the files and create a layer. Alas, I found no such feature.
Taking this route made my container about a GB smaller.
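To illustrate the download-install-remove pattern (the URL and installer name below are hypothetical), the key is keeping all three actions inside a single RUN instruction so the installer never persists in any layer:
# escape=`
FROM mcr.microsoft.com/windows/servercore:ltsc2019
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop';"]
# Download, silently install, and delete the installer in one layer,
# so the 0.5 GB of setup files never end up embedded in the image
RUN Invoke-WebRequest -Uri https://my-bucket.s3.amazonaws.com/setup.exe -OutFile C:\setup.exe; `
    Start-Process C:\setup.exe -ArgumentList @('/quiet','/norestart') -NoNewWindow -Wait; `
    Remove-Item C:\setup.exe -Force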

Creating a dockerfile to compile source code

I am trying to follow the two steps mentioned below:
1) Download the source code from
https://sourceforge.net/projects/hunspell/files/Hyphen/2.8/hyphen-2.8.8.tar.gz/download
2) Compile it to get a binary named example:
hyphen-2.8.8$ ./example ~/dev/smc/hyphenation/hi_IN/hyph_hi_IN.dic ~/hi_sample.text
I have downloaded and uncompressed the tar file. My question is: how do I create a Dockerfile to automate this?
There are only 3 commands involved:
./configure
make all-recursive
make install
I can select the official python image as a base container. But how do I write the commands in a Dockerfile?
You can do that with a RUN command:
FROM python:<version number here>
RUN ./configure && make all-recursive && make install
CMD ["<some command here>"]
What you use for <some command here> depends on what the image is meant to do. Remember that Docker containers only run as long as that command is executing, so if you put the configure/make/install steps in a script and use that as your entry point, it's going to build your program, and then the container will halt.
Also you need to get the downloaded files into the container. That can be done using a COPY or an ADD directive (before the RUN of course). If you have the tar.gz file saved locally, then ADD will both copy the file into the container and expand it into a directory automatically. COPY will not expand it, so if you do that, you'll need to add a tar -zxvf or similar to the RUN.
If you want to download the file directly into the container, that could be done with ADD <source URL>, but in that case it won't expand it, so you'll have to do that in the RUN. COPY doesn't allow sourcing from a URL. This post explains COPY vs ADD in more detail.
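Putting those pieces together, a minimal sketch might look like the following (assuming hyphen-2.8.8.tar.gz sits next to the Dockerfile; the final CMD arguments are placeholders for your own dictionary and text files):
FROM python:3
# ADD auto-extracts a local tar.gz into the destination directory
ADD hyphen-2.8.8.tar.gz /src/
WORKDIR /src/hyphen-2.8.8
RUN ./configure && make all-recursive && make install
# Placeholder command: run the compiled binary against your own files
CMD ["./example", "hyph_hi_IN.dic", "hi_sample.text"]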
You can have the three commands in a shell script and then use the following Dockerfile instructions:
COPY ./<path to your script>/<script-name>.sh /
ENTRYPOINT ["/<script-name>.sh"]
CMD ["run"]
For reference, you can model your Dockerfile on the one created for Apache ActiveMQ Artemis, one of the projects I worked on:
https://github.com/apache/activemq-artemis/blob/master/artemis-docker/Dockerfile-ubuntu

Docker ADD command produces 'cannot find the file' error

I'm trying to move my few microservices to Docker containers using the docker-compose project type in Visual Studio.
I also have a Service Fabric project, so I have to install the Service Fabric SDK into my Docker containers.
That's what I do to achieve this (my dockerfile(s)):
FROM mcr.microsoft.com/dotnet/core/aspnet:2.2-nanoserver-1809 AS base
WORKDIR /app
EXPOSE 80
...
WORKDIR /temp
ADD https://aka.ms/vs/15/release/vs_buildtools.exe /temp #C:\TEMP\vs_buildtools.exe
...
The rest of the code doesn't matter, since the build crashes on the line with the ADD command.
The error from the Output window after I run this via Ctrl+F5:
3>Step 4/11 : ADD https://aka.ms/vs/15/release/vs_buildtools.exe /temp
3>Service 'bmt.microservices.snowforecastcenter' failed to build: ADD failed: CreateFile \\?\C:\ProgramData\Docker\tmp\docker-builder567273413\temp: The system cannot find the file specified.
I don't understand what I'm doing wrong, or what 'the system cannot find the file' means, since I simply download the file from the internet and place it into my newly created \temp folder (the link is valid, I checked).
Does anybody know what this might be related to?
Ok, I accidentally fixed the problem by moving the comment to the next line.
From this:
ADD https://aka.ms/vs/15/release/vs_buildtools.exe /temp #C:\TEMP\vs_buildtools.exe
To this:
ADD https://aka.ms/vs/15/release/vs_buildtools.exe /temp
#C:\TEMP\vs_buildtools.exe
Then I read on the official site (https://docs.docker.com/engine/reference/builder/#/from) that you cannot have inline comments, since they are treated as arguments:
Docker treats lines that begin with # as a comment, unless the line is a valid parser directive. A # marker anywhere else in a line is treated as an argument.
I hope this will help other people who are new in Docker.

Move file downloaded in Dockerfile to harddrive

First off, I really lack a lot of knowledge regarding Docker itself and its structure. I know it'd be far more beneficial to learn the basics first, but I need this to work for now in order to move on to other things.
So within a Dockerfile I installed wget and used it to download a file from a website; authentication and download are successful. However, when I later try to move said file it can't be found, and it doesn't show up using e.g. Explorer either (the path was specified).
I thought it might have something to do with RUN and how it executes the wget command; I read that the container ID can be used to copy files to the hard drive, but how would I do that within a Dockerfile?
RUN wget -P ./path/to/somewhere http://something.com/file.txt --username xyz --password bluh
ADD ./path/to/somewhere/file.txt /mainDirectory
The download is shown and the log-in is successful, but as I mentioned, I'm having trouble using that file later on, as it's not located on the hard drive. Probably a basic error, but I'd really appreciate some input that might lead to a solution.
Obviously the error is produced when trying to execute ADD, as there is no file to move. I am trying to find a way to mount a volume in order to store it, but so far in vain.
Edit:
Though the question is similar to the "move to harddrive" one, I am searching for ways to get the ID of the container created within the Dockerfile in order to move the file; while that thread provides such answers, I haven't had any luck using them within the Dockerfile itself.
Short answer is that it's not possible.
The Dockerfile builds an image, which you can run as a short-lived container. During the build, you don't have (write) access to the host and its file system. Which kinda makes sense, since you want to build an immutable image from which to run ephemeral containers.
What you can do is run a container and mount a path from your host as a volume into the container. This is the most straightforward way to share files between the host and a container.
Here is an example how you could do this with the sherylynn/wget image:
docker run -v /path/on/host:/path/in/container sherylynn/wget wget -O /path/in/container/file http://my.url.com
The -v HOST:CONTAINER parameter allows you to specify a path on the host that is mounted inside the container at a specified location.
For wget, I would prefer -O over -P when downloading a single file, since it makes it really explicit where your download ends up. When you point -O to the location of the volume, the downloaded file ends up on the host system (in the folder you mounted).
Since I have no idea what your image or your environment looks like, you might need to tweak one or two things to work well with your own image. As a general recommendation: For basic commands like wget or curl, you can find pre-made images on Docker Hub. This can be quite useful when you need to set up a Continuous Integration pipeline or so, where you want to use wget or curl but can't execute it directly.
Use wget -O instead of -P to download to a specific file path, e.g.:
RUN wget -O /tmp/new_file.txt --user xyz --password bluh http://something.com/new_file.txt

Not able to execute RUN commands in Dockerfile

Contents of the Dockerfile:
FROM XYZ
MAINTAINER ABC
RUN echo "hello world"
EXPOSE 80
ENTRYPOINT ["/usr/sbin/httpd","-D","FOREGROUND"]
When I try to build an image from this file, I see the following:
permission denied Removing intermediate container
when Docker tries to execute the RUN command.
Observations:
This error is irrespective of the content of the RUN command.
Removing it ensures the build completes without issues.
I am able to build from the same Dockerfile and image on another host.
"docker info" produced similar information on both machines.
How can I debug this further to see what the issue is?
Update (in response to the comments below):
I have been able to build the same image (and others) on this instance before
The issue occurred irrespective of the base image used
The issue was specific to this one instance which is running CentOS
The user I was logged in as was different from the user the daemon was running as (root)
Assuming the issue was caused by the user mismatch, I changed to root and tried the command. It went through without issues. Then I changed back to the original user, removed the image, and tried again: it went through again! The original issue is no longer reproducible.
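For anyone debugging a similar mismatch, a few quick checks (assuming a host where the daemon runs as root under the dockerd binary and a docker group exists) can narrow it down:
# Which user and groups am I building as?
id
# Is the daemon really running as root?
ps -o user= -p "$(pgrep -x dockerd | head -n1)"
# If needed, grant the build user access to the Docker socket
# (requires logging out and back in to take effect)
sudo usermod -aG docker "$USER"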
