I have a weird problem with Docker and hope someone here can help me :)
I want to create a Keycloak image derived from the image jboss/keycloak. The idea is that the Dockerfile also copies a preconfigured standalone.xml into the image, so Keycloak can start directly without any manual work.
But as soon as I write, for example, a
"CMD touch /opt/test.txt"
into the file the container crashes with the message "12:02:14,290 INFO [org.jboss.modules] (main) JBoss Modules version 1.9.1.Final
WFLYSRV0073: Invalid option '/bin/sh'"
This is just a new file with no purpose; the changes to the .xml are not in there yet.
As soon as I put only the FROM back in and rebuild, everything works again.
I thought the layer system would let you modify an image this way, but here it doesn't seem to work. Can someone tell me why?
So far this has always worked with the Alpine image, but I don't want to build the whole Keycloak setup myself again when there is already an official image for it.
This is basically what I had in mind:
FROM jboss/keycloak:X.XX
CMD rm /opt/jboss/keycloak/standalone/configuration/standalone.xml
COPY ./keycloak/standalone.xml /opt/jboss/keycloak/standalone/configuration/
Thanks for help :)
Change
CMD rm
to
RUN rm
RUN is part of building: every RUN command is executed while your image is built.
With CMD you define (or override) the default command that runs when a container is started from your image (and you don't want to change Keycloak's default CMD).
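Applied to the Dockerfile from the question, the fix would look like this (the version tag is kept as the placeholder from the question):

FROM jboss/keycloak:X.XX
RUN rm /opt/jboss/keycloak/standalone/configuration/standalone.xml
COPY ./keycloak/standalone.xml /opt/jboss/keycloak/standalone/configuration/

The RUN executes during docker build, so the finished image already contains your standalone.xml, and the base image's original CMD still starts Keycloak.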
Related
I have a Debian package I am deploying that comes with a Docker image. On upgrading the package, the prerm script stops and removes the Docker image. As a fail-safe, I have the preinst script do it as well, to ensure the old image is removed before the installation of the new image. If there is no image, the following errors are reported to the screen: (for stop) No such image: <tag> and (for rmi) No such container: <tag>.
This really isn't a problem, as the errors are ignored by dpkg, but they are reported to the screen, and I get constant questions from the users: "Is that error OK? Did the install fail?" etc.
I cannot seem to find the correct set of docker commands to check whether a container is running in order to stop it, and to check whether an image exists in order to remove it, so that those errors are no longer generated. All I have to work with is the docker image tag.
I think you could go one of two ways:
Knowing the image, you could check whether there is any container based on that image. If there is, find out whether it is running; if it is, stop it first, then remove the image. This would prevent the error messages from showing up, though other messages regarding the container and image handling may still be visible (see the sketch below).
Redirect the output of the docker commands in question, e.g. 2>/dev/null (the error messages go to stderr)
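A minimal sketch combining both ideas, assuming the package knows its image tag (IMAGE_TAG below is a placeholder):

#!/bin/sh
# IMAGE_TAG is a placeholder for the tag the package ships with.
IMAGE_TAG="my-app:1.0"
# Stop any running containers based on that image, quietly.
# (unquoted on purpose so multiple IDs expand into separate arguments)
containers=$(docker ps -q --filter "ancestor=$IMAGE_TAG")
if [ -n "$containers" ]; then
    docker stop $containers >/dev/null 2>&1
fi
# Remove the image only if it actually exists.
if docker image inspect "$IMAGE_TAG" >/dev/null 2>&1; then
    docker rmi "$IMAGE_TAG" >/dev/null 2>&1
fi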
You're not limited to the docker CLI, you know? You can always combine docker CLI commands with Linux sh or DOS commands, and you can also write your own .sh scripts. If you don't want to see the errors, you can either redirect them or store them in a file, such as:
to redirect: {operation} 2>/dev/null
to store: {operation} 2>>/var/log/xxx.log
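For example, with a placeholder image tag:

docker rmi my-app:1.0 2>/dev/null

This silently discards the "No such image" error while leaving normal output visible.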
I want to build on top of a Windows Docker container by installing a couple of programs. The files total 0.5 GB, and I want to keep the layers as small as possible. I am hoping I can run the setup files from the build context and then have the build context swept away at the end, so I don't have a needless copy of the source files for the setup.exe embedded in my container layers. However, I have not found an example where this is the case. Instead I mostly see people run a COPY command to a temporary build folder, run their setup, then remove the folder. Won't those files still be in the container layers, because the COPY command creates a new layer when it's done?
I don't know if the container can see the build context directly. I was hoping for some magical folder filled with the build-context files so I could run a script using it, but I haven't found anything.
It seems like the alternative is to create a private file server and perform a RUN that downloads the files from that server, unpacks them, runs the install, and removes them (all as one Docker step). I understand this would make the files more available to others who need to rerun the build, but I'm not convinced we'll need to rerun it. It's not likely to change, as the container will build patches for a legacy application. It just seems like a lot to host files on a private, public-facing server for something that will get called once every couple of years, if ever.
So are these my two options?
Make a container with needless copies of source files embedded within
Host the files on a private file server and download/install/remove them
Or am I missing another option or point about how the containers work?
It's a long shot, as Windows is tricky with its file system, but you could do it this way:
In your Dockerfile, use a COPY command, run the install, then RUN del ... to remove the installation files
Build your image: docker build -t my-large-image:latest .
Run your image: docker run --name my-large-container my-large-image:latest
Stop the container
Export your container filesystem: docker export my-large-container > my-large-container.tar
Import the filesystem into a new image: cat my-large-container.tar | docker import - my-small-image
The caveat is that you need to run the container once, which might not be what you want. And also I haven't tested this with Windows containers, sorry.
I usually do the download or copy in one step, then in the next step I do the silent installation and remove the installer.
# escape=`
FROM mcr.microsoft.com/dotnet/framework/wcf:4.8-windowsservercore-ltsc2016
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]
ADD https://download.visualstudio.microsoft.com/download/pr/6afa582f-fa26-4a73-8cb9-194321e85f8d/ecea51ead62beb7acc73ad9799511ffdb3083ad384fe04ec50e2cbecfb426482/VS_RemoteTools.exe VS_RemoteTools_x64.exe
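# Run the installer silently, then delete it within the same RUN step.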
RUN Start-Process .\\VS_RemoteTools_x64.exe -ArgumentList @('/install','/quiet','/norestart') -NoNewWindow -Wait; `
Remove-Item -Path C:/VS_RemoteTools_x64.exe -Force;
But otherwise, I don't think you can mount a custom volume while it's being built.
I didn't find a satisfactory answer to this. Docker seems designed only for the modern era and assumes you'll be able to download what you need via scripts and tools hitting APIs and file servers. The easiest option I found, and the one I eventually went with, was to host the files on a private file server or service (in my case, AWS S3).
I really wish there were a way to have files hosted by the Docker daemon in some way, e.g. if it acted like a temporary server that you could fetch data from via HTTP instead of needing to COPY the files and create a layer. Alas, I found no such feature.
Taking this route made my container about a GB smaller.
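A rough sketch of that approach for a Windows image; the base image tag, the bucket URL, and the installer switches below are placeholder assumptions, not tested values:

# escape=`
FROM mcr.microsoft.com/windows/servercore:ltsc2019
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop';"]
# Download, install, and delete the installer inside a single RUN step,
# so the installer never lands in a layer of its own.
RUN Invoke-WebRequest -Uri 'https://example-bucket.s3.amazonaws.com/setup.exe' -OutFile C:\setup.exe; `
    Start-Process C:\setup.exe -ArgumentList @('/quiet','/norestart') -NoNewWindow -Wait; `
    Remove-Item C:\setup.exe -Force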
I somehow don't like the RUN x && y && z ... syntax we currently use in Dockerfiles. As far as I understand, I could just run a shell script instead, like RUN xyz.sh, and do the same tasks in my favorite language. Does the latter have any disadvantages?
Update:
In addition to the point David made about complexity, I believe writing everything in the Dockerfile makes it easier to share (hence a survivorship bias: the Dockerfiles you see shared are the self-contained ones). Namely, on Docker Hub you usually have a "Dockerfile" tab to quickly get an idea of how the image is built. If the author uses COPY and RUN xyz.sh, he or she would have to host the script elsewhere, or the Dockerfile alone becomes meaningless.
CMD is executed at runtime, that is, when the container is created from the image. RUN is a build-time instruction. So the question is actually why people run things with RUN instead of CMD at runtime. (You can of course COPY script.sh /script.sh and then RUN bash /script.sh; see the sketch after the points below.)
If you do things like installing dependencies at runtime, it could take a lot of time; when scaling up your service, this would make auto-scaling useless because it can't be fast enough to absorb the peak.
At build time, RUN can be cached, so next time the build will be a lot faster.
Because of the way the Docker file system works, creating 10 containers from the same image takes only a little more space than creating 1. So you save disk space by installing packages in the image; if you install them at runtime instead, every container occupies its own part of the disk.
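As a sketch of the two equivalent styles (setup.sh is a hypothetical script you would ship with the build context):

FROM debian:bookworm
# Inline form: every command is visible in the Dockerfile itself.
RUN apt-get update && apt-get install -y curl && rm -rf /var/lib/apt/lists/*
# Script form (alternative to the RUN above): same effect, but the
# logic lives in a separate file that must be copied in first.
# COPY setup.sh /setup.sh
# RUN bash /setup.sh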
RUN executes commands in a new layer and creates a new image. This happens when you build the image using docker build.
CMD specifies the default command and parameters to be run when a container is launched from the image.
In summary: RUN and CMD are not interchangeable. RUN runs when an image is created, CMD when a container is launched.
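A minimal illustration of the difference:

FROM alpine:3.19
# Executed once, during docker build; the file is baked into the image.
RUN echo "written at build time" > /hello.txt
# Only recorded at build time; executed each time a container starts.
CMD ["cat", "/hello.txt"]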
I'm trying to build a Ruby on Rails project, using Ruby 1.9.3, on a Debian image.
After I built it using a Dockerfile, it appears that a directory is missing, so the container doesn't start. Can I add it manually? I've tried to use "docker run -it sh" to run it as a shell, but for some reason, after I add a directory with mkdir, it vanishes when I exit.
I'm kinda new to this stuff (just did some tutorials), so apologies for any mixed-up details.
You are going to need to add the dir and then commit the changes in the container to make a new image out of it, so that the directory exists in the new image. It's much better, though, to use a repeatable Dockerfile to create the image; see the sketch after the links below.
Documentation for the Dockerfile -> https://docs.docker.com/engine/reference/builder/
Have a look at the documentation for commit here -> https://docs.docker.com/engine/reference/commandline/commit/
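A rough sketch of the commit route (image, container, and directory names are placeholders), followed by the Dockerfile equivalent:

# Start a container from your image and add the directory inside it:
docker run -it --name tmp-fix my-rails-image sh
mkdir -p /app/missing-dir
exit
# Back on the host, commit the modified container as a new image:
docker commit tmp-fix my-rails-image:fixed

The repeatable alternative is a single line in your Dockerfile:

RUN mkdir -p /app/missing-dir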
Contents of the Dockerfile:
FROM XYZ
MAINTAINER ABC
RUN echo "hello world"
EXPOSE 80
ENTRYPOINT ["/usr/sbin/httpd","-D","FOREGROUND"]
When I try to build an image from this file, I see the following:
permission denied
Removing intermediate container
when docker tries to execute the RUN command
Observations:
This error occurs irrespective of the content of the RUN command.
Removing it ensures the build completes without issues.
I am able to build from the same Dockerfile and image on another host.
"docker info" produced similar information on both machines.
How can I debug this further to see what the issue is?
Update (in response to the comments below):
I have been able to build the same image (and others) on this instance before
The issue occurred irrespective of the base image used
The issue was specific to this one instance which is running CentOS
The user I was logged in as was different from the user the daemon was running as (root)
Assuming the issue might have been caused by the user mismatch, I changed to root and tried the command. It went through without issues. Then I changed back to the original user, removed the image, and tried again: it went through again! The original issue is no longer reproducible.
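If the user-mismatch theory is right, these standard checks would be the place to start (the docker group setup is the usual Linux convention, not something confirmed above):

# Which groups does the current user belong to? Access to the daemon
# socket normally requires membership in the "docker" group.
id -nG
# Who owns the daemon socket, and which group may use it?
ls -l /var/run/docker.sock
# Grant the current user access (takes effect on next login):
sudo usermod -aG docker $USER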