Docker RUN instruction to source bash profile

I run some installation scripts via Docker; they change ~/.bashrc, but then I need to source it so that the installed commands are available in the RUN instructions below.
I tried the obvious RUN . ~/.bashrc and got a /bin/sh: 13: /root/.bashrc: shopt: not found error.
I tried RUN . ~/.profile and got mesg: ttyname failed: Inappropriate ioctl for device
I do not want to use ENV instructions. The point of having external installation scripts is to use them in non-Docker environments, for example when running unit tests locally. ENV instructions would duplicate environment setup which is already done in installation scripts.

You should not try to set up shell dotfiles in Docker. Many typical paths do not run them at all; for example
# In a Dockerfile
CMD ["some", "command", "here"]
# From the command line
docker run myimage some command here
The Docker environment is fundamentally different from a standalone Linux system. Beyond shell dotfiles, a "home directory" isn't really a Docker concept, and if you have a multi-part process, the standard Docker approach is to run each part in a separate container, whereas on standalone Linux you could use the init system to keep all of the parts running together. If you expect things to work exactly the same way with exactly the same installation scripts, a virtual machine is a better technological match for what you're attempting.
("Inappropriate ioctl for device" also suggests that there are things in the dotfiles that strongly expect to be run from an actual terminal, which you don't necessarily have at docker build time.)
My generic advice here is:
If possible, install things in the "system" directories within the image and avoid needing custom environment variable settings. (Don't use a version manager like nvm or rvm; don't use a Python virtual environment.)
If you do have to set environment variables, ENV is the way to do it.
If you really can't do either of the above, you can set environment variables in an ENTRYPOINT script before launching the main process (a sketch follows below); but if it's important to you that the variables show up in docker inspect or in docker exec shells, they won't be set there.
(Also remember that each RUN command launches a new container with a totally new shell environment. You can RUN . .profile; foo, but the environment variable settings won't carry through to the next RUN line.)
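As a minimal sketch of that ENTRYPOINT approach (the script name is hypothetical, and which file you source depends on what your installation scripts actually write):
#!/bin/sh
# docker-entrypoint.sh: load the environment the installation scripts
# set up, then hand control to whatever command the container runs.
. /root/.profile
exec "$@"
# In the Dockerfile
COPY docker-entrypoint.sh /usr/local/bin/
ENTRYPOINT ["/usr/local/bin/docker-entrypoint.sh"]
CMD ["some", "command", "here"]
The exec "$@" replaces the wrapper with the real command, so the main process receives signals directly instead of running as a child of the shell.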

Related

Install Keycloak adapter on WILDFLY that depends on ENVs in standalone.xml

I am trying to install the Keycloak Adapter to my WILDFLY Application Server that runs as a Docker container. I am using the image jboss/wildfly:17.0.0.Final as the base image. I am having big trouble building my own image.
My Dockerfile:
FROM jboss/wildfly:17.0.0.Final
ENV $WILDFLY_HOME /opt/jboss/wildfly
COPY keycloak-adapter.zip $WILDFLY_HOME
RUN unzip $WILDFLY_HOME/keycloak-adapter.zip -d $WILDFLY_HOME
# My standalone.xml that contains ENVs
COPY standalone.xml $WILDFLY_HOME/standalone/configuration/
# Here it crashes!
RUN $WILDFLY_HOME/bin/jboss-cli.sh --file=$WILDFLY_HOME/bin/adapter-elytron-install-offline.cli
The official documentation says:
Unzip the adapter zip file in $WILDFLY_HOME (/opt/jboss/wildfly) - I've done this, works.
In order to install the adapter (when the server is offline) you need to execute ./bin/jboss-cli.sh --file=bin/adapter-elytron-install-offline.cli, which basically starts the server (needed because you can't modify the configuration otherwise) and modifies the standalone.xml.
Here is the problem. My standalone.xml is parametrized with environment variables that are only set during runtime as it runs in multiple different environments. When the ENVs are not set, the server crashes and so does the command above.
The error during docker build at the last step:
Cannot start embedded server WFLYEMB0021: Cannot start embedded process: JBTHR00005: Operation failed WFLYSRV0056: Server boot has failed in an unrecoverable manner.
The cause
Despite the imprecise error message, I have clearly identified the unset ENVs as the cause: I ran the container with bash, set the required ENVs to some random values, and executed the jboss-cli command again - and it worked.
I know the docs say it's also possible to configure the adapter while the server is running, but that is not an option for me; I need this configured at the docker build stage.
So the problem here is that they provide an offline installation that fails if the standalone.xml depends on environment variables, which are usually not set during docker build. Unfortunately, I could not find a way to tell the JBoss CLI to ignore unset environment variables.
Do you know any workaround?

Separating Docker files and application source files to optimize production environment

I have a bunch of (Ruby) scripts stored on a server. Up until now, my team has used them by opening an accessor app that launches a list of the script names, and they select the script they want to run in that instance on the files in their working folder. The scripts are run directly from the server, so updates made to the script files are automatically reflected when a user runs the script.
The scripts require a fair amount of specific dependencies, so I'm trying to move to a Docker-based workflow to eliminate the problems we encounter with incongruent computer environments. I've been able to successfully build an image with our script library and run an instance of it on my computer.
However, all of the documentation and tutorials include the application source files when building an image, so that all the files are copied over by the Dockerfile. From my understanding, this means that any time the code in the application files needs to be updated, all the users will need to rebuild the image before trying to run anything. I would very rarely ever need to make changes to the environment settings/dependencies, but the app code is changed relatively frequently, so it seems like having every user rebuild an image every single time a line of app code is changed would actually slow down everyone's workflow considerably.
My question is this: Is it not possible to have Docker simply create the environment that a user must have to run the applications, but have the applications themselves still run directly off the server where they were originally stored? And does a new container need to be created every single time a user wants to run any one of the scripts? (The users are not tech-savvy.)
Generally you'd do this by using a Docker image instead of the checked-out tree of scripts. You can use a Docker registry to store a built copy of the image somewhere on the network; Docker Hub works for this, most large public-cloud providers have some version of this (AWS ECR, Google GCR, Azure ACR, ...), or you can run your own. The workflow for using this would generally look like
# Get any updates to the "latest" version of the image
# (can be run infrequently)
docker pull ourorg/scripts
# Actually run the script, injecting config files and credentials
docker run --rm \
-v $PWD/config:/config \
-v $HOME/.ssh:/config/.ssh \
ourorg/scripts \
some_script.rb
# Nothing in this example actually requires a local copy of the scripts
I'm envisioning a directory that has kind of a mix of scripts and support files and not a lot of organization to it. Still, you could write a simple Dockerfile that looks like
FROM ruby:2.7
WORKDIR /opt/scripts
# As of Bundler 2.1, there is no compatibility between Bundler
# versions; this must match exactly what is in Gemfile.lock
RUN gem install bundler -v 2.1.4
# Copy the scripts in and do basic installation
COPY Gemfile Gemfile.lock ./
RUN bundle install
COPY . .
ENV PATH /opt/scripts:$PATH
# Prefix all commands with...
ENTRYPOINT ["bundle", "exec"]
# The default command to run is...
CMD ["ls"]
On the back end you'd need a continuous integration service (Jenkins is popular, if a little unwieldy; there is a large selection of cloud-hosted ones) that can rebuild the Docker image whenever there's a commit to the source repository. You can generally rig this up so that it happens automatically whenever anybody pushes anything.
This process makes more sense if most people are just using the set of scripts and only a few of them are developing them. It's also a little bit difficult to discover what the scripts are (though you might be able to docker run --rm ourorg/scripts ls).
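The heart of that CI job is just a build-and-push pair; a minimal sketch, assuming the ourorg/scripts image name from above and whichever registry you settle on:
# Run by the CI service on every push to the scripts repository
docker build -t ourorg/scripts:latest .
docker push ourorg/scripts:latest
Users then pick up the change with the docker pull shown earlier, without rebuilding anything locally.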
Is it not possible to have Docker simply create the environment that a user must have to run the applications, but have the applications themselves still run directly off the server where they were originally stored?
This always strikes me as an ineffective use of Docker. You have all of the fiddly steps of your current workflow that require everyone to run a git pull or equivalent routinely, but you also have to inject the host source tree into the container. If there are OS incompatibilities in, for example, native gems in the vendor tree, you have to work around that.
# You still need to do this periodically
git pull
# And you also need to
sudo docker run \
--rm \
-v $PWD:/app \
-v $HOME/config:/config \
-v $HOME/.ssh:/config/.ssh \
-w /app \
ruby:2.7 \
bundle exec ./some_script.rb
Some of these details (especially the config files and credentials) you'd have to deal with even if you did build an image; others you could improve by building one. Inside the image you need to correct the ownership and permissions on the SSH keys and replace the $PWD/vendor tree with something the container can run, without modifying the mounted host directories.
Is it not possible to have Docker simply create the environment that a user must have to run the applications, but have the applications themselves still run directly off the server where they were originally stored?
You can build an image with the whole environment already installed, then mount the directory with the scripts so the container can read them from the host. Something like:
docker run -it --rm -v /opt/myscripts:/myscripts myimage somescript.rb
Then your image Dockerfile would end with:
WORKDIR /myscripts
ENTRYPOINT ["/usr/bin/ruby"]
And does a new container need to be created every single time a user wants to run any one of the scripts?
Of course; a container is just an isolated process managed by Docker. You could make a wrapper script so the users wouldn't need to type the full docker run command (see the sketch below).
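A minimal wrapper sketch, reusing the /opt/myscripts mount and the placeholder image name myimage from above (the script name run-script is made up; put it somewhere on the users' PATH):
#!/bin/sh
# run-script: run one of the shared Ruby scripts in a throwaway
# container, passing any arguments through to the script.
exec docker run -it --rm -v /opt/myscripts:/myscripts myimage "$@"
Users would then type run-script somescript.rb instead of the full docker run command; --rm cleans up each container when the script exits.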

Automatically Configure Config inside Docker Container

While setting up and configuring some Docker containers, I asked myself how I could automatically edit some config files inside the container after the containerized service has finished installing (since the config files are only created during installation).
I have tried doing that with a shell script added as the entrypoint in the Dockerfile. However, as I said, the config file does not exist right at the beginning, and hence the sed commands in the script fail.
Mounting a config file with - ./myConfig.conf:/xy/myConfig.conf is also not an option, because the config contains some installation-dependent options.
The most reasonable solution I have found so far is to manually run a script that edits the config after the installation has finished, with docker exec -i mycontainer sh < editconfig.sh
EDIT
My question is formulated in general terms. However, it arose while working with Nextcloud in a docker-compose setup similar to the official example. That container contains a config.php file, which is Nextcloud's general config file and is generated during installation. Certain properties of that file have to be changed (only a very limited number of them can be set through environment variables). Since I am conducting some tests with this container, I have to repeatedly reinstall it and thus re-edit the config file.
Maybe you can try another approach and have your config file/application pick up its settings from environment variables. That would be consistent with the 12-factor app methodology (see here).
As I understand your case, you need to generate the config from some template when the container starts.
I see a number of options to do it:
Use a script that generates the config from a template plus command-line arguments or environment variables (Jinja2 and Python, for example, or Mustache and Node.js); see the sketch after this list. In this case your entrypoint renders the template and then starts the application. To change the config, you have to restart the service (container).
Run a service that stores the configuration and renders your config at run time. Personally, I like consul-template; we actively use it in our environment and have had no problems with it so far. In this case the config is more dynamic and can be changed on the fly. Your container will have two processes: the application and the consul-template daemon. Obviously, you will also need to run and maintain Consul. To reload the config, restarting the application process is enough.
Run a custom script to create the config. :)
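As a minimal sketch of the first option, using envsubst rather than Jinja2 (this assumes the gettext package is installed in the image, and the template and target paths are placeholders for your own):
#!/bin/sh
# entrypoint.sh: render the config from environment variables,
# then hand off to the container's main command.
envsubst < /templates/myConfig.conf.template > /xy/myConfig.conf
exec "$@"
The same pattern works with any template engine; the only requirement is that the rendering step runs before the service starts.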

Apache PredictionIO - Docker run failed

I have been trying this tutorial: http://predictionio.apache.org/install/install-docker/. I have successfully built the Docker image; however, when I try docker run I get the Can't open /etc/predictionio/pio-env.sh error.
docker build -t predictionio/pio pio
docker run -ti predictionio/pio
PS: If I comment out the last line, CMD ["sh", "/usr/bin/pio_run"], I can build and run the Docker image successfully. I can also open the file from a bash shell inside the container.
I think you need to grant execute permission on this file. Add the following line at the end of your Dockerfile:
RUN chmod +x pio_run.sh
also, you might need to change CMD to ENTRYPOINT like following:
ENTRYPOINT ["sh","/usr/bin/pio_run.sh"]
Your output states you are running Windows. Did you use the default command prompt or the Docker terminal? I had the same error messages in the past on Windows, but mysteriously they disappeared after I tried the tutorial again. I am not sure what I did differently, except that I might have used the Docker terminal instead of the default command prompt...
Could you also try using docker-compose instead of plain docker commands as described in the tutorial?
Ensure your storage (Postgres, MySQL or ElasticSearch) is running before starting PIO as well.
Just resolved it on my machine.
When you cloned the repository on Windows, git converted the end-of-line symbols from Unix-style (\n) to Windows-style (\r\n).
You need to open the file C:\wherever-you-cloned-pio-repository\predictionio\docker\pio\pio_run and change the line endings back (e.g. using Visual Studio Code or Notepad++). Then rebuild the image and it should work.
Also, for the future, you may want to disable the automatic conversion: Disable git EOL Conversions
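A sketch of that setting (the exact policy is up to your team; this just stops git from rewriting LF line endings on checkout):
# Per clone (add --global to apply it to all repositories)
git config core.autocrlf input
# Or pin specific files via a .gitattributes entry in the repository, e.g.
# docker/pio/pio_run text eol=lf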

Dockerd with Chocolatey

I am using Chocolatey to install Docker.
When I originally run the following command:
choco install docker
and try to run the "docker --version" command, everything goes as expected.
Docker version 17.10.0-ce, build f4ffd25
When I try to run the "dockerd" command, it shows as not being part of my path.
'dockerd' is not recognized as an internal or external command,
Looking at the PATH variable, and navigating to where Chocolatey stores the executables, dockerd.exe is not present while docker.exe is. Am I missing something in instructing Chocolatey to add dockerd?
The reason I need the dockerd executable is so that I can limit the number of concurrent downloads, as shown in the Docker documentation.
This is a decision that the package maintainer(s) for Docker have made. If you have a look here:
https://chocolatey.org/packages/docker#files
You will see that there is a dockerd.exe.ignore file. This file instructs Chocolatey explicitly not to create what is referred to as a shim file, which is what would make dockerd work from the command line in the same way docker does.
My best suggestion would be to reach out to the maintainers of that package to ask them why this was done, and to perhaps get it changed. You can do this by clicking on the Contact Maintainers link on this page:
https://chocolatey.org/packages/docker
As a workaround, you could add the following path to your Windows PATH environment variable:
C:\ProgramData\chocolatey\lib\docker\tools\docker
Which would allow it to work.
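Once dockerd is reachable on your PATH, the concurrent-download limit mentioned in the question can be set when starting the daemon; a sketch (the value 3 is just an example, and the same key can also go in the daemon.json configuration file):
dockerd --max-concurrent-downloads 3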
