The docker container `heroku run` command with arguments returns "not found"

I'm running a Docker container on Heroku, but I can't seem to understand how it works.
Locally I'm able to run `docker run imageName ls -al`, but on Heroku, `heroku run "ls -al"` returns `./entrypoint.sh: line 34: exec: ls -al: not found`. When I run `heroku run ls` without arguments, it works as expected. (As another experiment, I ran `heroku run bash` and then `./entrypoint.sh ls -al` inside the dyno, which also works.)
What's happening here?
Updates from the comments:
Damien MATHIEU: the image I'm trying to run is https://github.com/jshimko/meteor-launchpad and my Dockerfile is:
FROM jshimko/meteor-launchpad:latest
CMD ["node", "main.js"]

Edit-2 - 28-Oct-2017
Latest update from Heroku
We've triaged this, and we're definitely not implementing Docker-compatible behaviour here. Thanks for catching this - we'll get it fixed.
Original answer
The error message itself makes the problem clear:
./entrypoint.sh: line 34: exec: ls -al: not found
You are passing ls -al as a single string argument. Separate the command from Heroku's own options with --:
heroku run -- ls -al
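For comparison, the same failure is reproducible locally when the command is passed as one quoted string (a sketch; imageName stands in for your image):
# One argument: the entrypoint's exec looks for a binary literally named "ls -al"
docker run imageName "ls -al"
# Three arguments: works as expected
docker run imageName ls -al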
Edit-1
So I created a simple Dockerfile to test the issue.
FROM alpine
COPY entrypoint.sh /
ENTRYPOINT ["/entrypoint.sh"]
CMD ["tail", "-f", "/dev/null"]
And the entrypoint.sh
#!/bin/sh
echo "You passed $# arguments"
for var in "$@"
do
echo "$var"
done
exec "$#"
When I build and run the container locally, I get this output:
$ docker run -it 5e866a76fd25
You passed 3 arguments
tail
-f
/dev/null
When I push the app to Heroku, I get the following output in the logs:
2017-10-21T19:11:11.873567+00:00 app[api]: Deployed web (xxxxx) by user xxx@yyy.com
2017-10-21T19:11:14.235819+00:00 heroku[web.1]: Starting process with command `tail -f /dev/null`
2017-10-21T19:11:16.593724+00:00 heroku[web.1]: Process exited with status 127
2017-10-21T19:11:16.447960+00:00 app[web.1]: You passed 1 arguments
2017-10-21T19:11:16.447976+00:00 app[web.1]: tail -f /dev/null
This is wrong: the CMD is being passed, quoted, as a single argument instead of as 3 separate arguments. I have opened a ticket about this with the Heroku team; hopefully they will reply before Tuesday.
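For what it's worth, Heroku's behaviour can be emulated locally by quoting the whole command as one string (a sketch; the exact error text may vary):
docker run -it 5e866a76fd25 "tail -f /dev/null"
# You passed 1 arguments
# tail -f /dev/null
# ...after which exec fails with "not found" and the container exits with status 127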

I'm also running into this issue. As a temporary workaround, I've moved my original CMD into a separate script file, and I now call that script from CMD.
Here's my original Dockerfile:
...
CMD ["bundle", "exec", "puma", "-C", "config/puma.rb"]
Here's my new Dockerfile:
...
RUN chmod +x PATH_TO_START_SCRIPT/start.sh
CMD ["./start.sh"]
And my start.sh script (starting a Rails app):
#!/bin/bash
set -e
echo "Starting Puma server..."
bundle exec puma -C config/puma.rb

Related

Docker entrypoint script not sourcing file

I have an entrypoint script with Docker which gets executed. However, it just doesn't seem to source a file full of env values.
Here's the relevant section from the Dockerfile:
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
CMD ["-production"]
I have tried 2 versions of the entrypoint script. Neither of them is working.
VERSION 1
#!/bin/bash
cat >> /etc/bash.bashrc <<EOF
if [[ -f "/usr/local/etc/${SERVICE_NAME}/${SERVICE_NAME}.env" ]]
then
echo "${SERVICE_NAME}.env found ..."
set -a
source "/usr/local/etc/${SERVICE_NAME}/${SERVICE_NAME}.env"
set +a
fi
EOF
echo "INFO: Starting ${SERVICE_NAME} application, environment:"
exec -a $SERVICE_NAME node .
VERSION 2
ENV_FILE=/usr/local/etc/${SERVICE_NAME}/${SERVICE_NAME}.env
if [[ -f "$ENV_FILE" ]]; then
echo "INFO: Loading environment variables from file: ${ENV_FILE}"
set -a
source $ENV_FILE
set +a
fi
echo "INFO: Starting ${SERVICE_NAME} application..."
exec -a $SERVICE_NAME node .
Version 2 above logs that it has found the file; however, the source command simply isn't loading the contents of the file into the environment. I check whether the contents have been loaded by running the env command.
I've been trying things for 3 days now with no progress. Can someone please help? Please note I am new to Docker, which is making things quite difficult.
I think your second version is almost there.
Normally Docker doesn't read or use shell dotfiles at all. This isn't anything particular to Docker, just that you're not running an "interactive" or "login" shell at any point in the sequence. In your first form you write out a .bashrc file but then exec node, and nothing there ever re-reads the dotfile.
You mention in the question that you use the env command to check the environment. If this is via docker exec, that launches a new process inside the container, but it's not a child of the entrypoint script, so any setup that happens there won't be visible to docker exec. This usually isn't a problem.
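One way to observe the difference (a sketch; mycontainer is a hypothetical container name, and reading /proc/1/environ assumes you have permission to do so):
# env runs as a fresh process, so vars sourced by the entrypoint are absent:
docker exec mycontainer env
# PID 1 kept its environment across exec, so sourced vars do show up here (NUL-separated):
docker exec mycontainer cat /proc/1/environ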
I can suggest a couple of cleanups that might make it a little easier to see the effects of this. The biggest is to split out the node invocation from the entrypoint script. If you have both an ENTRYPOINT and a CMD then Docker passes the CMD as arguments to the ENTRYPOINT; if you change the entrypoint script to end with exec "$@" then it will run whatever it got passed.
#!/bin/sh
# (trying to avoid bash-specific constructs)
# Read the environment file
ENV_FILE="/usr/local/etc/${SERVICE_NAME}/${SERVICE_NAME}.env"
if [ -f "$ENV_FILE" ]; then
. "$ENV_FILE"
fi
# Run the main container command
exec "$#"
And then in the Dockerfile, make the node invocation the main command (note that Dockerfile comments must start at the beginning of a line; a trailing # is not a comment):
# ENTRYPOINT must use JSON-array (exec) syntax here
ENTRYPOINT ["./entrypoint.sh"]
# CMD could use shell-command syntax instead
CMD ["node", "."]
The important thing with this is that it's easy to override the command but leave the entrypoint intact. So if you run
docker run --rm your-image env
that will launch a temporary container, passing env as the command instead of node .
It will go through the steps in the entrypoint script, including setting up the environment, but then print out the environment and exit immediately. That lets you observe the changes.
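The output would look something like this (a sketch; SOME_VAR stands in for whatever your .env file actually defines):
$ docker run --rm your-image env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
SOME_VAR=value-from-env-file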

Docker entrypoint can't find file

I have a very simple Dockerfile:
FROM openjdk:10
ENV JENAVERSION=3.7.0
RUN mkdir /fuseki
RUN wget http://apache.claz.org/jena/binaries/apache-jena-fuseki-$JENAVERSION.tar.gz -P /tmp \
&& tar -zxvf /tmp/apache-jena-fuseki-$JENAVERSION.tar.gz -C /tmp \
&& mv -v /tmp/apache-jena-fuseki-$JENAVERSION/* /fuseki
EXPOSE 3030
ENTRYPOINT ["/bin/bash", "/fuseki/fuseki-server"]
I've tried different variations on CMD and ENTRYPOINT, but nothing allows "fuseki-server" to execute; I always get a "No such file or directory" error. If I create an empty container from openjdk:10 and execute each command manually, it works fine. What's going on?
I think the issue is the line endings - the entrypoint script needs to have LF line endings.
I get the same error when my entrypoint has CRLF line endings.
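A quick way to check and fix this (a sketch; dos2unix may not be installed in every image):
file entrypoint.sh               # reports "with CRLF line terminators" if affected
sed -i 's/\r$//' entrypoint.sh   # strip the carriage returns in place
# or: dos2unix entrypoint.sh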
If I build and run your Dockerfile, I get a different error from what you've described. I see:
Can't find jarfile to run
If you look at the fuseki-server shell script, it's trying to find the jar file relative either to your current directory or to the $FUSEKI_HOME environment variable:
export FUSEKI_HOME="${FUSEKI_HOME:-$PWD}"
if [ ! -e "$FUSEKI_HOME" ]
then
echo "$FUSEKI_HOME does not exist" 1>&2
exit 1
fi
JAR1="$FUSEKI_HOME/fuseki-server.jar"
JAR2="$FUSEKI_HOME/jena-fuseki-server-*.jar"
JAR=""
So if you set the FUSEKI_HOME environment variable in your Dockerfile:
ENV FUSEKI_HOME=/fuseki
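For completeness, here is the Dockerfile from the question with that one line added (a sketch):
FROM openjdk:10
ENV JENAVERSION=3.7.0
ENV FUSEKI_HOME=/fuseki
RUN mkdir /fuseki
RUN wget http://apache.claz.org/jena/binaries/apache-jena-fuseki-$JENAVERSION.tar.gz -P /tmp \
&& tar -zxvf /tmp/apache-jena-fuseki-$JENAVERSION.tar.gz -C /tmp \
&& mv -v /tmp/apache-jena-fuseki-$JENAVERSION/* /fuseki
EXPOSE 3030
ENTRYPOINT ["/bin/bash", "/fuseki/fuseki-server"]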
Then the container starts up without errors:
[2018-06-04 14:02:17] Server INFO Apache Jena Fuseki 3.7.0
[2018-06-04 14:02:17] Config INFO FUSEKI_HOME=/fuseki
[2018-06-04 14:02:17] Config INFO FUSEKI_BASE=/run
[2018-06-04 14:02:17] Config INFO Shiro file: file:///run/shiro.ini
[2018-06-04 14:02:18] Server INFO Started 2018/06/04 14:02:18 UTC on port 3030
Wow... After going through @larsk's suggestion, it occurred to me to change the entrypoint to
ENTRYPOINT ["tail", "-f", "/dev/null"]
and go into the container to see what was actually there. It turns out I was accidentally overwriting the /fuseki folder with a volume declaration in the compose file I was using. (facepalm...)
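For anyone hitting the same thing, a compose fragment like this (hypothetical host path) silently mounts an empty host directory over everything the image put in /fuseki:
services:
  fuseki:
    volumes:
      - ./fuseki-data:/fuseki   # shadows the image's /fuseki contents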

Why does redirecting container stdout to a file not work?

I set up a simple environment for testing.
Dockerfile
FROM ubuntu:16.04
COPY test.sh /
ENTRYPOINT /test.sh
test.sh
#!/bin/bash
while true; do
echo "test..."
sleep 5
done
docker-compose.yml
version: '3.4'
services:
test:
image: asleea/simple_test
entrypoint: ["/test.sh", ">", "test.log"]
# command: [">", "/test.log"]
container_name: simple_test
Run the test container
$ docker-compose up
WARNING: The Docker Engine you're using is running in swarm mode.
Compose does not use swarm mode to deploy services to multiple nodes in a swarm. All containers will be scheduled on the current node.
To deploy your application across the swarm, use `docker stack deploy`.
Starting simple_test ...
Starting simple_test ... done
Attaching to simple_test
simple_test | test...
simple_test | test...
It is still printing to stdout there.
Check test.log inside the container
$ docker exec -it simple_test bash
$ cd /
$ ls
# No file named `test.log`
The test.log file from the redirection doesn't exist.
Docker seems to just ignore the redirection. Is this normal, and why? Or did I do something wrong?
Edit
Thank you @Sebastian for your answer; redirecting stdout to a file works now.
However, one more question.
The docs you refer to also say the following:
If you use the shell form of the CMD, then the <command> will execute in /bin/sh -c:
As I understand that, command: /test.sh > /test.log should be equivalent to command: ["sh", "-c", "/test.sh > /test.log"].
However, when I tried command: /test.sh > /test.log, it didn't redirect either.
Why does command: ["sh", "-c", "/test.sh > /test.log"] work but command: /test.sh > /test.log not?
Do I misunderstand?
You need to make sure your command is executed in a shell. Try to use:
CMD [ "sh", "-c", "/test.sh", ">", "test.log" ]
You specified the command/entrypoint as a JSON array, which is called exec form:
The exec form does not invoke a command shell.
This means that normal shell processing does not happen.
Docker docs
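Concretely, with the exec-form entrypoint override from the question's compose file, the shell never runs, so > is just another argument (a sketch):
# entrypoint: ["/test.sh", ">", "test.log"] starts /test.sh directly,
# with $1 = ">" and $2 = "test.log"; no shell is involved,
# so no redirection ever happens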
I think you are doing something wrong with the syntax. The command parameter works with Compose and does the same thing as CMD. Try using command: sh -c '/test.sh > /tmp/test.log' in your compose file; it works fine.
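Put together, a working compose file might look like this (a sketch; the entrypoint is overridden directly here, because the image's shell-form ENTRYPOINT /test.sh would otherwise ignore command):
version: '3.4'
services:
  test:
    image: asleea/simple_test
    container_name: simple_test
    entrypoint: ["sh", "-c", "/test.sh > /tmp/test.log"]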

How to deal with state "Exit 0" in Docker

I have built a Docker image and afterwards run a container from it using Docker Compose. The following command does the job for me:
docker-compose up -d
I have restarted the PC and now I want to start the previous container that I've created before. So I have tried the following command:
$ docker-compose start
Starting php-apache ... done
Apparently it works, but it doesn't, as shown by the output of the following command:
$ docker-compose ps
Name                        Command                          State    Ports
----------------------------------------------------------------------------
php55devwork_php-apache_1   /bin/sh -c bash -C '/usr/l ...   Exit 0
For sure something is wrong and I am trying to find out what.
How do I find why the command is failing?
Is there any place where I could see a log file or something that help me to identify and fix the error?
Here is the repository if you want to give it a try.
Update
If I remove the container: docker rm <container-id> and recreate it by running docker-compose up -d --build it works again.
Update #1
I am not able to see any such weird characters.
This is what helped me to resolve this issue:
Under one of your services in the docker-compose YAML file, add the following:
tty: true, so it'll look like:
version: '3'
services:
web:
tty: true
Hopefully this helps someone; thumbs up if it helps you :)
I took a look at your Docker GitHub repo and at setup_php_settings.
On line 27 there is source /etc/apache2/envvars && exec /usr/sbin/apache2 -DFOREGROUND,
and that runs apache2 in the foreground, so it shouldn't exit with status code 0.
But it seems to me that your setup_php_settings contains some weird characters in place of the newlines (when I run your image with compose; the original is the one on the right side of the screenshot).
I changed them to proper newlines and it worked for me. Let us know if it helped.
If you want to debug your Docker container, you can run it without its entrypoint:
docker run -it --entrypoint bash yourImage
-- AFTER some investigation:
There were still some errors when I restarted the Docker container - like in your case, a stopped container started again after a reboot. There were problems: the symbolic links already existed, and Apache gets grumpy about a pre-existing PID file, so we need to do something like the official PHP Docker image does.
This is the full setup_php_settings that worked for me after a container restart.
#!/bin/bash -x
set -e
PHP_ERROR_REPORTING=${PHP_ERROR_REPORTING:-"E_ALL & ~E_DEPRECATED & ~E_NOTICE"}
sed -ri 's/^display_errors\s*=\s*Off/display_errors = On/g' /etc/php5/apache2/php.ini
sed -ri 's/^display_errors\s*=\s*Off/display_errors = On/g' /etc/php5/cli/php.ini
sed -ri "s/^error_reporting\s*=.*$//g" /etc/php5/apache2/php.ini
sed -ri "s/^error_reporting\s*=.*$//g" /etc/php5/cli/php.ini
echo "error_reporting = $PHP_ERROR_REPORTING" >> /etc/php5/apache2/php.ini
echo "error_reporting = $PHP_ERROR_REPORTING" >> /etc/php5/cli/php.ini
mkdir -p /data/tmp/php/uploads
mkdir -p /data/tmp/php/sessions
mkdir -p /data/tmp/php/xdebug
chown -R www-data:www-data /data/tmp/php*
ln -sf /etc/php5/mods-available/zz-php.ini /etc/php5/apache2/conf.d/zz-php.ini
ln -sf /etc/php5/mods-available/zz-php-directories.ini /etc/php5/apache2/conf.d/zz-php-directories.ini
# Add symbolic link to get Zend out of the current install dir
ln -sf /usr/share/php/libzend-framework-php/Zend/ /usr/share/php/Zend
a2enmod rewrite
php5enmod mcrypt
# Apache gets grumpy about PID files pre-existing
: "${APACHE_PID_FILE:=${APACHE_RUN_DIR:=/var/run/apache2}/apache2.pid}"
rm -f "$APACHE_PID_FILE"
source /etc/apache2/envvars && exec /usr/sbin/apache2 -DFOREGROUND "$@"
You can check the logs with docker-compose logs.
Looking through your repo, you have
ENTRYPOINT bash -C '/usr/local/bin/setup_php_settings';'bash'
Without an interactive session, that final bash will exit immediately (with exit code 0) after reading end-of-file on stdin.
Normally, getting an exit 0 should be a reason to celebrate, as it indicates that your command ended successfully (http://www.tldp.org/LDP/abs/html/exit-status.html).
Having had a look at your Dockerfile, it looks like you're just invoking bash in your entrypoint, which will certainly exit (as it is non-blocking). In order to serve something, you should instead run a blocking process that keeps the container up, like the official Docker files for PHP do (see the CMD ["php", "-a"] at https://github.com/docker-library/php/blob/1c56325a69718a3e3cf76179e75d070b7e23da62/5.6/Dockerfile).
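A minimal fix along those lines (a sketch; it relies on the setup_php_settings script above ending with exec apache2 -DFOREGROUND, which then becomes the blocking foreground process):
# exec-form entrypoint, with no trailing bash to swallow the foreground process
ENTRYPOINT ["/usr/local/bin/setup_php_settings"]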

How to get /etc/profile to run automatically in Alpine / Docker

How can I get /etc/profile to run automatically when starting an Alpine Docker container interactively? I have added some aliases to an aliases.sh file and placed it in /etc/profile.d, but when I start the container using docker run -it [my_container] sh, my aliases aren't active. I have to manually type . /etc/profile from the command line each time.
Is there some other configuration necessary to get /etc/profile to run at login? I've also had problems with using a ~/.profile file. Any insight is appreciated!
EDIT:
Based on VonC's answer, I pulled and ran his example ruby container. Here is what I got:
$ docker run --rm --name ruby -it codeclimate/alpine-ruby:b42
/ # more /etc/profile.d/rubygems.sh
export PATH=$PATH:/usr/lib/ruby/gems/2.0.0/bin
/ # env
no_proxy=*.local, 169.254/16
HOSTNAME=6c7e93ebc5a1
SHLVL=1
HOME=/root
TERM=xterm
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
/ # exit
Although the /etc/profile.d/rubygems.sh file exists, it is not being run when I log in, and my PATH environment variable is not being updated. Am I using the wrong docker run command? Is something else missing? Has anyone gotten ~/.profile or /etc/profile.d/ files to work with Alpine on Docker? Thanks!
The default shell in Alpine Linux is ash.
Ash will only read the /etc/profile and ~/.profile files if it is started as a login shell (sh -l).
To force Ash to source /etc/profile, or any other script you want, upon its invocation as a non-login shell, you need to set up an environment variable called ENV before launching Ash,
e.g. in your Dockerfile
FROM alpine:3.5
ENV ENV="/root/.ashrc"
RUN echo "echo 'Hello, world!'" > "$ENV"
When you build that you get:
deployer@ubuntu-1604-amd64:~/blah$ docker build --tag test .
Sending build context to Docker daemon 2.048kB
Step 1/3 : FROM alpine:3.5
3.5: Pulling from library/alpine
627beaf3eaaf: Pull complete
Digest: sha256:58e1a1bb75db1b5a24a462dd5e2915277ea06438c3f105138f97eb53149673c4
Status: Downloaded newer image for alpine:3.5
---> 4a415e366388
Step 2/3 : ENV ENV "/root/.ashrc"
---> Running in a9b6ff7303c2
---> 8d4af0b7839d
Removing intermediate container a9b6ff7303c2
Step 3/3 : RUN echo "echo 'Hello, world!'" > "$ENV"
---> Running in 57c2fd3353f3
---> 2cee6e034546
Removing intermediate container 57c2fd3353f3
Successfully built 2cee6e034546
Finally, when you run the newly generated container, you get:
deployer@ubuntu-1604-amd64:~/blah$ docker run -ti test /bin/sh
Hello, world!
/ # exit
Notice the Ash shell didn't run as a login shell.
So to answer your query, replace
ENV ENV="/root/.ashrc"
with:
ENV ENV="/etc/profile"
and Alpine Linux's Ash shell will automatically source the /etc/profile script each time the shell is launched.
Gotcha: /etc/profile is normally meant to be sourced only once! So I would advise that you don't source it, and source a /root/.somercfile instead.
Source: https://stackoverflow.com/a/40538356
You can still try adding this to your Dockerfile:
RUN echo '\
. /etc/profile ; \
' >> /root/.profile
(assuming the current user is root. If not, replace /root with the full home path)
That being said, those /etc/profile.d/xx.sh scripts should run.
See codeclimate/docker-alpine-ruby as an example:
COPY files /
with files/etc including a files/etc/profile.d/rubygems.sh that runs just fine.
In the OP's project Dockerfile, there is a
COPY aliases.sh /etc/profile.d/
But the default shell is not a login shell (sh -l), which means profile files (or those in /etc/profile.d) are not sourced.
Adding sh -l would work:
docker#default:~$ docker run --rm --name ruby -it codeclimate/alpine-ruby:b42 sh -l
87a58e26b744:/# echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/lib/ruby/gems/2.0.0/bin
As mentioned by Jinesh before, the default shell in Alpine Linux is ash
localhost:~$ echo $SHELL
/bin/ash
localhost:~$
Therefore the simple solution is to add your aliases to .profile. In this case, I put all my aliases in ~/.ash_aliases:
localhost:~$ cat .profile
# ~/.profile
# Alias
if [ -f ~/.ash_aliases ]; then
. ~/.ash_aliases
fi
localhost:~$
And the .ash_aliases file:
localhost:~$ cat .ash_aliases
alias a=alias
alias c=clear
alias f=file
alias g=grep
alias l='ls -lh'
localhost:~$
And it works :)
I use this:
docker exec -it my_container /bin/ash '-l'
The -l flag passed to ash will make it behave as a login shell, thus reading ~/.profile
