I am running Docker on Windows 10. When I run the command below without the -v flag that mounts a host drive and path onto a container path, the container runs fine and I can connect to it. However, when I add the mount flag, the container exits immediately. This is the command; it completes without any error:
docker container run -v c:/container-fs:/usr/src/app --publish 8001:8080 --detach --name bboard-ubuntu bulletinboard:Ubuntu
When I run docker container ls --all, I see that the container named bboard-ubuntu exited almost immediately after it started.
When I try to exec into the container with docker exec -it bboard-ubuntu /bin/bash, I get the error below:
Error response from daemon: Container 26a2d3361dfc0c890xxxxxxxxxxxxxxx97be532ab6e8771652e5b is not running
When I remove the mount flags and run it like this below, there are no issues and I can exec into the container file system.
docker container run --publish 8001:8080 --detach --name bboard-ubuntu bulletinboard:Ubuntu
How do I trace and fix this issue caused by providing the mount flag?
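One way to start tracing it (a sketch using standard Docker CLI commands against the container created above) is to check the exit code and read the container's logs, both of which remain available after the container stops:
docker container ls --all --filter name=bboard-ubuntu                  # status column shows the exit code
docker container logs bboard-ubuntu                                    # stdout/stderr from the failed start
docker container inspect --format '{{.State.ExitCode}}' bboard-ubuntu  # exit code on its own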
Edit
This is the Dockerfile
FROM ubuntu:18.04
WORKDIR /usr/src/app
COPY package.json .
# replace shell with bash so we can source files
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
RUN apt-get update \
&& apt-get install -y curl \
&& apt-get -y autoclean
RUN apt-get install -y apt-utils
RUN apt-get -y install nano
# nvm environment variables
ENV NVM_DIR /usr/local/nvm
ENV NODE_VERSION 13.9.0
# install nvm
# https://github.com/creationix/nvm#install-script
RUN curl --silent -o- https://raw.githubusercontent.com/creationix/nvm/v0.31.2/install.sh | bash
# install node and npm
RUN source $NVM_DIR/nvm.sh \
&& nvm install $NODE_VERSION \
&& nvm alias default $NODE_VERSION \
&& nvm use default
# add node and npm to path so the commands are available
ENV NODE_PATH $NVM_DIR/v$NODE_VERSION/lib/node_modules
ENV PATH $NVM_DIR/versions/node/v$NODE_VERSION/bin:$PATH
# confirm installation
RUN node -v
RUN npm -v
RUN npm install
EXPOSE 8080
CMD [ "npm", "start" ]
COPY . .
Here is the error after removing --detach:
npm ERR! code ENOENT
npm ERR! syscall open
npm ERR! path /usr/src/app/package.json
npm ERR! errno -2
npm ERR! enoent ENOENT: no such file or directory, open '/usr/src/app/package.json'
npm ERR! enoent This is related to npm not being able to find a file.
npm ERR! enoent
npm ERR! A complete log of this run can be found in:
npm ERR!     /root/.npm/_logs/2020-02-26T19_02_33_143Z-debug.log
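The ENOENT points at the bind mount: mounting c:/container-fs over /usr/src/app hides whatever the image copied there, so package.json has to be present in the host folder itself. A quick check (a sketch assuming the same image and host path as above):
# list what the container actually sees in /usr/src/app with the mount in place
docker run --rm -v c:/container-fs:/usr/src/app bulletinboard:Ubuntu ls -la /usr/src/app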
I am running these commands on a Windows host. Where do I find the /root/.npm/ log folder?
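That folder is inside the container's filesystem, not on the Windows host. One way to pull the log out (a sketch; docker cp works on stopped containers too, and the file name is the one from the error above):
docker cp bboard-ubuntu:/root/.npm/_logs/2020-02-26T19_02_33_143Z-debug.log .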
Related
I'm trying to run, on my Ubuntu 20.04 machine, the cluster of Docker containers from this repository:
https://github.com/Capgemini-AIE/ethereum-docker
My Dockerfile:
FROM ethereum/client-go
RUN apk update && apk add bash
RUN apk add --update git bash nodejs npm perl
RUN cd /root &&\
git clone https://github.com/cubedro/eth-net-intelligence-api &&\
cd eth-net-intelligence-api &&\
npm install &&\
npm install -g pm2
ADD start.sh /root/start.sh
ADD app.json /root/eth-net-intelligence-api/app.json
RUN chmod +x /root/start.sh
ENTRYPOINT /root/start.sh
The commands:
sudo docker-compose build
sudo docker-compose up -d
complete correctly, but when I execute:
docker exec -it ethereum-docker-master_eth_1 geth attach ipc://root/.ethereum/devchain/geth.ipc
I get this error:
ERROR: Container 517e11aef83f0da580fdb91b6efd19adc8b1f489d6a917b43cc2d22881b865c6 is restarting, wait until the container is running
The reason is that executing:
docker logs ethereum-docker-master_eth_1
results in:
/root/start.sh: line 5: /usr/bin/pm2: No such file or directory
/root/start.sh: line 5: /usr/bin/pm2: No such file or directory
/root/start.sh: line 5: /usr/bin/pm2: No such file or directory
Why do I have this problem? In the Dockerfile I have the command:
RUN npm install -g pm2
How can I solve the problem?
When I build an image from this Dockerfile and then check the files in it, I find that pm2 is installed at /usr/local/bin/pm2.
So you need to change the call in your script to
/usr/local/bin/pm2
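If you would rather not edit start.sh, a symlink added in the Dockerfile keeps the hard-coded path working (a sketch, assuming pm2 really does land in /usr/local/bin as observed above):
# add after the `npm install -g pm2` step, so the /usr/bin/pm2 reference in start.sh resolves to the real binary
RUN ln -s /usr/local/bin/pm2 /usr/bin/pm2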
I have a Jenkinsfile that looks like this:
pipeline {
    agent {
        docker {
            image 'myartifactory/cloud-eng/sls-build:0.13'
            label 'docker'
            registryUrl 'https://myartifactory'
            registryCredentialsId 'artfifactory-cred-id'
        }
    }
    environment {
    }
    stages {
        stage('Test') {
            steps {
                sh "env | sort"
                sh "make setup-ci"
                sh "make test"
            }
        }
    }
}
When I run this, I see that Jenkins executed a command that looks like:
docker run -t -d -u 1318244366:1318464184 -w /jenkins_home/jenkins-rh7-a01/8b13f8c3/workspace/te_csoe-1624-switch-shared-https -v /jenkins_home/jenkins-rh7-a01/8b13f8c3/workspace/te_csoe-1624-switch-shared-https:/jenkins_home/jenkins-rh7-a01/8b13f8c3/workspace/te_csoe-1624-switch-shared-https:rw,z -v /jenkins_home/jenkins-rh7-a01/8b13f8c3/workspace/te_csoe-1624-switch-shared-https#tmp:/jenkins_home/jenkins-rh7-a01/8b13f8c3/workspace/te_csoe-1624-switch-shared-https#tmp:rw,z -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** myartifactory/cloud-eng/sls-build:0.13 cat
This project uses python, NPM, and the serverless framework (javascript).
If I run this as above, it fails with:
npm ERR! correctMkdir failed to make directory /.npm/_locks
2021-03-11 16:17:02 npm ERR! code EACCES
2021-03-11 16:17:02 npm ERR! syscall mkdir
2021-03-11 16:17:02 npm ERR! path /.npm
2021-03-11 16:17:02 npm ERR! errno -13
2021-03-11 16:17:02 npm ERR!
2021-03-11 16:17:02 npm ERR! Your cache folder contains root-owned files, due to a bug in
2021-03-11 16:17:02 npm ERR! previous versions of npm which has since been addressed.
2021-03-11 16:17:02 npm ERR!
2021-03-11 16:17:02 npm ERR! To permanently fix this problem, please run:
2021-03-11 16:17:02 npm ERR! sudo chown -R 1318244366:1318464184 "/.npm"
2021-03-11 16:17:02 make: *** [setup-ci] Error 243
I have tried many solutions with varying success. If I add args '-u root' to the docker section, it works, since root of course has permission to everything. However, security isn't going to like running the container as root.
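For reference, that change is just one extra line in the agent block of the Jenkinsfile above:
agent {
    docker {
        image 'myartifactory/cloud-eng/sls-build:0.13'
        label 'docker'
        registryUrl 'https://myartifactory'
        registryCredentialsId 'artfifactory-cred-id'
        args '-u root'   // works, but runs the build container as root
    }
}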
No matter what I do with overriding $HOME in environment or args, or with changing users, I always end up with permission issues, either with npm or with Python.
Here are other errors I've encountered with various hacks, such as args '-e HOME=/tmp -e NPM_CONFIG_PREFIX=/tmp/.npm':
../../../../../tmp/.local/share/virtualenvs/te_csoe-1624-switch-shared-https-y_ilovXz/lib/python3.8/site-packages/_pytest/cacheprovider.py:428
2021-03-11 14:45:14 /tmp/.local/share/virtualenvs/te_csoe-1624-switch-shared-https-y_ilovXz/lib/python3.8/site-packages/_pytest/cacheprovider.py:428: PytestCacheWarning: cache could not write path /jenkins_home/jenkins-rh7-a01/8b13f8c3/workspace/te_csoe-1624-switch-shared-https/.pytest_cache/v/cache/nodeids
2021-03-11 14:45:14 config.cache.set("cache/nodeids", sorted(self.cached_nodeids))
Error: EACCES: permission denied, unlink '/jenkins_home/jenkins-rh7-a01/8b13f8c3/workspace/te_csoe-1624-switch-shared-https/.serverless/cloudformation-template-update-stack.json'
2021-03-11 14:45:19 at Object.unlinkSync (fs.js:1136:3)
Since Jenkins mounts random directories to share and uses random user IDs, I am not sure how to modify the Dockerfile for the image to grant write permissions.
Does anyone know how to get the permissions correct?
EDIT: added Dockerfile
FROM amazonlinux:2
RUN yum install -y amazon-linux-extras
RUN yum install -y unzip
RUN yum groupinstall -y "Development Tools"
RUN yum install vim-enhanced -y
# install python/pipenv
ENV PYTHON_VERSION=3.9
RUN amazon-linux-extras install python${PYTHON_VERSION}
RUN /bin/pip-${PYTHON_VERSION} install pipenv
# install node/npm
RUN curl -sL https://rpm.nodesource.com/setup_12.x | bash -
RUN yum install -y nodejs
RUN mkdir /tmp/node-cache
RUN npm config set cache /tmp/node-cache --global
# install aws-cli2
RUN curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" && \
unzip awscliv2.zip && \
./aws/install && \
rm -rf awscliv2.zip
# install vault client
ENV VAULT_VERSION=1.5.4
RUN curl -sSLo /tmp/vault.zip https://releases.hashicorp.com/vault/$VAULT_VERSION/vault_${VAULT_VERSION}_linux_amd64.zip && \
unzip -d /bin /tmp/vault.zip && \
rm -rf /tmp/vault.zip && \
setcap cap_ipc_lock= /bin/vault
ADD ./aws-login.sh /usr/local/bin/aws-login.sh
ADD ./ghe-token.sh /usr/local/bin/ghe-token.sh
ENV PATH="/bin:${PATH}"
# indicates CI CONTAINER so processes can check if running in CI
ENV CI_CONTAINER=1
ENV LANG="en_US.UTF-8"
ENV TERM xterm
# avoid million NPM install messages
ENV npm_config_loglevel warn
ENTRYPOINT []
What was tripping me up: I had run this with -u root many times, I only have one agent (don't ask), and Jenkins caches the workspace directory. So the file permissions in that workspace had been changed by the Docker container running as root, and when I dropped -u root and the build started using the jenkins user, it no longer had rights to some of the files and directories.
The solution was to delete the workspace and make sure that every make call had an export HOME=${WORKSPACE} before it.
There might be a better way to set HOME, but this solves the problem.
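A minimal sketch of that fix inside the pipeline's sh steps (assuming the same Test stage as above; WORKSPACE is an environment variable Jenkins sets for every build):
# run as a single sh step so the exported HOME applies to the make calls
export HOME=${WORKSPACE}
make setup-ci
make test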
I am using the Jenkins image to run a Docker container. I have a modified version of the image, as below:
USER root
RUN apt-get update
RUN apt-get install -y sudo
RUN curl -sL https://deb.nodesource.com/setup_8.x | sudo -E bash -
RUN apt-get install -y nodejs
RUN npm -v
USER jenkins
When I run a container based on this image, everything goes fine. I can go into the container and run npm -v, and it works just fine. However, the build script on my Jenkins, which is simply:
echo 'starting build'
npm -v
fails with the error npm not found.
npm is not in the PATH of your jenkins user.
You could get a shell on your container to figure out the npm path:
docker exec -it <CONTAINER_NAME> bash
which npm
Then you could run it with its full path in the Jenkins script, symlink it, add it to $PATH, etc.
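For example, if which npm reports /usr/bin/npm (a hypothetical path; use whatever your container actually shows), the build script could call it directly:
echo 'starting build'
/usr/bin/npm -v   # full path reported by `which npm` inside the container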
I want to run the npm install command in a container.
But a simple docker exec container npm install is not the right thing for me.
I want to run this command in /home/client, but the working directory in the container is /home.
Is that possible?
I don't want to enter the container, and I don't want to change the container's working directory.
Edit 1
Dockerfile:
FROM ubuntu:16.04
COPY . /home
WORKDIR /home
RUN apt-get update && apt-get install -y \
python-pip \
postgresql \
rabbitmq-server \
libpq-dev \
python-dev \
npm \
mongodb
RUN pip install -r requirements.txt
Docker run command:
docker run \
-tid \
-p 8000:8000 \
-v $(PWD):/home \
--name container \
-e DB_NAME \
-e DB_USER \
-e DB_USER_PASSWORD \
-e DB_HOST \
-e DB_PORT \
container
Two commands to prove there is a directory /home/client:
docker exec container pwd
Gives: /home
docker exec container ls client
Gives:
node_modules
package.json
src
webpack.config.js
Those are the node modules from my host.
Edit 2
When I run:
docker exec container cd /home/client
It produces the following error:
rpc error: code = 2 desc = oci runtime error: exec failed: container_linux.go:247: starting container process caused "exec: \"cd\": executable file not found in $PATH"
That is possible with:
docker exec {container} sh -c "cd /home/client && npm install"
Thanks to Matt
Yeah, it's possible. You can do it in one of two ways.
Method 1
Do it in a single command like this:
$ docker exec container sh -c "cd /home/client && npm install"
Or like this (as an arg to npm install):
$ docker exec container npm install --prefix /home/client
Method 2
Use an interactive terminal:
$ docker exec -it container /bin/bash
# cd /home/client
# npm install
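On newer Docker versions (17.09+), docker exec also accepts a --workdir/-w flag, which avoids the shell wrapper entirely; a sketch using the same container and directory:
$ docker exec -w /home/client container npm install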
I am trying to build a Docker image using the following Dockerfile.
FROM ubuntu:latest
# Replace shell with bash so we can source files
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
# Update packages
RUN apt-get -y update && apt-get install -y \
curl \
build-essential \
libssl-dev \
git \
&& rm -rf /var/lib/apt/lists/*
ENV APP_NAME testapp
ENV NODE_VERSION 5.10
ENV SERVE_PORT 8080
ENV LIVE_RELOAD_PORT 8888
# Install nvm, node, and angular
RUN (curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.31.1/install.sh | bash -) \
&& source /root/.nvm/nvm.sh \
&& nvm install $NODE_VERSION \
&& npm install -g angular-cli \
&& ng new $APP_NAME \
&& cd $APP_NAME \
&& npm run postinstall
EXPOSE $SERVE_PORT $LIVE_RELOAD_PORT
WORKDIR $APP_NAME
EXPOSE 8080
CMD ["node", "-v"]
But I keep getting an error when trying to run it:
docker: Error response from daemon: Container command 'node' not found or does not exist..
I know node is being properly installed, because if I rebuild the image with the CMD line commented out of the Dockerfile
#CMD ["node", "-v"]
And then start a shell session
docker run -it testimage
I can see that all my dependencies are there and return proper results
node -v
v5.10.1
.....
ng -v
angular-cli: 1.0.0-beta.5
node: 5.10.1
os: linux x64
So my question is: why is the CMD in the Dockerfile not able to run these, and how can I fix it?
When you use the shell to RUN node via nvm, you have sourced the nvm.sh file, so that shell has a $PATH set in its environment that lets it find the nvm-installed executables.
When you run commands via docker run, only a default PATH is injected:
docker run <your-ubuntu-image> sh -c 'echo $PATH'
docker run <your-ubuntu-image> which node
docker run <your-ubuntu-image> bash -c 'source /root/.nvm/nvm.sh && nvm which node'
Specifying CMD in array (exec) form runs the binary directly, without a shell: nvm.sh is never sourced, so the nvm install directory never makes it onto $PATH and the lookup only sees the default PATH.
Provide the full path to the node binary that nvm installed, for example:
CMD ["/root/.nvm/versions/node/v5.10.1/bin/node", "-v"]
It's better to exec the node binary directly rather than go through the nvm helper scripts, due to the way Docker's signal handling works. It might be easier to use the Node.js apt packages in Docker rather than nvm.
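If you do want to keep nvm, one workaround (a sketch based on the install location this Dockerfile uses) is to run the command through bash so nvm.sh is sourced and PATH is set up before node starts:
CMD ["/bin/bash", "-c", "source /root/.nvm/nvm.sh && node -v"]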