I want to run npm install inside a container.
But a plain docker exec container npm install is not the right thing for me.
I want to run this command in /home/client, but the working directory in the container is /home.
Is that possible?
I don't want to enter the container, and I don't want to change its working directory.
Edit 1
Dockerfile:
FROM ubuntu:16.04
COPY . /home
WORKDIR /home
RUN apt-get update && apt-get install -y \
python-pip \
postgresql \
rabbitmq-server \
libpq-dev \
python-dev \
npm \
mongodb
RUN pip install -r requirements.txt
Docker run command:
docker run \
-tid \
-p 8000:8000 \
-v $(pwd):/home \
--name container \
-e DB_NAME \
-e DB_USER \
-e DB_USER_PASSWORD \
-e DB_HOST \
-e DB_PORT \
container
Two commands to prove there is a directory /home/client:
docker exec container pwd
Gives: /home
docker exec container ls client
Gives:
node_modules
package.json
src
webpack.config.js
Those are the node modules from my host.
Edit 2
When I run:
docker exec container cd /home/client
It produces the following error:
rpc error: code = 2 desc = oci runtime error: exec failed: container_linux.go:247: starting container process caused "exec: \"cd\": executable file not found in $PATH"
It works through a shell, since cd is a shell builtin rather than an executable:
docker exec {container} sh -c "cd /home/client && npm install"
Thanks to Matt
Yeah, it's possible. You can do it in one of two ways.
Method 1
Do it in a single command like this:
$ docker exec container sh -c "cd /home/client && npm install"
Or like this (as an arg to npm install):
$ docker exec container npm install --prefix /home/client
Method 2
Use an interactive terminal:
$ docker exec -it container /bin/bash
# cd /home/client
# npm install
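If your Docker version is recent enough (17.09+), docker exec also accepts a -w/--workdir flag, which avoids the shell wrapper entirely. The sh -c form stays the portable fallback; the local lines below are only a sketch of why it works (the /tmp/demo_client path is made up for illustration):

```shell
# Docker 17.09+ can set the working directory for the exec'd process:
#   docker exec -w /home/client container npm install
# Portable fallback: the cd happens inside the spawned shell, so the
# container's default working directory is left untouched.
mkdir -p /tmp/demo_client
in_dir=$(sh -c "cd /tmp/demo_client && pwd")
echo "$in_dir"
```

Because the cd is scoped to the sh -c child process, the next docker exec still starts in the image's WORKDIR.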
Related
I have built a Docker image which runs a bash script and processes some files in a specific folder, but I can't manage to make it work. I have tried different approaches but cannot find where the issue is. Building an image with
docker build -t user/mycontainer .
works, but the bash script doesn't run when I
docker run mycontainer `pwd`:/app
Instead it produces an error:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "/path/c/to/where/i/run/teh/docker:/app": stat /path/c/to/where/i/run/teh/docker:/app: no such file or directory: unknown.
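What the error suggests (an assumption from the message alone): everything after the image name is parsed as the command to run inside the container, so the path:/app argument is being exec'd as a program. A bind mount has to be passed with -v before the image name, something like:

```shell
# Hypothetical corrected invocation (image tag taken from the build step):
#   docker run -v "$(pwd)":/app user/mycontainer
# Quoting $(pwd) guards against spaces in the host path:
host_dir="$(pwd)"
echo "would mount: ${host_dir}:/app"
```

The echo line is only a local stand-in showing how the -v argument is assembled.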
When I run docker ps there aren't any containers, but if I run docker ps -a the container just built appears.
I try running
docker exec -it <container id build> bash
and I get a different error response from daemon:
Container <container id build> is not running
The Docker image seems to build all the dependencies, and the bash script works when I run it separately on my local machine within the folder.
My Dockerfile looks like this:
FROM alpine:latest
# Create directory in container image for app code
RUN mkdir -p /app
# Copy app code (.) to /app in container image
COPY . /app
# Set working directory context
ARG TIPPECANOE_RELEASE="1.36.0"
RUN apk add --no-cache sudo git g++ make libgcc libstdc++ sqlite-libs sqlite-dev zlib-dev bash \
&& addgroup sudo && adduser -G sudo -D -H tippecanoe && echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers \
&& cd /root \
&& git clone https://github.com/mapbox/tippecanoe.git tippecanoe \
&& cd tippecanoe \
&& git checkout tags/$TIPPECANOE_RELEASE \
&& cd /root/tippecanoe \
&& make \
&& make install \
&& cd /root \
&& rm -rf /root/tippecanoe \
&& apk del git g++ make sqlite-dev
RUN chmod +x ./app/bash.sh
USER tippecanoe
WORKDIR /app
CMD ["./bash.sh"]
I have a Dockerfile with an entrypoint script, as below:
FROM centos:latest
MAINTAINER "HV"
#Add Environment variables here
ENV container docker
#Add Prerquisites
RUN yum install wget -y
RUN yum install unzip -y
RUN yum install make -y
RUN yum install sudo -y
RUN yum install libtool-ltdl -y
RUN yum install python3 python3-pip -y
#Install Git
RUN mkdir -p /home/Test/
RUN yum install git -y
#Copy Entrypoint script
COPY ilab_entrypoint.sh /usr/local/bin/
ENTRYPOINT ["ilab_entrypoint.sh"]
CMD ["/usr/sbin/init"]
ilab_entrypoint.sh:
#!/bin/bash
set -e
git clone git_url
exec "$@"
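As an aside, the exec "$@" idiom is what hands control from the entrypoint script to the CMD; a self-contained sketch of the pattern (the git clone setup step is replaced by a comment, and the /tmp path is made up):

```shell
# Minimal entrypoint pattern: do setup, then replace the shell with the
# command passed as arguments, so it becomes the main process and
# receives signals directly.
cat > /tmp/demo_entrypoint.sh <<'EOF'
#!/bin/sh
set -e
# setup work (e.g. the git clone) would happen here
exec "$@"
EOF
chmod +x /tmp/demo_entrypoint.sh
out=$(/tmp/demo_entrypoint.sh echo "hello from CMD")
echo "$out"
```

Here the entrypoint's arguments (the CMD) are echo "hello from CMD", which exec runs in place of the script's shell.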
So I can build and run the container, but I don't see the container id when I am inside it.
For example, when I run the docker container I can enter it, but the prompt keeps the same localhost name and shows no container id:
root@centos8-test:[/home/Test]: docker exec -it 3c2b5973f5c0 bash
[root@centos8-test /]#
It should instead be running as:
[root@3c2b5973f5c0 /]#
This is not related to the script; if you want to change the hostname you can specify it in the run command:
$ docker run --rm -it python:3.9 bash
root@2d97c319abcf:/# exit
exit
$ docker run --rm -it --hostname some-host python:3.9 bash
root@some-host:/# exit
exit
I have the following folder structure
db
- build.sh
- Dockerfile
- file.txt
build.sh
PGUID=$(id -u postgres)
PGGID=$(id -g postgres)
CS=$(lsb_release -cs)
docker build --build-arg POSTGRES_UID=${PGUID} --build-arg POSTGRES_GID=${PGGID} --build-arg LSB_CS=${CS} -t postgres:1.0 .
docker run -d postgres:1.0 sh -c "cp file.txt ./file.txt"
Dockerfile
FROM ubuntu:19.10
RUN apt-get update
ARG LSB_CS=$LSB_CS
RUN echo "lsb_release: ${LSB_CS}"
RUN apt-get install -y sudo \
&& sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt eoan-pgdg main" > /etc/apt/sources.list.d/pgdg.list'
RUN apt-get install -y wget \
&& apt-get install -y gnupg \
&& wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | \
sudo apt-key add -
RUN apt-get update
RUN apt-get install tzdata -y
ARG POSTGRES_GID=128
RUN groupadd -g $POSTGRES_GID postgres
ARG POSTGRES_UID=122
RUN useradd -r -g postgres -u $POSTGRES_UID postgres
RUN apt-get update && apt-get install -y postgresql-10
RUN locale-gen en_US.UTF-8
RUN echo "host all all 0.0.0.0/0 md5" >> /etc/postgresql/10/main/pg_hba.conf
RUN echo "listen_addresses='*'" >> /etc/postgresql/10/main/postgresql.conf
EXPOSE 5432
CMD ["pg_ctlcluster", "--foreground", "10", "main", "start"]
file.txt
"Hello Hello"
Basically I want to be able to build my image, start my container, and copy file.txt from my local machine into the container.
I tried doing it like this: docker run -d postgres:1.0 sh -c "cp file.txt ./file.txt", but it doesn't work. I have also tried other options, without success.
At the moment, when I run my script with sh build.sh, it runs everything and even starts a container, but it doesn't copy that file over to the container.
Any help on this is appreciated
Sounds like what you want is to mount the file into a location in your docker container.
You can mount a local directory into your container and access it from the inside:
mkdir /some/dirname
copy file.txt /some/dirname/
# run as demon, mount /some/dirname to /directory/in/container, run sh
docker run -d -v /some/dirname:/directory/in/container postgres:1.0 sh
Minimal working example:
On windows host:
d:\>mkdir d:\temp
d:\>mkdir d:\temp\docker
d:\>mkdir d:\temp\docker\dir
d:\>echo "SomeDataInFile" > d:\temp\docker\dir\file.txt
# mount one local file to root in docker container, renaming it in the process
d:\>docker run -it -v d:\temp\docker\dir\file.txt:/other_file.txt alpine
In docker container:
/ # ls
bin etc lib mnt other_file.txt root sbin sys usr
dev home media opt proc run srv tmp var
/ # cat other_file.txt
"SomeDataInFile"
/ # echo 32 >> other_file.txt
/ # cat other_file.txt
"SomeDataInFile"
32
/ # exit
This will mount the (outside) directory/file as a folder/file inside your container. If you specify a directory/file inside your container that already exists, it will be shadowed.
Back on windows host:
d:\>more d:\temp\docker\dir\file.txt
"SomeDataInFile"
32
See e.g. Docker volumes vs mount bind - Use cases on Serverfault for more info about ways to bind mount or use volumes.
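If the goal is a one-time copy rather than a live mount, docker cp is another option (a sketch; the container name is whatever you passed to --name, and the file content matches the question's file.txt):

```shell
# One-off copy of a host file into a (running or stopped) container:
#   docker cp file.txt <container_name_or_id>:/file.txt
# Preparing an example source file locally, mirroring the question:
printf '"Hello Hello"\n' > /tmp/file.txt
cat /tmp/file.txt
```

Unlike a bind mount, docker cp makes an independent copy: later edits to the host file are not reflected inside the container.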
I am struggling with permissions on a docker volume; I get access denied when writing.
This is a small part of my Dockerfile:
FROM ubuntu:18.04
RUN apt-get update && \
apt-get install -y \
apt-transport-https \
build-essential \
ca-certificates \
curl \
vim && \............
RUN curl -sL https://deb.nodesource.com/setup_8.x | bash - && apt-get install -y nodejs
# Add non-root user
ARG USER=user01
RUN useradd -Um -d /home/$USER -s /bin/bash $USER && \
apt install -y python3-pip && \
pip3 install qrcode[pil]
#Copy that startup.sh into the scripts folder
COPY /scripts/startup.sh /scripts/startup.sh
#Making the startup.sh executable
RUN chmod -v +x /scripts/startup.sh
#Copy node API files
COPY --chown=user1 /node_api/* /home/user1/
USER $USER
WORKDIR /home/$USER
# Expose needed ports
EXPOSE 3000
VOLUME /data_storage
ENTRYPOINT [ "/scripts/startup.sh" ]
Also a small part of my startup.sh
#!/bin/bash
/usr/share/lib/provision.py --enterprise-seed $ENTERPRISE_SEED > config.json
Then my docker builds command:
sudo docker build -t mycontainer .
And the docker run command:
sudo docker run -v data_storage:/home/user01/.client -p 3008:3000 -itd mycontainer
The problem I have is that the Python script creates the folder /home/user01/.client and copies some files in there. That always worked fine. But now I want those files, which are data files, in a volume for backup purposes. And as I map my volume, I get permission denied, so the Python script is no longer able to write.
So this instruction at the end of my Dockerfile, combined with the mapping in the docker run command, gives me the permission denied:
VOLUME /data_storage
Any suggestions on how to resolve this? Are some more permissions needed for "user01"?
Thanks
I was able to resolve my issue by removing the VOLUME instruction from the Dockerfile and just doing the mapping when executing docker run:
sudo docker run -v data_storage:/home/user01/.client -p 3008:3000 -itd mycontainer
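Another route, if you'd rather keep a VOLUME instruction: when a named volume is first mounted, Docker initializes it from the image's directory at that path, including its ownership. So creating and chowning the mount point in the Dockerfile, while still root (before the USER instruction), should also work. A hedged sketch, reusing user01 from the question:

```dockerfile
# Run while still root, before USER $USER:
RUN mkdir -p /home/user01/.client && chown -R user01:user01 /home/user01/.client
VOLUME /home/user01/.client
```

This way the named volume starts out owned by user01 and the Python script can write to it.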
In Jenkins I installed the Docker build step plugin.
In Jenkins I created a job and, in it, executed a docker command with "build image" selected. The image is created using this Dockerfile:
FROM ubuntu:latest
#OS Update
RUN apt-get update
RUN apt-get -y install git git-core unzip python-pip make wget build-essential python-dev libpcre3 libpcre3-dev libssl-dev vim nano net-tools iputils-ping supervisor curl supervisor
WORKDIR /home/wipro
#Mongo Setup
RUN curl -O http://downloads.mongodb.org/linux/mongodb-linux-x86_64-3.0.2.tgz && tar -xzvf mongodb-linux-x86_64-3.0.2.tgz && cd mongodb-linux-x86_64-3.0.2/bin && cp * /usr/bin/
#RUN mongod --dbpath /home/azureuser/CI_service/data/ --logpath /home/azureuser/CI_service/log.txt --logappend --noprealloc --smallfiles --port 27017 --fork
#Node Setup
#RUN curl -O https://nodejs.org/dist/v0.12.7/node-v0.12.7.tar.gz && tar -xzvf node-v0.12.7.tar.gz && cd node-v0.12.7
#RUN cd /opt/node-v0.12.7 && ./configure && make && make install
#RUN cp /usr/local/bin/node /usr/bin/ && cp /usr/local/bin/npm /usr/bin/
RUN wget https://nodejs.org/dist/v0.12.7/node-v0.12.7-linux-x64.tar.gz
RUN cd /usr/local && sudo tar --strip-components 1 -xzf /home/wipro/node-v0.12.7-linux-x64.tar.gz
RUN npm install forever -g
#CI SERVICE
ADD prod /home//
ADD servicestart.sh /home/
RUN chmod +x /home/servicestart.sh
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
CMD ["sh", "/home/servicestart.sh"]
EXPOSE 80
EXPOSE 27017
Then I created the container, and it was created successfully.
But when I tried to start the container, it did not run.
When I checked with:
docker ps -a
it shows the status as Created only.
It's not in a Running or Exited state.
The output of docker ps -a is:
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8ac762c4dc84 d85c2d90be53 "sh /home/servi" 15 hours ago Created hungry_liskov
7d8864940515 d85c2d90be53 "sh /home/servi" 16 hours ago Created ciservice
How do I start the container using Jenkins?
It depends on your container's main command (ENTRYPOINT + CMD).
A Created state (for a non-data-volume container) means the main command failed to execute.
Try docker logs <container_id> to see if any error message was recorded.
CMD ["sh", "/home/servicestart.sh"] should be:
CMD ["/home/servicestart.sh"]
(Docker only wraps shell-form commands in sh -c; in exec form there is no need to invoke sh yourself, since the script was made executable with chmod +x.)
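The exec-form/shell-form distinction behind this advice can be checked locally: shell form (CMD sh /home/servicestart.sh) is what Docker wraps in /bin/sh -c, while exec form runs the listed program directly. A sketch with a throwaway stand-in for the service script:

```shell
# Throwaway stand-in for /home/servicestart.sh:
cat > /tmp/servicestart.sh <<'EOF'
echo "service started"
EOF
# exec form ["sh", "/tmp/servicestart.sh"] runs sh directly:
exec_form=$(sh /tmp/servicestart.sh)
# shell form would be run by Docker as: /bin/sh -c "sh /tmp/servicestart.sh"
shell_form=$(/bin/sh -c "sh /tmp/servicestart.sh")
echo "$exec_form"
echo "$shell_form"
```

Both produce the same output here; the practical difference is that exec form makes the script itself the container's main process, which is what docker logs and signal handling see.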