How to cd into the src directory during a Docker build

I'm using a devcontainer in VS Code.
In the Dockerfile (Ubuntu image) I'm trying to install serverless dynamodb, so I need to cd into the directory that contains the serverless.yml file:
cd apps/api/src && serverless dynamodb install
The problem I have is that the path is not found. My workspaceFolder is workspace.
I've tried:
cd ~/workspace/apps/api/src
cd ./apps/api/src
cd /home/vscode/workspace/apps/api/src
I just don't know what the current directory is during the build, and for some reason RUN echo pwd or RUN ls -l doesn't output anything to the console.
Just as info, I added the following to devcontainer.json for now and it's working, but I wanted to have this in the Dockerfile as well, for any future need:
"postCreateCommand": "cd apps/api/src && serverless dynamodb install",
Could someone please help?
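For reference, the usual way to change directory for a build step is WORKDIR rather than cd, because each RUN starts in a fresh shell, so a cd does not carry over to the next instruction. A minimal sketch, assuming the project files are copied into the image at build time (the base image and paths are placeholders, not taken from the actual devcontainer setup):
FROM ubuntu:22.04
# The files must exist in the image at build time; in a devcontainer the
# workspace is normally only mounted after the build, which is why the
# postCreateCommand works where a RUN with cd does not.
COPY . /workspace
# WORKDIR sets the directory for all following RUN/CMD instructions
WORKDIR /workspace/apps/api/src
# Assumes the serverless CLI was already installed in an earlier step
RUN serverless dynamodb install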

Related

Create docker image with specified CUDA toolkits and pytorch

I'm using my company's clusters through ICM, which provides a convenient way to configure the remote machine with Docker.
So I want to build a Docker image of my development environment (Python packages, CUDA, some utilities like screen and rsync, and also some necessary data) to deploy on the remote machine. Here is my Dockerfile:
# syntax=docker/dockerfile:1
FROM pytorch/pytorch:1.7.1-cuda11.0-cudnn8-runtime
WORKDIR /app
RUN sudo apt-get install rsync
RUN sudo apt-get install screen
RUN conda create --prefix /data/vxxx/nn python=3.8
RUN conda init
# shortcut for activating my environment
RUN echo 'alias nn="conda activate /data/xxx/nn"' >> ~/.bashrc
RUN source ~/.bashrc
RUN nn
RUN pip3 install torchtext==0.8.1 pandas scipy scikit-learn transformers tensorboard -f https://download.pytorch.org/whl/torch_stable.html
# copy files from windows
COPY /mnt/c/test .
RUN cd /data
RUN mkdir xxx
RUN cd xxx
RUN mkdir Data
RUN mkdir Code
RUN cd Code
RUN git clone https://github.com/namespace-Pt/Document-Reduction.git
RUN git config --global user.name 'xxx'
RUN git config --global user.email 'xxx@1.com'
CMD [ "sleep", "infinity"]
I'm new to Docker and I followed the official Python image tutorial. I have the following questions:
What is WORKDIR? Does it mean a new directory is created where all the files will be stored?
Why is my COPY command not working?
How do I publish my image to make it usable on the cluster?
Beginner here, but from what I understood:
WORKDIR is the working directory for other commands such as RUN, CMD, COPY or ADD.
Don't know; I would double-check the directory paths.
Don't know; normally I use Docker Hub.
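For illustration, here is a minimal sketch of how WORKDIR and COPY relate to the build context (the test directory name is only an assumption based on the /mnt/c/test path in the question):
FROM pytorch/pytorch:1.7.1-cuda11.0-cudnn8-runtime
# WORKDIR creates /app if needed and makes it the working directory
# for the RUN/COPY/CMD instructions that follow
WORKDIR /app
# COPY sources are resolved relative to the build context (the directory
# passed to docker build), so an absolute host path like /mnt/c/test
# cannot be used as a source; copy a directory inside the context instead
COPY test ./test
Running docker build with /mnt/c as the build context (and the Dockerfile placed there) would then make test available to COPY.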

Dockerfile ADD statement can't access my src folder when building inside a CircleCI job

I've started using CircleCI for CI (I'm a newbie) and I want to build a Docker image and push it to Docker Hub inside a CircleCI job.
The problem is the ADD statement of the Dockerfile; the error says:
ADD failed: stat /var/lib/docker/tmp/docker-builder814373370/app/build: no such file or directory
docker build works fine locally. The problem seems to be the 'remote environment' created by CircleCI to execute Docker commands inside a job (when the job is executing inside a container). I tried multiple things to share my folder with the remote environment, but nothing has worked. I also tried to execute my job inside a 'machine' executor to get rid of the 'remote environment', but that gave me more errors.
I think I could achieve it by storing my project online in another job and then adding the folder over HTTPS inside the Dockerfile, but I'm pretty sure there is a faster way; I just don't see it.
Here is my Dockerfile:
FROM ubuntu:20.04
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN apt-get update -yq && apt-get -yq install nodejs npm && npm install serve -g
ADD app/build/ /app
EXPOSE 5000
CMD serve -s /app -l 5000
and my circleci job:
working_directory: ~/project/
docker:
- image: circleci/buildpack-deps:stretch
steps:
- checkout
- setup_remote_docker
- run:
name: Build Docker image
command: sudo docker build . -t $IMAGE_NAME:latest
I achieved it by storing artifacts in another job and then fetching the folder over HTTPS with curl/wget in a RUN statement of the Dockerfile.
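A rough sketch of what that could look like in the Dockerfile (the artifact URL and archive name are placeholders, not the real CircleCI artifact location):
FROM ubuntu:20.04
RUN apt-get update -yq && apt-get -yq install curl nodejs npm && npm install serve -g
# Fetch the build output over HTTPS instead of ADDing it from the build context;
# the URL is a placeholder for wherever the artifacts job published the build
RUN mkdir -p /app && \
    curl -L https://example.com/artifacts/app-build.tar.gz | tar -xz -C /app
EXPOSE 5000
CMD serve -s /app -l 5000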

chmod in Dockerfile does not permanently change permissions

I have the Dockerfile below to create a Node.js server. It clones a repo that contains the script that starts the node servers, but that script needs to be executable.
The first time I start this Docker container, it works as expected. I know the chmod is doing something, because without it the script "does not exist", since it is missing the x permission.
However, if I then open a TTY into the container and inspect the file, the permissions have not changed. If the container is restarted, it gives an error that the file does not exist (x permission is missing). If I execute the chmod manually after the first run, I can restart the container without issues.
I have tried it as part of the combined RUN command (as below) and as a separate RUN command, both written normally and as RUN ["chmod", "755", "/opt/youtube4me/start_all.sh"], and both with 755 and +x. All of those ways work on the first start but do not permanently change the permissions.
I'm very new to Docker and I think I'm missing some basic concept that explains that chmod is run in some sort of context (maybe?).
FROM node:10.19.0-alpine
RUN apk update && \
apk add git && \
git clone https://github.com/0502-crew/youtube4me.git /opt/youtube4me && \
chmod 755 /opt/youtube4me/start_all.sh
COPY config/. /opt/youtube4me/
EXPOSE 45011
WORKDIR /opt/youtube4me
CMD ["/opt/youtube4me/start_all.sh"]
UPDATE 1
I run these commands with docker:
sudo docker build . -t 0502-crew/youtube4me
sudo docker container run -d --name youtube4me --network host 0502-crew/youtube4me
start_all.sh
#!/bin/sh
# Reset to master in case any force pushes were done
git reset --hard origin/master
# Start client server
cd client && npm i && npm run build && nohup npm run prod &
# Start api server
cd api && npm i && npm run build && nohup npm run prod &
# Keep server running by tailing nothing, forever...
tail -f /dev/null
The config folder from which I COPY just contains 2 .ts files that are in .gitignore and therefore need to be added manually:
config/api/src/config/Config.ts
config/client/src/config/Config.ts
UPDATE 2
If I replace the CMD line with CMD ["sh", "-c", "tail -f /dev/null"] then the x permissions remain on the sh file. It seems that executing the script removes the permissions...
As @dennisvandehoef pointed out, the fundamentals of my approach are off. Including a git clone in the Dockerfile has, for example, the side effect that building the image a second time will not clone the repo again, even if files have changed.
However, I did find a solution to this approach so I might as well share it for the sake of knowledge sharing:
I develop on Windows, so I don't need to set permissions on the script, but Linux does need them. I found out you can still set the +x permission bit using git on Windows and commit it. When Docker clones the repo, chmod is then no longer needed at all.
git update-index --chmod=+x start_all.sh
If you then ls the files with this command
git ls-files --stage
you will see that all other files are 100644 but the sh file is 100755 (so permission 755).
Next just commit it
git commit -m"Made sh executable"
I had to delete my Docker images afterwards and rebuild them to ensure the git clone was performed again.
I'll rewrite my Dockerfile at a later point, but for now it works as intended.
It is still a mystery to me that the x bit disappears when the chmod is part of the Dockerfile.
You are building a Dockerfile for the project you are pulling from git, and your start_all script includes a git reset --hard origin/master, which is bad practice since now you cannot create versions of your Docker images.
You also copy these 2 files to the wrong directory: with COPY config/. /opt/youtube4me/ you copy them directly to the root of your project, not to the given locations config/api/src/config/ and config/client/src/config/Config.ts.
I realize that fixing these problems will not fix the chmod problem itself, but it might make it go away for your specific use case.
Also, if these are secrets you are excluding from git, you should not add them to your Docker image while building either, since then (after you push the image) they would be public again. Therefore it is good practice to never add them at build time.
Did you try something like this:
Dockerfile
FROM node:10.19.0-alpine
RUN mkdir /opt/youtube4me
COPY . /opt/youtube4me/
WORKDIR /opt/youtube4me
RUN chmod 755 /opt/youtube4me/start_all.sh
EXPOSE 45011
CMD ["/opt/youtube4me/start_all.sh"]
.dockerignore
api/src/config/Config.ts
client/src/config/Config.ts
script
# Start client server
cd client && npm i && npm run build && nohup npm run prod &
# Start api server
cd api && npm i && npm run build && nohup npm run prod &
# Keep server running by tailing nothing, forever...
tail -f /dev/null
You then need to run docker run with 2 volumes to add the config to it. This is done with -v, for example: docker run -v $(pwd)/config/api/src/config:/opt/youtube4me/api/src/config -v $(pwd)/config/client/src/config:/opt/youtube4me/client/src/config 0502-crew/youtube4me
Nevertheless, I wonder why you are running both services in one Docker image. If this is just for local execution, you could also think about creating 2 separate Docker images and using docker-compose to spawn it all.
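Such a docker-compose setup might look roughly like this (the service names, build contexts, client port and container config paths are assumptions; only port 45011 comes from the question):
version: "3.8"
services:
  api:
    build: ./api                                    # assumes a separate Dockerfile per service
    ports:
      - "45011:45011"
    volumes:
      - ./config/api/src/config:/app/src/config     # container path is an assumption
  client:
    build: ./client
    ports:
      - "3000:3000"                                 # placeholder client port
    volumes:
      - ./config/client/src/config:/app/src/config  # container path is an assumption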
Addition 1:
I thought a bit about this, and it is also good practice to use environment variables to configure a Docker image instead of adding a file to it. You might want to switch to that as well. I advise reading this article to get a better understanding of the why and of other ways to do this.
Addition 2:
I created a pull request on your code that is an example of how it could look with docker-compose. It currently does not build because of the config options, but it will give you some more insights.

How to run jenkins.war in the background and use the CLI

I am pretty new to Docker. I need to do the following tasks:
Run Jenkins instance in Docker
Configure it to auto-install JobDSL plugin on startup
I wrote this Dockerfile:
FROM java:8
EXPOSE 8080
ADD jenkins.war jenkins.war
ENTRYPOINT ["java","-jar","jenkins.war"]
and then I run docker run ...
But there is one problem: I can't use the console, yet I have to use the console to install the plugin. I tried to solve this problem using & at the end; it did not help. P.S. I can't use the jenkins image.
Jenkins uses a JENKINS_HOME directory where it stores config, jobs and plugins.
One way to achieve what you want is to put the plugins in this directory before running Jenkins.
If you use the official jenkins image, then perhaps you can use a data volume to store that and run Docker with this data volume: docker run -v /your/data/volume:/var/jenkins_home jenkins/jenkins
If you don't want a data volume and want an image with the plugins baked in, then you can add to your Dockerfile something like:
RUN mkdir -p ~/.jenkins/plugins && \
cd ~/.jenkins/plugins && \
wget http://your/plugins/plugins.jpi
Finally, you can mix the two approaches by creating a shell script that checks whether the plugins directory exists and, if not, downloads the plugin file, then starts Jenkins. This shell script would be your image entry point (a sketch of such a script follows the example below).
NOTE: the file you need to download for plugins is the .jpi file, not the .hpi!
As a reference, here is an example:
FROM java:8
RUN wget https://updates.jenkins-ci.org/download/war/2.121.2/jenkins.war && \
mkdir -p ~/.jenkins/plugins && \
cd ~/.jenkins/plugins && \
wget https://repo.jenkins-ci.org/releases/org/jenkins-ci/plugins/job-dsl/1.33/job-dsl-1.33.jpi
ENTRYPOINT ["java","-jar","jenkins.war"]
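A sketch of the entrypoint-script variant mentioned above might look like this (the script name, paths and plugin version are illustrative only):
#!/bin/sh
# entrypoint.sh: download the plugin on first start, then launch Jenkins
PLUGIN_DIR="${JENKINS_HOME:-$HOME/.jenkins}/plugins"
if [ ! -f "$PLUGIN_DIR/job-dsl.jpi" ]; then
    mkdir -p "$PLUGIN_DIR"
    wget -O "$PLUGIN_DIR/job-dsl.jpi" \
        https://repo.jenkins-ci.org/releases/org/jenkins-ci/plugins/job-dsl/1.33/job-dsl-1.33.jpi
fi
# Adjust the path to wherever jenkins.war was placed in the image
exec java -jar /jenkins.war
In the Dockerfile you would then COPY this script in, mark it executable, and use it as the ENTRYPOINT instead of calling java directly.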

Dockerfile manual install of multiple deb files

Working with Docker, I notice that almost everywhere the RUN command starts with an apt-get upgrade && apt-get install etc.
What if you don't have internet access and simply want to do a "dpkg -i ./deb-directory/*.deb" instead?
Well, I tried that and I keep failing. Any advice would be appreciated:
dpkg: error processing archive ./deb-directory/*.deb (--install):
cannot access archive: No such file or directory
Errors were encountered while processing: ./deb-directory/*.deb
INFO[0002] The command [/bin/sh -c dpkg -i ./deb-directory/*.deb] returned a non-zero code: 1
To clarify: yes, the directory deb-directory does exist. In fact, it is in the same directory as the Dockerfile from which I build.
This is perhaps a bug; I'll open a ticket on their GitHub to find out.
Edit: I did it here.
Edit 2:
Someone answered with a better way of doing this on the GitHub issue.
* is a shell metacharacter. You need to invoke a shell for it to be expanded.
docker run somecontainer sh -c 'dpkg -i /debdir/*.deb'
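For the Dockerfile case, note that the shell form of RUN already goes through /bin/sh -c, so the glob is expanded as long as the .deb files actually exist inside the image. A minimal sketch (the target path inside the image is an assumption):
FROM debian:latest
# The files must be copied into the image for the glob to match anything
COPY deb-directory /deb-directory
# The shell form of RUN executes via /bin/sh -c, so * is expanded by the shell
RUN dpkg -i /deb-directory/*.deb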
!!! Forget the following, but I leave it here to keep track of my reflection steps !!!
The problem comes from the * wildcard, which doesn't seem to be expanded when dpkg is run through docker run. I tried your command inside a container (using an interactive shell) and it worked well. It looks like dpkg is trying to install the literal ./deb-directory/*.deb file, which doesn't exist, instead of installing all the .deb files contained there.
I just implemented a workaround: copy a .sh script into your container, chmod +x it, and then use it as your command.
(FYI, prefer using COPY instead of ADD when the file isn't fetched remotely. Check the best practices for writing Dockerfiles for more info.)
This is my Dockerfile, for example purposes:
FROM debian:latest
MAINTAINER Vrakfall <jeremy@artphotolaurent.be>
COPY install.sh /
#debdir is a directory
COPY debdir /debdir
RUN chmod +x /install.sh
CMD ["/install.sh"]
The install.sh (copied to the root directory) simply contains:
#!/bin/bash
dpkg -i /debdir/*.deb
And the following
docker build -t debiantest .
docker run debiantest
works well and installs all the packages contained in the /debdir directory.
