How to run a "cd" command in a Pipfile's [scripts]? - pipenv

I need to make something like this in a Pipfile:
...
[scripts]
my_script = "cd folder"
...

Using cd actually works; it just does not seem to work, because pipenv run spawns a new shell (not the shell where you invoked pipenv run) and runs your command(s) there. That separate shell will cd into the folder ... and then simply exit. In the original shell where you ran pipenv run, you are still in the same folder.
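This subshell behavior is easy to demonstrate with plain bash, pipenv aside. A child shell's cd never changes the parent shell's working directory (a minimal sketch, using / as the target only because it always exists):

```shell
# Remember where the parent shell is.
start_dir=$(pwd)

# A child shell changes directory and prints its new location...
bash -c 'cd / && pwd'    # prints: /

# ...but the parent shell is still where it started.
[ "$(pwd)" = "$start_dir" ] && echo "parent directory unchanged"
```

pipenv run behaves the same way: the cd happens in the spawned process and dies with it.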
You can check that it can access the folder correctly:
StackOverflow$ tree -L 2 .
.
├── Pipfile
├── Pipfile.lock
├── folder
│   ├── file1.txt
│   ├── file2.txt
│   └── file3.txt
└── ...
StackOverflow$ cat Pipfile
...
[scripts]
# To use multiple commands, wrap with `bash -c`
# See https://github.com/pypa/pipenv/issues/2038
my_script = "bash -c 'cd folder && ls'"
StackOverflow$ pipenv run my_script
file1.txt file2.txt file3.txt
StackOverflow$
The script shortcut spawned a shell that successfully cd-ed into the folder, but in the original shell, you are still in the same directory ("StackOverflow" in this case).
Now, I don't know what the intended purpose of creating a shortcut just for cd-ing into a folder would be. I assume cd is not the only command for my_script, because plain cd folder would have been simpler than pipenv run my_script.
If you are going to do some operation(s) inside that folder, then I recommend writing a separate script for all your other commands and using the [scripts] shortcut to call that script.
StackOverflow$ tree -L 2 .
.
├── Pipfile
├── Pipfile.lock
├── folder
│   ├── file1.txt
│   ├── file2.txt
│   └── file3.txt
└── scripts
    └── my_script.sh
StackOverflow$ cat scripts/my_script.sh
#!/bin/bash
cd folder
for file in *
do
    echo "Doing something to $file"
    sleep 3
done
echo "Done."
StackOverflow$ cat Pipfile
...
[scripts]
my_script = "./scripts/my_script.sh"
StackOverflow$ pipenv run my_script
Doing something to file1.txt
Doing something to file2.txt
Doing something to file3.txt
Done.
The [scripts] shortcut would still work correctly (i.e. cd into the folder and do things there). It's also better than chaining commands inline, which the pipenv maintainers have discussed as not being the intended purpose of [scripts]:
https://github.com/pypa/pipenv/issues/2878
https://github.com/pypa/pipenv/issues/2160

For example, if you want to run targets from docs/Makefile, you can write:
# Pipfile
[scripts]
doc_clean = "bash -c 'cd docs && make clean'"
doc_build = "bash -c 'cd docs && make html'"
Then execute pipenv run doc_clean or pipenv run doc_build.

Related

Dockerfile COPY and keep folder structure

I'm trying to create a Dockerfile that copies all package.json files into the image but keeps the folder structure.
This is what I have now:
FROM node:15.9.0-alpine as base
WORKDIR /app/
COPY ./**/package.json ./
CMD ls -laR /app
Running with: sudo docker run --rm -it $(sudo docker build -q .)
But it only copies one package.json and puts it in the base directory (/app).
Here is the directory I'm testing on:
├── Dockerfile
├── t1
│   └── package.json
└── t2
    └── ttt
        ├── b.txt
        └── package.json
And I would like it to look like this inside the container:
├── Dockerfile
├── t1
│   └── package.json
└── t2
    └── ttt
        └── package.json
The Dockerfile COPY directive is documented as using the Go filepath.Match function for glob expansion. That only supports the basic glob characters *, ?, [a-z], but not extensions like ** that some shells support.
Since COPY only takes a filename glob as input and it likes to flatten the file structure, I don't think there's a way to do the sort of selective copy you're describing in a single command.
Instead you need to list out the individual files you want to copy. COPY will create directories as needed, but that means you need to repeat paths on both sides of COPY.
COPY t1/package*.json t1/
COPY t2/ttt/package*.json t2/ttt/
I can imagine some hacky approaches using multi-stage builds: have an initial stage that copies in the entire source tree and then deletes everything except the package*.json files, then copy that stage's tree into the actual build stage. Before going down that road, I'd contemplate splitting my repository into smaller modules with separate Dockerfiles per module.
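A sketch of that multi-stage idea, for illustration only; the stage name `manifests` and the find-based pruning are my assumptions (and busybox find in the alpine image may not support every flag GNU find does), not a tested recipe:

```dockerfile
# Stage 1: copy the whole tree, then prune it down to just the package.json files.
FROM node:15.9.0-alpine AS manifests
WORKDIR /src
COPY . .
# Delete every regular file that is not a package.json, then drop empty directories.
RUN find . -type f ! -name 'package.json' -delete \
 && find . -mindepth 1 -type d -empty -delete

# Stage 2: the real build stage copies the pruned tree, structure intact.
FROM node:15.9.0-alpine AS base
WORKDIR /app/
COPY --from=manifests /src/ ./
CMD ls -laR /app
```

COPY --from copies whole directory trees, so the nesting (t1/package.json, t2/ttt/package.json) survives.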

Can you have a non top-level Dockerfile when invoking COPY?

I have a Dockerfile to build releases for an Elixir/Phoenix application. The directory tree is as follows, where the Dockerfile (which has a dependency on this other Dockerfile) is in the "infra" subfolder and needs access to all the files one level above "infra".
.
├── README.md
├── assets
│   ├── css
│   ├── js
│   ├── node_modules
│   ├── package-lock.json
│   ├── package.json
├── lib
├── infra
│   ├── Dockerfile
│   ├── config.yaml
│   ├── deployment.yaml
The Dockerfile looks like:
# https://github.com/bitwalker/alpine-elixir
FROM bitwalker/alpine-elixir:latest
# Set exposed ports
EXPOSE 4000
ENV PORT=4000
ENV MIX_ENV=prod
ENV APP_HOME /app
ENV APP_VERSION=0.0.1
COPY ./ ${HOME}
WORKDIR ${HOME}
RUN mix deps.get
RUN mix compile
RUN MIX_ENV=${MIX_ENV} mix distillery.release
RUN echo $HOME
COPY ${HOME}/_build/${MIX_ENV}/rel/my_app/releases/${APP_VERSION}/my_app.tar.gz .
RUN tar -xzvf my_app.tar.gz
USER default
CMD ./bin/my_app foreground
The command "mix distillery.release" is what builds the my_app.tar.gz file in the path indicated by the COPY command.
I invoke the docker build as follows in the top-level directory (the parent directory of "infra"):
docker build -t my_app:local -f infra/Dockerfile .
I basically then get an error with COPY:
Step 13/16 : COPY ${HOME}/_build/${MIX_ENV}/rel/my_app/releases/${APP_VERSION}/my_app.tar.gz .
COPY failed: stat /var/lib/docker/tmp/docker-builder246562111/opt/app/_build/prod/rel/my_app/releases/0.0.1/my_app.tar.gz: no such file or directory
I understand that the COPY command depends on the "build context", but I thought that issuing docker build in the parent directory of "infra" meant I had the appropriate context set for the COPY; clearly that doesn't seem to be the case. Is there a way to have a Dockerfile one level below the parent directory that contains all the files needed to build an Elixir/Phoenix "release" (the my_app.tar.gz and associated files created via mix distillery.release)? What bits am I missing?
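One thing worth noting, as a sketch rather than a verified fix: COPY sources are resolved against the build context on the host, never against the image's own filesystem, so a file produced by an earlier RUN step (like my_app.tar.gz here) can never be a COPY source. Since the tarball already exists inside the image, staying inside the image with RUN may be all that's needed (hypothetical replacement for the failing step, assuming WORKDIR is still ${HOME}):

```dockerfile
# The tarball was created inside the image by `mix distillery.release`,
# so extract it with RUN instead of trying to COPY it from the context.
RUN tar -xzvf _build/${MIX_ENV}/rel/my_app/releases/${APP_VERSION}/my_app.tar.gz
```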

Specific `build` directory not added to Docker image

I have a repository with a directory structure like this
.
├── Dockerfile
├── README.md
├── frontend/
├── backend/
├── docs/
├── examples/
└── build/
The Dockerfile is a simple ADD with no entrypoint:
FROM python:3.6-slim
WORKDIR /app
# Copy and install requirements.txt first for caching
ADD . /app
RUN pip install --no-cache-dir --trusted-host pypi.python.org -r backend/requirements.txt
EXPOSE 8200
WORKDIR /app/backend
My issue is that after docker build -t myimage ., the build folder is missing from the image.
I just ran an ls when verifying the image contents with docker run -it myimage /bin/bash, and the build folder is missing!
.
├── frontend/
├── backend/
├── docs/
├── examples/
Does anyone know why? How can I modify my Dockerfile to add this folder to my image? All resources online say that ADD . <dest> should duplicate my current directory tree inside the image, but the build folder is missing...
I missed that there's a .dockerignore file in the repo that excludes this folder. Whooooops, thank you @David Maze.
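For anyone hitting the same thing: a single line in .dockerignore is enough to drop a directory from the build context, which makes it invisible to ADD and COPY (hypothetical file contents shown):

```
# .dockerignore
build/
```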

Building docker where does `RUN mkdir` create a directory - cannot find it when running container

I'm new to Docker and I'm building the Dockerfile below using docker build -t control . It builds successfully with no errors; specifically, it says that it makes the control directory. Then I try to run the image with docker run control, but it gives an error saying that it can't find control/control_file/job.py
Where does Docker create the control directory? Is it inside a container that I cannot see? I can't see it being created anywhere and I'm unsure how to debug.
FROM python:2
RUN pip install requests\
&& pip install pymongo
RUN mkdir control
COPY control_file/ /control
ENV PYTHONPATH="/control:$PYTHONPATH"
RUN export PYTHONPATH=/control:$PYTHONPATH
CMD ["python","/control/job.py"]
This is the directory structure:
├── control_file
│   ├── insert_to_container.py
│   ├── ip_path
│   ├── job.py
│   └── read_info.py
└── Dockerfile
The job.py is now in /control within your Docker build.
With the COPY command you copy all contents within control_file/ into the new directory /control.
Change the last line to:
CMD ["python", "control/job.py"]
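As a general debugging step (not from the answers above, just a common technique), you can look inside the image to see where the files actually landed by overriding the command at run time:

```
docker run --rm control ls -laR /control
```

The directory only exists inside the image/container filesystem, which is why you never see it on the host.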
Your Dockerfile has mistakes; please find the corrected one below. The control_file directory should be available in the build directory (where you are building the Docker image), and job.py should have execute permission.
FROM python:2
RUN pip install requests\
&& pip install pymongo
RUN mkdir -p /control/control_file
COPY control_file/ /control/control_file
CMD [ "python" , "/control/control_file/job.py" ]

Docker ADD is failing with relative directory

My Dockerfile has the following entries:
ENV SCPATH /etc/supervisor/conf.d
RUN apt-get -y update
# The daemons
RUN apt-get -y install supervisor
RUN mkdir -p /var/log/supervisor
# Supervisor Configuration
ADD ./supervisord/conf.d/* $SCPATH/
The directory structure looks like this
├── .dockerignore
├── .gitignore
├── Dockerfile
├── Makefile
├── README.md
├── Vagrantfile
├── index.js
├── package.json
└── supervisord
    └── conf.d
        ├── node.conf
        └── supervisord.conf
As per my understanding this should work fine as
ADD ./supervisord/conf.d/* $SCPATH/
points to a path relative to the Docker build context.
Still it fails with
./supervisord/conf.d : no such file or directory exists.
I am new to Docker, so it might be a very basic thing I am missing. I'd really appreciate help.
What are your .dockerignore file contents? Are you sure you did not accidentally exclude something below your supervisord directory that the docker daemon needs to build your image?
And: in which folder are you executing the docker build command? Make sure you execute it within the folder that holds the Dockerfile so that the relative paths match.
Update: I tried to reproduce your problem. What I did from within a temp folder:
mkdir -p a/b/c
echo "test" > a/b/c/test.txt
cat <<EOF > Dockerfile
FROM debian
ENV MYPATH /newdir
RUN mkdir $MYPATH
ADD ./a/b/c/* $MYPATH/
CMD cat $MYPATH/test.txt
EOF
docker build -t test .
docker run --rm -it test
That prints test as expected. The important part works: the ADD ./a/b/c/* $MYPATH/. The file is found, since its content test is displayed at runtime.
When I now change the path ./a/b/c/* to something else, I get the no such file or directory exists error. When I leave the path as is and invoke docker build from a different folder than the temp folder where I placed the files the error is shown, too.
