I moved my project from pip to Poetry, and now my Docker container fails to run on Google Cloud Run.
The last line of my Dockerfile is:
CMD ["poetry", "run", "uwsgi", "--http-socket", "0.0.0.0:80", "--wsgi-file", "/server/app/main.py", "--callable", "app", "-b 65535"]
It works locally, it works on another laptop, and it works in the Cloud Run Emulator, but it fails when I run it on Cloud Run.
Here is the Cloud Run log:
Creating virtualenv my-project-xTUGyw3C-py3.8 in /home/.cache/pypoetry/virtualenvs
FileNotFoundError
[Errno 2] No such file or directory: b'/bin/uwsgi'
at /usr/local/lib/python3.8/os.py:601 in _execvpe
597│ path_list = map(fsencode, path_list)
598│ for dir in path_list:
599│ fullname = path.join(dir, file)
600│ try:
→ 601│ exec_func(fullname, *argrest)
602│ except (FileNotFoundError, NotADirectoryError) as e:
603│ last_exc = e
604│ except OSError as e:
605│ last_exc = e
Container called exit(1).
The correct port is set, and no environment variables are used. I don't use volumes; files are passed into the image with COPY.
The log says the application can't find the uwsgi file. That file doesn't exist in the local version either, yet locally everything works without errors.
How is it even possible for a Docker container to behave differently?
UPD: my Dockerfile:
FROM python:3.8
WORKDIR server
ENV PYTHONPATH $PYTHONPATH:/server
RUN pip install poetry==1.1.11
COPY poetry.lock /server
COPY pyproject.toml /server
RUN poetry install
EXPOSE 80
COPY /data /server/data
COPY /test /server/test
COPY /app /server/app
CMD ["poetry", "run", "uwsgi", "--http-socket", "0.0.0.0:80", "--wsgi-file", "/server/app/main.py", "--callable", "app", "-b 65535"]
Adding HOME=/root to the CMD fixed the problem for me:
CMD HOME=/root poetry run python main.py
Apparently HOME is not set correctly on Cloud Run, and this is a known issue.
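Applied to the Dockerfile in the question, this would mean switching to the shell form of CMD so the variable is set for the same process (a sketch; the flags are copied from the question):

```dockerfile
# Shell form: HOME=/root is an environment assignment for this command.
CMD HOME=/root poetry run uwsgi --http-socket 0.0.0.0:80 \
    --wsgi-file /server/app/main.py --callable app -b 65535
```

The shell form matters here: in the exec (JSON array) form, "HOME=/root" would be passed to poetry as a literal argument rather than being set as an environment variable.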
Related
Problem Description
I have a Docker image which I build and run using docker-compose. Normally I develop on WSL2, and when I run docker-compose up --build the image builds and runs successfully. On another machine, using Windows PowerShell, with an identical clone of the code, the same command successfully builds the image but gives an error when running.
Error
[+] Running 1/1
- Container fastapi-service Created 0.0s
Attaching to fastapi-service
fastapi-service | exec /start_reload.sh: no such file or directory
fastapi-service exited with code 1
I'm fairly experienced using Docker, but am a complete novice with PowerShell and developing on Windows more generally. Is there a difference in Dockerfile construction in this context, or a difference in the execution of COPY and RUN statements?
Code snippets
Included are all parts of the code required to replicate the error.
Dockerfile
FROM tiangolo/uvicorn-gunicorn:python3.7
COPY requirements.txt requirements.txt
RUN pip install --no-cache-dir -r requirements.txt
COPY ./start.sh /start.sh
RUN chmod +x /start.sh
COPY ./start_reload.sh /start_reload.sh
RUN chmod +x /start_reload.sh
COPY ./data /data
COPY ./app /app
EXPOSE 8000
CMD ["/start.sh"]
docker-compose.yml
services:
  web:
    build: .
    container_name: "fastapi-service"
    ports:
      - "8000:8000"
    volumes:
      - ./app:/app
    command: /start_reload.sh
start_reload.sh
This is a small shell script which runs prestart.sh if present, and then launches gunicorn/uvicorn in "reload mode":
#!/bin/sh
# If there's a prestart.sh script in the /app directory, run it before starting
PRE_START_PATH=/app/prestart.sh
HOST=${HOST:-0.0.0.0}
PORT=${PORT:-8000}
LOG_LEVEL=${LOG_LEVEL:-info}
echo "Checking for script in $PRE_START_PATH"
if [ -f "$PRE_START_PATH" ] ; then
    echo "Running script $PRE_START_PATH"
    . "$PRE_START_PATH"
else
    echo "There is no script $PRE_START_PATH"
fi
# Start Uvicorn with live reload
exec uvicorn --host $HOST --port $PORT --log-level $LOG_LEVEL main:app --reload
The solution lies in a difference between UNIX and Windows systems in how they end lines (LF vs. CR-LF). A discussion of the topic can be found in [Difference between CR LF, LF and CR line break types?].
When the script is checked out with CR-LF line endings, the shebang line effectively names the interpreter /bin/sh followed by a carriage return. No such interpreter exists, so the kernel refuses to execute the script, and Docker reports the misleading "exec /start_reload.sh: no such file or directory" error even though the file itself is present.
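One way to confirm and fix this before building (a sketch using standard tools; dos2unix achieves the same thing) is to strip the carriage returns:

```shell
# Reproduce the problem: write a script with Windows (CRLF) line endings.
printf '#!/bin/sh\r\necho ok\r\n' > /tmp/crlf_demo.sh

# Strip the trailing \r from every line, in place.
sed -i 's/\r$//' /tmp/crlf_demo.sh
chmod +x /tmp/crlf_demo.sh

/tmp/crlf_demo.sh   # prints: ok  (with CRLF, the kernel would look for '/bin/sh\r')
```

To stop Git from reintroducing CR-LF on Windows checkouts, a `.gitattributes` entry such as `*.sh text eol=lf` pins shell scripts to LF.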
This is my dockerfile
FROM node:15
# sets the folder structure to /app directory
WORKDIR /app
# copy package.json to /app folder
COPY package.json .
RUN npm install
# Copy all files from current directory to current directory in docker(app)
COPY . ./
EXPOSE 3000
CMD ["node","index.js"]
I am using this command in PowerShell to run the image in a container:
docker run -v ${pwd}:/app -p 3000:3000 -d --name node-app node-app-image
${pwd}
returns the current directory.
But as soon as I hit enter, node_modules somehow isn't installed in the container, and I get an "express not found" error in the log.
(Screenshot of the Docker log showing the error: https://i.stack.imgur.com/4Fifu.png)
I can't verify whether node_modules is installed, because the container won't stay up long enough to run docker exec -it.
I was following a freeCodeCamp tutorial, and it seems to work on the instructor's PC. I've also tried the command in Command Prompt, replacing ${pwd} with %cd%.
This used to work fine before I added the volume flag to the command.
Your problem is that you build the image from one folder and then bind-mount a different folder over /app at run time, which hides the node_modules that npm install created inside the image. Build and run from the project folder, and drop the bind mount:
|_ MyFolder/
   |_ all-required-files
   |_ all-required-folders
   |_ Dockerfile
docker build -t node-app-image .
docker run -p 3000:3000 -d --name node-app node-app-image
Simplified Dockerfile
FROM node:15
# sets the folder structure to /app directory
WORKDIR /app
# Copy all files from current directory to current directory in docker(app)
COPY . ./
RUN npm install
EXPOSE 3000
CMD ["node","index.js"]
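If the bind mount is wanted for live editing, a widely used pattern (an addition to the answer above, not part of it) is to add an anonymous volume so the container's /app/node_modules is not shadowed by the host folder:

```shell
docker run -v ${pwd}:/app -v /app/node_modules -p 3000:3000 -d --name node-app node-app-image
```

The anonymous volume takes precedence over the bind mount for that path, so the node_modules installed at build time stays visible inside the container while the rest of /app tracks the host.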
I have a dockerfile for my TS app
FROM node:alpine
WORKDIR /usr
COPY package.json ./
COPY tsconfig.json ./
COPY src ./src
RUN ls -a
RUN npm install
EXPOSE 4005
CMD ["npm","run","dev"]
and I'm able to build it with this command
docker build -t ts-prisma .
and run it like this
docker run -it -p 3000:4005 -v src-prisma:/usr/src ts-prisma
What I want to achieve is to attach a volume so that every time I change something in my code, the change is reflected inside the container.
For example, the first time I build my app I have an endpoint like this:
app.get(
"/",
async (req: Request, res: Response): Promise<Response> => {
return res.status(200).send({
message: "Hello world!!!",
});
}
);
and if I curl `http://localhost:3000` it sends me the correct response of
{
message: "Hello world!!!",
}
But if I change it to this
app.get(
"/",
async (req: Request, res: Response): Promise<Response> => {
return res.status(200).send({
message: "This is a new message",
});
}
);
I'm still getting the old message.
I get a shell in the container with
docker exec -it <id> /bin/sh
and cat the index.ts file: it is still the first version, nothing has changed.
What am I missing?
I know there is a way to do this with volumes, but I couldn't figure it out.
Assuming that you want to run node in a container strictly for development, then you want to keep the source on your host drive and not have it in the container at all. So your COPY statements in the Dockerfile aren't necessary. You might actually be able to get away with running a completely standard node image.
Please note that I assume your host is a Linux or MacOS machine. If you're on Windows with WSL2, then doing this is hard(er) since file changes on the Windows file system aren't sent to Linux containers, so the container won't get notified when you make changes to your files.
To make sure that you have the necessary packages installed, we can run
docker run --rm -v $(pwd):/src -w /src node:alpine npm install
That will install the packages you need on your host file system.
Now you can start your development environment with
docker run --rm -v $(pwd):/src -p 3000:4005 -w /src node:alpine npm run dev
Now you should be able to change your files on your host file system and your container should pick up the changes.
When your app is done and you want to create a final image, you can do it with a Dockerfile like this
FROM node:alpine
WORKDIR /src
COPY . ./
RUN npm install
EXPOSE 4005
CMD ["npm", "run", "production"]
That image will have all your source code inside it, so once it's built, no changes to the code will have an effect on it unless you build it again.
To build it and run it, you'd do
docker build -t myimage .
docker run -p 3000:4005 myimage
I'm trying to figure out why Nodemon is not restarting from within Docker when passing in an environment variable. It worked previously, when I was not trying to pass in an env variable and the final command in my Dockerfile was CMD ["npm", "run", "devNoClient"].
I can see Nodemon launching in the terminal, but it doesn't restart the server when I update a file.
Makefile
node_dev:
	echo 'Starting Node dev server in Docker Container'
	docker build -t node_dev .
	docker run -it --env-file variables.env -p 8080:8080 node_dev
Dockerfile
WORKDIR /chord-app
# copy package.json into the container
COPY package.json /chord-app/
# install dependencies
RUN npm install
# Copy the current directory contents into the container at /chord-app
COPY . /chord-app/
# Make port 8080 available to the world outside this container
EXPOSE 8080
# Env is required to persist variable into built image.
# Docker run can now accept variable and it will be assigned here.
# default is run in dev mode
ENV run_mode_env=devNoClient
# Run the app when the container launches
# Due to variable, CMD syntax must change for this to work https://stackoverflow.com/a/40454758
CMD npm run $run_mode_env
package.json
"scripts": {
"devNoClient": "nodemon --exec babel-node src/server/start.js",
},
I realized it was not working because I wasn't binding any volumes to my local machine when starting the container. The container therefore has no view of the files I save on my machine, so nodemon has nothing to watch and never restarts the server.
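A bind mount along these lines would give nodemon files to watch (a sketch; the /chord-app path and the anonymous node_modules volume are assumptions based on the Dockerfile above):

```shell
docker run -it --env-file variables.env -p 8080:8080 \
  -v $(pwd):/chord-app -v /chord-app/node_modules node_dev
```

On some setups (notably bind mounts from Windows hosts), file-change events don't propagate into the container, and nodemon additionally needs its polling mode: add -L / --legacy-watch to the nodemon invocation.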
I have an Angular - Flask app that I'm trying to dockerize with the following Dockerfile:
FROM node:latest as node
COPY . /APP
COPY package.json /APP/package.json
WORKDIR /APP
RUN npm install
RUN npm install -g @angular/cli@7.3.9
CMD ng build --base-href /static/
FROM python:3.6
WORKDIR /root/
COPY --from=0 /APP/ .
RUN pip install -r requirements.txt
EXPOSE 5000
ENTRYPOINT ["python"]
CMD ["app.py"]
On building and running the image, the console gives no errors; it just seems to be stuck. What could be the issue here?
Is it because the two stages use different directories?
Since I'm dockerizing Flask as well as Angular, how can I put both in the same directory (right now one is in /APP and the other in /root)?
OR should I put the two in separate containers and use a docker-compose.yml file?
In that case, how do I write that file? My Flask app calls my Angular app, and both run on the same port, so I'm not sure running two containers is a good idea.
I am also providing the commands that I use to build and run the image for reference:
docker image build -t prj .
docker container run --publish 5000:5000 --name prj prj
In the first stage of the Dockerfile (the Angular build), use RUN instead of CMD.
RUN executes while the image is being built; CMD only records the command to run when a container starts. Intermediate stages of a multi-stage build are never run as containers, so with CMD the ng build output is never produced.
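Applied to the Dockerfile in the question, only the last line of the first stage changes, from CMD to RUN (a sketch):

```dockerfile
FROM node:latest as node
COPY . /APP
COPY package.json /APP/package.json
WORKDIR /APP
RUN npm install
RUN npm install -g @angular/cli@7.3.9
# RUN executes at build time, so the compiled output exists
# in /APP when the second stage copies it.
RUN ng build --base-href /static/
```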