Server Error (500) when starting my dockerized Laravel app

I have created a laravel app and then created a Dockerfile:
FROM php:7.4-cli
RUN apt-get update -y && apt-get install -y libmcrypt-dev
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
RUN docker-php-ext-install pdo
#mbstring
WORKDIR /app
COPY . /app
RUN composer install
EXPOSE 8000
CMD php artisan serve --host=0.0.0.0 --port=8000
Then I ran sudo docker build -t myApp . and sudo docker run -it -p 8000:8000 news-organizer. Everything worked fine.
When I copy this folder (including the Dockerfile) to another location and run composer update --ignore-platform-reqs, sudo docker build --no-cache -t theApp . and sudo docker run -it -p 8888:8888 theApp, the web app starts. But when I open 127.0.0.1:8888, I get the 500 error.
I already set permissions with sudo chmod -R 755 <myLaravelFolder>. I also tried different port numbers. The Dockerfile in the new folder is:
FROM php:7.4-cli
RUN apt-get update -y && apt-get install -y libmcrypt-dev
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
RUN docker-php-ext-install pdo
#mbstring
WORKDIR /app
COPY . /app
RUN composer install
EXPOSE 8888
CMD php artisan serve --host=0.0.0.0 --port=8888
I just can't find a way to fix the 500 Error. What can I do?
My basic idea is: create a Laravel web app, then create a Dockerfile and upload everything somewhere. Others can then download the web app, install Docker, and run sudo docker build -t <myImage> . and sudo docker run -it -p 8000:8000 <myImage>. That way they can use my Laravel app on their computers as a Docker container.
I'm running Xubuntu 20.04 as my OS.

Thank you for hinting at the logs. Eventually, I found that I had to generate the application key. It can be done via php artisan key:generate.
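In case it helps others who copy the project to a new machine, a minimal sketch of the steps that surface and fix this, assuming a standard Laravel layout (storage/logs/laravel.log and .env.example are framework defaults, not something specific to my setup):
# read the actual error behind the bare 500
tail -n 50 storage/logs/laravel.log
# create a local env file and generate the missing APP_KEY
cp .env.example .env
php artisan key:generate
After that, rebuild the image so the updated .env gets copied into the container by COPY . /app.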

Related

/bin/sh: 1: composer: not found

I am trying to run a Laravel app from a Docker container.
Here is my Dockerfile
FROM php:8.0
RUN apt-get update -y && apt-get install -y openssl zip unzip
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
RUN docker-php-ext-install pdo pdo_mysql
WORKDIR /app/api
COPY . .
RUN chmod 777 -R storage
RUN composer install
RUN php artisan key:generate
EXPOSE 8000
CMD php artisan serve --host=0.0.0.0
However, when I run the docker-compose up --build command, I get this error:
> [laravel-react-dockerized-backend 8/9] RUN composer install:
#0 0.321 /bin/sh: 1: composer: not found
What could be the solution?
Get Composer from its official Docker image with:
COPY --from=composer:latest /usr/bin/composer /usr/local/bin/composer
or, more safely, pin a specific version:
COPY --from=composer:2.5.1 /usr/bin/composer /usr/local/bin/composer
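Applied to the Dockerfile from the question, that might look like the following sketch (only the Composer installation line changes; everything else is kept as posted):
FROM php:8.0
RUN apt-get update -y && apt-get install -y openssl zip unzip
# take the Composer binary from the official image instead of the curl installer
COPY --from=composer:2.5.1 /usr/bin/composer /usr/local/bin/composer
RUN docker-php-ext-install pdo pdo_mysql
WORKDIR /app/api
COPY . .
RUN chmod 777 -R storage
RUN composer install
RUN php artisan key:generate
EXPOSE 8000
CMD php artisan serve --host=0.0.0.0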

getting "exec ./bin/activemq: no such file or directory" on docker image run

I am using the Dockerfile below:
FROM docker.io/eclipse-temurin:11-jre
ENV ACTIVEMQ_HOME /opt/activemq
RUN mkdir -p /opt/activemq && chmod 755 /opt/activemq
COPY apache-activemq-5.17.1/. /opt/activemq/
RUN apt update -y && apt upgrade -y
RUN addgroup --system activemq && adduser --system --home $ACTIVEMQ_HOME --uid 10001 --group activemq && chown -R activemq:activemq $ACTIVEMQ_HOME && chown -h activemq:activemq $ACTIVEMQ_HOME
USER 10001
WORKDIR $ACTIVEMQ_HOME
CMD ["./bin/activemq","console","-Djetty.host=0.0.0.0"]
EXPOSE 61616 8161
I build the Docker image using docker build -t 123:11 .
When I try to run the image using docker run -it 123:11, I get exec ./bin/activemq: no such file or directory.
The same image works on one server but not on another.
I tried overriding the entrypoint with --entrypoint /bin/bash and verified that the files were copied successfully.
Any reason it works on one server but not on the other?
I'm using Docker Desktop on Windows servers.
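One way to dig further when the file is clearly there (a diagnostic sketch, not a confirmed fix for this case): exec failing with "no such file or directory" on an existing script usually means its interpreter cannot be found, for example because the shebang line carries Windows CRLF endings, which is easy to pick up when the apache-activemq folder is checked out on Windows. The interpreter line can be inspected inside the image like this:
docker run --rm --entrypoint /bin/bash 123:11 -c 'ls -l bin/activemq && head -n 1 bin/activemq | cat -A'
A trailing ^M in the output would point at CRLF line endings in the script.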

Cannot run installed tool in Dockerfile even though it's there

I installed diesel-cli in a Dockerfile:
FROM alpine:latest
ENV PATH="/root/.cargo/bin:${PATH}"
RUN apk update
RUN apk add postgresql curl gcc musl-dev libpq-dev bash
RUN curl https://sh.rustup.rs -sSf | sh -s -- -y
WORKDIR /app
RUN cargo install diesel_cli --no-default-features --features postgres
COPY . .
EXPOSE 8000
CMD [ "docker/entrypoint.sh"]
That works fine. The entrypoint.sh is:
#!/bin/bash
export PATH="/root/.cargo/bin:${PATH}"
ls /root/.cargo/bin/diesel
bash -c "/root/.cargo/bin/diesel setup"
The strange thing is that the ls shows that the diesel binary is there. But when running the Docker container it still says:
bash: line 1: /root/.cargo/bin/diesel: No such file or directory
I also tried calling diesel right from the Dockerfile with the same result.
Why can't I run diesel this way?
See the comment by The Fool!
Using a different base image resolves the problem (when an existing binary reports "No such file or directory", the usual culprit is a missing dynamic loader or shared library; on Alpine that typically means a glibc/musl mismatch):
FROM debian:bullseye-slim
ENV PATH="/root/.cargo/bin:${PATH}"
RUN apt update -y
RUN apt install postgresql curl gcc libpq-dev bash -y
RUN curl https://sh.rustup.rs -sSf | sh -s -- -y
WORKDIR /app
# This may take a minute
RUN cargo install diesel_cli --no-default-features --features postgres
COPY . .
# provision the database
EXPOSE 8000
CMD [ "docker/entrypoint.sh"]

Permissions in Docker volume

I am struggling with permissions on a Docker volume; I get access denied when writing.
This is a small part of my Dockerfile:
FROM ubuntu:18.04
RUN apt-get update && \
apt-get install -y \
apt-transport-https \
build-essential \
ca-certificates \
curl \
vim && \............
RUN curl -sL https://deb.nodesource.com/setup_8.x | bash - && apt-get install -y nodejs
# Add non-root user
ARG USER=user01
RUN useradd -Um -d /home/$USER -s /bin/bash $USER && \
apt install -y python3-pip && \
pip3 install qrcode[pil]
#Copy that startup.sh into the scripts folder
COPY /scripts/startup.sh /scripts/startup.sh
#Making the startup.sh executable
RUN chmod -v +x /scripts/startup.sh
#Copy node API files
COPY --chown=user01 /node_api/* /home/user01/
USER $USER
WORKDIR /home/$USER
# Expose needed ports
EXPOSE 3000
VOLUME /data_storage
ENTRYPOINT [ "/scripts/startup.sh" ]
Also a small part of my startup.sh
#!/bin/bash
/usr/share/lib/provision.py --enterprise-seed $ENTERPRISE_SEED > config.json
Then my docker build command:
sudo docker build -t mycontainer .
And the docker run command:
sudo docker run -v data_storage:/home/user01/.client -p 3008:3000 -itd mycontainer
The problem I have is that the Python script creates the folder /home/user01/.client and copies some files into it. That always worked fine. But now I want those files, which are data files, in a volume for backup purposes. And when I map the volume, I get permission denied, so the Python script is no longer able to write.
So at the end of my Dockerfile, this instruction combined with the mapping in the docker run command gives me the permission denied:
VOLUME /data_storage
Any suggestions on how to resolve this? Are some more permissions needed for user01?
Thanks
I was able to resolve my issue by removing the VOLUME instruction from the Dockerfile and just doing the mapping when executing docker run:
sudo docker run -v data_storage:/home/user01/.client -p 3008:3000 -itd mycontainer
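An alternative sketch that keeps the data in a named volume without the permission error, assuming the volume is created fresh: pre-create and chown the mount point in the Dockerfile before the USER instruction, since Docker initializes a new named volume with the ownership of the image directory it is mounted over:
# before USER $USER in the Dockerfile above
RUN mkdir -p /home/$USER/.client && chown -R $USER:$USER /home/$USER/.client
The docker run command then stays exactly as above.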

Not being able to access webapp from host in Docker

I have a simple web project which I want to "Dockerize", but I keep failing at exposing the web app to the host.
My Dockerfile looks like:
FROM debian:jessie
RUN apt-get update -y && \
apt-get install -y python-pip python-dev curl && \
pip install --upgrade pip setuptools
COPY ./requirements.txt /app/requirements.txt
WORKDIR /app
RUN pip install -r requirements.txt
COPY . /app
WORKDIR /app/web
And requirements.txt looks like:
PasteScript==2.0.2
Pylons==1.0.2
The web directory was built using:
paster create --template=pylons web
And finally start_server.sh:
#!/bin/bash
paster serve --daemon development.ini start
Now I am able to build with:
docker build -t webapp .
And then run the command:
docker run -it -p 5000:5000 --name app webapp:latest /bin/bash
And then inside the docker container I run:
bash start_server.sh
which successfully starts the web app on port 5000, and if I curl inside the Docker container I get the expected response. The container is also up and running with the correct port mappings:
bc6511d584ae webapp:latest "/bin/bash" 2 minutes ago Up 2 minutes 0.0.0.0:5000->5000/tcp app
Now if I run docker port app it looks fine:
5000/tcp -> 0.0.0.0:5000
However, I cannot get any response from the server on the host with:
curl localhost:5000
I have probably misunderstood something here but it seems fine to me.
In your Dockerfile you need to add EXPOSE 5000. Your port mapping is correct; think of EXPOSE as opening the port on your container, which you then map to localhost with -p.
Answer in the comments:
when you make_server, bind to 0.0.0.0 instead of localhost
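For reference, with the paster serve setup from the question that binding typically lives in the [server:main] section of the generated development.ini; a sketch of the relevant lines (defaults can differ between Pylons versions):
[server:main]
use = egg:Paste#http
host = 0.0.0.0
port = 5000
With the host set to 0.0.0.0, the -p 5000:5000 mapping above works as expected.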
