Unfortunately, I cannot use docker-compose at the moment, and I have to get the Google Cloud SQL Proxy running in a Docker container. But it doesn't run in the container: MySQL is unable to connect to Google Cloud SQL.
Keep in mind that I was able to connect outside of the container on my machine, so I know the connection itself works.
My Dockerfile looks like this:
FROM node:12-alpine
RUN wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy \
&& chmod +x cloud_sql_proxy
RUN ./cloud_sql_proxy -instances=project_placeholder:region_placeholder:instance_placeholder=tcp:3306 -credential_file=service_account.json &
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 80
CMD ["npm", "run", "serve"
How can I configure it so the Cloud SQL Proxy runs?
The RUN directive executes at build time, so at runtime your CMD only starts the Node process. That is why you cannot connect: the proxy process is not running at all.
One way is to start both processes from the entrypoint, but be aware that in that case, if the proxy goes down for some reason, your container will keep running, because the container's main process is Node.js.
Change the entrypoint to
ENTRYPOINT [ "sh", "-c", "/cloud_sql_proxy -instances=project_placeholder:region_placeholder:instance_placeholder=tcp:3306 -credential_file=service_account.json & npm start" ]
Related
I am having some issues deploying to Cloud Run lately. When I try to deploy the Dockerfile below to Cloud Run, it fails with the error Failed to start and then listen on the port defined by the PORT environment variable:
FROM phpmyadmin/phpmyadmin:latest
EXPOSE 8080
RUN sed -i 's/80/${PORT}/g' /etc/apache2/sites-available/000-default.conf /etc/apache2/ports.conf
ENTRYPOINT [ "/docker-entrypoint.sh" ]
CMD [ "apache2-foreground" ]
The ENTRYPOINT and CMD were added explicitly, even though phpmyadmin/phpmyadmin:latest uses this same ENTRYPOINT and CMD, to see if that would solve it; they are not otherwise required. The same Docker image, when deployed with docker run, runs properly and listens on port 8080. Is there something I am doing wrong?
This is the command I use to deploy:
gcloud run deploy phpmyadmin --memory=1Gi --platform=managed \
--allow-unauthenticated --add-cloudsql-instances project_id:us-central1:db-name \
--region=us-central1 --image gcr.io/project_id/phpmyadmin:1.3 \
--update-env-vars PMA_HOST=localhost,PMA_SOCKET="/cloudsql/project_id:us-central1:db-name",PMA_ABSOLUTE_URI=phpmyadmin.domain.com
This is all I can find in the logs (some data redacted):
https://gist.github.com/shanukk27/9dd4b3076c55307bd6e853a76e7a34e0
The Cloud Run runtime environment seems to differ slightly from the docker run command: you can't use ENTRYPOINT and CMD at the same time.
ENTRYPOINT [ "/docker-entrypoint.sh" ]
CMD [ "apache2-foreground" ]
It works with Docker Run (Why? Docker issue? Docker feature?) and not on Cloud Run (missing feature? bug?).
Use only one of them, for example:
ENTRYPOINT /docker-entrypoint.sh && apache2-foreground
EDIT
A strange remark shared by Shanu: the same two commands work for a WordPress deployment, but don't work here.
FROM wordpress:5.3.2-php7.3-apache
EXPOSE 8080
# Copy custom entrypoint from repo
COPY cloud-run-entrypoint.sh /usr/local/bin/
# Change apache listening port and set permission for docker entrypoint
RUN sed -i 's/80/${PORT}/g' /etc/apache2/sites-available/000-default.conf /etc/apache2/ports.conf && \
chmod +x /usr/local/bin/cloud-run-entrypoint.sh
# Wordpress conf
COPY wordpress/. /var/www/html/
# Custom entrypoint
ENTRYPOINT ["cloud-run-entrypoint.sh","docker-entrypoint.sh"]
# Start apache when docker container starts
CMD ["apache2-foreground"]
The problem is solved here, but the reason is not clear.
Note to Googler (Steren? Ahmet?): Can you share more details on this behavior?
TL;DR: I am trying to deploy my MERN stack application to GCP's Cloud Run, and I am struggling with what I believe is a port issue.
My React application is in a client folder inside of my Node.js application.
Here is my one Dockerfile to run both the front-end and back-end:
FROM node:13.12.0-alpine
WORKDIR /app
COPY . ./
# Installing components for be connector
RUN npm install --silent
WORKDIR /app/client
RUN npm install --silent
WORKDIR /app
RUN chmod +x /app/entrypoint.sh
ENTRYPOINT [ "/app/entrypoint.sh" ]
... and here is my entrypoint.sh file:
#!/bin/sh
node /app/index.js &
cd /app/client
npm start
docker-compose up works locally, and docker run -p 8080:8080 -p 3000:3000 <image_id> runs the image I built. Port 8080 is for Node and port 3000 for the React app. However, on Cloud Run, the app does not work. When I visit the app deployed to Cloud Run, the frontend initially loads for a split second, but then the app crashes as it attempts to make requests to the API.
In the Advanced Settings, there is a container port, which defaults to 8080. I've tried changing this to 3000, but neither works. I cannot enter 8080,3000, as the field accepts only a single integer for the port. Is it possible to deploy React + Node to Cloud Run at the same time like this? How can I have Cloud Run listen on both 8080 and 3000, as opposed to just one of the two?
Thanks!
It's not currently possible.
You can, however, run multiple processes inside Cloud Run and use nginx to proxy requests between them depending on the URL, similar to what's recommended in this answer.
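For illustration, a minimal nginx config of that shape might look like the sketch below. The details are assumptions: nginx takes over the single port Cloud Run routes to, so the Node API would have to move off 8080 (here to 8081), the React app stays on 3000 as in the question, and the /api/ prefix stands in for however the frontend reaches the backend.
# nginx.conf (sketch): fan requests out to the two processes by URL.
events {}

http {
  server {
    listen 8080;                        # the one port Cloud Run routes to

    location /api/ {
      proxy_pass http://127.0.0.1:8081; # Node backend, moved off 8080
    }

    location / {
      proxy_pass http://127.0.0.1:3000; # React app
    }
  }
}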
I'm trying to configure my Docker container so it's possible to SSH into it (the container will be run on Azure). I managed to create an image that lets a user SSH into a container created from it; the Dockerfile looks like this (it's not mine, I found it on the internet):
FROM mcr.microsoft.com/dotnet/core/sdk:2.2-stretch AS build
EXPOSE 2222
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
COPY sshd_config /etc/ssh
RUN echo 'root:Docker' | chpasswd
RUN sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
CMD ["/usr/sbin/sshd", "-D"]
I'm using mcr.microsoft.com/dotnet/core/sdk:2.2-stretch because it's what I need later on to run the application.
With the Dockerfile above, I run docker build . -t ssh. I can confirm that it's possible to SSH into a container created from the ssh image with the following commands:
docker run -d -p 0.0.0.0:2222:22 --name ssh ssh
ssh root@localhost -p 2222
My application's Dockerfile:
FROM mcr.microsoft.com/dotnet/core/sdk:2.2-stretch AS build
WORKDIR /src
COPY ["Application.WebAPI/Application.WebAPI.csproj", "Application.WebAPI/"]
COPY ["Processing.Dependency/Processing.Dependency.csproj", "Processing.Dependency/"]
COPY ["Processing.QueryHandling/Processing.QueryHandling.csproj", "Processing.QueryHandling/"]
COPY ["Model.ViewModels/Model.ViewModels.csproj", "Model.ViewModels/"]
COPY ["Core.Infrastructure/Core.Infrastructure.csproj", "Core.Infrastructure/"]
COPY ["Model.Values/Model.Values.csproj", "Model.Values/"]
COPY ["Sql.Business/Sql.Business.csproj", "Sql.Business/"]
COPY ["Model.Events/Model.Events.csproj", "Model.Events/"]
COPY ["Model.Messages/Model.Messages.csproj", "Model.Messages/"]
COPY ["Model.Commands/Model.Commands.csproj", "Model.Commands/"]
COPY ["Sql.Common/Sql.Common.csproj", "Sql.Common/"]
COPY ["Model.Business/Model.Business.csproj", "Model.Business/"]
COPY ["Processing.MessageBus/Processing.MessageBus.csproj", "Processing.MessageBus/"]
COPY [".Processing.CommandHandling/Processing.CommandHandling.csproj", "Processing.CommandHandling/"]
COPY ["Processing.EventHandling/Processing.EventHandling.csproj", "Processing.EventHandling/"]
COPY ["Sql.System/Sql.System.csproj", "Sql.System/"]
COPY ["Application.Common/Application.Common.csproj", "Application.Common/"]
RUN dotnet restore "Application.WebAPI/Application.WebAPI.csproj"
COPY . .
WORKDIR "/src/Application.WebAPI"
RUN dotnet build "Application.WebAPI.csproj" -c Release -o /app
FROM build AS publish
RUN dotnet publish "Application.WebAPI.csproj" -c Release -o /app
FROM ssh AS final
WORKDIR /app
EXPOSE 80
EXPOSE 443
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "Application.WebApi.dll"]
As you can see, I'm using the ssh image as the base image in the final stage. Even though I was able to SSH into a container created from the ssh image, I'm unable to SSH into a container created from the latter Dockerfile. Here is the docker-compose.yml I'm using to make starting the container easier:
version: '3.7'
services:
  application.webapi:
    image: application.webapi
    container_name: webapi
    ports:
      - "0.0.0.0:5000:80"
      - "0.0.0.0:2222:22"
    build:
      context: .
      dockerfile: Application.WebAPI/Dockerfile
    environment:
      - ASPNETCORE_ENVIRONMENT=docker
When I run docker exec -it webapi bash and execute service ssh status, I get [FAIL] sshd is not running ... failed! But when I run service ssh start and then try to SSH into the container, it works. Unfortunately this approach is not acceptable; the SSH daemon should start on container startup.
I tried cron and other tools available on Debian, but this is a slim image and systemd is not available there; I'm also not fond of installing a pile of extra packages on slim images.
Do you have any ideas what could be wrong here?
You have conflicting startup command definitions in your final image. Note that CMD does not simply run a command in your image; it defines the startup command, and it has a complex interaction with ENTRYPOINT (in short: if both are present, CMD just supplies extra arguments to ENTRYPOINT).
You can see the table of possibilities in the Dockerfile documentation: https://docs.docker.com/engine/reference/builder/. In addition, there's a bonus complication when you mix and match CMD and ENTRYPOINT in different layers:
Note: If CMD is defined from the base image, setting ENTRYPOINT will reset CMD to an empty value. In this scenario, CMD must be defined in the current image to have a value.
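That note is exactly what bites here: the ssh base image defines CMD ["/usr/sbin/sshd", "-D"], and the final stage sets an ENTRYPOINT, which resets the inherited CMD, so sshd never starts. In miniature:
# Sketch of the interaction using the images from this question.
# The ssh image declares its startup command via CMD...
FROM mcr.microsoft.com/dotnet/core/sdk:2.2-stretch AS ssh
CMD ["/usr/sbin/sshd", "-D"]

# ...but setting ENTRYPOINT in a later stage resets the inherited CMD
# to empty, so sshd is never started.
FROM ssh AS final
ENTRYPOINT ["dotnet", "Application.WebApi.dll"]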
As far as I know, you can't get what you want just by layering images. You will need to create a startup script in your final image that starts sshd and then runs dotnet Application.WebApi.dll.
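A minimal sketch of such a script (start.sh is a name I'm introducing; wire it into the final stage with COPY, chmod and ENTRYPOINT):
#!/bin/sh
# start.sh (hypothetical) - replace the final stage's ENTRYPOINT with:
#   COPY start.sh /start.sh
#   RUN chmod +x /start.sh
#   ENTRYPOINT ["/start.sh"]

# Run sshd in the background (-D keeps sshd itself from daemonizing;
# the trailing & backgrounds it within this script).
/usr/sbin/sshd -D &

# Run the application as the container's main process.
exec dotnet Application.WebApi.dll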
I'm trying to run my Go app in a Docker container, but it fails with exit code 1. The application works well on my local machine but not in Docker.
Below is my Dockerfile.
FROM golang:1.8 as goimage
RUN go get -u github.com/golang/dep/cmd/dep
COPY . src/github.com/aditmayapada/tryout
WORKDIR src/github.com/aditmayapada/tryout
ENV PORT 9090
RUN dep ensure
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o bin/main
FROM alpine:3.6 as baseimagealp
RUN apk add --no-cache bash
ENV WORK_DIR=/docker/bin
WORKDIR $WORK_DIR
# RUN mkdir src/github.com/aditmayapada/tryout/bin
# WORKDIR src/github.com/aditmayapada/tryout/bin
COPY --from=goimage /go/src/github.com/aditmayapada/tryout/bin ./
ENTRYPOINT /docker/bin/main
EXPOSE 9090
And below is the repository for the app I want to deploy in Docker:
https://github.com/aditmayapada/tryout
I have tried to get logs using docker events, and I only get this:
Logs
Then I tried to use --logs in Docker, but it's not showing anything.
Am I missing something here? My app runs well on my local machine... Thank you.
I have briefly looked at your code and found that the app can exit when connection.Ping() returns an error:
https://github.com/aditmayapada/tryout/blob/master/main.go#L44
I recommend adding some logging at that point to identify where the app exits. It seems something is wrong with the connection to the DB inside Docker.
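For example, something along these lines around the Ping call (a sketch against the linked main.go; it assumes connection is the *sql.DB opened earlier and that "log" is imported):
// Sketch: surface the DB error instead of exiting silently with code 1.
if err := connection.Ping(); err != nil {
    // log.Fatalf writes the reason to stderr before exiting, so
    // `docker logs <container>` shows why the process stopped.
    log.Fatalf("cannot reach the database from the container: %v", err)
}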
I have a Flask app. If I use python app.py to start the server, it runs perfectly, and the browser client gets what I want.
However, it doesn't work in a Docker container. With the Dockerfile below:
FROM python:3.6.5-slim
RUN mkdir /opt/convosimUI
WORKDIR /opt/convosimUI
RUN pip install Flask
RUN pip install grpcio
RUN pip install grpcio-tools
ADD . .
EXPOSE 5000
ENV FLASK_APP=app.py
CMD ["python", "-u", "app.py"]
The browser (on Windows) cannot get a response from the server. Inside the Linux container everything works perfectly: I can use wget to fetch content from 127.0.0.1. But from outside the container, nothing inside it is reachable.
If I change the last line of the Dockerfile to:
CMD ["flask", "run", "--host", "0.0.0.0"]
instead of using python app.py, then it works again.
Why does this happen? And what can I do if I want to use the python app.py command?
I need python app.py because there is some other parallel processing in app.py: it shares a client for another service, and that client must always be on while the web server is running, so I can't just split them apart.
Any ideas are welcome. Thanks!
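For what it's worth, flask run --host 0.0.0.0 works because it binds to all interfaces, while Flask's app.run() defaults to 127.0.0.1, which is unreachable from outside the container. Assuming app.py currently calls app.run() with defaults, a sketch of the change that keeps python app.py as the command:
# Tail of app.py (sketch). Binding to 0.0.0.0 mirrors what
# `flask run --host 0.0.0.0` does, so the container's port 5000
# becomes reachable from the host once published.
if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)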