I am trying to set a proxy in a Dockerfile where a package is downloaded from e.g. https://github.com/joeferner/node-java.git, but a connection to github.com cannot be established from the private network, so I have to set a proxy in the Dockerfile. Can you let me know how to do it?
I have tried this command
ENV https_proxy https://github.com
which is not working during the image build process.
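For what it's worth, the proxy value needs to point at the proxy server itself, not at github.com. http_proxy and https_proxy are predefined build args in Docker, so a common approach is to pass them at build time instead of hardcoding them in the Dockerfile; a sketch, assuming a proxy reachable at proxy.example.com:3128 (both the proxy address and the image name are placeholders):
docker build \
  --build-arg http_proxy=http://proxy.example.com:3128 \
  --build-arg https_proxy=http://proxy.example.com:3128 \
  -t node-java-image .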
I am encountering an interesting difference in startup behaviour when running a simple net6.0 web API built with docker-compose compared to one built with docker. The application itself runs in a Kubernetes cluster.
Environment
Minikube v1.26.1
Docker Desktop v4.12
Docker Compose v2.10.2
Building with docker-compose
docker-compose.yml
version: "3.8"
services:
web.api:
build:
context: ./../
dockerfile: ./web.API/Dockerfile
The context is set to the parent directory because some files needed during the build live there.
Dockerfile
FROM mcr.microsoft.com/dotnet/sdk:6.0-alpine AS build
WORKDIR /src
ENV ASPNETCORE_URLS=http://+:80
COPY Directory.Build.props ./Directory.Build.props
COPY .editorconfig ./.editorconfig
COPY ["webapi/web.API", "web.API/"]
RUN dotnet build "web.API/web.API.csproj" -c Release --self-contained true --runtime alpine-x64
RUN dotnet publish "web.API/web.API.csproj" -c Release -o /app/publish \
    --no-restore \
    --runtime alpine-x64 \
    --self-contained true \
    /p:PublishSingleFile=true
FROM mcr.microsoft.com/dotnet/runtime-deps:6.0-alpine
WORKDIR /app
EXPOSE 80
EXPOSE 443
COPY --from=build /app/publish .
ENTRYPOINT ["./web.API"]
This results in the app starting up within the kubernetes cluster with the following logs:
Now listening on: http://[::]:80
Building with docker build
Using the same Dockerfile mentioned earlier, with the same build context as in the docker-compose.yml, a deployment to k8s results in the following log:
Now listening on: http://localhost:5000
Running the image locally
Running the exact same image from the k8s cluster locally, however, results in
Now listening on: http://[::]:80
Already tried
As suggested in many posts, I tried setting the environment variable ASPNETCORE_URLS via the Dockerfile or the k8s deployment.yml (a sketch of the latter follows); neither had an impact on the startup URL.
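For reference, a minimal sketch of the kind of deployment.yml env entry that was tried (container name and image tag are illustrative):
spec:
  template:
    spec:
      containers:
        - name: web-api
          image: web.api:latest
          env:
            - name: ASPNETCORE_URLS
              value: "http://+:80"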
I can't seem to figure out why there is a difference between those 2 ways of building an image.
Update
The only thing that seems to work is to add
builder.WebHost.ConfigureKestrel(option =>
{
    option.ListenAnyIP(80);
});
to the Program.cs.
Still not sure about the reason behind the behaviour though.
A few things:
I assume the container that was already running and working on port 80 (started with docker run) is stopped before attempting to run docker-compose?
Environment variables can be set in the docker-compose.yml file.
Ports most likely need to be published correctly, which, judging from the Dockerfile and docker-compose.yml, does not seem to be the case.
Environment Variables
First off, before ENV ASPNETCORE_URLS=http://+:80 can be of any use, your docker-compose file has to define which ports to use; your docker-compose.yml (even if trimmed) does not show any ports.
Perhaps, because the ports aren't published, the app tries to bind to 80, which fails (already in use / not exposed), and it somehow ends up on 5000.
Alternatively, and more likely: it simply does not see your ENV ASPNETCORE_URLS.
You can try environment variables directly in your docker-compose file:
my-service:
  image: ${IMAGE_NAME}
  environment:
    MY_SECRET_KEY: ${MY_SECRET_KEY}
Publishing ports
In the docker-compose file you need this to publish ports:
ports:
  - "80"
  - "443"
... or
ports:
  - "80:80"   # host-port:container-port
  - "443:1234"
Additional information
The keyword EXPOSE/expose in a Dockerfile/docker-compose.yml is just informative (a comment, in a sense); functionally it does not do anything. A port needs to be published to be usable.
So those EXPOSE 443 and EXPOSE 80 lines are not telling Docker to publish anything. When running the container directly, you would publish port 80 for example like this:
docker run -p 127.0.0.1:80:80/tcp image command
This publishes port 80 so that it is available.
In short, use the ports keyword in docker-compose.yml; a combined sketch follows.
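Putting both suggestions together, a minimal sketch of the compose service from the question (values illustrative):
version: "3.8"
services:
  web.api:
    build:
      context: ./../
      dockerfile: ./web.API/Dockerfile
    environment:
      # make sure the app sees the binding at runtime
      ASPNETCORE_URLS: "http://+:80"
    ports:
      - "80:80"   # host-port:container-port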
EDIT:
I read your comment above:
But the app is not accessible in k8s when listening to localhost:5000 even with correct service and container configuration
This indicates what I am trying to say regarding the ports being published or not: your port 5000 is not published either, because nothing in your configuration publishes it.
I tried to deploy my Flask app and followed these two guides 1, 2. But I can't connect to the site via the task's public IP.
Here is my Dockerfile:
FROM python:3.7
# By default, listen on port 5000
EXPOSE 80
# Set the working directory in the container
WORKDIR /app
# Copy the dependencies file to the working directory
COPY requirements.txt .
# Install any dependencies
RUN pip install -r requirements.txt
# Copy the content of the local src directory to the working directory
COPY . .
# Specify the command to run on container start
CMD python app.py
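For reference, the app itself must bind to 0.0.0.0 on the exposed port; a minimal sketch of what app.py would need, assuming the EXPOSE 80 above is intentional:
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "ok"

if __name__ == "__main__":
    # Bind to all interfaces on the port exposed in the Dockerfile
    app.run(host="0.0.0.0", port=80)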
How can I fix it?
Enabling a public IP is not enough; you also have to attach a security group to the service you create that allows HTTP requests on the given port from the given source.
If you go to your cluster and click on "Deploy" to deploy a new service, you will see one of the tabs is Networking. Create a security group that allows HTTP requests from anywhere on port 80, as shown in the picture.
Security group configuration:
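The same ingress rule can also be added from the command line; a sketch, where the security-group ID is a placeholder for your own:
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 80 \
  --cidr 0.0.0.0/0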
I have a project which I had previously successfully deployed to Google Cloud Run, and set up with a trigger such that upon pushing to the repo's main branch on Github, it would automatically deploy. It worked great.
Then I tried to rename the github repo, which meant deleting and creating a new trigger, and now I cannot get it working again.
Every time, the build succeeds but deployment fails with this error in Cloud Build:
Step #2 - "Deploy": ERROR: (gcloud.run.services.update) Cloud Run error: Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable. Logs for this revision might contain more information.
I have not changed anything other than the repo name, leading me to believe the fix is not with my code, but I tried some changes there anyway.
I have looked into the solutions set forth in this post. However, I believe I am listening on the correct port.
My app is using Python and Flask, and contains this:
if __name__ == "__main__":
    app.run(debug=False, host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
This should use the PORT environment variable (8080) and otherwise default to 8080. I also tried just using port=8080.
I tried explicitly exposing the port in the Dockerfile, which also did not work:
FROM python:3.7
# Copy files into the docker image dir, and make that the current working dir
COPY . /docker-image
WORKDIR /docker-image
RUN pip install -r requirements.txt
CMD ["flask", "run", "--host", "0.0.0.0"]
EXPOSE 8080
Cloud Run does seem to be using port 8080. If I dig into the response, I see this nested under Response.spec.container.0:
ports: [
  0: {
    containerPort: 8080
    name: "http1"
  }
]
All that said, if I look at the logs, it shows "Now running on Port 5000".
I have no idea where that Port 5000 is coming from or being set, but trying to change the ports in Python/Flask and the Dockerfile to 5000 leads to the same errors.
How do I get it to run on port 8080? It's very strange to me that this was working FINE prior to renaming the repo and creating a new trigger. How is this setup different? The trigger does not give an option to set the port, so I'm not sure how that caused this error.
You have mixed two things up. The flask run command's default port is indeed 5000. If you want to change it, you need to add the --port parameter to your flask run command:
CMD ["flask", "run", "--host", "0.0.0.0","--port","8080"]
In addition, flask run completely ignores the standard Python entrypoint if __name__ == "__main__":. If you want to use that entrypoint, run the app with the Python runtime instead:
CMD ["python", "<main file>.py"]
I know that there are many discussions about this, but none of the proposed solutions worked for me, so I would at least like to know whether I was doing something wrong or hitting a limitation.
Step 1.
I created the default .NET Core 2.0 Web API project from Visual Studio; nothing special here, output type set to Console Application, running OK from Visual Studio 2017 Community.
Step 2. I installed the latest Docker Toolbox, since I am running Windows 10 Home Edition, which also installed VirtualBox.
Step 3. I added the following Dockerfile next to the sln:
FROM microsoft/aspnetcore-build:2.0
WORKDIR /app
EXPOSE 80
COPY . .
RUN dotnet restore
RUN dotnet build
WORKDIR /app/DockerSample
ENTRYPOINT dotnet run
Step 4. I successfully built the image with a command like docker build -t sample1 .
Step 5. The container started successfully; it was started with the command docker run -d -p 8080:80 sample1
Step 6. I pulled info about the container using the command docker logs c6
The following info was shown:
Interesting here is the address where the service is listening; it seems to be the same as the address I was getting when running the service directly from Visual Studio.
Is this the service address from the virtual machine that is running inside VirtualBox? Why is the port not 8080 or 80, as I specified in the run command?
The container looks ok, something like:
Step 7.
Now starts the fun of trying to hit the service from the Windows 10 machine. It was impossible using calls like http://localhost:8080/values/api. I also tried calls like http://192.168.99.100:8080/values/api, where 192.168.99.100 is the address of the default docker machine.
I also tried something like http://172.17.0.2:8080/values/api, where the IP address was obtained from a call like docker inspect a2; changing the port to 80 did not help :).
Trying to change the port number to 80 or 58954, the one shown in red as listening, did not help either. Windows Firewall and any other firewalls were stopped.
I tried to set up port forwarding from VirtualBox, having something like
Swapping the 80 and 8080 ports between host and guest also did not work.
Basically, none of the suggested solutions I found gave me the chance to hit the service from my Windows machine.
Mainly I was following this tutorial https://www.stevejgordon.co.uk/docker-for-dotnet-developers-part-2, which explains quite well what should be done, except that at some point it switches to Docker Desktop for Windows, so Docker Toolbox is left behind.
Do you know what I should do so that I can reach the service in the Docker container from my Windows machine?
In Docker Compose (Visual Studio's "Add Docker integration" generates a docker-compose.yml), set this:
version: '3.4'
services:
  webapi.someapi:
    image: ${DOCKER_REGISTRY-}somenamesomeapi
    build:
      context: .
      dockerfile: ../webapi/Dockerfile
    environment:
      - ASPNETCORE_URLS=https://+:443;http://+:80
      - ASPNETCORE_HTTPS_PORT=443
    ports:
      - "80:80"
      - "443:443"
In launch settings, specify that your app starts on port 80 (HTTP) and port 443 (HTTPS); a sketch of the relevant fragment follows.
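A minimal sketch of a launchSettings.json Docker profile (profile name and values illustrative; httpPort/sslPort/useSSL assume the Visual Studio container-tools schema):
{
  "profiles": {
    "Docker": {
      "commandName": "Docker",
      "httpPort": 80,
      "useSSL": true,
      "sslPort": 443
    }
  }
}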
Docker extension for Visual Studio Code: https://marketplace.visualstudio.com/items?itemName=PeterJausovec.vscode-docker
Follow its steps to orchestrate your containers.
Your issue is caused by running the container under the Development environment, which does not use port 80 for the application.
With FROM microsoft/aspnetcore-build:2.0, it seems you cannot change ASPNETCORE_ENVIRONMENT to Production.
As a solution, you could change your Dockerfile as below, which uses microsoft/aspnetcore:2.0 as the runtime base image.
FROM microsoft/aspnetcore:2.0 AS base
WORKDIR /app
EXPOSE 80
FROM microsoft/aspnetcore-build:2.0 AS build
WORKDIR /src
COPY ["TestAPI/TestAPI.csproj", "TestAPI/"]
RUN dotnet restore "TestAPI/TestAPI.csproj"
COPY . .
WORKDIR "/src/TestAPI"
RUN dotnet build "TestAPI.csproj" -c Release -o /app
FROM build AS publish
RUN dotnet publish "TestAPI.csproj" -c Release -o /app
FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "TestAPI.dll"]
I'm wondering whether there are best practices on how to inject credentials into a Docker container during a docker build.
In my Dockerfile I need to fetch resources from web servers which require basic authentication, and I'm thinking about a proper way to bring the credentials into the container without hardcoding them.
What about a .netrc file, used with curl --netrc ...? But what about security? I do not like the idea of having credentials saved in a source repository together with my Dockerfile.
Is there for example any way to inject credentials using parameters or environment variables?
Any ideas?
A few new Docker features make this more elegant and secure than it was in the past. The new multi-stage builds let us implement the builder pattern with one Dockerfile. This method puts our credentials into a temporary "builder" stage, and then that stage builds a fresh image that doesn't hold any secrets.
You have choices for how you get your credentials into your builder container. For example:
Use an environment variable: ENV creds=user:pass and curl https://$creds@host.com
Use a build-arg to pass credentials (a sketch at the end of this answer)
Copy an ssh key into the container: COPY key /root/.ssh/id_rsa
Use your operating system's own secure credentials using Credential Helpers
Multi-stage Dockerfile with multiple FROMs:
## Builder
FROM alpine:latest as builder
#
# -- insert credentials here using a method above --
#
RUN apk add --no-cache git
RUN git clone https://github.com/some/website.git /html
## Webserver without credentials
FROM nginx:stable
COPY --from=builder /html /usr/share/nginx/html
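For example, if you choose the build-arg option above, the builder stage might look like this (the ARG names are illustrative; build-args remain visible in the history of the builder stage, but not in the final image):
## Builder
FROM alpine:latest as builder
ARG GIT_USER
ARG GIT_TOKEN
RUN apk add --no-cache git
# Credentials are only used here, in the throwaway builder stage
RUN git clone "https://${GIT_USER}:${GIT_TOKEN}@github.com/some/website.git" /html

## Webserver without credentials
FROM nginx:stable
COPY --from=builder /html /usr/share/nginx/html
and build with:
docker build --build-arg GIT_USER=me --build-arg GIT_TOKEN=secret -t website .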