I am unable to get the container IP address so I can open it from the browser.
Code Snippet
PS H:\DevAreaLocal\COMPANY - RAD PROJECTS\DockerApp\WebDockerCoreApp> docker-compose build
Building webdockercoreapp
Step 1/5 : FROM microsoft/aspnetcore:1.1
---> 4fe9b4d0d093
Step 2/5 : WORKDIR /WebDockerCoreApp
---> Using cache
---> b1536c639a21
Step 3/5 : COPY . ./
---> Using cache
---> 631ca2773407
Step 4/5 : EXPOSE 80
---> Using cache
---> 94a50bb10fbe
Step 5/5 : ENTRYPOINT dotnet WebDockerCoreApp
---> Using cache
---> 7003460ebe84
Successfully built 7003460ebe84
Successfully tagged webdockercoreapp:latest
PS H:\DevAreaLocal\COMPANY - RAD PROJECTS\DockerApp\WebDockerCoreApp> docker inspect --format="{{.Id}}" 7003460ebe84
I got the below ID:
sha256:7003460ebe84bdc3e8647d7f26f9038936f032de487e70fb4f1ca137f9dde737
If I run the below command:
docker inspect -f "{{ .NetworkSettings.Networks.nat.IPAddress }}" 7003460ebe84
I get the below response:
Template parsing error: template: :1:19: executing "" at <.NetworkSettings.Net...>: map has no entry for key "NetworkSettings"
docker-compose.yml file settings:
version: '2.1'
services:
  webdockercoreapp:
    image: webdockercoreapp
    build:
      context: ./WebDockerCoreApp
      dockerfile: Dockerfile
    ports:
      - "5000:80"
networks:
  default:
    external:
      name: nat
By running "docker network ls" I got the below response:
NETWORK ID     NAME                       DRIVER   SCOPE
f04966f0394c   nat                        nat      local
3bcb5f906e01   none                       null     local
680d4b4e1a0d   webdockercoreapp_default   nat      local
When I run "docker network inspect webdockercoreapp_default"
I get the below response:
[
    {
        "Name": "webdockercoreapp_default",
        "Id": "680d4b4e1a0de228329986f217735e5eb35e9925fd04321569f9c9e78508ab88",
        "Created": "2017-12-09T22:59:55.1558081+05:30",
        "Scope": "local",
        "Driver": "nat",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "windows",
            "Options": null,
            "Config": [
                {
                    "Subnet": "0.0.0.0/0",
                    "Gateway": "0.0.0.0"
                }
            ]
        },
        "Internal": false,
        "Attachable": true,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {
            "com.docker.network.windowsshim.hnsid": "ad817a46-e7ff-4fc7-9bb9-d6cf17820b8a"
        },
        "Labels": {
            "com.docker.compose.network": "default",
            "com.docker.compose.project": "webdockercoreapp"
        }
    }
]
When you run the command docker inspect --format="{{.Id}}" 7003460ebe84, you're running it against the image ID, not a container ID.
Images are the static assets that you build; containers are run from them. So what you need to do is first start a container from your image, via:
docker-compose up
Now you'll be able to see the running containers via:
docker ps
Find the container you want; let's say it's abcd1234
Now you'll be able to run your original command against the container - rather than the image.
docker inspect --format="{{.Id}}" abcd1234
This will return the full SHA of the container. Since you originally asked about the network settings, you'll be able to run something like:
docker inspect -f "{{ .NetworkSettings.Networks.your_network_here.IPAddress }}" abcd1234
If you're unsure exactly what your network name is (it looks like it should be nat), just do a full docker inspect abcd1234, look at the output, and adjust the filter as needed.
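A minimal end-to-end sketch of that workflow for the Compose project in the question; the container name webdockercoreapp_webdockercoreapp_1 is only the default name Compose tends to generate and may differ on your machine:
# start the service defined in docker-compose.yml in the background
docker-compose up -d
# list running containers and note the container ID or name
docker ps
# full SHA of the container (a container ID this time, not the image ID)
docker inspect --format "{{.Id}}" webdockercoreapp_webdockercoreapp_1
# IP address on the nat network declared in docker-compose.yml
docker inspect -f "{{ .NetworkSettings.Networks.nat.IPAddress }}" webdockercoreapp_webdockercoreapp_1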
Changing the commands within the Dockerfile solved the issue. The explanation is below, followed by the code snippet.
Since we must build our project, this first container we create is a
temporary container which we will use to do just that, and then
discard it at the end.
Next, we copy the .csproj files into our temporary container's
'/app' directory. We do this because .csproj files contain a
list of the package references our project needs. After copying this file,
dotnet will read from it and then go out and fetch all of the
dependencies and tools which our project needs.
Once we've pulled down all of those dependencies, we copy the rest of
the project source into the temporary container. We then tell dotnet to
publish our application with a Release configuration and specify the output path.
We should have successfully compiled our project. Now we need to build
our finalized container.
Our base image for this final container is similar to, but different from,
the build image in the first FROM instruction: it doesn't have the libraries
needed to build an ASP.NET Core app, only to run one.
Conclusion:
We have now successfully performed what is called a multi-stage build.
We used the temporary container to build our project and then moved
the published DLL into another container, so that we minimized the
footprint of the end result. We want this container to have the
absolute minimum dependencies required to run; if we had kept
using our first image, then it would have come packaged with other
layers (for building ASP.NET apps) which were not vital and would
therefore increase our image size.
FROM microsoft/aspnetcore-build:1.1 AS build-env
WORKDIR /app
COPY *.csproj ./
RUN dotnet restore
COPY . ./
RUN dotnet publish -c Release -o out
FROM microsoft/aspnetcore:1.1
WORKDIR /app
COPY --from=build-env /app/out .
ENTRYPOINT ["dotnet", "CoreApiDockerSupport.dll"]
Put the following in the .dockerignore file:
bin\
obj\
Note: the above settings will work whether you create an ASP.NET Core project from the Visual Studio IDE with built-in Docker support (using the docker-compose.yml and docker-compose.ci.build.yml files) or without it.
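For completeness, a short sketch of building and running the multi-stage image above outside of Compose; the tag coreapidockersupport is just an example, and the 5000:80 mapping mirrors the docker-compose.yml from the question:
# build the multi-stage Dockerfile shown above
docker build -t coreapidockersupport .
# publish container port 80 on host port 5000, as in docker-compose.yml
docker run -d -p 5000:80 --name coreapp coreapidockersupport
# the site should then respond on http://localhost:5000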
Source 1 - building-sample-app
Source 2 - asp-net-core-on-linux with Docker
I have a Python app that needs access to a private repository, which is referenced in the Dockerfile like this:
RUN --mount=type=ssh pip install -r requirements.txt
I have followed the instructions from the official Docker docs, and things work fine when I run
docker build --ssh default=C:\Users\Ravi.Kumar\.ssh\id_rsa -t somename:latest . from the command line on the host machine.
Now I am trying to get this to work using the VS Code Remote Containers extension. I get this in the logs when opening the project in a container:
Container server: Remote to local stream terminated with error: {
    message: 'connect ENOENT \\\\.\\pipe\\openssh-ssh-agent',
    name: 'Error',
    stack: 'Error: connect ENOENT \\\\.\\pipe\\openssh-ssh-agent\n' +
        '\tat PipeConnectWrap.afterConnect [as oncomplete] (net.js:1146:16)'
}
Also, when the remote container starts, I can see this is the docker build command being used:
Start: Run: docker build -f d:\Code\somename\Dockerfile -t vsc-somename-8afa92e4f821805c825a5facd311c4f9 d:\Code\somename
devcontainer.json file:
// For format details, see https://aka.ms/devcontainer.json. For config options, see the README at:
// https://github.com/microsoft/vscode-dev-containers/tree/v0.217.4/containers/docker-existing-dockerfile
{
    "name": "Existing Dockerfile",
    "context": "..",
    "dockerFile": "../Dockerfile",
    "settings": {},
    "build": {},
    "extensions": []
}
Question: how do I tell the Remote Containers extension to use the --ssh arg in the docker build command?
I think this reference page can help you find the right syntax to customize docker commands using the devcontainer.json.
Unfortunately, it seems you can't specify arguments to be passed to your docker build command directly.
You can only pass in build-args:
"build": { "args": { "MYARG": "MYVALUE"} }
A potential workaround for your problem is to build the image using the command line you mentioned, and then run the Attach to Running Container... VS Code action to work inside it from your VS Code instance.
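A rough sketch of that workaround, assuming the image from the question; the container name somename-dev and the sleep command used to keep the container alive are illustrative only:
# build on the host, where SSH agent forwarding works (command from the question)
docker build --ssh default=C:\Users\Ravi.Kumar\.ssh\id_rsa -t somename:latest .
# keep a container running so VS Code has something to attach to
# (assumes the image has a sleep binary, e.g. a Debian-based Python image)
docker run -d --name somename-dev somename:latest sleep infinity
# then run the "Remote-Containers: Attach to Running Container..." action and pick somename-dev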
I'm using a Jenkins container; I run this container using docker-compose.
Here is my docker-compose file:
version: '3.3'
services:
  jenkins-service:
    build:
      context: ./
    image: docker/jenkins-local
    ports:
      - 8080:8080
      - 5000:5000
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - jenkins_data_prod:/var/jenkins_home
      - jenkins_tmp_data_prod:/tmp
volumes:
  jenkins_data_prod:
  jenkins_tmp_data_prod:
and here is my Dockerfile:
FROM jenkins/jenkins:lts
# Installation of docker
USER root
RUN curl -sSL https://get.docker.com/ | sh
# Adding user jenkins to the "docker" group
RUN usermod -aG docker jenkins
# Installation of docker-compose
RUN curl -L https://github.com/docker/compose/releases/download/1.26.0/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose
RUN chmod +x /usr/local/bin/docker-compose
#USER jenkins
After that I ran Jenkins using the docker-compose up --build command.
I first noticed that Docker was mounting volumes named "jenkins_jenkins_data_prod" and "jenkins_jenkins_tmp_data_prod", while I had written only "jenkins_data_prod" and "jenkins_tmp_data_prod" in my docker-compose.yml, so I modified my file and named the volumes "data_prod" and "tmp_data_prod",
like this:
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - data_prod:/var/jenkins_home
      - tmp_data_prod:/tmp
volumes:
  data_prod:
  tmp_data_prod:
I found the volumes mounted in "/var/lib/docker/volumes/jenkins_data_prod/_data", still named "jenkins_data_prod" and "jenkins_tmp_data_prod", when I inspected the mount point using this command:
docker volume inspect jenkins_data_prod
Command result:
{
    "CreatedAt": "2021-06-15T23:33:41Z",
    "Driver": "local",
    "Labels": {
        "com.docker.compose.project": "jenkins",
        "com.docker.compose.version": "1.29.1",
        "com.docker.compose.volume": "data_prod"
    },
    "Mountpoint": "/var/lib/docker/volumes/jenkins_data_prod/_data",
    "Name": "jenkins_data_prod",
    "Options": null,
    "Scope": "local"
}
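(A hedged aside, not part of the original post: Compose prefixes volume, network, and container names with the project name, which defaults to the directory name, "jenkins" here, while the com.docker.compose.volume label keeps the unprefixed name from the YAML. Two commands that show or change this, assuming the Compose v1 CLI used above:)
# list the volumes created by this Compose project
docker volume ls --filter label=com.docker.compose.project=jenkins
# run under an explicit project name, e.g. "ci": the volumes become ci_data_prod, ci_tmp_data_prod
docker-compose -p ci up -d --build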
After that I ran a Jenkins pipeline to build a personal project on GitLab; it uses "/var/jenkins_home" as the workspace.
I have a docker-compose.yml file using bind-mount volumes.
Here is the docker-compose file I run:
version: '3.3'
services:
  netcore:
    image: mcr.microsoft.com/dotnet/sdk:3.1
    volumes:
      - ../../projet-api/:/sources
      - ${NUGET_HOME}/.nuget/:/root/.nuget
      - ../build/netcore/application:/application
    working_dir: /sources
    command: dotnet publish -o /application -c Debug --self-contained false /p:Version=${VERSION}
When I run the image in the "/var/jenkins_home" workspace I get this error:
+ docker-compose run --rm netcore
Microsoft (R) Build Engine version 16.7.2+b60ddb6f4 for .NET
Copyright (C) Microsoft Corporation. All rights reserved.
MSBUILD : error MSB1003: Specify a project or solution file. The current working directory does not contain a project or solution file.
so "/var/jenkins_home" was empty and my docker-compose only created folders with the name of "/projet-api" and "/build/netcore/application" i used to see where is the workspace containing the project i found it in "\wsl$\docker-desktop-data\version-pack-data\community\docker\volumes\jenkins_data_prod_data\workspace"
i looked arround docs and found that "\wsl$\docker-desktop-data\version-pack-data\community\docker\volumes" was mapped to "/var/lib/docker/volumes/"
So I decided to try running the image using environment variables:
version: '3.3'
services:
  netcore:
    image: mcr.microsoft.com/dotnet/sdk:3.1
    volumes:
      - ${PWD_docker}/../../projet-api/:/sources
      - ${NUGET_HOME}/.nuget/:/root/.nuget
      - ${PWD_docker}/../build/netcore/application:/application
    working_dir: /sources
    command: dotnet publish -o /application -c Debug --self-contained false /p:Version=${VERSION}
The relevant part of my Jenkinsfile:
node {
    //*********************************
    // getting code
    //*********************************
    stage ('SCM') {
        checkout scm
    }
    //*********************************
    // docker images
    //*********************************
    stage('RUNNING NETCORE') {
        dir ('project-docker/compile') {
            def containerPWD = pwd() // printing /var/jenkins_home/....
            def pwdval = "/var/lib/docker/volumes/jenkins_data_prod/_data"
            def dockerHostPWD = containerPWD.replace("/var/jenkins_home", pwdval) // /var/lib/docker/volumes/jenkins_data_prod/_data
            def props = readProperties file: '.env'
            VERSION = props.VERSION
            withEnv(["NUGET_HOME=${dockerHostPWD}", "VERSION=${VERSION}", "PWD_docker=${dockerHostPWD}"]) {
                sh """
                docker-compose run --rm netcore
                """
            }
        }
    }
    // ... rest of the code ...
I still get the same error: MSBUILD : error MSB1003: Specify a project or solution file. The current working directory does not contain a project or solution file.
Questions
1/ Running my docker-compose with those paths only creates folders; with the last path I used, they ended up in "\wsl$\Ubuntu\var\lib\docker\volumes". So it looks like Jenkins, through the path "/var/lib/docker/volumes", does not reach "\wsl$\docker-desktop-data\version-pack-data\community\docker\volumes", which is where the workspace containing the project lives. Can anyone help me figure out how to access these volumes?
Maybe "/var/lib/docker/volumes" does not really map to "\wsl$\docker-desktop-data\version-pack-data\community\docker\volumes", but to "\wsl$\Ubuntu\var\lib\docker\volumes" instead?
(Without having to make a symbolic link, or bind-mount my host "/var/jenkins_home" to the Jenkins "/var/jenkins_home" in the docker-compose file that runs Jenkins.)
2/ My second question is why "jenkins_" is prefixed to my volume names: the labels show "com.docker.compose.volume": "data_prod", yet the name shows "Name": "jenkins_data_prod". Why is the volume label "data_prod" while the volume name isn't?
While I was trying to convert a Docker Compose file with container-transform, I got the following error:
Container "container-name" is missing required parameter 'image'.
Services with the image parameter work fine. However, the ones with the build parameter instead of image cause the error. I want to build some of the images from a Dockerfile by using the build parameter, and I don't need an image parameter in the Docker Compose file at all. What would be the most effective solution here?
Here is an example:
Successful transformation for the db service:
docker-compose.yml:
db:
  image: postgres
Dockerrun.aws.json:
"containerDefinitions": [
{
"essential": true,
"image": "postgres",
"memory": 128,
"mountPoints": [
{
"containerPath": "/var/lib/postgresql/data/",
"sourceVolume": "Postgresql"
}
],
"name": "db"
}
Unsuccessful transformation for the web service, since build is used instead of the image parameter:
docker-compose.yml:
web:
  build:
    context: .
    dockerfile: Dockerfile
The issue is that an AWS ECS (= Elastic Container Service) task definition cannot depend on a Dockerfile to build the image. The image has to already be built for it to be used in a task definition. For this reason the "image" key is required in a task definition JSON file, and so it also has to be in the docker-compose file you are converting from.
The image for the task definition can come from Docker Hub (as the postgres image does), or you can build your own images and push them to AWS ECR (= Elastic Container Registry).
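A rough sketch of that build-and-push flow with the AWS CLI v2; the account ID 123456789012, region us-east-1, and repository name web below are placeholders:
# create the ECR repository once
aws ecr create-repository --repository-name web --region us-east-1
# build locally from the Dockerfile referenced by the compose "build:" section
docker build -t web .
# log in to ECR, then tag and push the image
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker tag web:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/web:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/web:latest
# then reference the pushed image in the compose file before converting:
#   web:
#     image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/web:latest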
I have developed some static web pages using jQuery & Bootstrap. Here is the folder structure:
Using the below commands I am able to run the Docker image:
Build the image
docker build -t weather-ui:1.0.0 .
Run the docker image
docker run -it -p 9080:80 weather-ui:1.0.0
This works fine and I am able to see the pages at http://docker-host:9080.
But I would like to create a docker-compose setup for it. I have created a docker-compose file like below:
version: '2'
services:
  weather-ui:
    build: .
    image: weather-ui:1.0.0
    volumes:
      - .:/app
    ports:
      - "9080:9080"
The above compose file is not working; it gets stuck:
$docker-compose up
Building weather-ui
Step 1 : FROM nginx:1.11-alpine
---> bedece1f06cc
Step 2 : MAINTAINER ***
---> Using cache
---> ef75a70d43e8
Step 3 : COPY . /usr/share/nginx/html
---> 6fbc3a1d4aff
Removing intermediate container 2dc46f1f751d
Successfully built 6fbc3a1d4aff
WARNING: Image for service weather-ui was built because it did not already exist. To rebuild this image you must use `docker-compose build` or `docker-compose up --build`.
Recreating weatherui_weather-ui_1 ...
Recreating weatherui_weather-ui_1 ... done
Attaching to weatherui_weather-ui_1
It gets stuck at the above line and I really don't know why.
Any pointers or hints to resolve this issue would be great.
As per Antonio's edit,
I can see the running container:
$docker ps
CONTAINER ID   IMAGE              COMMAND                  CREATED         STATUS         PORTS                                     NAMES
69ea4ff1a3ea   weather-ui:1.0.2   "nginx -g 'daemon ..."   6 seconds ago   Up 5 seconds   80/tcp, 443/tcp, 0.0.0.0:9080->9080/tcp   weatherui_weather-ui_1
But when launching the page I couldn't see anything. It says the site can't be reached.
docker-compose up builds your Docker container (if not already done) and attaches the container to your console.
If you open your browser, and go to http://localhost:9080, you should see your website.
You don't need to map a volume (volumes: - .:/app) in docker-compose.yml, because you already copy the static files in the Dockerfile:
COPY . /usr/share/nginx/html
If you want to launch your container in the background (in "detached" mode), add the -d option: docker-compose up -d.
By default docker-compose does not rebuild the container if it already exists; to build a new container each time, add the --build option: docker-compose up -d --build.
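A small sketch putting those options together, plus one hedged observation that is not part of the answer above: the docker ps output in the question shows nginx listening on 80/tcp while the compose file maps 9080:9080, so changing the mapping to "9080:80" (as the original docker run did) is likely also needed before the page loads:
# rebuild the image and start the service in the background
docker-compose up -d --build
# check the container state and the published ports
docker-compose ps
# quick smoke test from the host (assumes host port 9080 is mapped to the nginx port)
curl -I http://localhost:9080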
I am really stuck with the usage of Docker VOLUMEs. I have a plain Dockerfile:
FROM ubuntu:latest
VOLUME /foo/bar
RUN touch /foo/bar/tmp.txt
I ran $ docker build -f dockerfile -t test . and it was successful. After this, I interactively ran a shell in the Docker container created from the test image. That is, I ran $ docker run -it test
Observations:
/foo/bar is created but empty.
docker inspect test mounting info:
"Volumes": {
"/foo/bar": {}
}
It seems that it is not mounting at all. The task seems pretty straightforward, but am I doing something wrong?
EDIT: I am looking to persist the data that is created inside this mounted volume directory.
The VOLUME instruction must be placed after the RUN.
As stated in https://docs.docker.com/engine/reference/builder/#volume :
Note: If any build steps change the data within the volume after it has been declared, those changes will be discarded.
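A minimal sketch of that reordering, written as a shell snippet so it can be pasted directly; the file name Dockerfile.fixed and tag test-fixed are just examples:
# write a Dockerfile that creates the file *before* declaring the volume
cat > Dockerfile.fixed <<'EOF'
FROM ubuntu:latest
RUN mkdir -p /foo/bar && touch /foo/bar/tmp.txt
VOLUME /foo/bar
EOF
docker build -f Dockerfile.fixed -t test-fixed .
# the anonymous volume is initialized from the image content, so tmp.txt shows up
docker run --rm test-fixed ls /foo/bar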
If you want to know the source of the volume created by the docker run command:
docker inspect --format='{{json .Mounts}}' yourcontainer
will give output like this:
[{
    "Name": "4c6588293d9ced49d60366845fdbf44fac20721373a50a1b10299910056b2628",
    "Source": "/var/lib/docker/volumes/4c6588293d9ced49d60366845fdbf44fac20721373a50a1b10299910056b2628/_data",
    "Destination": "/foo/bar",
    "Driver": "local",
    "Mode": "",
    "RW": true,
    "Propagation": ""
}]
Source contains the path you are looking for.
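And since the EDIT asks about persisting data created at run time, here is a hedged sketch that mounts a named volume over /foo/bar instead of relying on the anonymous one; the volume name foo_bar_data is arbitrary:
# create a named volume and mount it at run time
docker volume create foo_bar_data
docker run --rm -v foo_bar_data:/foo/bar test touch /foo/bar/tmp.txt
# a later container sees the same file, because the named volume persists
docker run --rm -v foo_bar_data:/foo/bar test ls /foo/bar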