Best practice .NET Core API and Docker - docker

I'm trying to find out how to properly manage a Dockerfile in order to build the best possible image, but unfortunately no good approach has emerged from my searches. That's why I'm asking here.
This is my context:
I'm developing a .NET Core 3 web API
I'm using the template from VS2019
I'm using the original Dockerfile with some modifications
Here is my Dockerfile :
#See https://aka.ms/containerfastmode to understand how Visual Studio uses this Dockerfile to build your images for faster debugging.
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-buster-slim AS base
RUN apt-get update && apt-get install -y libfontconfig1
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build
WORKDIR /src
COPY ["Src/API/API.csproj", "Src/API/"]
RUN dotnet restore "Src/API/API.csproj"
COPY . .
WORKDIR "/src/Src/API"
RUN dotnet build "API.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "API.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "API.dll"]
There is my solution structure :
.
| .dockerignore
| mySolution.sln
+---Docs
+---Src
| \---API
| | API.csproj
| | API.csproj.user
| | appsettings.Development.json
| | appsettings.json
| | appsettings.Staging.json
| | Dockerfile
| | Dockerfile.original
| | Program.cs
| | Startup.cs
| +---.config
| | dotnet-tools.json
| +---bin
| +---Controllers (source files)
| +---Data (source files)
| +---Database (source files)
| +---Dtos (source files)
| +---Helpers (source files)
| +---Mail (source files)
| +---Migrations (EF source files)
| +---Models (source files)
| +---obj
| +---PDF (source files)
| +---Properties
| | | launchSettings.json
| +---Services (source files)
| \---wwwroot
| +---Templates
| \---uploads
\---Tests
As you can see, if I want to build my image without VS2019, I have to put the Dockerfile in the root directory (where the .sln file is).
For now, if I use this Dockerfile, Docker will copy all files and directories from the Src directory, including the bin and obj directories, and the wwwroot directory, which can contain files from my upload tests.
If I check the file structure in my container in Visual Studio:
As you can see, I don't need all the files, only my sources, in order to build and deploy my app.
How can I improve my Dockerfile in order to make the most proper image?

Some tips:
For security and portability, use alpine instead of buster-slim in the final image.
In the final image, add USER nobody to run the container as a non-root user. That requires using ports above 1024.
For building, control the build context with '-f' so you can keep the Dockerfile next to the project but use the solution root as the context; this also works with CI/CD pipelines.
Run your unit tests inside the Dockerfile before the last stage, so a failing test stops the build.
Think about secrets; how to handle them depends on where you will run your container, but storing them in app config files isn't recommended.
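Put together, the tips above might look like the following sketch of the question's Dockerfile. This is only an illustration: the alpine tags, the 8080 port, the fontconfig package name (alpine uses apk, not apt-get), and the Tests path are assumptions, not verified against the project.

```dockerfile
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-alpine AS base
# fontconfig is assumed to be the alpine equivalent of libfontconfig1
RUN apk add --no-cache fontconfig
WORKDIR /app
# Non-root users cannot bind ports below 1024, so listen on a high port
ENV ASPNETCORE_URLS=http://+:8080
EXPOSE 8080

FROM mcr.microsoft.com/dotnet/core/sdk:3.1-alpine AS build
WORKDIR /src
COPY ["Src/API/API.csproj", "Src/API/"]
RUN dotnet restore "Src/API/API.csproj"
COPY . .
WORKDIR "/src/Src/API"
RUN dotnet build "API.csproj" -c Release -o /app/build

# Run unit tests before the final stage; a failing test aborts the build
FROM build AS test
WORKDIR /src/Tests
RUN dotnet test

FROM build AS publish
WORKDIR "/src/Src/API"
RUN dotnet publish "API.csproj" -c Release -o /app/publish

FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
USER nobody
ENTRYPOINT ["dotnet", "API.dll"]
```

Combined with a .dockerignore that excludes bin, obj, and wwwroot/uploads, this also keeps test artifacts out of the build context.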

Related

Why is the rust docker image so huge

I am packaging a Rust app into a Docker image to deploy to my server. I found the Rust Docker image size to be more than 1 GB (larger than any of my other apps using Java and Python). Why is the Rust Docker image so huge? I checked the layers and found the cargo build command adds more than 400 MB.
FROM rust:1.54
LABEL maintainer="jiangtingqiang@gmail.com"
ENV ROCKET_ADDRESS=0.0.0.0
ENV ROCKET_PORT=11014
WORKDIR /app
COPY . .
RUN rustup default stable
RUN cargo build
CMD ["cargo", "run"]
Is it possible to make the rust docker image smaller?
The Rust image is definitely not 1 GB. From Docker Hub we can see that the images are far smaller. Your image is 1 GB because it contains all the intermediate build artifacts, which are not necessary for the functioning of the application - just check the size of the target folder on your PC.
Rust image sizes:
+---------------+----------------+------------------+
| Digest | OS/ARCH | Compressed Size |
+---------------+----------------+------------------+
| 99d3d924303a | linux/386 | 265.43 MB |
| 852ba83a6e49 | linux/amd64 | 196.74 MB |
| 6eb0fe2709a2 | linux/arm/v7 | 256.59 MB |
| 2a218d86ec85 | linux/arm64/v8 | 280.22 MB |
+---------------+----------------+------------------+
The Rust Docker image contains the compiler, which you need to build your app, but you don't have to package it with your final image. Nor do you have to package all the temporary artifacts generated by the build process.
Solution
In order to reduce the final, production image, you have to use a multi-stage Docker build:
The first stage builds the application
The second stage discards all the irrelevant stuff and keeps only the built application:
# Build stage
FROM rust:1.54 as builder
WORKDIR /app
ADD . /app
RUN cargo build --release
# Prod stage
FROM gcr.io/distroless/cc
COPY --from=builder /app/target/release/app-name /
CMD ["./app-name"]

Running docker when dockerfile is in folder inside a project and the solution includes multiple projects [duplicate]

This question already has answers here:
How to include files outside of Docker's build context?
(19 answers)
Closed 1 year ago.
I added Docker support to a .NET Core application, and this is what I got:
#See https://aka.ms/containerfastmode to understand how Visual Studio uses this Dockerfile to build your images for faster debugging.
FROM mcr.microsoft.com/dotnet/aspnet:5.0-buster-slim AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM mcr.microsoft.com/dotnet/sdk:5.0-buster-slim AS build
WORKDIR /src
COPY ["SMSys.csproj", ""]
COPY ["../DataLayer/DataLayer.csproj", "../DataLayer/"]
COPY ["../Utilities/Utilities.csproj", "../Utilities/"]
COPY ["../ServiceLayer/ServiceLayer.csproj", "../ServiceLayer/"]
RUN dotnet restore "./SMSys.csproj"
COPY . .
WORKDIR "/src/."
RUN dotnet build "SMSys.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "SMSys.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "SMSys.dll"]
My Dockerfile is where 'SMSys.csproj' is.
If I run docker build inside that directory, I get:
COPY failed: forbidden path outside the build context
If I change some of the project references after the copy commands and run docker build from outside the directory where all my projects are, I get over a thousand errors relating to my solution. The errors are about missing assemblies and packages, as if the projects were oblivious to one another, when in fact they are properly referenced and build fine when I launch the solution through Visual Studio.
This is an example of a solution that I followed that didn't work:
https://www.jamestharpe.com/docker-include-files-outside-build-context/
What's the best solution to implement in order to run my project through Docker?
UPDATE
FROM mcr.microsoft.com/dotnet/aspnet:5.0.3-buster-slim AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM mcr.microsoft.com/dotnet/sdk:5.0.103-buster-slim AS build
WORKDIR /src
# Prevent 'Warning: apt-key output should not be parsed (stdout is not a terminal)'
ENV APT_KEY_DONT_WARN_ON_DANGEROUS_USAGE=1
# install NodeJS 13.x
# see https://github.com/nodesource/distributions/blob/master/README.md#deb
RUN apt-get update -y
RUN apt-get upgrade -y
RUN apt-get install -y mc
#RUN apt-get install curl gnupg -yq
#RUN curl -sL https://deb.nodesource.com/setup_12.x | bash -
RUN apt-get install -y npm
COPY ["SMSysSolution/SMSys.csproj", "SMSysSolution/"]
COPY ["DataLayer/DataLayer.csproj", "DataLayer/"]
COPY ["Utilities/Utilities.csproj", "Utilities/"]
COPY ["ServiceLayer/ServiceLayer.csproj", "ServiceLayer/"]
COPY ["XUnitIntegrationTests/XUnitIntegrationTests.csproj", "XUnitIntegrationTests/"]
COPY ["XUnitTestProject1/XUnitTestProject1.csproj", "XUnitTestProject1/"]
RUN dotnet restore "./SMSysSolution/SMSys.csproj"
COPY . .
WORKDIR "/src/SMSysSolution"
RUN dotnet build "SMSys.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "SMSys.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "SMSys.dll"]
I had to make a few changes to the paths to make it work.
Dockerfile
Please mind your COPY instructions. The first path is the physical location in your build context, whereas the second is the location inside the container. Apparently, you are trying to create a tree like this:
| src
| SMSys.csproj
| DataLayer
| ...
| Utilities
| ...
| ...
I'm not sure if this is exactly what you want...
Moreover, there were some issues with these dotnet Docker images. You may try to use a more recent version.
Please, try to do something like:
FROM mcr.microsoft.com/dotnet/aspnet:5.0.3-buster-slim AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM mcr.microsoft.com/dotnet/sdk:5.0.103-buster-slim AS build
WORKDIR /src
COPY ["{FOLDER OF THE SMSys.csproj}/SMSys.csproj", "{MATCHING FOLDER OF THE SMSys.csproj}/"]
COPY ["DataLayer/DataLayer.csproj", "DataLayer/"]
COPY ["Utilities/Utilities.csproj", "Utilities/"]
COPY ["ServiceLayer/ServiceLayer.csproj", "ServiceLayer/"]
RUN dotnet restore "./SMSys.csproj"
COPY . .
WORKDIR "/src/{FOLDER OF THE SMSys.csproj}"
RUN dotnet build "SMSys.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "SMSys.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "SMSys.dll"]
In this way, you would be creating a tree like:
| src
| {FOLDER OF THE SMSys.csproj}
| SMSys.csproj
| DataLayer
| ...
| Utilities
| ...
| ...
Running the Dockerfile with context
Taking the example above, to build that image the context must include all those folders. Either run the docker command at the level containing the folders, or use docker-compose and specify that the context is at the top of the folder tree.
As an example, following the docker-compose approach, you could locate the file like:
| docker-compose.yaml
| {FOLDER OF THE SMSys.csproj}
| DataLayer
| Utilities
| ...
And then, inside the file:
services:
  smsys-app:
    ports:
      - "..." # ports of the service
    build:
      context: .
      dockerfile: {FOLDER OF THE SMSys.csproj}/Dockerfile
This is assuming that you keep the Dockerfile at the same level as SMSys.csproj, but you may put it in another location :)
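Without docker-compose, the same thing can be done with plain docker build: run it from the solution root (the context) and point -f at the Dockerfile. A sketch, with the folder name and tag as placeholders:

```shell
# Context is the solution root (the trailing dot); -f points at the
# Dockerfile kept next to the .csproj. Paths and tag are assumptions.
docker build -f SMSysSolution/Dockerfile -t smsys:latest .
```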

Is there a more elegant way to copy specific files using Docker COPY to the working directory?

Attempting to create a container with microsoft/dotnet:2.1-aspnetcore-runtime. The .NET Core solution file has multiple projects nested underneath the solution, each with its own .csproj file. I am attempting to write a more elegant COPY instruction for the sub-projects.
The sample available here https://github.com/dotnet/dotnet-docker/tree/master/samples/aspnetapp has a solution file with only one .csproj so creates the Dockerfile thusly:
COPY *.sln .
COPY aspnetapp/*.csproj ./aspnetapp/
RUN dotnet restore
It works this way
COPY my_solution_folder/*.sln .
COPY my_solution_folder/project/*.csproj my_solution_folder/
COPY my_solution_folder/subproject_one/*.csproj subproject_one/
COPY my_solution_folder/subproject_two/*.csproj subproject_two/
COPY my_solution_folder/subproject_three/*.csproj subproject_three/
for a solution folder structure of:
my_solution_folder\my_solution.sln
my_solution_folder\project\my_solution.csproj
my_solution_folder\subproject_one\subproject_one.csproj
my_solution_folder\subproject_two\subproject_two.csproj
my_solution_folder\subproject_three\subproject_three.csproj
but this doesn't (was a random guess)
COPY my_solution_folder/*/*.csproj working_dir_folder/*/
Is there a more elegant solution?
2021: with BuildKit, see ".NET package restore in Docker cached separately from build" from Palec.
2018: Considering that wildcards are not well supported by COPY (moby issue 15858), you can:
either experiment with adding .dockerignore files in the folders you don't want to copy (while excluding the folders you do want): it is cumbersome
or, as shown here, make a tar of all the folders you want
Here is an example, to be adapted in your case:
find .. -name '*.csproj' -o -name 'Finomial.InternalServicesCore.sln' -o -name 'nuget.config' \
| sort | tar cf dotnet-restore.tar -T - 2> /dev/null
With a Dockerfile including:
ADD docker/dotnet-restore.tar ./
The idea is: the archive gets automatically expanded with ADD.
The OP sturmstrike mentions in the comments "Optimising ASP.NET Core apps in Docker - avoiding manually copying csproj files (Part 2)" from Andrew Lock "Sock"
The alternative solution actually uses the wildcard technique I previously dismissed, but with some assumptions about your project structure, a two-stage approach, and a bit of clever bash-work to work around the wildcard limitations.
We take the flat list of csproj files, and move them back to their correct location, nested inside sub-folders of src.
# Copy the main source project files
COPY src/*/*.csproj ./
RUN for file in $(ls *.csproj); do mkdir -p src/${file%.*}/ && mv $file src/${file%.*}/; done
L01nl suggests in the comments an alternative approach that doesn't require compression: "Optimising ASP.NET Core apps in Docker - avoiding manually copying csproj files", from Andrew Lock "Sock".
FROM microsoft/aspnetcore-build:2.0.6-2.1.101 AS builder
WORKDIR /sln
COPY ./*.sln ./NuGet.config ./
# Copy the main source project files
COPY src/*/*.csproj ./
RUN for file in $(ls *.csproj); do mkdir -p src/${file%.*}/ && mv $file src/${file%.*}/; done
# Copy the test project files
COPY test/*/*.csproj ./
RUN for file in $(ls *.csproj); do mkdir -p test/${file%.*}/ && mv $file test/${file%.*}/; done
RUN dotnet restore
# Remainder of build process
This solution is much cleaner than my previous tar-based effort, as it doesn't require any external scripting, just standard docker COPY and RUN commands.
It gets around the wildcard issue by copying across csproj files in the src directory first, moving them to their correct location, and then copying across the test project files.
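The flatten-then-restore loop can be checked outside Docker. A minimal sketch, assuming a scratch directory /tmp/flatten-demo and made-up project names App and Lib:

```shell
# Simulate what COPY src/*/*.csproj ./ leaves behind: flat csproj files
rm -rf /tmp/flatten-demo && mkdir -p /tmp/flatten-demo && cd /tmp/flatten-demo
touch App.csproj Lib.csproj
# The loop from the answer: recreate src/<name>/ and move each file back
for file in $(ls *.csproj); do mkdir -p src/${file%.*}/ && mv $file src/${file%.*}/; done
ls src/App src/Lib   # each project file is back in its own folder
```

The ${file%.*} expansion strips the extension, so App.csproj yields the folder name App.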
One other option to consider is using a multi-stage build to prefilter / prep the desired files. This is mentioned on the same moby issue 15858.
For those building on .NET Framework, you can take it a step further and leverage robocopy.
For example:
FROM mcr.microsoft.com/dotnet/framework/sdk:4.8 AS prep
# Gather only artifacts necessary for NuGet restore, retaining directory structure
COPY / /temp/
RUN Invoke-Expression 'robocopy C:/temp C:/nuget /s /ndl /njh /njs *.sln nuget.config *.csproj packages.config'
[...]
# New build stage, independent cache
FROM mcr.microsoft.com/dotnet/framework/sdk:4.8 AS build
# Copy prepped NuGet artifacts, and restore as distinct layer
COPY --from=prep ./nuget ./
RUN nuget restore
# Copy everything else, build, etc
COPY src/ ./src/
RUN msbuild
[...]
The big advantage here is that there are no assumptions made about the structure of your solution. The robocopy '/s' flag will preserve any directory structure for you.
Note the '/ndl /njh /njs' flags are there just to cut down on log noise.
In addition to VonC's answer (which is correct): I am building on Windows 10 and targeting Linux containers. The equivalent of the above answer using Windows and 7z (which I normally have installed anyway) is:
7z a -r -ttar my_project_files.tar .\*.csproj .\*.sln .\*nuget.config
followed by the ADD in the Dockerfile to decompress.
Be aware that after installing 7-zip, you will need to add the installation folder to your environment path to call it in the above fashion.
Looking at the moby issue 15858, you will see the execution of the BASH script to generate the tar file and then the subsequent execution of the Dockerfile using ADD to extract.
Fully automate this either with a batch file or with PowerShell, as in the example below.
Pass PowerShell variables to Docker commands
Another solution, maybe a bit slower, but all in one:
Everything in one file, and one command: docker build .
I've split my Dockerfile into two stages.
The first image tars the *.csproj files
The second image uses the tar and sets up the project
code:
FROM ubuntu:18.04 as tar_files
WORKDIR /tar
COPY . .
RUN find . -name "*.csproj" -print0 | tar -cvf projectfiles.tar --null -T -
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build
WORKDIR /source
# copy sln
COPY *.sln .
# Copy all the csproj files from previous image
COPY --from=tar_files /tar/projectfiles.tar .
RUN tar -xvf projectfiles.tar
RUN rm projectfiles.tar
RUN dotnet restore
# Remainder of build process
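The find-and-tar stage can be reproduced outside Docker too. A sketch, assuming a scratch path /tmp/tar-demo and a made-up project App:

```shell
# Build a tree with a project file and a source file
rm -rf /tmp/tar-demo && mkdir -p /tmp/tar-demo/src/App && cd /tmp/tar-demo
touch src/App/App.csproj src/App/Program.cs
# The command from the tar_files stage: archive only the csproj files,
# preserving their directory structure
find . -name "*.csproj" -print0 | tar -cf projectfiles.tar --null -T -
# Extracting shows that only project files made it into the archive
mkdir extracted && tar -xf projectfiles.tar -C extracted
ls extracted/src/App   # App.csproj only; Program.cs was filtered out
```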
I use this script
COPY SolutionName.sln SolutionName.sln
COPY src/*/*.csproj ./
COPY tests/*/*.csproj ./
RUN cat SolutionName.sln \
| grep "\.csproj" \
| awk '{print $4}' \
| sed -e 's/[",]//g' \
| sed 's#\\#/#g' \
| xargs -I {} sh -c 'mkdir -p $(dirname {}) && mv $(basename {}) $(dirname {})/'
RUN dotnet restore "/src/Service/Service.csproj"
COPY ./src ./src
COPY ./tests ./tests
RUN dotnet build "/src/Service/Service.csproj" -c Release -o /app/build
Copy solution file
Copy project files
(optional) Copy test project files
Do the Linux magic (scan the sln file for projects and restore the directory structure)
Restore packages for service project
Copy sources
(optional) Copy test sources
Build service project
This is working for all Linux containers
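The sln-parsing pipeline from that RUN instruction can be tested against a stub solution file outside Docker. A sketch, where /tmp/sln-demo, the project name, and the GUIDs are all placeholders:

```shell
# A minimal .sln fragment in the format Visual Studio writes
mkdir -p /tmp/sln-demo && cd /tmp/sln-demo
cat > Demo.sln <<'EOF'
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "Service", "src\Service\Service.csproj", "{11111111-1111-1111-1111-111111111111}"
EndProject
EOF
# grep the project lines, awk takes the quoted path field, the first sed
# strips quotes and commas, the second converts backslashes to slashes
cat Demo.sln | grep "\.csproj" | awk '{print $4}' | sed -e 's/[",]//g' | sed 's#\\#/#g'
# prints: src/Service/Service.csproj
```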

Docker COPY with folder wildcards

Given a file structure like this:
project root
|-- X.sln
|-- src
| |-- Foo
| | |-- Foo.fsproj
| | |-- Foo.fs
| |-- Bar
| |-- Bar.fsproj
| |-- Bar.fs
|-- test
|-- Baz
|-- Baz.fsproj
I'd like to first add all .fsproj files to my Docker image, then run a command, then add the rest of the files. I tried the following, but of course it didn't work:
COPY X.sln .
COPY **/*.fsproj .
RUN dotnet restore
COPY . .
RUN dotnet build
The idea is that after the first two COPY steps, the file tree on the image is like this:
working dir
|-- X.sln
|-- src
| |-- Foo
| | |-- Foo.fsproj
| |-- Bar
| |-- Bar.fsproj
|-- test
|-- Baz
|-- Baz.fsproj
and the rest of the tree is only added in after RUN dotnet restore.
Is there a way to emulate this behavior, preferably without resorting to scripts outside of the dockerfile?
You can use two RUN commands to solve this problem, using the shell commands (find, sed, and xargs).
Follow the steps:
Find all fsproj files, extract each filename without its extension using a regex, and with xargs use that to create directories with mkdir;
Based on the previous script, change the regex to produce a from-to pair and use the mv command to move the files into the newly created folders.
Example:
COPY *.sln ./
COPY */*.fsproj ./
RUN find *.fsproj | sed -e 's/.fsproj//g' | xargs mkdir
RUN find *.fsproj | sed -r -e 's/((.+).fsproj)/.\/\1 .\/\2/g' | xargs -I % sh -c 'mv %'
References:
how to use xargs with sed in search pattern
If you use the dotnet command to manage your solution you can use this piece of code:
Copy the solution and all project files to the WORKDIR
List projects in the solution with dotnet sln list
Iterate the list of projects and move the respective *proj files into newly created directories.
COPY *.sln ./
COPY */*/*.*proj ./
RUN dotnet sln list | \
tail -n +3 | \
xargs -I {} sh -c \
'target="{}"; dir="${target%/*}"; file="${target##*/}"; mkdir -p -- "$dir"; mv -- "$file" "$target"'
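The parameter expansions in that one-liner are doing the path splitting. A small sketch with a made-up path, outside Docker:

```shell
# target is a project path as printed by dotnet sln list (made-up here)
target="src/Foo/Foo.fsproj"
dir="${target%/*}"     # strip the shortest trailing /component -> src/Foo
file="${target##*/}"   # strip the longest leading  */ prefix  -> Foo.fsproj
echo "$dir $file"      # prints: src/Foo Foo.fsproj
```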
One pattern that can be used to achieve what you want without resorting to a script outside the Dockerfile is this:
COPY <project root> .
RUN <command to tar/zip the directory to save a copy inside the container> \
<command the removes all the files you don't want> \
dotnet restore \
<command to unpack tar/zip and restore the files> \
<command to remove the tar/zip> \
dotnet build
This would keep all of your operations inside the container. I've bundled them all in one RUN command to keep all of that activity into a single layer of the build. You can break them out if you need to.
Here's just one example on linux of how to recursively remove all files except the ones you don't want: https://unix.stackexchange.com/a/15701/327086. My assumption, based on your example, is that this won't be a costly operation for you.
Great question and I believe I have found the solution.
Have a .dockerignore like this:
# Ignore everything except *.fsproj.
**/*.*
!**/*.fsproj
Have your Dockerfile-AA like this (please update the ls)
FROM your-image
USER root
RUN mkdir -p /home/aa
WORKDIR /home/aa
COPY . .
RUN ls
RUN ls src
RUN ls src/p1
RUN ls src/p2
RUN ls src/p3
RUN dotnet restore
Have your docker command like this
sudo docker build --rm -t my-new-img -f Dockerfile-AA .
Run it the first time: it will show only the fsproj files being copied.
Run it again: you cannot see the ls results because it is using the cache - great.
Obviously, you cannot do the full restore-and-build in the same Dockerfile, since the .dockerignore excludes the sources; have another Dockerfile like Dockerfile-BB:
FROM my-new-img
COPY . .
RUN dotnet build
So, have your script like this
docker build --rm -t my-new-img -f Dockerfile-AA .
rm -f .dockerignore
docker build --rm -t my-new-img2 -f Dockerfile-BB .
This should work. When building my-new-img, it is going to be blazing fast. You need to try this at least twice, because in my experience the cache was not created right away. This is way better than copying the project files line by line.

Command docker run create empty container

The command docker run creates an empty container. I mean that the Docker image builds successfully, but when I run a container none of my changes are there. I am working on Ubuntu 16.04 with Vagrant. My Dockerfile:
FROM node:6.2.0-wheezy
MAINTAINER Lukasz Migut
RUN mkdir -p /app
ADD package.json /app/package.json
RUN cd /app/ && \
npm cache clear && \
npm install --no-bin-links --quiet
RUN mkdir -p /var/www/backend
WORKDIR /var/www/backend
COPY entrypoint.sh /entrypoint.sh
CMD ["/bin/bash", "/entrypoint.sh"]
EXPOSE 8080
This is the output of docker build .:
Sending build context to Docker daemon 6.144 kB
Step 1 : FROM node:6.2.0-wheezy
6.2.0-wheezy: Pulling from library/node
47994b92ab73: Already exists
a3ed95caeb02: Already exists
9b7b75987c3c: Already exists
d66c4af59bfb: Already exists
26df7d6a7371: Already exists
b656b9b4e7eb: Already exists
e3753c84bc68: Already exists
Digest: sha256:9a04df0cd52487e2fb6d6316890380780209f9211f4767934c5f80f2da83a6bf
Status: Downloaded newer image for node:6.2.0-wheezy
---> ecbd08787958
Step 2 : MAINTAINER Lukasz Migut
---> Running in 2a1d31015aea
---> 6d6ff7769ec5
Removing intermediate container 2a1d31015aea
Step 3 : ADD package.json /app/package.json
---> 5a28cc87577c
Removing intermediate container 3df429908e6c
Step 4 : RUN cd /app/ && npm cache clear && npm install --no-bin-links --quiet
---> Running in 1fc442eb449a
npm info it worked if it ends with ok
npm info using npm@3.8.9
npm info using node@v6.2.0
npm info ok
blog-backend@0.0.1 /app
`-- hapi@13.4.1
+-- accept@2.1.1
+-- ammo@2.0.1
+-- boom@3.2.0
+-- call@3.0.1
+-- catbox@7.1.1
+-- catbox-memory@2.0.2
+-- cryptiles@3.0.1
+-- heavy@4.0.1
+-- hoek@4.0.0
+-- iron@4.0.1
+-- items@2.1.0
+-- joi@8.1.0
| +-- isemail@2.1.2
| `-- moment@2.13.0
+-- kilt@2.0.1
+-- mimos@3.0.1
| `-- mime-db@1.23.0
+-- peekaboo@2.0.1
+-- shot@3.0.1
+-- statehood@4.0.1
+-- subtext@4.0.3
| +-- content@3.0.1
| +-- pez@2.1.1
| | +-- b64@3.0.1
| | `-- nigel@2.0.1
| | `-- vise@2.0.1
| `-- wreck@7.2.1
`-- topo@2.0.1
---> ad5bf17db156
Removing intermediate container 1fc442eb449a
Step 5 : WORKDIR /var/www/backend
---> Running in 3f75e64f3880
---> 477162d999c0
Removing intermediate container 3f75e64f3880
Step 6 : COPY entrypoint.sh /entrypoint.sh
---> b0918e5611e2
Removing intermediate container b1c46f9175dd
Step 7 : CMD /bin/bash /entrypoint.sh
---> Running in fd2c72465c11
---> 275911ac22ca
Removing intermediate container fd2c72465c11
Step 8 : EXPOSE 8080
---> Running in d54c25afb6a1
---> f4ba799427cc
Removing intermediate container d54c25afb6a1
Successfully built f4ba799427cc
Command docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
<none> <none> f4ba799427cc 28 minutes ago 514.2 MB
node 6.2.0-wheezy ecbd08787958 8 days ago 503.9 MB
Next I try to run a container with docker run -it node:6.2.0-wheezy /bin/bash
and when I log in to the container I can't see, for example, the folders created in the Dockerfile, and there are no node_modules.
What did I do wrong? Why can't I see my changes?
You built an image... it is the one tagged "<none>" in your docker images output. However, when you tried to run a container, you used the old image that you based your new image on. Your changes are in the new image, not the old one.
To get it to work you have to tag your new image with a name and use that name:
docker build -t MYIMAGENAME .
docker run -it MYIMAGENAME /bin/bash