asp.net-core 2.0 Docker deploy

I have an app developed in asp.net-core 2.0 and deployed on Linux with Docker.
So I created a Docker image and ran it on the Linux server like this:
docker run -p 80:80 --name my-container my-image:1.0
So the container my-container was created from the Docker image my-image:1.0.
Now the issue: when I make some changes to my app and want to deploy those changes, I have to stop/remove my-container and create a new one from a new Docker image, like:
docker stop my-container
docker rm my-container
docker run -p 80:80 --name my-container my-image:1.1
Is there any way to just update the container with the new image? The point is to use the existing container with the new version of the image.

Is there any way to just update the container with the new image?
No. But this is not what you actually need, since you said your goal is the following:
Now the issue: when I make some changes to my app and want to deploy those changes, I have to stop/remove my-container and create a new one from a new Docker image
Your Dockerfile most likely looks like this:
FROM microsoft/aspnetcore
WORKDIR /app
COPY . .
ENTRYPOINT ["dotnet", "myapp.dll"]
So you just need to create a volume that exposes your workdir /app to the host filesystem, outside the container (use the -v parameter with docker run). Then simply restart the container after applying changes to your app, as in the sketch below.
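A minimal sketch of that workflow, assuming the app is published to /home/user/publish on the host (the path is hypothetical):
# publish new binaries to a folder on the host (example path)
dotnet publish -c Release -o /home/user/publish
# mount that folder over the container's /app workdir
docker run -p 80:80 -v /home/user/publish:/app --name my-container my-image:1.0
# after publishing a new build into the same folder:
docker restart my-container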

Related

How to access docker volume files from the code in a docker container

I have created a docker volume with this command:
docker run -ti --rm -v TestVolume1:/testvolume1 ubuntu
Then I created a file there called TestFile.txt and added text to it.
I also have a simple "Hello world" .NET Core app with this Dockerfile:
FROM mcr.microsoft.com/dotnet/aspnet:6.0
COPY bin/Release/net6.0/publish/ ShareFileTestInstance1/
WORKDIR /ShareFileTestInstance1
ENTRYPOINT ["dotnet", "ShareFileTestInstance1.dll"]
I published it using
dotnet publish -c Release
then ran
docker build -t counter-image -f Dockerfile .
And finally executed
docker run -it --rm --name=counter-container counter-image -v TestVolume1:/testvolume1 ubuntu
to run my app with a docker volume
What I want to achieve is to access a file that is in a volume ("TestFile.txt" in my case) from code in the container.
For example:
Console.WriteLine(File.Exists("WHAT FILE PATH HAS TO BE HERE") ? "File exists." : "File does not exist.");
Is it also possible to combine all this in the Dockerfile? I want to add one more container next and connect it to the volume to save data there.
The parameters for docker run can be either for docker or for the program running in the docker container. Parameters for docker go before the image name and parameters for the program in the container go after the image name.
The volume mapping is a parameter for docker, so it should go before the image name. So instead of
docker run -it --rm --name=counter-container counter-image -v TestVolume1:/testvolume1 ubuntu
you should do
docker run -it --rm --name=counter-container -v TestVolume1:/testvolume1 counter-image
When you do that, your file should be accessible to your program at /testvolume1/TestFile.txt.
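In C#, the check from the question would then look like this (the path comes straight from the volume target above):
Console.WriteLine(File.Exists("/testvolume1/TestFile.txt") ? "File exists." : "File does not exist.");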
It's not possible to do the mapping in the Dockerfile as you ask. Mappings may vary from docker host to docker host, so they need to be specified at run-time.
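For reference, a Dockerfile can declare a mount point with the VOLUME instruction, but that only yields an anonymous volume when nothing is mounted there at run-time; it cannot name a specific volume such as TestVolume1. A sketch based on the question's Dockerfile:
FROM mcr.microsoft.com/dotnet/aspnet:6.0
COPY bin/Release/net6.0/publish/ ShareFileTestInstance1/
WORKDIR /ShareFileTestInstance1
# declares the mount point only; the TestVolume1 mapping still happens at docker run
VOLUME /testvolume1
ENTRYPOINT ["dotnet", "ShareFileTestInstance1.dll"]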

Docker file in host machine not available in container using bind volume

I am facing an issue where, after running the container and using a bind mount to mount a directory on the host into the container, I am not able to see new files created on the host machine inside the container.
The Python code creates a file inside the container which should be available on the host machine too, however this does not happen when I start the container with the command below. Updates to the Python code and HTML are, however, visible inside the container.
sudo docker container run -p 5000:5000 --name flaskapp --volume feedback1:/app/feedback/ --volume /home/deepak/PycharmProjects/NewDockerProject/sampleapp:/app flask_image
However, after starting the container using the command below, everything seems to work fine. I can see all the files from container to host and vice versa (newly created, edited). I got this command from the Learn Docker in a Month of Lunches book.
sudo docker container run --mount type=bind,source=/home/deepak/PycharmProjects/NewDockerProject/sampleapp,target=/app -p 5000:5000 --name flaskapp flask_image
Below is the content of my Dockerfile:
FROM python:3.8-alpine
WORKDIR /app
COPY ./requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python","main.py"]
Could someone please help me figure out the difference between the two commands? I am using Ubuntu. Thank you.
In my case I got volumes working using the following docker run args (but I am running without --mount type=bind):
docker run -it ... -v mysql_data:/var/lib/mysql -v storage:/usr/shared/app_storage
where:
mysql_data is the volume name
/var/lib/mysql is the path inside the container
You can list volumes with:
docker volume ls
and inspect them to see where they point on your system (usually /var/lib/docker/volumes/{volume_name}/_data):
docker volume inspect mysql_data
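Illustrative output (a sketch; exact fields vary by Docker version):
[
    {
        "CreatedAt": "2022-01-01T00:00:00Z",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/mysql_data/_data",
        "Name": "mysql_data",
        "Options": {},
        "Scope": "local"
    }
]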
To create a volume, use the following command:
docker volume create {volume_name}

Attach a file (config file) at the time of running a docker container from the local machine

I have created a customised Docker image which runs some code after a container is created.
But I want to attach a config file at deployment time, and our config file is saved on the local machine.
docker run -d -ti -v /home/logs/:/home/logs/ --name "ContainerName" "ImageName" /bin/bash
I want to attach the file in place of the volume.
How can I attach a config file to the container at runtime?
The docker run options don't really let you mess with the image. For that you have the Dockerfile, so you can build an image of your own, or in this case, kind of extend the base one:
In your project root directory:
Copy the logs you need into your project (so the Dockerfile can access them)
Create a Dockerfile:
#Dockerfile
FROM <image_name>
COPY ./logs /home/logs
Build your own image (you can also push it to a repo):
docker build . -t <new_image_name>
Run the container:
docker run -d -ti --name "ContainerName" <new_image_name> /bin/bash
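To double-check that the files were baked into the new image, you can list them in a throwaway container (<new_image_name> as above):
docker run --rm <new_image_name> ls /home/logs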

Why can't my docker image start the app when I run it after import?

Below is my Dockerfile:
FROM node:10.15.0
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY ./build/release /usr/src/app/
RUN yarn
EXPOSE 3000
CMD [ "node", "server.js" ]
First I ran
docker build -t app .
and then
docker run -t -p 3000:3000 app
Everything works fine via localhost:3000 on my computer.
Then I tried to export this image with
docker export 68719e2bb0cd > app.tar
and import it again with
cat app.tar | docker import - app2
then run
docker run -t -d -p 2000:3000 app2
and this error came out:
docker: Error response from daemon: No command specified.
Why did this happen?
You're using the wrong commands: docker export and docker import only transfer the filesystem part of an image and not other data like environment variables or the default command. There's not really a good typical use case for these commands.
The standard way to do this is to set up a Docker registry or use a public registry server like Docker Hub, AWS ECR, GCR, ... Once you have this set up you can docker push an image to the registry from the system it was built on, and then docker pull it on the system you want to run it on (or directly docker run it, which will automatically pull the image if not present).
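A rough sketch of that round trip (registry.example.com is a placeholder):
# on the machine where the image was built
docker tag app registry.example.com/app:1.0
docker push registry.example.com/app:1.0
# on the machine where it should run
docker pull registry.example.com/app:1.0
docker run -t -d -p 2000:3000 registry.example.com/app:1.0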
If you really can't set up a registry then the commands you actually want are docker save and docker load, which save complete images with all of their metadata. I've only ever wanted these in environments where I can't connect the systems I want to run images on to the registry server; otherwise a registry is almost always better. (Cluster environments like Docker Swarm and Kubernetes all but require a registry as well.)
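With save/load, the equivalent of the export/import attempt above would be roughly:
docker save app > app.tar
# copy app.tar to the target machine, then:
docker load < app.tar
docker run -t -d -p 2000:3000 app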
Just pass the command to run, because the imported image loses all of its associated metadata on export, so the default command won't be available after importing it somewhere else.
The correct command would be something like:
docker run -t -d -p 2000:3000 app2 /path/to/something.sh

create a pure data image in docker

I know that in docker we can run data volume containers like this:
#create a pure data container based on my data_image
docker run -v /data --name data-volume-container data-vol-container-img
# here I'm using the data volume in a property container (ubuntu)
docker run --volumes-from data-volume-container ubuntu
my question is how do we create the data_image?
I know that the easiest way is to create an image based on ubuntu, or something like that:
FROM ubuntu
COPY data /data
CMD ["true"]
But the thing is, why do I need ubuntu as my base image? (I know it's not a big deal, as ubuntu is going to be re-used in other scenarios.) I really want to know: why can't I use scratch?
FROM scratch
COPY data /data
#I don't know what to put here
CMD ["???"]
The image I'm creating here is meant to be a dummy one: it executes absolutely NOTHING and only acts as a dummy data container, i.e. to be used in docker run -v /data --name my_dummy_data_container my_dummy_data_image
Any ideas?
(Is it because scratch doesn't implement a bare-minimum file system? But Docker can use the host system's file system if a container doesn't implement its own.)
Yes, you can do this FROM scratch.
A CMD is required to create a container, but Docker doesn't validate it - so you can specify a dummy command:
FROM scratch
WORKDIR /data
COPY file.txt .
VOLUME /data
CMD ["fake"]
Then use docker create for your data container rather than docker run, so the fake command never gets started:
> docker create --name data temp
55b814cf4d0d1b2a21dd4205106e88725304f8f431be2e2637517d14d6298959
Now the container is created so the volumes are accessible:
> docker run --volumes-from data ubuntu ls /data
file.txt
