static website using docker nginx doesn't update automatically after rebuilding image

I am a beginner with nginx. I am using the docker nginx image to build a static website.
Here is my Dockerfile:
FROM nginx
COPY . /usr/share/nginx/html
The files copied in the container are a simple index.html with some css/js.
Here is what I did on my laptop:
I built a first website image and ran the container on my laptop. I could see the website by hitting http://localhost:port.
I changed index.html, rebuilt the image, and ran the container again on my laptop. I could see the changes made to index.html on the website by hitting http://localhost:port.
I did exactly the same process on a remote virtual machine, but this time I cannot see the changes I made to index.html after rebuilding the image (step 2).
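The commands I used on both machines looked roughly like this (image name and host port are placeholders):

docker build -t mysite .
docker run -d -p 8080:80 mysite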
Here are my environments details:
Laptop:
macOS High Sierra 10.13.1
Docker version 17.12.0-ce, build c97c6d6
Virtual Machine:
Red Hat Enterprise Linux Server release 7.2 (Maipo)
Docker version 1.12.5, build 047e51b/1.12.5
Any idea what could cause this? Do I have to fine-tune something in the nginx config files? Thanks for your help.

Related

Docker compose unsupported option 'target'

I am developing a NestJS application and I need to deploy it to a remote Ubuntu server. I'm new to Docker. Everything is fine on Windows 8 and Ubuntu 22, where I can use my app without problems, but on the server I get this error:
[screenshot: error on the server]
These are the Docker files I'm using:
[screenshot: Dockerfile]
[screenshot: docker-compose]
On my Windows and Ubuntu machines, both docker-compose up dev and docker-compose up prod work. It seems weird that it works on two different systems but not on the server; they all have the same version of docker and docker-compose.
I tried changing the version in the yml, but the error persists.
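For illustration (my real files are only in the screenshots above), a service that uses the option in question looks roughly like this; the target build option is only valid from compose file format 3.4 on, which may be what the server's docker-compose doesn't support:

version: "3.4"             # 'target' under 'build' needs file format >= 3.4
services:
  dev:
    build:
      context: .
      target: development  # older docker-compose reports this key as unsupported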

Running a docker container as an amd64 machine using an M1 Mac

I have a Dockerfile which creates an image, and I run it using docker compose together with a container built from a Postgres image (to set up a local environment for Airflow; we use the mwaa local runner).
Recently I got a new M1 Pro machine and I'm running into issues running the container.
The problem, from my understanding, is that the image is built and then run on my machine, which has a different kind of CPU architecture, which causes pip to look for wheels for that architecture. My colleague has an Intel Mac and he says he doesn't experience any issues.
The build phase is OK, but when I run the container, we've set docker compose to run an entrypoint script that also installs some Airflow providers and other dependencies, one of which is plyvel, which fails to install and causes other packages not to install as well. When I remove plyvel from the requirements.txt file, the installation completes, but some of my Airflow providers are missing some files or attributes, which creates its own issues.
I tried forcing docker to build and run the image and container as amd64 by changing the build command to:
docker build --platform linux/amd64 --rm --compress $3 -t amazon/mwaa-local:2.2 ./docker
which runs, but very slowly.
I also added platform: linux/amd64 to the docker-compose file for both the postgres and the local-runner containers.
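The relevant part of the docker-compose file now looks roughly like this (service definitions abbreviated):

services:
  postgres:
    image: postgres
    platform: linux/amd64    # force amd64 under emulation
  local-runner:
    image: amazon/mwaa-local:2.2
    platform: linux/amd64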
Then, when I spin up the container, it takes a long time to reach a working state where I can access the Airflow UI in the web browser, and then the UI itself is very slow: every link takes a few seconds to process and direct me to the new place. I believe this is due to the emulation.
I then found this article:
https://medium.com/nttlabs/buildx-multiarch-2c6c2df00ca2
It says there is a faster way to run without emulation, but I didn't understand how to implement it.
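From what I can tell, the buildx flow it describes looks roughly like this, though I'm not sure it avoids the emulation in my case:

docker buildx create --use    # create and select a builder instance
docker buildx build --platform linux/amd64 -t amazon/mwaa-local:2.2 ./docker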
In addition, found this Reddit thread:
https://www.reddit.com/r/docker/comments/qlrn3s/docker_on_m1_max_horrible_performance/
They suggest building and running the container inside a virtual machine; I'm not sure if that is the way to go in my situation.
I tried both Docker Desktop and Rancher Desktop (with dockerd), but both show the same symptoms.

is it ok to build a docker image with docker:18 and load the image with a different docker:19?

I am using CircleCI to build images and export them as tar.gz, using Docker version 18.
But now I have Docker 19 on all of my swarm managers and workers.
I have done the following steps to deploy services in the swarm:
Load the docker images using the docker load command
Run docker stack deploy servername to deploy
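Concretely, the flow looks like this (image name and compose file name are placeholders):

# on CircleCI (docker 18): export the built image
docker save myimage:latest | gzip > myimage.tar.gz
# on the swarm manager (docker 19): load and deploy
docker load -i myimage.tar.gz
docker stack deploy -c docker-compose.yml servername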
I have tested it, and it works fine, but I want to know: is it the right thing to do?
It works for sure.
Here is a link with the breaking changes and incompatibilities of Docker:
Documentation
When you build a public image, you cannot know which versions of Docker will pull it; compatibility is an important thing for the Docker ecosystem.
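As a quick sanity check, you can compare the engine version on the build host and on each swarm node:

docker version --format '{{.Server.Version}}'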

docker: create a Linux image with my own application

I developed an HTTP server which implements a RESTful API specified by our client. Currently it runs on my workstation (CentOS 7.4 x86_64) and everything is working. Now I need to ship it as a CentOS 7.4 docker image.
I read the getting started guide and spent some time browsing the documentation, but am still not sure how to proceed with this.
Basic Steps
Download the CentOS image from here
Run the CentOS image on my workstation and copy everything into it.
Make the appropriate changes so that the server is started via systemd.
In step 3, I am not sure how to do root/sudo inside the docker image.
I think what you are looking for is the Dockerfile reference: https://docs.docker.com/engine/reference/builder/#understand-how-arg-and-from-interact
It's a file named Dockerfile that sits in the root of your project. In this file you specify which commands you want to run on top of the base image.
for example, from your use case:
FROM centos:7.4.1708
COPY <your files> /opt/
CMD ["program.exe", "-arg", "argument"]
FROM - defines the base image
COPY - copies files from the folder you run the command from to the image
CMD - runs this command when the container starts
Build with docker build . -t image-name
Run with docker run image-name
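For an HTTP server you will usually also want to expose its port and map it at run time. A minimal sketch, assuming a hypothetical binary at /opt/server listening on port 8080 (names, port, and flags are placeholders):

FROM centos:7.4.1708
COPY server /opt/server
EXPOSE 8080                            # document the port the server listens on
CMD ["/opt/server", "--port", "8080"]  # hypothetical flags for the example binary

Then run with docker run -d -p 8080:8080 image-name. As for root/sudo: Dockerfile instructions run as root by default, so no sudo is needed inside the image, and a container normally runs your process directly rather than starting it via systemd.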

How to run docker image produced by VS 2017

Docker noob here...
How does one properly run the docker image of your ASP.NET Core app, which is produced by Visual Studio 2017, at the command line?
docker run -it -d -p 80:32769 myappimage
does not appear to work properly (the image runs, but I cannot browse to my app).
Note: I've simply created a sample ASP.Net Core Web App within Studio using the default template, and added Docker support (by clicking the "Add Docker support" checkbox.). Studio adds a dockerfile and some docker-compose files when you do this.
When Visual Studio "runs" the image (by pressing F5) - I can successfully browse to my application ( via "http://localhost:32789" or similar host port. App inside container is on port 80 ). But I cannot figure out the command to run it myself at the command line.
The standard Dockerfile that Studio adds to your project is...
FROM microsoft/aspnetcore:1.1
ARG source
WORKDIR /app
EXPOSE 80
COPY ${source:-obj/Docker/publish} .
ENTRYPOINT ["dotnet", "WebApplication2.dll"]
Yes, it is possible. Rebuild your solution in the Release configuration and run the docker-compose project with F5 to ensure the image is updated and your app is working fine. Then execute the docker images console command. You'll see something like:
REPOSITORY    TAG       IMAGE ID        CREATED               SIZE
Your.App      latest    673b79a6bb3d    About a minute ago    294 MB
All you need is to run a new container from that image and map its exposed port to a localhost port. By default, the exposed port is 80 (look at Dockerfile). For example:
docker run -d -p 1234:80 --name some_name Your.App:latest
Then your app should become accessible at http://127.0.0.1:1234/.
Explanation:
If the Debug configuration is set, Visual Studio creates empty, non-workable images. It maps your project filesystem into the empty container to make debugging, "Edit and Continue", and similar features possible. This is why the dev image is useless without Visual Studio. Build the image in the Release configuration to make it usable.
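If you prefer the command line over F5, a roughly equivalent Release flow would be the following; the publish output folder matches the default COPY source in the Dockerfile above, and the tag is lowercase because CLI image tags must be (names are placeholders):

dotnet publish -c Release -o obj/Docker/publish
docker build -t your.app .
docker run -d -p 1234:80 your.app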
The full publishing process is described in the documentation: Visual Studio Tools for Docker
Publishing Docker images
Once you have completed the develop and debug cycle of your application, the Visual Studio Tools for Docker will help you create the production image of your application. Change the debug dropdown to Release and build the application. The tooling will produce the image with the :latest tag, which you can push to your private registry or Docker Hub.
You are confusing something here. When you run your project with F5 in Visual Studio 2017, you run it with IIS Express on a randomly configured port.
In Docker you don't have IIS Express; there your app is only hosted by Kestrel (Kestrel is always used, even behind IIS/IIS Express, but they act as a reverse proxy).
The default port for Kestrel is 5000, but you can also configure it. See my post here for more detail on the methods you have to configure the listening IP/port.
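For example, one way to pin the container to a known port is the ASPNETCORE_URLS environment variable, which the default ASP.NET Core host honors (port values here are just an example):

docker run -d -p 8080:80 -e ASPNETCORE_URLS=http://+:80 myappimage

The app should then be reachable at http://localhost:8080.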
