How to set up mass dynamic virtual hosts in nginx on Docker?

How can I set up mass dynamic virtual hosts in nginx, as seen here,
except using Docker as the host machine?
I currently have it setup like this:
# default.conf
server {
    root /var/www/html/$http_host;
    server_name $http_host;
}
And in my Dockerfile
COPY default.conf /etc/nginx/sites-enabled/default.conf
And after I build the image and run it:
docker run -d -p 80:80 -v www/:/var/www/html
But when I point a new domain (example.dev) at it in my hosts file and create www/example.dev/index.html, it doesn't work at all.

The setup is correct and it works, as I tested on my system. The only issue is that you are copying the file to the wrong path. The Docker image doesn't use the sites-enabled path by default; the default config loads everything present in /etc/nginx/conf.d. So you need to copy to that path, and the rest works great:
COPY default.conf /etc/nginx/conf.d/default.conf
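For reference, the whole Dockerfile then only needs two lines; a minimal sketch, assuming the official nginx base image:
FROM nginx
COPY default.conf /etc/nginx/conf.d/default.conf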
Make sure to map your volumes correctly. I tested it using the docker command below:
docker run -p 80:80 -v $PWD/www/:/var/www/html -v $PWD/default.conf:/etc/nginx/conf.d/default.conf nginx
Below is the output on command line
vagrant@vagrant:~/test/www$ mkdir dev.tarunlalwani.com
vagrant@vagrant:~/test/www$ cd dev.tarunlalwani.com/
vagrant@vagrant:~/test/www/dev.tarunlalwani.com$ vim index.html
vagrant@vagrant:~/test/www/dev.tarunlalwani.com$ cat index.html
<h1>This is a test</h1>
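You can also verify the dynamic vhost without touching the hosts file by overriding the Host header with curl (host name taken from the test above):
curl -H "Host: dev.tarunlalwani.com" http://localhost/
# prints: <h1>This is a test</h1>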
Opening the domain in the browser then serves the test page.


Mount container folders to hosts

I have three containers. Each container is a Django project with a folder called images that holds its files.
I want to access the images of each container in the following format through the website address:
http://example.com/storage/<container>/images/<file name>
What I tried:
I created a storage folder on the host, then made a separate subfolder for each container, and finally mounted each of these folders into its container. But the images are not available from the website.
/storage/
    /users/images/...
    /company/images/...
    /financial/images/...
Can anyone help?
UPDATED
# Create volume
docker volume create users
# Mount it into the container (the container path must be absolute)
docker run -v users:/storage/users/images user-image
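Note that mounting the images into each Django container does not by itself publish them over HTTP: whichever web server answers on example.com must also see the files and expose them under /storage/. A minimal nginx sketch, assuming the host's /storage tree is also mounted into the web server's container at /storage:
# nginx - serve the mounted storage tree
location /storage/ {
    alias /storage/;
}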
I am using MinIO with Nginx on Docker.
Here are the steps I used to configure MinIO.
Initial Config 🦜
Step 1: Create minio user
Create the minio user:
sudo useradd minio
You then add a password for the minio user by using the passwd command:
sudo passwd minio
Step 2: Create shared folder
Make the MinIO directory and change its owner to the minio user:
mkdir -p /usr/local/share/minio
sudo chown -R minio:minio /usr/local/share/minio
Docker 🐳
Step 3: Create docker container:
Create the docker container with:
docker-compose up -d --build
This command creates your S3 container. Check it with the docker ps command.
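The compose file itself lives in the repo linked at the end; here is only a minimal sketch of what it might look like (image, command, ports, and volume path are assumptions based on a standard MinIO setup):
# docker-compose.yml - sketch
version: '3'
services:
  s3:
    image: minio/minio
    # port 9000 serves the S3 API, 9001 the web console that nginx proxies below
    command: server /data --console-address ":9001"
    ports:
      - "9000:9000"
      - "9001:9001"
    volumes:
      - /usr/local/share/minio:/data
    env_file: .env   # holds MINIO_ROOT_USER / MINIO_ROOT_PASSWORD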
Nginx 🔥
Step 4: Create subdomain
Open the /etc/hosts file and add your subdomain:
127.0.0.1 localhost
127.0.0.1 s3.localhost # You can rename `s3` with desired name.
Then, in /etc/nginx/sites-enabled/default, set the server_name key to s3.localhost.
...
# Add this block at the end of the `default` file:
server {
    listen 80;
    listen [::]:80;
    server_name s3.localhost;
    index index.html;

    location / {
        proxy_pass http://127.0.0.1:9001/;
    }
}
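Depending on the MinIO version, the console may also need the Host and WebSocket upgrade headers forwarded; a sketch of an extended location block (the proxy_set_header lines are standard nginx directives, added here as an assumption, not taken from the original setup):
location / {
    proxy_set_header Host $http_host;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_pass http://127.0.0.1:9001/;
}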
Then restart nginx:
sudo service nginx reload
Run MinIO 🏃🏽‍♂️
Go to your browser and open s3.localhost.
The username and password are in the .env file. Log in and create your buckets. 🌟
Check out my repo.

Basic Nginx container (static file)

I am trying to build the simplest possible container with Nginx hosting a simple index.html file, but I can't seem to get it working. I must be missing a concept somewhere and am hoping for a bit of help.
Here is what I am doing:
I have a folder with 2 files in it:
Dockerfile
FROM nginx:1.18.0
WORKDIR /usr/share/nginx/html
COPY index.html ./
index.html
<!DOCTYPE html>
<html>
<head>
<title>Testing</title>
</head>
<body>
<p>Hello Test App</p>
</body>
</html>
First I build the container. From the folder that has the files, I run this command:
docker image build --tag nginx-test:1.0.0 --file ./Dockerfile .
Then I run the container:
docker run -d -p 3737:3737 nginx-test:1.0.0
Then I browse to http://localhost:3737 (I also tried http://localhost:3737/index.html) and Chrome shows an ERR_EMPTY_RESPONSE error.
How can I get my index.html file to be hosted in my container?
The problem was with this command:
docker run -d -p 3737:3737 nginx-test:1.0.0
The -p option selects the ports: the first port is what will be exposed on the host machine, and the second is the port used to communicate with the container. By default, nginx listens on port 80 (like most web servers).
So changing that command to:
docker run -d -p 3737:80 nginx-test:1.0.0
makes the rest of the steps in my scenario work correctly.
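Alternatively, if the container itself should listen on 3737, the default server block can be overridden instead of remapping the port; a minimal sketch, assuming it is copied over /etc/nginx/conf.d/default.conf in the Dockerfile:
# default.conf - makes nginx listen on 3737 inside the container
server {
    listen 3737;
    root /usr/share/nginx/html;
    index index.html;
}
With that baked into the image, the original docker run -d -p 3737:3737 nginx-test:1.0.0 command works unchanged.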

docker COPY is not copying the files

FROM nginx:alpine
EXPOSE 80
COPY . /usr/share/nginx/html
I am trying to run an Angular app with the docker configuration above. It does work, but I can't see the files/directory that were supposed to be copied to /usr/share/nginx/html, which is super confusing. The directory only contains the default index.html that nginx created.
Does it store the files in memory or something? They are not there, yet it serves my website properly.
Build:
docker build -t appname .
Run:
docker run -d -p 80:80 appname
It turns out the COPY destination path is not a path on the server's disk but a path inside the Docker image, which explains why I can't see my files on the server's disk.
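To confirm this, list the directory inside the running container rather than on the host (replace <container-id> with the id shown by docker ps):
docker exec -it <container-id> ls /usr/share/nginx/html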

Docker + OpenResty - flexible configuration. How to?

I have an OpenResty application that I deploy with Docker.
My Dockerfile:
FROM openresty/openresty:alpine-fat
ARG APPLICATION_PATH="/srv/www/my-app"
COPY nginx.conf /usr/local/openresty/nginx/conf
RUN mkdir ${APPLICATION_PATH}
I'm running docker with this command:
docker run -v $(CURRENT_DIRECTORY):/srv/www/my-app -v $(CURRENT_DIRECTORY)/conf:/etc/nginx/conf.d --name="$(APP_NAME)" -p $(PORT):80 -d $(CONTAINER_NAME)
This command is stored in a Makefile, and the variable values are defined like this:
CONTAINER_NAME = my-app
APP_NAME = my-app
override PORT = 8080
ifeq ($(OS),Windows_NT)
CURRENT_DIRECTORY=%cd%
else
CURRENT_DIRECTORY=${PWD}
endif
I also have my-app.conf, stored in the conf directory. It is an nginx configuration file containing this line:
content_by_lua_file '/srv/www/my-app/main.lua';
Further, nginx.conf contains this line:
lua_package_path ";;/srv/www/my-app/?.lua;/srv/www/my-app/application/?.lua";
I don't want to duplicate /srv/www/my-app across these three files. How can I avoid this?
IMO, your approach is not consistent.
You copy the nginx.conf file but mount a volume for my-app.conf (is it included from nginx.conf?).
Curiously, $(CURRENT_DIRECTORY)/conf is mounted twice: as /srv/www/my-app/conf and as /etc/nginx/conf.d.
Below is my approach for OpenResty containers:
Write a simple nginx.conf without includes. Copy it into the container as you already do.
The only reason to mount a folder containing nginx.conf is the ability to reload the nginx configuration after changes. Keep in mind that if you mount a single file, reload may not work: https://github.com/docker/for-win/issues/328
Copy all Lua files mentioned in *_by_lua_file directives into /usr/local/openresty/nginx.
Copy all Lua files required by the files above (if any) into /usr/local/openresty/lualib.
Don't use absolute file paths in *_by_lua_file directives; you may specify paths relative to /usr/local/openresty/nginx.
Don't use the lua_package_path directive; the defaults should work (a sketch follows below).
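Putting the list above together, a minimal Dockerfile sketch (the file names main.lua and application/ are taken from the question; the exact layout is an assumption):
FROM openresty/openresty:alpine-fat
# a simple nginx.conf without includes, copied as before
COPY nginx.conf /usr/local/openresty/nginx/conf/nginx.conf
# entry points referenced by *_by_lua_file, relative to the nginx prefix
COPY main.lua /usr/local/openresty/nginx/main.lua
# modules required from main.lua, found via the default lua_package_path
COPY application/ /usr/local/openresty/lualib/application/
The directive in my-app.conf then shrinks to content_by_lua_file main.lua; and the /srv/www/my-app prefix disappears entirely.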
Here is the simple working example https://gist.github.com/altexy/8f8e08fd13cda25ca47418ab4061ce1b

Confusion while deploying a docker-compose image

I've been working on a sample Ruby on Rails application and deploying its Docker image to a Linux server (Ubuntu 14.04).
Here is my Dockerfile:
FROM ruby:2.1.5
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev nodejs
RUN mkdir /rails_docker_demo
WORKDIR /rails_docker_demo
ADD Gemfile /rails_docker_demo/Gemfile
ADD Gemfile.lock /rails_docker_demo/Gemfile.lock
RUN bundle install
ADD . /rails_docker_demo
# CMD bundle exec rails s -p 3000 -b 0.0.0.0
# EXPOSE 3000
docker-compose.yml:
version: '2'
services:
  db:
    image: postgres
  web:
    build: .
    command: bundle exec rails s -p 3000 -b '0.0.0.0'
    image: atulkhanduri/rails_docker_demos
    volumes:
      - .:/rails_docker_demo
    ports:
      - "3000:3000"
    depends_on:
      - db
deploy.sh:
#!/bin/bash
docker build -t atulkhanduri/rails_docker_demo .
docker push atulkhanduri/rails_docker_demo
ssh username@ip-address << EOF
docker pull atulkhanduri/rails_docker_demo:latest
docker stop web || true
docker rm web || true
docker rmi atulkhanduri/rails_docker_demo:current || true
docker tag atulkhanduri/rails_docker_demo:latest atulkhanduri/rails_docker_demo:current
docker run -d --restart always --name web -p 3000:3000 atulkhanduri/rails_docker_demo:current
EOF
Now my problem is that I'm not able to use docker-compose commands like docker-compose up to run the application server.
When I uncomment the last two lines from the Dockerfile, i.e.,
CMD bundle exec rails s -p 3000 -b 0.0.0.0
EXPOSE 3000
then I'm able to run the server on port 3000, but I get the error could not translate host name "db" to address: Name or service not known (my database.yml has "db" as the host). This is because the postgres image is not used, since I'm not using the docker-compose file.
EDIT:
Output of docker network ls:
NETWORK ID NAME DRIVER SCOPE
b466c9f566a4 bridge bridge local
7cce2e53ee5b host host local
bfa28a6fe173 none null local
P.S.: I've searched a lot on the internet but am still not able to use the docker-compose file.
Assumptions
If I am reading what you've done here correctly, my answer assumes the following two things.
You are using docker-compose to run the database container.
You are using plain docker commands (not docker-compose) to start the application server ("web").
First, I would suggest not doing that, it is a lot simpler to use docker-compose for both. However, I'll answer based on the above, assuming that there is some valid reason you cannot use docker-compose to run the "web" container.
About container and network names
When you run the docker-compose command to start the db container, among other things, two things happen.
The container is given a new name, composed of the directory you run the compose setup from, the static name in compose (db), and a number. So let's say you have this all in a directory named myapp: you would then have a new container named myapp_db_1. You can see what it is named using docker ps.
A network bridge is created if it didn't already exist, named something like myapp_default - again, named after the directory that the compose setup is inside of.
Connecting to the right network
The problem is that your non-compose container is attached to the default network (probably docker_default), but your db container is attached to myapp_default. The two networks do not know about each other. You need to connect them. It probably makes more sense to tell the web app container to attach to the compose network.
First, get the correct network name. You can see all networks using docker network ls. It might look like this:
$ docker network ls
NETWORK ID NAME DRIVER SCOPE
c1f5764a112b bridge bridge local
175efb89adef docker_default bridge local
5185ff0e1054 myapp_default bridge local
Once you have the correct name, update your run command to know about the network using the --network option.
docker run -d --restart always --name web \
-p 3000:3000 --network myapp_default \
atulkhanduri/rails_docker_demo:current
Once it is attached to the proper network, the name "db" should resolve correctly.
If you used docker-compose to start both of them, this would not be necessary (this is one of the things docker-compose just takes care of for you silently).
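Alternatively, an already-running container can be attached to the compose network in place:
docker network connect myapp_default web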
Getting this to run on your server
In the comments, you mention that you are having some issues with compose on the server. Specifically you said:
Do I need to copy my complete project on the server? Can't I run the application from docker image only? Actually, I've copied docker-compose in server and it throws errors for Gemfile, then I copied Gemfile, then it says it should be a rails app. So I guess I need to copy my complete folder in server. Can you please confirm?
Let's look at some parts of your Dockerfile. I'll add some comments inline.
## Make a new directory, and then make it the current directory
RUN mkdir /rails_docker_demo
WORKDIR /rails_docker_demo
## Copy Gemfile and Gemfile.lock into this directory from outside
ADD Gemfile /rails_docker_demo/Gemfile
ADD Gemfile.lock /rails_docker_demo/Gemfile.lock
## Run the bundle installer, which will install to this directory
RUN bundle install
## Finally, copy everything from the outside local dir to here
ADD . /rails_docker_demo
So, clearly, /rails_docker_demo is your application directory within the container. You've installed a bunch of stuff here, and this will become a part of your image. When you push your image to the registry, then pull it down on the server (as you do in the deploy script), this will all come with it.
Now let's look at (some of) docker-compose.yml.
services:
  web:
    volumes:
      - .:/rails_docker_demo
Here you have defined a volume mount, mounting the current directory (wherever docker-compose.yml lives) as /rails_docker_demo. When you do that, whatever happens to exist on the server is now available in /rails_docker_demo, but this mount undoes all the work from the Dockerfile that I just mentioned above. Instead of having the resources you installed when you built the image, you have only whatever is in the server's . directory. The mount sits on top of the image's existing /rails_docker_demo directory, hiding its contents and replacing them with whatever is on the server at the moment.
Unless there is a reason you put this mount here, you probably just need to remove that volume mount from docker-compose.yml. You will still need docker-compose.yml on the server, but you should not need the rest of it (aside from the image, of course).
This mount is a useful thing for development purposes: it lets you use the container to run the application and quickly see code changes (without rebuilding the image). But in the case of your deployment, it is just causing trouble.
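If you want to keep the mount for local development without shipping it to the server, one option is compose's override mechanism: docker-compose automatically merges a docker-compose.override.yml when one is present, so the mount can live there instead of in the base file. A sketch:
# docker-compose.override.yml - development only, not copied to the server
version: '2'
services:
  web:
    volumes:
      - .:/rails_docker_demo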
Try moving EXPOSE above CMD, e.g.:
FROM ruby:2.1.5
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev nodejs
RUN mkdir /rails_docker_demo
WORKDIR /rails_docker_demo
ADD Gemfile /rails_docker_demo/Gemfile
ADD Gemfile.lock /rails_docker_demo/Gemfile.lock
RUN bundle install
ADD . /rails_docker_demo
EXPOSE 3000
CMD bundle exec rails s -p 3000 -b 0.0.0.0
