Connect to Redis server from within a Docker image - docker

I have 2 hosts, a web unit (WU) and a computing unit (CU). On the WU, I have my website. On the CU, I have a redis server and a (C++) app that does some computing.
The user enters input data in the website, and then I want to enqueue a job from the WU to the Redis server on the CU. I have then a worker on the CU which performs a task.
Now, I am able to enqueue a job from the WU (outside of any docker container) to the CU from the terminal (using the python rq module). However, my website runs in a docker container, and I can't get it working from there. From within the container, I try to connect to 172.17.0.1:6379 (172.17.0.1 is the IP of the gateway between the container and the docker host). The error I get is connection refused. I then thought I might have to map the ports in my docker-compose file: 6379:6379. However, I then got an error saying the port is already in use. And indeed, it is used by the stunnel4 service which allows me to enqueue jobs from the WU to the redis server on the CU.
Should I run the stunnel4 service in the docker container or something? And if so, how could I do that? Or should I tackle my problem in a different way?
Network structure
WU and CU are 2 (virtual) machines. My redis server is on CU and not in a docker container. I am able to connect to the redis server from WU to CU by means of the python redis module (but not from within a docker container). I had to set up a stunnel4.service for that (redis-client on WU and redis-server on CU).
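For reference, the connection that already works from the WU host (outside Docker) looks roughly like this; a minimal sketch, assuming the stunnel client endpoint on the WU listens on localhost:6379:
from redis import Redis

# Connect to the local stunnel client endpoint, which forwards the
# connection over TLS to the redis server on the CU.
# localhost:6379 is an assumption for illustration.
r = Redis(host="localhost", port=6379)
print(r.ping())  # True if the tunnel and the redis server are reachable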

Finally I managed to build a stunnel service in a docker container on the WU. I can now simply connect with python redis to that stunnel service, and the end of the tunnel points to the CU.
Here is what I did on the WU:
Dockerfile
FROM alpine:3.12
RUN apk add --no-cache stunnel
COPY ./entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
COPY ./ca_file.crt /etc/stunnel/ca_file.crt
ENTRYPOINT ["/entrypoint.sh"]
entrypoint.sh
#!/bin/sh
cd /etc/stunnel
cat > stunnel.conf <<_EOF_
foreground = yes
[stunnel-client]
client = yes
accept = ${ACCEPT}
connect = ${CONNECT}
CAfile = ca_file.crt
verify = 4
_EOF_
exec stunnel "$@"
The ACCEPT and CONNECT values are specified in an environment file:
.env.stunnel
ACCEPT=6379
CONNECT=10.110.0.3:6379
where 10.110.0.3 is the IP address of my redis host.
docker-compose
stunnel-client:
  container_name: stunnel-client
  build:
    context: ./stunnel
    dockerfile: Dockerfile
  restart: always
  volumes:
    - stunnel_volume:/etc/stunnel
  env_file:
    - ./.env.stunnel
  networks:
    - stunnel-net
  ports:
    - "6379:6379"
The stunnel-net network is also attached to my web service, so I can connect from there to the stunnel-client service by means of python redis.
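From the web container, enqueueing then simply targets the stunnel-client service name on the shared stunnel-net network; a minimal sketch with python redis and rq (the service name and port come from the compose file above, the task path is hypothetical):
from redis import Redis
from rq import Queue

# 'stunnel-client' is resolved by Compose's internal DNS on stunnel-net;
# stunnel forwards port 6379 to the redis server on the CU.
redis_conn = Redis(host="stunnel-client", port=6379)
q = Queue(connection=redis_conn)
q.enqueue("my_tasks.compute", {"input": "user data"})  # hypothetical task path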

Related

ASP.NET Core Webapi startup url - Docker-Compose vs. Docker build

I am encountering an interesting difference in startup behaviour when running a simple net6.0 web api built with docker-compose in comparison to being built with docker. The application itself runs in a kubernetes cluster.
Environment
Minikube v1.26.1
Docker Desktop v4.12
Docker Compose v2.10.2
Building with docker-compose
docker-compose.yml
version: "3.8"
services:
web.api:
build:
context: ./../
dockerfile: ./web.API/Dockerfile
The context is set to the parent directory due to some files needed there.
Dockerfile
FROM mcr.microsoft.com/dotnet/sdk:6.0-alpine AS build
WORKDIR /src
ENV ASPNETCORE_URLS=http://+:80
COPY Directory.Build.props ./Directory.Build.props
COPY .editorconfig ./.editorconfig
COPY ["webapi/web.API", "web.API/"]
RUN dotnet build "web.API/web.API.csproj" -c Release --self-contained true --runtime alpine-x64
RUN dotnet publish "webapi/web.API.csproj" -c Release -o /app/publish \
--no-restore \
--runtime alpine-x64 \
--self-contained true \
/p:PublishSingleFile=true
FROM mcr.microsoft.com/dotnet/runtime-deps:6.0-alpine
WORKDIR /app
EXPOSE 80
EXPOSE 443
COPY --from=build /app/publish .
ENTRYPOINT ["./web.API"]
This results in the app starting up within the kubernetes cluster with the following logs:
Now listening on: http://[::]:80
Building with docker build
Using the same Dockerfile mentioned earlier, with the same build context as in the docker-compose.yml, a deployment to k8s results in the following log:
Now listening on: http://localhost:5000
Running the image locally
Running the exact same image from the k8s cluster locally however results in
Now listening on: http://[::]:80
Already tried
As suggested in many posts, I tried setting the environment variable ASPNETCORE_URLS via the Dockerfile or the k8s deployment.yml; neither had an impact on the startup URL.
I can't seem to figure out why there is a difference between those 2 ways of building an image.
Update
The only thing that seems to work is to add
builder.WebHost.ConfigureKestrel(option =>
{
    option.ListenAnyIP(80);
});
to the Program.cs.
Still not sure about the reason behind the behaviour though.
A few things:
I assume that the container already running and working on port 80 (docker run) is stopped before attempting to run docker-compose?
Environment variables can be set in the docker-compose.yml file.
Ports most likely need to be published correctly, which, judging from the Dockerfile and docker-compose.yml, does not seem to be the case.
Environment Variables
First off, before ENV ASPNETCORE_URLS=http://+:80 can be of any use, your docker-compose file needs to define which ports to use; the docker-compose.yml you show (even if trimmed) does not list any ports.
Perhaps because the ports aren't published, the app tries to bind to 80, fails (already in use / not exposed), and somehow ends up on 5000.
Alternatively, and more likely: it simply does not see your ENV ASPNETCORE_URLS.
You can try environment variables directly in your docker-compose file:
my-service:
  image: ${IMAGE_NAME}
  environment:
    MY_SECRET_KEY: ${MY_SECRET_KEY}
Publishing ports
In the docker-compose file you need this to publish ports:
ports:
  - "80"
  - "443"
... or
ports:
  - "80:80"     # "host-port:container-port"
  - "443:1234"
Additional information
The keyword EXPOSE/expose in the Dockerfile/docker-compose.yml is purely informative (comments, in a sense); functionally it does not do anything. A port needs to be published to be usable from outside.
So those EXPOSE 443 and EXPOSE 80 lines do not tell Docker to publish anything. Perhaps you are running your container like this, which publishes port 80 so it becomes available:
docker run -p 127.0.0.1:80:80/tcp image command
In short, use the ports keyword in docker-compose.yml.
EDIT:
I read your comment above:
But the app is not accessible in k8s when listening to localhost:5000 even with correct service and container configuration
This supports what I am trying to say about ports being published or not: your port 5000 is also not published, because nothing in your configuration publishes it.

Links to container in networking with docker-compose

My aim is to access a container via URL from another container using docker-compose.
So, suppose I have the following docker-compose.yml file
version: "3.8"
services:
web:
build: web
ports:
- "8000:8000"
depends_on:
- db
db:
image: postgres
ports:
- "8001:5432"
and a Dockerfile in the folder web
FROM alpine:3.7
RUN ping postgres://db:5432
Running docker-compose build returns
db uses an image, skipping
Building web
Step 1/2 : FROM alpine:3.7
---> 6d1ef012b567
Step 2/2 : RUN ping postgres://db:5432
---> Running in afbfcd27b340
ping: bad address 'postgres://db:5432'
Service 'web' failed to build : The command '/bin/sh -c ping postgres://db:5432' returned a non-zero code: 1
The docs for networking in docker compose (https://docs.docker.com/compose/networking/#links) state:
Each container can now look up the hostname web or db and get back the appropriate container’s IP address. For example, web’s application code could connect to the URL postgres://db:5432 and start using the Postgres database.
What is the correct URL to connect to the container created from the db service?
During the web image build, your db container does not exist, so using RUN is incorrect here.
One option would be to include a CMD instruction in the Dockerfile, which will instruct the web container to run the ping command every time the container starts up.
Also, I've adjusted the argument being passed to the ping command: ping works at the ICMP level, so it takes only a hostname, without a scheme or port.
So, the web Dockerfile would be:
FROM alpine:3.7
CMD ["ping", "db:5432"]
Now, after docker-compose build and docker-compose up, you will see that the web container pings the db container and receives a response.
docker-compose creates a bridge network and attaches all of the containers to it so they can communicate with each other. Each container's hostname is the same as its service name in the docker-compose file. The hostnames are resolved by an internal DNS service.
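If you want to exercise the port itself rather than just ICMP reachability, you can open a TCP connection to db:5432 from the web container instead; a minimal Python sketch, assuming Python is available in the web image:
import socket

# Compose's internal DNS resolves the service name 'db' to the container's IP;
# 5432 is the port Postgres listens on inside that container.
with socket.create_connection(("db", 5432), timeout=5):
    print("db:5432 is reachable")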

Networking in Docker Compose file

I am writing a docker compose file for my web app. If I use 'links' to connect services with each other, do I also need to include 'ports'? And is 'depends_on' an alternative to 'links'? What is the best way to connect services in a compose file with one another?
The core setup for this is described in Networking in Compose. If you do absolutely nothing, then one service can call another using its name in the docker-compose.yml file as a host name, using the port the process inside the container is listening on.
Up to startup-order issues, here's a minimal docker-compose.yml that demonstrates this:
version: '3'
services:
  server:
    image: nginx
  client:
    image: busybox
    command: wget -O- http://server/
    # Hack to make the example actually work:
    # command: sh -c 'sleep 1; wget -O- http://server/'
You shouldn't use links: at all. It was an important part of first-generation Docker networking, but it's not useful on modern Docker. (Similarly, there's no reason to put expose: in a Docker Compose file.)
You always connect to the port the process inside the container is running on. ports: are optional; if you have ports:, cross-container calls always connect to the second port number and the remapping doesn't have any effect. In the example above, the client container always connects to the default HTTP port 80, even if you add ports: ['12345:80'] to the server container to make it externally accessible on a different port.
depends_on: affects two things. Try adding depends_on: [server] to the client container in the example. If you look at the "Starting..." messages that Compose prints out when it starts, this will force server to start before client starts, but this is not a guarantee that server is up and running and ready to serve requests (this is a very common problem with database containers). If you start only part of the stack with docker-compose up client, this also causes server to start with it.
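If the client really needs server to be ready (not just started), the usual fix is to retry in the client itself or wrap its command; a minimal Python sketch of an application-level retry against the nginx service above, assuming the client image has Python:
import time
import urllib.error
import urllib.request

# Poll the 'server' service until it answers HTTP, instead of relying on
# depends_on, which only orders startup.
for attempt in range(30):
    try:
        with urllib.request.urlopen("http://server/") as resp:
            print("server is up, status", resp.status)
            break
    except urllib.error.URLError:
        time.sleep(1)
else:
    raise RuntimeError("server never became ready")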
A more complete typical example might look like:
version: '3'
services:
  server:
    # The Dockerfile COPYs static content into the image
    build: ./server-based-on-nginx
    ports:
      - '12345:80'
  client:
    # The Dockerfile installs
    # https://github.com/vishnubob/wait-for-it
    build: ./client-based-on-busybox
    # ENTRYPOINT and CMD will usually be in the Dockerfile
    entrypoint: wait-for-it.sh server:80 --
    command: wget -O- http://server/
    depends_on:
      - server
SO questions in this space seem to have a number of other unnecessary options. container_name: explicitly sets the name of the container for non-Compose docker commands, rather than letting Compose choose it, and it provides an alternate name for networking purposes, but you don't really need it. hostname: affects the container's internal host name (what you might see in a shell prompt for example) but it has no effect on other containers. You can manually create networks:, but Compose provides a default network for you and there's no reason to not use it.

Putting file into HDFS using docker-compose

Is there a way to put some file, let's say data.json, into HDFS automatically right from Docker-compose/Dockerfile?
When I start namenode and datanode I can enter into containers with
docker exec -it namenode [datanode] bash, and use
hdfs dfs -put data.json hdfs:/ (when safe mode is finished)
and that works, but I need a way to run this automatically. When I try to build containers from a Dockerfile and put the commands:
FROM bde2020/hadoop-namenode:1.1.0-hadoop2.8-java8
WORKDIR /data
ADD hdfs_writer/data.json /data
# ADD python_script.py /data
CMD ["hdfs dfsadmin -safemode wait && hdfs dfs -put ./data.json hdfs:/"]
# CMD ["python python_script.py"]
The namenode container immediately terminates. I also tried the python script below, which I add to the container and run with CMD.
python_script
import time
import os
os.system("hdfs dfsadmin -safemode wait")
os.system("hdfs dfs -put -f data.json hdfs:/")
while True:
    time.sleep(5)
In that case, the container keeps running, but if I check the logs and try to list hdfs with hdfs dfs -ls hdfs:/, I get the following error
safemode: Call From 662aae005e8b/172.20.0.5 to namenode:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
19/04/18 14:36:36 WARN ipc.Client: Failed to connect to server: namenode/172.20.0.5:8020: try once and fail.
I read the link recommended in the error log, and to be honest, I am not sure I understand what I should do.
Any suggestions or ideas about a possible solution are highly valuable to me, as I am new to this field and don't have much experience.
If you need some more info, I will be happy to provide it.
docker-compose.yml (just part of it)
namenode:
  # docker-compose.yml and Dockerfile are in the same directory
  build: .
  volumes:
    - ./data/namenode:/hadoop/dfs/name
  environment:
    - CLUSTER_NAME=cluster
  env_file:
    - ./hadoop.env
  ports:
    - 50070:50070
datanode:
  image: bde2020/hadoop-datanode:1.1.0-hadoop2.8-java8
  depends_on:
    - namenode
  volumes:
    - ./data/datanode:/hadoop/dfs/data
  env_file:
    - ./hadoop.env
hadoop.env
CORE_CONF_fs_defaultFS=hdfs://namenode:8020
CORE_CONF_hadoop_http_staticuser_user=root
CORE_CONF_hadoop_proxyuser_hue_hosts=*
CORE_CONF_hadoop_proxyuser_hue_groups=*
HDFS_CONF_dfs_webhdfs_enabled=true
HDFS_CONF_dfs_permissions_enabled=false
HDFS_CONF_dfs_blocksize=1m
YARN_CONF_yarn_log___aggregation___enable=true
YARN_CONF_yarn_resourcemanager_recovery_enabled=true
YARN_CONF_yarn_resourcemanager_store_class=org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore
YARN_CONF_yarn_resourcemanager_fs_state___store_uri=/rmstate
YARN_CONF_yarn_nodemanager_remote___app___log___dir=/app-logs
YARN_CONF_yarn_log_server_url=http://historyserver:8188/applicationhistory/logs/
YARN_CONF_yarn_timeline___service_enabled=true
YARN_CONF_yarn_timeline___service_generic___application___history_enabled=true
YARN_CONF_yarn_resourcemanager_system___metrics___publisher_enabled=true
YARN_CONF_yarn_resourcemanager_hostname=resourcemanager
YARN_CONF_yarn_timeline___service_hostname=historyserver
YARN_CONF_yarn_resourcemanager_address=resourcemanager:8032
YARN_CONF_yarn_resourcemanager_scheduler_address=resourcemanager:8030
YARN_CONF_yarn_resourcemanager_resource__tracker_address=resourcemanager:8031
You can't write to networked services in a Dockerfile. Imagine running docker build, running your combined application, tearing it down, and running it again. You'll reuse the same built image without re-running the Dockerfile steps; only the content in the image itself is kept. In most cases you need some minor amount of setup to communicate between services (Docker Compose can do this for you) but that is not set up during a build sequence. This is the same answer as "you can't run database migrations from a Dockerfile", but it applies equally to Hadoop.
A container only does one thing. Your sample Dockerfile sets a different CMD that waits for the namenode to be running and sets it up. This happens instead of starting the namenode process. A Docker container runs one main command and one main command only; there is not a way to run a main command and also a side support script of some form. The container you show would probably work, but you'd need to run it as a separate container alongside the namenode container.
You don't need to be "in Docker" to access Docker-hosted services. You can use a Docker Compose ports: directive to make services visible to the host, at which point you can use ordinary clients to interact with them. The docker exec path is the equivalent of "I ssh to my server as root, and then...", which isn't how you normally deal with any service at all.
Your server containers should only run servers. In your example you're both trying to launch an HDFS namenode and also populate the server from the same container; you'd be better off having the namenode container only be the namenode and running the setup job from another container or from the host. (See the standard postgres image's entrypoint script for some idea of the gyrations required otherwise.)
Docker Compose isn't great for one-off jobs. Every time you run docker-compose up it will discover that your setup container isn't running and try to start it again. Other more powerful orchestrators could be a better fit; for example, a Kubernetes Job is a reasonable fit for what you're describing.
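Putting those points together, the upload script itself can stay almost as written, but run once in its own container next to the namenode rather than replacing the namenode's command; a minimal sketch, assuming an image that has the hdfs CLI, the same hadoop.env configuration, and /data/data.json copied in:
import subprocess
import sys

# One-off loader: wait for the namenode to leave safe mode, upload the file,
# then exit instead of sleeping forever.
subprocess.check_call(["hdfs", "dfsadmin", "-safemode", "wait"])
subprocess.check_call(["hdfs", "dfs", "-put", "-f", "/data/data.json", "hdfs:/"])
sys.exit(0)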

Docker - Run app on swarm manager (cant connect)

TLDR version:
How can I verify/set ports 7946 & 4789 on my swarm node so that I can view my application running from my docker-machine?
Complete question:
I am going through the docker tutorials and am on step 4
https://docs.docker.com/get-started/part4/#accessing-your-cluster
When I get to the "accessing your cluster" section, it says that I should just be able to grab the IP address from one of my nodes displayed using docker-machine ls. I run that command, see the IP, grab it and put it into my browser (or alternatively use curl), and I receive the error
This site can’t be reached
192.168.99.100 refused to connect.
Try:
Checking the connection
Checking the proxy and the firewall
ERR_CONNECTION_REFUSED
Below this step it has a note saying that before you enable swarm mode, assuming they mean when you run:
docker-machine ssh myvm1 "docker swarm init --advertise-addr <myvm1 ip>"
You should check the following port settings
Having connectivity trouble?
Keep in mind that in order to use the ingress network in the swarm, you need to have the following ports open between the swarm nodes before you enable swarm mode:
Port 7946 TCP/UDP for container network discovery.
Port 4789 UDP for the container ingress network.
I've spent the last few days going through documentation, redoing the steps and trying everything I can to have this work but nothing has been successful.
Can anyone explain/provide documentation to show me how to view/set these ports, or explain if I am missing some other important information?
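For the "verify" half of the question, one quick check is whether TCP 7946 is even reachable from the other VM (or the host); a minimal Python sketch using the node IP from the error above (the UDP ports 4789 and 7946 give no handshake, so they cannot be confirmed with a plain connect):
import socket

# Probe the swarm container-network-discovery port (TCP 7946) on the node.
try:
    with socket.create_connection(("192.168.99.100", 7946), timeout=3):
        print("TCP 7946 reachable")
except OSError as exc:
    print("TCP 7946 NOT reachable:", exc)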
UPDATE
I wasn't able to get swarm working, so I decided to just run everything from a docker-compose.yml file. Here is the code I used below:
docker-compose.yml file:
version: '3'
services:
  www:
    build: .
    ports:
      - "80:80"
    links:
      - db
    depends_on:
      - db
    volumes:
      - .:/opt/www
  db:
    image: mysql:5.7
    volumes:
      - /var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: supersecure
      MYSQL_DATABASE: test_db
      MYSQL_USER: jake
      MYSQL_PASSWORD: supersecure
and a Dockerfile located in the same directory containing the following:
# A simple Flask app container.
FROM python:2.7
LABEL maintainer="your name here"
# Place app in container.
ADD . /opt/www
WORKDIR /opt/www
# Install dependencies.
RUN pip install -r requirements.txt
EXPOSE 80
ENV FLASK_APP index.py
ENV FLASK_DEBUG 1
CMD python index.py
You'll need to create any other files referenced in these two files (for example requirements.txt and index.py), but those all go in the same directory as the Dockerfile and docker-compose.yml files. Please comment if anyone has questions.
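For completeness, a hypothetical minimal index.py that matches the Dockerfile above (FLASK_APP index.py, EXPOSE 80, CMD python index.py) could look like this; binding to 0.0.0.0 is what makes the published port reachable from outside the container:
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello from the www service!"

if __name__ == "__main__":
    # Bind to all interfaces so the port published as 80:80 is reachable.
    app.run(host="0.0.0.0", port=80)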
