I have a docker container with tcserver on it with the UI of an application on it. I have a second docker container that is also running tcserver, but this one has the applications engine.
I am trying to get these two to talk to each other somehow, because when I access the UI on the web browser it says that it is not connected to the engine. How can I achieve this?
You need to link the App Engine container to the UI container, because one container can only reach another over the network, through its exposed ports. It is as simple as this:
docker run --name engine -d tcserver-engine
docker run --name lala --link engine:tc-engine -d tcserver-ui
Inside the lala container you can reach the engine container under the chosen alias, in this example tc-engine.
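Note that --link is a legacy Docker feature; as a sketch of the same pairing in docker-compose form (the image names are taken from the commands above, everything else is an assumption), both services land on a shared network where each is reachable by its service name:

```yaml
# Hypothetical docker-compose.yml sketch: Compose puts both services on
# one default network, so "ui" can reach the engine at the hostname "engine".
services:
  engine:
    image: tcserver-engine   # assumed image name, from the commands above
  ui:
    image: tcserver-ui       # assumed image name, from the commands above
    depends_on:
      - engine               # start the engine first
```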
Use --name and --link in your docker run command or docker-compose.yml file:
docker run -ti --name server1 -p 8111:8111 ikamman/docker-tc-server
docker run -ti --name server2 --link server1 -p 8112:8111 ikamman/docker-tc-server
docker exec server2 curl server1:8111
This will return something like the following:
$ docker exec server2 curl server1:8111
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  3546    0  3546    0     0   3290      0 --:--:--  0:00:01 --:--:--  3292
<!--
Page: maintenance-welcome
Stage: FIRST_START_SCREEN
State revision: 12
Timestamp: Wed Jul 27 20:30:06 UTC 2016
-->
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
<head>
<title>TeamCity Maintenance — TeamCity</title>
<meta http-equiv="X-UA-Compatible" content="IE=edge"/>
<link rel="shortcut icon" href="/favicon.ico" type="image/x-icon"/>
<meta http-equiv="content-type" content="text/html; charset=UTF-8">
<meta name="application-name" content="TeamCity"/>
<meta name="description" content="Powerful Continuous Integration and Build Server"/>
<link rel="icon" href="/img/icons/TeamCity512.png" sizes="512x512"/>
I'm currently working on setting up a ddev TYPO3 webpage running in an Ubuntu dind (Docker-in-Docker) container to get around the installation requirements for ddev on Windows.
I have previously tested connecting to an nginx container inside the dind container, which worked as expected: nginx was served on localhost:80 on the host.
#host
docker run --rm -it --privileged -p 80:8080 ubuntu-dind
#container
docker run -it --rm -d -p 8080:80 --name web nginx
After successfully setting up and starting ddev, the following containers are now running inside my dind container:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
cf695c8e2ed9 drud/ddev-router:v1.21.4-built "/app/docker-entrypo…" 3 minutes ago Up 3 minutes (healthy) 127.0.0.1:80->80/tcp, 127.0.0.1:443->443/tcp, 127.0.0.1:8025-8026->8025-8026/tcp, 127.0.0.1:8036-8037->8036-8037/tcp ddev-router
6d26ecd91adf drud/ddev-webserver:20230207_fix_nvm-recruiting-built "/start.sh" 3 minutes ago Up 3 minutes (healthy) 8025/tcp, 127.0.0.1:32772->80/tcp, 127.0.0.1:32771->443/tcp ddev-recruiting-web
5e10c98eb2e7 phpmyadmin:5 "/docker-entrypoint.…" 3 minutes ago Up 3 minutes 80/tcp ddev-recruiting-dba
8e3a5254605d drud/ddev-dbserver-mariadb-10.4:v1.21.4-recruiting-built "/docker-entrypoint.…" 3 minutes ago Up 3 minutes (healthy) 127.0.0.1:32768->3306/tcp ddev-recruiting-db
68f7527750ab drud/ddev-ssh-agent:v1.21.4-built "/entry.sh ssh-agent" 4 minutes ago Up 4 minutes (healthy) ddev-ssh-agent
The next step would now be to connect to http://recruiting.ddev.site:8036/. The site is reachable from within the dind container, but I'm unsure how to reach this address from the host.
I have fired up my dind container as follows:
docker run --rm -it --privileged -v ${PWD}/project:/usr/src/project -p 80:8036 dindu
Attempting to map port 8036 of the container to port 80 on the host.
Testing the connection to port 8036 from inside the container reaches:
root@57fcae15337a:/usr/src/project# curl 127.0.0.1:8036
<!DOCTYPE html>
<html>
<head>
<title>503: No ddev back-end site available</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>503: No ddev back-end site available.</h1>
<p>This is the ddev-router container: There is no back-end webserver at the URL you specified. You may want to use "ddev start" to start the site.</p>
</body>
</html>
From the host I only get an ERR_EMPTY_RESPONSE, so there must be some additional steps I'm missing. I don't believe my problem is ddev-specific; it has more to do with me being somewhat inexperienced with Docker networking.
How do I forward an address, like http://recruiting.ddev.site, instead of a simple port to the host machine?
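One observation based on the docker ps output above: the ddev-router publishes its ports bound to 127.0.0.1 inside the dind container, so port 8036 only listens on the dind container's own loopback, not on the interface that the host-side -p 80:8036 mapping forwards to. A hedged sketch of one possible workaround inside the dind container (assuming socat can be installed in the dind image; the relay port 9036 is an arbitrary choice):

```shell
# Hypothetical sketch: relay the loopback-only router port to a port
# listening on all interfaces inside the dind container, then publish
# that port from the host instead (e.g. -p 80:9036).
socat TCP-LISTEN:9036,fork,reuseaddr TCP4:127.0.0.1:8036 &
```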
I am trying to set up a local development environment with nginx and local DNS containers in Docker. Once I bring docker compose up and type the commands below, it responds with:
$ nslookup ns.main.com
;; connection timed out; no servers could be reached
$ dig @127.0.0.1 ns.main.com
; <<>> DiG 9.18.1-1ubuntu1.2-Ubuntu <<>> @127.0.0.1 ns.main.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 38715
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
; COOKIE: e9ba5744ce2779c601000000633878c753c784e7d4f38f3e (good)
;; QUESTION SECTION:
;ns.main.com. IN A
;; Query time: 4 msec
;; SERVER: 127.0.0.1#53(127.0.0.1) (UDP)
;; WHEN: Sat Oct 01 11:28:39 CST 2022
;; MSG SIZE rcvd: 68
The test domain is not resolved and the test page is not accessible, so there is a step missing in creating the environment.
The OS is Ubuntu 22.04.1 LTS.
Because the local DNS conflicts with the network's real DNS, after running docker compose build the local resolution service has to be disabled with:
sudo systemctl stop systemd-resolved
sudo systemctl disable systemd-resolved
then run docker compose up -d
Note that the PC is then not able to access the internet.
docker compose file is:
services:
nginx:
build:
context: ./nginx/
ports:
- 80:80
volumes:
- ./nginx/html/:/usr/share/nginx/html/
- ./nginx/conf.d/:/etc/nginx/conf.d/
dns:
build:
context: ./dns/
restart: always
ports:
- 53:53
- 53:53/udp
volumes:
- ./dns/named.conf:/etc/bind/named.conf
- ./dns/zone/:/etc/bind/zone/
command: named -c /etc/bind/named.conf -g -u named
the structure and files for environment are:
the file details in services DNS:
Dockerfile file:
FROM alpine:latest
RUN apk add bind openrc
RUN rc-update -u named
named.conf file:
options {
directory "/var/bind";
allow-transfer { "none"; };
allow-query { any; };
listen-on { any; };
};
zone "main.com" IN {
type master;
file "/etc/bind/zone/main.com";
};
zone "secondary.com" IN {
type master;
file "/etc/bind/zone/secondary.com";
};
dns/zone/main.com file:
$TTL 86400
@ IN SOA ns.main.com. hostmaster.main.com. (
202 ; Serial
600 ; Refresh
3600 ; Retry
1209600 ; Expire
3600) ; Negative Cache TTL
@ IN NS ns.main.com.
ns IN A 127.0.0.1
dns/zone/secondary.com file:
$TTL 86400
@ IN SOA ns.secondary.com. hostmaster.secondary.com. (
202 ; Serial
600 ; Refresh
3600 ; Retry
1209600 ; Expire
3600) ; Negative Cache TTL
@ IN NS ns.secondary.com.
ns IN A 127.0.0.1
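The named.conf and zone files above can be sanity-checked before starting the container (a sketch; named-checkconf and named-checkzone ship with BIND, via the bind-tools package on Alpine, which is an assumption about the image):

```shell
# Validate the BIND configuration and each zone file before
# docker compose up; errors reported here typically explain a SERVFAIL.
named-checkconf dns/named.conf
named-checkzone main.com dns/zone/main.com
named-checkzone secondary.com dns/zone/secondary.com
```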
-- NGINX service:
Dockerfile file:
FROM nginx:latest
COPY ./html /usr/share/nginx/html
RUN apt-get update && apt-get install -y procps
nginx/conf.d/default.conf file:
server {
listen 80;
server_name main.com ns.main.com *.main.com;
location / {
root /usr/share/nginx/html/main;
index index.html;
}
}
server {
listen 80;
server_name secondary.com ns.secondary.com *.secondary.com;
location / {
root /usr/share/nginx/html/secondary;
index index.html;
}
}
nginx/html/main/index.html file:
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Docker Nginx</title>
</head>
<body>
<h2>Hello from Nginx container!</h2>
</body>
</html>
nginx/html/secondary/index.html file:
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Docker Nginx</title>
</head>
<body>
<h2>Hello from secondary</h2>
</body>
</html>
To access the internet again, you need to roll back the commands and deactivate/activate the network/WiFi with:
sudo systemctl enable systemd-resolved
sudo systemctl start systemd-resolved
Thanks in advance
When we do not disable the systemd-resolved service, our PC goes out to the internet; because my test domain is not registered, the PC does not receive any resolved IP from outside to route packets to, so SERVFAIL is displayed.
Once we disable systemd-resolved we can access the dockerized local DNS service. However, DNS is still resolved locally through /etc/resolv.conf, which has this default content:
nameserver 127.0.0.53
options edns0 trust-ad
search .
Since /etc/resolv.conf still points at the stub resolver, nslookup times out and fails because no server can be reached, even though the DNS server is dockerized on localhost.
My solution during testing is:
Disable the local resolution service.
Add localhost to /etc/resolv.conf:
nameserver 127.0.0.1
nameserver 127.0.0.53
options edns0 trust-ad
search .
Add localhost and an external DNS server to the interface's DNS settings (WiFi in my case) to keep external DNS resolution working too.
Deactivate/activate interface.
nslookup is OK and my app behind the docker compose nginx service is reachable.
Once I finish my work I can enable systemd-resolved again and deactivate/activate the interface. The PC also returns to defaults when it reboots.
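As a possible alternative to disabling systemd-resolved entirely (a sketch under assumptions: the WiFi interface is called wlan0 here, the dockerized BIND is reachable on 127.0.0.1:53 as in the compose file, and the port 53 binding succeeds alongside the stub listener), systemd-resolved can be told to send only the test domains to the local server:

```shell
# Hypothetical sketch: route queries for the test domains to the BIND
# container while normal resolution keeps using the default DNS.
resolvectl dns wlan0 127.0.0.1                       # wlan0 is an assumed name
resolvectl domain wlan0 '~main.com' '~secondary.com'
```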
My architecture is as follows:
2 docker-dev-1 and docker-dev-2 nodes in a docker-dev VPC
2 docker-internal-1 and docker-internal-2 nodes in a docker-internal VPC
The firewall allows tcp:2377, 7946, udp:4789, 7946, esp as documented here
All of them are managers in order to facilitate testing for the moment. The Docker version is 20.10.16. All the instances are exactly the same (packages, configuration...).
Currently I have a flask/jinja application running on docker-dev-X.
To connect to the database, the app goes through a reverse proxy which redirects the traffic arriving on port 3306 (MySQL) to a Cloud SQL instance in the docker-internal VPC.
The flask application is exposed via a reverse proxy that listens on port 8082.
Here is the docker daemon.json configuration:
{
"mtu": 1454,
"no-new-privileges": true
}
Everything works fine when I have only one docker-dev. However, as soon as I add the docker-dev-2 node, all streams with a large output passing through docker-dev-2 do not work.
Let me explain:
On docker-dev-1 :
dev@docker-dev-1:~$ curl localhost:8082/health
Ok
# With a heavier page
dev@docker-dev-1:~$ curl localhost:8082/auth/login
<!DOCTYPE html>
<html lang="en_GB">
<head>
... # Lots of HTMLs
</html>
No problem everything is working fine.
On docker-dev-2 :
dev@docker-dev-2:~$ curl localhost:8082/health
Ok
dev@docker-dev-2:~$ curl -I localhost:8082/health
HTTP/1.1 200 OK
Server: nginx
Date: Wed, 18 May 2022 12:34:57 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 2
...
# With a heavier page
dev@docker-dev-2:~$ curl localhost:8082/auth/login
^C # Timeout
# Same curl but shows only header
dev@docker-dev-2:~$ curl -I localhost:8082/auth/login
HTTP/1.1 200 OK
Server: nginx
Date: Wed, 18 May 2022 10:43:57 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 222967 # Long Content-Length
Connection: keep-alive
...
As you can see, when I try to curl the /health --> No problem
When I try to curl /auth/login --> The request timeout, I have no answer
When I try to curl /auth/login to show only headers --> The request works
In a container, everything is working fine, on docker-dev-1 and on docker-dev-2 :
dev@docker-dev-2:~$ docker run -it --rm --name debug --network jinja_flask_network nicolaka/netshoot bash
bash-5.1# curl reverse_proxy_nginx:8082/health
Ok
bash-5.1# curl reverse_proxy_nginx:8082/auth/login
<!DOCTYPE html>
<html lang="en_GB">
<head>
... # Lots of HTMLs
</html>
So the problem doesn't seem to be in the Docker network.
The problem seems to appear when the request output is too long.
I already reduced the MTU to 1454 a few months ago to resolve what seemed to be the same problem, but inside the Docker network.
So, when the request lands on docker-dev-1, no problem, the website loads normally; but when the request lands on docker-dev-2, infinite loading ends in a timeout.
I hope I was clear in my explanation. Do you have any idea?
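One hedged observation on the MTU setting quoted above: the "mtu" key in daemon.json applies to the default bridge network, while swarm overlay networks keep their own MTU (1500 by default) unless it is set per network at creation time. That would match the symptom that only large, multi-packet responses crossing nodes hang. A sketch of recreating the app's network as an overlay with a lowered MTU (the network name is taken from the docker run transcript above):

```shell
# Hypothetical sketch: recreate the overlay network with an MTU that
# fits the VPC limit (1454 is the value from daemon.json above).
docker network create \
  --driver overlay \
  --attachable \
  --opt com.docker.network.driver.mtu=1454 \
  jinja_flask_network
```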
I have a hosted container web app on Azure which is working, but for CI with the GitLab container registry I need to call the webhook URL in the gitlab-ci.yaml file.
When I called the webhook URL with Postman with a POST request, it pulled the latest image from the registry. When I made the same request in the gitlab-ci.yaml file using curl, it showed an error: Access is denied due to invalid credentials.
$ curl -d -X POST https://$acr-image:SOME_SECRET_KEY@acr-image.scm.azurewebsites.net/docker/hook
curl: (6) Could not resolve host: POST
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1"/>
<title>401 - Unauthorized: Access is denied due to invalid credentials.</title>
<style type="text/css">
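A side note on the curl invocation in the question: -d takes an argument, so in `curl -d -X POST …` the -X is consumed as the request body and POST is then parsed as a URL, which is exactly what produces `curl: (6) Could not resolve host: POST`. A sketch of the corrected shape (credentials and host are the placeholders from the question):

```shell
# Sketch: -X POST comes before the URL, and single quotes keep the
# literal "$" of the Azure deployment username from being expanded
# by the shell.
curl -X POST 'https://$acr-image:SOME_SECRET_KEY@acr-image.scm.azurewebsites.net/docker/hook'
```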
After starting a Sonatype Nexus 3 image (command 1), I tried to create a repo and push a test image (command 2) to it, but got a 405 error (error 1).
command 1
$ docker run -d -p 8081:8081 --name nexus sonatype/nexus3:3.14.0
command 2
$ docker push 127.0.0.1:8081/repository/test2/image-test:0.1
error 1
error parsing HTTP 405 response body: invalid character '<' looking for beginning of value: "\n<!DOCTYPE html>\n<html lang=\"en\">\n<head>\n <title>405 - Nexus Repository Manager</title>\n <meta http-equiv=\"Content-Type\" content=\"text/html; charset=UTF-8\"/>\n\n\n <!--[if lt IE 9]>\n <script>(new Image).src=\"http://127.0.0.1:8081/favicon.ico?3.14.0-04\"</script>\n <![endif]-->\n <link rel=\"icon\" type=\"image/png\" href=\"http://127.0.0.1:8081/favicon-32x32.png?3.14.0-04\" sizes=\"32x32\">\n <link rel=\"mask-icon\" href=\"http://127.0.0.1:8081/safari-pinned-tab.svg?3.14.0-04\" color=\"#5bbad5\">\n <link rel=\"icon\" type=\"image/png\" href=\"http://127.0.0.1:8081/favicon-16x16.png?3.14.0-04\" sizes=\"16x16\">\n <link rel=\"shortcut icon\" href=\"http://127.0.0.1:8081/favicon.ico?3.14.0-04\">\n <meta name=\"msapplication-TileImage\" content=\"http://127.0.0.1:8081/mstile-144x144.png?3.14.0-04\">\n <meta name=\"msapplication-TileColor\" content=\"#00a300\">\n\n <link rel=\"stylesheet\" type=\"text/css\" href=\"http://127.0.0.1:8081/static/css/nexus-content.css?3.14.0-04\"/>\n</head>\n<body>\n<div class=\"nexus-header\">\n \n <div class=\"product-logo\">\n <img src=\"http://127.0.0.1:8081/static/images/nexus.png?3.14.0-04\" alt=\"Product logo\"/>\n </div>\n <div class=\"product-id\">\n <div class=\"product-id__line-1\">\n <span class=\"product-name\">Nexus Repository Manager</span>\n </div>\n <div class=\"product-id__line-2\">\n <span class=\"product-spec\">OSS 3.14.0-04</span>\n </div>\n </div>\n \n</div>\n\n<div class=\"nexus-body\">\n <div class=\"content-header\">\n <img src=\"http://127.0.0.1:8081/static/rapture/resources/icons/x32/exclamation.png?3.14.0-04\" alt=\"Exclamation point\" aria-role=\"presentation\"/>\n <span class=\"title\">Error 405</span>\n <span class=\"description\">Method Not Allowed</span>\n </div>\n <div class=\"content-body\">\n <div class=\"content-section\">\n HTTP method POST is not supported by this URL\n </div>\n 
</div>\n</div>\n</body>\n</html>\n\n"
Explanation
After some research I found out that Nexus 3 docker repositories are designed to work with an individual port for each repository (hosted, group, or proxy).
https://issues.sonatype.org/browse/NEXUS-9960
Solution
So I destroyed my previous docker container, because it held nothing I needed to keep, and launched the same command but with an extra port published for docker (8082):
$ docker run -d -p 8081:8081 -p 8082:8082 --name nexus sonatype/nexus3:3.14.0
So when you create a new docker repository you need to define at least an HTTP connector port, which I defined in the image above as 8082.
After that you have to log in to the service with the default admin account (admin / admin123):
$ docker login 127.0.0.1:8082
Username: admin
Password:
WARNING! Your password will be stored unencrypted in /home/user/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
Then I pushed the new tag to that URL and it worked:
$ docker push 127.0.0.1:8082/repository/test2/image-test:0.1
The push refers to repository [127.0.0.1:8082/repository/test2/image-test]
cd76d43ec36e: Pushed
8ad8344c7fe3: Pushed
b28ef0b6fef8: Pushed
0.1: digest: sha256:315f00bd7986508cb0984130bbe3f7f26b2ec477122c9bf7459b0b64e443a232 size: 948
Extra - Dockerfile
So because I needed to create a custom nexus3 docker image for my production environment I started the Dockerfile like this:
FROM sonatype/nexus3:3.14.0
ENV NEXUS_DATA=/nexus-data/
EXPOSE 8090-8099
I will be using ports 8090 to 8099 to serve different docker image repositories instead of 8082, but in case I need more ports I can just change the values or add a new range of ports.
Hope it was useful!!
Nexus Documentation Says:
Sharing an image can be achieved by publishing it to a hosted repository. This is completely private and requires you to tag and push the image. When tagging an image, you can use the image identifier (imageId). It is listed when showing the list of all images with docker images. Syntax and an example (using imageId) for creating a tag are:
docker tag <imageId or imageName> <nexus-hostname>:<repository-port>/<image>:<tag>
docker tag af340544ed62 nexus.example.com:18444/hello-world:mytag
Once the tag, which can be equivalent to a version, is created successfully, you can confirm its creation with docker images and issue the push with the syntax:
docker push <nexus-hostname>:<repository-port>/<image>:<tag>
Note that the port needs to be the repository connector port configured for the hosted repository to which you want to push to. You can not push to a repository group or a proxy repository.
Hope it helps!