I have a VirtualBox VM with Ubuntu 14.10 server, Jenkins and Apache installed. When I access the IP of this VM, the Apache homepage loads correctly. But when I try to access Jenkins via x.x.x.x:8080 (the IP of my VM), it won't load; I only get a connection timeout error.
I tried configuring a different port (8081 and 6060), but that doesn't work. I also added port forwarding in VirtualBox, but that doesn't work either...
Does anyone have suggestions for how I can access Jenkins running inside a virtual machine?
Depending on whether or not you need the box to be accessible by machines other than your host, you need a bridged or host-only network interface: https://www.virtualbox.org/manual/ch06.html
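If the VM is currently on NAT, switching the adapter can be done from the host with VBoxManage (a sketch; "ubuntu-server" and the adapter names are placeholders for your own setup, and the VM must be powered off):
VBoxManage list vms                                    # find the exact VM name
# Bridged: the VM gets its own address on the LAN, reachable from other machines
VBoxManage modifyvm "ubuntu-server" --nic1 bridged --bridgedadapter1 eth0
# Host-only: the VM is reachable from the host only
VBoxManage modifyvm "ubuntu-server" --nic1 hostonly --hostonlyadapter1 vboxnet0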
I've just done a full install of Nginx, Java, and Jenkins:
sudo apt-get install nginx
sudo apt-get install openjdk-7-jdk
wget -q -O - https://pkg.jenkins.io/debian/jenkins-ci.org.key | sudo apt-key add -
sudo sh -c 'echo deb http://pkg.jenkins.io/debian-stable binary/ > /etc/apt/sources.list.d/jenkins.list'
sudo apt-get update
sudo apt-get install jenkins
on a fresh Ubuntu VirtualBox instance where the first interface is Host-only and the second is NAT.
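Before touching the network side, it's worth confirming from inside the VM that Jenkins is actually up and listening (a quick sanity check, nothing specific to this setup):
sudo service jenkins status            # is the Jenkins service running?
sudo netstat -tlnp | grep 8080         # Jenkins should be listening on 0.0.0.0:8080
curl -I http://localhost:8080          # expect an HTTP response locally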
Here is my /etc/network/interfaces:
# The loopback network interface
auto lo
iface lo inet loopback
# Host-only interface
auto eth0
iface eth0 inet static
address 192.168.56.20
netmask 255.255.255.0
network 192.168.56.0
broadcast 192.168.56.255
# NAT interface
auto eth1
iface eth1 inet dhcp
I can reach Jenkins from my host at 192.168.56.20:8080 with no port forwarding necessary. You must have something unrelated to Jenkins going on, possibly firewall related. Try setting Jenkins back to 8080, removing your port forwarding, and checking for firewall rules that could be getting in the way.
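To check for a firewall getting in the way, something like this inside the VM (ufw may not even be installed on a stock image):
sudo ufw status verbose                # is ufw active, and is 8080 allowed?
sudo iptables -L INPUT -n -v           # look for DROP/REJECT rules on the INPUT chain
sudo ufw allow 8080/tcp                # if ufw is active, open the Jenkins port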
When I start a simple docker container (e.g. Portainer) with
docker run -d --name portainer -p 9000:9000 -v /var/run/docker.sock:/var/run/docker.sock portainer/portainer
the container is accessible from the internet as expected.
When I stop (docker stop portainer) and start (docker start portainer) the container, port 9000 is open again (verified with nmap), but the Portainer web interface loads forever.
# first run
networkctl
IDX LINK TYPE OPERATIONAL SETUP
1 lo loopback carrier configured
2 enp35s0 ether routable configured
3 enp36s0 ether no-carrier configuring
5 br-1815f2210327 bridge no-carrier configuring
6 br-7f9b2f2637a1 bridge no-carrier configuring
7 br-a9ae27884558 bridge no-carrier configuring
6552 br-39aac8ad8ef3 bridge routable configuring
6559 docker0 bridge no-carrier configuring
# next run
networkctl
IDX LINK TYPE OPERATIONAL SETUP
1 lo loopback carrier configured
2 enp35s0 ether routable configured
3 enp36s0 ether no-carrier configuring
5 br-1815f2210327 bridge no-carrier configuring
6 br-7f9b2f2637a1 bridge no-carrier configuring
7 br-a9ae27884558 bridge no-carrier configuring
6552 br-39aac8ad8ef3 bridge no-carrier configuring
6559 docker0 bridge no-carrier configuring
I already tried different workarounds that I found on the internet, like
nano /etc/docker/daemon.json
{ "debug": true, "bip": "172.20.0.1/16" }
and this config file, in various configurations:
nano /etc/systemd/network/docker0.network
#[Match]
#Name=docker0
#[Network]
#IPForward=yes
#[Network]
#Address=172.17.0.1/16
#[Link]
#Unmanaged=yes
(Currently everything is commented out.)
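For reference, an uncommented variant that tells systemd-networkd to leave docker0 alone would look like this (a sketch based on the Unmanaged= option in the [Link] section of systemd.network):
# /etc/systemd/network/docker0.network
[Match]
Name=docker0

[Link]
Unmanaged=yes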
When I restart the docker daemon with
systemctl restart docker
and then start the docker container
docker start portainer
it's working fine again.
My system is a Linux root server hosted by strato.de:
docker -v
Docker version 20.10.6, build 370c289
cat /etc/issue
Ubuntu 20.04.2 LTS
uname -r
5.4.0-73-generic
The problem occurs with all of my docker containers on that server.
I would be very grateful for any further tips.
UPDATE
Docker on Ubuntu doesn't connect to localhost
The solution mentioned there does not seem to work on my server with Ubuntu 20.04.
Yesterday I installed the same OS and docker version in a VM. Everything is working fine there.
My problem was that I could ping google.com only once in a container (docker run --rm alpine ping google.com); after exiting, it would not ping the next time I ran the same command. In ifconfig docker0, the inet address was gone after exiting the container, while the inet6 address was still there after running the command once.
When running networkctl status, the docker0 link was stuck at configuring.
This might do the trick:
The default Netplan config (/etc/netplan/01-netcfg.yaml) on my Ubuntu 22.04 server from Strato (dedicated server) is:
network:
  version: 2
  ethernets:
    mainif:
      match:
        name: '*'
      dhcp4: yes
Replace it with something like this:
network:
  version: 2
  ethernets:
    enp3s0:
      dhcp4: yes
      dhcp6: no
    enp2s0:
      dhcp4: yes
      dhcp6: no
Apply the Netplan config:
sudo netplan try or sudo netplan apply
Restart Docker:
sudo systemctl restart docker
When you now run networkctl, the docker0 link should be unmanaged.
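A quick way to verify (the exact column layout varies by systemd version):
networkctl | grep docker0     # SETUP should read "unmanaged" rather than "configuring"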
For your interest: I know this is not the best answer, but in my case I solved the problem by downgrading the OS on the root server :(
cat /etc/issue
Ubuntu 18.04.4 LTS
docker -v
Docker version 20.10.7, build f0df350
I'm running a virtual machine on GCE with CentOS 7. I've configured the machine with two network interfaces. When doing so, the user is required to enter the following commands to configure eth1 (every interface except eth0 requires this approach). On my machine, eth1's gateway is 10.140.0.1.
sudo ifconfig eth1 10.140.0.2 netmask 255.255.255.255 broadcast 10.140.0.2 mtu 1430
sudo echo "1 rt1" | sudo tee -a /etc/iproute2/rt_tables # (sudo su - first if permission denied)
sudo ip route add 10.140.0.1 src 10.140.0.2 dev eth1
sudo ip route add default via 10.140.0.1 dev eth1 table rt1
sudo ip rule add from 10.140.0.2/20 table rt1
sudo ip rule add to 10.140.0.2/20 table rt1
I have used the above with success, but the configuration is not persistent. I know it's possible to do so, but I first need to fully understand what the above is actually doing (breaking my problem into smaller parts).
sudo ifconfig eth1 10.140.0.2 netmask 255.255.255.255 broadcast 10.140.0.2 mtu 1430
This command assigns 10.140.0.2 to eth1 with a /32 netmask, so the broadcast address is the same as the interface address. It also sets the MTU to 1430, which is strange because the other interfaces are set to 1460. Is this command really needed?
sudo echo "1 rt1" | sudo tee -a /etc/iproute2/rt_tables # (sudo su - first if permission denied)
From what I read, this command appends "1 rt1" to the file /etc/iproute2/rt_tables. If this is run once, does it need to be run each time the network comes up? It seems like it only needs to be run once.
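If this ends up in a boot script, a guard keeps repeated runs from appending duplicate entries (a small sketch):
# append "1 rt1" only if that exact line is not already present
grep -qx '1 rt1' /etc/iproute2/rt_tables || echo '1 rt1' | sudo tee -a /etc/iproute2/rt_tables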
sudo ip route add 10.140.0.1 src 10.140.0.2 dev eth1
sudo ip route add default via 10.140.0.1 dev eth1 table rt1
sudo ip rule add from 10.140.0.2/20 table rt1
sudo ip rule add to 10.140.0.2/20 table rt1
I know these commands add non-persistent rules and routes to the network configuration. Once I know the answers to the above, I will come back to the approach of making this persistent.
Referring to your question on the Google group thread, as I mentioned in the post:
IP routes and IP rules need to be made persistent to avoid being lost after a VM reboot or a network service restart. Depending on the operating system, the configuration files required to make the routes persistent can differ. Here is a Stack Exchange thread for CentOS 7, mentioning the files "/etc/sysconfig/network-scripts/route-ethX" and "/etc/sysconfig/network-scripts/rule-ethX" to keep the IP routes and rules persistent. Here is the CentOS documentation for persistent static routes.
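As a concrete sketch following that convention (the addresses are the ones from your question, so adapt as needed; each file takes ip route / ip rule arguments, one entry per line):
# /etc/sysconfig/network-scripts/route-eth1
10.140.0.1 src 10.140.0.2 dev eth1
default via 10.140.0.1 dev eth1 table rt1
# /etc/sysconfig/network-scripts/rule-eth1
from 10.140.0.2/20 table rt1
to 10.140.0.2/20 table rt1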
I am running a Debian server (stable) with the docker.io Debian package. This is the one distributed by Debian, not the one from the Docker developers. Since docker.io is only available in sid, I have installed it from there (apt install -t unstable docker.io).
My firewall does allow connections to/from docker containers:
$ sudo ufw status
(...)
Anywhere ALLOW 172.17.0.0/16
172.17.0.0/16 ALLOW Anywhere
I also have this in /etc/ufw/before.rules:
*nat
:POSTROUTING ACCEPT [0:0]
-A POSTROUTING -s 172.17.0.0/16 -o eth0 -j MASQUERADE
So -- I have created an image with
$ sudo debootstrap stable ./stable-chroot http://deb.debian.org/debian > /dev/null
$ sudo tar -C stable-chroot -c . | docker import - debian-stable
Then I started a container and installed apache2 and netcat. Port 1111 on the host machine is redirected to port 80 in the container:
$ docker run -ti -p 1111:80 debian-stable bash
root@dc4996de9fe6:/# apt update
(... usual output from apt update ...)
root@dc4996de9fe6:/# apt install apache2 netcat
(... expected output, installation successful ...)
root@dc4996de9fe6:/# service apache2 start
root@dc4996de9fe6:/# service apache2 status
[ ok ] apache2 is running.
And from the host machine I can connect to the apache server:
$ curl 127.0.0.1:1111
(... HTML from the Debian apache placeholder page ...)
$ telnet 127.0.0.1 1111
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
And it waits for me to type (if I type GET / I get the Debian apache placeholder page). Ok. And if I stop apache inside the container,
root@06da401a5724:/# service apache2 stop
[ ok ] Stopping Apache httpd web server: apache2.
root@06da401a5724:/# service apache2 status
[FAIL] apache2 is not running ... failed!
Then connections to port 1111 on the host will be rejected (as expected):
$ telnet 127.0.0.1 1111
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
Connection closed by foreign host.
Now, if I start netcat on the container, listening on port 80:
root@06da401a5724:/# nc -l 172.17.0.2 80
Then I cannot connect from the host!
$ telnet localhost 1111
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
Connection closed by foreign host.
The same happens if I try nc -l 127.0.0.1 80 in the container.
What could be happening? Both apache and netcat were listening on port 80. What have I missed?
I'd appreciate any hints...
update: if I try this:
root@12b8fd142e00:/# nc -vv -l -p 80
listening on [any] 80 ...
172.17.0.1: inverse host lookup failed: Unknown host
invalid connection to [172.17.0.2] from (UNKNOWN) [172.17.0.1] 54876
Then it works!
Now it's weird... ifconfig inside the container tells me it has IP 172.17.0.2, but I can only use netcat binding to 172.17.0.1:
root@12b8fd142e00:/# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.17.0.2 netmask 255.255.0.0 broadcast 0.0.0.0
inet6 fe80::42:acff:fe11:2 prefixlen 64 scopeid 0x20<link>
And Apache seems to want 172.17.0.2 instead:
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.17.0.2. Set the 'ServerName' directive globally to suppress this message
but it actually uses 172.17.0.1:
root@12b8fd142e00:/# netstat
Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 12b8fd142e00:http 172.17.0.1:54942 TIME_WAIT
tcp 0 0 12b8fd142e00:39528 151.101.48.204:http TIME_WAIT
Apache is not listening on 172.17.0.1, that's the address of the host (in the docker bridge).
In the netstat output, the local address has been resolved to 12b8fd142e00. Use the -n option with netstat to see unresolved (numeric) addresses (for example netstat -plnet to see listening sockets). 172.17.0.1 is the foreign address that connected to Apache (and it's indeed the host).
The last line in the netstat output shows that some process made a connection to 151.101.48.204:80, probably to make an HTTP request. You can see the PID/name of the process with netstat -p.
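As for why nc -l 172.17.0.2 80 accepted nothing while nc -vv -l -p 80 worked: my assumption is that the container has Debian's netcat-traditional, where listen mode takes the port via -p (and an optional bind address via -s), and positional host/port arguments instead restrict which remote peer is expected. Under that assumption, the explicit equivalent would be:
# netcat-traditional: listen port via -p, bind address via -s;
# positional host/port in listen mode filter the expected peer, they do not bind
nc -vv -l -p 80 -s 172.17.0.2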
I am trying to set up 4 containers (with nginx) on a system with 4 IPs and 2 interfaces. Can someone please help me? For now only 3 containers are accessible; the 4th one times out when accessed from the browser instead of showing the welcome page. I have added the IP routes needed.
Host is Ubuntu.
So when this happened, I thought it had something to do with the IP routes. So on the same system I installed Apache and created 4 virtual hosts, each listening on a different IP and with a different document root.
When checked, all the IPs were accessible and showed the correct documents.
So now I am stuck. What do I do now?
Configuration:
4 IPs and 2 interfaces, so I created 2 IP aliases. All IPs are configured in /etc/network/interfaces except the first one; eth0 is set to DHCP mode.
auto eth0:1
iface eth0:1 inet static
address 172.31.118.182
netmask 255.255.255.0
auto eth1
iface eth1 inet static
address 172.31.119.23
netmask 255.255.255.0
auto eth1:1
iface eth1:1 inet static
address 172.31.119.11
netmask 255.255.255.0
It goes like this. The IPs are private IPs, so I guess there is no problem sharing them here.
eth0 - 172.31.118.249
eth0:1 - 172.31.118.182
eth1 - 172.31.119.23
eth1:1 - 172.31.119.11
Now the docker creation commands
All are just basic nginx containers, so when working they will show the default nginx page.
sudo docker create -i -t -p 172.31.118.249:80:80 --name web1 web_fresh
sudo docker create -i -t -p 172.31.118.182:80:80 --name web2 web_fresh
sudo docker create -i -t -p 172.31.119.23:80:80 --name web3 web_fresh
sudo docker create -i -t -p 172.31.119.11:80:80 --name web4 web_fresh
sudo docker start web1
sudo docker start web2
sudo docker start web3
sudo docker start web4
--
Now, web1 and web2 became immediately accessible, but the containers bound to eth1 and eth1:1 were not. So I figured IP routes must be the issue and went ahead and added some routes.
ip route add 172.31.119.0/24 dev eth1 src 172.31.119.23 table eth1
ip route add default via 172.31.119.1 table eth1
ip route add 172.31.119.0/24 dev eth1 src 172.31.119.11 table eth11
ip route add default via 172.31.119.1 table eth11
ip rule add from 172.31.119.23 lookup eth1 prio 1002
ip rule add from 172.31.119.11 lookup eth11 prio 1003
This made web3 also accessible, but not the one on eth1:1. So this is where I am stuck at the moment.
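(For completeness: named tables such as eth1 and eth11 must be declared in /etc/iproute2/rt_tables before ip route accepts them by name. A sketch of those entries, with arbitrary but unique table numbers:)
echo '1 eth1'  | sudo tee -a /etc/iproute2/rt_tables
echo '2 eth11' | sudo tee -a /etc/iproute2/rt_tables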
As part of a security test of an iOS app I'm developing, I'd like to verify that it correctly validates SSL/TLS certificates when connecting to various APIs. I installed mitmproxy on my Mac and configured it as a transparent proxy, then configured the iPhone's WiFi settings for the transparent proxy. The iPhone takes a very long time to show that it's connected to WiFi, and after it does, it can't access the network. Nothing at all shows up in mitmproxy, including in its event log.
Details of the mitmproxy configuration
I installed mitmproxy 0.11.3 in my system python (and renamed the outdated pyOpenSSL that ships with OSX, so that it uses pyOpenSSL 0.14 as installed with mitmproxy by pip).
I'm using the following script to configure and start pf and mitmproxy:
#!/bin/bash -x
sudo sysctl -w net.inet.ip.forwarding=1
# sudo sysctl -w net.inet.ip.scopedroute=0
## OSX can't change the net.inet.ip.scopedroute kernel flag in user space so I used:
## sudo defaults write "/Library/Preferences/SystemConfiguration/com.apple.Boot" "Kernel Flags" "net.inet.ip.scopedroute=0"
## and then rebooted
sudo defaults read /Library/Preferences/SystemConfiguration/com.apple.Boot
cat > pf.conf << _EOF_
rdr on en0 inet proto tcp to any port 80 -> 127.0.0.1 port 8080
rdr on en0 inet proto tcp to any port 443 -> 127.0.0.1 port 8080
_EOF_
cat pf.conf
sudo pfctl -d
sudo pfctl -f pf.conf
sudo pfctl -e
mitmproxy -T --host
Interface en0 is my Mac's WiFi connection.
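To confirm the rdr rules actually loaded, pf can list the active translation rules (just a check; output omitted here):
sudo pfctl -s nat    # should list the two rdr rules for ports 80 and 443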
The output from that script (visible after stopping mitmproxy with control-C) looks like this:
$ ./transparent.sh
+ sudo sysctl -w net.inet.ip.forwarding=1
net.inet.ip.forwarding: 1 -> 1
+ sudo defaults read /Library/Preferences/SystemConfiguration/com.apple.Boot
{
"Kernel Flags" = "net.inet.ip.scopedroute=0";
}
+ cat
+ cat pf.conf
rdr on en0 inet proto tcp to any port 80 -> 127.0.0.1 port 8080
rdr on en0 inet proto tcp to any port 443 -> 127.0.0.1 port 8080
+ sudo pfctl -d
No ALTQ support in kernel
ALTQ related functions disabled
pf disabled
+ sudo pfctl -f pf.conf
pfctl: Use of -f option, could result in flushing of rules
present in the main ruleset added by the system at startup.
See /etc/pf.conf for further details.
No ALTQ support in kernel
ALTQ related functions disabled
+ sudo pfctl -e
No ALTQ support in kernel
ALTQ related functions disabled
pf enabled
+ mitmproxy -T --host
Details of the iOS configuration
I'm using a physical iPhone 5s on iOS 8.1, connected to the same WiFi network as the Mac.
In the phone's WiFi configuration I've used 192.168.20.118, because that is the IP address of my Mac on the same WiFi network, which I found using ifconfig:
$ ifconfig
[...]
en0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
ether xx:xx:xx:xx:xx:xx
inet6 fe80::22c9:d0ff:fe84:983b%en0 prefixlen 64 scopeid 0x4
inet 192.168.20.118 netmask 0xffffff00 broadcast 192.168.20.255
nd6 options=1<PERFORMNUD>
media: autoselect
status: active
[...]
I got the same problem. In my case, I turned off the OS X firewall in the Settings panel. It worked, and I could use mitmproxy as a transparent proxy.
I ran into the same problem today and solved it only by setting the DNS manually; I think mitmproxy does not provide DNS.