How to connect a host PC to a ZedBoard and share the host PC's internet access?

I have tried to connect to a Digilent ZedBoard from my host PC, which I can do over UART, but I am not able to SSH into the board or share my host PC's internet connection so that the ZedBoard can access the internet.
ZedBoard is running: Xillinux distribution for Zynq-7000 EPP
Host PC is running: Ubuntu 16.04
How should I set this up?

We will go through the steps of communicating with a Digilent ZedBoard over the UART and over the Ethernet port.
Using the UART port
Connect the host to the ZedBoard's UART port (micro USB) and execute on the host:
# Install minicom
sudo apt update && sudo apt install minicom
# Run minicom (your user may need to be in the dialout group, or use sudo)
minicom -D /dev/ttyACM0 -b 115200 -8 -o
Congratulations, you are connected to the ZedBoard.
* For minicom help: CTRL+a z
* To exit minicom: CTRL+a x
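If /dev/ttyACM0 does not exist on your machine, the device node may differ. A quick way to find it (a generic sketch, not specific to Xillinux):
# List candidate serial devices, then check the kernel log for the one
# that appeared when the board was plugged in
ls /dev/ttyACM* /dev/ttyUSB*
dmesg | grep -i tty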
Connect using the board's Ethernet port
Connect the ZedBoard to the host using the Ethernet port on the host system, or an Ethernet-to-USB adapter.
By default the ZedBoard's OS has eth0 configured with the static IP 192.168.1.10.
Configure on the host:
Network Connections > (Select the connection interface to the ZedBoard) > Edit > IPv4 Settings:
Change Method to Manual
Edit Address to: 192.168.1.1
Edit Netmask to: 255.255.255.0
Use the menu on the host to disconnect and reconnect the interface that you have just configured.
Connect to the board with: ssh root@192.168.1.10
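If you prefer the command line over the GUI, the same static setup can be sketched with nmcli (assuming NetworkManager manages the interface; eth1 and the connection name zedboard are placeholders for your setup):
# Find the interface wired to the ZedBoard
nmcli device status
# Create a manually-addressed connection profile and activate it
nmcli connection add type ethernet ifname eth1 con-name zedboard \
    ipv4.method manual ipv4.addresses 192.168.1.1/24
nmcli connection up zedboard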
Share your PC's internet with the ZedBoard
Network Connections > (Select the connection interface) > Edit > IPv4 Settings:
* Change Method to Shared to other computers
Use the menu on the host to disconnect and reconnect the interface that you have just configured.
Execute ip addr and confirm the IP of the connection interface that is being shared:
10.42.0.1 on my machine (this may be different on yours)
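The nmcli equivalent, reusing the placeholder connection name from the sketch above:
# Switch the profile to shared mode; NetworkManager then hands out
# addresses from 10.42.0.0/24 and NATs the ZedBoard's traffic
nmcli connection modify zedboard ipv4.method shared ipv4.addresses ""
nmcli connection down zedboard && nmcli connection up zedboard
ip addr    # confirm the shared address, typically 10.42.0.1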
Use minicom to connect to the board (see above).
On the ZedBoard:
Edit the file /etc/network/interfaces to contain:
# interfaces(5) file used by ifup(8) and ifdown(8)
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet static
address 10.42.0.10
netmask 255.255.255.0
gateway 10.42.0.1
And fix your DNS resolver by editing the file /etc/resolv.conf to contain:
nameserver 10.42.0.1
Execute this command to apply the new configuration on your ZedBoard:
ifdown eth0; ifup eth0
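To confirm that the new configuration took effect on the ZedBoard:
ip addr show eth0    # should now list 10.42.0.10/24
ip route             # should show: default via 10.42.0.1 dev eth0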
And voilà! At this point you should be able to ping your host:
root@localhost:~# ping 10.42.0.1
PING 10.42.0.1 (10.42.0.1) 56(84) bytes of data.
64 bytes from 10.42.0.1: icmp_req=1 ttl=64 time=0.424 ms
64 bytes from 10.42.0.1: icmp_req=2 ttl=64 time=0.498 ms
Ping an internet host (8.8.8.8) through your host's connection:
root@localhost:~# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_req=1 ttl=53 time=6.93 ms
64 bytes from 8.8.8.8: icmp_req=2 ttl=53 time=6.89 ms
64 bytes from 8.8.8.8: icmp_req=3 ttl=53 time=7.22 ms
And if you have set up /etc/resolv.conf correctly, you can also access the internet using domain names:
root@localhost:~# ping www.google.com
PING www.google.com (172.217.10.132) 56(84) bytes of data.
64 bytes from lga34s16-in-f4.1e100.net (172.217.10.132): icmp_req=1 ttl=53 time=7.02 ms
64 bytes from lga34s16-in-f4.1e100.net (172.217.10.132): icmp_req=2 ttl=53 time=7.20 ms
Additional notes
Files to keep in mind
/etc/network/interfaces describes the network interfaces
/etc/hostname sets the system's hostname
/etc/hosts maps IP addresses to hostnames
/etc/resolv.conf configures the DNS resolver
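For example, a convenience entry in /etc/hosts on the ZedBoard lets you reach the host by name (hostpc is an arbitrary name for illustration; the address is the shared-connection address from above):
# /etc/hosts on the ZedBoard
10.42.0.1    hostpc
After this, ping hostpc works without involving DNS.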

Related

IP link to connect host to macvlan containers breaks after a couple of seconds

I have a macvlan network where I put all my containers. It was created like this:
docker network create -d macvlan -o parent=eno2 \
--subnet 10.0.2.0/24 \
--gateway 10.0.2.1 \
--ip-range 10.0.2.0/24 \
mynet
This allows container <-> container communication and external computer <-> container communication. The problem is that host <-> container communication is not possible.
Searching for how to fix this, I found this blog.
That solution appears to work; running these commands:
ip link add zrz-ch link eno2 type macvlan mode bridge
ip addr add 10.0.2.15/24 dev zrz-ch
ifconfig zrz-ch up
I can successfully ping any container, and any container can ping the host.
The problem is that after 5-10 seconds the link breaks and the communication does not work anymore.
Ping from host -> container:
root@pluto:/home/zrz# ping 10.0.2.10
PING 10.0.2.10 (10.0.2.10) 56(84) bytes of data.
64 bytes from 10.0.2.10: icmp_seq=1 ttl=64 time=0.106 ms
64 bytes from 10.0.2.10: icmp_seq=2 ttl=64 time=0.039 ms
64 bytes from 10.0.2.10: icmp_seq=3 ttl=64 time=0.050 ms
64 bytes from 10.0.2.10: icmp_seq=4 ttl=64 time=0.068 ms
From 10.0.2.15 icmp_seq=5 Destination Host Unreachable
From 10.0.2.15 icmp_seq=6 Destination Host Unreachable
From 10.0.2.15 icmp_seq=7 Destination Host Unreachable
From 10.0.2.15 icmp_seq=8 Destination Host Unreachable
From 10.0.2.15 icmp_seq=9 Destination Host Unreachable
From 10.0.2.15 icmp_seq=10 Destination Host Unreachable
As you can see, after ~5 seconds it stops working; the same happens when pinging from container -> host (after running the above commands on the host).
I can ping again if I do
ip link delete zrz-ch
and then run the commands above again. But as I said, it breaks after a couple of seconds.
Any ideas on how I can fix this?

docker network through a specific physical interface

So I'm trying to create a network (docker network create) so that its traffic will pass through a specific physical network interface (NIC); I have two: <iface1> (internal), and <iface2> (external).
I need the traffic of the two NICs to be physically separated.
METHOD 1:
I think macvlan is the driver I should use to create such a network.
Most of the solutions I found on the internet refer to Pipework (now deprecated) and temporary docker-plugins (also deprecated).
What has most closely helped me is this:
docker network create -d macvlan \
--subnet 192.168.0.0/16 \
--ip-range 192.168.2.0/24 \
-o parent=wlp8s0.1 \
-o macvlan_mode=bridge \
macvlan0
Then, in order for the container to be visible from the host, I need to do this on the host:
sudo ip link add macvlan0 link wlp8s0.1 type macvlan mode bridge
sudo ip addr add 192.168.2.10/16 dev macvlan0
sudo ifconfig macvlan0 up
Now the container and the host see each other :) BUT the container can't access the local network.
The idea is that the container can access the internet.
METHOD 2:
As I will use <iface2> manually, I'm OK if by default the traffic goes through <iface1>.
But no matter in which order I bring the NICs up (I also tried removing the LKM for <iface2> temporarily), the whole traffic is always taken over by the external NIC <iface2>.
I found that this happens because the route table updates automatically at some "random" time.
In order to force the traffic to go through <iface1>, I have to (in the host):
sudo route del -net <net> gw 0.0.0.0 netmask 255.0.0.0 dev <iface2>
sudo route del default <iface2>
Now I can verify (in several ways) that the traffic goes only through <iface1>.
But the moment the route table updates (automatically), all traffic moves to <iface2>. Damn!
I'm sure there's a way to make the route table "static" or "persistent".
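For illustration, one common way to persistently keep the default route off an interface, assuming the distribution uses NetworkManager (the connection name is a placeholder):
# Tell NetworkManager never to install a default route via this connection
nmcli connection modify <iface2-connection> ipv4.never-default yes
nmcli connection down <iface2-connection> && nmcli connection up <iface2-connection>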
EDIT (18/Jul/2018):
The main idea is to be able to access internet through a docker container using only one of two available physical network interfaces.
My environment:
On the host I created a virbr0 bridge for VMs with IP address 192.168.122.1, and brought up a VM instance with interface ens3 and IP address 192.168.122.152.
192.168.122.1 is the gateway for the 192.168.122.0/24 network.
Inside the VM:
Create the network:
# docker network create --subnet 192.168.122.0/24 --gateway 192.168.122.1 --driver macvlan -o parent=ens3 vmnet
Create docker container:
# docker run -ti --network vmnet alpine ash
Check:
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
12: eth0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
link/ether 02:42:c0:a8:7a:02 brd ff:ff:ff:ff:ff:ff
inet 192.168.122.2/24 brd 192.168.122.255 scope global eth0
valid_lft forever preferred_lft forever
/ # ping 192.168.122.152
PING 192.168.122.152 (192.168.122.152): 56 data bytes
^C
--- 192.168.122.152 ping statistics ---
2 packets transmitted, 0 packets received, 100% packet loss
/ # ping 192.168.122.1
PING 192.168.122.1 (192.168.122.1): 56 data bytes
64 bytes from 192.168.122.1: seq=0 ttl=64 time=0.471 ms
^C
--- 192.168.122.1 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.471/0.471/0.471 ms
OK, I bring up another VM with IP address 192.168.122.73 and check from the Docker container:
/ # ping 192.168.122.73 -c2
PING 192.168.122.73 (192.168.122.73): 56 data bytes
64 bytes from 192.168.122.73: seq=0 ttl=64 time=1.630 ms
64 bytes from 192.168.122.73: seq=1 ttl=64 time=0.984 ms
--- 192.168.122.73 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.984/1.307/1.630 ms
From the Docker instance I can't ping the interface on the VM, but I can access the local network.
/ # ip n|grep 192.168.122.152
192.168.122.152 dev eth0 used 0/0/0 probes 6 FAILED
On the VM I add a macvlan0 NIC:
# ip link add macvlan0 link ens3 type macvlan mode bridge
# ip addr add 192.168.122.100/24 dev macvlan0
# ip l set macvlan0 up
From the Docker container I can ping 192.168.122.100:
/ # ping 192.168.122.100 -c2
PING 192.168.122.100 (192.168.122.100): 56 data bytes
64 bytes from 192.168.122.100: seq=0 ttl=64 time=0.087 ms
64 bytes from 192.168.122.100: seq=1 ttl=64 time=0.132 ms
--- 192.168.122.100 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.087/0.109/0.132 ms

Docker: Connection from inside the container to localhost:port Refused

I'm trying to ensure the connection between the different containers and the localhost address (127.0.0.1) on port 8040 (my web application container runs on this port).
root@a70b20fbda00:~# curl -v http://127.0.0.1
* Rebuilt URL to: http://127.0.0.1/
* Hostname was NOT found in DNS cache
* Trying 127.0.0.1...
* connect to 127.0.0.1 port 80 failed: Connection refused
* Failed to connect to 127.0.0.1 port 80: Connection refused
* Closing connection 0
curl: (7) Failed to connect to 127.0.0.1 port 80: Connection refused
This is what I get when I try to connect to localhost on port 8040 from inside the container:
root@a70b20fbda00:~# curl -v http://127.0.0.1:8040
* Rebuilt URL to: http://127.0.0.1:8040/
* Hostname was NOT found in DNS cache
* Trying 127.0.0.1...
* connect to 127.0.0.1 port 8040 failed: Connection refused
* Failed to connect to 127.0.0.1 port 8040: Connection refused
* Closing connection 0
curl: (7) Failed to connect to 127.0.0.1 port 8040: Connection refused
About iptables in each container:
root@a70b20fbda00:~# iptables
bash: iptables: command not found
The connection between the containers is good:
root@635114ca18b7:~# ping 172.17.0.1
PING 172.17.0.1 (172.17.0.1) 56(84) bytes of data.
64 bytes from 172.17.0.1: icmp_seq=1 ttl=64 time=0.061 ms
64 bytes from 172.17.0.1: icmp_seq=2 ttl=64 time=0.253 ms
--- 172.17.0.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
root@635114ca18b7:~# ping 127.0.0.1
PING 127.0.0.1 (127.0.0.1) 56(84) bytes of data.
64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms
64 bytes from 127.0.0.1: icmp_seq=2 ttl=64 time=0.100 ms
--- 127.0.0.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
root@635114ca18b7:~# ping 172.17.0.3
PING 172.17.0.3 (172.17.0.3) 56(84) bytes of data.
64 bytes from 172.17.0.3: icmp_seq=1 ttl=64 time=0.149 ms
64 bytes from 172.17.0.3: icmp_seq=2 ttl=64 time=0.180 ms
--- 172.17.0.3 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.149/0.164/0.180/0.020 ms
Pinging 127.0.0.1:8040:
root@635114ca18b7:~# ping 127.0.0.1:8040
ping: unknown host 127.0.0.1:8040
What do I need to do in this case?
So the global picture is that there are two containers.
The first container runs a Tomcat server that deploys my web application, and it runs perfectly.
The second is a container that needs to connect to the web application at the URL http://127.0.0.1:8040/my_app.
You will have to use docker run --network host IMAGE:TAG to achieve the desired connection.
Further reading here.
Example:
docker run --network host --name CONTAINER1 IMAGE:tag
docker run --network host --name CONTAINER2 IMAGE:tag
Inside CONTAINER2 you will be able to reach CONTAINER1's service through the shared host network stack, e.g. at http://localhost:8040.
Based on the information provided, it looks like there are two containers. If these two containers are started by Docker without --net=host, then each of them gets a different IP address. Say your first container got 172.17.0.2 and the second one 172.17.0.3.
In this scenario each container gets its own networking stack, so 127.0.0.1 inside a container refers to that container's own stack, not the host's.
As pointed out by @kakabali, it's possible to run the containers with the host network, sharing the networking stack of the host.
One of the other options is to use the actual IP address of the first container in the second one:
second-container# curl http://172.17.0.2:8040
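To look up that address, a small sketch (web is a placeholder container name):
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' web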
Or another option is to run the second container as the sidekick/sidecar container sharing the networking stack of the first one.
docker run --net=container:${ID_OF_FIRST_CONTAINER} ${IMAGE_SECOND}:${IMAGE_TAG_SECOND}
Or, if you use links:
docker run --name web -itd ${IMAGE_FIRST}:${TAG_FIRST}
docker run --link web -itd ${IMAGE_SECOND}:${TAG_SECOND}
Note: docker --link feature is deprecated.
Another option is to use container management platforms which take care of service discovery for you automatically.
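For illustration, a minimal sketch of that idea with a plain user-defined bridge network, where Docker's embedded DNS lets containers resolve each other by name (image names are placeholders):
# Containers on a user-defined network can reach each other by name
docker network create appnet
docker run -d --net appnet --name web ${IMAGE_FIRST}:${TAG_FIRST}
docker run --net appnet ${IMAGE_SECOND}:${TAG_SECOND} \
    curl http://web:8040/my_app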
PS: You cannot ping an IP address on a specific port; ping uses ICMP, which has no notion of ports. For more info, click here.
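To test whether a TCP port is reachable, use a TCP tool instead, for example:
# netcat: -z scan without sending data, -v verbose
nc -zv 172.17.0.2 8040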

docker network - ping 255.255.255.255

When I set up a network with docker network create test1 and then start a few containers, for example
docker run -d --net=test1 --name=t1 elasticsearch
docker run -d --net=test1 elasticsearch
docker run -d --net=test1 elasticsearch
I can't broadcast ping any of these containers with docker exec -ti t1 ping 255.255.255.255.
Any idea how I can change this?
This is currently tracked in issue 17814:
UDP broadcasts don't work in multi-host network between hosts.
UDP broadcasts only work if both containers run on the same host.
Playing with icmp broadcast by pinging on 255.255.255.255, I receive replies only from the local host:
# ping -b 255.255.255.255
WARNING: pinging broadcast address
PING 255.255.255.255 (255.255.255.255) 56(84) bytes of data.
64 bytes from 172.18.0.1: icmp_req=1 ttl=64 time=0.601 ms
64 bytes from 172.18.0.1: icmp_req=2 ttl=64 time=0.424 ms
64 bytes from 172.18.0.1: icmp_req=3 ttl=64 time=0.420 ms
64 bytes from 172.18.0.1: icmp_req=4 ttl=64 time=0.427 ms
(I made sure /proc/sys/net/ipv4/icmp_echo_ignore_broadcasts is set to 0 on both hosts.)
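For reference, one way to set that flag (equivalent to writing 0 into the /proc file):
sysctl -w net.ipv4.icmp_echo_ignore_broadcasts=0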
It also seems impossible to set a broadcast address on the interface connected to the shared network:
# ifconfig eth0 broadcast 10.0.0.255
SIOCSIFBRDADDR: Operation not permitted
SIOCSIFFLAGS: Operation not permitted
The ability to multicast with the overlay driver is discussed in docker/libnetwork issue 552 (help wanted).

problems with rabbitmq cluster across 2 ubuntu 9.04 machines

I keep getting an unable_to_contact_cluster_nodes error.
Has anyone seen this before and resolved it?
I am using rabbitmq-server 1.5.4 installed from the Ubuntu repositories. I have a hunch that this has something to do with ufw or some other network security measure, enabled by default in Ubuntu, that is preventing connections.
The machine is pingable (I made an entry in the /etc/hosts file):
pgatram@mzl005:~$ ping mz005
PING mz005 (192.168.0.22) 56(84) bytes of data.
64 bytes from mz005 (192.168.0.22): icmp_seq=1 ttl=64 time=0.026 ms
64 bytes from mz005 (192.168.0.22): icmp_seq=2 ttl=64 time=0.023 ms
^C
--- mz005 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.023/0.024/0.026/0.005 ms
I can't get the cluster to work:
pgatram@mzl005:~$ sudo rabbitmqctl cluster rabbit@mz005
Clustering node rabbit@mzl005 with [rabbit@mz005] ...
Error: {unable_to_contact_cluster_nodes,[rabbit@mz005]}
Almost certainly a firewall issue. You should be able to telnet to the other host on port 5672 (or whatever you specified in /etc/default/rabbitmq). If telnet can't connect, then the port isn't open. As a sanity check, try telnetting to localhost on port 5672.
If you can't telnet, it'll be a firewall issue.
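For example, with the hostnames from the question:
# Sanity check locally first, then against the remote node
telnet localhost 5672
telnet mz005 5672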
After that it's a case of opening the port and trying again.
Chris
