RADIUS test succeeds on the local machine but fails from a remote machine - FreeRADIUS

I installed FreeRADIUS on CentOS 7 via ./configure && make && make install.
After starting the server, the local test works:
[root@iZ2zebgsn1haj8gu0447fiZ raddb]# radtest steve testing localhost 0 testing123
Sending Access-Request of id 151 to 127.0.0.1 port 1812
User-Name = "steve"
User-Password = "testing"
NAS-IP-Address = 172.17.120.248
NAS-Port = 0
Message-Authenticator = 0x00000000000000000000000000000000
rad_recv: Access-Accept packet from host 127.0.0.1 port 1812, id=151, length=71
Service-Type = Framed-User
Framed-Protocol = PPP
Framed-IP-Address = 172.16.3.33
Framed-IP-Netmask = 255.255.255.0
Framed-Routing = Broadcast-Listen
Filter-Id = "std.ppp"
Framed-MTU = 1500
Framed-Compression = Van-Jacobson-TCP-IP
But from the remote machine, it seems there is no response from the RADIUS server:
[root@iZ2zebgsn1haj8gu0447fiZ raddb]# radtest steve testing 211.71.149.221 0 testing123
Sending Access-Request of id 149 to 211.71.149.221 port 1812
User-Name = "steve"
User-Password = "testing"
NAS-IP-Address = 172.17.120.248
NAS-Port = 0
Message-Authenticator = 0x00000000000000000000000000000000
Sending Access-Request of id 149 to 211.71.149.221 port 1812
User-Name = "steve"
User-Password = "testing"
NAS-IP-Address = 172.17.120.248
NAS-Port = 0
Message-Authenticator = 0x00000000000000000000000000000000
Sending Access-Request of id 149 to 211.71.149.221 port 1812
User-Name = "steve"
User-Password = "testing"
NAS-IP-Address = 172.17.120.248
NAS-Port = 0
Message-Authenticator = 0x00000000000000000000000000000000
radclient: no response from server for ID 149 socket 3
Here are my configuration files.
clients.conf:
client 211.71.149.221 {
    ipaddr  = 211.71.149.221
    secret  = testing123
    short   = test-client
    nastype = other
}
users:
steve  Cleartext-Password := "testing"
       Service-Type = Framed-User,
       Framed-Protocol = PPP,
       Framed-IP-Address = 172.16.3.33,
       Framed-IP-Netmask = 255.255.255.0,
       Framed-Routing = Broadcast-Listen,
       Filter-Id = "std.ppp",
       Framed-MTU = 1500,
       Framed-Compression = Van-Jacobson-TCP-IP
I don't use a database, so I didn't change radiusd.conf.
[root@iZ2zebgsn1haj8gu0447fiZ raddb]# netstat -upln
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
udp 0 0 0.0.0.0:68 0.0.0.0:* 727/dhclient
udp 0 0 172.17.120.248:123 0.0.0.0:* 828/ntpd
udp 0 0 127.0.0.1:123 0.0.0.0:* 828/ntpd
udp 0 0 0.0.0.0:123 0.0.0.0:* 828/ntpd
udp 0 0 0.0.0.0:58664 0.0.0.0:* 26159/radiusd
udp 0 0 0.0.0.0:5489 0.0.0.0:* 727/dhclient
udp 0 0 127.0.0.1:18120 0.0.0.0:* 26159/radiusd
udp 0 0 0.0.0.0:1812 0.0.0.0:* 26159/radiusd
udp 0 0 0.0.0.0:1813 0.0.0.0:* 26159/radiusd
udp 0 0 0.0.0.0:1814 0.0.0.0:* 26159/radiusd
udp6 0 0 :::123 :::* 828/ntpd
udp6 0 0 :::54457 :::* 727/dhclient

Your failing radtest is sending the request to a remote server with IP 211.71.149.221. Your clients.conf defines a client with the client IP address 211.71.149.221. I'm guessing that this is NOT the IP your request is actually coming from.
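If the request really arrives from some other address (NAT, a second interface on the test box, etc.), add a client entry matching that actual source address rather than the server's own IP. A minimal sketch, using 203.0.113.45 purely as a placeholder for the address your test machine sends from (recent FreeRADIUS versions also accept a prefix such as 203.0.113.0/24 in ipaddr):
client remote-test {
    ipaddr    = 203.0.113.45   # placeholder: the source IP your requests actually come from
    secret    = testing123
    shortname = remote-test
    nastype   = other
}
Running the server in debug mode with radiusd -X shows whether the packet reaches FreeRADIUS at all and, if so, which source IP it carries; if nothing appears there, the request is being dropped before it reaches the server, for example by a firewall in front of 1812/udp.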

Related

FreeRadius - Failed to connect when the username contains "#"

Please help! I use FreeRADIUS + MySQL + daloRADIUS (CentOS 7), and any user whose name contains "#" doesn't work.
radtest prueba 1234 localhost 1645 testing123
Sent Access-Request Id 89 from 0.0.0.0:46842 to 127.0.0.1:1645 length 76
User-Name = "prueba"
User-Password = "1234"
NAS-IP-Address =
NAS-Port = 1645
Message-Authenticator = 0x00
Cleartext-Password = "1234"
Received Access-Accept Id 89 from 127.0.0.1:1645 to 0.0.0.0:0 length 20
Without "#" it works (above); with "#" it fails:
Sent Access-Request Id 198 from 0.0.0.0:57280 to 127.0.0.1:1645 length 77
User-Name = "prueba#"
User-Password = "1234"
NAS-IP-Address =
NAS-Port = 1645
Message-Authenticator = 0x00
Cleartext-Password = "1234"
Received Access-Reject Id 198 from 127.0.0.1:1645 to 0.0.0.0:0 length 20
(0) -: Expected Access-Accept got Access-Reject

What do "*A" and "*U" mean in the "sctp assocs" display?

cat /proc/net/sctp/assocs
ASSOC-ID ASSOC SOCK STY SST ST HBKT TX_QUEUE RX_QUEUE UID INODE LPORT RPORT LADDRS <-> RADDRS HBINT INS OUTS MAXRT T1X T2X RTXC wmema wmemq sndbuf rcvbuf
13 ffff8800b93a9000 ffff8800ac752300 2 1 3 9176 0 0 0 20735 3905 48538 *192.168.44.228 <-> *A172.16.236.92 7500 10 10 2 0 0 23 1 0 2000000 2000000
0 ffff88042aea3000 ffff880422e88000 0 10 1 40840 0 0 0 9542 3868 3868 *10.127.58.66 <-> *U10.127.115.194 7500 17 17 10 0 0 0 1 0 8388608 8388608

How to publish ports with docker-compose (default bridge network)

I can't contact any of the services declared in my docker-compose.yml.
All connections time out and I don't see any traffic using tcpdump, but according to netstat all the ports appear to be open.
This is my docker-compose.yml:
version: '3'
services:
  just_teacher:
    image: just_teacher
    build:
      context: ./teacher
      dockerfile: Dockerfile_teacher_
    ports:
      - "5002:5002"
    container_name: just_teacher
    network_mode: bridge
  just_teacher_consumer:
    image: just_teacher_consumer
    build:
      context: ./teacher
      dockerfile: Dockerfile_teacher_vote_consumer_
    container_name: just_teacher_consumer
    network_mode: bridge
  just_controller:
    image: just_controller
    build:
      context: ./controller
      dockerfile: Dockerfile_controller_
    container_name: just_controller
    ports:
      - "5010:5010"
    network_mode: bridge
  just_controller_consumer:
    image: just_controller_consumer
    build:
      context: ./controller
      dockerfile: Dockerfile_controller_consumer_
    container_name: just_controller_consumer
    network_mode: bridge
  just_reccomender:
    image: just_reccomender
    build:
      context: ./reccomender_news
      dockerfile: Dockerfile_reccomender_
    container_name: just_reccomender
    ports:
      - "5020:5020"
    network_mode: bridge
  just_reccomender_consumer:
    image: just_reccomender_consumer
    build:
      context: ./reccomender_news
      dockerfile: Dockerfile_reccomender_consumer_
    container_name: just_reccomender_consumer
    network_mode: bridge
  just_rss_consumer:
    image: just_rss_consumer
    build:
      context: ./just-server
      dockerfile: Dockerfile_rss_consumer_
    container_name: just_rss_consumer
    volumes:
      - /data:/data
    network_mode: bridge
  just_server:
    image: just_server
    build:
      context: ./server
      dockerfile: Dockerfile_
    container_name: just_server
    volumes:
      - /data:/data
    ports:
      - "28050:28050"
      - "8087:8087"
      - "8060:8060"
      - "8050:8050"
    restart: always
    network_mode: bridge
  just_scheduler_feed:
    image: just_scheduler_feed
    build:
      context: ./server
      dockerfile: Dockerfile_Scheduler_Feed_
    container_name: just_scheduler_feed
    network_mode: bridge
  just_scheduler_objects:
    image: just_scheduler_objects
    build:
      context: ./server
      dockerfile: Dockerfile_Scheduler_Objects_
    container_name: just_scheduler_object
    network_mode: bridge
  just_scheduler_travel:
    image: just_scheduler_travel
    build:
      context: ./server
      dockerfile: Dockerfile_Scheduler_Travel_
    container_name: just_scheduler_travel
    network_mode: bridge
  just_scheduler_social:
    image: just_scheduler_social
    build:
      context: ./server
      dockerfile: Dockerfile_Scheduler_Social_
    container_name: just_scheduler_social
    network_mode: bridge
  just_metric:
    image: just_metric
    build:
      context: ./metric
      dockerfile: Dockerfile_metric_
    container_name: just_metric
    ports:
      - "5005:5005"
    network_mode: bridge
  just_metric_consumer_vote:
    image: just_metric_consumer_vote
    build:
      context: ./metric
      dockerfile: Dockerfile_consumer_metric_vote_
    container_name: just_metric_consumer_vote
    network_mode: bridge
  just_metric_consumer_container:
    image: just_metric_consumer_container
    build:
      context: ./metric
      dockerfile: Dockerfile_consumer_metric_container_
    container_name: just_metric_consumer_container
    network_mode: bridge
  just_metric_consumer_user_info:
    image: just_metric_consumer_user_info
    build:
      context: ./metric
      dockerfile: Dockerfile_consumer_metric_user_info_
    container_name: just_metric_consumer_user_info
    network_mode: bridge
  just_social:
    image: just_social
    build:
      context: ./social
      dockerfile: Dockerfile_social_
    container_name: just_social
    ports:
      - "6789:6789"
    network_mode: bridge
  just_social_consumer:
    image: just_social_consumer
    build:
      context: ./social
      dockerfile: Dockerfile_social_consumer_
    container_name: just_social_consumer
    network_mode: bridge
docker network inspect output
[
{
"Name": "bridge",
"Id": "30ef10e3971151145931f08666677f77df4471927832b457e4792c3f46479e21",
"Created": "2018-06-15T13:11:36.014987501+02:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.17.0.0/16",
"Gateway": "172.17.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"Containers": {
"076daa201d090133bba9f8f5ed3dd050f33f06d2ee59512f4776bfe2bb1c532d": {
"Name": "just_rss_consumer",
"EndpointID": "d4ac24885570fbffeafb0b793b92c1970014eedbc77cb71e8561c0975749f0d5",
"MacAddress": "02:42:ac:11:00:0d",
"IPv4Address": "172.17.0.13/16",
"IPv6Address": ""
},
"2024fec5d0581051cc7bc27fa03a353b4829a0a93248ce6d50a657973a4d0934": {
"Name": "just_scheduler_social",
"EndpointID": "88e5159284e872b7bc9f670ada503b2d4dd8985964d3c4237aeb8f7267d9a20a",
"MacAddress": "02:42:ac:11:00:13",
"IPv4Address": "172.17.0.19/16",
"IPv6Address": ""
},
"30e4bf7a098436786ca91de245400d7022f542e825d4da65909d26d21c995b33": {
"Name": "just_controller",
"EndpointID": "298a994caaa5e508a5b4865c30452d358ecd6115553ea79f1bdb488b786a471a",
"MacAddress": "02:42:ac:11:00:08",
"IPv4Address": "172.17.0.8/16",
"IPv6Address": ""
},
"3f039e893196c0438a610424bc19f42b9d861aa085077940e073d1bbf37d3878": {
"Name": "just_metric_consumer_container",
"EndpointID": "c4df111bfe78a5585a20160165aeaaa6a9aefd018787f2538249aeb045e23743",
"MacAddress": "02:42:ac:11:00:10",
"IPv4Address": "172.17.0.16/16",
"IPv6Address": ""
},
"5157ac753b38f94f9d852c94d1f3be5b886827e0c02163184c3510cb1f251426": {
"Name": "just_teacher",
"EndpointID": "f985bd9f71e5f6e9e6c89262e5ace6531ba04c8f615299feab5e881edc4a2acb",
"MacAddress": "02:42:ac:11:00:02",
"IPv4Address": "172.17.0.2/16",
"IPv6Address": ""
},
"61f8513310b0e222fbb73ce4e7d7fb42aa72ac00fb888c692d6b2936e1161c06": {
"Name": "just_scheduler_feed",
"EndpointID": "312b484156a7ddc3aea082bb1032e02b24edd310bad4d8e8eedfefe57e83a6ca",
"MacAddress": "02:42:ac:11:00:0b",
"IPv4Address": "172.17.0.11/16",
"IPv6Address": ""
},
"7366522d028ebf3ec9926b79bb71b54122d5a80c5806310c32a2dd29eba5c103": {
"Name": "just_controller_consumer",
"EndpointID": "e2fe21d33f5ba6bd23f5055bea208329d76dbf52166911e61b38dc0ba441f298",
"MacAddress": "02:42:ac:11:00:03",
"IPv4Address": "172.17.0.3/16",
"IPv6Address": ""
},
"8268526364d5ec08b1c5e2ebc5acd2427eb8eaf8e8769307a5aa9eb9d90a940a": {
"Name": "just_metric",
"EndpointID": "2df8952934e8a850cc87b34b3374028a1af8af876af7aa7253f321377669aa79",
"MacAddress": "02:42:ac:11:00:0c",
"IPv4Address": "172.17.0.12/16",
"IPv6Address": ""
},
"835e4f11251c127a33f7d6e449140103e1a907e753ef190916f3dde4bc07a528": {
"Name": "just_reccomender_consumer",
"EndpointID": "d0d4d8600026043c05e49edb59c97a62962cfd7c7901d5fb084236a50a79f24a",
"MacAddress": "02:42:ac:11:00:06",
"IPv4Address": "172.17.0.6/16",
"IPv6Address": ""
},
"8c6eca8f61aaa15ca1a48942e50cf00d262184726d2d9605972e86fb80f1f07d": {
"Name": "just_social",
"EndpointID": "ae235fe1e59c3bdabd73f104f1959828b60ee01714446176c0cf0b7f674934f7",
"MacAddress": "02:42:ac:11:00:07",
"IPv4Address": "172.17.0.7/16",
"IPv6Address": ""
},
"9aeb9955f3d97d03884edd51775ef05c0aefa4470aaa544750149d17931cf615": {
"Name": "just_metric_consumer_vote",
"EndpointID": "f8cd4fe4d36910e1137dd58d8f229ce8926d49e21c8996c7f0139402d69266b3",
"MacAddress": "02:42:ac:11:00:0e",
"IPv4Address": "172.17.0.14/16",
"IPv6Address": ""
},
"9c3b982fa71df2745dcbf72f17ff1071e0d8e2929331c4679b19f29861b7b303": {
"Name": "just_scheduler_object",
"EndpointID": "d627a091b153beafeb221fce536d0675fd523da97cb7241c102df70cc28332b2",
"MacAddress": "02:42:ac:11:00:05",
"IPv4Address": "172.17.0.5/16",
"IPv6Address": ""
},
"bb15d3eea528972dfc2eb177e0369539b0df3c1e51c28574f39d05ae7d30a009": {
"Name": "just_social_consumer",
"EndpointID": "a738a1ec6ae7e133ea17105591173f883ec6435a76d26539ddc0af331254c7f6",
"MacAddress": "02:42:ac:11:00:0f",
"IPv4Address": "172.17.0.15/16",
"IPv6Address": ""
},
"d0f0aa795fd17a4d692ff53224c51e49fec30a4d27f52d71bd2b7ea909e0db2c": {
"Name": "just_scheduler_travel",
"EndpointID": "204db8ce5f2d0e782a77bd3f70d1a475ed49110ab8dc92c350a52da1eabd56af",
"MacAddress": "02:42:ac:11:00:0a",
"IPv4Address": "172.17.0.10/16",
"IPv6Address": ""
},
"eab8ba6d88875052997faf7c5c843910b1e9d26618768254ba157bfa809e301b": {
"Name": "just_teacher_consumer",
"EndpointID": "45fc12d0ed1fe516c67d5a1745fcacd0ca97da7e2ce0cee6a596884e08316a99",
"MacAddress": "02:42:ac:11:00:11",
"IPv4Address": "172.17.0.17/16",
"IPv6Address": ""
},
"ec79e0733ae049308b80c3b75bc05a0919fbdb2b3e3d99531ce8f06df6698d58": {
"Name": "just_reccomender",
"EndpointID": "594419d39deec90f0f65cb5ca592432b23a376d1a1b28d02ba3a57cdf638b457",
"MacAddress": "02:42:ac:11:00:12",
"IPv4Address": "172.17.0.18/16",
"IPv6Address": ""
},
"ec93841f66307f73f713a92f33eaeb1f84cfbd2bbdaa6d35a423d6190acca656": {
"Name": "just_metric_consumer_user_info",
"EndpointID": "d7dcb848e89d5f6cbb68dc0e015c33a6005427d4b1923e0a68309d52fa61c118",
"MacAddress": "02:42:ac:11:00:09",
"IPv4Address": "172.17.0.9/16",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]
netstat output
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:4369 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:8050 0.0.0.0:* LISTEN 27840/crossbar-work
tcp 0 0 0.0.0.0:28050 0.0.0.0:* LISTEN 27840/crossbar-work
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:8087 0.0.0.0:* LISTEN 27840/crossbar-work
tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:15672 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:8888 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:8889 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:8060 0.0.0.0:* LISTEN 27840/crossbar-work
tcp 0 0 0.0.0.0:24224 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:24230 0.0.0.0:* LISTEN -
tcp 0 0 10.1.1.245:2375 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:25672 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:27017 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:3306 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:5355 0.0.0.0:* LISTEN -
tcp6 0 0 :::111 :::* LISTEN -
tcp6 0 0 10.1.1.245:9200 :::* LISTEN -
tcp6 0 0 :::80 :::* LISTEN -
tcp6 0 0 :::4369 :::* LISTEN -
tcp6 0 0 :::5010 :::* LISTEN -
tcp6 0 0 10.1.1.245:9300 :::* LISTEN -
tcp6 0 0 :::22 :::* LISTEN -
tcp6 0 0 ::1:631 :::* LISTEN -
tcp6 0 0 :::8889 :::* LISTEN -
tcp6 0 0 :::5020 :::* LISTEN -
tcp6 0 0 :::6789 :::* LISTEN -
tcp6 0 0 :::5672 :::* LISTEN -
tcp6 0 0 :::5002 :::* LISTEN -
tcp6 0 0 :::5355 :::* LISTEN -
tcp6 0 0 :::5005 :::* LISTEN -
udp 0 0 0.0.0.0:5353 0.0.0.0:* -
udp 0 0 0.0.0.0:5355 0.0.0.0:* -
udp 0 0 0.0.0.0:24224 0.0.0.0:* -
udp 0 0 0.0.0.0:65488 0.0.0.0:* -
udp 0 0 127.0.0.53:53 0.0.0.0:* -
udp 0 0 0.0.0.0:111 0.0.0.0:* -
udp 0 0 172.17.0.1:123 0.0.0.0:* -
udp 0 0 10.1.1.245:123 0.0.0.0:* -
udp 0 0 127.0.0.1:123 0.0.0.0:* -
udp 0 0 0.0.0.0:123 0.0.0.0:* -
udp 0 0 0.0.0.0:631 0.0.0.0:* -
udp 0 0 0.0.0.0:782 0.0.0.0:* -
udp6 0 0 :::5353 :::* -
udp6 0 0 :::5355 :::* -
udp6 0 0 :::12935 :::* -
udp6 0 0 :::111 :::* -
udp6 0 0 fe80::34dd:50ff:fea:123 :::* -
udp6 0 0 fe80::4419:96ff:fe9:123 :::* -
udp6 0 0 fe80::8c9a:46ff:fe0:123 :::* -
udp6 0 0 fe80::5c3c:5cff:fe2:123 :::* -
udp6 0 0 fe80::33:5fff:fed0::123 :::* -
udp6 0 0 fe80::4ca3:39ff:fe7:123 :::* -
udp6 0 0 fe80::640a:cff:fe74:123 :::* -
udp6 0 0 fe80::4cc3:49ff:fe3:123 :::* -
udp6 0 0 fe80::a4b4:24ff:fe2:123 :::* -
udp6 0 0 fe80::b08a:c5ff:feb:123 :::* -
udp6 0 0 fe80::a04f:a6ff:fe4:123 :::* -
udp6 0 0 fe80::1847:33ff:fe2:123 :::* -
udp6 0 0 fe80::e823:8eff:fef:123 :::* -
udp6 0 0 fe80::b8e9:f9ff:fe7:123 :::* -
udp6 0 0 fe80::18db:dff:fe12:123 :::* -
udp6 0 0 fe80::2443:5fff:fe0:123 :::* -
udp6 0 0 fe80::706c:13ff:fe2:123 :::* -
udp6 0 0 fe80::42:14ff:fe66::123 :::* -
udp6 0 0 fe80::65f:a479:51e8:123 :::* -
udp6 0 0 ::1:123 :::* -
udp6 0 0 :::123 :::* -
udp6 0 0 :::782 :::* -
I also have ufw disabled.
How can I solve this?
Thanks in advance.
UPDATED
docker ps -a output
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9b7a1a9bfa5647456132dea8c321e461af68ac759a92a673593757d146883137 just_reccomender "/bin/sh -c 'python main.py'" 32 minutes ago Up 32 minutes 0.0.0.0:5020->5020/tcp just_reccomender
e4ac56ef1ec6f66cc4320884001309423146ce781b9829718ea80a7595f14c25 just_scheduler_travel "/bin/sh -c 'python SchedulerTravel.py'" 32 minutes ago Up 32 minutes just_scheduler_travel
0ebdee6414010c4bad80b81987fda843941e4b0cf7ffa1553bff7b6ffea29374 just_metric_consumer_user_info "/bin/sh -c 'python consumer/user_info_consumer.py'" 32 minutes ago Up 32 minutes just_metric_consumer_user_info
2e3d1a7cff81096886d1d55ff1ab8f7fcc41af02fdb0fa22245ff500e28aa760 just_metric_consumer_container "/bin/sh -c 'python consumer/container_consumer.py'" 32 minutes ago Up 32 minutes just_metric_consumer_container
c2d5bf9629311326e7db293a49da3d46cf2fac275599c315651cae593ffdd33d just_metric "/bin/sh -c 'python main.py'" 32 minutes ago Up 32 minutes 0.0.0.0:5005->5005/tcp just_metric
1e50b37952871745734a8e02b7e0604ad977d27849f15534c3800ec50684e264 just_reccomender_consumer "/bin/sh -c 'python consumer/reccomandations_consumer.py'" 32 minutes ago Up 32 minutes just_reccomender_consumer
0fc7b0ab6d486854aeb2e3db8d8e115e83379cd99b482e5f400833937c03ff26 just_controller "/bin/sh -c 'python main.py'" 32 minutes ago Up 32 minutes 0.0.0.0:5010->5010/tcp just_controller
418c8a26fa6db83909bfae993774b231e4252d4f5937045db1294b2af3de5a68 just_rss_consumer "/bin/sh -c 'python consumer_rss.py'" 32 minutes ago Up 32 minutes just_rss_consumer
e91b01d5fc61017ad61da3d8ad70d427ec971df5561a752a5fb38e6a8731b513 just_controller_consumer "/bin/sh -c 'python consumer/reccomendations_consumer.py'" 32 minutes ago Up 32 minutes just_controller_consumer
a16082943a9ac47d1720f05042b0e62f57cc980b0f135f137c4ed5e8747cd13c just_metric_consumer_vote "/bin/sh -c 'python consumer/vote_consumer.py'" 32 minutes ago Up 32 minutes just_metric_consumer_vote
1f1a3add81ef3631beafd84c9cb94414a4ec28da0e615e168854aa8564150f50 just_teacher_consumer "/bin/sh -c 'python consumer/vote_news_consumer.py'" 32 minutes ago Up 32 minutes just_teacher_consumer
001e86c665dd3b658cce34a8940c4c251348e8e1058f381e944856d6da740f35 just_teacher "/bin/sh -c 'python main.py'" 32 minutes ago Up 32 minutes 0.0.0.0:5002->5002/tcp just_teacher
844bb1c54f9d463e24d9774695f38b06a4c6abbd0318ee0c3e53760523ea9d26 just_scheduler_feed "/bin/sh -c 'python SchedulerFeed.py'" 32 minutes ago Up 32 minutes just_scheduler_feed
34221f4300a3442e8f2c2a769d3b22779b0cbb1dc0b9477b396037f01aff5029 just_scheduler_social "/bin/sh -c 'python SchedulerSocial.py'" 32 minutes ago Up 32 minutes just_scheduler_social
aa56b65738ff5776ceb4f7620f8a7970d9cadddeb833957960896870493df67e just_social_consumer "/bin/sh -c 'python consumer/social_consumer.py'" 32 minutes ago Up 32 minutes just_social_consumer
a5eac49ca5cece68e907f152d9f2ab7ac75166efe982bd5467912347bd54b34d just_scheduler_objects "/bin/sh -c 'python SchedulerObjects.py'" 32 minutes ago Up 32 minutes just_scheduler_object
6460985f0d5d917e928ba14bc6a0a83c9cd7bc4d46da8605aaf9deb26a472e32 just_social "/bin/sh -c 'python main.py'" 32 minutes ago Up 32 minutes 0.0.0.0:6789->6789/tcp just_social
5e4c9beaf91f491f5cc9d72a396a2f65227f1a57fb24bd2923850be6814b2c64 just_server "crossbar start --cbdir /mynode/.crossbar" 32 minutes ago Up 32 minutes 0.0.0.0:8050->8050/tcp, 0.0.0.0:8060->8060/tcp, 0.0.0.0:8087->8087/tcp, 0.0.0.0:28050->28050/tcp just_server
How are you checking whether the port is exposed or not? Did you try running telnet to those ports from inside the containers themselves?
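For example (a sketch only; just_teacher and just_controller are services from the compose file above, 172.17.0.8 is just_controller's address from the docker network inspect output, and Python is assumed to be available in the image since that is what the containers run):
# From the host: does the published port answer? (assuming an HTTP service on 5002)
curl -v http://127.0.0.1:5002/
# From inside a container: can it reach another container by IP?
# (the default bridge network has no DNS-based name resolution, so use container IPs)
docker exec -it just_teacher python -c "import socket; socket.create_connection(('172.17.0.8', 5010), timeout=5); print('reachable')"
If the host-side curl times out even though docker ps shows the mapping 0.0.0.0:5002->5002/tcp, a common thing to check is whether the service inside the container listens on 0.0.0.0 rather than only on 127.0.0.1.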

Erlang application exits, but the VM is still running

My Erlang application's processes crashed and the application exited, but I found that the Erlang VM is still running.
I receive pong when I ping this "suspended" node.
Typing regs() gives the results shown below; my application's processes are not among them.
(hub@192.168.1.140)4> regs().
** Registered procs on node 'hub@192.168.1.140' **
Name Pid Initial Call Reds Msgs
application_controlle <0.7.0> erlang:apply/2 30258442 1390
auth <0.20.0> auth:init/1 189 0
code_server <0.26.0> erlang:apply/2 1194028 0
erl_epmd <0.19.0> erl_epmd:init/1 138 0
erl_prim_loader <0.3.0> erlang:apply/2 2914236 0
error_logger <0.6.0> gen_event:init_it/6 49983527 0
file_server_2 <0.25.0> file_server:init/1 16185407 0
global_group <0.24.0> global_group:init/1 107 0
global_name_server <0.13.0> global:init/1 1385 0
gr_counter_sup <0.43.0> supervisor:gr_counter_sup 253 0
gr_lager_default_trac <0.70.0> gr_counter:init/1 121 0
gr_lager_default_trac <0.72.0> gr_manager:init/1 46 0
gr_lager_default_trac <0.69.0> gr_param:init/1 117 0
gr_lager_default_trac <0.71.0> gr_manager:init/1 46 0
gr_manager_sup <0.45.0> supervisor:gr_manager_sup 484 0
gr_param_sup <0.44.0> supervisor:gr_param_sup/1 253 0
gr_sup <0.42.0> supervisor:gr_sup/1 237 0
inet_db <0.16.0> inet_db:init/1 749 0
inet_gethost_native <0.176.0> inet_gethost_native:serve 4698517 0
inet_gethost_native_s <0.175.0> supervisor_bridge:inet_ge 41 0
init <0.0.0> otp_ring0:start/2 30799457 0
kernel_safe_sup <0.35.0> supervisor:kernel/1 278 0
kernel_sup <0.11.0> supervisor:kernel/1 47618 0
lager_crash_log <0.52.0> lager_crash_log:init/1 97712230 0
lager_event <0.50.0> gen_event:init_it/6 1813660437 0
lager_handler_watcher <0.51.0> supervisor:lager_handler_ 358 0
lager_sup <0.49.0> supervisor:lager_sup/1 327 0
net_kernel <0.21.0> net_kernel:init/1 110769667 0
net_sup <0.18.0> supervisor:erl_distributi 313 0
os_cmd_port_creator <0.582.0> erlang:apply/2 81 0
rex <0.12.0> rpc:init/1 15653480 0
standard_error <0.28.0> erlang:apply/2 9 0
standard_error_sup <0.27.0> supervisor_bridge:standar 41 0
timer_server <0.100.0> timer:init/1 59356077 0
user <0.31.0> group:server/3 23837008 0
user_drv <0.30.0> user_drv:server/2 12239455 0
** Registered ports on node 'hub@192.168.1.140' **
Name Id Command
ok
It rarely occurs; can anyone explain it?
System: CentOS 5.8
Erlang: R15B03

Orbeon MySQL connection constantly dropping

I'm constantly losing my MySQL connection after a few minutes. I see no errors in the log until I attempt to connect.
I'm happy to post any settings that will help debug, just let me know what you need to see.
context.xml:
<Resource name="jdbc/mysql" auth="Container" type="javax.sql.DataSource"
initialSize="10" maxActive="50" maxIdle="20" maxWait="60000"
driverClassName="com.mysql.jdbc.Driver"
poolPreparedStatements="true"
username="orbeon"
password="pw"
url="jdbc:mysql://localhost:3306/orbeon"/>
my.cnf:
[client]
port = 3306
socket = /var/run/mysqld/mysqld.sock
[mysqld_safe]
socket = /var/run/mysqld/mysqld.sock
nice = 0
[mysqld]
user = mysql
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
port = 3306
basedir = /usr
datadir = /var/lib/mysql
tmpdir = /tmp
skip-external-locking
skip-name-resolve
bind-address = 0.0.0.0
key-buffer = 256M
thread_stack = 256K
thread_cache_size = 8
max_allowed_packet = 16M
max_connections = 200
myisam-recover = BACKUP
wait_timeout = 180
net_read_timeout = 30
net_write_timeout = 30
back_log = 128
table_cache = 128
max_heap_table_size = 32M
lower_case_table_names = 0
query_cache_limit = 1M
query_cache_size = 16M
log_error = /var/log/mysql/error.log
log_slow_queries = /var/log/mysql/slow.log
long-query-time = 5
log-queries-not-using-indexes
[mysqldump]
quick
quote-names
max_allowed_packet = 16M
[mysql]
[isamchk]
key-buffer = 256M
max_allowed_packet = 16M
!includedir /etc/mysql/conf.d/
Try adding the following two attributes to your existing <Resource> for MySQL. With those, the connection pool in Tomcat will check that a connection is still usable when it is borrowed from the pool.
validationQuery="select 1 from dual"
testOnBorrow="true"
So your <Resource> should look something like (of course with the appropriate username, password, and server):
<Resource name="jdbc/mysql" auth="Container" type="javax.sql.DataSource"
initialSize="3" maxActive="10" maxIdle="20" maxWait="30000"
driverClassName="com.mysql.jdbc.Driver"
poolPreparedStatements="true"
validationQuery="select 1 from dual"
testOnBorrow="true"
username="orbeon"
password="orbeon"
url="jdbc:mysql://localhost:3306/orbeon?useUnicode=true&characterEncoding=UTF8"/>
Why do you have your wait_timeout set so low?
http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html#sysvar_wait_timeout
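With wait_timeout = 180 in the my.cnf above, MySQL closes any connection that sits idle for more than three minutes, which matches the "drops after a few minutes" symptom. A sketch of one way to relax it (28800 seconds is the MySQL default; pick whatever fits your setup), either at runtime or permanently under [mysqld]:
SET GLOBAL wait_timeout = 28800;   -- runtime change, affects new connections only
wait_timeout = 28800               # in my.cnf under [mysqld], takes effect after a restart
Even with a higher timeout, the validationQuery/testOnBorrow attributes above are worth keeping, since they let the pool discard connections the server has already closed instead of handing them to the application.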
