In my InfluxDB I have metrics like the ones below. These are Dropwizard meter metrics that capture the 1-, 5-, and 15-minute moving average rates.
select * from "messages" limit 10
time client cluster count fifteen-minute five-minute mean-minute module one-minute server
---- ------ ------- ----- -------------- ----------- ----------- ------ ---------- ------
1507335739883000000 OurImportantClient CL01 0 0 0 0 messageprocessor 0 localhost
1507335749886000000 OurImportantClient CL01 501 0.5520477628396372 1.6287864046290808 25.041124895717168 messageprocessor 7.3709815118186555 localhost
1507335759884000000 OurImportantClient CL01 587 0.6412384642848744 1.859679429071625 19.562294999745585 messageprocessor 7.614637212636722 localhost
1507335769881000000 OurImportantClient CL01 30475 33.662903467742275 99.87494091791048 761.8030767050573 messageprocessor 467.97943323340695 localhost
1507335779884000000 OurImportantClient CL01 62362 68.51470940480358 201.11411048844886 1247.0948280106677 messageprocessor 885.6986695814577 localhost
1507335789882000000 OurImportantClient CL01 94824 103.63811321618489 301.02408263969613 1580.2875055219413 messageprocessor 1249.7709883284872 localhost
1507335799881000000 OurImportantClient CL01 132473 144.09666119665695 414.58418058144684 1892.3728563562054 messageprocessor 1635.661065015788 localhost
1507335809881000000 OurImportantClient CL01 171835 185.9911221266662 530.0218087595953 2147.790629620081 messageprocessor 1988.92207608162 localhost
1507335819880000000 OurImportantClient CL01 215110 231.75939527511298 654.5929160902938 2389.999838003209 messageprocessor 2349.874824492088 localhost
1507335829881000000 OurImportantClient CL01 273217 293.4094925023778 823.6482709833962 2732.078610331237 messageprocessor 2881.3518546121827 localhost
How do I plot these moving averages in a Grafana dashboard?
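What I have in mind is a graph panel with one series per rate, e.g. an InfluxQL query along these lines (field and tag names are taken from the table above; using mean() with the panel interval is just my guess):

SELECT mean("one-minute")     AS "1m rate",
       mean("five-minute")    AS "5m rate",
       mean("fifteen-minute") AS "15m rate"
FROM "messages"
WHERE "client" = 'OurImportantClient' AND $timeFilter
GROUP BY time($__interval) fill(null)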
I am trying to build my own network on top of the "test-network" provided in fabric-samples. The test-network itself runs fine, but when I make changes to it (such as renaming org1 and org2), everything starts up except that the orderer container stops running after a few seconds.
Docker logs for the orderer container:
2020-12-30 12:38:46.267 UTC [localconfig] completeInitialization -> WARN 001 General.GenesisFile should be replaced by General.BootstrapFile
2020-12-30 12:38:46.268 UTC [localconfig] completeInitialization -> INFO 002 Kafka.Version unset, setting to 0.10.2.0
2020-12-30 12:38:46.268 UTC [orderer.common.server] prettyPrintStruct -> INFO 003 Orderer config values:
General.ListenAddress = "0.0.0.0"
General.ListenPort = 7050
General.TLS.Enabled = true
General.TLS.PrivateKey = "/var/hyperledger/orderer/tls/server.key"
General.TLS.Certificate = "/var/hyperledger/orderer/tls/server.crt"
General.TLS.RootCAs = [/var/hyperledger/orderer/tls/ca.crt]
General.TLS.ClientAuthRequired = false
General.TLS.ClientRootCAs = []
General.TLS.TLSHandshakeTimeShift = 0s
General.Cluster.ListenAddress = ""
General.Cluster.ListenPort = 0
General.Cluster.ServerCertificate = ""
General.Cluster.ServerPrivateKey = ""
General.Cluster.ClientCertificate = "/var/hyperledger/orderer/tls/server.crt"
General.Cluster.ClientPrivateKey = "/var/hyperledger/orderer/tls/server.key"
General.Cluster.RootCAs = [/var/hyperledger/orderer/tls/ca.crt]
General.Cluster.DialTimeout = 5s
General.Cluster.RPCTimeout = 7s
General.Cluster.ReplicationBufferSize = 20971520
General.Cluster.ReplicationPullTimeout = 5s
General.Cluster.ReplicationRetryTimeout = 5s
General.Cluster.ReplicationBackgroundRefreshInterval = 5m0s
General.Cluster.ReplicationMaxRetries = 12
General.Cluster.SendBufferSize = 10
General.Cluster.CertExpirationWarningThreshold = 168h0m0s
General.Cluster.TLSHandshakeTimeShift = 0s
General.Keepalive.ServerMinInterval = 1m0s
General.Keepalive.ServerInterval = 2h0m0s
General.Keepalive.ServerTimeout = 20s
General.ConnectionTimeout = 0s
General.GenesisMethod = "file"
General.GenesisFile = "/var/hyperledger/orderer/orderer.genesis.block"
General.BootstrapMethod = "file"
General.BootstrapFile = "/var/hyperledger/orderer/orderer.genesis.block"
General.Profile.Enabled = false
General.Profile.Address = "0.0.0.0:6060"
General.LocalMSPDir = "/var/hyperledger/orderer/msp"
General.LocalMSPID = "OrdererMSP"
General.BCCSP.ProviderName = "SW"
General.BCCSP.SwOpts.SecLevel = 256
General.BCCSP.SwOpts.HashFamily = "SHA2"
General.BCCSP.SwOpts.Ephemeral = true
General.BCCSP.SwOpts.FileKeystore.KeyStorePath = ""
General.BCCSP.SwOpts.DummyKeystore =
General.BCCSP.SwOpts.InmemKeystore =
General.Authentication.TimeWindow = 15m0s
General.Authentication.NoExpirationChecks = false
FileLedger.Location = "/var/hyperledger/production/orderer"
FileLedger.Prefix = "hyperledger-fabric-ordererledger"
Kafka.Retry.ShortInterval = 5s
Kafka.Retry.ShortTotal = 10m0s
Kafka.Retry.LongInterval = 5m0s
Kafka.Retry.LongTotal = 12h0m0s
Kafka.Retry.NetworkTimeouts.DialTimeout = 10s
Kafka.Retry.NetworkTimeouts.ReadTimeout = 10s
Kafka.Retry.NetworkTimeouts.WriteTimeout = 10s
Kafka.Retry.Metadata.RetryMax = 3
Kafka.Retry.Metadata.RetryBackoff = 250ms
Kafka.Retry.Producer.RetryMax = 3
Kafka.Retry.Producer.RetryBackoff = 100ms
Kafka.Retry.Consumer.RetryBackoff = 2s
Kafka.Verbose = true
Kafka.Version = 0.10.2.0
Kafka.TLS.Enabled = false
Kafka.TLS.PrivateKey = ""
Kafka.TLS.Certificate = ""
Kafka.TLS.RootCAs = []
Kafka.TLS.ClientAuthRequired = false
Kafka.TLS.ClientRootCAs = []
Kafka.TLS.TLSHandshakeTimeShift = 0s
Kafka.SASLPlain.Enabled = false
Kafka.SASLPlain.User = ""
Kafka.SASLPlain.Password = ""
Kafka.Topic.ReplicationFactor = 1
Debug.BroadcastTraceDir = ""
Debug.DeliverTraceDir = ""
Consensus = map[SnapDir:/var/hyperledger/production/orderer/etcdraft/snapshot WALDir:/var/hyperledger/production/orderer/etcdraft/wal]
Operations.ListenAddress = "127.0.0.1:8443"
Operations.TLS.Enabled = false
Operations.TLS.PrivateKey = ""
Operations.TLS.Certificate = ""
Operations.TLS.RootCAs = []
Operations.TLS.ClientAuthRequired = false
Operations.TLS.ClientRootCAs = []
Operations.TLS.TLSHandshakeTimeShift = 0s
Metrics.Provider = "disabled"
Metrics.Statsd.Network = "udp"
Metrics.Statsd.Address = "127.0.0.1:8125"
Metrics.Statsd.WriteInterval = 30s
Metrics.Statsd.Prefix = ""
ChannelParticipation.Enabled = false
ChannelParticipation.RemoveStorage = false
2020-12-30 12:38:46.295 UTC [orderer.common.server] initializeServerConfig -> INFO 004 Starting orderer with TLS enabled
2020-12-30 12:38:46.658 UTC [orderer.common.server] Main -> INFO 005 Not bootstrapping the system channel because of existing channels
2020-12-30 12:38:46.761 UTC [orderer.common.server] selectClusterBootBlock -> INFO 006 Cluster boot block is bootstrap (genesis) block; Blocks Header.Number system-channel=0, bootstrap=0
2020-12-30 12:38:46.766 UTC [orderer.common.server] Main -> INFO 007 Starting with system channel: system-channel, consensus type: etcdraft
2020-12-30 12:38:46.766 UTC [orderer.common.server] Main -> INFO 008 Setting up cluster
2020-12-30 12:38:46.766 UTC [orderer.common.server] reuseListener -> INFO 009 Cluster listener is not configured, defaulting to use the general listener on port 7050
2020-12-30 12:38:46.766 UTC [orderer.common.server] reuseListener -> INFO 00a Cluster listener is not configured, defaulting to use the general listener on port 7050
2020-12-30 12:38:46.772 UTC [orderer.common.cluster] loadVerifier -> INFO 00b Loaded verifier for channel system-channel from config block at index 0
2020-12-30 12:38:46.772 UTC [certmonitor] trackCertExpiration -> INFO 00c The enrollment certificate will expire on 2021-12-30 12:39:00 +0000 UTC
2020-12-30 12:38:46.772 UTC [certmonitor] trackCertExpiration -> INFO 00d The server TLS certificate will expire on 2021-12-30 12:39:00 +0000 UTC
2020-12-30 12:38:46.772 UTC [certmonitor] trackCertExpiration -> INFO 00e The client TLS certificate will expire on 2021-12-30 12:39:00 +0000 UTC
2020-12-30 12:38:46.782 UTC [orderer.consensus.etcdraft] detectSelfID -> WARN 00f Could not find -----BEGIN CERTIFICATE-----
MIICyjCCAnGgAwIBAgIUYfzAFhIUFlpkY0tXOAZG+dNHwL0wCgYIKoZIzj0EAwIw
XjELMAkGA1UEBhMCUEsxDzANBgNVBAgTBlB1bmphYjEPMA0GA1UEBxMGTGFob3Jl
MRQwEgYDVQQKEwtzc2gtaGhtLmNvbTEXMBUGA1UEAxMOY2Euc3NjLWhobS5jb20w
HhcNMjAxMjMwMTIzNDAwWhcNMjExMjMwMTIzOTAwWjBgMQswCQYDVQQGEwJVUzEX
MBUGA1UECBMOTm9ydGggQ2Fyb2xpbmExFDASBgNVBAoTC0h5cGVybGVkZ2VyMRAw
DgYDVQQLEwdvcmRlcmVyMRAwDgYDVQQDEwdvcmRlcmVyMFkwEwYHKoZIzj0CAQYI
KoZIzj0DAQcDQgAEGZ/xCBprojm/iSrRdNosCSvGR/3nLZ8mtqPlhkLCTZouptr0
RqYxFqMdCDQe0Sh8a0ZnwvB9cnSQiQpNQdcdBaOCAQkwggEFMA4GA1UdDwEB/wQE
AwIDqDAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwDAYDVR0TAQH/BAIw
ADAdBgNVHQ4EFgQUOfdnd2a46tUDkeGYyILJ3hVlXlgwHwYDVR0jBBgwFoAUeDHp
ud9ZRsgehfOLprCK/Wpw0H8wKQYDVR0RBCIwIIITb3JkZXJlci5zc2MtaGhtLmNv
bYIJbG9jYWxob3N0MFsGCCoDBAUGBwgBBE97ImF0dHJzIjp7ImhmLkFmZmlsaWF0
aW9uIjoiIiwiaGYuRW5yb2xsbWVudElEIjoib3JkZXJlciIsImhmLlR5cGUiOiJv
cmRlcmVyIn19MAoGCCqGSM49BAMCA0cAMEQCID1dC/QtexQQDHyUcWh7b9Adti4l
P6XVl9P1V0PtByhAAiBh6UoaAmved5zr9rDvKbVPpte6N+2ANrbft9wV7UrAbw==
-----END CERTIFICATE-----
among [-----BEGIN CERTIFICATE-----
MIICZDCCAgugAwIBAgIRAKjhbRj45MuremeCXWn8wzIwCgYIKoZIzj0EAwIwbDEL
MAkGA1UEBhMCVVMxEzARBgNVBAgTCkNhbGlmb3JuaWExFjAUBgNVBAcTDVNhbiBG
cmFuY2lzY28xFDASBgNVBAoTC3NzYy1oaG0uY29tMRowGAYDVQQDExF0bHNjYS5z
c2MtaGhtLmNvbTAeFw0yMDEyMjkxMDM2MDBaFw0zMDEyMjcxMDM2MDBaMFgxCzAJ
BgNVBAYTAlVTMRMwEQYDVQQIEwpDYWxpZm9ybmlhMRYwFAYDVQQHEw1TYW4gRnJh
bmNpc2NvMRwwGgYDVQQDExNvcmRlcmVyLnNzYy1oaG0uY29tMFkwEwYHKoZIzj0C
AQYIKoZIzj0DAQcDQgAEL/gYIN8w69hi/abMkdDmCBpJEhokhcPcx1mmICf8+9Aa
dSwfRpCbN4GQ71mOUwfh6U5PglXwMkJrXmn/TczaQKOBoTCBnjAOBgNVHQ8BAf8E
BAMCBaAwHQYDVR0lBBYwFAYIKwYBBQUHAwEGCCsGAQUFBwMCMAwGA1UdEwEB/wQC
MAAwKwYDVR0jBCQwIoAgVk5hX37poZ0h4MjMfqtYZte2PEoJs6DwR0lhPOXMBVsw
MgYDVR0RBCswKYITb3JkZXJlci5zc2MtaGhtLmNvbYIHb3JkZXJlcoIJbG9jYWxo
b3N0MAoGCCqGSM49BAMCA0cAMEQCIFGU5BMMpcmkIG1tmA3rwsm2h0Yn6bOT/GU9
oLxTg57iAiB9TaK9uBaYSF0GkpH+fvmhiQ0egSxRcwaXWi99C/V1rA==
-----END CERTIFICATE-----
]
2020-12-30 12:38:46.782 UTC [orderer.common.onboarding] TrackChain -> INFO 010 Adding system-channel to the set of chains to track
2020-12-30 12:38:46.782 UTC [orderer.commmon.multichannel] Initialize -> INFO 011 Starting system channel 'system-channel' with genesis block hash 8d267aa91012044d668ed2f377d485df675b181a5a02d10addf05ec9ef1c77fc and orderer type etcdraft
2020-12-30 12:38:46.783 UTC [orderer.common.server] Main -> INFO 012 Starting orderer:
Version: 2.2.1
Commit SHA: 344fda6
Go version: go1.14.4
OS/Arch: linux/amd64
2020-12-30 12:38:46.783 UTC [orderer.common.server] Main -> INFO 013 Beginning to serve requests
2020-12-30 12:38:56.775 UTC [orderer.common.onboarding] replicateDisabledChains -> INFO 014 Found 1 inactive chains: [system-channel]
2020-12-30 12:38:56.776 UTC [orderer.common.cluster] ReplicateChains -> INFO 015 Will now replicate chains [system-channel]
2020-12-30 12:38:56.786 UTC [orderer.common.cluster] discoverChannels -> INFO 016 Discovered 1 channels: [system-channel]
2020-12-30 12:38:56.786 UTC [orderer.common.cluster] channelsToPull -> INFO 017 Evaluating channels to pull: [system-channel]
2020-12-30 12:38:56.786 UTC [orderer.common.cluster] channelsToPull -> INFO 018 Probing whether I should pull channel system-channel
2020-12-30 12:38:56.790 UTC [core.comm] ServerHandshake -> ERRO 019 TLS handshake failed with error remote error: tls: bad certificate server=Orderer remoteaddress=172.21.0.8:59898
2020-12-30 12:38:57.793 UTC [core.comm] ServerHandshake -> ERRO 01a TLS handshake failed with error remote error: tls: bad certificate server=Orderer remoteaddress=172.21.0.8:59900
2020-12-30 12:38:59.432 UTC [core.comm] ServerHandshake -> ERRO 01b TLS handshake failed with error remote error: tls: bad certificate server=Orderer remoteaddress=172.21.0.8:59902
2020-12-30 12:39:01.881 UTC [core.comm] ServerHandshake -> ERRO 01c TLS handshake failed with error remote error: tls: bad certificate server=Orderer remoteaddress=172.21.0.8:59904
2020-12-30 12:39:03.787 UTC [orderer.common.cluster.replication] probeEndpoint -> WARN 01d Failed connecting to {"CAs":[{"Expired":false,"Issuer":"self","Subject":"CN=tlsca.ssc-hhm.com,O=ssc-hhm.com,L=San Francisco,ST=California,C=US"}],"Endpoint":"orderer.ssc-hhm.com:7050"}: failed to create new connection: context deadline exceeded channel=system-channel
2020-12-30 12:39:03.788 UTC [orderer.common.cluster.replication] func1 -> WARN 01e Received error of type 'failed to create new connection: context deadline exceeded' from {"CAs":[{"Expired":false,"Issuer":"self","Subject":"CN=tlsca.ssc-hhm.com,O=ssc-hhm.com,L=San Francisco,ST=California,C=US"}],"Endpoint":"orderer.ssc-hhm.com:7050"} channel=system-channel
2020-12-30 12:39:03.788 UTC [orderer.common.cluster.replication] HeightsByEndpoints -> INFO 01f Returning the heights of OSNs mapped by endpoints map[] channel=system-channel
2020-12-30 12:39:03.788 UTC [orderer.common.cluster] channelsToPull -> WARN 020 Could not obtain blocks needed for classifying whether I am in the channel,skipping the retrieval of the chan system-channel
2020-12-30 12:39:03.788 UTC [orderer.common.cluster] ReplicateChains -> INFO 021 Found myself in 0 channels out of 1 : {[] [{system-channel 0xc0006e7580}]}
2020-12-30 12:39:03.788 UTC [orderer.common.cluster] appendBlock -> INFO 022 Skipping commit of block [0] for channel system-channel because height is at 1
2020-12-30 12:39:03.788 UTC [orderer.common.cluster] PullChannel -> INFO 023 Pulling channel system-channel
2020-12-30 12:39:03.793 UTC [core.comm] ServerHandshake -> ERRO 024 TLS handshake failed with error remote error: tls: bad certificate server=Orderer remoteaddress=172.21.0.8:59906
2020-12-30 12:39:04.797 UTC [core.comm] ServerHandshake -> ERRO 025 TLS handshake failed with error remote error: tls: bad certificate server=Orderer remoteaddress=172.21.0.8:59908
2020-12-30 12:39:06.430 UTC [core.comm] ServerHandshake -> ERRO 026 TLS handshake failed with error remote error: tls: bad certificate server=Orderer remoteaddress=172.21.0.8:59910
2020-12-30 12:39:08.952 UTC [core.comm] ServerHandshake -> ERRO 027 TLS handshake failed with error remote error: tls: bad certificate server=Orderer remoteaddress=172.21.0.8:59912
2020-12-30 12:39:10.790 UTC [orderer.common.cluster.replication] probeEndpoint -> WARN 028 Failed connecting to {"CAs":[{"Expired":false,"Issuer":"self","Subject":"CN=tlsca.ssc-hhm.com,O=ssc-hhm.com,L=San Francisco,ST=California,C=US"}],"Endpoint":"orderer.ssc-hhm.com:7050"}: failed to create new connection: context deadline exceeded channel=system-channel
2020-12-30 12:39:10.790 UTC [orderer.common.cluster.replication] func1 -> WARN 029 Received error of type 'failed to create new connection: context deadline exceeded' from {"CAs":[{"Expired":false,"Issuer":"self","Subject":"CN=tlsca.ssc-hhm.com,O=ssc-hhm.com,L=San Francisco,ST=California,C=US"}],"Endpoint":"orderer.ssc-hhm.com:7050"} channel=system-channel
2020-12-30 12:39:10.790 UTC [orderer.common.cluster.replication] HeightsByEndpoints -> INFO 02a Returning the heights of OSNs mapped by endpoints map[] channel=system-channel
2020-12-30 12:39:10.790 UTC [orderer.common.cluster] ReplicateChains -> PANI 02b Failed pulling system channel: failed obtaining the latest block for channel system-channel
panic: Failed pulling system channel: failed obtaining the latest block for channel system-channel
goroutine 28 [running]:
go.uber.org/zap/zapcore.(*CheckedEntry).Write(0xc000151340, 0x0, 0x0, 0x0)
/go/src/github.com/hyperledger/fabric/vendor/go.uber.org/zap/zapcore/entry.go:230 +0x545
go.uber.org/zap.(*SugaredLogger).log(0xc000209240, 0xc00044b304, 0x101fc6d, 0x21, 0xc0000f9c40, 0x1, 0x1, 0x0, 0x0, 0x0)
/go/src/github.com/hyperledger/fabric/vendor/go.uber.org/zap/sugar.go:234 +0x100
go.uber.org/zap.(*SugaredLogger).Panicf(...)
/go/src/github.com/hyperledger/fabric/vendor/go.uber.org/zap/sugar.go:159
github.com/hyperledger/fabric/common/flogging.(*FabricLogger).Panicf(...)
/go/src/github.com/hyperledger/fabric/common/flogging/zap.go:74
github.com/hyperledger/fabric/orderer/common/cluster.(*Replicator).ReplicateChains(0xc0003dfe00, 0xc00054e340, 0xc0002c0200, 0xc0003dfe00)
/go/src/github.com/hyperledger/fabric/orderer/common/cluster/replication.go:166 +0x49d
github.com/hyperledger/fabric/orderer/common/onboarding.(*ReplicationInitiator).ReplicateChains(0xc00027e400, 0xc00054e340, 0xc0002c0000, 0x1, 0x1, 0x0, 0x0, 0x0)
/go/src/github.com/hyperledger/fabric/orderer/common/onboarding/onboarding.go:185 +0x1e3
github.com/hyperledger/fabric/orderer/common/onboarding.(*InactiveChainReplicator).replicateDisabledChains(0xc000207920)
/go/src/github.com/hyperledger/fabric/orderer/common/onboarding/onboarding.go:312 +0x225
github.com/hyperledger/fabric/orderer/common/onboarding.(*InactiveChainReplicator).Run(0xc000207920)
/go/src/github.com/hyperledger/fabric/orderer/common/onboarding/onboarding.go:290 +0x42
created by github.com/hyperledger/fabric/orderer/common/server.initializeEtcdraftConsenter
/go/src/github.com/hyperledger/fabric/orderer/common/server/main.go:777 +0x218
I am very new to Fabric. I am trying to build a supply-chain network with 3 orgs: Manufacturer, Supplier, and Retailer. If anyone could guide me here, the help would be greatly appreciated.
I am using Hyperledger Fabric 2.0 on Ubuntu 18.04.
With the help of @myeongkil kim I was able to solve this issue. I just had to remove all the Docker volumes with docker volume prune; after that, bringing the network back up solved the problem.
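For reference, the sequence looks roughly like this (I'm assuming the network.sh script that ships with the fabric-samples test-network; substitute whatever scripts your modified network uses):

./network.sh down              # stop and remove the containers of the modified network
docker volume prune            # remove the stale orderer/peer volumes from earlier runs
./network.sh up createChannel  # bring the network back up with a clean ledger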
The problem comes from a bad TLS certificate used by the new orderer. Check the logs of the other orderers and you will get some hints about it.
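To confirm it, compare the subject and issuer of the certificate the new orderer actually serves with what the other orderers expect, for example (paths taken from the config dump above; run inside the orderer container with docker exec):

openssl x509 -in /var/hyperledger/orderer/tls/server.crt -noout -subject -issuer -dates
openssl x509 -in /var/hyperledger/orderer/tls/ca.crt -noout -subject -issuer

If the TLS material was not re-issued after renaming the organisations, its subject and issuing CA will not match what the rest of the cluster trusts, which is what the "bad certificate" handshake errors above indicate.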
I installed FreeRADIUS on CentOS 7 with ./configure && make && make install.
After starting the server, the local test works:
[root@iZ2zebgsn1haj8gu0447fiZ raddb]# radtest steve testing localhost 0 testing123
Sending Access-Request of id 151 to 127.0.0.1 port 1812
User-Name = "steve"
User-Password = "testing"
NAS-IP-Address = 172.17.120.248
NAS-Port = 0
Message-Authenticator = 0x00000000000000000000000000000000
rad_recv: Access-Accept packet from host 127.0.0.1 port 1812, id=151, length=71
Service-Type = Framed-User
Framed-Protocol = PPP
Framed-IP-Address = 172.16.3.33
Framed-IP-Netmask = 255.255.255.0
Framed-Routing = Broadcast-Listen
Filter-Id = "std.ppp"
Framed-MTU = 1500
Framed-Compression = Van-Jacobson-TCP-IP
But from the remote machine, there seems to be no response from the RADIUS server:
[root@iZ2zebgsn1haj8gu0447fiZ raddb]# radtest steve testing 211.71.149.221 0 testing123
Sending Access-Request of id 149 to 211.71.149.221 port 1812
User-Name = "steve"
User-Password = "testing"
NAS-IP-Address = 172.17.120.248
NAS-Port = 0
Message-Authenticator = 0x00000000000000000000000000000000
Sending Access-Request of id 149 to 211.71.149.221 port 1812
User-Name = "steve"
User-Password = "testing"
NAS-IP-Address = 172.17.120.248
NAS-Port = 0
Message-Authenticator = 0x00000000000000000000000000000000
Sending Access-Request of id 149 to 211.71.149.221 port 1812
User-Name = "steve"
User-Password = "testing"
NAS-IP-Address = 172.17.120.248
NAS-Port = 0
Message-Authenticator = 0x00000000000000000000000000000000
radclient: no response from server for ID 149 socket 3
Here are my configuration files:
clients.conf:
client 211.71.149.221{
ipaddr=211.71.149.221
secret = testing123
short = test-client
nastype = other
}
users:
steve Cleartext-Password := "testing"
Service-Type = Framed-User,
Framed-Protocol = PPP,
Framed-IP-Address = 172.16.3.33,
Framed-IP-Netmask = 255.255.255.0,
Framed-Routing = Broadcast-Listen,
Framed-Filter-Id = "std.ppp",
Framed-MTU = 1500,
Framed-Compression = Van-Jacobsen-TCP-I
I didn't use a database, so I didn't change radiusd.conf.
[root@iZ2zebgsn1haj8gu0447fiZ raddb]# netstat -upln
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
udp 0 0 0.0.0.0:68 0.0.0.0:* 727/dhclient
udp 0 0 172.17.120.248:123 0.0.0.0:* 828/ntpd
udp 0 0 127.0.0.1:123 0.0.0.0:* 828/ntpd
udp 0 0 0.0.0.0:123 0.0.0.0:* 828/ntpd
udp 0 0 0.0.0.0:58664 0.0.0.0:* 26159/radiusd
udp 0 0 0.0.0.0:5489 0.0.0.0:* 727/dhclient
udp 0 0 127.0.0.1:18120 0.0.0.0:* 26159/radiusd
udp 0 0 0.0.0.0:1812 0.0.0.0:* 26159/radiusd
udp 0 0 0.0.0.0:1813 0.0.0.0:* 26159/radiusd
udp 0 0 0.0.0.0:1814 0.0.0.0:* 26159/radiusd
udp6 0 0 :::123 :::* 828/ntpd
udp6 0 0 :::54457 :::* 727/dhclient
Your failing radtest is sending the request to a remote server with IP 211.71.149.221. Your clients.conf defines a client with the IP address 211.71.149.221. I'm guessing that is NOT the IP your request is actually coming from, and FreeRADIUS silently drops packets from addresses that do not match any client block, which is why radclient reports no response.
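In other words, the client block has to describe the machine the request comes from, not the RADIUS server itself. Something along these lines, with the placeholder address replaced by the real source IP (or subnet) of your remote radtest:

client remote-test {
        ipaddr    = 203.0.113.10    # the address (or subnet) the remote radtest comes from
        secret    = testing123
        shortname = remote-test
        nastype   = other
}

Restart radiusd after changing clients.conf so the new client definition is picked up.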
I am working on a Ruby on Rails application which uses the WinRM library to access a remote Windows server. The transport supplied is :negotiate which will negotiate the authentication with the remote server.
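For context, the connection is created more or less like this (the host is the one visible in the capture below; credentials are placeholders and the constructor options follow the winrm 1.7.x style as far as I recall, so treat it as a sketch):

require 'winrm'

endpoint = 'http://pokcpeusap02.corp.absc.local:5985/wsman'
winrm = WinRM::WinRMWebService.new(endpoint, :negotiate,
                                   :user => 'DOMAIN\\user', :pass => 'secret')
# run_wql goes through send_request/init_auth, which is where the missing 401 blows up
winrm.run_wql('select * from Win32_OperatingSystem')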
The issue is that the WinRM library expects a 401 HTTP status code so that it can send more data for authentication. However, a 200 HTTP status code is returned and the negotiation fails.
The backtrace is :
NoMethodError: undefined method `split' for nil:NilClass
from /home/cobalt/.rvm/gems/ruby-2.2.2/gems/winrm-1.7.3/lib/winrm/http/transport.rb:226:in `init_auth'
from /home/cobalt/.rvm/gems/ruby-2.2.2/gems/winrm-1.7.3/lib/winrm/http/transport.rb:166:in `send_request'
from /home/cobalt/.rvm/gems/ruby-2.2.2/gems/winrm-1.7.3/lib/winrm/winrm_service.rb:489:in `send_message'
from /home/cobalt/.rvm/gems/ruby-2.2.2/gems/winrm-1.7.3/lib/winrm/winrm_service.rb:390:in `run_wql'
from /home/cobalt/.rvm/gems/ruby-2.2.2/gems/winrm-1.7.3/lib/winrm/command_executor.rb:186:in `os_version'
from /home/cobalt/.rvm/gems/ruby-2.2.2/gems/winrm-1.7.3/lib/winrm/command_executor.rb:145:in `code_page'
from /home/cobalt/.rvm/gems/ruby-2.2.2/gems/winrm-1.7.3/lib/winrm/command_executor.rb:72:in `block in open'
from /home/cobalt/.rvm/gems/ruby-2.2.2/gems/winrm-1.7.3/lib/winrm/command_executor.rb:218:in `retryable'
from /home/cobalt/.rvm/gems/ruby-2.2.2/gems/winrm-1.7.3/lib/winrm/command_executor.rb:71:in `open'
from (irb):20
from /home/cobalt/.rvm/gems/ruby-2.2.2/gems/railties-4.2.1/lib/rails/commands/console.rb:110:in `start'
from /home/cobalt/.rvm/gems/ruby-2.2.2/gems/railties-4.2.1/lib/rails/commands/console.rb:9:in `start'
from /home/cobalt/.rvm/gems/ruby-2.2.2/gems/railties-4.2.1/lib/rails/commands/commands_tasks.rb:68:in `console'
from /home/cobalt/.rvm/gems/ruby-2.2.2/gems/railties-4.2.1/lib/rails/commands/commands_tasks.rb:39:in `run_command!'
from /home/cobalt/.rvm/gems/ruby-2.2.2/gems/railties-4.2.1/lib/rails/commands.rb:17:in `<top (required)>'
from bin/rails:4:in `require'
from bin/rails:4:in `<main>'
The TCP Dump shows the below package exchanges
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
12:04:08.372376 IP d8b5d56cba65.53166 > pokcpeusap02.corp.absc.local.wsman: Flags [S], seq 2899844066, win 29200, options [mss 1460,sackOK,TS val 1316187676 ecr 0,nop,wscale 7], length 0
0x0000: 4500 003c 75fe 4000 4006 d486 0400 0005 E..<u.#.#.......
0x0010: 0acd e165 cfae 1761 acd8 1be2 0000 0000 ...e...a........
0x0020: a002 7210 f065 0000 0204 05b4 0402 080a ..r..e..........
0x0030: 4e73 6e1c 0000 0000 0103 0307 Nsn.........
12:04:08.421019 IP pokcpeusap02.corp.absc.local.wsman > d8b5d56cba65.53166: Flags [S.], seq 3702856093, ack 2899844067, win 8192, options [mss 1351,nop,wscale 8,sackOK,TS val 79780711 ecr 1316187676],
length 0
0x0000: 4500 003c 7f04 4000 7d06 8e80 0acd e165 E..<..#.}......e
0x0010: 0400 0005 1761 cfae dcb5 199d acd8 1be3 .....a..........
0x0020: a012 2000 754e 0000 0204 0547 0103 0308 ....uN.....G....
0x0030: 0402 080a 04c1 5b67 4e73 6e1c ......[gNsn.
12:04:08.421047 IP d8b5d56cba65.53166 > pokcpeusap02.corp.absc.local.wsman: Flags [.], ack 1, win 229, options [nop,nop,TS val 1316187725 ecr 79780711], length 0
0x0000: 4500 0034 75ff 4000 4006 d48d 0400 0005 E..4u.#.#.......
0x0010: 0acd e165 cfae 1761 acd8 1be3 dcb5 199e ...e...a........
0x0020: 8010 00e5 f05d 0000 0101 080a 4e73 6e4d .....]......NsnM
0x0030: 04c1 5b67 ..[g
12:04:08.421368 IP d8b5d56cba65.53166 > pokcpeusap02.corp.absc.local.wsman: Flags [P.], seq 1:340, ack 1, win 229, options [nop,nop,TS val 1316187725 ecr 79780711], length 339
0x0000: 4500 0187 7600 4000 4006 d339 0400 0005 E...v.#.#..9....
0x0010: 0acd e165 cfae 1761 acd8 1be3 dcb5 199e ...e...a........
0x0020: 8018 00e5 f1b0 0000 0101 080a 4e73 6e4d ............NsnM
0x0030: 04c1 5b67 504f 5354 202f 7773 6d61 6e20 ..[gPOST./wsman.
0x0040: 4854 5450 2f31 2e31 0d0a 4175 7468 6f72 HTTP/1.1..Author
0x0050: 697a 6174 696f 6e3a 204e 6567 6f74 6961 ization:.Negotia
0x0060: 7465 2054 6c52 4d54 564e 5455 4141 4241 te.TlRMTVNTUAABA
0x0070: 4141 414e 3449 4934 4151 4142 4141 6741 AAAN4II4AQABAAgA
0x0080: 4141 4141 4141 4141 4351 4141 4142 4462 AAAAAAAACQAAABDb
0x0090: 334a 775a 4468 694e 5751 314e 6d4e 6959 3JwZDhiNWQ1NmNiY
0x00a0: 5459 310d 0a43 6f6e 7465 6e74 2d54 7970 TY1..Content-Typ
0x00b0: 653a 2061 7070 6c69 6361 7469 6f6e 2f73 e:.application/s
0x00c0: 6f61 702b 786d 6c3b 6368 6172 7365 743d oap+xml;charset=
0x00d0: 5554 462d 380d 0a55 7365 722d 4167 656e UTF-8..User-Agen
0x00e0: 743a 2052 7562 7920 5769 6e52 4d20 436c t:.Ruby.WinRM.Cl
0x00f0: 6965 6e74 2028 322e 372e 312c 2072 7562 ient.(2.7.1,.rub
0x0100: 7920 322e 322e 3220 2832 3031 352d 3034 y.2.2.2.(2015-04
0x0110: 2d31 3329 290d 0a41 6363 6570 743a 202a -13))..Accept:.*
0x0120: 2f2a 0d0a 4461 7465 3a20 5475 652c 2030 /*..Date:.Tue,.0
0x0130: 3720 4d61 7220 3230 3137 2031 323a 3034 7.Mar.2017.12:04
0x0140: 3a30 3820 474d 540d 0a43 6f6e 7465 6e74 :08.GMT..Content
0x0150: 2d4c 656e 6774 683a 2030 0d0a 486f 7374 -Length:.0..Host
0x0160: 3a20 706f 6b63 7065 7573 6170 3032 2e63 :.pokcpeusap02.c
0x0170: 6f72 702e 6162 7363 2e6c 6f63 616c 3a35 orp.absc.local:5
0x0180: 3938 350d 0a0d 0a 985....
12:04:08.516497 IP pokcpeusap02.corp.absc.local.wsman > d8b5d56cba65.53166: Flags [P.], seq 1:39, ack 340, win 256, options [nop,nop,TS val 79780721 ecr 1316187725], length 38
0x0000: 4500 005a 7f05 4000 7d06 8e61 0acd e165 E..Z..#.}..a...e
0x0010: 0400 0005 1761 cfae dcb5 199e acd8 1d36 .....a.........6
0x0020: 8018 0100 11f4 0000 0101 080a 04c1 5b71 ..............[q
0x0030: 4e73 6e4d 4854 5450 2f31 2e31 2032 3030 NsnMHTTP/1.1.200
0x0040: 204f 4b0d 0a43 6f6e 7465 6e74 2d4c 656e .OK..Content-Len
0x0050: 6774 683a 2030 0d0a 0d0a gth:.0....
12:04:08.516541 IP d8b5d56cba65.53166 > pokcpeusap02.corp.absc.local.wsman: Flags [.], ack 39, win 229, options [nop,nop,TS val 1316187821 ecr 79780721], length 0
0x0000: 4500 0034 7601 4000 4006 d48b 0400 0005 E..4v.#.#.......
0x0010: 0acd e165 cfae 1761 acd8 1d36 dcb5 19c4 ...e...a...6....
0x0020: 8010 00e5 f05d 0000 0101 080a 4e73 6ead .....]......Nsn.
0x0030: 04c1 5b71 ..[q
What could be the issue? Why don't I get a 401 HTTP status code?
I have managed to find the root cause of the issue. It turns out another service, not the WinRM service, was listening on port 5985. So when a request was sent to that port, that service responded with a reply requiring Basic authentication and a 200 status code. The issue was fixed by starting the WinRM service and making it listen on port 5985.
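For anyone debugging something similar, it only takes a minute on the Windows host to check what actually owns port 5985 (replace <pid> with the PID reported by the first command):

REM which process is listening on 5985? note its PID in the last column
netstat -ano | findstr :5985
REM map that PID to an executable name
tasklist /fi "PID eq <pid>"
REM start the WinRM service and (re)create its HTTP listener on 5985
winrm quickconfig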
The detailed analysis can be found at Ruby WinRM undefined method `split' for nil:NilClass. It's a good lesson: sometimes the issue is very simple, but finding it can take a lot of effort.
My Erlang application process crashed and then exited, but I found that the Erlang VM is still running.
I receive pong when I ping this "suspended node".
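The check from another node looks like this (the name of the node I ping from is just a placeholder):

(check@192.168.1.141)1> net_adm:ping('hub@192.168.1.140').
pong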
Typing regs() gives the results below; my app process is not among them.
(hub@192.168.1.140)4> regs().
** Registered procs on node 'hub@192.168.1.140' **
Name Pid Initial Call Reds Msgs
application_controlle <0.7.0> erlang:apply/2 30258442 1390
auth <0.20.0> auth:init/1 189 0
code_server <0.26.0> erlang:apply/2 1194028 0
erl_epmd <0.19.0> erl_epmd:init/1 138 0
erl_prim_loader <0.3.0> erlang:apply/2 2914236 0
error_logger <0.6.0> gen_event:init_it/6 49983527 0
file_server_2 <0.25.0> file_server:init/1 16185407 0
global_group <0.24.0> global_group:init/1 107 0
global_name_server <0.13.0> global:init/1 1385 0
gr_counter_sup <0.43.0> supervisor:gr_counter_sup 253 0
gr_lager_default_trac <0.70.0> gr_counter:init/1 121 0
gr_lager_default_trac <0.72.0> gr_manager:init/1 46 0
gr_lager_default_trac <0.69.0> gr_param:init/1 117 0
gr_lager_default_trac <0.71.0> gr_manager:init/1 46 0
gr_manager_sup <0.45.0> supervisor:gr_manager_sup 484 0
gr_param_sup <0.44.0> supervisor:gr_param_sup/1 253 0
gr_sup <0.42.0> supervisor:gr_sup/1 237 0
inet_db <0.16.0> inet_db:init/1 749 0
inet_gethost_native <0.176.0> inet_gethost_native:serve 4698517 0
inet_gethost_native_s <0.175.0> supervisor_bridge:inet_ge 41 0
init <0.0.0> otp_ring0:start/2 30799457 0
kernel_safe_sup <0.35.0> supervisor:kernel/1 278 0
kernel_sup <0.11.0> supervisor:kernel/1 47618 0
lager_crash_log <0.52.0> lager_crash_log:init/1 97712230 0
lager_event <0.50.0> gen_event:init_it/6 1813660437 0
lager_handler_watcher <0.51.0> supervisor:lager_handler_ 358 0
lager_sup <0.49.0> supervisor:lager_sup/1 327 0
net_kernel <0.21.0> net_kernel:init/1 110769667 0
net_sup <0.18.0> supervisor:erl_distributi 313 0
os_cmd_port_creator <0.582.0> erlang:apply/2 81 0
rex <0.12.0> rpc:init/1 15653480 0
standard_error <0.28.0> erlang:apply/2 9 0
standard_error_sup <0.27.0> supervisor_bridge:standar 41 0
timer_server <0.100.0> timer:init/1 59356077 0
user <0.31.0> group:server/3 23837008 0
user_drv <0.30.0> user_drv:server/2 12239455 0
** Registered ports on node 'hub@192.168.1.140' **
Name Id Command
ok
It rarely occurs, but can anyone explain it?
System: CentOS 5.8
Erlang: R15B03
I'm constantly losing my MySQL connection after a few minutes. I see no errors in the log until I attempt to connect.
I'm happy to post any settings that will help debug, just let me know what you need to see.
context.xml:
<Resource name="jdbc/mysql" auth="Container" type="javax.sql.DataSource"
initialSize="10" maxActive="50" maxIdle="20" maxWait="60000"
driverClassName="com.mysql.jdbc.Driver"
poolPreparedStatements="true"
username="orbeon"
password="pw"
url="jdbc:mysql://localhost:3306/orbeon"/>
my.cnf:
[client]
port = 3306
socket = /var/run/mysqld/mysqld.sock
[mysqld_safe]
socket = /var/run/mysqld/mysqld.sock
nice = 0
[mysqld]
user = mysql
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
port = 3306
basedir = /usr
datadir = /var/lib/mysql
tmpdir = /tmp
skip-external-locking
skip-name-resolve
bind-address = 0.0.0.0
key-buffer = 256M
thread_stack = 256K
thread_cache_size = 8
max_allowed_packet = 16M
max_connections = 200
myisam-recover = BACKUP
wait_timeout = 180
net_read_timeout = 30
net_write_timeout = 30
back_log = 128
table_cache = 128
max_heap_table_size = 32M
lower_case_table_names = 0
query_cache_limit = 1M
query_cache_size = 16M
log_error = /var/log/mysql/error.log
log_slow_queries = /var/log/mysql/slow.log
long-query-time = 5
log-queries-not-using-indexes
[mysqldump]
quick
quote-names
max_allowed_packet = 16M
[mysql]
[isamchk]
key-buffer = 256M
max_allowed_packet = 16M
!includedir /etc/mysql/conf.d/
Try adding the following two attributes to your existing <Resource> for MySQL. With those, the connection pool in Tomcat will check that the connection is still usable after getting it from the pool.
validationQuery="select 1 from dual"
testOnBorrow="true"
So your <Resource> should look something like (of course with the appropriate username, password, and server):
<Resource name="jdbc/mysql" auth="Container" type="javax.sql.DataSource"
initialSize="3" maxActive="10" maxIdle="20" maxWait="30000"
driverClassName="com.mysql.jdbc.Driver"
poolPreparedStatements="true"
validationQuery="select 1 from dual"
testOnBorrow="true"
username="orbeon"
password="orbeon"
url="jdbc:mysql://localhost:3306/orbeon?useUnicode=true&characterEncoding=UTF8"/>
Why do you have your wait_timeout set so low? With wait_timeout = 180, MySQL closes any connection that has been idle for more than three minutes, which matches the "losing my connection after a few minutes" symptom.
http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html#sysvar_wait_timeout
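If that 180-second value is not deliberate, raising it in my.cnf stops MySQL from closing pooled connections that sit idle between requests; the server default is 28800 seconds (8 hours), for example:

[mysqld]
wait_timeout        = 28800
interactive_timeout = 28800

Even with a longer timeout, the validationQuery/testOnBorrow settings from the other answer are worth keeping, so the pool quietly replaces any connection the server has already closed.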