Why do I get the error on_load_function_failed,re2? - erlang

When I try to start the deployed app, I get this error:
initial_call: {supervisor,kernel,['Argument__1']}
pid: <0.1762.0>
registered_name: []
error_info: {exit,{on_load_function_failed,re2},[{gen_server,init_it,6,[{file,"gen_server.erl"},{line,352}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,247}]}]}
ancestors: [kernel_sup,<0.1738.0>]
messages: []
links: [<0.1739.0>]
dictionary: []
trap_exit: true
status: running
heap_size: 376
stack_size: 27
reductions: 117
2017-06-19 11:51:18 supervisor_report
supervisor: {local,kernel_sup}
errorContext: start_error
reason: {on_load_function_failed,re2}
offender: [{pid,undefined},{id,kernel_safe_sup},{mfargs,{supervisor,start_link,[{local,kernel_safe_sup},kernel,safe]}},{restart_type,permanent},{shutdown,infinity},{child_type,supervisor}]
2017-06-19 11:51:19 crash_report
initial_call: {application_master,init,['Argument__1','Argument__2','Argument__3','Argument__4']}
pid: <0.1737.0>
registered_name: []
error_info: {exit,{{shutdown,{failed_to_start_child,kernel_safe_sup,{on_load_function_failed,re2}}},{kernel,start,[normal,[]]}},[{application_master,init,4,[{file,"application_master.erl"},{line,134}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,247}]}]}
ancestors: [<0.1736.0>]
messages: [{'EXIT',<0.1738.0>,normal}]
links: [<0.1736.0>,<0.1735.0>]
dictionary: []
trap_exit: true
status: running
heap_size: 376
stack_size: 27
reductions: 152
2017-06-19 11:51:19 std_info
application: kernel
exited: {{shutdown,{failed_to_start_child,kernel_safe_sup,{on_load_function_failed,re2}}},{kernel,start,[normal,[]]}}
type: permanent
{"Kernel pid terminated",application_controller,"{application_start_failure,kernel,{{shutdown,{failed_to_start_child,kernel_safe_sup,{on_load_function_failed,re2}}},{kernel,start,[normal,[]]}}}"}
Kernel pid terminated (application_controller) ({application_start_failure,kernel,{{shutdown,{failed_to_start_child,kernel_safe_sup,{on_load_function_failed,re2}}},{kernel,start,[normal,[]]}}})
But I have no idea how to solve this error. Any ideas?
Phoenix app start:
def start(_type, _args) do
  import Supervisor.Spec, warn: false

  children = [
    # Start the endpoint when the application starts
    supervisor(MyApp.Endpoint, []),
    # Start the endpoint for schools
    supervisor(MyApp.Schools.Endpoint, []),
    # Start the Ecto repository
    supervisor(MyApp.Schools.Repo, []),
    supervisor(MyApp.Repo, [])
  ]

  opts = [strategy: :one_for_one, name: MyApp.Supervisor]
  Supervisor.start_link(children, opts)
end
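For what it's worth, {on_load_function_failed, re2} means re2's -on_load hook (which loads its NIF shared library) returned an error while the release was booting; the usual suspects are a missing .so under re2's priv directory or a release built on a different OS/architecture than the deployment target. A minimal diagnostic sketch, assuming you can open IEx (e.g. iex -S mix) on the machine where the dependencies were compiled; the rebuild command at the end is only an example:

# Check that the re2 NIF library exists and note the build architecture.
case :code.priv_dir(:re2) do
  {:error, :bad_name} ->
    IO.puts("re2 is not in the code path")

  priv ->
    # The compiled NIF shared object should show up here.
    IO.inspect(File.ls!(to_string(priv)), label: "contents of re2 priv dir")
end

IO.inspect(:erlang.system_info(:system_architecture), label: "build architecture")

# If the .so is missing or was built for a different OS/architecture than the
# deployment target, rebuilding the dependency in a matching environment
# usually helps, e.g.:
#   mix deps.clean re2 --build && mix deps.compile re2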

Related

Orderer container fails to run Hyperledger Fabric 2.0

I am trying to make my own network on top of the "test-network" provided in fabric-samples. Although the test-network runs fine, when I make my changes to it (like renaming org1 and org2 to some other names), everything runs but the orderer container stops running after a few seconds.
Docker logs for the container:
2020-12-30 12:38:46.267 UTC [localconfig] completeInitialization -> WARN 001 General.GenesisFile should be replaced by General.BootstrapFile
2020-12-30 12:38:46.268 UTC [localconfig] completeInitialization -> INFO 002 Kafka.Version unset, setting to 0.10.2.0
2020-12-30 12:38:46.268 UTC [orderer.common.server] prettyPrintStruct -> INFO 003 Orderer config values:
General.ListenAddress = "0.0.0.0"
General.ListenPort = 7050
General.TLS.Enabled = true
General.TLS.PrivateKey = "/var/hyperledger/orderer/tls/server.key"
General.TLS.Certificate = "/var/hyperledger/orderer/tls/server.crt"
General.TLS.RootCAs = [/var/hyperledger/orderer/tls/ca.crt]
General.TLS.ClientAuthRequired = false
General.TLS.ClientRootCAs = []
General.TLS.TLSHandshakeTimeShift = 0s
General.Cluster.ListenAddress = ""
General.Cluster.ListenPort = 0
General.Cluster.ServerCertificate = ""
General.Cluster.ServerPrivateKey = ""
General.Cluster.ClientCertificate = "/var/hyperledger/orderer/tls/server.crt"
General.Cluster.ClientPrivateKey = "/var/hyperledger/orderer/tls/server.key"
General.Cluster.RootCAs = [/var/hyperledger/orderer/tls/ca.crt]
General.Cluster.DialTimeout = 5s
General.Cluster.RPCTimeout = 7s
General.Cluster.ReplicationBufferSize = 20971520
General.Cluster.ReplicationPullTimeout = 5s
General.Cluster.ReplicationRetryTimeout = 5s
General.Cluster.ReplicationBackgroundRefreshInterval = 5m0s
General.Cluster.ReplicationMaxRetries = 12
General.Cluster.SendBufferSize = 10
General.Cluster.CertExpirationWarningThreshold = 168h0m0s
General.Cluster.TLSHandshakeTimeShift = 0s
General.Keepalive.ServerMinInterval = 1m0s
General.Keepalive.ServerInterval = 2h0m0s
General.Keepalive.ServerTimeout = 20s
General.ConnectionTimeout = 0s
General.GenesisMethod = "file"
General.GenesisFile = "/var/hyperledger/orderer/orderer.genesis.block"
General.BootstrapMethod = "file"
General.BootstrapFile = "/var/hyperledger/orderer/orderer.genesis.block"
General.Profile.Enabled = false
General.Profile.Address = "0.0.0.0:6060"
General.LocalMSPDir = "/var/hyperledger/orderer/msp"
General.LocalMSPID = "OrdererMSP"
General.BCCSP.ProviderName = "SW"
General.BCCSP.SwOpts.SecLevel = 256
General.BCCSP.SwOpts.HashFamily = "SHA2"
General.BCCSP.SwOpts.Ephemeral = true
General.BCCSP.SwOpts.FileKeystore.KeyStorePath = ""
General.BCCSP.SwOpts.DummyKeystore =
General.BCCSP.SwOpts.InmemKeystore =
General.Authentication.TimeWindow = 15m0s
General.Authentication.NoExpirationChecks = false
FileLedger.Location = "/var/hyperledger/production/orderer"
FileLedger.Prefix = "hyperledger-fabric-ordererledger"
Kafka.Retry.ShortInterval = 5s
Kafka.Retry.ShortTotal = 10m0s
Kafka.Retry.LongInterval = 5m0s
Kafka.Retry.LongTotal = 12h0m0s
Kafka.Retry.NetworkTimeouts.DialTimeout = 10s
Kafka.Retry.NetworkTimeouts.ReadTimeout = 10s
Kafka.Retry.NetworkTimeouts.WriteTimeout = 10s
Kafka.Retry.Metadata.RetryMax = 3
Kafka.Retry.Metadata.RetryBackoff = 250ms
Kafka.Retry.Producer.RetryMax = 3
Kafka.Retry.Producer.RetryBackoff = 100ms
Kafka.Retry.Consumer.RetryBackoff = 2s
Kafka.Verbose = true
Kafka.Version = 0.10.2.0
Kafka.TLS.Enabled = false
Kafka.TLS.PrivateKey = ""
Kafka.TLS.Certificate = ""
Kafka.TLS.RootCAs = []
Kafka.TLS.ClientAuthRequired = false
Kafka.TLS.ClientRootCAs = []
Kafka.TLS.TLSHandshakeTimeShift = 0s
Kafka.SASLPlain.Enabled = false
Kafka.SASLPlain.User = ""
Kafka.SASLPlain.Password = ""
Kafka.Topic.ReplicationFactor = 1
Debug.BroadcastTraceDir = ""
Debug.DeliverTraceDir = ""
Consensus = map[SnapDir:/var/hyperledger/production/orderer/etcdraft/snapshot WALDir:/var/hyperledger/production/orderer/etcdraft/wal]
Operations.ListenAddress = "127.0.0.1:8443"
Operations.TLS.Enabled = false
Operations.TLS.PrivateKey = ""
Operations.TLS.Certificate = ""
Operations.TLS.RootCAs = []
Operations.TLS.ClientAuthRequired = false
Operations.TLS.ClientRootCAs = []
Operations.TLS.TLSHandshakeTimeShift = 0s
Metrics.Provider = "disabled"
Metrics.Statsd.Network = "udp"
Metrics.Statsd.Address = "127.0.0.1:8125"
Metrics.Statsd.WriteInterval = 30s
Metrics.Statsd.Prefix = ""
ChannelParticipation.Enabled = false
ChannelParticipation.RemoveStorage = false
2020-12-30 12:38:46.295 UTC [orderer.common.server] initializeServerConfig -> INFO 004 Starting orderer with TLS enabled
2020-12-30 12:38:46.658 UTC [orderer.common.server] Main -> INFO 005 Not bootstrapping the system channel because of existing channels
2020-12-30 12:38:46.761 UTC [orderer.common.server] selectClusterBootBlock -> INFO 006 Cluster boot block is bootstrap (genesis) block; Blocks Header.Number system-channel=0, bootstrap=0
2020-12-30 12:38:46.766 UTC [orderer.common.server] Main -> INFO 007 Starting with system channel: system-channel, consensus type: etcdraft
2020-12-30 12:38:46.766 UTC [orderer.common.server] Main -> INFO 008 Setting up cluster
2020-12-30 12:38:46.766 UTC [orderer.common.server] reuseListener -> INFO 009 Cluster listener is not configured, defaulting to use the general listener on port 7050
2020-12-30 12:38:46.766 UTC [orderer.common.server] reuseListener -> INFO 00a Cluster listener is not configured, defaulting to use the general listener on port 7050
2020-12-30 12:38:46.772 UTC [orderer.common.cluster] loadVerifier -> INFO 00b Loaded verifier for channel system-channel from config block at index 0
2020-12-30 12:38:46.772 UTC [certmonitor] trackCertExpiration -> INFO 00c The enrollment certificate will expire on 2021-12-30 12:39:00 +0000 UTC
2020-12-30 12:38:46.772 UTC [certmonitor] trackCertExpiration -> INFO 00d The server TLS certificate will expire on 2021-12-30 12:39:00 +0000 UTC
2020-12-30 12:38:46.772 UTC [certmonitor] trackCertExpiration -> INFO 00e The client TLS certificate will expire on 2021-12-30 12:39:00 +0000 UTC
2020-12-30 12:38:46.782 UTC [orderer.consensus.etcdraft] detectSelfID -> WARN 00f Could not find -----BEGIN CERTIFICATE-----
MIICyjCCAnGgAwIBAgIUYfzAFhIUFlpkY0tXOAZG+dNHwL0wCgYIKoZIzj0EAwIw
XjELMAkGA1UEBhMCUEsxDzANBgNVBAgTBlB1bmphYjEPMA0GA1UEBxMGTGFob3Jl
MRQwEgYDVQQKEwtzc2gtaGhtLmNvbTEXMBUGA1UEAxMOY2Euc3NjLWhobS5jb20w
HhcNMjAxMjMwMTIzNDAwWhcNMjExMjMwMTIzOTAwWjBgMQswCQYDVQQGEwJVUzEX
MBUGA1UECBMOTm9ydGggQ2Fyb2xpbmExFDASBgNVBAoTC0h5cGVybGVkZ2VyMRAw
DgYDVQQLEwdvcmRlcmVyMRAwDgYDVQQDEwdvcmRlcmVyMFkwEwYHKoZIzj0CAQYI
KoZIzj0DAQcDQgAEGZ/xCBprojm/iSrRdNosCSvGR/3nLZ8mtqPlhkLCTZouptr0
RqYxFqMdCDQe0Sh8a0ZnwvB9cnSQiQpNQdcdBaOCAQkwggEFMA4GA1UdDwEB/wQE
AwIDqDAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwDAYDVR0TAQH/BAIw
ADAdBgNVHQ4EFgQUOfdnd2a46tUDkeGYyILJ3hVlXlgwHwYDVR0jBBgwFoAUeDHp
ud9ZRsgehfOLprCK/Wpw0H8wKQYDVR0RBCIwIIITb3JkZXJlci5zc2MtaGhtLmNv
bYIJbG9jYWxob3N0MFsGCCoDBAUGBwgBBE97ImF0dHJzIjp7ImhmLkFmZmlsaWF0
aW9uIjoiIiwiaGYuRW5yb2xsbWVudElEIjoib3JkZXJlciIsImhmLlR5cGUiOiJv
cmRlcmVyIn19MAoGCCqGSM49BAMCA0cAMEQCID1dC/QtexQQDHyUcWh7b9Adti4l
P6XVl9P1V0PtByhAAiBh6UoaAmved5zr9rDvKbVPpte6N+2ANrbft9wV7UrAbw==
-----END CERTIFICATE-----
among [-----BEGIN CERTIFICATE-----
MIICZDCCAgugAwIBAgIRAKjhbRj45MuremeCXWn8wzIwCgYIKoZIzj0EAwIwbDEL
MAkGA1UEBhMCVVMxEzARBgNVBAgTCkNhbGlmb3JuaWExFjAUBgNVBAcTDVNhbiBG
cmFuY2lzY28xFDASBgNVBAoTC3NzYy1oaG0uY29tMRowGAYDVQQDExF0bHNjYS5z
c2MtaGhtLmNvbTAeFw0yMDEyMjkxMDM2MDBaFw0zMDEyMjcxMDM2MDBaMFgxCzAJ
BgNVBAYTAlVTMRMwEQYDVQQIEwpDYWxpZm9ybmlhMRYwFAYDVQQHEw1TYW4gRnJh
bmNpc2NvMRwwGgYDVQQDExNvcmRlcmVyLnNzYy1oaG0uY29tMFkwEwYHKoZIzj0C
AQYIKoZIzj0DAQcDQgAEL/gYIN8w69hi/abMkdDmCBpJEhokhcPcx1mmICf8+9Aa
dSwfRpCbN4GQ71mOUwfh6U5PglXwMkJrXmn/TczaQKOBoTCBnjAOBgNVHQ8BAf8E
BAMCBaAwHQYDVR0lBBYwFAYIKwYBBQUHAwEGCCsGAQUFBwMCMAwGA1UdEwEB/wQC
MAAwKwYDVR0jBCQwIoAgVk5hX37poZ0h4MjMfqtYZte2PEoJs6DwR0lhPOXMBVsw
MgYDVR0RBCswKYITb3JkZXJlci5zc2MtaGhtLmNvbYIHb3JkZXJlcoIJbG9jYWxo
b3N0MAoGCCqGSM49BAMCA0cAMEQCIFGU5BMMpcmkIG1tmA3rwsm2h0Yn6bOT/GU9
oLxTg57iAiB9TaK9uBaYSF0GkpH+fvmhiQ0egSxRcwaXWi99C/V1rA==
-----END CERTIFICATE-----
]
2020-12-30 12:38:46.782 UTC [orderer.common.onboarding] TrackChain -> INFO 010 Adding system-channel to the set of chains to track
2020-12-30 12:38:46.782 UTC [orderer.commmon.multichannel] Initialize -> INFO 011 Starting system channel 'system-channel' with genesis block hash 8d267aa91012044d668ed2f377d485df675b181a5a02d10addf05ec9ef1c77fc and orderer type etcdraft
2020-12-30 12:38:46.783 UTC [orderer.common.server] Main -> INFO 012 Starting orderer:
Version: 2.2.1
Commit SHA: 344fda6
Go version: go1.14.4
OS/Arch: linux/amd64
2020-12-30 12:38:46.783 UTC [orderer.common.server] Main -> INFO 013 Beginning to serve requests
2020-12-30 12:38:56.775 UTC [orderer.common.onboarding] replicateDisabledChains -> INFO 014 Found 1 inactive chains: [system-channel]
2020-12-30 12:38:56.776 UTC [orderer.common.cluster] ReplicateChains -> INFO 015 Will now replicate chains [system-channel]
2020-12-30 12:38:56.786 UTC [orderer.common.cluster] discoverChannels -> INFO 016 Discovered 1 channels: [system-channel]
2020-12-30 12:38:56.786 UTC [orderer.common.cluster] channelsToPull -> INFO 017 Evaluating channels to pull: [system-channel]
2020-12-30 12:38:56.786 UTC [orderer.common.cluster] channelsToPull -> INFO 018 Probing whether I should pull channel system-channel
2020-12-30 12:38:56.790 UTC [core.comm] ServerHandshake -> ERRO 019 TLS handshake failed with error remote error: tls: bad certificate server=Orderer remoteaddress=172.21.0.8:59898
2020-12-30 12:38:57.793 UTC [core.comm] ServerHandshake -> ERRO 01a TLS handshake failed with error remote error: tls: bad certificate server=Orderer remoteaddress=172.21.0.8:59900
2020-12-30 12:38:59.432 UTC [core.comm] ServerHandshake -> ERRO 01b TLS handshake failed with error remote error: tls: bad certificate server=Orderer remoteaddress=172.21.0.8:59902
2020-12-30 12:39:01.881 UTC [core.comm] ServerHandshake -> ERRO 01c TLS handshake failed with error remote error: tls: bad certificate server=Orderer remoteaddress=172.21.0.8:59904
2020-12-30 12:39:03.787 UTC [orderer.common.cluster.replication] probeEndpoint -> WARN 01d Failed connecting to {"CAs":[{"Expired":false,"Issuer":"self","Subject":"CN=tlsca.ssc-hhm.com,O=ssc-hhm.com,L=San Francisco,ST=California,C=US"}],"Endpoint":"orderer.ssc-hhm.com:7050"}: failed to create new connection: context deadline exceeded channel=system-channel
2020-12-30 12:39:03.788 UTC [orderer.common.cluster.replication] func1 -> WARN 01e Received error of type 'failed to create new connection: context deadline exceeded' from {"CAs":[{"Expired":false,"Issuer":"self","Subject":"CN=tlsca.ssc-hhm.com,O=ssc-hhm.com,L=San Francisco,ST=California,C=US"}],"Endpoint":"orderer.ssc-hhm.com:7050"} channel=system-channel
2020-12-30 12:39:03.788 UTC [orderer.common.cluster.replication] HeightsByEndpoints -> INFO 01f Returning the heights of OSNs mapped by endpoints map[] channel=system-channel
2020-12-30 12:39:03.788 UTC [orderer.common.cluster] channelsToPull -> WARN 020 Could not obtain blocks needed for classifying whether I am in the channel,skipping the retrieval of the chan system-channel
2020-12-30 12:39:03.788 UTC [orderer.common.cluster] ReplicateChains -> INFO 021 Found myself in 0 channels out of 1 : {[] [{system-channel 0xc0006e7580}]}
2020-12-30 12:39:03.788 UTC [orderer.common.cluster] appendBlock -> INFO 022 Skipping commit of block [0] for channel system-channel because height is at 1
2020-12-30 12:39:03.788 UTC [orderer.common.cluster] PullChannel -> INFO 023 Pulling channel system-channel
2020-12-30 12:39:03.793 UTC [core.comm] ServerHandshake -> ERRO 024 TLS handshake failed with error remote error: tls: bad certificate server=Orderer remoteaddress=172.21.0.8:59906
2020-12-30 12:39:04.797 UTC [core.comm] ServerHandshake -> ERRO 025 TLS handshake failed with error remote error: tls: bad certificate server=Orderer remoteaddress=172.21.0.8:59908
2020-12-30 12:39:06.430 UTC [core.comm] ServerHandshake -> ERRO 026 TLS handshake failed with error remote error: tls: bad certificate server=Orderer remoteaddress=172.21.0.8:59910
2020-12-30 12:39:08.952 UTC [core.comm] ServerHandshake -> ERRO 027 TLS handshake failed with error remote error: tls: bad certificate server=Orderer remoteaddress=172.21.0.8:59912
2020-12-30 12:39:10.790 UTC [orderer.common.cluster.replication] probeEndpoint -> WARN 028 Failed connecting to {"CAs":[{"Expired":false,"Issuer":"self","Subject":"CN=tlsca.ssc-hhm.com,O=ssc-hhm.com,L=San Francisco,ST=California,C=US"}],"Endpoint":"orderer.ssc-hhm.com:7050"}: failed to create new connection: context deadline exceeded channel=system-channel
2020-12-30 12:39:10.790 UTC [orderer.common.cluster.replication] func1 -> WARN 029 Received error of type 'failed to create new connection: context deadline exceeded' from {"CAs":[{"Expired":false,"Issuer":"self","Subject":"CN=tlsca.ssc-hhm.com,O=ssc-hhm.com,L=San Francisco,ST=California,C=US"}],"Endpoint":"orderer.ssc-hhm.com:7050"} channel=system-channel
2020-12-30 12:39:10.790 UTC [orderer.common.cluster.replication] HeightsByEndpoints -> INFO 02a Returning the heights of OSNs mapped by endpoints map[] channel=system-channel
2020-12-30 12:39:10.790 UTC [orderer.common.cluster] ReplicateChains -> PANI 02b Failed pulling system channel: failed obtaining the latest block for channel system-channel
panic: Failed pulling system channel: failed obtaining the latest block for channel system-channel
goroutine 28 [running]:
go.uber.org/zap/zapcore.(*CheckedEntry).Write(0xc000151340, 0x0, 0x0, 0x0)
/go/src/github.com/hyperledger/fabric/vendor/go.uber.org/zap/zapcore/entry.go:230 +0x545
go.uber.org/zap.(*SugaredLogger).log(0xc000209240, 0xc00044b304, 0x101fc6d, 0x21, 0xc0000f9c40, 0x1, 0x1, 0x0, 0x0, 0x0)
/go/src/github.com/hyperledger/fabric/vendor/go.uber.org/zap/sugar.go:234 +0x100
go.uber.org/zap.(*SugaredLogger).Panicf(...)
/go/src/github.com/hyperledger/fabric/vendor/go.uber.org/zap/sugar.go:159
github.com/hyperledger/fabric/common/flogging.(*FabricLogger).Panicf(...)
/go/src/github.com/hyperledger/fabric/common/flogging/zap.go:74
github.com/hyperledger/fabric/orderer/common/cluster.(*Replicator).ReplicateChains(0xc0003dfe00, 0xc00054e340, 0xc0002c0200, 0xc0003dfe00)
/go/src/github.com/hyperledger/fabric/orderer/common/cluster/replication.go:166 +0x49d
github.com/hyperledger/fabric/orderer/common/onboarding.(*ReplicationInitiator).ReplicateChains(0xc00027e400, 0xc00054e340, 0xc0002c0000, 0x1, 0x1, 0x0, 0x0, 0x0)
/go/src/github.com/hyperledger/fabric/orderer/common/onboarding/onboarding.go:185 +0x1e3
github.com/hyperledger/fabric/orderer/common/onboarding.(*InactiveChainReplicator).replicateDisabledChains(0xc000207920)
/go/src/github.com/hyperledger/fabric/orderer/common/onboarding/onboarding.go:312 +0x225
github.com/hyperledger/fabric/orderer/common/onboarding.(*InactiveChainReplicator).Run(0xc000207920)
/go/src/github.com/hyperledger/fabric/orderer/common/onboarding/onboarding.go:290 +0x42
created by github.com/hyperledger/fabric/orderer/common/server.initializeEtcdraftConsenter
/go/src/github.com/hyperledger/fabric/orderer/common/server/main.go:777 +0x218
I am very much new to Fabric. I am trying to make a network for SupplyChain with 3 orgs : Manufacturer, Supplier and Retailer. If anyone could guide me here, their help would be greatly appreciated.
I am using Hyperledger Fabric 2.0 on Ubuntu 18.04.
With the help of @myeongkil kim I was able to solve this issue. The solution was simply to remove all the Docker volumes with docker volume prune; after that, bringing the network back up solved the problem.
The problem comes from a bad TLS certificate used by the new orderer. Check the logs of the other orderers and you will get some hints about this.
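For reference, the recovery sequence described above looks roughly like this (a sketch assuming the network is driven by a network.sh script as in fabric-samples; adapt the commands to your own scripts):

./network.sh down               # stop and remove the network's containers
docker volume prune -f          # drop stale orderer/peer volumes (old ledgers and certs)
./network.sh up createChannel   # bring the network back up from scratch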

Detox test not able to run

I've been trying to run a Detox test for our mobile app this whole day, but haven't been able to.
Relevant npm versions I use:
"detox": "^17.14.6",
"jest": "^26.6.3",
"jest-circus": "^26.6.3",
"jest-cli": "^26.6.3",
.detoxrc.json
{
  "testRunner": "jest",
  "runnerConfig": "e2e/config.json",
  "configurations": {
    "ios.sim.debug": {
      "binaryPath": "ios/build/Build/Products/Debug-iphonesimulator/Test.app",
      "build": "xcodebuild -workspace ios/Test.xcworkspace -scheme Test -configuration Debug -sdk iphonesimulator -derivedDataPath ios/build",
      "type": "ios.simulator",
      "device": {
        "type": "iPhone 11 Pro"
      }
    }
  }
}
test.e2e.js
describe('Overview', () => {
  beforeEach(async () => {
    await device.reloadReactNative();
  });

  it('should open the overviewTab', async () => {
    await expect(element(by.id('overviewTab'))).toBeVisible();
  });
});
It errors in a timeout:
Overview
✕ should open the overviewTab (120006 ms)
● Overview › should open the overviewTab
thrown: "Exceeded timeout of 120000 ms for a hook.
Use jest.setTimeout(newTimeout) to increase the timeout value, if this is a long-running test."
1 | describe('Overview', () => {
> 2 | beforeEach(async () => {
| ^
3 | await device.reloadReactNative();
4 | });
5 |
at test.e2e.js:2:3
at Object.<anonymous> (test.e2e.js:1:1)
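As the hook failure message itself suggests, that timeout comes from Jest; if a longer hook timeout were genuinely needed, it could be raised via jest.setTimeout in the Detox init file or "testTimeout" in e2e/config.json (a sketch; the value and file location are only examples):

// e2e/init.js (or wherever the Detox/Jest environment is set up) — illustrative value
jest.setTimeout(300000); // 5 minutes; applies to tests and hooks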
When I run detox test --loglevel trace this is the whole log:
kevin@ip-192-168-0-203 mobile-app % detox test --loglevel trace
detox[9009] INFO: [test.js] loglevel="trace" useCustomLogger=true DETOX_START_TIMESTAMP=1608416281857 reportSpecs=true jest --config e2e/config.json --testNamePattern '^((?!:android:).)*$' --maxWorkers 1 e2e
detox[9010] TRACE: [Detox.js/DETOX_CREATE] created a Detox instance with config:
{"artifactsConfig":{"rootDir":"artifacts/ios.sim.debug.2020-12-19 22-18-01Z","plugins":{"log":{"enabled":false,"keepOnlyFailedTestsArtifacts":false},"screenshot":{"enabled":true,"shouldTakeAutomaticSnapshots":false,"keepOnlyFailedTestsArtifacts":false},"video":{"enabled":false,"keepOnlyFailedTestsArtifacts":false},"instruments":{"enabled":false,"keepOnlyFailedTestsArtifacts":false},"timeline":{"enabled":false}},"pathBuilder":{"_rootDir":"artifacts/ios.sim.debug.2020-12-19 22-18-01Z"}},"behaviorConfig":{"init":{"reinstallApp":true,"exposeGlobals":true,"launchApp":true},"cleanup":{"shutdownDevice":false}},"cliConfig":{"loglevel":"trace","useCustomLogger":"true"},"deviceConfig":{"binaryPath":"ios/build/Build/Products/Debug-iphonesimulator/Test.app","build":"xcodebuild -workspace ios/Test.xcworkspace -scheme Test -configuration Debug -sdk iphonesimulator -derivedDataPath ios/build","type":"ios.simulator","device":{"type":"iPhone 11 Pro"}},"runnerConfig":{"testRunner":"jest","runnerConfig":"e2e/config.json","specs":"e2e"},"sessionConfig":{"autoStart":true,"server":"ws://localhost:62012","sessionId":"e836caba-d171-3480-4931-9ee02f21322b","debugSynchronization":false},"errorBuilder":{"filepath":"/Users/kevin/Projects/test/mobile-app/.detoxrc.json","contents":{"testRunner":"jest","runnerConfig":"e2e/config.json","configurations":{"ios.sim.debug":{"binaryPath":"ios/build/Build/Products/Debug-iphonesimulator/Test.app","build":"xcodebuild -workspace ios/Test.xcworkspace -scheme Test -configuration Debug -sdk iphonesimulator -derivedDataPath ios/build","type":"ios.simulator","device":{"type":"iPhone 11 Pro"}}}},"configurationName":"ios.sim.debug"}}
detox[9010] INFO: [DetoxServer.js] server listening on localhost:62012...
detox[9010] DEBUG: [AsyncWebSocket.js/WEBSOCKET_OPEN] opened web socket to: ws://localhost:62012
detox[9010] TRACE: [AsyncWebSocket.js/WEBSOCKET_SEND] {"type":"login","params":{"sessionId":"e836caba-d171-3480-4931-9ee02f21322b","role":"tester"},"messageId":0}
detox[9010] DEBUG: [DetoxServer.js/LOGIN] role=tester, sessionId=e836caba-d171-3480-4931-9ee02f21322b
detox[9010] DEBUG: [DetoxServer.js/LOGIN_SUCCESS] role=tester, sessionId=e836caba-d171-3480-4931-9ee02f21322b
detox[9010] TRACE: [AsyncWebSocket.js/WEBSOCKET_MESSAGE] {"type":"loginSuccess","params":{"sessionId":"e836caba-d171-3480-4931-9ee02f21322b","role":"tester"},"messageId":0}
detox[9010] DEBUG: [exec.js/EXEC_CMD, #0] applesimutils --list --byType "iPhone 11 Pro"
detox[9010] TRACE: [exec.js/EXEC_SUCCESS, #0] [
{
"os" : {
"bundlePath" : "\/Applications\/Xcode.app\/Contents\/Developer\/Platforms\/iPhoneOS.platform\/Library\/Developer\/CoreSimulator\/Profiles\/Runtimes\/iOS.simruntime",
"buildversion" : "18C61",
"runtimeRoot" : "\/Applications\/Xcode.app\/Contents\/Developer\/Platforms\/iPhoneOS.platform\/Library\/Developer\/CoreSimulator\/Profiles\/Runtimes\/iOS.simruntime\/Contents\/Resources\/RuntimeRoot",
"identifier" : "com.apple.CoreSimulator.SimRuntime.iOS-14-3",
"version" : "14.3",
"isAvailable" : true,
"name" : "iOS 14.3"
},
"dataPath" : "\/Users\/kevin\/Library\/Developer\/CoreSimulator\/Devices\/ED9068B9-5E52-404E-B689-ABA33E32E0A1\/data",
"logPath" : "\/Users\/kevin\/Library\/Logs\/CoreSimulator\/ED9068B9-5E52-404E-B689-ABA33E32E0A1",
"udid" : "ED9068B9-5E52-404E-B689-ABA33E32E0A1",
"isAvailable" : true,
"deviceType" : {
"minRuntimeVersion" : 851968,
"bundlePath" : "\/Applications\/Xcode.app\/Contents\/Developer\/Platforms\/iPhoneOS.platform\/Library\/Developer\/CoreSimulator\/Profiles\/DeviceTypes\/iPhone 11 Pro.simdevicetype",
"maxRuntimeVersion" : 4294967295,
"name" : "iPhone 11 Pro",
"identifier" : "com.apple.CoreSimulator.SimDeviceType.iPhone-11-Pro",
"productFamily" : "iPhone"
},
"deviceTypeIdentifier" : "com.apple.CoreSimulator.SimDeviceType.iPhone-11-Pro",
"state" : "Booted",
"name" : "iPhone 11 Pro"
}
]
detox[9010] DEBUG: [exec.js/EXEC_CMD, #1] applesimutils --list --byId ED9068B9-5E52-404E-B689-ABA33E32E0A1 --maxResults 1
detox[9010] TRACE: [exec.js/EXEC_SUCCESS, #1] [
{
"os" : {
"bundlePath" : "\/Applications\/Xcode.app\/Contents\/Developer\/Platforms\/iPhoneOS.platform\/Library\/Developer\/CoreSimulator\/Profiles\/Runtimes\/iOS.simruntime",
"buildversion" : "18C61",
"runtimeRoot" : "\/Applications\/Xcode.app\/Contents\/Developer\/Platforms\/iPhoneOS.platform\/Library\/Developer\/CoreSimulator\/Profiles\/Runtimes\/iOS.simruntime\/Contents\/Resources\/RuntimeRoot",
"identifier" : "com.apple.CoreSimulator.SimRuntime.iOS-14-3",
"version" : "14.3",
"isAvailable" : true,
"name" : "iOS 14.3"
},
"dataPath" : "\/Users\/kevin\/Library\/Developer\/CoreSimulator\/Devices\/ED9068B9-5E52-404E-B689-ABA33E32E0A1\/data",
"logPath" : "\/Users\/kevin\/Library\/Logs\/CoreSimulator\/ED9068B9-5E52-404E-B689-ABA33E32E0A1",
"udid" : "ED9068B9-5E52-404E-B689-ABA33E32E0A1",
"isAvailable" : true,
"deviceType" : {
"minRuntimeVersion" : 851968,
"bundlePath" : "\/Applications\/Xcode.app\/Contents\/Developer\/Platforms\/iPhoneOS.platform\/Library\/Developer\/CoreSimulator\/Profiles\/DeviceTypes\/iPhone 11 Pro.simdevicetype",
"maxRuntimeVersion" : 4294967295,
"name" : "iPhone 11 Pro",
"identifier" : "com.apple.CoreSimulator.SimDeviceType.iPhone-11-Pro",
"productFamily" : "iPhone"
},
"deviceTypeIdentifier" : "com.apple.CoreSimulator.SimDeviceType.iPhone-11-Pro",
"state" : "Booted",
"name" : "iPhone 11 Pro"
}
]
detox[9010] TRACE: [ArtifactsManager.js/LIFECYCLE] artifactsManager.onBootDevice({
coldBoot: false,
deviceId: 'ED9068B9-5E52-404E-B689-ABA33E32E0A1',
type: 'iPhone 11 Pro'
})
detox[9010] TRACE: [ArtifactsManager.js/LIFECYCLE] artifactsManager.onBeforeUninstallApp({
deviceId: 'ED9068B9-5E52-404E-B689-ABA33E32E0A1',
bundleId: 'io.test.ios-dev'
})
detox[9010] DEBUG: [exec.js/EXEC_CMD, #2] /usr/bin/xcrun simctl uninstall ED9068B9-5E52-404E-B689-ABA33E32E0A1 io.test.ios-dev
detox[9010] DEBUG: [exec.js/EXEC_TRY, #2] Uninstalling io.test.ios-dev...
detox[9010] TRACE: [exec.js/EXEC_SUCCESS, #2]
detox[9010] DEBUG: [exec.js/EXEC_SUCCESS, #2] io.test.ios-dev uninstalled
detox[9010] DEBUG: [exec.js/EXEC_CMD, #3] /usr/bin/xcrun simctl install ED9068B9-5E52-404E-B689-ABA33E32E0A1 "/Users/kevin/Projects/test/mobile-app/ios/build/Build/Products/Debug-iphonesimulator/Test.app"
detox[9010] DEBUG: [exec.js/EXEC_TRY, #3] Installing /Users/kevin/Projects/test/mobile-app/ios/build/Build/Products/Debug-iphonesimulator/Test.app...
detox[9010] TRACE: [exec.js/EXEC_SUCCESS, #3]
detox[9010] DEBUG: [exec.js/EXEC_SUCCESS, #3] /Users/kevin/Projects/test/mobile-app/ios/build/Build/Products/Debug-iphonesimulator/Test.app installed
detox[9010] TRACE: [ArtifactsManager.js/LIFECYCLE] artifactsManager.onBeforeTerminateApp({
deviceId: 'ED9068B9-5E52-404E-B689-ABA33E32E0A1',
bundleId: 'io.test.ios-dev'
})
detox[9010] DEBUG: [exec.js/EXEC_CMD, #4] /usr/bin/xcrun simctl terminate ED9068B9-5E52-404E-B689-ABA33E32E0A1 io.test.ios-dev
detox[9010] DEBUG: [exec.js/EXEC_TRY, #4] Terminating io.test.ios-dev...
detox[9010] TRACE: [exec.js/EXEC_SUCCESS, #4]
detox[9010] DEBUG: [exec.js/EXEC_SUCCESS, #4] io.test.ios-dev terminated
detox[9010] TRACE: [ArtifactsManager.js/LIFECYCLE] artifactsManager.onTerminateApp({
deviceId: 'ED9068B9-5E52-404E-B689-ABA33E32E0A1',
bundleId: 'io.test.ios-dev'
})
detox[9010] TRACE: [ArtifactsManager.js/LIFECYCLE] artifactsManager.onBeforeLaunchApp({
bundleId: 'io.test.ios-dev',
deviceId: 'ED9068B9-5E52-404E-B689-ABA33E32E0A1',
launchArgs: {
detoxServer: 'ws://localhost:62012',
detoxSessionId: 'e836caba-d171-3480-4931-9ee02f21322b'
}
})
detox[9010] DEBUG: [exec.js/EXEC_CMD, #5] SIMCTL_CHILD_DYLD_INSERT_LIBRARIES="/Users/kevin/Library/Detox/ios/1027ad4db0a05cf5e2f8569ce0a4fb6f1ac16bcb/Detox.framework/Detox" /usr/bin/xcrun simctl launch ED9068B9-5E52-404E-B689-ABA33E32E0A1 io.test.ios-dev --args -detoxServer "ws://localhost:62012" -detoxSessionId "e836caba-d171-3480-4931-9ee02f21322b" -detoxDisableHierarchyDump "YES"
detox[9010] DEBUG: [exec.js/EXEC_TRY, #5] Launching io.test.ios-dev...
detox[9010] TRACE: [exec.js/EXEC_SUCCESS, #5] io.test.ios-dev: 9047
detox[9010] DEBUG: [exec.js/EXEC_CMD, #6] /usr/bin/xcrun simctl get_app_container ED9068B9-5E52-404E-B689-ABA33E32E0A1 io.test.ios-dev
detox[9010] TRACE: [exec.js/EXEC_SUCCESS, #6] /Users/kevin/Library/Developer/CoreSimulator/Devices/ED9068B9-5E52-404E-B689-ABA33E32E0A1/data/Containers/Bundle/Application/CA9DB8EE-74C0-4BE1-9F2C-0FF3CC4D9AB0/Test.app
detox[9010] INFO: [AppleSimUtils.js] io.test.ios-dev launched. To watch simulator logs, run:
/usr/bin/xcrun simctl spawn ED9068B9-5E52-404E-B689-ABA33E32E0A1 log stream --level debug --style compact --predicate 'process == "Test"'
detox[9047] TRACE: [ArtifactsManager.js/LIFECYCLE] artifactsManager.onLaunchApp({
bundleId: 'io.test.ios-dev',
deviceId: 'ED9068B9-5E52-404E-B689-ABA33E32E0A1',
launchArgs: {
detoxServer: 'ws://localhost:62012',
detoxSessionId: 'e836caba-d171-3480-4931-9ee02f21322b',
detoxDisableHierarchyDump: 'YES'
},
pid: 9047
})
detox[9010] TRACE: [AsyncWebSocket.js/WEBSOCKET_SEND] {"type":"isReady","params":{},"messageId":-1000}
detox[9010] TRACE: [DetoxServer.js/MESSAGE] role=tester action=isReady (sessionId=e836caba-d171-3480-4931-9ee02f21322b)
detox[9010] DEBUG: [DetoxServer.js/CANNOT_FORWARD] role=testee not connected, cannot fw action (sessionId=e836caba-d171-3480-4931-9ee02f21322b)
detox[9010] DEBUG: [DetoxServer.js/LOGIN] role=testee, sessionId=e836caba-d171-3480-4931-9ee02f21322b
detox[9010] DEBUG: [DetoxServer.js/LOGIN_SUCCESS] role=testee, sessionId=e836caba-d171-3480-4931-9ee02f21322b
detox[9010] TRACE: [DetoxServer.js/MESSAGE] role=testee action=ready (sessionId=e836caba-d171-3480-4931-9ee02f21322b)
detox[9010] TRACE: [AsyncWebSocket.js/WEBSOCKET_MESSAGE] {"type":"ready","params":{},"messageId":-1000}
detox[9010] TRACE: [AsyncWebSocket.js/WEBSOCKET_SEND] {"type":"waitForActive","params":{},"messageId":1}
detox[9010] TRACE: [DetoxServer.js/MESSAGE] role=tester action=waitForActive (sessionId=e836caba-d171-3480-4931-9ee02f21322b)
detox[9010] TRACE: [DetoxServer.js/MESSAGE] role=testee action=waitForActiveDone (sessionId=e836caba-d171-3480-4931-9ee02f21322b)
detox[9010] TRACE: [DetoxServer.js/MESSAGE] role=testee action=ready (sessionId=e836caba-d171-3480-4931-9ee02f21322b)
detox[9010] TRACE: [AsyncWebSocket.js/WEBSOCKET_MESSAGE] {"type":"waitForActiveDone","params":{},"messageId":1}
detox[9010] TRACE: [AsyncWebSocket.js/WEBSOCKET_MESSAGE] {"type":"ready","messageId":-1000,"params":{}}
detox[9047] TRACE: [ArtifactsManager.js/LIFECYCLE] artifactsManager.onAppReady({
deviceId: 'ED9068B9-5E52-404E-B689-ABA33E32E0A1',
bundleId: 'io.test.ios-dev',
pid: 9047
})
detox[9010] INFO: at e2e/test.e2e.js:21:11
bafke
ROOT_DESCRIBE_BLOCK[9010] TRACE: [ArtifactsManager.js/LIFECYCLE] artifactsManager.onRunDescribeStart({ name: 'ROOT_DESCRIBE_BLOCK' })
Overview[9010] TRACE: [ArtifactsManager.js/LIFECYCLE] artifactsManager.onRunDescribeStart({ name: 'Overview' })
detox[9010] INFO: Overview is assigned to ED9068B9-5E52-404E-B689-ABA33E32E0A1 {"type":"iPhone 11 Pro"}
detox[9010] INFO: Overview: should open the overviewTab
detox[9010] TRACE: [Detox.js/DETOX_BEFORE_EACH] running test: "Overview should open the overviewTab"
detox[9010] TRACE: [ArtifactsManager.js/LIFECYCLE] artifactsManager.onTestStart({
title: 'should open the overviewTab',
fullName: 'Overview should open the overviewTab',
status: 'running',
invocations: 1
})
detox[9010] TRACE: [AsyncWebSocket.js/WEBSOCKET_SEND] {"type":"reactNativeReload","params":{},"messageId":-1000}
detox[9010] TRACE: [DetoxServer.js/MESSAGE] role=tester action=reactNativeReload (sessionId=e836caba-d171-3480-4931-9ee02f21322b)
detox[9010] TRACE: [ArtifactsManager.js/LIFECYCLE] artifactsManager.onHookFailure({
error: 'Exceeded timeout of 120000 ms for a hook.\n' +
'Use jest.setTimeout(newTimeout) to increase the timeout value, if this is a long-running test.',
hook: 'beforeEach'
})
detox[9010] TRACE: [Detox.js/DETOX_AFTER_EACH] failed test: "Overview should open the overviewTab"
detox[9010] TRACE: [ArtifactsManager.js/LIFECYCLE] artifactsManager.onTestDone({
title: 'should open the overviewTab',
fullName: 'Overview should open the overviewTab',
status: 'failed',
invocations: 1,
timedOut: true
})
detox[9010] WARN: [Client.js/PENDING_REQUESTS] App has not responded to the network requests below:
(id = -1000) reactNativeReload: {}
That might be the reason why the test "Overview should open the overviewTab" has timed out.
detox[9010] INFO: Overview: should open the overviewTab [FAIL]
Overview[9010] TRACE: [ArtifactsManager.js/LIFECYCLE] artifactsManager.onRunDescribeFinish({ name: 'Overview' })
ROOT_DESCRIBE_BLOCK[9010] TRACE: [ArtifactsManager.js/LIFECYCLE] artifactsManager.onRunDescribeFinish({ name: 'ROOT_DESCRIBE_BLOCK' })
detox[9010] TRACE: [ArtifactsManager.js/LIFECYCLE] artifactsManager.onBeforeCleanup()
detox[9010] TRACE: [AsyncWebSocket.js/WEBSOCKET_SEND] {"type":"cleanup","params":{"stopRunner":true},"messageId":-49642}
detox[9010] TRACE: [DetoxServer.js/MESSAGE] role=tester action=cleanup (sessionId=e836caba-d171-3480-4931-9ee02f21322b)
detox[9010] TRACE: [DetoxServer.js/MESSAGE] role=testee action=cleanupDone (sessionId=e836caba-d171-3480-4931-9ee02f21322b)
detox[9010] TRACE: [AsyncWebSocket.js/WEBSOCKET_MESSAGE] {"params":{},"type":"cleanupDone","messageId":-49642}
detox[9010] DEBUG: [DetoxServer.js/DISCONNECT] role=tester, sessionId=e836caba-d171-3480-4931-9ee02f21322b
detox[9010] DEBUG: [DetoxServer.js/DISCONNECT] role=testee, sessionId=e836caba-d171-3480-4931-9ee02f21322b
detox[9010] DEBUG: [DetoxServer.js/CANNOT_FORWARD] role=tester not connected, cannot fw action (sessionId=e836caba-d171-3480-4931-9ee02f21322b)
detox[9010] DEBUG: [DetoxServer.js/WS_CLOSE] Detox server connections terminated gracefully
FAIL e2e/test.e2e.js (133.272 s)
Overview
✕ should open the overviewTab (120006 ms)
● Overview › should open the overviewTab
thrown: "Exceeded timeout of 120000 ms for a hook.
Use jest.setTimeout(newTimeout) to increase the timeout value, if this is a long-running test."
1 | describe('Overview', () => {
> 2 | beforeEach(async () => {
| ^
3 | await device.reloadReactNative();
4 | });
5 |
at test.e2e.js:2:3
at Object.<anonymous> (test.e2e.js:1:1)
detox[9009] ERROR: [cli.js] Error: Command failed: jest --config e2e/config.json --testNamePattern '^((?!:android:).)*$' --maxWorkers 1 e2e
Edit
I managed to get some more info by using detox test --debug-synchronization 200 from the Synchronization troubleshooting guide, and that resulted in constant logs of:
detox[12636] INFO: [actions.js] Sync WXRunLoopIdlingResource: React Native thread is busy.
detox[12636] INFO: [actions.js] Sync Dispatch Queue: com.apple.main-thread
and one log about a call to firebase logging that wasn't getting fulfilled.
When I also use --loglevel trace I get this log constantly:
detox[12636] INFO: [actions.js] Sync WXRunLoopIdlingResource: React Native thread is busy.
detox[12636] INFO: [actions.js] Sync Dispatch Queue: com.apple.main-thread
detox[12636] TRACE: [AsyncWebSocket.js/WEBSOCKET_SEND] {"type":"currentStatus","params":{},"messageId":15}
detox[12636] TRACE: [DetoxServer.js/MESSAGE] role=tester action=currentStatus (sessionId=d7428263-4169-e8b4-756d-cc5b3c3d0f73)
detox[12636] TRACE: [DetoxServer.js/MESSAGE] role=testee action=currentStatusResult (sessionId=d7428263-4169-e8b4-756d-cc5b3c3d0f73)
detox[12636] TRACE: [AsyncWebSocket.js/WEBSOCKET_MESSAGE] {"type":"currentStatusResult","params":{"state":"busy","resources":[{"name":"WXRunLoopIdlingResource","info":{"runLoop":"<CFRunLoop 0x600000c7c700 [0x7fff8002e8c0]>{wakeup port = 0x5a17, stopped = false, ignoreWakeUps = true, \ncurrent mode = (none),\ncommon modes = <CFBasicHash 0x600003e18f30 [0x7fff8002e8c0]>{type = mutable set, count = 1,\nentries =>\n\t2 : <CFString 0x7fff801ab7e8 [0x7fff8002e8c0]>{contents = \"kCFRunLoopDefaultMode\"}\n}\n,\ncommon mode items = <CFBasicHash 0x600003e1bcf0 [0x7fff8002e8c0]>{type = mutable set, count = 4,\nentries =>\n\t0 : <CFRunLoopSource 0x60000057ae80 [0x7fff8002e8c0]>{signalled = No, valid = Yes, order = 0, context = <CFRunLoopSource context>{version = 0, info = 0x60000322ab60, callout = __NSThreadPerformPerform (0x7fff208581ee)}}\n\t2 : <CFRunLoopSource 0x600000519ec0 [0x7fff8002e8c0]>{signalled = No, valid = Yes, order = 0, context = <CFRunLoopSource context>{version = 0, info = 0x60000320e470, callout = __NSThreadPerformPerform (0x7fff208581ee)}}\n\t3 : <CFRunLoopSource 0x600000564a80 [0x7fff8002e8c0]>{signalled = Yes, valid = Yes, order = 0, context = <CFRunLoopSource context>{version = 0, info = 0x114fc0000, callout = _ZN3WTF7RunLoop11performWorkEPv (0x7fff32220270)}}\n\t4 : <CFRunLoopTimer 0x6000005600c0 [0x7fff8002e8c0]>{valid = Yes, firing = No, interval = 0, tolerance = 0, next fire date = 630235923 (486.399006 # 357970975424457), callout = _ZZN3WTF7RunLoop9TimerBase5startENS_7SecondsEbEN3$_18__invokeEP16__CFRunLoopTimerPv (0x7fff32220bb0 / 0x7fff32220bb0) (/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Library/Developer/CoreSimulator/Profiles/Runtimes/iOS.simruntime/Contents/Resources/RuntimeRoot/System/Library/Frameworks/JavaScriptCore.framework/JavaScriptCore), context = <CFRunLoopTimer context 0x114ff65a0>}\n}\n,\nmodes = <CFBasicHash 0x600003e1bdb0 [0x7fff8002e8c0]>{type = mutable set, count = 2,\nentries =>\n\t0 : <CFRunLoopMode 0x600000b044e0 [0x7fff8002e8c0]>{name = kCFRunLoopCommonModes, port set = 0xc307, queue = 0x600001e13a80, source = 0x600001e12c00 (not fired), timer port = 0xc503, \n\tsources0 = (null),\n\tsources1 = (null),\n\tobservers = (null),\n\ttimers = (null),\n\tcurrently 630235437 (357484576475625) / soft deadline in: 1.84463866e+10 sec (# -1) / hard deadline in: 1.84463866e+10 sec (# -1)\n},\n\n\t2 : <CFRunLoopMode 0x600000b6cdd0 [0x7fff8002e8c0]>{name = kCFRunLoopDefaultMode, port set = 0x13a0f, queue = 0x600001e3e800, source = 0x600001e3db00 (not fired), timer port = 0x13c27, \n\tsources0 = <CFBasicHash 0x600003e1a7c0 [0x7fff8002e8c0]>{type = mutable set, count = 0,\nentries =>\n}\n,\n\tsources1 = <CFBasicHash 0x600003e1bea0 [0x7fff8002e8c0]>{type = mutable set, count = 0,\nentries =>\n}\n,\n\tobservers = (\n \"<CFRunLoopObserver 0x600000112b20 [0x7fff8002e8c0]>{valid = Yes, activities = 0xe7, repeats = Yes, order = 0, callout = _runLoopObserverWithBlockContext (0x7fff2038c504), context = <CFRunLoopObserver context 0x600003ec0090>}\"\n),\n\ttimers = <CFArray 0x600001455f20 [0x7fff8002e8c0]>{type = mutable-small, count = 1, values = (\n\t0 : <CFRunLoopTimer 0x6000005600c0 [0x7fff8002e8c0]>{valid = Yes, firing = No, interval = 0, tolerance = 0, next fire date = 630235923 (486.398864 # 357970975424457), callout = _ZZN3WTF7RunLoop9TimerBase5startENS_7SecondsEbEN3$_18__invokeEP16__CFRunLoopTimerPv (0x7fff32220bb0 / 0x7fff32220bb0) 
(/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Library/Developer/CoreSimulator/Profiles/Runtimes/iOS.simruntime/Contents/Resources/RuntimeRoot/System/Library/Frameworks/JavaScriptCore.framework/JavaScriptCore), context = <CFRunLoopTimer context 0x114ff65a0>}\n)},\n\tcurrently 630235437 (357484576488240) / soft deadline in: 486.398936 sec (# 357970975424457) / hard deadline in: 486.398936 sec (# 357970975424457)\n},\n\n}\n}\n","prettyPrint":"React Native thread is busy."}},{"name":"Dispatch Queue","info":{"queue":"<OS_dispatch_queue_main: com.apple.main-thread[0x7fff86d30c80] = { xref = -2147483648, ref = -2147483648, sref = 1, target = com.apple.root.default-qos.overcommit[0x7fff86d31100], width = 0x1, state = 0x001ffe9000000300, dirty, in-flight = 0, thread = 0x303 }>","prettyPrint":"com.apple.main-thread"}}],"messageId":15},"messageId":15}
So I'm going to continue to figure out where this might be coming from. I tried adding logs from the very beginning of the source code, but nothing seems to come through.
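Since the synchronization trace above shows the React Native thread permanently busy (so Detox never sees the app as idle), one common workaround is to bypass the automatic synchronization around the problematic step and wait explicitly instead. This is a sketch, not necessarily the fix for this particular app; device.disableSynchronization/enableSynchronization and waitFor are standard Detox APIs, the overviewTab testID is reused from the test above, and the 20-second timeout is only illustrative:

beforeEach(async () => {
  // Stop Detox from waiting for the app to become idle,
  // since the RN thread never reports idle here.
  await device.disableSynchronization();
  await device.reloadReactNative();
  // Wait explicitly for a known element instead of relying on idle detection.
  await waitFor(element(by.id('overviewTab'))).toBeVisible().withTimeout(20000);
  await device.enableSynchronization();
});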

Issue when trying to run Webdriver IO (end to end) tests in a Gitlab CI pipeline runner using a docker image

This is my first time posting on this platform, so if my request lacks information let me know and I can provide it.
I'm trying to run my end-to-end tests using WebdriverIO. I've been running my tests locally at first, but now I need to execute them in a GitLab CI pipeline using a Docker image. Here is some information about the setup:
Docker image:
it's an Alpine Docker image
node version: 12.18.3
npm version: 6.14.6
Java version: openjdk "11.0.8"
Chromium: 85.0.4183.83
Mozilla Firefox: 80.0.1
WebdriverIO configuration:
latest versions of the WebdriverIO framework, geckodriver, and wdio-geckodriver-service
Here is my wdio.conf.js file:
const { generate } = require('multiple-cucumber-html-reporter');
const { removeSync } = require('fs-extra');

exports.config = {
  runner: 'local',
  maxInstances: 1,
  capabilities: [
    {
      maxInstances: 1,
      browserName: 'firefox',
      acceptInsecureCerts: true,
      'moz:firefoxOptions': {
        args: ['--headless', '--lang=fr-FR', '--disable-dev-shm-usage']
      }
    }
  ],
  specs: ['./features/adminNavigation.feature'],
  logLevel: 'info',
  bail: 0,
  baseUrl: 'http://localhost:3000',
  waitforTimeout: 10000000,
  connectionRetryTimeout: 120000,
  connectionRetryCount: 3,
  services: ['geckodriver'],
  framework: 'cucumber',
  reporters: [
    [
      'cucumberjs-json',
      {
        jsonFolder: './cucumberjs-reports/json',
        language: 'en'
      }
    ]
  ],
  cucumberOpts: {
    timeout: 40000,
    requireModule: [
      'tsconfig-paths/register',
      () => {
        require('ts-node').register({ files: true, project: './tsconfig.json' });
      }
    ],
    require: ['./**/**/*.step.ts']
  },
  onPrepare: function(config, capabilities) {
    removeSync('./cucumberjs-reports');
  },
  onComplete: function(exitCode, config, capabilities, results) {
    generate({
      jsonDir: './cucumberjs-reports/json',
      reportPath: './cucumberjs-reports/report'
    });
  }
};
My package.json:
"geckodriver": "^1.20.0",
"#wdio/cli": "^6.4.5",
"#wdio/cucumber-framework": "^6.4.3",
"#wdio/local-runner": "^6.4.5",
"#wdio/mocha-framework": "^6.4.0",
"#wdio/selenium-standalone-service": "^6.4.7",
"#wdio/spec-reporter": "^6.4.0",
"#wdio/sync": "^6.4.5",
"chromedriver": "^85.0.1",
"wdio-chromedriver-service": "^6.0.4",
"wdio-cucumberjs-json-reporter": "^2.0.2",
"wdio-geckodriver-service": "^1.1.0",
And here are the error logs:
Execution of 1 spec files started at 2020-09-11T13:15:06.515Z
2062020-09-11T13:15:06.519Z DEBUG #wdio/utils:initialiseServices: initialise service "geckodriver" as NPM package
2072020-09-11T13:15:06.522Z INFO #wdio/cli:launcher: Run onPrepare hook
2082020-09-11T13:15:06.525Z INFO #wdio/cli:launcher: Run onWorkerStart hook
2092020-09-11T13:15:06.535Z INFO #wdio/local-runner: Start worker 0-0 with arg: ./apps/front-e2e-wdio/wdio.ci.conf.js
210[0-0] 2020-09-11T13:15:07.117Z INFO #wdio/local-runner: Run worker command: run
211[0-0] 2020-09-11T13:15:07.247Z DEBUG #wdio/local-runner:utils: init remote session
212[0-0] 2020-09-11T13:15:07.252Z INFO webdriverio: Initiate new session using the ./protocol-stub protocol
213[0-0] RUNNING in firefox - /apps/front-e2e-wdio/features/adminNavigation.feature
214[0-0] 2020-09-11T13:15:07.749Z DEBUG #wdio/utils:initialiseServices: initialise service "geckodriver" as NPM package
215[0-0] 2020-09-11T13:15:07.772Z DEBUG #wdio/local-runner:utils: init remote session
216[0-0] 2020-09-11T13:15:07.774Z INFO webdriverio: Initiate new session using the webdriver protocol
217[0-0] 2020-09-11T13:15:07.778Z INFO webdriver: [POST] http://localhost:34305/session
218[0-0] 2020-09-11T13:15:07.778Z INFO webdriver: DATA {
219 capabilities: {
220 alwaysMatch: {
221 browserName: 'firefox',
222 acceptInsecureCerts: true,
223 'moz:firefoxOptions': [Object]
224 },
225 firstMatch: [ {} ]
226 },
227 desiredCapabilities: {
228 browserName: 'firefox',
229 acceptInsecureCerts: true,
230 'moz:firefoxOptions': { args: [Array] }
231 }
232}
233[0-0] 2020-09-11T13:15:10.623Z DEBUG webdriver: request failed due to response error: unknown error
234[0-0] 2020-09-11T13:15:10.624Z INFO webdriver: Retrying 1/3
2352020-09-11T13:15:10.624Z INFO webdriver: [POST] http://localhost:34305/session
2362020-09-11T13:15:10.624Z INFO webdriver: DATA {
237 capabilities: {
238 alwaysMatch: {
239 browserName: 'firefox',
240 acceptInsecureCerts: true,
241 'moz:firefoxOptions': [Object]
242 },
243 firstMatch: [ {} ]
244 },
245 desiredCapabilities: {
246 browserName: 'firefox',
247 acceptInsecureCerts: true,
248 'moz:firefoxOptions': { args: [Array] }
249 }
250}
251[0-0] 2020-09-11T13:15:10.623Z WARN webdriver: Request failed with status 500 due to invalid argument: can't kill an exited process
252[0-0] 2020-09-11T13:15:13.141Z DEBUG webdriver: request failed due to response error: unknown error
253[0-0] 2020-09-11T13:15:13.142Z WARN webdriver: Request failed with status 500 due to invalid argument: can't kill an exited process
254[0-0] 2020-09-11T13:15:13.142Z INFO webdriver: Retrying 2/3
255[0-0] 2020-09-11T13:15:13.143Z INFO webdriver: [POST] http://localhost:34305/session
256[0-0] 2020-09-11T13:15:13.143Z INFO webdriver: DATA {
257 capabilities: {
258 alwaysMatch: {
259 browserName: 'firefox',
260 acceptInsecureCerts: true,
261 'moz:firefoxOptions': [Object]
262 },
263 firstMatch: [ {} ]
264 },
265 desiredCapabilities: {
266 browserName: 'firefox',
267 acceptInsecureCerts: true,
268 'moz:firefoxOptions': { args: [Array] }
269 }
270}
271[0-0] 2020-09-11T13:15:15.558Z DEBUG webdriver: request failed due to response error: unknown error
272[0-0] 2020-09-11T13:15:15.567Z INFO webdriver: Retrying 3/3
273[0-0] 2020-09-11T13:15:15.567Z INFO webdriver: [POST] http://localhost:34305/session
274[0-0] 2020-09-11T13:15:15.568Z INFO webdriver: DATA {
275 capabilities: {
276 alwaysMatch: {
277 browserName: 'firefox',
278 acceptInsecureCerts: true,
279 'moz:firefoxOptions': [Object]
280 },
281 firstMatch: [ {} ]
282 },
283 desiredCapabilities: {
284 browserName: 'firefox',
285 acceptInsecureCerts: true,
286 'moz:firefoxOptions': { args: [Array] }
287 }
288}
289[0-0] 2020-09-11T13:15:15.566Z WARN webdriver: Request failed with status 500 due to invalid argument: can't kill an exited process
290[0-0] 2020-09-11T13:15:17.985Z DEBUG webdriver: request failed due to response error: unknown error
291[0-0] 2020-09-11T13:15:17.986Z ERROR webdriver: Request failed with status 500 due to unknown error: invalid argument: can't kill an exited process
2922020-09-11T13:15:17.986Z ERROR webdriver: unknown error: invalid argument: can't kill an exited process
293 at getErrorFromResponseBody (/builds/node_modules/webdriver/build/utils.js:121:10)
294 at WebDriverRequest._request (/builds/node_modules/webdriver/build/request.js:149:56)
295 at processTicksAndRejections (internal/process/task_queues.js:97:5)
296 at async startWebDriverSession (/builds/node_modules/webdriver/build/utils.js:41:16)
297 at async Function.newSession (/builds/node_modules/webdriver/build/index.js:44:23)
298 at async remote (/builds/node_modules/webdriverio/build/index.js:75:20)
299 at async Runner._startSession (/builds/node_modules/#wdio/runner/build/index.js:206:50)
300 at async Runner._initSession (/builds/node_modules/#wdio/runner/build/index.js:175:21)
301 at async Runner.run (/builds/node_modules/#wdio/runner/build/index.js:93:15)
3022020-09-11T13:15:17.987Z ERROR #wdio/runner: Error: Failed to create session.
303invalid argument: can't kill an exited process
304 at startWebDriverSession (/builds/node_modules/webdriver/build/utils.js:45:11)
305 at processTicksAndRejections (internal/process/task_queues.js:97:5)
306[0-0] Error: Failed to create session.
307invalid argument: can't kill an exited process
3082020-09-11T13:15:18.108Z DEBUG #wdio/local-runner: Runner 0-0 finished with exit code 1
309[0-0] FAILED in firefox - /apps/front-e2e-wdio/features/adminNavigation.feature
3102020-09-11T13:15:18.110Z INFO #wdio/cli:launcher: Run onComplete hook
3112020-09-11T13:15:18.112Z ERROR #wdio/cli:utils: Error in onCompleteHook: Error: There were issues reading JSON-files from './apps/front-e2e-wdio/cucumberjs-reports/json'.
312 at collectJSONS (/builds/node_modules/multiple-cucumber-html-reporter/lib/collect-jsons.js:16:15)
313 at generateReport (/builds/node_modules/multiple-cucumber-html-reporter/lib/generate-report.js:69:22)
314 at Object.onComplete (/builds/apps/front-e2e-wdio/wdio.ci.conf.js:44:5)
315 at /builds/node_modules/#wdio/cli/build/utils.js:109:13
316 at Array.map (<anonymous>)
317 at runOnCompleteHook (/builds/node_modules/#wdio/cli/build/utils.js:107:37)
318 at Launcher.run (/builds/node_modules/#wdio/cli/build/launcher.js:83:69)
319 at processTicksAndRejections (internal/process/task_queues.js:97:5)
320Spec Files: 0 passed, 1 failed, 1 total (100% completed) in 00:00:11
3212020-09-11T13:15:18.114Z INFO #wdio/local-runner: Shutting down spawned worker
3222020-09-11T13:15:18.365Z INFO #wdio/local-runner: Waiting for 0 to shut down gracefully
3232020-09-11T13:15:18.366Z INFO #wdio/local-runner: shutting down
Edit
The command that I'm using to run my tests is the following: npx wdio wdio.conf.js
And here's the Dockerfile I'm using:
FROM alpine:latest
ARG BUILD_DATE
ARG VCS_REF
USER root
# Install node
RUN apk add --update nodejs npm
# Installs latest Chromium package.
RUN echo "http://dl-cdn.alpinelinux.org/alpine/edge/main" > /etc/apk/repositories \
    && echo "http://dl-cdn.alpinelinux.org/alpine/edge/community" >> /etc/apk/repositories \
    && echo "http://dl-cdn.alpinelinux.org/alpine/edge/testing" >> /etc/apk/repositories \
    && echo "http://dl-cdn.alpinelinux.org/alpine/v3.11/main" >> /etc/apk/repositories \
    && apk upgrade -U -a \
    && apk add --no-cache \
    libstdc++ \
    chromium \
    harfbuzz \
    nss \
    freetype \
    ttf-freefont \
    wqy-zenhei \
    && rm -rf /var/cache/* \
    && mkdir /var/cache/apk
# Add Chrome as a user
RUN mkdir -p /usr/src/app \
    && adduser -D chrome \
    && chown -R chrome:chrome /usr/src/app
# Run Chrome as non-privileged
USER chrome
WORKDIR /usr/src/app
ENV CHROME_BIN=/usr/bin/chromium-browser \
    CHROME_PATH=/usr/lib/chromium/
USER root
# Install firefox
RUN apk upgrade --update-cache --available
RUN apk add xvfb firefox dbus py-pip ttf-dejavu
ENV FIREFOX_BIN=/usr/bin/firefox
# Install bash && curl
RUN apk add --no-cache bash
RUN apk --no-cache add curl
#ADD wait module
ADD https://github.com/ufoscout/docker-compose-wait/releases/download/2.2.1/wait /wait
RUN chmod +x /wait
# Install Mongo Tools
RUN echo 'http://dl-cdn.alpinelinux.org/alpine/v3.9/community' >> /etc/apk/repositories
RUN apk add mongodb-tools
# Install jdk11
RUN apk --no-cache add openjdk11 --repository=http://dl-cdn.alpinelinux.org/alpine/edge/community
# Install chromeDriver
RUN apk add --no-cache chromium-chromedriver
RUN ls
RUN node -v
RUN npm -v
RUN java -version
RUN chromium-browser --version
RUN firefox --version
# RUN apt-cache policy chromium | grep Installed | sed -e "s/Installed/Chrome/"
CMD tail -f /dev/null 
Please see the repo https://gitlab.com/bar_foo/wdio-cucumber-typescript for an example of how to run tests in Chrome and Firefox in the GitLab pipeline.
Note: you don't need Java in the image if you decide to use the DevTools automation protocol instead of the WebDriver protocol; see https://webdriver.io/blog/2019/09/16/devtools.html
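For the DevTools route, the switch is a single option in wdio.conf.js (a sketch; in WebdriverIO v6 this may additionally require installing the devtools npm package, and the geckodriver service entry can then be dropped):

exports.config = {
  // ...existing options from the config above...
  automationProtocol: 'devtools', // drive the browser over the DevTools protocol instead of WebDriver
  services: [],                   // no geckodriver/chromedriver service needed in that case
};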

Why does my supervisor fail on init with badarg?

I am trying to start a supervisor of type one_for_one with one child and am getting this error:
A = pl:start().
{error,{badarg,[{erlang,apply,[{state,[0|1]},init,[[]]],[]},
{supervisor,init,1,[{file,"supervisor.erl"},{line,295}]},
{gen_server,init_it,2,[{file,"gen_server.erl"},{line,374}]},
{gen_server,init_it,6,[{file,"gen_server.erl"},{line,342}]},
{proc_lib,init_p_do_apply,3,
[{file,"proc_lib.erl"},{line,249}]}]}}
=CRASH REPORT==== 1-Mar-2020::15:28:41.090000 ===
crasher:
pid: <0.215.0>
registered_name: []
exception error: bad argument
in function apply/3
called as apply({state,[0|1]},init,[[]])
in call from supervisor:init/1 (supervisor.erl, line 295)
in call from gen_server:init_it/2 (gen_server.erl, line 374)
in call from gen_server:init_it/6 (gen_server.erl, line 342)
ancestors: [<0.209.0>]
message_queue_len: 0
messages: []
links: [<0.209.0>]
dictionary: []
trap_exit: true
status: running
heap_size: 376
stack_size: 25
reductions: 192
neighbours:
neighbour:
pid: <0.209.0>
registered_name: []
initial_call: {erlang,apply,2}
current_function: {io,execute_request,2}
ancestors: []
message_queue_len: 0
links: [<0.63.0>,<0.215.0>]
trap_exit: false
status: waiting
heap_size: 1598
stack_size: 25
reductions: 5867
current_stacktrace: [{io,execute_request,2,[{file,"io.erl"},{line,579}]},
{shell,exprs,7,[{file,"shell.erl"},{line,693}]},
{shell,eval_exprs,7,[{file,"shell.erl"},{line,642}]},
{shell,eval_loop,3,[{file,"shell.erl"},{line,627}]}]
** exception exit: badarg
in function apply/3
called as apply({state,[0|1]},init,[[]])
in call from supervisor:init/1 (supervisor.erl, line 295)
in call from gen_server:init_it/2 (gen_server.erl, line 374)
in call from gen_server:init_it/6 (gen_server.erl, line 342)
in call from proc_lib:init_p_do_apply/3 (proc_lib.erl, line 249)
Supervisor:
-module(pl).
-behaviour(supervisor).
-export([start/0, init/1]).

-record(state, {
    data = []
}).

%% {ChildId, StartFunc, Restart, Shutdown, Type, Modules}
start() ->
    supervisor:start_link({local, ?MODULE}, #state{data = [0|1]}, []).

init(State = #state{data = D}) ->
    InitialChild = {fchild, {serv, start_link, [-1]}, temporary, 3000, brutal_kill, worker},
    MaxRestart = 2,
    MaxTime = 600,
    Strategy = {one_for_one, MaxRestart, MaxTime},
    {ok, {Strategy, [
        InitialChild
    ]}}.
Worker:
-module(serv).
-behaviour(gen_server).
-compile(export_all).

-define(A, 300).

-record(state, {
    values = [],
    id
}).

call(Pid, Message) ->
    gen_server:call(Pid, Message).

cast(Pid, Message) ->
    gen_server:cast(Pid, Message).

start_link(Id) ->
    {ok, Pid} = gen_server:start_link({local, xx}, ?MODULE, [Id], []),
    Pid.

stop(Ref) ->
    gen_server:stop(Ref).

init(Id) ->
    {ok, #state{values = [], id = Id}}.

handle_call(state, From, State = #state{values = V}) ->
    Reply = {reply, State, State},
    Reply;
I am just trying to start a supervisor with only one child that takes a parameter in its init function. What is wrong with this code?
The second argument of start_link with 3 arguments is supposed to be the module name; it should be
start() ->
    supervisor:start_link({local, ?MODULE}, ?MODULE, [#state{data = [0|1]}]).
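With the arguments wrapped in a list like that, init/1 receives the whole list, so its head has to match accordingly. A sketch of what that could look like (note also that the old-style child spec tuple is {Id, StartFunc, Restart, Shutdown, Type, Modules}, so brutal_kill belongs in the Shutdown position, and serv:start_link/1 must return {ok, Pid} for the supervisor to accept the child):

init([#state{data = _Data}]) ->
    %% {Id, StartFunc, Restart, Shutdown, Type, Modules}
    InitialChild = {fchild, {serv, start_link, [-1]},
                    temporary, brutal_kill, worker, [serv]},
    MaxRestart = 2,
    MaxTime = 600,
    {ok, {{one_for_one, MaxRestart, MaxTime}, [InitialChild]}}.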

Erlang error exception exit in ZeroMQ example

I run hwclient:main(). or hwserver:main(). from the ZeroMQ example for Erlang (https://github.com/zeromq/ezmq/blob/master/examples, linked from http://zeromq.org/bindings:erlang) and get an "exception exit" error:
crasher:
initial call: ezmq:init/1
pid: <0.42.0>
registered_name: []
exception exit: {noproc,
{gen_server,call,
[ezmq_link_sup,{start_child,[]},infinity]}}
in function gen_server:terminate/6 (gen_server.erl, line 746)
ancestors: [<0.31.0>]
messages: []
links: []
dictionary: []
trap_exit: true
status: running
heap_size: 376
stack_size: 27
reductions: 174
neighbours:
** exception exit: {{noproc,
{gen_server,call,
[ezmq_link_sup,{start_child,[]},infinity]}},
{gen_server,call,
[<0.42.0>,{connect,tcp,{127,0,0,1},5555,[]}]}}
in function gen_server:call/2 (gen_server.erl, line 182)
in call from hwclient:main/0 (/examples/hwclient.erl, line 15)
What's wrong?
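The noproc on ezmq_link_sup usually means the ezmq application's supervision tree is not running; a minimal sketch, under that assumption, is to start the application before running the example:

%% Assumption: the noproc means the ezmq application was never started.
{ok, _Started} = application:ensure_all_started(ezmq),
hwclient:main().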