How to solve 504 HTTP Code in Scrapy-Splash - docker

I have been using Scrapy for a long time, but now I need to use scrapy-splash for some reason.
I ran:
docker run -it -p 8050:8050 --rm --name spider -v /etc/splash/proxy-profiles:/etc/splash/proxy-profiles scrapinghub/splash --max-timeout 3600
In /etc/splash/proxy-profiles/default.ini I set the corporate proxy.
Log:
[user@vir ~]$ sudo docker run -it -p 8050:8050 --rm --name spider -v /etc/splash/proxy-profiles:/etc/splash/proxy-profiles scrapinghub/splash --max-timeout 3600
2020-10-30 10:36:19+0000 [-] Log opened.
2020-10-30 10:36:19.324303 [-] Xvfb is started: ['Xvfb', ':1087332563', '-screen', '0', '1024x768x24', '-nolisten', 'tcp']
QStandardPaths: XDG_RUNTIME_DIR not set, defaulting to '/tmp/runtime-splash'
2020-10-30 10:36:19.496380 [-] Splash version: 3.5
2020-10-30 10:36:19.593826 [-] Qt 5.14.1, PyQt 5.14.2, WebKit 602.1, Chromium 77.0.3865.129, sip 4.19.22, Twisted 19.7.0, Lua 5.2
2020-10-30 10:36:19.594291 [-] Python 3.6.9 (default, Jul 17 2020, 12:50:27) [GCC 8.4.0]
2020-10-30 10:36:19.594536 [-] Open files limit: 1048576
2020-10-30 10:36:19.594911 [-] Can't bump open files limit
2020-10-30 10:36:19.627214 [-] proxy profiles support is enabled, proxy profiles path: /etc/splash/proxy-profiles
2020-10-30 10:36:19.627645 [-] memory cache: enabled, private mode: enabled, js cross-domain access: disabled
2020-10-30 10:36:19.880659 [-] verbosity=1, slots=20, argument_cache_max_entries=500, max-timeout=3600.0
2020-10-30 10:36:19.881199 [-] Web UI: enabled, Lua: enabled (sandbox: enabled), Webkit: enabled, Chromium: enabled
2020-10-30 10:36:19.882106 [-] Site starting on 8050
2020-10-30 10:36:19.882406 [-] Starting factory <twisted.web.server.Site object at 0x7fda701815f8>
2020-10-30 10:36:19.883106 [-] Server listening on http://0.0.0.0:8050
To find out my Docker IP, I ran: ip addr show docker0 | grep -Po 'inet \K[\d.]+'
It is 172.17.0.1
I tried:
curl 'http://172.17.0.1:8050/render.html?url=https://www.google.com/' --noproxy "*"
The --noproxy "*" is needed because the company works behind a proxy.
And I get a 200 response in the Docker log, which means the proxy I set in /etc/splash/proxy-profiles/default.ini works correctly:
2020-10-30 13:15:25.586864 [-] "172.17.0.1" - - [30/Oct/2020:13:15:25 +0000] "GET /render.html?url=https://www.google.ru HTTP/1.1" 200 199991 "-" "curl/7.29.0"
2020-10-30 13:18:22.103537 [events] {"path": "/render.html", "rendertime": 0.7088587284088135, "maxrss": 214808, "load": [0.02, 0.13, 0.14], "fds": 57, "active": 0, "qsize": 0, "_id": 140576160225768, "method": "GET", "timestamp": 1604063902, "user-agent": "curl/7.29.0", "args": {"url": "https://www.google.ru", "uid": 140576160225768}, "status_code": 200, "client_ip": "172.17.0.1"}
2020-10-30 13:18:22.103956 [-] "172.17.0.1" - - [30/Oct/2020:13:18:21 +0000] "GET /render.html?url=https://www.google.ru HTTP/1.1" 200 199128 "-" "curl/7.29.0"
Great! That's what I wanted
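For reference, the same check can be reproduced from Python with requests. This is a minimal sketch, assuming the Splash container above is reachable at 172.17.0.1:8050; setting trust_env=False plays the same role as curl's --noproxy "*" by ignoring the corporate proxy environment variables.
# Minimal sketch: call Splash's render.html endpoint directly from Python.
import requests

session = requests.Session()
session.trust_env = False  # ignore HTTP(S)_PROXY env vars, like curl --noproxy "*"

resp = session.get(
    "http://172.17.0.1:8050/render.html",
    params={"url": "https://www.google.ru"},
    timeout=60,
)
print(resp.status_code, len(resp.text))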
But when I try to send a SplashRequest and run Scrapy with the code below,
yield SplashRequest(
    args={
        'wait': 0.5,
        'timeout': 10,
        'proxy': '<hidden for security reasons>',
    },
    splash_url="http://172.17.0.1:8050",
    url="http://www.google.ru/",
    endpoint='render.html',
    method="GET",
    callback=self.parse_splash,
    headers=headers,
    dont_filter=True,
)
I got this error:
2020-10-30 16:45:37 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2020-10-30 16:45:37 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET http://www.google.ru/ via http://172.17.0.1:8050/render.html> (failed 1 times): 504 Gateway Time-out
2020-10-30 16:46:37 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2020-10-30 16:46:39 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET http://www.google.ru/ via http://172.17.0.1:8050/render.html> (failed 2 times): 504 Gateway Time-out
2020-10-30 16:47:27 [scrapy.crawler] INFO: Received SIGINT, shutting down gracefully. Send again to force
2020-10-30 16:47:27 [scrapy.core.engine] INFO: Closing spider (shutdown)
2020-10-30 16:47:37 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2020-10-30 16:47:40 [scrapy.downloadermiddlewares.retry] ERROR: Gave up retrying <GET http://www.google.ru/ via http://172.17.0.1:8050/render.html> (failed 3 times): 504 Gateway Time-out
2020-10-30 16:47:40 [scrapy.core.engine] DEBUG: Crawled (504) <GET http://www.google.ru/ via http://172.17.0.1:8050/render.html> (referer: None)
Important note!
When I use plain requests, it works!
When I use Scrapy requests, it works!
When I use scrapy-splash requests, it doesn't work!
Here are the overridden settings of my Scrapy crawler:
custom_settings = {
    'ITEM_PIPELINES': {
        'spidermon.pipelines.DotcomPipeline': 400
    },
    'DUPEFILTER_DEBUG': True,
    'HTTPERROR_ALLOWED_CODES': [503, 504],
    'LOG_ENABLED': True,
    'DOWNLOAD_DELAY': 0.5,
    'DOWNLOADER_MIDDLEWARES': {
        'scrapy_splash.SplashCookiesMiddleware': 723,
        'scrapy_splash.SplashMiddleware': 725,
        'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
        'scrapy.downloadermiddlewares.cookies.CookiesMiddleware': None,
    },
    'SPIDER_MIDDLEWARES': {
        'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
    },
    'SPLASH_URL': "http://172.17.0.1:8050",
    'DUPEFILTER_CLASS': 'scrapy_splash.SplashAwareDupeFilter',
    'HTTPCACHE_STORAGE': 'scrapy_splash.SplashAwareFSCacheStorage',
}
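One way to isolate where the 504 comes from is to call /render.html directly with the same arguments the SplashRequest sends (Splash accepts them as query parameters), outside of Scrapy. This is a hedged sketch and the proxy URL below is a placeholder for the real corporate proxy. If this call also returns 504, Splash itself is failing to render through the proxy (for example because the 10-second render timeout is too short; Splash answers 504 when that timeout expires). If it returns 200, the problem is in the scrapy-splash middleware or the Scrapy settings.
# Hedged diagnostic sketch: reproduce the Splash call outside of Scrapy.
import requests

session = requests.Session()
session.trust_env = False  # talk to Splash directly, bypassing the corporate proxy

resp = session.get(
    "http://172.17.0.1:8050/render.html",
    params={
        "url": "http://www.google.ru/",
        "wait": 0.5,
        "timeout": 10,  # Splash returns 504 when this render timeout is exceeded
        "proxy": "http://user:pass@corporate-proxy:3128",  # placeholder value
    },
    timeout=60,
)
print(resp.status_code)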

Related

Python Error when testing uWSGI in basic way

I'm just testing uWSGI by following the quickstart guide on the official website, but I am facing a problem.
This is what I did. It's exactly the same as the steps in the quickstart guide.
$ uwsgi --http :9090 --wsgi-file foobar.py
*** Starting uWSGI 2.0.19.1 (64bit) on [Sat Mar 12 09:28:41 2022] ***
compiled with version: Clang 11.0.0 on 18 January 2021 21:53:23
os: Darwin-21.3.0 Darwin Kernel Version 21.3.0: Wed Jan 5 21:37:58 PST 2022; root:xnu-8019.80.24~20/RELEASE_X86_64
nodename: mac-brian.local
machine: x86_64
clock source: unix
pcre jit disabled
detected number of CPU cores: 16
current working directory: /Users/brian/Documents/project/uwsgi_test
detected binary path: /opt/anaconda3/envs/price_analysis/bin/uwsgi
*** WARNING: you are running uWSGI without its master process manager ***
your processes number limit is 5568
your memory page size is 4096 bytes
detected max file descriptor number: 256
lock engine: OSX spinlocks
thunder lock: disabled (you can enable it with --thunder-lock)
uWSGI http bound on :9090 fd 4
spawned uWSGI http 1 (pid: 10349)
uwsgi socket 0 bound to TCP address 127.0.0.1:59738 (port auto-assigned) fd 3
Python version: 3.9.1 | packaged by conda-forge | (default, Jan 10 2021, 02:52:42) [Clang 11.0.0 ]
Fatal Python error: init_import_site: Failed to import the site module
Python runtime state: initialized
Traceback (most recent call last):
File "/opt/anaconda3/envs/price_analysis/lib/python3.9/site.py", line 73, in <module>
import os
File "/opt/anaconda3/envs/price_analysis/lib/python3.9/os.py", line 29, in <module>
from _collections_abc import _check_methods
File "/opt/anaconda3/envs/price_analysis/lib/python3.9/_collections_abc.py", line 416, in <module>
class _CallableGenericAlias(GenericAlias):
TypeError: type 'types.GenericAlias' is not an acceptable base type
And this is my foobar.py:
def application(env, start_response):
    start_response('200 OK', [('Content-Type','text/html')])
    return [b"Hello World"]
After this I tried to connect to http://localhost:9090 in this way.
curl -v 127.0.0.1:9090
These are the responses.
curl -v 127.0.0.1:9090
* Trying 127.0.0.1:9090...
* Connected to 127.0.0.1 (127.0.0.1) port 9090 (#0)
> GET / HTTP/1.1
> Host: 127.0.0.1:9090
> User-Agent: curl/7.77.0
> Accept: */*
>
[uwsgi-http] unable to connect() to node "127.0.0.1:59738" (0 retries): Connection refused
[uwsgi-http] unable to connect() to node "127.0.0.1:59738" (1 retries): Connection refused
[uwsgi-http] unable to connect() to node "127.0.0.1:59738" (2 retries): Connection refused
[uwsgi-http] unable to connect() to node "127.0.0.1:59738" (3 retries): Connection refused
* Empty reply from server
* Closing connection 0
curl: (52) Empty reply from server
I expected Hello World, but I get an empty reply from the server.
How can I solve this?
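The traceback shows the conda interpreter embedded in uWSGI failing while importing Python's site module, i.e. before the application is ever loaded, which would explain why the uwsgi-http router then gets "Connection refused" from the worker socket. As a hedged sanity check, the same application can be served with the standard library's wsgiref server; if that works, the WSGI app itself is fine and the failure lies in the uWSGI / conda Python combination.
# Hedged sanity check: serve foobar.py's application with the stdlib WSGI server.
from wsgiref.simple_server import make_server

from foobar import application

with make_server("127.0.0.1", 9090, application) as httpd:
    print("Serving on http://127.0.0.1:9090 ...")
    httpd.serve_forever()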

Hyperledger Blockchain Explorer-Fail to connect before the deadline on Endorser, fail to connect to remote gRPC server

I am trying to set up a Hyperledger Fabric network with Hyperledger Explorer. I spun up a VM on the DigitalOcean cloud with Ubuntu OS. From there, I spun up 3 orderer nodes and 2 peer nodes, for a total of 5 nodes (I am using a RAFT setup).
However, I encounter the error below when trying to start the Hyperledger Explorer Docker containers.
Error:
> hyperledger-explorer@1.1.4 app-start /opt/explorer
> ./start.sh
[2021-02-18T07:32:21.828] [INFO] PgService - SSL to Postgresql disabled
[2021-02-18T07:32:21.829] [INFO] PgService - connecting to Postgresql postgres://hppoc:******@explorerdb.mynetwork.com:5432/fabricexplorer
[2021-02-18T07:32:21.898] [INFO] Platform - network_config.id test-network network_config.profile ./connection-profile/test-network.json
[2021-02-18T07:32:22.013] [INFO] FabricConfig - config.client.tlsEnable true
[2021-02-18T07:32:22.013] [INFO] FabricConfig - FabricConfig, this.config.channels airlinechannel
[2021-02-18T07:32:22.016] [INFO] FabricGateway - enrollUserIdentity: userName : exploreradmin
2021-02-18T07:32:25.221Z - error: [ServiceEndpoint]: Error: Failed to connect before the deadline on Endorser- name: peer1.acme.com, url:grpcs://peer1.acme.com:7051, connected:false, connectAttempted:true
2021-02-18T07:32:25.222Z - error: [ServiceEndpoint]: waitForReady - Failed to connect to remote gRPC server peer1.acme.com url:grpcs://peer1.acme.com:7051 timeout:3000
2021-02-18T07:32:25.223Z - info: [NetworkConfig]: buildPeer - Unable to connect to the endorser peer1.acme.com due to Error: Failed to connect before the deadline on Endorser- name: peer1.acme.com, url:grpcs://peer1.acme.com:7051, connected:false, connectAttempted:true
2021-02-18T07:32:28.250Z - error: [ServiceEndpoint]: Error: Failed to connect before the deadline on Discoverer- name: peer1.acme.com, url:grpcs://peer1.acme.com:7051, connected:false, connectAttempted:true
2021-02-18T07:32:28.250Z - error: [ServiceEndpoint]: waitForReady - Failed to connect to remote gRPC server peer1.acme.com url:grpcs://peer1.acme.com:7051 timeout:3000
2021-02-18T07:32:28.250Z - error: [ServiceEndpoint]: ServiceEndpoint grpcs://peer1.acme.com:7051 reset connection failed :: Error: Failed to connect before the deadline on Discoverer- name: peer1.acme.com, url:grpcs://peer1.acme.com:7051, connected:false, connectAttempted:true
2021-02-18T07:32:28.251Z - error: [DiscoveryService]: send[airlinechannel] - no discovery results
[2021-02-18T07:32:28.251] [ERROR] FabricClient - Error: DiscoveryService has failed to return results
at DiscoveryService.send (/opt/explorer/node_modules/fabric-network/node_modules/fabric-common/lib/DiscoveryService.js:370:10)
at process._tickCallback (internal/process/next_tick.js:68:7)
[2021-02-18T07:32:28.252] [INFO] FabricClient - ********* call to initializeDetachClient **********
[2021-02-18T07:32:28.253] [INFO] FabricClient - initializeDetachClient, network config) { name: 'test-network',
version: '1.0.0',
client:
{ tlsEnable: true,
adminCredential: { id: 'exploreradmin', password: 'exploreradminpw' },
enableAuthentication: true,
organization: 'AcmeMSP',
connection: { timeout: [Object] } },
channels: { airlinechannel: { peers: [Object] } },
organizations:
{ AcmeMSP:
{ mspid: 'AcmeMSP',
adminPrivateKey: [Object],
peers: [Array],
signedCert: [Object] } },
peers:
{ 'peer1.acme.com': { tlsCACerts: [Object], url: 'grpcs://peer1.acme.com:7051' } } }
[2021-02-18T07:32:28.253] [INFO] FabricClient - ************************************* initializeDetachClient *************************************************
[2021-02-18T07:32:28.254] [INFO] FabricClient - Error : Failed to connect client peer, please check the configuration and peer status
[2021-02-18T07:32:28.254] [INFO] FabricClient - Info : Explorer will continue working with only DB data
[2021-02-18T07:32:28.254] [INFO] FabricClient - ************************************** initializeDetachClient ************************************************
[2021-02-18T07:32:28.259] [INFO] Platform - initializeListener, network_id, network_client test-network { name: 'test-network',
version: '1.0.0',
client:
{ tlsEnable: true,
adminCredential: { id: 'exploreradmin', password: 'exploreradminpw' },
enableAuthentication: true,
organization: 'AcmeMSP',
connection: { timeout: [Object] } },
channels: { airlinechannel: { peers: [Object] } },
organizations:
{ AcmeMSP:
{ mspid: 'AcmeMSP',
adminPrivateKey: [Object],
peers: [Array],
signedCert: [Object] } },
peers:
{ 'peer1.acme.com': { tlsCACerts: [Object], url: 'grpcs://peer1.acme.com:7051' } } }
[2021-02-18T07:32:28.260] [INFO] main - Please open web browser to access :http://localhost:8080/
[2021-02-18T07:32:28.261] [INFO] main - pid is 20
[2021-02-18T07:32:28.263] [ERROR] main - <<<<<<<<<<<<<<<<<<<<<<<<<< Explorer Error >>>>>>>>>>>>>>>>>>>>>
[2021-02-18T07:32:28.263] [ERROR] main - Error : [ 'Default client peer is down and no channel details available database' ]
[2021-02-18T07:32:30.264] [INFO] main - Received kill signal, shutting down gracefully
[2021-02-18T07:32:30.266] [INFO] Platform - <<<<<<<<<<<<<<<<<<<<<<<<<< Closing explorer >>>>>>>>>>>>>>>>>>>>>
[2021-02-18T07:32:30.266] [INFO] main - Closed out connections
Version Detail
Hyperledger Fabric: 2.3.1
Hyperledger Explorer: v1.1.1 (latest tag)
Part 1: Docker Container Setup
a) Docker PS
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9e8768914140 hyperledger/explorer:latest "docker-entrypoint.s…" 3 minutes ago Up 3 minutes 0.0.0.0:8080->8080/tcp explorer.mynetwork.com
903c8c4a4326 hyperledger/explorer-db:latest "docker-entrypoint.s…" 3 minutes ago Up 3 minutes (healthy) 5432/tcp explorerdb.mynetwork.com
9fed267ae9b1 dev-peer1.budget.com-gocc1.1.0-1.0-2593f1a95def85c64fdfed35e3d3b1051b92ed91549cfe789292ec5475d7db86-e014c6d933da036b6c79b53d29a800d8a6886e374ceb435a30642b885735f8f5 "chaincode -peer.add…" 21 minutes ago Up 21 minutes dev-peer1.budget.com-gocc1.1.0-1.0-2593f1a95def85c64fdfed35e3d3b1051b92ed91549cfe789292ec5475d7db86
12407497fa4c dev-peer1.acme.com-gocc1.1.0-1.0-2593f1a95def85c64fdfed35e3d3b1051b92ed91549cfe789292ec5475d7db86-a3b6caa9293bb826d231b1a31cc47437e58764abe5169a20ed0ee43f25c16b24 "chaincode -peer.add…" 21 minutes ago Up 21 minutes dev-peer1.acme.com-gocc1.1.0-1.0-2593f1a95def85c64fdfed35e3d3b1051b92ed91549cfe789292ec5475d7db86
0448e71f68e1 hyperledger/fabric-peer:latest "peer node start" 22 minutes ago Up 22 minutes 0.0.0.0:8051->7051/tcp, 0.0.0.0:8052->7052/tcp peer1.budget.com
3266ec37b360 hyperledger/fabric-peer:latest "peer node start" 22 minutes ago Up 22 minutes 0.0.0.0:7051-7052->7051-7052/tcp peer1.acme.com
47ebe9ad79d1 hyperledger/fabric-orderer:latest "orderer" 22 minutes ago Up 22 minutes 0.0.0.0:8050->7050/tcp orderer2.acme.com
09a5f771f47f hyperledger/fabric-tools:latest "/bin/bash" 22 minutes ago Up 22 minutes tools
e132bb01ce22 hyperledger/fabric-orderer:latest "orderer" 22 minutes ago Up 22 minutes 0.0.0.0:9050->7050/tcp orderer3.acme.com
3c61b0316385 hyperledger/fabric-orderer:latest "orderer" 22 minutes ago Up 22 minutes
b) I use 3 docker-compose files for my configuration setting.
$ docker-compose -f ./config/docker-compose-base.yaml -f ./tls/docker-compose-tls.yaml -f ./raft/docker-compose-raft.yaml up -d
docker-compose-base.yaml
https://gist.github.com/Skyquek/03d1ffad5643d67d8da5b268a4814a7d
docker-compose-tls.yaml
https://gist.github.com/Skyquek/b3b314cb2152ab541e822f72c60a2cbd
docker-compose-raft.yaml
https://gist.github.com/Skyquek/7f8ec2d4d1876283f4a9444675971be8
c) Core.yaml
acme core.yaml
https://gist.github.com/Skyquek/8cdcbc4ee3d53a2277b1c34bb2fca704
Part 2: Blockchain Explorer Setting
1. connection-profile.json
{
  "name": "test-network",
  "version": "1.0.0",
  "client": {
    "tlsEnable": true,
    "adminCredential": {
      "id": "exploreradmin",
      "password": "exploreradminpw"
    },
    "enableAuthentication": true,
    "organization": "AcmeMSP",
    "connection": {
      "timeout": {
        "peer": {
          "endorser": "300"
        },
        "orderer": "300"
      }
    }
  },
  "channels": {
    "airlinechannel": {
      "peers": {
        "peer1.acme.com": {}
      }
    }
  },
  "organizations": {
    "AcmeMSP": {
      "mspid": "AcmeMSP",
      "adminPrivateKey": {
        "path": "/tmp/crypto/peerOrganizations/acme.com/users/Admin@acme.com/msp/keystore/priv_sk"
      },
      "peers": ["peer1.acme.com"],
      "signedCert": {
        "path": "/tmp/crypto/peerOrganizations/acme.com/users/Admin@acme.com/msp/signcerts/Admin@acme.com-cert.pem"
      }
    }
  },
  "peers": {
    "peer1.acme.com": {
      "tlsCACerts": {
        "path": "/tmp/crypto/peerOrganizations/acme.com/tlsca/tlsca.acme.com-cert.pem"
      },
      "url": "grpcs://peer1.acme.com:7051"
    }
  }
}
2. docker-compose.yaml
# SPDX-License-Identifier: Apache-2.0
version: '2.1'

volumes:
  pgdata:
  walletstore:

networks:
  mynetwork.com:
    external:
      name: acloudfan_airline

services:
  explorerdb.mynetwork.com:
    image: hyperledger/explorer-db:latest
    container_name: explorerdb.mynetwork.com
    hostname: explorerdb.mynetwork.com
    environment:
      - DATABASE_DATABASE=fabricexplorer
      - DATABASE_USERNAME=hppoc
      - DATABASE_PASSWORD=password
    healthcheck:
      test: "pg_isready -h localhost -p 5432 -q -U postgres"
      interval: 30s
      timeout: 10s
      retries: 5
    volumes:
      - pgdata:/var/lib/postgresql/data
    networks:
      - mynetwork.com

  explorer.mynetwork.com:
    image: hyperledger/explorer:latest
    container_name: explorer.mynetwork.com
    hostname: explorer.mynetwork.com
    environment:
      - DATABASE_HOST=explorerdb.mynetwork.com
      - DATABASE_DATABASE=fabricexplorer
      - DATABASE_USERNAME=hppoc
      - DATABASE_PASSWD=password
      - LOG_LEVEL_APP=debug
      - LOG_LEVEL_DB=debug
      - LOG_LEVEL_CONSOLE=info
      - LOG_CONSOLE_STDOUT=true
      - DISCOVERY_AS_LOCALHOST=true
    volumes:
      - ./config.json:/opt/explorer/app/platform/fabric/config.json
      - ./connection-profile:/opt/explorer/app/platform/fabric/connection-profile
      - ./organizations:/tmp/crypto
      - walletstore:/opt/explorer/wallet
    ports:
      - 8080:8080
    depends_on:
      explorerdb.mynetwork.com:
        condition: service_healthy
    networks:
      - mynetwork.com
Solutions that I tried
1. Change the tlscs certs path
As stated in
Hyperledger Fabric 2.0.1: Error: Failed to connect before the deadline on Discoverer- name:
The problem is most likely due to an error in the path, but the problem still persists.
2. Change the env variable DISCOVERY_AS_LOCALHOST=true to false
Some people mention that this will fix the problem, but I can't seem to fix it this way.
3. Tried with hyperledger fabric 2.0 test-network
I tried to run with the Fabric sample test-network and it runs perfectly fine.
4. docker exec -it sh into explorer.mynetwork.com to ping the peer
The ping runs perfectly fine.
/opt/explorer # ping peer1.acme.com:7051
PING peer1.acme.com:7051 (172.23.0.6): 56 data bytes
64 bytes from 172.23.0.6: seq=0 ttl=64 time=0.138 ms
64 bytes from 172.23.0.6: seq=1 ttl=64 time=0.087 ms
64 bytes from 172.23.0.6: seq=2 ttl=64 time=0.090 ms
64 bytes from 172.23.0.6: seq=3 ttl=64 time=0.089 ms
64 bytes from 172.23.0.6: seq=4 ttl=64 time=0.101 ms
64 bytes from 172.23.0.6: seq=5 ttl=64 time=0.088 ms
^C
--- peer1.acme.com:7051 ping statistics ---
6 packets transmitted, 6 packets received, 0% packet loss
round-trip min/avg/max = 0.087/0.098/0.138 ms
It keeps showing that the peer is down or that the connection to the peer fails.
I have been struggling with this error for a few days now. I hope someone can help me identify the problem. Thank you very much.
I think you should double-check your network. Explorer should be spun up on the same network as Fabric, so that the peers and other nodes can be reached. To check the Fabric network name, look in the docker-compose file that set it up for CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE, or go to your CLI and run docker network ls. You should be able to find your Fabric network name; its DRIVER should be bridge.
Also, make sure that your Fabric network is up and running properly before bringing Explorer up.
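If you prefer to script that check, here is a hedged sketch using the Docker SDK for Python (pip install docker). The network name acloudfan_airline is taken from the compose file above; adjust it if yours differs.
# Hedged sketch: list the containers attached to the Fabric network, to confirm
# explorer.mynetwork.com and peer1.acme.com really share it.
import docker

client = docker.from_env()
network = client.networks.get("acloudfan_airline")  # name from the compose file above
network.reload()  # refresh attrs so the container list is populated
for info in network.attrs.get("Containers", {}).values():
    print(info["Name"])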
With a similar error, the problem I had was that the keys and certificates for the network had gotten out of sync with the keys and certs for the blockchain explorer. This is a particularly significant problem when running a test network, because the keys and certs will (AFAICT) be regenerated whenever you restart a network, and a test network restarts much more often than a production network.
Copy over everything into the organizations directory, or whatever you call it; repeat this every time you bring up the network you want to use the explorer on. Symlinks also work, though that is probably more brittle and I wouldn't want to rely on it in production.
All configurations seem good; however, you have to upgrade the Explorer version to be compatible with your Hyperledger Fabric version.
So please use v1.1.4 instead of v1.1.1.
Also make sure that you have mounted the crypto config correctly; try to access this path inside the container: /tmp/crypto/peerOrganizations/acme.com/tlsca/tlsca.acme.com-cert.pem
Try to change the tlsCACerts path to use the peer TLS ca.crt:
/tmp/crypto/peerOrganizations/acme.com/peers/peer1.acme.com/tls/ca.crt
You have mentioned that the same configuration works with Hyperledger Fabric v2; if you tried it locally and not on the same server, please disable the firewall on the server and give it a try.
To check whether you can reach the domain and port, please try this:
cat > /dev/tcp/peer1.acme.com/7051
Check this
https://support.bluemedora.com/s/article/Using-Bash-to-test-if-a-TCP-port-on-a-remote-system-is-open
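An equivalent check from Python, in case /dev/tcp is not available in the container's shell (a hedged sketch, run from inside the Explorer container):
# Hedged sketch: confirm the peer's gRPC port is reachable. Note that ping alone
# does not test this, since ping ignores the ":7051" part.
import socket

with socket.create_connection(("peer1.acme.com", 7051), timeout=3):
    print("TCP connection to peer1.acme.com:7051 succeeded")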
I've had this error
[INFO] FabricClient - Error : Failed to connect client peer, please check the configuration and peer status
...
Error : [ 'Default client peer is down and no channel details available database' ]
What I found was that when we use 256-bit vs 384-bit key length, the error occurs.
Setting local/host made no difference.
I'm using v1.1.4. Will likely test the same with v1.1.5 soon enough and suspect it'll have the same issue.

HAProxy Lua logging

I'm getting duplicate HAProxy log messages from my Lua script and don't understand why.
haproxy.cfg
global
    log /dev/log local0 warning
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
    stats timeout 30s
    user haproxy
    group haproxy
    daemon
    # Default SSL material locations
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private
    # See: https://ssl-config.mozilla.org/#server=haproxy&server-version=2.0.3&config=intermediate
    ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
    ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
    ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets
    lua-load /home/tester/hello.lua

defaults
    log global
    mode http
    option httplog
    option dontlognull
    timeout connect 5000
    timeout client 50000
    timeout server 50000
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http

frontend test_endpoint
    bind *:9202
    http-request lua.tester
hello.lua
function tester(txn)
    core.log(core.debug, "debug message!\n")
    core.log(core.info, "info message!\n")
    core.log(core.warning, "warning message!\n")
    core.log(core.err, "error message!\n")
end

core.register_action('tester', {'http-req'}, tester)
HAProxy was installed as a package and therefore writes to /var/log/haproxy.log by default on my Ubuntu system. This is what I see in the log:
Jan 25 05:47:23 ubuntu haproxy[65622]: warning message!.
Jan 25 05:47:23 ubuntu haproxy[65622]: error message!.
Jan 25 05:47:23 ubuntu haproxy[65615]: [info] 024/054723 (65622) : info message!.
Jan 25 05:47:23 ubuntu haproxy[65615]: [warning] 024/054723 (65622) : warning message!.
Jan 25 05:47:23 ubuntu haproxy[65615]: [err] 024/054723 (65622) : error message!.
I expected only the top 2 lines. Can anyone explain why the other lines appear in the log and how I can configure them out?
Thanks in advance!
for info:
# haproxy -v
HA-Proxy version 2.2.8-1ppa1~bionic 2021/01/14 - https://haproxy.org/
Status: long-term supported branch - will stop receiving fixes around Q2 2025.
Known bugs: http://www.haproxy.org/bugs/bugs-2.2.8.html
Running on: Linux 4.15.0-134-generic #138-Ubuntu SMP Fri Jan 15 10:52:18 UTC 2021 x86_64
UPDATE:
Looking at the hlua.c source code, I can see that the extra 3 lines come from stderr: the logging is sent both to the configured log and to stderr.
I had to add "-q" flag to ExecStart in /lib/systemd/system/haproxy.service. It now looks like this:
ExecStart=/usr/sbin/haproxy -Ws -f $CONFIG -p $PIDFILE -q $EXTRAOPTS
Note: adding "quiet" to the global section in haproxy.cfg did not work for me. Perhaps broken?

ERR_ADDRESS_INVALID trying to connect to docker container

I'm getting started with Docker using Docker Toolbox on Windows 10 Home. I'm experimenting with the scrapy-splash project (https://github.com/scrapy-plugins/scrapy-splash). I've installed the toolbox on a flash drive (E:).
If I understand correctly, I have installed Docker using Docker Toolbox. When I click on the Docker Quickstart Terminal, I get the output shown in the screenshot (not included here).
I ran:
$ docker-machine start
but when I run:
$ docker run -it scrapinghub/splash
2019-05-25 22:33:53+0000 [-] Log opened.
2019-05-25 22:33:53.053632 [-] Splash version: 3.3.1
2019-05-25 22:33:53.054895 [-] Qt 5.9.1, PyQt 5.9.2, WebKit 602.1, sip 4.19.4, Twisted 18.9.0, Lua 5.2
2019-05-25 22:33:53.055695 [-] Python 3.5.2 (default, Nov 12 2018, 13:43:14) [GCC 5.4.0 20160609]
2019-05-25 22:33:53.056773 [-] Open files limit: 1048576
2019-05-25 22:33:53.057319 [-] Can't bump open files limit
2019-05-25 22:33:53.165435 [-] Xvfb is started: ['Xvfb', ':1788299128', '-screen', '0', '1024x768x24', '-nolisten', 'tcp']
QStandardPaths: XDG_RUNTIME_DIR not set, defaulting to '/tmp/runtime-root'
2019-05-25 22:33:53.257636 [-] proxy profiles support is enabled, proxy profiles path: /etc/splash/proxy-profiles
2019-05-25 22:33:53.258827 [-] memory cache: enabled, private mode: enabled, js cross-domain access: disabled
2019-05-25 22:33:53.422507 [-] verbosity=1, slots=20, argument_cache_max_entries=500, max-timeout=90.0
2019-05-25 22:33:53.424799 [-] Web UI: enabled, Lua: enabled (sandbox: enabled)
2019-05-25 22:33:53.426021 [-] Site starting on 8050
2019-05-25 22:33:53.426778 [-] Starting factory <twisted.web.server.Site object at 0x7efcd8d8dcc0>
2019-05-25 22:33:53.427649 [-] Server listening on http://0.0.0.0:8050
I tried to open the browser at
http://0.0.0.0:8050
But I get the error in the title. What am I doing wrong?
Edit:
I had to remove the prior container with:
docker container ls
docker rm -f <container-name>
Then it worked at:
http://192.168.99.100:8050/
You missed the port publishing part:
docker run -it -p 8050:8050 scrapinghub/splash

uWSGI - unresponsive after reloading workers

I'm trying to let uWSGI refresh its workers, because they are leaking a bit of memory. However, after reloading the workers it's no longer responsive. Am I forgetting something?
uwsgi --http-socket 0.0.0.0:8000 --wsgi-file entry.py --processes 3 --master --req-logger file:/log/reqlog --logger file:/log/errlog --harakiri 15 --max-requests 3
max-requests 3 is to test the reloading:
mapped 291072 bytes (284 KB) for 3 cores
*** Operational MODE: preforking ***
2018-02-01 13:31:04,416 root [INFO] Starting
WSGI app 0 (mountpoint='') ready in 2 seconds on interpreter 0x17d9dc0 pid: 1 (default app)
*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI master process (pid: 1)
spawned uWSGI worker 1 (pid: 11, cores: 1)
spawned uWSGI worker 2 (pid: 12, cores: 1)
spawned uWSGI worker 3 (pid: 13, cores: 1)
flask@4078cdd3df37:/app$ curl localhost:8000
{"message": "ok"}
flask@4078cdd3df37:/app$ curl localhost:8000
{"message": "ok"}
flask@4078cdd3df37:/app$ curl localhost:8000
{"message": "ok"}
flask@4078cdd3df37:/app$ curl localhost:8000
{"message": "ok"}
flask@4078cdd3df37:/app$ curl localhost:8000
{"message": "ok"}
...The work of process 11 is done. Seeya!
flask@4078cdd3df37:/app$ curl localhost:8000
worker 1 killed successfully (pid: 11)
Respawned uWSGI worker 1 (new pid: 33)
{"message": "ok"}
flask@4078cdd3df37:/app$ curl localhost:8000
{"message": "ok"}
flask@4078cdd3df37:/app$ curl localhost:8000
{"message": "ok"}
...The work of process 13 is done. Seeya!
flask@4078cdd3df37:/app$ curl localhost:8000
{"message": "ok"}
...The work of process 12 is done. Seeya!
flask@4078cdd3df37:/app$ curl localhost:8000 --max-time 10
worker 3 killed successfully (pid: 13)
Respawned uWSGI worker 3 (new pid: 40)
worker 2 killed successfully (pid: 12)
Respawned uWSGI worker 2 (new pid: 43)
curl: (28) Operation timed out after 10001 milliseconds with 0 bytes received
flask@4078cdd3df37:/app$ curl localhost:8000 --max-time 10
curl: (28) Operation timed out after 10001 milliseconds with 0 bytes received
I.e., uWSGI is no longer responding (the connection stays open forever unless I use curl --max-time). How does uWSGI communicate internally? How does the master process know how to reach the workers? I think something is going wrong there.
I ran into this same issue. It appears that when the master flag is unset, this issue goes away. For those using the emperor, the 'master' flag is the one in the vassal configuration.
