I set the default timeouts like this:
hystrix:
  threadpool:
    default:
      coreSize: 500
      maxQueueSize: 1000
      queueSizeRejectionThreshold: 800
  command:
    default:
      execution:
        isolation:
          thread:
            timeoutInMilliseconds: 30000
Ribbon:
ribbon:
  eager-load:
    enabled: true
    clients: dcit-auth,dcit-service-upms,dcmd-service-demand
  Httpclient:
    enabled: false
  OkHttp:
    enabled: true
  ReadTimeout: 30000
  ConnectTimeout: 30000
Feign:
feign:
  hystrix:
    enabled: true
  okhttp:
    enabled: true
  httpclient:
    enabled: false
  client:
    config:
      feignName:
        connectTimeout: 30000
        readTimeout: 30000
  compression:
    request:
      enabled: true
    response:
      enabled: true
Hystrix metrics for my service:
"gauge.servo.hystrix.hystrixcommand.ribboncommand.myservice.propertyvalue_executiontimeoutinmilliseconds": 2000,
Every time myservice takes longer than 2 s to respond, it returns a 500 timeout error.
Why doesn't the timeout setting work?
I think you are hitting the Feign timeout.
Use feign.client.config.default instead of feign.client.config.feignName to apply those settings to all Feign clients in your application.
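A minimal sketch of what the corrected Feign section might look like; only the client config key changes from feignName to default, and the 30-second values are carried over from the question:
feign:
  hystrix:
    enabled: true
  okhttp:
    enabled: true
  client:
    config:
      default:
        connectTimeout: 30000
        readTimeout: 30000
Note that with feign.hystrix.enabled: true the Hystrix command timeout still wraps the call, so the hystrix.command.default.execution.isolation.thread.timeoutInMilliseconds you already set to 30000 must stay at least as large as the Feign read timeout.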
I have a Jenkins system running a whole bunch of tests in different sub projects.
The tests are standard karma/jasmine tests, there's around 1100 of them.
When run locally everything is fine; however, when I run on Jenkins (it's a bit of a slow box), things get a bit confused:
02 09 2022 10:56:56.958:INFO [Chrome Headless 88.0.4324.150 (Linux x86_64)]: Connected on socket bo3DxF9BnavPH6nwAAAD with id 94495247
02 09 2022 10:56:57.126:INFO [Chrome Headless 88.0.4324.150 (Linux x86_64)]: Connected on socket LdcMV2pzqr9wLn-hAAAF with id 55577246
My karma.conf is as follows:
module.exports = function (config) {
config.set({
basePath: '',
frameworks: ['jasmine', '@angular-devkit/build-angular'],
plugins: [
require('karma-jasmine'),
require('karma-chrome-launcher'),
require('karma-jasmine-html-reporter'),
require('karma-jasmine-seed-reporter'),
require('karma-coverage'),
require('@angular-devkit/build-angular/plugins/karma')
],
client: {
clearContext: false, // leave Jasmine Spec Runner output visible in browser
captureConsole: false // switch this flag to true if console log is required during Karma Runner
},
coverageReporter: {
dir: require('path').join(__dirname, '../../coverage/project'),
subdir: 'report',
check: {
global: {
statements: 80,
lines: 80,
branches: 80,
functions: 80
}
},
reporters: [
{ type: 'json' },
{ type: 'lcovonly' },
{ type: 'text-summary' },
],
includeAllSources: true
},
reporters: ['progress', 'kjhtml', 'jasmine-seed'],
port: 9876,
colors: true,
logLevel: config.LOG_INFO,
autoWatch: true,
browsers: ['ChromeHeadless'],
singleRun: true,
restartOnFileChange: false,
captureTimeout: 240000,
browserSocketTimeout: 240000,
browserNoActivityTimeout: 240000,
browserDisconnectTimeout: 240000,
customLaunchers: {
ChromeHeadless: {
base: 'Chrome',
flags: [
'--headless',
'--disable-gpu',
'--no-sandbox',
'--remote-debugging-port=9222']
}
}
});
};
The problem gets more confusing halfway through, when both browsers report on what they are doing:
Chrome Headless 88.0.4324.150 (Linux x86_64): Executed 40 of 71 SUCCESS (0 secs / 0.232 secs)
Chrome Headless 88.0.4324.150 (Linux x86_64): Executed 0 of 71 SUCCESS (0 secs / 0 secs)
Has anyone had this problem, and if so, what was the solution? As I said, the tests are very basic Jasmine ones, so it shouldn't kick off a whole bunch of browsers.
SO help appreciated as ever.
I'm working on a CDK deployment of a DNS server using a pair of FargateServices behind a NetworkLoadBalancer. Since Fargate can't expose the same port as both TCP and UDP, this requires two separate services, one for tcp/53 and one for udp/53.
Defining and deploying the TCP service works just fine:
const taskDefTCP = new TaskDefinition(this, 'TaskDefTCP', {
compatibility: Compatibility.FARGATE,
cpu: '256',
memoryMiB: '512',
});
taskDefTCP.addToTaskRolePolicy(new PolicyStatement({
actions: [
'ssmmessages:CreateControlChannel',
'ssmmessages:CreateDataChannel',
'ssmmessages:OpenControlChannel',
'ssmmessages:OpenDataChannel'
],
resources: ['*'],
}));
taskDefTCP.taskRole.addManagedPolicy(ManagedPolicy.fromAwsManagedPolicyName('AmazonSSMManagedInstanceCore'));
const containerTCP = taskDefTCP.addContainer('ContainerTCP', {
image: ContainerImage.fromEcrRepository(repository),
portMappings: [{
containerPort: 53,
hostPort: 53,
protocol: ecsProtocol.TCP,
}],
environment: {
"AWS_ENVIRONMENT": 'DEV',
},
logging: LogDrivers.awsLogs({
logGroup: assets.dnsLogGroup,
streamPrefix: 'dns',
})
});
this.serviceSecurityGroup = new SecurityGroup(this, 'ServiceSecurityGroup', {
vpc: assets.vpc,
allowAllOutbound: true, // TODO: Lock this down.
});
this.serviceSecurityGroup.addIngressRule(Peer.anyIpv4(), Port.tcp(53), "TCP Queries");
this.serviceSecurityGroup.addIngressRule(Peer.anyIpv4(), Port.udp(53), "UDP Queries");
this.dnsServiceTCP = new FargateService(this, 'ServiceTCP', {
cluster: cluster,
enableExecuteCommand: true,
assignPublicIp: false,
taskDefinition: taskDefTCP,
securityGroups: [this.serviceSecurityGroup],
vpcSubnets: {
subnets: assets.mycorpNetworkResources.getSubnets(NetworkEnvironment.DEV, SubnetType.DNS),
}
});
const autoScaleTCP = this.dnsServiceTCP.autoScaleTaskCount({maxCapacity: 2, minCapacity: 1});
If I add the same code, copy/pasted from the working code above with TCP changed to UDP, I get an error:
Container 'AuthDNSApplicationStack/TaskDefUDP/ContainerUDP' has no mapping for port undefined and protocol tcp. Did you call "container.addPortMappings()"?
Of course it has no mapping for TCP. It's a UDP container! Here's the code that, when added, produces the above error:
const taskDefUDP = new TaskDefinition(this, 'TaskDefUDP', {
compatibility: Compatibility.FARGATE,
cpu: '256',
memoryMiB: '512',
});
taskDefUDP.addToTaskRolePolicy(new PolicyStatement({
actions: [
'ssmmessages:CreateControlChannel',
'ssmmessages:CreateDataChannel',
'ssmmessages:OpenControlChannel',
'ssmmessages:OpenDataChannel'
],
resources: ['*'],
}));
taskDefUDP.taskRole.addManagedPolicy(ManagedPolicy.fromAwsManagedPolicyName('AmazonSSMManagedInstanceCore'));
const containerUDP = taskDefUDP.addContainer('ContainerUDP', {
image: ContainerImage.fromEcrRepository(repository),
portMappings: [{
containerPort: 53,
hostPort: 53,
protocol: ecsProtocol.UDP,
}],
environment: {
"AWS_ENVIRONMENT": 'DEV',
},
logging: LogDrivers.awsLogs({
logGroup: assets.dnsLogGroup,
streamPrefix: 'dns',
})
});
this.dnsServiceUDP = new FargateService(this, 'ServiceUDP', {
cluster: cluster,
enableExecuteCommand: true,
assignPublicIp: true,
taskDefinition: taskDefUDP,
securityGroups: [this.serviceSecurityGroup],
vpcSubnets: {
subnets: assets.mycorpNetworkResources.getSubnets(NetworkEnvironment.DEV, SubnetType.DNS),
}
});
const autoScaleUDP = this.dnsServiceUDP.autoScaleTaskCount({maxCapacity: 2, minCapacity: 1});
Anyone know where I'm going wrong here?
When adding the dnsServiceUDP service to the NLB with addTargets, explicitly pass protocol: elbv2.Protocol.UDP in the AddNetworkTargetsProps.
The target protocol defaults to TCP if not explicitly provided, which is why the container is asked for a TCP port mapping it doesn't have.
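A minimal sketch of the listener wiring; the NLB and construct ids here are assumptions (not taken from the question), and the import assumes CDK v2's aws-cdk-lib, so adjust to your setup:
import { NetworkLoadBalancer, Protocol } from 'aws-cdk-lib/aws-elasticloadbalancingv2';

// Hypothetical NLB for the DNS services.
const nlb = new NetworkLoadBalancer(this, 'DnsNlb', { vpc: assets.vpc });

// UDP listener; state the protocol on both the listener and the targets,
// otherwise each defaults to TCP and the UDP container has no matching port mapping.
const udpListener = nlb.addListener('UdpListener', {
  port: 53,
  protocol: Protocol.UDP,
});

udpListener.addTargets('UdpTargets', {
  port: 53,
  protocol: Protocol.UDP, // without this, the target group is created as TCP
  targets: [this.dnsServiceUDP],
});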
I am using Nightwatch with BrowserStack to run a test suite on MS Edge 17, and I am getting the following error in most builds.
Error retrieving a new session from the selenium server
Connection refused! Is selenium server started?
{
value: { message: 'Could not start Browser / Emulator' },
sessionId: '',
status: 13
}
We have exactly the same code running on chrome browser in BrowserStack without any problems.
This suite runs in parallel using workers. If I disable workers, it passes; unfortunately, we need parallel execution, otherwise the suite would take too long.
I will post our config file below.
var seleniumJar = require('selenium-server-standalone-jar')
var nightwatchWorkers = parseInt(process.env.NIGHTWATCH_WORKER_THREADS) || 1
var chromeDriver = process.env.CHROME_DRIVER_PATH || './drivers/chromedriver'
var msEdgeDriverPath = process.env.MSEDGE_DRIVER_PATH || './drivers/MicrosoftWebDriver.exe'
var ie11DriverPath = process.env.MSIE11_DRIVER_PATH || './drivers/IEDriverServer.exe'
var localDockerPort = 4444
var browserStackPort = 80
var browserStackHost = 'hub-cloud.browserstack.com'
var browserStackScreenResolution = '1920x1200'
module.exports = {
src_folders: ['./tests'],
output_folder: './results',
live_output: true, //set to false so each thread will output the whole result of the test when it's done
silent: true,
custom_commands_path: './commands',
custom_assertions_path: './assertions',
page_objects_path: './pages',
test_workers: {
enabled: true,
workers: nightwatchWorkers
},
selenium: {
start_process: false,
server_path: seleniumJar.path,
log_path: './results',
port: localDockerPort,
cli_args: {
'webdriver.chrome.driver': chromeDriver,
'webdriver.edge.driver': msEdgeDriverPath,
'webdriver.ie.driver': ie11DriverPath
}
},
test_settings: {
default: {
launch_url: 'http://localhost',
selenium_host: '127.0.0.1',
selenium_port: localDockerPort,
silent: true,
disable_colors: false,
screenshots: {
enabled: true,
on_failure: true,
path: './results/screenshots'
},
desiredCapabilities: {
browserName: 'chrome',
resolution: browserStackScreenResolution,
javascriptEnabled: true,
acceptSslCerts: true,
elementScrollBehavior: 1,
project: 'megarepo',
build: process.env.BROWSERSTACK_BUILD_ID,
'browserstack.user': process.env.BROWSERSTACK_USER,
'browserstack.key': process.env.BROWSERSTACK_KEY
},
globals: require('./data/dev')
},
browserstack_msedge: {
selenium: {
port: browserStackPort
},
selenium_host: browserStackHost,
selenium_port: browserStackPort,
detailed_output: false,
desiredCapabilities: {
os: 'Windows',
os_version: '10',
browserName: 'Edge',
browser_version: '17.0',
resolution: browserStackScreenResolution
}
}
}
}
I have parse-server and parse-dashboard installed with pm2 in a Docker container on my Synology NAS, as shown below:
PC NIC1 (192.168.1.100) <------ Router ------> (192.168.1.2) NIC1 DSM
PC NIC2 (10.10.0.100)  <--- Peer-2-Peer --->  (10.10.0.2)  NIC2 DSM
Docker runs on the DSM; the container's internal address is 172.17.0.2.
For reference, for the pm2 setup I'm following this tutorial.
Here is my pm2 ecosystem for parse-server and parse-dashboard (defined in pm2's ecosystem.json):
{
"apps" : [{
"name" : "parse-wrapper",
"script" : "/usr/bin/parse-server",
"watch" : true,
"merge_logs" : true,
"cwd" : "/home/parse",
"args" : "--serverURL http://localhost:1337/parse",
"env": {
"PARSE_SERVER_CLOUD_CODE_MAIN": "/home/parse/cloud/main.js",
"PARSE_SERVER_DATABASE_URI": "mongodb://localhost/test",
"PARSE_SERVER_APPLICATION_ID": "aeb932b93a9125c19df2fcaf3fd69fcb",
"PARSE_SERVER_MASTER_KEY": "358898112f354f8db593ea004ee88fed",
}
},{
"name" : "parse-dashboard-wrapper",
"script" : "/usr/bin/parse-dashboard",
"watch" : true,
"merge_logs" : true,
"cwd" : "/home/parse/parse-dashboard",
"args" : "--config /home/parse/parse-dashboard/config.json --allowInsecureHTTP=1"
}]
}
Here is my parse-dashboard config (/home/parse/parse-dashboard/config.json):
{
"apps": [{
"serverURL": "http://172.17.0.2:1337/parse",
"appId": "aeb932b93a9125c19df2fcaf3fd69fcb",
"masterKey": "358898112f354f8db593ea004ee88fed",
"appName": "Parse Server",
"iconName": "",
"primaryBackgroundColor": "",
"secondaryBackgroundColor": ""
}],
"users": [{
"user": "user",
"pass": "1234"
}],
"iconsFolder": "icons"
}
Once I run pm2 start ecosystem.json, here is the parse-server log (pm2 logs 0):
masterKey: ***REDACTED***
serverURL: http://localhost:1337/parse
masterKeyIps: []
logsFolder: ./logs
databaseURI: mongodb://localhost/test
userSensitiveFields: ["email"]
enableAnonymousUsers: true
allowClientClassCreation: true
maxUploadSize: 20mb
customPages: {}
sessionLength: 31536000
expireInactiveSessions: true
revokeSessionOnPasswordReset: true
schemaCacheTTL: 5000
cacheTTL: 5000
cacheMaxSize: 10000
objectIdSize: 10
port: 1337
host: 0.0.0.0
mountPath: /parse
scheduledPush: false
collectionPrefix:
verifyUserEmails: false
preventLoginWithUnverifiedEmail: false
enableSingleSchemaCache: false
jsonLogs: false
verbose: false
level: undefined
[1269] parse-server running on http://localhost:1337/parse
Here is the parse-dashboard log (pm2 logs 1):
The dashboard is now available at http://0.0.0.0:4040/
Once it's running, I can access 192.168.1.2:1337/parse (which returns {"error":"unauthorized"}),
and I can access 192.168.1.2:4040,
but it returns:
Server not reachable: unable to connect to server
I've seen that a lot of these issues are solved by changing the parse-dashboard config from "serverURL": "http://localhost:1337/parse" to "serverURL": "http://172.17.0.2:1337/parse",
but for me it's still no luck...
Any idea what I've been missing here?
So apparently, in my /home/parse/parse-dashboard/config.json,
"serverURL": "http://172.17.0.2:1337/parse"
should instead point to my DSM machine, i.e.:
"serverURL": "http://192.168.1.2:1337/parse"
I'm trying to create a VM for developing a Zend Framework 2 application.
I'm not very comfortable with that, so I'm trying to do it with PuPHPet.
But I'm actually running into a lot of different problems.
One small detail: I'm on Mac (10.11.1), and I'm using VirtualBox (5.0.6).
First, here is my PuPHPet configuration file:
vagrantfile:
target: local
vm:
box: puphpet/ubuntu1404-x64
box_url: puphpet/ubuntu1404-x64
hostname: local.blog
memory: '1024'
cpus: '2'
chosen_provider: virtualbox
network:
private_network: 192.168.56.101
forwarded_port:
vflnp_nfdzn8w8a4h3:
host: '6757'
guest: '22'
post_up_message: ''
provider:
virtualbox:
modifyvm:
natdnshostresolver1: 'on'
showgui: '0'
vmware:
numvcpus: 1
parallels:
cpus: 1
provision:
puppet:
manifests_path: puphpet/puppet
manifest_file: site.pp
module_path: puphpet/puppet/modules
options:
- '--verbose'
- '--hiera_config /vagrant/puphpet/puppet/hiera.yaml'
- '--parser future'
synced_folder:
vflsf_knbv8t2xmfrp:
source: ./
target: /var/www/
sync_type: nfs
smb:
smb_host: ''
smb_username: ''
smb_password: ''
rsync:
args:
- '--verbose'
- '--archive'
- '-z'
exclude:
- .vagrant/
- .git/
auto: 'true'
owner: www-data
group: www-data
usable_port_range:
start: 10200
stop: 10500
ssh:
host: null
port: null
private_key_path: null
username: vagrant
guest_port: null
keep_alive: true
forward_agent: false
forward_x11: false
shell: 'bash -l'
vagrant:
host: detect
server:
install: '1'
packages: { }
users_groups:
install: '1'
groups: { }
users: { }
locale:
install: '1'
settings:
default_locale: fr_FR.UTF-8
locales:
- fr_FR.UTF-8
firewall:
install: '1'
rules: { }
cron:
install: '1'
jobs: { }
nginx:
install: '0'
settings:
default_vhost: 1
proxy_buffer_size: 128k
proxy_buffers: '4 256k'
upstreams: { }
vhosts:
nxv_j6ygd9l3rlpb:
server_name: local.recipes
server_aliases:
- local.recipes
www_root: /var/www/public
listen_port: '80'
index_files:
- index.html
- index.htm
- index.php
client_max_body_size: 1m
ssl: '0'
ssl_cert: ''
ssl_key: ''
ssl_port: '443'
ssl_protocols: ''
ssl_ciphers: ''
rewrite_to_https: '1'
spdy: '1'
locations:
nxvl_zdupydqyeec0:
location: /
autoindex: 'off'
internal: 'false'
try_files:
- $uri
- $uri/
- /index.php$is_args$args
fastcgi: ''
fastcgi_index: ''
fastcgi_split_path: ''
nxvl_w6tj6ii33ek6:
location: '~ \.php$'
autoindex: 'off'
internal: 'false'
try_files:
- $uri
- $uri/
- /index.php$is_args$args
fastcgi: '127.0.0.1:9000'
fastcgi_index: index.php
fastcgi_split_path: '^(.+\.php)(/.*)$'
fast_cgi_params_extra:
- 'SCRIPT_FILENAME $request_filename'
- 'APP_ENV dev'
proxies: { }
apache:
install: '1'
settings:
user: www-data
group: www-data
default_vhost: true
manage_user: false
manage_group: false
sendfile: 0
modules:
- proxy_fcgi
- rewrite
vhosts:
av_tordbapk4fv1:
servername: local.blog
serveraliases:
- www.local.blog
docroot: /var/www/blog/public
port: '80'
setenv:
- 'APP_ENV dev'
custom_fragment: ''
ssl: '0'
ssl_cert: ''
ssl_key: ''
ssl_chain: ''
ssl_certs_dir: ''
ssl_protocol: ''
ssl_cipher: ''
php:
install: '1'
settings:
version: '70'
modules:
php:
- cli
- intl
- mcrypt
pear: { }
pecl: { }
ini:
display_errors: 'On'
error_reporting: '-1'
session.save_path: /var/lib/php/session
date.timezone: UTC
fpm_ini:
error_log: /var/log/php-fpm.log
fpm_pools:
phpfp_8dskxs4sc6bp:
ini:
prefix: www
listen: '127.0.0.1:9000'
security.limit_extensions: .php
user: www-user
group: www-data
composer: '1'
composer_home: ''
xdebug:
install: '1'
settings:
xdebug.default_enable: '1'
xdebug.remote_autostart: '0'
xdebug.remote_connect_back: '1'
xdebug.remote_enable: '1'
xdebug.remote_handler: dbgp
xdebug.remote_port: '9000'
blackfire:
install: '0'
settings:
server_id: ''
server_token: ''
agent:
http_proxy: ''
https_proxy: ''
log_file: stderr
log_level: '1'
php:
agent_timeout: '0.25'
log_file: ''
log_level: '1'
xhprof:
install: '0'
wpcli:
install: '0'
version: v0.19.0
drush:
install: '0'
version: 6.3.0
ruby:
install: '1'
versions: { }
python:
install: '1'
packages: { }
versions: { }
nodejs:
install: '0'
npm_packages: { }
hhvm:
install: '0'
nightly: 0
composer: '1'
composer_home: ''
settings: { }
server_ini:
hhvm.server.host: 127.0.0.1
hhvm.server.port: '9000'
hhvm.log.use_log_file: '1'
hhvm.log.file: /var/log/hhvm/error.log
php_ini:
display_errors: 'On'
error_reporting: '-1'
date.timezone: UTC
mysql:
install: '1'
settings:
version: '5.6'
root_password: christina
override_options: { }
adminer: 0
users: { }
databases:
mysqlnd_dl3m722heu31:
name: blogdata
sql: ''
grants: { }
mariadb:
install: '0'
settings:
version: '10.0'
root_password: '123'
override_options: { }
adminer: 0
users:
mariadbnu_17honh69lm86:
name: dbuser
password: '123'
databases:
mariadbnd_l1rsvnj0ghw1:
name: dbname
sql: ''
grants:
mariadbng_y345jmggx662:
user: dbuser
table: '*.*'
privileges:
- ALL
postgresql:
install: '0'
settings:
global:
encoding: UTF8
version: '9.3'
server:
postgres_password: '123'
databases: { }
users: { }
grants: { }
adminer: 0
mongodb:
install: '0'
settings:
auth: 1
bind_ip: 127.0.0.1
port: '27017'
databases: { }
redis:
install: '0'
settings:
conf_port: '6379'
sqlite:
install: '0'
adminer: 0
databases: { }
mailcatcher:
install: '0'
settings:
smtp_ip: 0.0.0.0
smtp_port: 1025
http_ip: 0.0.0.0
http_port: '1080'
mailcatcher_path: /usr/local/rvm/wrappers/default
from_email_method: inline
beanstalkd:
install: '0'
settings:
listenaddress: 0.0.0.0
listenport: '11300'
maxjobsize: '65535'
maxconnections: '1024'
binlogdir: /var/lib/beanstalkd/binlog
binlogfsync: null
binlogsize: '10485760'
beanstalk_console: 0
rabbitmq:
install: '0'
settings:
port: '5672'
users: { }
vhosts: { }
plugins: { }
elastic_search:
install: '0'
settings:
version: 1.4.1
java_install: true
solr:
install: '0'
settings:
version: 4.10.2
port: '8984'
During vagrant up, I get a lot of red errors, and at the end, this:
The SSH command responded with a non-zero exit status. Vagrant assumes
that this means the command failed. The output for this command should
be in the log above. Please read the output to determine what went
wrong.
But if I try to go to the IP http://192.168.56.101/, I get the Apache page:
Apache2 Ubuntu Default Page, [...]
You should replace this file (located at /var/www/html/index.html)
before continuing to operate your HTTP server.
This file is in the PuPHPet html folder, but for Zend, I need it to point to the public/index.php file.
Here is the directory tree of my ZF project:
Don't use the box's IP address. Use vhosts and add an entry into your computer's hosts file, then access that URL in your browser.
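For example, based on the Apache vhost defined in the config above (servername local.blog, docroot /var/www/blog/public) and the box's private IP 192.168.56.101, the entry in the Mac's /etc/hosts would look something like this:
192.168.56.101  local.blog  www.local.blog
Then browse to http://local.blog/ instead of the raw IP; Apache matches the Host header against that vhost and serves the site from /var/www/blog/public (where Zend's public/index.php lives), rather than falling back to the default vhost at /var/www/html.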