Can't fix Postfix 421 "timeout exceeded" errors

I am continuously getting 421 "Error: timeout exceeded (in reply to end of DATA command)" when sending to two other servers on port 25. I have read through many pages, but nothing works. Here is what happens with traceroute. It reaches the base domain, but stalls before reaching the mail hostname:
root@me1.net:~# traceroute -n -T -p 25 me2.xyz
traceroute to me2.xyz (me.me.me.me), 30 hops max, 60 byte packets
1 45.11.183.1 3.861 ms 4.052 ms 4.032 ms
2 212.7.30.161 2.166 ms 1.985 ms 2.052 ms
3 212.7.29.65 0.863 ms 1.922 ms 2.157 ms
4 87.245.242.68 0.432 ms 0.599 ms 0.570 ms
5 87.245.233.74 7.276 ms 7.132 ms 7.093 ms
6 194.68.123.194 9.052 ms 8.829 ms 194.68.128.194 7.765 ms
7 * * *
8 me.me.me.me 8.357 ms 11.145 ms 8.023 ms
root@me1.net:~# traceroute -n -T -p 25 mail.me2.xyz
traceroute to mail.me2.xyz (me.me.me.me), 30 hops max, 60 byte packets
1 45.11.183.1 3.933 ms 3.649 ms 3.528 ms
2 212.7.30.161 1.765 ms 1.736 ms 1.540 ms
3 212.7.29.65 1.368 ms 1.095 ms 1.395 ms
4 87.245.242.68 0.624 ms 0.568 ms 0.522 ms
5 87.245.233.74 6.979 ms 6.860 ms 7.064 ms
6 194.68.123.194 7.515 ms 194.68.128.194 8.178 ms *
7 * * *
8 * * *
9 * * *
10 * * *
11 * * *
12 * * *
13 * * *
14 * * *
15 * * *
16 * * *
17 * * *
18 * * *
19 * * *
20 * * *
21 * * *
22 * * *
23 * * *
24 * * *
25 * * *
26 * * *
27 * * *
28 * * *
29 * * *
30 * * *
root@me1.net:/etc/postfix#
Does anyone know why it stalls out going to mail.me2.xyz?
What is crazy is that it didn't have this problem until I moved the server to another VPS.
Thanks

The only way I found to fix this was to lower the MTU on my server, from the default 1500 down to 1000. (This is a classic symptom of a path-MTU-discovery black hole: the small handshake packets get through, but the large packets of the DATA phase are silently dropped, so the session times out.) Don't go below 500; I eventually settled on 700.
Show current MTU:
echo "$(ifconfig | sed '1!d' | awk '{print "MTU: " $4}')" #see MTU
Set MTU & restart network:
sudo ifconfig eth0 mtu 1000
sudo /etc/init.d/networking restart
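On newer distributions where ifconfig is deprecated, the iproute2 equivalents should work as well; a minimal sketch, assuming the interface is named eth0:
ip link show eth0                    # current MTU is shown in the first line of output
sudo ip link set dev eth0 mtu 1000   # takes effect immediately but is not persistent
To make the change survive a reboot on a Debian-style system, you can instead add an mtu 1000 line to the eth0 stanza in /etc/network/interfaces.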

Related

Extract text from wrk output

I'm running a load test with wrk2 as a job on Jenkins. I'd like to send the results of the load test to Graylog but I only want to store the Requests/Sec and average latency.
Here's what the output looks like:
Running 30s test @ https://example.com
1 threads and 100 connections
Thread calibration: mean lat.: 8338.285ms, rate sampling interval: 19202ms
Thread Stats Avg Stdev Max +/- Stdev
Latency 16.20s 6.17s 29.64s 65.74%
Req/Sec 5.00 0.00 5.00 100.00%
Latency Distribution (HdrHistogram - Recorded Latency)
50.000% 15.72s
75.000% 20.81s
90.000% 24.58s
99.000% 29.13s
99.900% 29.66s
99.990% 29.66s
99.999% 29.66s
100.000% 29.66s
Detailed Percentile spectrum:
Value Percentile TotalCount 1/(1-Percentile)
4497.407 0.000000 1 1.00
7561.215 0.100000 11 1.11
11100.159 0.200000 22 1.25
12582.911 0.300000 33 1.43
14565.375 0.400000 44 1.67
15720.447 0.500000 54 2.00
16416.767 0.550000 60 2.22
17301.503 0.600000 65 2.50
18464.767 0.650000 71 2.86
19185.663 0.700000 76 3.33
20807.679 0.750000 81 4.00
21479.423 0.775000 84 4.44
22347.775 0.800000 87 5.00
22527.999 0.825000 90 5.71
23216.127 0.850000 93 6.67
23478.271 0.875000 95 8.00
23805.951 0.887500 96 8.89
24723.455 0.900000 98 10.00
25067.519 0.912500 99 11.43
25395.199 0.925000 101 13.33
26525.695 0.937500 102 16.00
26525.695 0.943750 102 17.78
26705.919 0.950000 103 20.00
28065.791 0.956250 104 22.86
28065.791 0.962500 104 26.67
28377.087 0.968750 105 32.00
28377.087 0.971875 105 35.56
28475.391 0.975000 106 40.00
28475.391 0.978125 106 45.71
28475.391 0.981250 106 53.33
29130.751 0.984375 107 64.00
29130.751 0.985938 107 71.11
29130.751 0.987500 107 80.00
29130.751 0.989062 107 91.43
29130.751 0.990625 107 106.67
29655.039 0.992188 108 128.00
29655.039 1.000000 108 inf
#[Mean = 16199.756, StdDeviation = 6170.105]
#[Max = 29638.656, Total count = 108]
#[Buckets = 27, SubBuckets = 2048]
----------------------------------------------------------
130 requests in 30.02s, 13.44MB read
Socket errors: connect 0, read 0, write 0, timeout 1192
Requests/sec: 4.33
Transfer/sec: 458.47KB
Does anyone know how I could go about extracting Requests/sec (at the bottom) and the latency average to send as JSON parameters?
The expected output would be: "latency": 16.2, "requests_per_second": 4.33
You didn't provide the expected output, so your question isn't clear, but is this what you want?
$ awk 'BEGIN{a["Latency"]; a["Requests/sec:"]} ($1 in a) && ($2 ~ /[0-9]/){print $1, $2}' file
Latency 16.20s
Requests/sec: 4.33
Updated now that you've added the expected output to your question:
$ awk '
BEGIN { map["Latency"]="latency"; map["Requests/sec:"]="requests_per_second" }
($1 in map) && ($2 ~ /[0-9]/) { printf "%s\"%s\": %s", ofs, map[$1], $2+0; ofs=", " }
END { print "" }
' file
"latency": 16.2, "requests_per_second": 4.33

Decode UDP message with Lua

I'm relatively new to Lua and programming in general (self-taught), so please be gentle!
Anyway, I wrote a Lua script to read a UDP message from a game. The structure of the message is:
DATAxXXXXaaaaBBBBccccDDDDeeeeFFFFggggHHHH
DATA = the 4-letter ID and x = a control character
XXXX = an integer identifying the group of the data (groups are known)
aaaa...HHHH = 8 single-precision floating-point numbers
Those last eight numbers are the ones I need to decode.
If I print the message as received, it's something like:
DATA*{V???A?A?...etc.
Using string.byte(), I'm getting a stream of bytes like this (I have "formatted" the bytes to reflect the structure above):
68 65 84 65/42/20 0 0 0/237 222 28 66/189 59 182 65/107 42 41 65/33 173 79 63/0 0 128 63/146 41 41 65/0 0 30 66/0 0 184 65
The first 5 bytes are of course the DATA*. The next 4 are the 20th group of data. The bytes after that are the ones I need to decode, and they correspond to these values:
237 222 28 66 = 39.218
189 59 182 65 = 22.779
107 42 41 65 = 10.573
33 173 79 63 = 0.8114
0 0 128 63 = 1.0000
146 41 41 65 = 10.573
0 0 30 66 = 39.500
0 0 184 65 = 23.000
I've found C# code that does the decoding with BitConverter.ToSingle(), but I haven't found anything like it for Lua.
Any ideas?
What Lua version do you have?
This code works in Lua 5.3:
local str = "DATA*\20\0\0\0\237\222\28\66\189\59\182\65..."
-- Read two float values starting from position 10 in the string
print(string.unpack("<ff", str, 10)) --> 39.217700958252 22.779169082642 18
-- 18 (third returned value) is the next position in the string
For Lua 5.1 you have to write a special function (or borrow one from François Perrad's git repo):
local function binary_to_float(str, pos)
  -- Assemble an IEEE-754 single from four little-endian bytes
  local b1, b2, b3, b4 = str:byte(pos, pos+3)
  local sign = b4 > 0x7F and -1 or 1
  local expo = (b4 % 0x80) * 2 + math.floor(b3 / 0x80)
  local mant = ((b3 % 0x80) * 0x100 + b2) * 0x100 + b1
  local n
  if mant + expo == 0 then
    n = sign * 0.0
  elseif expo == 0xFF then
    n = (mant == 0 and sign or 0) / 0  -- +/-infinity or NaN
  else
    n = sign * (1 + mant / 0x800000) * 2.0^(expo - 0x7F)
  end
  return n
end
local str = "DATA*\20\0\0\0\237\222\28\66\189\59\182\65..."
print(binary_to_float(str, 10)) --> 39.217700958252
print(binary_to_float(str, 14)) --> 22.779169082642
It’s little-endian byte-order of IEEE-754 single-precision binary:
E.g., 0 0 128 63 is:
00111111 10000000 00000000 00000000
(63) (128) (0) (0)
Why that equals 1 requires that you understand the very basics of the IEEE-754 representation, namely its use of an exponent and a mantissa. See here to start.
See @Egor's answer above for how to use string.unpack() in Lua 5.3, and for one possible implementation you could use in earlier versions.
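Putting the pieces together for the full message layout from the question, here is a sketch for Lua 5.3 that skips the 9-byte header (the 5-byte "DATA" ID plus control character, then the 4-byte group integer) and decodes all eight floats:
-- Decode one complete message: ID, group number, and the eight floats
local function decode_message(msg)
  local id = msg:sub(1, 5)                    -- "DATA" plus the control char
  local group = string.unpack("<I4", msg, 6)  -- 4-byte little-endian integer
  local values, pos = {}, 10
  for i = 1, 8 do
    values[i], pos = string.unpack("<f", msg, pos)
  end
  return id, group, values
end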

ESP8266 RTOS blink example doesn't work

I have a problem with the RTOS firmware on the ESP8266 (I have an ESP-12E). After flashing the firmware and reading from the UART, it stays stuck at these lines:
ets Jan 8 2013,rst cause:2, boot mode:(3,0)
load 0x40100000, len 31584, room 16
tail 0
chksum 0x24
load 0x3ffe8000, len 944, room 8
tail 8
chksum 0x9e
load 0x3ffe83b0, len 1080, room 0
tail 8
chksum 0x60
csum 0x60
Now I will explain my HW setup:
GPIO15 -> Gnd
EN -> Vcc
GPIO0 -> Gnd (when flashing)
GPIO0 -> Vcc (normal mode)
For the toolchain I've followed this tutorial and it works well:
http://microcontrollerkits.blogspot.it/2015/12/esp8266-eclipse-development.html
Then I started on my RTOS blink example; I post my user_main.c code here:
#include "esp_common.h"
#include "gpio.h"
void task2(void *pvParameters)
{
printf("Hello, welcome to client!\r\n");
while(1)
{
// Delay and turn on
vTaskDelay (300/portTICK_RATE_MS);
GPIO_OUTPUT_SET (5, 1);
// Delay and LED off
vTaskDelay (300/portTICK_RATE_MS);
GPIO_OUTPUT_SET (5, 0);
}
}
/******************************************************************************
 * FunctionName : user_rf_cal_sector_set
 * Description  : The SDK just reserves 4 sectors, used for rf init data and parameters.
 *                We add this function to force users to set the rf cal sector, since
 *                we don't know which sector is free in the user's application.
 *                Sector map for the last several sectors: ABCCC
 *                A : rf cal
 *                B : rf init data
 *                C : sdk parameters
 * Parameters   : none
 * Returns      : rf cal sector
 ******************************************************************************/
uint32 user_rf_cal_sector_set(void)
{
    flash_size_map size_map = system_get_flash_size_map();
    uint32 rf_cal_sec = 0;

    switch (size_map) {
    case FLASH_SIZE_4M_MAP_256_256:
        rf_cal_sec = 128 - 5;
        break;
    case FLASH_SIZE_8M_MAP_512_512:
        rf_cal_sec = 256 - 5;
        break;
    case FLASH_SIZE_16M_MAP_512_512:
    case FLASH_SIZE_16M_MAP_1024_1024:
        rf_cal_sec = 512 - 5;
        break;
    case FLASH_SIZE_32M_MAP_512_512:
    case FLASH_SIZE_32M_MAP_1024_1024:
        rf_cal_sec = 1024 - 5;
        break;
    default:
        rf_cal_sec = 0;
        break;
    }
    return rf_cal_sec;
}
/******************************************************************************
 * FunctionName : user_init
 * Description  : entry of user application, init user function here
 * Parameters   : none
 * Returns      : none
 ******************************************************************************/
void user_init(void)
{
    uart_init_new();
    printf("SDK version:%s\n", system_get_sdk_version());
    // Configure the pin as GPIO5
    PIN_FUNC_SELECT(PERIPHS_IO_MUX_GPIO5_U, FUNC_GPIO5);
    xTaskCreate(task2, "tsk2", 256, NULL, 2, NULL);
}
I also post the flash commands; the first is executed once, the second every time I modify the code:
c:/Espressif/utils/ESP8266/esptool.exe -p COM3 write_flash -ff 40m -fm qio -fs 32m 0x3FC000 c:/Espressif/ESP8266_RTOS_SDK/bin/esp_init_data_default.bin 0x3FE000 c:/Espressif/ESP8266_RTOS_SDK/bin/blank.bin 0x7E000 c:/Espressif/ESP8266_RTOS_SDK/bin/blank.bin
c:/Espressif/utils/ESP8266/esptool.exe -p COM3 -b 256000 write_flash -ff 40m -fm qio -fs 32m 0x00000 firmware/eagle.flash.bin 0x40000 firmware/eagle.irom0text.bin
Is there something wrong? I really don't understand why it doesn't work.
When I try the non-OS examples, they work very well.
I had the same problem. The issue is caused by an incorrect address for eagle.irom0text.bin.
So I changed the address of eagle.irom0text.bin from 0x40000 (0x10000) to 0x20000 and it worked well for me.
[RTOS SDK version: 1.4.2(f57d61a)]
The correct flash targets in common_rtos.mk (ESP-12E):
For flashinit:
flashinit:
	$(vecho) "Flash init data default and blank data."
	$(ESPTOOL) -p $(ESPPORT) write_flash $(flashimageoptions) 0x3fc000 $(SDK_BASE)/bin/esp_init_data_default.bin
	$(ESPTOOL) -p $(ESPPORT) write_flash $(flashimageoptions) 0x3fe000 $(SDK_BASE)/bin/blank.bin
For flash:
flash: all
ifeq ($(app), 0)
	$(ESPTOOL) -p $(ESPPORT) -b $(ESPBAUD) write_flash $(flashimageoptions) 0x00000 $(FW_BASE)/eagle.flash.bin 0x20000 $(FW_BASE)/eagle.irom0text.bin
else
ifeq ($(boot), none)
	$(ESPTOOL) -p $(ESPPORT) -b $(ESPBAUD) write_flash $(flashimageoptions) 0x00000 $(FW_BASE)/eagle.flash.bin 0x20000 $(FW_BASE)/eagle.irom0text.bin
else
	$(ESPTOOL) -p $(ESPPORT) -b $(ESPBAUD) write_flash $(flashimageoptions) $(addr) $(FW_BASE)/upgrade/$(BIN_NAME).bin
endif
endif
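Applied to the flash command from the question, the only change is the second write address; a sketch assuming the same port and file paths:
c:/Espressif/utils/ESP8266/esptool.exe -p COM3 -b 256000 write_flash -ff 40m -fm qio -fs 32m 0x00000 firmware/eagle.flash.bin 0x20000 firmware/eagle.irom0text.bin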

Vowpal Wabbit Logistic Regression

I am performing logistic regression using Vowpal Wabbit on a dataset with 25 features and 48 million instances. I have a question about the current predict values: should they be between 0 and 1?
average since example example current current current
loss last counter weight label predict features
0.693147 0.693147 1 1.0 -1.0000 0.0000 24
0.419189 0.145231 2 2.0 -1.0000 -1.8559 24
0.235457 0.051725 4 4.0 -1.0000 -2.7588 23
6.371911 12.508365 8 8.0 -1.0000 -3.7784 24
3.485084 0.598258 16 16.0 -1.0000 -2.2767 24
1.765249 0.045413 32 32.0 -1.0000 -2.8924 24
1.017911 0.270573 64 64.0 -1.0000 -3.0438 25
0.611419 0.204927 128 128.0 -1.0000 -3.1539 25
0.469127 0.326834 256 256.0 -1.0000 -1.6101 23
0.403473 0.337820 512 512.0 -1.0000 -2.8843 25
0.337348 0.271222 1024 1024.0 -1.0000 -2.5209 25
0.328909 0.320471 2048 2048.0 -1.0000 -2.0732 25
0.309401 0.289892 4096 4096.0 -1.0000 -2.7639 25
0.291447 0.273492 8192 8192.0 -1.0000 -2.5978 24
0.287428 0.283409 16384 16384.0 -1.0000 -3.1774 25
0.287249 0.287071 32768 32768.0 -1.0000 -2.7770 24
0.282737 0.278224 65536 65536.0 -1.0000 -1.9070 25
0.278517 0.274297 131072 131072.0 -1.0000 -3.3813 24
0.291475 0.304433 262144 262144.0 1.0000 -2.7975 23
0.324553 0.357630 524288 524288.0 -1.0000 -0.8995 24
0.373086 0.421619 1048576 1048576.0 -1.0000 -1.2076 24
0.422605 0.472125 2097152 2097152.0 1.0000 -1.4907 25
0.476046 0.529488 4194304 4194304.0 -1.0000 -1.8591 25
0.476627 0.477208 8388608 8388608.0 -1.0000 -2.0037 23
0.446556 0.416485 16777216 16777216.0 -1.0000 -0.9915 24
0.422831 0.399107 33554432 33554432.0 -1.0000 -1.9549 25
0.428316 0.433801 67108864 67108864.0 -1.0000 -0.6376 24
0.425511 0.422705 134217728 134217728.0 -1.0000 -0.4094 24
0.425185 0.424860 268435456 268435456.0 -1.0000 -1.1529 24
0.426747 0.428309 536870912 536870912.0 -1.0000 -2.7468 25
Predictions are in the range [-50, +50] (theoretically any real number, but Vowpal Wabbit truncates it to [-50, +50]).
To convert them to {-1, +1}, use --binary. Positive predictions are simply mapped to +1, negative to -1.
To convert them to [0, +1], use --link=logistic.
This uses the logistic function 1/(1 + exp(-x)).
You should also use --loss_function=logistic if you want to interpret the numbers as probabilities.
To convert them to [-1, +1], use --link=glf1.
This uses the formula 2/(1 + exp(-x)) - 1 (a generalized logistic function with limits of ±1).
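For example, a training run whose predictions can be read as probabilities might look like this (a sketch; train.vw and model.vw are hypothetical file names):
vw --loss_function=logistic --link=logistic -d train.vw -f model.vw
# a raw score of -2.7468 maps to 1/(1 + exp(2.7468)), about 0.06,
# i.e. roughly a 6% probability of the positive class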

Trying to simulate a neural network in MATLAB by myself

I tried to create a neural network to estimate y = x^2, so I created a fitting neural network and gave it some samples for input and output. Then I tried to build this network in C++, but the result is different from what I expected.
With the following inputs:
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26
27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49
50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 -1
-2 -3 -4 -5 -6 -7 -8 -9 -10 -11 -12 -13 -14 -15 -16 -17 -18 -19 -20 -21 -22 -23 -24 -25 -26 -27 -28 -29 -30 -31 -32 -33 -34 -35 -36 -37 -38 -39 -40 -41 -42 -43 -44 -45 -46 -47 -48 -49 -50 -51 -52 -53 -54 -55 -56 -57 -58 -59 -60 -61 -62 -63 -64 -65 -66 -67 -68 -69 -70 -71
and the following outputs:
0 1 4 9 16 25 36 49 64 81 100 121 144 169 196 225 256 289 324 361 400
441 484 529 576 625 676 729 784 841 900 961 1024 1089 1156 1225 1296
1369 1444 1521 1600 1681 1764 1849 1936 2025 2116 2209 2304 2401 2500
2601 2704 2809 2916 3025 3136 3249 3364 3481 3600 3721 3844 3969 4096
4225 4356 4489 4624 4761 4900 5041 1 4 9 16 25 36 49 64 81 100 121 144
169 196 225 256 289 324 361 400 441 484 529 576 625 676 729 784 841
900 961 1024 1089 1156 1225 1296 1369 1444 1521 1600 1681 1764 1849
1936 2025 2116 2209 2304 2401 2500 2601 2704 2809 2916 3025 3136 3249
3364 3481 3600 3721 3844 3969 4096 4225 4356 4489 4624 4761 4900 5041
I used the fitting tool network with matrix rows. Training is 70%, validation is 15%, and testing is 15% as well. The number of hidden neurons is two. Then at the command line I wrote this:
purelin(net.LW{2}*tansig(net.IW{1}*inputTest+net.b{1})+net.b{2})
Other information :
My net.b{1} is: -1.16610230053776 1.16667147712026
My net.b{2} is: 51.3266249426358
And net.IW{1} is: 0.344272596370387 0.344111217766824
net.LW{2} is: 31.7635369693519 -31.8082184881063
When my inputTest is 3, the result of this command is 16, while it should be about 9. Have I made an error somewhere?
I found the Stack Overflow post Neural network in MATLAB that contains a problem like mine, with one small difference: in that problem the ranges of the input and output are the same, but in mine they are not. That answer says I need to scale the results, but how can I scale my result?
You are right about scaling. As was mentioned in the linked answer, the neural network by default scales the input and output to the range [-1,1]. This can be seen in the network processing functions configuration:
>> net = fitnet(2);
>> net.inputs{1}.processFcns
ans =
'removeconstantrows' 'mapminmax'
>> net.outputs{2}.processFcns
ans =
'removeconstantrows' 'mapminmax'
The second preprocessing function applied to both input/output is mapminmax with the following parameters:
>> net.inputs{1}.processParams{2}
ans =
ymin: -1
ymax: 1
>> net.outputs{2}.processParams{2}
ans =
ymin: -1
ymax: 1
to map both into the range [-1,1] (prior to training).
This means that the trained network expects input values in this range, and outputs values also in the same range. If you want to manually feed input to the network, and compute the output yourself, you have to scale the data at input, and reverse the mapping at the output.
One last thing to remember is that each time you train the ANN, you will get different weights. If you want reproducible results, you need to fix the state of the random number generator (initialize it with the same seed each time). Read the documentation on functions like rng and RandStream.
You also have to pay attention that if you are dividing the data into training/testing/validation sets, you must use the same split each time (this is probably also affected by the randomness aspect I mentioned).
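A minimal way to pin both down might be (a sketch; 'divideblock' is just one deterministic alternative to the default random split):
rng(0);                          %# fix the RNG seed for reproducible weight initialization
net = fitnet(2);
net.divideFcn = 'divideblock';   %# deterministic 70/15/15 split, taken in order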
Here is an example to illustrate the idea (adapted from another post of mine):
%%# data
x = linspace(-71,71,200); %# 1D input
y_model = x.^2; %# model
y = y_model + 10*randn(size(x)).*x; %# add some noise
%%# create ANN, train, simulate
net = fitnet(2); %# one hidden layer with 2 nodes
net.divideFcn = 'dividerand';
net.trainParam.epochs = 50;
net = train(net,x,y);
y_hat = net(x);
%%# plot
plot(x, y, 'b.'), hold on
plot(x, x.^2, 'Color','g', 'LineWidth',2)
plot(x, y_hat, 'Color','r', 'LineWidth',2)
legend({'data (noisy)','model (x^2)','fitted'})
hold off, grid on
%%# manually simulate network
%# map input to [-1,1] range
[~,inMap] = mapminmax(x, -1, 1);
in = mapminmax('apply', x, inMap);
%# propagate values to get output (scaled to [-1,1])
hid = tansig( bsxfun(@plus, net.IW{1}*in, net.b{1}) ); %# hidden layer
outLayerOut = purelin( net.LW{2}*hid + net.b{2} ); %# output layer
%# reverse mapping from [-1,1] to original data scale
[~,outMap] = mapminmax(y, -1, 1);
out = mapminmax('reverse', outLayerOut, outMap);
%# compare against MATLAB output
max( abs(out - y_hat) ) %# this should be zero (or in the order of `eps`)
I opted to use the mapminmax function, but you could have done that manually as well. The formula is a pretty simple linear mapping:
y = (ymax-ymin)*(x-xmin)/(xmax-xmin) + ymin;
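For instance, the manual equivalent of the two mapminmax calls in the example above would be (a sketch reusing the same x, y, in, outLayerOut, and out variables):
%# forward mapping: scale x from its own range into [-1,1]
in2 = 2*(x - min(x)) ./ (max(x) - min(x)) - 1;
%# reverse mapping: scale the [-1,1] network output back to the y range
out2 = (outLayerOut + 1) .* (max(y) - min(y)) / 2 + min(y);
max( abs(in2 - in) )   %# both differences should be ~0
max( abs(out2 - out) )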
