Orbeon MySQL connection constantly dropping - orbeon

I'm constantly losing my MySQL connection after a few minutes. I see no errors in the log until I attempt to connect.
I'm happy to post any settings that will help debug, just let me know what you need to see.
context.xml:
<Resource name="jdbc/mysql" auth="Container" type="javax.sql.DataSource"
initialSize="10" maxActive="50" maxIdle="20" maxWait="60000"
driverClassName="com.mysql.jdbc.Driver"
poolPreparedStatements="true"
username="orbeon"
password="pw"
url="jdbc:mysql://localhost:3306/orbeon"/>
my.cnf:
[client]
port = 3306
socket = /var/run/mysqld/mysqld.sock
[mysqld_safe]
socket = /var/run/mysqld/mysqld.sock
nice = 0
[mysqld]
user = mysql
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
port = 3306
basedir = /usr
datadir = /var/lib/mysql
tmpdir = /tmp
skip-external-locking
skip-name-resolve
bind-address = 0.0.0.0
key-buffer = 256M
thread_stack = 256K
thread_cache_size = 8
max_allowed_packet = 16M
max_connections = 200
myisam-recover = BACKUP
wait_timeout = 180
net_read_timeout = 30
net_write_timeout = 30
back_log = 128
table_cache = 128
max_heap_table_size = 32M
lower_case_table_names = 0
query_cache_limit = 1M
query_cache_size = 16M
log_error = /var/log/mysql/error.log
log_slow_queries = /var/log/mysql/slow.log
long-query-time = 5
log-queries-not-using-indexes
[mysqldump]
quick
quote-names
max_allowed_packet = 16M
[mysql]
[isamchk]
key-buffer = 256M
max_allowed_packet = 16M
!includedir /etc/mysql/conf.d/

Try adding the following two attributes to your existing <Resource> for MySQL. With these, Tomcat's connection pool runs the validation query each time a connection is borrowed from the pool, so connections the server has already closed are detected and replaced before they reach your application.
validationQuery="select 1 from dual"
testOnBorrow="true"
So your <Resource> should look something like this (with your own username, password, and server; note that a literal & in the JDBC URL must be escaped as &amp;amp; inside context.xml, since it is an XML file):
<Resource name="jdbc/mysql" auth="Container" type="javax.sql.DataSource"
initialSize="3" maxActive="10" maxIdle="20" maxWait="30000"
driverClassName="com.mysql.jdbc.Driver"
poolPreparedStatements="true"
validationQuery="select 1 from dual"
testOnBorrow="true"
username="orbeon"
password="orbeon"
url="jdbc:mysql://localhost:3306/orbeon?useUnicode=true&amp;characterEncoding=UTF8"/>

Also, why is your wait_timeout set so low? At 180 seconds, MySQL closes any connection that has been idle for three minutes, which matches the "after a few minutes" drops you are seeing: the pool then hands out connections the server has already closed.
http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html#sysvar_wait_timeout
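If you also want to address this on the MySQL side, one sketch is to raise the idle timeout back toward the server default (28800 seconds is MySQL's default for wait_timeout; pick a value that fits your pool's idle-eviction settings):

```ini
[mysqld]
# Close idle connections after 8 hours (the server default) instead of 3 minutes.
wait_timeout = 28800
# interactive_timeout is the equivalent setting for interactive clients
# (e.g. the mysql command-line tool).
interactive_timeout = 28800
```

Even with a generous server-side timeout, keeping validationQuery/testOnBorrow in the pool is still worthwhile, since connections can also die for other reasons (network hiccups, server restarts).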

Related

FreeRadius - Failed to connect with "#" in username

Please help! I'm using FreeRADIUS + MySQL + daloRADIUS (CentOS 7), and any user whose name contains "#"
doesn't work.
radtest prueba 1234 localhost 1645 testing123
Sent Access-Request Id 89 from 0.0.0.0:46842 to 127.0.0.1:1645 length 76
User-Name = "prueba"
User-Password = "1234"
NAS-IP-Address =
NAS-Port = 1645
Message-Authenticator = 0x00
Cleartext-Password = "1234"
Received Access-Accept Id 89 from 127.0.0.1:1645 to 0.0.0.0:0 length 20
Without "#" it works, but with "#":
Sent Access-Request Id 198 from 0.0.0.0:57280 to 127.0.0.1:1645 length 77
User-Name = "prueba#"
User-Password = "1234"
NAS-IP-Address =
NAS-Port = 1645
Message-Authenticator = 0x00
Cleartext-Password = "1234"
Received Access-Reject Id 198 from 127.0.0.1:1645 to 0.0.0.0:0 length 20
(0) -: Expected Access-Accept got Access-Reject

Reading data from ADTF 2 using DDL structures

I'm trying to read the exemplary ADTF file. When reading the chunk header I see that the chunk size is 96 bytes; subtracting the header length (32) leaves 64 bytes for the actual data.
Now the data structure for the stream says we need only 43 bytes to express the data, and I'm not sure how the padding is applied. The actual 64 bytes of data seem to contain some padding, so I cannot just read the bytes and push them into packed structures, and I don't know how to infer the extra padding sizes. All the extracted values should be equal to 41 (decimal).
<stream description="streamid_2" name="NESTED_STRUCT" type="adtf.core.media_type">
<struct bytepos="0" name="tNestedStruct" type="tNestedStruct"/>
</stream>
<struct alignment="1" name="tNestedStruct" version="1">
<element alignment="1" arraysize="1" byteorder="LE" bytepos="0" name="sHeaderStruct" type="tHeaderStruct"/>
<element alignment="1" arraysize="1" byteorder="LE" bytepos="12" name="sSimpleStruct" type="tSimpleStruct"/>
</struct>
<struct alignment="1" name="tHeaderStruct" version="1">
<element alignment="1" arraysize="1" byteorder="LE" bytepos="0" name="ui32HeaderVal" type="tUInt32"/>
<element alignment="1" arraysize="1" byteorder="LE" bytepos="4" name="f64HeaderVal" type="tFloat64"/>
</struct>
<struct alignment="1" name="tSimpleStruct" version="1">
<element alignment="1" arraysize="1" byteorder="LE" bytepos="0" name="ui8Val" type="tUInt8"/>
<element alignment="1" arraysize="1" byteorder="LE" bytepos="1" name="ui16Val" type="tUInt16"/>
<element alignment="1" arraysize="1" byteorder="LE" bytepos="3" name="ui32Val" type="tUInt32"/>
<element alignment="1" arraysize="1" byteorder="LE" bytepos="7" name="i32Val" type="tInt32"/>
<element alignment="1" arraysize="1" byteorder="LE" bytepos="11" name="i64Val" type="tInt64"/>
<element alignment="1" arraysize="1" byteorder="LE" bytepos="19" name="f64Val" type="tFloat64"/>
<element alignment="1" arraysize="1" byteorder="LE" bytepos="27" name="f32Val" type="tFloat32"/>
</struct>
Here are the 64 data bytes:
index = value (decimal)
0 = 3
1 = 43
2 = 0
3 = 0
4 = 0
5 = -57
6 = -120
7 = 31
8 = 0
9 = 0
10 = 0
11 = 0
12 = 0
13 = 0
14 = 0
15 = 0
16 = 0
17 = 41
18 = 0
19 = 0
20 = 0
21 = 0
22 = 0
23 = 0
24 = 0
25 = 0
26 = -128
27 = 68
28 = 64
29 = 41
30 = 41
31 = 0
32 = 41
33 = 0
34 = 0
35 = 0
36 = 41
37 = 0
38 = 0
39 = 0
40 = 41
41 = 0
42 = 0
43 = 0
44 = 0
45 = 0
46 = 0
47 = 0
48 = 0
49 = 0
50 = 0
51 = 0
52 = 0
53 = -128
54 = 68
55 = 64
56 = 0
57 = 0
58 = 36
59 = 66
60 = 0
61 = 0
62 = 0
63 = 0
I don't really understand what you want to achieve. First of all, you don't need to compute any padding yourself in DDL; each bytepos follows directly from the previous element's size. You have to know that the description contains both the serialized representation (bytepos, byteorder) and the deserialized representation (alignment); please have a look at https://support.digitalwerk.net/adtf/v2/adtf_sdk_html_docs/page_a_utils_indexedfileformat.html. To access the data (read/write), just go through DDL (https://support.digitalwerk.net/adtf/v2/adtf_sdk_html_docs/page_ddl_usage_howto.html), and also have a look at the example (https://support.digitalwerk.net/adtf/v2/adtf_sdk_html_docs/page_demo_media_desc_coder.html).
There is also a data offset as well as chunk headers; please have a look at https://support.digitalwerk.net/adtf_libraries/adtf-streaming-library/v2/DATFileFormatSpecification.pdf
But you don't have to care about the indexed file format to use DDL outside the ADTF framework. For that, ADTF 2.x provides the Streaming Library:
https://support.digitalwerk.net/adtf_libraries/adtf-streaming-library/v2/api/index.html
https://support.digitalwerk.net/adtf_libraries/adtf-streaming-library/v2/StreamingLibrary.pdf
In ADTF 3.x there is the ADTF File Library (which is open source and can also handle files from 2.x):
https://support.digitalwerk.net/adtf_libraries/adtf-file-library/html/index.html
Both libraries support reading and writing (ADTF) DAT files, so I guess that is exactly what you need, and you don't need to reinvent it.
Please have a look at the Media Description example:
https://support.digitalwerk.net/adtf_libraries/adtf-streaming-library/v2/api/page_mediadescription.html
And also at the reader itself:
https://support.digitalwerk.net/adtf_libraries/adtf-streaming-library/v2/api/classadtfstreaming_1_1_i_a_d_t_f_file_reader.html
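To make the serialized layout concrete, here is a small standalone sketch (plain Java with ByteBuffer, not the ADTF API; the class and method names are made up) that packs and unpacks the 43-byte serialized form exactly as the bytepos attributes above describe it:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Toy reader/writer for the *serialized* layout from the DDL above (43 bytes, packed):
// tHeaderStruct at bytepos 0 (uint32 + float64), tSimpleStruct at bytepos 12.
public class NestedStructSketch {

    // Build a sample buffer in which every field is 41, as in the question.
    static byte[] sample() {
        ByteBuffer buf = ByteBuffer.allocate(43).order(ByteOrder.LITTLE_ENDIAN);
        buf.putInt(0, 41);            // ui32HeaderVal (bytepos 0)
        buf.putDouble(4, 41.0);       // f64HeaderVal  (bytepos 4)
        buf.put(12, (byte) 41);       // ui8Val        (bytepos 12 + 0)
        buf.putShort(13, (short) 41); // ui16Val       (bytepos 12 + 1)
        buf.putInt(15, 41);           // ui32Val       (bytepos 12 + 3)
        buf.putInt(19, 41);           // i32Val        (bytepos 12 + 7)
        buf.putLong(23, 41L);         // i64Val        (bytepos 12 + 11)
        buf.putDouble(31, 41.0);      // f64Val        (bytepos 12 + 19)
        buf.putFloat(39, 41.0f);      // f32Val        (bytepos 12 + 27)
        return buf.array();
    }

    // Decode the packed form; all nine values are widened to double for simplicity.
    static double[] parse(byte[] data) {
        ByteBuffer buf = ByteBuffer.wrap(data).order(ByteOrder.LITTLE_ENDIAN);
        return new double[] {
            buf.getInt(0) & 0xFFFFFFFFL,  // ui32HeaderVal (unsigned)
            buf.getDouble(4),             // f64HeaderVal
            buf.get(12) & 0xFF,           // ui8Val  (unsigned)
            buf.getShort(13) & 0xFFFF,    // ui16Val (unsigned)
            buf.getInt(15) & 0xFFFFFFFFL, // ui32Val (unsigned)
            buf.getInt(19),               // i32Val
            buf.getLong(23),              // i64Val
            buf.getDouble(31),            // f64Val
            buf.getFloat(39)              // f32Val
        };
    }

    public static void main(String[] args) {
        for (double v : parse(sample())) System.out.println(v);
    }
}
```

Note that the 64-byte blob in the question is the deserialized (aligned) representation, whose offsets differ from this packed layout; that mismatch is exactly why it is safer to go through the DDL coder than to hand-parse the bytes.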

Radius test succeeds on the local machine but fails from a remote machine

I installed FreeRADIUS on CentOS 7 via ./configure && make && make install.
After starting the server, the local test succeeds:
[root@iZ2zebgsn1haj8gu0447fiZ raddb]# radtest steve testing localhost 0 testing123
Sending Access-Request of id 151 to 127.0.0.1 port 1812
User-Name = "steve"
User-Password = "testing"
NAS-IP-Address = 172.17.120.248
NAS-Port = 0
Message-Authenticator = 0x00000000000000000000000000000000
rad_recv: Access-Accept packet from host 127.0.0.1 port 1812, id=151, length=71
Service-Type = Framed-User
Framed-Protocol = PPP
Framed-IP-Address = 172.16.3.33
Framed-IP-Netmask = 255.255.255.0
Framed-Routing = Broadcast-Listen
Filter-Id = "std.ppp"
Framed-MTU = 1500
Framed-Compression = Van-Jacobson-TCP-IP
But from a remote machine, it seems there's no response from the RADIUS server:
[root@iZ2zebgsn1haj8gu0447fiZ raddb]# radtest steve testing 211.71.149.221 0 testing123
Sending Access-Request of id 149 to 211.71.149.221 port 1812
User-Name = "steve"
User-Password = "testing"
NAS-IP-Address = 172.17.120.248
NAS-Port = 0
Message-Authenticator = 0x00000000000000000000000000000000
Sending Access-Request of id 149 to 211.71.149.221 port 1812
User-Name = "steve"
User-Password = "testing"
NAS-IP-Address = 172.17.120.248
NAS-Port = 0
Message-Authenticator = 0x00000000000000000000000000000000
Sending Access-Request of id 149 to 211.71.149.221 port 1812
User-Name = "steve"
User-Password = "testing"
NAS-IP-Address = 172.17.120.248
NAS-Port = 0
Message-Authenticator = 0x00000000000000000000000000000000
radclient: no response from server for ID 149 socket 3
Here are my configuration files:
clients.conf:
client 211.71.149.221{
ipaddr=211.71.149.221
secret = testing123
short = test-client
nastype = other
}
users:
steve Cleartext-Password := "testing"
Service-Type = Framed-User,
Framed-Protocol = PPP,
Framed-IP-Address = 172.16.3.33,
Framed-IP-Netmask = 255.255.255.0,
Framed-Routing = Broadcast-Listen,
Framed-Filter-Id = "std.ppp",
Framed-MTU = 1500,
Framed-Compression = Van-Jacobson-TCP-IP
I'm not using a database, so I didn't change radiusd.conf.
[root@iZ2zebgsn1haj8gu0447fiZ raddb]# netstat -upln
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
udp 0 0 0.0.0.0:68 0.0.0.0:* 727/dhclient
udp 0 0 172.17.120.248:123 0.0.0.0:* 828/ntpd
udp 0 0 127.0.0.1:123 0.0.0.0:* 828/ntpd
udp 0 0 0.0.0.0:123 0.0.0.0:* 828/ntpd
udp 0 0 0.0.0.0:58664 0.0.0.0:* 26159/radiusd
udp 0 0 0.0.0.0:5489 0.0.0.0:* 727/dhclient
udp 0 0 127.0.0.1:18120 0.0.0.0:* 26159/radiusd
udp 0 0 0.0.0.0:1812 0.0.0.0:* 26159/radiusd
udp 0 0 0.0.0.0:1813 0.0.0.0:* 26159/radiusd
udp 0 0 0.0.0.0:1814 0.0.0.0:* 26159/radiusd
udp6 0 0 :::123 :::* 828/ntpd
udp6 0 0 :::54457 :::* 727/dhclient
Your failing radtest sends the request to a remote server with IP 211.71.149.221, and your clients.conf defines a client whose IP address is 211.71.149.221. I'm guessing that this is NOT the IP your request is actually coming from: the ipaddr in a client block must be the source address of the machine sending the RADIUS request (the NAS or test client), not the server's own address. Requests from unknown clients are silently dropped, which is exactly the "no response" you see.
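A sketch of a client entry for the machine you actually run radtest from (the address below is a placeholder; substitute the remote machine's real source IP, and note the option is spelled shortname, not short):

```
client remote-test {
    ipaddr    = <remote-machine-ip>   # source address of the radtest box, NOT the server
    secret    = testing123
    shortname = remote-test
    nastype   = other
}
```

After editing clients.conf, restart radiusd (or run it in debug mode with radiusd -X to watch the incoming requests being matched against client blocks).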

How to join more than 2 regions with Apache Geode?

I've been trying to query some regions and I'm failing to join more than two of them. I set this up in a Java test to run the queries more easily, but it fails all the same in Pulse.
@Test
public void test_geode_join() throws QueryException {
ClientCache cache = new ClientCacheFactory()
.addPoolLocator(HOST, LOCATOR_PORT)
.setPoolSubscriptionEnabled(true)
.setPdxSerializer(new MyReflectionBasedAutoSerializer())
.create();
{
@SuppressWarnings("unchecked")
SelectResults<StructImpl> r = (SelectResults<StructImpl>) cache.getQueryService()
.newQuery("SELECT itm.itemId, bx.boxId " +
"FROM /items itm, /boxs bx " +
"WHERE itm.boxId = bx.boxId " +
"LIMIT 5")
.execute();
for (StructImpl v : r) {
System.out.println(v);
}
}
{
@SuppressWarnings("unchecked")
SelectResults<StructImpl> r = (SelectResults<StructImpl>) cache.getQueryService()
.newQuery("SELECT bx.boxId, rm.roomId " +
"FROM /boxs bx, /rooms rm " +
"WHERE bx.roomId = rm.roomId " +
"LIMIT 5")
.execute();
for (StructImpl v : r) {
System.out.println(v);
}
}
{
// That fails
@SuppressWarnings("unchecked")
SelectResults<StructImpl> r = (SelectResults<StructImpl>) cache.getQueryService()
.newQuery("SELECT itm.itemId, bx.boxId, rm.roomId " +
"FROM /items itm, /boxs bx, /rooms rm " +
"WHERE itm.boxId = bx.boxId " +
"AND bx.roomId = rm.roomId " +
"LIMIT 5")
.execute();
for (StructImpl v : r) {
System.out.println(v);
}
}
}
The first two queries work fine and respond instantly, but the last query hangs until it times out. I get the following logs:
[warn 2018/02/06 17:33:17.155 CET <main> tid=0x1] Pool unexpected socket timed out on client connection=Pooled Connection to hostname:31902: Connection[hostname:31902]#1978504976)
[warn 2018/02/06 17:33:27.333 CET <main> tid=0x1] Pool unexpected socket timed out on client connection=Pooled Connection to hostname2:31902: Connection[hostname2:31902]#1620459733 attempt=2)
[warn 2018/02/06 17:33:37.588 CET <main> tid=0x1] Pool unexpected socket timed out on client connection=Pooled Connection to hostname3:31902: Connection[hostname3:31902]#422409467 attempt=3)
[warn 2018/02/06 17:33:37.825 CET <main> tid=0x1] Pool unexpected socket timed out on client connection=Pooled Connection to hostname3:31902: Connection[hostname3:31902]#422409467 attempt=3). Server unreachable: could not connect after 3 attempts
[info 2018/02/06 17:33:37.840 CET <Distributed system shutdown hook> tid=0xd] VM is exiting - shutting down distributed system
[info 2018/02/06 17:33:37.840 CET <Distributed system shutdown hook> tid=0xd] GemFireCache[id = 1839168128; isClosing = true; isShutDownAll = false; created = Tue Feb 06 17:33:05 CET 2018; server = false; copyOnRead = false; lockLease = 120; lockTimeout = 60]: Now closing.
[info 2018/02/06 17:33:37.887 CET <Distributed system shutdown hook> tid=0xd] Destroying connection pool DEFAULT
And it ends up crashing
org.apache.geode.cache.client.ServerConnectivityException: Pool unexpected socket timed out on client connection=Pooled Connection to hostname3:31902: Connection[hostname3:31902]#422409467 attempt=3). Server unreachable: could not connect after 3 attempts
at org.apache.geode.cache.client.internal.OpExecutorImpl.handleException(OpExecutorImpl.java:798)
at org.apache.geode.cache.client.internal.OpExecutorImpl.handleException(OpExecutorImpl.java:623)
at org.apache.geode.cache.client.internal.OpExecutorImpl.execute(OpExecutorImpl.java:174)
at org.apache.geode.cache.client.internal.OpExecutorImpl.execute(OpExecutorImpl.java:115)
at org.apache.geode.cache.client.internal.PoolImpl.execute(PoolImpl.java:763)
at org.apache.geode.cache.client.internal.QueryOp.execute(QueryOp.java:58)
at org.apache.geode.cache.client.internal.ServerProxy.query(ServerProxy.java:70)
at org.apache.geode.cache.query.internal.DefaultQuery.executeOnServer(DefaultQuery.java:456)
at org.apache.geode.cache.query.internal.DefaultQuery.execute(DefaultQuery.java:338)
at org.apache.geode.cache.query.internal.DefaultQuery.execute(DefaultQuery.java:319)
at local.test.geode.GeodeTest.test_geode_join(GeodeTest.java:226)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:86)
at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:538)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:760)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:460)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:206)
I tried setting the timeouts to 60 seconds, but I'm still not getting any results.
All regions are configured like this:
Type | Name | Value
------ | --------------- | --------------------
Region | data-policy | PERSISTENT_REPLICATE
| disk-store-name | regionDiskStore1
| size | 1173
| scope | distributed-ack
Am I missing anything here?
Based on all the information provided, it looks like you are doing everything correctly. I tried to reproduce this in a simple (similar) test and it returns 5 results. However, if one of the predicates rarely matches, the query can take much longer to find enough tuples that satisfy the join.
Below is a sample test that does not have the issue. But if I modify it to put into region3 only portfolios with ID = -1, the test "hangs" trying to find 5 rows that fulfill the search criteria: it has to examine 1000 * 1000 * 1000 combinations, which takes a while, and in the end the query never finds a row where p3.ID = p1.ID. Is it possible that itm.boxId just doesn't match bx.boxId often enough, so it takes much longer to find rows that do?
@Test
public void testJoinMultipleReplicatePersistentRegionsWithLimitClause() throws Exception {
String regionName = "portfolio";
Cache cache = serverStarterRule.getCache();
assertNotNull(cache);
Region region1 =
cache.createRegionFactory(RegionShortcut.REPLICATE_PERSISTENT).create(regionName + 1);
Region region2 =
cache.createRegionFactory(RegionShortcut.REPLICATE_PERSISTENT).create(regionName + 2);
Region region3 =
cache.createRegionFactory(RegionShortcut.REPLICATE_PERSISTENT).create(regionName + 3);
for (int i = 0; i < 1000; i++) {
Portfolio p = new Portfolio(i);
region1.put(i, p);
region2.put(i, p);
region3.put(i, p); //modify this line to region3.put(i, new Portfolio(-1)) to cause query to take longer
}
QueryService queryService = cache.getQueryService();
SelectResults results = (SelectResults) queryService
.newQuery("select p1.ID, p2.ID, p3.ID from /portfolio1 p1, /portfolio2 p2, /portfolio3 p3 where p1.ID = p2.ID and p3.ID = p1.ID limit 5").execute();
assertEquals(5, results.size());
}
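The cost argument can also be illustrated without Geode at all. Below is a hypothetical nested-loop sketch (plain Java; the class and method names are made up) showing that when the third "region" never matches, the join must examine every combination before giving up, while with matching IDs the LIMIT is satisfied almost immediately:

```java
// Toy nested-loop join illustrating the cost argument above (not Geode code).
public class JoinCost {

    // Count how many tuple combinations are examined before collecting 'limit'
    // rows satisfying a == b && c == a (the analogue of the 3-way equi-join).
    static long scanned(int[] r1, int[] r2, int[] r3, int limit) {
        long examined = 0;
        int found = 0;
        for (int a : r1)
            for (int b : r2)
                for (int c : r3) {
                    examined++;
                    if (a == b && c == a && ++found == limit) return examined;
                }
        return examined; // no early exit: every combination was scanned
    }

    public static void main(String[] args) {
        int n = 100;
        int[] ids = new int[n], never = new int[n];
        for (int i = 0; i < n; i++) { ids[i] = i; never[i] = -1; }
        long match   = scanned(ids, ids, ids, 5);   // LIMIT hit early
        long noMatch = scanned(ids, ids, never, 5); // scans all n^3 combinations
        System.out.println(match + " vs " + noMatch);
    }
}
```

With n = 100 the no-match case examines all 1,000,000 combinations; at the question's scale of 1000-element regions that becomes a billion, which is consistent with the query appearing to hang until the client socket times out.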

esp8266 RTOS blink example doesn't work

I have a problem with the RTOS firmware on the ESP8266 (an ESP-12E). After flashing the firmware and reading from the UART, it stays stuck at these lines:
ets Jan 8 2013,rst cause:2, boot mode:(3,0)
load 0x40100000, len 31584, room 16
tail 0
chksum 0x24
load 0x3ffe8000, len 944, room 8
tail 8
chksum 0x9e
load 0x3ffe83b0, len 1080, room 0
tail 8
chksum 0x60
csum 0x60
Now I will explain my HW setup:
GPIO15 -> Gnd
EN -> Vcc
GPIO0 -> Gnd (when flashing)
GPIO0 -> Vcc (normal mode)
For the toolchain I've followed this tutorial and it works well:
http://microcontrollerkits.blogspot.it/2015/12/esp8266-eclipse-development.html
Then I started on my RTOS blink example; here is my user_main.c:
#include "esp_common.h"
#include "gpio.h"
void task2(void *pvParameters)
{
printf("Hello, welcome to client!\r\n");
while(1)
{
// Delay and turn on
vTaskDelay (300/portTICK_RATE_MS);
GPIO_OUTPUT_SET (5, 1);
// Delay and LED off
vTaskDelay (300/portTICK_RATE_MS);
GPIO_OUTPUT_SET (5, 0);
}
}
/******************************************************************************
* FunctionName : user_rf_cal_sector_set
* Description : SDK just reserved 4 sectors, used for rf init data and parameters.
* We add this function to force users to set rf cal sector, since
* we don't know which sector is free in user's application.
* sector map for last several sectors : ABCCC
* A : rf cal
* B : rf init data
* C : sdk parameters
* Parameters : none
* Returns : rf cal sector
*******************************************************************************/
uint32 user_rf_cal_sector_set(void)
{
flash_size_map size_map = system_get_flash_size_map();
uint32 rf_cal_sec = 0;
switch (size_map) {
case FLASH_SIZE_4M_MAP_256_256:
rf_cal_sec = 128 - 5;
break;
case FLASH_SIZE_8M_MAP_512_512:
rf_cal_sec = 256 - 5;
break;
case FLASH_SIZE_16M_MAP_512_512:
case FLASH_SIZE_16M_MAP_1024_1024:
rf_cal_sec = 512 - 5;
break;
case FLASH_SIZE_32M_MAP_512_512:
case FLASH_SIZE_32M_MAP_1024_1024:
rf_cal_sec = 1024 - 5;
break;
default:
rf_cal_sec = 0;
break;
}
return rf_cal_sec;
}
/******************************************************************************
* FunctionName : user_init
* Description : entry of user application, init user function here
* Parameters : none
* Returns : none
*******************************************************************************/
void user_init(void)
{
uart_init_new();
printf("SDK version:%s\n", system_get_sdk_version());
// Config pin as GPIO5
PIN_FUNC_SELECT (PERIPHS_IO_MUX_GPIO5_U, FUNC_GPIO5);
xTaskCreate(task2, "tsk2", 256, NULL, 2, NULL);
}
I also post the flash commands: the first is executed once, the second every time I modify the code:
c:/Espressif/utils/ESP8266/esptool.exe -p COM3 write_flash -ff 40m -fm qio -fs 32m 0x3FC000 c:/Espressif/ESP8266_RTOS_SDK/bin/esp_init_data_default.bin 0x3FE000 c:/Espressif/ESP8266_RTOS_SDK/bin/blank.bin 0x7E000 c:/Espressif/ESP8266_RTOS_SDK/bin/blank.bin
c:/Espressif/utils/ESP8266/esptool.exe -p COM3 -b 256000 write_flash -ff 40m -fm qio -fs 32m 0x00000 firmware/eagle.flash.bin 0x40000 firmware/eagle.irom0text.bin
Is there something wrong? I really don't understand why it doesn't work.
When I try the NON-OS examples, they work very well.
I had the same problem as you. This issue is caused by an incorrect address for eagle.irom0text.bin.
I changed the address of eagle.irom0text.bin from 0x40000 (0x10000) to 0x20000 and it worked for me.
[RTOS SDK version: 1.4.2(f57d61a)]
The correct flash commands in common_rtos.mk (ESP-12E):
For flashinit:
flashinit:
$(vecho) "Flash init data default and blank data."
$(ESPTOOL) -p $(ESPPORT) write_flash $(flashimageoptions) 0x3fc000 $(SDK_BASE)/bin/esp_init_data_default.bin
$(ESPTOOL) -p $(ESPPORT) write_flash $(flashimageoptions) 0x3fe000 $(SDK_BASE)/bin/blank.bin
For flash:
flash: all
ifeq ($(app), 0)
$(ESPTOOL) -p $(ESPPORT) -b $(ESPBAUD) write_flash $(flashimageoptions) 0x00000 $(FW_BASE)/eagle.flash.bin 0x20000 $(FW_BASE)/eagle.irom0text.bin
else
ifeq ($(boot), none)
$(ESPTOOL) -p $(ESPPORT) -b $(ESPBAUD) write_flash $(flashimageoptions) 0x00000 $(FW_BASE)/eagle.flash.bin 0x20000 $(FW_BASE)/eagle.irom0text.bin
else
$(ESPTOOL) -p $(ESPPORT) -b $(ESPBAUD) write_flash $(flashimageoptions) $(addr) $(FW_BASE)/upgrade/$(BIN_NAME).bin
endif
endif
