I found the physical address space mapping for Arm in DEN0001C_principles_of_arm_memory_maps.pdf:
32-bit, 36-bit and 40-bit ARM Address Maps
Address map in use in ARM development systems today
- 32-bit - - 36-bit - - 40-bit -
1024GB+ + +--------------+ <- 40-bit
| | DRAM |
~ ~ ~ ~
| | |
| | |
| | |
| | |
544GB + + +--------------+
| | Hole or DRAM |
| | |
512GB + + +--------------+
| | Mapped |
| | I/O |
~ ~ ~ ~
| | |
256GB + + +--------------+
| | Reserved |
~ ~ ~ ~
| | |
64GB + +--------------+--------------+ <- 36-bit
| | DRAM |
~ ~ ~ ~
| | |
| | |
34GB + +--------------+--------------+
| | Hole or DRAM |
32GB + +--------------+--------------+
| | Mapped I/O |
~ ~ ~ ~
| | |
16GB + +--------------+--------------+
| | Reserved |
~ ~ ~ ~
4GB +--------------+--------------+--------------+ <- 32-bit
| 2GB of DRAM |
| |
2GB +--------------+--------------+--------------+
| Mapped I/O |
1GB +--------------+--------------+--------------+
| ROM & RAM & I/O |
0GB +--------------+--------------+--------------+ 0
- 32-bit - - 36-bit - - 40-bit -
Figure 1 32-bit, 36-bit and 40-bit Address Map
But I cannot find any equivalent mapping for arm64. Is there an official document?
Update
The Arm architecture does not standardize the CPU's view of the physical address space. DRAM may start at address 0x0, but often it does not; SoC designers can decide their AXI address space layout.
=============================================================
32-bit Armv7 without LPAE can access up to a 32-bit physical address space
32-bit Armv7 with LPAE can access up to a 40-bit physical address space
64-bit Arm can naturally access up to 48 bits of physical address space:
256TB +-----------------+ <- 48-bit
| DRAM |
~ ~
| |
| |
| |
| |
136TB +-----------------+
| Hole or DRAM |
| |
128TB +-----------------+
| Mapped |
| I/O |
~ ~
| |
64TB +-----------------+
| Reserved |
~ ~
| |
16TB +-----------------+ <- 44-bit
~ ~
~ ~
~ ~
0GB +-----------------+ 0
So the address space below 48 bits simply follows the layout in the question description.
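As a quick sanity check (plain Python, nothing Arm-specific), the sizes quoted in the maps above follow directly from the bit widths:

```python
# Size in bytes of a physical address space that is `bits` wide.
def pa_space(bits):
    return 1 << bits

GB = 1 << 30  # the maps above label sizes in binary GB/TB
TB = 1 << 40

print(pa_space(32) // GB)  # 4    GB -> 32-bit (no LPAE)
print(pa_space(36) // GB)  # 64   GB -> 36-bit
print(pa_space(40) // GB)  # 1024 GB -> 40-bit (LPAE)
print(pa_space(48) // TB)  # 256  TB -> 48-bit (AArch64)
```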
Related
I have a Raspberry Pi Pico and I'm trying to get it Wifi Connected.
I'm using an ESP-01 with the stock firmware and have confirmed AT commands work through the Arduino serial monitor at both 115200 and 9600 baud. To connect it to my PC I am using an ESP-01S USB adapter I got on Amazon.
AT
OK
AT+GMR
AT version:1.2.0.0(Jul 1 2016 20:04:45)
SDK version:1.5.4.1(39cb9a32)
v1.0.0
Mar 11 2018 18:27:31
OK
+--------------------------+
| Both NL & CR | 9600 Baud |
+--------------------------+
Once everything is wired up to the Raspberry Pi Pico there is a blue light on the ESP-01 (which wasn't on with the USB serial adapter), but I do not get a response from the ESP-01.
+-------------------+ +-------------------+
| | 3.3v PS | |
| Raspberry Pi | = | | ESP-01 |
| Pico | | +------+ 3.3v |
| | | | |
| GPIO 0 +----------------> RXD |
| | | | |
| GPIO 1 <----------------+ TXD |
| | | | |
| GND +-------+--------+ GND |
| | | |
| | | |
+-------------------+ +-------------------+
from machine import UART

uart = UART(0, baudrate=9600)

def write(msg):
    print("Sending %s" % msg)
    uart.write(msg)

write('AT\r\n')

while True:
    t = uart.readline()
    if t:
        print(t)
    else:
        print('.', end="")
Tried with multiple baudrates as well as UART 0 and UART 1 (TX=0, RX=1 and TX=4, RX=5).
ESP-01 is supplied with 3.3v and not using power from the Raspberry Pi.
What could be going on that would prevent a response?
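For what it's worth, one way to structure the read loop so the logic can be checked off-hardware is to collect lines until a terminal `OK`/`ERROR` or a timeout. This is a sketch in plain Python (`read_response` and `FakeUART` are made-up names; on the Pico you would poll `time.ticks_ms()` instead of `time.monotonic()`):

```python
import time

def read_response(uart, timeout_s=2.0):
    """Collect decoded lines from `uart` until OK/ERROR or timeout.
    `uart` only needs a readline() returning bytes or None."""
    lines = []
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        line = uart.readline()
        if not line:
            continue
        text = line.decode().strip()
        if text:
            lines.append(text)
        if text in ("OK", "ERROR"):
            break
    return lines

# Fake UART so the loop can be exercised without hardware.
class FakeUART:
    def __init__(self, data):
        self._lines = data
    def readline(self):
        return self._lines.pop(0) if self._lines else None

print(read_response(FakeUART([b"AT\r\n", b"\r\n", b"OK\r\n"])))
# -> ['AT', 'OK']
```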
I have two containers running in a swarm. Each exposes a /stats endpoint which I am trying to scrape.
However, using the swarm port obviously results in the queries being load balanced and therefore the stats are all intermingled:
+--------------------------------------------------+
| Server |
| +-------------+ +-------------+ |
| | | | | |
| | Container A | | Container B | |
| | | | | |
| +-------------+ +-------------+ |
| \ / |
| \ / |
| +--------------+ |
| | | |
| | Swarm Router | |
| | | |
| +--------------+ |
| v |
+-------------------------|------------------------+
|
A Stats
B Stats
A Stats
B Stats
|
v
I want to keep the load balancer for application requests, but also need a direct way to make requests to each container to scrape the stats.
+--------------------------------------------------+
| Server |
| +-------------+ +-------------+ |
| | | | | |
| | Container A | | Container B | |
| | | | | |
| +-------------+ +-------------+ |
| | \ / | |
| | \ / | |
| | +--------------+ | |
| | | | | |
| | | Swarm Router | | |
| v | | v |
| | +--------------+ | |
| | | | |
+--------|----------------|----------------|-------+
| | |
A Stats | B Stats
A Stats Normal Traffic B Stats
A Stats | B Stats
| | |
| | |
v | v
A dynamic solution would be ideal, but since I don't intend to do any dynamic scaling something like hardcoded ports for each container would be fine:
::8080 Both containers via load balancer
::8081 Direct access to container A
::8082 Direct access to container B
Can this be done with swarm?
From inside an overlay network you can get IP-addresses of all replicas with tasks.<service_name> DNS query:
; <<>> DiG 9.11.5-P4-5.1+deb10u5-Debian <<>> -tA tasks.foo_test
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 19860
;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
;tasks.foo_test. IN A
;; ANSWER SECTION:
tasks.foo_test. 600 IN A 10.0.1.3
tasks.foo_test. 600 IN A 10.0.1.5
tasks.foo_test. 600 IN A 10.0.1.6
This is mentioned in the documentation.
Also, if you use Prometheus to scrape those endpoints for metrics, you can combine the above with dns_sd_configs to set the targets to scrape (there are articles describing how). This is easy to get running but somewhat limited in features (especially in large environments).
A more advanced way to achieve the same is to use dockerswarm_sd_config (docs, example configuration). This way the list of endpoints is gathered by querying the Docker daemon, along with some useful labels (e.g. node name, service name, custom labels).
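A minimal dns_sd_configs job might look like this (the service name foo_test and port 8080 are assumptions; adjust metrics_path to your /stats endpoint):

```yaml
scrape_configs:
  - job_name: 'swarm-tasks'
    metrics_path: '/stats'
    dns_sd_configs:
      - names: ['tasks.foo_test']
        type: A
        port: 8080
```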
While less than ideal, you can introduce a microservice that acts as an intermediary to the other containers that are exposing /stats. This microservice would have to be configured with the individual endpoints and operate in the same network as said endpoints.
This doesn't bypass the load balancer, but instead makes it so it does not matter.
The intermediary could roll-up the information or you could make it more sophisticated by passing a list of opaque identifiers which the caller can then use to individually query the intermediary.
It is slightly "anti-pattern" in the sense that you have a highly coupled "stats" proxy that must be configured to be able to hit each endpoint.
That said, it is good in the sense that you don't have to expose individual containers outside of the proxy. From a security perspective, this may be better because you're not leaking additional information out of your swarm.
You can try publishing a specific container port on the host machine. Add to your service:
ports:
  - target: 8081
    published: 8081
    protocol: tcp
    mode: host
I have one Google Sheet that contains ID values together with corresponding Names and Attack Power. In another sheet, I want to combine Name and Attack Power in the same cell, using the ID as a reference, separated by line breaks.
Sheet1
Sheet1 looks like this:
| GROUP ID | NAME | ATTACK POWER |
|---------:|:----------|--------------:|
| 101 | guile | 333 |
|----------|-----------|---------------|
| 101 | blanka | 50 |
|----------|-----------|---------------|
| 101 | sagat | 500 |
|----------|-----------|---------------|
| 207 | ruy | 450 |
|----------|-----------|---------------|
| 207 | vega | 150 |
Sheet2
Right now, I have created the following ArrayFormula that sort of does what I want.
In NAME-column:
=ArrayFormula(TEXTJOIN(CHAR(10);1;REPT(Sheet1!B:C;1*(Sheet1!A:A=A2))))
Which returns the following result:
| GROUP ID | NAME |
|---------:|:--------------------------|
| 101 | guile |
| | 333 |
| | blanka |
| | 50 |
| | sagat |
| | 500 |
|----------|---------------------------|
| 101 | ruy |
| | 450 |
| | vega |
| | 150 |
|----------|---------------------------|
The problem is that I can't figure out how to get Name and Attack Power on the same line.
Tried combining with CONCATENATE: =CONCATENATE(ArrayFormula(TEXTJOIN(CHAR(10);1;REPT(Sheet1!B:B;1*(Sheet1!A:A=A2))));" (";ArrayFormula(TEXTJOIN(CHAR(10);1;REPT(Sheet1!C:C;1*(Sheet1!A:A=A2))));")") - but it's not quite right:
| GROUP ID | NAME |
|---------:|:--------------------------|
| 101 | guile |
| | blanka |
| | sagat (333 |
| | 50 |
| | 500) |
|----------|---------------------------|
| 101 | ruy |
| | vega (450 |
| | 150) |
|----------|---------------------------|
I would instead like the sheet to look like this:
| GROUP ID | NAME |
|---------:|:--------------------------|
| 101 | guile (333) |
| | blanka (50) |
| | sagat (500) |
|----------|---------------------------|
| 101 | ruy (450) |
| | vega (150) |
|----------|---------------------------|
Is this possible?
Concatenate NAME and ATTACK POWER into one string before joining:
=ARRAYFORMULA(TEXTJOIN(CHAR(10), 1,
 REPT(Sheet1!B:B&" ("&Sheet1!C:C&")", 1*(Sheet1!A:A=A4))))
I am running FreeRADIUS on the same computer that I am running "radtest" from.
I can get an Access-Accept with the user password coming from either the "users" file or MySQL, and can get the client "secret" from clients.conf, but I can't figure out how to get FreeRADIUS to look in MySQL for the client "secret".
Do I have to somehow disable or override the entry in "clients.conf"?
Here's a summary of file entries, mysql, and test results:
/etc/freeradius/3.0/clients.conf #client localhost with secret testing123
/etc/freeradius/3.0/users #testing Cleartext-Password := "testpwd"
/etc/freeradius/3.0/mods-available/sql #read_clients = yes (/etc/freeradius/3.0/sites-enabled/sql points here)
mysql> SELECT * FROM radgroupreply LIMIT 10;
| id | groupname | attribute | op | value |
| 1 | dynamic | Framed-Compression | := | Van-Jacobsen-TCP-IP |
| 2 | dynamic | Framed-Protocol | := | PPP |
| 3 | dynamic | Service-Type | := | Framed-User |
| 4 | dynamic | Framed-MTU | := | 1500 |
| 5 | 2048-1024 | Motorola-Canopy-ULBR | = | 1024 |
| 6 | 2048-1024 | Motorola-Canopy-ULBL | = | 500000 |
mysql> SELECT * FROM radusergroup LIMIT 10;
| username | groupname | priority |
| fredf | dnamic | 2 |
| 0a-00-3e-89-35-32 | 2048-1024 | 2 |
mysql> SELECT * FROM radcheck LIMIT 10;
| id | username | attribute | op | value |
| 3 | fredf | Cleartext-Password | := | wilma |
| 6 | 0a-00-3e-89-35-32 | Cleartext-Password | := | passwordsql |
mysql> SELECT * FROM radreply LIMIT 10;
| id | username | attribute | op | value |
| 1 | fredf | Motorola-Canopy-UserLevel | = | 3 |
| 2 | testuser | Motorola-Canopy-UserLevel | = | 3 |
mysql> SELECT * FROM nas LIMIT 10;
| id | nasname | shortname | type | ports | secret | server | community | description |
| 1 | 10.10.2.2 | Griz450NW | 1 | 1812 | naspass | localhost | ISReader | Griz450NW |
radtest testing testpwd 127.0.0.1 0 testing123 #works
Received Access-Accept Id 107 from 127.0.0.1:1812 to 0.0.0.0:0 length 20
radtest fredf wilma 127.0.0.1 0 testing123 #works
Received Access-Accept Id 242 from 127.0.0.1:1812 to 0.0.0.0:0 length 32
Motorola-WiMAX-Home-BTS = 0x00000003
radtest 0a-00-3e-89-35-32 passwordsql 127.0.0.1 0 testing123 #works
Received Access-Accept Id 27 from 127.0.0.1:1812 to 0.0.0.0:0 length 44
Motorola-Canopy-ULBR = 1024
Motorola-Canopy-ULBL = 500000
radtest 0a-00-3e-89-35-32 passwordsql 127.0.0.1 0 naspass #doesn't work
radiusd -X output: Dropping packet without response because of error: Received packet from 127.0.0.1 with invalid Message-Authenticator! (Shared secret is incorrect.)
The NAS table entry worked once I changed the "nasname" to "127.0.0.1" and disabled the client in clients.conf (I just changed "ipaddr" from "127.0.0.1" to "127.0.0.2").
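Roughly, the change looks like this (a sketch using the secrets and addresses from the question; FreeRADIUS loads SQL clients at startup when read_clients = yes, so restart it after changing the nas table):

```
# /etc/freeradius/3.0/clients.conf -- point the static client away from
# 127.0.0.1 so the SQL nas table entry is used for that address instead
client localhost {
    ipaddr = 127.0.0.2    # was 127.0.0.1
    secret = testing123
}
```

and in MySQL, set the nas row's nasname to the client's actual address: UPDATE nas SET nasname = '127.0.0.1' WHERE id = 1;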
Closed. This question is off-topic. It is not currently accepting answers.
Closed 12 years ago.
grails stats gives various code statistics for a given Grails project.
Typical output can look like something along the lines of:
+----------------------+-------+-------+
| Name | Files | LOC |
+----------------------+-------+-------+
| Controllers | 4 | 183 |
| Domain Classes | 8 | 264 |
| Jobs | 1 | 10 |
| Services | 4 | 297 |
| Tag Libraries | 2 | 63 |
| Unit Tests | 17 | 204 |
+----------------------+-------+-------+
| Totals | 36 | 1021 |
+----------------------+-------+-------+
I'm curious about the typical division of code between the various artifacts in Grails projects (such as the ratio LOC(controllers) / LOC(services), etc.).
Please share the grails stats output of your largest Grails project to contribute your statistics to this question.
My current project:
+----------------------+-------+-------+
| Name | Files | LOC |
+----------------------+-------+-------+
| Controllers | 67 | 7665 |
| Domain Classes | 101 | 3736 |
| Jobs | 3 | 45 |
| Services | 61 | 6158 |
| Tag Libraries | 34 | 2357 |
| Groovy Helpers | 54 | 3356 |
| Java Helpers | 1 | 65 |
| Unit Tests | 227 | 24224 |
| Integration Tests | 70 | 10908 |
| Scripts | 2 | 77 |
+----------------------+-------+-------+
| Totals | 620 | 58591 |
+----------------------+-------+-------+
The large number in "Java Helpers" originates mostly from wsdl2java stub generation.
+----------------------+-------+-------+
| Name | Files | LOC |
+----------------------+-------+-------+
| Controllers | 13 | 1085 |
| Domain Classes | 17 | 802 |
| Services | 19 | 1918 |
| Tag Libraries | 2 | 182 |
| Groovy Helpers | 39 | 1586 |
| Java Helpers | 521 | 42232 |
| Unit Tests | 45 | 5294 |
| Integration Tests | 9 | 836 |
| Scripts | 2 | 22 |
+----------------------+-------+-------+
| Totals | 667 | 53957 |
+----------------------+-------+-------+
+----------------------+-------+-------+
| Name | Files | LOC |
+----------------------+-------+-------+
| Controllers | 40 | 3912 |
| Domain Classes | 42 | 2109 |
| Jobs | 5 | 127 |
| Services | 18 | 2352 |
| Tag Libraries | 12 | 355 |
| Groovy Helpers | 158 | 5249 |
| Java Helpers | 4 | 207 |
| Unit Tests | 54 | 3258 |
| Integration Tests | 22 | 1790 |
| Scripts | 7 | 150 |
+----------------------+-------+-------+
| Totals | 362 | 19509 |
+----------------------+-------+-------+
A pity it doesn't have more stats like average/min/max LOC per class, test coverage, etc ;)
+----------------------+-------+-------+
| Name | Files | LOC |
+----------------------+-------+-------+
| Controllers | 17 | 1961 |
| Domain Classes | 14 | 843 |
| Jobs | 4 | 109 |
| Services | 5 | 831 |
| Tag Libraries | 2 | 789 |
| Groovy Helpers | 38 | 948 |
| Java Helpers | 5 | 445 |
| Unit Tests | 1 | 12 |
| Integration Tests | 1 | 33 |
| Scripts | 1 | 11 |
+----------------------+-------+-------+
| Totals | 88 | 5982 |
+----------------------+-------+-------+
Small app (about 25 stories)