I am trying to build my custom ROS services. They are inside another parent package.
The structure is as follows:
|--catkin_ws
| |--src
| | |--Parent
| | | |--CMakeLists.txt
| | | |--package.xml
| | | |--ChildA
| | | | |--CMakeLists.txt
| | | | |--package.xml
| | | | |--srv
| | | | | |--SomeService.srv
| | | |--ChildB
The packages build correctly and I am able to use them in other nodes and packages.
However, when I run rossrv list, the custom services do not appear. I think this is causing issues when I try to build my Simulink controller, because it cannot find the service message definition.
Does anyone have any idea what is going on?
I was able to fix the problem. While not obvious, the solution was rather simple: I had to slightly change the structure of the package by making the parent package a metapackage, and then do some handling to make sure that the subpackages still had access to the CMake modules needed to locate my external packages. A minimal sketch of the metapackage files is below.
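This is roughly what the two parent files can look like after the change (package names and the maintainer entry are placeholders, not my exact setup):
# Parent/package.xml
<package>
  <name>parent</name>
  <version>0.0.1</version>
  <description>Metapackage grouping the child packages</description>
  <maintainer email="you@example.com">you</maintainer>
  <license>BSD</license>
  <buildtool_depend>catkin</buildtool_depend>
  <run_depend>child_a</run_depend>
  <run_depend>child_b</run_depend>
  <export>
    <metapackage/>
  </export>
</package>
# Parent/CMakeLists.txt
cmake_minimum_required(VERSION 2.8.3)
project(parent)
find_package(catkin REQUIRED)
catkin_metapackage()
The service-generating CMake code (add_service_files, generate_messages, and the find_package calls for the external dependencies) then lives entirely in each child package's own CMakeLists.txt.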
I am having trouble figuring out why my query on a secondary indexed column in Cassandra is getting an rpc_timeout.
Here are the details about my Cassandra version and the table:
[cqlsh 4.1.1 | Cassandra 2.0.7 | CQL spec 3.1.1 | Thrift protocol 19.39.0]
cqlsh:master_hub_development> describe table service_hubs;
CREATE TABLE service_hubs (
id uuid,
host text,
hub_name text,
os text,
owner text,
pubkey text,
service_type text,
trust int,
PRIMARY KEY (id)
) WITH
bloom_filter_fp_chance=0.010000 AND
caching='KEYS_ONLY' AND
comment='' AND
dclocal_read_repair_chance=0.000000 AND
gc_grace_seconds=864000 AND
index_interval=128 AND
read_repair_chance=0.100000 AND
replicate_on_write='true' AND
populate_io_cache_on_flush='false' AND
default_time_to_live=0 AND
speculative_retry='99.0PERCENTILE' AND
memtable_flush_period_in_ms=0 AND
compaction={'class': 'SizeTieredCompactionStrategy'} AND
compression={'sstable_compression': 'LZ4Compressor'};
CREATE INDEX service_hubs_host_idx ON service_hubs (host);
CREATE INDEX service_hubs_hub_name_idx ON service_hubs (hub_name);
CREATE INDEX service_hubs_os_idx ON service_hubs (os);
CREATE INDEX service_hubs_owner_idx ON service_hubs (owner);
CREATE INDEX service_hubs_pubkey_idx ON service_hubs (pubkey);
CREATE INDEX service_hubs_service_type_idx ON service_hubs (service_type);
CREATE INDEX service_hubs_trust_idx ON service_hubs (trust);
cqlsh:master_hub_development> select * from service_hubs;
id | host | hub_name | os | owner | pubkey | service_type | trust
--------------------------------------+----------------+--------------------+--------+---------------+--------+--------------+-------
b9d9bd06-e006-11e3-a1e2-3382b7d578d2 | localhost:3001 | HUB:Darknetdb | Mac os | darknet_admin | null | darknetdb | 90
b9d74918-e006-11e3-a1e2-3382b7d578d2 | localhost:3000 | HUB:Darknetbitcoin | Mac os | darknet_admin | null | bitcoin | 90
b9da9b2c-e006-11e3-a1e2-3382b7d578d2 | localhost:3002 | HUB:Darknetemail | Mac os | darknet_admin | null | email | 90
b9db8596-e006-11e3-a1e2-3382b7d578d2 | localhost:3003 | HUB:Darknetftp | Mac os | darknet_admin | null | ftp | 90
(4 rows)
cqlsh:master_hub_development> select * from service_hubs where host='localhost:3001';
id | host | hub_name | os | owner | pubkey | service_type | trust
--------------------------------------+----------------+---------------+--------+---------------+--------+--------------+-------
b9d9bd06-e006-11e3-a1e2-3382b7d578d2 | localhost:3001 | HUB:Darknetdb | Mac os | darknet_admin | null | darknetdb | 90
(1 rows)
cqlsh:master_hub_development> select * from service_hubs where service_type='darknetdb';
Request did not complete within rpc_timeout.
Here we can see that the query on the secondary index column host succeeds, but the similar query on service_type gets an rpc_timeout error.
I am not able to find the reason why this query times out:
select * from service_hubs where service_type='darknetdb';
Thanks
I want to store a simple ActiveRecord object using memcached. I know I need to first convert the object to JSON before saving it to memcached; my question is how I can pull it out again, deserialize it, and use it as an ActiveRecord relation. Do I have to write a custom parser for the JSON, or am I overlooking some drop-dead easy solution?
The ActiveRecord object looks like this:
+------+-----+-----------+---------------------------------+---------------------+-------+
| id | ppl | exclusive | name | price | spots |
+------+-----+-----------+---------------------------------+---------------------+-------+
| 8948 | 12 | false | 12 Bed Mixed Dorm | 9.0000000000000000 | 12 |
| 8947 | 10 | false | 10 Bed Mixed Dorm | 9.5000000000000000 | 10 |
| 8946 | 6 | false | 6 Bed Mixed Dorm | 10.0000000000000000 | 6 |
| 8945 | 4 | false | Basic 4 Bed Mixed Dorm | 10.0000000000000000 | 4 |
| 8944 | 2 | true | Twin Private Shared Bathroom | 12.0000000000000000 | 1 |
| 8943 | 1 | true | Standard Single Private Ensuite | 15.0000000000000000 | 1 |
+------+-----+-----------+---------------------------------+---------------------+-------+
You shouldn't need to worry about the serialization -- in almost all cases, this can be handled for you:
#Gemfile
gem 'dalli'
#config/environments/production.rb
config.cache_store = :dalli_store, '127.0.0.1' #use memcached
#Get id 1245 from model_names
Rails.cache.fetch("ModelName#1245") do
ModelName.find(1245)
end
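Reading it back later is the same call: if the key is still cached, fetch returns the stored object (already deserialized) without running the block, so there is no JSON handling to write yourself. The key name and expiry below are only examples:
#Later reads hit memcached first; the block only runs on a cache miss
record = Rails.cache.fetch("ModelName#1245", expires_in: 1.hour) do
  ModelName.find(1245)
end
record.price  #behaves like a normal ActiveRecord object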
Is it possible to obtain the status of Bacula backup system Director in some parseable format?
It looks like the human-readable representation (the one you can see when using bacula-console) is formed on the director side during the TCP control connection.
In what language? The easiest way would be to invoke bconsole, send commands on stdin, then parse stdout and stderr.
bconsole is interactive, but if you know the commands in advance, this is not an issue.
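For example, a rough sketch in Python (it assumes bconsole is on the PATH and finds its configuration in the default location):
#run bconsole non-interactively and capture its output
import subprocess

def bconsole(command):
    # feed the command on stdin, followed by "quit" to end the session
    result = subprocess.run(
        ["bconsole"],
        input=command + "\nquit\n",
        capture_output=True,
        text=True,
        timeout=60,
    )
    return result.stdout

for line in bconsole("status director").splitlines():
    print(line)  # still the human-readable text, so parse it line by line
You still have to parse the console text yourself, but at least the round trip is scriptable.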
You can also pull directly from the database, depending on your needs.
Example:
mysql> select JobId, Name, JobStatus from Job ORDER BY JobId DESC Limit 10;
+--------+-------------------------------------+-----------+
| JobId | Name | JobStatus |
+--------+-------------------------------------+-----------+
| 231215 | dbs16 Daily MysqlC XBM Snapshot | T |
| 231214 | dbs09 Daily MysqlS XBM Snapshot | T |
| 231213 | dbs10 Daily MysqlQ XBM Snapshot | T |
| 231212 | dbs11 Daily MysqlT XBM Snapshot | T |
| 231211 | dbs16 Daily MysqlI XBM Snapshot | T |
| 231210 | dbs19 Daily MysqlE XBM Snapshot | T |
| 231209 | dbs18 Daily MysqlB XBM Snapshot | R |
| 231208 | dbs17 Daily MysqlG XBM Snapshot | R |
| 231207 | Daily Catalog Backup | C |
| 231206 | adm6 svnops SVN Backup | R |
+--------+-------------------------------------+-----------+
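The JobStatus column is Bacula's one-letter job status code (for example, T = terminated normally, R = running, C = created but not yet running), so it is straightforward to filter on in a WHERE clause.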
I'm currently trying to implement an LALR parser generator as described in "Compilers: Principles, Techniques, and Tools" (also called the "dragon book").
A lot already works. The parser generator is currently able to generate the full goto-graph.
Example Grammar:
S' --> S
S --> C C
C --> c C
C --> d
Nonterminals: S', S, C
Terminals: c, d
Start: S'
The goto-graph:
I[0]---------------+ I[1]-------------+
| S' --> . S , $ |--S-->| S' --> S . , $ |
| S --> . C C , $ | +----------------+
| C --> . c C , c |
| C --> . c C , d | I[2]--------------+
| C --> . d , c | | S --> C . C , $ | I[3]--------------+
| C --> . d , d |--C-->| C --> . c C , $ |--C-->| S --> C C . , $ |
+------------------+ | C --> . d , $ | +-----------------+
| | +-----------------+
| | +--c--+ | |
| | | | c |
| | | v v |
| | I[4]--------------+ |
| c | C --> c . C , c | |
| | | C --> c . C , d | |
| | | C --> c . C , $ | d
| | | C --> . c C , c | |
| +---->| C --> . c C , d | |
| | C --> . c C , $ | |
d | C --> . d , c |--+ |
| +-----| C --> . d , d | | |
| | | C --> . d , $ | | |
| | +-----------------+ | |
| C | |
| | I[6]--------------+ | |
| | | C --> c C . , c | d |
| +---->| C --> c C . , d | | |
| | C --> c C . , $ | | |
| +-----------------+ | |
| | |
| I[5]------------+ | |
| | C --> d . , c |<---+ |
+------->| C --> d . , d | |
| C --> d . , $ |<-----+
+---------------+
I am having trouble implementing the algorithm to generate the action table!
My algorithm computes the following output:
state | action
      | c  | d  | $
------------------------
0     | s4 | s5 |
------------------------
1     |    |    | acc
------------------------
2     | s4 | s5 |
------------------------
3     |    |    | r?
------------------------
4     | s4 | s5 |
------------------------
5     | r? | r? | r?
------------------------
6     | r? | r? | r?
sx... shift to state x
rx... reduce to state x
The r? means that I don't know how to get the state (the ?) to which the parser should reduce. Does anyone know an algorithm to get ? using the goto-graph above?
If anything is not described clearly enough, please ask and I will try to explain it better!
Thanks for your help!
A shift entry is annotated with the next state, but a reduce entry indicates a production.
When you shift, you push a state reference onto your stack and proceed to the next state.
When you reduce, this is for a specific production. The production was responsible for shifting n states onto your stack, where n is the number of symbols in that production. E.g. one for S', two for S, and two or one for C (i.e. for the first or second alternative for C).
After n entries are popped off the stack, you return to the state where you started processing that production. For that state and the nonterminal resulting from the production, you lookup the goto table to find the next state, which is then pushed.
So the reduce entries identify a production. In fact it may be sufficient to know the resulting nonterminal, and the number of symbols to pop.
Your table thus should read
state | action         | goto
      | c  | d  | $    | C | S
------------------------------------
0     | s4 | s5 |      | 2 | 1
------------------------------------
1     |    |    | acc  |   |
------------------------------------
2     | s4 | s5 |      | 3 |
------------------------------------
3     |    |    | r1   |   |
------------------------------------
4     | s4 | s5 |      | 6 |
------------------------------------
5     | r3 | r3 | r3   |   |
------------------------------------
6     | r2 | r2 | r2   |   |
where rx indicates reduce by production x, with the productions numbered 1: S --> C C, 2: C --> c C, 3: C --> d (the augmented production S' --> S is covered by the accept entry).
You need to pop the stack and find the next state from there.
The rx means: reduce using the production with the number x!
Then everything gets clear!
Simply pop the body of the production and shift the head (via the goto table) back onto the top!
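To see how the finished tables are used, here is a small sketch of the parse loop in Python, hard-coding the tables above for the example grammar (the dictionary encoding is just one possible representation, not part of your generator):
# ACTION[state][terminal] is ('s', next_state), ('r', production) or ('acc',)
ACTION = {
    0: {'c': ('s', 4), 'd': ('s', 5)},
    1: {'$': ('acc',)},
    2: {'c': ('s', 4), 'd': ('s', 5)},
    3: {'$': ('r', 1)},
    4: {'c': ('s', 4), 'd': ('s', 5)},
    5: {'c': ('r', 3), 'd': ('r', 3), '$': ('r', 3)},
    6: {'c': ('r', 2), 'd': ('r', 2), '$': ('r', 2)},
}
GOTO = {0: {'S': 1, 'C': 2}, 2: {'C': 3}, 4: {'C': 6}}
# production number -> (head nonterminal, number of symbols in the body)
PRODUCTIONS = {1: ('S', 2), 2: ('C', 2), 3: ('C', 1)}

def parse(tokens):
    tokens = list(tokens) + ['$']
    stack = [0]                                  # stack of states
    pos = 0
    while True:
        action = ACTION[stack[-1]].get(tokens[pos])
        if action is None:
            raise SyntaxError('unexpected ' + tokens[pos])
        if action[0] == 's':                     # shift: push next state
            stack.append(action[1])
            pos += 1
        elif action[0] == 'r':                   # reduce: pop |body| states,
            head, length = PRODUCTIONS[action[1]]
            del stack[len(stack) - length:]      # then push GOTO[top][head]
            stack.append(GOTO[stack[-1]][head])
        else:                                    # accept
            return True

parse('ccdd')  # returns True: the input c c d d is accepted
So the ? entries are production numbers, not states: the next state comes afterwards from the goto table.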