I have some example Avro messages from a Kafka provider that look to start like this:
00000000 4f 62 6a 01 04 16 61 76 72 6f 2e 73 63 68 65 6d |Obj...avro.schem|
00000010 61 ef bf bd 24 7b 22 74 79 70 65 22 3a 22 72 65 |a...${"type":"re|
I expected that ef bf bd 24 to be the length of the schema, which is 2332 bytes. I'm having trouble confirming that the zigzag varint (why would a length, which can never be negative, be zigzag-encoded?) is the right value; I take it to be somewhere in the 200K range.
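To sanity-check my reading of the varint, here is a minimal zigzag-varint decoder sketch in Python (the helper names are my own). It reads those four bytes as a large negative number, not anything in the 200K range, and shows for comparison what a 2332-byte schema should be prefixed with:

def read_zigzag_varint(buf):
    """Decode one Avro zigzag varint from the start of a byte sequence."""
    n = shift = 0
    for i, b in enumerate(buf):
        n |= (b & 0x7F) << shift
        shift += 7
        if not b & 0x80:                      # high bit clear: final byte
            return (n >> 1) ^ -(n & 1), i + 1
    raise ValueError("truncated varint")

def write_zigzag_varint(v):
    """Encode a value the way Avro would."""
    n = v << 1 if v >= 0 else (-v << 1) - 1
    out = bytearray()
    while n >= 0x80:
        out.append((n & 0x7F) | 0x80)
        n >>= 7
    out.append(n)
    return bytes(out)

print(read_zigzag_varint(bytes.fromhex("efbfbd24")))  # (-38252536, 4)
print(write_zigzag_varint(2332).hex())                # 'b824'

Notably, ef bf bd is exactly the UTF-8 encoding of U+FFFD (the replacement character), and the expected first byte b8 is not valid standalone UTF-8, so it looks as though the b8 24 prefix may have been run through a UTF-8 text decode somewhere. A negative decoded length would also explain ByteBuffer.allocate rejecting it with IllegalArgumentException in the stack trace below.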
I believe that's why I'm having trouble using the avro-tools jar on it at all, whether to getmeta, getschema, or convert to JSON.
Is this a known issue with this version of Avro Tools (1.8.2), or with the platform (macOS with Java 1.8.0_102-b14)?
Does this look like it has been mis-encoded? Every call to the tools gives me:
$ java -jar ~/Downloads/avro-tools-1.8.2.jar tojson dt20170607hr08_1496793109907_11_8229967.bin.1
log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Exception in thread "main" java.lang.IllegalArgumentException
at java.nio.ByteBuffer.allocate(ByteBuffer.java:334)
at org.apache.avro.io.BinaryDecoder.readBytes(BinaryDecoder.java:288)
at org.apache.avro.file.DataFileStream.initialize(DataFileStream.java:112)
at org.apache.avro.file.DataFileStream.<init>(DataFileStream.java:84)
at org.apache.avro.tool.DataFileReadTool.run(DataFileReadTool.java:71)
at org.apache.avro.tool.Main.run(Main.java:87)
at org.apache.avro.tool.Main.main(Main.java:76)
Looks like you have a single record in the Avro file, and the system generating the Avro file is running an older version of Avro. I had a similar issue with NiFi running Avro 1.7.7. By merging two records into the Avro file, we were able to work around the issue.
Avro 1.8.2 fixes the bug.
1.7.7 and 1.8.0/1.8.1 all have the single record issue.
https://issues.apache.org/jira/browse/AVRO-1888
Our td-agent encounters a "fail to flush the buffer" issue; the workaround is to delete the buffer file (we back it up first for troubleshooting). After deletion the issue was gone.
I looked into the buffer file, hoping to find which particular log entry caused the issue. However, I found it hard to understand, and some of the output is not text.
xxxxxxxxg:~/buffer# hexdump -C buffer.q5e65a46f3e10a05be5ab55e76674e195.log
00000000 93 a6 73 79 73 6c 6f 67 ce 62 fb 83 37 b3 66 6f |..syslog.b..7.fo|
00000010 6f 20 68 74 74 70 5f 6e 6f 72 6d 61 6c 69 7a 65 |o http_normalize|
00000020 64 |d|
What does the ".b..7" between "syslog" and ".fo" mean?
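One guess: I understand td-agent (Fluentd) stores buffered events as MessagePack, so the unreadable bytes between the strings may just be msgpack type tags and binary values. A quick decode sketch of the dumped bytes in Python (using the third-party msgpack package):

import msgpack  # pip install msgpack

chunk = bytes.fromhex(
    "93a67379736c6f67"                          # 93 = array(3), a6 = str(6): "syslog"
    "ce62fb8337"                                # ce = uint32: 1660650295, a Unix timestamp
    "b3666f6f20687474705f6e6f726d616c697a6564"  # b3 = str(19): "foo http_normalized"
)
print(msgpack.unpackb(chunk))
# ['syslog', 1660650295, 'foo http_normalized']

If that's right, the ".b..7" between the strings is a binary timestamp, not corrupted text.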
Could anyone help here? Many thanks!
I am getting the following errors while running my Ruby on Rails 6 app locally (Ubuntu 18.04, using Chrome):
CardanoSerializationLib.ts:15 Uncaught (in promise) CompileError: WebAssembly.instantiate(): expected magic word 00 61 73 6d, found 3c 21 44 4f @+0
and
GET http://localhost:3000/wasm/csl-v10.0.4.wasm 404 (Not Found)
and
`WebAssembly.instantiateStreaming` failed because your server does not serve wasm with `application/wasm` MIME type. Falling back to `WebAssembly.instantiate` which is slower. Original error:
TypeError: Failed to execute 'compile' on 'WebAssembly': HTTP status code is not ok
It seems to be affecting Webpacker, and my pages cannot render.
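One clue: decoding the byte values from the CompileError shows that the unexpected "magic word" is just the start of an HTML page, which lines up with the 404 (the server is returning an HTML error page instead of the .wasm binary):

print(bytes.fromhex("0061736d"))  # b'\x00asm' - the expected WebAssembly magic
print(bytes.fromhex("3c21444f"))  # b'<!DO'    - the start of "<!DOCTYPE html>"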
It looks like disabling the Eternl wallet extension might be a temporary fix while the team looks into it:
https://github.com/ccwalletio/tracker/issues/119
I've just deployed a CI system based on Jenkins plus SonarQube. Once the Jenkins SonarScanner stage of the pipeline starts, I see a lot of messages such as the following:
WARN: Invalid character encountered in file /var/jenkins_home/workspace/Pipeline Test/code/..../CodigoSitioDAO.java at line 3 for encoding UTF-8. Please fix file content or configure the encoding to be used using property 'sonar.sourceEncoding'.
Well, my SonarScanner invocation is:
sh "${scannerHome}/bin/sonar-scanner \
-Dsonar.sourceEnconding=UTF-8 \
-Dsonar.projectKey=My_Project \
-Dsonar.sources=. \
-Dsonar.java.binaries=. \
-Dsonar.nodejs.executable=. \
-Dsonar.login=c9bb378b2380af844c7465424933b942d10f5d18 \
-Dsonar.host.url=http://sonarqube:9000"
}
So, once I checked the mentioned file, what I see at line 3 is something that I think has nothing to do with the warning messages: import java.sql.Connection;
Having also configured -Dsonar.sourceEncoding=UTF-8, I have to say that I don't know what is happening.
Could anyone of you help me?
It looks to me like the file is not in UTF-8. It's quite possible (especially if you used a Windows editor) that the file is saved in some platform-specific encoding and its contents are not valid UTF-8. Consider the following:
Clase de implementación del DAO de los códigos
The same content has been saved as ANSI and as UTF-8 (explicitly selected upon save). Now if you compare the byte contents, they are not the same:
$ hexdump -C test-ansi.txt
00000000 43 6c 61 73 65 20 64 65 20 69 6d 70 6c 65 6d 65 |Clase de impleme|
00000010 6e 74 61 63 69 f3 6e 20 64 65 6c 20 44 41 4f 20 |ntaci.n del DAO |
00000020 64 65 20 6c 6f 73 20 63 f3 64 69 67 6f 73 |de los c.digos|
0000002e
$ hexdump -C test-utf8.txt
00000000 43 6c 61 73 65 20 64 65 20 69 6d 70 6c 65 6d 65 |Clase de impleme|
00000010 6e 74 61 63 69 c3 b3 6e 20 64 65 6c 20 44 41 4f |ntaci..n del DAO|
00000020 20 64 65 20 6c 6f 73 20 63 c3 b3 64 69 67 6f 73 | de los c..digos|
00000030
Note that the same character ó is encoded as f3 in ANSI and as c3 b3 in UTF-8. I believe this is what your message is about: the declared encoding is UTF-8 (possibly the default), but the byte f3 is invalid under this encoding. Please double-check your file's encoding with a hex editor.
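You can reproduce the whole situation in a few lines of Python; a minimal sketch, with cp1252 standing in for the Windows "ANSI" code page:

# The same character in the two encodings discussed above
print("ó".encode("cp1252").hex())   # f3    - Windows "ANSI"
print("ó".encode("utf-8").hex())    # c3b3  - UTF-8

# And why the scanner complains: a lone f3 byte is not valid UTF-8
try:
    b"\xf3".decode("utf-8")
except UnicodeDecodeError as err:
    print(err)   # 'utf-8' codec can't decode byte 0xf3 in position 0 ...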
Side note: if you copied and pasted your line directly from your file, note that Stack Overflow was not able to decode it either, confirming it's not UTF-8-encoded.
I'm trying to use Bosun to determine whether certain processes are running, and eventually to alert on whether they are up or down. I'm probably misinterpreting the docs, but I can't figure this out.
Bosun is running just fine. I have scollector running on Ubuntu 14.04 LTS, and it is picking up my config file correctly.
Here is what I have in my scollector.toml:
host="blah:8070"
hostname="cass01"
[[Process]]
command = "^.*(java).*(CassandraDaemon)$"
name = "Cassandra"
I would then expect to see in Bosun, under my host cass01, a metric titled "cassandra" somewhere, but it's nowhere to be seen. Other metrics are there.
Right now Command is a partial match on the process path to the binary, up to the first space delimiter. The Args parameter is a regex to differentiate between multiple instances of the process. So for a java process you would use something like:
[[Process]]
Command = "java"
Name = "Cassandra"
Args = "CassandraDaemon$"
This would match a command line like:
/usr/bin/java /usr/bin/CassandraDaemon
This assumes the /proc/<pid>/cmdline for that process ends in CassandraDaemon. If it doesn't end in that string, you would need to change Args to just "CassandraDaemon", which would match any java process whose command line contains that string.
Also, some processes change their cmdline to something other than a NUL-delimited string. In those cases the Command argument needs to be used to match, as Args expects NUL delimiters. Example:
cat /proc/80156/cmdline | hexdump -C
00000000 2f 75 73 72 2f 62 69 6e 2f 72 65 64 69 73 2d 73 |/usr/bin/redis-s|
00000010 65 72 76 65 72 20 2a 3a 36 33 37 39 00 00 00 00 |erver *:6379....|
00000020 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
00000030 00 |.|
00000031
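If you want to check which case a given process falls into, you can inspect its cmdline directly; a quick Python sketch (the PID is just the example from above):

from pathlib import Path

raw = Path("/proc/80156/cmdline").read_bytes()   # substitute a real PID
print(raw.split(b"\x00"))
# NUL-delimited:     [b'/usr/bin/java', b'/usr/bin/CassandraDaemon', b'']
# rewritten (redis): [b'/usr/bin/redis-server *:6379', b'', b'', ...]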
#Example for cmdline without NUL (00) delimiters between args
[[Process]]
Command = "redis-server *:6379"
Name = "redis-core"
Once these are in place with the correct matching values, you should see metrics show up under linux.proc.*, where the name tag will match the name used in the TOML file.
I'm trying to use canplayer to replay some candump files, with no success. When I try to run canplayer it just executes and returns, giving me no clue about what is happening.
What I've tried up to now:
Setup 1
Set up a vcan interface
Sent data to the vcan interface using cansend and cangen, with success (verified with candump).
Recorded a candump file from vcan0 when running cangen. Recorded files with absolute timestamps (-t a) and without.
Tried using canplayer to reproduce the file, with several different arguments, to no avail. canplayer just returns immediately without any complaint. If I mess up the file or the arguments, it complains.
Setup 2
Connected 2 PEAK CAN USB adapter devices to the PC.
Connected the adapters to each other using a 120 ohm terminated cable.
Started cangen pointing to can0 and verified that the messages got to can1 using candump
Recorded candump files from can0 interface.
Tried using canplayer to reproduce the file, with no success.
I've tried these tests on two different machines, both running Ubuntu 12.04, with the same results.
Do you know what might be the cause of it?
It turns out I had recorded the log files in the wrong way.
My log files were recorded using the following command:
$ candump -ta vcan0 > log.candump
That command, however, records the log in a human-readable format:
vcan0 1B3 [8] 8E 02 74 22 55 70 49 30
vcan0 658 [6] 27 48 2C 56 14 0A
vcan0 1F8 [2] 77 99
vcan0 7B7 [8] 33 A2 24 38 B2 78 86 72
vcan0 43C [8] 92 C6 81 2E FC 5E 38 35
vcan0 7B0 [2] 2D 1B
In order to record log files that can be played back with canplayer, they should be recorded using either
$ candump -l vcan0
or
$ candump -L vcan0 > myfile.log
The recorded file will look like this:
(1436509052.249713) vcan0 044#2A366C2BBA
(1436509052.449847) vcan0 0F6#7ADFE07BD2
(1436509052.650004) vcan0 236#C3406B09F4C88036
(1436509052.850131) vcan0 6F1#98508676A32734
(1436509053.050284) vcan0 17F#C7
(1436509053.250417) vcan0 25B#6EAAC56C77D15E27
(1436509053.450557) vcan0 56E#46F02E79A2B28C7C
(1436509053.650713) vcan0 19E#6FE1CB7DE2218456
(1436509053.850870) vcan0 1A0#9C20407F96EA167B
(1436509054.051025) vcan0 6DE#68FF147114D1
Files in that format can be replayed with canplayer using the following commands:
$ canplayer -I candump-2015-07-10_081824.log
or
$ cat candump-2015-07-10_081824.log | canplayer
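As a side note, the -l/-L format is easy to post-process if you ever need to; here is a minimal parsing sketch in Python (the regex is my own approximation of the format shown above):

import re

LOG_LINE = re.compile(
    r"\((?P<ts>[\d.]+)\)\s+(?P<iface>\S+)\s+(?P<can_id>[0-9A-Fa-f]+)#(?P<data>[0-9A-Fa-f]*)"
)

m = LOG_LINE.match("(1436509052.249713) vcan0 044#2A366C2BBA")
print(float(m["ts"]), m["iface"], int(m["can_id"], 16), bytes.fromhex(m["data"]))
# 1436509052.249713 vcan0 68 b'*6l+\xba'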
Credits for this answer go to Oliver Hartkopp.
I found this article with some great information about using can and vcan. After taking a log of a physical CAN bus with candump using
$ candump -l can0
as described in the previous answer, I used the following to play it back over a virtual CAN bus:
$ canplayer vcan0=can0 -I candump-may-14-2015.log