I'm experimenting with Frida, and some of the simplest examples I've found do not work as expected on macOS. Here is an example.
Consider this C code:
#include <stdio.h>
#include <unistd.h>

void print_hello(int n, char a, float f) {
    printf("hello %d %c %f\n", n, a, f);
}

int main(int argc, char *argv[]) {
    while (1) {
        print_hello(10, 'a', 3.141f);
        sleep(1);
    }
    return 0;
}
It's obvious what it does.
Now here's a Frida Python launcher:
#!/usr/bin/env python3
import frida

def on_message(message, data):
    print(message)

pid = frida.spawn('./a.out')
session = frida.attach(pid)
script = session.create_script("""
Interceptor.attach(Module.findExportByName(null, 'print_hello'), {
    onEnter(args) {
        send('enter')
    },
    onLeave(retval) {
        send('leave')
    }
})
""")
script.on('message', on_message)
script.load()
frida.resume(pid)
One would expect this to print enter/leave every second. Instead, here are some of the results I get:
% ./trace.py
hello 10 a 3.141000
{'type': 'send', 'payload': 'enter'}
hello 10 a 3.141000
hello 10 a 3.141000
hello 10 a 3.141000
hello 10 a 3.141000
hello 10 a 3.141000
hello 10 a 3.141000
hello 10 a 3.141000
hello 10 a 3.141000
hello 10 a 3.141000
^C
% ./trace.py
hello 10 a 3.141000
{'type': 'send', 'payload': 'enter'}
hello 10 a 3.141000
hello 10 a 3.141000
hello 10 a 3.141000
^C
% ./trace.py
hello 10 a 3.141000
{'type': 'send', 'payload': 'enter'}
hello 10 a 3.141000
^C
% ./trace.py
hello 10 a 3.141000
{'type': 'send', 'payload': 'enter'}
{'type': 'send', 'payload': 'leave'}
% ./trace.py
hello 10 a 3.141000
{'type': 'send', 'payload': 'enter'}
hello 10 a 3.141000
hello 10 a 3.141000
^C
% ./trace.py
hello 10 a 3.141000
{'type': 'send', 'payload': 'enter'}
hello 10 a 3.141000
hello 10 a 3.141000
hello 10 a 3.141000
^C
On that fourth invocation it exited without ^C.
There's nothing keeping trace.py alive, so the instrumentation is reverted once execution reaches the end of the script, as part of the Python interpreter shutting down. You can add a call to sys.stdin.read() at the end of the script to avoid this.
I am trying to pass both a command string and separate arguments from an input file to GNU parallel. My script looks like this:
parallel="parallel --delay 0.2 -j 100 --joblog remaining_runs_$1.log --resume "
$srun $parallel {python3 scaling.py {1} {2} {3}} < missing_runs_$1.txt
The Python script takes 3 separate integers as arguments, listed in missing_runs_$1.txt like so:
1 1 153
1 1 154
1 1 155
1 1 156
1 1 157
1 1 158
...
I have tried using --colsep, but then only the arguments from the file are passed to parallel, and the python3 scaling.py part goes missing. Without --colsep, each line of the file is passed as a single string, which is not what I want either (e.g., python3 scaling.py '1 1 153'). Any ideas?
Based on your input sample, I created a reproducible example to test this issue:
A simple Python script:
#!/usr/bin/python
import sys

for i in range(1, len(sys.argv)):
    print(f'The argument number {i} is {sys.argv[i]}.')
And a simplified command line:
parallel --dry-run -j 100 --colsep ' ' ./python.py {1} {2} {3} :::: missing_runs_1.txt
./python.py 1 1 153
./python.py 1 1 154
./python.py 1 1 155
./python.py 1 1 156
./python.py 1 1 157
./python.py 1 1 158
without --dry-run:
The argument number 1 is 1.
The argument number 2 is 1.
The argument number 3 is 153.
The argument number 1 is 1.
The argument number 2 is 1.
The argument number 3 is 154.
The argument number 1 is 1.
The argument number 2 is 1.
The argument number 3 is 155.
The argument number 1 is 1.
The argument number 2 is 1.
The argument number 3 is 156.
The argument number 1 is 1.
The argument number 2 is 1.
The argument number 3 is 157.
The argument number 1 is 1.
The argument number 2 is 1.
The argument number 3 is 158.
Using all arguments from your parallel command, in the file remaining_runs_1.log, I got:
Seq Host Starttime JobRuntime Send Receive Exitval Signal Command
1 : 1630591288.009 0.021 0 86 0 0 ./python.py 1 1 153
2 : 1630591288.220 0.040 0 86 0 0 ./python.py 1 1 154
3 : 1630591288.422 0.035 0 86 0 0 ./python.py 1 1 155
4 : 1630591288.649 0.041 0 86 0 0 ./python.py 1 1 156
5 : 1630591288.859 0.042 0 86 0 0 ./python.py 1 1 157
6 : 1630591289.081 0.040 0 86 0 0 ./python.py 1 1 158
I think this solves the problem, or at least offers new ideas toward the definitive solution.
If
parallel --delay 0.2 -j 100 --joblog curtailment_scaling_remaining_$1.log --resume python3 scaling.py {1} {2} {3} :::: < missing_runs_$1.txt
gives you:
python3 scaling.py '1 1 163'
and you want:
python3 scaling.py 1 1 163
you can do (version > 20190722):
parallel --delay 0.2 -j 100 --joblog curtailment_scaling_remaining_$1.log --resume python3 scaling.py {=uq=} < missing_runs_$1.txt
(uq runs uq(), which causes the replacement string not to be quoted.)
or:
parallel --delay 0.2 -j 100 --joblog curtailment_scaling_remaining_$1.log --resume eval python3 scaling.py {} < missing_runs_$1.txt
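Either way, the goal is for scaling.py to see three separate argv entries rather than one quoted string. The difference is easy to demonstrate without parallel at all; here's a small Python sketch (the inline child script is a hypothetical stand-in for scaling.py):

```python
import subprocess
import sys

# Stand-in for scaling.py (hypothetical): just print the arguments it received.
child = 'import sys; print(sys.argv[1:])'

# Without --colsep, the whole line arrives as ONE quoted argument:
one = subprocess.run([sys.executable, '-c', child, '1 1 153'],
                     capture_output=True, text=True).stdout
# With --colsep (or uq), the fields arrive as THREE arguments:
three = subprocess.run([sys.executable, '-c', child, '1', '1', '153'],
                       capture_output=True, text=True).stdout
print(one.strip())    # ['1 1 153']
print(three.strip())  # ['1', '1', '153']
```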
We have two servers, A and B.
When we enqueue Active Jobs, Server A shows in the Sidekiq web interface that a job was created a minute ago, but Server B shows that the job was created 3 hours ago.
The same is true for
set(wait: 10.hours).perform_later(object_id)
=> Server A in Sidekiq Web UI: In 10 hours
=> Server B in Sidekiq Web UI: In 7 hours
Servers A and B both return the same date and /etc/timezone:
$ date
Mo 9. Aug 15:32:14 CEST 2021
$ cat /etc/timezone
Europe/Berlin
We have a test job to reproduce the problem:
class TimeTestJob < ApplicationJob
  queue_as :default

  def perform(time)
    puts "Time in Sidekiq: #{Time.now.to_s}"
    puts "DateTime in Sidekiq: #{DateTime.now.to_s}"
    puts "time args: #{time.to_s}"
    ...
  end

end
The output is identical on both servers:
TimeTestJob.perform_now(Time.now.to_s)
Performing TimeTestJob (Job ID: b542c55d-7957-48db-83b8-7d0620accb6f) from Sidekiq(default) enqueued at with arguments: "2021-08-09 15:32:14 +0200"
Time in Sidekiq: 2021-08-09 15:32:14 +0200
DateTime in Sidekiq: 2021-08-09T15:32:14+02:00
time args: 2021-08-09 15:32:14 +0200
Enqueued TimeTestJob (Job ID: d3d9b7e7-2d97-4700-9c93-9e61f9fbeae5) to Sidekiq(default) at 2021-08-09 23:33:14 UTC with arguments: "2021-08-09 15:32:14 +0200"
...
Any ideas what could cause the time offset on Server B?
Running Sidekiq 6.1.0 on Rails 5.0.7.2 with Sidekiq concurrency set to 12, Redis server 4.0.14 and Redis gem 4.2.1.
There seems to be some delay (seconds) between queueing/executing jobs that we didn't see with Rails 4.2.9. Note that this behavior only occurs in development; production seems to do just fine.
An example worker:
class SidekiqTestWorker
  include Sidekiq::Worker

  sidekiq_options(
    queue: "default",
  )

  def perform
    puts "Hello from Sidekiq!"
  end
end
Running 1000.times { SidekiqTestWorker.perform_async } in a Rails console takes around a second to execute all jobs with Rails 4.2.9, but with Rails 5.0.7.2 it takes several minutes to complete. It's worth mentioning that we used the same Sidekiq version (5.2.8) in both tries, with only the Rails version differing, and got the same result.
A snippet from the Sidekiq worker logs shows the behavior (note the timestamp):
16:52:00 sidekiq_worker.1 | Hello from Sidekiq!
16:52:00 sidekiq_worker.1 | Hello from Sidekiq!
16:52:00 sidekiq_worker.1 | Hello from Sidekiq!
16:52:00 sidekiq_worker.1 | Hello from Sidekiq!
16:52:00 sidekiq_worker.1 | Hello from Sidekiq!
16:52:00 sidekiq_worker.1 | Hello from Sidekiq!
16:52:00 sidekiq_worker.1 | Hello from Sidekiq!
16:52:00 sidekiq_worker.1 | Hello from Sidekiq!
16:52:00 sidekiq_worker.1 | Hello from Sidekiq!
16:52:00 sidekiq_worker.1 | Hello from Sidekiq!
16:52:00 sidekiq_worker.1 | Hello from Sidekiq!
16:52:01 sidekiq_worker.1 | Hello from Sidekiq!
16:52:04 sidekiq_worker.1 | Hello from Sidekiq!
16:52:04 sidekiq_worker.1 | Hello from Sidekiq!
16:52:04 sidekiq_worker.1 | Hello from Sidekiq!
16:52:04 sidekiq_worker.1 | Hello from Sidekiq!
16:52:04 sidekiq_worker.1 | Hello from Sidekiq!
16:52:04 sidekiq_worker.1 | Hello from Sidekiq!
16:52:04 sidekiq_worker.1 | Hello from Sidekiq!
16:52:04 sidekiq_worker.1 | Hello from Sidekiq!
16:52:04 sidekiq_worker.1 | Hello from Sidekiq!
16:52:04 sidekiq_worker.1 | Hello from Sidekiq!
16:52:04 sidekiq_worker.1 | Hello from Sidekiq!
16:52:05 sidekiq_worker.1 | Hello from Sidekiq!
16:52:05 sidekiq_worker.1 | Hello from Sidekiq!
16:52:05 sidekiq_worker.1 | Hello from Sidekiq!
Any ideas what can be done to fix this?
In the Sidekiq 5.0 release notes:
Sidekiq 5.0 contains a reworked job dispatch and execution core to integrate better with the new Rails 5.0 Executor.
The Rails Executor is single-threaded in development mode so that it can hot-reload job code changes; as a result, Sidekiq can only execute one job at a time.
The only way to fix this is to enable eager loading in config/environments/development.rb, but that will disable code reloading too.
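If that trade-off is acceptable, the change is small. A sketch of the relevant setting for a stock Rails app (assuming the standard config layout):

```ruby
# config/environments/development.rb
Rails.application.configure do
  # Eager loading lets the Executor dispatch jobs concurrently again,
  # at the cost of hot code reloading in development.
  config.eager_load = true
end
```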
I am trying to upload the artifact Report-0.0.1-SNAPSHOT.jar to a Nexus repository from Jenkins, but I keep getting the error below:
> 10 % completed (6.6 MB / 66 MB). 20 % completed (13 MB / 66 MB). 30 %
> completed (20 MB / 66 MB). 40 % completed (26 MB / 66 MB). 50 %
> completed (33 MB / 66 MB). 60 % completed (40 MB / 66 MB). 70 %
> completed (46 MB / 66 MB). 80 % completed (53 MB / 66 MB). 90 %
> completed (59 MB / 66 MB). 100 % completed (66 MB / 66 MB). 110 %
> completed (73 MB / 66 MB). 120 % completed (79 MB / 66 MB). [Pipeline]
> echo Nexus Upload Failed:
> [sp.sd.nexusartifactuploader.steps.NexusArtifactUploaderStep$Execution.run(NexusArtifactUploaderStep.java:259),
> sp.sd.nexusartifactuploader.steps.NexusArtifactUploaderStep$Execution.run(NexusArtifactUploaderStep.java:217),
> org.jenkinsci.plugins.workflow.steps.AbstractSynchronousNonBlockingStepExecution$1$1.call(AbstractSynchronousNonBlockingStepExecution.java:47),
> hudson.security.ACL.impersonate(ACL.java:290),
> org.jenkinsci.plugins.workflow.steps.AbstractSynchronousNonBlockingStepExecution$1.run(AbstractSynchronousNonBlockingStepExecution.java:44),
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511),
> java.util.concurrent.FutureTask.run(FutureTask.java:266),
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142),
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617),
> java.lang.Thread.run(Thread.java:745)] [Pipeline] } [Pipeline] //
> stage [Pipeline] } [Pipeline] // node [Pipeline] End of Pipeline
> Finished: SUCCESS
Below is my pipeline script:
echo "***** Uploading to Nexus *****"
// Working but unable to upload
try {
    nexusArtifactUploader artifacts: [[artifactId: 'Report-0.0.1-SNAPSHOT', classifier: '', file: 'Report-0.0.1-SNAPSHOT.jar', type: 'jar']], groupId: 'com.mycompany.myproject', nexusUrl: 'url-to-nexus/nexus/', nexusVersion: 'nexus2', protocol: 'https', repository: 'mycompany-xxx-yyy-zzz-hosted', version: '0.0.1-SNAPSHOT'
    echo 'Succeeded!'
} catch (err) {
    echo "Nexus Upload Failed: ${err.stackTrace}"
}
I have had good experience with https://jenkins.io/doc/pipeline/steps/nexus-jenkins-plugin/
This plugin is supported by Sonatype. Maybe it is a workaround for you.
I am interested in learning about the deflate compression algorithm, particularly how it is represented in a data stream, and I feel that I would greatly benefit from some extra examples (e.g. the compression of a short string of text, or the decompression of a compressed chunk).
I am continuing to study some resources I have found (ref1, ref2, ref3), but these do not have many examples of what the actual compression looks like as a data stream.
If I could get a few examples of how some strings look before and after being compressed, along with an explanation of the relationship between them, that would be fantastic.
Also, if there are other resources I could be looking at, please add those.
You can compress example data with gzip or zlib and use infgen to disassemble and examine the resulting compressed data. infgen also has an option to show the detail in the dynamic headers.
+1 for infgen, but here's a slightly more detailed answer.
You can take a look at the before and after using gzip and any hex editor; for example, xxd is included in most Linux distros. I've included both the raw hex output (not that interesting without the understanding) and infgen's output.
hello hello hello hello (triggers static Huffman coding, like most short strings):
~ $ echo -n "hello hello hello hello" | gzip | xxd
00000000: 1f8b 0800 0000 0000 0003 cb48 cdc9 c957 ...........H...W
00000010: c840 2701 e351 3d8d 1700 0000 .@'..Q=.....
~ $ echo -n "hello hello hello hello" | gzip | ./infgen/a.out -i
! infgen 2.4 output
!
gzip
!
last
fixed
literal 'hello h
match 16 6
end
!
crc
length
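The same experiment can be reproduced from Python's zlib module, which uses the same deflate engine as gzip. A small sketch (standard library only) that emits the raw deflate stream, skipping the gzip wrapper, and decodes the 3-bit block header:

```python
import zlib

def raw_deflate(data: bytes, level: int = 9) -> bytes:
    # Negative wbits -> raw DEFLATE stream, no zlib/gzip wrapper.
    co = zlib.compressobj(level=level, wbits=-15)
    return co.compress(data) + co.flush()

data = b"hello hello hello hello"
stream = raw_deflate(data)
first = stream[0]
print("BFINAL:", first & 1)         # 1 -> last block
print("BTYPE: ", (first >> 1) & 3)  # 0=stored, 1=fixed Huffman, 2=dynamic
assert zlib.decompress(stream, wbits=-15) == data  # round-trip check
```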
\xff\xfe\xfd\xfc\xfb\xfa\xf9\xf8\xf7\xf6\xf5\xf4\xf3\xf2\xf1 (triggers uncompressed mode)
~ $ echo -ne "\xff\xfe\xfd\xfc\xfb\xfa\xf9\xf8\xf7\xf6\xf5\xf4\xf3\xf2\xf1" | gzip | xxd
00000000: 1f8b 0800 0000 0000 0003 010f 00f0 ffff ................
00000010: fefd fcfb faf9 f8f7 f6f5 f4f3 f2f1 c6d3 ................
00000020: 157e 0f00 0000 .~....
~ $ echo -ne "\xff\xfe\xfd\xfc\xfb\xfa\xf9\xf8\xf7\xf6\xf5\xf4\xf3\xf2\xf1" | gzip | ./infgen/a.out -i
! infgen 2.4 output
!
gzip
!
last
stored
data 255 254 253 252 251 250 249 248 247 246 245 244 243 242 241
end
!
crc
length
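The stored-block layout (3 header bits, padding to a byte boundary, then little-endian LEN, its one's-complement NLEN, and the raw bytes) can be checked the same way. A sketch using only Python's standard library; level 0 forces zlib to emit stored blocks:

```python
import zlib

def raw_deflate_stored(data: bytes) -> bytes:
    # level=0 -> no compression: zlib emits stored blocks.
    co = zlib.compressobj(level=0, wbits=-15)
    return co.compress(data) + co.flush()

payload = bytes(range(255, 240, -1))  # \xff \xfe ... \xf1, as in the example
stream = raw_deflate_stored(payload)

first = stream[0]
print("BFINAL:", first & 1, "BTYPE:", (first >> 1) & 3)  # BTYPE 0 = stored

length = stream[1] | (stream[2] << 8)  # LEN, little-endian
nlen = stream[3] | (stream[4] << 8)    # NLEN, one's complement of LEN
assert length == len(payload)
assert nlen == length ^ 0xFFFF
assert stream[5:5 + length] == payload  # stored data follows verbatim
```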
abaabbbabaababbaababaaaabaaabbbbbaa (triggers dynamic huffman coding)
~ $ echo -n "abaabbbabaababbaababaaaabaaabbbbbaa" | gzip | xxd
00000000: 1f8b 0800 0000 0000 0003 1dc6 4901 0000 ............I...
00000010: 1040 c0ac a37f 883d 3c20 2a97 9d37 5e1d .@.....=< *..7^.
00000020: 0c6e 2934 9423 0000 00 .n)4.#...
~ $ echo -n "abaabbbabaababbaababaaaabaaabbbbbaa" | gzip | ./infgen/a.out -i -d
! infgen 2.4 output
!
gzip
!
last
dynamic
count 260 7 18
code 1 4
code 2 1
code 4 4
code 16 4
code 17 4
code 18 2
zeros 97
lens 1 2
zeros 138
zeros 19
lens 4
repeat 3
lens 2
zeros 3
lens 2 2 2
! litlen 97 1
! litlen 98 2
! litlen 256 4
! litlen 257 4
! litlen 258 4
! litlen 259 4
! dist 0 2
! dist 4 2
! dist 5 2
! dist 6 2
literal 'abaabbba
match 4 7
match 3 9
match 5 6
literal 'aaa
match 5 5
literal 'b
match 4 1
literal 'aa
end
!
crc
length
I found infgen was still not enough detail to fully understand the format, so I walk through decompressing all three of these examples bit by bit, by hand, in detail on my blog.
For concepts, in addition to RFC 1951 (DEFLATE), which is pretty good, I would recommend Feldspar's conceptual overview of Huffman codes and LZ77 in DEFLATE.