frida.TimedOutError: unexpectedly timed out while initializing suspended process - iOS

Describe the bug
A timeout occurs when running objection explore.
To Reproduce
Steps to reproduce the behavior:
Run the command objection --gadget "com.apple.AppStore" explore
Evidence / Logs / Screenshots
Using USB device `iPhone`
Traceback (most recent call last):
File "/usr/local/bin/objection", line 8, in <module>
sys.exit(cli())
File "/usr/local/lib/python3.9/site-packages/click/core.py", line 1130, in __call__
return self.main(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/click/core.py", line 1055, in main
rv = self.invoke(ctx)
File "/usr/local/lib/python3.9/site-packages/click/core.py", line 1657, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/usr/local/lib/python3.9/site-packages/click/core.py", line 1404, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/local/lib/python3.9/site-packages/click/core.py", line 760, in invoke
return __callback(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/objection/console/cli.py", line 114, in explore
agent.inject()
File "/usr/local/lib/python3.9/site-packages/objection/utils/agent.py", line 202, in inject
session = self.get_session()
File "/usr/local/lib/python3.9/site-packages/objection/utils/agent.py", line 169, in get_session
self.session = self.device.attach(self.spawned_pid)
File "/usr/local/lib/python3.9/site-packages/frida/core.py", line 76, in wrapper
return f(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/frida/core.py", line 800, in attach
return Session(self._impl.attach(self._pid_of(target), **kwargs)) # type: ignore
frida.TimedOutError: unexpectedly timed out while initializing suspended process
Environment (please complete the following information):
Device: iPhone 7
OS: 15.3.1
Frida Version: 16.0.2
Objection Version: 1.11.0

Thanks to #Robert, this was solved by attaching to the PID instead. There are two important things here:
Don't use Apple Silicon (M1/M2) macOS.
Use the process ID instead of the process name: objection -g <pid> explore (a short sketch for finding the PID is below).
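One way to find the PID is with Frida's Python bindings; a minimal sketch, assuming the device is reachable over USB (the bundle identifier is the one from this report, everything else is illustrative):

import frida

# Enumerate applications on the USB-connected device and print the App Store's
# PID so it can be passed to `objection -g <pid> explore`.
device = frida.get_usb_device()
for app in device.enumerate_applications():
    if app.identifier == "com.apple.AppStore" and app.pid != 0:  # pid is 0 if the app is not running
        print(app.pid)

The frida-ps -Ua command reports the same information from the command line.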

Related

Beam Dataflow Python job fails in dist_proc/dax/shuffle/batch/chunking_shuffle_writer.cc#146

I have a Dataflow job that fails repeatedly with the following log message:
022-03-27T01:39:21.871476411Z An exception was raised when trying to execute the workitem 1257504293434498471 :
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/dataflow_worker/batchworker.py", line 646, in do_work
work_executor.execute()
File "/usr/local/lib/python3.7/site-packages/dataflow_worker/executor.py", line 210, in execute
op.finish()
File "dataflow_worker/shuffle_operations.py", line 171, in dataflow_worker.shuffle_operations.ShuffleWriteOperation.finish
File "dataflow_worker/shuffle_operations.py", line 172, in dataflow_worker.shuffle_operations.ShuffleWriteOperation.finish
File "dataflow_worker/shuffle_operations.py", line 174, in dataflow_worker.shuffle_operations.ShuffleWriteOperation.finish
File "/usr/local/lib/python3.7/site-packages/dataflow_worker/shuffle.py", line 593, in __exit__
self.writer.Close()
File "third_party/windmill/shuffle/python/shuffle_client.pyx", line 185, in shuffle_client.PyShuffleWriter.Close
OSError: Shuffle close failed: b'FAILED_PRECONDITION: Precondition check failed. [type.googleapis.com/util.MessageSetPayload='[dist_proc.dax.internal.TrailProto] { trail_point { source_file_loc { filepath: "dist_proc/dax/shuffle/batch/chunking_shuffle_writer.cc" line: 146 } } trail_point { source_file_loc { filepath: "dist_proc/dax/shuffle/batch/chunking_shuffle_writer.cc" line: 102 } } }']'
I tried running the job a few times, and it fails in the same place each time. Other than that error and the set of DoFns that were running, I don't have many clues about where to look for problems.
I suspect it's my code that's causing the problem. I would love advice on how to diagnose this.
Have you tried using Dataflow Runner v2? A sketch of how to enable it is below.
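A minimal sketch of enabling Runner v2 via the use_runner_v2 experiment when constructing pipeline options; the project, region, and bucket values are placeholders, not taken from the failing job:

from apache_beam.options.pipeline_options import PipelineOptions

# Enable Dataflow Runner v2 with the documented experiment flag
# (equivalent to passing --experiments=use_runner_v2 on the command line).
options = PipelineOptions(
    runner="DataflowRunner",
    project="my-project",                # placeholder
    region="us-central1",                # placeholder
    temp_location="gs://my-bucket/tmp",  # placeholder
    experiments=["use_runner_v2"],
)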

Superset OAuth integration config using Ambari error

I am trying to configure OAUTH_PROVIDERS using Ambari, and the service restart fails with the following traceback:
File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/SUPERSET/package/scripts/superset.py", line 184, in <module>
Superset().execute()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 352, in execute
method(env)
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 971, in restart
self.stop(env, upgrade_type=upgrade_type)
File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/SUPERSET/package/scripts/superset.py", line 133, in stop
self.configure(env, upgrade_type=upgrade_type)
File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/SUPERSET/package/scripts/superset.py", line 90, in configure
user=params.superset_user)
File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
self.env.run()
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/ambari-agent/lib/resource_management/core/providers/system.py", line 263, in action_run
returns=self.resource.returns)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 72, in inner
result = function(command, **kwargs)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 102, in checked_call
tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy, returns=returns)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 314, in _call
raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'source /usr/hdp/current/superset/conf/superset-env.sh ; /usr/hdp/current/superset/bin/superset db upgrade' returned 1. Loaded your LOCAL configuration at [/usr/hdp/current/superset/conf/superset_config.py]
Traceback (most recent call last):
File "/usr/hdp/current/superset/bin/superset", line 12, in <module>
from superset.cli import manager
File "/usr/hdp/3.1.4.0-315/superset/lib/python3.6/site-packages/superset/__init__.py", line 180, in <module>
update_perms=utils.get_update_perms_flag(),
File "/usr/hdp/3.1.4.0-315/superset/lib/python3.6/site-packages/flask_appbuilder/base.py", line 135, in __init__
self.init_app(app, session)
File "/usr/hdp/3.1.4.0-315/superset/lib/python3.6/site-packages/flask_appbuilder/base.py", line 156, in init_app
self.sm = self.security_manager_class(self)
File "/usr/hdp/3.1.4.0-315/superset/lib/python3.6/site-packages/flask_appbuilder/security/sqla/manager.py", line 39, in __init__
super(SecurityManager, self).__init__(appbuilder)
File "/usr/hdp/3.1.4.0-315/superset/lib/python3.6/site-packages/flask_appbuilder/security/manager.py", line 199, in __init__
provider_name = _provider['name']
TypeError: string indices must be integers
I am able to set up Superset OAuth without Ambari, but I am struggling to make the config work in Ambari, because even if I make changes in superset_config.py, Ambari overwrites superset_config.py when we restart the service.
I have never used Superset with Ambari, but I am currently struggling with it for standalone usage due to the lack of proper documentation and practical use cases.
To my understanding, in order to read superset_config.py you need to export PYTHONPATH and point it to the folder where the config is placed.
For example: export PYTHONPATH=/<folder where the config is placed>/:$PYTHONPATH
If you get this right, you should see something like this in Superset's logs:
Loaded your LOCAL configuration at [/<folder where the config is>/superset_config.py]
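Separately, the TypeError: string indices must be integers raised at provider_name = _provider['name'] usually means OAUTH_PROVIDERS ended up as a plain string rather than a list of dicts, which can happen when a management tool writes the value out as raw text. A minimal sketch of the shape Flask-AppBuilder expects in superset_config.py; the provider, keys, and URLs are placeholders, and the exact remote_app keys depend on the Flask-AppBuilder version bundled with your Superset:

from flask_appbuilder.security.manager import AUTH_OAUTH

AUTH_TYPE = AUTH_OAUTH

# OAUTH_PROVIDERS must be a Python list of dicts, not a quoted string.
OAUTH_PROVIDERS = [
    {
        "name": "google",                # placeholder provider
        "icon": "fa-google",
        "token_key": "access_token",
        "remote_app": {
            "consumer_key": "<client-id>",         # placeholder
            "consumer_secret": "<client-secret>",  # placeholder
            "base_url": "https://www.googleapis.com/oauth2/v2/",
            "request_token_params": {"scope": "email profile"},
            "request_token_url": None,
            "access_token_url": "https://accounts.google.com/o/oauth2/token",
            "authorize_url": "https://accounts.google.com/o/oauth2/auth",
        },
    }
]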

Pipeline keeps running because the Dataflow runner does not close the file system writer (NotImplementedError)

I'm running a Dataflow job (Apache Beam 2.12.0) using Python on Google Cloud Platform. The pipeline does not terminate and keeps running.
The issue is the same as
https://issues.apache.org/jira/browse/BEAM-7266
It wasn't resolved and says "open when met again". It also says that the file writer is not being closed.
There's only one error log:
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/batchworker.py", line 649, in do_work
work_executor.execute()
File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/executor.py", line 178, in execute
op.finish()
File "dataflow_worker/native_operations.py", line 93, in dataflow_worker.native_operations.NativeWriteOperation.finish
def finish(self):
File "dataflow_worker/native_operations.py", line 94, in dataflow_worker.native_operations.NativeWriteOperation.finish
with self.scoped_finish_state:
File "dataflow_worker/native_operations.py", line 95, in dataflow_worker.native_operations.NativeWriteOperation.finish
self.writer.__exit__(None, None, None)
File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/nativefileio.py", line 465, in __exit__
self.file.close()
File "/usr/local/lib/python2.7/dist-packages/apache_beam/io/filesystemio.py", line 202, in close
self._uploader.finish()
File "/usr/local/lib/python2.7/dist-packages/apache_beam/io/gcp/gcsio.py", line 606, in finish
raise self._upload_thread.last_error # pylint: disable=raising-bad-type
NotImplementedError: offset: 0, whence: 0, position: 51518, last: 0

What is the root cause of distributed.scheduler.KilledWorker exception?

I'm trying to run a Dask job on a YARN cluster. This job reads from and writes to HDFS using the hdfs3 library.
When I run it on a cluster without a Kerberos security layer, it runs fine.
But on a cluster with a Kerberos security layer, I had to implement the solution here to avoid Kerberos-related errors. Running the same job led to the following error:
File "/fsstreamdevl/f6_development/acoustics/acoustics_analysis_dask/acoustics_analytics/task_runner/task_runner.py", line 123, in run
dask.compute(tasks)
File "/anaconda_env/projects/f6acoustics/dev/dask_yarn_test/lib/python3.7/site-packages/dask/base.py", line 446, in compute
results = schedule(dsk, keys, **kwargs)
File "/anaconda_env/projects/f6acoustics/dev/dask_yarn_test/lib/python3.7/site-packages/distributed/client.py", line 2568, in get
results = self.gather(packed, asynchronous=asynchronous, direct=direct)
File "/anaconda_env/projects/f6acoustics/dev/dask_yarn_test/lib/python3.7/site-packages/distributed/client.py", line 1822, in gather
asynchronous=asynchronous,
File "/anaconda_env/projects/f6acoustics/dev/dask_yarn_test/lib/python3.7/site-packages/distributed/client.py", line 753, in sync
return sync(self.loop, func, *args, **kwargs)
File "/anaconda_env/projects/f6acoustics/dev/dask_yarn_test/lib/python3.7/site-packages/distributed/utils.py", line 331, in sync
six.reraise(*error[0])
File "/anaconda_env/projects/f6acoustics/dev/dask_yarn_test/lib/python3.7/site-packages/six.py", line 693, in reraise
raise value
File "/anaconda_env/projects/f6acoustics/dev/dask_yarn_test/lib/python3.7/site-packages/distributed/utils.py", line 316, in f
result[0] = yield future
File "/anaconda_env/projects/f6acoustics/dev/dask_yarn_test/lib/python3.7/site-packages/tornado/gen.py", line 735, in run
value = future.result()
File "/anaconda_env/projects/f6acoustics/dev/dask_yarn_test/lib/python3.7/site-packages/tornado/gen.py", line 742, in run
yielded = self.gen.throw(*exc_info) # type: ignore
File "/anaconda_env/projects/f6acoustics/dev/dask_yarn_test/lib/python3.7/site-packages/distributed/client.py", line 1653, in _gather
six.reraise(type(exception), exception, traceback)
File "/anaconda_env/projects/f6acoustics/dev/dask_yarn_test/lib/python3.7/site-packages/six.py", line 693, in reraise
raise value
distributed.scheduler.KilledWorker: ('__call__-6af7aa29-2a09-45f3-a5e2-207c06562672', <Worker 'tcp://10.194.211.132:11927', memory: 0, processing: 1>)
Strangely enough, when I run the same solution on the former cluster without a Kerberos security layer, I get the same error.
Looking at the YARN application logs, I see the following traceback, but I cannot tell what it means.
distributed.nanny - INFO - Closing Nanny at 'tcp://10.194.211.133:17659'
Traceback (most recent call last):
File "/opt/hadoop/data/05/hadoop/yarn/local/usercache/hdfsf6/appcache/application_1560931326013_171773/container_e47_1560931326013_171773_01_000003/environment/lib/python3.7/multiprocessing/queues.py", line 242, in _feed
send_bytes(obj)
File "/opt/hadoop/data/05/hadoop/yarn/local/usercache/hdfsf6/appcache/application_1560931326013_171773/container_e47_1560931326013_171773_01_000003/environment/lib/python3.7/multiprocessing/connection.py", line 200, in send_bytes
self._send_bytes(m[offset:offset + size])
File "/opt/hadoop/data/05/hadoop/yarn/local/usercache/hdfsf6/appcache/application_1560931326013_171773/container_e47_1560931326013_171773_01_000003/environment/lib/python3.7/multiprocessing/connection.py", line 404, in _send_bytes
self._send(header + buf)
File "/opt/hadoop/data/05/hadoop/yarn/local/usercache/hdfsf6/appcache/application_1560931326013_171773/container_e47_1560931326013_171773_01_000003/environment/lib/python3.7/multiprocessing/connection.py", line 368, in _send
n = write(self._handle, buf)
BrokenPipeError: [Errno 32] Broken pipe
End of LogType:dask.worker.log
I do not see any explicit messages in the logs about low memory. Would anyone know how to diagnose this issue?
hdfs3 is no longer actively maintained. You have two main choices for interacting with HDFS (both sketched below):
pyarrow's HDFS driver (via the libhdfs JNI library), which requires you to have the Java and Hadoop requirements correctly set up and available to the session calling it
WebHDFS, such as in fsspec, which does not need Java libraries and can interact with Kerberos if HTTP authentication is allowed on your system.
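A minimal sketch of both options; the hostnames, ports, paths, and Kerberos ticket location are placeholders, not values from the question:

# Option 1: pyarrow's HDFS filesystem (needs Java and the Hadoop native libraries available).
from pyarrow import fs
hdfs = fs.HadoopFileSystem("namenode-host", port=8020, kerb_ticket="/tmp/krb5cc_1000")
with hdfs.open_input_stream("/data/input.csv") as f:
    print(f.read(100))

# Option 2: WebHDFS via fsspec (pure HTTP; works with Kerberos/SPNEGO if enabled on the cluster).
import fsspec
whdfs = fsspec.filesystem("webhdfs", host="namenode-host", port=9870, kerberos=True)
with whdfs.open("/data/input.csv", "rb") as f:
    print(f.read(100))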

o.n.b.v.t.BoltProtocolV1 - Failed to write response to driver: Cannot write to buffer when closed

I am using Neo4j version 3.1.3, where I am getting the following error:
2018-02-28 05:17:27.780+0000 ERROR [o.n.b.v.t.BoltProtocolV1] Failed to write response to driver Cannot write to buffer when closed
java.io.IOException: Cannot write to buffer when closed
at org.neo4j.bolt.v1.transport.ChunkedOutput.ensure(ChunkedOutput.java:163)
at org.neo4j.bolt.v1.transport.ChunkedOutput.writeShort(ChunkedOutput.java:94)
at org.neo4j.bolt.v1.packstream.PackStream$Packer.packStructHeader(PackStream.java:330)
at org.neo4j.bolt.v1.messaging.BoltResponseMessageWriter.onSuccess(BoltResponseMessageWriter.java:72)
at org.neo4j.bolt.v1.messaging.MessageProcessingHandler.onFinish(MessageProcessingHandler.java:111)
at org.neo4j.bolt.v1.runtime.BoltStateMachine.after(BoltStateMachine.java:105)
at org.neo4j.bolt.v1.runtime.BoltStateMachine.run(BoltStateMachine.java:201)
at org.neo4j.bolt.v1.messaging.BoltMessageRouter.lambda$onRun$3(BoltMessageRouter.java:80)
at org.neo4j.bolt.v1.runtime.concurrent.RunnableBoltWorker.execute(RunnableBoltWorker.java:135)
at org.neo4j.bolt.v1.runtime.concurrent.RunnableBoltWorker.run(RunnableBoltWorker.java:89)
at java.lang.Thread.run(Thread.java:748)
I have used the following configuration:
dbms.memory.heap.initial_size=8g
dbms.memory.heap.max_size=8g
dbms.memory.pagecache.size=4g
Machine description: 4 vCPUs, 15 GB memory
My Neo4j client is in Python, so I am using the Neo4j Python driver, version 1.5.3. I am getting the following error on the client side:
[2018-02-28 10:59:43,838: ERROR/MainProcess] Task api_keyword.tasks.job.update_application_rel[14f5baa6-09a5-44df-9bd7-982116e0b184] raised unexpected: ServiceUnavailable("Failed to write to closed connection Address(host='10.160.0.9', port=7687)",)
Traceback (most recent call last):
File "/home/ubuntu/celeryprod/lib/python3.5/site-packages/celery/app/trace.py", line 240, in trace_task
R = retval = fun(*args, **kwargs)
File "/home/ubuntu/celeryprod/lib/python3.5/site-packages/celery/app/trace.py", line 438, in __protected_call__
return self.run(*args, **kwargs)
File "/home/ubuntu/celeryprod/api/src/api_keyword/tasks/job.py", line 21, in update_application_rel
rel_label='APPLIED'
File "/home/ubuntu/celeryprod/api/src/api_keyword/utils/neo4j/base.py", line 76, in delete_relationship
self.run_query(neo4j_query)
File "/home/ubuntu/celeryprod/api/src/api_keyword/utils/neo4j/base.py", line 32, in run_query
res = Neo4jConnector().run_query(neo4j_query)
File "/home/ubuntu/celeryprod/lib/python3.5/site-packages/retrying.py", line 49, in wrapped_f
return Retrying(*dargs, **dkw).call(f, *args, **kw)
File "/home/ubuntu/celeryprod/lib/python3.5/site-packages/retrying.py", line 212, in call
raise attempt.get()
File "/home/ubuntu/celeryprod/lib/python3.5/site-packages/retrying.py", line 247, in get
six.reraise(self.value[0], self.value[1], self.value[2])
File "/home/ubuntu/celeryprod/lib/python3.5/site-packages/six.py", line 686, in reraise
raise value
File "/home/ubuntu/celeryprod/lib/python3.5/site-packages/retrying.py", line 200, in call
attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
File "/home/ubuntu/celeryprod/lib/python3.5/site-packages/timeout_decorator/timeout_decorator.py", line 81, in new_function
return function(*args, **kwargs)
File "/home/ubuntu/celeryprod/api/src/core/services/neo4j.py", line 56, in run_query
result = session.run(cypher_query)
File "/home/ubuntu/celeryprod/lib/python3.5/site-packages/neo4j/v1/api.py", line 339, in run
self._connection.send()
File "/home/ubuntu/celeryprod/lib/python3.5/site-packages/neo4j/bolt/connection.py", line 263, in send
self._send()
File "/home/ubuntu/celeryprod/lib/python3.5/site-packages/neo4j/bolt/connection.py", line 275, in _send
raise self.Error("Failed to write to closed connection {!r}".format(self.server.address))
neo4j.exceptions.ServiceUnavailable: Failed to write to closed connection Address(host='10.160.0.9', port=7687)
I am initializing the driver like this:
self.driver = GraphDatabase.driver("bolt://{0}".format(self.neo4j_db_url),
auth=basic_auth(self.neo4j_username, self.neo4j_password), connection_timeout=60)
Can anyone help me with this? Is there any configuration I need to tune, or any other setting I need to define?
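For reference, a minimal sketch of how the 1.x Python driver is typically used with session scoping, assuming the same bolt address and connection_timeout as above; the credentials and query are placeholders:

from neo4j.v1 import GraphDatabase, basic_auth

driver = GraphDatabase.driver("bolt://10.160.0.9:7687",                     # address from the error above
                              auth=basic_auth("neo4j", "<password>"),       # placeholder credentials
                              connection_timeout=60)

def run_query(cypher_query):
    # Scoping the session to each query returns the underlying Bolt connection
    # to the pool promptly instead of reusing one the server may have closed.
    with driver.session() as session:
        return list(session.run(cypher_query))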

Resources