I'm doing some work with the reverse engineering tool angr, and I'm trying to run it in a container.
My current directory looks like this:
ask@Garsy:~/Notes/ethHack/wetransfer-85179d/Export$ ls
angry.py impossible_password_location.csv report.md
impossible_password.bin impossible_password_strings.txt test.txt
I then run a specific angr image like so:
ask@Garsy:~/Notes/ethHack/wetransfer-85179d/Export$ sudo docker run -it --rm -v $pwd:/local angr/angr
Where I believe that using $pwd:/local should give me access to the files shown above inside the container (following [this][1] guide [5:40]).
I run the container, and try to write some python:
(angr) angr@38b067fffc2d:~$ ipython3
Python 3.8.5 (default, Jan 27 2021, 15:41:15)
Type 'copyright', 'credits' or 'license' for more information
IPython 7.22.0 -- An enhanced Interactive Python. Type '?' for help.
In [1]: import angr
In [2]: angr.Project("/impossible_password.bin")
---------------------------------------------------------------------------
Exception Traceback (most recent call last)
<ipython-input-2-3f794a899665> in <module>
----> 1 angr.Project("/impossible_password.bin")
~/angr-dev/angr/angr/project.py in __init__(self, thing, default_analysis_mode, ignore_functions, use_sim_procedures, exclude_sim_procedures_func, exclude_sim_procedures_list, arch, simos, engine, load_options, translation_cache, support_selfmodifying_code, store_function, load_function, analyses_preset, concrete_target, **kwargs)
124 self.loader = cle.Loader(thing, **load_options)
125 elif not isinstance(thing, str) or not os.path.exists(thing) or not os.path.isfile(thing):
--> 126 raise Exception("Not a valid binary file: %s" % repr(thing))
127 else:
128 # use angr's loader, provided by cle
Exception: Not a valid binary file: '/impossible_password.bin'
where it can't find the file. The same goes for "/local/impossible_password.bin". How do I make the files of my current directory available when I spin up the container?
[1]: https://www.youtube.com/watch?v=9dQFM5O4KFk
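In Bash, variable names are case-sensitive, so $pwd (lowercase) is almost certainly unset and expands to an empty string, meaning the -v flag never receives your host directory. The built-in variable is $PWD (or use $(pwd)). A hedged sketch of the corrected command:
sudo docker run -it --rm -v "$PWD":/local angr/angr
The files should then be visible inside the container under /local, e.g. angr.Project("/local/impossible_password.bin").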
I'm trying to run a Python script with ROS2 in my Docker container. Everything up to running the script works; I can even run Gazebo via a launch file, and it works.
The error ROS gives me is the following:
root@86d8bf3a6eb9:/# ros2 run field_robot robot_spawner.py
Traceback (most recent call last):
File "/opt/ros/foxy/bin/ros2", line 11, in <module>
load_entry_point('ros2cli==0.9.11', 'console_scripts', 'ros2')()
File "/opt/ros/foxy/lib/python3.8/site-packages/ros2cli/cli.py", line 67, in main
rc = extension.main(parser=parser, args=args)
File "/opt/ros/foxy/lib/python3.8/site-packages/ros2run/command/run.py", line 70, in main
return run_executable(path=path, argv=args.argv, prefix=prefix)
File "/opt/ros/foxy/lib/python3.8/site-packages/ros2run/api/__init__.py", line 61, in run_executable
process = subprocess.Popen(cmd)
File "/usr/lib/python3.8/subprocess.py", line 858, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "/usr/lib/python3.8/subprocess.py", line 1704, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: '/field_robot/dev_ws/install/field_robot/lib/field_robot/robot_spawner.py'
And yes, I checked: the file actually exists:
root@86d8bf3a6eb9:/# ls -l /field_robot/dev_ws/install/field_robot/lib/field_robot/robot_spawner.py
-rwxr-xr-x 1 root root 1964 Apr 12 14:37 /field_robot/dev_ws/install/field_robot/lib/field_robot/robot_spawner.py
Also, the host system is Windows, so it could be that something Windows-related is broken; if you have an idea what the problem could be there, that might also be it.
Based on the comments, it appears you're running into this issue because of the file type. If the files are edited on Windows first, they likely have DOS (CRLF) line endings rather than UNIX (LF) ones. The shebang line then ends in a carriage return, so the system looks for an interpreter literally named python3\r, which doesn't exist; that produces exactly this FileNotFoundError even though the script itself exists. I know this causes issues with ROS1, so I assume it's the case in ROS2 as well. To fix this, you have a couple of options.
Usually the easiest is to use dos2unix. It isn't installed by default, but you can get it via apt install dos2unix, assuming your image is Ubuntu-based. The files can then be converted by running dos2unix <filename> inside your container.
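If installing packages into the container isn't convenient, the same conversion is a few lines of Python; a minimal sketch, using the path from the question:
#!/usr/bin/env python3
# Minimal dos2unix substitute: rewrite CRLF line endings as LF, in place.
path = '/field_robot/dev_ws/install/field_robot/lib/field_robot/robot_spawner.py'
with open(path, 'rb') as f:
    data = f.read()
with open(path, 'wb') as f:
    f.write(data.replace(b'\r\n', b'\n'))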
I'm trying to get a function that uses Selenium and Firefox/geckodriver to run in AWS Lambda. I've decided to go the route of creating a container image and then uploading and running that, instead of using a pre-configured runtime. I was able to create a Dockerfile that correctly installs Firefox and Python, downloads geckodriver, and installs my test code:
FROM alpine:latest
RUN apk add firefox python3 py3-pip
RUN pip install requests selenium
RUN mkdir /app
WORKDIR /app
RUN wget -qO gecko.tar.gz https://github.com/mozilla/geckodriver/releases/download/v0.28.0/geckodriver-v0.28.0-linux64.tar.gz
RUN tar xf gecko.tar.gz
RUN mv geckodriver /usr/bin
COPY *.py ./
ENTRYPOINT ["/usr/bin/python3","/app/lambda_function.py"]
The Selenium test code:
#!/usr/bin/env python3
import util
import os
import sys
import requests

def lambda_wrapper():
    api_base = f'http://{os.environ["AWS_LAMBDA_RUNTIME_API"]}/2018-06-01'
    response = requests.get(api_base + '/runtime/invocation/next')
    request_id = response.headers['Lambda-Runtime-Aws-Request-Id']
    try:
        result = selenium_test()
        # Send result back
        requests.post(api_base + f'/runtime/invocation/{request_id}/response', json={'url': result})
    except Exception as e:
        # Error reporting
        import traceback
        requests.post(api_base + f'/runtime/invocation/{request_id}/error', json={'errorMessage': str(e), 'traceback': traceback.format_exc(), 'logs': open('/tmp/gecko.log', 'r').read()})
        raise

def selenium_test():
    from selenium.webdriver import Firefox
    from selenium.webdriver.firefox.options import Options
    options = Options()
    options.add_argument('-headless')
    options.add_argument('--window-size 1920,1080')
    ffx = Firefox(options=options, log_path='/tmp/gecko.log')
    ffx.get("https://google.com")
    url = ffx.current_url
    ffx.close()
    print(url)
    return url

def main():
    # For testing purposes, currently not using the Lambda API even in AWS so that
    # the same container can run on my local machine.
    # Call lambda_wrapper() instead to get geckodriver logs as well (not informative).
    selenium_test()

if __name__ == '__main__':
    main()
I'm able to successfully build this container on my local machine with docker build -t lambda-test . and then run it with docker run -m 512M lambda-test.
However, the exact same container crashes with an error when I try to upload it to Lambda and run it there. I set the memory limit to 1024M and the timeout to 30 seconds. The traceback says that Firefox was unexpectedly killed by a signal:
START RequestId: 52adeab9-8ee7-4a10-a728-82087ec9de30 Version: $LATEST
/app/lambda_function.py:29: DeprecationWarning: use service_log_path instead of log_path
ffx = Firefox(options=options, log_path='/tmp/gecko.log')
Traceback (most recent call last):
File "/app/lambda_function.py", line 45, in <module>
main()
File "/app/lambda_function.py", line 41, in main
lambda_wrapper()
File "/app/lambda_function.py", line 12, in lambda_wrapper
result = selenium_test()
File "/app/lambda_function.py", line 29, in selenium_test
ffx = Firefox(options=options, log_path='/tmp/gecko.log')
File "/usr/lib/python3.8/site-packages/selenium/webdriver/firefox/webdriver.py", line 170, in __init__
RemoteWebDriver.__init__(
File "/usr/lib/python3.8/site-packages/selenium/webdriver/remote/webdriver.py", line 157, in __init__
self.start_session(capabilities, browser_profile)
File "/usr/lib/python3.8/site-packages/selenium/webdriver/remote/webdriver.py", line 252, in start_session
response = self.execute(Command.NEW_SESSION, parameters)
File "/usr/lib/python3.8/site-packages/selenium/webdriver/remote/webdriver.py", line 321, in execute
self.error_handler.check_response(response)
File "/usr/lib/python3.8/site-packages/selenium/webdriver/remote/errorhandler.py", line 242, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.WebDriverException: Message: Process unexpectedly closed with status signal
END RequestId: 52adeab9-8ee7-4a10-a728-82087ec9de30
REPORT RequestId: 52adeab9-8ee7-4a10-a728-82087ec9de30 Duration: 20507.74 ms Billed Duration: 21350 ms Memory Size: 1024 MB Max Memory Used: 131 MB Init Duration: 842.11 ms
Unknown application error occurred
I had it upload the geckodriver logs as well, but there wasn't much useful information in there:
1608506540595 geckodriver INFO Listening on 127.0.0.1:41597
1608506541569 mozrunner::runner INFO Running command: "/usr/bin/firefox" "--marionette" "-headless" "--window-size 1920,1080" "-foreground" "-no-remote" "-profile" "/tmp/rust_mozprofileQCapHy"
*** You are running in headless mode.
How can I even begin to debug this? The fact that the exact same container behaves differently depending upon where it's run seems fishy to me, but I'm not knowledgeable enough about Selenium, Docker, or Lambda to pinpoint exactly where the problem is.
Is my docker run command not accurately recreating the environment in Lambda? If so, then what command would I run to better simulate the Lambda environment? I'm not really sure where else to go from here, seeing as I can't actually reproduce the error locally to test with.
If anyone wants to take a look at the full code and try building it themselves, the repository is here - the lambda code is in lambda_function.py.
As for prior research: this question a) is about ChromeDriver and b) has gone unanswered for over a year. The link from that one only has information about how to run a container in Lambda, which I'm already doing. This answer is almost my problem, but I know there's no version mismatch, because the container works on my laptop just fine.
I have exactly the same problem and a possible explanation.
I think what you want is not possible for the time being.
According to the AWS DevOps Blog, Firefox relies on the fallocate system call and /dev/shm.
However, AWS Lambda does not mount /dev/shm, so Firefox crashes when trying to allocate shared memory. Unfortunately, this behavior cannot be disabled in Firefox.
If you can live with Chromium, however, there is a flag for chromedriver, --disable-dev-shm-usage, that disables the use of /dev/shm and writes shared memory files to /tmp instead.
chromedriver works fine for me on AWS Lambda, if that is an option for you.
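For illustration, a minimal sketch of passing that flag through Selenium (this assumes Chromium and chromedriver are installed in the image; the --headless and --no-sandbox flags are common additions for containerized environments, not something specific to Lambda):
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.add_argument('--headless')
options.add_argument('--no-sandbox')             # commonly needed when running as root in a container
options.add_argument('--disable-dev-shm-usage')  # write shared memory files to /tmp instead of /dev/shm

driver = webdriver.Chrome(options=options)
driver.get('https://google.com')
print(driver.current_url)
driver.quit()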
According to the AWS DevOps Blog, you can also use AWS Fargate to run Firefox/geckodriver.
There is an entry in the AWS forums from 2015 requesting that /dev/shm be mounted in Lambdas, but nothing has happened since then.
I want to replace the command below with the Docker Python SDK:
docker exec 6d9c9b679541 /u01/app/oracle/product/12.0.0/dbhome_1/bin/sqlplus sachin/sachin@orcl @1.sql
Here is the code I am writing and the error I am getting, using Python 3:
>>> import docker
>>> client = docker.from_env()
>>> client.exec_run('6d9c9b679541',command='/u01/app/oracle/product/12.0.0/dbhome_1/bin/sqlplus sachin/sachin@orcl @1.sql')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.6/site-packages/docker/client.py", line 205, in __getattr__
raise AttributeError(' '.join(s))
AttributeError: 'DockerClient' object has no attribute 'exec_run'
How do I resolve this issue?
The from_env method returns a DockerClient object (docs here).
You need to get the container first, and then use the exec_run method.
If you want to access a running container, you need the following:
container = client.containers.get('your_container_name_or_id')
Now you can run your command in the container:
container.exec_run('your command here')
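Putting it together for the command in the question, a minimal sketch (on recent versions of the SDK, exec_run returns an (exit_code, output) tuple, with output as bytes):
import docker

client = docker.from_env()

# Look the container up by ID (or name), then exec inside it
container = client.containers.get('6d9c9b679541')
exit_code, output = container.exec_run(
    '/u01/app/oracle/product/12.0.0/dbhome_1/bin/sqlplus sachin/sachin@orcl @1.sql'
)
print(exit_code)
print(output.decode())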
I am new to running Airflow in Docker. To run an Airflow CLI command, it used to be possible to simply run airflow trigger_dag etc. However, that clearly no longer works out of the box.
I found out I can get 'into' the container with docker exec -ti <container_name> bash. However, if I then try to run an Airflow CLI command, I get the following:
Traceback (most recent call last):
File "/usr/local/bin/airflow", line 16, in <module>
from airflow import configuration
File "/usr/local/lib/python3.5/dist-packages/airflow/__init__.py", line 31, in <module>
from airflow import settings
File "/usr/local/lib/python3.5/dist-packages/airflow/settings.py", line 150, in <module>
configure_orm()
File "/usr/local/lib/python3.5/dist-packages/airflow/settings.py", line 136, in configure_orm
engine = create_engine(SQL_ALCHEMY_CONN, **engine_args)
File "/usr/local/lib/python3.5/dist-packages/sqlalchemy/engine/__init__.py", line 424, in create_engine
return strategy.create(*args, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/sqlalchemy/engine/strategies.py", line 50, in create
u = url.make_url(name_or_url)
File "/usr/local/lib/python3.5/dist-packages/sqlalchemy/engine/url.py", line 211, in make_url
return _parse_rfc1738_args(name_or_url)
File "/usr/local/lib/python3.5/dist-packages/sqlalchemy/engine/url.py", line 270, in _parse_rfc1738_args
"Could not parse rfc1738 URL from string '%s'" % name)
sqlalchemy.exc.ArgumentError: Could not parse rfc1738 URL from string ''
It looks like it doesn't pick up SQL_ALCHEMY_CONN. However, if I run printenv, it does show up.
Can anyone help me?
I ran into this issue. My setup is Docker on Ubuntu 16.04 with Python 3.
When I printed the environment, I realized AIRFLOW__CORE__SQL_ALCHEMY_CONN was set to blank. Once I fixed that, the issue was resolved.
You mentioned SQL_ALCHEMY_CONN; the environment variable should be referred to as AIRFLOW__CORE__SQL_ALCHEMY_CONN.
Make sure it is set up correctly. The order of precedence in Airflow is 1) environment variables, 2) configuration in the airflow.cfg file (link).
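As a quick sanity check from inside the container, a small sketch (the name follows Airflow's AIRFLOW__<SECTION>__<KEY> convention for overriding config values):
import os

# Airflow itself reads AIRFLOW__CORE__SQL_ALCHEMY_CONN; a bare SQL_ALCHEMY_CONN is not picked up
print(os.environ.get('AIRFLOW__CORE__SQL_ALCHEMY_CONN', '<not set>'))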
I have a small working example of the TensorFlow Object Detection API working locally. Everything looks great. My goal is to use their scripts to run in Google Machine Learning Engine, which I've used extensively in the past. I am following these docs.
Declare some relevant variables
declare PROJECT=$(gcloud config list project --format "value(core.project)")
declare BUCKET="gs://${PROJECT}-ml"
declare MODEL_NAME="DeepMeerkatDetection"
declare FOLDER="${BUCKET}/${MODEL_NAME}"
declare JOB_ID="${MODEL_NAME}_$(date +%Y%m%d_%H%M%S)"
declare TRAIN_DIR="${FOLDER}/${JOB_ID}"
declare EVAL_DIR="${BUCKET}/${MODEL_NAME}/${JOB_ID}_eval"
declare PIPELINE_CONFIG_PATH="${FOLDER}/faster_rcnn_inception_resnet_v2_atrous_coco_cloud.config"
declare PIPELINE_YAML="/Users/Ben/Documents/DeepMeerkat/training/Detection/cloud.yml"
My YAML looks like:
trainingInput:
  runtimeVersion: "1.0"
  scaleTier: CUSTOM
  masterType: standard_gpu
  workerCount: 5
  workerType: standard_gpu
  parameterServerCount: 3
  parameterServerType: standard
The relevant paths are set in the config, e.g.:
fine_tune_checkpoint: "gs://api-project-773889352370-ml/DeepMeerkatDetection/checkpoint/faster_rcnn_inception_resnet_v2_atrous_coco_11_06_2017/model.ckpt"
I've packaged object detection and slim using setup.py
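For reference, that packaging step is typically the following, run from the models/research directory of the object detection checkout (adjust paths to your setup):
python setup.py sdist
(cd slim && python setup.py sdist)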
Running
gcloud ml-engine jobs submit training "${JOB_ID}_train" \
--job-dir=${TRAIN_DIR} \
--packages dist/object_detection-0.1.tar.gz,slim/dist/slim-0.1.tar.gz \
--module-name object_detection.train \
--region us-central1 \
--config ${PIPELINE_YAML} \
-- \
--train_dir=${TRAIN_DIR} \
--pipeline_config_path= ${PIPELINE_CONFIG_PATH}
yields a TensorFlow (import?) error. It's a bit cryptic:
insertId: "1inuq6gg27fxnkc"
logName: "projects/api-project-773889352370/logs/ml.googleapis.com%2FDeepMeerkatDetection_20171017_141321_train"
receiveTimestamp: "2017-10-17T21:38:34.435293164Z"
resource: {…}
severity: "ERROR"
textPayload: "The replica ps 0 exited with a non-zero status of 1. Termination reason: Error.
Traceback (most recent call last):
File "/usr/lib/python2.7/runpy.py", line 162, in _run_module_as_main
"__main__", fname, loader, pkg_name)
File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/root/.local/lib/python2.7/site-packages/object_detection/train.py", line 198, in <module>
tf.app.run()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/app.py", line 44, in run
_sys.exit(main(_sys.argv[:1] + flags_passthrough))
File "/root/.local/lib/python2.7/site-packages/object_detection/train.py", line 145, in main
model_config, train_config, input_config = get_configs_from_multiple_files()
File "/root/.local/lib/python2.7/site-packages/object_detection/train.py", line 127, in get_configs_from_multiple_files
text_format.Merge(f.read(), train_config)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/lib/io/file_io.py", line 112, in read
return pywrap_tensorflow.ReadFromStream(self._read_buf, length, status)
File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
self.gen.next()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/errors_impl.py", line 466, in raise_exception_on_not_ok_status
pywrap_tensorflow.TF_GetCode(status))
FailedPreconditionError: .
I've seen this error in other questions related to prediction on Machine Learning Engine, suggesting this error probably(?) is not directly related to the object detection code, but it feels like it's not being packaged correctly. Missing dependencies? I've updated my gcloud to the latest version.
Bens-MacBook-Pro:research ben$ gcloud --version
Google Cloud SDK 175.0.0
bq 2.0.27
core 2017.10.09
gcloud
gsutil 4.27
It's hard to see how it's related to this problem here:
FailedPreconditionError when running TF Object Detection API with own model
Why would code need to be initialized differently in the cloud?
Update #1.
The curious thing is that eval.py works fine, so it can't be the path to the config file or anything else that train.py and eval.py share. eval.py patiently sits and waits for model checkpoints to be created.
Another idea might be that the checkpoint has somehow been corrupted during upload. We can test this by bypassing it and training from scratch; in the .config:
from_detection_checkpoint: false
That yields the same precondition error, so it can't be the model.
The root cause is a slight typo:
--pipeline_config_path= ${PIPELINE_CONFIG_PATH}
has an extra space. Because of that space, --pipeline_config_path is passed an empty value and the actual path becomes a separate stray argument, so train.py tries to read an empty config path and fails with the FailedPreconditionError. Try this:
gcloud ml-engine jobs submit training "${JOB_ID}_train" \
--job-dir=${TRAIN_DIR} \
--packages dist/object_detection-0.1.tar.gz,slim/dist/slim-0.1.tar.gz \
--module-name object_detection.train \
--region us-central1 \
--config ${PIPELINE_YAML} \
-- \
--train_dir=${TRAIN_DIR} \
--pipeline_config_path=${PIPELINE_CONFIG_PATH}