Cannot import HOL in Isabelle batch mode from Docker

I'm trying to use HOL in Isabelle in batch mode from Docker, but it can't seem to find HOL.
If I have this My.thy file
theory My
imports HOL
begin
end
and then run this to process the file in batch mode
docker run --rm -it -v $PWD/My.thy:/home/isabelle/My.thy makarius/isabelle:Isabelle2022_ARM process -T My
I get
*** No such file: "/home/isabelle/HOL.thy"
*** The error(s) above occurred for theory "Draft.HOL" (line 2 of "~/My.thy")
*** (required by "Draft.My")
Exception- TOPLEVEL_ERROR raised
However, I can import Main. In more detail, if I change My.thy to be
theory My
imports Main
begin
end
then running the same Docker command as above to run the batch process results in
Loading theory "Draft.My"
### theory "Draft.My"
### 0.039s elapsed time, 0.078s cpu time, 0.000s GC time
val it = (): unit
How can I import HOL in Isabelle's batch mode in Docker?
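The error message hints at the cause: an unqualified theory name like HOL is resolved as a file next to My.thy, which is exactly the missing /home/isabelle/HOL.thy the batch process complains about. A possible fix, sketched here but not verified against this particular Docker image, is to use the session-qualified theory name:

```
theory My
  imports "HOL.HOL"  (* session-qualified: theory HOL inside the HOL session *)
begin
end
```

Importing Main, as in the working variant above, remains the usual entry point for HOL anyway.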

Related

Why is my containerized Selenium application failing only in AWS Lambda?

I'm trying to get a function to run in AWS Lambda that uses Selenium and Firefox/geckodriver in order to run. I've decided to go the route of creating a container image, and then uploading and running that instead of using a pre-configured runtime. I was able to create a Dockerfile that correctly installs Firefox and Python, downloads geckodriver, and installs my test code:
FROM alpine:latest
RUN apk add firefox python3 py3-pip
RUN pip install requests selenium
RUN mkdir /app
WORKDIR /app
RUN wget -qO gecko.tar.gz https://github.com/mozilla/geckodriver/releases/download/v0.28.0/geckodriver-v0.28.0-linux64.tar.gz
RUN tar xf gecko.tar.gz
RUN mv geckodriver /usr/bin
COPY *.py ./
ENTRYPOINT ["/usr/bin/python3","/app/lambda_function.py"]
The Selenium test code:
#!/usr/bin/env python3
import util
import os
import sys
import requests

def lambda_wrapper():
    api_base = f'http://{os.environ["AWS_LAMBDA_RUNTIME_API"]}/2018-06-01'
    response = requests.get(api_base + '/runtime/invocation/next')
    request_id = response.headers['Lambda-Runtime-Aws-Request-Id']
    try:
        result = selenium_test()
        # Send result back
        requests.post(api_base + f'/runtime/invocation/{request_id}/response', json={'url': result})
    except Exception as e:
        # Error reporting
        import traceback
        requests.post(api_base + f'/runtime/invocation/{request_id}/error', json={'errorMessage': str(e), 'traceback': traceback.format_exc(), 'logs': open('/tmp/gecko.log', 'r').read()})
        raise

def selenium_test():
    from selenium.webdriver import Firefox
    from selenium.webdriver.firefox.options import Options
    options = Options()
    options.add_argument('-headless')
    options.add_argument('--window-size 1920,1080')
    ffx = Firefox(options=options, log_path='/tmp/gecko.log')
    ffx.get("https://google.com")
    url = ffx.current_url
    ffx.close()
    print(url)
    return url

def main():
    # For testing purposes, currently not using the Lambda API even in AWS so that
    # the same container can run on my local machine.
    # Call lambda_wrapper() instead to get geckodriver logs as well (not informative).
    selenium_test()

if __name__ == '__main__':
    main()
I'm able to successfully build this container on my local machine with docker build -t lambda-test . and then run it with docker run -m 512M lambda-test.
However, the exact same container crashes with an error when I try to upload it to Lambda and run it there. I set the memory limit to 1024M and the timeout to 30 seconds. The traceback says that Firefox was unexpectedly killed by a signal:
START RequestId: 52adeab9-8ee7-4a10-a728-82087ec9de30 Version: $LATEST
/app/lambda_function.py:29: DeprecationWarning: use service_log_path instead of log_path
ffx = Firefox(options=options, log_path='/tmp/gecko.log')
Traceback (most recent call last):
File "/app/lambda_function.py", line 45, in <module>
main()
File "/app/lambda_function.py", line 41, in main
lambda_wrapper()
File "/app/lambda_function.py", line 12, in lambda_wrapper
result = selenium_test()
File "/app/lambda_function.py", line 29, in selenium_test
ffx = Firefox(options=options, log_path='/tmp/gecko.log')
File "/usr/lib/python3.8/site-packages/selenium/webdriver/firefox/webdriver.py", line 170, in __init__
RemoteWebDriver.__init__(
File "/usr/lib/python3.8/site-packages/selenium/webdriver/remote/webdriver.py", line 157, in __init__
self.start_session(capabilities, browser_profile)
File "/usr/lib/python3.8/site-packages/selenium/webdriver/remote/webdriver.py", line 252, in start_session
response = self.execute(Command.NEW_SESSION, parameters)
File "/usr/lib/python3.8/site-packages/selenium/webdriver/remote/webdriver.py", line 321, in execute
self.error_handler.check_response(response)
File "/usr/lib/python3.8/site-packages/selenium/webdriver/remote/errorhandler.py", line 242, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.WebDriverException: Message: Process unexpectedly closed with status signal
END RequestId: 52adeab9-8ee7-4a10-a728-82087ec9de30
REPORT RequestId: 52adeab9-8ee7-4a10-a728-82087ec9de30 Duration: 20507.74 ms Billed Duration: 21350 ms Memory Size: 1024 MB Max Memory Used: 131 MB Init Duration: 842.11 ms
Unknown application error occurred
I had it upload the geckodriver logs as well, but there wasn't much useful information in there:
1608506540595 geckodriver INFO Listening on 127.0.0.1:41597
1608506541569 mozrunner::runner INFO Running command: "/usr/bin/firefox" "--marionette" "-headless" "--window-size 1920,1080" "-foreground" "-no-remote" "-profile" "/tmp/rust_mozprofileQCapHy"
*** You are running in headless mode.
How can I even begin to debug this? The fact that the exact same container behaves differently depending upon where it's run seems fishy to me, but I'm not knowledgeable enough about Selenium, Docker, or Lambda to pinpoint exactly where the problem is.
Is my docker run command not accurately recreating the environment in Lambda? If so, then what command would I run to better simulate the Lambda environment? I'm not really sure where else to go from here, seeing as I can't actually reproduce the error locally to test with.
If anyone wants to take a look at the full code and try building it themselves, the repository is here - the lambda code is in lambda_function.py.
As for prior research, this question a) is about ChromeDriver and b) has no answers from over a year ago. The link from that one only has information about how to run a container in Lambda, which I'm already doing. This answer is almost my problem, but I know that there's not a version mismatch because the container works on my laptop just fine.
I have exactly the same problem, and a possible explanation.
I think what you want is not possible for the time being.
According to the AWS DevOps Blog, Firefox relies on the fallocate system call and /dev/shm.
However, AWS Lambda does not mount /dev/shm, so Firefox crashes when trying to allocate memory. Unfortunately, this behavior cannot be disabled in Firefox.
However, if you can live with Chromium, chromedriver has an option, --disable-dev-shm-usage, that disables the use of /dev/shm and writes shared-memory files to /tmp instead.
chromedriver works fine for me on AWS Lambda, if that is an option for you.
According to the same AWS DevOps Blog post, you can also use AWS Fargate to run Firefox/geckodriver.
There is an entry in the AWS forum from 2015 requesting that /dev/shm be mounted in Lambdas, but nothing has happened since then.
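To make the workaround concrete, here is a minimal Python sketch (function names are illustrative, not from the original post) of building a Chromium argument list that adds --disable-dev-shm-usage only when the shared-memory mount is missing, as it is inside Lambda:

```python
import os

def shm_available(path="/dev/shm"):
    """Return True if a writable shared-memory mount exists at `path`."""
    return os.path.isdir(path) and os.access(path, os.W_OK)

def chromium_args(shm_path="/dev/shm"):
    """Build a Chromium argument list; when /dev/shm is unavailable
    (e.g. inside AWS Lambda), fall back to writing shared-memory
    files to /tmp via --disable-dev-shm-usage."""
    args = ["--headless"]
    if not shm_available(shm_path):
        args.append("--disable-dev-shm-usage")
    return args
```

The same check also works as a runtime guard on a developer machine, where /dev/shm exists and the extra flag is unnecessary.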

snakemake: MissingOutputException within docker

I am trying to run a pipeline within a Docker container using snakemake. I am having a problem using the sortmerna tool to produce the {sample}_merged_sorted_mRNA and {sample}_merged_sorted outputs from the control_merged.fq and treated_merged.fq input files.
Here my Snakefile:
SAMPLES = ["control","treated"]

for smp in SAMPLES:
    print("Sample " + smp + " will be processed")

rule final:
    input:
        expand('/output/{sample}_merged.fq', sample=SAMPLES),
        expand('/output/{sample}_merged_sorted', sample=SAMPLES),
        expand('/output/{sample}_merged_sorted_mRNA', sample=SAMPLES),

rule sortmerna:
    input: '/output/{sample}_merged.fq',
    output: merged_file='/output/{sample}_merged_sorted_mRNA', merged_sorted='/output/{sample}_merged_sorted',
    message: """---SORTING---"""
    shell:
        '''
        sortmerna --ref /usr/share/sortmerna/rRNA_databases/silva-bac-23s-id98.fasta,/usr/share/sortmerna/rRNA_databases/index/silva-bac-23s-id98: --reads {input} --paired_in -a 16 --log --fastx --aligned {output.merged_file} --other {output.merged_sorted} -v
        '''
When running this I get:
Waiting at most 5 seconds for missing files.
MissingOutputException in line 57 of /input/Snakefile:
Missing files after 5 seconds:
/output/control_merged_sorted_mRNA
/output/control_merged_sorted
This might be due to filesystem latency. If that is the case, consider to increase the wait time with --latency-wait.
Shutting down, this might take some time.
Exiting because a job execution failed. Look above for error message
Complete log: /input/.snakemake/log/2018-11-05T091643.911334.snakemake.log
I tried to increase the latency with --latency-wait but I get the same result. The funny thing is that the two output files control_merged_sorted_mRNA.fq and control_merged_sorted.fq are produced, but the program still fails and exits. The version of snakemake is 5.3.0. Any help?
snakemake fails because the outputs described by the rule sortmerna are not produced. This is not a latency problem, it is a problem with your outputs.
Your rule sortmerna expects as output:
/output/control_merged_sorted_mRNA
and
/output/control_merged_sorted
but the program you are using (I know nothing about sortmerna) is apparently producing
/output/control_merged_sorted_mRNA.fq
and
/output/control_merged_sorted.fq
Check whether the names you pass to the --aligned and --other options on your program's command line must be the exact names of the files produced, or whether they are only basenames to which the program appends a .fq suffix. If you are in the latter case, I suggest you use:
rule final:
    input:
        expand('/output/{sample}_merged.fq', sample=SAMPLES),
        expand('/output/{sample}_merged_sorted.fq', sample=SAMPLES),
        expand('/output/{sample}_merged_sorted_mRNA.fq', sample=SAMPLES),

rule sortmerna:
    input:
        '/output/{sample}_merged.fq',
    output:
        merged_file='/output/{sample}_merged_sorted_mRNA.fq',
        merged_sorted='/output/{sample}_merged_sorted.fq'
    params:
        merged_file_basename='/output/{sample}_merged_sorted_mRNA',
        merged_sorted_basename='/output/{sample}_merged_sorted'
    message: """---SORTING---"""
    shell:
        """
        sortmerna --ref /usr/share/sortmerna/rRNA_databases/silva-bac-23s-id98.fasta,/usr/share/sortmerna/rRNA_databases/index/silva-bac-23s-id98: --reads {input} --paired_in -a 16 --log --fastx --aligned {params.merged_file_basename} --other {params.merged_sorted_basename} -v
        """

Exhausted Virtual Memory Installing SyntaxNet Using Docker Toolbox

I exhausted my virtual memory when trying to install SyntaxNet from this Dockerfile using the Docker Toolbox. I received this message when building from the Dockerfile:
ERROR: /root/.cache/bazel/_bazel_root/5b21cea144c0077ae150bf0330ff61a0/external/org_tensorflow/tensorflow/core/kernels/BUILD:1921:1: C++ compilation of rule '@org_tensorflow//tensorflow/core/kernels:svd_op' failed: gcc failed: error executing command /usr/bin/gcc -U_FORTIFY_SOURCE '-D_FORTIFY_SOURCE=1' -fstack-protector -Wall -Wl,-z,-relro,-z,now -B/usr/bin -B/usr/bin -Wunused-but-set-parameter -Wno-free-nonheap-object -fno-canonical-system-headers ... (remaining 115 argument(s) skipped): com.google.devtools.build.lib.shell.BadExitStatusException: Process exited with status 1. virtual memory exhausted: Cannot allocate memory ____Building complete. ____Elapsed time: 8548.364s, Critical Path: 8051.91s
I have a feeling this could be resolved by changing Bazel's default jobs limit with (for example) --jobs=1, however I'm not sure where I would put that in the Dockerfile.
There are two possibilities. You could either modify the Dockerfile so that it creates a ~/.bazelrc containing the following text:
build --jobs=1
Note that this works even though the Dockerfile runs bazel test (as opposed to bazel build), because build flags in the .bazelrc also apply to Bazel's test command.
The other possibility would be to modify the RUN command in the Dockerfile to include the --jobs=1 parameter, e.g. RUN [...] && bazel test --jobs=1 --genrule_strategy=standalone [...].
Bazel should then spawn no more than a single child process during the build. You can verify this by running "ps axuf" on your host and looking at the process tree of your container. If you modified the RUN command, you should also see the --jobs=1 parameter on Bazel's command line.
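For the first option, a minimal sketch of the Dockerfile change (the /root home directory is an assumption based on the cache paths in the error message):

```dockerfile
# Write the jobs limit into root's .bazelrc before any bazel invocation;
# build flags set here also apply to `bazel test`.
RUN echo 'build --jobs=1' >> /root/.bazelrc
```

Placing this RUN line anywhere before the bazel test step should be enough, since Bazel reads the .bazelrc at startup.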

ROS pointgrey_camera_driver crashes when I echo its topics

With Ubuntu 14.04 and Indigo, I cloned the pointgrey_camera_driver into catkinws/src and ran catkin_make install. When I run:
roslaunch pointgrey_camera_driver camera.launch
and then:
rostopic echo /camera/image_color
the camera_nodelet_manager process dies. I don't know where the problem might be. This did work at one time.
I have tried using a calibration file as well as setting the launch argument "calibrated" to 0.
When the process dies this is the message:
[camera/camera_nodelet_manager-2] process has died [pid 14845, exit code -11, cmd /opt/ros/indigo/lib/nodelet/nodelet manager __name:=camera_nodelet_manager __log:=/home/mitch/.ros/log/9c7752b6-2116-11e6-a626-c03fd56e6751/camera-camera_nodelet_manager-2.log].
log file: /home/mitch/.ros/log/9c7752b6-2116-11e6-a626-c03fd56e6751/camera-camera_nodelet_manager-2*.log
The log files are not enlightening.

How to use Docker in sbt-native-packager 0.8.0-M2 with Play

I am trying to build a Docker image on a Play 2.2 project. I am using Docker version 1.2.0 on Ubuntu Linux.
My Docker specific settings in Build.scala looks like this:
dockerBaseImage in Docker := "dockerfile/java:7"
maintainer in Docker := "My name"
dockerExposedPorts in Docker := Seq(9000, 9443)
dockerExposedVolumes in Docker := Seq("/opt/docker/logs")
Generated Dockerfile:
FROM dockerfile/java:latest
MAINTAINER
ADD files /
WORKDIR /opt/docker
RUN ["chown", "-R", "daemon", "."]
USER daemon
ENTRYPOINT ["bin/device-guides"]
CMD []
The output looks like dockerBaseImage is being ignored, and the default (dockerfile/java:latest) is not handled correctly:
[project] $ docker:publishLocal
[info] Wrote /..../project.pom
[info] Step 0 : FROM dockerfile/java:latest
[info] ---> bf7307ff060a
[info] Step 1 : MAINTAINER
[error] 2014/10/07 11:30:12 Invalid Dockerfile format
[trace] Stack trace suppressed: run last docker:publishLocal for the full output.
[error] (docker:publishLocal) Nonzero exit value: 1
[error] Total time: 2 s, completed Oct 7, 2014 11:30:12 AM
[project] $ run last docker:publishLocal
java.lang.RuntimeException: Invalid port argument: last
at scala.sys.package$.error(package.scala:27)
at play.PlayRun$class.play$PlayRun$$parsePort(PlayRun.scala:52)
at play.PlayRun$$anonfun$play$PlayRun$$filterArgs$2.apply(PlayRun.scala:69)
at play.PlayRun$$anonfun$play$PlayRun$$filterArgs$2.apply(PlayRun.scala:69)
at scala.Option.map(Option.scala:145)
at play.PlayRun$class.play$PlayRun$$filterArgs(PlayRun.scala:69)
at play.PlayRun$$anonfun$playRunTask$1$$anonfun$apply$1.apply(PlayRun.scala:97)
at play.PlayRun$$anonfun$playRunTask$1$$anonfun$apply$1.apply(PlayRun.scala:91)
at scala.Function7$$anonfun$tupled$1.apply(Function7.scala:35)
at scala.Function7$$anonfun$tupled$1.apply(Function7.scala:34)
at scala.Function1$$anonfun$compose$1.apply(Function1.scala:47)
[trace] Stack trace suppressed: run last compile:run for the full output.
[error] (compile:run) Invalid port argument: last
[error] Total time: 0 s, completed Oct 7, 2014 11:30:16 AM
What needs to be done to make this work?
I am able to build the image using Docker from the command line:
docker build --force-rm -t device-guides:1.0-SNAPSHOT .
Packaging/publishing settings are per-project settings, rather than per-build settings.
You were using a Build.scala style build, with a format like this:
object ApplicationBuild extends Build {
  val main = play.Project(appName, appVersion, libraryDependencies).settings(
    ...
  )
}
The settings should be applied to this main project. This means that you call the settings() method on the project, passing in the appropriate settings to set up the packaging as you wish.
In this case:
object ApplicationBuild extends Build {
  val main = play.Project(appName, appVersion, libraryDependencies).settings(
    dockerBaseImage in Docker := "dockerfile/java:7",
    maintainer in Docker := "My name",
    dockerExposedPorts in Docker := Seq(9000, 9443),
    dockerExposedVolumes in Docker := Seq("/opt/docker/logs")
  )
}
To reuse similar settings across multiple projects, you can either create a val of type Seq[sbt.Setting], or extend sbt.Project to provide the common settings. See http://jsuereth.com/scala/2013/06/11/effective-sbt.html for some examples of how to do this (e.g. Rule #4).
This placement of settings is not necessarily clear if one is used to using build.sbt-type builds instead, because in that file, a line that evaluates to an SBT setting (or sequence of settings) is automatically appended to the root project's settings.
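To sketch the reuse suggestion (the name commonDockerSettings is made up for illustration; the individual settings are the ones from the question):

```scala
// A reusable settings bundle; any project in the build can mix it in
// via .settings(commonDockerSettings: _*)
val commonDockerSettings: Seq[sbt.Setting[_]] = Seq(
  maintainer in Docker := "My name",
  dockerExposedPorts in Docker := Seq(9000, 9443),
  dockerExposedVolumes in Docker := Seq("/opt/docker/logs")
)

val main = play.Project(appName, appVersion, libraryDependencies)
  .settings(commonDockerSettings: _*)
```

This keeps per-project settings in one place without repeating them in every project definition.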
You executed the wrong command; I didn't see it the first time. You ran:
run last docker:publishLocal
Remove the run last part:
docker:publishLocal
Now your Docker image builds as expected.