When running the standalone Neo4j Community Edition Docker image, some logs are written to stdout and some to a file inside the container: debug.log.
I would like to be able to set log4j options, such as the log level, appenders, etc. The reasons are:
I can't access the log files when running in, e.g., AWS ECS
The debug logs are quite verbose
log4j-properties are convenient to deploy and manage
So the question is: how can I set a custom log4j configuration for a Neo4j server that's running inside a container?
It's not documented whether this is even possible, but I have tried the usual things one starts to think of. First, adding a log4j.xml to the classpath, but to no avail. I have also tried setting dbms.jvm.additional to:
-Dlog4j.configuration=<path_to_file>
-Dlog4j.configurationFile=<path_to_file>
-Dlog4j2.configurationFile=<path_to_file>
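For context, one way to pass such flags to the containerized server is via the official image's mapping of NEO4J_* environment variables onto neo4j.conf settings, e.g. (the image tag and mount here are just an illustration, not my exact setup):
docker run \
  -e NEO4J_dbms_jvm_additional="-Dlog4j2.configurationFile=file:/var/lib/neo4j/conf/log4j2.xml" \
  -v $(pwd)/log4j2.xml:/var/lib/neo4j/conf/log4j2.xml \
  neo4j:4.4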
I've verified that the Neo4j process has the correct jvm-arguments:
/usr/local/openjdk-11/bin/java -cp /var/lib/neo4j/plugins:/var/lib/neo4j/conf:/var/lib/neo4j/lib/*:/var/lib/neo4j/plugins/*
-Xms2048m -Xmx2048m -XX:+UseG1GC -XX:-OmitStackTraceInFastThrow -XX:+AlwaysPreTouch
-XX:+UnlockExperimentalVMOptions -XX:+TrustFinalNonStaticFields -XX:+DisableExplicitGC
-XX:MaxInlineLevel=15 -XX:-UseBiasedLocking -Djdk.nio.maxCachedBufferSize=262144
-Dio.netty.tryReflectionSetAccessible=true -Djdk.tls.ephemeralDHKeySize=2048
-Djdk.tls.rejectClientInitiatedRenegotiation=true -XX:FlightRecorderOptions=stackdepth=256
-XX:+UnlockDiagnosticVMOptions -XX:+DebugNonSafepoints -Dlog4j2.disable.jmx=true
-Dlog4j.configuration=file:/var/lib/neo4j/conf/log4j.properties
-Dlog4j.configurationFile=file:/var/lib/neo4j/conf/log4j.xml
-Dlog4j2.configurationFile=file:/var/lib/neo4j/conf/log4j2.xml
-Dfile.encoding=UTF-8 org.neo4j.server.CommunityEntryPoint
--home-dir=/var/lib/neo4j --config-dir=/var/lib/neo4j/conf
I have verified that none of the jars in lib/ contain a log4j properties file that would take precedence.
And whatever I try, nothing changes in the way the server logs: the same events are still written to /logs/debug.log, and there are no exceptions about a bad log4j configuration or anything like that.
I have created a small project for easier debugging.
I would redirect the log files to stdout.
# forward logs to docker log collector
RUN ln -sf /dev/stdout /logs/debug.log
So this is a 'Docker' solution rather than a log4j one, but it is handy and also works for other use cases.
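A minimal Dockerfile sketch of the idea (the base image tag is a placeholder; use whatever image you already run):
FROM neo4j:4.4
# forward the file-based debug log to the container's stdout so the Docker
# log collector (e.g. the awslogs driver on ECS) picks it up
RUN mkdir -p /logs && ln -sf /dev/stdout /logs/debug.log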
I'm trying to use Xdebug (v3) through PhpStorm (v2022.3.2) to debug a PHPUnit test method from inside a Docker container.
Xdebug is correctly set up, and so is the server configuration in PhpStorm.
In fact, if I run tests from command line, Xdebug stops at breakpoints:
XDEBUG_SESSION=1 PHP_IDE_CONFIG="serverName=myapp.localhost" vendor/bin/phpunit --testdox --filter testProfileUpdate
If, instead, I debug the test by clicking the "bug" icon to the left of the test method, I receive the following error:
Connection was not established. Probably 'xdebug.remote_host=host.docker.internal' is incorrect. Change 'xdebug.remote_host'.
Looking at the command executed by PhpStorm to debug a test method, here is what I see:
[docker://myapp:latest/]:php -dxdebug.mode=debug -dxdebug.client_port=9000 -dxdebug.client_host=host.docker.internal /opt/project/vendor/phpunit/phpunit/phpunit --configuration /opt/project/phpunit.xml --filter "/(ProfileTest::testProfileUpdate)( .*)?$/" --test-suffix ProfileTest.php /opt/project/tests --teamcity
Tried to configure two more flags
I've also tried to configure two more flags:
-dxdebug.idekey="serverName=myapp.localhost"
-dxdebug.session=1
The resulting command is this:
[docker://myapp:latest/]:php -dxdebug.mode=debug -dxdebug.client_port=9000 -dxdebug.client_host=host.docker.internal -dxdebug.idekey="serverName=myapp.localhost" -dxdebug.session=1 /opt/project/vendor/phpunit/phpunit/phpunit --configuration /opt/project/phpunit.xml --filter "/(ProfileTest::testProfileUpdate)( .*)?$/" --test-suffix ProfileTest.php /opt/project/tests --teamcity
Also with this configuration, Xdebug doesn't start and breakpoints are ignored (and the error message pops up).
Tried to disable params passed by PhpStorm
I tried to disable the params passed automatically by PhpStorm by going to Settings > PHP > Debug.
Then, in the "Advanced settings" section, I unchecked the option "Pass required configuration options through command line".
Now, the resulting command is this:
[docker://myapp:latest/]:php /opt/project/vendor/phpunit/phpunit/phpunit --configuration /opt/project/phpunit.xml --filter "/(ProfileTest::testProfileUpdate)( .*)?$/" --test-suffix ProfileTest.php /opt/project/tests --teamcity
As you can see, there is no mention of xdebug.client_host nor of xdebug.remote_host.
Nevertheless, PhpStorm continues to pop up the same error mentioning xdebug.remote_host=host.docker.internal.
I expected an error to occur, but a different one, something saying that a required parameter is missing.
Request for help
Any idea about how to configure PhpStorm to start an Xdebug session to debug a PHPUnit test?
-dxdebug.client_port=9000
Xdebug 3 uses port 9003 by default instead of 9000. Try sticking to that.
In PhpStorm you most likely have both ports listed (9000,9003 as the default value, to cover new and old Xdebug versions) and it uses the first one for CLI scripts (which PHPUnit is). Either remove 9000 (preferred, so that only 9003 is left there) or make it last (i.e. 9003,9000).
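With only 9003 configured, the command PhpStorm generates should end up looking roughly like this (a hedged expectation; only the port differs from the command in the question):
[docker://myapp:latest/]:php -dxdebug.mode=debug -dxdebug.client_port=9003 -dxdebug.client_host=host.docker.internal /opt/project/vendor/phpunit/phpunit/phpunit --configuration /opt/project/phpunit.xml --filter "/(ProfileTest::testProfileUpdate)( .*)?$/" --test-suffix ProfileTest.php /opt/project/tests --teamcity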
-dxdebug.idekey="serverName=myapp.localhost"
This will not work (wrong approach). You are trying to pass it as Xdebug IDEKey param... but it actually should be the value of the ENV variable named PHP_IDE_CONFIG.
If you want to actually use it:
Look into the PHPUnit config file: it has the ability to set up custom ENV variables for tests.
Another place: PhpStorm's Run/Debug Configuration for PHPUnit has an Environment variables field: try using it for this purpose.
Or use it in your actual docker file/config.
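For example, if you go the Docker route, a one-line sketch for your image (the value is the one from your working CLI run; where exactly you set it is up to you):
# Dockerfile: tell PhpStorm which server configuration to map paths against
ENV PHP_IDE_CONFIG="serverName=myapp.localhost"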
Other than that: enable the Xdebug log (xdebug.log), try to debug (both when it's working and when it's not), and see what the log says for both runs (where it connects to, what the response is, etc.). You should spot a difference that gives you clues about what might be wrong (and in which direction to dig further).
This actually should be the first step when investigating "why it works here and not there" situations.
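A hedged example of turning the log on for a single CLI run, modelled on the working command from the question (the log path is just an example; any writable path inside the container will do):
XDEBUG_SESSION=1 PHP_IDE_CONFIG="serverName=myapp.localhost" php -dxdebug.mode=debug -dxdebug.client_host=host.docker.internal -dxdebug.log=/tmp/xdebug.log vendor/bin/phpunit --testdox --filter testProfileUpdate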
Most Docker images that embed Apache Spark have the whole Spark archive in them.
Also, most of the time we submit the Spark application to Kubernetes, so the Spark job runs in another Docker container.
As such, I am wondering: in order to make the Docker image smaller, how can I embed only the spark-submit feature?
That's a great question! I had a look (for the latest one on the Downloads page: 3.3.1) and found the following:
Looking at the contents of $SPARK_HOME/bin/spark-submit, you can see the following line:
exec "${SPARK_HOME}"/bin/spark-class org.apache.spark.deploy.SparkSubmit "$#"
Ok, so it looks like the $SPARK_HOME/bin/spark-submit script calls the $SPARK_HOME/bin/spark-class script. Let's have a look at that one.
Similar to spark-submit, spark-class calls the load-spark-env.sh script like so:
. "${SPARK_HOME}"/bin/load-spark-env.sh
This load-spark-env.sh script calls other scripts of its own as well. But there is also a bit about Spark jars in spark-class:
# Find Spark jars.
if [ -d "${SPARK_HOME}/jars" ]; then
  SPARK_JARS_DIR="${SPARK_HOME}/jars"
else
  SPARK_JARS_DIR="${SPARK_HOME}/assembly/target/scala-$SPARK_SCALA_VERSION/jars"
fi

if [ ! -d "$SPARK_JARS_DIR" ] && [ -z "$SPARK_TESTING$SPARK_SQL_TESTING" ]; then
  echo "Failed to find Spark jars directory ($SPARK_JARS_DIR)." 1>&2
  echo "You need to build Spark with the target \"package\" before running this program." 1>&2
  exit 1
else
  LAUNCH_CLASSPATH="$SPARK_JARS_DIR/*"
fi

# Add the launcher build dir to the classpath if requested.
if [ -n "$SPARK_PREPEND_CLASSES" ]; then
  LAUNCH_CLASSPATH="${SPARK_HOME}/launcher/target/scala-$SPARK_SCALA_VERSION/classes:$LAUNCH_CLASSPATH"
fi
So as you can see, it references the Spark jars directory (288 MB of the 324 MB total for Spark 3.3.1) and puts that on the launch classpath. Now, it's very possible that not all of those jars are needed when submitting an application to Kubernetes. But at the very least you need some kind of library to translate a Spark application into Kubernetes configuration that your Kubernetes API server can understand.
So my conclusion from this bit is:
We can quite easily follow exactly which files are needed. At first glance, I would say anything in $SPARK_HOME/bin and $SPARK_HOME/conf. But that is no issue, since those are all very small scripts/conf files.
Some of those scripts, though, put the jars directory on the classpath for the final java command.
Maybe they don't need all the jars, but they will need some kind of library to connect to the Kubernetes API server. So I would expect some jar to be needed there. I see there is a jar called kubernetes-model-core-5.12.2.jar. Maybe this one?
Since most of the size of this $SPARK_HOME folder comes from those jars, you can try to delete some jars and run your spark-submit jobs to see what happens. I would think that, amongst others, jars like commons-math3-3.6.1.jar or spark-mllib_2.12-3.3.1.jar would not be necessary for a simple spark-submit to a Kubernetes API Server.
(All those specific jars just come from that one Spark version I talked about in the start of this post)
Really interesting question, I hope this helps you a bit! Just try deleting some jars, run your spark-submit and see what happens!
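For instance, a crude trial-and-error sketch along those lines (the base image tag, SPARK_HOME path and jar names are assumptions, not a verified minimal set):
# start from a stock Spark image and drop jars you suspect are unused
FROM apache/spark:3.3.1
USER root
RUN rm /opt/spark/jars/spark-mllib_2.12-3.3.1.jar \
       /opt/spark/jars/commons-math3-3.6.1.jar
# then run a test spark-submit against your Kubernetes API server and watch
# for ClassNotFoundException / NoClassDefFoundError to see what was actually needed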
While setting up and configuring some Docker containers, I asked myself how I could automatically edit config files inside a container after the containerized service has finished installing (since the config files are only created during installation).
I have tried doing this with a shell script added as the entrypoint in the Dockerfile. However, as I said, the config file does not exist right at the beginning, and hence the sed commands in the script fail.
Mounting a config file with - ./myConfig.conf:/xy/myConfig.conf is also not an option, because the config contains some installation-dependent options.
The most reasonable solution I have found so far is to manually run a script that edits the config after the installation has finished, using docker exec -i mycontainer sh < editconfig.sh
EDIT
My question is formulated in general terms. However, it arose while working with Nextcloud in a docker-compose setup similar to the official example. That container contains a config.php file, the general config file of Nextcloud, which is generated during the installation. Certain properties of that file have to be changed (only a very limited number of environment variables can be specified). Since I am conducting some tests with this container, I have to reinstall it repeatedly and thus re-edit the config file.
Maybe you can try another approach and have your config file/application pick up its settings from environment variables. That would be consistent with the 12-factor app methodology; see here.
As I understand your case, you need your container to start by creating the config from some template.
I see a number of options for doing it:
Use a script that generates the config from a template plus arguments from the command line or environment variables (Jinja2 and Python, for example, or Mustache and Node.js); see the minimal sketch after this list. In this case, your entrypoint renders the template and then starts the application. To change the config, you will have to restart the service (container).
Run a service that stores the configuration and renders your configuration at run time. Personally, I like consul-template; we use it actively in our environment and have had no problems so far. In this case, the config is more dynamic and can be changed "on the fly". Your container will then run two processes: the application and the consul-template daemon. Obviously, you will need to run and maintain Consul. To reload the config, restarting the application process is enough.
Run a custom script to create the config. :)
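To illustrate the first option, a minimal entrypoint sketch using envsubst (the template path, output path and service command are only placeholders):
#!/bin/sh
# entrypoint.sh: render the config from a template using environment variables,
# then hand control over to the real service process
envsubst < /templates/myConfig.conf.template > /xy/myConfig.conf
exec myservice --config /xy/myConfig.conf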
This is not specifically about my current problem, but more of a general question. Sometimes I have a problem that only happens in the production configuration, and I'd like to debug it there. What is the best way to approach that in Elixir? Production runs without a graphical environment (Docker).
In dev I can use IEx.pry, but since Mix is unavailable in production, that does not seem to be an option.
For Erlang https://stackoverflow.com/a/21413344/1561489 mentions dbg and redbug, but even if they can be used, I would need help on applying them to Elixir code.
First, start a local node running iex on your dev machine using iex -S mix. If you don't want the application that's running locally to cause breakpoints to be activated, you need to disable the app from starting locally. To do this, you can simply comment out the application function in mix.exs or run iex -S mix run --no-start.
Next, you need to connect to the remote node running on docker from iex on your dev node using Node.connect(:"remote#hostname"). In order to do this, you have to make sure both the epmd and the node ports on the remote machine are reachable from your local node.
Finally, once your nodes are connected, from the local iex, run :debugger.start() which opens the debugger with the GUI. Now in the local iex, run :int.ni(<Module you want to debug>) and it will make the module visible to the debugger and you can go ahead and add breakpoints and start debugging.
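Note that for Node.connect/1 to succeed, your local node must itself be started as a distributed node with a name and the same cookie as the remote node, e.g. (node name and cookie here are just placeholders):
iex --name local@your.dev.machine --cookie my_cookie -S mix run --no-start
Use --sname instead of --name if you prefer short node names without a fully qualified host.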
You can find a tutorial with steps and screenshots here.
In the case that you are running your production on AWS, then you should first and foremost leverage CloudWatch to your advantage.
In your elixir code, configure your logger like this:
config :logger,
  handle_otp_reports: true,
  handle_sasl_reports: true,
  metadata: [:application, :module, :function, :file, :line]

config :logger,
  backends: [
    {LoggerFileBackend, :shared_error}
  ]

config :logger, :shared_error,
  path: "#{logging_dir}/verbose-error.log",
  level: :error
Inside your Dockerfile, configure an environment variable for where exactly erl_crash.dump gets written to, such as:
ENV ERL_CRASH_DUMP=/opt/log/erl_crash.dump
Then configure awslogs inside a .config file under .ebextensions as follows:
files:
  "/etc/awslogs/config/stdout.conf":
    mode: "000755"
    owner: root
    group: root
    content: |
      [erl_crash.dump]
      log_group_name=/aws/elasticbeanstalk/your_app/erl_crash.dump
      log_stream_name={instance_id}
      file=/var/log/erl_crash.dump

      [verbose-error.log]
      log_group_name=/aws/elasticbeanstalk/your_app/verbose-error.log
      log_stream_name={instance_id}
      file=/var/log/verbose-error.log
And ensure that you map a volume for your Docker container in Dockerrun.aws.json:
"Logging": "/var/log",
"Volumes": [
{
"HostDirectory": "/var/log",
"ContainerDirectory": "/opt/log"
}
],
After that, you can inspect your error messages under CloudWatch.
Now, if you are using Elastic Beanstalk (which my example above implicitly assumes) with a Docker deployment, as opposed to AWS ECS, then the container's stdout/stderr is redirected by default to /var/log/eb-docker/containers/eb-current-app/stdouterr.log and shipped to CloudWatch from there.
The main purpose of erl_crash.dump is to at least know when your application crashed and took the container down. AWS EB will normally restart the container, which would otherwise keep you ignorant of the restart. The same information can also be obtained from other Docker-related logs, and you can configure alarms to listen for them and notify you when your container had to restart. But another advantage of logging erl_crash.dump to CloudWatch is that, if need be, you can always export it to S3 later, download the file, and load it into :observer to analyse what went wrong.
If, after consulting the logs, you still require more intimate interaction with your production application, then you need to open a remote shell (remsh) to your node. If you use Distillery, you would configure the cookie and the node name of your production application in your release like this:
Inside rel/config.exs, set the cookie:
environment :prod do
  set include_erts: false
  set include_src: false
  set cookie: :"my_cookie"
end
and under rel/templates/vm.args.eex you set variables:
-name <%= node_name %>
-setcookie <%= release.profile.cookie %>
and inside rel/config.exs, you set release like this:
release :my_app do
  set version: "0.1.0"
  set overlays: [
    {:template, "rel/templates/vm.args.eex", "releases/<%= release_version %>/vm.args"}
  ]
  set overlay_vars: [
    node_name: "p@127.0.0.1"
  ]
end
Then you can connect directly to your production node running inside Docker by first SSH-ing into the EC2 instance that hosts the Docker container and running the following:
CONTAINER_ID=$(sudo docker ps --format '{{.ID}}')
sudo docker exec -it $CONTAINER_ID bash -c "iex --name q@127.0.0.1 --cookie my_cookie"
Once inside, you can poke around or, if need be and at your own peril, dynamically inject modified code for the module you would like to inspect. An easy way to do that would be to create a file inside the container and invoke Node.spawn_link(target_node, fn -> Code.eval_file(file_name, path) end).
If your production node is already running and you do not know the cookie, you can go inside the running container, run ps aux > t.log, and then cat t.log to figure out which (randomly generated) cookie has been applied, and use it accordingly.
Docker is an impediment to the way epmd communicates with other nodes. The best option, therefore, would be to create your own AWS AMI image using Packer and do bare-metal deployments instead.
Amazon has recently released a new feature for AWS ECS, awsvpc networking mode, which may facilitate inter-container epmd communication and thus allow connecting to your node directly. I have not tried it out yet, so I may be wrong.
In the case that you are running on a provider other than AWS, then figuring out how to get easy access to your remote logs with some SSM agent or some other service is a must.
I would recommend using some sort of exception-tracking tool; so far I have had great experiences with Sentry.
I'm running Play 2 on a 512 MB VPS.
It can create a new app:
play new test
But can't start that test project:
cd test
play
It reports such an error:
[freewind#289144 test]$ play
Error occurred during initialization of VM
Could not reserve enough space for object heap
Could not create the Java virtual machine.
[freewind#289144 test]$
After some research, I found that play2 invokes play-2.0/framework/build, and that the build script has the following settings.
I tried to modify the play-2.0/play shell script, from:
java ${DEBUG_PARAM} -Xms512M -Xmx1536M -Xss1M -XX:+CMSClassUnloadingEnabled
-XX:MaxPermSize=384M -Dfile.encoding=UTF8 -Dplay.version="${PLAY_VERSION}"
-Dsbt.ivy.home=`dirname $0`/../repository -Dplay.home=`dirname $0`
-Dsbt.boot.properties=`dirname $0`/sbt/sbt.boot.properties
-jar `dirname $0`/sbt/sbt-launch.jar "$#"
We can see that Xms is 512M; the VPS doesn't have enough memory for that.
So I change it to:
java ${DEBUG_PARAM} -Xms112M -Xmx300M -Xss1M
-XX:+CMSClassUnloadingEnabled -XX:MaxPermSize=84M -Dfile.encoding=UTF8
...
This time, the error message changed:
Error occurred during initialization of VM
Cannot create VM thread. Out of system resources.
What should I do?
Assuming you're running the Sun Hotspot VM, run it like this:
_JAVA_OPTIONS="-Xmx384m" play <your commands>
And you'll get what you need. When the VM launches, it includes the contents of the _JAVA_OPTIONS environment variable along with any other command-line Java options you specify. You'll know it was picked up because you'll see the following message on your console:
Picked up _JAVA_OPTIONS: -Xmx384m
The shell command above defines the variable only for execution of the rest of the shell command. If you wanted to make it more durable, you could say something like
export _JAVA_OPTIONS="-Xmx384m"
and put that in .bash_profile, or .profile, etc.
The _JAVA_OPTIONS environment variable is poorly documented, and I'm not sure how widely it is supported, but I'm pretty sure it works on Linux, BSD* (like Mac OS), and...I don't know what else.
I faced the same issue, but I found the reason and the solution.
It is a Java memory-parameter issue in Play. I did a simple check:
java -Xms512M -Xmx1024M -Xss1M -XX:+CMSClassUnloadingEnabled -XX:MaxPermSize=256M -version
This does not work, but
java -Xms512M -Xmx1024M -Xss1M -XX:+CMSClassUnloadingEnabled
does work! and
java -Xms512M -Xmx512M -Xss1M -XX:+CMSClassUnloadingEnabled -XX:MaxPermSize=256M
works too.
You have to modify the build.bat as I did: reduce the maximum memory size or change the maximum permanent-generation size (MaxPermSize).
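As a quick sanity check for what fits on a 512 MB VPS, you can test smaller values directly against the JVM in the same way, for example (the numbers are only an illustration, not a recommendation):
java -Xms64M -Xmx256M -Xss512k -XX:+CMSClassUnloadingEnabled -XX:MaxPermSize=128M -version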
I build and develop locally. I then run "play dist" to create a distribution which contains a start script. I deploy to my 512MB VPS using Fabric and do not have any memory issues.
Another way is to use the following command (it works when you don't use play dist but have the framework installed on the server as well; maybe it works with the standalone package too, but I have not tested it):
play "start 6000" -Xms64m -Xmx128m -server
the "start 6000" will start the server listening on port 6000.
play stage && target/start -Xmx384m