groff in chroot environment

I want to use groff in a chroot environment. It bombs out with "Abort trap", which is not a very helpful error message. I assume it is invoking some tool or data file it can't reach. I can run the same command outside the chroot environment without issue (other than the security risk).
Is there a debug mode I can use to get a trace of what is being called?
Where can I get a list of what needs to be copied into the chroot environment?
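For what it's worth, a syscall trace usually answers both questions at once. A minimal sketch, assuming a Linux host with strace installed (the jail path and the test.1 page are placeholders; on BSD-style systems, where "Abort trap" is the usual wording, ktrace/kdump play the same role):
# run the trace from outside the jail so strace itself does not have to be copied in
strace -f -e trace=execve,openat chroot /path/to/jail groff -Tascii -man test.1
# list the shared libraries the groff helpers need, so they can be copied into the jail
ldd /usr/bin/groff /usr/bin/troff /usr/bin/grotty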

Related

Connection was not established. Probably 'xdebug.remote_host=host.docker.internal' is incorrect. Change 'xdebug.remote_host'

I'm trying to use Xdebug (v3) through PhpStorm (v2022.3.2) to debug a PHPUnit test method from inside a Docker container.
Xdebug is set up correctly, and so is the server configuration in PhpStorm.
In fact, if I run tests from command line, Xdebug stops at breakpoints:
XDEBUG_SESSION=1 PHP_IDE_CONFIG="serverName=myapp.localhost" vendor/bin/phpunit --testdox --filter testProfileUpdate
If, instead, I debug the test by clicking the "bug" icon to the left of the test method, I receive the following error:
Connection was not established. Probably 'xdebug.remote_host=host.docker.internal' is incorrect. Change 'xdebug.remote_host'.
Looking at the command executed by PhpStorm to debug a test method, here is what I see:
[docker://myapp:latest/]:php -dxdebug.mode=debug -dxdebug.client_port=9000 -dxdebug.client_host=host.docker.internal /opt/project/vendor/phpunit/phpunit/phpunit --configuration /opt/project/phpunit.xml --filter "/(ProfileTest::testProfileUpdate)( .*)?$/" --test-suffix ProfileTest.php /opt/project/tests --teamcity
Tried to configure two more flags
I've also tried to configure two more flags:
-dxdebug.idekey="serverName=myapp.localhost"
-dxdebug.session=1
The resulting command is this:
[docker://myapp:latest/]:php -dxdebug.mode=debug -dxdebug.client_port=9000 -dxdebug.client_host=host.docker.internal -dxdebug.idekey="serverName=myapp.localhost" -dxdebug.session=1 /opt/project/vendor/phpunit/phpunit/phpunit --configuration /opt/project/phpunit.xml --filter "/(ProfileTest::testProfileUpdate)( .*)?$/" --test-suffix ProfileTest.php /opt/project/tests --teamcity
With this configuration too, Xdebug doesn't start and breakpoints are ignored (and the same error message pops up).
Tried to disable params passed by PhpStorm
I tried to disable the params passed automatically by PhpStorm by going to Settings > PHP > Debug.
Then, in the "Advanced settings" section, I unchecked the option "Pass required configuration options through command line".
Now, the resulting command is this:
[docker://myapp:latest/]:php /opt/project/vendor/phpunit/phpunit/phpunit --configuration /opt/project/phpunit.xml --filter "/(ProfileTest::testProfileUpdate)( .*)?$/" --test-suffix ProfileTest.php /opt/project/tests --teamcity
As you can see, there is no mention of xdebug.client_host or xdebug.remote_host.
Nevertheless, PhpStorm keeps popping up the same error mentioning xdebug.remote_host=host.docker.internal.
I expected an error to occur, but a different one, something saying that a required parameter is missing.
Request for help
Any idea about how to configure PhpStorm to start an Xdebug session to debug a PHPUnit test?
-dxdebug.client_port=9000
Xdebug 3 uses port 9003 by default instead of 9000. Try sticking to that.
In PhpStorm you most likely have both ports listed (9000,9003 is the default value, to cover both old and new Xdebug versions), and it uses the first one for CLI scripts (which PHPUnit is). Either remove 9000 (preferred, so that only 9003 remains there) or move it last (i.e. 9003,9000).
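For reference, a typical Xdebug 3 ini block for a Docker setup looks something like this (a sketch only; the file path and exact values are assumptions that depend on your image):
; e.g. /usr/local/etc/php/conf.d/xdebug.ini
xdebug.mode=debug
xdebug.client_host=host.docker.internal
xdebug.client_port=9003
xdebug.start_with_request=yes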
-dxdebug.idekey="serverName=myapp.localhost"
This will not work (wrong approach). You are trying to pass it as the Xdebug IDE key parameter, but it should actually be the value of the environment variable named PHP_IDE_CONFIG.
If you want to actually use it:
Look into the PHPUnit config file: it has the ability to set up custom ENV variables for tests (see the sketch after this list).
Another place: PhpStorm's Run/Debug Configuration for PHPUnit has an Environment variables field: try using it for this purpose.
Or use it in your actual docker file/config.
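For the first option, a minimal sketch of the relevant part of phpunit.xml (the value is taken from the working CLI run in the question):
<phpunit>
    <php>
        <!-- lets PhpStorm map the incoming connection to the "myapp.localhost" server -->
        <env name="PHP_IDE_CONFIG" value="serverName=myapp.localhost"/>
    </php>
</phpunit>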
Other than that: enable the Xdebug log (xdebug.log), try to debug both when it works and when it doesn't, and see what the log says for each run (where it connects to, what the response is, etc.). Spotting the difference should give you clues about what might be wrong and in which direction to dig further.
This should actually be the first step when investigating "why does it work here and not there" situations.
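A hedged example of turning the log on for a one-off CLI run (the log path is arbitrary; xdebug.log and xdebug.log_level are standard Xdebug 3 settings):
XDEBUG_SESSION=1 PHP_IDE_CONFIG="serverName=myapp.localhost" \
  php -dxdebug.mode=debug -dxdebug.client_host=host.docker.internal -dxdebug.client_port=9003 \
      -dxdebug.log=/tmp/xdebug.log -dxdebug.log_level=7 \
      vendor/bin/phpunit --testdox --filter testProfileUpdate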

Install Keycloak adapter on WILDFLY that depends on ENVs in standalone.xml

I am trying to install the Keycloak adapter into my WildFly application server, which runs as a Docker container. I am using the image jboss/wildfly:17.0.0.Final as the base image. I am having trouble building my own image.
My Dockerfile:
FROM jboss/wildfly:17.0.0.Final
ENV WILDFLY_HOME /opt/jboss/wildfly
COPY keycloak-adapter.zip $WILDFLY_HOME
RUN unzip $WILDFLY_HOME/keycloak-adapter.zip -d $WILDFLY_HOME
# My standalone.xml that contains ENVs
COPY standalone.xml $WILDFLY_HOME/standalone/configuration/
# Here it crashes!
RUN $WILDFLY_HOME/bin/jboss-cli.sh --file=$WILDFLY_HOME/bin/adapter-elytron-install-offline.cli
The official documentation says:
Unzip the adapter zip file in $WILDFLY_HOME (/opt/jboss/wildfly). I've done this, and it works.
In order to install the adapter (while the server is offline) you need to execute ./bin/jboss-cli.sh --file=bin/adapter-elytron-install-offline.cli, which basically starts the server (needed because you can't modify the configuration otherwise) and modifies standalone.xml.
Here is the problem: my standalone.xml is parametrized with environment variables that are only set at runtime, because it runs in multiple different environments. When the ENVs are not set, the server crashes, and so does the command above.
The error during docker build at the last step:
Cannot start embedded server WFLYEMB0021: Cannot start embedded process: JBTHR00005: Operation failed WFLYSRV0056: Server boot has failed in an unrecoverable manner.
The cause
Despite the imprecise error message, I have clearly identified the unset ENVs as the cause: I ran the container with bash, set the required ENVs to some random values, and executed the jboss-cli command again, and it worked.
I know that the docs say it's also possible to configure the adapter while the server is running, but this is not an option for me; I need this configured at the docker build stage.
So the problem is that they provide an offline installation that fails if standalone.xml depends on environment variables, which are usually not set during docker build. Unfortunately, I could not find a way to tell the JBoss CLI to ignore unset environment variables.
Do you know any workaround?
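For illustration, the manual test described above (setting the ENVs to throwaway values before running the CLI) can also be expressed directly in the Dockerfile; DB_HOST and DB_PORT are placeholders for whatever variables the standalone.xml actually references:
# dummy values for this build step only; the real values are injected at runtime
RUN DB_HOST=dummy DB_PORT=0 \
    $WILDFLY_HOME/bin/jboss-cli.sh --file=$WILDFLY_HOME/bin/adapter-elytron-install-offline.cli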

Tweak logging of standalone Neo4j server

When running the Neo4j standalone community Docker, some logs are written to stdout and some to a file inside the container: debug.log.
I would like to be able to set log4j options, like log-level, appenders etc. The reasons are:
I can't access the log files when running in, e.g., AWS ECS
The debug logs are quite verbose
log4j-properties are convenient to deploy and manage
So the question is, how can I set a custom log4j properties for a Neo4j server that's running inside a container?
It's not documented whether this is even possible, but I have tried the usual things one starts to think of. First, adding a log4j.xml to the classpath, but to no avail. I have also tried setting dbms.jvm.additional to
-Dlog4j.configuration=<path_to_file>
-Dlog4j.configurationFile=<path_to_file>
-Dlog4j2.configurationFile=<path_to_file>
I've verified that the Neo4j process has the correct jvm-arguments:
/usr/local/openjdk-11/bin/java -cp /var/lib/neo4j/plugins:/var/lib/neo4j/conf:/var/lib/neo4j/lib/*:/var/lib/neo4j/plugins/*
-Xms2048m -Xmx2048m -XX:+UseG1GC -XX:-OmitStackTraceInFastThrow -XX:+AlwaysPreTouch
-XX:+UnlockExperimentalVMOptions -XX:+TrustFinalNonStaticFields -XX:+DisableExplicitGC
-XX:MaxInlineLevel=15 -XX:-UseBiasedLocking -Djdk.nio.maxCachedBufferSize=262144
-Dio.netty.tryReflectionSetAccessible=true -Djdk.tls.ephemeralDHKeySize=2048
-Djdk.tls.rejectClientInitiatedRenegotiation=true -XX:FlightRecorderOptions=stackdepth=256
-XX:+UnlockDiagnosticVMOptions -XX:+DebugNonSafepoints -Dlog4j2.disable.jmx=true
-Dlog4j.configuration=file:/var/lib/neo4j/conf/log4j.properties
-Dlog4j.configurationFile=file:/var/lib/neo4j/conf/log4j.xml
-Dlog4j2.configurationFile=file:/var/lib/neo4j/conf/log4j2.xml
-Dfile.encoding=UTF-8 org.neo4j.server.CommunityEntryPoint
--home-dir=/var/lib/neo4j --config-dir=/var/lib/neo4j/conf
I have verified that none of the lib jars contains a log4j properties file that takes precedence.
And whatever I try, nothing changes in the way the server logs: the same events are written to /logs/debug.log, and there are no exceptions about a bad log4j config or anything like that.
I have created a small project for easier debugging.
I would redirect the log file to stdout.
# forward logs to docker log collector
RUN ln -sf /dev/stdout /logs/debug.log
So this is a 'Docker' solution rather than a log4j one, but it is handy and also works for other use cases.

Docker RUN instruction to source bash profile

I run some installation scripts via Docker; they change ~/.bashrc, but then I need to source it so that the installed commands are available to the RUN instructions below.
I tried the obvious RUN . ~/.bashrc and got a /bin/sh: 13: /root/.bashrc: shopt: not found error.
I tried RUN . ~/.profile and got mesg: ttyname failed: Inappropriate ioctl for device.
I do not want to use ENV instructions. The point of having external installation scripts is to be able to use them in non-Docker environments as well, for example when running unit tests locally. ENV instructions would duplicate the environment setup that is already done in the installation scripts.
You should not try to set up shell dotfiles in Docker. Many typical paths do not run them at all; for example
# In a Dockerfile
CMD ["some", "command", "here"]
# From the command line
docker run myimage some command here
The Docker environment is fundamentally different from a standalone Linux system. In addition to shell dotfiles, a "home directory" isn't really a Docker concept; and if you have a multi-part process, on Docker it's standard to run each part in a separate container, whereas on standalone Linux you could use the init system to keep all of the parts running together. If you're expecting things to work exactly the same with exactly the same installation scripts, a virtual machine would be a better technological match for what you're attempting.
("Inappropriate ioctl for device" also suggests that there are things in the dotfiles that strongly expect to be run from an actual terminal, which you don't necessarily have at docker build time.)
My generic advice here is:
If possible, install things in the "system" directories within the image and avoid needing custom environment variable settings. (Don't use a version manager like nvm or rvm; don't use a Python virtual environment.)
If you do have to set environment variables, ENV is the way to do it.
If you really can't do either of the above, you can set environment variables in an ENTRYPOINT script before launching the main process (a minimal wrapper is sketched below); but if it's important to you that the variables show up in docker inspect or in docker exec shells, be aware that they won't be set there.
(Also remember that each RUN command launches a new container with a totally new shell environment. You can RUN . .profile; foo, but the environment variable settings won't carry through to the next RUN line.)
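A minimal sketch of that entrypoint pattern (entrypoint.sh is a hypothetical name, and it assumes the installation scripts leave their settings in /root/.profile):
#!/bin/sh
# entrypoint.sh: load the environment produced by the installation scripts,
# then replace this shell with the container's main command
. /root/.profile
exec "$@"
In the Dockerfile, wire it up ahead of the normal CMD:
# In a Dockerfile
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["some", "command", "here"]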

Ruby on Rails- how to run a bash script as root?

What I want to do is use button_to and friends to start different scripts on a Linux server. Not all of the scripts will need to run as root, but some will, since they'll be running "apt-get dist-upgrade" and the like.
The PassengerDefaultUser is set to www-data in apache2.conf
I have already tried running scripts from the controller that do little things like writing to text files, just so that I know Rails is executing the script correctly (in other words, I know how to run a script from the controller). But I cannot figure out how to run a script that requires root access. Can anyone give me a lead?
A note on security: thanks for all the warnings against hacking. You don't need to lose any sleep, though, because A) the webapp is not accessible from the public internet, it will only be on private intranets, B) the app is password protected, and C) the user will not be able to supply custom input, only make selections from a form that will be passed as variables to the script. That said, I am not disregarding your recommendations for security; I will be considering them very carefully in my design.
You should be using the setuid bit to achieve the same functionality without sudo, but you shouldn't be running Bash scripts. Setuid is much more secure than sudo. With either sudo or setuid you are running a program as root, but only setuid (with the help of certain languages) offers some added security measures.
Essentially you'll be using scripts that are temporarily allowed to run as their owner instead of as the user that invoked them. Ruby and Perl can detect when a script is run as a different user than the caller and enforce security measures to protect against unsafe calls. This is called taint mode. Bash does not have a taint mode at all.
Taint mode essentially works by declaring all input from an outside source unsafe for use when passed to a system call.
Setting it up:
Use chmod to set the permissions on the script you want to run to 4755, and use chown to set its owner to root:
$ chmod 4755 script.rb
$ chown root script.rb
Then just run the script as you normally would. The setuid bit kicks in and runs the script as if it had been invoked by root. This is the safest way to temporarily elevate privileges.
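Since some kernels (Linux among them) ignore the setuid bit on interpreted scripts, it is worth verifying that the bit actually took effect; a two-line check you can drop into the script itself, using Ruby's standard Process module:
#!/usr/bin/env ruby
# print the real and effective UIDs; the effective UID should be 0 if setuid was honoured
puts "real uid: #{Process.uid}, effective uid: #{Process.euid}"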
See Ruby's documentation on safe levels and taint to understand Ruby's sanitization requirements for protecting against tainted input causing harm, or the perlsec FAQ to learn how the same thing is done in Perl.
Again: if you're dead set on running scripts as root from an automated system, do not use Bash! Use Ruby or Perl instead. Taint mode forces you to take security seriously and can prevent many unnecessary problems down the line.
