I received this error when I ran the kafka container on docker.
Thereafter, I gave the following commands:
docker exec -it <kafka-container> /bin/sh   # opens a shell inside the running kafka container
cd /opt/kafka
./bin/kafka-server-start.sh config/server.properties
I came across the suggestion to delete the /tmp/kafka-logs folder, which solved the issue for many, as stated here:
Apache kafka: Failed to acquire lock on file .lock in tmp/kafka-logs
However, when I try this, it forces the container to stop.
Also, I cannot find any .lock file inside the folder.
Kindly help me with this!
It's not clear which Kafka image you're actually using, but they all run the broker start script as their entrypoint, which is what keeps the container running.
Besides, the error says nothing about the /tmp folder, and Docker container filesystems are ephemeral, so if you are not using volumes/mounts, simply restarting the container should clear the error.
All in all, you shouldn't start a second broker process manually.
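If you just want a clean broker, acting on the container from the host is enough. A minimal sketch, assuming a container named kafka (substitute your real container name and image):
docker restart kafka                            # stops the old broker process, releasing its lock, and starts it again
# or, for a completely fresh container filesystem:
docker rm -f kafka
docker run -d --name kafka <your-kafka-image>   # the image's entrypoint starts the broker for you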
I followed this install guide to run dcm4chee in a Docker container. When I start the services with sudo docker-compose -p dcm4chee start, it gives me this error:
Starting ldap ... done
Starting db ... done
Starting arc ... failed
This is my docker-compose.yml.
docker-compose.env:
STORAGE_DIR=/storage/fs1
POSTGRES_DB=pacsdb
POSTGRES_USER=pacs
POSTGRES_PASSWORD=pacs
Why do ldap and db start fine while arc fails?
Edit:
/var/local/dcm4chee-arc/wildfly:/opt/wildfly/standalone
/var/local/dcm4chee-arc/storage:/storage
These entries are in the volumes section of my docker-compose.yml, but I can't find a wildfly folder or file on my system.
It worked. The problem is that, for some reason, the up command (docker-compose -p dcm4chee up -d) failed to create the wildfly and storage folders, so when I then ran the start command, Docker failed to start the service because the required folders hadn't been created. The solution was simply to run the up command again; this time it created the folders correctly.
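If up keeps failing to create them, you can also pre-create the bind-mounted host directories yourself and then bring the stack up again. A minimal sketch, assuming the host paths from the volume mappings above:
sudo mkdir -p /var/local/dcm4chee-arc/wildfly /var/local/dcm4chee-arc/storage   # host dirs backing the volumes
sudo docker-compose -p dcm4chee up -d                                           # recreate and start ldap, db and arc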
I encountered an issue when running the docker container.
An error log was generated as below:
[Error] mysqld: unknown variable 'wait_timeout = 288000'.
I wanted to test some docker container features.
So I opened a bash shell in the container and edited the file /etc/mysql/my.cnf.
I added the line "wait_timeout = 288000" below the [mysqld] section.
However, after rebooting, when I ran the container it exited immediately with exit code 1.
I knew that the error was caused by the variable I just added.
So I wanted to delete the variable, but now I can't open a bash shell in the container.
Is there any way that I can delete the variable “wait_timeout” in this case?
If there isn’t, could you recommend other methods for troubleshooting?
Thanks for checking the issue.
Delete and recreate the container, and it will start fresh from a clean container filesystem.
That is probably also a better way to modify the database configuration (if you do, in fact, need a custom my.cnf). You can bind-mount a directory of configuration files into the container at startup time:
docker run -d -p 3306:3306 --name mysql \
  -e MYSQL_ROOT_PASSWORD=secret \
  -v $PWD/mysql-conf:/etc/mysql/conf.d \
  mysql:8
Then when the configuration changes, you can delete and recreate this container:
docker stop mysql
docker rm mysql
docker run -d -p 3306:3306 ... mysql:8 # as above
(See "Using a custom MySQL configuration file" in the Docker Hub mysql image page for more information.)
Deleting and recreating Docker containers is very routine, and one of the benefits is that a new container always starts with a "clean" filesystem. This particular setup also keeps the modified configuration file outside the container, so if you are forced to recreate the container (to upgrade MySQL for a critical security fix, for example), it's something you're used to doing and you won't lose data or settings.
Upon starting a docker container, I get the following error:
standard_init_linux.go:175: exec user process caused "permission denied"
sudo does not fix it. I have all permissions.
docker-compose only shows the container crashing in the same way.
I use Linux and the Dockerfile is on a cifs-share. Starting from a locally mounted drive, everything works.
As hinted at here, the filesystem is mounted noexec, i.e. executing scripts or binaries from it is not allowed. You can test that by finding a shell script, checking that it has the execute bit set with ls -l, and then trying to run it. Looking at the mount parameters also reveals the problem:
//nas.local/home on /cifs/h type cifs ( <lots of options omitted> , noexec)
Interestingly, the command that mounted the share did not explicitly request noexec; the mount came out that way anyway. Adding -o exec to the mount command and remounting fixed it.
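A minimal sketch of the check and the fix, assuming the share and mount point from the output above (add whatever credential options you normally pass to mount):
mount | grep /cifs/h                                   # noexec in the options confirms the problem
sudo umount /cifs/h
sudo mount -t cifs -o exec //nas.local/home /cifs/h    # remount with exec allowed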
I solved the issue by changing the shebang line at the top of the train file.
It was
#!/usr/bin/env python
I changed it to:
#!/usr/bin/env python3.5
depending on which version of Python I had installed.
I spent the weekend poring over the Docker docs and playing around with the toy applications and example projects. I'm now trying to write a super-simple web service of my own and run it from inside a container. In the container, I want my app (a Spring Boot app under the hood), called bootup, to have the following directory structure:
/opt/
  bootup/
    bin/
      bootup.jar      ==> the app
    logs/
      bootup.log      ==> log file; GETS CREATED BY THE APP AT STARTUP
    config/
      application.yml ==> app config file
      logback.groovy  ==> log config file
It's very important to note that when I run my app locally on my host machine - outside of Docker - everything works perfectly fine, including the creation of log files to my host's /opt/bootup/logs directory. The app endpoints serve up the correct content, etc. All is well and dandy.
So I created the following Dockerfile:
FROM openjdk:8
RUN mkdir /opt/bootup
RUN mkdir /opt/bootup/logs
RUN mkdir /opt/bootup/config
RUN mkdir /opt/bootup/bin
ADD build/libs/bootup.jar /opt/bootup/bin
ADD application.yml /opt/bootup/config
ADD logback.groovy /opt/bootup/config
WORKDIR /opt/bootup/bin
EXPOSE 9200
ENTRYPOINT java -Dspring.config=/opt/bootup/config -jar bootup.jar
I then build my image via:
docker build -t bootup .
I then run my container:
docker run -it -p 9200:9200 -d --name bootup bootup
I run docker ps:
CONTAINER ID IMAGE COMMAND ...
3f1492790397 bootup "/bin/sh -c 'java ..."
So far, so good!
My app should then be serving a simple web page at localhost:9200, so I open my browser to http://localhost:9200 and I get nothing.
When I use docker exec -it 3f1492790397 bash to "ssh" into my container, I see everything looks fine, except the /opt/bootup/logs directory, which should have a bootup.log file in it -- created at startup -- is instead empty.
I tried using docker attach 3f1492790397 and then hitting http://localhost:9200 in my browser, to see if that would generate some standard output (my app logs both to /opt/bootup/logs/bootup.log and to the console), but that doesn't yield any output.
So I think what's happening is that my app (for some reason) doesn't have permission to create its own log file when the container starts up, which puts the app in a weird state or even prevents it from starting up altogether.
So I ask:
Is there a way to see what user my app is starting up as?; or
Is there a way to tail standard output while the container is starting? Attaching after startup doesn't help me because I think by the time I run the docker attach command the app has already choked
Thanks in advance!
I don't know why your app isn't working, but I can answer your questions:
Is there a way to see what user my app is starting up as?; or
A: Docker containers run as root unless otherwise specified.
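You can confirm that from outside the container, for example (assuming the container name bootup from your docker run command):
docker exec bootup id     # uid=0(root) means commands in the container run as root
docker top bootup         # lists the container's processes together with the user that owns them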
Is there a way to tail standard output while the container is starting? Attaching after startup doesn't help me because I think by the time I run the docker attach command the app has already choked
A: Docker containers send stdout/stderr to the Docker logs by default. There are two ways to see these: one is to run the container with -it instead of -d to get an interactive session that prints your container's stdout; the other is to use the docker logs <container_name> command on a running or stopped container.
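For example, with the container from the question (again assuming the name bootup):
docker logs -f bootup          # -f follows the output live, like tail -f, while the app starts
docker logs --tail 100 bootup  # or just show the last 100 lines after the fact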
docker attach 3f1492790397
This doesn't do what you are hoping for. What you want is docker exec (probably docker exec -it bootup bash), which will give you a shell inside the container, where you can check for your log files or try to hit the app with curl from inside the container.
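Something like the following, assuming curl is available in the image (the full openjdk:8 image normally includes it):
docker exec -it bootup bash
# then, inside the container:
ls -l /opt/bootup/logs               # is the log file actually being created?
curl -v http://localhost:9200/       # does the app respond on its own loopback interface?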
Why do I get no output?
Hard to say without the info from the earlier commands. Is your app listening on 0.0.0.0 or on localhost (your laptop browser will look like an external machine to the container)? Does your app require a supervisor process that isn't running? Does it require some other JAR files that are on the CLASSPATH on your laptop but not in the container? Are you running docker using Docker-Machine (in which case localhost is probably not the name of the container)?
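If it turns out the app is bound to localhost only, a hypothetical application.yml tweak (standard Spring Boot properties; the values are just an illustration) would make it listen on all interfaces so the published port reaches it:
server:
  address: 0.0.0.0
  port: 9200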
I have two simple containers, web and db. I built and can successfully up the containers via docker-compose on both Windows and Ubuntu. However, when I attempt to up on Photon, I get the following error for my web container.
Handler for POST /v1.21/containers/.../start returned error: Container command 'apache2-foreground' not found or does not exist.
But when I build the image based on the Dockerfile, and docker run web, it launches and runs fine. Any ideas about this error?
apache2-foreground is a command (script) that calls apache2 -DFOREGROUND (see the httpd/php repos/containers). It's the command that the php/httpd images run automatically.
If a command that works with plain docker fails when run from docker-compose, it could be a bug in docker-compose; see this for instance.
It could also be that you have bad paths in the volume mappings of your docker-compose.yml, which can mount over the directory inside the image that contains apache2-foreground and make it disappear.
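A quick way to check both possibilities from the compose side (assuming your service is named web, as in the question):
docker-compose config                                  # prints the fully resolved file, so bad volume paths are easy to spot
docker-compose run --rm web which apache2-foreground   # should print the script's path, typically /usr/local/bin/apache2-foreground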