I set up a pypiserver with Docker for our team and ran into a problem where publishing packages didn't work. After reading the tutorial more carefully, I saw that -P .htpasswd packages was missing from the end of my docker run ... command.
Compare with the pypiserver documentation (last command in the docker section):
https://pypi.org/project/pypiserver/#using-the-docker-image
docker run -p 80:8080 -v ~/.htpasswd:/data/.htpasswd pypiserver/pypiserver:latest -P .htpasswd packages
According to man docker run, the -P option should only take the values true or false (not a list of files, or even a single file), and it maps the container's ports to random ports on the host. That is clearly not happening in my case, since docker port containername only outputs the single port mapping I configured with the lowercase -p option.
So what is actually happening here? I first thought that maybe the file list has nothing to do with the -P option (maybe it is just a toggle that is automatically set to true when it appears in the command), but when I remove the file list I get this error:
> docker run -p 80:8080 -v ~/.htpasswd:/data/.htpasswd pypiserver/pypiserver:latest -P
usage error: option -P requires argument
Either I seriously misunderstand CLI interfaces, or -P does something different from what is described in Docker's manpage.
-P, --publish-all=true|false
Publish all exposed ports to random ports on the host interfaces. The default is false.
When set to true publish all exposed ports to the host interfaces. The default is false. If the operator uses -P (or -p) then Docker will make the exposed port accessible on the host and the ports will be available to any client that can reach the host.
When using -P, Docker will bind any exposed port to a random port on the host within an ephemeral port range defined by /proc/sys/net/ipv4/ip_local_port_range. To find the mapping between the host ports and the exposed ports, use docker port(1).
You're looking in the wrong place. Yes, for docker run itself the -P option publishes all exposed ports to random high-numbered ports on the host. However, before you get to that: the docker run command line is order-sensitive, and flags to docker run need to appear in the right part of the command:
docker run ${args_to_run} ${image_name} ${cmd_override}
In other words, as soon as docker sees something that is not an argument to run, it parses the next thing as the image name, and the rest of the arguments become the new value of CMD inside the container.
Next, when an entrypoint is defined in your image, that entrypoint is concatenated with the CMD value to form the single command that runs inside the container. E.g., if the entrypoint is /entrypoint.sh and you override CMD with -P filename, then docker runs /entrypoint.sh -P filename to start your container.
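That concatenation can be sketched in plain shell (illustrative only; Docker actually combines the two as argument arrays rather than by joining strings):

```shell
# Illustrative sketch: ENTRYPOINT followed by the CMD override becomes
# the one command that starts the container
entrypoint="/entrypoint.sh"
cmd_override="-P filename"
full_cmd="$entrypoint $cmd_override"
echo "$full_cmd"
```

This is why the -P you appended after the image name never reached docker at all.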
Therefore you need to look at the pypiserver image docs to see what syntax their entrypoint expects:
-P, --passwords PASSWORD_FILE
Use apache htpasswd file PASSWORD_FILE to set usernames & passwords when
authenticating certain actions (see -a option).
To allow unauthorized access, use:
-P . -a .
You can also dig into their repo to see that they've set the entrypoint to:
ENTRYPOINT ["pypi-server", "-p", "8080"]
So with -P .htpasswd packages the command inside the container becomes:
pypi-server -p 8080 -P .htpasswd packages
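You can confirm this split yourself (assuming a Docker daemon is available and the image has been pulled):

```shell
# Entrypoint and Cmd are stored separately in the image config; the
# arguments after the image name in `docker run` replace only Cmd
docker inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' \
  pypiserver/pypiserver:latest
```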
Related
I have built a Docker image to run a Jenkins server in, and after creating a container from this image, I find that the container stays in an exited status and never starts, even when I attempt to start it from the UI.
Here are the steps I have taken, and perhaps I am missing something?
docker pull jenkins/jenkins
sudo mkdir /var/jenkins_home
docker run -p 9080:8080 -d -v /var/jenkins_home:/var/jenkins_home jenkins/jenkins
I already have Java running on port 8080; maybe this is affecting the container status?
java 2968 user 45u IPv6 0xbf254983f0051d87 0t0 TCP *:http-alt (LISTEN)
I'm not sure why it's running on this port; I have attempted to kill the PID, but it recreates itself.
Following the comments:
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
fc880ccd31ed jenkins/jenkins "/usr/bin/tini -- /u…" 3 seconds ago Exited (1) 2 seconds ago vigorous_lewin
docker logs vigorous_lewin
touch: setting times of '/var/jenkins_home/copy_reference_file.log': No such file or directory
Can not write to /var/jenkins_home/copy_reference_file.log. Wrong volume permissions?
The docs say
NOTE: Avoid using a bind mount from a folder on the host machine into
/var/jenkins_home, as this might result in file permission issues (the
user used inside the container might not have rights to the folder on
the host machine). If you really need to bind mount jenkins_home,
ensure that the directory on the host is accessible by the jenkins
user inside the container (jenkins user - uid 1000) or use -u
some_other_user parameter with docker run.
So they recommend using a Docker volume rather than a bind mount like yours. If you have to use a bind mount, you need to ensure that UID 1000 can read and write the host directory.
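Both fixes can be sketched like this (the volume name is just an example; these commands require a running Docker daemon):

```shell
# Recommended: let Docker manage the storage with a named volume
docker volume create jenkins_home
docker run -p 9080:8080 -d -v jenkins_home:/var/jenkins_home jenkins/jenkins

# Alternative: keep the bind mount, but hand the host directory
# to UID 1000 (the jenkins user inside the container)
sudo chown -R 1000:1000 /var/jenkins_home
docker run -p 9080:8080 -d -v /var/jenkins_home:/var/jenkins_home jenkins/jenkins
```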
The easiest solution is to run the container as root by adding -u root to your docker run command, like this
docker run -p 9080:8080 -d -v /var/jenkins_home:/var/jenkins_home -u root jenkins/jenkins
That's less secure, though, so depending on the environment you're running your container in, it might not be a good idea.
I built an application in Spring Boot, and I'm trying to write a Dockerfile with environment variables to specify the port to expose and to accept custom arguments from the command line.
# defines a source container image to build upon
FROM openjdk:8-jre-alpine
# adding a volume to save the logs
VOLUME /tmp
# copy a local file into the container
COPY build/libs/demo-0.0.1-SNAPSHOT.jar /app.jar
# environment variable
ENV SERVER_PORT 0
ENV JAVA_OPTS=""
# the app will listen on port ####
EXPOSE ${SERVER_PORT}
# tells Docker what it should execute when you run that container
ENTRYPOINT ["sh", "-c","java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar /app.jar"]
When I run:
docker run -d JAVA_OPTS=-Dserver.port=2020 SERVER_PORT=2020 -p 8080:2020 my-good-app
Error:
docker: invalid reference format: repository name must be lowercase.
I would like to do this from the command line:
Specify the port to expose by the container
Specify the port that my application will use when it runs
To accomplish this, as my command above, I have tried:
SERVER_PORT=2020 (The port to expose by the container)
JAVA_OPTS=-Dserver.port=2020 (The port that my application will use when it runs)
My goal is to specify from the command line the port that the container will expose. I would also like to pass custom arguments/commands from the command line to change my Spring Boot application's behavior; in this case, to change the port it runs on so that it matches the port exposed by the container.
There is no need to customize this. Since each container runs in an isolated network space, it's not a problem to have multiple containers all listening on the same port, as long as you use different host ports when you publish them.
Spring Boot by default listens on port 8080, so just hard-code that in your Dockerfile:
FROM openjdk:8-jre-alpine
COPY build/libs/demo-0.0.1-SNAPSHOT.jar /app.jar
# No need to have an anonymous volume on /tmp
# Don't need to customize port or provide empty default variable
# Do expose default port (mostly for documentation)
EXPOSE 8080
# Provide default command to run (Docker provides `sh -c`)
CMD java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar /app.jar
When you go to run it, you can specify an arbitrary host port, even if you run multiple copies of the container. Make sure the second -p port number matches what the container is actually listening on.
docker run -d --name app1 -p 8080:8080 my-good-app
docker run -d --name app2 -p 8081:8080 my-good-app
On modern Docker, "expose" as a verb means almost nothing, and there's no harm in having a port exposed with nothing listening on it. If you really needed a different container-side port, and you really needed it exposed, in principle you could still set these options:
docker run -d --name app3 \
-e JAVA_OPTS=-Dserver.port=2020 \
--expose 2020 \
-p 8082:2020 \
my-good-app
In the docker run command you show, make sure to specify a -e option before each environment variable you set (neither JAVA_OPTS nor SERVER_PORT has a -e in front of it, so docker tries to parse JAVA_OPTS=... as the image name, which leads to your "invalid reference format" error). Also remember that most of the Dockerfile is completely processed before anything in the docker run command is considered; no matter what -e SERVER_PORT=... you set at run time, the image will always have EXPOSE 0. You could use build arguments to specify this at compile time, but there's not a lot of value in doing that.
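If you did want the exposed port fixed at build time, a build argument is the mechanism; a sketch (the ARG name here is just an example):

```shell
# Dockerfile excerpt using a build-time argument:
#   ARG SERVER_PORT=8080
#   EXPOSE ${SERVER_PORT}
# Then bake a different port into the image when building:
docker build --build-arg SERVER_PORT=2020 -t my-good-app .
```

Note that this only changes the image metadata; it still does not affect which port Spring Boot actually listens on at run time.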
For logging purposes I want to know the name of the Docker instance that my program is running under.
For example if I start it as:
docker run -d --name my-docker-name some/image
how can I find the actual container name (my-docker-name, in this example) from a program running inside it?
TL;DR: --hostname option.
Issue
A program inside a container cannot directly access its container's name.
Solution a) (dirty and not easy):
https://stackoverflow.com/a/36068029/5321002
Solution b)
Add the option -h|--hostname="" set to the same value as the container name. Then you just need to query the hostname from the program and you're done.
edit
Solution c)
Provide, as you suggested, an environment variable with the name. The overall command would look as follows:
$ name="custom-uniq-name"
$ docker run -h $name --name $name -e NAME=$name image-to-run
If you add
-v /var/run/docker.sock:/var/run/docker.sock
to your docker run command, you expose the Docker socket to the container, and you will be able to launch docker commands from inside it, such as docker ps --filter.
Keep in mind that this is potentially dangerous: your container now has privileged access to the host.
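As a sketch: a container's default hostname is its own short container ID, so a program inside the container could recover its name like this (assumes the docker CLI is installed in the image and the socket is mounted as above):

```shell
# The default hostname is the short container ID; filter docker ps by it
# to look up this container's own name through the mounted socket
docker ps --filter "id=$(hostname)" --format '{{.Names}}'
```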
I'm preparing a Docker image to teach my students the basics of Linked Data. I want them to actually prepare proper RDF and simulate the process of publishing it on the web as Linked Data, so I have prepared a Docker image comprising:
Triple Store: Blazegraph, listening on port 9999.
GRefine: I have copied an instance of Open Refine, with the RDF extension included. Listening on port 3333.
Linked Data Server: I have copied an instance of Jetty, with Pubby inside it. Listening on port 8080.
I have tested all three on my localhost (running Ubuntu 14.04) and they work fine. This is the Dockerfile I'm using to build the image:
FROM ubuntu:14.04
MAINTAINER Mikel Egaña Aranguren <my.email#x.com>
RUN apt-get update && apt-get install -y openjdk-7-jre wget curl
RUN mkdir /LinkedDataServer
COPY google-refine-2.5 /LinkedDataServer/google-refine-2.5
COPY blazegraph /LinkedDataServer/blazegraph
COPY jetty /LinkedDataServer/jetty
EXPOSE 9999
EXPOSE 3333
EXPOSE 8080
WORKDIR /LinkedDataServer
CMD java -server -jar blazegraph/bigdata-bundled.jar
CMD google-refine-2.5/refine -i 0.0.0.0
WORKDIR /LinkedDataServer/jetty
CMD java -jar start.jar jetty.port=8080
I run the container and it does map the appropriate ports:
docker run -d -p 9999:9999 -p 3333:3333 -p 8080:8080 mikeleganaaranguren/linked-data-server:0.0.1
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a08709d23acb mikeleganaaranguren/linked-data-server:0.0.1 /bin/sh -c 'java -ja 5 seconds ago Up 4 seconds 0.0.0.0:3333->3333/tcp, 0.0.0.0:8080->8080/tcp, 0.0.0.0:9999->9999/tcp dreamy_engelbart
The triple store, for example, seems to be working: if I go to 127.0.0.1:9999, I can access it.
However, if I try to do anything (queries, uploading data, ...), the triple store simply fails with "ERROR: Could not contact server". Since the same setup works on the host, I assume I'm doing something wrong with Docker. I have tried with -P instead of mapping the ports, and with --net=host, but I get the same error.
PS: Jetty also fails in the same fashion, and GRefine is not working at all.
You'll need to make sure to use the IP of the docker container to access the Blazegraph instance. Outside of the container, it will not be reachable on 127.0.0.1, but rather on the IP assigned to the docker container.
You'll need to run something like
docker inspect --format '{{ .NetworkSettings.IPAddress }}' CONTAINER_ID
where CONTAINER_ID is the ID (or name) of your container.
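For example, to reach Blazegraph from the host by container IP (using the container name from the docker ps output in the question; assumes curl is available on the host):

```shell
# Look up the container's IP and query Blazegraph directly on it
IP=$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' dreamy_engelbart)
curl "http://$IP:9999/"
```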
I am referring to this site to link containers.
When two containers are linked, Docker will set some environment variables in the target container to enable programmatic discovery of information related to the source container.
This is the line specified in the documentation. When I look at /etc/hosts, I can see entries for both containers. But when I run the env command, I don't see any of the port-mapping variables described on that docker site.
Works fine for me:
$ docker run -d --name redis1 redis
0b869d9f5a43e24976beec6c292839ea2c67983012e50893f0b557cd8bc0c3b4
$ docker run --link redis1:redis1 debian env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=c23a30b8618f
REDIS1_PORT=tcp://172.17.0.3:6379
REDIS1_PORT_6379_TCP=tcp://172.17.0.3:6379
REDIS1_PORT_6379_TCP_ADDR=172.17.0.3
REDIS1_PORT_6379_TCP_PORT=6379
REDIS1_PORT_6379_TCP_PROTO=tcp
REDIS1_NAME=/berserk_nobel/redis1
REDIS1_ENV_REDIS_VERSION=2.8.19
REDIS1_ENV_REDIS_DOWNLOAD_URL=http://download.redis.io/releases/redis-2.8.19.tar.gz
REDIS1_ENV_REDIS_DOWNLOAD_SHA1=3e362f4770ac2fdbdce58a5aa951c1967e0facc8
HOME=/root
If you're still having trouble, you'll need to provide a way for us to recreate your problem.