How to get docker apereo/cas server working on localhost? - docker

I tried the following:
docker pull apereo/cas
docker run -p 80:8080 -p 443:8443 -d --name="cas" apereo/cas:v5.2.2
I then do:
docker exec -it cas /bin/bash
I set the /etc/cas/cas.properties file with:
cas.tgc.crypto.encryption.key=
cas.tgc.crypto.signing.key=
cas.webflow.crypto.signing.key=
cas.webflow.crypto.encryption.key=
(and fill in the keys that are autogenerated after running ./bin/run-cas.sh)
I then run:
keytool -genkey -keyalg RSA -alias cas -keystore thekeystore -storepass changeit -validity 9999 -keysize 2048
The problem is, when I try rerunning ./bin/run-cas.sh, I get this error:
The Tomcat connector configured to listen on port 8443 failed to
start. The port may already be in use or the connector may be
misconfigured.
Is there something else I need to do to get CAS running on my local machine?

When you run the container, it executes whatever ENTRYPOINT or CMD is in the Dockerfile. In the case of apereo/cas that is CMD ["/cas-overlay/bin/run-cas.sh"]. What's likely happening is that this script starts CAS and binds port 8443.
When you docker exec into the container, you're running an additional process alongside that one. You can see what I'm talking about if you run ps aux inside the container: you will see the CMD running via bash. When you then run run-cas.sh, it attempts to start CAS (again) on port 8443, but that port is already allocated in the container, and so you get the error.
Try this: docker run -it -p 80:8080 -p 443:8443 --name="cas" apereo/cas:v5.2.2 /bin/bash. When you add /bin/bash to the end, you're overriding the CMD in the Dockerfile, so the first run-cas.sh never runs and 8443 isn't allocated to a process (note -it in place of -d, so you land in an interactive shell). This is similar in function to using docker exec, except you're running bash instead of the initial command. Then you should be able to do all the things you're trying to do (I know nothing about CAS) without this port 8443 error.
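Putting that together, a minimal sketch of the workflow (commands taken from the question; adjust paths as needed):
docker run -it -p 80:8080 -p 443:8443 --name="cas" apereo/cas:v5.2.2 /bin/bash
# inside the container: set the keys in /etc/cas/cas.properties, then generate the keystore
keytool -genkey -keyalg RSA -alias cas -keystore thekeystore -storepass changeit -validity 9999 -keysize 2048
# now start CAS yourself; nothing else is holding port 8443
./bin/run-cas.sh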

What does the -P option for "docker run" actually do?

I set up a pypiserver with Docker for our team and ran into a problem where publishing packages didn't work. After reading the tutorial more carefully, I saw that -P .htpasswd packages was missing at the end of my docker run ... command.
Compare with the pypiserver documentation (last command in the docker section):
https://pypi.org/project/pypiserver/#using-the-docker-image
docker run -p 80:8080 -v ~/.htpasswd:/data/.htpasswd pypiserver/pypiserver:latest -P .htpasswd packages
According to man docker run, the -P option should only receive the values false or true (not a list of files, or even a single file), and it maps the container's ports to random ports on the host. That is clearly not happening in my use case, since docker port containername only outputs the single port mapping I configured with the lowercase -p option.
So what is actually happening here? I first thought that maybe the file list has nothing to do with the -P option (maybe it is just a toggle that is automatically set to true if it appears in the command), but when I remove the file list I get the error:
> docker run -p 80:8080 -v ~/.htpasswd:/data/.htpasswd pypiserver/pypiserver:latest -P
usage error: option -P requires argument
Either I seriously misunderstand CLI interfaces, or -P does something different from what is described in Docker's manpage.
-P, --publish-all=true|false
Publish all exposed ports to random ports on the host interfaces. The default is false.
When set to true publish all exposed ports to the host interfaces. The default is false. If the operator uses -P (or -p) then Docker will make the exposed port accessible on the host and the ports will be available to any client that can reach the host.
When using -P, Docker will bind any exposed port to a random port on the host within an ephemeral port range defined by /proc/sys/net/ipv4/ip_local_port_range. To find the mapping between the host ports and the exposed ports, use docker port(1).
You're looking in the wrong place. Yes, to docker run the -P option will publish all exposed ports to random high-numbered ports on the host. However, before you get to that: the docker run command itself is order-sensitive, and flags to docker run need to be passed in the right part of the command line:
docker run ${args_to_run} ${image_name} ${cmd_override}
In other words, as soon as docker sees something that is not an option to run, it parses that as the image name, and the rest of the args become the new value of CMD inside the container.
Next, when you have an entrypoint defined in your container, that entrypoint is concatenated with the CMD value to form one command running inside the container. E.g. if entrypoint is /entrypoint.sh and you override CMD with -P filename then docker will run /entrypoint.sh -P filename to start your container.
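Applied to the command from the question, docker run parses it roughly like this:
# options to run:  -p 80:8080 -v ~/.htpasswd:/data/.htpasswd
# image name:      pypiserver/pypiserver:latest
# CMD override:    -P .htpasswd packages  (appended to the image's entrypoint)
docker run -p 80:8080 -v ~/.htpasswd:/data/.htpasswd pypiserver/pypiserver:latest -P .htpasswd packages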
Therefore you need to look at the pypiserver image docs to see what syntax their entrypoint expects:
-P, --passwords PASSWORD_FILE
Use apache htpasswd file PASSWORD_FILE to set usernames & passwords when
authenticating certain actions (see -a option).
To allow unauthorized access, use:
-P . -a .
You can also dig into their repo to see they've set the entrypoint to:
ENTRYPOINT ["pypi-server", "-p", "8080"]
So with -P .htpasswd packages the command inside the container becomes:
pypi-server -p 8080 -P .htpasswd packages

Unable to connect Java client application to dockerized Ignite server on Windows 10

I have been able to successfully run Apache Ignite with a custom config using the command:
docker run -it --net=host -v "pathToLocalDirectory"/config:/opt/ignite/apache-ignite/config -e "CONFIG_URI=file:///opt/ignite/apache-ignite/config/default-config.xml" apacheignite/ignite
But when I run my java project in IntelliJ I get the message
"IP finder returned empty addresses list. Please check IP finder configuration and make sure multicast works on your network...".
Note: the Java client project works if I run the Ignite server using a Windows batch file.
Also, I have published port 47500 as well; the result is the same.
Try running it using docker run -it --net=host (don't mount the volumes).
If that doesn't work, it means that either something is incorrect with your docker setup OR you are configuring discovery differently for clients and servers.
check the IP addresses listed in your client discovery section.
Shell into the container and check what is actually mounted:
run docker exec -it container-name /bin/bash
check that /opt/ignite/apache-ignite/config/default-config.xml is there and contains the correct discovery info.
Check that the ignite log (located in /opt/ignite/apache-ignite/work/log/) specifies that the correct config is being used.
It will have a line like so: [INFO][main][IgniteKernal] Config URL: file:/opt/ignite/apache-ignite/config/default-config.xml
If you don't see the mounted config file, try mounting it more simply:
docker run -d -v /local/dir/config.xml:/config-file.xml -e CONFIG_URI=/config-file.xml apacheignite/ignite
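To confirm the container actually picked the file up, you can check from the host (the log directory is the one mentioned above; the exact log file name here is an assumption):
docker exec -it container-name /bin/bash -c "cat /config-file.xml"
docker exec -it container-name /bin/bash -c "grep 'Config URL' /opt/ignite/apache-ignite/work/log/*.log"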
more info:
https://apacheignite.readme.io/docs/docker-deployment
https://apacheignite.readme.io/docs/tcpip-discovery

Trouble connecting to a TCP server through Docker on WSL 2

I'm using WSL2 on Windows 10 using an Ubuntu image, and Docker for Desktop Windows (2.2.2.0) with the WSL integration.
I have a super basic rust tcp server. I think the only relevant bit is:
let listener = TcpListener::bind("127.0.0.1:8080").unwrap();
println!("Listening on 8080");
for stream in listener.incoming() {
    println!("Received connection");
    let stream = stream.unwrap();
    handle_connection(stream);
}
I can cargo install and run the binary without issue; the line above prints, and I can curl localhost:8080 from WSL and see the response I'd expect from the rest of the code.
I wanted to turn it into a docker image. Here's the Dockerfile.
FROM rust:1.40 as builder
COPY . .
RUN cargo install --path . --root .
FROM debian:buster-slim
COPY --from=builder ./bin/coolserver ./coolserver
EXPOSE 8080
ENTRYPOINT ["./coolserver"]
I then do:
docker build -t coolserver .
docker run -it --rm -p 8080:8080 coolserver
I see Listening on 8080 as expected (i.e. no panic), but attempting to curl localhost:8080 yields curl: (52) Empty reply from server. I don't know what to make of this. Logging suggests my program reaches listener.incoming(), but never enters the block.
To see if it was something to do with my setup (Docker for Desktop, WSL, etc.) or my Dockerfile, I followed the README for the docker-http-https-echo image, successfully. I can curl it on the specified ports.
I don't know how to debug further. Thanks in advance.
The EXPOSE keyword only opens up ports for inter-container communication. To reach these ports from the host you have to publish them with -p 8080:8080 when running the container via docker run.
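For illustration, using the coolserver image from the question (a sketch of the distinction only, not a complete fix):
docker run coolserver                  # 8080 exposed: reachable from other containers only
docker run -p 8080:8080 coolserver     # 8080 published: also reachable from the host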
@CarlosRafaelRamirez resolved it for me. It was as simple as binding to 0.0.0.0 rather than the loopback address 127.0.0.1, i.e. TcpListener::bind("0.0.0.0:8080"). Inside the container, 127.0.0.1 is the container's own loopback interface, so connections forwarded from the host never reach the listener. More info here: https://pythonspeed.com/articles/docker-connection-refused/

Can't access the Airflow webserver after running the container

I pulled the latest version of the airflow image, apache/airflow, from Docker Hub.
And I tried to run a container based on this image:
docker run -d -p 127.0.0.1:5000:5000 apache/airflow webserver
The container is running and the port status is fine, but I still can't access the airflow webserver from my browser:
This site can’t be reached.
127.0.0.1 refused to connect.
After a few minutes, the container stops automatically.
Could anyone advise?
I don't have experience with airflow, but this is how you get this image to run:
First of all, you have to override the entrypoint, because the existing one doesn't help a lot. From what I understand, this image needs two steps in order to run: initdb and webserver. For this reason the existing entrypoint is not useful.
Run:
docker run -p 5000:8080 --entrypoint /bin/bash -ti apache/airflow
This will open a shell inside a running container. Also note that I mapped host port 5000 to port 8080 inside the container.
Then inside the container run:
airflow db init
airflow webserver -p 8080
Note that in older versions of airflow, the command to initialize the database is airflow initdb, instead of airflow db init.
Open a browser and navigate to http://localhost:5000
When you close the container your work is gone though ;)
Another thing you can do is put the two airflow commands in a bash script, mount that script inside the container, and use it as the entrypoint (a sketch of the script follows below). Something like this:
docker run -p 5000:8080 -v $(pwd)/startup.sh:/opt/airflow/startup.sh --entrypoint /opt/airflow/startup.sh -d --name airflow apache/airflow
You should make startup.sh executable before running this.
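A minimal startup.sh along those lines (a sketch; it assumes the airflow binary is on the PATH inside the image, as in the interactive steps above):
#!/bin/bash
# initialize the metadata database, then start the webserver on port 8080
airflow db init
airflow webserver -p 8080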
Let me know if you run into issues.

Docker container with Blazegraph Triple Store not working possibly due to networking

I'm preparing a Docker image to teach my students the basics of Linked Data. I want them to actually prepare proper RDF and simulate the process of publishing it on the web as Linked Data, so I have prepared a Docker image comprising:
Triple Store: Blazegraph, listening to port 9999.
GRefine. I have copied an instance of Open Refine, with the RDF extension included. Listening to port 3333.
Linked Data Server: I have copied an instance of Jetty, with Pubby inside it. Listening to port 8080.
I have tested the three on my localhost (running Ubuntu 14.04) and they work fine. This is the Dockerfile I'm using to build the image:
FROM ubuntu:14.04
MAINTAINER Mikel Egaña Aranguren <my.email@x.com>
RUN apt-get update && apt-get install -y openjdk-7-jre wget curl
RUN mkdir /LinkedDataServer
COPY google-refine-2.5 /LinkedDataServer/google-refine-2.5
COPY blazegraph /LinkedDataServer/blazegraph
COPY jetty /LinkedDataServer/jetty
EXPOSE 9999
EXPOSE 3333
EXPOSE 8080
WORKDIR /LinkedDataServer
CMD java -server -jar blazegraph/bigdata-bundled.jar
CMD google-refine-2.5/refine -i 0.0.0.0
WORKDIR /LinkedDataServer/jetty
CMD java -jar start.jar jetty.port=8080
I run the container and it does map the appropriate ports:
docker run -d -p 9999:9999 -p 3333:3333 -p 8080:8080 mikeleganaaranguren/linked-data-server:0.0.1
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a08709d23acb mikeleganaaranguren/linked-data-server:0.0.1 /bin/sh -c 'java -ja 5 seconds ago Up 4 seconds 0.0.0.0:3333->3333/tcp, 0.0.0.0:8080->8080/tcp, 0.0.0.0:9999->9999/tcp dreamy_engelbart
The triple store, for example, seems to be working. If I go to 127.0.0.1:9999, I can access the triple store:
However, if I try to do anything (queries, uploading data, ...), the triple store simply fails with an "ERROR: Could not contact server". Since the same setup works on the host, I assume I'm doing something wrong with Docker. I have tried with -P instead of mapping the ports, and with --net=host, but I get the same error.
PS: Jetty also fails in the same fashion, and GRefine is not even working.
You'll need to make sure to use the IP of the docker container to access the Blazegraph instance. Outside of the container, it will not be running on 127.0.0.1, but rather the IP assigned to the docker container.
You'll need to run something like
docker inspect --format '{{ .NetworkSettings.IPAddress }}' "CONTAINER ID"
Where CONTAINER ID is the ID of your running container.
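For example, with the container ID from the docker ps output above, something like this (a sketch; port 9999 is Blazegraph's, as in the question):
CONTAINER_IP=$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' a08709d23acb)
curl http://$CONTAINER_IP:9999/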
