Rust actix_web inside Docker isn't reachable, why? - docker

I'm trying to make a Docker container for my Rust program. Here is the Dockerfile:
FROM debian
RUN apt-get update && \
apt-get -y upgrade && \
apt-get -y install git curl g++ build-essential
RUN curl https://sh.rustup.rs -sSf | bash -s -- -y
WORKDIR /usr/src/app
RUN git clone https://github.com/unegare/rust-actix-rest.git
RUN ["/bin/bash", "-c", "source $HOME/.cargo/env; cd ./rust-actix-rest/; cargo build --release; mkdir uploaded"]
EXPOSE 8080
ENTRYPOINT ["/bin/bash", "-c", "echo 'Hello there!'; source $HOME/.cargo/env; cd ./rust-actix-rest/; cargo run --release"]
cmd to run: docker run -it -p 8080:8080 rust_rest_api/dev
But curl from outside, curl -i -X POST -F files[]=@img.png 127.0.0.1:8080/upload, results in curl: (56) Recv failure: Connection reset by peer, i.e. the connection was dropped by the other side.
but inside the container:
root@43598d5d9e85:/usr/src/app# lsof -i
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
actix_003 6 root 3u IPv4 319026 0t0 TCP localhost:http-alt (LISTEN)
But running the program outside Docker works properly and handles the same curl request fine.
and inside the container:
root@43598d5d9e85:/usr/src/app# curl -i -X POST -F files[]=@i.jpg 127.0.0.1:8080/upload
HTTP/1.1 100 Continue
HTTP/1.1 201 Created
content-length: 70
content-type: application/json
date: Wed, 24 Jul 2019 08:00:54 GMT
{"keys":["uploaded/5nU1nHznvKRGbkQaWAGJKpLSG4nSAYfzCdgMxcx4U2mF.jpg"]}
What is the problem when accessing it from outside?

If you're like myself and followed the examples on the Actix website, you might have written something like this, or some variation thereof:
use actix_web::{web, App, HttpServer};

fn main() {
    HttpServer::new(|| {
        App::new()
            .route("/", web::get().to(index))
            .route("/again", web::get().to(index2))
    })
    .bind("127.0.0.1:8088")
    .unwrap()
    .run()
    .unwrap();
}
The issue here is that you're binding to a specific IP rather than using 0.0.0.0 to bind to all interfaces inside the container. I had the same issue as you and solved it by changing my code to:
use actix_web::{web, App, HttpServer};

fn main() {
    HttpServer::new(|| {
        App::new()
            .route("/", web::get().to(index))
            .route("/again", web::get().to(index2))
    })
    .bind("0.0.0.0:8088")
    .unwrap()
    .run()
    .unwrap();
}
This might not be the issue for you; I can't know without seeing the code you use to run the server.

To complete what John said, in my case I had to use a tuple: .bind( ("0.0.0.0", 8088) )
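The difference is easy to see with plain sockets: a listener bound to 127.0.0.1 only accepts connections arriving over the loopback interface, while 0.0.0.0 listens on every interface, including the one Docker's port mapping forwards to. A minimal sketch (plain Python sockets rather than actix, just to illustrate the two bind addresses):

```python
import socket

# Bound to loopback only: reachable from inside the container,
# but traffic forwarded by `docker run -p` arrives on eth0 and is refused.
loopback_only = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
loopback_only.bind(("127.0.0.1", 0))
loopback_only.listen()

# Bound to all interfaces: reachable through the port mapping.
all_interfaces = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
all_interfaces.bind(("0.0.0.0", 0))
all_interfaces.listen()

print(loopback_only.getsockname()[0])   # 127.0.0.1
print(all_interfaces.getsockname()[0])  # 0.0.0.0
```

The question's lsof output already shows the symptom: the server is listed as localhost:http-alt (LISTEN), i.e. bound to loopback only.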

Related

shell script having ssh command inside docker container fails with - ssh: not found

I have a Spring Boot Java application running in a Docker container, and it tries to run a shell script. The shell script has an ssh command, and I get the following error while running it:
2020-08-12 09:22:29.425 INFO 1 --- [io-11013-exec-1] b.n.i.s.d.e.service.EmrManagerService : Executing spark submit, calling shell script: /tmp/temp843155675494688636.sh 172.29.199.15
2020-08-12 09:22:29.434 DEBUG 1 --- [io-11013-exec-1] b.n.i.s.d.e.service.EmrManagerService : Starting Input Stream:
2020-08-12 09:22:29.435 INFO 1 --- [io-11013-exec-1] b.n.i.s.d.e.service.EmrManagerService : #1 arg: 172.29.199.15
2020-08-12 09:22:29.436 INFO 1 --- [io-11013-exec-1] b.n.i.s.d.e.service.EmrManagerService : Exit Value: 127
2020-08-12 09:22:29.436 ERROR 1 --- [io-11013-exec-1] b.n.i.s.d.e.service.EmrManagerService : Starting Error Stream:
2020-08-12 09:22:29.436 ERROR 1 --- [io-11013-exec-1] b.n.i.s.d.e.service.EmrManagerService :
/tmp/temp843155675494688636.sh: line 5: ssh: not found
The same code works fine when I run the jar directly rather than as a Docker container.
Does it have something to do with ssh not being available in the Docker container?
shell script -
#!/bin/bash
echo "#1 arg:" $1
ssh -i /home/dnaidaasd/aws-oneid-idaas-2020Q2.pem -oStrictHostKeyChecking=no hadoop@$1 '/etc/alternatives/jre/bin/java -Xmx1000m -server \
-XX:OnOutOfMemoryError="kill -9 %p" -cp "/usr/share/aws/emr/instance-controller/lib/*" \
-Dhadoop.log.dir=/mnt/var/log/hadoop/steps/s-100-120 \
-Dhadoop.log.file=syslog -Dhadoop.home.dir=/usr/lib/hadoop \
-Dhadoop.id.str=hadoop -Dhadoop.root.logger=INFO,DRFA -Djava.library.path=:/usr/lib/hadoop-lzo/lib/native:/usr/lib/hadoop/lib/native \
-Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true \
-Djava.io.tmpdir=/mnt/var/lib/hadoop/steps/s-14611-353/tmp \
-Dhadoop.security.logger=INFO,NullAppender \
-Dsun.net.inetaddr.ttl=30 \
org.apache.hadoop.util.RunJar /var/lib/aws/emr/step-runner/hadoop-jars/command-runner.jar spark-submit \
--conf spark.hadoop.mapred.output.compress=true \
--conf spark.hadoop.mapred.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec \
--class biz.neustar.idaas.services.dataprofile.ProfileMain \
--name IdaasProfile --conf spark.dynamicAllocation.enabled=true \
--conf spark.executor.instances=2 --conf spark.driver.memory=8G \
--conf spark.executor.memory=4G --conf spark.executor.cores=1 \
--conf spark.sql.catalogImplementation=hive \
--jars s3://oneid-idaas-dev-us-east-1/dev/emr/TestIdaasProfile/spark-core_2.11-2.4.5.jar,s3://oneid-idaas-dev-us-east-1/dev/emr/TestIdaasProfile/spark-sql_2.11-2.4.5.jar,s3://oneid-idaas-dev-us-east-1/dev/emr/TestIdaasProfile/spark-mllib_2.11-2.4.5.jar,s3://oneid-idaas-dev-us-east-1/dev/emr/TestIdaasProfile/jackson-module-scala_2.11-2.6.7.1.jar,s3://oneid-idaas-dev-us-east-1/dev/emr/TestIdaasProfile/jackson-databind-2.6.7.jar s3://oneid-idaas-dev-us-east-1/dev/emr/TestIdaasProfile/data-profile-14.0.jar' \
$2 $3 $4
This shell script is called as -
public void executeSparkSubmit(String masterNodeIp, String pathToScript, String input_hive_table, String s3_output_path, String output_hive_table ) throws IOException, InterruptedException, DataProfileServiceException {
log.info("Executing spark submit, calling shell script: " + pathToScript + " " + masterNodeIp);
ProcessBuilder pb = new ProcessBuilder("sh", pathToScript, masterNodeIp, input_hive_table, s3_output_path, output_hive_table);
Process pr = pb.start();
And the Dockerfile contents are:
FROM openjdk:8-jdk-alpine
ADD ./data-profile-provider/build/libs/data-profile-provider-203.2.0-SNAPSHOT.jar data-profile.jar
EXPOSE 11013
ENTRYPOINT ["java", "-jar", "data-profile.jar", "application.properties"]
As I suspected - your image is Alpine-based and Alpine does not have SSH client installed by default.
Corrected Dockerfile:
FROM openjdk:8-jdk-alpine
RUN apk add --no-cache openssh-client
ADD ./data-profile-provider/build/libs/data-profile-provider-203.2.0-SNAPSHOT.jar data-profile.jar
EXPOSE 11013
ENTRYPOINT ["java", "-jar", "data-profile.jar", "application.properties"]
Edit: I forgot to add that Alpine does not have Bash either. Luckily your app invokes the script with sh scriptname.sh; otherwise you'd get a bash: not found error.
SSH might not be installed.
My example here assumes an Ubuntu-derived Linux image, since you did not specify the Dockerfile contents at the time.
If your container can launch successfully (ignore the fact that your app is failing), you can simply run ssh on the command line to check; it will print something similar to command not found.
To run commands inside a Docker container: since an Ubuntu image has bash installed, you can run:
docker exec -ti containername bash
Inside the Docker container (one of my containers, with no SSH installed):
ssh
ssh: command not found
The base image you inherit from might not have the tool installed. Most base images are built with a 'bare minimum' in mind, so your custom Docker image needs to install it itself.
Here is the RUN command you can add to the Dockerfile; make sure your user is able to run it (in this example I made sure the image user is root). This installs only the ssh client, which is all that is required:
USER root
RUN apt-get update \
&& apt-get install -y openssh-client
USER mydockercontaineruser
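Since the script fails with exit value 127 (command not found) only at run time, it can also help to check for required binaries up front instead of deep inside a long-running job. A sketch, assuming the tools the script needs are sh and ssh:

```python
import shutil

# Binaries the shell script depends on; ssh is the one missing from Alpine.
required = ["sh", "ssh"]

# shutil.which() mirrors `command -v`: None means the tool is not on PATH.
missing = [cmd for cmd in required if shutil.which(cmd) is None]
if missing:
    print("missing tools:", ", ".join(missing))
else:
    print("all required tools present")
```

Running such a check at container startup turns a cryptic mid-job 127 into an immediate, readable error.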

How to flash a pixhawk from docker container?

I am taking my first steps in developing for the PX4 using Docker.
Therefore I extend the px4io/px4-dev-nuttx image into px4dev with some extra installations.
Dockerfile
FROM px4io/px4-dev-nuttx
RUN apt-get update && \
apt-get install -y \
python-serial \
openocd \
flex \
bison \
libncurses5-dev \
autoconf \
texinfo \
libftdi-dev \
libtool \
zlib1g-dev
RUN useradd -ms /bin/bash user
ADD ./Firmware /src/firmware/
RUN chown -R user:user /src/firmware/
Then I run the image/container:
docker run -it --privileged \
--env=LOCAL_USER_ID="$(id -u)" \
-v /dev/serial/by-id/usb-3D_Robotics_PX4_FMU_v2.x_0-if00:/dev/serial/by-id/usb-3D_Robotics_PX4_FMU_v2.x_0-if00:rw \
px4dev \
bash
I also tried:
--device=/dev/ttyACM0 \
--device=/dev/serial/by-id/usb-3D_Robotics_PX4_FMU_v2.x_0-if00 \
Then I switched to /src/firmware/ and built the code. But the upload always results in this error:
make px4fmu-v2_default upload
ninja: Entering directory `/src/firmware/build/nuttx_px4fmu-v2_default'
[0/1] uploading px4
Loaded firmware for board id: 9,0 size: 1028997 bytes (99.69%), waiting for the bootloader...
I use a Pixhawk 2.4.8; my host is Ubuntu 18.04 64-bit. Doing the same on the host works.
What is going wrong here? Could a reboot of the PX4 during flashing be causing the problem?
If it is generally not possible, what is the output file of the build and is it possible to upload this using QGroundControl?
Kind regards,
Alex
run script:
#!/bin/bash
docker run -it --rm --privileged \
--env=LOCAL_USER_ID="$(id -u)" \
--device=/dev/ttyACM0 \
--device=/dev/serial/by-id/usb-3D_Robotics_PX4_FMU_v2.x_0-if00 \
--name=dev01 \
px4dev \
bash
For some reason the upload sometimes ends differently:
user@7d6bd90821f9:/src/firmware$ make px4fmu-v2_default upload
...
[153/153] Linking CXX executable nuttx_px4io-v2_default.elf
[601/602] uploading /src/firmware/build/px4fmu-v2_default/px4fmu-v2_default.px4
Loaded firmware for 9,0, size: 1026517 bytes, waiting for the bootloader...
If the board does not respond within 1-2 seconds, unplug and re-plug the USB connector.
But even if I do so, it stays stuck here.
Regarding the default device, I grepped through the build folder:
user@7d6bd90821f9:/src/firmware$ grep -r "/dev/serial" ./build/
./build/px4fmu-v2_default/build.ninja: COMMAND = cd /src/firmware/build/px4fmu-v2_default && /usr/bin/python /src/firmware/Tools/px_uploader.py --port "/dev/serial/by-id/*_PX4_*,/dev/serial/by-id/usb-3D_Robotics*,/dev/serial/by-id/usb-The_Autopilot*,/dev/serial/by-id/usb-Bitcraze*,/dev/serial/by-id/pci-3D_Robotics*,/dev/serial/by-id/pci-Bitcraze*,/dev/serial/by-id/usb-Gumstix*" /src/firmware/build/px4fmu-v2_default/px4fmu-v2_default.px4
There is px_uploader.py --port "...,/dev/serial/by-id/usb-3D_Robotics*,...". So I would say it looks for /dev/serial/by-id/usb-3D_Robotics_PX4_FMU_v2.x_0-if00!
Looking with ls /dev/ inside the container for the available devices, neither /dev/ttyACM0 nor /dev/serial/by-id/usb-3D_Robotics_PX4_FMU_v2.x_0-if00 is listed. Here may be the problem: something is wrong with --device=...
But ls shows that /dev/usb/ is available. So I checked it with lsusb and the PX4 is listed next to the others:
user@3077c8b483f8:/$ lsusb
Bus 003 Device 018: ID 26ac:0011
Maybe the correct driver for this USB device is missing inside the container?
On my host I get the major:minor number 166:0:
user:~$ ll /dev/
crw-rw---- 1 root dialout 166, 0 Jan 2 00:40 ttyACM0
The folder /sys/dev/char/166:0 is identical on host and container as far as I can see. And in the container there seems to be a link to something with */tty/ttyACM0, like on the host:
user@3077c8b483f8:/$ ls -l /sys/dev/char/166\:0
lrwxrwxrwx 1 root root 0 Jan 1 23:44 /sys/dev/char/166:0 -> ../../devices/pci0000:00/0000:00:14.0/usb3/3-1/3-1.3/3-1.3.1/3-1.3.1.3/3-1.3.1.3:1.0/tty/ttyACM0
At the host I got this information about the devices - but this is missing inside the container:
user:~$ ls -l /dev/ttyACM0
crw-rw---- 1 root dialout 166, 0 Jan 2 00:40 ttyACM0
user:~$ ls -l /dev/serial/by-id/
total 0
lrwxrwxrwx 1 root root 13 Jan 2 00:40 usb-3D_Robotics_PX4_FMU_v2.x_0-if00 -> ../../ttyACM0
Following this post, I changed my run script to (without the privileged flag):
#!/bin/bash
DEV1='/dev/serial/by-id/usb-3D_Robotics_PX4_FMU_v2.x_0-if00'
docker run \
-it \
--rm \
--env=LOCAL_USER_ID=0 \
--device=/dev/ttyACM0 \
--device=$DEV1 \
-v ${PWD}/Firmware:/opt/Firmware \
px4dev_nuttx \
bash
Then I see the devices, but they are not accessible.
root@586fa4570d1c:/# setserial /dev/ttyACM0
/dev/ttyACM0, UART: unknown, Port: 0x0000, IRQ: 0
root@586fa4570d1c:/# setserial /dev/serial/by-id/usb-3D_Robotics_PX4_FMU_v2.x_0-if00
/dev/serial/by-id/usb-3D_Robotics_PX4_FMU_v2.x_0-if00, UART: unknown, Port: 0x0000, IRQ: 0
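As a way to compare device nodes between host and container, you can read the major:minor numbers programmatically, like the 166:0 seen above for ttyACM0. A sketch (it uses /dev/null, which is 1:3 on Linux, since /dev/ttyACM0 exists only with the board plugged in):

```python
import os

def devnum(path):
    """Return the (major, minor) numbers of a character/block device node."""
    st = os.stat(path)
    return os.major(st.st_rdev), os.minor(st.st_rdev)

# /dev/null is major 1, minor 3 on Linux. With the board attached you
# would compare devnum("/dev/ttyACM0") on the host and in the container,
# expecting (166, 0) on both if --device passed the node through correctly.
print(devnum("/dev/null"))
```

If the numbers match but the open still fails, the problem is permissions (e.g. the dialout group) rather than the device mapping itself.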

How to setup local syslog-ng in docker container of CentOS 7?

I want to isolate a testing environment in Docker. I did that on CentOS 6: How to let syslog workable in docker?
In CentOS 7, syslog-ng's configuration is different. When I run
/usr/sbin/syslog-ng -F -p /var/run/syslogd.pid
it prints the following error message, although there is no proc/kmsg in the config files.
syslog-ng: Error setting capabilities, capability management disabled; error='Operation not permitted'
Error opening file for reading; filename='/proc/kmsg', error='Operation not permitted (1)'
The Dockerfile
FROM centos
RUN yum update --exclude=systemd -y \
&& yum install -y yum-plugin-ovl \
&& yum install -y epel-release
RUN yum install -y syslog-ng syslog-ng-libdbi
The test process:
docker build -t t1 .
docker run --rm -i -t t1 /bin/bash
In the container, run the following commands:
# check config, no keyword like proc/kmsg
cd /etc/syslog-ng
grep -r -E 'proc|kmsg'
/usr/sbin/syslog-ng -F -p /var/run/syslogd.pid
Change /etc/syslog-ng/syslog-ng.conf from
source s_sys {
system();
internal();
};
to
source s_sys {
unix-stream("/dev/log");
internal();
};
It still shows an error message, but it keeps running instead of exiting:
syslog-ng: Error setting capabilities, capability management disabled; error='Operation not permitted'
To solve this, just run with the --no-caps option:
/usr/sbin/syslog-ng --no-caps -F -p /var/run/syslogd.pid
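For background: the system() source fails because /proc/kmsg and capability management are unavailable in an unprivileged container, while /dev/log is just a Unix-domain socket that needs no special privileges. A minimal sketch of stream traffic over such a socket (using a temporary path rather than the real /dev/log):

```python
import os
import socket
import tempfile

# A throwaway Unix-domain stream socket, standing in for /dev/log,
# which syslog-ng's unix-stream() source listens on.
path = os.path.join(tempfile.mkdtemp(), "log.sock")
server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(path)
server.listen()

# A client (the logging application) connects and writes a message.
client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
client.connect(path)
client.sendall(b"<13>test: hello from the container\n")

# The server (the syslog daemon) accepts and reads it.
conn, _ = server.accept()
print(conn.recv(1024).decode())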

Run dbus-daemon inside Docker container

I am trying to create a Docker container with a custom D-Bus bus running inside.
I configured my Dockerfile as follow:
FROM ubuntu:16.04
COPY myCustomDbus.conf /etc/dbus-1/
RUN apt-get update && apt-get install -y dbus
RUN dbus-daemon --config-file=/etc/dbus-1/myCustomDbus.conf
After building, the socket is created but it is flagged as "file", not as "socket", and I can not use it as a bus...
-rwxrwxrwx 1 root root 0 Mar 20 07:25 myCustomDbus.sock
If I remove this file and run the dbus-daemon command again in a terminal, the socket is successfully created :
srwxrwxrwx 1 root root 0 Mar 20 07:35 myCustomDbus.sock
I am not sure if it is a D-Bus problem or a docker one.
Instead of using the "RUN" command, you should use the "ENTRYPOINT" one to run a startup script.
The Dockerfile should look like that :
FROM ubuntu:14.04
COPY myCustomDbus.conf /etc/dbus-1/
COPY run.sh /etc/init/
RUN apt-get update && apt-get install -y dbus
ENTRYPOINT ["/etc/init/run.sh"]
And run.sh :
#!/bin/bash
dbus-daemon --config-file=/etc/dbus-1/myCustomDbus.conf --print-address
You should use a startup script. The "run" command is executed only when the container is created and then stopped.
my run.sh:
if ! pgrep -x "dbus-daemon" > /dev/null
then
# export DBUS_SESSION_BUS_ADDRESS=$(dbus-daemon --config-file=/usr/share/dbus-1/system.conf --print-address | cut -d, -f1)
# or:
dbus-daemon --config-file=/usr/share/dbus-1/system.conf
# and put in Dockerfile:
# ENV DBUS_SESSION_BUS_ADDRESS="unix:path=/var/run/dbus/system_bus_socket"
else
echo "dbus-daemon already running"
fi
if ! pgrep -x "/usr/lib/upower/upowerd" > /dev/null
then
/usr/lib/upower/upowerd &
else
echo "upowerd already running"
fi
then chrome runs with
--use-gl=swiftshader
without errors

Docker doesn't seem to be mapping ports

I'm working with Hugo
Trying to run inside a Docker container to allow people to easily manage content.
My first task is to get Hugo running and people able to view the site locally.
Here's my Dockerfile:
FROM alpine:3.3
RUN apk update && apk upgrade && \
apk add --no-cache go bash git openssh && \
mkdir -p /aws && \
apk -Uuv add groff less python py-pip && \
pip install awscli && \
apk --purge -v del py-pip && \
rm /var/cache/apk/* && \
mkdir -p /go/src /go/bin && chmod -R 777 /go
ENV GOPATH /go
ENV PATH /go/bin:$PATH
RUN go get -v github.com/spf13/hugo
RUN git clone http://mygitrepo.com /app
WORKDIR /app
EXPOSE 1313
ENTRYPOINT ["hugo","server"]
I'm checking out the site repo then running Hugo - hugo server
I'm then running this container via:
docker run -d -p 1313:1313 --name app app
Which reports everything is starting OK however when I try to browse locally on localhost:1313 I see nothing.
Any ideas where I'm going wrong?
UPDATE
docker ps gives me:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9e1f12849044 app "hugo server" 16 minutes ago Up 16 minutes 0.0.0.0:1313->1313/tcp app
And docker logs 9e1 gives me:
Started building sites ...
Built site for language en:
0 draft content
0 future content
0 expired content
25 pages created
0 non-page files copied
0 paginator pages created
0 tags created
0 categories created
total in 64 ms
Watching for changes in /ltec/{data,content,layouts,static,themes}
Serving pages from memory
Web Server is available at http://localhost:1313/ (bind address 127.0.0.1)
Press Ctrl+C to stop
I had the same problem, but following this tutorial http://ahmedalani.com/post/so-recursive-it-hurts/, says about to use the param --bind from hugo server command.
Adding that param mentioned, and the ip 0.0.0.0 we have --bind=0.0.0.0
It works to me, I think this is a natural behavior from every container taking a localhost for self scope, but if you bind with 0.0.0.0 takes a visible scope to the main host.
This is because Docker is actually running in a VM. You need to navigate to the docker-machine ip instead of localhost.
curl $(docker-machine ip):1313
Delete EXPOSE 1313 in your Dockerfile. Dockerfile reference.

Resources