How to flash a Pixhawk from a Docker container?

I am taking my first steps in developing for the PX4 using Docker.
To that end I extend the px4io/px4-dev-nuttx image into px4dev with some extra packages.
Dockerfile
FROM px4io/px4-dev-nuttx
RUN apt-get update && \
    apt-get install -y \
        python-serial \
        openocd \
        flex \
        bison \
        libncurses5-dev \
        autoconf \
        texinfo \
        libftdi-dev \
        libtool \
        zlib1g-dev
RUN useradd -ms /bin/bash user
ADD ./Firmware /src/firmware/
RUN chown -R user:user /src/firmware/
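For completeness, the extended image is presumably built with something like this (the tag matches the run commands below):
docker build -t px4dev .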
Then I run the image/container:
docker run -it --privileged \
    --env=LOCAL_USER_ID="$(id -u)" \
    -v /dev/serial/by-id/usb-3D_Robotics_PX4_FMU_v2.x_0-if00:/dev/serial/by-id/usb-3D_Robotics_PX4_FMU_v2.x_0-if00:rw \
    px4dev \
    bash
I also tried:
--device=/dev/ttyACM0 \
--device=/dev/serial/by-id/usb-3D_Robotics_PX4_FMU_v2.x_0-if00 \
Then I switched to /src/firmware/ and built the code. But the upload always ends like this:
make px4fmu-v2_default upload
ninja: Entering directory `/src/firmware/build/nuttx_px4fmu-v2_default'
[0/1] uploading px4
Loaded firmware for board id: 9,0 size: 1028997 bytes (99.69%), waiting for the bootloader...
I use a Pixhawk 2.4.8, and my host is Ubuntu 18.04 64-bit. Doing the same directly on the host works.
What is going wrong here? Could a reboot of the PX4 during flashing be causing the problem?
If this is generally not possible, what is the output file of the build, and could I upload it using QGroundControl instead?
Kind regards,
Alex
run script:
#!/bin/bash
docker run -it --rm --privileged \
    --env=LOCAL_USER_ID="$(id -u)" \
    --device=/dev/ttyACM0 \
    --device=/dev/serial/by-id/usb-3D_Robotics_PX4_FMU_v2.x_0-if00 \
    --name=dev01 \
    px4dev \
    bash
For some reason, the upload sometimes ends differently:
user@7d6bd90821f9:/src/firmware$ make px4fmu-v2_default upload
...
[153/153] Linking CXX executable nuttx_px4io-v2_default.elf
[601/602] uploading /src/firmware/build/px4fmu-v2_default/px4fmu-v2_default.px4
Loaded firmware for 9,0, size: 1026517 bytes, waiting for the bootloader...
If the board does not respond within 1-2 seconds, unplug and re-plug the USB connector.
But even if I do so, it gets stuck here.
Regarding the default device, I grepped through the build folder:
user@7d6bd90821f9:/src/firmware$ grep -r "/dev/serial" ./build/
./build/px4fmu-v2_default/build.ninja: COMMAND = cd /src/firmware/build/px4fmu-v2_default && /usr/bin/python /src/firmware/Tools/px_uploader.py --port "/dev/serial/by-id/*_PX4_*,/dev/serial/by-id/usb-3D_Robotics*,/dev/serial/by-id/usb-The_Autopilot*,/dev/serial/by-id/usb-Bitcraze*,/dev/serial/by-id/pci-3D_Robotics*,/dev/serial/by-id/pci-Bitcraze*,/dev/serial/by-id/usb-Gumstix*" /src/firmware/build/px4fmu-v2_default/px4fmu-v2_default.px4
There is px_uploader.py --port "...,/dev/serial/by-id/usb-3D_Robotics*,...", so I would say it does look for /dev/serial/by-id/usb-3D_Robotics_PX4_FMU_v2.x_0-if00.
Listing the available devices with ls /dev/ inside the container, neither /dev/ttyACM0 nor /dev/serial/by-id/usb-3D_Robotics_PX4_FMU_v2.x_0-if00 shows up. This may be the problem: something is wrong with --device=...
But ls shows that /dev/usb/ is available, so I checked with lsusb, and the PX4 is listed next to the other devices:
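As a quick sanity check (a generic sketch, nothing PX4-specific), the result of --device can be compared between host and container:
# inside the container
ls -l /dev/ttyACM0 /dev/serial/by-id/ 2>&1
# on the host, for comparison
ls -l /dev/ttyACM0 /dev/serial/by-id/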
user@3077c8b483f8:/$ lsusb
Bus 003 Device 018: ID 26ac:0011
Maybe the correct driver for this USB device is missing inside the container?
On my host the device has major:minor number 166:0:
user:~$ ll /dev/
crw-rw---- 1 root dialout 166, 0 Jan 2 00:40 ttyACM0
The folder /sys/dev/char/166:0 is identical on host and container as far as I can see. In the container it is a symlink ending in .../tty/ttyACM0, just like on the host:
user@3077c8b483f8:/$ ls -l /sys/dev/char/166\:0
lrwxrwxrwx 1 root root 0 Jan 1 23:44 /sys/dev/char/166:0 -> ../../devices/pci0000:00/0000:00:14.0/usb3/3-1/3-1.3/3-1.3.1/3-1.3.1.3/3-1.3.1.3:1.0/tty/ttyACM0
On the host I get this information about the devices, but it is missing inside the container:
user:~$ ls -l /dev/ttyACM0
crw-rw---- 1 root dialout 166, 0 Jan 2 00:40 ttyACM0
user:~$ ls -l /dev/serial/by-id/
total 0
lrwxrwxrwx 1 root root 13 Jan 2 00:40 usb-3D_Robotics_PX4_FMU_v2.x_0-if00 -> ../../ttyACM0
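In other words, the by-id entry is just a udev-managed symlink on the host, which is why resolving it gives the real node that --device would have to pass (a quick check, assuming the same device as above):
readlink -f /dev/serial/by-id/usb-3D_Robotics_PX4_FMU_v2.x_0-if00
# -> /dev/ttyACM0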
Following this post, I changed my run script to the following (without the --privileged flag):
#!/bin/bash
DEV1='/dev/serial/by-id/usb-3D_Robotics_PX4_FMU_v2.x_0-if00'
docker run \
    -it \
    --rm \
    --env=LOCAL_USER_ID=0 \
    --device=/dev/ttyACM0 \
    --device=$DEV1 \
    -v ${PWD}/Firmware:/opt/Firmware \
    px4dev_nuttx \
    bash
Then I see the devices, but they are not accessible:
root@586fa4570d1c:/# setserial /dev/ttyACM0
/dev/ttyACM0, UART: unknown, Port: 0x0000, IRQ: 0
root@586fa4570d1c:/# setserial /dev/serial/by-id/usb-3D_Robotics_PX4_FMU_v2.x_0-if00
/dev/serial/by-id/usb-3D_Robotics_PX4_FMU_v2.x_0-if00, UART: unknown, Port: 0x0000, IRQ: 0
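One workaround consistent with these findings (a sketch, not verified on this exact setup): /dev/serial/by-id/ is populated by udev, which does not run inside the container, so either recreate the symlink the uploader's glob expects, or share the host's /dev tree. The latter is also more robust against the Pixhawk re-enumerating when it drops into the bootloader during flashing.
#!/bin/bash
# Sketch only; paths and image name taken from the scripts above.
LINK=/dev/serial/by-id/usb-3D_Robotics_PX4_FMU_v2.x_0-if00
REAL=$(readlink -f "$LINK")        # typically /dev/ttyACM0

# Variant A: pass the real node, then recreate the symlink inside the
# container (as root) so px_uploader.py's /dev/serial/by-id/usb-3D_Robotics*
# glob matches:  mkdir -p /dev/serial/by-id && ln -sf "$REAL" "$LINK"
docker run -it --rm \
    --env=LOCAL_USER_ID="$(id -u)" \
    --device="$REAL" \
    -v ${PWD}/Firmware:/opt/Firmware \
    px4dev_nuttx \
    bash

# Variant B: share the whole /dev tree (privileged), which also keeps the
# device visible after the board re-enumerates while flashing.
# docker run -it --rm --privileged \
#     --env=LOCAL_USER_ID="$(id -u)" \
#     -v /dev:/dev \
#     -v ${PWD}/Firmware:/opt/Firmware \
#     px4dev_nuttx \
#     bash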

Related

Passwordless SSH from GitLab CI to Remote Server

Just recently I stumbled on an SSH issue, and I cannot figure out what is missing. We use GitLab CI to build and deploy the project to one of our remote servers. As part of the upgrade plan, we need to replace the degrading Debian 6 server with a new RHEL 7 server. I cannot get passwordless SSH to work from the GitLab Runner to the remote machine.
I created a reproducible example in a Dockerfile, the IP of the remote server and the user is replaced with non-sensitive data.
FROM centos:7
RUN yum install -y epel-release
RUN yum update -y
RUN yum install -y openssh-clients
RUN useradd -m joe
RUN mkdir -p /home/joe/.ssh
COPY id_rsa_shared /home/joe/.ssh/id_rsa
RUN echo "Host *\n\tStrictHostKeyChecking no\n" >> /home/joe/.ssh/config
RUN ssh-keyscan 10.x.x.x >> /home/joe/.ssh/known_hosts
RUN chown -R joe:joe /home/joe/.ssh
USER joe
CMD ["/bin/bash"]
The file id_rsa_shared is created on local machine with the following command:
ssh-keygen -t rsa -b 2048 -f ./id_rsa_shared
ssh-copy-id -i ./id_rsa_shared joe@10.x.x.x
This works locally. A simple ssh joe@10.x.x.x uname -a in the Docker container outputs the following:
Linux newweb01p.company.local 3.10.0-1160.25.1.el7.x86_64 #1 SMP Tue Apr 13 18:55:45 EDT 2021 x86_64 x86_64 x86_64 GNU/Linux
However, if I commit this to a branch as GitLab CI script, as shown:
image: centos:7
stages:
  - deploy
dev-www:
  stage: deploy
  tags:
    - docker
  environment:
    name: dev-www
    url: http://dev-www.company.local
  variables:
    DEV_HOST: 10.x.x.x
    APP_ENV: dev
    DEV_USER: joe
  script:
    - whoami
    - yum install -y epel-release
    - yum update -y
    - yum install -y openssh-clients
    - useradd -m joe
    - mkdir -p /home/joe/.ssh
    - cp "./gitlab/known_hosts" /home/joe/.ssh/known_hosts
    - echo "$DEV_USER_OPENSSH_KEY" >> /home/joe/.ssh/id_rsa
    - echo "Host *\n\tStrictHostKeyChecking no\n" >> /home/joe/.ssh/config
    - chown -R joe:joe /home/joe/.ssh/
    - chmod 600 /home/joe/.ssh/*
    - chmod 700 /home/joe/.ssh
    - ls -Fsal /home/joe/.ssh
    - su - joe
    - ssh -oStrictHostKeyChecking=no "${DEV_USER}@${DEV_HOST}" uname -a
  when: manual
The pipeline will fail authentication as shown:
Running with gitlab-runner 13.12.0 (7a6612da)
on docker.hqgitrunner01d.company.local K47w1s77
Preparing the "docker" executor
Using Docker executor with image centos:7 ...
Authenticating with credentials from job payload (GitLab Registry)
Pulling docker image centos:7 ...
Using docker image sha256:xxx for centos:7 with digest centos:7@sha256:xxxx ...
Preparing environment
Running on runner-k47w1s77-project-93-concurrent-0 via hqgitrunner01d.company.local...
Getting source from Git repository
Fetching changes...
Reinitialized existing Git repository in /builds/webversion3/API/.git/
Checking out 6a7c193b as tdr/psr4-composer...
Updating/initializing submodules recursively...
Executing "step_script" stage of the job script
Using docker image sha256:xxx for centos:7 with digest centos:7@sha256:xxx ...
$ whoami
root
$ useradd -m joe
$ mkdir -p /home/joe/.ssh
$ cp "./gitlab/known_hosts" /home/joe/.ssh/known_hosts
$ echo "$DEV_USER_OPENSSH_KEY" >> /home/joe/.ssh/id_rsa
$ echo "Host *\n\tStrictHostKeyChecking no\n" >> /home/joe/.ssh/config
$ chown -R joe:joe /home/joe/.ssh/*
$ chmod 600 /home/joe/.ssh/*
$ chmod 700 /home/joe/.ssh
$ ls -Fsal /home/joe/.ssh
total 16
0 drwx------ 2 root root 53 Apr 1 15:19 ./
0 drwx------ 3 joe joe 74 Apr 1 15:19 ../
4 -rw------- 1 joe joe 37 Apr 1 15:19 config
4 -rw------- 1 joe joe 3414 Apr 1 15:19 id_rsa
8 -rw------- 1 joe joe 6241 Apr 1 15:19 known_hosts
$ su - joe
$ ssh -oStrictHostKeyChecking=no "${DEV_USER}@${DEV_HOST}" uname -a
Warning: Permanently added '10.x.x.x' (ECDSA) to the list of known hosts.
Permission denied, please try again.
Permission denied, please try again.
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
Cleaning up file based variables
ERROR: Job failed: exit code 1
Maybe there’s a step I missed because I get a ‘Permission denied, please try again’ message. How do I get Docker Executor to use passwordless SSH to a remote server?
The solution turned out to be really simple and straightforward; the important part is understanding SSH.
Here is a snippet from the .gitlab-ci.yml for those who have the same problem as I did:
...
- mkdir -p ~/.ssh
- touch ~/.ssh/id_rsa ~/.ssh/config ~/.ssh/known_hosts
- chmod 600 ~/.ssh/id_rsa ~/.ssh/config ~/.ssh/known_hosts
- echo "$OPENSSH_KEY" >> ~/.ssh/id_rsa
- echo "Host *\n\tStrictHostKeyChecking no" >> ~/.ssh/config
- ssh-keyscan ${DEV_HOST} >> ~/.ssh/known_hosts
Just inline all your ssh options. Use -i to specify your key file. You can also use -o UserKnownHostsFile to specify your known hosts file -- you don't need to copy all of that into an ssh configuration.
This should be enough to ssh successfully:
# ...
- echo "$DEV_USER_OPENSSH_KEY" > "${CI_PROJECT_DIR}/id_rsa.key"
- chmod 600 "${CI_PROJECT_DIR}/id_rsa.key"
- |
  ssh -i "${CI_PROJECT_DIR}/id_rsa.key" \
      -o IdentitiesOnly=yes \
      -o UserKnownHostsFile="${CI_PROJECT_DIR}/gitlab/known_hosts" \
      -o StrictHostKeyChecking=no \
      user@host ...
Also, since you're disabling StrictHostKeyChecking, you can just use /dev/null as your UserKnownHostsFile. If you do want host key checking, omit the StrictHostKeyChecking=no option.
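If the job still cannot authenticate, a verbose run from the same script step usually shows which identity file is offered and why the server rejects it (a sketch reusing the variables and key file from above):
ssh -v \
    -i "${CI_PROJECT_DIR}/id_rsa.key" \
    -o IdentitiesOnly=yes \
    -o UserKnownHostsFile=/dev/null \
    -o StrictHostKeyChecking=no \
    "${DEV_USER}@${DEV_HOST}" uname -a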

docker can't run vscodium

Mine is a bit of a peculiar situation: I created a Dockerfile that "works", if not for some problems.
Here is a "working" version:
ARG IMGVERS=latest
FROM bensuperpc/tinycore:${IMGVERS}
LABEL maintainer "Vinnie Costante <****@gmail.com>"
ARG DOWNDIR=/tmp/download
ARG INSTDIR=/opt/vscodium
ARG REPOAPI="https://api.github.com/repos/VSCodium/vscodium/releases/latest"
ENV LANG=C.UTF-8 LC_ALL=C PATH="${PATH}:${INSTDIR}/bin/"
RUN tce-load -wic Xlibs nss gtk3 libasound libcups python3.9 tk8.6 \
&& rm -rf /tmp/tce/optional/*
RUN sudo ln -s /lib /lib64 \
&& sudo ln -s /usr/local/etc/fonts /etc/fonts \
&& sudo mkdir -p ${DOWNDIR} ${INSTDIR} \
&& sudo chown -R tc:staff ${DOWNDIR} ${INSTDIR}
#COPY VSCodium-linux-x64-1.57.1.tar.gz ${DOWNDIR}/
RUN wget http://192.168.43.6:8000/VSCodium-linux-x64-1.57.1.tar.gz -P ${DOWNDIR}
RUN tar xvf ${DOWNDIR}/VSCodium*.gz -C ${INSTDIR} \
&& rm -rf ${DOWNDIR}
CMD ["codium"]
The issues are these:
Starting the image with the command below, VSCodium does not start; but if I enter the shell (adding /bin/ash to the end of the docker run) and then run codium manually, VSCodium starts. I tried many ways, even changing the entrypoint, and the result is always the same. But if I add any other graphical program (like Firefox) and make it the argument of the CMD instruction inside the Dockerfile, everything works as it should.
docker run -it --rm \
    --net=host \
    --env="DISPLAY=unix${DISPLAY}" \
    --workdir /home/tc \
    --volume="$HOME/.Xauthority:/root/.Xauthority:rw" \
    --name tc \
    tinycodium
The last two versions of codium (1.58.0 and 1.58.1) don't work at all in Docker, but they start normally on the same distro when not containerized. I tried installing other dependencies, but nothing helped. Right now I don't know how to figure out what is wrong with these two new versions.
I don't know how to set up a volume to save codium's data. I tried something like --volume=/home/vinnie/docker:/home/tc, but there are always problems with user/group permissions. I've also tried starting the container as a regular user by adding it to the docker group, but there is always a mess with permissions. If someone could explain to me how to proceed, the directories I want to save are these (see the sketch after this list):
/home/tc/.vscode-oss
/home/tc/.cache/mesa_shader_cache
/home/tc/.config/VSCodium
/home/tc/.config/glib-2.0/settings
/home/tc/.local/share
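A minimal sketch for the volume question, assuming the image's tc user can be overridden with --user and that pre-created host directories are acceptable; extend the same pattern to the other directories listed above:
# Pre-create the host directories so Docker does not create them as root,
# then run the container with your own UID/GID so the bind mounts stay writable.
mkdir -p "$HOME/codium-data/config" "$HOME/codium-data/vscode-oss"
docker run -it --rm \
    --net=host \
    --env="DISPLAY=unix${DISPLAY}" \
    --user "$(id -u):$(id -g)" \
    --volume="$HOME/.Xauthority:/home/tc/.Xauthority:rw" \
    --volume="$HOME/codium-data/config:/home/tc/.config/VSCodium:rw" \
    --volume="$HOME/codium-data/vscode-oss:/home/tc/.vscode-oss:rw" \
    --workdir /home/tc \
    tinycodium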
Try running codium --verbose and see if the container starts
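For example (a sketch; --verbose is a standard VS Code/VSCodium CLI flag), overriding the image's CMD so the output stays visible in the terminal:
docker run -it --rm \
    --net=host \
    --env="DISPLAY=unix${DISPLAY}" \
    --volume="$HOME/.Xauthority:/root/.Xauthority:rw" \
    tinycodium codium --verbose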

Docker BuildKit build with tmpfs mount fails the 2nd time around

This works:
# note: cache .cache/go-build across docker builds
RUN --mount=type=tmpfs,target=/home/myuser/.cache \
    pacman -S --needed --noconfirm go && \
    su - myuser -c " \
        my GO build goes here" && \
    pacman -Rcsn --noconfirm go
in the sense that the non-root user myuser is able to write to the tmpfs mount and the Go build completes successfully.
But if I prepend another instruction to the Dockerfile that mounts the same tmpfs, such as
RUN --mount=type=tmpfs,target=/home/myuser/.cache \
    ls -Al
then, surprisingly, the Go build fails with:
#18 42.37 failed to initialize build cache at /home/myuser/.cache/go-build: mkdir /home/myuser/.cache/go-build: permission denied
It appears as though the tmpfs mount does not have the proper permissions the second time it is mounted. Has anyone experienced the same? Is it a bug?
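Not a confirmed fix, but one hedged workaround to try: since the RUN step itself executes as root (no USER switch is shown), the mount target can be chown'd to myuser at the start of the same instruction:
RUN --mount=type=tmpfs,target=/home/myuser/.cache \
    chown myuser:myuser /home/myuser/.cache && \
    pacman -S --needed --noconfirm go && \
    su - myuser -c " \
        my GO build goes here" && \
    pacman -Rcsn --noconfirm go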

Rust actix_web inside Docker isn't reachable, why?

I'm trying to make a Docker container of my Rust program; let's look.
Dockerfile
FROM debian
RUN apt-get update && \
apt-get -y upgrade && \
apt-get -y install git curl g++ build-essential
RUN curl https://sh.rustup.rs -sSf | bash -s -- -y
WORKDIR /usr/src/app
RUN git clone https://github.com/unegare/rust-actix-rest.git
RUN ["/bin/bash", "-c", "source $HOME/.cargo/env; cd ./rust-actix-rest/; cargo build --release; mkdir uploaded"]
EXPOSE 8080
ENTRYPOINT ["/bin/bash", "-c", "echo 'Hello there!'; source $HOME/.cargo/env; cd ./rust-actix-rest/; cargo run --release"]
cmd to run: docker run -it -p 8080:8080 rust_rest_api/dev
But curl from outside, curl -i -X POST -F files[]=@img.png 127.0.0.1:8080/upload, results in curl: (56) Recv failure: Connection reset by peer, i.e. the connection is dropped by the other side.
but inside the container:
root@43598d5d9e85:/usr/src/app# lsof -i
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
actix_003 6 root 3u IPv4 319026 0t0 TCP localhost:http-alt (LISTEN)
But running the program without Docker works properly and handles the same curl request correctly.
and inside the container:
root@43598d5d9e85:/usr/src/app# curl -i -X POST -F files[]=@i.jpg 127.0.0.1:8080/upload
HTTP/1.1 100 Continue
HTTP/1.1 201 Created
content-length: 70
content-type: application/json
date: Wed, 24 Jul 2019 08:00:54 GMT
{"keys":["uploaded/5nU1nHznvKRGbkQaWAGJKpLSG4nSAYfzCdgMxcx4U2mF.jpg"]}
What is the problem when connecting from outside?
If you're like myself and followed the examples on the Actix website, you might have written something like this, or some variation thereof:
fn main() {
    HttpServer::new(|| {
        App::new()
            .route("/", web::get().to(index))
            .route("/again", web::get().to(index2))
    })
    .bind("127.0.0.1:8088")
    .unwrap()
    .run()
    .unwrap();
}
The issue here is that you're binding to a specific IP rather than using 0.0.0.0 to bind to all interfaces inside the container. I had the same issue as you and solved it by changing my code to:
fn main() {
    HttpServer::new(|| {
        App::new()
            .route("/", web::get().to(index))
            .route("/again", web::get().to(index2))
    })
    .bind("0.0.0.0:8088")
    .unwrap()
    .run()
    .unwrap();
}
This might not be the issue for you; I can't know without seeing the code that runs the server.
To complete what John said, in my case I had to use a tuple: .bind( ("0.0.0.0", 8088) )
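A quick way to confirm the new bind from both sides (a sketch using the same tools shown in the question):
# Inside the container: the listener should now show *:http-alt instead of
# localhost:http-alt.
lsof -i :8080
# From the host, through the published port (-p 8080:8080); any route will do
# for a reachability check.
curl -i 127.0.0.1:8080/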

CentOS 6 Docker build using livemedia-creator is failing

I am trying to build a Docker base image using livemedia-creator on CentOS 7.5 with the latest patches installed, and it is failing. Below is the error I am getting.
# livemedia-creator --make-tar --no-virt --iso=CentOS-6.10-x86_64-netinstall.iso --ks=centos-6.ks --image-name=centos-root.tar.xz
Starting package installation process
The installation was stopped due to incomplete spokes detected while running in non-interactive cmdline mode. Since there cannot be any questions in cmdline mode, edit your kickstart file and retry installation.
The exact error message is:
CmdlineError: Missing package: firewalld.
The installer will now terminate.
The kickstart file I am using is below:
url --url="http://mirrors.kernel.org/centos/6.9/os/x86_64/"
install
keyboard us
lang en_US.UTF-8
rootpw --lock --iscrypted locked
authconfig --enableshadow --passalgo=sha512
timezone --isUtc Etc/UTC
selinux --enforcing
#firewall --disabled
firewall --disable
network --bootproto=dhcp --device=eth0 --activate --onboot=on
reboot
bootloader --location=none
# Repositories to use
repo --name="CentOS" --baseurl=http://mirror.centos.org/centos/6.9/os/x86_64/ --cost=100
repo --name="Updates" --baseurl=http://mirror.centos.org/centos/6.9/updates/x86_64/ --cost=100
# Disk setup
zerombr
clearpart --all
part / --size 3000 --fstype ext4
%packages --excludedocs --nobase --nocore
vim-minimal
yum
bash
bind-utils
centos-release
shadow-utils
findutils
iputils
iproute
grub
-*-firmware
passwd
rootfiles
util-linux-ng
yum-plugin-ovl
%end
%post --log=/tmp/anaconda-post.log
# Post configure tasks for Docker
# remove stuff we don't need that anaconda insists on
# kernel needs to be removed by rpm, because of grubby
rpm -e kernel
yum -y remove dhclient dhcp-libs dracut grubby kmod grub2 centos-logos \
hwdata os-prober gettext* bind-license freetype kmod-libs dracut
yum -y remove dbus-glib dbus-python ebtables \
gobject-introspection libselinux-python pygobject3-base \
python-decorator python-slip python-slip-dbus kpartx kernel-firmware \
device-mapper* e2fsprogs-libs sysvinit-tools kbd-misc libss upstart
#clean up unused directories
rm -rf /boot
rm -rf /etc/firewalld
# Randomize root's password and lock
dd if=/dev/urandom count=50 | md5sum | passwd --stdin root
passwd -l root
#LANG="en_US"
#echo "%_install_lang $LANG" > /etc/rpm/macros.image-language-conf
awk '(NF==0&&!done){print "override_install_langs='$LANG'\ntsflags=nodocs";done=1}{print}' \
< /etc/yum.conf > /etc/yum.conf.new
mv /etc/yum.conf.new /etc/yum.conf
echo 'container' > /etc/yum/vars/infra
rm -f /usr/lib/locale/locale-archive
#Setup locale properly
localedef -v -c -i en_US -f UTF-8 en_US.UTF-8
#disable services
for serv in `/sbin/chkconfig|cut -f1`; do /sbin/chkconfig "$serv" off; done;
mv /etc/rc1.d/S26udev-post /etc/rc1.d/K26udev-post
rm -rf /var/cache/yum/*
rm -f /tmp/ks-script*
rm -rf /etc/sysconfig/network-scripts/ifcfg-*
#Generate installtime file record
/bin/date +%Y%m%d_%H%M > /etc/BUILDTIME
%end
I am not able to figure out where firewalld is being pulled in from. Any thoughts on how to fix this issue?
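A hedged way to narrow this down (not a confirmed fix): validate the kickstart with pykickstart's ksvalidator, and retry the build once with the active firewall line commented out, to see whether anaconda's firewall handling is what drags firewalld in.
ksvalidator centos-6.ks                      # from the pykickstart package
sed -i.bak 's/^firewall /#firewall /' centos-6.ks
livemedia-creator --make-tar --no-virt \
    --iso=CentOS-6.10-x86_64-netinstall.iso \
    --ks=centos-6.ks --image-name=centos-root.tar.xz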
