I'm developing a Java program for the Raspberry Pi which works fine on a real Pi. Now I want to add functionality that is not directly related to GPIO, and I don't want to carry the Raspi around with my development notebook. My idea:
pull the working image from the Raspi's memory card
create a docker container out of that
run it in docker
see other post for that...works fine
start docker container, go into it, start sshd...works
use Eclipse and Maven to push jar to container...works
BUT... the same program that works fine on the hardware Pi crashes on startup in the Docker container with:
Caused by: com.pi4j.library.pigpio.PiGpioException: PIGPIO ERROR: PI_INIT_FAILED; pigpio initialisation failed
at com.pi4j.library.pigpio.impl.PiGpioBase.validateResult(PiGpioBase.java:263) ~[pi4j-library-pigpio.jar:?]
at com.pi4j.library.pigpio.impl.PiGpioBase.validateResult(PiGpioBase.java:249) ~[pi4j-library-pigpio.jar:?]
I start it with sudo directly in the container. For my development use case I do not need working GPIO pins: my program should set a pin to high or low, and QEMU/Docker should silently ignore that. I know this doesn't make sense in a container... but working pins are not important for the functionality I want to add.
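My idea would be to swap in something like Pi4J's mock provider for these runs, so that setting a pin becomes a no-op. Here is an untested sketch; I'm assuming the pi4j-plugin-mock artifact exists and exposes a MockDigitalOutputProvider, so treat the exact names as guesses:

import com.pi4j.Pi4J;
import com.pi4j.context.Context;
import com.pi4j.io.gpio.digital.DigitalOutput;
import com.pi4j.io.gpio.digital.DigitalState;
import com.pi4j.plugin.mock.provider.gpio.digital.MockDigitalOutputProvider;

// build a context backed only by the mock provider, so pigpio is never initialised
Context pi4j = Pi4J.newContextBuilder()
        .add(MockDigitalOutputProvider.newInstance())
        .build();

// the pin exists in memory only; high()/low() never touch real hardware
var pinConfig = DigitalOutput.newConfigBuilder(pi4j)
        .id("pin22")
        .address(22)
        .initial(DigitalState.LOW);
DigitalOutput pin = pi4j.create(pinConfig);
pin.high();
pin.low();

On the Maven side that would mean pulling in something like com.pi4j:pi4j-plugin-mock for the Docker profile instead of (or next to) pi4j-plugin-pigpio.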
Is there some chance to get that running? Thank you for your support!
I'm trying to run a NextJS app in a docker container. I'm using Prisma to connect to my database and NextAuth for OAuth.
If I run the app locally, I am able to successfully login (i.e., I can run through the whole flow as expected).
However, if I run it in the Docker container, I'm getting errors as soon as I hit my pages/api/auth/[...nextauth].ts route.
The only logs I seem to be able to get are:
docker-crud-next-1 | wait - compiling /api/auth/[...nextauth] (client and server)...
docker-crud-next-1 | event - compiled client and server successfully in 5.9s (610 modules)
docker-crud-next-1 exited with code 0
I've tried following the debugger docs, but I'm not able to get it to work within the Docker container.
I've also tried running the app manually from within the container, but same situation - the app just dies and there are no logs to look at.
I.e., I create the container, then open an interactive shell and manually run the start commands. No improvement from a logs perspective.
I think it might be the Prisma client, but I'm at a loss to figure out what is causing the app to crash.
So, questions:
What's the better way to get logs so I can figure out why it's crashing?
Any idea why it might work locally but not in the container?
I'm on macOS 13.1 with an Intel chip (not M1/M2).
My understanding is that I'm using a Debian base for my Docker container (though that's probably question 3: how can I interrogate the image I'm using, since it's based on Microsoft's TypeScript Node image?).
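The only commands I've thought of so far for poking at the container and its base image are along these lines (not sure whether this is even the right approach):

docker logs -f docker-crud-next-1          # follow whatever the container prints before it exits
docker image inspect mcr.microsoft.com/vscode/devcontainers/typescript-node --format '{{.Os}}/{{.Architecture}}'   # what platform was the image built for?
docker history mcr.microsoft.com/vscode/devcontainers/typescript-node   # walk the layers of the base image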
The issue appears to have been related to the base image I have been using.
I missed a disclaimer when porting a project from a different machine (an M1 Mac) onto this one (an Intel-based Mac): the image I was using reserves the -bullseye variant for M1 (source).
Moving to the base image without a variant worked:
FROM mcr.microsoft.com/vscode/devcontainers/typescript-node as base
or using the even more basic node:18-alpine.
Interestingly, trying to tag the typescript-node image with a variant of 18 did not seem to work:
ARG VARIANT="18"
FROM mcr.microsoft.com/vscode/devcontainers/typescript-node:0-${VARIANT} as base
Either way, I now have a working app and Next isn't crashing. I would still love someone to help me understand which dependency was crashing and, even better, how I could have detected it, because my logs were still totally bare.
I have been struggling with this issue for quite a while without success. I would like to run the Carla simulator 9.10.1 in its provided Docker container on a headless (no display) cluster managed by Slurm. We use enroot containers on our cluster, so we first convert Docker images to enroot sqsh files and then run those. Carla runs flawlessly on my PC when a display is connected, with or without a container. However, on the headless cluster none of the official Carla methods for headless operation worked for me, because I can only run my experiments inside an enroot container through Slurm (not on the head node), and there are also some permission restrictions for my user on the cluster. I have to mention that I only need to run Carla and don't want to see the GUI, but the problem is that Carla doesn't start off-screen or without rendering: the app quits without any error even when I run it with the off-screen or no-rendering flags. So I was wondering:
Is it possible at all to run a GUI app inside a container on a headless cluster?
I have tried creating a virtual display, but when I try to run Xorg :7 I get (EE) parse_vt_settings: Cannot open /dev/tty0 (Permission denied). I have also set allowed_users=console in /etc/X11/Xwrapper.config, but it didn't help. Is there any other way to create a virtual display?
I have not tried Xvfb because Carla needs OpenGL, and people say it may not be easy to make it work.
Is there any way to get this working? Thanks.
You can try VirtualGL, but it only works if you don't need to see the display itself.
wget https://sourceforge.net/projects/virtualgl/files/3.0.1/virtualgl_3.0.1_amd64.deb
dpkg -i virtualgl_3.0.1_amd64.deb
apt-get update
apt-get -f install
After installing, you can use
vglrun glxgears
where glxgears is your application.
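If there is no X server at all on the node, one combination worth trying (untested on your setup; I'm assuming Xvfb is available in the container, that VirtualGL 3.x's EGL back end can use /dev/dri/card0, and that CarlaUE4.sh is the Carla entry script) is a dummy X display plus vglrun:

Xvfb :7 -screen 0 1280x720x24 &                  # dummy 2D display, needs no GPU and no /dev/tty0
export DISPLAY=:7
vglrun -d /dev/dri/card0 ./CarlaUE4.sh -opengl   # -d selects the EGL back end / GPU device, so no 3D X server is required

The device path and the Carla flags may well differ on your cluster, so treat this as a starting point.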
That container is built when deploying the application.
Looks like its purpose is to share dependencies across modules.
It looks like it is started as a container but nothing is apparently running, a bit like an init container.
The console says it starts/stops that component when using the respective wolkenkit start and wolkenkit stop commands.
On startup:
On shutdown:
When you run docker ps, that container cannot be found:
Can someone explain these components?
When starting a wolkenkit application, the application is boxed in a number of Docker containers, and these containers are then started along with a few other containers that provide the infrastructure, such as databases, a message queue, ...
The reason why the application is split into several Docker containers is because wolkenkit builds upon the CQRS pattern, which suggests separating the read side of an application from the application's write side, and hence there is one container for the read side, and one for the write side (actually there are a few more, but you get the picture).
Now, since you may develop on an operating system other than Linux, the wolkenkit application may run under a different operating system than when you develop it, as within Docker it's always Linux. This means that the start command can not simply copy over the node_modules folder into the containers, as they may contain binary modules, which are then not compatible (imagine installing on Windows on the host, but running on Linux within Docker).
To avoid issues here, wolkenkit runs an npm install when starting the application inside of the containers. The problem now is that if wolkenkit did this in every single container, the start would be super slow (it's not the fastest thing on earth anyway, due to all the Docker building and starting that's happening under the hood). So wolkenkit tries to optimize this as much as possible.
One concept here is to run npm install only once, inside of a container of its own. This is the node-modules container you encountered. This container is then linked as a volume to all the containers that contain the application's code. This way you only have to run npm install once, but multiple containers can use the outcome of this command.
Since this container now contains data, but no code, it only has to be there, it doesn't actually do anything. This is why it gets created, but is not run.
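In plain Docker terms this is roughly the classic data-only container pattern. A simplified sketch (these are not the exact commands wolkenkit runs, and the image and container names are made up):

# an image whose only job is to run npm install once and declare /app/node_modules as a VOLUME
docker build -t myapp-deps -f Dockerfile.deps .
docker create --name myapp-node-modules myapp-deps        # created, but never started
docker run --volumes-from myapp-node-modules myapp-readside
docker run --volumes-from myapp-node-modules myapp-writeside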
I hope this makes it a little bit clearer, and I was able to answer your question :-)
PS: Please note that I am one of the core developers of wolkenkit, so take my answer with a grain of salt.
Docker is a wonderful tool for running/deploying your application in a well-defined, controlled environment, and is well supported by e.g. the GitLab CI or by MS Azure.
We would like to use it also in the development phase, so that all developers have the same environment available. Of course, we want to keep the image as light as possible and we do not want e.g. any IDE or other development tool inside of it.
So the actual development takes place outside of docker.
Running our (python) application inside of docker is no problem, but debugging it is not trivial: I do not know of a way to attach a debugger to an application running inside docker. In theory this should be possible, but how does one do it?
Additional info: we use Visual Studio Code, which does have a Docker plugin, but nothing of this sort is mentioned there.
Turns out that this is possible, following the same steps needed for remote debugging.
The IP address of the Docker container can be retrieved through:
docker inspect <container_id> | grep -i ip
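If you prefer to skip the grep, the --format flag gives you just the address:

docker inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container_id>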
Just be sure to add this at the beginning of your application:
import ptvsd
# Allow other computers to attach to ptvsd at this IP address and port, using the secret
ptvsd.enable_attach(secret=None, address = ('0.0.0.0', 3000))
ptvsd.wait_for_attach()
'0.0.0.0' means on all interfaces.
For VS Code, the last step consists in adapting the Python: Attach configuration, specifying the address and the remote and local roots for your script.
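For reference, the resulting attach entry in launch.json looks roughly like this (host, port and remoteRoot are placeholders for your container; the key names are from the ptvsd-era Python extension, so double-check them against your version):

{
    "name": "Python: Attach",
    "type": "python",
    "request": "attach",
    "host": "<container-ip>",
    "port": 3000,
    "localRoot": "${workspaceRoot}",
    "remoteRoot": "/app"
}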
However, for some mysterious reason my breakpoints are ignored.
I was able to set up the minimesos cluster on my laptop and could also deploy a small command-line utility. Now the questions:
What is the image "containersol/minimesos" used for? It is pulled, but I don't see it running when I do "docker ps"; "docker images" lists it.
How come when I run "top" inside the mesos-agent container, I see all the processes running in my host (laptop)? This is a bit strange.
I was trying to figure out what's inside the minimesos script. I see that there's just one "docker run ..." command. I would really appreciate it if I could get to know what that command does that results in 4 containers (1 master, 1 slave, 1 zk, 1 marathon) running on my laptop.
containersol/minimesos runs the Java code that is the core of minimesos. It only runs until it has executed the command from the CLI. When you do minimesos up, the command name and the minimesosFile are passed to this container. The container in turn executes the Java code that creates the other containers that form the Mesos cluster specified in the minimesosFile. That should answer #3 as well. Take a look at the MesosCluster class; that's the root of where the magic happens.
I don't know the answer to #2; I'll get back to you when I find out.
Every minimesos command runs as a short lived container, whose image is containersol/minimesos.
When you run 'minimesos up' it launches containersol/minimesos with 'up' as the argument. It then launches a cluster by starting other containers like containersol/mesos-agent and containersol/mesos-master. After the cluster is up, the containersol/minimesos container exits and is removed.
We have separated the CLI and the minimesos core as a refactoring to prepare for the upcoming API module. We are creating an API to support clients for different programming languages. The first client will be a Golang client.
In this new setup minimesos will launch a long-running API server, and any minimesos CLI commands will call the API. The clients will also launch the API server and call the API.