NextJS App Crashing within Docker With No Logs

I'm trying to run a NextJS app in a docker container. I'm using Prisma to connect to my database and NextAuth for OAuth.
If I run the app locally, I am able to successfully login (i.e., I can run through the whole flow as expected).
However, if I run it in the docker container, I'm getting errors as soon as I hit my pages/api/auth/[...nextauth].ts route.
The only logs I seem to be able to get are:
docker-crud-next-1 | wait - compiling /api/auth/[...nextauth] (client and server)...
docker-crud-next-1 | event - compiled client and server successfully in 5.9s (610 modules)
docker-crud-next-1 exited with code 0
I've tried following the debugger docs, but am not able to get the debugger working within the docker container.
I've also tried running the app manually from within the container, but same situation - the app just dies and there are no logs to look at.
I.e., I create the container, then open an interactive shell and manually run the start commands. No improvement from a logs perspective.
I think it might be the prisma client (here)... but I'm at a loss to figure out what is causing the app to crash.
So, questions:
What's the better way to get logs so I can figure out why it's crashing?
Any idea why it might work locally but not in the container?
I'm on macOS 13.1 with an Intel chip (not M1/M2).
My understanding is that I'm using a Debian base for my docker container (though that's probably question 3: how can I interrogate the image I'm using, since it's based on Microsoft's TypeScript Node image?)
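A rough way to interrogate the image from the host is sketched here (generic docker CLI commands; the image name is taken from the Dockerfile further down, so substitute whatever image and tag you actually reference, and note this assumes the image doesn't set an entrypoint that intercepts the command):
docker image inspect mcr.microsoft.com/vscode/devcontainers/typescript-node --format '{{.Os}}/{{.Architecture}}'
docker run --rm mcr.microsoft.com/vscode/devcontainers/typescript-node cat /etc/os-release
docker run --rm mcr.microsoft.com/vscode/devcontainers/typescript-node uname -m
The first command reports the platform the image was built for; the other two ask the image itself which distribution and CPU architecture it is running as.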

The issue appears to have been related to the base image I had been using.
When porting the project from a different machine (an M1 Mac) onto this one (an Intel-based Mac), I missed a disclaimer that the image I was using reserves the -bullseye variant for M1 (source).
Moving to the base image without a variant worked:
FROM mcr.microsoft.com/vscode/devcontainers/typescript-node as base
or using the even more basic node:18-alpine.
Interestingly, trying to tag the typescript-node image with a variant of 18 did not seem to work:
ARG VARIANT="18"
FROM mcr.microsoft.com/vscode/devcontainers/typescript-node:0-${VARIANT} as base
Either way, I now have a working app and Next isn't crashing. I would still love someone to help me understand what dependency was crashing and, even better, how I could have detected it, because my logs were still totally bare.
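One check that would probably have surfaced the mismatch earlier is comparing the platform of the pulled image with the platform of the Docker engine; a hedged sketch with standard docker commands (the image and container names are placeholders):
docker image inspect <image:tag> --format 'image: {{.Os}}/{{.Architecture}}'
docker version --format 'host:  {{.Server.Os}}/{{.Server.Arch}}'
docker inspect <container> --format 'exit={{.State.ExitCode}} err={{.State.Error}}'
If the first two lines disagree (e.g. linux/arm64 versus linux/amd64), the container is running under emulation, which is a plausible reason for it dying without useful logs.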

Related

pigpio initialisation failed on raspi in qemu in docker

I'm developing a program for Raspberry Pi in Java which works fine on a Raspi. Now I want to add functionality that is not directly related to GPIO, and I don't want to carry around the Raspi with my development notebook. My idea:
pull the working image from raspi's memory card
create a docker container out of that
run it in docker
see other post for that...works fine
start docker container, go into it, start sshd...works
use Eclipse and Maven to push jar to container...works
BUT...the same program, which works fine on the hardware Pi, crashes on startup in the docker container with
Caused by: com.pi4j.library.pigpio.PiGpioException: PIGPIO ERROR: PI_INIT_FAILED; pigpio initialisation failed
at com.pi4j.library.pigpio.impl.PiGpioBase.validateResult(PiGpioBase.java:263) ~[pi4j-library-pigpio.jar:?]
at com.pi4j.library.pigpio.impl.PiGpioBase.validateResult(PiGpioBase.java:249) ~[pi4j-library-pigpio.jar:?]
Start is with sudo directly in the container. For my development use case, I do not need working pins from GPIO. My program shall set a pin to high or low, and qemu/docker shall silently ignore that. I know that this doesn't make sense in a container...but for the functionality to be added, working pins are not important.
Is there some chance to get that running? Thank you for your support!

Live reload and two-way communication for Expo in a docker container under new local CLI

I'm using the "new" (SDK 46) project-scoped Expo CLI in a docker container. Basic flow is:
Dockerfile from node:latest runs the Expo npx project creation script, then copies in some app-specific files
CMD is npx expo start
Using docker-compose to create an instance of the above image with port 19000 mapped to local (on a Mac), and EXPO_PACKAGER_PROXY_URL set to my host local IP (see below). I've also mounted a network volume containing my components to the container to enable live edits on those source files (a rough docker run equivalent of this setup is sketched just after this list).
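Roughly, the docker run equivalent of that compose service looks like this (image name, host IP, and paths are placeholders rather than the actual project values):
docker run --rm -it \
  -p 19000:19000 \
  -e EXPO_PACKAGER_PROXY_URL=http://<host-local-ip>:19000 \
  -v "$(pwd)/components:/app/components" \
  my-expo-image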
If you google around, you'll see a few dozen examples of how to run Expo in a docker container (a practice I really believe should be more industry-standard to get better dev-time consistency). These all make reference to various environment variables used to map URLs correctly to the web-based console, etc. However, as of the release of the new (non-global) CLI, these examples are all out of date.
Using the Expo Go app I've been able to successfully connect to Metro running on the container, after setting EXPO_PACKAGER_PROXY_URL such that the QR code showing up in the terminal directs the Go app to my host on 19000, and then through to the container.
What is not working is live reloading, or even reloading the app at all. To get a change reflected in the app I need to completely restart my container. For whatever reason, Metro does not push an update to the Go app when files are changed (although weirdly I do get a little note on Go saying "Refreshing..." which shows it knows a file has changed). Furthermore, it seems like a lot of the interaction between the app and the container console are also not happening, for example when the Go app loads the initial JS bundle, loading progress is not shown in the console as it is if I try running expo outside of Docker.
At this point my working theory is that this may have something to do with websockets not playing nicely with the container. Unfortunately Expo has so much wrapped under it that it's tough for me to figure out exactly why.
Given that I'm probably not the only one who will encounter this as more people adopt the new CLI and want a consistent dev environment, I'm hoping to crowdsource some debugging ideas to try to get this working!
(Additional note -- wanted to try using a tunnel to see if this fixes things, but ngrok is also quite a pain to get working correctly through docker, so really trying to avoid that if possible!)
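For reference, the new project-scoped CLI does still expose tunneling as a start flag, so if it ever becomes unavoidable the container's start command can be switched to the line below (a sketch only; I have not verified how the CLI resolves its ngrok dependency from inside a container):
npx expo start --tunnel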

How to produce an app bundle so that user can double click to launch docker application?

Suppose I have a docker application (such as this one). The standard usage is to run docker run from the CLI; in this case, for macOS users it would be:
docker run -it --rm bigdeddu/nyxt:2.2.1
Now, I would like to produce an app bundle or something so that users can double click to launch this docker application as a desktop application. It would be kind of a GUI shortcut to launch docker.
How can I achieve that?
1 - Is there a solution already done for it? If so, which one?
2 - If there is not a solution already done for it, what would be a rough sketch on how to build one?
Thanks!
Docker was designed to encapsulate server processes. For servers, the CLI is a reasonable and often satisfactory interface.
If you want users to run their possibly interactive application, you may want to look at https://appimage.org/, although I am unsure whether that is available for macOS.
To get around these limitations, you could either think of creating an end-user-oriented GUI for Docker, or an implementation of AppImage for macOS.
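One very low-tech sketch of such a shortcut on macOS: a file with the .command extension is opened in Terminal when double-clicked, so wrapping the docker run line from the question in one (the file name is arbitrary, and this assumes Docker Desktop is already running) gives a double-clickable launcher:
#!/bin/bash
# Nyxt.command - double-clickable Docker launcher (sketch)
docker run -it --rm bigdeddu/nyxt:2.2.1
Mark it executable once with chmod +x Nyxt.command. For a proper .app bundle, the same script could be wrapped with Automator's "Run Shell Script" action or a tool like Platypus, though that goes beyond a rough sketch.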

Get information about the volume from inside the docker container

Inside a container I build a (C++) app. The source code directory is shared with --volume.
If docker runs on Linux, the shared directory runs at full speed, but if docker runs on a Mac, docker has to bridge the share, which results in a speed drop. Therefore I have to copy the whole source directory into the container before starting compiling. But this copy step is necessary on non-Linux hosts only.
How can I detect if the share is "natively" shared?
Can I get information about the host os from inside the container?
Update
The idea behind this workflow is to setup an image for a defined environment to cross-build the product for multiple platforms (win, mac, linux). Otherwise each developer has a different Linux OS/compilers/components etc installed.
As a docker newbie I thought that this image (with all required third-party components/compilers) could be used to build the app within a container when it is launched.
One workaround I can think of is that you can use a special networking feature which is available in both Mac and Windows hosts, but not in Linux.
It is a special DNS entry you can use to get the IP of the host from inside the container: host.docker.internal. Read more here and here.
Now you just need a command to get a boolean value for whether it resolves or not. Since I don't know which shell you are using, I can't say for sure, but something like this should help you.
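A plausible POSIX shell version of that check (a sketch only; getent may not exist in every image, in which case nslookup or ping can stand in):
if getent hosts host.docker.internal >/dev/null 2>&1; then
  echo "Docker Desktop host (mac/win): copy the sources before building"
else
  echo "Linux host: the bind mount is native, build in place"
fi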
In my opinion you are looking at the issue from the wrong perspective.
First of all the compilation should be done at build time, not at runtime. If you do it in the container then it means that you are shipping an image with build tools, not to mention that the user of the image would need the source code to run it. For this reason it is a good practice to compile at build time and only ship an image with the binary to run.
Secondly, compiling at build time is fast because the source code is sent to the docker daemon and accessed directly from there, no need for volumes.
Lastly, to answer your last question, it is you who runs the container. So you can tell it everything about the host where it is running by just adding an environment variable (for example). It is over-complicated to run the container and let it guess where it is running, when you already have that information at the moment you start the container.
I used --env DO_COPY=1 when creating the container.
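To illustrate how that flag can be consumed, here is a hypothetical entrypoint script (the script name and paths are invented; only the DO_COPY variable comes from the comment above):
#!/bin/sh
# entrypoint.sh (sketch): copy the bind-mounted sources only when the host asked for it
if [ "$DO_COPY" = "1" ]; then
  cp -r /src /src-copy    # slow bridged mount (mac/win): build from a local copy
  cd /src-copy
else
  cd /src                 # native mount (Linux): build in place
fi
exec "$@"
The container is then started with docker run --env DO_COPY=1 ... on non-Linux hosts and without the variable on Linux.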

What's the purpose of the node-modules container in wolkenkit?

That container is built when deploying the application.
Looks like its purpose is to share dependencies across modules.
It looks like it is started as a container but nothing is apparently running, a bit like an init container.
The console says it starts/stops that component when using the respective wolkenkit start and wolkenkit stop commands, but when you run docker ps, that container cannot be found.
Can someone explain these components?
When starting a wolkenkit application, the application is boxed in a number of Docker containers, and these containers are then started along with a few other containers that provide the infrastructure, such as databases, a message queue, ...
The reason why the application is split into several Docker containers is because wolkenkit builds upon the CQRS pattern, which suggests separating the read side of an application from the application's write side, and hence there is one container for the read side, and one for the write side (actually there are a few more, but you get the picture).
Now, since you may develop on an operating system other than Linux, the wolkenkit application may run under a different operating system than when you develop it, as within Docker it's always Linux. This means that the start command can not simply copy over the node_modules folder into the containers, as they may contain binary modules, which are then not compatible (imagine installing on Windows on the host, but running on Linux within Docker).
To avoid issues here, wolkenkit runs an npm install when starting the application inside of the containers. The problem now is that if wolkenkit did this in every single container, the start would be super slow (it's not the fastest thing on earth anyway, due to all the Docker building and starting that's happening under the hood). So wolkenkit tries to optimize this as much as possible.
One concept here is to run npm install only once, inside of a container of its own. This is the node-modules container you encountered. This container is then linked as a volume to all the containers that contain the application's code. This way you only have to run npm install once, but multiple containers can use the outcome of this command.
Since this container now contains data, but no code, it only has to be there, it doesn't actually do anything. This is why it gets created, but is not run.
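In plain Docker terms, this is roughly the classic data-only container pattern shared via volumes; a generic sketch (image names are placeholders, and these are not the exact commands wolkenkit runs):
# a container whose only job is to expose /app/node_modules as a volume; it is created but never started
docker create --name app-node-modules -v /app/node_modules node:alpine true
# run npm install once into that volume (assumes the application image has its package.json in /app)
docker run --rm --volumes-from app-node-modules -w /app my-app-image npm install
# every container that actually runs application code mounts the same volume
docker run --volumes-from app-node-modules my-app-readside
docker run --volumes-from app-node-modules my-app-writeside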
I hope this makes it a little bit clearer, and I was able to answer your question :-)
PS: Please note that I am one of the core developers of wolkenkit, so take my answer with a grain of salt.

Resources