Docker website using Node and Yarn won't let me access the website

I just started writing Dockerfiles and am trying to start a website using Docker, but every time I run the file I can't access the website.
(Dockerfile and docker log were attached as images; omitted here.)

This is just a warning that the method is deprecated; it does not affect the running of the application and is not the cause of the problem.

I actually figured it out: when running the docker run command, adding the
--network="host" flag let me connect to the website.

Related

I run OpenWhisk on Docker Compose according to the docs' instructions, but still I can't use the OpenWhisk CLI to create any actions

I followed the instructions in the documentation to download this preset they created for easily running Apache OpenWhisk for development purposes on Docker Compose.
I use make run, which works fine. Then make hello-world runs the example action just as well.
I read the .wskprops file and saw that it's running on port 9090 and that the auth value is 23bc46b1-...:123zO3.... So I used wsk property set --apihost localhost:9090 --auth 23bc46....
But if I try using wsk action create someAction main.js to create my own action it returns Unable to create action 'someAction': Put "https://localhost:31001/api/v1/namespaces/_/actions/test?overwrite=false": dial tcp [::1]:31001: connect: connection refused.
These are the steps the Makefile appears to follow.
I'm not sure if perhaps I'm missing a step? How do I link running it and using it? The documentation doesn't seem to specify this. My knowledge of Docker Compose is nil, but I need to get this running in the time I have available, so I hoped this would be a simple solution. I've been stuck trying to run OpenWhisk on my local computer for a week, so any help would be massively appreciated!
I figured it out! I got into the Makefile and printed WSK_CLI to discover that it was using docker-compose/openwhisk-src/bin/wsk instead of my own installation of wsk.
So essentially, after running make run, I can create an action using ./openwhisk-src/bin/wsk action -i create <action_name> <action.js>. Note that the -i is needed to get over the security simplifications of running it locally for development purposes.
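
For example, a hedged sketch of that workflow (the action name and file are placeholders; the flag placement mirrors the command above):

# use the CLI bundled with the compose setup, not a globally installed wsk
./openwhisk-src/bin/wsk action -i create hello hello.js
# invoke it the same way; -i skips TLS verification for the local dev deployment
./openwhisk-src/bin/wsk action -i invoke hello --result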

Testcontainers do not start after replacing Docker Desktop with minikube

I want to make my testcontainers in Java integration tests work with minikube replacing Docker Desktop.
I followed the article below to get started:
https://www.atomicjar.com/2021/10/docker-on-windows-and-macos/#minikube
This is what I've got in testcontainers.properties:
docker.client.strategy=org.testcontainers.dockerclient.EnvironmentAndSystemPropertyClientProviderStrategy
docker.host=tcp\://192.168.64.2\:2376
docker.cert.path=/Users/username/.minikube/certs
docker.tls.verify=true
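
For reference, the same endpoint can be expressed through the standard Docker environment variables that this strategy also reads (values copied from the properties above; they are an assumption for your setup):

# equivalent shell configuration for the Docker client
export DOCKER_HOST=tcp://192.168.64.2:2376
export DOCKER_CERT_PATH=/Users/username/.minikube/certs
export DOCKER_TLS_VERIFY=1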
Although my Docker is up and running, I'm getting the following exception:
Caused by: java.lang.IllegalStateException: Could not find a valid Docker environment. Please see logs and check configuration
Can anybody please suggest anything to make it work?
Thanks in advance!
If you are using Gradle, try the --no-daemon flag so the build uses a fresh daemon. Your old Gradle daemon is still using your previous testcontainers properties. Also restart your IDE if you're running your build inside it.
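
A quick sketch of that suggestion (standard Gradle wrapper commands):

# stop any lingering daemons holding the stale testcontainers properties
./gradlew --stop
# then run the build once without a daemon
./gradlew test --no-daemon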
After restarting Minikube and the IntelliJ editor, and updating testcontainers-bom to the latest version (from 1.15 to 1.16.2), I was able to pull some third-party Docker images. This means Docker is working now.
However, I'm still trying to find a way to work with local images (other applications' Docker images) for integration testing, as it used to work with Docker Desktop.
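
One possible approach for that remaining problem (my assumption, not from this thread): build the local images directly against minikube's Docker daemon, so testcontainers can resolve them without a registry:

# point this shell's docker CLI at minikube's internal daemon
eval $(minikube docker-env)
# build the other application's image into minikube's image store (name and path assumed)
docker build -t other-app:local /path/to/other-app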

Meteor build refresh after saving a file change drops an EACCES error in a Docker container

SUMMARY
First of all, I run my Meteor web app using docker-compose with docker-compose -f docker-file.dev.yml up -d --build. Everything works fine for the first build. The app is reachable in my browser via localhost.
Then I get an EACCES error when I use my code editor (VSCode) to edit some .jsx files: saving triggers the build refresh, and the application running in a Docker container crashes. It means I have a permission error.
When I save an updated file, this is done with my current (Windows) user in VSCode, whereas the app is run by the "meteoruser" user defined in the docker-compose.dev.yml file below.
So what is the problem with this configuration? It worked well on a previous computer running Ubuntu 16.04. Is Windows the problem?
If I check the docker logs I have this output:
If I rerun the docker-compose build command, everything is fine: the files are updated and the app is running. But I can't work like that, rerunning this command every time I make a file change.
What I expect is a build refresh that works without dropping an EACCES error.
Additional information about my project
This is a React project within the Meteor framework in a container, using a MongoDB container, with an nginx proxy in a container as well.
(Project scaffold, docker-compose file, and dev Dockerfile were attached; omitted here.)
NB: as you can see, I create a user in the container and then run the app in the container as this user. This is needed to avoid running the app locally as the root user.
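
For illustration, a minimal sketch of that kind of non-root setup (a hypothetical reconstruction; only the meteoruser name comes from the post):

# hypothetical dev Dockerfile sketch, not the poster's actual file
FROM node:14
RUN useradd --create-home meteoruser
WORKDIR /app
# copy sources owned by the unprivileged user so the build refresh can write
COPY --chown=meteoruser:meteoruser . .
USER meteoruser
# the actual start command is app-specific
CMD ["npm", "start"]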
WHAT I HAVE TRIED
Deleting the .meteor folder and rebuilding locally with meteor run, to ensure my current Windows user has the rights to the Meteor app build folder. It did not work.
Removing the usage of meteoruser in the Dockerfile. This does not work either and instead drops an error, because Meteor does not allow running an app in a dev environment as the root user, since that could lead to permission errors later in production.
Thanks in advance for your help!

How to load and run an offline Docker image built using docker-compose build?

I'm new to Docker and have been dabbling with it for the past few days. I've managed to successfully use docker-compose for a multi-container deployment involving an app server (Flask + Gunicorn) and a web server (nginx).
Now, I'd like to recreate the deployment on an offline machine. After doing research, it seems that most have mentioned using docker save and docker load to transfer over the base images. However, I'm wondering whether it's possible to recreate the deployment from the image created by docker-compose build? The reason being I would like to skip the entire process of wheeling my Python package dependencies for offline use, which I would have to do if starting from the base images.
I've tried to save that particular image (the output of docker-compose build) and load it on the offline machine, and then tried docker run and docker-compose up, but neither seems to work. I would like to check with the community whether this method is even possible, and if so, what's the right way to go about it?
Thanks!
To solve my issue, I ended up making an image of each individual container after pip install, then using docker-compose.yml simply to spin them up. As David mentioned, it doesn't seem possible to spin up the containers from the single image output by docker-compose build.
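
For reference, a hedged sketch of that workflow (image and file names are assumptions):

# on the online machine: build, then export each built service image
docker-compose build
docker save -o app.tar myproject_app
docker save -o web.tar myproject_web

# on the offline machine: import the images, then start without rebuilding
docker load -i app.tar
docker load -i web.tar
docker-compose up -d --no-build

For this to work, each service in docker-compose.yml should carry an image: name matching the loaded tags, so Compose starts the imported images instead of trying to build them.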

Setting up a container from a user's GitHub source

Can be closed, not sure how to do it.
I am, to be quite frank, lost right now. The user who published his source on GitHub somehow failed to update the installation instructions when he released a new branch. Now, I am not dense, just uneducated when it comes to Docker. I would really appreciate a push in the right direction. If I am missing any information from this post, please allow me to provide it in the comments.
Current Setup
O/S - Debian 8 Minimal (Latest kernel)
Hardware - 1GB VPS (KVM)
Docker - Installed with Compose (# docker info)
I am attempting to set up this (https://github.com/pboehm/ddns/tree/docker_and_rework). First, I should clone this repo to my working directory? Let's say /home for example. I will run the following command:
git clone -b docker_and_rework https://github.com/pboehm/ddns.git
Which has successfully cloned the source files into /home/ddns/... (working dir)
Now I believe I am supposed to go ahead and build something, so I go into the following directory:
/home/ddns/docker
Inside is a docker-compose.yml file. I am not sure what this does, but by looking at it, it appears to contain a bunch of instructions which I can only presume have to do with actually deploying or building the whole container/image or magical thing, right? From here I go ahead and do the following:
docker-compose build
As we can see, I believe it's building the container or image or whatever it's called, you get my point (here). After a short while, that completes and we can see the following (docker images running). Which is correct, I see all of the dependencies in there, but things like:
go version
It does not show as a command, so I presume I need to run it inside the container maybe? If so, I don't have a clue how. I need to run 'ddns.go', which is inside /home/ddns; the execution command is:
ddns --soa_fqdn=dns.stealthy.pro --domain=d.stealthy.pro backend
I am also curious why the front-end web page is not showing. There should be a page like this:
http://ddns.pboehm.org/
But again, I believe there is more to do; I just do not know what.
docker-compose build will only build the images.
You need to run the following; it will build and run them:
docker-compose up -d
The -d option runs the containers in the background.
To check what is running after docker-compose up:
docker-compose ps
It will show what is running and what ports are exposed from the containers.
Usually you can access the services from your localhost.
If you want to have a look inside a container:
docker-compose exec SERVICE /bin/bash
where SERVICE is the name of the service in docker-compose.yml.
The instructions it runs that you probably care about are in the Dockerfile, which for that repo is in the docker/ddns/ directory. What you're missing is that the Dockerfile creates an image, which is a template for creating instances. Every time you docker run, you create a new instance from the image. docker run docker_ddns go version will create a new instance of the image, run go version, output it, then die. Long-running processes, like the one the docker_ddns-web image probably runs, will keep running until something kills them. The reason you can't see the web page is probably that you haven't run docker-compose up yet, which will create linked instances of all of the Docker images specified in the docker-compose.yml file. Hope this helps!
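
A compact sketch of the distinction described above (the image name comes from the answer; the rest is standard Compose usage):

# one-off container: starts, runs the command, prints, then exits
docker run docker_ddns go version

# long-running services: build (if needed) and start everything in docker-compose.yml
docker-compose up -d
docker-compose ps   # confirm the web service is up and see its published ports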
