Containerizing a Vue application does not display webpage on localhost - docker

I'm trying to containerize my vue application created from vue-cli. I have a docker-compose.yml looking like this:
version: '3.8'
services:
  npm:
    image: node:current-alpine
    ports:
      - '8080:8080'
    stdin_open: true
    tty: true
    working_dir: /app
    entrypoint: [ 'npm' ]
    volumes:
      - ./app:/app
I have in the same directory the docker-compose.yml and the /app where the vue source code is located.
/vue-project
  /app (vue code)
  /docker-compose.yml
I install my node dependencies:
docker-compose run --rm npm install
They install correctly in the container, as I see the folder appear on my host.
I am running this command to start the server:
docker-compose run --rm npm run serve
The server starts to run correctly:
App running at:
- Local: http://localhost:8080/
It seems you are running Vue CLI inside a container.
Access the dev server via http://localhost:<your container's external mapped port>/
Note that the development build is not optimized.
To create a production build, run npm run build.
But I cannot access it at http://localhost:8080/ from my browser. I've tried different ports, I've also tried to run the command like this:
docker-compose run --rm npm run serve --service-ports
But none of this works. I've looked at other dockerfiles but they are so different from mine, what am I exactly doing wrong here?
docker ps -a
is showing these:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
cf0d5bc724b7 node:current-alpine "npm run serve --ser…" 21 minutes ago Up 21 minutes docker-compose-vue_npm_run_fd94b7dd5be3
ff7ac833536d node:current-alpine "npm" 22 minutes ago Exited (1) 22 minutes ago docker-compose-vue-npm-1

Your compose file tells Docker to publish the container's port 8080 on the host's port 8080, yet the empty PORTS column in your docker ps output shows that nothing was actually published. docker-compose run does not apply the service's port mappings unless you pass --service-ports, and that flag must come before the service name; placed after it, as in your last command, it is forwarded to npm instead of to compose.
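A corrected invocation, assuming the compose file shown above: flags for docker-compose run go between run and the service name.

```shell
# Publish the service's ports for this one-off run; everything after
# "npm" is handed to the container's entrypoint as arguments.
docker-compose run --rm --service-ports npm run serve
```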

Related

'Cypress could not verify that this server is running' error when using Docker

I am running Cypress version 10.9 from inside Docker on macOS. I set my base URL as localhost:80. As a simple example, I am running an Apache server on localhost:80, which gives me the 'It works!' page if I go to it in a web browser, so it is indeed up. I can also ping localhost:80 from the same terminal I am executing my Docker Cypress container from.
But I get this error every time when attempting to run my Cypress container:
Cypress could not verify that this server is running:
> http://localhost
We are verifying this server because it has been configured as your baseUrl.
I do see there are some Stack Overflow posts (e.g. https://stackoverflow.com/questions/53959995/cypress-could-not-verify-that-the-server-set-as-your-baseurl-is-running) that talk about this error. However, the application under test in those posts is inside another Docker container; my Apache page is not in a container.
This is my docker-compose.yml:
version: '3'
services:
  # Docker entry point for the whole repo
  e2e:
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      CYPRESS_BASE_URL: $CYPRESS_BASE_URL
      CYPRESS_USERNAME: $CYPRESS_USERNAME
      CYPRESS_PASSWORD: $CYPRESS_PASSWORD
    volumes:
      - ./:/e2e
I pass 'http://localhost' from my environment CYPRESS_BASE_URL setting.
This is the docker command I use to build my image:
docker compose up --build
And then to run the Cypress container:
docker compose run --rm e2e cypress run
Some other posts suggest running the docker run command with --network to make sure my Cypress container runs on the same network as the compose network(ref: Why Cypress is unable to determine if server is running?) but I am executing 'docker compose run' which does not have a --network argument.
I also verified that my /etc/hosts has an entry of 127.0.0.1 localhost as other posts have suggested. Any suggestions? Thanks.
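One hedged suggestion, not confirmed for this setup: inside the Cypress container, localhost is the container's own loopback interface, not the Mac where Apache is listening, which would explain why the same URL works from the host terminal but not from the container. Docker Desktop for Mac exposes the host under the host.docker.internal alias, so pointing the baseUrl there may fix the verification step:

```shell
# Hypothetical: target the host through Docker Desktop's host alias
# instead of the container's own loopback.
CYPRESS_BASE_URL=http://host.docker.internal docker compose run --rm e2e cypress run
```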

docker-compose cannot resolve DNS

The problem is that docker compose cannot build the image, failing on RUN npm ci. But after hours of debugging, I isolated the problem and pinned it down to this minimal setup:
My docker-compose.yml
version: '3.8'
services:
  myapp:
    build:
      dockerfile: Dockerfile
      context: .
      target: development
    command: sleep Infinity
My Dockerfile
FROM node:18-alpine AS development
RUN ping google.com
When I run docker compose -f docker-compose.yml up -d --build
I'm getting error:
What I tried so far:
- In the Dockerfile, replaced ping google.com with ping <real-ip>. ✅ It works, so I assume it's a DNS problem.
- Added dns: 8.8.8.8 to docker-compose.yml. ❌ No luck.
- Ran under the super user: sudo docker compose …. ❌ No luck.
- Built the image from the Dockerfile without compose, using just the docker build command. ✅ It works, so the problem is with docker compose.
- Commented out the RUN ping … command so the build does not fail and the container runs sleep Infinity from the compose config. Then I connected to the container via docker exec -it <container> sh and was able to ping google.com and run npm ci. So when the container is running it has access to DNS. The problem happens only in docker compose at the build stage of the Dockerfile.
Environment
It's a VPS on Hetzner. I ssh in as a user in the sudo and docker groups.
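A hedged suggestion beyond what was tried above: the dns: key in docker-compose.yml only affects running containers, while RUN steps execute in build-time containers that take their resolver configuration from the Docker daemon. Pinning nameservers in /etc/docker/daemon.json (creating the file if absent) and restarting the daemon is a common workaround:

```json
{
  "dns": ["8.8.8.8", "1.1.1.1"]
}
```

Apply with sudo systemctl restart docker and re-run the build.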

issues in docker-compose when running up, cannot find localhost and services starting in wrong order

I'm having a couple of issues running docker-compose.
docker-compose up already works in starting the webservice (stuffapi) and I can hit the endpoint with http://localhost:8080/stuff.
I have a small go app that I would like to run with docker-compose using a local dockerfile. When that dockerfile is built, it cannot call the stuffapi service on localhost. I have tried using the service name, i.e. http://stuffapi:8080, however this gives the error lookup stuffapi on 192.168.65.1:53: no such host.
I'm guessing this has something to do with the default network setup?
After the stuffapi service has started I would like my service (stuffsdk in the dockerfile) to be built and then to execute a command to run the go app, which calls the stuff (web) service. docker-compose tries to build the local dockerfile first, but when it runs its last command, RUN ./main, it fails because stuffapi hasn't been started yet. My service has a depends_on on the stuffapi service, so I thought that would start first?
docker-compose.yaml
version: '3'
services:
  stuffapi:
    image: XXX
    ports:
      - 8080:8080
  stuffsdk:
    depends_on:
      - stuffapi
    build: .
dockerfile:
FROM golang:1.15
RUN mkdir /stuffsdk
RUN mkdir /main
ADD ./stuffsdk /stuffsdk
ADD ./main /main
ENV BASE_URL=http://stuffapi:8080
WORKDIR /main
RUN go build
RUN ./main
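For what it's worth, RUN executes while the image is being built, before any service container (and hence the compose network where the stuffapi hostname resolves) exists; depends_on only orders container start-up, not builds. A sketch of the same Dockerfile with the launch deferred to runtime via CMD, when stuffapi is reachable:

```dockerfile
FROM golang:1.15
RUN mkdir /stuffsdk
RUN mkdir /main
ADD ./stuffsdk /stuffsdk
ADD ./main /main
ENV BASE_URL=http://stuffapi:8080
WORKDIR /main
RUN go build
# CMD runs when the container starts, on the compose network,
# after stuffapi has been started by depends_on.
CMD ["./main"]
```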

docker exec not working in docker-compose containers

I'm executing two docker containers using docker compose.
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
eef95ca1b59b gogent_qaf "/bin/sh -c ./slave.s" 14 seconds ago Up 12 seconds 4242/tcp, 7000-7005/tcp, 9999/tcp, 0.0.0.0:30022->22/tcp coreqafidm_qaf_1
a01373e893eb gogent_master "/bin/sh -c ./master." 15 seconds ago Up 13 seconds 4242/tcp, 0.0.0.0:27000->7000/tcp, 0.0.0.0:27001->7001/tcp, 0.0.0.0:27002->7002/tcp, 0.0.0.0:27003->7003/tcp, 0.0.0.0:29999->9999/tcp coreqafidm_master_1
When I try to use:
docker exec -it coreqafidm_qaf_1 /bin/bash
I get the error:
docker exec -it coreqafidm_qaf_1 /bin/bash
no such file or directory
Here is the docker-compose file:
version: '2'
services:
  master:
    image: gogent_master
    volumes:
      - .:/d1/apps/qaf
      - ./../core-idm-gogent/:/d1/apps/gogent
    ports:
      - "27000-27003:7000-7003"
      - "29999:9999"
    build:
      context: .
      dockerfile: Dockerfile.master
  qaf:
    image: gogent_qaf
    ports:
      - "30022:22"
    volumes:
      - .:/d1/apps/qaf
      - ./../core-idm-gogent/:/d1/apps/gogent
    depends_on: [master]
    build:
      context: .
      dockerfile: Dockerfile.qaf
Both Dockerfiles involved have this as their last WORKDIR command:
WORKDIR /d1/apps/qaf
If there is a REAL directory /d1/apps/qaf on the machine's own file system, docker exec works, to some degree: it will open up a shell. However, the mapped-in volumes are not available to this shell, and the files I see are the ones in the real directory, not what should be the mapped-in volume.
$ mkdir /d1/apps/qaf
$ docker exec -it coreqafidm_qaf_1 /bin/bash
root@eef95ca1b59b:/d1/apps/qaf#
root@eef95ca1b59b:/d1/apps/qaf# ls /d1/apps/gogent
ls: cannot access /d1/apps/gogent: No such file or directory
The volumes work correctly from within the docker-compose context. I have scripts executing in there and they work. It's just docker exec that fails to see the volumes.
The error stems from the container not finding /bin/bash, hence the no such file or directory error. The docker exec itself works fine, though.
Try with /bin/sh.
Well, I installed docker-compose etc. on a different machine and this problem was not there. Go figure. This is just one of those things I don't have time to track down.

Docker - issue command from one linked container to another

I'm trying to set up a primitive CI/CD pipeline using 2 Docker containers -- I'll call them jenkins and node-app. My aim is for the jenkins container to run a job upon commit to a GitHub repo (that's done). That job should run a deploy.sh script on the node-app container. Therefore, when a developer commits to GitHub, jenkins picks up the commit, then kicks off a job including automated tests (in the future) followed by a deployment on node-app.
The jenkins container is using the latest image (Dockerfile).
The node-app container's Dockerfile is:
FROM node:latest
EXPOSE 80
WORKDIR /usr/src/final-exercise
ADD . /usr/src/final-exercise
RUN apt-get update -y
RUN apt-get install -y nodejs npm
RUN cd /src/final-exercise; npm install
CMD ["node", "/usr/src/final-exercise/app.js"]
jenkins and node-app are linked using Docker Compose, and that docker-compose.yml file contains (updated, thanks to @alkis):
node-app:
  container_name: node-app
  build: .
  ports:
    - 80:80
  links:
    - jenkins
jenkins:
  container_name: jenkins
  image: jenkins
  ports:
    - 8080:8080
  volumes:
    - /home/ec2-user/final-exercise:/var/jenkins
The containers are built using docker-compose up -d and start as expected. docker ps yields (updated):
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
69e52b216d48 finalexercise_node-app "node /usr/src/final-" 3 hours ago Up 3 hours 0.0.0.0:80->80/tcp node-app
5f7e779e5fbd jenkins "/bin/tini -- /usr/lo" 3 hours ago Up 3 hours 0.0.0.0:8080->8080/tcp, 50000/tcp jenkins
I can ping jenkins from node-app and vice versa.
Is this even possible? If not, am I making an architectural mistake here?
Thank you very much in advance, I appreciate it!
EDIT:
I've stumbled upon nsenter and easily entering a container's shell using this and this. However, these both assume that the origin (in their case the host machine, in my case the jenkins container) has Docker installed in order to find the PID of the destination container. I can nsenter into node-app from the host, but still no luck from jenkins.
node-app:
  build: .
  ports:
    - 80:80
  links:
    - finalexercise_jenkins_1
jenkins:
  image: jenkins
  ports:
    - 8080:8080
  volumes:
    - /home/ec2-user/final-exercise:/var/jenkins
Try the above. You are linking by image name, but you must use the container name.
In your case, since you don't explicitly specify a container name, it gets auto-generated like this:
- finalexercise: the folder where your docker-compose.yml is located
- node-app: the service's tag in the compose config
- 1: you only have one container with the prefix finalexercise_node-app; if you started a second one, its name would be finalexercise_node-app_2
The setup of the yml files:
node-app:
build: .
container_name: my-node-app
ports:
- 80:80
links:
- my-jenkins
jenkins:
image: jenkins
container_name: my-jenkins
ports:
- 8080:8080
volumes:
- /home/ec2-user/final-exercise:/var/jenkins
Of course you can specify a container name for the node-app as well, so you can use something constant for the communication.
Update
In order to test, open a bash terminal in the jenkins container:
docker exec -it my-jenkins bash
Then try to ping my-node-app, or even telnet to the specific port (note that ping takes a hostname only, no port):
ping my-node-app
Or you could
telnet my-node-app 80
Update
What you want to do is easily accomplished by the exec command.
From your host you can execute this (try it so you are sure it's working)
docker exec -i <container_name> ./deploy.sh
If the above works, then your problem reduces to executing the same command from a container. As it stands you can't do that, since the container issuing the command (jenkins) doesn't have access to your host's Docker installation (which not only recognises the command, but holds control of the container you need access to).
I haven't used either of them, but I know of two solutions
Use this official guide to gain access to your host's docker daemon and issue docker commands from your containers as if you were doing it from your host.
Mount the docker binary and socket into the container, so the container acts as if it is the host (every command will be executed by the docker daemon of your host, since it's shared).
This thread from SO gives some more insight about this issue.
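For the second option, a minimal sketch of the compose change (assuming a docker CLI binary is also available inside the jenkins image, which the stock jenkins image does not ship with):

```yaml
jenkins:
  image: jenkins
  ports:
    - 8080:8080
  volumes:
    - /home/ec2-user/final-exercise:/var/jenkins
    # Share the host's Docker socket: docker commands run inside this
    # container are served by the host daemon, which can exec into node-app.
    - /var/run/docker.sock:/var/run/docker.sock
```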
