I have created a docker image for grails on dockerhub:
https://hub.docker.com/r/dhobdensa/docker-alpine-grails/
It is based on the official openjdk:alpine image
First I run the container:
docker container run -it --rm -p 8080:8080 -p 3000:3000 dhobdensa/docker-alpine-grails
Then I create a new grails app with the vue profile
grails create-app --inplace --profile vue
And then I run the app:
./gradlew bootRun --parallel
Which starts a grails REST API server, and a vue client app using vue-cli and webpack
The server reports that the app is running on localhost:8080. Accessing this returns the expected result.
The client reports that the app is running on localhost:3000. But when attempting to access this, the browser just shows the default ERR_EMPTY_RESPONSE page.
I have tried different browsers and clearing caches.
Any ideas why accessing port 3000 is not working, but 8080 is?
Additional info
It seems that gradle is essentially running this command:
webpack-dev-server --inline --progress --config build/webpack.dev.conf.js
And this is the file:
https://gist.github.com/dhobdensa/4e22a188cc2b26cf5b0dd4028755d39b
Perhaps this is linked to webpack dev server?
So I found my answer.
I suspected that webpack dev server was the place to be looking.
Then I found this issue on github:
Cant run webpack-dev-server inside of a docker container?
https://github.com/webpack/webpack-dev-server/issues/547
Long story short, I had to add --host 0.0.0.0 to the "dev" script in package.json:
"dev": "webpack-dev-server --inline --progress --config build/webpack.dev.conf.js --host 0.0.0.0"
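For reference, the same fix can live in the dev server config itself rather than in the CLI flag. A minimal sketch, assuming a stock vue-cli webpack template (your actual build/webpack.dev.conf.js will have many more options):

```javascript
// Sketch of the relevant fragment of build/webpack.dev.conf.js.
// Binding to 0.0.0.0 makes webpack-dev-server listen on all interfaces
// inside the container, so traffic forwarded by `docker run -p 3000:3000`
// can reach it; the default host only accepts the container's own
// loopback traffic, which is why the browser saw ERR_EMPTY_RESPONSE.
const devConfig = {
  devServer: {
    host: '0.0.0.0', // instead of the default 'localhost'
    port: 3000,
  },
};

module.exports = devConfig;
```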
Related
I was following a tutorial that serves content for a simple app using gulp. I got it working on my local (Mac) machine and could access the site at localhost:8000 in my browser. However, when I tried to dockerize it with the following Dockerfile:
FROM node:14.17.2-stretch-slim
WORKDIR /app
COPY . .
RUN npm i npm@latest -g && npm install -g gulp
RUN npm install
RUN npm rebuild node-sass
CMD ["gulp", "serve", "--dir=folder"]
EXPOSE 8000
After building the docker image and running it with:
docker run -p 8000:8000 test-image
It looked like it started successfully, with the same console output I get when running it locally:
[08:55:46] Using gulpfile /app/gulpfile.js
[08:55:46] Starting 'serve'...
[08:55:46] Starting 'build'...
...
[08:55:53] Finished 'build' after 6.99 s
[08:55:53] Starting 'watch'...
[08:55:53] Starting '<anonymous>'...
[08:55:53] Starting 'watch:css'...
[08:55:53] Starting 'watch:html'...
[08:55:53] Starting 'watch:images'...
[08:55:53] Starting 'watch:js'...
[08:55:53] Webserver started at http://localhost:8000
[08:55:54] Finished '<anonymous>' after 52 ms
I also verified with docker ps -a that the container was up with no errors:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f4c92aaf3a8a test-image "docker-entrypoint.s…" 5 seconds ago Up 4 seconds 0.0.0.0:8000->8000/tcp, :::8000->8000/tcp practical_borg
But when I tried to access localhost:8000 from my browser, I got "localhost didn't send any data. ERR_EMPTY_RESPONSE".
I also tried docker run -p 127.0.0.1:8000:8000 test-image with the same result. I'm not sure what I'm doing wrong; my understanding is that with Docker on Mac, I should be able to access the app at localhost:8000 as long as the port is mapped correctly.
Use 0.0.0.0 as the gulp host address; it will serve on all interfaces, and you should be able to reach it.
It hasn't worked so far because the localhost address (127.0.0.1) isn't the same between your host network and the container network.
I am trying to make my simple spring boot app to work on docker
I am running on windows 10, docker 2.2.0.5 engine 19.03.8
I tested a simple nginx image, and after starting it I am able to access it via "http://localhost:8000"
But when I try to run my app, it is not working
this is my Dockerfile:
FROM openjdk:8-jdk-alpine
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-jar","/app.jar"]
I set very basic application.properties file:
server.port=8080
spring.resources.static-locations=file:./swagger-ui/
then build and run with following commands:
docker build -t myapp-image .
docker run -p 8081:8080 myapp-image
the docker container is running; I am able to shell into it, and when I curl some API endpoint on port 8080 everything works, or at least I get a 404 error
I tested telnet from my host: 127.0.0.1 8081 is accessible
but when I try to open localhost:8081/somepath I get a "This page isn't working" error
what am I missing here, please help?
I can reach xxxx.com:8089 from my computer with a web browser; it is also running in a container, but on a different remote machine. Everything is fine. My cypress.json has:
"baseUrl": "http://xxxx.com:8089"
I am trying to run docker container to run tests with Cypress :
docker run --rm --name burak --ipc="host" -w /hede -v /Users/kurhanb/Desktop/nameoftheProject:/hede' cypress /bin/bash -c cypress run --browser chrome && chmod -R 777 . --reporter mochawesome --reporter-options --reportDir=Users/kurhanb/Desktop/CypressTest overwrite=false
It gives me :
Cypress could not verify that the server set as your 'baseUrl' is running:
http://xxxx.com
Your tests likely make requests to this 'baseUrl' and these tests will fail if you don't boot your server.
So basically, I can reach it from my computer, but the container cannot?
Cypress is giving you that error because at the time that Cypress starts, your application is not yet ready to start responding to requests.
Let's say your application takes ten seconds to set up and start responding to requests. If you start your application and Cypress at the exact same time, Cypress will not be able to reach your application for the first ten seconds. Rather than wait indefinitely for the app to come online, Cypress will exit and tell you that your app is not yet online.
The way to fix this problem is to make sure that your web server is up and serving before Cypress ever starts. The Cypress docs have some examples of how you can make this happen: https://docs.cypress.io/guides/guides/continuous-integration.html#Boot-your-server
I got the "Cypress failed to verify that your server is running" problem despite being able to access the site through a browser. The problem was that my
/etc/hosts
file did not include a line:
127.0.0.1 localhost
In your CI/CD pipeline, make sure your application is up and running before Cypress starts.
Using a Vue.js app as an example, use the following commands in your CI/CD pipeline scripts:
npm install -g wait-on
npm run serve & wait-on http://localhost:8080
cypress run
If you are trying to run your tests on a preview/staging environment and you can reach it manually, the problem may be your proxy settings. Try adding the domain / IP address to NO_PROXY:
set NO_PROXY="your_company_domain.com"
If you navigate to the Proxy Settings in the test runner, you should see the Proxy Server and a Proxy Bypass List, with the NO_PROXY values.
I hope it helps somebody. Happy coding.
I faced the same issue when I was working on a React application with GitLab CI and a Docker image. React applications use port 3000 by default, but somehow the Docker image was assigning default port 5000. So I used cross-env to change the React app's default port to 5000:
Steps
Download the cross-env package
In package.json, under scripts, change "start": "react-scripts start" to "start": "cross-env PORT=5000 react-scripts start"
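The resulting scripts entry in package.json would look roughly like this (a sketch; the rest of your package.json stays as it is):

```json
{
  "scripts": {
    "start": "cross-env PORT=5000 react-scripts start"
  }
}
```

cross-env just sets the PORT environment variable in a way that works on both Windows and POSIX shells before invoking react-scripts.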
Also, if working with Docker, you need to make sure that both your front end and back end are running. Cypress doesn't automatically start your front-end server.
Steps should be
npm start <front-end>
npm start <back-end>
npx cypress run or npx cypress open
I faced the same issue. I tried the suggestions above. I think this is a docker issue. Restarting docker fixed it for me.
I have a scrapy spider that uses Splash, which runs on Docker at localhost:8050, to render javascript before scraping. I am trying to run this on Heroku but have no idea how to configure Heroku to start the Splash Docker container before running my web: scrapy crawl abc dyno. Any guidance is greatly appreciated!
From what I gather you're expecting:
Splash instance running on Heroku via Docker container
Your web application (Scrapy spider) running in a Heroku dyno
Splash instance
As seen in Heroku's Container Registry - Pushing existing image(s):
Ensure docker CLI and heroku CLI are installed
heroku container:login
docker tag scrapinghub/splash registry.heroku.com/<app-name>/web
docker push registry.heroku.com/<app-name>/web
To test the application: heroku open -a <app-name>. This should allow you to see the Splash UI at port 8050 on the Heroku host for this app name.
You may need to ensure $PORT is set appropriately, since the EXPOSE Docker directive is not respected (https://devcenter.heroku.com/articles/container-registry-and-runtime#dockerfile-commands-and-runtime)
Running Dyno Scrapy Web App
Configure your application to point to <app-host-name>:8050, and the Scrapy spider should now be able to make requests to the Splash instance run previously.
Ran into the same problem. Finally, I successfully deployed the Splash Docker image on Heroku.
This is my solution:
I cloned the Splash project from GitHub and changed the Dockerfile:
Removed the EXPOSE command because it's not supported by Heroku.
Replaced ENTRYPOINT with a CMD command:
CMD python3 /app/bin/splash --proxy-profiles-path /etc/splash/proxy-profiles --js-profiles-path /etc/splash/js-profiles --filters-path /etc/splash/filters --lua-package-path /etc/splash/lua_modules/?.lua --port $PORT
Notice that I added the option --port $PORT. This is just to listen on the port specified by Heroku instead of the default (8050).
A fork of the project with this change is available here
You just need to build the docker image and push it to the heroku's registry, like you did before.
You can test it locally first, but you must pass the environment variable PORT when running Docker:
sudo docker run -p 80:80 -e PORT=80 mynewsplashimage
I spent the weekend poring over the Docker docs and playing around with the toy applications and example projects. I'm now trying to write a super-simple web service of my own and run it from inside a container. In the container, I want my app (a Spring Boot app under the hood) -- called bootup -- to have the following directory structure:
/opt/
    bootup/
        bin/
            bootup.jar ==> the app
        logs/
            bootup.log ==> log file; GETS CREATED BY THE APP AT STARTUP
        config/
            application.yml ==> app config file
            logback.groovy ==> log config file
It's very important to note that when I run my app locally on my host machine - outside of Docker - everything works perfectly fine, including the creation of log files to my host's /opt/bootup/logs directory. The app endpoints serve up the correct content, etc. All is well and dandy.
So I created the following Dockerfile:
FROM openjdk:8
RUN mkdir /opt/bootup
RUN mkdir /opt/bootup/logs
RUN mkdir /opt/bootup/config
RUN mkdir /opt/bootup/bin
ADD build/libs/bootup.jar /opt/bootup/bin
ADD application.yml /opt/bootup/config
ADD logback.groovy /opt/bootup/config
WORKDIR /opt/bootup/bin
EXPOSE 9200
ENTRYPOINT java -Dspring.config=/opt/bootup/config -jar bootup.jar
I then build my image via:
docker build -t bootup .
I then run my container:
docker run -it -p 9200:9200 -d --name bootup bootup
I run docker ps:
CONTAINER ID IMAGE COMMAND ...
3f1492790397 bootup "/bin/sh -c 'java ..."
So far, so good!
My app should then be serving a simple web page at localhost:9200, so I open my browser to http://localhost:9200 and I get nothing.
When I use docker exec -it 3f1492790397 bash to "ssh" into my container, I see everything looks fine, except the /opt/bootup/logs directory, which should have a bootup.log file in it -- created at startup -- is instead empty.
I tried using docker attach 3f1492790397 and then hitting http://localhost:9200 in my browser, to see if that would generate some standard output (my app logs both to /opt/bootup/logs/bootup.log as well as to the console), but that doesn't yield any output.
So I think what's happening is that my app (for some reason) doesn't have permission to create its own log file when the container starts up, which puts the app in a weird state or even prevents it from starting up altogether.
So I ask:
Is there a way to see what user my app is starting up as?; or
Is there a way to tail standard output while the container is starting? Attaching after startup doesn't help me because I think by the time I run the docker attach command the app has already choked
Thanks in advance!
I don't know why your app isn't working, but can answer your questions-
Is there a way to see what user my app is starting up as?; or
A: Docker containers run as root unless otherwise specified.
Is there a way to tail standard output while the container is starting? Attaching after startup doesn't help me because I think by the time I run the docker attach command the app has already choked
A: Docker containers send stdout/stderr to the Docker logs by default. There are two ways to see these: one is to run the container with the flag -it instead of -d to get an interactive session that shows the stdout from your container; the other is to use the docker logs <container_name> command on a running or stopped container.
docker attach 3f1492790397
This doesn't do what you are hoping for. What you want is docker exec (probably docker exec -it bootup bash), which will give you a shell inside the container, letting you check for your log files or try to hit the app with curl from inside the container.
Why do I get no output?
Hard to say without the info from the earlier commands. Is your app listening on 0.0.0.0 or on localhost (your laptop's browser will look like an external machine to the container)? Does your app require a supervisor process that isn't running? Does it require some other JAR files that are on the CLASSPATH on your laptop but not in the container? Are you running Docker using Docker-Machine (in which case localhost is probably not the address of the container)?