pwa-studio Docker configuration for a dev environment - Docker

To work with pwa-studio I have to use Docker, since I am on Windows. So I set it up in the simplest way possible for me, with this Dockerfile:
FROM node:10
RUN mkdir /app
WORKDIR /app
EXPOSE 3000
Then I created the container:
winpty docker run -it -p 3000:3000 --mount type=bind,source="$(pwd)",target=/app nfr:1.0 bash
All commands related to installing packages or running the app are executed while attached to the container.
Since the port is exposed, I can see my app running at localhost:3000.
PROBLEM:
One of the first configuration steps in pwa-studio is to add a custom hostname and SSL cert, which is done with
yarn buildpack create-custom-origin ./
As a result, the app inside the container is no longer available via localhost:3000 but via a domain name; in my case it is: https://pwa-aynbv.local.pwadev:3000/
How do I configure Docker to expose this domain outside of the container?
Thanks in advance for any help

Related

How can I run docker commands inside a Dockerfile?

I have this Dockerfile:
# Use the official image as a parent image
FROM mysql/mysql-server:8.0
# Set the working directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Copy the file from your host to your current location
COPY customers.sql .
COPY entrypoint.sh .
# Inform Docker that the container is listening on the specified port at runtime.
EXPOSE 1433:1433
# Run the command inside your image filesystem
RUN chmod +x entrypoint.sh
# Run the specified command within the container.
RUN /bin/bash ./entrypoint.sh
And entrypoint.sh:
mysql --host=localhost --protocol=tcp -u root -pMypassword -e "create database customersDatabase; use customersDatabase; source customers.sql;"
But I get the following error message:
ERROR 2003 (HY000): Can't connect to MySQL server on 'localhost' (99)
when I run docker build.
What is the correct way to write entrypoint.sh so that these commands run?
BEFORE OP EDIT:
problem:
./entrypoint.sh: line 2: docker: command not found
You are trying to run docker inside docker.
Possible solutions:
1) Mount the host's docker socket into the container, or
2) Install docker inside the container before you run your entrypoint:
apt install docker.io
(expect a much larger image)
Difference between 1) and 2): in 1) your container's docker is the host's docker, while in 2) the docker installed inside the container is independent and thus isolated from the host.
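A sketch of option 1) as a Compose file (the service name is illustrative; the official `docker` image's `-cli` tags ship only the client, and the mounted socket makes that client talk to the host's daemon):

```yaml
# docker-compose.yml (sketch; service and image tags are illustrative)
services:
  builder:
    image: docker:cli          # client-only variant of the docker image
    volumes:
      # The docker CLI inside the container now talks to the HOST daemon:
      - /var/run/docker.sock:/var/run/docker.sock
    command: docker ps         # e.g. lists the host's containers
```

Note that with the socket mounted, containers started this way are siblings of your container on the host, not children inside it.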
AFTER OP EDIT
problem:
ERROR 2003 (HY000): Can't connect to MySQL server on 'localhost' (99)
EDIT: Since you edited your question, which now doesn't correspond with your title, I will address your second problem.
You cannot connect to localhost because inside the container, localhost is the container itself, not your host.
This can be solved by using the host network driver.
Or, preferably, put your DB in Docker too, place both containers on the same Docker network, expose the port, name your DB container something like mysql_database, and connect to it as mysql_database:port.
Or don't try to connect, from within your container, to a DB living in that same container; I think that's an antipattern. Usually it should be possible to get into the DB's CLI and run commands there.
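Note also that the failing mysql call sits in a RUN instruction, i.e. it executes at image build time, when no server is up yet. For the specific case of seeding a database, the official MySQL images document an init directory, which avoids making a client connection at build time entirely; a sketch:

```dockerfile
FROM mysql/mysql-server:8.0
# The official MySQL images execute any *.sql files found in this
# directory the first time the container initializes its data directory.
COPY customers.sql /docker-entrypoint-initdb.d/
```

The CREATE DATABASE statement can either stay at the top of customers.sql or be supplied via the image's documented MYSQL_DATABASE environment variable at docker run time.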

How to pull and run a docker image from a repo - Docker

I have a WebApi created using .Net Core. I have a .dockerfile in the root of my solution:
FROM mcr.microsoft.com/dotnet/core/aspnet:2.1-stretch-slim AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM mcr.microsoft.com/dotnet/core/sdk:2.1-stretch AS build
WORKDIR /src
COPY ["Nuget.config", ""]
COPY ["Proyecto.WebApi/Proyecto.WebApi.csproj", "Proyecto.WebApi/"]
COPY ["Proyecto.Model/Proyecto.Model.csproj", "Proyecto.Model/"]
COPY ["Proyecto.Bl/Proyecto.Bl.csproj", "Proyecto.Bl/"]
RUN dotnet restore "Proyecto.WebApi/Proyecto.WebApi.csproj" --configfile "Nuget.config"
COPY . .
WORKDIR "/src/Proyecto.WebApi"
RUN dotnet build "Proyecto.WebApi.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "Proyecto.WebApi.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "Proyecto.WebApi.dll"]
I can build, run and push my Api without problem using those commands:
docker build -t dockerlocal .
docker run -d -p 8080:80 --name myapp dockerlocal
docker tag <image_id> docker.nexus.example.com/dockerlocal
docker push docker.nexus.example.com/dockerlocal
When I test my api from the browser http://localhost:8080/api/values, it works perfectly.
Now I need to download the image from the repo and run the API, so I execute:
docker pull docker.nexus.example.com/dockerlocal
docker run -d -p 9090:90 --name mynexusapp docker.nexus.example.com/dockerlocal
According to the console, everything is working. But when I test the API at http://localhost:9090/api/values, my browser shows:
"localhost did not send any data. ERR_EMPTY_RESPONSE"
What is the problem? Why can't I run my WebApi after the docker pull command?
Following discussion in the comments: the issue here was that the port mapping was changed from 8080:80 in the first command to 9090:90 in the second command. Switching the container side of the mapping back to port 80 fixes the issue, as confirmed by the author.
Now, to explain what happens here: the port mapping 8080:80 means that you are mapping port 8080 on the host environment to port 80 on the guest environment, where the guest environment is the actual container.
The host-port part of the mapping may be changed arbitrarily (i.e. changing it from 8080 to 9090 works), as long as the host port is not taken by another process. The same is not true of the guest port, since the guest port is determined by the process running in the container. So if the containerized application listens on port 80, then switching the guest part of the mapping to port 90 (as happened in this case) won't work, because nothing is listening there.
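The "nothing is listening there" failure is easy to demonstrate outside Docker as well; a minimal Go sketch (ports here are OS-assigned, purely for illustration):

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// Stand-in for the container app: listen on an OS-assigned port
	// (this plays the role of the app's fixed port, e.g. 80).
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		panic(err)
	}
	addr := ln.Addr().String()

	// Dialing the port that is actually listened on succeeds.
	c, err := net.Dial("tcp", addr)
	fmt.Println("listened-on port reachable:", err == nil)
	if c != nil {
		c.Close()
	}

	// Close the listener and dial the same port again: nothing is
	// listening now, so the connection is refused. This mirrors mapping
	// 9090:90 while the app was listening on 80.
	ln.Close()
	_, err = net.Dial("tcp", addr)
	fmt.Println("silent port reachable:", err == nil)
}
```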
It is also quite possible to have several port mappings: if the container has port 80 where your application listens and port 90 where some admin console listens, you may end up with two mappings, say 8080:80 and 9090:90.
If you want to run multiple environments at the same time, spawn multiple containers, e.g.:
docker run -d -p 8080:80 --name myapp1 dockerlocal
docker run -d -p 9090:80 --name myapp2 dockerlocal
Note, however, that you will end up with two separate containers running independently of each other.

Revel and Docker container

I am attempting to create a Docker container that contains the Revel skeleton app. Everything seems to build OK and the container is created, but when I go to localhost:9000 in my browser, nothing comes up.
To make sure my environment is working properly I created a simple hello world go app and created a docker container for it. It worked OK using the same port 9000. This makes me think that there is something not configured properly in my dockerfile.
Dockerfile:
#Compile stage
FROM golang:1.11.4-alpine3.8 AS build-env
ENV CGO_ENABLED 0
RUN apk add --no-cache git
ADD . /go/src/revelapp
# Install revel framework
RUN go get -u github.com/revel/revel
RUN go get -u github.com/revel/cmd/revel
#build revel app
RUN revel build revelapp app dev
# Final stage
FROM alpine:3.8
EXPOSE 9000
WORKDIR /
COPY --from=build-env /go/app /
ENTRYPOINT /run.sh
Docker command used:
docker build -t revelapp . && docker run -p 9000:9000 --name revelapp revelapp
After the command is executed and the container is created, the console shows:
INFO 17:25:01 app run.go:32: Running revel server
INFO 17:25:01 app plugin.go:9: Go to /#tests to run the tests.
Revel engine is listening on.. localhost:9000
When I go to localhost:9000 I would expect to see the text It Works!
You're listening on localhost:9000, and inside the container 127.0.0.1 is the container itself, not your local machine, so the published port can't reach the app.
You have two solutions to make it work:
1) Listen on 0.0.0.0:9000 instead, or
2) Use --network="host" in your docker run command: 127.0.0.1 in your docker container will then point to your docker host.
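For the first option, a sketch of the change in Revel's conf/app.conf (option names as in Revel's configuration documentation; verify against your Revel version):

```
# conf/app.conf -- bind to all interfaces instead of loopback only
http.addr = 0.0.0.0
http.port = 9000
```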

Cannot access server running in container from host

I have a simple Dockerfile
FROM golang:latest
RUN mkdir -p /app
WORKDIR /app
COPY . .
ENV GOPATH /app
RUN go install huru
EXPOSE 3000
ENTRYPOINT /app/bin/huru
I build like so:
docker build -t huru .
and run like so:
docker run -it -p 3000:3000 huru
For some reason, when I go to localhost:3000 in the browser, I can't reach the server.
I have exposed servers running in containers to the host machine before, so I'm not sure what's going on.
From the information provided in the question: if the application logs
(docker logs <container_id>) show the application starting successfully, then it looks like port exposure is done correctly.
In any case, to see the port mappings while the container is up and running, you can use:
docker ps
and check the "PORTS" section.
If you see something like 0.0.0.0:3000->3000/tcp there,
then I would start suspecting firewall rules that prevent the application from being accessed.
Another possible reason (although probably you've checked this already) is that the application starts and finishes before you actually try to access it in the browser.
In this case, docker ps won't show the exited container, but docker ps -a will.
The last thing I can think of is that inside the container the application doesn't actually answer on port 3000 (maybe the startup script starts the web server on some other port, so exposing port 3000 doesn't do anything useful).
To check this, you can enter the container itself with something like docker exec -it <container_id> bash
and check for open ports with lsof -i, or just wget localhost:3000 from within the container itself.
Try this one; if it produces any log output, please check it:
FROM golang:latest
RUN apt -y update
RUN mkdir -p /app
COPY . /app
RUN go install huru
WORKDIR /app
docker build -t huru:latest .
docker run -it -p 3000:3000 huru:latest bin/huru
Try this URL: http://127.0.0.1:3000 (I use the loopback address).

Dockerfile build : Unable to connect to docker daemon

I am trying to modify the Dockerfile of alpine:3.4 to include running git commands and automatically running nginx. Here are the changes I am appending to the default Dockerfile.
RUN apk update
RUN apk add git
RUN mkdir mygit
RUN cd mygit
RUN git clone 'some url'
RUN apk add sudo
RUN sudo apk add docker
RUN sudo docker run --rm --name nginx nginx
The git command executes successfully, and RUN apk add docker also runs successfully. However, RUN sudo docker run --rm --name nginx nginx fails.
Here is the log.
Step 28/31 : RUN sudo apk add docker
---> Using cache
---> 1cdf3005ea4b
Step 29/31 : RUN sudo docker run --rm --name nginx nginx
---> Running in 6c8c03b8a97d
docker: Cannot connect to the Docker daemon. Is the docker daemon running on this host?.
You are trying to run Docker in Docker, which is "not possible" by default. Why don't you extend the nginx image instead and add git there?
Anyway, this feels like a fool's errand. Instead, you should have a build environment from which you copy the application data into an nginx container. Don't try to put everything in one container.
For instance, look at my example Dockerfile, which serves a Jekyll-based static site:
FROM nginx:1.13-alpine
COPY site/ /usr/share/nginx/html
COPY default.conf /etc/nginx/conf.d/default.conf
It is better to use one container per service.
Use Docker Compose for your use case.
For sharing data between two containers, you can always use something like volumes (which are persistent, and your host can use them too). This will solve your problem.
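Sketched as a Compose file (service names are placeholders, and the clone URL stays the question's 'some url'):

```yaml
# docker-compose.yml (illustrative sketch)
services:
  web:
    image: nginx:1.13-alpine
    ports:
      - "8080:80"
    volumes:
      - site-data:/usr/share/nginx/html:ro  # nginx serves the shared volume

  content:
    image: alpine/git            # small image whose entrypoint is git
    command: clone 'some url' /data
    volumes:
      - site-data:/data

volumes:
  site-data:
```

This keeps git out of the nginx image entirely: the content service populates the volume, and nginx only serves it.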
