I created a new Laravel 5 application in a Docker container. I can access the home URL and get the welcome message. I tried creating new routes and they work too. Then I ran a MariaDB Docker container to link to the Laravel 5 application. Here is where the problems begin.
When I try to run migrations in Laravel 5 with the following command:
php artisan migrate --force
And I get the following error message:
Can't connect to MySQL server on '127.0.0.1'
My .env file looks like this:
DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_DATABASE=blog
DB_USERNAME=blog
DB_PASSWORD=123456
I know these variables are used by Laravel to connect to the database, because they show up in the Laravel log file like this:
PDO->__construct('mysql:127….', 'blog', '123456', Array)
The database engine is MariaDB and it is running in a Docker container. This container exposes port 3306 and is linked to the container that runs Laravel. To link the containers I use the following docker command:
docker run -i -t --link mariadb:mysql miguelbgouveia/laravel:v3 /bin/bash
I also know that my MariaDB Docker container is running with the correct configuration, because I use a phpMyAdmin Docker container that is linked to it and I can connect to the database successfully. I link the MariaDB container to the phpMyAdmin container in the same manner as I link it to the Laravel container (--link mariadb:mysql).
Why can't I connect to the database? Is there any configuration or PHP module missing?
After all, it is very simple. If I use mysql as the host in my environment variables, it just works, without having to know the IP address of the MariaDB Docker container.
The .env file goes like this:
DB_CONNECTION=mysql
DB_HOST=mysql
DB_DATABASE=blog
DB_USERNAME=blog
DB_PASSWORD=123456
Now I can connect to the MariaDB engine successfully.
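This works because --link mariadb:mysql writes an entry for the mysql alias into the Laravel container's /etc/hosts, pointing at the MariaDB container's IP. You can confirm it from inside the Laravel container (a sketch; the actual IP Docker assigns will differ):
grep mysql /etc/hosts
# 172.17.0.2    mysql <mariadb-container-id> mariadb
php artisan migrate --force # should now reach the database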
Related
I have Laradock set up and serving a website in Laravel, but when I try to run php artisan migrate I get this error.
SQLSTATE[HY000] [2002] No such file or directory (SQL: select * from information_schema.tables where table_schema = yt and table_name = migrations)
DB_CONNECTION=mysql
DB_HOST=localhost
DB_PORT=3306
DB_DATABASE=yt
DB_USERNAME=root
DB_PASSWORD=root
I can not seem to find a solution to my issue.
First, you should check which container runs the mysql service:
sudo docker ps
Maybe the mysql container does not expose its port to localhost (127.0.0.1), so Laravel can't connect to it.
Find the mysql container name, then change DB_HOST. Let's take an example:
app-container 172.0.0.1
mysql-container 172.0.0.2
When Docker starts the containers, it creates a virtual network for them and exposes it to your computer. So if you want Laravel to work with mysql, you should change DB_HOST to 172.0.0.2 in this example.
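To find the actual IP Docker assigned to the database container, you can inspect it (a sketch; mysql-container is the name from the example above):
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' mysql-container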
I had the same issue with Laradock on macOS; I couldn't connect to the MariaDB container.
My way:
Get the correct name of the MariaDB container:
docker ps
Inspect the container (for example, the container name is container_mariadb_1):
docker inspect container_mariadb_1
At the very bottom of the long list of parameters you can see IPAddress:
"IPAddress": "172.26.0.3"
I put this IP into Laravel's .env config file as DB_HOST, and that's it.
Of course I'm not sure if this way is really correct, but I know that it has worked for me at least twice.
UPDATE: Also, in my case Laravel connects to MariaDB normally if I use DB_HOST=mariadb in the .env file.
I have two Docker containers
A Web API
A Console Application that calls Web API
Now, locally, the Web API is on localhost and the console application has no problem calling the API. However, I have no idea, once these two things are Dockerized, how I can make the URL of the Dockerized API available to the Dockerized console application.
I don't think I need Docker Compose, because I am passing the URL of the API as an argument, so it's just a matter of making sure that the Dockerized API's URL is accessible by the Dockerized console application.
Any ideas?
The idea is not to pass the URL, but the hostname of the other container you want to call.
See Networking in Compose
By default Compose sets up a single network for your app. Each container for a service joins the default network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name.
This is what replaces the deprecated --link option.
And if your containers are not running on a single Docker server node, Docker Swarm Mode would enable that discoverability across multiple nodes.
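For the two containers in the question, a minimal docker-compose.yml sketch could look like this (the image names and API port are assumptions):
version: '3'
services:
  api:
    image: my-web-api # hypothetical image name for the Web API
  console:
    image: my-console-app # hypothetical image name for the console application
    depends_on:
      - api
    environment:
      API_URL: http://api:80 # the service name "api" resolves on the default Compose network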
This is the best way I have found to connect multiple containers in a local machine / single cluster.
Given: data-provider-service, data-consumer-service
Option 1: Using Network
docker network create data-network
docker run --name=data-provider-service --net=data-network -p 8081:8081 data-provider-image
docker run --name=data-consumer-service --net=data-network -p 8080:8080 data-consumer-image
Make sure to use URIs like: http://data-provider-service:8081/ inside your data-consumer-service.
Option 2: Using Docker Compose
You can define both services in a docker-compose.yml file and use the depends_on property in data-consumer-service.
e.g.
data-consumer-service:
  depends_on:
    - data-provider-service
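Expanding that fragment, a complete docker-compose.yml for the two services might look like this (a sketch, using the image names and ports from the docker run example above):
version: '3'
services:
  data-provider-service:
    image: data-provider-image
    ports:
      - "8081:8081"
  data-consumer-service:
    image: data-consumer-image
    ports:
      - "8080:8080"
    depends_on:
      - data-provider-service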
You can see more details here on my Medium post: https://saggu.medium.com/how-to-connect-nultiple-docker-conatiners-17f7ca72e67f
You can use the link option with docker run:
Run the API:
docker run -d --name api api_image
Run the client:
docker run --link api busybox ping api
You should see that api can be resolved by docker.
That said, going with docker-compose is still a better option.
The problem can be solved easily by using the Compose feature. With Compose, you just create one configuration file (docker-compose.yml) like this:
version: '3'
services:
  db:
    image: postgres
  web:
    build: .
    command: python3 manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
To bring it up, just run:
docker-compose up
This is the best way to run your whole stack, so check this reference:
https://docs.docker.com/compose/
Success!
I have a Node.js web-application that connects to a Neo4j database. I would like to encapsulate these in a single Docker image (using also a Neo4j Docker container), but I'm a docker novice and can't seem to figure this out. What's the recommended way to do it in the latest Docker versions?
My intuition would be to run the Neo4j container nested inside the app container. But from what I've read, I think the supported / recommended approach is to link the containers together. What I need is pretty well illustrated in this image. But the article where the image comes from isn't clear to me. Anyway, it's using the soon-to-be-deprecated legacy container linking, while networking is recommended these days. A tutorial or explanation would be much appreciated.
Also, how does docker-compose fit into all this?
Running a container within another container would imply running a Docker engine within a Docker container. This is referred to as dind, for Docker-in-Docker, and I would strongly advise against it. You can search for 'dind' online and discover why in most cases it is a bad idea, but as it is not the main subject of your question I won't expand on it any further.
Running both a node.js process and a neo4j process in the same container
While most people will tell you to refrain from running more than one process within a Docker container, nothing prevents you from doing so. If you want to follow this path, take a look at Using Supervisor with Docker on the Docker documentation website, or at the Phusion baseimage Docker image.
Just be aware that this way of doing things will make your Docker image more and more difficult to maintain over time.
Linking containers
As you found out, keeping Docker images as simple as you can (i.e. running one and only one app within a Docker container) will make your life easier in the long term.
Linking containers together is trivial when both containers run on the same Docker engine. It is just a matter of:
having your neo4j container expose the port its service listens on
running your node.js container with the --link <neo4j container name>:<alias> option
within the node.js application configuration, set the neo4j host to the <alias> hostname; docker will take care of forwarding that connection to the IP it assigned to the neo4j container (a concrete sketch follows this list)
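Concretely, the legacy-link approach might look like this (a sketch; the container names and alias are assumptions, matching the network example further down):
docker run --detach --name myneo4j neo4j
docker run --detach --name mynodejs --link myneo4j:neo4j <your nodejs image>
# inside the node.js app, connect to http://neo4j:7474 (the alias resolves to the neo4j container)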
When you want to run those two containers on different hosts, things get more difficult.
With Docker Compose, you have to use the links: key to define your links.
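For example, in the (v1) compose format used below, that looks like this (the service names are placeholders):
web:
  build: .
  links:
    - db
db:
  image: neo4j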
The new Docker network feature
You also discovered that linking containers won't be supported in the future and that the new way of making multiple Docker containers communicate is to create a virtual network and attach those 2 containers to that network.
Here's how to proceed:
docker network create mynet
docker run --detach --name myneo4j --net mynet neo4j
docker run --detach --name mynodejs --net mynet <your nodejs image>
Your node application configuration should then use myneo4j as the host to connect to.
To tell Docker Compose to use the new network feature, you would have to use the --x-networking option. Also you would not use the links: key.
Using the new networking feature also means that you won't be able to define any alias for the db. As a result you have to use the container name. Beware that unless you use the container_name: key in your docker-compose.yml file, Compose will create container names based on the directory which contains your docker-compose.yml file, the service name as found in the yml file and a number.
For instance, the following docker-compose.yml file, if within a directory named "foo" would create two containers named foo_web_1 and foo_db_1:
web:
  build: .
  ports:
    - "8000:8000"
db:
  image: postgres
when started with docker-compose --x-networking up, the web app configuration should then use foo_db_1 as the db hostname.
While if you use container_name:
web:
  build: .
  ports:
    - "8000:8000"
db:
  image: postgres
  container_name: mydb
when started with docker-compose --x-networking up, the web app configuration should then use mydb as the db hostname.
Example of using Docker Compose to run a web app using nodeJS and neo4j
In this example, I will show how to dockerize the example app from the GitHub project aseemk/node-neo4j-template, which uses nodejs and neo4j.
I assume you already have Docker 1.9.0+ and Docker Compose 1.5+ installed.
This project will use 2 docker containers, one to run the neo4j database and one to run the nodeJS web app.
Dockerizing the web app
We need to build a Docker image from which Docker compose will run a container. For that, we will write a Dockerfile.
Create a file named Dockerfile (mind the capital D) with the following content:
FROM node
RUN git clone https://github.com/aseemk/node-neo4j-template.git
WORKDIR /node-neo4j-template
RUN npm install
# ugly 20s sleep to wait for neo4j to initialize
CMD sleep 20s && node app.js
This Dockerfile describes the steps the Docker engine will have to follow to build a docker image for our web app. This docker image will:
be based on the official node docker image
clone the nodeJS example project from Github
change the working directory to the directory containing the git clone
run the npm install command to download and install the nodeJS app dependencies
instruct docker which command to use when running a container of that image
A quick review of the nodeJS code reveals that the author allows us to configure the URL to use to connect to the neo4j database using the NEO4J_URL environment variable.
Dockerizing the neo4j database
Well people took care of that for us already. We will use the official Docker image for neo4j which can be found on the Docker Hub.
A quick review of the readme tells us to use the NEO4J_AUTH environment variable to change the neo4j password. Setting this variable to none will disable authentication altogether.
Setting up Docker Compose
In the same directory as the one containing our Dockerfile, create a docker-compose.yml file with the following content:
db:
  container_name: my-neo4j-db
  image: neo4j
  environment:
    NEO4J_AUTH: none
web:
  build: .
  environment:
    NEO4J_URL: http://my-neo4j-db:7474
  ports:
    - 80:3000
This Compose configuration file describes 2 services: db and web.
The db service will produce a container named my-neo4j-db from the official neo4j docker image and will start that container setting up the NEO4J_AUTH environment variable to none.
The web service will produce a container named at Docker Compose's discretion, using a docker image built from the Dockerfile found in the current directory (build: .). It will start that container setting the environment variable NEO4J_URL to http://my-neo4j-db:7474 (note how we use the name of the neo4j container, my-neo4j-db, here). Furthermore, Docker Compose will instruct the Docker engine to expose the web container's port 3000 on the docker host's port 80.
Firing it up
Make sure you are in the directory that contains the docker-compose.yml file and type: docker-compose --x-networking up.
Docker compose will read the docker-compose.yml file, figure out it has to first build a docker image for the web service, then create and start both containers and finally will provide you with the logs from both containers.
Once the log shows web_1 | Express server listening at: http://localhost:3000/, everything is cooked and you can point your browser at http://<ip of the docker host>/.
To stop the application, hit Ctrl+C.
If you want to start the app in the background, use docker-compose --x-networking up -d instead. Then in order to display the logs, run docker-compose logs.
To stop the service: docker-compose stop
To delete the containers: docker-compose rm
Making neo4j storage persistent
The official neo4j docker image readme says the container persists its data on a volume at /data. We then need to instruct Docker Compose to mount that volume to a directory on the docker host.
Change the docker-compose.yml file with the following content:
db:
  container_name: my-neo4j-db
  image: neo4j
  environment:
    NEO4J_AUTH: none
  volumes:
    - ./neo4j-data:/data
web:
  build: .
  environment:
    NEO4J_URL: http://my-neo4j-db:7474
  ports:
    - 80:3000
With that config file, when you run docker-compose --x-networking up, docker compose will create a neo4j-data directory and mount it into the container at location /data.
Starting a 2nd instance of the application
Create a new directory and copy over the Dockerfile and docker-compose.yml files.
We then need to edit the docker-compose.yml file to avoid name conflict for the neo4j container and the port conflict on the docker host.
Change its content to:
db:
  container_name: my-neo4j-db2
  image: neo4j
  environment:
    NEO4J_AUTH: none
  volumes:
    - ./neo4j-data:/data
web:
  build: .
  environment:
    NEO4J_URL: http://my-neo4j-db2:7474
  ports:
    - 81:3000
Now it is ready for the docker-compose --x-networking up command. Note that you must be in the directory with that new docker-compose.yml file to start the 2nd instance up.
I'm developing a server and its client simultaneously and I'm designing them in Docker containers. I'm using Docker Compose to link them up and it works just fine for production but I can't figure out how to make it work with a development workflow in which I've got a shell running for each one.
My docker-compose-devel.yml:
server:
  image: node:0.10
client:
  image: node:0.10
  links:
    - server
I can do docker-compose up client or even docker-compose run client but what I want is a shell running for both server and client so I can make rapid changes to both as I develop iteratively.
I want to be able to do docker-compose run server bash in one window and docker-compose run --no-deps client bash in another window. The problem with this is that no address for the server is added to /etc/hosts on the client because I'm using docker-compose run instead of up.
The only solution I can figure out is to use docker run and give up on Docker Compose for development. Is there a better way?
Here's a solution I came up with that's hackish; please let me know if you can do better.
docker-compose-devel.yml:
server:
  image: node:0.10
  command: sleep infinity
client:
  image: node:0.10
  links:
    - server
In window 1:
docker-compose --file docker-compose-devel.yml up -d server
docker exec --interactive --tty $(docker-compose --file docker-compose-devel.yml ps -q server) bash
In window 2:
docker-compose --file docker-compose-devel.yml run client bash
I guess your main problem is about restarting the application when there are changes in the code.
Personally, I launch my applications in development containers using forever.
forever -w -o log/out.log -e log/err.log app.js
The -w option restarts the server when there is a change in the code.
I use a .foreverignore file to exclude the changes on some files:
**/.tmp/**
**/views/**
**/assets/**
**/log/**
If needed, I can also launch a shell in a running container:
docker exec -it my-container-name bash
This way, your two applications can restart independently without you having to launch the commands yourself, and you still have the possibility to open a shell to do whatever you want.
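Putting that together, a dev container could mount the source from the host so forever's watcher picks up your edits (a sketch; the container name, image, and paths are assumptions, and forever is installed at start-up since the stock node image does not ship it):
docker run -d --name my-container-name -v "$(pwd)":/src -w /src node:0.10 \
  sh -c "npm install -g forever && forever -w -o log/out.log -e log/err.log app.js"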
Edit: New proposition considering that you need two interactive shells and not simply the possibility to relaunch the apps on code changes.
Having two distinct applications, you could have a docker-compose configuration for each one.
The docker-compose.yml from the "server" app could contain this kind of information (I added different kind of configurations for the example):
server:
  image: node:0.10
  links:
    - db
  ports:
    - "8080:80"
  volumes:
    - ./src:/src
db:
  image: postgres
  environment:
    POSTGRES_USER: dev
    POSTGRES_PASSWORD: dev
The docker-compose.yml from the "client" app could use external_links to be able to connect to the server.
client:
  image: node:0.10
  external_links:
    - project_server_1:server # Use "docker ps" to know the name of the server's container
  ports:
    - "80:80"
  volumes:
    - ./src:/src
Then, use docker-compose run --service-ports service-name bash to launch each configuration with an interactive shell.
Alternatively, the extra_hosts key may also do the trick by calling the server app through a port exposed on the host machine.
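That variant might look like this (a sketch; the IP is an assumption and would be your Docker host's address, with the server reached through its host-published port, e.g. http://server:8080):
client:
  image: node:0.10
  extra_hosts:
    - "server:192.168.99.100" # resolve the hostname "server" to the Docker host's IP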
With this solution, each docker-compose.yml file could be committed in the repository of the related app.
First thing to mention: for a development environment you want to use volumes from docker-compose to mount your app into the container when it's started (at runtime). Sorry if you're already doing this and I'm stating the obvious, but it's not clear from your docker-compose.yml.
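A minimal sketch of such a volume mount (the host path ./server and container path /src are assumptions):
server:
  image: node:0.10
  volumes:
    - ./server:/src # mount the source from the host so code changes appear in the running container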
To answer your specific question: start your containers normally, then run docker-compose ps and you'll see the names of your containers, for example 'web_server' and 'web_client' (where web is the directory of your docker-compose.yml file, or the name of the project).
Once you have the name of the container you want to connect to, you can run this command to start bash exactly in the container that's running your server:
docker exec -it web_server bash.
If you want to learn more about setting up a development environment for a reasonably complex app, check out this article on development with docker-compose.
I'm a newbie to docker.
I want to create an image with my web application. I need some application server, e.g. wlp, then I need some database, e.g. postgres.
There is a Docker image for wlp and there is a Docker image for postgres.
So I created the following simple Dockerfile.
FROM websphere-liberty:javaee7
FROM postgres:latest
Now, maybe it's lame, but when I build this image
docker build -t wlp-db .
run the container
docker run -it --name wlp-db-test wlp-db
and check it
docker exec -it wlp-db-test /bin/bash
only postgres is running and wlp is not even there. The /opt directory is empty.
What am I missing?
You need to use a docker-compose file. It lets you bind together two different containers running two different images: one holding your server and the other the database service.
Here is an example of a nodejs server container working with a mongodb container.
First of all, I write the Dockerfile to configure the main container:
FROM node:latest
RUN mkdir /src
RUN npm install nodemon -g
WORKDIR /src
ADD app/package.json package.json
RUN npm install
EXPOSE 3000
CMD npm start
Then I create the docker-compose file to configure both containers and link them:
version: '3' # docker-compose version
services: # services are your different containers
  node_server: # first container, containing the nodejs server
    build: . # all of my source files are at the root path
    volumes: # volumes allow hot reload, for example
      - "./app:/src/app"
    ports: # binding the host port to the container port
      - "3030:3000"
    links: # linking the first service with the named mongo service (see below)
      - "mongo:mongo"
  mongo: # declaration of the mongodb container
    image: mongo # using the mongo image
    ports: # port binding for mongodb is required
      - "27017:27017"
I hope this helped.
Each service should have its own image/dockerfile. You start multiple containers and connect them over a network to be able to communicate.
If you wish to compose multiple containers in one file, check out docker-compose, which is made for just that!
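For the two images in the question, a minimal compose file might look like the sketch below (the published port and the database password are assumptions):
version: '3'
services:
  app:
    image: websphere-liberty:javaee7
    ports:
      - "9080:9080" # Liberty's default HTTP port
    depends_on:
      - db
  db:
    image: postgres:latest
    environment:
      POSTGRES_PASSWORD: example # recent postgres images require a password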
You can't use FROM multiple times in one file and expect both processes to run.
That creates each layer from the images, but there is only one entry point for the process, and it is Postgres because it comes second.
This pattern is typically only done when you have some "setup" docker image, then a "runtime" image on top of it.
https://docs.docker.com/engine/userguide/eng-image/multistage-build/#use-multi-stage-builds
Also, what you're trying to do doesn't really follow the "microservices" approach. Run the database separately from your application. Docker Compose can assist you with that, and almost all the examples on Docker's website use Postgres with some web app.
Plus, you're starting an empty database and server. You need to copy at least a WAR file, for example, to run your server code.
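For example, a minimal Dockerfile for the application container could copy a WAR into Liberty's dropins directory (a sketch; the WAR path and name are assumptions, and /config/dropins/ is the deployment directory documented for the websphere-liberty image):
FROM websphere-liberty:javaee7
COPY target/myapp.war /config/dropins/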