I am trying to get my Spring Boot, Angular and MySQL apps working together using docker-compose (locally everything works). The Spring Boot image as well as the Angular image work correctly after executing docker-compose up: I can see my Angular app in the browser and I can make successful REST calls to my Spring API. The main problem is that if I make a request from Angular to the API, there is no successful REST call anymore...
The problem could be with the db... first it says:
/usr/sbin/mysqld: ready for connections. Version: '8.0.21' socket: '/var/run/mysqld/mysqld.sock' port: 3306 MySQL Community Server - GPL.
in the console. But a bit later, the last console output for db says:
mbind: Operation not permitted
I don't know if this is a problem, because I can make some REST calls from the browser (not Angular) successfully, as written earlier.
Another assumption I have is that the ports have to be configured in another way... but I already tried a lot of different combinations, also with the Spring application, always creating a new Spring image.
What can also be an issue is that the db throws some SQL errors like
Error executing DDL "alter table userrole add constraint userIdReference foreign key (`user_id`) references `user` (`user_id`)" via JDBC Statement
But still I can make some REST calls... and, for instance, within MySQL Workbench I can import the sql file without any problems and start Spring Boot + Angular locally to successfully run the project.
springpart_1 | Hibernate: select * from product where product.current_name = ?
Messages like the one above appear on the console after starting docker-compose up, but nothing is loaded into the Angular client.
GET http://localhost:8077/products net::ERR_CONNECTION_REFUSED
Other than that I have no real clue what the problem could be... probably also because I am new to Docker. Thank you in advance for your help.
docker-compose file:
services:
  springpart:
    image: ce153fc5b589
    ports:
      - '8077:8077'
    environment:
      - DATABASE_HOST=db
      - DATABASE_PORT=3306:3306
    networks:
      - backend
      - frontend
    depends_on:
      - db
    restart: on-failure
  db:
    image: mysql:8.0
    volumes:
      - .src/main/resources/guitarshop/currentGuitarshopData:/docker-entrypoint-initdb.d
    environment:
      - MYSQL_ROOT_PASSWORD=mypassword
      - MYSQL_DATABASE=guitarshop
    networks:
      - backend
  angularpart:
    image: b8140c7fedec
    ports:
      - '4200:80'
    networks:
      - frontend
networks:
  frontend:
  backend:
Angular image Dockerfile:
FROM node:alpine AS builder
WORKDIR /usr/src/app
COPY package.json package-lock.json ./
RUN npm install
COPY . .
RUN npm run build --prod
FROM nginx:alpine
COPY --from=builder /usr/src/app/dist/guitarShopAngular/ /usr/share/nginx/html
EXPOSE 80
application.properties file:
spring.datasource.url=jdbc:mysql://db:3306/guitarshop?serverTimezone=UTC&useLegacyDatetimeCode=false&autoReconnect=true&failOverReadOnly=false&maxReconnects=10
spring.datasource.initialization-mode=never
spring.datasource.username = root
spring.datasource.password = mypassword
spring.datasource.platform=mysql
spring.jpa.hibernate.ddl-auto=update
spring.jpa.properties.hibernate.dialect = org.hibernate.dialect.MySQL8Dialect
spring.jpa.show-sql = true
server.port = 8077
spring.main.banner-mode=off
spring.jackson.serialization.fail-on-empty-beans=false
spring.servlet.multipart.enabled=true
spring.servlet.multipart.max-file-size=500KB
spring.servlet.multipart.max-request-size=500KB
spring.servlet.multipart.resolve-lazily=false
If you need any more information, just let me know...
Try changing your volume bind mount like so:
"./src/main/resources/guitarshop/currentGuitarshopData:/var/lib/mysql"
As mentioned in the comments, I had no issues getting the app up and running. In terms of the data, it seems like the db files included in the project don't have any data, but I was able to add data manually and then see that data through the app. The only real issue in your compose file was the bind mount path, but once I fixed that, the data I added persisted after recreating the containers.
Here are some suggestions, as ideally you should be able to clone the repo, run "docker-compose up -d", and have the app running. Right now that isn't possible, since you first have to build your Spring app locally, then manually build the Spring and Angular docker images, then run compose up.
Create a new root folder and move the backend and frontend folders into it.
Move the compose file to the root folder.
Look into building your spring app by using a multi stage build like explained here: https://spring.io/blog/2018/11/08/spring-boot-in-a-container/#multi-stage-build.
Modify your compose file to something like this:
"
version: '3'
services:
  springpart:
    build: ./GuitarShopBackend
    ports:
      - '8077:8077'
    environment:
      - DATABASE_HOST=db
      - DATABASE_PORT=3306
    networks:
      - backend
      - frontend
    depends_on:
      - db
    restart: on-failure
  db:
    image: 'mysql:8.0.17'
    volumes:
      # This will initialize your database if it doesn't already exist with your provided sql file
      - ./GuitarShopBackend/src/main/resources/guitarshop/initialGuitarshopData:/docker-entrypoint-initdb.d
      # This will persist your database across container restarts
      - ./GuitarShopBackend/src/main/resources/guitarshop/currentGuitarshopData:/var/lib/mysql
    ports:
      - '3306:3306'
    environment:
      - MYSQL_ROOT_PASSWORD=mypassword
      - MYSQL_DATABASE=guitarshop
    networks:
      - backend
  angularpart:
    build: ./GuitarShopAngular
    ports:
      - '4200:80'
    networks:
      - frontend
networks:
  frontend:
  backend:
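For the multi-stage build suggestion, a Dockerfile for the Spring app could look roughly like the sketch below; the Maven base image, the jar name pattern and the standard src/ layout are assumptions about your project, so adapt them as needed:

# Build stage: compile the jar inside the image
FROM maven:3.6-jdk-11 AS build
WORKDIR /workspace
COPY pom.xml .
# Pre-fetch dependencies so they are cached between builds
RUN mvn dependency:go-offline
COPY src ./src
RUN mvn package -DskipTests

# Run stage: only the finished jar ends up in the final image
FROM openjdk:11-jre-slim
COPY --from=build /workspace/target/*.jar /app.jar
EXPOSE 8077
ENTRYPOINT ["java", "-jar", "/app.jar"]

With the build: keys in place, bringing the whole stack up from the root folder becomes a single command:

docker-compose up -d --build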
Related
I'm trying to set up a simple MLflow tracking server with docker that uses a MySQL backend store and an S3 bucket for artifact storage. I'm using a simple docker-compose file to set this up on a server and supplying all of the credentials through a .env file.

When I try to run the sklearn_elasticnet_wine example from the mlflow repo here: https://github.com/mlflow/mlflow/tree/master/examples/sklearn_elasticnet_wine using TRACKING_URI = "http://localhost:5005" from the machine hosting my tracking server, the run fails with the following error: botocore.exceptions.NoCredentialsError: Unable to locate credentials. I've verified that my environment variables are correct and available in my mlflow_server container. The runs show up in my backend store, so the run only seems to be failing at the artifact logging step. I'm not sure why this isn't working.

I've seen examples of how to set up a tracking server online, including: https://towardsdatascience.com/deploy-mlflow-with-docker-compose-8059f16b6039. Some also use Minio, but others just specify their S3 location as I have. I'm not sure what I'm doing wrong at this point. Do I need to explicitly set the ARTIFACT_URI as well? Should I be using Minio? Eventually, I'll be logging runs to the server from another machine, hence the nginx container. I'm pretty new to all of this, so I'm hoping it's something really obvious and easy to fix, but so far Google has failed me. TIA.
version: '3'
services:
  app:
    restart: always
    build: ./mlflow
    image: mlflow_server
    container_name: mlflow_server
    expose:
      - 5001
    ports:
      - "5001:5001"
    networks:
      - internal
    environment:
      - AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
      - AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
      - AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION}
      - AWS_S3_BUCKET=${AWS_S3_BUCKET}
      - DB_USER=${DB_USER}
      - DB_PASSWORD=${DB_PASSWORD}
      - DB_PORT=${DB_PORT}
      - DB_NAME=${DB_NAME}
    command: >
      mlflow server
      --backend-store-uri mysql+pymysql://${DB_USER}:${DB_PASSWORD}@${DB_HOST}:${DB_PORT}/${DB_NAME}
      --default-artifact-root s3://${AWS_S3_BUCKET}/mlruns/
      --host 0.0.0.0
      --port 5001
  nginx:
    restart: always
    build: ./nginx
    image: mlflow_nginx
    container_name: mlflow_nginx
    ports:
      - "5005:80"
    networks:
      - internal
    depends_on:
      - app
networks:
  internal:
    driver: bridge
Finally figured this out. I didn't realize that the client also needs to have access to the AWS credentials for S3 storage.
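In case it saves someone else the digging, the fix on the client side can be as simple as exporting the same credentials before launching the run. A sketch, assuming you run the wine example from a checkout of the mlflow repo (the placeholder values and the script path are assumptions):

# On the client machine (the one running the example, not the tracking server)
export AWS_ACCESS_KEY_ID=<your-key-id>
export AWS_SECRET_ACCESS_KEY=<your-secret-key>
export AWS_DEFAULT_REGION=<your-region>
export MLFLOW_TRACKING_URI=http://localhost:5005

python examples/sklearn_elasticnet_wine/train.py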
As a bit of context, I am fairly new to Docker and docker-compose, and until recently I'd never even heard of Docker Swarm. I shouldn't be the one responsible for the task I've been given, but it's not like I can offload it to someone else...
So, the idea is to have two different physical machines to host a web server. One of the machines will run an Express.js server plus a Redis database, while the other machine hosts the system database (a Postgres DB).
Up until now I had a docker-compose.yaml file which created all these services and ran them.
version: '3.8'
services:
  server:
    image: server
    build:
      context: .
      target: build-node
    volumes:
      - ./:/src/app
      - /src/app/node_modules
    container_name: server
    ports:
      - 3000:3000
    depends_on:
      - postgres
      - redis
    entrypoint:
      ['./wait-for-it.sh', '-t', '30', 'postgres:5432', '--', 'yarn', 'dev']
    networks:
      - servernet
  # postgres database
  postgres:
    image: postgres
    user: postgres
    restart: always
    environment:
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - PGDATA=/var/lib/postgresql/data/pgdata
    volumes:
      - ./data:/var/lib/postgresql/data # persist data even if container shuts down
      - ./db_scripts/startup.sh:/docker-entrypoint-initdb.d/c_startup.sh
      #- ./db_scripts/db.sql:/docker-entrypoint-initdb.d/a_db.sql
      #- ./db_scripts/db_population.sql:/docker-entrypoint-initdb.d/b_db_population.sql
    ports:
      - '5432:5432'
    networks:
      - servernet
  # pgadmin for managing postgis db (runs at localhost:5050)
  # To add the above postgres server to pgadmin, use hostname as defined by docker: 'postgres'
  pgadmin:
    image: dpage/pgadmin4
    restart: always
    environment:
      - PGADMIN_DEFAULT_EMAIL=${PGADMIN_DEFAULT_EMAIL}
      - PGADMIN_DEFAULT_PASSWORD=${PGADMIN_DEFAULT_PASSWORD}
    depends_on:
      - postgres
    ports:
      - 5050:80
    networks:
      - servernet
  redis:
    image: redis
    networks:
      - servernet
networks:
  servernet:
I would naturally run this script with docker-compose up and that was the end of my concerns, everything running together on localhost. But now, with this setup I have no idea what to do. From what I've read, I have to create a swarm, but then how do I go about running everything from the same place (or with one command)? And how do I specify which services are to be executed on which machine?
Additionally, here is my Dockerfile in case it's useful:
FROM node as build-node
WORKDIR /src/app
COPY package.json .
COPY yarn.lock .
COPY wait-for-it.sh .
COPY . .
RUN yarn
RUN yarn style:fix
RUN yarn lint:fix
RUN yarn build
EXPOSE 3000
ENTRYPOINT yarn dev
Is my current docker-compose script even capable of being used with this new setup?
This is really over my head and I've got no idea where to start. The Docker documentation is also a bit confusing, since I don't have much knowledge of Docker to begin with...
Thanks in advance!
You first need to learn what Docker Swarm is and how it works.
Docker Swarm is a container orchestration tool, meaning that it allows the user to manage multiple containers deployed across multiple host machines.
To answer your questions briefly:
How do I go about running everything from the same place?
You can use the docker stack deploy command to deploy a set of services. And yes, you run it from one host machine; you don't have to run it on different machines. That machine is what we call the master node.
The good news is that you can still use your docker-compose file, with slight modifications maybe.
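Those modifications mostly live under the deploy: key, which only takes effect in swarm mode. As a sketch of how you could pin services to specific machines (your second question), using assumed node labels web and db that you would set yourself:

services:
  server:
    # ...existing config...
    deploy:
      placement:
        constraints:
          - node.labels.role == web
  postgres:
    # ...existing config...
    deploy:
      placement:
        constraints:
          - node.labels.role == db

Be aware that swarm mode ignores some compose-file options such as container_name and depends_on, so expect small adjustments there too.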
So to summarize, the steps you need to do are the following (there is a command sketch after this list):
install Docker Swarm (1 master and 1 worker, as you have only 2 machines)
make sure it's working fine (communication between nodes)
prepare your docker-compose file and deploy your stack from the master node
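A minimal sketch of those steps from the command line; the IP addresses, hostnames and the stack name myapp are placeholders:

# On machine 1 (the master node)
docker swarm init --advertise-addr <MASTER-IP>
# This prints a 'docker swarm join --token ...' command

# On machine 2 (the worker), paste the join command that was printed
docker swarm join --token <TOKEN> <MASTER-IP>:2377

# Back on the master: label the nodes if you use placement constraints
docker node update --label-add role=web <master-node-hostname>
docker node update --label-add role=db <worker-node-hostname>

# Deploy everything with one command from the master
docker stack deploy -c docker-compose.yaml myapp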
I have docker-compose.yml on my local machine like below:
version: "3.3"
services:
api:
build: ./api
volumes:
- ./api:/api
ports:
- 3000:3000
links:
- mysql
depends_on:
- mysql
app:
build: ./app
volumes:
- ./app:/app
ports:
- 80:80
mysql:
image: mysql:8.0.27
volumes:
- ./mysql:/var/lib/mysql
tty: true
restart: always
environment:
MYSQL_DATABASE: db
MYSQL_ROOT_PASSWORD: qwerty
MYSQL_USER: db
MYSQL_PASSWORD: qwerty
ports:
- '3306:3306'
The api is a NestJS app; app and mysql are Angular and MySQL respectively. And I need to work with this one locally.
How can I make it so that any of my changes are applied without rebuilding the containers every time?
You don't have to build an image for a development environment with your sources in it. For NestJS, and since you're using Docker (I deliberately specify this because other container runtimes exist), you can simply run a NodeJS image from the main Docker registry: https://hub.docker.com/_/node.
You could run it with:
docker run -d -v "$(pwd)/app:/app" node:12-alpine /app/index.js
N.B.: plain docker run needs an absolute host path for a bind mount, hence the $(pwd). I chose 12-alpine for the example. I imagine the file that starts your app is index.js; replace it with yours.
You must remember to install the node dependencies yourself, and they must be in the ./app directory.
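A quick sketch of preparing those dependencies on the host, assuming a standard package.json sits in ./app:

cd app
npm install   # or: yarn
cd ..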
For docker-compose, it could look like this:
version: "3.3"
services:
app:
image: node:12-alpine
command: /app/index.js
volumes:
- ./app:/app
ports:
- "80:80"
Same way for your API project.
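For instance, the api service could run NestJS in watch mode so code changes are picked up without a rebuild. This is only a sketch, assuming your package.json has Nest's standard start:dev script and that node_modules is installed in ./api on the host:

services:
  api:
    image: node:12-alpine
    working_dir: /api
    # Nest's watch mode recompiles and restarts on file changes
    command: npm run start:dev
    volumes:
      - ./api:/api
    ports:
      - "3000:3000"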
For a production image, it is still suggested to build the image with the sources in it.
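Such a production image could be built with a multi-stage Dockerfile roughly like this (a sketch, assuming a standard NestJS build that emits dist/main.js):

# Build stage: install all deps and compile TypeScript
FROM node:12-alpine AS build
WORKDIR /api
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Run stage: production deps plus the compiled output only
FROM node:12-alpine
WORKDIR /api
COPY package*.json ./
RUN npm ci --only=production
COPY --from=build /api/dist ./dist
CMD ["node", "dist/main.js"]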
Say you're working on your front-end application (app). This needs to make calls out to the other components, especially api. So you can start the things it depends on, but not the application itself
docker-compose up -d api
Update your application configuration for this different environment; if you would have proxied to http://api:3000 before, for example, you need to change this to http://localhost:3000 to connect to the container's published ports:.
Now you can develop your application totally normally, without doing anything Docker-specific.
# outside Docker, on your normal development workstation
yarn run dev
$EDITOR src/components/Foo.tsx
You might find it convenient to use environment variables for these settings that will, well, differ per environment. If you're developing the back-end code but want to attach a live UI to it, you'll either need to rebuild the container or update the front-end's back-end URL to point at the host system.
This approach also means you do not need to bind-mount your application's code into the container, and I'd recommend removing those volumes: blocks.
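As a sketch of the environment-variable approach (the variable name API_URL is an assumption; how it gets read depends on your front-end tooling):

# Outside Docker, pointing at the container's published port
API_URL=http://localhost:3000 yarn run dev

# In docker-compose, the same variable would use the service name instead:
#   environment:
#     - API_URL=http://api:3000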
I am currently writing a webapp (Java backend, React front end) and have been deploying via a docker-compose file. I've made changes, and when I run them via yarn build for my front end and start my back end with Maven, the changes appear. However, when running with Docker, the changes aren't there.
I've been using the docker compose up and docker compose down commands, and I even run docker system prune -a after stopping my containers via docker compose down, but my new changes aren't showing. I'd appreciate any guidance on what I'm doing wrong.
I also have Docker Desktop and have manually deleted all of the volumes, containers and images so that they have to be regenerated. Running the build commands with the cache disabled didn't help either.
I also deleted the .m2 folder so that it gets regenerated (my understanding is that this is the cache store for the backend). My changes are mainly on the front end, but since my front-end container depends on the back end, I thought regenerating the back-end container might have a knock-on effect that could help.
I would greatly appreciate any help; please let me know if there's anything else to help with context. The changes involve removing a search bar and some text, both of which are commented out in the code but still appear, while a button I added doesn't show up.
My docker compose file is below as follows:
services:
  mysqldb:
    # image: mysql:5.7
    build: ./Database
    restart: unless-stopped
    env_file: ./.env
    environment:
      - MYSQL_ROOT_PASSWORD=$MYSQLDB_ROOT_PASSWORD
      - MYSQL_DATABASE=$MYSQLDB_DATABASE
    ports:
      - $MYSQLDB_LOCAL_PORT:$MYSQLDB_DOCKER_PORT
    volumes:
      - db:/var/lib/mysql
    networks:
      - backend
  app_backend:
    depends_on:
      - mysqldb
    build: ./
    restart: on-failure
    env_file: ./.env
    ports:
      - $SPRING_LOCAL_PORT:$SPRING_DOCKER_PORT
    environment:
      SPRING_APPLICATION_JSON: '{
        "spring.datasource.url" : "jdbc:mysql://mysqldb:$MYSQLDB_DOCKER_PORT/$MYSQLDB_DATABASE?useSSL=false&allowPublicKeyRetrieval=true&serverTimezone=UTC",
        "spring.datasource.username" : "$MYSQLDB_USER",
        "spring.datasource.password" : "$MYSQLDB_ROOT_PASSWORD",
        "spring.jpa.properties.hibernate.dialect" : "org.hibernate.dialect.MySQL5InnoDBDialect",
        "spring.jpa.hibernate.ddl-auto" : "update"
      }'
    volumes:
      - .m2:/root/.m2
    stdin_open: true
    tty: true
    networks:
      - backend
      - frontend
  app_frontend:
    depends_on:
      - app_backend
    build: ../MyProjectFrontEnd
    restart: on-failure
    ports:
      - 80:80
    networks:
      - frontend
volumes:
  db:
networks:
  backend:
  frontend:
Since the issue is on the front end, I've also attached the dockerfile for the front end below:
FROM node:16.13.0-alpine AS react-build
WORKDIR /MyProjectFrontEnd
RUN yarn cache clean
RUN yarn install
COPY . ./
RUN yarn
RUN yarn build
# Stage 2 - the production environment
FROM nginx:alpine
COPY nginx.conf /etc/nginx/conf.d/default.conf
COPY /build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
Update: the browser cache was storing some of the old content (rookie error); however, not all of the changes are being loaded even now.
If your source code is in the same folder as your Dockerfile (as is usual), you can be sure that your latest source code will be built and deployed. This is one of the cornerstones of Docker; if it were failing, it would be the end of the world.
These kinds of errors are not related to the Docker core. Usually it is something at the application level and/or in its development:
a library mistake
a developer mistake
a functional test mistake
a load balancer mistake
Advice
docker-compose and Windows are for the development stage. For deployment to real environments for real users, you should use Linux and a tool like Kubernetes.
I am new to Docker, so this may seem very basic to you; anyway, it's freaking me out at the moment.
I decided to develop a new web project on top of containers, and of course I thought about Docker. After finishing the tutorial and reading some Dockerfiles and so on, I decided to go with docker-compose.
I want to have multiple compose files, one for Development, one for Production and so on. So far I have managed to orchestrate a basic PHP/MySQL/Redis application using 3 different services. The main application is PHP based and maintained in the project src. MySQL and Redis are simply configured with base images and do not require any business logic.
I can build the containers and bring them up with
build:
docker-compose -f compose-Development.yml build
up:
docker-compose -f compose-Development.yml up
Many files in the main application container are built by gulp (templates, css, etc.), and code exists in both JavaScript and PHP.
I noticed that my app state does not change when I change my files; I would have to rebuild and restart my containers.
Having some experience with Vagrant, I would go for some kind of shared source during development. But how would I achieve that?
My application Dockerfile (for development) looks like this:
FROM webdevops/php-nginx:7.1
COPY ./ /app
COPY docker/etc/ /opt/docker/etc
# php config...
RUN ln -sf /opt/docker/etc/php/php.Development.ini /opt/docker/etc/php/php.ini
WORKDIR /app/
EXPOSE 80
The compose file:
version: "3"
services:
app:
build:
context: .
dockerfile: Dockerfile.Development
links:
- mysql
- redis
volumes:
- ./data/fileadmin:/app/public/fileadmin
- ./data/uploads:/app/public/uploads
env_file:
- docker/env/All.yml
- docker/env/Development.yml
ports:
- "80:80"
restart: always
# Mysql Container
mysql:
build:
context: docker/mysql/
dockerfile: Dockerfile
restart: always
volumes:
- mysql:/var/lib/mysql
env_file:
- docker/env/All.yml
- docker/env/Development.yml
# Cache Backend Container
redis:
build:
context: docker/redis/
dockerfile: Dockerfile
ports:
- "6379:6379"
volumes:
- redis:/data
env_file:
- docker/env/All.yml
- docker/env/Development.yml
restart: always
volumes:
mysql:
redis:
So far, I used some GitHub repositories to copy chunks from. I know there might be other problems in my setup as well; for the moment, the most blocking issue is the thing with the linked/copied source.
Kind regards,
Philipp
The idea of "Development/Production parity" confuses many on this front. This doesn't mean that you can simply have a single configuration and it will work across everything; it means you'll have much closer parity and that you can create an environment that resembles something very close to what you'll have in production.
What's wrong here is that you're currently building your image as if it were ready to ship out: it has your code, you have volumes set aside for uploads, etc. Awesome!
Unfortunately, this setup is not correct for development. If you want to be editing code on the fly, you need to attach your local working directory to the container as a volume as well. This would not be done in production, so it's very close - but not exactly the same setup.
Add the following in to the app service volumes section of your compose-file and you should be good to go:
- .:/app
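In context, the app service's volumes section would then look like the sketch below; the two existing mounts stay as they are, since the more specific bind mounts still take precedence inside the broader /app mount:

services:
  app:
    volumes:
      # Whole project source, shared live with the container
      - .:/app
      - ./data/fileadmin:/app/public/fileadmin
      - ./data/uploads:/app/public/uploads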