I am coming from the PHP world. Our project runs in a Docker container, and PHPUnit works the same locally and on Travis: you just tell it to search for files matching the test pattern. We are switching to Go, so I am trying to learn the language, and what I am struggling to understand is testing. I have a service that is compiled, and its binary is copied into a Docker image and run from there. Locally I can run go test; however, because it runs outside of my Docker container, it can't connect to my database. The other problem is setting up the deployment process: I assume I can't just tell Travis to search for files ending in _test.go, because they are not in the Docker container. Does anyone have good material on how this works in the Go world? Am I supposed to copy not just the binary but all the source files and run the tests inside Docker?
Two suggestions:
1. Run go test before go build inside the Docker build, so the image is only produced when the tests pass.
2. Use gomock (or another mocking library) if the tests would otherwise need a different port or URL for the database.
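The first suggestion is usually done with a multi-stage Dockerfile, so a failing test aborts the image build. A minimal sketch (Go version, module layout, and paths are assumptions, not from the question):

```dockerfile
# Build stage: download deps, run tests, then compile.
FROM golang:1.21 AS build
WORKDIR /src
COPY go.mod go.sum ./
RUN go mod download
COPY . .
# A failing test stops the build here; no image is produced.
RUN go test ./...
RUN CGO_ENABLED=0 go build -o /bin/service .

# Final stage: only the binary is shipped.
FROM alpine:3.19
COPY --from=build /bin/service /bin/service
ENTRYPOINT ["/bin/service"]
```

Note that docker build has no access to other running containers by default, so tests that genuinely need the database are typically run with docker-compose instead (a test service alongside the db service), while the tests run at build time use mocks as suggested above.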
I was wondering whether there is a flag for the docker build command.
The scenario is that when I use docker build, I have to keep watching it. If the build fails, a later docker run will silently use the latest image on my local machine, which was a previous successful build without my new changes, so if I don't notice the failure I can miss problems.
So I want to know if there is a flag for docker build to prevent this. For example, "docker build --ForceThisBuild": if the build fails, stop and do not fall back to any other existing image, whether local or remote.
At the same time, I would still like it to use the existing layer cache; that is fine for my purpose.
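There is no such flag, but docker build already exits nonzero when a step fails, and a failed build does not update the tag, so you can guard the run on the build's exit status instead of watching the output. A small sketch (the helper name and the example commands are hypothetical, not a Docker feature):

```shell
# Hypothetical helper: start a container only if the image build succeeded.
# docker build returns a nonzero exit code on failure and leaves the old
# tag untouched, so guarding on the exit status avoids running a stale image.
safe_build_and_run() {
  build_cmd=$1   # e.g. "docker build -t myapp:latest ."
  run_cmd=$2     # e.g. "docker run --rm myapp:latest"
  if $build_cmd; then
    $run_cmd
  else
    echo "build failed; refusing to run stale image" >&2
    return 1
  fi
}
```

In the simplest case you can also just chain the two commands: `docker build -t myapp . && docker run --rm myapp` only reaches docker run when the build exited 0.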
I just started teaching myself Docker but have run into all kinds of issues. I installed Docker on a Windows 10 Home laptop and an initial test worked. Then I started this MS Learn tutorial: https://learn.microsoft.com/en-us/learn/modules/intro-to-containers/ — everything went well until exercise 5: https://learn.microsoft.com/en-us/learn/modules/intro-to-containers/5-exercise-create-custom-docker-image
I suspect I lack some knowledge of the basics, so here are my "stupid questions":
In Create step 3: where do I store the Dockerfile? In Build step 1: the command throws an error because it can't find the Dockerfile (yes, I did store it without the .txt extension).
In Build step 1: where is the built image stored? In which directory should I build it?
In Test step 2: in which directory should I run it?
You can store the Dockerfile in your project's root. You can simply create a new file in Notepad++ and save it as Dockerfile (no extension). I suggest using Visual Studio Code instead; it is free and can help you a lot.
The built image is stored by your Docker daemon. The image is not a file in your working directory.
I think you mean Test step 1: it doesn't matter where you run docker run -p 8080:80 -d --name reservations reservationsystem. The image is stored by your Docker daemon, so you can run the command "anywhere" on your machine; it doesn't read or create files in the working directory.
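To make the directory question concrete: docker build looks for the Dockerfile in the build context you pass it (the trailing `.`), so you run the build from the directory that contains the Dockerfile, while docker run works from anywhere. An illustrative sequence using the tutorial's names:

```shell
cd path/to/project                     # the directory containing the Dockerfile
docker build -t reservationsystem .    # "." = build context; the Dockerfile is found here
cd /                                   # any other directory is fine from now on
docker run -p 8080:80 -d --name reservations reservationsystem
```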
I need to set up an environment with Docker containing multiple technologies, such as a DB, a test environment, continuous integration, and some other things. I also need it to be available for my teammates to use.
I don't quite understand Docker beyond the high-level concept of it, so I have no idea where to start. Useful answers range from a step-by-step guide to simply pointing me toward the right links for my problem. Thank you!
We intend to use either:
PostgreSQL
Node.js
Vue
Jenkins
or:
PostgreSQL
Android Studio
Jenkins
To answer your first question about sharing a dev Docker setup with teammates: keep two different docker-compose files in your project, e.g. dev and prod.
On the other hand, since you are not yet comfortable with Docker, you are better off approaching it step by step:
- Learn to build a stateless application, because when you work with Docker you will want to scale horizontally later on.
- Dockerize your apps (learn how to write a Dockerfile for your Node.js project).
- Learn how to write a docker-compose file for the Node.js + Postgres pair; test it and make sure the two services are connected and share the Docker network created in the compose file.
- You need a Docker registry such as Docker Hub, or your own installation like Nexus, to push your production-ready images after Jenkins' automated tests, which you can then deploy.
- You can put your front end and back end in one docker-compose file, but I wouldn't recommend it: multiple teams working with a single docker-compose file can be confusing at first.
- You can ask your DevOps team for a Jenkins installation and create your CI YAML files.
- The docker-compose files you create live in the project directory, so anyone who clones the project has access to them.
- Create a README with clear instructions for building, testing, and deploying the project in both the dev and prod environments.
I don't know whether this helps, because your question was not specific, but I hope it does.
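For the Node.js + PostgreSQL pair described above, a minimal docker-compose.yml might look like this (service names, ports, and credentials are placeholders, not from the question); compose puts both services on one network, so the app reaches the database by its service name:

```yaml
version: "3.8"
services:
  db:
    image: postgres:15
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: appdb
    volumes:
      - dbdata:/var/lib/postgresql/data   # data survives container restarts
  api:
    build: .                              # expects a Dockerfile for the Node.js app
    environment:
      # "db" resolves to the database container via the compose network
      DATABASE_URL: postgres://app:secret@db:5432/appdb
    ports:
      - "3000:3000"
    depends_on:
      - db
volumes:
  dbdata:
```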
I'm new to Terraform and have just done all the tutorials I could find about it. I have set up multiple Docker containers and a network, currently started with a shell script. The general plan would be to be able to start my testbed and all its components with Terraform (like ONOS with Containernet, routers, ...).
My first question: is Terraform made for this kind of task? Or would you suggest something different? I thought using Terraform would make it easy to write new scenarios.
At this point I use the shell scripts to build and run the Docker containers. Does it make sense to let Terraform do the run (not build) task?
Thanks for your help and opinions.
I'm new to Stack Overflow; it would be great if you could explain a downvote, so I can learn to do better.
Edit: build file deleted (unnecessary).
The general plan would be, to be able to start my testbed and all its components with Terraform
Tl;Dr
Don't do this.
This isn't what Terraform is for. Terraform provisions infrastructure. So (as an example) if you want an Azure Function, you write a Terraform file that describes what it looks like, then run Terraform to create it. Terraform doesn't (nor should it) run your function; it simply describes how Azure should create these resources before they are run.
It seems you actually want a build pipeline that uses Terraform as one step to provision the infrastructure. The build script would then run the containers once Terraform has done its job.
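To illustrate the distinction: a Terraform file declares the desired state of infrastructure, and terraform apply makes the provider converge on it. A hypothetical sketch (the resource names, group, and region are made up for illustration):

```hcl
# Declarative: this describes WHAT should exist, not HOW to start anything.
provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "testbed" {
  name     = "testbed-rg"
  location = "westeurope"
}
```

Running terraform apply creates the resource group if it is missing and does nothing if it already matches; starting and orchestrating your containers remains the job of the build/run script.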
I'm building an application that people will run on their own server, similar to Moodle or WordPress. I'm assuming the people running the application will be familiar with executing commands on the command line, but I can't assume they are familiar with Docker.
What I'm thinking of doing is giving them instructions on how to install Docker and docker-compose. Most installations will be small enough that both the web server and the database can run on the same machine, so they can just put the compose file in a directory and run docker-compose up -d.
Would this be a good way to distribute the application? Of course, the docker-compose file would take into account all the considerations for running docker-compose in production.
You have two tasks:
1. Install Docker on server
You can use something like Ansible or just make a good manual page for them.
2. Run containers, build application, etc.
It is very easy to create a Makefile with the basic commands:
make install
make reinstall
make build
make start
make stop
If you use Ansible for step 1, you can use it for both step 1 and step 2.
If you don't need to automate step 1, a Makefile is enough: it is simple and fast, and your users can read it to understand what each target does.
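Such a Makefile could be as small as this (target bodies are placeholders; each one just wraps the docker-compose command your users would otherwise type by hand):

```makefile
.PHONY: install build start stop reinstall

install:        ## fetch images without starting anything
	docker-compose pull

build:          ## (re)build the application images
	docker-compose build

start:          ## start web server and database in the background
	docker-compose up -d

stop:           ## stop and remove the containers
	docker-compose down

reinstall: stop build start
```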
I think: why not? If your end users are OK with using Docker, that's a fine way to do it.
It frees them from worrying about versions and hardware differences, as you intend, and you are able to push new versions of your containers, so you can roll out updates easily.