Use a flag to force docker build not to use other images if the build fails [closed] - docker

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed yesterday.
I was wondering whether there is a flag for the docker build command.
The scenario is that when I run docker build, I have to keep watching it. If the build fails, the latest image on my local machine gets used instead. If I don't notice the failure, I miss problems, because that latest image was a successful build without my new changes.
So I want to know if there is such a flag for the docker build command.
For example, "docker build --ForceThisBuild": if the build fails, it stops and does not use any other existing image, neither local ones nor remote images from the internet.
At the same time, I would still like it to use the existing layer cache; that is fine for my purpose.
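For what it's worth, docker build already exits with a non-zero status when any step fails; the silent fallback usually comes from running a stale :latest tag afterwards. A minimal sketch of guarding against that with a unique tag per build (the guarded_build wrapper and all names here are hypothetical, not a real Docker flag):

```shell
# guarded_build TAG CMD... : run the build command; only report the tag
# as usable if the command succeeded. The build command is a parameter
# here so the pattern is visible without Docker installed.
guarded_build() {
  tag="$1"; shift
  if "$@"; then
    echo "image $tag is fresh"
  else
    echo "build failed: $tag not created" >&2
    return 1
  fi
}

# Real usage might look like (assuming a git checkout):
#   TAG="myapp:$(git rev-parse --short HEAD)"
#   guarded_build "$TAG" docker build -t "$TAG" . && docker run --rm "$TAG"
```

Because the tag is unique per build, a failed build leaves nothing tagged, so an old successful image can never be started by mistake, while the layer cache is still reused.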

Related

I need advice on setting up Go tests [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 4 days ago.
I am coming from the PHP world. Our project runs in a Docker container, and you can use PHPUnit locally, on Travis, etc.; you just tell it to search for files that have .tests. We are switching over to Go, so I am trying to learn this language, and what I am struggling to understand is testing.

I have a service that is compiled, and its binary is copied into Docker and run from there. Locally I can run go test; however, because it runs outside of my Docker container, it can't connect to my database. The other problem is setting up the deployment process. I assume I can't just tell Travis to search for files that end with test.go, because they are not in the Docker container.

Does anyone have good material on how this works in the Go world? Am I supposed to copy not just the binary but all the source files and run the tests inside Docker?
Try this:
1. Run go test before go build in your Docker build.
2. Use gomock if the tests need a different port and URL.
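A sketch of what point 1 can look like: run the tests inside the image build itself, so a failing test breaks docker build. The multi-stage Dockerfile below is an assumption about the layout (Go 1.22, module at the repo root), written as a heredoc so the file contents are visible:

```shell
# Generate a multi-stage Dockerfile that runs `go test` before `go build`;
# if any test fails, the image build itself fails. Paths, image names,
# and the Go version are assumptions.
cat > Dockerfile.sketch <<'EOF'
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN go test ./...
RUN go build -o /service .

FROM gcr.io/distroless/static-debian12
COPY --from=build /service /service
ENTRYPOINT ["/service"]
EOF
```

Tests that need the database then run inside the Docker network (e.g. alongside a database service in docker-compose) instead of trying to reach it from the host, and CI only has to run docker build.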

Basic questions about Docker [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 1 year ago.
I just started teaching myself Docker but have run into all kinds of issues. I installed Docker on a Windows 10 Home laptop and did an initial test, which worked. Then I started this MS Learn tutorial: https://learn.microsoft.com/en-us/learn/modules/intro-to-containers/ Everything went well until exercise 5: https://learn.microsoft.com/en-us/learn/modules/intro-to-containers/5-exercise-create-custom-docker-image
I suspect I lack some knowledge of the basics, so here are my "stupid questions":
In Create step 3: where do I store the Dockerfile?
In Build step 1: the command throws an error because it can't find the Dockerfile (yes, I did store it without the .txt extension).
In Build step 1: where is the built image stored? In which directory should I build it?
In Test step 2: in which directory should I run it?
You can store the Dockerfile in your project's root. You can simply create a new file in Notepad++ and save it as Dockerfile. I suggest using Visual Studio Code instead (it is free and can help you a lot).
The built image is stored by your Docker daemon; the image is not a file in your working directory.
I think you mean step 1: it doesn't matter where you run docker run -p 8080:80 -d --name reservations reservationsystem. The image is stored by your Docker daemon, so you can run the command "anywhere" on your machine (it doesn't read or create files in the working directory).
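To make the first point concrete: docker build looks for a file literally named Dockerfile in the build-context directory (the trailing `.` argument). A small guard function (purely illustrative, not part of Docker) makes the "file not found" failure explicit:

```shell
# build_here: fail early with a clear message if the current directory
# has no Dockerfile; otherwise hand off to `docker build`. The image
# name matches the tutorial's example.
build_here() {
  if [ ! -f Dockerfile ]; then
    echo "no Dockerfile in $(pwd); cd to your project directory first" >&2
    return 1
  fi
  docker build -t reservationsystem .
}

# After a successful build, `docker run -p 8080:80 -d --name reservations
# reservationsystem` works from any directory, since images live in the
# daemon's storage, not in your working directory.
```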

Setting up Docker Containers and Network with Terraform [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 2 years ago.
I'm new to Terraform and just did all the tutorials I could find about it. I have set up multiple Docker containers and a network, currently started with a shell script. The general plan is to be able to start my testbed and all its components with Terraform (like ONOS with containernet, routers, ...).
My first question: is Terraform made for that kind of task? Or would you suggest something different? I thought using Terraform would make it easy to write new scenarios.
At this point I use shell scripts to build and run the Docker containers. Does it make sense to let Terraform do the run (not build) task?
Thanks for your help and opinions.
I'm new to Stack Overflow; it would be awesome if you explained a downvote, so I can learn to do better.
Edit: build file deleted (unnecessary).
The general plan would be, to be able to start my testbed and all its components with Terraform
TL;DR
Don't do this.
This isn't what Terraform is for. Terraform provisions infrastructure. So (as an example) if you want an Azure Function, you write a Terraform file that describes what it looks like, then run Terraform to create it. Terraform doesn't (nor should it) run your function; it simply describes how Azure should create these structures before they are run.
It seems you actually want a build pipeline that uses Terraform as one step to provision the infrastructure. The build script would then run the containers once Terraform has done its job.
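The suggested split can be sketched as a tiny pipeline script; the step commands are parameters here purely so the ordering is visible (all names are hypothetical):

```shell
# deploy PROVISION_CMD RUN_CMD : provision first, and only start the
# containers if provisioning succeeded - the division of labour
# described above.
deploy() {
  provision="$1"; run="$2"
  if ! $provision; then
    echo "provisioning failed; containers not started" >&2
    return 1
  fi
  $run
}

# Real usage might be:
#   deploy "terraform apply -auto-approve" "docker compose up -d"
```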

How to build a Docker image from multiple custom containers [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 5 years ago.
I have configured MySQL + phpMyAdmin + prestashop containers.
I would like to build a single image out of my own custom containers.
How can I do that?
Thanks for your help.
What you are looking for is Docker Compose. Docker Compose will automatically start the images that you have, and you can also link these images together inside docker-compose.
If you have made manual changes to the running containers, you can commit a container and its changes to a new image using
docker commit <mysqlcontainer> mysql-custom-image
And then in the compose file, you can just reference those images:
...
image: mysql-custom-image
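Filled in, such a compose file might look like the sketch below (service names, ports, and the phpMyAdmin/PrestaShop image names are assumptions; only mysql-custom-image comes from the commit step above). It is written as a heredoc so the full file is visible:

```shell
# Write a sketch of the compose file; adjust names and ports to your setup.
cat > docker-compose.sketch.yml <<'EOF'
services:
  db:
    image: mysql-custom-image        # from `docker commit` above
    environment:
      MYSQL_ROOT_PASSWORD: example
  phpmyadmin:
    image: phpmyadmin
    ports: ["8081:80"]
    depends_on: [db]
  prestashop:
    image: prestashop/prestashop
    ports: ["8080:80"]
    depends_on: [db]
EOF
```

With a file like this, `docker-compose up -d` starts all three services on one shared network, so they can reach each other by service name (e.g. `db`).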
A container is created from an image, and an image is built from a Dockerfile:
https://docs.docker.com/engine/reference/builder/
If you want a custom container, write your own Dockerfile and build the image the way you want it to be!
Here's a tutorial: https://www.howtoforge.com/tutorial/how-to-create-docker-images-with-dockerfile/

Best way to distribute a docker container to people with varying technical backgrounds [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
I'm building an application that people will run on their own server, similar to Moodle or WordPress. I'm assuming the people running the application will be familiar with executing commands on the command line, but I can't assume they are familiar with Docker.
What I'm thinking of doing is giving them instructions on how to install Docker and docker-compose. Most installations will be small enough that both the web server and database can run on the same machine, so they can just put the compose file in a directory and then run docker-compose up -d.
Would this be a good way to distribute the application? Of course, the docker-compose file would take into account all the considerations for running docker-compose in production.
You have two tasks:
1. Install Docker on server
You can use something like Ansible or just make a good manual page for them.
2. Run containers, build application, etc.
It is very easy to create a Makefile with basic commands:
make install
make reinstall
make build
make start
make stop
If you use Ansible for step 1, you can use it for both steps 1 and 2.
If you don't need to automate step 1, a Makefile is enough. It is simple and fast, and your users can understand what the Makefile does.
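Such a Makefile can stay very small; the sketch below just wraps docker-compose (target names follow the list above; assuming a docker-compose.yml sits next to the Makefile). It is generated via a heredoc so the contents are visible:

```shell
# Generate the sketched Makefile. Note: recipe lines must start with a
# real tab character, not spaces.
cat > Makefile.sketch <<'EOF'
install:
	docker-compose pull
	docker-compose up -d
reinstall: stop install
build:
	docker-compose build
start:
	docker-compose up -d
stop:
	docker-compose down
EOF
```

Users then only need `make install` once and `make start` / `make stop` afterwards, without knowing any Docker commands.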
I think: why not? If your final users are OK with using Docker, that's a fine way to do it.
It lets your final users avoid version and hardware differences, and you are able to push new versions of your containers, so you can roll out updates easily.
