Basic questions about Docker [closed] - docker

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 1 year ago.
I just started teaching myself Docker but have run into all kinds of issues. I installed Docker on a Windows 10 Home laptop and an initial test worked. Then I started this MS Learn tutorial: https://learn.microsoft.com/en-us/learn/modules/intro-to-containers/. Everything went well until exercise 5: https://learn.microsoft.com/en-us/learn/modules/intro-to-containers/5-exercise-create-custom-docker-image
I suspect I lack some knowledge of the basics, so here are my "stupid questions":
In Create step 3: where do I store the Dockerfile?
In Build step 1: the build command throws an error because it can't find the Dockerfile (yes, I did save it without a .txt extension).
In Build step 1: where is the built image stored? In which directory should I build it?
In Test step 2: in which directory should I run the container?

You can store the Dockerfile in your project's root. You can simply create a new file in Notepad++ and save it as Dockerfile (no extension). I suggest using Visual Studio Code instead (it is free and can help you a lot).
The built image is stored by your Docker daemon. The image is not a file in your working directory.
I think you mean step 1: it doesn't matter where you run docker run -p 8080:80 -d --name reservations reservationsystem. The image is stored by your Docker daemon, so you can run that command from any directory on your machine (it doesn't read or create files in the working directory).
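Putting that together, a minimal session might look like this (image and container names taken from the tutorial; requires a running Docker daemon):

```
# run the build from the directory that contains the Dockerfile;
# the trailing "." tells docker to use the current directory as the build context
docker build -t reservationsystem .

# the image now lives in the daemon's local store, not as a file on disk
docker image ls

# docker run can be executed from any directory
docker run -p 8080:80 -d --name reservations reservationsystem
```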


Use a flag to force docker build not use other images if the build process failed [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed yesterday.
I was wondering whether there is a flag for the docker build command.
The scenario: when I use docker build, I must keep watching it. If the build fails, the latest image on my local machine is used instead, and if I don't notice the failure I miss problems, because that latest image was a successful build without my new changes.
So I want to know whether there is a flag for docker build like, say, "docker build --ForceThisBuild": if the build fails, it stops and does not use any other existing image, neither local ones nor remote images on the Internet.
At the same time, I would like it to keep using the existing layer cache; that is fine for my purpose.
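As far as I know there is no --ForceThisBuild flag, but docker build already exits with a nonzero status when a step fails; the "old image" behavior happens because the previous tag still exists and docker run silently uses it. A common workaround, sketched here as a wrapper shell script (the tag scheme is made up), is to give every build a unique tag so a failed build leaves nothing new to run, while the layer cache is still reused:

```
#!/bin/sh
set -e                                   # abort the script if any command fails
TAG="myapp:build-$(date +%Y%m%d%H%M%S)"  # hypothetical unique tag per build
docker build -t "$TAG" .                 # exits nonzero if any build step fails
docker run --rm -d "$TAG"                # only reached after a successful build
```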

I need advice on setting up Go tests [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 4 days ago.
I am coming from the PHP world. Our project is in a Docker container, and you can use PHPUnit locally, on Travis, etc.; you just tell it to search for files that have .tests in the name. We are switching over to Go, so I am trying to learn this language, and what I am struggling to understand is testing. I have a service that is compiled, and its binary is copied into Docker and run from there. Locally I can run go test; however, because it runs outside of my Docker container, it can't connect to my database. The other problem is setting up the deployment process. I assume I can't just tell Travis to search for files that end with _test.go, because they are not in the Docker container. Does anyone have good material on how this works in the Go world? Am I supposed to copy not just the binary but all the files, and run tests inside Docker?
Try this:
1. Run go test before go build in your Docker build, so the tests run inside the container, where the database is reachable.
2. Use gomock to mock dependencies if the tests need a different port or URL.
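A sketch of point 1, assuming a multi-stage Dockerfile (base images, paths, and package layout are illustrative): running go test as a build step makes a failing test fail the whole image build. Tests that need a live database are usually run via docker-compose run instead, where the database container is reachable over the compose network.

```
# build stage: compile and test inside the container
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN go test ./...          # a failing test aborts the image build
RUN go build -o /app .

# runtime stage: ship only the binary
FROM debian:stable-slim
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```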

Setting up Docker Containers and Network with Terraform [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 2 years ago.
I'm new to Terraform and have just done all the tutorials I could find about it. I have set up multiple Docker containers and a network, currently started with a shell script. The general plan is to be able to start my testbed and all its components with Terraform (like ONOS with containernet, routers, ...).
My first question: is Terraform made for that kind of task, or would you suggest something different? I thought using Terraform would make it easy to write new scenarios.
At this point I use shell scripts to build and run the Docker containers. Does it make sense to let Terraform do the run (not build) task?
Thanks for your help & opinions.
I'm new to Stack Overflow; it would be awesome if you explained a downvote, so I can learn to do better.
Edit: build file deleted (unnecessary).
The general plan would be, to be able to start my testbed and all its components with Terraform
TL;DR
Don't do this.
This isn't what Terraform is for. Terraform provisions infrastructure. So (as an example) if you want an Azure function, you write a Terraform file that describes what it looks like, then run Terraform to create it. Terraform doesn't (nor should it) run your function; it simply describes how Azure should create these structures before they are run.
It seems you actually want a build pipeline that uses Terraform as one step to provision the infrastructure. The build script would then run the containers once Terraform has done its job.
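To illustrate what "describing infrastructure" looks like: Terraform does have a community Docker provider (kreuzwerker/docker) that declares networks and containers as desired state rather than as imperative run commands. A minimal sketch (the image name is a placeholder), though the build-and-run orchestration itself still belongs in the pipeline:

```
terraform {
  required_providers {
    docker = {
      source = "kreuzwerker/docker"
    }
  }
}

provider "docker" {}

# a network and a container declared as desired state
resource "docker_network" "testbed" {
  name = "testbed-net"
}

resource "docker_container" "router" {
  name  = "router1"
  image = "nginx:latest"   # placeholder image
  networks_advanced {
    name = docker_network.testbed.name
  }
}
```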

Some questions about Docker images and containers [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 2 years ago.
Question 1: I have created a MERN stack application, successfully containerized it, and pushed the image to Docker Hub. My friend wants access to my code, so he pulls the image from Docker Hub, but how would he be able to see my code?
Question 2: If he is not able to see the code, how can he change the code in his workspace?
Question 3: If an image only stores the setup configuration, then after he runs the image on his machine and runs the code, how does Docker affect versions if the user did not run the project in the same container?
You aren't exactly sharing "code" with Docker; you're sharing an image that can be run. It's the difference between sharing source code and a compiled executable.
If you want it to be changeable, you have to make it configurable through environment variables or arguments that are enumerated in the Dockerfile with ENV declarations, which image consumers can override with docker run --env <key>=<value>.
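For example (the variable names are made up): the Dockerfile declares defaults, and the image consumer overrides them at run time without ever seeing the source:

```
# defaults baked into the image; consumers can override them
ENV API_URL=http://localhost:3000
ENV LOG_LEVEL=info
```

```
# the image consumer overrides the defaults without touching the code
docker run --env API_URL=https://api.example.com --env LOG_LEVEL=debug myapp
```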
It seems you are quite new to programming. The usual way is to create a source code repository. There are several tools; Git is the most popular one. There are tons of free places to host your code so you and your friend can both access it, like GitHub, GitLab, or Bitbucket. On GitHub, you can create a private repository for free.
Then, as part of the code, you should include your Dockerfile. There are tons of places on the Internet with examples of how to create and edit a Dockerfile, and your friend can then build the image with docker build -t <name-of-the-app> . (note the trailing dot). A good start would be the Docker docs.
What you are asking for is Git, or another replacement such as Bitbucket, etc. If you want to share your code with your friend, globally or inside a company, you can use Git (a private or public repo, depending on what you want). Docker, on the other hand, is designed to create, deploy, and run applications much faster.
I think I understand your situation better now, so please let me know if this new part answers your questions:
Question 1:
You don't containerize YOUR MERN source code as such. Your image contains commands (layers) that build your code (your code is copied from your computer or cloned from a git repo).
Question 2:
The first answer covers this as well.
Question 3:
Your versions don't depend on Docker but on the git repository, because the source as such was never in the IMAGE you pushed to Docker Hub. When you docker run a container from your IMAGE, your code is there because of the COPY, ADD, or VOLUME instructions in your image (layers).
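As a sketch of that last point, a typical Node Dockerfile whose COPY layers put the source into the image at build time (base image and commands are illustrative):

```
FROM node:18
WORKDIR /app
COPY package.json .      # layer: dependency manifest
RUN npm install          # layer: installed dependencies
COPY . .                 # layer: your source code enters the image here
CMD ["npm", "start"]
```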

Best way to distribute a docker container to people with varying technical backgrounds [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
I'm building an application that people will run on their own server, similar to Moodle or WordPress. I'm assuming the people running the application will be familiar with executing commands on the command line, but I can't assume they are familiar with Docker.
What I'm thinking of doing is giving them instructions on how to install Docker and docker-compose. Most installations will be small enough that both the web server and database can run on the same machine, so they can just put the compose file in a directory and then run docker-compose up -d.
Would this be a good way to distribute the application? Of course, the docker-compose file would take into account all the considerations for running docker-compose in production.
You have two tasks:
1. Install Docker on server
You can use something like Ansible or just make a good manual page for them.
2. Run containers, build application, etc.
It is very easy to create a Makefile with basic commands:
make install
make reinstall
make build
make start
make stop
If you use Ansible for 1, you can use it for both 1 and 2.
If you don't need to automate 1, a Makefile is enough. It is simple and fast, and your users can understand what the Makefile does.
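A sketch of such a Makefile wrapping docker-compose (target names as listed above; the compose file name is an assumption):

```
COMPOSE = docker-compose -f docker-compose.yml

install:
	$(COMPOSE) pull
	$(COMPOSE) up -d

reinstall: stop install

build:
	$(COMPOSE) build

start:
	$(COMPOSE) up -d

stop:
	$(COMPOSE) down
```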
I think, why not? If your final users are OK with using Docker, I think that's a good way to do it.
It lets your final users avoid version and hardware differences, as you need, and you are able to push new versions of your containers, so you can do updates easily.
