Some questions about Docker images and containers [closed]

Question 1: I have created a MERN stack application and successfully containerized it and pushed the image to Docker Hub. My friend wants to access my code, so he pulls the image from Docker Hub, but how can he see my code?
Question 2: If he is not able to see the code, how can he change the code in his workspace?
Question 3: If an image only stores the setup configuration, then after he runs the image on his machine and runs the code, how does Docker affect the versions if the user did not run the project in the same container?

You aren't exactly sharing "code" with Docker; you're sharing an image that can be run. It's the difference between sharing source code and a compiled executable.
If you want it to be changeable, you have to make it configurable through environment variables or arguments that are declared in the Dockerfile with ENV instructions, which image consumers can override with docker run --env <key>=<value>.
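For example, a minimal sketch of that mechanism (the variable and image names are made-up placeholders, not from the original question):

    # Dockerfile: API_URL gets a default value that image consumers can override
    FROM node:16
    WORKDIR /app
    COPY . .
    ENV API_URL=http://localhost:3000
    CMD ["node", "server.js"]

A consumer can then run the image with their own value, without seeing or changing the code:

    docker run --env API_URL=https://api.example.com my-mern-app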

It seems you are quite new to programming. The usual way is to create a source code repository; there are several tools, and Git is the most popular one. There are plenty of free places to host your code so both you and your friend can access it, like GitHub, GitLab, or Bitbucket. On GitHub you can create a private repository for free.
Then, as part of the code, you should include your Dockerfile. There are plenty of places on the Internet where you can find examples of how to create and edit a Dockerfile, and then your friend can build the Docker image with docker build -t [name of the app]. A good place to start is the Docker docs.
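As a rough illustration, a minimal Dockerfile for a Node-based app might look like this (the file layout and names are assumptions, not from the question):

    # Dockerfile, kept in the project root next to package.json
    FROM node:16
    WORKDIR /app
    COPY package*.json ./
    RUN npm install
    COPY . .
    EXPOSE 3000
    CMD ["npm", "start"]

Your friend would build it from the project root with docker build -t my-app . (note the trailing dot: that is the build context in which the Dockerfile is looked up).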

What you are asking for is Git, or another option such as Bitbucket, etc. If you want to share your code with your friend, globally or inside a company, you can use Git (a private or public repo, depending on what you want). Docker, on the other hand, is built to create, deploy, and run applications much faster.

I think I understand your situation better now, so please let me know if this new part answers your questions:
Question 1:
You didn't containerize YOUR MERN code. Your image contains commands (layers) that build your code (your code gets copied from your computer or cloned from a git repo).
Question 2:
The answer to the first question covers this one as well.
Question 3:
Your versions don't depend on Docker but on your git repository, because the version history was NEVER in the image you pushed to Docker Hub. When you docker run a container from your image, your code is there because of the COPY, ADD, or VOLUME instructions in your image (layers).
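To make that concrete, here is a hedged sketch (the image name is a placeholder): code baked in with COPY is a snapshot taken at build time, while a mounted volume pulls in whatever is on the host right now:

    # Code baked into the image at build time (COPY in the Dockerfile):
    docker build -t my-app .      # snapshots the code as it is right now
    docker run my-app             # runs that snapshot, even if you edit files later

    # Code mounted from the host at run time (a bind mount):
    docker run -v "$(pwd)":/app my-app   # the container sees your current files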

Related

Pull image vs. Git pull [closed]

I am currently working on a production release pipeline and am asking myself: why do people even use container registries to build and push their images when they could also pull the whole repository and run compose up?
Don't get me wrong, I know that Docker is perfect for setting up identical environments, and I am using it too, in production and development. I would just like to know whether there are any benefits to pulling released images instead.
From my perspective, I set up every service my app depends on in docker-compose, which I would no longer have access to if my release pipeline pulled the production image instead. On the other hand, when I choose to pull the repo, I just run docker-compose up from my production folder and all dependencies are installed, including the dockerized application via its Dockerfile.
There are many reasons; let's pick out some of them:
Docker images are not just code
A Docker image contains everything that is necessary for an application. That can be a specific version of Java, PHP, or other dependencies and binaries (ping is a good example).
Docker images are prebuilt
A git repository contains only code. That means there are no dependencies in there. If you want to run that code in production, the production server must download all dependencies, which can be a lot, especially with npm, and then it has to build everything. The build process can take a long time and needs a lot of resources (CPU time, memory, I/O, ...). Those resources are not available to your users while the server is building.
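In docker-compose terms, the difference looks roughly like this (the registry and image names are placeholders):

    services:
      app:
        # Pull a prebuilt, versioned image from a registry (no build on the server):
        image: registry.example.com/my-app:1.4.2
        # ...instead of building from source on the production server:
        # build: .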
Docker containers are isolated
What happens if you want to run different applications on the same server? Spring Boot applications, for example, listen on port 8080 by default, and a port is an exclusive resource that can only be used by one process.
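With Docker, both applications can keep listening on 8080 inside their containers and be mapped to different host ports, for example (the container names are placeholders):

    docker run -d -p 8081:8080 app-one   # host port 8081 -> container port 8080
    docker run -d -p 8082:8080 app-two   # host port 8082 -> container port 8080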
Docker images are versioned
You can define versions for images, like node:16. Yes, you can get that in git with tags, but image versions are a lot easier to work with.
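For example (the image name and Docker Hub namespace are placeholders):

    docker pull node:16                     # pin a named version of a public image
    docker build -t myuser/my-app:1.2.0 .   # version your own image the same way
    docker push myuser/my-app:1.2.0         # consumers can now pin exactly this release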
It is not only Docker
Servers are changing: bare-metal servers are dying, and today we use clusters, clusters that can be autoscaled out of the box. Applications need a very short startup time, and that is not possible with git.
...and many more.

Basic questions about Docker [closed]

I just started teaching myself Docker but have run into all kinds of issues. I installed Docker on a Windows 10 Home laptop and did an initial test, which worked. Then I started this MS Learn tutorial: https://learn.microsoft.com/en-us/learn/modules/intro-to-containers/ but am running into all kinds of issues. Everything went well until exercise 5: https://learn.microsoft.com/en-us/learn/modules/intro-to-containers/5-exercise-create-custom-docker-image
I suspect I lack some knowledge of the basics, so here are my "stupid questions":
In Create step 3: where do I store the Dockerfile?
In Build step 1: the command throws an error because it can't find the Dockerfile (yes, I did store it without the .txt extension).
In Build step 1: where is the built image stored? In which directory should I build it?
In Test step 2: in which directory should I run it?
You can store the Dockerfile in your project's root. I don't know how the command works, but you can simply create a new file in Notepad++ and save it as Dockerfile. I suggest using Visual Studio Code instead (it is free and can help you a lot).
The built image is stored by your Docker daemon. The image is not a file in your working directory.
I think you mean step 1: it doesn't matter where you run docker run -p 8080:80 -d --name reservations reservationsystem. The image is stored in your Docker daemon, so you can run the command "anywhere" on your machine (it doesn't read or create files in the working directory).
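Putting those answers together, a typical sequence might look like this (using the names from the tutorial; the project path is an assumption):

    cd C:\path\to\project                 # the directory that contains the Dockerfile
    docker build -t reservationsystem .   # "." is the build context; the Dockerfile is found there
    docker image ls                       # the image now lives in the local Docker daemon
    docker run -p 8080:80 -d --name reservations reservationsystem   # works from any directory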

How to setup a docker dev environment for my team to work on? [closed]

I need to set up an environment with Docker containing multiple technologies, such as a DB, a test environment, continuous integration, and some other things. I also need it to be available for my teammates to use.
I don't quite understand Docker beyond the high-level concept of it, so I have no idea where to start. Useful answers would range from a step-by-step how-to to simply pointing me towards the right links for my problem. Thank you!
We intend to use either:
PostgreSQL
Node.js
Vue
Jenkins
, or:
PostgreSQL
Android Studio
Jenkins
To answer your first question about sharing a dev Docker setup with teammates: you need two different docker-compose files in your project, e.g. dev and prod.
On the other hand, while you're not yet comfortable with Docker, you'd better get into it step by step:
Learn about making a stateless application, because when you work with Docker you will want to scale horizontally later on.
Dockerize your apps (learn how to write a Dockerfile for your Node.js project).
Learn how to write a docker-compose file for a Node.js + Postgres application; test it and make sure the services are connected and share one Docker network that you create in docker-compose (see the sketch after this list).
You need a Docker registry, like Docker Hub or your own installation such as Nexus, to push your production-ready images after Jenkins' automated tests, which you can then deploy.
You can put your front end and back end in one docker-compose file, but I wouldn't recommend it, because multiple teams would then have to work with one docker-compose file, which might confuse them at first.
You can ask your DevOps team for a Jenkins installation and create your CI YAML files.
The docker-compose files you create live in the project directory, so anyone who clones the project has access to them.
Create a README file with clear instructions for building, testing, and deploying the project for both dev and prod environments.
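As mentioned above, a rough sketch of such a docker-compose file (service names, credentials, and ports are placeholder assumptions):

    # docker-compose.yml
    version: "3.8"
    services:
      api:
        build: .                 # Dockerfile of the Node.js app
        ports:
          - "3000:3000"
        environment:
          DATABASE_URL: postgres://app:secret@db:5432/appdb
        depends_on:
          - db
        networks:
          - backend
      db:
        image: postgres:14
        environment:
          POSTGRES_USER: app
          POSTGRES_PASSWORD: secret
          POSTGRES_DB: appdb
        volumes:
          - db-data:/var/lib/postgresql/data
        networks:
          - backend
    networks:
      backend:
    volumes:
      db-data: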
I don't know whether this will help or not, because your question was not specific, but I hope it will.

Why do I not see the changes in functionality when I upload a new jar to AWS EC2? [closed]

I have a question: I have tried making changes to a REST service built with Spring Boot and deploying it to EC2 with a new jar generated with mvn install -DskipTests, but when running it with docker-compose up I don't see that the functionality of the service has changed. I have tried restarting the EC2 instance, but nothing happens. Is there a special way to upload a change to EC2, or is there some step I don't know about?
I have searched the internet but can't find the answer, which is why I am asking here.
The change I am making is to a user's expiration date; I have tried changing it to months, more days, and so on, but I don't see the changes reflected in the service's behaviour when I generate and upload a new jar (obviously, I pulled the changes from GitHub first). The strange thing is that when I test it locally (on my computer) these changes to the REST service work perfectly, but when uploading it to EC2 no changes are visible.
I hope you can help me; thank you very much in advance.
At this point I am just assuming you have:
a Dockerfile
a JAR <-- that you modify
a docker-compose.yaml file
You ran docker-compose up -d once to start and build your service. You need to run docker-compose up -d --build for any subsequent changes to your JAR or Dockerfile; otherwise docker-compose will just reuse the already-built image. You can check that with docker image ls.
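So the fix, in short (the same commands as above):

    mvn install -DskipTests        # rebuild the JAR with your changes
    docker-compose up -d --build   # rebuild the image instead of reusing the cached one
    docker image ls                # the CREATED column should now show a fresh image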

Best way to distribute a docker container to people with varying technical backgrounds [closed]

I'm building an application that people will run on their own server, similar to Moodle or WordPress. I'm assuming the people running the application will be familiar with executing commands on the command line, but I can't assume they are familiar with Docker.
What I'm thinking of doing is giving them instructions on how to install Docker and docker-compose. Most installations will be small enough that both the web server and the database can run on the same machine, so they can just put the compose file in a directory and run docker-compose up -d.
Would this be a good way to distribute the application? Of course, the docker-compose file would take into account all the considerations for running docker-compose in production.
You have two tasks:
1. Install Docker on the server
You can use something like Ansible, or just write a good manual page for them.
2. Run containers, build the application, etc.
It is very easy to create a Makefile with basic commands:
make install
make reinstall
make build
make start
make stop
If you use Ansible for 1., you can use it for both 1. and 2.
If you don't need to automate 1., a Makefile is enough. It is simple and fast, and your users can see what the Makefile actually does.
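As a sketch, such a Makefile could simply wrap docker-compose (the target bodies are assumptions; adapt them to your setup):

    # Makefile (recipe lines must be indented with tabs)
    install:
    	docker-compose pull
    	docker-compose up -d

    reinstall: stop install

    build:
    	docker-compose build

    start:
    	docker-compose up -d

    stop:
    	docker-compose down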
I think, why not? If your final users are OK with using Docker, I think that's a good way to do it.
It lets your final users get rid of version and hardware differences, as you need, and you are able to push new versions of your containers so that you can roll out updates easily.
