I am a bit confused about Docker and how I can use it. My situation is the following:
I have a project that requires a prerequisite, in my case installing ROS 2. I have installed it on my system and developed a program. No problem there.
I wish to upload it to GitLab and use CI/CD there. So I am guessing I will push it to my repository and then build a pipeline that uses the ROS 2 Docker image as its image. I haven't tried it yet (I will do it tomorrow), but I guess that is how I should do it.
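To make it concrete, I imagine the pipeline looking roughly like this (the image tag and the build commands are just my guesses, not something I have tested):

    # .gitlab-ci.yml -- rough guess; image tag and commands are placeholders
    build:
      image: ros:humble       # pre-built ROS 2 image instead of installing ROS 2 on the runner
      script:
        - colcon build        # build the workspace
        - colcon test         # run the tests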
My question is: can I do something similar (and if so, how) on my local machine? In other words, can I just use the Docker image and then develop and build in there, and not install the prerequisite in the first place?
I heartily agree that using Docker to develop locally improves the development experience, primarily by obviating system-specific dependency management, just as you say.
Exactly how this is done depends on how many components you need to develop simultaneously, and how you want the development environment to behave.
An obvious place to start might be Docker Compose, a framework for starting multiple Docker containers. https://docs.docker.com/compose/gettingstarted/ looks like quite a nice tutorial on the subject, and straight from the horse's mouth too.
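If a single development container is enough, the idea can be as small as this hypothetical compose file (the image tag and paths are assumptions on my part), which mounts your source into a pre-built ROS 2 image so you can edit on the host and build inside the container:

    # docker-compose.yml -- minimal sketch; image tag and paths are assumptions
    services:
      dev:
        image: ros:humble          # official ROS 2 image; pick the distro you actually use
        volumes:
          - ./src:/workspace/src   # host source shows up inside the container
        working_dir: /workspace
        command: sleep infinity    # keep the container alive so you can exec into it

With that in place, docker compose up -d followed by docker compose exec dev bash drops you into a shell where ROS 2 is already installed and you can build your mounted source there.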
However, your robotics project (?) may not be a very good fit for the server/client model behind the write / restart Python / execute client / debug / repeat cycle in that document. To provide a better answer, we'd need a much better understanding of how your local development actually works; what you want your development process to look like in this project might require a different solution. So add some workflow details to your question!
I have been working with Docker for Windows for about a year now, and I still do not have a good grasp of when I should use the different images, how they are related, and what components of Windows are in them.
On this link:
https://hub.docker.com/_/microsoft-windows-base-os-images
there are four "Featured repos":
windows/servercore
windows/nanoserver
windows/iotcore
windows
I understand that windows/servercore should contain more things than nanoserver, but what are those things exactly? Why do some programs work in servercore and not in nanoserver, and is there some way of finding out what is missing in nanoserver for a particular program?
In addition to this, they list three related repos:
microsoft/dotnet-framework
microsoft/dotnet
microsoft/iis
Both of the dotnet repos contain five sub repos, and the difference is that dotnet-framework is based on server core, while dotnet is based on nanoserver.
Is there some comprehensible documentation of all these repos/images, maybe with a graph for a simple overview? Do some of them have a public Dockerfile that explains how they were created, like, for example, this one?
https://github.com/docker-library/python/blob/master/3.6/windows/windowsservercore-ltsc2016/Dockerfile
The differences you are mentioning are less linked to Docker than you think.
Every image is a succession of operations that results in a functioning environment. See it as an automated installation, just like the one you would do by hand on a physical machine.
Having different images in a repo means that the installation is different, with different settings. I'm not a .NET expert nor a Windows Server enthusiast, but from what I found, Nano Server is another way to install Windows Server, with less functionality, so it's lightweight. (https://learn.microsoft.com/en-us/windows-server/get-started/getting-started-with-nano-server)
Those kinds of technical differences are technology-specific, and you'll find all the information you need in Microsoft's official documentation.
Remember that Docker is a way to do something, not the designer of the OS you are using, so most of the time you'll have to search the actual documentation of your system (in this case, Windows Server and the .NET Framework).
I hope this helps you understand it a little better; have fun with Docker!
I started looking into Docker lately, and I think I understand a lot of the benefits it offers: you can quickly create a Docker container and run it on different machines. Building (compiling) is also relatively easy; you can download the Maven image, for example, and just build your code. That works fine. So building is easy, testing is easy, and deploying (and running) in production is easy.
What I don't understand is how Docker can make the development phase easier. What I mean by the development phase is: starting up your IDE, reading code, quickly navigating through your method definitions using the tools the IDE provides, using IntelliSense, etc. Then changing something, running a unit test, trying a different third-party library, etc. All things you can do with your IDE. But I don't understand how to do this with a Docker image. I've read a few posts about starting the IDE from within your Docker container, but that requires setting things up with a window manager, and I am not sure that's the way to go.
Of course I can set up my laptop in such a way that I can do all of this with my IDE, but that way I bypass all of the benefits Docker should offer. I still have to download dependencies, set up environment variables, do a lot of manual configuration, etc. And not just me, but everyone on the team.
So, not a very concrete question, possibly a duplicate, but I just can't wrap my head around it, how to use an IDE together with docker?
Yeah, it's hard. It also depends on what language/framework you're using. But the things you mention are all easy to accomplish. For example, we use Ruby a lot, and someone on my team uses RubyMine to work with his code. That source code is mapped into the container, so the changes are reflected immediately. If you want to run a test, I'm sure you can override the command your IDE runs by default with something custom like docker run --rm myapp ./run_tests.sh or similar. At least that's what I do with Vim.
Probably the most important missing part when doing dev with Docker is debugging. I think JetBrains is starting to add features to their IDEs, but I'm not sure about the status of that.
Also, almost every IDE or good editor has an integrated console. You could keep a docker exec session open there and run all your app commands: tests, generators, or anything else. You can even do some basic debugging.
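As a hypothetical example (the image and script names are placeholders), the whole setup can be as simple as a long-running container with your working copy bind-mounted, which the IDE's terminal then execs into:

    # start a dev container with the host working copy mounted at /app
    docker run -d --name myapp-dev -v "$PWD":/app -w /app myapp sleep infinity

    # run commands against it from the IDE's integrated terminal
    docker exec myapp-dev ./run_tests.sh
    docker exec -it myapp-dev sh    # interactive shell for ad-hoc poking around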
Hope it helps.
When building Docker images, I find myself in a strange place -- I feel like I'm doing something that somebody has already done many times before, and did a vastly better job at. In most cases, this gut feeling is absolutely right: I'm taking a piece of software and re-describing in a Dockerfile everything that's already described in the OS's packaging system.
More often than not, I even find myself installing software into the image using a package manager and then looking inside that package to get some clues about writable paths, configuration files, open ports, etc. for my Dockerfile. The duplication of effort between OS packager and Docker packager is most evident in such a case, which I assume is one of the more common ones.
So basically, every Docker user building an image on top of pre-packaged software is re-packaging almost from scratch, but without the time and often the domain knowledge the OS packagers had for trial, error and polish. If we consider the low reusability of community-maintained images (re-basing from Debian to RHEL hurts), we're stuck with copying or re-implementing functionality that already exists and works on OS level, wasting a lot of time and putting a maintenance burden on the poor souls who'd inherit whatever we might leave behind.
Is there any way to resolve this duplication of effort and re-use whatever package maintainers have already learned about a piece of software in Docker?
The main source for Docker image reuse is hub.docker.com
Search there first to see whether your system is already described in one of those images.
You can look at their Dockerfiles, and start your own from one of those images instead of starting from a basic ubuntu or wheezy one.
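For example, a minimal, hypothetical Dockerfile for a Python app (the base image tag and file names are assumptions) only has to describe your own layers, because the runtime and its OS-level dependencies are already packaged in the official image:

    # Dockerfile -- sketch; base image tag and file names are placeholders
    FROM python:3.11-slim
    WORKDIR /app
    COPY requirements.txt .
    RUN pip install --no-cache-dir -r requirements.txt
    COPY . .
    CMD ["python", "app.py"]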
I need to integrate a CI (continuous integration) + CD (continuous deployment) tool for my Ruby on Rails application.
It would be great if it had the following features:
A code metrics tool like metric_fu
GitLab/GitHub for repository management
Following is the list of CI tools I'm considering:
https://travis-ci.org/
https://circleci.com/
https://www.shippable.com/
https://www.codeship.io/
https://wiki.jenkins-ci.org
http://cruisecontrolrb.thoughtworks.com/
https://drone.io/
Please help me compare the features of these CI tools.
(full disclosure, I work for ThoughtWorks)
I'd replace CruiseControl.rb with http://www.go.cd/ or http://snap-ci.com - ThoughtWorks hasn't been actively updating CruiseControl.rb since Go CD came out several years ago. Both of the newer tools (also from TW) are designed with CD at their core.
To the main part of your question: with any of the tools you mentioned, I'd recommend calling the code metrics tool of your choice as one of the stages, rather than relying on the CI/CD tool to have that knowledge natively. Generally speaking, CI/CD servers are used to orchestrate a process that relies on several other tools: build systems, metrics tools, testing tools, etc.
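As a rough, hypothetical illustration with Travis CI (the Ruby version and the exact commands are placeholders), running the metrics tool as its own script step could look like this:

    # .travis.yml -- sketch; versions and commands are placeholders
    language: ruby
    rvm:
      - 2.1
    script:
      - bundle exec rake            # the normal test suite
      - bundle exec metric_fu       # the metrics tool as a separate step

The same shape carries over to the other tools on your list: the server just runs the commands, and the metrics knowledge stays in the tool itself.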
Magnum CI should be great for what you need.
It also deploys to Heroku, which hosts Rails apps. So if you want it to run some tests then deploy automatically, it sounds like it could be a good fit!
I found magnum-ci.com recently and feel it suits my needs entirely.
Hope this may help RoR enthusiasts...
I am working on a Django project, and our team follows agile practices (TDD, very short release cycles, etc.).
When it comes to the deployment stage of a feature, we follow manual procedures to get the code deployed onto our production machine. By manual procedures I mean: run the test cases (of the entire project) manually, run syncdb (or apply any recent DB changes) manually, and other such steps.
This manual work takes about half an hour per day, every day, so I want to automate it. Is there an automatic deployment system, or something that takes the commands I give and does what I want, or should it only be done manually, using a simple shell file that holds all my commands, roughly like the sketch below? In that case I need to learn a bit of shell programming, because I don't want to update the code on my production machine if any test case fails; conditions like that exist for deploying, which may involve a bit of shell scripting, I guess.
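(The shell file I have in mind would be roughly this; the commands are just my current manual steps with a guard in front:)

    #!/bin/sh
    set -e                             # abort at the first failing command
    python manage.py test              # the full test suite; nothing below runs if it fails
    git pull origin master             # update the code on the production machine
    python manage.py syncdb --noinput  # apply the latest DB changes
    # restart the app server here, e.g. reload the WSGI process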
I just started researching this myself, and it looks like the Python equivalent of Capistrano is Fabric, although maybe not as mature. There seems to be a growing number of people wanting to switch over simply because Capistrano is too Ruby/Rails-centric, although I haven't seen anyone complain about Capistrano being incapable or lacking. YMMV. More info here and here.
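To give a feel for it, here's a minimal, hypothetical fabfile using Fabric's classic API; the host, paths, and restart step are placeholders for your own setup:

    # fabfile.py -- sketch; host, paths and restart step are placeholders
    from fabric.api import local, run, cd, env

    env.hosts = ['user@production-server']

    def deploy():
        local('python manage.py test')   # aborts the deploy if the test suite fails
        with cd('/srv/myproject'):
            run('git pull')
            run('python manage.py syncdb')
            run('touch django.wsgi')     # or however you reload your app server

Running fab deploy from the project directory then performs the whole sequence, and a failing test stops it before anything touches production.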
EDIT: Looks like Fabric is undergoing a lot of changes. They're moving their homepage to http://docs.fabfile.org/
So in Agile terms, you've identified duplication in your deployment process; now you're looking for ways to refactor it. ;-)
In the Ruby/Rails world, you'd turn first to Capistrano or Vlad The Deployer, and in the Python/Django world you'd turn to the equivalent, except AFAIK there isn't one.
It seems from this post that Capistrano has been used successfully to handle Django deployment, so that's an option worth exploring.
Beyond that, why would you need shell programming? You have a super-powerful scripting language in Python - why not automate your manual process in that?