I want to use multiple individual Docker files/containers in one combined application. Suppose I have three containers. Two of them, Framework A and Framework B, process data in two different ways but produce the same output. The third container, main, will process that output. In my application I would like to use either Framework A or Framework B, or perhaps both.
I looked at a solution for combining multiple Docker containers into one, but this does not seem to be supported by Docker and is also not exactly what I want. So I looked into Docker volumes, but I don't really want to write or store the data from the two individual frameworks, just pass it to the third container. I also read about docker-compose, but again this does not seem to be exactly what I want.
Suppose that in the main container I have some Python code that calls code from either Framework A or Framework B, and the data then gets passed from either of those two to main (a Python object, an array, or something similar). What would be the best way to achieve this? And is it easily interchangeable, given that the outputs of Framework A and Framework B are the same?
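For concreteness, this is the kind of wiring I imagine in main, as a rough sketch: both framework containers would expose the same small HTTP API, and main picks one via configuration. The hostnames, port, and /process route here are placeholders I made up, not something I have working:

```python
# main container: call Framework A or B over HTTP (sketch only; the
# service hostnames, port, and /process route are assumptions).
import json
import os
from urllib.request import Request, urlopen

# Point this at the other container to swap frameworks.
FRAMEWORK_URL = os.environ.get("FRAMEWORK_URL", "http://framework-a:8000")

def process(data):
    # Both frameworks accept and return the same JSON shape, so they
    # are interchangeable behind this one function.
    req = Request(
        FRAMEWORK_URL + "/process",
        data=json.dumps(data).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urlopen(req) as resp:
        return json.loads(resp.read())

if __name__ == "__main__":
    print(process({"values": [1, 2, 3]}))
```

With something like docker-compose, framework-a and framework-b would simply be service names on a shared network, and swapping (or running both) would mean changing one environment variable.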
Related
Migrating from a monolith to a microservice architecture with a single API gateway to access multiple services, e.g. cars, shoes, mobiles, etc. I'm using .NET 6, Docker, Docker Compose, and Ocelot for my project. I'd highly appreciate your feedback on my question below, based on two scenarios.
Scenario 1
Number of solutions [ApiGateway.sln, Cars.sln, Shoes.sln, Mobiles.sln, ...]
Docker Container [ApiGateway, Cars, Shoes, Mobiles, ...]
Docker Sub Containers for Cars [Hyundai, Honda, ...]
Ocelot used for [ApiGateway, Cars, Shoes, Mobiles]
Sub-ApiGateways: used for all services. The MasterApiGateway will interact with the Sub-ApiGateways of each service.
Details: For instance, a call for getting all Hyundai cars is made. The MasterApiGateway calls the Cars service. The Cars service then uses its own API gateway, configured using Ocelot, to call the required project, i.e., the Hyundai.csproj methods.
Yes, this can be simplified by removing Ocelot from Cars and converting the projects into methods.
Scenario 2
Number of solutions [ApiGateway.sln, Services.sln]
Docker Container [ApiGateway, Services]
Docker Sub Containers for Services [Cars, Mobiles, Shoes, ...]
Ocelot used for [ApiGateway]
Details: This is the more mainstream setup, but what if each service, e.g. Cars, is a big project in itself? For that reason I've tried to separate the services, i.e., cars.service, mobiles.service, hosted on different ports. Again, what if a service has a huge module, e.g. cars.services.honda with over 1000 methods? For that I've created sub-projects within Cars, again hosted on different ports. However, I am trying to encapsulate these sub-projects as a single service, i.e., for Cars only port 5000 will be used by the MasterApiGateway.
Please suggest the best way to achieve this. Again, each service, and each sub-project within each service, is a huge project in itself, so having all of these in one solution is something I'm trying to avoid. Thank you for your feedback.
This is a design problem; it is highly abstract and depends on business requirements, so there is no absolute solution.
The scenario where you have a Cars service with an API per car type may look like the proper one, BUT, as you said, each of them is huge. This is my opinion, not a solution:
If it is just huge in data, don't bother yourself; it's better to go for one Cars service.
If all car types share the same sort of functionality (methods, processes, etc.), then one service is OK.
If each car type has its own methods and processes (not just getting data), then you have complexity in the business logic; go for a service per car type, or for one main Cars service with the shared functionality, supported by services specific to each car type containing the functionality particular to that type. Here the Cars service may play the role of an aggregator service (see the sketch after this list).
If the Cars service becomes so huge in code size that maintenance requires more than, say, 5 colleagues (the number may vary depending on organization size, etc.), then it should be broken into pieces.
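To make the aggregator idea concrete, here is a minimal sketch, in Python purely for illustration (the question's actual stack is .NET 6 + Ocelot); the service names, hostnames, and ports are all hypothetical:

```python
# Minimal aggregator sketch: one public port for Cars, fanning out to
# internal per-brand services. Illustrative only; not Ocelot.
import json
from urllib.request import urlopen
from http.server import BaseHTTPRequestHandler, HTTPServer

# Internal sub-services, one per brand (hypothetical addresses).
BRAND_SERVICES = {
    "hyundai": "http://cars-hyundai:5001",
    "honda": "http://cars-honda:5002",
}

class CarsAggregator(BaseHTTPRequestHandler):
    def do_GET(self):
        # Route /cars/<brand>/... to the matching internal service.
        parts = self.path.strip("/").split("/")
        if len(parts) >= 2 and parts[0] == "cars" and parts[1] in BRAND_SERVICES:
            upstream = BRAND_SERVICES[parts[1]] + "/" + "/".join(parts[2:])
            with urlopen(upstream) as resp:
                body = resp.read()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # Only this port (cf. the question's port 5000) is visible to the
    # MasterApiGateway; the brand services stay internal.
    HTTPServer(("", 5000), CarsAggregator).serve_forever()
```

The point is only the shape: the MasterApiGateway sees a single Cars endpoint, while the per-brand sub-projects remain an internal detail of the Cars composition.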
Also look at ubiquitous language in Domain-Driven Design. At the very least it helps make your architecture more appropriate through intensive communication with domain experts.
Your problem is the genuinely challenging part of microservices (true microservices), and it is beyond my experience (I am still studying microservice architecture, and I keep finding huge mistakes in my previous work). So please discuss and study further, and don't just rely on what I said.
These two articles are very useful:
decompose-by-subdomain decompose-by-business-capability
The first question you should ask yourself is why you need microservices; usually it's better to start with a modular monolith and then break out one service at a time when needed...
You really need a clear understanding of why you are doing it, and not to do it just for the fun of creating a solution like this.
I agree with what Rouzbeh says about Domain-Driven Design: start there! Find your true bounded contexts, using the ubiquitous language as a guide.
Let me start with the preface that I'm pretty new to the docker world.
I've found myself (and colleagues) searching for Docker images on several occasions, looking for any image that contains specific binaries. E.g., we want to run Selenium-based tests in a headless Chrome, so we look for an image that contains Node, Chrome, and Selenium.
This kind of search, at least in my experience, is not well supported by Docker Hub. You can search there for each ingredient separately and find great images for them, but you don't find images containing all the ingredients at once.
Am I missing an obvious place to look for?
Needless to say, we want to avoid creating our own images if we can, as we would have to maintain them in the long run, and we were under the impression that we couldn't be the first ones to need such an image.
I have my front end in Angular and my back end in Java Spring MVC. Do you recommend separate Docker images for the two, or a single composite image combining both, to be deployed in production?
If your front end requires a separate process to run (and from your description it sounds like it does), then you should have 2 separate images.
Rule of thumb: you can have only one main process per image. If you find yourself in a situation where you need more than one, you should separate it into another image.
I'm in the process of designing a web service hosted with Google App Engine, comprising three parts: a client website (or more than one), a simple CMS I designed to edit and view the content of that website, and lastly a server component to communicate between these two services and the database. I am new to Docker and currently doing research to figure out how exactly to set up my containers and how to structure my project.
I would like each of these to be a separate service, and therefore to put them in different containers. From my research it seems perfectly possible to put them in separate containers and still have them communicate, but is this the optimal solution? Also bear in mind that in the future I might want to scale up, so that my back end can supply multiple different front ends, all managed from the same CMS.
tldr:
How should I best structure my web service with Docker, assuming my back end may supply more than one front end, all managed from the same CMS?
Any suggestions for tools or design patterns that make my life easier are welcome!
Personally, I don't like to think of design in terms of containers. Containers are for the deployment process; that is their main goal.
If you keep your logic in separate components/services, you'll be able to combine them within containers in many different ways.
Once you have criteria for what suits your product requirements (performance, price, security, etc.), you'll configure your Docker images in the way you prefer.
So my advice is to focus on the design of your application first. Start from the set of components you have, provide a Dockerfile for each one, and then see what you have to change.
I've been following Daniel Cazzulino's series about building a DI container using TDD. In part five of the series, he adds support for container hierarchies without commenting on what makes this feature useful. I've seen mention of support for hierarchies in many of the DI frameworks, but I'm having trouble understanding when they'd be used, and why. Can someone offer some insight?
I left a comment on kzu's blog asking the same question. It's a shame he didn't clarify the use-case for such a feature before coding it.
The only thing I could think of is if you wanted to have different types resolved from your container in different parts of your app. For example, if you had an order-entry system with two separate sections, and each section was identical except that they needed to present a different product list, you could create a child container for each section, and "override" the registration of your product repository in each. Whenever a section tried to resolve a product repository (or anything that depended on one) it would get the instance you set up in the child container rather than the parent. Sort of like overriding a virtual method.
This might be way off base, but it's the best I could come up with.
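kzu's container is C#, but the mechanics are language-agnostic. Here is a toy sketch in Python of that "override in a child container" idea; the API and names are made up for illustration, not taken from the series:

```python
# Toy child-container sketch (illustration only; not kzu's actual API).
# A child resolves locally first and falls back to its parent. Factories
# receive the *originating* container, so a parent registration can
# depend on something that each child overrides.
class Container:
    def __init__(self, parent=None):
        self._parent = parent
        self._registrations = {}  # key -> factory(container)

    def register(self, key, factory):
        self._registrations[key] = factory

    def resolve(self, key, _origin=None):
        origin = _origin if _origin is not None else self
        if key in self._registrations:
            return self._registrations[key](origin)
        if self._parent is not None:
            return self._parent.resolve(key, _origin=origin)
        raise KeyError(f"no registration for {key!r}")

    def create_child(self):
        return Container(parent=self)


# Two sections of an order-entry app "override" the product repository,
# much like overriding a virtual method:
app = Container()
app.register("logger", lambda c: "shared logger")

section_a = app.create_child()
section_a.register("products", lambda c: ["product list A"])

section_b = app.create_child()
section_b.register("products", lambda c: ["product list B"])

assert section_a.resolve("logger") == "shared logger"       # inherited
assert section_a.resolve("products") == ["product list A"]  # overridden
assert section_b.resolve("products") == ["product list B"]
```

The one subtle choice here is that factories resolve their dependencies against the originating (child) container; that is what lets a shared parent registration pick up each child's overrides.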
Here's a sample that uses child containers in a scenario similar to the one Matt describes. It uses child containers to select between different database configurations.
The key here is that most of the configuration is shared between the child containers (and that shared part belongs in the parent container).
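In the style of the toy container sketched above (again, hypothetical names), that looks like shared registrations in the parent, with each child supplying only its own database connection:

```python
# Shared wiring lives in the parent; each child supplies only its own
# database connection (names are made up for illustration).
root = Container()
root.register("serializer", lambda c: "shared serializer")
root.register("repository", lambda c: ("repo", c.resolve("db_conn")))

test_env = root.create_child()
test_env.register("db_conn", lambda c: "sqlite://test")

prod_env = root.create_child()
prod_env.register("db_conn", lambda c: "postgres://prod")

# The shared "repository" registration resolves db_conn against
# whichever child asked for it:
print(test_env.resolve("repository"))  # ('repo', 'sqlite://test')
print(prod_env.resolve("repository"))  # ('repo', 'postgres://prod')
```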
There is good reason to have child containers if dependency injection is fully embraced by the project. Let's imagine an application that processes messages from two different, but similar, systems. Most of the processing is alike, but there are variations needed for compatibility with those systems. Our aim is to reuse what code we can, while writing different code where the requirements differ.
In OO programming, we wire together a series of classes that will collaborate to meet the system requirements. The DI container takes this responsibility. When a message arrives from a system, we want to build a set of collaborating classes suitable for processing a message from that particular system.
We have a top-level container holding the items that do not vary between the two systems, and child containers for the parts that do vary. When a message arrives, we ask the appropriate child DI container for a messageProcessor. Based on the configuration of that container (falling back to the parent container as necessary), the DI framework can return a messageProcessor (an object backed by the appropriate collaborators) for the system in question.
Please leave a comment if this is not a clear answer. Also, you can search for "robot legs problem". Each leg is identical but one needs a left foot and the other needs a right foot. We could have a child DI container for each leg.
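Using the toy container sketched earlier (purely illustrative), the robot-legs setup would look like this:

```python
# Robot legs with the toy container from above: the leg wiring is shared
# in the parent, while each child container supplies its own foot.
robot = Container()
robot.register("leg", lambda c: "leg standing on " + c.resolve("foot"))

left = robot.create_child()
left.register("foot", lambda c: "a left foot")

right = robot.create_child()
right.register("foot", lambda c: "a right foot")

print(left.resolve("leg"))   # leg standing on a left foot
print(right.resolve("leg"))  # leg standing on a right foot
```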
The best example that I'm aware of for nested containers is a windowing system. Purely for separation of concerns, it's very nice to have each tab/window own its own container, independent of the other tabs/windows, with all the window containers inheriting global dependencies from a parent container.
This is especially needed if you can have duplicate tabs/windows, since in many cases you want distinct instances of various classes for each duplicate tab/window.