Set DB Alias in docker container - asp.net-mvc

I'm trying to containerize a Web API project using the aspnet Docker image. My Web API needs to use a connection string with Data Source = <DB alias>. I can use cliconfg.exe to set the DB alias on Windows, but I'm not able to set it in the aspnet Docker image because the image does not have the SQL client tools installed.
Is there a way I can install the SQL client tools in my Docker container and deploy the Web API with a connection string that uses a DB alias?

I've had the same problem, especially because I use Linux. If you only want the alias to resolve for the MSSQL database connection, you can add a host alias to /etc/hosts via the docker command line:
--add-host=myDbAlias:127.0.0.1
On Linux you just add host mappings for your aliases, and that's all.
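For example (a sketch; the alias, the IP address, and the image name are illustrative), mapping the alias to the real SQL Server address at run time lets the baked-in connection string keep Data Source=myDbAlias:

docker run -d --add-host=myDbAlias:10.0.0.12 my-webapi-image

Note that --add-host only maps a hostname; unlike a registry alias it cannot remap the protocol or port, so the real server must be listening on the default SQL Server port.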
On Windows
Aliases are added under these branches of the Windows Registry on the client-side machine:
HKLM\Software\Microsoft\MSSQLServer\Client\ConnectTo
HKLM\Software\Wow6432Node\Microsoft\MSSQLServer\Client\ConnectTo
Inside, add a string value (REG_SZ) named after your alias, with the data "DBMSSOCN,YourRealServerName", where YourRealServerName is your real server name.
You can of course add your alias to the hosts file as well, but if you don't have the privileges to edit system files, use the registry solution.
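If you are building a Windows container, the same registry entries can be scripted, e.g. in the Dockerfile or an entrypoint script (a sketch; the alias and server name are illustrative):

reg add "HKLM\Software\Microsoft\MSSQLServer\Client\ConnectTo" /f /v myDbAlias /t REG_SZ /d "DBMSSOCN,realServer.example.com"
reg add "HKLM\Software\Wow6432Node\Microsoft\MSSQLServer\Client\ConnectTo" /f /v myDbAlias /t REG_SZ /d "DBMSSOCN,realServer.example.com"

DBMSSOCN selects the TCP/IP network library; append a comma and a port number to the data if the server is not on the default port.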

Related

How do you manage the variation between local and cloud dependencies within Docker?

I have a Docker image with an application server running in it.
When I'm running in a development environment, I want to run a database server within the same Docker image.
However, in production, I want to use my cloud provider's database service to host my database server.
What is the best (preferably officially supported) way to enable this distinction?
You Don't
You don't run the DB in the same container. You run it in a separate container next to your application container (probably with docker-compose, but that's not required).
You run the same version as the cloud provider (or as close as you can get, because they will no doubt configure it specifically for their environment).
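For example, with docker-compose you might keep the database as a dev-only service (a sketch; the service names, image tag, and the DATABASE_URL variable are assumptions about your app):

# docker-compose.yml (development)
version: "3"
services:
  app:
    build: .
    environment:
      # hypothetical variable; use whatever your app actually reads
      DATABASE_URL: postgres://app:secret@db:5432/appdb
    depends_on:
      - db
  db:
    image: postgres:13   # match your cloud provider's version as closely as you can
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: appdb

In production you deploy only the app service and point DATABASE_URL at the cloud provider's endpoint instead.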

pgAdmin-4 container connected to PostgreSQL server at boot

I am running pgAdmin-4 as a docker container alongside my PostgreSQL deployment (in docker containers as well).
I am able to connect to the WebUI and manually add the DB server, getting access to all the needed information.
Is there any way to make the pgAdmin container automatically connected to my PostgreSQL server without the need for a manual configuration after the launch?
Thank you
You can export the list of server details to a JSON file and, after starting your instance, import it into pgAdmin 4. See export/import servers.
Then you can map the resulting JSON file into the Docker container as mentioned in the documentation.
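A sketch of both pieces, assuming the dpage/pgadmin4 image and a PostgreSQL container reachable as the hostname postgres on the same Docker network (names are illustrative):

servers.json:
{
  "Servers": {
    "1": {
      "Name": "my-postgres",
      "Group": "Servers",
      "Host": "postgres",
      "Port": 5432,
      "MaintenanceDB": "postgres",
      "Username": "postgres",
      "SSLMode": "prefer"
    }
  }
}

Then mount it at the path the image imports on first startup (check the documentation for your version):

docker run -p 8080:80 \
  -e PGADMIN_DEFAULT_EMAIL=admin@example.com \
  -e PGADMIN_DEFAULT_PASSWORD=secret \
  -v $(pwd)/servers.json:/pgadmin4/servers.json \
  dpage/pgadmin4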

How should I containerize my application requiring apache/php/mysql with an authenticated and public site experience?

I’ve spent months building an application and now I’m looking to deploy it, but I’m new to Docker and I seem to have a mental block when it comes to actually containerizing my application. I need to run the following technologies:
php 7.2
mysql 5.7
apache 2.4
phpMyAdmin 4.7
My application will need to be available exclusively through https and I’m assuming the connection between my application and the mysql container will also need to be through a secure port.
In addition to that I have a wordpress site that will serve as the pre-login experience for my application that I’d like to dockerize, but should not share the same DB. When I move this to a prod environment, I will not include the phpMyAdmin container.
How many containers do I need? I was thinking that I would need at least 5:
apache
php
mysql (my application)
mysql (wordpress)
phpmyAdmin
Should my application and the wordpress site live in the php container, or should I create separate containers for each?
What should my docker-compose.yml file and dockerfiles look like to achieve this feat?
The driving idea here is that a container should contain a single "service". You don't break things into containers by software component (php, apache, etc.) but rather by whatever needs to be combined to create a single service. So if your application is a PHP application hosted by Apache, then you'd want a container for your application that contained PHP, Apache and your application code. That would provide your application as a service.
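Concretely, the application service's image might be built from a Dockerfile along these lines (a sketch; the paths, extensions, and SSL details are assumptions):

FROM php:7.2-apache
# extensions your app likely needs; adjust to match your code
RUN docker-php-ext-install pdo_mysql
# HTTPS per your requirement; you still need to add a certificate and vhost config
RUN a2enmod ssl rewrite
COPY ./src/ /var/www/html/
EXPOSE 443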
Same goes for Wordpress. If Wordpress is running behind Apache and needs PHP, you'd create a second container containing PHP, Apache, WordPress, and your WordPress content, producing your "Wordpress service".
Each of your individual databases can be seen as a service, so you might want two containers running MySQL, one serving each of your databases. You could choose to consider the database server as a whole to be a service, and have it serve both of your databases. Then you could get away with a single MySQL container. Which way you go with this is a minor issue. Having a single database server will likely save a little bit of resources by avoiding some duplication.
If all of your services need to talk to each other, the easiest way to do this with Docker is to use Docker Compose. This lets you create multiple containers that know about each other and can communicate very easily between each other by way of some simple DNS logic that Docker Compose provides. With Compose, you give each of your containers a simple name, and then that name can be looked up via DNS to provide the IP address of each container. So for example, if your MySql container was named "mysql", your app container could connect to it via the DNS address "mysql" with no additional work on your part.
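Putting that together, a docker-compose.yml for the stack described above might look like this sketch (names, tags, and passwords are illustrative; it takes the single shared MySQL server option):

version: "3"
services:
  app:                 # PHP 7.2 + Apache 2.4 + your code, e.g. from the Dockerfile above
    build: ./app
    ports:
      - "443:443"
  wordpress:           # the pre-login site, using its own database on the shared server
    image: wordpress:php7.2-apache
    environment:
      WORDPRESS_DB_HOST: mysql
      WORDPRESS_DB_NAME: wordpress
      WORDPRESS_DB_PASSWORD: secret
    ports:
      - "8080:80"      # plain HTTP here; terminate TLS however you prefer
  mysql:               # one MySQL 5.7 server hosting both databases
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: secret
  phpmyadmin:          # development only; omit this service in production
    image: phpmyadmin/phpmyadmin
    environment:
      PMA_HOST: mysql
    ports:
      - "8081:80"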

How to automate application deployment when using LXD containers?

How should applications be scripted and automatically deployed when running in LXD containers?
For example, is the best way to deploy an application in an LXD container to use a bash script (which deploys the application)? How do you execute that bash script inside the container by running a command on the host?
Are there any tools or methods for doing this in a way similar to Docker recipes?
In my case, I use Ansible to:
build the LXD containers (web, database, redis for example).
connect to the containers and deploy the services and code needed.
You can also build your own images, for example with the services and/or code already deployed, and build specific containers from these images.
I was doing this from before LXD had Ansible support (Ansible 2.2). I prefer to use SSH instead of the lxd connection when I connect to the containers to deploy services/code. They come with a profile where I have set up my SSH public key (to have a direct SSH connection by keys, no passwords).
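A minimal sketch of both styles (container names are illustrative). With the lxd connection plugin (Ansible >= 2.2) no SSH is needed:

# inventory.yml
all:
  hosts:
    web:
      ansible_connection: lxd
    database:
      ansible_connection: lxd

And without Ansible, a deploy script can be pushed into a container and run from the host:

lxc file push ./deploy.sh web/root/deploy.sh
lxc exec web -- bash /root/deploy.sh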
Take a look at my open source project on Bitbucket, devops_lxd_containers. It includes:
Scripts to build lxd image templates including Apache, tomcat, haproxy.
Scripts to demonstrate custom application image builds such as Apache hosting and key/value content and haproxy configured as a router.
Code to launch the containers and map ports so they are accessible to the larger network.
Code to configure haproxy as a layer 7 proxy to route HTTP requests between boxes and containers based on URI prefix routing, using the ports it previously deployed and mapped.
At a higher level, it accepts a data-driven spec and will deploy an entire environment composed of many containers spread across many hosts, hooking them all up to act as a cohesive whole via a layer 7 proxy.
Extensive documentation showing how I accomplished each major step using code snippets before automating.
Code to support zero-outage upgrades, using the layer 7 ability to gracefully bleed off old connections while accepting new connections on the newly deployed version.
The entire system is built on the premise that image building is best done in layers. We build an updated Ubuntu image. From it we build a hardened Ubuntu image. From that we build a basic Apache image. From that we build an application-specific image like our apacheKV sample. The goal is to never rebuild anything more than once and to re-use common functionality, such as the basicJDK, as the source for all JDK-dependent images, so we avoid duplicate code in any location. I have strived to keep image or template creation completely separate from deployment and port mapping. The exception is that I could not complete the creation of the layer 7 routing image until we knew everything about how the other images would be mapped.
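The layering itself can be done with plain lxc commands, publishing each stage as the base image for the next (a sketch; names and scripts are illustrative):

lxc launch ubuntu:20.04 build-hardened
lxc file push ./harden.sh build-hardened/root/harden.sh
lxc exec build-hardened -- bash /root/harden.sh
lxc stop build-hardened
lxc publish build-hardened --alias hardened-ubuntu
lxc launch hardened-ubuntu build-apache            # next layer starts from the last
lxc exec build-apache -- apt-get install -y apache2
lxc stop build-apache
lxc publish build-apache --alias basic-apache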
I've been using HashiCorp Packer with the Ansible provisioner, using ansible_connection = lxd.
Some notes here on constructing a template:
When iterating through local files on your host system you may need to use ansible_connection = local (e.g. for stat & friends).
Using local_action in Ansible with the lxd connection still acts inside the container when using stat (but not with include_vars & the lookup function for files).
Using lots of debug messages in Ansible is helpful for knowing which environment Ansible is actually operating in.
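A sketch of such a template, assuming Packer's lxd builder and its ansible provisioner (the image, output alias, and playbook path are illustrative; forcing ansible_connection = lxd is done here via extra_arguments):

{
  "builders": [
    {
      "type": "lxd",
      "name": "web-template",
      "image": "ubuntu:20.04",
      "output_image": "web-template"
    }
  ],
  "provisioners": [
    {
      "type": "ansible",
      "playbook_file": "./site.yml",
      "extra_arguments": ["-e", "ansible_connection=lxd"]
    }
  ]
}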
I'm surprised no one here has mentioned Canonical's own tool for managing LXD:
https://juju.is
It is super simple and well supported; the only caveat is that it requires you to turn off IPv6 on the LXD/LXC side of things (in the network bridge).
snap install juju --classic
juju bootstrap localhost
From there you can learn about Juju models and deploy machines or prebaked images like Ubuntu OS:
juju deploy ubuntu

Centralized team development environment with docker

I want to build a "centralized" development environment using docker for my development team (4 PHP developers)
I have one big Linux server (lot of RAM, Disk, CPU) that runs the containers.
All developers have an account on this linux server (a home directory) where they put (git clone) the projects source code. Locally (on their desktop machine) they have access to their home directory via a network share.
I want all developers to be able to work at the same time on the same projects, but to view the results of their code edits in different containers (or sets of containers, for projects that use linked containers).
The docker PHP development environment by itself is not a problem. I already tried something like that with success : http://geoffrey.io/a-php-development-environment-with-docker.html
I can use fig, with a fig.yml at the root of each project's source code, so each developer can do a fig up to launch the set of containers for a given project. I can even use a different FIG_PROJECT_NAME environment variable for each account, so I suppose two developers can fig up the same project and there will be no container name collisions.
Does it make sense?
But beyond that, I don't really know how to dynamically give access to the running containers: when running, there will typically be a web server in a container mapped to a random port on the host. How can I set up a sort of "dynamic DNS" pointing to the running container(s), accessible, let's say, through an nginx reverse proxy (the vhost creation and destruction have to be dynamic too)?
To summarize, the workflow I would like to have :
A developer SSHes into the dev env (the big Linux server).
From his home directory he goes into the project directory and does a fig up.
A vhost is created in the nginx reverse proxy, pointing to the running container, and a DNS entry (or /etc/hosts entry) is added matching the server_name of this previously generated vhost.
The source code is mounted into the container from a host directory (-v host/dir:container/dir), so the developer can edit any file while the container is running.
The result can be viewed by accessing the vhost, for example :
randomly-generated-id.dev.example.org
When the changes are OK, the developer can do a git commit/push.
Then the dev does a fig stop, which in turn deletes the corresponding vhost from the nginx reverse proxy and also deletes the dynamic DNS entry.
So, how would you do a setup like this? I mentioned tools like fig, but if you have any other suggestions... just remember that I would like to keep a lightweight workflow (after all, we are a small team :))
Thanks for your help.
Does it make sense?
Yes, that setup makes sense.
I would suggest taking a look at one of these projects:
https://github.com/crosbymichael/skydock
https://github.com/progrium/registrator
https://github.com/bnfinet/docker-dns
They're all designed to create DNS entries for containers as they start. Then just point your DNS server at it and you should get a nice domain name every time someone starts up an environment (I don't think you'll need a nginx proxy). But you might also be interested in this approach: http://jasonwilder.com/blog/2014/03/25/automated-nginx-reverse-proxy-for-docker/
Now, there's an even better option for you: Traefik. It will act as a reverse proxy, listening on 80/443, and will differentiate by hostname. Then, it will forward traffic dynamically, based on labels applied to the containers.
Here is a good solution to your issue:
1) Set up Traefik to listen to the Docker daemon, forwarding based on labels applied to the containers
2) Ensure the frontend app servers for your devs are on the same docker network as traefik
3) Set a wildcard DNS entry pointing to your server. For example: *.localdev.example.com.
4) On each container, set the hostname in that wildcard namespace. For example: jsmith-dev1.localdev.example.com. This would be specified in a Docker label such as: traefik.frontend.rule=Host:jsmith-dev1.localdev.example.com.
This would allow developers to dynamically forward traffic by domain to their own dev containers.
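A sketch of that layout with Traefik v1, matching the frontend.rule label syntax above (names and the shared network are illustrative):

# docker-compose.yml for the shared reverse proxy
version: "2"
services:
  traefik:
    image: traefik:1.7
    command: --docker --docker.domain=localdev.example.com
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - devproxy
networks:
  devproxy:
    external: true

# and in each developer's compose file:
services:
  web:
    build: .
    labels:
      - "traefik.frontend.rule=Host:jsmith-dev1.localdev.example.com"
    networks:
      - devproxy
networks:
  devproxy:
    external: true

Create the shared network once with docker network create devproxy; Traefik then discovers each container via the Docker socket and routes requests by the Host rule.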
Yes, I'm aware this is a 3-year-old question. In 2018 it still comes up first on Google for "centralized docker development server", so I'm posting this anyway for the help of those currently searching.
