A newbie question regarding both running a Grails app in Docker and the spring-security-ldap 2.0.1 plugin.
Currently I am running a local Grails app (not in Docker); installing spring-security-ldap 2.0.1 is pretty simple, mainly an added line in BuildConfig.groovy:
plugins {
compile ":spring-security-ldap:2.0.1"
}
After implementing the necessary user details and mapping classes, the first time the Grails app runs, the installation of spring-security-ldap is carried out automatically and a bunch of stuff is installed under the
//src/target/work/plugins/spring-security-ldap-2.0.1/ folder; these are the things that drive the LDAP login support, I suppose.
Now, if I were to move my Grails app into a Docker container, what is the proper way to get this plugin installation done? Where do I specify the resolution and installation of the plugin?
[Update 20180425]
NVM, I just tried it with my changed code (specifying the LDAP plugin in BuildConfig.groovy), rebuilt the Docker image, ran it, and I can now see the auth login page. That means the plugin was successfully resolved from the external repo and built into the Docker image!
The problem now is, I am not able to log in with the test users:
org.springframework.security.authentication.InternalAuthenticationServiceException: localhost:389; nested exception is javax.naming.CommunicationException: localhost:389 [Root exception is java.net.ConnectException: Connection refused (Connection refused)]
It has to be about opening the Docker in/out ports for my local OpenLDAP. I will read up a bit in the Docker documentation on this.
First of all, create a WAR file using grails war. It will automatically add all the dependencies, including spring-security-ldap, to the WAR file. You don't have to worry about anything regarding dependency resolution.
Afterward, you can follow these steps to create and run a docker image:
A) Creating Dockerfile
Create a file named Dockerfile in your project directory with the following content:
FROM tomcat:7.0.86-jre7
WORKDIR /usr/local/tomcat/bin
COPY <path/to/your-war-file> /usr/local/tomcat/webapps/<application-name>.war
CMD ["catalina.sh", "run"]
B) Creating an Image
Simply execute docker build -t <image name>:<image version> . (note the trailing dot). This will create a Docker image in your local Docker engine.
C) Running the Container
Finally, start your application by executing docker run -p <port you want to bind>:8080 <image name>:<image version>
If everything goes right, you can now access your application on the port you bound in the docker run command.
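For example, assuming you name the image myapp and tag it 1.0 (both are placeholders), the build and run commands could look like this:
docker build -t myapp:1.0 .
docker run -p 8080:8080 myapp:1.0
The application should then be reachable at http://localhost:8080/<application-name>/, since Tomcat derives the context path from the WAR file name.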
Update
To answer your updated question: when you access localhost inside a container, it doesn't resolve to the localhost of the Docker host machine; it resolves to the container itself. So, if you have something running on the host machine (OpenLDAP on port 389 in this case), you'll have to access it using the IP of the host machine.
A better solution in this case is to run OpenLDAP in a Docker container as well. That way you can access OpenLDAP by its hostname, and you won't have to update the configured IP whenever it changes.
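A minimal docker-compose sketch of that second option, assuming a hypothetical myapp:1.0 image and the commonly used osixia/openldap image (names, versions and credentials are illustrative):
version: "3"
services:
  openldap:
    image: osixia/openldap
    environment:
      LDAP_ORGANISATION: "Example Inc."
      LDAP_DOMAIN: "example.org"
      LDAP_ADMIN_PASSWORD: "admin"
  app:
    image: myapp:1.0
    ports:
      - "8080:8080"
    depends_on:
      - openldap
The Grails app would then point its LDAP server URL at ldap://openldap:389 instead of ldap://localhost:389, since compose service names are resolvable on the shared network.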
I'm new to Node-RED and Docker. For my internship I was asked to convert a subflow into a module (so that it is available in the palette of every Node-RED instance created). So I started with a little example showing how to add a custom node as a module by following these steps (Node-RED is installed in a Docker container):
connecting to an EC2 machine
going inside the container by executing the command docker exec -it mynodered /bin/bash
and then following the steps shown in this example https://techeplanet.com/how-to-create-custom-node-in-node-red/ to create the node and install it. After that I went to "Manage palette" to look for the recently installed module, but it's not there... If someone could help I would appreciate it. Thanks
Firstly, nodes installed on the command line with npm will not show up until Node-RED is restarted.
The problem with this, in your case, is that you created the node inside the Docker container; under normal circumstances any files you create in the running container will be lost when you restart it, because containers do not persist changes.
Also in the docker container the userDir is not ~/.node-red but /data.
So when you restart the container the node will likely be lost and it also will not have been installed into the node_modules directory in the /data userDir unless you have /data backed by a persistent volume.
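For example, the official Node-RED image can be started with /data backed by a named volume (the container name here matches the one in the question; the volume name is arbitrary):
docker run -it -p 1880:1880 -v node_red_data:/data --name mynodered nodered/node-red
With this in place, anything installed into /data/node_modules survives restarting or recreating the container.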
If you want to create a node on your local machine, you can test it locally by using npm to install it and then restarting a local instance of Node-RED to pick up the new node.
You can then use the npm pack command to create a tgz file which you can upload to the remote instance via the Palette Manager to test it in the Docker container if needed.
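For example, run npm pack in the directory that contains your node's package.json (the module name below is hypothetical):
cd node-red-contrib-mynode
npm pack
# produces node-red-contrib-mynode-1.0.0.tgz, which can then be
# uploaded to the remote instance through the Palette Manager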
For longer term use of this new node you have several choices:
Publish the node to public npm with suitable tags and have it added to the public list of Node-RED nodes as described in the doc. This will allow anybody to install the node. You should ONLY do this with nodes you expect anybody to be able to use
Build a custom Docker container that installs your node as part of the build process (a minimal sketch follows this list). Examples of how to do this are here
Build a custom docker container with a custom settings.js that points to a private npm repo and catalogue service that will allow you to host custom nodes. A blog post touching on this is here
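As a rough sketch of that second option, assuming the tgz produced by npm pack above (the file name is hypothetical, and the exact npm flags used in the official Node-RED Docker examples may differ):
FROM nodered/node-red
# copy the packed custom node into the image
COPY node-red-contrib-mynode-1.0.0.tgz /tmp/
# install it into the Node-RED runtime so it is available in the palette
RUN npm install /tmp/node-red-contrib-mynode-1.0.0.tgz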
Secondly, the guide you are following is for building Node-RED nodes, not for converting a subflow into a node. While it is possible to completely reimplement the subflow from scratch, it would probably require recreating a lot of the work already done by the nodes it uses, so this is not really an efficient approach.
There is ongoing work to build a tool that will automatically convert subflows into nodes, but it is not ready for release just yet.
I suggest you join the Node-RED Slack or Discourse forum to be notified when it is available.
I have Ansible scripts that install a Docker container running NiFi. I've run these scripts on our dev box without issue. However, when I run them on our int box I see the following error in nifi-bootstrap.log, causing NiFi to die immediately at startup:
java.io.FileNotFoundException: /data/nifi/work/snappy-1.0.5-libsnappyjava.so (No such file or directory)
I checked the dev server, where this is running without issue, and there is no /data/nifi/work directory, and libsnappyjava doesn't exist anywhere on that server according to mlocate.
The flow file is exactly the same between the two versions; I've done an md5sum to ensure that. The only difference in the nifi.properties file is that they each have their own VM's hostname injected into the appropriate fields by Ansible. The NiFi installation is part of a parent Docker image that hasn't been touched, and so it should also be identical across images.
I'm using a NiFi tarball created by my company, containing some company-specific JARs etc., but it should be built on top of the latest NiFi version.
The only difference between the functional dev and the non-functional int that I can tell is that I originally installed a Docker image running an older NiFi version before upgrading NiFi to get the more recent NiFi API. I don't know if somehow running the old NiFi before upgrading it could have changed our /data directory in some way that makes the upgraded NiFi fail?
So why is my int box looking for snappy-java when dev seems fine without it?
I am currently working on a project which needs to be deployed on customer infrastructure (which is not cloud) and which will not have internet access.
We currently deploy our application manually and install dependencies using tarballs; can Docker help us here?
Note:
Application stack:
NodeJs
MySql
Elasticsearch
Redis
MongoDB
We will not have internet.
You can use docker save to export Docker images as TAR archives and docker load to import them again. If you package your application files within these images, this can be used to deliver your project to your customers.
Also note that the destination servers must all have Docker Engine installed and running.
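For example (image names and tags are placeholders; include every image your stack needs, databases included):
# on a machine with internet access
docker save -o myproject-images.tar myapp:1.0 mysql:5.7 redis:5 mongo:4.0
# copy the tar file to the customer machine, then
docker load -i myproject-images.tar
docker images   # verify the images are now available locally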
If you have control over your dev environment, you can also use Nexus or GitLab as your private Docker registry. You can then pull your images from there into production, if it makes sense for your product.
I think the biggest advantage is in your local dev setup. Instead of installing, say, MySQL locally, you can run it as a Docker container. I use docker-compose for all of these services in my current project. This keeps your computer clean, makes it easy to avoid versioning hell (if you use different versions per release or stage), and you don't have to mess around with configuration on each dev machine.
In my previous job every developer had a local Oracle SQL install, and that was not a happy state of affairs.
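As an illustration of that local dev setup for the stack in the question (image tags and passwords are placeholders; adjust to the versions you actually run):
version: "3"
services:
  mysql:
    image: mysql:5.7
    environment:
      - MYSQL_ROOT_PASSWORD=dev
    ports:
      - "3306:3306"
  redis:
    image: redis:5
    ports:
      - "6379:6379"
  mongo:
    image: mongo:4.0
    ports:
      - "27017:27017"
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.8.0
    environment:
      - discovery.type=single-node
    ports:
      - "9200:9200"
The Node.js application can then connect to these services on localhost at the published ports, and docker-compose up -d / docker-compose down brings the whole set up or tears it down in one command.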
I have a number of Docker containers (10), each running a Java service, that make up my system. To create these containers I use a couple of docker-compose files. Using the Docker Integration plugin for IntelliJ, I can now spin up these services on my remote server using the Docker Compose option (the images used are built outside of IntelliJ, using Gradle). Here are the steps I have taken to achieve this:
I have added a Docker server using the Docker Machine option to connect to the remote Docker daemon (message says Connection Successful).
I have added a new Docker Compose configuration, using the server, specifying my compose files, and the services I want to start.
Now that I have the system controlled through IntelliJ, I have been trying to figure out how to attach the remote debugger to each of these services so that IntelliJ will hit my breakpoints.
Will I need to add the JVM args (-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005) to each service (container) and add the usual remote debug configuration for each service? Do I need to use a different address for each service? If so, how do I add these args? Surely with the Docker Integration plugin, there is an easier way to do this.
IntelliJ Idea v2018.1.5 (Community Edition)
Docker Integration v181.5087.20
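For reference, the kind of per-service setup described above is often expressed in the compose files roughly like this (service and image names are made up, and it assumes each image's start script passes JAVA_OPTS to the java command):
services:
  service-a:
    image: my-service-a
    environment:
      - JAVA_OPTS=-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005
    ports:
      - "5005:5005"
  service-b:
    image: my-service-b
    environment:
      - JAVA_OPTS=-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005
    ports:
      - "5006:5005"
Inside each container the debug address can stay 5005; only the published host port needs to differ, and each IntelliJ Remote debug configuration then points at the host and the corresponding published port.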
I'm creating a module for Keycloak and I'm trying to register it using Modules, just as the documentation says to.
How can I register this module in the keycloak-server subsystem section of standalone.xml when I'm running Keycloak with Docker?
Start the server during the Docker build. Then run a jboss-cli batch script to modify the configuration.
If jboss-cli.bat --file=adapter-install.cli doesn't work, then you can add --connect.
Try the following command:
jboss-cli.bat --connect --file=adapter-install.cli
You can prepare module.xml manually (you could use the examples under JBOSS_HOME/modules/.. as a reference). Don't forget to specify all required dependencies (keycloak-core, javax, ...). Then you can add module.xml and the corresponding JARs during the Dockerfile build, or add module.xml during the image build and add the JARs as volumes.
Also consider running the JBoss CLI scripts in embedded mode during the image build. For me there is too much preliminary scripting running before the actual Keycloak service is started; I would prefer to bake a custom image using only a Dockerfile (but use the official Keycloak Docker sources as a reference).
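A rough sketch of that embedded approach, assuming the jboss/keycloak base image plus a hypothetical module JAR and CLI script of your own (the file names, module identifier and Keycloak paths are illustrative and can differ between versions):
register-module.cli:
embed-server --server-config=standalone.xml
module add --name=com.example.mymodule --resources=/tmp/my-module.jar --dependencies=org.keycloak.keycloak-core
/subsystem=keycloak-server:list-add(name=providers,value=module:com.example.mymodule)
stop-embedded-server
Dockerfile:
FROM jboss/keycloak
COPY my-module.jar /tmp/my-module.jar
COPY register-module.cli /tmp/register-module.cli
RUN /opt/jboss/keycloak/bin/jboss-cli.sh --file=/tmp/register-module.cli
Because the CLI runs against an embedded server, no Keycloak process needs to be started and stopped during the image build.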
As you are using Docker to run Keycloak, you can copy your custom CLI scripts into the Docker image and run them. We mimicked what Keycloak does in their image, and it worked for us, even for adding modules.
https://github.com/jboss-dockerfiles/keycloak/tree/master/server/tools/cli
Our case was adding the Sentry module (http://cloudtrust.io/doc/chapter-keycloak/sentry.html), but we didn't follow it literally.