Spring Cloud DataFlow: access to application.jar - docker

Spring Cloud DataFlow (SCDF) is deployed and running in a Docker container.
The applications are deployed on localhost.
The task is to register the applications in SCDF.
When I use the URI file:///${HOME}/IdeaProjects/file_read_maven-0.0.1-SNAPSHOT, where ${HOME} is part of the absolute path on localhost, the log shows "Error: Unable to access jarfile."
When I change the location of the application from localhost to the container in which SCDF is deployed, I get the same result.
What is a possible solution?

To resolve local artifacts from your laptop inside the running Docker daemon, you have to mount the volume where the artifacts are hosted.
You can then resolve them from the mounted file system or, alternatively, from the local Maven cache.
More details here.
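For illustration, a minimal sketch of the two steps, assuming the host directory is mounted into the SCDF container (for example docker run -v $HOME/IdeaProjects:/root/apps ...) and that registration goes through SCDF's REST API on its default port 9393. The app name, app type, mount path, and jar file name below are placeholders, not values from the question:

import requests

# Register the app with a file:// URI that points to the path *inside* the container.
# /root/apps and the jar name are assumptions based on the mount described above.
resp = requests.post(
    "http://localhost:9393/apps/task/file-read",
    data={"uri": "file:///root/apps/file_read_maven-0.0.1-SNAPSHOT.jar"},
)
resp.raise_for_status()
print("registered, status:", resp.status_code)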

Related

Not able to access External server from inside container

I am running a Docker Windows container which runs a Windows service. The service tries to load a file from an on-premise file server. I have tried to map the file server path inside the container using the "net use" command with different credentials that have access to the file server path. After that I used "cmdkey" commands to authenticate against the file server. That works, but while the application is running it cannot access the file and I get an access denied error. Can anyone help me with this?
For more information: I also run IIS to keep the container alive.
net use command:
Net use * "" /user:<username> /p:yes
cmdkey command:
cmd.exe /C "cmdkey /add: /user:<username> /pass:"
When I do the same thing from a console application, I can access the file server path from inside the container, but it does not work for the Windows service.

MLflow: Unable to store artifacts to S3

I'm running my MLflow tracking server in a Docker container on a remote server and trying to log MLflow runs from my local computer, with the eventual goal that anyone on my team can send their run data to the same tracking server. I've set the tracking URI to http://<ip of remote server>:<port on docker container>. I'm not explicitly setting any of the AWS credentials on the local machine because I would like to just train locally and log to the remote server (run data to RDS and artifacts to S3). I have no problem logging my runs to the RDS database, but I keep getting the following error when it gets to the point of logging artifacts: botocore.exceptions.NoCredentialsError: Unable to locate credentials. Do I have to have the credentials available outside of the tracking server for this to work (i.e. on my local machine where the MLflow runs take place)? I know that all of my credentials are available in the Docker container that hosts the tracking server. I've been able to upload files to my S3 bucket using the AWS CLI inside that container, so I know it has access. I'm confused by the fact that I can log to RDS but not to S3, and I'm not sure what I'm doing wrong at this point. TIA.
Yes, apparently I do need to have the credentials available to the local client as well: run metadata goes through the tracking server to RDS, but the MLflow client uploads artifacts to S3 directly, so it needs its own credentials.
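A minimal sketch of what that looks like on the client side, assuming the credentials are supplied via environment variables; the tracking URI, key values, and artifact file name are placeholders:

import os
import mlflow

# The client needs S3 credentials because artifacts are uploaded directly to S3.
# Placeholder values; in practice use a shared AWS profile or environment config.
os.environ["AWS_ACCESS_KEY_ID"] = "<access key>"
os.environ["AWS_SECRET_ACCESS_KEY"] = "<secret key>"

mlflow.set_tracking_uri("http://<ip of remote server>:<port>")

with mlflow.start_run():
    mlflow.log_param("alpha", 0.5)     # goes to the RDS backend via the tracking server
    mlflow.log_artifact("model.pkl")   # goes from this machine straight to S3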

How to integrate configuration files from a config service in rails?

I am currently running a Rails application and a Spring Boot configuration service on the same local network. Is it possible to configure Rails to use the config files provided by the Spring Boot service?
More specifically, I am looking to fetch the database connection and user data via the service and let Rails connect to a remote database.
The service provides these files via HTTP as JSON or YAML.
Thank you.
Edit: Solved it by using a bash script with wget that pulls and assembles the config files via container scripts executed before each deploy.
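For illustration, a minimal sketch of that pull-and-assemble step, written in Python rather than bash. It assumes the Spring config service exposes the standard /{application}-{profile}.yml style endpoint and that the served YAML is already in the shape Rails expects (otherwise it has to be remapped, as the question's bash script does); the service URL, application name, and profile are placeholders:

import requests

# Placeholder URL in the Spring Cloud Config style: /{application}-{profile}.yml
CONFIG_URL = "http://config-service:8888/myapp-production.yml"

resp = requests.get(CONFIG_URL, timeout=10)
resp.raise_for_status()

# Write the fetched YAML where Rails expects its database settings,
# before the Rails process starts (e.g. in an entrypoint or pre-deploy script).
with open("config/database.yml", "w") as f:
    f.write(resp.text)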

Connecting to scality/s3 server between docker containers

We are using a Python based solution which shall load and store files in S3. For development and local testing we use a Vagrant environment with Docker and docker-compose. We have two docker-compose definitions: one for the assisting backend services (mongo, restheart, redis and s3) and one containing the Python based REST API solution that uses those backend services.
When our "front-end" docker-compose group interacts with restheart, this works fine (using the name of the restheart container as server host in HTTP calls). When we do the same with the scality/s3 server, it does not work.
The interesting part is that we created a test suite which uses the scality/s3 server from Python running on the host (Windows 10), over the ports forwarded through Vagrant to the scality/s3 server container within the docker-compose group. There we used the endpoint_url localhost and it works perfectly.
In the error case (when the frontend web service wants to write to S3), the "frontend" service always responds with:
botocore.exceptions.ClientError: An error occurred (InvalidURI) when calling the CreateBucket operation: Could not parse the specified URI. Check your restEndpoints configuration.
And the s3server always responds with http 400 and the message:
s3server | {"name":"S3","clientIP":"::ffff:172.20.0.7","clientPort":49404,"httpMethod":"PUT","httpURL":"/raw-data","time":1521306054614,"req_id":"e385aae3c04d99fc824d","level":"info","message":"received request","hostname":"cdc8a2f93d2f","pid":83}
s3server | {"name":"S3","bytesSent":233,"clientIP":"::ffff:172.20.0.7","clientPort":49404,"httpMethod":"PUT","httpURL":"/raw-data","httpCode":400,"time":1521306054639,"req_id":"e385aae3c04d99fc824d","elapsed_ms":25.907569,"level":"info","message":"responded with error XML","hostname":"cdc8a2f93d2f","pid":83}
We are calling the scality/s3 server with this boto3 code:
import boto3

# endpoint_url uses the docker-compose service name of the scality/s3 container
s3 = boto3.resource('s3',
                    aws_access_key_id='accessKey1',
                    aws_secret_access_key='verySecretKey1',
                    endpoint_url='http://s3server:8000')
s3_client = boto3.client('s3',
                         aws_access_key_id='accessKey1',
                         aws_secret_access_key='verySecretKey1',
                         endpoint_url='http://s3server:8000')
s3.create_bucket(Bucket='raw-data')  # here the exception is raised
bucket = s3.Bucket('raw-data')
This issue is quite common. In your config.json file, which I assume you mount into your Docker container, there is a restEndpoints section where you must associate a domain name with a default region. What that means is that the host name your frontend uses to reach the s3 server should be listed there, mapped to a default region.
Do note that the default region does not prevent you from using other regions: it's just where your buckets will be created if you don't specify otherwise.
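For illustration, the relevant fragment of config.json might look roughly like this; the host name s3server is taken from the endpoint_url above and the region is only an example:

"restEndpoints": {
    "localhost": "us-east-1",
    "s3server": "us-east-1"
}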
In the future, I'd recommend opening an issue directly on the Zenko Forum, as that is where most of the community and core developers are.
Cheers,
Laure

Running an Ant script to prepare a Database in Bluemix

I have an Ant script that I use to populate/prepare a database. All I need is to set the host, port and credentials for the database. It works fine for MySQL and DB2; the DB just needs to be reachable from where the script is executed.
The DB service in Bluemix gives me a DB with an IP (75.x.x.x) that is only reachable from the internal network of Bluemix; it is not accessible externally.
My understanding is that my Ant script needs to be executed from inside the Bluemix network/servers.
How can I do that?
What would the alternatives be?
I'm considering creating a NodeJS script to trigger Ant internally, but I'm not sure whether it will work properly.
dashDB has always had the ability for local clients (outside of Bluemix) to connect to the cloud database, and SQL Database later added the feature as well. So you should be able to populate a database as long as you have the correct client driver installed on your local machine.
Can you provide more details on how you tested that the IP is not reachable? Is there a firewall in place between your local machine and Bluemix? Note that ping is not a good test, because that port is blocked for security reasons. You may try the JDBC port indicated on the connection page of the console.
See link for instructions on how to make a connection:
https://www.ng.bluemix.net/docs/#services/SQLDB/index.html#connecting-to-sqldb
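As a quick way to check reachability of the JDBC port instead of ping, a sketch like the following can be used; the host and port are placeholders and should be taken from the connection page:

import socket

# Placeholder host/port; use the values shown on the Bluemix connection page.
host, port = "75.x.x.x", 50000

try:
    with socket.create_connection((host, port), timeout=5):
        print("JDBC port is reachable")
except OSError as exc:
    print("JDBC port is not reachable:", exc)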
You might be able to use a simple custom buildpack. You can start with a sample like this one:
https://github.com/dmikusa-pivotal/cf-test-buildpack
Fork it and modify the bin/compile script to run your Ant task instead. Then put your Ant script (and probably the Ant executable as well, as I expect it is not installed in the Bluemix environment) in a directory and run
cf push <appname> -b <your forked git url>
to push it to Bluemix and run it. If you're just using it once, you can probably get away with hard-coding the address and credentials; otherwise you can bind to the same service instance and get the info from VCAP_SERVICES.
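For the VCAP_SERVICES route, a minimal sketch of pulling the connection details out of the bound service; the service label "sqldb" and the credential keys are assumptions and depend on the service you actually bind:

import json
import os

# VCAP_SERVICES is a JSON document injected by Cloud Foundry / Bluemix.
vcap = json.loads(os.environ.get("VCAP_SERVICES", "{}"))

# "sqldb" is an assumed service label; adjust to the bound service's name.
creds = vcap["sqldb"][0]["credentials"]
host, port = creds["hostname"], creds["port"]
user, password = creds["username"], creds["password"]

print("database endpoint:", host, port)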
