Is it possible to access the web behind a corporate proxy inside of jshell?
Or to set environment variables inside of jshell in general?
jshell does not seem to pick up the Java-related environment variables that are set, such as JDK_JAVA_OPTIONS, JAVA_OPTS, or JAVA_OPTIONS.
For example, when I try to run this script, I run into a timeout:
// a script that scrapes the website example.com with jsoup
/env --class-path jsoup-1.15.3.jar
import org.jsoup.*;
import org.jsoup.nodes.*;
Document doc = Jsoup.connect("http://example.com/").get();
String title = doc.title();
System.out.println("this is the title of the document: " + title);
/exit
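For reference, jshell does accept JVM flags on its command line: -R passes an option to the remote JVM that actually executes the snippets, which is where proxy settings would need to land. A hedged sketch, assuming the standard JVM proxy properties, a placeholder proxy host/port, and the script above saved as scrape.jsh:

# Pass proxy settings to the snippet-execution JVM via -R
# (-J would instead affect the jshell tool's own JVM).
jshell \
  -R-Dhttp.proxyHost=proxy.example.com -R-Dhttp.proxyPort=8080 \
  -R-Dhttps.proxyHost=proxy.example.com -R-Dhttps.proxyPort=8080 \
  --class-path jsoup-1.15.3.jar \
  scrape.jsh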
I am using an openSUSE Tumbleweed container in a GitLab CI pipeline which runs some script. In that script, I need to send an email with certain content at some point.
In the container, I am installing postfix and configuring that relay server in /etc/postfix/main.cf.
The following command works on my laptop using that same relay server:
echo "This is the body of the email" | mail -s "This is the subject" -r sender#email.com receiver#email.com
but doesn't work from the container, even having the same postfix configuration.
I've seen some tutorials that show how to use the postfix/smtp configuration from the host, but since this is a container running in gitlab ci, that's not applicable.
So I finally opted for a Python solution called from bash; this way I don't need to configure postfix, SMTP, or anything else. You just export your variables in bash (or use argparse) and run the script below. Of course, you need a relay server without auth (normally on port 25).
import os
import smtplib
from email.mime.text import MIMEText

# Relay settings come from the environment so nothing is hard-coded.
smtp_server = os.environ.get('RELAY_SERVER')
port = int(os.environ.get('RELAY_PORT', '25'))
sender_email = os.environ.get('SENDER_EMAIL')
receiver_email = os.environ.get('RECEIVER_EMAIL')

# Build a plain-text message.
mimetext = MIMEText("this is the body of the email")
mimetext['Subject'] = "this is the subject of the email"
mimetext['From'] = sender_email
mimetext['To'] = receiver_email

# Send through the unauthenticated relay; RECEIVER_EMAIL may be a
# comma-separated list of addresses.
server = smtplib.SMTP(smtp_server, port)
server.ehlo()
server.sendmail(sender_email, receiver_email.split(','), mimetext.as_string())
server.quit()
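A hedged usage example of the calling side in the CI job (the relay host and the send_mail.py filename are placeholders):

# Export the settings the script reads, then run it.
export RELAY_SERVER="relay.example.com"
export RELAY_PORT="25"
export SENDER_EMAIL="sender@email.com"
export RECEIVER_EMAIL="receiver@email.com"
python3 send_mail.py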
I am running docker-ejabberd on ECS and everything works fine. Now I want to replace the MySQL user/pass in the ejabberd.yml file with environment variables passed to the image when the container runs. There is no clear way described, even on the docker-ejabberd wiki, of how to do that simply. Has anyone faced a similar situation, and how did you solve it?
For example in the ejabberd.yml i have this section:
sql_server: ${MYSQL_SERVER}
sql_database: ${MYSQL_DATABASE_NAME}
sql_username: ${MYSQL_USERNAME}
sql_password: ${MYSQL_PASSWORD}
sql_port: ${MYSQL_PORT}
I want to pass those values as environment variables with docker run and have them substituted before ejabberd starts.
Side note: we are using ECS and passing the variables through the task definition without any issue.
I went through some topics that recommend using an ENTRYPOINT script to rewrite the file before the main process starts, but I am not sure whether that is a good idea.
Alternatively, I could replace the variables in ejabberd.yml in the CI/CD pipeline, right after checking out the code and before building the image for AWS ECR. Would that be reasonable?
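For illustration, a minimal sketch of the ENTRYPOINT idea, assuming envsubst (from gettext) is installed in the image and that ejabberd.yml.tpl is a template file added for this purpose; the conf path and start command may differ per image:

#!/bin/sh
# Render the config template with the current environment,
# then hand off to ejabberd as the container's main process.
envsubst < /home/ejabberd/conf/ejabberd.yml.tpl > /home/ejabberd/conf/ejabberd.yml
exec ejabberdctl foreground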
I want to replace the MySQL user/pass in the ejabberd.yml file with environment variables passed to the image when the container runs.
The ejabberd.yml file is read and parsed by the yconf library (https://github.com/processone/yconf), and I doubt it supports such a thing.
I went through some topics that recommend using an ENTRYPOINT script to rewrite the file before the main process starts, but I am not sure whether that is a good idea.
Following that recommendation, if you don't want a script to mess with the whole ejabberd.yml, you can ensure that only those specific options are parametrized:
You can have a script write those options to a small file, and then include that file from ejabberd.yml using include_config_file, documented at
https://docs.ejabberd.im/admin/configuration/file-format/#include-additional-files
For example, in your ejabberd.yml, put something like this:
include_config_file:
  /etc/ejabberd/database.yml:
    allow_only: [sql_server, sql_database, sql_username, sql_password, sql_port]
Then write the script that generates that small file. For example:
$ generate-database-config.sh
$ cat /etc/ejabberd/database.yml
sql_server: "localhost"
sql_database: "ejaup"
sql_username: "ejabberd_test"
sql_password: "ejabberd_test"
sql_port: 3306
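A hedged sketch of what generate-database-config.sh could look like (the MYSQL_* variable names are taken from the question's task definition; the rest is an assumption):

#!/bin/sh
# Emit only the database options; string values are quoted, the port is not.
cat > /etc/ejabberd/database.yml <<EOF
sql_server: "${MYSQL_SERVER}"
sql_database: "${MYSQL_DATABASE_NAME}"
sql_username: "${MYSQL_USERNAME}"
sql_password: "${MYSQL_PASSWORD}"
sql_port: ${MYSQL_PORT}
EOF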
I am currently trying to build an NGINX Docker container that will be running alongside a Jupyter container. Within Jupyter, there is a download capability that I wish to disable or enable during the NGINX container build process.
Currently, I am passing a build argument through the Dockerfile that is read into the nginx.conf file as an environment variable. However, it seems that the location directive that controls downloads within Jupyter cannot be placed inside a conditional. If I understand correctly, the location directive must sit directly under the server directive at all times.
env DOWNLOAD;
...
http {
    ...
    server {
        ...
        if ($DOWNLOAD = 'true') {
            location / {
                ...
            }
        }
    }
}
When I attempt to build the container with the configuration above, I run into this error:
"location" directive is not allowed here..."
My question is: if conditionals are tricky to get working in an NGINX conf file, are there any approaches to controlling a location directive in the conf file based on an environment variable?
Thanks in advance.
The approach I use:
Create an nginx-entry.sh file that resolves all of nginx's configuration variables (see the sketch below)
Inject this nginx-entry.sh file into the nginx container
Switch the entrypoint of the nginx container to nginx-entry.sh
Working sample in my toy project:
Dockerfile - https://github.com/taleodor/mafia-vue/blob/master/Dockerfile
Nginx config - https://github.com/taleodor/mafia-vue/tree/master/nginx
Using this technique you can tweak and template the configuration the way you need it.
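A minimal sketch of such an entry script, assuming the image ships the config as a template (nginx.conf.template and the DOWNLOAD variable are illustrative names) and that envsubst is available:

#!/bin/sh
# Substitute only the variables we own, so nginx's internal $vars survive.
envsubst '${DOWNLOAD}' < /etc/nginx/nginx.conf.template > /etc/nginx/nginx.conf
# Run nginx in the foreground as the container's main process.
exec nginx -g 'daemon off;'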
When running my Docker image and requesting the Swagger UI, I receive a 502 Bad Gateway.
I am attempting to run Connexion (a Flask-based Swagger UI generator) with uWSGI behind nginx. I assume the problem is that uWSGI does not correctly pick up my Flask instance. However, as far as I can tell, my container is configured correctly.
If you look here https://github.com/Microsoft/python-sample-vscode-flask-tutorial, the setup of my application and the configuration is similar and it works without issues.
According to the Connexion documentation, I should be able to expose the app instance to uWSGI using:
app = connexion.App(__name__, specification_dir='swagger/')
application = app.app # expose global WSGI application object
You can find my complete application code here:
https://bitbucket.org/lammy123/flask_nginx_docker_connexion/src/master/
The Flask/Connexion object is in application/__init__.py
uwsgi.ini:
[uwsgi]
module = application.webapp
callable = app
uid = 1000
master = true
threads = 2
processes = 4
__init__.py:
import connexion
app = connexion.App(__name__, specification_dir='openapi/')
app.add_api('helloworld-api.yaml', arguments={'title': 'Hello World Example'})
webapp.py:
from . import app # For application discovery by the 'flask' command.
from . import views # For import side-effects of setting up routes.
from . import app
import connexion
application = app.app
Running the code with the built-in development server works.
Expected behavior is that the Swagger UI is available at:
http://localhost:5000/v1.0/ui/#/
when running from a Docker container.
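Not part of the original post, but one way to narrow this down, assuming uWSGI is installed locally: run uWSGI directly against the module and callable, bypassing nginx and Docker. Note that webapp.py exposes the WSGI object as application, while the uwsgi.ini above sets callable = app, which may be the relevant mismatch.

# From the project root, serve the WSGI app directly over HTTP
# to test the entry point in isolation.
uwsgi --http :5000 --module application.webapp --callable application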
Is there a way to provide custom variables via Docker-Compose that can be referenced within a Kafka Connector config?
I have the following setup in my docker-compose.yml:
- "sql_server=1.2.3.4"
- "sql_database=db_name"
- "sql_username=some_user"
- "sql_password=nahman"
- "sql_applicationname=kafka_connect"
Here is my .json configuration file:
{
  "name": "vwInv_Tran_Amounts",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "src.consumer.interceptor.classes": "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor",
    "tasks.max": 2,
    "connection.url": "jdbc:sqlserver://${sql_server};database=${sql_database};user=${sql_username};password={sql_password};applicationname={sql_applicationname}",
    "query": "SELECT * FROM vwInv_Tran_Amounts",
    "mode": "timestamp",
    "topic.prefix": "inv_tran_amounts",
    "timestamp.column.name": "timestamp",
    "incrementing.column.name": "Inv_Tran_ID"
  }
}
I was able to reference environment variables this way with Elastic Logstash, but it doesn't appear to work here.
Whenever I load it via curl, I receive:
The connection string contains a badly formed name or value. for configuration Couldn't open connection to jdbc:sqlserver://${sql_server};database=${sql_database};user=${sql_username};password={sql_password};applicationname={sql_applicationname}\nInvalid value com.microsoft.sqlserver.jdbc.SQLServerException: The connection string contains a badly formed name or value.
EDIT:
I tried prefixing environment variables like CONNECT_SQL_SERVER and that didn't work.
I feel like you are looking for Externalizing Kafka Connect secrets, but that requires mounting a file, not plain environment variables.
JSON connector config files aren't loaded on Docker container startup. I made this issue to see if this would be possible.
You would have to template out the JSON file externally, then HTTP-POST it to the REST port exposed by the container, as sketched below.
Tried prefixing environment variables like CONNECT_SQL_SERVER
Those values would go into the Kafka Connect worker properties, not the properties that are loaded by a specific connector task.
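A hedged sketch of the template-and-POST approach (connector.json.template is an illustrative filename, envsubst comes from gettext, and 8083 is Kafka Connect's default REST port):

# Render the connector config from the environment, then submit it
# to the Kafka Connect REST API exposed by the container.
envsubst < connector.json.template > connector.json
curl -X POST -H "Content-Type: application/json" \
     --data @connector.json \
     http://localhost:8083/connectors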