My application uses Minio for S3-compatible object storage, and I'd like to use the Minio docker image in my integration tests via Testcontainers.
For some very basic tests, I run a GenericContainer using the minio/minio docker image and no configuration except MINIO_ACCESS_KEY and MINIO_SECRET_KEY. My tests then use Minio's Java Client SDK. These work fine and behave just as expected.
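For reference, that basic setup looks roughly like this (a sketch; the credentials are placeholders, and minio/minio also needs its server /data command):

GenericContainer minio = new GenericContainer("minio/minio")
        .withEnv("MINIO_ACCESS_KEY", "admin")
        .withEnv("MINIO_SECRET_KEY", "12345678")
        .withCommand("server /data")
        .withExposedPorts(9000);
minio.start();

// The Java Client SDK then talks to the mapped port:
MinioClient client = new MinioClient(
        String.format("http://%s:%d", minio.getContainerIpAddress(), minio.getFirstMappedPort()),
        "admin", "12345678");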
But for other integration tests, I need to set up separate users in Minio. As far as I can see, users can only be added to Minio using the Admin API, for which there is no Java client, only the minio/mc docker image (the mc CLI is not available in the minio/minio docker image used for the server).
On the command line, I can use the Admin API like this:
$ docker run --interactive --tty --detach --entrypoint=/bin/sh --name minio_admin minio/mc
The --interactive --tty is a bit of a hack to keep the container running so I can later run commands like this one:
$ docker exec --interactive --tty minio_admin mc admin user add ...
Using Testcontainers, I try to do the same like this:
public void testAdminApi() throws Exception {
    GenericContainer mc = new GenericContainer("minio/mc")
        .withCommand("/bin/sh")
        .withCreateContainerCmdModifier(new Consumer<CreateContainerCmd>() {
            @Override
            public void accept(CreateContainerCmd cmd) {
                cmd
                    .withAttachStdin(true)
                    .withStdinOpen(true)
                    .withTty(true);
            }
        });
    mc.start();
    log.info("mc is running: {}", mc.isRunning());

    String command = "mc";
    Container.ExecResult result = mc.execInContainer(command);
    log.info("Executing command '{}' returned exit code '{}' and stdout '{}'", command, result.getExitCode(), result.getStdout());
    assertEquals(0, result.getExitCode());
}
The logs show the container being started, but executing a command against it returns exit code 126 and claims it's in a stopped state:
[minio/mc:latest] - Starting container with ID: 4f96fc7583fe62290925472c4c6b329fbeb7a55b38a3c0ad41ee797db1431841
[minio/mc:latest] - Container minio/mc:latest is starting: 4f96fc7583fe62290925472c4c6b329fbeb7a55b38a3c0ad41ee797db1431841
[minio/mc:latest] - Container minio/mc:latest started
minio.MinioAdminTests - mc is running: true
org.testcontainers.containers.ExecInContainerPattern - /kind_volhard: Running "exec" command: mc
minio.MinioAdminTests - Executing command 'mc' returned exit code '126'
and stdout 'cannot exec in a stopped state: unknown'
java.lang.AssertionError: Expected: 0, Actual: 126
After fiddling around with this for hours, I'm running out of ideas. Can anyone help?
Thanks to @glebsts and @bsideup I was able to get my integration tests to work. Here's a minimal example of how to add a user:
public class MinioIntegrationTest {

    private static final String ADMIN_ACCESS_KEY = "admin";
    private static final String ADMIN_SECRET_KEY = "12345678";
    private static final String USER_ACCESS_KEY = "bob";
    private static final String USER_SECRET_KEY = "87654321";

    private static GenericContainer minioServer;
    private static String minioServerUrl;

    @BeforeAll
    static void setUp() throws Exception {
        int port = 9000;
        minioServer = new GenericContainer("minio/minio")
            .withEnv("MINIO_ACCESS_KEY", ADMIN_ACCESS_KEY)
            .withEnv("MINIO_SECRET_KEY", ADMIN_SECRET_KEY)
            .withCommand("server /data")
            .withExposedPorts(port)
            .waitingFor(new HttpWaitStrategy()
                .forPath("/minio/health/ready")
                .forPort(port)
                .withStartupTimeout(Duration.ofSeconds(10)));
        minioServer.start();

        Integer mappedPort = minioServer.getFirstMappedPort();
        Testcontainers.exposeHostPorts(mappedPort);
        minioServerUrl = String.format("http://%s:%s", minioServer.getContainerIpAddress(), mappedPort);

        // The Minio Java SDK uses the s3v4 protocol by default; it needs to be specified explicitly for mc
        String cmdTpl = "mc config host add myminio http://host.testcontainers.internal:%s %s %s --api s3v4 && "
            + "mc admin user add myminio %s %s readwrite";
        String cmd = String.format(cmdTpl, mappedPort, ADMIN_ACCESS_KEY, ADMIN_SECRET_KEY, USER_ACCESS_KEY, USER_SECRET_KEY);

        GenericContainer mcContainer = new GenericContainer<>("minio/mc")
            .withStartupCheckStrategy(new OneShotStartupCheckStrategy())
            .withCreateContainerCmdModifier(containerCommand -> containerCommand
                .withTty(true)
                .withEntrypoint("/bin/sh", "-c", cmd));
        mcContainer.start();
    }

    @Test
    public void canCreateBucketWithAdminUser() throws Exception {
        MinioClient client = new MinioClient(minioServerUrl, ADMIN_ACCESS_KEY, ADMIN_SECRET_KEY);
        client.ignoreCertCheck();
        String bucketName = "foo";
        client.makeBucket(bucketName);
        assertTrue(client.bucketExists(bucketName));
    }

    @Test
    public void canCreateBucketWithNonAdminUser() throws Exception {
        MinioClient client = new MinioClient(minioServerUrl, USER_ACCESS_KEY, USER_SECRET_KEY);
        client.ignoreCertCheck();
        String bucketName = "bar";
        client.makeBucket(bucketName);
        assertTrue(client.bucketExists(bucketName));
    }

    @AfterAll
    static void shutDown() {
        if (minioServer.isRunning()) {
            minioServer.stop();
        }
    }
}
You could run a one-off container (using OneShotStartupCheckStrategy) with mc and withCommand("your command"), connected to the same network as the minio server you're running (see Networking).
As @bsideup suggested, you can use the one-shot strategy, as shown here.
UPD: added a working test. It is important to know that
when the container is launched, it executes entrypoint + command (this is Docker in general and has nothing to do with Testcontainers). (Source: the Testcontainers GitHub.)
public class TempTest {

    @Rule
    public Network network = Network.newNetwork();

    private String runMcCommand(String cmd) throws TimeoutException {
        GenericContainer container = new GenericContainer<>("minio/mc")
            .withCommand(cmd)
            .withNetwork(network)
            .withStartupCheckStrategy(new OneShotStartupCheckStrategy())
            .withCreateContainerCmdModifier(command -> command.withTty(true));
        container.start();

        WaitingConsumer waitingConsumer = new WaitingConsumer();
        ToStringConsumer toStringConsumer = new ToStringConsumer();
        Consumer<OutputFrame> composedConsumer = toStringConsumer.andThen(waitingConsumer);
        container.followOutput(composedConsumer);
        waitingConsumer.waitUntilEnd(4, TimeUnit.SECONDS);

        return toStringConsumer.toUtf8String();
    }

    private void showCommandOutput(String cmd) throws TimeoutException {
        String res = runMcCommand(cmd);
        System.out.printf("Cmd '%s' result:\n----\n%s\n----%n", cmd, res);
    }

    @Test
    public void testAdminApi() throws Exception {
        showCommandOutput("ls");
        showCommandOutput("version");
    }
}
Another option is to reuse the content of the minio/mc Dockerfile, which is small, modify the executed command (the one-off "mc" by default), and run your own long-lived container once per test class. Compared to a one-off container, this saves some time if you need to execute multiple commands:
@Rule
public Network network = Network.newNetwork();

@Rule
public GenericContainer mc = new GenericContainer(new ImageFromDockerfile()
    .withDockerfileFromBuilder(builder ->
        builder
            .from("alpine:3.7")
            .run("apk add --no-cache ca-certificates && apk add --no-cache --virtual .build-deps curl && curl https://dl.minio.io/client/mc/release/linux-amd64/mc > /usr/bin/mc && chmod +x /usr/bin/mc && apk del .build-deps")
            .cmd("/bin/sh", "-c", "while sleep 3600; do :; done")
            .build())
    )
    .withNetwork(network);

@Test
public void myTest() throws Exception {
    mc.execInContainer("mc blah");
    mc.execInContainer("mc foo");
}
Basically, it runs an image with mc installed that sleeps in one-hour intervals, which is plenty for your tests. While it runs, you can execute commands against it; after you finish, it is killed. Your minio container can be on the same network, for example as sketched below.
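For illustration, a minimal sketch of wiring both containers into one network (the network alias "minio" and the credentials are made-up values for the example):

Network network = Network.newNetwork();

GenericContainer minioServer = new GenericContainer("minio/minio")
        .withNetwork(network)
        .withNetworkAliases("minio") // reachable as "minio" from containers on this network
        .withEnv("MINIO_ACCESS_KEY", "admin")
        .withEnv("MINIO_SECRET_KEY", "12345678")
        .withCommand("server /data")
        .withExposedPorts(9000);
minioServer.start();

// A one-off mc container reaches the server via its network alias:
GenericContainer mcContainer = new GenericContainer<>("minio/mc")
        .withNetwork(network)
        .withStartupCheckStrategy(new OneShotStartupCheckStrategy())
        .withCreateContainerCmdModifier(cmd -> cmd.withEntrypoint(
                "/bin/sh", "-c",
                "mc config host add myminio http://minio:9000 admin 12345678 --api s3v4"));
mcContainer.start();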
Minio with Docker Compose:
For those who are looking for an integration test of S3 with the Minio object server: the implementation below is based on docker-compose and uses the AWS S3 client for CRUD operations.
docker-compose file:
version: '3.7'
services:
  minio-service:
    image: quay.io/minio/minio
    command: minio server /data
    ports:
      - "9000:9000"
    environment:
      MINIO_ROOT_USER: minio
      MINIO_ROOT_PASSWORD: minio123
The actual IntegrationTest class:
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.S3Object;
import org.junit.jupiter.api.*;
import org.testcontainers.containers.DockerComposeContainer;
import java.io.File;
@TestInstance(TestInstance.Lifecycle.PER_CLASS)
class MinioIntegrationTest {

    private static final DockerComposeContainer minioContainer = new DockerComposeContainer<>(new File("src/test/resources/docker-compose.yml"))
        .withExposedService("minio-service", 9000);
    private static final String MINIO_ENDPOINT = "http://localhost:9000";
    private static final String ACCESS_KEY = "minio";
    private static final String SECRET_KEY = "minio123";

    private AmazonS3 s3Client;

    @BeforeAll
    void setupMinio() {
        minioContainer.start();
        initializeS3Client();
    }

    @AfterAll
    void closeMinio() {
        minioContainer.close();
    }

    private void initializeS3Client() {
        String name = Regions.US_EAST_1.getName();
        AwsClientBuilder.EndpointConfiguration endpoint = new AwsClientBuilder.EndpointConfiguration(MINIO_ENDPOINT, name);
        s3Client = AmazonS3ClientBuilder.standard()
            .withCredentials(new AWSStaticCredentialsProvider(new BasicAWSCredentials(ACCESS_KEY, SECRET_KEY)))
            .withEndpointConfiguration(endpoint)
            .withPathStyleAccessEnabled(true)
            .build();
    }

    @Test
    void shouldReturnActualContentBasedOnBucketName() throws Exception {
        String bucketName = "test-bucket";
        String key = "s3-test";
        String content = "Minio Integration test";
        s3Client.createBucket(bucketName);
        s3Client.putObject(bucketName, key, content);
        S3Object object = s3Client.getObject(bucketName, key);
        byte[] actualContent = new byte[22]; // the test content above is exactly 22 bytes long
        object.getObjectContent().read(actualContent);
        Assertions.assertEquals(content, new String(actualContent));
    }
}
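A possible refinement (a sketch; it reuses the /minio/health/ready endpoint from earlier in this thread and Testcontainers' Wait utility) is to wait for Minio to be ready instead of relying on start() timing alone:

private static final DockerComposeContainer minioContainer =
        new DockerComposeContainer<>(new File("src/test/resources/docker-compose.yml"))
                .withExposedService("minio-service", 9000,
                        Wait.forHttp("/minio/health/ready"));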
Related
I’m developing a .NET 6.0 FTP server as part of a feature to load the firmware of a hardware device. I need to run it inside a Docker container, but I’m unable to make it work in that environment even though it works perfectly when executed as a regular executable. It seems to be something related to Docker networking, but I can’t figure out what it is.
This is the Dockerfile for the container, that is based on Alpine (mcr.microsoft.com/dotnet/aspnet:6.0-alpine), with some additions from the default Dockerfile created by Visual Studio:
FROM mcr.microsoft.com/dotnet/aspnet:6.0-alpine AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
RUN apk add openrc --no-cache
ENV MUSL_LOCALE_DEPS cmake make musl-dev gcc gettext-dev libintl
ENV MUSL_LOCPATH /usr/share/i18n/locales/musl
RUN apk add --no-cache \
$MUSL_LOCALE_DEPS \
&& wget https://gitlab.com/rilian-la-te/musl-locales/-/archive/master/musl-locales-master.zip \
&& unzip musl-locales-master.zip \
&& cd musl-locales-master \
&& cmake -DLOCALE_PROFILE=OFF -D CMAKE_INSTALL_PREFIX:PATH=/usr . && make && make install \
&& cd .. && rm -r musl-locales-master
RUN apk add icu-libs
ENV DOTNET_SYSTEM_GLOBALIZATION_INVARIANT=false
FROM mcr.microsoft.com/dotnet/sdk:6.0-alpine AS build
WORKDIR /src
COPY ["nuget.config", "."]
COPY ["CONTAINERS/Project1/Project1.csproj", "CONTAINERS/Project/"]
RUN dotnet restore "CONTAINERS/Project1.csproj"
COPY . .
WORKDIR "/src/CONTAINERS/Project1"
RUN dotnet build "Project1.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "Project1.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "Project1.dll"]
The docker run parameters are these:
-p 20:20 -p 21000-22000:21000-22000
where 20 is the FTP control port (it is the port used by the external hardware device and cannot be modified by me) and 21000-22000 is the range for FTP passive mode.
The FTP server code is quite simple, and it works nicely when executed directly on the host machine:
public class FtpServer : IDisposable
{
...
public ErrorCode Start(string ip, int port, string basepath, string user, string password, int minPassivePort = 0, int maxPassivePort = 0)
{
ErrorCode retVal = ErrorCode.Success;
_basepath = basepath;
_user = user;
_password = password;
PassivePortMin = minPassivePort;
PassivePortMax = maxPassivePort;
ServicePointManager.DefaultConnectionLimit = 200;
_localEndPoint = new IPEndPoint(IPAddress.Parse(ip), port);
_listener = new TcpListener(_localEndPoint);
_listening = true;
_activeConnections = new List<ClientConnection>();
try
{
_listener.Start();
LocalEndpoint = ((IPEndPoint)_listener.LocalEndpoint).Address.ToString();
_listener.BeginAcceptTcpClient(HandleAcceptTcpClient, _listener);
}
catch (Exception ex)
{
log.Error("Error starting FTP server", ex);
retVal = ErrorCode.ConnectionFailure;
}
return retVal;
}
private void HandleAcceptTcpClient(IAsyncResult result)
{
if (_listening)
{
TcpClient client = _listener.EndAcceptTcpClient(result);
_listener.BeginAcceptTcpClient(HandleAcceptTcpClient, _listener);
ClientConnection connection = new ClientConnection(client, _user, _password, _basepath);
ThreadPool.QueueUserWorkItem(connection.HandleClient, client);
}
}
public class ClientConnection
{
public ClientConnection(TcpClient client, string username, string password, string basepath)
{
_controlClient = client;
_currentUser = new User
{
Username = username,
Password = password,
HomeDir = basepath
};
_validCommands = new List<string>();
}
public void HandleClient(object obj)
{
// bool error = false;
try
{
_remoteEndPoint = (IPEndPoint)_controlClient.Client.RemoteEndPoint;
_clientIP = _remoteEndPoint.Address.ToString();
_controlStream = _controlClient.GetStream();
_controlReader = new StreamReader(_controlStream);
_controlWriter = new StreamWriter(_controlStream);
_controlWriter.WriteLine("220 Service Ready.");
_controlWriter.Flush();
_validCommands.AddRange(new string[] { "AUTH", "USER", "PASS", "QUIT", "HELP", "NOOP" });
string line;
_dataClient = new TcpClient();
string renameFrom = null;
while ((line = _controlReader.ReadLine()) != null)
{
string response = null;
string[] command = line.Split(' ');
string cmd = command[0].ToUpperInvariant();
string arguments = command.Length > 1 ? line.Substring(command[0].Length + 1) : null;
if (arguments != null && arguments.Trim().Length == 0)
{
arguments = null;
}
if (!_validCommands.Contains(cmd))
{
response = CheckUser();
}
if (cmd != "RNTO")
{
renameFrom = null;
}
Console.WriteLine(cmd + " " + arguments);
if (response == null)
{
switch (cmd)
{
default:
response = "502 Command not implemented";
break;
}
}
if (_controlClient == null || !_controlClient.Connected)
{
break;
}
else
{
if (!string.IsNullOrEmpty(response))
{
_controlWriter.WriteLine(response);
_controlWriter.Flush();
}
Console.WriteLine(response);
if (response.StartsWith("221"))
{
break;
}
}
}
}
catch (Exception ex)
{
log.Error("Error sending command", ex);
Console.WriteLine(ex.Message);
Console.WriteLine(ex.StackTrace);
}
Dispose();
}
}
The issue seems to be located in _controlWriter: it seems that something is blocking the response to the device (220 Service Ready), or maybe the frame is not being routed to the right network interface, because nothing is read from _controlReader. As I mentioned earlier, this exact same code works perfectly when I execute it on the host machine, outside the Docker container, which is why I think it could be something related to Docker networking.
I hope you can help me, thanks!
It was something related to line endings. Since the Docker container used a Linux-based image, lines were terminated with \n, while the device expected \r\n.
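For anyone hitting the same problem, one minimal fix (a sketch, not necessarily the exact code I shipped) is to force CRLF on the control-stream writer, because StreamWriter uses Environment.NewLine by default, which is \n on Linux images:

_controlWriter = new StreamWriter(_controlStream)
{
    // FTP (RFC 959) requires CRLF line endings; on a Linux-based image
    // the default Environment.NewLine is "\n", so set it explicitly.
    NewLine = "\r\n",
    AutoFlush = true
};
_controlWriter.WriteLine("220 Service Ready."); // now terminated with \r\n

Alternatively, writing "220 Service Ready.\r\n" with Write() instead of WriteLine() achieves the same.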
Thanks to everyone who took a look at this.
I run RabbitMQ through Docker Desktop with the following settings:
rabbitmq:
  container_name: rabbitmq
  restart: always
  ports:
    - "5672:5672"
    - "15672:15672"
The second port number is for the RabbitMQ dashboard. I also have a basic REST API endpoint which is supposed to publish a RabbitMQ message as follows:
private readonly IMediator _mediator;
private readonly IPublishEndpoint _publish;
public FlightController(IMediator mediator, IPublishEndpoint publish)
{
_mediator = mediator;
_publish = publish;
}
[HttpPost(Name = "CheckoutCrew")]
[ProducesResponseType((int)HttpStatusCode.Accepted)]
public async Task<IActionResult> CheckoutCrew([FromBody] ScheduleFlightCommand command)
{
var crewIds = new List<string>() { command.SeniorCrewId, command.Crew1Id, command.Crew2Id, command.Crew3Id };
var hasSchedule = true;
var crewCheckoutEvent = new CrewCheckoutEvent() { EmployeeNumbers = crewIds, HasSchedule = hasSchedule };
await _publish.Publish(crewCheckoutEvent);
return Accepted();
}
And the code below shows the configuration regarding RabbitMQ:
services.AddMassTransit(config => {
config.UsingRabbitMq((ctx, cfg) => {
cfg.Host(Configuration["EventBusSettings:HostAddress"]);
cfg.UseHealthCheck(ctx);
});
});
services.AddMassTransitHostedService();
The Configuration["EventBusSettings:HostAddress"] line points to this in appsettings.json:
"EventBusSettings": {
"HostAddress": "amqp://guest:guest#localhost:5672"
}
After I run my API (named Flight.API), I check the RabbitMQ logs via Docker Desktop and see this:
2022-03-31 12:52:41.794701+00:00 [info] <0.1020.0> accepting AMQP connection <0.1020.0> (xxx.xx.x.x:45292 -> xxx.xx.x.x:5672)
2022-03-31 12:52:41.817563+00:00 [info] <0.1020.0> Connection <0.1020.0> (xxx.xx.x.x:45292 -> xxx.xx.x.x:5672) has a client-provided name: Flight.API
2022-03-31 12:52:41.820704+00:00 [info] <0.1020.0> connection <0.1020.0> (xxx.xx.x.x:45292 -> xxx.xx.x.x:5672 - Flight.API): user 'guest' authenticated and granted access to vhost '/'
Everything seems okay, doesn't it?
I have also wrapped the .Publish call in a try...catch, but it doesn't throw any exceptions either. When my endpoint returns Accepted without any issue, I go and check the RabbitMQ dashboard, but it shows Connections: 0, Channels: 0, etc. The Message rates section is also staying idle.
I cannot see what I am missing.
(Currently I do not have any consumers, but I should still see some signs of life, am I right? Those Connections and Channels counters shouldn't stay at 0 after I have successfully published my payload.)
Thank you in advance.
Edit after adding a consumer class
Still no changes on the RabbitMQ management screens. Everything is at its default value, empty, or idle. Below is my configuration in the consumer project:
services.AddMassTransit(config => {
config.AddConsumer<CrewChecoutConsumer>();
config.UsingRabbitMq((ctx, cfg) => {
cfg.Host(Configuration["EventBusSettings:HostAddress"]);
cfg.UseHealthCheck(ctx);
cfg.ReceiveEndpoint(EventBusConstants.CrewCheckoutQueue, config => {
config.ConfigureConsumer<CrewChecoutConsumer>(ctx);
});
});
});
services.AddMassTransitHostedService();
services.AddScoped<CrewChecoutConsumer>();
The appsettings.json file in the consumer project is changed accordingly:
"EventBusSettings": {
"HostAddress": "amqp://guest:guest#localhost:5672"
}
And, below is my complete consumer class:
public class CrewChecoutConsumer : IConsumer<CrewCheckoutEvent>
{
private readonly IMapper _mapper;
private readonly IMediator _mediator;
public CrewChecoutConsumer(IMapper mapper, IMediator mediator)
{
_mapper = mapper;
_mediator = mediator;
}
public async Task Consume(ConsumeContext<CrewCheckoutEvent> context)
{
foreach (var employeeNumber in context.Message.EmployeeNumbers)
{
var query = new GetSingleCrewQuery(employeeNumber);
var crew = await _mediator.Send(query);
crew.HasSchedule = context.Message.HasSchedule;
var updateCrewCommand = new UpdateCrewCommand();
_mapper.Map(crew, updateCrewCommand, typeof(CrewModel), typeof(UpdateCrewCommand));
var result = await _mediator.Send(updateCrewCommand);
}
}
}
If you do not have any consumers, the only thing you will see is a message rate on the published message exchange as messages are delivered to the exchange, but then discarded as there are no receive endpoints (queues) bound to that message type exchange.
Until you have a consumer, you won't see any messages in any queues.
Also, you should pass the controller's CancellationToken to the Publish call.
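A sketch of the action with the token added (ASP.NET Core binds a CancellationToken action parameter automatically; everything else is unchanged from the question):

[HttpPost(Name = "CheckoutCrew")]
[ProducesResponseType((int)HttpStatusCode.Accepted)]
public async Task<IActionResult> CheckoutCrew([FromBody] ScheduleFlightCommand command, CancellationToken cancellationToken)
{
    var crewIds = new List<string>() { command.SeniorCrewId, command.Crew1Id, command.Crew2Id, command.Crew3Id };
    var crewCheckoutEvent = new CrewCheckoutEvent() { EmployeeNumbers = crewIds, HasSchedule = true };

    // The token cancels the publish if the HTTP request is aborted.
    await _publish.Publish(crewCheckoutEvent, cancellationToken);
    return Accepted();
}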
I have a .NET Core console application which I have containerized. The purpose of my application is to accept a file URL and return the text. Below is my Dockerfile.
FROM mcr.microsoft.com/dotnet/runtime:5.0 AS base
WORKDIR /app
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
WORKDIR /src
COPY ["CLI_ReadData/CLI_ReadData.csproj", "CLI_ReadData/"]
RUN dotnet restore "CLI_ReadData/CLI_ReadData.csproj"
COPY . .
WORKDIR "/src/CLI_ReadData"
RUN dotnet build "CLI_ReadData.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "CLI_ReadData.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "CLI_ReadData.dll"]
I now want to create an Argo Workflow for the same. Below is the corresponding .yaml file
metadata:
  name: read-data
  namespace: argo
spec:
  entrypoint: read-data
  templates:
    - name: read-data
      dag:
        tasks:
          - name: read-all-data
            template: read-all-data
            arguments:
              parameters:
                - name: fileUrl
                  value: 'https://dpaste.com/24593EK38'
    - name: read-all-data
      inputs:
        parameters:
          - name: fileUrl
      container:
        image: 'manankapoor2705/cli_readdata:latest'
        - app/bin/Debug/net5.0/CLI_ReadData.dll
        args:
          - '--fileUrl={{inputs.parameters.fileUrl}}'
  ttlStrategy:
    secondsAfterCompletion: 300
While creating the Argo Workflow I am getting the below error:
task 'read-data.read-all-data' errored: container "main" in template
"read-all-data", does not have the command specified: when using the
emissary executor you must either explicitly specify the command, or
list the image's command in the index:
https://argoproj.github.io/argo-workflows/workflow-executors/#emissary-emissary
I am also attaching my Program.cs file for reference purposes:
class Program
{
public class CommandLineOptions
{
[Option("fileUrl", Required = true, HelpText = "Please provide a url of the text file.")]
public string fileUrl { get; set; }
}
static void Main(string[] args)
{
try
{
var result = Parser.Default.ParseArguments<CommandLineOptions>(args)
.WithParsed<CommandLineOptions>(options =>
{
Console.WriteLine("Arguments received...Processing further !");
var text = readTextFromFile(options.fileUrl);
Console.WriteLine("Read names from textfile...");
var names = generateListOfNames(text);
});
if (result.Errors.Any())
{
throw new Exception($"Task Failed {String.Join('\n', result.Errors)}");
}
//exit successfully
Environment.Exit(0);
}
catch (Exception ex)
{
Console.WriteLine("Task failed!!");
Console.WriteLine(ex.ToString());
//failed exit
Environment.Exit(1);
}
Console.WriteLine("Hello World!");
}
public static string readTextFromFile(string path)
{
System.Net.WebRequest request = System.Net.WebRequest.Create(path);
System.Net.WebResponse response = request.GetResponse();
Stream dataStream = response.GetResponseStream();
var reader = new StreamReader(dataStream);
var text = reader.ReadToEnd();
reader.Close();
response.Close();
return text;
}
public static List<string> generateListOfNames(string text)
{
var names = text.Split(',').ToList<string>();
foreach (var name in names)
Console.WriteLine(name);
return names;
}
}
Can anyone please help me out?
The read-all-data template looks to me like invalid YAML. I think you're missing the command field name. I think the path also needs either a leading / (for an absolute path), or to start with bin/ (for a relative path with /app as the working directory).
- name: read-all-data
  inputs:
    parameters:
      - name: fileUrl
  container:
    image: 'manankapoor2705/cli_readdata:latest'
    command:
      - /app/bin/Debug/net5.0/CLI_ReadData.dll
    args:
      - '--fileUrl={{inputs.parameters.fileUrl}}'
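Note that, given the Dockerfile above publishes to /app and sets ENTRYPOINT ["dotnet", "CLI_ReadData.dll"], the command may instead need to invoke the dotnet host with the published DLL; a sketch under that assumption:

container:
  image: 'manankapoor2705/cli_readdata:latest'
  command:
    - dotnet
    - /app/CLI_ReadData.dll
  args:
    - '--fileUrl={{inputs.parameters.fileUrl}}'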
I have a working dotnet application that I can run locally; the same code also runs in an Azure web app. I have been able to containerize it. However, when I run it in the container it fails to read the environment variables.
Code to get/check the environment variables in the controller:
public ReportController(ILogger<ReportController> logger, IConfiguration iconfig)
{
_logger = logger;
_config = iconfig;
_storageConnString = Environment.GetEnvironmentVariable("AzureWebJobsStorage");
_containerName = Environment.GetEnvironmentVariable("ReportContainer");
string CredentialConnectionString = Environment.GetEnvironmentVariable("CredentialConnectionString");
if(CredentialConnectionString == null)
{
throw new Exception("Credential connection string is null");
}
}
Code in startup:
public static void Main(string[] args)
{
CreateHostBuilder(args).Build().Run();
}
public static IHostBuilder CreateHostBuilder(string[] args) =>
Host.CreateDefaultBuilder(args)
.ConfigureWebHostDefaults(webBuilder =>
{
webBuilder.UseStartup<Startup>();
})
.ConfigureAppConfiguration((hostingContext, config) =>
{
config.AddEnvironmentVariables();
});
My docker-compose file that sets the variables:
services:
  myreports:
    image: myreports
    build:
      context: .
      dockerfile: myreports/Dockerfile
    ports: [5000:5000]
    environment:
      - "APPSETTINGS_AzureWebJobsStorage = DefaultEndpointsProtocol=https;AccountName=mystorage;AccountKey=xxxx+xx/xx==;EndpointSuffix=core.windows.net"
      - "APPSETTINGS_HarmonyConnectionString = Data Source=mydb.database.windows.net;AttachDbFilename=;Initial Catalog=Harmony;Integrated Security=False;Persist Security Info=False;User ID=sqlreporter;Password=mypass"
      - "APPSETTINGS_CredentialConnectionString = Data Source=mydb.database.windows.net;AttachDbFilename=;Initial Catalog=Credential;Integrated Security=False;Persist Security Info=False;User ID=sqlreporter;Password=mypass"
      - "CredentialConnectionString = Data Source=mydb.database.windows.net;AttachDbFilename=;Initial Catalog=Credential;Integrated Security=False;Persist Security Info=False;User ID=sqlreporter;Password=mypass"
      - "APPSETTINGS_ReportContainer = taxdocuments"
As you can see, I'm attempting both with and without the APPSETTINGS_ prefix,
but when I hit the app's port, the container returns:
myreports-1 | System.Exception: Credential connection string is null
The code works fine in the App Service, where it gets the variables correctly.
You don't need to add APPSETTINGS_ in front of the variable names. What's causing the issue is the spaces around the equals sign in your docker-compose file. The quotes are not needed, so I'd remove them.
This should work:
services:
  myreports:
    image: myreports
    build:
      context: .
      dockerfile: myreports/Dockerfile
    ports: [5000:5000]
    environment:
      - AzureWebJobsStorage=DefaultEndpointsProtocol=https;AccountName=mystorage;AccountKey=xxxx+xx/xx==;EndpointSuffix=core.windows.net
      - HarmonyConnectionString=Data Source=mydb.database.windows.net;AttachDbFilename=;Initial Catalog=Harmony;Integrated Security=False;Persist Security Info=False;User ID=sqlreporter;Password=mypass
      - CredentialConnectionString=Data Source=mydb.database.windows.net;AttachDbFilename=;Initial Catalog=Credential;Integrated Security=False;Persist Security Info=False;User ID=sqlreporter;Password=mypass
      - ReportContainer=taxdocuments
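If in doubt, you can print the environment the container actually receives (myreports being the service name from the compose file above):

$ docker compose exec myreports env | grep ConnectionString

Each variable should appear as NAME=value with no surrounding spaces or quotes.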
I have a PHP login system that should be built to run on both XAMPP and Docker at the same time. My database needs to be stored locally.
I create my image and container like this:
Image: docker build -t php .
Container: docker run -dp 9000:80 --name php-app php
<?php
$host = "host.docker.internal"; // needs to be this under Docker, or 'localhost' under XAMPP
$name = "test";
$user = "root";
$passwort = "";

try {
    $mysql = new PDO("mysql:host=$host;dbname=$name", $user, $passwort);
}
catch (PDOException $e) {
    echo "SQL Error: ".$e->getMessage();
}
?>
Where do I get the information about which system I am running on, so that I can make this value dynamic?
You can check if you are inside Docker this way:
function isDocker(): bool
{
return is_file("/.dockerenv");
}
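For example, combined with the connection code from the question:

$host = isDocker() ? "host.docker.internal" : "localhost";
$mysql = new PDO("mysql:host=$host;dbname=$name", $user, $passwort);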
I haven't worked on Windows systems yet, but on Linux you can inspect the process's cgroup entries to find out whether it is executing under Docker or not.
// Under Docker, the /proc/self/cgroup entries contain the container's cgroup path.
$processes = explode(PHP_EOL, shell_exec('cat /proc/self/cgroup'));
$processes = array_filter($processes);

$is_docker = false;
foreach ($processes as $process) {
    if (strpos($process, 'docker') !== false) {
        $is_docker = true;
        break;
    }
}
Then you can act on the result as needed:
if ($is_docker === true) {
    // Do something
}