.NET 6.0 FTP server networking issue in a Docker container

I’m developing a .NET 6.0 FTP server as part of a feature that loads firmware onto a hardware device. I need to run it inside a Docker container, but I’m unable to make it work in that environment, even though it works perfectly when I run it as a regular executable. It seems to be something related to Docker networking, but I can’t figure out what it is.
This is the Dockerfile for the container. It is based on Alpine (mcr.microsoft.com/dotnet/aspnet:6.0-alpine), with some additions to the default Dockerfile created by Visual Studio:
FROM mcr.microsoft.com/dotnet/aspnet:6.0-alpine AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
RUN apk add openrc --no-cache
ENV MUSL_LOCALE_DEPS cmake make musl-dev gcc gettext-dev libintl
ENV MUSL_LOCPATH /usr/share/i18n/locales/musl
RUN apk add --no-cache \
    $MUSL_LOCALE_DEPS \
    && wget https://gitlab.com/rilian-la-te/musl-locales/-/archive/master/musl-locales-master.zip \
    && unzip musl-locales-master.zip \
    && cd musl-locales-master \
    && cmake -DLOCALE_PROFILE=OFF -D CMAKE_INSTALL_PREFIX:PATH=/usr . && make && make install \
    && cd .. && rm -r musl-locales-master
RUN apk add icu-libs
ENV DOTNET_SYSTEM_GLOBALIZATION_INVARIANT=false
FROM mcr.microsoft.com/dotnet/sdk:6.0-alpine AS build
WORKDIR /src
COPY ["nuget.config", "."]
COPY ["CONTAINERS/Project1/Project1.csproj", "CONTAINERS/Project/"]
RUN dotnet restore "CONTAINERS/Project1.csproj"
COPY . .
WORKDIR "/src/CONTAINERS/Project1"
RUN dotnet build "Project1.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "Project1.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "Project1.dll"]
These are the docker run parameters:
-p 20:20 -p 21000-22000:21000-22000
where 20 is the FTP control port (it is the port used by the external hardware device and cannot be changed on my side), and 21000-22000 is the port range for FTP passive mode.
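For reference, the complete command looks something like this (the image name here is hypothetical):

docker run -p 20:20 -p 21000-22000:21000-22000 ftpserver-image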
The FTP server code is quite simple, and it works fine when executed directly on the host machine:
public class FtpServer : IDisposable
{
    ...

    public ErrorCode Start(string ip, int port, string basepath, string user, string password, int minPassivePort = 0, int maxPassivePort = 0)
    {
        ErrorCode retVal = ErrorCode.Success;
        _basepath = basepath;
        _user = user;
        _password = password;
        PassivePortMin = minPassivePort;
        PassivePortMax = maxPassivePort;
        ServicePointManager.DefaultConnectionLimit = 200;
        _localEndPoint = new IPEndPoint(IPAddress.Parse(ip), port);
        _listener = new TcpListener(_localEndPoint);
        _listening = true;
        _activeConnections = new List<ClientConnection>();
        try
        {
            _listener.Start();
            LocalEndpoint = ((IPEndPoint)_listener.LocalEndpoint).Address.ToString();
            _listener.BeginAcceptTcpClient(HandleAcceptTcpClient, _listener);
        }
        catch (Exception ex)
        {
            log.Error("Error starting FTP server", ex);
            retVal = ErrorCode.ConnectionFailure;
        }
        return retVal;
    }

    private void HandleAcceptTcpClient(IAsyncResult result)
    {
        if (_listening)
        {
            TcpClient client = _listener.EndAcceptTcpClient(result);
            _listener.BeginAcceptTcpClient(HandleAcceptTcpClient, _listener);
            ClientConnection connection = new ClientConnection(client, _user, _password, _basepath);
            ThreadPool.QueueUserWorkItem(connection.HandleClient, client);
        }
    }
}

public class ClientConnection
{
    public ClientConnection(TcpClient client, string username, string password, string basepath)
    {
        _controlClient = client;
        _currentUser = new User
        {
            Username = username,
            Password = password,
            HomeDir = basepath
        };
        _validCommands = new List<string>();
    }

    public void HandleClient(object obj)
    {
        // bool error = false;
        try
        {
            _remoteEndPoint = (IPEndPoint)_controlClient.Client.RemoteEndPoint;
            _clientIP = _remoteEndPoint.Address.ToString();
            _controlStream = _controlClient.GetStream();
            _controlReader = new StreamReader(_controlStream);
            _controlWriter = new StreamWriter(_controlStream);
            _controlWriter.WriteLine("220 Service Ready.");
            _controlWriter.Flush();
            _validCommands.AddRange(new string[] { "AUTH", "USER", "PASS", "QUIT", "HELP", "NOOP" });
            string line;
            _dataClient = new TcpClient();
            string renameFrom = null;
            while ((line = _controlReader.ReadLine()) != null)
            {
                string response = null;
                string[] command = line.Split(' ');
                string cmd = command[0].ToUpperInvariant();
                string arguments = command.Length > 1 ? line.Substring(command[0].Length + 1) : null;
                if (arguments != null && arguments.Trim().Length == 0)
                {
                    arguments = null;
                }
                if (!_validCommands.Contains(cmd))
                {
                    response = CheckUser();
                }
                if (cmd != "RNTO")
                {
                    renameFrom = null;
                }
                Console.WriteLine(cmd + " " + arguments);
                if (response == null)
                {
                    switch (cmd)
                    {
                        default:
                            response = "502 Command not implemented";
                            break;
                    }
                }
                if (_controlClient == null || !_controlClient.Connected)
                {
                    break;
                }
                else
                {
                    if (!string.IsNullOrEmpty(response))
                    {
                        _controlWriter.WriteLine(response);
                        _controlWriter.Flush();
                    }
                    Console.WriteLine(response);
                    if (response.StartsWith("221"))
                    {
                        break;
                    }
                }
            }
        }
        catch (Exception ex)
        {
            log.Error("Error sending command", ex);
            Console.WriteLine(ex.Message);
            Console.WriteLine(ex.StackTrace);
        }
        Dispose();
    }
}
The issue seems to be located in _controlWriter: something appears to be blocking the response to the device (220 Service Ready), or maybe the frame is not being forwarded to the right network interface, because nothing is read from _controlReader. As I mentioned earlier, this exact same code works perfectly when I execute it on the host machine, outside the Docker container, which is why I think it could be something related to Docker networking.
I hope you can help me, thanks!

It was something related to the carriage return. Since the Docker container used a Linux-based image, the line ending written was \n, while the device expected \r\n.
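In case it helps others: StreamWriter.NewLine defaults to Environment.NewLine, which is "\n" on Linux images, while FTP control responses are CRLF-terminated (RFC 959). A minimal sketch of the fix, applied to the writer set up in HandleClient above:

// StreamWriter.NewLine defaults to Environment.NewLine ("\n" on Linux),
// but FTP control responses must be terminated with CRLF (RFC 959).
_controlWriter = new StreamWriter(_controlStream);
_controlWriter.NewLine = "\r\n"; // force CRLF regardless of the host OS
_controlWriter.WriteLine("220 Service Ready.");
_controlWriter.Flush();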
Thanks to everyone who took a look at this.

Related

"localhost didn't send any data" when using Golang + Gin + Docker

I've created a simple API following a YouTube tutorial that works perfectly locally. Once I containerise the app and run the container, I can't access the API at http://localhost:8080. I'm guessing it has something to do with the port settings I'm using in the Dockerfile, but I'm not sure.
main.go file:
package main

import (
	"errors"
	"net/http"

	"github.com/gin-gonic/gin"
)

type phone struct {
	ID       string `json:"id"`
	Model    string `json:"model"`
	Year     string `json:"year"`
	Quantity int    `json:"quantity"`
}

var phones = []phone{
	{ID: "1", Model: "iPhone 11", Year: "2019", Quantity: 4},
	{ID: "2", Model: "iPhone 6", Year: "2014", Quantity: 9},
	{ID: "3", Model: "iPhone X", Year: "2017", Quantity: 2},
}

func phoneById(c *gin.Context) {
	id := c.Param("id")
	phone, err := getPhoneById(id)
	if err != nil {
		c.IndentedJSON(http.StatusNotFound, gin.H{"message": "Phone not found."})
		return
	}
	c.IndentedJSON(http.StatusOK, phone)
}

func checkoutPhone(c *gin.Context) {
	id, ok := c.GetQuery("id")
	if !ok {
		c.IndentedJSON(http.StatusBadRequest, gin.H{"Message": "Missing id query parameter"})
		return
	}
	phone, err := getPhoneById(id)
	if err != nil {
		c.IndentedJSON(http.StatusBadRequest, gin.H{"Message": "Phone not found"})
		return
	}
	if phone.Quantity <= 0 {
		c.IndentedJSON(http.StatusBadRequest, gin.H{"Message": "Phone not available."})
		return
	}
	phone.Quantity -= 1
	c.IndentedJSON(http.StatusOK, phone)
}

func returnPhone(c *gin.Context) {
	id, ok := c.GetQuery("id")
	if !ok {
		c.IndentedJSON(http.StatusBadRequest, gin.H{"Message": "Missing id query parameter"})
		return
	}
	phone, err := getPhoneById(id)
	if err != nil {
		c.IndentedJSON(http.StatusBadRequest, gin.H{"Message": "Phone not found"})
		return
	}
	if phone.Quantity <= 0 {
		c.IndentedJSON(http.StatusBadRequest, gin.H{"Message": "Phone not available."})
		return
	}
	phone.Quantity += 1
	c.IndentedJSON(http.StatusOK, phone)
}

func getPhoneById(id string) (*phone, error) {
	for i, p := range phones {
		if p.ID == id {
			return &phones[i], nil
		}
	}
	return nil, errors.New("Phone not found.")
}

func getPhones(c *gin.Context) {
	c.IndentedJSON(http.StatusOK, phones)
}

func createPhone(c *gin.Context) {
	var newPhone phone
	if err := c.BindJSON(&newPhone); err != nil {
		return
	}
	phones = append(phones, newPhone)
	c.IndentedJSON(http.StatusCreated, newPhone)
}

func main() {
	router := gin.Default()
	router.GET("/phones", getPhones)
	router.GET("/phones/:id", phoneById)
	router.POST("/phones", createPhone)
	router.PATCH("/checkout", checkoutPhone)
	router.PATCH("/return", returnPhone)
	router.Run("localhost:8080")
}
and my Dockerfile:
# The standard golang image contains all of the resources to build,
# but it is very large. So build on it, then copy the output to the
# final runtime container.
FROM golang:latest AS buildContainer
WORKDIR /go/src/app
COPY . .
# Flags: -s -w to remove the symbol table and debug info
# CGO_ENABLED=0 is required for the code to run properly when copied to alpine
RUN CGO_ENABLED=0 GOOS=linux go build -v -mod mod -ldflags "-s -w" -o restapi .
#Now build the runtime container, just a stripped down linux and copy the
#binary to it.
FROM alpine:latest
WORKDIR /app
COPY --from=buildContainer /go/src/app/restapi .
ENV GIN_MODE release
ENV HOST 0.0.0.0
ENV PORT 8080
EXPOSE 8080
CMD ["./restapi"]
I've tried different Dockerfiles found on Google, and tried creating my own from scratch.
You need to bind to the public network interface inside the container. Each container is its own host, and when you bind to the loopback interface inside it, the server will not be accessible from the outside world.
router.Run("0.0.0.0:8080")
Additionally, make sure you publish this port when running the container.
docker run --publish 8080:8080 myapp
You actually indicate the right intent with your environment variables, but they are not used in your code.
ENV HOST 0.0.0.0
ENV PORT 8080
You can use os.Getenv or os.LookupEnv to get those variables from your code and use them.
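For example, a minimal sketch of main using those variables (assuming an added "os" import and the same route registrations as above):

func main() {
	router := gin.Default()
	// ... route registrations as above ...

	// Fall back to the Dockerfile defaults when the variables are unset.
	host := os.Getenv("HOST")
	if host == "" {
		host = "0.0.0.0" // bind to all interfaces inside the container
	}
	port, ok := os.LookupEnv("PORT")
	if !ok {
		port = "8080"
	}
	router.Run(host + ":" + port)
}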

Unable to run Argo workflow with containerized .Net Console app

I have a .NET Core console application which I have containerized. The purpose of my application is to accept a file URL and return the text. Below is my Dockerfile.
FROM mcr.microsoft.com/dotnet/runtime:5.0 AS base
WORKDIR /app
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
WORKDIR /src
COPY ["CLI_ReadData/CLI_ReadData.csproj", "CLI_ReadData/"]
RUN dotnet restore "CLI_ReadData/CLI_ReadData.csproj"
COPY . .
WORKDIR "/src/CLI_ReadData"
RUN dotnet build "CLI_ReadData.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "CLI_ReadData.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "CLI_ReadData.dll"]
I now want to create an Argo Workflow for it. Below is the corresponding .yaml file:
metadata:
  name: read-data
  namespace: argo
spec:
  entrypoint: read-data
  templates:
    - name: read-data
      dag:
        tasks:
          - name: read-all-data
            template: read-all-data
            arguments:
              parameters:
                - name: fileUrl
                  value: 'https://dpaste.com/24593EK38'
    - name: read-all-data
      inputs:
        parameters:
          - name: fileUrl
      container:
        image: 'manankapoor2705/cli_readdata:latest'
        - app/bin/Debug/net5.0/CLI_ReadData.dll
        args:
          - '--fileUrl={{inputs.parameters.fileUrl}}'
  ttlStrategy:
    secondsAfterCompletion: 300
While creating the Argo Workflow I am getting the below error:
task 'read-data.read-all-data' errored: container "main" in template
"read-all-data", does not have the command specified: when using the
emissary executor you must either explicitly specify the command, or
list the image's command in the index:
https://argoproj.github.io/argo-workflows/workflow-executors/#emissary-emissary
I am also attaching my Program.cs file for reference:
class Program
{
    public class CommandLineOptions
    {
        [Option("fileUrl", Required = true, HelpText = "Please provide a url of the text file.")]
        public string fileUrl { get; set; }
    }

    static void Main(string[] args)
    {
        try
        {
            var result = Parser.Default.ParseArguments<CommandLineOptions>(args)
                .WithParsed<CommandLineOptions>(options =>
                {
                    Console.WriteLine("Arguments received...Processing further !");
                    var text = readTextFromFile(options.fileUrl);
                    Console.WriteLine("Read names from textfile...");
                    var names = generateListOfNames(text);
                });
            if (result.Errors.Any())
            {
                throw new Exception($"Task Failed {String.Join('\n', result.Errors)}");
            }
            //exit successfully
            Environment.Exit(0);
        }
        catch (Exception ex)
        {
            Console.WriteLine("Task failed!!");
            Console.WriteLine(ex.ToString());
            //failed exit
            Environment.Exit(1);
        }
        Console.WriteLine("Hello World!");
    }

    public static string readTextFromFile(string path)
    {
        System.Net.WebRequest request = System.Net.WebRequest.Create(path);
        System.Net.WebResponse response = request.GetResponse();
        Stream dataStream = response.GetResponseStream();
        var reader = new StreamReader(dataStream);
        var text = reader.ReadToEnd();
        reader.Close();
        response.Close();
        return text;
    }

    public static List<string> generateListOfNames(string text)
    {
        var names = text.Split(',').ToList<string>();
        foreach (var name in names)
            Console.WriteLine(name);
        return names;
    }
}
Can anyone please help me out?
The read-all-data template looks to me like invalid YAML. I think you're missing the command field name. I think the path also needs either a leading / (for an absolute path), or to start with bin/ (for a relative path with /app as the working directory).
- name: read-all-data
  inputs:
    parameters:
      - name: fileUrl
  container:
    image: 'manankapoor2705/cli_readdata:latest'
    command:
      - /app/bin/Debug/net5.0/CLI_ReadData.dll
    args:
      - '--fileUrl={{inputs.parameters.fileUrl}}'

How does my PHP recognise whether it is running via Docker or XAMPP?

I have a PHP login system that should be built to run on both XAMPP and Docker at the same time. My database needs to be stored locally.
I create my image and container like this:
Image: docker build -t php .
Container: docker run -dp 9000:80 --name php-app php
<?php
$host = "host.docker.internal"; // needs to be that, or 'localhost'
$name = "test";
$user = "root";
$passwort = "";

try {
    $mysql = new PDO("mysql:host=$host;dbname=$name", $user, $passwort);
} catch (PDOException $e) {
    echo "SQL Error: " . $e->getMessage();
}
?>
Where do I get the information on which system I am running to make this value dynamic?
You can check if you are inside Docker this way:
function isDocker(): bool
{
    return is_file("/.dockerenv");
}
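Applied to the connection code from the question, the host can then be chosen dynamically (a sketch; host.docker.internal is the Docker-side host from the question, localhost the XAMPP one):

// Pick the DB host depending on where the script is running.
$host = isDocker() ? "host.docker.internal" : "localhost";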
I haven't worked on Windows systems yet, but on Linux you can check the process's cgroup entries to find out whether it is executing under Docker or not.
$processes = explode(PHP_EOL, shell_exec('cat /proc/self/cgroup'));
// Check process folder path and pass here
$processes = array_filter($processes);
$is_docker = true;
foreach ($processes as $process) {
    if (strpos($process, 'docker') === false) {
        $is_docker = false;
    }
}
Then you can implement it as per your need.
if ($is_docker === true) {
    // Do something
}

Socket.IO Cluster In Kubernetes With Multiple Nodes Not Emitting To All Clients

I have a Kubernetes environment, and I am trying to publish my Socket.IO Node.js app to it for scaling.
It only emits messages to clients connected to the same server; other clients are not getting any messages at all.
Using:
socket.io#4.1.0
sticky#1.0.1
cluster-adapter#0.1.0
I was following this documentation:
Here is my clustered-socket.js:
const cluster = require("cluster");
const http = require("http");
const { Server } = require("socket.io");
const numCPUs = require("os").cpus().length;
const { setupMaster, setupWorker } = require("@socket.io/sticky");
const { createAdapter, setupPrimary } = require("@socket.io/cluster-adapter");

if (cluster.isMaster) {
  console.log(`Master ${process.pid} is running`);
  const httpServer = http.createServer();
  setupMaster(httpServer, {
    loadBalancingMethod: "least-connection",
  });
  setupPrimary();
  cluster.setupMaster({
    serialization: "advanced",
  });
  httpServer.listen(80);

  console.log(`Starting ${numCPUs} workers...`);
  for (let i = 0; i < numCPUs; i++) {
    var worker = cluster.fork();
    console.log(`Worker ${worker.process.pid} started.`);
  }
  console.log(`Started ${numCPUs} workers.`);

  cluster.on("exit", (worker) => {
    console.log(`Worker ${worker.process.pid} died`);
    cluster.fork();
  });
} else {
  console.log(`Worker ${process.pid} started`);
  const httpServer = http.createServer();
  const io = new Server(httpServer, {
    transports: ["websocket"],
    cors: {
      origin: "*",
      methods: ["GET", "POST"]
    }
  });
  io.adapter(createAdapter());
  setupWorker(io);

  io.on('connection', (socket) => {
    socket.emit("MESSAGE", "Welcome to Stream Socket.");
    socket.on('disconnect', () => { });
    socket.on('SUBSCRIBE', (msg) => {
      try {
        console.log(msg);
        var obj = JSON.parse(msg);
        socket.join(obj.requestedStream);
        if (obj.requestedStream.startsWith("OLD_MESSAGES")) {
          // This here is only emitting to clients connected to the same server.
          io.to("OLD_MESSAGES").emit("___OLD MESSAGES HERE____");
        }
      } catch (e) { console.log(e); }
    });
    socket.on('UNSUBSCRIBE', (msg) => {
      var obj = JSON.parse(msg);
      socket.leave(obj.requestedStream);
    });
  });
}
I've enabled sticky sessions on Kubernetes with the NGINX ingress:
nginx.ingress.kubernetes.io/affinity: "cookie"
nginx.ingress.kubernetes.io/affinity-mode: "persistent"
nginx.ingress.kubernetes.io/affinity-canary-behavior: "sticky"
nginx.ingress.kubernetes.io/session-cookie-name: "route"
nginx.ingress.kubernetes.io/session-cookie-hash: "sha1"
nginx.ingress.kubernetes.io/session-cookie-expires: "10800"
nginx.ingress.kubernetes.io/session-cookie-max-age: "10800"
nginx.ingress.kubernetes.io/proxy-connect-timeout: "3600"
nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
nginx.ingress.kubernetes.io/proxy-read-timeout: "1800"
nginx.ingress.kubernetes.io/send-timeout: "3600"
nginx.ingress.kubernetes.io/proxy-ssl-server-name: "on"
Here is how I am connecting to the socket:
var server = document.getElementById('address').value;
socket = io(server, {
  transports: ["websocket"],
  'reconnection': false,
});
socket.on('MESSAGE', (msg) => {
  logResponse(msg);
});
And finally my Dockerfile:
FROM node:14
EXPOSE 80
EXPOSE 443
WORKDIR /usr/src/app
COPY package*.json ./
RUN apt-get update && \
apt-get install -y software-properties-common && \
rm -rf /var/lib/apt/lists/*
RUN sed -i "/^# deb.*multiverse/ s/^# //" /etc/apt/sources.list
RUN sed -i "/^# deb.*universe/ s/^# //" /etc/apt/sources.list
RUN npm install
RUN npm ci --only=production
COPY . .
CMD ["node", "clustered-socket.js"]
Use an adapter backed by a datastore that all nodes can reach, such as the Redis, MongoDB, or Postgres adapters.
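For example, a minimal sketch of the worker using the Redis adapter (@socket.io/redis-adapter with node-redis v4) in place of the cluster adapter; the redis-service hostname is an assumption, point it at a Redis instance reachable from every pod:

// Replace io.adapter(createAdapter()) from the cluster adapter with a
// Redis-backed adapter so emits propagate across all pods, not just
// the workers of one node.
const { createClient } = require("redis");
const { createAdapter } = require("@socket.io/redis-adapter");

const pubClient = createClient({ url: "redis://redis-service:6379" });
const subClient = pubClient.duplicate();

Promise.all([pubClient.connect(), subClient.connect()]).then(() => {
  io.adapter(createAdapter(pubClient, subClient));
});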

Configuring Minio server for use with Testcontainers

My application uses Minio for S3-compatible object storage, and I'd like to use the Minio docker image in my integration tests via Testcontainers.
For some very basic tests, I run a GenericContainer using the minio/minio Docker image and no configuration except MINIO_ACCESS_KEY and MINIO_SECRET_KEY. My tests then use Minio's Java Client SDK. These work fine and behave just as expected.
But for other integration tests, I need to set up separate users in Minio. As far as I can see, users can only be added to Minio using the Admin API, for which there is no Java client, only the minio/mc Docker image (the mc CLI is not available in the minio/minio Docker image used for the server).
On the command line, I can use the Admin API like this:
$ docker run --interactive --tty --detach --entrypoint=/bin/sh --name minio_admin minio/mc
The --interactive --tty is a bit of a hack to keep the container running so I can later run commands like this one:
$ docker exec --interactive --tty minio_admin mc admin user add ...
Using Testcontainers, I try to do the same like this:
public void testAdminApi() throws Exception {
    GenericContainer mc = new GenericContainer("minio/mc")
        .withCommand("/bin/sh")
        .withCreateContainerCmdModifier(new Consumer<CreateContainerCmd>() {
            @Override
            public void accept(CreateContainerCmd cmd) {
                cmd
                    .withAttachStdin(true)
                    .withStdinOpen(true)
                    .withTty(true);
            }
        });
    mc.start();
    log.info("mc is running: {}", mc.isRunning());
    String command = "mc";
    Container.ExecResult result = mc.execInContainer(command);
    log.info("Executing command '{}' returned exit code '{}' and stdout '{}'", command, result.getExitCode(), result.getStdout());
    assertEquals(0, result.getExitCode());
}
The logs show the container being started, but executing a command against it returns exit code 126 and claims it's in a stopped state:
[minio/mc:latest] - Starting container with ID: 4f96fc7583fe62290925472c4c6b329fbeb7a55b38a3c0ad41ee797db1431841
[minio/mc:latest] - Container minio/mc:latest is starting: 4f96fc7583fe62290925472c4c6b329fbeb7a55b38a3c0ad41ee797db1431841
[minio/mc:latest] - Container minio/mc:latest started
minio.MinioAdminTests - mc is running: true
org.testcontainers.containers.ExecInContainerPattern - /kind_volhard: Running "exec" command: mc
minio.MinioAdminTests - Executing command 'mc' returned exit code '126'
and stdout 'cannot exec in a stopped state: unknown'
java.lang.AssertionError: Expected: 0, Actual: 126
After fiddling around with this for hours, I'm running out of ideas. Can anyone help?
Thanks to @glebsts and @bsideup I was able to get my integration tests to work. Here's a minimal example of how to add a user:
public class MinioIntegrationTest {

    private static final String ADMIN_ACCESS_KEY = "admin";
    private static final String ADMIN_SECRET_KEY = "12345678";
    private static final String USER_ACCESS_KEY = "bob";
    private static final String USER_SECRET_KEY = "87654321";
    private static GenericContainer minioServer;
    private static String minioServerUrl;

    @BeforeAll
    static void setUp() throws Exception {
        int port = 9000;
        minioServer = new GenericContainer("minio/minio")
            .withEnv("MINIO_ACCESS_KEY", ADMIN_ACCESS_KEY)
            .withEnv("MINIO_SECRET_KEY", ADMIN_SECRET_KEY)
            .withCommand("server /data")
            .withExposedPorts(port)
            .waitingFor(new HttpWaitStrategy()
                .forPath("/minio/health/ready")
                .forPort(port)
                .withStartupTimeout(Duration.ofSeconds(10)));
        minioServer.start();
        Integer mappedPort = minioServer.getFirstMappedPort();
        Testcontainers.exposeHostPorts(mappedPort);
        minioServerUrl = String.format("http://%s:%s", minioServer.getContainerIpAddress(), mappedPort);

        // Minio Java SDK uses s3v4 protocol by default, need to specify explicitly for mc
        String cmdTpl = "mc config host add myminio http://host.testcontainers.internal:%s %s %s --api s3v4 && "
            + "mc admin user add myminio %s %s readwrite";
        String cmd = String.format(cmdTpl, mappedPort, ADMIN_ACCESS_KEY, ADMIN_SECRET_KEY, USER_ACCESS_KEY, USER_SECRET_KEY);
        GenericContainer mcContainer = new GenericContainer<>("minio/mc")
            .withStartupCheckStrategy(new OneShotStartupCheckStrategy())
            .withCreateContainerCmdModifier(containerCommand -> containerCommand
                .withTty(true)
                .withEntrypoint("/bin/sh", "-c", cmd));
        mcContainer.start();
    }

    @Test
    public void canCreateBucketWithAdminUser() throws Exception {
        MinioClient client = new MinioClient(minioServerUrl, ADMIN_ACCESS_KEY, ADMIN_SECRET_KEY);
        client.ignoreCertCheck();
        String bucketName = "foo";
        client.makeBucket(bucketName);
        assertTrue(client.bucketExists(bucketName));
    }

    @Test
    public void canCreateBucketWithNonAdminUser() throws Exception {
        MinioClient client = new MinioClient(minioServerUrl, USER_ACCESS_KEY, USER_SECRET_KEY);
        client.ignoreCertCheck();
        String bucketName = "bar";
        client.makeBucket(bucketName);
        assertTrue(client.bucketExists(bucketName));
    }

    @AfterAll
    static void shutDown() {
        if (minioServer.isRunning()) {
            minioServer.stop();
        }
    }
}
You could run a one-off container (use OneShotStartupCheckStrategy) with mc and withCommand("your command"), connected to the same network as the Minio server you're running (see Networking).
As @bsideup suggested, you can use the one-shot strategy, as in the example below.
UPD: added a working test. Here it is important to know that when the container is launched, it executes entrypoint + command (this is Docker in general and has nothing to do with Testcontainers; source: the Testcontainers GitHub).
public class TempTest {

    @Rule
    public Network network = Network.newNetwork();

    private String runMcCommand(String cmd) throws TimeoutException {
        GenericContainer container = new GenericContainer<>("minio/mc")
            .withCommand(cmd)
            .withNetwork(network)
            .withStartupCheckStrategy(new OneShotStartupCheckStrategy())
            .withCreateContainerCmdModifier(command -> command.withTty(true));
        container.start();

        WaitingConsumer waitingConsumer = new WaitingConsumer();
        ToStringConsumer toStringConsumer = new ToStringConsumer();
        Consumer<OutputFrame> composedConsumer = toStringConsumer.andThen(waitingConsumer);
        container.followOutput(composedConsumer);
        waitingConsumer.waitUntilEnd(4, TimeUnit.SECONDS);
        return toStringConsumer.toUtf8String();
    }

    private void showCommandOutput(String cmd) throws TimeoutException {
        String res = runMcCommand(cmd);
        System.out.printf("Cmd '%s' result:\n----\n%s\n----%n", cmd, res);
    }

    @Test
    public void testAdminApi() throws Exception {
        showCommandOutput("ls");
        showCommandOutput("version");
    }
}
Another option is to use the content of the minio/mc Dockerfile, which is small, modify the executed command (a one-off mc by default), and run your own container once per test, which, compared to a one-off container, will save some time if you need to execute multiple commands:
@Rule
public Network network = Network.newNetwork();

@Rule
public GenericContainer mc = new GenericContainer(new ImageFromDockerfile()
    .withDockerfileFromBuilder(builder ->
        builder
            .from("alpine:3.7")
            .run("apk add --no-cache ca-certificates && apk add --no-cache --virtual .build-deps curl && curl https://dl.minio.io/client/mc/release/linux-amd64/mc > /usr/bin/mc && chmod +x /usr/bin/mc && apk del .build-deps")
            .cmd("/bin/sh", "-c", "while sleep 3600; do :; done")
            .build())
    )
    .withNetwork(network);

public void myTest() {
    mc.execInContainer("mc blah");
    mc.execInContainer("mc foo");
}
Basically, it runs an image with mc installed and sleeps for 1 hour, which is enough for your tests. While it runs, you can execute commands, etc. After you finish, it is killed.
Your Minio container can be on the same network.
Minio with Docker Compose:
For those who are looking for an S3 integration test with the Minio object server: the implementation below is based on docker-compose and uses the AWS S3 client for CRUD operations.
docker-compose file:
version: '3.7'
services:
  minio-service:
    image: quay.io/minio/minio
    command: minio server /data
    ports:
      - "9000:9000"
    environment:
      MINIO_ROOT_USER: minio
      MINIO_ROOT_PASSWORD: minio123
The actual IntegrationTest class:
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.S3Object;
import org.junit.jupiter.api.*;
import org.testcontainers.containers.DockerComposeContainer;

import java.io.File;

@TestInstance(TestInstance.Lifecycle.PER_CLASS)
class MinioIntegrationTest {

    private static final DockerComposeContainer minioContainer = new DockerComposeContainer<>(new File("src/test/resources/docker-compose.yml"))
        .withExposedService("minio-service", 9000);
    private static final String MINIO_ENDPOINT = "http://localhost:9000";
    private static final String ACCESS_KEY = "minio";
    private static final String SECRET_KEY = "minio123";
    private AmazonS3 s3Client;

    @BeforeAll
    void setupMinio() {
        minioContainer.start();
        initializeS3Client();
    }

    @AfterAll
    void closeMinio() {
        minioContainer.close();
    }

    private void initializeS3Client() {
        String name = Regions.US_EAST_1.getName();
        AwsClientBuilder.EndpointConfiguration endpoint = new AwsClientBuilder.EndpointConfiguration(MINIO_ENDPOINT, name);
        s3Client = AmazonS3ClientBuilder.standard()
            .withCredentials(new AWSStaticCredentialsProvider(new BasicAWSCredentials(ACCESS_KEY, SECRET_KEY)))
            .withEndpointConfiguration(endpoint)
            .withPathStyleAccessEnabled(true)
            .build();
    }

    @Test
    void shouldReturnActualContentBasedOnBucketName() throws Exception {
        String bucketName = "test-bucket";
        String key = "s3-test";
        String content = "Minio Integration test";
        s3Client.createBucket(bucketName);
        s3Client.putObject(bucketName, key, content);
        S3Object object = s3Client.getObject(bucketName, key);
        byte[] actualContent = new byte[22];
        object.getObjectContent().read(actualContent);
        Assertions.assertEquals(content, new String(actualContent));
    }
}
