While doing the first database update I get an error:
System.Net.Sockets.SocketException (10061): No connection could be made because the target machine actively refused it.
at Npgsql.NpgsqlConnector.Connect(NpgsqlTimeout timeout)
at Npgsql.NpgsqlConnector.RawOpen(NpgsqlTimeout timeout, Boolean async, CancellationToken cancellationToken)
at Npgsql.NpgsqlConnector.Open(NpgsqlTimeout timeout, Boolean async, CancellationToken cancellationToken)
at Npgsql.NpgsqlConnection.<>c__DisplayClass32_0.<<Open>g__OpenLong|0>d.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at Npgsql.NpgsqlConnection.Open()
at Npgsql.EntityFrameworkCore.PostgreSQL.Storage.Internal.NpgsqlDatabaseCreator.Exists()
at Microsoft.EntityFrameworkCore.Migrations.HistoryRepository.Exists()
at Microsoft.EntityFrameworkCore.Migrations.Internal.Migrator.Migrate(String targetMigration)
at Microsoft.EntityFrameworkCore.Design.Internal.MigrationsOperations.UpdateDatabase(String targetMigration, String contextType)
at Microsoft.EntityFrameworkCore.Design.OperationExecutor.OperationBase.Execute(Action action)
No connection could be made because the target machine actively refused it.
I'm using Docker Toolbox (for a low-end PC), and I ran the postgres container with this command:
docker run --name pg -e POSTGRES_PASSWORD=password -p 5432:5432 -d postgres
To connect EF Core to PostgreSQL I used the Npgsql NuGet package and this connection string:
"Server=127.0.0.1;Port=5432;Database=postgres;Username=postgres;Password=password"
Next, in the MyDbContext file I created:
public class MyDbContext : DbContext
{
    public MyDbContext(DbContextOptions<MyDbContext> options) : base(options) { }
}

public class MyDbContextFactory : IDesignTimeDbContextFactory<MyDbContext>
{
    public MyDbContext CreateDbContext(string[] args)
    {
        var optionsBuilder = new DbContextOptionsBuilder<MyDbContext>();
        optionsBuilder.UseNpgsql("Server=127.0.0.1;Port=5432;Database=postgres;Username=postgres;Password=password");
        return new MyDbContext(optionsBuilder.Options);
    }
}
After which I added the DbContext in the Startup class:
public void ConfigureServices(IServiceCollection services, IConfiguration config)
{
    services.AddDbContext<Models.MyDbContext>();
Can anyone help me, or suggest what I could have missed?
For your scenario, Server=127.0.0.1 will work when you start your .NET Core project from the host (note that with Docker Toolbox the daemon runs in a VirtualBox VM, so from the host you may need the docker-machine VM's IP rather than 127.0.0.1).
I assume you start your .NET Core project from Docker; if so, follow the steps below, which connect the two containers.
Change your connection string to use the postgres container name pg, like
"Server=pg;Port=5432;Database=postgres;Username=postgres;Password=password"
Specify --link=pg while running the .NET Core container, like
docker run -it -p 8585:80 --link=pg dockerpostgres
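Equivalently, since --link is a legacy Docker feature, you can put both containers on a user-defined bridge network and still reach PostgreSQL by its container name pg. A sketch (the network name pgnet and image name dockerpostgres are just examples):

docker network create pgnet
docker run --name pg --network pgnet -e POSTGRES_PASSWORD=password -d postgres
docker run -it -p 8585:80 --network pgnet dockerpostgres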
I finally found the motivation to work with Docker: I tried to deploy a basic "hello world" servlet on a Tomcat running in a Docker container.
This servlet works perfectly when I run it on the Tomcat started by IntelliJ.
But when I use it with Docker, using this Dockerfile:
FROM tomcat:latest
ADD example.war /usr/local/tomcat/webapps/
EXPOSE 8080
CMD ["/usr/local/tomcat/bin/catalina.sh", "run"]
And I build/start the image/container:
docker build -t example .
docker run -p 8090:8080 example
The index.jsp is displayed correctly at localhost:8090/example/, but I get a 404 when trying to access the servlet at localhost:8090/example/hello-servlet.
At the same time, when my non-dockerized Tomcat runs, I can access localhost:8080/example/hello-servlet and it works well.
Here is the servlet code:
package io.bananahammock.bananahammock_backend;

import java.io.*;
import javax.servlet.http.*;
import javax.servlet.annotation.*;

@WebServlet(name = "helloServlet", value = "/hello-servlet")
public class HelloServlet extends HttpServlet {
    private String message;

    public void init() {
        message = "Hello World!";
    }

    public void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException {
        response.setContentType("text/html");
        PrintWriter out = response.getWriter();
        out.println("<html><body>");
        out.println("<h1>" + message + "</h1>");
        out.println("</body></html>");
    }

    public void destroy() {
    }
}
What am I missing?
Since August 31, 2021 (this commit) the Docker image tomcat:latest uses Tomcat 10 (see the list of available tags).
As you are probably aware, software which uses the javax.* namespace does not work on Jakarta EE 9 servers such as Tomcat 10 (see e.g. this question). Therefore:
if it is a new project, migrate to the jakarta.* namespace and test everything on Tomcat 10 or higher,
if it is a legacy project, use another Docker image, e.g. the tomcat:9 tag.
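For the first route, the change is essentially a package rename from javax.servlet to jakarta.servlet. A minimal sketch of the same servlet on the jakarta.* namespace (everything else stays as in the question):

package io.bananahammock.bananahammock_backend;

import java.io.IOException;
import java.io.PrintWriter;
import jakarta.servlet.annotation.WebServlet;
import jakarta.servlet.http.HttpServlet;
import jakarta.servlet.http.HttpServletRequest;
import jakarta.servlet.http.HttpServletResponse;

@WebServlet(name = "helloServlet", value = "/hello-servlet")
public class HelloServlet extends HttpServlet {
    private String message;

    @Override
    public void init() {
        message = "Hello World!";
    }

    @Override
    public void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException {
        // identical logic; only the servlet API imports changed
        response.setContentType("text/html");
        PrintWriter out = response.getWriter();
        out.println("<html><body>");
        out.println("<h1>" + message + "</h1>");
        out.println("</body></html>");
    }
}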
The driver.close() method is not working in a Docker container with the latest image of selenium/node-firefox-debug.
selenium/node-firefox-debug: Grid Node with Firefox installed and runs a VNC server, needs to be connected to a Grid Hub
There is no problem with execution; just driver.close() is giving issues.
Below is how I initiated Firefox:
public class Docker_class_firefox {
    static RemoteWebDriver driver;

    @Test
    public void test() throws MalformedURLException, InterruptedException {
        System.out.println("Hello FireFox");
        DesiredCapabilities cmp = new DesiredCapabilities();
        cmp.setPlatform(Platform.LINUX);
        cmp.setBrowserName(BrowserType.FIREFOX);
        driver = new RemoteWebDriver(new URL("http://localhost:4444/wd/hub"), cmp);
        driver.manage().timeouts().setScriptTimeout(60, TimeUnit.SECONDS);
        driver.manage().timeouts().implicitlyWait(60, TimeUnit.SECONDS);
I had the same issue. Calling webDriverThreadLocal.get().quit() made the node available again.
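A minimal sketch of that pattern, assuming a TestNG test that keeps one RemoteWebDriver per thread in a ThreadLocal (the class and field names here are illustrative, not from the question):

import java.net.URL;
import java.util.concurrent.TimeUnit;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Test;

public class FirefoxNodeTest {
    // one driver per test thread
    private static final ThreadLocal<RemoteWebDriver> webDriverThreadLocal = new ThreadLocal<>();

    @BeforeMethod
    public void setUp() throws Exception {
        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setBrowserName("firefox");
        webDriverThreadLocal.set(new RemoteWebDriver(new URL("http://localhost:4444/wd/hub"), caps));
        webDriverThreadLocal.get().manage().timeouts().implicitlyWait(60, TimeUnit.SECONDS);
    }

    @Test
    public void test() {
        webDriverThreadLocal.get().get("https://example.org");
    }

    @AfterMethod
    public void tearDown() {
        RemoteWebDriver driver = webDriverThreadLocal.get();
        if (driver != null) {
            // quit() ends the whole remote session and frees the node; close() only closes the current window
            driver.quit();
            webDriverThreadLocal.remove();
        }
    }
}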
I have a gRPC service defined and implemented in .NET Core 3.1 using C#. I have a streaming call defined like so:
service MyService {
    rpc MyStreamingProcedure(Point) returns (stream ResponseValue);
}
In the generated service base class it looks like this:
public virtual global::System.Threading.Tasks.Task MyStreamingProcedure(global::MyService.gRPC.Point request, grpc::IServerStreamWriter<global::MyService.gRPC.ResponseValue> responseStream, grpc::ServerCallContext context)
{
    throw new grpc::RpcException(new grpc::Status(grpc::StatusCode.Unimplemented, ""));
}
In my service it is implemented by overriding this:
public override async Task MyStreamingProcedure(Point request, IServerStreamWriter<ResponseValue> responseStream, ServerCallContext context)
{
    /* magic here */
}
I have this building in a docker container, and when I run it on localhost it runs perfectly:
docker run -it -p 8001:8001 mycontainerregistry.azurecr.io/myservice.grpc:latest
Now here is the question. When I run this in an Azure Container Instance and call it from the client using the public IP address, the call fails with
Unhandled exception. Grpc.Core.RpcException: Status(StatusCode=Unimplemented, Detail="Method is unimplemented.")
at Grpc.Net.Client.Internal.HttpContentClientStreamReader`2.MoveNextCore(CancellationToken cancellationToken)
It appears that it is not seeing the override and is running the procedure in the base class. The unary call on the same gRPC service works fine using the container running in public ACI. Why would the streaming call behave differently on localhost and running over a public IP address?
I got the same error because I had not registered the service:
app.UseEndpoints(endpoints =>
{
    endpoints.MapGrpcService<MyService>();
});
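For completeness, MapGrpcService also needs the gRPC framework services registered; in a standard Grpc.AspNetCore setup that is the AddGrpc() call in ConfigureServices. A sketch, assuming the usual ASP.NET Core Startup class:

public void ConfigureServices(IServiceCollection services)
{
    // registers the gRPC infrastructure so MapGrpcService<MyService>() can resolve the implementation
    services.AddGrpc();
}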
Is there a way to connect Redis as a full-featured STOMP broker?
As per the Redis documentation, we can use Redis as a message broker. We are planning to use Redis as the message broker for our chat product.
I am trying to connect to Redis but it's failing. Is there a way to connect the Redis message broker for STOMP?
@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig implements WebSocketMessageBrokerConfigurer {

    @Override
    public void registerStompEndpoints(StompEndpointRegistry registry) {
        registry.addEndpoint("/chat");
    }

    @Override
    public void configureMessageBroker(MessageBrokerRegistry registry) {
        registry.setApplicationDestinationPrefixes("/app");
        // registry.enableSimpleBroker("/topic");
        registry.enableStompBrokerRelay("/topic").setRelayHost("localhost").setRelayPort(6379).setClientLogin("guest").setClientPasscode("guest");
    }
}
I got this exception when I tried:
io.netty.handler.codec.DecoderException: java.lang.IllegalArgumentException: No enum constant org.springframework.messaging.simp.stomp.StompCommand.-ERR unknown command CONNECT, with args beginning with:
You need a STOMP-compatible message broker, for example RabbitMQ with the STOMP plugin. Spring passes every STOMP command directly to the broker; there is no way to convert a STOMP command into a Redis Pub/Sub command.
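A minimal sketch of the relay configuration against RabbitMQ with its STOMP plugin enabled (61613 is the plugin's default STOMP port and guest/guest are RabbitMQ's default credentials; adjust for your broker):

import org.springframework.context.annotation.Configuration;
import org.springframework.messaging.simp.config.MessageBrokerRegistry;
import org.springframework.web.socket.config.annotation.EnableWebSocketMessageBroker;
import org.springframework.web.socket.config.annotation.StompEndpointRegistry;
import org.springframework.web.socket.config.annotation.WebSocketMessageBrokerConfigurer;

@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig implements WebSocketMessageBrokerConfigurer {

    @Override
    public void registerStompEndpoints(StompEndpointRegistry registry) {
        registry.addEndpoint("/chat");
    }

    @Override
    public void configureMessageBroker(MessageBrokerRegistry registry) {
        registry.setApplicationDestinationPrefixes("/app");
        // relay STOMP frames to a real STOMP broker instead of Redis
        registry.enableStompBrokerRelay("/topic")
                .setRelayHost("localhost")
                .setRelayPort(61613)
                .setClientLogin("guest")
                .setClientPasscode("guest");
    }
}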
I am trying to implement a solution that would shut down the node running inside a Docker (Swarm) container after a test run.
I looked at the docker remove command, but I cannot use docker container rm because the containers are at the service-task level.
I looked at the /lifecycle-manager API, but I cannot get to the node from the client; the Docker stack runs behind an nginx server and only one port (4444) gets exposed.
Finally I looked at extending the grid node (DefaultRemoteProxy). Excuse my bad Java code; this is my first stab at writing Java. With this, it looks like I can stop the node, but it gets re-registered to the hub.
How can I stop this re-registration process, or start the node without it?
My goal is to have a new container for every test and let the Docker orchestration bring up a new container when the node is shut down and the container gets removed (Docker API https://docs.docker.com/engine/api/v1.24/).
public class ExtendedProxy extends DefaultRemoteProxy implements TestSessionListener {

    public ExtendedProxy(RegistrationRequest request, GridRegistry registry) {
        super(request, registry);
    }

    @Override
    public void afterCommand(TestSession session, HttpServletRequest request, HttpServletResponse response) {
        RequestType type = SeleniumBasedRequest.createFromRequest(request, getRegistry()).extractRequestType();
        if (type == STOP_SESSION) {
            System.out.println("Going to Shutdown the Node");
            GridRegistry registry = getRegistry();
            registry.stop();
            registry.removeIfPresent(this);
        }
    }
}
Hub
[DefaultGridRegistry.assignRequestToProxy] - Shutting down registry.
[DefaultGridRegistry.removeIfPresent] - Cleaning up stale test sessions on the unregistered node
[DefaultGridRegistry.add] - Registered a node
Node
[ActiveSessions$1.onStop] - Removing session de04928d-7056-4b39-8137-27e9a0413024 (org.openqa.selenium.firefox.GeckoDriverService)
[SelfRegisteringRemote.registerToHub] - Registering the node to the hub: http://localhost:4444/grid/register
[SelfRegisteringRemote.registerToHub] - The node is registered to the hub and ready to use
I figured out the solution. I am answering my own question, hoping it will benefit the community.
Start the node with the -registerCycle 0 command-line flag. This stops the auto-registration thread from ever getting created.
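For example, a typical Selenium Grid 3 node launch with that flag might look like this (the jar version and hub URL are placeholders for your setup):

java -jar selenium-server-standalone-3.141.59.jar -role node -hub http://localhost:4444/grid/register -registerCycle 0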
Then, in your class that extends DefaultRemoteProxy, override afterSession:
@Override
public void afterSession(TestSession session) {
    totalSessionsCompleted++;
    GridRegistry gridRegistry = getRegistry();
    // release every slot so the hub does not try to reuse this proxy
    for (TestSlot slot : getTestSlots()) {
        gridRegistry.forceRelease(slot, SessionTerminationReason.PROXY_REREGISTRATION);
    }
    teardown();
    gridRegistry.removeIfPresent(this);
}
When the client executes the driver.quit() method, the node de-registers from the hub.