Open browser with URL after wamp server loads - wampserver

I'm looking to be able to start WampServer and, after it loads Apache and MySQL, have Chrome automatically open with a URL of the local server.
So essentially, all that a person would need to do is click on the WampServer icon and my local website loads up in a browser automatically.
Is this possible?
I was looking at the wampmanager.ini file and trying some lines under
;WAMPSTARTUPACTIONSTART
Action: run; FileName: "D:/Programs/wamp/bin/php/php5.4.12/php-win.exe";Parameters: "refresh.php";WorkingDir: "D:/Programs/wamp/scripts"; Flags: waituntilterminated
Action: resetservices
Action: readconfig;
Action: service; Service: wampapache; ServiceAction: startresume; Flags: ignoreerrors
Action: service; Service: wampmysqld; ServiceAction: startresume; Flags: ignoreerrors
;WAMPSTARTUPACTIONEND
But I get errors.

OK, that's an interesting idea.
The problem you are having is that you should not edit the wampmanager.ini file, as it is rebuilt from wampmanager.tpl each time you start WampManager or do a right-click -> Refresh on the WampManager icon (the green W icon).
So edit \wamp\wampmanager.tpl instead.
Change this section from this original state (back it up first):
[StartupAction]
;WAMPSTARTUPACTIONSTART
Action: run; FileName: "${c_phpCli}";Parameters: "refresh.php";WorkingDir: "${c_installDir}/scripts"; Flags: waituntilterminated
Action: resetservices
Action: readconfig;
Action: service; Service: wampapache; ServiceAction: startresume; Flags: ignoreerrors
Action: service; Service: wampmysqld; ServiceAction: startresume; Flags: ignoreerrors
;WAMPSTARTUPACTIONEND
to this:
[StartupAction]
;WAMPSTARTUPACTIONSTART
Action: run; FileName: "${c_phpCli}";Parameters: "refresh.php";WorkingDir: "${c_installDir}/scripts"; Flags: waituntilterminated
Action: resetservices
Action: readconfig;
Action: service; Service: wampapache; ServiceAction: startresume; Flags: ignoreerrors
Action: service; Service: wampmysqld; ServiceAction: startresume; Flags: ignoreerrors
Action: run; FileName: "${c_navigator}"; Parameters: "http://localhost/phpmyadmin/"; Flags: ignoreerrors
;WAMPSTARTUPACTIONEND
Change the Parameters: "http://localhost/phpmyadmin/"; to the URL of the site you want to auto-load. I just used phpMyAdmin as an example.
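For example, if the local site lives at http://localhost/mysite/ (a made-up path, substitute your own), the added line would read:
Action: run; FileName: "${c_navigator}"; Parameters: "http://localhost/mysite/"; Flags: ignoreerrors
After saving the .tpl, restart WampManager or right-click its icon and choose Refresh so wampmanager.ini gets regenerated from the template.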

Related

Getting index out of range error when creating metaplex metadata account

Why am I getting the following error when trying to create a metadata account using createCreateMetadataAccountV2Instruction from the @metaplex-foundation/mpl-token-metadata library?
SendTransactionError: failed to send transaction: Transaction simulation failed: Error processing Instruction 0: Program failed to complete
at Connection.sendEncodedTransaction (C:\xampp\htdocs\sol-tools\node_modules\@solana\web3.js\src\connection.ts:4464:13)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at async Connection.sendRawTransaction (C:\xampp\htdocs\sol-tools\node_modules\@solana\web3.js\src\connection.ts:4423:20)
at async Connection.sendTransaction (C:\xampp\htdocs\sol-tools\node_modules\@solana\web3.js\src\connection.ts:4411:12)
at async sendAndConfirmTransaction (C:\xampp\htdocs\sol-tools\node_modules\@solana\web3.js\src\util\send-and-confirm-transaction.ts:31:21)
at async addMetadataToToken (C:\xampp\htdocs\sol-tools\src\lib\metadata.ts:86:16)
at async Command.<anonymous> (C:\xampp\htdocs\sol-tools\src\cli.ts:48:7) {
logs: [
'Program metaqbxxUerdq28cj1RbAWkYQm3ybzjb6a8bt518x1s invoke [1]',
'Program log: Instruction: Create Metadata Accounts v2',
"Program log: panicked at 'range end index 36 out of range for slice of length 0', program/src/utils.rs:231:27",
'Program metaqbxxUerdq28cj1RbAWkYQm3ybzjb6a8bt518x1s consumed 6223 of 1400000 compute units',
'Program failed to complete: BPF program panicked',
'Program metaqbxxUerdq28cj1RbAWkYQm3ybzjb6a8bt518x1s failed: Program failed to complete'
]
}
Here's my code:
import {
createCreateMetadataAccountV2Instruction,
PROGRAM_ID,
} from '@metaplex-foundation/mpl-token-metadata'
import {
Connection,
Keypair,
PublicKey,
sendAndConfirmTransaction,
Transaction,
} from '@solana/web3.js'
export const addMetadataToToken = async (
connection: Connection,
tokenMint: PublicKey,
tokenOwner: Keypair,
name: string,
symbol: string,
arweaveLink: string
) => {
const seed1 = Buffer.from('metadata', 'utf8')
const seed2 = PROGRAM_ID.toBuffer()
const seed3 = tokenMint.toBuffer()
const [metadataPDA, _bump] = PublicKey.findProgramAddressSync(
[seed1, seed2, seed3],
PROGRAM_ID
)
const accounts = {
metadata: metadataPDA,
mint: tokenMint,
mintAuthority: tokenOwner.publicKey,
payer: tokenOwner.publicKey,
updateAuthority: tokenOwner.publicKey,
}
const dataV2 = {
name,
symbol,
uri: arweaveLink,
// we don't need these
sellerFeeBasisPoints: 0,
creators: null,
collection: null,
uses: null,
}
const args = {
createMetadataAccountArgsV2: {
data: dataV2,
isMutable: true,
},
}
const ix = createCreateMetadataAccountV2Instruction(accounts, args)
const tx = new Transaction()
tx.add(ix)
const txid = await sendAndConfirmTransaction(connection, tx, [tokenOwner])
console.log(txid)
}
Turns out I was trying to create metadata for a token on devnet, but was using a mainnet-beta RPC endpoint for the Connection class. Thus the token I was trying to create metadata for didn't exist.
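For anyone hitting the same thing, here is a minimal sketch of pointing the Connection at devnet (using clusterApiUrl from @solana/web3.js) instead of a mainnet-beta endpoint:
import { clusterApiUrl, Connection } from '@solana/web3.js'
// Talk to the devnet cluster, i.e. the network where the token mint actually exists.
const connection = new Connection(clusterApiUrl('devnet'), 'confirmed')
// This connection is then passed to addMetadataToToken(connection, tokenMint, tokenOwner, name, symbol, arweaveLink).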
This is a really common error message that occurs when there is some issue with what you are passing to the program. So make sure everything you are passing to the program is correct. In 90% of cases it gets resolved by checking the inputs carefully.

grpc client in docker can't reach server on host

I have a Node gRPC server running on localhost and my gRPC client is a Python Flask server. If the client also runs directly on localhost then everything works as intended. But once I host the client (Flask server) in a Docker container, it is unable to reach the gRPC server.
The error simply states:
RPC Target is unavailable
I can call the Flask API from the host without issues. I also changed the server address from 'localhost' to 'host.docker.internal', which is getting resolved correctly. Not sure if I am doing something wrong or this just doesn't work. I greatly appreciate any help or suggestions. Thanks!
Code snippets of the server, client and docker-compose:
server.js (Node)
...
const port = 9090;
const url = `0.0.0.0:${port}`;
// gRPC Credentials
import { readFileSync } from 'fs';
let credentials = ServerCredentials.createSsl(
readFileSync('./certs/ca.crt'),
[{
cert_chain: readFileSync('./certs/server.crt'),
private_key: readFileSync('./certs/server.key')
}],
false
)
...
const server = new Server({
"grpc.keepalive_permit_without_calls": 1,
"grpc.keepalive_time_ms": 10000,
});
...
server.bindAsync(
url,
credentials,
(err, port) => {
if (err) logger.error(err);
server.start();
}
);
grpc_call.py (status_update is called by app.py)
import os
import logging as logger
from os.path import dirname, join

import config.base_pb2 as base_pb2
import config.base_pb2_grpc as base_pb2_grpc
import grpc

# Read in SSL files
def _load_credential_from_file(filepath):
    real_path = join(dirname(dirname(__file__)), filepath)
    with open(real_path, "rb") as f:
        return f.read()

# -----------------------------------------------------------------------------
def status_update(info, status, message=""):
    SERVER_CERTIFICATE = _load_credential_from_file("config/certs/ca.crt")
    SERVER_CERTIFICATE_KEY = _load_credential_from_file("config/certs/client.key")
    ROOT_CERTIFICATE = _load_credential_from_file("config/certs/client.crt")
    credential = grpc.ssl_channel_credentials(
        root_certificates=SERVER_CERTIFICATE,
        private_key=SERVER_CERTIFICATE_KEY,
        certificate_chain=ROOT_CERTIFICATE,
    )
    # grpcAddress = "http://localhost"
    grpcAddress = "http://host.docker.internal"
    grpcFull = grpcAddress + ":9090"
    with grpc.secure_channel(grpcFull, credential) as channel:
        stub = base_pb2_grpc.ProjectStub(channel)
        request = base_pb2.ContainerId(id=int(info), status=status)
        try:
            response = stub.ContainerStatus(request)
        except grpc.RpcError as rpc_error:
            logger.error("Error #STATUS_UPDATE")
            if rpc_error.code() == grpc.StatusCode.CANCELLED:
                logger.error("RPC Request got cancelled")
            elif rpc_error.code() == grpc.StatusCode.UNAVAILABLE:
                logger.error("RPC Target is unavailable")
            else:
                logger.error(
                    f"Unknown RPC error: code={rpc_error.code()} message={rpc_error.details()}"
                )
            raise ConnectionError(rpc_error.code())
        else:
            logger.info(f"Received message: {response.message}")
    return
Docker-compose.yaml
version: "3.9"
services:
  test-flask:
    image: me/test-flask
    container_name: test-flask
    restart: "no"
    env_file: .env
    ports:
      - 0.0.0.0:8010:8010
    command: python3 -m flask run --host=0.0.0.0 --port=8010
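One environment note (an assumption, since the host OS isn't shown): on a Linux host, host.docker.internal is not defined inside containers by default. With Docker 20.10+ it can be mapped onto the host gateway in the compose service, for example:
services:
  test-flask:
    extra_hosts:
      - "host.docker.internal:host-gateway"
On Docker Desktop for Windows/macOS the name is provided automatically.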

.Net Framework Web Api SelfHost service stopped suddenly

I created a simple Web API self-host Windows service which listens on an address that is dynamically loaded from the database and normally includes a port number (like http://localhost:1900).
When I change the address (for example the port number, to something like http://localhost:1901), the service catches requests on the new port, but requests on the old port (http://localhost:1900) crash the service and it stops.
I could only debug my service and saw just a NullReferenceException, without any more info about it.
I don't even know where this error happened, and none of my logs could help me.
What do you think about this error? Have you ever seen this kind of error before?
For more info, here are some errors I can see in the Event Viewer window:
Application: {Service.exe}
Framework Version: v4.0.30319
Description: The process was terminated due to an unhandled exception.
Exception Info: System.NullReferenceException
at System.Web.Http.SelfHost.HttpSelfHostServer+d__35.MoveNext()
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(System.Threading.Tasks.Task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(System.Threading.Tasks.Task)
at System.Web.Http.SelfHost.HttpSelfHostServer+d__34.MoveNext()
at System.Runtime.CompilerServices.AsyncMethodBuilderCore+<>c.b__6_1(System.Object)
at System.Threading.QueueUserWorkItemCallback.WaitCallback_Context(System.Object)
at System.Threading.ExecutionContext.RunInternal(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object, Boolean)
at System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object, Boolean)
Faulting application name: {Service.exe}, version: 1.0.0.0, time stamp: 0xc594704b
Faulting module name: KERNELBASE.dll, version: 10.0.14393.3383, time stamp: 0x5ddcb9ff
Exception code: 0xe0434352
Fault offset: 0x000dc232
Faulting process id: 0x7370
Faulting application start time: 0x01d72886545b1d41
Faulting application path: {Service PhysicalAddress}
Faulting module path: C:\Windows\System32\KERNELBASE.dll
Report Id: 305c75f4-8c83-484a-b673-565abfc2b7d6
Faulting package full name:
Faulting package-relative application ID
For more details, here is my service class body:
class service : ServiceBase
{
HttpSelfHostConfiguration config;
HttpSelfHostServer server;
Timer _timer = new Timer();
protected override void OnStart(string[] args)
{
_timer.Interval = 2000;
_timer.Elapsed += _timer_Elapsed;
_timer.Enabled = true;
}
private void _timer_Elapsed(object sender, ElapsedEventArgs e)
{
var listenToUrl = _getDestUrlFromDB();
var configChanged = false;
if (config != null && config.BaseAddress.AbsoluteUri != listenToUrl + "/")
{
configChanged = true;
config.Dispose();
}
config = new HttpSelfHostConfiguration(listenToUrl);
config.Routes.MapHttpRoute("default",
"api/{controller}/{id}",
new { controller = "Home", id = RouteParameter.Optional });
config.ClientCredentialType = System.ServiceModel.HttpClientCredentialType.Windows;
if (server == null)
{
server = new HttpSelfHostServer(config);
var task = server.OpenAsync();
task.Wait();
}
else if (configChanged)
{
try
{
Process.Start("cmd", $#"netsh http add urlacl url={listenToUrl} ");
Process.Start("cmd", $#"delete urlacl url={listenToUrl} ");
server.Dispose();
server = new HttpSelfHostServer(config);
var task = server.OpenAsync();
task.Wait();
}
catch (Exception ex)
{
}
}
}
}

gRPC-node: When *Dockerizing* Service, request doesn't go through service's server? [Screenshots included]

I created a really simple bookstore with a Books, Customer, and a main service. This particular problem involves the main and books service.
I'm currently making a gRPC request called "createBook", which creates a book in our DB and also logs to the console.
When running the gRPC server (booksServer) without Docker, the process runs smoothly.
But as soon as I use Docker, it seems as if the gRPC request doesn't reach the gRPC server...
By "using Docker" I mean using Docker to run the booksServer. (As shown below)
Result: Without Docker
As you can see, without docker, the request is fulfilled, and everything works as it should.
Our gRPC client makes a call to the gRPC server (in which metadata is created) and the metadata is also sent back to the client.
(Scroll down to see the gRPC server file with the method called "getBooks".)
booksServer (without docker)
*** Notice the console logs in the booksServer!!! ***
Let me run the booksServer (with docker)
(Dockerfile below)
FROM node:12.14.0
WORKDIR /usr/src/app
COPY package*.json ./
COPY . /usr/src/app
RUN npm install
RUN npm install nodemon -g
EXPOSE 30043
CMD ["nodemon", "booksServer.js"
Here's my main service Dockerfile too, which initiates the request:
FROM node:12.14.0
WORKDIR /usr/src/app
COPY package*.json ./
COPY . /usr/src/app
# COPY wait-for-it.sh .
# RUN chmod +x /wait-for-it.sh
RUN npm install
EXPOSE 4555
CMD ["node", "main.js"]
^^^ Notice how when the Dockerfile is used to run booksServer,
the request doesn't go into/run inside the booksServer file.
***It does NOT produce any console.logs when I fire off a gRPC request***
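A quick way to check whether the request is reaching the container at all is to tail the container's logs while firing the request (the container name below is an assumption, use your own):
docker logs -f books-server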
This is what the booksServer.js file looks like
Heres the Books Stub
//use this for bookInitiator
const path = require('path');
const PROTO_PATH = path.join(__dirname, "../protos/books.proto");
const grpc = require("grpc");
const protoLoader = require("#grpc/proto-loader");
const packageDefinition = protoLoader.loadSync(PROTO_PATH, {
keepCase: true,
longs: String,
enums: String,
arrays: true
});
const BooksService = grpc.loadPackageDefinition(packageDefinition).BooksService;
// potential issues to fix 1) making localhost port dynamic 2) docker containerization may cause conflict
const client = new BooksService (
"172.17.0.2:30043",
grpc.credentials.createInsecure()
);
console.log("Creating stub inside booksStub");
module.exports = client;
Here's the gRPC server file (with the bound ports).
// const PROTO_PATH = "../protos/books.proto";
const path = require('path');
const PROTO_PATH = path.join(__dirname, './protos/books.proto');
const grpc = require("grpc");
const protoLoader = require("#grpc/proto-loader");
const express = require("express");
const controller = require("./booksController.js");
const app = express();
app.use(express.json());
const packageDefinition = protoLoader.loadSync(PROTO_PATH, {
keepCase: true,
longs: String,
enums: String,
arrays: true,
});
const booksProto = grpc.loadPackageDefinition(packageDefinition);
const { v4: uuidv4 } = require("uuid");
const server = new grpc.Server();
server.addService(booksProto.BooksService.service, {
CreateBook: (call, callback) => {
console.log("call to CreateBook");
//sample will take the call information from the client(stub)
const book = {
title: call.request.title,
author: call.request.author,
numberOfPages: call.request.numberOfPages,
publisher: call.request.publisher,
id: call.request.id,
};
controller.createBook(book);
let meta = new grpc.Metadata();
meta.add("response", "none");
console.log("metadata in createBook...: ", meta);
call.sendMetadata(meta);
callback(
null,
//bookmodel.create
{
title: `completed for: ${call.request.title}`,
author: `completed for: ${call.request.author}`,
numberOfPages: `completed for: ${call.request.numberOfPages}`,
publisher: `completed for: ${call.request.publisher}`,
id: `completed for: ${call.request.id}`,
}
);
},
GetBooks: (call, callback) => {
console.log("call to GetBooks");
// read from database
let meta = new grpc.Metadata();
meta.add('response', 'none');
call.sendMetadata(meta);
controller.getBooks(callback);
}
});
server.bind("0.0.0.0:30043", grpc.ServerCredentials.createInsecure());
console.log("booksServer.js running at 0.0.0.0:30043");
console.log("Inside Books Server!");
console.log("call from books server");
server.start();
horus.js (custom-made simple tracing tool):
grabTrace captures the journey of a certain request
and sends it back to the gRPC client as metadata
const fs = require("fs");
const grpc = require("grpc");
const path = require("path");
class horus {
constructor(name) {
this.serviceName = name; // represents the name of the microservices
this.startTime = null;
this.endTime = null;
this.request = {};
this.targetService = null; // represents the location to which the request was made
this.allRequests = []; // array which stores all requests
this.timeCompleted = null;
this.call;
}
static getReqId() {
// primitive value - number of millisecond since midnight January 1, 1970 UTC
// add service name/ initials to the beginning of reqId?
return new Date().valueOf();
}
// start should be invoked before the request is made
// start begins the timer and initializes the request as pending
start(targetService, call) {
this.startTime = Number(process.hrtime.bigint());
this.request[targetService] = "pending"; // {books: 'pending', responseTime: 'pending'}
this.request.responseTime = "pending";
this.targetService = targetService;
this.call = call;
this.request.requestId = horus.getReqId();
}
// end should be invoked when the request has returned
end() {
this.endTime = Number(process.hrtime.bigint());
this.request.responseTime = (
(this.endTime - this.startTime) /
1000000
).toFixed(3); //converting into ms.
this.sendResponse();
this.request.timeCompleted = this.getCurrentTime();
}
// grabTrace inserts the trace into the request
// trace represents the "journey" of the request
// trace expects metaData to be 'none when the server made no additional requests
// trace expects metaData to be the request object generated by the server otherwise
// in gRPC, the trace must be sent back as meta data. objects should be converted with JSON.parse
grabTrace(metaData) {
//console.log("incoming meta data ", metaData);
console.log("Inside Grab Trace Method.");
console.log("Metadata inside grabTrace: ", metaData);
if (metaData === "none" || metaData === undefined) this.request[this.targetService] = "none";
else {
metaData = JSON.parse(metaData);
this.request[this.targetService] = metaData;
}
this.allRequests.push(this.request);
this.sendResponse();
}
// displayRequests logs to the console all stored requests
// setTimeout builds in deliberate latency since metadata may be sent before or after a request is done processing
displayRequests() {
console.log("\n\n");
console.log("Logging all requests from : ", this.serviceName);
this.allRequests.forEach((request) => {
console.log("\n");
console.log(request);
});
console.log("\n\n");
}
// sends response via metadata if service is in the middle of a chain
sendResponse() {
if (
this.request.responseTime === "pending" ||
this.request[this.targetService] === "pending" ||
this.call === undefined
)
return;
console.log("Inside send response");
let meta = new grpc.Metadata();
meta.add("response", JSON.stringify(this.request));
console.log('meta in send response: ', meta)
this.call.sendMetadata(meta);
}
writeToFile() {
console.log("call to writeToFile");
console.log("logging request obj ", this.request);
let strRequests = "";
for (let req of this.allRequests) {
// First write to file - contains Total
// subsequent - chained requests
strRequests += `Request ID: ${req.requestId}\n`;
strRequests += `"${
Object.keys(req)[0]
}" service -> Response received in ${Object.values(req)[1]} ms (Total)\n`;
strRequests += `Timestamp: ${req.timeCompleted}\n`;
// while we don't hit an empty object on the 1st key, go inside
// add numbering in order for nested requests inside original?!
let innerObj = Object.values(req)[0];
while (innerObj !== "none") {
strRequests += `"${
Object.keys(innerObj)[0]
}" service -> Response received in ${Object.values(innerObj)[1]} ms\n`;
strRequests += `Timestamp: ${innerObj.timeCompleted}\n`;
innerObj = Object.values(innerObj)[0];
}
strRequests +=
"~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n";
}
console.log('strRequests', strRequests)
fs.writeFile(this.serviceName + 'data' + '.txt', strRequests, { flag: "a+" }, (err) => {
if (err) {
console.error(err);
}
}); //'a+' is append mode
}
}
module.exports = horus;
main.js (initiates gRPC client request)
const path = require('path');
// const grpc = require("grpc");
const customersStub = require("./stubs/customersStub.js");
const booksStub = require("./stubs/booksStub.js");
const horusTracer = require(path.join(__dirname, "./horus/horus.js"));
//In master branch
console.log("Stub is Inside main service!!!");
const book = {
title: "ITttttt",
author: "Stephen King",
numberOfPages: 666,
publisher: "Random House",
id: 200,
};
const bookId = {
id: 200
}
const customer = {
id: 123,
name: "Lily",
age: 23,
address: "Blablabla",
favBookId: 100
};
const customerId = {
id: 123
}
let ht = new horusTracer("main");
function CreateBook () {
ht.start('books')
booksStub.CreateBook(book, (error, response) => {
if (error) console.log("there was an error ", error);
ht.end();
ht.displayRequests();
ht.writeToFile();
}).on('metadata', (metadata) => {
console.log("Before grab trace is invoked!");
ht.grabTrace(metadata.get('response')[0]);
});
}
CreateBook(); //Works
What I think is the issue:
Edit: murgatroid99 mentioned that it was a networking issue with Docker!
~~~~~~~~~
I initially thought this was a networking issue, but I don't think it is,
because all my Docker containers are running on the default bridge network.
So they all technically can communicate with one another...
Is it something wrong with nodemon interacting with Docker?
Does the server not output the console logs...?
Is the server actually running and working...?
Do I need a reverse proxy like nginx?
The problem is that your server is binding to "127.0.0.1:30043". You say that you are running the docker images using the default bridge network. In that mode the docker image has a different (virtual) network than the host machine has, so its loopback address is different from the host machine's loopback address. To fix that, you can instead bind the server to 0.0.0.0:30043 or [::]:30043 to bind to other network interfaces that the client can connect to from outside of the docker container.
For the same reason, connecting the client to localhost:30043 will not work: its "localhost" address also refers to the loopback interface within the docker container. You should instead replace "localhost" with the IP address of the server container.
Alternatively, as described in this question, you can network the docker containers in "host" mode so that they share the same network with the host machine.
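As a rough sketch of that last option (the image names below are assumptions, not taken from the question), both containers can be started with host networking so that they share the host's network stack and "localhost:30043" works again:
docker run --network host books-server-image
docker run --network host main-service-image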

Call gRPC service (Go-micro) from Go client through Traefik

I'm using Go-micro, Docker and Traefik to deploy my service. I deployed the go-micro service and registered it with Traefik. This is my sum (gRPC service) status in the Traefik dashboard. When I curl it in the terminal, I get this result, which I think is the gRPC message in binary. But when I used this code:
package main
import (
"context"
"fmt"
proto "gomicro-demo/client/service"
"google.golang.org/grpc"
"google.golang.org/grpc/metadata"
"log"
)
func main() {
con, err := grpc.Dial("localhost:8080", grpc.WithInsecure())
if err != nil {
log.Fatal("Connection error: ", err)
}
md := metadata.New(map[string]string{"Host": "sum.traefik"})
ctx := metadata.NewOutgoingContext(context.Background(), md)
service := proto.NewSumClient(con)
res, err2 := service.GetSum(ctx, &proto.Request{})
if err2 == nil {
fmt.Println(res)
} else {
log.Fatal("Call error:", err2)
}
}
I got this error: rpc error: code = Unimplemented desc = Not Found: HTTP status code 404; transport: received the unexpected content-type "text/plain; charset=utf-8". I can't tell whether this error happens because of the address or the gRPC metadata (Host header). Please help me with this problem. Thank you very much!
You can expose TCP like this. Please use Traefik 2; HostSNI must be set:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: redis
spec:
  entryPoints:
    - redis
  routes:
  - match: HostSNI(`*`)
    services:
    - name: redis
      port: 6379
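Adapted to the sum service from the question, a sketch could look like this (the grpc entry point name and the port 8080 are assumptions about your setup, not values taken from the question):
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: sum
spec:
  entryPoints:
    - grpc
  routes:
  - match: HostSNI(`*`)
    services:
    - name: sum
      port: 8080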
