We inherited the old model of compute envs from previous versions, but it's clumsy and hard to configure.
Why:
We are exposing multiple envs, but there is no way of imposing limits (imagine a host with 2 CPUs exposing 10 envs with different configs; at most you will be able to run 2 jobs at the same time, due to CPU limits).
So, let's switch to the following structure:
{
  id: string
  desc: string
  arch: "x86_64"
  totalCpu: number       // total CPUs available for jobs
  maxCpu: number         // max CPUs for a single job. Imagine a K8s cluster with two nodes, each with 10 CPUs: total = 20, but at most 10 CPUs can be allocated to a single job
  totalRam: number       // total GB of RAM
  maxRam: number         // max GB of RAM allocatable for a single job
  maxDisk: number        // max GB of disk allocatable for a single job
  maxJobDuration: number // max seconds for a job
  consumerAddress: string
  storageExpiry: number
  fees: {
    "chain_X": {
      "feeToken": "0x123",
      "prices": [
        { "type": "cpu", "price": 0.01 },    // price per CPU per minute
        { "type": "memory", "price": 0.02 }, // price per GB of memory per minute
        { "type": "storage", "price": 0.01 } // price per GB of storage per minute
      ]
    },
    "chain_Y": {
      "feeToken": "0x123",
      "prices": [
        { "type": "cpu", "price": 0.01 },    // price per CPU per minute
        { "type": "memory", "price": 0.02 }, // price per GB of memory per minute
        { "type": "storage", "price": 0.01 } // price per GB of storage per minute
      ]
    }
  }
}
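For reference, the same structure expressed as a TypeScript shape (a rough sketch; names and types are assumptions derived from the fields above, not a final API):

// Rough TypeScript sketch of the proposed env structure (illustrative only)
interface ResourcePrice {
  type: 'cpu' | 'memory' | 'storage'
  price: number                   // price per unit (CPU / GB RAM / GB disk) per minute
}

interface ChainFees {
  feeToken: string                // token address used for payments on that chain
  prices: ResourcePrice[]
}

interface ComputeEnvironment {
  id: string
  desc: string
  arch: string                    // e.g. "x86_64"
  totalCpu: number                // total CPUs available for jobs
  maxCpu: number                  // max CPUs for a single job
  totalRam: number                // total GB of RAM
  maxRam: number                  // max GB of RAM for a single job
  maxDisk: number                 // max GB of disk for a single job
  maxJobDuration: number          // max seconds for a job
  consumerAddress: string
  storageExpiry: number
  fees: Record<string, ChainFees> // keyed by chain id ("chain_X", "chain_Y", ...)
}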
Any user, when starting a compute job, can request resources like:
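(The following payload is illustrative only; the field names are assumptions that reuse the limits from the structure above.)

// Hypothetical resource request sent when starting a compute job
const request = {
  cpu: 2,               // CPUs requested, must be <= maxCpu
  ram: 4,               // GB of RAM requested, must be <= maxRam
  disk: 10,             // GB of disk requested, must be <= maxDisk
  maxJobDuration: 3600  // seconds, must be <= env.maxJobDuration
}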
ocean-node will then compute the pricing by summing the price of each requested resource per minute.
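A minimal sketch of that pricing rule, assuming prices are per unit per minute, the duration is given in seconds, and the env follows the ComputeEnvironment shape sketched above (not the actual ocean-node implementation):

// Sum of (requested units * price per unit per minute * minutes), per resource type
function computeJobPrice(
  env: ComputeEnvironment,
  chainId: string,
  req: { cpu: number; ram: number; disk: number; maxJobDuration: number }
): number {
  const fees = env.fees[chainId]
  const minutes = req.maxJobDuration / 60
  let total = 0
  for (const p of fees.prices) {
    if (p.type === 'cpu') total += req.cpu * p.price * minutes
    if (p.type === 'memory') total += req.ram * p.price * minutes
    if (p.type === 'storage') total += req.disk * p.price * minutes
  }
  return total // denominated in the feeToken for that chain
}
// With the example prices and request above: (2 * 0.01 + 4 * 0.02 + 10 * 0.01) = 0.2 per minute, i.e. 12 tokens for a 60-minute job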
Configuring envs (by node owner):
The node owner will define compute engines (like a docker socket) and ocean-node will create envs (paid + free if needed). The only required data is the type of env (docker, k8s, etc.) and the fees; everything else we can detect (number of CPUs, memory, etc.). Detected values can be overridden by the node owner (example: the host has 20 CPUs, but the node owner can set totalCpu to 18, so compute will never hog the host).
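A rough sketch of what the node-owner side could look like (the config format and the detection helper are assumptions, not a decided interface):

// Node owner only supplies the engine type and fees; resources are detected and can be overridden
const ownerConfig = {
  type: 'docker',                  // docker, k8s, ...
  fees: { /* per-chain fees, as in the structure above */ },
  overrides: {
    totalCpu: 18                   // host has 20 CPUs; keep 2 free so compute never hogs the host
  }
}

// Hypothetical flow: detect host resources, then apply the owner's overrides
// const detected = await detectHostResources() // e.g. { totalCpu: 20, totalRam: 64, maxDisk: 500, ... }
// const env = { ...detected, ...ownerConfig.overrides, fees: ownerConfig.fees }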
Hey @alexcos20, can you have a second look at the structure above and maybe clean it up a bit?
I see pricePerCpu: number twice, so it's either a copy/paste duplicate or a typo and it's supposed to be something else.
Note that there is also a cpu price under the fees property; is this the same thing?
Another thing I noticed is that we also removed the chainId from the structure... this can affect some APIs, like getting the compute environments for a specific chain id: engine.getComputeEnvironments(chainId)
So how do we deal with those? If an env is no longer "attached" to a chain, do we just skip the chain id check?
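(One option, just a sketch based on the per-chain fees map and the ComputeEnvironment shape sketched earlier, not a decided approach, would be to treat an env as supporting a chain whenever it has fees configured for that chain:)

// Sketch: an env "supports" a chain if it has fees configured for that chain id
function getComputeEnvironmentsForChain(
  envs: ComputeEnvironment[],
  chainId: string
): ComputeEnvironment[] {
  return envs.filter((env) => chainId in env.fees)
}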
thanks in advance