Blobber Utilization Considerations

Further to previous posts on Service Provision, here are some more considerations for potential Blobbers.

An example Resource Utilization Threshold

[Graph: network egress over time, with an imaginary resource threshold line drawn]

This is a network (egress) graph showing an example of what typical utilization might look like for a Blobber. As you can see, there are peaks and troughs in demand. (For the purpose of this post, we will say that CPU and RAM requirements are proportional to network usage, although the relationship will not be strictly linear in practice.)

Resource Limits
As you can see, I have drawn an imaginary threshold line. If this were an actual resource limit, such as network, CPU or RAM, the Blobber would not be able to cope with demand above that threshold. In practice, to avoid maxing out resources (and, in the case of RAM, forcing swap memory to be used, which degrades performance further), the Blobber would ideally configure a limit on, for example, the number of concurrent connections so that the threshold is not exceeded.
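
As an illustration of the idea (this is not the actual blobber configuration, just the underlying technique), a cap on concurrent requests can be implemented with a counting semaphore. A minimal sketch in Go, with an invented limit of 64 in-flight requests:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

// maxConcurrent caps simultaneous requests so peak demand cannot push
// network/CPU/RAM past the threshold. 64 is an invented figure.
const maxConcurrent = 64

// sem is a counting semaphore: a buffered channel with one slot per
// allowed in-flight request.
var sem = make(chan struct{}, maxConcurrent)

// limited wraps a handler and turns away requests once the cap is
// reached, so the client retries (or fetches the chunk elsewhere).
func limited(next http.HandlerFunc) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		select {
		case sem <- struct{}{}: // slot acquired
			defer func() { <-sem }() // release when done
			next(w, r)
		default: // at capacity
			http.Error(w, "busy, please retry", http.StatusServiceUnavailable)
		}
	}
}

func serveChunk(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintln(w, "file chunk bytes would be streamed here")
}

func main() {
	http.HandleFunc("/chunk", limited(serveChunk))
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

In reality the blobber software exposes its own settings for this sort of limit; the point is simply that a hard cap turns resource exhaustion into brief, retryable delays.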

In the above example, this would result in the threshold being exceeded around 8% of the time.
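
If you want to estimate this figure for your own server, it is just the fraction of monitoring samples that sit above the threshold. A minimal sketch, with invented sample data:

```go
package main

import "fmt"

func main() {
	// Invented egress samples in Mbit/s, one per monitoring interval.
	samples := []float64{12, 35, 80, 95, 110, 40, 22, 60, 30, 55, 70, 45}
	threshold := 100.0

	over := 0
	for _, s := range samples {
		if s > threshold {
			over++
		}
	}
	fmt.Printf("above threshold %.0f%% of the time\n",
		100*float64(over)/float64(len(samples)))
}
```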

So what are the implications for this blobber and the clients?
Well, clients wishing to download file chunks from this blobber would experience a lag on downloads from that particular blobber. However, they will keep retrying, so within a second or two the blobber may well successfully serve the requested chunks. It may also be that, in the meantime, the client has received enough chunks (shards) from other blobbers to reconstruct the data, in which case it won't request them again, and in that instance the blobber will have lost the potential 'read' revenue from serving those chunks.

If you consider that in this example the imaginary threshold was only exceeded 8% of the time, and that there is a good chance the blobber would still have been able to serve the requested file chunks within a few seconds, then perhaps less than 5% of this potential revenue would be lost.
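
To make that estimate explicit: the revenue actually lost is roughly the fraction of time the blobber is saturated, multiplied by the share of throttled requests the client ends up reconstructing from other blobbers instead. Both figures below are assumptions, not measurements:

```go
package main

import "fmt"

func main() {
	// Back-of-envelope figures from the post; both are assumptions.
	saturatedFraction := 0.08 // threshold exceeded 8% of the time

	// Guess: half of throttled requests are reconstructed from other
	// blobbers before this one can serve them.
	neverServed := 0.5

	lost := saturatedFraction * neverServed
	fmt.Printf("potential read revenue lost: ~%.0f%%\n", lost*100)
}
```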

So what does this all mean?
It means that, with careful experimentation, a Service Provider should be able to configure their Blobber(s) so that they do not exceed their available resources. Provided these peaks in demand do not happen all the time, clients will be relatively unaffected, in the same way that the network as a whole is able to tolerate failure.

Apportioning Resources
Working backwards, then, potential SPs could work out their most limited resource and set up the rest of their rig in proportion to it.

This can also include the storage pricing configuration as well. Consider someone with a home broadband connection that gives high download speeds but much lower upload speeds. From a client's perspective, they would be able to upload to this blobber much faster than they could download from it. Knowing this, the blobber could set cheap upload (write) prices but make download (read) prices more expensive, deterring clients that are likely to want relatively heavy downloads, thanks to the flexibility of the 0chain model.
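
One crude way to derive such a pricing split is to price each direction inversely to the bandwidth available for it. The numbers and the pricing rule here are invented for illustration; actual read and write prices would be set in the blobber's configuration:

```go
package main

import "fmt"

func main() {
	// A hypothetical asymmetric home connection.
	downloadMbps := 200.0 // client uploads to the blobber arrive over this
	uploadMbps := 20.0    // client downloads from the blobber leave over this

	// Crude rule: price each direction inversely to the bandwidth
	// available for it, so the scarce direction costs more.
	basePrice := 0.01 // tokens per GB; the unit and value are invented
	writePrice := basePrice
	readPrice := basePrice * (downloadMbps / uploadMbps)

	fmt.Printf("write price: %.3f tokens/GB\n", writePrice)
	fmt.Printf("read price:  %.3f tokens/GB\n", readPrice)
}
```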

So now I guess the question will be: what are these ratios between RAM/CPU/Network then?
Well, there has been some discussion on this previously, but we will only get a true picture when we hit mainnet and have real-world usage scenarios.

Another Example

In this example, from the same server, I chose a longer duration per plot point, at a different scale; the intention is that this is more representative of a server offering more Blobber capacity. Although there is a long peak at the start, the general usage is more averaged out, with fewer peaks and troughs. This should be the pattern as capacity increases: the graphs should smooth out with scale.

Another way to think about it: if you have 100 allocations on a small provision, with an average of 1% being accessed at any one time, that is an average of just 1 active allocation, so an extra 5 clients accessing their allocations would be a 500% increase. If you have a larger provision with 10,000 allocations, 5 extra clients would only be a 5% difference! (The arithmetic is sketched below.)
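
That arithmetic, spelled out (the 1% access rate is just the illustrative figure used above):

```go
package main

import "fmt"

func main() {
	activeRate := 0.01  // average 1% of allocations accessed at a time
	extraClients := 5.0 // a sudden burst of 5 additional clients

	for _, allocations := range []float64{100, 10000} {
		avgActive := allocations * activeRate
		surge := extraClients / avgActive * 100
		fmt.Printf("%6.0f allocations: %4.0f active on average; "+
			"5 extra clients is a +%.0f%% surge\n",
			allocations, avgActive, surge)
	}
}
```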

Larger Capacity Blobbers
So, since this (roughly) translates to other resources, we can see that larger-capacity blobbers should be less erratic than smaller ones. There could still be instances where limits are reached, but these should become much less frequent as capacity increases.

In any case, there will always need to be some room for surges in demand. Using these graphs as an example, you might conclude that the average usage is about 20% of the peak, and factor that into how much capacity you could cope with if network bandwidth is your limiting resource.
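
As a worked example of that sizing logic (the link speed and the 20% figure are both illustrative):

```go
package main

import "fmt"

func main() {
	// Both figures are illustrative, read off the example graphs.
	linkMbps := 100.0 // total upstream bandwidth available
	avgToPeak := 0.20 // average usage as a fraction of peak

	// If the peaks must fit within the link, plan for an average
	// load of roughly linkMbps * avgToPeak and keep the rest as
	// surge headroom.
	planAvg := linkMbps * avgToPeak
	fmt.Printf("size capacity for ~%.0f Mbit/s average egress; "+
		"the remaining %.0f Mbit/s is surge headroom\n",
		planAvg, linkMbps-planAvg)
}
```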