As we've been talking with vendors who may be interested in building a Prebid Server module, the issue of local storage comes up consistently. For their browser- or SDK-based functionality, there's most often an ability to keep some local state. If there's no local state in Prebid Server, request latency and network costs are higher for everyone.
The first vendor that built a Prebid Server module requires a local Redis cluster with special stored procedure logic. PBS host companies are not going to be able to support an ecosystem of vendors that require disparate storage mechanisms.
So we've discussed in committee that expanding Prebid Cache to be able to operate as a local storage would give PBS host companies some options. At a high level, the idea is that each host company + vendor combination can choose how to handle the state needs for each module:
Configure the module to just phone home on each request. (higher latency and network costs)
Configure the module to point to the host company's Prebid Cache server with a simplified Key-Value interface. (potential to affect existing PBC functionality)
If absolutely necessary, a vendor may require a special storage mechanism. (This may limit the adoption of their module.)
Prebid Cache
The current usage patterns of Prebid Cache are:
supports several NoSQL backends (Aerospike, Cassandra, Memcache, Redis)
stores VAST and bids for a short period (5-15 minutes in general)
problems with storing or retrieving VAST/bids affect publisher monetization
In general, Prebid Cache itself is a lightly loaded tier of servers and for most host companies these servers can probably handle more traffic. However, the backend storage is often more difficult to manage operationally: sizing, failovers, fragmentation, monitoring... bit of a pain.
There are several enhancements needed for PBC to support the new usage pattern:
Support a 'text' type in addition to XML and JSON.
Support accepting the key if the request is from its local PBS rather than always generating it. There's concern about allowing arbitrary keys from anywhere on the internet.
Accept an "application" parameter that defines which backend configuration to utilize.
Allow the host company to raise the max ttlseconds per application.
So the proposal here is to give host companies 3 options:
Just use their current PBC and NoSQL backend and mix ads and module data.
Spin up a new PBC+backend specifically to store module data.
Build a new feature in PBC that enables host companies to use the same tier of PBC servers but split the backend so modules use different storage.
First, a reminder of the current PBC POST interface:
{
  "puts": [
    {
      "type": "xml",
      "key": "ArbitraryKeyValue1",
      "ttlseconds": 60,
      "value": "<tag>Your XML content goes here.</tag>"
    },
    {
      "type": "json",
      "key": "ArbitraryKeyValue2",
      "ttlseconds": 300,
      "value": [1, true, "JSON value of any type can go here."]
    }
  ]
}
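Under this proposal, a module-data PUT might look something like the sketch below. The "text" type, the module-supplied key, and the "application" parameter follow the enhancements described above, but the exact field names and the sample values (including the base64 payload) are illustrative assumptions, not a finalized spec.

{
  "puts": [
    {
      "type": "text",
      "key": "mymodule-sharedid-1234567890",
      "ttlseconds": 86400,
      "application": "module-storage",
      "value": "eyJzZWdtZW50cyI6WyIxMjMiLCI0NTYiXX0="
    }
  ]
}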
PBC/S Enhancements
Storage Service
A set of functions that modules can invoke to store and retrieve their data.
This service should be designed to someday accommodate an LRU cache mechanism, but we don't need to implement the LRU initially.
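As a minimal sketch, assuming a PBS-Java host, the module-facing surface could be as small as the interface below. The names (ModuleStorageService, store, retrieve) and signatures are hypothetical, for illustration only, not the actual PBS-Java API.

// Hypothetical sketch of a module-facing storage service in PBS-Java.
// Names and signatures are illustrative, not the actual API.
import java.util.Optional;
import java.util.concurrent.CompletableFuture;

public interface ModuleStorageService {

    // Store a text or JSON payload under a module-supplied key within an application.
    // Writing to an existing key overwrites the previous value.
    CompletableFuture<Void> store(String application, String key, String value, int ttlSeconds);

    // Retrieve a previously stored value; empty if the key is missing or expired.
    CompletableFuture<Optional<String>> retrieve(String application, String key);
}

An LRU layer could later sit in front of these calls without changing the interface.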
Text storage type
Modules could supply JSON if they want, but for payloads that aren't JSON, they should be able to supply text data, e.g. base64-encoded strings.
Accept the key from PBS
Modules will need to be able to supply a key for their data, e.g. "mymodule-sharedid-1234567890". Writing to an existing key should overwrite the existing value.
We can't allow arbitrary key overwrites from anywhere on the internet because that could open up hacking opportunities.
It's up for discussion how PBC should confirm that the request is allowed to set a key and overwrite. A couple of options:
a) IP address range configuration. PBC could recognize that any IP address matching a certain pattern (e.g. 100.100.*) is allowed to write keys
b) public/private key pair
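For option (a), a rough sketch of how PBC might check the caller's IP against configured ranges is below, using CIDR notation (e.g. 100.100.0.0/16 for the 100.100.* example) and Apache Commons Net's SubnetUtils. The class and its wiring are hypothetical; only the general allowlist idea comes from the discussion above.

// Hypothetical sketch of option (a): only callers whose IP falls inside a
// configured CIDR range may supply their own keys. Illustrative only.
import java.util.List;
import java.util.stream.Collectors;
import org.apache.commons.net.util.SubnetUtils;

public class KeyWriteAllowlist {

    private final List<SubnetUtils.SubnetInfo> allowedRanges;

    public KeyWriteAllowlist(List<String> cidrs) {
        // e.g. cidrs = List.of("100.100.0.0/16")
        this.allowedRanges = cidrs.stream()
                .map(cidr -> new SubnetUtils(cidr).getInfo())
                .collect(Collectors.toList());
    }

    // True if the remote address of the PUT request is inside any allowed range.
    public boolean isAllowed(String remoteIp) {
        return allowedRanges.stream().anyMatch(range -> range.isInRange(remoteIp));
    }
}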
Application parameter
To support the ability for host companies to bifurcate data storage, the proposal is to allow PBC to configure different "applications". The default application is the "adstorage" that exists today. Any number of new applications can be added that can define different backend NoSQL, different max TTL, etc.
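A configuration sketch of what bifurcated applications might look like is below. The key names are hypothetical, purely to illustrate per-application backend and TTL settings; they are not the actual PBC config schema.

# Hypothetical config sketch -- key names are illustrative, not the real schema.
applications:
  adstorage:                 # default application: today's VAST/bid storage
    backend: aerospike
    max-ttl-seconds: 900     # e.g. 15 minutes
  module-storage:            # new application for module data
    backend: redis
    max-ttl-seconds: 86400   # host company may allow a longer TTL here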
Done with PBS-Java 3.5, though there were some changes from the proposed spec here. A new endpoint was created rather than trying to expand the existing /cache endpoint. Details forthcoming.