Synthetix is moving towards a protocol that only allows actions to occur using an RRP (request-response protocol, also known as an on-demand oracle) datapoint, as opposed to a PSP (publish-subscribe protocol, i.e., a regular oracle read from the chain) datapoint: https://blog.synthetix.io/perps-v2-summary/. This is designed both to eliminate the majority of oracle latency arbitrage and to reduce oracle operating costs. The intuition is that if you only allow trading at the most up-to-date price, there is nothing to arbitrage and therefore no value to extract from the liquidity providers. The major oracle service providers have planned to release new oracle designs to allow for this (https://docs.pyth.network/pythnet-price-feeds/on-demand, https://blog.chain.link/low-latency-oracle-solution/), suggesting many other dapps may follow Synthetix.
While this design poses many new tradeoffs and unknown side effects, and requires significant development from applications to adopt, it has the potential to eliminate much of the negative OEV that brings no benefit to the dapp or its users. In a future where this design is successful, it could have a lot of synergy with the OEV relay: while RRP may be good at reducing negative OEV, it still forfeits value from inevitable OEV such as liquidations. For example, an app could require RRP responses to be used only for opening and closing positions (reducing negative forms of OEV such as arbitrage) and OEV auction responses to be used only for liquidations (capturing the value from the necessary forms of OEV). This protocol design could in theory have the best of both worlds: all the benefits of the Synthetix V2 design for trading, while still capturing the OEV that remains.
A possible implementation of RRP + OEV auctions using API3DAO that requires little to no change to the API3 protocol: this option offers users high price certainty before they make a trade, but it must also enforce strict timestamp checks to prevent latency arbitrage.
API3DAO already plans to develop an API to publish data that can be used in an RRP-like way, but this data must be delayed so as not to interfere with the OEV auctions, making it unsuitable for a protocol design like Synthetix V2. To work around this, the dapp can set up two data feed contracts: one referenced for certain actions such as opening/closing positions, and the other only for liquidations. The API can then publish up-to-date datapoints for one contract without disrupting the auctions for liquidations on the other.
The API would be required to always be able to provide highly up-to-date datapoints so that on-chain contracts can verify that a user's trade fulfillment is within a close timeframe of when the user made their request. The API may also have to retain datapoints for a certain period so that traders can request a datapoint from the timestamp at which they made an on-chain request. The latency arbitrage that a searcher can perform is then constrained to the time period that the dapp's contract allows between requests and on-chain fulfillment; see Using Price Feeds - Pyth for more information. Note that the liquidations contract should also be modified to require the datapoint's timestamp to be within a certain amount of time of fulfillment.
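To make the freshness constraint concrete, here is a minimal sketch (in Python, standing in for on-chain logic) of the check a dapp contract could apply to an RRP-style fulfillment. All names here (`MAX_STALENESS`, `Request`, `Datapoint`, `fulfillment_is_valid`) are hypothetical, not part of the API3 protocol:

```python
from dataclasses import dataclass

# Assumed window the dapp allows between the signed price and fulfillment;
# this bounds the latency-arbitrage opportunity to MAX_STALENESS seconds.
MAX_STALENESS = 15

@dataclass
class Request:
    timestamp: int  # block time when the user made the on-chain request

@dataclass
class Datapoint:
    value: int      # signed price, e.g. ETH/USD
    timestamp: int  # when the API signed this price

def fulfillment_is_valid(request: Request, datapoint: Datapoint, now: int) -> bool:
    # The signed price must not predate the user's request...
    if datapoint.timestamp < request.timestamp:
        return False
    # ...and fulfillment must land within the allowed staleness window.
    return now - datapoint.timestamp <= MAX_STALENESS
```

The same check would apply to the liquidations path, as noted above, so that neither feed can be fulfilled with stale data.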
I assume we don’t want OEV to be extracted from position opening/closing because it leads to a sandwich-attack sort of OEV extraction. Virtually any data feed deviation will create opportunities for such OEV, and even though the user project will receive this OEV, it will bleed a significant amount as gas costs of executing these opportunities, and this would result in OEV doing more harm than good.
I’d argue that the OEV type above is not all bad, though. Say the ETH/USD data feed is at 2000. A searcher can extract OEV by borrowing USDT against ETH, updating the data feed to 1500, and closing their position. Assuming a competitive OEV auction, the user project will have the (extremely inaccurate) data feed they depend on be corrected at the cost of the gas for the update plus a minimal searcher profit, which is a great deal. However, if this happens at every step through 1500 → 1520 → 1500 → 1520 → …, the project will keep paying for the gas costs of OEV extractions that don’t help as much.
In short, I think negative and positive OEV can be distinguished more elegantly and universally by the size of the OEV rather than by the source mechanics (position opening/closing vs. liquidations, which won’t necessarily apply to all projects anyway). The project would use a single, OEV-supporting data feed, and the minimum bid amount for the respective auction would be tuned to optimize the financial outcome for the project (this is an additional reason why the minimum bid amount must never be zero, on top of the DoS risk). This also means all mechanics that depend on this data feed enjoy the trust-minimization that the one-size-fits-all architecture provides, without having to wait for API provider-hosted auctions (I had already mentioned why multiple parties can’t run OEV auctions for the same data feed simultaneously).
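As a sketch of the "distinguish by size" idea: in a competitive auction, searchers bid up to the opportunity's value minus their gas cost, so a tuned minimum bid naturally filters out the small, oscillating extractions described above. The function and parameter names below are illustrative only:

```python
from typing import Optional

def winning_bid(oev_value: float, gas_cost: float, min_bid: float) -> Optional[float]:
    """Hypothetical model of a competitive OEV auction outcome.

    A rational searcher bids at most (oev_value - gas_cost); if that
    ceiling is below the tuned minimum bid, no update happens and no
    value leaks on a too-small opportunity.
    """
    max_rational_bid = oev_value - gas_cost
    if max_rational_bid < min_bid:
        return None  # opportunity too small: auction does not clear
    return max_rational_bid  # value returned to the project
```

Under this model, `min_bid` is the single knob that separates "positive" OEV (large enough to be worth correcting the feed for) from "negative" OEV, regardless of whether the source is trading or liquidations.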
I am not sure you are understanding how Synthetix V2 will work and the argument I’m making. In the example you provide, you reason that arbitrage OEV is not all bad because, while it leaks some value to gas costs and to compensating searchers, it brings accuracy to the data feed, and we can fine-tune the minimum bid to prevent unnecessary extraction. With Synthetix V2, they can achieve an even higher level of data feed accuracy than you would with an OEV feed by forcing users to only trade at the current off-chain price, and it achieves this accuracy without leaking ANY value to OEV through arbitrage (technically OEV can still be extracted, but this can be constrained to price movements under 10-15 seconds, which should almost always be too small to arbitrage). If their design works in practice, the data feed is more accurate than an OEV feed and they pay less for it (less OEV is extracted and likely much less gas cost overall). The only argument I see against this is that, for some reason, Synthetix V2 won’t work in practice as they believe it will.
If this Synthetix protocol design can essentially take away their LPs’ exposure to arbitrage entirely (by always providing a more accurate price), then that will be superior to merely minimizing the arbitrage value extracted like the OEV relay does. You’re right that this may not be needed for all protocols using an oracle, but it could be for derivatives, which is a major use case, and if it works there I don’t see why lending protocols wouldn’t also do this.
Current off-chain price delivery mechanisms depend on the oracle consensus plus additional points of failure, which means off-chain price delivery is not trust-minimized and must be able to fall back on oracle consensus (as ours does). What you’re describing as Synthetix V2 omits the fallback to avoid executing some extra data feed updates, which is a very costly tradeoff (and my subjective opinion is that it’s a bad one). Your proposal suffers from the same issue because it requires the off-chain price delivery mechanism to be operational for critical user interactions. If we deemed this acceptable, we would have offered data feeds that only update based on OEV, yet we don’t.
In general, depending solely on off-chain price delivery mechanisms is only acceptable once these are trust-minimized, i.e., fully hosted and maintained by the API providers themselves with no significant points of failure in addition to the API itself (as mentioned in the API3 whitepaper, Section 4.1.1). This is a very old concept that didn’t take off because the oracle problem is fundamentally an ecosystem problem, and API providers didn’t have much of an incentive to provide this service the way Coinbase did. The solution is to incentivize the API providers to be engaged in the ecosystem, provide them with a standard, and ask them to support it. The fact that this depends on ecosystem building is what forces it to be a long-term plan, and the alternatives until then are not the same thing.
The tradeoff between this approach and OEV feeds is not as simple as avoiding some extra data feed updates; it also means dapps don’t have to pay searchers any of their OEV arbitrage, as there is none. With OEV, searchers should only receive a very minimal share of the total arbitrage profits, but this may still not be insignificant to a dapp as it scales, although it can be argued that avoiding paying searchers what might only be 5-10% of total arbitrage profits is only a marginal improvement.
I thought the published data API was going to be more trust-minimized, so I agree this isn’t currently doable, but I do think it should be considered in the next iteration of data feeds when we can get to it.
I meant that in the sense that the searcher cut should go to zero in a competitive market, while the gas cost of executing the opportunities will be a flat overhead per auction, but yes, in practice some value will be lost to the searchers too.
One additional thing to note about API providers publishing signed data on their own is that they will require compensation for it. Expecting to sustainably receive a secure service without paying a proportional amount is illogical. It’s very difficult to estimate what this “proportional amount” is (let alone estimating it in a trustless way), and API providers receiving a cut of the OEV is such an elegant way of achieving this that I don’t see it going away in the absence of a better incentivization model.
Agreed re: provider monetization; that is the beauty of the dapp idea I’m proposing, which combines a Synthetix V2 design with our OEV auctions and gives the best of both worlds. Dapps just need two data feeds, both mapped to the same dAPI: one used for opening/closing positions (using the Synthetix V2 design), and one for liquidations (using OEV auctions). By separating them we can still host OEV auctions for liquidation updates within a protocol like Synthetix V2, and I believe liquidations will be more than enough to compensate the providers (if not, we slightly increase the fee for dapps that use this design). This way the dapp negates all of the unnecessary OEV (arbitrage) and retains the value from the necessary OEV (liquidations).
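The dual-feed routing can be sketched as follows. Both proxies resolve the same underlying dAPI (e.g. ETH/USD), but the dapp reads a different proxy depending on the action. All class and function names here are hypothetical stand-ins, not actual API3 contracts:

```python
from enum import Enum, auto

class Action(Enum):
    OPEN_POSITION = auto()
    CLOSE_POSITION = auto()
    LIQUIDATE = auto()

class DataFeedProxy:
    """Stand-in for an on-chain proxy resolving a dAPI name to a price."""
    def __init__(self, dapi_name: str, update_mechanism: str):
        self.dapi_name = dapi_name
        self.update_mechanism = update_mechanism  # "rrp" or "oev-auction"

class DualFeedDapp:
    """Two proxies, one underlying dAPI, two update paths."""
    def __init__(self, dapi_name: str):
        self.rrp_feed = DataFeedProxy(dapi_name, "rrp")
        self.oev_feed = DataFeedProxy(dapi_name, "oev-auction")

    def feed_for(self, action: Action) -> DataFeedProxy:
        # Liquidations capture the necessary OEV via auctions; trading
        # reads the freshest RRP price to negate arbitrage OEV.
        if action is Action.LIQUIDATE:
            return self.oev_feed
        return self.rrp_feed
```

The point of the separation is exactly what is described above: up-to-date RRP datapoints can be published for the trading feed without ever front-running the OEV auctions that fund provider compensation on the liquidation feed.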
Note that in this design the liquidation data feed can probably continue to be used as a standard proxy with Airseeker/provider updates as the fallback, so that we don’t need a trust-minimized OEV relay for this, but I haven’t thought deeply about that yet.