[SC] - Introducing dAPI Monolith

Idea

The API3 DAO has adopted the term monolith to demarcate major projects that are fundamental pieces of the greater API3 vision. They provide major signposts (or a nonlinear “roadmap”) that guide the high-level direction of the project.

I’m proposing here the first formalization of the dAPI monolith.

“Formalization” is a bit of a weird word to use here, since there aren’t any technical formalisms for agreeing on a new monolith (e.g., no governance proposal needs to pass). It’s more of a social consensus thing.

(Before going any further, I will make it clear that I am currently drafting an official proposal for the first undertaking for this monolith.)

Motivation

In addition to providing Airnode-enabled single-source APIs, the API3 DAO should also provide aggregated data feeds (“dAPIs”) created by aggregating multiple Airnode responses of a particular data type. This is necessary in order to minimize the risk imposed by the potential downtime of any given Airnode, and thereby also minimize the premium for insuring the data. This way, users have greater choice in how they manage their risks and costs.

Details

Without making this too long, here is the current list of goals & deliverables for the dAPI monolith. Once I gather enough feedback and reach some amount of social consensus, I will add more details and make a blog post that “officializes” this monolith.

  • Focus on Ethereum (and Solidity), for now
  • Create the necessary on-chain mainnet smart contract framework for constructing dAPIs (see the sketch after this list)
    • Tested and audited, of course
  • Work with the BD team to implement dAPIs for data types that make business sense
    • This also involves: selecting the appropriate single APIs and the appropriate aggregation method
  • Monitor data feeds
    • This is needed for things like testing the aggregation algorithm, monitoring gas usage, catching potential downtime, etc.
  • Pub-sub protocol
    • Either build out the protocol, or integrate with it
  • Develop a dAPI pricing strategy
  • Develop a dAPI insurance model
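To make the construction and aggregation goals concrete, here is a minimal sketch of what an on-chain dAPI could look like: several single-source Airnode feeds reduced to one answer with an on-chain median. All names here (`IAirnodeFeed`, `DapiSketch`, `latestValue`) are hypothetical illustrations, not the actual Airnode or dAPI interfaces.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Hypothetical interface for a single Airnode-backed feed; the real
// Airnode contracts may expose a different API.
interface IAirnodeFeed {
    function latestValue() external view returns (int256 value, uint256 timestamp);
}

// Minimal dAPI sketch: aggregates multiple single-source feeds into
// one value by taking the median.
contract DapiSketch {
    IAirnodeFeed[] public feeds;

    constructor(IAirnodeFeed[] memory feeds_) {
        require(feeds_.length > 0, "No feeds");
        feeds = feeds_;
    }

    // Reads every underlying feed and returns the median value.
    function read() external view returns (int256) {
        uint256 n = feeds.length;
        int256[] memory values = new int256[](n);
        for (uint256 i = 0; i < n; i++) {
            (values[i], ) = feeds[i].latestValue();
        }
        return _median(values);
    }

    // Insertion sort, then pick the middle element (upper middle for
    // even lengths); fine for the small feed counts a dAPI would use.
    function _median(int256[] memory values) private pure returns (int256) {
        for (uint256 i = 1; i < values.length; i++) {
            int256 key = values[i];
            uint256 j = i;
            while (j > 0 && values[j - 1] > key) {
                values[j] = values[j - 1];
                j--;
            }
            values[j] = key;
        }
        return values[values.length / 2];
    }
}
```

A median is only one candidate aggregation method, but it is attractive because a single bad or stale source cannot move the reported value, which ties directly into the downtime and insurance-premium concerns in the motivation above.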

Note: Today I will be posting a more detailed sentiment check (SC) for the first undertaking, which primarily covers the smart contract development. Make sure to check that out to see more detailed, shorter-term deliverables.

Thoughts/feedback/comments welcome!

11 Likes

I’m excited for this. Will be happy to help out on implementing data types based on demand. API selection will likely be a concerted effort between BD and Marketing.

2 Likes

This is an absolute must and a core underpinning of API3. Glad you have brought this up. Would love to see this make headway ASAP.

1 Like

This should be a valuable addition to API3’s repertoire. Thank you for sharing; looking forward to more details. Some thoughts that crossed my mind:

Would the construct be:

  1. A backup mechanism, i.e., data is primarily pulled from, say, API provider #1, but if the feed fails for any reason, data is pulled from API provider #2 for the duration of API #1’s downtime

    or

  2. Aggregating the data from multiple feeds and then providing a derived value

Some other thoughts

  • Who would have the ability to (a) create the dAPIs and (b) run them? While the API3 team or the data consumers could play a role, permitting a larger set/community to create or run the dAPIs may open an attack surface

  • If the objective is #2 above, it may increase the query fees, which is good for token holders, as more revenue will be generated

2 Likes

Yes, definitely. There will have to be alignment/coordination between the dAPI team, Marketing, and BD, at the least.

2 Likes

Thanks for your comments!

In order to maintain consistency of data, it will be (2).

It is yet to be decided whether dAPI creation will be DAO/sub-DAO/team based. The idea is to create a general “authorization” framework that will enable any of these. Once a dAPI/data feed is deployed, there shouldn’t be any required “running”, but monitoring makes sense in order to analyze/validate the data sources, aggregation method, etc.
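As a rough illustration of that “authorization” framework idea (hypothetical names, not a committed design), dAPI creation could be gated behind a pluggable authorizer contract, so that the DAO, a sub-DAO, or a team multisig can each be slotted in without changing the rest of the framework:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Hypothetical authorizer interface: the DAO, a sub-DAO, or a team
// multisig could each sit behind this, making the creation policy
// swappable without touching the factory.
interface IDapiAuthorizer {
    function canCreateDapi(address account) external view returns (bool);
}

contract DapiFactorySketch {
    IDapiAuthorizer public authorizer;

    event DapiCreated(address indexed creator, bytes32 dapiId);

    constructor(IDapiAuthorizer authorizer_) {
        authorizer = authorizer_;
    }

    // Anyone the current authorizer approves can register a dAPI.
    function createDapi(bytes32 dapiId) external {
        require(authorizer.canCreateDapi(msg.sender), "Not authorized");
        // ...deploy/register the dAPI itself here...
        emit DapiCreated(msg.sender, dapiId);
    }
}
```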

Re: (2), it’s not entirely a matter of what is most secure and statistically sound; it’s also a matter of what clients are accustomed to. When a client requests a data feed, they usually want an aggregated data feed. But that isn’t to say that the “backup” structure shouldn’t be an avenue of research.
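For completeness, a sketch of that “backup” structure from (1), reusing the same hypothetical feed interface as the earlier sketch: prefer the primary feed, and fall over to the secondary only when the primary looks stale.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Same hypothetical single-feed interface as in the earlier sketch.
interface IAirnodeFeed {
    function latestValue() external view returns (int256 value, uint256 timestamp);
}

library FallbackRead {
    // "Backup" read (option 1): use the primary feed unless its value
    // is older than maxAge, in which case read the secondary instead.
    function readWithFallback(
        IAirnodeFeed primary,
        IAirnodeFeed secondary,
        uint256 maxAge
    ) internal view returns (int256) {
        (int256 value, uint256 timestamp) = primary.latestValue();
        if (block.timestamp - timestamp <= maxAge) {
            return value;
        }
        (value, ) = secondary.latestValue();
        return value;
    }
}
```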

2 Likes

Hi all,

Just to add to the above, some applications of a dAPI are as follows (these are non-goals at the moment but I can see these being implemented in Undertaking #2 by this team):

  1. Pricing data from exchanges (easy)
  2. Pricing data from DEXs (unknown complexity; I have worked with Uniswap and Balancer data feeds before, but I will attest to the lack of developer support I received, and my expectation is that this is hard to maintain; still, it will be sought after, and people will pay for a quality implementation)
  3. NFT pricing data (both on chain and off chain tokens, gaming tokens, steam keys, what have you)
  4. Weather data (see HNT)
  5. Other climate data (wind speed etc.)
  6. Bridged data from other chains (AR, SC, BSC, TRX, CSPR, SOL etc. integrations)
  7. Economic data by country
  8. Fundamental data like population, etc., by country/region

will add more here as time goes on…

4 Likes

Thank you for your reply.

We’ve seen some other oracles add an intermediary layer. However, API3’s strength comes from its more direct and uncomplicated approach, so I just wanted to understand dAPIs better. It sounds great the way you have envisioned it.

All the best to you and the team.

2 Likes

100%. I would prioritise and focus in exactly that order, with 1, 2, and 3 being the most important.

1 Like