Inveniam.io

Introducing our system: Inveniam.io. At Inveniam, we see our long-term role as supporting the analytics, intelligence, and valuation work done by others, making our tools and data available in standard or self-evident formats. This is the goal and purpose of the Inveniam.io platform.

Inveniam.io Dashboard

Today our focus is institutional-quality private market transactions and further developing and proving out our system. We are evolving to the next phase of our development, in which third parties use our system as well, and ultimately to a phase in which we do no banking ourselves but the inventory of other financial professionals is priced and marketed on our system for better pricing, analysis, distribution, and reporting. Our data management objective is to curate and publish data in open standards, akin to building an HTML website in 1994 to be read and used by any search engine, whether the then-available Excite (founded 1993) or future platforms such as Yahoo (founded 1995) or Google (founded 1998).

In this, Inveniam’s role is to create a distributed private market ecosystem similar to the SEC’s EDGAR reporting system or Bloomberg’s news and alternative data feeds; each enables data-driven valuation and trading. We want millions of data creators using our tools, validated by thousands of data-set curators. Our goal is to deliver data that is true and useful, which requires significant data analysis by our team.

Currently, this data analysis is more “meta” than BI – focused on ensuring the completeness, integrity, and usefulness of the data. Thus, the current data science mandate includes:

  1. Identifying primary data sources associated with the asset (including systems “above” and “below” the asset, such as municipal and tenant systems, respectively) and any reliable secondary sources (open source flows).
  2. Identifying which of the sources in (1) can or should be made available, with the default assumption that all data may prove useful and should be collected.
  3. Enabling the capture of data: timestamping and hashing it in an off-chain system of record and recording metadata on the Ethereum blockchain (see the sketch after this list).
  4. Maintaining on-chain identifiers and validations consistent with the off-chain system of record.
  5. Supporting the analysis systems currently in use while preparing for BI/AI/ML systems yet to come.
  6. Striking a balance between the “raw truth” of saving raw data dumps and the “accessibility” of transforming or pre-preparing the data before archiving, for example by moving it out of proprietary or obsolete formats into open standards (e.g., XBRL) or into flat (.txt/.tab/.csv), time-series, or columnar formats.
  7. Recording both the data and any transformations (such as map, filter, or reduce) made as it is processed, as also shown in the sketch below.
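
To make items 3, 4, and 7 concrete, here is a minimal Python sketch of the pattern: hash and timestamp a document as it enters an off-chain system of record, log each transformation applied to it, and serialize only the resulting metadata for anchoring on the Ethereum blockchain. The names (OffChainRecord, capture, anchor_payload) are illustrative assumptions, not Inveniam’s actual API, and the on-chain submission itself is omitted.

```python
# A sketch of items 3, 4, and 7: hash and timestamp a document off-chain,
# log its transformations, and prepare a metadata payload for on-chain
# anchoring. All names here are illustrative, not Inveniam's actual API.
import hashlib
import json
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class OffChainRecord:
    """One entry in the off-chain system of record."""
    document_name: str
    sha256: str        # content hash of the raw bytes
    timestamp: str     # ISO-8601 capture time (UTC)
    transformations: list = field(default_factory=list)  # provenance log


def capture(document_name: str, raw_bytes: bytes) -> OffChainRecord:
    """Hash and timestamp a document as it enters the system of record."""
    return OffChainRecord(
        document_name=document_name,
        sha256=hashlib.sha256(raw_bytes).hexdigest(),
        timestamp=datetime.now(timezone.utc).isoformat(),
    )


def record_transformation(record: OffChainRecord, step: str, output: bytes) -> None:
    """Log a transformation (map, filter, reduce, ...) plus the hash of its
    output, so the full processing history stays auditable."""
    record.transformations.append(
        {"step": step, "output_sha256": hashlib.sha256(output).hexdigest()}
    )


def anchor_payload(record: OffChainRecord) -> bytes:
    """Serialize the metadata that would be written on-chain:
    hashes and timestamps only, never the underlying data."""
    meta = {
        "sha256": record.sha256,
        "timestamp": record.timestamp,
        "transformations": record.transformations,
    }
    return json.dumps(meta, sort_keys=True).encode()


raw = b"tenant,unit,rent\nAcme Co,101,12000\nVacant,102,0\n"
rec = capture("rent_roll_2019Q3.csv", raw)
record_transformation(rec, "filter: drop vacant units",
                      b"tenant,unit,rent\nAcme Co,101,12000\n")
print(anchor_payload(rec))
```

Only hashes and timestamps ever leave the off-chain store; the raw data stays in the system of record, while the on-chain entry lets any later reader verify that neither the data nor its processing history has changed.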

We anticipate that our investors and secondary market participants will perform BI/AI/ML analytics on the asset-based data we collect and distribute. We do not yet have an opinion on which BI, AI, or analysis platform will be most desirable to interface with, but we are quite sure we know what true, usable private-asset data will need to look like just before it is “sucked into” whatever application proves to be the tool of choice.
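
As a sketch of that shape, with invented column names and values, here is a flat, open-format feed that any BI/AI/ML tool could ingest directly, with each row traceable back to its hashed source document:

```python
# A sketch of "analysis-ready" private-asset data: flat, timestamped,
# open-format rows that need no proprietary reader. Column names and
# values are invented for illustration.
import csv
import io

flat_feed = io.StringIO(
    "asset_id,as_of,metric,value,source_sha256\n"
    "bldg-001,2019-09-30,net_operating_income,415000,9f2c1a...\n"
    "bldg-001,2019-09-30,occupancy_rate,0.94,9f2c1a...\n"
)

for row in csv.DictReader(flat_feed):
    # Each row is self-describing and traceable to its hashed source.
    print(f'{row["metric"]} = {row["value"]} (as of {row["as_of"]})')
```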

We are certain that to inform valuation, the data must be saved, validated, and reported. We believe it is critical to connect the digital twin of the asset to its digital ownership, just as we believe the power of our system lies in capturing, managing, and marketing the data.