Over 2 billion people around the world watch online videos every month, and the medium keeps growing exponentially. Manipulating video on the web became easy with HTML5 and the <video> tag (along with higher-level abstractions like Video.js). Streaming, however, remains complex.
"We once heard a large (~20%) share of Amazon’s transcoding services revenue came from Google itself, which needed to outsource the processing for some of the monstrous amount of content YouTube got. The anecdote may be false, but the point remains: it’s just impractical to compete with the big oligopolies in economies of scale, when it comes down to streaming."
Value capture is a topic that always seemed a bit overlooked in business modelling. Traditionally, as Peter Thiel frequently points out, there’s little correlation between value creation and value capture (e.g. one may generate tons of revenue without really profiting from it). Some industries have even established dynamics that clearly separate value creation from capture: think of the film industry, with production companies doing all the creative and operation work on one side, while, on the other hand, distributors and exhibitors take 80–90% of the share of profit, at the end of a movie’s life cycle.
In the context of video, specifically, we need reliable decentralised transcoding (Livepeer). We need incentivised bandwidth allocation (Filecoin, Swarm). We need decentralised querying (The Graph). We need permissioned storage (Lightstreams, Unlock). We need token abstraction (0x). We need app-to-app, or even vertical-specific relayers (Amadeus). We need global identity standards, and we need scalability.
We can already say we have peer-to-peer storage and delivery (IPFS), plus a bare identity backbone and a decently automatable payments infrastructure (Ethereum). If that all sounds too otherworldly, bear with us: it means there’s a clear path to decentralisation in the field, although self-governing media applications are not around the corner yet.
Today, we’re still at ‘newspaper on the iPad’ stage. Clunky UX, abstract concepts, a constant struggle to bridge scientific research with technique and implementation (the usage rate for the first TCR live on Ethereum is close to negligible, so far).
Teams will get this wrong a thousand times. Then, when the “Gangnam Style of crypto-video platforms” happens, puts into practice a redistributive model that works, and enriches those who spotted the content early on, many will wonder: “why didn’t I think of this earlier!?”.
Well, few believed the internet would ever handle video, in the beginning, and here we are, a couple decades later, discussing its intersection with crypto-assets.
Industry-wide shifts are usually unclear until they do come to fruition.
I believe that the trend towards self-sovereignty, privacy and “user-generated work” means institutional investors are losing the power to dictate the direction of content monetisation as a whole. The monopolies in the space have already been making it unattractive to VCs. The social consequences of “surveillance-by-default” are becoming clearer to all. At the end of the day, “redistributive” models basically recognise that consumers who are sovereign over their own money, attention and data are, in practice, investors themselves.
We don’t need to play investor storytime with them. We need them to actually play the investor in the content platforms we build. If that still sounds far off, make an effort to believe: with a little imagination and courage, it becomes conceivable that ads or subscriptions are not all that’s left for content makers.
Researching & developing video applications on top of public web3 infrastructure.
Imagine a library any developer can use to feed in a video stream and get back a playable URL, while the video gets ingested, stored, transcoded and distributed behind the scenes, all through non-centralised means. With it, one can easily build out-of-the-box decentralisable video-powered web applications. Paratii.JS has early functionality for handling tokens too, meaning one will soon be able to use it to set monetisation models for videos, collect earnings, participate in curation, and more.
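To make the “stream in, playable URL out” idea concrete, here is a minimal sketch of such a pipeline. All names here (publishVideo, ingest, store, transcode, playableUrl, the stub CID) are hypothetical illustrations, not Paratii.JS’s actual API:

```javascript
// Hypothetical pipeline sketch: ingest -> store -> transcode -> playable URL.
// Every identifier below is an illustrative stand-in, NOT a real library call.
function publishVideo(stream, pipeline) {
  const ingested = pipeline.ingest(stream);      // chunk + hash the raw stream
  const stored = pipeline.store(ingested);       // e.g. pin chunks to p2p storage
  const renditions = pipeline.transcode(stored); // decentralised transcoding step
  return pipeline.playableUrl(renditions);       // URL a standard player can consume
}

// A local stub so the sketch runs end-to-end without any network:
const stubPipeline = {
  ingest: (s) => ({ source: s }),
  store: (x) => ({ ...x, cid: 'QmStubHash' }),   // placeholder content identifier
  transcode: (x) => ({ ...x, renditions: ['720p', '480p'] }),
  playableUrl: (x) => 'ipfs://' + x.cid + '/master.m3u8',
};

console.log(publishVideo('raw-bytes', stubPipeline));
```

The point of the shape is that each stage is swappable: the same four-step interface could be backed by IPFS for storage or Livepeer for transcoding without the calling application changing.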
Extrapolating from the definition of a “redistributive” monetisation model, one realises the potential of self-sustainable content communities as a business model. Despite emerging alternatives for “having a stake in content”, and for more “distributive” value exchanges, we are still far from having autonomous media apps (a.k.a. the decentralised YouTube).
Under a generic framework for distributed curation, TCR tokens confer on holders the right to adjudicate over the contents of a registry. Such a registry usually requires a minimum stake for new listings to come through, puts in place propose-challenge mechanisms that let any token holder curate listed items, and redistributes stakes among the token holders who end up aligned with the majority of their peers whenever a challenge occurs or a stake is forfeited.
- Example: Relevant, ...
- Value accretion: if (1) a list accurately represents its focal point, and (2) this focal point is of interest to certain audiences, TCR tokens will accrue value proportional to the value that listees earn by being on the list.
In the context of video, if content consumers desire high-quality curation, and content producers want to be included in well-curated lists, “a market can exist in which the incentives of rational, self-interested token holders are aligned towards curating a list of high quality”.
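The propose-challenge dynamics above can be condensed into a toy model. This is an illustrative sketch of the generic TCR mechanism, not the contract logic of any live registry; the parameters (minDeposit, dispensationPct) and names are assumptions:

```javascript
// Toy token-curated registry: apply with a stake, challenge a listing,
// redistribute the loser's stake to the winning side.
class ToyTCR {
  constructor(minDeposit = 100, dispensationPct = 0.5) {
    this.minDeposit = minDeposit;           // stake required to apply
    this.dispensationPct = dispensationPct; // winner's direct share of the forfeited stake
    this.listings = new Map();              // item -> { owner, stake }
  }

  // Propose: a producer stakes tokens to list an item.
  apply(item, owner, stake) {
    if (stake < this.minDeposit) throw new Error('stake below minimum');
    this.listings.set(item, { owner, stake });
  }

  // Challenge: a token holder disputes a listing; `challengeWins` stands in
  // for the outcome of a token-holder vote. The losing side's stake is split
  // between the winning party and a pool for voters aligned with the majority.
  challenge(item, challenger, challengeStake, challengeWins) {
    const listing = this.listings.get(item);
    if (!listing) throw new Error('no such listing');
    if (challengeWins) {
      this.listings.delete(item); // bad listing removed, its stake forfeited
      const reward = listing.stake * this.dispensationPct;
      return { to: challenger, reward, voterPool: listing.stake - reward };
    }
    const reward = challengeStake * this.dispensationPct;
    return { to: listing.owner, reward, voterPool: challengeStake - reward };
  }
}

const tcr = new ToyTCR();
tcr.apply('cats-compilation.mp4', 'alice', 100);
const outcome = tcr.challenge('cats-compilation.mp4', 'bob', 100, true);
// successful challenge: the listing is gone, bob and the majority voters split alice's stake
```

Real deployments add vote commit/reveal phases and application periods, but the economic core is exactly this stake-redistribution loop.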
The paradigm shift begins with understanding that it’s no longer a single party capitalising on anything, but many. To illustrate with an example, let’s take recommendations, something that sits at the heart of modern video platforms and is constantly being optimised.
But optimised for whom? Ad-dependent business models shape their engines to present users with promotional content highly likely to be watched; platforms seeking engagement maximise exposure to addictive, sugary videos. Whatever the drive, incentives are generally skewed in favour of businesses and against end users.
Now imagine a different scenario, in which content is distributed in a p2p fashion and traffic data is not privately held. Instead, user behaviour is tracked client-side, properly anonymised, and then broadcast to peers, eventually being registered in a public ledger where the data will live forever, untampered and available for anyone to explore or build predictive models upon. Now why would you build a predictive model upon pieces of “anonymous navigation history”? Because if data is public, so should recommendations be. In this hypothetical paradigm, we could have a market of decentralised engines analysing traffic patterns, applying their own prediction strategies and updating an index that measures quality of outcomes, such that any user can choose which recommender to subscribe to and poll suggestions from. The issue of filter bubbles is tackled at the same time that a new revenue flow is opened up, by replacing the work of monolithic recommendation engines with that of competing algorithms in an open market.
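A toy version of that recommender market might look like the following. Everything here is a hypothetical illustration of the idea (the events, the two engines, and the quality metric are all invented for the sketch), not any deployed protocol:

```javascript
// Public, anonymised watch events — what the shared ledger would hold:
const publicEvents = [
  { viewer: 'anon-1', watched: ['a', 'b'] },
  { viewer: 'anon-2', watched: ['b', 'c'] },
  { viewer: 'anon-3', watched: ['b'] },
];

// Two competing engines, each free to apply its own prediction strategy:
const recommenders = {
  // rank by total watch count across all public histories
  mostWatched: (events) => {
    const counts = {};
    for (const e of events) {
      for (const v of e.watched) counts[v] = (counts[v] || 0) + 1;
    }
    return Object.keys(counts).sort((x, y) => counts[y] - counts[x]);
  },
  // naive strategy: surface the most recently seen items first
  newestFirst: (events) => [...new Set(events.flatMap((e) => e.watched))].reverse(),
};

// A public quality index anyone can recompute: the fraction of viewers
// whose history actually contains the engine's top suggestion.
function qualityIndex(recommend, events) {
  const top = recommend(events)[0];
  const hits = events.filter((e) => e.watched.includes(top)).length;
  return hits / events.length;
}

// Any user can poll the index and subscribe to the best-scoring engine:
const scores = Object.entries(recommenders)
  .map(([name, fn]) => [name, qualityIndex(fn, publicEvents)])
  .sort((a, b) => b[1] - a[1]);
const chosen = scores[0][0];
```

Because both the events and the scoring rule are public, no single engine can entrench itself: a better strategy simply climbs the index and wins subscribers.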
In this case, there’s still a handful of practical constraints, but, a decade ago, the model could hardly be conceived (in fact, a decade ago we were discussing ads on amateur videos). Along the same lines, the longtime debate comparing “advertising-supported” vs. “subscription-based” vs. “pay-per-view” monetisation models has become anachronistic in an age of customisable smart contracts. The discussion has to be broadened to encompass the wide range of alternative methods being tried out. However, development still has a long road ahead before we can have fully-decentralised video-sharing applications operating at global scale.
While the code works, it’s still unusable on the Ethereum mainnet. Primordial scalability issues (when it comes to public chains) are intentionally unaddressed, and no external security audit has been conducted. There is a lot of work ahead until this becomes a viable system.
The public nature of blockchains makes privacy a sensitive issue. How can media buyers (advertisers, say) know who they are targeting without everyone else in the world learning users’ private data? Even if we don’t want ads, how can we build content engines aware of the users they’re dealing with, without leaving those users overexposed? A useful reference here is Brave’s use of Anonize zero-knowledge proofs to convey ad attributions without leaking private information. Another heavily discussed issue concerns the ability of networks to provide sufficient incentives for the audience itself to moderate abusive or inappropriate content. If the internet brought self-publishing to global scale, will web3 make self-tokenisation the next big trend in digital media distribution?
Credit: Felipe Gaúcho Pereira
Google is teaming up with Theta Labs in a move aimed to help the video delivery network onboard users through Google Cloud. As part of the partnership, the tech giant is assisting Theta with its Mainnet 2.0 launch, said Theta Labs CEO Mitch Liu. Google will become the protocol’s fifth external validator node, staking 5 million THETA tokens (worth about $2.4 million at a press-time price of $0.48 apiece) on the network.
Theta rewards network participants for relaying video content to other users using their spare bandwidth and computing resources. The end result should be a “massive decentralized mesh network of relayers,” Liu said.
Eventually, Theta aims to have 31 external enterprise validators. Google Cloud is also becoming Theta’s preferred cloud provider for users with today’s announcement. Google will be Theta’s first European enterprise validator since it will be hosting the node at its office in Ireland, further geographically decentralizing the network. In February, Hedera Hashgraph announced that Google Cloud would run a node on the blockchain-like network and make hashgraph analytics available for users. In addition, hundreds of Guardian nodes (available to the public) will act as an extra layer of consensus with the Mainnet 2.0 launch by finalizing blocks and checking for bad-actor validator nodes. Liu said he could see Google helping Theta scale the number of Guardian nodes on the network to the thousands or tens of thousands.
Theta also plans to further collaborate with Google’s artificial intelligence, machine-learning and big-data initiatives. Google also owns YouTube, a key target for Theta’s partnership aspirations.
“YouTube is particularly interesting because they utilize mostly internally-developed technology for video delivery and streaming, which makes experimentation a lot easier without having to rely on external platforms like Akamai or AWS,” Liu said.