Distributing Workflows through the Clouds

Posted on December 15, 2020
The need for interoperable platform architectures to make work flow.

What’s in a cloud?

For simplicity, the technology world tends to describe “the cloud” as if it were a single amorphous mass, which of course it is not. The cloud really consists of interconnected private data centers, co-location providers, and hyperscale cloud service providers (which are further subdivided into distinct regions and specific facilities often called availability zones). In the 2030 Vision paper, we relied on the convention of a singular cloud to help define a principle that “ALL ASSETS ARE CREATED OR INGESTED STRAIGHT INTO THE CLOUD AND DO NOT NEED TO BE MOVED.” In this post I will discard the convention of a monolithic cloud and discuss different parts of the cloud infrastructure and the need to interconnect those parts to make media production work and flow (i.e., to make a workflow).

In describing workflows spanning cloud infrastructures, we should first recognize that managing a mix of different cloud infrastructures is analogous to managing the mix of different infrastructures that make up traditional media workflows. Our current infrastructure for creative workflows is distributed across different studio and network facilities, post houses, VFX houses, sound facilities, dubbing services and so on. When a vendor requires a set of files in a production, the simple (but inefficient and slow) solution is to copy the files, transfer them to the vendor’s infrastructure, and, when the work is done, return any new files, including any updated versions of the original set, back to the studio or production. In the 2030 Vision this inefficiency is eliminated by maintaining one copy of the file in cloud infrastructure and inviting vendors and applications to come to the file to do the work. This element of the MovieLabs vision introduces efficiencies, but also a new type of infrastructure complexity that we explore below.

Also important to note, the MovieLabs definition of cloud includes public cloud service providers (which we dub “hyperscale cloud services”) and “… any internet-accessible storage platform that can be used as a common space for collaboration and exchange of data.” So while other industries discuss multi-cloud in the context of an application that can be deployed in one or more public clouds, our definition of multi-cloud refers more to a production and its ability to deploy across multiple internet-connected infrastructures. It’s an important distinction because a production can comprise many creative applications and higher-level production management systems (such as MAMs, schedulers, workflow orchestration or automation tools) that enable a holistic view of the assets, tasks, and participants in a production – all of which may be scattered across different cloud infrastructures.

Enabling choice and flexibility

Now that we have established that we’re already living in a world of multiple infrastructures, we can address what happens when we move to the clouds and need to maintain that flexibility. Different creative vendors choose different cloud providers or infrastructure to host services for their specialist tasks, and we need to embrace them all to expand a market of choice and competition in cloud services and applications. As an example, in one use case for cloud interoperability, assets for Production 1 are stored in Cloud A, and assets for Production 2 are stored in Cloud B, requiring the studio behind both productions to interface with both Cloud A and Cloud B to manage its slate. In another, more likely scenario, files for both productions may be scattered intentionally across Clouds A, B and perhaps even C – based on the choices of vendors who create the files and rely on different cloud infrastructures. For example, an audio file may be created at a sound mixing facility and reside on a private cloud, while a VFX vendor on the same production may create and manage 3D assets on Cloud B, which is used for rendering. The complex use of multiple infrastructures does not break the 2030 Vision, but in fact highlights the flexibility of the model. However, it does make asset management somewhat harder and requires a certain level of cloud-agnostic abstraction between the infrastructure and the compute/application layers.

The challenge is to enable interoperability such that assets can be placed in any infrastructure and applications can freely discover and operate upon those assets from any other infrastructure. That type of interoperability would replicate the flexibility of today but with the almost infinite computing power and flexibility of the cloud and without the need to duplicate files. Any participant in the ecosystem could then contribute (with security and permissions) to an active production using their preferred infrastructure with their preferred business model, relying on a hyperscale cloud service provider (OpEx), a private cloud data center (CapEx), or any combination of both.

In fact, we are already beginning to see cloud migration occurring in our industry based on exactly this model. Today many production companies and vendors have existing investments in on-prem infrastructure. Their goal is to sweat those assets while simultaneously preparing for a time when that equipment is retired. When that time arrives, the work should switch seamlessly to the cloud, provided the right pieces are in place by then. If we design an interoperable architecture now that accommodates existing on-prem equipment and interfaces seamlessly with cloud resources, then we can accelerate cloud migration by making it simpler and more cost effective for all to make that switch.

Enhancing efficiencies and reducing barriers to viable cloud adoption speeds implementation of the 2030 Vision. It can be challenging to move one software application (whether licensed or developed in-house) to one cloud infrastructure, let alone multiple clouds. In the interoperable scenario above, potentially all software may need to discover and access files securely on any cloud platform to enable the benefits of multiple cloud infrastructures. Enabling that scenario is complex, but there is a real opportunity if the industry does that work together instead of alone or piecemeal.

At MovieLabs we’re looking to assist by working with application and infrastructure providers to develop cloud-agnostic standardized interfaces that lower the development effort for all ecosystem participants – from the large studios and public cloud providers to application developers to small production houses with limited IT resources. Enabling all types of productions to configure software-defined workflows that easily span multiple clouds is a key benefit of the 2030 Vision.

Where we go next …

MovieLabs is devoting significant attention to the challenges of using multiple cloud infrastructures in production workflows. We expect to have more to say on that in future blogs, but the crux of the solution comes down to the power of the network and its ability to interconnect all the parts of the infrastructure seamlessly while embracing variation across clouds and technologies. We have to enable productions and studios to benefit from the innovation and flexibility of the cloud, while obscuring the infrastructure complexity from the creative team, allowing them to focus on their art. It’s not an easy task, but the opportunities are considerable if we get it right. It takes an industry, and we look forward to working together with all of you to achieve that goal.

#MovieLabs2030, #ML2030Cloud

