Are we there yet? Part 3: Gap Analysis for the 2030 Vision (MovieLabs, January 9, 2024)


In this final part of our blog series on the current gaps between where we are now and realizing the 2030 Vision, we'll address the last two sections of the original whitepaper and look specifically at gaps around Security and Identity and Software-Defined Workflows. As with the previous blogs in this series (see Parts 1 and 2), for each gap we'll include the gap as we see it, an example of how it applies in a real workflow, and its broader implications.

So let’s get started with…

MovieLabs 2030 Vision Principle 6
  1. Inconsistent and inefficient management of identity and access policies across the industry and between organizations.

    Example: A producer wants to invite two studio executives, a director, and an editor into a production cloud service, but the team has three different identity management systems. There's no common way to identify the correct people to give access to critical files, or to provision that access.

    This is an issue addressed in the original 2030 Vision, which called for a common industry-wide Production User ID (or PUID) to identify individuals working on a production. While there are ways today to stitch together different identity management and access control solutions between organizations, they are point-to-point, require considerable software or configuration expertise, and are not "plug and play."
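As an illustration of what a PUID could enable, here is a minimal sketch (all class, field, and identifier names are hypothetical, invented for this example – MovieLabs defines the concept, not this API): one industry-wide ID that each organization's identity system can resolve to its own local account.

```python
from dataclasses import dataclass, field

@dataclass
class PuidRecord:
    """Hypothetical record linking one industry-wide Production User ID
    to the local accounts each organization knows the person by."""
    puid: str                      # e.g. "puid:prod-1234:editor-01"
    display_name: str
    local_accounts: dict = field(default_factory=dict)  # org -> local user id

class PuidDirectory:
    """Toy resolver: maps a PUID to the account a given organization's
    identity system uses, so access can be provisioned without
    point-to-point stitching of three different identity providers."""
    def __init__(self):
        self._records = {}

    def register(self, record: PuidRecord):
        self._records[record.puid] = record

    def resolve(self, puid: str, org: str):
        record = self._records.get(puid)
        if record is None:
            raise KeyError(f"unknown PUID: {puid}")
        return record.local_accounts.get(org)

directory = PuidDirectory()
directory.register(PuidRecord(
    puid="puid:prod-1234:editor-01",
    display_name="Jane Doe",
    local_accounts={"studio-idp": "jdoe", "cloud-svc": "jane.doe@vendor.example"},
))
```

With a shared directory like this, the producer in the example above could grant access by PUID once, and each of the three identity systems could translate that grant into its own terms.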

MovieLabs 2030 Vision Principle 7
  1. Difficulty in securing shared multi-cloud workflows and infrastructure.

    Example: A production includes assets spread across a dozen different cloud infrastructures, each under the control of a different organization, yet all needing a consistent, studio-approved level of security.

    MovieLabs believes the current "perimeter" security model is not sufficient to cope with the complex multi-organizational, multi-infrastructure systems that will be commonplace in the 2030 Vision. Instead, we believe the industry needs to pivot to a more modern "zero-trust" approach to security, in which the stance changes from "try to keep intruders out" to authenticating and authorizing every access to an asset or service. To that end, we've developed the Common Security Architecture for Production (CSAP), which is built on a zero-trust foundation; take a look at this blog to learn more.
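The zero-trust stance can be sketched in a few lines (illustrative only – CSAP specifies a full architecture, and none of these names come from it): every access request is explicitly checked against policy, with no implicit trust granted by network location.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AccessRequest:
    subject: str    # who is asking (e.g. a production user ID)
    asset: str      # what they want
    action: str     # "read", "write", ...

def authorize(request, policies):
    """Zero-trust sketch: no request is trusted by default. Every access
    is checked against explicit policy, regardless of which network or
    infrastructure it originates from."""
    allowed_actions = policies.get((request.subject, request.asset), set())
    return request.action in allowed_actions

# Explicit grants; anything not listed is denied.
policies = {
    ("puid:prod-1234:editor-01", "asset:ocf-0001"): {"read"},
}

allowed = authorize(AccessRequest("puid:prod-1234:editor-01", "asset:ocf-0001", "read"), policies)
denied = authorize(AccessRequest("puid:prod-1234:editor-01", "asset:ocf-0001", "write"), policies)
```

The point of the sketch is the default: absent an explicit grant, the answer is no, which is the inversion of the perimeter model.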

MovieLabs 2030 Vision Principle 8
  1. Reliance on file paths/locations instead of identifiers.

    Example: A vendor requires a number of assets to do their work (e.g., a list of VFX plates to pull or a list of clips) that today tend to be copied as a file tree structure or zipped together to be shared along with a manifest of the files.

    In a world where multiple applications, users, and organizations can simultaneously be pulling on assets, it becomes challenging for applications to rely on file names, locations, and hierarchies. MovieLabs instead recommends unique identifiers for all assets, which can be resolved via a service that specifies where a given file is actually stored. This intermediate step provides an abstraction layer and allows all applications to find and access all assets. For more information, see Through the Looking Glass.
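A minimal sketch of such a resolution service might look like this (the class name and identifier scheme are invented for illustration):

```python
class AssetResolver:
    """Sketch of an identifier-resolution service: applications hold a
    stable asset ID, and the resolver maps it to wherever the bits
    currently live."""
    def __init__(self):
        self._locations = {}  # asset id -> list of storage URLs

    def publish(self, asset_id, url):
        self._locations.setdefault(asset_id, []).append(url)

    def resolve(self, asset_id):
        locations = self._locations.get(asset_id)
        if not locations:
            raise KeyError(f"no known location for {asset_id}")
        return locations[0]  # a real service might pick the nearest replica

resolver = AssetResolver()
resolver.publish("asset:vfx-plate-042", "s3://prod-bucket/plates/042.exr")
resolver.publish("asset:vfx-plate-042", "gs://archive-bucket/plates/042.exr")

# Applications ask for the asset by ID, not by path:
location = resolver.resolve("asset:vfx-plate-042")
```

Because the VFX vendor's tools only ever hold `asset:vfx-plate-042`, the file can move between buckets, regions, or clouds without breaking the manifest.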

MovieLabs 2030 Vision Principle 9
  1. Reliance on email for notifications and manual processing of workflow tasks.

    Example: A vendor is required to do a task on a video asset and is sent an email, a PDF attachment containing a work order, a link to a proxy video file for the work to be done, and a separate link to a cloud location where the RAW files are. It takes several hours/days for the vendor to extract the required work, download, QC, and store the media assets, and then assign the task on an internal platform to someone who can do the work. The entire process is reversed to send the completed work back to the production/studio.

    By relying on disparate systems to send workflow requests, reference assets, and assign work to individual people, we have created an inherently inefficient industry. In the scenario above, a more efficient approach would be for the end user to receive an automated notification from a production management system that includes a definition of the task to be done and links to the cloud locations of the proxies and RAW files, with all access permissions already assigned so they can start their work. Of course, our industry is uniquely distributed across organizations that handle very nuanced tasks in the completion of a professional media project. This complicates the flow of work and work orders, but there are new software systems that can enable seamless, secure, and automated generation of tasks. We can strip weeks out of major production schedules simply by being more efficient in the handoffs between departments, vendors, and systems.

  2. Monolithic systems and the lack of API-first solutions inhibit our progress towards interoperable modern application stacks.

    Example: A studio would like to migrate their asset management and creative applications to a cloud workflow that includes workflow automation, but the legacy nature of their software means that many tasks need to be done through a GUI and that it needs to be hosted on servers and virtual machines that mimic the 24/7 nature of their on-premises hardware.

    Modern applications are designed as a series of microservices that are assembled and called dynamically depending on the process, which enables considerable scaling as well as lighter-weight applications that can deploy on a range of compute instances (e.g., on workstations, virtual machines, or even behind browsers). While the pandemic proved we can run creative tasks remotely or from the cloud, a lot of those processes were "brute forced" with remote access or cloud VMs running legacy software; they are not the intended end state of a "cloud-native" software stack for media and entertainment. We recognize this is an enormous gap to close, and that moving all of the most vital applications/services to modern software platforms will take beyond the 2030 timeframe. However, we need the next generation of software systems to expose open APIs and deploy in modern containers to accelerate the interoperable and dynamic future that is possible within the 2030 Vision.
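To make the notification gap above concrete, here is a sketch of the kind of machine-readable work order that could replace the email-plus-PDF-plus-loose-links handoff (the field names are hypothetical, not a MovieLabs schema): the task definition and resolvable asset references travel together, with access implied by the assignee's identity.

```python
import json

def build_work_order(task, proxy_id, raw_id, assignee_puid):
    """Hypothetical notification payload: instead of an email, a PDF
    work order, and two separate download links, everything the vendor
    needs arrives as one structured message a workflow system can act on."""
    return {
        "task": task,
        "assignee": assignee_puid,                      # permissions pre-provisioned for this identity
        "assets": {"proxy": proxy_id, "raw": raw_id},   # resolvable asset IDs, not file paths
        "status": "ready",
    }

order = build_work_order(
    task="color-grade scene 12",
    proxy_id="asset:proxy-12",
    raw_id="asset:ocf-12",
    assignee_puid="puid:prod-1234:colorist-02",
)
payload = json.dumps(order)  # what a production management system might actually send
```

The receiving system can parse the payload, resolve the asset IDs, and queue the task internally – no hours of manual extraction, download, and re-assignment.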

MovieLabs 2030 Vision Principle 10
  1. Many workflows include unnecessarily time-consuming and manual steps.

    Example: A director can’t remotely view a final color session in real time from her location, so she needs to wait for a full render of the sequence, for it to be uploaded to a file share, for an email with the link to be sent, and then for her to download it and find a monitor that matches the one that was used for the grade.

    We could write so many examples here. There's simply too little automation and too much time wasted resolving confusion, writing metadata, reading it back, clarifying intent, sending emails, making calls, etc. Many of the technologies needed to fix these issues already exist, but we need to redevelop many of our control-plane functions to adopt a more efficient system, which requires investment in time, staff, and development. Those that do the work, though, will come out leaner, faster, and more competitive at the end of the process. We recommend that all participants in the ecosystem conduct honest internal efficiency audits to look for opportunities to improve, and prioritize the most urgent issues to fix.

Phew! So, there we have it. For anyone who believes the 2030 Vision is "doable" today, there are 24 reasons why MovieLabs disagrees. Don't consider this post a negative: we still have time to resolve these issues, and it's worth being honest both about the great progress made and about what's still to do.

Of course, there's no point making a list of things to do without a meaningful commitment to cross them off. MovieLabs and the studios can't do this alone, so we're throwing down the gauntlet to the industry – help us, to help us all. MovieLabs will be working to close the gaps that we can affect, and we'll be publishing our progress on this blog and on LinkedIn. We're asking you to do the same – share what your organization is doing by contacting info@movielabs.com and use #2030Vision in your posts.

There are three specific calls to action from this blog for everyone in the technical community:

  1. The implementation gaps listed in all parts of this blog are the easiest to close – the industry has a solution; we just need the commitment and investment to implement and adopt what we already have. These are the gaps we can rally around now, and MovieLabs has already created useful technologies like the Common Security Architecture for Production, the Ontology for Media Creation, and the Visual Language.
  2. For those technical gaps where the industry needs to design new solutions, individual companies can sometimes pick these ideas up and run with them, develop their own products, and have some confidence that if they build them, customers will come. Other technical gaps can only be closed by industry players coming together, with appropriate collaboration models, to create solutions that enable change, competition, and innovation. There are existing forums for that work, including SMPTE and the Academy Software Foundation, and MovieLabs hosts working groups as well.
  3. And though not many issues fall into the Change Management category right now, we still need to work together to share knowledge and educate the industry on how these technologies can be combined to make the creative world more efficient.

We’re more than 3 years into our Odyssey towards 2030. Join us as we battle through the monsters of apathy, slay the cyclops of single mindedness, and emerge victorious in the calm and efficient seas of ProductionLandia. We look forward to the journey where heroes will be made.

-Mark “Odysseus” Turner

Are we there yet? Part 2: Gap Analysis for the 2030 Vision (MovieLabs, December 14, 2023)


In Part 1 of this blog series, we looked at the gaps in Interoperability, Operational Support, and Change Management that are impeding our journey to the 2030 Vision's destination (the mythical place we call "ProductionLandia"). In these latter parts, we'll examine the gaps we have identified that are specific to each of the Principles of the 2030 Vision. For ease of reference, the gaps below are numbered starting from 9 (because gaps 1–8 were covered in Part 1). For each gap we list the Principle, a workflow example of the problem, and the implications of the gap.

In this post we’ll look just at the gaps around the first 5 Principles of the 2030 Vision which address a new cloud foundation.

MovieLabs 2030 Vision Principle 1
  1. Insufficient bandwidth and performance, plus a lack of automatic recovery from variability in cloud connectivity.

    Example: Major productions can generate terabytes of captured data per day during production and getting it to the cloud to be processed is the first step.

    Even though some studio and post facilities have large internet connections, there are still many more locations, especially remote or overseas ones, where the bandwidth is not large enough and the throughput not guaranteed or predictable enough, hobbling cloud-based productions at the outset. Some of the benefits of cloud-based production depend on teams having rapid access to manipulate assets as soon as they are created, and for that we need big pipes into the cloud(s) that are both reliable and self-healing. Automatic management of those links and data transfers is vital, as they will be used for all media storage and processing.

  2. Lack of universal, direct delivery of camera, audio, and on-set data straight to the cloud.

    Example: Some new cameras are now supporting automated upload of proxies or even RAW material direct to cloud buckets. But for the 2030 Vision to be realized we need a consistent, multi-device on-set environment to be able to upload all capture data in parallel to the cloud(s) including all cameras, both new and legacy.

    We're seeing great momentum with camera-to-cloud in certain use cases (with limited support from newer camera models) sending files to specific cloud platforms or SaaS environments. But we've got some way to go before deploying a camera-to-cloud environment is as simple and easy as renting cameras, memory cards/hard drives, and a DIT cart is today. We also need support for multiple clouds (including private clouds) and/or SaaS platforms, so that the choice of camera-to-cloud environment is not a deciding factor that locks downstream services into a specific infrastructure choice. We've also framed this gap as not just "camera to cloud" but "capture to cloud," which includes on-set audio and other data streams that may be relevant to later production stages, including lighting, lenses, and IoT devices. All of that needs to be securely and reliably delivered to redundant cloud locations before physical media storage on set can be wiped.

  3. Latency between “single source of truth in cloud” and multiple edge-based users.

    Example: A show is shooting in Eastern Europe, posting in New York, with producers in LA and VFX companies in India. Which cloud region should they store the media assets in?

    As an industry we tend to talk about "the cloud" as a singular thing or place, but in reality it is not – it's made up of private data centers and the various data centers that hyperscale cloud providers arrange into "availability zones" or "regions," which must be declared when storing media. As media production is a global business, the example above is very real, and it leads to the question: where should we store the media, and when should we duplicate it for performance and/or resiliency? This is one of the reasons why we believe multi-cloud systems need to be supported: it's entirely possible that the assets for a production are scattered across different availability zones, cloud accounts (depending on which vendor has "edit rights" on the assets at any one time), and cloud providers (public, private, and hybrid infrastructures). The gap here is that currently decisions need to be made, potentially involving IT systems teams and custom software integrations, about where to store assets to ensure they are available at very low latency (sub-25-millisecond round trip – see Is the Cloud Ready to Support Millions of Remote Creative Workers? for more details) for the creative users who need to get to them. By 2030 we'd expect "intelligent caching" systems or other technologies that would understand, or even predict, where certain assets need to be for users and stage them close enough for usage before they are needed. This is one of the reasons why we reiterate that we expect, and encourage, media assets to be distributed across cloud service providers and regions while merely "acting" as a single storage entity, even though they may be quite disparate. This also implies that applications need to be able to operate across all cloud providers, because they may not be able to predict or control where assets are in the cloud.

  4. Lack of visibility into the most efficient resource utilization within the cloud, especially before the resources are committed.

    Example: When a production today wants to rent an editorial system, it can accurately predict the cost and map it straight to the budget. But with the cloud equivalent it's very hard to get an upfront budget, because the costs for cloud resources rely on predicting usage – hours of usage, amount of storage required, data egress, etc. – which is hard to know in advance.

    Creative teams take on a lot when committing to a show, usually with a fixed budget and timeline. It's hard to ask them to commit to unknown costs, especially for variables that are hard to control at the outset – could you predict how many takes a specific scene will need? How many times a file will be accessed or downloaded? Or how many times a database will be queried? Even if they could accurately predict usage, most cloud billing is done in arrears, so the costs are not usually known until after the fact, and consequently it's easy to overrun costs and budgets without even knowing it.

    Similarly, creative teams would also benefit from greater education and transparency concerning the most efficient ways to use cloud products. Efficient usage will decrease costs and enhance output and long-term usage.

    For cloud computing systems to become as ubiquitous as the physical equivalent, providers need to find ways to match the predictability and efficient use of current on-premises hardware, but with the flexibility to burst and stretch when required and authorized to do so.
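The latency question in gap 3 above can be made concrete with a toy region-selection sketch (latency figures and site names are invented for illustration): even the best single region leaves most of a globally distributed team over an interactive budget, which is exactly what motivates intelligent caching and multi-region replication.

```python
# Hypothetical round-trip latencies (ms) from each team site to
# candidate cloud regions, roughly matching the shoot-in-Eastern-Europe,
# post-in-NY, producers-in-LA, VFX-in-India example.
LATENCY_MS = {
    "eu-east":  {"shoot-eu": 18,  "post-ny": 95,  "prod-la": 150, "vfx-in": 120},
    "us-east":  {"shoot-eu": 95,  "post-ny": 12,  "prod-la": 65,  "vfx-in": 210},
    "ap-south": {"shoot-eu": 130, "post-ny": 210, "prod-la": 190, "vfx-in": 15},
}

def best_region(sites, budget_ms=25):
    """Pick the region with the lowest worst-case latency, and report
    which sites still miss the interactive budget. Those are the sites
    a caching/replication layer would have to serve from somewhere closer."""
    region = min(LATENCY_MS, key=lambda r: max(LATENCY_MS[r][s] for s in sites))
    over_budget = [s for s in sites if LATENCY_MS[region][s] > budget_ms]
    return region, over_budget

region, needs_cache = best_region(["shoot-eu", "post-ny", "prod-la", "vfx-in"])
```

With these made-up numbers, the least-bad single region still leaves three of the four sites well over a 25 ms budget, so "pick one region" is not an answer for a global production.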

MovieLabs 2030 Vision Principle 2
  1. Too few cloud-aware/cloud-native apps, which necessitates a continued reliance on moving files (into clouds, between regions, between clouds, out of clouds).

    Example: An editor wants to use a cloud SaaS platform for cutting their next show, but the assets are stored in another cloud, the dailies system providing reference clips is on a third, and the other post vendors are using a private cloud.

    We're making great progress with getting individual applications and processes to move to the cloud, but we're in a classic "halfway" stage where it's potentially more expensive and time consuming to have some applications/assets operating in the cloud and some not. That requires moving assets into and out of a specific cloud to take advantage of its capabilities – and, if certain applications or processes are available only in one cloud, moving those assets specifically to that cloud, which is the sort of "assets chasing tasks" from the offline world that this principle was designed to avoid in the cloud world. We need to keep pushing forward with modern applications that are multi-cloud native and can migrate seamlessly between clouds to support assets stored in multiple locations. We understand this is not a small task or one that will be quick to resolve. In addition, many creative artists use macOS, which is not broadly available in cloud instances or in a form that can be virtualized to run on myriad cloud compute types.

  2. Audio post-production workflows (e.g., mixing, editing) are not natively running in the cloud.

    Example: A mixer wants to work remotely on a mix with 9.1.6 surround sound channels that are all stored in the cloud. However, most cloud-based apps only support 5.1 today, and the audio and video channels are streamed separately, so the sync between them can be "soft" enough that it is hard to know whether the audio is truly playing back in sync.

    The industry has made great strides in developing technologies to enable final color (up to 12-bit) to be graded in the cloud, but now similar attention needs to be paid to the audio side of the workflows. Audio artists can be dealing with thousands, or even tens of thousands, of small files, and they have unique challenges which need to be resolved to enable all production tasks to be completed in the cloud without downloading assets to work remotely. The audio/video sync and channel-count challenges above are just illustrative of the clear need for investment in, and support of, both audio and video cloud workflows simultaneously to get to our "ProductionLandia," where both can happen concurrently on the same cloud asset pool.

MovieLabs 2030 Vision Principle 3
  1. Lack of communication between cross-organizational systems (AKA “too many silos”) and inability to support cross-organizational workflows and access.

    Example: A director uses a cloud-based review and approval system to provide notes and feedback on sequences, but today that system is not connected to the workflow management tools used by her editorial department and VFX vendors, so the notes need to be manually translated into work orders and media packages.

    As discussed above, we're in a transition phase to the cloud, and as such we have some systems that may be able to receive communications (messages, security permission requests) and commands (API calls), whereas other systems are unaware of modern application and control-plane systems. Until we have standard systems for communicating (both routing and common payloads for messages and notifications) and a way for applications to interoperate between systems controlling different parts of the workflow, we'll have ongoing issues with cross-organizational inefficiencies. See the MovieLabs Interoperability Paper for much more on how to enable cross-organizational interop.

MovieLabs 2030 Vision Principle 4
  1. No common way to describe each studio’s archival policy for managing long term assets.

    Example: Storage service companies and MAM vendors need to customize their products to adapt to each different content owner’s respective policies and rules for how archival assets are selected and should be preserved.

    The selection of which assets need to be archived, and the level of security robustness, access controls, and resilience, are all determined by studio archivists depending on the type of asset. As we look to the future of archives, we see a role for a common, agreed way of describing those policies so that any storage, asset management, or automation platform could read the policies and report compliance against them. Doing so will simplify the onboarding of new systems with confidence.
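A sketch of what such a machine-readable policy might look like (the field names, asset types, and thresholds are all invented for illustration – no common schema exists yet, which is precisely the gap):

```python
ARCHIVE_POLICY = {
    # Hypothetical policy a studio could publish once, instead of each
    # vendor custom-coding the studio's rules into their product.
    "asset_types": {
        "ocf":   {"copies": 3, "regions": 2, "fixity_check_days": 90},
        "final": {"copies": 2, "regions": 2, "fixity_check_days": 180},
        "proxy": {"copies": 1, "regions": 1, "fixity_check_days": 365},
    }
}

def compliant(asset_type, copies, regions, policy=ARCHIVE_POLICY):
    """Any storage service or MAM could run the same check and report
    compliance against the studio's published policy."""
    rule = policy["asset_types"][asset_type]
    return copies >= rule["copies"] and regions >= rule["regions"]

ocf_ok = compliant("ocf", copies=3, regions=2)
```

The value is not in the three-line check but in the shared vocabulary: one policy document, readable by every system in the chain.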

MovieLabs 2030 Vision Principle 5
  1. Challenges of measuring fixity across storage infrastructures.

    Example: Each studio runs a checksum against an asset before uploading it to long-term storage. Even though storage services and systems run their own fixity checks, those checksums or other mechanisms are likely different from the studio's and are not exposed to end clients. So instead, the studio needs to run its own checks for digital degradation by occasionally pulling the file back out of storage and re-running the fixity check.

    As there's no commonality between the fixity systems used in major public clouds, private clouds, and storage systems, the burden of checking that a file is still bit-perfect falls on the customer, who incurs the time, cost, and inconvenience of pulling the file out of storage, rehashing it, and comparing it to the originally recorded hash. This process is an impediment to public cloud storage and the efficiencies it offers for the (very) long-term storage of archival assets.

  2. Many essence and metadata file types that need to be archived are stored in proprietary formats.

    Example: A studio would like to maintain original camera files (OCF) in perpetuity as the original photography captured on set, but the camera file format is proprietary, and tools may not be available in 10, 20, or 100 years’ time. The studio needs to decide if it should store the assets anyway or transcode them to another format for the archive.

    The myriad proprietary files and formats in our industry contain critical information for applications to preserve creative intent, history, or provenance, but that proprietary data becomes a problem if it is necessary to open the file years or decades later, perhaps after the software is no longer available. We have a few current and emerging examples in some areas of public specifications and standards, and open source software that can enable perpetual access, but the industry has been slow to appreciate the legacy challenges in preserving access to this critical data in the archive.
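Returning to the fixity gap above: the check itself is straightforward, which is what makes the lack of a common, client-visible mechanism so costly. A sketch using a SHA-256 digest (one plausible choice; studios and storage services may use other hashes, and the data here is a stand-in):

```python
import hashlib

def fixity_digest(data: bytes) -> str:
    """Content hash recorded at ingest; re-computed later to detect
    silent bit-rot. If storage services exposed a digest like this in a
    common way, the customer wouldn't have to pull the file back out."""
    return hashlib.sha256(data).hexdigest()

original = b"frame data from the original camera file"
recorded = fixity_digest(original)   # stored alongside the asset at upload time

# Years later: the costly step the blog describes - retrieve the object
# from long-term storage and verify it is still bit-identical.
retrieved = b"frame data from the original camera file"
intact = fixity_digest(retrieved) == recorded
```

A common fixity interface would let the storage side run this comparison in place and simply report the result, instead of forcing an egress-and-rehash round trip.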

In the final part of this blog series, we’ll address the gaps remaining within the Principles covering Security and Identity and Software-Defined Workflows… Stay Tuned…

Are we there yet? Part 1: Gap Analysis for the 2030 Vision (MovieLabs, July 26, 2023)


It's mid-2023, and we're about four years into our odyssey towards "ProductionLandia" – an aspirational place where video creation workflows are interoperable, efficient, secure by nature, and seamlessly extensible. It's the destination; the 2030 Vision is our roadmap to get there. Each year at MovieLabs we check the industry's progress towards this goal, adjusting focus areas and generally providing navigation services to ensure we all arrive in port in ProductionLandia at the same time, with a suite of tools, services, and vendors that work seamlessly together. As part of that process, we take a critical look at where we are collectively as an M&E ecosystem – and what work still needs to be done. We call this "gap analysis."

Before we leap into the recent successes and the remaining gaps, let's not bury the lede – while there has been tremendous progress, we have not yet achieved the 2030 Vision (that's not a negative; it's a long process and there is still a lot of work to do). So, despite some bold marketing claims from industry players, there's a lot more to the original 2030 Vision white paper than lifting and shifting some creative processes to the cloud, the occasional use of virtual machines for a task, or a couple of applications seamlessly passing a workflow process between each other. The 2030 Vision describes a paradigm shift that starts with a secure cloud foundation, reinvents our workflows to be composable and more flexible, removing the inefficiencies of the past, and includes the change management necessary to give our creative colleagues the opportunity to try, practice, and trust these new technologies on their productions. The 2030 Vision requires an evolution in the industry's approach to infrastructure, security, applications, services, and collaboration, and that was always going to be a big challenge. There's still much to be done to achieve dynamic and interoperable software-defined workflows built with cloud-native applications and services that securely span multi-cloud infrastructures.

Status Check

But even though we are not there yet, we're making amazing progress from where we started (albeit with a global pandemic to add urgency to our journey!). Many major companies – cloud services companies, creative application tool makers, creative service vendors, and other industry organizations – have now backed the 2030 Vision; it is no longer just the strategy of the major Hollywood studios but has truly become the industry's "Vision." The momentum behind the Vision is building, as is evident in the 2030 Showcase program we launched in 2022 to highlight and share 10 great case studies in which companies large and small are demonstrating Principles of the Vision that deliver value today.

We've also seen the industry respond to our previous blogs on gaps, including what was missing around remote desktops for creative applications, software-defined workflows, and cloud infrastructures. We can now see great progress with camera-to-cloud capture, automated VFX turnovers, final color pipelines that are now technically possible in the cloud, amazing progress on real-time rendering and iteration via virtual production, creative collaboration tools, and more applications opening their APIs to enable new and unpredictable innovation.

Mind the Gaps

So, in this blog series, let's look at what's still missing. Where should the industry now focus its attention to keep us moving and to accelerate innovation and the collective benefits of a more efficient content creation ecosystem? We refer to these challenges as "gaps" between where we are today and where we need to be in "ProductionLandia." When we succeed in delivering the 2030 Vision, we'll have closed all of these gaps. As we analyze where we are in 2023, we see these gaps falling into the 3 key categories from the original vision (Cloud Foundations, Security and Identity, Software-Defined Workflows), plus 3 underlying ones that bind them all together:

[Image: the 3 key categories from the original vision (Cloud Foundations, Security and Identity, Software-Defined Workflows), plus the 3 underlying categories that bind them all together]

In this Part 1 of the Blog we’ll look at the gaps related to these areas. In Part 2 we’ll look at the gaps we view as most critical for achieving each of the principles of the vision, but let’s start with those binding challenges that link them all.

It's worth noting that some gaps involve fundamental technologies (a solution doesn't exist, or a new standard or open source project is required); some are implementation focused (e.g., the technology exists but needs to be implemented/adopted by multiple companies across the industry to be effective – our cloud security model CSAP is an example here, where a solution is now ready to be implemented); and some are change management gaps (e.g., we have a viable solution that is implemented, but we need training and support to effect the change). We've steered clear of gaps that are purely economic in nature, as MovieLabs does not get involved in those areas. It's probably also worth noting that some of these gaps and solutions are highly related, so we need to close some to support closing others.

Interoperability Gaps

  1. Handoffs between tasks, teams, and organizations still require large-scale exports/imports of essence and metadata files, often via an intermediary format. Example: generation of proxy video files for review/approval of specific editorial sequences. These handovers are often manual, introducing the potential for errors, omissions of key files, security vulnerabilities, and delays. See note 1.
  2. We still have too many custom point-to-point implementations rather than off-the-shelf integrations that can be simply configured and deployed with ease. Example: An Asset Management System currently requires many custom integrations throughout the workflow, which makes changing it out for an alternative a huge migration project. Customization of software solutions adds complexity and delay and makes interoperability considerably harder to create and maintain.
  3. Lack of open, interoperable formats and data models. Example: Many applications create and manage their own sequence timeline for tracking edits and adjustments instead of rallying around open equivalents like OpenTimelineIO for interchange. For many use cases, closing this gap requires the development of new formats and data models, and their implementation.
  4. Lack of standard interfaces for workflow control and automation. Example: Workflow management software cannot easily automate multiple tasks in a workflow by initiating applications or specific microservices and orchestrating their outputs to feed a new process. Although we have automation systems in some parts of the workflow, the lack of standard interfaces again means that implementors frequently have to write custom connectors to get applications and processes to talk to each other.
  5. Failure to maintain metadata and a lack of common metadata exchange across components of the larger workflow. Example: Passing camera and lens metadata from on-set to post-production systems for use in VFX workflows. Where common metadata standards do not exist, or have not been implemented, systems rarely pass on data they do not need for their specific task, as they have no obligation to do so or don’t know which target system may need it. A more holistic system design, however, would enable non-adjacent systems to find and retrieve metadata and essence from upstream processes and to expose data to downstream processes, even without knowing what it may be needed for.

Operational Support

  1. Our workflows, implementations and infrastructures are complex and typically cross the boundaries of any one organization, system or platform. Example: A studio shares essence and metadata with external vendors to host on their own infrastructure tenants, but also less structured elements such as work orders (definitions of tasks), context, permissions and privileges. There is therefore a need for systems integrators and implementors to take the component pieces of a workflow and design, configure, host, and extend them into complete ecosystems. These cloud-based, modern software components will be very familiar to IT systems integrators, but integrators also need skills and understanding in our media pipelines to know how to implement and monetize them in a way that will work in our industry. We therefore have a mismatch gap between those who understand cloud-based IT infrastructures and software, and those who understand the complex media assets and processes that need to operate on those infrastructures. There are few companies to choose from with the correct mixture of skills to understand both cloud and software systems and media workflow systems, and we’ll need a lot more of them to support the industry-wide migration.
  2. We also need systems that match our current support models. Example: A major movie production can be operating simultaneously across multiple countries and time zones in various states of production, and any system outage can cause backlogs in smooth operations. The media industry works unusual and long hours, at strange times of day and across the world – demanding a support environment staffed by specialists who understand the challenges of media workflows, not one that just opens an IT ticket to be resolved when weekday support comes in at 9am on Monday. In the new 2030 world, these problems are compounded by the shared nature of the systems – it may be hard for a studio or production to understand which vendor is responsible if (or when) there are workflow problems. Who do you call when applications and assets seamlessly span infrastructures? How do you diagnose problems?

Change Management

  1. Too few creatives have tried and successfully deployed new ‘2030 workflows’ to be able to share and train others. Example: Parts of the workflow like dailies have migrated successfully to the cloud, but we’ve yet to see a major production running from “camera to master” in the cloud – who will be the first to try it? Change management comprises many steps before new processes are considered “just the way we do things.” The main ones we need to get through are:
    • Educating and socializing the various stakeholders about the benefits of the 2030 vision, for their specific areas of interest
    • Involving creatives early in the process of developing new 2030 workflows
    • Then demonstrating value of new 2030 workflows to creatives with tests, PoCs, limited trials and full productions
    • Measuring cost/time savings and documenting them
    • Sharing learnings with others across the industry to build confidence.

Shortly, we’ll add a Part II to this blog which will add to the list of gaps with those that are most applicable to each of the 10 Principles of the Vision. In the meantime, there are eight gaps here which the industry can start thinking about – and please do let us know if you think you already have solutions to these challenges!

[1] The Ontology for Media Creation (OMC) can assist in common payloads for some of these files/systems.

The post Are we there yet? Part 1 appeared first on MovieLabs.

Announcing CSAP Part 4: Securing Software-Defined Workflows
https://movielabs.com/applying-the-security-architecture-to-workflows/ – Wed, 19 Oct 2022
Applying the Security Architecture to Workflows


Introducing Part 4 of the Common Security Architecture for Production (CSAP)

Today we are publishing Part 4: Securing Software-Defined Workflows of the Common Security Architecture for Production (CSAP). It brings together two central threads of the MovieLabs 2030 Vision: CSAP and software-defined workflows (SDWs). Software-supported collaboration and automation are crucial to the future of scalable, multi-cloud workflows. This means that security systems must work with, and in many cases be driven by, workflow software.

For this reason, CSAP is a workflow-driven security architecture for production in the cloud. It is a zero-trust architecture with a deny-by-default security posture, in which activities are permitted only by explicit CSAP authorization rules. “Workflow-driven” means that security policies are created in response to what’s happening in the workflow, for example, the assignment of a task to an artist or the publication of dailies for review.

We use the term software-defined workflows (SDW) to broadly describe workflows in which the workflow designer can choose which tasks perform specific functions, what assets and associated information those tasks communicate, which participants are involved, and what rules advance or gate the process. Unlike workflows bound to specific hardware or rigid stacks of applications, SDWs are designed for change and the need to constantly modify and adapt workflows dynamically.

Multi-cloud and multi-org workflows are also a main driver for Part 4. The 2030 Vision assumes cloud infrastructures will be dynamic and shared across everyone and every organization working on a production, and therefore could be accessed by many organizations and independent contractors. This access happens outside of the organization that controls the infrastructure, in contrast with how private on-premises infrastructure is used and secured today. Extending the perimeter security models protecting private infrastructure to this cloud environment will be too complicated [1] for the agile security management necessary to respond fast enough to new and changing workflows, and it will exacerbate the problem of security interfering with the creative process.

This is the reason CSAP exists. It is designed to work hand in hand with the new way of producing content, and to do so without impinging on the creative process.

Workflows have some form of workflow management – the “thing” that is managing the workflow.[2] “CSAP Part 4: Securing Software-Defined Workflows” describes how workflow management and CSAP work together to secure workflows and protect their integrity.

Putting CSAP Part 4 into Context

The lifetime of any workflow can be broken down into two phases: initialize and execute. Let’s take a dailies workflow as an example and see how Part 4 can be used to secure the steps in a production workflow.

CSAP Part 4: Figure 1

Figure 1 – A simple dailies creation workflow

Initialize is where everything is brought together. In our example, which is not atypical, initialization means assembling the shooting schedule, crew selection, delivery specifications, camera specifications, etc. Each step in workflow initialization is accompanied by initialization of parts of the workflow’s security. For example:

  • When the production sets up the department: Accounts are created and roles defined for each crew member. Global policies are defined.
  • When the production agrees on its workflows: Authorization policy templates are created. Policy Enforcement Points (PEPs) are provisioned.

The second step could be further broken down into inter-departmental and intra-departmental initializations.
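The initialization steps above can be made concrete with a small sketch. CSAP does not prescribe a format for authorization policy templates, so the class names, fields, and the resource pattern below are purely illustrative – the point is that templates are defined when the production agrees on its workflows, and filled in later by workflow events.

```python
from dataclasses import dataclass

# Hypothetical representation of a CSAP-style authorization policy template.
# CSAP does not mandate a concrete format; this only illustrates templates
# created at initialization and instantiated at execution time.

@dataclass(frozen=True)
class AuthorizationRule:
    subject: str   # participant or role being authorized
    action: str    # e.g., "read", "stream"
    resource: str  # asset or service being protected

class PolicyTemplate:
    def __init__(self, action: str, resource_pattern: str):
        self.action = action
        self.resource_pattern = resource_pattern  # e.g., "dailies/{shoot_day}/*"

    def instantiate(self, subject: str, **fields) -> AuthorizationRule:
        # A workflow event (e.g., a task assignment) fills in the blanks.
        return AuthorizationRule(
            subject=subject,
            action=self.action,
            resource=self.resource_pattern.format(**fields),
        )

review_template = PolicyTemplate("stream", "dailies/{shoot_day}/*")
rule = review_template.instantiate("role:creative-reviewer", shoot_day="2024-01-08")
print(rule.resource)  # dailies/2024-01-08/*
```

At execution time, an event such as a task assignment supplies the missing fields and yields a concrete rule for a Policy Enforcement Point to enforce.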

Execution – what happens after someone hits “go” – is often largely event driven; once the department agrees on its workflow, adjustments are made to accommodate new requirements from production management or to improve the workflow. In this case, execution would likely be event driven, with steps that look like this:

CSAP Part 4: Figure 2

Figure 2 – The events driving our dailies workflow example

The lifetime of authorization rules is set according to the security requirements of the production. In the case where the production has decided on more of a “least-privilege” approach using short-lifetime authorization rules, many of the events in the workflow could trigger security authorization changes. On the other hand, if a production uses long-lifetime authorization rules, the authorizations will be more static.

Let’s look at a couple possible examples from the dailies workflow above.

When camera and sound files arrive: The crew members tasked with syncing and uploading are authorized to access the files and workstations.

After the dailies have been approved and the files transcoded for creative review: Creative reviewers are authorized to stream the dailies. This type of authorization rule can be used to prevent premature delivery, that is, delivery before review is completed.

These examples give some idea of how workflow management can drive security both at initialization and at execution. CSAP Part 4 goes into much more detail on how this workflow-driven security can be implemented.
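As a rough sketch of how these event-driven authorizations could work, the following maps workflow events to short-lifetime grants against a deny-by-default store. The event names, roles, resources, and TTL values are invented for illustration and are not defined by CSAP.

```python
import datetime as dt

class PolicyStore:
    """Deny-by-default store of short-lifetime authorization rules."""

    def __init__(self):
        self._rules = []  # tuples of (subject, action, resource, expires_at)

    def grant(self, subject, action, resource, ttl_hours):
        expires_at = dt.datetime.now(dt.timezone.utc) + dt.timedelta(hours=ttl_hours)
        self._rules.append((subject, action, resource, expires_at))

    def is_allowed(self, subject, action, resource):
        now = dt.datetime.now(dt.timezone.utc)
        # Deny by default: allow only while an unexpired matching rule exists.
        return any(
            s == subject and a == action and r == resource and now < exp
            for s, a, r, exp in self._rules
        )

def on_workflow_event(event, store):
    # Workflow management drives security: events trigger authorization changes.
    if event == "camera_files_arrived":
        store.grant("role:sync-crew", "read", "ocf/day-12", ttl_hours=24)
    elif event == "dailies_approved":
        store.grant("role:creative-reviewer", "stream", "dailies/day-12", ttl_hours=48)

store = PolicyStore()
print(store.is_allowed("role:creative-reviewer", "stream", "dailies/day-12"))  # False
on_workflow_event("dailies_approved", store)
print(store.is_allowed("role:creative-reviewer", "stream", "dailies/day-12"))  # True
```

A production preferring long-lifetime rules would simply use larger TTLs, making the authorizations more static, as described above.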

SaaS and Workflow-Driven Security

SaaS offerings are also key components that must be integrated into software-defined workflows and their security. Part 4 considers the case of a closed service operated by a third party that has its own internal security and holds assets internally. The SaaS system is therefore responsible for ensuring that participants are authenticated, for controlling their access to assets within the service, and for controlling any external access that it provides. In the context of a software-defined workflow that includes the SaaS system as one component, we have very similar needs for initialization and execution as in our examples above. If the service allows federation with an external identity and access management (IAM) system, some of these may be done through that IAM. And when a production requires short-lifetime authorization policies, the SaaS service needs to provide ways to support workflow-driven security policies, some of which may need to be set by systems external to the service. Part 4 examines this use case in detail.

Service Specific Authorizations

In considering authorization rules for different types of components, we’ve come to realize they often have different needs in how access controls may be scoped to particular portions of the service and to particular actions. File systems often use policies scoped to particular files or folders and actions based on CRUD (create, read, update, delete) operations. Messaging systems may scope policies to particular channels with actions such as create queue, read queue, send to queue, delete queue. Part 4 takes up messaging and asset resolution systems as two examples of how scopes and actions can be defined for specific types of subsystems.
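The idea of service-specific scopes and actions can be illustrated with a toy vocabulary check; the service types and action names below are examples only, not a CSAP-defined registry.

```python
# Illustrative only: different subsystem types define their own scope and
# action vocabularies, so a rule's action must make sense for its target.

SERVICE_ACTIONS = {
    "file_system": {"create", "read", "update", "delete"},  # CRUD on files/folders
    "messaging": {"create_queue", "read_queue", "send_to_queue", "delete_queue"},
}

def validate_rule(service_type: str, action: str, scope: str) -> bool:
    """Reject rules whose action is meaningless for the target service type."""
    return action in SERVICE_ACTIONS.get(service_type, set())

print(validate_rule("file_system", "read", "/projects/show-a/dailies"))    # True
print(validate_rule("messaging", "read", "channel:color-review"))          # False
print(validate_rule("messaging", "send_to_queue", "channel:color-review")) # True
```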

Keep the Feedback Coming

We hope that reading this will encourage you to download Part 4 and read about how CSAP secures software-defined workflows. Please reach out to MovieLabs if you have any questions about how to deploy any part of CSAP, including the new Part 4.

The next part of CSAP to be released is Part 5: Implementation Considerations. When we created CSAP, one of our goals was that it should not have any boxes that say “magic happens here.” Part 5 will give you insight into our thinking about implementing CSAP, and it will be coming soon.

[1] Therefore, more likely to fail because complexity is the enemy of security.

[2] While a software-defined workflow will have some level of automation, our use of the term “workflow management” also applies to manual systems for scheduling work.

PSST…I have to tell you something – Part 2
https://movielabs.com/psst-i-have-to-tell-you-something-2/ – Thu, 22 Sep 2022
Sending messages through workflows – Part 2


In the first part of this blog series we discussed the benefits of using messaging systems for communication among the systems and participants in production workflows, especially those spanning clouds or organizations. In this blog we’ll go deeper into some of the key concepts that apply to these messaging systems and ways in which interoperability can be achieved.

Messages and Messaging Systems


There are many different types of messaging and event distribution systems. What they all have in common is the ability to distribute data payloads from producers to one or more consumers of that data. In a workflow, messaging payloads often consist of notifications – information about something that has happened in a workflow, like a new version having been created, approved, or published – or requests, e.g., “start the next step in the workflow.” At this level, messaging is machine-to-machine communication; the application or service that receives the message may then convert it into something more appropriate for a person, such as an email or a text message.

The Messaging System has three logical components:

  • Producers, which create and send messages
  • Message Routers, which securely transmit a message from a Producer to Consumers.
  • Message Routers support one or more Router Names, which are used to direct messages from producers to appropriate consumers. Note: In this blog, Named Router is short for “a Message Router that supports a particular Router Name.”

The Messaging System itself creates Message Routers and gives Producers and Consumers the (implementation dependent) information they need to connect to them. Since there are so many ways to implement these components,[1] this blog will only talk about message systems at the level of these components.

The Messaging System has a few major responsibilities. It creates Message Routers and their Router Names, usually as part of a workflow management process, and it hands out connection information to authorized Producers and Consumers so they can connect to a Named Router.

Message Routers can have multiple components, depending on the underlying implementation. They can also distribute messages in several ways: fan-out, where a single message is delivered to all the consumers connected to the Message Router; round-robin, where each message is read by a single consumer; and filtered, where the message is delivered to consumers by matching some criteria (which may be in the message itself or known to the router based on the Consumer’s stated needs).
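The three distribution styles can be sketched as follows. This toy router only illustrates the observable behavior – real systems (message brokers, cloud pub/sub services) implement these patterns with very different mechanics, and the class and mode names are invented for the example.

```python
import itertools

class MessageRouter:
    """Toy router demonstrating fan-out, round-robin, and filtered delivery."""

    def __init__(self, mode="fan_out"):
        self.mode = mode
        self.consumers = []  # (callable, criteria) pairs
        self._rr = None

    def subscribe(self, consumer, criteria=None):
        self.consumers.append((consumer, criteria))
        self._rr = itertools.cycle(self.consumers)  # rebuilt on each subscribe

    def publish(self, message: dict):
        if self.mode == "fan_out":        # every consumer gets the message
            for consumer, _ in self.consumers:
                consumer(message)
        elif self.mode == "round_robin":  # exactly one consumer gets it
            consumer, _ = next(self._rr)
            consumer(message)
        elif self.mode == "filtered":     # delivered only on matching criteria
            for consumer, criteria in self.consumers:
                if criteria is None or all(
                    message.get(k) == v for k, v in criteria.items()
                ):
                    consumer(message)

received = []
router = MessageRouter(mode="filtered")
router.subscribe(lambda m: received.append(("vfx", m)), criteria={"department": "vfx"})
router.subscribe(lambda m: received.append(("edit", m)), criteria={"department": "edit"})
router.publish({"event": "asset_published", "department": "vfx"})
print(received)  # [('vfx', {'event': 'asset_published', 'department': 'vfx'})]
```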

Both producers and consumers must have a connection to a Named Router so they can send or receive messages. The form of this connection is implementation dependent and can be, for example, a URL, a socket connection, or something else. We won’t need that detail very often in this blog, and we’ll use “connection info” as a generic term for it.

Router Names are the primary way Producers and Consumers discover Message Routers and connect to them. Some implementations will support multiple Router Names in a single Message Router, while others have a one-to-one relationship between a router and a name. Some messaging systems call these “topics”, “channels”, or “exchanges”; the conceptual match is good, though there is a great deal of technical variation in how these other terms are applied.

For our examples, we’ll assume that the Messaging System has been asked to create some Named Routers, and that workflow management distributes their connection info to workflow components that need them. Creating these is generally part of workflow setup, and distributing connection info can be part of setup or done on demand when individual components of the workflow get initialized.

Interoperability and Automation

These machine-to-machine messages can then notify individuals or automated processes about things like publication and status; and trigger workflow automation, such as the provisioning of a creative session for an artist or the start of a rendering session.

To transfer messages across different domains, a message Consumer in one domain can read a message from a Named Router, translate or filter or adapt it as needed, and give it to another Named Router inside the second domain.

Although different messaging systems can use many different low-level protocols which have to be translated across domains, interoperability is vastly simplified if there is agreement on the actual content of the messages. Even if Message Routers use different mechanisms, the messages themselves can be standardized and interoperable.
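A minimal sketch of that cross-domain handoff, assuming a simple in-memory router in each domain (a stand-in for each system's real client library); the event and field names are illustrative only.

```python
class SimpleRouter:
    """In-memory stand-in for a Named Router in one messaging domain."""

    def __init__(self):
        self._queue = []

    def publish(self, message):
        self._queue.append(message)

    def get(self):
        return self._queue.pop(0) if self._queue else None

def bridge(source, destination, adapt):
    """Read one message from the source domain, adapt it, republish it."""
    message = source.get()
    if message is not None:
        destination.publish(adapt(message))

ecm = SimpleRouter()      # Edit & Color Management domain
dailies = SimpleRouter()  # Dailies Screening domain (different infrastructure)

def add_local_context(message):
    # Standardized contents pass through untouched; only context is added.
    context = {**message.get("context", {}), "facility": "dailies-east"}
    return {**message, "context": context}

ecm.publish({
    "event": "color.approved",
    "asset_id": "urn:example:asset:seq10",
    "context": {"work_order": "WO-7"},
})
bridge(ecm, dailies, add_local_context)
delivered = dailies.get()
print(delivered["context"])  # {'work_order': 'WO-7', 'facility': 'dailies-east'}
```

Because the message contents are standardized, the bridge only touches context, not the payload itself.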

The Contents of Messages

It’s worth going into this interoperability in a little more detail – we can’t make an interoperable message system unless all the participants know what the messages mean. There are two main parts to this:

  • A shared way of describing what the message is about. This includes both events and commands. Events say that something has happened and include, for example, “this asset has changed” or “this process is complete.” Commands are something that someone wants to happen and include things like “approve this” and “transform this OCF into a proxy.”
  • A shared way of describing everything the message has to contain that is needed to understand and act on a command or respond appropriately to an event. For example, an event that indicates that an asset has changed needs a way to identify the asset, and a request to turn an OCF into a proxy needs to say which OCF is involved, and maybe even which service provider is to do the work.

To cover the first point, we are working with industry partners on common names for standard events and commands in the workflow. For the second, much of it can be based on the Ontology for Media Creation (https://mc.movielabs.com), which provides a shared way of describing workflow components, and on the use of identifiers in the messages rather than full data payloads (as discussed in our blog on resolvers: Through the Looking Glass – MovieLabs).
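As an illustration of these two parts, here is one possible shape for a notification and a command. The event names, field names, and identifier scheme are hypothetical – the common vocabularies are still being developed with industry partners, and real payloads would draw on the Ontology for Media Creation.

```python
import json

# Hypothetical message shapes: a shared name for what the message is about,
# plus identifiers (not full payloads) for everything needed to act on it.

notification = {
    "header": {
        "message_type": "event",
        "name": "asset.version.created",   # illustrative event name
        "timestamp": "2022-09-21T17:00:00Z",
    },
    "payload": {
        "asset_id": "urn:example:asset:ocf-a104c3",  # resolved via a resolver
        "version": 2,
    },
}

command = {
    "header": {"message_type": "command", "name": "proxy.create"},
    "payload": {
        "source_asset_id": "urn:example:asset:ocf-a104c3",
        "service_provider": "example-transcode-service",
    },
}

# Interoperability rests on both sides agreeing on this structure,
# regardless of which messaging infrastructure carries it.
print(notification["header"]["name"])  # asset.version.created
```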

Examples:

The examples use a streamlined view of the dailies process. We won’t cover all the pieces, but the ones not covered can use messaging in the same way as the parts we look at.

Message Producer and Message Consumer are very generic concepts, and we use them to refer to applications that actually produce or consume messages, such as a dashboard or workflow automation service. Consumers can deal with messages in many ways, for example by acting on the message itself or by sending it on somewhere else.

Often, both producers and consumers are integration points that connect applications and services that are not message-aware to ones that are. For example, in the dailies process, “unload data cards” can be a single application, but it could just as well be a script that waits for something to happen in a watch folder and sends a message when something new appears. Similarly, “review and approve” can be an application that receives and acts on messages directly from “grade color”, or it could be an application that converts the approval request into an email which is then sent to the participant responsible for review and approval; its outbound messages can be sent by an application, through a service fronted by a web form, or by email to a system that converts the email into a message.
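The watch-folder integration point described above can be sketched as a small polling script. `send_message` here is a placeholder for whatever client the chosen messaging system provides, and polling (rather than native filesystem events) just keeps the example self-contained.

```python
import tempfile
import time
from pathlib import Path

def watch_folder(folder, send_message, poll_seconds=5, max_polls=None):
    """Bridge a plain folder into the messaging system: announce new files."""
    seen = set()
    polls = 0
    while max_polls is None or polls < max_polls:
        for path in sorted(folder.iterdir()):
            if path.name not in seen:
                seen.add(path.name)
                # Send an identifier for the new arrival, not the bytes.
                send_message({"event": "card.unloaded", "file": path.name})
        polls += 1
        if max_polls is None or polls < max_polls:
            time.sleep(poll_seconds)

# Example: collect messages in a list instead of sending to a real router.
sent = []
with tempfile.TemporaryDirectory() as d:
    (Path(d) / "A001C002.mxf").touch()
    watch_folder(Path(d), sent.append, max_polls=1)
print(sent)  # [{'event': 'card.unloaded', 'file': 'A001C002.mxf'}]
```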


Example 1: Messaging in an Approval Workflow

First, we’ll look at the handoff from “grade color” to “review & approve.” Review and approval is an iterative process, and later we’ll expand that in more detail.

Once color grading is done, the result has to be sent to someone to approve. This has two parts: the approver has to be notified that there is something to approve, and the result of the review has to be sent back out. Workflow management creates two Named Routers, one called “request color grading” and the other called “review color grading.” The color grading component is a consumer of “grade color” messages from the “request color grading” named router and a producer of “request review” messages, which it sends to the “review color grading” named router.

When the Edit stage is done, it sends a message to the “request color grading” router, indicating the Asset, and a note saying something like “reason: initial grading.” The Grade Color task reads from that router and does its work – it can allocate the work to an individual or an internal system that manages pending work (perhaps itself message-based). Work happens, and the asset is ready for review. The Grade Color task sends a message to the “review color grading” router. The “Review & Approve” task can send two messages: “approved” and “rejected: reason.” It sends the first one to the Approval Notification message router (see the next example) and the second back to the “request color grading” router, where someone will pick it up.

An important thing to note is that very few messages exist on their own; they are part of some grander context. For this example, the context might be a work order, which is passed with all of the messages.

This paradigm – tasks writing to a Named Router to request work, other tasks reading from that Named Router to get work, and then sending a notification to another Named Router when the work is completed – can be used for all the components in the “Edit and Color Management Workflow” box.

Example 2: Messaging to Multiple Recipients

In the first example, we saw individual messages going from one producer to one consumer, and how a consumer can receive messages from multiple producers. An equally common case is getting messages to more than one consumer. For that we’ll look at the output of the “Approve Color Grading” task.

The workflow, as diagramed, shows only a single consumer for the approval message. It is much more likely that multiple participants will be interested in the approval. For this example, we’ll add another: a workflow management console application in Edit and Color Management Workflow that is used to track progress, look for late delivery and so on. The message still has to go to the Dailies Screening task, of course.

Review & Approve sends the approval message – an event – to the Approval Notification named router which, for now, has two consumers – the E&CM management console, and the starting point of the Dailies Screening task.

Approval Notification is explicitly a fan-out Message Router – the same message is delivered to multiple consumers. This wasn’t necessary for the other message routers in the previous example, but they could well have been fan-out routers, to support locally centralized monitoring and logging.

So far, so simple, and it’s easy to see how fan-out applies to all sorts of notifications.

Let’s take this one step further and say that the Dailies screening task runs on different messaging infrastructure (perhaps provided by a different vendor) which will have been set up earlier with the message routers it needs (just like the E&CM system.) We will focus on just one, the “new work order” named router.

Dailies Screening adds its own things to incoming messages’ context and sends it to a Dailies Screening message router. (It can also add an entirely new context.) Since the message contents are standard, there should be very little need to translate the message itself, other than adding or changing the context.

That message is received by two applications: one the local management and tracking console, as before, and the other a “start workflow” task. (It could be set up to send a message directly to the first step of the dailies process, of course.)

“Start Dailies” reads the message and sends a request for work to the “dailies screening: transcode” task, which receives it and starts work.


This may look complicated, but the only new concept it has introduced is fan-out Message Routers. By using standard messaging system components, we are able to:

  • Send the same message to multiple recipients
  • Translate from one infrastructure to another with an application that receives from one messaging system and sends to a different messaging system.
  • Only add locally important information to the message, rather than having to translate the entire thing

There is complexity, of course, in deciding which message routers are needed to support a particular workflow, and even for a single workflow there are many ways to design the system. Workflow designers also have to think about the kinds of context they need in messages, especially when they cross from one domain to another. However, once these systems are established, they can run 24/7 without interruption, and as organizations build “libraries” of messages and messaging systems, these can be arranged and rearranged based on the specific needs of each production or task.

Next Steps

As we’ve demonstrated, there are considerable opportunities for our industry to automate mundane and repetitive tasks with computer systems. But as our workflows are complex and inherently multi-team and often multi-organizational, we need to take the time upfront to define a flexible communication system that allows those systems to talk to each other. We hope these introductions to Messaging Systems explain why we believe they offer the solution, if correctly designed, to our workflow challenges, and provide the basic components required.

We have deliberately not opined on which organizations should build, operate, and integrate with such messaging systems as we believe there’s considerable opportunity for multiple companies to participate and create value by enabling this interoperability. But we do believe that the interoperability and reusability of messaging system integrations can strongly benefit from common practices on the contents of message headers and some aspects of their payloads, especially for recurring patterns like “Review & Approve.” We look forward to working with industry partners on developing these and to seeing them used by workflow management systems and integration platforms to improve automation and reuse.

Expect to see more from MovieLabs as we explore message headers and payload contents, how messaging interfaces with workflow-driven security (see CSAP) and our connected ontologies. In the meantime, feel free to reach out and join this discussion – send us a message and tell us what you think of interoperable message systems for content production.

[1] That variety is apparent in various service providers’ implementations.

PSST… I have to tell you something – Part 1
https://movielabs.com/psst-i-have-to-tell-you-something/ – Wed, 25 May 2022
Sending messages through workflows – Part 1


Clear communication is critical to the content creation process. And while today’s productions somehow manage to compensate for inefficient communication mechanisms, there is a growing and urgent need to streamline the way we communicate and exchange information as we continue to scale up to meet the increasing demand for content. In our blog “Cloud. Work. Flows”, we identified some missing components that are required to enable software-defined workflows. We highlighted that a more efficient messaging system will be critical to improving communication between participants (which could be people or machines) in a complex workflow system.

We’ve addressed communication elements in the Ontology for Media Creation, which covers some aspects of what needs to be communicated. Recently we’ve been turning our attention to how to express that communication in the most efficient manner. For example, the first principle of the 2030 Vision states that content goes straight to the cloud and does not need to be moved. Once ingestion to the cloud has completed, the first participants in the chain will need to be notified that the content is now in the cloud and ready to be worked on, ideally including the location of that content. A similar workflow notification message is required when a task has been finished and the work is ready for review by another team member.

In this post we’ll discuss the benefits of a common approach to communicating these repeating types of workflow messages. In a subsequent post we’ll get into the technicalities of how we think such a system could be built, including considerations to enable it to span cloud infrastructures and tools.

The Art of the Message

We need to deal with both simple messages, in near real-time, between two participants, like this:

Simple Message System

Synchronous Real-Time Messages between two known participants.

And more complex messages, especially as we move to more automated systems, where the participants may not know who will pick up the messages and may not receive replies for hours or even days. Take, for example:

Render

Asynchronous Messages between one sender and multiple potential recipients.

In this example the Render Manager (the message sender) doesn’t know which nodes may respond or when they may respond. There are thousands of such nuanced examples in production workflows that we need to consider when thinking about the sorts of messages that could be sent between systems. We need a messaging approach that can accommodate all of these message types, and also the complexity of multi-cloud infrastructure when messages may be flowing between systems that are not all owned/leased or operated by the same organization and on the same infrastructure.

Software Messaging Systems

At MovieLabs we’ve been thinking about approaches to these messaging problems. One approach is using point-to-point API calls between all these disparate systems; while appropriate for many use cases, we don’t believe this will scale to whole productions or studios – there would simply be too many custom integrations to get all the possible components of a workflow to work together[1]. We see the best way to manage the highly asynchronous delivery of information to multiple (potentially unknown to the sender) destinations is to decouple the mechanics of the communication – the what from the how. In software systems this can be managed in a more automated way using Message Queues[2]. A message queue allows a message to be sent blindly (the sender does not need to know specifically who will read it). Specific queues are typically associated with a particular topic; any other participant with an interest in that topic can then subscribe to the queue and receive its messages whenever they’re ready.

Message queues – or, more broadly, message systems – are a natural fit for software-defined workflows: their raison d’être is to provide a communication mechanism where senders and receivers can operate without knowing anything about each other beyond how to communicate (the message queue) and some expectations about the contents of the communication (the messages). This separation allows applications to run independently and, just as importantly, be developed independently.

As long as the sending and receiving applications can both access the message queue, it doesn’t matter where the applications are running; they can be in the same cloud, in different clouds, on a workstation in a cloud, or even in two organizations. Agreeing on commonality of some aspects of message headers and message contents can enable interoperability, especially in that cross-organizational use case. For example, if a message from an editorial department to a VFX house includes a commonly agreed upon place to put shot and sequence identifiers, a workflow management system at the VFX house can route that message to the appropriate recipients internally.
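As a hypothetical illustration of that idea (the field names and identifiers below are invented for the example, not an agreed industry schema), a workflow system could route on common header fields without understanding every detail of the payload:

```python
# Hypothetical message layout: the header carries commonly agreed fields
# (sender, topic, sequence/shot IDs) so a receiving organization can route
# the message internally without parsing the full payload.
message = {
    "header": {
        "from": "editorial",
        "topic": "vfx.turnover",
        "sequence": "SEQ_040",
        "shot": "SH_0170",
    },
    "payload": {"asset_id": "urn:example:asset:abc123",
                "note": "new plate available"},
}

def route(msg, routing_table):
    """Pick internal recipients using header fields alone."""
    return routing_table.get(msg["header"]["sequence"], [])

recipients = route(message, {"SEQ_040": ["comp-team", "vfx-supervisor"]})
print(recipients)  # ['comp-team', 'vfx-supervisor']
```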

The use of messaging systems rather than point-to-point integrations also makes it easier to gather together operational data for logging, dashboards, and detecting/acting on exceptions and errors.

Benefits

As we look to the 10 Principles of the 2030 Vision, we can see that messaging is key to enabling the “publish” function (where access to files is pushed through a workflow as tasks are created) and enabling participants to “subscribe” to those files, tasks, or changes. Principles 1 and 2 of the 2030 Vision state that assets go to the cloud and do not need to move, which means sharing the location of those files becomes a key message that will also need to be exchanged between systems. By enabling a robust multi-cloud[3] ecosystem with broadly distributed and understandable messages, we hope to unlock the true flexibility of software-defined workflows. But we do not need to wait for the entirety of the 2030 Vision ecosystem to be built before we can take advantage of messaging systems – there are many use cases that can be deployed now to enable more interoperable and flexible workflows in 2022 and beyond.

Next Up…

In our next post we’ll discuss the software elements of a messaging system, types of messages and the use cases we hope to enable with one.  Make sure you stay tuned, so you get the message…

[1] Not to say that API calls don’t have their place in interoperability. We’re very supportive of applications exposing APIs for system-to-system communication involving small amounts of data or messages that need a guaranteed, reasonably quick response.

[2] Message queues are a familiar element of software engineering, suited to a wide variety of inter-process communication problems. Operating systems use them internally, and business and process automation software makes heavy use of them.

[3] We define “cloud” in the 2030 Vision as private, public and hybrid infrastructures connected to the internet, and therefore envision productions that need to span all of those. See the multi-cloud blog here for more details.

The post PSST… I have to tell you something – Part 1 appeared first on MovieLabs.

A Vision through the Clouds of Palm Springs at the HPA Tech Retreat 2022 https://movielabs.com/a-vision-through-the-clouds-of-palm-springs-at-the-hpa-tech-retreat-2022/?utm_source=rss&utm_medium=rss&utm_campaign=a-vision-through-the-clouds-of-palm-springs-at-the-hpa-tech-retreat-2022 Tue, 08 Mar 2022 19:05:18 +0000 https://movielabs.com/?p=10538 Mark Turner reviews the 2022 HPA Panel featuring a progress update on the 2030 Vision from Autodesk, Avid, Disney, Google, Microsoft and Universal.

The post A Vision through the Clouds of Palm Springs at the HPA Tech Retreat 2022 appeared first on MovieLabs.

In the last week of February, entertainment technology luminaries from across the world gathered at the Hollywood Post Alliance’s Tech Retreat. Rubbing their eyes as they adjusted to the bright Palm Springs sunshine after two years of working from home in pandemic-induced Zoom and Teams isolation, a sold-out crowd gathered for four days of conference sessions, informal and spontaneous conversations, and advanced technology demonstrations. MovieLabs, the non-profit technology joint venture of the major Hollywood studios, was also present en masse with a series of sessions highlighting the progress and next steps toward its “2030 Vision” for the future of media creation.

Seth Hallen, Managing Director of Light Iron and HPA President, who presented a panel at the HPA Tech Retreat on the recent cloud postproduction of the upcoming feature film ‘Biosphere’, commented that “this year’s Tech Retreat had a number of important themes including the industry’s continued embrace of cloud-based workflows and the MovieLabs 2030 Vision as a roadmap for continued industry alignment and implementation.”

MovieLabs CEO Richard Berger was joined by a panel of technology leaders from across studios, cloud providers, and software companies to discuss how they see the 2030 Vision, what it means for their organizations, and how they are democratizing the vision to form a shared roadmap for the whole industry.  Introducing the panel, Berger provided the context for the discussion and the original vision paper: “our goal was to provide storytellers with the ability to harness new technologies, so that the only limits they face are their own imaginations and all with speed and efficiency not possible today”.

Of course, no discussion about the future of production technology can start without reflecting on the impacts of COVID and the opportunities for change it provides.  Eddie Drake, Head of Technology for Marvel Studios, said “the pandemic accelerated our plans to go from on-prem to a virtualized infrastructure…and it created a nice environment for change management to get our users used to working in that sort of way”.  Jeff Rosica, CEO of Avid, summarized this pivotal moment in time: “if we weren’t aligned, if we were all off in a different direction doing our own things, we’d have a mess on our hands because this is a massive transformation. This is bigger than anything we’ve done as an industry before”.  Matt Sivertson, VP and Chief Architect, Entertainment and Media Solutions at Autodesk, is a relative newcomer to both Autodesk and the industry; he explained how the 2030 Vision was used as shorthand for the job description in his new role, noting that when “all your largest customers tell you exactly what they want, it’s probably pretty smart to listen” and that he’s looking forward to seeing how “we can all collaborate together to make it a reality”.

The panel discussed the work done so far in cloud-based production workflows and the work still to be done. Drake of Marvel said “we’re going to be working very aggressively” with both vendors and in-house software teams to accelerate cloud deployments in key areas where they see the most immediate opportunity, including set-to-cloud (where he sees tools maturing), dailies processes, the turnover process, editorial, mastering, and delivery.  Michael Wise, SVP and CTO of Universal Pictures, explained they have been focusing their cloud migration on distributed 3D asset creation pipelines leveraging Azure on a global basis, initially at DreamWorks but soon on live-action features as well, all so they can leverage talent from around the world. Wise said “As we’ve done that work we’ve been leaning into the work of MovieLabs and the ETC to make sure what we’re building leverages emerging industry standards including the ontology, VFX interop specs from ETC, and interoperability from MovieLabs”.

Buzz Hays, a “recovering producer”, industry post veteran, and now Global Lead, Entertainment Industry Solutions for Google Cloud, summarized the improvements we can enjoy from a cloud-based workflow, saying “what we’re looking at is how can we make this a more efficient process and eliminate the progress bars and delays that can end up costing money?” Hanno Basse, CTO Media & Entertainment for Microsoft Azure, agreed and added “you need to rearchitect what you’re doing – why are you going into the cloud?” He listed the main reasons Microsoft is seeing for cloud migrations, including enabling global collaboration, talent using remote workstations from anywhere, and enabling a more secure workflow where all assets are protected to the same, consistent level.  Picking up on the security theme, Hays challenged the perceived notion that there is a conflict between security and productivity, questioning “why are those mutually exclusive?” and urging that we should “come up with solutions that are invisible to the end user, that are secure, that tick all the boxes and are truly hybrid in nature that work on-prem and are multi-cloud”. Hays went on to explain how zero-trust security, aligned with the MovieLabs Common Security Architecture for Production, works based on the notion of flipping security “inside out” to secure the core data first, rather than focusing on external perimeters and keeping bad actors out.  “Ultimately”, he said, “until we get to the ‘single-source of truth’ cloud version, then there are copies of everything flying around productions and you never get all those back”.

Building workflows that leverage interoperability between common building blocks was a core theme of the discussion and was embraced by all the panelists.  Wise from Universal said “A bad outcome would be a ‘lift and shift’ from the on-premises technologies and specs and just putting them in the cloud. We’ve got a moment in time to make our systems interoperable…and interoperability is the key not just for asset reuse but also asset creation and distribution”.  Basse from Microsoft was more prescriptive about what interoperability needs to include: we have to “have the industry come together and define some common data models, common APIs, common ways of accessing the data, how that data relates to others and handing it off from one step in the workflow to the next”.  He gave the example of 3D assets that are typically recreated because prior versions cannot be easily discovered and shared between applications and productions: during his seven years at 20th Century Fox, the White House was destroyed in at least 10 movies and TV shows, and every time the asset was recreated from scratch. Allowing assets to be reused and made interoperable between different pipelines and applications will therefore open up workflow efficiencies, speeding content’s time to market.

Basse made the case that creative applications running in the cloud on virtual machines are not the optimal solution for where we need to get to, but an interim step toward ultimately becoming SaaS-based services running on serverless infrastructure.

When discussing the opportunities ahead, the panelists also agreed that no one company can complete this migration by itself and that it will require work to share data and collaborate.  Sivertson from Autodesk said “our intention is to be very open with data access and our APIs as the data is not ours, the data is our customers’ and they should be able to decide where it goes…if providers jealously guard the data as a source of differentiation you’ll probably get left behind”.  Rosica explained how the 2030 Vision enables Avid to have a common shared goal, as we’ve all agreed what the “desired state is and what the outcomes are that we’re looking for, and that allows us to develop roadmap plans, not just for ourselves but all of our partners in the industry, as we all need to interoperate together”.

Interestingly many of the themes explored in the HPA Tech Retreat panel echo the key learnings in MovieLabs’ latest paper in the 2030 Series – an Urgent Memo to the C-Suite which explains how investments in production technology can enable the time savings, efficiencies and workflow optimizations from a cloud-centric, automatable, software-defined workflow.  It will certainly be interesting to see how far the industry has come in the 2030 journey by the HPA Tech Retreat 2023, hopefully without the masks and COVID protocols!

 


MovieLabs Urgent Memo to the C-Suite https://movielabs.com/movielabs-urgent-memo-to-the-c-suite/?utm_source=rss&utm_medium=rss&utm_campaign=movielabs-urgent-memo-to-the-c-suite Wed, 16 Feb 2022 00:04:19 +0000 https://movielabs.com/?p=10501 MovieLabs makes the case that Investing in Production Technology and Cloud Centricity is No Longer an Option – it is Table Stakes.

The post MovieLabs Urgent Memo to the C-Suite appeared first on MovieLabs.

We published our 2030 Vision white paper, “The Evolution of Media Creation,” with the goal “to empower storytellers to tell more amazing stories while delivering at a speed and efficiency not possible today.” In that paper, we described 10 principles as key elements of the 2030 world we envisioned. Our call to action to the industry was “to collaborate by appropriate means to achieve shared goals and continue to empower future storytellers and the creative community.” When writing the “2030 Vision,” we debated the target audience but ultimately concluded that it should be aimed at production technologists (CTOs, CIOs, cloud companies, SaaS providers, technology companies, software architects) – those who would not only recognize the challenges we were highlighting and the merits of the principles we articulated, but also help design the technical solutions. However, we also highlighted that enabling the vision would take more than just technologists. Realizing the Vision also requires alignment and support from senior leadership across finance, marketing, operations, and production, and even board members who provide organizations with guidance on strategy, governance, and long-term risk.

Since releasing that original white paper, production technology leaders from across the industry have embraced the 2030 Vision, making it the industry’s reference for the future of media creation. And while this alignment is absolutely critical to our shared vision’s success, today we’re releasing a new white paper [Urgent Memo to the C-Suite LINK] aimed at leadership across the content creation ecosystem – chief executives, chief financial officers, chief people officers, as well as board members, production executives, and production companies. And we have a simple message – companies that want not just to survive but to thrive in the modern content ecosystem need to invest in production technology now.

Much like investments in distribution technology 10 years ago enabled the rapid rise in consumer demand for streaming media services, we now need to make a corresponding investment in production technology to more efficiently create the content that our growing, global audiences are demanding. Let’s define what we mean by production technology – it’s often assumed to be just on-set technologies like virtual production, cameras, and LED walls, but it’s broader than that: it includes all technology and systems used to create final movies and shows, including asset management, creative software tools, onboarding and talent scheduling, job management, networks and infrastructure, and much more.

To place this technology vision in a business context, against the backdrop of what our industry is now facing, we have identified 5 trends that are shaping content creation and 3 strategic imperatives that organizations should follow now to stay ahead of those trends. Technology is certainly a key part, but this is not a technical paper, nor a call for technology investment for its own sake. There are clearly rationalized reasons why and how we must invest now to ensure competition and choice in the future, and to avoid repeating the mistakes of the past, when we had multiple opportunities to reinvent our content creation ecosystem but shied away from making the difficult, fundamental changes that could have unlocked significant efficiencies and value. Our new “Memo to the C-Suite” paper is marked “Urgent” because these changes are transformational and will take time – so we all need to act now to realize our shared vision as soon as possible.

Our industry is at a critical inflection point as emerging technologies (cloud, automation, AI, real-time engines) approach mass adoption and we reemerge from a pandemic that at once crippled our industry and enlivened it. We cannot waste this opportunity to reinvent our 100-year-old production processes and create a more dynamic content creation ecosystem, optimized for the sorts of content consumers are demanding now and will demand in the future.

And while this paper is clearly not literally a “memo to the C-suite,” it does go down easy. So, download your copy of the MovieLabs Urgent Memo to the C-Suite here and encourage your colleagues and friends to do the same.

The time for action is now. For more information, please follow MovieLabs on LinkedIn #2030Vision.


Through the Looking Glass https://movielabs.com/through-the-looking-glass/?utm_source=rss&utm_medium=rss&utm_campaign=through-the-looking-glass Tue, 01 Feb 2022 09:14:16 +0000 https://movielabs.com/?p=10295 Locating assets in a multi-cloud workflow.

The post Through the Looking Glass appeared first on MovieLabs.


Some Background

In our July 2021 blog, “Cloud.Work.Flow”, we listed several gaps that will need to be closed to enable the 2030 Vision of Software-Defined Workflows that span multiple cloud infrastructures – which is the way we expect all workflows to ultimately run. In this blog we’ll address one of those gaps – namely, that “applications need to be able to locate and retrieve assets across all clouds” – and how we’re thinking about systems to close it.

To understand why this is a problem, we need to dig a little into the way software applications store files. Why do we need to worry about applications? Because almost all workflow tasks are now conducted by some sort of software system – most creative tasks run in software, and even capture devices like cameras are running complex software. The vast majority of this software can access the internet, and therefore private and public cloud resources, and yet it is still based on legacy file systems from the 1980s. Our challenge with interconnecting all of the applications in the workflow therefore often boils down to how applications store their data. If we fix that, we can move on to some more advanced capabilities in creative collaboration.

Typically, a software application stores the locations of the files it needs using file paths that indicate where they are stored on a locally accessible file system (like “C:/directory/subdirectory/file_name”). So, for example, an editing application will store the edits being made in an EDL file that is recorded locally (as it’s being created and constantly amended), and the project includes an index with the locations of all the files being manipulated by the editor. Media Asset Management systems also store the locations of files in a database, with similar file paths, like a trail of breadcrumbs, to follow and locate the files. If the files in these file systems move or are not where the application expects them to be when it needs them, then trouble ensues.

Most applications are built this way, and while they can be adapted to work with cloud resources (for example, by mounting cloud storage to look like a local file system), they are not inherently “cloud aware” and still maintain the names and locations of needed files internally. There are 3 major drawbacks to this approach in collaborative workflows like media creation:

  1. Locating a shared file may depend on having a common file system environment. E.g., NAS drives must always be mounted with the same drive letter.
  2. Locating the file is complicated when the file name plus the file path is the guarantee of uniqueness.
  3. Moving a file (i.e., copy then delete) will break any reference to the file.

We are instead aiming for a cloud foundation which supports a dynamic multi-participant workflow and where:

  • Files can move, if necessary, without breaking anything.
  • Files don’t have to move, if it’s not necessary.
  • If files exist in more than one place, the application can locate the most convenient instantiation.
  • Systems, subject to suitable permissions, can locate files wherever they are stored.
  • The name of a file is no longer an important consideration in locating it or in understanding its contents or its provenance.[1]

With these objectives in mind, we have been designing and testing a better approach to storing files required for media workflows. We’ll reveal more later in 2022 but for now we wanted to give you a preview of our thinking.

Identifying Identifiers

To find these files anywhere across the cloud, what we need is a label that always and uniquely refers to a file, no matter where it is. This kind of label is usually called an identifier. The label must be “sticky” in that it should always apply to the same file, and only to that file. By switching to an identifier for a file, instead of an absolute file location, we can free up a lot of our legacy workflows and enable our cross-cloud future.

Our future solution therefore needs to operate in this way:

  • Participating workflow applications should all refer to files by a common and unique identifier
  • Any workflow component can “declare” where a file is (for example, when a file is created)
  • Any workflow component can turn a unique identifier into at least one location (using the declaration above)
  • Locations are expressed in a common way – by using URLs.

URLs (Uniform Resource Locators) are the foundation of the internet and can be used to describe local file locations (e.g., file://), standard network locations (e.g., http:// or https://), proprietary network locations (e.g., s3://) or even SaaS locations (e.g., box:// used by the web service company Box).
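Because a URL carries its scheme up front, software can handle all of these flavors uniformly. For example, Python’s standard library parses the scheme and location of the URLs below (all made up for illustration) without any special-casing per storage provider:

```python
from urllib.parse import urlparse

# Illustrative URLs only -- the buckets and hosts here are invented.
urls = [
    "file:///mnt/dailies/shot_010.mxf",     # local file system
    "https://assets.example.com/shot.mxf",  # standard network location
    "s3://prod-bucket/ocf/shot.mxf",        # proprietary cloud storage scheme
]

for url in urls:
    p = urlparse(url)
    # scheme tells the application which access mechanism to use;
    # netloc/path say where the asset lives within that mechanism
    print(p.scheme, p.netloc, p.path)
```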

The key to this scenario is a web service that, when presented with a unique identifier, will return the URL location, or locations, of that file. We call this service a resolver, and it’s a relatively simple piece of code that acts much like a highly efficient librarian who, when presented with the title and author of a book, can tell you on which shelf to go and get it.
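In code, the librarian analogy is tiny. This sketch (the identifiers and URLs are invented for illustration, and a production resolver would add authentication, persistence, and a network API) shows the two essential operations: declaring a location and resolving an identifier:

```python
class Resolver:
    """Sketch of a resolver service: maps a sticky asset identifier
    to one or more URL locations."""

    def __init__(self):
        self._locations = {}  # identifier -> list of URLs

    def declare(self, identifier, url):
        """Any workflow component can declare where a file is."""
        self._locations.setdefault(identifier, []).append(url)

    def resolve(self, identifier):
        """Turn a unique identifier into at least one location."""
        urls = self._locations.get(identifier)
        if not urls:
            raise KeyError(f"unknown identifier: {identifier}")
        return urls

resolver = Resolver()
resolver.declare("urn:example:asset:xyz", "s3://prod-bucket/ocf/xyz.mxf")
resolver.declare("urn:example:asset:xyz", "https://cache.example.com/xyz.mxf")
print(resolver.resolve("urn:example:asset:xyz"))
```

Note that the application asking for `urn:example:asset:xyz` never learns, or cares, how the file is organized on the storage side; it just gets back one or more URLs it can fetch.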

Even though MovieLabs created the industry standard Entertainment ID Registry (EIDR), we are not proposing here a universal and unique identifier for every element of every production (that would be a massive undertaking); instead, we believe that each production, studio, or facility will run its own identifier registries and resolvers.

We have discussed before why we believe in the importance of splitting the information about a file (for example, what type of file it is, what it contains, where it came from, which permissions various participants have, its relationships to other files, etc.) from the actual location of the file itself. In many cases applications don’t need to access the file (and therefore won’t need to use the resolver) because they often just need information about the file, which can come from an asset manager. We can envision a future in which MAMs contain rich information about a file and just the identifier(s) used for it, and utilize a resolver to handle the actual file locations.

With this revised approach, our application uses an external resolver service to obtain URLs, which it can then use over the network to retrieve the file(s) it needs.

The diagram above shows how the application now keeps a list of URIs, which an external resolver can turn into URLs for the files it needs.  The URL can be resolved by the network into web servers, SaaS systems, or cloud storage services directly.  So, in the example of our editing application, the application now maintains sets of unique file identifiers (for the EDL and any of the required media elements for the edit), and the resolver points to an actual location whenever the application needs to find and open those files. The application is otherwise unchanged.

Why use Identifiers and Resolvers, instead of just URLs?

Let us be clear – there are many benefits in simply switching applications to use URLs instead of file paths; that step alone would open up cloud storage and a multitude of SaaS services that would help make our workflows more efficient. However, from the point of view of an application, URLs alone are absolute and therefore do not address our goal of enabling multiple applications to simultaneously access, move, edit, and change those files. By inserting a resolver in the middle, we can abstract away from the application the need to track where every file is kept, and enable more of our objectives, including the ability to have multiple locations for each file. Also, by using a resolver, if any application needs to move a file it does not need to know about or communicate with every other application that may use that same file, now or in the future. Instead, it simply declares the file’s location to the resolver, knowing that every other participating software application can locate the file, even if that application is added much later in the workflow.

In our editing example above, the “resolver aware” editing application knows that it needs video file “XYZ” for a given shot, but it does not need to “lock” that file, so it can be simultaneously accessed, referenced, and perhaps edited by other applications. For example, in an extreme scenario, video XYZ could be updated with new VFX elements by a remote VFX artist’s application that seamlessly drops the edited video into the finished shot – without the editor needing to do anything but review and approve; the EDL itself is unchanged, and none of the applications involved need to be aware of the filing systems used by the others.

The resolver concept also has another key advantage: with some additional intelligence, the resolver can return the closest copy of a file to the requesting application. Even though Principle 1 in the 2030 Vision indicates that all files should exist in the cloud with a “single source of truth,” we recognize that sometimes files will need to be duplicated for performance – for example, to reduce the latency of a remote virtual workstation in India for assets that were originally created in London. In those cases the resolver can help: the applications all share one unique identifier for a file, but the resolver can return the original location to participants in Europe and the location of a cached copy in Asia to a participant requesting access from India.
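One sketch of that “closest copy” behavior – assuming, purely for illustration, that each declared location is tagged with a region (a real resolver might instead infer proximity from network topology) – might look like this:

```python
def closest_location(locations, requester_region):
    """Prefer a copy in the requester's region; otherwise fall back to
    the first declared (original) location. Region tags are an
    illustrative assumption, not part of any published specification."""
    for region, url in locations:
        if region == requester_region:
            return url
    return locations[0][1]

# Two declared copies of the same identifier: the London original
# and a cache in Mumbai (names invented for the example).
locations = [
    ("eu-west", "s3://london-bucket/asset.mxf"),   # original
    ("ap-south", "s3://mumbai-cache/asset.mxf"),   # cached copy
]

print(closest_location(locations, "ap-south"))  # s3://mumbai-cache/asset.mxf
print(closest_location(locations, "us-east"))   # s3://london-bucket/asset.mxf
```

Either way, the applications involved keep using the same identifier; only the resolved URL differs by requester.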

What needs to happen to enable this scenario?

MovieLabs is busy designing and testing these and other new concepts for enabling seamless multi-cloud interoperability and building out software-defined workflows.  We’ll publish more details of our approach during 2022.  Meanwhile, there’s an immediate opportunity for application developers, SaaS providers, hyperscale cloud service companies, and others in the broader ecosystem to consider these approaches to interoperable workflows that span infrastructures and the boundaries of specific applications’ scope.

We welcome the input of other companies as we collectively work through these issues and ultimately test and deploy resolver-based systems – feel free to reach out to discuss your thoughts with us.

To ensure you are kept updated with all MovieLabs news and this new architecture be sure to follow us on LinkedIn.

 

[1] Today such information is often encoded or crammed into a file name or the combination of file name and file path.


CLOUD. WORK. FLOWS https://movielabs.com/cloud-work-flows/?utm_source=rss&utm_medium=rss&utm_campaign=cloud-work-flows Tue, 20 Jul 2021 18:26:35 +0000 https://movielabs.com/?p=8348 Examining cloud ingest, publication and subscription-enabled workflows and the gaps preventing us from reaching the 2030 Vision.

The post CLOUD. WORK. FLOWS appeared first on MovieLabs.

MovieLabs has been busy over the last few months assessing cloud infrastructures (which in our definition include private, hybrid, and hyperscale service providers) and systems for their ability to support modern media workflows. Our review has been a broad assessment looking at the flow of video and audio assets into the cloud, between systems in clouds, and between applications required to run critical creative tasks. We started at the very beginning of the content creation process – from production design through script breakdown, on through every step in which assets are ingested into the cloud for processing – and then followed the primary movement of tasks, participants, and assets across a cloud infrastructure to complete a workflow.

Today, we’re publishing a list of gaps we have identified in that assessment – gaps between today’s reality and the 2030 Vision. Our intent is to create an industry dialog about how to close these gaps collectively. MovieLabs is taking on project work in each of these areas (and we’ll share more on that later), but closing these gaps will require engagement from the whole community – creatives, tools providers, service vendors, studios, and other content owners.

The MovieLabs 2030 Vision calls for software-based workflows in which files/assets are uploaded to the cloud (or clouds) and stay there. References to those assets are exchanged, and the assets are accessed by participants across many tasks. We’re not considering how a single task is carried out in the cloud – that is something which is generally possible today, and while there are benefits (such as enabling remote work in a pandemic), the migration to the cloud of a single task within a production workflow does not fully take advantage of the cloud. Instead, we’re discussing how the entirety of production workflows, every task and application, could run in the cloud with seamless interactions between them. The benefits of this are not only efficiency (less wasted time in moving and copying files), but also lower risk of errors, less task duplication, more opportunities for automation, better security, better visibility to workflow status, and more of the most precious production resources (creative time and budget) to apply to actual creative tasks that will make the content better.

So, let’s look at the current impediments we see to enabling more cloud-based workflows…

1) Much faster network connectivity is needed for ingestion of large sets of media files.

 

Major productions today generate millions of individual assets – from pre-greenlight through distribution. For cloud-based workflows, each asset requires “ingest” into a production store in the cloud. That includes not only camera image files and audio assets captured during active production, but all files created during production – the script, production notes, participant-to-participant communication, 3D assets, camera metadata, audio stems, proxy video files, and more.

As we look at these files, it’s clear that the smaller files are not a major concern for the industry. Many cloud-based collaboration platforms routinely upload a modest number of small files (<10MB) and do so over standard broadband connections, including cellular links. Indeed, some of this data is cloud-native (for example, chat files or metadata generated by SaaS apps) and does not need uploading at all.

However, today’s increasingly complex productions create huge volumes of data, often amounting to many terabytes at a time, which can cause substantial upload headaches. For example, a Sony camera shooting in 16-bit RAW 4K at 60fps will generate 2.12TB per hour in the form of 212,000 OCF files of approximately 10MB each. A multi-camera shoot with supporting witness cameras, uncompressed audio, proxies, high-resolution image files, and production metadata becomes a data hotspot streaming vast amounts of data into the cloud (or, more likely, multiple clouds). The volume of data will only increase as capture technology and production techniques evolve.

The table below illustrates the time required for file transfers using various internet connection speeds:

[Table: transfer times for various file sizes at different bandwidth speeds, which can be used to estimate the bandwidth a production needs. E.g., if a production is shooting 2 hours of footage per day in ALEXA LF RAW, it will generate 4TB of data per day per camera; anything less than a 1Gbit/s connection will be insufficient to keep up with the daily shoot schedule.]

We’ve color-coded the table to indicate in green the upload times that would be generally acceptable, on par with hard-drive-based file delivery services. Yellow indicates upload times around 24 hours, and red identifies times that are entirely impractical. While ultra-fast internet connections may not top the list of budget items for any production (especially smaller independent projects), the faster the media can be ingested to the cloud, the faster downstream processes can start, accelerating the production and reducing overall costs.

There are multiple technologies available to mitigate the upload problem, e.g., transfer acceleration, data compression, transportable drives, and bringing compute to the edge. Evaluating these and other techniques is beyond this blog’s scope, but suffice it to say that most cloud-based productions would benefit from an uncontended upload and download internet connection of greater than 1Gbps.
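As a back-of-the-envelope illustration of the table’s arithmetic (the 80% link-efficiency factor is our own assumption for protocol overhead and contention, not a measured figure), upload time can be estimated as:

```python
def upload_hours(data_tb: float, link_gbps: float, efficiency: float = 0.8) -> float:
    """Estimate hours to upload `data_tb` terabytes over a `link_gbps` link.

    `efficiency` is an assumed fraction of nominal bandwidth actually
    achieved after protocol overhead and contention (illustrative only).
    """
    bits = data_tb * 1e12 * 8                      # decimal terabytes -> bits
    seconds = bits / (link_gbps * 1e9 * efficiency)
    return seconds / 3600

# Example from the text: ALEXA LF RAW at ~4TB per day per camera.
print(round(upload_hours(4, 1.0), 1))   # ~11 hours on 1Gbit/s: keeps pace with a daily shoot
print(round(upload_hours(4, 0.1), 1))   # >100 hours on 100Mbit/s: impractical
```

On these assumptions, only a connection of roughly 1Gbit/s or more clears a daily 4TB backlog within the same day, which matches the table’s conclusion.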

Bandwidth, however, is not the only constraint on ingestion to the cloud. The ingest step must also include elements required to enable a software-defined workflow (SDW) downstream. That includes assignment of rights and permissions to files, indexing and pre-processing of media, augmentation of metadata, and asset/network security. These requirements need to be well-defined upfront so that ingested files can be accessed or referenced downstream by other participants and applications. Which leads us to …

2) There is no easy way to find and retrieve assets and metadata stored across multiple clouds.

As we have explored in other blogs (such as interoperable architectures), production assets could, and likely will, be scattered across any number of private, hybrid, and hyperscale clouds. Therefore, applications need to be able to find and retrieve assets across all of them. Breaking this down, two key steps emerge: first, determining which assets a process needs, which is often a challenge in itself, requiring knowledge of each asset’s versions and approval status; and second, determining where those assets are actually located in the clouds.

These should be considered separate processes, as not all applications need to perform both tasks. Bridging these processes in cloud-based workflows means that each asset needs to be uniquely identifiable so that applications can consistently identify an asset independent of its location and then locate and access the asset.

Architectural clarity on separating these functions is an important prerequisite to addressing this gap. It will also require the industry to develop multi-cloud mechanisms for resolving asset identifiers into asset locations and the integration of those mechanisms with workflow and storage orchestration systems, work that will likely take many years to complete.
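As a minimal sketch of that separation (the identifiers, locations, and registry structure below are invented for illustration and are not part of any specification), an application first names the asset it needs, then resolves that name into one or more locations:

```python
# Hypothetical multi-cloud resolver: maps a location-independent asset
# identifier to every place the asset is currently stored.
RESOLVER = {
    "asset:ocf/A001_C002": [
        "s3://prod-bucket-us/ocf/A001_C002.arri",   # hyperscale cloud copy
        "gs://archive-eu/ocf/A001_C002.arri",       # replicated archive copy
    ],
}

def resolve(asset_id):
    """Step two: turn 'which asset' into 'where it lives', across clouds."""
    return RESOLVER.get(asset_id, [])

locations = resolve("asset:ocf/A001_C002")
print(len(locations))   # 2: the same asset, reachable on two clouds
```

Because the identifier never changes when the asset moves or is replicated, applications can keep referring to the same asset while the resolver tracks its physical locations.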

3) We need more interoperability between commercial ecosystems.

In the early days of mobile data, what consumers could do with their data-capable cell phones was controlled by the cellular operators. Consumers were constrained by their choice of operator in the devices they could use, the services they could access, and the apps they could install. The connections between those commercial ecosystems were limited. That service model has fallen away because it constrained consumer freedom to go anywhere on the Internet, load any apps they chose, on any compatible device they chose.

We are still in the early days of cloud production and yet we’re seeing parallels with those constrained ecosystems from the early mobile internet. That means, for example, that production SaaS services today sometimes obscure where media files are held, allowing data to be accessed only through the service’s applications. As a result, cloud production systems can sometimes deliver less functionality than on-premises systems, which often include, as a basic function, the ability for any application to access and manipulate data stored anywhere on the network.

In any new and fast-changing environment, an internal ecosystem model can be a great way to launch new services fast, deliver a great customer experience, and innovate quickly. However, as these services mature, internal ecosystems can run into problems of scale that limit the broader adoption of new technologies and systems. For example, if file locations are not exposed to other online services, media must be moved out of one internal ecosystem and into another in order to perform work. That could mean moving from the cloud to on-prem infrastructure and back again, or from one cloud infrastructure to another and back again. Those movements are inefficient and costly, and they violate a core principle of the 2030 Vision: that media moves to the cloud and stays there, with applications coming to the media. They also create security challenges, since every movement and additional copy of media must be secured and tracked, with security policies applied across workflows and identities likewise managed and tracked across ecosystems.

Today’s content workflows are too complex for any one service, toolset, or platform to provide all the functionality that content creators need. Therefore, we need easy and efficient ways for content creators to take advantage of multiple commercial ecosystems, with standardized interfaces and gateways between them that allow tasks and participants to extend across ecosystems and implement fully interoperable workflows.

To achieve the full benefits of the 2030 Vision, we envision a future in which commercial ecosystems include technical features such as:

  1. Files and/or critical metadata are exposed and available across ecosystems so that they can be replicated or accessed by third party services (for example, by way of an API).
  2. Authentication and authorization can also be managed across ecosystems, for example, providing the ability to share federated sign-on so that a single identity can be shared across services and enabling external systems to securely change access controls via API.
  3. Security auditing of actions on all platforms is open enough to allow external services with a common security architecture to track the authorized or unauthorized use of assets, applications, or workflows on the platform.

The 2030 Vision will require dynamic security policies that extend, and enable participant authentication, across multiple internal ecosystems, including granular control of authorization (e.g., access controls) down to the level of individual participants, individual tasks, and individual data and metadata assets. Delivering dynamic policies that change frequently, and enabling real-time security management for end-to-end production workflows, will require commercial ecosystems to support a high level of interoperability and communication with one another.
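To make the first of those features concrete, here is a toy sketch (the endpoint path, catalog fields, and identifiers are all invented for illustration) of an ecosystem exposing asset locations and metadata through an API rather than hiding them behind its own applications:

```python
import json

# Hypothetical ecosystem catalog: locations and metadata are exposed,
# so third-party services can access media in place instead of moving it.
CATALOG = {
    "asset:vfx/shot_041_v3": {
        "location": "s3://ecosystem-a/vfx/shot_041_v3.exr",
        "metadata": {"status": "approved", "version": 3},
    },
}

def handle_request(path):
    """Toy request handler standing in for a REST endpoint like GET /assets/<id>."""
    asset_id = path.removeprefix("/assets/")
    record = CATALOG.get(asset_id)
    return json.dumps(record if record else {"error": "not found"})

response = json.loads(handle_request("/assets/asset:vfx/shot_041_v3"))
print(response["location"])   # another service can now read the file in place
```

The design point is simply that the file’s location leaves the ecosystem’s boundary as data, so work can come to the media instead of the media moving between ecosystems.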

4) We still must resolve issues with remote desktops for creative tasks.

Until all creative tools are cloud-native SaaS products, cloud media files will most often be manipulated by existing applications running on cloud-based virtual machines. In a prior blog, we assessed several technical shortcomings in those technologies that prevent media ingested to the cloud from being manipulated in the same way as on local machines. These limitations were explored in our remote desktop blog and include problems such as lack of support for HDR, high-bit-depth video, and surround sound in remote desktop systems. Until we close those gaps, the ability to manipulate media files and collaborate in the cloud will be stunted.

5) People, software and systems cannot easily and reliably communicate concepts with each other.

The next key group of issues to resolve relate to communicating workflows and concepts. That communication could be human-to-human, but also human-to-machine and ultimately machine-to-machine, which will enable automation of many repetitive or mundane production tasks.

Effective software-defined workflows need standardized mechanisms to describe assets, participants, security, permissions, communication protocols, etc. Those mechanisms are required to allow any cloud service or software application to participate in a workflow and understand the dialog that is occurring. For example, a number of common words and terms of art are understood by context – slate, shot, and take, for instance. All have different meanings depending on their exact context, and it’s hard for machines to understand that nuance.

In addition to describing the individual elements of a production, we need to describe how elements relate to one another. These relationships, for example, allow a proxy to be uploaded in real-time and to stay connected to the RAW file original – which could arrive in the cloud hours or days later. Such a system needs to allow two assets stored on different clouds to be moved, revised, processed, deep archived and re-hydrated, all without losing connections to each other. The same is true of other less tangible elements such as production notes made on a particular shot – which must be related to the files captured on that shot and other information that could be useful later such as the camera and lens configurations, wardrobe decisions and even time of day and positions of the lighting. These elements and their relationships need to be defined in a common way so all connected systems can create and manage the connections between elements.
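A minimal sketch of such relationships (the class, relation name, and identifiers below are hypothetical, not a defined data model) might link a proxy to its RAW original even before the original reaches the cloud:

```python
from dataclasses import dataclass, field
from typing import Optional, Dict

@dataclass
class Asset:
    asset_id: str
    location: Optional[str] = None       # may be unknown until ingest completes
    relations: Dict[str, str] = field(default_factory=dict)

# The proxy is uploaded in real time and linked by identifier, not by path.
proxy = Asset("asset:proxy/A001_C002", "s3://dailies/A001_C002.mp4")
proxy.relations["proxy_of"] = "asset:ocf/A001_C002"   # RAW not yet ingested

# Days later the RAW original lands on a different cloud; the link still holds.
raw = Asset("asset:ocf/A001_C002", "gs://ocf-archive/A001_C002.arri")
print(proxy.relations["proxy_of"] == raw.asset_id)
```

Because the relationship points at an identifier rather than a storage path, the RAW file can be moved, archived, or re-hydrated without breaking the connection.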

6) It is difficult to communicate messages to systems and other workflow participants, especially across clouds and organizations.

Software-Defined Workflows require a large amount of coordinated communication between people and systems. Orchestration systems control the many parts of an SDW by allocating tasks and assets to participants on certain infrastructures. For those systems to work, we need agreed methods for the component systems to coordinate with each other—to communicate, for example, that an ingest has started, been completed or somehow failed. By standardizing aspects of this collaboration system, developers can write applications that create tasks with assets, create sub-tasks from tasks, create relations between assets and metadata, and pass messages or alerts down a workflow that appear as notifications for subsequent users or applications. These actions require an understanding of preceding actions, plus open standards for describing and communicating those actions in order to deploy at scale and allow messages to ripple out throughout a workflow. As an example, if an EDL is changed that impacts a VFX provider, the VFX provider should be notified automatically when the relevant change has occurred.

Our objective here is to standardize the mundane integrations that do not differentiate a software product or service in order to enable interoperability, which then frees up developer resources to focus on the innovative components and features that truly do differentiate products.
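The EDL example above can be sketched with a toy publish/subscribe mechanism (the event name and payload fields are invented; a real system would rely on an agreed, open schema so that any vendor’s tools could subscribe):

```python
from collections import defaultdict

# Minimal publish/subscribe: handlers register for event types,
# and published events ripple out to every subscriber.
subscribers = defaultdict(list)

def subscribe(event_type, handler):
    subscribers[event_type].append(handler)

def publish(event_type, payload):
    for handler in subscribers[event_type]:
        handler(payload)

notifications = []
# A VFX vendor subscribes to EDL changes that affect its shots.
subscribe("edl.changed", lambda e: notifications.append(
    f"notify VFX: {e['edl']} changed, shots {e['shots']}"))

publish("edl.changed", {"edl": "reel_3_v12", "shots": ["VFX_041", "VFX_042"]})
print(notifications[0])   # the vendor is notified automatically
```

Standardizing the event types and payloads is the undifferentiated plumbing referred to above; the handlers are where vendors add their own value.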

7) There is no easy way to manage security in a workflow spanning multiple applications and infrastructure.

Our cloud-based approach (as explained in the MovieLabs Common Security Architecture for Production (CSAP)) is a zero-trust architecture. This approach requires every participant (whether a user, automated service, application, or device) to be authenticated before joining any workflow and then authorized to access or modify any particular asset. This allows secure ingest and processing of assets in the cloud. Realizing the benefits of this aspect of the 2030 Vision, however, also requires closing some key gaps.

When content owners allocate work (either to vendors or within their own organization’s security systems), they select rights and privileges that typically are constrained to the cloud service or systems on which the work is occurring. In the case of service providers, the contract stipulates certain security protections and usually requires external audits to validate that the protections are understood and implemented correctly. In addition, each of the major hyperscale cloud service providers also provides identity, authorization, and security services for storage and services running on its cloud. Some of these cloud tools, but not all, extend to other cloud service providers. The result is a potential hodgepodge of security tools, systems, and processes that do not interoperate. Since complexity is the enemy of good security, security models and frameworks should identify and standardize commonalities now, before the security implementations get too complex.

Today the industry is in a quandary as to which security and identity services to use for authorizing and authenticating users to support workflows with assets, tools and participants scattered across multiple infrastructures. The MovieLabs CSAP was designed to provide a common architecture to deal with these issues in an interoperable manner and we’re working now with the industry to enable its implementation across clouds and application ecosystems.
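The zero-trust pattern itself is simple to illustrate (the policy table and identifiers below are toy stand-ins, not the CSAP specification): every request is checked per participant, per action, and per asset, and anything not explicitly allowed is denied:

```python
# Hypothetical policy table: (participant, action, asset) -> allowed.
POLICIES = {
    ("puid:editor-42", "read",  "asset:ocf/A001_C002"): True,
    ("puid:editor-42", "write", "asset:ocf/A001_C002"): False,
}

def is_authorized(participant, action, asset):
    """Default-deny: anything not explicitly allowed is refused."""
    return POLICIES.get((participant, action, asset), False)

print(is_authorized("puid:editor-42", "read", "asset:ocf/A001_C002"))   # allowed
print(is_authorized("puid:vendor-7", "read", "asset:ocf/A001_C002"))    # denied
```

The interoperability gap described above is precisely that each cloud and service today holds its own version of this table, in its own format, with no common way to express or synchronize the policies across them.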

8) There is no easy way to manage authentication of workflow participants from multiple facilities and organizations.

In today’s workflows, a post-production vendor may require a creative user to log in to a local workstation to work, with another login required to access the SaaS dailies platform to review the director’s notes, and a third login (with separate credentials) needed to run file transfers for assets to work on. In an ideal world, one login would be usable across all platforms, with policies from the production determining permissions. Those policies, along with work assignments and roles, would seamlessly manage the user’s access to assets, tools, and applications without requiring creation and maintenance of separate credentials for every system.

Our industry is unique in the number of independent contractors and small companies that are critically important to productions. A single Production User ID (PUID) system would make many lives easier, as well as allowing software tools to identify participants in a consistent way. It would make it much easier to onboard creatives to productions and remove them afterwards, with far less chance of users forgetting, or writing down on post-it notes, the dozens of username and password combinations for each system.
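As an illustration only (the directory structure, service names, and roles below are hypothetical), a PUID would let production policy assign per-service roles to one identity instead of separate credentials per system:

```python
# Toy PUID directory: one identity, with roles granted per service by the
# production's policies rather than per-system usernames and passwords.
DIRECTORY = {
    "puid:jane-doe-0147": {
        "workstation":   "artist",
        "dailies_saas":  "reviewer",
        "file_transfer": "sender",
    },
}

def role_for(puid, service):
    """Return the role this identity holds on a service, or None."""
    return DIRECTORY.get(puid, {}).get(service)

print(role_for("puid:jane-doe-0147", "dailies_saas"))  # one login, policy-driven role
print(role_for("puid:jane-doe-0147", "vfx_pipeline"))  # None: not onboarded there
```

Offboarding then becomes a single operation, removing the PUID entry, rather than a hunt for stray accounts across every vendor’s systems.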

9) We will need a comprehensive change management plan to train and onboard creative teams to these new more efficient ways of working.

Many of these cloud-based workflow changes will require new or adapted tools and processes. Much of the complexity can be obscured from individual users, but there are always usability lessons, training, and change management issues to consider when implementing a new way of working. Productions are high-risk, high-stress endeavors, so we need to implement these systems and onboard teams without upsetting workflows. Developing trust amongst creative teams takes many years and experience in actual productions. The changes proposed here likewise will need considerable time to establish trust and convince creatives that they can securely power productions with better efficiency and improved collaboration. Fortunately, the software-defined workflows described here use the same mechanisms available in other collaboration platforms already widely used today – Slack for real-time collaboration, Google Docs for multi-person editing, Microsoft Teams for integrated media and calling. Those tools provide the model for real-time and rapid decision-making that we want to bring to media creation.

As the industry looks to ramp back up after the COVID shutdowns, it’s worth noting that the true potential of the cloud for production workflows was not exploited during temporary work-from-home tasks. If we can execute on a more collaborative view of entire production systems operating across cloud infrastructures, we believe we can “build back better” and enable far more efficiency in our new workflows.

If the industry can close these nine gaps, we will be closer to realizing a true multi-cloud-based workflow from end to end. Some of these challenges are beyond what any one company can solve (e.g., the availability of low cost, massively high bandwidth internet connections). Still, there are areas where we can work together to close the gaps. To that end, MovieLabs has been working to define some of the required specifications, architectures, and best practices and in subsequent posts, we will elaborate on some of these solutions in more detail.
