Are we there yet? Part 3: Gap Analysis for the 2030 Vision
https://movielabs.com/are-we-there-yet-part-3/ (January 9, 2024)

In this final part of our blog series on the current gaps between where we are now and realizing the 2030 Vision, we'll address the last two sections of the original white paper and look specifically at gaps around Security and Identity and Software-Defined Workflows. As with the previous blogs in this series (see Parts 1 and 2), for each gap we'll include the gap as we see it, an example of how it applies in a real workflow, and the broader implications of the gap. For ease of reference, the gap numbering continues from Parts 1 and 2.

So let’s get started with…

MovieLabs 2030 Vision Principle 6
  19. Inconsistent and inefficient management of identity and access policies across the industry and between organizations.

    Example: A producer wants to invite two studio executives, a director, and an editor into a production cloud service, but the team spans three different identity management systems. There's no common way to identify the correct people to give access to critical files, or to provision that access.

    This is an issue addressed in the original 2030 Vision, which called for a common industry-wide Production User ID (or PUID) to identify individuals working on a production. While there are ways today to stitch together different identity management and access control solutions between organizations, they are point-to-point, require considerable software or configuration expertise, and are not "plug and play." A sketch of the PUID idea follows.
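
    To make the idea concrete, here is a minimal sketch of how a common production identity could map to each organization's local identity provider, so one invitation can be provisioned across several systems. The PUID format and all field names are our own illustration; the 2030 Vision does not define a schema.

```python
from dataclasses import dataclass, field

@dataclass
class ProductionUser:
    puid: str                 # hypothetical form, e.g., "puid:show-xyz:0042"
    display_name: str
    role: str                 # e.g., "editor", "studio-exec"
    # Maps an identity provider's name to this person's subject ID within it.
    local_identities: dict = field(default_factory=dict)

def provision_access(user: ProductionUser, resource: str, idp: str) -> str:
    """Translate a PUID into the subject ID a given identity provider understands."""
    subject = user.local_identities.get(idp)
    if subject is None:
        raise LookupError(f"{user.puid} has no identity registered with {idp}")
    # A real system would call the provider's access-control API here.
    return f"grant {subject} access to {resource} via {idp}"

editor = ProductionUser(
    puid="puid:show-xyz:0042",
    display_name="A. Editor",
    role="editor",
    local_identities={"studio-okta": "a.editor@studio.example", "vendor-ad": "aeditor"},
)
print(provision_access(editor, "cloud-project:show-xyz", "studio-okta"))
```

    With a shared PUID, the producer's single invitation could be translated mechanically into grants on all three identity systems, rather than hand-configured in each.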

MovieLabs 2030 Vision Principle 7
  20. Difficulty in securing shared multi-cloud workflows and infrastructure.

    Example: A production includes assets spread across a dozen different cloud infrastructures, each under the control of a different organization, yet all needing a consistent, studio-approved level of security.

    MovieLabs believes the current "perimeter" security model is not sufficient to cope with the complex multi-organizational, multi-infrastructure systems that will be commonplace in the 2030 Vision. Instead, we believe the industry needs to pivot to a more modern "zero-trust" approach to security, where the stance changes from trying to keep intruders out to authenticating and authorizing every access to an asset or service. To that end, we've developed the Common Security Architecture for Production (CSAP), which is built on a zero-trust foundation; take a look at this blog to learn more.
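
    As a toy illustration of that zero-trust stance (a simplification for this blog, not CSAP itself), every request is evaluated against explicit policy at access time rather than trusted because it originates inside a network perimeter:

```python
# Explicitly authorized (subject, action, asset) triples; everything else is denied.
POLICY = {
    ("puid:show-xyz:0042", "read", "asset:ocf-0001"),
    ("puid:show-xyz:0042", "write", "asset:edl-0001"),
}

def authorize(subject: str, action: str, asset: str, authenticated: bool) -> bool:
    """Authenticate, then check authorization, on every single access."""
    if not authenticated:      # no implicit trust from network location
        return False
    return (subject, action, asset) in POLICY

assert authorize("puid:show-xyz:0042", "read", "asset:ocf-0001", True)
assert not authorize("puid:intruder", "read", "asset:ocf-0001", True)
```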

MovieLabs 2030 Vision Principle 8
  21. Reliance on file paths/locations instead of identifiers.

    Example: A vendor requires a number of assets to do their work (e.g., a list of VFX plates to pull or a list of clips) that today tend to be copied as a file tree structure or zipped together to be shared along with a manifest of the files.

    In a world where multiple applications, users, and organizations can be simultaneously pulling on assets, it becomes challenging for applications to rely on file names, locations, and hierarchies. MovieLabs instead recommends unique identifiers for all assets, which can be resolved via a service that specifies where a specific file is actually stored. This intermediate step provides an abstraction layer and allows all applications to find and access all assets. For more information, see Through the Looking Glass; a small sketch follows.
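
    A minimal sketch of what that looks like in practice, assuming a hypothetical identifier scheme: the VFX pull is expressed as asset identifiers rather than a zipped file tree, and locations are looked up only when needed.

```python
# A work order referencing assets by identifier, not by path (all IDs invented).
vfx_pull_manifest = {
    "work_order": "wo:show-xyz:vfx-plate-pull-017",
    "assets": [
        {"id": "asset:show-xyz:plate-0451", "purpose": "vfx-plate"},
        {"id": "asset:show-xyz:plate-0452", "purpose": "vfx-plate"},
        {"id": "asset:show-xyz:lut-main",   "purpose": "color-reference"},
    ],
}

def locate(asset_id: str) -> str:
    """Stand-in for a resolver call that maps an identifier to a storage URL."""
    return f"https://resolver.example/locations/{asset_id}"

for entry in vfx_pull_manifest["assets"]:
    print(entry["id"], "->", locate(entry["id"]))
```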

MovieLabs 2030 Vision Principle 9
  22. Reliance on email for notifications and manual processing of workflow tasks.

    Example: A vendor is required to do a task on a video asset and is sent an email, a PDF attachment containing a work order, a link to a proxy video file for the work to be done, and a separate link to a cloud location where the RAW files are. It takes several hours/days for the vendor to extract the required work, download, QC, and store the media assets, and then assign the task on an internal platform to someone who can do the work. The entire process is reversed to send the completed work back to the production/studio.

    Because we have no common systems for sending workflow requests, referencing assets, and assigning work to individual people, we have created an inherently inefficient industry. In the scenario above, a more efficient system would be for the end user to receive an automated notification from a production management system that includes a definition of the task to be done and links to the cloud locations of the proxies and RAW files, with all access permissions already assigned so they can start their work (see the sketch below). Of course, our industry is uniquely distributed between organizations that handle very nuanced tasks in the completion of a professional media project. This complicates the flow of work and work orders, but there are new software systems that can enable seamless, secure, and automated generation of tasks. We can strip weeks out of major production schedules simply by being more efficient in handoffs between departments, vendors, and systems.
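
    As a hedged sketch, this is what such an automated notification might carry in place of an email with attachments; every field name here is hypothetical:

```python
import json

task_notification = {
    "task_id": "task:show-xyz:sound-fix-0093",
    "assigned_to": "puid:vendor-abc:0007",
    "definition": "Repair dialogue on scene 42, slate 17, take 3",
    "due": "2024-02-01T17:00:00Z",
    "assets": {
        "proxy": "asset:show-xyz:proxy-42-17-3",  # identifier resolving to the proxy
        "raw": "asset:show-xyz:ocf-42-17-3",      # identifier resolving to RAW files
    },
    "permissions": "pre-granted",  # access provisioned before the task arrives
}

print(json.dumps(task_notification, indent=2))
```

    Because the payload is machine-readable, the vendor's internal platform could ingest it and assign the work directly, removing the hours or days of manual extraction described above.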

  23. Monolithic systems and the lack of API-first solutions inhibit our progress towards interoperable modern application stacks.

    Example: A studio would like to migrate their asset management and creative applications to a cloud workflow that includes workflow automation, but the legacy nature of their software means that many tasks need to be done through a GUI and that it needs to be hosted on servers and virtual machines that mimic the 24/7 nature of their on-premises hardware.

    Modern applications are designed as a series of microservices that are assembled and called dynamically depending on the process, which enables considerable scaling as well as lighter-weight applications that can deploy on a range of compute instances (e.g., on workstations, virtual machines, or even behind browsers). While the pandemic proved we can run creative tasks remotely or from the cloud, a lot of those processes were "brute forced" with remote access or cloud VMs running legacy software; that is not the intended end goal of a "cloud native" software stack for media and entertainment. We recognize this is an enormous gap to fix, and moving all of the most vital applications/services to modern software platforms will take beyond the 2030 timeframe. However, we need the next generation of software systems to expose open APIs and deploy in modern containers to accelerate the interoperable and dynamic future that is possible within the 2030 Vision. A small sketch of the API-first pattern follows.
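
    For illustration only, a minimal sketch of the API-first pattern: one narrow capability exposed over an open HTTP API so a workflow manager can drive it without a GUI. The endpoint and payload are hypothetical, and Flask is used purely for brevity; a production service would add authentication, persistence, and so on.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
JOBS = {}  # in-memory job store, for illustration only

@app.post("/v1/proxy-jobs")
def create_proxy_job():
    spec = request.get_json()  # e.g., {"asset": "asset:show-xyz:ocf-0001", "codec": "h264"}
    job_id = f"job-{len(JOBS) + 1}"
    JOBS[job_id] = {"spec": spec, "status": "queued"}
    return jsonify({"job_id": job_id, "status": "queued"}), 202

@app.get("/v1/proxy-jobs/<job_id>")
def get_proxy_job(job_id):
    job = JOBS.get(job_id)
    return (jsonify(job), 200) if job else (jsonify({"error": "not found"}), 404)

if __name__ == "__main__":
    app.run(port=8080)  # container-friendly: stateless, scaled by running more copies
```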

MovieLabs 2030 Vision Principle 10
  24. Many workflows include unnecessarily time-consuming and manual steps.

    Example: A director can’t remotely view a final color session in real time from her location, so she needs to wait for a full render of the sequence, for it to be uploaded to a file share, for an email with the link to be sent, and then for her to download it and find a monitor that matches the one that was used for the grade.

    We could write so many examples here. There's just way too little automation and way too much time wasted resolving confusion, writing metadata, reading it back, clarifying intent, sending emails, making calls, etc. Many of the technologies exist to fix these issues, but we need to redevelop many of our control-plane functions to adopt a more efficient system, which requires investment in time, staff, and development. Those that do the work will come out leaner, faster, and more competitive at the end of the process. We recommend that all participants in the ecosystem take honest internal efficiency audits to look for opportunities to improve, and prioritize the most urgent issues to fix.

Phew! So, there we have it. For anyone who believes the 2030 Vision is "doable" today, here are 24 reasons why MovieLabs disagrees. Don't consider this post a negative; we still have time to resolve these issues, and it's worth being honest both about the great progress made and about what's still to do.

Of course, there's no point making a list of things to do without a meaningful commitment to cross them off. MovieLabs and the studios can't do this alone, so we're throwing down the gauntlet to the industry: help us, to help us all. MovieLabs will be working to close those gaps that we can affect, and we'll be publishing our progress on this blog and on LinkedIn. We're asking you to do the same – share what your organization is doing by contacting info@movielabs.com, and use #2030Vision in your posts.

There are three specific calls to action from this blog for everyone in the technical community:

  1. The implementation gaps listed in all parts of this blog are the easiest to close – the industry has a solution; we just need the commitment and investment to implement and adopt what we already have. These are the ones we can rally around now, and MovieLabs has already created useful technologies like the Common Security Architecture for Production, the Ontology for Media Creation, and the Visual Language.
  2. For those technical gaps where the industry needs to design new solutions, sometimes individual companies can pick these ideas up and run with them, develop their own products, and have some confidence that if they build them, customers will come. Other technical gaps can only be closed by industry players coming together, with appropriate collaboration models, to create solutions that enable change, competition, and innovation. There are existing forums for that work, including SMPTE and the Academy Software Foundation, and MovieLabs hosts working groups as well.
  3. And though not many issues are in the Change Management category right now, we still need to work together to share knowledge and educate the industry on how these technologies can be combined to make the creative world more efficient.

We’re more than 3 years into our Odyssey towards 2030. Join us as we battle through the monsters of apathy, slay the cyclops of single mindedness, and emerge victorious in the calm and efficient seas of ProductionLandia. We look forward to the journey where heroes will be made.

-Mark “Odysseus” Turner

Are we there yet? Part 2: Gap Analysis for the 2030 Vision
https://movielabs.com/are-we-there-yet-part-2/ (December 14, 2023)

In Part 1 of this blog series, we looked at the gaps in Interoperability, Operational Support, and Change Management that are impeding our journey to the 2030 Vision's destination (the mythical place we call "ProductionLandia"). In these latter parts, we'll examine the gaps we have identified that are specific to each of the Principles of the 2030 Vision. For ease of reference, the gaps below are numbered starting from 9 (because gaps 1–8 were covered in Part 1). For each gap we list the Principle, a workflow example of the problem, and the implications of the gap.

In this post we'll look just at the gaps around the first five Principles of the 2030 Vision, which address a new cloud foundation.

MovieLabs 2030 Vision Principle 1
  9. Insufficient bandwidth and performance, and a lack of automatic recovery from variability in cloud connectivity.

    Example: Major productions can generate terabytes of captured data per day during production and getting it to the cloud to be processed is the first step.

    Even though there are studio and post facilities with large internet connections, there are still many more locations, especially remote or overseas ones, where the bandwidth is not large enough and the throughput not guaranteed or predictable enough, hobbling cloud-based productions at the outset. Some of the benefits of cloud-based production involve rapid access for teams to manipulate assets as soon as they are created, and for that we need big pipes into the cloud(s) that are both reliable and self-healing. Automatic management of those links and data transfers is vital, as they will be used for all media storage and processing; a toy sketch of such self-healing follows.
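
    A toy sketch of the self-healing behavior we mean, with the network transport simulated: a chunked transfer retries with backoff so that a flaky link degrades throughput instead of failing the whole upload.

```python
import random
import time

def send_chunk(chunk_id: int) -> None:
    if random.random() < 0.2:  # simulate a transient network error
        raise ConnectionError(f"chunk {chunk_id} dropped")

def upload(num_chunks: int, max_retries: int = 5) -> None:
    for chunk_id in range(num_chunks):
        for attempt in range(max_retries):
            try:
                send_chunk(chunk_id)
                break  # chunk delivered; move to the next one
            except ConnectionError:
                time.sleep(min(2 ** attempt, 30))  # back off, then resume
        else:
            raise RuntimeError(f"chunk {chunk_id} failed after {max_retries} tries")

upload(num_chunks=100)
print("transfer complete")
```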

  10. Lack of universal direct delivery of camera, audio, and on-set data straight to the cloud.

    Example: Some new cameras are now supporting automated upload of proxies or even RAW material direct to cloud buckets. But for the 2030 Vision to be realized we need a consistent, multi-device on-set environment to be able to upload all capture data in parallel to the cloud(s) including all cameras, both new and legacy.

    We're seeing great momentum with camera-to-cloud in certain use cases (with limited support from newer camera models) sending files to specific cloud platforms or SaaS environments. But we've got some way to go before deploying a camera-to-cloud environment is as simple and easy as renting cameras, memory cards/hard drives, and a DIT cart today. We also need support for multiple clouds (including private clouds) and/or SaaS platforms, so that the choice of camera-to-cloud environment is not a deciding factor that locks downstream services into a specific infrastructure. We've also framed this gap as not just "camera to cloud" but "capture to cloud," which includes on-set audio and other data streams that may be relevant to later production stages, including lighting, lenses, and IoT devices. All of that needs to be securely and reliably delivered to redundant cloud locations before physical media storage on set can be wiped.

  11. Latency between the "single source of truth in the cloud" and multiple edge-based users.

    Example: A show is shooting in Eastern Europe, posting in New York, with producers in LA and VFX companies in India. Which cloud region should they store the media assets in?

    As an industry we tend to talk about "the cloud" as a singular thing or place, but in reality it is not – it's made up of private data centers and the various data centers that hyperscale cloud providers arrange into "availability zones" or "regions," which must be declared when storing media. As media production is a global business, the example above is very real, and it leads to the question: where should we store the media, and when should we duplicate it for performance and/or resiliency? This is also one of the reasons we believe multi-cloud systems need to be supported, because the assets for a production may be scattered across different availability zones, cloud accounts (depending on which vendor has "edit rights" on the assets at any one time), and cloud providers (public, private, and hybrid infrastructures). The gap here is that currently decisions need to be made, potentially involving IT systems teams and custom software integrations, about where to store assets to ensure they are available at very low latency (sub-25-millisecond round trip – see Is the Cloud Ready to Support Millions of Remote Creative Workers? for more details) for the creative users who need them. By 2030 we'd expect "intelligent caching" systems or other technologies that understand, or even predict, where certain assets need to be for users and stage them close enough before they are needed (a toy sketch follows). This is one of the reasons we reiterate that we expect, and encourage, media assets to be distributed across cloud service providers and regions while merely "acting" as a single storage entity, even though they may be quite disparate. It also implies that applications need to be able to operate across all cloud providers, because they may not be able to predict or control where assets are in the cloud.
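
    A toy sketch of the replica-selection half of that idea: given several copies of the same asset across clouds and regions, return the lowest-latency copy for the requesting site. All URLs and latency figures are invented for illustration.

```python
REPLICAS = {
    "asset:show-xyz:plate-0451": [
        {"url": "s3://us-west-2/plates/0451.exr", "region": "us-west-2"},
        {"url": "gs://europe-west2/plates/0451.exr", "region": "europe-west2"},
        {"url": "private://mumbai-dc/plates/0451.exr", "region": "ap-south-1"},
    ]
}

# Measured round-trip time in ms from each user site to each region (made up).
RTT_MS = {
    ("mumbai", "us-west-2"): 230,
    ("mumbai", "europe-west2"): 120,
    ("mumbai", "ap-south-1"): 8,
}

def nearest_copy(asset_id: str, user_site: str) -> str:
    replicas = REPLICAS[asset_id]
    return min(replicas, key=lambda r: RTT_MS[(user_site, r["region"])])["url"]

print(nearest_copy("asset:show-xyz:plate-0451", "mumbai"))
# -> private://mumbai-dc/plates/0451.exr (8 ms, comfortably under a 25 ms budget)
```

    A predictive version would pre-stage copies before they are requested; that is the part that still needs to be invented.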

  12. Lack of visibility into the most efficient resource utilization within the cloud, especially before the resources are committed.

    Example: When a production today wants to rent an editorial system, it can accurately predict the cost and map it straight to its budget. With the cloud equivalent it's very hard to get an upfront budget, because the costs for cloud resources depend on predicting usage – hours of use, amount of storage required, data egress, etc. – which is hard to know in advance.

    Creative teams take on a lot when committing to a show, usually with a fixed budget and timeline. It's hard to ask them to commit to unknown costs, especially for variables that are hard to control at the outset – could you predict how many takes a specific scene will need? How many times a file will be accessed or downloaded? Or how many times a database will be queried? Even if they could accurately predict usage, most cloud billing is done in arrears, so the costs are not usually known until after the fact, and consequently it's easy to overrun budgets without even knowing it.

    Similarly, creative teams would also benefit from greater education and transparency concerning the most efficient ways to use cloud products. Efficient usage will decrease costs and enhance output and long-term usage.

    For cloud computing systems to become as ubiquitous as their physical equivalents, providers need to find ways to match the predictability and efficient use of current on-premises hardware, while keeping the flexibility to burst and stretch when required and authorized to do so. Even a rough upfront estimate helps, as the sketch below shows.
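
    A back-of-envelope sketch of the kind of upfront estimate productions need; all rates below are placeholders, not any provider's actual pricing.

```python
RATES = {
    "storage_per_tb_month": 23.0,   # USD, placeholder
    "egress_per_tb": 90.0,          # USD, placeholder
    "workstation_per_hour": 1.50,   # USD, placeholder
}

def estimate_month(storage_tb: float, egress_tb: float, workstation_hours: float) -> float:
    """Translate predicted usage into a monthly cost estimate."""
    return (storage_tb * RATES["storage_per_tb_month"]
            + egress_tb * RATES["egress_per_tb"]
            + workstation_hours * RATES["workstation_per_hour"])

# e.g., 200 TB stored, 10 TB egress, four editors working 160 hours each:
print(f"estimated cloud cost: ${estimate_month(200, 10, 640):,.2f}/month")
```

    The hard part is not the arithmetic but the inputs: takes, downloads, and queries are exactly what a production cannot predict, which is why billing in arrears is such a poor fit.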

MovieLabs 2030 Vision Principle 2
  13. Too few cloud-aware/cloud-native apps, which necessitates a continued reliance on moving files (into clouds, between regions, between clouds, out of clouds).

    Example: An editor wants to use a cloud SaaS platform for cutting their next show, but the assets are stored in another cloud, the dailies system providing reference clips is on a third, and the other post vendors are using a private cloud.

    We're making great progress with getting individual applications and processes to move to the cloud, but we're in a classic "halfway" stage where it's potentially more expensive and time-consuming to have some applications/assets operating in the cloud and some not. That requires moving assets into and out of a specific cloud to take advantage of its capabilities – and, if certain applications or processes are available only in one cloud, moving those assets specifically to that cloud. This is the sort of "assets chasing tasks" from the offline world that this principle was designed to avoid in the cloud world. We need to keep pushing forward with modern applications that are multi-cloud native and can migrate seamlessly between clouds to support assets stored in multiple locations. We understand this is not a small task or one that will be quick to resolve. In addition, many creative artists use macOS, which is not broadly available in cloud instances or in a form that can be virtualized to run on myriad cloud compute types.

  14. Audio post-production workflows (e.g., mixing, editing) do not run natively in the cloud.

    Example: A mixer wants to work remotely on a mix with 9.1.6 surround sound channels that are all stored in the cloud. However, most cloud-based apps only support 5.1 today, and the audio and video are streamed separately, so the sync between them can be "soft" in a way that makes it hard to know whether the audio is truly playing back in sync.

    The industry has made great strides in developing technologies to enable final color (up to 12-bit) to be graded in the cloud; now similar attention needs to be paid to the audio side of the workflows. Audio artists can be dealing with thousands, or even tens of thousands, of small files, and they have unique challenges that need to be resolved to enable all production tasks to be completed in the cloud without downloading assets to work remotely. The audio/video sync and channel-count challenges above are just illustrative of the clear need for investment in, and support of, both audio and video cloud workflows simultaneously, to get to a "ProductionLandia" where both can happen concurrently on the same cloud asset pool.

MovieLabs 2030 Vision Principle 3
  15. Lack of communication between cross-organizational systems (aka "too many silos") and inability to support cross-organizational workflows and access.

    Example: A director uses a cloud-based review and approval system to provide notes and feedback on sequences, but today that system is not connected to the workflow management tools used by her editorial department and VFX vendors, so the notes need to be manually translated into work orders and media packages.

    As discussed above, we're in a transition phase to the cloud, and as such we have some systems that can receive communications (messages, security permission requests) and commands (API calls), whereas other systems are unaware of modern application and control-plane systems. Until we have standard ways of communicating (both routing and common payloads for messages and notifications) and a way for applications to interoperate between systems controlling different parts of the workflow, we'll have ongoing cross-organizational inefficiencies. See the MovieLabs Interoperability Paper for much more on how to enable cross-organizational interop.

MovieLabs 2030 Vision Principle 4
  16. No common way to describe each studio's archival policy for managing long-term assets.

    Example: Storage service companies and MAM vendors need to customize their products to adapt to each different content owner’s respective policies and rules for how archival assets are selected and should be preserved.

    The selection of which assets need to be archived, and the levels of security robustness, access control, and resilience they require, are all determined by studio archivists depending on the type of asset. As we look to the future of archives, we see a role for a common, agreed way of describing those policies, so that any software storage system, asset management platform, or automation platform could read the policies and report compliance against them (a sketch follows). Doing so will simplify the onboarding of new systems with confidence.
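
    A sketch of what such a machine-readable policy might look like; the schema is entirely hypothetical, and the absence of any agreed equivalent is exactly the gap being described.

```python
ARCHIVE_POLICY = {
    "policy_id": "studio-a:archive:v1",
    "rules": [
        {
            "asset_class": "original-camera-files",
            "retention": "permanent",
            "min_copies": 3,
            "geo_separation": True,             # copies kept in distinct regions
            "fixity_check_interval_days": 180,
            "access": ["archivist", "restoration"],
        },
        {
            "asset_class": "proxies",
            "retention": "7y",
            "min_copies": 1,
            "geo_separation": False,
            "fixity_check_interval_days": 365,
            "access": ["production", "archivist"],
        },
    ],
}

def rule_for(asset_class: str) -> dict:
    """Any storage or MAM system could look up the rule it must comply with."""
    return next(r for r in ARCHIVE_POLICY["rules"] if r["asset_class"] == asset_class)

print(rule_for("original-camera-files")["min_copies"])  # -> 3
```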

MovieLabs 2030 Vision Principle 5
  17. Challenges of measuring fixity across storage infrastructures.

    Example: A studio runs a checksum against an asset before uploading it to long-term storage. Even though storage services and systems run their own fixity checks, those checksums or other mechanisms are likely different from the studio's and are not exposed to end clients. So instead, the studio needs to run its own checks for digital degradation by occasionally pulling the file back out of storage and re-running the fixity check.

    As there's no commonality between the fixity systems used in major public clouds, private clouds, and storage systems, the burden of verifying that a file is still bit-perfect falls on the customer, who incurs the time, cost, and inconvenience of pulling the file out of storage, rehashing it (as sketched below), and comparing the result to the originally recorded hash. This process is an impediment to public cloud storage and the efficiencies it offers for the (very) long-term storage of archival assets.
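
    The check itself is simple, which makes the lack of a common, provider-exposed interface for it all the more frustrating. A minimal sketch of what the studio must do today:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash a file in 1 MiB chunks so arbitrarily large media fits in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_fixity(path: str, recorded_hash: str) -> bool:
    """True if the retrieved file still matches the hash recorded at ingest."""
    return sha256_of(path) == recorded_hash

# Usage (path and hash are illustrative):
# ok = verify_fixity("restored/ocf-0001.ari", "9f86d081884c7d65...")
```

    A common fixity interface would let the storage provider run this same comparison in place and report the result, with no egress required.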

  18. Many essence and metadata file types that must be archived exist only in proprietary formats.

    Example: A studio would like to maintain original camera files (OCF) in perpetuity as the original photography captured on set, but the camera file format is proprietary, and tools to read it may not be available in 10, 20, or 100 years' time. The studio needs to decide whether to store the assets anyway or transcode them to another format for the archive.

    The myriad proprietary files and formats in our industry contain critical information for applications to preserve creative intent, history, or provenance, but that proprietary data becomes a problem if the file must be opened years or decades later, perhaps after the software that created it is no longer available. We have a few current and emerging examples in some areas of public specifications, standards, and open-source software that can enable perpetual access, but the industry has been slow to appreciate the legacy challenge of preserving access to this critical data in the archive.

In the final part of this blog series, we'll address the remaining gaps within the Principles covering Security and Identity and Software-Defined Workflows… Stay tuned…

Are we there yet? Part 1: Gap Analysis for the 2030 Vision
https://movielabs.com/are-we-there-yet-part-1/ (July 26, 2023)

It's mid-2023, and we're about four years into our odyssey towards "ProductionLandia" – an aspirational place where video creation workflows are interoperable, efficient, secure by nature, and seamlessly extensible. It's the destination; the 2030 Vision is our roadmap to get there. Each year at MovieLabs we check the industry's progress towards this goal, adjusting focus areas and generally providing navigation services to ensure we all arrive in port in ProductionLandia at the same time, with a suite of tools, services, and vendors that work seamlessly together. As part of that process, we take a critical look at where we are collectively as an M&E ecosystem – and what work still needs to be done. We call this "gap analysis."

Before we leap into the recent successes and the remaining gaps, let's not bury the lead – while there has been tremendous progress, we have not yet achieved the 2030 Vision (that's not a negative; we have a lot of work to do, and it's a long process). So, despite some bold marketing claims from some industry players, there's a lot more to the original 2030 Vision white paper than lifting and shifting some creative processes to the cloud, the occasional use of virtual machines for a task, or a couple of applications seamlessly passing a workflow process between each other. The 2030 Vision describes a paradigm shift that starts with a secure cloud foundation, reinvents our workflows to be composable and more flexible, removing the inefficiencies of the past, and includes the change management necessary to give our creative colleagues the opportunity to try, practice, and trust these new technologies on their productions. The 2030 Vision requires an evolution in the industry's approach to infrastructure, security, applications, services, and collaboration, and that was always going to be a big challenge. There's still much to be done to achieve dynamic and interoperable software-defined workflows built with cloud-native applications and services that securely span multi-cloud infrastructures.

Status Check

But even though we are not there yet, we're actually making amazing progress based on where we started (albeit with a global pandemic to give a kick of urgency to our journey!). Many major companies – cloud services companies, creative application tool companies, creative service vendors, and other industry organizations – have now backed the 2030 Vision; it is no longer just the strategy of the major Hollywood studios but has truly become the industry's "Vision." The momentum is behind the vision now, and it's building – as is evident in the 2030 Showcase program that we launched in 2022 to highlight and share 10 great case studies in which companies large and small are demonstrating Principles of the Vision that deliver value today.

We've also seen the industry respond to our previous blogs on gaps, including what was missing around remote desktops for creative applications, software-defined workflows, and cloud infrastructures. We can now see great progress with camera-to-cloud capture, automated VFX turnovers, final color pipelines that are now technically possible in the cloud, amazing progress on real-time rendering and iteration via virtual production, creative collaboration tools, and more applications opening their APIs to enable new and unpredictable innovation.

Mind the Gaps

So, in this multi-part blog, let's look at what's still missing. Where should the industry now focus its attention to keep us moving and accelerate innovation and the collective benefits of a more efficient content creation ecosystem? We refer to these challenges as "gaps" between where we are today and where we need to be in "ProductionLandia." When we succeed in delivering the 2030 Vision, we'll have closed all of these gaps. As we analyze where we are in 2023, we see these gaps falling into the three key categories from the original vision (Cloud Foundations, Security and Identity, Software-Defined Workflows), plus three underlying ones that bind them all together.


In this Part 1 of the blog, we'll look at the gaps in those underlying areas. In the following parts, we'll look at the gaps we view as most critical for achieving each of the Principles of the Vision. But let's start with the binding challenges that link them all.

It's worth noting that some gaps involve fundamental technologies (a solution doesn't exist, or a new standard or open-source project is required); some are implementation focused (e.g., the technology exists but needs to be implemented and adopted by multiple companies across the industry to be effective – our cloud security model CSAP is an example where a solution is now ready to be implemented); and some are change-management gaps (e.g., we have a viable solution that is implemented, but we need training and support to effect the change). We've steered clear of gaps that are purely economic in nature, as MovieLabs does not get involved in those areas. It's probably also worth noting that some of these gaps and solutions are highly related, so we need to close some to support closing others.

Interoperability Gaps

  1. Handoffs between tasks, teams, and organizations still require large-scale exports/imports of essence and metadata files, often via an intermediary format. Example: generation of proxy video files for review/approval of specific editorial sequences. These handovers are often manual, introducing the potential for errors, omissions of key files, security vulnerabilities, and delays.[1]
  2. We still have too many custom point-to-point implementations rather than off-the-shelf integrations that can be simply configured and deployed with ease. Example: An Asset Management System currently requires many custom integrations throughout the workflow, which makes changing it out for an alternative a huge migration project. Customization of software solutions adds complexity and delay and makes interoperability considerably harder to create and maintain.
  3. Lack of open, interoperable formats and data models. Example: Many applications create and manage their own sequence timeline for tracking edits and adjustments instead of rallying around open equivalents like OpenTimelineIO for interchange. For many use cases, closing this gap requires the development of new formats and data models, and their implementation.
  4. Lack of standard interfaces for workflow control and automation. Example: Workflow management software cannot easily automate multiple tasks in a workflow by initiating applications or specific microservices and orchestrating their outputs to feed a new process. Although we have automation systems in some parts of the workflow, the lack of standard interfaces again means that implementors frequently have to write custom connectors to get applications and processes to talk to each other.
  5. Failure to maintain metadata, and a lack of common metadata exchange across components of the larger workflow. Example: passing camera and lens metadata from on-set to post-production systems for use in VFX workflows. Where no common metadata standards exist, or where they have not been implemented, systems rarely pass on data they do not need for their specific task, as they have no obligation to do so and don't know which target system may need it. A more holistic system design would enable non-adjacent systems to find and retrieve metadata and essence from upstream processes and to expose data to downstream processes, even without knowing what it may be needed for (see the sketch after this list).
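
A minimal sketch of the "carry it forward" design just described in item 5: each step consumes only the fields it understands but passes the full payload on, so a non-adjacent downstream system still finds what it needs. All field names are illustrative.

```python
capture_metadata = {
    "camera": {"model": "ExampleCam", "iso": 800, "white_balance": 5600},
    "lens": {"focal_length_mm": 35, "t_stop": 2.0},
    "set": {"scene": "42", "take": "3"},
}

def dailies_step(metadata: dict) -> dict:
    """Uses only the camera fields, but returns the entire payload unchanged."""
    _ = metadata["camera"]["iso"]  # the only field this step actually needs
    return metadata                # lens data still flows through to VFX

vfx_input = dailies_step(capture_metadata)
print(vfx_input["lens"]["focal_length_mm"])  # survives the handoff -> 35
```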

Operational Support

  6. Our workflows, implementations, and infrastructures are complex and typically cross the boundaries of any one organization, system, or platform. Example: A studio shares essence and metadata with external vendors to host on their own infrastructure tenants, but also less structured elements such as work orders (definitions of tasks), context, permissions, and privileges. There is therefore a need for systems integrators and implementors to take the component pieces of a workflow and design, configure, host, and extend them into complete ecosystems. These cloud-based and modern software components will be very familiar to IT systems integrators, but integrators also need skills and understanding in our media pipelines to know how to implement and monetize them in a way that will work in our industry. We therefore have a mismatch gap between those who understand cloud-based IT infrastructures and software and those who understand the complex media assets and processes that need to operate on those infrastructures. There are few companies to choose from with the right mixture of skills across cloud and software systems as well as media workflow systems, and we'll need a lot more of them to support an industry-wide migration.
  7. We also need systems that match our current support models. Example: A major movie production can be operating simultaneously across multiple countries and time zones, in various states of production, and any down system can cause backlogs in smooth operations. The media industry works unusual and long hours, at strange times of day and across the world – demanding a support environment staffed by specialists who understand the challenges of media workflows, not just an IT ticket that will be resolved when weekday support comes in at 9am on Monday. In the new 2030 world, these problems are compounded by the shared nature of the systems, so it may be hard for a studio or production to understand which vendor is responsible if (when) there are workflow problems. Who do you call when applications and assets seamlessly span infrastructures? How do you diagnose problems?

Change Management

  8. Too few creatives have tried and successfully deployed new "2030 workflows" to be able to share and train others. Example: Parts of the workflow, like dailies, have migrated successfully to the cloud, but we've yet to see a major production running from "camera to master" in the cloud – who will be the first to try it? Change management comprises many steps before new processes are considered "just the way we do things." The main ones we need to get through are:
    • Educating and socializing the various stakeholders about the benefits of the 2030 vision, for their specific areas of interest
    • Involving creatives early in the process of developing new 2030 workflows
    • Then demonstrating the value of new 2030 workflows to creatives with tests, PoCs, limited trials, and full productions
    • Measuring cost/time savings and documenting them
    • Sharing learnings with others across the industry to build confidence.

Shortly, we'll add Part II to this blog, which will extend the list of gaps with those most applicable to each of the 10 Principles of the Vision. In the meantime, there are eight gaps here that the industry can start thinking about – and do please let us know if you think you already have solutions to these challenges!

[1] The Ontology for Media Creation (OMC) can assist in common payloads for some of these files/systems.

MovieLabs Urgent Memo to the C-Suite
https://movielabs.com/movielabs-urgent-memo-to-the-c-suite/ (February 16, 2022)
MovieLabs makes the case that investing in production technology and cloud centricity is no longer an option – it is table stakes.

We published our 2030 Vision white paper, "The Evolution of Media Creation," with the goal "to empower storytellers to tell more amazing stories while delivering at a speed and efficiency not possible today." In that paper, we described 10 principles as key elements of the 2030 world we envisioned. Our call to action to the industry was "to collaborate by appropriate means to achieve shared goals and continue to empower future storytellers and the creative community." When writing the 2030 Vision, we debated the target audience but ultimately concluded that it should be aimed at production technologists (CTOs, CIOs, cloud companies, SaaS providers, technology companies, software architects) – those who would recognize the challenges we were highlighting and the merits of the principles we articulated, and who could help design the technical solutions. However, we also highlighted that enabling the vision would take more than just technologists. The realization of the Vision also requires alignment and support from senior leadership across finance, marketing, operations, and production, and even board members who provide organizations guidance on strategy, governance, and long-term risk.

Since releasing that original white paper, production technology leaders from across the industry have embraced the 2030 Vision, making it the industry's reference for the future of media creation. And while having this alignment is absolutely critical for our shared vision's success, today we're releasing a new white paper, the Urgent Memo to the C-Suite, aimed at leadership across the content creation ecosystem – chief executives, chief financial officers, chief people officers, as well as board members, production executives, and production companies. And we have a simple message: companies that want to not just survive but thrive in the modern content ecosystem need to invest in production technology now.

Much like investments in distribution technology 10 years ago enabled the rapid rise in consumer demand for streaming media services, we now need a corresponding investment in production technology to more efficiently create the content that our growing, global audiences are demanding. Let's define what we mean by production technology: it's often assumed to be just on-set technologies like virtual production, cameras, and LED walls, but it's broader than that, encompassing all technology and systems used to create final movies and shows – asset management, creative software tools, onboarding and talent scheduling, managing jobs, networks and infrastructure, and much more.

In an effort to place this technology vision in a business context, against the backdrop of what our industry is now facing, we have identified five trends that are shaping content creation and three strategic imperatives that organizations should follow now to ensure they stay ahead of those trends. Technology is certainly a key part, but this is not a technical paper, nor a call for technology investment for its own sake. There are clearly rationalized reasons why and how we must invest now to ensure competition and choice in the future, and to avoid repeating the mistakes of the past, where we've had multiple opportunities to reinvent our content creation ecosystem but shied away from making the difficult, fundamental changes that could have unlocked significant efficiencies and value. Our new "Memo to the C-Suite" is marked "Urgent" because these changes are transformational and will take time – so we all need to act now to realize our shared vision as soon as possible.

Our industry is at a critical inflection point as emerging technologies (cloud, automation, AI, real-time engines) approach mass adoption and we reemerge from a pandemic that both crippled our industry and enlivened it. We cannot waste this opportunity to reinvent our 100-year-old production processes and create a more dynamic content creation ecosystem, optimized for the sorts of content consumers are demanding now and will demand in the future.

And while this paper is clearly not literally a "memo to the C-suite," it does go down easy. So, download your copy of the MovieLabs Urgent Memo to the C-Suite here, and encourage your colleagues and friends to do the same.

The time for action is now. For more information, please follow MovieLabs on LinkedIn: #2030Vision.

Through the Looking Glass
https://movielabs.com/through-the-looking-glass/ (February 1, 2022)
Locating assets in a multi-cloud workflow.

Some Background

In our July 2021 blog, "CLOUD. WORK. FLOWS," we listed several gaps that need to be closed to enable the 2030 Vision of Software-Defined Workflows spanning multiple cloud infrastructures – which is the way we expect all workflows to ultimately run. In this blog we'll address one of those gaps – namely, that "applications need to be able to locate and retrieve assets across all clouds" – and how we're thinking about systems to close it.

To understand why this is a problem, we need to dig a little into the way software applications store files. Why do we need to worry about applications? Because almost all workflow tasks are now conducted by some sort of software system – most creative tasks are, and even capture devices like cameras are running complex software. The vast majority of this software can access the internet, and therefore private and public cloud resources, and yet it is still based on legacy file systems from the 1980s. Our challenge with interconnecting all of the applications in the workflow therefore often boils down to how applications store their data. If we fix that, we can move on to some more advanced capabilities in creative collaboration.

Typically, a software application stores the locations of the files it needs using file paths that indicate where they live on a locally accessible file system (like "C:/directory/subdirectory/file_name"). So, for example, an editing application will store the edits being made in an EDL file that is recorded locally (as it's being created and constantly amended), and the project includes an index with the locations of all the files being manipulated by the editor. Media asset management systems also store the locations of files in a database, with similar file paths, like a trail of breadcrumbs to follow to locate the files. If the files in these file systems move, or are not where the application expects them to be when it needs them, trouble ensues.

Most applications are built this way, and whereas they can be adapted to work on cloud resources (for example by mounting cloud storage to look like a local file system), they are not inherently “cloud aware” and still maintain the names and locations of needed files internally. There are 3 major drawbacks with this approach in collaborative workflows like media creation:

  1. Locating a shared file may depend on having a common file system environment. E.g., NAS drives must always be mounted with the same drive letter.
  2. Locating the file is complicated when the file name plus the file path is the guarantee of uniqueness.
  3. Moving a file (i.e., copy then delete) will break any reference to the file.

We are instead aiming for a cloud foundation which supports a dynamic multi-participant workflow and where:

  • Files can move, if necessary, without breaking anything.
  • Files don’t have to move, if it’s not necessary.
  • If files exist in more than one place, the application can locate the most convenient instantiation.
  • Systems, subject to suitable permissions, can locate files wherever they are stored.
  • The name of a file is no longer an important consideration in locating it or in understanding its contents or its provenance.[1]

With these objectives in mind, we have been designing and testing a better approach to storing files required for media workflows. We’ll reveal more later in 2022 but for now we wanted to give you a preview of our thinking.

Identifying Identifiers

To find these files anywhere across the cloud, what we need is a label that always and uniquely refers to a file, no matter where it is. This kind of label is usually called an identifier. The label must be “sticky” in that it should always apply to the same file, and only to that file. By switching to an identifier for a file, instead of an absolute file location, we can free up a lot of our legacy workflows and enable our cross-cloud future.

Our future solution therefore needs to operate in this way:

  • Participating workflow applications should all refer to files by a common and unique identifier
  • Any workflow component can “declare” where a file is (for example, when a file is created)
  • Any workflow component can turn a unique identifier into at least one location (using the declaration above)
  • Locations are expressed in a common way – by using URLs.

URLs (Uniform Resource Locators) are the foundation of the internet and can be used to describe local file locations (e.g., file://), standard network locations (e.g., http:// or https://), proprietary network locations (e.g., s3://) or even SaaS locations (e.g., box:// used by the web service company Box).

The key to this scenario is a web service that, when presented with a unique identifier, will return the URL location, or locations, of that file. We call this service a resolver, and it's a relatively simple piece of code acting much like a highly efficient librarian who, when presented with the title and author of a book, can tell you exactly which shelf and location to go to to get it. A minimal sketch follows.
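
Here is a minimal sketch of a resolver along those lines: identifiers in, URLs out. The identifier scheme, function names, and locations are all illustrative; this is not a published MovieLabs interface.

```python
# identifier -> list of URLs where copies of the file currently live
LOCATIONS = {
    "asset:show-xyz:edl-0001": [
        "s3://post-house/projects/xyz/cut.edl",
    ],
    "asset:show-xyz:clip-0451": [
        "s3://us-west-2/media/0451.mxf",
        "https://cache.example.in/media/0451.mxf",  # regional cached copy
    ],
}

def declare(identifier: str, url: str) -> None:
    """Any workflow component can declare where a (new) copy of a file lives."""
    LOCATIONS.setdefault(identifier, []).append(url)

def resolve(identifier: str) -> list:
    """Return every known location for the file with this identifier."""
    return LOCATIONS.get(identifier, [])

declare("asset:show-xyz:clip-0451", "file:///mnt/local-cache/0451.mxf")
print(resolve("asset:show-xyz:clip-0451"))  # three locations for one identifier
```

Note that nothing stops a smarter implementation from ordering the returned URLs by proximity to the caller, which is exactly the caching behavior discussed below.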

Even though MovieLabs created the industry-standard Entertainment ID Registry (EIDR), we are not proposing any universal and unique identifier for each of the elements within all productions (that would be a massive undertaking); instead, we believe that each production, studio, or facility will run its own identifier registries and resolvers.

We have discussed before why we believe in the importance of splitting the information about a file (for example, what type of file it is, what it contains, where it came from, which permissions various participants have, its relationships to other files, etc.) from the actual location of the file itself. In many cases applications don't need to access the file (and therefore won't need to use the resolver), because they often just need to know information about the file, and that can come from an asset manager. We can envision a future whereby MAMs contain rich information about a file and just the identifier(s) used for it, and utilize a resolver to handle the actual file locations.

With this revised approach, our application uses an external resolver service to give it URLs, which it can reach out to on a network to retrieve the file(s) it needs.

In this arrangement, the application keeps a list of identifiers and uses the external resolver to turn each one into a URL for the file it needs; the URL can then be resolved by the network into web servers, SaaS systems, or cloud storage services directly. So, in the example of our editing application, the application now maintains a set of unique file identifiers (for the EDL and all of the required media elements for the edit), and the resolver supplies an actual location whenever the application needs to find and open those files. The application is otherwise unchanged.

Why use Identifiers and Resolvers, instead of just URLs?

Let us be clear – there are many benefits in simply switching applications to use URLs instead of file paths; that step alone would open up cloud storage and a multitude of SaaS services that would help make our workflows more efficient. However, from the point of view of an application, URLs alone are absolute, and therefore do not address our concern of enabling multiple applications to simultaneously access, move, edit, and change those files. By inserting a resolver in the middle, we abstract away from the application the need to track where every file is kept, and we enable more of our objectives, including the ability to have multiple locations for each file. Also, by using a resolver, an application that needs to move a file does not need to know about or communicate with every other application that may use that same file, now or in the future. Instead, it simply declares the file's new location to the resolver, knowing that every other participating software application can locate the file, even if that application is added much later in the workflow.

In our editing example above, the "resolver-aware" editing application knows that it needs video file "XYZ" for a given shot, but it does not need to "lock" that file, so it can be simultaneously accessed, referenced, and perhaps edited by other applications. For example, in an extreme scenario, video XYZ could be updated with new VFX elements by a remote VFX artist's application that seamlessly drops the edited video into the finished shot – without the editor needing to do anything but review and approve. The EDL itself is unchanged, and none of the applications involved need any awareness of the filing systems used by the others.

The resolver concept also has another key advantage: with some additional intelligence, the resolver can return the closest copy of a file to the requesting application. Even though Principle 1 of the 2030 Vision indicates that all files should exist in the cloud with a "single source of truth," we recognize that sometimes files will need to be duplicated for performance – for example, to reduce the latency of a remote virtual workstation in India accessing assets that were originally created in London. In those cases the resolver can help: the applications all share one unique identifier for a file, but the network layer can return the original location to participants in Europe and the location of a cached copy in Asia to a participant requesting access from India.

What needs to happen to enable this scenario?

MovieLabs is busy designing and testing these and other new concepts for enabling seamless multi-cloud interoperability and building out software-defined workflows. We'll be publishing more details of our approach during 2022. Meanwhile, there's an immediate opportunity for all application developers, SaaS providers, hyperscale cloud service companies, and others in the broader ecosystem to consider these approaches to interoperable workflows that span infrastructures and the boundaries of specific applications' scope.

We welcome the input of other companies as we collectively work through these issues and ultimately test and deploy resolver-based systems; feel free to reach out to discuss your thoughts with us.

To ensure you are kept updated on all MovieLabs news and this new architecture, be sure to follow us on LinkedIn.

[1] Today such information is often encoded or crammed into a file name or the combination of file name and file path.

CLOUD. WORK. FLOWS
https://movielabs.com/cloud-work-flows/ (July 20, 2021)
Examining cloud ingest, publication and subscription-enabled workflows, and the gaps preventing us from reaching the 2030 Vision.

MovieLabs has been busy in the last few months assessing cloud infrastructures (which in our definition include private, hybrid, and hyperscale service providers) and systems for their ability to support modern media workflows. Our review has been a broad assessment of the flow of video and audio assets into the cloud, between systems in clouds, and between the applications required to run critical creative tasks. We started from the very beginning of the content creation process – from production design through script breakdown, on through every step in which assets are ingested into the cloud for processing – and then followed the primary movement of tasks, participants, and assets across a cloud infrastructure to complete a workflow.

Today, we're publishing a list of gaps we identified in that assessment – gaps between today's reality and the 2030 Vision. Our intent is to create an industry dialog about how to close these gaps collectively. MovieLabs is taking on project work in each of these areas (we'll share more on that later), but closing these gaps will require engagement from the whole community – creatives, tools providers, service vendors, studios, and other content owners.

The MovieLabs 2030 Vision calls for software-based workflows in which files/assets are uploaded to the cloud (or clouds) and stay there. References to those assets are exchanged, and the assets are accessed by participants across many tasks. We're not considering how a single task is carried out in the cloud – that is generally possible today, and while there are benefits (such as enabling remote work in a pandemic), migrating a single task within a production workflow to the cloud does not take full advantage of the cloud. Instead, we're discussing how the entirety of production workflows – every task and application – could run in the cloud with seamless interactions between them. The benefits are not only efficiency (less wasted time moving and copying files), but also lower risk of errors, less task duplication, more opportunities for automation, better security, better visibility into workflow status, and more of the most precious production resources (creative time and budget) applied to the actual creative tasks that make the content better.

So, let’s look at the current impediments we see to enabling more cloud-based workflows…

1) Much faster network connectivity is needed for ingestion of large sets of media files.

Major productions today generate millions of individual assets – from pre-greenlight through distribution. For cloud-based workflows, each asset requires "ingest" into a production store in the cloud. That includes not only camera image files and audio assets captured during active production, but all files created during production – the script, production notes, participant-to-participant communication, 3D assets, camera metadata, audio stems, proxy video files, and more.

As we look at these files, it's clear that the smaller files are not a major concern for the industry. Many cloud-based collaboration platforms routinely upload a modest number of small files (<10MB) and do so over standard broadband connections, including cellular links. Indeed, some of this data is cloud-native (for example, chat files or metadata generated by SaaS apps) and does not need uploading at all.

However, today’s increasingly complex productions create huge volumes of data, often amounting to many terabytes at a time, which can cause substantial upload headaches. For example, a Sony camera shooting in 16-bit RAW 4K at 60fps will generate 2.12TB per hour in the form of 212,000 OCF files of approximately 10MB each. A multi-camera shoot with supporting witness cameras, uncompressed audio, proxies, high-resolution image files, and production metadata becomes a data hotspot streaming vast amounts of data into the cloud (or, more likely, multiple clouds). The volume of data will only increase as capture technology and production techniques evolve.

The table below illustrates the time required for file transfers using various internet connection speeds:

[Table: transfer times for various file sizes at different bandwidth speeds. The table can be used to estimate the bandwidth needed for a production based on its requirements. E.g., if a production is shooting 2 hours of footage/day in ALEXA LF RAW, it will generate 4TB of data per day per camera, and anything less than a 1Gbit/s connection will be insufficient to keep up with the daily shoot schedule.]

We’ve color-coded the table to indicate in green the upload times that would be generally acceptable, on par with hard-drive-based file delivery services. Yellow indicates upload times around 24 hours, and red identifies times that are entirely impractical. While ultra-fast internet connections may not top the list of budget items for any production (especially on smaller independent projects), the faster the media can be ingested into the cloud, the faster downstream processes can start, accelerating the overall schedule and reducing costs.

There are multiple technologies available to mitigate the upload problem, e.g., ways to accelerate transfers or compress data, use of transportable drives, bringing compute to the edge, etc. Evaluation of these and other techniques is beyond this blog’s scope, but suffice it to say that most cloud-based productions would benefit from an uncontended upload and download internet connection of greater than 1Gbps.
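To make the arithmetic concrete, here is a rough, back-of-the-envelope sketch of upload times. It is purely illustrative; the 80% link-efficiency factor is an assumption to account for protocol overhead and contention, not a measured figure.

```python
def upload_hours(data_tb: float, link_gbps: float, efficiency: float = 0.8) -> float:
    """Estimate hours to upload data_tb terabytes over a link_gbps connection.

    efficiency is an assumed derating for protocol overhead and contention.
    """
    bits = data_tb * 1e12 * 8                     # payload size in bits
    effective_bps = link_gbps * 1e9 * efficiency  # usable bits per second
    return bits / effective_bps / 3600

# A day's ALEXA LF RAW footage (~4TB per camera) at various link speeds:
for gbps in (0.1, 1.0, 10.0):
    print(f"{gbps:>4} Gbps -> {upload_hours(4, gbps):6.1f} hours")
# 0.1 Gbps takes ~111 hours (impractical); 1 Gbps ~11 hours; 10 Gbps ~1.1 hours
```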

Bandwidth, however, is not the only constraint on ingestion to the cloud. The ingest step must also include elements required to enable a software-defined workflow (SDW) downstream. That includes assignment of rights and permissions to files, indexing and pre-processing of media, augmentation of metadata, and asset/network security. These requirements need to be well-defined upfront so that ingested files can be accessed or referenced downstream by other participants and applications. Which leads us to …

2) There is no easy way to find and retrieve assets and metadata stored across multiple clouds.

As we have explored in other blogs (such as interoperable architectures), production assets could, and likely will, be scattered across any number of private, hybrid, and hyperscale clouds. Therefore, applications need to be able to find and retrieve assets across all clouds. Breaking this down, two key steps emerge: first, determining which assets are needed by a process – often a challenge in itself, requiring knowledge of versions and approval status – and second, determining where those assets are actually located in the clouds.

These should be considered separate processes, as not all applications need to perform both tasks. Bridging these processes in cloud-based workflows means that each asset needs to be uniquely identifiable so that applications can consistently identify an asset independent of its location and then locate and access the asset.

Architectural clarity on separating these functions is an important prerequisite to addressing this gap. It will also require the industry to develop multi-cloud mechanisms for resolving asset identifiers into asset locations and the integration of those mechanisms with workflow and storage orchestration systems, work that will likely take many years to complete.
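To picture what an identifier-to-location service might look like, here is a minimal sketch. The identifier scheme, class names, and storage URLs are invented for illustration; they are not a MovieLabs specification.

```python
from dataclasses import dataclass

@dataclass
class AssetLocation:
    cloud: str  # e.g., "cloud-b" or "onprem-facility-1" (illustrative labels)
    url: str    # where a copy of the asset can currently be accessed

class AssetResolver:
    """Maps stable asset identifiers to current storage locations.

    Applications hold only the identifier; the resolver owns the mapping,
    so assets can move between clouds without breaking references.
    """
    def __init__(self) -> None:
        self._index: dict[str, list[AssetLocation]] = {}

    def register(self, asset_id: str, location: AssetLocation) -> None:
        self._index.setdefault(asset_id, []).append(location)

    def resolve(self, asset_id: str) -> list[AssetLocation]:
        return self._index.get(asset_id, [])

resolver = AssetResolver()
resolver.register("urn:prod42:asset:ocf-0001",
                  AssetLocation("cloud-b", "s3://prod42-ocf/a1b2c3"))
print(resolver.resolve("urn:prod42:asset:ocf-0001"))
```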

3) We need more interoperability between commercial ecosystems.

In the early days of mobile data, what consumers could do with their data-capable cell phones was controlled by the cellular operators. Consumers were constrained by their choice of operator in the devices they could use, the services they could access, and the apps they could install. The connections between those commercial ecosystems were limited. That service model has fallen away because it constrained consumer freedom to go anywhere on the Internet and load any apps they chose on any compatible device.

We are still in the early days of cloud production and yet we’re seeing parallels with those constrained ecosystems from the early mobile internet. That means, for example, that production SaaS services today sometimes obscure where media files are held, allowing data to be accessed only through the service’s applications. As a result, cloud production systems can sometimes deliver less functionality than on-premises systems, which often include, as a basic function, the ability for any application to access and manipulate data stored anywhere on the network.

In any new and fast-changing environment, an internal ecosystem model can be a great way to launch new services fast, to deliver a great customer experience, and to innovate quickly. However, as these services mature, internal ecosystems can confront problems in scale that limit the broader adoption of new technologies and systems. For example, if file locations are not exposed to other online services, media must be moved out of one internal ecosystem and into another, in order to perform work. That could mean moving from the cloud to on-prem infrastructure and then back again or moving from one cloud infrastructure to another and then back again. Those movements are inefficient, costly and violate a core principle of the 2030 Vision, i.e., that media moves to the cloud and stays there with applications coming to the media. It also creates security challenges since every movement and additional copy of media must be secured and tracked, with security policies applied across workflows and identities also managed and tracked across ecosystems.

Today’s content workflows are too complex for any one service, toolset, or platform to provide all the functionality that content creators need. Therefore, we need easy and efficient ways for content creators to take advantage of multiple commercial ecosystems, with standardized interfaces and gateways between them that allow tasks and participants to extend across ecosystems and implement fully interoperable workflows.

To achieve the full benefits of the 2030 Vision, we envision a future in which commercial ecosystems include technical features such as the following (a sketch of such a gateway appears after the list):

  1. Files and/or critical metadata are exposed and available across ecosystems so that they can be replicated or accessed by third party services (for example, by way of an API).
  2. Authentication and authorization can also be managed across ecosystems, for example, providing the ability to share federated sign-on so that a single identity can be shared across services and enabling external systems to securely change access controls via API.
  3. Security auditing of actions on all platforms is open enough to allow external services with a common security architecture to track the authorized or unauthorized use of assets, applications, or workflows on the platform.
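Imagining what a minimal cross-ecosystem gateway covering these three features could look like, here is an illustrative sketch. Every name and endpoint in it is a placeholder assumption, not a published API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AuditEvent:
    who: str       # a federated identity (e.g., a PUID)
    action: str    # "read", "write", "share", ...
    asset_id: str
    at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

class EcosystemGateway:
    """One ecosystem's outward-facing surface for interoperability."""
    def __init__(self, verify_token, audit_sink, asset_index):
        self.verify_token = verify_token  # delegated to a shared identity provider
        self.audit_sink = audit_sink      # feeds a cross-ecosystem audit log
        self.asset_index = asset_index    # the ecosystem's own asset catalog

    def get_asset_location(self, token: str, asset_id: str) -> str:
        identity = self.verify_token(token)                      # feature 2
        self.audit_sink(AuditEvent(identity, "read", asset_id))  # feature 3
        return self.asset_index[asset_id]                        # feature 1

gateway = EcosystemGateway(
    verify_token=lambda tok: "puid:0042-8c1f",  # stand-in for real verification
    audit_sink=print,
    asset_index={"urn:prod42:asset:edl-v7": "https://assets.example/edl-v7"},
)
print(gateway.get_asset_location("opaque-token", "urn:prod42:asset:edl-v7"))
```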

The 2030 Vision will require dynamic security policies that extend, and enable participant authentication, across multiple internal ecosystems, including granular control of authorization (e.g., access controls) down to the level of individual participants, individual tasks, and individual data and metadata assets. That will require commercial ecosystems to incorporate a high level of interoperability and communication across ecosystems in order to deliver dynamic policies that change frequently and enable real-time security management for end-to-end production workflows.

4) We still must resolve issues with remote desktops for creative tasks.

Until all creative tools are cloud-native SaaS products, cloud media files will be manipulated most often using existing applications operating on cloud-based virtual machines. In a prior blog, we assessed several technical shortcomings in those technologies that prevent media ingested to the cloud from being manipulated in the same way as on local machines. These limitations and considerations were explored in our remote desktop blog and include problems such as lack of support for HDR, high-bit-depth video, and surround sound in remote desktop systems. Until we close those gaps, the ability to manipulate media files and collaborate in the cloud will be stunted.

5) People, software and systems cannot easily and reliably communicate concepts with each other.

The next key group of issues to resolve relate to communicating workflows and concepts. That communication could be human-to-human, but also human-to-machine and ultimately machine-to-machine, which will enable automation of many repetitive or mundane production tasks.

Effective software-defined workflows need standardized mechanisms to describe assets, participants, security, permissions, communication protocols, etc. Those mechanisms are required to allow any cloud service or software application to participate in a workflow and understand the dialog that is occurring. For example, a number of common words and terms of art are understood by context – slate, shot, and take, for instance. All have different meanings depending on their exact context, and it’s hard for machines to understand that nuance.

In addition to describing the individual elements of a production, we need to describe how elements relate to one another. These relationships, for example, allow a proxy to be uploaded in real-time and to stay connected to the RAW file original – which could arrive in the cloud hours or days later. Such a system needs to allow two assets stored on different clouds to be moved, revised, processed, deep archived and re-hydrated, all without losing connections to each other. The same is true of other less tangible elements such as production notes made on a particular shot – which must be related to the files captured on that shot and other information that could be useful later such as the camera and lens configurations, wardrobe decisions and even time of day and positions of the lighting. These elements and their relationships need to be defined in a common way so all connected systems can create and manage the connections between elements.
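As a toy illustration of such relationship tracking, the sketch below links a proxy to its RAW original and attaches contextual records to the same identifiers. The relation names are invented; a real system would draw them from a shared ontology.

```python
relations: list[tuple[str, str, str]] = []  # (subject_id, relation, object_id)

def relate(subject: str, relation: str, obj: str) -> None:
    relations.append((subject, relation, obj))

# The proxy is registered the moment it is uploaded...
relate("urn:prod42:asset:proxy-0001", "proxy_of", "urn:prod42:asset:ocf-0001")
# ...and notes or capture metadata attach to the same identifier, even though
# the RAW original may not land in the cloud until hours or days later.
relate("urn:prod42:note:0007", "about", "urn:prod42:asset:ocf-0001")
relate("urn:prod42:asset:ocf-0001", "captured_with", "urn:prod42:camera:a-cam")

def related_to(asset_id: str) -> list[tuple[str, str, str]]:
    """Everything connected to an asset, wherever each element is stored."""
    return [(s, r, o) for (s, r, o) in relations if asset_id in (s, o)]

print(related_to("urn:prod42:asset:ocf-0001"))
```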

6) It is difficult to communicate messages to systems and other workflow participants, especially across clouds and organizations.

Software-Defined Workflows require a large amount of coordinated communication between people and systems. Orchestration systems control the many parts of an SDW by allocating tasks and assets to participants on certain infrastructures. For those systems to work, we need agreed methods for the component systems to coordinate with each other—to communicate, for example, that an ingest has started, been completed or somehow failed. By standardizing aspects of this collaboration system, developers can write applications that create tasks with assets, create sub-tasks from tasks, create relations between assets and metadata, and pass messages or alerts down a workflow that appear as notifications for subsequent users or applications. These actions require an understanding of preceding actions, plus open standards for describing and communicating those actions in order to deploy at scale and allow messages to ripple out throughout a workflow. As an example, if an EDL is changed that impacts a VFX provider, the VFX provider should be notified automatically when the relevant change has occurred.
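For illustration only, such a message could be as simple as the envelope below; the event types and field names are assumptions rather than any agreed standard.

```python
import json

def make_event(event_type: str, task_id: str, asset_ids: list[str], detail: str) -> str:
    """Build a minimal workflow event; assets are referenced by identifier, not path."""
    return json.dumps({
        "type": event_type,  # e.g., "edl.updated", "ingest.completed", "ingest.failed"
        "task": task_id,
        "assets": asset_ids,
        "detail": detail,
    })

def notify_subscribers(event_json: str, subscribers) -> None:
    # In practice this would be a message bus; here, direct delivery suffices.
    for deliver in subscribers:
        deliver(event_json)

vfx_inbox: list[str] = []
notify_subscribers(
    make_event("edl.updated", "task:conform-012",
               ["urn:prod42:asset:edl-v7"], "Shot 042 trimmed by 8 frames"),
    [vfx_inbox.append],
)
print(vfx_inbox[0])  # the VFX vendor sees the change without being told by email
```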

Our objective here is to standardize the mundane integrations that do not differentiate a software product or service in order to enable interoperability, which then frees up developer resources to focus on the innovative components and features that truly do differentiate products.

7) There is no easy way to manage security in a workflow spanning multiple applications and infrastructure.

Our cloud-based approach (as explained in the MovieLabs Common Security Architecture for Production (CSAP)) is a zero-trust architecture. This approach requires every participant (whether a user, automated service, application or device) to be authenticated before joining any workflow and then authorized to access or modify any particular asset. This allows secure ingest and processing of assets in the cloud. Realizing the benefits of this aspect of the 2030 Vision, however, also requires closing some key gaps.

When content owners allocate work to be done (either to vendors or within their own organization’s security systems), they select rights and privileges which typically are constrained to the cloud service or systems on which the work is occurring. In the case of service providers, the contract stipulates certain security protections and usually requires external audits to validate that the protections are understood and implemented correctly. In addition, each of the major hyperscale cloud service providers also provides identity, authorization and security services for storage and services running on their clouds. Some of these cloud tools, but not all, extend to other cloud service providers. The result is a potential hodgepodge of security tools, systems and processes that do not interoperate. Since complexity is the enemy of good security, security models and frameworks should identify and standardize commonalities now, before the security implementations get too complex.

Today the industry is in a quandary as to which security and identity services to use for authorizing and authenticating users to support workflows with assets, tools and participants scattered across multiple infrastructures. The MovieLabs CSAP was designed to provide a common architecture to deal with these issues in an interoperable manner and we’re working now with the industry to enable its implementation across clouds and application ecosystems.
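As a drastically simplified illustration of the zero-trust stance (our own reduction, not the CSAP specification itself), every access check reduces to an explicit, default-deny authorization decision:

```python
# (participant, action, asset) -> allowed; a real system would evaluate policies,
# but the stance is the same: nothing is trusted by virtue of network location.
policies = {
    ("puid:editor-17", "read",  "urn:prod42:asset:proxy-0001"): True,
    ("puid:editor-17", "write", "urn:prod42:asset:ocf-0001"):  False,
}

def authorize(participant: str, action: str, asset_id: str) -> bool:
    """Every access is checked; anything not explicitly allowed is denied."""
    return policies.get((participant, action, asset_id), False)

assert authorize("puid:editor-17", "read", "urn:prod42:asset:proxy-0001")
assert not authorize("puid:editor-17", "write", "urn:prod42:asset:ocf-0001")
assert not authorize("puid:unknown", "read", "urn:prod42:asset:proxy-0001")
```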

8) There is no easy way to manage authentication of workflow participants from multiple facilities and organizations.

In today’s workflow a post-production vendor may require a creative user to log in to a local workstation to work, with another login required to access the SaaS dailies platform to review the director’s notes, and a third login needed (with separate credentials) to run file transfers for assets to work on. In an ideal world, one login would be usable across all platforms, with policies from the production determining permissions. Those policies, along with work assignments and roles, would seamlessly manage the user’s access to assets, tools and applications without requiring creation and maintenance of separate credentials for every system.

Our industry is unique in the number of independent contractors and small companies that are of critical significance to productions. A single Production User ID (PUID) system would make many lives easier, as well as allowing software tools to identify participants in a consistent way. This PUID system would make it much easier to onboard creatives to productions and remove them afterwards, with much lower chance of users forgetting – or writing down on post-it notes – the dozens of username and password combinations for each system.
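To illustrate the idea (no PUID token format has been standardized, so the claim names below are hypothetical), a federated sign-on token carrying a PUID might look like:

```python
# Hypothetical claims in a federated sign-on token carrying a PUID.
token_claims = {
    "sub": "puid:0042-8c1f",        # stable Production User ID (invented format)
    "name": "A. Editor",
    "production": "prod42",
    "roles": ["editorial", "dailies-review"],
    "iss": "https://idp.example",   # identity provider all services trust
    "exp": 1716000000,              # expiry makes offboarding automatic
}

def can_access_dailies(claims: dict) -> bool:
    """Any service can make the same decision from the same shared identity."""
    return "dailies-review" in claims.get("roles", [])

print(can_access_dailies(token_claims))  # True
```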

9) We will need a comprehensive change management plan to train and onboard creative teams to these new more efficient ways of working.

Many of these cloud-based workflow changes will require new or adapted tools and processes. Much of the complexity can be obscured from individual users, but there are always usability lessons, training, and change management issues to consider when implementing a new way of working. Productions are high-risk, high-stress endeavors, so we need to implement these systems and onboard teams without upsetting workflows. Developing trust amongst creative teams takes many years and experience in actual productions. The changes proposed here likewise will need considerable time to establish trust and convince creatives that they can securely power productions with better efficiency and improved collaboration. Fortunately, the software-defined workflows described here use the same mechanisms available in other collaboration platforms already widely used today – Slack for real-time collaboration, Google Docs for multi-person editing, Microsoft Teams for integrated media and calling. Those tools provide the model for real-time and rapid decision-making that we want to bring to media creation.

As the industry looks to ramp back up after the COVID shutdowns, it’s worth noting that the true potential of the cloud for production workflows was not exploited during temporary work-from-home tasks. If we can execute on a more collaborative view of entire production systems operating across cloud infrastructures, we believe we can “build back better” and enable far more efficiency in our new workflows.

If the industry can close these nine gaps, we will be closer to realizing a true multi-cloud-based workflow from end to end. Some of these challenges are beyond what any one company can solve (e.g., the availability of low-cost, massively high-bandwidth internet connections). Still, there are areas where we can work together to close the gaps. To that end, MovieLabs has been working to define some of the required specifications, architectures, and best practices, and in subsequent posts we will elaborate on some of these solutions in more detail.

The post CLOUD. WORK. FLOWS appeared first on MovieLabs.

The Problem with Current Production Workflows https://movielabs.com/the-problem-with-current-production-workflows/?utm_source=rss&utm_medium=rss&utm_campaign=the-problem-with-current-production-workflows Fri, 21 May 2021 14:46:42 +0000 https://movielabs.com/?p=8123 In this video, Mark Turner, MovieLabs’ Program Director of Production Technology, explains the problem with current production workflows. He paints a picture of the ideal world of production workflow, called Productionlandia. This idyllic world is part of the MovieLabs 2030 Vision.


For more MovieLabs videos be sure to subscribe to our YouTube channel.

The post The Problem with Current Production Workflows appeared first on MovieLabs.

See you at HPA 2021 https://movielabs.com/see-you-at-hpa-2021/?utm_source=rss&utm_medium=rss&utm_campaign=see-you-at-hpa-2021 Fri, 12 Mar 2021 17:43:15 +0000 https://movielabs.com/?p=7720 For over 25 years, an annual pilgrimage to the Palm Springs desert was required for the full immersion experience of what many just called “the HPA.”

The Hollywood Professional Association’s annual Tech Retreat has, over its many years, become a mecca for those seeking to understand the future of media and entertainment technology and the vision of its most important thought leaders. “See you at HPA” became a knowing New Year’s greeting among the industry’s technorati.

And at this year’s virtual Tech Retreat, MovieLabs intends to honor the spirit of the physical Tech Retreat. We’ll be sharing some of our latest thinking with three video case studies, including our latest work on cloud security, software-defined workflows and an introduction to a new common Visual Language. We also will be hosting five roundtables to provide opportunities for direct industry conversations.

MovieLabs will also be partnering with the HPA on the second day of the annual SuperSession – presenting “Live from the Cloud – Without a Net,” an ambitious, real-time demonstration of cloud-centric, connected, collaborative production-through-delivery workflows against the backdrop of how our industry will evolve as we continue to implement our 2030 Vision.

The SuperSession will again be hosted by Jochen “JZ” Zell, where, on Day 1, he will take a look at seven filmmakers who, in the midst of the pandemic, shot films in London, Dubai, Ulaanbaatar (Mongolia), Mexico City, Brisbane and Hollywood, not only testing themselves from a health and safety perspective, but absolutely pushing and redefining creative, remote, collaborative, cloud-based workflows.

And of course, this year’s SuperSession is an outsized follow-up to last year’s event. In 2020, the SuperSession was all about how the cloud was changing production, culminating in a presentation where the conference hall somehow turned into an ad hoc German Oktoberfest Biergarten. More than 500 attendees raised their beer-filled steins to the making of a film called “The Lost Lederhosen,” some of which was shot on a virtual stage complete with LED wall, and edited, color-corrected, and finished right in front of the audience.

And even though all of these techies were apparently making toasts to the recovery of lederhosen, this celebration and palpable, excited enthusiasm was really about vision. Specifically, the MovieLabs 2030 Vision.

Six months earlier, in August of 2019, MovieLabs had published its groundbreaking and thought-provoking white paper, “The Evolution of Media Creation – a 10 Year Vision for the Future of Media Production, Post and Creative Technologies.” The industry was inspired.

And the 2020 Tech Retreat would be just the place to discuss – and apparently utilize some missing leather breeches to demonstrate – the collaborative, cutting-edge, cloud-based virtual production capabilities of an industry seeking to embrace and implement this “MovieLabs 2030 Vision.” The raised steins of those attending loudly signaled that this “vision” was shared.

Just a few short weeks after the 2020 Tech Retreat, the COVID-19 global pandemic locked down most of the world and, with it, content creation. As the industry began to cope with the impact, it became clear that the 2020 SuperSession was a prescient dress rehearsal for the cloud-based collaborative workflows that would need to be implemented in order for production to resume in a new, safer, and more restricted way – being remote, connected and cloud-enabled was no longer visionary, it was necessary.

Now, a year into the pandemic, our dedicated, resilient and innovative technology community has engineered and “MacGyvered” our way to resumption through sheer tenacity and efforts nothing short of heroic. We are creating content again – and we are increasingly doing it in the cloud.

At the end of the 2020 Tech Retreat, perhaps after a few too many steins and shouts of “Prost!”, some prematurely proclaimed 2020 the new 2030, thinking that much of our cloud journey had been accomplished. But it is now clear, especially as we begin to increasingly use cloud-based systems and tools, that there is still much to be done if we are to truly accomplish the ten principles outlined in the MovieLabs vision.

At this year’s 2021 Tech Retreat, MovieLabs will be leading much of the conversation around what we are learning, what we are thinking, and the work that lies ahead. While this year’s event will be decidedly virtual, the Tech Retreat’s new content platform, HPA ENGAGE, promises to deliver that same depth of thought leadership, information and human interaction – complete with breakfast roundtables, and lunch ones too. There will even be cocktails delivered to your door if you sign up to register “while supplies last.”

While we are going to miss seeing everyone in person, at MovieLabs our goal is to provide the industry with a briefing on some of the topics we are focused on in this coming year, as well as to provide an opportunity to both join the discussion and better understand the tasks and goals that will help realize the “vision” of our industry’s cloud future.

Our MovieLabs Tech Retreat participation kicks off with the release of our first video presentation, “Software-Defined Workflows,” which will be presented by our MovieLabs CTO, Jim Helman. A key aspect of the “2030 Vision” is how our industry’s cloud-based workflows will be powered by software-mediated collaboration and automation, as articulated in our whitepaper “The Evolution of Production Workflows.” Underpinning these workflows are a number of concepts around common ways to express aspects of workflow that will be core to a completely interoperable cloud-based future. In addition to this whitepaper, our recent blog on Software-Defined Workflows is a good way to prepare to join the conversation. Jim will open this topic up for discussion in a Lunch Roundtable that we will be hosting on “Multi-Cloud Challenges & Opportunities” on March 16 at Noon PT.

 Continuing the theme around the need to express workflows in common and deliberate ways, MovieLabs will be showing at the 2021 HPA Tech Retreat a preview of a common visual language to express workflows, which will help to drive automation and interoperability. MovieLabs’ Production Technology Specialist Chris Vienneau will present “A Visual Language Primer” as the first look and public discussion around how creating a common way to describe and draw workflows is an important step for our industry to literally get on the same page.  

MovieLabs just published the first three parts of a reference security architecture for securing the assets and processes in cloud workflows. It is a collaboration-oriented, zero-trust security architecture designed not to interfere with creative work. It is concerned with securing and protecting the integrity of assets, processes, and workflows in the collaborative environment of media production. It is not concerned with protecting the underlying infrastructure. In fact, it is designed to protect production on an infrastructure that is not trusted.

Spencer Stephens, MovieLabs’ Senior VP, Production Technology and Security, recently published a blog post on “How to Secure the Cloud as a Universal Production Resource,” which speaks to why the perimeter-based security approach of the past will not be practical for our cloud future, where security should be intrinsic to media workflows, media assets, and their creative participants themselves. Watch for Spencer’s Tech Retreat presentation on “Why Do We Need a Common Security Architecture” and join our Lunch Roundtable on “The Need for a Common Security Model” that Spencer will be moderating on Tuesday, March 23 at 1:00 PM PT.

Another new MovieLabs concept for the industry to consider will also be discussed at the Tech Retreat. “What is a Production User ID and Why Do We Need It?,” a Breakfast Roundtable hosted by Mark Turner, MovieLabs’ Program Director, Production Technology, on March 16 at 9:00 AM PT, will begin a discussion on the role identity plays in not just security authentication, but also how the ability to identify individual participants and their roles can unlock numerous automation and efficiency initiatives as we begin to work across projects, companies, and clouds.

Raymond Drewry, MovieLabs’ Principal Scientist, will host a Breakfast Roundtable on March 23 at 9:00 AM PT entitled “DAM, MAM, What’s a PAM?,” which will provide a look at how we need to think about, and very likely rethink, our approach to assets, their metadata, and how the two are linked as we discuss the future of asset management.

On March 24 at 9:00 AM PT, Leon Silverman, who is an advisor to the MovieLabs 2030 Vision on Strategy and Industry Relations, will host a Breakfast Roundtable on “Realizing the 2030 Vision – Aligning the Industry.” This session will explore a number of themes of interest to the HPA community, including: How cloud-ready are we as an industry? What challenges does the industry need to overcome in order for the MovieLabs 2030 Vision to be realized? What additional industry education do we need to “get on the same page”? And how can I help?

And in the true meaning of the SuperSession, MovieLabs will be participating in a super final session of the 2021 HPA Tech Retreat – an ambitious demonstration that will be a worthy follow-up to last year’s “Lost Lederhosen” demo. On March 24 at 10:00 AM PT, in a session entitled “Live from the Cloud – Without a Net,” HPA’s Jochen “JZ” Zell and MovieLabs’ Mark Turner will host a live demonstration of production through editorial, VFX, conform, color, sound and delivery using 5G, public and private hybrid cloud, and a variety of tools, infrastructure and workflows – all with an eye as to where we are on our path to the MovieLabs “2030 Vision.” This will be one of those demonstrations that will be talked about for years to come. And hopefully, someone will find those lederhosen.

This past year, MovieLabs has been hard at work with our Studio partners, laying the groundwork for our cloud future, but at the HPA Tech Retreat, the industry has an important opportunity to contribute to and join the conversation. Read our whitepapers and blogs. View our HPA presentations. Join the conversation at our Roundtables, and help our industry make this most important journey to the cloud.

 “See you at HPA.” 

The post See you at HPA 2021 appeared first on MovieLabs.

Distributing Workflows through the Clouds https://movielabs.com/distributing-workflows-through-the-clouds/?utm_source=rss&utm_medium=rss&utm_campaign=distributing-workflows-through-the-clouds Tue, 15 Dec 2020 18:39:24 +0000 https://movielabs.com/?p=7060 The need for interoperable platform architectures to make work flow.

What’s in a cloud?

For simplicity, the technology world tends to describe “the cloud” as if it were a single amorphous mass, which of course it is not. The cloud really consists of interconnected private data centers, co-location providers, and hyperscale cloud service providers (which are further subdivided into distinct regions and specific facilities often called availability zones). In the 2030 Vision paper, we relied on the convention of a singular cloud to help define a principle that “ALL ASSETS ARE CREATED OR INGESTED STRAIGHT INTO THE CLOUD AND DO NOT NEED TO BE MOVED.” In this post I will discard the convention of a monolithic cloud and discuss different parts of the cloud infrastructure and the need to interconnect those parts to make media production work and flow (i.e., to make a workflow).

In describing workflows spanning cloud infrastructures, we should recognize initially that managing a mix of different cloud infrastructures is analogous to managing the mix of different infrastructures that make up traditional media workflows. Our current infrastructure for creative workflows is distributed across different studio and network facilities, post houses, VFX houses, sound facilities, dubbing services and so on. When a vendor requires a set of files in a production, the simple (but inefficient and slow) solution is to copy the files, transfer them to the vendor’s infrastructure, and when the work is done, return any new files, including any updated versions of the original set, back to the studio or production. In the 2030 Vision this inefficiency is eliminated by maintaining one copy of the file in cloud infrastructure and inviting vendors and applications to come to the file to do the work. This element of the MovieLabs vision introduces efficiencies, but also a new type of infrastructure complexity that we explore below.

Also important to note, the MovieLabs definition of cloud includes public cloud service providers (which we dub “hyperscale cloud services”) and “… any internet-accessible storage platform that can be used as a common space for collaboration and exchange of data.” So while other industries discuss multi-cloud in the context of an application that can be deployed in one or more public clouds, our definition of multi-cloud refers more to a production and its ability to deploy across multiple internet-connected infrastructures. It’s an important distinction because a production can comprise many creative applications and higher-level production management systems (such as MAMs, schedulers, workflow orchestration or automation tools) that enable a holistic view of the assets, tasks, and participants in a production – all of which may be scattered across different cloud infrastructures.

Enabling choice and flexibility

Now that we have established that we’re already living in a world of multiple infrastructures, we address what happens when we move to the clouds and need to maintain the flexibility of those infrastructures. Different creative vendors choose different cloud providers or infrastructure to host services for their specialist tasks, and we need to embrace them all to expand a market of choice and competition in cloud services and applications. As an example, in one use case for cloud interoperability, assets for Production 1 are stored in Cloud A, and assets for Production 2 are stored in Cloud B, requiring the studio behind both productions to interface with both Cloud A and Cloud B to manage their slate. In another, more likely scenario, files for both productions may be scattered intentionally across Clouds A, B and perhaps even C – based on the choices of vendors who create the files and rely on different cloud infrastructures. For example, an audio file may be created at a sound mixing facility and reside on a private cloud, but a VFX vendor on the same production may create and manage 3D assets on Cloud B, which is used for rendering. The complex use of multiple infrastructures does not break the 2030 Vision, but in fact highlights the flexibility of the model. However, it does make asset management somewhat harder and require a certain level of cloud-agnostic abstraction between the infrastructure and the compute/application layers.
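One way to picture that abstraction layer is a common storage interface with per-cloud adapters, so workflow code targets the interface rather than any one provider. The sketch below is purely illustrative; the class names and identifier prefixes are our own inventions.

```python
from abc import ABC, abstractmethod

class AssetStore(ABC):
    """Cloud-agnostic storage interface; each infrastructure supplies an adapter."""
    @abstractmethod
    def get(self, asset_id: str) -> bytes: ...

class CloudB(AssetStore):
    """Stand-in for a hyperscale cloud holding, say, a VFX vendor's 3D assets."""
    def __init__(self) -> None:
        self._blobs = {"urn:b:plate-0042": b"<exr bytes>"}
    def get(self, asset_id: str) -> bytes:
        return self._blobs[asset_id]

class PrivateCloud(AssetStore):
    """Stand-in for a sound facility's private data center."""
    def __init__(self) -> None:
        self._blobs = {"urn:p:mix-0001": b"<wav bytes>"}
    def get(self, asset_id: str) -> bytes:
        return self._blobs[asset_id]

# Applications route by identifier and never hard-code a provider.
stores = {"urn:b:": CloudB(), "urn:p:": PrivateCloud()}

def fetch(asset_id: str) -> bytes:
    adapter = next(s for prefix, s in stores.items() if asset_id.startswith(prefix))
    return adapter.get(asset_id)

print(fetch("urn:p:mix-0001"))
```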

The challenge is to enable interoperability such that assets can be placed in any infrastructure and applications can freely discover and operate upon those assets from any other infrastructure. That type of interoperability would replicate the flexibility of today but with the almost infinite computing power and flexibility of the cloud and without the need to duplicate files. Any participant in the ecosystem could then contribute (with security and permissions) to an active production using their preferred infrastructure with their preferred business model, relying on a hyperscale cloud service provider (OpEx), a private cloud data center (CapEx), or any combination of both.

In fact, we already are beginning to see cloud migration occurring in our industry based on exactly this model. Today many production companies and vendors have existing investments in on-prem infrastructure. Their goal is to sweat those assets, while simultaneously preparing for a time when that equipment is retired. When that time arrives, the work should switch seamlessly to the cloud if by then the right pieces are in place. If we design an interoperable architecture now that accommodates both existing on-prem equipment and interfaces seamlessly with cloud resources, then we can accelerate cloud migration by making it simpler and more cost effective for all to make that switch.

Enhancing efficiencies and reducing barriers to viable cloud adoption speeds implementation of the 2030 Vision. It can be challenging to move one software application (whether licensed or developed in-house) to one cloud infrastructure, let alone multiple clouds. In the interoperable scenario above, potentially all software may need to discover and access files securely on any cloud platform to enable the benefits of multiple cloud infrastructures. Enabling that scenario is complex, but there is a real opportunity if the industry does that work together instead of alone or piecemeal.

At MovieLabs we’re looking to assist by working with application and infrastructure providers to develop cloud-agnostic standardized interfaces that lower the development effort for all ecosystem participants – from the large studios and public cloud providers to application developers to small production houses with limited IT resources. Enabling all types of productions to configure software-defined workflows that easily span multiple clouds is a key benefit of the 2030 Vision.

Where we go next …

MovieLabs is devoting significant attention to the challenges of using multiple cloud infrastructures in production workflows. We expect to have more to say on that in future blogs, but the crux of the solution comes down to the power of the network and its ability to interconnect all the parts of the infrastructure seamlessly while embracing variation across clouds and technologies. We have to enable productions and studios to benefit from the innovation and flexibility of the cloud, while obscuring the infrastructure complexity from the creative team, allowing them to focus on their art. It’s not an easy task, but the opportunities are considerable if we get it right. It takes an industry, and we look forward to working together with all of you to achieve that goal.

#MovieLabs2030, #ML2030Cloud

The post Distributing Workflows through the Clouds appeared first on MovieLabs.

Is the Cloud Ready to Support Millions of Remote Creative Workers? https://movielabs.com/is-the-cloud-ready-to-support-millions-of-remote-creative-workers/?utm_source=rss&utm_medium=rss&utm_campaign=is-the-cloud-ready-to-support-millions-of-remote-creative-workers Mon, 07 Dec 2020 16:43:31 +0000 https://movielabs.com/?p=6989 Assessing the readiness of Virtual Desktop Infrastructure (VDI) to support creative users for the 2030 vision

The MovieLabs 2030 Vision for the future of production technology includes the principle that media will be stored in the cloud and applications will come to the media (not the other way around). That principle anticipates that many creative applications will be rebuilt to be “cloud native” or will require high-power virtual machines/workstations (VMs) to run in the cloud where the media will be residing. We expect it will be many years before our most-used creative applications will be rearchitected to be “cloud native,” and therefore we focused our attention on assessing the current versions of those applications on cloud-based VMs, using Virtual Desktop Infrastructure (VDI) to stream those experiences to users.

2020 readiness assessment

We have looked especially at the work needed to enable the full 2030 Vision using these virtualized machines. Our benchmark is the sort of quality, latency and performance that a user can experience today on a physical machine in a production facility – what is required to replicate that experience from a cloud-based VM? We have largely been assuming the same quality levels we use today for these experiences. (For example, editing of 4K material is often done using 1080p proxies, with editors using 2-3 screens, 1 or 2 for UX and 1 for media playback.) Of course, the quality and size of files will continue to increase, and work that is currently done at 1080p and 8-bit color will no doubt move to 4K and 8K with 10-bit or greater color precision, but our assessment is based largely on whether we can replicate today’s quality levels with cloud-based infrastructure.

In this post we’ll summarize the key findings from that research and call for the industry to accelerate innovation in some areas that are otherwise inhibiting an industry-wide migration of creative tasks to VDI. VDI is a mature technology and is used by millions of workers globally, but typically not for challenging tasks in the media creation industry that have unique issues to address.

COVID solutions get us only part way there

It’s worth noting that the global pandemic has accelerated the remote performance of creative tasks in a “work from home” way. However, we do not view this temporary situation in the same way as a wholesale movement of all workflow tasks to a cloud-powered infrastructure. The COVID response has been via a series of temporary installations, workarounds and workflow adjustments to accommodate social distancing. While COVID in some ways has prepared creatives for a future where work does not require physical co-location with the media and workstations, the 2030 Vision includes a much bolder version of cloud-based “work from anywhere” (including at a primary place of work). Productions and creatives should be able to tap into the many additional benefits that cloud-based workflows offer. When assets reside entirely in the cloud and do not move between workplaces, any user can work on any machine with security, permissions and authorizations intact. It is that vision, rather than the narrower 2020 COVID scenarios, that informs our cloud readiness assessments.

Creative work profiles

To assist our readiness assessments, we defined several categories of creative user profiles, along with some broad requirements for enabling that work to be performed remotely by VMs and VDI. These categories expand on typical worker use cases and address some unique requirements of our industry:

| Creative Worker Type | Example Use Cases | Max Tolerable Latency | Downlink Bandwidth per User |
| --- | --- | --- | --- |
| 1. General Production Use Cases (Task or Knowledge Workers) | Data entry & management, MAM operations, production accounting | 100 – 300 ms | 5 – 20 Mbps |
| 2. Base GPU Creative Workstations | General workloads including video and sound editing, compositing, etc., using mostly mouse & keyboard | <30 ms for frame-accurate control; <250 ms for review | 20 – 60 Mbps |

More specialized use cases modify the demands of the base Creative Workstation:

| Profile | Example Use Cases | Max Tolerable Latency | Downlink Bandwidth per User |
| --- | --- | --- | --- |
| GPU-A Color Accurate | 10-bit, for color-accurate editing, compositing, review of HDR/WCG content, color grading on broadcast monitor | <30 ms for frame-accurate control; <250 ms for review | 40 – 60 Mbps |
| GPU-B Color Critical | 12-bit minimum for color grading on projector (DI), final color review and approvals, color QC | <30 ms for frame-accurate control; <250 ms for review | Expected 60 – 90 Mbps, when systems can be tested |
| GPU-C Ultra-Low Latency Workstations | Tablet, pen or touch interface users such as VFX artists (need pixel-perfect rendering, with no softening from compression) | <25 ms stable and sustained | 20 – 60 Mbps |

Note 1: The estimates are generally based on 1080p streams at 24-30 fps and 8-bit color.

Note 2: Among existing codecs, H.264 seems prevalent. To achieve higher color bit depth, we anticipate other codecs such as H.265 will be better suited, which will impact bandwidth (possibly in a positive direction).
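For illustration, the table’s thresholds can be encoded directly; the sketch below shows how a production might sanity-check a measured connection against a worker profile. The profile keys are our own shorthand, and the check sizes bandwidth for the top of each quoted range.

```python
PROFILES = {  # figures transcribed from the table above
    "general":  {"max_latency_ms": 300, "bandwidth_mbps": (5, 20)},
    "base_gpu": {"max_latency_ms": 30,  "bandwidth_mbps": (20, 60)},
    "gpu_a":    {"max_latency_ms": 30,  "bandwidth_mbps": (40, 60)},
    "gpu_b":    {"max_latency_ms": 30,  "bandwidth_mbps": (60, 90)},
    "gpu_c":    {"max_latency_ms": 25,  "bandwidth_mbps": (20, 60)},
}

def connection_ok(profile: str, latency_ms: float, downlink_mbps: float) -> bool:
    """True if a measured connection meets the profile's strictest requirements."""
    p = PROFILES[profile]
    return (latency_ms <= p["max_latency_ms"]
            and downlink_mbps >= p["bandwidth_mbps"][1])

print(connection_ok("gpu_c", latency_ms=18, downlink_mbps=75))   # True
print(connection_ok("gpu_c", latency_ms=40, downlink_mbps=200))  # False: latency rules
```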

The table helps us assess industry readiness to run creative applications in the cloud. Virtual Machines are generally available across clouds for Creative Worker Types 1 and 2 (General Production Use Cases and Base GPU Creative Workstations). However …

We need to be better than today for creative tasks to fully migrate

Our stretch cases in GPU-A, GPU-B and GPU-C are a different story. It is here that we have identified a number of gaps that must be filled to enable migration of full creative workflows to cloud-based virtual workstations. We can’t hope to migrate entire production workflows to the cloud without finding a way to run these creative applications, at mass scale, with similar performance to what an on-premises artist experiences today.

We’ve identified the following gaps the industry needs to close in order to migrate all tasks to cloud-based VMs and support the GPU-A, B and C users:

  1. No standards exist for measuring or comparing the quality of video from streaming VDI systems (which tend to use subjective terms such as “High”, “Medium” and “Low” to describe quality settings). The lack of objective metrics for the various dimensions of video quality (color, resolution, artifacts, frame jitter) makes it difficult to compare the performance of solutions or establish where errors arise in a system. If a consumer sees a video glitch in a streaming OTT show, it likely will be ignored. However, a creative making or approving that content will not ignore a glitch, but will need to know if it was caused by a VDI system or is resident in the native content. In that circumstance creatives have little choice but to rewind and replay the content to ascertain (hopefully) where the problem lies. This uncertainty introduces delays in production that likely would not occur in an on-prem studio environment. To provide creatives with measurable certainty and trust in the VDI ecosystem, the industry needs to develop guidelines and agreed quality standards.
  2. VDI systems offer good support for 8-bit content with standard dynamic range in the legacy broadcast format Rec. 709. However, production is now moving to 10-bit or greater color depth, HDR and wider color spaces for Color Accurate use cases. No VDI system currently supports those requirements directly, although support for these Color Accurate use cases is available with additional software/hardware, and native support may be imminent in early 2021. It is worth noting that both the codecs employed by the VDI to stream the VM and the client machine must support 10-bit color. Typical thin clients do not support more than 8-bit color. The goal posts are likely to move again as we envision a more challenging environment coming from …
  3. … truly Color Critical VDI systems. We have defined some expectations for these GPU-B capable systems, but are not expecting systems to be available for several more years. Yet 12-bit depth quality will be required for Digital Intermediate (DI) color grading and color review in order to give a director and colorist the full flexibility available with on-prem systems today.
  4. Another issue with a high-end machine running in the cloud far away from a creative user is the connection between the remote machine and the screen available locally to the creative. While standard mechanisms exist that enable local workstations to communicate display modes and color spaces directly to attached monitors, remote VDI systems need to relay that display signaling to distant monitors over the internet. Currently, we are not aware of standards that support communication over VDI infrastructure between a remote PC and a professional video monitor that would facilitate remote calibration and confirmation of display modes and color spaces.
  5. A related issue is the connection of high-end production peripherals – color grading panels, scopes, audio mixing boards. These devices usually directly connect via USB or ethernet to a local PC with predictable latency and responsiveness. Artists need those I/O devices to be extended with connectivity to the remote VM via VDI platforms with predictable levels of responsiveness. Predictability is key – users can adjust to slight latency between input and response, but unpredictable latency can cause frustration and make it difficult for a user to adapt.
  6. Collaboration via VDI is also highly compromised compared to on-prem scenarios where multiple creatives in a physical room can discuss and collaborate in real time. Virtual machines currently do not have integrated support for simultaneous sharing of content with more than one destination. While Over-The-Shoulder (OTS) solutions exist to allow another user to see the output of a VDI session, those solutions typically require additional hardware or software, complicating the overall setup and introducing the potential for additional issues around security and synchronization. Additional complexity can reduce confidence on the part of creatives that the solution outputs the same view of the same content at the same time. Collaboration will need to improve before 2, 3, 5 or even 20 people will feel truly together when sharing streaming content from a single cloud storage source with overlaid communication tools.
  7. Support for multi-channel audio (5.1 and greater) is limited in VDI systems, and there is no support for object-based audio systems. As so many audio tasks rely on surround sound at a minimum, further improvement will be required to ensure that sound editors confidently hear what a consumer will hear, not a mixed-down stereo version over VDI.
  8. Lastly, additional cloud technology will be required to address the dichotomy between a VM needing to be close to the user (to minimize latency) and the 2030 Vision principle that the “application comes to the media”, especially when that media may not be co-located in the same cloud or same cloud region as the VM. If one VDI user is in London and another is in LA, where should the media be stored so that both have a good low-latency connection to it? As a compromise the media could reside in NYC, so both users have an equally bad experience, or some sort of local caching could be used to improve the experience of both. Regardless of the solution adopted, pre-positioning of data is a tough challenge that will require additional innovation to address fully (a toy placement calculation follows this list).
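To illustrate that last dichotomy, a naive placement strategy might simply pick the storage region that minimizes the worst-case user latency. The figures below are made up for illustration; real placement would also weigh egress costs, caching and data residency.

```python
latency_ms = {  # region -> round-trip latency per user (illustrative numbers)
    "eu-west": {"london": 8,   "la": 140},
    "us-east": {"london": 75,  "la": 70},
    "us-west": {"london": 140, "la": 8},
}

def best_region(regions: dict) -> str:
    """Choose the region whose slowest user is fastest (minimax placement)."""
    return min(regions, key=lambda r: max(regions[r].values()))

print(best_region(latency_ms))  # "us-east" - both users get an equally mediocre link
```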

A flag in the sand …

We are hopeful that when we repeat this assessment in a few years, cloud VDI systems will be much improved and the areas outlined above will have been addressed. The 2030 Vision describes a world where thousands of workers on a major movie or TV series collaborate entirely online via virtual desktops using cloud-based media and existing media tools. The infrastructure is forming to enable that future, but there is work to be done across cloud infrastructure, video compression, creative applications, and VDI service providers to enable cloud collaboration to deliver the same experience online that creatives enjoy offline today.

#MovieLabs2030, #ML2030Cloud

The post Is the Cloud Ready to Support Millions of Remote Creative Workers? appeared first on MovieLabs.
