Mark Turner, Author at MovieLabs (https://movielabs.com)

Are we there yet? Part 3
https://movielabs.com/are-we-there-yet-part-3/
Tue, 09 Jan 2024 00:52:58 +0000
Gap Analysis for the 2030 Vision

The post Are we there yet? Part 3 appeared first on MovieLabs.


In this final part of our blog series on the current gaps between where we are now and realizing the 2030 Vision, we’ll address the last two sections of the original whitepaper and look specifically at gaps around Security and Identity and Software-Defined Workflows. As with the previous blogs in this series (see Parts 1 and 2), for each gap we’ll include the gap as we see it, an example of how it applies in a real workflow, and the broader implications of the gap.

So let’s get started with…

MovieLabs 2030 Vision Principle 6
  1. Inconsistent and inefficient management of identity and access policies across the industry and between organizations.

    Example: A producer wants to invite two studio executives, a director and an editor, into a production cloud service but the team has 3 different identity management systems. There’s no common way to identify the correct people to provide access to critical files or to provision that access.

    This is an issue addressed in the original 2030 Vision, which called for a common industry-wide Production User ID (or PUID) to identify individuals who will be working on a production. While there are ways today to stitch together different identity management and access control solutions between different organizations, they are point to point, require considerable software or configuration expertise, and are not “plug and play.”
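To make the idea concrete, here is a minimal sketch of how a shared PUID might bridge several identity providers. Everything here (the class, the ID format, the provider names) is hypothetical and illustrative only; the 2030 Vision does not prescribe this implementation.

```python
# Hypothetical sketch: mapping provider-specific identities to one
# shared Production User ID (PUID). Names and structures are
# illustrative, not part of any published MovieLabs specification.

class PuidRegistry:
    """Maps (identity_provider, local_user_id) pairs to a single PUID."""

    def __init__(self):
        self._by_provider = {}   # (provider, local_id) -> puid
        self._next = 1

    def link(self, provider, local_id, puid=None):
        """Associate a provider-specific identity with a PUID,
        minting a new PUID if none is supplied."""
        if puid is None:
            puid = "puid:%06d" % self._next
            self._next += 1
        self._by_provider[(provider, local_id)] = puid
        return puid

    def resolve(self, provider, local_id):
        return self._by_provider.get((provider, local_id))

# The producer's three systems can now refer to the same editor:
registry = PuidRegistry()
puid = registry.link("studio-idp", "jdoe")
registry.link("post-house-idp", "jane.doe", puid)
registry.link("cloud-saas", "jane@example.com", puid)
```

With a registry like this, provisioning access once per PUID would replace provisioning separately in each of the three identity systems.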

MovieLabs 2030 Vision Principle 7
  1. Difficulty in securing shared multi-cloud workflows and infrastructure.

    Example: A production includes assets spread across a dozen different cloud infrastructures, each of which is under control of a different organization, and yet all need a consistent and studio-approved level of security.

    MovieLabs believes the current “perimeter” security model is not sufficient to cope with the complex multi-organizational, multi-infrastructure systems that will be commonplace in the 2030 Vision. Instead, we believe the industry needs to pivot to a more modern “zero-trust” approach to security, where the stance changes from “try to keep intruders out” to authenticating and authorizing every access to an asset or service. To that end, we’ve developed the Common Security Architecture for Production (CSAP), which is built on a zero-trust foundation; take a look at this blog to learn more.
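The perimeter-to-zero-trust shift can be illustrated with a toy authorization gate: every request is checked for authentication and authorization, regardless of where it originates on the network. This is a deliberately simplified sketch, not the CSAP API; the tokens, PUIDs, and policy triples are invented for illustration.

```python
# Toy zero-trust gate: every access is authenticated and authorized.
# Purely illustrative; CSAP itself defines a far richer model.

AUTHENTICATED_SESSIONS = {"token-123": "puid:000042"}    # token -> user
POLICY = {("puid:000042", "read", "asset:ocf-0001")}      # allowed triples

def access(token, action, asset_id):
    # Step 1: authenticate (who is asking?)
    user = AUTHENTICATED_SESSIONS.get(token)
    if user is None:
        return "denied: unauthenticated"
    # Step 2: authorize (is this specific access allowed?)
    if (user, action, asset_id) not in POLICY:
        return "denied: unauthorized"
    return "granted: %s may %s %s" % (user, action, asset_id)
```

Note that there is no notion of "inside the network" anywhere in the check; that absence is the essence of the zero-trust stance.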

MovieLabs 2030 Vision Principle 8
  1. Reliance on file paths/locations instead of identifiers.

    Example: A vendor requires a number of assets to do their work (e.g., a list of VFX plates to pull or a list of clips) that today tend to be copied as a file tree structure or zipped together to be shared along with a manifest of the files.

    In a world where multiple applications, users and organizations can be simultaneously pulling on assets, it becomes challenging for applications to rely on file names, locations, and hierarchies. MovieLabs instead is recommending unique identifiers for all assets that can be resolved via a service to specify where a specific file is actually stored. This intermediate step provides an abstraction layer and allows all applications to be able to find and access all assets. For more information, see Through the Looking Glass.
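A minimal sketch of the resolver idea follows; the ID scheme, function names, and storage URLs are all hypothetical, but they show how applications could hold stable identifiers while the underlying storage moves.

```python
# Sketch of an identifier-resolution service: applications hold stable
# asset IDs and ask a resolver where the bytes currently live.
# IDs, URLs, and the API shape are illustrative assumptions.

LOCATIONS = {
    "asset:plate-0417": ["s3://vfx-bucket/plates/0417.exr",
                         "gs://archive/plates/0417.exr"],
}

def resolve(asset_id):
    """Return current storage locations for an asset, independent of
    any file path or folder hierarchy."""
    return LOCATIONS.get(asset_id, [])

def relocate(asset_id, old_url, new_url):
    """Storage can move without breaking references: only the
    resolver's mapping changes, never the asset ID itself."""
    urls = LOCATIONS.setdefault(asset_id, [])
    if old_url in urls:
        urls[urls.index(old_url)] = new_url
```

Because a VFX pull list would reference `asset:plate-0417` rather than a path, relocating the plate to another bucket or cloud breaks nothing downstream.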

MovieLabs 2030 Vision Principle 9
  1. Reliance on email for notifications and manual processing of workflow tasks.

    Example: A vendor is required to do a task on a video asset and is sent an email, a PDF attachment containing a work order, a link to a proxy video file for the work to be done, and a separate link to a cloud location where the RAW files are. It takes several hours/days for the vendor to extract the required work, download, QC, and store the media assets, and then assign the task on an internal platform to someone who can do the work. The entire process is reversed to send the completed work back to the production/studio.

    Because we have no common systems for sending workflow requests, referencing assets, and assigning work to individual people, we have created an inherently inefficient industry. In the scenario above, a more efficient system would have the end user receive an automated notification from a production management system that includes a definition of the task to be done and links to the cloud locations of the proxies and RAW files, with all access permissions already assigned so they can start their work. Of course, our industry is uniquely distributed between organizations that handle very nuanced tasks in the completion of a professional media project. This complicates the flow of work and work orders, but there are new software systems that can enable seamless, secure, and automated generation of tasks. We can strip weeks out of major production schedules simply by being more efficient in handoffs between departments, vendors, and systems.

  2. Monolithic systems and the lack of API-first solutions inhibit our progress towards interoperable modern application stacks.

    Example: A studio would like to migrate their asset management and creative applications to a cloud workflow that includes workflow automation, but the legacy nature of their software means that many tasks need to be done through a GUI and that it needs to be hosted on servers and virtual machines that mimic the 24/7 nature of their on-premises hardware.

    Modern applications are designed as a series of microservices that are assembled and called dynamically depending on the process, which enables considerable scaling as well as lighter-weight applications that can deploy on a range of compute instances (e.g., on workstations, virtual machines, or even behind browsers). While the pandemic proved that creative tasks can run remotely or from the cloud, many of those processes were “brute forced” with remote access or cloud VMs running legacy software; they are not the intended end goal of a “cloud native” software stack for media and entertainment. We recognize this is an enormous gap to close, and moving all of the most vital applications/services to modern software platforms will take beyond the 2030 timeframe. However, we need the next generation of software systems to expose open APIs and deploy in modern containers to accelerate the interoperable and dynamic future that is possible within the 2030 Vision.
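As a concrete illustration of the first gap under this principle, here is a minimal sketch of the machine-readable work order that could replace the email-plus-PDF handoff. Every field name is an assumption made for illustration, not part of any published work-order specification.

```python
# Illustrative shape of an automated task notification: a
# machine-readable work order carrying stable asset references and
# pre-provisioned access, instead of an email with attachments.
# All field names are hypothetical.

import json

def make_task_notification(task_id, description, assignee_puid, assets):
    return json.dumps({
        "task_id": task_id,
        "description": description,
        "assignee": assignee_puid,
        "assets": assets,           # stable asset IDs, not copied files
        "access": "pre-authorized"  # permissions granted before dispatch
    })

note = make_task_notification(
    "task-801",
    "Color-correct reel 3, shots 12-18",
    "puid:000042",
    ["asset:proxy-r3", "asset:raw-r3"],
)
```

A vendor's internal platform could parse this payload and create the job directly, removing the hours or days spent extracting the work manually.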

MovieLabs 2030 Vision Principle 10
  1. Many workflows include unnecessarily time-consuming and manual steps.

    Example: A director can’t remotely view a final color session in real time from her location, so she needs to wait for a full render of the sequence, for it to be uploaded to a file share, for an email with the link to be sent, and then for her to download it and find a monitor that matches the one that was used for the grade.

    We could write so many examples here. There’s just far too little automation and far too much time wasted in resolving confusion, writing metadata, reading it back, clarifying intent, sending emails, making calls, etc. Many of the technologies exist to fix these issues, but we need to redevelop many of our control plane functions to adapt to a more efficient system, which requires investment in time, staff, and development. But those that do the work will come out leaner, faster, and more competitive at the end of the process. We recommend that all participants in the ecosystem conduct honest internal efficiency audits to look for opportunities to improve and to prioritize the most urgent issues to fix.

Phew!  So, there we have it. For anyone who believes the 2030 Vision is “doable” today, there are 24 reasons why MovieLabs disagrees. Don’t consider this post a negative; we still have time to resolve these issues, and it’s worth being honest both about the great progress completed and about what’s still to do.

Of course, there’s no point making a list of things to do without a meaningful commitment to cross them off. MovieLabs and the studios can’t do this alone, so we’re laying down the gauntlet to the industry – help us, to help us all. MovieLabs will be working to close those gaps that we can affect, and we’ll be publishing our progress on this blog and on LinkedIn. We’re asking you to do the same – share what your organization is doing with us by contacting info@movielabs.com and use #2030Vision in your posts.

There are three specific calls to action from this blog for everyone in the technical community:

  1. The implementation gaps listed in all parts of this blog are the easiest to close – the industry has a solution; we just need the commitment and investment to implement and adopt what we already have. These are the ones we can rally around now, and MovieLabs has already created useful technologies like the Common Security Architecture for Production, the Ontology for Media Creation, and the Visual Language.
  2. For those technical gaps where the industry needs to design new solutions, sometimes individual companies can pick these ideas up and run with them, develop their own products, and have some confidence that if they build them, customers will come. Other technical gaps can only be closed by industry players coming together, with appropriate collaboration models, to create solutions that enable change, competition, and innovation. There are existing forums to do that work, including SMPTE and the Academy Software Foundation, and MovieLabs hosts working groups as well.
  3. And though not many issues are in the Change Management category right now, we still need to work together to share and educate how these technologies can be combined to make the creative world more efficient.

We’re more than 3 years into our Odyssey towards 2030. Join us as we battle through the monsters of apathy, slay the cyclops of single mindedness, and emerge victorious in the calm and efficient seas of ProductionLandia. We look forward to the journey where heroes will be made.

-Mark “Odysseus” Turner

Are we there yet? Part 2
https://movielabs.com/are-we-there-yet-part-2/
Thu, 14 Dec 2023 03:15:54 +0000
Gap Analysis for the 2030 Vision

The post Are we there yet? Part 2 appeared first on MovieLabs.


In Part 1 of this blog series we looked at the gaps in Interoperability, Operational Support, and Change Management that are impeding our journey to the 2030 Vision’s destination (the mythical place we call “ProductionLandia”). In these latter parts we’ll examine the gaps we have identified that are specific to each of the Principles of the 2030 Vision; these continue the count from the eight gaps covered in Part 1. For each gap we list the Principle, a workflow example of the problem, and the implications of the gap.

In this post we’ll look just at the gaps around the first 5 Principles of the 2030 Vision which address a new cloud foundation.

MovieLabs 2030 Vision Principle 1
  1. Limited bandwidth and performance, plus a lack of automatic recovery from variability in cloud connectivity.

    Example: Major productions can generate terabytes of captured data per day during production and getting it to the cloud to be processed is the first step.

    Even though there are studio and post facilities with large internet connections, there are still many more locations, especially remote or overseas ones, where the bandwidth is not large enough and the throughput not guaranteed or predictable enough, hobbling cloud-based productions at the outset. Some of the benefits of cloud-based production involve rapid access for teams to manipulate assets as soon as they are created, and for that we need big pipes into the cloud(s) that are both reliable and self-healing. Automatic management of those links and data transfers is vital, as they will be used for all media storage and processing.

  2. Lack of universal direct camera, audio, and on-set data straight to the cloud.

    Example: Some new cameras are now supporting automated upload of proxies or even RAW material direct to cloud buckets. But for the 2030 Vision to be realized we need a consistent, multi-device on-set environment to be able to upload all capture data in parallel to the cloud(s) including all cameras, both new and legacy.

    We’re seeing great momentum with camera-to-cloud in certain use cases (with limited support from newer camera models) sending files to specific cloud platforms or SaaS environments. But we’ve got some way to go before it’s as simple and easy to deploy a camera-to-cloud environment as it is to rent cameras, memory cards/hard drives, and a DIT cart today. We also need support for multiple clouds (including private clouds) and/or SaaS platforms so that the choice of camera-to-cloud environment is not a deciding factor that locks downstream services into a specific infrastructure choice. We’ve also framed the gap as not just “camera to cloud” but “capture to cloud,” which includes on-set audio and other data streams that may be relevant to later production stages, including lighting, lenses, and IoT devices. All of that needs to be securely and reliably delivered to redundant cloud locations before physical media storage on set can be wiped.

  3. Latency between “single source of truth in cloud” and multiple edge-based users.

    Example: A show is shooting in Eastern Europe, posting in New York, with producers in LA and VFX companies in India. Which cloud region should they store the media assets in?

    As an industry we tend to talk about “the cloud” as a singular thing or place, but in reality it is not – it’s made up of private data centers and the various data centers that hyperscale cloud providers arrange into “availability zones” or “regions,” which must be declared when storing media. As media production is a global business, the example above is very real, and it leads to the question: where should we store the media, and when should we duplicate it for performance and/or resiliency? This is also one of the reasons why we believe multi-cloud systems need to be supported, because the assets for a production may be scattered across different availability zones, cloud accounts (depending on which vendor has “edit rights” on the assets at any one time), and cloud providers (public, private, and hybrid infrastructures). The gap here is that currently decisions need to be made, potentially involving IT systems teams and custom software integrations, about where to store assets to ensure they are available at very low latency (sub-25-millisecond round trip – see Is the Cloud Ready to Support Millions of Remote Creative Workers? for more details) for the creative users who need to get to them. By 2030 we’d expect “intelligent caching” systems or other technologies that would understand, or even predict, where certain assets need to be for users and stage them close enough for usage before they are needed. This is one of the reasons why we reiterate that we expect, and encourage, media assets to be distributed across cloud service providers and regions while merely “acting” as a single storage entity, even though they may be quite disparate. This also implies that applications need to be able to operate across all cloud providers, because they may not be able to predict or control where assets are in the cloud.

  4. Lack of visibility into the most efficient resource utilization within the cloud, especially before the resources are committed.

    Example: When a production today wants to rent an editorial system, it can accurately predict the cost and map it straight to the budget. But with the cloud equivalent it’s very hard to get an upfront budget, because the costs for cloud resources depend on predicting usage (hours of use, amount of storage required, data egress, etc.), which is hard to know in advance.

    Creative teams take on a lot when committing to a show, usually with a fixed budget and timeline. It’s hard to ask them to commit to unknown costs, especially for variables which are hard to control at the outset – could you predict how many takes for a specific scene? How many times a file will be accessed or downloaded? Or how many times a database queried? Even if they could accurately predict usage, most cloud billing is done in arrears, and therefore the costs are not usually known until after the fact, and consequently it’s easy to overrun costs and budgets without even knowing it.

    Similarly, creative teams would also benefit from greater education and transparency concerning the most efficient ways to use cloud products. Efficient usage will decrease costs and enhance output and long-term usage.

    For cloud computing systems to become as ubiquitous as the physical equivalent, providers need to find ways to match the predictability and efficient use of current on-premises hardware, but with the flexibility to burst and stretch when required and authorized to do so.
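A toy cost model illustrates the budgeting problem described in this gap: the same editorial seat can vary widely depending on usage variables that productions struggle to predict. All rates below are invented placeholders, not real cloud pricing.

```python
# Back-of-envelope cloud cost model. Rates are made-up placeholder
# numbers chosen only to show how usage uncertainty drives budget
# uncertainty; they are not any provider's actual pricing.

def estimate_monthly_cost(vm_hours, storage_tb, egress_tb,
                          vm_rate=0.90, storage_rate=23.0, egress_rate=90.0):
    """Rates: $/VM-hour, $/TB-month stored, $/TB egressed."""
    return round(vm_hours * vm_rate
                 + storage_tb * storage_rate
                 + egress_tb * egress_rate, 2)

# Optimistic vs pessimistic usage for one editorial seat:
low  = estimate_monthly_cost(vm_hours=160, storage_tb=20, egress_tb=1)
high = estimate_monthly_cost(vm_hours=320, storage_tb=60, egress_tb=10)
```

Because billing arrives in arrears, a production only learns whether it landed nearer `low` or `high` after the money is spent, which is exactly the predictability gap versus renting fixed-price hardware.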

MovieLabs 2030 Vision Principle 2
  1. Too few cloud-aware/cloud-native apps, which necessitates a continued reliance on moving files (into clouds, between regions, between clouds, out of clouds).

    Example: An editor wants to use a cloud SaaS platform for cutting their next show, but the assets are stored in another cloud, the dailies system providing reference clips is on a third, and the other post vendors are using a private cloud.

    We’re making great progress with getting individual applications and processes to move to the cloud, but we’re in a classic “halfway” stage where it’s potentially more expensive and time consuming to have some applications/assets operating in the cloud and some not. That requires moving assets into and out of a specific cloud to take advantage of its capabilities – and, if certain applications or processes are available only in one cloud, moving those assets specifically to that cloud, which is the sort of “assets chasing tasks” from the offline world that this principle was designed to avoid in the cloud world. We need to keep pushing forward with modern applications that are multi-cloud native and can migrate seamlessly between clouds to support assets stored in multiple locations. We understand this is not a small task or one that will be quick to resolve. In addition, many creative artists use macOS, which is not broadly available in cloud instances or virtualizable to run on myriad cloud compute types.

  2. Audio post-production workflows (e.g., mixing, editing) are not natively running in the cloud.

    Example: A mixer wants to work remotely on a mix with 9.1.6 surround sound channels that are all stored in the cloud. However, most cloud-based apps only support 5.1 today, and the audio and video channels are streamed separately, so the sync between the audio and the video can be “soft” in a way that makes it hard to know whether the audio is truly playing back in sync.

    The industry has made great strides in developing technologies to enable final color (up to 12-bit) to be graded in the cloud, but now similar attention needs to be paid to the audio side of the workflows. Audio artists can be dealing with thousands, or even tens of thousands, of small files, and they have unique challenges which need to be resolved to enable all production tasks to be completed in the cloud without downloading assets to work remotely. The audio/video sync and channel-count challenges above are just illustrative of the clear need for investment in and support of both audio and video cloud workflows simultaneously to get to our “ProductionLandia,” where both can be happening concurrently on the same cloud asset pool.

MovieLabs 2030 Vision Principle 3
  1. Lack of communication between cross-organizational systems (AKA “too many silos”) and inability to support cross-organizational workflows and access.

    Example: A director uses a cloud-based review and approval system to provide notes and feedback on sequences, but today that system is not connected to the workflow management tools used by her editorial department and VFX vendors, so the notes need to be manually translated into work orders and media packages.

    As discussed above, we’re in a transition phase to the cloud, and as such we have some systems that may be able to receive communication (messages, security permission requests) and commands (API calls), whereas other systems are unaware of modern application and control plane systems. Until we have standard systems for communicating (both routing and common payloads for messages and notifications) and a way for applications to interoperate between systems controlling different parts of the workflow, we’ll have ongoing issues with cross-organizational inefficiencies. See the MovieLabs Interoperability Paper for much more on how to enable cross-organizational interop.

MovieLabs 2030 Vision Principle 4
  1. No common way to describe each studio’s archival policy for managing long term assets.

    Example: Storage service companies and MAM vendors need to customize their products to adapt to each different content owner’s respective policies and rules for how archival assets are selected and should be preserved.

    The selection of which assets need to be archived and the level of security robustness, access controls, and resilience are all determined by studio archivists depending on the type of asset. As we look to the future of archives we see a role for a common and agreed way of describing those policies so any software storage system, asset management or automation platform could read the policies and report compliance against them. Doing so will simplify the onboarding of new systems with confidence.
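A machine-readable policy of the kind described here might look like the sketch below, which any storage or MAM system could evaluate an asset against and report compliance. The schema, field names, and thresholds are hypothetical; no common policy format has yet been agreed.

```python
# Sketch of a machine-readable archival policy. The schema and the
# numbers are invented for illustration; each studio would publish
# its own rules in whatever common format the industry agrees on.

POLICY = {
    "original_camera_files": {"min_copies": 3, "geo_separate": True,
                              "fixity_check_days": 90},
    "final_masters":         {"min_copies": 2, "geo_separate": True,
                              "fixity_check_days": 180},
}

def compliant(asset_type, copies, geo_separate, days_since_fixity):
    """Report whether an asset's current storage state satisfies the
    studio's published policy for that asset type."""
    rule = POLICY[asset_type]
    return (copies >= rule["min_copies"]
            and (geo_separate or not rule["geo_separate"])
            and days_since_fixity <= rule["fixity_check_days"])
```

The point of a common format is that this `compliant` check would be written once by each vendor, rather than customized per studio.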

MovieLabs 2030 Vision Principle 5
  1. Challenges of measuring fixity across storage infrastructures.

    Example: Each studio runs a checksum against an asset before uploading it to long-term storage. Even though storage services and systems run their own checks for fixity, those checksums or other mechanisms are likely different from the studios’ and not exposed to end clients. So instead, the studio needs to run its own checks for digital degradation by occasionally pulling the file back out of storage and re-running the fixity check.

    As there’s no commonality between the fixity systems used in major public clouds, private clouds, and storage systems, the burden of checking that a file is still bit-perfect falls on the customer, who incurs the time, cost, and inconvenience of pulling the file out of storage, rehashing it, and comparing it to the originally recorded hash. This process is an impediment to public cloud storage and the efficiencies it offers for the (very) long-term storage of archival assets.

  2. Proprietary formats for many essence and metadata file types complicate long-term archiving.

    Example: A studio would like to maintain original camera files (OCF) in perpetuity as the original photography captured on set, but the camera file format is proprietary, and tools may not be available in 10, 20, or 100 years’ time. The studio needs to decide if it should store the assets anyway or transcode them to another format for the archive.

    The myriad proprietary files and formats in our industry contain critical information for applications to preserve creative intent, history, or provenance, but that proprietary data becomes a problem if it is necessary to open a file years or decades later, perhaps after the software is no longer available. We have a few current and emerging examples of public specifications, standards, and open source software that can enable perpetual access in some areas, but the industry has been slow to appreciate the legacy challenge of preserving access to this critical data in the archive.
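Returning to the fixity gap above, the round trip each studio currently runs itself can be sketched in a few lines; what a common, provider-exposed fixity interface would eliminate is the need to download and rehash at all. This uses SHA-256 as the example hash purely for illustration; studios may use other fixity mechanisms.

```python
# Minimal fixity check: hash the bytes at archive time, re-hash after
# retrieval, compare. Illustrative only; the data here is simulated.

import hashlib

def fingerprint(data):
    return hashlib.sha256(data).hexdigest()

original = b"simulated OCF bytes"
recorded_hash = fingerprint(original)           # stored at archive time

retrieved = original                            # pulled back from storage
assert fingerprint(retrieved) == recorded_hash  # bit-perfect

corrupted = b"simulated OCF byteZ"              # one flipped byte
```

If providers exposed a common fingerprint computed with an agreed algorithm, the comparison could happen against the provider's reported value without ever egressing the file.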

In the final part of this blog series, we’ll address the gaps remaining within the Principles covering Security and Identity and Software-Defined Workflows… Stay Tuned…

MovieLabs releases Visual Language v1.2 with expanded coverage across on-set production, networking and security
https://movielabs.com/movielabs-releases-visual-language-v1-2-with-expanded-coverage-across-on-set-production-networking-and-security/
Thu, 24 Aug 2023 00:32:04 +0000
New Version 1.2 continues to expand breadth and depth of the Visual Language for Media Creation

The post MovieLabs releases Visual Language v1.2 with expanded coverage across on-set production, networking and security appeared first on MovieLabs.


When we launched the Visual Language for Media Creation, we had no idea how well received it would be or that we’d still be expanding it 2 years later. But we’re hearing from organizations that they appreciate the common approach to designing workflows and diagrams that can be immediately interpreted by their colleagues using a shared visual language. We’re also now starting to see software tools natively supporting the Visual Language and the icons within it for their interfaces, which is also exciting.

So today we’re announcing an expansion of the language into new areas that were requested by tool makers, member studios, the Hollywood Professional Alliance (HPA) and the SMPTE RIS On-Set Virtual Production group. The focus for v1.2 was production technology terms and icons around on-set and virtual production workflows. We also added some additional terms and icons to help diagram hybrid cloud/on-prem workflows and some security services for CSAP.

Here are the key highlights:

  • Production Infrastructure: Several new terms to add the following to your workflows: Asset Manager, Encoder, LED Lighting and Display, Head Mounted Display, LIDAR, Motion Capture, Motion Control, Renderer, Video Router, and Video Switch.
  • Network Infrastructure: Several new terms for infrastructure network views: Firewall, Mobile Device, and Network Switch.
  • Security Services: Several new icons for existing CSAP terms for services: Authentication Service, Authorization Service, and CSAP service.
  • Realtime and Time Critical Master Shapes: New master shapes to indicate that workflows or processes are real-time or time critical.

Later this fall, we’ll be adding new extensions to the Visual Language, so stay tuned to our announcements on LinkedIn or Twitter. You can also see all of the icons which have a defined term in the MovieLabs vocabulary on our documentation site at: Vocabulary | MovieLabs. As a reminder, MovieLabs also provides templates with example workflows for use in major design tools like Visio, PowerPoint, Keynote, and Lucidchart.


An example of a workflow featuring new icons and terms from MovieLabs Visual Language v1.2

Please reach out to MovieLabs at office@movielabs.com to let us know how you’re implementing the visual language and if there are specific expansions you’d like us to address next.

Are we there yet? Part 1
https://movielabs.com/are-we-there-yet-part-1/
Wed, 26 Jul 2023 16:13:10 +0000
Gap Analysis for the 2030 Vision

The post Are we there yet? Part 1 appeared first on MovieLabs.


It’s mid-2023, and we’re about 4 years into our odyssey towards “ProductionLandia” – an aspirational place where video creation workflows are interoperable, efficient, secure-by-nature, and seamlessly extensible. It’s the destination; the 2030 Vision is our roadmap to get there. Each year at MovieLabs we check the industry’s progress towards this goal, adjusting focus areas and generally providing navigation services to ensure we all arrive in port in ProductionLandia at the same time, with a suite of tools, services, and vendors that work seamlessly together. As part of that process, we take a critical look at where we are collectively as an M&E ecosystem – and what work still needs to be done. We call this “gap analysis.”

Before we leap into the recent successes and the remaining gaps, let’s not bury the lede: while there has been tremendous progress, we have not yet achieved the 2030 Vision (that’s not a negative; we have a lot of work to do, and it’s a long process). So, despite some bold marketing claims from some industry players, there’s a lot more to the original 2030 Vision white paper than lifting and shifting some creative processes to the cloud, the occasional use of virtual machines for a task, or a couple of applications seamlessly passing a workflow process between each other. The 2030 Vision describes a paradigm shift that starts with a secure cloud foundation, reinvents our workflows to be composable and more flexible, removing the inefficiencies of the past, and includes the change management that is necessary to give our creative colleagues the opportunity to try, practice, and trust using these new technologies on their productions. The 2030 Vision requires an evolution in the industry’s approach to infrastructure, security, applications, services, and collaboration, and that was always going to be a big challenge. There’s still much to be done to achieve dynamic and interoperable software-defined workflows built with cloud-native applications and services that securely span multi-cloud infrastructures.

Status Check

But even though we are not there yet, we’re actually making amazing progress from where we started (albeit with a global pandemic to give a kick of urgency to our journey!). So many major companies, including cloud services companies, creative application tool companies, creative service vendors, and other industry organizations, have now backed the 2030 Vision; it is no longer just the strategy of the major Hollywood studios but has truly become the industry’s “Vision.” The momentum is behind the vision now, and it’s building – as is evident in the 2030 Showcase program that we launched in 2022 to highlight and share 10 great case studies where companies large and small are demonstrating Principles of the Vision that are delivering value today.

We’ve also seen the industry respond to our previous blogs on gaps, including what was missing around remote desktops for creative applications, software-defined workflows, and cloud infrastructures. We can now see great progress with camera-to-cloud capture, automated VFX turnovers, final color pipelines that are now technically possible in the cloud, amazing progress on real-time rendering and iteration via virtual production, creative collaboration tools, and more applications opening their APIs to enable new and unpredictable innovation.

Mind the Gaps

So, in this blog series, let’s look at what’s still missing. Where should the industry now focus its attention to keep us moving and to accelerate innovation and the collective benefits of a more efficient content creation ecosystem? We refer to these challenges as “gaps” between where we are today and where we need to be in “ProductionLandia.” When we succeed in delivering the 2030 Vision, we’ll have closed all of these gaps. As we analyze where we are in 2023, we see these gaps falling into the 3 key categories from the original vision (Cloud Foundations, Security and Identity, Software-Defined Workflows), plus 3 underlying ones that bind them all together:

image: 3 key categories from the original vision (Cloud Foundations, Security and Identity, Software-Defined Workflows), plus 3 underlying ones that bind them all together

In Part 1 of this blog we’ll look at the gaps related to these areas. In Part 2 we’ll look at the gaps we view as most critical for achieving each of the principles of the vision. But let’s start with the binding challenges that link them all.

It’s worth noting that some gaps involve fundamental technologies (a solution doesn’t exist, or a new standard or open source project is required); some are implementation focused (e.g., the technology exists but needs to be implemented/adopted by multiple companies across the industry to be effective – our cloud security model CSAP is an example where a solution is now ready to be implemented); and some are change management gaps (e.g., we have a viable solution that is implemented, but we need training and support to effect the change). We’ve steered clear of gaps that are purely economic in nature, as MovieLabs does not get involved in those areas. It’s also worth noting that some of these gaps and solutions are highly related, so we need to close some to support closing others.

Interoperability Gaps

  1. Handoffs between tasks, teams and organizations still require large scale exports/imports of essence and metadata files, often via an intermediary format. Example: generation of proxy video files for review/approval of specific editorial sequences. These handovers are often manual, introducing the potential for errors, omissions of key files, security vulnerabilities and delays. (See note [1].)
  2. We still have too many custom point-to-point implementations rather than off-the-shelf integrations that can be simply configured and deployed with ease. Example: An Asset Management System currently requires many custom integrations throughout the workflow, which makes changing it out for an alternative a huge migration project. Customization of software solutions adds complexity and delay and makes interoperability considerably harder to create and maintain.
  3. Lack of open, interoperable formats and data models. Example: many applications create and manage their own sequence timeline for tracking edits and adjustments instead of rallying around open equivalents like OpenTimelineIO for interchange. For many use cases, closing this gap requires the development of new formats and data models, and their implementation.
  4. Lack of standard interfaces for workflow control and automation. Example: workflow management software cannot easily automate multiple tasks in a workflow by initiating applications or specific microservices and orchestrating their outputs to feed a new process. Although we have automation systems in some parts of the workflow, the lack of standard interfaces again means that implementors frequently have to write custom connectors to get applications and processes to talk to each other.
  5. Failure to maintain metadata and a lack of common metadata exchange across components of the larger workflow. Example: passing camera and lens metadata from on-set to post-production systems for use in VFX workflows. Where no common metadata standards exist, or have not been implemented, systems rarely pass on data they do not need for their specific task, as they have no obligation to do so or don’t know which target system may need it. A more holistic system design, however, would enable non-adjacent systems to find and retrieve metadata and essence from upstream processes and to expose data to downstream processes, even if they do not know what it may be needed for.
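The metadata gap suggests a simple design remedy: each system consumes the fields it understands and forwards the rest untouched, so upstream data still reaches non-adjacent downstream systems. A minimal sketch in Python (all field names here are our own illustration, not drawn from any published standard):

```python
def dailies_stage(payload: dict) -> dict:
    """Consume only the fields this stage understands; forward everything else.

    Illustrative only: field names are invented, not from any standard.
    """
    known = {"source_clip", "codec"}
    consumed = {k: v for k, v in payload.items() if k in known}
    passthrough = {k: v for k, v in payload.items() if k not in known}
    # Stage-specific work: derive a proxy reference from the fields it knows.
    result = {"proxy": f"{consumed['source_clip']}_proxy.{consumed['codec']}"}
    # Lens metadata this stage never uses still reaches VFX downstream.
    return {**result, **passthrough}

camera_msg = {"source_clip": "A001_C002", "codec": "h264",
              "lens_focal_mm": 35, "lens_serial": "LX-4471"}
downstream = dailies_stage(camera_msg)
```

A downstream VFX system can now read `lens_focal_mm` even though the dailies stage itself had no use for it.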

Operational Support

  1. Our workflows, implementations and infrastructures are complex and typically cross the boundaries of any one organization, system or platform. Example: a studio shares essence and metadata with external vendors to host on their own infrastructure tenants, but also less structured elements such as work orders (definitions of tasks), context, permissions and privileges. There is therefore a need for systems integrators and implementors to take the component pieces of a workflow and design, configure, host, and extend them into complete ecosystems. These cloud-based and modern software components will be very familiar to IT systems integrators, but they need skills and understanding of our media pipelines to know how to implement and monetize them in a way that will work in our industry. We therefore have a mismatch gap between those that understand cloud-based IT infrastructures and software and those that understand the complex media assets and processes that need to operate on those infrastructures. There are few companies to choose from with the right mixture of skills to understand both cloud and software systems and media workflow systems, and we’ll need a lot more of them to support the industry-wide migration.
  2. We also need systems that match our current support models. Example: a major movie production can be simultaneously operating across multiple countries and time zones in various states of production, and any downed system can cause backlogs in otherwise smooth operations. The media industry works some unusual and long hours, at strange times of the day and across the world – demanding a support environment staffed by specialists who understand the challenges of media workflows, not just an IT ticket that will be resolved when weekday support comes in at 9am on Monday. In the new 2030 world, these problems are compounded by the shared nature of the systems, so it may be hard for a studio or production to understand which vendor is responsible if (when) there are workflow problems. Who do you call when applications and assets seamlessly span infrastructures? How do you diagnose problems?

Change Management

  1. Too few creatives have tried and successfully deployed new ‘2030 workflows’ to be able to share and train others. Example: parts of the workflow like dailies have migrated successfully to the cloud, but we’re yet to see a major production running “camera to master” in the cloud – who will be the first to try it? Change management comprises many steps before new processes are considered “just the way we do things.” The main ones we need to get through are:
    • Educating and socializing the various stakeholders about the benefits of the 2030 vision, for their specific areas of interest
    • Involving creatives early in the process of developing new 2030 workflows
    • Then demonstrating value of new 2030 workflows to creatives with tests, PoCs, limited trials and full productions
    • Measuring cost/time savings and documenting them
    • Sharing learnings with others across the industry to build confidence.

Shortly, we’ll add Part 2 to this blog, which will add to the list of gaps with those most applicable to each of the 10 Principles of the Vision. In the meantime, there are eight gaps here for the industry to start thinking about – and do please let us know if you think you already have solutions to these challenges!

[1] The Ontology for Media Creation (OMC) can assist in common payloads for some of these files/systems.

The post Are we there yet? Part 1 appeared first on MovieLabs.

]]>
Turning the Spotlight on the Showcase https://movielabs.com/turning-the-spotlight-on-the-showcase/?utm_source=rss&utm_medium=rss&utm_campaign=turning-the-spotlight-on-the-showcase Thu, 15 Jun 2023 21:44:01 +0000 https://movielabs.com/?p=13071 Reflections on the 2030 Showcase Program

The post Turning the Spotlight on the Showcase appeared first on MovieLabs.

]]>

We just opened submissions for the second year of the MovieLabs 2030 Showcase program. You can see the existing case studies we selected and posted last year on our website at www.movielabs.com/2030showcase. These first selections are the start of a library of learnings that MovieLabs will host featuring case studies from companies on the bleeding edge of innovation that have been willing to share their journey, their challenges, and the lessons they learned in building solutions which deliver on the 2030 Vision principles today.

While we can’t accept every qualifying submission, our goal is to highlight what we believe to be the best examples of the principles being put into practice today, ones that complement the rest of the case study library. As we approached this year’s showcase, we wanted to be as transparent as possible about the selection criteria we use to narrow down the list of submissions, and especially how we evaluate implementations that adopt aspects of the 10 principles. So, in this blog I’ll share what we’ve been doing to ensure that all the 2023 entrants have the opportunity to prepare their best possible submission.

The purpose of the 2030 Showcase program is to enable MovieLabs to highlight case studies from organizations, large and small, that are demonstrating delivery against the 10 principles of the 2030 Vision today. We don’t want anyone to think that the industry should wait until 2029 to deliver on the 2030 Vision; in fact, we’ve seen great progress in many areas, and the Showcase program allows us to celebrate that work. However, it’s also clear that there’s more work to be done (see our upcoming blog on the current open gaps we have identified), so let’s not suggest that all of the principles can be fully achieved today, as that’s not yet the case.

Showcase Submission Best Practices

Before we look into how we interpret the principles, let’s just recap some best practices for the Showcase program:

  • The Showcase program is not an awards program. MovieLabs does not promote or provide awards to companies or products. The Showcase highlights case studies of actual implementations with real workflows that demonstrate progress towards the 2030 Vision. So while it’s not an award ascribed to products or the companies that built them, the program is designed to share compelling examples of more efficient, secure and interoperable workflows for the benefit of the industry.
  • It’s not about products or companies, but about the case studies. Last year we had to decline some submissions because they didn’t contain an actual case study but rather a demonstration of how the whole company or product aligns to the 2030 Vision. While that is great to hear (and you should blog about it yourselves!!), the Showcase program is about being able to demonstrate real-world implementations of the 2030 Principles, describe the key learnings, include benefits achieved with real metrics, and show how problems were solved or new capabilities were achieved on real workflows.
  • Less is sometimes more. We have 10 principles and we’re looking for alignment against one or more of them, certainly not all 10. We don’t believe that’s even possible in 2023 (see below) because fulfilling some of these principles (like Principles 5 and 6) will require an entire industry effort. So, we suggest focusing on a really deep story around a handful of principles that can be well documented rather than trying to make a more tenuous case in an attempt to include more of them.
  • The principles are at a high level – the real detail is in the whitepaper. We often abbreviate the 10 Principles of the 2030 Vision to make them easier to convey quickly, but when assessing which principles your case study reached you should also consult the more detailed descriptions of each in the original 2030 white paper. There are also areas where we have provided subsequent material, like additional whitepapers (security, software-defined workflows) and blogs (ontology, interop, zero trust, etc.), which are also worth studying to see how we interpret the principles themselves.

Objective Analysis

The 10 Principles of the 2030 Vision were never designed to be used as the basis of a certification program; rather, they are high level guiding philosophies, so it can be challenging even for us to measure objectively against them! But we also want to remove subjectivity wherever possible when we’re looking at case studies, so for full transparency we’re listing below what we believe are good demonstrations of the 2030 Vision principles at the present moment in time. That’s important because as we get closer to 2030 we hope to raise the bar in our assessments, but at this stage we want to give room for features that are moving us in the right direction, even if they don’t reach the letter of the 2030 Vision just yet.

For each principle we list examples that demonstrate it today, and examples that don’t.

Principle 1: All assets are created or ingested straight to the cloud and do not need to move.

Examples that demonstrate:
  • Workflows in which all content is created or uploaded to shared cloud (public, private, hybrid) storage where applications and services can access it
  • The above using systems that bridge file/object boundaries, use global namespaces, or remove boundaries between clouds, silos, and domains

Examples that don’t demonstrate:
  • Fast file movers between clouds or private infrastructure
  • Fast uploaders to cloud
  • Cross-connect services that open up access between clouds
  • Single purpose SaaS applications (e.g., cloud-based video clipping or annotation tools)

Principle 2: Applications come to the media.

Examples that demonstrate:
  • A workflow including multiple tools and services accessing content in the same storage, e.g., all of pre-production, editorial, or VFX. (At this stage we feel it’s too hard for end-to-end workflows to have all applications coming to the media, but we’d like to see entire department or pipeline workflows that do.)
  • High-performance, cloud-based workstations working together on shared media
  • Creative applications (even via plug-ins) directly accessing and manipulating cloud assets without copying them locally. (Exemptions for caching for speed/latency may be OK in some cases.)
  • Workflows with systems using block storage to automatically cache required assets from object storage when necessary for performance and then return any changes back to object store afterwards

Examples that don’t demonstrate:
  • Vendors or services downloading cloud assets, doing their work offline, and reuploading variants later
  • Using local storage area networks behind a firewall or private network

Principle 3: Propagation and distribution of assets is a ‘publish’ function.

Examples that demonstrate:
  • Propagating assets as a publish function without moving them, using notifications (to people and/or services) and changes to access controls, if needed
  • The above happening automatically upon the completion of a task
  • Stretch goal – automatically removing permissions for those no longer needing access

Examples that don’t demonstrate:
  • Manually notifying downstream users, tasks, or services (e.g., sending emails)
  • Manually changing permissions for publication
  • Moving assets to participants performing the next task in the workflow. (Creating unique deliverables in an accessible location, e.g., a transcode required only for the next task, is OK.)

Principle 4: Archives are deep libraries with access policies matching speed, availability and security to the economics of the cloud.

Examples that demonstrate:
  • Storing archive assets in the cloud with an easily searchable system that makes them an accessible library
  • Other systems accessing the library to surface archive assets more broadly
  • Documenting archive rules in policies that drive automation of the archive

Examples that don’t demonstrate:
  • Fast indexing of media assets, with or without AI
  • Bucket-to-bucket movement of files for cost optimization across cloud tiers

Principle 5: Preservation of digital assets includes the future means to access and edit them.

Examples that demonstrate:
  • Using open standards or formats that can make essence files accessible over very long time periods
  • Stretch goal – the above that also makes them modifiable

Principle 6: Every individual on a project is identified, verified and their access permissions efficiently and consistently managed.

Examples that demonstrate (the industry-wide Production User ID system posited in the paper does not currently exist, so we have narrowed the scope for now):
  • Using a single or federated identity management system with a single identity for each user across a workflow that integrates multiple vendors or a significant number of tools and services from different providers (e.g., all of pre-production, editorial, or VFX)
  • The above, possibly with more than one identity used for authentication, but with all authorization policy management tied to a single common user identifier

Examples that don’t demonstrate:
  • Using an organization or team identity to access cloud media/storage/services without enforcing granular access controls for individual team members
  • Simply using Single Sign-On or Federated Login for some services
  • Simply using the same login name across multiple identity management systems

Principle 7: All media creation happens in a highly secure environment that adapts rapidly to changing threats.

Examples that demonstrate:
  • Implementing and using a CSAP Zero Trust Foundation for a significant workflow
  • Workflow management systems changing authorization policies based on task assignment and completion
  • Using a single machine-readable language for communicating authorization policies across systems
  • Using asset level encryption and key management systems

Examples that don’t demonstrate:
  • Workflows relying on perimeter security

Principle 8: Individual media elements are referenced, tracked, interrelated and accessed using a universal linking system.

Examples that demonstrate:
  • Managing assets using complex relationships to other assets, context, tasks, and participants
  • Implementing the Ontology for Media Creation in a workflow

Principle 9: Media workflows are non-destructive and dynamically created using common interfaces, underlying data formats and metadata.

Examples that demonstrate:
  • Dynamically configuring or creating workflows based on asset inputs, outputs, or policies
  • Workflows that spin up, shut down or trigger additional tasks automatically without human intervention
  • Using systems to translate human readable workflows into machine readable workflows, and vice versa
  • Using service meshes or messaging buses to handle notifications throughout a multi-step pipeline

Examples that don’t demonstrate:
  • Rigid workflows using pre-determined data flows or decision making

Principle 10: Workflows are designed around real-time iteration and feedback.

Examples that demonstrate:
  • Workflow processes that used to take hours or overnight now happening so much faster that it changes the way work is done, for example:
    • Rendering In-Camera VFX with a game engine
    • Collaborating on complex visual assets in an Omniverse
    • Rendering conforms with final color in real-time
    • High-quality rotoscoping in real-time
    • Producing dailies immediately instead of overnight
    • Processing on a TV episode that used to take hours now only taking the 22-minute duration of the episode

Examples that don’t demonstrate:
  • Using video conferencing for collaboration, e.g., remote editing with multiple collaborators
  • An existing process, e.g., encoding, running faster now because of Moore’s law, but without it changing the process
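To make Principle 6’s “single common user identifier” concrete: since the industry-wide PUID does not exist yet, here is a purely hypothetical sketch of federating several authentication identities onto one common identifier against which all authorization policy is written (every identity provider name, login, and identifier below is invented):

```python
# Hypothetical sketch only: the industry-wide Production User ID (PUID)
# does not exist yet, so all identifiers and IdP names here are invented.
FEDERATION = {
    ("studio-idp", "jane.doe@studio.example"): "puid:0042",
    ("vendor-okta", "jdoe@vfxhouse.example"): "puid:0042",
}

# Authorization policy is written once, against the common identifier.
POLICY = {"puid:0042": {"read:dailies", "write:editorial"}}

def authorized(idp: str, subject: str, permission: str) -> bool:
    """Map (identity provider, login) to the common ID, then check policy."""
    puid = FEDERATION.get((idp, subject))
    return puid is not None and permission in POLICY.get(puid, set())
```

Both logins resolve to the same person, so granting a permission once covers her whether she authenticates through the studio IdP or the vendor IdP.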
We’re always interested in seeing examples of case studies that demonstrate any of these principles, although this year we have a special interest in 3 key MovieLabs focus areas:

For more information on the 2030 Showcase program and how to apply visit: www.movielabs.com/showcase-submissions. We’re looking forward to seeing your submissions for the 2030 Showcase!


New Visual Language Templates https://movielabs.com/new-visual-language-templates/?utm_source=rss&utm_medium=rss&utm_campaign=new-visual-language-templates Thu, 27 Apr 2023 00:01:30 +0000 https://movielabs.com/?p=12636 Expanded support for new diagram tools to bring the MovieLabs Visual Language for Media Creation to wherever you work

The post New Visual Language Templates appeared first on MovieLabs.


To assist the industry in easily picking up and using the Visual Language for Media Creation, we have created templates for major workflow design applications and services. In April 2023, we introduced support for Apple Keynote, OmniGraffle (for Mac) and Autodesk AutoCAD (for Windows and Mac). All of these templates include the basic shapes (representing Assets, Infrastructure, Tasks, Participants and Contexts) as well as the icons, lines and arrowheads. We also include some recommendations and samples. We’ll keep these templates updated as the Visual Language continues to expand.

Below is a summary of the new and existing templates available now on the Visual Language page. More detailed information and notes for developers looking to integrate visual language into their apps/services are available on our documentation site.

Keynote Template

NEW – Apple “Keynote” for Mac/iOS

For Keynote, we have imported and organized all the shapes, icons, and lines so you can copy and paste them into your presentation. We have included several slides about the language, some examples, and slides showing all the icons with their terms.

Omnigraffle Template

NEW – OmniGroup “Omnigraffle” for Mac

For Omnigraffle, we have imported and organized all of the shapes so they are ready to be used natively within the application or shared within your facility. You can easily copy and format the shapes and connect them via native lines. We have also created a sample document.

Microsoft PowerPoint and Visio Templates

Microsoft “PowerPoint” and “Visio” for Windows & Mac

The existing templates have been updated to v1.2 with more examples and the latest icons.

All Assets and Templates

Get all the assets and templates listed above in a zip file download.

Lucid Software’s “LucidChart”

In addition, we can share the MovieLabs library of icons and asset shapes for the LucidChart online SaaS service. To request access to the template, email us at lucid@movielabs.com so we can share permissions.

Contact Us

We’d love to know what you’re using the Visual Language for, and whether you have specific feedback or requests for extra features or icons! Please email us at office@movielabs.com.


AWS Blog Series on Mapping CSAP to AWS Services https://movielabs.com/aws-csap/?utm_source=rss&utm_medium=rss&utm_campaign=aws-csap Thu, 13 Oct 2022 15:41:18 +0000 https://movielabs.com/?p=11476 The post AWS Blog Series on Mapping CSAP to AWS Services appeared first on MovieLabs.



MovieLabs Launches 2030 Showcase Program to Help Advance Strategic Growth in the M&E Industry https://movielabs.com/movielabs-launches-2030-showcase-program-to-help-advance-strategic-growth-in-the-me-industry/?utm_source=rss&utm_medium=rss&utm_campaign=movielabs-launches-2030-showcase-program-to-help-advance-strategic-growth-in-the-me-industry Wed, 29 Jun 2022 16:54:14 +0000 https://movielabs.com/?p=10689 Designed to Highlight Organizations Implementing the Principles of the 2030 Vision, Enabling Interoperability, and Responding to the Changing Media Landscape

The post MovieLabs Launches 2030 Showcase Program to Help Advance Strategic Growth in the M&E Industry appeared first on MovieLabs.


San Francisco, June 29, 2022 – MovieLabs, the technology joint venture of the major Hollywood motion picture studios, has launched the 2030 Showcase Program to recognize organizations in the Media & Entertainment industry that are applying emerging cloud and production technologies to advance the industry in reinventing the media creation ecosystem and to help realize the MovieLabs 2030 Vision and its goals of enhanced efficiency and interoperability.

Organizations that can demonstrate how they are implementing aspects of the MovieLabs 2030 Vision and moving from “principles to practice” are invited to submit their case studies to the 2030 Showcase Program. Submissions that demonstrate significant alignment with the 2030 Vision will be included in the MovieLabs 2030 case studies hosted on MovieLabs.com and may be referenced by MovieLabs at events, trade shows and speeches. Additionally, select 2030 Showcase participants will also be invited to present their case study at a private event with production technology leaders from across the MovieLabs studio members – Paramount Pictures, Sony Pictures, Universal Pictures, Walt Disney Studios, and Warner Bros. Discovery.

The MovieLabs 2030 Vision lays out 10 key principles for the evolution of media creation; Showcase participants should align with one or more of them:

1. All assets are created or ingested straight to the cloud and do not need to move.
2. Applications come to the media.
3. Propagation and distribution of assets is a ‘publish’ function.
4. Archives are deep libraries with access policies matching speed, availability and security to the economics of the cloud.
5. Preservation of digital assets includes the future means to access and edit them.
6. Every individual on a project is identified, verified and their access permissions efficiently and consistently managed.
7. All media creation happens in a highly secure environment that adapts rapidly to changing threats.
8. Individual media elements are referenced, tracked, interrelated and accessed using a universal linking system.
9. Media workflows are non-destructive and dynamically created using common interfaces, underlying data formats and metadata.
10. Workflows are designed around real-time iteration and feedback.

To enter the 2030 Showcase program, interested organizations are invited to submit a short video (<5 mins) describing their work and illustrating how it aligns with one or more of the 10 principles of the MovieLabs 2030 Vision listed above. The deadline for submissions is midnight on July 29. Entrants into this year’s 2030 Showcase program will be evaluated by MovieLabs in time to be highlighted at IBC in Amsterdam this September. Application videos should be uploaded to a file sharing/cloud service and a link to the submission emailed to ShowcaseProgram@movielabs.com. More details of the program and submission criteria are available at www.movielabs.com/2030Showcase.

Richard Berger, CEO MovieLabs, said: “Since we launched the MovieLabs 2030 Vision in 2019, we have been working with M&E businesses to share the dramatic changes the media industry is going through and explain the strategic importance of working together and investing in a unified approach to harness these emerging technologies in a way that enhances efficiency and interoperability. Many companies have incorporated the 2030 principles into their product road maps and strategic vision. The 2030 Showcase is our opportunity to highlight and recognize these companies and their progress, and to help the rest of the industry come together to build a more efficient future as we create modern global content at scale to satisfy consumer demand.”

About MovieLabs

Motion Picture Laboratories, Inc. (MovieLabs) is a non-profit technology research lab jointly run by Paramount Pictures Corporation, Sony Pictures Entertainment Inc., Universal Pictures, Walt Disney Pictures and Television, and Warner Bros. Entertainment Inc.

MovieLabs enables member studios to work together to understand new technologies and enhance interoperability and efficiency. We help set the bar for future technology advancement and then define voluntary specifications, standards, and workflows that deliver the industry’s vision. Our goal is always to empower storytellers with new technologies that help deliver the best of future media for consumers.

For more information, please contact:

Clare Plaisted
PRComs
Tel: +1 703 300 3054
clare@prcoms.com


PSST… I have to tell you something – Part 1 https://movielabs.com/psst-i-have-to-tell-you-something/?utm_source=rss&utm_medium=rss&utm_campaign=psst-i-have-to-tell-you-something Wed, 25 May 2022 23:59:29 +0000 https://movielabs.com/?p=10562 Sending messages through workflows - Part 1

The post PSST… I have to tell you something – Part 1 appeared first on MovieLabs.


Clear communication is critical to the content creation process. And while today’s productions somehow manage to compensate for inefficient communication mechanisms, there is a growing and urgent need to streamline the way we communicate and exchange information as we continue to scale up to meet the increasing demand for content. In our blog “Cloud. Work. Flows”, we identified some missing components that are required to enable software-defined workflows. We highlighted that a more efficient messaging system will be critical to improve communication between participants (which could be people or machines) in a complex workflow system. We’ve addressed communication elements in the Ontology for Media Creation, which covers some aspects of what needs to be communicated. Recently we’ve been turning our attention to how to express that communication in the most efficient manner.

For example, the first principle of the 2030 Vision states that content goes straight to the cloud and does not need to be moved. Once ingestion to the cloud has completed, the first participants in the chain will need to be notified that the content is now in the cloud and ready to be worked on, ideally with a location for that content. A similar workflow notification message is required when a task has been finished and the work is ready for review by another team member. In this post we’ll discuss the benefits of a common approach to communicating these repeating types of workflow messages. In a subsequent post we’ll get into the technicalities of how we think such a system could be built, including considerations to enable it to span cloud infrastructures and tools.
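As a sketch of the “content has landed” notification described above, here is the kind of small structured payload such a message might carry (the field names are our own illustration, not a published MovieLabs schema):

```python
import json

# Illustrative ingest-complete notification; every field name here is
# hypothetical, not taken from any published schema.
ingest_complete = {
    "type": "asset.ingest.complete",
    "asset_id": "asset-8f31",
    "location": "s3://example-prod-bucket/ocf/A001_C002.mxf",
    "ready_for": "dailies",  # a hint for whichever participant picks this up
}

# Serialize for transport so any subscriber, human tool or machine, can parse it.
wire_format = json.dumps(ingest_complete)
```

Because the payload names the asset’s cloud location, the next participant can start work without the essence ever moving.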

                                        The Art of the Message

                                        We need to deal with both simple messages, in near real-time, between two participants, like this:

                                        Simple Message System

                                        Synchronous Real-Time Messages between two known participants.

                                        And more complex messages, especially as we move to more automated systems, where the participants may not know who will pick up the messages and may not receive replies for hours or even days. Take, for example:

                                        Render

                                        Asynchronous Messages between one sender and multiple potential recipients.

                                        In this example the Render Manager (the message sender) doesn’t know which nodes may respond or when they may respond. There are thousands of such nuanced examples in production workflows that we need to consider when thinking about the sorts of messages that could be sent between systems. We need a messaging approach that can accommodate all of these message types, and also the complexity of multi-cloud infrastructure when messages may be flowing between systems that are not all owned/leased or operated by the same organization and on the same infrastructure.

                                        Software Messaging Systems

                                        At MovieLabs we've been thinking about approaches to these messaging problems. One approach is point-to-point API calls between all these disparate systems; while appropriate for many use cases, we don't believe this will scale to whole productions or studios – there would simply be too many custom integrations to get all the possible components of a workflow to work together[1]. We see the best way to manage the highly asynchronous delivery of information to multiple (potentially unknown to the sender) destinations is to decouple the mechanics of the communication – the what from the how. In software systems this can be managed in a more automated way using message queues[2]. A message queue allows a message to be sent blindly (the sender does not need to know specifically who will read it). Specific queues are typically associated with a particular topic; any other participant with an interest in that topic can then subscribe to the queue and receive its messages whenever it's ready.
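                                        To make that decoupling concrete, here is a minimal sketch of a topic-based broker in Python. It is purely illustrative – the `TopicBroker` class, the topic names, and the message shapes are all hypothetical, not any specific product's API:

```python
import queue
from collections import defaultdict

class TopicBroker:
    """A toy topic-based message queue: senders publish to a topic
    without knowing who (if anyone) is subscribed; subscribers read
    their own queue whenever they are ready."""

    def __init__(self):
        # topic name -> list of per-subscriber queues
        self._topics = defaultdict(list)

    def subscribe(self, topic):
        # Each subscriber gets its own queue so slow consumers
        # don't steal messages from fast ones.
        q = queue.Queue()
        self._topics[topic].append(q)
        return q

    def publish(self, topic, message):
        # The sender publishes "blindly" -- it never sees the subscribers.
        for q in self._topics[topic]:
            q.put(message)

broker = TopicBroker()
inbox = broker.subscribe("asset.ingested")
broker.publish("asset.ingested",
               {"asset": "scene_042_plate",
                "location": "s3://prod-bucket/plates/scene_042"})
print(inbox.get())  # the subscriber retrieves the message whenever it is ready
```

                                        Because the sender only knows the topic, subscribers can be added, removed, or run in different processes or clouds (behind a real broker service) without any change to the publishing code.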

                                        Message queues – or, more broadly, message systems – are a natural fit for software-defined workflows: their raison d'être is to provide a communication mechanism where senders and receivers can operate without knowing anything about each other beyond how to communicate (the message queue) and some expectations about the contents of the communication (the messages). This separation allows applications to run independently and, just as importantly, be developed independently.

                                        As long as the sending and receiving applications can both access the message queue, it doesn’t matter where the applications are running; they can be in the same cloud, in different clouds, on a workstation in a cloud, or even in two organizations. Agreeing on commonality of some aspects of message headers and message contents can enable interoperability especially in that cross-organizational use case. For example, if a message from an editorial department to a VFX house includes a commonly agreed upon place to put shot and sequence identifiers, a workflow management system at the VFX house can route that message to the appropriate recipients internally.
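                                        As a sketch of that cross-organizational example, consider a message whose commonly agreed header carries the shot and sequence identifiers, so the receiving workflow system can route it internally without parsing the sender's application-specific payload. All field names and values below are hypothetical, not a proposed standard:

```python
# A hypothetical turnover message: only the header fields need to be
# commonly agreed between the editorial department and the VFX house.
turnover_message = {
    "header": {
        "message_type": "shot.turnover",
        "sequence_id": "SEQ-010",
        "shot_id": "SH-0420",
        "sender": "editorial.studio-a",
    },
    "payload": {
        # Application-specific content the router never needs to parse.
        "notes": "New cut; please update comp to match.",
        "asset_location": "s3://prod-bucket/turnovers/SEQ-010/SH-0420",
    },
}

def route(message, routing_table):
    """Route on the agreed header fields alone, ignoring the payload."""
    key = (message["header"]["sequence_id"], message["header"]["shot_id"])
    return routing_table.get(key, "unassigned-queue")

routes = {("SEQ-010", "SH-0420"): "comp-team-queue"}
print(route(turnover_message, routes))  # -> comp-team-queue
```

                                        This mirrors the point above: as long as both organizations agree on a few header fields, the payload can remain entirely application-specific.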

                                        The use of messaging systems rather than point-to-point integrations also makes it easier to gather together operational data for logging, dashboards, and detecting/acting on exceptions and errors.

                                        Benefits

                                        As we look to the 10 Principles of the 2030 Vision, we can see that messaging is key to enabling the "publish" function (where access to files is pushed through a workflow as tasks are created) and enabling participants to "subscribe" to those files, tasks, or changes. Principles 1 and 2 of the 2030 Vision state that assets go to the cloud and do not need to move, which means the location of those files becomes a key piece of information that must be communicated between systems. By enabling a robust multi-cloud[3] ecosystem with broadly distributed and understandable messages, we hope to unlock the true flexibility of software-defined workflows. But we do not need to wait for the entirety of the 2030 Vision ecosystem to be built before we can take advantage of messaging systems – there are many use cases that can be deployed now to enable more interoperable and flexible workflows in 2022 and beyond.

                                        Next Up…

                                        In our next post we’ll discuss the software elements of a messaging system, types of messages and the use cases we hope to enable with one.  Make sure you stay tuned, so you get the message…

                                        [1] Not to say that API calls don’t have their place in interoperability. We’re very supportive of applications exposing APIs for system-to-system communication with small amounts of data or messages that need a guaranteed quickish response.

                                        [2] Message queues are a familiar element of software engineering, suited to a wide variety of inter-process communication problems. Operating systems use them internally, and business and process automation software makes heavy use of them.

                                        [3] We define “cloud” in the 2030 Vision as private, public and hybrid infrastructures connected to the internet and therefore envision productions that need to span across all of those.  See the multi-cloud blog here for more details.

                                        The post PSST… I have to tell you something – Part 1 appeared first on MovieLabs.

                                        ]]>
                                        A Vision through the Clouds of Palm Springs at the HPA Tech Retreat 2022 https://movielabs.com/a-vision-through-the-clouds-of-palm-springs-at-the-hpa-tech-retreat-2022/?utm_source=rss&utm_medium=rss&utm_campaign=a-vision-through-the-clouds-of-palm-springs-at-the-hpa-tech-retreat-2022 Tue, 08 Mar 2022 19:05:18 +0000 https://movielabs.com/?p=10538 Mark Turner reviews the 2022 HPA Panel featuring a progress update on the 2030 Vision from Autodesk, Avid, Disney, Google, Microsoft and Universal.

                                        The post A Vision through the Clouds of Palm Springs at the HPA Tech Retreat 2022 appeared first on MovieLabs.

                                        ]]>
                                        In the last week of February, entertainment technology luminaries from across the world gathered at the Hollywood Post Alliance's Tech Retreat. Rubbing their eyes as they adjusted to the bright Palm Springs sunshine after two years of working from home in pandemic-induced Zoom and Teams isolation, a sold-out crowd came together for four days of conference sessions, informal and spontaneous conversations, and advanced technology demonstrations. MovieLabs, the non-profit technology joint venture of the major Hollywood studios, was also present en masse with a series of sessions highlighting the progress and next steps toward its "2030 Vision" for the future of media creation.

                                        Seth Hallen, Managing Director of Light Iron and HPA President, who presented a panel at the HPA Tech Retreat on the recent cloud postproduction of the upcoming feature film ‘Biosphere’, commented that “this year’s Tech Retreat had a number of important themes including the industry’s continued embrace of cloud-based workflows and the MovieLabs 2030 Vision as a roadmap for continued industry alignment and implementation.”

                                        MovieLabs CEO Richard Berger was joined by a panel of technology leaders from across studios, cloud providers, and software companies to discuss how they see the 2030 Vision, what it means for their organizations, and how they are turning the vision into a shared roadmap for the whole industry. Introducing the panel, Berger provided the context for the discussion and the original vision paper: "our goal was to provide storytellers with the ability to harness new technologies, so that the only limits they face are their own imaginations, and all with speed and efficiency not possible today".

                                        Of course no discussion about the future of production technology can start without reflecting on the impacts of COVID and the opportunities for change it provides. Eddie Drake, Head of Technology for Marvel Studios, said "the pandemic accelerated our plans to go from on-prem to a virtualized infrastructure…and it created a nice environment for change management to get our users used to working in that sort of way". Jeff Rosica, CEO of Avid, summarized the pivotal opportunity this moment presents: "if we weren't aligned, if we were all off in a different direction doing our own things, we'd have a mess on our hands because this is a massive transformation. This is bigger than anything we've done as an industry before". Matt Sivertson, VP and Chief Architect, Entertainment and Media Solutions at Autodesk, is a relative newcomer to both Autodesk and the industry and explained how the 2030 Vision served as shorthand for the job description in his new role, noting that when "all your largest customers tell you exactly what they want, it's probably pretty smart to listen"; he's looking forward to seeing how "we can all collaborate together to make it a reality".

                                        The panel discussed the work done so far in cloud-based production workflows and the work still to be done. Drake of Marvel said "we're going to be working very aggressively" with both vendors and in-house software teams to accelerate cloud deployments in the areas where they see the most immediate opportunity, including set-to-cloud (where he sees tools maturing), dailies processes, the turnover process, editorial, mastering, and delivery. Michael Wise, SVP and CTO of Universal Pictures, explained they have been focusing their cloud migration on distributed 3D asset creation pipelines leveraging Azure on a global basis, initially at DreamWorks but soon on live-action features as well, all so they can leverage talent from around the world. Wise said, "As we've done that work we've been leaning into the work of MovieLabs and the ETC to make sure what we're building leverages emerging industry standards, including the ontology and interoperability work from MovieLabs and the VFX interop specs from ETC".

                                        Buzz Hays, a "recovering producer", industry post veteran, and now Global Lead, Entertainment Industry Solutions for Google Cloud, summarized the improvements we can enjoy from a cloud-based workflow: "what we're looking at is how can we make this a more efficient process and eliminate the progress bars and delays that can end up costing money?" Hanno Basse, CTO Media & Entertainment for Microsoft Azure, agreed and added "you need to rearchitect what you're doing – why are you going into the cloud?" He then listed the main reasons Microsoft is seeing for cloud migrations, including enabling global collaboration, talent using remote workstations from anywhere, and enabling a more secure workflow where all assets are protected to the same, consistent level. Picking up on the security theme, Hays challenged the perceived notion that there is a conflict between security and productivity, asking "why are those mutually exclusive?" and arguing that we should "come up with solutions that are invisible to the end user, that are secure, that tick all the boxes and are truly hybrid in nature that work on-prem and are multi-cloud". Hays went on to explain how zero-trust security, aligned with the MovieLabs Common Security Architecture for Production, works based on the notion of flipping security "inside out" to secure the core data first, rather than focusing on external perimeters and keeping bad actors out. "Ultimately," he said, "until we get to the 'single-source of truth' cloud version, then there are copies of everything flying around productions and you never get all those back".

                                        Building workflows that leverage interoperability between common building blocks was a core theme of the discussion and was embraced by all the panelists. Wise from Universal said "A bad outcome would be a 'lift and shift' from the on-premises technologies and specs and just putting them in the cloud. We've got a moment in time to make our systems interoperable…and interoperability is the key not just for asset reuse but also asset creation and distribution". Basse from Microsoft was more prescriptive about what interoperability needs to include: we have to "have the industry come together and define some common data models, common APIs, common ways of accessing the data, how that data relates to others and handing it off from one step in the workflow to the next". He gave the example of 3D assets that are typically recreated because prior versions cannot be easily discovered and shared between applications and productions. During his seven years at 20th Century Fox, the White House was destroyed in at least 10 movies and TV shows, and every time the asset was recreated from scratch. Allowing assets to be reused and made interoperable between different pipelines and applications will therefore unlock workflow efficiencies, speeding content's time to market.

                                        Basse made the case that creative applications running in the cloud on virtual machines are not the optimal solution for where we need to get to, but an interim step toward ultimately becoming SaaS-based services running on serverless infrastructure.

                                        When discussing the opportunities ahead, the panelists also agreed that no one company can make this migration by itself and that it will require work to share data and collaborate. Sivertson from Autodesk said "our intention is to be very open with data access and our APIs as the data is not ours, the data is our customers' and they should be able to decide where it goes…if providers jealously guard the data as a source of differentiation you'll probably get left behind". Rosica explained how the 2030 Vision gives Avid a common shared goal, as we've all agreed what the "desired state is and what the outcomes are that we're looking for, and that allows us to develop roadmap plans, not just for ourselves but all of our partners in the industry, as we all need to interoperate together".

                                        Interestingly, many of the themes explored in the HPA Tech Retreat panel echo the key learnings of MovieLabs' latest paper in the 2030 Series, An Urgent Memo to the C-Suite, which explains how investments in production technology can enable the time savings, efficiencies, and workflow optimizations of a cloud-centric, automatable, software-defined workflow. It will certainly be interesting to see how far the industry has come on the 2030 journey by the HPA Tech Retreat 2023 – hopefully without the masks and COVID protocols!


                                        The post A Vision through the Clouds of Palm Springs at the HPA Tech Retreat 2022 appeared first on MovieLabs.

                                        ]]>