Are we there yet? Part 3
Gap Analysis for the 2030 Vision

In this final part of our blog series on the current gaps between where we are now and realizing the 2030 Vision, we'll address the last two sections of the original whitepaper and look specifically at gaps around Security and Identity and Software-Defined Workflows. As with previous blogs in this series (see Parts 1 and 2), for each gap we'll include the gap as we see it, an example of how it applies in a real workflow, and its broader implications.

So let’s get started with…

MovieLabs 2030 Vision Principle 6
  1. Inconsistent and inefficient management of identity and access policies across the industry and between organizations.

    Example: A producer wants to invite two studio executives, a director and an editor, into a production cloud service but the team has 3 different identity management systems. There’s no common way to identify the correct people to provide access to critical files or to provision that access.

    This is an issue addressed in the original 2030 Vision, which called for a common industry-wide Production User ID (or PUID) to identify individuals who will be working on a production. While there are ways today to stitch together different identity management and access control solutions between organizations, they are point-to-point, require considerable software or configuration expertise, and are not "plug and play."

MovieLabs 2030 Vision Principle 7
  1. Difficulty in securing shared multi-cloud workflows and infrastructure.

    Example: A production includes assets spread across a dozen different cloud infrastructures, each of which is under control of a different organization, and yet all need a consistent and studio-approved level of security.

    MovieLabs believes the current "perimeter" security model is not sufficient to cope with the complex multi-organizational, multi-infrastructure systems that will be commonplace in the 2030 Vision. Instead, we believe the industry needs to pivot to a more modern "zero-trust" approach to security, where the stance changes from "try to prevent intruders" to authenticating and authorizing every access to an asset or service. To that end, we've developed the Common Security Architecture for Production (CSAP), which is built on a zero-trust foundation; take a look at this blog to learn more.
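
To illustrate the change in stance, here is a minimal zero-trust-style check written in Python. This is a simplified sketch only, not CSAP itself: the identity check, the policy table, and all names are assumptions made for illustration.

```python
# Minimal zero-trust style check: no request is trusted because of where it comes
# from; every access must present a verifiable identity and pass an explicit,
# default-deny authorization policy. Illustrative sketch only - not CSAP itself.
from dataclasses import dataclass


@dataclass
class AccessRequest:
    subject: str    # authenticated user or service identity
    asset_id: str   # the asset or service being accessed
    action: str     # e.g., "read", "write", "render"


# Hypothetical policy store: (subject, asset, action) -> allowed
POLICIES = {
    ("editor@vendor-a", "asset:ocf-0042", "read"): True,
}


def verify_identity(token: str) -> str | None:
    """Stand-in for real token verification (OIDC, mTLS, etc.)."""
    return token.removeprefix("valid:") if token.startswith("valid:") else None


def authorize(token: str, asset_id: str, action: str) -> bool:
    subject = verify_identity(token)  # authenticate on every single access
    if subject is None:
        return False
    request = AccessRequest(subject, asset_id, action)
    # Explicit authorization with default deny.
    return POLICIES.get((request.subject, request.asset_id, request.action), False)


print(authorize("valid:editor@vendor-a", "asset:ocf-0042", "read"))   # True
print(authorize("valid:editor@vendor-a", "asset:ocf-0042", "write"))  # False
```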

MovieLabs 2030 Vision Principle 8
  1. Reliance on file paths/locations instead of identifiers.

    Example: A vendor requires a number of assets to do their work (e.g., a list of VFX plates to pull or a list of clips) that today tend to be copied as a file tree structure or zipped together to be shared along with a manifest of the files.

    In a world where multiple applications, users and organizations can be simultaneously pulling on assets, it becomes challenging for applications to rely on file names, locations, and hierarchies. MovieLabs instead recommends unique identifiers for all assets, resolved via a service that specifies where a specific file is actually stored. This intermediate step provides an abstraction layer and allows any application to find and access any asset. For more information, see Through the Looking Glass.
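
A minimal sketch of that resolver pattern, with a hypothetical identifier scheme and a made-up location table (not a MovieLabs specification), could look like this:

```python
# Illustrative resolver: applications pass asset identifiers around and ask a
# service where the bits currently live. The identifier scheme and the resolver
# table are hypothetical, not a MovieLabs specification.
RESOLVER = {
    "omc:asset/plate-0042": [
        "s3://prod-show-us-west/plates/plate-0042.exr",
        "gs://vendor-b-archive/show/plate-0042.exr",
    ],
}


def resolve(asset_id: str, prefer: str | None = None) -> str:
    """Return a concrete storage location for an asset ID, optionally preferring a scheme."""
    locations = RESOLVER.get(asset_id)
    if not locations:
        raise KeyError(f"Unknown asset identifier: {asset_id}")
    if prefer:
        for location in locations:
            if location.startswith(prefer):
                return location
    return locations[0]


# Only the resolver knows (or cares) where the file is actually stored.
print(resolve("omc:asset/plate-0042", prefer="gs://"))
```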

MovieLabs 2030 Vision Principle 9
  1. Reliance on email for notifications and manual processing of workflow tasks.

    Example: A vendor is required to do a task on a video asset and is sent an email, a PDF attachment containing a work order, a link to a proxy video file for the work to be done, and a separate link to a cloud location where the RAW files are. It takes several hours/days for the vendor to extract the required work, download, QC, and store the media assets, and then assign the task on an internal platform to someone who can do the work. The entire process is reversed to send the completed work back to the production/studio.

    By having non-common systems for sending workflow requests and asset references and for assigning work to individual people, we have created an inherently inefficient industry. In the scenario above, a more efficient approach would be for the end user to receive an automated notification from a production management system that includes a definition of the task to be done and links to the cloud locations of the proxies and RAW files, with all access permissions already assigned so they can start work immediately (a sketch of such a notification appears after this list). Of course, our industry is uniquely distributed across organizations that handle very nuanced tasks in the completion of a professional media project. This complicates the flow of work and work orders, but there are new software systems that can enable seamless, secure, and automated generation of tasks. We can strip weeks out of major production schedules simply by being more efficient in the handoffs between departments, vendors, and systems.

  2. Monolithic systems and the lack of API-first solutions inhibit our progress towards interoperable modern application stacks.

    Example: A studio would like to migrate their asset management and creative applications to a cloud workflow that includes workflow automation, but the legacy nature of their software means that many tasks need to be done through a GUI and that it needs to be hosted on servers and virtual machines that mimic the 24/7 nature of their on-premises hardware.

    Modern applications are designed as a series of microservices that are assembled and called dynamically depending on the process, which enables considerable scaling as well as lighter-weight applications that can deploy on a range of compute instances (e.g., on workstations, virtual machines, or even behind browsers). While the pandemic proved we can run creative tasks remotely or from the cloud, a lot of those processes were "brute forced" with remote access or cloud VMs running legacy software; that is not the intended end goal of a "cloud native" software stack for media and entertainment. We recognize this is an enormous gap to fix and that moving all of the most vital applications/services to modern software platforms will take beyond the 2030 timeframe. However, we need the next generation of software systems to expose open APIs and deploy in modern containers to accelerate the interoperable and dynamic future that is possible within the 2030 Vision.
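
As referenced in the first gap above, here is a minimal sketch of what a machine-readable task notification could carry in place of an email, a PDF work order, and loose links. The field names, identifier formats, and URL are assumptions made for illustration, not an OMC or studio specification.

```python
import json

# Hypothetical machine-readable work order: the task definition, asset references
# (identifiers, not file paths), and pre-provisioned access travel together, so
# nothing needs to be manually extracted from an email or re-keyed.
task_notification = {
    "taskId": "task-2024-0117",
    "taskType": "color-grade",
    "description": "Grade sequence 42 against the approved reference look",
    "assets": {
        "proxy": "omc:asset/seq42-proxy",
        "ocf": "omc:assetGroup/seq42-ocf",
    },
    "access": {
        "grantedTo": "puid:colorist-0193",  # production user ID (illustrative format)
        "permissions": ["read:ocf", "write:grade"],
    },
    "dueDate": "2024-02-01",
    "replyTo": "https://production-hub.example/api/tasks/task-2024-0117",
}

print(json.dumps(task_notification, indent=2))
```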

MovieLabs 2030 Vision Principle 10
  1. Many workflows include unnecessarily time consuming and manual steps.

    Example: A director can’t remotely view a final color session in real time from her location, so she needs to wait for a full render of the sequence, for it to be uploaded to a file share, for an email with the link to be sent, and then for her to download it and find a monitor that matches the one that was used for the grade.

    We could write so many examples here. There's just too little automation and too much time wasted resolving confusion, writing metadata, reading it back, clarifying intent, sending emails, making calls, etc. Many of the technologies exist to fix these issues, but we need to redevelop many of our control plane functions to adopt a more efficient system, which requires investment in time, staff, and development. Those that do the work will come out leaner, faster, and more competitive at the end of the process. We recommend that all participants in the ecosystem take honest internal efficiency audits to look for opportunities to improve and to prioritize the most urgent issues to fix.

Phew! So, there we have it. For anyone who believes the 2030 Vision is "doable" today, there are 24 reasons why MovieLabs disagrees. Don't consider this post a negative; we still have time to resolve these issues, and it's worth being honest about both the great progress made and what's still to do.

Of course, there's no point making a list of things to do without a meaningful commitment to cross them off. MovieLabs and the studios can't do this alone, so we're throwing down the gauntlet to the industry – help us to help us all. MovieLabs will be working to close the gaps that we can affect, and we'll be publishing our progress on this blog and on LinkedIn. We're asking you to do the same – share what your organization is doing by contacting info@movielabs.com and use #2030Vision in your posts.

There are three specific calls to action from this blog for everyone in the technical community:

  1. The implementation gaps listed in all parts of this blog are the easiest to close – the industry has a solution; we just need the commitment and investment to implement and adopt what we already have. These are the ones we can rally around now, and MovieLabs has already created useful technologies like the Common Security Architecture for Production, the Ontology for Media Creation, and the Visual Language.
  2. For those technical gaps where the industry needs to design new solutions, sometimes individual companies can pick these ideas up and run with them, develop their own products, and have some confidence that if they build them, customers will come. Other technical gaps can only be closed by industry players coming together, with appropriate collaboration models, to create solutions that enable change, competition, and innovation. There are existing forums to do that work, including SMPTE and the Academy Software Foundation, and MovieLabs hosts working groups as well.
  3. And though not many issues are in the Change Management category right now, we still need to work together to share learnings and educate the industry on how these technologies can be combined to make the creative world more efficient.

We’re more than 3 years into our Odyssey towards 2030. Join us as we battle through the monsters of apathy, slay the cyclops of single mindedness, and emerge victorious in the calm and efficient seas of ProductionLandia. We look forward to the journey where heroes will be made.

-Mark “Odysseus” Turner

Are we there yet? Part 2
Gap Analysis for the 2030 Vision


In Part 1 of this blog series we looked at the gaps in Interoperability, Operational Support, and Change Management that are impeding our journey to the 2030 Vision's destination (the mythical place we call "ProductionLandia"). In these latter parts we'll examine the gaps we have identified that are specific to each of the Principles of the 2030 Vision. For ease of reference, the gaps below are numbered starting from 9 (because we covered 1–8 in Part 1 of the blog). For each gap we list the Principle, a workflow example of the problem, and the implications of the gap.

In this post we’ll look just at the gaps around the first 5 Principles of the 2030 Vision which address a new cloud foundation.

MovieLabs 2030 Vision Principle 1
  1. Insufficient bandwidth and performance, plus a lack of automatic recovery from variability in cloud connectivity.

    Example: Major productions can generate terabytes of captured data per day during production and getting it to the cloud to be processed is the first step.

    Even though there are studio and post facilities with large internet connections, there are still many more locations, especially remote or overseas ones, where the bandwidth is not large enough or the throughput not guaranteed or predictable enough, which can hobble cloud-based productions at the outset. Some of the benefits of cloud-based production involve rapid access for teams to manipulate assets as soon as they are created, and for that we need big pipes into the cloud(s) that are both reliable and self-healing. Automatic management of those links and data transfers is vital, as they will be used for all media storage and processing.

  2. Lack of universal support for sending camera, audio, and other on-set data straight to the cloud.

    Example: Some new cameras are now supporting automated upload of proxies or even RAW material direct to cloud buckets. But for the 2030 Vision to be realized we need a consistent, multi-device on-set environment to be able to upload all capture data in parallel to the cloud(s) including all cameras, both new and legacy.

    We're seeing great momentum with camera-to-cloud in certain use cases (with limited support from newer camera models) sending files to specific cloud platforms or SaaS environments. But we've got some way to go before it's as simple to deploy a camera-to-cloud environment as it is to rent cameras, memory cards/hard drives, and a DIT cart today. We also need support for multiple clouds (including private clouds) and/or SaaS platforms, so that the choice of camera-to-cloud environment is not a deciding factor that locks downstream services into a specific infrastructure choice. We've also included in this gap that it's not just "camera to cloud" but "capture to cloud" that we need, which includes on-set audio and other data streams that may be relevant to later production stages, including lighting, lenses, and IOT devices. All of that needs to be securely and reliably delivered to redundant cloud locations before physical media storage on set can be wiped.

  3. Latency between “single source of truth in cloud” and multiple edge-based users.

    Example: A show is shooting in Eastern Europe, posting in New York, with producers in LA and VFX companies in India. Which cloud region should they store the media assets in?

    As an industry we tend to talk about "the cloud" as a singular thing or place, but in reality it is not – it's made up of private data centers and the various data centers that hyperscale cloud providers arrange into "availability zones" or "regions," which must be declared when storing media. Because media production is a global business, the example above is very real, and it leads to the question: where should we store the media, and when should we duplicate it for performance and/or resiliency? This is also one of the reasons we believe multi-cloud systems need to be supported, because the assets for a production may be scattered across different availability zones, cloud accounts (depending on which vendor has "edit rights" on the assets at any one time), and cloud providers (public, private, and hybrid infrastructures). The gap here is that currently decisions need to be made, potentially involving IT systems teams and custom software integrations, about where to store assets to ensure they are available at very low latency (sub-25-millisecond round trip – see Is the Cloud Ready to Support Millions of Remote Creative Workers? for more details) for the creative users who need to get to them. By 2030 we'd expect "intelligent caching" systems or other technologies that would understand, or even predict, where certain assets need to be for users and stage them close enough for usage before they are needed. This is one of the reasons we reiterate that we expect, and encourage, media assets to be distributed across cloud service providers and regions and merely "act" as a single storage entity even though they may be quite disparate. This also implies that applications need to be able to operate across all cloud providers, because they may not be able to predict or control where assets are in the cloud.

  4. Lack of visibility into the most efficient resource utilization within the cloud, especially before the resources are committed.

    Example: When a production today wants to rent an editorial system, it can accurately predict the cost and map it straight to its budget. With the cloud equivalent it's very hard to get an upfront budget, because the costs for cloud resources depend on predicting usage – hours of use, amount of storage required, data egress, etc. – which is hard to know in advance.

    Creative teams take on a lot when committing to a show, usually with a fixed budget and timeline. It's hard to ask them to commit to unknown costs, especially for variables that are hard to control at the outset – could you predict how many takes a specific scene will need? How many times a file will be accessed or downloaded? Or how many times a database will be queried? Even if they could accurately predict usage, most cloud billing is done in arrears, so the costs are not usually known until after the fact, and consequently it's easy to overrun budgets without even knowing it.

    Similarly, creative teams would also benefit from greater education and transparency concerning the most efficient ways to use cloud products. Efficient usage will decrease costs and enhance output and long-term usage.

    For cloud computing systems to become as ubiquitous as their physical equivalents, providers need to find ways to match the predictability and efficient use of current on-premises hardware, but with the flexibility to burst and stretch when required and authorized to do so.
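
As a simple illustration of the predictability we're asking for, here is a toy upfront estimate in Python. The unit prices are placeholders, not real vendor pricing; the point is that a line-item budget should be computable before any resource is committed.

```python
# Toy pre-production cost estimate for a cloud-hosted editorial setup.
# The unit prices below are assumed placeholders, not real vendor pricing.
PRICES = {
    "storage_gb_month": 0.02,   # $ per GB-month (assumed)
    "egress_gb": 0.09,          # $ per GB egressed (assumed)
    "workstation_hour": 1.50,   # $ per virtual workstation hour (assumed)
}


def estimate_monthly_cost(storage_gb: float, egress_gb: float, workstation_hours: float) -> dict:
    """Return a line-item estimate that can be mapped straight to a production budget."""
    items = {
        "storage": storage_gb * PRICES["storage_gb_month"],
        "egress": egress_gb * PRICES["egress_gb"],
        "compute": workstation_hours * PRICES["workstation_hour"],
    }
    items["total"] = sum(items.values())
    return items


# e.g., 50 TB of media, 2 TB of egress, two editors working 160 hours each
print(estimate_monthly_cost(storage_gb=50_000, egress_gb=2_000, workstation_hours=320))
```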

MovieLabs 2030 Vision Principle 2
  1. Too few cloud-aware/cloud-native apps, which necessitates a continued reliance on moving files (into clouds, between regions, between clouds, out of clouds).

    Example: An editor wants to use a cloud SaaS platform for cutting their next show, but the assets are stored in another cloud, the dailies system providing reference clips is on a third, and the other post vendors are using a private cloud.

    We're making great progress with getting individual applications and processes to move to the cloud, but we're in a classic "halfway" stage where it's potentially more expensive and time consuming to have some applications/assets operating in the cloud and some not. That requires moving assets into and out of a specific cloud to take advantage of its capabilities – and, if certain applications or processes are available only in one cloud, moving those assets specifically to that cloud. This is the sort of "assets chasing tasks" from the offline world that this principle was designed to avoid in the cloud world. We need to keep pushing forward with modern applications that are multi-cloud native and can migrate seamlessly between clouds to support assets stored in multiple locations. We understand this is not a small task or one that will be quick to resolve. In addition, many creative artists use macOS, which is not broadly available in cloud instances or in a form that can be virtualized to run on myriad cloud compute types.

  2. Audio post-production workflows (e.g., mixing, editing) are not natively running in the cloud.

    Example: A mixer wants to work remotely on a mix with 9.1.6 surround sound channels that are all stored in the cloud. However, most cloud-based apps only support 5.1 today, and the audio and video channels are streamed separately, so the sync between them can be "soft" enough that it's hard to know whether the audio is truly playing back in sync.

    The industry has made great strides in developing technologies to enable final color (up to 12-bit) to be graded in the cloud, but now similar attention needs to be paid to the audio side of the workflow. Audio artists can be dealing with thousands, or even tens of thousands, of small files, and they have unique challenges that need to be resolved before all production tasks can be completed in the cloud without downloading assets to work remotely. The audio/video sync and channel-count challenges above are just illustrative of the clear need to invest in and support audio and video cloud workflows simultaneously, so we can get to a "ProductionLandia" where both happen concurrently on the same cloud asset pool.

MovieLabs 2030 Vision Principle 3
  1. Lack of communication between cross-organizational systems (AKA “too many silos”) and inability to support cross-organizational workflows and access.

    Example: A director uses a cloud-based review and approval system to provide notes and feedback on sequences, but today that system is not connected to the workflow management tools used by her editorial department and VFX vendors, so the notes need to be manually translated into work orders and media packages.

    As discussed above, we're in a transition phase to the cloud, and as such we have some systems that can receive communications (messages, security permission requests) and commands (API calls), whereas other systems are unaware of modern application and control plane systems. Until we have standard systems for communicating (both routing and common payloads for messages and notifications) and a way for applications to interoperate between systems controlling different parts of the workflow, we'll have ongoing cross-organizational inefficiencies. See the MovieLabs Interoperability Paper for much more on how to enable cross-organizational interop.

MovieLabs 2030 Vision Principle 4
  1. No common way to describe each studio’s archival policy for managing long term assets.

    Example: Storage service companies and MAM vendors need to customize their products to adapt to each different content owner’s respective policies and rules for how archival assets are selected and should be preserved.

    The selection of which assets need to be archived, and the level of security robustness, access controls, and resilience, are all determined by studio archivists depending on the type of asset. As we look to the future of archives, we see a role for a common, agreed way of describing those policies so that any storage, asset management, or automation platform could read the policies and report compliance against them. Doing so would simplify the onboarding of new systems with confidence.
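
To show what a machine-readable policy could mean in practice, here is a hypothetical sketch; the policy fields and the asset record are illustrative assumptions, not an agreed industry schema.

```python
# Sketch of a common, machine-readable archival policy that any storage or
# asset-management system could evaluate. Fields are illustrative assumptions.
policy = {
    "assetType": "original-camera-files",
    "minCopies": 3,
    "geoSeparation": True,          # copies must live in different regions
    "fixityCheckIntervalDays": 180,
    "accessControl": "studio-archivists-only",
}

asset_record = {
    "assetId": "omc:asset/ocf-0042",
    "assetType": "original-camera-files",
    "copies": [
        {"region": "us-west", "lastFixityCheckDaysAgo": 30},
        {"region": "eu-central", "lastFixityCheckDaysAgo": 30},
    ],
}


def check_compliance(asset: dict, policy: dict) -> list[str]:
    """Return a list of policy violations for one archived asset."""
    problems = []
    if len(asset["copies"]) < policy["minCopies"]:
        problems.append(f"only {len(asset['copies'])} copies, need {policy['minCopies']}")
    if policy["geoSeparation"]:
        regions = {c["region"] for c in asset["copies"]}
        if len(regions) < len(asset["copies"]):
            problems.append("copies are not geographically separated")
    stale = [c for c in asset["copies"]
             if c["lastFixityCheckDaysAgo"] > policy["fixityCheckIntervalDays"]]
    if stale:
        problems.append(f"{len(stale)} copies overdue for a fixity check")
    return problems


print(check_compliance(asset_record, policy))  # ['only 2 copies, need 3']
```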

MovieLabs 2030 Vision Principle 5
  1. Challenges of measuring fixity across storage infrastructures.

    Example: Each studio runs a checksum against an asset before uploading it to long-term storage. Even though storage services and systems run their own fixity checks, those checksums or other mechanisms are likely different from the studio's and are not exposed to end clients. So instead, the studio needs to run its own checks for digital degradation by occasionally pulling a file back out of storage and re-running the fixity check.

    As there's no commonality between the fixity systems used in major public clouds, private clouds, and storage systems, the burden of checking that a file is still bit-perfect falls on the customer, who incurs the time, cost, and inconvenience of pulling the file out of storage, rehashing it, and comparing it to the originally recorded hash (a minimal sketch of such a check appears after this list). This process is an impediment to public cloud storage and the efficiencies it offers for the (very) long-term storage of archival assets.

  2. Many essence and metadata file types that need to be archived are in proprietary formats.

    Example: A studio would like to maintain original camera files (OCF) in perpetuity as the original photography captured on set, but the camera file format is proprietary, and tools may not be available in 10, 20, or 100 years’ time. The studio needs to decide if it should store the assets anyway or transcode them to another format for the archive.

    The myriad proprietary files and formats in our industry contain critical information for applications to preserve creative intent, history, or provenance, but that proprietary data becomes a problem if the file must be opened years or decades later, perhaps after the software is no longer available. In some areas we have current and emerging examples of public specifications, standards, and open source software that can enable perpetual access, but the industry has been slow to appreciate the legacy challenge of preserving access to this critical data in the archive.
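
As referenced under the first gap above, here is a minimal sketch of the kind of fixity check studios run for themselves today. The file names are illustrative, and the code simply uses a standard SHA-256 digest; individual studios and storage systems may use other mechanisms.

```python
import hashlib

# Basic fixity check: hash the file at ingest, store the digest with the asset
# record, and re-hash later to confirm the bits are unchanged. Today the customer
# typically has to pull the object back out of storage to do this themselves.
def sha256_of(path: str, chunk_size: int = 8 * 1024 * 1024) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_fixity(path: str, expected_hex: str) -> bool:
    """True if the file still matches the checksum recorded at ingest."""
    return sha256_of(path) == expected_hex


# At ingest:    recorded = sha256_of("ocf-0042.ari")
# Years later:  assert verify_fixity("restored/ocf-0042.ari", recorded)
```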

In the final part of this blog series, we’ll address the gaps remaining within the Principles covering Security and Identity and Software-Defined Workflows… Stay Tuned…

MovieLabs releases RDF & JSON versions of Ontology for Media Creation
As we announced when we released Version 2.0 of the Ontology for Media Creation (OMC), we've also been working on both JSON and RDF implementations, and now they're entering Public Preview.


GitHub

This is the first formal release of the JSON schema for the Ontology for Media Creation, with the schema, documentation, and examples all now available in GitHub. We’ve also updated the RDF schema to reflect changes in version 2.0 of the OMC.

You’ll need a GitHub account to access the resources and schema-dependent documentation here:
https://github.com/MovieLabs/OMC

You can also access the schemas directly without GitHub using the links below, though we encourage you to use GitHub for the documentation, examples, and release notes.

For JSON,
https://movielabs.com/omc/json/schema/v2.0

For the three RDF schemas

We look forward to seeing what developers can do with these powerful new tools to build implementations that leverage the Ontology for Media Creation for media workflow interchange. We'd also like your feedback – let us know if you find any bugs or want to request future features that would be useful.
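
As a starting point, here is a minimal sketch of validating a candidate document against the published JSON schema using the third-party jsonschema package. The sample payload is purely illustrative (it almost certainly will not satisfy the real schema as-is), and the code assumes the schema URL serves raw JSON; see the examples and documentation in the GitHub repo for conforming documents.

```python
import json
import urllib.request

import jsonschema  # third-party: pip install jsonschema

SCHEMA_URL = "https://movielabs.com/omc/json/schema/v2.0"

# Fetch the published schema (this assumes the URL serves the raw JSON schema).
with urllib.request.urlopen(SCHEMA_URL) as response:
    schema = json.load(response)

# Purely illustrative payload - see the examples in the GitHub repo for documents
# that actually conform to the OMC JSON schema.
candidate = {
    "entityType": "NarrativeScene",
    "identifier": [{"identifierScope": "example", "identifierValue": "scene-2A"}],
    "name": "Sven repairs the satellite",
}

try:
    jsonschema.validate(instance=candidate, schema=schema)
    print("Document conforms to the schema")
except jsonschema.ValidationError as err:
    print(f"Validation failed: {err.message}")
```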

If you have any questions or comments on these releases, please reach out to us at ontology@movielabs.com or open an issue on GitHub. We'll respond as soon as we can!

MovieLabs releases v2.0 of the Ontology for Media Creation
Based on feedback from implementations and expansions of the covered concepts and terms


As part of the 2030 Vision, MovieLabs recognized the need for the systems within a workflow to be able to communicate with each other in a consistent manner. We highlighted the need for a common vocabulary and definitions of relationships for use in human-to-human, human-to-machine, and machine-to-machine communication. That need drove us to develop the Ontology for Media Creation (OMC) to provide consistent naming and definitions of terms, as well as ways to express how various concepts and components relate to one another in production workflows.[1] We initially released version 1.0 of the OMC in the autumn of 2021, and now it's time for a major expansion and update, so today we're pleased to announce Version 2.0 of the OMC.

We have made many revisions and additions based on feedback from several organizations that are implementing OMC in their products and services using a variety of database technologies, as well as from our own implementations. As the OMC was always designed to be extensible, we've also added new areas of coverage (including versioning) and will continue to expand later this year and into 2024. The changes to some core terms also make this a Version 2.0 release, as parts of it are not completely compatible with previous versions. It is therefore a recommended upgrade for all OMC implementors. This version of OMC serves as the basis for future extensions, and some of the changes are intended to improve compatibility with future releases.

What’s New

The major changes in this version are:

  • Added many new Narrative and Production Elements in Context (e.g., Greenery, Prosthetics, anchors for upcoming Audio and CG work, etc.)
  • Added concepts for script breakdown to use for various kinds of specialized activities (effects, stunts, Mo-Cap, etc.)
  • Added new section (Part 3B) for managing versions, revisions, variants etc.
  • Updated Camera Metadata to conform with SMPTE RIS OSVP Camera and Lens Metadata.
  • Simplified and clarified Shot and Sequence. The functionality is the same, but it is easier to use now.
  • Clarified “Role” in Participants and added the new concept of “Work Unit” to encapsulate the combination of a Participant and a Task.
  • Made relationship naming more consistent and improved the uniformity of presentation for relationships that can be in Contexts.
  • Numerous clarifications and bug fixes based on developer feedback.

Version 2.0 Available Now

Version 2.0 (and prior versions) of the Ontology for Media Creation are available now on the MovieLabs Media Creation Documentation Site at:

What’s Next

We continue to expand the scope of OMC's coverage and are currently working on CG Assets, Audio Assets, and On-set Data.

An RDF version of OMC 2.0 will be available for download shortly. This fall we'll also be releasing a JSON version of the Ontology for Media Creation, which will be published in a GitHub repo (if you want preview access before it becomes public, please email us at ontology@movielabs.com). Also, reach out to us with your thoughts on the Ontology, how you are implementing it, and the sorts of use cases you are supporting – we'd love to hear how it's working for you!

[1] Watch this video for a primer on the value of OMC in workflows: Software-Defined Workflows Explained.

Are we there yet? Part 1
Gap Analysis for the 2030 Vision


It's mid-2023, and we're about four years into our odyssey towards "ProductionLandia" – an aspirational place where video creation workflows are interoperable, efficient, secure-by-nature, and seamlessly extensible. It's the destination; the 2030 Vision is our roadmap to get there. Each year at MovieLabs we check the industry's progress towards this goal, adjust focus areas, and generally provide navigation services to ensure we all arrive in port in ProductionLandia at the same time, with a suite of tools, services, and vendors that work seamlessly together. As part of that process, we take a critical look at where we are collectively as an M&E ecosystem – and what work still needs to be done – we call this "Gap Analysis."

Before we leap into the recent successes and the remaining gaps, let's not bury the lede – while there has been tremendous progress, we have not yet achieved the 2030 Vision (that's not a negative; we have a lot of work to do and it's a long process). So, despite some bold marketing claims from some industry players, there's a lot more to the original 2030 Vision white paper than lifting and shifting some creative processes to the cloud, the occasional use of virtual machines for a task, or a couple of applications seamlessly passing a workflow process between each other. The 2030 Vision describes a paradigm shift that starts with a secure cloud foundation, reinvents our workflows to be composable and more flexible, removing the inefficiencies of the past, and includes the change management necessary to give our creative colleagues the opportunity to try, practice, and trust these new technologies on their productions. The 2030 Vision requires an evolution in the industry's approach to infrastructure, security, applications, services, and collaboration, and that was always going to be a big challenge. There's still much to be done to achieve dynamic and interoperable software-defined workflows built with cloud-native applications and services that securely span multi-cloud infrastructures.

Status Check

But even though we are not there yet, we're making amazing progress based on where we started (albeit with a global pandemic to give a kick of urgency to our journey!). Many major companies – cloud services companies, creative application tool companies, creative service vendors, and other industry organizations – have now backed the 2030 Vision; it is no longer just the strategy of the major Hollywood studios but has truly become the industry's "Vision." The momentum is behind the Vision now, and it's building – as is evident in the 2030 Showcase program we launched in 2022 to highlight and share 10 great case studies in which companies large and small demonstrate Principles of the Vision that are delivering value today.

We've also seen the industry respond to our previous blogs on gaps, including what was missing around remote desktops for creative applications, software-defined workflows, and cloud infrastructures. We can now see great progress with camera-to-cloud capture, automated VFX turnovers, final color pipelines that are now technically possible in the cloud, amazing progress on real-time rendering and iteration via virtual production, creative collaboration tools, and more applications opening their APIs to enable new and unpredictable innovation.

Mind the Gaps

So, in this blog series, let's look at what's still missing. Where should the industry now focus its attention to keep us moving and accelerate innovation and the collective benefits of a more efficient content creation ecosystem? We refer to these challenges as "gaps" between where we are today and where we need to be in "ProductionLandia." When we succeed in delivering the 2030 Vision, we'll have closed all of these gaps. As we analyze where we are in 2023, we see these gaps falling into the 3 key categories from the original vision (Cloud Foundations, Security and Identity, Software-Defined Workflows), plus 3 underlying ones that bind them all together:

[Image: the 3 key categories from the original vision (Cloud Foundations, Security and Identity, Software-Defined Workflows), plus 3 underlying categories that bind them all together]

In Part 1 of this blog we'll look at the gaps related to these areas. In Part 2 we'll look at the gaps we view as most critical for achieving each of the Principles of the Vision, but let's start with the binding challenges that link them all.

It's worth noting that some gaps involve fundamental technologies (a solution doesn't exist, or a new standard or open source project is required); some are implementation focused (e.g., the technology exists but needs to be implemented/adopted by multiple companies across the industry to be effective – our cloud security model CSAP is an example of a solution that is now ready to be implemented); and some are change management gaps (e.g., we have a viable solution that is implemented, but we need training and support to effect the change). We've steered clear of gaps that are purely economic in nature, as MovieLabs does not get involved in those areas. It's probably also worth noting that some of these gaps and solutions are highly related, so we need to close some to support closing others.

Interoperability Gaps

  1. Handoffs between tasks, teams, and organizations still require large-scale exports/imports of essence and metadata files, often via an intermediary format. Example: Generation of proxy video files for review/approval of specific editorial sequences. These handovers are often manual, introducing the potential for errors, omissions of key files, security vulnerabilities, and delays. See note [1].
  2. We still have too many custom point-to-point implementations rather than off-the-shelf integrations that can be simply configured and deployed with ease. Example: An Asset Management System currently requires many custom integrations throughout the workflow, which makes changing it out for an alternative a huge migration project. Customization of software solutions adds complexity and delay and makes interoperability considerably harder to create and maintain.
  3. Lack of open, interoperable formats and data models. Example: Many applications create and manage their own sequence timeline for tracking edits and adjustments instead of rallying around open equivalents like OpenTimelineIO for interchange (see the sketch after this list). For many use cases, closing this gap requires the development of new formats and data models, and their implementation.
  4. Lack of standard interfaces for workflow control and automation. Example: Workflow management software cannot easily automate multiple tasks in a workflow by initiating applications or specific microservices and orchestrating their outputs to feed a new process. Although we have automation systems in some parts of the workflow, the lack of standard interfaces again means that implementors frequently have to write custom connectors to get applications and processes to talk to each other.
  5. Failure to maintain metadata, and a lack of common metadata exchange across components of the larger workflow. Example: Passing camera and lens metadata from on-set to post-production systems for use in VFX workflows. Where no common metadata standards exist, or have not been implemented, systems rarely pass on data they do not need for their specific task, as they have no obligation to do so or don't know which target system may need it. A more holistic system design would enable non-adjacent systems to find and retrieve metadata and essence from upstream processes and to expose data to downstream processes, even if they do not know what it may be needed for.
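
As referenced in gap 3 above, here is a small example of the kind of open interchange that list item calls for, using OpenTimelineIO (an open source Academy Software Foundation project). The clip name and media URL are made up; this is a sketch of the pattern, not a recommendation of a particular toolchain.

```python
# Describe an edit as open, tool-neutral data instead of a proprietary sequence file.
# Requires the third-party "opentimelineio" package.
import opentimelineio as otio

timeline = otio.schema.Timeline(name="Scene 2B assembly")
track = otio.schema.Track(name="V1")
timeline.tracks.append(track)

# One 48-frame clip at 24 fps, referencing media by URL rather than a local path.
clip = otio.schema.Clip(
    name="slate-2B-take-3",
    media_reference=otio.schema.ExternalReference(
        target_url="https://storage.example/proxies/2B_take3.mov"
    ),
    source_range=otio.opentime.TimeRange(
        start_time=otio.opentime.RationalTime(0, 24),
        duration=otio.opentime.RationalTime(48, 24),
    ),
)
track.append(clip)

# Serialize to the open .otio JSON format for interchange between tools.
print(otio.adapters.write_to_string(timeline, "otio_json"))
```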

Operational Support

  1. Our workflows, implementations, and infrastructures are complex and typically cross the boundaries of any one organization, system, or platform. Example: A studio shares essence and metadata with external vendors to host on their own infrastructure tenants, but also less structured elements such as work orders (definitions of tasks), context, permissions, and privileges. Therefore, there is a need for systems integrators and implementors to take the component pieces of a workflow and design, configure, host, and extend them into complete ecosystems. These cloud-based, modern software components will be very familiar to IT systems integrators, but those integrators need skills and understanding of our media pipelines to know how to implement and monetize them in a way that will work in our industry. We therefore have a mismatch gap between those who understand cloud-based IT infrastructures and software and those who understand the complex media assets and processes that need to operate on those infrastructures. There are few companies to choose from that have the right mix of skills across cloud and software systems as well as media workflow systems, and we'll need a lot more of them to support an industry-wide migration.
  2. We also need systems that match our current support models. Example: A major movie production can be operating simultaneously across multiple countries and time zones, in various states of production, and any down system can cause backlogs in otherwise smooth operations. The media industry works unusual and long hours, at strange times of day and across the world – demanding a support environment staffed with specialists who understand the challenges of media workflows, not just an IT ticket that gets resolved when weekday support arrives at 9am on Monday. In the new 2030 world, these problems are compounded by the shared nature of the systems – it may be hard for a studio or production to understand which vendor is responsible if (when) there are workflow problems. Who do you call when applications and assets seamlessly span infrastructures? How do you diagnose problems?

Change Management

  1. Too few creatives have tried and successfully deployed new '2030 workflows' to be able to share and train others. Example: Parts of the workflow, like dailies, have migrated successfully to the cloud, but we're yet to see a major production running from "camera to master" in the cloud – who will be the first to try it? Change management comprises many steps before new processes are considered "just the way we do things." The main ones we need to get through are:
    • Educating and socializing the various stakeholders about the benefits of the 2030 vision, for their specific areas of interest
    • Involving creatives early in the process of developing new 2030 workflows
    • Then demonstrating value of new 2030 workflows to creatives with tests, PoCs, limited trials and full productions
    • Measuring cost/time savings and documenting them
    • Sharing learnings with others across the industry to build confidence.

Shortly, we'll add Part 2 to this blog, which will add to the list of gaps those that are most applicable to each of the 10 Principles of the Vision. In the meantime, there are eight gaps here that the industry can start thinking about – and please do let us know if you think you already have solutions to these challenges!

[1] The Ontology for Media Creation (OMC) can assist in common payloads for some of these files/systems.

From Script to Data – Part 3
Using the Ontology for Media Creation in physical and post-production


Introduction to Part 3

This is the third and final part of our blog series “From Script to Data”, which shows how to use the Ontology for Media Creation to improve communication and automation in the production process. Part 1 went from the script to a set of narrative elements, and Part 2 moved from narrative elements to production elements. Here we will use OMC to go from a set of production elements into the world of filming, slates, shots, and sequences.

Combining Narrative Elements and Production Elements

Toe bone connected to the foot bone
Foot bone connected to the heel bone
Heel bone connected to the ankle bone

Back bone connected to the shoulder bone
Shoulder bone connected to the neck bone
Neck bone connected to the head bone
Hear the word of the Lord.

Even though we have extracted and mapped many of the narrative and production elements, there’s still something missing before filming starts: where is filming going to happen? Just as Narrative Props, Wardrobe, and Characters are depicted by production elements, Narrative Locations have to be depicted as well. The Ontology defines Production Location for this.

Production Location: A real place that is used to depict the Narrative Location or used for creating the Creative Work.

We’ll use two Production Locations. Production Scene 2A uses a stage for filming Sven to be overlaid on the VFX/CG rendering of the satellite, and Production Scene 2B is a different stage with a built set of the inside of Sven’s spaceship. (Don’t worry – the jungle scenes will be on location in Hawaii…)

This shows how Narrative Locations are depicted by Production Locations. The depiction may need just the Production Location, but it can also require sets and set dressing at the Production Location.

The Production Locations are connected to Production Scenes. Adding them to the last diagram from Part 2 of this blog series lets us see all the production elements needed for the Production Scenes. Note – you need lots of other things too (cameras, lights, and so on), some of which are covered in OMC Part 8: Infrastructure [1].

The full diagram below brings together all of the pieces we have talked about so far. There are some extra relationships added to show how it's done. It is full of information and can be hard to read, so click the image to make it larger. Most or all of this information currently exists in various forms – departmental spreadsheets, script supervisors' notes, the first AD's physical or digital notebook, the producer's head, channels in collaboration tools, and innumerable Post-it notes – but it is currently difficult or nearly impossible to bring it all together so that individuals, work teams, and organizations can use it. Future production automation systems can deal with this level of complexity and extract the information that individual participants need, whether that is to examine choices and consequences or to ensure that the right information goes to the right place to support a particular task.

A production team can do lots of things with the information behind this representation. Many of these are done manually today but can be automated because OMC is well defined and machine-readable. For example, now that the data is standardized, all production applications can generate, share, and edit it, enabling users to (a couple of these are sketched in code after the list):

  • Avoid spreadsheet compromises: for something like a Character that appears in more than one scene, readers today have to decide whether to add multiple rows (risking copy-and-paste mistakes and making the spreadsheet longer), add multiple columns (e.g., a single row for a character with a column for each scene, which requires rejigging things when someone appears in a new scene), or keep a single row per character listing all of its Narrative Scenes with some kind of separator (which requires agreeing on the separator and editing lists when something changes). All of this is doable in a small production but adds risk, overhead, and chances of discrepancy in a large one.
  • Find production scenes that have the same Production Location and arrange shoot days to take that into account, including which actors need to be present.
  • Based on the shoot day for a Production Scene, find the Production Props and Costumes needed for it, and schedule them (and any precursors) accordingly.
  • Automatically track changes in the script and propagate through the production pipeline.
  • Change a character name – or anything else such as actor, prop, or location – in one place and have it propagate through the rest of the system: Sven can reliably become Hjalmar, for example.
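
Here is a minimal sketch of a couple of the items above, using simplified stand-ins for OMC records rather than the full data model; the scene, location, and prop values are made up for illustration.

```python
from collections import defaultdict

# Simplified stand-ins for OMC production records - not the full data model.
production_scenes = [
    {"id": "2A", "location": "Stage 4", "characters": ["Sven"], "props": ["communicator"]},
    {"id": "2B", "location": "Stage 7 (spaceship set)", "characters": ["Sven"],
     "props": ["communicator", "toolkit"]},
    {"id": "7",  "location": "Stage 4", "characters": ["Sven", "Kara"], "props": []},
]

# Group Production Scenes that share a Production Location for shoot-day planning.
by_location = defaultdict(list)
for scene in production_scenes:
    by_location[scene["location"]].append(scene["id"])
print(dict(by_location))  # {'Stage 4': ['2A', '7'], 'Stage 7 (spaceship set)': ['2B']}

# List who and what must be on set for everything shot at one location.
stage_4 = [(s["id"], s["characters"], s["props"])
           for s in production_scenes if s["location"] == "Stage 4"]
print(stage_4)

# Rename a character in one place and let it propagate everywhere it is referenced.
for scene in production_scenes:
    scene["characters"] = ["Hjalmar" if c == "Sven" else c for c in scene["characters"]]
```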

Time to Film

Dem bones, dem bones gonna walk around.
Dem bones, dem bones gonna walk around.
Dem bones, dem bones gonna dance around.
Now hear the word of the Lord.

The diagram above is, of course, incomplete, but OMC supports the missing items, including other Participants (the director, cinematographer, camera crew, and so on) and Assets (set dressing, for example) and, as mentioned above, can express infrastructure components as well.

Including that level of detail would make this even longer than it already is, so we’ll imagine that it’s all in place – so at last, it is time to film something. In order to do this, we have to introduce two new concepts. Both are very complex, but here we will stick to just basic information and things related to connectivity.

The first of these is: what to call the recording of the acted out scene? ‘Footage’ was used when it was done on film measured in feet and inches. ‘Recording’ is more often used for just sound. OMC uses ‘Capture’, which covers audio, video, motion capture, and whatever else may show up one day:

Capture: The result of recording any event by any means

The second is: how is the capture connected back to the Production Scene? In any production there's a lot of other information needed as well: the camera used, the take number (since almost nothing is right the first time), and so on. Traditionally, this was written on a slate or clapperboard that was recorded at the start of the Capture, which made it easy to find and hard to lose. The term is still used in modern productions, even if there is no physical writing surface involved. OMC makes this information hard to lose – the digital slate with its information is connected to the capture with a relationship – and makes it easy for software to access and use. (Some current systems support extracting information from a slate in a video capture, either by a person or with image processing software, and saving it elsewhere, but this is not ideal.)

Slate: Used to capture key identifying information about what is being recorded on any given setup and take.

The Slate has a great deal of information in it, for which see the section in OMC: Part 2 Context. For now, we'll just use the Slate Unique Identifier (UID), which is a semi-standard way of specifying important information in a single string; the Production Scene, which can be extracted from a standard Slate UID; and the Take, the counter of how many times this scene has been captured.

When a production scene is captured, all that is necessary is to create a Slate, connect it to the Production Scene, add the Take and Slate UID, and then record the action. This is repeated for each take, for each camera angle or camera unit (if it's being filmed by more than one camera), and for each type of capture (such as motion capture).
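
To make that concrete, here is a minimal sketch of a Slate record and the Captures that reference it. The identifiers, the Slate UID format, and the field names are made up for illustration and are simplified stand-ins for the OMC structures.

```python
# Sketch of the Slate-to-Capture linkage described above (simplified field names).
slates = [
    {"slateUID": "SHOW_EP101_2B_T3", "productionScene": "2B", "take": 3, "cameraUnit": "A"},
]

# Every captured Asset carries the Slate's identifier, whatever kind of media it is.
captures = [
    {"assetId": "omc:asset/2B-T3-ocf",   "kind": "video OCF",   "slateUID": "SHOW_EP101_2B_T3"},
    {"assetId": "omc:asset/2B-T3-audio", "kind": "audio",       "slateUID": "SHOW_EP101_2B_T3"},
    {"assetId": "omc:asset/2B-T3-proxy", "kind": "proxy video", "slateUID": "SHOW_EP101_2B_T3",
     "derivedFrom": "omc:asset/2B-T3-ocf"},
]


def assets_for_scene(scene_id: str) -> list[str]:
    """All captured Assets for a Production Scene, found via the Slate, not file paths."""
    uids = {s["slateUID"] for s in slates if s["productionScene"] == scene_id}
    return [c["assetId"] for c in captures if c["slateUID"] in uids]


# Wherever these files end up being stored, the Slate keeps them tied to scene 2B.
print(assets_for_scene("2B"))
```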

The actual captured media is just another kind of Asset, which should be linked with the Slate ID. It may not be convenient to use natively (e.g., if it is an OCF) and may require some sort of processing. The process of generating proxies and other video files derived from the OCF is a topic for another day, but it essentially deals with transforming Assets from one structural form into another, while still maintaining their relationships back to the production scene, and hence to the production elements in them and the underlying narrative elements.

As long as the Slate ID continues to be associated with all the media captured on set, it can link all of the OCFs, proxies, and audio files even as they head off into different post-production processes. These processes can be done by different internal groups or external vendors with their own media storage systems, but as long as the Asset identifiers and connections to a Slate are retained, it doesn't matter how things are stored. In the MovieLabs 2030 Vision, Assets don't have to move, but in the short and intermediate term they often need to. Identifiers and the Slate remove many of the difficulties with this, for example by allowing the use of unstructured object storage rather than hierarchical directories, which are often application-specific. (See our blog on using an asset resolver for more information.)

This diagram shows two takes and their Captures for Production Scene 2A. Each take (represented by the Slate) generates three captures: an audio file, a video file, and a point cloud.

This one shows a single take for production scene 2B with a more detailed view of the resulting Assets: an Audio file, an Asset Group representing camera files (OCF), and a proxy derived from the OCF.
Now we have some actual video that we can use for VFX, editing, and all of the other magic that happens between filming and the finished creative work. The end result of this is an ordered collection of media that represents whatever it is that is wanted at a particular stage of the process. This is typically called a Sequence.

Sequence: An ordered collection of media used to organize units of work.

OMC calls the media used to create this sequence ‘shots.’ ‘Shot’ is used to mean many different things in the production process; for a discussion of some of these, see the section on Shot in OMC: Part 2 Context. The definition of Shot used here is:

Shot: A discrete unit of visual narrative with a specified beginning and end.

A sequence is just a combination of Shots presented in a particular order with specific timing, and in live action film-making a Shot is most often a portion of a Capture – ‘a portion’ because the creative team may decide to use only some of a particular capture. Storyboards can also be used as Shots, for instance, as can other Sequences. This means that a Shot has a reference to its source material.

Finally, a Sequence has a set of directions for turning those Shots into a Sequence. There are several formats for this, such as EDL and AAF. OMC abstracts these into a general Sequence Chronology Descriptor (SCD) which has basic timing information about the portions of shots and where in the Sequence they appear. Exact details of how the Sequence is constructed are application-specific, using a format such as an EDL or OpenTimelineIO. An SCD is an OMC Asset, and the application-specific representation is used as the SCD’s functional characteristics.

The SCD gives applications that may not understand a particular detailed format some visibility into Sequences. It is useful for general planning and tracking, and is another example of OMC making connections that in current productions are manual and easily lost, such as knowing which sequences have to be redone if a capture is re-shot or a prop is changed.
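
Here is a rough sketch of what a Sequence with a coarse SCD could look like as data. The field names are illustrative assumptions rather than the normative OMC structure, and the identifiers are made up.

```python
# Rough sketch of a Sequence with a coarse Sequence Chronology Descriptor (SCD).
sequence = {
    "sequenceId": "omc:asset/seq-scene2B-v1",
    "name": "Production Scene 2B assembly",
    "scd": {
        "editRate": 24,  # frames per second
        "entries": [
            {"shot": "omc:asset/2B-T3-ocf", "sourceStartFrame": 12,
             "durationFrames": 48, "sequenceStartFrame": 0},
            {"shot": "omc:asset/2B-T5-ocf", "sourceStartFrame": 0,
             "durationFrames": 36, "sequenceStartFrame": 48},
        ],
    },
    # The exact edit lives in an application-specific file (EDL, AAF, OpenTimelineIO),
    # carried as the SCD Asset's functional characteristics.
    "functionalCharacteristics": {"format": "OpenTimelineIO",
                                  "file": "omc:asset/seq-scene2B-v1-otio"},
}

# Planning tools that can't parse the EDL/OTIO can still answer useful questions,
# e.g., which Captures does this Sequence depend on (and so need redoing if re-shot)?
dependencies = [entry["shot"] for entry in sequence["scd"]["entries"]]
print(dependencies)
```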

OMC: Part 2 Context has details on how portions of Shots are specified for adding to a Sequence. This diagram shows the end result at a relatively coarse level of granularity. The Sequence uses the SCD to combine three captured Assets (or portions of those Assets) into a finished representation of Production Scene 2b.

Capture, Shot and Sequence have a lot of details not mentioned here, since we have been emphasizing the connectivity of things as opposed to the details of the various elements. For example:
  • A Sequence has to be viewable. Often, this means playing it back in an editing tool, but for review and approval, for example, a playable video is needed. In this case, the video is an Asset, connected to the Sequence with a “derived from” relationship.
  • A Sequence (or an Asset derived from it) can be used as part of another Sequence.
  • In the example above, if a Shot isn’t ready, it can be replaced by part of a Storyboard.

This last diagram shows all of the things we have talked about, in all 3 blogs, in one complete view of the data and relationships. From this, you can see the ripple effect if, for example, the design of the communicator prop changes. This starts with remaking the production prop, carries on through re-filming or re-rendering the production scenes where the communicator is used, and on to the finished sequences for the narrative scene. This visibility into how everything is connected can help reduce unexpected surprises late in the production process.

Conclusion

This blog has shown how to use the Ontology for Media Creation to move from production elements to some filmed content, and concludes this series of blogs on using the OMC in the context of a real production.

Thinking beyond the relatively simple examples in this blog series, which use just a couple of scenes and characters, a major production is not just a logistical and creative challenge but also a massive data wrangling operation. And that data is often the cause of complexity and confusion – at MovieLabs we believe we can help simplify that problem dramatically, allowing the creative team to spend their precious resources on being creative.

We believe that there are four main benefits from using OMC in this way:

First, using common, standard terms and data models reduces miscommunication, whether between people or between software-based systems. We explored this in the first blog in this series, and the lessons apply to all the others as well.

Second, being explicit about the connections between elements of the production makes it easier to understand dependencies and the consequence of changes, both of which have an effect on scheduling and budget. We dove into this kind of model in Part 2 and then used it heavily in Part 3, which also demonstrates some concrete applications of the model.

Third, OMC enables a new generation of software and applications. OMC is primarily a way of clarifying communication, and clear machine-to-machine communication is essential in a distributed and cloud-based world. The new applications we expect will support the broader 2030 Vision and can cover everything from script breakdown and scheduling through to on-set activities, VFX, the editorial process, and archives.

Finally, having consistent data is hugely beneficial for emerging technologies such as machine learning and complex data visualization, and we therefore hope the OMC will unlock a wave of innovative new software in our industry to accelerate productions and improve the quality of life for all involved.

These blogs are not theoretical – we have been using the OMC in our own proof-of-concept work, modeling real production scenarios, and this data connectivity is a vital part of delivering a software-defined workflow (for more on Software-Defined Workflows, watch this video) in which we are exploring efficiency and automation in the production process.

The Ontology for Media Creation is an expanding set of Connected Ontologies – we will continue to add definitions, scope, and concepts as we broaden and deepen what it covers, especially as it becomes more operationally deployed. For example, we are currently working on OMC support for versions and variants, as well as expanding into new areas of the workflow such as computer graphics assets. In practical terms, the Ontology is available as RDF and JSON, and software developers are working with both. Please let us know if you’d like to try it out in an implementation.

If you found this blog series useful then let us know, and if you’re interested in additional blogs or how-to-guides let us know a specific use case and we can address it (email: office@movielabs.com).

There’s also a wealth of useful information at mc.movielabs.com and movielabs.com/production-technology/ontology-for-media-creation/.

[1] We are working on expanding the Infrastructure portions of OMC.

From Script to Data – Part 2 https://movielabs.com/from-script-to-data-part-2/?utm_source=rss&utm_medium=rss&utm_campaign=from-script-to-data-part-2 Tue, 25 Oct 2022 21:26:12 +0000 https://movielabs.com/?p=11599 Using the Ontology for Media Creation to improve communication and automation in the production process

Introduction to Part 2

This is the second part of our blog series “From Script to Data”, which shows how to use the Ontology for Media Creation to improve communication and automation in the production process. Part 1 went from the script to a set of narrative elements, and here we will use OMC to make the transition from narrative elements to production elements. Part 3 will take those production elements through filming and some aspects of post-production.

Production Elements

Dem bones Dem bones Dem dry bones
Dem bones Dem bones Dem dry bones
Dem bones Dem bones Dem dry bones,
Hear the word of the Lord

We now have a good abstract understanding of the script and its contents. What we don’t have is any idea of what the onscreen presentation looks like, who’s going to play the characters, and so on.

In this section, we bring in two new concepts.

Asset: A physical or digital object or collection of objects specific to the creation of a Creative Work.

Participant: The entities (people, organizations, and services) that are responsible for the production of the Creative Work.

Assets and Participants can be very complex in their details, but they both contain two broad types of information, illustrated in the sketch after this list:

  • Functional Characteristics say what an Asset is used for or what a Participant does: is an Asset a prop or a costume, for example, and is a Participant a director or a sound engineer?
  • Structural Characteristics say what an Asset or Participant is: is the Asset a physical thing, a CG model, or a piece of video, and is the Participant a person, an organization, or a software service?
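
Here is the rough illustration promised above, in Python with made-up identifiers and field names (not the normative OMC schema): the point is simply the split between functional and structural characteristics.

```python
# Hypothetical, simplified records - the functional/structural split is the point;
# identifiers and field names are illustrative only, not the OMC JSON schema.
communicator_prop = {
    "asset_id": "asset:communicator-prop",
    "functional": {"functional_type": "prop"},            # what it is used for
    "structural": {"structural_type": "physical.object"}, # what it is
}

sound_engineer = {
    "participant_id": "participant:jane-doe",
    "functional": {"role": "sound engineer"},    # what they do
    "structural": {"structural_type": "person"}, # person, organization, or service
}
```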

You can find more about how this works and why we made this choice in OMC: Part 3: Assets and OMC: Part 4: Participants.

The ontology uses Assets and Participants to create production elements – the stuff that is needed to turn the narrative into a finished film or TV show. In this section, we’ll look at a few different production elements. This abandons the spreadsheet view of the world because connections between things become much more prevalent and are too hard to describe in non-graphical ways.

Many productions use storyboards to give a general idea of the flow of a scene. Storyboards are a particular kind of Asset, connected to a scene. Each frame of a storyboard can be thought of as an Asset as well – you might want to send single frames to different departments – so the storyboard itself is a composite asset. We won’t go into the details of Asset groups – for this exercise, the fact that it’s a storyboard is more important than the fact that it is an Asset.

example diagram

Movies and TV are visual media, and the narrative elements eventually have to be turned into either physical or digital assets that are used in the production process. These don’t just appear out of nowhere – there is an iterative process that goes from a narrative element to something that shows how it should be represented when it is turned into a production element. The ontology calls the result of this process Concept Art, which is a kind of asset that is a creative representation of something from the narrative. It exists for many elements of the production, and here we’ll show it for props and wardrobe.

Sometimes there are different ideas about how something should be represented – does Sven’s repair tool look like a socket wrench, a soldering iron, or a multimeter? – and it is up to the production team to decide which to use.

Concept Art: Images that illustrate ideas for potential depictions of elements of the creative intent.

example diagram

There is another sort of artwork not covered here – artwork or other material that is used for inspiration during the production, such as images of high- and low-tech tools to look at when thinking about the concept art for Sven’s repair tool. In the Ontology, these work much the same as concept art, and can be connected to individual narrative elements, to entire scenes, or even to the whole production. This kind of Asset is called Creative Reference Material.

Creative Reference Material: Images or other material used to inform the creation of a production element, to help convey a tone or look, etc.

Now we need Actors to portray the characters. Actors are a kind of Participant, as are Directors, Cinematographers, and so on. What’s special about Actors is that they need to be connected to the Characters. Some characters can be portrayed by more than one Actor (e.g., voice and motion capture, or actor and stunt double), and some Actors might portray more than one Character. We’ll add one for Keira, who will be voice-only, and two for Sven – the main actor and a stunt double to use in a later scene.

Actors and Characters are connected together by a Portrayal, and the Portrayal is connected to a Production Scene. Portrayals are connected to lots of other pieces too (costumes, props, and so on), but we won’t cover that here – some of it will be shown in the diagram in the next section, and you can look at OMC: Part 7 Relationships to see the kinds of things that can be represented.
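
A minimal sketch of that pattern might look like the following, where a Portrayal ties together a Character, an Actor (a Participant), and a Production Scene by their identifiers. The identifiers and field names are illustrative only, not the normative OMC schema.

```python
# Hypothetical identifiers and field names, for illustration only.
portrayals = [
    {"character": "character:sven",  "actor": "participant:lead-actor",
     "production_scene": "production-scene:2a"},
    {"character": "character:sven",  "actor": "participant:stunt-double",
     "production_scene": "production-scene:2b"},
    {"character": "character:keira", "actor": "participant:voice-actor",
     "production_scene": "production-scene:2a"},  # voice-only portrayal
]

def actors_for(character: str) -> set[str]:
    """All Participants who portray a given Character, in any Production Scene."""
    return {p["actor"] for p in portrayals if p["character"] == character}

print(actors_for("character:sven"))  # lead actor and stunt double
```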

example diagram
Shooting the film requires real props and real costumes. These are pretty simple: props and costumes are Assets, and are connected to their respective Narrative Props and Wardrobe. The person doing the shooting schedule can discover which props and costumes have to be available for a particular scene using the relationships that have been built up, ideally with machine assistance from the graph-and-data application described above.
example diagram

The next piece to think about is actually filming (or animating, in the case of an animated work) the narrative scenes. Looking at Narrative Scene 2, it has two very different parts: the first, in which Sven is happily repairing the satellite, and the second, where Sven flees the Trilobot and returns to his ship. These divisions are called production scenes and are created and used as required by artistic choices (different color management, different locations for filming) and technical requirements (requiring a green screen or equivalent vs. filming as-is on a beach). The HSM production team decided to break Narrative Scene 2 into two production scenes, dividing it just after Sven says “What the…?” It is possible to do this in other ways, of course, driven by creative or technical requirements. It is also possible for automated tools to make first suggestions about where to make these divisions, based on explicit notations in the script or inferred break points.

A Production Scene is going to be a central organizing element; you might think of it as a little bit like a call sheet. Lots of things are likely to be related to any given Production Scene: the physical location, the crew and actors required, the date it is being filmed, all the props, wardrobe, infrastructure, etc. that will be needed, as well as the Assets created during the filming of that scene.

The divisions into production scenes can be changed during filming, and so it is very important to be clear about what production scene is being used as the basis for a particular activity (filming, recording, rendering, etc.) See OMC: Part 2: Media Creation Context for lots and lots of details, most of which we will gloss over here.

The important thing for this overview is that a Production Scene has a Scene Descriptor that uniquely distinguishes it from all other production scenes past, present, and future. Production elements used in the Production Scene are tied through a chain of relationships back to the narrative elements. (That connection isn’t shown in this diagram, but the prop and costume in production scene 2a are the ones shown in the diagram above. In Part 3 of the blog we’ll put all these pieces together into a complete graph.)
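
As a loose sketch, again with made-up identifiers and field names rather than the normative OMC schema, a Production Scene record might carry its Scene Descriptor plus references back to the narrative elements and forward to the production elements it uses:

```python
# Hypothetical record - the Scene Descriptor is the unique handle other
# activities (filming, rendering, scheduling) refer to.
production_scene_2a = {
    "scene_descriptor": "2a",                      # unique among all production scenes
    "based_on": "narrative-scene:2",               # back-reference to the narrative
    "uses_props": ["prop:satellite-repair-tool"],  # production prop, tied to its narrative prop
    "uses_wardrobe": ["costume:sven-spacesuit"],
    "portrayals": ["portrayal:sven-by-lead-actor"],
}
```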

example diagram

Conclusion to Part 2

This blog has shown how to use OMC to move from narrative elements to production elements. Once we have actors portraying characters and a set of real (or computer-rendered) props the production team can refine budgets and work on scheduling. It also means that if the script changes, those changes can propagate clearly and quickly to the people in the production team in charge of casting, call sheets, and prop fabrication.

In part 3, we’ll follow Sven and his communicator into filming and beyond.

If you found this blog series useful then let us know, and if you’re interested in additional blogs or how-to-guides let us know a specific use case and we can address it (email: office@movielabs.com). There’s also a wealth of useful information on the MovieLabs website at www.movielabs.com and the Ontology for Media Creation website.

From Script to Data – Part 1 https://movielabs.com/from-script-to-data-part-1/?utm_source=rss&utm_medium=rss&utm_campaign=from-script-to-data-part-1 Wed, 28 Sep 2022 19:59:46 +0000 https://movielabs.com/?p=11412 Using the Ontology for Media Creation to breakdown Scripts

1 Introduction

The script provides the narrative of a movie – its characters, plot, locations, necessary props, and more – but it doesn’t tell you how to turn it into a finished product, which is both a creative process and a logistical and organizational project. Getting to the things you need to organize and run a complex production process is currently a mostly manual undertaking developed through years of experience, aided and abetted by spreadsheets, emails, and many conversations to resolve any discrepancies between the various organizations and departments involved. This blog series is a guide to using the MovieLabs Ontology for Media Creation (OMC) in the production process to support automation and reduce miscommunication among the people and organizations involved. It starts with the script breakdown and goes through pre-production and planning phases, filming, and a bit of the editorial process. OMC has other uses too, for example in distribution and analytics.

The OMC provides a common data model with clear meanings and common data formats. This makes it easier for applications to exchange data with less effort and fewer errors, which in turn simplifies integration and automation. If each piece of the common data is managed consistently, e.g., in a common data store, there is much less chance of data drifting; many common data errors are easy for humans to understand, but hard for automation. Just as importantly, an ontology also provides relationships, which are well-defined connections between one thing and another. Relationships make it easy to understand dependencies, see the potential consequences of changes, and discover connections that may not have been obvious.

We expect the initial uses for the OMC to revolve around data exchange, but over time we expect applications will be available that use OMC to transform the script into data and information for department production plans and provide intuitive user interfaces for creation and visualization of the concepts described here. In fact, this blog series can be thought of as a roadmap for organizations interested in building those services.

Part 1 of this 3-part blog series describes how to use the Ontology for Media Creation to do a script breakdown and assumes some familiarity with the concepts in OMC. Since we’ll be discussing the breakdown processes at the conceptual level, we make no assumptions about the implementation mechanism (OMC is available as both RDF and JSON).

To ground this process in reality, we wanted to use an actual production and script, so throughout the series we’ll illustrate the examples using “HyperSpace Madness” (available here). HyperSpace Madness (HSM) was written as a fully animated short film, but here it’s treated as a live-action short to illustrate various aspects of using OMC. We focus on how components come into being and the connections between components, so some of the detailed data for individual items will be glossed over.

Start With the Script

Producing a narrative work, whether film or television, starts from the script. There will be many iterations on the script, but eventually parts of it will be stable enough to start production of some or all of the creative work. The production process is built out of lots and lots of components – some large, some small, but all important – and all of these eventually tie back to the script. There are many ways to turn a script into a finished production. This blog shows one way of doing it and doesn’t attempt to cover all of the complexity and possibilities.

Notes on Flow and Format in the Blog

  • The diagrams start by representing simple things as spreadsheet rows, progress to graph-based representations of those things, and finally show fully connected views of everything discussed.
  • Definitions of the form “Term: definition” are taken from the OMC documentation.

2 The Script and Narrative Elements

Ezekiel cried dem dry bones
Ezekiel cried dem dry bones
Ezekiel cried dem dry bones
Now I hear the word of the Lord

The script provides the bones of the movie. It is about the narrative world: the action is in outer space or an exoplanetary jungle, even if it is filmed someplace else; the characters Sven and Keira are romantically inclined, even if the actors portraying them have never met each other. The production process brings it to life and needs to draw the viewer in — something that provokes Coleridge’s “suspension of disbelief.”

There is a lot of freedom in the narrative world. You can change the names of characters, add scenes, change props from metal to plastic, all with little apparent cost. However, every one of those changes will make its way through to the rest of the production, requiring new dialog, new locations, and new physical or computer-generated props. The connection between the narrative world and the production world underpins many of the relationships in OMC.

All of the necessary production information has to be extracted from the script and shared with the people and organizations involved in the production. The process is called ‘breakdown’ and each department handles it differently with each needing different information. This information is usually shared in spreadsheets, for which there is no standard format, and no standard naming of things. OMC provides a common model for both structure and terms as well as the means to represent the elements from each departmental breakdown and their relationships with each other.

In this post we’ll turn a portion of the script into its components. In the ontology, these are called “narrative elements.”

Which narrative elements do we get out of the script?

  • Narrative Scene: Taken from the narrative itself and traditionally defined by creative intent and various kinds of unity (e.g., time, place, action, or theme).
  • Character: A sentient entity (usually a person, but not always) in the Script whose specific identity is consequential to the narrative. A Character is generally identified by a specific name.
  • Narrative Prop: A named object related to or interacting with characters that is implied or understood to be necessary for the narrative.
  • Narrative Location: A location specified or implied by the narrative.
  • Narrative Wardrobe: The clothing for a Character in the narrative.

It sounds simple enough. What follows, however, shows mistakes that can happen when these elements are extracted manually.

For HSM, we’ll deal with Narrative Scenes 2 and 3. (The narrative scene number is in the left margin of the script, following usual practice.) These narrative scenes don’t have names in the script, so we will provide some:

Narrative Scene Number | Narrative Scene Name
2                      | Space
3                      | Space – moments later

What Characters are in these Narrative Scenes?

Character | Narrative Scene | Notes
Sven      | 2               |
sven      | 3               |
Keira     | 2               | Voiceover

Prop Name             | Narrative Scene | Character Using
Satellite Repair Tool | 2               | Sven

There are two Narrative Locations.

Narrative Location   | Narrative Scene
Around the Satellite | Space
Inside Sven’s Ship   | Space – moments later

And one Wardrobe item.

Wardrobe         | Worn By | Narrative Scene
Sven’s Spacesuit | Svenn   | 2, 3

There are several things to notice here:

  • For some things (e.g., a Character that appears in more than one scene), whoever builds the spreadsheet has to decide whether to add multiple rows (as above), risking copy-and-paste mistakes and making the spreadsheet longer; add multiple columns, e.g., a single row for a character with a column for each scene, which requires rejigging things when someone appears in a new scene; or have a single row per character with all the Narrative Scenes in it with some kind of separator, which requires agreeing on the separator and editing lists when something changes. All of this is do-able in a small production but adds risk, overhead, and chances of discrepancy in a large one.
  • There is an explicit relationship between the prop (or wardrobe) and the character using it. This forces the same decisions and adds the same risks as Characters in Narrative Scenes and adds complexity when a Narrative Prop is used in more than one scene or by more than one character. In addition, there’s an implicit relationship with Narrative Scenes lurking in the Character information anyway.
  • Whoever did the Wardrobe piece followed a different convention (Narrative Scenes all in one column) from whoever did the Character piece (one row per narrative Scene).
  • Whoever did the Narrative Location piece used scene names, not scene numbers, and followed the “multiple scene per column” convention.
  • Sven’s name appears in three different forms.

All of this could be managed by standard conventions, iron discipline, and pivot tables, but the first two are hard to enforce and the last is hard to maintain – what if different departments do pivot tables differently?

What we need is a representation that has one data object per narrative element (character, wardrobe, narrative scene, etc.) and shows the connections between them. With that, things like a Character’s name only need to be recorded in one place and it is easy to see what is connected to what. This also deals with the problem of how to reference things: narrative elements connect to narrative elements, without having to worry about whether to use a scene number or a scene name.
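
To make the contrast with the spreadsheets concrete, here is a minimal sketch of that representation: one record per narrative element, each with its own identifier, and the connections expressed as separate relationship facts. The identifiers, field names, and relationship labels are made up for illustration, not taken from the OMC specification.

```python
# One record per narrative element, each with a stable identifier.
elements = {
    "character:sven":             {"type": "Character", "name": "Sven"},
    "character:keira":            {"type": "Character", "name": "Keira"},
    "narrative-scene:2":          {"type": "NarrativeScene", "name": "Space"},
    "narrative-scene:3":          {"type": "NarrativeScene", "name": "Space - moments later"},
    "narrative-prop:repair-tool": {"type": "NarrativeProp", "name": "Satellite Repair Tool"},
}

# Connections are separate facts, each stated once.
relationships = [
    ("character:sven", "appears-in", "narrative-scene:2"),
    ("character:sven", "appears-in", "narrative-scene:3"),
    ("character:keira", "appears-in", "narrative-scene:2"),
    ("character:sven", "uses", "narrative-prop:repair-tool"),
    ("narrative-prop:repair-tool", "appears-in", "narrative-scene:2"),
]

def related(subject: str, predicate: str) -> list[str]:
    """Everything connected to `subject` by `predicate`."""
    return [o for s, p, o in relationships if s == subject and p == predicate]

print(related("character:sven", "appears-in"))  # both scenes; the name is stored once
```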

A partial diagram for what we’ve been looking at is:

example diagram

For simplicity, the diagram doesn’t show that Sven is wearing his space suit in a particular scene, but the ontology can express that. To show that “Sven uses the prop in narrative scene 2,” the prop is connected to both Sven and the scene, but the underlying ontology allows stating that as one fact, which is useful if someone else uses it in another scene (or even the same scene).

The end goal is to get a first pass of this automatically from the Script. As a starting point, an application that allows a user to construct an editable, extendable graphical diagram of the elements and their relationships can be built with existing technology. The application can automatically provide the necessary identifiers and provide access to the elements in a common format. Such an application still requires a person to break down the script, but the results are more consistent and easier to share than a breakdown done by spreadsheet and email. Scripts often adhere to one of a set of standard conventions (for spacing, character names, scene boundaries, and so on). Because of this, a bare-bones structure can be produced using simple text processing, which is a possible next evolutionary stage. In the longer term, AI and machine learning have the potential to automate this even further.
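
As a very rough sketch of that “bare-bones structure from simple text processing” idea, the fragment below pulls scene headings and character cues out of a conventionally formatted script using nothing more than regular expressions. The sample text and the formatting rules assumed here (sluglines starting with INT./EXT., indented all-caps character cues) are illustrative conventions, not a quotation from HSM or a requirement of OMC.

```python
import re

# Made-up script fragment in a conventional layout, for illustration only.
script_text = """\
2  EXT. SPACE - AROUND THE SATELLITE

              SVEN
    Almost done with the repair...

              KEIRA (V.O.)
    Sven, you need to see this.
"""

# Sluglines: optional scene number, then INT./EXT. and a location.
slugline = re.compile(r"^\s*(\d+)?\s*(INT\.|EXT\.)\s+(.+)$", re.MULTILINE)
# Character cues: a deeply indented all-caps name, optionally with (V.O.) etc.
cue = re.compile(r"^\s{10,}([A-Z][A-Z ]+?)(?:\s*\(.*\))?\s*$", re.MULTILINE)

scenes = [(num, f"{kind} {loc}".strip()) for num, kind, loc in slugline.findall(script_text)]
characters = sorted(set(cue.findall(script_text)))

print(scenes)      # [('2', 'EXT. SPACE - AROUND THE SATELLITE')]
print(characters)  # ['KEIRA', 'SVEN']
```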

Even so, sometimes spreadsheets are useful for sharing information. The information in the ontology can be turned into tabular data as needed.

Conclusion

This blog has shown how to use OMC to turn a script into data representing narrative elements. These data elements can improve communication, reduce errors, and encourage better collaboration and automation. In Part 2 we’ll look at the transition from narrative elements to production elements, and in Part 3 we’ll take those production elements from filming through post-production.

If you found this blog series useful then let us know, and if you’re interested in additional blogs or how-to-guides let us know a specific use case and we can address it (email: office@movielabs.com).

There’s also a wealth of useful information at mc.movielabs.com and movielabs.com/production-technology/ontology-for-media-creation/.

A Vision through the Clouds of Palm Springs at the HPA Tech Retreat 2022 https://movielabs.com/a-vision-through-the-clouds-of-palm-springs-at-the-hpa-tech-retreat-2022/?utm_source=rss&utm_medium=rss&utm_campaign=a-vision-through-the-clouds-of-palm-springs-at-the-hpa-tech-retreat-2022 Tue, 08 Mar 2022 19:05:18 +0000 https://movielabs.com/?p=10538 Mark Turner reviews the 2022 HPA Panel featuring a progress update on the 2030 Vision from Autodesk, Avid, Disney, Google, Microsoft and Universal.

In the last week of February, entertainment technology luminaries from across the world gathered at the Hollywood Post Alliance’s Tech Retreat. Rubbing their eyes as they adjusted to the bright Palm Springs sunshine after two years of working from home in pandemic-induced Zoom and Teams isolation, attendees returned in force: the sold-out event brought four days of conference sessions, informal and spontaneous conversations, and advanced technology demonstrations. MovieLabs, the non-profit technology joint venture of the major Hollywood Studios, was also present en masse with a series of sessions highlighting the progress and next steps toward its “2030 Vision” for the future of media creation.

Seth Hallen, Managing Director of Light Iron and HPA President, who presented a panel at the HPA Tech Retreat on the recent cloud postproduction of the upcoming feature film ‘Biosphere’, commented that “this year’s Tech Retreat had a number of important themes including the industry’s continued embrace of cloud-based workflows and the MovieLabs 2030 Vision as a roadmap for continued industry alignment and implementation.”

MovieLabs CEO Richard Berger was joined by a panel of technology leaders from across studios, cloud providers and software companies to discuss how they see the 2030 Vision, what it means for their organizations and how they are democratizing the vision to form a shared roadmap for the whole industry. Introducing the panel, Berger provided the context for the discussion and the original vision paper: “our goal was to provide storytellers with the ability to harness new technologies, so that the only limits they face are their own imaginations and all with speed and efficiency not possible today”.

Of course no discussion about the future of production technology can start without reflecting on the impacts of COVID and the opportunities for change it provides. Eddie Drake, Head of Technology for Marvel Studios, said “the pandemic accelerated our plans to go from on-prem to a virtualized infrastructure…and it created a nice environment for change management to get our users used to working in that sort of way”. Jeff Rosica, CEO of Avid, summarized the pivotal opportunity in time we have: “if we weren’t aligned, if we were all off in a different direction doing our own things, we’d have a mess on our hands because this is a massive transformation. This is bigger than anything we’ve done as an industry before”. Matt Sivertson, VP and Chief Architect, Entertainment and Media Solutions at Autodesk, is a relative newcomer to both Autodesk and the industry. He explained how the 2030 Vision was used as shorthand for the job description in his new role, noting that when “all your largest customers tell you exactly what they want, it’s probably pretty smart to listen”, and that he’s looking forward to seeing how “we can all collaborate together to make it a reality”.

The panel discussed the work done so far in cloud-based production workflows and the work still to be done. Drake of Marvel said “we’re going to be working very aggressively” with both vendors and in-house software teams to accelerate cloud deployments in key areas where they see the most immediate opportunity, including set-to-cloud (where he sees tools are maturing), dailies processes, the turnover process, editorial, mastering and delivery. Michael Wise, SVP and CTO of Universal Pictures, explained they have been focusing their cloud migration on distributed 3D asset creation pipelines leveraging Azure on a global basis, initially at DreamWorks but soon on live action features as well, all so they can leverage talent from around the world. Wise said “As we’ve done that work we’ve been leaning into the work of MovieLabs and the ETC to make sure what we’re building leverages emerging industry standards, including the ontology, VFX interop specs from ETC, and interoperability from MovieLabs”.

Buzz Hays, a “recovering producer”, industry post veteran, and now Global Lead, Entertainment Industry Solutions for Google Cloud, summarized the improvements we can enjoy from a cloud-based workflow, saying “what we’re looking at is how can we make this a more efficient process and eliminate the progress bars and delays that can end up costing money?” Hanno Basse, CTO Media & Entertainment for Microsoft Azure, agreed and added “you need to rearchitect what you’re doing – why are you going into the cloud?” He listed the main reasons Microsoft is seeing for cloud migrations, including enabling global collaboration, talent using remote workstations from anywhere, and enabling a more secure workflow where all assets are protected to the same, consistent level. Picking up on the security theme, Hays challenged the perceived notion that there is a conflict between security and productivity, asking “why are those mutually exclusive?” and arguing that we should “come up with solutions that are invisible to the end user, that are secure, that tick all the boxes and are truly hybrid in nature that work on-prem and are multi-cloud”. Hays went on to explain how zero-trust security, aligned with the MovieLabs Common Security Architecture for Production, works on the notion of flipping security “inside out” to secure the core data first, rather than focusing on external perimeters and keeping bad actors out. “Ultimately,” he said, “until we get to the ‘single-source of truth’ cloud version, then there are copies of everything flying around productions and you never get all those back”.

Building workflows that leverage interoperability between common building blocks was a core theme of the discussion and was embraced by all the panelists. Wise from Universal said “A bad outcome would be a ‘lift and shift’ from the on-premises technologies and specs and just putting them in the cloud. We’ve got a moment in time to make our systems interoperable…and interoperability is the key not just for asset reuse but also asset creation and distribution”. Basse from Microsoft was more prescriptive about what interoperability needs to include, saying we have to “have the industry come together and define some common data models, common APIs, common ways of accessing the data, how that data relates to others and handing it off from one step in the workflow to the next”. He gave the example of 3D assets that are typically recreated because prior versions cannot be easily discovered and shared between applications and productions. During his seven years at 20th Century Fox, the White House was destroyed in at least 10 movies and TV shows, and every time the asset was recreated from scratch. Allowing assets to be reused and made interoperable between different pipelines and applications will therefore open up workflow efficiencies, speeding content’s time to market.

Basse made the case that creative applications running in the cloud on Virtual Machines are not the optimal solution for where we need to get to, but an interim step towards their ultimately becoming SaaS-based services running on serverless infrastructure.

When discussing the opportunities ahead, the panelists also agreed that no single company can make this migration by itself and that it will require work to share data and collaborate. Sivertson from Autodesk said “our intention is to be very open with data access and our APIs as the data is not ours, the data is our customers’ and they should be able to decide where it goes…if providers jealously guard the data as a source of differentiation you’ll probably get left behind”. Rosica explained how the 2030 Vision enables Avid to have a common shared goal, as we’ve all agreed what the “desired state is and what the outcomes are that we’re looking for, and that allows us to develop roadmap plans, not just for ourselves but all of our partners in the industry, as we all need to interoperate together”.

Interestingly, many of the themes explored in the HPA Tech Retreat panel echo the key learnings in MovieLabs’ latest paper in the 2030 Series – an Urgent Memo to the C-Suite – which explains how investments in production technology can enable the time savings, efficiencies, and workflow optimizations of a cloud-centric, automatable, software-defined workflow. It will certainly be interesting to see how far the industry has come on the 2030 journey by the HPA Tech Retreat 2023, hopefully without the masks and COVID protocols!

 

Through the Looking Glass https://movielabs.com/through-the-looking-glass/?utm_source=rss&utm_medium=rss&utm_campaign=through-the-looking-glass Tue, 01 Feb 2022 09:14:16 +0000 https://movielabs.com/?p=10295 Locating assets in a multi-cloud workflow.


Some Background

In our July 2021 blog, “Cloud.Work.Flow” we listed several gaps which will need to be closed to enable the 2030 Vision for Software-Defined Workflows that span multiple cloud infrastructures – which is the way we expect all workflows to ultimately run. In this blog we’ll address one of those gaps and how we’re thinking about systems to close it – namely the issue that “applications need to be able to locate and retrieve assets across all clouds.”

To understand why this is a problem we need to dig a little into the way software applications store files. Why do we need to worry about applications? Because almost all workflow tasks are now conducted by some sort of software system – most creative tasks are performed in software, and even capture devices like cameras run complex software. The vast majority of this software can access the internet, and therefore private and public cloud resources, and yet it is still based on legacy file systems from the 1980s. Our challenge in interconnecting all of the applications in the workflow therefore often boils down to how applications store their data. If we fix that, we can move on to some more advanced capabilities in creative collaboration.

Typically, a software application stores the locations of the files it needs using file paths that indicate where they are stored on a locally accessible file system (like “C:/directory/subdirectory/file_name”). So, for example, an editing application will store the edits being made in an EDL file that is recorded locally (as it’s being created and constantly amended), and the project includes an index with the locations of all the files being manipulated by the editor. Media Asset Management systems also store the locations of files in a database, with similar file paths, like a trail of breadcrumbs, to follow and locate the files. If the files in these file systems move or are not where the application expects them to be when it needs them, then trouble ensues.

Most applications are built this way, and while they can be adapted to work with cloud resources (for example, by mounting cloud storage so it looks like a local file system), they are not inherently “cloud aware” and still maintain the names and locations of needed files internally. There are three major drawbacks to this approach in collaborative workflows like media creation:

  1. Locating a shared file may depend on having a common file system environment. E.g., NAS drives must always be mounted with the same drive letter.
  2. Locating the file is complicated when the file name plus the file path is the guarantee of uniqueness.
  3. Moving a file (i.e., copy then delete) will break any reference to the file.

We are instead aiming for a cloud foundation which supports a dynamic multi-participant workflow and where:

  • Files can move, if necessary, without breaking anything.
  • Files don’t have to move, if it’s not necessary.
  • If files exist in more than one place, the application can locate the most convenient instantiation.
  • Systems, subject to suitable permissions, can locate files wherever they are stored.
  • The name of a file is no longer an important consideration in locating it or in understanding its contents or its provenance.[1]

With these objectives in mind, we have been designing and testing a better approach to storing files required for media workflows. We’ll reveal more later in 2022 but for now we wanted to give you a preview of our thinking.

Identifying Identifiers

To find these files anywhere across the cloud, what we need is a label that always and uniquely refers to a file, no matter where it is. This kind of label is usually called an identifier. The label must be “sticky” in that it should always apply to the same file, and only to that file. By switching to an identifier for a file, instead of an absolute file location, we can free up a lot of our legacy workflows and enable our cross-cloud future.

Our future solution therefore needs to operate in this way:

  • Participating workflow applications should all refer to files by a common and unique identifier
  • Any workflow component can “declare” where a file is (for example, when a file is created)
  • Any workflow component can turn a unique identifier into at least one location (using the declaration above)
  • Locations are expressed in a common way – by using URLs.

URLs (Uniform Resource Locators) are the foundation of the internet and can be used to describe local file locations (e.g., file://), standard network locations (e.g., http:// or https://), proprietary network locations (e.g., s3://) or even SaaS locations (e.g., box:// used by the web service company Box).

The key to this scenario is a web service that, when presented with a unique identifier, will return the URL location, or locations, of that file. We call this service a resolver, and it’s a relatively simple piece of code that acts in a similar way to a highly efficient librarian who, when presented with the title and author of a book, can tell you on which shelf and location to go and get it.
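
To show how small the core idea is, here is a toy, in-memory resolver in Python: workflow components declare one or more URLs for an identifier, and any component can later resolve that identifier back into locations. A real implementation would of course sit behind an authenticated web API; the class, method names, and example URLs here are purely illustrative.

```python
from collections import defaultdict

class Resolver:
    """Toy in-memory resolver: maps asset identifiers to one or more URLs."""

    def __init__(self):
        self._locations = defaultdict(list)

    def declare(self, identifier: str, url: str) -> None:
        """A workflow component declares where a copy of the asset lives."""
        if url not in self._locations[identifier]:
            self._locations[identifier].append(url)

    def resolve(self, identifier: str) -> list[str]:
        """Return every known location for the identifier (possibly empty)."""
        return list(self._locations[identifier])

resolver = Resolver()
resolver.declare("asset:shot-042-plate", "https://example-storage.eu/plates/xyz.exr")
resolver.declare("asset:shot-042-plate", "s3://example-bucket-apac/cache/xyz.exr")

print(resolver.resolve("asset:shot-042-plate"))  # both locations; the caller picks one
```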

Even though MovieLabs created the industry-standard Entertainment ID Registry (EIDR), we are not proposing here a universal and unique identifier for every element within every production (that would be a massive undertaking); instead, we believe that each production, studio, or facility will run its own identifier registries and resolvers.

We have discussed before why we believe in the importance of splitting the information about a file (for example, what type of file it is, what it contains, where it came from, which permissions various participants have, its relationships to other files, etc.) from the actual location of the file itself. In many cases applications don’t need to access the file (and therefore won’t need to use the resolver) because they often just need to know information about the file, and that can be done via an asset manager. We can envision a future whereby MAMs contain rich information about a file along with just the identifier(s) used for it, and utilize a resolver to handle the actual file locations.

With this revised approach we can now see that our application uses an external Resolver service to give it URLs which it can reach out to on a network and retrieve the file(s) that it needs.

The diagram above shows how the Application now keeps a list of URIs and uses an external Resolver to turn each one into a URL for the files it needs. The URL can be resolved by the network into web servers, SaaS systems, or directly to cloud storage services. So, in the example of our editing application, the application now maintains sets of unique file identifiers (for the EDL and any of the required media elements for the edit), and the resolver provides an actual location whenever the application needs to find and open those files. The application is otherwise unchanged.

Why use Identifiers and Resolvers, instead of just URLs?

Let us be clear – there are many benefits in simply switching applications to use URLs instead of file paths – that step alone would indeed open up cloud storage and a multitude of SaaS services that would help make our workflows more efficient. However, from the point of view of an application, URLs alone are absolute and therefore do not address our concerns of enabling multiple applications to simultaneously access, move, edit, and change those files. By inserting a resolver in the middle, we can abstract away from the application the need to track where every file is kept and enable more of our objectives, including the ability to have multiple locations for each file. Also, by using a resolver, if an application needs to move a file, it does not need to know about or communicate with every other application that may also use that same file, now or in the future. Instead, it simply declares the file’s location to the resolver, knowing that every other participating software application can locate the file, even if that application is added much later in the workflow.

In our editing example above, the “resolver-aware” editing application knows that it needs video file “XYZ” for a given shot, but it does not need to “lock” that file, and as such it can be simultaneously accessed, referenced, and perhaps edited by other applications. For example, in an extreme scenario, video XYZ could be updated with new VFX elements by a remote VFX artist’s application that seamlessly drops the edited video into the finished shot – without the editor needing to do anything but review and approve; the EDL itself is unchanged, and none of the applications involved need to have any awareness of the filing systems used by others.

The resolver concept also has another key advantage; with some additional intelligence, the resolver can return the closest copy of a file to the requesting application. Even though Principle 1 in the 2030 Vision indicates that all files should exist in the cloud with a “single source of truth,” we do also recognize that sometimes files will need to be duplicated to enable speed of performance – for example to reduce the latency of a remote virtual workstation in India for assets that were originally created in London. In those cases the resolver can help as the applications can all share one unique identifier for a file, but the network layer can return the original location for participants in Europe and the location of a cached copy in Asia for a participant requesting access from India.
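
Extending the toy sketch above, location awareness can be as simple as tagging each declared copy with a region and preferring copies in the requester’s region. The region labels and the preference rule below are entirely illustrative; a production service might instead use network topology or measured latency.

```python
# Extends the toy resolver idea: each declared copy carries a region tag,
# and resolution prefers copies in the requester's region.
locations = {
    "asset:shot-042-plate": [
        {"url": "https://example-storage.eu/plates/xyz.exr", "region": "eu-west"},
        {"url": "s3://example-bucket-apac/cache/xyz.exr",    "region": "ap-south"},
    ],
}

def resolve_near(identifier: str, requester_region: str) -> str:
    """Return a copy in the requester's region if one exists, otherwise any copy."""
    copies = locations[identifier]
    local = [c for c in copies if c["region"] == requester_region]
    return (local or copies)[0]["url"]

print(resolve_near("asset:shot-042-plate", "ap-south"))  # the cached copy in Asia
print(resolve_near("asset:shot-042-plate", "eu-west"))   # the original in Europe
```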

What needs to happen to enable this scenario?

MovieLabs is busy designing and testing these and other new concepts for enabling seamless multi-cloud interoperability and building out software-defined workflows. We’ll be publishing more details of our approach during 2022. Meanwhile, there’s an immediate opportunity for all application developers, SaaS providers, hyperscale cloud service companies, and others in the broader ecosystem to consider these approaches to interoperable workflows that span infrastructures and the boundaries of specific applications’ scope.

We welcome the input of other companies as we collectively work through these issues and ultimately test and deploy resolver-based systems. Feel free to reach out to discuss your thoughts with us.

To ensure you are kept updated with all MovieLabs news and this new architecture, be sure to follow us on LinkedIn.

 

[1] Today such information is often encoded or crammed into a file name or the combination of file name and file path.
