Software Defined Workflows Archives - MovieLabs
https://movielabs.com/category/software-defined-workflows/
Driving Innovation in Film and TV Content Creation and Distribution

Am I Authorized to Do That?
Why CSAP separates authentication and authorization
Wed, 10 Jan 2024


CSAP (the Common Security Architecture for Production) is a workflow-driven, zero-trust security architecture designed specifically for media production workflows.

This is the third blog in our zero trust series; in the first two we explored the concept of trust and what it means in a zero trust architecture. The first two blogs are:

  1. Can I trust you?
  2. I don’t trust you, you don’t trust me, now what?

The subtitle of our third blog is "Why CSAP separates authentication and authorization," and the reason is simple: that is how zero trust works! So, end of blog.

No, not really.

Let’s discuss why zero trust separates authentication and authorization.

To paraphrase a statement in the NIST zero trust architecture document, the goal of zero trust is to prevent unauthorized access to data and services while making access control enforcement as granular as possible. That means authenticated and authorized entities (users, applications, services, and devices) can access resources to the exclusion of all other entities (i.e., unauthenticated or unauthorized ones, which includes attackers). NIST SP 800-207, Zero Trust Architecture, goes on to say:

[Quotation from NIST SP 800-207, Zero Trust Architecture]

So separating authentication and authorization isn’t something we made up for CSAP; it’s fundamental to zero trust.
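To make the separation concrete, here is a minimal sketch of access control as two independent questions. This is illustrative Python with invented names, not CSAP's actual API:

```python
# Illustrative only: authentication ("who are you?") and authorization
# ("what may you do?") as two separate checks. All names are invented.

AUTHENTICATED_USERS = {"alice": "editor", "bob": "producer"}  # identity -> role

# Authorization policy lives apart from identity and can change independently.
POLICY = {"dailies/day-04": {"editor"}}  # resource -> roles allowed

def authenticate(user):
    """Return the user's role if the identity is known, else None."""
    return AUTHENTICATED_USERS.get(user)

def authorize(role, resource):
    """Checked on every access; being authenticated is not enough."""
    return role is not None and role in POLICY.get(resource, set())

def access(user, resource):
    # Both checks must pass; neither implies the other.
    return authorize(authenticate(user), resource)
```

Here, bob is authenticated but still fails the separate authorization check for the dailies, which is exactly the distinction zero trust insists on.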

Let's break that down. Clearly (and we discussed this in depth in the first blog in this series) you cannot trust anything that hasn't been authenticated.

We love explaining security in parables. Here's one for you: you live in a city, you're alone in your home, and you've just made yourself a cup of tea. The doorbell rings, and you open the front door to find three people in utility company overalls, who say you've got a problem with your gas supply. You hadn't noticed anything, because you just boiled water on the stove to make tea. Do you let them in? The short answer is "no!!!" In fact, it's difficult to think of a city where it would be OK to open the door in the first place! Letting them in, or even opening the front door, is implied trust. And if you let them in, you're authorizing them to do anything they want in your home, because there are three of them and one of you.

So, let's deal with authentication first. You must check their ID and call the utility company to confirm that they are employees, why they were sent to your house, and what problem they came to fix. If you're satisfied that they are genuine, it still doesn't mean you should let them come in; until you do, they've been authenticated but not authorized.

In our scenario, even if the utility company believes there is something wrong with your gas supply, the problem can't be inside your home, because you can't smell gas and your stove works. (Obviously, if you could smell gas, you wouldn't have lit your stove.) That means you only need to give them access to your gas meter which, as is the case in many suburban areas in the United States, is attached to the outside of your home. You authorize them to enter the area beside your home where the gas meter is, but no more than that. You've let them inside the protect surface for your lawnmower but not the protect surface for your home.

However, let’s back up to the fresh cup of tea and the ringing doorbell. You look at your doorbell camera and recognize (i.e., authenticate) a close friend.


You let them in (i.e., authorize them to enter). In both cases (assuming the three people that visited you earlier were utility company employees), you have authenticated whoever is at the door, but your authorization only extends as far as is necessary or appropriate.

One more before we go back to cybersecurity. Let’s suppose you’ve been in bed for a week with a bad case of flu and your home is, quite reasonably in the circumstances, a mess and not the normal no-clutter environment you like to live in. Another friend arrives unannounced to visit. It’s someone you’d normally invite in, but in the circumstances, you explain through the intercom that you’re not up for visitors. What does this show? While you’d welcome this friend into your home (i.e., authorize entry) at one time, at another time you don’t want to (i.e., not authorize entry).

This illustrates something about managing authentication and authorization in a system: authentication is relatively static, whereas in dynamic environments such as film and TV production, it’s vital to be able to quickly and easily change who is authorized to do something. The NIST zero trust architecture uses the term “dynamic security policies” and CSAP uses the term “authorization policies”.
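As a toy illustration of that difference, the sketch below keeps identity fixed while authorization is granted and revoked at runtime. The class and method names are invented for this post, not part of CSAP:

```python
# Illustrative only: identity is established once, while authorization
# policies change as circumstances (or workflows) change.

class AuthorizationPolicies:
    def __init__(self):
        self._grants = {}  # resource -> set of authorized user ids

    def grant(self, user, resource):
        self._grants.setdefault(resource, set()).add(user)

    def revoke(self, user, resource):
        self._grants.get(resource, set()).discard(user)

    def is_authorized(self, user, resource):
        return user in self._grants.get(resource, set())

policies = AuthorizationPolicies()
policies.grant("close_friend", "front-door")   # a welcome visitor...
assert policies.is_authorized("close_friend", "front-door")

policies.revoke("close_friend", "front-door")  # ...but not while you have the flu
assert not policies.is_authorized("close_friend", "front-door")
```

The friend's identity never changed; only the policy did.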

Take this one step further. Security policies are created to serve the current needs for authorization. In CSAP, one source of that need is workflow management, which is a different place from where identity is managed. Identity management is an organizational function, and it normally changes only during onboarding and off-boarding.

So, we hope you now understand why authentication and authorization are separated.

This is a functional or logical separation. It doesn't necessarily mean there need to be two distinct systems.

For example, suppose an identity and access management (IAM) system is in use that manages both user identities and user roles, and there is a storage pool with role-based access control (RBAC) where access is granted according to the role assigned to a user. Workflow management wants an assistant editor to ingest a new set of dailies, so it generates a request to grant (authorize) access to that assistant editor.

Without changing the IAM system or the access management method of the storage pool, or adding a new system, the assistant editor can be authorized to access the dailies in two ways:

  1. The RBAC for the dailies is changed to include a role already assigned to the assistant editor by the IAM system.
  2. A new role, one that already has access to the dailies, is assigned to the assistant editor.

The authorization service, which is simply acting on behalf of workflow management, uses one of the two options to create a match between a role the user has and a role that can access the dailies.
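The two options can be sketched in a few lines of illustrative Python (the data structures are invented stand-ins for the IAM system and the storage pool's RBAC, not any particular product's API):

```python
# Illustrative only. Access is granted when the user's roles intersect
# the roles the resource accepts.

user_roles = {"asst_editor": {"editorial-staff"}}         # managed by the IAM system
resource_roles = {"dailies-day-12": {"post-supervisor"}}  # RBAC on the storage pool

def can_access(user, resource):
    return bool(user_roles.get(user, set()) & resource_roles.get(resource, set()))

assert not can_access("asst_editor", "dailies-day-12")  # no match yet

# Option 1: add a role the user already has to the resource's RBAC.
resource_roles["dailies-day-12"].add("editorial-staff")
assert can_access("asst_editor", "dailies-day-12")

# Option 2 (equivalent): assign the user a role that already has access.
user_roles["asst_editor"].add("post-supervisor")
```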

Job done. And now back to that Cup of Tea…

Are we there yet? Part 3
Gap Analysis for the 2030 Vision
Tue, 09 Jan 2024


In this final part of our blog series on the current gaps between where we are now and realizing the 2030 Vision, we'll address the last two sections of the original whitepaper and look specifically at gaps around Security and Identity and Software-Defined Workflows. As with the previous blogs in this series (see Parts 1 and 2), for each gap we'll include the gap as we see it, an example of how it applies in a real workflow, and the broader implications of the gap.

So let’s get started with…

MovieLabs 2030 Vision Principle 6
  1. Inconsistent and inefficient management of identity and access policies across the industry and between organizations.

    Example: A producer wants to invite two studio executives, a director, and an editor into a production cloud service, but the team spans three different identity management systems. There's no common way to identify the correct people to provide access to critical files or to provision that access.

    This is an issue addressed in the original 2030 Vision, which called for a common industry-wide Production User ID (or PUID) to identify individuals who will be working on a production. While there are ways today to stitch together different identity management and access control solutions between different organizations, they are point-to-point, require considerable software or configuration expertise, and are not "plug and play."

MovieLabs 2030 Vision Principle 7
  1. Difficulty in securing shared multi-cloud workflows and infrastructure.

    Example: A production includes assets spread across a dozen different cloud infrastructures, each of which is under control of a different organization, and yet all need a consistent and studio-approved level of security.

    MovieLabs believes the current "perimeter" security model is not sufficient to cope with the complex multi-organizational, multi-infrastructure systems that will be commonplace in the 2030 Vision. Instead, we believe the industry needs to pivot to a more modern "zero-trust" approach to security, where the stance changes from "try to keep intruders out" to authenticating every access to an asset or service and checking it for authorization. To that end, we've developed the Common Security Architecture for Production, which is built on a zero-trust foundation; take a look at this blog to learn more.

MovieLabs 2030 Vision Principle 8
  1. Reliance on file paths/locations instead of identifiers.

    Example: A vendor requires a number of assets to do their work (e.g., a list of VFX plates to pull or a list of clips) that today tend to be copied as a file tree structure or zipped together to be shared along with a manifest of the files.

    In a world where multiple applications, users and organizations can be simultaneously pulling on assets, it becomes challenging for applications to rely on file names, locations, and hierarchies. MovieLabs instead is recommending unique identifiers for all assets that can be resolved via a service to specify where a specific file is actually stored. This intermediate step provides an abstraction layer and allows all applications to be able to find and access all assets. For more information, see Through the Looking Glass.
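As a rough sketch of the idea, here is what such a resolver could look like; the class, identifiers, and storage URLs below are all invented for illustration:

```python
# Illustrative only: applications hold stable asset identifiers and ask a
# resolver service where the bytes currently live.

class AssetResolver:
    def __init__(self):
        # identifier -> current storage locations (an asset may be
        # replicated across clouds and regions)
        self._locations = {}

    def register(self, asset_id, location):
        self._locations.setdefault(asset_id, []).append(location)

    def resolve(self, asset_id):
        return self._locations.get(asset_id, [])

resolver = AssetResolver()
resolver.register("urn:asset:plate-0417", "s3://prod-bucket/plates/0417.exr")
resolver.register("urn:asset:plate-0417", "gs://archive/plates/0417.exr")

# Applications never hard-code a path; moving the file only updates the resolver.
locations = resolver.resolve("urn:asset:plate-0417")
```

The abstraction layer is the point: file trees and zip manifests are replaced by identifiers that stay valid no matter where the asset is stored.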

MovieLabs 2030 Vision Principle 9
  1. Reliance on email for notifications and manual processing of workflow tasks.

    Example: A vendor is required to do a task on a video asset and is sent an email, a PDF attachment containing a work order, a link to a proxy video file for the work to be done, and a separate link to a cloud location where the RAW files are. It takes several hours/days for the vendor to extract the required work, download, QC, and store the media assets, and then assign the task on an internal platform to someone who can do the work. The entire process is reversed to send the completed work back to the production/studio.

    By having disparate systems to send workflow requests, reference assets, and assign work to individual people, we have created an inherently inefficient industry. In the scenario above, a more efficient approach would be for the end user to receive an automated notification from a production management system that includes a definition of the task to be done and links to the cloud locations of the proxies and RAW files, with all access permissions already assigned so they can start work immediately. Of course, our industry is uniquely distributed across organizations that handle very nuanced tasks in the completion of a professional media project. This complicates the flow of work and work orders, but there are new software systems that can enable seamless, secure, and automated generation of tasks. We could strip weeks out of major production schedules simply by being more efficient in handoffs between departments, vendors, and systems.

  2. Monolithic systems and the lack of API-first solutions inhibit our progress towards interoperable modern application stacks.

    Example: A studio would like to migrate their asset management and creative applications to a cloud workflow that includes workflow automation, but the legacy nature of their software means that many tasks need to be done through a GUI and that it needs to be hosted on servers and virtual machines that mimic the 24/7 nature of their on-premises hardware.

    Modern applications are designed as a series of microservices that are assembled and called dynamically depending on the process, which enables considerable scaling as well as lighter-weight applications that can deploy on a range of compute instances (e.g., on workstations, virtual machines, or even behind browsers). While the pandemic proved we can have creative tasks running remotely or from the cloud, a lot of those processes were "brute forced" with remote access or cloud VMs running legacy software, and they are not the intended end goal of a "cloud native" software stack for media and entertainment. We recognize this is an enormous gap to close and that it will take beyond the 2030 timeframe to move all of the most vital applications/services to modern software platforms. However, we need the next generation of software systems to expose open APIs and deploy in modern containers to accelerate the interoperable and dynamic future that is possible within the 2030 Vision.
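To illustrate the notification idea from the first gap above, here is a hypothetical machine-readable work order a production management system might emit in place of an email, PDF, and loose links. Every field name here is an assumption for illustration, not an existing standard:

```python
import json

# Illustrative only: a structured work order with a task definition, asset
# identifiers (resolved to locations by a service), and pre-granted access.
work_order = {
    "task": "color-grade",
    "assignee": "vendor-42",
    "assets": {
        "proxy": "urn:asset:seq08-proxy",
        "raw": "urn:asset:seq08-ocf",
    },
    "access": {"granted_until": "2024-02-01T00:00:00Z"},
    "due": "2024-01-25",
}

# Serialized for transport between systems; the receiving system can act on
# it immediately, with no manual extraction or re-keying step.
message = json.dumps(work_order)
received = json.loads(message)
```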

MovieLabs 2030 Vision Principle 10
  1. Many workflows include unnecessarily time-consuming and manual steps.

    Example: A director can’t remotely view a final color session in real time from her location, so she needs to wait for a full render of the sequence, for it to be uploaded to a file share, for an email with the link to be sent, and then for her to download it and find a monitor that matches the one that was used for the grade.

    We could write so many examples here. There's just way too little automation and way too much time wasted in resolving confusion, writing metadata, reading it back, clarifying intent, sending emails, making calls, etc. Many of the technologies exist to fix these issues, but we need to redevelop many of our control plane functions to adapt to a more efficient system, which requires investment in time, staff, and development. But those that do the work will come out leaner, faster, and more competitive at the end of the process. We recommend that all participants in the ecosystem undertake honest internal efficiency audits to look for opportunities to improve, and prioritize the most urgent issues to fix.

Phew!  So, there we have it. For anyone who believes the 2030 Vision is "doable" today, there are 24 reasons why MovieLabs disagrees. Don't consider this post a negative; we still have time to resolve these issues, and it's worth being honest both about the great progress completed and about what's still to do.

Of course, there’s no point making a list of things to do without a meaningful commitment to cross them off. MovieLabs and the studios can’t do this alone, so we’re laying down the gauntlet to the industry – help us, to help us all. MovieLabs will be working to close those gaps that we can affect, and we’ll be publishing our progress on this blog and on LinkedIn. We’re asking you to do the same – share what your organization is doing with us by contacting info@movielabs.com and use #2030Vision in your posts.

There are three specific calls to action from this blog for everyone in the technical community:

  1. The implementation gaps listed in all parts of this blog are the easiest to close: the industry has a solution; we just need the commitment and investment to implement and adopt what we already have. These are ones we can rally around now, and MovieLabs has already created useful technologies like the Common Security Architecture for Production, the Ontology for Media Creation, and the Visual Language.
  2. For those technical gaps where the industry needs to design new solutions, sometimes individual companies can pick these ideas up and run with them, develop their own products, and have some confidence that if they build them, customers will come. Some technical gaps can only be closed by industry players coming together, with appropriate collaboration models, to create solutions that enable change, competition, and innovation. There are existing forums to do that work, including SMPTE and the Academy Software Foundation, and MovieLabs hosts working groups as well.
  3. And though not many issues are in the Change Management category right now, we still need to work together to share and educate how these technologies can be combined to make the creative world more efficient.

We’re more than 3 years into our Odyssey towards 2030. Join us as we battle through the monsters of apathy, slay the cyclops of single mindedness, and emerge victorious in the calm and efficient seas of ProductionLandia. We look forward to the journey where heroes will be made.

-Mark “Odysseus” Turner

Are we there yet? Part 2
Gap Analysis for the 2030 Vision
Thu, 14 Dec 2023


In Part 1 of this blog series, we looked at the gaps in Interoperability, Operational Support, and Change Management that are impeding our journey to the 2030 Vision's destination (the mythical place we call "ProductionLandia"). In these latter parts, we'll examine the gaps we have identified that are specific to each of the Principles of the 2030 Vision. For ease of reference, the gaps below are numbered starting from 9 (because gaps 1-8 were covered in Part 1 of the blog). For each gap we list the Principle, a workflow example of the problem, and the implications of the gap.

In this post we’ll look just at the gaps around the first 5 Principles of the 2030 Vision which address a new cloud foundation.

MovieLabs 2030 Vision Principle 1
  1. Limited bandwidth and performance, plus a lack of automatic recovery from variability in cloud connectivity.

    Example: Major productions can generate terabytes of captured data per day during production and getting it to the cloud to be processed is the first step.

    Even though there are studio and post facilities with large internet connections, there are still many more locations, especially remote or overseas ones, where the bandwidth is not large enough or the throughput not guaranteed or predictable enough, which can hobble cloud-based productions at the outset. Some of the benefits of cloud-based production involve rapid access for teams to manipulate assets as soon as they are created, and for that we need big pipes into the cloud(s) that are both reliable and self-healing. Automatic management of those links and data transfers is vital, as they will be used for all media storage and processing.

  2. Lack of universal direct camera, audio, and on-set data straight to the cloud.

    Example: Some new cameras are now supporting automated upload of proxies or even RAW material direct to cloud buckets. But for the 2030 Vision to be realized we need a consistent, multi-device on-set environment to be able to upload all capture data in parallel to the cloud(s) including all cameras, both new and legacy.

    We're seeing great momentum with camera-to-cloud in certain use cases (with limited support from newer camera models) sending files to specific cloud platforms or SaaS environments. But we've got some way to go before it's as simple and easy to deploy a camera-to-cloud environment as it is to rent cameras, memory cards/hard drives, and a DIT cart today. We also need support for multiple clouds (including private clouds) and/or SaaS platforms so that the choice of camera-to-cloud environment is not a deciding factor that locks downstream services into a specific infrastructure choice. We've also framed this gap as not just "camera to cloud" but "capture to cloud," which includes on-set audio and other data streams that may be relevant to later production stages, including lighting, lenses, and IoT devices. All of that needs to be securely and reliably delivered to redundant cloud locations before physical media storage on set can be wiped.

  3. Latency between “single source of truth in cloud” and multiple edge-based users.

    Example: A show is shooting in Eastern Europe, posting in New York, with producers in LA and VFX companies in India. Which cloud region should they store the media assets in?

    As an industry we tend to talk about "the cloud" as a singular thing or place, but in reality it is not. It's made up of private data centers and the various data centers that hyperscale cloud providers arrange into "availability zones" or "regions," which must be declared when storing media. Media production is a global business, so the example above is very real, and it leads to the question: where should we store the media, and when should we duplicate it for performance and/or resiliency? This is one of the reasons why we believe multi-cloud systems need to be supported, because the assets for a production may be scattered across different availability zones, cloud accounts (depending on which vendor has "edit rights" on the assets at any one time), and cloud providers (public, private, and hybrid infrastructures). The gap here is that currently decisions need to be made, potentially involving IT systems teams and custom software integrations, about where to store assets to ensure they are available at very low latency (sub-25 milliseconds round trip; see Is the Cloud Ready to Support Millions of Remote Creative Workers? for more details) for the creative users who need them. By 2030 we'd expect "intelligent caching" systems or other technologies that would understand, or even predict, where certain assets need to be for users and stage them close enough before they are needed. This is one of the reasons why we reiterate that we expect, and encourage, media assets to be distributed across cloud service providers and regions while merely "acting" as a single storage entity, even though they may be quite disparate. This also implies that applications need to be able to operate across all cloud providers, because they may not be able to predict or control where assets are in the cloud.

  4. Lack of visibility into the most efficient resource utilization within the cloud, especially before resources are committed.

    Example: When a production today wants to rent an editorial system, it can accurately predict the cost and map it straight to the budget. But with the cloud equivalent it's very hard to get an upfront budget, because costs for cloud resources depend on predicting usage (hours of use, amount of storage required, data egress, etc.), which is hard to do in advance.

    Creative teams take on a lot when committing to a show, usually with a fixed budget and timeline. It's hard to ask them to commit to unknown costs, especially for variables which are hard to control at the outset. Could you predict how many takes a specific scene will need? How many times a file will be accessed or downloaded? Or how many times a database will be queried? Even if they could accurately predict usage, most cloud billing is done in arrears, so costs are usually not known until after the fact, and it's consequently easy to overrun costs and budgets without even knowing it.

    Similarly, creative teams would also benefit from greater education and transparency concerning the most efficient ways to use cloud products. Efficient usage will decrease costs and enhance output and long-term usage.

    For cloud computing systems to become as ubiquitous as the physical equivalent, providers need to find ways to match the predictability and efficient use of current on-premises hardware, but with the flexibility to burst and stretch when required and authorized to do so.
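The region-placement question from the latency gap above can be made concrete with a toy calculation. The round-trip times below are invented, and the threshold comes from the sub-25 ms figure mentioned earlier:

```python
# Illustrative only: measured round-trip times (ms) from each team to each
# candidate cloud region. Numbers are invented for the example.
RTT_MS = {
    "eu-central": {"shoot-eu": 12, "post-ny": 90, "prod-la": 150, "vfx-in": 130},
    "us-east":    {"shoot-eu": 95, "post-ny": 8,  "prod-la": 65,  "vfx-in": 210},
}

def regions_within_budget(budget_ms):
    """Regions where every team stays within the latency budget."""
    return [r for r, teams in RTT_MS.items() if max(teams.values()) <= budget_ms]

# No single region serves everyone at sub-25 ms round trip, which is why
# replication or intelligent caching close to each team is needed.
low_latency = regions_within_budget(25)  # -> []
```

An "intelligent caching" service would effectively run this kind of check continuously and stage replicas wherever the budget would otherwise be blown.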

MovieLabs 2030 Vision Principle 2
  1. Too few cloud-aware/cloud-native apps, which necessitates a continued reliance on moving files (into clouds, between regions, between clouds, out of clouds).

    Example: An editor wants to use a cloud SaaS platform for cutting their next show, but the assets are stored in another cloud, the dailies system providing reference clips is on a third, and the other post vendors are using a private cloud.

    We're making great progress with getting individual applications and processes to move to the cloud, but we're in a classic "halfway" stage where it's potentially more expensive and time consuming to have some applications/assets operating in the cloud and some not. It requires moving assets into and out of a specific cloud to take advantage of its capabilities, and if certain applications or processes are available only in one cloud, moving those assets specifically to that cloud. That is the sort of "assets chasing tasks" from the offline world that this principle was designed to avoid in the cloud world. We need to keep pushing forward with modern applications that are multi-cloud native and can migrate seamlessly between clouds to support assets stored in multiple locations. We understand this is not a small task or one that will be quick to resolve. In addition, many creative artists use macOS, which is not broadly available in cloud instances in a way that can be virtualized to run on myriad cloud compute types.

  2. Audio post-production workflows (e.g., mixing, editing) are not natively running in the cloud.

    Example: A mixer wants to work remotely on a mix with 9.1.6 surround sound channels that are all stored in the cloud. However, most cloud-based apps only support 5.1 today, and the audio and video channels are streamed separately, so the sync between the audio and the video can be "soft" in a way that makes it hard to know if the audio is truly playing back in sync.

    The industry has made great strides in developing technologies to enable final color (up to 12-bit) to be graded in the cloud, but now similar attention needs to be paid to the audio side of the workflows. Audio artists can be dealing with thousands, or even tens of thousands, of small files, and they have unique challenges which need to be resolved to enable all production tasks to be completed in the cloud without downloading assets to work remotely. The audio/video sync and channel count challenges above are just illustrative of the clear need for investment in and support of both audio and video cloud workflows simultaneously to get to our "ProductionLandia," where both can happen concurrently on the same cloud asset pool.

MovieLabs 2030 Vision Principle 3
  1. Lack of communication between cross-organizational systems (AKA “too many silos”) and inability to support cross-organizational workflows and access.

    Example: A director uses a cloud-based review and approval system to provide notes and feedback on sequences, but today that system is not connected to the workflow management tools used by her editorial department and VFX vendors, so the notes need to be manually translated into work orders and media packages.

    As discussed above, we're in a transition phase to the cloud, and as such we have some systems that may be able to receive communications (messages, security permission requests) and commands (API calls), whereas other systems are unaware of modern application and control plane systems. Until we have standard systems for communicating (both routing and common payloads for messages and notifications) and a way for applications to interoperate between systems controlling different parts of the workflow, we'll have ongoing issues with cross-organizational inefficiencies. See the MovieLabs Interoperability Paper for much more on how to enable cross-organizational interop.

MovieLabs 2030 Vision Principle 4
  1. No common way to describe each studio’s archival policy for managing long term assets.

    Example: Storage service companies and MAM vendors need to customize their products to adapt to each different content owner’s respective policies and rules for how archival assets are selected and should be preserved.

    The selection of which assets need to be archived, and the level of security robustness, access controls, and resilience, are all determined by studio archivists depending on the type of asset. As we look to the future of archives, we see a role for a common, agreed way of describing those policies, so that any storage system, asset management platform, or automation platform could read the policies and report compliance against them. Doing so will simplify the onboarding of new systems with confidence.
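As a sketch of what such a common policy description might look like once machine-readable, here is one possibility. The schema below is invented for illustration; no such agreed standard exists yet:

```python
# Illustrative only: a hypothetical machine-readable archival policy that
# any storage or asset management system could evaluate and report against.
policy = {
    "asset_class": "original-camera-files",
    "retention": "indefinite",
    "min_replicas": 3,
    "fixity_check_interval_days": 180,
}

def compliant(status, policy):
    """A storage system self-reports its current status against the policy."""
    return (status["replicas"] >= policy["min_replicas"]
            and status["days_since_fixity_check"] <= policy["fixity_check_interval_days"])

ok = compliant({"replicas": 3, "days_since_fixity_check": 90}, policy)   # True
bad = compliant({"replicas": 2, "days_since_fixity_check": 90}, policy)  # False
```

The value is in the agreement, not the code: once the policy is expressed in a common format, compliance reporting stops being a per-vendor customization.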

MovieLabs 2030 Vision Principle 5
  1. Challenges of measuring fixity across storage infrastructures.

    Example: Each studio runs a checksum against an asset before uploading it to long-term storage. Even though storage services and systems run their own fixity checks, those checksums or other mechanisms are likely different from the studios' and not exposed to end clients. So instead, the studio needs to run its own checks for digital degradation by occasionally pulling the file back out of storage and re-running the fixity check.

    As there's no commonality between the fixity systems used in major public clouds, private clouds, and storage systems, the burden of checking that a file is still bit-perfect falls on the customer, who must incur the time, cost, and inconvenience of pulling the file out of storage, rehashing it, and comparing the result to the originally recorded hash. This process is an impediment to public cloud storage and the efficiencies it offers for (very) long-term storage of archival assets.

  2. Many essence and metadata file types that need to be archived are in proprietary formats.

    Example: A studio would like to maintain original camera files (OCF) in perpetuity as the original photography captured on set, but the camera file format is proprietary, and tools may not be available in 10, 20, or 100 years’ time. The studio needs to decide if it should store the assets anyway or transcode them to another format for the archive.

    The myriad proprietary files and formats in our industry contain critical information for applications to preserve creative intent, history, or provenance, but that proprietary data becomes a problem if it is necessary to open a file years or decades later, perhaps after the software is no longer available. We have a few current and emerging examples in some areas of public specifications and standards, and open source software, that can enable perpetual access, but the industry has been slow to appreciate the legacy challenges in preserving access to this critical data in the archive.
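The fixity checking described under the first gap above is simple to express in code. Here is a minimal sketch using SHA-256 (one common choice; studios may use other hashes), reading in chunks so large media files never need to fit in memory:

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Hash a file in 1 MiB chunks and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def fixity_ok(path, recorded_hash):
    """Re-hash the file and compare against the hash recorded at ingest."""
    return sha256_of(path) == recorded_hash
```

The gap is not computing the hash; it's that the studio's recorded hash and the storage provider's internal checks are not interoperable, so the re-hash requires retrieving the whole file.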

In the final part of this blog series, we’ll address the gaps remaining within the Principles covering Security and Identity and Software-Defined Workflows… Stay Tuned…

Announcing CSAP v1.3
Including Updates and New Content in the CSAP Architecture
Wed, 02 Aug 2023


Introducing v1.3 of the Common Security Architecture for Production (CSAP)

As developers learn about implementing CSAP, their feedback helps us refine the architecture, and we are now publishing CSAP version 1.3.

This round of changes is modest, but we feel it makes the architecture cleaner and easier to understand and implement.

Below is a summary of the key changes from version 1.2:

The functions of the Asset Protection Service have been merged into the Authorization Service

There is no change in functionality because of this amendment, but it has become clear that managing asset access authorizations is a core role of the Authorization Service and should not be a separate function.

The distinction between the supporting components Trust Inference and Continuous Trust Validation has been removed.

The market is showing that continuous trust validation is part of the trust engine in authentication systems that provide trust inference. The v1.3 architecture simply shows trust inference in the supporting security components. There is no change in functionality; we have simply removed what had become an unnecessary distinction.

The official Visual Language representation of CSAP has changed

We think our new representation makes it easier to understand that CSAP is a collection of services providing the functionality necessary to support the three levels of security. The three services that make up the CSAP core components (authorization, authentication, and authorization rule distribution) are now shown as services within a CSAP infrastructure shape.

Similarly, we are representing the CSAP support components as seven services within an infrastructure shape (see the Visual Language to see how Infrastructure and Services are quickly identifiable with Shapes and Icons).

Put those together along with a couple of new Visual Language security icons and the new CSAP Overview diagram looks like this:

new CSAP overview diagram

You will see that we now represent Global Security Management, the source of security policies external to the production management/CSAP authorization setup, as a service.

In this diagram, production management is made up of workflow management and asset management. It’s illustrative of the two broad elements of production management that drive CSAP.

CSAP Part 5A has been updated to include the CSAP Zero-trust Foundation

CSAP is a zero-trust architecture for securing media production, and the way to implement CSAP is to start with zero-trust. In a recent blog post we talked about the different things that zero-trust could mean in our production context and the various “zero-trust” products being offered, and we introduced the concept of the CSAP Zero-trust Foundation (ZTF). The CSAP ZTF is a zero-trust security model with a certain set of features necessary for building CSAP.

CSAP Part 5: Implementation Considerations is a living document that we plan to add to. We initially published Parts 5A, 5B, and 5C, and with version 1.3 we have added an expanded version of the CSAP ZTF blog post to Part 5A. It’s worth a read if you’re sitting there wondering where to start on your CSAP journey.

Keep the Feedback Coming

We hope that reading this will encourage you to read the new versions which are available both as online documents on our documentation website and as downloadable PDF documents. Please reach out to MovieLabs if you have any questions about how to deploy any part of CSAP at csap@movielabs.com.

We’ll keep adding to the implementation considerations as and when we see a need, and we’ll publish the final part of the main document set, Part 6: Policy Description, at a later date.


]]>
Are we there yet? Part 1 https://movielabs.com/are-we-there-yet-part-1/?utm_source=rss&utm_medium=rss&utm_campaign=are-we-there-yet-part-1 Wed, 26 Jul 2023 16:13:10 +0000 https://movielabs.com/?p=13094 Gap Analysis for the 2030 Vision

The post Are we there yet? Part 1 appeared first on MovieLabs.

]]>

It’s mid-2023, and we’re about four years into our odyssey towards “ProductionLandia” – an aspirational place where video creation workflows are interoperable, efficient, secure-by-nature, and seamlessly extensible. It’s the destination; the 2030 Vision is our roadmap to get there. Each year at MovieLabs we check the industry’s progress towards this goal, adjust focus areas, and generally provide navigation services to ensure we all arrive in port in ProductionLandia at the same time, with a suite of tools, services, and vendors that work seamlessly together. As part of that process, we take a critical look at where we are collectively as an M&E ecosystem – and what work still needs to be done – we call this “Gap Analysis”.

Before we leap into the recent successes and the remaining gaps, let’s not bury the lead: while there has been tremendous progress, we have not yet achieved the 2030 Vision (that’s not negative; we have a lot of work to do and it’s a long process). So, despite some bold marketing claims from some industry players, there’s a lot more to the original 2030 Vision white paper than lifting and shifting some creative processes to the cloud, the occasional use of virtual machines for a task, or a couple of applications seamlessly passing a workflow process between each other. The 2030 Vision describes a paradigm shift that starts with a secure cloud foundation, reinvents our workflows to be composable and more flexible, removes the inefficiencies of the past, and includes the change management necessary to give our creative colleagues the opportunity to try, practice, and trust these new technologies on their productions. The 2030 Vision requires an evolution in the industry’s approach to infrastructure, security, applications, services, and collaboration, and that was always going to be a big challenge. There’s still much to be done to achieve dynamic and interoperable software-defined workflows built with cloud-native applications and services that securely span multi-cloud infrastructures.

Status Check

But even though we are not there yet, we’re making amazing progress based on where we started (albeit with a global pandemic to add urgency to our journey!). So many major companies – cloud services companies, creative application tool companies, creative service vendors, and other industry organizations – have now backed the 2030 Vision; it is no longer just the strategy of the major Hollywood studios but has truly become the industry’s “Vision.” The momentum behind the Vision is building – as is evident in the 2030 Showcase program we launched in 2022 to highlight and share 10 great case studies in which companies large and small demonstrate Principles of the Vision delivering value today.

We’ve also seen the industry respond to our previous blogs on gaps, including what was missing around remote desktops for creative applications, software-defined workflows, and cloud infrastructures. We can now see great progress with camera-to-cloud capture, automated VFX turnovers, final color pipelines that are now technically possible in the cloud, amazing progress on real-time rendering and iteration via virtual production, creative collaboration tools, and more applications opening their APIs to enable new and unpredictable innovation.

Mind the Gaps

So, in this two-part blog, let’s look at what’s still missing. Where should the industry now focus its attention to keep us moving and to accelerate innovation and the collective benefits of a more efficient content creation ecosystem? We refer to these challenges as “gaps” between where we are today and where we need to be in “ProductionLandia.” When we succeed in delivering the 2030 Vision, we’ll have closed all of these gaps. As we analyze where we are in 2023, we see these gaps falling into the 3 key categories from the original vision (Cloud Foundations, Security and Identity, Software-Defined Workflows), plus 3 underlying ones that bind them all together:

image: 3 key categories from the original vision (Cloud Foundations, Security and Identity, Software-defined Workflows), plus 3 underlying ones that bind them altogether

In this Part 1 of the Blog we’ll look at the gaps related to these areas. In Part 2 we’ll look at the gaps we view as most critical for achieving each of the principles of the vision, but let’s start with those binding challenges that link them all.

It’s worth noting that some gaps involve fundamental technologies (a solution doesn’t exist, or a new standard or open source project is required), some are implementation focused (e.g., the technology exists but needs to be implemented and adopted by multiple companies across the industry to be effective – our cloud security model CSAP is an example where a solution is now ready to be implemented), and some are change management gaps (e.g., we have a viable solution that is implemented, but we need training and support to effect the change). We’ve steered clear of gaps that are purely economic in nature, as MovieLabs does not get involved in those areas. It’s probably also worth noting that some of these gaps and solutions are highly related, so we need to close some to support closing others.

Interoperability Gaps

  1. Handoffs between tasks, teams, and organizations still require large-scale exports/imports of essence and metadata files, often via an intermediary format. Example: Generation of proxy video files for review/approval of specific editorial sequences. These handovers are often manual, introducing the potential for errors, omissions of key files, security vulnerabilities, and delays. (See note [1].)
  2. We still have too many custom point-to-point implementations rather than off-the-shelf integrations that can be simply configured and deployed with ease. Example: An Asset Management System currently requires many custom integrations throughout the workflow, which makes changing it out for an alternative a huge migration project. Customization of software solutions adds complexity and delay and makes interoperability considerably harder to create and maintain.
  3. Lack of open, interoperable formats and data models. Example: Many applications create and manage their own sequence timeline for tracking edits and adjustments instead of rallying around open equivalents like OpenTimelineIO for interchange. For many use cases, closing this gap requires the development of new formats and data models, and their implementation.
  4. Lack of standard interfaces for workflow control and automation. Example: Workflow management software cannot easily automate multiple tasks in a workflow by initiating applications or specific microservices and orchestrating their outputs to feed a new process. Although we have automation systems in some parts of the workflow, the lack of standard interfaces again means that implementors frequently have to write custom connectors to get applications and processes to talk to each other.
  5. Failure to maintain metadata and a lack of common metadata exchange across components of the larger workflow. Example: Passing camera and lens metadata from on-set to post-production systems for use in VFX workflows. Where no common metadata standards exist, or have not been implemented, systems rarely pass on data they do not need for their specific task, as they have no obligation to do so or don’t know which target system may need it. A more holistic system design, however, would enable non-adjacent systems to find and retrieve metadata and essence from upstream processes and to expose data to downstream processes, even if they do not know what it may be needed for.

Operational Support

  1. Our workflows, implementations, and infrastructures are complex and typically cross the boundaries of any one organization, system, or platform. Example: A studio shares both essence and metadata with external vendors to host on their own infrastructure tenants, but also less structured elements such as work orders (definitions of tasks), context, permissions, and privileges. Therefore, there is a need for systems integrators and implementors to take the component pieces of a workflow and design, configure, host, and extend them into complete ecosystems. These cloud-based and modern software components will be very familiar to IT systems integrators, but integrators need the skills and understanding of our media pipelines to know how to implement and monetize them in a way that will work in our industry. We therefore have a mismatch gap between those that understand cloud-based IT infrastructures and software and those that understand the complex media assets and processes that need to operate on those infrastructures. There are few companies to choose from that have the right mixture of skills to understand both cloud and software systems and media workflow systems, and we’ll need a lot more of them to support the industry-wide migration.
  2. We also need systems that match our current support models. Example: A major movie production can be simultaneously operating across multiple countries and time zones in various states of production, and any system outage can cause backlogs in otherwise smooth operations. The media industry works unusual and long hours, at strange times of day and across the world – demanding a support environment with specialists who understand the challenges of media workflows, not just an IT ticket that will be resolved when weekday support comes in at 9am on Monday. In the new 2030 world, these problems are compounded by the shared nature of the systems – so it may be hard for a studio or production to understand which vendor is responsible if (when) there are workflow problems. Who do you call when applications and assets seamlessly span infrastructures? How do you diagnose problems?

Change Management

  1. Too few creatives have tried and successfully deployed new ‘2030 workflows’ to be able to share and train others. Example: Parts of the workflow like Dailies have migrated successfully to the cloud, but we’re yet to see a major production running from “camera to master” in the cloud – who will be the first to try it? Change management comprises many steps before new processes are considered “just the way we do things.” The main ones we need to get through are:
    • Educating and socializing the various stakeholders about the benefits of the 2030 vision, for their specific areas of interest
    • Involving creatives early in the process of developing new 2030 workflows
    • Then demonstrating value of new 2030 workflows to creatives with tests, PoCs, limited trials and full productions
    • Measuring cost/time savings and documenting them
    • Sharing learnings with others across the industry to build confidence.

Shortly, we’ll add Part 2 of this blog, which will extend the list with the gaps most applicable to each of the 10 Principles of the Vision. In the meantime, there are eight gaps here which the industry can start thinking about – and please do let us know if you think you already have solutions to these challenges!

[1] The Ontology for Media Creation (OMC) can assist in common payloads for some of these files/systems.


]]>
I don’t trust you, you don’t trust me, now what? https://movielabs.com/i-dont-trust-you-you-dont-trust-me-now-what/?utm_source=rss&utm_medium=rss&utm_campaign=i-dont-trust-you-you-dont-trust-me-now-what Thu, 11 May 2023 05:00:26 +0000 https://movielabs.com/?p=12661 With so many Zero Trust Options, where do you start with CSAP?

The post I don’t trust you, you don’t trust me, now what? appeared first on MovieLabs.

]]>

We know you are keen to get the Common Security Architecture for Production (CSAP) up and running, but you may be wondering how you transition from perimeter security to CSAP. Let’s see if we can help.

Implementing a CSAP architecture can start with the CSAP Zero-Trust Foundation (CSAP ZTF). There is more than one way to approach zero-trust, and the CSAP ZTF is a zero-trust implementation with particular characteristics necessary to fully implement CSAP. The requirements it places on the approach are not out of the ordinary and might be present in zero-trust implementations for other information technology systems. The CSAP ZTF is not media production specific.

CSAP functionality is then added on top of the CSAP Zero-Trust Foundation to enable implementations to achieve CSAP Level 100.

CSAP Zero-Trust Foundation

Why are we defining the CSAP Zero-Trust Foundation?

Zero-trust is not a well-defined term, so saying “build CSAP on top of a zero-trust architecture” isn’t helpful on its own. In fact, there are many ways to define zero-trust, for example:

  • Never trust, always verify. All network devices are untrusted until they have been authenticated, or;
  • Zero Trust Architecture, NIST Special Publication 800-207 or;
  • How your current or potential security vendor defines it.

Obviously, the first definition is useful inasmuch as you have an idea what it means, but only at a network level. The NIST document is the best reference around, but implementing it completely could, depending on your risk profile, result in something more complicated than is necessary for your needs. And the third is one of the reasons we are defining the CSAP ZTF.

What is the CSAP Zero-Trust Foundation?

CSAP ZTF (because we love acronyms) is a zero-trust architecture implemented using the same off-the-shelf zero-trust solutions that any organization might use to implement zero-trust, for example those offered by leading cloud services providers. Those solutions have a comprehensive array of features, and different approaches might make a different selection from them.

Think of those solutions as a Tapas menu: you usually wouldn’t eat absolutely everything on it, but the CSAP ZTF requirements are like the dishes you simply must have!

Tapas Menu
Unlike perimeter security models, zero-trust architectures are deny-by-default and start with a very simple rule: everything must be authenticated before it can take part regardless of how it is connected. This leads us to the basic features required of a zero-trust implementation for it to be a CSAP ZTF:
  1. It is universally deny-by-default:
    • Nothing can take part in any workflow unless it has been appropriately authenticated. At minimum this applies to users, computer systems, and services.
    • Nothing can take part in a specific workflow unless it has been authorized to conduct the activity.
  2. It has separate authentication and authorization services. Unlike perimeter security models, an authenticated user might present a token to a service, but authorization to do anything goes directly to the policy enforcement point associated with that service. (See the diagram below.)
  3. All authorizations are defined by security policies that are created and stored in an identifiable component of the system. This component becomes part of the CSAP Authorization Service.
  4. The implementation assumes that the network is under the control of an intruder. The only exception would be if micro-segmentation is required for systems that have no options for intrinsic security, but the emphasis is on the word “micro.”
  5. All network traffic and system usage is continuously analyzed for abnormal activity.

Note that this isn’t a complete list of what is required in a zero-trust implementation. As we said, we’re just making sure you include the recommended items on the Tapas menu.
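The deny-by-default rule above can be sketched in a few lines of Python. This is a toy model, not part of CSAP; the names and policy shape are invented for illustration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AllowPolicy:
    """An explicit 'allow' policy; zero-trust needs no 'deny' policies."""
    participant: str  # who is authorized
    action: str       # what they may do
    resource: str     # what they may do it to

def is_authorized(authenticated: bool, participant: str, action: str,
                  resource: str, policies: list[AllowPolicy]) -> bool:
    # Nothing unauthenticated can take part in any workflow.
    if not authenticated:
        return False
    # Deny-by-default: allow only when an explicit policy matches.
    return any(p.participant == participant and p.action == action
               and p.resource == resource for p in policies)

policies = [AllowPolicy("alice", "read", "dailies/ep101")]
print(is_authorized(True, "alice", "read", "dailies/ep101", policies))   # True
print(is_authorized(True, "alice", "write", "dailies/ep101", policies))  # False: no matching policy
print(is_authorized(False, "alice", "read", "dailies/ep101", policies))  # False: not authenticated
```

Everything not explicitly allowed, including an attacker with network access but no matching policy, falls through to deny.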

Security Is Controlled by Policies

A zero-trust security implementation is driven by security policies – there is no trust, meaning there are no authentication or authorization defaults. Building a CSAP ZTF means having an identifiable point or points that are the source of the security policies that say what can be authorized to do what. These policies are of the type “allow,” meaning they permit activity – there is no need in zero-trust for policies that deny an activity since zero-trust is deny-by-default [1].

As you design your zero-trust implementation, the thread that holds it together is the policies.

Authentication and Authorization Are Not the Same Thing

In the first blog post in this series, Can I Trust You? we examined the two components of trust:

  • Trustworthiness: determining if you can trust something
  • Authentication: determining whether something is the trusted thing it claims to be

The first is a decision that you make using criteria that you create or take from other realms. The second is part of the security architecture and involves the presentation and validation of credentials.

In the first blog post in this series, we described a trust boundary [2] in this way: “if something is authenticated it is trusted within that boundary but not (necessarily) outside of it.” In that blog post, our analogy was trusting a cardiac surgeon to perform heart surgery but not pulmonary surgery or brain surgery. The heart is inside the trust boundary; the pulmonary system and the brain are outside.

A trust boundary is a combination of authentication and authorization. In that example, all the medical staff are authenticated prior to being authorized to do something. Furthermore, the surgeon can’t operate on any patient just because they have been authenticated. For a cardiac surgeon to perform surgery on you, someone (the patient, for example) must allow (authorize) the surgeon to perform surgery. And, even though the surgeon has been authenticated, you do not allow (authorize) the surgeon to perform pulmonary or brain surgery. Let’s look at this as a workflow [3].

  1. A patient’s doctor suspects a patient has a heart problem, so the doctor refers the patient to a cardiologist meaning the cardiologist is authorized to treat the patient.
  2. The cardiologist determines the patient needs surgery and authorizes heart surgery.
  3. The patient is admitted to hospital and prepared for surgery. Before surgery can commence, the surgeon and the anesthetist must agree it can commence (again, authorization).

While this oversimplifies [4] the way the medical profession works, what we have is a workflow laid down by hospital policies, and the authorization to perform each step comes from that being the next step in the care process. This has many of the same properties as a media workflow.

Going back to authentication for a minute, the hospital effectively operates a zero-trust security model inasmuch as they don’t trust anyone to perform surgery just because they are in the hospital and wearing scrubs. The surgical staff must be authenticated one way or another.

This does not mean that authentication and authorization must be handled by different systems, although as CSAP functions are added doing so may prove more efficient, but the functions must be separable. For example, authenticating something should not provide an immutable set of authorizations as is the case with perimeter security when a user token from an identity and access management system includes access privileges. We will come to the reason for that.

Collapse the Protect Surface

John Kindervag, often credited with defining zero-trust, defines “protect surface” as the thing you are trying to protect: it is the attacker’s target, and it is where you put your protection measures. The protect surface is as close as possible around the thing that is protected. Kindervag uses the Secret Service’s method for protecting the US President as an example.

Secret Service
Rather than relying on a security perimeter around the neighborhood the presidential motorcade is driving through, the Secret Service’s protect surface is reduced to the president’s vehicle. The protect surface is guarded by the agents walking alongside the vehicle working with the agents inside the vehicle and in conjunction with the agents in the following vehicle.

Kindervag calls the uniformed agents and police officers standing along the street “security theater.” [5] Their role is to protect against the low-hanging fruit, for example, individuals in the crowd who charge toward the president’s vehicle, and to intimidate anyone planning an attack. But the protect surface is around the president’s vehicle.

Applying this analogy to your system, you must define your protect surfaces, and you must make them as small as possible. You may decide your protect surface is around each server in your system, as is the case with the services in a mesh network, or you may decide that is impractical for part of your system and use network segmentation. If you use the latter strategy, the operative word is segmentation, and everything on that segment must have a reason for being there. For example, it’s unlikely that your data ingest systems need to be in the same network segment as your rendering nodes because they are at opposite ends of a VFX workflow.

Where Do I Start?

Regardless of what your security system is today, you start by formulating a plan to implement a zero-trust security solution that meets the needs of the CSAP ZTF. There is a wealth of literature and solution providers out there to help you with implementing zero-trust, and we have a short reading list at the end of this blog. To reemphasize the point, the CSAP ZTF is not a media-specific zero-trust implementation; it’s a zero-trust solution that might be implemented in any organization, but it has required features that not all zero-trust solutions have.

Two factors are relevant to your existing security solution:

  1. What can be kept and re-used?
  2. What is my zero-trust deployment process?

On the first point, re-use isn’t just about (say) keeping your identity management system. It can go deeper; for example, can you keep the same access controls on assets, such as access control lists (ACLs) or role-based access control (RBAC)? Or is changing to attribute-based access control (ABAC) or relationship-based access control (ReBAC) a better proposition?
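To make the acronyms concrete, here is the same access decision expressed as RBAC and as ABAC. The roles, attributes, and asset names are hypothetical, chosen only to show the difference in shape:

```python
# RBAC: access follows from a role assignment.
roles = {"alice": {"colorist"}}
role_permissions = {"colorist": {("read", "graded-media")}}

def rbac_allows(user: str, action: str, asset_class: str) -> bool:
    return any((action, asset_class) in role_permissions.get(r, set())
               for r in roles.get(user, set()))

# ABAC: access follows from attributes of the user and the asset.
def abac_allows(user_attrs: dict, asset_attrs: dict, action: str) -> bool:
    return (action == "read"
            and user_attrs["production"] == asset_attrs["production"]
            and user_attrs["department"] in asset_attrs["departments"])

print(rbac_allows("alice", "read", "graded-media"))  # True
print(abac_allows({"production": "p1", "department": "color"},
                  {"production": "p1", "departments": {"color", "edit"}},
                  "read"))                            # True
```

Attribute- and relationship-based decisions can be evaluated at request time from current attributes, which tends to fit a zero-trust model more naturally than privileges baked into a long-lived role.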

On the second point, for example, if part of your infrastructure is on-premises, adequately secured, and does not need to interact with systems (external or internal) built on the principles of the MovieLabs 2030 Vision then you might decide to put that further out on your deployment schedule.

One more example that bridges the two points: if you are using microsegmentation for a small group of systems, and it is truly secure, then perhaps you keep it and focus on deploying a policy enforcement point at the point where the microsegment is accessed.

Map Your Workflows

Whether you are starting with an existing infrastructure protected by a traditional perimeter security approach or you are building new workflows on a cloud infrastructure, you need to start by mapping your workflows so that you understand who will be doing what tasks and with what assets and infrastructure. One of the advantages we have securing production over someone securing a corporate network is that our workflows are known and, at least to some extent, documented. You know how your dailies workflow works; you know how your VFX rendering workflow works. In the corporate environment, workflows are generally opaque [6] to the IT department and require exploration before zero-trust can be implemented.

Architect Your Zero-Trust System and Create Policies

Once you have your protect surfaces defined, meaning you know exactly what you are protecting and they are as small as possible, and you know your workflows, you are ready to architect your system and deploy it.

Each policy must authorize as little as possible; to reduce complexity and increase manageability it is better to have many policies authorizing similar things than have a single multi-part policy that covers everything. Each policy should be only as complicated as is necessary to authorize a particular part of the workflow. For example, one policy might authorize access to assets by authorizing access to the storage location, and another policy authorizes access to a SaaS service.

It is likely that every policy will have components that are specific to a particular infrastructure; for example, a policy authorizing access to assets on one cloud provider’s infrastructure may be different from a policy that authorizes the same activity on a different cloud provider’s infrastructure. In CSAP we define two classes of policies:

  • Authorization Policies are an abstract expression of what is authorized.
  • Authorization Rules are Authorization Policies translated to the specific needs of a particular infrastructure.

It isn’t necessary to make that distinction when implementing the CSAP ZTF; however, doing so will probably reduce the complexity of processing the policies at the policy enforcement point.
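As an illustration of that distinction, the sketch below translates one abstract Authorization Policy into per-infrastructure Authorization Rules. The policy fields and the two rule formats are invented for the example; real cloud providers each have their own policy syntax:

```python
def to_authorization_rule(policy: dict, infrastructure: str) -> dict:
    """Translate an abstract Authorization Policy into an
    infrastructure-specific Authorization Rule (hypothetical formats)."""
    if infrastructure == "cloud-a":
        # An object-storage-style statement.
        return {
            "Effect": "Allow",
            "Principal": policy["participant"],
            "Action": f"storage:{policy['action']}",
            "Resource": f"bucket/{policy['asset']}",
        }
    if infrastructure == "cloud-b":
        # A role-binding-style statement.
        return {
            "role": {"read": "viewer", "write": "editor"}[policy["action"]],
            "member": policy["participant"],
            "object": policy["asset"],
        }
    raise ValueError(f"no rule translation for {infrastructure}")

policy = {"participant": "user:alice", "action": "read", "asset": "ocf/day-12"}
rule = to_authorization_rule(policy, "cloud-a")
```

Keeping the abstract policy as the single source of truth means the same authorization can be enforced consistently even when assets span more than one provider.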

Policy Enforcement Points

A zero-trust architecture has policy enforcement points where a decision is made on whether something is authorized by a policy. In CSAP, we refer to policies in this context as Authorization Policies, and one of the parameters in those policies is the identity of the user (or more generally, the Participant) that is authorized to conduct the activity. The policy enforcement point accepts the user’s identity token and uses that in combination with the authorization policy to determine if the activity is authorized.

This differs from a traditional approach where the user’s token includes access claims (meaning what they are authorized to do). In CSAP, the authorization policy is not a property stored in the user’s record in the identity management system.

The two approaches can be seen in this diagram with the conventional approach on the left and the zero-trust approach on the right.

policy enforcement points
In both cases, authentication and access privileges/authorization are required and processed before the user can access assets. The difference is that in the conventional approach (left) access privileges are part of the user token, which was created when the user logged in, whereas in the zero-trust case (right), authorization is managed by policies which can be fine-grained and of limited lifetime.

In a later blog post, we will describe how the CSAP architecture maps to the zero-trust model in NIST SP 800-207 (see reading list). We refer to NIST’s policy enforcement points and policy decision points collectively as policy enforcement points. All the functions are still there. We anticipate that there will be many ways the policy enforcement point can be implemented. In some cases, the policy enforcement point is implemented using native security components of the infrastructure (for example, access controls in storage).
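The contrast between the two sides of the diagram can be sketched as follows. The token shapes are invented for the example and are not a CSAP-mandated format:

```python
# Conventional: the token minted at login embeds access claims,
# which are then fixed for the token's lifetime.
conventional_token = {"sub": "alice", "claims": ["read:dailies"]}

def conventional_check(token: dict, action: str) -> bool:
    return action in token["claims"]

# Zero-trust: the token proves identity only; the policy enforcement
# point consults current, possibly short-lived, authorization policies.
zero_trust_token = {"sub": "alice"}

def pep_check(token: dict, action: str, policies: list[dict]) -> bool:
    return any(p["participant"] == token["sub"] and p["action"] == action
               and not p.get("expired", False) for p in policies)

policies = [{"participant": "alice", "action": "read:dailies"}]
assert conventional_check(conventional_token, "read:dailies")
assert pep_check(zero_trust_token, "read:dailies", policies)

policies[0]["expired"] = True  # revoke without reissuing the token
assert not pep_check(zero_trust_token, "read:dailies", policies)
```

Because authorization lives in the policy rather than the token, it can be revoked or narrowed immediately, without waiting for a long-lived token to expire.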

Monitor Everything

Everything must be monitored. You are looking for activity that is denied and authorized activity that is not consistent with previous activity or the workflows. Knowing what is legitimate allows the construction of trust inference where, for example, if a user’s credentials are used to access something from an unusual location, the attempt can be denied or subjected to a higher level of verification.

Similarly, it is important to know when legitimate activity is denied because of the absence of an authorization policy.
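A toy example of the kind of trust inference described above; the location heuristic and the three dispositions are invented for illustration:

```python
def assess_access(user: str, location: str, history: dict) -> str:
    """Choose a disposition for an access attempt based on where this
    user's credentials have legitimately been used before (toy heuristic)."""
    seen = history.get(user, set())
    if location in seen:
        return "allow"
    if seen:
        return "step-up-auth"  # known user, unusual location: verify harder
    return "deny"              # no history at all: treat as suspect

history = {"alice": {"london", "los-angeles"}}
print(assess_access("alice", "london", history))   # allow
print(assess_access("alice", "lagos", history))    # step-up-auth
print(assess_access("mallory", "lagos", history))  # deny
```

A real implementation would weigh many more signals (device posture, time of day, workflow context) and would feed denied-but-legitimate attempts back into policy creation.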

Lastly…

In this blog we have examined the CSAP ZTF, which is a zero-trust architecture with a certain set of characteristics. These characteristics are common to other zero-trust systems, although not necessarily present in all “zero-trust” products. We’ll continue to expand this series with more useful implementation advice on CSAP; meanwhile, if you have any questions or comments on this series, please reach out at info@movielabs.com.

Suggested Reading

If you wish to understand more about zero-trust architectures, we have homework for you.


Zero Trust Networks: Building Secure Systems in Untrusted Networks

by Evan Gilman and Doug Barth, O’Reilly, ISBN: 1491962194


Zero Trust Architecture

by Scott W. Rose, Oliver Borchert, Stuart Mitchell, Sean Connelly, NIST Special Publication 800-207

i

Zero Trust Security: An Enterprise Guide

by Jason Garbis and Jerry W. Chapman, Apress, ISBN 148426701X
Project Zero Trust: A Story about a Strategy for Aligning Security and the Business, 1st Edition by George Finney (Author), John Kindervag (Foreword) ISBN 1119884845

i

BeyondCorp: A New Approach to Enterprise Security

by Rory Ward and Betsy Beyer, Google Research

AWS, Google Cloud Platform, and Azure have useful documentation on using their security services to build zero-trust security into your cloud platform; however, we believe that having a basic understanding of CSAP will help you decide how to use those services to create the CSAP ZTF.

[1] The only policies in CSAP that “deny” anything are the global security policies but they are not part of the CSAP Zero-Trust Foundation.

[2] Unrelated to a security perimeter.

[3] If this looks like a real medical workflow, it is a complete accident.

[4] An understatement!

[5] The term security theater was coined by computer security expert Bruce Schneier in his book Beyond Fear. He has applied the term to the TSA security measures introduced at airports following 9/11.

[6] The IT department provides email services but they don’t necessarily know how they are used other than to send messages to other people. For example, emails that recap decisions made in a meeting, sending email to yourself as a way of making notes, or filing emails in folders as a way of tracking different contract negotiations.

The post I don’t trust you, you don’t trust me, now what? appeared first on MovieLabs.

]]>
Announcing CSAP v1.2 Part 5: Implementation Considerations https://movielabs.com/announcing-csap-v1-2-part-5-implementation-considerations/?utm_source=rss&utm_medium=rss&utm_campaign=announcing-csap-v1-2-part-5-implementation-considerations Wed, 21 Dec 2022 16:49:37 +0000 https://movielabs.com/?p=11919 What the CSAP architects were thinking about and why there’s no magic here!

The post Announcing CSAP v1.2 Part 5: Implementation Considerations appeared first on MovieLabs.

]]>

Introducing Part 5 of the Common Security Architecture for Production (CSAP)

Today we are publishing Part 5: Implementation Considerations of the Common Security Architecture for Production (CSAP) v1.2. The CSAP architects were keen to design an architecture that did not have any boxes labelled “magic happens here,” and Part 5 offers some insight into our thinking and how we avoid them.

Part 5 conveys what the CSAP architects envisioned regarding the operational and technical implementation of CSAP, as well as lessons learned from implementing it. It’s not an implementation guide; its goal is to explain the architecture at the next level of detail to assist those who are actively implementing CSAP today.

CSAP Part 5 is divided into three parts, reflecting where implementors are in their journey.

In Part 5A: Implementation Considerations – Starting Out, we discuss how you get from here, a perimeter-based security system, to there, CSAP. The journey starts by migrating to zero-trust security (or, more accurately, a zero-trust security philosophy), which is the entry point to CSAP. Recall that CSAP is a workflow-driven zero-trust security architecture for securing media production in the cloud, so the zero-trust philosophy comes first.

Part 5A then discusses what it means to get from zero-trust to CSAP levels 100, 200, and 300. The CSAP levels are not measures of recommended practice or robustness; they describe required capabilities and functionality. Remember that a uniform CSAP level does not need to be applied across the entire production – some assets or services may be deemed to require level 100, some level 200, and the most sensitive level 300. CSAP is designed so that the decision as to which security level to apply can be made outside of CSAP, for example from risk analysis.
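As a hypothetical illustration of that last point, the level applied to each asset class could be derived from a risk score produced entirely outside CSAP; the scores and thresholds below are invented:

```python
# Map an externally produced risk score (e.g., from a risk analysis)
# to a CSAP level. The thresholds are illustrative, not normative.
def level_from_risk(risk_score):
    if risk_score >= 8:
        return 300
    if risk_score >= 4:
        return 200
    return 100

asset_risk = {"pre-release-cut": 9, "dailies": 5, "call-sheet": 2}
levels = {asset: level_from_risk(r) for asset, r in asset_risk.items()}
print(levels)  # {'pre-release-cut': 300, 'dailies': 200, 'call-sheet': 100}
```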

Part 5A wraps up looking at the core concept of trust, the more detailed version of our “Can I Trust You?” blog post.

Part 5B: Implementation Considerations – CSAP Core gets into the CSAP core security functions. A big part of this, and the place where every activity starts, is authentication. Part 5B addresses not only participant[1] authentication but also device and application authentication, and the core principle of mutual authentication. The most common security mechanism when accessing a web service is Transport Layer Security (TLS), which is the S in https://. TLS allows the user to authenticate the web service. The user is authenticated by a secondary mechanism, in the simplest case a username and password. The two different mechanisms add to the complexity, but neither authenticates the user's device. It's just assumed that the user's device can be trusted because the user is using it. CSAP requires mutual authentication, which means that the service may require a mechanism to authenticate the user's device, for example using mutual transport layer security (mTLS) rather than TLS.
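Using Python's standard `ssl` module, the difference can be sketched as follows. The certificate file names are placeholders and the contexts are not wired to a real server; the point is only the extra client-certificate requirement that mTLS adds.

```python
import ssl

# Plain TLS: the client verifies the server's certificate; the server does
# not authenticate the client's device at the TLS layer.
tls_server = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
# tls_server.load_cert_chain("server.crt", "server.key")   # server identity

# Mutual TLS (mTLS): the server additionally requires and verifies a client
# certificate, authenticating the connecting device as well.
mtls_server = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
# mtls_server.load_cert_chain("server.crt", "server.key")
# mtls_server.load_verify_locations("device-ca.pem")       # trusted device CA
mtls_server.verify_mode = ssl.CERT_REQUIRED  # reject clients with no certificate

print(tls_server.verify_mode == ssl.CERT_NONE)       # True: device unauthenticated
print(mtls_server.verify_mode == ssl.CERT_REQUIRED)  # True
```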

Of course, there are alternatives to authenticating the device using certificates, for example, by evaluating the trustworthiness of the device. When a customer logs into one British bank, the bank's service looks for remote-control help desk software running on the customer's device. This class of software is used by criminals as they seek to social-engineer access to a victim's account. If this software is detected, the customer is granted read-only access to their account and cannot do anything like authorize a payment. This is an example of an extended device policy adding more security to the ecosystem.

In addition, Part 5B covers techniques for application authentication.

Part 5B has a section on the implementation issues surrounding the handling of authorization, including authorization policies and the creation and processing of authorization rules.

We conclude Part 5B with a section on the user experience, drawing on the work done by the designers of Google's BeyondCorp security architecture, particularly the Explanation Engine. This is important because it is vital that security is connected back to the user, so that they know why they've just been denied something or are required to authenticate to access something.

The third of the Part 5 documents is Part 5C: Implementation Considerations – Approaches. In this document, we address some examples of how CSAP might be implemented in certain circumstances.

We start out examining techniques for implementing a zero-trust network at layers 2 and 3 such as Software-Defined Perimeters, and how a layer 7 (or layer 8 depending on your point of view) zero-trust network can be created using a service mesh.

Part 5C addresses access controls and how they can be managed. Spoiler alert: if you are using access controls to authorize a user’s access to a resource, you could add that user to the access control list for the resource, or you could add the user to a group that is authorized to access the resource.
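A minimal sketch of that spoiler, with invented identifiers: direct ACL entries and group membership both resolve to the same deny-by-default check.

```python
# Per-resource ACLs can list users directly, groups, or both.
acl = {"vfx-plates/shot-042": {"users": {"alice"}, "groups": {"vfx-vendor-a"}}}
group_members = {"vfx-vendor-a": {"bob", "carol"}}

def can_access(user, resource):
    entry = acl.get(resource)
    if entry is None:
        return False  # no ACL entry -> deny by default
    if user in entry["users"]:
        return True   # authorized via a direct ACL entry
    # ...or authorized via membership of an authorized group
    return any(user in group_members.get(g, set()) for g in entry["groups"])

print(can_access("alice", "vfx-plates/shot-042"))    # True (direct entry)
print(can_access("bob", "vfx-plates/shot-042"))      # True (via group)
print(can_access("mallory", "vfx-plates/shot-042"))  # False
```

The group approach usually scales better: revoking a vendor's access is one membership change rather than an edit to every resource's ACL.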

We close Part 5C with a section on end-to-end encryption looking at both where it might be used and some considerations in implementing it. End-to-end encryption is important to realize the CSAP capability of operating securely on an untrusted infrastructure. We don’t get into the mathematics of cryptography, but we do look at the practicalities like key management.

Keep the Feedback Coming

We hope that reading this will encourage you to download CSAP Part 5, or the entire CSAP document package if you haven't done so already – both are available here. Please reach out to MovieLabs if you have any questions about how to deploy any part of CSAP, including the new Part 5, and give us feedback as you roll it out across your systems and environments. In 2023, we'll be looking for Showcase examples of active CSAP deployments that will become working examples for others in the industry to learn from.

[1] Participant is a defined term in the MovieLabs ontology for media creation and could include a user, an organization, a service, and so on.

The post Announcing CSAP v1.2 Part 5: Implementation Considerations appeared first on MovieLabs.

]]>
From Script to Data – Part 3 https://movielabs.com/from-script-to-data-part-3/?utm_source=rss&utm_medium=rss&utm_campaign=from-script-to-data-part-3 Thu, 08 Dec 2022 23:15:51 +0000 https://movielabs.com/?p=11810 Using the Ontology for Media Creation in physical and post-production

The post From Script to Data – Part 3 appeared first on MovieLabs.

]]>

Introduction to Part 3

This is the third and final part of our blog series “From Script to Data”, which shows how to use the Ontology for Media Creation to improve communication and automation in the production process. Part 1 went from the script to a set of narrative elements, and Part 2 moved from narrative elements to production elements. Here we will use OMC to go from a set of production elements into the world of filming, slates, shots, and sequences.

Combining Narrative Elements and Production Elements

Toe bone connected to the foot bone
Foot bone connected to the heel bone
Heel bone connected to the ankle bone

Back bone connected to the shoulder bone
Shoulder bone connected to the neck bone
Neck bone connected to the head bone
Hear the word of the Lord.

Even though we have extracted and mapped many of the narrative and production elements, there’s still something missing before filming starts: where is filming going to happen? Just as Narrative Props, Wardrobe, and Characters are depicted by production elements, Narrative Locations have to be depicted as well. The Ontology defines Production Location for this.

Production Location: A real place that is used to depict the Narrative Location or used for creating the Creative Work.

We’ll use two Production Locations. Production Scene 2A uses a stage for filming Sven to be overlaid on the VFX/CG rendering of the satellite, and Production Scene 2B is a different stage with a built set of the inside of Sven’s spaceship. (Don’t worry – the jungle scenes will be on location in Hawaii…)

This shows how Narrative Locations are depicted by Production Locations. The depiction may need just the Production Location, but it can also require sets and set dressing at the Production Location.

The Production Locations are connected to Production Scenes. Adding them to the last diagram from Part 2 of this blog series lets us see all the production elements needed for the Production Scenes. Note – you need lots of other things too (cameras, lights, and so on), some of which are covered in OMC Part 8: Infrastructure [1].
The full diagram below brings together all of the pieces we have talked about so far. There are some extra relationships added to show how it's done. It is full of information and can be hard to read, so click the image to make it larger. Most or all of this information currently exists in various forms – departmental spreadsheets, script supervisors' notes, the first AD's physical or digital notebook, the producer's head, channels in collaboration tools, and innumerable Post-it notes – but it is currently difficult or nearly impossible to bring it all together so that individuals, work teams, and organizations can use it. Future production automation systems can deal with this level of complexity and extract the information that individual participants need, whether to examine choices and consequences or to ensure that the right information goes to the right place to support a particular task.

A production team can do lots of things with the information behind this representation. Many of these are done manually today but can be automated because OMC is well defined and machine-readable. For example, now that the data is standardized, all production applications can generate, share, and edit it, enabling users to:

  • Avoid spreadsheet workarounds for recurring elements (e.g., a Character that appears in more than one scene). In a spreadsheet, you must either add multiple rows, risking copy-and-paste mistakes and making the sheet longer; add multiple columns (a single row per character with a column for each scene), which requires rejigging things when someone appears in a new scene; or keep a single row per character listing all its Narrative Scenes with some kind of separator, which requires agreeing on the separator and editing lists when something changes. All of this is do-able in a small production but adds risk, overhead, and chances of discrepancy in a large one.
  • Find production scenes that have the same Production Location and arrange shoot days to take that into account, including which actors need to be present.
  • Based on the shoot day for a Production Scene, find the Production Props and Costumes needed for it, and schedule them (and any precursors) accordingly.
  • Automatically track changes in the script and propagate through the production pipeline.
  • Change a character name – or anything else such as actor, prop, or location – in one place and have it propagate through the rest of the system: Sven can reliably become Hjalmar, for example.
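The last point works because OMC entities reference each other by identifier rather than by name. A toy sketch, with invented IDs and field names:

```python
# Entities are stored once and referenced by ID, so a rename is a single
# update that is automatically reflected everywhere the ID is used.
characters = {"chr-001": {"name": "Sven"}}
narrative_scenes = {"scn-2": {"characters": ["chr-001"]}}

def scene_character_names(scene_id):
    return [characters[c]["name"] for c in narrative_scenes[scene_id]["characters"]]

print(scene_character_names("scn-2"))      # ['Sven']
characters["chr-001"]["name"] = "Hjalmar"  # one change, in one place
print(scene_character_names("scn-2"))      # ['Hjalmar']
```

Contrast this with name-keyed spreadsheets, where the same rename means finding and editing every cell that says "Sven".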

Time to Film

Dem bones, dem bones gonna walk around.
Dem bones, dem bones gonna walk around.
Dem bones, dem bones gonna dance around.
Now hear the word of the Lord.

The diagram above is, of course, incomplete, but OMC supports the missing items, including other Participants (the director, cinematographer, camera crew, and so on) and Assets (set dressing, for example) and, as mentioned above, can express infrastructure components as well.

Including that level of detail would make this even longer than it already is, so we’ll imagine that it’s all in place – so at last, it is time to film something. In order to do this, we have to introduce two new concepts. Both are very complex, but here we will stick to just basic information and things related to connectivity.

The first of these is: what to call the recording of the acted-out scene? 'Footage' was used when it was done on film measured in feet and inches. 'Recording' is more often used for just sound. OMC uses 'Capture', which covers audio, video, motion capture, and whatever else may show up one day:

Capture: The result of recording any event by any means

The second is: how is the Capture connected back to the Production Scene? In any production there's a lot of other information needed as well: the camera used, the take number (since almost nothing is right the first time), and so on. Traditionally, this was written on a slate or clapperboard which was recorded at the start of the Capture, making it easy to find and hard to lose. The term is still used in modern productions, even if there is no physical writing surface involved. OMC makes it hard to lose this information – the digital slate with its information is connected to the Capture with a relationship – and makes it easy for software to access and use. (Some current systems support extracting information from a slate in a video capture, either by a person or with image-processing software, and saving it elsewhere, but this is not ideal.)

Slate: Used to capture key identifying information about what is being recorded on any given setup and take.

The Slate has a great deal of information in it, for which see the section in OMC: Part 2 Context. For now, we'll use just the Slate Unique Identifier (UID), which is a semi-standard way of specifying important information in a single string; the Production Scene, which can be extracted from a standard Slate UID; and the Take, the counter of how many times this scene has been captured.

When a production scene is captured, all that is necessary is to create a Slate, connect it to the Production Scene, add the Take and Slate UID, and then record the action. This is repeated for each take, for each camera angle or camera unit (if it's being filmed by more than one camera), and for each type of capture (such as motion capture).
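The steps above can be sketched in Python; the record shape is illustrative, and the UID format here is invented for readability rather than being the semi-standard Slate UID format:

```python
# One Slate record per take, linked to the Production Scene by identifier.
def make_slate(production_scene, take, camera_unit="A"):
    return {
        "entityType": "Slate",
        "slateUID": f"{production_scene}-{camera_unit}-T{take:02d}",  # assumed shape
        "productionScene": production_scene,
        "take": take,
        "cameraUnit": camera_unit,
    }

# Two takes of Production Scene 2A, camera unit A:
slates = [make_slate("2A", take) for take in (1, 2)]
print(slates[1]["slateUID"])  # 2A-A-T02
```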

The actual captured media is just another kind of Asset, which should be linked with the Slate ID. It may not be convenient to use natively (e.g., if it is an OCF) and may require some sort of processing. The process of generating proxies and other video files derived from the OCF is a topic for another day, but it essentially deals with transforming Assets from one structural form into another, while still maintaining their relationships back to the production scene, and hence the production elements in them and the underlying narrative elements.

As long as the Slate ID continues to be associated with all the media captured on set, it can link all of the OCFs, proxies, and audio files even as they head off into different post-production processes. These processes can be done by different internal groups or external vendors with their own media storage systems, but as long as the Asset identifiers and connections to a Slate are retained, it doesn't matter how things are stored. In the MovieLabs 2030 Vision, Assets don't have to move, but in the short and intermediate term they often need to. Identifiers and the Slate remove many of the difficulties with this, for example by allowing the use of unstructured object storage rather than hierarchical directories, which are often application-specific. (See our blog on using an asset resolver for more information.)
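A toy sketch of that idea, with invented identifiers and locations: an asset resolver maps stable Asset IDs to whatever storage currently holds the media, while the Slate link keeps related captures together regardless of where they live.

```python
# Identifier -> current storage location (can change without breaking links).
resolver = {
    "ast-ocf-0042": "s3://vendor-a/raw/0042.braw",          # hypothetical
    "ast-prx-0042": "https://studio.example/proxies/0042.mp4",
}

# Asset metadata keeps the connection back to the Slate (and derivations).
asset_index = {
    "ast-ocf-0042": {"slate": "slt-2B-T01"},
    "ast-prx-0042": {"slate": "slt-2B-T01", "derivedFrom": "ast-ocf-0042"},
}

def locate(asset_id):
    return resolver[asset_id]

def assets_for_slate(slate_id):
    return sorted(a for a, m in asset_index.items() if m["slate"] == slate_id)

print(assets_for_slate("slt-2B-T01"))  # ['ast-ocf-0042', 'ast-prx-0042']
print(locate("ast-prx-0042"))
```

Moving the OCF to a different vendor's storage is then one `resolver` update; nothing that references `ast-ocf-0042` has to change.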

This diagram shows two takes and their Captures for Production Scene 2A. Each take (represented by the Slate) generates three captures: an audio file, a video file, and a point cloud.

This one shows a single take for production scene 2B with a more detailed view of the resulting Assets: an Audio file, an Asset Group representing camera files (OCF), and a proxy derived from the OCF.
Now we have some actual video that we can use for VFX, editing, and all of the other magic that happens between filming and the finished creative work. The end result of this is an ordered collection of media that represents whatever it is that is wanted at a particular stage of the process. This is typically called a Sequence.

Sequence: An ordered collection of media used to organize units of work.

OMC calls the media used to create this sequence ‘shots.’ ‘Shot’ is used to mean many different things in the production process; for a discussion of some of these, see the section on Shot in OMC: Part 2 Context. The definition of Shot used here is:

Shot: A discrete unit of visual narrative with a specified beginning and end.

A sequence is just a combination of Shots presented in a particular order with specific timing, and in live action film-making a Shot is most often a portion of a Capture – ‘a portion’ because the creative team may decide to use only some of a particular capture. Storyboards can also be used as Shots, for instance, as can other Sequences. This means that a Shot has a reference to its source material.

Finally, a Sequence has a set of directions for turning those Shots into a Sequence. There are several formats for this, such as EDL and AAF. OMC abstracts these into a general Sequence Chronology Descriptor (SCD) which has basic timing information about the portions of shots and where in the Sequence they appear. Exact details of how the Sequence is constructed are application-specific, using a format such as an EDL or OpenTimelineIO. An SCD is an OMC Asset, and the application-specific representation is used as the SCD’s functional characteristics.

The SCD allows some visibility into Sequences for applications that may not understand a particular detailed format. It is useful for general planning and tracking, and is another example of OMC making connections that in current productions are manual and easily lost, such as knowing what sequences have to be re-done if a capture is re-shot or a prop is changed.
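A minimal sketch of an SCD as described above; the field names and frame-based timing are assumptions for illustration, not the OMC schema:

```python
# Basic timing information: which portion of each Shot appears, in order.
scd = [
    {"shot": "shot-101", "sourceIn": 24, "duration": 96},  # values in frames
    {"shot": "shot-102", "sourceIn": 0,  "duration": 48},
    {"shot": "shot-103", "sourceIn": 12, "duration": 72},
]

def sequence_timeline(scd):
    """Compute where each shot lands in the Sequence (start, end)."""
    timeline, position = [], 0
    for entry in scd:
        timeline.append((entry["shot"], position, position + entry["duration"]))
        position += entry["duration"]
    return timeline

print(sequence_timeline(scd))
# [('shot-101', 0, 96), ('shot-102', 96, 144), ('shot-103', 144, 216)]
```

Even this coarse view is enough for planning and tracking, e.g., telling a scheduler which Shots a Sequence depends on, while the exact edit lives in an EDL or OpenTimelineIO file.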

OMC: Part 2 Context has details on how portions of Shots are specified for adding to a Sequence. This diagram shows the end result at a relatively coarse level of granularity. The Sequence uses the SCD to combine three captured Assets (or portions of those Assets) into a finished representation of Production Scene 2b.

Capture, Shot and Sequence have a lot of details not mentioned here, since we have been emphasizing the connectivity of things as opposed to the details of the various elements. For example:
  • A Sequence has to be viewable. Often, this means playing it back in an editing tool, but for review and approval, for example, a playable video is needed. In this case, the video is an Asset, connected to the Sequence with a “derived from” relationship.
  • A Sequence (or an Asset derived from it) can be used as part of another Sequence.
  • In the example above, if a Shot isn’t ready, it can be replaced by part of a Storyboard.

This last diagram shows all of the things we have talked about, in all 3 blogs, in one complete view of the data and relationships. From this, you can see the ripple effect if, for example, the design of the communicator prop changes. This starts with remaking the production prop, carries on through re-filming or re-rendering the production scenes where the communicator is used, and on to the finished sequences for the narrative scene. This visibility into how everything is connected can help reduce unexpected surprises late in the production process.
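With explicit relationships, that ripple effect can be computed rather than discovered by surprise. A toy sketch (IDs and the dependency graph are invented) that walks "built from" relationships to find everything downstream of a changed element:

```python
# child -> the elements it is built from / depicts / was captured from.
depends_on = {
    "production-prop-communicator": ["narrative-prop-communicator"],
    "production-scene-2A": ["production-prop-communicator"],
    "capture-2A-T01": ["production-scene-2A"],
    "sequence-scene-2": ["capture-2A-T01"],
}

def affected_by(changed):
    """Breadth-first walk: everything transitively built from `changed`."""
    hit, frontier = set(), {changed}
    while frontier:
        nxt = {child for child, parents in depends_on.items()
               if frontier & set(parents)} - hit
        hit |= nxt
        frontier = nxt
    return sorted(hit)

print(affected_by("narrative-prop-communicator"))
# ['capture-2A-T01', 'production-prop-communicator',
#  'production-scene-2A', 'sequence-scene-2']
```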

Conclusion

This blog has shown how to use the Ontology for Media Creation to move from production elements to some filmed content, and concludes this series of blogs on using the OMC in the context of a real production.

Thinking beyond the relatively simple examples in this blog series, which use just a couple of scenes and characters, a major production is not just a logistical and creative challenge but also a massive data wrangling operation.  And that data is often the cause of complexity and confusion – at MovieLabs we believe we can help simplify that problem dramatically to allow the creative team to spend their precious resources on being creative.

We believe that there are four main benefits from using OMC in this way:

First, using common, standard terms and data models reduces miscommunication, whether between people or between software-based systems. We explored this in the first blog in this series, and the lessons apply to all the others as well.

Second, being explicit about the connections between elements of the production makes it easier to understand dependencies and the consequence of changes, both of which have an effect on scheduling and budget. We dove into this kind of model in Part 2 and then used it heavily in Part 3, which also demonstrates some concrete applications of the model.

Third, OMC enables a new generation of software and applications. OMC is primarily a way of clarifying communication, and clear machine to machine communication is essential in the distributed and cloud-based world. These new applications we’re expecting will support the broader 2030 Vision and can cover everything from script breakdown and scheduling through to on-set activities, VFX, the editorial process, and archives.

Finally, having consistent data is hugely beneficial for emerging technologies such as machine learning and complex data visualization and we hope therefore the OMC will unlock a wave of innovative new software in our industry to accelerate productions and improve the quality of life for all involved.

These blogs are not theoretical – we have been using the OMC in our own proof-of-concept work where we model real production scenarios and this data connectivity is a vital part in delivering a software defined workflow (for more on Software Defined Workflows, watch this video) where we are exploring efficiency and automation in the production process.

The Ontology for Media Creation is an expanding set of Connected Ontologies – we will continue to add extra definitions, scope and concepts as we broaden the breadth and depth of what it covers, especially as it becomes more operationally deployed. For example, we are currently working on OMC support for versions and variants as well as expanding into new areas of the workflow such as  computer graphics assets.  In practical terms, the Ontology is available as RDF and JSON, and software developers are working with both. Please let us know if you’d like to try it out in an implementation.

If you found this blog series useful then let us know, and if you’re interested in additional blogs or how-to-guides let us know a specific use case and we can address it (email: office@movielabs.com).

There’s also a wealth of useful information at mc.movielabs.com and movielabs.com/production-technology/ontology-for-media-creation/.

[1] We are working on expanding the Infrastructure portions of OMC.

The post From Script to Data – Part 3 appeared first on MovieLabs.

]]>
Can I Trust You? https://movielabs.com/can-i-trust-you/?utm_source=rss&utm_medium=rss&utm_campaign=can-i-trust-you Tue, 01 Nov 2022 17:05:14 +0000 https://movielabs.com/?p=11634 Building Trust in Secure Systems

The post Can I Trust You? appeared first on MovieLabs.

]]>

Trust

CSAP (the Common Security Architecture for Production) is a Zero Trust architecture, but to understand zero-trust, we must first have a common understanding of what "trust" means. OK, so if we take the phrase "zero-trust" and stop there, we don't need to understand what trust means because we don't even have it – but that approach doesn't get us anywhere. "Zero-trust security" means not trusting anything until it has been verified as something trustworthy, and that seems like a better place to start.

Mayer, Davis, and Schoorman (1995) define trust as “the willingness of a party to be vulnerable to the actions of another party based on the expectation that the other party will perform a particular action important to the trustor, irrespective of the ability to monitor or control that other party.” This is an excellent definition for our purposes because it hints at the consequences of trusting something that is not trustworthy.

In a recent debate on a security forum, one contributor claimed that zero-trust is a paradox, the argument being that you can't know with absolute certainty that you can trust something. That's true, but then again, quantum mechanics says you can't know with absolute certainty what state matter is in, and we don't need to let either get in our way, unless we are planning to feed Schrödinger's cat.

 Building Trust Relationships

To trust something, we need a trust relationship. There are two factors in creating a trust relationship.

  1. Determining whether something can be trusted
  2. Determining whether something claiming to be a trusted entity is indeed that entity and not an impostor

The first of these is a decision – though decision may not be the right word, because ideally it should be reassessed continuously – based on factors that vary from one person to another, from one organization to another, and from one situation to another, and it all comes down to risk assessment. As in Mayer et al., the definition of trust is in the eye of the beholder.

Whether formally or unconsciously, risk assessment happens all the time. A threshold is set, factors are evaluated, and a decision is made as to whether the result is above or below the threshold. That doesn't mean it has to be done with a spreadsheet — we do it in the first second when we meet someone for the first time, and whenever we get in our cars we subconsciously make a risk assessment like "it was fine last time I drove it."

Determining whether an entity can be trusted is outside of the scope of CSAP, but the determination must be made. In cybersecurity, we need to be more formal. We decide to trust a server because of its endpoint security and the knowledge that all CVEs have been patched. We do that regardless of the security model.

The second of the two factors is the fundamental role of identity management. It is the thing that makes zero-trust work, and zero-trust means eschewing implicit trust. For example, implicit trust means trusting a device because of the network port or the VPN it is connected through. Explicit trust means not trusting a trusted user’s device unless it has been authenticated as the trusted device it purports to be.

Since trust is central to any zero-trust architecture, including CSAP, robust identity management is a prerequisite.

Trust and Authorization

If I say I trust you, I probably don’t mean I trust you to do everything. I might trust a cardiac surgeon to perform heart surgery (and would want to before they picked up a scalpel in my vicinity), but that doesn’t mean I’m going to trust them to do brain surgery on me. Trust has boundaries.

trust boundary

A trust boundary

In CSAP, trust boundaries are set by authorization. I verify you are the cardiac surgeon you claim to be (authentication) and I’m going to let you do my heart surgery (authorization). Brain surgery falls under deny by default.
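The surgeon example can be sketched as code; everything here (names, credential IDs, the authorization set) is invented for illustration:

```python
# Authentication establishes who the entity is; authorization bounds
# what that trusted entity may do. Anything outside the boundary is denied.
verified_identities = {"cred-77": "dr-hart"}     # authentication result
authorizations = {("dr-hart", "heart-surgery")}  # the trust boundary

def may_perform(credential, action):
    entity = verified_identities.get(credential)  # impostors resolve to None
    if entity is None:
        return False  # unauthenticated -> denied
    return (entity, action) in authorizations     # deny by default

print(may_perform("cred-77", "heart-surgery"))  # True
print(may_perform("cred-77", "brain-surgery"))  # False: outside the boundary
print(may_perform("cred-99", "heart-surgery"))  # False: unauthenticated
```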

So, we now have three factors:

  1. Determining whether something can be trusted
  2. Determining whether something claiming to be a trusted entity is indeed that entity and not an impostor
  3. Determining whether the trusted entity is permitted to do what it’s just asked to do.

In CSAP terms, the first factor is part of the determination of whether something is to be included in workflows. Can a user contribute something to the workflow and have references been checked before they were hired?

When it comes to systems, it is the role of the same security tools we use today. For example, endpoint security is installed on a server, have all relevant CVEs been patched, etc.

The second factor is the role of the CSAP authentication service which uses some combination of identity management and certificate authorities.

The third factor is the role of the CSAP authorization service. Here, the question that must be answered is: how does the authorization service know what to authorize? After all, CSAP is deny-by-default and nothing useful is going to get done unless something is authorized. CSAP supports a wide range of options. At its simplest, authorization is created manually, isn’t very granular and lasts for a long time.

But CSAP is workflow-driven security, and its authorization rules can be created in response to requests from workflow management. After all, what knows better what should be authorized in a part of the workflow than the entity determining what should be done in that part of the workflow? CSAP's authorization rules can be as granular and specific as required, and as the implementation of the workflow management and CSAP components supports. Trust me.
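As a hypothetical sketch of that idea (the functions, names, and rule shape are invented, not a CSAP API): when workflow management assigns a task, it requests a granular, time-limited authorization rule; everything else remains denied.

```python
from datetime import datetime, timedelta, timezone

rules = []  # the authorization service's current rule set

def grant_for_task(user, resource, action, hours):
    """Called on behalf of workflow management when a task is assigned."""
    rules.append({"subject": user, "resource": resource, "action": action,
                  "expires": datetime.now(timezone.utc) + timedelta(hours=hours)})

def is_authorized(user, resource, action):
    now = datetime.now(timezone.utc)
    return any(r["subject"] == user and r["resource"] == resource
               and r["action"] == action and now < r["expires"] for r in rules)

# Workflow management assigns a grading task on scene 2A's camera files:
grant_for_task("carol", "scene-2A/ocf", "read", hours=8)
print(is_authorized("carol", "scene-2A/ocf", "read"))    # True
print(is_authorized("carol", "scene-2A/ocf", "delete"))  # False: never granted
```

The rule is scoped to one user, one resource, and one action, and it expires with the task, which is exactly the granularity a manual, long-lived grant cannot easily provide.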

The post Can I Trust You? appeared first on MovieLabs.

]]>
From Script to Data – Part 2 https://movielabs.com/from-script-to-data-part-2/?utm_source=rss&utm_medium=rss&utm_campaign=from-script-to-data-part-2 Tue, 25 Oct 2022 21:26:12 +0000 https://movielabs.com/?p=11599 Using the Ontology for Media Creation to improve communication and automation in the production process

The post From Script to Data – Part 2 appeared first on MovieLabs.

]]>

Introduction to Part 2

This is the second part of our blog series “From Script to Data”, which shows how to use the Ontology for Media Creation to improve communication and automation in the production process. Part 1 went from the script to a set of narrative elements, and here we will use OMC to make the transition from narrative elements to production elements. Part 3 will take those production elements through filming and some aspects of post-production.

Production Elements

Dem bones Dem bones Dem dry bones
Dem bones Dem bones Dem dry bones
Dem bones Dem bones Dem dry bones,
Hear the word of the Lord

We now have a good abstract understanding of the script and its contents. What we don’t have is any idea of what the onscreen presentation looks like, who’s going to play the characters, and so on.

In this section, we bring in two new concepts.

Asset: A physical or digital object or collection of objects specific to the creation of a Creative Work.

Participant: The entities (people, organizations, and services) that are responsible for the production of the Creative Work.

Assets and Participants can be very complex in their details, but they both contain two broad types of information:

  • Functional Characteristics say what an asset is used for or what a participant does: is an Asset a prop or a costume, for example, and is a Participant a director or a sound engineer?
  • Structural Characteristics say what an asset or participant is: is the Asset a physical thing, a CG model, or a piece of video, and is the Participant a person, an organization, or a software service?

You can find more about how this works and why we made this choice in Part 3: Assets and Part 4: Participants.
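To see how the functional/structural split might look as data, here's a hypothetical sketch in Python. The field names and values are ours, chosen for illustration – they are not the normative OMC schema, which is described in Parts 3 and 4:

```python
from dataclasses import dataclass


@dataclass
class Asset:
    """A physical or digital object specific to creating a Creative Work."""
    name: str
    functional_type: str   # what it is used for: "prop", "costume", "storyboard"
    structural_type: str   # what it is: "physical", "cgModel", "video"


@dataclass
class Participant:
    """An entity (person, organization, or service) responsible for production."""
    name: str
    functional_type: str   # what it does: "director", "actor", "soundEngineer"
    structural_type: str   # what it is: "person", "organization", "service"


# The same kind of record works for very different things:
tool = Asset("Sven's repair tool", functional_type="prop",
             structural_type="physical")
director = Participant("A. Jones", functional_type="director",
                       structural_type="person")
```

Keeping the two kinds of characteristics separate means a costume and its CG double can share one functional description while differing structurally, and a human sound engineer can later be swapped for a service without disturbing the functional side of the record.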

The ontology uses Assets and Participants to create production elements – the stuff that is needed to turn the narrative into a finished film or TV show. In this section, we'll look at a few different production elements. Here we abandon the spreadsheet view of the world, because connections between things become much more prevalent and are too hard to describe in non-graphical ways.

Many productions use storyboards to give a general idea of the flow of a scene. Storyboards are a particular kind of Asset, connected to a scene. Each frame of a storyboard can be thought of as an Asset as well – you might want to send single frames to different departments – so the storyboard itself is a composite asset. We won’t go into the details of Asset groups – for this exercise, the fact that it’s a storyboard is more important than the fact that it is an Asset.

example diagram

Film and TV are visual media, and the narrative elements eventually have to be turned into either physical or digital assets that are used in the production process. These don't just appear out of nowhere – there is an iterative process that goes from a narrative element to something that shows how it should be represented when it becomes a production element. The ontology calls the result of this process Concept Art, a kind of Asset that is a creative representation of something from the narrative. It exists for many elements of the production, and here we'll show it for props and wardrobe.

Sometimes there are different ideas about how something should be represented – does Sven’s repair tool look like a socket wrench, a soldering iron, or a multimeter? – and it is up to the production team to decide which to use.

Concept Art: Images that illustrate ideas for potential depictions of elements of the creative intent.

example diagram

There is another sort of artwork not covered here – artwork or other material used for inspiration during the production, such as images of high- and low-tech tools to look at when thinking about the concept art for Sven's repair tool. In the Ontology, these work much the same as concept art and can be connected to individual narrative elements, to entire scenes, or even to the whole production. This kind of Asset is called Creative Reference Material.

Creative Reference Material: Images or other material used to inform the creation of a production element, to help convey a tone or look, etc.

Now we need Actors to portray the characters. Actors are a kind of Participant, as are Directors, Cinematographers, and so on. What's special about Actors is that they need to be connected to the Characters they portray. Some Characters can be portrayed by more than one Actor (e.g., voice and motion capture, or actor and stunt double), and some Actors might portray more than one Character. We'll add one Actor for Kiera, who will be voice-only, and two for Sven – the main actor and a stunt double to use in a later scene.

Actors and Characters are connected together by a Portrayal, and the Portrayal is connected to a Production Scene. Portrayals are connected to lots of other pieces too (costumes, props, and so on), but we won't cover that here – some of it is shown in the diagram in the next section, and you can look at OMC: Part 7: Relationships to see the kinds of things that can be represented.

example diagram
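As a rough sketch of how Portrayals tie these pieces together – illustrative Python with invented names and identifiers, not the OMC schema itself:

```python
from dataclasses import dataclass


@dataclass
class Portrayal:
    """Connects an Actor (a Participant) to a Character, for a Production Scene."""
    actor: str
    character: str
    production_scene: str
    note: str = ""


portrayals = [
    Portrayal("Voice Actor K", "Kiera", "productionScene:2a", note="voice only"),
    Portrayal("Lead Actor S", "Sven", "productionScene:2a"),
    Portrayal("Stunt Double S", "Sven", "productionScene:2b", note="stunt work"),
]

# One Character, several Actors -- and the Production Scene tells us which applies:
sven_actors = sorted(p.actor for p in portrayals if p.character == "Sven")
print(sven_actors)  # ['Lead Actor S', 'Stunt Double S']
```

Because the Portrayal is its own record rather than a direct actor-to-character link, the many-to-many cases (two actors for Sven, one voice actor covering several characters) fall out naturally, and each Portrayal can carry its own scene context.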
Shooting the film requires real props and real costumes. These are pretty simple: props and costumes are Assets, and are connected to their respective Narrative Props and Wardrobe. The person doing the shooting schedule can discover which props and costumes have to be available for a particular scene using the relationships that have been built up, ideally with machine assistance from the graph-and-data application described above.
example diagram
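That discovery step – "which built props and costumes must be on set for this scene?" – is just a walk over the relationship graph. A minimal sketch, with identifiers and relation names we've invented for illustration (not an OMC API):

```python
# Edges: (subject, relation, object) -- a tiny relationship graph.
edges = [
    ("prop:tool-build-1", "realizes", "narrativeProp:repair-tool"),
    ("costume:suit-build-1", "realizes", "narrativeWardrobe:space-suit"),
    ("narrativeProp:repair-tool", "usedIn", "productionScene:2a"),
    ("narrativeWardrobe:space-suit", "usedIn", "productionScene:2a"),
]


def physical_items_for(scene):
    """Find built props/costumes whose narrative counterpart is used in `scene`."""
    narrative = {s for (s, rel, o) in edges if rel == "usedIn" and o == scene}
    return sorted(s for (s, rel, o) in edges
                  if rel == "realizes" and o in narrative)


print(physical_items_for("productionScene:2a"))
# ['costume:suit-build-1', 'prop:tool-build-1']
```

The scheduler never has to know about props directly – it follows scene-to-narrative-element-to-built-item links, so when the script changes, the answer changes with it.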

The next piece to think about is actually filming (or animating, in the case of an animated work) the narrative scenes. Looking at Narrative Scene 2, it has two very different parts: the first, in which Sven is happily repairing the satellite, and the second, where Sven flees the Trilobot and returns to his ship. These divisions are called production scenes and are created and used as required by artistic choices (different color management, different locations for filming) and technical requirements (requiring a green screen or equivalent vs. filming as-is on a beach). The HSM production team decided to break Narrative Scene 2 into two production scenes, dividing them just after Sven says "What the…?" It is possible to do this in other ways, of course, driven by creative or technical requirements. It is also possible for automated tools to make initial suggestions about where to make these divisions, based on explicit notations in the script or inferred break points.

A Production Scene is a central organizing element; you might think of it as a little like a call sheet. Lots of things are likely to be related to any given Production Scene: the physical location, the crew and actors required, the date it is being filmed, all the props, wardrobe, infrastructure, etc. that will be needed, as well as the Assets created during the filming of that scene.

The divisions into production scenes can be changed during filming, and so it is very important to be clear about what production scene is being used as the basis for a particular activity (filming, recording, rendering, etc.) See OMC: Part 2: Media Creation Context for lots and lots of details, most of which we will gloss over here.

The important thing for this overview is that a Production Scene has a Scene Descriptor that uniquely distinguishes it from all other production scenes past, present, and future. Production elements used in the Production Scene are tied through a chain of relationships back to the narrative elements. (That connection isn't shown in this diagram, but the prop and costume in production scene 2a are the ones shown in the diagram above. In part 3 of the blog we'll put all these pieces together into a complete graph.)

example diagram
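A hypothetical way to mint a Scene Descriptor that is unique across all production scenes, while keeping the chain back to the narrative scene – illustrative Python, not the OMC's actual identifier scheme:

```python
import uuid


def make_production_scene(narrative_scene_id, label):
    """Build a production-scene record with a globally unique descriptor.

    The descriptor distinguishes this scene from every other production
    scene, past or future, even if the human-readable label ("2a") is
    reused or the scene divisions are later changed.
    """
    return {
        "sceneDescriptor": f"{label}-{uuid.uuid4()}",  # e.g. "2a-<uuid>"
        "label": label,                                # human-readable: "2a"
        "derivedFrom": narrative_scene_id,             # chain back to the narrative
    }


scene_2a = make_production_scene("narrativeScene:2", "2a")
scene_2b = make_production_scene("narrativeScene:2", "2b")
assert scene_2a["sceneDescriptor"] != scene_2b["sceneDescriptor"]
```

Because activities (filming, recording, rendering) reference the descriptor rather than the label, a record made against "2a" stays unambiguous even after the production re-divides the scene.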

Conclusion to Part 2

This blog has shown how to use OMC to move from narrative elements to production elements. Once we have actors portraying characters and a set of real (or computer-rendered) props, the production team can refine budgets and work on scheduling. It also means that if the script changes, those changes can propagate clearly and quickly to the people in the production team in charge of casting, call sheets, and prop fabrication.

In part 3, we’ll follow Sven and his communicator into filming and beyond.

If you found this blog series useful, let us know – and if you're interested in additional blogs or how-to guides, send us a specific use case and we can address it (email: office@movielabs.com). There's also a wealth of useful information on the MovieLabs website at www.movielabs.com and on the Ontology for Media Creation website.
