OWL 360 Degree Camera and Component

A 180 and 360 Degree Stereoscopic Camera and Component for Unreal Engine with precise Viewport color matching, output to a Render Target, Alpha Channel support, Movie Render Queue integration and various other features.

Overview

The OWL 360 Degree Camera and Capture Component let you capture 180 and 360 Degree live outputs from Unreal with identical colors to your Viewport and a wide range of additional useful rendering and compositing features.

You can use it for three main purposes:

  1. Live-streaming 180/ 360 content from Unreal Engine over the network or to the internet via a wide range of media protocols.

  2. Creating Sequences which can be rendered in Movie Render Queue via our 180/ 360 Rendering pass.

  3. Previsualising your renders live (to check proportions, pacing etc.) before you send them to Movie Render Queue, via a VR Headset or a Digital Twin of your venue, which can even be done for a full show.

360 Degree Camera Actor

The OWL 360 Degree Camera is a standalone Actor that you use in your Unreal level like any other camera.

To use the OWL 360 Degree Camera:

  1. Go to Place Actors and in the Off World Live section find the OWL 360 Camera Actor and drag and drop it into your level

  2. You can now use the Camera Actor in Unreal like you would any standard camera:

360 Degree Capture Component

The OWL 360 Degree Capture Component adds the OWL 180/ 360 Degree capture capabilities to whatever Camera or Actor you attach it to. This is useful if you have an existing sequence which you want to add a 360 Camera output to.

To use the OWL Capture Component:

  1. Select any Unreal Camera, Cinecam or Actor in your World Outliner and at the top of its Details panel click Add, then type and select OWL 360 Capture:

  2. This will add an OWL Capture Component to your Camera/ Actor which you will see in your Components list. Make sure that this OWL Component is attached to your Cinecam or Actor by dragging it and seeing the green tick:

Use with Movie Render Queue

  • The OWL 360 Render pass for Movie Render Queue will automatically sync with whatever 360 Camera is selected in the Camera Cuts of your Unreal Sequence.

  • You can use either the 360 Camera Actor or Capture Component.

  • Whatever settings you select below will automatically be picked up in Movie Render Queue, apart from the Resolution setting, which you set independently in Movie Render Queue.

Previsualisation

  • The OWL 360 Camera/ Component is excellent for pre-visualisation because it color-matches your Movie Render Queue output, while also rendering in real-time.

  • So instead of waiting hours or days to see what your render will look like, you can preview it in real-time on a screen or headset and get your shots exactly as you want them before initiating your final-pixel high-resolution output.

  • This is particularly useful for checking audience-facing details like the proximity of objects to the camera, the scale of your scene relative to the viewer and the pace of the camera through your levels.

Set-Up

  • You can live-stream the Render Target of the camera through our Spout/ NDI Senders to either a headset or venue previsualisation software.

  • If you are outputting VR content then you can use the VR.NDIUntethered app in Oculus to receive a live NDI feed direct to the headset (with stereoscopic support).

    • With DLSS enabled, it should be possible to have a 4K-resolution live output at 20-30 FPS, which is very effective for previs.

  • If you are making content for a Dome/ immersive venue, you can use the DomeViewer software in either Oculus or Windows to preview your content in a digital twin of your venue (either birds-eye or as an audience member):

  • You can also build a digital twin of your venue in Unreal and use the Media Input Wizard to set up an Unreal Media Plate with the correct Mesh of your venue’s screen.

    • You can then stream the 360 Camera/ Component output into your digital twin via the OWL Spout/ NDI Senders and it will play with perfect color accuracy in your virtual venue.

Live-Streaming

You can live-stream the Render Target of the OWL 360 Camera/ Component using both the OWL Media Outputs and Unreal’s Media Capture framework:

OWL Media Outputs

  • The easiest way to set this up is to use the OWL Media Output Wizard, which will let you set up an OWL 360 Camera along with any media outputs you need.

  • OWL has outputs for Spout, Virtual Webcam, RTMP, SRT, RTSP, HTTP and Save to File which aren’t available in Unreal.

  • Alternatively, you can select the Render Target from the 360 Camera/ Component in any of the OWL Media Output Actors to instantly live-stream, as explained in their individual documentation.

  • This works in Editor, Runtime and in Packaged games.

Unreal Media Capture

  • Unreal has a Media Capture Framework which can send Render Targets.

  • It has a large number of configurations and is excellent for studio outputs such as capture cards and SMPTE 2110 (Rivermax), which you will need to configure by adding their respective plugins and media captures.

  • You can also use the OWL Spout and NDI Media Output integrations via this method, to take advantage of the studio workflow configuration options.

  • It is for Editor/ Runtime usage only so won’t work in packaged games.

  • To set up:

  1. Go to Edit> Plugins and ensure that the Media Framework Utilities Plugin is enabled:

  2. In Windows> Virtual Production select Media Capture:

  3. In the Media Capture window, select the Render Target you want to send from the list and select the Media Output you want to send it via and then press Capture to begin sending:

Blueprint Set-Up

  • The OWL 360 Camera Actor is made up of a Scene Component, a Camera Component and a Capture Component:

  • The details for all of these components can be accessed from the OWL360CameraActor, but to change some of these settings via Blueprints you will need to know which settings are a part of which component.

  • The Transform settings for the Camera are controlled just as any other Actor in Unreal and can be keyframed or controlled dynamically via Blueprints.

  • The Off World Live Capture settings are unique to the Capture Component and can be controlled via Blueprints by accessing the CaptureComponent as a child of the 360CameraActor. Below is an example of setting some of these dynamically via a Custom Event in the Level Blueprint:

Performance

Exposure

  • You may need to adjust the Exposure of your camera when you first add it to your level to match the colors of the Viewport.

  • By default, the 360 Camera/ Component is set to Manual Exposure with an Exposure Compensation of 10:

  • This may not match the Auto-Exposure of Unreal, so you should adjust the value in the Details panel until your colors match.

  • For more information see the Exposure section in Trouble-Shooting below.

Seams

  • The OWL 360 Camera uses a cubemap of cameras to generate its raw pixels.

  • Each of these cameras runs Unreal’s post-process effects separately, so there can be issues with screen-space effects that rely on pixels at the edge of each camera’s view (since the edges of the separate cameras will differ).

  • Such issues are usually easy to manage using the Face Blend Percent option, which overscans each of the cameras and blends the edges imperceptibly to remove any clashing pixels (see, for example, the vertical line below disappear):

  • There can also be issues with particle-based effects like Niagara, but these are easily managed by changing the settings of the particle system.

  • You can find more information on how to manage these issues in the Trouble-Shooting section below.

Frame Rate

Rendering to the Render Target is performance-intensive because you are rendering an additional set of pixels on top of your Viewport, and your output is likely much higher resolution than your Viewport.

To increase your live frame rate from the 360 Camera/ Component the main options are:

  1. If you are just using the camera for previz, then you can set a very low resolution, which will be enough to see the Editor preview and see the effect of post-process changes etc.

  2. You can use the Editor and Runtime Viewport Rendering tickboxes in the OWL Editor drop-down to switch off the Viewport. (If you have Nanite in your scene, you cannot switch off the Editor Viewport or it will crash Unreal.)

    1. This is a toggle that you can also activate in Blueprints.

  3. Enable upsampling with TSR/ DLSS via the setting in the Details panel of the 360 Camera/ Component, following the guide below. This is particularly effective at higher resolutions.

Visual Quality

N.B. Check the Exposure section above if your 360 Camera is darker or brighter than your Viewport to easily align the two.

Bloom is set to 0 by default in the Details panel of the 360 Camera/ Component because it can cause seams but you can use the tips below to manage it.

  • The OWL 360 Camera uses the identical rendering pipeline as the Unreal Viewport and so should have complete color consistency with what you see and do in the Viewport.

  • This includes responding to Post Process Volumes but certain post-process effects can cause visual issues as mentioned in the Seams section above with tips in the Trouble-Shooting section below.

  • You can use OCIO to deliver a precise color pipeline if you need it for your production purposes.

  • In this respect the OWL 360 Camera is more color-accurate than Unreal’s Panoramic Capture Component, which uses a Scene Capture 2D Actor (not the same as the Viewport rendering pipeline) with various Scene View Extensions to add post-process effects.

  • If you use DLSS then you may see a reduction in visual quality, especially in very fast moving scenes because of the predictive nature of the upsampling algorithms.

Lumen vs. Path-Tracing

  • The OWL 360 Degree Camera supports both Lumen and Path-Tracing render pipelines.

  • Lumen is the default Global Illumination method for Unreal Engine and is highly recommended.

  • It renders significantly faster (>10x per frame) than Path-Tracing and can be combined with Upsampling to increase this even further.

  • Path-Tracing doesn’t offer any advantages over other CPU-based renderers like Arnold, V-Ray or Corona, but can be useful in very specific circumstances such as with Unreal’s Metahumans or to better understand the behaviour of light in your scene.

  • Lumen can cause seams to appear (differences in visuals at the edge of the six faces that generate the camera image) but these are easily dealt with following the tips below.

Features

Render Target Output

  • You can capture the output of the camera to a Render Target for previs, live-streaming or adding to a Material.

  • Either add an existing one or go to Create New Asset> Render Target to make a new one.

  • Adding a Render Target will render a separate output from your Unreal Viewport so you will see a decrease in your FPS.

Pause Rendering

  • Pause Rendering is used to stop the Render Target from capturing the camera.

  • When ticked, you will see the camera preview also show a pause sign:

Projection Type

  • We currently support the following Projection Types (each is optimised to only render the pixels needed):

Monoscopic

  • Cubemap

  • Equirectangular (you can customise the Vertical and Horizontal FOV as below)

  • DomeMaster (Fisheye)

  • Stereographic (Tiny Planet)

  • 180 Equirectangular (Equirectangular with a 180 degree Field-of-View)

Stereoscopic

  • 180 Equirectangular (Side-by-side)

  • 360 Equirectangular (Top-and-bottom)

Custom

  • In the Advanced section below, you can add an Output Mask and adjust the Vertical and Horizontal FOV to only render the specific pixels you need:

  • This is available for mono and stereoscopic formats:

Resolution

  • This sets the Resolution of the Render Target which is capturing the live output of the 360 Camera and the editor Preview:

  • For previz, it’s better to keep this value small because high resolutions consume a lot of VRAM, unless you want to stream to a headset for live-previewing a sequence.

  • This value isn’t used for your Movie Render Queue resolution which has separate settings.
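As a rough sense of why high resolutions are costly, the VRAM footprint of a render target scales with pixel count. Below is a sketch assuming an RGBA16F texture (8 bytes per pixel), a common HDR format; the format Unreal actually allocates may differ:

```python
def render_target_vram_mb(width: int, height: int, bytes_per_pixel: int = 8) -> float:
    """Rough VRAM cost of a render target in MB.

    Assumes RGBA16F (8 bytes/pixel); the real format may differ.
    """
    return width * height * bytes_per_pixel / (1024 * 1024)

# A 4096x2048 equirectangular target at RGBA16F:
print(f"{render_target_vram_mb(4096, 2048):.0f} MB")  # 64 MB
```

Doubling both dimensions quadruples the cost, which is why a previz-sized target is so much lighter than a final-quality one.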

Face Blend Percent

  • This setting creates an overlap between the different faces seamlessly blending the pixels:

  • You should use it to manage seams which can occur with screen space effects like Bloom, Particles, Volumetric fog etc.

  • The recommended setting is 2-5%.

  • You should also use this with Stereoscopic 360 to create a seamless render.
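The overscan idea can be illustrated with a little geometry: each cube face normally covers 90 degrees, and widening the face by the blend percentage gives extra angular coverage to blend across. This is a sketch of the concept, not the plugin's exact formula:

```python
import math

def overscanned_face_fov_deg(blend_percent: float) -> float:
    """Angular coverage of a cube face widened for edge blending.

    A cube face normally spans 90 degrees; extending its half-extent by
    `blend_percent` (e.g. 0.05 for 5%) yields the overlap available for
    blending with its neighbours. Illustrative geometry only.
    """
    half_extent = math.tan(math.radians(45.0)) * (1.0 + blend_percent)
    return 2.0 * math.degrees(math.atan(half_extent))

print(f"{overscanned_face_fov_deg(0.05):.2f} degrees")  # a little over 90 degrees
```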

Override Internal Face Dimensions

If you want to increase the detail and clarity of your render (via super-sampling), it’s better to use the r.ScreenPercentage console variable with TSR enabled in Movie Render Queue, because that includes temporal anti-aliasing.

  • This setting lets you increase the pixel density of the individual cube faces that make up your render so that you can render at a fixed resolution but get a level of detail that would otherwise only come with a higher resolution.

  • You can use this setting for previs of a static shot but for rendering we recommend the screen percentage approach above.

Post-Process Pipeline

  • The standard Unreal rendering pipeline is the Tonemapping option, which will give you colors that match the Unreal Viewport (and Movie Render Queue):

  • If you are using a lot of Bloom in your output, you can select the Seamless 360 Bloom option.

    • This utilises our seamless bloom algorithm but doesn’t include Tonemapping, so it changes the colors.

    • We recommend using this only if you really need a lot of Bloom.

Post-Process Update Location

  • This setting controls whether to follow the Post-Process Settings of the OWL 360 Camera or a separate Unreal Camera.

  • You will always see a Camera Component and an OWL360CaptureComponent as below:

    • If you are using the 360 Camera Actor, then we recommend you modify your Post-Process Settings in the OWL360Capture Component Details panel and select 360 Component in this drop down.

    • If you are adding the 360 Capture Component to an existing Unreal Cinecam or Camera, then we recommend you select Camera Component in this drop-down in order to inherit the Post-Process Settings from that Camera.

  • In either case you can use a Post Process Volume and it will affect the OWL 360 Camera or Capture Component as expected.

    • If you modify an option in the Post Process Settings of the Unreal Camera or OWL Camera Component Details panel, it will override the setting from the Post Process Volume.

Disable Bloom

  • This is a tickbox to completely disable bloom in your render, in the case that setting a Face Blend Percent doesn't stop seams.

  • Bloom is the most problematic post-process effect because it can have a very wide spread across your image.

  • If you need a light source with high Bloom, another option is to use Cube-Face Rotation (below) to keep it within a single cube face of the 360 Camera, which stops artefacts appearing when the light source crosses the edges of the different cube faces.

Path-Tracing

  • Path Tracing previews your scene in your Render Target as it would look in the path-tracer, which can be useful for render previz.

  • If you have a high Resolution, it will take some seconds for the final image to emerge.

  • You can use the path-tracer in combination with the Snapshot function to capture the output to a png:
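The "image emerging over some seconds" behaviour comes from progressive sample accumulation: a path-tracer keeps a running average of noisy samples, so noise falls as more samples arrive. A conceptual sketch of that refinement, not Unreal's implementation:

```python
import random

def accumulate(samples):
    """Progressive running average, the way a path-tracer refines a pixel:
    each new sample is blended into the accumulated result, so noise falls
    as more samples arrive. Conceptual sketch only."""
    avg = 0.0
    for i, s in enumerate(samples, start=1):
        avg += (s - avg) / i   # incremental mean
    return avg

random.seed(7)
true_value = 0.5
noisy = [true_value + random.uniform(-0.2, 0.2) for _ in range(10_000)]
print(round(accumulate(noisy), 3))  # converges close to 0.5
```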

Visualisation Type

  • If you are outputting Stencil Layers for color grading, you can use this to preview the different layers:

Upscaling and DLSS

NVIDIA Streamline/ Frame Gen doesn’t work with OWL tools and we recommend not including it in your active Plugins when using OWL.
We recommend including the DLSS plugins only if you want to use DLSS with OWL:

  • Enable Upscaling lets you increase performance by using AI upscaling, rendering only a percentage of the pixels in your output:

  • This is much more effective at higher resolutions, where the total number of pixels to be rendered is much greater.

  • Generally, at around 70% in 4K you will see a significant performance improvement without major artefacts.

  • By default, the AI upscaling method used is Unreal’s TSR (Temporal Super Resolution) but you can also use NVIDIA’s DLSS or other upscaling solutions, which will work automatically when you enable them in the Engine:

  • If you want to use DLSS you need to select these options after installing and activating the plugin in your Plugins section in Unreal. If DLSS is installed it will override TSR/ any other anti-aliasing setting in your Project Settings:

  • You may see a difference in performance between TSR and DLSS (including different visual artefacts).
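The arithmetic behind the performance gain is simple: at a 70% screen percentage in 4K, roughly half of the output pixels are actually rendered and the upscaler fills in the rest. An illustrative sketch (real upscalers may quantise the internal size differently):

```python
def internal_resolution(out_w: int, out_h: int, screen_percentage: float):
    """Pixels actually rendered before AI upscaling (TSR/DLSS) fills in
    the rest. Illustrative only; real upscalers may round differently."""
    scale = screen_percentage / 100.0
    return round(out_w * scale), round(out_h * scale)

w, h = internal_resolution(3840, 2160, 70)
saved = 1 - (w * h) / (3840 * 2160)
print(w, h, f"{saved:.0%} fewer pixels rendered")  # 2688 1512 51% fewer pixels rendered
```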

Face Control

  • This setting lets you control which pixels are rendered in your projections.

  • It will automatically update depending on the Projection Type you select above, but you can also manually adjust it if you need.

  • If you change the Horizontal or Vertical Field-Of-View settings in the Advanced section below, you can click Auto-Calculate to adjust how much of each face is rendered to match your settings.

  • Reset will turn all the faces back on again.

  • For example, in this Domemaster projection one face is off and four faces render only half their pixels, as this is all that is required to produce the Dome output.

  • You can use the Debug Face Colors in the Advanced section below to see which faces are being rendered and how:

Snapshot

  • Snapshot will instantly generate a png from the Render Target of what the camera is seeing at that moment in time.

  • You can configure the settings and metadata that go with the Snapshot in the Advanced section.

Advanced

  • In the Advanced section there are multiple features for changing the output of the Projection Type which are helpful if you are rendering to a customised output.

  • All these features require a PRO license:

Cube Face Rotation

  • This lets you rotate the cube faces from which the 180/ 360 projections are generated.

  • This rotation can be key-framed in Sequencer.

  • This can be beneficial for:

    • Eliminating seams (in the case of post-process effects like Bloom)

    • Keeping the most prominent feature in the correct viewing point of the audience or VR viewer.

    • Rendering driving plates by positioning the cube face edges at the horizon line.

Show Debug Face Colors

  • This lets you see the position of each of the cube faces that make up your render so you can use the cube face rotation, camera rotation or other settings to adjust them:

  • When you are using an Output Mask the debug face colors will show you the Horizontal and Vertical Field-Of-View settings that are optimised to your required pixels.

  • You can then use the Face Control section to only render the pixels you need.

Camera Rotation

  • Setting a Camera Rotation value lets you rotate the 360 output of your image while leaving the actual camera transform unchanged.

  • This can be helpful for creating keyframed animations, then editing your output at a later stage for a specific output format.

Horizontal and Vertical Field-Of-View

  • Changing the Horizontal or Vertical FOV will narrow the range of the projection.

  • This can be used for custom shapes, annular outputs etc.

  • For example, the Equirectangular 180 projection has the Horizontal Field-of-View set to 180 degrees instead of 360 degrees:
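The relationship between the FOV settings and projection coverage can be sketched as a pixel-to-angle mapping (illustrative math only, not the plugin's shader):

```python
def pixel_to_direction(u: float, v: float,
                       h_fov_deg: float = 360.0, v_fov_deg: float = 180.0):
    """Map a normalised pixel (u, v in [0, 1]) of an equirectangular image
    to a (yaw, pitch) pair in degrees. Narrowing the FOV narrows the
    angular range covered, as with the 180 Equirectangular projection."""
    yaw = (u - 0.5) * h_fov_deg     # 0 = straight ahead
    pitch = (0.5 - v) * v_fov_deg   # 0 = horizon
    return yaw, pitch

# Centre of a 180-degree projection looks straight ahead:
print(pixel_to_direction(0.5, 0.5, h_fov_deg=180.0))  # (0.0, 0.0)
# Right edge of a full 360 projection:
print(pixel_to_direction(1.0, 0.5))  # (180.0, 0.0)
```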

Output Mask

  • To add a fully customized crop to your render, you can use the Output Mask.

  • This requires a PNG texture in black and white, with black being the intended masked area, which you can drag into your Content Browser:

  • Then select the Image Texture from the Output Mask section, which will then apply the mask to your selected Projection Type.

  • You can adjust the Field-of-View settings to only select the pixels you need for your mask and then use Auto-Calculate in the Face Control section to render only those pixels:
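The mask behaviour can be modelled as a simple per-pixel multiply, with black (0.0) hiding the render and white (1.0) keeping it; a toy stand-in for the PNG texture described above:

```python
def apply_output_mask(pixels, mask):
    """Black mask pixels (0.0) hide the render; white (1.0) keeps it.
    `pixels` and `mask` are flat lists of grey values in [0, 1] -- a toy
    stand-in for the black-and-white PNG texture described above."""
    return [p * m for p, m in zip(pixels, mask)]

frame = [0.8, 0.6, 0.4, 0.2]
mask  = [1.0, 1.0, 0.0, 0.0]   # right half masked out
print(apply_output_mask(frame, mask))  # [0.8, 0.6, 0.0, 0.0]
```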

Near Clip Plane

  • The Near Clip Plane value can be increased if you want to automatically exclude objects that are too close to the 360 Camera from the render:

Alpha Channel

N.B. If you remove an Actor from your scene which has particles or other effects rendering over it, those will also be removed because of how Unreal’s render pipeline works.


If you are using Movie Render Queue, then it’s better to use Stencil Layers because they will have completely accurate colors and post process effects.

  • Alpha Settings lets you show and hide objects from the 360 Camera.

  • The Show Only list allows you to select Actors in your scene so only those are rendered (against an alpha channel background).

  • You can also Invert Alpha to create a mask:

  • The Hide Only list removes Actors from your scene (such as the cylinder in the case below):
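The Show Only / Hide Only behaviour can be sketched as straightforward list filtering (a toy model of the setting, not engine code):

```python
def visible_actors(scene, show_only=None, hide_only=None):
    """Mimic the Show Only / Hide Only lists: if a Show Only list is set,
    only those actors render (against an alpha background); otherwise
    everything except the Hide Only list renders. Toy model only."""
    if show_only:
        return [a for a in scene if a in show_only]
    hide = set(hide_only or [])
    return [a for a in scene if a not in hide]

scene = ["Cube", "Cylinder", "Sphere"]
print(visible_actors(scene, show_only=["Sphere"]))    # ['Sphere']
print(visible_actors(scene, hide_only=["Cylinder"]))  # ['Cube', 'Sphere']
```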

Preview

  • If the Render Target is unpaused, your preview window will automatically show the exact output of the 360 Camera/ Component, including any modifications you have made.

  • You can switch this off if you prefer.

Post Process Settings

  • When the Post Process Update Location is set to 360 Component (see setting above), these Post Process Settings will be editable and will carry through to the final 360 render.

  • Most Post Process Settings will not cause a problem for 360 renders, but Bloom, Dirt Masks and other screen space based effects may need a significant Face Blend Percentage to work as intended.

Render Flags

  • You can toggle these on/ off to preview the effect of different render passes in the Render Target:

  • This can be useful for debugging if seams are coming from certain post-process effects.

OCIO Color Management

Use the Color section to add OCIO configs to the 360 Camera Render Target to previs your content in the correct color space for your display:

  1. Enable the OpenColorIO Plugin in Edit> Plugins, restarting if prompted.

  2. Right-click in the Content Browser and search Miscellaneous> OpenColorIO Configuration

  3. In the OCIO Actor you can modify the input and output color spaces you want to use, as well as the config file that has the instructions on how to use them:

    1. Configuration File: Unreal provides a lightweight version of ACES which may be sufficient for your needs, but otherwise you should replace it with a comprehensive library such as this (click Reload and Rebuild after you have changed the file destination to your target library).

    2. Desired Color Spaces: is where you choose the color space you want to work in:

      1. Utility Linear Rec 709 sRGB is the default working color space for the Unreal Editor, so you need this option as it will typically be what you convert from in your renders.

        1. If your monitor works in a different colorspace and you have already modified your Project Settings then select that option instead.

      2. If you are using the ACES color workflow then you should also select ACES> ACEScg so you can select this as your output color space in Movie Render Queue.

      3. You can add other working color spaces here as well if you need to (you can switch between them both in your preview and in your render settings).

    3. Desired Display-View: is where you select the color space of your monitor(s) or display:

      1. sRGB is the standard for computer monitors and web content.

      2. If you have a specialist monitor/ display then you can select it from the list.

      3. You can add more than one option here if you will be toggling between different color spaces. You can select between these in Viewport/ 360 Camera / Movie Render Queue.

    4. Context: This is an advanced setting which lets you define specific shots for which you want to use different color spaces; this is normally reserved for high-end studio pipelines.

  4. Load the OCIO config Actor in the Details panel in Color> OpenColorIO Configuration, select the color spaces you need and then tick Enable OCIO:

  5. If you haven’t selected OCIO in your Viewport, you should instantly see the difference in colors in your 360 Camera preview:

Compositing

N.B. All cameras must have the same projection type for Compositing to work.

It is not possible to render multiple cameras to EXR layers while compositing is active; all camera outputs are merged.

There is no cap on the number of 360 Cameras/ Components you can composite in Movie Render Queue, but adding more will have an impact on VRAM.

  • You can use the OWL 360 Compositing Track in Unreal Sequencer to composite between multiple 360 Cameras/ Components for effects like cross-fading.

  • You can use this both in live-rendering and Movie Render Queue to keep your complete workflow inside Unreal (rather than having to use external compositing or mapping software).

  • However, in live-rendering, you need to manage the performance very carefully as you need at least two active Render Targets which will be very heavy on frame rate and VRAM.

Adding a Compositing Track

  1. To perform a crossfade transition, start with two OWL 360 Cameras in the level.

    • These cameras can be in nested sequences; the recursion limit into nested sequences is 15.

    • The Composite Track will override your Camera Cuts so the 360 Cameras/ Components selected inside it will be rendered.

  2. Ensure that both 360 Cameras have been added to your sequence:

  3. Click the Add Track button on the top left of the sequencer in the top level sequence and add OWL 360 Compositing Track:

  4. Once clicked, you will see the OWL 360 Compositing track with three buttons to the right:

    1. Preview: (Play icon) Opens the Editor Preview window to simulate the final composite.

    2. Refresh: (Recycle icon) Refreshes the list of 360 Cameras/ Components that can be added for compositing (in case you have added new ones) and automatically unbinds any Keyframe sections that are no longer connected.

    3. Add: (Plus icon) Lets you add 360 Cameras/ Components from this and nested sequences to your composite.

Composite Options

  • There are different compositing Blend Modes available, which you should choose depending on the content you are working with:

    • Over (Alpha Compositing): places the foreground layer on top of the background layer using the foreground's transparency.

      • Use this for standard layering where you want one object to simply sit in front of another.

    • Multiply: looks at the color information in each channel and multiplies the base color by the blend color. The result is always darker. Multiplying any color with white leaves the color unchanged; multiplying with black produces black.

      • Use this for adding shadows, creating "stained glass" effects, or removing white backgrounds from line art.

    • Add (Linear Dodge): sums the color values of both layers. The result is always brighter. If the sum of the values exceeds the maximum (1.0 or 255), it clips to pure white.

      • Use this for light effects like muzzle flashes, glows, lens flares, or sparks.

    • Subtract: takes the background pixel values and subtracts the foreground pixel values. If the result is negative, it simply becomes black.

      • Use this for comparing two nearly identical shots to find differences, or "punching holes" in an image based on brightness.

    • Screen: is the inverse of Multiply. It flips the values, multiplies them, and flips them back. It results in a brighter image, but unlike "Add," it is much softer and rarely clips to pure white.

      • Use this for compositing elements with black backgrounds (like smoke or fire) without blowing out the highlights.

    • Difference: looks at the color information in each channel and subtracts the darker value from the lighter value.

      • Use this for alignment: If you have two identical images perfectly stacked, the result will be pure black. If they shift even slightly, the "difference" will glow, showing you exactly where they don't match.
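The blend modes above can be sketched per channel, assuming colour values normalised to [0, 1]; this is a conceptual model of the modes, not the plugin's GPU implementation:

```python
# Per-channel blend operations on values in [0, 1]; toy model of the
# modes described above, not the plugin's GPU implementation.

def over(fg, bg, fg_alpha):
    # Standard alpha compositing: foreground over background.
    return fg * fg_alpha + bg * (1.0 - fg_alpha)

def multiply(fg, bg):
    # Always darker; white leaves the base unchanged, black gives black.
    return fg * bg

def add(fg, bg):
    # Linear dodge: always brighter, clips at pure white.
    return min(fg + bg, 1.0)

def subtract(fg, bg):
    # Background minus foreground; negative results become black.
    return max(bg - fg, 0.0)

def screen(fg, bg):
    # Inverse of multiply: brighter but softer than Add, rarely clips.
    return 1.0 - (1.0 - fg) * (1.0 - bg)

def difference(fg, bg):
    # Subtracts the darker value from the lighter; identical inputs give black.
    return abs(fg - bg)

print(multiply(1.0, 0.6))                          # 0.6 (white leaves it unchanged)
print(add(0.7, 0.7), round(screen(0.7, 0.7), 2))   # 1.0 0.91 (Screen is softer)
```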

Creating a Composite

  1. Use the Add button to select the 360 Cameras/ Components you want to composite, each of which will be added as a sub-track:

  2. The order of the sub-tracks can be changed using the arrow / caret buttons.

    1. The bottom sub-track will be composited last and each sub-track above it next.

  3. If you want to change the order of the sub-tracks but not the order of the Keyframes (below), you will need to unbind the Keyframes; rebinding a Keyframe to another camera will then automatically match that camera to the Keyframe (see below).

  4. The drop-down in the top Track lets you select the Blend Mode you want to use between the different 360 Cameras/ Components:

  5. Expanding a sub-track reveals the Opacity setting:

    1. 1.0 is completely opaque; and

    2. 0.0 is transparent.

  6. You need to add a start and end Keyframe for the Opacity of the top sub-track to begin and end the composite.

    1. The start is normally value 1 so that only the top 360 Camera shows at the beginning of the composite:

    2. The end is normally value 0 so the top 360 Camera is transparent (and the 360 Camera below it will fully show):

    3. At the end of the composite, the sub-track below should have Opacity value 1 (opaque), so its 360 Camera is fully visible.

  7. Always remember to save the Sequence so it can be picked up automatically in Movie Render Queue:
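The opacity ramp described in the steps above is a simple linear interpolation between the two Keyframes; a sketch:

```python
def opacity_at(frame, key_start, key_end):
    """Linear opacity ramp between two (frame, value) keyframes, matching
    the 1 -> 0 fade on the top sub-track described above."""
    (f0, v0), (f1, v1) = key_start, key_end
    if frame <= f0:
        return v0
    if frame >= f1:
        return v1
    t = (frame - f0) / (f1 - f0)
    return v0 + (v1 - v0) * t

# Top camera fades out across frames 0-60:
print([opacity_at(f, (0, 1.0), (60, 0.0)) for f in (0, 30, 60)])  # [1.0, 0.5, 0.0]
```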

Previewing the Composite

N.B. All cameras must have the same projection type for Compositing to work.
Previewing multiple 360 Cameras/ Components will be performance heavy because you will have multiple Render Targets writing pixels.


To mitigate this, you can Pause Rendering on the bottom camera (so it shows a static opaque frame) and just show the composite transition from the top camera.
Currently the Preview system is capped at 5 different 360 Cameras/ Components.

  1. Ensure that all 360 Cameras/ Components have a unique Render Target:

  2. Use the Preview icon to launch the Composite Preview Window:

  3. Scrub the Sequencer timeline to Preview the composite (for example in the image below you can see the bottom camera appearing at the end of the composite):

Change Composite Order

If you want to change the 360 Cameras/ Components in a composite but keep the Keyframe and Opacity settings, then you need to unbind and rebind them:

  1. To unbind: In the Sequencer track, right-click on the Keyframe section of the 360 Camera/ Component you want to release/ move and select the Unbind option:

    1. The unbound section will now show Unbound before the 360 Camera/ Component name in both the Sequencer and the Keyframe Tracks:

    2. This track will be ignored at render time unless you bind it to another 360 Camera/ Component.

  2. To bind: In the Sequencer track, right-click on the Keyframe section of the 360 Camera/ Component you want to reconnect and select the Bind To option:

    1. You will now see that the Track in Sequencer has renamed to the 360 Camera/ Component you just selected from the Bind To list:

    2. For example, in the Sequence above, OWL360CameraActor2 is now the top camera and so will be composited first (this camera renders first and then reveals any cameras underneath it as the composite progresses).

Crossfading between Camera Cut tracks

You can use the OWL 360 Compositing Track alongside Shot Tracks to crossfade between multiple camera cuts or subsequences. For this you will need at least two 360 Cameras in your scene with matching projection types and resolutions.

  1. Create a level sequence for each camera cut that will be used:

  2. Add an OWL360CameraActor and a Camera Cuts track to each Level Sequence, then keyframe any camera movement or camera property changes within that sequence. Repeat this for the other cameras in their respective sequences:

  3. Create a Master Sequence to house the Subsequences and Compositing Tracks. This sequence can also contain animation for any non-camera elements in the final render:

  4. Add a shot track for each of the Subsequences to be crossfaded.

  5. Position the Shot Tracks above each other, overlapping for the duration of the crossfade transition.

  6. Add a Subsequence Track for each subsequence and position them at the same overlap as the shot tracks.

  7. Add both OWL360Camera actors to the level sequence and create a Compositing Track that fades between the two:

  8. You can preview the Composite as explained above and then render the Master Sequence to render the full composite transition:
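The crossfade in step 7 boils down to complementary opacity keyframes on the two overlapping Compositing Track sections: the top section fades out while the section underneath stays fully opaque and is revealed. A minimal sketch of that ramp (the frame numbers and the simple linear falloff are illustrative assumptions, not values the plugin requires):

```python
def crossfade_opacities(frame, overlap_start, overlap_end):
    """Return (top_opacity, bottom_opacity) for a linear crossfade.

    Before the overlap the top camera is fully opaque; after it the
    camera underneath shows through completely.
    """
    if frame <= overlap_start:
        t = 0.0
    elif frame >= overlap_end:
        t = 1.0
    else:
        t = (frame - overlap_start) / (overlap_end - overlap_start)
    # The top section fades out; the underlying section stays at full opacity.
    return 1.0 - t, 1.0

# Hypothetical overlap between frames 100 and 150:
print(crossfade_opacities(125, 100, 150))  # -> (0.5, 1.0), mid-transition
```

Because the bottom camera never changes opacity, only the top section needs keyframes, which is why the shot order chosen when binding cameras (see Change Composite Order above) matters.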

Camera Lerp Transition

Sometimes a camera cut blend is used to move one camera from its location to a second camera's location and then hand the camera cut over to that second camera. This effect is not possible in the same way for a 360 camera, so a camera position lerp workaround can be used.

  1. Add two OWL360CameraActors to a sequence

  2. Create any desired keyframes for these cameras

  3. Create a start and end keyframe for the transform of both cameras within a frame range; this will be the transition section of the sequence. In this example it is frames 60 to 80. Copy the transform keyframes from the camera at the end of the transition and paste them onto Camera 1's transform track.

  4. With Camera 1 docked, you can preview this transform interpolation and see the camera moving from Camera 1's position to Camera 2's over the transition period.

  5. Add a camera cuts track for the second camera at the end keyframe of the transition.

  6. Lock the viewport to the camera cuts track to preview the full transition from camera 1 to 2.
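The transform interpolation in step 3 is an ordinary linear blend between the two cameras' positions, which Sequencer evaluates between the pasted keyframes. A sketch of that maths (the frame range 60-80 comes from the example above; the camera positions are made-up placeholders):

```python
def lerp_position(p_start, p_end, frame, f0=60, f1=80):
    """Linearly interpolate a camera position across the transition frames."""
    # Clamp the blend factor so the camera holds still outside the transition.
    alpha = min(max((frame - f0) / (f1 - f0), 0.0), 1.0)
    return tuple(a + (b - a) * alpha for a, b in zip(p_start, p_end))

cam1 = (0.0, 0.0, 100.0)    # hypothetical Camera 1 location
cam2 = (400.0, 0.0, 100.0)  # hypothetical Camera 2 location
print(lerp_position(cam1, cam2, 70))  # frame 70 is halfway -> (200.0, 0.0, 100.0)
```

At frame 80 Camera 1 sits exactly on Camera 2's transform, which is why the camera cut in step 5 can switch cameras without a visible jump.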

Troubleshooting

Seams

  • When you are using Lumen for Global Illumination, seams can appear at the edges of the cubemap faces from which the 360 Camera renders its raw pixels. These steps should resolve them:

    • Face Blend Percent: Try this value at 5% and then increase as required. It will eliminate most seams.

    • Hardware Ray-Tracing: Enable this setting in your Project Settings. It can reduce seams because Lumen relies less on screen-space effects to generate lighting:

    • Cube Face Rotation: Using the Cube Face Rotation settings can prevent seams by ensuring that elements that affect screen-space effects are always positioned inside a face (see above).

    • Turn off Screen Traces: This fixes a lot of interior seams caused by bounce lighting.

      • Find the Post Process Volume in your scene, search for Screen Traces, then untick the checkbox.

      • You may notice some Global Illumination is lost by doing this and your image will have a more stark contrast.

    • Remove Meshes (that cause seams due to Indirect Lighting) from Lumen using the Mesh Details panel to disable one of the following:

      • For Software Ray Tracing, uncheck Affect Distance Field Lighting.

      • For Hardware Ray Tracing, uncheck Visible in Ray Tracing.

      • You can also separate complex objects in your modeling software to ensure that Unreal Engine can shade them more accurately, which can improve Lumen reflections and reduce seams.

    • Add Temporal Anti-Aliasing Samples to fast-moving imagery to remove seams and improve shadows and reflections.
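For quick experiments, Lumen also exposes console variables that mirror some of the checkboxes above, so you can toggle them per-project without hunting through Post Process Volumes. A hedged sketch of a ConsoleVariables.ini fragment (these cvar names exist in recent UE5 releases, but verify them with the console's autocomplete in your engine version before relying on them):

```ini
[Startup]
; Mirrors the Post Process "Screen Traces" checkbox for Lumen GI
r.Lumen.ScreenProbeGather.ScreenTraces=0
; Mirrors the "Use Hardware Ray Tracing when available" project setting
r.Lumen.HardwareRayTracing=1
```

Changes in ConsoleVariables.ini apply at editor startup, which makes it easy to A/B a seam fix across sessions before committing it to your Project Settings.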

Exposure

  • When you drop the 360 Camera into your scene it can be over or under exposed compared to your Viewport.

  • Exposure is set to Manual by default to avoid seams and artefacts caused by Auto Exposure.

  • The 360CameraActor automatically sets an Exposure Compensation of 10 when brought into a level.

  • This can be too high or low for some scenes.

  • Adjust this setting in the Post Process Settings using the Exposure Compensation slider:
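Exposure Compensation works in photographic stops, so each unit doubles the scene brightness, which is why the default of 10 can look dramatically over-exposed in some scenes. A quick sketch of the standard stops formula (treat this as an approximation of how Unreal applies the value, not its exact internal pipeline):

```python
def exposure_multiplier(ev_compensation):
    """Each stop of Exposure Compensation doubles perceived brightness."""
    return 2.0 ** ev_compensation

print(exposure_multiplier(10))  # -> 1024.0  (10 stops brighter than metered)
print(exposure_multiplier(-1))  # -> 0.5     (one stop darker)
```

Because the scale is exponential, small slider adjustments near the current value usually get you to a Viewport match faster than large jumps.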

Post-Process Effects

  • For tips on how to deal with Bloom, Particles, Volumetric Fog, God-Rays, Water, Dirt Masks etc. please see the separate workflow article.

Frame Rate Drop when in Background

  • By default, Unreal throttles the CPU when the Editor is in the background behind other software, which causes the frame rate to drop.

    • To change this, go to Editor Preferences and untick Use Less CPU when in Background:

Audio not Playing when Editor in Background

  • By default, Unreal stops playing audio when the Editor is in the background behind other software.

    • To change this, open your File Explorer, pick the Engine version you need and go to its install path:

    • In the Config folder, open BaseEngine.ini in a text editor such as Notepad:

    • Search for UnfocusedVolumeMultiplier and change the value from 0.0 to 1.0:

    • Save the file, then close and re-open Unreal.

    • Your audio will now play continuously, whether Unreal is in the background or minimised.
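The steps above amount to changing a single config value. The relevant lines in BaseEngine.ini look roughly like this (the section name is taken from recent engine versions and may differ in yours, so search for the key itself rather than relying on this sketch):

```ini
[Audio]
; 0.0 mutes audio whenever the Editor loses focus; 1.0 keeps it at full volume
UnfocusedVolumeMultiplier=1.0
```

Note that BaseEngine.ini is shared by every project using that engine install, so the change applies engine-wide rather than per-project.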