How Unreal Engine Camera Works

Tags
Unreal Engine
Camera
Published
June 1, 2020
Author
Tianqi Li
The Unreal camera system is made up of several classes, including APlayerController, USpringArmComponent, UCameraComponent, APlayerCameraManager, and UCameraModifier.
In your project, you probably won't use all of them, especially USpringArmComponent and UCameraModifier. Newer samples such as Valley of the Ancient and Lyra don't use them.
If you really want to base your camera code on UCameraModifier, you can check out this project.

Flow

The built-in third person camera works in the following phases:

Receiving Input

This happens in APlayerController::TickPlayerInput, which is called from APlayerController's tick.
It is our responsibility to call APlayerController::AddYawInput and APlayerController::AddPitchInput after receiving input.
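The accumulation these two calls perform can be sketched without the engine. This is a simplified assumption of the real implementation: the struct is a stand-in for FRotator, and the scale factors mirror the legacy InputYawScale/InputPitchScale defaults.

```cpp
// Engine-free sketch: AddYawInput/AddPitchInput accumulate scaled input
// into RotationInput, which is consumed later in the same tick.
struct FRotatorSketch {
    float Pitch = 0.f, Yaw = 0.f, Roll = 0.f;
};

struct PlayerControllerSketch {
    FRotatorSketch RotationInput;      // cleared after UpdateRotation runs
    float InputYawScale   = 2.5f;      // assumed legacy defaults
    float InputPitchScale = -2.5f;

    // Sketch of APlayerController::AddYawInput
    void AddYawInput(float Val)   { RotationInput.Yaw   += Val * InputYawScale; }
    // Sketch of APlayerController::AddPitchInput
    void AddPitchInput(float Val) { RotationInput.Pitch += Val * InputPitchScale; }
};
```

Multiple calls in one frame simply add up, which is why axis input from mouse and gamepad can be fed in independently.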

Update Control Rotation

This happens in APlayerController::UpdateRotation, which is also called from APlayerController's tick, just a little later.
In this phase, ControlRotation is updated from RotationInput and then processed by a list of UCameraModifiers.
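A minimal sketch of this step, with the engine types reduced to a two-field struct: the frame's RotationInput is folded into ControlRotation, and the pitch is clamped the way the camera manager's ProcessViewRotation does (the -89..89 range is an assumed value, configurable via ViewPitchMin/ViewPitchMax in the engine).

```cpp
#include <algorithm>

// Engine-free sketch of the ControlRotation update in UpdateRotation.
struct Rot { float Pitch = 0.f, Yaw = 0.f; };

Rot UpdateControlRotation(Rot ControlRotation, const Rot& RotationInput)
{
    ControlRotation.Yaw   += RotationInput.Yaw;
    ControlRotation.Pitch += RotationInput.Pitch;
    // In the engine, APlayerCameraManager::ProcessViewRotation clamps the
    // result (and camera modifiers may adjust it); the range here is assumed.
    ControlRotation.Pitch = std::clamp(ControlRotation.Pitch, -89.f, 89.f);
    return ControlRotation;
}
```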

Update Spring Arm

This happens in USpringArmComponent's own tick. UCameraComponent is attached to a socket on USpringArmComponent, and at the end of the tick USpringArmComponent updates this socket's transform.
void USpringArmComponent::UpdateDesiredArmLocation(bool bDoTrace, bool bDoLocationLag, bool bDoRotationLag, float DeltaTime)
{
	...

	// Update socket location/rotation
	RelativeSocketLocation = RelCamTM.GetLocation();
	RelativeSocketRotation = RelCamTM.GetRotation();
	UpdateChildTransforms();
}
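Before writing the socket transform, the spring arm can smooth the camera with location lag. The interpolation it uses can be sketched in one dimension (a float standing in for FVector), following the shape of FMath::VInterpTo: a step proportional to the remaining distance, clamped so it never overshoots. The exact formula here is an assumption, not the engine source.

```cpp
#include <cmath>

// Engine-free sketch of spring arm location lag: each tick the arm origin
// moves toward the desired location by a fraction of the remaining distance.
float InterpTo(float Current, float Target, float DeltaTime, float Speed)
{
    if (Speed <= 0.f)
        return Target;                               // no lag: snap to target
    const float Dist = Target - Current;
    // Fraction of the gap covered this tick, clamped to avoid overshoot.
    const float Alpha = std::fmin(DeltaTime * Speed, 1.f);
    return Current + Dist * Alpha;
}
```

Because the step is proportional to the remaining distance, the camera eases out as it approaches the target instead of stopping abruptly.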

Update and Blend View Targets

This happens in APlayerCameraManager::UpdateCamera, which is ticked just before rendering.
APlayerCameraManager first updates its two view targets: a current one and a pending one. Each can be an actor with a UCameraComponent, an ACameraActor, or just a location you hook in.
Updating a view target produces a FMinimalViewInfo (i.e. the POV); the details are in APlayerCameraManager::UpdateViewTarget.
The two view targets' FMinimalViewInfos are then blended into a new one, which, after being processed by a list of UCameraModifiers, becomes the final POV used for rendering.
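The blend itself is a straightforward weighted mix of the two POVs. A reduced sketch, with location collapsed to one dimension and only FOV kept: the real FMinimalViewInfo carries full transforms and blends rotation via quaternions, so treat this as an illustration of the alpha mixing, not the engine code.

```cpp
// Engine-free sketch of blending the current and pending view targets' POVs.
struct ViewInfoSketch { float Location; float FOV; };

ViewInfoSketch BlendViewInfo(const ViewInfoSketch& From,
                             const ViewInfoSketch& To,
                             float Alpha)  // 0 = fully From, 1 = fully To
{
    return {
        From.Location + (To.Location - From.Location) * Alpha,
        From.FOV      + (To.FOV      - From.FOV)      * Alpha,
    };
}
```

As Alpha advances from 0 to 1 over the blend time, the camera slides from the old view target to the new one; at Alpha == 1 the pending view target becomes the current one.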

Sequence

Check out ULevelSequencePlayer::UpdateCameraCut for how sequences handle view targets.
By default, when transitioning back from the sequence camera to the pawn, the blend time is zero.