Amazon IVS Broadcast SDK: iOS Guide
The Amazon Interactive Video Service (IVS) iOS broadcast SDK provides the interfaces required to broadcast to Amazon IVS on iOS.
The AmazonIVSBroadcast module implements the interface described in this document. The following operations are supported:
- Set up (initialize) a broadcast session.
- Manage broadcasting.
- Attach and detach input devices.
- Manage a composition session.
- Receive events.
- Receive errors.
Latest version of iOS broadcast SDK: 1.7.1 (Release Notes)
Reference documentation: For information on the most important methods available in the Amazon IVS iOS broadcast SDK, see the reference documentation at https://aws.github.io/amazon-ivs-broadcast-docs/1.7.1/ios/
Sample code: See the iOS sample repository on GitHub:
https://github.com/aws-samples/amazon-ivs-broadcast-ios-sample
Platform requirements: iOS 11 or greater.
Getting Started
Install the Library
We recommend that you integrate the broadcast SDK via CocoaPods. (Alternatively, you can manually add the framework to your project.)
Recommended: Integrate the Broadcast SDK (CocoaPods)
Releases are published via CocoaPods under the name AmazonIVSBroadcast. Add this dependency to your Podfile:

pod 'AmazonIVSBroadcast'

Run pod install and the SDK will be available in your .xcworkspace.
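For context, a complete minimal Podfile might look like the following (the target name is illustrative):

# Podfile (target name is illustrative)
platform :ios, '11.0'

target 'MyBroadcastApp' do
  use_frameworks!
  pod 'AmazonIVSBroadcast'
end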
Alternate Approach: Install the Framework Manually
- Download the latest version from https://broadcast.live-video.net/1.7.1/AmazonIVSBroadcast.xcframework.zip.
- Extract the contents of the archive. AmazonIVSBroadcast.xcframework contains the SDK for both device and simulator.
- Embed AmazonIVSBroadcast.xcframework by dragging it into the Frameworks, Libraries, and Embedded Content section of the General tab for your application target.
Implement IVSBroadcastSession.Delegate
Implement IVSBroadcastSession.Delegate, which allows you to receive state updates and device-change notifications:
extension ViewController: IVSBroadcastSession.Delegate {
    func broadcastSession(_ session: IVSBroadcastSession,
                          didChange state: IVSBroadcastSession.State) {
        print("IVSBroadcastSession did change state \(state)")
    }

    func broadcastSession(_ session: IVSBroadcastSession,
                          didEmitError error: Error) {
        print("IVSBroadcastSession did emit error \(error)")
    }
}
Request Permissions
Your app must request permission to access the user’s camera and mic. (This is not specific to Amazon IVS; it is required for any application that needs access to cameras and microphones.)
Here, we check whether the user has already granted permissions and, if not, we ask for them:
switch AVCaptureDevice.authorizationStatus(for: .video) {
case .authorized:
    // Permission already granted.
    break
case .notDetermined:
    AVCaptureDevice.requestAccess(for: .video) { granted in
        // Permission granted or denied based on the granted bool.
    }
case .denied, .restricted:
    // Permission denied.
    break
@unknown default:
    // Permission status unknown.
    break
}
You need to do this for both .video and .audio media types, if you want access to cameras and microphones, respectively. You also need to add entries for NSCameraUsageDescription and NSMicrophoneUsageDescription to your Info.plist. Otherwise, your app will crash when trying to request permissions.
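Because both media types must be authorized, it can be convenient to chain the two requests. A minimal sketch, using a hypothetical helper name:

import AVFoundation

// Hypothetical helper: requests camera access, then microphone access, and
// reports whether both were granted.
func requestCaptureAccess(_ completion: @escaping (Bool) -> Void) {
    AVCaptureDevice.requestAccess(for: .video) { videoGranted in
        AVCaptureDevice.requestAccess(for: .audio) { audioGranted in
            DispatchQueue.main.async {
                completion(videoGranted && audioGranted)
            }
        }
    }
}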
Disable the Application Idle Timer
This is optional but recommended. It prevents your device from going to sleep while using the broadcast SDK, which would interrupt the broadcast.
override func viewDidAppear(_ animated: Bool) {
    super.viewDidAppear(animated)
    UIApplication.shared.isIdleTimerDisabled = true
}

override func viewDidDisappear(_ animated: Bool) {
    super.viewDidDisappear(animated)
    UIApplication.shared.isIdleTimerDisabled = false
}
(Optional) Set Up AVAudioSession
By default, the broadcast SDK will set up your application’s AVAudioSession. If you want to manage this yourself, set IVSBroadcastSession.applicationAudioSessionStrategy to noAction. Without control of the AVAudioSession, the broadcast SDK cannot manage microphones internally. To use microphones with the noAction option, you can create an IVSCustomAudioSource and provide your own samples via an AVCaptureSession, AVAudioEngine, or another tool that provides PCM audio samples.
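A rough sketch of that arrangement (the source and slot names are illustrative, and the strategy should be set before the session is created):

// Opt out of SDK-managed audio before creating the IVSBroadcastSession.
IVSBroadcastSession.applicationAudioSessionStrategy = .noAction

// With .noAction, the SDK cannot attach microphones itself, so create a
// custom audio source and feed it Linear PCM CMSampleBuffers yourself
// (for example, from an AVCaptureSession or an AVAudioEngine tap).
let audioSource = broadcastSession.createAudioSource(withName: "custom-mic")
try broadcastSession.attach(audioSource, toSlotWithName: "custom")

// Then, for each PCM buffer you capture:
// audioSource.onSampleBuffer(sampleBuffer)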
If you are manually setting up your AVAudioSession, at a minimum you need to set the category to .record or .playAndRecord, and set the session to active. If you want to record audio from Bluetooth devices, you need to specify the .allowBluetooth option as well:
do {
    try AVAudioSession.sharedInstance().setCategory(.record, options: .allowBluetooth)
    try AVAudioSession.sharedInstance().setActive(true)
} catch {
    print("Error configuring AVAudioSession")
}
We recommend that you let the SDK handle this for you. Otherwise, if you want to choose between different audio devices, you will need to manually manage the ports.
Create the Broadcast Session
The broadcast interface is IVSBroadcastSession. Initialize it as shown below:
let broadcastSession = try IVSBroadcastSession(
    configuration: IVSPresets.configurations().standardLandscape(),
    descriptors: IVSPresets.devices().frontCamera(),
    delegate: self)
Also see Create the Broadcast Session (Advanced Version).
Set the IVSImagePreviewView for Preview
If you want to display a preview for an active camera device, add the preview IVSImagePreviewView for the device to your view hierarchy:
// If the session was just created, execute the following code in the callback
// of IVSBroadcastSession.awaitDeviceChanges to ensure all devices have been
// attached.
if let devicePreview = try broadcastSession.listAttachedDevices()
    .compactMap({ $0 as? IVSImageDevice })
    .first?
    .previewView()
{
    previewView.addSubview(devicePreview)
}
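For example, a sketch of performing that lookup inside the awaitDeviceChanges callback right after creating the session:

broadcastSession.awaitDeviceChanges {
    // All pending attachments have completed; it is now safe to fetch the preview.
    if let devicePreview = try? broadcastSession.listAttachedDevices()
        .compactMap({ $0 as? IVSImageDevice })
        .first?
        .previewView()
    {
        previewView.addSubview(devicePreview)
    }
}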
Start a Broadcast
The hostname that you receive in the ingestEndpoint response field of the GetChannel endpoint needs to have rtmps:// prepended and /app appended. The complete URL should be in this format:

rtmps://{{ ingestEndpoint }}/app
try broadcastSession.start(with: IVS_RTMPS_URL, streamKey: IVS_STREAMKEY)
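For example, a minimal sketch of assembling that URL (the endpoint string is a placeholder for your GetChannel response value):

// Placeholder; use the ingestEndpoint value from your GetChannel response.
let ingestEndpoint = "<your-ingest-endpoint>"
guard let rtmpsURL = URL(string: "rtmps://\(ingestEndpoint)/app") else { return }
try broadcastSession.start(with: rtmpsURL, streamKey: IVS_STREAMKEY)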
Stop a Broadcast
broadcastSession.stop()
Manage Lifecycle Events
Audio Interruptions
There are several scenarios where the broadcast SDK will not have exclusive access to audio-input hardware. Some example scenarios that you need to handle are:
- User receives a phone call or FaceTime call
- User activates Siri
Apple makes it easy to respond to these events by subscribing to AVAudioSession.interruptionNotification:
NotificationCenter.default.addObserver(
    self,
    selector: #selector(audioSessionInterrupted(_:)),
    name: AVAudioSession.interruptionNotification,
    object: nil)
Then you can handle the event with something like this:
// This assumes you have a variable `isRunning` which tracks if the broadcast
// is currently live, and another variable `wasRunningBeforeInterruption` which
// tracks whether the broadcast was active before this interruption, to
// determine if it should resume after the interruption has ended.
@objc private func audioSessionInterrupted(_ notification: Notification) {
    guard let userInfo = notification.userInfo,
          let typeValue = userInfo[AVAudioSessionInterruptionTypeKey] as? UInt,
          let type = AVAudioSession.InterruptionType(rawValue: typeValue)
    else {
        return
    }

    switch type {
    case .began:
        wasRunningBeforeInterruption = isRunning
        if isRunning {
            broadcastSession.stop()
        }
    case .ended:
        defer {
            wasRunningBeforeInterruption = false
        }
        guard let optionsValue = userInfo[AVAudioSessionInterruptionOptionKey] as? UInt else {
            return
        }
        let options = AVAudioSession.InterruptionOptions(rawValue: optionsValue)
        if options.contains(.shouldResume) && wasRunningBeforeInterruption {
            // start(with:streamKey:) throws, and this method does not.
            try? broadcastSession.start(
                with: IVS_RTMPS_URL,
                streamKey: IVS_STREAMKEY)
        }
    @unknown default:
        break
    }
}
App Going Into Background
Standard applications on iOS are not allowed to use cameras in the background. There also are restrictions on video encoding in the background: since hardware encoders are limited, only foreground applications have access. Because of this, the broadcast SDK automatically terminates its session and sets its isReady property to false. When your application is about to enter the foreground again, the broadcast SDK reattaches all the devices to their original IVSMixerSlotConfiguration entries.

The broadcast SDK does this by responding to UIApplication.didEnterBackgroundNotification and UIApplication.willEnterForegroundNotification.

If you are providing custom image sources, you should be prepared to handle these notifications. You may need to take extra steps to tear them down before the stream is terminated.
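As an illustration, you might pause and resume a hypothetical frame producer that feeds your custom image source:

// `frameProducer` is hypothetical: whatever object submits samples to your
// custom image source.
NotificationCenter.default.addObserver(
    forName: UIApplication.didEnterBackgroundNotification,
    object: nil,
    queue: .main) { _ in
    frameProducer.stop() // hypothetical: stop submitting samples
}
NotificationCenter.default.addObserver(
    forName: UIApplication.willEnterForegroundNotification,
    object: nil,
    queue: .main) { _ in
    frameProducer.start() // hypothetical: resume submitting samples
}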
See Use Background Video for a workaround that enables streaming while your application is in the background.
Media Services Lost
In very rare cases, the entire media subsystem on an iOS device will crash. In this scenario, we can no longer broadcast. It is up to your application to respond to these notifications appropriately. At a minimum, subscribe to these notifications:
- mediaServicesWereLostNotification — Respond by stopping your broadcast and completely deallocating your IVSBroadcastSession. All internal components used by the broadcast session will be invalidated.
- mediaServicesWereResetNotification — Respond by notifying your users that they can broadcast again. Depending on your use case, you may be able to automatically start broadcasting again at this point.
Advanced Use Cases
Here we present some advanced use cases. Start with the basic setup above and continue here.
Create a Broadcast Configuration
Here we create a custom configuration with two mixer slots that allow us to bind two video sources to the mixer. One (custom) is full screen and laid out behind the other (camera), which is smaller and in the bottom-right corner. Note that for the custom slot we do not set a position, size, or aspect mode. Because we do not set these parameters, the slot uses the video settings for size and position.
let config = IVSBroadcastConfiguration()
try config.audio.setBitrate(128_000)
try config.video.setMaxBitrate(3_500_000)
try config.video.setMinBitrate(500_000)
try config.video.setInitialBitrate(1_500_000)
try config.video.setSize(CGSize(width: 1280, height: 720))
config.video.defaultAspectMode = .fit
config.mixer.slots = [
    try {
        let slot = IVSMixerSlotConfiguration()
        // Do not automatically bind to a source
        slot.preferredAudioInput = .unknown
        // Bind to user image if unbound
        slot.preferredVideoInput = .userImage
        try slot.setName("custom")
        return slot
    }(),
    try {
        let slot = IVSMixerSlotConfiguration()
        slot.zIndex = 1
        slot.aspect = .fill
        slot.size = CGSize(width: 300, height: 300)
        slot.position = CGPoint(x: config.video.size.width - 400,
                                y: config.video.size.height - 400)
        try slot.setName("camera")
        return slot
    }(),
]
Create the Broadcast Session (Advanced Version)
Create an IVSBroadcastSession as you did in the basic example, but provide your custom configuration here. Also provide nil for the device array, as we will add those manually.
let broadcastSession = try IVSBroadcastSession(
    configuration: config,  // The configuration we created above
    descriptors: nil,       // We’ll manually attach devices after
    delegate: self)
Iterate and Attach a Camera Device
Here we iterate through input devices that the SDK has detected. The SDK will only return built-in devices on iOS. Even if Bluetooth audio devices are connected, they will appear as a built-in device. For more information, see Known Issues and Workarounds.
Once we find a device that we want to use, we call attachDevice to attach it:
let frontCamera = IVSBroadcastSession.listAvailableDevices()
    .filter { $0.type == .camera && $0.position == .front }
    .first
if let camera = frontCamera {
    broadcastSession.attach(camera, toSlotWithName: "camera") { device, error in
        // check error
    }
}
Swap Cameras
// This assumes you’ve kept a reference called `currentCamera` that points to
// the current camera.
let wants: IVSDevicePosition = (currentCamera.descriptor().position == .front) ? .back : .front

// Remove the current preview view since the device will be changing.
previewView.subviews.forEach { $0.removeFromSuperview() }

let foundCamera = IVSBroadcastSession
    .listAvailableDevices()
    .first { $0.type == .camera && $0.position == wants }

guard let newCamera = foundCamera else { return }

broadcastSession.exchangeOldDevice(currentCamera, withNewDevice: newCamera) { newDevice, _ in
    currentCamera = newDevice
    if let camera = newDevice as? IVSImageDevice {
        do {
            previewView.addSubview(try camera.previewView())
        } catch {
            print("Error creating preview view \(error)")
        }
    }
}
Create a Custom Input Source
To input sound or image data that your app generates, use createImageSource or createAudioSource. Both of these methods create virtual devices (IVSCustomImageSource and IVSCustomAudioSource) that can be bound to the mixer like any other device.

The devices returned by both of these methods accept a CMSampleBuffer through their onSampleBuffer function:
- For video sources, the pixel format must be kCVPixelFormatType_32BGRA, kCVPixelFormatType_420YpCbCr8BiPlanarFullRange, or kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange.
- For audio sources, the buffer must contain Linear PCM data.
You cannot use an AVCaptureSession with camera input to feed a custom image source while also using a camera device provided by the broadcast SDK. If you want to use multiple cameras simultaneously, use AVCaptureMultiCamSession and provide two custom image sources.

Custom image sources primarily should be used with static content such as images, or with video content:
let customImageSource = broadcastSession.createImageSource(withName: "video")
try broadcastSession.attach(customImageSource, toSlotWithName: "custom")
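Each frame must then arrive as a CMSampleBuffer. A sketch of wrapping a CVPixelBuffer for submission, assuming pixelBuffer holds a frame in one of the supported pixel formats:

import CoreMedia

// Describe the pixel buffer so it can be carried in a sample buffer.
var formatDescription: CMVideoFormatDescription?
CMVideoFormatDescriptionCreateForImageBuffer(
    allocator: kCFAllocatorDefault,
    imageBuffer: pixelBuffer,
    formatDescriptionOut: &formatDescription)

// Timestamp the frame with the current host time.
var timingInfo = CMSampleTimingInfo(
    duration: .invalid,
    presentationTimeStamp: CMClockGetTime(CMClockGetHostTimeClock()),
    decodeTimeStamp: .invalid)

var sampleBuffer: CMSampleBuffer?
CMSampleBufferCreateReadyWithImageBuffer(
    allocator: kCFAllocatorDefault,
    imageBuffer: pixelBuffer,
    formatDescription: formatDescription!,
    sampleTiming: &timingInfo,
    sampleBufferOut: &sampleBuffer)

if let sampleBuffer = sampleBuffer {
    customImageSource.onSampleBuffer(sampleBuffer)
}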
Monitor Network Connectivity
It is common for mobile devices to temporarily lose and regain network connectivity while on the go. Because of this, it is important to monitor your app’s network connectivity and respond appropriately when things change.
When the broadcaster's connection is lost, the broadcast SDK's state will change to error and then disconnected. You will be notified of these changes through the IVSBroadcastSession.Delegate. When you receive these state changes:
- Monitor your broadcast app’s connectivity state and call start with your endpoint and stream key once your connection has been restored.
- Important: Monitor the state delegate callback and ensure that the state changes to connected after calling start again.
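A sketch of that flow in the delegate (the connectivity-wait helper is hypothetical):

func broadcastSession(_ session: IVSBroadcastSession,
                      didChange state: IVSBroadcastSession.State) {
    switch state {
    case .error, .disconnected:
        // Hypothetical helper: waits (for example, via NWPathMonitor) until
        // the network is reachable again, then invokes the closure.
        waitForNetwork {
            try? session.start(with: IVS_RTMPS_URL, streamKey: IVS_STREAMKEY)
        }
    case .connected:
        // The restart succeeded; the broadcast is live again.
        print("Reconnected")
    default:
        break
    }
}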
Detach a Device
If you want to detach and not replace a device, detach it with IVSDevice or IVSDeviceDescriptor:
broadcastSession.detachDevice(currentCamera)
ReplayKit Integration
To stream the device’s screen and system audio on iOS, you must integrate with ReplayKit. The broadcast SDK supports ReplayKit through IVSReplayKitBroadcastSession. In your RPBroadcastSampleHandler subclass, create an instance of IVSReplayKitBroadcastSession, then:
- Start the session in broadcastStarted
- Stop the session in broadcastFinished
The session object will have three custom sources for screen images, app audio, and microphone audio. Pass the CMSampleBuffers provided in processSampleBuffer to those custom sources.
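For example, a sketch of that routing (systemImageSource appears in the orientation snippet below; the two audio-source property names here are assumptions):

override func processSampleBuffer(_ sampleBuffer: CMSampleBuffer,
                                  with sampleBufferType: RPSampleBufferType) {
    switch sampleBufferType {
    case .video:
        session.systemImageSource.onSampleBuffer(sampleBuffer)
    case .audioApp:
        session.systemAudioSource.onSampleBuffer(sampleBuffer)  // assumed name
    case .audioMic:
        session.microphoneSource.onSampleBuffer(sampleBuffer)   // assumed name
    @unknown default:
        break
    }
}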
To handle device orientation, you need to extract ReplayKit-specific metadata from the sample buffer. Use the following code:
let imageSource = session.systemImageSource
if #available(iOSApplicationExtension 11.0, *) {
    if let orientationAttachment = CMGetAttachment(sampleBuffer,
                                                   key: RPVideoSampleOrientationKey as CFString,
                                                   attachmentModeOut: nil) as? NSNumber,
       let orientation = CGImagePropertyOrientation(rawValue: orientationAttachment.uint32Value)
    {
        switch orientation {
        case .up, .upMirrored:
            imageSource.setHandsetRotation(0)
        case .down, .downMirrored:
            imageSource.setHandsetRotation(Float.pi)
        case .right, .rightMirrored:
            imageSource.setHandsetRotation(-(Float.pi / 2))
        case .left, .leftMirrored:
            imageSource.setHandsetRotation(Float.pi / 2)
        @unknown default:
            break
        }
    }
}
It is possible to integrate ReplayKit using IVSBroadcastSession instead of IVSReplayKitBroadcastSession. However, the ReplayKit-specific variant has several modifications to reduce the internal memory footprint, to stay within Apple’s memory ceiling for broadcast extensions.
Get Recommended Broadcast Settings
To evaluate your user’s connection before starting a broadcast, use IVSBroadcastSession.recommendedVideoSettings to run a brief test. As the test runs, you will receive several recommendations, ordered from most to least recommended. In this version of the SDK, it is not possible to reconfigure the current IVSBroadcastSession, so you must deallocate it and then create a new one with the recommended settings. You will continue to receive IVSBroadcastSessionTestResults until the result.status is Success or Error. You can check progress with result.progress.
Amazon IVS supports a maximum bitrate of 8.5 Mbps (for channels whose type is STANDARD), so the maximumBitrate returned by this method never exceeds 8.5 Mbps. To account for small fluctuations in network performance, the recommended initialBitrate returned by this method is slightly less than the true bitrate measured in the test. (Using 100% of the available bandwidth usually is inadvisable.)
func runBroadcastTest() {
    self.test = session.recommendedVideoSettings(
        with: IVS_RTMPS_URL,
        streamKey: IVS_STREAMKEY) { [weak self] result in
        if result.status == .success {
            self?.recommendation = result.recommendations[0]
        }
    }
}
Use Background Video
You can continue a non-ReplayKit broadcast even while your application is in the background.
To save power and keep foreground applications responsive, iOS gives only one application at a time access to the GPU. The Amazon IVS Broadcast SDK uses the GPU at multiple stages of the video pipeline, including compositing multiple input sources, scaling the image, and encoding the image. While the broadcasting application is in the background, there is no guarantee that the SDK can perform any of these actions.
To address this, use the createAppBackgroundImageSource method. It enables the SDK to continue broadcasting both video and audio while in the background. It returns an IVSBackgroundImageSource, which is a normal IVSCustomImageSource with an additional finish function. Every CMSampleBuffer provided to the background image source is encoded at the frame rate provided by your original IVSVideoConfiguration. Timestamps on the CMSampleBuffer are ignored.
The SDK then scales and encodes those images and caches them, automatically looping that feed when your application goes into the background. When your application returns to the foreground, the attached image devices become active again and the pre-encoded stream stops looping.
To undo this process, use removeImageSourceOnAppBackgrounded. You do not have to call this unless you want to explicitly revert the SDK’s background behavior; otherwise, it is cleaned up automatically on deallocation of the IVSBroadcastSession.
Note: We strongly recommend that you call this method as part of configuring the broadcast session, before the session goes live. The method is expensive (it encodes video), so performance of a live broadcast while this method is running may be degraded.
Example: Generating a Static Image for Background Video
Providing a single image to the background source generates a full GOP of that static image.
Here is an example using CIImage:
// Create the background image source
guard let source = session.createAppBackgroundImageSource(withAttemptTrim: true, onComplete: { error in
    print("Background Video Generation Done - Error: \(error.debugDescription)")
}) else {
    return
}

// Create a CIImage of the color red.
let ciImage = CIImage(color: .red)

// Convert the CIImage to a CVPixelBuffer
let attrs = [
    kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
    kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue,
    kCVPixelBufferMetalCompatibilityKey: kCFBooleanTrue,
] as CFDictionary
var pixelBuffer: CVPixelBuffer!
CVPixelBufferCreate(kCFAllocatorDefault,
                    videoConfig.width,
                    videoConfig.height,
                    kCVPixelFormatType_420YpCbCr8BiPlanarFullRange,
                    attrs,
                    &pixelBuffer)
let context = CIContext()
context.render(ciImage, to: pixelBuffer)

// Submit the CVPixelBuffer and finish the source
source.add(pixelBuffer)
source.finish()
Alternately, instead of creating a CIImage of a solid color, you can use bundled images. The only code shown here is how to convert a UIImage to a CIImage, for use with the previous sample:

// Load the pre-bundled image and get its CGImage
guard let cgImage = UIImage(named: "image")?.cgImage else {
    return
}

// Create a CIImage from the CGImage
let ciImage = CIImage(cgImage: cgImage)
Example: Video with AVAssetImageGenerator
You can use an AVAssetImageGenerator to generate CMSampleBuffers from an AVAsset (though not an HLS stream AVAsset):
// Create the background image source
guard let source = session.createAppBackgroundImageSource(withAttemptTrim: true, onComplete: { error in
    print("Background Video Generation Done - Error: \(error.debugDescription)")
}) else {
    return
}

// Find the URL for the pre-bundled MP4 file
guard let url = Bundle.main.url(forResource: "sample-clip", withExtension: "mp4") else {
    return
}

// Create an image generator from an asset created from the URL.
let generator = AVAssetImageGenerator(asset: AVAsset(url: url))

// It is important to specify a very small time tolerance.
generator.requestedTimeToleranceAfter = .zero
generator.requestedTimeToleranceBefore = .zero

// At 30 fps, this will generate 4 seconds worth of samples.
let times: [NSValue] = (0...120).map {
    NSValue(time: CMTime(value: $0, timescale: CMTimeScale(videoConfig.targetFramerate)))
}

var completed = 0
let context = CIContext(options: [.workingColorSpace: NSNull()])

// Create a pixel buffer pool to efficiently feed the source
let attrs = [
    kCVPixelBufferPixelFormatTypeKey: kCVPixelFormatType_420YpCbCr8BiPlanarFullRange,
    kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
    kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue,
    kCVPixelBufferMetalCompatibilityKey: kCFBooleanTrue,
    kCVPixelBufferWidthKey: videoConfig.width,
    kCVPixelBufferHeightKey: videoConfig.height,
] as CFDictionary
var pool: CVPixelBufferPool!
CVPixelBufferPoolCreate(kCFAllocatorDefault, nil, attrs, &pool)

generator.generateCGImagesAsynchronously(forTimes: times) { requestTime, image, actualTime, result, error in
    if let image = image {
        // Convert to CIImage and then to CVPixelBuffer
        let ciImage = CIImage(cgImage: image)
        var pixelBuffer: CVPixelBuffer!
        CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pool, &pixelBuffer)
        context.render(ciImage, to: pixelBuffer)
        source.add(pixelBuffer)
    }
    completed += 1
    if completed == times.count {
        // Mark the source finished when all images have been processed
        source.finish()
    }
}
It is possible to generate CVPixelBuffers using an AVPlayer and AVPlayerItemVideoOutput. However, that requires using a CADisplayLink and executes closer to real time, while AVAssetImageGenerator can process the frames much faster.
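For completeness, a rough sketch of that AVPlayer route, assuming source is the background image source created above:

import AVFoundation
import UIKit
import AmazonIVSBroadcast

// Sketch: pull frames from an AVPlayerItemVideoOutput on a display link and
// forward them to the image source. This runs near real time.
final class PlayerFrameFeeder: NSObject {
    private let output = AVPlayerItemVideoOutput(pixelBufferAttributes: [
        kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_420YpCbCr8BiPlanarFullRange
    ])
    private let player: AVPlayer
    private let source: IVSCustomImageSource
    private var displayLink: CADisplayLink?

    init(url: URL, source: IVSCustomImageSource) {
        let item = AVPlayerItem(url: url)
        self.player = AVPlayer(playerItem: item)
        self.source = source
        super.init()
        item.add(output)
    }

    func start() {
        displayLink = CADisplayLink(target: self, selector: #selector(tick(_:)))
        displayLink?.add(to: .main, forMode: .common)
        player.play()
    }

    @objc private func tick(_ link: CADisplayLink) {
        let itemTime = output.itemTime(forHostTime: link.targetTimestamp)
        guard output.hasNewPixelBuffer(forItemTime: itemTime),
              let pixelBuffer = output.copyPixelBuffer(forItemTime: itemTime,
                                                       itemTimeForDisplay: nil)
        else { return }
        source.add(pixelBuffer)
    }
}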
Limitations
- Your application needs the background audio entitlement.
- createAppBackgroundImageSource can be called only while your application is in the foreground, since it needs access to the GPU to complete.
- createAppBackgroundImageSource always encodes to a full GOP. For example, if you have a keyframe interval of 2 seconds (the default) and are running at 30 fps, it encodes a multiple of 60 frames:
  - If fewer than 60 frames are provided, the last frame is repeated until 60 frames are reached, regardless of the trim option’s value.
  - If more than 60 frames are provided and the trim option is true, the last N frames are dropped, where N is the remainder of the total number of submitted frames divided by 60.
  - If more than 60 frames are provided and the trim option is false, the last frame is repeated until the next multiple of 60 frames is reached.
Known Issues and Workarounds
Please report all issues to Amazon IVS Support.
- Changing Bluetooth audio routes can be unpredictable. If you connect a new device mid-session, iOS may or may not automatically change the input route. Also, it is not possible to choose between multiple Bluetooth headsets that are connected at the same time.
  Workaround: If you plan to use a Bluetooth headset, connect it before starting the broadcast and leave it connected throughout the broadcast.
- A bug in ReplayKit causes rapid memory growth when plugging in a wired headset during a stream.
  Workaround: Start the stream with the wired headset already plugged in, use a Bluetooth headset, or do not use an external microphone.
- If at any point during a ReplayKit stream you enable the microphone and then interrupt the audio session (e.g., with a phone call or by activating Siri), system audio will stop working. This is a ReplayKit bug that we are working with Apple to resolve.
  Workaround: On an audio interruption, stop the broadcast and alert the user.
- AirPods do not record any audio if the AVAudioSession category is set to record. By default, the SDK uses playAndRecord, so this issue manifests only if the category is changed to record.
  Workaround: If there is a chance that AirPods will be used to record audio, use playAndRecord even if your application is not playing back media.
- When AirPods are connected to an iOS 12 device, no other microphone can be used to record audio. Attempting to switch to an internal microphone immediately reverts back to the AirPods.
  Workaround: None. If AirPods are connected to iOS 12, they are the only device that can record audio.
- Submitting audio data faster than realtime (using a custom audio source) results in audio drift.
  Workaround: Do not submit audio data faster than realtime.
- Audio artifacts can appear at bitrates under 68 kbps when using a high sample rate (44100 Hz or greater) and two channels.
  Workaround: Increase the bitrate to 68 kbps or higher, decrease the sample rate to 24000 Hz or lower, or set channels to 1.