1. A Short Introduction to ONVIF
The Open Network Video Interface Forum (ONVIF) is an open industry forum established in 2008 by Axis Communications, Bosch Security Systems, and Sony Corporation. It is committed to standardizing communication between network devices to ensure interoperability between network products for the security market. Since its inception, ONVIF has published several documents and specifications describing a flexible, scalable, and evolving interface that defines how security devices may be addressed and utilized. Along with its other activities, ONVIF seeks to provide a better and clearer understanding of the standard and its capabilities.
2. What Programmers Need
The following two documents and one software tool should be adequate for most programmers who are asked to connect to and control ONVIF cameras.
I wrote two small pieces of software to demonstrate connecting to ONVIF cameras. The first demonstration contains a GUI that connects to live video streams (H.264 and MJPEG) all over the world (Nokia Qt4 C++ IDE, VLC codec C++ wrapper), as seen in Figure 2.1.
The second demonstration connects to live ONVIF cameras on the Internet or inside a local network. The GUI can be seen in Figures 2.2-2.3 (Visual Studio .NET C#, VLC C# wrapper, ONVIF web services (.wsdl), and SOAP communication).
On the GUI, the ONVIF SCANNER callback function was created with the help of the pseudocode from the Discovery part of the ONVIF Programmer Guide. Connecting to an ONVIF camera is achieved by choosing one of the cameras listed by the ONVIF SCANNER. Connecting to ONVIF cameras and streaming over RTSP or UDP requires three to five other parts of the ONVIF Programmer Guide (Initial Setup and Administration, Authentication, Streaming, and/or Storage and/or Starting a Recording from a Remote Device). Connecting to two cameras and streaming from them simultaneously was only to demonstrate the multi-threading capability of the software.
For creating/testing new user profiles and testing your console programs, you should download and install the ONVIF Device Manager software. It may also give you good ideas while creating the final GUI for your software, as seen in Figure 2.4.
As I briefly explained above, the ONVIF Programmer Guide gives you everything you need for the standard in eight parts. If you look at the Programmer Guide, you will find the details of the following parts. Below, I give a review of the ONVIF Programmer Guide documentation; most of the text is taken from the guide itself.
Each part of the documentation includes a short description, prerequisites, targeted services and technologies, and pseudocode for programmers.
3. Review of the ONVIF Programmer Guide
I give a shortened form of the eight parts of the ONVIF Programmer Guide. All text and figures are taken from the ONVIF Programmer Guide.
3.1. Discovery
ONVIF devices support WS-Discovery, which is a mechanism that supports probing a network to find ONVIF capable devices. For example, it enables devices to send Hello messages when they come online to let other devices know they are there. In addition, clients can send Probe messages to find other devices and services on the network. Devices can also send Bye messages to indicate they are leaving the network and going offline.
Messages are sent over UDP to a standardized multicast address and UDP port number. All the devices that match the types and scopes specified in the Probe message respond by sending ProbeMatch messages back to the sender.
WS-Discovery is normally limited by the network segmentation at a site, since the multicast packets typically do not traverse routers. Using a Discovery Proxy could solve that problem, but details about this topic are beyond the scope of this document. For more information, see [ONVIF/Discovery] and [WS-Discovery].
3.1.1. Prerequisites
No prerequisites are needed.
3.1.2. Targeted Services and Technologies
[ONVIF/Device] and [devicemgmt.wsdl]
In the Discovery use case, we send a WS-Discovery Probe message and wait for ProbeMatch responses. The responses are processed, and relevant info is stored in a list for later processing by ONVIF::ProcessMatch (see chapter 5.1.3 of the guide).
// Send WS-Discovery Probe, collect the responses, and then
// process the responses.

// Send probe. See chapter 4.3.1 for details.
probe = ONVIF::DiscoverySendProbe(scopes, types);

// Wait a while for responses. Fetch each probe match so that we can
// put it into the list. See chapter 4.3.2 for details.
while ((probematch = ONVIF::DiscoveryReadResponse(probe)) != null) {
    // Store info about the match, but first check for duplicates.
    if (!in_list(probematcheslist, probematch))
        add_to_list(probematcheslist, probematch);
}

// Process the responses. See chapter 5.1.3 for details.
foreach (probeMatch in probematcheslist)
    ONVIF::ProcessMatch(probeMatch);
HINT: To maximize the number of discovered devices, best practice would be to collect all available responses before processing the contents of each individual response message. Responses may be lost if too much time is spent processing individual responses during the brief time that all the devices on the network are responding to the probe.
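The probe/collect/process flow above can be sketched with nothing but the Python standard library. The multicast address 239.255.255.250 and port 3702 come from the WS-Discovery specification; the message body mirrors the Probe trace shown later in this document, while the function and variable names are my own:

```python
# Minimal WS-Discovery probe sketch (standard library only).
import socket
import uuid

PROBE = """<?xml version="1.0" encoding="UTF-8"?>
<e:Envelope xmlns:e="http://www.w3.org/2003/05/soap-envelope"
            xmlns:w="http://schemas.xmlsoap.org/ws/2004/08/addressing"
            xmlns:d="http://schemas.xmlsoap.org/ws/2005/04/discovery"
            xmlns:dn="http://www.onvif.org/ver10/network/wsdl">
  <e:Header>
    <w:MessageID>uuid:%s</w:MessageID>
    <w:To>urn:schemas-xmlsoap-org:ws:2005:04:discovery</w:To>
    <w:Action>http://schemas.xmlsoap.org/ws/2005/04/discovery/Probe</w:Action>
  </e:Header>
  <e:Body>
    <d:Probe><d:Types>dn:NetworkVideoTransmitter</d:Types></d:Probe>
  </e:Body>
</e:Envelope>"""

def discover(timeout=3.0):
    """Send one Probe and collect raw ProbeMatch responses."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.sendto((PROBE % uuid.uuid4()).encode(), ("239.255.255.250", 3702))
    responses = []
    try:
        while True:
            data, _addr = sock.recvfrom(65535)
            # Collect everything first, process later, as the HINT advises.
            if data not in responses:
                responses.append(data)
    except socket.timeout:
        pass
    finally:
        sock.close()
    return responses
```

Parsing the XAddrs out of each collected response is then done in a separate pass, exactly as the pseudocode's foreach loop suggests.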
3.2. Initial Setup and Administration
3.2.1 First Actions After Discovery
After ONVIF devices are discovered using WS-Discovery, you would typically access a device using the supplied XAddrs to test whether it is reachable. Use device.GetSystemDateAndTime to accomplish this, because it should not require authentication. You can also consider calling device.GetDeviceInformation and device.GetCapabilities.
- Devices must already be discovered.
- At least one valid XAddrs for the device service entry point.
3.2.2 Targeted Services and Technologies
- [ONVIF/Device] and [devicemgmt.wsdl]
When processing the list of discovered devices, first do a device.GetSystemDateAndTime() call:
- To determine if the device is reachable with the supplied XAddr
- To obtain the device time
- To determine the time offset between the device and the client, which will be needed later when authenticating with WS-UsernameToken
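As a minimal sketch (the helper name and argument list are mine, not from the guide), the time-offset computation from the UTCDateTime fields of a GetSystemDateAndTime response might look like:

```python
# Sketch: compute the device/client time offset from the date and time
# fields of a GetSystemDateAndTime response. The fields mirror the
# ONVIF UTCDateTime structure; the helper itself is illustrative.
from datetime import datetime, timezone

def device_time_offset(year, month, day, hour, minute, second, now=None):
    """Return device time minus client time as a timedelta."""
    device_utc = datetime(year, month, day, hour, minute, second,
                          tzinfo=timezone.utc)
    client_utc = now or datetime.now(timezone.utc)
    return device_utc - client_utc
```

The resulting offset is later added to the client clock when generating the Created timestamp of a WS-UsernameToken, so that requests are valid from the device's point of view even when the two clocks disagree.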
3.3. Security
This chapter describes how to use the security functions included in the ONVIF specification. It covers authenticating with WS-UsernameToken, communicating over TLS, and streaming over HTTPS.
3.3.1 WS-UsernameToken
When a device requires authentication to access a web service, the client uses WS-UsernameToken for the device. The ONVIF specification does not include an example that shows how WS-UsernameToken works, so this section describes how to establish authentication between a client and a device.
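The digest itself is defined by the WS-Security UsernameToken Profile as Base64(SHA-1(nonce + created + password)). A minimal sketch, with the function name being my own invention:

```python
# Sketch of the WS-UsernameToken PasswordDigest computation defined by
# the WS-Security UsernameToken Profile:
#   Digest = Base64( SHA-1( nonce + created + password ) )
import base64
import hashlib
import os
from datetime import datetime, timezone

def username_token(username, password):
    """Build the values needed for a WS-UsernameToken header."""
    nonce = os.urandom(16)  # fresh random nonce per request
    created = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    digest = base64.b64encode(
        hashlib.sha1(nonce + created.encode() + password.encode()).digest()
    ).decode()
    return {
        "Username": username,
        "PasswordDigest": digest,
        "Nonce": base64.b64encode(nonce).decode(),
        "Created": created,
    }
```

Note that Created must be plausible from the device's point of view, which is why the time offset obtained in the Initial Setup chapter matters.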
3.3.2 User Management
This section describes setting up user information for authentication, emphasizing how to set the parameters of the commands for user registration.
3.3.3 Certificate Management and Usage
According to the ONVIF specification, a client can use TLS to connect to a device, and some methods are defined for setting up the TLS connection. The procedure for setting up a TLS connection differs depending on the type of authentication being used, but the specification does not define the details for how to use these methods to set up the connection. Therefore, this section provides an example of how to handle a certificate for server authentication of TLS.
Both a self-signed certificate and a certificate signed by a Certificate Authority (CA) can be used for TLS communication. The following figures show the basic architecture for using these certificates.
3.3.4 Real-Time Streaming via RTP / RTSP / HTTPS
According to the ONVIF specification, clients can get real-time streaming via TLS. However, the specification does not provide a parameter for specifying HTTPS in tt:TransportProtocol, so implementing this feature is not obvious. This section provides an example of how to get real-time streaming via RTP/RTSP/HTTPS (the procedure for HTTP and RTSP authentication, however, is not described here).
3.4. Media Streaming
This chapter describes the real-time video and audio streaming functions in the ONVIF specification. Streaming is configured and controlled using media profiles; each configuration becomes effective when it is added to a profile. The following diagram shows a media profile.
A media profile includes the following for configuring video streaming capabilities:
- VideoSourceConfiguration – Contains a reference to a VideoSource and a Bounds structure containing either the entire VideoSource pixel area or a sub-portion.
- VideoEncoderConfiguration – Contains encoding data that consists of codec, pixel resolution, and quality specifications.
Receiving media streaming involves getting a stream URI from a certain media profile.
3.4.1 Using an Existing Profile for Media Streaming
A device with a media configuration service enabled includes at least one media profile at boot. This use case shows how to start video streaming with UDP Unicast by using an existing media profile.
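A hedged sketch of such a request over plain SOAP, without a WSDL toolkit. The service URL and profile token are placeholders, and a real request would also carry the WS-UsernameToken header described in the Security chapter:

```python
# Sketch: request a stream URI for an existing media profile via a raw
# SOAP POST to the media service. StreamSetup asks for RTP-Unicast over
# UDP, matching this use case.
import urllib.request

GET_STREAM_URI = """<?xml version="1.0" encoding="UTF-8"?>
<e:Envelope xmlns:e="http://www.w3.org/2003/05/soap-envelope"
            xmlns:trt="http://www.onvif.org/ver10/media/wsdl"
            xmlns:tt="http://www.onvif.org/ver10/schema">
  <e:Body>
    <trt:GetStreamUri>
      <trt:StreamSetup>
        <tt:Stream>RTP-Unicast</tt:Stream>
        <tt:Transport><tt:Protocol>UDP</tt:Protocol></tt:Transport>
      </trt:StreamSetup>
      <trt:ProfileToken>%s</trt:ProfileToken>
    </trt:GetStreamUri>
  </e:Body>
</e:Envelope>"""

def get_stream_uri(service_url, profile_token):
    """POST a GetStreamUri request and return the raw SOAP response."""
    req = urllib.request.Request(
        service_url,
        data=(GET_STREAM_URI % profile_token).encode(),
        headers={"Content-Type": "application/soap+xml; charset=utf-8"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()
```

The MediaUri returned in the response body is then handed to an RTSP-capable player (VLC, in the demonstration software above).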
3.4.2 Media Profile Configuration
A media profile consists of configuration entities such as video/audio source configuration, video/audio encoder configuration, or PTZ configuration. This use case describes how to change one configuration entity which has been already added to the media profile.
3.4.3 Creating a New Media Profile and Adding an Entity
The NVT presents different available profiles depending on its capabilities. This use case describes how to create a new media profile. It is useful when, for example, we receive multiple streams.
3.4.4 Multicast Streaming
According to the ONVIF specification, a client can control multicast streaming of a device, and some methods are defined for multicast streaming setup and control. A client needs to specify how to control multicast streaming. This section provides two samples for IPv4 streaming where a client sets multicast streaming configuration and controls the RTP stream. Also, a “bad practice” for the multicast stream setting is described (but not recommended).
3.4.5 Audio Backchannel Handling
This use case shows how a bidirectional audio connection could be established using the ONVIF RTSP extension. The NVT in this example provides one audio output that can be connected to a loudspeaker. It may be able to decode G.711, G.726, or AAC audio. The client is able to stream G.711 audio.
a. The necessary settings for stream setup are set up. The client uses the device I/O service to request the available audio outputs and their configurations.
b. An existing media profile that is already configured with VideoSourceConfiguration and VideoEncoderConfiguration, as well as AudioSource- and AudioEncoderConfiguration, is used. To configure this profile for a backchannel connection, a suitable AudioDecoder and AudioOutputConfiguration is added.
No parameters are available to configure the decoder. The client asks for the decoding capabilities of the specific configuration, and selects a suitable one that supports G.711 encoding.
c. After requesting the stream URI, an RTSP connection is established.
The ONVIF core specification already includes an example of how a Unicast RTSP connection with backchannel support is established. To provide additional information, this use case establishes an HTTP tunnelled RTP connection.
3.4.6 Setting Up Metadata Streaming
Metadata streaming is a way to receive event notifications in real-time over an RTP or RTSP stream. The transport can be included in a number of protocols supported by the device: RTP, RTP multicast, RTP over RTSP, and RTP over RTSP over HTTP. First, a media profile is set up that contains a MetadataConfiguration with the desired event filter. After that, the stream URI for that profile can be fetched and used. For more information regarding event notification, see Chapter 3.6, Eventing.
3.5. PTZ Control
This chapter describes how to control PTZ. A PTZ-capable NVT may have one or many PTZ nodes. A PTZ node may be a mechanical PTZ driver, an uploaded PTZ driver on a video encoder, or a digital PTZ driver. The PTZ node is the lowest-level entity of PTZ control, and it specifies the supported PTZ capabilities. A PTZConfiguration has a node token and default settings, which are indicated by URIs. A PTZConfiguration is added to a media profile; therefore, we can control PTZ operation through the media profile.
3.5.1 Adding a PTZ Configuration into a Media Profile
This use case describes how to add a new PTZ configuration into a specific media profile. The PTZ configuration cannot be added to default media profiles, so you must add the PTZ configuration before attempting a PTZ operation. The new PTZ configuration can be verified by calling GetProfiles or GetProfile in the media service.
3.5.2 Changing a PTZ Configuration
This use case describes how to change a PTZ configuration. The PTZ service provides absolute move, relative move, and continuous move operations. This enables you to change PTZ control operations easily.
3.5.3 Move Operation
This use case describes how to move the PTZ unit.
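As an illustration (the helper and its clamping behavior are my own additions, not from the guide), a ContinuousMove request body for the PTZ service might be built like this; the velocity values are normalized to [-1.0, 1.0]:

```python
# Sketch: fill in a ContinuousMove request body for the PTZ service.
# The profile token is a placeholder; movement continues until a PTZ
# Stop command is sent.
CONTINUOUS_MOVE = """<?xml version="1.0" encoding="UTF-8"?>
<e:Envelope xmlns:e="http://www.w3.org/2003/05/soap-envelope"
            xmlns:tptz="http://www.onvif.org/ver20/ptz/wsdl"
            xmlns:tt="http://www.onvif.org/ver10/schema">
  <e:Body>
    <tptz:ContinuousMove>
      <tptz:ProfileToken>{token}</tptz:ProfileToken>
      <tptz:Velocity>
        <tt:PanTilt x="{pan}" y="{tilt}"/>
        <tt:Zoom x="{zoom}"/>
      </tptz:Velocity>
    </tptz:ContinuousMove>
  </e:Body>
</e:Envelope>"""

def continuous_move_body(token, pan=0.0, tilt=0.0, zoom=0.0):
    """Fill in the template, clamping each velocity to [-1.0, 1.0]."""
    def clamp(v):
        return max(-1.0, min(1.0, v))
    return CONTINUOUS_MOVE.format(token=token, pan=clamp(pan),
                                  tilt=clamp(tilt), zoom=clamp(zoom))
```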
3.5.4 Set / Goto Preset Position
The preset function saves the current device position parameters. If we set a preset position, the device can later move to it. Preset operations are set according to the media profile. This use case describes how to set and go to a preset position.
3.6. Eventing
The ONVIF specification includes three different types of event notifications:
- Real-time Pull-Point Notification Interface
- Basic Notification Interface (WS-BaseNotification)
- Notification Streaming Interface (metadata streaming)
The following section describes the GetEventProperties action, which is a way of finding out what notifications a device supports and what information they contain. The next two sections describe how to set up the subscriptions for the first two methods above, and the last section describes how the Notification Message is processed. More information about the Notification Streaming Interface appears in Section 3.4.6, Setting Up Metadata Streaming.
3.6.1 GetEventProperties
The GetEventProperties action returns the various event topics that the device supports.
3.6.2 Setting Up PullPoint Subscription
PullPoint subscription is used when a client wants to fetch event notifications from a service. This is an ONVIF extension to the standard WS-BaseNotification mechanisms. First, a subscription is created and a subscriptionReference is returned, which is used in PullMessages requests to fetch the actual event notifications. If no notifications are available, the response is delayed.
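A sketch of the resulting poll loop, assuming a generic send_soap(address, body) helper like the ones used for the other requests (both helper names are placeholders of mine):

```python
# Sketch of the pull-point loop: a subscription is created once with
# CreatePullPointSubscription, then polled repeatedly with PullMessages.
PULL_MESSAGES = """<?xml version="1.0" encoding="UTF-8"?>
<e:Envelope xmlns:e="http://www.w3.org/2003/05/soap-envelope"
            xmlns:tev="http://www.onvif.org/ver10/events/wsdl">
  <e:Body>
    <tev:PullMessages>
      <tev:Timeout>PT10S</tev:Timeout>
      <tev:MessageLimit>100</tev:MessageLimit>
    </tev:PullMessages>
  </e:Body>
</e:Envelope>"""

def pull_events(send_soap, subscription_reference, rounds=3):
    """Poll the subscription a few times and collect raw responses."""
    responses = []
    for _ in range(rounds):
        # PullMessages is sent to the SubscriptionReference address
        # returned by CreatePullPointSubscription, not to the event
        # service entry point itself.
        responses.append(send_soap(subscription_reference, PULL_MESSAGES))
    return responses
```

The PT10S timeout means each PullMessages call blocks up to ten seconds on the device side when no notifications are pending, which is the delayed-response behavior described above.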
3.6.4 Processing NotificationMessage
The notification message looks the same regardless of the method by which it was delivered. The following sections provide a simple example of how such a message can be processed. The content of a Notification can be vendor-specific.
3.7. Recording
3.7.1 Starting a Local Recording
This use case demonstrates how to start a local recording on a device. The device has embedded storage (such as an SD card) to store the data. The client has already set up a media profile on the device with a token Profile1 that should be used for recording.
First, the client asks for the existing recordings on the storage unit. In this example, the client uses an existing recording and possibly overwrites or adds new data. A new recording could also be created, if this is supported by the device.
Next, the client changes the configuration of the recording. It stores the necessary information about the source (like the name, location, or IP address of the device), the description of the content, and the retention time.
Then, it creates a RecordingJob that transfers the data from the RecordingSource (in this use case, the media profile) to the recording. The Recording Job mode is set to Active, so the device starts the recording automatically and no interaction from the client is necessary.
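As a rough sketch only (the element names follow my reading of the recording service schema and should be checked against [recording.wsdl]; the tokens are placeholders), the CreateRecordingJob request for the steps above might look like:

```python
# Sketch: a CreateRecordingJob body that records from media profile
# "Profile1" into an existing recording, with Mode set to Active so
# the device starts recording without further client interaction.
CREATE_RECORDING_JOB = """<?xml version="1.0" encoding="UTF-8"?>
<e:Envelope xmlns:e="http://www.w3.org/2003/05/soap-envelope"
            xmlns:trc="http://www.onvif.org/ver10/recording/wsdl"
            xmlns:tt="http://www.onvif.org/ver10/schema">
  <e:Body>
    <trc:CreateRecordingJob>
      <trc:JobConfiguration>
        <tt:RecordingToken>{recording}</tt:RecordingToken>
        <tt:Mode>Active</tt:Mode>
        <tt:Priority>1</tt:Priority>
        <tt:Source>
          <tt:SourceToken>
            <tt:Type>http://www.onvif.org/ver10/schema/Profile</tt:Type>
            <tt:Token>{profile}</tt:Token>
          </tt:SourceToken>
          <tt:AutoCreateReceiver>false</tt:AutoCreateReceiver>
        </tt:Source>
      </trc:JobConfiguration>
    </trc:CreateRecordingJob>
  </e:Body>
</e:Envelope>"""

def create_recording_job_body(recording_token, profile_token):
    """Fill the template with the recording and profile tokens."""
    return CREATE_RECORDING_JOB.format(recording=recording_token,
                                       profile=profile_token)
```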
3.7.2 Starting a Recording from a Remote Device
This use case shows how to set up remote recording from another camera in the network. The client has already configured a media profile on the remote device and has requested the stream URL.
The client has also already created a recording (MyRec) that should be used to store the data. It sets up a RecordingJob that transfers the data from the receiver to the recording. The RecordingJob has an AutoCreateReceiver flag. If this flag is set to true, a Receiver is automatically created in the Receiver Service and is associated with the recording job. If the recording job is deleted, this receiver is also deleted without client interaction. The client has to configure the receiver with the already known RTSP URI and the stream setup. Then it can start the recording job by setting the recording job mode to Active.
3.7.3 Finding a Recording
This use case describes how a simple search for recordings could be done. The client wants to create a list of available recording footage in a recording. This list could be used to replay the data and give information about which events happened during this time.
In this use case, the device has one recording with audio, video, and meta tracks. The client looks for the IsDataPresent event to find the times when a recording job was started and when it was stopped. Therefore, the client sets up a FindEvents job and waits for the results. Afterwards, it can go through the list and find the start and stop times of the recording job.
3.8. Display
This chapter focuses on display devices, which are devices that provide the Display service interface and functionality.
A display device provides video outputs which represent monitors or displays. A video output provides so-called panes. A pane is a region within the video output where a stream can be displayed after decoding. The pane is bound to the video output with its associated layout, which defines one or more regions to display. The structure holding the PaneLayouts is an ordered list.
This is essential because, with overlapping panes, the top elements in the list are displayed over the lower elements, as shown in the following tree diagram of the Layout. A Layout holds the panes visible on a VideoOutput. One pane is represented by a PaneLayout entity. The Pane parameter contains the reference to the associated PaneConfiguration and an Area object. The Area contains the values for top, bottom, right, and left, which describe the geometrical dimensions of the pane.
NOTICE: Make sure that you do not mix up the order when parsing or serializing the Layout structures, because the display might behave in an unexpected manner. For more information, see section 2 of [ONVIF/Display-Service] which describes Layout.
NOTICE: The area coordinate values are expressed in normalized units that range between -1.0 and 1.0. If two contiguous display regions share the same border value and their ranges do not otherwise overlap, then the regions do not overlap at all. See the additional descriptions in [display.wsdl].
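A small sketch of that overlap rule, assuming an Area is represented as a plain mapping of its four border values (the function and the representation are mine, not from the specification):

```python
# Sketch: overlap test for two normalized pane Areas. Coordinates run
# from -1.0 to 1.0; panes that merely share a border value do not
# overlap, which matches the NOTICE above.
def areas_overlap(a, b):
    """a and b are dicts with 'top', 'bottom', 'left', 'right'."""
    horizontal = a["left"] < b["right"] and b["left"] < a["right"]
    vertical = a["bottom"] < b["top"] and b["bottom"] < a["top"]
    return horizontal and vertical
```

For example, a left half-screen pane and a right half-screen pane share the border value 0.0 but do not overlap.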
You should also read the Annex parts of the ONVIF Programmer Guide, which include an example of discovering ONVIF cameras on a network.
4. Communication Traces from Use Case Examples
The following SOAP traces are used in the ONVIF use cases described throughout the Application Programmers Guide.
SOAP Communication Trace for Discovery
The following trace refers to Section 3.1, Discovery.
In the examples below, the following Probe parameter is used:
- Types: dn:NetworkVideoTransmitter
<!-- Discovery.Probe message -->
<?xml version="1.0" encoding="UTF-8"?>
<e:Envelope xmlns:e="http://www.w3.org/2003/05/soap-envelope" xmlns:w="http://schemas.xmlsoap.org/ws/2004/08/addressing" xmlns:d="http://schemas.xmlsoap.org/ws/2005/04/discovery" xmlns:dn="http://www.onvif.org/ver10/network/wsdl">
<!-- Discovery.ProbeMatch response (one of many similar responses) -->
<?xml version="1.0" encoding="UTF-8"?>
type/audio_encoder onvif://www.onvif.org/hardware/MODEL onvif://www.onvif.org
5. Latest News from ONVIF Standardization
5.1. ONVIF Expands Profile Concept with Physical Access Control, IP Video Integration Release Candidate
27th August 2013. ONVIF, the leading global standardization initiative for IP-based physical security products, announced the availability of the Release Candidate for Profile C, which enables interoperability between clients and devices of physical access control systems (PACS) and network-based video systems. This new Profile, which is available for review on the ONVIF website, extends the functionality of the ONVIF global interface specification into physical access control.
With Profile C, systems integrators, specifiers and consultants will be able to more easily deploy an integrated IP-based video and access control solution from a variety of different video and access control providers. Compatibility between edge devices and clients helps to simplify installation and user training by reducing the need for multiple proprietary monitoring systems to handle different PACS devices.
“Integration between IP-based physical access control systems and video surveillance is no longer considered a luxury in today’s market, and is becoming a necessary component for many different types of users,” said Baldvin Gislason Bern, Chairman of ONVIF’s Profile C Working Group. “With Profile C, users and specifiers will be able to integrate the Profile C products of their choosing without relying on existing integrations between manufacturers.”
5.2. ONVIF Publishes Profile G Release Candidate for Video Storage, Recording
14th August 2013. ONVIF, the leading global standardization initiative for IP-based physical security products, announced today the Release Candidate for Profile G, the specification designed to store, search, retrieve and play back media on devices or clients that support recording capabilities and on-board storage. This new Profile is now available for review on the ONVIF website.
As with Profile S, which ONVIF introduced in 2012 as the standard interface to stream video and audio between conformant devices and clients, Profile G now brings video playback into the Profile concept. Having global interface specifications, with specific functionalities easily identified by Profiles, makes it easier for end users, integrators, consultants and manufacturers to harness the opportunities offered by network video technology.
“The introduction of Profile G will complete the circuit between live video and the other half of the equation, which is video storage,” said Steven Dillingham, Chairman of ONVIF’s Profile G Working Group and Software Engineer for Vidsys. “This further refines the level of interoperability among ONVIF-conformant products,” he added.