Part of MotionTree’s R&D is focused on the creation of RVS, an innovative language that lets developers easily describe interactions from the very heart of the MP4 file. The goal of the language is to simplify the description of interactions as much as possible, so that non-technical users can create the interactivity they want without any syntax issues.

RVS is designed to be transformed into any kind of cross-platform interactive language, such as MPEG-4 BIFS or HTML5.


Multi-State Video Coding (MSVC) is a multiple description scheme based on frame-wise splitting of the video sequence into two or more subsequences. Each subsequence is encoded separately to generate descriptions which can be decoded independently.
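The frame-wise splitting behind MSVC can be sketched in a few lines. The following is a minimal illustration only: frames stand in as plain numbers rather than real video data, and the round-robin assignment is one assumed concrete choice of splitting.

```javascript
// Split a frame sequence into n independently decodable descriptions
// by round-robin assignment: frame i goes to description i % n.
function splitDescriptions(frames, n) {
  const descriptions = Array.from({ length: n }, () => []);
  frames.forEach((frame, i) => descriptions[i % n].push(frame));
  return descriptions;
}

// Merge descriptions back into display order. If a description is lost,
// its slots stay null; a real decoder would conceal those gaps by
// interpolating from the frames of the surviving descriptions.
function mergeDescriptions(descriptions, n, total) {
  const out = new Array(total).fill(null);
  descriptions.forEach((desc, d) => {
    desc.forEach((frame, j) => { out[j * n + d] = frame; });
  });
  return out;
}

const frames = [0, 1, 2, 3, 4, 5];
const [even, odd] = splitDescriptions(frames, 2);
// even -> [0, 2, 4], odd -> [1, 3, 5]
const rebuilt = mergeDescriptions([even, odd], 2, frames.length);
// rebuilt -> [0, 1, 2, 3, 4, 5]
```

Because each subsequence is encoded on its own, losing one description degrades the frame rate rather than breaking the stream.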

Second Screen

A second screen refers to the use of a computing device (commonly a mobile device, such as a tablet or smartphone) to provide an enhanced viewing experience for content on another device, such as a television. Many second-screen applications are designed to give the user another form of interactivity and to provide another way to sell advertising content.


Media Source Extensions (MSE) is a W3C specification that allows JavaScript to send byte streams to media codecs within web browsers that support HTML5 video. Among other possible uses, this allows the implementation of client-side prefetching and buffering code for streaming media entirely in JavaScript.
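A minimal, browser-only sketch of how this looks in practice follows; the element id, segment URL, and codec string are assumptions for illustration and must match the actual page and stream.

```javascript
// Assumes a <video id="player"> element and a fragmented MP4 at /media/init.mp4.
const video = document.getElementById('player');
const mediaSource = new MediaSource();
video.src = URL.createObjectURL(mediaSource);

mediaSource.addEventListener('sourceopen', async () => {
  // The codec string is an assumed example; it must describe the real stream.
  const sb = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.42E01E"');
  const bytes = await fetch('/media/init.mp4').then((r) => r.arrayBuffer());
  sb.appendBuffer(bytes); // JavaScript hands raw bytes to the browser's codec
  sb.addEventListener('updateend', () => mediaSource.endOfStream());
});
```

In a real player, the `sourceopen` handler would keep fetching and appending media segments in a loop, which is exactly where custom prefetching and buffering logic lives.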


Binary Format for Scenes (BIFS) is a binary format for two- or three-dimensional audiovisual content. It is based on VRML and Part 11 of the MPEG-4 standard (Scene Description and Application Engine), published in 2005.

BIFS is the MPEG-4 scene description protocol used to compose MPEG-4 objects, describe interaction with them, and animate them.


High Efficiency Video Coding (HEVC) is a video compression standard, a successor to H.264/MPEG-4 AVC (Advanced Video Coding), that was jointly developed by the ISO/IEC Moving Picture Experts Group (MPEG) and ITU-T Video Coding Experts Group (VCEG) as ISO/IEC 23008-2 MPEG-H Part 2 and ITU-T H.265.

HEVC is said to double the data compression ratio compared to H.264/MPEG-4 AVC at the same level of video quality. It can alternatively be used to provide substantially improved video quality at the same bit rate. It supports resolutions up to 8192×4320, including 8K UHD.
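The practical effect of that claim can be shown with some back-of-the-envelope arithmetic. The 8 Mbps H.264 bitrate below is an assumed illustrative figure, not one from the standard.

```javascript
// "Double the compression ratio" means roughly the same quality at half the bitrate.
const h264Mbps = 8;              // assumed H.264 bitrate for a given quality level
const hevcMbps = h264Mbps / 2;   // HEVC at comparable quality, per the 2x claim

// Storage needed for a two-hour movie at each bitrate.
const movieSeconds = 2 * 60 * 60;
const toGB = (mbps) => (mbps * movieSeconds) / 8 / 1000; // megabits -> gigabytes

const h264GB = toGB(h264Mbps); // 7.2 GB
const hevcGB = toGB(hevcMbps); // 3.6 GB
```

The same halving applies to network bandwidth, which is why HEVC matters for streaming high resolutions.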


MotionTree’s R&D is developing an RVS-to-BIFS engine. The engine is created in parallel with the RVS language and tested every day in production. While it is being tested on concrete productions, the research team keeps working on it to answer the needs of everyday users. The engine is cross-platform and can therefore be used on any configuration (macOS, Windows, Linux, Android, iOS).


Scalable versions of HEVC currently provide temporal, SNR, and spatial scalability, as well as combinations of these. The design of HEVC enables temporal scalability when a hierarchical temporal prediction structure is used.

The design philosophy of the SHVC standard is to achieve high scalable coding efficiency using a relatively simple system architecture. Another consideration is to keep the architecture design maximally aligned with the multi-view extension of HEVC (MV-HEVC).


Dynamic Adaptive Streaming over HTTP (DASH), also known as MPEG-DASH, is an adaptive bitrate streaming technique that enables high-quality streaming of media content over the Internet delivered from conventional HTTP web servers. Similar to Apple’s HTTP Live Streaming (HLS) solution, MPEG-DASH works by breaking the content into a sequence of small HTTP-based file segments, each segment containing a short interval of playback time of content that is potentially many hours in duration, such as a movie or the live broadcast of a sports event.
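To make the segment model concrete, a minimal hypothetical MPD manifest might look like the following; all URLs, bitrates, and durations are invented for the example.

```xml
<MPD xmlns="urn:mpeg:dash:schema:mpd:2011" type="static"
     mediaPresentationDuration="PT2M" minBufferTime="PT2S"
     profiles="urn:mpeg:dash:profile:isoff-live:2011">
  <Period>
    <AdaptationSet mimeType="video/mp4" codecs="avc1.42E01E">
      <!-- One quality level; a real manifest lists several Representations
           so the client can switch bitrate per segment. -->
      <Representation id="720p" bandwidth="2000000" width="1280" height="720">
        <!-- 4-second segments: seg-1.m4s, seg-2.m4s, ... -->
        <SegmentTemplate initialization="init.mp4" media="seg-$Number$.m4s"
                         duration="4" timescale="1" startNumber="1"/>
      </Representation>
    </AdaptationSet>
  </Period>
</MPD>
```

The client reads the manifest, then fetches each short segment over plain HTTP, picking the Representation whose bandwidth best matches current network conditions.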


Qt is a cross-platform application framework that is widely used for developing application software that can be run on various software and hardware platforms with little or no change in the codebase, while having the power and speed of native applications.

We currently use Qt 5.3 for our applications.


GPAC Project on Advanced Content (GPAC) is an implementation of the MPEG-4 Systems standard written in ANSI C. The GPAC framework is being developed at École nationale supérieure des télécommunications (ENST) and provides tools for media playback, vector graphics and 3D rendering, MPEG-4 authoring and distribution.

GPAC provides three sets of tools based on a core library called libgpac:

  • A multimedia player: the cross-platform, command-line-based MP4Client, or Osmo4 with a GUI.
  • A multimedia packager, MP4Box.
  • Some server tools for multiplexing and streaming (under development).
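As an illustration of the packager, MP4Box can both mux elementary streams into an MP4 and cut the result into DASH segments; the file names below are invented, and `-dash` takes the segment duration in milliseconds.

```shell
# Mux raw H.264 video and AAC audio into a new MP4 container.
MP4Box -add movie.h264 -add movie.aac -new movie.mp4

# Segment the MP4 into 4-second DASH segments and write the manifest.
MP4Box -dash 4000 -out manifest.mpd movie.mp4
```

The resulting manifest and segments can then be served by any conventional HTTP server, as described in the DASH section above.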

GPAC is cross-platform. It is written in (almost 100% ANSI) C for portability reasons, attempting to keep the memory footprint as low as possible. It is currently running under Windows, Linux, Solaris, Windows CE (SmartPhone, PocketPC 2002/2003), iOS, Android, Embedded Linux (familiar 8, GPE) and recent Symbian OS systems.