bada – Media

Hi, welcome to the presentation on the Media namespace of the bada platform. Let's go into the details. In this lecture I will cover topics related to playing different types of audio and video files and streams. Then I will explain the device camera functionality bada provides and how to use it. Next I will discuss handling image files and the image-processing features provided in bada, along with some utility APIs. Audio and video recording is an important aspect of the Media namespace, so I will discuss the AudioRecorder and VideoRecorder classes. Lastly I will discuss the audio/video encoding and decoding mechanisms in bada.

The Media namespace defines interfaces and classes that let you integrate audio, video, and image processing functions into your application. Key features provided by the Media namespace are: image/video encoding, decoding, and conversion between different formats; playing audio and video from media files stored on a device or streamed over the network; recording audio and video; using the device camera to display a live preview and capture a still image; Digital Rights Management (DRM) for digital media; and using codecs to encode and decode audio/video data.

In the next few slides I will explain how to play media files, which media formats are supported, and how to handle DRM content. The Player class provides methods for playing audio and video content from local storage or from a remote server. It can play audio or video files stored on local storage devices as well as audio or video streams served from a server. In addition, the Player class provides basic media player controls such as play, pause, stop, and resume, along with time-based seeking, volume control, and looping functions. This class has been designed to support applications such as a media player.

bada provides not only the Player class but also the DRM class (Digital Rights Management). The DRM class automatically identifies the rights-management scheme, such as OMA or WMDRM, and enables playback of the file in the appropriate way. The DRM manipulation functions of the DRM class belong to the DRM_CONTENT privilege group. The Player class also provides a method to capture a frame of the video during playback.

A few important APIs are listed here. Before any media content can be played, it must first be opened. OpenBuffer() opens audio or video content in memory for playback. OpenFile() opens an audio or video file to be played. OpenUrl() opens streaming audio or video content through the specified URL. Play() starts or resumes playback. Pause() pauses playback. Stop() stops playback. Close() closes the opened media. SeekTo() changes the current playback position. SetLooping() sets the audio or video player to loop. SetRenderingBuffer() sets the rendering buffer for video playback. GetCurrentMediaStreamInfoN() gets the current media stream information. CaptureVideo() captures a video frame.

Now, let's look at the audio and video formats and streaming protocols the Player class supports. For video, the H.264, H.263, MPEG-4, and VC-1 formats are supported; for audio, the PCM, AMR-NB, AAC, AAC+, eAAC+, MIDI, MP3, and WMA formats are supported. For streaming, the Player class supports the RTSP and HTTP protocols.
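To make the flow concrete, here is a minimal sketch of opening and playing a local file with Player and a stub IPlayerEventListener. It is illustrative only, not slide code: the file path is hypothetical, error handling is abbreviated, and the exact callback signatures should be checked against the bada API reference.

    #include <FMedia.h>

    using namespace Osp::Media;

    // Minimal listener; the player engine is asynchronous, so these
    // callbacks report open/seek completion, buffering, and errors.
    class MyPlayerListener : public IPlayerEventListener
    {
    public:
        void OnPlayerOpened(result r) {}          // content ready; safe to Play()
        void OnPlayerEndOfClip(void) {}           // clip finished
        void OnPlayerBuffering(int percent) {}    // streaming buffer status
        void OnPlayerErrorOccurred(PlayerErrorReason r) {}
        void OnPlayerInterrupted(void) {}         // preempted by a higher-priority task
        void OnPlayerReleased(void) {}            // interruption released
        void OnPlayerSeekCompleted(result r) {}   // SeekTo() finished
    };

    result PlayLocalFile(Player& player, MyPlayerListener& listener)
    {
        result r = player.Construct(listener, null); // null: no video rendering buffer
        if (IsFailed(r)) return r;

        // "/Home/test.mp3" is a hypothetical path used for illustration.
        r = player.OpenFile(L"/Home/test.mp3");
        if (IsFailed(r)) return r;

        player.SetLooping(false);
        return player.Play();
    }

The same skeleton extends to streaming by replacing OpenFile() with OpenUrl(), with OnPlayerBuffering() reporting progress while data arrives.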

Sometimes, depending on the application, more than one media file needs to be played simultaneously. For example, in a game, sound effects and background music may play at the same time. To support these cases, up to 10 Player instances can be created at the same time, and the Player class supports playback of multiple audio streams. Multiple compressed audio streams with different audio codecs can be played simultaneously, and multiple uncompressed audio streams can also be played simultaneously.

The IPlayerEventListener interface specifies the methods used to notify the application of the media player's status during playback. The player engine works asynchronously, so it is important to implement this listener to ensure that the player flows correctly. When each operation of the Player completes, an event is generated and a callback method of this interface is called. OnPlayerOpened() is called when audio/video content is opened asynchronously. OnPlayerBuffering() is called while streaming data is being buffered. OnPlayerEndOfClip() is called when the Player reaches the end of the clip. OnPlayerInterrupted() is called when the Player is interrupted by a task of higher priority than the Player. OnPlayerReleased() is called when the interrupting task has been released. OnPlayerErrorOccurred() is called when an error occurs while the Player is working. OnPlayerSeekCompleted() is called when the playback position of the audio/video content has moved asynchronously.

The IPlayerVideoEventListener interface specifies the method used to get the decoded video frame. Once the current video frame is decoded, the decoded frame buffer is delivered to the application through this callback method of the Player.

Programmatic control of the device camera is essential in a smartphone development platform. In the coming slides I will explain the features bada provides for using the device camera. The Camera class provided by bada belongs to the CAMERA privilege group. The basic functions of the Camera are taking a picture and providing a real-time preview. The Camera supports the RGB and YCbCr color formats, provides control functions for autofocus and flash, and provides adjustment functions for brightness, contrast, exposure, ISO level, and white balance. For real-time previews, settings for the preview format, preview frame rate, and preview resolution are also provided.

Now, let's see how we can perform a preview using the Camera class. The Camera class provides APIs to power the device camera on and off. Many getter/setter functions are provided for camera settings such as brightness, contrast, white balance, image capture format and resolution, and preview format and resolution. Apart from these, it provides APIs to capture an image, preview video data, and zoom in and out.

The ICameraEventListener interface specifies the methods used to notify the application of the camera's status and camera events. The camera engine works asynchronously, so it is important to implement this listener to ensure smooth operation of the camera. It provides callbacks for events such as image capture, preview, and autofocus.
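As a reference point for the following slides, here is a skeleton of such a listener. It is a sketch, not slide code; the exact set of callbacks and their signatures should be verified against the ICameraEventListener documentation.

    #include <FMedia.h>

    using namespace Osp::Media;

    // Skeleton camera listener; the camera engine calls these
    // asynchronously as frames are previewed and pictures are taken.
    class MyCameraListener : public ICameraEventListener
    {
    public:
        // Delivered once per preview frame when StartPreview() was
        // called with previewedData = true.
        void OnCameraPreviewed(Osp::Base::ByteBuffer& previewedData, result r) {}

        // Delivered after Capture() with the still-image data.
        void OnCameraCaptured(Osp::Base::ByteBuffer& capturedData, result r) {}

        // Autofocus finished; completeCondition tells whether it succeeded.
        void OnCameraAutoFocused(bool completeCondition) {}

        void OnCameraErrorOccurred(CameraErrorReason r) {}
    };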
You can get a list of the preview formats supported by the Camera by calling the GetSupportedPreviewFormatListN() function, and then select one of them. To draw the Camera's preview on the screen, you have to prepare an overlay region. As with the video player, the preview is drawn into the background buffer of the overlay region. In general, for a camera preview, you have to create a new Form and set its orientation to landscape. Then you create a new overlay region, add the overlay region to the Form, and set the Form as the current form. After this, you acquire the buffer info of the overlay region's background buffer, pass it to the Camera class, and start the preview.
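Here is a minimal sketch of that preparation, assuming the bada 2.0 OverlayRegion API (Form::GetOverlayRegionN() and OverlayRegion::GetBackgroundBufferInfo()); the bounds, the evaluation option, and the helper name PreparePreviewRegion are illustrative, so verify the exact signatures against the API reference.

    #include <FUi.h>
    #include <FGraphics.h>

    using namespace Osp::Ui::Controls;
    using namespace Osp::Graphics;

    // Prepare the overlay region for a camera preview on a landscape Form.
    // Produces the background BufferInfo to hand to Camera::StartPreview().
    result PreparePreviewRegion(Form* pForm, OverlayRegion*& pRegion,
                                BufferInfo& bufferInfo)
    {
        // Ask the platform to adjust the requested bounds to values
        // the hardware can actually display.
        Rectangle bounds(0, 0, 480, 320);
        OverlayRegion::EvaluateBounds(OVERLAY_REGION_EVALUATION_OPTION_LESS_THAN,
                                      bounds);

        // Create a displayable overlay region on the Form with the valid bounds.
        pRegion = pForm->GetOverlayRegionN(bounds, OVERLAY_REGION_TYPE_PRIMARY_CAMERA);
        if (pRegion == null)
            return GetLastResult();

        // Retrieve the background buffer information; the camera preview
        // will be rendered into this buffer.
        return pRegion->GetBackgroundBufferInfo(bufferInfo);
    }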

Now, I will give you a brief introduction to the procedure for taking a picture. To prepare, create and implement an ICameraEventListener object. Then construct the Camera, specifying the ICameraEventListener and the camera device as parameters, and turn the camera on using the Camera's PowerOn() function. If necessary, you can set various camera options; for example, the preview format, preview resolution, ISO level, brightness, effect, contrast, exposure, and white balance. Finally, call the Camera's StartPreview() function, specifying the background buffer of the OverlayRegion, and the preview starts. When the picture is to be taken, the Capture() function of the Camera is called; the corresponding event listener is then invoked to deliver the captured image. To use the zoom function during the preview, you can use the Camera's ZoomIn() and ZoomOut() functions.

There are two listener functions of particular interest: OnCameraCaptured() and OnCameraPreviewed(). OnCameraCaptured() notifies the program that a picture has been taken successfully by the Capture() function of the Camera. OnCameraPreviewed() is called for each frame of the preview. In the Camera's StartPreview() function, you can define whether to use this preview callback or not.

Now let's see how to access the device camera and preview the camera sensor data. In this example, the OverlayRegion control is used to display the camera data.
In lines 1 to 8: the different variables are initialized.
In line 11: the Camera object is instantiated.
In line 12: the camera event listener, the ICameraEventListener implementation, is instantiated.
In line 17: the camera instance is constructed with the device's primary camera and the camera event listener.
In line 18: the device camera is powered on.
In line 20: validated bounds that can be used as the input bounds of the GetOverlayRegionN() method are evaluated.
In line 22: a displayable OverlayRegion for the primary camera is created with the valid bounds.
In line 23: the information for the background buffer of the OverlayRegion is retrieved.
In line 24: the camera device's preview image starts being displayed on the UI OverlayRegion.
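The slide code itself is not reproduced here, so the line numbers above refer to the slide. The following sketch approximates the same flow, reusing the MyCameraListener skeleton and the BufferInfo produced by the PreparePreviewRegion() helper sketched earlier; check the exact signatures against the Camera class reference.

    #include <FMedia.h>

    using namespace Osp::Media;
    using namespace Osp::Graphics;

    // Power on the primary camera, start its preview in the overlay
    // region's background buffer, and take a picture.
    result PreviewAndCapture(Camera& camera, MyCameraListener& listener,
                             const BufferInfo& bufferInfo)
    {
        result r = camera.Construct(listener, CAMERA_PRIMARY);
        if (IsFailed(r)) return r;

        r = camera.PowerOn();
        if (IsFailed(r)) return r;

        // false: do not deliver every preview frame to OnCameraPreviewed().
        r = camera.StartPreview(&bufferInfo, false);
        if (IsFailed(r)) return r;

        // The captured image arrives asynchronously in OnCameraCaptured().
        return camera.Capture();
    }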
AudioIn and AudioOut provide a mechanism for recording and playing raw, uncompressed pulse-code-modulated (PCM) data. In the next few slides I will explain each of these in detail.

The AudioIn class provides a mechanism to record audio at a low level. AudioIn saves the PCM audio data as is, unlike the AudioRecorder, which saves audio in a known file format. With AudioIn, developers have to create a buffer manually and then register it. Multiple buffers are supported, with automatic switching between them. The AudioIn specifications are as follows: microphone input devices are supported, mono and stereo channel options are supported, and both 8-bit and 16-bit sample depths are supported.

In the process of preparing AudioIn, you have to specify the recording source device, mono or stereo recording, the sampling type, the sampling rate, and the sampling depth. As the last preparation step, a function to add a buffer for the recording is provided. There are also various Get functions; these return the minimum and maximum buffer sizes for recording, and the channel type, sample rate, and sample type optimized for the device. Additionally, to actually perform a recording, the Start, Stop, and Reset functions are provided.

I will now show you the general procedure for recording audio into a buffer using AudioIn. First, the AudioIn instance is created, constructed, and prepared. The recording buffer is added to the AudioIn and the recording starts. If the operation is interrupted, the OnAudioInInterrupted() function is called and the AudioIn is automatically stopped.

If the interruption is released and the AudioIn becomes available again, the OnAudioInReleased() function is called; at this point, the AudioIn is generally restarted. When the OnAudioInBufferIsFilled() function is called, you have to move the contents of the buffer to other storage and rewind the buffer so that the recording can continue. If necessary, you can stop a recording using the AudioIn Stop() function and then use the data stored in the buffer. Then, to release the AudioIn, call its Unprepare() function and delete the object.

The IAudioInEventListener interface provides the methods that are called during the operation of AudioIn. AudioIn captures audio data from the device asynchronously and calls the listener's methods to pass on the captured data. OnAudioInBufferIsFilled() is called when the device has completely filled a buffer with PCM data. OnAudioInInterrupted() is called when the input device is interrupted by a task of higher priority than AudioIn. OnAudioInReleased() is called when the interrupted input device is released.

As mentioned previously, an important advantage of AudioIn is its multi-buffer support, which means using more than one buffer. Let's see how AudioIn works when two buffers are added. If you start recording after adding two buffers, the first buffer is used first. When the first buffer becomes full, OnAudioInBufferIsFilled() is called with the first buffer as its parameter, and at the same time AudioIn continues recording into the second buffer. When the second buffer is full, the first buffer is used again. Because the application can keep recording even while one buffer is full, the latency caused by handling a full buffer and then resuming the recording is reduced.

Let's see how to use the AudioIn class to record PCM data from the device microphone.
In line 4: the AudioIn instance is constructed with the audio-in event listener, an implementation of the IAudioInEventListener interface.
In line 6: the AudioIn instance prepares the specified audio input device with the application-defined settings.
In lines 9 to 14: input buffers for the audio input device are constructed.
In lines 17 to 19: the input buffers are registered with the specified input device.
In line 22: audio data from the audio input device is read and fills the input buffers.
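As before, the slide code is not reproduced here; this is a minimal sketch of the same flow under assumed bada 2.0 signatures, with a listener stub where each filled buffer would be consumed and re-added so recording continues.

    #include <FMedia.h>
    #include <FBase.h>

    using namespace Osp::Media;
    using namespace Osp::Base;

    // Listener that rotates buffers: each time a buffer fills, its data
    // would be copied elsewhere and the buffer re-added to keep recording.
    class MyAudioInListener : public IAudioInEventListener
    {
    public:
        void OnAudioInBufferIsFilled(ByteBuffer* pData)
        {
            // ... move pData's contents to storage, rewind, re-add ...
        }
        void OnAudioInInterrupted(void) {} // AudioIn stops automatically
        void OnAudioInReleased(void)    {} // typically restart here
    };

    result RecordPcm(AudioIn& audioIn, MyAudioInListener& listener,
                     ByteBuffer& buffer1, ByteBuffer& buffer2)
    {
        result r = audioIn.Construct(listener);
        if (IsFailed(r)) return r;

        // Mono, 16-bit little-endian PCM at 8 kHz from the microphone.
        r = audioIn.Prepare(AUDIO_INPUT_DEVICE_MIC, AUDIO_TYPE_PCM_S16_LE,
                            AUDIO_CHANNEL_TYPE_MONO, 8000);
        if (IsFailed(r)) return r;

        // Two buffers: AudioIn switches to the second while the first
        // is being handled in OnAudioInBufferIsFilled().
        buffer1.Construct(audioIn.GetMinBufferSize());
        buffer2.Construct(audioIn.GetMinBufferSize());
        audioIn.AddBuffer(&buffer1);
        audioIn.AddBuffer(&buffer2);

        return audioIn.Start();
    }

Registering two buffers up front is what enables the low-latency rotation described above.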
The AudioOut class supports low-level audio playback; the playable data is uncompressed PCM. Like the AudioIn class, it supports mono and stereo channels, 8-bit and 16-bit sample depths, and multiple buffers. The AudioOut class is very similar to the AudioIn class, except that the data in the buffer is played rather than recorded. The class provides the SetVolume() function so that the user can control the volume during playback. In more detail, the AudioOut class provides Prepare and Unprepare functions as in the AudioIn class, but you add a buffer by calling WriteBuffer() rather than AddBuffer(). AudioOut also provides GetVolume() and SetVolume() functions, which AudioIn does not. Like AudioIn, AudioOut provides Start and Stop functions; here, the Reset function disconnects the attached buffers and returns the instance to the Prepared state. The functions that get the minimum and maximum buffer sizes and the settings optimized for the device are the same as those of the AudioIn class.

IAudioOutEventListener represents a listener that receives AudioOut-related events. AudioOut works asynchronously, so when an application plays audio data with the AudioOut class, the caller must implement this interface to receive events from AudioOut. OnAudioOutBufferEndReached() is called when the device has completely played a buffer. OnAudioOutInterrupted() is called when the output device is interrupted by a task of higher priority than AudioOut. OnAudioOutReleased() is called when the interrupted output device is released.

Now let's see how to use the AudioOut APIs to play raw audio data with an example. For simplicity, only the code relevant to the play functionality is shown; the full example is available in the API reference of the AudioOut class documentation.
In line 4: the AudioOut instance is constructed with the audio-out event listener instance.
In line 6: the audio output device is prepared with the defined settings.
In line 9: we create the data buffer containing the audio data to be played.
In line 10: the data buffer is sent to the audio output device.
In lines 12 to 16: we prepare more data buffers and provide them to the audio output device.
In line 18: we start the audio output device to play the provided data.
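Here is the corresponding sketch for playback, mirroring the AudioIn example; again the signatures are assumed from the bada 2.0 reference and error handling is abbreviated.

    #include <FMedia.h>
    #include <FBase.h>

    using namespace Osp::Media;
    using namespace Osp::Base;

    // Listener that refills playback buffers as the device drains them.
    class MyAudioOutListener : public IAudioOutEventListener
    {
    public:
        void OnAudioOutBufferEndReached(AudioOut& src)
        {
            // ... WriteBuffer() the next chunk of PCM data here ...
        }
        void OnAudioOutInterrupted(AudioOut& src) {}
        void OnAudioOutReleased(AudioOut& src)    {}
    };

    result PlayPcm(AudioOut& audioOut, MyAudioOutListener& listener,
                   const ByteBuffer& pcmChunk)
    {
        result r = audioOut.Construct(listener);
        if (IsFailed(r)) return r;

        // Must match the format of the PCM data being written.
        r = audioOut.Prepare(AUDIO_TYPE_PCM_S16_LE, AUDIO_CHANNEL_TYPE_MONO, 8000);
        if (IsFailed(r)) return r;

        // Queue the first chunk; later chunks are queued from the listener.
        r = audioOut.WriteBuffer(pcmChunk);
        if (IsFailed(r)) return r;

        return audioOut.Start();
    }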

Next, I will cover the features and functionality provided in bada for handling image files and image processing. The majority of the APIs in the Image class require the IMAGE privilege, so ensure your manifest file includes it. The Media::Image class provides a mechanism to encode and decode images from a file or buffer, with a few restrictions: it supports decoding of the JPEG, GIF, PNG, BMP, TIFF, and WBMP formats only, and the supported encoding formats are JPEG, PNG, and BMP. Apart from these, it also supports JPEG recompression for reducing image size, and storing or restoring image data to or from persistent storage directly.

A few important APIs of the Image class are listed here. CompressJpeg() recompresses an encoded image file to reduce its size to a specified limit. CompressJpegN() recompresses encoded image data into a byte buffer, reducing its size to a specified limit. ConvertN() converts an image file to the specified image format. DecodeN() decodes an image file or image data into a bitmap. DecodeToBufferN() decodes an image file or image data into a byte buffer without resizing. DecodeUrl() accepts an image URL and delivers the decoded bitmap, resized to the specified width and height. EncodeToBufferN() encodes the specified bitmap data into a byte buffer. EncodeToFile() encodes the specified bitmap data into a file.

Let's see how to decode an image file into a bitmap and draw it on the canvas. There are several overloads of the DecodeN() API for this operation; for demonstration purposes only one of them is used here, and you can find the other variants in the help documentation.
In lines 8 and 9: we create and construct the Image instance.
In line 11: we decode the image file and get a pointer to a Bitmap instance.
In line 12: we create a rectangle that will hold the bitmap.
In line 13: we get a canvas instance for the current form.
In lines 14 and 15: we draw the bitmap onto the given rectangular area of the canvas and show it.

The ImageUtil class provides static utility APIs for performing many useful operations on an image buffer, such as converting the pixel format, resizing an image, flipping an image to generate its mirror image, and rotating an image.
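Here is a minimal sketch of the decode-and-draw flow just described, assuming a Form context; the file path is hypothetical and error handling is abbreviated.

    #include <FMedia.h>
    #include <FGraphics.h>
    #include <FUi.h>

    using namespace Osp::Media;
    using namespace Osp::Graphics;

    // Decode an image file into a Bitmap and draw it on the Form's canvas.
    result DrawDecodedImage(Osp::Ui::Controls::Form& form)
    {
        Image image;
        result r = image.Construct();
        if (IsFailed(r)) return r;

        // "/Home/sample.jpg" is a hypothetical path used for illustration.
        Bitmap* pBitmap = image.DecodeN(L"/Home/sample.jpg",
                                        BITMAP_PIXEL_FORMAT_ARGB8888);
        if (pBitmap == null) return GetLastResult();

        Canvas* pCanvas = form.GetCanvasN();
        if (pCanvas != null)
        {
            // Rectangle matching the decoded bitmap's dimensions.
            Rectangle rect(0, 0, pBitmap->GetWidth(), pBitmap->GetHeight());
            pCanvas->DrawBitmap(rect, *pBitmap);
            pCanvas->Show();
            delete pCanvas;
        }

        delete pBitmap;
        return E_SUCCESS;
    }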

In the next few slides I will explain the audio recording functionality provided in bada. The AudioRecorder class belongs to the RECORDING privilege group. The AudioRecorder is the class that supports high-level audio recording; "high level" refers to the fact that buffer control is performed automatically. You can save the recording in an actual container file format such as WAVE, AMR, or AAC, and you can set recording properties such as the container format, the maximum recording size, the maximum recording time, and the recording quality. In addition, to control the recording, you can use the Record, Pause, Stop, and Mute functions. For file I/O operations, the CreateAudioFile, Close, and Cancel functions are provided.

The audio recording operation also uses the IAudioRecorderEventListener interface to notify the program when the state changes. The event listener object provides functions to listen for state changes when the recorder is started, paused, stopped, canceled, and closed. When the user-specified maximum time or maximum size is reached, the OnAudioRecorderEndReached() function is called. If the recording encounters a problem, the OnAudioRecorderErrorOccurred() function is called.

Let's see how we can use the AudioRecorder APIs to record audio data from the audio input device and store it in a file.
In line 4: the AudioRecorder instance is created.
In line 6: the audio recorder event listener, an implementation of the IAudioRecorderEventListener interface, is instantiated.
In line 7: the AudioRecorder instance is constructed with the audio recorder event listener.
In line 9: the file that will hold the audio data is created.
In line 11: audio recording is started.
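A minimal sketch of that flow follows; the callback set is taken from the IAudioRecorderEventListener reference as I recall it, the file path is hypothetical, and error handling is abbreviated, so verify the signatures against the documentation.

    #include <FMedia.h>

    using namespace Osp::Media;

    // Stub listener; the recorder reports state changes asynchronously.
    class MyAudioRecorderListener : public IAudioRecorderEventListener
    {
    public:
        void OnAudioRecorderStarted(result r) {}
        void OnAudioRecorderPaused(result r) {}
        void OnAudioRecorderStopped(result r) {}
        void OnAudioRecorderCanceled(result r) {}
        void OnAudioRecorderClosed(result r) {}
        void OnAudioRecorderEndReached(RecordingEndCondition endCondition) {}
        void OnAudioRecorderErrorOccurred(RecorderErrorReason r) {}
    };

    result RecordToFile(AudioRecorder& recorder, MyAudioRecorderListener& listener)
    {
        result r = recorder.Construct(listener);
        if (IsFailed(r)) return r;

        // "/Home/record.amr" is a hypothetical path; true overwrites
        // any existing file.
        r = recorder.CreateAudioFile(L"/Home/record.amr", true);
        if (IsFailed(r)) return r;

        return recorder.Record();
    }

VideoRecorder, covered next, follows the same pattern but is constructed with a Camera instance as its source.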

VideoRecorder enables recording video from the camera device. In the coming slides I will explain how to record video using the bada APIs. The VideoRecorder uses the Camera and is basically almost the same as the AudioRecorder, so I will concentrate on the differences. The most important difference is that the VideoRecorder's Construct function receives the camera as a parameter, while the AudioRecorder receives an input device such as the microphone. To determine which codec should be used for a video recording, you can get the codec list supported by the device by calling the GetSupportedCodecListN() function and specify the required codec by calling the SetCodec() function. To specify the recording resolution, you can get the supported recording resolutions by calling the GetSupportedRecordingResolutionListN() function and set the resolution by calling the SetRecordingResolution() function. The supported video quality levels are low, medium, and high. You also have to specify whether to record audio while recording video. Finally, you can set the video format to either MP4 or 3GP.

The IVideoRecorderEventListener interface specifies the methods used to notify the application of the video recorder's status. The video recorder engine works asynchronously, so it is important to implement this listener to ensure that the recorder flows correctly. When a video recording operation such as cancel, close, start, pause, or stop occurs, an event is generated and the corresponding callback method is called.

Let's see how the VideoRecorder APIs can be used to record video data from the device camera.
In lines 3 to 10: an overlay region is created to preview the camera data.
In lines 13 to 15: the Camera instance is created and powered on to provide the video data.
In lines 16 and 17: the VideoRecorder instance is created and constructed with the video recorder event listener and the camera instance.
In line 18: the camera data is displayed/previewed on the attached overlay region control.
In line 19: the file that will hold the video data is created.
In line 20: recording of video data from the camera is started.

Audio/video encoders and decoders enable conversion of raw data to compressed data and vice versa. In the coming slides I will explain how to do this and which APIs are available. The AudioDecoder class provides APIs to decode compressed audio streams such as MP3, AAC, and AMR to raw PCM data. All APIs of the AudioDecoder class are synchronous. Source data for the AAC and AMR decoders must be raw compressed data without a header. The Probe() API is used to query stream properties such as the sample type, channel type, and sample rate. The Decode() API decodes the audio data from the source buffer and stores the decoded data in a destination buffer.

Let's see how to use the AudioDecoder APIs to decode MP3 data. In this slide, the variables and objects used in the example are defined.
In line 11: the AudioDecoder instance is constructed with the MP3 codec type.
In line 12: the source byte buffer is created.
In line 13: the File instance is constructed with the file path "/Home/test.mp3".
In line 14: the audio file content is read into the source byte buffer.
In line 16: the audio data in the source byte buffer is checked to see whether it can be decoded.
In line 17: the destination buffer that will hold the decoded data is constructed.
In line 19: the audio data in the source buffer is decoded and stored in the destination buffer.

The AudioEncoder class encodes raw PCM audio data to a compressed audio stream format, AAC or AMR-NB. The Encode() method encodes the audio data from the source buffer and stores the encoded data in the destination buffer. The VideoDecoder class is used to decode compressed H.264, H.263, and MPEG-4 streams to the raw YUV420 data format; all of its APIs are synchronous. Probe() is used to query properties of the encoded input data, such as width, height, and pixel format, and the Decode() method decodes the video data from the source buffer and stores the decoded data in the destination buffer. The VideoEncoder class is used to encode raw YUV420 data to compressed H.264, H.263, and MPEG-4 streams; all of its APIs are synchronous. The Simple Profile is supported for MPEG-4, profiles 0 and 3 are supported for H.263, and the Baseline Profile is supported for H.264. The Encode() API encodes the video data from the source buffer and stores the encoded data in the destination buffer.
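Pulling the AudioDecoder walkthrough above together, here is a minimal sketch of decoding an MP3 file, assuming the bada 2.0 codec API; the buffer sizes are arbitrary and the Probe() signature should be verified against the reference.

    #include <FMedia.h>
    #include <FIo.h>
    #include <FBase.h>

    using namespace Osp::Media;
    using namespace Osp::Base;
    using namespace Osp::Io;

    // Decode one chunk of MP3 data from a file into raw PCM.
    result DecodeMp3(ByteBuffer& dstBuf)
    {
        AudioDecoder decoder;
        result r = decoder.Construct(CODEC_MP3);
        if (IsFailed(r)) return r;

        // Read the compressed source data (path from the walkthrough).
        ByteBuffer srcBuf;
        srcBuf.Construct(100000);
        File file;
        r = file.Construct(L"/Home/test.mp3", L"rb");
        if (IsFailed(r)) return r;
        r = file.Read(srcBuf);
        if (IsFailed(r)) return r;
        srcBuf.Flip(); // prepare the buffer for reading

        // Query the stream properties and confirm the data is decodable.
        AudioSampleType sampleType;
        AudioChannelType channelType;
        int sampleRate;
        r = decoder.Probe(srcBuf, sampleType, channelType, sampleRate);
        if (IsFailed(r)) return r;

        // Decode from the source buffer into the destination buffer.
        dstBuf.Construct(500000);
        return decoder.Decode(srcBuf, dstBuf);
    }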

Now, I will wrap up this lecture on the Media namespace. The Media namespace includes various classes that let you easily work with media files such as audio, video, and image files. The Image class enables you to encode and decode image files. In addition, the Player class supports playing back media content. The AudioRecorder and VideoRecorder classes enable you to create audio or video recording files. The AudioIn and AudioOut classes support low-level recording and low-level playback; in general, these classes are used for sound processing or for developing music applications. The Camera class supports real-time previewing and capturing. The codec framework supports audio/video encoding and decoding. Here are some exercises for you to explore the classes of the Media namespace further.