Xbox announces upcoming MKV support

Xbox announced MKV support at gamescom this week. The news was picked up and blogged by MajorNelson and is seeing rave reviews on Reddit.

I'm glad to see the excitement for MKV, as it's what I've been working on these days! In particular, I'll be answering questions about the extent of MKV support in this AMA thread. I'm interested in feature requests and in making sure we have support for the most common scenarios.


HLS v3 – The new old thing

Azure Media Services now supports a down-level version of HLS, HLS v3.  
Use: format=m3u8-aapl-v3

A little history:

As of version 4 of the HLS specification (September 2011), Apple added support for multiple languages, through what were called Alternative Audio Renditions. Multi-language support was the motivation behind choosing HLS v4 for our service two years ago. At that time, Android was seeing a lot of active development, and with its largely international customer base, it was assumed that Android would adopt HLS v4 for these multi-lingual markets. Not so, as history would prove. Here we are two and a half years later, and HLS v4 support is only available on the very latest Android drops. So instead, we are reaching back, dusting off the HLS v3 draft spec from November 2010, and building backward compatibility. We are also motivated by the connected TV space, which has a lower adoption rate for device updates.

To use HLS v3, simply use:

*.ism/manifest(format=m3u8-aapl-v3)

This will mux the default audio language into the video.

But what about multi-language scenarios, since HLS v3 doesn’t support those?

HLS v4 allows video and audio to be delivered in a decoupled fashion (like Smooth, DASH and HDS), whereas HLS v3 must have the audio muxed into the video data, and both are delivered as a single stream. Since HLS v3 does not have native multi-language support, but our server does, we were able to allow content-owners to set the audio track that they want muxed in the URL using the ‘audioTrack’ attribute. If you don’t set anything, we just mux in the default audio.

You can force the language that you want muxed into the video by using the audioTrack attribute.

It looks like this:

*.ism/manifest(format=m3u8-aapl-v3,audioTrack=audio_eng)

*.ism/manifest(format=m3u8-aapl-v3,audioTrack=audio_spa)

Or

*.ism/manifest(format=m3u8-aapl-v3,audioTrack=audio)

*.ism/manifest(format=m3u8-aapl-v3,audioTrack=audio_1)

Which names to use depends on whether your audio languages were detected or defined in the source media.

You can get the audioTrack name from the Language attribute in the Smooth manifest, or from the HLS v4 manifest.

You can set your own track names by using the title="audio_spa" attribute in the .ism file used for your multi-bitrate MP4 assets.  Similarly, you can use systemLanguage="spa" to set the Language attribute in the manifests. Add the attributes to the line in the .ism which has the src="yourAudio.mp4" (see the sketch below). You can also do some renaming in the .ism/.ismc of Smooth assets if you're feeling brave. Sorry, there's no support for this type of manipulation in the platform; perhaps some of you would like to contribute to the Azure Media Services .NET SDK Extensions project on GitHub?  For doing these types of touch-ups, there is a manifest parser/generator from Mariano, one of our friends at SouthWorks.
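
For example, a hypothetical pair of audio entries in such an .ism might look like this (file and track names are placeholders):

 <audio src="yourAudio_eng.mp4" title="audio_eng" systemLanguage="eng" />
 <audio src="yourAudio_spa.mp4" title="audio_spa" systemLanguage="spa" />

With these in place, the tracks show up as audio_eng and audio_spa, ready to be selected with the audioTrack attribute above.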

So now HLS works on all Android devices, right?

Nope, not a chance.

Android is highly fractured and has varying levels of implementation for streaming media. The media stacks differ from device to device, and even within a single device the media stack can be duplicated or different. If you've been trying to make this work for a while, you've seen that the same Android version may work differently on different handsets; that's because each manufacturer has the option of tweaking the OS. All this to say: it's not an easy problem to solve, and as a stream provider, the best I can do is make sure my streams are conformant; device support and upgrades are up to Android.

We also did some deeper testing of 4.1.2 debug builds which demonstrated that the quality of the underlying native implementation was low for that release. Some manufacturers had forked the code and done some fixes, but generally, I would not expect devices 4.1.2 and below to work. This is consistent with the findings of other streaming server and player framework providers.

For Android devices running 4.2.2, there seem to be varying degrees of implementation. Some adaptive bitrate logic is particularly poor (dropped frames do not trigger bitrate switching, playback latches onto a particular bitrate, or a bitrate switch causes a re-start). Other devices with 4.2.2 do reasonably well, and the latest Android drops are much better.

The MSDN doc will have a table of the device testing we were able to accomplish as part of building this feature. It is in no way exhaustive, but it does cover a good number of popular devices that can be purchased today. Just to be clear, the goal of HLS v3 is to provide spec-compliant HLS streams which pass the Apple validator and other Transport Stream compliance suites, not to 'work on all Android devices'; so your mileage may vary, and you may need to take it up with the other guys.

On the connected TV side, we've gotten good feedback from our partners thus far.  We also got a thumbs-up from the popular JW Player folks (note: set type:"HLS" in the embed code for JW Player).


Encoding versus Packaging – Streaming media terminology explained

What is the difference between Encoding and Packaging?

Short answer:  It’s the difference between building toys, and gift-wrapping them.

The terminology used in media streaming is hard to master.  There are a lot of moving parts, and the concepts are linked one to the other.  In this post I review some of the most-used terms and try to draw analogies that are easy to grasp.  We'll go from content creation to content consumption.

Production of digital media is reasonably well understood: you have contributions from your creative department, or you have purchased or licensed content.  At some point, these go through analog-to-digital conversion and are typically stored as very-high-resolution, very-low-compression (very-high-quality) files.  These are often referred to as Mezzanine or Source files; they form your input streams for encoding.  These are your ideas and your raw materials; you'll use these to make toys for kids to play with.

Encoding (transcoding) is the process of rendering a frame of the input stream onto a bitmap buffer, analyzing it (and the frames just prior and just after), and then re-compressing it into the target encoding profile.  Encoding is hard work: it requires lots of memory and CPU cycles.  It uses codecs to build the frames of video or audio.

I refer to Encoding as ‘building toys’ in a ‘toy factory’.  You need specialized machines for this type of work.  It is CPU intensive to decompress, analyze and compress video.  In this analogy, the toys are made of codecs.

[Image: Production]

An encoding profile, in the context of adaptive bitrate streaming, is a set of target resolutions and bitrates.  Think 1080p/3Mbps, 720p/1.5Mbps, 480p/800kbps, 240p/400kbps; plus at least one audio stream.  With an encoding profile, you are choosing how many sizes of the same toy you need.  The output of the encoding process is often called a digital intermediate.  It is not the source/mezzanine, but neither is it ready to stream.

[Image: Multi-bitrate encoding]

Packaging (trans-muxing) is the process of taking the frames of video and describing their sequence, presentation order, and timing.  Specific methods of describing video frames are used; these could be ISO 14496 MP4 containers and their derivatives (Smooth, HDS, CSF), or MPEG-2 Transport Stream (TS).  Typically an adaptive bitrate asset will have a group of streams which are then referenced in a manifest that describes additional metadata for the streams themselves.  The manifest describes how to request media information for each stream and can take different forms depending on the protocol (Smooth Manifests, HDS Manifests, DASH Media Presentation Description manifests, HLS Manifests).  The product of the Packaging process is a ready-to-stream protocol; more sophisticated servers can work directly from digital intermediates to create protocols on demand.
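
For instance, an HLS master playlist is one such manifest. A minimal, purely illustrative one for the profile above might read:

 #EXTM3U
 #EXT-X-STREAM-INF:BANDWIDTH=3000000,RESOLUTION=1920x1080
 video_3000.m3u8
 #EXT-X-STREAM-INF:BANDWIDTH=1500000,RESOLUTION=1280x720
 video_1500.m3u8
 #EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=854x480
 video_800.m3u8

Each entry points to a per-bitrate playlist of media segments; the client reads this manifest first, then requests segments from whichever rendition suits its bandwidth.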

I often describe Packaging as ‘gift wrapping’, all you need is gift-wrapping skills and to choose what color wrapping paper (protocol) to use (for your target client framework).  This operation is typically bandwidth limited, as the CPU has very little to do when simply re-wrapping the media (it mostly just copies around the compressed video frames in memory).

[Image: Packaging 1]

A client framework will use the protocol semantics to first read a manifest, then request media, and finally feed frames of video into a decoding pipeline.  In this analogy, imagine that the client framework is your child that doesn’t care much for the toy, but only likes unwrapping a particular color of paper; if the wrapping is not right, it gets all upset and refuses to let anyone play with the toy.

[Image: Packaging 2]

A decoding pipeline on a device is capable of reading the frames of video which have been encoded with a particular codec, and decompressing them back into an image for you to view.  This is your other child, the one who doesn't care about wrapping but only likes a few kinds of toys; it refuses to play with any other types (not all devices can decode all codecs), and it gets particularly upset if you tell it you have one kind of toy but substitute in another (pipeline initialization and data consistency are important).

Have fun playing with media!



Building Awesome: The server that streamed the Olympics

For the last two years, I have been working to extend the capabilities of the Origin Server for Windows Azure Media Services. In January 2013, Windows Azure Media Services (WAMS) became generally available as a Platform as a Service (PaaS) on Windows Azure, Microsoft's cloud computing platform. Just over a year later, it would stream the largest live sporting event in history: millions of concurrent users and over 10,000 hours of unique content over the 16-day Sochi Winter Olympic Games. This article describes the capabilities of Microsoft's origin server, as well as the evolution of the product over the last two years to meet the demands of today's streaming media space.

The starting point: IIS-MS 4.1

As a free extension to IIS Servers, IIS-MS 4.1 offered several market-leading features upon its release. This included Live Smooth Streaming ingest, with archiving to Smooth Streaming and HLS v3, and the re-streaming of these live archives. It offered AES envelope encryption of HLS v3 live streams and archives, as well as simple VOD Smooth Streaming for both clear and PlayReady encrypted content.
IIS-MS 4.1 was a product which worked very well in conjunction with Expression Encoder and Transform Manager to allow the authoring of professional-grade adaptive-streaming workflows. These products together have prepared and streamed a significant proportion of internet video solutions in the last few years.

[Image: IIS-MS 4.1 Origin Server Capabilities]

The capital and operational costs of creating and hosting libraries of video content are significant and ever-expanding for content owners. There was a need for a highly scalable solution which would meet the needs of complex content-preparation workflows, as well as the demands of the fractured mobile client space. Content owners, all the while, needed a Service Level Agreement backed by the world's largest software company.

Along came the Cloud: IIS-MS 5.0 launches in WAMS

In 2012, the origin server was re-architected to meet the existing and anticipated needs of the exponentially growing media streaming space. It was adapted to source from Azure Storage and extended with just-in-time transmuxing capabilities for VOD content. This allowed the server to source from Smooth Streaming and create HLS v4 on a per-request basis. In addition, the server was extended to source from more standard formats: it could now use GOP-aligned, multi-bitrate ISO-MP4 file sets.
Windows Azure Media Services added scalable Origin Services to the Media Processing Services that had been in beta since May 2012. The origin services team undertook the 2012 London Summer Olympic Games for NBC in the USA and Deltatre in other countries, thereby testing the server at scale.
In January 2013, Windows Azure Media Services (WAMS) became generally available, with the capabilities of Expression Encoder, Transform Manager and IIS Media Services all extended and wrapped in a scalable Platform as a Service. WAMS offered the first Service Level Agreement for encoding and streaming in the cloud media space.

The origin server now offered the following storage to streaming format capabilities:

[Image: IIS-MS 5.0 Capabilities at WAMS GA, Jan 2013]

Keeping up with demands: New formats and partnerships

Early 2013 proved to be a very busy time. Microsoft had been working with various standardization bodies for several years to promote interoperability of streaming media formats. With 2013 came the first specifications for MPEG DASH (Dynamic Adaptive Streaming over HTTP). With a new specification that prescribes the format used to advertise adaptive bitrate media, both Servers and Clients are affected: at first there are no Servers to create streams for Clients, and no Clients to test streams from Servers. The Server team took on this challenge and implemented the MPEG DASH Live Profile for its Media Presentation Description (MPD) manifest format, and simultaneously jumped in with the Common Streaming Format (CSF) for media packaging, derived from the latest ISO-MP4 specs. This was demonstrated in time for the National Association of Broadcasters Show, and paved the way for the development of open-source DASH Live Profile client implementations by members of the DASH Industry Forum, as well as DASH player frameworks from the Windows Azure Media Services client team. Our implementation also offered muxing from Smooth+PlayReady to DASH/CSF+CENC, which was the first publicly available Common Streaming Format with Common Encryption. As the specifications were still in flux, this format was not announced as generally available; however, anyone wishing to consume and develop clients from our DASH reference streams could do so.
Not only was there significant work in the standardization domain, but also in the commercial domain. After the success of the 2012 London Summer Olympics, WAMS was selected by NBC to partner with Adobe and Akamai in delivering the 2014 Sochi Winter Olympic Games. In addition, Deltatre had expanded its worldwide Olympics streaming footprint to 22 countries for Sochi. All six datacenters in which WAMS was deployed were to be used. With the NBC partnership came a requirement to implement HTTP Dynamic Streaming (HDS) to target Adobe Primetime Flash clients, as well as SCTE-35 advertisement signaling in both HDS and HLS.

As of summer 2013, the origin server now offered:

[Image: IIS-MS 6.0 Live Capabilities, August 2013]

The learnings from the 2012 Olympics were being applied to make Live Streaming generally available for WAMS customers after the 2014 Olympics. The goal was not simply to meet the needs of NBC and Deltatre, but to build a Live Streaming Service as an integral part of WAMS which could then be leveraged by any WAMS user.
While IIS-MS 4.1's Live ingest was designed as a single instance which both archived and re-streamed, this did not meet the needs of a cloud deployment, nor the growing scale of the Olympics. The ingest "Channel" was re-architected to provide multi-machine redundancy and failover. Its archiving format was overhauled to take advantage of the distributed nature of Azure Storage, thereby increasing archiving speed, throughput and reliability. The ingest and archiving components were broken away from the origin services, and two server types now work in concert to deliver live streams, increasing resilience while separating concerns. This offers diverse topology options, such as providing a known amount of Origin streaming SLA per Live channel, and the separation of Live workloads from VOD workloads without breaking CDN caches. This truly made the solution viable as a production service for both Live and VOD workloads in the cloud.

[Diagram of Live Channel, Azure Storage and Origin server]

One of the problems with traditional Origin servers is the Live to VOD transition.  That is, there is usually one URL for the Live stream, and another for VOD.  With WAMS, there is only a single URL for the Asset.  Whether the Asset is Live, or the live stream has ended and it is now VOD content, the URL does not change.  This greatly simplifies the content distribution workflow.  Each Olympic broadcaster had a choice of over a dozen event feeds; static feeds of the torch, stadium, city views, etc.; and the feeds they were producing onsite themselves: on-location, interviews, news desk, etc.  Each event feed or production feed had several programs throughout the day, with differing start and stop times.  The ability to pre-define the Live and VOD event URLs and traffic them to content management systems greatly simplified the workflow and broadcasters' ability to begin events on time and with confidence.  A clever use of encoder automation allowed broadcasters to use their normal program scheduling and management systems to trigger the start, go-live and stop of each program through the signals normally embedded in the transport stream.

In addition to a single URL for Live and VOD, the Live event had access to the full DVR window for that program.  Both of these are the result of decoupling the stream archiving from the origin services.  The origin is able to read either from the archiving servers, or from the archives they are actively writing to Azure Storage.  This allows a seamless transition once the event is over, and full access to all parts of the archive.  Users could just as easily be watching the live edge of the stream, or seeking or re-starting from the beginning of the event.

Optimizing the Origin for Live Sources

Cloud storage services are considerably different from local disks.  They are HTTP endpoints, not NTFS, FAT32, SMB nor NAS/SAN systems.  With either Azure Storage or Amazon S3 come the issues associated with an HTTP storage endpoint.  Looking through the Amazon docs, you'll quickly see that for any heavy throughput you are recommended to use their CloudFront system, which brings the files to local disks on edge servers.  Azure Storage is the same: it has massive scale, redundancy and security, but also limitations on concurrency and throughput while retaining the low, continuous latency required for live streaming.  For example, a 4 second delay in providing the live manifest to an iOS device will stall playback; and when coupled with CDN caching tiers, the effect is amplified.  There is no time for missed requests, nor a margin of error for origin server overload.

To ensure smooth and consistent delivery of media, several improvements were made to the origin server.  We added request aggregation: requests which require new data are queued with any other requests that need the same data; these are then spooled out when the data is fetched (from either the Channel or Azure Storage).  Origin request aggregation greatly reduces the transactions per second on the source content, freeing up Azure Networking and Storage to meet the needs of other loads while providing the best streaming-request handling.
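
The pattern itself is easy to sketch. The following is purely illustrative (not the server's actual code) and shows the core idea in C#: concurrent callers asking for the same key share a single in-flight fetch.

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

// Illustrative sketch of request aggregation: concurrent requests for the
// same source key await one shared fetch instead of each hitting storage.
public class FetchAggregator
{
    private readonly ConcurrentDictionary<string, Lazy<Task<byte[]>>> inFlight =
        new ConcurrentDictionary<string, Lazy<Task<byte[]>>>();

    private readonly Func<string, Task<byte[]>> fetchFromSource; // e.g. the Channel or Azure Storage

    public FetchAggregator(Func<string, Task<byte[]>> fetchFromSource)
    {
        this.fetchFromSource = fetchFromSource;
    }

    public async Task<byte[]> GetAsync(string key)
    {
        // Lazy guarantees only the first caller starts the fetch;
        // every other concurrent caller is spooled the same Task.
        var fetch = inFlight.GetOrAdd(key, k => new Lazy<Task<byte[]>>(() => fetchFromSource(k)));
        try
        {
            return await fetch.Value;
        }
        finally
        {
            // A production server would keep the result in a cache;
            // here we simply let the next request fetch fresh data.
            Lazy<Task<byte[]>> removed;
            inFlight.TryRemove(key, out removed);
        }
    }
}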

Once the source data is fetched, we cache the input source data for use by any subsequent requests:  since we are simultaneously streaming Smooth, HLS and HDS, all protocols can re-use the source content.

Due to the high concurrency and wide distribution of the streams, some broadcasters had over a dozen parent CDN caches.  This meant that we were seeing multiple, often concurrent, requests for each bitrate of each protocol, plus manifests.  Since the origin dynamically produces the responses based on the requests, we added output caching.  If you've set up traditional streaming topologies, it is likely that you've incorporated ARR caches to serve this purpose; that is what we did for the 2012 and previous Olympics, but no more.  For Live and 'Viral Video' scenarios, the WAMS origin servers can now easily handle the maximum bandwidth of their outbound network cards, with comparatively little internal traffic and CPU load.

A Gold Medal Performance

The 2014 Olympics were a monumental success across all fronts. They deserve a much more detailed account than can be given here; suffice it to say, they broke a number of records for Live streaming in scale, reliability, concurrency, automation, protocol diversity, device diversity and content delivery. All of this, only one year after Windows Azure Media Services made its official debut.

Since the Olympics, we have been building the business model to go to market with Live Streaming, as well as allowing select partners to start ramping up using Live Services.

Massive Scalability: Enterprise workloads for streaming media

Not only was 2013 a banner year for Media Services, but the Xbox One launched in November, bringing with it the Xbox Game DVR service. The backend for Game Clips is Windows Azure Media Services. Users have generated millions of clips to date and stream terabytes of content daily, including during the Olympics.
At the recent SharePoint Conference 2014, it was announced that Windows Azure Media Services would be the backend for a new enterprise video streaming service for Office 365: Office Video will provide securely encrypted video at enterprise scale.
While Xbox Game DVR demonstrates massive scale, Office 365 requires new investments in encryption and security. O365 is one of the main drivers behind just-in-time encryption and sourcing from storage-encrypted media, a requirement shared by Hollywood production studios: content shall never be unencrypted while at rest.

As we complete the O365 requirements in the first half of 2014, the server capability begins to resemble:

[Image: IIS-MS 6.0, March 2014]

Looking to the future: What’s next?

By popular demand, we have recently added the option to stream to HLS v3: for backward compatibility to older Android devices, connected TVs and set top boxes.

As of this writing (mid-March 2014), we are still completing the Storage Decryption work to complement AES Envelope Encryption.  We are also closing the gap on dynamically muxing Smooth+PlayReady sources to HLS+PlayReady. The goal is to favor Dynamic Packaging and Dynamic Encryption over static packaging and encryption: sourcing from standard MP4s or Smooth, either clear or storage-encrypted, we will be able to support protocols with various media encryption options.

How you can contribute:  Feedback

We are nothing if we cannot meet the needs of our users, so I invite you to leave feedback and feature requests at:
http://feedback.windowsazure.com/forums/169396-media-services
You can also reach us on our Forums on MSDN and StackOverflow:
http://www.windowsazure.com/en-us/support/forums/


A simple scenario: Upload, Encode and Package, Stream

What's going on when I use the Windows Azure Portal to encode a job?

The Windows Azure Portal is built on top of the various REST APIs of the underlying Azure components. The presentation layer is HTML5, and uses a mix of REST and the .Net SDKs on the server-side — you could build your own portal to manage your media workflow.
In the next few weeks, under the Media Services tab, we will update the pages to include some simple code snippets that walk you through this simple scenario:

  • Create an Asset and Upload a file
  • Encode to Smooth and Package to HLS
  • Stream to both Smooth and HLS

Here is what is going on in those snippets:

Click through these slowly and try to match the various arrows to the lines of code below.

Main Program:

// Create .Net console app
// Set project properties to use the full .Net Framework (not Client Profile)
// With NuGet Package Manager, install windowsazure.mediaservices
// add: using Microsoft.WindowsAzure.MediaServices.Client;

var context = new CloudMediaContext("your_media_account", "your_media_account_key");

//Slide 1:
string inputAssetId = CreateAssetAndUploadFile(context);

//Slide 2:
IJob job = EncodeAndPackage(context, inputAssetId);

var smoothAsset = job.OutputMediaAssets.FirstOrDefault();
var hlsAsset = job.OutputMediaAssets.LastOrDefault();

//Slide 3:
string smoothStreamingUrl = GetStreamingUrl(context, smoothAsset.Id);
string hlsStreamingUrl = GetStreamingUrl(context, hlsAsset.Id);

Console.WriteLine("\nSmooth Url: \n" + smoothStreamingUrl);
Console.WriteLine("\nHLS Url: \n" + hlsStreamingUrl);
Console.ReadKey();
//

First slide:

private static string CreateAssetAndUploadFile(CloudMediaContext context) {

var inputFilePath = @"C:\demo\bing_social_search.mp4";

var assetName = Path.GetFileNameWithoutExtension(inputFilePath);

var inputAsset = context.Assets.Create(assetName, AssetCreationOptions.None);

var assetFile = inputAsset.AssetFiles.Create(Path.GetFileName(inputFilePath));

assetFile.UploadProgressChanged += new EventHandler<UploadProgressChangedEventArgs>(assetFile_UploadProgressChanged);
assetFile.Upload(inputFilePath);

return inputAsset.Id;
}

//Monitor progress:
static void assetFile_UploadProgressChanged(object sender, UploadProgressChangedEventArgs e) {
Console.WriteLine(string.Format("{0}   Progress: {1:0}   Time: {2}",
((IAssetFile)sender).Name, e.Progress, DateTime.UtcNow.ToString(@"yyyy_M_d__hh_mm_ss")));
}
//

Second slide:

private static IJob EncodeAndPackage(CloudMediaContext context, string inputAssetId) {

var inputAsset = context.Assets.Where(a => a.Id == inputAssetId).FirstOrDefault();
if (inputAsset == null)
throw new ArgumentException("Could not find assetId: " + inputAssetId);

var encodingPreset = "H264 Smooth Streaming SD 16x9"; // http://msdn.microsoft.com/en-us/library/windowsazure/jj129582.aspx#H264Encoding

IJob job = context.Jobs.Create("Encoding " + inputAsset.Name + " to " + encodingPreset + " and Packaging to HLS");

IMediaProcessor latestWameMediaProcessor = (from p in context.MediaProcessors where p.Name == "Windows Azure Media Encoder" select p).ToList().OrderBy(wame => new Version(wame.Version)).LastOrDefault();

ITask encodeTask = job.Tasks.AddNew("Encoding", latestWameMediaProcessor, encodingPreset, TaskOptions.None);
encodeTask.InputAssets.Add(inputAsset);
encodeTask.OutputAssets.AddNew(inputAsset.Name + " as " + encodingPreset, AssetCreationOptions.None);

var packagingToSmoothConfig = @"<taskDefinition xmlns=""http://schemas.microsoft.com/iis/media/v4/TM/TaskDefinition#""><name>Smooth Streams to Apple HTTP Live Streams</name><description xml:lang=""en""/><inputDirectory/><outputFolder/><properties namespace=""http://schemas.microsoft.com/iis/media/AppleHTTP#"" prefix=""hls""><property name=""maxbitrate"" value=""10000000"" /><property name=""segment"" value=""10"" /><property name=""encrypt"" value=""false"" /><property name=""pid"" value="""" /><property name=""codecs"" value=""false"" /><property name=""backwardcompatible"" value=""false"" /><property name=""allowcaching"" value=""true"" /><property name=""passphrase"" value="""" /><property name=""key"" value="""" /><property name=""keyuri"" value="""" /><property name=""overwrite"" value=""true"" /></properties><taskCode><type>Microsoft.Web.Media.TransformManager.SmoothToHLS.SmoothToHLSTask, Microsoft.Web.Media.TransformManager.SmoothToHLS, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35</type></taskCode></taskDefinition>";

IMediaProcessor latestPackagerMediaProcessor = (from p in context.MediaProcessors where p.Name == "Windows Azure Media Packager" select p).ToList().OrderBy(wame => new Version(wame.Version)).LastOrDefault();

ITask packagingTask = job.Tasks.AddNew("Packaging to HLS", latestPackagerMediaProcessor, packagingToSmoothConfig, TaskOptions.None);
packagingTask.InputAssets.Add(encodeTask.OutputAssets[0]);
packagingTask.OutputAssets.AddNew(inputAsset.Name + " encoded and packaged to HLS", AssetCreationOptions.None);

job.StateChanged += new EventHandler<JobStateChangedEventArgs>(JobStateChanged);
job.Submit();
job.GetExecutionProgressTask(CancellationToken.None).Wait();

return job;
}

static void JobStateChanged(object sender, JobStateChangedEventArgs e) {
Console.WriteLine(string.Format("{0}\n  State: {1}\n  Time: {2}\n\n",
((IJob)sender).Name, e.CurrentState, DateTime.UtcNow.ToString(@"yyyy_M_d__hh_mm_ss")));
}
//

Third slide:

(provisioning of the origins is done through the management portal scale page)

private static string GetStreamingUrl(CloudMediaContext context, string outputAssetId) {
var daysForWhichStreamingUrlIsActive = 365;

var outputAsset = context.Assets.Where(a => a.Id == outputAssetId).FirstOrDefault();

var accessPolicy = context.AccessPolicies.Create(outputAsset.Name,
                   TimeSpan.FromDays(daysForWhichStreamingUrlIsActive),
                   AccessPermissions.Read);

var assetFiles = outputAsset.AssetFiles.ToList();

var assetFile = assetFiles.Where(f => f.Name.ToLower().EndsWith("m3u8-aapl.ism")).FirstOrDefault();
if (assetFile != null) {
var locator = context.Locators.CreateLocator(LocatorType.OnDemandOrigin, outputAsset, accessPolicy);

Uri hlsUri = new Uri(locator.Path + assetFile.Name + "/manifest(format=m3u8-aapl)");
return hlsUri.ToString();
}

assetFile = assetFiles.Where(f => f.Name.ToLower().EndsWith(".ism")).FirstOrDefault();
if (assetFile != null) {
var locator = context.Locators.CreateLocator(LocatorType.OnDemandOrigin, outputAsset, accessPolicy);
Uri smoothUri = new Uri(locator.Path + assetFile.Name + "/manifest");
return smoothUri.ToString();
}

assetFile = assetFiles.Where(f => f.Name.ToLower().EndsWith(".mp4")).FirstOrDefault();
if (assetFile != null) {
var locator = context.Locators.CreateLocator(LocatorType.Sas, outputAsset, accessPolicy);
var mp4Uri = new UriBuilder(locator.Path);
mp4Uri.Path += "/" + assetFile.Name;
return mp4Uri.ToString();
}
return string.Empty;
}
//

Now put that Smooth URL into http://smf.cloudapp.net/healthmonitor or the HLS URL into your iOS device, and you're ready to go.


Windows Azure Media Services is now live!

As of this morning, Windows Azure Media Services is now live and open for business.

Launched seven months ago in Preview, it has matured and changed based on the great feedback we received in the forums, and on direct feedback from our customers:  Thank you!

We are actively investing in the platform to make it an ecosystem of offerings, while improving the core components.



Dynamic Packaging and Encoding and Streaming Reserved Units

Today, we got this question on our forums:

Does anyone have any information on the new ‘on the fly converting’ for media services?

I am trying to allow multiple users (hundreds) the ability to upload video files at the same time.  Obviously, currently the queuing process takes too long.  Would like to know if this new way might help.

Thanks.

Things tend to get pushed down quickly in the forums, and since I own this feature, I thought I would cover the answer in a blog post.  But first things first, read this post about architecting a system for user-contributed content.

Yes, that feature is shipping in the coming days. We refer to it as dynamic packaging; you could also call it just-in-time packaging, dynamic muxing, etc.

Specifically, it will offer the ability to:

Transmux from a single source format into two different streaming formats.

What it will NOT do:

Take a single-bitrate source file and produce multi-bitrate streams.  That requires re-encoding, which is very CPU intensive.  See this post for the difference between muxing and encoding.

Supported input formats:

  • MP4
  • Smooth

Supported output formats:

  • Smooth
  • HLS v4

Supported scenarios:

  • Primarily AAC and H.264 codecs (you can use VC-1, but it will not remux to HLS).
  • A single MP4 to single-bitrate Smooth and single-bitrate HLS.
  • Many closed-GOP, GOP-aligned MP4s to multi-bitrate Smooth or multi-bitrate HLS.
  • Multi-bitrate Smooth to HLS, and of course, Smooth.

Unsupported scenarios at this time:

  • Encrypted content as source files, neither Storage Encryption nor Common Encryption.

Pictures always help:

[Image]

Cost and setup:

At this time, dynamic packaging will only be offered if you reserve origin capacity in the management portal.  That is, you need to enable and buy an origin reserved unit (~$200/mo, charged in 24-hour increments; this is a blog, pricing changes, check the management portal for details).

If you are not using a reserved origin to stream, then you are sharing a small pool of origin servers amongst all the media services users of a datacenter.  We call this the preview pool, and we are not updating it at this time to support dynamic packaging.  There is no Service Level Agreement for the preview pool; if someone's video goes viral, you'll be competing with them for bandwidth.  If you're serious about streaming, get a reserved unit; that's what they are for.

To reserve on-demand streaming capacity:

Go to the scale page in the management portal and move the slider to 1.  If you are provisioned with a reserved unit, you’re all set.

It takes a few minutes to spin up an origin reserved unit for you.  If it's been a few hours and you don't see confirmation in the 'scale' page of the management UI, then there was not any capacity within our reservation system in the datacenter.  This triggers a request for increased capacity internally, but that capacity needs to be provisioned, which takes a while.  We all like to think that 'the cloud' is infinitely elastic; yes, in theory, but in practice racks of servers are powered down when unused, and there is an army of guys around the world with box-cutters and screwdrivers racking servers.  There are well over 10,000 subscribers to media services; if everyone asks for 5 reserved units, we'll be keeping those guys busy.

Back to the original question:

The original poster was looking for the quickest path to putting user-contributed content back out there.  That works if the content is 'just right'.  But it tends to go sideways quickly: all your users are capturing video in different ways, using different codecs and container formats (mov != mp4 except in special cases).  By encoding each of these, even to a very simple (quicker) single-bitrate encoding profile, you create uniformity in the content that you are streaming.  Without uniformity, you will get random failures that will be hard to trace: "this video plays on iOS, but not Android", "my videos won't stream at all"; and you'll spend all sorts of hours tracking these down.  Trust me, I do this for customers all week long; we are taking on this burden so you don't have to.  Skipping the encode step seems like the quickest path, but it will bring you pain.  Pay for encoding reserved units ($99/mo) and you will be able to manage your queue, and mostly, you won't sit in line with everyone else in the datacenter.

Dynamic Packaging Server Manifests

When you encode to multi-bitrate mp4 with the Windows Azure Media Encoder, the system will produce a server manifest for you.  If you are familiar with the smooth streaming format, this is the .ism file that essentially says: this file is a video track, so is this file, and this file, and this one is an audio track.  When Dynamically Packaging from MP4, you need to explicitly tell the server that these input files are MP4s.  This is done with some metadata in the <head/> section:

 <?xml version="1.0" encoding="utf-8"?>
  <smil xmlns="http://www.w3.org/2001/SMIL20/Language">
  <head>
   <meta name="formats" content="mp4" />
  </head>
  <body>
   <switch>
    <video src="yourFile.mp4" />
    <audio src="yourFile.mp4" />
   </switch>
  </body>
 </smil>

In the case of several MP4 files, each muxed with the same audio track, it would look like this:

<?xml version="1.0" encoding="utf-8"?>
 <smil xmlns="http://www.w3.org/2001/SMIL20/Language">
  <head>
   <meta name="formats" content="mp4" />
  </head>
  <body>
   <switch>
    <video src="yourFile_BR1.mp4" />
    <video src="yourFile_BR2.mp4" />
    <video src="yourFile_BR3.mp4" />
    <video src="yourFile_BR4.mp4" />
    <audio src="yourFile_BR1.mp4" />
   </switch>
  </body>
 </smil>

But is it going to work?

The original poster did not want to wait in a queue before he started streaming.  The trouble is, he will only find out whether he has problems once he's actually streaming.  Unless he sets up a quality check of some sort, he is only going to find out because his users tell him, or he may only find out when they abandon his site altogether; which, unfortunately, is more typical of internet users: "if it doesn't work, I have better things to do with my time".

To mitigate this risk, we have set up an MP4 Preprocessor task within the existing Windows Azure Media Packager.  It analyses the input files and checks that they are streamable to Smooth and HLS.  You can make the task fail if the input cannot be streamed to your desired format.  I'd rather not maintain code snippets for it here in my blog, but it's quite easy to use if you've built any sort of encoding/packaging workflow against WAMS.  It is documented on MSDN in the Dynamic Packaging section.
If you're not going to encode with WAMS, you should at least check your files.  The actual runtime for checking an asset is less than a minute, even for an asset as large as a few gigs (set-up time to run the task varies proportionally to asset size); you do need a slot in the queue, however.  The idea is to fail early, before streams are out there on your web property.

So how do I stream?

Just create a locator and append your .ism file and manifest type.
Create the locator:

var accessPolicy = _context.AccessPolicies.Create(assetName, TimeSpan.FromDays(365), AccessPermissions.Read );
var locator = _context.Locators.CreateLocator(LocatorType.OnDemandOrigin, asset, accessPolicy);

Then for smooth:

UriBuilder ub = new UriBuilder(locator.Path);
ub.Path += "/yourFile.ism/manifest";
Uri smoothUri = ub.Uri;

or for HLS:

UriBuilder ub = new UriBuilder(locator.Path);
ub.Path += "/yourFile.ism/manifest(format=m3u8-aapl)";
Uri hlsUri = ub.Uri;

That’s it, it just works.  If you’re interested in the techy details, read on.

At runtime, the server gets the manifest request, peeks into the .ism file, and finds the files you have listed.  It then opens each one, looks up the video frame information, looks for synchronization points across tracks, and builds the manifest response accordingly.  When it gets fragment requests, it goes back into the video file for that quality level, finds the required video frames, and builds the response from the raw compressed frames into Smooth or HLS, as per the request.


Taking storage and streaming for a spin with Transform Manager

So perhaps you’re thinking:

I’m not ready to move my encoding workflow to the cloud, but I’m interested in exploring storage and streaming alternatives.

Ok, I understand where you're coming from.  I've worked with some of the largest content owners, and getting it just right is not trivial.  Or perhaps your studio contracts do not allow you to egress media assets unless you've applied the stipulated digital rights management encryption (PlayReady, I hope!).

Many large content owners have much more flexibility in their choice of storage, origin service and CDN providers.  In this blog, I’ll touch on leveraging storage and origin services in Windows Azure Media Services without significantly altering your asset creation workflow, or writing any lines of code.

Prior to focusing on the cloud, our Media Services team had all been heads-down on various on-premise products and client frameworks.  I had been working on Transform Manager.  It's an extensible media workflow tool which also offers a few transmuxing capabilities: MP4 to Smooth Streaming, clear Smooth Streaming to PlayReady-protected Smooth Streaming, and (clear or PlayReady) Smooth Streaming to HLS.  It also integrates nicely with Expression Encoder and our HPC cluster technologies.  While I can't mention who uses it, I estimate it lights up a significant proportion of on-premise workflows worldwide.  So, perhaps you're one of the tens of thousands who have downloaded and are using Transform Manager.  Great, but that doesn't help your storage and streaming needs.

Move storage and streaming to the cloud!

Since TM is an extensible framework, anyone can build additional workflow tasks.  Given my knowledge of the product, I decided to write one myself.

You can find it on codeplex at:  http://createassettask.codeplex.com

What’s great about it?

  • TM is a watch-folder based tool: drop the files in a folder, and the workflow is kicked off.  No coding, no scripting.
  • Just enter your Media Services credentials using the Transform Manager user interface.

What does it do?

  • This task will create and upload a media asset to Windows Azure Media Services.
  • It can also create a streaming URL for that asset at the same time.
  • It outputs a json object with the data you'll need to use or stream the asset.

For existing TM users: you can chain this task at the end of your usual workflow.  Just send a copy of the asset up to the cloud, get a streaming URL and give it to your QA team.  It’s that easy.

For CMS users: you can set your CMS’s media id into the “IAsset.AlternateId” field.  This will allow you to use your CMS id to make queries against your Media Services account to find the asset again.  Otherwise, the task can use the TM job ID as the AlternateId, allowing you to correlate TM jobs to Media Services Assets.
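
As a sketch (assuming a CloudMediaContext named context, and a yourCmsId string holding the id your CMS recorded), finding the asset again might look like:

// Hypothetical lookup of an asset by the CMS id stored in AlternateId.
string yourCmsId = "cms-12345"; // placeholder
var asset = context.Assets
    .Where(a => a.AlternateId == yourCmsId)
    .FirstOrDefault();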

The output of the task is a json object with the

  • Asset.Id
  • Asset.AlternateId
  • PrimaryFile.Name
  • PrimaryFileUri (if you asked for one)

So you can pick up the json object with a web-page and start playing it, or read it with your CMS to pipe this information back into your system.
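
Shape-wise, it resembles the following (the property names here are purely illustrative; see the task source for the exact ones):

 {
   "AssetId": "nb:cid:UUID:00000000-0000-0000-0000-000000000000",
   "AlternateId": "your-cms-id",
   "PrimaryFileName": "yourFile.ism",
   "PrimaryFileUri": "http://youraccount.origin.mediaservices.windows.net/yourLocator/yourFile.ism"
 }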

You do have to build it yourself in Visual Studio; I haven't released it as a binary.  All the build instructions are at the top of the main CreateAssetTask.cs file.  The parameters are explained in CreateAssetTask.xml and will be visible in the Transform Manager user interface.

It would be easy to modify this task to run encode jobs in the cloud after uploading, but that would be a whole other blog post.

If you just need some sample code to build your own TM task, or to see how to upload an asset or get an origin url, go ahead and use it for that too.


So you want to build your own YouTube?

Here is a reference architecture for building out a user-contributed video gallery using Windows Azure Media Services.

Not a lot of time to blog this week; perhaps I'll have time in the future to really work this scenario over with full examples, including the web and worker roles, Azure tables and such.

So there's the presentation, and below is some code to provide the basic interaction with Media Services.

Note, when using a browser client to PUT directly to storage, you will need a client framework such as Flash or Silverlight and a crossdomain.xml / clientaccesspolicy.xml in the $root folder of your storage account to avoid CORS issues with the simpler XMLHttpRequest.send().
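
For reference, a minimal crossdomain.xml might look like the following; this is a sketch (yourdomain.com is a placeholder), and you should scope it as tightly as your scenario allows:

 <?xml version="1.0"?>
 <!DOCTYPE cross-domain-policy SYSTEM "http://www.adobe.com/xml/dtds/cross-domain-policy.dtd">
 <cross-domain-policy>
   <!-- Illustrative only: allow the domain hosting your player to call this storage account -->
   <allow-access-from domain="yourdomain.com" />
   <allow-http-request-headers-from domain="yourdomain.com" headers="*" />
 </cross-domain-policy>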

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.IO;
using System.Configuration;

/// Microsoft.WindowsAzure.MediaServices.Client
///
/// Reference:
///  C:\Program Files (x86)\Microsoft WCF Data Services\5.0\bin\.NETFramework\Microsoft.Data.Edm.dll
///  C:\Program Files (x86)\Microsoft WCF Data Services\5.0\bin\.NETFramework\Microsoft.Data.OData.dll
///  C:\Program Files (x86)\Microsoft WCF Data Services\5.0\bin\.NETFramework\Microsoft.Data.Services.dll
///  C:\Program Files (x86)\Microsoft WCF Data Services\5.0\bin\.NETFramework\Microsoft.Data.Services.Client.dll
///  C:\Program Files (x86)\Microsoft WCF Data Services\5.0\bin\.NETFramework\System.Spatial.dll
/// Download from: http://www.microsoft.com/en-us/download/details.aspx?id=29306
///
///  C:\Program Files (x86)\Microsoft SDKs\Windows Azure Media Services\Services\v1.0\Microsoft.WindowsAzure.MediaServices.Client.dll
/// Download from: http://www.microsoft.com/en-us/download/details.aspx?id=30153
///
///  C:\Program Files\Windows Azure SDK\v1.6\bin\Microsoft.WindowsAzure.StorageClient.dll
/// Download from WebPI: http://go.microsoft.com/fwlink/?linkid=255386
///                      On the Products tab select All, then find and install the
///                      Windows Azure SDK 1.6 for Visual Studio 2010 (November 2011).
///
/// General Support:
///  Windows Azure Media Services Forums: http://social.msdn.microsoft.com/Forums/da-dk/MediaServices/threads
///
using Microsoft.WindowsAzure.MediaServices.Client;
using Microsoft.WindowsAzure.StorageClient;
using Microsoft.WindowsAzure;

namespace WindowsAzureMediaServices.Helper
{
    public class MediaHelper
    {
        private static string thumbnailToken = "_thumbnail";
        private static string smoothToken = "_smooth";
        private static string windowsAzureMediaEncoderName = "Windows Azure Media Encoder";

        /// <summary>
        /// Call this once per thread to get the context for any further calls. Keep the object and use it for all calls from that thread.
        /// </summary>
        /// <param name="info">Initialized helper configuration.</param>
        /// <returns>Context object to be passed to other helper functions for this thread.</returns>
        public static CloudMediaContext GetContext(HelperConfigInfo info)
        {
            if (!info.IsInitialized)
                throw new ArgumentNullException("info");

            CloudMediaContext context = null;

            if (!String.IsNullOrEmpty(info.scope))
            {
                context = new CloudMediaContext(new Uri(info.apiServer), info.wamsAccountName, info.wamsAccountKey, info.scope, info.acsBaseAddress);
            }
            else if (!String.IsNullOrEmpty(info.apiServer))
            {
                context = new CloudMediaContext(new Uri(info.apiServer), info.wamsAccountName, info.wamsAccountKey);
            }
            else
            {
                context = new CloudMediaContext(info.wamsAccountName, info.wamsAccountKey);
            }
            return context;
        }

        /// <summary>
        /// Gets the base uri with SAS token for uploads
        /// </summary>
        /// <param name="info">Initialized helper configuration.</param>
        /// <param name="idForThisMedia">Your unique id for this media item.</param>
        /// <returns>Base upload Uri; append the filename to the path using a UriBuilder: http://server/path?SasToken</returns>
        public static Uri GetUploadUri(HelperConfigInfo info, Guid idForThisMedia)
        {
            if (info == null)
                throw new ArgumentNullException("info");
            if(!info.IsInitialized)
                throw new ArgumentException("make sure info is initialized");
            if (idForThisMedia == Guid.Empty)
                throw new ArgumentException("Provide a non-empty Guid for idForThisMedia");

            CloudMediaContext context = GetContext(info);
            if (context == null)
                throw new ApplicationException("failed to get context");

            // Check if this asset exists:
            IAsset duplicate = GetAssetByName(context, idForThisMedia.ToString());
            if (duplicate != null)
            {
                throw new ArgumentException("idForThisMedia is being re-used.");
            }

            Uri uploadUri = null;

            //
            // Create an empty asset
            //
            IAsset inputAsset = context.Assets.CreateEmptyAsset(idForThisMedia.ToString(), AssetCreationOptions.None);

            //
            // Get a SAS url:
            //
            IAccessPolicy writePolicy = context.AccessPolicies.Create("Policy For Copying", TimeSpan.FromDays(info.accessPolicyDurationInDays), AccessPermissions.Write | AccessPermissions.List);
            ILocator destinationLocator = context.Locators.CreateSasLocator(inputAsset, writePolicy, DateTime.UtcNow.AddMinutes(info.locatorStartTimeInMinutes));

            uploadUri = new Uri(destinationLocator.Path);

            return uploadUri;
        }

        /// <summary>
        /// Disables all SAS locators on the asset
        /// </summary>
        /// <param name="info">Initialized helper configuration.</param>
        /// <param name="idForThisMedia">Your unique id for this media item.</param>
        public static void DisableUploadUri(HelperConfigInfo info, Guid idForThisMedia)
        {
            if (info == null)
                throw new ArgumentNullException("info");
            if (!info.IsInitialized)
                throw new ArgumentException("make sure info is initialized");
            if (idForThisMedia == Guid.Empty)
                throw new ArgumentException("Provide a non-empty Guid for idForThisMedia");

            CloudMediaContext context = GetContext(info);
            if (context == null)
                throw new ApplicationException("failed to get context");

            // Check if this asset exists:
            IAsset asset = GetAssetByName(context, idForThisMedia.ToString());
            if (asset != null)
            {
                foreach (var locator in asset.Locators)
                {
                    if (locator.Type == LocatorType.Sas)
                    {
                        context.Locators.Revoke(locator);
                    }
                }
            }
        }

        public static void RunJob(HelperConfigInfo info, Guid idForThisMedia)
        {
            if (info == null)
                throw new ArgumentNullException("info");
            if (!info.IsInitialized)
                throw new ArgumentException("make sure info is initialized");
            if (idForThisMedia == Guid.Empty)
                throw new ArgumentException("Provide a non-empty Guid for idForThisMedia");

            CloudMediaContext context = GetContext(info);
            if (context == null)
                throw new ApplicationException("failed to get context");

            //
            // Get Input Asset
            //
            IAsset inputAsset = GetAssetByName(context, idForThisMedia.ToString());
            if (inputAsset == null)
                throw new ApplicationException("failed to get asset");

            //
            // Publish it
            //
            //This is required to search for and add files that were added by the StorageClient copy/upload calls.
            inputAsset.Publish(); //May throw if there are no files.

            //
            // Check file-count, should be 1
            //
            inputAsset = GetAssetByName(context, idForThisMedia.ToString());
            if (inputAsset.Files.Count() != 1)
                throw new ApplicationException("Asset should have one file, idForThisMedia:" + idForThisMedia); //This will throw at the thread callback level, catch at app level.

            //
            // Create a job, name it idForThisMedia
            //
            IJob job = context.Jobs.Create(idForThisMedia.ToString());

            //
            // Get the media processor
            //
            IMediaProcessor windowsAzureMediaEncoder = (from a in context.MediaProcessors
                                                        where a.Name == windowsAzureMediaEncoderName
                                                        select a).First();

            //
            // Add the Encode task.
            //
            ITask encodeTask = job.Tasks.AddNew(idForThisMedia.ToString() + smoothToken,
                                                    windowsAzureMediaEncoder,
                                                    info.windowsAzureMediaEncoder_EncodeConfig,
                                                    TaskCreationOptions.None);
            encodeTask.InputMediaAssets.Add(inputAsset);
            encodeTask.OutputMediaAssets.AddNew(idForThisMedia.ToString() + smoothToken, true, AssetCreationOptions.None);

            //
            // Add the Thumbnail task.
            //
            ITask thumbnailTask = job.Tasks.AddNew(idForThisMedia.ToString() + thumbnailToken,
                                                    windowsAzureMediaEncoder,
                                                    info.windowsAzureMediaEncoder_ThumbnailConfig,
                                                    TaskCreationOptions.None);
            thumbnailTask.InputMediaAssets.Add(inputAsset);
            thumbnailTask.OutputMediaAssets.AddNew(idForThisMedia.ToString() + thumbnailToken, true, AssetCreationOptions.None);

            //
            // Add this job to the job queue
            //
            job.Submit();

            //
            // The output assets are not actually created until the job is submitted.
            // At that point, they can be refreshed and updated.
            //
            IAsset smoothAsset = job.OutputMediaAssets[0];
            smoothAsset = RefreshAsset(context, smoothAsset.Id);
            smoothAsset.Name = idForThisMedia.ToString() + smoothToken;
            context.Assets.Update(smoothAsset);
            IAsset thumbnailAsset = job.OutputMediaAssets[1];
            thumbnailAsset = RefreshAsset(context, thumbnailAsset.Id);
            thumbnailAsset.Name = idForThisMedia.ToString() + thumbnailToken;
            context.Assets.Update(thumbnailAsset);
        }

        /// <summary>
        /// Creates or returns an existing SAS Url for the thumbnail.jpg. Usable immediately.
        /// </summary>
        /// <param name="info">Initialized helper configuration.</param>
        /// <param name="idForThisMedia">Your unique id for this media item.</param>
        /// <returns>Usable URL or null.</returns>
        public static Uri GetThumbnailUri(HelperConfigInfo info, Guid idForThisMedia)
        {
            if (info == null)
                throw new ArgumentNullException("info");
            if (!info.IsInitialized)
                throw new ArgumentException("make sure info is initialized");
            if (idForThisMedia == Guid.Empty)
                throw new ArgumentException("Provide a non-empty Guid for idForThisMedia");

            CloudMediaContext context = GetContext(info);
            if (context == null)
                throw new ApplicationException("failed to get context");

            Uri thumbnailUri = null;

            //
            // Get the asset
            //
            IAsset asset = GetAssetByName(context, idForThisMedia + thumbnailToken);
            if (asset == null)
                throw new ApplicationException("Could not find asset using idForThisMedia: " + idForThisMedia);

            //
            // Find the .jpg in the asset
            //
            IFileInfo jpgFile = (from f in asset.Files
                                 where f.Name.EndsWith("2.jpg")
                                 select f).FirstOrDefault();
            if (jpgFile == null)
                throw new ApplicationException("Could not find a .jpg file in the asset.id: " + asset.Id);

            //
            // Look for an existing locator
            //
            ILocator locator = null;
            var locators = from rows in asset.Locators where rows.Type == LocatorType.Sas orderby rows.ExpirationDateTime select rows;
            if (locators != null && locators.Count() > 0)
            {
                //Get the one that expires last:
                locator = locators.LastOrDefault();
            }

            //
            // Check the existing locator
            //
            //TODO: Test this logic
            if (locator != null)
            {
                //Check if it will expire within _locatorRecreationThresholdDays
                if (locator.ExpirationDateTime < DateTime.UtcNow.AddDays(_locatorRecreationThresholdDays))
                {
                    //Expiring soon: discard it so a fresh locator is created below.
                    locator = null;

                    //Assets may only carry a limited number of locators; revoke the
                    //earliest-expiring one if we are already at the limit.
                    if (locators.Count() >= maxLocators)
                    {
                        var earliest = locators.FirstOrDefault();
                        if (earliest != null)
                        {
                            context.Locators.Revoke(earliest);
                        }
                    }
                }
            }

            //
            // Create a locator if required.
            //
            if (locator == null)
            {
                var accessPolicyTimeout = TimeSpan.FromDays(info.accessPolicyDurationInDays);
                IAccessPolicy readPolicy = context.AccessPolicies.Create("Read Policy " + idForThisMedia + thumbnailToken, accessPolicyTimeout, AccessPermissions.Read);
                var startTime = DateTime.UtcNow.AddMinutes(info.locatorStartTimeInMinutes);
                locator = context.Locators.CreateSasLocator(asset, readPolicy, startTime);
            }

            //
            // Build Uri
            //
            if (locator != null)
            {
                UriBuilder ub = new UriBuilder(locator.Path);
                ub.Path += "/" + jpgFile.Name;
                thumbnailUri = ub.Uri;
            }

            return thumbnailUri;
        }

        ///<summary>
        /// Creates or returns an existing Origin Url for the smooth asset. Usable after 30 seconds.
        /// </summary>
        /// <param name="info"></param>
        /// <param name="idForThisMedia"></param>
        /// <returns>Usable URL or null</returns>
        public static Uri GetSmoothStreamingUri(HelperConfigInfo info, Guid idForThisMedia)
        {

            if (info == null)
                throw new ArgumentNullException("info");
            if (!info.IsInitialized)
                throw new ArgumentException("make sure info is initialized");
            if (idForThisMedia == Guid.Empty)
                throw new ArgumentException("Provide a non-empty Guid for idForThisMedia");

            CloudMediaContext context = GetContext(info);
            if (context == null)
                throw new ApplicationException("failed to get context");

            Uri smoothUri = null;

            //
            // Get the asset
            //
            IAsset asset = GetAssetByName(context, idForThisMedia + smoothToken);
            if (asset == null)
                throw new ApplicationException("Could not find asset using idForThisMedia: " + idForThisMedia);

            //
            // Find the .ism in the asset
            //
            IFileInfo ismFile = (from f in asset.Files
                                 where f.Name.EndsWith(".ism")
                                 select f).FirstOrDefault();
            if (ismFile == null)
                throw new ApplicationException("Could not find a .ism file in the asset.id: " + asset.Id);

            //
            // Look for an existing locator
            //
            ILocator locator = null;
            var locators = from rows in asset.Locators where rows.Type == LocatorType.Origin orderby rows.ExpirationDateTime select rows;
            if (locators != null && locators.Count() > 0)
            {
                //Get the one that expires last:
                locator = locators.LastOrDefault();
            }

            //
            // Check the existing locator
            //
            //TODO: Test this logic
            if (locator != null)
            {
                //Check if it will expire within _locatorRecreationThresholdDays
                if (locator.ExpirationDateTime < DateTime.UtcNow.AddDays(_locatorRecreationThresholdDays))
                {
                    //Locators can't be updated; drop this one so a fresh one is created below.
                    locator = null;

                    //An asset can only hold a limited number of locators, so
                    //revoke the earliest-expiring one if we are at the cap.
                    if (locators.Count() >= maxLocators)
                    {
                        var earliest = locators.FirstOrDefault();
                        if (earliest != null)
                        {
                            context.Locators.Revoke(earliest);
                        }
                    }
                }
            }

            //
            // Create a locator if required.
            //
            if (locator == null)
            {
                var accessPolicyTimeout = TimeSpan.FromDays(info.accessPolicyDurationInDays);
                IAccessPolicy readPolicy = context.AccessPolicies.Create("Read Policy " + idForThisMedia + smoothToken, accessPolicyTimeout, AccessPermissions.Read);
                var startTime = DateTime.UtcNow.AddMinutes(info.locatorStartTimeInMinutes);
                locator = context.Locators.CreateWindowsAzureCdnLocator(asset, readPolicy, startTime);
            }

            //
            // Build Uri
            //
            if (locator != null)
            {
                UriBuilder ub = new UriBuilder(locator.Path);
                ub.Path +=  ismFile.Name + "/manifest";
                smoothUri = ub.Uri;
            }

            return smoothUri;
        }

        ///<summary>
        /// Query for a fresh reference to a job during threaded operations.
        /// </summary>
        /// <param name="context"></param>
        /// <param name="jobName"></param>
        /// <returns>The job or null</returns>
        public static IJob GetJobByName(CloudMediaContext context, string jobName)
        {
            // Use a Linq select query to get an updated reference by Id.
            IJob theJob = (from j in context.Jobs
                           where j.Name == jobName
                           select j).FirstOrDefault();
            return theJob;
        }

        ///<summary>
        /// Deletes the associated assets.
        /// </summary>
        /// <param name="context"></param>
        /// <param name="idForThisMedia"></param>
        /// <returns>"Deleted" or "Input=DeleteResult, Smooth=DeleteResult, Thumbnail=DeleteResult"</returns>
        public static string DeleteAllRelatedAssets(CloudMediaContext context, Guid idForThisMedia)
        {
            string result = string.Empty;

            //
            // Validate input params:
            //
            if (context == null)
                throw new ArgumentNullException("context");
            if (idForThisMedia == Guid.Empty)
                throw new ArgumentException("Provide a non-empty Guid for idForThisMedia");

            String inputResult = DeleteAssetByName(context, idForThisMedia.ToString());
            String smoothResult = DeleteAssetByName(context, idForThisMedia + smoothToken);
            String thumbnailResult = DeleteAssetByName(context, idForThisMedia + thumbnailToken);

            if (inputResult == "Deleted" &&
                smoothResult == "Deleted" &&
                thumbnailResult == "Deleted")
            {
                //Worked
                result = "Deleted";
            }
            else
            {
                //Something failed, give full results:
                result = "Input=" + inputResult + ", Smooth=" + smoothResult + ", Thumbnail=" + thumbnailResult;
            }

            return result;
        }

        #region Internal Helper functions

        ///<summary>
        /// Deletes the asset and the associated locators.
        /// </summary>
        /// <param name="context"></param>
        /// <param name="name"></param>
        /// <returns>Deleted or error.</returns>
        internal static string DeleteAssetByName(CloudMediaContext context, string name)
        {
            IAsset asset = GetAssetByName(context, name);

            if (asset == null)
                return "NotFound";

            try
            {
                foreach (ILocator locator in asset.Locators)
                {
                    context.Locators.Revoke(locator);
                }
                int numContentKeys = asset.ContentKeys.Count();
                //Remove from the end so the shifting indexes don't skip any keys.
                for (int i = numContentKeys - 1; i >= 0; i--)
                {
                    asset.ContentKeys.RemoveAt(i);
                }
                context.Assets.Delete(asset);
            }
            catch (Exception e)
            {
                return "Failed: " + e.Message;
            }

            return "Deleted";
        }

        ///<summary>
        /// Query for a fresh reference to an asset during threaded operations.
        /// </summary>
        /// <param name="context"></param>
        /// <param name="name"></param>
        /// <returns>The asset or null.</returns>
        internal static IAsset GetAssetByName(CloudMediaContext context, string name)
        {
            // Use a Linq select query to get an updated reference by Id.
            IAsset theAsset = (from a in context.Assets
                               where a.Name == name
                               select a).FirstOrDefault();
            return theAsset;
        }

        ///<summary>
        /// Query for a fresh reference to an asset during threaded operations.
        /// </summary>
        /// <param name="context"></param>
        /// <param name="assetId"></param>
        /// <returns>The asset or null.</returns>
        internal static IAsset RefreshAsset(CloudMediaContext context, string assetId)
        {
            // Use a Linq select query to get an updated reference by Id.
            IAsset theAsset = (from a in context.Assets
                              where a.Id == assetId
                              select a).FirstOrDefault();
            return theAsset;
        }

        #endregion

    }
}

Here is a bit of code that uses the above in a command-line app.
The command-line app consolidates the client, the upload web-role, the RunJob worker, and the Publishing worker.
There are no cross-domain hurdles when using a full .NET client for the PUT to storage; as noted above, you need another strategy in-browser.

Dictionary<Guid, string> files = new Dictionary<Guid, string>();
files.Add(Guid.NewGuid(), @"D:\MS\CONTENT\1.MOV");
files.Add(Guid.NewGuid(), @"D:\MS\CONTENT\2.MOV");
files.Add(Guid.NewGuid(), @"D:\MS\CONTENT\3.MOV");

foreach (var file in files)
{
    Console.WriteLine(file.ToString()); //Track these guids in your content management system.
}

System.Threading.Tasks.Parallel.ForEach(files, file =>
{
    try
    {

        //Get the upload Url
        Uri uploadUrl = MediaHelper.GetUploadUri(info, file.Key);

        //Use it.  In this test, we just upload a file using a PUT.
        //But you could pass it down to an app so that it can handle the upload.
        if (!DoSomethingWithUploadUrl(info, uploadUrl, file.Value))
            return;

        //Disable the upload Url:
        MediaHelper.DisableUploadUri(info, file.Key);

        //Now we verify the asset and kick off the encode:
        MediaHelper.RunJob(info, file.Key);

        //Wait for the job:
        bool success = false;
        {
            CloudMediaContext threadContext = MediaHelper.GetContext(info);
            success = CheckJobProgress(threadContext, file.Key.ToString());
        }

        if (success)
        {
            Uri jpgUrl = MediaHelper.GetThumbnailUri(info, file.Key);
            Console.WriteLine("Jpg: " + jpgUrl.ToString());
            DoSomethingWithJpgUrl(jpgUrl);

            Uri smoothUrl = MediaHelper.GetSmoothStreamingUri(info, file.Key);
            Console.WriteLine("Smooth: " + smoothUrl.ToString());
            System.Threading.Thread.Sleep(30000); //Wait 30s for the streaming servers to get the locator table updates.
            DoSomethingWithStreamingUrl(smoothUrl);
        }
        else
        {
            Console.WriteLine("Main thread simple test failed.");
        }
    }
    catch (Exception e)
    {
        Console.WriteLine(e.Message);
    }
});

And for completeness, here are the functions used in the above:

        #region TestApp Specific

        //Derive from WebClient so that you can override base class settings:
        public class WebUpload : WebClient
        {
            protected override WebRequest GetWebRequest(Uri address)
            {
                WebRequest request = (WebRequest)base.GetWebRequest(address);
                // Perform any customizations on the request.
                request.Timeout = 600000; //Set upload timeout to 10min
                return request;
            }

        }
        private static bool DoSomethingWithUploadUrl(HelperConfigInfo info, Uri uploadUrl, string mediaFile)
        {
            string fileName = Path.GetFileName(mediaFile);

            UriBuilder ub = new UriBuilder(uploadUrl);
            ub.Path += "/" + fileName;

            Uri fullUploadUrl = ub.Uri;

            bool uploadOK = false;
            WebUpload wc = new WebUpload();
            try
            {
                Console.WriteLine(mediaFile + " upload start " + DateTime.Now);
                byte[] response = wc.UploadFile(fullUploadUrl, "PUT", mediaFile);
                Console.WriteLine(mediaFile + " upload done " + DateTime.Now);
                UTF8Encoding enc = new UTF8Encoding();
                string resp = enc.GetString(response);
                Console.WriteLine(resp);
                uploadOK = true;
            }
            catch (Exception e)
            {
                Console.WriteLine(e.Message);
            }
            return uploadOK;
        }

        private static void DoSomethingWithJpgUrl(Uri jpgUrl)
        {
            WebClient wc = new WebClient();
            byte[] responseData = null;
            try
            {
                responseData = wc.DownloadData(jpgUrl);
            }
            catch (Exception e)
            {
                Console.WriteLine("Failed to download jpg: " + e.Message);
            }
            if (responseData == null || responseData.Length == 0)
            {
                Console.WriteLine("Failed to download jpg: no data.");
            }
            else
            {
                using (FileStream fs = new FileStream("c:\\temp\\output\\mainThreadUserMediaGuid.jpg", FileMode.Create))
                using (BinaryWriter br = new BinaryWriter(fs))
                {
                    br.Write(responseData);
                }
            }
        }

        private static void DoSomethingWithStreamingUrl(Uri smoothUrl)
        {
            WebClient wc = new WebClient();
            byte[] responseData = null;
            try
            {
                responseData = wc.DownloadData(smoothUrl);
            }
            catch (Exception e)
            {
                Console.WriteLine("Failed to download manifest: " + e.Message);
            }
            if (responseData == null || responseData.Length == 0)
            {
                Console.WriteLine("Failed to download manifest: no data.");
            }
            else
            {
                using (FileStream fs = new FileStream("c:\\temp\\output\\mainThreadUserMediaGuid.ismc", FileMode.Create))
                using (BinaryWriter br = new BinaryWriter(fs))
                {
                    br.Write(responseData);
                }
            }
        }
        #endregion

        /// <summary>
        /// Expected polling interval in milliseconds.  Adjust this interval as needed based on estimated job completion times.
        /// </summary>
        const int _JobProgressInterval = 20000;

        /// <summary>
        /// Check the job progress and wait for completion or failure.  If job does not exist, this will wait in the queued state: for async logic.
        /// </summary>
        /// <param name="context"></param>
        /// <param name="jobName"></param>
        /// <returns></returns>
        public static bool CheckJobProgress(CloudMediaContext context, string jobName)
        {
            // Flag to indicate when job state is finished.
            bool jobCompleted = false;
            bool success = false;
            JobState state = JobState.Queued;

            double loops = 0;
            while (!jobCompleted)
            {
                // Get state:
                IJob job = MediaHelper.GetJobByName(context, jobName);
                if (job == null)
                {
                    state = JobState.Queued; //Force to Queued
                    Console.WriteLine("Job not found, force to Queued State");
                }
                else
                {
                    state = job.State;
                }

                // Report:
                Console.WriteLine(jobName + " " + state + " elapsed time: " + loops * _JobProgressInterval / 1000.0);
                loops++;

                // Check:
                switch (state)
                {
                    case JobState.Finished:
                        jobCompleted = true;
                        success = true;
                        break;
                    case JobState.Queued:
                    case JobState.Scheduled:
                    case JobState.Processing:
                        //Do nothing.
                        break;
                    case JobState.Error:
                        jobCompleted = true;

                        // Dig into the main MediaServices.Client for error handling:
                        if (job != null)
                        {
                            foreach (var task in job.Tasks)
                            {
                                var ed = task.ErrorDetails;
                                if (ed != null)
                                {
                                    foreach (var item in ed)
                                    {
                                        Console.WriteLine(String.Format("Job failed while building the asset. Error code: {0} Message: {1}", item.Code, item.Message));
                                    }
                                }
                            }
                        }
                        break;
                    default:
                        // Not normal to be here!
                        jobCompleted = true;
                        break;
                }

                // Wait for the specified job interval before checking state again.
                if (!jobCompleted)
                    System.Threading.Thread.Sleep(_JobProgressInterval);
            }
            return success;
        }

Posted in Windows Azure Media Services | Tagged | 8 Comments

Creating a simple media asset

First things first, let's get a file into Windows Azure Media Services so that we can leverage all that cloud power.

Today we look at the following steps using the Media Services Client .Net SDK:

  • Getting a CloudMediaContext object.
  • Creating an Asset

That’s it, two easy steps.

But before you learn to walk, there’s a bit of crawling to do.  You need to have three bits of software installed to build against the Media Services .Net SDK:

WCF Data Services 5.0

Reference:
 C:\Program Files (x86)\Microsoft WCF Data Services\5.0\bin\.NETFramework\Microsoft.Data.Edm.dll
 C:\Program Files (x86)\Microsoft WCF Data Services\5.0\bin\.NETFramework\Microsoft.Data.OData.dll
 C:\Program Files (x86)\Microsoft WCF Data Services\5.0\bin\.NETFramework\Microsoft.Data.Services.dll
 C:\Program Files (x86)\Microsoft WCF Data Services\5.0\bin\.NETFramework\Microsoft.Data.Services.Client.dll
 C:\Program Files (x86)\Microsoft WCF Data Services\5.0\bin\.NETFramework\System.Spatial.dll
Download from: http://www.microsoft.com/en-us/download/details.aspx?id=29306

Azure Storage Client

Reference:
 C:\Program Files\Windows Azure SDK\v1.6\bin\Microsoft.WindowsAzure.StorageClient.dll
Download from WebPI: http://go.microsoft.com/fwlink/?linkid=255386  
                     On the Products tab select All, then find and install the 
                     Windows Azure SDK 1.6 for Visual Studio 2010 (November 2011).
Yes, we ARE moving toward 1.7 SP1.

Media Services SDK

Reference:
 C:\Program Files (x86)\Microsoft SDKs\Windows Azure Media Services\Services\v1.0\Microsoft.WindowsAzure.MediaServices.Client.dll
Download from: http://www.microsoft.com/en-us/download/details.aspx?id=30153

Here is a little bit more about each step of building that first media asset:

1. What is a CloudMediaContext?

The .Net SDK does a number of things for you so you don't need to be an expert at making REST calls.  In addition to surfacing all the REST functionality, it simplifies things by managing connections, redirects, uploads and downloads, providing storage encryption prior to uploads, and securely transferring configuration information and keys.  It also gives you the ability to enumerate your media objects:

  • Assets and their associated objects: Files, Locators, ContentKeys
  • Jobs and their associated objects: Tasks, Configurations, Assets
  • AccessPolicies and Media Processors

To make all this happen, the CloudMediaContext keeps state: your user credentials, connectivity information, and the data of recently queried objects.  It is important to understand that this state can become stale if other threads or the Media Services act upon the objects.  In some cases you can simply re-query the data; in others, it's best to discard the Context and request a new one.
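If you do hit staleness, here is a minimal sketch of both recovery options; staleAsset, accountName and accountKey are placeholders for your own values:

//Option 1: re-query by Id to get a fresh copy of the object.
IAsset freshAsset = (from a in context.Assets
                     where a.Id == staleAsset.Id
                     select a).FirstOrDefault();

//Option 2: when the whole context is suspect, discard it and build a new one.
context = new CloudMediaContext(accountName, accountKey);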

To get a CloudMediaContext, you need your Media Services credentials.  You can retrieve these using the Azure Management Portal.  In an earlier blog post, you created your account.  Now let's go back and get the credentials you need to use it: log in to the Azure Management Portal.

Click on the Media Services icon on the far left.  Choose your Media Service account, in my case ndrouineast, and click Manage Keys at the bottom.

Take note of the Media Service name and the Primary Media Service Access Key, you will need those in the simple call:

CloudMediaContext context = new CloudMediaContext(
             "ndrouineast",
             "H6sdfsdfsdfsdfsdfsdfsdfsdfsdfsdfsdfsdfsdff4=");

Now that you have the context, you can do all sorts of things.
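For instance, listing what you own is plain LINQ over the context collections; a quick sketch, assuming the context above:

foreach (IAsset a in context.Assets)
    Console.WriteLine(a.Name + " : " + a.Id);

foreach (IJob j in context.Jobs)
    Console.WriteLine(j.Name + " : " + j.State);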

2. Create a new Asset using context.AssetCollection

Lots of ways to create an asset:

  • Create a new empty asset then add files to it.
  • Use some of the convenience features of the SDK to both create and upload at once.
  • Run a Job in which a Task creates an Asset as its output.
  • Use the BulkIngest SDK.

We’ll just cover creating an asset and uploading a single file.

IAsset asset = context.Assets.Create(@"drive:\path\filename.ext", AssetCreationOptions.None);

That's it, a one-liner again.

If that’s all you need right now, you can stop reading.

Under the hood, the call above will: create an IAsset and add it to the assets collection; create an IFileInfo to put in the asset; create an AccessPolicy with write permission; request a SAS locator for the asset container in Azure Storage using the AccessPolicy; upload the file into your storage account; and finally revoke the SAS url.

Fair warning: we're probably pulling this one-liner version of create; we feel it is just a little too simple.  That under-the-hood work bakes all sorts of assumptions into the SDK, and they don't give advanced users the granularity they will need.  There is such a thing as too easy.  I'll update the blog post when the new SDK is out; we're looking at maybe 5 lines of code instead.  Meanwhile, enjoy the simplicity.

I hope you've had a chance to read about uploading into Azure Storage with Aspera.  How does that tie in here?  Well, the first thing you'll notice is that the call above blocks until the upload completes.  You can also create an asset in three steps:

  1. Create an Empty Asset
  2. Upload a file to the storage container
  3. Update the asset to include the added files.

To create an empty asset, it’s a simple call:

IAsset inputAsset = context.Assets.CreateEmptyAsset("assetName", AssetCreationOptions.None);

At this point, you'll want to add a file to it.  If you are not using our SDK to do the uploads as part of the create call, you can just get a SAS url for the asset container, add your file name to it, and do a PUT on it.  The Azure Storage REST API will process that and create a file for you.

Here’s how to get that SAS UploadUri:

IAccessPolicy writePolicy = context.AccessPolicies.Create("Policy For Copying", TimeSpan.FromMinutes(estimatedUploadMaxTime), AccessPermissions.Write | AccessPermissions.List);

ILocator destinationLocator = context.Locators.CreateSasLocator(inputAsset, writePolicy, DateTime.UtcNow.AddMinutes(-5));

Uri uploadUri = new Uri(destinationLocator.Path);

Now you can craft your own HttpWebRequest from the uploadUri, add your filename, and you're in full control; there's a sketch just below.  Skip over the Aspera bit if that's not for you, but you do need to do a final step, so read on.
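Here's a minimal sketch of that raw PUT, assuming a file small enough to send in a single request; localFile is a placeholder for your own media file:

string localFile = @"D:\MS\CONTENT\1.MOV";

//Append the blob name to the SAS container url.  UriBuilder keeps Path and
//Query separate, so the SAS query string stays intact on the end.
UriBuilder ub = new UriBuilder(uploadUri);
ub.Path += "/" + Path.GetFileName(localFile);

HttpWebRequest request = (HttpWebRequest)WebRequest.Create(ub.Uri);
request.Method = "PUT";
request.Headers.Add("x-ms-blob-type", "BlockBlob"); //Required by the Azure Storage REST API.

byte[] data = File.ReadAllBytes(localFile);
request.ContentLength = data.Length;
using (Stream requestStream = request.GetRequestStream())
{
    requestStream.Write(data, 0, data.Length);
}

using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
{
    Console.WriteLine(response.StatusCode); //Expect Created (201).
}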

So where does Aspera come in?

Well, that uploadUri's first Uri segment will be the asset container name, in the form: asset_guid.  You can now browse to the asset container name using the Aspera Desktop Client and upload that 3GB file at blazing fast speeds.  Soon you'll be using the Aspera SDK to script all that and you'll really be off to the races.

Another upload option is the Azure Storage CloudBlob class, which can handle the upload for you; here's a sketch.
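This rough sketch uses the v1.6 StorageClient referenced earlier, and it assumes you hold the storage account's connection string rather than just the SAS locator; storageConnectionString and localFile are placeholders:

CloudStorageAccount account = CloudStorageAccount.Parse(storageConnectionString);
CloudBlobClient blobClient = account.CreateCloudBlobClient();

//The container name is the first segment of the SAS uploadUri, in the form asset_guid:
CloudBlobContainer container = blobClient.GetContainerReference(uploadUri.Segments[1].Trim('/'));
CloudBlob blob = container.GetBlobReference(Path.GetFileName(localFile));
blob.UploadFile(localFile); //Splits large files into blocks and handles retries for you.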

Once the file is in storage, whether via an HttpWebRequest, Aspera, or the Azure Storage SDK, you need to let Media Services know that you've added files it isn't aware of.  To Media Services, that is still an empty asset.

asset.Publish();

Under the hood, this is enumerating the files in the asset container, creating IFileInfo objects for each file and adding these to the asset in our databases.

That .Publish call is another tricky one: we've found it doesn't cover all the user scenarios properly.  I'll blog about how to 'do this right' when the next Media Services SDK comes out.

We covered how to add an asset and populate it with a file in this post.  There are several variations on doing this, which make more sense when taken in the context of a larger application or workflow.  I'm just trying to build a base that I can refer back to later.

Posted in Windows Azure Media Services | Tagged | 3 Comments