I like the idea of Apple’s iTunes Match service, but I’ve had some issues getting it to work the way I think it should, especially with rips of CDs that I own. The main issue is songs perpetually showing an iCloud Status of “Waiting,” with those same songs unavailable for download on any of my iOS devices. This is what I did to fix it, so that all of my songs are uploaded or matched on my OS X and Windows desktops and laptops, and available to download on my iOS devices. Hopefully it will help someone having a similar problem.
The basic fix is to find any song with an iCloud Status of “Error” and fix that error, either by deleting the entry, locating the song (if iTunes could not find it), or some other remedy. Since this did fix my problem, I’m assuming the synchronization process between iCloud and the local machine does not handle or report errors very well, and either times out or simply fails when it encounters them.
There are several benefits to using iTunes Match – when it works:
The main issue is when songs from albums that I own (CDs ripped, vinyl converted) are stuck in a “Waiting” state in iTunes for OS X and iTunes for Windows, and show as not downloadable (no “download from cloud” button). This state persists even when I’ve tried to force an update (File -> Library -> Update iCloud Music Library). The “Waiting” state looks like the screenshot below.
It appears that there was an error either in the iTunes “Update iCloud Music Library” process or in the normal process that tries to match music. But there is no error log. To detect the error, you have to look at the “iCloud Status” of each song.
To do this and detect the errors:
I never found an error log that showed the exact errors, only this indicator in iCloud Status. Since fixing these errors, I’ve had no issues on any of my devices.
There are, obviously and intuitively, differences between testing an iOS app on the Xcode simulator and testing on a real device. The obvious ones run the gamut from the simulator having no camera to differences in how the keyboard works. The intuitive ones, in my mind, come from the fact that the Simulator runs on a different operating system (OS X) than the devices (iOS) the app is intended for.
The difference that repeatedly bites me is: CAPITALIZATION matters.
The majority of the apps I do at JoSara MeDia are HTML5 apps in a framework called Baker. If you are interested, the rationale behind this is that most of the apps are either coming from books or eBooks (and hence are already in a format close to HTML, like ePub) or are heading in that direction (so we want to make conversion easy).
I was putting in a very nice jPlayer-based audio player (called jquery.mb.miniAudioPlayer – check out the link, it is quite well done), and it looked great on the simulator, as you can see in the screenshots below. I tested it on several different simulator devices – all looked as expected, and all performed the autoplay function when expected.
In case you are interested, this is from a forthcoming “coffee table poetry book as an app” project called Quebradillas.
But, once I transferred the app to a device (either through cable or TestFlight) the audio player graphics did not make the transition (see screenshot below). And neither did the autoplay functionality.
The autoplay issue was, again, capitalization: one of the examples had the parameter in camelCase (i.e., autoPlay), but in mb.miniAudioPlayer.js the parameter is simply “autoplay.”
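As a sketch (only the two option names come from the story above; the surrounding fragment is illustrative, not the plugin’s documented API):

```javascript
// Illustrative option hashes passed to the plugin:
{ autoPlay: true }  // WRONG: camelCase – silently ignored by the plugin
{ autoplay: true }  // RIGHT: matches the parameter name in mb.miniAudioPlayer.js
```

Since unrecognized options are typically just ignored by jQuery-style plugins, no error is thrown – the player simply never autoplays.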
By noting this, I aim to remind my future self to use capitalization as one of the first items to check when apps look different in the simulator vs. on the device, especially when using HTML5 app frameworks.
All of the apps JoSara MeDia currently has in the Apple App Store (except for the latest one) are self-contained; all of the media (video, audio, photos, maps, etc.) is embedded in the app. This means that if a user is on a plane, or somewhere with no decent network connection, the apps will still work fine, with no parts saying “you can only view this with an internet connection.”
This strategy works very well except for two main problems:
These two issues prompted me to use the release of our Quebec City app as a testing ground for moving the videos included in the app (the largest space-consuming media in the app) into an on-demand “cloud” storage system. I determined the best solution for this to be Apple’s HTTP Live Streaming (HLS).
There are still many things I am figuring out about using HLS, and I would welcome comments on this strategy.
For most apps, there is no way to predict what bandwidth your users will have when they click on a video inside your app. And there is an “instant gratification” requirement (or near instant) that must be fulfilled when a user clicks on the play button.
Have you ever started a video, had it show as grainy or lower quality, and then watched it get more defined as it plays? That is an example of HLS with variant playlists at work (other protocols do this as well).
Simply put, with HLS a video is segmented into several time-segment files (denoted by the file extension .ts), which are listed in a playlist file (denoted by the file extension .m3u8) that describes the video and its segments. The playlist is a human-readable file that can be edited if needed (and I determined I needed to, see below).
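For illustration, here is what a minimal media playlist might look like (the file names and durations are invented, not from the app):

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:10.0,
fileSequence0.ts
#EXTINF:10.0,
fileSequence1.ts
#EXT-X-ENDLIST
```

Each #EXTINF line gives the duration of the .ts segment named on the line that follows it.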
Added on to this is a “variant playlist,” a master playlist file in the same format that points to other playlist files. The concept behind the variant playlist is to have videos of multiple resolutions and sizes, but with the same time segments (this should be prefaced with “I think” – comments appreciated). When a video player starts playing a video described by a variant playlist, it starts with the lowest-bandwidth playlist, which is by definition smaller in size and therefore should download and start to play the quickest, satisfying that most human of needs: instant gratification. It then determines, through a handshake, what bandwidth and resolution the playing device can handle, and ratchets up to the best playlist in the variant playlist to continue playing. I am assuming (by observation) that it will only ratchet up to a higher-resolution playlist at the time-segment breaks (which is also why I think the segments all have to be the same length).
Two documents provide standards for videos for Apple iOS devices and Apple TVs (links below):
These standards do overlap a bit but, as you would expect, the Apple TV standards call for higher resolution, because an Apple TV has an always-connected, higher-bandwidth (minimum WiFi) connection than one can expect with an iPhone or iPad.
To support iPhones, iPads, and Apple TVs, the best strategy is to have 3 or 4 streams:
Thus the steps become:
My videos are in several shapes and resolutions, since they come from whatever device I have on me at the time. They are usually from an Olympus TG-1 (which has been with me through the Grand Canyon, in Hawaii, in cenotes in the Yucatan, and now in Quebec City), my indestructible image and video default, or some kind of iOS device. Both are set to shoot at the highest quality possible. This makes the native videos very large (and the apps they are embedded in larger still).
There are several tools to convert the videos. These are the ones I’ve looked into:
Once you have your videos converted, the next step is to build the segmented files (the files that end in .ts) from these videos, plus the playlists that contain the metadata and locations of the segmented files. There may be other tools, but I have found only two.
Finally, you need to build a variant playlist: a playlist that points to all of the other playlists, one per resolution/bandwidth option.
Currently, I am using a combination of Elastic Transcoder and manual editing. I take the variant playlist that comes out of Elastic Transcoder (which contains 400K, 1M and 2M playlists), then edit it to add the playlist I created using the mediafilesegmenter, the higher-res version. This gives a final variant playlist with four options that straddle the iOS device requirement list and the Apple TV requirement list.
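That manual edit can be scripted. Here is a minimal Python sketch of the step – the stream names, bandwidths, and resolutions are made-up examples, not the app’s actual values:

```python
# Sketch: append a higher-resolution stream entry to an HLS variant
# (master) playlist. All file names and numbers below are hypothetical.

def add_variant(master_playlist: str, bandwidth: int,
                resolution: str, uri: str) -> str:
    """Append one #EXT-X-STREAM-INF entry to a variant playlist."""
    entry = (f"#EXT-X-STREAM-INF:BANDWIDTH={bandwidth},"
             f"RESOLUTION={resolution}\n{uri}\n")
    return master_playlist.rstrip("\n") + "\n" + entry

# A minimal variant playlist as a transcoder might produce it (illustrative):
master = """#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=400000,RESOLUTION=400x224
hls_400k.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=1000000,RESOLUTION=640x360
hls_1m.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2000000,RESOLUTION=960x540
hls_2m.m3u8
"""

# Add the hand-built high-resolution playlist as a fourth option.
merged = add_variant(master, 4500000, "1280x720", "hls_hi.m3u8")
print(merged)
```

The result is a four-entry variant playlist like the one described above, straddling the iOS device and Apple TV requirement lists.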
Most of my apps are HTML5 using a standard called HPUB. This is to take advantage of multiple platforms, as HPUB files can be converted with a bit of work to ePub files for enhanced eBooks.
Using the videos in HTML5 is straightforward – just use the <video> tag and put the variant playlist file in the src attribute.
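A minimal sketch (the playlist URL is a placeholder, not the app’s real location):

```html
<!-- src points at the variant (master) playlist; the URL is a placeholder -->
<video controls src="https://example.com/videos/master.m3u8"></video>
```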
In the end, the videos work, and seem to work for users around the world, with low or high bandwidth, as expected. I’m sure there are things that can be done to make them better.
I’ve used the mediastreamvalidator command from the Apple Developer tools pretty extensively. It doesn’t like some things about the files AWS Elastic Transcoder generates, but it is valuable in pointing out other issues.
Here are some changes I’ve made based on the validator, and other feedback:
Error: Illegal MIME type – this one took me a bit. The m3u8 files generated by AWS are fine, but files such as those generated by the mediastreamsegmenter tool do not pass this check; they get tagged with the error “--> Detail: MIME type: application/octet-stream”. In AWS S3 there is a drop-down list of MIME types in the “Metadata” section, but none of the Apple-recommended MIME types are in it. The files generated by AWS have the MIME type “application/x-mpegURL”, which is one of the recommended ones. Since it is not a selection in the drop-down, it took me a while to determine that you can simply type the MIME type into the field manually, even if it is not in the list. Doh!
Time-segment issues – whether using AWS Elastic Transcoder or the mediafilesegmenter command-line tool, I’ve always specified 10-second segments. Unfortunately, either Elastic Transcoder isn’t exact or the mediastreamvalidator tool does not agree with Transcoder’s output. Here’s an example snipped from mediastreamvalidator’s output:
Error: Different target durations detected
--> Detail: Target duration: 10 vs Target duration: 13
--> Source: BikeRide/BikeRideHI.m3u8
--> Compare: BikeRide/hls_1m.m3u8
--> Detail: Target duration: 10 vs Target duration: 13
--> Source: BikeRide/BikeRideHI.m3u8
--> Compare: BikeRide/hls_400k.m3u8
--> Detail: Target duration: 10 vs Target duration: 13
--> Source: BikeRide/BikeRideHI.m3u8
--> Compare: BikeRide/hls_2m.m3u8
This is basically saying that the “HI” version of the playlist (created with Apple’s mediafilesegmenter command-line tool) has a ten-second target duration, while the playlists created by AWS Elastic Transcoder (the three that start with “hls”) have 13 – even though the job that created them was set for 10 seconds. I am still trying to figure this one out, so any pointers would be appreciated.
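The check the validator is doing can be sketched in a few lines of Python – the playlist contents below are invented just to reproduce the 10-vs-13 mismatch:

```python
import re

# Sketch of the consistency check mediastreamvalidator complains about:
# every playlist referenced by a variant playlist should declare the same
# EXT-X-TARGETDURATION. The playlist strings here are illustrative.

def target_duration(playlist: str) -> int:
    match = re.search(r"#EXT-X-TARGETDURATION:(\d+)", playlist)
    if match is None:
        raise ValueError("playlist has no EXT-X-TARGETDURATION tag")
    return int(match.group(1))

hi_res = "#EXTM3U\n#EXT-X-TARGETDURATION:10\n#EXTINF:10,\nfileSequence0.ts\n"
hls_1m = "#EXTM3U\n#EXT-X-TARGETDURATION:13\n#EXTINF:13,\nfileSequence0.ts\n"

durations = {name: target_duration(p) for name, p in
             [("BikeRideHI.m3u8", hi_res), ("hls_1m.m3u8", hls_1m)]}

# Anything that disagrees with the hand-segmented "HI" playlist is flagged.
mismatched = {n: d for n, d in durations.items()
              if d != durations["BikeRideHI.m3u8"]}
print(mismatched)
```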
File permissions – when hosting the playlist and time-segment files in an AWS S3 bucket, the permissions on uploaded files always need to be reset to be readable (either “Public” or set up correctly for a secure file stream). This seems obvious, but working through the issues the validator brought up had me uploading files multiple times, and this always came back to bite me as an error in the validator.
HLS v3 vs. v4 – except for the fact that you have to have separate audio and video streams in v4, I’m still clueless as to when and why you would use one version over the other. It would seem that an audio-only stream would be needed for really, really low bandwidth. But separating out the video and audio streams is quite a bit of extra work (I would be thrilled if someone would leave a comment about a simple tool to do this). I can see some advantage in separate streams, in that it would allow the client to choose a better video stream with lower-quality audio based on its own configuration. More to learn here for sure.
Client usage unknowns – now that the videos work, how do I know which variant is being played? It would be good to know if all four variants were being utilized, and under what circumstances they are being consumed (particular devices? bandwidths?). There is some tracking on AWS which I can potentially use to determine this.
I hope this helps anyone else working their way through using Apple’s HTTP Live Streaming. Any and all comments appreciated. Thanks to “tidbits” from Apple and the Apple forums for his assistance as I work my way through this.
To see the app this is used in, click on the App Store logo. I’d especially appreciate feedback from those outside the US as to (a) how long it takes the videos to start and (b) how long it takes the quality to ramp up.
Quebec City, the app/enhanced eBook I wrote and developed for my gorgeous wife’s birthday (as that’s where we went to explore and celebrate) is now available in the Apple App Store for free…at least until my wife tells me to not make it free.
Like our Grand Canyon app that has been the top rated Grand Canyon app on the app store for several years, this app has videos, images, slide shows, maps and anything else we could cram in there!
The app has chapters on:
The app is available for iPhone and iPad.
The website we have utilized for beta testing of apps, TestFlightApp.com, shuts down on February 26th, 2015. All app testing will be moved to the iOS8 TestFlight app and managed through Apple’s iTunes Connect.
As both a developer and a user, I see many more PROs to this than CONs. The new TestFlight app for iOS8 radically simplifies the process of beta testing apps.
In the old TestFlight.com, a developer had to:
There were several places where that process could get stuck or go wrong.
With the new TestFlight iOS8 app, the steps are much simpler:
The device ID mapping is done by Apple. No changes in the provisioning profile are needed.
What else is different? Here are the cons:
Overall, the PROs far outweigh the CONs, and hopefully some of the other pieces will show up in the future.
Existing users can be exported from TestFlightApp.com into CSV files for import as external users on Apple’s iTunesConnect web site (where user management is now controlled). Detailed instructions here.
For those of you who do not get the challenges of living in the Apple Developer world, a bit of background: To deploy an iOS app outside of the Apple App Store, either as a “beta” with an Ad-Hoc Distribution profile or as an Enterprise deployment with an Apple Enterprise Developer account, an Apple Provisioning Profile is required. This profile is built on Apple’s Developer web site and requires a developer certificate (“trust” the developer!), a list of devices (up to 100) or the domain of the Enterprise (depending on whether it is for Ad-Hoc or Enterprise distribution), and an app ID. This information is used to generate the provisioning profile, which is distributed along with the app to identify which devices are allowed to use the app.
For reasons known only to Apple, provisioning profiles, even Enterprise provisioning profiles, expire once a year. Perhaps this is Apple’s way of ensuring that Enterprises keep up with the $299 annual fee for their Enterprise Developer License.
Recently, Apple extended the time validity of the certificates generated under the Enterprise Developer License to three years. But the profiles all still expire after one year.
This has spawned a huge marketplace for MDM (Mobile Device Management) solutions, used to help deploy (or redeploy, in the case of an expired profile) apps in an Enterprise.
It is easy to see what day a profile expires (it is visible on the device under Settings/General/Profiles, generates a pop-up warning on the device, and is visible in your Apple Developer account page). But, because of a last-minute customer call, we needed to know when it would really expire. This customer did not have an MDM solution, and, though we had built forced-upgrade functionality into our app, if the profile expires, the app stops working.
This is obviously a major issue with Enterprises deploying Apple apps internally. When given enough warning, it can be handled, even without an MDM.
But, given less than 24 hours’ notice, what we really needed to know was not only the date, but the time the profile would expire.
When you build a profile, you need to get it into Xcode (the Apple IDE) to use it. This can be done from Xcode, or you can download the profile as a file, then double-click it to open it in Xcode.
In other words, the profile (shown to the right) is just a file. It is in the format of a “plist” – a property list file.
Since we were trying to determine what time the profile expired, and we could not find that information anywhere, we decided to look inside the file. We opened it with a simple text editor (right-click, select “Open With”, and choose your favorite editor).
Most of it was quite easy to read, as plist files are XML. You can tell it is a plist file as it starts with this information:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
The file contains the developer certificate, which is a long character string that looks like garbage. But after that data string there is more information, including the following nugget we had been looking for:
Not only the date, but the time (in GMT, or Zulu time).
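For the curious, the lookup can even be scripted with Python’s plistlib. The plist below is a trimmed, invented example – real profiles carry many more keys, and the payload is wrapped in a CMS signature – but ExpirationDate is the actual key name:

```python
import plistlib

# A trimmed, illustrative provisioning-profile payload. Real profiles
# are CMS-signed and contain many more keys (certificates, devices, etc.).
sample = b"""<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Name</key>
    <string>Example Enterprise Profile</string>
    <key>ExpirationDate</key>
    <date>2015-06-01T14:30:00Z</date>
</dict>
</plist>
"""

profile = plistlib.loads(sample)
expires = profile["ExpirationDate"]  # parsed into a datetime (GMT/Zulu)
print("Profile expires:", expires.isoformat())
```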
Hope this helps anyone else who has the need to look for the same information. Obviously, the best practice is to avoid waiting until the last minute. But if you do, it is good to know how much time you have.
I’ve had a mid-2011 11-inch Mac Air for two years. This was my first Mac laptop, and the size (perfect for traveling), the instant-on, and several other features sold me on it. I had Compaq laptops for my duration at Compaq (of course) and had meandered from Sony VAIOs (good product) to ASUS netbooks before deciding that paying four times the cost of a Windows laptop might actually be worth it. It would be difficult at this point to convince me to go back to Windows (though I do keep a Windows desktop for some apps).
But I upgraded to the just-announced Mac Air 13 inch for several reasons:
My local Apple store, who I have a good relationship with, had the fully loaded 13″ (8 GB RAM, 512 Flash storage and the upgraded processor) in stock. My son’s big ole Windows laptop was giving him fits so he was the designated hand-me-down recipient of the 11″ Mac Air.
This led me to try Apple’s Migration Assistant.
I have never been a big fan of automated migration programs. They either seem to miss a configuration (or several), don’t move all your files, or just plain don’t work.
In addition, I had three sets of Xcode development profiles and certificates on my Mac: one for Media Sourcery, one for JoSara MeDia (our publishing company), and one for a customer (an Enterprise License that we develop under for them). Having just been through the un-documented gyrations of renewing and reissuing that one Apple Enterprise cert/profile, I was not optimistic.
However, after a false start or two, Migration Assistant blew my incredibly low expectations away.
It not only moved all my files, it:
Except for the Microsoft Office license (yes I run Office for Mac, and will as long as my customers use it).
My main hiccup: when I first set it up, Migration Assistant projected a nice 75 hours for copying files over. That issue was attributable to Larry having too many WiFi networks at home, including a new one from an AirPort Time Capsule (more on that in another post). Once I made certain both laptops were on the same WiFi network, Migration Assistant projected a more reasonable 4-5 hours to copy everything over.
I let it run overnight and started getting used to a bigger screen (which isn’t easy…the 11″ is nice…the things we do for our customers). But, just as a precaution, I asked my son not to delete anything on the old Mac for a while.