Apple — archives

Steps to Rate and Review Apps on the Apple App Store

An app’s visibility on the Apple App Store is enhanced by its ratings and reviews. We’ve received many great ratings and reviews for our JoSara MeDia apps in the App Store, and the steps to provide App Store reviews and ratings have changed slightly between the current iOS 10 and the forthcoming iOS 11 (now in preview).

What some users are not aware of is that an app’s rating visibility is reset with each new version. Even a minor change to an app will cause the ratings to reset. For example, we made a minor change to our Grand Canyon app to make the videos fit better when the iPhone 7 and 7+ were released. Even though the Grand Canyon app has only five-star ratings (13 at last count), those ratings are not visible on the new version unless a user explicitly chooses to look at reviews and ratings for “All Versions.”

There are several ways to encourage re-ratings, such as in-app pop-ups (update: this will change with iOS 11, as Apple will require developers to use its own in-app rating API, which limits the number of times a user is prompted). But users can also go into the App Store from their iPhone or iPad and easily rate apps.

The main differences between App Store reviews and ratings in iOS10 and the iOS11 preview are:

  • the app developer will be given a method (most likely in iTunes Connect) to select whether they would like ratings to carry across updates (hat tip to Reddit user emlombardo for pointing this out)
  • on iOS 11, an app can be rated directly from its page in the App Store without going into the “Write a Review” section. This should produce more ratings, albeit without reviews.

Here are the steps. Screenshots are below.

  1. Tap on the App Store Icon
  2. Tap in Search
  3. Type in “josara” to search for JoSara MeDia apps (or any other app name or app developer if you want to rate their apps)
  4. Scroll to the app you want to rate and tap on that app’s name
  5. For iOS 10 and earlier versions:
    1. tap on the “Reviews” tab (between “Details” and “Related”)
    2. tap on “Write a Review”
    3. tap on a star rating (5 stars is best, 1 star is worst)
    4. if you like, enter a title and a review of the app
  6. For iOS 11 (as of the current beta):
    1. scroll down to the “Ratings and Reviews” section
    2. tap on a star rating
    3. if you like, tap on “Write a Review”

Click on the screenshots for App Store reviews and ratings below to see larger versions. The screenshots cover, in order, for iOS 10 and earlier alongside iOS 11:

  • Locate the App Store
  • Locate the Search icon
  • Enter search terms
  • View search results
  • Select the app to rate and/or review
  • For iOS 10, tap on the “Reviews” header; for iOS 11, scroll down to the “Reviews” section
  • For iOS 10, tap on the “All Versions” tab to see ratings and reviews for all versions; for iOS 11, as of the current preview release, I can find no such option
  • To rate an app: for iOS 10, tap on “Write a Review” (star ratings are inside the “Write a Review” section: tap a star, then tap Send; reviews are optional); for iOS 11, simply tap on the star rating
  • To review an app, tap on the “Write a Review” label in both versions

RWDevCon Mug

RWDevCon 2017 – Notes and Thoughts

I attended the third RWDevCon, held March 30 – April 1 in Alexandria, VA. (Yes, I know that was 10 days ago, but I had a lot of notes.) Since it was the third time RWDevCon has been held, I am one of a reported 12 faithful attendees who have attended all three conferences. I even dragged a colleague from one of my customers along this year.

Why do I return for the joy of back to back eight hour days of seminars (this year with a third all day workshop for added pleasure)?

  • Hands-on Tutorials. Before each conference, Ray Wenderlich (the “RW” in the DevCon) and his team send out the slides that will be covered in the tutorials (a small PDF) along with a Zip that includes PDFs of the slides and Xcode projects for all of the demos and labs (with multiple start and stop points) that will be covered as well…and this Zip is huge. The size comparison drives home the point: this is the most hands-on developer conference I’ve been to. I was prepared this time: I had all of these files loaded in Notability on my iPad and was taking notes on the PDFs (aside: I was a skeptic about the Apple Pencil, but this note-taking setup has converted me). In the previous two years, there were some pacing issues, where there was just way too much to type in the demos, and that got in the way of listening to the instructor. When instructors type while they talk, and expect you to listen while you type, laptops get closed and people start to just listen…which is okay, but not optimal. This year, I experienced that only once. Ray and his team, with all of the practice they put into these sessions, have it down.
  • Conference organizers who listen. I (and every other attendee) received an email from Ray asking what types of sessions we’d like to see at the conference. I do not recall all of my responses, but I did list “error handling”, “unit testing”, “machine learning” and “application architecture”. All of these appeared on the agenda. Even a small item, like a request to have something other than those nasty, incredibly-bad-for-you Coke products during the breaks, was listened to…with Naked Juice now an option. When they listen to their attendees, good things happen.
  • Ray and Vicki. As I was walking into the workshop the first day, I ran into Ray, looking very bleary-eyed. Though I knew he’d probably been up all night reviewing sessions and entertaining the vast horde of Brits he seems to employ (!), he stopped for a quick chat and update. Vicki walks through with her ever-present smile, knowing everyone’s name. Though I only see these two once a year, they are very enjoyable people and I revel in their success. They make no excuses about being introverts, but practice their presentations over and over until they seem comfortable in the spotlight. I hope RWDevCon grows to dominate the world, just so they can enjoy the fruits of their labor.
  • The RWDevCon team. They work hard, they are there to help you learn, and they are a lot of fun. And Brian lets me win at the card games (sometimes). Ray and Vicki, at every conference so far, talk about inclusion, about letting people feel not only comfortable but like they belong. They walk the walk.
  • Beer. Ray and Vicki call this friendship, camaraderie…but let’s face it – we’re here for the beer (and the card games. and the people). I must ding Mr. Wenderlich, though, for two beer notes:
    • At the bash at the Carlyle Club, Ray was telling me how wonderful the beer was that he was drinking; when he left to go do host-type things after setting his bottle on the bar, I asked the bartender for one…only to find out that Ray had just quaffed the last one (but left an inch in the bottle…I did NOT drink after Ray). I do not remember what beer this was as I have blotted it from my mind.
    • For the receptions at the hotel, the catered food and mojitos were quite good. But when Sam Adams is the best beer of the selection…time to head down to the Trademark for their excellent selection of beers….or to the post-conference bottle share…but that is a later session.

It would be great to see RWDevCon grow into something much larger, so that those who put in the hard work could realize that success. But it is also excellent at its current size. Ray, Vicki and the team have a hard balancing act to do.

Bottom line: I cannot recommend this conference highly enough to any and all iOS developers. Most of these sessions provided me with enough information that I could use what I learned immediately. Some are difficult enough that I’ll need to review the material before knowing enough to be dangerous. But the tools and education gained in these three days provide a high ROI on the time and money spent.

All day workshop – App Architecture (Josh Berlin and René Cacheaux)

FreedomIsntFreeIPA

It was a tough choice between the Debug workshop and this one, but need won out. I’d recently completed a cycle count and inventory Swift application on a very tight timeline, and I KNEW I had broken many app architecture rules in haste…so I set out to re-learn, and hopefully be amazed by some new ideas as well. Had I known attendees would be getting a copy of Derek’s (who was leading the other workshop) Advanced Apple Debugging and Reverse Engineering book at the conference close, I would have had more incentive to choose that session…until Josh and René put out an app architecture book.

After meeting one of my three favorite nephews for brews and dinner at the Trademark downstairs the night before, I was primed and ready (obligatory local beer picture included) for the next morning’s 8-hour session.

What I learned:

  • Josh and René are both from Austin, where I’ll be living soon. Beers will be shared.
  • From the intro: “Thoughtful design of the boundaries between your app’s subsystems is the foundation of a stable codebase.” Deep stuff for 9am!
  • Dependency Injection (the demos used the Swinject library, there are others available). The first demo pulled an API call out of the view controller and put it into a Swinject Assembly….need to use this as we refactor our cycle count app.
  • Storyboard Dependency Injection.
  • Use case driven development. This is not a new subject; a quick google of the phrase shows many old articles on university web sites, IBM, ACM, etc. But the workshop (specifically Demo 2) showed a Swift version of implementing use case objects.
  • Unidirectional data flow with a state store.
  • Redux – state stored outside of the view controller
  • Some RxSwift as an appetizer to the session tomorrow (this actually had me change my schedule to attend the RxSwift session instead of the Fastlane session).
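
The dependency-injection idea from that first demo can be sketched in plain Swift with no Swinject at all (the library mainly adds a container and assemblies on top of this pattern). All the type names here are invented for illustration:

```swift
import Foundation

// The view model depends on an abstraction, not a concrete API client.
protocol CountAPI {
    func fetchAisleCount() -> Int
}

// Production implementation would call the real web service.
struct LiveCountAPI: CountAPI {
    func fetchAisleCount() -> Int { 42 }  // placeholder for a network call
}

// Test double that can be injected in unit tests.
struct StubCountAPI: CountAPI {
    let stubbed: Int
    func fetchAisleCount() -> Int { stubbed }
}

// The view model receives its dependency instead of creating it.
final class CycleCountViewModel {
    private let api: CountAPI
    init(api: CountAPI) { self.api = api }   // injection point
    func refresh() -> Int { api.fetchAisleCount() }
}

let viewModel = CycleCountViewModel(api: StubCountAPI(stubbed: 7))
print(viewModel.refresh())  // 7
```

Swapping `StubCountAPI` for `LiveCountAPI` in one place is exactly what makes the refactored code testable.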

After the workshop, with a full brain, sore butt from sitting, an hour to spare until the opening reception and the threat of bad running weather in the ensuing days, I headed out to run down King Street, a very cool old set of blocks that runs towards the river, right into more running trails (obligatory running scenery picture included – session details after the photo).

AlexandriaRun

Friday sessions

There were three tracks of sessions. These are the ones I selected to attend. There are several others where I either worked through the demo material or plan on doing so…as, so far, one cannot be in two places at once.

Machine Learning on iOS (Alexis Gallagher)

Why I attended: I want to employ machine learning with two of our products (Secure Workflow and Clinical Decision Support).

What I learned:

  • Docker can actually be run on my wimpy old MBA, making me glad I didn’t upgrade (but of course, when the chipset gets refreshed, I’ll be the first in line).
  • The setup was well done, starting the training part of the machine learning demo first (so it would actually complete on a wimpy old MBA) and then going through the theory.
  • The Docker image had Google TensorFlow, and we used the Inception-v3 model.
  • Alexis has some great smiles…and some rather frightening frowns.
  • We made training data by pulling images out of videos, classifying those videos as smiling and frowning, and then letting the Docker image train on the pulled out images against the baseline.
  • The TensorFlow console is accessible in the Docker image, and allows you to browse deeply through the Inception-v3 network.
  • I spoke with Alexis after the session, and discussed how I wanted to employ machine learning. Based on my description, he suggested linear regression and pointed me towards the Machine Learning course on Coursera taught by Andrew Ng of Stanford University (which he also mentioned at the top of his “Where to go from here?” slide at the end of the session). A session of it just started, and it was a great suggestion (thanks, Alexis).

iOS Concurrency (Audrey Tam)

Why I attended: We have an old Objective-C app that is being converted to Swift, and it needs some concurrency help at one customer whose network is slower. I’d like to put parts of the data refresh in the background while updating the animations of the workflow tasks and notes. Plus, Audrey is my wife’s name, so there you go.

What I learned:

  • I have a lot of work to do.
  • What not to do (deadlocks, race conditions, priority inversions)…the same concurrency problems that exist in all programming languages
  • Dispatch groups, dispatch barrier tasks
  • Async operations
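
A minimal sketch of the dispatch-group idea, using only Foundation and libdispatch; the task names are invented:

```swift
import Foundation
import Dispatch

let group = DispatchGroup()
let queue = DispatchQueue(label: "refresh", attributes: .concurrent)
let lock = NSLock()
var refreshed: [String] = []

// Kick off several background "refresh" tasks that may run concurrently.
for task in ["workflow", "notes", "tasks"] {
    queue.async(group: group) {
        lock.lock()
        refreshed.append(task)   // stand-in for a network fetch
        lock.unlock()
    }
}

// Block until every task in the group has finished.
group.wait()
print(refreshed.sorted())  // ["notes", "tasks", "workflow"]
```

In a real app you would use `group.notify(queue: .main)` instead of `wait()` so the main thread, and the animations running on it, are never blocked.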

Building Reusable Frameworks (Eric Cerney)

Why I attended: The old Objective-C app I mentioned earlier has frameworks, and they need to be updated.

What I learned:

  • “A library is a tool, but a framework is a way of life.” Let’s do t-shirts!
  • Explanation of some of the differences between the three main dependency managers (Swift Package Manager, Carthage, CocoaPods)
  • SPM still doesn’t support iOS
  • using ‘git tag’ to manage versions
  • Demo 1 walked through Swift Package Manager; Demo 2 walked through building a multi-platform library (which, as I’ve heard, is a tool, not a framework…)
  • great demo on access control, and a good list of “gotchas” on bundle specifications, global values, and singletons
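
The access-control gotchas largely come down to what a framework exposes. A tiny sketch (a single file can only hint at this, since access levels really matter across module boundaries):

```swift
// In a framework target, only public/open declarations are visible
// to apps that import the framework.
public struct Version {
    public let tag: String             // visible to framework clients
    internal let buildNumber: Int      // visible only inside the framework
    public init(tag: String, buildNumber: Int) {
        self.tag = tag                 // a public type needs a public init,
        self.buildNumber = buildNumber // or clients cannot create instances
    }
}

let v = Version(tag: "1.2.0", buildNumber: 7)
print(v.tag)  // 1.2.0
```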

RWDevCon RxSwift

RxSwift in Practice (Marin Todorov)

Why I attended: Reactive programming. Buzz words. Got a taste during the app architecture workshop. And I sensed a book was coming (obligatory signed book page picture included; I only found Marin and Ash, though…there are a lot of authors!)

What I learned:

  • Don’t try to use reactive programming for everything (also mentioned in one of the inspiration talks)
  • Asynchronous code and observables. Demo 1 walked through “Variable” and emitting and subscribing to events
  • Using Observables with “bindTo” to tie incoming JSON directly to a tableView. This I will definitely use as we update our workflow app.
  • Using “bag = DisposeBag()” to get rid of subscriptions…takin’ out the trash!
  • I’ll walk through the book for more detail. We should be able to use this in the tableViews in our cycle count and inventory app, which show which aisles still have items to be counted; that data is made available via a RESTful web service and currently gets updated in a non-reactive way when the current user assigns themselves an aisle to count.
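
RxSwift itself requires the library, but the subscribe/dispose shape from the demos can be sketched with a toy observable in plain Swift. `Variable` here only mimics the Rx API; it is not the real thing:

```swift
// A toy "Variable": stores a value and notifies subscribers on change.
final class Variable<T> {
    private var subscribers: [Int: (T) -> Void] = [:]
    private var nextID = 0
    var value: T {
        didSet { subscribers.values.forEach { $0(value) } }
    }
    init(_ value: T) { self.value = value }

    // Returns a token; a "dispose bag" would collect these tokens
    // and dispose them all when it is deallocated.
    func subscribe(_ handler: @escaping (T) -> Void) -> Int {
        let id = nextID; nextID += 1
        subscribers[id] = handler
        handler(value)                 // emit the current value immediately
        return id
    }
    func dispose(_ id: Int) { subscribers[id] = nil }
}

let aisles = Variable(["A1", "A2"])
var seen: [[String]] = []
let token = aisles.subscribe { seen.append($0) }
aisles.value = ["A2"]                  // subscriber fires again
aisles.dispose(token)                  // takin' out the trash
aisles.value = []                      // no longer observed
print(seen.count)  // 2
```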

Saturday sessions

Practical Unit Testing I (Jack Wu)

Why I attended: Code is never tested enough. Especially mine.

What I learned:

  • When Jack says “practical”, Jack means practical. Lots of good points about balancing the time it takes to write and maintain tests versus having “good” and “useful” tests. Another way to read this: “Jack hates testing code as much as I do, so let’s do it efficiently and quickly and no one will get hurt.”
  • Write tests before you refactor, make sure the tests succeed, then refactor.
  • How to write a basic unit test
  • How to write a UI Test in Xcode, and how to make them not so darn slow
  • You can refactor your code to be more testable, and this makes for easier-to-understand code. My lead developer is a refactoring machine, and his code is always testable…makes sense.
  • I started in the session writing tests for the next version of the cycle count and inventory app. This was a very practical and applicable session.
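
The “write tests before you refactor” loop, sketched with a plain function and bare assertions (a real project would put these in XCTest; `remainingAisles` is an invented example from our cycle count domain):

```swift
// Function under test: which aisles still need counting?
func remainingAisles(all: [String], counted: Set<String>) -> [String] {
    all.filter { !counted.contains($0) }
}

// Tests written BEFORE refactoring pin down current behavior...
assert(remainingAisles(all: ["A1", "A2", "A3"], counted: ["A2"]) == ["A1", "A3"])
assert(remainingAisles(all: [], counted: []) == [])

// ...so after a refactor, re-running the same assertions proves
// the behavior did not change.
print("tests passed")
```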

Swift Playgrounds in Depth (Jawwad Ahmad)

Why I attended: I didn’t get to use playgrounds enough as a kid. It was a tough choice between this one and Practical Unit Testing II.

What I learned:

  • Playgrounds are still flaky. Several folks had to restart Xcode (myself included) to get the Live View to work.
  • IndefiniteExecution in a playground….cool
  • Reading from and writing to a file in a playground…quite useful
  • moving code into a framework to use in a playground

Advanced iOS Design Patterns (Joshua Greene)

Why I attended: The description talks about authentication, auto re-login, data and thread safety designs.

What I learned:

  • This was one session where I could not keep up with all of the typing, and for the most part sat back and listened.
  • Demo 1 walked through MulticastClosureDelegate; Demo 2 walked through the Visitor pattern.
  • During the lab time, I actually started going through the “On-boarding” seminar slides and demo/labs. I’ll be running back through both of these sessions again. The on-boarding piece is quite useful for first time training users, even (or especially) for Enterprise apps.
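
A minimal sketch of the multicast idea behind MulticastClosureDelegate: one event fanned out to several closure “delegates.” This is my simplification, not the session’s implementation:

```swift
// Fan one event out to many closure "delegates".
final class MulticastClosureDelegate<Event> {
    private var handlers: [(Event) -> Void] = []
    func add(_ handler: @escaping (Event) -> Void) {
        handlers.append(handler)
    }
    func invoke(_ event: Event) {
        handlers.forEach { $0(event) }
    }
}

let loginEvents = MulticastClosureDelegate<String>()
var log: [String] = []
loginEvents.add { log.append("analytics saw \($0)") }
loginEvents.add { log.append("ui saw \($0)") }
loginEvents.invoke("didLogin")   // both handlers fire
print(log.count)  // 2
```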

Swift Error Handling (Mike Katz)

Why I attended: My error handling looks like the if-then-else statement from hell.

What I learned:

  • I’m going to “borrow” all of Mike’s error handling routines.
  • throws, try (not the rugby kind of try I’m used to!), do/catch
  • pass errors up (from inside the function to the caller)
  • RETHROWS!
  • Using a Result-type and map
  • The Lab went through ways to provide error responses to Alamofire, and also a way to do auto-retries after timeout errors
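
The pieces fit together roughly like this; `SyncError` and `fetchCount` are invented stand-ins, not Mike’s routines:

```swift
import Foundation

enum SyncError: Error {
    case timeout
    case badResponse(code: Int)
}

// `throws` passes the error up to the caller instead of if-else nesting.
func fetchCount(code: Int) throws -> Int {
    guard code == 200 else { throw SyncError.badResponse(code: code) }
    return 13
}

// do/catch at the call site, pattern-matching on the error.
func describeFetch(code: Int) -> String {
    do {
        return "got \(try fetchCount(code: code))"
    } catch SyncError.badResponse(let code) {
        return "server said \(code)"
    } catch {
        return "other error: \(error)"
    }
}

// A Result-based variant: map transforms success, leaves failure alone.
let result = Result { try fetchCount(code: 200) }.map { $0 * 2 }

print(describeFetch(code: 200))   // got 13
print(describeFetch(code: 500))   // server said 500
```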

RWDevCon

Closing session

  • The RW team pushes the session and conference evaluations hard, because they compile them on the fly. And, in this closing session immediately after the last inspiration talk, Ray details a summary post-mortem and asks for more feedback. This is the only conference that I can recall that does this.
  • One way they push the evaluations is by giving out prizes (you get an entry ticket for each evaluation). And, for the third year in a row, I won…nothing.
  • But I did get two books, both of which have been previously mentioned (RxSwift and Advanced Apple Debugging). And managed to get both of them signed (obligatory signed book pictures)

Post-conference bottle share

Why I attended: I brought two great beers from local Houston breweries and they needed to be tasted. (obligatory Houston beer picture included, especially since several people asked about them. They were mighty tasty)

What I learned:

One could make a case that this wasn’t really part of the conference. But it was. We traded beer stories, travel stories, family stories, tried to kill a monster in a dungeon while bluffing (more card games), and generally had a great time. All were invited.

RWDevCon

Conclusion

With the amount of pre-conference setup, conference materials and notes gathered from this (and the previous two) RWDevCon, the investment here will continue to pay off as they are used and referenced. Next comes incorporating these into release plans for the apps we already have deployed, and those we will deploy in the future.

Some additional photos included here at the end.

RayClosing

MarinKeynote

Turtonator

BottleShare


iTunes Match – fixing “Waiting” and un-downloadable songs

I like the idea of Apple’s iTunes Match service, but I’ve had some issues getting it to work the way I think it should, especially with ripped copies of CDs that I own. The main issue is songs constantly showing an iCloud Status of “Waiting,” and those same songs not being downloadable to any of my iOS devices. This is what I did to fix it and get to where all of my songs are uploaded or matched on my OS X and Windows desktops and laptops, and available to download on my iOS devices. Hopefully it will help someone having a similar problem.

The basic fix is to find any song that has an iCloud Status of “Error” and fix that error, either by deleting that entry, locating the song (if iTunes could not find it) or some other remedy. Since this did fix my problem, I’m assuming the synchronization process between iCloud and the local machine does not handle or report errors very well, and either times out or just fails when it encounters them.

Why iTunes Match

There are several benefits to using iTunes Match – when it works:

  • song availability on multiple devices (Mac, Windows Desktop, iPhone, iPad, Apple TV)
  • higher quality versions of songs
  • vinyl conversion. I’ve ripped several of my old LPs (some of which I cannot find in iTunes) and would like to use iTunes Match to make high-quality versions of these songs available across all of my devices.

Issues

The main issue is when songs from albums that I own (CDs ripped, vinyl converted) are stuck in a “Waiting” state in iTunes for OS X and iTunes for Windows, and show as not downloadable (no “download from cloud” button). This state persists even when I’ve tried to force an update (from the iTunes menu: File -> Library -> Update iCloud Music Library). The “Waiting” state looks like the screenshot below.

iTunes Match

Solution

It appears that there was an error in the iTunes “Update iCloud Music Library” process or in the normal process that tries to match music. But there is no error log. To detect the error, you have to look at the “iCloud Status” of each song.
To do this and detect errors:

  • Turn on visibility of iCloud Status. Select Songs from your Library, then select View -> Show View Options. See the screenshot below.
  • Check the box next to “iCloud Status”

iTunes Match

  • Sort on the iCloud Status column. There should be “Matched”, “Uploaded”, “Waiting”, “Ineligible”, and, in my case, “Error” (see screenshot)

iTunes Match

  • Fix the songs whose iCloud Status is “Error”. In my case there were some that could not be found, and some that were from an old iTunes account. I removed those songs from iTunes, restarted iTunes, and hit the “Update iCloud Music Library” link. Everything that was Waiting became either “Matched” or “Uploaded” on OS X and Windows, and all songs became available for download. Note that there are some titles marked as “Ineligible”. Most of mine were digital booklets, objects that would never sync anyway. These did not need any fixing.

I never found an error log that showed the exact errors, only this indicator in iCloud Status. Since I did this, I’ve had no issues on any of my devices.

Xcode capitalization problem

Xcode – Simulator vs. Device: CAPITALIZATION matters

There are, obviously and intuitively, differences between testing an iOS app on the Xcode simulator, and testing on a real device. The obvious ones run the gamut from no camera on the simulator to the way the keyboard works differently on both. The intuitive ones, in my mind, come from the fact that the Simulator is running on a different operating system (OSX) than the devices (iOS) that the app is intended for.

The difference that repeatedly bites me is: CAPITALIZATION matters.

The majority of the apps I do at JoSara MeDia are HTML5 apps in a framework called Baker. If you are interested, the rationale behind this is that most of the apps are either coming from books or eBooks (and hence are already in a format close to HTML, like ePub) or are heading in that direction (so we want to make conversion easy).

I was putting in a very nice jPlayer-based audio player (called jquery.mb.miniAudioPlayer; check out the link, it is quite well done), and it looked great on the simulator, as you can see in the screenshots below. I tested it on several different simulator devices – all looked as expected, and all performed the autoplay function when expected.

Quebradillas Audio 2

In case you are interested, this is from a forthcoming “coffee table poetry book as an app” project called Quebradillas.

Quebradillas Audio 1

But, once I transferred the app to a device (either through cable or TestFlight) the audio player graphics did not make the transition (see screenshot below). And neither did the autoplay functionality.

Quebradillas on Device

 

Xcode capitalization problem

The culprit, of course, was two instances of capitalization. One was in the name of the included css file – in the head of the two pages, the “q” in “jquery” was lower case, and, as you can see from the Xcode screenshot, the file name itself was “jQuery.” This was acceptable in the simulator, which runs on OS X, but would not work (and, interestingly, did not pop up an error anywhere) on the devices tested (iOS). After looking at the javascript code in the jquery plugin, I could see that the “Vm” and “P” were icon placeholders…which led me to the css file misspelling.

The autoplay issue was, again, capitalization: the parameter in one of the examples had autoplay in camelCase (i.e., autoPlay), but in the mb.miniAudioPlayer.js, the parameter was simply “autoplay.”
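
A sketch of the kind of pre-flight check that would have caught the css bug: compare the file names a page references against the actual names on disk, flagging case-only mismatches (the simulator’s macOS filesystem is case-insensitive; the device’s iOS filesystem is not). The directory and file names here are invented:

```swift
import Foundation

// On a case-insensitive filesystem (macOS/simulator) "jquery.css" finds
// "jQuery.css"; on an iOS device it does not. Compare exact names.
func caseMismatches(referenced: [String], inDirectory dir: String) -> [String] {
    let actual = (try? FileManager.default.contentsOfDirectory(atPath: dir)) ?? []
    return referenced.filter { ref in
        !actual.contains(ref) &&                                  // no exact match...
        actual.contains { $0.lowercased() == ref.lowercased() }   // ...but a case-only match
    }
}

// Hypothetical usage: names the HTML <head> refers to vs. files on disk.
let dir = NSTemporaryDirectory() + "book-\(UUID().uuidString)"
try? FileManager.default.createDirectory(atPath: dir, withIntermediateDirectories: true)
FileManager.default.createFile(atPath: dir + "/jQuery.css", contents: Data())
print(caseMismatches(referenced: ["jquery.css"], inDirectory: dir))  // ["jquery.css"]
```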

 

By noting this, I aim to remind my future self to use capitalization as one of the first items to check when apps look different in the simulator vs. on the device, especially when using HTML5 app frameworks.

AWS Elastic Transcoder

Using Apple’s HLS for video streaming in apps

Quebec City

Overview

All of the apps JoSara MeDia currently has in the Apple App Store (except for the latest one) are self-contained; all of the media (video, audio, photos, maps, etc.) are embedded in the app. This means that if a user is on a plane, or somewhere with no decent network connection, the apps will work fine, with no parts saying “you can only view this with an internet connection.”

This strategy works very well except for two main problems:

  • the apps are large, frequently over the size limit Apple designates as the maximum for downloading over cellular. I have no data that says this would limit downloads, but it seems obvious;
  • if we want to migrate apps to the AppleTV platform (and of course we do!), we have to have a much more cloud centric approach, as Apple TV has limited storage space.

These two issues prompted me to use the release of our Quebec City app as a testing ground for moving the videos included in the app (and the largest space consuming media in the app) into an on-demand “cloud” storage system. I determined the best solution for this is to use Apple’s HTTP Live Streaming (HLS) solution.

There are still many things I am figuring out about using HLS, and I would welcome comments on this strategy.

What is HLS and why would you use it

For most apps, there is no way to predict what bandwidth your users will have when they click on a video inside your app. And there is an “instant gratification” requirement (or near instant) that must be fulfilled when a user clicks on the play button.

Have you ever started a video, seen it come up grainy or lower quality, and then watched it get more defined as the video plays? This is an example of using HLS with variant playlists (other protocols do this as well).

Simply put, with HLS a video is segmented into several time segment files (denoted by the file extension .ts) which are included in a playlist file (denoted by the file extension .m3u8) which describes the video and the segments. The playlist is a human readable file that can be edited if needed (and I determined I needed to, see below).

Added on to this is a “variant playlist,” which is a master playlist file in the same format that points to other playlist files. The concept behind the variant playlist is to have videos of multiple resolutions and sizes but with the same time segments (this should be prefaced with “I think”; comments appreciated). When a video player starts playing a video described by a variant playlist, it starts with the lowest-bandwidth playlist, which is by definition smaller in size and therefore should download and start to play the quickest, satisfying that most human need: instant gratification. It then determines through a handshake what bandwidth and resolution the device playing the video can handle, and ratchets up to the best playlist in the variant playlist to continue playing. I am assuming (by observation) that it will only ratchet up to a higher-resolution playlist at the time segment breaks (which is also why I think the segments all have to be the same length).
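
To make the format concrete, here is a tiny hand-made variant playlist, plus a Swift sketch that pulls the bandwidth tiers out of it (the playlist text is illustrative, not actual Elastic Transcoder output):

```swift
// A hand-made variant (master) playlist: each #EXT-X-STREAM-INF line
// describes one rendition, followed by the URI of that rendition's playlist.
let variantPlaylist = """
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=400000,RESOLUTION=480x270
video_400k.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=1000000,RESOLUTION=960x540
video_1m.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2000000,RESOLUTION=1280x720
video_2m.m3u8
"""

// Extract the BANDWIDTH attribute from each stream entry.
func bandwidths(in playlist: String) -> [Int] {
    var result: [Int] = []
    for line in playlist.split(separator: "\n")
        where line.hasPrefix("#EXT-X-STREAM-INF:") {
        // Attributes are comma-separated after the tag name.
        let attributes = line.dropFirst("#EXT-X-STREAM-INF:".count)
            .split(separator: ",")
        for attribute in attributes where attribute.hasPrefix("BANDWIDTH=") {
            if let value = Int(attribute.dropFirst("BANDWIDTH=".count)) {
                result.append(value)
            }
        }
    }
    return result
}

print(bandwidths(in: variantPlaylist))  // [400000, 1000000, 2000000]
```

The player picks the lowest tier first, then ratchets up to the highest BANDWIDTH its connection can sustain.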

Options for building your videos and playlists

There are two links that provide standards for videos for Apple iOS devices and Apple TVs (links below):

These standards do overlap a bit, but, as you would expect, the Apple TV standards have higher resolution because of an always connected, higher bandwidth (minimum wifi) connectivity than one can expect with an iPhone or iPad.

To support iPhones, iPads and Apple TVs, the best strategy would be to have 3 or 4 streams:

  • a low-bandwidth/low-resolution stream for devices on cellular connections
  • a mid-range stream for iPhones and iPads on WiFi
  • a high-bandwidth/high-resolution stream for always-connected devices like Apple TVs

Thus the steps become:

  1. Convert your native video streams into these multiple resolutions.
  2. Build segmented files with playlists for each resolution.
  3. Build a variant playlist that points to each of the single playlists.
  4. Deploy.
  5. Make sure that when you deploy, the variant playlist and files are on a content distribution network, which will get them cached around the world for faster delivery (only important if you are assuming worldwide customers…which you should).
  6. Put the video tags in your apps.

Converting your native video streams:

My videos are in several shapes and resolutions, since they come from whatever device I have on me at the time. They are usually from an Olympus TG-1 (which has been with me through the Grand Canyon, in Hawaii, in cenotes in the Yucatan and now in Quebec City), my indestructible image and video default, or some kind of iOS device. Both are set to shoot in the highest quality possible. This makes the native videos very large (and the apps they are embedded in larger still).

There are several tools to convert the videos. These are the ones I’ve looked into:

  • Quicktime – the current version of Quicktime is mostly useless in these endeavors. But Quicktime 7 does have all of the settings required in the standards links from the first paragraph in this section. One could go through and set those exactly as specified, get video output, then put the files through Apple’s mediafilesegmenter command line tool. If you do not have an Amazon Web Services (AWS) account, this would most likely be the way to proceed…as long as this version of Quicktime is supported. To get to these options, go to File -> Export, select “Movie to MPEG-4” and click on the “Options” button. All of the parameters suggested in the standards for iPhone and iPad are available here for tweaking and tuning. Quicktime 7 can still be downloaded from Apple at this link.

Quicktime 7
  • Amazon Web Services (AWS) Elastic Transcoder – for those of us who are lazy like me, AWS offers “Elastic Transcoder,” which provides preset HLS options for 400K, 1M and 2M. The encodings are done through batch jobs, with detailed output selections. There are options for HLSv3 and HLSv4. HLSv4 requires a split between audio and video. This may be a later standard, but I could not get the inputs and outputs correct…therefore, I went with the HLSv3 standard. The setup requires:
    • Setting up S3 repositories that hold the original videos, and destination repositories for the transcoded files and thumbnails (if that option is selected)
    • Creating a “Pipeline” that uses these S3 repositories
    • Setting up an Elastic Transcoder “job” (as an old programmer, I’m hoping this is a nod to the batch jobs of old!) in this pipeline, where you tell it what kind of transcodes should come out of the job, and what kind of playlist.

AWS Elastic Transcoder

  • iMovie – iMovie has several output presets, but I did not find a way to easily adapt them to the settings in the standards.

Building segmented files

Once you have your videos converted, the next step is to build the segmented files from these videos plus the playlists that contains the metadata and location of the segmented files (these are the files that end with .ts). There may be other tools, but there are only two that I have found.

  • Apple dev tools – Apple’s HLS Tools require an Apple developer account. To find them, go to this Apple developer page, and scroll down to the “Download” section on the right hand side (requires developer login). The command to create streams from “non-live” videos is mediafilesegmenter. To see the options, either use the man pages (type “man mediafilesegmenter” at the command line) or just type the command for a summary of the options. There are options for the time length of the segments, creation of a file for variant playlists, encryption options and others. I found through much trial and error that using the “file base” option (-f) to put the files in a particular folder, and omitting the “base URL” option (-b), worked best (I did not omit -b at first, not realizing that the variant playlist that points at the individual stream playlists can point to file folders to keep things neat). In the end, I used this command to create 10-second segments of my original (not the encoded) files, to create a high-resolution/high-bandwidth option.
  • Amazon Web Service (AWS) Elastic Transcoder – the Elastic Transcoder not only converts/encodes the videos, but will also build the segments (and the variant playlists). As you can see from the prior screenshot, there are system presets for HLSv3 and HLSv4 (again, I used v3) for 400K, 1M and 2M. The job created in Elastic Transcoder will build the segmented files in designated folders with designated naming prefixes, all with the same time segment. I have, however, seen some variance in quality when using Elastic Transcoder…or at least a disagreement between the Apple HLS Tools validator and the directives given in the Elastic Transcoder jobs. More on that in the results and errors section.
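For the Apple tools route, the mediafilesegmenter invocation described above looks roughly like this (paths and file names are illustrative, and the exact flags are best confirmed against the man page):

```shell
# Segment the original file into 10 second .ts chunks, writing them
# (and the stream playlist) into the BikeRide/hi folder, and generate
# the plist that variantplaylistcreator will consume later.
mediafilesegmenter -t 10 -f BikeRide/hi -I BikeRideOriginal.mov
```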

Building variant playlists

Finally, you need to build a variant playlist: a playlist that points to each of the other playlists, one per resolution/bandwidth option.

  • Apple dev tools – the variantplaylistcreator cmd-line command will take output files from the mediafilesegmenter cmd-line command and build a variant playlist.
  • Amazon Web Service (AWS) Elastic Transcoder – are you detecting a pattern here? As part of the Elastic Transcoder jobs, the end result is to specify either an HLSv3 or HLSv4 variant playlist. I selected v3, as the v4 selection requires separate audio and video streams and I could never quite get those to work.
  • Manual – the playlist and variant playlist files are human readable and editable.
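With the Apple tools, the variant playlist step might look like this sketch (URLs, file names and plist names are illustrative; each plist is the file mediafilesegmenter generates for exactly this purpose):

```shell
# Pair each stream playlist URL with the plist mediafilesegmenter
# produced for it; the output is the master (variant) playlist.
variantplaylistcreator -o BikeRide/variant.m3u8 \
  hi/prog_index.m3u8 BikeRide/hi.plist \
  lo/prog_index.m3u8 BikeRide/lo.plist
```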

Currently, I am using a combination of Elastic Transcoder and manual editing. I take the variant playlist that comes out of Elastic Transcoder (which contains the 400K, 1M and 2M playlists), then edit it to add the higher-res playlist I created using mediafilesegmenter. This gives a final variant playlist with four options that straddle the iOS device requirement list and the Apple TV requirement list.
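Since the playlists are plain text, the manual edit is just one more entry appended to the variant playlist. A sketch follows; the bandwidth, resolution and path are illustrative, and a real Elastic Transcoder playlist will already hold the 400K/1M/2M entries:

```shell
# variant.m3u8 is the master playlist from Elastic Transcoder; append
# one more #EXT-X-STREAM-INF entry pointing at the playlist built with
# mediafilesegmenter (values are illustrative).
cat >> variant.m3u8 <<'EOF'
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=7680000,RESOLUTION=1920x1080
hi/prog_index.m3u8
EOF
```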

Putting the videos into your apps

Most of my apps are HTML5 using a standard called HPUB. This is to take advantage of multiple platforms, as HPUB files can be converted with a bit of work to ePub files for enhanced eBooks.

Using the videos in HTML5 is straightforward – just use the <video> tag and put the variant playlist file in the “src” attribute.
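For example (the path is illustrative; Safari and iOS play HLS natively from a <video> tag):

```html
<!-- Point the video element at the variant playlist; the player
     picks the stream that matches its bandwidth. -->
<video controls preload="none" width="640">
  <source src="videos/BikeRide/variant.m3u8" type="application/x-mpegURL">
</video>
```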

Results and Errors

In the end, the videos work, and seem to work for users around the world, with low or high bandwidth, as expected. I’m sure there are things that can be done to make them better.

I’ve used the mediastreamvalidator command from the Apple developer tools pretty extensively. It doesn’t like some things about the AWS Elastic Transcoder generated files, but it is valuable in pointing out other issues.

Here are some changes I’ve made based on the validator, and other feedback:

Error: Illegal MIME type – this one took me a bit. The m3u8 files generated by AWS are fine, but files such as those generated by the mediafilesegmenter tool do not pass this check; they get tagged with the error “–> Detail: MIME type: application/octet-stream”. In AWS S3 there is a drop-down list of MIME types in the “Metadata” section, but none of the Apple-recommended MIME types are in it. The files generated by AWS have the MIME type “application/x-mpegURL”, which is one of the recommended ones. Since that is not a selection in the drop-down, it took me a while to determine that you can actually just manually type the MIME type into the field, even if it is not in the list. Doh!
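Rather than clicking through the console, the MIME type can also be set from the AWS CLI by copying the object onto itself; a sketch, with the bucket and key illustrative:

```shell
# Replace the object's metadata in place with the correct HLS MIME type.
aws s3 cp s3://my-bucket/BikeRide/BikeRideHI.m3u8 \
          s3://my-bucket/BikeRide/BikeRideHI.m3u8 \
  --content-type "application/x-mpegURL" \
  --metadata-directive REPLACE
```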

Time segment issues – whether utilizing AWS Elastic Transcoder or the mediafilesegmenter cmd line tool, I’ve always used 10 second segments. Unfortunately, either Elastic Transcoder isn’t exact or the mediastreamvalidator tool does not agree with Transcoder’s output. Here’s an example as a snip from mediastreamvalidator’s output:

Error: Different target durations detected

–> Detail: Target duration: 10 vs Target duration: 13
–> Source: BikeRide/BikeRideHI.m3u8
–> Compare: BikeRide/hls_1m.m3u8

–> Detail: Target duration: 10 vs Target duration: 13
–> Source: BikeRide/BikeRideHI.m3u8
–> Compare: BikeRide/hls_400k.m3u8

–> Detail: Target duration: 10 vs Target duration: 13
–> Source: BikeRide/BikeRideHI.m3u8
–> Compare: BikeRide/hls_2m.m3u8

This is basically saying that the “HI” version of the playlist (which was created using Apple’s mediafilesegmenter cmd-line tool) has a target duration of ten seconds, but the playlists created by AWS Elastic Transcoder (the three that start with “hls”) have 13…even though the job that created them was set for 10 second segments. I am still trying to figure this one out, so any pointers would be appreciated.

File permissions – when hosting the playlist and time segment files in an AWS S3 bucket, the permissions always need to be reset after upload (either to “Public” or set up correctly for a secure file stream). This seems obvious, but working through the issues that the validator brought up had me uploading files multiple times, and this always came back to bite me as an error in the validator.
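To stop re-fixing permissions by hand after every upload, the ACL can be set as part of the upload itself. A sketch; the bucket and key are illustrative, and public-read is only appropriate for content meant to be public:

```shell
# Upload the playlist with a public-read ACL (and the right MIME type)
# in one step.
aws s3 cp BikeRide/BikeRideHI.m3u8 s3://my-bucket/BikeRide/BikeRideHI.m3u8 \
  --acl public-read --content-type "application/x-mpegURL"
```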

HLS v3 vs. v4 – except for the fact that you have to have separate audio and video streams in v4, I’m still clueless as to when and why you would use one version over the other. It would seem that a single audio stream would be needed for really, really low bandwidth. But separating out the video and audio streams is quite a bit of extra work (I would be thrilled if someone would leave a comment about a simple tool to do this). I can see some advantage in separate streams, in that it would allow the client to choose a better video stream with lower quality audio based on its own configuration. More to learn here for sure.

Client usage unknowns – now that the videos work, how do I know which variant is being played? It would be good to know if all four variants were being utilized, and under what circumstances they are being consumed (particular devices? bandwidths?). There is some tracking on AWS which I can potentially use to determine this.

I hope this helps anyone else working their way through using Apple’s HTTP Live Streaming. Any and all comments appreciated. Thanks to “tidbits” from Apple and the Apple forums for his assistance as I work my way through this.

Download on the App Store
To see the app that this is used in, click on the App Store logo. I’d appreciate feedback, especially from those not in the US, as to (a) how long it takes the videos to start and (b) how long it takes the quality to ramp up.

Audrey In Quebec City

Quebec City app/enhanced eBook now available for free in the Apple App Store

Quebec City, the app/enhanced eBook I wrote and developed for my gorgeous wife’s birthday (as that’s where we went to explore and celebrate), is now available in the Apple App Store for free…at least until my wife tells me not to make it free.

Like our Grand Canyon app, which has been the top rated Grand Canyon app on the App Store for several years, this app has videos, images, slide shows, maps and anything else we could cram in there!

The app has chapters on:

  • Quartier Petit-Champlain
  • Old Quebec
  • Parliament Hill
  • The Citadel
  • Bike Riding in Quebec City
  • Montmorency Falls
  • The Ferry across the St. Lawrence River to Lévis

and others.

The app is available for iPhone and iPad.

Download on the App Store

 


TestFlightApp.com goes away February 26, 2015 – use new iOS 8 Test Flight App

The website we have utilized for beta testing of apps, TestFlightApp.com, shuts down on February 26th, 2015. All app testing will be moved to the iOS 8 TestFlight app and managed through Apple’s iTunes Connect.

As both a developer and a user, I see many more PROs to this than CONs. The new TestFlight app for iOS 8 radically simplifies the process of beta testing apps.

With the old TestFlightApp.com, a developer had to:

  • invite users to TestFlight
  • get the user to register a device on TestFlight
  • take the UUID of the device and register it on developer.apple.com
  • put the device ID into a provisioning profile
  • use that profile to build
  • re-upload the build (or the new provisioning profile if that was all that changed) to TestFlightApp.com
  • distribute the build to designated TestFlightApp users

There were several places where that process could get stuck and could indeed go wrong.

With the new TestFlight iOS8 app, the steps are much simpler:

  • developer submits the app (there are several steps involved here for developers, but they are basically the same as submitting an app to the App Store: after Archiving, you “Submit” the app to a version on iTunes Connect that is marked “Prepare for Submission”)
  • check the box for “TestFlight Beta Testing”
  • select “External Testers” (this will not be visible until the app goes through Beta App Review)
  • invite the test user via email (it does not even have to be their iTunes email, as Apple will do the mapping)
  • the user will be asked to download the TestFlight app (if they have not already done so)
  • the app will then be available for install from the TestFlight app
  • apps installed via the TestFlight app will have orange dots beside them
  • users will be notified when new versions are available

The device ID mapping is done by Apple. No changes in the provisioning profile are needed.

What else is different? Here are the cons:

  • No Android. Obviously, since Apple acquired them, support for this has been dropped. It was an advantage to have all testers and testing in one web site.
  • Beta builds only available for 30 days. After that time, the developer must submit another build.
  • No TestFlight SDK. The developer could include TestFlight’s SDK and get more data on what parts of the app the users were testing. This feature has not yet been moved over to the new TestFlight app (if anyone has found it, please let us know).
  • Wait for Beta App Review. Apps do have to be submitted for “Beta App Review”. The first time this is done, it can take a few hours to a couple of days. After that, it is quite quick, provided the developer answers a question concerning the level of changes in the build (fewer changes apparently do not require an extensive review).
  • Issues with Gmail invites. We’ve run across one issue with invites received in Gmail, where the user was not able to acknowledge the TestFlight invitation and thus could not run the app under the TestFlight app.
  • Works only with iOS 8. This is not as big an issue as it was. We assume Apple waited until the adoption rate was high enough before discontinuing the TestFlightApp.com web service.
  • If this is an upgrade to an existing App Store app of the same name, the TestFlight app will write over it. The user will be notified of this with an alert notification. The user can always get the production version back via the App Store.

Overall, the PROs far outweigh the CONs, and hopefully some of the other pieces will show up in the future.

Existing users can be exported from TestFlightApp.com into CSV files for import as external users on Apple’s iTunesConnect web site (where user management is now controlled). Detailed instructions here.


What time does an Apple provisioning profile expire?

For those of you who do not get the challenges of living in the Apple developer world, a bit of background: to deploy an iOS app outside of the Apple App Store, either as a “beta” with an Ad Hoc distribution profile, or as an Enterprise app with an Apple Enterprise Developer account, an Apple provisioning profile is required. This profile is built on Apple’s developer web site and requires a developer certificate (“trust” the developer!), a list of devices (up to 100) or the domain of the Enterprise (depending on whether this is for Ad Hoc or Enterprise distribution), and an app ID. This information is used to generate the provisioning profile, which is distributed along with the app to identify which devices are allowed to run the app.

For reasons known only to Apple, provisioning profiles, even Enterprise provisioning profiles, expire once a year. Perhaps this is Apple’s way of ensuring that Enterprises keep up with their $299 annual fee to keep their Enterprise Developer license.

Recently, Apple extended the time validity of the certificates generated under the Enterprise Developer License to three years. But the profiles all still expire after one year.

This has spawned a huge marketplace for MDM (Mobile Device Management) solutions, used to help deploy (or redeploy, in the case of an expired profile) apps in an Enterprise.

It is easy to see what day a profile expires (it is visible on the device under Settings/General/Profiles, generates pop-up warnings on the device, and is visible in your Apple Developer account page). But, because of a last minute customer call, we needed to know when it would really expire. This customer did not have an MDM solution, and, though we had built forced-upgrade functionality into our app, if the profile expires, the app stops working.

This is obviously a major issue with Enterprises deploying Apple apps internally. When given enough warning, it can be handled, even without an MDM.

But, given less than 24 hours notice, what we really needed to know was not only the date, but the time the profile would expire.

When you build a profile, you need to get it into XCode (the Apple IDE) to use it. This can be done from XCode, or you can download the profile as a file, then double click on it and open it in XCode.

In other words, the profile (shown to the right) is just a file. It is in the format of a “plist file”, a properties list file.

Since we were trying to determine what time the profile expired, and we could not find that information anywhere, we decided to look inside the file. We opened it with a simple text editor (right click, select “Open With” and select your favorite editor).

Most of it was quite easy to read, as plist files are XML. You can tell it is a plist file as it starts with this information:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>

The file contains the developer certificate, which is a long character string that looks like garbage. But after that data string, there is more information, including the following nugget we had been looking for:

<key>ExpirationDate</key>
<date>2014-06-11T21:50:35Z</date>

Not only the date, but the time (in GMT, or Zulu time).
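On a Mac, the same nugget can be pulled out from the command line without opening an editor; a sketch, with the profile name illustrative:

```shell
# 'security cms -D' decodes the profile wrapper and prints the embedded
# plist XML; grep shows the line after the ExpirationDate key.
security cms -D -i MyApp.mobileprovision | grep -A1 ExpirationDate
```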

Hope this helps anyone else who has the need to look for the same information. Obviously, the best practice is to avoid waiting until the last minute. But if you do, it is good to know how much time you have.


Packers Preseason – what to do when no broadcast available

Finally, the Packers are back. Sure, it’s only preseason. But offensive line injuries, new running backs, 1st round draft pick Datone Jones and new pickup Vince Young piqued Pack fans’ curiosity.

But calling around to our favorite watching spots, the Pack was nowhere to be viewed.

Rob, the great bartender at our normal spot, Kilburn’s, tells me that the Pack is not showing on NFL Ticket. A few more calls echoed that, with the gent from the BrickHouse Tavern stating that NFL Ticket showed a random set of preseason games.

Enter a bit of technology, and NFL Preseason Live.

For a mere $19.99 (which is more than I would spend on beer at Kilburn’s) you get access to all of the preseason games that are not blacked out (we could not watch the Texans game, but it was on regular local broadcast TV).

The broadcast comes on tablets or smartphones (iOS or Android) or browsers, in nice HD. And, using an Apple TV, I was able to display the game on our HDTV. For some reason, this only worked using AirPlay on a Mac. When we tried AirPlay from an iPad, it would not display.

There were a few moments of lag. But it was in HD and, even better, the commercials were few and low volume. And the picture-in-picture worked well, so we could watch a second game.

It is doubtful that we would subscribe for the entire season, as the games are quite available…but for 20 bucks and some tech, the Packers preseason is taken care of. Now we just need the team to execute.

 

mid-2011 11" Mac Air vs. new 13" Mac Air

Apple Mac Air 13″ and Migration Assistant

I’ve had a mid-2011 11 inch Mac Air for two years. This was my first Mac laptop, and the size (perfect for traveling), the instant on and several other features sold me on it. I had Compaq laptops for my duration at Compaq (of course) and had meandered from Sony VAIO’s (good product) to ASUS netbooks before deciding that paying four times the cost of a Windows laptop might actually be worth it. It would be difficult at this point to convince me to go back to Windows (though I do keep a Windows desktop for some apps).

But I upgraded to the just-announced 13 inch Mac Air for several reasons:

  • Size. Yeah, I know, I said that. But customers squinting at the 11 inch screen to see demos just didn’t get the point across;
  • Battery life. The new 13″ was spec’d at 12 hours of battery life. Running multiple apps plus XCODE and sometimes Eclipse just wasted the battery on the little 11″.
  • Performance. See above…sure, I could shut some apps off, but why should I?

My local Apple store, who I have a good relationship with, had the fully loaded 13″ (8 GB RAM, 512 Flash storage and the upgraded processor) in stock. My son’s big ole Windows laptop was giving him fits so he was the designated hand-me-down recipient of the 11″ Mac Air.

This led me to try Apple’s Migration Assistant.

I have never been a big fan of automated migration programs. They either seem to miss a configuration (or several), don’t move all your files, or just plain don’t work.

In addition, I had three types of XCODE development profiles and certificates on my Mac: one set for Media Sourcery, one set for JoSara MeDia (our publishing company) and one customer’s (an Enterprise License that we develop under for them). Having just been through the un-documented gyrations of renewing and reissuing the one Apple Enterprise cert/profile, I was not optimistic.

However, after a false start or two, Migration Assistant blew my incredibly low expectations away.

It not only moved all my files, it:

  • moved all of the certs and profiles that XCODE requires, without any additional configuration;
  • moved all WiFi configurations;
  • moved browser history;

The only thing it missed was the Microsoft Office license (yes, I run Office for Mac, and will as long as my customers use it).

My main hiccup: when I first set it up, Migration Assistant projected a nice 75 hours for copying files over. That turned out to be because Larry has too many WiFi networks at home, including a new one from an AirPort Time Capsule (more on that in another post). When I made certain that both laptops were on the same WiFi network, Migration Assistant projected a more reasonable 4-5 hours to copy everything over.

I let it run over night, and started getting used to a bigger screen (which isn’t easy…the 11″ is nice…the things we do for our customers). But, just for precautions, I asked my son not to delete anything on the old Mac for a while.
