I attended the third RWDevCon, held March 30 – April 1 in Alexandria, VA. (Yes, I know that was 10 days ago, but I had a lot of notes.) Having been at all three, I am one of a reported 12 faithful attendees who have made every conference. I even dragged a colleague from one of my customers along this year.
Why do I return for the joy of back-to-back eight-hour days of seminars (this year with a third all-day workshop for added pleasure)?
It would be great to see RWDevCon grow into something much larger, so that those who put in the hard work could realize that success. But it is also excellent at its current size. Ray, Vickie and the team have a hard balancing act to perform.
Bottom line: I cannot recommend this conference more highly to any and all iOS developers. Most of these sessions provided me with enough information that I could use what I learned immediately. Some are difficult enough that I’ll need to review the material before knowing enough to be dangerous. But the tools and education gained in these three days provide a high ROI on the time and money spent.
It was a tough choice between the Debug workshop and this one, but need won out. I’d recently completed a cycle count and inventory Swift application on a very tight timeline, and I KNEW I had broken many app architecture rules in haste…so I set out to re-learn, and hopefully be amazed by some new ideas as well. Had I known attendees would be getting a copy of Advanced Apple Debugging and Reverse Engineering by Derek (who was leading the other workshop) at the conference close, I would have had more incentive to choose that session…until Josh and René put out an app architecture book.
After meeting one of my three favorite nephews for brews and dinner at the Trademark downstairs the night before, I was primed and ready (obligatory local beer picture included) for the next morning’s eight-hour session.
What I learned:
After the workshop, with a full brain, sore butt from sitting, an hour to spare until the opening reception and the threat of bad running weather in the ensuing days, I headed out to run down King Street, a very cool old set of blocks that runs towards the river, right into more running trails (obligatory running scenery picture included – session details after the photo).
There were three tracks of sessions. These are the ones I selected to attend. There are several others where I either worked through the demo material or plan on doing so…since, so far, one cannot be in two places at once.
Machine Learning on iOS (Alexis Gallagher)
Why I attended: I want to employ machine learning with two of our products (Secure Workflow and Clinical Decision Support).
What I learned:
iOS Concurrency (Audrey Tam)
Why I attended: We have an old Objective-C app that is getting converted to Swift, and it needs some concurrency help at one customer site, where the network is slower. I’d like to put parts of the data refresh in the background while updating the animations of the workflow tasks and notes. Plus Audrey is my wife’s name, so there you go.
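Purely as illustration (none of this is session material, and all the names are hypothetical placeholders), the basic GCD pattern for what I’m after looks like this: do the slow network refresh on a background queue, then hop back to the main queue for the UI work.

```swift
import Foundation

// Hypothetical stand-ins for the app's real data and UI calls.
struct WorkflowTask { let name: String }

func refreshWorkflowData() -> [WorkflowTask] {
    // The slow network fetch would go here.
    return [WorkflowTask(name: "cycle count")]
}

func updateTaskAnimations(with tasks: [WorkflowTask]) {
    // UI work; must run on the main thread.
    print("animating \(tasks.count) tasks")
}

// Refresh data off the main thread, then update the UI on it.
DispatchQueue.global(qos: .utility).async {
    let tasks = refreshWorkflowData()
    DispatchQueue.main.async {
        updateTaskAnimations(with: tasks)
    }
}
```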
What I learned:
Building Reusable Frameworks (Eric Cerney)
Why I attended: The old Objective-C app I mentioned earlier has frameworks, and they need to be updated.
What I learned:
Why I attended: Reactive programming. Buzzwords. I got a taste during the app architecture workshop. And I sensed a book was coming (obligatory signed book page picture included; I only found Marin and Ash though…there are a lot of authors!)
What I learned:
Practical Unit Testing I (Jack Wu)
Why I attended: Code is never tested enough. Especially mine.
What I learned:
Swift Playgrounds in Depth (Jawwad Ahmad)
Why I attended: I didn’t get to use playgrounds enough as a kid. It was a tough choice between this one and Practical Unit Testing II.
What I learned:
Advanced iOS Design Patterns (Joshua Greene)
Why I attended: The description talks about authentication, auto re-login, data and thread safety designs.
What I learned:
Swift Error Handling (Mike Katz)
Why I attended: My error handling looks like the if-then-else statement from hell.
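For flavor, the structure Swift offers in place of that nested mess is throw/do/catch. This is just a minimal generic sketch with made-up types and names, not material from the session:

```swift
enum InventoryError: Error {
    case itemNotFound(String)
    case countMismatch(expected: Int, actual: Int)
}

// A throwing function replaces deeply nested error-code checks.
func verifyCount(item: String, knownItems: Set<String>, expected: Int, actual: Int) throws {
    guard knownItems.contains(item) else {
        throw InventoryError.itemNotFound(item)
    }
    guard expected == actual else {
        throw InventoryError.countMismatch(expected: expected, actual: actual)
    }
}

do {
    try verifyCount(item: "widget", knownItems: ["widget"], expected: 10, actual: 9)
    print("count verified")
} catch InventoryError.itemNotFound(let item) {
    print("no such item: \(item)")
} catch InventoryError.countMismatch(let expected, let actual) {
    print("expected \(expected), counted \(actual)")
} catch {
    print("unexpected error: \(error)")
}
```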
What I learned:
Post-conference bottle share
Why I attended: I brought two great beers from local Houston breweries and they needed to be tasted. (obligatory Houston beer picture included, especially since several people asked about them. They were mighty tasty)
What I learned:
One could make a case that this wasn’t really part of the conference. But it was. We traded beer stories, travel stories, family stories, tried to kill a monster in a dungeon while bluffing (more card games), and generally had a great time. All were invited.
With the amount of pre-conference setup, conference materials and notes gathered from this (and the previous two) RWDevCon, the investment here will continue to pay off as they are used and referenced. Next comes incorporating these into release plans for the apps we already have deployed, and those we will deploy in the future.
Some additional photos included here at the end.
Notes on the 2017 SXSW Health Tech sessions I attended (some with photos, some with photos of slides from the presenters), in order of relevance to current projects. The sessions (and links to each if you want to jump down) are:
To see notes from other SXSW2017 sessions:
This was an excellently balanced and informative presentation where Dr. Duncan presented the technical perspective and Dr. Pevnick presented the data analytics and research perspective. I took pictures of most of their slides, the pertinent ones are included here.
Cedars-Sinai started giving patients the voluntary option, through their patient portal, to link wearable devices and their readings, and integrated those readings into their EPIC EMR system. With little advertising, they got up to 2,800 patients (out of 130,000 portal users) sending in readings.
An interesting session title, especially given that two of the panelists with devices also had apps that were critical to those devices. The incongruity was somewhat resolved by the discussion that the focus was on the device, as opposed to YAAS (Yet Another App Syndrome, my acronym).
Panel: Lu Zhang (NewGen Capital, VC), Stuart Blitz (SeventhSense Biosystems), Janica Alvarez (Naya), Jeff Dachis (OneDrop)
I could have elected to wait in the two lines for Joe Biden (one for wrist bands, one to get in) and his cancer moonshot discussion. And, as I found out later, I also could have fanboyed out and found the Game of Thrones session (which I wasn’t aware of), which was right next door to Biden (apparently).
But the statistics and perspectives presented in this SXSW Health Tech session were a reminder of the size of the problems of diabetes and pre-diabetes.
Panel: Dr. Baker Harrell (It’s Time Texas), Michael Mackert (UT Austin), Nish Parekh (IBM Watson), Stephen Pont (Dell Medical Children’s)
This was a Texas-focused session, which featured using technology to reach all Texans. Statistics were presented about smartphone penetration (i.e., they’re almost everywhere), and the app called “Choose Healthier”, a collaboration between It’s Time Texas and the Dell Children’s Medical Center, was introduced. At the time of the presentation it contained events and location information for Austin and the surrounding area.
The slide below shows stats from a 2016 Pew study on smartphone penetration. The point of the panel was that apps could be delivered to all people regardless of income level or demographic factors.
This is the session where I got stuck in an elevator on the way up to the Austin Chamber of Commerce. Lovely! Apparently this is the only way to get up to the chamber of commerce. We weren’t in there for longer than ten minutes, and since it was raining out it wasn’t too steamy…just another bit of excitement at SXSW.
Panel – Brian Baum, Charles Huang, Karen DeSalvo, Sukanya Soderland
This panel had an interesting mix of local and national perspectives, all of whom agreed that data collection is hard but data integration is harder. One of the best slides was one I got a mostly crappy photo of (if you get stuck in an elevator you don’t get the best choice of seats, or so I found out). It shows the amount of money that is invested in segments of healthcare that create or utilize data…versus integrating or sharing it. That slide is below.
Karen DeSalvo, the former director of the ONC, shared the goals of data and system integration between the public and private sectors. There was little discussion of what would happen to these goals under the new administration.
At this panel, Brian Baum introduced Connected Health Austin, a local initiative. There was discussion of the defined data communities within Austin, how they all “solve the same problem differently everywhere,” and how Connected Health Austin would be different in this regard. I heard of several initiatives of this type in Austin during SXSW; hopefully they will all inter-connect.
Panel – Abhas Gupta, Andrew Rosenthal, Carine Carmy and Matt Klitus
The focus here was on providing advice for starting a company in the health tech sector.
Presenters: Slava Rubin (one of the founders of Indiegogo), Bill Clark (CEO of MicroVentures)
First Democracy VC is their joint venture focused on equity crowdfunding, which was made possible by Reg CF, Title III. Slava and Bill said all of their ventures thus far have reached their funding goals. A slide they shared (at the end of these notes) shows that as of March 2017, about 230 Reg CF offerings had been filed with the SEC (since it ‘went live’ in May 2016).
Slava shared a brief timeline leading up to the availability of equity crowdfunding.
A slide the gents shared on current equity crowdfunding statistics is shown below.
I like the idea of Apple’s iTunes Match service, but I’ve had some issues getting it to work the way I think it should, especially with rips of CDs that I own. The main issue is songs constantly showing an iCloud Status of “Waiting”, with those same songs not downloadable to any of my iOS devices. This is what I did to fix it and get to where all of my songs are uploaded or matched on my OS X and Windows desktops and laptops, and available to download on my iOS devices. Hopefully it will help someone having a similar problem.
The basic fix is to find any song with an iCloud Status of “Error” and fix that error, either by deleting that entry, locating the song (if iTunes could not find it) or some other remedy. Since this did fix my problem, I’m assuming the synchronization process between iCloud and the local machine does not handle or report errors very well, and either times out or just fails when it encounters them.
There are several benefits to using iTunes Match – when it works:
The main issue is when songs from albums that I own (CDs ripped, vinyl converted) are stuck in a “Waiting” state in iTunes for OS X and iTunes for Windows, and show as not downloadable (no “download from cloud” button). This state persists even when I’ve tried to force an update (from the iTunes menu, File -> Library -> Update iCloud Music Library). The “Waiting” state looks like the screenshot below.
It appears that there was an error in the iTunes “Update iCloud Music Library” process or the normal process that tries to match music. But there is no error log. To detect the error, you have to look at the “iCloud Status” of each song.
To do this and detect errors:
I never found an error log that showed the exact errors, only this indicator in iCloud Status. Since I did this, I’ve had no issues on any of my devices.
Back in the day, my son collected Pokemon cards, played Pokemon on Gameboy, and taught me about Pikachu, Snorlaxes, and other interesting creatures…as I’m sure the kids of many others my age did. As my son grew older, he gave his Pokemon card collection to someone much younger who had more enthusiasm (a very generous move, one he semi-regretted when he saw the prices for some of those cards on eBay!) and moved on to other things. Now in his mid-twenties, my son and I are playing Pokemon Go, semi-together from 200 miles away.
Despite the articles about the “nerd herd” and getting the geeks out from behind their computers (which is a pretty good thing, IMHO), in addition to the aforementioned family camaraderie (and I loudly applaud those friends of mine who are actively playing with their kids), there are other obvious reasons certain people should become familiar with this app/game:
Pokemon Go is the top free app (with in-app purchases) on the Apple App Store and Google Play Store in the US, the UK and multiple other countries, and has been since its release. It is the fastest app to reach 10 million downloads worldwide, reaching that mark in seven days (source). It also currently leads all apps in daily usage time (i.e., how long users actually have the app open) (source).
It did have a bit of a head start in both content and database:
Some characteristics of the game will be familiar, especially to those who played previous pokemon games. But the basics are similar for anyone who has used any count/goal-based program: collect everything and level up. This is a common development model, whether for a beer-drinking app like Untappd (see my breakdown of the Untappd app here), a healthcare/shopping app like Walgreens, or game apps. There are badges for most everything (similar to programs like Untappd), though I seem to rarely look at them, other than for counts.
These are holes that will be filled, either in future releases or by independent developers. There are already examples of an entire ecosystem springing up around the game: chat apps (see this developer’s app blog), for example, which I assume are used to tell people when a rare pokemon is near. There are also several hacks, such as maps that use the app protocols to determine locations of pokemon, pokestops, etc. (most of these can be found in the pokemondev sub on reddit). Some of these are getting shut down; one even mentioned a “cease and desist” order.
The “augmented reality” piece, where you can use your device’s camera to see pokemon against the background of the real world, is interesting but unnecessary in this game. It is such a battery sucker that I do not know of any players who have not turned it off. It is used primarily as a novelty (I found a pokemon at a landmark) or by businesses to lure pokemon hunters in.
Estimates of how much the game has made for the various parties vary. One estimate says that Apple, purely on the percentage it receives from in-app purchases through the app, will make $3 BILLION in revenue over the next couple of years (source). Since Apple gets 30% of in-app purchases, that would imply an estimate of $7 BILLION in revenue for Niantic (one would assume this gets shared with Nintendo for licensing).
There is, of course, no need to spend money in the game if you choose not to (full disclosure: I do not). Sensor Tower estimates $1.6 million per day spent in the US. And the app has not yet launched in Japan, where the average spend per mobile user is higher and the Pokemon craze is even more rabid.
Nintendo’s stock price doubled following the release of the app (chart here) though it has retreated a bit from those highs.
Local businesses are taking advantage as well. Yelp now lets users filter based on pokestop locations. Many shopping areas and downtowns have multiple pokestops near them. In the game, there are items known as “Lures” which do what the name implies (they lure pokemon to a pokestop for 30 minutes). When this happens, the pokestop lights up on the map, shooting purple pieces up like flares. Small businesses near pokestops are dropping these to draw people in while they hunt.
Pokemon Go is almost as well known in these first few weeks for server crashes as it is for having more users than most other applications. Since Niantic spun out of Google, one would assume they run on Google infrastructure. They are not on Amazon Web Services (AWS), as the Amazon CTO has repeatedly (and humorously) offered help over Twitter whenever the servers are down.
As the game added multiple countries over this past weekend (July 16), the servers supporting the game crashed repeatedly, leaving the game inoperable most of that Saturday morning.
The image on the right is all that the players see. There is no notice that the game is having server issues. So users either continue to press “retry” (which comes up after a few minutes of this screen) or kill the app and start over…both of which cause more login attempts and impact on the servers.
From a capacity planning standpoint, one would assume that a trending analysis would be done on the initial United States implementation before adding the multiple additional countries. Either this was not done or it was done incorrectly, and demand exceeded capacity and crashed the servers.
This is tolerated somewhat humorously (check out the Pokemon Go reddit forums for examples) for now. But if there are tours, events and other plans made around the app (as there were that Saturday), this will not be acceptable to the user community for long.
Interestingly, as of this writing, Niantic is advertising for a Software Engineer – Server Infrastructure...probably a much-needed position just now!
My fellow joggers: we have an enormous advantage in this game of Pokemon Go. And this infuriates my son…and is the only reason I can even begin to keep up with him in this game (and with the many teenagers that are on summer break and do not have to work). That advantage is that mileage matters in several different facets of the program:
It may be obvious, but the downsides to running with the game are:
I have an old Google Glass from an earlier development project. Glass would be a great accessory for this game, and for all games that combine the real world with augmented reality. The ability to see landmarks and get heads-up-display facts and stats was one of the benefits of Glass. Unfortunately, the issues it had, particularly with battery life, would have to be fixed. And it had a sweat problem (i.e., sweat be bad for Glass). But imagine just running along and speaking commands to Glass about throwing Pokeballs…those who make claims of “nerd herd” would have a field day with that one!
My current collection is below. Have fun!
There are, obviously and intuitively, differences between testing an iOS app on the Xcode Simulator and testing on a real device. The obvious ones run the gamut from the Simulator having no camera to the keyboard working differently on each. The intuitive ones, in my mind, come from the fact that the Simulator is running on a different operating system (OS X) than the devices (iOS) the app is intended for.
The difference that repeatedly bites me is: CAPITALIZATION matters.
The majority of the apps I do at JoSara MeDia are HTML5 apps in a framework called Baker. If you are interested, the rationale behind this is that most of the apps are either coming from books or eBooks (and hence are already in a format close to HTML, like ePub) or are heading in that direction (so we want to make conversion easy).
I was putting in a very nice jPlayer-based audio player (called jquery.mb.miniAudioPlayer; check out the link, it is quite well done), and it looked great on the simulator, as you can see in the screenshots below. I tested it on several different simulator devices – all looked as expected, and all performed the autoplay function when expected.
In case you are interested, this is from a forthcoming “coffee table poetry book as an app” project called Quebradillas.
But once I transferred the app to a device (either through cable or TestFlight), the audio player graphics did not make the transition (see screenshot below). And neither did the autoplay functionality.
The autoplay issue was, again, capitalization: the parameter in one of the examples had autoplay in camelCase (i.e., autoPlay), but in the mb.miniAudioPlayer.js, the parameter was simply “autoplay.”
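Both issues come down to case-sensitivity: the Simulator sits on OS X’s case-insensitive file system, while the file system on iOS devices is case-sensitive, so a reference whose capitalization doesn’t match the actual file name resolves fine in the Simulator and silently fails on the device. A minimal sketch of that failure mode (the paths here are hypothetical):

```swift
import Foundation

// Suppose the bundle actually contains "www/Player.css", but the HTML
// (and this check) reference it in lowercase. Hypothetical paths.
let referencedPath = "www/player.css"

// On the Simulator (case-insensitive OS X file system) this returns true;
// on a device (case-sensitive file system) it returns false, and the
// stylesheet silently fails to load.
print(FileManager.default.fileExists(atPath: referencedPath))
```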
By noting this, I aim to remind my future self to use capitalization as one of the first items to check when apps look different in the simulator vs. on the device, especially when using HTML5 app frameworks.
All of the apps JoSara MeDia currently has in the Apple app store (except for the latest one) are self-contained; all of the media (video, audio, photos, maps, etc.) are embedded in the app. This means that if a user is on a plane, or somewhere without a decent network connection, the apps will work fine, with no parts saying “you can only view this with an internet connection.”
This strategy works very well except for two main problems:
These two issues prompted me to use the release of our Quebec City app as a testing ground for moving the videos included in the app (the largest space-consuming media in the app) into an on-demand “cloud” storage system. I determined that the best solution for this is Apple’s HTTP Live Streaming (HLS).
There are still many things I am figuring out about using HLS, and I would welcome comments on this strategy.
For most apps, there is no way to predict what bandwidth your users will have when they click on a video inside your app. And there is an “instant gratification” requirement (or near instant) that must be fulfilled when a user clicks on the play button.
Have you ever started a video, had it show up grainy or at lower quality, and then watched it get more defined as it plays? This is an example of using HLS with variant playlists (other protocols do this as well).
Simply put, with HLS a video is split into several time-segment files (denoted by the file extension .ts), which are listed in a playlist file (denoted by the file extension .m3u8) that describes the video and its segments. The playlist is a human-readable file that can be edited if needed (and I determined I needed to, see below).
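For illustration, a simple media playlist looks something like this (the segment file names are hypothetical; the tags come from the HLS spec):

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:10.0,
BikeRide_00001.ts
#EXTINF:10.0,
BikeRide_00002.ts
#EXTINF:8.5,
BikeRide_00003.ts
#EXT-X-ENDLIST
```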
Added on to this is a “variant playlist,” which is a master playlist in the same format that points to other playlists. The concept behind the variant playlist is to have videos at multiple resolutions and sizes but with the same time segments (this should be prefaced with “I think”; comments appreciated). When a video player starts playing a video described by a variant playlist, it starts with the lowest-bandwidth playlist, which is by definition smaller in size and therefore should download and start to play the quickest, satisfying that most human need, instant gratification. It then determines through a handshake what bandwidth and resolution the playing device can handle, and ratchets up to the best playlist in the variant playlist to continue playing. I am assuming (by observation) that it will only ratchet up to a higher-resolution playlist at the time-segment breaks (which is also why I think the segments all have to be the same length).
There are two links that provide standards for videos for Apple iOS devices and Apple TVs (links below):
These standards do overlap a bit but, as you would expect, the Apple TV standards call for higher resolution, because an Apple TV has always-connected, higher-bandwidth (minimum Wi-Fi) connectivity than one can expect with an iPhone or iPad.
To support iPhones, iPads and Apple TVs, the best strategy would be to have 3 or 4 streams:
Thus the steps become:
My videos are in several shapes and resolutions, since they come from whatever device I have on me at the time. They are usually from an Olympus TG-1 (which has been with me through the Grand Canyon, in Hawaii, in cenotes in the Yucatan and now in Quebec City), my indestructible image and video default, or some kind of iOS device. Both are set to shoot at the highest quality possible. This makes the native videos very large (and the apps they are embedded in larger still).
There are several tools to convert the videos. These are the ones I’ve looked into:
Once you have your videos converted, the next step is to build the segmented files from those videos, plus the playlists that contain the metadata and locations of the segmented files (the segments are the files that end in .ts; the playlists end in .m3u8). There may be other tools, but there are only two that I have found.
Finally, you need to build a variant playlist, which is a playlist that points to all of the other playlists, each a different resolution/bandwidth option over the same time segments.
Currently, I am using a combination of Elastic Transcoder and manual editing. I take the variant playlist that comes out of Elastic Transcoder (which contains 400K, 1M and 2M playlists), then edit it to add the higher-res playlist I created using mediafilesegmenter. This gives a final variant playlist with four options that straddle the iOS device requirement list and the Apple TV requirement list.
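For illustration, the resulting four-option variant playlist looks something like this (the BANDWIDTH values are illustrative; the playlist names match the validator output quoted later in these notes):

```
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=400000
hls_400k.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=1000000
hls_1m.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2000000
hls_2m.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=4000000
BikeRideHI.m3u8
```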
Most of my apps are HTML5 using a standard called HPUB. This is to take advantage of multiple platforms, as HPUB files can be converted with a bit of work to ePub files for enhanced eBooks.
Using the videos in HTML5 is straightforward – just use the <video> tag and put the variant playlist file in the “src=” parameter.
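That looks something like this (the URL is a hypothetical S3 path to the variant playlist):

```html
<!-- src points at the variant (master) playlist; the player picks the stream -->
<video src="https://s3.amazonaws.com/my-bucket/BikeRide/BikeRideAll.m3u8"
       controls width="640" height="360">
  Sorry, your browser cannot play this video.
</video>
```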
In the end, the videos work, and seem to work for users around the world, with low or high bandwidth, as expected. I’m sure there are things that can be done to make them better.
I’ve used the mediastreamvalidator command from the Apple developer tools pretty extensively. It doesn’t like some things about the AWS Elastic Transcoder-generated files, but it is valuable in pointing out others.
Here are some changes I’ve made based on the validator, and other feedback:
Error: Illegal MIME type – this one took me a bit. The m3u8 files generated by AWS are fine, but files such as those generated by the mediafilesegmenter tool do not pass this check. They get tagged with the error “--> Detail: MIME type: application/octet-stream”. In AWS S3 there is a drop-down list of MIME types in the “Metadata” section, but none of the recommended Apple MIME types are there. The files generated by AWS have the MIME type “application/x-mpegURL”, which is one of the recommended ones. Since it is not a selection in the drop-down, it took me a while to determine that you can actually just type the MIME type into the field manually, even if it is not in the list. Doh!
Time segment issues – whether using AWS Elastic Transcoder or the mediafilesegmenter command-line tool, I’ve always specified 10-second segments. Unfortunately, either Elastic Transcoder isn’t exact or the mediastreamvalidator tool does not agree with Transcoder’s output. Here’s an example snipped from mediastreamvalidator’s output:
Error: Different target durations detected
--> Detail: Target duration: 10 vs Target duration: 13
--> Source: BikeRide/BikeRideHI.m3u8
--> Compare: BikeRide/hls_1m.m3u8
--> Detail: Target duration: 10 vs Target duration: 13
--> Source: BikeRide/BikeRideHI.m3u8
--> Compare: BikeRide/hls_400k.m3u8
--> Detail: Target duration: 10 vs Target duration: 13
--> Source: BikeRide/BikeRideHI.m3u8
--> Compare: BikeRide/hls_2m.m3u8
This is basically saying that the “HI” version of the playlist (which was created using Apple’s mediafilesegmenter command-line tool) has a target duration of ten seconds, but the AWS Elastic Transcoder-created playlists (the three that start with “hls”) have 13…even though the job that created them was set for 10 seconds. I am still trying to figure this one out, so any pointers would be appreciated.
File permissions – when hosting the playlist and time-segment files in an AWS S3 bucket, the permissions always need to be reset after uploading so the files are readable (either “Public” or set up correctly for a secure file stream). This seems obvious, but working through the issues the validator brought up had me uploading files multiple times, and this always came back to bite me as an error in the validator.
HLS v3 vs. v4 – except for the fact that you have to have separate audio and video streams in v4, I’m still clueless as to when and why you would use one version over the other. It would seem that a standalone audio stream would be needed for really, really low bandwidth. But separating out the video and audio streams is quite a bit of extra work (I would be thrilled if someone would leave a comment about a simple tool to do this). I can see some advantage in separate streams, in that it would allow the client to choose a better video stream with lower-quality audio based on its own configuration. More to learn here for sure.
Client usage unknowns – now that the videos work, how do I know which variant is being played? It would be good to know if all four variants were being utilized, and under what circumstances they are being consumed (particular devices? bandwidths?). There is some tracking on AWS which I can potentially use to determine this.
I hope this helps anyone else working their way through using Apple’s HTTP Live Streaming. Any and all comments appreciated. Thanks to “tidbits” from Apple and the Apple forums for his assistance as I work my way through this.
To see the app this is used in, click on the App Store logo. I’d appreciate feedback, especially from those outside the US, on (a) how long it takes the videos to start and (b) how long it takes the quality to ramp up.
Dr. Edward Tufte is doing his one-day seminar tour. I sent two of my team to attend the first day in Austin, and I went on the second day in Houston. If that doesn’t send the message that I think this is a very worthwhile and valuable seminar, let me be clearer: Dr. Tufte has been and remains the expert in data visualization, and he not only keeps up with developments in this area, he explores and expands it with developments of his own.
The fee was $380 and includes all four of Dr. Tufte’s books, which cost $100 by themselves. I was not aware that these four gorgeous books were self-published by his own Graphics Press; that gives my JoSara MeDia something to aspire to.
There was no set structure to the presentation, though in typical Tufte fashion there were handouts and suggested reading during the “study hall” period. I got there early, sat in the front row, and had Dr. Tufte come down the row, introduce himself and ask what I did while signing the books. We talked a bit about medical records, EPIC (a large EMR company) and how faxes still dominate the medical field.
Besides geeking out with Dr. Tufte, what did I get out of it?
The outline below is my own, just to arrange my notes. They are here for my bad memory, and for your consumption.
Study hall had several assigned readings in the hour set aside, during which Dr. Tufte roamed around, signing books and talking. Also in the agenda is a set of “Special Interest Topics” (ten sections of these) and two selections of “Homework”. Wonder if I should set deadlines for my guys to get this done… :)
Dr. Tufte went through multiple examples of “information as the interface.” From the seminar page on his website:
“Fundamental design strategies for all information
displays: sentences, tables, diagrams, maps, charts,
images, video, data visualizations, and randomized
displays for making graphical statistical inferences.”
Example #1 – Stephen Malinowski’s Music Animation Machine (try it out at the link)
Example #2 – National Weather Service (the base site is linked to, but the site reviewed was a specific forecast, enter a zip code to see the particular page)
He mentioned showing little data as plain numbers and words, rather than as little-data graphics…which doesn’t force viewers to have to figure out graphs.
Everyone knows how to use, read and view numbers, words and simple graphics.
Tufte: “Minimize design-figuring-out-time, Maximize content reasoning time”
A lot of data on one scrollable page. “Humans are good at scanning and recognizing what they are looking for” (example: finding your name on a long list of names)
Tufte: “Being read to from a Powerpoint, the rate of information transfer asymptotically approaches zero.” This guy is witty, eh?
Tufte: “Only two industries call their customers users: illegal drugs and software.” OUCH!
1st: view/eye, 2nd: scroll, 3rd: drill down
Example #3: Policy Story from the NY Times (old article, can’t find online)
Logo and author links show responsibility, accountability, credibility
There are 60 numbers in the story and no graphics, reinforcing the earlier point that everyone knows how to use/read/view numbers and words…no need to get fancy.
Tufte: “Use experts to get the presentation/article out of your voice and into the expert’s voice.”
The graph in this article is terrible, sourced from a lobbyist group, with no defensible numbers.
Example #4: Health article from the NY Times (again, an old article). Dr. Tufte does like the NYTimes website, and uses it frequently in examples.
The main point here is a graph of charges vs. Medicare reimbursement. It uses annotations directly on the graphic, which helps the reader immediately know how to read the display.
Dr. Tufte continually emphasized comparing corporate IT properties to Google News, Google Maps, the NY Times and the WSJ. “Put your IT material next to these. Aim high.”
He used the ESPN box score, one level down from the home page: an example of numbers and words, a table of lots of numbers, that is viewed all season for every game and has been for a decade. Great example…though the example he showed didn’t have the cool underwear ad that mine captured! Score!
Tufte makes a point about ordering by interest or mathematical order, not alphabetically.
He ends this segment with my favorite quote: “No matter how beautiful your interface is, it would be better if there is less of it!”
From the seminar page on his website:
“A new, widely-adopted method for presentations:
meetings are smarter, more effective, 20% shorter.”
Dr. Tufte showed an article about Amazon, where they have no PowerPoint and all meetings start with a 30-minute study hall. A quote from the article (not sure whom to attribute): “PowerPoint is easy for the presenter, hard for the audience.”
From the seminar page of his website:
“Standards of comparison for workaday and for cutting
edge visualizations. How to identify excellent
information architectures and use them as models and
comparison sets for your own work and for the work
of your contractors. Monitoring the designs of others.”
This section was a bit of a jumble, with various topics, but some excellent examples.
He covered scientific publishing, discussing how jargon is reduced as articles move from the back of Nature to the front of Nature to the more populist websites and publications. Most people only read the abstracts (so true), and the abstract should state (like a thesis) problem-relevance-solution.
After a break, Dr. Tufte once again mentioned the woefulness that is government and corporate IT dashboards. He referenced again the ESPN.COM box score page as an easily readable dashboard with tons of numbers, a “standard for comparison.”
The point of an information display – “To help thinking about the content”
Example #1: NYTimes article with annotated linking
Tufte notes that the NYTimes employs 40 “Graphics News Reporters.” Do not use plain, un-descriptive lines; use annotations on lines. He also went through a diagram on pg. 78 of BEAUTIFUL EVIDENCE that uses annotated lines to track SARS patients.
Example #2: Tim Berners-Lee, his original paper proposing what became the World Wide Web, and linking
Tufte showed the manager’s comment originally written on top of Berners-Lee’s paper: “Vague but exciting…”. And a good quote from the document about moving from a hierarchy of nouns to the flatness of verbs.
Example #3: xkcd – a Venn diagram of items on the front page of a college website on one side intersecting with items you really want to know…with only the college’s name in the intersection.
Example #4: Google Maps – again, he urges comparing this with your workaday presentations, since everyone uses and knows how to use this seemingly complex app. Compare your diagrams to Google Maps. Satellite view – overlay data on top of it.
Example #5: Popular music chart (from pg. 90–91 of VISUAL EXPLANATIONS). I found a similar one online; it is shown to the right. The print version is a flat interface; Tufte showed an iPad version of it, in a video and on screen, which had the artists’ names clickable, playing their music with videos behind the diagram. Very cool.
Then he showed the Viz-A-Matic…a graph generator that showed what NOT to do.
From the seminar page:
New ideas on spectatorship, consuming reports.
How to assess the credibility of a presentation
and its presenter, how to detect cherry-picking,
how to reason about alternative explanations.
Tufte: “A presentation or graphic should provide reason to believe.”
Tufte: “An open mind but not an empty head.”
Two things in presentations as a spectator – Content and Credibility
Watch out for cherry picking (which is not when a Houston Rocket stays back toward his basket waiting for a long pass) – picking only certain details to support points. Also watch for presenters not linking to the source documents, and being in a “rage to conclude”.
Tufte: “Why go to a presentation whose conclusion you agree with?”
For measurements in presentations, have a sense of what they are relative to. See how the measurements are actually made; get out into the field. See directly. The fog of data will fall from your eyes. Tufte mentioned a chemical company that was policing itself, collecting water samples in clear water…“sampling to please.” People and institutions cannot keep their own score.
Search Google Images for data, the search results are not gamed like the normal Google text searches are.
A little bit of talk about displays, and a demo of a movie panning over very nice maps of the Swiss Alps. This is something I’ve tried to do with the Grand Canyon app, and I’ll try again.
Dr. Tufte also talked about Small Multiples and Sparklines briefly, but these are covered in detail in his books.
Dr. Tufte demonstrated a tool called “ImageQuilts” by Adam Schwarz. It is a Chrome plug-in/extension. I played with it, using images of work by my friend and artist Barbara Franklet via a Google image search.
He closed with his Fundamentals. As stated in the beginning, there are six in the book, but I counted seven.
Tufte: “move to web-based presentations. move away from flatland.”
I cannot recommend this seminar and these books highly enough.
The website we have utilized for beta testing of apps, TestFlightApp.com, shuts down on February 26th, 2015. All app testing will be moved to the iOS8 TestFlight app and managed through Apple’s iTunes Connect.
As a developer and a user, there are many more PROs to this than CONs. The new TestFlight app for iOS 8 radically simplifies the process of beta testing apps.
In the old TestFlight.com, a developer had to:
There were several places where that process could get stuck and could indeed go wrong.
With the new TestFlight iOS8 app, the steps are much simpler:
The device ID mapping is done by Apple. No changes in the provisioning profile are needed.
What else is different? Here are the cons:
Overall, the PROs far outweigh the CONs, and hopefully some of the other pieces will show up in the future.
Existing users can be exported from TestFlightApp.com into CSV files for import as external users on Apple’s iTunesConnect web site (where user management is now controlled). Detailed instructions here.
There are several options for cloud storage, with different pricing and some slight differences in features. Pricing is changing quite a bit. The table below tries to show, for a given amount of storage, which option is the cheapest among the major players.
Note that I ignored the variety of promotions; for example, I received 50GB for free from Box when I installed their iOS client and signed up for an account in January. I did include extra space that the user gets for doing somewhat simple tasks (like inviting friends, as DropBox provides).
I also did not include multi-user (cost per user per month) plans, as a lot of them state “custom pricing”.
There are other options out there like MEGA and BRIGHT COVE which I did not include yet in this comparison.
I’ve tried to note where there are other things that might impinge on the amount of storage (for example, the Google Drive storage is shared by several applications).
For the color coding, Green is the best price for the amount of storage, Red is the worst.
For those of you who are more organized than I, take note that you can get a total of 44 GB for free simply by signing up for all of these (5GB from Amazon, 10GB from Box, 2GB from DropBox, 15GB from Google Drive, 5GB from iCloud and 7GB from MS OneDrive); but like your car keys, you just gotta remember where you put everything.
Everybody except DropBox offers at least 5GB for free; one would assume DropBox will change that soon. With recent announcements, DropBox also has the most expensive options in several tiers, so one would assume that will change as well.
Amazon and Google come out as the least expensive options the majority of the time.
I included the iOS 8 iCloud Drive pricing that came out of WWDC; with current pricing, Apple’s options suck…with the iOS 8 pricing (which is not on a price sheet yet, only in WWDC presentations) they are actually competitive.
If you see any errors or changes, add a comment. This is from public pricing sheets (except for Amazon’s, which I had to login to find, and iOS8 pricing, which is only from WWDC presentations) as of July 6, 2014.
Long table after the break.