This last item is the most interesting thing in Musk’s ‘master plan deux’. Cool idea, but it seems very pie in the sky. Reliable, foolproof, autonomous intelligence is still many years away … and that’s just half of this concept. The other issue is getting people to volunteer their cars to a self-driving fleet — surrendering their vehicle for unknown times to unknown people.
The good thing about taxi cabs and current ride-sharing models is that the cars are manned: someone is always there to monitor the actions of the travelling passengers. Leaving my car in the sole possession of someone else is an uncomfortable notion. The cars may drive themselves, but they don’t clean or repair themselves.
The last sentence is a much clearer path, where Tesla owns and maintains dedicated vehicles for taxi services. It still depends on the realisation of autonomy but the business model is clear. It’s Uber … without the overhead costs of paying drivers.
The following is a discussion of Swift 3’s controversial approval of the ‘sealed by default’ proposal that puts constraints on subclassability. To contextualise the decision, it is first necessary to review how Swift approaches access control.
In Swift, types and members default to internal visibility. This means they are only visible within the scope of the same module; from another module, internal types are not accessible at all. Exposing anything requires a public keyword on every symbol. Nothing is exposed to the wider project by default; only the things the developer has explicitly chosen are available to other modules.
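These rules can be sketched in a few lines of Swift. The `EventLogger` type and the `AnalyticsKit` module name below are hypothetical, invented purely for illustration:

```swift
// Imagine this type lives in a module called AnalyticsKit (a
// hypothetical framework name used purely for illustration).
public struct EventLogger {
    public init() {}

    // Marked public, so code in importing modules can call it.
    public func log(_ name: String) -> String {
        return format(name)
    }

    // No modifier, so this defaults to internal: callable anywhere
    // inside AnalyticsKit, invisible to modules that import it.
    func format(_ entry: String) -> String {
        return "event: \(entry)"
    }
}

let logger = EventLogger()
print(logger.log("tap")) // prints "event: tap"
```

Inside the module, both calls compile; from an importing module, `logger.log("tap")` works but `logger.format("tap")` is a compile-time error.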
This sounds onerous but it actually makes sense from a codebase design perspective. Generally, most methods and properties written into a class or struct are implementation details which are irrelevant to other consumers. As code is read more often than it is written, the benefits of distinguishing a public and private API surface outweigh the burden of having to write a public declaration every so often.
This ideology is central to Swift, which favours explicit statements over implicit behaviours. This is done primarily, but not entirely, to encourage best coding practices. Developers have to make a conscious decision about which parts of the interface are public and which aren’t. It also enables potential performance benefits like static dispatch and intelligent high-level features like Generated Headers.
All of this strictness is uncomfortable for developers coming from Objective-C, a lax language that lets everything be ambiguously public or private at the mercy of the programmer. It was uncomfortable to me. Swift allows for the same dynamic runtime features, but it wants those capabilities to be explicitly defined and constrained to only the symbols that require them.
The title of the post has nothing to do with any of this functionality, of course. There are clear parallels to draw, though, in how Swift is thought about and designed.
‘Sealed by default’ is functionally separate from runtime manipulation and access control: sealed classes cannot be subclassed outside of the module in which they are declared. The underlying premise is the same, though: only enable functionality when it is appropriate, using keywords to denote special entitlements.
Objective-C barely has the concept of modules, let alone sealed classes. Any class in Objective-C can be inherited and overridden regardless of what framework it resides in. Swift 2 already places some limits on this freedom: although anything can be subclassed by default, the final keyword prevents any source from subclassing a class (essentially turning it into a struct-like reference type).
final is more restrictive than sealed which is more restrictive than open (the implicit Objective-C behaviour). Sealed classes are still open inside their own module. This allows flexibility for the module maker (supporting the common class cluster pattern) whilst remaining closed to the rest of the codebase.
The concept of sealed classes does not exist in Swift 2 at all but is going to be the new default in Swift 3. Developers of modules can add the ability for classes to be subclassed by anyone using the open keyword on relevant type declarations.
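As a sketch of this spectrum under the approved Swift 3 rules (the `DrawingKit` module and all type names here are hypothetical), the three levels look like this:

```swift
// Imagine everything below is declared in a module called DrawingKit.

// Swift 3 default: sealed. Subclassable inside DrawingKit,
// but closed to any module that imports it.
public class Shape {
    public init() {}
    public func area() -> Double { return 0 }
}

// Subclassing within the same module still works, which keeps
// patterns like class clusters viable.
class Square: Shape {
    let side: Double
    init(side: Double) {
        self.side = side
        super.init()
    }
    override func area() -> Double { return side * side }
}

// `open` restores the implicit Objective-C behaviour: any module
// may subclass this and override its open members.
open class View {
    public init() {}
    open func draw() {}
}

// `final` is the most restrictive: no subclassing anywhere,
// not even inside DrawingKit itself.
public final class RenderToken {
    public init() {}
}

print(Square(side: 3).area()) // prints 9.0
```

An app importing DrawingKit could declare `class MyView: View`, but `class MyShape: Shape` would be rejected by the compiler, and `RenderToken` cannot be subclassed at all.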
This choice for classes to be sealed by default with Swift 3 has caused a lot of controversy; even the core team admitted there was no consensus in their mailing list post approving the change. I think it is the right thing to do but it’s not hard to see why others are angry.
The change removes the capability for application developers to subclass third-party library and framework code. The module defines what can and can’t be overridden. Sealed doesn’t affect a developer’s own classes, but it does stop developers from overriding framework classes, like those found in UIKit and AppKit.
Developers can use clever subclassing tricks to resolve some bugs that exist in third-party frameworks. These are almost always unsupported brittle changes, though, that aren’t guaranteed to be stable or keep working between OS versions.
To be frank, it is a fluke that this stuff even works. Subclassing where you aren’t supposed to is essentially injecting code into someone else’s private components. Ask any Apple engineer and they will tell you never to subclass UIKit. In Objective-C, this rule is only expressed via documentation and guidelines. With Swift 3, it can be enforced in the code itself.
Perhaps there is a debate here about the usefulness of subclassing to combat bugs. I don’t think it is very useful, though, and it will get even less useful as people write frameworks in Swift, where classes aren’t that common and structs or enumerations are favoured instead. A good example is the adoption of C libraries (take any C library), which are made up of free functions. These functions can and do have bugs with no recourse via inheritance. This has not stunted adoption.
In general, language design should not be decided by the possible existence of buggy code. However much we strive to make perfect code, there will always be bugs. Sealed by default also prevents a different swathe of bugs from happening as API users don’t have to rely on humans to check documentation about whether something is supported. Sealed, final and open allow coders to accurately convey how their APIs are meant to be used, at least more accurately than Objective-C did.
As highlighted by the preface of this post, I hope the parallels between stricter rules about inheritance and stricter public-private access control are self-evident.
Designing and enforcing rules for inheritance is aligned with Swift as a language. It would be inconsistent not to have sealed by default with explicit keywords to allow for stricter or looser inheritance. It brings several benefits. Static dispatch can be employed more frequently when the compiler can guarantee there are no external subclasses. Performance benefits for a GUI application are minimal, granted, but every little helps.
Of course, the primary reason is creating a programming model that is more correct, with proper encapsulation and containment. Classes that aren’t meant to be subclassed can’t be. That has to be better than an ambiguous spaghetti mess.
I think if you can understand and agree with the explicit marking of things as public or not, then you should hold no objection to the sealed by default proposal. Explicitness in cases of ambiguity is a theme of Swift. Rather than guessing or choosing a lazy default that accepts anything, it is stringent in its enforcement. Accommodating debugging or monkey patching — when it flies in the face of the overall language — makes no sense to me.
The last thing I’ll say is that doing ‘sealed by default’ with Swift 3 makes the most sense when you consider the project’s roadmap. Apple wants Swift 3 to be the last major source-breaking release. Deciding to be restrictive now, with sealed by default, and then backtracking later is not a source-breaking change: Apple can freely make things open again … if the change really is destructive. It’s not possible to go the other way, from open to sealed, in a source-compatible manner later on.
Even without any knowledge of the pros or cons of the argument, logic indicates to do the more bullish thing now as the option to reverse it remains available.
Honestly, this sounds like Cue giving up. It seems that Apple was chasing a master plan for television (enough rumours and comments by TV execs to support it) and has now cancelled those plans, facing resistance from many parties over contractual terms.
This is disappointing for my view of what Apple needs strategically. I see original content as a necessity in order to stay relevant. Clarkson’s Top Gear follow-up show is a great example of something that is now completely outside of Apple’s control and will always be an ecosystem disadvantage. Amazon has no incentive to share its exclusive content with other platforms; it can shut out Apple TV indefinitely. Apple needs a magical agreement with the likes of Netflix and Amazon, or it needs its own leverage with its own compelling shows.
A ~$1200 Hackintosh build is faster than any Mac hardware Apple currently ships, as well as being significantly cheaper. I’m not surprised that buying your own components and assembling the machine yourself is cheaper than what Apple sells pre-configured; there’s a huge price gap between pre-built and self-built Windows PCs too. Official Apple Macs also come in custom-designed enclosures that will never be matched by a generic ATX tower, which goes some way towards justifying the price difference.
Another reason why the Hackintosh compares so well against the Mac Pro on the price-performance scale is that Apple has left the Mac Pro to languish, neither updating its internals nor offering compensatory price drops. The 5K iMac is a good counterexample here: a powerful computer that represents very good value for money, having been refreshed recently.
Obviously, I wish Apple would strive to make everything cheaper, but the bigger strategic issue, in my view, is this power differential when Apple abandons products for years at a time, an increasingly common occurrence.
Apple’s business model is to sanction only a handful of products for its platforms, which is normally fine and attracts most of the total market. However, not updating these hardware lines on a regular basis is a disservice to the Mac platform. If Apple is no longer interested in these segments, the products should be discontinued, not left to linger like rotting fruit. Right now, the biggest offenders are the Mac Pro and Mac Mini.
I’m not demanding year-over-year major overhauls but a pipeline of spec bumps and component improvements in line with the industry should be a requirement of these product categories staying in the lineup. I don’t really care if Apple wants to charge even more money than they do already for this … but it should be possible to buy sanctioned top-spec internals for Macs.
For average techies, Rundle describes how making a Hackintosh is actually pretty easy as long as you stick to the online community guides for what to do and what to buy. It seems as painless as building any computer from parts (which is not very hard at all, it’s like a 3D jigsaw puzzle where you have the assembly instructions for the solution).
A standout annoyance is the lack of iMessage support, as it is seemingly tied to hardware serial numbers. My guess is this is related to the underlying iMessage encryption processes somehow. I would worry that this is a trend, and that incompatibility will spread to more Apple services as the company continues to enforce stricter security policies across the board.
I think the biggest drawback is the necessity to wait when new software updates come out for others to verify compatibility. This is related to the perpetual looming threat that, one day, Apple software could cut the Hackintosh industry off completely (whether on purpose or just by coincidence) and nothing will work ever again. This is fine if you can bear converting your hardware investment into a plain Windows PC as a final backstop.
Everyone loves to debate product naming, including me. At least for iPhone, the branding has been easy to guess for many years thanks to the cyclical tick-tock cadence of major chassis design revamp followed by an ‘S’ series incremental component update.
This year, as all rumours are indicating, the cycle is changing. This year’s flagship iPhone looks almost the same as an iPhone 6s, which is itself a derivative of the iPhone 6 from 2014. Add into the equation the fact next year’s iPhone is rumoured to be a major ‘all glass’ design revamp, and it’s hard to say that the new 2016 iPhone will be called the iPhone 7.
It doesn’t feel like it lives up to the stature that the 7 nomenclature implies. It also puts the 2017 iPhone in a sticky naming situation: iPhone 7s doesn’t seem appropriate for a year when the device heads in a completely new direction design-wise. Apple could leapfrog and jump straight to 8, but that feels weird given the generational history.
Names like ‘Extreme’ or ‘Air’ don’t really mesh with my sensibilities either. ‘Air’ sounds like a design change (iPad Air was significantly thinner and lighter than its predecessor) and ‘Extreme’ sounds corny. ‘Pro’ is the best suffix that has been suggested: would it be iPhone 6 Pro or iPhone 6s Pro? That’s a lot of syllables even for the 4.7 inch model, the 5.5 inch size would be awkward to speak and write: ‘iPhone 6s Plus Pro’.
Right now, I’d still bet on iPhone 7. It is the most obvious choice, even if it doesn’t quite fit the bill. In my mind, next year, Apple would move away from numbers at the logical design breakpoint, skipping the ‘7s’ conundrum and moving to something like ‘iPhone Air’ as the flagship branding.
The new Music app in iOS 10 looks different, drastically different. It’s been revamped with new font styles, navigation, layouts and animations. In general, app design is a combination of visual aesthetics and behaviour; evaluation of the changes Apple has made should consider both of these points. This is all subjective (which ultimately is what makes it hard) but the general consensus opinion on iOS 9 Music was one of confusion. The way it behaved was difficult to understand.
I think the new Music app represents a huge improvement in that area, greatly enhancing usability. Addressing my primary complaint, library navigation is now direct and obvious. There is a plain list of full-width buttons that directly open the primary views into the local music. If you want to see albums, tap Albums. If you want to start a playlist, tap Playlists. If you ever get lost in the menus, keep hitting the Back button until you get back to this list.
It’s a huge improvement over the iOS 9 drop-down selector thingy, a non-standard UI widget that actively hid navigation controls behind an additional button press. The iOS 10 app even allows for some personal customisation: tapping the Edit button reveals drag handles and additional toggleable rows. For example, in the old design, every user always had to see (and skip over) the Composers filter … which very few people care about. In iOS 10, the button defaults to hidden and can be turned on if desired.
One of these list items is ‘Downloaded Music’ which shows only tracks and albums that have been saved to local storage. Apple is plainly responding to user feedback that people couldn’t work out what was stored in the cloud and what wasn’t. Downloaded Music answers this question unambiguously, even adding an additional explanatory banner on detail views if the filter is applied. Circling back to the customisability, if being able to cleanly distinguish what is available in local storage is unimportant or unnecessary, the list item can simply be unchecked and hidden.
The persistent bottom bar tabs have also been tweaked to truly represent the primary features of the app. ‘Connect’ has been ditched from the tabs (a failed social network does not justify such a primary position in the interface) and ‘New’ has been renamed ‘Browse’. The ‘Library’ tab (née ‘My Music’) has also been moved to the primary (first) slot in the toolbar, where it always should have been.
Up Next has also been reworked and now appears inline on the Now Playing screen. Simply scroll down the view and the upcoming tracks are listed below; Up Next is no longer hidden behind yet another modal. It works better spatially: thinking of the view as a progressive timeline, the next songs are ordered beneath the currently playing track. Shuffle and Repeat are located nearby too, although I think there needs to be a way to Shuffle All from the main screen.
That’s the behaviour; big wins across the board as far as I’m concerned. Aesthetics are a different kettle of fish. iOS 9 Music was pretty boring in terms of visuals, mostly reusing stock UI components and doing an average job in areas where it did rely on custom elements.
The new app definitely makes more of a statement, pushing iOS onto a new design trajectory with big bold fonts. The Apple News and Home apps have also adopted this style. It’s not clear if Apple wants to move towards this style (defined by its use of heavy font weights and comically large elements) for all of its apps; it’s still early days.
I have mixed feelings on the appearance. I like the proliferation of buttons with backgrounds (such as the pink circle around the ••• button on album list views) as well as the shift away from translucency for most things. The thick title fonts are a regression, though, especially as they are applied inconsistently: navigation bar titles continue to use the normal system fonts, for example. The font size of the main headings is just laughable; it feels like someone has cranked up the Dynamic Type accessibility options.
The Search screen is another example of inconsistency. Whilst the text field has been ballooned to a larger-than-normal size, the segmented control and ‘Cancel’ button are as small as ever, with similarly small fonts. It does not match. The humongous components also seem like an inefficient use of screen space, which is funny given 2016 was the year Apple reintroduced a 4-inch iPhone to its lineup.
Visually, I love the new Artist and Album list views. Putting two big photos of album art per row looks great and cleanly splits the screen in half on my 4.7 inch iPhone 6. I wish the section titles used a larger typeface as they get lost amongst the cells.
The new look for the Now Playing screen is decent. I like how the album art pops up from the page (shadows!) to signify the song is playing and recedes back into the frame when paused, capped by a subtle bounce effect. I also like how the scrubber thumb increases in size when in use and nudges the time labels out of the way when dragged to either end of the line. I don’t like the spacing between the song name and the scrolling secondary row of text; the padding is too tight.
Another negative is the interaction to show and hide the modal is not 1:1 — it doesn’t follow the finger. As soon as a swipe gesture is detected, it fixes into the final state. The playfulness of being able to cancel the gesture mid-flight is lost; the previous incarnation of the app did this properly.
The ‘additional options’ sheet (activated by tapping a ••• button or long-pressing on album and song items) has also been redesigned to feature rounded-corner sub-sections. This view was poor in iOS 9 too and it hasn’t gotten better. In fact, I’d argue the applied border radius has made it worse. Your eyes are confused by the sudden appearance of four arbitrary blocks of content with irregular interactions. Moreover, buttons to ‘Love’ and ‘Dislike’ are presented side-by-side with near-identical iconography. Not to mention, every element in this view is tinted bright pink. It’s ugly. It’s even worse on smaller iOS devices, where the middle list section will scroll if space is constrained; the scrollbar indicators in this state do not respect the corner radius (hideous).
Neither design was perfect but if I had to choose, I’d pick iOS 10’s attempt over iOS 9. The usability is the main reason for this: the simple fact Music is getting a ground-up redesign after just one year is enough evidence to prove that Apple messed up badly the first time. As I hopefully expressed above, it’s still a mixed bag as far as aesthetics are concerned for the iOS 10 Music app. I think the Heavy Fonts look would work better if Apple had gone all the way and brought it to every app in a wide-reaching system overhaul. That is not the case, so it (sadly) sticks out in the crowd.
Leading up to the event, the general opinion regarding watchOS was a wish for Apple to rethink the structure of the mental model. I think it was clear to everyone that complications, glances and apps were too much. Three related-but-separate views into the same application was overkill, exacerbated further by the Watch’s sluggishness, which made switching in and out of the different states frustrating.
The community consensus was asking Apple to ditch apps and focus on status update interactions, notifications and glances for quick actions. What Apple did was cut out glances and make complications a primary entry point into apps. Apps that are represented by complications are prioritised by the system and kept in memory as much as possible, enabling them to be launched instantly.
Apple has also stated that it is redesigning its apps so that their opening screens display usable summaries of information and place primary actions upfront. In combination with the Dock, the new favourite-apps view that appears any time you press the side button, watchOS 3 retains much of the utility of Glances (quick information) even though they no longer exist.
Activity was by far my most used Glance on watchOS 2. With watchOS 3, I’ve put the Activity app into my Dock. As the screenshots in the Dock regularly refresh with the latest content, I simply press the side button to ‘glance’ at my rings. Tapping on the preview jumps me into the app immediately thanks to instant launch. Apple has managed to remove Glances entirely, reducing complexity, whilst retaining most of the utility they offered. (Heart Rate is now present in the system as a standalone application.)
The changes also help the OS feel more familiar to iOS users, as the Dock is similar to the iPhone multitasking interface. Just like complications, putting an app in the Dock tells the system to keep it in RAM, enabling instant launch most of the time. I only use four or five apps on the Watch regularly, so I’ve put them all in my Dock. With watchOS 3, my most frequently used apps are readily available and launch in under a second. It’s great. Apps that haven’t been frozen in memory still launch as slowly as ever, obviously.
Again mirroring iPhone, swiping up from the bottom of the clock face reveals a new Control Center panel. It’s cool that they are carrying over the metaphor but the current design of watchOS Control Center is mediocre: it’s just a mess of buttons. I would like to see that cleaned up in future betas.
The Dock replaces the Friends circle as the action that happens when you single press the side button on the watch hardware. In fact, Friends has been removed entirely from watchOS. Messaging your favourite contacts is now handled, logically, by the Messages app. You can still double-click the side button to activate Apple Pay as before.
watchOS 3 also introduces a few new watch faces and I love them. I’m addicted to ‘Numerals’ and ‘Activity Digital’. Thanks to a new edge-to-edge swipe gesture, it’s also really easy to swap between them. I change to the Activity face when I’m consciously thinking about closing my rings for the day. When I’ve hit my daily activity goals, I simply swipe back to the minimalist Numerals face as the fitness information is no longer important to me. It’s so cool how the number moves with the hour hand around the day.
Aesthetically, I’m not a huge fan of Dark Mode. I think it restricts the colour palette for other elements (such as tab bar tint colour) leading to repetitive apps that have no distinctive personality: everyone trends towards dark backgrounds with blue and orange accents. This is especially true if Dark Mode means a theme that is meant to be easy on the eyes at night, not just an appearance style that is predominantly dark. Windows Phone attempts to combat the boringness of black with rich animation and fancy transitions, to some success. Even then, Microsoft is pivoting away from the darkness with recent software revisions, adding more vibrance and bright elements.
There’s also no getting away from the fact that a lot of apps are comprised mainly of full-colour photography feeds, like Facebook and Instagram. Full-colour images look terrible with dark chrome in scrolling lists; by their nature of being photo-realistic, they can’t match the surrounding UI. Dark Mode is crippling for these uses and it just so happens these uses are very common tasks for phones. What I’m saying is, for a lot of apps that are used by actual people, dark interfaces are not a good thing.
Dark Mode also ‘doubles’ the workload on developers and designers. It causes apps to split their resources between light and dark appearances ultimately compromising the beauty of both. I think many apps still look bad with just one colour scheme to consider, following the transition away from skeuomorphism. I believe there’s a lot more work to be done with what we have today before thinking about supporting another branch of the design language. I would be more in favour of a dark iOS if it was the new base UI, replacing the iOS 7 white aesthetic completely.
Ignoring personal preferences and in spite of those issues, I do think Dark Mode has a good chance of happening in the iOS 10 cycle. For one thing, a lot of people want it. I asked on Twitter about iOS 10 feature requests and many people asked for Dark Mode. I’m not sure if people want it because it looks cool or because it helps reduce eye strain at night. If it’s the latter, Apple has already started addressing that issue with Night Shift and I can see them pushing that further with a fully-fledged night UI toggle.
Another factor in Dark Mode’s favour is the looming rumour that the 2017 iPhone will include an OLED display for the first time. In general, OLED devices prefer dark user interfaces, as the screens are incredibly power-efficient when showing black pixels. OLED contrast levels are also very good, so dark themes simply look nicer. The Apple Watch UI is black for this reason; the black backgrounds blend in with the bezel.
Bringing Dark Mode into the ecosystem ahead of the OLED iPhone release would allow third-party apps to start the transition sooner. That being said, I find it difficult to believe that Dark Mode will be present in the iOS 10.0 builds announced next week. The feature is more likely to come in a later iOS 10.x update; I reckon we’ll have another significant iOS feature update mid-cycle, just like iOS 9.3 this year.
Zac has succeeded where I have failed, beating his Move goal for every day in the month just gone. Over a year since Apple Watch’s launch, I would have hoped that I could have managed the same thing at some point … alas laziness. What’s interesting though is that his achievement has actually motivated me to do it too. I was planning to go for the month-long award soon anyway but now I want it doubly so; there’s an implicit social pressure.
Apple’s software could do better to assist here: there is no social element to the Health or Activity apps. It would be cool if they built on the Activity app’s medals system and introduced things like shared leaderboards and achievements. Gamification can be annoying and cheesy, but Apple has the design sophistication to execute it well. Even something small, like a dashboard of live Activity rings from family and friends and the number of steps they have taken so far today, would be incentivising. Nothing too in your face, no push notifications bragging about your social circle’s achievements, no pressure. Just a list that you can look at and see how you compare, if you choose.
It’s certainly a novel direction to take the MacBook line, adding dynamism to a keyboard layout that has remained the same for many years. A lot of Windows laptops include a row of illuminated capacitive buttons, but Apple is going further: it’s essentially replacing the function keys with a (really skinny) OLED touchscreen that can display any arbitrary UI. I think Apple chose OLED for the contrast levels; I can envision how the deep blacks of the screen would look great alongside the piano-black keyboard keys.
It’s not obvious to me how Apple is going to use this secondary display. Because it isn’t the primary display, and because it can’t be a mandatory requirement of using OS X (Apple will still be selling millions of Macs without an OLED accessory bar), I fear it might become an underused gimmick.
As Nintendo fans will know from the Wii U, making interfaces that span multiple screens is tough. Both displays battle for the user’s attention simultaneously, but ultimately one screen naturally monopolises the focus. In the case of the MacBook, the primary canvas is the 15 inch Retina display. Demanding that the laptop user constantly look down is laborious and annoying. The natural laziness of people means most do not want to be nodding dogs; there’s a reason why touch-typing is so popular. Aside from physical strain, juggling multiple displays is simply a lot of information to take in. Creating UI conventions to signal when users need to check their dashboard display is incredibly hard, and putting critical information on the secondary display is a risk if the user simply forgets to check it.
The other end of the spectrum, then, is to keep the OLED screen content pretty much static. Limiting dynamism simplifies the mental load and enforces clear patterns of expectation about when the user is supposed to interact with the accessory display. Perhaps preferences allow for some customisation of what can appear there — the crucial point is that the buttons wouldn’t change passively whilst using OS X.
Although that would remove the problems I enumerated, it is a functionality tradeoff. What I’m describing in the second case is not that far removed from what exists already, i.e. a fixed set of function keys. In fact, it would be a regression in this case: the tactility of actual physical buttons would have been sacrificed. This is why I’m in a quandary. I would be concerned if Apple incorporated a significant new hardware change without a compelling use case to justify its existence.
A lot of people could argue that Force Touch was exactly that: a Mac hardware feature that was (and is) a dud. This rumoured change has more potential to be destructive, though. Force Touch on OS X can simply be ignored with no downside. An OLED button bar that replaces the function keys cannot be ignored; it will have to be used by every new MacBook Pro owner. If it’s bad or mediocre, every customer will be affected.
Google had its I/O conference this week, hosting its presentation of its latest announcements and outlook on what can best be described as a pop-star concert stage. I think the venue was a mistake, but the presentation itself was markedly better than in previous years. Clocking in at two hours, the Google I/O keynote is finally down to an acceptable length. Just a couple of years ago, they would run two three-hour presentations on consecutive days.
One thing they unveiled was a FaceTime competitor called Duo. Specifically, there’s an element which struck a chord with me. When videocalling someone else, the recipient sees a live stream of the caller’s video as it rings. One side of the video call has already begun at the moment of the phone ringing. The other person can then pick up the call to start the two-way video, seamlessly transitioning into the conversation as the video of the person on the other end is already live.
It’s a fantastic streamlining of FaceTime. They also emphasised the instantaneous nature of the protocol, allowing the two participants to communicate immediately after the call is confirmed. FaceTime’s usage model is a lot colder. One person asks to call someone else; the recipient sees the person’s name and a static image. When the call is answered, the video streams attempt to initiate a connection, which involves staring at a Connecting indicator for a few seconds, before the two people can finally see and talk to each other.
The current FaceTime flow is as bad as a traditional phone call, which is basically what FaceTime is (in the same way iMessage is a 1:1 reproduction of SMS transmitted over the Internet). With Duo / Knock Knock, the call has effectively already begun as soon as the phone screen lights up on the receiving end.
Google showed how the caller could signify intent during the time waiting for the other person to respond. The user on the receiving end can pick up context from the Knock Knock video stream, such as where the person is, what they are doing or who they are with. Google showed the potential with examples of people holding up movie tickets, engagement rings or simple facial expressions like happiness or sadness. (That being said, the product video — embedded above — did not do a good job of expressing the possibilities tastefully; it is too cheesy and feels too forced.)
Aside from the speed and practical advantages, it’s also just damn cool to send your face to someone else. Even if the feature turns out to be a gimmick, it encourages more people to make video calls in general, if only for the novelty of how it works. I think it gives a meaningful incentive to pick the video option over audio, though. Even if the recipient declines, you can imply something in those couple of seconds that never would have happened otherwise. It’s almost like a transient Snapchat selfie with the opportunity to commit to a full conversation.
It’s a user experience thing that I hope Apple adopts. There are obvious knee-jerk fears of the dangers of letting people put live video onto someone else’s screen without explicit consent. I think these issues are easily mitigated by decent policy design, such as a (default) preference to only ‘enable Knock Knock for people in my contacts’. Careful attention will have to be given to the interface for callers too, especially early on, to explain what is happening — make it plain that the other person can see what you are doing right now even though you can’t see them yet. These are solvable social and technological problems and the benefits are huge, in my view.
Slight confession: I meant to write this post the same day as the event. I ended up being lazy and didn’t get to it until today. I’m glad I waited though, as it let me focus on what I was actually interested in. Almost subconsciously, my mind has concentrated on a couple of specific things.
Out of Google’s entire keynote, I can easily recall just two announcements: the Instant Apps demos and Knock Knock. Everything else is a vague blur or forgotten. Instant Apps is a technical quagmire with a lot of questions about implementation and utility, so I’m holding off on judgement until it’s more set in stone … although the premise is intriguing. Duo is more concrete, complete with product videos, and made me genuinely excited. Alas, neither announcement has a solid release date. I can’t wait to check out Duo and Knock Knock sometime “later”.
iTunes is already an amalgamation of many different things. With iTunes 12.4, Apple has reinforced its piecemeal design further by reintroducing parts of iTunes 11 without properly considering all the edge-cases and window states. 12.4 adds a sidebar that replaces a popover UI control to manage views like Albums, Artists and Genres.
The sidebar is a resurrection from the days of iTunes 11. I like the sidebar better than the transient popover (OS X has enough screen real estate to allow such affordances) but it hasn’t been thought through. It’s shoddy and incomplete. There are distinct sidebar sources for Albums and Compilations and yet selecting Albums still shows Compilations in the detail view when you scroll down. Some media types do not hide the sidebar but have no sidebar items to choose between (Podcasts). Many media views have no sidebar at all, leading to jarring transitions between tabs, including all of Apple Music.
Again harkening back to iTunes 11, the Media Picker is once again presented as a dropdown menu. In earlier versions of iTunes 12, the switcher for Music, Movies, TV Shows and such was presented as a mini tab bar, with a More button to reveal the rest. I actually prefer the new dropdown for overall usability as it features text labels alongside the glyphs. You can also edit the list to show just the items relevant to your library. However, it now takes two clicks to change views, which is a regression in efficiency. This is obviously frustrating if you context switch a lot, but I don’t mind it — I rarely use iTunes for anything but music.
Most infuriatingly, iTunes has now made Compilations a second-class citizen in the library interface. There is now mandatory filtering that separates normal albums from compilations. To see this for yourself, click Albums in the sidebar and scroll down. Previously, all of the albums and compilations appeared in one grid. With iTunes 12.4, they are sectioned independently. This is so frustrating as much of my library consists of compilations, which no longer participate in the normal album ordering.
As far as I can tell, there is no way to revert to the previous layout whilst maintaining a sort order by artist name. If you want to coalesce them and don’t mind sacrificing Artist ordering, change the View Options to sort by title.
In summary, iTunes continues to suck. It’s held back by its ageing codebase and the necessity for it to be a cross-platform program. A good version of the desktop app probably isn’t going to happen until Apple splits the constituent functions into separate apps, as on iOS with dedicated apps for Music, Movies and more. I look forward to the Music.app revamp in iOS 10 to see an unconstrained representation of Apple’s vision for music software.
I don’t care who the focus group was: Apple isn’t going to give personal analytics and other sensitive data to third parties no matter the circumstances (see: ongoing frictions with the FBI and governments). Apple gives App Analytics reports to developers about the usage of their apps; the information provided is anonymised and vague. Even then, iOS users can still opt out of supplying information that is barely useful and nowhere close to personally identifiable. Apple’s own advertising division, iAd, has been scaled back for similar reasons. Maybe Apple will start publishing data like ‘average playback position’ for episodes, or total number of plays (a more accurate metric than raw download counts). I don’t think there’s any need to worry about invasion of privacy.
Business implications are different. Podcasting has remained an independent affair, surprisingly. Being realistic, Apple is now (reportedly) giving its podcast directory attention because it is being commercialised. Phenomena like the success of Serial have certainly drawn big business into the fray. This would be my best guess as to why Apple has taken an interest after years of maintaining the status quo with its podcast directory.
There’s a possibility Apple’s proactive involvement will be damaging. If I’m right about Apple’s motivation (an influx of large corporations), then there’s a good chance independents will get shafted by whatever policies Apple implements. There’s also a chance that it’s a good thing. It’s not out of the question that Apple will add a storefront, so people can subscribe to shows for a monthly rate. Putting to one side the inevitable 30% cut, an easily accessible Apple-run subscription service could open up a new revenue stream for podcasts. More simply, Apple could also improve its podcast marketing and featured-content efforts, potentially improving discoverability for good — but low-listenership — shows.
You can complain about the App Store for an hour, but at the end of the day it has been a great thing for a lot of people. It created good livings for many (and great livings for a few) who never would have earned them without its existence. There are risks that Apple makes the podcast industry proprietary and closed, but it has the same right as anyone else to do what it wants. There will be winners and there will be losers.
I think it’s way too early to presume Apple’s involvement would be negative. Disruptive, sure, but not destructive. Again, consider the App Store. For all its flaws, you’d be hard-pressed to say it was a bad thing overall.
A nice enhancement of the Content ID system in favour of content creators. Rather than monetisation simply stopping whilst a video is in dispute, the ads stay up but any revenue is held in quarantine until the dispute over that video is resolved. When the ruling on the video’s rights is decided, the money is distributed to the appropriate party. Although the payment is delayed, it’s far better than before, when false claims would simply render channels unprofitable.
Of course, this means nothing if the resolution process is unfair in its judgement and the content creator forfeits the revenue even when their video was legitimate. Hopefully, that doesn’t happen. I haven’t heard from many people who have had bad claims ruled against them, so I think that outcome is rare.
Apple is pushing the services category as a burgeoning part of its business with current and future growth potential. Focusing on and expanding services is a fascinating proposition as generally I’ve considered Apple as a company that sells hardware and bundles services for free. To me, these comments on the earnings call indicate Cook wants to develop services further in a serious way.
If Cook is being sincere, and not merely paying lip service in the middle of a hardware revenue slump, then it has huge implications for product direction. I’m wary that they might stray off the golden path, especially as internet services aren’t exactly something the company has shown it can execute comfortably, but there are potential positive repercussions as well.
There’s a chance Apple dabbles in low-margin products as a result, for example. Selling customers high-margin hardware and expecting high-margin services purchases on top is far less compelling than a Kindle-esque strategy of cheaper hardware dependent upon ecosystem purchases. Arguably, Apple TV is destined to be exactly that: a cheap box with an assumed reliance on Apple subscription services.
On the negative side, I do think this means free iCloud storage will remain crippled for the foreseeable future, with Apple encouraging people onto paid tiers. There’s a chance they bump the free quota slightly (currently 5 GB), but more likely they will boost upsell opportunities for the paid plans. For example, I would not be surprised if Apple soon doubled the $0.99-per-month tier to a 100 GB limit, up from 50 GB today. This is similar to how Apple pushed hardware ASP higher by keeping the 16 GB model around and instead bumping the mid-tier to 64 GB.
Cook referred to Apple Music as Apple’s ‘first’ subscription service. You don’t need to be a seer to expect that more are coming. The prime candidate is something I dubiously dub ‘Apple Video’, the long-rumoured skinny-bundle cable streaming service. Although Apple Video makes most sense in the context of Apple TV, it will certainly be available on every iOS device and Mac, maybe even Android. A video service can also be priced higher than music streaming (I would guess around the $30 price point), which is good news on the revenue growth front.
In the earnings call, it was noted that a big part of Services revenue growth is being driven by the App Store. This is entering dangerous territory for me, where Apple’s motivations are warped too far towards money rather than doing what is best for its developer community, in which I participate. Rumours of paid search are suddenly far more difficult to dismiss.
There are ways that Apple could boost revenue from the App Store that also simultaneously benefit developers and customers. If Apple can increase App Store monetisation for developers, it will see higher returns through the 30% cut. Specifically, Apple would need to increase income generation on monetisation platforms that it controls, like In-App Purchase or the initial upfront price of paid apps.
Avenues like ads can make developers rich, but Apple gets nothing: it’s money that channels through their platform without Apple taking a slice. Apple is backing out of the iAd business completely. Most of the richest developers on the App Store today make a large proportion of their income via advertising, and Apple sees none of it.
If they could foster alternative monetisation strategies that go through their first-party payment systems and make developers switch away from advertising, Apple would be making money where they were previously making nothing. Successfully executing this would reduce the number of ads in apps whilst making Apple more money. That’s a win for customers, developers and Apple. Easy to say, way harder to actually find such monetisation avenues and do it.
Looking at the harsh realities, though, it is extremely difficult to see how any of this adds meaningful revenue to Apple’s balance sheet. If the company wants to power its future growth through its services businesses, it needs to offset billions of dollars of declining hardware revenue. Writing off billion-dollar businesses as small sounds flippant, but for Apple it is true.
Apple Music has 13 million customers paying $10 a month right now. That’s roughly $1.5 billion a year. Apple’s total revenue for 2015 was approximately $230 billion, so Apple Music is teensy-tiny in revenue terms. (No idea on profit; I’d guess around 30% margins.) There is also a cannibalisation factor to account for: people buying Apple Music will, logically, cut spending on iTunes downloads.
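A quick back-of-the-envelope check of those figures, using only the rounded numbers quoted above rather than reported financials:

```python
# Rough estimate of Apple Music's annual revenue versus Apple's total,
# using the figures quoted in the text (not reported financials).
subscribers = 13_000_000        # paying Apple Music customers
monthly_price = 10              # USD per month
apple_2015_revenue = 230e9      # approximate total FY2015 revenue, USD

annual_music_revenue = subscribers * monthly_price * 12
share_of_total = annual_music_revenue / apple_2015_revenue

print(f"Apple Music: ${annual_music_revenue / 1e9:.2f}B per year")
print(f"Share of total revenue: {share_of_total:.1%}")
```

That works out to about $1.56 billion a year, or well under one percent of Apple's 2015 revenue, which is the point: even a billion-dollar service barely registers.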
Perhaps the way Apple drives non-negligible revenue growth is through a combination of many different things. For the last five years, Apple has been the iPhone. In the future, maybe Apple is more of an ensemble affair with many different streams contributing to its overall numbers. No single service will rival its hardware income, but a grouping of App Store income, Apple Music, (rumoured) Apple Video, iCloud tiers and whatever else could.
There’s nothing wrong with diversification per se but it is a change to how the company used to operate. Around the launch of iPad mini, there’s an obvious breakpoint in company strategy where they expanded from a couple of flagships to a myriad of variants in each category. The days of Apple’s products ‘all fitting on one table’ are long gone. If services do grow significantly, the metaphor really breaks down as its products would be intangibles.
You should read the list and watch the video before going any further. I could post this with a comment of agreement and say everything Viticci suggests in the MacStories concept video is useful and Apple should add it all to iOS. That’s a boring (and obvious) thing to do, so I’ll spare the words. As iOS is on an endless cycle of feature releases, almost everything in the article will probably come to pass eventually.
This is one of the best iOS feature concepts I’ve watched, ever. It offers realistic ideas about how iOS could and can improve with interface designs that are nice to look at and fit well into the existing metaphors of the system.
The Control Center customisability is great, utilising the same jiggle indicator as the Home Screen to show mutability. Expanding the Messages app to handle more rich media types is an obvious future direction and the video does the idea justice with some cute UI work. Changes to the iCloud Drive app and Document Picker are well-warranted and the proposed layout is a great balance of Finder-esque power with overall iOS simplicity.
Their redesign of the side-by-side multitasking app switcher is also nicely considered, with a higher priority given to recent apps and an affordance for user-defined stickied favourite apps. I don’t like how they have chosen to activate a drag-and-drop mode, by exposing a drag handle alongside the Cut-Copy-Paste menu, but I don’t have a better answer to hand so it’s hard to genuinely critique it.
I love the subtle bounce animation the video uses for popovers, which isn’t even mentioned explicitly; a nice quick effect to draw attention to the modal view. This is what iOS 6 had and what iOS 7 and later need: bits of delightful whimsy that don’t get in the way of what you were actually trying to accomplish.
All that being said, the realities of making a video mockup versus actually creating the feature as an Apple engineer are different things. When you are making a video, each feature is about the same amount of work: think up an idea, make some assets and glyphs, incorporate that into a series of moving images.
I’m not claiming it’s easy to do (I couldn’t make these mockups); I’m saying each item can be treated with equal priority and equal importance. Implementing this stuff in a working, shipping version of iOS is very different. I’m certain a lot of it, as is, would have usability issues when actually built; there are a lot of edge-case issues that pop up in development that don’t come through in static screenshots and concept videos.
Different features have wildly different implementation requirements. Making rich Message previews for URLs and Notes is probably easier than changing the Apple Music machine-learning algorithms to be more contextually relevant. Similarly, making Message previews for links is easier than building a framework for all third parties to integrate into message bubbles with custom content and buttons.
Dark Theme is a great example of a feature that is easy to visualise in a couple of Photoshopped screenshots (MacStories’ video depicts a dark version of Messages, Calendar and Music) but actually doing it well at an OS level involves many more challenges than simply turning a white background black. There needs to be a lot of planning and thought for how the settings work, whether there are automatic options for sunrise/sunset or brightness, Control Center overrides, handling timezone changes, etcetera etcetera. Simply adding ‘night’ themes to every system app would be a huge undertaking for Apple’s design and engineering departments.
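To give a flavour of why even the scheduling piece alone is non-trivial, here is a minimal sketch of the kind of decision logic an automatic dark theme would need. Everything here is hypothetical — the function name, the rules, and the fixed sunrise/sunset times (which in reality vary by location and date) are my own illustration, not anything Apple has described:

```python
from datetime import datetime, time

# Hypothetical fixed sunrise/sunset; a real implementation would compute
# these from the user's location and the current date.
SUNRISE = time(6, 30)
SUNSET = time(18, 30)

def theme_for(now, user_override=None):
    """Pick 'light' or 'dark' for a given moment.

    An explicit override (e.g. a Control Center toggle) always wins
    over the automatic schedule.
    """
    if user_override in ("light", "dark"):
        return user_override
    # Automatic rule: light between sunrise and sunset, dark otherwise.
    if SUNRISE <= now.time() < SUNSET:
        return "light"
    return "dark"

print(theme_for(datetime(2016, 5, 20, 12, 0)))           # daytime
print(theme_for(datetime(2016, 5, 20, 23, 0)))           # night
print(theme_for(datetime(2016, 5, 20, 23, 0), "light"))  # override wins
```

And this toy version ignores everything hard: timezone changes mid-journey, the brightness-based trigger, per-app opt-outs, and animating the transition without jarring the user — exactly the planning burden the paragraph above describes.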
The varying workload required is ultimately what determines what Apple tackles, and when, from the smorgasbord of potential features. Looking realistically at the major items on the list, my guess is iOS 10 will include a more flexible Control Center, a better multitasking app switcher, read receipts per Messages conversation, and, as an outside bet, drag-and-drop between side-by-side apps. Everything else is probably out of scope for this year.
This isn’t a criticism of Viticci’s work (he’s not intimating this is simple stuff), but many people watch these videos and believe as much, with a sentiment like ‘this guy on YouTube did it, why doesn’t Apple?’. The same applies to written feature-request posts, of course, but there’s something about the visceral quality of video that reinforces that feeling more than words on a page.