On the sapphire element, obviously it is going to be more expensive, to some degree, than an established commodity (reinforced glass). However, I don’t think you can know how much more expensive it will be to produce when Apple is the one doubling the worldwide output of sapphire to use solely for itself.
On the PPI jump, I get the feeling that an increase of 100 PPI, while impressive as a technical specification, isn’t going to translate into a meaningful real-world difference. Modern Android phones have displays with PPI in excess of 400, and I can’t tell the difference, pixel-density-wise, between those and the iPhone’s 326 PPI display.
So why go to such an extreme then? I think it is merely a by-product of Apple wanting to go above the resolution of full HD, 1920x1080. What I’m saying is I don’t think Apple is increasing the PPI so they can advertise a better PPI — this characteristic isn’t going to be called ‘Retina+’ or anything.
What Apple can advertise, though, is the ability to watch TV shows and films at full resolution. If you are making the screen bigger, you have to change the resolution anyway, so you might as well kill two birds with one stone. Putting 1920x1080 on a 5.5 inch screen works out at roughly 400 PPI on its own. It’s not that much more of a leap to land at 440.
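The arithmetic behind those figures is simple enough to sanity-check yourself. Here’s a quick Swift sketch (the 5.5 inch diagonal is the rumoured size, not anything confirmed):

```swift
import Foundation

/// Pixels per inch, given a panel's resolution and diagonal size.
func ppi(width: Double, height: Double, diagonalInches: Double) -> Double {
    let diagonalPixels = (width * width + height * height).squareRoot()
    return diagonalPixels / diagonalInches
}

print(ppi(width: 1136, height: 640, diagonalInches: 4.0))  // ≈ 326, today's iPhone display
print(ppi(width: 1920, height: 1080, diagonalInches: 5.5)) // ≈ 400, a 1080p 5.5 inch panel
```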
Even if the app turns out to be a massive flop, you have to admire the engineering talent here. Prado says that Paper is one of Facebook’s longest-running projects. They truly doubled down on this — somehow managing to overcome Facebook’s ‘ship quickly’ culture.
At a surface level, the idea is appealing, as manufacturers search for ways to prolong battery life in spite of the fact that advancements in battery capacity are nowhere to be seen. However, solar panels need to be relatively big and exposed to the sun to work. This doesn’t really gel with the form factor of a phone, which is inherently small and stowed in a pocket when not in use.
Perhaps the technology will be more applicable as part of an iPad or MacBook, where the potential surface area of a solar panel is far larger, but I just can’t see it for the iPhone.
The difference is, with the iPads and iPhones, a straight doubling of the number of pixels across the same physical space had benefits beyond manufacturing efficiency. For iOS, a pixel-doubled display meant better app compatibility. Pre-existing apps would just run in a 2x mode, and developers only had to supply new assets to take advantage of the ‘Retina’ fidelity.
If Apple had chosen any other resolution for the iPhone 4 or iPad 3, developers would have had to redesign their apps from scratch, and the compatibility mode would have comprised horizontal and vertical letterboxing.
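To make the compatibility mechanism concrete, here is a small illustrative sketch of how points map onto pixels in UIKit (nothing here is specific to any rumoured device):

```swift
import UIKit

// Apps lay out in points; the system maps points onto physical pixels via the scale factor.
let screen = UIScreen.main
let pointSize = screen.bounds.size   // 320x480 points on an iPhone 4, identical to the 3GS
let scale = screen.scale             // 2.0 on a Retina display
let pixelSize = CGSize(width: pointSize.width * scale,
                       height: pointSize.height * scale) // 640x960 physical pixels

// Existing layouts keep working untouched; developers only add @2x image
// assets to take advantage of the extra fidelity.
```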
For OS X, unlike iOS, pixel-doubling is not a requirement. Due to the windowed nature of OS X, apps can run on devices with a whole variety of screen resolutions. There may be cost savings for Apple (by using iPad display sheets at a different size), but I can’t imagine that the savings are so significant as to dictate Apple’s course.
Ignoring the above issues, I still think extrapolating the Air’s display from the iPad’s display is a baseless argument. Apple could just as easily use the 15 inch MacBook Pro’s display (which has a DPI of 216), or the 13 inch MacBook Pro’s display (which has a DPI of 226), or something else entirely. Each of those options produces a different end result for an ~11 inch Air. The point is, you can’t tell.
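To show just how divergent the guesses get, here is a rough sketch working backwards from panel density to native resolution. The 11.6 inch diagonal and 16:9 aspect ratio are my assumptions (matching today’s 11 inch Air), not anything Apple has signalled:

```swift
import Foundation

/// Rough native resolution for a panel of a given density, diagonal and aspect ratio.
func resolution(ppi: Double, diagonalInches: Double,
                aspectWidth: Double = 16, aspectHeight: Double = 9) -> (width: Int, height: Int) {
    let aspectDiagonal = (aspectWidth * aspectWidth + aspectHeight * aspectHeight).squareRoot()
    let diagonalPixels = ppi * diagonalInches
    return (width: Int((diagonalPixels * aspectWidth / aspectDiagonal).rounded()),
            height: Int((diagonalPixels * aspectHeight / aspectDiagonal).rounded()))
}

print(resolution(ppi: 216, diagonalInches: 11.6)) // roughly 2184 x 1228
print(resolution(ppi: 226, diagonalInches: 11.6)) // roughly 2285 x 1285
```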
I’m not certain what ‘core’ means, but that’s really good progress nonetheless. The difference between being able to say iOS has most of the content versus all of the content cannot be overstated.
If the iOS device lineup consisted only of iPods and iPhones, I think Apple’s mobile OS could continue quite happily as a siloed one-application-at-a-time environment for many more years.
However, the iPad puts a spanner in the works. Everyone is realising that, for the iPad lineup to keep expanding, iOS has to grow up. The decision to run the iPad on iOS brought all the benefits of a lightweight, power-efficient core, but it also brought the less-desirable characteristics. Most of these deficiencies don’t matter on a phone but become clear downsides on a tablet.
iPad hardware is outstripping the capabilities of the OS it runs … already. With the potential of an even bigger-screened slate on the horizon, in the form of an ‘iPad Pro’, the need to develop iOS’s multitasking and productivity workflows is more important than ever.
That said, users do not want to sacrifice iOS’s trademark intuitive simplicity either. That is why people love iPads and iPhones as much as they do. “Professional” does not mean complicated.
Therefore, any improvements Apple makes to iOS need to satisfy both of these criteria, to be both straightforward and advanced. It sounds paradoxical, but what realistically can be done?
Windowing is the response everyone wants to give. Let two apps live side by side onscreen. Done. Of course, it’s not that easy. On a desktop, windowing is implemented in a way that adds hierarchy, because windows can stack and occlude other windows. I think it’s clear to pretty much everyone by now that desktop-style window management is not desirable for touch interfaces. Resizing is just too fiddly — Samsung proves this point perfectly with its latest TouchWiz stuff.
With Metro, Microsoft enforces constraints on how apps can appear side-by-side. The result is users can quickly “snap” a secondary app as a sidebar, taking up about a third of the display’s width, from anywhere in the UI with just a quick swipe from the edge of the screen. Not only easy, but understandable.
I think Apple could implement this limited form of windowing in a way that builds upon the established platform that already exists. I mocked up what this would look like in practice, but let me elaborate on the technical details a bit further. With the iPhone 5, Apple stretched the iPhone screen resolution vertically by 176 pixels. Developers were forced to invest time making their apps’ layouts vertically flexible so they looked right on the iPhone 5. Effectively, Apple has seeded the app ecosystem to respond to changing vertical screen heights.
So, if Apple set the width of the sidebar panel to the width of the current lineup of iPhones (640 pixels), the amount of additional work developers would have to do to adapt to the new system would be minimal, because apps are already used to living in environments that are 640 pixels wide and have flexible heights. Essentially, when docked in the sidebar, iPad apps would render like elongated iPhone apps. The design of most iPhone apps lends itself well to tall viewports. Most apps are navbar, tableview, toolbar. Just increase the height of the tableview indefinitely to fill the additional space.
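As a rough illustration of why the developer cost would be low, here is that archetypal navbar/tableview/toolbar structure in plain UIKit. The ‘sidebar’ hosting at the end is purely hypothetical on my part; no such API exists:

```swift
import UIKit

// The archetypal iPhone app structure: navigation bar, table view, toolbar.
// Because the table view stretches to fill whatever height it is given,
// the same layout works in a taller, iPad-height viewport without changes.
final class FeedViewController: UITableViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        title = "Feed"
        navigationController?.isToolbarHidden = false
        tableView.register(UITableViewCell.self, forCellReuseIdentifier: "Cell")
    }

    override func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int { 50 }

    override func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
        let cell = tableView.dequeueReusableCell(withIdentifier: "Cell", for: indexPath)
        cell.textLabel?.text = "Row \(indexPath.row)"
        return cell
    }
}

// Hypothetical: the system hosts this at a 320 point width (640 pixels at 2x)
// and whatever height the sidebar region provides.
let sidebar = UINavigationController(rootViewController: FeedViewController())
```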
Taking this ‘panelling’ concept in a slightly different direction, iOS could do with some way to open apps in a more transient manner. With iOS 7, if you want to check Tweetbot, you have to make a concerted choice to leave what you are doing and switch. For things like Twitter, that doesn’t really make sense a lot of the time. I’d love to be able to peek at an app over the top of whatever I’m currently doing. Imagine something similar to how iPad Mail behaves in portrait, where the message list can be popped out with a swipe and dismissed just as quickly, without losing the context of your current actions completely. This kind of approach is better suited to the 9.7 inch iPad form factor we have today, as the system doesn’t need to dedicate a portion of the screen to a secondary app — it just shows and hides over the top of the current primary app.
Multitasking isn’t the only problem to tackle, of course. Ideally, iOS apps should be able to coexist and cooperate with each other. Popovers are an excellent way to expose this in UI, because of their self-contained nature. Apple already does this with its stock apps: presenting a picker for photos, contacts or events in a popover is already common practice on the iPad. Apple just needs to extend this to support third-party data sources. Developers could provide UI that other developers can present to users as a modal popover, without waking up the entire app. XPC would ensure that data transfer between applications remains secure.
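The stock-app half of this already exists in UIKit today. Presenting the system photo picker in a popover, anchored to a bar button, takes only a few lines; the hypothetical part is letting a third-party app vend the popover’s content instead (sketched in the comments):

```swift
import UIKit

extension UIViewController {
    // Present the stock photo picker in a self-contained popover,
    // anchored to a bar button item, exactly as iPad apps commonly do.
    func showPhotoPicker(from item: UIBarButtonItem) {
        let picker = UIImagePickerController()
        picker.sourceType = .photoLibrary
        picker.modalPresentationStyle = .popover
        picker.popoverPresentationController?.barButtonItem = item
        present(picker, animated: true)
        // Hypothetical extension: the popover's content view controller is vended
        // by another app and runs out of process (over XPC), so the host app never
        // sees more data than the user explicitly picks.
    }
}
```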
Apps should also have the capability to be ‘faceless’, so that other apps can query them for data without needing any intermediary UI. This would enable apps to draw on information available in other apps without pushing additional UI. For example, GarageBand could import sound clips from apps like djay or Animoog in addition to the Music app. Similarly, a word processor could retrieve definitions from the user’s preferred dictionary app rather than stick to whatever the developer bundled with the app.
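To be clear, no such API exists. Purely as an illustration of the shape a ‘faceless’ query interface might take, here is a hypothetical Swift protocol (every name below is invented):

```swift
import Foundation

// Hypothetical, invented API: a 'faceless' data source another app can query
// without any of the provider's UI appearing.
protocol DefinitionProviding {
    /// Return a definition for the given term, or nil if unknown. Read-only by design.
    func definition(for term: String) -> String?
}

// A word processor could then ask the user's preferred dictionary app rather
// than whatever dictionary the developer bundled:
func showDefinition(of word: String, using provider: DefinitionProviding) {
    if let text = provider.definition(for: word) {
        print("\(word): \(text)")
    }
}
```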
The notion of such deep access will undoubtedly sprout fear about data corruption from external sources, malicious or otherwise. However, this system of information exchange does not have to work in both directions. Read-only access alone would be enough to mature the iOS platform significantly.
In many ways, this system would end up being more sophisticated than what is available on OS X. The technologies that will enable this stuff to become reality exist in iOS today, such as remote view controllers, and are already used by Apple extensively for its own apps. Challenges arise when scaling up for third-party use; questions such as ‘which app gets priority for photo management?’ need answering.
In spite of the complexity, I strongly believe progress in this area is necessary … to make up for the shortcomings of a largely siloed OS. OS X doesn’t need this stuff because everything can flow through a centralised, user-accessible filesystem, which iOS obviously lacks.
Having something that can analyse blood without cutting into the skin is a hard concept to wrap my head around. Having that strand of biological analysis in a smartwatch is equally difficult to visualise. How would it be marketed for one thing?
It’s easy to take a comical stance in light of this report, play the ‘Apple is doomed’ sarcasm card and move on. However, I think it’s important to remember that the US has never been a market where Apple has struggled. The proportion of iPhones sold, relative to all phones sold in the US, has grown consistently from the start.
The people who genuinely believe ‘Apple is doomed’ quite rightly pinpoint other regions as the key battlegrounds for the iPhone going forward; places like India and China.
This ad has spurred quite a controversy over the weekend. It’s certainly divisive. Personally, I think the ad is okay but nothing spectacular. It is really moody and very intense but those characteristics don’t necessarily make a good ad, a trap I feel too many people fall into.
That aside, what caught my eye in particular was the accompanying micro-site for the diving expedition. I noticed Apple had put social sharing buttons for Twitter and Facebook at the bottom of the page.
This gave me an idea. I searched Twitter for the string Apple suggests (“Scuba divers are taking iPad underwater to perform research and save our coral reefs”) to gauge how well the campaign was being received. As of right now, only four people have actually tweeted the page. Make of that what you will.
What’s stopping people from grabbing a coffee and a biscuit on the way to work and then just leaving, with a total cost of 6p? Even if you take an hour to enjoy your coffee, it’s still cheaper than any other cafe in the city.