How Many Subscribers Does Apple Have Exactly?


“We are happy to report that we had an all-time revenue record in Services during the June quarter, driven by over 1 billion paid subscriptions, and we saw continued strength in emerging markets thanks to robust sales of iPhone,” said Tim Cook, Apple’s CEO.

Apple’s hardware sales growth has been negligible-to-flat for several quarters in a row. More than ever, their quarterly financial statements depend on the Services business to show a record result. This quarter, Apple reported 8% revenue growth for Services, topping $21.2 billion for the quarter, and announced it had reached 1 billion paid subscriptions on its books. They also said the installed base of active devices hit an all-time record, without disclosing a specific figure. (Apple has repeatedly argued that a bigger install base means more customers will engage with services over time, and so far that has held up.)

But that’s about all Apple will tell us about the performance of Services. It hasn’t reported Apple Music subscriber numbers since 2019, nor has it ever given hard figures about the performance of Apple TV+, Apple Arcade, News, iCloud, or Apple One in general. A billion subscriptions is a huge headline figure, but it obscures the real story of what most people think of when you say ‘Apple services’. Services includes the App Store, and so much of that 1 billion total comprises In-App Purchase subscriptions to third-party apps. Although we can never know for sure, because Apple won’t tell us, it follows that the majority of Services revenue growth hails from the 15-30% commission Apple collects on those in-app purchase transactions.

If I were a financial investor, I would be growing increasingly dissatisfied with the murkiness of the Services business. For Apple’s flagship growth unit, it’s really hard to get a read on its performance. The golden goose of Apple’s stronghold on the App Store is constantly under threat from regulation, but we can’t measure the potential impact on Services revenue. The success of Apple’s content services is a hedge against the risk of App Store commissions drying up, but we don’t know anything about the state of those offerings — we can’t even say for sure that they are successful.

Apple stopped reporting unit sales numbers for its hardware products in 2018, but it still reports revenue per division. Rather than a single ‘Hardware’ revenue total, Apple reports quarterly revenue breakdowns for iPhone, Mac, iPad, and Wearables, which gives some visibility into how each product line is doing over time. In contrast, Services is completely opaque. There is no breakdown provided, just one total revenue figure. I am surprised there isn’t more pressure here from Wall Street for Apple to reveal more details. A few years ago, Services was small enough that it didn’t really make sense to split it out. These days, though, Services is so large that it is bigger than the Mac, iPad and Wearables units combined in revenue terms. If you hypothetically split out the App Store as a sub-unit, there’s a decent chance it alone would be larger than the iPad on the income statement.

At the very least, I think it’s time for Apple to be more transparent about that subscription total. How many of the 1 billion subscriptions are for Apple’s services, versus third-party subscriptions? And how many unique users do those 1 billion subscriptions represent? It’s not even clear to me how the total is calculated. A user who pays for iCloud and Apple Music presumably counts as two subscriptions, but as a subscriber to Apple One Premier, do I count as one subscription, or six?

iOS 17 Dramatically Improves Autocorrect With Smarter Algorithms And Smarter UI

During the WWDC announcement, Apple focused on how the keyboard autocorrect system in iOS 17 is powered by machine learning based on ‘transformer’ neural networks, with the aim of enhancing accuracy and making the corrections feel more personalised to each user.

Having used iOS 17 for a month so far, I can definitely feel the difference. The corrections are better. It feels like it knows what you meant to type far more than any previous version of the software. It also seems more resilient to typing slang. I noticed it can cope with common texting lingo reductions like ‘wut’, opting to leave them alone instead of insisting on correcting them to the nearest word it finds in the dictionary. In a very unscientific test, I tried typing ‘wut’ on an iOS 16 phone — and it kept changing it to ‘wit’. Overall, the iOS 17 engine is more useful and less obstructive.

But algorithm improvements are only part of the story. Obviously, it still won’t get it right all the time. But in those cases, the experience of managing autocorrect is also improved through a superior UI. When the system does make a mistake, it is far less punishing as the interface now gives you a way to quickly revert autocorrect changes. As you type, any corrected words are briefly underlined in blue. This means you can more easily notice when autocorrect changed something, and address it immediately, instead of getting through your whole message and only then spotting an error. Tapping on the underlined word shows a popup menu that lets you undo to exactly what you typed, as well as offering some alternative suggestions to pick from. Word predictions are also much more useful, showing inline as you type. Just hit the spacebar to accept the suggestion and keep typing your message.

The smarter algorithms and smarter UI come together in a very tangible way to offer a meaningfully better experience.

The Apple Vision Non-Pro Will Still Be Very Expensive

The Apple Vision Pro headset is wowing everyone who gets a chance to try it, with universally positive impressions being published by press who got the 30-minute demo experience. In those articles, I’ve noticed a common refrain that I don’t think is accurate. The presented idea is that whilst this particular model is arriving “early next year” as something too expensive for the ordinary person, Apple is already working on the second-generation non-Pro model, expected in 2025, and that will solve the price problem and make the headset accessible for all.

In short, I think that’s way too optimistic. The visionOS platform has long-term potential to be a mass consumer device, but we are talking long-term. The second-generation Apple Vision will be just as out of reach to most as the Vision Pro is today. With the Vision Pro at $3500, a stripped-down cheaper model is still going to be very pricey. I’m anticipating somewhere around the $2000 mark. For comparison, that is around $1000 more than the most expensive Quest headset; Meta launched the Quest Pro as a $1500 device in October 2022 and swiftly dropped it down to $999 in March.

The Meta Quest Pro does not sell in volumes anywhere near levels that could be deemed ‘mass consumer’. Meta has sold tens of millions of headsets over the product line’s lifetime, but that stat is dominated by sales of the entry-level Quests, which retail under $500 — and mostly target the semi-casual VR gaming market.

I’m not sure the Apple Vision product line will ever reach prices that low, at least as Apple envisions it (pun intended) today as an augmented reality spatial computer. The EyeSight feature alone must add hundreds of dollars in cost to the bill of materials — between the curved lenticular front-facing OLED display and the sensors needed to drive it. Without considering anything else, the existence of EyeSight means the lowest I can ever see an Apple headset going is $1500 — and that’s not a near-term thing, that’s many years off.

Putting aside the myriad other drawbacks of the device’s form factor given current technological constraints, the price alone means I am very bearish on the mass consumer prospects of the Apple Vision product line. I’d wager it will take at least three generations of hardware evolution to get to something appealing to the mainstream.

It took Apple one year to sell ten million iPhones; I wouldn’t expect Apple to achieve ten million visionOS unit sales until 2027-2028. To reiterate, that’s a five-year timescale. They are playing a very long game.

Final Cut Pro And Logic Pro Announced For iPad


Apple today unveiled Final Cut Pro and Logic Pro for iPad. Video and music creators can now unleash their creativity in new ways that are only possible on iPad. Final Cut Pro and Logic Pro for iPad bring all-new touch interfaces that allow users to enhance their workflows with the immediacy and intuitiveness of Multi-Touch. Final Cut Pro for iPad introduces a powerful set of tools for video creators to record, edit, finish, and share, all from one portable device. Logic Pro for iPad puts the power of professional music creation in the hands of the creator — no matter where they are — with a complete collection of sophisticated tools for songwriting, beat making, recording, editing, and mixing. Final Cut Pro and Logic Pro for iPad will be available on the App Store as subscriptions starting Tuesday, May 23.

It’s been a while since Apple released software with the level of craft and care on display here. Without even using the apps, the screenshots stand on their own as an impressive feat. I love how these apps are sophisticated in scope whilst still highly accommodating to touch input. A fair few ‘pro’ apps that have come to iPad in recent years just assume the user is working with an attached keyboard and mouse. They basically give up on the touchscreen part of the tablet form factor, because it’s easier to port their desktop app that way. No such shortcuts have been taken here. You could ably use Final Cut and Logic with just your finger on an iPad screen. I love to see it.

When these apps ship in a couple of weeks’ time, there will immediately be a laundry list of complaints from pro users about missing features and reasons why these iPad apps can’t replace their Mac workflow; many of those reasons will be the fault of the platform itself, like file management or access to plugins and I/O. Those negative headlines will inevitably happen, but I don’t think it matters much. This is Apple seriously putting its stake in the ground, and some people will be able to use these apps for real right out of the gate. Apple can and will keep chipping away at solving the outstanding problems to capture more and more use cases.

Apple has been crying wolf about the iPad as a productivity machine for far too long. You can’t deny that this announcement is a great start towards finally making good on that promise.

Humane Unveils Its Wearable Device For The First Time

Humane, the secretive startup founded by ex-Apple software design chief Imran Chaudhri, finally went public last week, with Chaudhri showing off their device for the first time at the TED conference. I’ve seen a recording of the 15-minute presentation, which unfortunately is yet to be officially published online.

Chaudhri’s talk is centred on the premise that technology (mainly through the smartphone) has invaded all of our lives too much. The idea is that personalised artificial intelligence can be used to dramatically change how we interact with technology. Rather than proactively opening an app to do something, AI can be an ambient thing that is there when you need it, works in the background of your life, and mostly stays out of your way. To make this a reality, Humane is introducing a new product: a wearable that resembles a rectangular pin badge. Chaudhri is wearing one on his jacket pocket during the presentation. He sets out the vision of their product as something that is “screenless, seamless and sensing”.

Chaudhri demonstrates the unobtrusive utility of their device by asking it ‘Where can I find a gift for my wife before I have to leave tomorrow?’. The badge audibly responds with a suggestion of a nearby shopping district. It’s a cool demo in that it gives a more useful, contextual reply than a typical voice assistant like Siri would give today to that same query, with the Humane system clearly being infused with large language model smarts. (Assuming the demo is legit and not a set of scripted canned responses, of course.)

However, I do not see how that demo justifies the form factor of a clip-on screenless badge. In fact, if I wanted to actually go to said shopping district, I am left wanting a screen to visualise the area on a map and show me directions. Inherently, then, I think that a phone with a smarter inbuilt assistant supersedes the Humane product, as does a smartwatch for that matter. Watches have screens, but I’d argue they are just as seamless and subtle as a wearable on the front of your jacket.

Humane’s counter to the visual information problem seems to be the inclusion of a short-throw projector in the badge. This allows the device to beam text and images onto a nearby surface. If you are standing upright, away from a table, this means holding up your hand awkwardly in front of you so the badge can project stuff onto it. So, despite the screenless pitch, what they have essentially created is another screen after all; one with unstable reproduction (thanks to your naturally shaky arm/body), relatively low colour fidelity and resolution, and uncomfortable ergonomics.

We don’t have much else to go on in terms of technical specification, but Chaudhri did stress that the device is meant to be used wholly independently; ‘you don’t need to carry a phone anymore’ was never said outright, but it was certainly implied. Maybe there’s room for another wearable accessory in our lives, and if Humane had positioned their product in that vein, I would be far less sceptical. I am not onboard with the presented vision that they are pioneering the primary future of personal computing.

Some Apple Employees Unconvinced By Headset’s Purpose

The New York Times:

As the company prepares to introduce the headset in June, enthusiasm at Apple has given way to skepticism, said eight current and former employees, who requested anonymity because of Apple’s policies against speaking about future products. There are concerns about the device’s roughly $3,000 price, doubts about its utility and worries about its unproven market. That dissension has been a surprising change inside a company where employees have built devices — from the iPod to the Apple Watch — with the single-mindedness of a moon mission.

Apple’s recent history of new product category launches comprises major, culturally impactful events: AirPods, Apple Watch, iPad, and of course — the pinnacle of the bunch — the iPhone. The Watch didn’t set the world on fire immediately, but I think it still belongs in that company, racking up tens of millions of sales within two years. AirPods took a while to ramp up too.

The rumoured headset product is simply not going to meet that same level of appeal, but is that necessarily a bad thing? It depends. If Apple presents it as the next big thing, laden with superlatives, then it will backfire. If it approaches its introduction in a more subdued fashion, in a similar vein to the launch of something like the Pro Display XDR, then it’s probably fine. Don’t set unrealistic expectations and people will not be disappointed. Apple Reality Pro is the start of a long journey, and the billions of dollars of research and development will eventually culminate in something monumental. Just not yet.

Being active in the augmented reality space is clearly important, and Apple has signalled as much with six major versions of ARKit under its belt already. The v1.0 headset hardware is the next step on the journey. In the best case, it will establish Apple as a market leader in the space, evangelise to developers and kickstart an ecosystem. It might make some inroads in the enterprise. I don’t think many ordinary people will get on board. The rumoured second-generation model has a better shot, but even that will probably be priced above $1000. But maybe by the time that second-gen comes out, Apple and developers noodling with the first-gen might have figured out some killer app use cases that will allow that price to be somewhat justified. If not, no biggie. There’s always the third-gen. And after that ships, the state of the art of technology will hopefully be closing in on making the ideal form factor — thin and light glasses — viable. And when that happens, Apple will be ready.

Eddy Cue once said that if Apple only did things that were as big as the iPhone, they would never release another product. Back when Apple was far less flush with cash, it had to make an immediate splash to survive, let alone thrive. Nowadays, the company doesn’t have that pressure, and it’s implausible that it could one-up itself forever. Sometimes it makes sense to start small.

The Big HomePod Is Back

Homes are varied and complicated and individual to each person. They present a myriad of diverse problems that simply cannot be solved by a one-size-fits-all product. As such, it was competitively unsustainable for Apple to only have one smart speaker on sale, in the form of the HomePod mini. It doesn’t necessarily need a product lineup as diverse as what Amazon has cooked up with the Echo ecosystem, but it does need a lineup: an ecosystem of products that can span price points and use cases.

Enter, HomePod (second-generation). The much-beloved original HomePod is back, in almost its original form, for when you want the best possible sound in a stylish standalone desktop speaker. It’s a testament to how good the HomePod was in 2018 that Apple can get away with bringing back almost the exact same product five years on. (The lack of advancement simultaneously speaks volumes about Apple’s wavering interest in competing in the smart home market.) Questions of commitment aside, Apple now has the HomePod mini for bedrooms and kitchens, and the HomePod for spaces where you really want to enjoy listening to music, like in the main lounge or living room of the house.

Two SKUs is still not enough, but it’s a start. Next up, I think, should probably be something with a screen. Apple is reportedly working on exactly that, but we might not see it materialise for at least another eighteen months.

Apple Working On Touchscreen Mac Laptops


But rivals have increasingly added touch screens to personal computers, putting pressure on Apple to do the same. A Mac resurgence in recent years also has made the business a bigger moneymaker than the iPad – and the company wants to keep its computer line-up as compelling as possible.

Based on current internal deliberations, Apple could launch its first touch-screen Mac in 2025 as part of a larger update to the MacBook Pro, according to the people, who asked not to be identified because the plans are private. The current work calls for Apple’s first touch-screen MacBook Pro to retain a traditional laptop design, including a standard trackpad and keyboard. But the laptop’s screen would support touch input and gestures – just like an iPhone or iPad. Over time, Apple could expand touch support to more of its Mac models.

I don’t believe many laptop buyers actively seek out touch as a feature, but touchscreen Windows laptops are popular just because they are so pervasive in the marketplace. Of all the Windows laptops peddled in retail stores, I’d wager half have touch screens on them. So, people buy them. And when they buy them, it turns out, the touch screens actually get used.

All the time, I see people swipe up and down on their vertical laptop screens to navigate webpages and zoom into photos with a pinch gesture. The ergonomics of this are naturally poor. Stretching your arm out forwards to reach the laptop screen quickly becomes uncomfortable. And yet, people still do it frequently. The touch screen is used as an accessory to primary mouse input. They swipe around a bit, then they go back to the mouse. They read a screenful of content, then they swipe to the next page, and put their arm back down. It’s a surprisingly natural, almost subconscious thing to do.

Apple has trained a generation on the expectation that screens respond to touch input, thanks to the popularity of the iPhone and iPad. From the perspective of the average user, the MacBook is the outlier here: why doesn’t touching the screen work?

In terms of implementation, it is not the Apple way to introduce a touch screen without a touch screen OS. Making macOS truly designed for touch is a huge undertaking, though, and doing that without compromising the mouse-keyboard experience is even harder. I’m not sure Apple has the bandwidth for it. However, given what we just said about touch on Windows laptops being secondary rather than primary input, maybe Apple can get away with doing very little. As Windows has demonstrated, adding touch to laptops does not necessitate a major reworking of the desktop UI. A deeper rework would be nice — better, even — but it’s not required for users to be happy.

MLS Season Pass Pricing

It’s really cool that Apple found a sports league with an unencumbered portfolio of rights, making it possible to strike a universal, first-of-its-kind, all-in streaming package with worldwide availability and no restrictions or blackouts. Fair credit to MLS too: the league thought ahead and purposefully organised its partners to make this a possibility; clubs were instructed to ensure all existing broadcast rights deals expired at the end of the 2022 season, so the league could present a comprehensive unified offering to a streaming service. Apple bought in.

Rather than have games start at different times on different days spread across various channels, it will now be possible to pay for one subscription and watch every game live, or on-demand. The synchronisation of start times (7.30 pm local, Saturdays or Wednesdays) also allows Apple to offer a hosted whip-around show with commentary on highlights across all the games happening at once.

I do have some doubts about the pricing model. MLS Season Pass is priced at $14.99 per month, or $99 per season (discounted to $79 for Apple TV+ subscribers). This is a cost-effective offering if the customer is interested in watching most of the games. Soccer super fans do exist, and this will be great for them.
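
As a quick back-of-the-envelope check on where the break-even point sits between those two prices (a rough sketch; the nine-month season length is my assumption, not an Apple or MLS figure):

```python
# Rough break-even maths for MLS Season Pass pricing.
# Assumption: the MLS regular season spans roughly nine billable months.

MONTHLY = 14.99
SEASON = 99.00
SEASON_TV_PLUS = 79.00  # discounted season price for Apple TV+ subscribers
SEASON_MONTHS = 9       # assumed season length in months

# How many months of paying monthly equal the up-front season price?
break_even = SEASON / MONTHLY
print(f"Break-even point: {break_even:.1f} months")

# Paying monthly for the whole season vs. paying up front.
monthly_total = MONTHLY * SEASON_MONTHS
print(f"Monthly plan over a full season: ${monthly_total:.2f}")
print(f"Season pass saving: ${monthly_total - SEASON:.2f}")
```

In other words, anyone planning to follow more than about six and a half months of the season comes out ahead with the $99 pass, and a full-season monthly payer would spend roughly $135.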

However, I’d wager most people only care about their local team. In that context, shelling out $99 a year to watch one team (comprising at most two games a week) feels expensive. Previously, local regional networks would broadcast those games as part of a standard cable package, or you could see them on something like ESPN — a channel that airs a plethora of different sports, not just soccer.

A slight wrinkle to this equation is that some percentage of games will be available to Apple TV+ subscribers without paying for the pass. Exactly how many is still unclear. Although Apple shared the MLS 2023 schedule yesterday, it did not designate which games will be in front of the paywall. A leaked plan from earlier this year suggested it could be as high as 40% of games, which I reckon would be a decent substitute for the old model of ‘free’ regional network broadcasting availability.

A team-friendly perk is that all club stadium season ticket holders will also get a subscription to MLS Season Pass included at no extra charge. What a nice thing to do. But it does reduce further the potential pool of super fans who would be interested in paying $99 to watch all the games in the first place.

Nevertheless, my understanding is Apple has the contractual freedom to adjust pricing as it sees fit. This is a ten-year deal and they aren’t necessarily going to nail it first time. So, if the announced pricing structure underperforms, there’s room for product changes, like perhaps the introduction of a cheaper single-team pass (maybe $50 a season?) or bundling it with something like Apple One Premier.

iCloud Advanced Data Protection


iCloud already protects 14 sensitive data categories using end-to-end encryption by default, including passwords in iCloud Keychain and Health data. For users who enable Advanced Data Protection, the total number of data categories protected using end-to-end encryption rises to 23, including iCloud Backup, Notes, and Photos. The only major iCloud data categories that are not covered are iCloud Mail, Contacts, and Calendar because of the need to interoperate with the global email, contacts, and calendar systems.

“What happens on your iPhone, stays on your iPhone” is what Apple boasted on a massive billboard it plastered on the side of a building, for all to see, at CES 2019. Although the essence of the ad was accurate, the messaging always felt a tad hollow. You couldn’t repeat it in good faith without hanging a couple of asterisks on the end.

iCloud Advanced Data Protection closes that gap and makes good, in full, on Apple’s wide-reaching marketing push towards privacy. If you want to, you can now fully encrypt the most vital sensitive information: Photos and Messages, including as part of an iCloud backup. End-to-end encryption means no one has the key to read that data, except you in the form of your Apple ID password (or device passcode, as Apple lets users unlock access to their account that way too). No other manufacturer offers such a comprehensive end-to-end encryption option. It’s a big deal.

I won’t be turning this on myself. I value the safety net of Apple Support in the event I ever somehow forget my password more than the (mostly theoretical) risk that some entity may one day get their hands on my iCloud data. I won’t be recommending my family members do this either, for the same reason. Of course, if you are a potential target of a malicious nation state, like a political activist or journalist, you will probably choose differently. That’s great. What matters is the option is there. It’s so important because iOS does not let any third-party service have low-level system access to offer an alternative cloud backup solution. iCloud Backup was the only real choice — aside from having no backup at all — and iCloud Backup was effectively an encryption backdoor … until now.

Apple has boldly presented Advanced Data Protection as a feature intended to roll out worldwide. Even if that is practically unrealistic, I am proud that Apple approached this the way they did. They are taking on the responsibility of countless legal battles and geopolitical angst. They could have negotiated this in private, but instead they are forcing the fight into the open. If end-to-end encryption ultimately isn’t available in a certain region, we’ll know who to blame.

The New Price Of Apple TV 4K

The Apple TV hardware has two main issues: lagging OS, and price. This week’s hardware refresh naturally didn’t do much to change the software experience — although Siri features a slightly more modernised UI and per-user voice recognition now — but they did tackle the price problem.

The previous lineup was $149 for Apple TV HD, a product first released in 2015, and two models of Apple TV 4K varying by storage capacity; $179 for 32 GB and $199 for 64 GB. This was simply outrageous pricing, in a market where competent 4K streaming sticks can be picked up for under $50. The premium advantages of the Apple TV platform were not worth paying four times more for. I bought it because I’m a sucker, but I’d never recommend it to family or friends.

The new lineup is $129 for Apple TV 4K with 64 GB and $149 for Apple TV 4K with 128 GB. This time around, the higher-end model also differs in features other than storage; the more expensive Apple TV has an Ethernet port for wired networking and a Thread radio, for communicating with the latest Thread-only smart home devices.

In raw numbers, the Apple TV 4K is 28% cheaper than it was a week ago. The cheapest Apple TV you can buy is now around 13% cheaper, and actually of respectable, recommendable spec: the latest A15 chip, full 4K HDR support, and plenty of storage for future-proofing / space for downloading a dozen Apple Arcade games. And the obsolete Apple TV HD is gone for good, thankfully.
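
For the record, the percentage drops work out roughly as follows; the exact figure depends on which old model you compare against (the $179 cheapest 4K model, or the $149 cheapest model overall):

```python
# Price drops across the Apple TV lineup, old entry points vs. new.

def pct_drop(old: float, new: float) -> float:
    """Percentage decrease from the old price to the new price."""
    return (old - new) / old * 100

OLD_CHEAPEST_4K = 179       # previous cheapest Apple TV 4K (32 GB)
OLD_CHEAPEST_OVERALL = 149  # previous cheapest Apple TV (the HD model)
NEW_CHEAPEST = 129          # new cheapest Apple TV 4K (64 GB)

print(f"Cheapest 4K model:      {pct_drop(OLD_CHEAPEST_4K, NEW_CHEAPEST):.1f}% cheaper")
print(f"Cheapest model overall: {pct_drop(OLD_CHEAPEST_OVERALL, NEW_CHEAPEST):.1f}% cheaper")
```

That is roughly a 28% drop against the old cheapest 4K model, and about a 13% drop against the old cheapest model overall.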

This is fantastic news. The lineup is much more reasonable now. If someone is mad that their Roku or Fire Stick is ad-ridden or laggy, suggesting a $129 solution is now possible — far more palatable than the old $179 price point.

That being said, $129 is still too much for the Apple TV to capture significant market share. I wish Apple went further with stripping down the base model to push the price down more. $99 really feels like the target to hit, and they didn’t quite get there. A hypothetical 32 GB Apple TV model for $99 would have been appealing; very few will benefit from having 64 GB or 128 GB onboard storage. If you are just streaming video, you don’t care about the storage space.

iPhone 14 Pro Always-On Display

It’s such an Apple simplification to show the exact same lock screen whether the phone is awake or not. It’s so similar that it is a stretch to even describe it as a “mode”. It’s just your lock screen, dimmed.

I like it. It works the same way as the Apple Watch, and it makes sense to me. When you invest in customising your lock screen using all the new widgets, dynamic wallpapers, font and colour options available in iOS 16, you get to enjoy your personalisation choices (quite literally) all day long on the iPhone 14 Pro. It feels like you have made the phone your own. The always-on feature also benefits from the iOS 16 maximised album art lock screen view, adding a splash of colour and vitality when your phone would otherwise be sitting dormant on your desk. The ability to glance at notifications and widgets adds some degree of utility, but it’s mostly just nice to have the screen stay on. Screen and battery technology have advanced to make it possible, so why shouldn’t phones work this way?

Of course, personal preference matters a lot here. Indeed, it’s never a good look when a significant portion of the initial embargoed reviews said their authors turned off the feature entirely — arguably disabling one third of the Pro-exclusive features this generation — because they found the permanent aliveness of Apple’s always-on implementation too distracting. Some people simply prefer a simpler, more muted always-on state.

A Nightstand Mode, perhaps, again cribbing from the Apple Watch.

The OS integration does feel a little incomplete. Obviously, some people want an option to get an Android-like always-on mode, where it just shows the time on a wholly black background. I don’t care for that so much but I do want a little more flexibility in how I am allowed to conditionally enable or disable always-on.

As I already said, I happen to like the feature as-is — but I don’t want the screen to shine brightly through the night whilst I’m trying to sleep, if only to avoid unnecessary battery drain. As of right now, the only way to get the screen to turn off at night is to use the Sleep Focus mode. The Sleep Focus is blessed with abilities that other Focuses lack, and turning off always-on is one of them. I don’t do sleep tracking and I don’t really want to have a Focus-oriented device lifestyle. Until I got my 14, I had stuck to the pre-iOS 15 binary system of Do Not Disturb, or nothing. I have resorted to using Sleep, but I shouldn’t have to. Why can’t I set it so that Do Not Disturb also tames the always-on display?

Apple Wants To Triple Revenue From Advertising Division


Inside the ads group, Teresi has talked up expanding the business significantly. It’s generating about $4 billion in revenue annually, and he wants to increase that to the double digits. That means Apple needs to crank up its efforts. I believe that the iPhone maker will eventually expand search ads to Maps. It also will likely add them to digital storefronts like Apple Books and Apple Podcasts.

There’s nothing inherently bad about ads. Typically, any ‘damage’ done to the user experience of a product by showing ads is offset by a lower — or free — purchase price. That’s the balancing act at play; the user is compensated in exchange for having to consume advertising, thereby maintaining an overall equilibrium of customer satisfaction. For just one example, Google runs a wildly successful suite of services, almost exclusively powered by an ad-supported monetisation model.

In the context of Apple, the same ideas apply. I don’t think anyone would complain if Apple launched an ad-supported tier of Apple TV+, or Apple Music, as long as it was proportionately cheaper than the ad-free tier. If anything, those tiers would likely be more popular than the existing offerings. After all, Spotify has huge market dominance in music streaming precisely because they offer a free ad-supported tier.

The tension lies in the expectation that Apple is going to insert more and more ads into the user experiences of its premium products, without any such compensation in return. All signs point to Apple trying to uphold its position as a premium company charging premium prices, whilst also extracting ever more revenue, seemingly indiscriminately.

And that is a dangerous slippery slope that threatens the essence of Apple’s entire product lineup. It’s a risky venture. A big problem is that the feedback loop is not very sensitive; the increase in revenue is immediate, but the cumulative negative impact felt by users only shows up later. You can probably insert a few additional ads into iOS and get away with it. But overdo it, and you start undermining the premium brand the company has carefully curated for so long, and then you start losing customers, maybe for good.

A Truly Smart Home Should Know Who Is In What Room

Actually, we fitted smart wall switches rather than independent light bulbs, but it’s the same difference.

Smart lights are the go-to accessory for kitting out a smart home. We set some up in our house a few years ago, and an obvious thing to do was to make all the lights turn themselves off at night, so we aren’t wasting electricity all night long if someone forgets to flick the switches before going upstairs to bed.

We achieved this by configuring an automation that sets a ‘Good Night’ scene at 2 AM. It turns off the TV and some other stuff too. It’s neat, useful even. But a time-based schedule is far from an ideal trigger for this. What if someone happens to stay up late? Well, tough luck, everything is still going to unceremoniously turn itself off. The automation is set at 2 AM because all members of the family have clocked out by around midnight or 1 AM, and that leaves enough of a buffer for the occasions when people stay up another hour or so. Still, it’s not a foolproof system. The hack also means that when people are in bed before 12, as is usually the case, the light accessories are not automatically turned off for another two hours, wasting electricity for no reason.

It also doesn’t help with all the times during normal waking hours when people turn lights on and then leave the room. If you want to solve for that situation, a fixed-time automation is not sufficient. The next tool in the arsenal is motion sensors, setting things to turn off when no motion is detected for a while. I’ve attempted to use a couple of motion sensors from different brands, but they are all largely unsatisfactory at the job. They aren’t reliable in general, especially in larger rooms where people can be obscured by sofas or tables; a very common pitfall is that when people relax, reading a book or watching TV, they tend not to move enough to trigger the sensor. Motion sensors can be successfully deployed in some specific scenarios, but they aren’t a general-purpose solution to the task of turning stuff off when people aren’t there anymore.

A few dedicated room occupancy sensors do exist. They typically attach to door frames and count how many people enter and exit each room. If the count is greater than zero, the room is considered occupied. Unfortunately, these kinds of sensors are cost-prohibitive, somewhat ugly, and also imperfect; miscounting just one person will mean the total is off, requiring manual intervention to reset it. People enter and exit rooms a lot; even something that is 99% accurate will be wrong enough times to be frustratingly annoying.
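The fragility of that counting approach is easy to quantify. Here is a small, purely illustrative Python simulation (my own sketch, not modelled on any real sensor) showing how a per-event miss rate silently drifts an occupancy count, and how quickly a 99% per-event accuracy compounds into near-certain error:

```python
import random

def simulate_occupancy_counter(events, accuracy=0.99, seed=42):
    """Simulate a door-frame people counter that misses each
    entry/exit event with probability (1 - accuracy).

    events: sequence of +1 (someone enters) and -1 (someone exits).
    Returns (true_count, sensor_count) after all events.
    """
    rng = random.Random(seed)
    true_count = 0
    sensor_count = 0
    for direction in events:
        true_count += direction
        if rng.random() < accuracy:
            sensor_count += direction  # event registered correctly
        # else: event missed entirely; the count silently drifts,
        # and nothing ever corrects it without manual intervention

    return true_count, sensor_count

# A modest household: ~25 in/out round trips per day for a week.
events = []
for _ in range(7 * 25):
    events += [+1, -1]  # someone walks in, later walks out

true_count, sensor_count = simulate_occupancy_counter(events)
print("true:", true_count, "sensor:", sensor_count)

# Probability that at least one of the N events is miscounted:
n = len(events)  # 350 events
p_any_error = 1 - 0.99 ** n
print(f"chance of at least one miss: {p_any_error:.0%}")  # ~97%
```

Even at 99% accuracy, over a single week the odds of the counter being wrong at least once are about 97%, and any single miss leaves the count permanently off until someone resets it.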

So, the ‘simple’ task of intelligently automating turning things off in a house remains an open problem. The 2 AM Good Night automation I use at the moment is a crude hack, but it works well enough to tease the possibilities of where we could go in the future. I think fully solving room occupancy is the key to this, and to a world of other related fun and useful features.

The promise of a smart home could really come into its own if we could reliably detect exactly who is in what room. If the iPhone knew what room you were in at all times, you could open the Lock Screen and immediately see relevant smart home controls for the lights and accessories in your immediate proximity, as those are odds-on what you want to change. It would be great to say to your watch ‘turn on the light’ and have the virtual assistant do exactly what you intended, as if you were talking to a real person: turn on the light in the room where you are and nothing else.

Reducing friction is vital to making smart home stuff feel more useful and less of a gimmick. If the home knew who was in the lounge, it could change to that person’s user profile on the TV automatically. Playback of a podcast on smart speakers could follow someone through the house, as they move from eating dinner in the dining room to relaxing in their room. It’d be really cool if a smart speaker could automatically avoid expletive-filled music when young kids were known to be nearby. Smart thermostats could adjust to the temperature preferences of the person in the room at any moment, and perhaps turn off heating altogether when the system reliably knew everyone had gone to their rooms at bedtime. And your wake-up alarm could turn itself off when it observed you walk out of your bedroom the next morning.

I don’t pretend to know how we get there technologically. I figure it would probably involve coordination between camera-esque sensors dotted around the home and the devices people carry with them, like phones and watches. Maybe Apple could take advantage of Ultra Wideband positioning to accurately follow people’s movements throughout a house, with various static nodes like HomePods or Apple TVs or Echo speakers working in concert to track and triangulate the signals. I hope manufacturers pull on that Thread and see where it takes us (pun intended).
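For what it’s worth, the geometry behind that triangulation idea is well understood. Here is a hedged sketch, in plain Python with made-up room coordinates, of how three fixed nodes with UWB-style distance measurements could pin down a 2-D position. Real systems are far more sophisticated, dealing with noisy ranging, multipath, and three dimensions; this is just the textbook trilateration maths:

```python
import math

def trilaterate(anchors, dists):
    """Estimate a 2-D position from three fixed anchors (e.g. HomePods
    at known spots in a house) and measured distances to each.

    anchors: [(x1, y1), (x2, y2), (x3, y3)]
    dists:   [d1, d2, d3]
    """
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = dists
    # Subtracting the circle equations pairwise cancels the x^2 and
    # y^2 terms, leaving a 2x2 linear system in (x, y).
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-9:
        raise ValueError("anchors are collinear; position is ambiguous")
    # Cramer's rule for the 2x2 system.
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return x, y

# Hypothetical layout: three nodes in an 8m x 6m space, and a watch
# whose true position we want the system to recover from distances.
anchors = [(0, 0), (8, 0), (0, 6)]
true_pos = (3, 4)
dists = [math.dist(a, true_pos) for a in anchors]
print(trilaterate(anchors, dists))  # recovers (3.0, 4.0)
```

With perfect distances this recovers the position exactly; the hard engineering is that real UWB ranges are noisy, which is why a deployed system would fuse many measurements over time rather than solve one snapshot like this.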

M2 Display Limit

The jump to Apple Silicon and the M1 chip was nothing short of astonishing. The Mac got way, way, way better, quite literally overnight. In the timeline of the Mac, 2020 will be remembered forever (and not because of COVID). Even more staggeringly, Rosetta 2 ensured old Intel binaries ran well enough that you generally couldn’t even tell you were running through a translation layer. Whether running new or old binaries on M1 Macs, everything performed at the same speed or faster (typically significantly faster) than its Intel counterpart. The move to M1 came with no asterisks, downsides, or drawbacks. It was a perfectly executed transition.

Well, almost perfect. The move from Intel to Apple Silicon meant Apple’s best-selling machines, the 13-inch MacBook Air and MacBook Pro, went backwards in one regard. They were no longer capable of driving two external displays.

But that was easily excused. It was only the first generation of Apple Silicon, after all. Although using two monitors at once is not really an edge case, it’s certainly not a dealbreaker for a base model laptop. As such, the snag was remarked upon and quickly excused as a footnote; a strange quirk of first-gen engineering.

Year two, here comes M2. CPU is better, GPU is better … and yet the one-external-display limitation remains. Count me surprised. Now that the glow of the Apple Silicon transition has faded slightly, that footnote is a little harder to ignore.

I don’t think it’s unreasonable to expect any Mac to be able to output to two displays at once. It’s a weird thing to explain to normal people too; when was the last time a customer had to think twice about plugging a second monitor into their computer? Decade-old Intel Macs could do it just fine. Plenty of old MacBook Air users spent the last two pandemic years working from home with a dual-display setup, and when they come to upgrade their years-old laptop, they are going to be unpleasantly surprised to find that the (otherwise shiny and new) 2022 MacBook Air can’t do that.

It’s still not a dealbreaker — and it doesn’t impact everyone of course — but it is a noteworthy con. I’m obviously not expecting the base chip to drive multiple 6K displays like the Pro/Max/Ultra chips can. An adequate bar would be the ability to output to two 4K screens at once, plus the laptop’s own screen. I don’t think I’m asking for much; just match what the old Intel machines could do.

I hope this isn’t a new product segmentation scheme on Apple’s part to differentiate pro and non-pro lines by how many screens they can support. That would be dumb. I don’t think that’s the case. The M3 will surely close the loop. Right?

Apple Announces iCloud Shared Photo Library


iCloud Shared Photo Library gives families a new way to share photos seamlessly with a separate iCloud library that up to six users can collaborate on, contribute to, and enjoy. Users can choose to share existing photos from their personal libraries, or share based on a start date or people in the photos. A user can also choose to send photos to the Shared Library automatically using a new toggle in the Camera app. Additionally, users will receive intelligent suggestions to share a photo that includes participants in the Shared Photo Library. Every user in the Shared Photo Library has access to add, delete, edit, or favorite the shared photos or videos, which will appear in each user’s Memories and Featured Photos so that everyone can relive more complete family moments.

I swear, sometimes it feels like Apple waits for everyone to give up hope for a feature, only to deliver it on a silver platter the very next year. Cynically: better late than never. Practically, this is great news.

I love using the Photos app to scroll through years of images, edit and crop right on the phone, run photo screensavers on the family room Apple TV, and glance at relevant images in the Photos home screen widgets across my devices.

It all just works swimmingly … except for the rather crucial part about getting those photos collated in the first place. The strategy our family has adopted until now, which I assume is what everyone else does, has been to designate one person’s account as ‘primary’ and put them in charge of holding all the photos. When others take pictures, they send them to said person, who saves them into the canonical archive.

This is obviously a clunky ‘solution’. The primary person takes on a lot of responsibility to manage the library, including doing all the cropping and editing, and loses the ability to have a safe space for personal photos they want to keep separate. A family member cannot see all the family pictures on their own phone unless they also keep copies in their own individual bucket of iCloud, something that is both annoying to manage manually and wastes gigabytes of our 2 TB storage plan on duplicated content. A particular pain point in our household is that this arrangement necessitates having the designated person’s account signed into the Apple TV, so that the Photos screensaver will work. Unfortunately, that means those Apple TVs are unable to participate in HomeKit, because the HomeKit home configuration was set up on someone else’s account, and tvOS won’t let you sign in to both at the same time.

The newly announced iCloud Shared Photo Library codifies the impromptu approach we have all been using into an official feature, giving all the benefits without the aforementioned downsides. Photos can be saved to your personal library, or sent to the shared library — which all members can access and view. iOS will use machine learning to remind you to share images to the family library where appropriate, and can (optionally) do it automatically when it detects the family has gone on a group trip together. Edits to the photos and metadata adjustments can be made by anyone, and automatically synchronise. It just works.

As Apple presented it, it seemed like the shared library would be tied to the Family Sharing system; the six users are the six people in your Family Sharing group. That would certainly be the Occam’s razor approach, removing the need for additional account management steps. However, apparently, that is not the case, and the “up to six users” can include people who aren’t in your Family Sharing circle. That definitely opens up the feature to groups of people who want to share their photos but are not neatly contained within a single household with just one shared payment method between them. It does raise some finicky questions, though, like how exactly iCloud allocates the shared library’s storage. If someone contributes a photo, does the file size count against their personal iCloud storage quota, against the person who originally created the shared library, or against a wholly separate bucket altogether? Who pays for it, if you need more space? Who is in control of adding and removing people? Can you be removed from the shared library against your will, and if that happens, can you get a local copy of all the pictures of you before you lose access?