A Truly Smart Home Should Know Who Is In What Room

Actually, we fitted smart wall switches rather than independent light bulbs, but it’s the same difference.

Smart lights are the go-to accessory to kit out a smart home with. We set some up in our house a few years ago, and an obvious thing to do was to make all the lights turn themselves off at night, so we aren’t wasting electricity all night long if someone forgets to flick the switches before going upstairs to bed.

We achieved this by configuring an automation that sets a ‘Good Night’ scene at 2 AM. It turns off the TV and some other stuff too. It’s neat, useful even. But a time-based schedule is far from an ideal trigger for this. What if someone happens to stay up late? Well, tough luck, everything is still going to unceremoniously turn itself off. The automation is set at 2 AM because all members of the family have clocked out around midnight or 1 AM, and that leaves enough of a buffer to account for the occasions when people stay up for another hour or so. Still, it’s not a foolproof system. The hack also means that on most nights, when people are in bed before 12, the light accessories are not automatically turned off for another two hours, wasting electricity for no reason.

It also doesn’t help with all the times during normal waking hours when people turn lights on and then leave the room. If you want to solve for that situation, a fixed-time automation is not sufficient. The next tool in the arsenal is motion sensors, setting stuff to turn off when no motion is detected for a while. I’ve attempted to use a couple of motion sensors from different brands, but they are all largely unsatisfactory at the job. They aren’t reliable in general, especially in larger rooms where people might be obscured by sofas or tables; a very common pitfall is that when people relax, like reading a book or watching TV, they tend not to move enough to trigger the sensor. Motion sensors can be successfully deployed in some specific scenarios, but they aren’t a general-purpose solution to the task of turning stuff off when people aren’t there anymore.

A few dedicated room occupancy sensors do exist. They typically attach to door frames, and count how many people enter and exit each room. If the count is greater than zero, the room is considered occupied. Unfortunately, these kinds of sensors are cost-prohibitive, somewhat ugly, and also imperfect; a missed count of just one person will mean the total is off, requiring manual intervention to reset it. People enter and exit rooms a lot; even something that is 99% accurate will be wrong often enough to be frustrating.
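To make the fragility concrete, here is a minimal sketch of the enter/exit counting approach these sensors rely on. This is my own toy model, not any vendor’s actual firmware:

```python
class RoomOccupancy:
    """Toy model of a door-frame occupancy sensor: the room is treated
    as occupied while (entries - exits) > 0. One missed event is enough
    to corrupt the count indefinitely."""

    def __init__(self):
        self.count = 0

    def person_entered(self):
        self.count += 1

    def person_exited(self):
        # Clamp at zero: a real sensor can't report negative occupancy.
        self.count = max(0, self.count - 1)

    @property
    def occupied(self):
        return self.count > 0


room = RoomOccupancy()
room.person_entered()   # Alice walks in
room.person_entered()   # Bob walks in
room.person_exited()    # Alice leaves
# Bob leaves too, but the sensor misses him...
print(room.occupied)    # → True: the lights stay on in an empty room
```

There is no self-correction: the stale count persists until a human resets it, which is exactly why 99% accuracy isn’t good enough here.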

So, the ‘simple’ task of intelligently automating turning things off in a house remains an open problem. The 2 AM Good Night automation I use at the moment is a crude hack, but it works well enough to tease the possibilities of where we could go in the future. I think fully solving room occupancy is the key to this, and to a world of other related fun and useful features.

The promise of a smart home could really come into its own if we could reliably detect exactly who is in what room. If the iPhone knew what room you were in at all times, you could open the Lock Screen and immediately see relevant smart home controls for the lights and accessories in your immediate proximity, as those are odds-on what you want to change. It would be great to be able to say to your watch ‘turn on the light’ and have the virtual assistant do exactly what you intended, as if you were talking to a real person: turn on the light in just the room where you are, and nothing else.

Reducing friction is vital to making smart home stuff feel more useful and less of a gimmick. If the home knew who was in the lounge, it could change to that person’s user profile on the TV automatically. Playback of a podcast on smart speakers could follow someone through the house, as they move from eating dinner in the dining room to relaxing in their room. It’d be really cool if a smart speaker could automatically avoid expletive-filled music when young kids were known to be nearby. Smart thermostats could adjust to the temperature preferences of whoever is in the room at any moment, and perhaps turn off the heating altogether when the system reliably knew everyone had gone to their rooms at bedtime. And your wake-up alarm could turn itself off when it observed you walk out of your bedroom the next morning.

I don’t pretend to know how we get there technologically. I figure it would probably involve coordination between camera-esque sensors dotted around the home and the devices people carry with them, like phones and watches. Maybe Apple could take advantage of Ultra Wideband positioning to accurately follow people’s movements throughout a house, with various static nodes like HomePods or Apple TVs or Echo speakers working in concert to track and triangulate the signals. I hope manufacturers pull on that Thread and see where it takes us (pun intended).
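The geometry behind that kind of triangulation is well understood. Assuming each static node can estimate its distance to a device, as UWB ranging does, three nodes are enough to pin down a position in two dimensions. A toy sketch with made-up coordinates (nothing here reflects Apple’s actual implementation):

```python
import math

def trilaterate(anchors, distances):
    """Estimate a 2D position from three fixed anchors and their measured
    distances to the target. Subtracting the first circle equation from
    the other two linearises the problem into a 2x2 system, solved here
    with Cramer's rule. Assumes noise-free ranges and non-collinear
    anchors."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = distances
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - b1 * c2) / det, (a1 * c2 - a2 * c1) / det)

# Hypothetical nodes: a HomePod at (0, 0), an Apple TV at (4, 0), and a
# second HomePod at (0, 4), all in metres; the phone is actually at (1, 1).
nodes = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]
ranges = [math.sqrt(2), math.sqrt(10), math.sqrt(10)]
print(trilaterate(nodes, ranges))  # → (1.0, 1.0), within floating-point error
```

In reality the ranges are noisy, you want more than three nodes and a third dimension, and that turns this into a least-squares problem; the principle, though, is the same.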

M2 Display Limit

The jump to Apple Silicon and the M1 chip was nothing short of astonishing. The Mac got way, way, way better, quite literally overnight. In the timeline of the Mac, 2020 will be remembered forever (and not because of COVID). Even more staggeringly, Rosetta 2 ensured old Intel binaries ran well enough that you generally couldn’t even tell you were running through an emulation layer. Whether running new or old binaries, M1 Macs performed at either the same speed or faster (typically significantly faster) than their Intel counterparts. The move to M1 came with no asterisks, downsides, or drawbacks. It was a perfectly executed transition.

Well, almost perfect. The move from Intel to Apple Silicon meant Apple’s best-selling machines, the 13-inch MacBook Air and MacBook Pro, went backwards in one regard. They were no longer capable of driving two external displays.

But that was easily excused. It was only the first generation of Apple Silicon, after all. Although using two monitors at once is not really an edge case, it’s certainly not a dealbreaker for a base model laptop. As such, the snag was remarked upon and quickly excused as a footnote; a strange quirk of first-gen engineering.

Year two, here comes M2. CPU is better, GPU is better … and yet the one-display limitation remains. Count me surprised. Now that the glow of the Apple Silicon transition has faded slightly, the footnote is slightly harder to ignore.

I don’t think it’s unreasonable to expect any Mac to be able to output to two displays at once. It’s a weird thing to explain to normal people too; when was the last time a customer had to think twice about plugging in a second monitor to their computer? Decade-old Intel Macs could do it just fine. Plenty of old MacBook Air users spent the last two pandemic years working from home with a dual display setup, and now when they come to upgrade their years-old laptop, they are going to be unpleasantly surprised to find that the (otherwise shiny and new) 2022 MacBook Air can’t do that.

It’s still not a dealbreaker — and it doesn’t impact everyone of course — but it is a noteworthy con. I’m obviously not expecting the base chip to drive multiple 6K displays like the Pro/Max/Ultra chips can. An adequate bar would be the ability to output to two 4K screens at once, plus the laptop’s own screen. I don’t think I’m asking for much; just match what the old Intel machines could do.

I hope this isn’t a new product segmentation scheme on Apple’s part to differentiate pro and non-pro lines by how many screens they can support. That would be dumb. I don’t think that’s the case. The M3 will surely close the loop. Right?

Apple Announces iCloud Shared Photo Library


iCloud Shared Photo Library gives families a new way to share photos seamlessly with a separate iCloud library that up to six users can collaborate on, contribute to, and enjoy. Users can choose to share existing photos from their personal libraries, or share based on a start date or people in the photos. A user can also choose to send photos to the Shared Library automatically using a new toggle in the Camera app. Additionally, users will receive intelligent suggestions to share a photo that includes participants in the Shared Photo Library. Every user in the Shared Photo Library has access to add, delete, edit, or favorite the shared photos or videos, which will appear in each user’s Memories and Featured Photos so that everyone can relive more complete family moments.

I swear, sometimes it feels like Apple waits for everyone to give up hope for a feature, only to deliver it on a silver platter the very next year. Cynically, better late than never. Practically, this is great news.

I love using the Photos app to scroll through years of images, edit and crop right on the phone, run photo screensavers on the family room Apple TV, and glance at relevant images in the Photos home screen widgets across my devices.

It all just works swimmingly … except for the rather crucial part about getting those photos collated in the first place. The strategy our family has adopted until now, which I assume is what everyone else does, is to designate one person’s account as ‘primary’; they are in charge of holding all the photos. If others take pictures, they send them to said person, who saves them in the canonical archive.

This is obviously a clunky ‘solution’. The primary person takes on a lot of responsibility to manage the library, including doing all the cropping and editing, and loses the ability to have a safe space for personal photos they want to keep separate. A family member cannot see all the family pictures on their own phone unless they also keep copies in their own individual buckets of iCloud, something that is both annoying to manage manually and wastes gigabytes of our 2 TB storage plan with duplicated content. A particular pain point in our household is that this arrangement necessitates having the designated person’s account signed into the Apple TV, so that the Photos screensaver will work. Unfortunately, that means those Apple TVs are unable to participate in HomeKit, because the HomeKit home configuration was set up on someone else’s account, and tvOS won’t let you sign in to both at the same time.

The newly announced iCloud Shared Photo Library codifies the impromptu approach we have all been using into an official feature, giving all the benefits without the aforementioned downsides. Photos can be saved to your personal library, or sent to the shared library, which all members can access and view. iOS will use machine learning to remind you to share images to the family library where appropriate, and can (optionally) do it automatically when it detects the family has gone on a group trip together. Edits to the photos and metadata adjustments can be made by anyone, and automatically synchronise. It just works.

As Apple presented it, it seemed like the shared library would be tied to the Family Sharing system; the six users would be the six people in your Family Sharing group. That would certainly be the Occam’s razor approach, removing the need for additional account management steps. However, apparently that is not the case, and the “up to six users” can include people who aren’t in your Family Sharing circle. That definitely opens up the feature to groups of people who want to share their photos but are not neatly contained within a single household with one shared payment method between them. It does raise some finicky questions, though, like how exactly iCloud allocates the shared library’s storage. If someone contributes a photo, does the file size count against their personal iCloud storage quota, the quota of the person who originally created the shared library, or a wholly separate bucket altogether? Who pays for it if you need more space? Who is in control of adding and removing people? Can you be removed from the shared library against your will, and if that happens, can you get a local copy of all the pictures of you before you lose access?

Apple Changes Rules Surrounding In-App Purchase Subscription Price Increases


Currently, when an auto-renewable subscription price is increased, subscribers must opt in before the price increase is applied. The subscription doesn’t renew at the next billing period for subscribers who didn’t opt in to the new price. This has led to some services being unintentionally interrupted for users and they must take steps to resubscribe within the app, from Settings on iPhone and iPad, or in the App Store on Mac.

With this update, under certain specific conditions and with advance user notice, developers may also offer an auto-renewable subscription price increase, without the user needing to take action and without interrupting the service. The specific conditions for this feature are that the price increase doesn’t occur more than once per year, doesn’t exceed US$5 and 50% of the subscription price, or US$50 and 50% for an annual subscription price, and is permissible by local law. In these situations, Apple always notifies users of an increase in advance, including via email, push notification, and a message within the app.

As the App Store (is forced to) relax rules around alternative payment systems, In-App Purchase is more sensitive to competition and has to do more to compete. Long term, this will be positive for customers with lower prices and better features across the board. In the short term, those same competition forces mean that Apple will have to pull back on some of the customer-friendly In-App Purchase policies to align with the market, to keep publishers onboard.

As evidenced in discovery during the Epic v. Apple trial, the churn from ‘ungrandfathering’ price increases was one factor that led Netflix to exit In-App Purchase in 2018.

The prior policy that meant a subscription’s price could not be increased without explicit user consent was incredibly favourable to the customer, but out of whack with general customer expectations. The vast majority of subscriptions in the world do not work that way. In-App Purchase was a stark outlier. It stood in contrast to even Apple’s own subscriptions like iCloud or Apple One; they increase their price freely with notification, but without consent.

So, now, In-App Purchase will work the same way. I don’t think it’s something to get too mad at Apple about. It’s the reality of business; you have to balance developer and customer interests. In this instance, Apple has still enforced appropriate price caps to stop abuse of the system. And In-App Purchase remains highly customer favourable overall, with how easy each subscription is to cancel.
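Those price caps are mechanical enough to express in a few lines. A sketch of my reading of the quoted conditions (the function and parameter names are my own invention, not any real StoreKit API, and the local-law condition is omitted):

```python
def can_auto_renew_at_new_price(old_price, new_price, annual=False,
                                increases_in_past_year=0):
    """Whether a subscription price increase may apply without user opt-in,
    per my reading of Apple's stated conditions (prices in US dollars)."""
    increase = new_price - old_price
    cap = 50.00 if annual else 5.00            # US$50 annual, US$5 otherwise
    return (increases_in_past_year == 0        # at most once per year
            and increase <= cap                # absolute cap
            and increase <= 0.50 * old_price)  # and at most 50%

print(can_auto_renew_at_new_price(9.99, 12.99))  # $3 rise → True
print(can_auto_renew_at_new_price(9.99, 15.99))  # $6 rise exceeds $5 → False
```

Anything outside these bounds falls back to the old behaviour: the subscriber must explicitly opt in or the subscription lapses.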

The Apple Services Experience Is Not Good Enough

To me, the Apple brand ultimately stands for high-quality premium products, developed by teams of people that care deeply about what they are working on and have the freedom to sweat the details. Whilst they don’t always succeed, their consistency at achieving that feeling when it comes to hardware design is unrivalled. Every part, every component, every material, appears to have been thoroughly considered and debated. Nothing is rushed or skimped on. That permeates through to the end product, tangibly and intangibly so. That doesn’t mean everything they make is a surefire hit or a runaway success; just that someone cared about making it.

I’m not sure I could name a single Apple service that meets that bar. Apple’s services tick the boxes, and they mostly do what they promise. However, nothing comes close to the quality of experience I expect from things branded with the Apple logo. When I am using these apps, I am not filled with confidence that striving for greatness was a top priority. Far too often, meeting revenue goals and business objectives seems to have been more important to their creation.

They are built to a passing grade, but nothing more. Basic features found in rival companies’ services are either lacking altogether in Apple’s apps or implemented half-heartedly, and performance is sluggish. Browsing in Music and TV is painful, with an over-reliance on infinite scroll. New content is just tacked onto the bottom of already long lists. Meanwhile, the navigation bars are blank when they could include simple shortcut buttons and filters to help users navigate and explore. Moreover, these apps feature too many loading states and too much waiting around. They are akin to janky web apps, rather than richly compelling, responsive experiences.

Frequently, it seems the content teams and the tech teams are isolated from each other, when they really should be in sync and working together to make everything sing. Arcade is trapped inside a tab in the App Store app, and obvious synergies with Game Center are not exploited; Game Center remains in a quasi-extant state as a panel in Settings. The Library tab in TV is useless in the modern streaming service era. (Frankly, the entire TV app is unworthy of the content Apple is producing for it.) Another example: Apple Music significantly relaunched the Radio tab in 2020, boasting three live broadcasts and dozens of weekly shows, but all that is for nothing when it is still impossible for users to subscribe to a show, to be notified when an artist goes live or when new episodes are available to listen to on demand.

On a daily basis, I encounter issues, ranging from small niggles to significant gaps in functionality. These things have been in this state of mediocrity for many years. I’m losing faith that anyone in a position of power at the Services group cares enough to make them better. The high standards seen in the products of Apple’s hardware divisions are not reflected here.

Grading on a curve, Music and Fitness are the best, Arcade is in the middle, and TV+ and News+ are fighting it out at the bottom. But I stress, that’s grading on a curve of their own output. Apple’s best is not good enough. It is all middling to inferior, creaky and uninspired. Organisational dysfunction, a mountain of tech debt, distorted leadership incentives, and lack of passion likely all play a role. Whatever the cause, fixing this stems from the top. Services’ engineering and design teams have to be empowered with the resources and time to effectively execute and ultimately deliver excellence to customers, like the hardware teams clearly are.

Netflix Loses Subscribers For First Time In A Decade


Our revenue growth has slowed considerably as our results and forecast below show. Streaming is winning over linear, as we predicted, and Netflix titles are very popular globally. However, our relatively high household penetration - when including the large number of households sharing accounts - combined with competition, is creating revenue growth headwinds. The big COVID boost to streaming obscured the picture until recently. While we work to reaccelerate our revenue growth - through improvements to our service and more effective monetization of multi-household sharing - we’ll be holding our operating margin at around 20%. Key to our success has been our ability to create amazing entertainment from all around the world, present it in highly personalized ways, and win more viewing than our competitors.

Netflix went into the quarter guiding for 2.5 million subscriber additions, then shocked everyone with poor results: minus 500,000 subscribers this quarter, and another minus 2 million expected next quarter. They blamed 700,000 of the lost subscribers on their exit from the Russian market, which softens the current quarter’s numbers slightly. Obviously, the figures aren’t glowing, but I was struck by how strong the blowback on social media was to the news. Overnight, Netflix has suddenly become a service that nobody uses? That’s how my Twitter feed was acting on Wednesday, at least. Netflix may not be the exciting place anymore, but it’s undeniably the bedrock of modern television, and I don’t see that changing.

Firstly, the company is a behemoth on every metric; viewership, subscriber count and profitability. Sensations like Squid Game cannot be created by any other service; only Netflix has the worldwide content development infrastructure to get it made, and the immense audience reach to make it popular. Kicking Netflix out of culture is going to be really hard. I think nigh impossible. This is especially true when you consider they are the only streaming service with profitable operations. That means everybody else is having to sink into savings (and debt) to merely try and catch up. Maybe Disney+ will overtake them, but Netflix will be able to stick around as a top three player forever.

Of course, the investor base wanted growth, and the shares got pummelled for missing those expectations. I’m not talking about the stock market side here. A funny parallel is the community opinion of Apple TV+, which has also gone through an about-face recently. For two years straight, the general punditry derided TV+ as a silly venture that would never succeed. Suddenly, Apple wins an Oscar and now everyone thinks TV+ is ruling the world. I have liked TV+ since the beginning, but the reality is TV+ is still an upstart with a long road ahead of it. It is hilarious how quickly people turn on something, positively in the recent case of TV+ or negatively in the recent case of Netflix. The knee-jerk reactions are rarely on the money, and do not reflect the fact that streaming is a very long game.

When TV+ was announced, I argued Apple had a good shot at being successful with it, at a time when the common take was that TV+ would be a failure and fade away. My argument was based on the idea that the fundamentals of streaming are relatively simple: get content people want to watch. The way to do that is to attract talent with the promise of money, audience, and prestige. Resource-rich Apple had the money part guaranteed, and a decent runway at attaining the rest. Fast forward a couple of years and, sure enough, Apple has picked up the necessary prestige; audience size remains a question mark. Compare that to Netflix, which has a huge audience, a lot of money (recall they are the only profitable service so far) and decent, if diminished, levels of awards recognition and prestige. If I believed Apple could do it from scratch on that basis, then surely Netflix can from a position of incumbent strength. As long as those fundamentals remain strong, I’m not worried about Netflix’s future. The subscriber numbers need to be a lot worse before I deem it anything other than turbulence and growing pains.

Even as they reach saturation, they have a lot more headroom to potentially exploit. Netflix has been somewhat complacent when it comes to business model expansion, with CEO Reed Hastings preferring a simpler, streamlined approach. Now, he is forced to relent slightly and develop things like a cheaper ad-supported tier of Netflix; something which could catapult their market share even higher than it already is. They can also juice their financials and extract some incremental growth out of the announced crackdown on account sharing. They don’t need to annoy their entire user base with account verification screens; just the fraction that is flagrantly abusing the system with one password shared amongst three, four, or more households. That’s still tens of millions of people to try to monetise, revenue they can then reinvest in content for the long term. As a reminder of the scale here, Netflix has more freeloaders than most of these services have total subscribers. I think Netflix will be just fine.

Apple Introduces Mac Studio


Apple today introduced Mac Studio and Studio Display, an entirely new Mac desktop and display designed to give users everything they need to build the studio of their dreams. A breakthrough in personal computing, Mac Studio is powered by M1 Max and the new M1 Ultra, the world’s most powerful chip for a personal computer. It is the first computer to deliver an unprecedented level of performance, an extensive array of connectivity, and completely new capabilities in an unbelievably compact design that sits within arm’s reach on the desk.

I see the Mac Studio as the spiritual successor to the 2013 Mac Pro. It is meant to be small and compact enough to sit on the desk, not under it. It has a lot of IO ports for attaching external storage, additional displays and other peripherals, but it is not a user-expandable machine. The 2013 Mac Pro was compact, if only because Apple gambled on a future of GPU-oriented computation that never really panned out. Fast forward to the present day, and there is no need for trickery; the sheer efficiency of Apple Silicon enables the Mac Studio to boast top-tier performance in CPU and GPU benchmarks, all housed in an enclosure even smaller than the 2013 Mac Pro.

However, whereas that Mac Pro made a statement, the Mac Studio is wholly perfunctory in its design. The Mac Pro was a cooler object: a perfect cylinder with a shiny reflective casing, and it even had backlit USB ports that illuminated when an accelerometer detected the machine had been turned around. The Mac Studio is a boring box with rounded corners, and has no party tricks to speak of. The trashcan was a truly wild, out-there design. Apple was admittedly less ambitious with the 2019 Mac Pro, which resembles a traditional tower workstation, but that too leaves more of a lasting impression than the Mac Studio thanks to its unique lattice of milled circular vent holes.

In truth, the Mac Studio is basically just a fat Mac mini. Compared to a Mac Pro, or the 2021 MacBook Pro, or the colourful M1 iMac, the Mac Studio’s industrial design doesn’t offer much to get excited about — save for the philosophical milestone that is front-facing IO. That’s a bit of a shame, because the introduction of a brand new model of Mac is precisely the best time to do something entirely new. But Apple opted to play it safe this time, perhaps because the failings of recent attempts to be more adventurous — like the butterfly keyboard — are still fresh in their minds. The Mac Studio contains radical innards in a plain exterior. That being said, in all other respects, the Mac Studio looks set to be a home run, so any feelings of disappointment will ultimately be fleeting.

Apple Announces Alternative Payment Systems Policy For Apps In Netherlands


Consistent with the ACM’s order, dating apps that are granted an entitlement to link out or use a third-party in-app payment provider will pay Apple a commission on transactions. Apple will charge a 27% commission on the price paid by the user, net of value-added taxes. This is a reduced rate that excludes value related to payment processing and related activities. Developers will be responsible for the collection and remittance of any applicable taxes, such as the Netherlands’ value-added tax (VAT), for sales processed by a third-party payment provider. Developers using these entitlements will be required to provide a report to Apple recording each sale of digital goods and content that has been facilitated through the App Store. This report will need to be provided monthly within 15 calendar days following the end of Apple’s fiscal month.

Apple is doing everything they can to toe the line of compliance with the Netherlands ruling on alternative payment systems for dating apps. I’m not sure you could find a webpage more emblematic of following the letter of the law rather than the spirit of the law. They are also simultaneously appealing the decision, and that tone comes across in the text too, as if each sentence is dripping with resentment.

I can only assume this is just the first round of many bouts of back-and-forth over terms that will eventually be replicated on a global scale. This court ruling concerns enabling competition for in-app payment systems, rather than the general monopoly of mobile app stores. However, the two are obviously inextricably linked. No one is going to use a third-party payment system when the saving compared to Apple’s built-in offering is a measly 3%. These terms will not incite competition in payment systems because no developer will ever implement one. Even if the 3% just about covers independent credit card processing fees, the customer acquisition costs and additional support overhead alone will make it an unprofitable course of action.

Apple’s stated policy is not sustainable long term. I don’t know whether it will be changed as a result of these proceedings, or a different lawsuit down the road, but it will change. Everyone agrees 27% is a joke. I think it’s quite reasonable to say that 0% would also be unfair to Apple; Apple deserves something. It’s just a matter of figuring out an acceptable rate in a market which lacks other forms of competition, like alternative app stores or native app sideloading. There are other distribution issues that Apple’s App Store model imposes, but ultimately money talks, and all of this legal theatre is a protracted negotiation over that core commission structure. As a member of the Small Business program myself, 12% (15% − 3%) sure feels a whole lot fairer than 27%. I honestly believe most of these big company lawsuits would fall away if Apple announced that 12% was going to be the new normal for everyone.
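For a sense of the arithmetic, here is a back-of-the-envelope comparison of what a developer keeps under each rate. The 2.9% card processing fee is an illustrative assumption, not a quoted figure:

```python
def developer_net(price, commission, processing=0.0):
    """Developer's proceeds on a sale after the store commission and any
    third-party payment processing fee (all rates as fractions)."""
    return price * (1 - commission - processing)

price = 10.00  # price paid by the user, net of VAT
print(round(developer_net(price, 0.30), 2))         # standard IAP → 7.0
print(round(developer_net(price, 0.27, 0.029), 2))  # Dutch entitlement → 7.01
print(round(developer_net(price, 0.15), 2))         # Small Business IAP → 8.5
print(round(developer_net(price, 0.12), 2))         # hypothetical 12% → 8.8
```

Before any customer acquisition or support costs, the Dutch entitlement leaves the developer roughly where they started, which is why no one will adopt it.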

Use Face ID With A Mask


With iOS 15.4 beta 1 Apple is starting to test the ability to use Face ID while wearing a mask but without the need for an Apple Watch around. Not only that, but the company is also improving glasses support. With this new iOS 15.4 feature, it will be possible to unlock your iPhone with the facial recognition feature focusing on the area around your eyes to authenticate.

Face ID isn’t superior to Touch ID in every respect, and vice versa. For instance, even five years on from the introduction of the TrueDepth camera system with iPhone X, Apple recommends that identical twins use only passcode authentication to unlock, because Face ID cannot reliably tell them apart. Touch ID did not have this problem. Buying with Apple Pay is also nicer with Touch ID, compared to the double-click dance that Face ID requires. On balance, if pressed to choose just one approach, I think Face ID is the obvious choice, because its best benefits are really great; first-time setup is far more streamlined than the fingerprint registration process, and the most frequent use case, unlocking your phone, is so much more elegant with Face ID. It also has a magical quality that Touch ID lacks. It is much cooler to look at the screen than to place your thumb on a fingerprint reader.

This is what Apple has gone with since 2017: Face ID only, in the name of simplicity and (partly) cost savings. Pre-pandemic, I think they could have gotten away with that strategy forever. Post-2020, the see-saw of tradeoffs suddenly weighs down very much in the other direction. Until the release of the iOS 15.4 beta, the return of Touch ID seemed inevitable to me.

Identical twins could hypothetically enable Touch ID and not Face ID. That would be much better than the current compromise of being forced to use just a passcode.

The existence of the Unlock with Mask feature probably means that Apple doesn’t have to ship an iPhone with Touch ID again. I would certainly take it as a signal that a Touch ID iPhone is not coming back anytime soon. But I still think they should do it. Long term, the best iPhone is surely one that offers both Face ID and Touch ID (either via an under-display scanner or an iPad-esque side button sensor). Users would be able to set up both types of biometrics, and the iPhone would simply unlock as soon as either is presented to it. It really would be a best-of-both-worlds scenario, with each biometric’s advantages making up for the drawbacks of the other.

I also think it is somewhat telling that Apple goes out of its way to call out, right there in the settings UI, that the accuracy of Face ID is lessened when using the mask unlock mode. The peak of COVID and mask-wearing is (hopefully) behind us, but masks aren’t going away altogether. The Unlock with Mask feature is going to be widely used for years to come, and it doesn’t feel very sustainable for Apple’s solution to this problem to be something that they openly warn significantly impacts the security of your device. There is also the ongoing threat of other wearable items — like sunglasses or even Apple’s own forthcoming headset product — impairing the usefulness of Face ID over the course of this decade. Bringing back Touch ID in some form is a hedge against all of those potential risks, and one that many people would applaud.

Some Minor Issues With The Mini-LED MacBook Pro Display

The new generation of MacBook Pro features a terrific display. The colour depth, maximum brightness and contrast levels it can achieve are truly stunning and a huge leap over previous models of MacBook. It’s also significantly higher resolution than the 2019 16-inch, and the increased pixel density is noticeably better in terms of the visible detail of photos and videos. The extra resolution also enabled Apple to restore 2x Retina mode as the standard display setting without sacrificing effective screen real estate. The panel’s 120Hz refresh rate is icing on the cake, even if macOS still hasn’t quite caught up to the hardware capability (although the 12.2 beta seed is much better in this regard).

However, all display technologies have tradeoffs, and the mini-LED design seen in the MacBook Pro is no different. Blooming is often discussed as a downside of mini-LED but funnily enough, I don’t see it crop up too much in how I use my computer. It’s there if you seek it out, but you really have to hunt.

As shown in the video above, a persistent niggle for me is the vignetting effect around the edges of the display. The extreme edge of the screen is just slightly darker all the way around, and it sticks out when the rest of the screen is uniformly bright. You can observe this border pretty much all the time. It’s annoying. I’d put it in the same category as the notch. In practice, because it only impacts the screen quality at the very fringes, it rarely intrudes on the content you are viewing and your brain learns to ignore the periphery imperfections.

Another more subtle artefact is the screen response time when changing between light and dark content. Basically, if you have a big dark-coloured blob and then quickly change to a new screenful of content that is mostly white, it takes a few extra milliseconds for the black regions to turn white. I haven’t precisely timed it, and it might be as small as a 100-millisecond lag, but it is noticeable to the human eye. It’s sort of like OLED jelly scrolling, but less prevalent.

Modern LCD backlights certainly don’t have the vignetting problems, and screen response time can be consistently as low as 1 millisecond. Apple clearly made the right choice to move from LCD to mini-LED though. It is simply superior in most regards. A hypothetical decision between a MacBook Pro with mini-LED and one with an OLED screen is less clear cut. OLEDs don’t exhibit the edge vignetting and have no blooming because each pixel is individually lit, but they bring their own issues like burn-in and jelly scrolling to contend with.

Every Apple TV+ Show Reviewed In Five Minutes

This is my incredibly succinct five minute review of every Apple TV+ show released to date. I figured I might as well get this out the way before the volume of content makes it untenable to do; even this video ignores the dozen original movies the company has put out so far. Don’t take it too seriously. The main takeaway, if any, is that Apple TV+ continues to expand its content library, with more hits than misses, and will (easily) eclipse 150 premium originals by the end of 2022. The user interface and app experience issues remain the service’s biggest roadblock to attaining mainstream uptake from the general public.

The Metaverse Is Not A Real Thing

I can’t quite believe how much ink has been spilled these last few months about a concept that doesn’t exist and is — at best — a pipe dream. The metaverse is not a thing. It’s meaningless. Facebook had an hour-long keynote event which consisted wholly of computer-generated sequences of floating Memoji/Xbox avatars. Microsoft joined the fray with similarly unsubstantiated claims that Teams is becoming a metaverse.

The bandwagoning of the name ‘metaverse’ is dumb, but I’m not really interested in that aspect. I’m just going to ignore all of that misappropriation. Marketing teams always do stupid stuff; see mobile carriers’ ongoing overpromising about what 5G can do.

I take the meaning of “metaverse” to be the generally accepted idea that people will wear some kind of headset or glasses and be able to access a virtual world, meeting up with others in some kind of virtual geography. The realism and quality of the experience is promised to be so good that your brain believes you are actually there, with your senses succumbing to the generated interface so completely that you suspend disbelief and treat what you are interacting with as real. Perhaps it is not an all-encompassing experience; instead, augmented reality avatars and objects appear to materialise in the space around you and behave accordingly.

Either way, it’s not feasible. It’s not a real thing because it is not grounded in any sort of technological truth. There’s not a tech demo on earth that can deliver anything close to that description; nothing bespoke exists and something for the mass market is even more illusory. It’s not a real thing.

Break the vision apart into any individual element and the state-of-the-art technology is nowhere close to good enough. Realtime visual fidelity has to advance leaps and bounds to be as convincingly legitimate as what Facebook ‘demonstrated’ in its mockups. I’d love to know how long it took whatever render farm they used to make these videos. Probably days. Even the mockups aren’t what I’d call convincing to a human, because the avatars look like avatars and not people. If that is the aim, forget it. We can’t even get CGI people in Hollywood movies to reliably break through the uncanny valley, and those films take hours of render time to generate a single frame of footage. For a portable headset, it’s not even on the horizon. Five years. Ten years. Maybe longer. It’s not going to happen.

Graphics are just one of a thousand problems. All the other senses need to be satiated too, for a start, and the technology for generating synthetic smells, tastes and touch is far behind where we are with GPUs for photorealistic imagery. One of the things that motivated me to write up this ridiculousness in a blog post is the fencing demo from Facebook’s Meta keynote. Zuckerberg is shown playing against a hologram of a professional athlete, the two waving swords at each other. In the demo, when he lunges, she parries, with the swords perfectly stopping in mid-air. How on earth is that going to be possible to do, outside of a visual effects mockup? There’s no way to recreate the sensation of metal hitting metal and the sabres rebounding. Rather than an in-air clash of swords, the real sword is just going to pass right through the VR one. A vibration motor and some haptic feedback doesn’t cut it, although that doesn’t stop Zuckerberg miming contact and saying “that’s a little too realistic”. And that’s before we get to network latency hurdles and a myriad of other issues.

What these companies are touting is that a fully immersive, engrossing, alternate world is only a few years away, just out of sight. The truth is it’s not anywhere close. I’m not a denier of augmented reality technology altogether. There will be continued small and meaningful improvements to the enterprise and consumer offerings, many of which will find their niche and bring genuine utility and/or entertainment. It will be able to enhance our lives. For instance, VR gaming is basically already here, save for some less clunky hardware to use it on and some nicer graphics. I could even see how a portable headset, or smart glasses, product could replace the phone in the medium term as the primary communications device for humanity. The power-efficient-yet-technically-capable hardware to pull that off is still a ways out — maybe ten years, probably twenty — but it’s a plausible future that is deserving of consideration. I’d put that idea in the same bucket as self-driving cars or consumer space travel. These things live in the realm of tech demo today, but they have shown feasibility and appear attainable. Contrast that to the “metaverse”, which is merely made-up fantasy.

Craig Federighi Discusses Sideloading At Web Summit Conference

Craig Federighi, Web Summit 2021:

Even if you have no intention of sideloading, people are routinely coerced or tricked into doing it. And that is true across the board, even on platforms like Android that make sideloading somewhat difficult to do.

Apple doesn’t trot out Federighi to a third-party conference with a highly-produced Keynote deck for the fun of it. They are clearly concerned that European lawmakers are actually going to do something they don’t want; that is, pass laws requiring them to offer sideloading as an option. On the whole, he presents good arguments against the policy. You can watch the full thing here.

However, one particular talking point highlights a severe weakness that I see in Apple’s stance. Federighi posits that a social networking app may choose to “avoid the pesky privacy protections of the App Store” and only make their apps available via sideloading. Apple’s customers would then have to leave the ‘safe’ Apple software ecosystem, or lose touch with their family and friends. This is sort of true. But what is omitted is that an app choosing to leave the App Store is not primarily doing so to avoid Apple’s privacy standards, but because it would then be able to avoid Apple’s IAP rules.

Apple benefits financially — measured in the billions of dollars per year — by keeping the App Store as a monopoly. However much it wants to tout the user privacy and safety benefits, Apple’s position would be far stronger if cynics weren’t able to point to the money being accrued by the App Store gravy train. The 30% cut is ultimately the driving factor that led Europe to want to pass these competition laws in the first place. If Apple truly wants to put customers first and protect them from sideloading, alternative app stores and the like, it needs to compromise on its business policies somehow.

Developers Must Opt In To 120Hz Animations On iPhone 13 Pro


As a general rule, for better visual appearance use faster refresh rates when animating fast-moving items that travel across large areas of the screen. But, if you’re animating a smaller item that doesn’t move over a great distance, but “animates in place”, that typically doesn’t benefit from a high refresh rate. You can use slower refresh rates when animating smaller items without any impact to the visual appearance. Selecting the right animation speed is always a tradeoff between a smoother visual appearance and saving energy. As a guiding principle, strive for the lowest animation speed possible while maintaining good visual appearance.

If it wasn't for the PR stink on Friday, presumably this documentation would have taken even longer to appear.

This documentation should have been made available alongside the iOS 15 and Xcode 13 Release Candidate a week ago. Because it wasn’t, app developers didn’t even have a chance to get their software ProMotion-ready for iPhone 13 launch day. Indeed, the lack of published documentation meant that everyone assumed that adopting 120Hz would be done automatically by the system. This is how it works on the iPad Pro, which has supported ProMotion since 2017. But for the iPhone 13, high frame rate animation is actually gated twice, firstly by a global Info.plist key and secondly by the fact that each individual animation in the codebase will need to be audited and marked as wanting high refresh rate pacing.
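Per Apple’s documentation, the global gate is a Boolean Info.plist key; a minimal sketch of what opting in looks like (the key name is taken from Apple’s ProMotion guidance):

```xml
<!-- Info.plist: allow animations in this app to exceed 60Hz on iPhone 13 Pro. -->
<key>CADisableMinimumFrameDurationOnPhone</key>
<true/>
```

Even with the key set, each animation still has to request a higher rate individually, for example by setting a `preferredFrameRateRange` on a `CADisplayLink`.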

All apps will see ProMotion benefits when the user is actively interacting with the display and generating touch events, which thankfully means scrolling is always ultra-responsive and fluid across the system.

However, this also puts an onus on developers to meticulously check all the animations in their app and make the code changes where it makes sense. 60 FPS animations in an app like Twitter will stick out like a sore thumb if the user has just finished scrolling their timeline at a smooth 120 FPS rate. The stark contrast could even make it feel like the app is lagging, as the user’s brain becomes accustomed to seeing smoother motion. Whilst the work needed to opt in is only one line of code, it is a pretty laborious task to do that for every animation in an app and I fear that very few developers will bother. Even for someone who cares, it’s an easy thing to forget when adding a feature or implementing a new screen of an app.

Clearly, Apple believes the benefit to battery life is worth the pain of enforcing selectivity. The second half of this document actually recommends not adopting 120Hz indiscriminately and reserving its use for “high-impact animations” only. That is, animations that cause significant on-screen content changes like a full-screen wipe to a new page. I guess we will have to trust their judgement on this, but it definitely should have been accompanied by better communication as it is such a big departure from the precedent set by ProMotion on the iPad Pro.

Apple Will 'Help' Some Developers Put A Single Link To Their Website In Their Apps


Because developers of reader apps do not offer in-app digital goods and services for purchase, Apple agreed with the JFTC to let developers of these apps share a single link to their website to help users set up and manage their account. While in-app purchases through the App Store commerce system remain the safest and most trusted payment methods for users, Apple will also help developers of reader apps protect users when they link them to an external website to make purchases.

Apple’s resistance to changing any App Store rules of its own accord means that you have to read any of these announcements with extreme care and caution. The details matter.

In this case, I am perturbed by the fact that there are a lot of words, a lot of paragraphs, surrounding what should be a straightforward policy change: allowing developers to link out to their website on the sign-up screen.

A couple of the limits are made transparent in the copy; this revised rule applies to reader apps only and developers are allowed a ‘single’ link only.

Setting aside Apple’s self-serving and/or contradictory rules around what counts as a reader app, what the heck does a single link mean in a digital world? It’s a hilarious concept. If I style a link with a big font, placed on a rectangle of prominent background colour, is that a single link … or is that a button? What if the single link takes up 90% of the screen, in a huge font? If I put a static link at the bottom of the screen like a footer, and the link doesn’t move or disappear when the user navigates to a new page, is that still a single link? I mean it’s still the same link, it is just permanently visible.

That’s one whole ordeal. The second part I zero in on — in my pessimistically critical reading — is ‘help’. What constitutes help? Of course, I fully expect Apple to lay out rules around the design and behaviour of the destination websites, possibly including limits on what payment methods can be used and the language used in the sign-up form. Furthermore, because this policy is not coming into effect until next year, it seems like this ‘help’ is going to include some kind of technical component too. Maybe Apple will have a special new API or something that ensures the link out to the website doesn’t change after the fact, or must point to a specifically registered (pre-approved) domain. Apple could ‘help’ by requiring use of a sandboxed web view that somehow doesn’t have access to a user’s standard AutoFill information.

Thirdly, all these developers obviously want the ability to link out to the web in order to encourage their customers to use payment methods other than Apple’s In-App Purchase. Apple’s press release implies that motivation but the actual wording isn’t so direct: it says the link is to enable users to “set up and manage their account”.

You’d hope Apple would comply with the Japanese law in good faith, but I’m certainly not ruling out something more sinister. I don’t think the implementation of ‘help’ will be onerous, but perhaps just inconvenient enough to make some percentage of developers not bother.

Ultimately, these rules should have a positive impact on user experience and a very small negative impact on Apple’s financials. Apple’s revenue from reader apps is already small, because those are the exact category of apps already allowed to circumvent In-App Purchase altogether. That being said, this Japanese settlement is not going to fundamentally resolve any of the other impending lawsuits; Spotify benefits from these new rules but will want more and will keep pushing, Epic is just going to be more furious that they don’t benefit at all, and there are plenty more EU and US investigations to come.

On-Device CSAM Scanning For iCloud Photos


Another important concern is the spread of Child Sexual Abuse Material (CSAM) online. CSAM refers to content that depicts sexually explicit activities involving a child.

To help address this, new technology in iOS and iPadOS will allow Apple to detect known CSAM images stored in iCloud Photos. This will enable Apple to report these instances to the National Center for Missing and Exploited Children (NCMEC).

Apple’s method of detecting known CSAM is designed with user privacy in mind. Instead of scanning images in the cloud, the system performs on-device matching using a database of known CSAM image hashes provided by NCMEC and other child safety organizations.

The threshold is set to provide an extremely high level of accuracy and ensures less than a one in one trillion chance per year of incorrectly flagging a given account.

Only when the threshold is exceeded does the cryptographic technology allow Apple to interpret the contents of the safety vouchers associated with the matching CSAM images. Apple then manually reviews each report to confirm there is a match, disables the user’s account, and sends a report to NCMEC. If a user feels their account has been mistakenly flagged they can file an appeal to have their account reinstated.
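To make the quoted ‘one in one trillion’ claim concrete: requiring a threshold of independent matches before any account is flagged drives the account-level false-positive rate down exponentially. A back-of-envelope sketch — the per-image error rate, library size and threshold here are invented illustrative numbers, not Apple’s actual parameters:

```python
from math import comb

def account_flag_probability(p_image: float, n_images: int, threshold: int) -> float:
    """Chance that at least `threshold` of `n_images` photos each independently
    produce a false hash match, given per-image false-positive rate `p_image`."""
    return sum(
        comb(n_images, k) * p_image**k * (1 - p_image) ** (n_images - k)
        for k in range(threshold, n_images + 1)
    )

# Hypothetical numbers: a one-in-a-million per-image error rate, a 1,000-photo
# library, and 10 required matches before any report becomes possible.
print(account_flag_probability(1e-6, 1_000, 10))  # vanishingly small
```

Even with generous assumptions, the result lands many orders of magnitude below one in a trillion, which is presumably how Apple arrives at its headline figure.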

The naysayers of the last week are not necessarily wrong. This issue is nuanced and Apple’s decisions involve concessions. Personally, I think Apple have done well here. They probably could have handled the communication surrounding the announcement better, but the actual functionality and policy decisions are reasonable.

In a world where Apple is insulated from all external pressures, I don’t think they would have done this at all. Ideally, Apple would like it such that your device and your data is sacrosanct. Impenetrable. That fits with their business model, it fits with their morals, and it fits with their marketing.

However big Apple is, they do still have to conform to government expectations and the law. That’s why in China they let a third-party Chinese company run the iCloud data centre. They can make it as secure as can be under that arrangement, but it’s still a compromise from the ideal situation where the data centres are managed by Apple themselves (and is how it happens in every other region of the world).

In the US, big tech companies are expected to help governments track and trace child abuse content happening on their platforms. Apple runs a big platform full of user-generated content, iMessage and iCloud, yet their contribution to this specific cause has been disproportionately small, infinitesimal even. Facebook reports something like 20 million instances of CSAM a year to the NCMEC organisation. In the same timeframe, Apple barely crossed the 200 mark.

So, this is the underlying motivation for these new policies. Big government bodies want Apple to help to track down the spread of CSAM on their devices, in just the same way that other big cloud companies like Google, Amazon, and Facebook comply by scanning all incoming content to their platforms.

You have to assume that privacy issues are a key reason why Apple has historically been so lax in this department. It’s not that Apple has sympathy for the people spreading child pornography. Why right now? That is still unclear. Perhaps, behind closed doors, someone was threatening lawsuits or similar action if Apple didn’t step up to par soon. Either way, it’s crunch time.

I’m sure governments would welcome a free-for-all backdoor. Of course, Apple isn’t going to let that happen. So, what Apple has delivered is a very small, teensy-tiny window into the content of users’ devices.

The actual system is almost identical to what Google/Amazon/Facebook do on their servers, attempting to match against a database of hashes provided by NCMEC. Except, Apple runs the matching process on device.

I’ve seen a bit of consternation around this. I don’t think it’s a big deal where the matching process happens. Arguably, if it is happening on device, security researchers have more visibility into what Apple is doing (or what a hypothetical nefarious actor is doing with the same technology) compared to if it was taking place on a remote server.
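For intuition, the matching step itself is conceptually simple. A deliberately simplified sketch in Python — in the real system the hashes come from a perceptual hashing function (Apple calls theirs NeuralHash) and the database is cryptographically blinded so the device cannot read it, so the plain strings and visible set below are stand-ins, not how the actual pipeline works:

```python
# Hypothetical stand-ins: real hashes are opaque perceptual hashes, and the
# device never sees the database contents in the clear.
KNOWN_CSAM_HASHES = {"hash_a", "hash_b", "hash_c"}
THRESHOLD = 3  # matches required before any report becomes possible

def matching_hashes(photo_hashes: list[str]) -> list[str]:
    """Which of the user's photo hashes appear in the known-bad database."""
    return [h for h in photo_hashes if h in KNOWN_CSAM_HASHES]

def should_escalate(photo_hashes: list[str]) -> bool:
    """Below the threshold, nothing is decryptable or reportable."""
    return len(matching_hashes(photo_hashes)) >= THRESHOLD

print(should_escalate(["hash_a", "cat.jpg", "hash_b"]))      # False: under threshold
print(should_escalate(["hash_a", "hash_b", "hash_c", "x"]))  # True: threshold met
```

The location of this comparison — device or server — doesn’t change the logic, which is partly why I don’t think it’s a big deal where the matching happens.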

You do have to basically take Apple’s word that it is only scanning photos destined to be sent to iCloud. Sure. But you have to take Apple’s word on a lot of things. A malicious state could secretly compel Apple to do much worse with much less granularity. I don’t trust Apple any more or any less about this than the myriad other possible ‘backdoors’ that iOS could harbour. The slippery slope argument is a concern, and worth watching out for in the future, but I don’t see anything here that is an obvious pathway to that.

You also have to stay grounded in the baseline case. Right now, if you use iCloud Backup (clarifying again that this new Child Safety stuff only applies if you use iCloud Photos; photos stored in the backup only are exempt), all of your phone’s data is stored on an Apple server somewhere in a manner that lets Apple read your data if they so desire. This also means that a government can subpoena Apple to hand over that information. This is not a secret. Apple has done it countless times, purportedly in the presence of a valid warrant, including very publicly in the midst of the PR fiasco that was the 2016 San Bernardino shooter case.

With that in mind, almost all of your phone is already accessible to law enforcement or state actors if they so desire. This new entry point in the name of Child Safety pales in comparison to that level of potential access.

One assumption I’ve seen floated around is that Apple wants to roll out an end-to-end encrypted iCloud Backup option in the future. The criticism is that this on-device scanning policy undermines the point of E2E because the scanner would still be able to “spy” on the data before it was cryptographically sealed. I guess that’s true to a degree, but I’d still rather have the option of end-to-end backups with a CSAM scanner in place than not have it at all, which is the world we live in today.

The weakest link in the chain on the technical side of this infrastructure is the opaqueness of the hashed content database. By design, Apple doesn’t know what the hashes represent, as Apple is not allowed to knowingly traffic illicit child abuse material. Effectively, the system works on third-party trust. Apple has to trust that the database provided by NCMEC — or whatever partner Apple works with in the future when this feature rolls out internationally — only includes hashes of known CSAM content.

I think there’s a reasonable worry that a government could use this as a way to shuttle other kinds of content detection through the system, by simply providing hashes of images of political activism or democracy or whatever some dictatorial leader would like to oppress. Apple’s defence for this is that all flagged images are first sent to Apple for human review, before being sent on. That feels flimsy.

My suggestion would be that all flagged images are reported to the user. That way, the system cannot be misused in secret. This could be built into the software stack itself, such that nothing is sent onward unless the user is notified. In press briefings, Apple has said they don’t want to do this because their privacy policy doesn’t allow them to retain user data, and because a criminal who really is sharing CSAM would simply delete their photo library when alerted. I think tweaks to policy could solve that. For instance, it would be very reasonable for a flagged image to be automatically frozen in iCloud, unable to be deleted by the user, until it has gone through the review process. The additional layer of transparency is beneficial.