Apple Introduces Mac Studio

Apple:

Apple today introduced Mac Studio and Studio Display, an entirely new Mac desktop and display designed to give users everything they need to build the studio of their dreams. A breakthrough in personal computing, Mac Studio is powered by M1 Max and the new M1 Ultra, the world’s most powerful chip for a personal computer. It is the first computer to deliver an unprecedented level of performance, an extensive array of connectivity, and completely new capabilities in an unbelievably compact design that sits within arm’s reach on the desk.

I see the Mac Studio as the spiritual successor to the 2013 Mac Pro. It is meant to be small and compact enough to sit on the desk, not under it. It has a lot of IO ports for attaching external storage, additional displays and other peripherals, but it is not a user-expandable machine. The 2013 Mac Pro was compact, if only because Apple gambled on a future of GPU-oriented computation that never really panned out. Fast forward to the present day, and there is no need for trickery; it is the sheer efficiency of Apple Silicon that enables the Mac Studio to boast top-tier performance in CPU and GPU benchmarks, all housed in an enclosure even smaller than the 2013 Mac Pro.

However, whereas that Mac Pro made a statement, the Mac Studio is wholly perfunctory in its design. The Mac Pro was a cooler object: a perfect cylinder with a shiny reflective casing, and it even had backlit USB ports that illuminated when an accelerometer detected the machine had been turned around. The Mac Studio is a boring box with rounded corners, and has no party tricks to speak of. The trashcan was a truly wild, out-there design. Apple was admittedly less ambitious with the 2019 Mac Pro, which resembles a traditional tower workstation, but that too leaves more of a lasting impression than the Mac Studio thanks to its unique lattice of milled circular vent holes.

In truth, the Mac Studio is basically just a fat Mac mini. Compared to the Mac Pro, or the 2021 MacBook Pro, or the colourful M1 iMac, the Mac Studio’s industrial design doesn’t offer much to get excited about — save for the philosophical milestone that is front-facing IO. That’s a bit of a shame, because the introduction of a brand new model of Mac is precisely the best time to do something entirely new. But Apple opted to play it safe this time, perhaps because the failings of recent attempts to be more adventurous — like the butterfly keyboard — are still fresh in their minds. The Mac Studio contains radical innards in a plain exterior. That being said, in all other respects the Mac Studio looks set to be a home run, so any feelings of disappointment will ultimately be fleeting.

Apple Announces Alternative Payment Systems Policy For Apps In Netherlands

Apple:

Consistent with the ACM’s order, dating apps that are granted an entitlement to link out or use a third-party in-app payment provider will pay Apple a commission on transactions. Apple will charge a 27% commission on the price paid by the user, net of value-added taxes. This is a reduced rate that excludes value related to payment processing and related activities. Developers will be responsible for the collection and remittance of any applicable taxes, such as the Netherlands’ value-added tax (VAT), for sales processed by a third-party payment provider. Developers using these entitlements will be required to provide a report to Apple recording each sale of digital goods and content that has been facilitated through the App Store. This report will need to be provided monthly within 15 calendar days following the end of Apple’s fiscal month.

Apple is doing everything they can to just barely comply with the Netherlands ruling on alternative payment systems for dating apps. I’m not sure you could find a webpage more emblematic of following the letter of the law rather than the spirit of it. They are also simultaneously appealing the decision, and that tone comes across in the text too; each sentence is dripping with resentment.

I can only assume this is just the first bout in many rounds of back-and-forth over terms, terms that will eventually be replicated and reproduced on a global scale. This court ruling concerns enabling competition for in-app payment systems, rather than the general monopoly of mobile app stores. However, the two are obviously inextricably linked. No one is going to use a third-party payment system when the saving compared to Apple’s built-in offering is a measly 3%. These terms will not incite competition in payment systems because no developer will ever implement one. Even if the 3% just about covers independent credit card processing fees, the customer acquisition costs and additional support overhead alone make it an unprofitable course of action.
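To make the arithmetic concrete, here’s a rough back-of-the-envelope sketch. The figures are illustrative only: it ignores the VAT handling differences and assumes roughly 3% in external payment processing fees.

```swift
// Back-of-the-envelope developer proceeds on a €9.99 sale (illustrative only).
let price = 9.99
let processingFee = 0.03   // assumed cost of an external payment processor (~3%)

// Standard App Store rate: Apple keeps 30%.
let viaAppleIAP = price * (1 - 0.30)                           // ≈ €6.99

// Dutch dating-app entitlement: 27% to Apple, plus the external processor's cut.
let viaThirdParty = price * (1 - 0.27 - processingFee)         // ≈ €6.99, i.e. no real saving

// Small Business Program: 15% via IAP versus a hypothetical 12% external rate.
let smallBusinessIAP = price * (1 - 0.15)                      // ≈ €8.49
let smallBusinessExternal = price * (1 - 0.12 - processingFee) // ≈ €8.49

print(viaAppleIAP, viaThirdParty, smallBusinessIAP, smallBusinessExternal)
```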

Apple’s stated policy is not sustainable long-term. I don’t know whether it will be changed as a result of these proceedings or a different lawsuit down the road, but it will change. Everyone agrees 27% is a joke. I think it’s quite reasonable to say that 0% would also be unfair to Apple; Apple deserves something. It’s just a matter of figuring out what an acceptable rate is in a market which lacks other forms of competition, like alternative app stores or native app sideloading. There are other distribution issues that Apple’s App Store model imposes, but ultimately money talks, and all of this legal theatre is a protracted negotiation over that core commission structure. As a member of the Small Business Program myself, 12% (15%−3%) sure feels a whole lot fairer than 27%. I honestly believe most of these big company lawsuits would fall away if Apple announced that 12% was going to be the new normal for everyone.

Use Face ID With A Mask

9to5Mac:

With iOS 15.4 beta 1 Apple is starting to test the ability to use Face ID while wearing a mask but without the need for an Apple Watch around. Not only that, but the company is also improving glasses support. With this new iOS 15.4 feature, it will be possible to unlock your iPhone with the facial recognition feature focusing on the area around your eyes to authenticate.

Face ID isn’t superior to Touch ID in every respect, and vice versa. For instance, even five years on from the introduction of the TrueDepth camera system with iPhone X, Apple recommends that identical twins use only passcode authentication to unlock, because Face ID cannot reliably tell them apart. Touch ID did not have this problem. Buying with Apple Pay is also nicer with Touch ID, compared to the double-click dance that Face ID requires. On balance, if pressed to choose just one approach, I think Face ID is the obvious choice, because its biggest benefits are really compelling: first-time setup is far more streamlined than the fingerprint registration process, and the most frequent use case of unlocking your phone is so much more elegant with Face ID. It also has a magical quality that Touch ID lacks. It is much cooler to look at the screen than to place your thumb on a fingerprint reader.

This is the approach Apple has taken since 2017: Face ID only, in the name of simplicity and (partly) cost savings. Pre-pandemic, I think they could have gotten away with that strategy forever. Post-2020, the see-saw of tradeoffs suddenly weighs down very much in the other direction. Until the release of the iOS 15.4 beta, the return of Touch ID seemed inevitable to me.

Identical twins could hypothetically enable Touch ID and not Face ID. That would be much better than the current compromise of being forced to use just a passcode.

The existence of the Unlock with Mask feature probably means that Apple doesn’t have to ship an iPhone with Touch ID again. I would certainly take it as a signal that a Touch ID iPhone is not coming back anytime soon. But I still think they should do it. Long-term, the best iPhone is surely one that offers both Face ID and Touch ID (either via an under-display scanner or an iPad-esque side button sensor). Users would be able to set up both types of biometrics, and the iPhone would simply unlock as soon as either is presented to it. It really would be a best-of-both-worlds scenario, with each biometric’s advantages making up for the drawbacks of the other.

I also think it is somewhat telling that Apple goes out of its way to call out that the accuracy of Face ID is lessened when using the mask unlock mode, right there in the settings UI. The peak of COVID and mask-wearing is (hopefully) behind us, but masks aren’t going away altogether. The Unlock with Mask feature is going to be widely used for years to come, and it doesn’t feel very sustainable for Apple’s solution to this problem to be something that they openly warn significantly impacts the security of your device. You also have the ongoing threat of other wearable items — like sunglasses or even Apple’s own forthcoming headset product — impacting the usefulness of Face ID over the course of this decade. Bringing back Touch ID in some form is a hedge against all of those potential risks, and one that many people would applaud.

Some Minor Issues With The Mini-LED MacBook Pro Display

The new generation of MacBook Pro features a terrific display. The colour depth, maximum brightness and contrast levels it can achieve are truly stunning and a huge leap over previous models of MacBook. It’s also significantly higher resolution than the 2019 16-inch, and the increased pixel density noticeably improves the visible detail of photos and videos. The extra resolution also enabled Apple to restore 2x Retina mode as the standard display setting without sacrificing effective screen real estate. The panel’s 120Hz refresh rate is icing on the cake, even if macOS still hasn’t quite caught up to the hardware capability (although the 12.2 beta seed is much better in this regard).

However, all display technologies have tradeoffs, and the mini-LED design seen in the MacBook Pro is no different. Blooming is often discussed as a downside of mini-LED but funnily enough, I don’t see it crop up too much in how I use my computer. It’s there if you seek it out, but you really have to hunt.

As shown in the video above, a persistent niggle for me is the vignetting effect around the edges of the display. The extreme edge of the screen is just slightly darker all the way around, and it sticks out when the rest of the screen is uniformly bright. You can observe this border pretty much all the time. It’s annoying. I’d put it in the same category as the notch. In practice, because it only impacts the screen quality at the very fringes, it rarely intrudes on the content you are viewing, and your brain learns to ignore the imperfections in the periphery.

Another, more subtle, artefact is the screen response time when changing between light and dark content. Basically, if you have a big dark-coloured blob and then quickly change to a new screenful of content that is mostly white, it takes a few extra milliseconds for the black regions to turn white. I haven’t precisely timed it; it might be as small as a 100-millisecond lag, but it is noticeable to the human eye. It’s sort of like OLED jelly scrolling, but less prevalent.

Modern LCD backlights certainly don’t have the vignetting problems, and screen response time can be consistently as low as 1 millisecond. Apple clearly made the right choice to move from LCD to mini-LED though. It is simply superior in most regards. A hypothetical decision between a MacBook Pro with mini-LED and one with an OLED screen is less clear cut. OLEDs don’t exhibit the edge vignetting and have no blooming because each pixel is individually lit, but they bring their own issues like burn-in and jelly scrolling to contend with.

Every Apple TV+ Show Reviewed In Five Minutes

This is my incredibly succinct five-minute review of every Apple TV+ show released to date. I figured I might as well get this out of the way before the volume of content makes it untenable; even this video ignores the dozen original movies the company has put out so far. Don’t take it too seriously. The main takeaway, if any, is that Apple TV+ continues to expand its content library, with more hits than misses, and will (easily) eclipse 150 premium originals by the end of 2022. The user interface and app experience issues remain the service’s biggest roadblock to mainstream uptake from the general public.

The Metaverse Is Not A Real Thing

I can’t quite believe how much ink has been spilled these last few months about a concept that doesn’t exist and is — at best — a pipe dream. The metaverse is not a thing. It’s meaningless. Facebook held an hour-long keynote event that consisted wholly of computer-generated sequences of floating Memoji/Xbox avatars. Microsoft joined the fray with similarly unsubstantiated claims that Teams is becoming a metaverse.

The bandwagoning of the name ‘metaverse’ is dumb, but I’m not really interested in that aspect; I’m just going to ignore all of that misappropriation. Marketing teams always do stupid stuff; see the ongoing exaggeration by mobile carriers about what 5G can do.

I take the meaning of “metaverse” to be the generally accepted idea that people will wear some kind of headset or glasses and be able to access a virtual world, meeting up with others in some kind of virtual geography. The realism and quality of the experience is promised to be so good that your brain believes you are actually there, your senses succumbing to the generated interface such that you suspend disbelief and forget that what you are interacting with is not actually there. Perhaps it is not an all-encompassing experience; perhaps, instead, augmented reality avatars and objects appear to materialise in the space around you and behave accordingly.

Either way, it’s not feasible. It’s not a real thing because it is not grounded in any sort of technological truth. There’s not a tech demo on earth that can deliver anything close to that description; nothing bespoke exists and something for the mass market is even more illusory. It’s not a real thing.

Break the vision apart into any individual element and the state-of-the-art technology is nowhere close to good enough. Realtime visual fidelity has to advance leaps and bounds to be as convincingly legitimate as what Facebook ‘demonstrated’ in its mockups. I’d love to know how long it took whatever render farm they used to make those videos. Probably days. Even the mockups aren’t what I’d call convincing to a human, because the avatars look like avatars and not people. If that is the aim, forget it. We can’t even get CGI people in Hollywood movies to reliably break through the uncanny valley, and those films take months to generate a single second of footage. For a portable headset, it’s not even on the horizon. Five years. Ten years. Maybe longer. It’s not going to happen.

Graphics are just one of a thousand problems. All the other senses need to be satiated too, for a start, and the technology for generating synthetic smells, tastes and touch is so much further behind where we are with GPUs for photorealistic imagery. One of the things that motivated me to write up this ridiculousness in a blog post is the fencing demo from Facebook’s Meta keynote. Zuckerberg is shown playing against a hologram of a professional athlete, the two waving swords at each other. In the demo, when he lunges, she parries, the swords perfectly stopping in mid-air. How on earth is that going to be possible outside of a visual effects mockup? There is no way to recreate the sensation of metal hitting metal and the sabres rebounding. Rather than an in-air clash of swords, the real sword is just going to pass right through the VR one. A vibration motor and some haptic feedback doesn’t cut it, although that doesn’t stop Zuckerberg miming contact and saying “that’s a little too realistic”. And that is before we get to network latency hurdles and a myriad of other issues, of course.

What these companies are touting is that a fully immersive, engrossing, alternate world is only a few years away, just out of sight. The truth is it’s not anywhere close. I’m not a denier of augmented reality technology altogether. There will be continued small and meaningful improvements to the enterprise and consumer offerings, many of which will find their niche and bring genuine utility and/or entertainment. It will be able to enhance our lives. For instance, VR gaming is basically already here, save for some less clunky hardware to run it on and nicer graphics. I could even see how a portable headset, or smart glasses, could replace the phone in the medium term as the primary communications device for humanity. The power-efficient-yet-technically-capable hardware to pull that off is still a ways out — maybe ten years, probably twenty — but it’s a plausible future that is deserving of consideration. I’d put that idea in the same bucket as self-driving cars or consumer space travel. These things live in the realm of tech demo today, but they have shown feasibility and appear attainable. Contrast that with the “metaverse”, which is merely made-up fantasy.

Craig Federighi Discusses Sideloading At Web Summit Conference

Craig Federighi, Web Summit 2021:

Even if you have no intention of sideloading, people are routinely coerced or tricked into doing it. And that is true across the board, even on platforms like Android that make sideloading somewhat difficult to do.

Apple doesn’t trot out Federighi to a third-party conference with a highly-produced Keynote deck for the fun of it. They are clearly concerned that European lawmakers are actually going to do something they don’t want; that is, pass laws requiring them to offer sideloading as an option. On the whole, he presents good arguments against the policy. You can watch the full thing here.

However, one particular talking point highlights a severe weakness that I see in Apple’s stance. Federighi posits that a social networking app may choose to “avoid the pesky privacy protections of the App Store” and only make its apps available via sideloading. Apple’s customers would then have to leave the ‘safe’ Apple software ecosystem, or lose touch with their family and friends. This is sort of true. But what is omitted is that an app choosing to leave the App Store would not primarily be doing so to avoid Apple’s privacy standards, but because it would then be able to avoid Apple’s IAP rules.

Apple benefits financially — measured in the billions of dollars per year — from keeping the App Store a monopoly. However much it wants to tout the user privacy and safety benefits, Apple’s position would be far stronger if cynics weren’t able to point to the money being accrued by the App Store gravy train. The 30% cut is ultimately the driving factor that has led Europe to want to pass these competition laws in the first place. If Apple truly wants to put customers first and protect them from sideloading, alternative app stores and the like, it needs to compromise on the business policies somehow.

Developers Must Opt In To 120Hz Animations On iPhone 13 Pro

Apple:

As a general rule, for better visual appearance use faster refresh rates when animating fast-moving items that travel across large areas of the screen. But, if you’re animating a smaller item that doesn’t move over a great distance, but “animates in place”, that typically doesn’t benefit from a high refresh rate. You can use slower refresh rates when animating smaller items without any impact to the visual appearance. Selecting the right animation speed is always a tradeoff between a smoother visual appearance and saving energy. As a guiding principle, strive for the lowest animation speed possible while maintaining good visual appearance.

If it wasn't for the PR stink on Friday, presumably this documentation would have taken even longer to appear.

This documentation should have been made available alongside the iOS 15 and Xcode 13 Release Candidates a week ago. Because it wasn’t, app developers didn’t even have a chance to get their software ProMotion-ready for iPhone 13 launch day. Indeed, the lack of published documentation meant that everyone assumed that adopting 120Hz would be handled automatically by the system. That is how it works on the iPad Pro, which has supported ProMotion since 2017. But on the iPhone 13, high frame rate animation is actually gated twice: firstly by a global Info.plist key, and secondly by the fact that each individual animation in the codebase needs to be audited and marked as wanting high refresh rate pacing, as sketched below.
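For illustration, here’s a minimal sketch of what those two gates look like in code, as I understand them from Apple’s Core Animation documentation; the slide-in animation itself is just a made-up example.

```swift
import UIKit

// Gate 1: opt the whole app in via Info.plist, otherwise Core Animation stays capped
// at 60Hz on iPhone 13 Pro:
//
//   <key>CADisableMinimumFrameDurationOnPhone</key>
//   <true/>

// Gate 2: each animation that should run at up to 120Hz has to ask for it explicitly.
func slideIn(_ layer: CALayer) {
    let animation = CABasicAnimation(keyPath: "position.x")
    animation.fromValue = -200
    animation.toValue = 200
    animation.duration = 0.3

    // The one line of opt-in: request high refresh rate pacing for this
    // large, fast-moving animation only.
    animation.preferredFrameRateRange = CAFrameRateRange(minimum: 80,
                                                         maximum: 120,
                                                         preferred: 120)
    layer.add(animation, forKey: "slideIn")
}
```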

All apps will see ProMotion benefits when the user is actively interacting with the display and generating touch events, which thankfully means scrolling is always ultra-responsive and fluid across the system.

However, this also puts an onus on developers to meticulously check all the animations in their app and make the code changes where it makes sense. 60 FPS animations in an app like Twitter will stick out like a sore thumb if the user has just finished scrolling their timeline at a smooth 120 FPS. The stark contrast could even make it feel like the app is lagging, as the user’s brain becomes accustomed to seeing smoother motion. Whilst the work needed to opt in is only one line of code, it is a pretty laborious task to do that for every animation in an app, and I fear that very few developers will bother. Even for someone who cares, it’s an easy thing to forget when adding a feature or implementing a new screen of an app.

Clearly, Apple believes the benefit to battery life is worth the pain of enforcing selectivity. The second half of this document actually recommends not adopting 120Hz indiscriminately and reserving its use for “high-impact animations” only. That is, animations that cause significant on-screen content changes like a full-screen wipe to a new page. I guess we will have to trust their judgement on this, but it definitely should have been accompanied by better communication as it is such a big departure from the precedent set by ProMotion on the iPad Pro.

Apple Will 'Help' Some Developers Put A Single Link To Their Website In Their Apps

Apple:

Because developers of reader apps do not offer in-app digital goods and services for purchase, Apple agreed with the JFTC to let developers of these apps share a single link to their website to help users set up and manage their account. While in-app purchases through the App Store commerce system remain the safest and most trusted payment methods for users, Apple will also help developers of reader apps protect users when they link them to an external website to make purchases.

Apple’s resistance to changing any App Store rules of its own accord means that you have to read any of these announcements with extreme care and caution. The details matter.

In this case, I am perturbed by the fact that there are a lot of words, and a lot of paragraphs, surrounding what should be a straightforward policy change: allowing developers to link out to their website on the sign-up screen.

A couple of the limits are made transparent in the copy; this revised rule applies to reader apps only and developers are allowed a ‘single’ link only.

Setting aside Apple’s self-serving and/or contradictory rules around what counts as a reader app, what the heck does a single link mean in a digital world? It’s a hilarious concept. If I style a link with a big font, placed on a rectangle of prominent background colour, is that a single link … or is that a button? What if the single link takes up 90% of the screen, in a huge font? If I put a static link at the bottom of the screen like a footer, and the link doesn’t move or disappear when the user navigates to a new page, is that still a single link? I mean, it’s still the same link; it is just permanently visible.

That’s one whole ordeal. The second part I zero in on — in my pessimistically critical reading — is ‘help’. What constitutes help? Of course, I fully expect Apple to lay out rules around the design and behaviour of the destination websites, possibly including limits on what payment methods can be used and the language used in the sign-up form. Furthermore, because this policy is not coming into effect until next year, it seems like this ‘help’ is going to include some kind of technical component too. Maybe Apple will have a special new API or something that ensures the link out to the website doesn’t change after the fact, or must point to a specific, pre-approved registered domain. Apple could ‘help’ by requiring use of a sandboxed web view that somehow doesn’t have access to a user’s standard AutoFill information.

Thirdly, all these developers obviously want the ability to link out to the web in order to encourage their customers to use payment methods other than Apple’s In-App Purchase. Apple’s press release implies that motivation but the actual wording isn’t so direct: it says the link is to enable users to “set up and manage their account”.

You’d hope Apple would comply with the Japanese law in good faith, but I’m certainly not ruling out something more sinister. I don’t think the implementation of ‘help’ will be onerous, but it may be just inconvenient enough to make some percentage of developers not bother.

Ultimately, these rules should have a positive impact on user experience and a very small negative impact on Apple’s financials. Apple’s revenue from reader apps is already small, because those are the exact category of apps already allowed to circumvent In-App Purchase altogether. That being said, this Japanese settlement is not going to fundamentally resolve any of the other impending lawsuits; Spotify benefits from these new rules but will want more and will keep pushing, Epic is just going to be more furious that they don’t benefit at all, and there are plenty more EU and US investigations to come.

On-Device CSAM Scanning For iCloud Photos

Apple:

Another important concern is the spread of Child Sexual Abuse Material (CSAM) online. CSAM refers to content that depicts sexually explicit activities involving a child.

To help address this, new technology in iOS and iPadOS will allow Apple to detect known CSAM images stored in iCloud Photos. This will enable Apple to report these instances to the National Center for Missing and Exploited Children (NCMEC).

Apple’s method of detecting known CSAM is designed with user privacy in mind. Instead of scanning images in the cloud, the system performs on-device matching using a database of known CSAM image hashes provided by NCMEC and other child safety organizations.

The threshold is set to provide an extremely high level of accuracy and ensures less than a one in one trillion chance per year of incorrectly flagging a given account.

Only when the threshold is exceeded does the cryptographic technology allow Apple to interpret the contents of the safety vouchers associated with the matching CSAM images. Apple then manually reviews each report to confirm there is a match, disables the user’s account, and sends a report to NCMEC. If a user feels their account has been mistakenly flagged they can file an appeal to have their account reinstated.

The naysayers of the last week are not necessarily wrong. This issue is nuanced and Apple’s decisions involve concessions. Personally, I think Apple have done well here. They probably could have handled the communication surrounding the announcement better, but the actual functionality and policy decisions are reasonable.

In a world where Apple was isolated from all external pressures, I don’t think they would have done this at all. Ideally, Apple would like your device and your data to be sacrosanct. Impenetrable. That fits with their business model, it fits with their morals, and it fits with their marketing.

However big Apple is, they still have to conform to government expectations and the law. That’s why, in China, they let a third-party Chinese company run the iCloud data centre. They can make it as secure as possible under that arrangement, but it’s still a compromise from the ideal situation where the data centres are managed by Apple themselves (which is how it happens in every other region of the world).

In the US, big tech companies are expected to help governments track and trace child abuse content happening on their platforms. Apple runs a big platform full of user-generated content, iMessage and iCloud, yet their contribution to this specific cause has been disproportionately small, infinitesimal even. Facebook reports something like 20 million instances of CSAM a year to the NCMEC organisation. In that same timeframe, Apple barely crossed the 200 mark.

So, this is the underlying motivation for these new policies. Big government bodies want Apple to help to track down the spread of CSAM on their devices, in just the same way that other big cloud companies like Google, Amazon, and Facebook comply by scanning all incoming content to their platforms.

You have to assume that privacy concerns are a key reason why Apple has historically been so lax in this department. It’s not that Apple has sympathy for the people spreading child pornography. Why right now? That is still unclear. Perhaps, behind closed doors, someone was threatening lawsuits or similar action if Apple didn’t step up soon. Either way, it’s crunch time.

I’m sure governments would welcome a free-for-all backdoor. Of course, Apple isn’t going to let that happen. So, what Apple has delivered is a very small, teensy-tiny window into the content of users’ devices.

The actual system is almost identical to what Google, Amazon and Facebook do on their servers, attempting to match content against a database of hashes provided by NCMEC. Except Apple runs the matching process on device, as sketched below.
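As a mental model only (this deliberately ignores the NeuralHash perceptual hashing, the blinded database and the threshold cryptography that make Apple’s version privacy-preserving), the matching step boils down to something like this:

```swift
import Foundation

// A deliberately naive mental model: hash each photo destined for iCloud Photos and
// check it against a database of known CSAM hashes. Apple's real design wraps this in
// cryptography so that neither the device nor Apple learns anything about individual
// matches until the account-level threshold is exceeded.
func flaggedPhotoHashes(outgoingPhotoHashes: [Data],
                        knownCSAMHashes: Set<Data>) -> [Data] {
    // Conceptually the same matching step Google, Amazon and Facebook run server-side,
    // just executed on the device before upload.
    return outgoingPhotoHashes.filter { knownCSAMHashes.contains($0) }
}
```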

I’ve seen a bit of consternation around this. I don’t think it’s a big deal where the matching process happens. Arguably, if it is happening on device, security researchers have more visibility into what Apple is doing (or what a hypothetical nefarious actor is doing with the same technology) compared to if it was taking place on a remote server.

You do have to basically take Apple’s word that it is only scanning photos destined to be sent to iCloud. Sure. But you have to take Apple’s word on a lot of things. A malicious state could secretly compel Apple to do much worse with much less granularity. I don’t trust Apple any more or any less about this than the myriad other possible ‘backdoors’ that iOS could harbour. The slippery slope argument is a concern, and worth watching out for in the future, but I don’t see anything here that is an obvious pathway to that.

You also have to stay grounded in the baseline case. Right now, if you use iCloud Backup (clarifying again that this new Child Safety feature only applies if you use iCloud Photos; photos stored only in the backup are exempt), all of your phone’s data is stored on an Apple server somewhere in a manner that lets Apple read it if they so desire. This also means that a government can subpoena Apple to hand over that information. This is not a secret. Apple has done it countless times, purportedly in the presence of a valid warrant, including very publicly in the midst of the PR fiasco that was the 2016 San Bernardino shooter case.

With that in mind, almost all of your phone is already accessible to law enforcement or state actors if they so desire. This new entry point in the name of Child Safety pales in comparison to that level of potential access.

One assumption I’ve seen floated around is that Apple wants to roll out an end-to-end encrypted iCloud Backup option in the future. The criticism is that this on-device scanning policy undermines the point of E2E, because the scanner would still be able to “spy” on the data before it was cryptographically sealed. I guess that’s true to a degree, but I’d still rather have the option of end-to-end encrypted backups with a CSAM scanner in place than not have it at all, which is the world we live in today.

The weakest link in the chain on the technical side of this infrastructure is the opaqueness of the hashed content database. By design, Apple doesn’t know what the hashes represent, as Apple is not allowed to knowingly traffic in illicit child abuse material. Effectively, the system works on third-party trust. Apple has to trust that the database provided by NCMEC — or whatever partner Apple works with in the future when this feature rolls out internationally — only includes hashes of known CSAM content.

I think there’s a reasonable worry that a government could use this as a way to shuttle other kinds of content detection through the system, by simply providing hashes of images of political activism or democracy or whatever some dictatorial leader would like to oppress. Apple’s defence for this is that all flagged images are first sent to Apple for human review, before being sent on. That feels flimsy.

My suggestion would be that all flagged images are reported to the user. That way, the system cannot be misused in secret. This could be built into the software stack itself, such that nothing is sent onward unless the user is notified. In press briefings, Apple has said they don’t want to do this, partly because their privacy policy doesn’t allow them to retain user data, and partly because a genuine criminal who is sharing CSAM would simply delete their photo library when alerted. I think tweaks to policy could solve that. For instance, it would be very reasonable for a flagged image to be automatically frozen in iCloud, unable to be deleted by the user, until it has gone through the review process. The additional layer of transparency would be beneficial.

A13 Chip Inside Future Apple External Display

9to5Mac:

However, 9to5Mac has now learned from sources familiar with the matter that Apple is internally testing a new external display with a dedicated A13 chip and also Neural Engine. The new display is being developed under the codename J327, but at this point, details about technical specifications are unclear. According to sources, this display will have an Apple-made SoC, which right now is the A13 Bionic chip — the same one used in the iPhone 11 lineup.

Apple Silicon in an external display? That must be really exciting! Well, not really. It is most likely something boring. Perhaps the A13’s sole purpose is to drive higher-density mini-LED dimming zones à la the new iPad Pro, probably repurposing a lot of the R&D investment in the iPad for this product.

Apple’s massive economies of scale mean it may very well be cheaper for them to shove in A13 chips to act as the display’s controller than to design new silicon for something bespoke. The new display project is in the early stages of development, and by the time it ships, in a couple of years, the A13 will be old. It is analogous to how the T2 chip, essentially a rebranded A10 SoC, was used in the iMac Pro and later Intel MacBooks. The T2 served ‘boring’ duties like driving the secure boot process and the SSD controller.

If you want to take a more optimistic and exciting tone, the A13 could be present to enable features like Face ID. That would be neat. Timing-wise, that would line up too, as the rumour mill expects Apple to transition Macs to Face ID biometrics in the next few years.

More than anything, though, I simply hope the next-generation display is more affordable. It would obviously still be a premium product, it is Apple after all, but there’s a big difference between ~$2000 and $5000 (plus a $1000 stand and a $1000 nano-texture coating). As the technical capabilities of the iPad Pro’s Liquid Retina XDR display already surpass those of the Pro Display XDR, a significantly lower price point seems readily achievable.

Netflix Games Next Year

Bloomberg:

The idea is to offer video games on Netflix’s streaming platform within the next year, according to a person familiar with the situation. The games will appear alongside current fare as a new programming genre – similar to what Netflix did with documentaries or stand-up specials. The company doesn’t currently plan to charge extra for the content, said the person, who asked not to be identified because the deliberations are private.

This story has a lot of parallels to how Apple uprooted the music labels with $0.99 songs on iTunes. Apple became dominant, and the labels regretted their complacency.

Netflix became a streaming behemoth largely because traditional TV networks vastly undervalued the content they had. Infamously, Netflix paid a mere $30 million for streaming rights to 2500 TV shows and movies from the Starz catalogue. That is, $30 million for four years … a sum so unthinkably low in today’s market that I just had to double check the numbers. Similar cheap deals were struck with other studios too. This base of content was a huge springboard for Netflix’s streaming initiatives. By the time the incumbent content owners had realised their mistake, Netflix was getting established in the originals space and years ahead of the competition.

Playing in the games space is not going to be as straightforward. ‘Content’ in general is so hot right now and Netflix is the leader. People aren’t going to sell games to Netflix on the cheap. People might not even sell games to Netflix at all. PlayStation Now and Xbox Cloud Gaming are already trying to own this market, and they aren’t sharing their exclusive properties. Netflix does have popular intellectual property in the form of its original TV shows, but the question is whether it can convert viewers into players.

A significant reason why Netflix garnered consumer adoption of streaming is that it had a natural on-ramp: moving its existing, avid customer base of movie watchers from getting DVDs by mail to watching the same movies online instead. The jump from watching TV to playing games is a much bigger chasm to cross. I’m not saying Netflix shouldn’t try to branch out, but I don’t think it is going to take over on the spot. I foresee a slow and gradual expansion.

Distribution is another issue altogether. The App Stores are a roadblock, obviously; you basically can’t offer game streaming on iOS, apart from through Safari. Apple TV lacks a web browser, so there’s no entry point there unless and until Apple’s rules change. A lot of people watch Netflix on their PlayStation and Xbox, but it seems unlikely that Sony and Microsoft are going to welcome Netflix games with open arms, as both want to push their own game streaming services. The Netflix app is installed on millions of set-top boxes and smart TVs worldwide, many of which are theoretically powerful enough to display game streams, but only a fraction of those will support pairing accessories like game controllers.

FaceTime in iOS 15

Apple:

FaceTime helps customers easily connect with those who matter most and with iOS 15, conversations with friends and family feel even more natural. With spatial audio, voices in a FaceTime call sound as if they are coming from where the person is positioned on the screen, and new microphone modes separate the user’s voice from background noise. Inspired by the stunning portrait photos taken on iPhone, Portrait mode is now available for FaceTime and designed specifically for video calls, so users can blur their background and put themselves in focus. While using Group FaceTime, a new grid view enables participants to see more faces at the same time.

Users can now share experiences with SharePlay while connecting with friends on FaceTime, including listening to songs together with Apple Music, watching a TV show or movie from Apple TV+ and other streaming services in sync, or sharing their screen to view apps together. SharePlay works across iPhone, iPad, and Mac, and with shared playback controls, anyone in a SharePlay session can play, pause, or jump ahead.

FaceTime calls also extend beyond Apple devices with the ability to create a link from iPhone, iPad, or Mac, and share it through Messages, Calendar, Mail, or third-party apps, so anyone can join a FaceTime call from their web browser on Android and Windows devices. FaceTime calls on the web remain end-to-end encrypted, so privacy is not compromised.

It is incredibly tempting to glibly pass off many of these new FaceTime additions as features targeting an era that (we all hope, at least) has passed, and Apple is late to the game. I’m pretty sure I tweeted a joke to that effect on keynote day. On reflection, though, it is an unfair view.

Zoom and Microsoft Teams have dominated mindshare recently because of the surge in popularity of group video chats in 2020, but FaceTime is a huge player when it comes to video calls in general, because it is incredibly popular for personal one-on-one calls. Maybe the biggest actually.

Save for grid view, which is specific to mass group calls, all of these new iOS 15 features will benefit the friendships and relationships forged over the millions of FaceTime calls that take place every single day. Rather than painting it all with a pandemic brush, I now note that many of these features were actually requested and in demand years ago … well before anyone had heard of COVID-19. Screen sharing will make FaceTime family tech support so much better. The underlying intents of screen sharing and SharePlay go all the way back to iChat AV in Mac OS X Tiger, but they never made the leap to FaceTime until now. I guess we have the pandemic to thank for motivating Apple to move these features higher up the internal to-do list.

Maybe some friends will get together on Group FaceTime because of SharePlay. Still niche, though.

The ability to join FaceTime calls via web links (including Android and Windows support) is clearly Apple’s half-hearted attempt to battle Zoom and Microsoft Teams in the remote work and school videoconferencing space. I doubt it will get much traction. Products like Teams have all sorts of components to help people collaborate on projects and arrange group meetings; FaceTime has none of that. FaceTime is more like an add-on to Messages, competing against WhatsApp and traditional phone calls if anything. You also see this in how each service handles identity: Zoom and Teams have abstracted user accounts, whereas on FaceTime you connect by sharing your personal phone number or email address — information that you only want to give out to close friends.

Apple Tracking Transparency

Between the PR blitz, the ad campaigns and the feature itself, the rollout of App Tracking Transparency means that iOS users are now more aware of the relationship between apps and third-party advertising and analytics companies. Being able to deny an app the ability to do something many people were previously oblivious to — third-party tracking — with one button press is powerful and meaningful.

You won’t see an App Tracking Transparency dialog show up in any Apple app. This is because App Tracking Transparency falls solely under the purview of third-party tracking, and Apple’s apps technically don’t share data with anyone but Apple. Under the letter of the law, Apple is in the clear. But of course, it is Apple that wrote the law. It’s been niggling at me for months that App Tracking Transparency is defined in such a way that Apple’s own data collection activity is unaffected.
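For context, this is the hoop every third-party app has to jump through before it can track you across other companies’ apps and websites; a minimal sketch using the AppTrackingTransparency framework, and a prompt you will never see triggered by an Apple app.

```swift
import AppTrackingTransparency

// Third-party apps must ask before accessing the advertising identifier (IDFA) or
// otherwise tracking the user across apps and websites owned by other companies.
func requestTrackingPermissionIfNeeded() {
    ATTrackingManager.requestTrackingAuthorization { status in
        switch status {
        case .authorized:
            // The user allowed tracking; the IDFA is available.
            break
        case .denied, .restricted, .notDetermined:
            // One button press from the user and cross-app tracking is off the table.
            break
        @unknown default:
            break
        }
    }
}
```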

Apple Card needs to send your data to third-party agencies to check your credit. It just so happens that one exemption in ATT is reporting data to a third party for the purposes of evaluating creditworthiness.

App Tracking Transparency comprises a laundry list of clauses and exemptions, but the main distinction is the first-party versus third-party thing. That puts Apple in the clear as it doesn’t run an ad network that other companies participate in; Apple serves ads to its users inside its own ‘first-party’ walled garden. Apple’s targeted advertising currently manifests itself in the News app (ad banners), the Stocks app (ad banners in integrated News stories), and the App Store (search ads).

Whilst it is true that Apple’s ad network uses far fewer signals than most of its competitors to target ads, it probably collects more than you think. Basics like age, gender, home address and current GPS location are in play. It also incorporates what kinds of media you download from the App Store to generate demographic buckets. To reiterate: Apple looks at what you have downloaded across its content stores — music, television, books and apps — and puts your Apple ID into appropriate marketing categories. For instance, this means Apple News can advertise games to you, because your account’s profile indicates that you like to play games.

I’d refer to this as Apple’s ad tracking. Apple officially calls it “Personalised Ads”, because it wants to define tracking as a third-party concept. The behaviour of Personalised Ads is not conveyed through any kind of user-facing dialog or permissions prompt, unlike App Tracking Transparency. In fact, Apple enables ads personalisation by default. The setting to turn it off is buried in Settings, tucked away at the bottom of the Privacy screen (conveniently positioned below the fold).

I don’t really have a problem with News targeting ads based on stuff I read in News. I think what irks me the most about this situation is that an Apple ID is a prerequisite to owning an iPhone, and you can’t download any application from the App Store without one. Apple’s delineation of first-party and third-party allows the App Store to share any information it pleases with the News app, without telling the user at all. It feels wrong that News silently targets ads at me based on the apps I download, the music I listen to and the television shows I watch.

The data Apple is collecting is relatively boring and benign … but it is still data collection for the purposes of advertising. This goes against the principles Apple advocates in its commitments to privacy and data minimisation. Again, it’s not that Apple is applying one rule for itself and another for developers. After all, Facebook is allowed to share as much data as it wants with Instagram and WhatsApp, as they all fall under the first-party rule. But I have come to expect Apple to be better than Facebook, and I expect them to do more than the bare minimum.

It is self-serving that Apple classifies the App Store as first-party and therefore its ad network avoids the friction of App Tracking Transparency. As it stands today, the vast majority of iOS users have ads personalisation de facto enabled and they don’t even know what it is or that it is happening.

One way for Apple to behave better here would be to treat the App Store as if it were a third-party operation. In accordance with its own rules, this would mean the App Store would need to get explicit permission before it could show targeted ads. The permissions dialog would educate users about the kinds of data Apple collects for advertising purposes and let them make up their own minds about whether they are comfortable with it.

Apple Music Lossless

Apple:

Apple today announced Apple Music is bringing industry-leading sound quality to subscribers with the addition of Spatial Audio with support for Dolby Atmos. Spatial Audio gives artists the opportunity to create immersive audio experiences for their fans with true multidimensional sound and clarity. Apple Music subscribers will also be able to listen to more than 75 million songs in Lossless Audio — the way the artists created them in the studio. These new features will be available for Apple Music subscribers starting next month at no additional cost.

A couple of months ago, Spotify announced that they were working on a paid hi-fi tier to launch later this year. Spotify is actively seeking ways to increase profitability of its large pool of active users, and hi-fi looked like one way to do that. Seemingly simultaneously, Apple and Amazon were campaigning to the labels (who ultimately dictate streaming pricing) that they wanted to promote lossless quality options as a way to increase the overall number of people streaming, rather than as some incremental upsell. Apple and Amazon’s negotiations evidently succeeded and so both Apple Music and Amazon Music HD will be offering lossless at no extra cost. Apple and Amazon are able to focus on growing subscriber counts because they can afford to aggressively subsidise their streaming divisions. In contrast, Spotify is an independent company and draws all of its income from that subscription price. Therefore, it is much more sensitive to things like ARPU. Making hi-fi a commodity feature is not what Spotify wanted to happen.

In addition to the ruthless war of competitive business, I also think Apple recognises that lossless is a very niche thing. Similar to the App Store Small Business Program, lossless-as-free is a huge win for Apple in goodwill with very little associated cost to the company. It’s an attractive feature on paper but I doubt many people will ever stream Apple Music lossless in practice. Beyond just issues of bandwidth, I’d guess less than one percent of Apple Music subscribers actually have premium audio equipment that can reproduce the additional details found in lossless music compared to standard Apple Music AAC encoded tracks. This is reflected in the fact that streaming lossless will be an entirely opt-in feature.

Spatial audio tracks with Dolby Atmos are going to have a more meaningful impact on users. They will be on by default and can be experienced by the millions of AirPods owners. Apple says it will stream Atmos songs to any Apple or Beats headphones with the H1 or W1 chip, although I’m sure certain models (like AirPods Max) will do a better job of creating a 3D soundstage than others. Interestingly, Spatial Audio for video content specifically requires AirPods Pro or AirPods Max. I think the difference is that the video feature uses the gyroscope and accelerometer to position the audio relative to your iPhone or iPad. Apple Music won’t try to do relative positioning, so the simulation of Dolby Atmos surround sound is done purely by the headphones themselves. This also means that any third-party headphones that tout Dolby Atmos support can also benefit.

The New Siri Remote

Apple:

The all-new Siri Remote features an innovative clickpad control that offers five-way navigation for better accuracy, and is also touch-enabled for the fast directional swipes Apple TV users love. The outer ring of the clickpad supports an intuitive circular gesture that turns it into a jog control — perfect for finding a scene in a movie or show. And with its one-piece aluminum design, the new Siri Remote fits more comfortably in a user’s hand.

It feels a little silly to be commending something as primitive as a TV remote, but the new Siri Remote deserves it. When Apple responded to the butterfly keyboard backlash, they simply reverted to a tried-and-trusted design. The Magic Keyboard in the 2019 MacBook Pro was near-identical to what they shipped before the butterfly fiasco. It was a crowd pleaser, but also a little boring. Had they really learned nothing in that four-year span of misery?

The new Siri Remote is not retracing the past. It’s a robust combination of the Apple Infrared Remote and the trackpad Siri Remote, squarely addressing the failings of the latter. There’s no glass surface anymore, so it should be more durable and not prone to breaking upon being dropped or knocked off the coffee table. There’s asymmetry you can feel with your fingers, so your brain knows which way is up just by touch. There’s a physical D-Pad arrangement with labelled indicators, so you can do traditional up-down-left-right navigation like on every other TV remote on the planet. And yet, at the same time, Apple didn’t lose the best bit of the trackpad: the ability to swipe to zoom through a list of items. The circular D-Pad doubles as a capacitive touch surface (the “clickpad”, as named by Apple marketing), so you can still do the same gestures on it. A new innovation: if you move your finger in a circular motion around the outer edge, it scrubs through the playing video just like an iPod click wheel.

One letdown is the lack of Find My integration, or even a simple beeper so you can make it ping if you misplace it. The omission of this functionality was made only more stark by the fact that Apple chose this same event to sing the item-finding praises of the long-rumoured AirTag tracker. This remote should be harder to lose, though; it is less likely to fall down the back of a sofa simply because it is bigger and chunkier. Switching from an all-black finish to silver aluminium also makes it easier to see in the dark.

The other letdown is the price: $59 for a remote is exorbitantly high. You can buy an entire Chromecast for less than the price of the Siri Remote. Unfortunately, the entire Apple TV product line remains poorly priced. Apple refreshed the Apple TV 4K’s internals with an A12 chip, but it’s still sold at the same $179 price point and, more ridiculously still, the Apple TV HD remains on sale — hardware unchanged aside from bundling the new remote — for the same (inflated) price it debuted at in 2015.

The A12 chip and the overhauled remote do just enough to signal that Apple is committed to keeping the Apple TV around, which allays my fears about its discontinuation. But clearly, there’s a lot more to be done in the living room, and I hope Apple has more coming down the pipe in both hardware and software.