The Apple TV hardware has two main issues: a laggy OS and a high price. This week’s hardware refresh naturally didn’t do much to change the software experience — although Siri features a slightly more modernised UI and per-user voice recognition now — but Apple did tackle the price problem.
The previous lineup was $149 for the Apple TV HD, a product first released in 2015, and two models of Apple TV 4K varying by storage capacity: $179 for 32 GB and $199 for 64 GB. This was simply outrageous pricing in a market where competent 4K streaming sticks can be picked up for under $49. The premium advantages of the Apple TV platform were not worth four times the price. I bought it because I’m a sucker, but I’d never recommend it to family or friends.
The new lineup is $129 for Apple TV 4K with 64 GB and $149 for Apple TV 4K with 128 GB. This time around, the higher-end model also differs in features other than storage; the more expensive Apple TV has an Ethernet port for wired networking and a Thread radio, for communicating with the latest Thread-only smart home devices.
In raw numbers, the Apple TV 4K is 28% cheaper than it was a week ago. The cheapest Apple TV you can buy is now about 13% cheaper, and actually of a respectable, recommendable spec: the latest A15 chip, full 4K HDR support, and plenty of storage for future-proofing and for downloading a dozen Apple Arcade games. And the obsolete Apple TV HD is gone for good, thankfully.
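A trivial sanity check of the price-cut arithmetic, using the prices quoted above:

```python
# Sanity-check the price cuts from the lineup change described above.
def discount_pct(old: float, new: float) -> int:
    """Percentage saved moving from the old price to the new one, rounded."""
    return round((old - new) / old * 100)

print(discount_pct(179, 129))  # entry-level Apple TV 4K: 179 -> 129, prints 28
print(discount_pct(149, 129))  # cheapest Apple TV in the lineup: 149 -> 129, prints 13
```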
This is fantastic news. The lineup is much more reasonable now. If someone is mad that their Roku or Fire Stick is ad-ridden or laggy, suggesting a $129 solution is now possible — significantly more palatable than the old $179 price point.
That being said, $129 is still too much for the Apple TV to capture significant market share. I wish Apple had gone further in stripping down the base model to push the price down more. $99 really feels like the target to hit, and they didn’t quite get there. A hypothetical 32 GB Apple TV for $99 would have been appealing; very few people will benefit from having 64 GB or 128 GB of onboard storage. If you are just streaming video, you don’t care about storage space.
It’s such an Apple simplification to show the exact same lock screen whether the phone is awake or not. It’s so similar that it is a stretch to even describe it as a “mode”. It’s just your lock screen, dimmed.
I like it. It works the same way the Apple Watch does, and it makes sense to me. When you invest in customising your lock screen using all the new widgets, dynamic wallpapers, and font and colour options available in iOS 16, you get to enjoy your personalisation choices (quite literally) all day long on the iPhone 14 Pro. It feels like you have made the phone your own. The always-on feature also benefits from the iOS 16 maximised album art lock screen view, adding a splash of colour and vitality when your phone would otherwise be sitting dormant on your desk. The ability to glance at notifications and widgets adds some degree of utility, but it’s mostly just nice to have the screen stay on. Screen and battery technology have advanced to make it possible, so why shouldn’t phones work this way?
Of course, personal preference matters a lot here. Indeed, it’s never a good look when a significant portion of the initial embargoed reviews mentioned that they had turned off the feature entirely — arguably disabling one third of the Pro-exclusive features this generation — because they found the permanent aliveness of Apple’s always-on implementation too distracting. Some people simply prefer a simpler, muted always-on state.
A Nightstand Mode perhaps, again cribbing from the Apple Watch.
The OS integration does feel a little incomplete. Obviously, some people want an option to get an Android-like always-on mode, where it just shows the time on a wholly black background. I don’t care for that so much but I do want a little more flexibility in how I am allowed to conditionally enable or disable always-on.
As I already said, I happen to like the feature as-is — but I don’t want the screen to shine brightly through the night whilst I’m trying to sleep, if only to avoid unnecessary battery drain. As of right now, the only way to get the screen to turn off at night is to use the Sleep Focus. The Sleep Focus is blessed with abilities other Focuses lack, and turning off always-on is one of them. I don’t do sleep tracking and I don’t really want a Focus-oriented device lifestyle. Until I got my 14, I had stuck to the pre-iOS 15 binary system of Do Not Disturb, or nothing. I have resorted to using Sleep, but I shouldn’t have to. Why can’t I set it so that Do Not Disturb also tames the always-on display?
There’s nothing inherently bad about ads per se. Typically, any ‘damage’ done to the user experience of a product by showing ads is offset by a lower — or free — purchase price. That’s the balancing act at play; the user is compensated in exchange for having to consume advertising, thereby maintaining an overall equilibrium of customer satisfaction. For just one example, Google runs a wildly successful suite of services, almost exclusively powered by an ad-supported monetisation model.
In the context of Apple, the same ideas apply. I don’t think anyone would complain if Apple launched an ad-supported tier of Apple TV+, or Apple Music, as long as it was proportionately cheaper than the ad-free tier. If anything, those tiers would likely be more popular than the existing offerings. After all, Spotify has huge market dominance in music streaming precisely because they offer a free ad-supported tier.
The tension lies in the expectation that Apple is going to insert more and more ads into the user experiences of its premium products, without any such compensation in return. All signs point to Apple trying to uphold its position as a premium company charging premium prices, whilst also squeezing out more revenue, apparently indiscriminately.
And that is a dangerous slippery slope that threatens the essence of Apple’s entire product lineup. It’s a risky venture. A big problem is that the feedback loop is not so sensitive; the increase in revenue is immediate but the observation of the cumulative negative impact felt by users is lagging. You can probably insert a few additional ads into iOS and get away with it. But overdo it, and then you start undermining the premium brand the company has carefully curated for so long, and then you start losing customers, maybe for good.
Actually, we fitted smart wall switches rather than independent light bulbs, but it’s the same difference.
Smart lights are the go-to accessory to kit out a smart home with. We set some up in our house a few years ago, and an obvious thing to do was to make all the lights turn themselves off at night, so we aren’t wasting electricity all night long if someone forgets to flick the switches before going upstairs to bed.
We achieved this by configuring an automation that sets a ‘Good Night’ scene at 2 AM. It turns off the TV and some other stuff too. It’s neat, useful even. But a time-based schedule is far from an ideal trigger for this. What if someone happens to stay up late? Well, tough luck, everything is still going to unceremoniously turn itself off. The automation is set at 2 AM because all members of the family have clocked out around midnight or 1 AM, and that leaves enough of a buffer to account for the occasions when people stay up for another hour or so. Still, it’s not a foolproof system. The hack also means that when people are in bed before 12 — as they usually are — the light accessories are not turned off for another two hours, wasting electricity for no reason.
It also doesn’t help with all the times during normal waking hours when people turn lights on and then leave the room. If you want to solve for that situation, a fixed-time automation is not sufficient. The next tool in the arsenal is motion sensors, setting stuff to turn off when no motion is detected for a while. I’ve attempted to use a couple of motion sensors from different brands, but they are all largely unsatisfactory at the job. They aren’t reliable in general, especially in larger rooms where people might be obscured by sofas or tables; a very common pitfall is that when people relax, like reading a book or watching TV, they tend not to move enough to trigger the sensor. Motion sensors can be successfully deployed in some specific scenarios, but they aren’t a general-purpose solution to the task of turning stuff off when people aren’t there anymore.
A few dedicated room occupancy sensors do exist. They typically attach to door frames, and count how many people enter and exit each room. If the count is greater than zero, the room is considered occupied. Unfortunately, these kinds of sensors are cost prohibitive, somewhat ugly, and also imperfect; a missed count of just one person will mean the total is off, requiring manual intervention to reset it. People enter and exit rooms a lot; even something that is 99% accurate will be wrong enough times to be frustratingly annoying.
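To put a rough number on that frustration, here’s a toy back-of-the-envelope calculation (the crossing count and accuracy figure are my own illustrative assumptions, not from any real product): even a counter that handles 99% of door crossings correctly will be knocked out of sync most days.

```python
# Probability that an entry/exit counter drifts at least once in a day,
# given a per-crossing accuracy. The figures are illustrative assumptions.
def p_any_miscount(crossings: int, accuracy: float = 0.99) -> float:
    """Chance that at least one of `crossings` events is miscounted."""
    return 1 - accuracy ** crossings

# A busy household might cross room thresholds a couple of hundred times a day:
print(round(p_any_miscount(200), 2))  # -> 0.87: the count is wrong most days
```

And a single missed crossing leaves the occupancy count permanently wrong until someone manually resets it, which is why even high per-event accuracy isn’t good enough here.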
So, the ‘simple’ task of intelligently automating turning things off in a house remains an open problem. The 2 AM Good Night automation I use at the moment is a crude hack, but it works well enough to tease the possibilities of where we could go in the future. I think fully solving room occupancy is the key to this and a world of other related fun and useful features.
The promise of a smart home could really come into its own if we could reliably detect exactly who is in what room. If the iPhone knew what room you were in at all times, you could open the Lock Screen and immediately see relevant smart home controls for the lights and accessories in your immediate proximity, as that is, odds on, what you want to change. It would be great to be able to say to your watch ‘turn on the light’, and have the virtual assistant do exactly what you intended, as if you were talking to a real person: turn on the light in just the room where you are and nothing else.
Reducing friction is vital to making smart home stuff feel more useful and less of a gimmick. If the home knew who was in the lounge, it could change to that person’s user profile on the TV automatically. Playback of a podcast on smart speakers could follow someone through the house, as they move from eating dinner in the dining room to relaxing in their room. It’d be really cool if a smart speaker could automatically avoid expletive-filled music when young kids were known to be nearby. Smart thermostats could adjust to the temperature preferences of the person in the room at any moment, and perhaps turn off heating altogether when the system reliably knew everyone had gone to their rooms at bedtime. And your wake-up alarm could turn itself off when it observed you walk out of your bedroom the next morning.
I don’t pretend to know how we get there technologically. I figure it would probably involve coordination between some camera-esque sensors dotted around the home and the devices people carry on them, like phones and watches. Maybe Apple could take advantage of Ultra Wideband positioning to accurately follow people’s movements throughout a house, with various static nodes like HomePods or Apple TVs or Echo speakers working in concert to track and triangulate the signals. I hope manufacturers pull on that Thread and see where it takes us (pun intended).
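To make the triangulation idea concrete, here is a minimal 2D sketch of the maths involved, assuming three fixed nodes at known positions and idealised, noise-free distance measurements to a person’s device — nothing here reflects Apple’s actual UWB implementation:

```python
from math import dist

def trilaterate(anchors, distances):
    """Locate a 2D point from three anchors at known positions and the
    measured distances to each. Subtracting the circle equations pairwise
    yields a 2x2 linear system, solved here with Cramer's rule."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = distances
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - a12 * b2) / det, (a11 * b2 - a21 * b1) / det)

# Three hypothetical static nodes (say, two HomePods and an Apple TV),
# and a person standing at (1, 1):
anchors = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]
person = (1.0, 1.0)
print(trilaterate(anchors, [dist(a, person) for a in anchors]))  # ~ (1.0, 1.0)
```

A real deployment would have to cope with noisy ranging, walls, and more than three nodes (typically via least squares), but the core geometry really is this simple.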
The jump to Apple Silicon and the M1 chip was nothing short of astonishing. The Mac got way, way, way better, quite literally overnight. In the timeline of the Mac, 2020 will be remembered forever (and not because of COVID). Even more staggeringly, Rosetta 2 ensured old Intel binaries ran well enough that you generally couldn’t even tell you were running through an emulation layer. Whether running new or old binaries, everything on M1 Macs performed at the same speed or faster (typically significantly faster) than on their Intel counterparts. The move to M1 came with no asterisks, downsides, or drawbacks. It was a perfectly executed transition.
Well, almost perfect. The move from Intel to Apple Silicon meant Apple’s best-selling machines, the 13-inch MacBook Air and MacBook Pro, went backwards in one regard. They were no longer capable of driving two external displays.
But that was easily excused. It was only the first generation of Apple Silicon, after all. Although using two monitors at once is not exactly an edge case, it’s certainly not a dealbreaker for a base model laptop. As such, the snag was remarked upon and quickly excused as a footnote; a strange quirk of first-gen engineering.
Year two, here comes M2. CPU is better, GPU is better … and yet the one-display limitation remains. Count me surprised. Now that the glow of the Apple Silicon transition has faded slightly, the footnote pitfall is harder to ignore.
I don’t think it’s unreasonable to expect any Mac to be able to output to two displays at once. It’s a weird thing to explain to normal people, too; when was the last time a customer had to think twice about plugging a second monitor into their computer? Decade-old Intel Macs could do it just fine. Plenty of old MacBook Air users spent the last two pandemic years working from home with a dual display setup, and when they come to upgrade their years-old laptop, they are going to be unpleasantly surprised to find that the (otherwise shiny and new) 2022 MacBook Air can’t do that.
It’s still not a dealbreaker — and it doesn’t impact everyone of course — but it is a noteworthy con. I’m obviously not expecting the base chip to drive multiple 6K displays like the Pro/Max/Ultra chips can. An adequate bar would be the ability to output to two 4K screens at once, plus the laptop’s own screen. I don’t think I’m asking for much; just match what the old Intel machines could do.
I hope this isn’t a new product segmentation scheme on Apple’s part to differentiate pro and non-pro lines by how many screens they can support. That would be dumb. I don’t think that’s the case. The M3 will surely close the loop. Right?
I swear, sometimes it feels like Apple waits for everyone to give up hope for a feature, only to deliver it on a silver platter the very next year. Cynically, better late than never. Practically, this is great news.
I love using the Photos app to scroll through years of images, edit and crop right on the phone, run photo screensavers on the family room Apple TV, and glance at relevant images in the Photos home screen widgets across my devices.
It all just works swimmingly … except for the rather crucial part about getting those photos collated in the first place. The strategy our family has adopted until now, which I assume is what everyone else does, was to designate one person’s account as ‘primary’, in charge of holding all the photos. If others take pictures, they send them to said person, who saves them to the canonical archive.
This is obviously a clunky ‘solution’. The primary person takes on a lot of responsibility to manage the library, including doing all the cropping and editing, and loses the ability to have a safe space for their own personal photos that they want to keep separate. A family member cannot see all the family pictures on their own phone unless they also keep copies in their own individual bucket of iCloud, something that is both annoying to manage manually and wastes gigabytes of our 2 TB storage plan with duplicated content. A particular pain point in our household is that this arrangement necessitates having the designated person’s account signed into the Apple TV, so that the Photos screensaver will work. Unfortunately, that means those Apple TVs are unable to participate in HomeKit, because the HomeKit home configuration was set up on someone else’s account, and tvOS won’t let you sign in to both at the same time.
The newly announced iCloud Shared Photo Library codifies the impromptu approach we have all been using into an official feature, giving all the benefits without the aforementioned downsides. Photos can be saved to your personal library, or sent to the shared library — which all members can access and view. iOS will use machine learning to remind you to share images to the family library where appropriate, and can (optionally) do it automatically when it detects the family has gone on a group trip together. Edits to the photos and metadata adjustments can be made by anyone, and automatically synchronise. It just works.
As Apple presented it, it seemed like the shared library would be tied to the Family Sharing system; the six users are the six people in your Family Sharing group. That would certainly be the Occam’s razor approach, removing the need for additional account management steps. However, apparently that is not the case, and the “up to six users” can include people who aren’t in your Family Sharing circle. That definitely opens up the feature to groups of people who want to share their photos but are not neatly contained in a single household with just one shared payment method between them. It does raise some finicky questions, though, like how exactly iCloud allocates the shared library’s storage. If someone contributes a photo, does the file size count against their personal iCloud storage quota, against the person who originally created the shared library, or against a wholly separate bucket altogether? Who pays if you need more space? Who is in control of adding and removing people? Can you be removed from the shared library against your will, and if that happens, can you get a local copy of all the pictures of you before you lose access?
As App Store rules around alternative payment systems are relaxed (by force), In-App Purchase is more exposed to competition and has to do more to compete. Long term, this will be positive for customers, with lower prices and better features across the board. In the short term, those same competitive forces mean Apple will have to pull back on some of the customer-friendly In-App Purchase policies to align with the market and keep publishers onboard.
As evidenced in discovery from the Epic vs Apple trial, the churn from ‘ungrandfathering’ price increases was one factor that led Netflix to exit In-App Purchase in 2018.
The prior policy, which meant a subscription’s price could not be increased without explicit user consent, was incredibly favourable to the customer, but out of whack with general customer expectations. The vast majority of subscriptions in the world do not work that way. In-App Purchase was a stark outlier. It stood in contrast to even Apple’s own subscriptions like iCloud or Apple One; their prices increase freely, with notification but without consent.
So, now, In-App Purchase will work the same way. I don’t think it’s something to get too mad at Apple about. It’s the reality of business; you have to balance developer and customer interests. In this instance, Apple has still enforced appropriate price caps to stop abuse of the system. And In-App Purchase remains highly customer favourable overall, with how easy each subscription is to cancel.
To me, the Apple brand ultimately stands for high-quality premium products, developed by teams of people that care deeply about what they are working on and have the freedom to sweat the details. Whilst they don’t always succeed, their consistency at achieving that feeling when it comes to hardware design is unrivalled. Every part, every component, every material, appears to have been thoroughly considered and debated. Nothing is rushed or skimped on. That permeates through to the end product, tangibly and intangibly so. That doesn’t mean everything they make is a surefire hit or a runaway success; just that someone cared about making it.
I’m not sure I could name a single Apple service that meets that bar. Apple’s services tick the boxes, and they mostly do what they promise. However, none of them comes close to the quality of experience I expect from things branded with the Apple logo. When I am using these apps, I am not filled with confidence that striving for greatness was a top priority. Far too often, meeting revenue goals and business objectives seems to have been more important to their creation.
They are built to a passing grade, but nothing more. Basic features found in rival companies’ services are either lacking altogether in Apple’s apps, or implemented half-heartedly with sluggish performance. Browsing in Music and TV is painful, with an over-reliance on the infinite scroll. New content is just tacked onto the bottom of already long lists. Meanwhile, the navigation bars are blank when they could include simple shortcut buttons and filters to help users navigate and explore. Moreover, these apps feature too many loading states and too much waiting around. They are akin to janky web apps, rather than rich, responsive experiences.
Frequently, it seems the content teams and the tech teams are isolated from each other, when they really should be in sync and working together to make everything sing. Arcade is trapped inside a tab in the App Store app, and obvious synergies with Game Center are not exploited; Game Center remains in a quasi-extant state as a panel in Settings. The Library tab in TV is useless in the modern streaming service era. (Frankly, the entire TV app does a disservice to the content Apple is producing for it.) Another example: Apple Music significantly relaunched the Radio tab in 2020, boasting three live broadcasts and dozens of weekly shows, but all that is for nothing when it is still impossible for users to subscribe to a show, to be notified when an artist goes live or when new episodes are available to listen to on-demand.
On a daily basis, I encounter issues, ranging from small niggles to significant gaps in functionality. These things have been in this state of mediocrity for many years. I’m losing faith that anyone in a position of power at the Services group cares enough to make them better. The high standards seen in the products of Apple’s hardware divisions are not reflected here.
Grading on a curve, Music and Fitness are the best, Arcade is in the middle, and TV+ and News+ are fighting it out at the bottom. But I stress, that’s grading on a curve of their own output. Apple’s best is not good enough. It is all middling to inferior, creaky and uninspired. Organisational dysfunction, a mountain of tech debt, distorted leadership incentives, and lack of passion likely all play a role. Whatever the cause, fixing this stems from the top. Services’ engineering and design teams have to be empowered with the resources and time to effectively execute and ultimately deliver excellence to customers, like the hardware teams clearly are.
Going into the quarter with Netflix providing guidance of +2.5 million subscriber adds, the company shocked everyone with poor results: minus 500,000 subscribers this quarter, and another minus 2 million expected next quarter. They blamed 700,000 of the lost subscribers on their exit from the Russian market, which softens the current quarter’s numbers slightly. Obviously, the figures aren’t glowing, but I was struck by how strong the blowback on social media was to the news. Overnight, Netflix has suddenly become a service that nobody uses? That’s how my Twitter feed was acting on Wednesday, at least. Netflix may not be the exciting place anymore, but it’s undeniably the bedrock of modern television, and I don’t see that changing.
Firstly, the company is a behemoth on every metric: viewership, subscriber count, and profitability. Sensations like Squid Game cannot be created by any other service; only Netflix has the worldwide content development infrastructure to get it made, and the immense audience reach to make it popular. Kicking Netflix out of culture is going to be really hard; nigh impossible, I think. This is especially true when you consider they are the only streaming service with profitable operations. That means everybody else is having to sink into savings (and debt) merely to try and catch up. Maybe Disney+ will overtake them, but Netflix will be able to stick around as a top-three player forever.
Of course, the investor base wanted growth and the shares got pummelled for missing such expectations, but my point here is not about the stock market side. A funny parallel is the community opinion of Apple TV+, which has also gone through an about-face recently. For two years straight, the general punditry derided TV+ as a silly venture that would never succeed. Suddenly, Apple wins an Oscar and now everyone thinks TV+ is ruling the world. I have liked TV+ since the beginning, but the reality is that TV+ is still an upstart with a long road ahead of it. It is hilarious how quickly people turn on something, positively in the recent case of TV+ or negatively in the recent case of Netflix. The knee-jerk reactions are rarely on the money, and do not reflect the fact that streaming is a very long game.
When TV+ was announced, I argued Apple had a good shot at being successful with it, at a time when the common take was that TV+ would be a failure and fade away. My argument was based on the idea that the fundamentals of streaming are relatively simple: get content people want to watch. The way to do that is to attract talent with the promise of money, audience, and prestige. Resource-rich Apple had the money part guaranteed, and a decent runway to attaining the rest. Fast forward a couple of years and, sure enough, Apple has picked up the necessary prestige; audience size remains a question mark. Compare that to Netflix, which has a huge audience, a lot of money (recall they are the only profitable service so far) and decent, if diminished, levels of awards recognition and prestige. If I believed Apple could do it from scratch on that basis, then surely Netflix can from a position of incumbent strength. As long as those fundamentals remain strong, I’m not worried about Netflix’s future. The subscriber numbers need to be a lot worse before I deem it anything other than turbulence and growing pains.
Even as they reach saturation, they also have a lot more headroom to potentially exploit. Netflix has been somewhat complacent when it comes to business model expansion, with CEO Reed Hastings preferring a simpler, streamlined approach. Now, he is forced to relent slightly and develop things like a cheaper ad-supported tier of Netflix; something which could catapult their market share even higher than it already is. They can also juice their financials and extract some incremental growth out of the announced crackdown on account sharing. They don’t need to annoy their entire user base with account verification screens; just the fraction that is flagrantly abusing the system with one password shared amongst three, four, or more households. That’s still tens of millions of people to try to monetise, revenue which Netflix can then reinvest in content for the long term. As a reminder of the scale here, Netflix has more freeloaders than most of these services have total subscribers. I think Netflix will be just fine.
I see the Mac Studio as the spiritual successor to the 2013 Mac Pro. It is meant to be small and compact enough to sit on the desk, not under the desk. It has a lot of IO ports for attaching external storage, additional displays, and other peripherals, but it is not a user-expandable machine. The 2013 Mac Pro was compact, if only because Apple gambled on a future of GPU-oriented computation that never really panned out. Fast forward to the present day, and there is no need for trickery; it is the sheer efficiency of Apple Silicon that enables the Mac Studio to boast top-tier performance in CPU and GPU benchmarks, all housed in an enclosure even smaller than the 2013 Mac Pro.
However, whereas that Mac Pro made a statement, the Mac Studio is wholly perfunctory in its design. The Mac Pro was a cooler object: a perfect cylinder in shape, with a shiny reflective casing; it even had backlit USB ports that illuminated when an accelerometer detected the machine had been turned around. The Mac Studio is a boring box with rounded corners, and has no party tricks to speak of. The trashcan was a truly wild, out-there design. Apple was admittedly less ambitious with the 2019 Mac Pro, which resembles a traditional tower workstation, but even that leaves more of a lasting impression than the Mac Studio, thanks to its unique lattice of milled circular vent holes.
In truth, the Mac Studio is basically just a fat Mac mini. Compared to a Mac Pro, or the 2021 MacBook Pro, or the colourful M1 iMac, the Mac Studio’s industrial design doesn’t offer much to get excited about — save for the philosophical milestone that is front-facing IO. That’s a bit of a shame, because the introduction of a brand new model of Mac is precisely the best time to do something entirely new. But Apple opted to play it safe this time, perhaps because the failings of recent attempts to be more adventurous — like the butterfly keyboard — are still fresh in their minds. The Mac Studio contains radical innards in a plain exterior. That being said, in all other respects, the Mac Studio looks set to be a home run, so any feelings of disappointment will ultimately be fleeting.
Apple is doing everything they can to toe the line of compliance with the Netherlands ruling on alternative payment systems for dating apps. I’m not sure you could find a webpage more emblematic of following the letter of the law rather than the spirit of it. They are also simultaneously appealing the decision, and that tone comes across in the text too, as if each sentence is dripping with resentment.
I can only assume this is just the first bout in many rounds of back-and-forth over terms, that will be replicated and reproduced on a global scale eventually. This court ruling is on enabling competition for in-app payment systems, rather than the general monopoly of mobile app stores. However, the two are obviously inextricably linked. No one is going to use a third-party payment system when the saving compared to Apple’s built-in offering is a measly 3%. These current terms will not incite competition in payment systems as no developer will ever implement one. If you think the 3% will just about cover independent credit card processing fees, the customer acquisition costs and additional support overhead alone will make it an unprofitable course of action.
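To illustrate the point, here are the developer proceeds per $10 sale under each scheme. The commission rates come from the terms discussed above; the ~3% card-processing fee is my own assumption for illustration:

```python
# Developer proceeds per sale: Apple's commission, plus (for external
# payment systems) an assumed ~3% card-processing fee.
def proceeds(price: float, commission: float, card_fee: float = 0.0) -> float:
    return round(price * (1 - commission - card_fee), 2)

price = 10.00
print(proceeds(price, 0.30))        # built-in IAP, standard rate -> 7.0
print(proceeds(price, 0.27, 0.03))  # external payments: 27% + fees -> 7.0
print(proceeds(price, 0.15))        # built-in IAP, Small Business  -> 8.5
print(proceeds(price, 0.12, 0.03))  # external, hypothetical 12%    -> 8.5
```

Before customer-acquisition costs and support overhead, the developer nets the same amount either way, which is exactly why nobody will bother implementing an external system under these terms.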
Apple’s stated policy is not sustainable long term. I don’t know whether it will be changed as a result of these proceedings, or by a different lawsuit down the road, but it will change. Everyone agrees 27% is a joke. I think it’s quite reasonable to say that 0% would also be unfair to Apple; Apple deserves something. It’s just a matter of figuring out what an acceptable rate is, in a market which lacks other forms of competition like alternative app stores or native app sideloading. There are other distribution issues that Apple’s App Store model imposes, but ultimately money talks, and all of this legal theatre is a protracted negotiation over that core commission structure. As a member of the Small Business program myself, 12% (15%−3%) sure feels a whole lot fairer than 27%. I honestly believe most of these big company lawsuits would fall away if Apple announced that 12% was going to be the new normal for everyone.
Face ID isn’t superior to Touch ID in every respect, and vice versa. For instance, even five years on from the introduction of the TrueDepth camera system with iPhone X, Apple recommends that identical twins use only passcode authentication to unlock, because Face ID will not be able to reliably tell them apart. Touch ID did not have this problem. Buying with Apple Pay is also nicer with Touch ID, compared to the double-click dance that Face ID requires. On balance, though, if pressed to choose just one approach, I think Face ID is the obvious choice, because its best benefits are really great; first-time setup is far more streamlined than the fingerprint registration process, and the most frequent use case of unlocking your phone is so much more elegant with Face ID. It also has a magical quality that Touch ID lacks. It is much cooler to look at the screen than to place your thumb on a fingerprint reader.
This is the route Apple has taken since 2017: Face ID only, in the name of simplicity and (partly) cost savings. Pre-pandemic, I think they could have gotten away with that strategy forever. Post-2020, the see-saw of tradeoffs suddenly weighs down very much in the other direction. Until the release of the iOS 15.4 beta, the return of Touch ID seemed inevitable to me.
Identical twins, for example, could hypothetically enable Touch ID and not Face ID. That arrangement would be much better than the current compromise of being forced to use just a passcode.
The existence of the Unlock with Mask feature probably means that Apple doesn’t have to ship an iPhone with Touch ID again. I would certainly take it as a signal that a Touch ID iPhone is not coming back anytime soon. But I still think they should do it. Long-term, the best iPhone is surely one that offers both Face ID and Touch ID (either via an under-display scanner or an iPad-esque side button sensor). Users would be able to set up both types of biometrics, and the iPhone would simply unlock as soon as either is presented to it. It really would be a best-of-both-worlds scenario, with each biometric’s advantages making up for the drawbacks of the other.
I also think it is somewhat telling that Apple goes out of its way to call out, right there in the settings UI, that Face ID’s accuracy is lessened when using the mask unlock mode. The peak of COVID and mask-wearing is (hopefully) behind us, but mask-wearing isn’t going away altogether. The Unlock with Mask feature is going to be widely used for years to come, and it doesn’t feel very sustainable for Apple’s solution to this problem to be something that they openly warn significantly impacts the security of your device. There is also the ongoing threat of other wearable items — like sunglasses or even Apple’s own forthcoming headset product — that may impact the usefulness of Face ID over the course of this decade. Bringing back Touch ID in some form is a hedge against all of those potential risks, and one that many people would applaud.
The new generation of MacBook Pro features a terrific display. The colour depth, maximum brightness and contrast levels it can achieve are truly stunning, and a huge leap over previous models of MacBook. It’s also significantly higher resolution than the 2019 16-inch, and the increased pixel density is noticeably better in terms of the visible detail of photos and videos. The extra resolution also enabled Apple to restore 2x Retina mode as the standard display setting without sacrificing effective screen real estate. The panel’s 120 Hz refresh rate is icing on the cake, even if macOS still hasn’t quite caught up to the hardware capability (although the 12.2 beta seed is much better in this regard).
However, all display technologies have tradeoffs, and the mini-LED design seen in the MacBook Pro is no different. Blooming is often discussed as a downside of mini-LED but funnily enough, I don’t see it crop up too much in how I use my computer. It’s there if you seek it out, but you really have to hunt.
As shown in the video above, a persistent niggle for me is the vignetting effect around the edges of the display. The extreme edge of the screen is just slightly darker all the way around, and it sticks out when the rest of the screen is uniformly bright. You can observe this border pretty much all the time. It’s annoying. I’d put it in the same category as the notch. In practice, because it only impacts the screen quality at the very fringes, it rarely intrudes on the content you are viewing, and your brain learns to ignore the imperfections at the periphery.
Another, more subtle, artefact is the screen response time when changing between light and dark content. Basically, if you have a big dark-coloured blob and then quickly change to a new screenful of content that is mostly white, it takes a few extra milliseconds for the black regions to turn white. I haven’t precisely timed it, and it might be as small as a 100 millisecond lag, but it is noticeable to the human eye. It’s sort of like OLED jelly scrolling, but less prevalent.
Modern LCD backlights certainly don’t suffer these vignetting problems, and screen response time can be consistently as low as 1 millisecond. Apple clearly made the right choice to move from LCD to mini-LED, though; it is simply superior in most regards. A hypothetical decision between a MacBook Pro with mini-LED and one with an OLED screen is less clear cut. OLEDs don’t exhibit the edge vignetting and have no blooming, because each pixel is individually lit, but they bring their own issues, like burn-in and jelly scrolling, to contend with.
This is my incredibly succinct five-minute review of every Apple TV+ show released to date. I figured I might as well get this out of the way before the volume of content makes it untenable; even this video ignores the dozen original movies the company has put out so far. Don’t take it too seriously. The main takeaway, if any, is that Apple TV+ continues to expand its content library, with more hits than misses, and will (easily) eclipse 150 premium originals by the end of 2022. The user interface and app experience issues remain the service’s biggest roadblock to mainstream adoption.
I can’t quite believe how much ink has been spilled these last few months about a concept that doesn’t exist and is — at best — a pipe dream. The metaverse is not a thing. It’s meaningless. Facebook held an hour-long keynote event which consisted wholly of computer-generated sequences of floating Memoji/Xbox avatars. Microsoft joined the fray with similarly unsubstantiated claims that Teams is becoming a metaverse.
The bandwagoning of the name ‘metaverse’ is dumb, but I’m not really interested in that aspect, so I’m just going to ignore the misappropriation. Marketing teams always do stupid stuff; see the mobile carriers’ ongoing exaggeration of what 5G can do.
I take the meaning of “metaverse” to be the generally accepted idea that people will wear some kind of headset or glasses and be able to access a virtual world, meeting up with others in some kind of virtual geography. The realism and quality of the experience is promised to be so good that your brain believes you are actually there; your senses succumb to the generated environment so completely that you suspend disbelief and accept what you are interacting with as real. Perhaps it is not an all-encompassing experience, but instead augmented reality avatars/objects appear to materialise in the space around you and behave accordingly.
Either way, it’s not feasible. It is not grounded in any sort of technological truth. There’s not a tech demo on earth that can deliver anything close to that description; nothing bespoke exists, and something for the mass market is even more illusory. It’s not a real thing.
Break the vision apart into any individual element and the state-of-the-art technology is nowhere close to good enough. Realtime visual fidelity has to advance leaps and bounds to be as convincing as what Facebook ‘demonstrated’ in its mockups. I’d love to know how long it took whatever render farm they used to make those videos. Days, probably. Even the mockups aren’t what I’d call convincing to a human, because the avatars look like avatars and not people. If that is the aim, forget it. We can’t even get CGI people in Hollywood movies to reliably cross the uncanny valley, and those films take months to generate a single second of footage. For a portable headset, it’s not even on the horizon. Five years. Ten years. Maybe longer. It’s not going to happen.
Graphics are just one of a thousand problems. All the other senses need to be satisfied too, for a start, and the technology for generating synthetic smells, tastes and touch is so much further behind where we are with GPUs for photorealistic imagery. One of the things that motivated me to write up this ridiculousness in a blog post is the fencing demo from Facebook’s Meta keynote. Zuckerberg is shown playing against a hologram of a professional athlete, the two waving swords at each other. In the demo, when he lunges, she parries, with the swords perfectly stopping in mid-air. How on earth is that going to be possible, outside of a visual effects mockup? There’s no way to recreate the sensation of metal hitting metal and the sabres rebounding. Rather than an in-air clash of swords, the real sword is just going to pass right through the VR one. A vibration motor and some haptic feedback doesn’t cut it, although that doesn’t stop Zuckerberg miming contact and saying “that’s a little too realistic”. And that’s before we get to network latency hurdles and a myriad of other issues.
What these companies are touting is that a fully immersive, engrossing alternate world is only a few years away, just out of sight. The truth is it’s not anywhere close. I’m not a denier of augmented reality technology altogether. There will be continued small and meaningful improvements to enterprise and consumer offerings, many of which will find their niche and bring genuine utility and/or entertainment. It will be able to enhance our lives. For instance, VR gaming is basically already here, save for less clunky hardware to run it on and nicer graphics. I could even see how a portable headset, or smart glasses, could replace the phone in the medium term as the primary communications device for humanity. The power-efficient-yet-technically-capable hardware to pull that off is still a ways out — maybe ten years, probably twenty — but it’s a plausible future that is deserving of consideration. I’d put that idea in the same bucket as self-driving cars or consumer space travel. These things live in the realm of the tech demo today, but they have shown feasibility and appear attainable. Contrast that with the “metaverse”, which is merely made-up fantasy.
Apple doesn’t trot out Federighi to a third-party conference with a highly-produced Keynote deck for the fun of it. They are clearly concerned that European lawmakers are actually going to do something they don’t want; that is, pass laws requiring them to offer sideloading as an option. On the whole, he presents good arguments against the policy. You can watch the full thing here.
However, one particular talking point highlights a severe weakness that I see in Apple’s stance. Federighi posits that a social networking app may choose to “avoid the pesky privacy protections of the App Store” and only make their apps available via sideloading. Apple’s customers would then have to leave the ‘safe’ Apple software ecosystem, or lose touch with their family and friends. This is sort of true. But what is omitted is that an app choosing to leave the App Store is not primarily doing so to avoid Apple’s privacy standards, but because it would then be able to avoid Apple’s IAP rules.
Apple benefits financially — measured in the billions of dollars per year — from keeping the App Store a monopoly. However much it wants to tout the user privacy and safety benefits, Apple’s position would be far stronger if cynics weren’t able to point to the money being accrued by the App Store gravy train. The 30% cut is ultimately the driving factor that has led Europe to want to pass these competition laws in the first place. If Apple truly wants to put customers first and protect them from sideloading, alternative app stores and the like, it needs to compromise on its business policies somehow.