Not all video games need video. Over the years, games that exist only in audio have taken players into entirely new worlds in which there’s nothing to see and still everything to do. These games have huge accessibility implications, allowing people who can’t see to play an equally fun, equally immersive game with their other senses. And when all you have is sound, there’s actually even more you can do to make your game great.
On this episode of The Vergecast, we explore the history of audio-only games with Paul Bennun, who has been in this space longer than most. Years ago, Bennun and his team at Somethin’ Else made a series of games called Papa Sangre that were among the most innovative and most popular games of their kind. He explains what makes an audio game work, why the iPhone 4 was such a crucial technological achievement for these games, and more.
Bennun also makes the case that, right now, even in this ultra-visual time, is the perfect time for a rebirth of audio games. He points to AirPods and other spatial audio headphones along with devices like the Vision Pro, advances in location tracking, and improvements in multiplayer gaming as reasons to think that audio-first games could be a huge hit now. It even sounds a bit like Bennun might have a game in the works, but he won’t tell us about that.
If you want to know more about the topics we cover in this episode, here are a few links to get you started:
Hi, friends! Welcome to Installer No. 37, your guide to the best and Verge-iest stuff in the world. (If you’re new here, welcome, send me links, and also, you can read all the old editions at the Installer homepage.)
I also have for you a thoroughly impressive new iPad, a clever new smart home hub, a Twitter documentary to watch this weekend, a sci-fi show to check out, a cheap streaming box, and much more. Let’s do it.
(As always, the best part of Installer is your ideas and tips. What are you reading / watching / cooking / playing / building right now? What should everyone else be into as well? Email me at installer@theverge.com or find me on Signal at @davidpierce.11. And if you know someone else who might enjoy Installer, tell them to subscribe here.)
The Drop
The new iPad Pro. The new Pro is easily the most impressive piece of hardware I’ve seen in a while. It’s so thin and light, and that OLED screen… gorgeous. It’s bonkers expensive, and the iPad’s big problem continues to be its software, but this is how you build a tablet, folks.
Animal Well. Our friends over at Polygon called this “one of the most inventive games of the last decade,” which is obviously high praise! By all accounts, it’s unusual, surprising, occasionally frustrating, very smart, and incredibly engaging. Even the trailer looks like nothing I’ve seen before. (I got a lot of recommendations for this one this week — thanks to everyone who sent it in!)
Final Cut Camera. This only got a quick mention at Apple’s event this week, but it’s kind of a huge deal! It’s a first-party, pro-level camera app for iPhones and iPads that gives you lots of manual control and editing features. It’s exactly what a lot of creatives have been asking for. No word yet on exactly when it’ll be available, but I’m excited.
The Aqara Hub M3. The only way to manage your smart home is to make sure your devices can support as many assistants, protocols, and platforms as possible. This seems like a way to do it: it’s a Matter-ready device that can handle just about any smart-home gear you throw at it.
“Battle of the Clipboard Managers.” I don’t think I’ve ever linked to a Reddit thread here, but check this one out: it’s a long discussion about why a clipboard manager is a useful tool, plus a bunch of good options to choose from. (I agree with all the folks who love Raycast, but there are a lot of choices and ideas here.)
Proton Pass. My ongoing No. 1 piece of technology advice is that everyone needs a password manager. I’m a longtime 1Password fan, but Proton’s app is starting to look tempting — this week, it got a new monitoring tool for security threats, in addition to all the smart email hiding and sharing features it already has.
The Onn 4K Pro. Basically all streaming boxes are ad-riddled, slow, and bad. This Google TV box from Walmart is at least also cheap, comes with voice control and support for all the specs you’d want, and works as a smart speaker. I love a customizable button, too.
Dark Matter. I’ve mostly loved all the Blake Crouch sci-fi books I’ve read, so I have high hopes for this Apple TV Plus series about life in a parallel universe. Apple TV Plus, by the way? Really good at the whole sci-fi thing.
The Wordle archive. More than 1,000 days of Wordle, all ready to be played and replayed (because, let’s be honest, who remembers Wordle from three weeks ago?). I don’t have access to the archive yet, but you better believe I’ll be playing it all the way through as soon as it’s out.
Black Twitter: A People’s History. Based on a really fun Wired series, this is a three-part deep dive Hulu doc about the ways Black Twitter took over social media and a tour of the internet’s experience of some of the biggest events of the last decade.
Screen share
Kylie Robison, The Verge’s new senior AI reporter, tweeted a video of her old iPhone the other day that was like a perfect time capsule of a device. She had approximately 90,000 games, including a bunch that I’m 100 percent sure were scams, and that iPod logo in her dock made me feel a lot of things. Those were good days.
I messaged Kylie in Slack roughly eight minutes after she became a Verge employee, hoping I could convince her to share her current homescreen — and what she’d been up to during her funemployment time ahead of starting with us.
Sadly, she says she tamed her homescreen chaos before starting, because something something professionalism, or whatever. And now she swears she can’t even find a screenshot of her old homescreen! SURE, KYLIE. Anyway, here’s Kylie’s newly functional homescreen, plus some info on the apps she uses and why.
The phone: iPhone 14 Pro Max.
The wallpaper: A black screen because I think it’s too noisy otherwise. (My lock screen is about 20 revolving photos, though.)
The apps: Apple Maps, Notes, Spotify, Messages, FaceTime, Safari, Phone.
I need calendar and weather apps right in front of me when I unlock my phone because I’m forgetful. I use Spotify for all things music and podcasts.
Work is life so I have all those apps front and center, too (Signal, Google Drive, Okta).
Just before starting, I reorganized my phone screen because 1) I had time and 2) I knew I’d have to show it off for David. All the apps are sorted into folders now, but before, they were completely free-range because I use the search bar to find apps; I rarely scroll around. So just imagine about 25 random apps filling up all the pages: Pegasus for some international flight I booked, a random stuffed bell pepper recipe, what have you.
I also asked Kylie to share a few things she’s into right now. Here’s what she shared:
I actually started 3 Body Problem because of an old Installer. Also, I loved Fallout and need more episodes.
My serious guilty pleasure is Love Island UK, and I’ve been watching the latest season during my break.
Crowdsourced
Here’s what the Installer community is into this week. I want to know what you’re into right now as well! Email installer@theverge.com or hit me up on Signal — I’m @davidpierce.11 — with your recommendations for anything and everything, and we’ll feature some of our favorites here every week. And if you want even more recommendations, check out the replies to this post on Threads.
“I have always found Spotify’s recommendation algorithm and music channels to be terrible; wayyy too much fussing and tailoring required when all I want is to hit play and get a good diversity of music I will like. So I finally gave up and tried Pandora again. Its recommendation / station algorithm is so wildly better than Spotify’s (at least for me), it’s shocking how it has seemed to fade into cultural anonymity. Can’t speak for others, but if anyone out there is similarly frustrated with Spotify playlists, I highly recommend the Pandora option.” – Will
“Mantella mod for Skyrim (and Fallout 4). Not so much a single mod, but a mod plus a collection of apps that gives (basically) every NPC their own lives and stories. It’s like suddenly being allowed to participate in the fun and games with Woody and Buzz, rather than them having to say the words when you pull the string.” – Jonathan
“The Snipd podcast app (whose primary selling point is AI transcription of podcasts and the ability to easily capture, manage, and export text snippets from podcasts) has a new feature that shows you a name, bio, and picture for podcast guests, and allows you to find more podcasts with the same guest or even follow specific guests. Pretty cool!” – Andy
“I have recently bought a new Kindle, and I’m trying to figure out how to get news on it! My current plan is to use Omnivore as my bookmarks app, which will sync with this awesome community tool that converts those bookmarks into a Kindle-friendly website.” – David
“With all the conversation around Delta on iOS, I have recently procured and am currently enamored with my Miyoo Mini Plus. It’s customizable and perfectly sized, and in my advanced years with no love for Fortnite, PUBG, or any of the myriad of online connected games, it’s lovely to go back and play some of these ‘legally obtained’ games that I played in my childhood.” – Benjamin
“Rusty’s Retirement is a great, mostly idle farm sim that sits at the bottom or the side of your monitor for both Mac and Windows. Rusty just goes and completes little tasks of his own accord while you work or do other stuff. It rocks. Look at him go!” – Brendon
“Last week, Nicholas talked about YACReader and was asking for another great comic e-reader app for DRM-free files. After much searching myself, I settled on Panels for iPad. Great Apple-native UI, thoughtful features, and decent performance. The free version can handle a local library, but to unlock its full potential, the Pro version (sub or lifetime) supports iCloud, so you can keep all your comics in iCloud Drive, manage the files via a Mac, and only download what you’re currently reading — great for lower-end iPads with less storage.” – Diogo
Signing off
I have spent so much time over the years trying to both figure out and explain to people the basics of a camera. There are a billion metaphors for ISO, shutter speed, and aperture, and all of them fall short. That’s probably why a lot of the photographer types I know have been passing around this very fun depth of field simulator over the last few days, which lets you play with aperture, focal length, sensor size, and more in order to understand how different settings change the way you take photos. It’s a really clever, simple way to see how it all works — and to understand what becomes possible when you really start to control your camera. I’ll be sharing this link a lot, I suspect, and I’m learning a lot from it, too.
Google is preparing to hold its annual Google I/O developer conference next week, and naturally, it will be all about AI. The company has made no secret of that. Since last year’s I/O, it has debuted Gemini, its new, more powerful model meant to compete with OpenAI’s ChatGPT, and has been deep in testing new features for Search, Google Maps, and Android. Expect to hear a lot about that stuff this year.
When Google I/O will happen and where you can watch
Google I/O kicks off on Tuesday, May 14th at 10AM PT / 1PM ET with a keynote talk. You can catch that on Google’s site or its YouTube channel, via the livestream link that’s also embedded at the top of this page. (There’s also a version with an American Sign Language interpreter.) Set a good amount of time aside for that; I/O tends to go on for a couple of hours.
Google will probably also focus on ways it plans to turn your smartphone into more of an AI gadget. That means more generative AI features for Google’s apps. It’s been working on AI features that help with dining and shopping or finding EV chargers in Google Maps, for instance. Google is also testing a feature that uses AI to call a business and wait on hold for you until there’s actually a human being available to talk to.
The Pixel as an AI gadget
I/O could also see the debut of a new, more personal version of its digital assistant, rumored to be called “Pixie.” The Gemini-powered assistant is expected to integrate multimodal features like the ability to take pictures of objects to learn how to use them or get directions to a place to buy them.
That kind of thing could be bad news for devices like the Rabbit R1 and the Humane AI Pin, which each recently launched and struggled to justify their existence. At the moment, the only advantage they maybe sort of have is that it’s kind of hard (though not impossible) to pull off using a smartphone as an AI wearable.
Will there be hardware at I/O?
It seems unlikely that Google will focus much on new hardware this year, given that the Pixel 8A is already available for preorder and you can now buy a relaunched, cheaper Pixel Tablet, unchanged apart from the fact that the magnetic speaker dock is now a separate purchase. The company could still tease new products like the Pixel 9 — which, in typical Google fashion, is already leaking all over the place — and the Pixel Tablet 2, of course.
The search giant could also talk about its follow-up to the Pixel Fold, which is rumored to get a mouthful of a rebrand to the Pixel 9 Pro Fold.
The AI gadgets that were supposed to save us from our phones have arrived woefully underbaked — whatever illusions we might have held that the Humane AI Pin or the Rabbit R1 were going to offer any kind of salve for the constant rug burn of dealing with our personal tech are gone. Hot Gadget Spring is over and developer season is upon us, starting with Google I/O this coming Tuesday.
It also happens to be a pivotal time for Android. I/O comes on the heels of a major re-org that put the Android team together with Google’s hardware team for the first time. The directive is clear: to run full speed ahead and put more AI in more things. Not preferring Google’s own products was a foundational principle of Android, though that model started shifting years ago as hardware and software teams collaborated more closely. Now, the wall is gone and the AI era is here. And if the past 12 months have been any indication, it’s going to be a little messy.
So far, despite Samsung and Google’s best efforts, AI on smartphones has really only amounted to a handful of party tricks. You can turn a picture of a lamp into a different lamp, summarize meeting notes with varying degrees of success, and circle something on your screen to search for it. Handy, sure, but far from a cohesive vision of our AI future. But Android has the key to one important door that could bring more of these features together: Gemini.
Gemini launched as an AI-fueled alternative to the standard Google Assistant a little over three months ago, and it didn’t feel quite ready yet. On day one, it couldn’t access your calendar or set a reminder — not super helpful. Google has added those functions since then, but it still doesn’t support third-party media apps like Spotify. Google Assistant has supported Spotify for most of the last decade.
But the more I come back to Gemini, the more I can see how it’s going to change how I use my phone. It can memorize a dinner recipe and talk me through the steps as I’m cooking. It can understand when I’m asking the wrong question and give me the answer to the one I’m looking for instead (figs are the fruit that have dead wasp parts in them; not dates, as I learned). It can tell me which Paw Patrol toy I’m holding, for Pete’s sake.
Again, though — party tricks. Gemini’s real utility will arrive when it can integrate more easily across the Android ecosystem; when it’s built into your earbuds, your watch, and into the very operating system itself.
Android’s success in the AI era rides on those integrations. ChatGPT can’t read your emails or your calendars as readily as Gemini; it doesn’t have easy access to a history of every place you’ve visited in the past decade. Those are real advantages, and Google needs every advantage right now. We’ve seen plenty of signals that Apple plans to unveil a much smarter Siri at WWDC this year. Microsoft and OpenAI aren’t sitting still either. Google needs to lean into its advantages to deliver AI that’s more than a party trick — even if it’s a little un-Android-like.
The Garmin Lily 2 was the tracker I needed on vacation
Its limitations made it fall short in daily life but ended up being a plus while trying to disconnect from the world.
On my last day of vacation, I sat on a pristine beach, sipping on a piña colada while staring at a turquoise Caribbean Sea. In four days, I’d charged my Apple Watch Ultra 2 three times, and I was down to about 30 percent. On the other wrist, I had the more modest $249.99 Garmin Lily 2 Sport. It was at about 15 percent, but I hadn’t charged it once. Actually, I’d left the cable hundreds of miles away at home. While pondering this, the Ultra 2 started buzzing. My phone may have been buried under towels and sunscreen bottles at the bottom of a beach bag, but Peloton was having a bad earnings day. The way that watch is set up, there was no way it would let me forget. The Lily 2 also buzzed every now and then. The difference was reading notifications on it was too bothersome and, therefore, easily ignored.
That tiny slice in time sums up everything that makes the Lily 2 great — and perhaps not so great.
My 10 days with the Lily 2 were split into two dramatically different weeks. The first was a chaotic hell spent zipping here and there to get 10,000 things done before vacation. The second, I did my very best to be an untroubled beach potato. That first week, I found the Lily 2 to be cute and comfortable but lacking for my particular needs. On vacation, its limitations meant it was exactly the kind of wearable I needed.
I wasn’t surprised by that. The Lily 2 is not meant to be a mini wrist computer that can occasionally sub in for your phone. It’s meant to look chic, tell you the time, and hey, here’s some basic notifications and fitness tracking. That’s ideal for casual users — the kind of folks who loved fitness bands and Fitbits before Google started mucking around with the formula.
The main thing with the Lily 2 is you have to accept that it’s going to look nice on your wrist but be a little finicky to actually use. The original Lily’s display didn’t register swipes or taps that well. It’s improved a smidge with the Lily 2, but just a smidge. I found reading notifications, navigating through menus, and just doing most things on the watch itself to be nowhere near as convenient as on a more full-fledged touchscreen smartwatch. This extra friction is a big reason why the Lily 2 just didn’t fit my needs in daily life.
As a fitness tracker, the Lily 2 is middling. The main additions this time around are better sleep tracking and a few more activity types, like HIIT, indoor rowing and walking, and meditation. There are also new dance fitness profiles with various subgenres, like Zumba, afrobeat, jazz, line dancing, etc. That said, the Lily 2 isn’t great for monitoring your data mid-workout. Again, fiddly swipes and a small screen add too much friction for that.
I also wouldn’t recommend trying to train for a marathon with the Lily 2. Since it uses your phone’s GPS, my results with outdoor runs were a mixed bag. One four-mile run was recorded as 4.01 miles. Great! Another two-mile run was logged as 2.4 miles. Less great. It’s a tracker best suited to an active life, but not one where the details really matter. Case in point, it was great for tracking my general activity splashing around and floating in the ocean — but it’s not really the tracker I’d reach for if I were trying to track laps in the pool.
At 35mm, it’s a skosh bigger than the original Lily but much smaller than just about every other smartwatch on the market. It’s lighter than most at 24.4g, too. That makes this a supremely comfortable lil watch. Most days, I forgot I was wearing it.
While I’m no fashionista, I didn’t feel like my lilac review unit was hard to slot into my daily wardrobe. But if playful colors aren’t your thing, the Classic version is $30-50 more and has a more elegant feel, a more muted color palette, and nylon / leather straps. (It also adds contactless payments.)
As a woman with a small wrist, the 35mm size is a plus. But while I personally don’t think the Lily 2 has to be a women’s watch, it is undeniably dainty. If you want something with a more neutral vibe or a slightly bigger size, Garmin has the Vivomove Trend or Vivomove Sport. Withings’ ScanWatch 2 or ScanWatch Light are also compelling options.
Ultimately, the Lily 2 is great for folks who want to be more active while trying to cut down on notifications. It’s also a great alternative if you miss the old Misfits, Jawbones, or Fitbit Alta HR. Deep down, I wish that were me, but the reality is I have too much gadget FOMO and care way too much about my running data. That said, the next time I go on vacation — or feel the urge to disconnect — I think I’ll reach for the Lily 2 and try to leave the rest of my life at home.
They unsurprisingly look like laptops — albeit with overall slimmer profiles.
The most interesting model is Dell’s new XPS 13 9345, which seems to be a sleeker rebirth of the XPS 13 Plus from 2022. It’s got the same touchy touch-bar on the top row and comes with only two USB-C ports for I/O.
There’s also a leaked new Inspiron 14 7441 Plus that’s reportedly equipped with a 16-core Snapdragon X Elite and has 16GB of base RAM. Inspirons are considered Dell’s everyman PC that isn’t as sleek as the XPS lineup, although this one looks like it has thinned, and seems to come with two USB-C ports, one USB-A, and a microSD card slot.
Dell revealed a new XPS lineup in January that introduced keyboards bearing Microsoft’s new Copilot key on the bottom row — and it looks like these leaked models have them, too. Dell, HP, and Lenovo have all partnered with Microsoft to release notebooks supporting Windows 11 AI features. And these leaked Dell laptops apparently have Microsoft’s upcoming “AI Explorer” features out of the box.
They’re among the first Snapdragon X laptops we’ve seen leak out — the other is a Lenovo Yoga Slim 7 that leaker WalkingCat unearthed.
Qualcomm’s Snapdragon X series are due to appear in laptops this summer, and it’s the chipmaker’s big bet to challenge Apple Silicon, Intel, and AMD on performance.
Microsoft’s new Xbox mobile gaming store is launching in July
Microsoft has been talking about plans for an Xbox mobile gaming store for a couple of years now, and the company now plans to launch it in July. Speaking at the Bloomberg Technology Summit earlier today, Xbox president Sarah Bond revealed the launch date and how Microsoft is going to avoid Apple’s strict App Store rules.
“We’re going to start by bringing our own first-party portfolio to [the Xbox mobile store], so you’re going to see games like Candy Crush show up in that experience, games like Minecraft,” says Bond. “We’re going to start on the web, and we’re doing that because that really allows us to have it be an experience that’s accessible across all devices, all countries, no matter what and independent of the policies of closed ecosystem stores.”
The store will be focused on first-party mobile games from Microsoft’s various studios, which include huge hits like Call of Duty: Mobile and Candy Crush Saga. Bond says the company will extend this to partners at some point in the future, too.
While games will naturally be part of the store, it sounds like the key parts of the Xbox experience will also be available. Bond argues there isn’t a gaming platform and store experience that “goes truly across devices — where who you are, your library, your identity, your rewards travel with you versus being locked to a single ecosystem.” So Microsoft is trying to build that with its Xbox mobile store.
Microsoft had also been building this store in anticipation of companies like Apple and Google being forced to open up their mobile app stores, but it’s clear the software giant isn’t willing to wait on the Digital Markets Act to shake out in Europe or any potential action in the US.
A web-only mobile store will be challenging to pull off, and it’s not immediately clear how Microsoft will position it as an alternative when these mobile games already exist on rival app stores. Bond says Microsoft will “extend” beyond the web, hinting that it could eventually launch a true rival to Google and Apple’s mobile app stores at some point soon.
Microsoft first hinted at a “next-generation store” in early 2022, just a month after the company announced its Activision Blizzard acquisition. “We want to be in a position to offer Xbox and content from both us and our third-party partners across any screen where somebody would want to play,” said Microsoft Gaming CEO Phil Spencer in an interview with the Financial Times last year. “Today, we can’t do that on mobile devices but we want to build towards a world that we think will be coming where those devices are opened up.”
Verizon and T-Mobile are trying to gobble up US Cellular
Now that they’ve got an extra $100 billion worth of premium airwaves and Sprint no longer nipping at their heels, how can the big three cellular carriers continue to consolidate and grow? Well, T-Mobile and Verizon “are in discussions to carve up U.S. Cellular,” The Wall Street Journal reports.
The report suggests this is about harvesting even more wireless spectrum; my colleague Allison pointed out in 2022 that US Cellular “tends to offer service where some of the major carriers don’t.” (It would certainly be nice for T-Mobile and Verizon customers to have better coverage, but I would prefer competition to lower my wireless bill.)
T-Mobile would reportedly pay over $2 billion for wireless spectrum licenses and take over “some operations;” the WSJ doesn’t say what Verizon wants, but says US Cellular “also owns more than 4,000 cellular towers that weren’t part of the latest sale talks.”
The idea behind splitting up US Cellular between T-Mobile and Verizon, the WSJ suggests, is to keep antitrust regulators from blocking the deal. Regulators wound up letting T-Mobile merge with Sprint after promises that it would turn Dish into a new fourth major US cellular carrier, but last we checked, Dish had yet to become a meaningful competitor.
Microsoft’s ‘air gapped’ AI is a bot set up to process top-secret info
Microsoft Strategic Missions and Technology CTO William Chappell announced that the company has deployed a GPT-4 large language model in an isolated, air-gapped environment on a government-only network. Bloomberg first reported the setup, citing an unnamed executive who claimed that the Azure Government Top Secret cloud-hosted model represents the first time a “major” LLM has operated separated from the internet.
Chappell announced the AI supercomputer on Tuesday afternoon at the “first-ever AI Expo for National Competitiveness” in Washington D.C. Unlike the models behind ChatGPT or other tools, Microsoft says this server is “static,” operating without learning from the files it processes or the wider internet.
Chappell told Bloomberg, “It is now deployed, it’s live, it’s answering questions, it will write code as an example of the type of thing it’ll do.” As Chappell mentioned to DefenseScoop, it has not been accredited for top-secret use, so the Pentagon and other government departments aren’t actually using it yet, whether that’s processing data for a particular mission or something like HR.
The US is propping up gas while the world moves to renewable energy
Electricity generation from fossil fuel-fired power plants, along with the greenhouse gas emissions that come with it, likely peaked in 2023, according to the annual global electricity review by energy think tank Ember. That means human civilization has likely passed a key turning point, according to Ember: countries will likely never generate as much electricity from fossil fuels again.
A record 30 percent of electricity globally came from renewable sources of energy last year thanks primarily to growth in solar and wind power. Starting this year, pollution from the power sector is likely to start dropping, with a 2 percent drop in the amount of fossil fuel-powered electricity projected for 2024 — a decline Ember expects to speed up in the long term.
“The decline of power sector emissions is now inevitable. 2023 was likely the pivot point – a major turning point in the history of energy,” Dave Jones, Ember’s insights director, said in an emailed statement. “But the pace ... depends on how fast the renewables revolution continues.”
It’s a transition that could be happening much faster if not for the US, which is already the world’s biggest gas producer, using record amounts of gas last year. Without the US, Ember finds, electricity generation from gas would have fallen globally in 2023. Global economies excluding the US managed to generate 62 terawatt hours less gas-powered electricity last year compared to the year prior. But the US ramped up its electricity generation from gas by nearly twice that amount in the same timeframe, an additional 115TWh from gas in 2023.
A big part of the problem is that the US is replacing a majority of aging power plants that run on coal, the dirtiest fossil fuel, with gas-fired plants instead of carbon pollution-free alternatives. “The US is switching one fossil fuel for another,” Jones said. “After two decades of building such a heavy reliance on gas power, the US has a big journey ahead to get to a truly clean power system.”
The US gets just 23 percent of its electricity from renewable energy, according to Ember, falling below the global average of 30 percent.
“Last century’s outdated technologies can no longer compete with the exponential innovations and declining cost curves in renewable energy and storage,” Christiana Figueres, former executive secretary of the United Nations Framework Convention on Climate Change, said in an emailed statement.
Ember’s report tracks closely with other predictions from the International Energy Agency (IEA), which called a transition to clean energy “unstoppable” in October. The IEA forecast a peak in global demand for coal, gas, and oil this decade (for all energy use, not just electricity). It also projected that renewables would make up nearly 50 percent of the world’s electricity mix by 2030.
Ember is a little more optimistic after more than 130 countries pledged to triple renewable energy capacity by 2030 during a United Nations climate summit in December. With that progress, renewable electricity globally would reach 60 percent by the end of the decade compared to less than 20 percent in 2000.
As people try to find uses for generative AI that are less about making fake photos and more about actual utility, Google plans to point AI at cybersecurity and make threat reports easier to read.
In a blog post, Google writes that its new cybersecurity product, Google Threat Intelligence, will bring together the work of its Mandiant cybersecurity unit and VirusTotal's threat intelligence with the Gemini AI model.
The new product uses the Gemini 1.5 Pro large language model, which Google says reduces the time needed to reverse engineer malware attacks. The company claims Gemini 1.5 Pro, released in February, took only 34 seconds to analyze the code of the WannaCry virus — the 2017 ransomware attack that hobbled hospitals, companies, and other organizations around the world — and identify a kill switch. That’s impressive but not surprising, given LLMs’ knack for reading and writing code.
But another possible use for Gemini in the threat space is summarizing threat reports into natural language inside Threat Intelligence so companies can assess how potential attacks may impact them — or, in other words, so companies don’t overreact or underreact to threats.
Google says Threat Intelligence also has a vast network of information to monitor potential threats before an attack happens. It lets users see a larger picture of the cybersecurity landscape and prioritize what to focus on. Mandiant provides human experts who monitor potentially malicious groups and consultants who work with companies to block attacks. VirusTotal’s community also regularly posts threat indicators.
The company also plans to use Mandiant’s experts to assess security vulnerabilities around AI projects. Through Google’s Secure AI Framework, Mandiant will test the defenses of AI models and help in red-teaming efforts. While AI models can help summarize threats and reverse engineer malware attacks, the models themselves can sometimes become prey to malicious actors. These threats include “data poisoning,” in which attackers insert corrupted data into the material AI models scrape so that the models can’t respond properly to specific prompts.
Google, of course, is not the only company melding AI with cybersecurity. Microsoft launched Copilot for Security, powered by GPT-4 and Microsoft’s cybersecurity-specific AI model, which lets cybersecurity professionals ask questions about threats. Whether either is genuinely a good use case for generative AI remains to be seen, but it’s nice to see it used for something besides pictures of a swaggy Pope.
Robinhood’s crypto arm receives SEC warning over alleged securities violations
Robinhood’s cryptocurrency division could soon be in trouble with the Securities and Exchange Commission. In an 8-K filing submitted on Saturday, Robinhood revealed that it received a Wells notice from the SEC’s staff recommending the agency take action against the trading platform for alleged securities violations.
Robinhood says it received the Wells notice after cooperating with the SEC’s requests for investigative subpoenas about its crypto listings, custody of cryptocurrencies, and the platform’s operations. A Wells notice is a letter from the SEC that warns a company of a potential enforcement action. The SEC’s response could include an injunction, a cease-and-desist order, disgorgement, limits on activities, and / or civil penalties.
“We firmly believe that the assets listed on our platform are not securities and we look forward to engaging with the SEC to make clear just how weak any case against Robinhood Crypto would be on both the facts and the law,” Dan Gallagher, Robinhood’s chief legal, compliance, and corporate affairs officer, said in a statement.
Robinhood says it already made the “difficult choice” to delist certain tokens — including Solana, Polygon, and Cardano — in response to the SEC’s lawsuits against other trading platforms. In the past, the SEC has argued that some cryptocurrencies are considered securities, which would require exchanges to register with the SEC. This would give the agency regulatory control over the exchanges and the registered tokens.
Robinhood could face a long legal battle if it chooses to fight the SEC’s potential enforcement action. The company’s shares have already dipped in response to the news.
US to fund digital twin research in semiconductors
The Biden administration wants to attract companies working on digital twins for semiconductors using funding from the $280 billion CHIPS and Science Act and the creation of a chip manufacturing institute.
The CHIPS Manufacturing USA institute aims to establish regional networks to share resources with companies developing and manufacturing both physical semiconductors and digital twins.
Digital twins, virtual representations of physical chips that mimic the real version, make it easier to simulate how a chip might react to a boost in power or a different data configuration. This helps researchers test out new processors before putting them into production.
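The basic idea can be sketched with a toy model. To be clear, this is a hand-rolled illustration, not how real semiconductor digital twins are built; the class, parameters, and thermal numbers below are invented for demonstration:

```python
# Toy "digital twin" of a chip: a simple thermal model that lets you ask how
# a design might respond to a power boost before building real hardware.
# All names and numbers here are invented for illustration.

class ChipTwin:
    def __init__(self, base_power_w=5.0, thermal_resistance=8.0, ambient_c=25.0):
        self.base_power_w = base_power_w              # nominal power draw (watts)
        self.thermal_resistance = thermal_resistance  # degrees C per watt dissipated
        self.ambient_c = ambient_c                    # ambient temperature (C)

    def steady_state_temp(self, power_w):
        """Predicted die temperature at a given power level."""
        return self.ambient_c + power_w * self.thermal_resistance

    def survives_boost(self, boost_factor, max_temp_c=105.0):
        """Would a power boost keep the chip under its thermal limit?"""
        return self.steady_state_temp(self.base_power_w * boost_factor) <= max_temp_c


twin = ChipTwin()
print(twin.steady_state_temp(5.0))  # 65.0 C at nominal power
print(twin.survives_boost(1.5))     # True: 7.5 W -> 85.0 C, under the limit
print(twin.survives_boost(2.5))     # False: 12.5 W -> 125.0 C, over the limit
```

A production twin would model timing, power delivery, signal integrity, and manufacturing variation in far more detail, but the workflow is the same: interrogate the virtual chip instead of fabricating a test run.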
“Digital twin technology can help to spark innovation in research, development, and manufacturing of semiconductors across the country — but only if we invest in America’s understanding and ability of this new technology,” Commerce Secretary Gina Raimondo says in a press release.
Research has shown that digital twins can integrate with other emerging technologies, like generative AI, to accelerate simulations or further studies into new semiconductor concepts.
The Biden administration says it will hold briefings with interested parties this month to talk about the funding opportunities. The government will fund the operational activities of the institute, research around digital twins, physical and digital facilities like access to cloud environments, and workforce training.
The CHIPS Act passed in 2022 to boost semiconductor manufacturing in the country, but its funding has struggled to keep up with demand. Raimondo previously said manufacturers requested more than $70 billion in grants, well above the $28 billion the government budgeted in investments.
So far, companies like Intel and Micron are set to receive funding from the US government through the CHIPS Act. Part of the Biden administration’s goal with the CHIPS Act is to encourage semiconductor companies to build new types of processors in the US, especially now that demand for high-powered chips has grown thanks to the AI boom.
Randy Travis gets his voice back in a new Warner AI music experiment
For the first time since a 2013 stroke left country singer Randy Travis unable to speak or sing properly, he has released a new song. He didn’t sing it, though; instead, the vocals were created with AI software and a surrogate singer.
The song, called “Where That Came From,” is every bit the kind of folksy, sentimental tune I came to love as a kid when Travis was at the height of his fame. The producers created it by training an unnamed AI model, starting with 42 of Travis’ isolated vocal recordings. Then, under the supervision of Travis and his career-long producer Kyle Lehning, fellow country singer James DuPre laid down the vocals to be transformed into Travis’ by AI.
Besides being on YouTube, the song is on other streaming platforms like Apple Music and Spotify.
The result of Warner’s experiment is a gentle tune that captures Travis’ relaxed style, which rarely wavered far from its baritone foundation. It sounds like one of those singles that would’ve hung around the charts long enough for me to nervously sway to once after working up the gumption to ask a girl to dance at a middle school social. I wouldn’t say it’s a great Randy Travis song, but it’s certainly not the worst — I’d even say I like it.
Dustin Ballard, who runs the various incarnations of the There I Ruined It social media account, creates his AI voice parodies in much the same way as Travis’ team, giving birth to goofy mash-ups like AI Elvis Presley singing “Baby Got Back” or synthetic Johnny Cash singing “Barbie Girl.”
It would be easy to sound the alarm over this song or Ballard’s creations, declaring the death of human-made music as we know it. But I’d say it does quite the opposite, reinforcing what tools like an AI voice clone can do in the right hands. Whether you like the song or not, you have to admit that you can’t get something like this from casual prompting.
Cris Lacy, co-president of Warner Music Nashville, told CBS Sunday Morning that AI voice cloning sites produce approximations of artists like Travis that don’t “sound real, because it’s not.” She called the label’s use of AI to clone Travis’ voice “AI for good.”
Right now, Warner can’t really do much about AI clones that it feels don’t fall under the heading of “AI for good.” But Tennessee’s recently passed ELVIS Act, which goes into effect on July 1st, would allow labels to take legal action against those using software to recreate an artist’s voice without permission.
Travis’ song is a good edge-case example of AI being used to make music that actually feels legitimate. But on the other hand, it also may open a new path for Warner, which owns the rights to vast catalogs of music from famous, dead artists that are ripe for digital resurrection and, if they want to go there, potential profit. As heartwarming as this story is, it makes me wonder what lessons Warner Music Nashville — and the record industry as a whole — will take away from this song.
Tesla plans to charge some Model Y owners to unlock more range
Tesla CEO Elon Musk posted on Friday that the Standard Range rear-wheel drive Model Y the company has been building and selling “over the last several months” actually has more range than the 260 miles they were sold with. Pending “regulatory approval,” he wrote that the company will unlock another 40–60 miles of total range, depending on which battery Model Y owners have, “for $1,500 to $2,000.”
The “260 mile” range Model Y’s built over the past several months actually have more range that can be unlocked for $1500 to $2000 (gains 40 to 60 miles of range), depending on which battery cells you have.
This isn’t the first time Tesla has software-locked its cars’ range. The company revealed back in 2016 that the 70kWh battery in the Model S 70 actually had 75kWh of capacity that customers could pay more than $3,000 to access. It’s possible that the current Model S and X cars, which weigh the same as their longer-range counterparts, have also been software-limited.
The auto industry, in general, has been trending toward controlling access to cars’ existing features with pay-to-remove software locks. Polestar started selling a $1,200 over-the-air update to boost the Polestar 2’s performance in 2022. Mercedes-Benz charged the same amount, but annually, to improve the horsepower and torque of the EQE and EQS. BMW once paywalled CarPlay and, later, heated seats (the company later dropped that plan). And of course, Tesla has proven itself willing to remotely disable paid-for features when one of its cars is resold.
The Eta Aquarid meteor shower peaks tonight — here’s how to see it
If you’ve got clear skies and want an excuse to get away from town, the Eta Aquarid meteor shower is roughly at its peak and should be going strong tonight. Made up of remnants of Halley’s Comet that the Earth passes through, this annual shower is active from April 15th to May 27th and can show up at a rate of about 10–30 meteors per hour, according to the American Meteor Society.
You can see the Aquarids starting around 2AM local time in the Northern Hemisphere, radiating from the Aquarius constellation (though you’ll want to look 40–60 degrees around Aquarius to see them). Weather permitting, conditions are pretty good for watching them since the moon is in its late waning period and won’t be reflecting much light. Try to plan your stargazing spot using a light pollution map or by checking with your local astronomical society for tips on the best places to go for unfettered viewing.
As NASA writes, the Eta Aquarids are viewable as “Earthgrazers,” or “long meteors that appear to skim the surface of the Earth at the horizon.” They’re fast-moving, traveling at over 40 miles per second.
You can bring binoculars or a telescope if you want to look at the stars, too, but you can see meteors with your naked eye, and trying to look for them with binoculars limits your field of view too much to be practical. Be sure to go easy on your neck with a reclining chair or something to lie on, too; heavy is the head that watches the stars. And dress appropriately, since it’s often chillier out in the country than in the city at night.
Finally, be patient. It can take around 30 minutes for your eyes to adjust to the dark enough to see meteors. Once they do, assuming you’re in a dark enough place, you should be able to see not just the meteors, but plenty of stars and even satellites as they move across the sky.
Halley’s Comet comes around, inconveniently for most, only once every 76 years. The last time it showed its tail for Earth-dwellers was in 1986, when I was three years old, and it won’t be here again until 2061, when I’m 78 (if I’m even still alive). Very rude. But at least we get to see some of the junk it leaves behind.
Better Siri is coming: what Apple’s research says about its AI plans
Apple hasn’t talked too much about AI so far — but it’s been working on stuff. A lot of stuff.
It would be easy to think that Apple is late to the game on AI. Since late 2022, when ChatGPT took the world by storm, most of Apple’s competitors have fallen over themselves to catch up. While Apple has certainly talked about AI and even released some products with AI in mind, it seemed to be dipping a toe in rather than diving in headfirst.
But over the last few months, rumors and reports have suggested that Apple has, in fact, just been biding its time, waiting to make its move. There have been reports in recent weeks that Apple is talking to both OpenAI and Google about powering some of its AI features, and the company has also been working on its own model, called Ajax.
If you look through Apple’s published AI research, a picture starts to develop of how Apple’s approach to AI might come to life. Now, obviously, making product assumptions based on research papers is a deeply inexact science — the line from research to store shelves is winding and full of potholes. But you can at least get a sense of what the company is thinking about — and how its AI features might work when Apple starts to talk about them at its annual developer conference, WWDC, in June.
Smaller, more efficient models
I suspect you and I are hoping for the same thing here: Better Siri. And it looks very much like Better Siri is coming! There’s an assumption in a lot of Apple’s research (and in a lot of the tech industry, the world, and everywhere) that large language models will immediately make virtual assistants better and smarter. For Apple, getting to Better Siri means making those models as fast as possible — and making sure they’re everywhere.
In iOS 18, Apple plans to have all its AI features running on an on-device, fully offline model, Bloomberg recently reported. It’s tough to build a good multipurpose model even when you have a network of data centers and thousands of state-of-the-art GPUs — it’s drastically harder to do it with only the guts inside your smartphone. So Apple’s having to get creative.
In a paper called “LLM in a flash: Efficient Large Language Model Inference with Limited Memory” (all these papers have really boring titles but are really interesting, I promise!), researchers devised a system for storing a model’s data, which is usually stored on your device’s RAM, on the SSD instead. “We have demonstrated the ability to run LLMs up to twice the size of available DRAM [on the SSD],” the researchers wrote, “achieving an acceleration in inference speed by 4-5x compared to traditional loading methods in CPU, and 20-25x in GPU.” By taking advantage of the most inexpensive and available storage on your device, they found, the models can run faster and more efficiently.
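The paper's actual system adds sparsity prediction and clever caching, but the core trick of leaving weights on storage and mapping in only what a computation touches can be sketched in a few lines (a simplified illustration, not the paper's implementation):

```python
import os
import tempfile

import numpy as np

# Simplified sketch: keep a large weight matrix on disk (standing in for flash
# storage) and memory-map it, so only the rows a computation actually touches
# get paged into RAM. The paper layers sparsity prediction and smart row
# caching on top of this basic mechanism.

tmp = os.path.join(tempfile.mkdtemp(), "weights.npy")
weights = np.random.rand(10_000, 512).astype(np.float32)
np.save(tmp, weights)

# Map the file instead of loading it: no full copy of the matrix in RAM.
mapped = np.load(tmp, mmap_mode="r")

# Compute with only the "active" rows (e.g., the neurons a sparsity predictor
# expects to fire), pulling just those pages off storage.
active_rows = [3, 17, 4242]
x = np.random.rand(512).astype(np.float32)
partial_out = mapped[active_rows] @ x  # touches ~3 rows, not all 10,000

print(partial_out.shape)  # (3,)
```

The speedups Apple reports come from going well beyond this, predicting which weights will be needed and batching flash reads to match the drive's strengths, but the memory math is the same: the model on disk can be much larger than the RAM available to run it.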
Apple’s researchers also created a system called EELBERT that can essentially compress an LLM into a much smaller size without making it meaningfully worse. Their compressed take on Google’s BERT model was 15 times smaller — only 1.2 megabytes — and saw only a 4 percent reduction in quality. It did come with some latency tradeoffs, though.
In general, Apple is pushing to solve a core tension in the model world: the bigger a model gets, the better and more useful it can be, but also the more unwieldy, power-hungry, and slow it can become. Like so many others, the company is trying to find the right balance between all those things while also looking for a way to have it all.
Siri, but good
A lot of what we talk about when we talk about AI products is virtual assistants — assistants that know things, that can remind us of things, that can answer questions, and get stuff done on our behalf. So it’s not exactly shocking that a lot of Apple’s AI research boils down to a single question: what if Siri was really, really, really good?
A group of Apple researchers has been working on a way to use Siri without needing a wake word at all; instead of listening for “Hey Siri” or “Siri,” the device might be able to simply intuit whether you’re talking to it. “This problem is significantly more challenging than voice trigger detection,” the researchers acknowledged, “since there might not be a leading trigger phrase that marks the beginning of a voice command.” That might be why another group of researchers developed a system to more accurately detect wake words. Another paper trained a model to better understand rare words, which are often not well understood by assistants.
In both cases, the appeal of an LLM is that it can, in theory, process much more information much more quickly. In the wake-word paper, for instance, the researchers found that by not trying to discard all unnecessary sound but, instead, feeding it all to the model and letting it process what does and doesn’t matter, the wake word worked far more reliably.
Once Siri hears you, Apple’s doing a bunch of work to make sure it understands and communicates better. In one paper, it developed a system called STEER (which stands for Semantic Turn Extension-Expansion Recognition, so we’ll go with STEER) that aims to improve your back-and-forth communication with an assistant by trying to figure out when you’re asking a follow-up question and when you’re asking a new one. In another, it uses LLMs to better understand “ambiguous queries” to figure out what you mean no matter how you say it. “In uncertain circumstances,” they wrote, “intelligent conversational agents may need to take the initiative to reduce their uncertainty by asking good questions proactively, thereby solving problems more effectively.” Another paper aims to help with that, too: researchers used LLMs to make assistants less verbose and more understandable when they’re generating answers.
AI in health, image editors, and your Memojis
Whenever Apple does talk publicly about AI, it tends to focus less on raw technological might and more on the day-to-day stuff AI can actually do for you. So, while there’s a lot of focus on Siri — especially as Apple looks to compete with devices like the Humane AI Pin, the Rabbit R1, and Google’s ongoing smashing of Gemini into all of Android — there are plenty of other ways Apple seems to see AI being useful.
One obvious place for Apple to focus is on health: LLMs could, in theory, help wade through the oceans of biometric data collected by your various devices and help you make sense of it all. So, Apple has been researching how to collect and collate all of your motion data, how to use gait recognition and your headphones to identify you, and how to track and understand your heart rate data. Apple also created and released “the largest multi-device multi-location sensor-based human activity dataset” available after collecting data from 50 participants with multiple on-body sensors.
Apple also seems to imagine AI as a creative tool. For one paper, researchers interviewed a bunch of animators, designers, and engineers and built a system called Keyframer that “enable[s] users to iteratively construct and refine generated designs.” Instead of typing in a prompt and getting an image, then typing another prompt to get another image, you start with a prompt but then get a toolkit to tweak and refine parts of the image to your liking. You could imagine this kind of back-and-forth artistic process showing up anywhere from the Memoji creator to some of Apple’s more professional artistic tools.
In another paper, Apple describes a tool called MGIE that lets you edit an image just by describing the edits you want to make. (“Make the sky more blue,” “make my face less weird,” “add some rocks,” that sort of thing.) “Instead of brief but ambiguous guidance, MGIE derives explicit visual-aware intention and leads to reasonable image editing,” the researchers wrote. Its initial experiments weren’t perfect, but they were impressive.
We might even get some AI in Apple Music: for a paper called “Resource-constrained Stereo Singing Voice Cancellation,” researchers explored ways to separate voices from instruments in songs — which could come in handy if Apple wants to give people tools to, say, remix songs the way you can on TikTok or Instagram.
Over time, I’d bet this is the kind of stuff you’ll see Apple lean into, especially on iOS. Some of it Apple will build into its own apps; some it will offer to third-party developers as APIs. (The recent Journaling Suggestions feature is probably a good guide to how that might work.) Apple has always trumpeted its hardware capabilities, particularly compared to your average Android device; pairing all that horsepower with on-device, privacy-focused AI could be a big differentiator.
But if you want to see the biggest, most ambitious AI thing going at Apple, you need to know about Ferret. Ferret is a multi-modal large language model that can take instructions, focus on something specific you’ve circled or otherwise selected, and understand the world around it. It’s designed for the now-normal AI use case of asking a device about the world around you, but it might also be able to understand what’s on your screen. In the Ferret paper, researchers show that it could help you navigate apps, answer questions about App Store ratings, describe what you’re looking at, and more. This has really exciting implications for accessibility but could also completely change the way you use your phone — and your Vision Pro and / or smart glasses someday.
We’re getting way ahead of ourselves here, but you can imagine how this would work with some of the other stuff Apple is working on. A Siri that can understand what you want, paired with a device that can see and understand everything that’s happening on your display, is a phone that can literally use itself. Apple wouldn’t need deep integrations with everything; it could simply run the apps and tap the right buttons automatically.
Again, all this is just research, and for all of it to work well starting this spring would be a legitimately unheard-of technical achievement. (I mean, you’ve tried chatbots — you know they’re not great.) But I’d bet you anything we’re going to get some big AI announcements at WWDC. Apple CEO Tim Cook even teased as much in February, and basically promised it on this week’s earnings call. And two things are very clear: Apple is very much in the AI race, and it might amount to a total overhaul of the iPhone. Heck, you might even start willingly using Siri! And that would be quite the accomplishment.