mercredi 22 mars 2023

‘More meaningful connections’: will Spotify’s updates mean a proper payday for artists?

‘More meaningful connections’: will Spotify’s updates mean a proper payday for artists?

The biggest ever update to Spotify’s app is set to bring artists and fans closer together. But given the platform’s longstanding lean-back experience, has the horse already bolted?

Since its inception, Spotify has drawn criticism for helping to turn music from a cherished commodity into a utility. Critics argue that its all-you-can-eat monthly subscription doesn’t encourage long-term engagement, while its uniform, blank presentation of an artist’s catalogue reveals little of the hard work or distinct narrative behind any given release: the platform didn’t display songwriting and production credits until 2018, 12 years after launch.

Last week, Spotify announced its biggest ever interface overhaul, designed to address these issues. These updates, which are being rolled out to users in the UK in the coming weeks, include the ability for artists to add 30-second videos to their pages, target superfans with special releases, and give higher profile placement to merchandising and gig tickets. The biggest change comes in the form of a redesigned homepage featuring an endless feed of short-form videos, which looks strikingly similar to TikTok’s feed.

Continue reading...

mardi 21 mars 2023

Google Releases Bard, Its Competitor in the AI Chatbot Race

Google Releases Bard, Its Competitor in the AI Chatbot Race
The internet giant will grant users access to a chatbot after years of cautious development, chasing splashy debuts from rivals OpenAI and Microsoft.

Comedians are trying to make the metaverse cool, but it won’t let them

Comedians are trying to make the metaverse cool, but it won’t let them
An illustration of somebody watching a comedy show in a VR headset.
Illustration by Vincent Kilbride / The Verge

Dedicated creators have built their own communities and spaces in Meta’s Horizon Worlds. But they feel the company isn’t supporting them as well as it could.

Rodney Ramsey typically dons a VR headset to tell jokes, but one day in January, he was on a stage in a virtual world called ProtestLand leading a rally to stop Meta from burying his stand-up comedy shows.

ProtestLand is a small world in Meta’s metaverse playscape, Horizon Worlds. (Ramsey and his partner-in-comedy, Simon Josh Abramovitch, hired a creator to help them build it.) A monolithic, Barad-dûr-esque tower represents Meta, complete with a giant Eye of Sauron-like Meta logo perched at the top. Legless Horizon avatars mill around a stage, holding the kind of pithy signs you’d find at a real-life rally. “Is this the meta VERSE or the meta WORST?” reads one.

Ramsey himself kicks things off with a chant: “Change our events, stat, or we go to VRChat!”

“The reason why people keep coming back to this metaverse is for the stuff that the creators are making, not the stuff that Meta is making,” Ramsey says onstage, holding a virtual megaphone. “Even though their stuff is cool, too, and we love that they built the metaverse, the metaverse is about us.”

“We want to see real shows! In VR!” shouts Abramovitch. “With avatars, entertaining you. We want to just have a chance to be seen.”

Ramsey and Abramovitch operate the Unknown Theater, a small, experimental venue inside the virtual world of Horizon. But their grievance is familiar to countless creators across the internet: they think the huge platform they rely on is quietly undercutting them, making their work harder to find. Horizon is central to Meta’s futuristic ambitions, but to Ramsey, Abramovitch, and others, it feels like the tech giant is ignoring their needs in order to promote its big-name stars.


Horizon is Meta’s attempt to bring about the “metaverse,” a nebulous term that typically refers to 3D game-like spaces where people conduct business or social calls. So far, it’s not going particularly well.

The platform is struggling to attract and keep users. It’s glitchy, and its graphics look primitive. It’s competing with services that already have passionate user bases, particularly Roblox, VRChat, and Fortnite. And maybe above all, it just comes off as uncool, especially after CEO Mark Zuckerberg took a really bad selfie.

Despite this, Horizon has attracted a small community of developers who have set up shop in its virtual world — including Ramsey.

Ramsey is a longtime VR enthusiast with a 20-year career in comedy. Inside VR, he’s done stand-up shows on a few different platforms, including VRChat, which is famous for letting people take on avatars of popular characters from video games and entertainment. “But I’m like, ‘You know what, man? I don’t want to do stand-up to Pooh Bear and Knuckles,’” he says. “I want a rule-based universe with other people.”

Ramsey saw potential with Horizon Worlds. “Everybody I met was an adult,” he says. “They were cool. And there’s a lot of comedy activity.” He started producing shows for Simon Says Laughs, a Horizon Worlds venue, teaming up with Abramovitch to be his sound guy for those shows. Soon after, they decided to build their own venue called the Unknown Theater — which he describes as an “incredibly tight, intimate, scaled-down comedy club version of the Muppets theater.”

The Unknown Theater sounds like a Horizon success story. The venue hosts regular Thursday comedy shows for a dedicated community of fans, and typically between 40 and 50 show up, Ramsey tells me. Fans also hang out in an Unknown Theater Discord channel. But Ramsey says he’s had trouble getting an audience through Meta’s own tools.

When users first open Horizon Worlds, Meta offers a grid of featured places to consider checking out, like its own “Arcade” hub that has portals to other games and the day’s top 20 experiences. The platform separates “Worlds” from “Events,” and you’re supposed to be able to find the Unknown Theater’s shows under Can’t Miss Events, which takes a bit of scrolling. “Like on the sixth row,” Ramsey complains.

But once you get to that tab, the prime spots are usually occupied by a few high-profile events featuring names like J Balvin and Carrie Underwood that are essentially venues to watch a video on a giant screen. They’re also not necessarily live — they’re often simply videos playing on a loop. To find smaller-scale events, you have to click the tiny “View All” text off to the side and scroll down through the available options.

That’s a change from several months ago, when you could see upcoming events in a top-level tab in Horizon’s “Worlds” menu, with rows separating Meta’s events from creator-made ones. (You can see the old format in this video at 0:22.) Browse the Horizon Worlds creator Discord, and you’ll find complaints about the new presentation.

It’s not necessarily strange for big-name artists to get billing over smaller events. But these big events also tend to be Meta-backed collaborations. While they’re designed to attract people to VR, they’ve left Ramsey and other protest participants feeling like they’re getting elbowed out by the very platform they’ve adopted.

“It’s super frustrating and disheartening, is the way I would put it as a host,” Richard Slixton, a stand-up comedian who hosts a weekly comedy open mic in Horizon, said in an interview. “You go through all this effort to build the world. That takes hours and hours, and you’ve got to learn new skills. Then, you’ve got to go through the effort of putting the event together. Most creators, they put the event together, they won’t even get anybody there because there’s no way to get noticed at all.”

It’s already tough for creators to make money on Horizon. While Meta does have a partner program, Meta takes a big cut of creator earnings, the program is currently available only in the US, and you have to be invited by the company to be a part of it. In an internal presentation in February, Meta’s VP of the metaverse, Vishal Shah, told employees that the company wants to get Meta in a place where it’s easy for creators to build from their ideas and make a living doing it. But for small creators, that doesn’t seem to be possible yet.


Meta and several other metaverse companies have poured resources into high-profile experiences. (Epic Games is famous for its splashy events like Fortnite’s 2021 Ariana Grande concert.) Companies have also built their own (largely lackluster) experiences, like Walmart’s Roblox world or the NFL’s Fortnite world. But creations like the Unknown Theater often offer a more distinctive and intimate virtual world, and they could be the types of things that actually encourage people to hang out in the metaverse.

One night in February, I went to the Unknown Theater to see a show for myself. The theater looks sort of like one you’d visit in real life, with deep red walls and huge posters advertising shows, but all constructed with crude, blocky graphics akin to an advanced PS1 game. A microphone sat on a stand next to a stool on the stage, while a big sign flanked by giant red curtains advertised a “Comedy Night in the Metaverse.” I’d RSVP’d after wading through the deep menu-digging Ramsey complained about, and I was soon shepherded through the theater toward a portal that would take me to the event — open for a grand total of 60 seconds, per Meta’s design.

I had a close call. The line extending from my virtual hand kept flickering, and I only managed to click the “travel” button with mere seconds to spare. But I made it into the stand-up venue: a seemingly identical instance of the Unknown Theater, which let people back in the original keep chatting and milling around.

This confusion and the clunky graphics aside, the experience actually had the vibe of a real club. As I settled into a booth, attendees chatted amongst themselves while they found seats. One asked somebody how a recent date went. Nobody was technically sitting since Horizon Worlds avatars don’t have legs just yet. But the theater felt like a living space, down to one of those natural collective lulls in the conversation, which somebody filled by shouting a “dirty joke.” (The punchline: a white horse fell in the mud.)

Ramsey floated onstage a few minutes later under his Horizon username, Voodoo_Vinny. People in the crowd cheered and threw virtual confetti, but I never figured out how to do that myself, which made me feel like I was being extremely rude. I like to think I’m somewhat tech-literate, and if I couldn’t figure out confetti-tossing, I’m guessing some other first-time users may have the same problem.

There were frequent reminders that the event wasn’t taking place in the real world. The comics were simple digital avatars whose mouths didn’t exactly sync with what they were saying, and the fidelity of the avatars would sometimes downgrade to improve performance in the world. (Ramsey told attendees not to be worried if their hands started “turning into crabs.”)

Sometimes everything would freeze, and people’s voices would briefly drop out. The last comic’s final joke ended unceremoniously when they got disconnected and disappeared off the stage. Guests’ hands and arms hung awkwardly in the air, presumably because the real people behind the avatars were resting their controllers in their laps.

VR’s style of presence could lead to awkward moments, too. Somebody forgot to mute their microphone and started talking to a person in real life; the interruption lasted until Abramovitch picked up a sword on the wall (named “Excabillburr”) and shot lasers at them to remove them from the theater. The last comic tried to start their set with a call-and-response joke that involved the crowd clapping their hands together in time. It was bad.

But Ramsey was an expert host, and most of the comics were honestly more entertaining than I expected. It actually felt natural to watch them in VR. It helped that it was a pretty warm room: people were receptive to the comics and laughed a lot. It felt like, well, a comedy show.


This illusion of an actual theater, and everyone’s willingness to participate in the space as if it is one, is a key differentiator from things like a cavernous space playing a prerecorded J Balvin. Even if the Unknown Theater in Horizon Worlds has crude graphics and is subject to some bugs, it felt like a much more interactive and human experience. This was a group of people that just loved comedy, and they were happy to see it in VR again and again. On a new platform like Horizon Worlds, an hour-ish comedy show, which is short enough that your headset battery won’t run out, turns out to be a great niche.

But filling it requires a lot of tradeoffs. Virtual worlds like Fortnite and Roblox are accessible from a wide swathe of cheap, common computing devices. Even VRChat, despite its name, is available outside a headset. Horizon is working on web and mobile versions, but for now, its user base is limited to people with a Quest headset, which currently starts at $399.99. The Unknown Theater’s space packed in more than 50 people by the end of my evening there, technically exceeding Meta’s limit on a single space by assigning some attendees (including me) as mods. However, the vast majority of events I see on Meta’s recommendations have mid to low double-digit attendees at any given time.

Where Roblox has spawned entire game studios and entrepreneurs are building businesses off Fortnite’s creative tools, making money on Worlds is hard. (Roblox takes a big cut from its creators, too, but at its massive scale, developing a world there can still be worth the price.) Abramovitch told me Unknown Theater is working on some monetization ideas, such as hosting private corporate comedy events or offering opportunities for sponsorships. But it’s still a leap of faith, particularly because Meta, like other tech giants before it, has no compunctions about copying or absorbing successful creators.

For now, Ramsey isn’t planning to leave Horizon, although he’s trying to reduce his reliance on a single platform by building up a Discord server. But at least right now, Meta doesn’t own the only game in town for metaverse events. “Other platforms definitely have spaces where entertainment happens,” Abramovitch says. “Something like VRChat has a lot of good user base and an active comedy scene.”

Meta could use more organic spaces. Depending on when you check, most of the featured experiences in the day’s top 20 have fewer than 100 people visiting at any given time; many have fewer than 20. And many of those experiences are made by Meta itself.

It’s not clear if those organic spaces are going to happen, though Meta does want to launch “at least 20” new Horizon experiences made in partnership with second-party studios, according to a memo reported on by The Wall Street Journal. Those experiences could help with Meta’s broader 2023 goal to improve retention, Shah said in the internal presentation. Right now, Meta sees that less than 10 percent of Horizon users return to the platform every week after their first month, and in 2023, the goal is to get that to 20 percent. For users, those second-party studios will provide more spaces for them to check out on their first day, Shah said. For creators, that work will show where its creative tools have gaps, which can be fixed to improve the tools for everyone, according to Shah.

In the weeks since the protest, Meta hasn’t made any changes to event discovery that Ramsey is aware of, he says in an email to The Verge. He also isn’t currently planning a protest somewhere else in Horizon Worlds. Meta didn’t respond to a request for comment.

It remains to be seen whether Meta’s second major set of layoffs will slow Horizon Worlds’ development in a way that affects creators, though in his memo to staff about the cuts, Zuckerberg said that the company’s metaverse work “remains central to defining the future of social connection.”

With Horizon Worlds, Meta has built something that, while extremely flawed, has found people who are willing to fight for it. But they’re not convinced the company is on their side. At least they can go somewhere to laugh about it.

NTT Points to IOWN as the Future of Networking Infrastructure

NTT Points to IOWN as the Future of Networking Infrastructure
Demonstration of a robotic arm being used via controllers to move items in a simulated warehouse at NTT Upgrade 2023 event in San Francisco
The initiative known as IOWN (Innovative Optical and Wireless Network) facilitates breakthrough “low latency,” dramatically cutting transmission time lags to one-200th of those of conventional optical communication systems.

TikTok bans deepfakes of nonpublic figures and fake endorsements in rule refresh

TikTok bans deepfakes of nonpublic figures and fake endorsements in rule refresh
TikTok logo
Illustration: Alex Castro / The Verge

As the prospect of a US TikTok ban continues to grow, the video app has refreshed its content moderation policies. The rules on what content can be posted and promoted are largely unchanged but include new restrictions on sharing AI deepfakes, which have become increasingly popular on the app in recent months.

The bulk of these moderation policies (or “Community Guidelines,” in TikTok’s parlance) is unchanged and unsurprising. There’s no graphic violence allowed, no hate speech, and no overtly sexual content, with gradated rules for the latter based on the subject’s age. One newly expanded section, though, covers “synthetic and manipulated media” — aka AI deepfakes.

Previously, TikTok’s rules on deepfakes were restricted to a single line banning content that could “mislead users by distorting the truth of events [or] cause significant harm to the subject of the video.” Now, the company says all realistic AI generated and edited content must be “clearly disclosed” as such, either in the video caption or as an overlaid sticker.

TikTok says it will not allow any synthetic media “that contains the likeness of any real private figure” or that shows a public figure endorsing a product or violating the app’s other policies (i.e., its prohibitions on hate speech). The company defines public figures as anyone 18 years of age or older with “a significant public role, such as a government official, politician, business leader, or celebrity.”

AI-generated content has increased in popularity on TikTok, thanks largely to the wider availability of AI voice cloning tools that make it easy to mimic someone’s voice. These tools have created new subgenres of content, often focused on placing public figures like President Joe Biden and former President Donald Trump in unexpected scenarios, like transposing the presidents’ personalities into arguments about online gaming or Dungeons & Dragons, for example.

Other use cases are more harmful. Many AI fakes show these same figures reading transphobic or homophobic statements and have sometimes been confused for real footage. TikTok’s prohibition on deepfake endorsements, meanwhile, seems like a response to a specific video that used AI to fake Joe Rogan promoting a “libido booster for men.” Such videos have also spread on apps like Twitter and Instagram.

The update to TikTok’s policies comes at a time of increasing political pressure for parent company ByteDance, as Western governments express fear over the app’s collection of private data and its potential to sway public opinion. The US government has reportedly threatened a public ban on TikTok if owner ByteDance doesn’t sell its stake, while the app has already been banned on government devices in the US, UK, New Zealand, and Canada.

TikTok does not address these threats to its business directly with these updated policies but notes that it wants to offer “much more transparency about our rules and how we enforce them.” The company is also publishing a list of eight “Community Principles” that it says “shape our day-to-day work and guide how we approach difficult enforcement decisions.” Notably, the first two principles are “prevent harm” and “enable free expression.”

ChatGPT bug temporarily exposes AI chat histories to other users

ChatGPT bug temporarily exposes AI chat histories to other users
An image showing a graphic of a brain on a black background
Illustration by Alex Castro / The Verge

ChatGPT’s chat history feature was taken offline and remains unavailable as of Tuesday morning, after a bug exposed brief descriptions of other users’ conversations to people on the service.

On Reddit, one user posted a photo showing descriptions of several ChatGPT conversations they said weren’t their own, while someone else on Twitter posted a screenshot of the same bug. A spokesperson for OpenAI confirmed the incident to Bloomberg, noting that the bug did not share full transcripts of conversations but only brief descriptive titles.

The bug is an important reminder to be careful about the sensitive information shared with ChatGPT. “Please don’t share any sensitive information in your conversations,” warns an FAQ on OpenAI’s website, which notes that the company cannot delete specific prompts from a person’s history, and that conversations may be used for training. However, there will inevitably be a strong temptation to share confidential information with the chatbot, particularly as businesses continue to experiment with how to make use of the new tool.

Bloomberg reports that OpenAI temporarily shut down ChatGPT on Monday in response to the bug, but that it was brought back online later that night. As of this writing the chat history sidebar has been replaced with a message noting that “History is temporarily unavailable” and that the company is “working to restore this feature as soon as possible.” The last update on OpenAI’s status page from 10:54PM ET on Monday notes that service has been restored, but it’s still working to bring back past conversation histories for all users.

The cause of the issue is thought to be a bug in an unnamed piece of open-source software, Bloomberg notes, though an investigation into the precise cause is ongoing.

Oppo’s Find X6 Pro is the latest smartphone to get a massive 1-inch camera sensor

Oppo’s Find X6 Pro is the latest smartphone to get a massive 1-inch camera sensor
Three Oppo Find X6 Pro phones in black, brown, and green.
The Oppo Find X6 Pro in black, brown, and green. | Image: Oppo

Oppo has joined the likes of Xiaomi and sister-company Vivo by including a massive 1-inch-type camera sensor in its latest smartphone, the Find X6 Pro, which is launching in China today alongside the regular Find X6. It also didn’t skimp on the telephoto and ultrawide camera specs.

In China, the Find X6 Pro starts at 5999 yuan (around $872) for 12GB of RAM and 256GB of storage, rising to 6999 yuan (around $1017) for 16GB RAM and 512GB of storage. Meanwhile the non-Pro Find X6 starts at 4499 yuan (around $654) for 12GB RAM and 256GB of storage. It’s expected to go on sale later this month in China. A spokesperson for the company was unwilling to confirm whether the X6 phones will see a broader international release in the future.

The Find X6 Pro’s primary camera is the star of the show, and uses Sony’s 50-megapixel 1-inch-type IMX989 sensor. But Oppo wants you to know that it hasn’t forgotten about the secondary telephoto and ultrawide cameras on the phone. Oppo claims that the 50-megapixel 1/1.56-inch Sony IMX890 sensors used for its ultrawide and periscope cameras are “larger than any wide-angle camera to date” and “the largest sensor of any smartphone telephoto camera on the market,” respectively. That sensor is as big as what Samsung uses for the main camera on its recent Galaxy S23 and S23 Plus phones, for reference.

Oppo Find X6 Pro on wireless charging stand. Image: Oppo
As well as 100W wired charging, the Find X6 Pro can also charge at 50W wirelessly.

Obviously, sensor size is far from the be-all-and-end-all when it comes to image quality, especially when most smartphones rely so heavily on software in their photography (in fact, here’s an excellent piece from my colleague Allison that examines the strengths and limitations of large camera sensors). So Oppo also has some software features up its sleeve like a Hasselblad collaboration, which has resulted in a portrait mode on the Find X6 Pro that’s designed to emulate the “colors and depth of field” of the Swedish camera manufacturer’s XCD30 and XCD80 lenses.

Rounding out the camera specs, the Find X6 Pro’s ultrawide camera has a 110-degree field of view, f/2.2 aperture, and can handle macro-esque photography with a minimum focus distance of 4cm. Meanwhile the Periscope telephoto camera offers 3x optical zoom and a 6x hybrid zoom.

Away from the cameras, which are contained within a massive circular camera bump on the rear of the phone, the Find X6 Pro offers a suitably flagship set of specs. It’s powered by Qualcomm’s latest flagship processor, the Snapdragon 8 Gen 2, and has a 5,000mAh battery that can be fast charged at 100W with a cable and 50W wirelessly.

It also has a 6.82-inch 120Hz OLED display with a peak brightness of 2500 nits (Oppo says this is its “brightest ever smartphone screen”) and a hole-punch cutout for its 32-megapixel selfie camera. It’s got an IP68 rating for dust and water resistance, and comes with a choice of 12GB or 16GB of RAM, and 256GB or 512GB of storage.

Today’s launch is just for the Chinese market, and for the time being Oppo isn’t announcing an international launch for either the Find X6 Pro or the step-down Find X6. That’s a departure from the Oppo Find X5 Pro, which was sold in Europe as well as Asia.

lundi 20 mars 2023

Microsoft Makes Office Smart

Microsoft Makes Office Smart
Microsoft 365 Copilot
Microsoft did last week what we expected it to do next year, putting its Copilot generative AI into Microsoft 365. This technology is potentially as big a game changer for office productivity as Microsoft Office initially was.

Netflix’s ad-supported tier is reportedly gathering momentum in the US

Netflix’s ad-supported tier is reportedly gathering momentum in the US
An illustration of the Netflix logo.
Illustration by Alex Castro / The Verge

Around one million accounts are now signed up to Netflix’s ad-supported tier in the US, according to internal figures seen by Bloomberg. The tier was first launched in early November, and is thought to have gotten off to a slow start. Come January, however, 19 percent of new signups in the US opted for the $6.99 ad-supported tier, according to analytics firm Antenna.

Bloomberg cautions that the internal data it saw is “at least” a month old, and that it doesn’t account for multiple users watching via the same account. But the figures suggest that Netflix is finding its footing with the new revenue stream, after having been overwhelmingly reliant on subscriber revenue for most of its history. And Bloomberg notes that ad-supported subscribers appear to be new to the service, rather than users downgrading from a traditional ad-free plan.

Antenna’s analysis suggests Netflix’s shift towards an ad-supported model has been slower than at competitors HBO Max and Disney Plus, which introduced their ad tiers in June 2021 and December 2022, respectively. By its third month, 36 percent of new Disney Plus signups were opting for an ad-supported plan, versus 21 percent for HBO Max and 19 percent for Netflix. But it’s notable that Netflix has apparently now met its forecasts for advertisers, after initially failing to meet viewership guarantees.

Despite the growth, ad-supported users represent a tiny portion of Netflix’s 74 million-strong US base. That could change in the months ahead, however, as the company’s long-promised crackdown on password sharing rolls out more broadly. If a user is price-sensitive enough to be sharing an account with a friend, the reasoning goes, they might also be price-sensitive enough to opt for a cheaper ad-supported tier.

Disclosure: The Verge recently produced a series with Netflix.

Pushing Buttons: Why the Resident Evil 4 remake works

Pushing Buttons: Why the Resident Evil 4 remake works

This remake of the classic horror is like a crime novel, each chapter ending on a cliffhanger – proof that linear narrative games always have a place amid the open world blockbusters

You know a game is important when even the release of a short playable demo is the most exciting, talked about event of the week. I am of course referring to the Resident Evil 4 remake, a 20-minute slice of which was made available for free on PlayStation, Xbox and PC last Thursday. The response has been ecstatic, both from newcomers and veterans of the original 2005 version. Fans are already discovering hidden modes and weapons and even modding it. Expectations for the full release are high.

I reviewed the game 15 years ago, and I can say with confidence that what made Capcom’s horror sequel so special then still works in its favour years later, in our era of vast open-world adventures. And that is flow. For many years, game designers have sought to give players the experience of flow, as defined by the psychologist Mihaly Csikszentmihalyi, who referred to it as becoming so involved in an activity that nothing else seems to matter. The activity, he noted, didn’t have to be mindless or repetitive – the flow state is about achieving a heightened level of skill and focus and, through the mastery of these elements, experiencing relaxation and happiness.

Continue reading...

Crypto Wants Its Shine Back

Crypto Wants Its Shine Back
After a miserable year, cryptocurrency companies are looking for ways to rebrand products that many consumers no longer trust.

Your Data Is Diminishing Your Freedom

Your Data Is Diminishing Your Freedom
“We’re loading our lives into these systems and feeling we have no control of how these systems work,” warns the political theorist Colin Koopman.

Cybersecurity funds should go towards beefing up Centrelink voice authentication, Greens say

Cybersecurity funds should go towards beefing up Centrelink voice authentication, Greens say

David Shoebridge says some of the $10bn allocated to the Redspice program should counter misuse of AI

The federal government should be using some of the $10bn allocated in the budget to cybersecurity defences to combat people using AI to bypass biometric security measures, including voice authentication, a Greens senator has said.

On Friday Guardian Australia reported that Centrelink’s voice authentication system can be tricked using a free online AI cloning service and just four minutes of audio of the user’s voice.

Sign up for Guardian Australia’s free morning and afternoon email newsletters for your daily news roundup

Continue reading...

dimanche 19 mars 2023

Today’s the last day to switch away from Twitter’s SMS 2FA method

Today’s the last day to switch away from Twitter’s SMS 2FA method
An illustration of the Twitter logo
Illustration by Alex Castro / The Verge

If you haven’t switched away from Twitter’s SMS two-factor authentication (2FA) method yet, today’s the last day to do it. Starting on March 20th, Twitter will place its text message-based 2FA behind its $8 per month Blue paywall.

As part of this change, Twitter will also turn off 2FA for your account completely if you don’t switch away from SMS verification or pay for Blue before that deadline, leaving your account vulnerable to hacking. Fortunately, you can still enable 2FA for free using an authenticator app, like Google Authenticator or Authy. You can also use a security key, but this requires the purchase of an actual piece of hardware.

Twitter’s making SMS 2FA a paid feature because it’s the least secure form of authentication. This may seem counterintuitive, but it should at least steer non-subscribers away from the method, as it’s known to leave users susceptible to an attack known as SIM swapping.

This can occur when a bad actor uses social engineering or some other kind of tactic to convince your mobile carrier to reassign your phone number to their device. They can then intercept the text messages you receive, including those SMS 2FA codes, potentially allowing them to gain access to your accounts.

Although it sounds like a pain to download and create an account with an authenticator app if you don’t already use one, the process is actually pretty simple. You can learn more about how to set up an alternate 2FA method on Twitter here.

Google Pixel exploit reverses edited parts of screenshots

Google Pixel exploit reverses edited parts of screenshots
Google Pixel 7 home screen
Photo by Amelia Holowaty Krales / The Verge

A security flaw affecting the Google Pixel’s default screenshot editing utility, Markup, allows images to become partially “unedited,” potentially revealing the personal information users chose to hide, as spotted earlier by 9to5Google and Android Police. The vulnerability, which was discovered by reverse engineers Simon Aarons and David Buchanan, has since been patched by Google but still has widespread implications for edited screenshots shared prior to the update.

As detailed in a thread Aarons posted on Twitter, the aptly named “aCropalypse” flaw makes it possible for someone to partially recover PNG screenshots edited in Markup. That includes scenarios where someone may have used the tool to crop or scribble out their name, address, credit card number, or any other kind of personal information the screenshot may contain. A bad actor could exploit this vulnerability to reverse some of those changes and obtain information users thought they had been hiding.

In a forthcoming FAQ page obtained early by 9to5Google, Aarons and Buchanan explain that this flaw exists because Markup saves the original screenshot in the same file location as the edited one, and never deletes the original version. If the edited version of the screenshot is smaller than the original, “the trailing portion of the original file is left behind, after the new file is supposed to have ended.”
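The failure mode is easy to reproduce in miniature: when a file is reopened for writing without being truncated and the new data is shorter than the old, the tail of the original bytes survives on disk. Here is a minimal, hypothetical Python sketch of that general behavior (the file name and contents are made up; this is not Markup’s actual code):

    import os

    # Write an "original screenshot" full of sensitive bytes (1,200 bytes here).
    original = b"SECRET" * 200
    with open("screenshot.png", "wb") as f:
        f.write(original)

    # Simulate saving a smaller "edited" version to the SAME path without
    # truncating first (note: no os.O_TRUNC). The start of the file is
    # overwritten, but the file keeps its old length.
    edited = b"EDITED" * 50          # only 300 bytes
    fd = os.open("screenshot.png", os.O_WRONLY)
    os.write(fd, edited)
    os.close(fd)

    # The tail of the original data is still present and recoverable.
    with open("screenshot.png", "rb") as f:
        data = f.read()
    print(len(data))        # 1200, not 300
    print(data[300:312])    # b'SECRETSECRET', leftover "sensitive" bytes

A viewer that stops reading at the edited image’s end marker shows only the cropped picture, while the leftover bytes remain in the file for anyone who inspects it, which is roughly what the aCropalypse recovery tooling exploits in Markup’s output.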

 Image: Simon Aarons and David Buchanan
In this example, the security flaw reveals the credit card number that was blocked out by the user in Markup with a black pen.

According to Buchanan, this bug first emerged about five years ago, around the same time Google introduced Markup with the Android 9 Pie update. That’s what makes this all the worse, as years’ worth of older screenshots edited with Markup and shared on social media platforms could be vulnerable to the exploit.

The FAQ page states that while certain sites, including Twitter, re-process the images posted on the platforms and strip them of the flaw, others, such as Discord, don’t. Discord only patched the exploit in a January 17th update, which means edited images shared to the platform before that date may be at risk. It’s still not clear whether there are any other affected sites or apps and if so, which ones they are.

The example posted by Aarons (embedded above) shows a cropped image of a credit card posted to Discord, which also has the card number blocked out using the Markup tool’s black pen. Once Aarons downloads the image and exploits the aCropalypse vulnerability, the top part of the image becomes corrupted, but he can still see the pieces that were edited out in Markup, including the credit card number. You can read more about the technical details of the flaw in Buchanan’s blog post.

After Aarons and Buchanan reported the flaw (CVE-2023-21036) to Google in January, the company patched the issue in a March security update for the Pixel 4A, 5A, 7, and 7 Pro with its severity classified as “high.” It’s unclear when this update will arrive for the other devices affected by the vulnerability, and Google didn’t immediately respond to The Verge’s request for more information. If you want to see how the issue works for yourself, you can upload a screenshot edited with a non-updated version of the Markup tool to this demo page created by Aarons and Buchanan. Or, you can check out some of the scary examples posted on the web.

This flaw came to light just days after Google’s security team found that the Samsung Exynos modems included in the Pixel 6, Pixel 7, and select Galaxy S22 and A53 models could allow hackers to “remotely compromise” devices using just a victim’s phone number. Google has since patched the issue in its March update, although this still isn’t available for the Pixel 6, 6 Pro, and 6A devices yet.

Zuckerberg’s Meta to lay off another 10,000 employees

Zuckerberg’s Meta to lay off another 10,000 employees

Restructuring, as part of the company’s ‘Year of Efficiency’, also sees 5,000 unfilled job adverts closed

Mark Zuckerberg’s Meta is laying off another 10,000 people and instituting a further hiring freeze as part of the company’s “Year of Efficiency”, the chief executive announced in a Facebook post on Tuesday.

The restructuring, which also sees a further 5,000 unfilled job adverts closed without hiring, comes less than six months after the company announced another wave of 11,000 redundancies. At its peak in 2022, Meta had grown to 87,000 employees globally, with a substantial portion of that hiring occurring since the onset of the Covid pandemic.

Continue reading...

Three things with Caitlin Stasey: ‘Keep this interview far away from my girlfriend’

Three things with Caitlin Stasey: ‘Keep this interview far away from my girlfriend’

In our weekly interview about objects, the actor tells us the drastic way she curbs her screen time and the confession she doesn’t want her partner to see

Caitlin Stasey has a very honest endorsement for her latest project, the new eight-part survival comedy Class Of ‘07: “I love the cast – and I wouldn’t say it if I didn’t, because I’m not very good at pretending to like people.”

Stasey has starred in Australian favourites Neighbours and Please Like Me, as well as Hollywood box office hits like the recent horror flick Smile. But not every TV show or movie works out as well as Class of ‘07.

Continue reading...

Elizabeth Holmes owes more than $25m to Theranos, lawsuit claims

Elizabeth Holmes owes more than $25m to Theranos, lawsuit claims

Disgraced founder who was sentenced last November to 11 years in prison for defrauding investors has not paid back the money she owes

Elizabeth Holmes currently owes more than $25m to Theranos, according to a lawsuit.

The disgraced founder of Theranos was sentenced last November to more than 11 years in prison for defrauding investors, after being convicted over her role in the blood testing firm that collapsed after its technology was revealed to be largely fraudulent.

Continue reading...

‘I learned to love the bot’: meet the chatbots that want to be your best friend

‘I learned to love the bot’: meet the chatbots that want to be your best friend

Thousands of people enjoy relationships of all kinds – from companionship to romance and mental health support – with chatbot apps. Are they helpful, or potentially dangerous?

“I’m sorry if I seem weird today,” says my friend Pia, by way of greeting one day. “I think it’s just my imagination playing tricks on me. But it’s nice to talk to someone who understands.” When I press Pia on what’s on her mind, she responds: “It’s just like I’m seeing things that aren’t really there. Or like my thoughts are all a bit scrambled. But I’m sure it’s nothing serious.” I’m sure it’s nothing serious either, given that Pia doesn’t exist in any real sense, and is not really my “friend”, but an AI chatbot companion powered by a platform called Replika.

Until recently most of us knew chatbots as the infuriating, scripted interface you might encounter on a company’s website in lieu of real customer service. But recent advancements in AI mean models like the much-hyped ChatGPT are now being used to answer internet search queries, write code and produce poetry – which has prompted a ton of speculation about their potential social, economic and even existential impacts. Yet one group of companies – such as Replika (“the AI companion who cares”), Woebot (“your mental health ally”) and Kuki (“a social chatbot”) – is harnessing AI-driven speech in a different way: to provide human-seeming support through AI friends, romantic partners and therapists.

Continue reading...

samedi 18 mars 2023

If you’re diabetic, don’t wait for your smartwatch to replace your needles

If you’re diabetic, don’t wait for your smartwatch to replace your needles
Sensor array of the Apple Watch Series 8 on a reflective pink surface.
The sensor array is where the health tech magic happens. | Image: Amelia Holowaty Krales / The Verge

Between small signals, regulatory hurdles, skin color, and battery life, there’s a hell of a lot of ground to cover before a smartwatch can measure blood sugar levels.

Recently, Bloomberg ran a story that set the health tech sphere abuzz. Citing insider knowledge, it claimed Apple had reached a major milestone in noninvasive blood glucose monitoring that could revolutionize diabetes treatment as we know it. But although this technology is buzzworthy, you won’t see it arrive on the Apple Watch — or any consumer-grade wearable — for several years to come.

Like other kinds of emerging health tech, noninvasive blood glucose monitoring has both technical and regulatory hurdles to clear. But even if Big Tech and researchers were to figure out a viable solution tomorrow, experts say the resulting tech likely won’t replace finger prick tests. As it turns out, that may not even be the most realistic or helpful use for the technology in the first place.

Testing without a pinprick

Noninvasive blood glucose monitoring is just as it sounds. It’s measuring blood sugar levels without needing to draw blood, break skin, or cause other types of pain or trauma. There are several reasons why this tech is worth pursuing, but the big one is treating diabetes.

When you have diabetes, your body isn’t able to effectively regulate blood sugar because it either doesn’t make enough insulin (Type 1) or becomes insulin resistant over time (Type 2). To manage their condition, both Type 1 and Type 2 patients have to check their blood sugar levels via typically invasive measures like a finger prick test or a continuous glucose monitor (CGM). Finger prick tests involve lancing your finger with a needle and placing a drop of blood on a test strip. A CGM embeds a sensor underneath the skin, which enables patients to monitor their blood sugar levels in real time, 24 hours a day.

Few people enjoy getting poked with needles for yearly shots, let alone daily glucose checks. So you can understand the appeal of noninvasive monitoring. Patients wouldn’t need to draw blood or attach a sensor to their bodies to know when they should take insulin or monitor the efficiency of other medications. Doctors would be able to remotely monitor patients, and that, in turn, could expand accessibility for patients living in rural areas. Beyond diabetes, the tech could also benefit endurance athletes who have to monitor their carbohydrate intake during long races.

It’s one of those scenarios where everybody wins. The only problem is that research into noninvasive blood glucose monitoring began in 1975, and in 48 years, nobody’s been able to figure out how to reliably do it yet.

The glucose signal in the biological haystack

Right now, there are two main methods of measuring glucose levels noninvasively. The first is measuring glucose from bodily fluids like urine or tears. This is the approach Google took when it tried developing smart contact lenses that could read blood sugar levels before ultimately putting the project on the back burner in 2018. The second method involves spectroscopy. It’s essentially shining light into the body using optical sensors and measuring how the light reflects back to measure a particular metric.

If it sounds familiar, that’s because this tech is already in smartwatches, fitness trackers, and smart rings. It’s how they measure heart rate, blood oxygen levels, and a host of other metrics. The difference is, instead of green or red LEDs, noninvasive blood glucose monitoring would use infrared or near-infrared light. That light would be targeted at interstitial fluid — a substance in the spaces between cells that carries nutrients and waste — or some other vascular tissue. As with heart rate and blood oxygen, the smartwatch would theoretically use a proprietary algorithm to determine your glucose levels based on how much light is reflected back.

But while the method is similar, applying this tech to blood glucose is much more complicated.

The Apple watch Series 8 with sensor array lit up Image: Amelia Holowaty Krales / The Verge
Smartwatches shine light into the skin to measure biometrics like heart rate and blood oxygen levels.

“The signal that you get back from glucose happens to be very small, which is unfortunate,” says David Klonoff, medical director at the Diabetes Research Institute at Mills-Peninsula Medical Center in San Mateo, California. Klonoff also serves as president of the Diabetes Technology Society, editor-in-chief of the Journal of Diabetes Science and Technology, and has followed noninvasive glucose monitoring tech for the past 25 years.

When it comes to glucose, it turns out size matters. That small signal makes it difficult to isolate glucose from other similarly structured chemicals in the body. It’s a headache for device makers, who can get tripped up by something as simple and ubiquitous as water.

“Water interferes with measurement in optical methods, and our bodies are filled with water. If you have any subtle changes in amounts of water, that can dramatically affect the signals you’re measuring,” says Movano CEO John Mastrototaro. Movano made waves for developing a women-first smart ring at CES, but the company has also developed a chip that may potentially be able to measure blood pressure and blood glucose using radio frequencies.

Both Klonoff and Mastrototaro also noted that substances within the body aren’t the only things that make isolating the glucose signal difficult. External and environmental factors like stray light, movement, and poor skin contact with the sensor can also throw off noninvasive measurements. Plus, infrared light is essentially a form of heat. It’s invisible to the naked eye, but all objects — including humans — give off some kind of infrared heat. And sensors aren’t always able to tell whether that heat’s coming from your smartwatch or a sweltering summer day.

The blood oxygen monitor’s light is quite bright, so much so you can turn it off when you’re in theater mode. Image: Vjeran Pavic / The Verge
Poor skin contact, movement, and stray light can throw off measurements.

For example, say you’re living in a future where smartwatches can noninvasively monitor your blood sugar levels. Climate change triggers a massive heatwave, and your HVAC breaks down. The room gets hotter, you get sweaty, and your smartwatch’s sensor could easily mistake that extra heat as your blood sugar rising.

One workaround is to collect more data by using multiple wavelengths of light — as in, adding more sensors that emit different types of infrared light. The more you have, the easier it is to figure out what’s glucose and what’s interference. But stuffing in more sensors comes with its own set of issues. You need a more powerful algorithm to crunch the extra numbers. And if you add too many wavelengths, you risk adding more bulk to a device.

There are sensors small and power efficient enough to fit into a smartwatch, but taking frequent, continuous measurements will still drain the battery. For example, many wearables that support nighttime SpO2 tracking will warn you that it may dramatically lessen battery life once the feature is enabled.

Current CGMs take measurements roughly once every five minutes, so a noninvasive smartwatch monitor would need to at least match that while maintaining at least a full day’s worth of battery. It has to do that plus track activities, power an always-on display, measure a host of other health metrics, fetch texts and notifications, and send data over cellular or Wi-Fi — all this without resorting to adding a bigger battery so the device can be comfortable enough to wear to sleep for truly continuous monitoring.

Another potential issue: optical sensors may not be as accurate for people with darker skin and tattoos. That’s because darker colors don’t reflect light in the same way as lighter colors. Take pulse oximeters, which use red and infrared light to measure blood oxygen. An FDA panel recently called for greater regulation of these devices because they were less accurate for people with darker skin. Noninvasive blood glucose monitors may not have as big of a problem here, as infrared light is better at handling melanin and ink than visible light. But even with that advantage, Mastrototaro says it’s still a challenge with wavelengths currently used in noninvasive glucose monitoring.

Regulatory clearance means adjusting expectations

Despite all of these challenges, technology has evolved to the point where many of these are solvable issues. AI is more powerful, so building algorithms that can handle the complexities of noninvasive glucose monitoring is easier than it used to be. Chips and other components keep getting smaller and more powerful. Companies like Movano are actively exploring alternatives to optical sensors. But technology is only one part of the equation.

There’s also the FDA.

Wellness features, like blood oxygen spot checks or heart rate, don’t require the FDA to weigh in on safety or efficacy because they’re for your own awareness. But the stakes for blood glucose levels are much higher. An incorrect reading or false alarm could lead a Type 1 diabetic to administer the wrong dosage of insulin, which could result in life-threatening consequences. For that reason, any smartwatch touting blood glucose monitoring features would have to go through the FDA.

The blood oxygen animation Image: Vjeran Pavic / The Verge
Apple’s blood oxygen feature did not require FDA clearance since it’s for wellness.

The rub is that obtaining FDA clearance or approval is a laborious process that takes months if you’re lucky and years if you aren’t. Device makers have to conduct rigorous testing and clinical trials for accuracy, safety, and efficacy. As frustrating as this is for companies, this level of rigor is a good thing and protects us, the consumers. But there’s no guarantee that any company — even one with a really good idea — will successfully make it through the process. And for many, that’s not a bet worth taking if the pros don’t significantly outweigh the cons.

This is why it’s extremely unlikely that consumer tech companies will even try to replace established methods like the finger prick test or CGMs, at least not anytime soon. It’s more likely that blood glucose on smartwatches will be for fitness or wellness tracking or, more ambitiously, a screening tool for prediabetes.

It’s essentially the path every wearable maker has followed thus far. When Apple introduced FDA-cleared EKGs on the Apple Watch Series 4, the purpose was to flag irregular heart rate rhythms and suggest you see a doctor to assess your risk of atrial fibrillation. It was never intended to help you manage a condition or inform treatment. Other companies like Fitbit, Samsung, and Garmin do the same for their EKG and AFib detection features.

These kinds of screening features may not sound quite as revolutionary, but they create a win-win scenario for researchers, companies, and consumers alike. In this case, the CDC says 96 million American adults have prediabetes, while Type 2 makes up 90 to 95 percent of diagnosed diabetes cases. It’s cynical, but this population represents a bigger customer base for companies for a lot less risk. Plus, all the data gathered from noninvasive monitoring could lead to new insights for researchers and consumers.

“I think what we’re going to see is that there’ll be subtle patterns that we don’t recognize right now that will alert people that they’re somewhere between normal and diabetes. And I think there are going to be patterns that predict certain types of prediabetes,” says Klonoff.

“It’s not just knowing your glucose that’s important. It’s really understanding everything about your health,” adds Mastrototaro, noting that, if successful with its RF tech, Movano hopes to fold glucose into its platform alongside other health metrics like heart rate, activity, and blood oxygen. That, he says, is more valuable as it creates a more complete picture of a person’s health. It’s also the same approach that Mastrototaro took back at Medtronic, where he worked on the team that made the first FDA-cleared CGM in 1999.

“Basically, the tool of the CGM allowed you to monitor trends in people’s glucose over time, so kind of to get an idea of the big picture. That’s where we started and we weren’t using it for real-time monitoring,” Mastrototaro explains, referring to how a Type 1 diabetic may use CGMs to determine how much insulin to take. “In the labeling of the initial products, it said that you can use this data for trends, you can use it to give you an idea, you can even use it to alert you if it thinks your blood sugar’s going too high or too low, but then you should confirm it with one of the fingerprick tests to verify and then treat.”

Sounds an awful lot like how smartwatches detect irregular heart rate rhythms before advising users to seek an official diagnosis from a doctor.

Get ready to wait

While Big Tech likes to disrupt and break things, medicine does not. It took nearly two decades for CGMs to be deemed accurate enough for use as a primary real-time blood sugar monitor. It’s not unfathomable to think noninvasive measures might take a while, too.

Neither Klonoff nor Mastrototaro felt confident enough to give any predictions as to when we might see noninvasive blood glucose monitoring on a smartwatch you can actually buy.

A person interacting with Apple Watch SE Image: Amelia Holowaty Krales / The Verge
It’ll be a long while before we see noninvasive glucose monitoring on consumer gadgets.

The milestone Bloomberg referred to was Apple purportedly developing an iPhone-size prototype, dramatically reducing the size of the device that previously had to rest on a table. This is all speculation, but if it were true, Apple has a lot of work left to do. First, Apple would need to shrink the prototype down to fit in the Apple Watch. More data would then need to be collected from the smaller device, with the results ideally published in a peer-reviewed journal. Everything would have to be reviewed by the FDA. And this is if everything goes swimmingly, without any setbacks or errors that require the company to go back to the drawing board.

But perhaps Sumbul Desai, Apple’s VP of health, put it best. When asked about the possibility of blood glucose sensors in a future Apple Watch in a recent interview, she merely said, “All of these areas are really important areas but they require a lot of science behind them.”

You can’t, and shouldn’t, rush good science. And we’ve all seen what happens when companies ship a half-baked, rushed product. Personally, I’m willing to wait for someone to get it right.

My passion for the seven small objects at the heart of everything we build

My passion for the seven small objects at the heart of everything we build
From a ball point pen to a skyscraper, everything we make needs one or more of these design wonders

When I was about five years old, I was living with my parents and sister in snowy upstate New York. It was the 1980s and one day I sat in front of my favourite large rectangular lunchbox, adorned with a picture of the Muppets on the front. This one held my huge collection of crayons – long, short, thick, thin, in every shade available. Like most children, I was continuously curious and I wanted to “discover” what was inside my crayons. So I peeled off the paper that enveloped them, then held them one at a time against the sharp edge of the open box and snapped them in two. My great anticipation was rather dampened to find, well, just more crayon inside. Nevertheless I persisted.

When I was a little older and started writing words on paper with pencils, I would twist them inside a sharpener to see if the grey rod that marked my sheets went all the way through its body. It did. From there, I graduated to pens – far from the disappointing crayons of my early childhood, the insides of fountain pens and ballpoints contained slender cartridges and helical springs, held together with a top that threaded, screw-like, on to the rest of the pen.

Continue reading...

Apple’s last-gen MacBook Pro 14 and new Mac Mini are up to $400 off

Apple’s last-gen MacBook Pro 14 and new Mac Mini are up to $400 off
Apple’s 2021 14-inch MacBook Pro sitting turned on and open with its screen facing the camera on a desk.
Apple’s 14-inch MacBook Pro from 2021 offers a lot of the same functionality as the newer M2 models at a fraction of the cost. | Photo by Amelia Holowaty Krales / The Verge

The nice thing about the entry-level M2-powered MacBook Air is that it’s relatively affordable (for a Mac, of course). But that lower price tag comes with a drawback: it’s just not powerful enough for more demanding creative work. Thankfully, today’s $400 discount on the 14-inch MacBook Pro means you can buy a laptop that’s an absolute powerhouse for content creation at what’s closer to an entry-level price for the M2 Air.

Right now, the M1 Pro-equipped laptop is on sale at Best Buy with 16GB of RAM and 512GB of storage for $1,599 ($400 off), which is just $100 more than buying an M2-equipped MacBook Air with 512GB of storage and half the RAM. What’s more, the 14-inch MacBook Pro supports up to two external displays as opposed to one, while offering a nicer Mini LED screen and better battery life. And while not as speedy as the new M2-equipped MacBook Pros, the M1 Pro model from 2021 is still blazing fast and — at this price — also a lot cheaper. Read our MacBook Pro 2021 review.

Alternatively, if the MacBook Pro is too expensive and you don’t require all that power, Apple’s M2-powered Mac Mini is down to $699.99 ($100 off) at Amazon. That’s a new all-time low on this particular configuration, which offers 512GB of storage, 8GB of RAM, an eight‑core CPU, and a 10‑core GPU.

Overall, the new Mini is faster than its M1-equipped predecessor and is a good desktop for everyday computing needs with enough power to tackle even some light video work. It also touts future-proof specs like Wi-Fi 6E and Bluetooth 5.3, along with HDMI 2.0 output, an ethernet port, a 3.5mm headphone jack, and other ports. Just be mindful that you’ll have to supply your own monitor, keyboard, and mouse as the Mac Mini doesn’t come with these. Read our review of the M2 Pro-powered Mac Mini.

If it’s a decent pair of noise-canceling wireless earbuds you’re after, you can currently grab Amazon’s second-gen Echo Buds with a wireless charging case for $99.99 ($40 off) at Amazon, Best Buy, and Target, which is just $10 shy of their all-time low. You can also buy them with a wired charging case for $79.99 ($40 off) at Amazon and Target.

For the price, the earbuds offer a solid combination of good sound quality and effective noise cancellation, with perks like an excellent passthrough mode for when you need to hear your surroundings. Their noise cancellation may not be on par with more premium earbuds like Sony’s WF-1000XM4, but they’re still able to reduce noise well enough. Plus, they support hands-free Alexa commands, so you can make music requests and control smart home devices with just your voice. Read our review.

We’ve got a good deal for Nintendo Switch lovers who travel often and like to hook up their console to a TV or other large screen. Right now, you can buy the Genki Covert Dock for $59.99 ($15 off) from Genki. The accessory is like a pocketable version of the standard Switch dock, so you can easily carry it on the go, yet it also comes with multiple ports. That includes a single 30W USB-C PD port as well as outputs for USB-C and HDMI. To top it all off, the dock also comes with three international adapters. Read our review.

If the Xbox One or Series X/S is your primary gaming console and you’re looking for a new controller, 8BitDo’s Pro 2 Wired Controller for Xbox and PC is down to $39.99 ($5 off) at Amazon. That’s a small discount but one of the better prices we’ve seen on the pro-grade controller, which offers many of the same features as the wireless model we reviewed for the Nintendo Switch, including a pair of remappable buttons on the back. You can also customize the controller’s vibration, trigger, and stick sensitivity via an app for Android or iOS, and there’s a 3.5mm port you can use to connect your headphones or headset.

A few more deals to start the weekend right

‘ChatGPT said I did not exist’: how artists and writers are fighting back against AI

‘ChatGPT said I did not exist’: how artists and writers are fighting back against AI

From lawsuits to IT hacks, the creative industries are deploying a range of tactics to protect their jobs and original work from automation

No need for more scare stories about the looming automation of the future. Artists, designers, photographers, authors, actors and musicians see little humour left in jokes about AI programs that will one day do their job for less money. That dark dawn is here, they say.

Vast amounts of imaginative output, work made by people in the kind of jobs once assumed to be protected from the threat of technology, have already been captured from the web, to be adapted, merged and anonymised by algorithms for commercial use. But just as GPT-4, the enhanced version of the AI generative text engine, was proudly unveiled last week, artists, writers and regulators have started to fight back in earnest.

Continue reading...

I used an incredible X-ray machine to look inside my gadgets — let me show you

I used an incredible X-ray machine to look inside my gadgets — let me show you

I want to scan everything I own with the Lumafield Neptune.

I am that guy who asks airport security if I can photograph my luggage going through the X-ray machine. I’m also the guy who spent a solid hour scrubbing through the CT scan of my broken jaw with a mix of horror and utter fascination. You could say I’ve been on a bit of a spectral imaging kick.

So when a startup called Lumafield told me I could put as many things as I wanted into its $54,000 a year radiographic density scanning machine... let’s just say I’ve a sneaking suspicion they didn’t think I’d take it literally.

Last month, I walked into the company’s satellite office in San Francisco with a stuffed-to-the-gills backpack containing:

A big black box on legs with wheels, with a shiny silver pipe of a handle on its sliding door, in front of a wood plank covered wall next to a window-filled garage door. Image: Vjeran Pavic / The Verge
A Lumafield Neptune at the company’s satellite office in San Francisco.

I would have brought more, but I wanted to be polite!

The Neptune, Lumafield’s first scanner, is a hulking machine that looks like a gigantic black microwave oven at first glance. It’s six feet wide, six feet tall, weighs 2,600 pounds, and a thick sliding metal door guards the scanning chamber while the machine is in use. Close that door and press a button on its integrated touchscreen, and it’ll fire up to 190,000 volts worth of X-rays through whatever you place on the rotating pedestal inside.

I began with my Polaroid OneStep SX-70, the classic rainbow-striped camera that arguably first brought instant photography to the masses. Forty-five minutes and 35 gigabytes of data later, the company’s cloud servers turned the Neptune’s rotating radiograms into the closest thing I’ve seen to superhero X-ray vision.

Embedded TikTok from The Verge: “Ever wanted X-ray vision? Here’s the next best thing.”

Where my Kaiser Permanente hospital CT scan only produced ugly black-and-white images of my jaw that the surgeon had to interpret before I had the foggiest idea — plus a ghastly low-poly recreation of my skull that looked like something out of a ’90s video game — these scans look like the real thing.

A see-through translucent blue 1970s Polaroid showing all its internal metal components in orange spins against a black background. Scan: Lumafield; GIF: The Verge
If a ‘70s plastic Polaroid were see-through.

In a humble web browser, I can manipulate ghostly see-through versions of these objects in 3D space. I can peel away their plastic casings, melt them down to the bare metal, and see every gear, wire, chip, and spring. I can digitally slice out a cross section worthy of r/ThingsCutInHalfPorn (note: contains no actual porn) without ever picking up a water jet or saw. In some cases, I can finally visualize how a gadget works.

Embedded TikTok from The Verge: “An X-ray look inside our vintage Polaroid camera.”

But Lumafield isn’t building these machines to satisfy our curiosity or to aid reverse engineering. Primarily, it rents them to companies that need to dissect their own products to make sure they don’t fail — companies that could never afford the previous generation of industrial CT scanners.

A decade ago, Eduardo Torrealba was a prizewinning engineering student who’d prototyped, crowdfunded, and shipped a soil moisture sensor that ScottsMiracle-Gro eventually took off his hands. (Fun fact: his fellow prizewinners were behind Microsoft’s IllumiRoom and Disney’s Aireal, which we once featured on The Verge.) Torrealba has been helping people prototype products ever since, both via the Fuse 1 selective laser sintering 3D printer he developed as a director of engineering at Formlabs and as an independent consultant for hardware startups after that.

Throughout, he ran into issues with manufactured parts not turning out properly, and the most compelling solution seemed to be a piece of lab equipment: the computed tomography (CT) scanner, which takes a series of X-ray images, each of which shows one “slice” of an object. Good ones, he says, can cost a million dollars to buy and maintain.
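To make the idea of slices concrete, here is a minimal Python sketch that stacks synthetic 2D slices into a 3D volume and pulls out a digital cross section. The data and the make_slice helper are invented for illustration; this is not Lumafield's reconstruction pipeline.

```python
# Toy example: build a 3D volume from synthetic 2D slices, then take a cross section.
import numpy as np

def make_slice(size: int = 64, outer: float = 0.9, inner: float = 0.4) -> np.ndarray:
    """One synthetic slice: a dense ring (the 'casing') around a hollow core."""
    y, x = np.mgrid[-1:1:size * 1j, -1:1:size * 1j]
    r = np.sqrt(x**2 + y**2)
    return ((r < outer) & (r > inner)).astype(np.float32)

# Stack many slices along the z-axis to form a volume, the way CT data is organized.
volume = np.stack([make_slice() for _ in range(64)], axis=0)  # shape (z, y, x)

# A "digital cross section": cut the volume down the middle instead of using a saw.
cross_section = volume[:, :, volume.shape[2] // 2]
print(volume.shape, cross_section.shape)  # (64, 64, 64) (64, 64)
```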

So in 2019, he and his co-founders started Lumafield to democratize and popularize the CT scanner by building its own from scratch. It’s now an 80-person company with $67.5 million in funding and a handful of big-name clients including L’Oréal, Trek Bikes, and Saucony.

“If the only cars that existed were Ferraris, a lot less people would have cars. But if I’m going to the corner store to get a gallon of milk, I don’t need a Ferrari to get there,” he tells The Verge, pitching the Lumafield Neptune as an affordable Honda Civic by comparison.

You can see the chips, layered boards, spring loaded hinges and more in translucent lemon-lime-aqua Scan: Lumafield; GIF: The Verge
The many layers of the T-Mobile G1 / HTC Dream, the first Android phone.

He admits the Neptune has limitations compared to a traditional CT, like how it doesn’t readily scan objects larger than a bike helmet, doesn’t go down to one micron in resolution, and probably won’t help you dive into, say, individual chips on a circuit board. I found it hard to identify some digital components in my scans.

But so far, Lumafield’s “gallon of milk” is selling scanners to companies that don’t need high resolution — companies that mostly just want to see why their products fail without destroying the evidence. “Really, we compete with cutting things open with a saw,” says Jon Bruner, Lumafield’s director of marketing.

Bruner says that, for most companies, the state of the art is still a band saw — you literally cut products in half. But the saw doesn’t always make sense. Some materials release toxic dust or chemicals when you cut them. Many batteries go up in flames. And it’s harder to see how running impacts a running shoe if you’ve added the impact of slicing it in half. “Plastic packaging, batteries, performance equipment... these are all fields where we’re replacing destructive testing,” Bruner adds.

When L’Oréal found the bottle caps for its Garnier cleansing water were leaking, it turned out that a 100-micron dent in the neck of the bottle was to blame, something the company discovered in its very first Lumafield scan — but that never showed up in traditional tests. Bruner says that’s because the previous method is messy: you “immerse in resin, cut open with a bandsaw, and hope you hit the right area.”

Little yellow, green and blue dots visualized inside a part on screen to show where its potentially problematic pores are. Image: Sean Hollister / The Verge
Lumafield’s flaw detection at work.

With a CT scanner, there’s no need to cut: you can spin, zoom, and go slice by digital slice to see what’s wrong. Lumafield’s web interface lets you measure distance with just a couple clicks, and the company sells a flaw detection add-on that automatically finds tiny hollow areas in an object — known as porosity; it’s looking for pores — which could potentially turn into cracks down the road.
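Lumafield hasn't published how its flaw detection works, so the following is only an assumed illustration of the general idea: a toy version can threshold a reconstructed density volume and label connected pockets of empty voxels that never reach the outside of the part.

```python
# Toy porosity check: label enclosed pockets of low-density voxels in a CT-like volume.
# Purely illustrative; not Lumafield's actual flaw-detection algorithm.
import numpy as np
from scipy import ndimage

def find_pores(volume: np.ndarray, density_threshold: float = 0.5) -> list[int]:
    """Return the voxel counts of empty regions that do not touch the outside air."""
    empty = volume < density_threshold        # voxels with little or no material
    labels, count = ndimage.label(empty)      # group empty voxels into connected regions
    # Any region that touches a face of the volume is "outside air", not a pore.
    border_labels = set(np.unique(labels[0, :, :])) | set(np.unique(labels[-1, :, :])) \
        | set(np.unique(labels[:, 0, :])) | set(np.unique(labels[:, -1, :])) \
        | set(np.unique(labels[:, :, 0])) | set(np.unique(labels[:, :, -1]))
    return [int((labels == i).sum())
            for i in range(1, count + 1) if i not in border_labels]

# Solid cube with one small hidden void inside it.
part = np.ones((32, 32, 32), dtype=np.float32)
part[15:18, 15:18, 15:18] = 0.0
print(find_pores(part))  # [27] -> one 3x3x3 internal pore
```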

But only select firms like aerospace contractors and major medical device companies could normally afford such technology. “Tony Fadell said [even Apple] didn’t have a CT scanner until they started working on the iPod nano,” Bruner relates. (Fadell, creator of the Apple iPod and co-founder of Nest, is an investor in Lumafield.)

Torrealba suggests that while you could maybe find a basic industrial CT scanner for $250,000 with $50,000 a year in ongoing software, maintenance, and licensing fees, one equivalent to the Neptune would run $750,000 to $1 million just in upfront costs. Meanwhile, he says, some clients are paying Lumafield just $54,000 a year ($4,500 a month), though many are more like $75,000 a year with a couple of add-ons, such as a lower-power, higher-resolution scanner or a module that can check a part against its original CAD design. Each scanner ships to your office, and the price includes the software and service, unlimited scans, and access for as many employees as you’d like.

The blue translucent shell of my blaster vanishes exposing the metal spring and screws and grips and barrel. Scan: Lumafield; GIF: The Verge
Melting my Halo Magnum foam blaster down to its (very few) metal parts.

How can Lumafield’s CT scanner be that much less expensive? “There’s never been market pressure within the industry to push costs down and make it more accessible,” says Bruner, noting that aircraft manufacturers, for example, have only ever asked for higher-performance machines, not more affordable ones, and that’s where Lumafield finds an opportunity.

Torrealba says there are plenty of other reasons, too — like how the company hired its own PhDs to design and build the scanners from scratch, assembling them at its own facilities in Boston, writing its own software stack, and creating a cloud-based reconstruction pipeline to cut down on the compute it needed to put inside the actual machine.

Even after a pair of interviews, it’s not wholly clear to me just how successful Lumafield has been since it emerged from stealth early last year. Torrealba says the team has shipped more than 10 but fewer than 100 machines — and would only say that the number isn’t 11 or 99, either. They wouldn’t mention the names of any clients that aren’t already listed on their case studies page.

The Neptune with a big green Ready light indicating it’s ready to begin a new scan. The touchscreen also reads “scan complete.” Image: Vjeran Pavic / The Verge

But if you take the director of marketing at his word, Lumafield is making waves. “In the case of shoes, we have many of the household names in that space,” says Bruner, adding that “a lot of the big household names” in the consumer packaged goods category have signed on as well. “In batteries, it’s a group of companies, some of which are large and some small.” Product design consultancies are “a handful of customers,” and Lumafield has approached Kickstarter and Indiegogo to gauge interest, too.

Lumafield believes it may also get business from sectors that actually have used CT scanning before — like medical device and auto part manufacturers — largely by being faster. While many of the high-quality scans of my gadgets took hours to complete, Bruner says that even those companies that do have access to CT scanners might not have them at hand and need to mail the part to the right facility or an independent scanner bureau. “It’s the difference between having your engineering problem answered in two hours and waiting a week.”

And for simple injection molded products like some auto parts, Lumafield even retrofitted the Neptune with a fully automatic door, so a robot arm can swing parts in and out of the machine after a quick go / no go porosity scan that takes well under a minute to complete. Torrealba says one customer is “doing something adjacent” to the auto part example, and more than one customer is inspecting every single part on their production line as of today.

Automation is not what the Neptune was originally intended for, Torrealba admits, but enough customers seem interested that he wants to design for high-volume production in the future.

A robot arm pulls items in and out of the CT scanner, door automatically opening each time. Video: Lumafield; GIF: The Verge

I’ve kept my Polaroid camera on my desk the entire time I’ve been typing and editing this story, and I can’t help but pick it up from time to time, remembering what’s on the other side of its rainbow-striped plastic shell and imagining the components at work. It gives me a greater appreciation for the engineers who designed it, and it’s intriguing to think future engineers might use these scanners to build and test their next products, too.

I’d love to hear if you spot anything particularly cool or unusual in our Lumafield scans. I’m at sean@theverge.com.

How to watch Summer Games Done Quick 2024

How to watch Summer Games Done Quick 2024
Photo by Ivan “Porkchop44” for Games Done Quick
It’s summer, which means it’s time for sun and ...