Saturday, July 1, 2023

Smart lights, smart mugs, and a handful of other weekend discounts
A series of glowing Nanoleaf light panels on a wall beside a pair of headphones and a computer monitor.
Nanoleaf only manufactured a thousand units of its Ultra Black Triangles light panels, but they’re still available at some retailers. | Image: Nanoleaf

Whether you celebrate the “Fourth” or not, Independence Day weekend is oft considered one of the best times of the year to save on everything from TVs and laptops to the humble Toyota Camry. That’s still the case this year; however, with Amazon Prime Day kicking off in a little over a week, we expect many of this weekend’s best discounts to hang around just a bit longer than they might have otherwise.

Take our first deal of the day as an example. Right now, you can pick up Nanoleaf’s Shapes Ultra Black Triangles at Amazon and Best Buy for $199.99 ($20 off), which matches the lowest price we’ve seen on the nine-panel kit since it made its debut last year. The modular light panels are nearly identical to the originals aside from the fact that they appear all-black when dormant, meaning they can still display 16 million colors and work with Amazon Alexa, Apple Home, and Google Assistant. Plus, they’re outfitted with Wi-Fi and Thread radios, so they can also act as a Thread Border Router — a vital component of the new Matter smart home standard.

While I’d like to think most students aren’t thinking about going back to school quite yet — the school year only just ended in many parts of the country, after all — I’d be remiss if I didn’t point out that Dell’s XPS 13 is seeing one of its steepest discounts to date. Right now, the school-ready laptop is on sale at Dell for $849 ($250 off) with 16GB of RAM, a 12th Gen Intel Core i7-1250U processor, 512GB of storage, and a USB-C to 3.5mm adapter for those times when a shoddy pair of wireless headphones just won’t cut it.

As one of our top suggestions for both high school and college students, the XPS 13 has a lot going for it. The ultraportable Windows laptop remains one of the best alternatives to Apple’s MacBook Air thanks to its price-to-performance ratio, not to mention its gorgeous chassis, its 16:10 display, and the kind of lightweight build that makes it a dream to tote around from class to class. All the aforementioned specs also render it a great daily driver, even if you’re not someone who is planning on taking a freshman composition class come the fall.

Read our Dell XPS 13 (2023) review.

Mosquitos are a seasonal fact of life, whether you live in the South or you’re a Northwest city dweller like me. Thankfully, Thermacell’s E90 Mosquito Repeller is nearly matching its best price to date at Amazon, where you can currently pick it up for $39.96 ($10 off).

The straightforward, rechargeable device is a welcome alternative to the DEET- and butane-heavy products of yesteryear, and it should allow you to thwart pesky bugs within a 20-foot radius for up to nine hours at a time thanks to its built-in battery. Keep in mind that the repeller comes with a single, nine-hour repellent cartridge, though there is a 40-hour cartridge available if you really need to keep ‘em at bay for longer.

A few more cool summer savings

  • The Ember Mug 2 continues to be one of our guilty pleasures, one that’s on sale at Woot right now starting at $97.99 ($52 off). The app-controlled smart mug doesn’t do anything other than keep your drink warm, but there’s a small comfort in knowing your coffee is going to be the exact temperature you set it to any time you reach for it in the morning.
  • iOttie’s wireless car charger — one of our favorite car gadgets — is currently on sale at Amazon for $34.99 ($15 off), which is just $1 shy of its all-time low. Not only is the handy, Qi-equipped mount wide enough to accommodate pretty much any iPhone or Android phone on the market, but it also features a one-touch design that lets you easily snap your phone in place. It’s a steal of a price if you’re okay with basic charging speeds, especially since the few Apple-approved MagSafe mounts still cost north of $100. Ouch.
  • Anker’s Soundcore Liberty 4 NC are currently seeing their first discount, dropping them to just $79.99 ($20 off) at Amazon when you clip the on-page coupon or to the same price at Anker with offer code WSCPXLFXWN. Although we haven’t tested them yet, Anker’s newest earbuds look to be an excellent value thanks to features like adaptive noise cancellation and ear detection, the latter of which automatically pauses what’s playing when you take them out. We were big fans of the Liberty 3 Pro, and while I imagine the new earbuds might not be on par with the Pro model when it comes to sound, they still offer a few features that are rare at this price point.
  • Apple’s ninth-gen iPad is on sale at Amazon, where the price drops to $249.99 ($79.01 off) at checkout in the 64GB base configuration, matching its lowest price to date. The entry-level model remains a great kid-friendly pick if you’re looking for something less expensive than the latest iPad Air or Pro models, especially since it still packs a 10.2-inch screen, a robust selection of apps, and an A13 Bionic chip that’s still plenty fast enough for most people’s needs. Read our iPad buying guide.

How Amazon Taught Alexa to Speak in an Irish Brogue

For Alexa to speak like a Dubliner, Amazon researchers had to crack a problem that’s vexed data scientists for years: voice disentanglement.

A ‘Cage Match’ Between Elon Musk and Mark Zuckerberg May Be No Joke

Talks over a matchup between the two tech billionaires have progressed and the parameters of an event are taking shape.

Friday, June 30, 2023

The Reddit app-pocalyse is here: Apollo, Sync, and BaconReader go dark
A screenshot of the now-empty Apollo app.
Screenshot by Jay Peters / The Verge

After a month of outrage, protests, and unrest from the community, Reddit has finally flipped the switch to shut down some third-party apps.

Apollo, an iOS app that became a rallying point for the recent protests against Reddit’s imminent API pricing, no longer loads any content from the platform. When I open it up, all I see is a spinning wheel. Developer Christian Selig confirmed to me that Reddit is the one that turned things off, not him: “would have been nice to have been given a time,” he says in an email to The Verge.

BaconReader, another popular app, shows an error message for me: “Request failed: client error (429).” (429 is the HTTP status code for “Too Many Requests,” in keeping with the API rate limits Reddit is now enforcing.) When I tap the “Tap to refresh” link, I just get the same error message.

Sync, an Android app, has stopped working too, displaying this message: “Error loading page: 401.” We’ve additionally found a tweet showing an error and Lemmy comments about lack of functionality in a fourth app, reddit is fun (RIF), but at the time we published this article, one Verge staffer could still see content on the app when not logged in. He wasn’t able to log into his account, though.

We knew this moment was coming: shortly after Selig testified in May that the API pricing would cost him about $20 million per year, he said he’d be shutting it down at the end of June. (The timing stung; a few days before, Apple had featured Apollo during its WWDC 2023 keynote.) Other developers said they’d need to close down as well.

Users were outraged at the company’s treatment of Selig and the developers of some other popular third-party apps, organizing protests to try and get Reddit to budge. But despite more than 8,000 communities going dark, Reddit held its ground, and now some apps are officially kaput. (Not every app is going away: Narwhal, Relay, and Now will still be available, though they will eventually become subscription-only.)

When reached for comment, Reddit spokesperson Tim Rathschmidt pointed to the company’s fact sheet about its API changes, which was just updated on Friday, as well as a Friday evening post from a Reddit admin confirming that the new API rate limits would be enforced “shortly.” (According to the fact sheet, the rate limits were technically supposed to go into effect on July 1st. I’m not sure what time zone Reddit was measuring that by, but if we’re basing it on US time zones, that means that Reddit decided to enforce the limits a few hours ahead of when it said it would.)

This week, I asked Selig if he planned to still use Reddit after Apollo shuts down. “Honestly, not sure,” he said. “I’m certainly using it a lot less.”

The Game Boy that survived the Gulf War has been removed from Nintendo New York
A half-charred Nintendo Game Boy that got damaged in a fire.
Photo by Sean Hollister

Ever heard the urban legend about how the original Nintendo Game Boy survived a bomb? I have reason to believe that’s not true. But until recently, the flagship Nintendo Store at New York City’s Rockefeller Center housed an original Game Boy that, it claimed, was damaged in a bombing during Operation Desert Storm.

We just confirmed with Nintendo New York that, after many years on display, the Gulf War Game Boy is no longer there. VideoGameArt&Tidbits was the first to report the news; they say a worker told them it was returned to Nintendo’s US headquarters in Washington state.

If it’s true, and if it’s not coming back, we’re hoping that Nintendo will display it somewhere else. But just in case it doesn’t, here are five 4K images of the Gulf War Game Boy for posterity.

I shot these photos with a Pixel 3 when I visited the store in 2019. You can download them, blow them up, and share them freely if you like (I didn’t shoot these for work). Just link back if you do, please.

Nintendo’s original plaque reads: “Game Boy Damaged in Gulf War. This Game Boy was damaged when barracks were bombed during the 1990 - 1991 Gulf War. It still works!” Photo by Sean Hollister
The top of the Game Boy is charred black, but Tetris is visible on the screen. Photo by Sean Hollister
Melted plastic droops down the left handgrip, and the volume dial is slightly melted, but the power adapter still plugs in nicely. Photo by Sean Hollister
The bottom of the Game Boy, where a Tetris cart hides underneath, is mostly unscathed, and the original serial number label is still visible. Photo by Sean Hollister
On the right side of the Game Boy, melted plastic droops down the bottom edge, but the link cable port seems intact. Photo by Sean Hollister
A slightly zoomed-out photo of the charred top and screen. Photo by Sean Hollister

Me, I figure this Game Boy probably survived because the back was mostly unscathed — and because, its original owner confirms to The Verge, it didn’t actually get hit by a bomb.

The Game Boy originally belonged to Stephan Scoggins, a ‘90s Nintendo Power reader who asked the Nintendo-owned magazine if he could get a new Game Boy in exchange. At the time, he simply said that it was “claimed by a fire while I was stationed in the Middle East,” when he was a registered nurse serving in Desert Storm.

Image: Nintendo Power
The Nintendo Power origin story.

Here’s what Nintendo Power’s editors wrote in July 1991:

When we received Stephan’s Game Boy from the Middle East, we thought that it was a goner. The back of the unit was in fair condition, but the front was charred and blistered from the heat of the fire. As an experiment, we popped in a Tetris Game Pak, plugged in a Battery Pak, and flipped on the power switch. When we heard its distinctive “Ping!” we couldn’t believe it! The Control Pad and A and B Buttons suffered melt down, but the Start and Select Buttons worked perfectly. Game Boy is even tougher than we thought it was! Of course, we don’t recommend that you subject your Game Boy to trial by fire, but in this case, we replaced Stephan’s Game Boy as a special “Desert Storm” courtesy.

Scoggins tells The Verge that yes, it was a fire. “It wasn’t a bombing, it was that the tent burned down.”

He suspects two different events were conflated. There was a bombing at that location, Scoggins says, but “it wasn’t one we were involved in.” We’ll be speaking to Scoggins more about his Game Boy soon.

Nintendo PR didn’t immediately have a comment on the Gulf War Game Boy. I think we can all agree that, bombing or no, it belongs in a museum.

Why influencers love a free trip — even a controversial one from Shein
A general view at an exclusive SHEIN fashion show & pop-up shop at O Beach Ibiza on May 05, 2023 in Ibiza, Spain.
Photo by Xavi Torrent / Getty Images

Thousands of influencers post content online with one goal: getting a brand deal. It’s affirmation that they’ve “made it” as an influencer, that they’re interesting enough to be paid to post, and that content creation could be a job. It’s usually a good thing — until, of course, it blows up in your face.

It’s not clear to me why anyone involved in Shein’s recent PR campaign thought it was a good idea to send influencers on a free trip to try to beat the labor exploitation accusations that plague the company. There’s an unseriousness to sponsored content — the cheery, upbeat music, the approval process videos go through before they can be posted — that makes it an insufficient response to workers who say they’re subjected to illegally long workdays and withheld wages.

But that didn’t stop Shein from tapping a handful of creators to visit Guangzhou, China, for a multi-day guided tour of Shein factories and facilities, fancy dinners, and photo ops.

“I expected the facility to be so filled with people just slaving away, but I was actually pleasantly surprised that most of these things were robotic,” one influencer who went on the trip said in a video. “Everyone was just working like normal, like chill, sitting down. They weren’t even sweating.”

Shein got what it wanted, but the influencers quickly realized this wasn’t a typical brand deal — followers and strangers alike were furious over content that seemed to brush past widely reported troubles with the brand. The backlash was swift, primarily directed at a creator who goes by Dani DMC, an influencer with nearly 300,000 TikTok followers whose videos were reshared on Twitter. In a matter of days, content from the Shein trip was deleted, defensive responses were shared (and then also deleted), and apologies were issued.

It should be yet another good lesson for anyone trying to make money through content creation: brand deals will sometimes come back to bite you. And it’s often the individual content creator — not the advertiser — who gets the most heat while having the least support.

The incentives to make ads for brands have never been higher. Fueled by breakaway stars like MrBeast, the D’Amelio sisters, and Alix Earle, young people the world over don’t just dream of creating a viral presence out of nothing — many of them live it. Acting like an influencer is so easy it doesn’t even feel like pretending; shilling products and services online to a following of a few hundred has become a tenet of being online.

The work of converting content creators into an army of micro-advertising firms is becoming increasingly streamlined and frictionless. Platforms like Instagram offer creator marketplaces where brands can find influencers to hire for sponsored content. And on TikTok, a new program encourages creators to submit branded content for a chance at making some cash if their video performs well, without guaranteed returns.

Making a living online can be tricky — for most content creators making shortform videos, the payout from the platforms themselves is paltry. Earnings from creator funds, which pay viral personalities out of a pool of money set aside, often come out to be just a few dollars for millions of views. Other rewards programs for shortform video that were introduced to compete with TikTok have now dried up. Apart from potentially lucrative ad revenue sharing programs, many creators rely on brand deals to pay the bills.

The eagerness to make content for brands — and for brands to tap popular creators — has repeatedly backfired. Influencers boosting crypto projects made thousands of dollars, only for the projects to turn out to be scams. Even Kim Kardashian paid a $1.26 million fine after sharing sponcon for a crypto token without properly disclosing it was an ad.

Last year, TikTok and Instagram were littered with sponsored content made for a little-known app called Nate, which claimed to use artificial intelligence to autocomplete online shopping transactions.

Fashion and lifestyle influencers earned thousands of dollars in shopping credits by getting followers to sign up for the app. But the “AI” reportedly ended up just being human workers in the Philippines — users’ checkout information was manually entered by strangers. And in December, Nate ran off with influencers’ earnings, abruptly suspending its influencer program. Creators who had been using and promoting the Nate app aired out their frustrations and announced they would no longer be using the service — in the end, they lost out on what they were promised.

When brand deals go awry, it’s often the individual influencer who becomes the center of the maelstrom, as was the case with the Shein image rehab trip and others. Last month, a different influencer came under fire for referencing a school shooting that happened at her university in a sponsored video for skincare company Bioré. The brand apologized, too, saying it reviews all influencer content but doesn’t “edit or censor” material creators submit. I’m less outraged that a young person who experienced a campus shooting would mention it when asked to create content about mental health. But if you work in marketing for a major brand and don’t see how this could cause problems for everyone involved, you are bad at your job. Content creators are responsible for what they put their name on, but it’s up to the brand to make sure no one ends up looking like a fool.

Even worse is when a brand partners with an influencer and throws them to the wolves when there’s a public response, as was the case when Bud Light partnered with trans creator Dylan Mulvaney. When Mulvaney was subjected to an onslaught of transphobic vitriol and attacks, Bud Light doubled down, putting marketing executives on leave. As recently as this week, Mulvaney said the company had not even reached out to her since the abuse began.

In hindsight, hiring a smattering of influencers to convince the public you are definitely not violating labor laws was a bad idea. But if it wasn’t this group of people, it would have been another. And without the traditional stopgaps to tell creators how to navigate deals — editors, advisors, or someone to put out the fires — the rest of us will have to keep enduring the ill-conceived sponcon and resulting public outrage cycle.

Apple’s alien show Invasion returns for season 2 in August
A still photo from season 2 of Invasion.
Image: Apple

The summer of sci-fi continues over on Apple TV Plus. Just as season 1 of Silo ends and a new season of Foundation is about to begin, Apple has confirmed that its alien series Invasion will be back very soon. The first episode of a 10-episode-long season 2 will premiere on August 23rd, with new ones dropping on Wednesdays. As part of the announcement, we also got some early images of season 2 that include a cool spacesuit and what seems like a pretty war-torn version of Earth.

Invasion originally premiered in October 2021 and told a planet-wide story about an alien invasion, letting viewers see things unfold from multiple perspectives. It took a while to get going, but once it got around to the actual aliens, it ended up as a great start to a sci-fi story. It also left a lot of questions unanswered. It sounds like the plan for season 2, as is often the case with sequels, is to go bigger, with the story picking up a few months after the season 1 finale.

“It’s a bigger, more intense season that drops our viewers into a wide-scale, global battle from the start,” co-creator and executive producer Simon Kinberg said in a statement. “At its core, the show is about the power of the human spirit and the emotional connections that hold us together especially when facing incredible obstacles.”

Meanwhile, August 23rd is shaping up to be a busy date for sci-fi fans — it’s also when Star Wars spinoff Ahsoka premieres on Disney Plus.

Who killed Google Reader?
Illustration by Hugo Herrera for The Verge

Ten years after its untimely death, the team that built the much-beloved feed reader reflects on what went wrong and what could have been.

There was a sign in the Google Reader team’s workspace at the company’s headquarters in Mountain View, California. “Days Since Cancellation,” it read, with a number below: zero. It was always zero.

This was in 2006 or so, back when Google Reader was still growing. Back when it still existed at all. Google’s feed-reading tool offered a powerful way to curate and read the internet and was beloved by its users. Reader launched in 2005, right as the blogging era went mainstream; it made a suddenly huge and sprawling web feel small and accessible and helped a generation of news obsessives and super-commenters feel like they weren’t missing anything. It wasn’t Google’s most popular app, not by a long shot, but it was one of its most beloved.

Within the company, though, Reader’s future always felt precarious. “It felt so incongruent,” says Dolapo Falola, a former engineer on the Reader team. “Literally, it felt like the entire time I was on the project, various people were trying to kill it.”

Of course, Google did kill it. (Google didn’t respond to a request for comment on this story.) Reader’s impending shutdown was announced in March of 2013, and the app went officially offline on July 1st of that year. “While the product has a loyal following, over the years usage has declined,” Google SVP Urs Hölzle wrote in a blog post announcing the shutdown.

Google tried its best to bury the announcement: it made it the fifth bullet in a series of otherwise mundane updates and published the blog post on the same day Pope Francis was elected to head the Catholic Church. Internally, says Mihai Parparita, who was one of Reader’s last engineers and caretakers, “they were like, ‘Okay, the Pope will be the big story of the day. It’ll be fine.’ But as it turns out, the people who care about Reader don’t really care about the Pope.” That loyal following Hölzle spoke of was irate over losing their favorite web consumption tool.

Google’s bad reputation for killing and abandoning products started with Reader and has only gotten worse over time. But the real tragedy of Reader was that it had all the signs of being something big, and Google just couldn’t see it. Desperate to play catch-up to Facebook and Twitter, the company shut down one of its most prescient projects; you can see in Reader shades of everything from Twitter to the newsletter boom to the rising social web. To executives, Google Reader may have seemed like a humble feed aggregator built on boring technology. But for users, it was a way of organizing the internet, of making sense of the web, of collecting all the things you care about no matter their location or type, and of helping you make the most of it.

A decade later, the people who worked on Reader still look back fondly on the project. It was a small group that built the app not because it was a flashy product or a savvy career move — it was decidedly neither — but because they loved trying to find better ways to curate and share the web. They fought through corporate politics and endless red tape just to make the thing they wanted to use. They found a way to make the web better, and all they wanted to do was keep it alive.

A photo of three men in an office, circa 2004. Photo by Chris Wetherell
From left to right: Ben Darnell, Chris Wetherell, and Laurence Gonsalves, three of the early members of the Reader team.

“I think I built a thing”

“This is going to be the driest story ever,” says Chris Wetherell, when I ask him to describe the beginning of Google Reader. Wetherell wasn’t the first person at Google to ever dream of a better way to read the internet, but he’s the one everyone credits with starting what became Reader. “Okay, here goes: a raging battle between feed formats,” he says when I push. “Does that sound interesting?”

Here’s the short version: One of the most important ways that information moves around the internet is via feeds, which automatically grab a webpage’s most important content and make it available. Feeds are what make podcasts work across apps, and how content shows up in everything from Flipboard to Facebook. In the early aughts, there were basically two ways to build a feed. One was RSS, which stands for Really Simple Syndication and has been around approximately forever. The other was called Atom, a newer standard that aimed to fix a lot of the things that were outdated and broken with RSS.
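To make the formats concrete: the job of any feed reader is to fetch a feed document, parse it, and present the entries, regardless of whether the publisher chose RSS or Atom. Here is a minimal sketch of that core loop in Python, using the widely available feedparser library, which normalizes both formats into one structure (the feed URLs below are placeholders):

    import feedparser  # third-party library: pip install feedparser

    # Placeholder feed URLs -- any real RSS 2.0 or Atom feed will do.
    FEEDS = [
        "https://example.com/rss.xml",   # an RSS 2.0 feed
        "https://example.com/atom.xml",  # an Atom feed
    ]

    for url in FEEDS:
        parsed = feedparser.parse(url)
        # .version reports the detected format, e.g. "rss20" or "atom10".
        print(parsed.version, "-", parsed.feed.get("title", "(untitled)"))
        for entry in parsed.entries[:3]:
            # Entries expose the same normalized fields either way,
            # so the reading app never has to care which format it got.
            print("  *", entry.get("title"), "->", entry.get("link"))

A reader like Google Reader is, at heart, this loop plus storage, unread state, and a user interface on top of it.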

In late 2004, Jason Shellen, a product manager working on Atom projects at Google, called up Wetherell, a former colleague on the Blogger team, and asked him if he could hack together some kind of Atom-based app. “Is there any way you could write just a little thing that would parse Atom, just to show how it works?” Shellen asked. All he really needed was a tech demo, something he could show potential partners to explain how Atom worked.

Wetherell stayed up late one night building a simple app that converted a bunch of websites’ RSS feeds to Atom and displayed those feeds in a JavaScript-based browser app so you could click around and read. “And then I tried to make it a pleasant arrangement,” Wetherell says. He called it Fusion. It wasn’t much to look at, but it was fast and worked in a web browser.

A screenshot of a prototype feed reader, showing several articles on the page. Photo by Chris Wetherell
Wetherell’s first prototype didn’t look like much, but it felt like nothing that had come before.

And then the strangest thing happened: as soon as he’d finished the Fusion app, Wetherell started using it to actually read stuff from the sites whose feeds he’d grabbed. He turned to his partner that night and said: “I think I built a thing.” Wetherell sent the prototype to Shellen, who also immediately saw its potential.

In 2004, most people weren’t viewing the internet through a bunch of social networks and algorithmic feeds. Facebook and Twitter were barely blips on the radar. At that point, most people experienced the internet by typing in URLs and going to websites. A few tools like NetNewsWire and Bloglines had cropped up to make it easier to subscribe to lots of sites in one place, but these RSS readers were mostly tools for nerds. Most users were stuck managing bookmarks and browser windows and furiously refreshing their favorite sites just to see what was new. Wetherell’s prototype wasn’t complicated like NetNewsWire, it didn’t crash like Bloglines, and the JavaScript interface felt fast and smooth. It immediately felt like a better way to keep up with the web.

Wetherell and Shellen started imagining all the different kinds of feeds this tool could store. Wetherell thought it might bring in photo streams from Flickr, videos from YouTube and Google Videos, even podcasts from around the web. Shellen, who had come to Google as part of its Blogger acquisition, saw the possibility for a social network, a single place to follow all your friends’ blogs. “Of course, it was just a hacky list of feeds,” Wetherell says, but there was something about the speed with which you could flip through articles and headlines, the information density, the simplicity of the reading experience, that just worked.

Ultimately, Wetherell ended up spending some of his 20 percent time — Google’s famous policy of letting employees work on just about whatever they wanted, which ironically died about the same time Reader did — building Fusion into a more complete feed-reading product. It handled RSS, Atom, and more. After a while, he wound up showing it to the folks building iGoogle, the company’s recently launched web-homepage product. (iGoogle has since been killed, of course.)

A screenshot of Fusion, an early prototype of the app that became Google Reader. Image by Chris Wetherell
As the Fusion prototypes got more polish, they started to look more like Reader.

In his presentation, Wetherell shared a much bigger, grander ambition for Fusion than an article-reading service. He and Shellen had been talking about the fact that a feed could be, well, anything. Wetherell kept using the word “polymorphic,” a common term in programming circles that refers to a single thing having many forms.

“I drew a big circle on the whiteboard,” he recalls. “And I said, ‘This is information.’ And then I drew spokes off of it, saying, ‘These are videos. This is news. This is this and that.’” He told the iGoogle team that the future of information might be to turn everything into a feed and build a way to aggregate those feeds.

The pitch sounded good, and they got permission to keep working on it. Fusion wasn’t exactly made an official project or staffed like one, but it was at least allowed to continue to exist. Wetherell and Shellen recruited other people working on similar projects in their 20 percent time, and Shellen wrote an official product spec document outlining Fusion’s ambitions. The vision, he wrote, was to “become the world’s best collaborative and intelligent web content delivery service.” It promised to “build a robust web service and best-of-breed user interface for viewing subscriptions” and to produce an API that would let other apps tap into the same underlying data.

In other words, Fusion was meant to be a social network. One based on content, on curation, on discussion. In retrospect, what Shellen and Wetherell proposed sounds more like Twitter or Instagram than an RSS reader. “We were trying to avoid saying ‘feed reader,’” Shellen says, “or reading at all. Because I think we built a social product.”

That was the idea, anyway.

Google goes social

In October of 2005, Shellen announced Fusion to the world at the Web 2.0 Conference in San Francisco. Only he wasn’t allowed to call it Fusion. The team had been forced to change the name at the last minute, after Marissa Mayer — at that point, the Google executive in charge of all of the company’s consumer web services — said she wanted the name for another product and demanded the team pick another one. (That product never launched, and nobody I spoke to could even remember what it was. Mayer also didn’t respond to a request for comment.)

The team had brainstormed dozens of other names: Reactor, Transmogrifier, and a whiteboard full of others. Down near the bottom of the list: Reader, “a name none of us liked,” Wetherell says, “because it does many other things, but… it’s fine.” But somehow, that became the choice.

Shellen in particular still rues losing the fight over the name. Even now, he bristles thinking about the fight and the fact that Google Reader is known as “an RSS reader” and not the ultra-versatile information machine it could have become. Names matter, and Reader told everyone that it was for reading when it could have been for so much more. “If Google made the iPod,” he says, “they would have called it the Google Hardware MP3 Player For Music, you know?”

So Fusion launched, as Google Reader, and immediately crashed spectacularly. The site simply couldn’t keep up with the traffic on the first day. Most of those early visitors to reader.google.com never came back, either. Even once the Reader team stabilized the infrastructure, lots of users hated the product; it had a lot of clever UI tricks, but for too many users, it just didn’t work. “People don’t remember this,” Wetherell says, “but it bombed. It was terrible. We were accused by someone of hurting the share price of Google because it bombed so hard.”

It wasn’t until the team launched a redesign in 2006 that added infinite scrolling, unread counts, and some better management tools for heavy readers that Reader took off. Another newish Google product, Gmail, had far more users, but the engagement with Reader was off the charts. “People would spend, I don’t know, five minutes a day on iGoogle,” Parparita says, “and like an hour a day in Reader.” The team hadn’t been pushed to worry about monetization or user growth, but they felt like they were on the right track.

Reader appealed primarily to information junkies, who wanted a quick way to keep up with all their favorite publications and blogs. (It turned out there were two types of Reader users: the completionists, who go through every unread item they have, and the folks who just scroll around until they find something. Both sides think the other is bonkers.) The team struggled to find ways to bring in more casual users, some of whom were put off by the idea of finding sites to subscribe to and others who simply didn’t care about reading hundreds of articles a day.

One feature took off immediately, for power users and casual readers alike: a simple sharing system that let users subscribe to see someone else’s starred items or share their collection of subscriptions with other people. The Reader team eventually built comments, a Share With Note feature, and more. All this now seems trite and obvious, of course, but at the time, a built-in way to see what your friends liked was novel and powerful. Reader was prescient.

A photo of Google Reader on a computer screen.
Reader was always a power-user tool at heart, and it appealed to people with a lot to read.

At its peak, Reader had just north of 30 million users, many of them using it every day. That’s a big number — by almost any scale other than Google’s. Google scale projects are about hundreds of millions and billions of users, and executives always seemed to regard Reader as a rounding error. Internally, lots of workers used and loved it, but the company’s leadership began to wonder whether Reader was ever going to hit Google scale. Almost nothing ever hits Google scale, which is why Google kills almost everything.

The bigger problem seemed to be that Mayer didn’t like it: Shellen says she told him at one point that he was wasting his engineers’ careers working on Reader. The team had trouble getting face time in product reviews, and asking for additional resources or funding was a waste of time. Google co-founder Larry Page had been a fan of the app — Jenna Bilotta, a designer on the team, remembers he had this very specific idea about using Reader to research windmill-generated energy — but a few years later, Shellen recalls Reader appearing on Page’s list of Google’s worst 100 projects.

Google’s executives always seemed to think Reader was a feature, not a product. In meeting after meeting, they’d ask why Reader wasn’t just a tab in the Gmail app. When a team decided to build a new email client called Inbox, with promises of collecting all your important communication and information in one place, executives thought maybe Reader should be part of that. (Inbox was eventually killed, too.)

Every so often, a faction of the Reader team was called into a meeting and asked to justify the product’s ongoing existence. It didn’t require many resources, which was helpful; the team only ever got as big as about a dozen people, many of them on loan from other teams at the company. On the other hand, Reader wasn’t a roaring Google scale success, nor did it have a powerful executive championing its existence. It seemed the company got more tired of this side project all the time. Falola still remembers one particularly telling interaction: “We were having some back and forth with some VP at the time, making our petition for why you should keep Reader around, and I remember that VP responding with, ‘Don’t confuse this for a conversation between peers.’”

Threatened by the rise of social networks — namely Facebook and its rapidly tightening grip on the online ad market — Google became desperate to build its own. It tried to build a social graph called Google Friend Connect, which went nowhere. It decided to build a network around email contacts, where the company already had a head start because of Gmail, but that didn’t make any sense. So the company’s big swing became Google Buzz, an app that tried to combine messaging, social networking, and blogging into one thing. That launched in 2010 and was killed in 2011.

For a while, the Reader team managed to stay alive by promising to be the guinea pig for Google’s other social ideas. It tried the Gmail contacts thing; Parparita remembers that as “the year Reader ruined Christmas” because the feature launched in December and suddenly everyone’s mom and landlord and Craigslist acquaintance could see all the articles they’d starred. (The team scrambled to build sharing management tools quickly after that.) The Reader engineers worked with the Buzz team, the iGoogle team… anyone who needed help.

The tide turned when Google decided not just to build a social product but to fundamentally re-architect the company’s apps around social. Two executives, Vic Gundotra and Bradley Horowitz, started a new project codenamed “Emerald Sea” with plans to build sharing and friend-based recommendations into just about every Google app. It would come to be known as Google Plus, the company’s most direct shot at a Facebook-like product, and Gundotra and Horowitz amassed an empire within the company. “We’re transforming Google itself into a social destination at a level and scale that we’ve never attempted — orders of magnitude more investment, in terms of people, than any previous project,” Gundotra told Wired in 2011. He wasn’t exaggerating.

“As far as I could tell, nobody ever won against them,” Parparita says. “They just got their own way.” There was plenty of opposition to the project, including from the Reader team, but it didn’t matter. The Emerald Sea team worked in a special building, only accessible to a few employees; the secrecy was just one more signal to everyone that this was Google’s top priority.

Gundotra and Horowitz also seemed to pluck any employee they wanted. And they wanted a number of Reader employees, who were some of Google’s most well-regarded. “We assembled the Beatles,” says Wetherell, and Shellen calls the team a “Murderer’s Row.” Both singled out Parparita as one of Google’s best engineers, along with Ben Darnell, a back-end whiz who built much of the product’s underlying infrastructure. Many of these engineers had started working on Reader as a side project, simply because they loved the app. Some had done stints full-time and then gone on to other projects. Now it felt like everyone was being pulled into Plus — and many of them chose to leave the company instead.

And in its effort to build a splashy new social platform, the Reader team felt Google was missing the burgeoning one right under its nose. Reader was probably never going to become the world-conquering beast Facebook eventually became, but the team felt it had figured out some things about how people actually want to connect. “There were people that met on Google Reader that got married,” Bilotta says. “There are whole communities that met on Google Reader that meet up — they fly to meet each other! It was crazy. We didn’t anticipate this being that sticky.” The team was plotting new ways for users to discover content, new tools for sharing, and more. Bilotta urged executives to see the potential: “They could have taken the resources that were allocated for Google Plus, invested them in Reader, and turned Reader into the amazing social network that it was starting to be.”

By early 2011, with the team severely diminished, Reader had been officially put into “maintenance mode,” which meant that an engineer — Parparita, mostly — would fix anything spectacularly broken but the product was otherwise to be left alone. Reader was integrated into Google Plus, sort of, before Plus began its inexorable decline. Despite Google practically force-feeding its social network to hundreds of millions of people, users rebelled against Google’s real-name policy, resented its spam problem, and ultimately could never figure out what Plus could do that Facebook or Twitter couldn’t. “The engagement was so low,” Bilotta says, “that basically within eight months, they realized it wasn’t going to be a product.”

The damage was done for Reader, though. Its core team was gone, its product had withered, and by the end of 2012, even Parparita had left Google. Hardly anyone on the team was surprised when Google announced a few months later that Reader was shutting down for good.

The alternate Reader universe

It’s been a decade since Reader went offline, and a number of the folks who helped build it still ask themselves questions about it. What if they’d focused on growth or revenue and really tried to get to Google scale? What if they’d pushed harder to support more media types, so it had more quickly become the reader / photo viewer / YouTube portal / podcast app they’d imagined? What if they’d convinced Mayer and the other executives that Reader wasn’t a threat to Google’s social plans, but actually could be Google’s social plans? What if it hadn’t been called Reader and hadn’t been pitched as a power-user RSS feed aggregator?

And, of course, there’s the biggest question: what if they’d tried to build Reader outside of Google? It had millions of devoted users, a top-notch team, and big plans. “At that time, outside of Google, VCs would have been throwing money at us left and right,” Wetherell says. Inside Google, it could never compete. Outside Google, there would have been no politics, no crushing weight of constant impending doom. If Google had been driven by anything other than sheer scale, Reader might have gotten to Google scale after all.

But Reader was also very much a product of Google’s infrastructure. Outside Google, there wouldn’t have been access to the company’s worldwide network of data centers, web crawlers, and excellent engineers. Reader existed and worked because of Google’s search stack, because of the work done by Blogger and Feedburner and others, and most of all, the work done by dozens of Google employees with 20 percent of their time to spare and some ideas about how to make Reader better. Sure, Google killed Reader. But nearly everyone I spoke to agreed that without Google, Reader could never have been as good as it was.

Over the years, people have approached Bilotta, Falola, and a few of the other ex-Reader team members about building something in the same vein. Shellen and Wetherell ended up co-founding Brizzly, a social platform based on a lot of the ideas in Reader. Kevin Systrom, once a product marketing manager on the Reader team, went on to found Instagram and, more recently, Artifact, two platforms with big ideas about information consumption that clearly learned from what went wrong at Reader.

For a while, the internet got away from what Google Reader was trying to build: everything moved into walled gardens and algorithmic feeds, governed by Facebook and Twitter and TikTok and others. But now, as that era ends and a new moment on the web is starting to take hold through Mastodon, Bluesky, and others, the things Reader wanted to be are beginning to come back. There are new ideas about how to consume lots of information; there’s a push toward content-centric networks rather than organizing everything around people. Most of all, users seem to want more control: more control over what they see, more knowledge about why they’re seeing it, and more ability to see the stuff they care about and get rid of the rest.

Google killed Reader before it had the chance to reach its full potential. But the folks who built it saw what it could be and still think it’s what the world needs. It was never just an RSS reader. “If they had invested in it,” says Bilotta, “if they had taken all those millions of dollars they used to build Google Plus and threw them into Reader, I think things would be quite different right now.”

Then she thinks about that for a second. “Maybe we still would have fallen into optimizing for the algorithm,” she allows. Then she thinks again. “But I don’t think so.”

European companies claim the EU’s AI Act could ‘jeopardise technological sovereignty’
 A flag of the European Union
The letter warns that the strict rules for generative AI outlined in the AI Act have ‘consequences.’ | Photo by Philipp von Ditfurth/picture alliance via Getty Images

Some of the biggest companies in Europe have taken collective action to criticize the European Union’s recently approved artificial intelligence regulations, claiming that the Artificial Intelligence Act is ineffective and could negatively impact competition. In an open letter sent to the European Parliament, Commission, and member states on Friday, and first seen by the Financial Times, over 150 executives from companies like Renault, Heineken, Airbus, and Siemens slammed the AI Act for its potential to “jeopardise Europe’s competitiveness and technological sovereignty.”

On June 14th, the European Parliament greenlit a draft of the AI Act following two years of developing its rules and expanding them to encompass recent AI breakthroughs like large language models (LLMs) and foundation models, such as OpenAI’s GPT-4. There are still several phases remaining before the new law can take effect, with the remaining inter-institutional negotiations expected to end later this year.

The signatories of the open letter claim that the AI Act in its current state may suppress the opportunity AI technology provides for Europe to “rejoin the technological avant-garde.” They argue that the approved rules are too extreme and risk undermining the bloc’s technological ambitions instead of providing a suitable environment for AI innovation.

One of the major concerns flagged by the companies involves the legislation’s strict rules specifically targeting generative AI systems, a subset of AI models that typically fall under the “foundation model” designation. Under the AI Act, providers of foundation AI models — regardless of their intended application — will have to register their product with the EU, undergo risk assessments, and meet transparency requirements, such as having to publicly disclose any copyrighted data used to train their models.

The open letter claims that the companies developing these foundation AI systems would be subject to disproportionate compliance costs and liability risks, which may encourage AI providers to withdraw from the European market entirely. “Europe cannot afford to stay on the sidelines,” the letter said, encouraging EU lawmakers to drop the act’s rigid compliance obligations for generative AI models and instead focus on requirements that can accommodate “broad principles in a risk-based approach.”

“We have come to the conclusion that the EU AI Act, in its current form, has catastrophic implications for European competitiveness,” said Jeannette zu Fürstenberg, founding partner of La Famiglia VC, and one of the signatories on the letter. “There is a strong spirit of innovation that is being unlocked in Europe right now, with key European talent leaving US companies to develop technology in Europe. Regulation that unfairly burdens young, innovative companies puts this spirit of innovation in jeopardy.”

The companies also called for the EU to form a regulatory body of experts within the AI industry to monitor how the AI Act can be applied as the technology continues to develop.

“It is a pity that the aggressive lobby of a few are capturing other serious companies,” said Dragoș Tudorache, a Member of the European Parliament who led the development of the AI Act, in response to the letter. Tudorache claims that the companies who have signed the letter are reacting “on the stimulus of a few,” and that the draft EU legislation provides “an industry-led process for defining standards, governance with industry at the table, and a light regulatory regime that asks for transparency. Nothing else.”

OpenAI, the company behind ChatGPT and DALL-E, lobbied the EU to change an earlier draft of the AI Act in 2022, requesting that lawmakers scrap a proposed amendment that would have subjected all providers of general-purpose AI systems — a vague, expansive category of AI that LLMs and foundation models can fall under — to the AI Act’s toughest restrictions. The amendment was ultimately never incorporated into the approved legislation.

OpenAI’s CEO Sam Altman, who himself signed an open letter warning of the potential dangers that future AI systems could pose, previously warned that the company could pull out of the European market if it was unable to comply with EU regulations. Altman later backtracked and said that OpenAI has “no plans to leave.”

Hun Sen’s Facebook Page Goes Dark After Spat with Meta

Prime Minister Hun Sen, an avid user of the platform, had vowed to delete his account after Meta’s oversight board said he had used it to threaten political violence.

Is A.I. Poisoning Itself? Billionaire Cage Fight and Cooking With ChatGPT

The missing ingredient in A.I. content might soon be human-generated content.

Thursday, June 29, 2023

Reddit will remove mods of private communities unless they reopen
Reddit logo shown in layers
Illustration by Alex Castro / The Verge

Reddit has informed moderators of protesting communities that are still private that they will lose their mod status by the end of the week, according to messages seen by The Verge. If a moderator tells Reddit they are interested in “actively moderating” the subreddit, the company says it will “take your request into consideration.”

Here is the full message, which we have confirmed was sent to moderators of at least two subreddits:

After sending a modmail message on June 27, 2023, your mod team indicated that you do not want to reopen the [name of subreddit] community. This is a courtesy notice to let you know that you will lose moderator status in the community by end of week. If you reply to let us know you’re interested in actively moderating this community, we will take your request into consideration.

In message threads we’ve seen, moderators of both subreddits told ModCodeofConduct that they do want to reopen but said they would need Reddit to make changes first.

“We see no reason to reopen as I don’t think we’re the bad guys here,” yoasif, an r/firefox moderator who received the message, tells The Verge in an email. “Reddit has had a chance to reconcile with the protest for weeks now, and they haven’t.” r/firefox, as of this writing, is indeed still private.

Reddit’s declaration that it is going to remove the mods follows escalating messages from the company this week that indicated it might take action against them. On Tuesday, the Reddit admin (employee) account ModCodeofConduct asked some moderators of private subreddits (a designation that means the community is only accessible to approved users) to let it know within 48 hours if they planned to reopen their communities.

But when some replied, the admin took a far more aggressive tone. “This community remaining closed to its [millions of] members cannot continue” past the deadline, ModCodeofConduct wrote in one message seen by The Verge. “This community will not remain private beyond the timeframe we’ve allowed for confirmation of plans here,” the admin added. ModCodeofConduct also argued that switching to private in protest is a violation of the Moderator Code of Conduct.

Reddit spokesperson Tim Rathschmidt declined to comment.

Although more than 8,000 communities went dark earlier this month in protest of the company’s imminent API pricing changes, many subreddits have since reopened; according to one tracker, just over 2,300 remain private or restricted in some form.

Update June 29th, 7:25PM ET: Reddit declined to comment.

Max begins fixing its ‘disrespectful’ creator credits
MAY 17: Casey Bloys, Chairman and CEO, HBO and Max Content, speaks onstage during the Warner Bros. Discovery Upfront 2023 at The Theater at Madison Square Garden on May 17, 2023
The updates should take a week or so to fully roll out across the platform. | Photo by Dimitrios Kambouris / Getty Images

Warner Bros. Discovery has started fixing the controversial “creator” credits section on its recently relaunched Max streaming platform more than a month after the company apologized for snubbing the talent behind its films and TV shows. According to Deadline, the entertainment giant began revising the credits sections across its various platforms earlier this week; the sections currently lump together writers, directors, producers, and more as nondescript “creators.”

“This is a credits violation for starters,” Meredith Stiehm, president of Writers Guild of America West, said last month. “But worse, it is disrespectful and insulting to the artists that make the films and TV shows that make their corporation billions.”

The updated credit sections lay out familiar categories that allow each title’s creators to be properly credited for their work. Some of these can already be seen on the updated credits for Succession. Deadline says the sections will include Created By, Director(s), Writers, Producers, Developed By, and Based on Source Material where applicable. The rollout is expected to take up to two weeks to complete.

A screenshot taken of the Succession tv show credits on the Max streaming platform. Image: Deadline
Some shows on Max, like Succession (pictured), have already had their credit sections updated to reflect the changes.

A couple of days after issuing its apology in May, Warner Bros. Discovery warned that fixing the credits across its platform “could take weeks” because it needed time for the data to be transferred, checked, and finalized. “It is not as simple as pressing a button,” said one studio insider to Deadline. Warner Bros. Discovery claims that a “technical oversight” during the transition from HBO Max to the new Max streaming platform was to blame for causing the issue.

Intentional or not, the timing of this situation has painted a sizable target on the studio. Various strikes and union activity from groups like the Writers Guild of America, Screen Actors Guild, and Directors Guild have taken place in recent weeks as professionals within the industry fight to ensure they’re fairly compensated, credited, and protected against being replaced with AI. Understandably, they didn’t appreciate the snub.

“Warner Bros. Discovery’s unilateral move, without notice or consultation, to collapse directors, writers, producers, and others into a generic category of ‘creators’ in their new Max rollout while we are in negotiations with them is a grave insult to our members and our union,” said DGA president Lesli Linka Glatter in response to the new Max credits. “This devaluation of the individual contributions of artists is a disturbing trend and the DGA will not stand for it.”

Disclosure: The Verge’s editorial staff is also unionized with the Writers Guild of America, East.

The EU still needs to get its AI Act together
European Commission In Brussels
There are still a few hoops to jump through before the EU’s AI regulations can take effect. | Photo by Jakub Porzycki/NurPhoto via Getty Images

It’s taken over two years for the European Parliament to approve its artificial intelligence regulations — but AI development hasn’t been idle.

The European Union is set to impose some of the world’s most sweeping safety and transparency restrictions on artificial intelligence. A draft of the EU Artificial Intelligence Act (AIA or AI Act) — new legislation that restricts high-risk uses of AI — was passed by the European Parliament on June 14th. Now, after two years and an explosion of interest in AI, only a few hurdles remain before it comes into effect.

The AI Act was proposed by European lawmakers in April 2021. In their proposal, lawmakers warned the technology could provide a host of “economic and societal benefits” but also “new risks or negative consequences for individuals or the society.” Those warnings may seem fairly obvious these days, but they predate the mayhem of generative AI tools like ChatGPT or Stable Diffusion. And as this new variety of AI has evolved, a once (relatively) simple-sounding regulation has struggled to encompass a huge range of fast-changing technologies. As Daniel Leufer, senior policy analyst at Access Now, said to The Verge, “The AI Act has been a bit of a flawed tool from the get-go.”

The AI Act was created for two main reasons: to synchronize the rules for regulating AI technology across EU member states and to provide a clearer definition of what AI actually is. The framework categorizes a wide range of applications by different levels of risk: unacceptable risk, high risk, limited risk, and minimal or no risk. “Unacceptable” risk models, which include social “credit scores” and real-time biometric identification (like facial recognition) in public spaces, are outright prohibited. “Minimal” risk ones, including spam filters and inventory management systems, won’t face any additional rules. Services that fall in between will be subject to transparency and safety restrictions if they want to stay in the EU market.
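To make that tiering concrete, here’s a minimal, purely illustrative Python sketch; the enum, the example systems, and the mapping are hypothetical stand-ins based on the descriptions above, not anything defined by the legislation itself.

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four risk tiers, as described above."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "conformity assessment required before market entry"
    LIMITED = "transparency obligations (disclose AI use)"
    MINIMAL = "no additional rules"

# Hypothetical examples drawn from the article's descriptions.
EXAMPLE_SYSTEMS = {
    "social credit scoring": RiskTier.UNACCEPTABLE,
    "real-time biometric ID in public spaces": RiskTier.UNACCEPTABLE,
    "predictive policing software": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
    "inventory management system": RiskTier.MINIMAL,
}

for system, tier in EXAMPLE_SYSTEMS.items():
    print(f"{system}: {tier.name} -> {tier.value}")
```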

The early AI Act proposals focused on a range of relatively concrete tools that were sometimes already being deployed in fields like job recruitment, education, and policing. What lawmakers didn’t realize, however, was that defining “AI” was about to get a lot more complicated.

The EU wants rules of the road for high-risk AI

The approved legal framework of the AI Act covers a wide range of applications, from software in self-driving cars to “predictive policing” systems used by law enforcement. On top of the outright prohibition on “unacceptable” systems, its strictest regulations are reserved for “high risk” tech. If you provide a “limited risk” system, like a customer service chatbot that interacts with users, you just need to inform consumers that they’re using an AI system. This category also covers the use of facial recognition technology (though law enforcement is exempt from this restriction in certain circumstances) and AI systems that can produce “deepfakes” — defined within the act as AI-generated content based on real people, places, objects, and events that could otherwise appear authentic.

For anything the EU considers riskier, the restrictions are much more onerous. These systems are subject to “conformity assessments” before entering the EU market to determine whether they meet all necessary AI Act requirements. That includes keeping a log of the company’s activity, preventing unauthorized third parties from altering or exploiting the product, and ensuring the data being used to train these systems is compliant with relevant data protection laws (such as GDPR). That training data is also expected to be of a high standard — meaning it should be complete, unbiased, and free of any false information.
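Purely as a sketch of what the activity-logging obligation might look like in practice (nothing here is specified by the act; the decorator, the log format, and the `score_applicant` example are all invented), a provider could record every inference call for later audit:

```python
import json
import time
from functools import wraps

AUDIT_LOG = []  # In practice this would be durable, append-only storage.

def audited(fn):
    """Record each call's inputs, output, and timestamp for later review."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        result = fn(*args, **kwargs)
        AUDIT_LOG.append({
            "function": fn.__name__,
            "args": repr(args),
            "result": repr(result),
            "timestamp": time.time(),
        })
        return result
    return wrapper

@audited
def score_applicant(cv_text: str) -> float:
    """Hypothetical 'high risk' system: an AI recruitment screener."""
    return 0.5  # placeholder model output

score_applicant("10 years of experience in...")
print(json.dumps(AUDIT_LOG, indent=2))
```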

European Commissioner in charge of internal market Thierry Breton holds a press conference on artificial intelligence (AI) following the weekly meeting of the EU Commission in Brussels on April 21st, 2021. | Photo by Pool / AFP via Getty Images
European Commissioner for Internal Market Thierry Breton holding a press conference on AI on April 21st, 2021.

The scope for “high risk” systems is so large that it’s broadly divided into two sub-categories: tangible products and software. The first applies to AI systems incorporated in products that fall under the EU’s product safety legislation, such as toys, aviation, cars, medical devices, and elevators — companies that provide them must report to independent third parties designated by the EU in their conformity assessment procedure. The second includes more software-based products that could impact law enforcement, education, employment, migration, critical infrastructure, and access to essential private and public services, such as AI systems that could influence voters in political campaigns. Companies providing these AI services can self-assess their products to ensure they meet the AI Act’s requirements, and there’s no requirement to report to a third-party regulatory body.

Now that the AI Act has been greenlit, it’ll enter the final phase of inter-institutional negotiations. That involves communication between Member States (represented by the EU Council of Ministers), the Parliament, and the Commission to develop the approved draft into the finalized legislation. “In theory, it should end this year and come into force in two to five years,” said Sarah Chander, senior policy advisor for the European Digital Rights Association, to The Verge.

These negotiations present an opportunity for some regulations within the current version of the AI Act to be adjusted if they’re found to be particularly contentious. Leufer said that while some provisions within the legislation may be watered down, those regarding generative AI could potentially be strengthened. “The council hasn’t had their say on generative AI yet, and there may be things that they’re actually quite worried about, such as its role in political disinformation,” he says. “So we could see new potentially quite strong measures pop up in the next phase of negotiations.”

Generative AI has thrown a wrench in the AI Act

When generative AI models started appearing on the market, the first draft of the AI Act was already being shaped. Blindsided by the explosive development of these AI systems, European lawmakers had to figure out how they could be regulated under their proposed legislation — fast.

“The issue with the AI Act was that it was very much focused on the application layer,” said Leufer. It focused on relatively complete products and systems with defined uses, which could be evaluated for risk based largely on their purpose. Then, companies began releasing powerful models that were much broader in scope. OpenAI’s GPT-3.5 and GPT-4 large language models (LLMs) appeared on the market after the EU had already begun negotiating the terms of the new legislation. Lawmakers refer to these as “foundation” models: a term coined by Stanford University for models that are “trained on broad data at scale, designed for the generality of output, and can be adapted to a wide range of distinctive tasks.”

Things like GPT-4 are often shorthanded as generative AI tools, and their best-known applications include producing reports or essays, generating lines of code, and answering user inquiries on endless subjects. But Leufer emphasizes that they’re broader than that. “People can build apps on GPT-4, but they don’t have to be generative per se,” he says. Similarly, a company like Microsoft could build a facial recognition or object detection API, then let developers build downstream apps with unpredictable results. They can do it much faster than the EU can usher in specific regulations covering each app. And if the underlying models aren’t covered, individual developers could be the ones held responsible for not complying with the AI Act — even if the issue stems from the foundation model itself.
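As a rough illustration of that layering, here’s a hypothetical Python sketch of a non-generative app built on top of a general-purpose model. The `call_foundation_model` function is a stand-in for whatever hosted API a provider might expose, not a real endpoint, so the sketch fakes a deterministic reply to stay self-contained.

```python
def call_foundation_model(prompt: str) -> str:
    """Stand-in for a provider's hosted model API (e.g., an LLM endpoint).

    A real implementation would make an HTTP call to the provider;
    here we fake a deterministic reply so the sketch runs on its own.
    """
    return "positive" if "love" in prompt.lower() else "negative"

def classify_sentiment(review: str) -> str:
    """A downstream 'app' that uses the model for classification,
    not content generation -- the non-generative case Leufer describes."""
    prompt = (
        "Classify the sentiment of this review as 'positive' or 'negative'. "
        f"Review: {review}"
    )
    return call_foundation_model(prompt)

print(classify_sentiment("I love this vacuum; it just works."))  # positive
```

The point is that the downstream developer controls only the prompt and the plumbing; any compliance problem baked into the underlying model travels with it.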

“These so-called General Purpose AI Systems that work as a kind of foundation layer or a base layer for more concrete applications were what really got the conversation started about whether and how that kind of layer of the pipeline should be included in the regulation,” says Leufer. As a result, lawmakers have proposed numerous amendments to ensure that these emerging technologies — and their yet-unknown applications — will be covered by the AI Act.

The capabilities and legal pitfalls of these models have swiftly raised alarm bells for policymakers across the world. Services like ChatGPT and Microsoft’s Bard were found to spit out inaccurate and sometimes dangerous information. Questions surrounding the intellectual property and private data used to train these systems have sparked several lawsuits. While European lawmakers raced to ensure these issues could be addressed within the upcoming AI Act, regulators across its member states have relied on alternative solutions to try and keep AI companies in check.

Steven Schwartz: Is varghese a real case? ChatGPT: Yes, Varghese v. China Southern Airlines Co Ltd, 925 F.3d 1339 (11th Cir. 2019) is a real case. Schwartz: what is your source. | Image: SDNY
Lawyer Steven Schwartz found out the hard way that even if ChatGPT claims it’s being truthful, it can still spit out false information.

“In the interim, regulators are focused on the enforcement of existing laws,” said Sarah Myers West, managing director at the AI Now Institute, to The Verge. Italy’s Data Protection Authority, for instance, temporarily banned ChatGPT for violating the GDPR. Amsterdam’s Court of Appeal also issued a ruling against Uber and Ola for violating drivers’ rights through algorithmic wage management and automated firing and hiring.

Other countries have introduced their own rules in a bid to keep AI companies in check. China published draft guidelines signaling how generative AI should be regulated within the country back in April. Various states in the US, like California, Illinois, and Texas, have also passed laws that focus on protecting consumers against the potential dangers of AI. Certain legal cases in which the FTC applied “algorithmic disgorgement” — which requires companies to destroy the algorithms or AI models they built using ill-gotten data — could lay a path for future regulations on a nationwide level.

The rules impacting foundation model providers are anticlimactic

The AI Act legislation approved on June 14th includes specific distinctions for foundation models. Providers must assess their product for a huge range of potential risks, from those that can impact health and safety to risks regarding the democratic rights of those residing in EU member states. They must register their models in an EU database before releasing them to the EU market. Generative AI systems built on these foundation models, including OpenAI’s ChatGPT chatbot, will need to comply with transparency requirements (such as disclosing when content is AI-generated) and ensure safeguards are in place to prevent users from generating illegal content. And perhaps most significantly, the companies behind foundation models will need to publicly disclose any copyrighted data used to train them.

This last measure could have seismic effects on AI companies. Popular text and image generators are trained to produce content by replicating patterns in code, text, music, art, and other data created by real humans — so much data that it almost certainly includes copyrighted materials. This training sits in a legal gray area, with arguments for and against the idea that it can be conducted without permission from the rightsholders. Individual creators and large companies have sued over the issue, and making it easier to identify copyrighted material in a dataset will likely draw even more suits.

But overall, experts say the AI Act’s regulations could have gone much further. Legislators rejected an amendment that could have slapped an onerous “high risk” label on all General Purpose AI Systems (GPAIs) — a vague classification defined as “an AI system that can be used in and adapted to a wide range of applications for which it was not intentionally and specifically designed.” When this amendment was proposed, the AI Act did not explicitly distinguish between GPAIs and foundation AI models and therefore had the potential to impact a sizable chunk of AI developers. According to one study conducted by appliedAI in December 2022, 45 percent of all surveyed startup companies considered their AI system to be a GPAI.

Members of the European Parliament take part in a voting session on the Artificial Intelligence Act during a plenary session at the European Parliament in Strasbourg, eastern France, on June 14th, 2023. | Photo by Frederick Florin / AFP via Getty Images
Members of the European Parliament vote on the Artificial Intelligence Act during a plenary session on June 14th.

GPAIs are still defined within the approved draft of the act, though these are now judged based on their individual applications. Instead, legislators added a separate category for foundation models, and while they’re still subject to plenty of regulatory rules, they’re not automatically categorized as being high risk. “‘Foundational models’ is a broad terminology encouraged by Stanford, [which] also has a vested interest in such systems,” said Chander. “As such, the Parliament’s position only covers such systems to a limited extent and is much less broad than the previous work on general-purpose systems.”

AI providers like OpenAI lobbied against the EU including such an amendment, and their influence in the process is an open question. “We’re seeing this problematic thing where generative AI CEOs are being consulted on how their products should be regulated,” said Leufer. “And it’s not that they shouldn’t be consulted. But they’re not the only ones, and their voices shouldn’t be the loudest because they’re extremely self-interested.”

Potholes litter the EU’s road to AI regulations

As it stands, some experts believe the current rules for foundation models don’t go far enough. Chander tells The Verge that while the transparency requirements for training data would provide “more information than ever before,” disclosing that data doesn’t ensure users won’t be harmed when these systems are used. “We have been calling for details about the use of such a system to be displayed on the EU AI database and for impact assessments on fundamental rights to be made public,” added Chander. “We need public oversight over the use of AI systems.”

Several experts tell The Verge that far from solving the legal concerns around generative AI, the AI Act might actually be less effective than existing rules. “In many respects, the GDPR offers a stronger framework in that it is rights-based, not risk-based,” said Myers West. Leufer also claims that GDPR has a more significant legal impact on generative AI systems. “The AI Act will only mandate these companies to do things they should already be doing,” he says.

OpenAI has drawn particular criticism for being secretive about the training data for its GPT-4 model. In an interview with The Verge, Ilya Sutskever, OpenAI’s chief scientist and co-founder, said that the company’s previous transparency pledge was “a bad idea.”

“These models are very potent, and they’re becoming more and more potent. At some point, it will be quite easy, if one wanted, to cause a great deal of harm with those models,” said Sutskever. “And as the capabilities get higher, it makes sense that you don’t want to disclose them.”

As other companies scramble to release their own generative AI models, providers of these systems may be similarly motivated to conceal how their product is developed — both through fear of competitors and potential legal ramifications. Therefore, the AI Act’s biggest impact, according to Leufer, may be on transparency — in a field where companies are “becoming gradually more and more closed.”

Outside of the narrow focus on foundation models, other areas in the AI Act have been criticized for failing to protect marginalized groups that could be impacted by the technology. “It contains significant gaps such as overlooking how AI is used in the context of migration, harms that affect communities of color most,” said Myers West. “These are the kinds of harms where regulatory intervention is most pressing: AI is already being used widely in ways that affect people’s access to resources and life chances, and that ramp up widespread patterns of inequality.”

If the AI Act proves to be less effective than existing laws protecting individuals’ rights, it might not bode well for the EU’s AI plans, particularly if it’s not strictly enforced. After all, Italy’s attempt to use GDPR against ChatGPT started as tough-looking enforcement, including near-impossible-seeming requests like ensuring the chatbot didn’t provide inaccurate information. But OpenAI was able to satisfy Italian regulators’ demands seemingly by adding fresh disclaimers to its terms and policy documents. Europe has spent years crafting its AI framework — but regulators will have to decide whether to take advantage of its teeth.

Shrunken Mac Minis and a new iPad Mini might come in November

The old Mac Mini design may finally be on its way out after more than a decad...