Monday, May 8, 2023

How Google tried to fix the web — by taking it over


Google promised to create a better, faster web for media companies with a new standard called AMP. In the end, it ruined the trust publishers had in the internet giant.

In 2015, Google hatched a plan to save the mobile web by effectively taking it over. And for a while, the media industry had practically no choice but to play along.

It began on a cheery October morning in New York City; the company had gathered the press together at a buzzy breakfast spot named Sadelle’s in SoHo. As the assembled reporters ate their bagels and lox, Google’s vice president of news, Richard Gingras, explained that the open web was in crisis. Sites were too slow, too hard to use, too filled with ads. As a result, he warned, people were flocking to the better experiences offered by social platforms and app stores. If this trend continued, it would be the end of the web as we know it.

But Google had a plan to fight back: Accelerated Mobile Pages, or AMP, a new format for designing mobile-first webpages. AMP would make the mobile web as fast, as usable, and as instantly loading as mobile apps, and every bit as popular. “We are here to make sure that the web evolves, and our entire focus is on that effort,” Gingras said. “We are here to make the web great again.”

“Make the web great again” was a popular phrase across Google at the time, echoing the burgeoning presidential campaign of an upstart Republican named Donald Trump. There was a lot of technical work behind the slogan: Google was building its own Chrome browser into a viable web-first operating system for laptops; trying to replace native apps with Progressive Web Apps; pushing to make the more secure HTTPS standard across the web; and promoting new top-level domains that would aim to make .blog and .pizza as important as .com. Much of this was boring or went over the heads of media execs. The point was that Google was promising to wrest distribution power away from Apple and Facebook and back into the hands of publishers.

After a decade of newspapers disappearing, magazine circulations shrinking, and websites’ business dwindling, the media industry had become resigned to its own powerlessness. Even the most cynical publishers had grown used to playing whatever games platforms like Google and Facebook demanded in a quest for traffic. And as Facebook chaotically pivoted to video, that left Google as the overwhelming driver of traffic to websites all over the web. What choice did anyone have?

“If Google said, ‘you must have your homepage colored bright pink on Tuesdays to be the result in Google,’ everybody would do it, because that’s what they need to do to survive,” says Terence Eden, a web standards expert and a former member of the Google AMP Advisory Committee. One media executive who worked on AMP projects but who, like other sources in this story, requested anonymity to speak about Google, framed the tradeoff even more simply: “you want access to this audience, you need to play by these rules.”

Adopting Google’s strange new version of the web resulted in an irresistible flood of traffic for publishers at first: using AMP increased search traffic to one major national magazine’s site by 20 percent, according to the executive who oversaw the implementation.

But AMP came with huge tradeoffs, most notably around how all those webpages were monetized. AMP made it harder to use ad tech that didn’t come from Google, fraying the relationship between Google and the media so badly that AMP became a key component in an antitrust lawsuit filed in 2020, just five years after its launch, by 17 state attorneys general, accusing Google of maintaining an illegal monopoly on the advertising industry. The states argue that Google designed AMP in part to thwart publishers from using alternative ad tools — tools that would have generated more money for publishers and less for Google. Another lawsuit, filed in January 2023 by the US Justice Department, went even further, alleging that Google envisioned AMP as “an effort to push parts of the open web into a Google-controlled walled garden, one where Google could dictate more directly how digital advertising space could be sold.”

Here in 2023, AMP seems to have faded away. Most publishers have started dropping support, and even Google doesn’t seem to care much anymore. The rise of ChatGPT and other AI services poses a much more direct threat to Google’s search business than Facebook Instant Articles and Apple News ever did. But the media industry is still dependent on Google’s fire hose of traffic, and as the company searches for its next move, the story of how it ruthlessly used AMP in an attempt to control the very structure and business of the web makes clear exactly how far it will go to preserve its business — and how powerless the web may be to stop it.

AMP succeeded spectacularly. Then it failed. And to anyone looking for a reason not to trust the biggest company on the internet, AMP’s story contains all the evidence you’ll ever need.

The small-screen shake-up

Earlier in 2015, months before AMP launched, one of Google’s key metrics was on the verge of a dramatic flip: the volume of searches coming from mobile phones was just about to outnumber the ones coming from desktop and laptop computers. This shift had been a long time coming, and Google saw it as an existential threat. The company had built a nearly $75 billion annual business almost entirely on ads — which made up about 90 percent of its revenue — and the most important ones by far were the ones atop search results in desktop browsers. By some internal measures, a typical mobile search at the time brought in about one-sixth as much ad revenue as one on desktop. The increasingly mobile-focused future could mean a disastrous revenue drop for Google.

In public, Google framed AMP as something like a civic mission, an attempt to keep the web open and accessible to everyone instead of moving to closed gardens like Facebook Instant Articles or Apple News, which offered superior mobile reading experiences. “To some degree, on mobile, [the web] has not fully satisfied users’ expectations,” Gingras said at the launch event. “We are hoping to change that.”

But the fight to fix the mobile web wasn’t just an altruistic move in the name of teamwork and openness and kumbaya. Internally, some viewed it as a battle for Google’s own survival. As smartphones became the default browsing experience for billions of users around the world, the mobile web was becoming the only web that really mattered. Google’s competitors were exerting far more control over how users lived their lives on their phones: readers were getting their news from native apps and from proprietary formats created by Facebook and Apple. Google worried that if enough users switched to these faster, simpler, more controlled experiences, it risked being left out altogether.

As Big Tech companies took over the ad industry, they did so largely at the expense of publishers. Newspapers used to be the way to advertise your new hair salon, or you might buy local TV ads to hawk the latest appliances for sale in your store. By 2015, most advertisers just went through Facebook and Google, which offered a more targeted and more efficient way to reach buyers.

Google, obviously aware that it was taking revenue from publishers, occasionally tried to make nice. Sometimes that meant creating new products, like the awkwardly named Google Play Newsstand, to give media companies another place to distribute and sell content. Sometimes — often, actually — it meant just giving publishers a bunch of money whenever a government would get mad, like the €60 million “Digital Publishing Innovation Fund” Google set up in France after a group of European publishers sued and settled with the search giant.

This “we care about publishers!” dance is a staple of Silicon Valley. Apple briefly promised to save the news business with the iPad, convincing publishers around the world to build bespoke tablet magazines before mostly abandoning that project. Facebook remains in a perpetually whipsawing relationship with the media, too: it will promote stories in the News Feed only to later demote them in favor of “Meaningful Social Interactions,” then promise publishers endless video eyeballs before mostly giving up on Facebook Watch.

The platforms need content to keep users entertained and engaged; publishers need distribution for their content to be seen. At best, it’s a perfectly symbiotic relationship. At worst, and all too often, the platforms simply cajole publishers into doing whatever the platforms need to increase engagement that quarter.

For publishers over the last decade, chasing platform policies and supporting new products has become the only means of survival. “That’s the sort of tradeoff publishers are used to,” says one media executive who was involved with AMP in its early days. “Do it this way and you’ll get an audience.” But while publishers had long been wary of the tendency of Big Tech companies to suck up ad dollars and user data, they had seen Google as something closer to a partner. “You meet with a Facebook person and you see in their eyes they’re psychotic,” says one media executive who’s dealt with all the major platforms. “The Apple person kind of listens but then does what it wants to do. The Google person honestly thinks what they’re doing is the best thing.”

Phones potentially made all of this harder. For Google, search was harder to monetize on smaller screens with correspondingly fewer ad slots, and it was also, in some ways, an inferior product. That was largely for reasons out of Google’s control: many of the mobile websites Google sent users to were slow, covered in autoplaying video and unclosable ads, and generally considered a worse experience than the apps that publishers and media organizations had been focused on for the last several years. Google executives talked often internally about being ashamed of sending people to some websites.

But the big reason for consternation within Google was a company just a few miles down the road. If mobile was going to win, then so was Facebook. This was pre-metaverse Facebook, of course, when the company was a booming social networking giant, a thriving ad business, and a mobile success story: Facebook reported in April 2015 that it had 1.25 billion mobile active users on its products every month and that nearly three-quarters of its advertising revenue was coming from mobile.

Facebook was, to most users, a mobile app, not a website. Google can’t crawl a mobile app. And it got worse: most content on Facebook was shared among friends and followers and, as such, was completely opaque to Google, even on the web. For most of its existence, Google could take for granted that the vast majority of the internet’s content would be open and searchable. As Facebook grew, and social media in general began to replace blogs and forums, it felt like Google’s view of the internet was shrinking.

Meanwhile, Mark Zuckerberg made no secret of Facebook’s ambitions to take on Google, to take on everybody, really: the CEO’s aim was to turn Facebook into a platform the size of the internet. But he wanted to win at search, too, first by better indexing Facebook content and then by ultimately doing the same to the web. “There’s a lot of public content that’s out there that any web search engine can go index and provide,” he told investors in the spring of 2015.

The simplest thing to do would be to beat Facebook at its own game. But Google had already tried that — a few times. Seeing the rise of social networking, and the threat that friend-sourced content posed to Google’s search-based business model, the company poured resources into the Google Plus social network. But it never caught on and, by 2015, was effectively on its last legs. There was simply no way to out-Facebook Facebook.

Around the same time, Facebook also launched Instant Articles, a Facebook-specific tool that turned web articles into native posts on the platform. The pitch for Instant Articles was simple: they would speed up the News Feed, making it quicker to read stories so users didn’t have to suffer through the mobile web’s interminable load times and hideous pages. Instant Articles made some publishers nervous since it effectively loaded their content directly onto Facebook’s platform and gave the company complete control over their audiences. Some opted out entirely. But many others saw too big a potential audience to ignore and developed tools to syndicate their stories as Instant Articles.

A few months later, Apple launched Apple News, its own proprietary article format and app for displaying publisher content. At its own developer conference that spring, Apple’s then-VP of product management, Susan Prescott, made a case that sounded eerily like Facebook’s. “The articles can come from anywhere,” she said, “but the best ones are built in our new Apple News format.” Software chief Craig Federighi followed up with a backhanded swipe at Google News and Facebook. “Unlike just about any other news aggregation service we’re aware of on the planet, News is designed from the ground up with your privacy in mind.”

The media industry, collectively, bought the hype around what came to be known as “distributed publishing.” “Is the media becoming a wire service?” asked Ezra Klein at Vox in a piece that kicked off a million AMP and Instant Articles projects. “My guess is that within three years, it will be normal for news organizations of even modest scale to be publishing to some combination of their own websites, a separate mobile app, Facebook Instant Articles, Apple News, Snapchat, RSS, Facebook Video, Twitter Video, YouTube, Flipboard, and at least one or two major players yet to be named,” he wrote. “The biggest publishers will be publishing to all of these simultaneously.”

To some at Google, all of this looked a lot like a few proprietary platforms conspiring to kill the open web. Which might kill Google. Search — and its behemoth ad business — only worked if the web was full of open, indexable pages that its search crawlers could see and direct users to. Instant Articles and Apple News also gave those platforms control over the advertising on their pages, which threatened AdWords, another of Google’s largest revenue streams.

Over the course of 2015, as Google debated internally how best to respond, the company also hosted a clubby “unconference” called Newsgeist. Google held these periodically in partnership with the Knight Foundation as a way to work with and hear from the news industry. Jeff Jarvis, a CUNY professor and media critic, had been agitating at Newsgeist events for years for Google to build what he called “the embeddable newspaper,” a way for news articles to be displayed around the internet in much the same way a YouTube video can be embedded practically anywhere. Gingras also liked the idea; he was a big believer in what he called “portable content.”

In May 2015, at the first Newsgeist Europe in Helsinki, Finland, Instant Articles was a topic of much conversation. Jarvis, in particular, saw Instant Articles as a useful technical prototype with all the wrong attributes: it was closed off, only worked on one platform, and accrued no value back to publishers. Jarvis spent time at the conference arguing for someone — presumably Google — to build a better alternative.

Ultimately, what the company built was AMP. Done right, it could bring the same speed, simplicity, and design to the entire internet — without closing it off. To lead the effort, Google designated two people who had come from Google Plus: David Besbris, who had led the company’s wayward social networking effort, and Malte Ubl, who helped to build the social network’s technical infrastructure.

At least, that’s how Google described it publicly. According to interviews with former employees, publishing executives, and experts associated with the early days of AMP, while it was waxing poetic about the value and future of the open web, Google was privately pressuring publishers into handing over near-total control of how their articles worked and looked and monetized. And it was wielding the web’s most powerful real estate — the top of search results — to get its way.

“[Google] came to us and said, the internet is broken, ads aren’t loading, blah blah, blah. We want to provide a better user experience to users by coming up with this clean standard,” says one magazine product executive. “My reaction was that the main problem is ads, so why don’t you fix the ads? They said they can’t fix the ads. It’s too hard.”


Faster, faster, faster

Before it was called AMP, Google’s nascent web standard was known as PCU — Portable Content Unit. The team of Googlers building the new format had only one goal, or at least only one that mattered: make webpages faster. There were lots of other goals, like giving publishers monetization and branding options, but all of that was secondary to load times. If the page appeared instantly after a user tapped the link in search results, AMP would feel as instant and native as an app. Nothing else mattered as much as speed.

Google had tried in the past to incentivize publishers to make their own webpages faster. Load times had long been a factor in how the search engine ranked sites on desktop, for instance, and load times were presented front and center in Google Analytics. Google even built a tool called “Instant Pages” that tried to guess which sites users would click on and pre-render those pages so they’d appear more quickly.

And yet, the mobile web still, in a word, sucked. “Publishers, frankly, then — and to a great degree still now — considered mobile web traffic to be essentially junk traffic,” says Aron Pilhofer, a longtime media executive and now a journalism professor at Temple University. Many mobile websites were completely separate entities from their desktop pages, prefaced with “mobile.” or “m.” in their URLs. Publishers compensated for small screens with more ads per page, and the whole industry was in the midst of an unfortunate obsession with autoplaying video. Phone browsers were bad; the webpages were even worse.

Google didn’t have great tools for understanding mobile pages at the time, so it couldn’t easily issue the same “we just like fast pages” edict. It could have developed those metrics and then pushed publishers to update their sites to meet Google’s bar for speed, but there simply wasn’t time. Internally, Google felt it needed a solution immediately. Competition was here. AMP was a blunt object, but it was designed to get results quickly. AMP’s purpose, Google’s Gingras said at the 2015 launch event, “is about making sure the World Wide Web is not the World Wide Wait.”

AMP was, in many ways, a step backward for the web. Nieman Lab’s Joshua Benton noted at the time that Google’s sample AMP-powered webpages “look a lot like the web of, say, 2002, shrunk down to a phone screen.”

But it was fast. And to Google, that was all that mattered.

The growth hack to end all growth hacks

For AMP to work, Google knew it needed to get broad adoption. But simply asking publishers to support a new standard wouldn’t be easy. Publishers were already neglecting their mobile websites, which was the whole problem, and they weren’t likely to sign up to work on them just for Google’s benefit.

The team tried a few things to get more AMP content, like auto-converting stories from the Google Play Newsstand and elsewhere. WordPress began working on a plug-in that made creating AMP pages as easy as checking a box every time you published a post. One way some people in and outside of Google thought of AMP was similar to RSS — another syndication format, another box to click next to the one that tweets the story and posts the top image on Instagram. But Google worried that this approach would give all AMP pages a same-y, boring look and reader experience. What Google really needed was for publishers to not just support AMP but also embrace it.

The team quickly landed on a much more powerful growth hack: Google’s search results. It would be easy for Google to factor AMP into the way it ranks search results, to effectively tell publishers that AMP-powered pages would be higher on the list, and anything else would be pushed down the page. (It had previously done something similar with HTTPS, another push toward a new web standard.) Publishers, most of them existentially reliant on the fire hose of Google traffic, would have no choice but to give in and use AMP.

Such an aggressive move would be a bad look for Google, though, not to mention a potentially anti-competitive one, especially given that the company has always maintained it cares about a webpage’s “relevance” above all else. But there was a middle ground, or maybe a loophole: a relatively new product in Google search known as the Top Stories carousel, which showed a handful of horizontally scrolling news stories at the top of some search results pages. They weren’t part of the search results proper, the “10 blue links” Google is known and scrutinized for. They were something separate, so the rules could be different.

Google said from the beginning that AMP would not be a factor in regular “10 blue link” search results. (Several publishing executives say they’re still not sure if that was true: “when Google said AMP doesn’t matter, no one believes them,” one says. The company denies that it has ever been a factor in search result rankings.) But only AMP pages would be included in the carousel, with a lightning bolt in the corner to signify that tapping that card would offer the instant loading experience users were getting from Instant Articles and Apple News.

That carousel took up most of the precious space on a phone screen, which made Top Stories some of the most important real estate on the mobile web. And so, the growth hack worked. When AMP launched in early 2016, a who’s who of publishers had signed up to support the new format: The Guardian, The Washington Post, BuzzFeed, the BBC, The New York Times, and Vox Media, The Verge’s parent company, all quickly began developing for AMP. Others would join in the months that followed.

But many of those publishers weren’t necessarily signing up because they believed in AMP’s vision or loved the tech. Far from it. Google’s relentless focus on page speed, and on shipping as quickly as possible to thwart Facebook and Apple, meant the first versions of AMP couldn’t do very much. It didn’t support comments or paywalls, and the restrictions on JavaScript meant publishers couldn’t bring in third-party analytics or advertising. Interactive elements, even simple things like tables and charts, mostly didn’t work.

AMP, it turned out, wasn’t even that fast. Multiple publishers ran internal tests and found they were able to make pages that loaded more quickly than AMP pages, so long as they were able to rein in the ad load and extra trackers. It was much harder to build slow pages on AMP — in part because AMP couldn’t do very much — but there were lots of other ways to build good pages.

And even if AMP pages did seem to load faster from search results, “it felt faster because Google cheated,” says Barry Adams, a longtime SEO consultant. When publishers built AMP-powered pages, they submitted them to Google’s AMP Validator, which made sure the page worked right — and cleared it for access to the carousel. As it was checking the code, Google would grab a copy of the entire page and store it on Google’s own servers. Then, when someone clicked on the article in search results, rather than loading the webpage itself, Google would load its stored version. Any page pre-rendered like that would load faster, AMP or otherwise.

The AMP cache made it harder for publishers to quickly update their content — and made it nearly impossible for them to understand how people were using their sites. On cached pages, even the URL began with “google.com,” rather than the publisher’s own domain. It was as if Google had subsumed the entire publishing industry inside its office park in Mountain View.
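The URL rewriting the cache produced can be sketched in a few lines. The `amp_cache_url` helper below is hypothetical, but it mimics the viewer URLs readers actually saw in search results, where a publisher’s page was served from a google.com path (with `/amp/s/` marking an HTTPS origin); it illustrates the pattern rather than Google’s actual implementation.

```python
from urllib.parse import urlparse

def amp_cache_url(publisher_url: str) -> str:
    """Illustrative sketch: map a publisher's URL onto the google.com-hosted
    AMP viewer URL readers saw in search results. The "/amp/s/" prefix
    indicated an HTTPS origin; plain HTTP pages used "/amp/"."""
    parts = urlparse(publisher_url)
    scheme_marker = "s/" if parts.scheme == "https" else ""
    return f"https://www.google.com/amp/{scheme_marker}{parts.netloc}{parts.path}"

# A reader tapping this story in the carousel never left google.com:
print(amp_cache_url("https://www.example.com/2016/02/story.amp.html"))
```

The upshot, as publishers discovered, was that their own domain disappeared from the address bar entirely.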

Google kept promising publishers that this restrictive, Google-controlled version of AMP was just version one, that there was much more to come. But the carousel, that all-important new space in search results, required AMP from the beginning. “The problem was that when Google launched it, they also said, ‘You have to use AMP. We built a standard, it’s shit, it’s terrible, it’s not ready, it does only like a quarter of what you need it to do, but we need you to use it anyway because otherwise we’re just not going to show your articles in mobile search results anymore,’” Adams says. “And that is what ruffled everybody’s feathers.”

“The audience people hated it because it was against audience strategy,” says one former media executive who worked with AMP. “The data people hated it because it was against advertising and privacy strategy. The engineers hated it because it’s a horrendous format to work with… The analysts hated it because we got really bad behavioral data out of it. Everyone’s like, ‘Okay, so there’s no upside to this — apart from the traffic.’”

On top of that, the traffic was worth less because it had fewer and more limited ads. “Every publisher experienced this — the AMP audience is less valuable. It’s millions of pennies and not having any dollars,” one executive says. “An AMP article earned 60 percent of what a [standard] article earned… It’s low enough to be noticeable. You were just playing the game of ‘if I didn’t have all this traffic, would I make more money?’”

“Google did not have an answer for the revenue gap — there was a lot of hand-waving, a lot of saying they would work with us,” says another executive. “Google on AMP was like Google on every product — lots of fanfare in the beginning, lots of grand plans, and then none of those plans ever saw the light of day.”

But the pageviews, in many cases, were enough to outweigh the costs. It’s almost impossible to overstate how important Google traffic is to most publishers. The analytics company Chartbeat estimated this year that search accounts for 19.3 percent of total traffic to websites, a number that doesn’t even include products like Google News and the news feed in the Google app, both of which also account for a huge portion of many publishers’ traffic. Google, as a whole, can account for up to 40 percent of traffic for even the largest sites. Disappearing from Google is life-and-death stuff.

Bigger media companies, those that could employ product and engineering staff of their own, could sometimes hack around AMP’s limitations — or, at the very least, deal with them without affecting the rest of the company’s business. Some big publishers came to see AMP as nothing more than some additional work required for a distributor. But even many smaller publishers, without the staff to manage the technical shortcomings or the resources to maintain yet another version of their website, still felt they had no choice but to support AMP.

As long as anyone played the game, everybody had to. “Google’s strategy is always to create prisoner’s dilemmas that it controls — to create a system such that if only one person defects, then they win,” a former media executive says. As long as anyone was willing to use AMP and get into that carousel, everyone else had to do the same or risk being left out.

Many within Google continued to see AMP as a net good, a way to make the web better and to keep it from collapsing into a few walled gardens. But to most publishers, AMP was, at best, just another app to send stuff to. “We didn’t see it as any different from building on Android or building on iOS,” one former media executive says. “It was this way to deliver the best mobile experience.” Supporting AMP was like supporting Apple News, Facebook Instant Articles, or even maintaining RSS feeds. It was just more work for more platforms.

That’s why the Top Stories carousel felt like a shakedown to so many publishers. Google claimed it was merely an incentive to do the obviously right thing and a nice boost in the user experience. But publishers sensed an unspoken message: comply with this new format or risk your precious search traffic. And your entire business.


Good governance

Despite all the issues with AMP’s tech and misgivings about Google’s intentions, the new format was a success from the very beginning. By December 2016, less than a year after its official launch, an Adobe study found that AMP pages already accounted for 7 percent of mobile traffic to “top publishers” in the US and grew 405 percent in just the final eight months of the year. Microsoft was planning to use AMP in the Bing app for iOS and Android. Twitter was looking into using it as well.

From the beginning, Google had proclaimed loudly that AMP was not a Google product. It was to be an open-source platform, all its source code available on GitHub for anyone to fork and edit and use to their own ends. AMP’s success was the web’s success, not Google’s.

In reality, Google exerted near-total authority over AMP. According to the 2020 antitrust lawsuit against Google, the company adopted a “Benevolent Dictator For Life” policy, and even when it transferred the AMP project to the OpenJS Foundation in 2019, it remained very much in charge. “When it suited them, it was open-source,” says Jeremy Keith, a web developer and a former member of AMP’s advisory council. “But whenever there were any questions about direction and control… it was Google’s.”

Several sources told me stories of heated arguments about the future of the web that ended in Google employees awkwardly reading lawyer-approved statements about things being open and opt in — and Google then getting its way. After a debate about the cache, and the data it gave Google, “they started bringing a whole bunch of people no one had ever heard of to committee meetings to say how wonderful the cache was,” one media exec remembers. And whenever there was debate about new features or the roadmap, Google always won.

Over time, AMP began to support more ad networks — or, rather, more ad networks began to do the work required to support AMP’s locked-down structure. But many still felt the best experience was reserved for Google’s own ad tech. That fact has become the most contentious part of AMP’s history — and the reason it wound up in multiple antitrust lawsuits against Google. The suits allege, among other things, that Google used AMP as a way to curtail a practice called “header bidding,” which allows publishers to show their inventory to multiple ad exchanges at once in order to get the best price in real time. “Specifically,” the 2020 lawsuit says, “Google made AMP unable to execute JavaScript in the header, which frustrated publishers’ use of header bidding.” Google spokesperson Meghann Farnsworth said in a statement that “AG Paxton’s claims about AMP and header bidding are just false.” Most of the AMP-related provisions in that 2020 lawsuit were thrown out by a district court in 2022, which found that the case “does not plausible [sic] allege AMP to be an anticompetitive strategy.”
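The core idea of header bidding is simple enough to sketch. In production it runs as JavaScript (Prebid.js is the best-known implementation) in the page’s `<head>`, which is exactly the kind of custom script AMP’s restrictions blocked; the minimal Python below, with hypothetical exchange names, just shows the auction logic: solicit bids from several exchanges at once and hand the highest to the ad server, rather than letting a single exchange fill the slot.

```python
def run_header_auction(bids):
    """Pick the winning bid for an ad slot: filter out no-fills
    (zero CPM) and return the highest-paying exchange's bid."""
    valid = [b for b in bids if b["cpm"] > 0]
    if not valid:
        return None  # no exchange filled the slot
    return max(valid, key=lambda b: b["cpm"])

# Hypothetical responses gathered in parallel from several exchanges:
bids = [
    {"exchange": "ExchangeA", "cpm": 1.20},
    {"exchange": "ExchangeB", "cpm": 2.45},
    {"exchange": "ExchangeC", "cpm": 0.0},  # no fill
]
winner = run_header_auction(bids)  # ExchangeB wins the slot
```

Because the highest of several simultaneous bids usually beats a single exchange’s price, blocking this pattern is why publishers saw AMP inventory as worth less.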

As AMP caught on, Google’s vision for the product became even more ambitious. The company started to suggest that, rather than maintain a website and a separate set of AMP pages, maybe some publishers should build their entire site within AMP. On launch day in October 2015, the AMP project website proudly proclaimed that it was “an architectural framework built for speed.” By the end of 2017, AMP was promising to enable “the creation of websites and ads that are consistently fast, beautiful and high-performing across devices and distribution platforms.” It was no longer just articles, and it was no longer just mobile. It was the whole web, rewritten Google’s way and forever compatible with its search engine.

“I 100 percent believe that Google would have loved to have said AMP is the future of HTML,” Eden says. “I have no doubt that the long-term play was to say, ‘We’re Google. This is a new language for the web. If you don’t like it, you’re not on the front page of Google anymore.’”

Ultimately, though, Google’s grandest ambitions didn’t come to pass. Neither did its smallest ambitions, really. As publishers continued to thrash against AMP’s constraints, and as scrutiny of Google ramped up, the company began to pull back.

The non-standard

In 2021, Google announced it would start featuring all pages in the Top Stories carousel, not just AMP-powered ones. Last May, Google let some local news providers bypass this requirement for covid-related stories. As soon as publishers didn’t have to use AMP anymore, they mostly stopped. The Washington Post abandoned it the same year, and a litany of others (including Vox Media) spent 2022 looking for ways off the platform. Even now, though, some of those publishers say they’re nervous about traffic disappearing. Google remains such a black box that it can be hard to trust the company, even as it continues to say it doesn’t factor AMP into results.

The true irony of AMP is that even as publishers are jumping off the platform, many also acknowledge that, actually, AMP is pretty good now. It supports comments and more interactive elements; it’s still fast and simple. Now that it’s run by the OpenJS Foundation and separated from the search results incentive, it appears to be on track to become a genuinely useful project. It’s not likely to replace HTML anytime soon, but it could help usher in the idea of portable and embeddable content that Jarvis and Gingras imagined all those years ago. Developers can even use AMP to make web-based projects that feel like Instagram Stories or the TikTok feed. “AMP potentially could have been — in some ways, I still think possibly could be — a really interesting way of syndicating content that takes that middle person out of the mix,” Pilhofer says.

Everyone I spoke to thinks Core Web Vitals is a good and valuable idea, too. Speed matters more than ever; how you hit the mark doesn’t matter as much.

One source I spoke to wondered aloud if the internet might be a different place if the first versions of AMP had actually been good. Would publishers have thrown even more resources into supporting the format, giving Google even more control over how the web works — and, as the antitrust lawsuits allege, how it makes money? It certainly seems possible.

But one thing proved undeniable: for Google, there was simply no coming back from the first days of AMP, when publishers felt like the company was making grand pronouncements about saving the web while also force-feeding them bad products that served Google’s ends and no one else’s. Even Facebook Instant Articles and Apple News, constrained and problematic as they were, felt optional. AMP didn’t.

“It maybe had good intentions about making the mobile web better,” Adams says, “but went about it in probably one of the worst ways you could have imagined. It was a PR nightmare.”

One of the smartest and most profitable things Google ever did was align itself with the growth of the web. It offered useful free services, used projects like Fiber and Android to help get more people online, and made the sprawling internet a little easier for people to navigate. As the web grew, so did Google, both to great heights. But when the web was threatened by the rise of closed platforms, Google mortgaged many of its ideas about openness in order to make sure the profits kept coming. “And as a long-term effect, it probably woke a lot of news publishers up to the fact that Google is maybe not a benign entity,” Adams says. “And we need to take their dominance a bit more seriously as a news story in its own right.”

In response to this story, Google spokesperson Meghann Farnsworth said the company “will continue to collaborate with the industry to build technology that provides helpful experiences for users, delivers value to publishers and creators and helps contribute to a healthy ecosystem and the open web.”

Google is still the web’s biggest and most influential company. But across the publishing industry, it’s no longer seen as a partner. AMP ultimately neither saved nor killed the open web. But it did kill Google’s good name — one not-that-fast webpage at a time.


Casey Newton and Nilay Patel contributed reporting.

Are Mainframes an Indicator of Banking Reliability?

There is a decent chance that banks using mainframes prioritize low risk, while banks that do not may be more willing to take unreasonable risks. With people currently concerned about where to put their money safely, one of the questions you should ask is, "Do you use a mainframe for your mission-critical applications?" The post Are Mainframes an Indicator of Banking Reliability? appeared first on TechNewsWorld.

How to get a better mobile phone deal in the UK

With above-inflation increases, tips and tricks to find the right plan are even more important

There’s a dizzying array of mobile phone tariffs, and with many providers recently imposing above-inflation increases, it is even more important to choose the right deal. So how can you navigate the networks to get a plan that is right for you? What are the top tips for saving money?

Continue reading...

dimanche 7 mai 2023

Twitter Criticized for Allowing Texas Shooting Images to Spread

Graphic images of the attack went viral on the platform, which has made cuts to its moderation team. Some users said the images exposed the realities of gun violence.

A day on the Gateway 14

I spent a day on a $279, bright blue, cow-spotted 14-inch laptop, and I’m seriously impressed by how much it has to offer.

I’ve just finished a day on the Gateway 14, one of the most talked-about laptops in the budget Windows space. And folks... I’m very impressed.

We bought this at Walmart for $279 (discounted from $360 since it’s a couple generations old), and yes, it has the legendary cow spots on its lid. The Gateway brand from the 1990s that we all know and love is now licensed by Acer and has become a Walmart-exclusive brand. The cow moos on. Mooooo.

The model I’ve been using includes an Intel Core i5-1135G7 (a chip that powered many of 2020’s most premium ultraportable devices, including the Samsung Galaxy Book, the Acer Swift 5, the Dell XPS 13, and the Lenovo Yoga 9i). There is 16GB of RAM and 512GB of storage. For $279, that’s a very solid deal and probably close to the best specs you can get for that price. The biggest compromise is a mediocre touchpad, but that’s mitigated by a robust port selection that should allow you to plug in a mouse with no problem.

The chassis is also the sturdiest and best-built one I’ve ever seen from a Windows laptop, with no flex in the keyboard or screen and impressive fingerprint rejection. There’s even an empty drive slot on the bottom (fastened with two screws), so you can stick in however much storage you need. Oh, and it’s blue. Blue! How fun is that? Gateway also put the little Microsoft and Intel stickers on the bottom of the device, so the palm rests are a fully untarnished blue. It’s a nice, bold look. I approve.

I opened the Gateway up just before 9AM to start work. It shipped with real Windows, not S-mode. I noticed immediately that there was a lot of stuff preinstalled. Some of it was helpful — I didn’t have to download Spotify! — but there were also games like Solitaire pinned to the taskbar, as well as some backends to browser games like Forge of Empires and Elvenar on the desktop. In the name of Marie Kondo, I cleared all of that out.

It was a fairly uneventful morning and afternoon; I mostly spent it writing in Chrome, with around a dozen tabs open and Spotify occasionally streaming in the background. At first, it seemed a bit laggy, and this was apparently because it really, really needed to be updated (the unit has been sitting around in our review closet for a bit since its purchase). I tried to put this off because I’m a procrastinator that way, but the device eventually took matters into its own hands: it froze, crashed, and began updating itself. Fair enough. I guess I deserved that.

Once the update was sorted, I resumed my workload. And reader, the Gateway is fast. It sailed through the day without breaking a sweat. I didn’t once hear a single decibel of fan noise; I could make out a teensy bit of coil whine if I put my ear to the keyboard, but that was it. Performance was visibly smoother and faster than that of our slightly-more-expensive HP 14 unit, which has a weaker processor and a quarter of the Gateway’s RAM. I also slightly prefer the Gateway’s screen, which is 1920 x 1080 and just has a bit of a more modern look to it. I was working at 20 to 30 percent brightness indoors with no glare.

The Gateway 14 seen from the back.
This logo gave me flashbacks.

Audio was tinny, with weak percussion and no bass, but had decent volume to it, and I could certainly hear better than I could on the HP 14. The microphones, on the other hand, are functional but not very good — we tested them on The Vergecast (in, admittedly, a very unfavorable environment), so check out that episode to hear what they sound like firsthand.

Ports on the left side of the Gateway 14
Look how handy!
The Gateway 14 keyboard seen from above.
That’s a fingerprint reader in the top left.

I started the day with the Gateway fully charged, and the unit almost made it through the full day unplugged, dying in the late afternoon around the seven-and-a-half-hour mark. That solidly beats the HP 14, as well as... quite a few more expensive Windows laptops I’ve tested recently. I’ll take it.

My post-work activity was the Gateway’s ultimate test. I spent the evening working on a manuscript and researching potential agents to submit that manuscript to. This was an involved affair, and I had probably 40-50 Chrome tabs open — lists of various agencies, their requirements, their blogs, and other such things — and I was resizing, swapping, and clicking in and out of all of them very fast. No trouble for the Gateway, which zipped through it all.

I also had a whole bunch of my own Google Docs open, including the manuscript itself, which was well over 300 pages. I have to be careful which computers I open this document on because Docs files of this size get very unwieldy and slow very fast. This was also no trouble for the Gateway 14, which loaded the whole thing about as quickly as any Windows computer I’ve ever used and never once froze or lagged while editing it.

The Gateway’s keyboard isn’t backlit, but I actually had no problem working on it late into the night with my lights dimmed. The bright white text against the dark black keys provided enough contrast that I could make out what I needed to in the dark. I actually much prefer this experience to that of using laptops that are backlit but not very well (which is often what you get if you buy a backlit device in this price range).

The ports on the right side of the Gateway 14.
I really like the USB-A on each side.
The Gateway 14 seen from the front displaying The Verge homepage.
If you’re looking for a Windows laptop under $300, this is my current (and bluest) recommendation.

Now, there is one significant downside that did hold me up. This is one of the worst touchpads I have ever used. The size is not an issue; it feels roomier than the tiny one on the HP 14. However, the click is very difficult. You really have to shove the thing down. It’s quite loud and feels like a chore. I’m also not quite sure what was going on with the actuation points, but there were times when I would click in a certain area at a certain angle and feel like I was clicking multiple times.

But most annoyingly, clicking and dragging doesn’t quite work. There seemed to be a hard cap on how much text I could highlight before the touchpad just decided it was done; it also often took multiple attempts for a click-and-drag to register at all, which really screwed up my manuscript editing process.

Now, on a laptop that’s even slightly more expensive, this issue would be enough to tank the Gateway’s score. I’m slightly more forgiving of it on this sub-$300 laptop because the extensive port selection (also better than the HP 14’s) will make it very easy to plug a mouse into it. In particular, the fact that there are USB-A ports on both sides will make it quite convenient to stick peripherals in, regardless of which hand you use your mouse with. I don’t use peripherals when reviewing laptops, but you should plan to keep a mouse handy if you buy this. (There’s also a lock slot, an HDMI (strangely upside-down, but nevertheless), a USB-C, a microSD (!), and a headphone jack.)

Given the fact that my two biggest issues, the touchpad and the microphones, can both be solved by external peripherals, I really did not have too much to complain about here. If you don’t already own a mouse or microphone and will need to buy them, this device may lose some of its value — but if you already have them on hand (or just won’t be needing to use the Gateway for video calls too often), I really think this is one of the best deals you will find on a Windows laptop. Even with its problems, this seems like it could easily be (at least) several hundred bucks more expensive. Plus, it’s blue! Did I mention it’s blue?

I’m so serious when I say I’m actually thinking of buying one of these for myself. Come on — it’s blue!

The digital media bubble has burst. Where does the industry go from here?

Buzzfeed, Vice, Gawker and Drudge Report are all traffic-war casualties, but they succeeded in shaking up the media landscape

Toward the end of Traffic, a new account of the early rock n roll years of internet publishing, Ben Smith writes that the failings of Buzzfeed News had come about as a result of a “utopian ideology, from a kind of magical thinking”.

No truer words, perhaps, for a digital-based business that for a decade paddled in a warm bath of venture capital funding but never fully controlled its pricing and distribution, a basic business requirement that applies to information as much as it does to selling lemonade in the school yard or fossil fuels.

Continue reading...

The brief Age of the Worker is over – employers have the upper hand again

The pandemic ushered in an era of ‘quiet quitting’ and ‘bare minimum Mondays’ but workers have since lost leverage

It seems that it was only yesterday that the media was filled with stories about workers calling the shots. There were the work-from-homers who refused to come back to the office after the pandemic was long over. There were the “quiet quitters” who proudly – and publicly – admitted that, even though they were collecting a paycheck from their employer, they weren’t doing much at all during the day except looking for another job. And then there’s the group of workers who were advocating for “bare minimum Mondays” because apparently, a five-day workweek was just too much to bear.

During the past few years, we’ve heard employees publicly demand unlimited paid time off, four-day workweeks, wellness sabbaticals, gigantic bonuses to switch jobs and even “pawternity leave” – getting time off when you adopt a puppy. Facing labor shortages, customer demands and supply chain headaches, most employers caved. The Age of the Worker blossomed.

Continue reading...

‘We’ve discovered the secret of immortality. The bad news is it’s not for us’: why the godfather of AI fears for humanity

Geoffrey Hinton recently quit Google warning of the dangers of artificial intelligence. Is AI really going to destroy us? And how long do we have to prevent it?

The first thing Geoffrey Hinton says when we start talking, and the last thing he repeats before I turn off my recorder, is that he left Google, his employer of the past decade, on good terms. “I have no objection to what Google has done or is doing, but obviously the media would love to spin me as ‘a disgruntled Google employee’. It’s not like that.”

It’s an important clarification to make, because it’s easy to conclude the opposite. After all, when most people calmly describe their former employer as being one of a small group of companies charting a course that is alarmingly likely to wipe out humanity itself, they do so with a sense of opprobrium. But to listen to Hinton, we’re about to sleepwalk towards an existential threat to civilisation without anyone involved acting maliciously at all.

Continue reading...


samedi 6 mai 2023

I’m glad you’ve bought an electric vehicle. But your conscience isn’t clean | John Naughton

First, you’ve got to drive a long way before you overcome your EV’s embedded carbon debt. And then there’s the trouble with the minerals in its battery…

So you’ve finally taken the plunge and bought an electric vehicle (EV)? Me too. You’re basking in the warm glow that comes from doing one’s bit to save the planet, right? And now you know that smug feeling when you are stuck in a motorway tailback behind a hideous diesel SUV that’s pumping out particulates and noxious gases, but you’re sitting there in peace and quiet and emitting none of the above. And when the traffic finally starts to move again you notice that the fast lane is clear and you want to get ahead of that dratted SUV. So you put your foot down and – whoosh! – you get that pressure in the small of your back that only owners of Porsche 911s used to get. Life’s good, n’est-ce pas?

Er, up to a point. True, there’s nothing noxious coming out of your exhaust pipe, because you don’t have one; and the electric motors that power your wheels certainly don’t burn any fossil fuel. But that doesn’t mean that your carbon footprint is zero. First of all, where did the electricity that charged that big battery of yours come from? If it came from renewable sources, then that’s definitely good for the planet. But in most countries, at least some of that electricity came from non-renewable sources, maybe even – shock, horror! – coal-burning generating stations.

Continue reading...

Discord’s username change is causing discord

Illustration by Alex Castro / The Verge

A race to reserve usernames is kicking off on Discord.

Starting in the next couple of weeks, millions of Discord users will be forced to say goodbye to their old four-digit-appended names. Discord is requiring everyone to take up a new common platform-wide handle. For Discord, it’s a move toward mainstream social network conventions. For some users, though, it’s a change to the basics of what Discord is — a shift that’s as much about culture as technology.

Discord has historically handled usernames with a numeric suffix system. Instead of requiring a completely unique handle, it allowed duplicate names by adding a four-digit code known as a “discriminator” — think TheVerge#1234. But earlier this week, it announced it was changing course and moving toward unique identifiers that resemble Twitter-style “@” handles.

Co-founder and CTO Stanislav Vishnevskiy acknowledged the change would be “tough” for some people, but he said the discriminators had proven too confusing. He noted that over 40 percent of users don’t know their discriminator number, which leads to “almost half” of all friend requests failing to connect people to the right person, largely due to mistyped numbers.
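The old scheme is easy to sketch: uniqueness lived in the (name, discriminator) pair rather than in the name itself, which is how thousands of accounts could share one display name. Here’s a toy Python model of that idea — the class, method names, and tag-assignment logic are illustrative guesses, not Discord’s actual implementation:

```python
import random

# Toy sketch of Discord's old "discriminator" scheme: many users can share
# one name because each gets a four-digit suffix, and only the
# (name, discriminator) pair must be unique. Details here are illustrative.

class UsernameRegistry:
    def __init__(self):
        self.taken = set()  # set of (name, discriminator) pairs

    def register(self, name):
        # Try random four-digit tags until a free one is found.
        for _ in range(20000):
            tag = f"{random.randint(1, 9999):04d}"
            if (name, tag) not in self.taken:
                self.taken.add((name, tag))
                return f"{name}#{tag}"
        raise ValueError(f"no discriminators left for {name!r}")

reg = UsernameRegistry()
a = reg.register("TheVerge")
b = reg.register("TheVerge")  # same display name, different tag
print(a, b)
```

The failure mode Vishnevskiy describes follows directly from this design: a friend request needs the full pair, and mistyping the four digits sends it to the wrong account or to nobody at all.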

Over on Reddit, Vishnevskiy argued that the new handles wouldn’t even show up in the interface that often, since Discord will allow users to set a separate display name that’s not unique. In replies that drew more than 500 downvotes, he called the original system a “halfway measure” and rejected ideas like just adding more numbers to the end of a handle. “This was not a change that we decided to do lightly and have been talking about doing for many years, trying to avoid it if we could,” he posted.

During the change, Discord users will have to navigate a process that’s fraught with uncertainty and cutthroat competition. Users will need to wait for an in-app prompt telling them it’s their turn to select a new username, a process that will eventually roll out to all users over the course of “several months.” The company will assign priority based on users’ Discord registration dates, so people who have had their name “for quite a while” will have a better chance to get a desired one.

This raises a lot of obvious fears and thorny questions. Depending on who gets to set their usernames first, is there anything stopping people from taking over a particularly popular creator’s distinctive name? Should Discord prevent this by holding usernames for well-known creators, even if they’re not first in line? This is a problem for a lot of social networks, but unlike with some fledgling service attracting new users, all these people are already on Discord — in some cases, they’re probably even paid subscribers.

In a statement to The Verge, Discord said it would be trying to navigate the change gracefully for its best-known users. “We created processes for high-visibility users to secure usernames that will allow them to operate on our platform with minimal risk of impersonation,” said Kellyn Slone, director of product communications. “Users with a standing business relationship with Discord who manage certain partner, verified, or creator servers will be able to pick a username before other users in order to reduce the risk of impersonation for their accounts.”

A lot of Discord users will fall outside those boundaries. “As a content creator who has a relatively large fanbase — my handle is subject to username sniping by someone with an older account than me,” artist ZestyLemons, who uses Discord to connect with fans, writes to The Verge. “I am not a Discord partner, nor am I famous enough to obtain their recognition, so I will absolutely not have security with my public handle.” ZestyLemons noted that for people who do get desirable names, there’s the risk of being swatted or threatened to give it up — something that’s happened on Instagram and Twitter.

Discord users understand right now that there are a lot of accounts with very similar names, distinguished only by random numbers at the end. But unique names change that understanding. They encourage people to look for believable usernames — if somebody nabs the one and only @verge (our Twitter handle) on Discord, people could be more inclined to believe it’s us.

And this pushes people to treat their Discord names like part of a centralized identity — rather than, as many users have referred to them, something like a private phone number. It compels individuals to take a username that represents them elsewhere before someone else does. This links whoever they are on Discord back to their internet-wide identity, with all the potential downsides — like stalking or a simple feeling of exposure — that entails.

Despite fears about individual users impersonating each other, the risks for server moderation are less clear — and some Discord server admins told The Verge they weren’t worried. “I don’t think the change will be a big deal for admins + mods,” says Emily, an admin for a large Pokémon Go meet group on Discord. The server already asks people to set server-specific nicknames that match their Pokémon Go trainer name, so they’re not relying on discriminators to tell people apart.

But Emily isn’t a fan of the change. “It’s a bummer that Discord’s giving in to the usual social media norms,” they said. “Discriminators were kinda clever… it allows many people [to] share the same name without stressing over the ‘perfect’ username. Discord is a more personal sort of social media. You’re not publishing publicly into the ether — like Twitter or something — so having a clever memorable username doesn’t matter.”

SupaIsaiah016, an avid Geometry Dash player who also runs a small Discord server, agrees. “The current username and discriminator system worked perfectly fine, and allowed for thousands of people to have the same name on the platform overall,” SupaIsaiah016 writes to The Verge. “Sites that use handles and display names such as Twitter have very different reasons as to why they use those systems, as they are public social medias.”

Part of the problem is simply that Discord is asking millions of settled users to make a huge change to their online identity, and there’s no great way to do that without friction. But there’s also a sense that Discord’s old username style made it a different, albeit clunky, kind of social network. And to many users, that was a part of the appeal.

“Discord was originally meant to be a messaging app, to which a lot of content creators used to separate their online lives versus their real, personal lives,” ZestyLemons writes. Verge reader SpookyMulder put it another way in the comments of our original news post. “Discord has some sort of pseudo-identity culture,” SpookyMulder writes. “We tend to value the freedom of anonymity in Discord than your usual social media @username identities.”

Whether you’re a Discord user who wants to maintain a sense of anonymity or one who is all in for a more shareable and easily identifiable system, the race to get the username that’s right for you is on. But you’ll have to wait and see where the starting line is.

Twitter admits to ‘security incident’ involving Circles tweets

Feature allows users to set a list of friends and post tweets that only they are supposed to be able to read

A privacy breach at Twitter published tweets to the site at large that were never supposed to be seen by anyone but the poster’s closest friends, the company has admitted after weeks of stonewalling reports.

The site’s Circles feature allows users to set an exclusive list of friends and post tweets that only they can read. Similar to Instagram’s Close Friends setting, it allows users to share private thoughts, explicit images or unprofessional statements without risking sharing them with their wider network.

Continue reading...

VanMoof S5 e-bike review: nice but twice the price

$4,000 and a long list of features, but how many do you really need?

“Sometimes you have to kill your darlings” is a phrase designers use to justify the removal of elements they find personally exciting but that fail to add value.

The last time I heard it was in April 2022, when I rode pre-production versions of VanMoof’s new full-size S5 and smaller A5 electric bikes. The phrase was uttered by co-founder and CEO Taco Carlier to justify the removal of VanMoof’s iconic matrix display in favor of a new “Halo Ring” interface.

One year later, both e-bikes are now — finally — being delivered to customers, well after their original target of July 2022. The price has also risen to $3,998 / €3,498 from an early preorder price of $2,998 / €2,498, which was already much more expensive than VanMoof’s previous-generation e-bikes — the S3 / X3 — which launched at a rather remarkable $1,998 / €1,998 back in 2020.

Look, everything is more expensive in 2023, e-bikes included. But in terms of value for money, the $4,000 VanMoof S5 needs to be twice as good as the $2,000 S3, right? Otherwise the latest flagship e-bike from this former investment darling might be dead on arrival.

If only it was that simple.

Although the S5 and A5 pedal-assisted e-bikes still look like VanMoofs, with that extended top tube capped by front and rear lights, everything from the frame down to the chips and sensors has been reengineered. The company says that only a “handful of parts” were carried over from the previous models.

Here are some of the most notable changes:

  • New LED Halo Ring visual interfaces flanking both grips.
  • An integrated SP Connect phone mount (you provide the case) with USB-C charging port.
  • New almost completely silent Gen 5 front-hub motor with torque sensor and three-speed automatic e-shifter (the S3 / X3 had four-speed e-shifters).
  • New multi-function buttons have been added below the bell (next to left grip) and boost (next to right grip) buttons.
  • The boost button now offers more oomph with torque increasing to 68Nm from 59Nm.
  • The S5 frame, which had been criticized for being too tall, has been lowered by 5cm (2 inches) to better accommodate riders as short as 165cm (5 feet, 5 inches), while the A5 caters to riders as short as 155cm (5 feet, 1 inch) and allows for an easier step-through than the X3 it supersedes.

These join a very long list of standard features found on VanMoof e-bikes, like a well-designed and useful app, an integrated Kick Lock on the rear wheel, baked-in Apple Find My support, hydraulic disc brakes, muscular city tires, bright integrated lights, mudguards, and a sturdy kickstand. And because it’s VanMoof, you can also subscribe to three years of theft protection ($398 / €348), with guaranteed recovery or replacement within two weeks, and three years of maintenance ($348 / €298) in select cities.

VanMoof e-bikes now have integrated mounts and USB-C charging for your phone.

I picked up my dark gray (also available in light gray) VanMoof S5 loaner in late March but I ran into a few issues that delayed this review. These included intermittent connectivity failures between the app and bike, a Kick Lock that didn’t always disengage, and an alarm that would briefly trigger for no apparent reason. Those issues were all corrected by an over-the-air firmware (v1.20) update released in mid-April before I could even report them back to VanMoof support.

I have mixed emotions about this. In March the S5 and A5 started shipping in quantity — albeit eight months late — so you’d think they would have had time to sort out any issues in VanMoof’s new testing labs. That’s annoying given VanMoof’s history of initial quality issues and the assurances provided by the company that they wouldn’t be repeated. Then again, premium e-bikes from companies like VanMoof are increasingly complex machines, and seeing the company solve issues so quickly is commendable.

One issue that hasn’t been fixed is idle battery drain, but VanMoof tells me that a firmware update is coming to solve it in “two weeks” time. In my case, the issue caused the idle S5’s battery to drain from 86 percent to 65 percent over a period of 10 days. I generally lose about two percent charge each day whether I ride it or not.

Oh, and since I’ve installed several firmware updates in the last month (I’m currently at v1.2.4), I need to mention this: the S5 plays a jaunty little tune the entire time the firmware is being installed. It was cute at first, my daughter even offered a little dance to go with it. But it takes five to 10 minutes, and after the first time you hear it, it’s just annoying and there’s no way to turn it off.

Halo Ring in sunlight.
Halo Ring in low light.

Regarding new features, the Halo Rings next to each grip are the most visible change from previous VanMoofs — at least until you hit sunlight, where those weak LEDs wash out almost completely. The Halo Rings are meant to show speed, charge remaining, current pedal-assist power level, and more through a series of light bars and animations. Overall they’re fine, if gimmicky, but I don’t have much of a need for status information when bicycling. I also didn’t miss the old top-tube matrix display.

Riding the 23kg (50.7lb) VanMoof S5 feels like riding an S3, albeit with fewer shifts and a boost button that provides more torque when trying to pass someone or get an early jump off the line. VanMoof’s fifth-generation 250W motor is absolutely quiet, even at its top speed of 25km/h in Europe (which increases to 20mph in the US). And the new three-speed e-shifter does a better job of accurately finding the right gear than the S3’s four-speed e-shifter did. I still felt a few clinks and spinning pedals, especially when mashing down hard on the cranks in a hurry. But overall, the S5’s predictive shifting is much improved, especially when rolling along at a casual pace. Still, it’s not as smooth as the automatic shifters from Enviolo, for example, so there’s work yet to be done.

It’s a shame VanMoof doesn’t offer a simple belt-drive option for its e-bikes. That, coupled with the S5’s torquey boost button, would obviate the need for any gears in all but the hilliest environments. That’s why I’m a big fan of the premium belt-driven pedal-assist e-bikes sold by Ampler and Cowboy. They cost less than the S5 but are also more fun to ride thanks to their lighter front ends (both brands use rear-hub motors).

As for range, VanMoof says I should be able to get 60km in full power mode. However, I was only able to eke out 48.6km (30.2 miles) from the S5’s 487Wh battery when riding in full power mode and frequently pressing the boost button, in temperatures that ranged from freezing to 15C (59F). That’s about the same range I got when testing the VanMoof S3 — 47km (29.2 miles) — and its bigger 504Wh battery. The issue that currently causes the battery to lose energy overnight certainly didn’t help my range. The battery can be charged from zero to 100 percent in a very slow 6 hours and 30 minutes via the included charger.

I had been wondering how VanMoof would use the new multifunction buttons located just below the bell and boost buttons. The small button on the right (below the boost) lets you change the motor’s power level on the fly, while the one on the left (below the bell) makes your lights flash as a kind of warning to people around you. Both features tick boxes on marketing sheets but aren’t very useful in everyday use.

And since this is a VanMoof, the battery is integrated and can only be removed during maintenance. The company does have a new “click-on” version (no velcro!) of its extended battery coming for the S5 and A5 that you can charge inside the home.

The dark gray VanMoof S5: too complex for its own good?

I’ve had a nagging concern about VanMoof e-bikes for the last few years that I even mentioned in the S3 review. Are they getting too complex for their own good?

Electric bikes — especially commuter e-bikes like the S5 — are subjected to daily wear and tear in all kinds of weather. Even basic bikes are difficult to maintain when used every day, and VanMoof’s e-bikes are expensive rolling computers.

Honestly, I could do without the fancy automatic chain-driven three-speed shifter, superfluous multifunction buttons, programmable electronic bell, Halo Ring interface, Apple tracking, and perky sounds for startup, shutdown, and firmware updates. Give me one gear and a maintenance-free belt drive alongside that torquey boost button on a pedal-assisted e-bike that will get me back and forth to my office every day, no matter what, in style and without fail. But that’s not the S5.

Don’t get me wrong: the VanMoof S5 is a very good electric bike with a longer feature list than any other e-bike I can name. It also has one of the best networks of service hubs available in cities around the world. That’s important because most S5 / A5 parts are only available from VanMoof, so make sure you have a service center nearby if you’re interested in buying.

VanMoof, for all its early covid successes, ran into financial issues at the end of 2022, when it was forced to ask investors for an emergency injection of capital just to pay the bills. But the entire e-bike industry was struggling post-covid as sales plummeted and supply chains wobbled. Competitors like Cowboy and industry giant Stella also had to raise cash to handle overstock after e-bike inventories swelled.

As good as the S5 is, its feature set verges on gimmickry, perhaps in an attempt to justify the new, higher $3,998 / €3,498 price tag. The features are cute, entertaining, and, sure, a tad useful at first. But many just aren’t needed by regular commuters. The S5 has too many darlings, and not enough killing.

For context on that price, the VanMoof S5 is currently $500 more expensive than the comparable Cowboy 4 series and $1,000 more than the simpler Ampler Axel. Viewed in those terms, VanMoof’s pricing seems about right.

Is the S5 worth it? I’ll leave that for you to decide in this uncertain and inflationary world. While it’s definitely an improvement over the S3, it’s not twice the bike.

All photography by Thomas Ricker / The Verge

Backup Power: A Growing Need, if You Can Afford It

Extreme weather linked to climate change is causing more blackouts. But generators and batteries are still out of reach of many.

Google engineer warns it could lose out to open-source technology in AI race


Commonly available software poses threat to tech company and OpenAI’s ChatGPT, leaked document says

Google has been warned by one of its engineers that the company is not in a position to win the artificial intelligence race and could lose out to commonly available AI technology.

A document from a Google engineer leaked online said the company had done “a lot of looking over our shoulders at OpenAI”, referring to the developer of the ChatGPT chatbot.


Friday 5 May 2023

Bing, Bard, and ChatGPT: AI chatbots are rewriting the internet

Hands with additional fingers typing on a keyboard.
Álvaro Bernis / The Verge

How we use the internet is changing fast, thanks to the advancement of AI-powered chatbots that can find information and redeliver it as a simple conversation.

Big players, including Microsoft with its Bing AI (and Copilot), Google with Bard, and OpenAI with ChatGPT, are making AI chatbot technology previously restricted to test labs more accessible to the general public.

We’ve even tested all three chatbots head-to-head to see which one is the best — or at least which one gives us the best responses right now when it comes to pressing questions like “how do I install RAM into my PC?”

How do these large language model (LLM) programs work? OpenAI’s GPT-3 told us that AI uses “a series of autocomplete-like programs to learn language” and that these programs analyze “the statistical properties of the language” to “make educated guesses based on the words you’ve typed previously.”

Or, in the words of James Vincent, a human person, “These AI tools are vast autocomplete systems, trained to predict which word follows the next in any given sentence. As such, they have no hard-coded database of ‘facts’ to draw on — just the ability to write plausible-sounding statements. This means they have a tendency to present false information as truth since whether a given sentence sounds plausible does not guarantee its factuality.”
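Vincent’s “vast autocomplete” framing can be illustrated with a deliberately tiny sketch: a bigram model that predicts the next word purely from statistical counts over a made-up corpus. Real LLMs learn these statistics with neural networks over subword tokens at an entirely different scale, so this is only a toy demonstration of the principle, not how GPT actually works.

```python
from collections import Counter, defaultdict

# Toy sketch of the "autocomplete" idea: tally which word follows
# which in a tiny made-up corpus, then predict the most frequent
# follower. Nothing here reflects how GPT-class models are built.

corpus = "the cat sat on the mat and the cat slept".split()

# following[w] counts every word observed immediately after w.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str):
    """Return the most frequently observed next word, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" (seen twice after "the", vs. "mat" once)
```

Note that the model has no notion of truth: it answers with whichever word the counts favor, which is exactly why plausible-sounding output can be factually wrong.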

But there are many more pieces of the AI landscape still coming into play — and there are going to be problems — and you can be sure to see it all unfold here on The Verge.

PSA: you’ve got just one weekend left to claim Sony’s PlayStation Plus greatest hits bundle

Sony’s PS5 console.
Photo by Vjeran Pavic / The Verge

The deadline to claim the titles included in the PlayStation Plus Collection is fast approaching: you’ve got until May 9th to claim the dozen-plus PS4 classics included as part of a PlayStation Plus subscription. Eurogamer notes that the list currently includes some absolutely cracking games like Bloodborne, God of War, Ratchet and Clank, The Last Guardian, and Uncharted 4: A Thief’s End.

Sony launched the collection alongside the PlayStation 5 console back in 2020, allowing owners of its new console to catch up on some of the biggest games of the previous generation. It was a neat incentive to subscribe to PlayStation Plus, and a reminder of the wealth of PS4 titles available to play via backwards compatibility in the early days when native PS5 titles were still thin on the ground.

Obviously there are far too many games included in the collection to be able to play through them all by next Tuesday. But when announcing the discontinuation of the feature in February, Sony explicitly said that redeeming them before May 9th “will enable you to access those titles even after this date for as long as you remain a PlayStation Plus member.” So if you have an inkling you’d like to play Ratchet and Clank at some point after that date, best nab it now. Just to be safe.

If you don’t, several games on the list will continue to be available as part of Sony’s more expensive PlayStation Plus Extra and Premium tiers. These include Bloodborne, Resident Evil 7: Biohazard, Batman: Arkham Knight, The Last Guardian, God of War, Uncharted 4, and Fallout 4. The full PlayStation Plus catalog can be found on Sony’s site.

Microsoft is reportedly helping AMD expand into AI chips

US-TECHNOLOGY-LIFESTYLE-ELECTRONICS
AMD’s AI-capable MI300 data center APU is set to arrive sometime later this year. | Photo by ROBYN BECK/AFP via Getty Images

Microsoft has allegedly teamed up with AMD to help bolster the chipmaker’s expansion into artificial intelligence processors. According to a report by Bloomberg, Microsoft is providing engineering resources to support AMD’s developments as the two companies join forces to compete against Nvidia, which controls an estimated 80 percent market share in the AI processor market.

In turn, Bloomberg’s sources also claim that AMD is helping Microsoft to develop its own in-house AI chips, codenamed Athena. Several hundred employees from Microsoft’s silicon division are reportedly working on the project and the company has apparently already sunk around $2 billion into its development. Microsoft spokesperson Frank Shaw has, however, denied that AMD is involved with Athena.

We have contacted AMD and Microsoft for confirmation and will update this story should we hear back.

The explosive popularity of AI services like OpenAI’s ChatGPT is driving the demand for processors that can handle the huge computational workloads required to run them. Nvidia’s commanding market share for graphic processing units (GPUs) — specialized chips that provide the required computing power — allows it to dominate this space. There’s currently no suitable alternative, and that’s a problem for companies like Microsoft that need Nvidia’s expensive processors to power the various AI services running in its Azure Cloud.

Nvidia’s CUDA libraries have driven most of the progress in AI over the past decade. Despite AMD being a major rival in the gaming hardware industry, the company still doesn’t have a suitable alternative to the CUDA ecosystem for large-scale machine learning deployments. Now that the AI industry is heating up, AMD is seeking to place itself in a better position to capitalize. “We are very excited about our opportunity in AI — this is our number one strategic priority,” Chief Executive Officer Lisa Su said during the chipmaker’s earnings call Tuesday. “We are in the very early stages of the AI computing era, and the rate of adoption and growth is faster than any other technology in recent history.”

Su claims that AMD is well positioned to create partly customized chips for its biggest customers to use in their AI data centers. “I think we have a very complete IP portfolio across CPUs, GPUs, FPGAs, adaptive SoCs, DPUs, and a very capable semi-custom team,” said Su, adding that the company is seeing “higher volume opportunities beyond game consoles.”

AMD is also confident that its upcoming Instinct MI300 data center chip could be adapted for generative AI workloads. “MI300 is actually very well-positioned for both HPC or supercomputing workloads as well as for AI workloads,” said Su. “And with the recent interest in generative AI, I would say the pipeline for MI300 has expanded considerably here over the last few months, and we’re excited about that. We’re putting in a lot more resources.”

In the meantime, Microsoft intends to keep working closely with Nvidia as it attempts to secure more of the company’s processors. The AI boom has led to a growing shortage of specialized GPU chips, further constrained by Nvidia having a near monopoly on the supply of such hardware. Microsoft and AMD aren’t the only players trying to develop in-house AI chips — Google has its own TPU (Tensor Processing Unit) chip for training its AI models, and Amazon has similarly created Trainium AI chips to train machine learning computer models.

OpenAI’s regulatory troubles are only just beginning

Illustration of the OpenAI logo on an orange background with purple lines
ChatGPT isn’t out of the EU’s data privacy woods just yet. | Illustration: The Verge

The European Union’s fight with ChatGPT is a glance into what’s to come for AI services.

OpenAI managed to appease Italian data authorities and lift the country’s effective ban on ChatGPT last week, but its fight against European regulators is far from over.

Earlier this year, OpenAI’s popular and controversial ChatGPT chatbot hit a big legal snag: an effective ban in Italy. The Italian Data Protection Authority (GPDP) accused OpenAI of violating EU data protection rules, and the company agreed to restrict access to the service in Italy while it attempted to fix the problem. On April 28th, ChatGPT returned to the country, with OpenAI lightly addressing GPDP’s concerns without making major changes to its service — an apparent victory.

The GPDP has said it “welcomes” the changes ChatGPT made. However, the firm’s legal issues — and those of companies building similar chatbots — are likely just beginning. Regulators in several countries are investigating how these AI tools collect and produce information, citing a range of concerns from companies’ collection of unlicensed training data to chatbots’ tendency to spew misinformation. In the EU, they’re applying the General Data Protection Regulation (GDPR), one of the world’s strongest legal privacy frameworks, the effects of which will likely reach far outside Europe. Meanwhile, lawmakers in the bloc are putting together a law that will address AI specifically — likely ushering in a new era of regulation for systems like ChatGPT.

ChatGPT is one of the most popular examples of generative AI — a blanket term covering tools that produce text, image, video, and audio based on user prompts. The service reportedly became one of the fastest-growing consumer applications in history after reaching 100 million monthly active users just two months after launching in November 2022 (OpenAI has never confirmed these figures). People use it to translate text into different languages, write college essays, and generate code. But critics — including regulators — have highlighted ChatGPT’s unreliable output, confusing copyright issues, and murky data protection practices.

Italy was the first country to make a move. On March 31st, it highlighted four ways it believed OpenAI was breaking GDPR: allowing ChatGPT to provide inaccurate or misleading information, failing to notify users of its data collection practices, failing to meet any of the six possible legal justifications for processing personal data, and failing to adequately prevent children under 13 years old from using the service. It ordered OpenAI to immediately stop using personal information collected from Italian citizens in its training data for ChatGPT.

No other country has taken such action. But since March, at least three EU nations — Germany, France, and Spain — have launched their own investigations into ChatGPT. Meanwhile, across the Atlantic, Canada is evaluating privacy concerns under its Personal Information Protection and Electronic Documents Act, or PIPEDA. The European Data Protection Board (EDPB) has even established a dedicated task force to help coordinate investigations. And if these agencies demand changes from OpenAI, they could affect how the service runs for users across the globe.

Regulators’ concerns can be broadly split into two categories: where ChatGPT’s training data comes from and how OpenAI is delivering information to its users.

ChatGPT runs on OpenAI’s GPT-3.5 or GPT-4 large language models (LLMs), which are trained on vast quantities of human-produced text. OpenAI is cagey about exactly what training text is used but says it draws on “a variety of licensed, created, and publicly available data sources, which may include publicly available personal information.”

This potentially poses huge problems under GDPR. The law was enacted in 2018 and covers every service that collects or processes data from EU citizens — no matter where the organization responsible is based. GDPR rules require companies to have explicit consent before collecting personal data, to have legal justification for why it’s being collected, and to be transparent about how it’s being used and stored.

European regulators claim that the secrecy around OpenAI’s training data means there’s no way to confirm if the personal information swept into it was initially given with user consent, and the GPDP specifically argued that OpenAI had “no legal basis” for collecting it in the first place. OpenAI and others have gotten away with little scrutiny so far, but this claim adds a big question mark to future data scraping efforts.

Then there’s GDPR’s “right to be forgotten,” which lets users demand that companies correct their personal information or remove it entirely. OpenAI preemptively updated its privacy policy to facilitate those requests, but there’s been debate about whether it’s technically possible to handle them, given how complex it can be to separate specific data once it’s churned into these large language models.

OpenAI also gathers information directly from users. Like any internet platform, it collects a range of standard user data (name, contact info, card details, and so on). But, more significantly, it records interactions users have with ChatGPT. As stated in an FAQ, this data can be reviewed by OpenAI’s employees and is used to train future versions of its model. Given the intimate questions people ask ChatGPT — using the bot as a therapist or a doctor — this means the company is scooping up all sorts of sensitive data.

At least some of this data may have been collected from minors, as while OpenAI’s policy states that it “does not knowingly collect personal information from children under the age of 13,” there’s no strict age verification gate. That doesn’t play well with EU rules, which ban collecting data from people under 13 and (in some countries) require parental consent for minors under 16. On the output side, the GPDP claimed that ChatGPT’s lack of age filters exposes minors to “absolutely unsuitable responses with respect to their degree of development and self-awareness.”

OpenAI maintains broad latitude to use that data, which has worried some regulators, and storing it presents a security risk. Companies like Samsung and JPMorgan have banned employees from using generative AI tools over fears they’ll upload sensitive data. And, in fact, Italy announced its ban soon after ChatGPT suffered a serious data leak, exposing users’ chat history and email addresses.

ChatGPT’s propensity for providing false information may also pose a problem. GDPR regulations stipulate that all personal data must be accurate, something the GPDP highlighted in its announcement. Depending on how that’s defined, it could spell trouble for most AI text generators, which are prone to “hallucinations”: a cutesy industry term for factually incorrect or irrelevant responses to a query. This has already seen some real-world repercussions elsewhere, as a regional Australian mayor has threatened to sue OpenAI for defamation after ChatGPT falsely claimed he had served time in prison for bribery.

ChatGPT’s popularity and current dominance over the AI market make it a particularly attractive target, but there’s no reason why its competitors and collaborators, like Google with Bard or Microsoft with its OpenAI-powered Azure AI, won’t face scrutiny, too. Before ChatGPT, Italy banned the chatbot platform Replika for collecting information on minors — and so far, it’s stayed banned.

While GDPR is a powerful set of laws, it wasn’t made to address AI-specific issues. Rules that do, however, may be on the horizon.

In 2021, the EU submitted its first draft of the Artificial Intelligence Act (AIA), legislation that will work alongside GDPR. The act governs AI tools according to their perceived risk, from “minimal” (things like spam filters) to “high” (AI tools for law enforcement or education) or “unacceptable” and therefore banned (like a social credit system). After the explosion of large language models like ChatGPT last year, lawmakers are now racing to add rules for “foundation models” and “General Purpose AI Systems (GPAIs)” — two terms for large-scale AI systems that include LLMs — and potentially classing them as “high risk” services.

The AIA’s provisions go beyond data protection. A recently proposed amendment would force companies to disclose any copyrighted material used to develop generative AI tools. That could expose once-secret datasets and leave more companies vulnerable to infringement lawsuits, which are already hitting some services.

But passing it may take a while. EU lawmakers reached a provisional AI Act deal on April 27th. A committee will vote on the draft on May 11th, and the final proposal is expected by mid-June. Then, the European Council, Parliament, and Commission will have to resolve any remaining disputes before implementing the law. If everything goes smoothly, it could be adopted by the second half of 2024, a little behind the official target of Europe’s May 2024 elections.

For now, Italy and OpenAI’s spat offers an early look at how regulators and AI companies might negotiate. The GPDP offered to lift its ban if OpenAI met several proposed resolutions by April 30th. That included informing users how ChatGPT stores and processes their data, asking for explicit consent to use said data, facilitating requests to correct or remove false personal information generated by ChatGPT, and requiring Italian users to confirm they’re over 18 when registering for an account. OpenAI didn’t hit all of those stipulations, but it met enough to appease Italian regulators and get access to ChatGPT in Italy restored.

OpenAI still has targets to meet. It has until September 30th to create a harder age-gate to keep out minors under 13 and require parental consent for older underage teens. If it fails, it could see itself blocked again. But it’s provided an example of what Europe considers acceptable behavior for an AI company — at least until new laws are on the books.

‘Like Icarus – now everyone is burnt’: how Vice and BuzzFeed fell to earth


Era of lofty valuations for upstart youth media appears to be over, with Vice ‘heading for bankruptcy’ and BuzzFeed News shutting down

Just over a decade ago, Rupert Murdoch endorsed what appeared to be a glittering future for Vice, firing off a tweet after an impromptu visit to the media upstart’s Brooklyn offices and a drink in a nearby bar with the outspoken co-founder Shane Smith.

“Who’s heard of VICE media?” the Australian media mogul posted from his car on the way home from the 2012 visit, which resulted in a $70m (£55m) investment. “Wild, interesting effort to interest millennials who don’t read or watch established media. Global success.”


‘Ron DeSoros’? Conspiracy Theorists Target Trump’s Rival.

Ron DeSantis, a likely contender for the Republican presidential nomination, must court far-right voters who consider him a tool of the Deep State.

Thursday 4 May 2023

White House Unveils Initiatives to Reduce Risks of AI

Vice President Kamala Harris also plans to meet with the chief executives of tech companies that are developing A.I. later on Thursday.

White House rolls out plan to promote ethical AI

President Biden And VP Harris Deliver Remarks On National Small Business Week In The Rose Garden
Photo by Chip Somodevilla/Getty Images

The White House announced more funding and policy guidance for developing responsible artificial intelligence ahead of a Biden administration meeting with top industry executives.

The actions include a $140 million investment from the National Science Foundation to launch seven new National AI Research (NAIR) Institutes, increasing the total number of AI-dedicated facilities to 25 nationwide. Google, Microsoft, Nvidia, OpenAI and other companies have also agreed to allow their language models to be publicly evaluated during this year’s Def Con. The Office of Management and Budget (OMB) also said that it would be publishing draft rules this summer for how the federal government should use AI technology.

“These steps build on the Administration’s strong record of leadership to ensure technology improves the lives of the American people, and break new ground in the federal government’s ongoing effort to advance a cohesive and comprehensive approach to AI-related risks and opportunities,” the administration’s press release said. It does not specify the details of what the Def Con evaluation will include, beyond saying that it will “allow these models to be evaluated thoroughly by thousands of community partners and AI experts.”

The announcement comes ahead of a Thursday White House meeting, led by Vice President Kamala Harris, with the chief executives of Alphabet, Anthropic, Microsoft, and OpenAI to discuss AI’s potential risks. “The meeting is part of a broader, ongoing effort to engage with advocates, companies, researchers, civil rights organizations, not-for-profit organizations, communities, international partners, and others on critical AI issues,” the Thursday release said.

Last October, the Biden administration made its first strides to regulate AI by releasing a blueprint for an “AI Bill of Rights.” The project was intended to serve as a framework for use of the technology by both the public and private sectors, encouraging anti-discrimination and privacy protections.

Federal regulators and Congress have announced a fresh focus on AI over the last few weeks. In April, the Federal Trade Commission, Consumer Financial Protection Bureau, Justice Department, and Equal Employment Opportunity Commission issued a joint warning arguing that they already have the authority to go after companies whose AI products harm users.

Senate Majority Leader Chuck Schumer (D-NY) and other lawmakers also reportedly met with Elon Musk to discuss AI regulation last week.

What really happened when Elon Musk took over Twitter


Why has the social network been in total chaos since the world’s richest man took control? Flipping the Bird investigates. Plus: five of the best podcasts about planet Earth

If the prospect of taking an oath of allegiance to an unelected billionaire doesn’t excite you, television is going to be a pretty tedious place for the next few days. Fortunately, there’s never been a better time to retreat into the world of podcasts. This week, the Guardian released a five-part series looking into the murky finances of King Charles. From its examination of his family’s past exploitation of enslaved people, through to the dubious line between his personal wealth and that which is supposedly held for our nation, it’s a welcome look at the kinds of issues missing from the national conversation.

Also excellently skewering the powers that be is Pod Save the UK – a new homegrown version of the popular US take on politics, hosted by Nish Kumar and Guardian journalist Coco Khan. There’s also a look at Elon Musk’s, ahem, maverick leadership of Twitter, and an examination of a two-decade-long battle over the theft of a Banksy. Plenty of brilliant listens, then, to distract you from having to watch the king’s big party ...

Alexi Duggins
Deputy TV editor



Google rolls out passkey technology in ‘beginning of the end’ for passwords


Apple and Microsoft also collaborated on the technology which allows authentication with fingerprint ID, facial ID or a pin

Google is moving one step closer to ditching passwords, rolling out its passkey technology to Google accounts from Thursday.

The passkey is designed to replace passwords entirely by allowing authentication with fingerprint ID, facial ID, or a PIN on the phone or device you use for authentication.

Continue reading...

Nothing’s CMF Phone 1 is proof that gadgets can still be fun

Honestly, this is Nothing’s best idea yet. I’ve never had so much fun taking...