Wednesday, May 17, 2023

Beepberry is a Blackberry keyboard tinker toy from the founder of Pebble

The ‘Beepberry’ with other fruits accompanying it. | Image: SQFMI

Are you a hacker who happens to miss their Blackberry? Looks like there’s a new product that’s just your speed: the “Beepberry.” It literally grafts the keyboard of a Blackberry Classic onto a pocketable custom board designed to fit a Raspberry Pi Zero W, all paired with a 400 x 240 “Memory LCD” screen that looks like it was ripped from an old graphing calculator — but is a bit more sophisticated.

Beepberry is designed by Eric Migicovsky, founder of the gone-but-not-forgotten Pebble smartwatch and, more relevantly, co-founder of Beeper: the hacky all-in-one messaging app that stuffs every service from WhatsApp to iMessage (using a jailbroken iPhone) into one place.

The device is ostensibly designed to run Beeper without any other online distractions, but Migicovsky knows what you’re thinking: he also describes Beepberry as a portable “e-paper” computer for hackers.

You could build yourself a fun handheld device that's purposefully limited but also kind of limitless in terms of what you can do with a Raspberry Pi. The website provides a few examples to get your mind oriented, including a simple weather checker, playing ASCII Star Wars, browsing the cyberdeck subreddit, and running a gomuks Matrix client.
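If you're curious how little code one of those starter projects takes, below is a minimal sketch of the weather-checker idea: a hypothetical Python script (it leans on the public wttr.in service, which has nothing to do with Beepberry) that you could run in a terminal session on the Pi and read on the Memory LCD.

```python
# weather.py - a rough weather-checker sketch, not official Beepberry software.
# Assumes the Pi Zero W has a working Wi-Fi connection; wttr.in is a public
# weather service used here purely for illustration.
import urllib.request

def one_line_forecast(location: str = "") -> str:
    # format=3 returns a single compact line, e.g. "Paris: +18°C"
    url = f"https://wttr.in/{location}?format=3"
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read().decode("utf-8").strip()

if __name__ == "__main__":
    # With no argument, wttr.in guesses your location from your IP address.
    print(one_line_forecast())
```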

In case you’re wondering about that “e-paper” screen, it’s not technically e-ink — but it is an LCD made by Sharp with a one-bit memory circuit embedded in each pixel for e-ink-like image retention.

For $79 you get the Beepberry, mounting screws, and a 2,000mAh battery — although you’ll have to find a way to hold the battery in place. (In some demos, the creators are literally using a rubber band.) In addition to the 2.7-inch screen and Blackberry Classic Q20 backlit keyboard, you get a USB-C port, an RGB LED, a side button, a power switch, and General Purpose Input / Output (GPIO) breakouts.

A Beepberry installed inside a 3D-printed shell next to a Pebble-like ‘Watchy’ smartwatch. | Image: SQFMI

For $99, you can get a Beepberry kit that includes a Pi Zero W preinstalled. Otherwise, you’ll have to bring your own Pi or another single-board computer like the Radxa Zero or MQ-Pro — which should be a drop-in swap since the Beepberry has a solderless header.

Notably, the Beepberry itself lacks any hardware for cellular data connectivity, so it’s not quite a self-contained beeper in the traditional sense. You’ll need to rely on the Raspberry Pi’s built-in Wi-Fi (perhaps tethering to a smartphone hotspot when you’re out and about) unless you’re willing to devise a cellular add-on that plugs into its headers.

If you’re interested in a Beepberry, you might want to act fast: there are only 50 initially available to ship. You’ll need to put in your order and then fill out the Early Access Program form at the bottom of the page to let them know you want it now. The site does not mention how many of the initial 50, if any, remain available for purchase.

Beepberry specs. | Image: SQFMI

Software and firmware for the Beepberry are available to download online. There are even 3D-printable enclosures to get started with, and a Discord community for anyone taking on a Beepberry project.

It’s important to note that the software / firmware is still actively being developed and nothing is final, so don’t expect many out-of-box features if you get your hands on one. If you simply want something with a complete out-of-box experience and a black and white screen, a Playdate may be more your speed. And if you’re just looking to support the Pebble founder’s next endeavor, you could wait for his team’s upcoming Small Android Phone.

Tuesday, May 16, 2023

Elon Musk keeps insisting the Texas shooter with a swastika tattoo is not a white supremacist

Elon Musk grins in a photo illustration, lifting his arms over his head triumphantly
Kristen Radtke / The Verge; Getty Images

In an interview with CNBC Tuesday evening, Elon Musk defended spreading conspiracy theories about the deadly mass shooting in Texas earlier this month.

On May 9th, open-source intelligence research group Bellingcat posted a story with details about the shooter that indicated he held white supremacist and neo-Nazi views. Bellingcat’s story included social media posts from the Russian social network Odnoklassniki that traced back to the shooter, including photos featuring a large swastika tattoo and body armor with an RWDS (Right-Wing Death Squad, a far-right slogan) patch. The Texas Department of Public Safety has also said that the shooter showed indications of holding neo-Nazi ideology, with an official saying that “He had patches. He had tattoos.”

But on Twitter on May 9th, Musk replied to a crude meme questioning details about the shooter, claiming that Bellingcat “literally specializes in psychological operations” and saying that “this is either the weirdest story ever or a very bad psyop!”

CNBC’s David Faber asked him about that tweet in an interview Tuesday evening. “I think it was incorrectly ascribed to be a white supremacist action,” Musk said. “And the evidence for that was some obscure Russian website that no one’s ever heard of that had no followers. And the company that found this was Bellingcat. And do you know what Bellingcat is? Psyops.” In its story, Bellingcat notes that it did not in fact discover the profile; its existence was first reported by The New York Times.

Musk added, “I’m saying I thought that ascribing it to white supremacy was bullshit. And that the information for that came from an obscure Russian website and was somehow magically found by Bellingcat, which is a company that does psyops.” Bellingcat’s report describes finding the profile by matching accounts against the shooter’s date of birth. The account had posted photos of identification documents, including a speeding ticket and a boarding pass that included the shooter’s name.

You can hear Musk’s comments for yourself starting at 2:39 in this video.

Musk’s comments about the shooting were part of an escalating series of messages that echo right-wing talking points. In the interview he similarly defended comments claiming billionaire philanthropist George Soros, a frequent target of antisemitic conspiracy theories, “hates humanity.” Last year he also shared a widely dismissed conspiracy theory about the motives for an attack on Nancy Pelosi’s husband, Paul Pelosi. Later in the interview with CNBC, he reiterated his denial that the shooter held white supremacist views:

Faber: There’s no proof, by the way, that he was not [a white supremacist]

Musk: I would say that there’s no proof that he is.

Faber: And that’s a debate you want to get into on Twitter?

Musk: Yes. Because we should not be ascribing things to white supremacy if it is false.

This conversation happened after Musk told Faber that he’ll say what he wants even if it loses him money. I have to imagine these comments will make him lose some money.

Court rules Theranos founder Elizabeth Holmes must go to prison while she appeals sentence


Holmes, who was charged with defrauding investors in her blood-testing start-up, hoped to stay out of jail while she appealed her conviction

Theranos founder Elizabeth Holmes must begin serving her prison sentence while she appeals her conviction on charges of defrauding investors in the failed blood-testing startup, an appeals court in San Francisco ruled on Tuesday.

Holmes, who rose to fame after claiming Theranos’ small machines could run an array of diagnostic tests with just a few drops of blood, was convicted at trial in San Jose, California, last year and sentenced to 11 years and three months in prison.

Continue reading...

Elon Musk: I will say what I want even if it loses me money

Elon Musk shrugging on a background with the Twitter logo
Illustration by Kristen Radtke / The Verge; Getty Images

During an interview on CNBC, Elon Musk defended his right to say inflammatory things on Twitter, even if those statements lose him money. He appeared to dissociate briefly after being asked why he even bothers tweeting. And he eventually quoted The Princess Bride to explain his cavalier attitude toward what he shares on Twitter.

It was a very weird interview.

The interview came after a particularly troubling run of tweets for Musk, in which he promoted conspiracy theories about a mass shooting in Texas, was accused of antisemitism after claiming that George Soros “hates humanity,” and retweeted discredited theories about crime and race.

After a series of mostly softball questions about Tesla and time management, CNBC’s David Faber asked why he tweets conspiracy theories and makes statements that have been criticized as racist and antisemitic, especially when they could lose him customers and hurt the companies he runs.

After an extremely long and uncomfortable pause, Musk referenced the scene from the 1987 movie The Princess Bride, in which Mandy Patinkin’s Inigo Montoya character confronts the man who killed his father.

“He says, ‘Offer me money. Offer me power,’” Musk said. “‘I don’t care.’”

“You just don’t care,” Faber replied, as Musk just stared at him. “You want to share what you have to say.”

Eventually, Musk said, “I’ll say what I want to say, and if the consequences are losing money, so be it.”

As CEO of a public company, there are limits to what Musk can say, on Twitter or elsewhere. If he tweets misleading things about Tesla, shareholders will sue him — as they did after he tweeted about taking the company private at $420 a share. (The shareholders lost the suit and Musk was found to not be liable for their losses.)

His tweets have caused him all sorts of headaches over the years. His take-private tweet in 2018 got him fined $40 million by the Securities and Exchange Commission and lost him the chairmanship of Tesla. He is currently under a consent decree with the SEC that requires a lawyer to approve his tweets about Tesla before he can post them. A federal appeals court recently ruled against Musk’s attempts to vacate the consent decree.

We’ve been through all this before. Musk is asked why he tweets incendiary things, and he points to his follower count to justify his increasingly unhinged behavior — as if a large chunk of those followers aren’t just rubber-necking. His followers and shareholders implore him to stop tweeting, but he doubles and triples down, again and again. It is, one might say, inconceivable.

Elon Musk says ‘get off your work-from-home bullshit’

An image of Elon Musk on a blue illustrated background.
Kristen Radtke / The Verge; Getty Images

In an interview aired on CNBC, Elon Musk called remote work “morally wrong” and “bullshit,” arguing it was unfair to those workers who can’t work from home.

Musk, who banned remote work at Twitter after acquiring the company, has not been shy about sharing his disdain for work-from-home policies. But in the interview, Musk was more animated than usual, arguing that remote work is counterproductive.

“I’m a big believer that people need to be more productive when they’re in person,” he told CNBC’s David Faber.

Musk imposed a strict return-to-the-office policy at Tesla in June 2022, warning employees they would lose their jobs if they refused to comply. Employees would need to spend a minimum of 40 hours a week at the office; anything less would be “phoning it in.”

Tesla was more open to remote work before the pandemic, workers told CNBC. But after covid, Musk took a hard line against remote work, as well as other preventative measures like mask-wearing. The company also lacked room and resources to bring many of its employees back to its San Francisco offices.

After acquiring Twitter, Musk set the same strict policy, just as he was laying off over three-fourths of the workforce. During the interview Tuesday, he became extremely animated when CNBC’s David Faber casually referenced the policy.

“Get off the goddamn moral high horse with the work-from-home bullshit,” Musk said, “because they’re asking everyone else to not work from home while they do.”

He went on to argue that because people who deliver food and build houses can’t work from home, neither should office workers, calling the decision “messed up” and a “moral issue.”

“If you want to work at Tesla, you want to work at SpaceX, you want to work at Twitter — you got to come into the office every day,” he said.

Valve just got sued by Immersion over Steam Deck and Index rumble

Photo by Vjeran Pavic / The Verge

Consider it a rite of passage: Valve has finally become successful enough at building gaming hardware that it’s getting sued by Immersion Corporation.

Immersion — the haptic feedback company that’s purchased, developed, or otherwise accumulated so many patents on rumble tech that almost every major tech company has ended up licensing them or settling out of court — is now accusing Valve of infringing its patents with the Steam Deck handheld, the Valve Index VR platform, its SteamVR software, and Half-Life: Alyx, among other titles. There’s no mention of Valve’s long-gone Steam Controller, which also used lots of haptic feedback.

Immersion is asking for damages, royalties, and an injunction against Valve “from deploying, operating, maintaining, testing, and using the Accused Handheld Instrumentalities and Accused VR Instrumentalities”.

The company filed its complaint Monday in federal court, specifically the Western District of Washington, citing patents 7,336,260, 8,749,507, 9,430,042, 9,116,546, 10,627,907, 10,665,067, and 11,175,738.

Sony and Microsoft both license Immersion’s patent portfolio, following lawsuits and settlements. Apple, Google, Motorola and Fitbit settled as well. Meta is currently in the middle of its own Immersion lawsuit, filed a year ago. Nintendo seemingly escaped a suit, perhaps due to its own development of Rumble Pak tech for the Nintendo 64, but it also licenses Immersion tech now.

While you might be tempted to point out that Valve’s hardware uses a different form of rumble than the ones that Sony and co. got sued for back in the day — linear resonant actuators (LRA), rather than an eccentrically swinging weight on a shaft — that didn’t stop Nintendo from signing an agreement with Immersion to bring its technology to the Switch, which uses LRAs as well. So does Sony’s DualSense, for that matter.

And if you look at the individual patents listed above, their claims cover much more than just the hardware. I would be surprised if Valve doesn’t settle.

Valve didn’t immediately respond to a request for comment. The Steam Deck once again topped the Steam Weekly Top Sellers (by revenue) this past week; while we don’t know sales numbers, it’s safe to say the handheld has sold well over a million units. Valve is looking at an eventual Steam Deck successor, too, and reportedly has a follow-up headset codenamed “Deckard.”

Ring founder to become CEO of commercial smart home company Latch

image of Jamie Siminoff and a bunch of Latch products
Image: Latch

Just a day after we found out Ring co-founder Jamie Siminoff was leaving Amazon and Ring altogether, it seems he will become CEO of enterprise smart home company Latch after it buys Honest Day’s Work (HDW), a little-known business Siminoff appears to have founded about four months ago.

Latch’s interim CEO, Jason Keyes, had this to say:

“I am confident Jamie’s passion for leading mission-driven technology companies, his operational expertise, and his track record for creating industry-dominating products will be invaluable as Latch enters its next phase. I am excited about the opportunity to further improve our customer experience via the integration of HDW services into LatchOS.”

LatchOS is the name of the company’s “full-building operating system,” which gives apartment building owners and landlords control over the smart home products they include with tenants’ leases — devices like Sonos speakers, Honeywell and Ecobee thermostats, and the company’s own smart locks.

Latch promotes this as a way for landlords to offer access via phone or watch, leaving physical keys out of the equation altogether. In fact, one of its products, the Latch C2, has no physical key option. As we noted when the company announced its funding in 2016, however, this also creates privacy issues, like records of when a resident left their apartment and when they returned.

Latch’s products have created friction between landlords and tenants in the past — in 2019, five New York City tenants successfully sued their landlord for the right to keyed access to their building’s lobby after its owner installed a Latch smart lock.

Of the transition, Siminoff said:

“I’m excited to join the Latch team, which has built an incredible offering that users across the country enjoy and benefit from every day. Smart, secure access control is not only fundamental to real estate operators like myself, but also to residents and service providers. I look forward to combining Honest Day’s Work with Latch to build a residential ecosystem that empowers building owners, operators, service providers, and residents alike.”

HDW appears to be about four months old, with a website that only just went live and offers no clues about its purpose. According to Latch’s press release, HDW’s mission is “to enable residential service providers, such as housekeepers, dog walkers, electricians, and drivers…more control over their businesses and enable them to deliver high-quality service to their customers.” How HDW achieves that mission isn’t stated, but it seems likely to be similar to Amazon Key or Walmart Plus deliveries, where delivery personnel can enter your home and leave orders, such as groceries, just inside your door, in your garage, or even in your refrigerator.

Perhaps HDW will let you give providers, like those listed in the release, temporary access to your home or garage. It makes sense that Latch would be interested in incorporating such a feature into its platform, making it easier for building owners to manage and scale controls.

Whether HDW’s brand will stick around or Latch will simply absorb its assets and repurpose them under the Latch brand is unclear, but it sounds like maybe the latter, as the release says 30 of HDW’s employees will be folded into Latch’s ranks after the transition. I asked, in an email exchange, whether the HDW brand would remain in use, and Meredith Chiricosta, a PR representative for Latch, only said, “The primary focus has been on refining the concept for Honest Day’s Work and building out the technology.”

I also asked if layoffs were expected, and Chiricosta responded that “right now, the teams are focused on ensuring a smooth transition. The goal is to build a strong foundation for what we expect to be a unique offering that will make life easier for property owners, tenants, and service providers. We’re excited for what the future holds.”

‘Why would we employ people?’ Experts on five ways AI will change work


From farming and education to healthcare and the military, artificial intelligence is poised to make sweeping changes to the workplace. But can it have a positive impact – or are we in for a darker future?

In 1965, the political scientist and Nobel laureate Herbert Simon declared: “Machines will be capable, within 20 years, of doing any work a man can do.” Today, in what is increasingly referred to as the fourth industrial revolution, the arrival of artificial intelligence (AI) in the workplace is igniting similar concerns.

The European parliament’s forthcoming Artificial Intelligence Act is likely to deem the use of AI across education, law enforcement and worker management to be “high risk”. Geoffrey Hinton, known as the “godfather of AI”, recently resigned from his position at Google, citing concerns about the technology’s impact on the job market. And, in early May, striking members of the Writers Guild of America promised executives: “AI will replace you before it replaces us.”

Continue reading...

Monday, May 15, 2023

The NFL will make one playoff game a streaming exclusive on Peacock next year

A graphic showing Peacock’s logo in a beige circle surrounded by other colorful circles
Illustration by Alex Castro / The Verge

The NFL just announced a new deal with Peacock that gives the NBCUniversal-owned streaming service national broadcast rights to one wild card playoff game next season. Described as the “first-ever exclusive live streamed NFL Playoff game,” it will feature two teams facing off in primetime on January 13th, 2024. The arrangement was first reported by The Wall Street Journal, which cites sources saying the deal for this one game is worth around $110 million.

Of course, we don’t know which two teams will be playing, but for fans without Peacock, options for watching may depend on where they live. Like a regular season game that will also be a Peacock exclusive (Bills vs. Chargers on December 23rd, or as you may know it, the Burn It All game), the playoff game will be broadcast in the competing teams’ local markets on an NBC affiliate and on NFL Plus, the league’s subscription streaming platform made for phones and tablets but not TVs.

The Peacock exclusive Wild Card game and regular season game will be broadcast on NBC stations in the two competing team cities, and available on mobile devices with NFL+. The NFL is the only sports league that presents all regular-season and postseason games on free, over-the-air television in local markets.

According to the WSJ, when the league struck new broadcasting deals in 2021, it reserved the rights to one playoff game per season, and this NBCUniversal deal is only for the 2023 postseason. NFL Media exec Hans Schroeder told the outlet it’s “likely” that in future years, the game will continue to be streaming only, which could attract bids to put it on platforms including Peacock again, Paramount Plus, Disney’s ESPN Plus, or Fox’s Tubi.

Disclosure: Comcast, which owns NBCUniversal, is also an investor in Vox Media, The Verge’s parent company.

A small cable company is going all-in on YouTube TV for video

YouTube’s logo with geometric design in the background
YouTube TV | Illustration: Alex Castro / The Verge

Cable TV is on the way out, and YouTube TV is in for Wide Open West, or WOW!, a smaller US broadband operator with just over a half million internet customers and, as of March 31st, 117,000 households paying for cable TV.

The company said Monday that it would start migrating residential TV customers this summer:

The process of migrating WOW!’s residential video customers to YouTube TV will begin this summer as WOW! discontinues the marketing and selling of its TV services, including WOW! tv+, and sells YouTube TV across its footprint. WOW! will maintain and support its current video services as its existing base of video customers can switch from WOW!’s current video products to YouTube TV.

WOW! has been transitioning away from traditional cable since introducing WOW! tv+ several years ago, an IPTV service that uses an Android-based streaming box for on-demand video, cloud DVR, and a Google Assistant voice remote. The company also offers bundled pricing for streaming TV alternatives like FuboTV, DirecTV streaming, and, yes, YouTube TV, but this new deal would distill its video options to just marketing the YouTube TV subscription with no included hardware.

The presumed end goal here would be moving its customers off traditional cable entirely, which would free up bandwidth for the ISP overall. WOW! isn’t the only provider with a YouTube TV deal. Verizon bundled YouTube TV with its Fios service in 2019, and Frontier began selling a YouTube TV bundle with integrated billing in March, among others.

Streaming services have been steadily ascendant for years and, in 2022, actually edged out cable in total viewership by a few tenths of a percentage point, so the move to YouTube TV makes sense, particularly given YouTube’s successful bid for NFL Sunday Ticket just a few months ago.

Google said YouTube TV surpassed 5 million subscribers last year, and a recent analyst report pegged its total at about 6.3 million now, with months to go before the Sunday Ticket deal kicks in. Still, whether they get their TV from a streaming service or traditional cable, the number of pay-TV subscribers in the US has continued to drop and is around 75.5 million households, according to the report by MoffettNathanson.

Microsoft will pay to capture carbon from burning wood

Art depicts cartoon balloons attached to the tops of four smokestacks.
Illustration by Hugo Herrera / The Verge

Microsoft just backed a big plan to capture carbon dioxide emissions from a wood-burning power plant. Today, the tech giant announced a deal with Danish energy company Ørsted to purchase credits representing 2.76 million metric tons of carbon dioxide captured at Ørsted’s Asnæs Power Station over 11 years.

It’s one of the biggest deals any company has made to date to draw down carbon dioxide emissions, according to a press release from Ørsted. The move is supposed to help Microsoft hit its goal of becoming carbon negative by 2030, the point at which the company is removing more planet-heating carbon dioxide from the atmosphere than it generates through its operations.

But technology to capture carbon dioxide emissions is still nascent, and some environmental groups and researchers are skeptical that the strategy Microsoft just helped to fund can be an effective way to tackle climate change. Without Microsoft’s support, Ørsted wouldn’t have been able to install carbon capture devices at its power plant. “Danish state subsidies and Microsoft’s contract were both necessary to make this project viable,” Ørsted’s announcement says.

With Microsoft’s help, Ørsted was able to nab an even bigger, 20-year contract with the Danish Energy Agency (DEA) to capture CO2 emissions from Asnæs in western Zealand and a second power plant near Copenhagen. After the carbon capture devices are installed, they should be able to capture a total of 430,000 metric tons of CO2 annually by 2026. For comparison, that’s roughly equivalent to how much CO2 a single gas-fired power plant emits in a year.

These power plants, however, burn wood chips and straw, fuels also known as “biomass.” And burning biomass, which can include agricultural waste and other plant material, as a sustainable energy source is controversial. The EU counts biomass as its biggest source of renewable energy, but a lot of the wood that’s burned has come from trees cut down in forests across Europe and the southeast US. Ørsted says the wood chips burned at its Asnæs Power Station “come from sustainably managed production forests and consists of residues from trimming or crooked trees.”

How is burning trees supposed to be good for the environment? After all, wood still releases CO2 when it’s burned. The argument is that trees or crops used to make biomass naturally take in and store CO2 when they’re alive. So if you replant the trees or plants, you can potentially have a fuel that’s carbon neutral.

Ørsted is going one step further by adding technologies that can filter CO2 out of its power plants’ smokestacks, keeping it from wafting up into the atmosphere. By doing that, it believes its biomass-burning power plants will become carbon negative. Ørsted plans to bury the carbon dioxide it captures under the North Sea and sell Microsoft credits representing each ton of CO2. Microsoft can then use those credits to claim that it has canceled out some of its own greenhouse gas pollution.

If that all sounds like a tricky balancing act, it is. Previous research has found that burning woody biomass can create more CO2 emissions than what’s captured. That’s because only capturing smokestack emissions fails to account for all the pollution that might come from cutting down the trees and transporting the wood. Plus, it can take a long time for trees or plants to grow mature enough for people to rely on them to draw down a significant amount of CO2.

“We think that the details are crucial,” Phillip Goodman, carbon removal portfolio director at Microsoft, says in an email to The Verge. An effective carbon capture project would need to use biomass “harvested from appropriate areas” and account for all of its “process” emissions, Goodman says. Microsoft declined to say how much it would pay Ørsted for carbon removal credits for this particular project.

Microsoft has been making some bold bets on climate tech and clean energy technologies lately. Last week, it announced a plan to purchase electricity from a forthcoming nuclear fusion power plant — even though some experts don’t think such a cutting-edge power plant could realistically be developed for several more decades. Microsoft has also paid a Swiss company called Climeworks to filter CO2 out of the air.

How to hardwire your home without ethernet in the walls

An illustration of a home surrounded by smart tech.
Illustration by Hugo Herrera for The Verge

Listen. I don’t have anything against Wi-Fi. High-speed wireless access to the internet is darn near miraculous, and there are a lot of situations where it doesn’t make any sense to use a wired connection. Can you imagine if your phone was connected to the wall?

But since we’re celebrating the 50th anniversary of ethernet, I’d like to make a pitch for the humble, hardworking wired connection.

A wired connection is more stable than Wi-Fi, it’s almost always faster, and it has much lower latency. It’s just plain better to send a signal through a set of copper wires than to turn it into radio waves and blast it through walls, furniture, appliances, and people. (Wi-Fi isn’t bad for people; people are bad for Wi-Fi.) And every device you get off of your Wi-Fi will also help the devices still on it. You should hardwire every device you can: computers, gaming consoles, TVs, and especially your Wi-Fi access points (home servers and network-attached storage, too, but if you have those, you don’t need me to tell you about the advantages of wires).

Even a little bit of wiring can have a dramatic effect on your Wi-Fi situation and might let you avoid having to spring for a mesh networking system — or, worse, a Wi-Fi extender.

Here are the two best things wired networking lets you do.

Move the router: The best place for a Wi-Fi router is in the center of the home, but unless your home is already wired for ethernet, your internet hookup is probably along an exterior wall, somewhere that was convenient for the ISP’s installer but not necessarily for you. Running a wired connection between your ISP modem/gateway and your router lets you put the Wi-Fi where it needs to be while keeping the modem where it needs to be. Everybody wins.

By way of example: My fiber gateway — where the fiber optic signal from my ISP comes in — is in my garage. My house is about a decade old and wired for ethernet, but the connection between the gateway and the networking enclosure in my laundry room is indirect and full of splices due to some mystifying decisions by earlier ISP installers and occupants, so my internet connection kept dropping. Eventually, I’ll run a proper direct in-wall ethernet connection that bypasses that mess, but in the meantime, I’ve run a 50-foot patch cable out of my garage door, into my laundry room door, and into the wireless router because the alternative is putting the router in the garage, where it would slowly cook itself, and I’d still have to run a patch cable to hook up the rest of my network.

Replacing mesh backhaul: The whole reason mesh networking kits became popular is that they give you a decent Wi-Fi connection without wires, and here I am suggesting that you put the wires right back in. Hear me out.

Mesh networking kits like Eero, Nest Pro, and Orbi use Wi-Fi to communicate between the router and the satellite nodes as well as with the client devices. Usually, they dedicate one Wi-Fi band to backhaul — the communication between the mesh nodes — and one or more bands for devices. But each node has to be close enough to the next to have good reception on the backhaul band, and you have that many more Wi-Fi signals in your airspace. Replacing even one backhaul from your main router to a satellite node with a wired connection — if your mesh system supports it — dramatically improves the connection, especially for devices further away from the main router. You can put your Wi-Fi access points further away from each other, have better communication between them, and use fewer of them overall. (This is how I fixed my in-laws’ Eero installation last Christmas, to mild applause.)

Some houses and apartment buildings, especially those built or renovated in the last decade, are fortunate to have ethernet in the walls already: some in just one or two places, others in almost every room. If that’s an option for you and you’re not already taking advantage of it, you don’t need much to get started beyond a networking switch wherever those ethernet runs all meet and some cables to hook things into your wall jacks. But most people don’t have ethernet in the walls, and it’s not trivial or cheap to get it there, even if you have the option of poking a bunch of holes in the wall.

Fortunately, there are plenty of alternatives. In order from cheapest to best to… least good, there’s buying a really long ethernet cable, using your existing coax wiring, and finally, powerline networking.

The cheapest option: just get a really long cable

Here’s my pitch: get a really long ethernet cable. A hundred-foot cable from a reputable company costs about $25. Hook up the things you want to hook up. Maybe this lets you put your Wi-Fi router at the center of the house. Maybe it lets you hardwire your gaming PC and stop lagging out of multiplayer matches. Maybe it lets you use wired backhaul for one of your mesh networking nodes, or maybe you want to wire up your entire entertainment center with a simple networking switch. This is a good option for renters and people who don’t have ethernet or cable wiring in the walls and don’t want to (or can’t) put it there.

Now you’ve spent $25 or $50. If you’re happy with the performance but not the aesthetics of having a hundred-foot ethernet cable lying around, do what you can to pretty it up a bit. Tuck it under the baseboards or the edge of the carpet if you can, or use a peel-and-stick cable raceway. Is it elegant? Not really. Does it work? Yes.

The actual best option: use your cable wiring

MoCA adapters like this one convert between ethernet and coax wiring, so you can use your existing cable to extend your home network. | Image: Nilay Patel / The Verge

Most older homes have coax in at least a room or two, thanks to generations of satellite TV, cable TV, and cable internet installations. If your home or apartment was built in the 1990s or later, you may even have coax cable hookups preinstalled in most rooms. If you have existing cable wiring, you can use MoCA adapters (that’s Multimedia over Coax Alliance) to convert ethernet to coax and back without the finickiness of Wi-Fi or powerline. Depending on your exact setup, it might not be the easiest or cheapest option, but it’s as good as in-wall ethernet, and you’re a lot more likely to have it already there.

The current version, MoCA 2.5, can support transfer speeds of up to 2.5Gbps. A basic MoCA setup requires an adapter at each end. You should look for MoCA 2.5 adapters with 2.5GbE ethernet ports. Most people’s internet connections aren’t that fast yet, but 2.5GbE ports are becoming more common on desktop and network devices, and there’s no reason to bottleneck yourself in the future by getting MoCA adapters with 1Gbps ports when 2.5GbE options aren’t much more expensive.

To get started with MoCA, you need a coax port near your router. Get a MoCA adapter, and connect it to one of your router’s LAN ports with an ethernet cable. Plug the coax side into the nearest coax port. The other adapter connects to the coax port in the wall at your destination; then, you can connect the ethernet end to your device or a networking switch to wire up multiple devices. You can use multiple endpoint adapters with one router-side adapter, and if your router has a coax port on it — like most FiOS gateways — it already has MoCA built in, and you just need the endpoint adapters. The Verge’s editor-in-chief, Nilay Patel, uses MoCA adapters to run the backhaul for his Eero network.

Of course, that assumes a direct cable connection between the router end and the device end, which isn’t at all guaranteed. I’ve seen houses with three or four non-intersecting coax cable networks laid down by various cable and satellite installers over the past few decades. You also need to make sure there aren’t too many splitters in the path between them — these can lower the signal strength — and if you are also using coax cables for your TV or incoming internet connection, you’ll need a PoE (point of entry) filter, which makes sure the MoCA network doesn’t interfere with other signals in your network. This may require some cable archaeology and pruning disused splitters and cables.

The most useful and up-to-date explanation of MoCA that I’ve found is the one Dong Ngo just published in April, which includes helpful info on network layout, splitters, and PoE filters.
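Once the adapters are in place, it's worth sanity-checking that the coax run actually delivers wired-class throughput. iperf3 is the usual tool for that, but if you'd rather not install anything, here's a rough, hypothetical Python sketch that pushes ten seconds of junk data over TCP between two machines and reports the rate (the port number and duration are arbitrary choices for the sketch).

```python
# linktest.py - rough LAN throughput check, not a substitute for iperf3.
# Run "python3 linktest.py server" on a machine behind the MoCA adapter,
# then "python3 linktest.py client <server-ip>" on a machine wired to the router.
import socket
import sys
import time

PORT = 5201              # arbitrary port for this sketch
CHUNK = b"\x00" * 65536  # 64 KiB of junk data per send

def server() -> None:
    with socket.create_server(("", PORT)) as srv:
        conn, addr = srv.accept()
        with conn:
            total, start = 0, time.time()
            while True:
                data = conn.recv(65536)
                if not data:
                    break
                total += len(data)
            secs = time.time() - start
            print(f"received {total / 1e6:.0f} MB in {secs:.1f}s "
                  f"= {total * 8 / secs / 1e6:.0f} Mbps from {addr[0]}")

def client(host: str, seconds: float = 10.0) -> None:
    with socket.create_connection((host, PORT)) as conn:
        total, start = 0, time.time()
        while time.time() - start < seconds:
            conn.sendall(CHUNK)
            total += len(CHUNK)
        elapsed = time.time() - start
    print(f"sent {total / 1e6:.0f} MB = {total * 8 / elapsed / 1e6:.0f} Mbps")

if __name__ == "__main__":
    if len(sys.argv) >= 2 and sys.argv[1] == "server":
        server()
    elif len(sys.argv) >= 3 and sys.argv[1] == "client":
        client(sys.argv[2])
    else:
        print("usage: linktest.py server | linktest.py client <server-ip>")
```

The reported number is only a rough floor (TCP overhead and the sending machine's own network hardware both factor in), but it's enough to tell a healthy multi-hundred-megabit MoCA link from one being dragged down by a bad splitter.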

It might work great: powerline networking

A powerline adapter sends your ethernet signal through your existing electrical wiring. It can work well, though it depends on the age of your electrical wiring and other factors. | Image: Richard Lawler / The Verge

Powerline networking lets you use your existing electrical wiring to extend your network. It makes a lot of sense in theory; most people have electrical outlets in every room.

But in practice, its performance depends a lot on the age of your wiring and how each outlet is connected to your electrical panel. Richard Lawler, The Verge’s senior news editor, uses an AV2000 powerline networking kit in his home. He says he gets 700 to 1000 Mbps (on a Gigabit network) in some rooms and 300 to 500 Mbps in others. That’s better than you’ll get with a lot of Wi-Fi routers at range, though not better than MoCA or a long ethernet cable.

Wirecutter’s powerline article — which still has a GIF of my floor lamp going disco mode during my 2015 testing — is a good rundown of the powerline options. Joel also knows his stuff.

In addition to powerline kits, Wirecutter also tested MoCA adapters, and you can see in that article just how thoroughly the MoCA kits beat the powerline ones. Just saying. You’re better off with MoCA if you have it. A long ethernet cable beats powerline if you can manage it, but if that’s not an option, powerline is worth a shot.

TP-Link already has some Wi-Fi 7 routers for you to buy

Product shots of the TP-Link Archer BE800 (left) and a three-pack of the Deco BE85 (right)
Do you want your internet in tube form or whatever that other thing is? | Image: TP-Link

The Wi-Fi 7 spec isn’t totally finished yet, but you know TP-Link won’t let a pesky little thing like pending certification stop it, and why should it? Netgear and Asus aren’t. And so, the company is launching its first-ever Wi-Fi 7 routers — a mesh system called the Deco BE85, and a powerful single access point router that looks like a set piece designed for an old sci-fi show, the TP-Link Archer BE800.

Wi-Fi 7 means you can probably expect a laptop or a phone with a good Wi-Fi 6E or Wi-Fi 7 network card to get over a gigabit of throughput wirelessly, possibly way more, particularly if you have a connection to your ISP that’s capable of it, such as those offered in some places by the likes of Google Fiber, AT&T, or Comcast.

TP-Link sent us some of its test data, along with the layout of the house where it tested the mesh Deco BE85 system using a OnePlus 11 5G phone as the Wi-Fi 7 client. If accurate, the numbers look enticing, showing the phone reaching well over 3,000Mbps of throughput a few feet from the main router and only dropping to a little over 300Mbps at the farthest point. That’s incredibly fast and not something I’ve ever even been close to seeing, with most Wi-Fi 6E routers topping out a bit over 1,000Mbps at close range. I’d like to see how that holds up in real-world testing.

Despite the spec not being officially completed, these should be some impressive routers. Just looking at their hardware features, both get 10Gbps and 2.5Gbps ports, single USB 3.0 ports, and an SFP+ WAN/LAN port.

Both of TP-Link’s new routers are technically capable of up to 19Gbps, which is consistent with speeds claimed by Qualcomm regarding its new Wi-Fi 7 chipsets that likely live in them. Naturally, that’s spread across all bands and individual data links, so you shouldn’t expect to see anywhere near that throughput. Both routers will support free and pro versions of TP-Link HomeShield, the company’s network security package.

A lonely, unassuming Deco BE85 blasts Wi-Fi all over the place. | Image: TP-Link

Looking closer, the Deco BE85 uses eight antennas per mesh node, and TP-Link says a three-pack should cover about 9,600 square feet. Of course, in the real world, with walls and furniture and sacks of sentient meat walking around, that may not quite line up. The company also says it will use AI to control roaming, shoving your devices around to the best node as you wander around your home. The best mesh systems already do this without AI and usually do a decent job of it, so we’ll see how and whether TP-Link has improved things there.

Each node has two 2.5Gbps ports and two 10Gbps ports, each of which can serve as WAN or LAN — meaning they’ll autoconfigure themselves depending on whether they’re connected to your modem or to the rest of your network.

The Deco BE85 will also support wired backhaul, meaning you can run an ethernet cable between the devices so that each mesh node gets your full bandwidth to share with devices at every part of your house, rather than what it could otherwise pick up wirelessly — at least, that’s traditionally how it works. With multi-link aggregation for wireless backhaul, it’s likely to be faster than gigabit. So, if your house is already wired with older cabling and you don’t feel like running new ethernet drops, you might be better off going wireless with enough Deco BE85 nodes bopping around.

Always love a screen on a router. | Image: TP-Link

As for the chunky TP-Link Archer BE800 (which is named “beboo” in my headcanon), the company’s new single access point looks like a rejected Xbox design, with its pseudo-X-shaped outer shell and black front grate. It gets a dot matrix LED display, and TP-Link says it has over 3,000 graphics, showing you the weather, time, and for some reason, emoji. I don’t know who wants their router to emote at them, but it looks like y’all finally get that.

Mostly, its wireless capabilities look on par with the BE85 kit: it has the same bands, antenna count, and potential throughput, and though TP-Link didn’t share a coverage figure, I’d bet on somewhere around 2,500 to 3,000 square feet. The company also mentioned the ability to create a distinct network just for your smart home devices, though it’s unclear whether that applies to the BE85 mesh system as well.

Despite its appearance, TP-Link isn’t calling this a gaming router — that distinction is reserved for the Archer GE800, which has been announced but not yet released.

The TP-Link BE85 is available now at Best Buy in a three-pack configuration, and it ain’t cheap, at $1,500. Same goes for the $600 TP-Link BE800, also at Best Buy. Both will also be available for preorder at Amazon.

Should you run out and buy one of these routers today? For most people, of course not. The spec won’t be finalized until sometime later this year at the earliest, and as that time approaches, other companies will start announcing routers supporting the new standard. These are both pricey devices, and for at least the next year or two, you won’t see many products that can take advantage of anything Wi-Fi 7 brings to the table. Still, if you just like having the fancy new thing, they’ll be backward-compatible with everything you already own, so it’s not like they won’t work.

Philadelphia Inquirer severely disrupted by cyber-attack


Attack has caused worst disruption in decades to city’s paper of record and it is unclear when normal editorial services will resume

The Philadelphia Inquirer is scrambling to restore its systems and resume normal operations after it became the latest major media organization to be targeted in a cyber-attack.

With no regular Sunday newspaper and online stories also facing some delays, the cyber-attack has triggered the worst disruption to the Inquirer in decades.

Continue reading...

Fixing the Market Demand Problem With Generative AI

In the face of economic hardship, companies are engaging in price wars to stay competitive, often at the risk of business survival. Traditional demand-generation marketing strategies are losing effectiveness in the digital era. Generative AI, capable of personalized, scalable customer interactions, may offer a solution to this market crisis.

MEPs to vote on proposed ban on ‘Big Brother’ AI facial recognition on streets


Thursday’s vote in EU parliament seen as key test in formation of world’s first artificial intelligence laws

Moves to ban live “Big Brother” real-time facial recognition technology from being deployed across the streets of the EU or by border officials will be tested in a key vote at the European parliament on Thursday.

The amendment is part of a package of proposals for the world’s first artificial intelligence laws, which could result in firms being fined up to €10m (£8.7m) or removed from trading within the EU for breaches of the rules.

Continue reading...

Sunday, May 14, 2023

‘Design me a chair made from petals!’: The artists pushing the boundaries of AI


From restoring artefacts destroyed by Isis to training robot vacuum cleaners, architects, artists and game developers are discovering the potential – and pitfalls – of the virtual world

A shower of pink petals rains down in slow motion against an ethereal backdrop of minimalist white arches, bathed in the soft focus of a cosmetics advert. The camera pulls back to reveal the petals have clustered together to form a delicate puffy armchair, standing in the centre of a temple-like space, surrounded by a dreamy landscape of fluffy pink trees. It looks like a luxury zen retreat, as conceived by Glossier.

The aesthetic is eerily familiar: these are the pastel tones, tactile textures and ubiquitous arches of Instagram architecture, an amalgamation of design tropes specifically honed for likes. An ode to millennial pink, this computer-rendered scene has been finely tuned to seduce the social media algorithm, calibrated to slide into your feed like a sugary tranquilliser, promising to envelop you in its candy-floss embrace.

What makes it different from countless other such CGI visions that populate the infinite scroll is that this implausible chair now exists in reality. In front of the video, on show in the Museum of Applied Arts in Vienna (MAK), stands the Hortensia chair, a vision of blossomy luxury plucked from the screen and fabricated from thousands of laser-cut pink fabric petals – yours for about £5,000.

It is the work of digital artist Andrés Reisinger, who minted the original digital chair design as an NFT after his images went viral on Instagram in 2018. He was soon approached by collectors asking where they could buy the real thing, so he decided to make it – with the help of product designer Júlia Esqué and furniture brand Moooi – first as a limited edition, and now adapted for serial production. It was the first time that an armchair had been willed into being by likes and shares, a physical product spawned from the dark matter of the algorithm.

Continue reading...

Apple iPhone 14 connects Australians in danger to helplines via satellite


The feature, rolled out to users in the US and UK last year, can send details including location without phone reception to trained specialists

Apple has launched a new emergency feature on all iPhone 14 models in Australia and New Zealand that enables users to message emergency services and alert family and friends if they’re in strife, even when there is no phone reception.

The Emergency SOS feature works by connecting directly to satellites located more than 1,000km from Earth.


Continue reading...

Google, how do I ask your AI the right questions?

An illustration of a woman typing on a keyboard, her face replaced with lines of code.
Live footage of me thinking of what to ask AI bots. | Image: The Verge

A few weeks ago, my spouse and I made a bet. I said there was no way ChatGPT could believably mimic my writing style for a smartwatch review. I’d already asked the bot to do that months ago, and the results were laughable. My spouse bet that they could ask ChatGPT the exact same thing but get a much better result. My problem, they said, was I didn’t know the right queries to ask to get the answer I wanted.

To my chagrin, they were right. ChatGPT wrote much better reviews as me when my spouse did the asking.

That memory flashed through my mind while I was liveblogging Google I/O. This year’s keynote was essentially a two-hour thesis on AI, how it’ll impact Search, and all the ways it could boldly and responsibly make our lives better. A lot of it was neat. But I felt a shiver run down my spine when Google openly acknowledged that it’s hard to ask AI the right questions.

During its demo of Duet AI, a series of tools that will live inside Gmail, Docs, and more, Google showed off a feature called Sidekick that can proactively offer you prompts that change based on the Workspace document you’re working on. In other words, it’s prompting you on how to prompt it by telling you what it can do.

That showed up again later in the keynote when Google demoed its new AI search results, called Search Generative Experience (SGE). SGE takes any question you type into the search bar and generates a mini report, or a “snapshot,” at the top of the page. At the bottom of that snapshot are follow-up questions.

As a person whose job is to ask questions, both demos were unsettling. The queries and prompts Google used on stage look nothing like the questions I type into my search bar. My search queries often read like a toddler talking. (They’re also usually followed by “Reddit” so I get answers from a non-SEO content mill.) Things like “Bald Dennis BlackBerry movie actor name.” When I’m searching for something I wrote about Peloton’s 2022 earnings, I pop in “Site:theverge.com Peloton McCarthy ship metaphors.” Rarely do I search for things like “What should I do in Paris for a weekend?” I don’t even think to ask Google stuff like that.

I’ll admit that when staring at any kind of generative AI, I don’t know what I’m supposed to do. I can watch a zillion demos, and still, the blank window taunts me. It’s like I’m back in second grade and my grumpy teacher just called on me for a question I don’t know the answer to. When I do ask something, the results I get are laughably bad — things that would take me more time to make presentable than if I just did it myself.

On the other hand, my spouse has taken to AI like a fish to water. After our bet, I watched them play around with ChatGPT for a solid hour. What struck me most was how different our prompts and queries were. Mine were short, open-ended, and broad. My spouse left the AI very little room for interpretation. “You have to hand-hold it,” they said. “You have to feed it exactly everything you need.” Their commands and queries are hyper-specific, long, and often include reference links or data sets. But even they have to rephrase prompts and queries over and over again to get exactly what they’re looking for.

An AI snapshot about Bryce Canyon; the SGE snapshots also prompt you on what to ask it next. | Image: Google

This is just ChatGPT. What Google’s pitching goes a step further. Duet AI is meant to pull contextual data from your emails and documents and intuit what you need (which is hilarious since I don’t even know what I need half the time). SGE is designed to answer your questions — even those that don’t have a “right” answer — and then anticipate what you might ask next. For this more intuitive AI to work, programmers have to make it so the AI knows what questions to ask users so that users, in turn, can ask it the right questions. This means that programmers have to know what questions users want answered before they’ve even asked them. It gives me a headache thinking about it.

Not to get too philosophical, but you could say all of life is about figuring out the right questions to ask. For me, the most uncomfortable thing about the AI era is I don’t think any of us know what we really want from AI. Google says it’s whatever it showed on stage at I/O. OpenAI thinks it’s chatbots. Microsoft thinks it’s a really horny chatbot. But whenever I talk to the average person about AI these days, the question everybody wants answered is simple. How will AI change and impact my life?

The problem is nobody, not even the bots, has a good answer for that yet. And I don’t think we’ll get any satisfactory answer until everyone takes the time to rewire their brains to speak with AI more fluently.

Google’s new Magic Editor pushes us toward AI-perfected fakery

A photo of a woman in front of a waterfall
Image: Google

One of the most impressive demos at Google I/O started with a photo of a woman in front of a waterfall. A presenter onstage tapped on the woman, picked her up, and moved her to the other side of the image, with the app automatically filling in the space where she once stood. They then tapped on the overcast sky, and it instantly bloomed into a brighter cloudless blue. In just a matter of seconds, the image had been transformed.

The AI-powered tool, dubbed the Magic Editor, certainly lived up to its name during the demo. It’s the kind of tool that Google has been building toward for years. It already has a couple of AI-powered image editing features in its arsenal, including the Magic Eraser, which lets you quickly remove people or objects from the background of an image. But this type of tool takes things up a notch by letting you alter the contents — and potentially, the meaning — of a photo in much more significant ways.

A photo of a person in front of a waterfall being edited using Magic Editor. GIF: Google
The Magic Editor transforms the photo in seconds.

The tool clearly isn't flawless — and there's still no firm release date for it — but Google's end goal is plain: to make perfecting photos as easy as tapping or dragging something on your screen. The company markets the tool as a way to "make complex edits without pro-level editing tools," letting you use AI to single out and transform a portion of your photo. That includes enhancing the sky, moving and scaling subjects, and removing parts of an image with just a few taps.

Google's Magic Editor attempts to package all the steps it would take to make similar edits in a program like Photoshop into a single tap — or, at least, that's what it looks like from the demo. In Photoshop, for example, you'd typically reach for the Content-Aware Move tool (or another method of your choice) to pick up and move a subject inside an image. Even then, the photo still might not look quite right, which means you'll have to pick up other tools, like the Clone Stamp tool or maybe even the Spot Healing Brush, to fix leftover artifacts or a mismatched background. It's not the most complicated process ever, but as with most professional creative tools, there's a definite learning curve for people who are new to the program.

I’m all for Google making photo editing tools free and more accessible, given that Photoshop and some of the other image editing apps out there are expensive and pretty unintuitive. But putting powerful and incredibly easy-to-use image editing tools into the hands of, well, just about everyone who downloads Google Photos could transform the way we edit — and look at — photos. There have long been discussions about how far a photo can be edited before it’s no longer a photo, and Google’s tools push us closer to a world where we tap on every image to perfect it, reality or not.

Samsung recently brought attention to the power of AI-"enhanced" photos with "Space Zoom," a feature that's supposed to let you capture incredible pictures of the Moon on newer Galaxy devices. In March, a Reddit user tried using Space Zoom on an almost unsalvageable image of the Moon and found that Samsung appeared to add craters and other details that weren't actually there. Not only does this risk creating a "fake" image of the Moon, but it also leaves actual space photographers in a strange place: they spend years mastering the art of capturing the night sky, only for the public to be presented with fakes.

 Image: Google
A sequence of edits with Google’s Magic Editor.

To be fair, a ton of similar photography-enhancing features are already built into smartphone cameras. As my colleague Allison Johnson points out, mobile photography fakes a lot of things, whether it's by applying filters or unblurring a photo, and doctored images are nothing new. But Google's Magic Editor could make a more substantial form of fakery easier and more attractive. In its blog post explaining the tool, Google makes it seem like we're all in search of perfection, noting that the Magic Editor will provide "more control over the final look and feel of your photo" along with the chance to fix a missed opportunity and make a photo look its best.

Call me some type of weird photo purist, but I'm not a fan of editing a photo in a way that would alter my memory of an event. If I were taking pictures at a wedding and the sky was cloudy, I wouldn't think about swapping it for something better. Maybe — just maybe — I might consider moving things around or amping up the sky in a picture I'm posting to social media, but even that seems a little disingenuous. But, again, that's just me. I can still see plenty of people using the Magic Editor to perfect their photos for social media, which feeds into the larger conversation about what exactly we should consider a photo and whether that's something people should be obligated to disclose.

Google calls its Magic Editor “experimental technology” that will become available to “select” Pixel phones later this year before rolling out to everyone else. If Google is already adding AI-powered image editing tools to Photos, it seems like it’s only a matter of time before smartphone makers integrate these one-tap tools, like sky replacement or the ability to move a subject, directly into a phone’s camera software. Sometimes, the beauty of a photo is its imperfection. It just seems like smartphone makers are trying to push us farther and farther away from that idea.

Google’s AI tools embrace the dream of Clippy

Google’s AI tools embrace the dream of Clippy
Clippy on ruled paper.
Microsoft’s Clippy sits atop its paper throne. | Image: Microsoft

The words "it looks like you're writing a letter, would you like some help with that?" didn't appear at any point during Google's recent demo of its AI office suite tools. But as I watched Aparna Pappu, Google's Workspace leader, outline the features onstage at I/O, I was reminded of a certain animated paperclip that another tech giant once hoped would help usher in a new era of office work.

Even Microsoft would acknowledge that Clippy's legacy is not wholly positive, but the virtual assistant is forever associated with a particular period of work — one packed to the brim with laborious emails, clip art, and beige computers with clunking hard drives. Now, work has changed — it's Slack pings, text cursors jostling in a Google Doc, and students who don't know what file systems are — and as generative AI creeps into our professional lives, both Google and Microsoft are betting that this new era of work calls for a new generation of tools to get things done.

Google dedicated roughly 10 minutes of its developer conference keynote to what it now calls "Duet AI for Google Workspace," a collection of AI-infused tools it's building into its productivity apps — Gmail, Docs, Slides, Sheets, etc. Most of the features were previously announced in March, but the demonstration showed them off in more detail. Examples included generating a draft job description in Docs from just a couple of prompts, building a schedule for a dog walking business in Sheets, and even generating images to illustrate a presentation in Slides.

New for the I/O presentation was Sidekick, a feature designed to understand what you’re working on, pull together details from across Google’s different apps, and present you with clear information to use as notes or even incorporate directly into your work.

If Google's Duet is designed to deal with the horror of a blank document, then Sidekick seems to be looking ahead to a future where a blank AI prompt box could instead be the intimidating first hurdle. "What if AI could proactively offer you prompts?" Pappu said as she introduced the new feature. "Even better, what if these prompts were actually contextual and changed based on what you were working on?"

In a live demonstration that followed, the audience was shown how Sidekick could analyze a roughly two-paragraph-long children’s story, provide a summary, and then suggest prompts for continuing it. Clicking on one of these prompts (“What happened to the golden seashell?”) brought up three potential directions for the narrative to go. Clicking “insert” added these as bullet points to the story to act as a reference for the ongoing writing. It could also suggest and then generate an image as an illustration.

Next, Sidekick was shown summarizing a chain of emails. When prompted, it was able to pull out specific details from an associated Sheets spreadsheet and insert them into an emailed response. And finally, on Slides, Sidekick suggested generating speaker notes for the presenter to read from while showing the slides.

The feature looks like a modern twist on Clippy, Microsoft’s old assistant that would spring into action at the mere hint of activity in a Word document to ask if you wanted help with tasks like writing a letter. Google’s Duet is surely in a different league, both in terms of its reading comprehension and the quality of the text that the generative AI spits out. But the basic spirit of Clippy — identifying what you’re trying to do and offering to help — remains.

But perhaps more important is how Sidekick was shown offering this information. In Google’s demonstration, Sidekick is summoned by the user and doesn’t appear until they press its icon. That’s important since one of the things that annoyed people most about Clippy was that it wouldn’t shut the hell up. “These toon-zombies are as insistent on popping up again as Wile E. Coyote,” The New York Times observed in its original review of Office 97.

Though they share some similarities, Clippy and Sidekick belong to two very different eras of computing. Clippy was designed for a time when many people were buying their first home desktop computers and using office software for the first time. New York Magazine cites one Microsoft postmortem that says part of its problem was that the assistant was "optimized for first use" — potentially helpful the first time you saw it but intensely annoying every time thereafter.

Fast forward to 2023, and these tools are now familiar but exhausting in the possibilities they offer. We no longer just sit, type, print, and email but, rather, collaborate across platforms, bring together endless streams of data, and try to produce a coherent output in multimedia splendor.

AI features like Duet and Sidekick (not to mention Microsoft’s competing Copilot feature for Office) aren’t there to teach you the basics of how to write a letter in Google Docs. They’re there because you’ve already written hundreds of letters, and you don’t want to spend your life manually writing hundreds more. They’re not there to show that Slides has a speaker notes feature; they’re there to populate it for you.

Neither Google Workspace's Duet AI nor Microsoft Office's Copilot seems interested in teaching you the basics of its software; they're there to automate the process. The spirit of Clippy lives on, but in a world that's moved on from needing a paperclip to tell you how to write a letter.

Microsoft disabled Clippy by default with the release of Office XP in 2001 and removed the assistant entirely in 2007. In between those points, the philosopher Nick Bostrom outlined his now-famous paperclip maximizer thought experiment, which warned of the existential risk posed by AI even when it's given a supposedly harmless goal (making paperclips). Clippy isn't making a comeback, but its spirit — now animated by AI — lives on. Let's hope it's still harmless.

Saturday 13 May 2023

AI voice synthesising is being hailed as the future of video games – but at what cost?

AI voice synthesising is being hailed as the future of video games – but at what cost?

Tech advances that make it easier to recreate human voices also raise ethical questions about the rights of actors and musicians

When development of the epic open-world PlayStation 4 game Red Dead Redemption 2 began in 2013, it took 2,200 days to record the game's 1,200 voices with 700 voice actors, who recited its 500,000 lines of dialogue.

It was a massive feat that is nearly impossible for any other studio to replicate – let alone a games studio smaller than Rockstar Games.

The much simpler way to keep track of everything

The much simpler way to keep track of everything

Hi, friends! Welcome to Installer No. 55, your guide to the best and Verge-iest stuff in...