Monday, May 15, 2023

Microsoft will pay to capture carbon from burning wood

Art depicts cartoon balloons attached to the tops of four smokestacks.
Illustration by Hugo Herrera / The Verge

Microsoft just backed a big plan to capture carbon dioxide emissions from a wood-burning power plant. Today, the tech giant announced a deal with Danish energy company Ørsted to purchase credits representing 2.76 million metric tons of carbon dioxide captured at Ørsted’s Asnæs Power Station over 11 years.

It’s one of the biggest deals any company has made to date to draw down carbon dioxide emissions, according to a press release from Ørsted. The move is supposed to help Microsoft hit its goal of becoming carbon negative by 2030, the point at which the company is removing more planet-heating carbon dioxide from the atmosphere than it generates through its operations.

But technology to capture carbon dioxide emissions is still nascent, and some environmental groups and researchers are skeptical that the strategy Microsoft just helped to fund can be an effective way to tackle climate change. Without Microsoft’s support, Ørsted wouldn’t have been able to install carbon capture devices at its power plant. “Danish state subsidies and Microsoft’s contract were both necessary to make this project viable,” Ørsted’s announcement says.

With Microsoft’s help, Ørsted was able to nab an even bigger, 20-year contract with the Danish Energy Agency (DEA) to capture CO2 emissions from Asnæs in western Zealand and a second power plant near Copenhagen. After the carbon capture devices are installed, they should be able to capture a total of 430,000 metric tons of CO2 annually by 2026. For comparison, that’s roughly equivalent to how much CO2 a single gas-fired power plant emits in a year.

These power plants, however, burn wood chips and straw, fuels also known as “biomass.” And burning biomass, which can include agricultural waste and other plant material, as a sustainable energy source is controversial. The EU counts biomass as its biggest source of renewable energy, but a lot of the wood that’s burned has come from trees cut down in forests across Europe and the southeast US. Ørsted says the wood chips burned at its Asnæs Power Station “come from sustainably managed production forests and consists of residues from trimming or crooked trees.”

How is burning trees supposed to be good for the environment? After all, wood still releases CO2 when it’s burned. The argument is that trees or crops used to make biomass naturally take in and store CO2 when they’re alive. So if you replant the trees or plants, you can potentially have a fuel that’s carbon neutral.

Ørsted is going one step further by adding technologies that can filter CO2 out of its power plants’ smokestacks, keeping it from wafting up into the atmosphere. By doing that, it believes its biomass-burning power plants will become carbon negative. The company plans to bury the excess carbon dioxide it captures under the North Sea and sell Microsoft credits representing each ton of CO2. Microsoft can then use those credits to claim that it has canceled out some of its own greenhouse gas pollution.

If that all sounds like a tricky balancing act, it is. Previous research has found that burning woody biomass can create more CO2 emissions than what’s captured. That’s because only capturing smokestack emissions fails to account for all the pollution that might come from cutting down the trees and transporting the wood. Plus, it can take a long time for trees or plants to grow mature enough for people to rely on them to draw down a significant amount of CO2.

“We think that the details are crucial,” Phillip Goodman, carbon removal portfolio director at Microsoft, says in an email to The Verge. An effective carbon capture project would need to use biomass “harvested from appropriate areas” and account for all of its “process” emissions, Goodman says. Microsoft declined to say how much it would pay Ørsted for carbon removal credits for this particular project.

Microsoft has been making some bold bets on climate tech and clean energy technologies lately. Last week, it announced a plan to purchase electricity from a forthcoming nuclear fusion power plant — even though some experts don’t think such a cutting-edge power plant could realistically be developed for several more decades. Microsoft has also paid a Swiss company called Climeworks to filter CO2 out of the air.

How to hardwire your home without ethernet in the walls

An illustration of a home surrounded by smart tech.
Illustration by Hugo Herrera for The Verge

Listen. I don’t have anything against Wi-Fi. High-speed wireless access to the internet is darn near miraculous, and there are a lot of situations where it doesn’t make any sense to use a wired connection. Can you imagine if your phone was connected to the wall?

But since we’re celebrating the 50th anniversary of ethernet, I’d like to make a pitch for the humble, hardworking wired connection.

A wired connection is more stable than Wi-Fi, it’s almost always faster, and it has much lower latency. It’s just plain better to send a signal through a set of copper wires than to turn it into radio waves and blast it through walls, furniture, appliances, and people. (Wi-Fi isn’t bad for people; people are bad for Wi-Fi.) And every device you get off of your Wi-Fi will also help the devices still on it. You should hardwire every device you can: computers, gaming consoles, TVs, and especially your Wi-Fi access points (home servers and network-attached storage, too, but if you have those, you don’t need me to tell you about the advantages of wires).
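The latency gap is easy to check for yourself. Here’s a minimal Python sketch (my own illustration, not from the article) that times TCP connection setup to a host on your network, since the handshake takes a full round trip; run it once over Wi-Fi and once over a cable against the same host, and compare:

```python
import socket
import time

def tcp_rtt_ms(host: str, port: int, samples: int = 5) -> float:
    """Average time in ms to complete a TCP handshake with host:port.

    Connection setup takes a full round trip, so this is a rough proxy
    for link latency on the path to that host.
    """
    total = 0.0
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2):
            pass  # connected; we only care how long it took
        total += time.perf_counter() - start
    return total / samples * 1000

# Example (the IP and port are placeholders -- use your router's LAN
# address and a port it actually listens on, like its admin page):
#   tcp_rtt_ms("192.168.1.1", 80)
```

It’s no substitute for a proper throughput test with something like iperf3, but it makes the wired-versus-wireless difference visible in one number.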

Even a little bit of wiring can have a dramatic effect on your Wi-Fi situation and might let you avoid having to spring for a mesh networking system — or, worse, a Wi-Fi extender.

Here are the two best things wired networking lets you do.

Move the router: The best place for a Wi-Fi router is in the center of the home, but unless your home is already wired for ethernet, your internet hookup is probably along an exterior wall, somewhere that was convenient for the ISP’s installer but not necessarily for you. Running a wired connection between your ISP modem/gateway and your router lets you put the Wi-Fi where it needs to be while keeping the modem where it needs to be. Everybody wins.

By way of example: My fiber gateway — where the fiber optic signal from my ISP comes in — is in my garage. My house is about a decade old and wired for ethernet, but the connection between the gateway and the networking enclosure in my laundry room is indirect and full of splices due to some mystifying decisions by earlier ISP installers and occupants, so my internet connection kept dropping. Eventually, I’ll run a proper direct in-wall ethernet connection that bypasses that mess, but in the meantime, I’ve run a 50-foot patch cable out of my garage door, into my laundry room door, and into the wireless router because the alternative is putting the router in the garage, where it would slowly cook itself, and I’d still have to run a patch cable to hook up the rest of my network.

Replacing mesh backhaul: The whole reason mesh networking kits became popular is that they give you a decent Wi-Fi connection without wires, and here I am suggesting that you put the wires right back in. Hear me out.

Mesh networking kits like Eero, Nest Pro, and Orbi use Wi-Fi to communicate between the router and the satellite nodes as well as with the client devices. Usually, they dedicate one Wi-Fi band to backhaul — the communication between the mesh nodes — and one or more bands for devices. But each node has to be close enough to the next to have good reception on the backhaul band, and you have that many more Wi-Fi signals in your airspace. Replacing even one backhaul from your main router to a satellite node with a wired connection — if your mesh system supports it — dramatically improves the connection, especially for devices further away from the main router. You can put your Wi-Fi access points further away from each other, have better communication between them, and use fewer of them overall. (This is how I fixed my in-laws’ Eero installation last Christmas, to mild applause.)

Some houses and apartment buildings, especially those built or renovated in the last decade, are fortunate to have ethernet in the walls already: some in just one or two places, others in almost every room. If that’s an option for you and you’re not already taking advantage of it, you don’t need much to get started beyond a networking switch wherever those ethernet runs all meet and some cables to hook things into your wall jacks. But most people don’t have ethernet in the walls, and it’s not trivial or cheap to get it there, even if you have the option of poking a bunch of holes in the wall.

Fortunately, there are plenty of alternatives. In order from cheapest to best to… least good, there’s buying a really long ethernet cable, using your existing coax wiring, and finally, powerline networking.

The cheapest option: just get a really long cable

Here’s my pitch: get a really long ethernet cable. A hundred-foot cable from a reputable company costs about $25. Hook up the things you want to hook up. Maybe this lets you put your Wi-Fi router at the center of the house. Maybe it lets you hardwire your gaming PC and stop lagging out of multiplayer matches. Maybe it lets you use wired backhaul for one of your mesh networking nodes, or maybe you want to wire up your entire entertainment center with a simple networking switch. This is a good option for renters and people who don’t have ethernet or cable wiring in the walls and don’t want to (or can’t) put it there.

Now you’ve spent $25 or $50. If you’re happy with the performance but not the aesthetics of having a hundred-foot ethernet cable lying around, do what you can to pretty it up a bit. Tuck it under the baseboards or the edge of the carpet if you can, or use a peel-and-stick cable raceway. Is it elegant? Not really. Does it work? Yes.

The actual best option: use your cable wiring

shot of a white MoCA adapter labelled “goCoax” with ethernet and power cables emerging from the top and a coax cable coming from the bottom. The adapter is dangling in midair in front of a bookshelf, with a power outlet visible in the background. Image: Nilay Patel / The Verge
MoCA adapters like this one convert between Ethernet and coax wiring, so you can use your existing cable to extend your home network.

Most older homes have coax in at least a room or two, thanks to generations of satellite TV, cable TV, and cable internet installations. If your home or apartment was built in the 1990s or later, you may even have coax cable hookups preinstalled in most rooms. If you have existing cable wiring, you can use MoCA adapters (that’s Multimedia over Coax Alliance) to convert ethernet to coax and back without the finickiness of Wi-Fi or powerline. Depending on your exact setup, it might not be the easiest or cheapest option, but it’s as good as in-wall ethernet, and you’re a lot more likely to have it already there.

The current version, MoCA 2.5, can support transfer speeds of up to 2.5Gbps. A basic MoCA setup requires an adapter at each end. You should look for MoCA 2.5 adapters with 2.5GbE ethernet ports. Most people’s internet connections aren’t that fast yet, but 2.5GbE ports are becoming more common on desktop and network devices, and there’s no reason to bottleneck yourself in the future by getting MoCA adapters with 1Gbps ports when 2.5GbE options aren’t much more expensive.
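To put the port bottleneck in numbers, here’s a back-of-the-envelope sketch (mine, not the article’s; the ~94% efficiency figure is an assumed ballpark for protocol overhead, not a measurement):

```python
def transfer_seconds(size_gb: float, link_gbps: float,
                     efficiency: float = 0.94) -> float:
    """Rough time to move size_gb gigabytes over a link_gbps link.

    `efficiency` discounts protocol overhead; ~0.94 is a ballpark for
    TCP over ethernet, not a spec.
    """
    bits = size_gb * 8 * 1e9
    return bits / (link_gbps * 1e9 * efficiency)

# Moving a 50GB backup to a NAS: roughly 7 minutes through a 1Gbps
# port vs. roughly 3 minutes through a 2.5GbE port. The slowest port
# in the path sets the ceiling, no matter how fast the MoCA segment is.
```

The ratio is what matters: a 1Gbps port makes every local transfer 2.5x slower than the MoCA 2.5 link could carry.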

To get started with MoCA, you need a coax port near your router. Get a MoCA adapter, and connect it to one of your router’s LAN ports with an ethernet cable. Plug the coax side into the nearest coax port. The other adapter connects to the coax port in the wall at your destination; then, you can connect the ethernet end to your device or a networking switch to wire up multiple devices. You can use multiple endpoint adapters with one router-side adapter, and if your router has a coax port on it — like most FiOS gateways — it already has MoCA built in, and you just need the endpoint adapters. The Verge’s editor-in-chief, Nilay Patel, uses MoCA adapters to run the backhaul for his Eero network.

Of course, that assumes a direct cable connection between the router end and the device end, which isn’t at all guaranteed. I’ve seen houses with three or four non-intersecting coax cable networks laid down by various cable and satellite installers over the past few decades. You also need to make sure there aren’t too many splitters in the path between them, since each one lowers the signal strength. And if you’re also using the coax for your TV or incoming internet connection, you’ll need a PoE (point-of-entry) filter, which keeps the MoCA network from interfering with other signals on the line. This may require some cable archaeology and pruning disused splitters and cables.
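For a sense of why pruning splitters matters, here’s an illustrative bit of arithmetic (my sketch; the per-splitter and per-length loss figures are commonly cited ballparks that vary by part and frequency, not specs):

```python
def coax_path_loss_db(two_way_splitters: int,
                      cable_hundreds_ft: float = 0.0,
                      splitter_loss_db: float = 3.5,
                      cable_loss_db_per_100ft: float = 6.0) -> float:
    """Cumulative attenuation along a coax path, in dB.

    Each 2-way splitter the signal passes through costs roughly 3.5dB;
    RG6 coax costs very roughly 6dB per 100ft at MoCA's ~1GHz+
    frequencies. Both defaults are assumptions for illustration.
    """
    return (two_way_splitters * splitter_loss_db
            + cable_hundreds_ft * cable_loss_db_per_100ft)

# Three daisy-chained 2-way splitters plus 200ft of cable:
#   3 * 3.5 + 2 * 6.0 = 22.5dB of loss before the signal arrives.
```

MoCA is designed to tolerate substantial attenuation, but every disused splitter you cut out of the path buys back a few dB for free.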

The most useful and up-to-date explanation on MoCA that I’ve found is the one Dong Ngo just published in April, which includes helpful info on network layout, splitters, and PoE filters.

It might work great: powerline networking

Close-up of a TP-Link AV2000 powerline networking adapter plugged into the lower receptacle of a 2-gang electrical wall outlet. A power cord is plugged into the upper receptacle. Image: Richard Lawler / The Verge
A powerline adapter sends your Ethernet signal through your existing electrical wiring. It can work well, though it depends on the age of your electrical wiring and other factors.

Powerline networking lets you use your existing electrical wiring to extend your network. It makes a lot of sense in theory; most people have electrical outlets in every room.

But in practice, its performance depends a lot on the age of your wiring and how each outlet is connected to your electrical panel. Richard Lawler, The Verge’s senior news editor, uses an AV2000 powerline networking kit in his home. He says he gets 700 to 1000 Mbps (on a Gigabit network) in some rooms and 300 to 500 Mbps in others. That’s better than you’ll get with a lot of Wi-Fi routers at range, though not better than MoCA or a long ethernet cable.

Wirecutter’s powerline article — which still has a GIF of my floor lamp going disco mode during my 2015 testing — is a good rundown of the powerline options. Joel also knows his stuff.

In addition to powerline kits, Wirecutter also tested MoCA adapters, and you can see in that article just how thoroughly the MoCA kits beat the powerline ones. Just saying. You’re better off with MoCA if you have it. A long ethernet cable beats powerline if you can manage it, but if that’s not an option, powerline is worth a shot.

TP-Link already has some Wi-Fi 7 routers for you to buy

Product shots of the TP-Link Archer BE800 (left) and a three-pack of the Deco BE85 (right)
Do you want your internet in tube form or whatever that other thing is? | Image: TP-Link

The Wi-Fi 7 spec isn’t totally finished yet, but you know TP-Link won’t let a pesky little thing like pending certification stop it, and why should it? Netgear and Asus aren’t. And so, the company is launching its first-ever Wi-Fi 7 routers — a mesh system called the Deco BE85, and a powerful single access point router that looks like a set piece designed for an old sci-fi show, the TP-Link Archer BE800.

Wi-Fi 7 means a laptop or phone with a good Wi-Fi 6E or Wi-Fi 7 network card can probably expect over a gigabit of wireless throughput, possibly far more, particularly if your connection from your ISP can keep up, such as the multi-gigabit plans offered in some places by the likes of Google Fiber, AT&T, or Comcast.

TP-Link sent us some of its test data, along with the layout of the house where it tested the mesh Deco BE85 system using a OnePlus 11 5G phone as the Wi-Fi 7 client. If accurate, the numbers look enticing: the phone reached well over 3,000Mbps of throughput a few feet from the main router and only dropped to a little over 300Mbps at the farthest point. That’s incredibly fast and not something I’ve ever been close to seeing; most Wi-Fi 6E routers top out a bit over 1,000Mbps at close range. I’d like to see how that bears out in real-world testing.

Despite the spec not being officially completed, these should be some impressive routers. Just looking at their hardware features, both get 10Gbps and 2.5Gbps ports, single USB 3.0 ports, and an SFP+ WAN/LAN port.

Both of TP-Link’s new routers are technically capable of up to 19Gbps, which is consistent with speeds claimed by Qualcomm regarding its new Wi-Fi 7 chipsets that likely live in them. Naturally, that’s spread across all bands and individual data links, so you shouldn’t expect to see anywhere near that throughput. Both routers will support free and pro versions of TP-Link HomeShield, the company’s network security package.

A Deco BE85 mesh node unit sits on a wooden surface next to a small bowl. Image: TP-Link
A lonely, unassuming Deco BE85 blasts Wi-Fi all over the place.

Looking closer, the Deco BE85 uses eight antennas per mesh node, and TP-Link says a three-pack should cover about 9,600 square feet. Of course, in the real world, with walls and furniture and sacks of sentient meat walking around, that may not quite line up. The company also says it will use AI to control roaming, shoving your devices around to the best node as you wander around your home. The best mesh systems already do this without AI and usually do a decent job of it, so we’ll see how and whether TP-Link has improved things there.

Each node has two 2.5Gbps ports and two 10Gbps ports, each of which can serve as WAN or LAN — meaning they’ll autoconfigure themselves depending on whether they’re connected to your modem or to the rest of your network.

The Deco BE85 will also support wired backhaul, meaning you can run an ethernet cable between the devices so that each mesh node gets your full bandwidth to share with devices at every part of your house, rather than what it could otherwise pick up wirelessly — at least, that’s traditionally how it works. With multi-link aggregation for wireless backhaul, it’s likely to be faster than gigabit. So, if your house is already wired with older cabling and you don’t feel like running new ethernet drops, you might be better off going wireless with enough Deco BE85 nodes bopping around.

An Archer BE800 sitting on a desk, next to a laptop, plant, lamp, and a jar with some sticks in it. Image: TP-Link
Always love a screen on a router.

As for the chunky TP-Link Archer BE800 (which is named “beboo” in my headcanon), the company’s new single access point looks like a rejected Xbox design, with its pseudo-x-shaped outer shell and black front grate. It gets a dot matrix LED display, and TP-Link says it has over 3,000 graphics, showing you the weather, the time, and, for some reason, emoji. I don’t know who wants their router to emote at them, but it looks like y’all finally get that.

Mostly, its wireless capabilities look on par with the BE85 kit: it has the same number of bands, antennas, and potential throughput, and though TP-Link didn’t share the coverage area, I’d bet on somewhere around 2,500-3,000 square feet. The company also mentioned the ability to create a distinct network for your smart home devices, though it’s unclear if that goes for the BE85 mesh system as well.

Despite its appearance, TP-Link isn’t calling this a gaming router — that distinction is reserved for the Archer GE800, which has been announced but not yet released.

The TP-Link BE85 is available now at Best Buy in a three-pack configuration, and it ain’t cheap, at $1,500. Same goes for the $600 TP-Link BE800, also at Best Buy. Both will also be available for preorder at Amazon.

Should you run out and buy one of these routers today? For most people, of course not. The spec won’t be finalized until sometime later this year at the earliest, and as that time approaches, other companies will start announcing routers supporting the new standard. These are both pricey devices, and for at least the next year or two, you won’t see many products that can take advantage of anything Wi-Fi 7 brings to the table. Still, if you just like having the fancy new thing, they’ll be backward-compatible with everything you already own, so it’s not like they won’t work.

Philadelphia Inquirer severely disrupted by cyber-attack

Attack has caused worst disruption in decades to city’s paper of record and it is unclear when normal editorial services will resume

The Philadelphia Inquirer is scrambling to restore its systems and resume normal operations after it became the latest major media organization to be targeted in a cyber-attack.

With no regular Sunday newspaper and online stories also facing some delays, the cyber-attack has triggered the worst disruption to the Inquirer in decades.

Fixing the Market Demand Problem With Generative AI

In the face of economic hardship, companies are engaging in price wars to stay competitive, often at the risk of business survival. Traditional demand-generation marketing strategies are losing effectiveness in the digital era. Generative AI, capable of personalized, scalable customer interactions, may offer a solution to this market crisis.

MEPs to vote on proposed ban on ‘Big Brother’ AI facial recognition on streets

Thursday’s vote in EU parliament seen as key test in formation of world’s first artificial intelligence laws

Moves to ban live, “Big Brother”-style, real-time facial recognition technology from being deployed on streets across the EU or by border officials will be tested in a key vote at the European parliament on Thursday.

The amendment is part of a package of proposals for the world’s first artificial intelligence laws, which could result in firms being fined up to €10m (£8.7m) or removed from trading within the EU for breaches of the rules.

Sunday, May 14, 2023

‘Design me a chair made from petals!’: The artists pushing the boundaries of AI

From restoring artefacts destroyed by Isis to training robot vacuum cleaners, architects, artists and game developers are discovering the potential – and pitfalls – of the virtual world

A shower of pink petals rains down in slow motion against an ethereal backdrop of minimalist white arches, bathed in the soft focus of a cosmetics advert. The camera pulls back to reveal the petals have clustered together to form a delicate puffy armchair, standing in the centre of a temple-like space, surrounded by a dreamy landscape of fluffy pink trees. It looks like a luxury zen retreat, as conceived by Glossier.

The aesthetic is eerily familiar: these are the pastel tones, tactile textures and ubiquitous arches of Instagram architecture, an amalgamation of design tropes specifically honed for likes. An ode to millennial pink, this computer-rendered scene has been finely tuned to seduce the social media algorithm, calibrated to slide into your feed like a sugary tranquilliser, promising to envelop you in its candy-floss embrace.

What makes it different from countless other such CGI visions that populate the infinite scroll is that this implausible chair now exists in reality. In front of the video, on show in the Museum of Applied Arts in Vienna (MAK), stands the Hortensia chair, a vision of blossomy luxury plucked from the screen and fabricated from thousands of laser-cut pink fabric petals – yours for about £5,000.

It is the work of digital artist Andrés Reisinger, who minted the original digital chair design as an NFT after his images went viral on Instagram in 2018. He was soon approached by collectors asking where they could buy the real thing, so he decided to make it – with the help of product designer Júlia Esqué and furniture brand Moooi – first as a limited edition, and now adapted for serial production. It was the first time that an armchair had been willed into being by likes and shares, a physical product spawned from the dark matter of the algorithm.

Apple iPhone 14 connects Australians in danger to helplines via satellite

The feature, rolled out to users in the US and UK last year, can send details including location without phone reception to trained specialists

Apple has launched a new emergency feature on all iPhone 14 models in Australia and New Zealand that enables users to message emergency services and alert family and friends if they’re in strife, even when there is no phone reception.

The Emergency SOS feature works by connecting directly to satellites located more than 1,000km from Earth.

Google, how do I ask your AI the right questions?

An illustration of a woman typing on a keyboard, her face replaced with lines of code.
Live footage of me thinking of what to ask AI bots. | Image: The Verge

A few weeks ago, my spouse and I made a bet. I said there was no way ChatGPT could believably mimic my writing style for a smartwatch review. I’d already asked the bot to do that months ago, and the results were laughable. My spouse bet that they could ask ChatGPT the exact same thing but get a much better result. My problem, they said, was I didn’t know the right queries to ask to get the answer I wanted.

To my chagrin, they were right. ChatGPT wrote much better reviews as me when my spouse did the asking.

That memory flashed through my mind while liveblogging Google I/O. This year’s keynote was essentially a two-hour thesis on AI, how it’ll impact Search, and all the ways it could boldly and responsibly make our lives better. A lot of it was neat. But I felt a shiver run down my spine when Google openly acknowledged that it’s hard to ask AI the right questions.

During its demo of Duet AI, a series of tools that will live inside Gmail, Docs, and more, Google showed off a feature called Sidekick that can proactively offer you prompts that change based on the Workspace document you’re working on. In other words, it’s prompting you on how to prompt it by telling you what it can do.

That showed up again later in the keynote when Google demoed its new AI search results, called Search Generative Experience (SGE). SGE takes any question you type into the search bar and generates a mini report, or a “snapshot,” at the top of the page. At the bottom of that snapshot are follow-up questions.

As a person whose job is to ask questions, both demos were unsettling. The queries and prompts Google used on stage look nothing like the questions I type into my search bar. My search queries often read like a toddler talking. (They’re also usually followed by “Reddit” so I get answers from a non-SEO content mill.) Things like “Bald Dennis BlackBerry movie actor name.” When I’m searching for something I wrote about Peloton’s 2022 earnings, I pop in “Site:theverge.com Peloton McCarthy ship metaphors.” Rarely do I search for things like “What should I do in Paris for a weekend?” I don’t even think to ask Google stuff like that.

I’ll admit that when staring at any kind of generative AI, I don’t know what I’m supposed to do. I can watch a zillion demos, and still, the blank window taunts me. It’s like I’m back in second grade and my grumpy teacher just called on me for a question I don’t know the answer to. When I do ask something, the results I get are laughably bad — things that would take me more time to make presentable than if I just did it myself.

On the other hand, my spouse has taken to AI like a fish to water. After our bet, I watched them play around with ChatGPT for a solid hour. What struck me most was how different our prompts and queries were. Mine were short, open-ended, and broad. My spouse left the AI very little room for interpretation. “You have to hand-hold it,” they said. “You have to feed it exactly everything you need.” Their commands and queries are hyper-specific, long, and often include reference links or data sets. But even they have to rephrase prompts and queries over and over again to get exactly what they’re looking for.

A screenshot of an AI snapshot about Bryce Canyon Image: Google
The SGE snapshots also prompt you on what to ask it next.

This is just ChatGPT. What Google’s pitching goes a step further. Duet AI is meant to pull contextual data from your emails and documents and intuit what you need (which is hilarious since I don’t even know what I need half the time). SGE is designed to answer your questions — even those that don’t have a “right” answer — and then anticipate what you might ask next. For this more intuitive AI to work, programmers have to make it so the AI knows what questions to ask users so that users, in turn, can ask it the right questions. This means that programmers have to know what questions users want answered before they’ve even asked them. It gives me a headache thinking about it.

Not to get too philosophical, but you could say all of life is about figuring out the right questions to ask. For me, the most uncomfortable thing about the AI era is I don’t think any of us know what we really want from AI. Google says it’s whatever it showed on stage at I/O. OpenAI thinks it’s chatbots. Microsoft thinks it’s a really horny chatbot. But whenever I talk to the average person about AI these days, the question everybody wants answered is simple. How will AI change and impact my life?

The problem is nobody, not even the bots, has a good answer for that yet. And I don’t think we’ll get any satisfactory answer until everyone takes the time to rewire their brains to speak with AI more fluently.

Google’s new Magic Editor pushes us toward AI-perfected fakery

A photo of a woman in front of a waterfall
Image: Google

One of the most impressive demos at Google I/O started with a photo of a woman in front of a waterfall. A presenter onstage tapped on the woman, picked her up, and moved her to the other side of the image, with the app automatically filling in the space where she once stood. They then tapped on the overcast sky, and it instantly bloomed into a brighter cloudless blue. In just a matter of seconds, the image had been transformed.

The AI-powered tool, dubbed the Magic Editor, certainly lived up to its name during the demo. It’s the kind of tool that Google has been building toward for years. It already has a couple of AI-powered image editing features in its arsenal, including the Magic Eraser, which lets you quickly remove people or objects from the background of an image. But this type of tool takes things up a notch by letting you alter the contents — and potentially, the meaning — of a photo in much more significant ways.

A photo of a person in front of a waterfall being edited using Magic Editor. GIF: Google
The Magic Editor transforms the photo in seconds.

While it’s clear that this tool isn’t flawless — and there remains no firm release date for it — Google’s end goal is clear: to make perfecting photos as easy as just tapping or dragging something on your screen. The company markets the tool as a way to “make complex edits without pro-level editing tools,” allowing you to leverage the power of AI to single out and transform a portion of your photo. That includes the ability to enhance the sky, move and scale subjects, as well as remove parts of an image with just a few taps.

Google’s Magic Editor attempts to package all the steps that it would take to make similar edits in a program like Photoshop into a single tap — or, at least, that’s what it looks like from the demo. In Photoshop, for example, you might use the Content-Aware Move tool (or another method of your choice) to pick up and move a subject inside an image. Even then, the photo still might not look quite right, which means you’ll have to pick up other tools, like the Clone Stamp tool or maybe even the Spot Healing Brush, to fix any leftover artifacts or a mismatched background. It’s not the most complicated process ever, but as with most professional creative tools, there’s a definite learning curve for people who are new to the program.
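To make the "automatically filling in the space" step a little more concrete, here is a deliberately naive sketch of inpainting: pixels under the mask are repeatedly replaced with the average of their already-known neighbors until the hole is filled. This is a toy illustration only; the `inpaint` function below is invented for this example, and the tools from Google and Adobe rely on far more sophisticated generative models.

```python
# Toy inpainting: fill "removed" pixels with the average of their known
# neighbors, repeating until every masked pixel has been assigned a value.

def inpaint(image, mask):
    """image: 2D list of grayscale values; mask: 2D list, True = pixel removed."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    unknown = {(y, x) for y in range(h) for x in range(w) if mask[y][x]}
    while unknown:
        progress = set()
        for y, x in unknown:
            # Gather values from neighbors that are already known.
            neighbors = [out[ny][nx]
                         for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                         if 0 <= ny < h and 0 <= nx < w and (ny, nx) not in unknown]
            if neighbors:
                out[y][x] = sum(neighbors) / len(neighbors)
                progress.add((y, x))
        unknown -= progress
        if not progress:  # fully masked image: nothing to propagate from
            break
    return out
```

Removing a subject then amounts to "mask the subject, inpaint the hole"; real products layer learned texture synthesis on top of this basic propagation idea.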

I’m all for Google making photo editing tools free and more accessible, given that Photoshop and some of the other image editing apps out there are expensive and pretty unintuitive. But putting powerful and incredibly easy-to-use image editing tools into the hands of, well, just about everyone who downloads Google Photos could transform the way we edit — and look at — photos. There have long been discussions about how far a photo can be edited before it’s no longer a photo, and Google’s tools push us closer to a world where we tap on every image to perfect it, reality or not.

Samsung recently brought attention to the power of AI-“enhanced” photos with “Space Zoom,” a feature that’s supposed to let you capture incredible pictures of the Moon on newer Galaxy devices. In March, a Reddit user tried using Space Zoom on an almost unsalvageable image of the Moon and found that Samsung appeared to add craters and other patches that weren’t actually there. Not only does this run the risk of creating a “fake” image of the Moon, but it also leaves actual space photographers in a strange place: they spend years mastering the art of capturing the night sky, only for the public to be presented with fakes.

Image: Google
A sequence of edits with Google’s Magic Editor.

To be fair, there are a ton of similar photography-enhancing features built into smartphone cameras. As my colleague Allison Johnson points out, mobile photography already fakes a lot of things, whether it’s by applying filters or unblurring a photo, and doctored images are nothing new. But Google’s Magic Editor could make a more substantial form of fakery easier and more attractive. In its blog post explaining the tool, Google makes it seem like we’re all in search of perfection, noting that the Magic Editor will provide “more control over the final look and feel of your photo” while letting you fix the missed moments that keep a photo from looking its best.

Call me some type of weird photo purist, but I’m not a fan of editing a photo in a way that would alter my memory of an event. If I were taking a picture of a wedding and the sky was cloudy, I wouldn’t think about swapping it for something better. Maybe — just maybe — I might consider moving things around or amping up the sky on a picture I’m posting to social media, but even that seems a little disingenuous. But, again, that’s just me. I could still see plenty of people using the Magic Editor to perfect their photos for social media, which adds to the larger conversation about what exactly we should consider a photo and whether that’s something people should be obligated to disclose.

Google calls its Magic Editor “experimental technology” that will become available to “select” Pixel phones later this year before rolling out to everyone else. If Google is already adding AI-powered image editing tools to Photos, it seems like it’s only a matter of time before smartphone makers integrate these one-tap tools, like sky replacement or the ability to move a subject, directly into a phone’s camera software. Sometimes, the beauty of a photo is its imperfection. It just seems like smartphone makers are trying to push us farther and farther away from that idea.

Google’s AI tools embrace the dream of Clippy

Clippy on ruled paper.
Microsoft’s Clippy sits atop its paper throne. | Image: Microsoft

The words “it looks like you’re writing a letter, would you like some help with that?” didn’t appear at any point during Google’s recent demo of its AI office suite tools. But as I watched Aparna Pappu, Google’s Workspace leader, outline the feature onstage at I/O, I was reminded of a certain animated paperclip that another tech giant once hoped would help usher in a new era of office work.

Even Microsoft would acknowledge that Clippy’s legacy is not wholly positive, but the virtual assistant is forever associated with a particular period of work — one packed to the brim with laborious emails, clip art, and beige computers with clunking hard drives. Now, work has changed — it’s Slack pings, text cursors jostling in a Google Doc, and students who don’t know what file systems are — and as generative AI creeps into our professional lives, both Google and Microsoft are betting it calls for a new era of tools to get things done.

Google dedicated roughly 10 minutes of its developer conference keynote to what it now calls “Duet AI for Google Workspace,” a collection of AI-infused tools it’s building into its productivity apps — Gmail, Docs, Slides, Sheets, etc. Most of the features were previously announced in March, but the demonstration showed them off in more detail. Examples included being able to generate a draft job description in Docs from just a couple of prompts, building a schedule for a dog walking business in Sheets, and even generating images to illustrate a presentation in Slides.

New for the I/O presentation was Sidekick, a feature designed to understand what you’re working on, pull together details from across Google’s different apps, and present you with clear information to use as notes or even incorporate directly into your work.

If Google’s Duet is designed to deal with the horror of a blank document, then Sidekick seems to be looking ahead to a future where a blank AI prompt box could instead be the intimidating first hurdle. “What if AI could proactively offer you prompts?” Pappu said as she introduced the new feature. “Even better, what if these prompts were actually contextual and changed based on what you were working on?”

In a live demonstration that followed, the audience was shown how Sidekick could analyze a roughly two-paragraph-long children’s story, provide a summary, and then suggest prompts for continuing it. Clicking on one of these prompts (“What happened to the golden seashell?”) brought up three potential directions for the narrative to go. Clicking “insert” added these as bullet points to the story to act as a reference for the ongoing writing. It could also suggest and then generate an image as an illustration.

Next, Sidekick was shown summarizing a chain of emails. When prompted, it was able to pull out specific details from an associated Sheets spreadsheet and insert them into an emailed response. And finally, on Slides, Sidekick suggested generating speaker notes for the presenter to read from while showing the slides.

The feature looks like a modern twist on Clippy, Microsoft’s old assistant that would spring into action at the mere hint of activity in a Word document to ask if you wanted help with tasks like writing a letter. Google’s Duet is surely in a different league, both in terms of its reading comprehension and the quality of the text that the generative AI spits out. But the basic spirit of Clippy — identifying what you’re trying to do and offering to help — remains.

But perhaps more important is how Sidekick was shown offering this information. In Google’s demonstration, Sidekick is summoned by the user and doesn’t appear until they press its icon. That’s important since one of the things that annoyed people most about Clippy was that it wouldn’t shut the hell up. “These toon-zombies are as insistent on popping up again as Wile E. Coyote,” The New York Times observed in its original review of Office 97.

Though they share some similarities, Clippy and Sidekick belong to two very different eras of computing. Clippy was designed for an era where many people were buying their first desktop computers for the home and using office software for the first time. New York Magazine cites one Microsoft postmortem that says part of its problem was that the assistant was “optimized for first use” — potentially helpful the first time you saw it but intensely annoying every time thereafter.

Fast forward to 2023, and these tools are now familiar but exhausting in the possibilities they offer. We no longer just sit, type, print, and email but, rather, collaborate across platforms, bring together endless streams of data, and try to produce a coherent output in multimedia splendor.

AI features like Duet and Sidekick (not to mention Microsoft’s competing Copilot feature for Office) aren’t there to teach you the basics of how to write a letter in Google Docs. They’re there because you’ve already written hundreds of letters, and you don’t want to spend your life manually writing hundreds more. They’re not there to show that Slides has a speaker notes feature; they’re there to populate it for you.

Neither Google Workspace’s Duet AI nor Microsoft Office’s Copilot seems interested in teaching you the basics of its software; they’re there to automate the process. The spirit of Clippy lives on, but in a world that’s moved on from needing a paperclip to tell you how to write a letter.

Microsoft disabled Clippy by default with the release of Office XP in 2001 and removed the assistant entirely in 2007. In between these points, the philosopher Nick Bostrom outlined his now famous paperclip maximizer thought experiment, which warned of the existential risk posed by AI even if given a supposedly harmless goal (making paperclips). Clippy isn’t making a comeback, but its spirit — now animated by AI — lives on. Let’s hope it’s still harmless.

samedi 13 mai 2023

AI voice synthesising is being hailed as the future of video games – but at what cost?


Tech advances that make it easier to recreate human voices also raise ethical questions about the rights of actors and musicians

When the epic open-world PlayStation 4 game Red Dead Redemption 2 was in development in 2013, it took 2,200 days to record the game’s 1,200 voices, with 700 voice actors reciting its 500,000 lines of dialogue.

It was a massive feat that is nearly impossible for any other studio to replicate – let alone a games studio smaller than Rockstar Games.


Google’s AI pitch is a recipe for email hell

Google’s Cathy Edwards onstage at Google I/O 2023.
Screenshot taken from the Google I/O 2023 keynote replay

Of all the emotions I expected out of this week’s Google I/O keynote, “dread” was not on my short list. Google has spent a couple of decades creating products that make my life easier. This week, it demonstrated the exact opposite: a way to make communicating with other people worse.

This year’s Google I/O was focused on artificial intelligence, and one of its centerpieces was prompt-based text generation, particularly in Google’s office suite. At several points throughout the keynote, Google demoed a system called “Help me write” — which rewrites simple prompts in Gmail and other apps as more polished paragraphs of text. But more polished doesn’t automatically mean better.

Good writing can delight us with an intriguing turn of phrase. It can tell us something about how a person thinks and feels. Or it can offer some kind of useful detail about the world. And there’s no reason AI can’t play a role. I’ve seen authors use generative tools to create unexpected prose or summarize data in a readable way. I’m excited to see people explore new uses for it.

But Google’s onstage ideas were almost impressive in their ability to serve precisely none of writing’s core purposes. Its examples of Help me write calcify the worst tendencies of “professional” communications. They’re bland, bloated boilerplate that turns a human prompt into something that uses more words to say less.

Let’s look at one of the examples. Early in the keynote, a presenter auto-generates a job description for a textile design position that they (in the demo’s scenario) send into the world. Later, another presenter responds to it, supplying the following prompt:

interested! 8 years exp with intl brands. Call on Tuesday.

Help me write works its magic and produces the following:

Hi Lori,

Thank you so much for reaching out! I am very interested in the Senior Textile Designer position at Cymbal. I have 8 years of experience working with international brands, and I am confident that I have the skills and experience that you are looking for.

I am available for a call on Tuesday at your earliest convenience. Please let me know what time works.

The latter version is obviously the way you’re expected to apply to a job because, written by a human, it suggests you’re committed enough to type out a multi-paragraph reply and are capable of hitting certain class and cultural signifiers. An AI-generated response doesn’t serve either of those purposes, though. It was created almost instantly by a predictive text system that requires minimal English-language competency, and so far, that system can only handle fairly formulaic writing; it won’t substitute for human work in many real-world situations. So all it does is construct a longer and more stilted version of the original prompt — one that probably only has value until everyone assumes it was written with AI.

And even worse, the AI generation reinforces the idea that overenthusiastic US business speak is the required way to write, regardless of whether it’s a necessary skill for the job. I’ve seen thoughtful stories about people with dyslexia using ChatGPT to produce text that is — as a Washington Post article puts it — “unfailingly professional and polite.” But there’s an unspoken, simpler alternative: being willing to accept wider variations in how people communicate. I don’t begrudge anyone who uses AI writing to meet largely arbitrary standards, but at a society-wide level, it’s a linguistic arms race toward a more boring future.

For another example, try this friendly message of congratulations from Google Android engineering VP Dave Burke to devices and services SVP Rick Osterloh:

rick congrats on pixel fold launch amazing with Android dave

Here’s what Help me write churns out:

Dear Rick,

Congratulations on the successful launch of the Pixel Fold! I’m really impressed with the device and how well it integrates with Android. The foldable screen is a game-changer, and I can’t wait to see what you do with it next.

Best,
Dave

I’m going to assume this is tongue-in-cheek because I know the way executives send personal emails, and they seem entirely comfortable with one-sentence punctuation-free missives. But again, it’s worse! It tells me less about what the sender sounds like. It takes longer to read. It makes the sender sound like an interchangeable robot.

To understand the pragmatic value of this process right now, you have to look at an email from the very beginning of the keynote. The core of the generation is actually pretty cool — it takes a very simple prompt for an airline refund request and fills it in with concrete details drawn from your inbox, like the flight name and date. But after that, it embellishes the facts with a few paragraphs of irritated prose. Here’s the original:

ask for a full refund for this canceled flight

And the AI-generated version:

Dear Customer Service,

I am writing to request a full refund for my canceled flight, DF 4542, on February 4, 2023. I understand that you are offering a voucher as a gesture of goodwill, but I would prefer to be reimbursed for the cost of my ticket.

I have been a loyal customer of your airline for many years and have always been satisfied with your service. However, I am very disappointed with the way my recent flight was handled. I was not given any advance notice of the cancellation, and I was forced to make alternative arrangements at the last minute. This caused me a great deal of inconvenience and expense.

I believe that a full refund is the only fair way to compensate me for the problems I experienced. I have enclosed a copy of my original ticket for your reference.

Thank you for your time and consideration.

Sincerely,
Rowan

There’s no indication these furious claims are accurate. It’s a bureaucratic deimatic display: convincing some hapless customer service representative that you’re mad enough to type out all that text and will probably be a real nuisance until you get your money back. I’ve seen this idea pitched better on Tumblr of all places. In a popular post about AI, someone described using ChatGPT to pull off a sort of high-tech version of Brazil’s Form 27B / 6 gambit, generating a threatening faux-legal letter to a landlord who was breaking housing laws. As a fellow longtime tenant, I applaud them.

But this stuff is only effective during a brief transition period, while generative text isn’t yet in widespread use and readers are likely to assume it’s connected to a human writer. If you know it’s a machine, the illusion evaporates. You’re left with a world full of communications that are longer, less thoughtfully created, and more mind-numbing to read. I’d rather hire someone based on an honest “8 years exp” than a cover letter full of empty automated prose.

By contrast, Google’s most useful example of Help me write involved simply conveying information. In an email about a potluck, its AI was able to look at a document with a list of dishes people had signed up to bring, then summarize that list as a line in an email. It saves writers the step of pasting in a series of items and readers the inconvenience of clicking through to another tab. Most importantly, its value doesn’t rely on pretending that a human wrote it — and if Google has its way, that’s a trick that won’t last for long.
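That potluck example, and the refund email earlier, both follow the same shape: a terse instruction plus structured details retrieved from your own data. A minimal sketch of that data flow might look like the following, with an invented mock inbox and a fixed template standing in for the language model that writes the actual prose:

```python
# Sketch of the "terse prompt + inbox context" flow from the keynote demo.
# The mock inbox and function names here are invented for illustration.

MOCK_INBOX = [
    {"subject": "Your flight DF 4542 on February 4, 2023 is canceled",
     "flight": "DF 4542", "date": "February 4, 2023"},
]

def find_context(inbox, keyword):
    """Return the first message whose subject mentions the keyword."""
    for message in inbox:
        if keyword in message["subject"].lower():
            return message
    return None

def draft_email(prompt):
    # Naive retrieval: look for an inbox message sharing a meaningful word
    # with the prompt (short filler words are skipped).
    context = None
    for word in prompt.lower().split():
        if len(word) > 4:
            context = find_context(MOCK_INBOX, word)
            if context:
                break
    if context is None:
        return prompt  # nothing to enrich with; fall back to the raw prompt
    return (f"Dear Customer Service,\n\n"
            f"I am writing to request a full refund for my canceled flight, "
            f"{context['flight']}, on {context['date']}.\n\n"
            f"Sincerely,\nRowan")

print(draft_email("ask for a full refund for this canceled flight"))
```

The real feature hands the retrieved details to an LLM instead of a template, which is what lets it embellish the facts with paragraphs of irritated prose.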

Apple’s AirTags are available at a rare discount

A close-up image depicting a set of hands holding a selection of Apple AirTags.
You can get a discount no matter whether you buy a single AirTag or a set of four today. | Image: Vjeran Pavic / The Verge

Memorial Day is just around the corner, and if you’ve got plans to travel, today’s deal on Apple’s AirTags has landed at the perfect time. Right now, you can buy a single AirTag for $25 ($4 off) at Amazon and Best Buy, which is the lowest price we’ve seen so far this year. If you’d like a set for the whole family, you can also get a pack of four for $89.99 ($10 off) from Amazon, Walmart, and Best Buy. Even better? If you order from Walmart or Amazon in the next few hours, you may still be able to get the set in time for Mother’s Day.

Apple’s ultra wideband-capable Bluetooth tracker can help you keep tabs on everything from keys to suitcases, allowing you to easily find your belongings should they get lost. The location tracker also features other cool perks, like IP67 water and dust resistance and user-replaceable batteries. Just be mindful that it lacks a built-in lanyard hole, so you’ll need an extra accessory like an AirTag Loop to attach it to your keys and other items. Read our review.

After years of waiting, The Legend of Zelda: Tears of the Kingdom has finally landed on the Nintendo Switch. And as if the release of the year’s most anticipated game wasn’t exciting enough, you can also save some money if you’re a Nintendo Switch Online subscriber.

As we’ve written about before, you can still buy a Nintendo Switch Game Voucher, which, for $99.98, lets you pick two games from Nintendo’s list of eligible titles. That includes Tears of the Kingdom as well as Splatoon 3, Advance Wars 1+2: Re-Boot Camp, and forthcoming titles like Pikmin 4. Essentially, that means you’d be paying $49.99 for each game, saving you even more money if you were planning on buying a second game anyway.

My colleague Ash Parrish just published her full review of the latest Zelda title, which you can read in full here. In a nutshell, it’s very similar in story to its predecessor, Breath of the Wild, but with some fun new weapons, capabilities, and other features. Admittedly, we haven’t felt it inspires the same sense of awe as Breath of the Wild, but don’t feel too disappointed — it’s still an absolute pleasure to play. Read our review.

Speaking of recent releases, Google just launched its latest midrange phone, the Pixel 7A. Thankfully, if you’re hoping for some early bird incentives, Google is currently throwing in a $100 credit toward a pair of Pixel Buds and a free Case-Mate case (a $25 value) when you pick up the budget-friendly Android phone for its regular price of $499. Retailers like Best Buy and Amazon, meanwhile, are offering a $50 gift card with your purchase.

The 7A costs $50 more than the Pixel 6A it replaces, but that extra money nets some nice upgrades — including wireless charging, a faster 90Hz screen for smoother scrolling, and a more refined dual camera setup. It doesn’t skimp on the processor, either, as it borrows the Tensor G2 from the flagship Pixel 7 and 7 Pro. It’s the best midrange Pixel to date, so barring the unforeseen hiccups that sometimes plague Pixel phones, it feels worthwhile to take advantage of these early adopter deals while you still can. Read our review.

If you’re looking for a pair of wireless earbuds that’ll help you drown out the world so you can better focus on the task at hand (or just listen to T-Swift), Bose’s QuietComfort Earbuds II are on sale for $249 ($50 off) at Amazon, Best Buy, and direct from Bose.

In a nutshell, the QC Earbuds offer dependable performance, terrific sound quality, and the most powerful noise cancellation on the market. Plus, they sport an excellent transparency mode for when you actually do want to let some noise in. While it’s a shame they lack support for wireless charging, they’re an otherwise excellent pair of earbuds that’ll silence your surroundings like no other. Read our review.

A few more deals to welcome the weekend with

A moment’s silence, please, for the death of the Metaverse


Meta sank tens of billions into its CEO’s virtual reality dream, but what will he do next?

Dearly beloved, we are gathered here today to remember the metaverse, which was quietly laid to rest a few weeks ago by its grieving adoptive parent, one Mark Zuckerberg. Those of you with long memories will remember how, in October 2021, Zuck (as he is known to his friends) excitedly announced the arrival of his new adoptee, to which he had playfully assigned the nickname “The Future”.

So delighted was he that he had even renamed his family home in her honour. Henceforth, what was formerly called “Facebook” would be known as “Meta”. In a presentation at the company’s annual conference, Zuckerberg announced the name change and detailed how his child would grow up to be a new version of cyberspace. She “will be the successor to the mobile internet”, he told a stunned audience of credulous hacks and cynical Wall Street analysts. “We’ll be able to feel present – like we’re right there with people no matter how far apart we actually are.” And no expense would be spared in ensuring that his child would fulfil her destiny.


Can Google’s Pixel Fold really hang?

A partially opened Pixel Fold playing a YouTube video on its top half.
For US buyers, the Pixel Fold is the first credible alternative to Samsung’s Galaxy Fold series. | Photo by Dan Seifert / The Verge

Google’s debut foldable makes a strong first impression. But if recent Pixels are anything to go by, the company has a lot to prove when it comes to performance and dependability.

I don’t give a damn about the bezels. Just let me get that part of Google’s new $1,799 Pixel Fold out of the way. They’re fine. And I’m absolutely on board with the squat form factor: having this phone / tablet hybrid feel like a notepad in hand when it’s closed seems like a far better solution than Samsung’s tall boy.

The Galaxy Z Fold 4 is too narrow for us large-handed humans, and when opened up, its square-ish inner display leaves sizable black bars when watching videos. In his Pixel Fold hands-on, my colleague Dan Seifert found Google’s wider aspect ratio to feel more natural for multitasking — and it should make for a better entertainment device, too.

There’s a lot that’s promising about the Pixel Fold, actually. I’m not worried about the software or cameras. It might not offer a sensor of the same size as the 7 and 7 Pro, but I still trust Google’s computational photography to nail shots on the first try more than phones from any other manufacturer. I’ve come to enjoy Android’s colorful Material You UX, and I think we’ll keep seeing developers release apps that are optimized for foldables.

But I do have some hang-ups over this $1,800 gadget, and they boil down to performance, reliability, and customer service. At the moment, all three remain total unknowns.

Google’s Tensor chips aren’t the most efficient and can run hot

Google’s self-branded processors routinely trail Qualcomm and Apple in benchmarks — sometimes substantially — but they’re more than powerful enough to provide a smooth day-to-day smartphone experience. It seems like that’s all Google ever really wanted. They’re perfectly adequate.

Except in those moments when they’re not.

As someone who’s owned a Pixel 6, Pixel 6A, and most recently, a Pixel 7, I can attest that both the Tensor G1 and G2 have demonstrated a tendency to run warm. Not always. Some days are better than others. But when things heat up, Pixel phones will often disable features like 4K video recording or even something as simple as the camera flash. Are you using your device while it’s plugged in? Expect sluggish charging speeds, if the battery percentage climbs at all. The Pixel 7 hasn’t gotten nearly as hot as the 6 series did in my experience so far. But then again, these phones haven’t even faced their first summer yet: the Pixel Fold is arriving just as temperatures rise across the US.

If you look around the very active Pixel subreddit and other social media, similar reports aren’t uncommon. With Tensor G2, Google overcame the woeful cellular reception challenges that some Pixel 6 and 6 Pro owners encountered. But the simple truth of the matter is that these chips aren’t as efficient as Qualcomm’s latest and greatest.

A Samsung Galaxy Z Fold 4 next to the Google Pixel Fold. Photo by Dan Seifert / The Verge
If I’m being honest, I expect Samsung’s next Galaxy Fold to absolutely smoke the Pixel Fold in performance. But that won’t matter to everyone.

When the Galaxy Z Fold 5 is released this summer, likely powered by some variant of the Snapdragon 8 Gen 2, we could see a significant divide between it and the Pixel Fold when it comes to thermal performance. A cooler processor tends to result in better battery life. And the Fold’s battery estimates already seem rather optimistic when you consider it houses two 120Hz displays that can get very bright.

Tensor’s quirks, typically attributed to its Exynos DNA, are somewhat excusable with phones costing between $400 and $900. But if a $1,800 foldable starts overheating and disabling software features during normal summer activities, people are going to lose it.

If there’s one core element of the Pixel Fold that’s giving me pause, it’s the silicon. I’m doing my best to believe that Google has thought all of this through, made some cooling tweaks for this form factor, and it’s not going to become A Thing. We’ll start finding out in late June.

How are you supposed to get this thing fixed?

I’m quick to admit that those of us who live in or near New York City are outright spoiled when it comes to easy solutions for our tech dilemmas. There are plenty of Apple stores within a few-mile radius, the only two Google Store locations are based here, and Samsung 837 can provide speedy scheduled repairs even for its foldables, covering screen protector replacements — yes, the Pixel Fold has one of those — and other hardware issues.

But there will be plenty of Pixel Fold buyers who live far from this city, San Francisco, or any other major metro hub, and their repair options for the Pixel Fold remain unproven at this point. Samsung might not have the same expansive retail presence as Apple, but it has at least teamed with Best Buy for authorized repairs.

In the past, Google has partnered with the Asurion-owned uBreakiFix for its extended warranty plan. That same arrangement seems to be continuing with the Pixel Fold, with coverage running $15 per month or $279 for two years, including accidental damage.

Now, I’m not exactly a champion of the Genius Bar or Geek Squad, but reviews for uBreakiFix are often mixed, and if you want the best experience, one pro tip I’ve picked up on is to make sure you’re visiting a corporate Asurion location.

If you’re curious, here’s what Google says about service fees for repair visits:

For Pixel Fold, Pixel 7a, service fees for walk-in screen repairs are $29. For Pixel 7, Pixel 7 Pro, Pixel 6, and Pixel 6 Pro and Pixel 6a, service fees for walk-in screen repairs are $29* while the service fee for other repairs or replacements (including mechanical/electrical breakdown and all other accidental damages) is $49 for Pixel Tablet, $49 for Pixel 7a, $129 for Pixel Fold, $49 for Pixel 6a, $79 for Pixel 7, $99 for Pixel 7 Pro, $99 for Pixel 6, and $149 for Pixel 6 Pro.

uBreakiFix says “most” repairs can be completed in 45 minutes. But since the Pixel Fold still hasn’t been released, we don’t know if that will be true of Google’s foldable. Will only certain locations have the right parts and tools? Will Google’s default response be sending customers a new Fold, which would likely require another $1,800 hold on a credit card?

For the money it’s asking, I’m hopeful Google is going to do well by Pixel Fold customers and focus on top-tier service. But the company hasn’t yet established such a reputation. And considering its regular slab phones continue to exhibit problems — like the camera glass randomly shattering on some Pixel 7 and 7 Pro units — there’s a lot riding on how the Fold fares in hardware reliability. When you’re spending upward of $2,000 on a phone after tax, you deserve some serious white glove treatment.

I still can’t wait to try one

Again, owing to its design, sleek software, and surefire camera, the Pixel Fold is immediately more appealing to me than Samsung’s Galaxy Fold. Do I want to give Google’s first foldable a spin? Most certainly. I’m curious how other aspects of its hardware like the speakers and haptics will shake out.

But I also have the good fortune of working at The Verge, where I’ll be able to spend time handling a Pixel Fold without bidding farewell to $1,800 of my own money. As it stands, and as tempting as the whole package looks, I don’t think I could bring myself to hit the preorder button otherwise. Maybe that pause will fade away when the first reviews hit, and we’ll confidently be able to take another step into the foldable future.

Ministers not doing enough to control AI, says UK professor


Stuart Russell, former government adviser, says ChatGPT could become part of super-intelligent machine that can’t be constrained

One of the professors at the forefront of artificial intelligence research has said ministers are not doing enough to protect against the dangers of super-intelligent machines in the future.

In the latest contribution to the debate about the safety of the ever-quickening development of AI, Prof Stuart Russell told the Times that the government was reluctant to regulate the industry despite concerns that the technology could get out of control and threaten the future of humanity.

vendredi 12 mai 2023

Amazon’s working on a secret new home robot that could be more like Rosie

The Astro robot in a hallway.
Amazon is looking to improve on its current home robot Astro by adding ChatGPT-like features to future versions | Photo by Jennifer Pattison Tuohy / The Verge

As a smart home reviewer of a certain age, all I’ve ever wanted for my home is a Rosie the Robot. The Jetsons’ mechanical housekeeper was the example I held Amazon’s Astro to when I tested the company’s first home robot — and it unsurprisingly failed. Not just because it had no arms, but because it couldn’t really do anything.

Now, according to internal documents seen by Insider, Amazon thinks it has found the key to unlocking Astro’s potential. Burnham is a secret new AI robot project Amazon is developing that, according to the documents, adds a layer of “intelligence and a conversational spoken interface” to a smart home robot.

An upgraded Astro powered by Burnham could use large language models, and other advanced AI, to become a home robot that understands the context of a busy household and responds appropriately. According to Insider, the documents indicate that the technology “remembers what it saw and understood” and the robot can then “engage in a Q&A dialogue on what it saw” and use AI powered by LLMs to act on it.

For example, the documents describe an Astro product using Burnham as able to find a stove left burning or a faucet left running and track down its owner to alert them. It could check on someone who has fallen and call 911 if it’s an emergency. It could help find your keys, check if a window was left open overnight, and monitor whether kids had friends over after school, according to the documents. These are all things you can do to some extent with existing smart home tech, but they require multiple steps, devices, and actions, as opposed to one — Astro.

Most interestingly, though, Amazon appears to be exploring initiating more complex tasks. An example given was a robot that sees broken glass on the floor, knows that it presents a hazard, and prioritizes sweeping it up before someone steps on it — essentially, spotting problems and potentially solving them.

This “Contextual Understanding,” as Amazon describes the tech in the documents, is its “latest and most advanced AI technology designed to make robots more intelligent, more useful, and more conversational.” So, basically, Rosie the Robot (but without the arms).

However, Burnham isn’t coming to a robot near you anytime soon. Amazon acknowledges in the documents that it still has a long way to go before Burnham can be put into a product. You also still can’t buy the current, not-so-smart Astro without an invite, its price just went up to $1,600, and Insider notes Amazon scrapped plans to release a cheaper version.

Even amid the rapid adoption of generative AI by tech companies like Amazon, a home robot as capable as Rosie is still just a character in science fiction. Although Amazon’s statement in one document, “Our robot has a strong body. What we need next is a brain,” makes me think twice about how much I really want an intelligent, AI-powered robot roaming around my home.

DirecTV and Dish’s on-and-off merger saga switches back to off

Illustration by Alex Castro / The Verge

DirecTV has dropped its plans to a...