Monday, July 29, 2024

The AI Keeps the Score

When Simone Biles saluted the judges and stepped onto the mat to vault at the Sportpaleis in Antwerp, Belgium, it seemed like every camera in the packed arena was trained on her. People in the audience pulled out their smartphones to record. The photographers zoomed in from their media perches. One TV camera tracked her run on a high-speed dolly, all the way down the runway, as she hurdled into a roundoff onto the springboard. The spider cam, swinging above, caught the upward trajectory of her body as she turned towards the table and blocked up and off, twisting one and a half times before landing on the blue mat and raising her arms above her head. The apex of human athleticism and kinesthetic beauty had been captured.

But there were other cameras that few people in the arena were thinking about as they took in Biles’ prowess on the event: the four placed at each corner of the mat where the vault was situated. These cameras also caught the occasion, but not with the purpose of transmitting it to the rest of the world. They were set up by the Japanese technology giant Fujitsu, which, since 2017, has been collaborating with the International Gymnastics Federation (FIG) to create an AI gymnastics judging system.

In its early days, the system used lidar (light detection and ranging) technology to create 3D composites of gymnasts in action. These days, it uses an even more sophisticated system, drawing from four to eight strategically placed hi-def cameras to capture the movement of the athletes, make 3D models, and identify whether the elements they are performing fall into the parameters established by the judging bodies inside the federation.

But the computer system doesn’t make judgments itself. Instead, it is deployed when there is an inquiry from the gymnast or coaches or a dispute within the judging panel itself. The Judging Support System (JSS) can be consulted to calculate the difficulty score of an athlete’s exercise — a second opinion, rather than an initial prognosis. Currently, it is mostly used for edge cases.

The JSS wasn’t necessary to evaluate the value of Biles’ vault in Antwerp. Her performance was too emphatic to be borderline. Still, the cameras positioned at the corners of the vault podium captured her 3D likeness, as they did for all of the other athletes who competed throughout the 2023 World Gymnastics Championships. The technology distilled the legendary athlete and her performance down to straight lines and sharp angles; it showed the distance and height she traveled in numbers. The awe and wonder one feels when watching Biles perform could now be recognized by a computer — understood, though not exactly appreciated.

Fujitsu and FIG announced the JSS back in 2017 with the goal of having the system up and running by the Summer Olympics in 2021. A home Games in Tokyo would have been an ideal opportunity for the Japan-based tech conglomerate to showcase this kind of technology, and it would’ve been a noteworthy achievement for Morinari Watanabe, the first Japanese president of the Lausanne-based FIG. But the JSS wasn’t ready; it would take two more years of work. At the 2023 world championships in Antwerp, the JSS was finally ready to go on all 10 artistic gymnastics apparatuses — six for the men and four for the women.

This was all part of the “dream,” as Watanabe put it in the joint press conference hosted by FIG and Fujitsu heralding the technological breakthrough. “Today is a day of liberation in sports,” he proclaimed to the media and other gymnastics officials who showed up for the explainer that was held shortly before the start of the men’s all-around final. “The day has come when all athletes, not just gymnasts, will receive fair and transparent scoring.”

This proclamation was a bit hyperbolic, especially given that this is not AI’s first foray into judging athletic competition. It has already been successfully applied in sporting contexts, often with approval from athletes and coaches themselves. Hawk-Eye Live, the electronic line-calling system, is used in lieu of line judges in tennis at two of the majors, and its calls are generally considered reliable.

But in tennis, Hawk-Eye is being tasked with answering a yes / no question — is the ball in, or is it out? The JSS is being asked to perform a much more complicated task: it needs to be able to identify hundreds of skills in the Code of Points, and the ranges in which they’re done, across the whole span of gymnast body types — a complex undertaking, and one that changes regularly, as the FIG updates its rules every four years. In a sport where the difference between first and fifth can be a mere tenth of a point, and where global rankings can mean the difference between being funded by your national federation or not, getting the score right is critical.

The appeal of a technological solution to judging feels practically inevitable. Humans are fallible. That’s why deductions exist in the first place: to quantify the mistakes that the gymnasts make. But we’d never replace the human athletes with machines, regardless of how advanced Boston Dynamics’ back-flipping robot gets. The draw of gymnastics is watching mere mortals push the limits of athleticism. The performance of the judges, though, is a means to an end, not the end itself. For more than a century, human judgment was the only option, no matter how much this might’ve discomfited us, given the stakes. Now, there’s a potential technological solution that shows promise. But can AI judge human excellence better than a human?


The JSS started, according to Watanabe and Hidenori Fujiwara, as a joke. It was late in 2015, about a year before Watanabe won his first FIG presidential election, making him the first non-European to helm the international federation since its inception in 1881. Watanabe suggested that Fujitsu should develop robots to judge gymnastics.

Fujiwara, head of Fujitsu’s sports business development division, took the challenge seriously. “We developed a prototype system,” Fujiwara said, which he then showed to Watanabe, who was surprised by the progress. Watanabe clarified that what he’d said about robots had only been a joke, and yet here they were.

This origin story for the JSS was emphasized during the press conference I attended in Antwerp shortly before the start of the men’s all-around final. There was, of course, a PowerPoint. An early slide in the presentation showed a comic with robots holding up score placards, as a male gymnast swings into a scissor-like movement on the pommel horse. The caption above the image read: “Joke come true!” (I didn’t get why it was funny; I guess you had to be there.)

It’s a “joke” that Fujitsu has spent untold amounts of money, time, and energy on. Though the company wouldn’t disclose the cost of this whole undertaking, it’s hard to fathom, after strolling through their offices in the bowels of the Sportpaleis and seeing the arena setup of the technology in the field of play — and off to the side — that it was anything short of a tremendously expensive and resource-intensive endeavor. But I couldn’t help but feel like it was a lot of effort for technology that, at least as pitched by Watanabe, would only ever amount to a slightly better version of judge-assisted video replay.

Even ignoring the years of investment in R&D, the physical footprint of the JSS appears expensive. During the competition, I glimpsed the backroom, where there was a row of servers and another of monitors, a cluster of power packs, and tons of cable. Like so much of AI, its “magic” obscures copious amounts of energy-intensive hardware.

Out on the floor, the JSS cameras were subtle, but a lot of human effort went into calibrating them. Before the start of the day’s competition and frequently in between sessions, you could watch as technicians took to the floor, placing large orange balls similar to exercise balls you’d find at the gym, mounted to tripod-like devices, at strategic spots on or near the equipment to make sure that the cameras were properly aligned. Sometimes, they waved these balls like wands around the apparatuses. And throughout the competition, several technicians monitored the event from behind six computer screens near the media box. Nothing about this can be done cheaply.

The entire history of judging had created tragedies, Watanabe explained somewhat dramatically. But even if his remark to Fujiwara had been made in jest, the fact that FIG has doggedly pursued this venture with Fujitsu going on six years suggests that the joke hinted at something critical and true (as jokes often do): that he felt that there was something amiss in judging in the sport of gymnastics, and maybe technology could fix it.

Watanabe didn’t specify any particular instance of judging malfeasance or error that created these personal tragedies. But he didn’t really have to. The conventional wisdom around the judged aesthetic sports, such as gymnastics and figure skating, is that there are and always have been issues with the scoring. During the Cold War, when both the US and the Soviet Union fought for the top spot in the Olympic medal rankings, there was fairly widespread cheating and collusion in gymnastics judging. Back in 1988, after his brief foray into international elite gymnastics, former University of Utah gymnastics head coach Greg Marsden let slip to the media that, at the previous year’s world championships, there had been judging collusion between the US and Romania, with the coaches exchanging scores before their athletes took to the mat. And in the years since the Cold War ended and the old judging alliances started to break down, the issues became more mundane but no less consequential. It was mostly human error and confusing rules and processes, with a dash of bias — racial, national, or both — that created most of the problems.

Elements of subjectivity can be found in most sports, and these judgment calls can end up having major consequences when it comes to competitive outcomes. In basketball, for example, a referee might make a bad call that affects the outcome of the entire game, like this year’s Women’s Final Four matchup between UConn and Iowa that featured a controversial offensive foul call in the final seconds of the game. But in general, the way of amassing points is fairly straightforward and has remained consistent over many years. The lines on the court, except in the case of the free throw, determine the point value of any given shot, and this didn’t change when Stephen Curry started nailing deep three-pointers. A shot from well behind the three-point line is objectively more difficult — and impressive — than one made closer to the basket. But the NBA hasn’t painted another line on the court to reward the higher difficulty level of shots taken from well behind the arc. Nor did the league change the rules to make Curry’s threes harder. Players simply learned to shoot from further back.

This is not how gymnastics operates. As gymnasts introduce new elements, the FIG has to assess them for their difficulty value, and there is no upward limit, at least in theory, as there is with basketball shots. In gymnastics, a half-court shot isn’t worth the same as one from right behind the arc. Skill valuations can change from one Olympic cycle to the next; requirement groups can be added or removed. A bad score in one cycle might be a good one in the next. The rules are not stable as they are in other sports, and it can be baffling, without highly specialized knowledge, to understand the difference in difficulty from one skill to the next.

The most significant change to the rules came in 2006 when the FIG scrapped the Perfect 10 scoring paradigm in favor of an open-ended approach that gives the gymnasts two marks that are added together — the difficulty score, which starts at zero and builds, depending on the fulfillment of requirements and the skills the athlete performs; and an execution one that starts at 10 and is reduced as the judges apply deductions for mistakes the athlete makes.
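As a rough illustration of that open-ended arithmetic, the two marks can be sketched in a few lines of Python. The skill values, bonus, and deductions below are invented for illustration, not taken from the actual Code of Points:

```python
def final_score(skill_values, requirement_bonus, deductions):
    """Open-ended scoring: difficulty builds up from zero,
    execution counts down from a maximum of 10."""
    d_score = sum(skill_values) + requirement_bonus  # difficulty score
    e_score = 10.0 - sum(deductions)                 # execution score
    return round(d_score + e_score, 3)

# Hypothetical routine: five skills plus composition requirements,
# with a few small execution errors along the way.
score = final_score(
    skill_values=[0.5, 0.4, 0.4, 0.3, 0.6],
    requirement_bonus=2.0,
    deductions=[0.1, 0.1, 0.3],
)
print(score)  # 13.7
```

Because the difficulty side has no ceiling, a harder routine can always outscore a cleaner one, which is exactly the trade-off the 2006 change introduced.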

The immediate catalyst for this particular change was the scoring controversies of the 2004 Olympics, particularly the miscalculated start value of Yang Tae-young. The South Korean gymnast was erroneously docked a tenth of a point, which led to him missing out on the gold medal in the men’s all-around. This mistake has meant that Yang, who is now a coach, doesn’t receive a gold medalist’s pension from the South Korean government. Watanabe was not wrong about how errors in judging can have serious ramifications for athletes, even years after the fact.

Judges still make mistakes on the D-score, which is the updated name for start value. But, unlike with the execution mark (aka the “how well you did it” score), a gymnast or coach has the right to appeal the difficulty calculation. This is where the JSS can help. In earlier iterations of Hawk-Eye in tennis (a setup still used at Wimbledon), players could challenge the call of a line judge, and the computer would override any human error. Fujitsu’s system enables something similar, albeit much slower and more bureaucratic.

Several times over the course of the world championships in Antwerp, I heard an announcement over the PA that an inquiry had been submitted for one gymnast’s beam score or a different athlete’s bars mark. The large scoreboard to my back would show the athlete’s name with “under review” right next to it. Judges would consult video replay and the new JSS, though it was unclear under which conditions the JSS, rather than video review, was used. Often, inquiries took only a few minutes, though, in an already long competition, waiting for the eventual resolution to be announced felt like a drag. In most cases, the gymnast’s score remained unchanged. If AI was used in these inquiries, it functioned solely to validate the work of the human judges.


When I sat down with the Fujitsu technicians in Antwerp in a room somewhere in the bowels of the Sportpaleis, I got to see just how precise the JSS can be. I was shown recordings of the switch ring leap, a skill that was also highlighted during the press conference the day before. This element is notoriously tricky to perform and to judge. The gymnast has a lot of boxes to tick: split of the legs, the position of the back leg relative to the crown of the head (they have to be at roughly the same level), the arch of the back, and the head release. The judge has to be able to register all of that in the split second the skill appears before them on the balance beam.

The JSS looked a lot like video replay, except that the gymnast is transformed into an unclothed mannequin performing the elements. The apparatus is there, but all of the trappings of the gymnasium are gone; the rendering is set against what looks like the holodeck set on Star Trek before the computer program fills in the details: a black space with white lines running parallel and perpendicular. To the side, you can see key measurements, such as angles, to help determine whether the gymnast met the demands of the element — all of the color and flair stripped away, down to the nuts and bolts.

In the first clip, the gymnast did not fulfill the requirements. At the apex of the leap, her back foot didn’t line up with the crown of her head. The technician applied one tool, a blue horizontal plane, which made it quite clear that her back leg wasn’t high enough. “It’s minus 40 centimeters,” she said, pointing her cursor at the upper right corner of the screen.
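Once the 3D keypoints exist, a check like the one the technician demonstrated reduces to simple geometry. A minimal sketch, assuming heights in centimeters above the beam and an invented tolerance (Fujitsu’s actual thresholds aren’t public):

```python
def ring_position_check(head_crown_z, back_foot_z, tolerance_cm=5.0):
    """Compare the back foot's height to the crown of the head,
    the way the JSS demo's horizontal-plane tool did. The 5 cm
    tolerance is a hypothetical stand-in for whatever margin
    the rules actually allow."""
    gap = back_foot_z - head_crown_z  # negative: foot below the crown
    return gap >= -tolerance_cm, gap

# The failed leap from the demo: back foot well below the head.
ok, gap = ring_position_check(head_crown_z=190.0, back_foot_z=150.0)
print(ok, gap)  # False -40.0 — the "minus 40 centimeters" case
```

The point of the tool is that the number falls out automatically; the judge no longer has to eyeball the relationship in a split second.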

Next, she played a recording of another switch ring, at normal speed. “What do you think?” she asked. I responded that I thought it was performed within acceptable parameters. Turns out I was right. Don’t give me too much credit here, though; the reason I could see it easily is because the gymnast had performed it exceptionally well. Her split was oversplit; her back foot went so high that it was well above the crown of her head.

As much fun as I had playing around with the system — and talking about the finer points of gymnastics with the experts — I wasn’t entirely convinced that the JSS, at its current stage of development, had made a compelling case for its necessity as a decision support system. It felt like a solution in search of a problem.

Steve Butcher, former head of the men’s technical committee and technical coordinator for FIG, said he initially shared my skepticism. He knows better than most how hard judging can be, having spent 40 years doing it. But Butcher was won over quickly. All it took was a short demonstration showing a gymnast doing an iron cross, a static strength hold in which the athlete grips the rings with their arms extended to the sides, completely parallel to the floor. Ideally, the athlete will create a perfectly straight line across, from wrist to wrist.

“They showed me one arm, he has three degrees of deviation. And the other arm, he has one degree of deviation,” Butcher said, noting it was not perceptible to the human eye. Since that demo, he has worked with Fujitsu on behalf of FIG to help the company address the sport’s needs and has remained a consultant on the project even though he left his full-time position with the gymnastics federation in 2022.
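The measurement Butcher describes is, at bottom, the angle between an arm and the horizontal. A sketch of that calculation, using made-up shoulder and wrist coordinates rather than anything from Fujitsu’s pipeline:

```python
import math

def arm_deviation_deg(shoulder, wrist):
    """Angle between the shoulder-to-wrist vector and the horizontal,
    in degrees. Keypoints are (x, z) pairs in a consistent unit;
    a perfect iron cross would measure 0.0 on both arms."""
    dx = wrist[0] - shoulder[0]
    dz = wrist[1] - shoulder[1]
    return abs(math.degrees(math.atan2(dz, abs(dx))))

# One arm drooping slightly (60 cm out, 3 cm down), the other level.
left = arm_deviation_deg(shoulder=(0.0, 140.0), wrist=(-60.0, 137.0))
right = arm_deviation_deg(shoulder=(0.0, 140.0), wrist=(60.0, 140.0))
print(round(left, 1), right)  # 2.9 0.0
```

A roughly three-degree droop corresponds to only a few centimeters at the wrist, which is why no judge could be expected to see it from the floor.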

But was this really an improvement over plain ol’ video review? How would seeing the angles of someone’s arms to this degree — a difference of two degrees, to be specific — actually improve the judging? In the example that Butcher cited, the knowledge that the JSS provided was interesting, but it wouldn’t have changed the valuation for the gymnast: he would’ve been credited the skill because he had performed it very close to the platonic ideal. At the top end of the performance, those minute flaws, if they rise to the deductible level, would be sorted out by the execution judges. The JSS isn’t up to that particular task yet.

To provide an example where the JSS could’ve potentially outperformed the judges — and certainly video review — Butcher brought me back to 2012, to the men’s team finals in London, with medals on the line. It was the final rotation, last routine, last gymnast up. Kohei Uchimura, then the three-time world all-around champion, was on the pommel horse, the event where Butcher was the apparatus supervisor. Uchimura’s routine went off as planned, clean and smooth, until the dismount. As he swung up from the pommel to the handstand, his arms seemed to buckle, legs akimbo; he spun wildly and slipped off the apparatus, somehow landing on the mat on his feet, albeit chest down. He walked off the podium, seemingly bemused and confused as to what had just happened.

This last mistake created a dilemma for the D judges: did Uchimura successfully reach a handstand — or get close enough to it — in order to receive credit for doing a dismount? If the judges didn’t give him credit for it, he would lose the value of the skill and miss a requirement group. The hit to his — and, by extension, Team Japan’s — overall score would be massive.

Butcher did not give credit for the handstand, nor did the other two D judges. Uchimura’s mark put Japan in fourth place, behind Great Britain and Ukraine. Both teams started celebrating medals they thought they had just won. The Japanese team, however, immediately submitted an inquiry.

The superior jury watched the video replay several times, in slow motion, frame by frame. The TV cameras hovered by the shoulders of the judges as they studied Uchimura’s routine. The action in the North Greenwich Arena had shifted from the athletes to a bunch of men in gray blazers, staring at a laptop.

Finally, the superior jury decided Uchimura was close enough to a handstand. The reversal of the D panel’s original call added seven-tenths to Uchimura’s score. Japan shot from fourth to second. Great Britain ended up with the bronze, and Ukraine, to their utter devastation, was bumped off the podium.

Butcher, however, still stands by what he and the two other D judges decided over 10 years ago. “We have to remember, they’re not looking at any exact angles. They’re looking at a foot here, a leg there, and looking in a video, freezing it, with no true measurements being applied,” Butcher pointed out. The decision to award credit or to withhold it was something of a very educated coin flip. “In that situation, I would have loved to have been able to have the Fujitsu system and be able to have that as the primary decision-maker,” he said.

When I watched the video of Uchimura’s London performance, I found myself agreeing with the original call. That was not a handstand. He never even managed to straighten his arms completely. But like the judges of the superior jury, I wasn’t working with any precise measurements. I was basing this strictly off of my gut. It was an aesthetic judgment as much as a technical one. But in gymnastics, there’s long been a feedback loop between the technical and aesthetic; what is technically sound is often most aesthetically pleasing, and vice versa.

Of course, none of this matters to AI. It doesn’t “know” things in the way that humans do. Facial and object recognition technology doesn’t recognize what a “labrador” is; it’s been shown millions of photos of that dog and has been told that this is, in fact, a labrador, or at least a statistical composite of one.

Apply the same logic of what an AI “knows” to a handstand in gymnastics, and it recognizes what a handstand is based on a series of rules and parameters of what a handstand is supposed to be. At the same time, it knows when the articulations of a body aren’t doing a handstand. That distinction may seem trite, but it also turns the sport into the color-negative version of itself.

Herein lies the weird irony of AI-assisted judging, a system that cannot understand or appreciate the beauty of the sport: Butcher and his panel could have used a system like the JSS to back an aesthetic opinion with hard numbers.


In many industries, AI has been used as an excuse to cut down on labor expenses. That’s not the case with the JSS, since its implementation is strictly to support human judges. Besides, judging gymnastics isn’t a full-time career for anyone, not even at the very highest levels, so that particular objection to AI doesn’t play. But the fact that judging gymnastics events is a sporadic activity points to another issue with the JSS’s application: there isn’t a lot of opportunity to use this expensive system. It will judge even less frequently than humans do. The majority of gymnastics events are decidedly low-tech affairs. Not every competition venue will have the necessary infrastructure to support the JSS. And all meets, except the biggest ones, are a couple of days long, if that, hardly worth the time, energy, and costs that go into the setup. Fujitsu said that it took about a dozen people to set up and run the JSS in Antwerp. When asked at which competition this much-ballyhooed system would be used next, Fujitsu didn’t say, only that it would be decided jointly with FIG.

Of course, it would be foolish to assume that it will always be this costly or difficult to set up the JSS in a competition format. The technology should improve over time and get cheaper, too. That opens up the possibility for what Butcher believes is its best use case: as a training aid. He told me that this was his first thought when Fujitsu first presented the JSS to him.

“Somebody’s doing a triple back off the high bar but you can see that their body’s slightly skewed in the air and you can measure that angle, you can see that they [are] landing heavier on one side of their body than the other.” Being slightly off like this in the air doesn’t change the valuation of the skill. It will still be regarded as a triple-back. But in the hands of the athlete and the coach, this kind of information can prevent an almost imperceptible defect from blooming into an injury. In this example, the JSS is merely a sophisticated measuring tool. Butcher said that some national federations have expressed interest in aligning the JSS with their pre-existing video systems, which Fujitsu confirmed, adding that they plan to unveil a version specifically for training in July. Throughout the week in Antwerp, and in follow-up calls with experts, this was the most persuasive use case that I came across.

Right after the Fujitsu press conference, I encountered Donatella Sacchi, the president of the women’s technical committee, who had been on the panel, along with her counterpart on the men’s side. She’s a compact woman, on the short side — but who isn’t in gymnastics? — with cropped hair, and speaks exuberantly, often standing to make her point and to demonstrate what she means by using her whole body.

Sacchi was very excited at the potential of the JSS but raised the specific issue that AI couldn’t intuitively understand things the way a person with gymnastics experience could.

A lot of work needed to be done — and continues to be done — to “parameterize” everything just so JSS could “see” things like a human, though not make errors like one.

Sacchi pointed to a couple of issues that the system has not yet been able to overcome. When we spoke again about a month after the world championships, she told me that the JSS cannot determine whether two skills done consecutively on the beam are actually connected in one continuous movement. This is one of the ways that gymnasts rack up tenths, linking different skills for connection value (CV) bonus. Connections are among the most challenging things for human judges to evaluate, since not all credited connections feature the transfer of speed and momentum from one skill into the next, which would make the connection easy to perceive. This is especially true if you change direction in a series or if you’re combining dance and acrobatic skills. There’s usually some sort of pause or hesitation, however slight. It’s up to the gymnast to move briskly between elements, even if the skills don’t lend themselves to seamless connections. If a system like the JSS is going to help determine difficulty scores, it needs to be able to handle connections; on an event like beam, they are the most contested part of the D-score. And isn’t that what the JSS is there to address, after all?

I asked Ayako Kawahito, a former gymnast and current judge who is working as a manager in the Human Digital Twin division of Fujitsu, about the beam connection problem. The issue, she said, is not about movement but about stillness. Kawahito pointed out that a person can appear to be completely still, according to the human eye, but if you subjected them to an MRI, their “joint coordinates are always moving around.” In order for the JSS to be able to assess connection value, Fujitsu and the FIG have to agree on the “(amount of) movement that can be considered a stop by a human judge,” she said.

Movement that can be considered a stop. Sounds a bit like an oxymoron, but it’s the kind of question that must be answered if the JSS will be able to help the judges in the places they need it the most.
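One plausible way to operationalize Kawahito’s question is to look at how far each tracked joint drifts over a short window of frames and call it a stop if every drift stays under some agreed threshold. A minimal sketch, with an invented 2 cm placeholder for the value Fujitsu and the FIG would have to agree on:

```python
def is_stop(joint_positions, threshold_cm=2.0):
    """Decide whether a window of tracked joint coordinates counts
    as a 'stop': every joint's drift between the first and last
    frame of the window stays under the threshold. Joint names
    and the threshold are hypothetical."""
    for frames in joint_positions.values():  # frames: list of (x, y, z)
        first, last = frames[0], frames[-1]
        drift = max(abs(a - b) for a, b in zip(first, last))
        if drift > threshold_cm:
            return False
    return True

# A gymnast who looks still but whose joints drift a few millimeters,
# the way Kawahito says they always do.
window = {"left_wrist": [(0.0, 0.0, 100.0), (0.3, 0.1, 100.2)],
          "right_ankle": [(50.0, 0.0, 0.0), (50.2, 0.0, 0.1)]}
print(is_stop(window))  # True: all drift stays under 2 cm
```

The hard part isn’t the code; it’s deciding on the threshold, which is a rules question for the federation, not an engineering one.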


If you were in Antwerp at the world championships and wandered into the Fujitsu booth, you’d be forgiven for temporarily forgetting you were at a gymnastics competition. There was very little inside to suggest that you were even at a sporting event of any kind. Monitors were hung on the bare white walls, but they didn’t show videos of gymnasts performing routines or even single elements, overlaid by JSS analysis. Instead, they showed how the technology behind the JSS could be used for fraud and theft prevention.

Though this might come as something of a surprise, it’s not really the left turn that some might imagine it to be. There’s a long tradition of the Games being used as a showcase for new surveillance and security technology. “The Olympics are often used to be kind of a showroom,” Dennis Pauschinger, a researcher at the University of Neuchâtel, told me in 2019 when I was working on a story about the global anti-Olympic movement.

The Fujitsu booth experience began with a simplified version of the JSS that you could play around with. I stood in front of a camera, which projected my movements onto a large screen and labeled them appropriately. It would say which hand you raised and what it was doing. “The judging system is based on what we call ‘pose estimation,’” Mike Fournigault, a Fujitsu AI architect, explained to me. “With cameras, we are able to reconstruct the pose of the body of people and to understand where are the hands, where are the arms, what are they doing with their hands, with their arms, with their legs?”
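The hand-raising demo can be approximated from pose-estimation output with a simple rule on keypoints. A sketch, using common keypoint naming conventions rather than Fujitsu’s actual schema, and remembering that image coordinates grow downward:

```python
def raised_hands(keypoints):
    """Label which hands are raised from 2D pose keypoints, the sort
    of output a pose-estimation model produces. In image coordinates,
    y grows downward, so a raised wrist has a *smaller* y than the
    shoulder. Keypoint names here are illustrative conventions."""
    raised = []
    for side in ("left", "right"):
        if keypoints[f"{side}_wrist"][1] < keypoints[f"{side}_shoulder"][1]:
            raised.append(side)
    return raised

pose = {"left_shoulder": (120, 200), "left_wrist": (110, 120),    # raised
        "right_shoulder": (180, 200), "right_wrist": (190, 260)}  # lowered
print(raised_hands(pose))  # ['left']
```

The real system feeds keypoints like these from multiple calibrated cameras into a 3D reconstruction; the booth demo was, in effect, this rule with better plumbing.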

This is the kind of technology that is used for self-driving cars, with incredibly mixed results. In 2018, Uber’s self-driving car could distinguish between a person walking and a person riding a bike but could not make sense of a 49-year-old woman walking her bike in Tempe, Arizona; the vehicle struck and killed her. At least the stakes for the JSS aren’t life and death — though, to the athletes, it can sometimes feel that way.

I was shocked by how much of Fujitsu’s booth was dedicated to crimes — not of the sports judging variety, but actual chargeable offenses. The monitors showed how this pose estimation might be applied to situations outside of sports. One showed how it could help prevent car theft; another demonstrated how it could discern whether people were getting up to no good in the self-checkout line, such as putting an item in their bag without first scanning it. In the press conference, there was also mention of its applications in healthcare and rehab settings, which is not hard to imagine for a technology that can measure body movements and angles as precisely as the JSS can.

“There has been increasingly this sense that we can’t just end with gymnastics because, you know, obviously it was a very expensive process to develop JSS,” Andrew Kane, then Fujitsu’s deputy head of international public relations, told me in Antwerp. Fujitsu’s end goal was never gymnastics.

Later, I followed up with Fujitsu and received a somewhat evasive answer. “We demonstrated different solutions related to Human Motion Analytics (HMA), which were for more than just gymnastics/sports,” Yuka Hatagaki of Fujitsu’s global PR wrote in an email about the booth’s contents. “The HMA technology that can analyze human movement with high precision cultivated through JSS can be applied to various industries, such as healthcare, ergonomics, and entertainment besides monitoring and theft prevention.”

The JSS was being developed as a means of capturing the body, of synthesizing the great range of human motion into something that could be understood by a computer. What gymnastics offered was a massive set of data to train the AI. Fujitsu mentioned additional uses in follow-up correspondence, including applications for physical therapists to develop hyper-specific programs for patients and gait analysis to detect early signs of dementia in the elderly, which sounds very promising to someone with a mother in cognitive decline.

All of this technology is built on the back of what I was witnessing around me in Antwerp. The heights of athleticism — and the competition as a whole — were used to feed a system that is repurposed and resold as a tool of surveillance. A solution in search of profit.


On the morning of the final day of competition in Antwerp, I was allowed to sit in the beam judges’ seat while the JSS was being calibrated and the arena was being set up for the evening’s competition. The field of play was clean, not yet covered in a white, chalky film, as it would be later when the gymnasts arrived to warm up. Some athletes mark the beam with chalk as a cue for where to start their acrobatic series. All of them douse themselves in the white stuff to mop up sweat on their feet and hands, both of which they need to grip the apparatus. It’s even worse over at the uneven bars, where the whole apparatus is covered in the stuff. At a gymnastics meet, magnesium is always in the air.

In person, the beam seems smaller than it does on TV. When you’re watching on television, the camera zooms in on the apparatus and athlete. It’s practically all you see. Live, the equipment and the gymnast are set against the massive arena. You don’t get a sense of that scale on your screen. Still, the action seems more impressive in person, even if everything and everyone appears smaller. The added dimension really makes a difference. And in some cases, so does the massive arena. There are gymnasts out there, like Simone Biles, who, despite their diminutive stature, seem able to truly fill the space.

As an exercise, I tried to imagine what it would be like to actually rigorously evaluate a routine, to look at it piece by piece, and find favor or fault with it when medals are on the line. Imagining that burden left me with a queasy anxiety. Years of watching and analyzing the sport, mostly from the comfort of my couch, qualified me to do exactly what I was in Antwerp to do — report on a gymnastics competition — and little more, my success at identifying the credited switch ring notwithstanding.

“You cannot duplicate [that pressure] when you sit in your chair and in front of you are the best gymnasts, maybe trying to qualify for the Olympic Games,” Sacchi told me. She said that even after all of her years as a judge, she is still nervous before big events. At least the JSS can’t experience anxiety.

I get why, with so much on the line, you’d reach for a technology that promises to overcome human limitations. What the JSS offers is not only the promise of accuracy but also consistency, across rounds of competition, across several days of competition. It will not tire after a 12-hour judging day the way that human judges are wont to do. Gymnasts and coaches don’t like competing in the earliest subdivisions for a reason: the judges are fresh, and their figurative pencils — they actually use tablets — are sharp, and as a result, the execution scores tend to be lower. (The JSS doesn’t yet address the execution score, but I imagine that this is the eventual goal for the technology and would make the system more useful in the long term.)

Some of the hopes that are being pinned on the JSS, such as increased transparency, which Watanabe mentioned in his opening remarks at the conference, seem misplaced. Yes, the JSS can provide a lot of detailed information, but that is not the same thing as transparency. The FBI collects lots of information on US citizens, often through high-tech means, but no one would accuse it of being transparent. (Any journalist who has tried to get info from the FBI knows that it’s actually a black hole.) The fact that the JSS is collecting all this data doesn’t mean it will be shared with the gymnastics community. Ultimately, transparency is not a question of technology but of policy.

The yearslong process that it took to create the JSS illuminated the complexity of the judging task, which simultaneously calls for technological intervention and impedes it at every turn. Some of that complexity is unavoidable, even desirable. It shows a sport that is constantly evolving, its athletes always innovating. And some of it points to opportunities to streamline and improve the rules.

Later that day, when I was back in the media section where I belonged, I watched the eight women who qualified for the beam final. Biles won the gold there, her performance clean and surefooted. Her pace was brisk, moving from one element to the next with only the most minor of adjustments. She competed with the nonchalance of someone who has been there many times before. In second was Chinese gymnast Zhou Yaqin, a newcomer who showed a lot of style and precision in her world championship debut. She was rewarded with a 14.7 for her efforts, just a tenth behind Biles. Zhou’s coach immediately filed an inquiry because they had been anticipating a higher D-score, based on what she had been previously awarded. It would all come down to the question of those frustrating connections, the ones that the JSS is not yet able to adjudicate.

After a few minutes, the announcer told the audience that there had been no change. Biles would remain in first, Zhou in second. From my seat, a few rows above the judges, this result seemed fair — though, if it had gone the other way and Zhou had received the additional tenth, tying Biles, I might’ve felt the same way. With so little separating gymnasts, who wins and who loses can, at times, feel more like a judgment call. Everything can be endlessly debated on social media. This can have the effect of making it feel like no results are ever truly final. One of the hopes for the JSS is to offer finality to the outcomes so that when an athlete looks back on their career, the counterfactuals they might spin have nothing to do with the competency of the judges evaluating them that day.

“When I speak to coaches, judges, administrators, [I] say the job of the judge is to separate gymnasts,” Butcher said. The judges’ job is to slice finely, to find the difference between gymnasts, and rank them accordingly.

Judging and scoring in gymnastics can certainly be improved, and perhaps the JSS can help along that trajectory. But we’ll never escape human judgment altogether, no matter how discomfiting that thought might be.

Google’s new Nest Thermostat has an improved UI and ‘borderless’ display

Google’s new Nest Thermostat has an improved UI and ‘borderless’ display
Google’s 4th generation Nest learning thermostat
Image: MysteryLupin (X)

Google appears to be preparing to launch a fourth generation Nest Learning Thermostat. Details on the thermostat first leaked last week, and now the same leaker has posted more documents on X that show this new model will have a customizable home screen, a new “Dynamic Farsight” feature, and a “borderless” display.

The leaks detail a “high-res borderless display,” which looks like it will allow the UI of the Nest Learning Thermostat to extend into the area that was typically a black bezel on existing third generation units. Other leaked images show UI elements that appear much closer to the edges of the display, too.

 Image: MysteryLupin (X)
The leaked specs of the fourth generation Nest Learning Thermostat.

Dynamic Farsight is mentioned, but the leak doesn’t show how this new feature works. Farsight on existing Nest models wakes the unit when you walk up to it to show information like the weather, the time, or room temperature. I’m hoping the “dynamic” in the name means the fourth generation can show a mixture of information this time, rather than being limited to certain options.

Google’s leaked documents also mention “natural heating and cooling” and a customizable home screen for the fourth generation Nest Learning Thermostat. It’s not clear how the natural heating and cooling works, nor just how customizable the new home screen is.

 Image: MysteryLupin (X)
Some of the UI on the new Nest Learning Thermostat.

The fourth generation Nest Learning Thermostat will come with an oval-shape trim plate in the box, alongside the usual thermostat base, a rear steel plate, and a second-generation Nest Temperature Sensor. Google only has very basic integration of its third generation Nest Learning Thermostat into the Google Home app, but this new model has a full UI within the app that allows you to change all the settings and control schedules without having to switch over to the Nest app.

There’s no mention of hardware upgrades beyond the display changes, but I’m hoping Google has greatly improved the internals here after nearly 10 years. Third-generation Learning Thermostats are notorious for having Wi-Fi chip issues, where the units will die and have to be replaced. I’ve personally had to replace my Nest Learning Thermostat twice because of the Wi-Fi problems.

The leaker also suggests this new fourth generation model will be priced at $279, making it the most expensive Nest thermostat yet. The new Nest temperature sensors are said to be $39 each, or three for $99. We might get official launch dates and pricing during the “made by Google” hardware event on August 13th.

dimanche 28 juillet 2024

Germans Combat Climate Change With D.I.Y. Solar Panels

Germans Combat Climate Change With D.I.Y. Solar Panels Plug-and-play solar panels are popping up in yards and on balcony railings across Germany, driven by bargain prices and looser regulations.

This $56 Casio watch is a retro step tracking dream

This $56 Casio watch is a retro step tracking dream
Close-up of person interacting with Casio WS-B1000 smartwatch
Did I mention it’s only $56?!

It doesn’t do anything other than track steps, but that’s all I want it to do. And at this price? I ain’t complaining.

When I was in high school, all I wanted was a Baby-G Casio watch — partly because it came in fun colors, partly because all the cool kids had one. When I finally convinced my mom to get me one, I loved it to pieces until its battery died ages later. It’s been over 20 years since then, but as Y2K fashion invades my TikTok algorithm, I think a lot about how my watches used to just be watches that looked nice. Sometimes I feel like I want to go back to those days... then I remember that the main reason I got into smarter watches was for step tracking.

And then I found out about the Casio WS-B1000, which costs a mere $55.95, syncs with your phone for the time, and tracks steps. What!?

It’s not unfathomable that today’s Casio watches could be more than the analog watches of my youth. And yet it hadn’t occurred to me to check. Never mind that I reviewed a more rugged Casio Wear OS watch a few years ago — that was a chunky multisport watch at a time when the Wear OS struggle bus had a perpetual flat tire. But after a bit of digging, it turns out that Casio has modernized a few of its watches to have a bit more fitness tracking functionality while keeping that classic Casio design.

Wide shot of person wearing Casio WS-B1000 while holding a backpack in front of a pastel background
I appreciate that it doesn’t overpower my wrist.

The WS-B1000 is one such watch, though it keeps things very simple. There’s no optical heart rate monitor, OLED display, fancy health sensors, contactless payments, or LTE connectivity. This device has Bluetooth to connect with your phone, an accelerometer to track steps, your classic stopwatch and timer functions, alarms, move reminders, and an LCD screen with a backlight button. In other words, just enough smarts to count as a fitness tracker — but barely.

A few years ago, that feature set probably wouldn’t have appealed to me. But these days, I’m at a point in my fitness journey where I’m recovering from mental and physical burnout from prolonged overtraining. It is a frustratingly long process, and to my surprise, the things that’ve kept me going are devices and apps that prioritize rest and simplicity over “going hard.” Many current smartwatches hurl active minutes, standing goals, calorie burn goals, and other targets at you — so many goals for you to hit daily that it can be overwhelming. So the fact that the WS-B1000 can only track steps or work as a stopwatch? That’s a plus.

Front view of person wearing Casio WS-B1000
The Y2K vibes are immaculate.

And you know what? The three weeks I tested the WS-B1000 were delightful. I’d forgotten how nice it is to set a simple step goal and try to meet it. With this watch, I could just look down and say, “Uh-oh! It’s 4PM and I’m at 2,000 steps. Time to go for a walk.” If I wanted to check my history, I could go to the Casio app and view a rough log. There was nothing fancy, and that’s just how I wanted it. Accuracy-wise, I was generally within 500–1,000 steps of my Apple Watch Ultra — which is a fair margin of error given they were worn on different arms and I talk with my hands. But if you’re opting for something like this, the general goal is to simply move more, and this is just fine for that.

There were other little things I appreciated, too. Because the watch doesn’t need the sensors, chips, and giant battery of a smartwatch, it’s remarkably light to wear. It only weighs 36 grams, and for once, I didn’t look like I had a giant hockey puck strapped to my wrist. I also never had to worry about charging the dang thing, either — it runs on a CR2016 coin cell battery that lasts approximately two years.

The neat part about the Casio app is that it automatically syncs the time so you don’t have to sit there fiddling with buttons to reset the time or set alarms. (I’m terrible at that on older watches; I can never remember how to do it or into which drawer I stuffed the user manual.) That stuff you can program from your phone.

Obviously, this isn’t going to be the watch for folks who want the most out of their smartwatch. But if, like me, you’d like an occasional break from the fitness tech grind, or if the ideal of chill, low-tech fitness appeals to you, this is an excellent option. And might I remind you that it’s just $56?! Most basic trackers in this range tend to be fitness bands, whereas this is a cute, retro-chic Casio watch.

Alas, I only have two wrists, and as a wearables reviewer, I have to rotate out the Casio for the next smartwatch in my testing queue. But I have a pretty good feeling that, in between products, this is the watch I’ll be reaching for.

Democratic Meme Makers Rejoice During Kamala Harris’s Campaign

Democratic Meme Makers Rejoice During Kamala Harris’s Campaign After a few sluggish years under President Biden, liberal social media creators are seeing their messages resonate as Kamala Harris campaigns for the White House.

samedi 27 juillet 2024

Marvel is bringing the Russo Bros. back to direct the next two Avengers films

Marvel is bringing the Russo Bros. back to direct the next two Avengers films
Two men in evening wear standing side-by-side on a red carpet.
Photo by Tristan Fewings / Getty Images

Following reports that the Russo brothers were in talks to helm more features for Marvel, the studio confirmed at this year’s San Diego Comic-Con that the pair will be directing two new Avengers films.

During Marvel’s Hall H panel at San Diego Comic-Con, the studio announced that the Russo brothers have signed on to direct Avengers: Doomsday and Avengers: Secret Wars — the former of which will see Robert Downey Jr. return to the MCU as Doctor Doom. Doomsday is due out in May 2026, while Secret Wars will follow in May 2027.

The very surprising pivot to Doctor Doom comes after a number of setbacks for The Kang Dynasty — Marvel’s previously announced (and now seemingly sidelined) Avengers film. Last fall, after news broke that Marvel was delaying The Kang Dynasty’s premiere by a full year, Destin Daniel Cretton and Michael Waldron stepped down as the film’s original director and writer, respectively.

Jeff Loveness signed on to pen a script shortly after Waldron’s exit, but the film’s fate seemed even more uncertain following Marvel’s decision to fire actor Jonathan Majors — who played Kang in Disney Plus’ Loki series and in Ant-Man and the Wasp: Quantumania — for his involvement in a domestic violence case.

Doctor Doom showing up just as the Fantastic Four are about to arrive makes it seem like Marvel’s been cooking up a plan to send its beleaguered Multiverse Saga off with a bang — one that’s probably going to be orchestrated by a guy who looks just like Iron Man.

Silo season 2 hits Apple TV Plus this November

Silo season 2 hits Apple TV Plus this November
A still photo from season 2 of Silo.
Image: Apple

The end of the world just got a little closer. Apple confirmed that the much-anticipated second season of its postapocalyptic series Silo will start streaming on November 15th.

Silo premiered last May and was renewed before the first season finished. Apple first teased season 2, alongside the return of Severance, at WWDC 2024. (Severance’s second season will start streaming in January.)

Based on the trilogy of novels by Hugh Howey, Silo is set in the distant future and follows the remains of humanity, who live in giant underground bunkers — the titular silos — to avoid the deadly world outside. The season 1 finale ended with a big twist that made it clear there’s a lot more going on than the show initially lets on. It also only covers part of the first book, so it’ll be interesting to see how much of the story the new season covers. Silo’s second season will see returning cast members like Rebecca Ferguson, Tim Robbins, and Common.

The show is part of an ever-growing library of science fiction series on Apple TV Plus, which also includes the likes of Sunny, Dark Matter, Constellation, Invasion, and Foundation as well as upcoming series based on Neuromancer and The Murderbot Diaries.

8BitDo’s first mechanical keyboard is down to its best price to date

8BitDo’s first mechanical keyboard is down to its best price to date
Two keyboards in a sea of retro tech and Nintendo paraphernalia.
The Western-style “N Edition,” which is inspired by the original NES controller, is on sale at Woot through July 30th. | Image: 8BitDo

With all the excitement over the 2024 Olympics, it’s easy to forget that this weekend is the last weekend of July. Yet, as August approaches, so does back-to-school season. We’ve pulled together a list of gadgets and goods fit for the occasion, but if you need another suggestion, 8BitDo’s Retro Mechanical Keyboard is down to an all-time low at Woot. Now through July 30th, you can buy the “N Edition” model for $69.99 ($30 off) or the Famicom-inspired “Fami Edition” for $59.99 ($40 off), both of which come with a 90-day Woot warranty.

No matter which you choose, both tenkeyless mechanical keyboards can spruce up any home office or add a bit of fun to long study sessions. The Western-style “N Edition” is inspired by the original NES controller, while the cheaper board resembles the OG Famicom in color and styling. Both come with a pair of programmable “Super Buttons” that scream retro, along with clicky hot-swappable switches that allow for a more customized experience. Even better, both support USB-C and other connectivity modes, including Bluetooth and even 2.4GHz wireless via a dongle.

A few more deals to kickstart the weekend

  • Lego’s Tales of the Space Age set is on sale for $39.99 ($10 off) at Amazon and Walmart, which matches its best price to date. Inspired by sci-fi films and books from the ‘80s, the 688-piece kit lets you assemble all kinds of postcard-like display models, including those depicting shooting stars, comets, and other celestial objects.
  • Woot is selling a pair of Blink Mini 2 cameras for just $49.99 ($30 off) through August 22nd. Given a single unit typically costs $39.99, it’s almost as if you’re getting the second 1080p cam for free. Like the original model, the second-gen Mini is a tiny indoor camera with motion alerts and two-way audio; however, it now offers IP65 waterproofing, so you can use it outside if you purchase Blink’s optional Weather Resistant Power Adapter. Read our review.
  • You can buy a physical PS4 copy of Assassin’s Creed Mirage for $14.99 ($35 off) through July 30th from Woot, which is $10 less than the price we saw during Prime Day. The game includes a free digital copy for the PS5, too, along with a physical map of Baghdad and three lithographs. It’s not the most impressive game in the Assassin’s Creed series, but it’s still an enjoyable return to form with a heavy focus on stealth and assassinations. Read our review.
  • Google’s latest thermostat might be just around the corner, but if you want a smart thermostat to help cool your home ASAP, the Nest Learning Thermostat is available from Google, Lowe’s, and Best Buy for around $169 ($80 off). Google’s third-gen thermostat is still a good investment despite its age, one that’s capable of learning your cooling and heating preferences over time. It also supports a wide array of smart home platforms, though, sadly, it still doesn’t offer Matter support like the entry-level Nest Thermostat.

How Do You Solve a Problem Like Elon?

How Do You Solve a Problem Like Elon? Linda Yaccarino, the C.E.O. of X, has worked hard to bring back advertisers and fix the platform’s business. But its owner, Elon Musk, is always one whim away from undoing her work.

Memecoins, Cryptocurrencies Based on Internet Memes, Roar Back

Memecoins, Cryptocurrencies Based on Internet Memes, Roar Back One of the wildest, most scam-ridden corners of the cryptocurrency industry — memecoins, which are rooted in internet memes — has roared back.

vendredi 26 juillet 2024

Justice Dept. Defends TikTok Law That Forces App’s Sale or Ban

Justice Dept. Defends TikTok Law That Forces App’s Sale or Ban In its first detailed response to a legal challenge, the agency said TikTok’s proposed changes wouldn’t prevent China from using it to collect U.S. users’ data or spread propaganda.

Steam is getting some big upgrades for game demos

Steam is getting some big upgrades for game demos
The Steam brand logo against a blue and black backdrop
Image: The Verge

Valve has introduced some changes to Steam that should make it easier to find and install playable game demos. In a new events blog post, Steam said the “Great Steam Demo Update” was based on developer and player feedback, with new functionality that makes demos behave more like standalone games hosted on the platform.

For one, demos can now have a store page that’s completely separate from the main game, allowing developers to display demo-specific content like trailers, screenshots, and supported features. These pages will also display buttons to both install the demo and visit the main game’s store page, and allow players to leave demo-specific reviews.

And if a demo becomes available for a game that users have on their wishlists, or from a developer that they follow, those users can now be notified via email or mobile alerts. Demo listings can also now appear on the same lists and category pages as free games, such as the “New & Trending” section of Steam’s homepage charts. Alongside free games, users may see demos appearing more frequently as Steam says it’s made “some changes to the thresholds” in order to “better balance them with paid products.”

A snapshot of Steam’s game listings. Image: Steam
Free games and the newly supported gaming demo listings will appear more frequently in Steam’s charts.

Other new features in this update include the ability to add demos to Steam libraries without immediately installing them, allowing demos to be installed even if the user already owns the full game, and making it easier to remove demos by right-clicking on them. When the demos are uninstalled they’ll also be removed from the user’s library.

The visibility changes introduced in this update may resurface older demos on user accounts, with Steam saying “We’ve tried our best to clean up the demos that we expect you don’t care about anymore, but we may have missed some.”

Amusingly, Steam added a note in the “infrequently asked questions” section of the blog for users who don’t know that Steam’s demo icon is based upon the Compact Disc, not a dinner plate. Am I so old that we’re really at the point of modernizing the save icon / floppy disk gag?

When A.I. Fails the Language Test, Who Is Left Out of the Conversation?

When A.I. Fails the Language Test, Who Is Left Out of the Conversation? The use of artificial intelligence is exploding around the world, but the technology’s language models are primarily trained in English, leaving many speakers of other languages behind.

Windows 11 will soon add your Android phone to File Explorer

Windows 11 will soon add your Android phone to File Explorer
A photo of Microsoft’s Surface Pro with an OLED display.
Photo by Chris Welch / The Verge

Microsoft has started testing a new way to access your Android phone from directly within Windows 11’s File Explorer. Windows Insiders are now able to test this new feature, which lets you wirelessly browse through folders and files on your Android phone.

The integration in File Explorer means your Android device appears just like a regular USB device on the left-hand side, with the ability to copy or move files between a PC and Android phone, and rename or delete them. It’s certainly a lot quicker than using the existing Phone Link app.

 Image: Microsoft
Android phones will appear inside File Explorer.

You’ll need a device running Android version 11 or higher, membership in the Windows Insider program, and the beta version of the Link to Windows app to get the feature working right now. All four Windows Insider channels are getting access to test this, including the Release Preview ring — which suggests that it won’t be long until everyone running Windows 11 will be able to access this new feature.

You can enable this File Explorer feature by navigating to Settings > Bluetooth & Devices > Mobile Devices and selecting the manage devices section to allow your PC to connect to your Android phone. A prompt will include a toggle for access in File Explorer, alongside the usual selections for notifications and camera access.

Kamala Harris’s Bratty Coconut Memescape + What Does $1,000 a Month Do? + The Empire CrowdStrikes Back

Kamala Harris’s Bratty Coconut Memescape + What Does $1,000 a Month Do? + The Empire CrowdStrikes Back An episode unburdened by what has been.

mercredi 24 juillet 2024

Amazon is discontinuing my favorite Echo — the one with a dot-matrix clock

Amazon is discontinuing my favorite Echo — the one with a dot-matrix clock
An Echo Dot with Clock on a counter
Photo by Jennifer Pattison Tuohy / The Verge

I have six Amazon Echo smart speakers in my house, and I’ve tested more, but my favorite is the Echo Dot with Clock. I love how the fabric-covered LED dot matrix display makes time unobtrusively accessible, beaming its gentle white light from my dresser across my blackout-curtained dark bedroom. (It definitely beats asking Alexa the time.)

So I’m sorry to be the bearer of bad news: Amazon has discontinued the Dot with Clock in favor of a more expensive, less eye-pleasing model.

“You can check the product page for the latest device availability, but once inventory of this generation Echo Dot with Clock is sold through it will not be restocked,” Amazon spokesperson Liz Roland tells The Verge.

Amazon didn’t tell us why it’s going away. At first, I mistakenly thought it might be due to hidden defects — my own Echo Dot with Clock began mysteriously freezing a few weeks back, completely unresponsive to voice commands and with images stuck on its display. Multiple resets didn’t help.

But after I successfully argued for Amazon to credit me for a replacement Echo, it began working again. (I had to hard reset it, then go through the setup process multiple times in the Alexa app to get it working.)

When I went looking for a replacement $60 Echo Dot with Clock, I was surprised to find Amazon didn’t stock it anymore — only refurbished models were available when I checked, even though the blue model is available again at Amazon and Target today. So instead, I took a chance on the company’s spiritual successor: the $80 Echo Spot, which replaces the dot-matrix display with a screen.

But despite being more expensive, I’m finding the Spot inferior for my purposes. While its screen isn’t too bright for a dim bedroom, it’s not what I’d call visually pleasing. It never lets me forget I’m staring at a cheap screen. Plus, the whole screen is tilted upwards, presumably for nightstand use, not my tall dresser. I have no nightstands in my bedroom.

Photo of an Echo Spot sitting on a nightstand. Photo by Jennifer Pattison Tuohy / The Verge
The Echo Dot with Clock and the Echo Spot, flanked by other small smart displays.

My colleague Jennifer Pattison Tuohy is currently working on a full review of the Echo Spot, and she likes it a good bit better than I do!

But she says it doesn’t sound quite as good as the Dot either (though audio’s more directional), and it still doesn’t let you do anything as basic as setting an alarm with touch like you can with other smart displays. The main benefits are music playback controls and the ability to display time, date, temperature, and the weather simultaneously.

Now that my Echo Dot with Clock is working again, I’ll be returning the Spot — and the money that Amazon credited me.

A new Nest Learning Thermostat might be on the way

A new Nest Learning Thermostat might be on the way
A new Nest Learning Thermostat appears to be imminent. | Image: MysteryLupin via X

Leaked images posted on X by @MysteryLupin show a fourth-generation Nest Learning Thermostat and new temperature sensors, as well as other thermostats: the Nest Thermostat E and the third-gen Nest Learning Thermostat. Missing from the pictures is the Nest Thermostat (2020), presumably because it’s not compatible with Nest’s room sensors.

The new addition looks similar to the third-generation model but appears to have a more curved display while retaining the physical dial, as pointed out by 9to5Google. The display is likely a touchscreen, as with the third-generation model, and the image shows a new icon with three wavy lines.

This indicates that the new thermostat is also an indoor air quality monitor, which would be a new feature for Nest thermostats. A leaked screenshot of the Google Home app with a new Climate screen backs up this theory by showing an air quality index score.

 Image: MysteryLupin via X
The first new Nest Learning Thermostat from Google since 2015 could be on its way.

Since the Nest E, which has been discontinued in the US, is pictured, this could indicate that, unlike the Nest Thermostat (2020), the new model will also launch in Europe.

The thermostat appears to work with a redesigned Nest Temperature Sensor (second-gen), which, like the current models, is wall-mountable or can be placed on a table. However, it doesn’t have the Nest branding at this time and is rounder and squishier-looking. The posts indicate that these will cost $39 each, or three for $99, and have a three-year battery life.

That’s the same price as the current sensors, which feed the temperature from other rooms of your house to the thermostat to help balance heating and cooling. But as we said in our review, they are limited compared to those from competitors like Ecobee, as they don’t detect presence. Hopefully these new versions will bring more function.

 Image: MysteryLupin via X
The new temperature sensors appear to be more rounded, almost marshmallow-like. A new Climate screen in the Google Home app now shows air quality — according to the leaked images.

9to5Google has dug up FCC filings showing the new thermostat may sport Google’s Soli radar, used to light up the thermostat’s display when you approach it, and detect presence to feed into Google’s Home & Away Routines. Soli is in the Nest Thermostat (2020) but not in the third-gen Nest Learning Thermostat from 2015, which uses motion sensors.

The FCC filings didn’t indicate a Thread radio in the thermostat, which would be surprising, considering Thread was developed for the original Nest thermostat (although the 2020 model doesn’t have it, either).

 Image: MysteryLupin via X
This image shows the temp sensor mounted on a wall.

It's been a long time since Google launched a new Learning Thermostat, which can adapt to your heating and cooling patterns instead of mainly sticking to a schedule, as the newer, cheaper Nest Thermostat does.

The current Nest Learning Thermostat has been on sale for $169 (down from $249) for a while now. It also doesn’t support Matter, the smart home standard Google is a big part of, while the cheaper Nest Thermostat does. With a big Google hardware event scheduled for August 13th, we will find out soon enough if these leaks are genuine.

 Image: MysteryLupin via X
The new Nest Temperature Sensors appear to have similar features to the current versions.

mardi 23 juillet 2024

X replaced the water pistol emoji with a regular gun, for some reason

Vector collage of the X logo.
Image: The Verge

Years after Twitter replaced the pistol emoji with a green-and-orange water gun, X has decided to change it back to a regular handgun. An X employee announced the change in a post last week.

The company hasn’t explained the change, but it feels on-brand for Elon Musk’s social network. Twitter originally switched its emoji to display a water gun in 2018, following others like Google and Facebook. (Apple made the switch in 2016; Microsoft was a brief hold-out.)

We’ve embedded a screenshot of the X post, so you can see the gun image. (On some devices, the actual post still shows a water gun when embedded.)

Eventually, the Unicode Consortium, which decides which emoji get made in the first place, followed the platforms’ lead and officially renamed the pistol emoji as “water pistol”:

The Unicode website shows “water pistol” as the official short name for emoji #1121.
The current entry for the “water pistol” emoji.

Emoji are universal insofar as they share common designations across platforms (U+1F52B is the water pistol), which are decided by the Unicode Consortium. But it’s up to each platform owner to decide how they’re visually represented. That’s how we got the Great Cheeseburger Emoji Debacle that was resolved in November 2017.

You’ll only see the gun if you’re looking at X on the web — as of this writing, it doesn’t appear to have updated in mobile versions of the app, though that’s apparently on its way at some point.

The cheapest Wi-Fi 7 router is this $99 TP-Link

Image: TP-Link

TP-Link has debuted the Archer BE3600, a $99 Wi-Fi 7 router that is the cheapest we’ve seen released in the US since the first routers supporting the new standard started arriving last year.

It doesn’t have the new 6GHz band found in its pricier cousins, though, or even in many of the Wi-Fi 6E routers already on the market. As a result, for many people, TP-Link’s new router probably won’t make downloads much faster, if at all, than a much older router would.

Wi-Fi 7’s new tricks can still mean a small throughput boost, or a more stable connection in congested areas than routers built to older specs can offer, thanks to the way Wi-Fi 7 handles its data streams. But without the one-two 6GHz punch of wider data channels and much more unoccupied spectrum, you simply won’t see many of the multi-gigabit benefits hyped in Wi-Fi 7 marketing, and if you have a multi-gig internet connection, you should probably connect it to something a little more upmarket.

There are things to like here, though. Two of its five ethernet ports offer 2.5Gbps connections, which is rare at this price. It also supports Multi-Link Operation, which won’t be so much a throughput benefit (again: no 6GHz band) but could mean a more stable connection for a Wi-Fi 7-capable phone or VR headset — if one band fails or is too busy, your future device can fall back onto the other one. And it supports the Wi-Fi Alliance’s EasyMesh standard, meaning it can make mesh networks with routers from other brands that also support the standard.

The most significant thing about this router seems to be that it offers Wi-Fi 7 for less than $100. That’s a first, and by a fair amount — the low end right now is otherwise generally around $300 (see TP-Link’s Deco BE63 or Archer BE550).

Picture of the Archer BE3600 from behind. Image: TP-Link
Good port selection for such a cheap router.

lundi 22 juillet 2024

Far Right Spreads Baseless Claims About Biden’s Whereabouts

President Biden, who has been sidelined with Covid, is set to address the nation this week.

Congress Calls for Tech Outage Hearing to Grill Executive

The House Homeland Security Committee called on the chief executive of the cybersecurity firm CrowdStrike to testify on the disruption.

Slack introduces iPhone widgets to make work more inescapable

The new Slack Status update widget on a simulated iPhone screen.
Slack introduced its first iPhone and iPad widgets today. | Screenshot: Slack

Today, Slack introduced the first four widgets for the iOS version of its mobile app. Three of them are designed for the iPhone’s homescreen, while the fourth can be added to the lockscreen, allowing users to jump immediately into the Slack app after unlocking their device.

The homescreen widgets include Catch Up, which provides an at-a-glance look at how many unread messages and mentions a user has without opening the app. It gives a little more detail than the Slack app icon’s badge, and tapping the Catch Up widget takes users directly to that section of the Slack app so they can quickly swipe through conversations they’ve missed.

The other two homescreen widgets streamline Slack status updates. Tapping the smaller Status widget also takes users directly to that section of the Slack iOS app, while a larger version offers the same functionality plus three preselected status options: one-hour “Focus” and “Lunch” statuses, plus a half-hour “Take a break” status. However, those preselected options aren’t customizable at this point.

Slack finding its way onto all of our devices has already made work feel like a nonstop thing. If you really want to bring more work into your life, adding these widgets could go even further, putting work front and center every time you open your phone.

Delta Flight Cancellations Continue As It Struggles To Recover From Tech Outage

Transportation Secretary Pete Buttigieg singled out the airline on Sunday for continued disruptions and “unacceptable” customer service as it canceled another 1,300 flights.

AI terminology, explained for humans

Illustration of a computer teaching other computers how to learn.
Image: Hugo J. Herrera for The Verge

Articles today are filled with AI jargon. Here are some definitions to get you through.

Artificial intelligence is the hot new thing in tech — it feels like every company is talking about how it’s making strides by using or developing AI. But the field of AI is also so filled with jargon that it can be remarkably difficult to understand what’s actually happening with each new development.

To help you better understand what’s going on, we’ve put together a list of some of the most common AI terms. We’ll do our best to explain what they mean and why they’re important.

What exactly is AI?

Artificial intelligence: Often shortened to AI, the term “artificial intelligence” is technically the discipline of computer science that’s dedicated to making computer systems that can think like a human.

But right now, we’re mostly hearing about AI as a technology, or even an entity, and what exactly that means is harder to pin down. It’s also frequently used as a marketing buzzword, which makes its definition more mutable than it should be.

Google, for example, talks a lot about how it’s been investing in AI for years. That refers to how many of its products are improved by artificial intelligence, and to how the company offers tools like Gemini that appear to be intelligent. There are the underlying AI models that power many AI tools, like OpenAI’s GPT. Then, there’s Meta CEO Mark Zuckerberg, who has used AI as a noun to refer to individual chatbots.

As more companies try to sell AI as the next big thing, the ways they use the term and other related nomenclature might get even more confusing. There are a bunch of phrases you are likely to come across in articles or marketing about AI, so to help you better understand them, I’ve put together an overview of many of the key terms in artificial intelligence that are currently being bandied about. Ultimately, however, it all boils down to trying to make computers smarter.

(Note that I’m only giving a rudimentary overview of many of these terms. Many of them can often get very scientific, but this article should hopefully give you a grasp of the basics.)

Machine learning: Machine learning systems are trained (we’ll explain more about what training is later) on data so they can make predictions about new information. That way, they can “learn.” Machine learning is a field within artificial intelligence and is critical to many AI technologies.
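
To make that idea concrete, here’s a toy sketch of our own (not a real machine learning library): a one-nearest-neighbor classifier, which “learns” by storing labeled examples and predicts the label of a new point by finding its most similar training example.

```python
# A toy sketch of "learning from data": a one-nearest-neighbor
# classifier labels a new point by finding its most similar
# training example. (Illustrative only; real machine learning
# systems are far more sophisticated.)

def nearest_neighbor(training_data, new_point):
    """Return the label of the training example closest to new_point."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    closest = min(training_data, key=lambda pair: distance(pair[0], new_point))
    return closest[1]

# "Training data": (features, label) pairs, e.g. (height_cm, weight_kg).
training = [
    ((150, 50), "small"),
    ((180, 90), "large"),
]

print(nearest_neighbor(training, (155, 55)))  # -> small
print(nearest_neighbor(training, (178, 85)))  # -> large
```

The data and labels here are made up, but the pattern is the core of the field: the system was never told the rule, only shown examples.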

Artificial general intelligence (AGI): Artificial intelligence that’s as smart or smarter than a human. (OpenAI in particular is investing heavily into AGI.) This could be incredibly powerful technology, but for a lot of people, it’s also potentially the most frightening prospect about the possibilities of AI — think of all the movies we’ve seen about superintelligent machines taking over the world! If that isn’t enough, there is also work being done on “superintelligence,” or AI that’s much smarter than a human.

Generative AI: An AI technology capable of generating new text, images, code, and more. Think of all the interesting (if occasionally problematic) answers and images that you’ve seen being produced by ChatGPT or Google’s Gemini. Generative AI tools are powered by AI models that are typically trained on vast amounts of data.

Hallucinations: No, we’re not talking about weird visions. It’s this: because generative AI tools are only as good as the data they’re trained on, they can “hallucinate,” or confidently make up what they think are the best responses to questions. These hallucinations (or, if you want to be completely honest, bullshit) mean the systems can make factual errors or give gibberish answers. There’s even some controversy as to whether AI hallucinations can ever be “fixed.”

Bias: Hallucinations aren’t the only problems that have come up when dealing with AI — and this one might have been predicted since AIs are, after all, programmed by humans. Depending on their training data, AI tools can demonstrate biases. For example, in 2018, Joy Buolamwini, a computer scientist at MIT Media Lab, and Timnit Gebru, the founder and executive director of the Distributed Artificial Intelligence Research Institute (DAIR), co-authored a paper illustrating how facial recognition software had higher error rates when attempting to identify the gender of darker-skinned women.

Illustration of wireframe figure inside a computer monitor. Image: Hugo J. Herrera for The Verge

I keep hearing a lot of talk about models. What are those?

AI model: AI models are trained on data so that they can perform tasks or make decisions on their own.

Large language models, or LLMs: A type of AI model that can process and generate natural language text. Anthropic’s Claude, which, according to the company, is “a helpful, honest, and harmless assistant with a conversational tone,” is an example of an LLM.

Diffusion models: AI models that can be used for things like generating images from text prompts. They are trained by first adding noise — such as static — to an image and then reversing the process so that the AI has learned how to create a clear image. There are also diffusion models that work with audio and video.
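
As a rough illustration of the noising half of that process (purely schematic; real diffusion models work on millions of pixels with carefully scheduled noise), here’s what repeatedly blending a tiny “image” with random noise looks like:

```python
import random

def add_noise(pixels, noise_level, rng):
    """Blend each pixel value with uniform random noise."""
    return [(1 - noise_level) * p + noise_level * rng.uniform(0, 1)
            for p in pixels]

rng = random.Random(0)       # seeded so the run is reproducible
image = [0.0, 0.5, 1.0]      # a tiny three-"pixel" image
step1 = add_noise(image, 0.25, rng)
step2 = add_noise(step1, 0.25, rng)
# Each step drifts the values further from the original image;
# a diffusion model is trained to undo steps like these.
```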

Foundation models: These generative AI models are trained on a huge amount of data and, as a result, can be the foundation for a wide variety of applications without specific training for those tasks. (The term was coined by Stanford researchers in 2021.) OpenAI’s GPT, Google’s Gemini, Meta’s Llama, and Anthropic’s Claude are all examples of foundation models. Many companies are also marketing their AI models as multimodal, meaning they can process multiple types of data, such as text, images, and video.

Frontier models: In addition to foundation models, AI companies are working on what they call “frontier models,” which is basically just a marketing term for their unreleased future models. Theoretically, these models could be far more powerful than the AI models that are available today, though there are also concerns that they could pose significant risks.

Illustration of wireframe hands typing on a keyboard. Image: Hugo J. Herrera for The Verge

But how do AI models get all that info?

Well, they’re trained. Training is a process by which AI models learn to understand data in specific ways by analyzing datasets so they can make predictions and recognize patterns. For example, large language models have been trained by “reading” vast amounts of text. That means that when AI tools like ChatGPT respond to your queries, they can “understand” what you are saying and generate answers that sound like human language and address what your query is about.

Training often requires a significant amount of resources and computing power, and many companies rely on powerful GPUs to help with this training. AI models can be fed different types of data, typically in vast quantities, such as text, images, music, and video. This is — logically enough — known as training data.

Parameters, in short, are the variables an AI model learns as part of its training. The best description I’ve found of what that actually means comes from Helen Toner, the director of strategy and foundational research grants at Georgetown’s Center for Security and Emerging Technology and a former OpenAI board member:

Parameters are the numbers inside an AI model that determine how an input (e.g., a chunk of prompt text) is converted into an output (e.g., the next word after the prompt). The process of ‘training’ an AI model consists in using mathematical optimization techniques to tweak the model’s parameter values over and over again until the model is very good at converting inputs to outputs.

In other words, an AI model’s parameters help determine the answers that they will then spit out to you. Companies sometimes boast about how many parameters a model has as a way to demonstrate that model’s complexity.

Illustration of wireframe figure flipping through the pages of a book. Image: Hugo J. Herrera for The Verge

Are there any other terms I may come across?

Natural language processing (NLP): The ability for machines to understand human language thanks to machine learning. OpenAI’s ChatGPT is a basic example: it can understand your text queries and generate text in response. Another powerful tool that can do NLP is OpenAI’s Whisper speech recognition technology, which the company reportedly used to transcribe audio from more than 1 million hours of YouTube videos to help train GPT-4.

Inference: When a generative AI application actually generates something, like ChatGPT responding to a request about how to make chocolate chip cookies by sharing a recipe. Inference is also the work your own device performs when it runs an AI model locally.

Tokens: Tokens refer to chunks of text, such as words, parts of words, or even individual characters. For example, LLMs will break text into tokens so that they can analyze them, determine how tokens relate to each other, and generate responses. The more tokens a model can process at once (a quantity known as its “context window”), the more sophisticated the results can be.
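
A toy word-level tokenizer shows the idea (real LLM tokenizers, such as byte-pair encoders, split text into subword pieces instead):

```python
def tokenize(text):
    """Split text into lowercase word tokens."""
    return text.lower().split()

tokens = tokenize("The sky is blue")
print(tokens)       # -> ['the', 'sky', 'is', 'blue']
print(len(tokens))  # -> 4, counted against the context window
```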

Neural network: A neural network is a computer architecture that helps computers process data using nodes, which can be loosely compared to the neurons in a human brain. Neural networks are critical to popular generative AI systems because they can learn to understand complex patterns without explicit programming — for example, training on medical data to be able to make diagnoses.
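
A single node, or artificial “neuron,” is simple enough to sketch: it multiplies each input by a weight, sums the results, and squashes the total through an activation function. The weights below are invented for illustration; in a real network they would be learned during training.

```python
import math

def neuron(inputs, weights, bias):
    """Weighted sum of inputs, squashed to (0, 1) by a sigmoid."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

output = neuron([0.5, 0.8], weights=[1.2, -0.4], bias=0.1)
# A real network chains many layers of these nodes, and training
# adjusts all the weights and biases at once.
```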

Transformer: A transformer is a type of neural network architecture that uses an “attention” mechanism to process how parts of a sequence relate to each other. Amazon has a good example of what this means in practice:

Consider this input sequence: “What is the color of the sky?” The transformer model uses an internal mathematical representation that identifies the relevancy and relationship between the words color, sky, and blue. It uses that knowledge to generate the output: “The sky is blue.”

Not only are transformers very powerful, but they can also be trained faster than other types of neural networks. Since former Google employees published the first paper on transformers in 2017, they’ve become a huge reason why we’re talking about generative AI technologies so much right now. (The T in ChatGPT stands for transformer.)
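
The attention mechanism itself boils down to scoring and normalizing. Here’s a toy version using Amazon’s example sentence, with relevance scores that are made up for illustration (a real transformer computes them from learned query and key vectors):

```python
import math

def softmax(scores):
    """Normalize raw scores into weights that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

words = ["what", "is", "the", "color", "of", "the", "sky"]
# Hypothetical relevance of each word to the word "color".
scores = [0.1, 0.1, 0.0, 2.0, 0.1, 0.0, 1.5]

weights = softmax(scores)
# The largest weights land on "color" and "sky" -- the words the
# model would treat as most related when generating its answer.
```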

RAG: This acronym stands for “retrieval-augmented generation.” When an AI model is generating something, RAG lets the model find and add context from beyond what it was trained on, which can improve accuracy of what it ultimately generates.

Let’s say you ask an AI chatbot something that, based on its training, it doesn’t actually know the answer to. Without RAG, the chatbot might just hallucinate a wrong answer. With RAG, however, it can check external sources — like, say, other sites on the internet — and use that data to help inform its answer.
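
A bare-bones sketch of the retrieval half of that loop might look like this (real RAG systems use vector search over embeddings, not word overlap; the documents here are invented):

```python
import re

documents = [
    "The Eiffel Tower is 330 meters tall.",
    "Chocolate chip cookies need butter, sugar, and flour.",
]

def words(text):
    """Lowercase word set, ignoring punctuation."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(question, docs):
    """Return the stored document sharing the most words with the question."""
    q = words(question)
    return max(docs, key=lambda d: len(q & words(d)))

context = retrieve("How tall is the Eiffel Tower?", documents)
# The retrieved text would be prepended to the prompt so the model
# can ground its answer in it instead of guessing.
```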

Illustration of wireframe figure running over a circuitboard. Image: Hugo J. Herrera for The Verge

How about hardware? What do AI systems run on?

Nvidia’s H100 chip: One of the most popular graphics processing units (GPUs) used for AI training. Companies are clamoring for the H100 because it’s seen as the best at handling AI workloads over other server-grade AI chips. However, while the extraordinary demand for Nvidia’s chips has made it one of the world’s most valuable companies, many other tech companies are developing their own AI chips, which could eat away at Nvidia’s grasp on the market.

Neural processing units (NPUs): Dedicated processors in computers, tablets, and smartphones that can perform AI inference on your device. (Apple uses the term “neural engine.”) NPUs can be more efficient at doing many AI-powered tasks on your devices (like adding background blur during a video call) than a CPU or a GPU.

TOPS: This acronym, which stands for “trillion operations per second,” is a term tech vendors are using to boast about how capable their chips are at AI inference.
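
The arithmetic behind the boast is straightforward. With made-up numbers for illustration:

```python
tops_rating = 40          # chip rated at 40 trillion operations per second
workload_ops = 2e12       # a task needing 2 trillion operations
seconds = workload_ops / (tops_rating * 1e12)
print(seconds)            # -> 0.05
```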

Illustration of wireframe frame tapping an icon on a phone. Image: Hugo J. Herrera for The Verge

So what are all these different AI apps I keep hearing about?

There are many companies that have become leaders in developing AI and AI-powered tools. Some are entrenched tech giants, but others are newer startups. Here are a few of the players in the mix:

Here are the best Black Friday deals you can already get

Image: Elen Winata for The Verge
From noise-canceling earbuds to robot vacuums a...