Sunday 7 May 2023

The digital media bubble has burst. Where does the industry go from here?

Buzzfeed, Vice, Gawker and Drudge Report are all traffic-war casualties, but they succeeded in shaking up the media landscape

Toward the end of Traffic, a new account of the early rock’n’roll years of internet publishing, Ben Smith writes that the failings of Buzzfeed News had come about as a result of a “utopian ideology, from a kind of magical thinking”.

No truer words, perhaps, for a digital-based business that for a decade paddled in a warm bath of venture capital funding but never fully controlled its pricing and distribution, a basic business requirement that applies to information as much as it does to selling lemonade in the schoolyard or fossil fuels.

Continue reading...

The brief Age of the Worker is over – employers have the upper hand again

The pandemic ushered in an era of ‘quiet quitting’ and ‘bare minimum Mondays’ but workers have since lost leverage

It seems it was only yesterday that the media was filled with stories about workers calling the shots. There were the work-from-homers who refused to come back to the office after the pandemic was long over. There were the “quiet quitters” who proudly – and publicly – admitted that, even though they were collecting a paycheck from their employer, they weren’t doing much at all during the day except looking for another job. And then there was the group of workers advocating for “bare minimum Mondays” because, apparently, a five-day workweek was just too much to bear.

During the past few years, we’ve heard employees publicly demand unlimited paid time off, four-day workweeks, wellness sabbaticals, gigantic bonuses to switch jobs and even “pawternity leave” – getting time off when you adopt a puppy. Facing labor shortages, customer demands and supply chain headaches, most employers caved. The Age of the Worker blossomed.

Continue reading...

‘We’ve discovered the secret of immortality. The bad news is it’s not for us’: why the godfather of AI fears for humanity

Geoffrey Hinton recently quit Google warning of the dangers of artificial intelligence. Is AI really going to destroy us? And how long do we have to prevent it?

The first thing Geoffrey Hinton says when we start talking, and the last thing he repeats before I turn off my recorder, is that he left Google, his employer of the past decade, on good terms. “I have no objection to what Google has done or is doing, but obviously the media would love to spin me as ‘a disgruntled Google employee’. It’s not like that.”

It’s an important clarification to make, because it’s easy to conclude the opposite. After all, when most people calmly describe their former employer as being one of a small group of companies charting a course that is alarmingly likely to wipe out humanity itself, they do so with a sense of opprobrium. But to listen to Hinton, we’re about to sleepwalk towards an existential threat to civilisation without anyone involved acting maliciously at all.

Continue reading...

How to get a better mobile phone deal in the UK

With above-inflation increases, tips and tricks to find the right plan are even more important

There’s a dizzying array of mobile phone tariffs, and with many providers recently imposing above-inflation increases, it is even more important to choose the right deal. So how can you navigate the networks to get a plan that is right for you? What are the top tips for saving money?

Continue reading...

Saturday 6 May 2023

I’m glad you’ve bought an electric vehicle. But your conscience isn’t clean | John Naughton

First, you’ve got to drive a long way before you overcome your EV’s embedded carbon debt. And then there’s the trouble with the minerals in its battery…

So you’ve finally taken the plunge and bought an electric vehicle (EV)? Me too. You’re basking in the warm glow that comes from doing one’s bit to save the planet, right? And now you know that smug feeling when you are stuck in a motorway tailback behind a hideous diesel SUV that’s pumping out particulates and noxious gases, but you’re sitting there in peace and quiet and emitting none of the above. And when the traffic finally starts to move again you notice that the fast lane is clear and you want to get ahead of that dratted SUV. So you put your foot down and – whoosh! – you get that pressure in the small of your back that only owners of Porsche 911s used to get. Life’s good, n’est-ce pas?

Er, up to a point. True, there’s nothing noxious coming out of your exhaust pipe, because you don’t have one; and the electric motors that power your wheels certainly don’t burn any fossil fuel. But that doesn’t mean that your carbon footprint is zero. First of all, where did the electricity that charged that big battery of yours come from? If it came from renewable sources, then that’s definitely good for the planet. But in most countries, at least some of that electricity came from non-renewable sources, maybe even – shock, horror! – coal-burning generating stations.

Continue reading...

Discord’s username change is causing discord

Illustration by Alex Castro / The Verge

A race to reserve usernames is kicking off on Discord.

Starting in the next couple of weeks, millions of Discord users will be forced to say goodbye to their old four-digit-appended names. Discord is requiring everyone to take up a new common platform-wide handle. For Discord, it’s a move toward mainstream social network conventions. For some users, though, it’s a change to the basics of what Discord is — a shift that’s as much about culture as technology.

Discord has historically handled usernames with a numeric suffix system. Instead of requiring a completely unique handle, it allowed duplicate names by adding a four-digit code known as a “discriminator” — think TheVerge#1234. But earlier this week, it announced it was changing course and moving toward unique identifiers that resemble Twitter-style “@” handles.
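For the technically curious, here’s a minimal sketch in Python of how a discriminator scheme like this can work in principle — a toy model for illustration, not Discord’s actual implementation (the class and method names are invented):

```python
import random

class UsernameRegistry:
    """Toy discriminator system: many users may share a display name;
    a random four-digit suffix keeps each full handle unique."""

    def __init__(self):
        self.taken = {}  # display name -> set of discriminators in use

    def register(self, name: str) -> str:
        used = self.taken.setdefault(name, set())
        available = [d for d in range(1, 10000) if d not in used]
        if not available:
            raise ValueError(f"all discriminators for {name!r} are taken")
        disc = random.choice(available)
        used.add(disc)
        return f"{name}#{disc:04d}"

registry = UsernameRegistry()
print(registry.register("TheVerge"))  # e.g., TheVerge#1234
print(registry.register("TheVerge"))  # same name, different suffix
```

The move to Twitter-style handles replaces this per-name namespace with a single global one — which is exactly why a land grab becomes possible.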

Co-founder and CTO Stanislav Vishnevskiy acknowledged the change would be “tough” for some people, but he said the discriminators had proven too confusing. He noted that over 40 percent of users don’t know their discriminator number, which leads to “almost half” of all friend requests failing to connect people to the right person, largely due to mistyped numbers.

Over on Reddit, Vishnevskiy argued that the new handles wouldn’t even show up in the interface that often, since Discord will allow users to set a separate display name that’s not unique. In replies that drew more than 500 downvotes apiece, he called the original system a “halfway measure” and rejected ideas like simply adding more numbers to the end of a handle. “This was not a change that we decided to do lightly and have been talking about doing for many years, trying to avoid it if we could,” he posted.

During the change, Discord users will have to navigate a process that’s fraught with uncertainty and cutthroat competition. Users will need to wait for an in-app prompt telling them it’s their turn to select a new username, a rollout that will eventually reach all users over the course of “several months.” The company will assign priority based on Discord registration dates, so people who have had their name “for quite a while” will have a better chance of getting a desired name.

This raises a lot of obvious fears and thorny questions. Depending on who gets to set their usernames first, is there anything stopping people from taking over a particularly popular creator’s distinctive name? Should Discord prevent this by holding usernames for well-known creators, even if they’re not first in line? This is a problem for a lot of social networks, but unlike with some fledgling service attracting new users, all these people are already on Discord — in some cases, they’re probably even paid subscribers.

In a statement to The Verge, Discord said it would be trying to navigate the change gracefully for its best-known users. “We created processes for high-visibility users to secure usernames that will allow them to operate on our platform with minimal risk of impersonation,” said Kellyn Slone, director of product communications. “Users with a standing business relationship with Discord who manage certain partner, verified, or creator servers will be able to pick a username before other users in order to reduce the risk of impersonation for their accounts.”

A lot of Discord users will fall outside those boundaries. “As a content creator who has a relatively large fanbase — my handle is subject to username sniping by someone with an older account than me,” artist ZestyLemons, who uses Discord to connect with fans, writes to The Verge. “I am not a Discord partner, nor am I famous enough to obtain their recognition, so I will absolutely not have security with my public handle.” ZestyLemons noted that for people who do get desirable names, there’s the risk of being swatted or threatened into giving them up — something that’s happened on Instagram and Twitter.

Discord users understand right now that there are a lot of accounts with very similar names, distinguished only by random numbers at the end. But globally unique names change that understanding. They encourage people to look for believable usernames — if somebody nabs the one and only @verge (our Twitter handle) on Discord, people could be more inclined to believe it’s us.

And this pushes people to treat their Discord names like part of a centralized identity — rather than, as many users have referred to them, something like a private phone number. It compels individuals to take a username that represents them elsewhere before someone else does. This links whoever they are on Discord back to their internet-wide identity, with all the potential downsides — like stalking or a simple feeling of exposure — that entails.

Despite fears about individual users impersonating each other, the risks for server moderation are less clear — and some Discord server admins told The Verge they weren’t worried. “I don’t think the change will be a big deal for admins + mods,” says Emily, an admin for a large Pokémon Go meet group on Discord. The server already asks people to set server-specific nicknames that match their Pokémon Go trainer name, so they’re not relying on discriminators to tell people apart.

But Emily isn’t a fan of the change. “It’s a bummer that Discord’s giving in to the usual social media norms,” they said. “Discriminators were kinda clever… it allows many people [to] share the same name without stressing over the ‘perfect’ username. Discord is a more personal sort of social media. You’re not publishing publicly into the ether — like Twitter or something — so having a clever memorable username doesn’t matter.”

SupaIsaiah016, an avid Geometry Dash player who also runs a small Discord server, agrees. “The current username and discriminator system worked perfectly fine, and allowed for thousands of people to have the same name on the platform overall,” SupaIsaiah016 writes to The Verge. “Sites that use handles and display names such as Twitter have very different reasons as to why they use those systems, as they are public social medias.”

Part of the problem is simply that Discord is asking millions of settled users to make a huge change to their online identity, and there’s no great way to do that without friction. But there’s also a sense that Discord’s old username style made it a different, albeit clunky, kind of social network. And to many users, that was a part of the appeal.

“Discord was originally meant to be a messaging app, to which a lot of content creators used to separate their online lives versus their real, personal lives,” ZestyLemons writes. Verge reader SpookyMulder put it another way in the comments of our original news post. “Discord has some sort of pseudo-identity culture,” SpookyMulder writes. “We tend to value the freedom of anonymity in Discord than your usual social media @username identities.”

Whether you’re a Discord user who wants to maintain a sense of anonymity or one who is all in for a more shareable and easily identifiable system, the race to get the username that’s right for you is on. But you’ll have to wait and see where the starting line is.

Twitter admits to ‘security incident’ involving Circles tweets

Feature allows users to set a list of friends and post tweets that only they are supposed to be able to read

A privacy breach at Twitter published tweets to the site at large that were never supposed to be seen by anyone but the poster’s closest friends, the company has admitted after weeks of stonewalling reports.

The site’s Circles feature allows users to set an exclusive list of friends and post tweets that only they can read. Similar to Instagram’s Close Friends setting, it allows users to share private thoughts, explicit images or unprofessional statements without risking sharing them with their wider network.

Continue reading...

VanMoof S5 e-bike review: nice but twice the price

$4,000 and a long list of features, but how many do you really need?

“Sometimes you have to kill your darlings” is a phrase designers use to justify removing elements they find personally exciting but that fail to add value.

The last time I heard it was in April 2022, when I rode pre-production versions of VanMoof’s new full-size S5 and smaller A5 electric bikes. The phrase was uttered by co-founder and CEO Taco Carlier to justify swapping VanMoof’s iconic matrix display for a new “Halo Ring” interface.

One year later, both e-bikes are now — finally — being delivered to customers, well after their original target of July 2022. The price has also been raised to $3,998 / €3,498 from an early preorder price of $2,998 / €2,498, which was already much more than VanMoof’s previous-generation e-bikes — the S3 / X3 — cost when they were introduced at a rather remarkable $1,998 / €1,998 back in 2020.

Look, everything is more expensive in 2023, e-bikes included. But in terms of value for money, the $4,000 VanMoof S5 needs to be twice as good as the $2,000 S3, right? Otherwise the latest flagship e-bike from this former investment darling might be dead on arrival.

If only it was that simple.

Although the S5 and A5 pedal-assisted e-bikes still look like VanMoofs, with that extended top tube capped by front and rear lights, everything from the frame down to the chips and sensors has been reengineered. The company says that only a “handful of parts” were carried over from the previous models.

Here are some of the most notable changes:

  • New LED Halo Ring visual interfaces flanking both grips.
  • An integrated SP Connect phone mount (you provide the case) with USB-C charging port.
  • New almost completely silent Gen 5 front-hub motor with torque sensor and three-speed automatic e-shifter (the S3 / X3 had four-speed e-shifters).
  • New multi-function buttons have been added below the bell (next to left grip) and boost (next to right grip) buttons.
  • The boost button now offers more oomph with torque increasing to 68Nm from 59Nm.
  • The S5 frame, which had been criticized for being too tall, has been lowered by 5cm (2 inches) to better accommodate riders as short as 165cm (5 feet, 5 inches), while the A5 caters to riders as short as 155cm (5 feet, 1 inch) and allows for an easier step-through than the X3 it supersedes.

These join a very long list of standard features found on VanMoof e-bikes, like a well-designed and useful app, an integrated Kick Lock on the rear wheel, baked-in Apple Find My support, hydraulic disc brakes, muscular city tires, bright integrated lights, mudguards, and a sturdy kickstand. And because it’s VanMoof, you can also subscribe to three years of theft protection ($398 / €348), with guaranteed recovery or replacement within two weeks, and three years of maintenance ($348 / €298) in select cities.

VanMoof e-bikes now have integrated mounts and USB-C charging for your phone.

I picked up my dark gray (also available in light gray) VanMoof S5 loaner in late March but I ran into a few issues that delayed this review. These included intermittent connectivity failures between the app and bike, a Kick Lock that didn’t always disengage, and an alarm that would briefly trigger for no apparent reason. Those issues were all corrected by an over-the-air firmware (v1.20) update released in mid-April before I could even report them back to VanMoof support.

I have mixed emotions about this. In March, the S5 and A5 started shipping in quantity — albeit eight months late — so you’d think they would have had time to sort out any issues in VanMoof’s new testing labs. That’s annoying given VanMoof’s history of initial quality issues and the company’s assurances that they wouldn’t be repeated. Then again, premium e-bikes from companies like VanMoof are increasingly complex machines, and seeing the company solve issues so quickly is commendable.

One issue that hasn’t been fixed is idle battery drain, but VanMoof tells me that a firmware update is coming to solve it in “two weeks” time. In my case, the issue caused the idle S5’s battery to drain from 86 percent to 65 percent over a period of 10 days. I generally lose about two percent charge each day whether I ride it or not.

Oh, and since I’ve installed several firmware updates in the last month (I’m currently at v1.2.4), I need to mention this: the S5 plays a jaunty little tune the entire time the firmware is being installed. It was cute at first; my daughter even offered a little dance to go with it. But it takes five to 10 minutes, and after the first time you hear it, it’s just annoying — and there’s no way to turn it off.

Halo Ring in sunlight.
Halo Ring in low light.

Regarding new features, the Halo Rings next to each grip are the most visible change from previous VanMoofs. At least until you hit sunlight, when those weak LEDs wash out almost completely. The Halo Rings are meant to show speed, charge remaining, current pedal-assist power level, and more through a series of light bars and animations. Overall they’re fine, if gimmicky, but I don’t have much of a need for status information when bicycling. I also didn’t miss the old top-tube matrix display.

Riding the 23kg / 50.7lbs VanMoof S5 feels like riding an S3, albeit with fewer shifts and a boost button that provides more torque when trying to pass someone or get an early jump off the line. The fifth-generation 250W motor, of VanMoof’s own design, is nearly silent, even at its top speed of 25km/h in Europe (which increases to 20mph in the US). And the new three-speed e-shifter does a better job of finding the right gear than the S3’s four-speed e-shifter did. I still felt a few clinks and spinning pedals, especially when mashing down hard on the cranks in a hurry. But overall, the S5’s predictive shifting is much improved, especially when rolling along at a casual pace. Still, it’s not as smooth as the automatic shifters from Enviolo, for example, so there’s still work to be done.

It’s a shame VanMoof doesn’t offer a simple belt-drive option for its e-bikes. That coupled with the S5’s torquey boost button would obviate the need for any gears when riding in all but the most hilly environments. That’s why I’m a big fan of the premium belt-driven pedal-assist e-bikes sold by Ampler and Cowboy. They cost less than the S5 but are also more fun to ride thanks to their lighter front ends (both brands use rear-hub motors).

As to range, VanMoof says I should be able to get 60km on full power mode. However, I was only able to eke out 48.6km (30.2 miles) from the S5’s 487Wh battery when riding in full power mode and frequently pressing the boost button, in temperatures that ranged from freezing to 15C (59F). That’s about the same range I got when testing the VanMoof S3 — 47 km (29.2 miles) — and its bigger 504Wh battery. The issue that currently causes the battery to lose energy overnight certainly didn’t help my range. The battery can be charged from zero to 100 percent in a very slow 6 hours and 30 minutes via the included charger.
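Working from the review’s own figures, a rough efficiency comparison falls out directly — a back-of-the-envelope sketch only, since real-world consumption varies with temperature, terrain, and boost use:

```python
# Rough Wh/km efficiency computed from the figures quoted in this review.
bikes = {
    "VanMoof S5": {"battery_wh": 487, "range_km": 48.6},
    "VanMoof S3": {"battery_wh": 504, "range_km": 47.0},
}

for name, b in bikes.items():
    wh_per_km = b["battery_wh"] / b["range_km"]
    print(f"{name}: ~{wh_per_km:.1f} Wh/km in full power mode")

# VanMoof S5: ~10.0 Wh/km
# VanMoof S3: ~10.7 Wh/km
```

In other words, the S5 squeezes slightly more distance out of each watt-hour than the S3 did, which is why its smaller battery yields about the same total range.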

I had been wondering how VanMoof would use the new multifunction buttons located just below the bell and boost buttons. The small button on the right (below the boost) lets you change your motor power on the fly, while the one on the left (below the bell) makes your lights flash as a kind of warning to people around you. Both of these features tick boxes on marketing sheets but aren’t very useful in everyday use.

And since this is a VanMoof, the battery is integrated and can only be removed during maintenance. The company does have a new “click-on” version (no velcro!) of its extended battery coming for the S5 and A5 that you can charge inside the home.

The dark gray VanMoof S5: too complex for its own good?

I’ve had a nagging concern about VanMoof e-bikes for the last few years that I even mentioned in the S3 review. Are they getting too complex for their own good?

Electric bikes — especially commuter e-bikes like the S5 — are subjected to daily wear and tear in all kinds of weather conditions. Even basic bikes are difficult to maintain when used every day, and VanMoof’s e-bikes are expensive rolling computers.

Honestly, I could do without the fancy automatic chain-driven three-speed shifter, superfluous multifunction buttons, programmable electronic bell, Halo Ring interface, Apple tracking, and perky sounds for startup, shutdown, and firmware updates. Give me one gear and a maintenance-free belt drive alongside that torquey boost button on a pedal-assisted e-bike that will get me back and forth to my office every day, no matter what, in style and without fail. But that’s not the S5.

Don’t get me wrong, the VanMoof S5 is a very good electric bike with a longer feature list than any other e-bike I can name. It also has one of the best networks of service hubs available in cities around the world. That’s important because most S5 / A5 parts are only available from VanMoof so make sure you have a service center nearby if you’re interested in buying.

VanMoof, for all its early covid successes, ran into financial issues at the end of 2022, when it was forced to ask investors for an emergency injection of capital just to pay the bills. But the entire e-bike industry was struggling post-covid as sales plummeted and supply chains wobbled. Competitors like Cowboy and industry giant Stella also had to raise cash to handle overstock after e-bike inventories swelled.

As good as the S5 is, its feature set verges on gimmickry, in my opinion, perhaps in an attempt to justify the new, higher $3,998 / €3,498 price tag. The features are cute, entertaining and, sure, a tad useful at first. But many just aren’t needed by regular commuters. The S5 has too many darlings, and not enough killing.

For context on that price, the VanMoof S5 is currently $500 more expensive than the comparable Cowboy 4 series and $1,000 more than the simpler Ampler Axel. Viewed in those terms, VanMoof’s pricing seems about right.

Is the S5 worth it? I’ll leave that for you to decide in this uncertain and inflationary world. While it’s definitely an improvement over the S3, it’s not twice the bike.

All photography by Thomas Ricker / The Verge

Backup Power: A Growing Need, if You Can Afford It

Extreme weather linked to climate change is causing more blackouts. But generators and batteries are still out of reach of many.

Google engineer warns it could lose out to open-source technology in AI race

Commonly available software poses threat to tech company and OpenAI’s ChatGPT, leaked document says

Google has been warned by one of its engineers that the company is not in a position to win the artificial intelligence race and could lose out to commonly available AI technology.

A document from a Google engineer leaked online said the company had done “a lot of looking over our shoulders at OpenAI”, referring to the developer of the ChatGPT chatbot.

Continue reading...

Friday 5 May 2023

Bing, Bard, and ChatGPT: AI chatbots are rewriting the internet

Álvaro Bernis / The Verge

How we use the internet is changing fast, thanks to the advancement of AI-powered chatbots that can find information and redeliver it as a simple conversation.

Big players, including Microsoft, with its Bing AI (and Copilot), Google, with Bard, and OpenAI, with ChatGPT-4, are making AI chatbot technology previously restricted to test labs more accessible to the general public.

We’ve even tested all three chatbots head-to-head to see which one is the best — or at least which one gives us the best responses right now when it comes to pressing questions like “how do I install RAM into my PC?”

How do these large language model (LLM) programs work? OpenAI’s GPT-3 told us that AI uses “a series of autocomplete-like programs to learn language” and that these programs analyze “the statistical properties of the language” to “make educated guesses based on the words you’ve typed previously.”

Or, in the words of James Vincent, a human person, “These AI tools are vast autocomplete systems, trained to predict which word follows the next in any given sentence. As such, they have no hard-coded database of ‘facts’ to draw on — just the ability to write plausible-sounding statements. This means they have a tendency to present false information as truth since whether a given sentence sounds plausible does not guarantee its factuality.”
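To make the “vast autocomplete” framing concrete, here’s a toy next-word predictor built purely from bigram counts — an illustration of the statistical idea only; real LLMs use neural networks over subword tokens and far longer context:

```python
from collections import Counter, defaultdict

# Count, for every word, which words follow it in the training text.
corpus = "the cat sat on the mat and then the cat slept".split()
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the statistically most likely next word."""
    return bigrams[word].most_common(1)[0][0]

print(predict_next("the"))  # 'cat' — it follows 'the' twice, 'mat' only once
```

Nothing in those counts encodes whether a sentence is true, which is the point Vincent is making: plausibility and factuality are different properties.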

There are many more pieces of the AI landscape still coming into play — and there are going to be problems — but you can be sure to see it all unfold here on The Verge.

PSA: you’ve got just one weekend left to claim Sony’s PlayStation Plus greatest hits bundle

Sony’s PS5 console. Photo by Vjeran Pavic / The Verge

The deadline to claim the titles included in the PlayStation Plus Collection is fast approaching: you’ve got until May 9th to claim the dozen-plus PS4 classics included as part of a PlayStation Plus subscription. Eurogamer notes that the list currently includes some absolute crackers like Bloodborne, God of War, Ratchet and Clank, The Last Guardian, and Uncharted 4: A Thief’s End.

Sony launched the collection alongside the PlayStation 5 console back in 2020, allowing owners of its new console to catch up on some of the biggest games of the previous generation. It was a neat incentive to subscribe to PlayStation Plus, and a reminder of the wealth of PS4 titles available to play via backwards compatibility in the early days when native PS5 titles were still thin on the ground.

Obviously there are far too many games included in the collection to be able to play through them all by next Tuesday. But when announcing the discontinuation of the feature in February, Sony explicitly said that redeeming them before May 9th “will enable you to access those titles even after this date for as long as you remain a PlayStation Plus member.” So if you have an inkling you’d like to play Ratchet and Clank at some point after that date, best nab it now. Just to be safe.

If you don’t, there are at least a couple of games on the list that will continue to be available as part of Sony’s more expensive PlayStation Plus Extra and Premium tiers. These include Bloodborne, Resident Evil 7: Biohazard, Batman: Arkham Knight, The Last Guardian, God of War, Uncharted 4, and Fallout 4. The full PlayStation Plus catalog can be found on Sony’s site.

Microsoft is reportedly helping AMD expand into AI chips

AMD’s AI-capable MI300 data center APU is set to arrive sometime later this year. | Photo by ROBYN BECK/AFP via Getty Images

Microsoft has allegedly teamed up with AMD to help bolster the chipmaker’s expansion into artificial intelligence processors. According to a report by Bloomberg, Microsoft is providing engineering resources to support AMD’s developments as the two companies join forces to compete against Nvidia, which controls an estimated 80 percent market share in the AI processor market.

In turn, Bloomberg’s sources also claim that AMD is helping Microsoft to develop its own in-house AI chips, codenamed Athena. Several hundred employees from Microsoft’s silicon division are reportedly working on the project and the company has apparently already sunk around $2 billion into its development. Microsoft spokesperson Frank Shaw has, however, denied that AMD is involved with Athena.

We have contacted AMD and Microsoft for confirmation and will update this story should we hear back.

The explosive popularity of AI services like OpenAI’s ChatGPT is driving demand for processors that can handle the huge computational workloads required to run them. Nvidia’s commanding market share in graphics processing units (GPUs) — specialized chips that provide the required computing power — allows it to dominate this space. There’s currently no suitable alternative, and that’s a problem for companies like Microsoft that need Nvidia’s expensive processors to power the various AI services running in its Azure Cloud.

Nvidia’s CUDA libraries have driven most of the progress in AI over the past decade. Despite AMD being a major rival in the gaming hardware industry, the company still doesn’t have a suitable alternative to the CUDA ecosystem for large-scale machine learning deployments. Now that the AI industry is heating up, AMD is seeking to place itself in a better position to capitalize. “We are very excited about our opportunity in AI — this is our number one strategic priority,” Chief Executive Officer Lisa Su said during the chipmaker’s earnings call Tuesday. “We are in the very early stages of the AI computing era, and the rate of adoption and growth is faster than any other technology in recent history.”

Su claims that AMD is well positioned to create partly customized chips for its biggest customers to use in their AI data centers. “I think we have a very complete IP portfolio across CPUs, GPUs, FPGAs, adaptive SoCs, DPUs, and a very capable semi-custom team,” said Su, adding that the company is seeing “higher volume opportunities beyond game consoles.”

AMD is also confident that its upcoming Instinct MI300 data center chip could be adapted for generative AI workloads. “MI300 is actually very well-positioned for both HPC or supercomputing workloads as well as for AI workloads,” said Su. “And with the recent interest in generative AI, I would say the pipeline for MI300 has expanded considerably here over the last few months, and we’re excited about that. We’re putting in a lot more resources.”

In the meantime, Microsoft intends to keep working closely with Nvidia as it attempts to secure more of the company’s processors. The AI boom has led to a growing shortage of specialized GPU chips, further constrained by Nvidia having a near monopoly on the supply of such hardware. Microsoft and AMD aren’t the only players trying to develop in-house AI chips — Google has its own TPU (Tensor Processing Unit) chip for training its AI models, and Amazon has similarly created Trainium AI chips to train machine learning computer models.

OpenAI’s regulatory troubles are only just beginning

ChatGPT isn’t out of the EU’s data privacy woods just yet. | Illustration: The Verge

The European Union’s fight with ChatGPT is a glance into what’s to come for AI services.

OpenAI managed to appease Italian data authorities and lift the country’s effective ban on ChatGPT last week, but its fight against European regulators is far from over.

Earlier this year, OpenAI’s popular and controversial ChatGPT chatbot hit a big legal snag: an effective ban in Italy. The Italian Data Protection Authority (GPDP) accused OpenAI of violating EU data protection rules, and the company agreed to restrict access to the service in Italy while it attempted to fix the problem. On April 28th, ChatGPT returned to the country, with OpenAI lightly addressing GPDP’s concerns without making major changes to its service — an apparent victory.

The GPDP has said it “welcomes” the changes ChatGPT made. However, the firm’s legal issues — and those of companies building similar chatbots — are likely just beginning. Regulators in several countries are investigating how these AI tools collect and produce information, citing a range of concerns from companies’ collection of unlicensed training data to chatbots’ tendency to spew misinformation. In the EU, they’re applying the General Data Protection Regulation (GDPR), one of the world’s strongest legal privacy frameworks, the effects of which will likely reach far outside Europe. Meanwhile, lawmakers in the bloc are putting together a law that will address AI specifically — likely ushering in a new era of regulation for systems like ChatGPT.

ChatGPT is one of the most popular examples of generative AI — a blanket term covering tools that produce text, image, video, and audio based on user prompts. The service reportedly became one of the fastest-growing consumer applications in history after reaching 100 million monthly active users just two months after launching in November 2022 (OpenAI has never confirmed these figures). People use it to translate text into different languages, write college essays, and generate code. But critics — including regulators — have highlighted ChatGPT’s unreliable output, confusing copyright issues, and murky data protection practices.

Italy was the first country to make a move. On March 31st, it highlighted four ways it believed OpenAI was breaking GDPR: allowing ChatGPT to provide inaccurate or misleading information, failing to notify users of its data collection practices, failing to meet any of the six possible legal justifications for processing personal data, and failing to adequately prevent children under 13 years old from using the service. It ordered OpenAI to immediately stop using personal information collected from Italian citizens in its training data for ChatGPT.

No other country has taken such action. But since March, at least three EU nations — Germany, France, and Spain — have launched their own investigations into ChatGPT. Meanwhile, across the Atlantic, Canada is evaluating privacy concerns under its Personal Information Protection and Electronic Documents Act, or PIPEDA. The European Data Protection Board (EDPB) has even established a dedicated task force to help coordinate investigations. And if these agencies demand changes from OpenAI, they could affect how the service runs for users across the globe.

Regulators’ concerns can be broadly split into two categories: where ChatGPT’s training data comes from and how OpenAI is delivering information to its users.

ChatGPT uses either OpenAI’s GPT-3.5 or GPT-4 large language models (LLMs), which are trained on vast quantities of human-produced text. OpenAI is cagey about exactly what training text is used but says it draws on “a variety of licensed, created, and publicly available data sources, which may include publicly available personal information.”

This potentially poses huge problems under GDPR. The law was enacted in 2018 and covers every service that collects or processes data from EU citizens — no matter where the organization responsible is based. GDPR rules require companies to have explicit consent before collecting personal data, to have legal justification for why it’s being collected, and to be transparent about how it’s being used and stored.

European regulators claim that the secrecy around OpenAI’s training data means there’s no way to confirm if the personal information swept into it was initially given with user consent, and the GPDP specifically argued that OpenAI had “no legal basis” for collecting it in the first place. OpenAI and others have gotten away with little scrutiny so far, but this claim adds a big question mark to future data scraping efforts.

Then there’s GDPR’s “right to be forgotten,” which lets users demand that companies correct their personal information or remove it entirely. OpenAI preemptively updated its privacy policy to facilitate those requests, but there’s been debate about whether it’s technically possible to handle them, given how complex it can be to separate specific data once it’s churned into these large language models.

OpenAI also gathers information directly from users. Like any internet platform, it collects a range of standard user data (e.g., name, contact info, card details, etc.). But, more significantly, it records interactions users have with ChatGPT. As stated in an FAQ, this data can be reviewed by OpenAI’s employees and is used to train future versions of its model. Given the intimate questions people ask ChatGPT — using the bot as a therapist or a doctor — this means the company is scooping up all sorts of sensitive data.

At least some of this data may have been collected from minors, as while OpenAI’s policy states that it “does not knowingly collect personal information from children under the age of 13,” there’s no strict age verification gate. That doesn’t play well with EU rules, which ban collecting data from people under 13 and (in some countries) require parental consent for minors under 16. On the output side, the GPDP claimed that ChatGPT’s lack of age filters exposes minors to “absolutely unsuitable responses with respect to their degree of development and self-awareness.”

OpenAI maintains broad latitude to use that data, which has worried some regulators, and storing it presents a security risk. Companies like Samsung and JPMorgan have banned employees from using generative AI tools over fears they’ll upload sensitive data. And, in fact, Italy announced its ban soon after ChatGPT suffered a serious data leak, exposing users’ chat history and email addresses.

ChatGPT’s propensity for providing false information may also pose a problem. GDPR regulations stipulate that all personal data must be accurate, something the GPDP highlighted in its announcement. Depending on how that’s defined, it could spell trouble for most AI text generators, which are prone to “hallucinations”: a cutesy industry term for factually incorrect or irrelevant responses to a query. This has already seen some real-world repercussions elsewhere, as a regional Australian mayor has threatened to sue OpenAI for defamation after ChatGPT falsely claimed he had served time in prison for bribery.

ChatGPT’s popularity and current dominance over the AI market make it a particularly attractive target, but there’s no reason why its competitors and collaborators, like Google with Bard or Microsoft with its OpenAI-powered Azure AI, won’t face scrutiny, too. Before ChatGPT, Italy banned the chatbot platform Replika for collecting information on minors — and so far, it’s stayed banned.

While GDPR is a powerful set of laws, it wasn’t made to address AI-specific issues. Rules that do, however, may be on the horizon.

In 2021, the EU submitted its first draft of the Artificial Intelligence Act (AIA), legislation that will work alongside GDPR. The act governs AI tools according to their perceived risk, from “minimal” (things like spam filters) to “high” (AI tools for law enforcement or education) or “unacceptable” and therefore banned (like a social credit system). After the explosion of large language models like ChatGPT last year, lawmakers are now racing to add rules for “foundation models” and “General Purpose AI Systems (GPAIs)” — two terms for large-scale AI systems that include LLMs — and potentially classing them as “high risk” services.

The AIA’s provisions go beyond data protection. A recently proposed amendment would force companies to disclose any copyrighted material used to develop generative AI tools. That could expose once-secret datasets and leave more companies vulnerable to infringement lawsuits, which are already hitting some services.

But passing it may take a while. EU lawmakers reached a provisional AI Act deal on April 27th. A committee will vote on the draft on May 11th, and the final proposal is expected by mid-June. Then, the European Council, Parliament, and Commission will have to resolve any remaining disputes before implementing the law. If everything goes smoothly, it could be adopted by the second half of 2024, a little behind the official target of Europe’s May 2024 elections.

For now, Italy and OpenAI’s spat offers an early look at how regulators and AI companies might negotiate. The GPDP offered to lift its ban if OpenAI met several proposed resolutions by April 30th. That included informing users how ChatGPT stores and processes their data, asking for explicit consent to use said data, facilitating requests to correct or remove false personal information generated by ChatGPT, and requiring Italian users to confirm they’re over 18 when registering for an account. OpenAI didn’t hit all of those stipulations, but it met enough to appease Italian regulators and get access to ChatGPT in Italy restored.

OpenAI still has targets to meet. It has until September 30th to create a harder age-gate to keep out minors under 13 and require parental consent for older underage teens. If it fails, it could see itself blocked again. But it’s provided an example of what Europe considers acceptable behavior for an AI company — at least until new laws are on the books.

‘Like Icarus – now everyone is burnt’: how Vice and BuzzFeed fell to earth

Era of lofty valuations for upstart youth media appears to be over, with Vice ‘heading for bankruptcy’ and BuzzFeed News shutting down

Just over a decade ago Rupert Murdoch endorsed what appeared to be a glittering future for Vice, firing off a tweet after an impromptu visit to the media upstart’s Brooklyn offices and a drink in a nearby bar with the outspoken co-founder Shane Smith.

“Who’s heard of VICE media?” the Australian media mogul posted from his car on the way home from the 2012 visit, which resulted in a $70m (£55m) investment. “Wild, interesting effort to interest millennials who don’t read or watch established media. Global success.”

Continue reading...

‘Ron DeSoros’? Conspiracy Theorists Target Trump’s Rival.

Ron DeSantis, a likely contender for the Republican presidential nomination, must court far-right voters who consider him a tool of the Deep State.

Thursday 4 May 2023

White House Unveils Initiatives to Reduce Risks of AI

Vice President Kamala Harris also plans to meet with the chief executives of tech companies that are developing A.I. later on Thursday.

White House rolls out plan to promote ethical AI

Photo by Chip Somodevilla/Getty Images

The White House announced more funding and policy guidance for developing responsible artificial intelligence ahead of a Biden administration meeting with top industry executives.

The actions include a $140 million investment from the National Science Foundation to launch seven new National AI Research (NAIR) Institutes, increasing the total number of AI-dedicated facilities to 25 nationwide. Google, Microsoft, Nvidia, OpenAI and other companies have also agreed to allow their language models to be publicly evaluated during this year’s Def Con. The Office of Management and Budget (OMB) also said that it would be publishing draft rules this summer for how the federal government should use AI technology.

“These steps build on the Administration’s strong record of leadership to ensure technology improves the lives of the American people, and break new ground in the federal government’s ongoing effort to advance a cohesive and comprehensive approach to AI-related risks and opportunities,” the administration’s press release said. It does not specify the details of what the Def Con evaluation will include, beyond saying that it will “allow these models to be evaluated thoroughly by thousands of community partners and AI experts.”

The announcement comes ahead of a Thursday White House meeting, led by Vice President Kamala Harris, with the chief executives of Alphabet, Anthropic, Microsoft, and OpenAI to discuss AI’s potential risks. “The meeting is part of a broader, ongoing effort to engage with advocates, companies, researchers, civil rights organizations, not-for-profit organizations, communities, international partners, and others on critical AI issues,” the Thursday release said.

Last October, the Biden administration made its first strides to regulate AI by releasing a blueprint for an “AI Bill of Rights.” The project was intended to serve as a framework for use of the technology by both the public and private sectors, encouraging anti-discrimination and privacy protections.

Federal regulators and Congress have announced a fresh focus on AI over the last few weeks. In April, the Federal Trade Commission, Consumer Financial Protection Bureau, Justice Department, and Equal Employment Opportunity Commission issued a joint warning arguing that they already had authority to go after companies whose AI products harm users.

Senate Majority Leader Chuck Schumer (D-NY) and other lawmakers also reportedly met with Elon Musk to discuss AI regulation last week.

What really happened when Elon Musk took over Twitter

Why has the social network been in total chaos since the world’s richest man took control? Flipping the Bird investigates. Plus: five of the best podcasts about planet Earth

If the prospect of taking an oath of allegiance to an unelected billionaire doesn’t excite you, television is going to be a pretty tedious place for the next few days. Fortunately, there’s never been a better time to retreat into the world of podcasts. This week, the Guardian released a five-part series looking into the murky finances of King Charles. From its examination of his family’s past exploitation of enslaved people, through to the dubious line between his personal wealth and that which is supposedly held for our nation, it’s a welcome look at the kinds of issues lacking from the national conversation.

Also excellently skewering the powers that be is Pod Save the UK – a new homegrown version of the popular US take on politics, hosted by Nish Kumar and Guardian journalist Coco Khan. There’s also a look at Elon Musk’s, ahem, maverick leadership of Twitter, and an examination of a two-decade-long battle over the theft of a Banksy. Plenty of brilliant listens, then, to distract you from having to watch the king’s big party ...

Alexi Duggins
Deputy TV editor

Continue reading...

Google rolls out passkey technology in ‘beginning of the end’ for passwords

Apple and Microsoft also collaborated on the technology, which allows authentication with fingerprint ID, facial ID or a PIN

Google is moving one step closer to ditching passwords, rolling out its passkey technology to Google accounts from Thursday.

The passkey is designed to replace passwords entirely by allowing authentication with fingerprint ID, facial ID or a PIN on the phone or device you use for authentication.
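Passkeys build on the FIDO2/WebAuthn family of standards, which at their core use public-key challenge–response instead of a shared secret. Here’s a minimal sketch of that pattern using Python’s cryptography package — an illustration of the general idea, not Google’s actual protocol:

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Enrollment: the device generates a key pair and sends only the
# PUBLIC key to the server. The private key never leaves the device;
# it is unlocked locally by fingerprint, face, or PIN.
device_private_key = ec.generate_private_key(ec.SECP256R1())
server_stored_public_key = device_private_key.public_key()

# Sign-in: the server sends a random challenge; the device signs it.
challenge = os.urandom(32)
signature = device_private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# The server checks the signature against the stored public key.
# No password or other shared secret ever crosses the network.
server_stored_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
print("signature valid: user authenticated")
```

Because the server stores only a public key, a database breach leaks nothing an attacker can log in with — one reason the technology is billed as the ‘beginning of the end’ for passwords.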

Continue reading...

The UK’s tortured attempt to remake the internet, explained

Illustration by Hugo Herrera for The Verge

The bill aims to make the country ‘the safest place in the world to be online’ but has been mired in delays and criticism that it has grown too large and unwieldy to please anyone.

At some point this year, the UK’s long-delayed Online Safety Bill is finally expected to become law. In the government’s words, the legislation is an attempt to make the UK “the safest place in the world to be online” by introducing a range of obligations for how large tech firms should design, operate, and moderate their platforms.

As any self-respecting Verge reader knows, content moderation is never simple. It’s difficult for platforms, difficult for regulators, and difficult for lawmakers crafting the rules in the first place. But even by the standards of internet legislation, the Online Safety Bill has had a rocky passage. It’s been developed over years during a particularly turbulent era in British politics, changing dramatically from year to year. And as an example of just how controversial the bill has become, some of the world’s biggest online organizations, from WhatsApp to Wikipedia, are preemptively refusing to comply with its potential requirements.

So if you’ve tuned out the Online Safety Bill over the past few years — and let’s be honest, a lot of us have — it’s time to brush up. Here’s where the bill came from, how it’s changed, and why lawmakers might be about to finally put it on the books.

So let’s start from the beginning. What is the Online Safety Bill?

The UK government’s elevator pitch is that the bill is fundamentally an attempt to make the internet safer, particularly for children. It attempts to crack down on illegal content like child sexual abuse material (CSAM) and to minimize the possibility that kids might encounter harmful and age-inappropriate content, including online harassment as well as content that glorifies suicide, self-harm, and eating disorders.

But it’s difficult to TL;DR the Online Safety Bill at this point, precisely because it’s become so big and sprawling. On top of these broad strokes, the bill has a host of other rules. It requires online platforms to let people filter out objectionable content. It introduces age verification for porn sites. It criminalizes fraudulent ads. It requires sites to consistently enforce their terms of service. And if companies don’t comply, they could be fined up to £18 million (around $22.5 million) or 10 percent of global revenue, see their services blocked, and even see their executives jailed.

In short, the Online Safety Bill has become a catchall for UK internet regulation, mutating every time a new prime minister or digital minister has taken up the cause.

How many prime ministers are we talking about here?

So far? Four.

Wait, how long has this bill been in the works for?

The Online Safety Bill started with a document called the “Online Harms White Paper,” which was unveiled way back in April 2019 by then-digital minister Jeremy Wright. The death of Molly Russell by suicide in 2017 brought into sharp relief the dangers of children being able to access content relating to self-harm and suicide online, and other events like the Cambridge Analytica scandal had created the political impetus to do something to regulate big online platforms.

The idea was to introduce a so-called “duty of care” for big platforms like Facebook — similar to how British law asks employers to look after the safety of their employees. This meant companies would have to perform risk assessments and proactively address potential harms rather than play whack-a-mole with problems as they crop up. As Carnegie UK associate Maeve Walsh puts it, “Interventions could take place in the way accounts are created, the incentives given to content creators, in the way content is spread as well as in the tools made available to users before we got to content take down.”

The white paper laid out fines and the potential to block websites that don’t comply. At that point, it amounted to some of the broadest and potentially strictest online regulations to have been proposed globally.

What was the response like at the time?

Obviously, there was a healthy amount of skepticism (Wired’s take was simply titled “All that’s wrong with the UK’s crusade against online harms”), but there were hints of cautious optimism as well. Mozilla, for example, said the overall approach had “promising potential,” although it warned about several issues that would need to be addressed to avoid infringing on people’s rights.

If the British government was on to such a winner, why hasn’t it passed this bill four years later?

Have you paid attention to British politics at all in the past four years? The original white paper was introduced four prime ministers and five digital ministers ago, and it seems to have been forced into the back seat by more urgent matters like leaving the European Union or handling the covid-19 pandemic.

But as it’s passed through all these hands, the bill has ballooned in size — picking up new provisions and sometimes dropping them when they’re too controversial. In 2021, when the first draft of the bill was presented to Parliament, it was “just” 145 pages long, but by this year, it had almost doubled to 262 pages.

Where did all those extra pages come from?

Given the bill’s broad ambitions for making online life safer in general, many new elements were added by the time it returned to Parliament in March 2022. In no particular order, these included:

  • Age checks for porn sites
  • Measures to clamp down on “anonymous trolls” by requiring that services give the option for users to verify their identity
  • Criminalizing cyberflashing (aka the sending of unsolicited nudes via social media or dating apps)
  • Cracking down on scam ads

Over time, the bill’s definition of “safety” has started to look pretty vague. A provision in the May 2021 draft forbade companies “from discriminating against particular political viewpoints and will need to apply protections equally to a range of political opinions, no matter their affiliation,” echoing now familiar fears that conservative voices are unfairly “censored” online. Bloomberg called this an “anti-censorship” clause at the time, and it continues to be present in the 2023 version of the bill.

And last November, ministers were promising to add even more offenses to the bill, including downblousing and the creation of nonconsensual deepfake pornography.

Hold up. Why does this pornography age check sound so familiar?

The Conservative Party has been trying to make it happen since well before the Online Safety Bill. Age verification was a planned part of the Digital Economy Bill in 2016 and then was supposed to happen in 2019 before being delayed and abandoned in favor of rolling the requirements into the Online Safety Bill.

The problem is, it’s very difficult to come up with an age verification system that can’t be circumvented in minutes and that doesn’t create the risk of someone’s most intimate web browsing moments being linked to their real-life identity — notwithstanding a plan to let users buy a “porn pass” from a local shop.

And it’s not clear how the Online Safety Bill will overcome this challenge. An explainer by The Guardian notes that Ofcom will issue codes of practice on how to determine users’ ages, with possible solutions involving having age verification companies check official IDs or bank statements.

Regardless of the difficulties, the government is pushing ahead with the age verification requirements, which is more than can be said for its proposed rules around “legal but harmful” content.

And what exactly were these “legal but harmful” rules?

Well, they were one of the most controversial additions to the entire bill — so much so that they’ve been (at least partially) walked back.

Originally, the government said it should officially designate certain content as harmful to adults but not necessarily illegal — things like bullying or content relating to eating disorders. (It’s the less catchy cousin of “lawful but awful.”) Companies wouldn’t necessarily have to remove this content, but they’d have to do risk assessments about the harm it might pose and set out clearly in their terms of service how they plan to tackle it.

But critics were wary of letting the state define what counts as “harmful,” the fear being that ministers would have the power to censor what people could say online. At a certain point, if the government is formally pushing companies to police legal speech, it’s debatable how “legal” that speech still is.

This criticism had an effect. The “legal but harmful” provisions for adults were removed from the bill in late 2022 — and so was a “harmful communications” offense that covered sending messages that caused “serious distress,” something critics feared could similarly criminalize offensive but legal speech.

Instead, the government introduced a “triple shield” covering content meant for adults. The first “shield” rule says platforms must remove illegal content like fraud or death threats. The second says anything that breaches a website’s terms of service should be moderated. And the third says adult users should be offered filters to control the content they see.

The thinking here is that most websites already restrict “harmful communications” and “legal but harmful” content, so if they’re told to apply their terms of service consistently, the problem (theoretically) takes care of itself. Conversely, platforms are actively prohibited from restricting content that doesn’t breach the terms of service or break the law. Meanwhile, the filters are supposed to let adults decide whether to block objectionable content like racism, antisemitism, or misogyny. The bill also tells sites to let people block unverified users — aka those pesky “anonymous trolls.”
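One way to read the triple shield is as a decision procedure. The sketch below is just an interpretation of the structure described above; the categories and the filter model are invented for illustration and don't come from the legislative text.

```python
# One reading of the "triple shield" as code. The Post fields and the
# user-filter model are hypothetical simplifications.
from dataclasses import dataclass

@dataclass
class Post:
    illegal: bool       # e.g. fraud or death threats
    breaches_tos: bool  # violates the platform's own terms of service
    category: str       # e.g. "racism", "misogyny", or "none"

def visible_to(post: Post, user_filters: set[str]) -> bool:
    if post.illegal:
        return False  # shield 1: illegal content must be removed
    if post.breaches_tos:
        return False  # shield 2: terms of service applied consistently
    if post.category in user_filters:
        return False  # shield 3: this adult opted out of the category
    return True       # otherwise the platform may not restrict the post

# A post that breaks no rules is shown by default, hidden only by choice.
post = Post(illegal=False, breaches_tos=False, category="misogyny")
assert visible_to(post, user_filters=set()) is True
assert visible_to(post, user_filters={"misogyny"}) is False
```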

None of this impacts the rules aimed specifically at children — in those cases, platforms will still have a duty to mitigate the impact of legal but harmful content.

I’m glad that the government addressed those problems, leaving a completely uncontroversial bill in its wake.

Wait, sorry. We’re just getting to the part where the UK might lose encrypted messaging apps.

Excuse me?

Remember WhatsApp? After the Online Safety Bill was introduced, the messaging service took issue with a section that asks tech companies to use "accredited technology" to identify child sexual abuse content "whether communicated publicly or privately." Since personal WhatsApp messages are end-to-end encrypted, not even the company itself can see their contents. Requiring it to identify CSAM, WhatsApp says, would inevitably compromise that end-to-end encryption.
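To see why, it helps to sketch the basic shape of end-to-end encryption. Below is a minimal illustration using the PyNaCl library rather than WhatsApp's actual Signal-protocol implementation; the names and the message are invented for the example.

```python
# Minimal end-to-end encryption sketch using PyNaCl (pip install pynacl).
# Illustrative only; WhatsApp uses the more elaborate Signal protocol.
from nacl.public import Box, PrivateKey

# Each user generates a keypair on their own device; the private half
# never leaves the phone, so the server only sees public keys.
alice = PrivateKey.generate()
bob = PrivateKey.generate()

# Alice encrypts directly to Bob's public key.
ciphertext = Box(alice, bob.public_key).encrypt(b"strictly between us")

# The server relays `ciphertext` but holds neither private key, so it
# cannot read the message. Only Bob can decrypt it.
plaintext = Box(bob, alice.public_key).decrypt(ciphertext)
assert plaintext == b"strictly between us"
```

Any scheme that lets the operator "identify" what's inside those messages has to intervene somewhere outside this loop.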

WhatsApp is owned by Meta, which is persona non grata among regulators these days, but it’s not the only encrypted messaging service whose operators are concerned. WhatsApp head Will Cathcart wrote an open letter that was co-signed by the heads of six other messaging apps, including Signal. “If implemented as written, [this bill] could empower Ofcom to try to force the proactive scanning of private messages on end-to-end encrypted communication services - nullifying the purpose of end-to-end encryption as a result and compromising the privacy of all users,” says the letter. “In short, the bill poses an unprecedented threat to the privacy, safety and security of every UK citizen and the people with whom they communicate.”

The consensus among legal and cybersecurity experts is that the only way to monitor for CSAM while maintaining encryption is some form of client-side scanning, an approach Apple announced in 2021 for image uploads to iCloud. The company ditched the plan the following year amid widespread criticism from privacy advocates.
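For a rough sense of what client-side scanning involves, here's a heavily simplified sketch. Deployed systems such as Apple's NeuralHash or Microsoft's PhotoDNA rely on perceptual hashes that survive resizing and re-encoding; the plain SHA-256 digest below is a stand-in, and every name in the code is hypothetical.

```python
# Heavily simplified client-side scanning sketch. The check runs on the
# sender's device before anything is encrypted, which is why critics see
# it as a hole in the end-to-end model. SHA-256 stands in for the
# perceptual hashes real systems use; all names are hypothetical.
import hashlib

# A blocklist of digests of known illegal images, pushed to every device.
KNOWN_BAD_DIGESTS = {
    hashlib.sha256(b"<bytes of a known image>").hexdigest(),
}

def should_flag(image_bytes: bytes) -> bool:
    """Return True if the image matches the on-device blocklist."""
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_BAD_DIGESTS

# The message can still travel end-to-end encrypted afterward; the point
# of contention is that its contents were inspected before encryption,
# against a list users cannot audit.
```

An exact-hash match like this is defeated by changing a single pixel, which is why deployed systems use fuzzier perceptual matching, and why critics worry those fuzzier matches can both miss targets and misfire on innocent images.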

Organizations such as the Internet Society say that such scanning risks creating vulnerabilities for criminals and other attackers to exploit and that it could eventually lead to the monitoring of other kinds of speech. The government disagrees and says the bill “does not represent a ban on end-to-end encryption, nor will it require services to weaken encryption.” But without an existing model for how such monitoring can coexist with end-to-end encryption, it’s hard to see how the law could satisfy critics.

The UK government already has the power to demand that services remove encryption thanks to a 2016 piece of legislation called the Investigatory Powers Act. But The Guardian notes that WhatsApp has never received a request to do so. At least one commentator thinks the same could happen with the Online Safety Bill, effectively giving Ofcom a radical new power that it may never choose to wield.

But that hasn’t exactly satisfied WhatsApp, which has suggested it would rather leave the UK than comply with the bill.

Okay, so messaging apps aren’t a fan. What do other companies and campaigners have to say about the bill?

Privacy activists have also been fiercely critical of what they see as an attack on end-to-end encryption. The Electronic Frontier Foundation, Big Brother Watch, and Article 19 published an analysis earlier this year that said the only way to identify and remove child sexual exploitation and abuse material would be to monitor all private communications, undermining users’ privacy rights and freedom of expression. Similar objections were raised in another open letter last year signed by 70 organizations, cybersecurity experts, and elected officials. The Electronic Frontier Foundation has called the bill “a blueprint for repression around the world.”

Tech giants like Google and Meta have also raised numerous concerns with the bill. Google says there are practical challenges to distinguishing between illegal and legal content at scale and that this could lead to the over-removal of legal content. Meta suggests that focusing on having users verify their identities risks excluding anyone who doesn’t wish to share their identity from participating in online conversations.

Even beyond that, there are more fundamental concerns about the bill. Matthew Lesh, head of public policy at the Institute of Economic Affairs, notes that there's simply a massive disparity between what the bill considers acceptable for children to encounter online and what it considers acceptable for adults. So platforms either take on the privacy and data protection risks of asking every user to verify their age, or they moderate to a children's standard by default for everyone.

That could put even a relatively safe and educational service like Wikipedia under pressure to ask for the ages of its users, which the Wikimedia Foundation’s Rebecca MacKinnon says would “violate [its] commitment to collect minimal data about readers and contributors.”

“The Wikimedia Foundation will not be verifying the age of UK readers or contributors,” MacKinnon wrote.

Okay, that’s a lot of criticism. So who’s in favor of this bill?

One group that’s been broadly supportive of the bill is children’s charities. The National Society for the Prevention of Cruelty to Children (NSPCC), for example, has called the Online Safety Bill “an urgent and necessary child protection measure” to tackle grooming and child sexual abuse online. It calls the legislation “workable and well-designed” and likes that it aims to “tackle the drivers of online harms rather than seek to remove individual pieces of content.” Barnardo’s, another children’s charity, has been supportive of the bill’s introduction of age verification for pornography sites.

Ian Russell, the father of the late Molly Russell, has called the Online Safety Bill “a really important piece of legislation,” though he’s pushed for it to go further when it comes to criminal sanctions for executives whose products are found to have endangered children’s well-being.

“I don’t think that without effective regulation the tech industry is going to put its house in order, to prevent tragedies like Molly’s from happening again,” Russell said. This sentiment appears to be shared by increasing numbers of lawmakers internationally, such as those in California who passed the Age-Appropriate Design Code Act in August last year.

Where’s the bill at these days?

As of this writing, the bill is working its way through the UK's upper chamber, the House of Lords, after which it will be passed back to the House of Commons to consider any amendments. The government hopes to pass it at some point this summer.

Even after the bill passes, however, there will still be decisions to make about how it works in practice. Ofcom will need to decide which services pose a high enough risk to be covered by the bill's strictest rules and develop codes of practice for platforms to abide by, including on thorny issues like how to introduce age verification for pornography sites. Only after the regulator completes this consultation process will companies know when and how to fully comply, and Ofcom has said it expects this to take months.

The Online Safety Bill has had a difficult journey through Parliament, and it’s likely to be months before we know how its most controversial aspects are going to work (or not) in practice.

mercredi 3 mai 2023

Bernie Sanders, Elon Musk and White House seeking my help, says ‘godfather of AI’

Bernie Sanders, Elon Musk and White House seeking my help, says ‘godfather of AI’

Dr Geoffrey Hinton has been inundated with requests to talk after quitting Google to warn about risk of digital intelligence

The man often touted as the godfather of artificial intelligence will be responding to requests for help from Bernie Sanders, Elon Musk and the White House, he says, just days after quitting Google to warn the world about the risk of digital intelligence.

Dr Geoffrey Hinton, 75, won computer science's highest honour, the Turing award, in 2018 for his work on "deep learning", along with Meta's Yann LeCun and the University of Montreal's Yoshua Bengio.

Continue reading...
