Saturday, May 6, 2023

VanMoof S5 e-bike review: nice but twice the price

$4,000 and a long list of features, but how many do you really need?

“Sometimes you have to kill your darlings” is a phrase designers use to justify removing elements they find personally exciting but that fail to add value.

The last time I heard it was in April 2022, when I rode pre-production versions of VanMoof’s new full-size S5 and smaller A5 electric bikes. The phrase was uttered by co-founder and CEO Taco Carlier to justify the removal of VanMoof’s iconic matrix display in favor of a new “Halo Ring” interface.

One year later, both e-bikes are now — finally — being delivered to customers, well after their original target of July 2022. The price has also risen to $3,998 / €3,498 from an early preorder price of $2,998 / €2,498 — itself already much more expensive than the previous-generation VanMoof S3 / X3, which launched at a rather remarkable $1,998 / €1,998 back in 2020.

Look, everything is more expensive in 2023, e-bikes included. But in terms of value for money, the $4,000 VanMoof S5 needs to be twice as good as the $2,000 S3, right? Otherwise the latest flagship e-bike from this former investment darling might be dead on arrival.

If only it were that simple.

Although the S5 and A5 pedal-assisted e-bikes still look like VanMoofs, with that extended top tube capped by front and rear lights, everything from the frame down to the chips and sensors has been reengineered. The company says that only a “handful of parts” were carried over from the previous models.

Here are some of the most notable changes:

  • New LED Halo Ring visual interfaces flanking both grips.
  • An integrated SP Connect phone mount (you provide the case) with USB-C charging port.
  • New, almost completely silent Gen 5 front-hub motor with a torque sensor and a three-speed automatic e-shifter (the S3 / X3 had four-speed e-shifters).
  • New multi-function buttons have been added below the bell (next to the left grip) and boost (next to the right grip) buttons.
  • The boost button now offers more oomph with torque increasing to 68Nm from 59Nm.
  • The S5 frame, which had been criticized for being too tall, has been lowered by 5cm (2 inches) to better accommodate riders as short as 165cm (5 feet, 5 inches), while the A5 caters to riders as short as 155cm (5 feet, 1 inch) and allows for an easier step-through than the X3 it supersedes.

These join a very long list of standard features found on VanMoof e-bikes, like a well-designed and useful app, an integrated Kick Lock on the rear wheel, baked-in Apple Find My support, hydraulic disc brakes, muscular city tires, bright integrated lights, mudguards, and a sturdy kickstand. And because it’s VanMoof, you can also subscribe to three years of theft protection ($398 / €348), with guaranteed recovery or replacement within two weeks, and three years of maintenance ($348 / €298) in select cities.

VanMoof e-bikes now have integrated mounts and USB-C charging for your phone.

I picked up my dark gray (also available in light gray) VanMoof S5 loaner in late March, but I ran into a few issues that delayed this review. These included intermittent connectivity failures between the app and the bike, a Kick Lock that didn’t always disengage, and an alarm that would briefly trigger for no apparent reason. Those issues were all corrected by an over-the-air firmware update (v1.2.0) released in mid-April, before I could even report them to VanMoof support.

I have mixed emotions about this. The S5 and A5 started shipping in quantity in March — albeit eight months late — so you’d think the company would have had time to sort out any issues in VanMoof’s new testing labs. That’s annoying given VanMoof’s history of initial quality issues and the company’s assurances that they wouldn’t be repeated. Then again, premium e-bikes like VanMoof’s are increasingly complex machines, and seeing the company solve issues so quickly is commendable.

One issue that hasn’t been fixed is idle battery drain, but VanMoof tells me that a firmware update is coming to solve it in “two weeks” time. In my case, the issue caused the idle S5’s battery to drain from 86 percent to 65 percent over a period of 10 days. I generally lose about two percent charge each day whether I ride it or not.

Oh, and since I’ve installed several firmware updates in the last month (I’m currently at v1.2.4), I need to mention this: the S5 plays a jaunty little tune the entire time the firmware is being installed. It was cute at first; my daughter even offered a little dance to go with it. But installation takes five to 10 minutes, and after the first time you hear it, the tune is just annoying, and there’s no way to turn it off.

Halo Ring in sunlight.
Halo Ring in low light.

As for new features, the Halo Rings next to each grip are the most visible change from previous VanMoofs. At least until you hit sunlight, when those weak LEDs wash out almost completely. The Halo Rings are meant to show speed, remaining charge, current pedal-assist power level, and more through a series of light bars and animations. Overall, they’re fine, if gimmicky, but I don’t have much need for status information when bicycling. I also didn’t miss the old top-tube matrix display.

Riding the 23kg (50.7lb) VanMoof S5 feels like riding an S3, albeit with fewer shifts and a boost button that provides more torque when you’re trying to pass someone or get an early jump off the line. The fifth-generation 250W motor of VanMoof’s own design is almost silent, even at its top assisted speed of 25km/h in Europe (which increases to 20mph in the US). And the new three-speed e-shifter does a better job of finding the right gear than the S3’s four-speed e-shifter did. I still felt a few clinks and spinning pedals, especially when mashing down hard on the cranks in a hurry. But overall, the S5’s predictive shifting is much improved, especially when rolling along at a casual pace. Still, it’s not as smooth as the automatic shifters from Enviolo, for example, so there’s work to be done.

It’s a shame VanMoof doesn’t offer a simple belt-drive option for its e-bikes. That, coupled with the S5’s torquey boost button, would obviate the need for gears in all but the hilliest environments. It’s why I’m a big fan of the premium belt-driven pedal-assist e-bikes sold by Ampler and Cowboy. They cost less than the S5 but are also more fun to ride thanks to their lighter front ends (both brands use rear-hub motors).

As for range, VanMoof says I should be able to get 60km in full power mode. However, I was only able to eke out 48.6km (30.2 miles) from the S5’s 487Wh battery when riding in full power mode and frequently pressing the boost button, in temperatures ranging from freezing to 15C (59F). That’s about the same range I got when testing the VanMoof S3 — 47km (29.2 miles) — and its bigger 504Wh battery. The issue that currently causes the battery to lose charge overnight certainly didn’t help my range. The battery can be charged from zero to 100 percent in a very slow 6 hours and 30 minutes via the included charger.
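
For the curious, here’s a quick back-of-the-envelope energy check using the figures above (a rough sketch only; real-world consumption varies with temperature, terrain, and how often you press boost):

    // Approximate energy consumption per km, from the numbers in this review.
    const s5 = { batteryWh: 487, rangeKm: 48.6 }; // this test: full power + boost
    const s3 = { batteryWh: 504, rangeKm: 47.0 }; // from the earlier S3 review

    const whPerKm = (bike: { batteryWh: number; rangeKm: number }) =>
      bike.batteryWh / bike.rangeKm;

    console.log(whPerKm(s5).toFixed(1)); // "10.0" Wh/km
    console.log(whPerKm(s3).toFixed(1)); // "10.7" Wh/km

In other words, the S5 squeezed slightly more distance out of each watt-hour, which is why the range ends up about the same despite the smaller battery.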

I had been wondering how VanMoof would use the new multifunction buttons located just below the bell and boost buttons. The small button on the right (below the boost) lets you change the motor power on the fly, while the one on the left (below the bell) makes your lights flash as a kind of warning to people around you. Both of these features tick boxes on marketing sheets but aren’t very useful in everyday riding.

And since this is a VanMoof, the battery is integrated and can only be removed during maintenance. The company does have a new “click-on” version (no velcro!) of its extended battery coming for the S5 and A5 that you can charge inside the home.

The dark gray VanMoof S5: too complex for its own good?

I’ve had a nagging concern about VanMoof e-bikes for the last few years that I even mentioned in the S3 review. Are they getting too complex for their own good?

Electric bikes — especially commuter e-bikes like the S5 — are subjected to daily wear and tear in all kinds of weather. Even basic bikes are difficult to maintain when used every day, and VanMoof’s e-bikes are expensive rolling computers.

Honestly, I could do without the fancy automatic chain-driven three-speed shifter, superfluous multifunction buttons, programmable electronic bell, Halo Ring interface, Apple tracking, and perky sounds for startup, shutdown, and firmware updates. Give me one gear and a maintenance-free belt drive alongside that torquey boost button on a pedal-assisted e-bike that will get me back and forth to my office every day, no matter what, in style and without fail. But that’s not the S5.

Don’t get me wrong, the VanMoof S5 is a very good electric bike with a longer feature list than any other e-bike I can name. It also has one of the best networks of service hubs available in cities around the world. That’s important because most S5 / A5 parts are only available from VanMoof, so make sure you have a service center nearby if you’re interested in buying.

VanMoof, for all its early covid successes, ran into financial trouble at the end of 2022, when it was forced to ask investors for an emergency injection of capital just to pay the bills. But the entire e-bike industry was struggling post-covid as sales plummeted and supply chains wobbled. Competitors like Cowboy and industry giant Stella also had to raise cash to deal with swollen e-bike inventories.

As good as the S5 is, its feature set verges on gimmickry, in my opinion, perhaps in an attempt to justify the new, higher $3,998 / €3,498 price tag. Many of the features are cute, entertaining, and, sure, a tad useful at first. But most just aren’t needed by regular commuters. The S5 has too many darlings and not enough killing.

For context on that price, the VanMoof S5 is currently $500 more expensive than the comparable Cowboy 4 series and $1,000 more than the simpler Ampler Axel. Viewed in those terms, VanMoof’s pricing seems about right.

Is the S5 worth it? I’ll leave that for you to decide in this uncertain and inflationary world. While it’s definitely an improvement over the S3, it’s not twice the bike.

All photography by Thomas Ricker / The Verge

Backup Power: A Growing Need, if You Can Afford It

Extreme weather linked to climate change is causing more blackouts. But generators and batteries are still out of reach of many.

Google engineer warns it could lose out to open-source technology in AI race

Commonly available software poses threat to tech company and OpenAI’s ChatGPT, leaked document says

Google has been warned by one of its engineers that the company is not in a position to win the artificial intelligence race and could lose out to commonly available AI technology.

A document from a Google engineer leaked online said the company had done “a lot of looking over our shoulders at OpenAI”, referring to the developer of the ChatGPT chatbot.


Friday, May 5, 2023

Bing, Bard, and ChatGPT: AI chatbots are rewriting the internet
Hands with additional fingers typing on a keyboard.
Álvaro Bernis / The Verge

How we use the internet is changing fast, thanks to the advancement of AI-powered chatbots that can find information and redeliver it as a simple conversation.

Big players, including Microsoft with its Bing AI (and Copilot), Google with Bard, and OpenAI with ChatGPT (now powered by GPT-4), are making AI chatbot technology that was previously restricted to test labs more accessible to the general public.

We’ve even tested all three chatbots head-to-head to see which one is the best — or at least which one gives us the best responses right now when it comes to pressing questions like “how do I install RAM into my PC?”

How do these large language model (LLM) programs work? OpenAI’s GPT-3 told us that AI uses “a series of autocomplete-like programs to learn language” and that these programs analyze “the statistical properties of the language” to “make educated guesses based on the words you’ve typed previously.”

Or, in the words of James Vincent, a human person, “These AI tools are vast autocomplete systems, trained to predict which word follows the next in any given sentence. As such, they have no hard-coded database of ‘facts’ to draw on — just the ability to write plausible-sounding statements. This means they have a tendency to present false information as truth since whether a given sentence sounds plausible does not guarantee its factuality.”
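
To make that “vast autocomplete” idea concrete, here is a deliberately tiny sketch of next-word prediction from word-pair statistics. It’s illustrative only: real LLMs use neural networks over tokens rather than lookup tables, but the predict-the-next-word objective is the same.

    // Toy bigram model: count which word follows which, then predict the
    // statistically most likely continuation -- crude "autocomplete."
    const corpus = "the cat sat on the mat and the cat ate".split(" ");

    const counts = new Map<string, Map<string, number>>();
    for (let i = 0; i < corpus.length - 1; i++) {
      const cur = corpus[i];
      const next = corpus[i + 1];
      if (!counts.has(cur)) counts.set(cur, new Map());
      const followers = counts.get(cur)!;
      followers.set(next, (followers.get(next) ?? 0) + 1);
    }

    function predictNext(word: string): string | undefined {
      const followers = counts.get(word);
      if (!followers) return undefined;
      // Highest count wins: plausibility, not truth.
      return [...followers.entries()].sort((a, b) => b[1] - a[1])[0][0];
    }

    console.log(predictNext("the")); // "cat" -- plausible, with no notion of facts

Note that nothing in this process checks whether the predicted continuation is true, which is exactly the failure mode Vincent describes.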

There are many more pieces of the AI landscape coming into play — and there are going to be problems — but you can be sure to see it all unfold here on The Verge.

PSA: you’ve got just one weekend left to claim Sony’s PlayStation Plus greatest hits bundle
Sony’s PS5 console.
Photo by Vjeran Pavic / The Verge

The deadline to claim the titles included in the PlayStation Plus Collection is fast approaching, meaning you’ve got until May 9th to claim the dozen-plus PS4 classics that are included as part of a PlayStation Plus subscription. Eurogamer notes that the list currently includes some absolutely cracking games like Bloodborne, God of War, Ratchet and Clank, The Last Guardian, and Uncharted 4: A Thief’s End.

Sony launched the collection alongside the PlayStation 5 console back in 2020, allowing owners of its new console to catch up on some of the biggest games of the previous generation. It was a neat incentive to subscribe to PlayStation Plus, and a reminder of the wealth of PS4 titles available to play via backwards compatibility in the early days when native PS5 titles were still thin on the ground.

Obviously there are far too many games included in the collection to be able to play through them all by next Tuesday. But when announcing the discontinuation of the feature in February, Sony explicitly said that redeeming them before May 9th “will enable you to access those titles even after this date for as long as you remain a PlayStation Plus member.” So if you have an inkling you’d like to play Ratchet and Clank at some point after that date, best nab it now. Just to be safe.

If you don’t, several of the games on the list will continue to be available as part of Sony’s more expensive PlayStation Plus Extra and Premium tiers. These include Bloodborne, Resident Evil 7: Biohazard, Batman: Arkham Knight, The Last Guardian, God of War, Uncharted 4, and Fallout 4. The full PlayStation Plus catalog can be found on Sony’s site.

Microsoft is reportedly helping AMD expand into AI chips
AMD’s AI-capable MI300 data center APU is set to arrive sometime later this year. | Photo by ROBYN BECK/AFP via Getty Images

Microsoft has allegedly teamed up with AMD to help bolster the chipmaker’s expansion into artificial intelligence processors. According to a report by Bloomberg, Microsoft is providing engineering resources to support AMD’s developments as the two companies join forces to compete against Nvidia, which controls an estimated 80 percent market share in the AI processor market.

In turn, Bloomberg’s sources also claim that AMD is helping Microsoft to develop its own in-house AI chips, codenamed Athena. Several hundred employees from Microsoft’s silicon division are reportedly working on the project and the company has apparently already sunk around $2 billion into its development. Microsoft spokesperson Frank Shaw has, however, denied that AMD is involved with Athena.

We have contacted AMD and Microsoft for confirmation and will update this story should we hear back.

The explosive popularity of AI services like OpenAI’s ChatGPT is driving demand for processors that can handle the huge computational workloads required to run them. Nvidia’s commanding market share in graphics processing units (GPUs) — the specialized chips that provide the required computing power — allows it to dominate this space. There’s currently no suitable alternative, and that’s a problem for companies like Microsoft that need Nvidia’s expensive processors to power the various AI services running in its Azure cloud.

Nvidia’s CUDA libraries have driven most of the progress in AI over the past decade. Despite AMD being a major rival in the gaming hardware industry, the company still doesn’t have a suitable alternative to the CUDA ecosystem for large-scale machine learning deployments. Now that the AI industry is heating up, AMD is seeking to place itself in a better position to capitalize. “We are very excited about our opportunity in AI — this is our number one strategic priority,” Chief Executive Officer Lisa Su said during the chipmaker’s earnings call Tuesday. “We are in the very early stages of the AI computing era, and the rate of adoption and growth is faster than any other technology in recent history.”

Su claims that AMD is well positioned to create partly customized chips for its biggest customers to use in their AI data centers. “I think we have a very complete IP portfolio across CPUs, GPUs, FPGAs, adaptive SoCs, DPUs, and a very capable semi-custom team,” said Su, adding that the company is seeing “higher volume opportunities beyond game consoles.”

AMD is also confident that its upcoming Instinct MI300 data center chip could be adapted for generative AI workloads. “MI300 is actually very well-positioned for both HPC or supercomputing workloads as well as for AI workloads,” said Su. “And with the recent interest in generative AI, I would say the pipeline for MI300 has expanded considerably here over the last few months, and we’re excited about that. We’re putting in a lot more resources.”

In the meantime, Microsoft intends to keep working closely with Nvidia as it attempts to secure more of the company’s processors. The AI boom has led to a growing shortage of specialized GPU chips, further constrained by Nvidia having a near monopoly on the supply of such hardware. Microsoft and AMD aren’t the only players trying to develop in-house AI chips — Google has its own TPU (Tensor Processing Unit) chip for training its AI models, and Amazon has similarly created Trainium AI chips to train machine learning computer models.

OpenAI’s regulatory troubles are only just beginning
Illustration of the OpenAI logo on an orange background with purple lines
ChatGPT isn’t out of the EU’s data privacy woods just yet. | Illustration: The Verge

The European Union’s fight with ChatGPT is a glance into what’s to come for AI services.

OpenAI managed to appease Italian data authorities and lift the country’s effective ban on ChatGPT last week, but its fight against European regulators is far from over.

Earlier this year, OpenAI’s popular and controversial ChatGPT chatbot hit a big legal snag: an effective ban in Italy. The Italian Data Protection Authority (GPDP) accused OpenAI of violating EU data protection rules, and the company agreed to restrict access to the service in Italy while it attempted to fix the problem. On April 28th, ChatGPT returned to the country, with OpenAI lightly addressing GPDP’s concerns without making major changes to its service — an apparent victory.

The GPDP has said it “welcomes” the changes ChatGPT made. However, OpenAI’s legal issues — and those of companies building similar chatbots — are likely just beginning. Regulators in several countries are investigating how these AI tools collect and produce information, citing a range of concerns from companies’ collection of unlicensed training data to chatbots’ tendency to spew misinformation. In the EU, they’re applying the General Data Protection Regulation (GDPR), one of the world’s strongest legal privacy frameworks, the effects of which will likely reach far outside Europe. Meanwhile, lawmakers in the bloc are putting together a law that will address AI specifically — likely ushering in a new era of regulation for systems like ChatGPT.

ChatGPT is one of the most popular examples of generative AI — a blanket term covering tools that produce text, image, video, and audio based on user prompts. The service reportedly became one of the fastest-growing consumer applications in history after reaching 100 million monthly active users just two months after launching in November 2022 (OpenAI has never confirmed these figures). People use it to translate text into different languages, write college essays, and generate code. But critics — including regulators — have highlighted ChatGPT’s unreliable output, confusing copyright issues, and murky data protection practices.

Italy was the first country to make a move. On March 31st, it highlighted four ways it believed OpenAI was breaking GDPR: allowing ChatGPT to provide inaccurate or misleading information, failing to notify users of its data collection practices, failing to meet any of the six possible legal justifications for processing personal data, and failing to adequately prevent children under 13 years old from using the service. It ordered OpenAI to immediately stop using personal information collected from Italian citizens in its training data for ChatGPT.

No other country has taken such action. But since March, at least three EU nations — Germany, France, and Spain — have launched their own investigations into ChatGPT. Meanwhile, across the Atlantic, Canada is evaluating privacy concerns under its Personal Information Protection and Electronic Documents Act, or PIPEDA. The European Data Protection Board (EDPB) has even established a dedicated task force to help coordinate investigations. And if these agencies demand changes from OpenAI, they could affect how the service runs for users across the globe.

Regulators’ concerns can be broadly split into two categories: where ChatGPT’s training data comes from and how OpenAI is delivering information to its users.

ChatGPT uses either OpenAI’s GPT-3.5 or GPT-4 large language models (LLMs), which are trained on vast quantities of human-produced text. OpenAI is cagey about exactly what training text is used but says it draws on “a variety of licensed, created, and publicly available data sources, which may include publicly available personal information.”

This potentially poses huge problems under GDPR. The law was enacted in 2018 and covers every service that collects or processes data from EU citizens — no matter where the organization responsible is based. GDPR rules require companies to have explicit consent before collecting personal data, to have legal justification for why it’s being collected, and to be transparent about how it’s being used and stored.

European regulators claim that the secrecy around OpenAI’s training data means there’s no way to confirm if the personal information swept into it was initially given with user consent, and the GPDP specifically argued that OpenAI had “no legal basis” for collecting it in the first place. OpenAI and others have gotten away with little scrutiny so far, but this claim adds a big question mark to future data scraping efforts.

Then there’s GDPR’s “right to be forgotten,” which lets users demand that companies correct their personal information or remove it entirely. OpenAI preemptively updated its privacy policy to facilitate those requests, but there’s been debate about whether it’s technically possible to handle them, given how complex it can be to separate specific data once it’s churned into these large language models.

OpenAI also gathers information directly from users. Like any internet platform, it collects a range of standard user data (e.g., name, contact info, card details). But, more significantly, it records interactions users have with ChatGPT. As stated in an FAQ, this data can be reviewed by OpenAI’s employees and is used to train future versions of its model. Given the intimate questions people ask ChatGPT — using the bot as a therapist or a doctor — this means the company is scooping up all sorts of sensitive data.

At least some of this data may have been collected from minors, as while OpenAI’s policy states that it “does not knowingly collect personal information from children under the age of 13,” there’s no strict age verification gate. That doesn’t play well with EU rules, which ban collecting data from people under 13 and (in some countries) require parental consent for minors under 16. On the output side, the GPDP claimed that ChatGPT’s lack of age filters exposes minors to “absolutely unsuitable responses with respect to their degree of development and self-awareness.”

OpenAI maintains broad latitude to use that data, which has worried some regulators, and storing it presents a security risk. Companies like Samsung and JPMorgan have banned employees from using generative AI tools over fears they’ll upload sensitive data. And, in fact, Italy announced its ban soon after ChatGPT suffered a serious data leak, exposing users’ chat history and email addresses.

ChatGPT’s propensity for providing false information may also pose a problem. GDPR regulations stipulate that all personal data must be accurate, something the GPDP highlighted in its announcement. Depending on how that’s defined, it could spell trouble for most AI text generators, which are prone to “hallucinations”: a cutesy industry term for factually incorrect or irrelevant responses to a query. This has already seen some real-world repercussions elsewhere, as a regional Australian mayor has threatened to sue OpenAI for defamation after ChatGPT falsely claimed he had served time in prison for bribery.

ChatGPT’s popularity and current dominance over the AI market make it a particularly attractive target, but there’s no reason why its competitors and collaborators, like Google with Bard or Microsoft with its OpenAI-powered Azure AI, won’t face scrutiny, too. Before ChatGPT, Italy banned the chatbot platform Replika for collecting information on minors — and so far, it’s stayed banned.

While GDPR is a powerful set of laws, it wasn’t made to address AI-specific issues. Rules that do, however, may be on the horizon.

In 2021, the EU submitted its first draft of the Artificial Intelligence Act (AIA), legislation that will work alongside GDPR. The act governs AI tools according to their perceived risk, from “minimal” (things like spam filters) to “high” (AI tools for law enforcement or education) or “unacceptable” and therefore banned (like a social credit system). After the explosion of large language models like ChatGPT last year, lawmakers are now racing to add rules for “foundation models” and “General Purpose AI Systems (GPAIs)” — two terms for large-scale AI systems that include LLMs — and potentially classing them as “high risk” services.

The AIA’s provisions go beyond data protection. A recently proposed amendment would force companies to disclose any copyrighted material used to develop generative AI tools. That could expose once-secret datasets and leave more companies vulnerable to infringement lawsuits, which are already hitting some services.

But passing it may take a while. EU lawmakers reached a provisional AI Act deal on April 27th. A committee will vote on the draft on May 11th, and the final proposal is expected by mid-June. Then, the European Council, Parliament, and Commission will have to resolve any remaining disputes before implementing the law. If everything goes smoothly, it could be adopted by the second half of 2024, a little behind the official target of Europe’s May 2024 elections.

For now, Italy and OpenAI’s spat offers an early look at how regulators and AI companies might negotiate. The GPDP offered to lift its ban if OpenAI met several proposed resolutions by April 30th. That included informing users how ChatGPT stores and processes their data, asking for explicit consent to use said data, facilitating requests to correct or remove false personal information generated by ChatGPT, and requiring Italian users to confirm they’re over 18 when registering for an account. OpenAI didn’t hit all of those stipulations, but it met enough to appease Italian regulators and get access to ChatGPT in Italy restored.

OpenAI still has targets to meet. It has until September 30th to create a harder age-gate to keep out minors under 13 and require parental consent for older underage teens. If it fails, it could see itself blocked again. But it’s provided an example of what Europe considers acceptable behavior for an AI company — at least until new laws are on the books.

‘Like Icarus – now everyone is burnt’: how Vice and BuzzFeed fell to earth

Era of lofty valuations for upstart youth media appears to be over, with Vice ‘heading for bankruptcy’ and BuzzFeed News shutting down

Just over a decade ago, Rupert Murdoch endorsed what appeared to be a glittering future for Vice, firing off a tweet after an impromptu visit to the media upstart’s Brooklyn offices and a drink in a nearby bar with the outspoken co-founder Shane Smith.

“Who’s heard of VICE media?” the Australian media mogul posted from his car on the way home from the 2012 visit, which resulted in a $70m (£55m) investment. “Wild, interesting effort to interest millennials who don’t read or watch established media. Global success.”


‘Ron DeSoros’? Conspiracy Theorists Target Trump’s Rival.

Ron DeSantis, a likely contender for the Republican presidential nomination, must court far-right voters who consider him a tool of the Deep State.

Thursday, May 4, 2023

White House Unveils Initiatives to Reduce Risks of AI

Vice President Kamala Harris also plans to meet later on Thursday with the chief executives of tech companies that are developing A.I.

White House rolls out plan to promote ethical AI
President Biden And VP Harris Deliver Remarks On National Small Business Week In The Rose Garden
Photo by Chip Somodevilla/Getty Images

The White House announced more funding and policy guidance for developing responsible artificial intelligence ahead of a Biden administration meeting with top industry executives.

The actions include a $140 million investment from the National Science Foundation to launch seven new National AI Research (NAIR) Institutes, increasing the total number of AI-dedicated facilities to 25 nationwide. Google, Microsoft, Nvidia, OpenAI and other companies have also agreed to allow their language models to be publicly evaluated during this year’s Def Con. The Office of Management and Budget (OMB) also said that it would be publishing draft rules this summer for how the federal government should use AI technology.

“These steps build on the Administration’s strong record of leadership to ensure technology improves the lives of the American people, and break new ground in the federal government’s ongoing effort to advance a cohesive and comprehensive approach to AI-related risks and opportunities,” the administration’s press release said. It does not specify the details of what the Def Con evaluation will include, beyond saying that it will “allow these models to be evaluated thoroughly by thousands of community partners and AI experts.”

The announcement comes ahead of a Thursday White House meeting, led by Vice President Kamala Harris, with the chief executives of Alphabet, Anthropic, Microsoft, and OpenAI to discuss AI’s potential risks. “The meeting is part of a broader, ongoing effort to engage with advocates, companies, researchers, civil rights organizations, not-for-profit organizations, communities, international partners, and others on critical AI issues,” the Thursday release said.

Last October, the Biden administration made its first strides to regulate AI by releasing a blueprint for an “AI Bill of Rights.” The project was intended to serve as a framework for use of the technology by both the public and private sectors, encouraging anti-discrimination and privacy protections.

Federal regulators and Congress have announced a fresh focus on AI over the last few weeks. In April, the Federal Trade Commission, Consumer Financial Protection Bureau, Justice Department, and Equal Employment Opportunity Commission issued a joint warning arguing that they already had the authority to go after companies whose AI products harm users.

Senate Majority Leader Chuck Schumer (D-NY) and other lawmakers also reportedly met with Elon Musk to discuss AI regulation last week.

What really happened when Elon Musk took over Twitter

Why has the social network been in total chaos since the world’s richest man took control? Flipping the Bird investigates. Plus: five of the best podcasts about planet Earth

If the prospect of taking an oath of allegiance to an unelected billionaire doesn’t excite you, television is going to be a pretty tedious place for the next few days. Fortunately, there’s never been a better time to retreat into the world of podcasts. This week, the Guardian released a five-part series looking into the murky finances of King Charles. From its examination of his family’s past exploitation of enslaved people, through to the dubious line between his personal wealth and that which is supposedly held for our nation, it’s a welcome look at the kinds of issues lacking from the national conversation.

Also excellently skewering the powers that be is Pod Save the UK – a new homegrown version of the popular US political podcast, hosted by Nish Kumar and Guardian journalist Coco Khan. There’s also a look at Elon Musk’s, ahem, maverick leadership of Twitter, and an examination of a two-decade-long battle over the theft of a Banksy. Plenty of brilliant listens, then, to distract you from having to watch the king’s big party ...

Alexi Duggins
Deputy TV editor


Google rolls out passkey technology in ‘beginning of the end’ for passwords

Apple and Microsoft also collaborated on the technology, which allows authentication with fingerprint ID, facial ID, or a PIN

Google is moving one step closer to ditching passwords, rolling out its passkey technology to Google accounts from Thursday.

The passkey is designed to replace passwords entirely by allowing authentication with a fingerprint, face scan, or PIN on the phone or device you use to sign in.
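
Passkeys are built on the WebAuthn standard that Apple, Google, and Microsoft back through the FIDO Alliance. As a rough illustration of what registration looks like in the browser (the values below are placeholders; a real site gets its challenge and user handle from the server), creating a passkey boils down to one API call:

    // Minimal sketch of registering a passkey via the browser WebAuthn API.
    async function createPasskey(): Promise<Credential | null> {
      return navigator.credentials.create({
        publicKey: {
          challenge: crypto.getRandomValues(new Uint8Array(32)), // server-issued in practice
          rp: { name: "Example App", id: "example.com" },        // placeholder relying party
          user: {
            id: new TextEncoder().encode("user-1234"),           // placeholder user handle
            name: "user@example.com",
            displayName: "Example User",
          },
          pubKeyCredParams: [{ type: "public-key", alg: -7 }],   // ES256
          authenticatorSelection: {
            residentKey: "required",      // discoverable credential, i.e., a passkey
            userVerification: "required", // fingerprint, face, or PIN
          },
        },
      });
    }

The private key never leaves the device (or its synced keychain), which is what makes passkeys resistant to phishing and to server-side password leaks.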


The UK’s tortured attempt to remake the internet, explained
Illustration by Hugo Herrera for The Verge

The bill aims to make the country ‘the safest place in the world to be online’ but has been mired in multiple delays and criticisms that it’s grown too large and unwieldy to please anyone.

At some point this year, the UK’s long-delayed Online Safety Bill is finally expected to become law. In the government’s words, the legislation is an attempt to make the UK “the safest place in the world to be online” by introducing a range of obligations for how large tech firms should design, operate, and moderate their platforms.

As any self-respecting Verge reader knows, content moderation is never simple. It’s difficult for platforms, difficult for regulators, and difficult for lawmakers crafting the rules in the first place. But even by the standards of internet legislation, the Online Safety Bill has had a rocky passage. It’s been developed over years during a particularly turbulent era in British politics, changing dramatically from year to year. And as an example of just how controversial the bill has become, some of the world’s biggest online organizations, from WhatsApp to Wikipedia, are preemptively refusing to comply with its potential requirements.

So if you’ve tuned out the Online Safety Bill over the past few years — and let’s be honest, a lot of us have — it’s time to brush up. Here’s where the bill came from, how it’s changed, and why lawmakers might be about to finally put it on the books.

So let’s start from the beginning. What is the Online Safety Bill?

The UK government’s elevator pitch is that the bill is fundamentally an attempt to make the internet safer, particularly for children. It attempts to crack down on illegal content like child sexual abuse material (CSAM) and to minimize the possibility that kids might encounter harmful and age-inappropriate content, including online harassment as well as content that glorifies suicide, self-harm, and eating disorders.

But it’s difficult to TL;DR the Online Safety Bill at this point, precisely because it’s become so big and sprawling. On top of these broad strokes, the bill has a host of other rules. It requires online platforms to let people filter out objectionable content. It introduces age verification for porn sites. It criminalizes fraudulent ads. It requires sites to consistently enforce their terms of service. And if companies don’t comply, they could be fined up to £18 million (around $22.5 million) or 10 percent of global revenue, see their services blocked, and even see their executives jailed.

In short, the Online Safety Bill has become a catchall for UK internet regulation, mutating every time a new prime minister or digital minister has taken up the cause.

How many prime ministers are we talking about here?

So far? Four.

Wait, how long has this bill been in the works for?

The Online Safety Bill started with a document called the “Online Harms White Paper,” which was unveiled way back in April 2019 by then-digital minister Jeremy Wright. The death of Molly Russell by suicide in 2017 brought into sharp relief the dangers of children being able to access content relating to self-harm and suicide online, and other events like the Cambridge Analytica scandal had created the political impetus to do something to regulate big online platforms.

The idea was to introduce a so-called “duty of care” for big platforms like Facebook — similar to how British law asks employers to look after the safety of their employees. This meant companies would have to perform risk assessments and implement proactive solutions to potential harms rather than play whack-a-mole with problems as they crop up. As Carnegie UK associate Maeve Walsh puts it, “Interventions could take place in the way accounts are created, the incentives given to content creators, in the way content is spread as well as in the tools made available to users before we got to content take down.”

The white paper laid out fines and the potential to block websites that don’t comply. At that point, it amounted to some of the broadest and potentially strictest online regulations to have been proposed globally.

What was the response like at the time?

Obviously, there was a healthy amount of skepticism (Wired’s take was simply titled “All that’s wrong with the UK’s crusade against online harms”), but there were hints of cautious optimism as well. Mozilla, for example, said the overall approach had “promising potential,” although it warned about several issues that would need to be addressed to avoid infringing on people’s rights.

If the British government was on to such a winner, why hasn’t it passed this bill four years later?

Have you paid attention to British politics at all in the past four years? The original white paper was introduced four prime ministers and five digital ministers ago, and the bill seems to have been forced into the back seat by more urgent matters like leaving the European Union and handling the covid-19 pandemic.

But as it’s passed through all these hands, the bill has ballooned in size — picking up new provisions and sometimes dropping them when they’re too controversial. In 2021, when the first draft of the bill was presented to Parliament, it was “just” 145 pages long, but by this year, it had almost doubled to 262 pages.

Where did all those extra pages come from?

Given the bill’s broad ambitions for making online life safer in general, many new elements were added by the time it returned to Parliament in March 2022. In no particular order, these included:

  • Age checks for porn sites
  • Measures to clamp down on “anonymous trolls” by requiring that services give the option for users to verify their identity
  • Criminalizing cyberflashing (aka the sending of unsolicited nudes via social media or dating apps)
  • Cracking down on scam ads

Over time, the bill’s definition of “safety” has started to look pretty vague. A provision in the May 2021 draft forbade companies “from discriminating against particular political viewpoints and will need to apply protections equally to a range of political opinions, no matter their affiliation,” echoing now familiar fears that conservative voices are unfairly “censored” online. Bloomberg called this an “anti-censorship” clause at the time, and it continues to be present in the 2023 version of the bill.

And last November, ministers were promising to add even more offenses to the bill, including downblousing and the creation of nonconsensual deepfake pornography.

Hold up. Why does this pornography age check sound so familiar?

The Conservative Party has been trying to make it happen since well before the Online Safety Bill. Age verification was a planned part of the Digital Economy Bill in 2016 and then was supposed to happen in 2019 before being delayed and abandoned in favor of rolling the requirements into the Online Safety Bill.

The problem is, it’s very difficult to come up with an age verification system that can’t be easily circumvented in minutes and that doesn’t create the risk of someone’s most intimate web browsing moments being linked to their real-life identity — notwithstanding a plan to let users buy a “porn pass” from a local shop.

And it’s not clear how the Online Safety Bill will overcome this challenge. An explainer by The Guardian notes that Ofcom will issue codes of practice on how to determine users’ ages, with possible solutions involving having age verification companies check official IDs or bank statements.

Regardless of the difficulties, the government is pushing ahead with the age verification requirements, which is more than can be said for its proposed rules around “legal but harmful” content.

And what exactly were these “legal but harmful” rules?

Well, they were one of the most controversial additions to the entire bill — so much so that they’ve been (at least partially) walked back.

Originally, the government said it should officially designate certain content as harmful to adults but not necessarily illegal — things like bullying or content relating to eating disorders. (It’s the less catchy cousin of “lawful but awful.”) Companies wouldn’t necessarily have to remove this content, but they’d have to do risk assessments about the harm it might pose and set out clearly in their terms of service how they plan to tackle it.

But critics were wary of letting the state define what counts as “harmful,” the fear being that ministers would have the power to censor what people could say online. At a certain point, if the government is formally pushing companies to police legal speech, it’s debatable how “legal” that speech still is.

This criticism had an effect. The “legal but harmful” provisions for adults were removed from the bill in late 2022 — and so was a “harmful communications” offense that covered sending messages that caused “serious distress,” something critics feared could similarly criminalize offensive but legal speech.

Instead, the government introduced a “triple shield” covering content meant for adults. The first “shield” rule says platforms must remove illegal content like fraud or death threats. The second says anything that breaches a website’s terms of service should be moderated. And the third says adult users should be offered filters to control the content they see.

The thinking here is that most websites already restrict “harmful communications” and “legal but harmful” content, so if they’re told to apply their terms of service consistently, the problem (theoretically) takes care of itself. Conversely, platforms are actively prohibited from restricting content that doesn’t breach the terms of service or break the law. Meanwhile, the filters are supposed to let adults decide whether to block objectionable content like racism, antisemitism, or misogyny. The bill also tells sites to let people block unverified users — aka those pesky “anonymous trolls.”

None of this impacts the rules aimed specifically at children — in those cases, platforms will still have a duty to mitigate the impact of legal but harmful content.

I’m glad that the government addressed those problems, leaving a completely uncontroversial bill in its wake.

Wait, sorry. We’re just getting to the part where the UK might lose encrypted messaging apps.

Excuse me?

Remember WhatsApp? After the Online Safety Bill was introduced, it took issue with a section that asks online tech companies to use “accredited technology” to identify child sexual abuse content “whether communicated publicly or privately.” Since personal WhatsApp messages are end-to-end encrypted, not even the company itself can see their contents. Asking it to be able to identify CSAM, it says, would inevitably compromise this end-to-end encryption.

WhatsApp is owned by Meta, which is persona non grata among regulators these days, but it’s not the only encrypted messaging service whose operators are concerned. WhatsApp head Will Cathcart wrote an open letter that was co-signed by the heads of six other messaging apps, including Signal. “If implemented as written, [this bill] could empower Ofcom to try to force the proactive scanning of private messages on end-to-end encrypted communication services - nullifying the purpose of end-to-end encryption as a result and compromising the privacy of all users,” says the letter. “In short, the bill poses an unprecedented threat to the privacy, safety and security of every UK citizen and the people with whom they communicate.”

The consensus among legal and cybersecurity experts is that the only way to monitor for CSAM while maintaining encryption is to use some kind of client-side scanning, an approach Apple announced in 2021 that it would be using for image uploads to iCloud. But the company ditched the plans the following year amid widespread criticism from privacy advocates.
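
For a sense of what client-side scanning means in practice, here is a deliberately naive sketch: check content against a blocklist on the device, before encryption. Real proposals use perceptual hashes (such as Apple’s NeuralHash) that survive resizing and re-encoding; an exact cryptographic hash like the one below is trivially defeated and is shown only to illustrate where the check sits.

    import { createHash } from "node:crypto";

    // Hashes of known illegal images, distributed to the client (placeholder).
    const blocklist = new Set<string>();

    // Runs on the sender's device, before the message is encrypted and sent;
    // the server never sees the plaintext, only the outcome of the local check.
    function passesClientSideScan(fileBytes: Buffer): boolean {
      const digest = createHash("sha256").update(fileBytes).digest("hex");
      return !blocklist.has(digest);
    }

The catch, as critics note, is that the blocklist and matching code live on every device, so whoever controls the list decides what gets flagged.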

Organizations such as the Internet Society say that such scanning risks creating vulnerabilities for criminals and other attackers to exploit and that it could eventually lead to the monitoring of other kinds of speech. The government disagrees and says the bill “does not represent a ban on end-to-end encryption, nor will it require services to weaken encryption.” But without an existing model for how such monitoring can coexist with end-to-end encryption, it’s hard to see how the law could satisfy critics.

The UK government already has the power to demand that services remove encryption thanks to a 2016 piece of legislation called the Investigatory Powers Act. But The Guardian notes that WhatsApp has never received a request to do so. At least one commentator thinks the same could happen with the Online Safety Bill, effectively giving Ofcom a radical new power that it may never choose to wield.

But that hasn’t exactly satisfied WhatsApp, which has suggested it would rather leave the UK than comply with the bill.

Okay, so messaging apps aren’t a fan. What do other companies and campaigners have to say about the bill?

Privacy activists have also been fiercely critical of what they see as an attack on end-to-end encryption. The Electronic Frontier Foundation, Big Brother Watch, and Article 19 published an analysis earlier this year that said the only way to identify and remove child sexual exploitation and abuse material would be to monitor all private communications, undermining users’ privacy rights and freedom of expression. Similar objections were raised in another open letter last year signed by 70 organizations, cybersecurity experts, and elected officials. The Electronic Frontier Foundation has called the bill “a blueprint for repression around the world.”

Tech giants like Google and Meta have also raised numerous concerns with the bill. Google says there are practical challenges to distinguishing between illegal and legal content at scale and that this could lead to the over-removal of legal content. Meta suggests that focusing on having users verify their identities risks excluding anyone who doesn’t wish to share their identity from participating in online conversations.

Even beyond that, there are more fundamental concerns about the bill. Matthew Lesh, head of public policy at the Institute of Economic Affairs, notes that there’s simply a massive disparity between what is acceptable for children to encounter online and what’s acceptable for adults under the bill. So you either risk the privacy and data protection concerns of asking all users to verify their age or you moderate to a children’s standard by default for everyone.

That could put even a relatively safe and educational service like Wikipedia under pressure to ask for the ages of its users, which the Wikimedia Foundation’s Rebecca MacKinnon says would “violate [its] commitment to collect minimal data about readers and contributors.”

“The Wikimedia Foundation will not be verifying the age of UK readers or contributors,” MacKinnon wrote.

Okay, that’s a lot of criticism. So who’s in favor of this bill?

One group that’s been broadly supportive of the bill is children’s charities. The National Society for the Prevention of Cruelty to Children (NSPCC), for example, has called the Online Safety Bill “an urgent and necessary child protection measure” to tackle grooming and child sexual abuse online. It calls the legislation “workable and well-designed” and likes that it aims to “tackle the drivers of online harms rather than seek to remove individual pieces of content.” Barnardo’s, another children’s charity, has been supportive of the bill’s introduction of age verification for pornography sites.

Ian Russell, the father of the late Molly Russell, has called the Online Safety Bill “a really important piece of legislation,” though he’s pushed for it to go further when it comes to criminal sanctions for executives whose products are found to have endangered children’s well-being.

“I don’t think that without effective regulation the tech industry is going to put its house in order, to prevent tragedies like Molly’s from happening again,” Russell said. This sentiment appears to be shared by increasing numbers of lawmakers internationally, such as those in California who passed the Age-Appropriate Design Code Act in August last year.

Where’s the bill at these days?

As of this writing, the bill is currently working its way through the UK’s upper chamber, the House of Lords, after which it’ll be passed back to the House of Commons to consider any amendments that have been made. The government’s hope is to pass it at some point this summer.

Even after the bill passes, however, there will still be decisions to make about how it works in practice. Ofcom will need to decide which services pose a high enough risk to be covered by the bill’s strictest rules and develop codes of practice for platforms to abide by, including tackling thorny issues like how to introduce age verification for pornography sites. Only after the regulator completes this consultation process will companies know when and how to fully comply with the bill, and Ofcom has said it expects this to take months.

The Online Safety Bill has had a difficult journey through Parliament, and it’s likely to be months before we know how its most controversial aspects are going to work (or not) in practice.

Wednesday, May 3, 2023

Bernie Sanders, Elon Musk and White House seeking my help, says ‘godfather of AI’

Dr Geoffrey Hinton has been inundated with requests to talk after quitting Google to warn about risk of digital intelligence

The man often touted as the godfather of artificial intelligence will be responding to requests for help from Bernie Sanders, Elon Musk and the White House, he says, just days after quitting Google to warn the world about the risk of digital intelligence.

Dr Geoffrey Hinton, 75, won computer science’s highest honour, the Turing Award, in 2018 for his work on “deep learning”, along with Meta’s Yann LeCun and the University of Montreal’s Yoshua Bengio.


Microsoft is forcing Outlook and Teams to open links in Edge and IT admins are angry
An image showing the Edge logo
Illustration: The Verge

Microsoft Edge is a good browser, but for some reason Microsoft keeps trying to shove it down everyone’s throat and make it more difficult to use rivals like Chrome or Firefox. Microsoft has now started notifying IT admins that it will force Outlook and Teams to ignore the default web browser on Windows and open links in Microsoft Edge instead.

Reddit users have posted messages from the Microsoft 365 admin center that reveal how Microsoft is going to roll out this change. “Web links from Azure Active Directory (AAD) accounts and Microsoft (MSA) accounts in the Outlook for Windows app will open in Microsoft Edge in a single view showing the opened link side-by-side with the email it came from,” reads a message to IT admins from Microsoft.

While this won’t affect the default browser setting in Windows, it’s yet another part of Microsoft 365 and Windows that totally ignores your default browser choice for links. Microsoft already does this with the Widgets system in Windows 11 and even the search experience, where you’ll be forced into Edge if you click a link even if you have another browser set as default.

Microsoft’s message to IT admins. Image: paulanerspezi (Reddit)

IT admins aren’t happy, with many complaining in various threads on Reddit, as spotted by Neowin. If Outlook wasn’t enough, Microsoft says “a similar experience will arrive in Teams” soon, with web links from chats opening in Microsoft Edge side by side with Teams chats. Microsoft seems to be rolling this out gradually across Microsoft 365 users, and IT admins get 30 days’ notice before it rolls out to Outlook.

Microsoft 365 Enterprise IT admins will be able to alter the policy, but those on Microsoft 365 for business will have to manage this change on individual machines. That’s going to leave a lot of small businesses with the unnecessary headache of working out what has changed. Imagine being less tech-savvy, clicking a link in Outlook, and thinking you’ve lost all your favorites because it didn’t open in your usual browser.

The notifications to IT admins come just weeks after Microsoft promised significant changes to the way Windows manages which apps open certain files or links by default. At the time Microsoft said it believed “we have a responsibility to ensure user choices are respected” and that it’s “important that we lead by example with our own first party Microsoft products.” Forcing people into Microsoft Edge and ignoring default browsers is anything but respecting user choice, and it’s gross that Microsoft continues to abuse this.

Microsoft tested a similar change to the default Windows 10 Mail app in 2018, in an attempt to force people into Edge for email links. That never came to pass, thanks to a backlash from Windows 10 testers. A similar change in 2020 saw Microsoft try to force Chrome’s default search engine to Bing using the Office 365 installer, and IT admins weren’t happy then, either.

Windows 11 also launched with a messy and cumbersome process to set default apps, which was a step back from Windows 10 and drew concern from competing browser makers like Mozilla, Opera, and Vivaldi. A Windows 11 update has improved that process, but it’s clear Microsoft is still interested in finding ways to circumvent default browser choices.

Microsoft has already been using aggressive prompts to stop you from using Chrome and even added a giant Bing button to Edge in an effort to push people to use its search engine. Microsoft has also faced criticism over adding buy now, pay later financing options into Edge and its plan to build a crypto wallet into Edge. Microsoft also added a prompt to Edge Dev recently that appears when you try to use Google’s rival Bard AI chatbot. This relentless push of Edge, including through Windows Update, could all backfire for Microsoft and end up alienating Edge users instead of tempting them over from Chrome.

Pushing Buttons: Why the Microsoft-Activision Blizzard merger is a fight over the future of games

Pushing Buttons: Why the Microsoft-Activision Blizzard merger is a fight over the future of games

In this week’s newsletter: A UK regulator blocked a $70bn acquisition last week not because of any threat now, but over worries about a future monopoly

As is now tradition, an enormous piece of gaming news landed right after last week’s Pushing Buttons went out to readers: Microsoft’s huge $70bn purchase of Call of Duty, World of Warcraft and Candy Crush owner Activision Blizzard, a deal that has been in the works since January last year, was unexpectedly blocked by a UK regulator.

This might not seem interesting to anyone except those involved with the business of video games, or people with an inexplicable interest in the actions of regulatory authorities in Britain, but wait! It is quite interesting, because the response from these two giant companies has been entertainingly petty.

Continue reading...

‘Ready for some help?’: how a controversial technology firm courted Portland police

‘Ready for some help?’: how a controversial technology firm courted Portland police

SoundThinking, a gunshot detection company, worked with top police officials to secure a city contract, according to emails obtained by the Guardian

On 5 February 2022, police in Portland, Oregon, sent out a bulletin pleading with the public for information about a recent homicide case. Police had found Corey M Eady suffering from several gunshot wounds, and the 44-year-old had died shortly after being taken to a hospital. “This is the 11th homicide in Portland this year,” the bulletin read. “All 11 have been by gunfire.”

The next day, Portland police captain James Crooker got a text. “Ready for some help?”

ShotSpotter marketed itself aggressively to Portland police, tapping its vast network of law enforcement partners and supporters – some of whom now work at the company – to vouch for and advocate the service.

The company backed up claims that it is a nonintrusive and effective public safety tool with academic studies, some of which it funded or helped set up.

Once Portland police was on board, the company worked closely with Crooker, the Portland police captain, to win over a volunteer-led police oversight group, Fitcog, which recommended the use of ShotSpotter devices to the mayor, Ted Wheeler.

Greene, a representative for the company, also helped Crooker prepare for media interviews and even offered SoundThinking’s services to help the city apply for federal grants to fund a contract.

Continue reading...

Now you can post Horizon Worlds photos to your Instagram Story

Now you can post Horizon Worlds photos to your Instagram Story
Image of the Meta logo and wordmark on a blue background bordered by black scribbles made out of the Meta logo.
Illustration: Nick Barclay / The Verge

Meta’s VR social platform is getting a new feature to help you show off what a fun time you’re having in the so-called “metaverse” with all your Instagram and Facebook followers. With its v108 update, Horizon Worlds can now share photos and videos directly to your story on either platform. Meta is also using the update to test a couple of tweaks to the service’s safe zone feature.

It’s previously been possible to share content from Horizon Worlds to Reels — Instagram and Facebook’s TikTok-style vertical video feed feature — as reported by TechCrunch last October. But now you also have the option of posting Horizon Worlds’ Wii-level graphics to Meta’s stories, where they can sit alongside real-world photographs of holidays, meals, and days out.

Meta says the new story-sharing option is starting as a limited test for around 50 percent of Horizon Worlds users, but that it should get a wider rollout over the course of this month. The sharing feature can be accessed from the Media Gallery or Horizon Camera.

Alongside it, Meta is also testing a tweaked version of Horizon Worlds’ safe zone, a feature designed to let users step away from the VR social space when things get heated and hectic. As part of the test, the safe zone will show more details of the world around the user and offer fuller access to user profiles and other menus.

I suspect this won’t be the last time Meta tries to make its various services more interoperable, especially if it can succeed in creating any degree of FOMO among Instagram users about Horizon Worlds. But based on our experience trying out its social VR space when we reviewed the Quest Pro headset last year, there are more fundamental issues with Horizon Worlds that need to be addressed first.

My Weekend With an Emotional Support A.I. Companion

My Weekend With an Emotional Support A.I. Companion
Pi, an A.I. tool that debuted this week, is a twist on the new wave of chatbots: It assists people with their wellness and emotions.

Tuesday 2 May 2023

Apple and Google submit plan to fight AirTag stalking

Apple and Google submit plan to fight AirTag stalking

Companies join forces to tackle unwanted tracking via Apple’s gadget and similar devices such as Tile

Apple and Google are teaming up to thwart unwanted tracking through AirTags and similar gadgets.

The two companies behind the iPhone and the software that powers Android phones on Tuesday submitted a proposal to set standards for combating secret surveillance via Bluetooth devices that were created to help people find lost keys, keep tabs on luggage, or locate other things that have a tendency to be misplaced.

Continue reading...

Reddit reworks sharing from its apps and the look of link embeds for better social reach

Reddit reworks sharing from its apps and the look of link embeds for better social reach
Reddit logo shown in layers
Illustration by Alex Castro / The Verge

Reddit’s blog post today admits it “didn’t make it easy” to share content like cool conversations and memes to other social platforms — but now it’s finally doing something about it. Reddit is enhancing link embeds for messaging apps and adding more sharing functions, like posting directly to Instagram Stories right from Reddit’s app.

If you’ve ever tried to share a Reddit link from the official app via iMessage on an iPhone, for instance, you might recall it not having a particularly content-rich preview. Now the company is enhancing it with a more robust visual preview of the content, its subreddit name, and the number of upvotes and comments it has.

 Image: Reddit
The new preview for Reddit shares in iMessage.

Third-party Reddit client apps have, for years, built out better ways to share content — like how Apollo can create a threaded screenshot displaying however many replies you’d like — but these don’t include a link for the recipient that encourages them to visit Reddit.

 Image: Reddit
The new Reddit app share sheet can toss content directly into an Instagram story as a still image.

Reddit’s newfound prioritization of its own app and platform comes in hot as the world looks for the next best information channel; these features are built for Reddit’s official mobile app on iOS and Android.

Even official Reddit app users have gotten into the habit of just screenshotting everything they want to share. But now Reddit will remind them of another way to do it, with a pop-up notification inviting the user to share the link via the share sheet.

 Image: Reddit
Reddit will poke you for continuing to screenshot content and help you use its new sharing features instead.

Reddit’s focus on providing richer embeds also extends to content publishing platforms, like the one The Verge uses to bring you these articles, thanks to the company’s release of a new embedding toolbox (PDF). Content from other social platforms like Twitter or YouTube has often been easier to embed on websites than Reddit links, but that may be changing.

It’s particularly notable that the new tools and abilities also come just after Reddit’s recent extensive overhaul of how it allows outsiders to handle its data. In a post on Reddit, the developer of Apollo called the changes “not necessarily for the worse in all cases, provided Reddit is reasonable,” but said that, based on discussions with the company, Reddit doesn’t plan to offer free API access for commercial third-party apps in the future.

Samsung tells employees not to use AI tools like ChatGPT, citing security concerns

Samsung tells employees not to use AI tools like ChatGPT, citing security concerns
Samsung’s logo set in the middle of red, black, white, and yellow ovals.
Illustration by Alex Castro / The Verge

Samsung has banned the use of generative AI tools like ChatGPT on its internal networks and company-owned devices over fears that uploading sensitive information to these platforms represents a security risk, Bloomberg News reports. The rule was communicated to staff in a memo which describes it as a temporary restriction while Samsung works to “create a secure environment” to safely use generative AI tools.

The biggest risk factor is likely OpenAI’s chatbot ChatGPT, which has become hugely popular not only as a toy for entertainment but as a tool to help with serious work. People can use the system to summarize reports or write responses to emails — but that might mean inputting sensitive information, which OpenAI might have access to.

The privacy risks involved in using ChatGPT vary based on how a user accesses the service. If a company uses ChatGPT’s API, then conversations with the chatbot are not visible to OpenAI’s support team and are not used to train OpenAI’s models. However, this is not true of text entered into the general web interface using its default settings.

In an FAQ, OpenAI says it reviews conversations users have with ChatGPT to improve its systems and ensure the content complies with its policies and safety requirements. It advises users not to “share any sensitive information in your conversations” and notes that any conversations may also be used to train future versions of ChatGPT. The company recently rolled out a feature similar to a browser’s “incognito mode,” Reuters notes, which does not save chat histories and prevents them from being used for training.
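
To make that distinction concrete, here’s a minimal sketch of the API route described above, written against OpenAI’s Python library as it existed at the time; the API key and prompt are placeholders, not values from OpenAI or Samsung.

    # Minimal sketch: sending text via the ChatGPT API rather than the web UI.
    # Assumes the openai Python package (0.x era); the key is a placeholder.
    import openai

    openai.api_key = "sk-..."  # placeholder, supply a real key

    # Per the policy described above, API traffic was not used for training,
    # unlike conversations in the web interface under default settings.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Summarize this report: ..."}],
    )
    print(response.choices[0].message["content"])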

Samsung is evidently worried about employees playing around with the tool and not realizing that it’s a potential security risk.

“HQ is reviewing security measures to create a secure environment for safely using generative AI to enhance employees’ productivity and efficiency,” said the company’s internal memo, reports Bloomberg. “However, until these measures are prepared, we are temporarily restricting the use of generative AI.” As well as restricting the use of generative AI on company computers, phones, and tablets, Samsung is also asking staff not to upload sensitive business information via their personal machines.

“We ask that you diligently adhere to our security guideline and failure to do so may result in a breach or compromise of company information resulting in disciplinary action up to and including termination of employment,” Samsung’s memo said. The South Korean tech giant confirmed the authenticity of the memo to Bloomberg. A spokesperson did not immediately respond to The Verge’s request for comment.

The ban comes after Samsung discovered that some of its staff “leaked internal source code by uploading it to ChatGPT,” according to Bloomberg. There are concerns that uploading sensitive company information to external servers operated by AI providers risks exposing it publicly, and limits Samsung’s ability to delete it after the fact. News of Samsung’s policy comes a little over a month after ChatGPT experienced a bug that temporarily exposed some chat histories, and potentially payment information, to other users of the service.

Samsung’s policy means it joins a host of other companies and institutions that have placed limits on the use of generative AI tools, though the exact reasons for the restrictions vary. JPMorgan has restricted their use over compliance concerns, CNN reports, while other banks such as Bank of America, Citigroup, Deutsche Bank, Goldman Sachs, and Wells Fargo have also either banned or restricted the use of such tools. New York City schools have banned ChatGPT over cheating and misinformation fears, while data protection and child safety concerns were cited as the reasons for ChatGPT’s temporary ban in Italy.

Samsung reportedly has plans for its employees to use AI tools eventually, but it sounds like it’s waiting to develop in-house solutions. Bloomberg notes that it’s working on tools to help with translation, summarizing documents, and software development.

The generative AI restrictions do not apply to devices sold to consumers, like laptops or phones.

Marvel Snap is the most positive addiction I’ve ever had

Marvel Snap is the most positive addiction I’ve ever had

After breaking up with FIFA nearly two years ago, has Dominik Diamond found his new forever game in Marvel’s infectiously joyous mobile card-battler?

I don’t look cool. I have aged ungracefully. At 18 I was Morten Harket meets the Milky Bar Kid. Now I am Gary Oldman’s Dracula meets a potato. Yet I bonded with the coolest guy in my town this week. He and his mates invaded the bus en masse, all tumbling hair, skinny jeans and laughing eyes, fanning out around me, thinking it best not to bother the hobo in the ski jacket and ankle wellies.

I caught sight of the screen on Cool Guy’s phone. My heart flipped and I said the words that have united a million people around the world recently.

Continue reading...

The Interview: The Netflix Chief’s Plan to Get You to Binge Even More

The Interview: The Netflix Chief’s Plan to Get You to Binge Even More
Ted Sarandos, a chief executive of Netflix, on the future of entertain...