Friday, 5 May 2023

OpenAI’s regulatory troubles are only just beginning

ChatGPT isn’t out of the EU’s data privacy woods just yet. | Illustration: The Verge

The European Union’s fight with ChatGPT is a glimpse of what’s to come for AI services.

OpenAI managed to appease Italian data authorities and lift the country’s effective ban on ChatGPT last week, but its fight against European regulators is far from over.

Earlier this year, OpenAI’s popular and controversial ChatGPT chatbot hit a big legal snag: an effective ban in Italy. The Italian Data Protection Authority (GPDP) accused OpenAI of violating EU data protection rules, and the company agreed to restrict access to the service in Italy while it attempted to fix the problem. On April 28th, ChatGPT returned to the country, with OpenAI lightly addressing GPDP’s concerns without making major changes to its service — an apparent victory.

The GPDP has said it “welcomes” the changes ChatGPT made. However, the firm’s legal issues — and those of companies building similar chatbots — are likely just beginning. Regulators in several countries are investigating how these AI tools collect and produce information, citing a range of concerns from companies’ collection of unlicensed training data to chatbots’ tendency to spew misinformation. In the EU, they’re applying the General Data Protection Regulation (GDPR), one of the world’s strongest legal privacy frameworks, the effects of which will likely reach far outside Europe. Meanwhile, lawmakers in the bloc are putting together a law that will address AI specifically — likely ushering in a new era of regulation for systems like ChatGPT.

ChatGPT is one of the most popular examples of generative AI — a blanket term covering tools that produce text, image, video, and audio based on user prompts. The service reportedly became one of the fastest-growing consumer applications in history after reaching 100 million monthly active users just two months after launching in November 2022 (OpenAI has never confirmed these figures). People use it to translate text into different languages, write college essays, and generate code. But critics — including regulators — have highlighted ChatGPT’s unreliable output, confusing copyright issues, and murky data protection practices.

Italy was the first country to make a move. On March 31st, it highlighted four ways it believed OpenAI was breaking GDPR: allowing ChatGPT to provide inaccurate or misleading information, failing to notify users of its data collection practices, failing to meet any of the six possible legal justifications for processing personal data, and failing to adequately prevent children under 13 years old from using the service. It ordered OpenAI to immediately stop using personal information collected from Italian citizens in its training data for ChatGPT.

No other country has taken such action. But since March, at least three EU nations — Germany, France, and Spain — have launched their own investigations into ChatGPT. Meanwhile, across the Atlantic, Canada is evaluating privacy concerns under its Personal Information Protection and Electronic Documents Act, or PIPEDA. The European Data Protection Board (EDPB) has even established a dedicated task force to help coordinate investigations. And if these agencies demand changes from OpenAI, they could affect how the service runs for users across the globe.

Regulators’ concerns can be broadly split into two categories: where ChatGPT’s training data comes from and how OpenAI is delivering information to its users.

ChatGPT uses either OpenAI’s GPT-3.5 or GPT-4 large language models (LLMs), which are trained on vast quantities of human-produced text. OpenAI is cagey about exactly what training text is used but says it draws on “a variety of licensed, created, and publicly available data sources, which may include publicly available personal information.”

This potentially poses huge problems under GDPR. The law took effect in 2018 and covers every service that collects or processes data from EU citizens — no matter where the organization responsible is based. GDPR rules require companies to have explicit consent before collecting personal data, to have legal justification for why it’s being collected, and to be transparent about how it’s being used and stored.

European regulators claim that the secrecy around OpenAI’s training data means there’s no way to confirm if the personal information swept into it was initially given with user consent, and the GPDP specifically argued that OpenAI had “no legal basis” for collecting it in the first place. OpenAI and others have gotten away with little scrutiny so far, but this claim adds a big question mark to future data scraping efforts.

Then there’s GDPR’s “right to be forgotten,” which lets users demand that companies correct their personal information or remove it entirely. OpenAI preemptively updated its privacy policy to facilitate those requests, but there’s been debate about whether it’s technically possible to handle them, given how complex it can be to separate specific data once it’s churned into these large language models.

OpenAI also gathers information directly from users. Like any internet platform, it collects a range of standard user data (name, contact info, card details, and so on). But, more significantly, it records interactions users have with ChatGPT. As stated in an FAQ, this data can be reviewed by OpenAI’s employees and is used to train future versions of its model. Given the intimate questions people ask ChatGPT — using the bot as a therapist or a doctor — this means the company is scooping up all sorts of sensitive data.

At least some of this data may have been collected from minors: while OpenAI’s policy states that it “does not knowingly collect personal information from children under the age of 13,” there’s no strict age verification gate. That doesn’t play well with EU rules, which ban collecting data from people under 13 and (in some countries) require parental consent for minors under 16. On the output side, the GPDP claimed that ChatGPT’s lack of age filters exposes minors to “absolutely unsuitable responses with respect to their degree of development and self-awareness.”

OpenAI maintains broad latitude to use that data, which has worried some regulators, and storing it presents a security risk. Companies like Samsung and JPMorgan have banned employees from using generative AI tools over fears they’ll upload sensitive data. And, in fact, Italy announced its ban soon after ChatGPT suffered a serious data leak, exposing users’ chat history and email addresses.

ChatGPT’s propensity for providing false information may also pose a problem. GDPR regulations stipulate that all personal data must be accurate, something the GPDP highlighted in its announcement. Depending on how that’s defined, it could spell trouble for most AI text generators, which are prone to “hallucinations”: a cutesy industry term for factually incorrect or irrelevant responses to a query. This has already seen some real-world repercussions elsewhere, as a regional Australian mayor has threatened to sue OpenAI for defamation after ChatGPT falsely claimed he had served time in prison for bribery.

ChatGPT’s popularity and current dominance over the AI market make it a particularly attractive target, but there’s no reason why its competitors and collaborators, like Google with Bard or Microsoft with its OpenAI-powered Azure AI, won’t face scrutiny, too. Before ChatGPT, Italy banned the chatbot platform Replika for collecting information on minors — and so far, it’s stayed banned.

While GDPR is a powerful set of laws, it wasn’t made to address AI-specific issues. Rules that do, however, may be on the horizon.

In 2021, the EU submitted its first draft of the Artificial Intelligence Act (AIA), legislation that will work alongside GDPR. The act governs AI tools according to their perceived risk, from “minimal” (things like spam filters) to “high” (AI tools for law enforcement or education) or “unacceptable” and therefore banned (like a social credit system). After the explosion of large language models like ChatGPT last year, lawmakers are now racing to add rules for “foundation models” and “General Purpose AI Systems (GPAIs)” — two terms for large-scale AI systems that include LLMs — and potentially classing them as “high risk” services.

The AIA’s provisions go beyond data protection. A recently proposed amendment would force companies to disclose any copyrighted material used to develop generative AI tools. That could expose once-secret datasets and leave more companies vulnerable to infringement lawsuits, which are already hitting some services.

But passing it may take a while. EU lawmakers reached a provisional AI Act deal on April 27th. A committee will vote on the draft on May 11th, and the final proposal is expected by mid-June. Then, the European Council, Parliament, and Commission will have to resolve any remaining disputes before implementing the law. If everything goes smoothly, it could be adopted by the second half of 2024, a little behind the official target of passing it before Europe’s May 2024 elections.

For now, Italy and OpenAI’s spat offers an early look at how regulators and AI companies might negotiate. The GPDP offered to lift its ban if OpenAI met several proposed resolutions by April 30th. That included informing users how ChatGPT stores and processes their data, asking for explicit consent to use said data, facilitating requests to correct or remove false personal information generated by ChatGPT, and requiring Italian users to confirm they’re over 18 when registering for an account. OpenAI didn’t hit all of those stipulations, but it met enough to appease Italian regulators and get access to ChatGPT in Italy restored.

OpenAI still has targets to meet. It has until September 30th to create a harder age-gate to keep out minors under 13 and require parental consent for older underage teens. If it fails, it could see itself blocked again. But it’s provided an example of what Europe considers acceptable behavior for an AI company — at least until new laws are on the books.

‘Like Icarus – now everyone is burnt’: how Vice and BuzzFeed fell to earth

Era of lofty valuations for upstart youth media appears to be over, with Vice ‘heading for bankruptcy’ and BuzzFeed News shutting down

Just over a decade ago Rupert Murdoch endorsed what appeared to be a glittering future for Vice, firing off a tweet after an impromptu visit to the media upstart’s Brooklyn offices and a drink in a nearby bar with the outspoken co-founder Shane Smith.

“Who’s heard of VICE media?” the Australian media mogul posted from his car on the way home from the 2012 visit, which resulted in a $70m (£55m) investment. “Wild, interesting effort to interest millennials who don’t read or watch established media. Global success.”

Continue reading...

‘Ron DeSoros’? Conspiracy Theorists Target Trump’s Rival.

Ron DeSantis, a likely contender for the Republican presidential nomination, must court far-right voters who consider him a tool of the Deep State.

Thursday, 4 May 2023

White House Unveils Initiatives to Reduce Risks of AI

Vice President Kamala Harris also plans to meet with the chief executives of tech companies that are developing A.I. later on Thursday.

White House rolls out plan to promote ethical AI

President Biden and VP Harris deliver remarks on National Small Business Week in the Rose Garden. | Photo by Chip Somodevilla/Getty Images

The White House announced more funding and policy guidance for developing responsible artificial intelligence ahead of a Biden administration meeting with top industry executives.

The actions include a $140 million investment from the National Science Foundation to launch seven new National AI Research (NAIR) Institutes, increasing the total number of AI-dedicated facilities to 25 nationwide. Google, Microsoft, Nvidia, OpenAI and other companies have also agreed to allow their language models to be publicly evaluated during this year’s Def Con. The Office of Management and Budget (OMB) also said that it would be publishing draft rules this summer for how the federal government should use AI technology.

“These steps build on the Administration’s strong record of leadership to ensure technology improves the lives of the American people, and break new ground in the federal government’s ongoing effort to advance a cohesive and comprehensive approach to AI-related risks and opportunities,” the administration’s press release said. It does not specify the details of what the Def Con evaluation will include, beyond saying that it will “allow these models to be evaluated thoroughly by thousands of community partners and AI experts.”

The announcement comes ahead of a Thursday White House meeting, led by Vice President Kamala Harris, with the chief executives of Alphabet, Anthropic, Microsoft, and OpenAI to discuss AI’s potential risks. “The meeting is part of a broader, ongoing effort to engage with advocates, companies, researchers, civil rights organizations, not-for-profit organizations, communities, international partners, and others on critical AI issues,” the Thursday release said.

Last October, the Biden administration made its first strides to regulate AI by releasing a blueprint for an “AI Bill of Rights.” The project was intended to serve as a framework for use of the technology by both the public and private sectors, encouraging anti-discrimination and privacy protections.

Federal regulators and Congress have announced a fresh focus on AI over the last few weeks. In April, the Federal Trade Commission, Consumer Financial Protection Bureau, Justice Department, and Equal Employment Opportunity Commission issued a joint warning arguing that they already had authority to go after companies whose AI products harm users.

Senate Majority Leader Chuck Schumer (D-NY) and other lawmakers also reportedly met with Elon Musk to discuss AI regulation last week.

What really happened when Elon Musk took over Twitter

Why has the social network been in total chaos since the world’s richest man took control? Flipping the Bird investigates. Plus: five of the best podcasts about planet Earth

If the prospect of taking an oath of allegiance to an unelected billionaire doesn’t excite you, television is going to be a pretty tedious place for the next few days. Fortunately, there’s never been a better time to retreat into the world of podcasts. This week, the Guardian released a five-part series looking into the murky finances of King Charles. From its examination of his family’s past exploitation of enslaved people, through to the dubious line between his personal wealth and that which is supposedly held for our nation, it’s a welcome look at the kinds of issues lacking from the national conversation.

Also excellently skewering the powers that be is Pod Save the UK – a new homegrown version of the popular US take on politics, hosted by Nish Kumar and Guardian journalist Coco Khan. There’s also a look at Elon Musk’s, ahem, maverick leadership of Twitter, and an examination of a two-decade-long battle over the theft of a Banksy. Plenty of brilliant listens, then, to distract you from having to watch the king’s big party ...

Alexi Duggins
Deputy TV editor

Continue reading...

Google rolls out passkey technology in ‘beginning of the end’ for passwords

Apple and Microsoft also collaborated on the technology which allows authentication with fingerprint ID, facial ID or a pin

Google is moving one step closer to ditching passwords, rolling out its passkey technology to Google accounts from Thursday.

The passkey is designed to replace passwords entirely by allowing authentication with fingerprint ID, facial ID or pin on the phone or device you use for authentication.

Continue reading...

The UK’s tortured attempt to remake the internet, explained

Illustration by Hugo Herrera for The Verge

The bill aims to make the country ‘the safest place in the world to be online’ but has been mired in multiple delays and criticism that it’s grown too large and unwieldy to please anyone.

At some point this year, the UK’s long-delayed Online Safety Bill is finally expected to become law. In the government’s words, the legislation is an attempt to make the UK “the safest place in the world to be online” by introducing a range of obligations for how large tech firms should design, operate, and moderate their platforms.

As any self-respecting Verge reader knows, content moderation is never simple. It’s difficult for platforms, difficult for regulators, and difficult for lawmakers crafting the rules in the first place. But even by the standards of internet legislation, the Online Safety Bill has had a rocky passage. It’s been developed over years during a particularly turbulent era in British politics, changing dramatically from year to year. And as an example of just how controversial the bill has become, some of the world’s biggest online organizations, from WhatsApp to Wikipedia, are preemptively refusing to comply with its potential requirements.

So if you’ve tuned out the Online Safety Bill over the past few years — and let’s be honest, a lot of us have — it’s time to brush up. Here’s where the bill came from, how it’s changed, and why lawmakers might be about to finally put it on the books.

So let’s start from the beginning. What is the Online Safety Bill?

The UK government’s elevator pitch is that the bill is fundamentally an attempt to make the internet safer, particularly for children. It attempts to crack down on illegal content like child sexual abuse material (CSAM) and to minimize the possibility that kids might encounter harmful and age-inappropriate content, including online harassment as well as content that glorifies suicide, self-harm, and eating disorders.

But it’s difficult to TL;DR the Online Safety Bill at this point, precisely because it’s become so big and sprawling. On top of these broad strokes, the bill has a host of other rules. It requires online platforms to let people filter out objectionable content. It introduces age verification for porn sites. It criminalizes fraudulent ads. It requires sites to consistently enforce their terms of service. And if companies don’t comply, they could be fined up to £18 million (around $22.5 million) or 10 percent of global revenue, see their services blocked, and even see their executives jailed.

In short, the Online Safety Bill has become a catchall for UK internet regulation, mutating every time a new prime minister or digital minister has taken up the cause.

How many prime ministers are we talking about here?

So far? Four.

Wait, how long has this bill been in the works for?

The Online Safety Bill started with a document called the “Online Harms White Paper,” which was unveiled way back in April 2019 by then-digital minister Jeremy Wright. The death of Molly Russell by suicide in 2017 brought into sharp relief the dangers of children being able to access content relating to self-harm and suicide online, and other events like the Cambridge Analytica scandal had created the political impetus to do something to regulate big online platforms.

The idea was to introduce a so-called “duty of care” for big platforms like Facebook — similar to how British law asks employers to look after the safety of their employees. This meant companies would have to perform risk assessments and proactively address potential harms rather than play whack-a-mole with problems as they crop up. As Carnegie UK associate Maeve Walsh puts it, “Interventions could take place in the way accounts are created, the incentives given to content creators, in the way content is spread as well as in the tools made available to users before we got to content take down.”

The white paper laid out fines and the potential to block websites that don’t comply. At that point, it amounted to some of the broadest and potentially strictest online regulations to have been proposed globally.

What was the response like at the time?

Obviously, there was a healthy amount of skepticism (Wired’s take was simply titled “All that’s wrong with the UK’s crusade against online harms”), but there were hints of cautious optimism as well. Mozilla, for example, said the overall approach had “promising potential,” although it warned about several issues that would need to be addressed to avoid infringing on people’s rights.

If the British government was on to such a winner, why hasn’t it passed this bill four years later?

Have you paid attention to British politics at all in the past four years? The original white paper was introduced four prime ministers and five digital ministers ago, and it seems to have been forced into the back seat by more urgent matters like leaving the European Union or handling the covid-19 pandemic.

But as it’s passed through all these hands, the bill has ballooned in size — picking up new provisions and sometimes dropping them when they’re too controversial. In 2021, when the first draft of the bill was presented to Parliament, it was “just” 145 pages long, but by this year, it had almost doubled to 262 pages.

Where did all those extra pages come from?

Given the bill’s broad ambitions for making online life safer in general, many new elements were added by the time it returned to Parliament in March 2022. In no particular order, these included:

  • Age checks for porn sites
  • Measures to clamp down on “anonymous trolls” by requiring that services give the option for users to verify their identity
  • Criminalizing cyberflashing (aka the sending of unsolicited nudes via social media or dating apps)
  • Cracking down on scam ads

Over time, the bill’s definition of “safety” has started to look pretty vague. A provision in the May 2021 draft forbade companies “from discriminating against particular political viewpoints and will need to apply protections equally to a range of political opinions, no matter their affiliation,” echoing now familiar fears that conservative voices are unfairly “censored” online. Bloomberg called this an “anti-censorship” clause at the time, and it continues to be present in the 2023 version of the bill.

And last November, ministers were promising to add even more offenses to the bill, including downblousing and the creation of nonconsensual deepfake pornography.

Hold up. Why does this pornography age check sound so familiar?

The Conservative Party has been trying to make it happen since well before the Online Safety Bill. Age verification was a planned part of the Digital Economy Bill in 2016 and then was supposed to happen in 2019 before being delayed and abandoned in favor of rolling the requirements into the Online Safety Bill.

The problem is, it’s very difficult to come up with an age verification system that can’t be easily circumvented in minutes and that doesn’t create the risk that someone’s most intimate web browsing moments could be linked to their real-life identity — notwithstanding a plan to let users buy a “porn pass” from a local shop.

And it’s not clear how the Online Safety Bill will overcome this challenge. An explainer by The Guardian notes that Ofcom will issue codes of practice on how to determine users’ ages, with possible solutions involving having age verification companies check official IDs or bank statements.

Regardless of the difficulties, the government is pushing ahead with the age verification requirements, which is more than can be said for its proposed rules around “legal but harmful” content.

And what exactly were these “legal but harmful” rules?

Well, they were one of the most controversial additions to the entire bill — so much so that they’ve been (at least partially) walked back.

Originally, the government said it should officially designate certain content as harmful to adults but not necessarily illegal — things like bullying or content relating to eating disorders. (It’s the less catchy cousin of “lawful but awful.”) Companies wouldn’t necessarily have to remove this content, but they’d have to do risk assessments about the harm it might pose and set out clearly in their terms of service how they plan to tackle it.

But critics were wary of letting the state define what counts as “harmful,” the fear being that ministers would have the power to censor what people could say online. At a certain point, if the government is formally pushing companies to police legal speech, it’s debatable how “legal” that speech still is.

This criticism had an effect. The “legal but harmful” provisions for adults were removed from the bill in late 2022 — and so was a “harmful communications” offense that covered sending messages that caused “serious distress,” something critics feared could similarly criminalize offensive but legal speech.

Instead, the government introduced a “triple shield” covering content meant for adults. The first “shield” rule says platforms must remove illegal content like fraud or death threats. The second says anything that breaches a website’s terms of service should be moderated. And the third says adult users should be offered filters to control the content they see.

The thinking here is that most websites already restrict “harmful communications” and “legal but harmful” content, so if they’re told to apply their terms of service consistently, the problem (theoretically) takes care of itself. Conversely, platforms are actively prohibited from restricting content that doesn’t breach the terms of service or break the law. Meanwhile, the filters are supposed to let adults decide whether to block objectionable content like racism, antisemitism, or misogyny. The bill also tells sites to let people block unverified users — aka those pesky “anonymous trolls.”

None of this impacts the rules aimed specifically at children — in those cases, platforms will still have a duty to mitigate the impact of legal but harmful content.

I’m glad that the government addressed those problems, leaving a completely uncontroversial bill in its wake.

Wait, sorry. We’re just getting to the part where the UK might lose encrypted messaging apps.

Excuse me?

Remember WhatsApp? After the Online Safety Bill was introduced, it took issue with a section that asks online tech companies to use “accredited technology” to identify child sexual abuse content “whether communicated publicly or privately.” Since personal WhatsApp messages are end-to-end encrypted, not even the company itself can see their contents. Asking it to be able to identify CSAM, it says, would inevitably compromise this end-to-end encryption.

WhatsApp is owned by Meta, which is persona non grata among regulators these days, but it’s not the only encrypted messaging service whose operators are concerned. WhatsApp head Will Cathcart wrote an open letter that was co-signed by the heads of six other messaging apps, including Signal. “If implemented as written, [this bill] could empower Ofcom to try to force the proactive scanning of private messages on end-to-end encrypted communication services - nullifying the purpose of end-to-end encryption as a result and compromising the privacy of all users,” says the letter. “In short, the bill poses an unprecedented threat to the privacy, safety and security of every UK citizen and the people with whom they communicate.”

The consensus among legal and cybersecurity experts is that the only way to monitor for CSAM while maintaining encryption is to use some kind of client-side scanning, an approach Apple announced in 2021 for image uploads to iCloud. But the company ditched the plan the following year amid widespread criticism from privacy advocates.
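
To make the idea concrete, here is a deliberately simplified sketch of what “client-side scanning” means in principle: the device fingerprints content and checks it against a list of known fingerprints before the content is encrypted and sent. This is an illustrative assumption rather than Apple’s (or anyone’s) actual design; real proposals rely on perceptual image hashes and cryptographic matching protocols, not the plain SHA-256 lookup shown here.

```python
# Conceptual sketch of client-side scanning (illustrative only).
# The device hashes content and checks it against a distributed list of
# known fingerprints *before* encryption, which is why critics argue the
# approach undercuts the guarantees of end-to-end encryption.
import hashlib

# Hypothetical fingerprint list pushed to the client by a provider.
KNOWN_FINGERPRINTS = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}


def fingerprint(data: bytes) -> str:
    """Return a hex digest used as the content's fingerprint."""
    return hashlib.sha256(data).hexdigest()


def allowed_to_send(data: bytes) -> bool:
    """True if the content is not on the fingerprint list and may be encrypted and sent."""
    return fingerprint(data) not in KNOWN_FINGERPRINTS


if __name__ == "__main__":
    print(allowed_to_send(b"an ordinary message"))  # True for unlisted content
```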

Organizations such as the Internet Society say that such scanning risks creating vulnerabilities for criminals and other attackers to exploit and that it could eventually lead to the monitoring of other kinds of speech. The government disagrees and says the bill “does not represent a ban on end-to-end encryption, nor will it require services to weaken encryption.” But without an existing model for how such monitoring can coexist with end-to-end encryption, it’s hard to see how the law could satisfy critics.

The UK government already has the power to demand that services remove encryption thanks to a 2016 piece of legislation called the Investigatory Powers Act. But The Guardian notes that WhatsApp has never received a request to do so. At least one commentator thinks the same could happen with the Online Safety Bill, effectively giving Ofcom a radical new power that it may never choose to wield.

But that hasn’t exactly satisfied WhatsApp, which has suggested it would rather leave the UK than comply with the bill.

Okay, so messaging apps aren’t a fan. What do other companies and campaigners have to say about the bill?

Privacy activists have also been fiercely critical of what they see as an attack on end-to-end encryption. The Electronic Frontier Foundation, Big Brother Watch, and Article 19 published an analysis earlier this year that said the only way to identify and remove child sexual exploitation and abuse material would be to monitor all private communications, undermining users’ privacy rights and freedom of expression. Similar objections were raised in another open letter last year signed by 70 organizations, cybersecurity experts, and elected officials. The Electronic Frontier Foundation has called the bill “a blueprint for repression around the world.”

Tech giants like Google and Meta have also raised numerous concerns with the bill. Google says there are practical challenges to distinguishing between illegal and legal content at scale and that this could lead to the over-removal of legal content. Meta suggests that focusing on having users verify their identities risks excluding anyone who doesn’t wish to share their identity from participating in online conversations.

Even beyond that, there are more fundamental concerns about the bill. Matthew Lesh, head of public policy at the Institute of Economic Affairs, notes that there’s simply a massive disparity between what is acceptable for children to encounter online and what’s acceptable for adults under the bill. So you either risk the privacy and data protection concerns of asking all users to verify their age or you moderate to a children’s standard by default for everyone.

That could put even a relatively safe and educational service like Wikipedia under pressure to ask for the ages of its users, which the Wikimedia Foundation’s Rebecca MacKinnon says would “violate [its] commitment to collect minimal data about readers and contributors.”

“The Wikimedia Foundation will not be verifying the age of UK readers or contributors,” MacKinnon wrote.

Okay, that’s a lot of criticism. So who’s in favor of this bill?

One group that’s been broadly supportive of the bill is children’s charities. The National Society for the Prevention of Cruelty to Children (NSPCC), for example, has called the Online Safety Bill “an urgent and necessary child protection measure” to tackle grooming and child sexual abuse online. It calls the legislation “workable and well-designed” and likes that it aims to “tackle the drivers of online harms rather than seek to remove individual pieces of content.” Barnardo’s, another children’s charity, has been supportive of the bill’s introduction of age verification for pornography sites.

Ian Russell, the father of the late Molly Russell, has called the Online Safety Bill “a really important piece of legislation,” though he’s pushed for it to go further when it comes to criminal sanctions for executives whose products are found to have endangered children’s well-being.

“I don’t think that without effective regulation the tech industry is going to put its house in order, to prevent tragedies like Molly’s from happening again,” Russell said. This sentiment appears to be shared by increasing numbers of lawmakers internationally, such as those in California who passed the Age-Appropriate Design Code Act in August last year.

Where’s the bill at these days?

As of this writing, the bill is currently working its way through the UK’s upper chamber, the House of Lords, after which it’ll be passed back to the House of Commons to consider any amendments that have been made. The government’s hope is to pass it at some point this summer.

Even after the bill passes, however, there will still be decisions to make about how it works in practice. Ofcom will need to decide what services pose a high enough risk to be covered by the bill’s strictest rules and develop codes of practice for platforms to abide by, including tackling thorny issues like how to introduce age verification for pornography sites. Only after the regulator completes this consultation process will companies know when and how to fully comply with the bill, and Ofcom has said it expects this to take months.

The Online Safety Bill has had a difficult journey through Parliament, and it’s likely to be months before we know how its most controversial aspects are going to work (or not) in practice.

Wednesday, 3 May 2023

Bernie Sanders, Elon Musk and White House seeking my help, says ‘godfather of AI’

Dr Geoffrey Hinton has been inundated with requests to talk after quitting Google to warn about risk of digital intelligence

The man often touted as the godfather of artificial intelligence will be responding to requests for help from Bernie Sanders, Elon Musk and the White House, he says, just days after quitting Google to warn the world about the risk of digital intelligence.

Dr Geoffrey Hinton, 75, won computer science’s highest honour, the Turing award, in 2018 for his work on “deep learning”, along with Meta’s Yann LeCun and the University of Montreal’s Yoshua Bengio.

Continue reading...

Microsoft is forcing Outlook and Teams to open links in Edge and IT admins are angry

Illustration: The Verge

Microsoft Edge is a good browser, but for some reason Microsoft keeps trying to shove it down everyone’s throat and make it more difficult to use rivals like Chrome or Firefox. Microsoft has now started notifying IT admins that it will force Outlook and Teams to ignore the default web browser on Windows and open links in Microsoft Edge instead.

Reddit users have posted messages from the Microsoft 365 admin center that reveal how Microsoft is going to roll out this change. “Web links from Azure Active Directory (AAD) accounts and Microsoft (MSA) accounts in the Outlook for Windows app will open in Microsoft Edge in a single view showing the opened link side-by-side with the email it came from,” reads a message to IT admins from Microsoft.

While this won’t affect the default browser setting in Windows, it’s yet another part of Microsoft 365 and Windows that totally ignores your default browser choice for links. Microsoft already does this with the Widgets system in Windows 11 and even the search experience, where you’ll be forced into Edge if you click a link even if you have another browser set as default.

Microsoft’s message to IT admins. | Image: paulanerspezi (Reddit)

IT admins aren’t happy, with many complaining in various threads on Reddit, as spotted by Neowin. If Outlook wasn’t enough, Microsoft says “a similar experience will arrive in Teams” soon, with web links from chats opening in Microsoft Edge side by side with Teams chats. Microsoft seems to be rolling this out gradually across Microsoft 365 users, and IT admins get 30 days’ notice before it rolls out to Outlook.

Microsoft 365 Enterprise IT admins will be able to alter the policy, but those on Microsoft 365 for business will have to manage this change on individual machines. That’s going to leave a lot of small businesses with the unnecessary headache of working out what has changed. Imagine being less tech savvy, clicking a link in Outlook, and thinking you’ve lost all your favorites because it didn’t open in your usual browser.

The notifications to IT admins come just weeks after Microsoft promised significant changes to the way Windows manages which apps open certain files or links by default. At the time Microsoft said it believed “we have a responsibility to ensure user choices are respected” and that it’s “important that we lead by example with our own first party Microsoft products.” Forcing people into Microsoft Edge and ignoring default browsers is anything but respecting user choice, and it’s gross that Microsoft continues to abuse this.

Microsoft tested a similar change to the default Windows 10 Mail app in 2018, in an attempt to force people into Edge for email links. That never came to pass, thanks to a backlash from Windows 10 testers. A similar change in 2020 saw Microsoft try to force Chrome’s default search engine to Bing using the Office 365 installer, and IT admins weren’t happy then either.

Windows 11 also launched with a messy and cumbersome process to set default apps, which was a step back from Windows 10 and drew concern from competing browser makers like Mozilla, Opera, and Vivaldi. A Windows 11 update has improved that process, but it’s clear Microsoft is still interested in finding ways to circumvent default browser choices.

Microsoft has already been using aggressive prompts to stop you from using Chrome and even added a giant Bing button to Edge in an effort to push people to use its search engine. Microsoft has also faced criticism over adding buy now, pay later financing options into Edge and its plan to build a crypto wallet into Edge. Microsoft also added a prompt to Edge Dev recently that appears when you try to use Google’s rival Bard AI chatbot. This relentless push of Edge, including through Windows Update, could all backfire for Microsoft and end up alienating Edge users instead of tempting them over from Chrome.

Pushing Buttons: Why the Microsoft-Activision Blizzard merger is a fight over the future of games

In this week’s newsletter: A UK regulator blocked a $70bn acquisition last week not because of any threat now, but over worries about a future monopoly

As is now tradition, an enormous piece of gaming news landed right after last week’s Pushing Buttons went out to readers: Microsoft’s huge $70bn purchase of Call of Duty, World of Warcraft and Candy Crush owner Activision Blizzard, a deal that has been in the works since January last year, was unexpectedly blocked by a UK regulator.

This might not seem interesting to anyone except those involved with the business of video games, or people with an inexplicable interest in the actions of regulatory authorities in Britain, but wait! It is quite interesting, because the response from these two giant companies has been entertainingly petty.

Continue reading...

‘Ready for some help?’: how a controversial technology firm courted Portland police

SoundThinking, a gunshot detection company, worked with top police officials to secure a city contract, according to emails obtained by the Guardian

On 5 February 2022, police in Portland, Oregon, sent out a bulletin pleading with the public for information about a recent homicide case. Police had found Corey M Eady injured with several gunshot wounds, and the 44-year-old had died shortly after being taken to a hospital. “This is the 11th homicide in Portland this year,” the bulletin read. “All 11 have been by gunfire.”

The next day, Portland police captain James Crooker got a text. “Ready for some help?”

ShotSpotter marketed itself aggressively to Portland police by tapping its vast network of law enforcement partners and supporters – some of whom now work at the company – to vouch for or advocate for the service.

The company backed up claims that it is a nonintrusive and effective public safety tool with academic studies, some of which it funded or helped set up.

Once Portland police was on board, the company worked closely with Crooker, the Portland police captain, to win over a volunteer-led police oversight group, Fitcog, which recommended the use of ShotSpotter devices to the mayor, Ted Wheeler.

Greene, the company’s representative, also helped Crooker prepare for media interviews and even offered the company’s services to help the city apply for federal grants to fund a contract.

Continue reading...

Now you can post Horizon Worlds photos to your Instagram Story

Illustration: Nick Barclay / The Verge

Meta’s VR social platform is getting a new feature to help you show off what a fun time you’re having in the so-called “metaverse” with all your Instagram and Facebook followers. With its v108 update, Horizon Worlds can now share photos and videos directly to your story on either platform. Meta is also using the update to test a couple of tweaks to the service’s safe zone feature.

It’s previously been possible to share content from Horizon Worlds to Reels — Instagram and Facebook’s TikTok-style vertical video feed feature — as reported by TechCrunch last October. But now you also have the option of posting Horizon Worlds’ Wii-level graphics to Meta’s stories, where they can sit alongside real-world photographs of holidays, meals, and days out.

Meta says the new story-sharing option is starting as a limited test for around 50 percent of Horizon Worlds’ users, but that it should get a wider rollout throughout the course of this month. The sharing feature can be accessed from the Media Gallery or Horizon Camera.

Alongside it, Meta is also testing a tweaked version of the Horizon Worlds’ safe zone, a feature designed to let users step away from the VR social space when things get heated and hectic. As part of the test, the safe zone will show more details of the world around the user, and offer fuller access to user profiles and other menus.

I suspect this won’t be the last time Meta tries to make its various services more interoperable, especially if it can succeed in creating any degree of FOMO among Instagram users about Horizon Worlds. But based on our experience trying out its social VR space when we reviewed the Quest Pro headset last year, there are more fundamental issues with Horizon Worlds that need to be addressed first.

My Weekend With an Emotional Support A.I. Companion

Pi, an A.I. tool that debuted this week, is a twist on the new wave of chatbots: It assists people with their wellness and emotions.

Tuesday, 2 May 2023

Apple and Google submit plan to fight AirTag stalking

Companies join forces to tackle unwanted tracking via Apple’s gadget and similar devices such as Tile

Apple and Google are teaming up to thwart unwanted tracking through AirTags and similar gadgets.

The two companies behind the iPhone and the software that powers Android phones on Tuesday submitted a proposal to set standards for combating secret surveillance via Bluetooth devices that were created to help people find lost keys, keep tabs on luggage, or locate other things that have a tendency to be misplaced.

Continue reading...

Reddit reworks sharing from its apps and the look of link embeds for better social reach

Illustration by Alex Castro / The Verge

Reddit’s blog post today admits it “didn’t make it easy” to share content like cool conversations and memes to other social platforms — but now it’s finally doing something about it. Reddit is enhancing link embeds for messaging apps and adding more sharing functions like sharing directly to Instagram stories right from Reddit’s app.

If you’ve ever tried to share a Reddit link from the official app on, for instance, iMessage on an iPhone, you might recall it not having a particularly content-rich preview. Now the company is enhancing it with a more robust visual preview of the content, its subreddit name, and the number of upvotes and comments it has.

The new preview for Reddit shares in iMessage. | Image: Reddit

Third-party Reddit client apps have, for years, built out better ways to share content — like how Apollo can create a threaded screenshot displaying however many replies you’d like — but these don’t include a link for the recipient that encourages them to visit Reddit.

The new Reddit app share sheet can toss content directly into an Instagram story as a still image. | Image: Reddit

Reddit’s newfound prioritization of its own app and platform comes as the world looks for the next best app to use as an information channel; these features are built for Reddit’s official mobile app on iOS and Android.

Even official Reddit app users have gotten into the habit of just screenshotting everything they want to share. But now Reddit will remind them of another way to do it: with a pop-up notification inviting the user to share the link via the share sheet.

Reddit will poke you for continuing to screenshot content and help you use its new sharing features instead. | Image: Reddit

Reddit’s focus on providing richer embeds also extends to content publishing platforms, like the one The Verge uses to bring you these articles, thanks to the company’s release of a new embedding toolbox (PDF). Content from other social platforms like Twitter or YouTube has often been easier to embed on websites than Reddit links, but that may be changing.

It’s particularly notable that the new tools and abilities also come just after Reddit’s recent extensive overhaul of how it allows outsiders to handle its data. In a post on Reddit, the developer of Apollo called the changes “not necessarily for the worse in all cases, provided Reddit is reasonable,” but said that based on discussions with the company, it doesn’t plan to offer free API access for commercial third-party apps in the future.

Samsung tells employees not to use AI tools like ChatGPT, citing security concerns

Illustration by Alex Castro / The Verge

Samsung has banned the use of generative AI tools like ChatGPT on its internal networks and company-owned devices over fears that uploading sensitive information to these platforms represents a security risk, Bloomberg News reports. The rule was communicated to staff in a memo which describes it as a temporary restriction while Samsung works to “create a secure environment” to safely use generative AI tools.

The biggest risk factor is likely OpenAI’s chatbot ChatGPT, which has become hugely popular not only as a toy for entertainment but also as a tool to help with serious work. People can use the system to summarize reports or write responses to emails — but that might mean inputting sensitive information, which OpenAI might have access to.

The privacy risks involved in using ChatGPT vary based on how a user accesses the service. If a company is using ChatGPT’s API, then conversations with the chatbot are not visible to OpenAI’s support team and are not used to train the company’s models. However, this is not true of text entered into the general web interface with its default settings.

In an FAQ, the company says it reviews conversations users have with ChatGPT to improve its systems and to ensure the content complies with its policies and safety requirements. It advises users to not “share any sensitive information in your conversations” and notes that any conversations may also be used to train future versions of ChatGPT. The company recently rolled out a feature similar to a browser’s “incognito mode,” Reuters notes, which does not save chat histories and prevents them from being used for training.
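
For readers wondering what the “API route” described above looks like in practice, here is a minimal sketch using the openai Python package as it existed in early 2023. The model name, prompt, and environment variable are placeholder assumptions; the privacy distinction itself comes from OpenAI’s stated policies, not from anything in the code.

```python
# Minimal sketch of the API route (as opposed to the ChatGPT web interface),
# using the 2023-era openai Python package (v0.x). Per OpenAI's stated policy
# at the time, data sent via the API was not used for model training by default.
# The model name, prompt, and env var below are placeholders for illustration.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # keep credentials out of source

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You summarize internal reports."},
        {"role": "user", "content": "Summarize: <non-sensitive sample text>"},
    ],
)

print(response["choices"][0]["message"]["content"])
```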

Samsung is evidently worried about employees playing around with the tool and not realizing that it’s a potential security risk.

“HQ is reviewing security measures to create a secure environment for safely using generative AI to enhance employees’ productivity and efficiency,” said the company’s internal memo, reports Bloomberg. “However, until these measures are prepared, we are temporarily restricting the use of generative AI.” As well as restricting the use of generative AI on company computers, phones, and tablets, Samsung is also asking staff not to upload sensitive business information via their personal machines.

“We ask that you diligently adhere to our security guideline and failure to do so may result in a breach or compromise of company information resulting in disciplinary action up to and including termination of employment,” Samsung’s memo said. The South Korean tech giant confirmed the authenticity of the memo to Bloomberg. A spokesperson did not immediately respond to The Verge’s request for comment.

The ban comes after Samsung discovered that some of its staff “leaked internal source code by uploading it to ChatGPT,” according to Bloomberg. There are concerns that uploading sensitive company information to external servers operated by AI providers risks exposing it publicly, and limits Samsung’s ability to delete it after the fact. News of Samsung’s policy comes a little over a month after ChatGPT experienced a bug that temporarily exposed some chat histories, and potentially payment information, to other users of the service.

Samsung’s policy means it joins a host of other companies and institutions to have placed limits on the use of generative AI tools, though the exact reasons for the restrictions vary. JPMorgan has restricted their use over compliance concerns, CNN reports, while other banks such as Bank of America, Citigroup, Deutsche Bank, Goldman Sachs, and Wells Fargo have also either banned or restricted the use of such tools. New York City schools have banned ChatGPT over cheating and misinformation fears, while data protection and child safety concerns were cited as the reason for ChatGPT’s temporary ban in Italy.

Samsung reportedly has plans for its employees to use AI tools eventually, but it sounds like it’s waiting to develop in-house solutions. Bloomberg notes that it’s working on tools to help with translation, summarizing documents, and software development.

Any generative AI restrictions do not apply to devices sold to consumers like laptops or phones.

Marvel Snap is the most positive addiction I’ve ever had

After breaking up with FIFA nearly two years ago, has Dominik Diamond found his new forever game in Marvel’s infectiously joyous mobile card-battler?

I don’t look cool. I have aged ungracefully. At 18 I was Morten Harket meets the Milky Bar Kid. Now I am Gary Oldman’s Dracula meets a potato. Yet I bonded with the coolest guy in my town this week. He and his mates invaded the bus en masse, all tumbling hair, skinny jeans and laughing eyes, fanning out around me, thinking it best not to bother the hobo in the ski jacket and ankle wellies.

I caught sight of the screen on Cool Guy’s phone. My heart flipped and I said the words that have united a million people around the world recently.

Continue reading...

Google Promised to Defund Climate Lies, but the Ads Keep Coming

Google said in 2021 that it would stop running ads alongside videos and other content that denied the existence and causes of climate change.

Robot dogs deployed in New York building collapse revive surveillance fears

Robots praised by New York mayor for searching ruins of a parking garage collapse, but critics fear robots will collect private data

“Digidog is out of the pound,” Eric Adams declared in April. The New York City mayor also insisted the successful use of the controversial robot in response to a recent building collapse should convince critics such devices can improve safety in the city.

Adams commended first responders’ use of the four-legged robot in the ruins of a parking garage collapse last week in Manhattan, in which one person was killed and five injured.

Continue reading...

Monday, 1 May 2023

A.I. Is Getting Better at Mind-Reading

In a recent experiment, researchers used large language models to translate brain activity into words.

The Super Mario Bros. Movie has made a cool $1 billion

Bowser gazing at a Super Star. | Image: Universal

The writing’s been on the wall basically from the moment The Super Mario Bros. Movie first hit theaters, but after weeks of sitting comfortably at the top of the domestic box office, Universal, Illumination, and Nintendo’s big movie collaboration has officially made $1 billion.

It’s been less than a full month since co-directors Aaron Horvath and Michael Jelenic’s The Super Mario Bros. Movie premiered, but in those few short weeks, the project’s already raked in a cool $490 million domestically and $532 million internationally, making it the fifth pandemic-era movie to cross the $1 billion mark. Given that the film only just opened in markets including South Korea and Japan within the past few days, it’s all but assured to make quite a bit more money before its theatrical run comes to an end.

Having become the most financially successful video game movie ever, The Super Mario Bros. Movie is a far cry from the catastrophic box office failure that was Nintendo’s first Super Mario Bros. film from 1993. The movie’s gross basically guarantees that we’re going to be seeing more Mario sequels for years to come and feels like a signal that Nintendo’s big plan to build a new kind of entertainment empire for itself might just work.

The UK doesn’t want Microsoft’s Activision Blizzard deal, so what happens next?

Microsoft’s giant deal hangs in the balance. | Image: Microsoft

Microsoft is furious. Last week, a surprise decision from the UK’s Competition and Markets Authority (CMA) left its $68.7 billion deal to acquire Activision Blizzard blocked in Britain, thanks to concerns about the future of cloud gaming.

Microsoft president Brad Smith was awake at 2AM that morning hastily writing a response from across the pond, according to Bloomberg. He spoke to the BBC a day later and called the UK regulator’s decision the “darkest day” for Microsoft in its four decades of working in Britain. He went a step further and said “the European Union is a more attractive place to start a business” than the UK, a particularly stinging statement given the political issues around Brexit.

Now, Microsoft is bruised, angry, and plotting its next move. If Brad Smith’s fighting talk is anything to go by, Microsoft will try to keep this deal alive. But the CMA’s decision won’t be an easy one to appeal.

Microsoft president Brad Smith has previously appeared in Brussels to argue for its Activision deal.

UK regulators have been cracking down on merger and acquisition activity in recent years, coinciding with the UK’s exit from the European Union. To fight its latest decision, Microsoft will have to file a notice with the Competition Appeal Tribunal (CAT), a process that can take months. It will have to convince a panel of judges that the CMA acted irrationally, illegally, or with procedural impropriety or unfairness. And the chances of winning are slim. “The CMA has won 67 percent of all merger appeals since 2010,” wrote Nicole Kar, a partner at the Linklaters law firm, in 2020. I spoke to Kar after the CMA’s Microsoft decision, and she confirmed the CMA still wins the majority of appeals.

Meta’s battle with the CMA over its Giphy acquisition shows what might be in store for Microsoft. Meta was originally ordered to sell Giphy in 2021, appealed the ruling, and lost. It eventually had to comply with the UK competition watchdog and divest itself of the social media GIF library. Viagogo’s $4 billion takeover of StubHub was also partially blocked by the CMA, forcing the company to keep StubHub’s US and Canadian operations but sell its UK and international businesses.

Microsoft skirmished with the CMA during the review process, publicly criticizing the regulator’s math and forcing it to fix “clear errors” in its financial calculations around withholding Call of Duty from PlayStation.

Those errors forced the CMA to make a rare U-turn with its provisional findings, dropping concerns around Call of Duty and the impact of Microsoft’s deal on console competition. But crucially, it kept cloud gaming concerns open — which led to the deal being blocked. Sony, which has emerged as one of the main opponents (alongside Google) to Microsoft’s Activision acquisition, called the CMA’s initial U-turn a “surprising, unprecedented, and irrational” decision, but the PlayStation maker hasn’t yet commented on the regulator’s decision to block the deal.

The CMA said in September that it was concerned about the effects of Microsoft owning Activision Blizzard games on existing rivals and emerging entrants offering multi-game subscriptions and cloud gaming services. I tweeted at the time that all of the headlines around Call of Duty were just noise and that the bigger concerns would be Microsoft’s unique ability to leverage Windows and Azure, and how it could influence game distribution and revenue shares across the game industry with its Xbox Game Pass subscription.

A screenshot from Call of Duty: Modern Warfare II. Image: Activision Blizzard
Call of Duty wasn’t a big concern for the CMA after all.

Microsoft knew cloud gaming would be a key concern, and that’s why it has spent the past couple of months preparing by signing deals with Boosteroid, Ubitus, and Nvidia to allow Xbox PC games to run on rival cloud gaming services. These 10-year deals will also include access to Call of Duty and other Activision Blizzard games if Microsoft’s deal is approved by regulators. If it isn’t, the Activision Blizzard portion of those deals falls away, and only Microsoft’s Xbox PC games will be supplied.

But these deals haven’t convinced the UK. The CMA says they are “too limited in scope,” with models that mean gamers have to acquire the right to play games “by purchasing them on certain stores or subscribing to certain services.” There’s also concern that Microsoft could retain all revenue from sales of Activision games and in-app purchases, and that cloud providers wouldn’t be able to offer these games in rival multi-game subscription services or on operating systems other than Windows.

Limiting support to Windows would make rival cloud gaming services customers of Microsoft, helping the software giant secure its dominance in operating systems if there ever was a bigger shift to cloud gaming. Valve’s SteamOS provides the only realistic threat to Windows gaming dominance right now, and if cloud providers have to license Windows to run games like Call of Duty, then it’s unlikely that we’ll see the switch to Linux that Google tried to push with its failed Stadia cloud gaming service.

Most of this deal now rests on the European Union’s shoulders. The cloud deals Microsoft has been signing are also designed to appease regulators in the EU. Reuters reported last month that the Activision deal is likely to be approved by EU regulators following the Nvidia and Nintendo licensing agreements. The EU is due to make a decision by May 22nd, and Microsoft is once again trying to get out ahead of regulators by signing a fresh deal with European cloud gaming platform Nware. Nvidia and Boosteroid, which both signed Microsoft’s 10-year cloud deal, have publicly questioned the CMA’s decision, with Microsoft hoping this kind of backing will sway EU regulators.

An EU approval could offer a glimmer of hope for Microsoft’s giant deal, as such a move would put pressure on the UK as the only major market to outright block the acquisition. Regulators in Saudi Arabia, Brazil, Chile, Serbia, Japan, and South Africa have already approved the deal. Microsoft does face trouble closer to home, though.

In the US, the Federal Trade Commission sued to block Microsoft and Activision Blizzard’s deal late last year. The FTC case is still at the document discovery stage, with an evidentiary hearing scheduled for August 2nd. Microsoft and Sony lawyers are already arguing over which (and how many) documents should be presented as part of the legal discovery process, and we’re months away from knowing how the case will proceed.

Microsoft has always maintained that the deal will close by the end of its fiscal 2023 year, which is the end of June. But that deadline looks incredibly unrealistic now, given the CMA’s intervention. We’re definitely going to see some fighting from Microsoft in the weeks ahead, but if EU regulators share the same concerns as the CMA, then it will almost certainly be game over for Microsoft. It’s hard to imagine it’s really willing to battle it out in courts for months or years with multiple regulators in Europe, all while facing the prospect of the FTC trying to break the deal apart. So for the next few weeks, all eyes are now on Brussels.

The Verge’s favorite Stream Deck hacks

The Verge’s favorite Stream Deck hacks
Stream Deck MK. 2 in dark purplish room
Image: Elgato

Stream Deck fever has hit The Verge — here are some of the uses that we put ours to.

Recently — this week, in fact — I purchased my first Stream Deck. Specifically, I decided to try the Stream Deck Mini, the smallest and most inexpensive model. Why? Because I saw how much fun many of my colleagues were having with theirs.

The Stream Deck is a device that lets you program a series of physical buttons (and, in the case of the Plus, knobs) to perform a single task or a series of tasks on your computer or on your home’s smart devices. In other words, it lets you do something that usually demands several keystrokes — say, starting a new email, dropping in a template, and sending it to a specific contact list — with a single button press. Neat, right?

Well, several staffers at The Verge think the Stream Deck is exceptionally neat, and they’ve been using the devices to make work more efficient, to make play more fun, and — well, just to mess around with the tech. So since I am a complete newbie, I thought I’d find out some of the ways that my co-workers were working with theirs.

By the way, if you’re also a Stream Deck fan and want to try some hacks, you can find plug-ins at Elgato’s site, ideas and advice on Reddit — or you can just Google what you’d like to try and see what comes up.

Meanwhile, here’s how some of the folks here at The Verge have been using their Stream Decks.


I wanted knobs

Alex Cranz, managing editor

I know. Our own review of the Stream Deck Plus said most people didn’t need the Stream Deck Plus, and I know I could have gone a more fun and hacky route, but I wanted buttons, knobs, and a relatively easy setup. So now, I use a Stream Deck Plus. Button-wise, I mainly use it to quickly open a new page for posts on The Verge. I’ve got buttons for each story type, and I’ve customized the little Verge logo for each button. I also set up some hacks using the HomeControl app so I can control all my Philips Hue lights from the Stream Deck Plus, and that’s convenient, even if I often forget to do it.

But I bought the Stream Deck Plus because I wanted knobs rather than just buttons, so it’s no surprise that knob use cases are my favorites. I’ve got knobs for the volume on my computer and the brightness of the key light I use for video calls. I use them several times an hour — more than the 12 buttons I’ve programmed. The knobs work so well I wish they had more use cases. I’d love to be able to control every light in my house or control the volume for multiple audio outputs. I’m sure that kind of control is just a hack away. I just need to find it.


To trigger Mac shortcuts

Liam James, lead producer, The Vergecast

When I first started at The Verge, we were all obsessed with Art Lebedev’s prototype Optimus keyboard, which used tiny OLED screens underneath each keycap to show the most relevant input based on what you were doing. I wanted one very badly, but alas, it took years to become an actual product, and when it did, it was prohibitively expensive.

Fast-forward 10 years to the first time I saw a colleague use a Stream Deck to change the lighting in his remote office. I knew this was my time.

I use my Stream Deck MK. 2 primarily to trigger the Mac shortcuts (automations) I’ve created for repetitive tasks I have to do as part of my job as producer for The Vergecast. I can tap one button, and a Slack message I’ve received from one of the co-hosts turns into a new to-do item in my task manager. Another button quickly opens our online studio, Riverside, to the correct location I need for a recording. And of course, I copied my colleague David Pierce and can control everything in my smart home as well.


To declare podcast time

David Pierce, editor-at-large

I use my Stream Deck for mostly normal stuff. I use it to control my Philips smart lights because buttons are better than yelling “hey Siri, turn on the lights” a hundred times a day. I have a button that immediately ends whatever meeting I’m in. But there are two that I love and use most of all.

The first is a Slack status button, which I’ve rigged up to switch my status to “BRB.” If it’s lunch / meeting / nap time, I just whack that button as I walk away, and poof! I’m gone. The second is a button connected to a Mac shortcut I call “Podcast Time!” (The exclamation point is very important.) When I hit that button, it turns on Do Not Disturb on my Mac, closes every app except the ones we use to record, and opens a tab with the episode’s Google Doc. It turns a million clicks into one button press, and it makes me happy every time I mash it.


To swap to speakers

Sean Hollister, senior editor

I can’t spend all day wearing a headset, no matter how comfortable, much less my amazing wireless gaming headset that slowly drives me up the wall. So I like to swap to a set of Audioengine speakers a few times a day, and my six-key Stream Deck Mini lets me do that with one tap of a button. I use the Audio Switcher plug-in by Fred Emmott to do it, which lets you pick two audio devices to switch between, complete with handy icons so you know which is active just by looking at a Stream Deck key.

It also comes with a must-enable fuzzy logic device match setting, so it can find my SteelSeries headset even if it decides to suddenly tell Windows it’s a brand-new device due to quirks of USB. I suppose I wouldn’t feel the need for this if Microsoft hadn’t buried the audio device switcher in Windows 11, but here we are, and the Stream Deck workaround works great for me.
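
For anyone curious what a key like that does under the hood, here’s a minimal sketch of the idea in Python — not how Fred Emmott’s plug-in actually works, just a rough illustration: a single Stream Deck key runs a small script that flips Windows’ default playback device between two named outputs using NirSoft’s nircmd utility. The device names and state file below are placeholders.

    import subprocess
    from pathlib import Path

    # Placeholder device names; use the exact names Windows shows in Sound settings.
    DEVICES = ["Speakers", "Headset Earphone"]
    STATE = Path.home() / ".last_audio_device"  # remembers which output was picked last

    def toggle_default_output() -> None:
        current = STATE.read_text().strip() if STATE.exists() else DEVICES[0]
        target = DEVICES[1] if current == DEVICES[0] else DEVICES[0]
        # nircmd's setdefaultsounddevice command switches the default playback device by name.
        subprocess.run(["nircmd.exe", "setdefaultsounddevice", target], check=True)
        STATE.write_text(target)

    if __name__ == "__main__":
        toggle_default_output()

Pointing a Stream Deck key’s built-in Open action at a script like this gives you the same one-tap toggle, minus the plug-in’s handy icons and fuzzy device matching.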


Going for the basics

Brandon Widder, senior commerce editor

I’ll admit it: I’m an absolute newbie when it comes to the Stream Deck. I picked the entry-level Mini after I listened to many of my colleagues wax poetic about its infinite possibilities, which, as I quickly found out, are not all that hard to rig up if all you want to do is customize a few basic functions. Within minutes, I was able to program it to launch my favorite websites, update my Slack status, and swap between my various Philips Hue lighting zones (which is really just a selection of cool whites and some purplish zone called "vapor wave"). I’ve also programmed it, like others, to kick-start some of my go-to Spotify playlists, ensuring those lo-fi beats and whatever Wilco-adjacent deep cut I’m currently into are never out of reach.


Rearrange the windows

Dan Seifert, deputy editor, reviews

I started my Stream Deck journey with a six-button Mini, but I recently upgraded to the 15-key Stream Deck MK. 2 so I wouldn’t have to switch between pages as often to access the controls I use most frequently.

I use my Deck for a lot of the standard things — controlling media playback, smart home lights, in-meeting mute and leave — but my favorite hack combines a plug-in that can run small AppleScript code snippets with the Moom window management app. I set up a Multi Action Switch on the Stream Deck to automatically open the Google Meet web app and rearrange my windows to put it front and center (with my browser window off to the side) when I need to hop on a call, something I do multiple times a day. When the call is done, I press the same button, which runs a script to automatically close the Meet app and put my browser and other app windows back the way I had them, letting me get on with my next task.
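
For readers who want to try something similar, here’s a minimal sketch in Python of the “open Meet, then rearrange windows” idea — the kind of script a single Stream Deck key could launch. The Meet URL, the layout name, and the Moom AppleScript call below are illustrative placeholders and assumptions, not the exact configuration described above.

    import subprocess

    MEET_URL = "https://meet.google.com/"  # placeholder; a real setup would use a specific meeting link

    def open_meet() -> None:
        # macOS's built-in `open` command launches the URL in the default browser.
        subprocess.run(["open", MEET_URL], check=True)

    def arrange_windows(layout: str = "Calls") -> None:
        # Assumes the window manager exposes saved layouts to AppleScript;
        # the exact command varies by app, so treat this line as illustrative only.
        script = f'tell application "Moom" to arrange windows according to snapshot "{layout}"'
        subprocess.run(["osascript", "-e", script], check=True)

    if __name__ == "__main__":
        open_meet()
        arrange_windows()

A second key (or a second press of a Multi Action Switch) can run the reverse steps to close the call window and restore the previous layout.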

It’s small things like this that make the Stream Deck an indispensable tool on my desk.


‘Godfather of AI’ quits Google with regrets and fears about his life’s work

‘Godfather of AI’ quits Google with regrets and fears about his life’s work
Key Speakers At The International Economic Forum Of The Americas Toronto Global Forum
Geoffrey Hinton (foreground) has left Google to speak out on the dangers of AI. | Image: Getty

Geoffrey Hinton, who alongside two other so-called “Godfathers of AI” won the 2018 Turing Award for their foundational work that led to the current boom in artificial intelligence, now says a part of him regrets his life’s work. Hinton recently quit his job at Google in order to speak freely about the risks of AI, according to an interview with the 75-year-old in The New York Times.

“I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” said Hinton, who had been employed by Google for more than a decade. “It is hard to see how you can prevent the bad actors from using it for bad things.”

Hinton notified Google of his resignation last month, and on Thursday talked to CEO Sundar Pichai directly, according to the NYT. Details of that discussion were not disclosed.

The life-long academic joined Google after it acquired a company started by Hinton and two of his students, one of whom went on to become chief scientist at OpenAI. Hinton and his students had developed a neural network that taught itself to identify common objects like dogs, cats, and flowers after analyzing thousands of photos. It’s this work that ultimately led to the creation of ChatGPT and Google Bard.

According to the NYT interview, Hinton was happy with Google’s stewardship of the technology until Microsoft launched the new OpenAI-infused Bing, challenging Google’s core business and sparking a “code red” response inside the search giant. Such fierce competition might be impossible to stop, Hinton says, resulting in a world with so much fake imagery and text that nobody will be able to tell “what is true anymore.”

Google’s chief scientist, Jeff Dean, worked to soften the blow with the following statement: “We remain committed to a responsible approach to AI. We’re continually learning to understand emerging risks while also innovating boldly.”

The spread of misinformation is only Hinton’s immediate concern. On a longer timeline he’s worried that AI will eliminate rote jobs, and possibly humanity itself as AI begins to write and run its own code.

“The idea that this stuff could actually get smarter than people — a few people believed that,” said Hinton to the NYT. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”

EV Lessons Learned From 4 Years as a Jaguar I-Pace Owner

EV Lessons Learned From 4 Years as a Jaguar I-Pace Owner
Jaguar I-Pace at the 2019 New York International Auto Show
I enjoyed this car more than any other car I've driven, and I've driven many cars, including exotics. But the death of my I-Pace showcases several ongoing problems with electric vehicles that still exist today.

‘The Godfather of A.I.’ Quits Google and Warns of Danger Ahead

‘The Godfather of A.I.’ Quits Google and Warns of Danger Ahead

For half a century, Geoffrey Hinton nurtured the technology at the heart of chatbots like ChatGPT. Now he worries it will cause serious harm.

Sunday, April 30, 2023

Frankenstein’s warning: the too-familiar hubris of today’s technoscience

Frankenstein’s warning: the too-familiar hubris of today’s technoscience

Technology presuming to recreate humanity is central to Mary Shelley’s masterpiece. It is more relevant today than ever.

Can we imagine a scenario in which the different anxieties aroused by George Romero’s horror film Night of the Living Dead and Stanley Kubrick’s sci-fi dystopia 2001: A Space Odyssey merge?

How might a monster that combined our fear of becoming something less than human with our fear of increasingly “intelligent” machines appear to us, and what might it say?


Here are the best Black Friday deals you can already get

Here are the best Black Friday deals you can already get
Image: Elen Winata for The Verge

From noise-canceling earbuds to robot vacuums a...