Monday, May 13, 2024

Why Adobe CEO Shantanu Narayen is confident we’ll all adapt to AI

Photo illustration by The Verge / Photo: Adobe

The tech and the consumers both might not be quite ready yet, but he’s betting big on an AI future.

Today, I’m talking with Adobe CEO Shantanu Narayen. Shantanu’s been at the top of my list of people I’ve wanted to talk to for the show since we first launched — he’s led Adobe for nearly 17 years now, but he doesn’t do too many wide-ranging interviews. I’ve always thought Adobe was an underappreciated company — its tools sit at the center of nearly every major creative workflow you can think of — and with generative AI poised to change the very nature of creative software, it seemed particularly important to talk with Shantanu now.

Adobe has an enormously long and influential history when it comes to creative software. It began in the early 1980s, developing something called PostScript that became the first industry-standard language for connecting computers to printers — a huge deal at the time. Then, in the 1980s and 1990s, it released the first versions of software that’s now so ubiquitous that it’s hard to imagine the computing and design industries without them. Adobe created the PDF, the document standard everyone now kind of loves to hate, as well as programs like Illustrator, Premiere, and — of course — Photoshop. If you work in a creative field, it’s a near certainty that there’s Adobe software running somewhere close to you.

All that influence puts Adobe right at the center of the whole web of tensions we like to talk about on Decoder — especially as the company has evolved its business and business model over time. Shantanu joined the company in 1998, back when desktop software was a thing you sold on a shelf. He was with the company when it started bundling a whole bunch of its flagship products into the Creative Suite, and he was the CEO who led the company’s pivot to subscription software with Creative Cloud in 2012. He also led some big acquisitions that turned into Adobe’s large but under-the-radar marketing business — so much of what gets made in tools like Photoshop is marketing and advertising collateral, after all, and Adobe has built a growing business helping companies create, distribute, and track the performance of all that work around the web.

But AI really changes what it means to make and distribute creative work — even what it means to track advertising performance across the web — and you’ll hear us talk a lot about all the different things generative AI means for a company like Adobe. There are strategic problems, like cost: everyone’s pouring tons of money into R&D for AI, but not many people are seeing revenue returns on it just yet, and Shantanu explained how he’s thinking about the return on that investment.

Then there are the fundamental philosophical challenges of adding AI to photo and video tools. How do you sustain human creativity when so much of it can be outsourced to the tools themselves with AI? And I asked a question I’ve been thinking about for a long time as more and more of the internet gets so deeply commercialized: What does it mean when a company like Adobe, which makes the tools so many people use to make their art, sees the creative process as a step in a marketing chain, instead of a goal in and of itself?

This one got deep — like I said, Shantanu doesn’t do many interviews like this, so I took my shots.

Okay: Adobe CEO Shantanu Narayen. Here we go.

This transcript has been lightly edited for length and clarity.

Shantanu Narayen, you’re the CEO of Adobe. Welcome to Decoder!

Thanks for having me, Nilay.

I am very excited to talk to you. You are one of the first guests I ever put on a list of guests I wanted on the show because I think Adobe is under-covered. As the CEO, you’ve been there for a long time. You don’t give a lot of interviews, so I’m very excited you chose to join us on the show.

Adobe is 40-plus years old. It has been a lot of different kinds of companies. You have been there since 1998. You became CEO in 2007. You saw at least one paradigm shift in computing. You led the company through another shift in computing. How would you describe Adobe today?

I think Adobe has always been about fundamental innovation, and I think we are guided by our mission to change the world through digital experiences. I think what motivates us is: Are we leveraging technology to deliver great value to customers and staying true to this mission of digital experiences?

What do you mean, specifically, by digital experiences?

The way people create digital experiences, the way they consume digital experiences, the new media types that are emerging, the devices on which people are engaging with digital, and the data associated with it as well. I think we started off way more with the creative process, and now we’re also into the science and data aspects. Think about the content life cycle — how people create content, manage it, measure it, mobilize it, and monetize it. We want to play a role across that entire content life cycle.

I love this; you’re already way into what I wanted to talk about. Most people think of Adobe as the Photoshop company or, increasingly, the Premiere company. Wherever you are in the digital economy, Adobe is there, but what most people see is Creative Cloud.

You’re talking about everything that happens after you make the asset. You make the picture in Photoshop, and then a whole bunch of stuff might happen to it. You make the video in Premiere, and then a lot of things might happen. If you’re a marketer, you might make a sale. If you’re a content creator, you might run an ad. Something will happen there. You’re describing that whole expansive set of things that happen after the asset is made. Is that where your focus is, or is it still at the first step, which is someone has to double-click on Photoshop and do a thing?

I think it is across the entire chain — and, Nilay, I’d be remiss if I didn’t also say we are also pretty well known for PDF and everything associated with PDF!

[Laughs] Don’t worry, I have a lot of PDF questions coming for you.

I think as it relates to the content, which was your question: it doesn’t matter which platform you’re using to create content, whether it’s a desktop, whether it’s a mobile device, whether it’s web — that’s just the first step. It’s how people consume it, whether it’s on a social media site or whether it’s a company that’s engaging with customers and they’re creating some sort of a personalized experience. So, you’re right — very much, we’ve changed our aspirations. I think 20 years ago, we were probably known just for desktop applications, and now we’ve expanded that to the web, and the entire chain has certainly been one of the areas in which we’ve both innovated and grown.

I want to come back to that because there are a lot of ideas embedded in that. One thing that’s on my mind as I’ve been talking to people in this industry and all the CEOs on Decoder: half of them tell me that AI is a paradigm shift on the order of mobile, on the order of desktop publishing, all things that you have lived through. Do you buy that AI is another one of these paradigm shifts?

I think AI is something that we’ve actually been working on for a long time. What do computers do really well? Computers are great at pattern matching. Computers are great at automating inefficient tasks. I think all the buzz is around generative AI, which is the starting point — whether you’re having a conversational interface with your computer or you’re trying to create something, it enables you to start that entire process. I do think it’s going to be fairly fundamental because of the amount of energy, the amount of capital, the amount of great talent that’s focused on, “What does it mean to allow computers to have a conversation and reason and think?” That’s unprecedented. Even more so than, I would say, what happened in the move to mobile or the move to cloud because those were happening at the same time, and perhaps the energy and investment were divided among both, whereas now it’s all about generative AI and its implications.

If you are Microsoft or Google or someone else, one of the reasons this paradigm shift excites you is because it lets you get past some gatekeepers in mobile, it lets you create some new business models, it lets you invent some new products maybe that shift some usage in another way. I look at that for them and I say: Okay, I understand it. I don’t quite see that paradigm shift for Adobe. Do you see that we’re going to have to invent a new business model for Adobe the way that some of the other companies see it?

I think any technology shift has the same profound impact in terms of being a tailwind. If you think about what Microsoft does with productivity, and if you think about what Adobe does with creativity, one can argue that creativity is actually going to be more relevant to every skill moving forward. So I do think it has the same amount of profound implication for Adobe. And we’ve innovated in a dramatic way. We like to break up what we are doing with AI into three layers: what we do at the interface layer, which is what people use to accomplish something; what we’re doing with foundation models, the models we’re creating for ourselves that are the underlying brain of the things we’re attempting to do; and what’s happening with the data. I think Adobe has innovated across all three. And in our different clouds — we can touch on this later — Creative Cloud, Document Cloud, and Experience Cloud, we’re actually monetizing in different ways, too. So I am really proud of both the innovation on the product side and the experimentation on the business model side.

The reason I asked that question that way, and right at the top, is generative AI. So much of the excitement around it is letting people who maybe don’t have an affinity for creative tools or an artistic ability make art. It further democratizes the ability to generate culture, however you wish to define culture. For one set of companies, that’s not their business, and you can see that expands their market in some way. The tools can do more things. Their users have more capabilities. The features get added.

For Adobe, that first step has always been serving the creative professional, and that set of customers actually feels under threat. They don’t feel more empowered. I’m just wondering how you see that, in the broadest possible sense. I am the world’s foremost, “What is a photo?” philosophical handwringer, and then I use AI Denoise in Lightroom without a second’s hesitation, and I think it’s magic. There’s something there that is very big, and I’m wondering if you see that as just a moment we’re all going to go through or something that fundamentally changes your business.

Whether you’re a student, whether you’re a business professional, or whether you’re a creative, we like to say at Adobe that you have a story to tell. The reality is that there are way more stories that people want to tell than skills that exist to be able to tell that story with the soul that they want and the emotion that they want. I think generative AI is going to attract a whole new set of people who previously perhaps didn’t invest the time and energy into using the tools to be able to tell that story. So, I think it’s going to be tremendously additive in terms of the number of people who now say, “Wow, it has further democratized the ability for us to tell that story” — whether you’re ideating or you’re trying to take some picture and fix it but don’t quite know how to do it.

When people look at things like Generative Fill, their jaws drop. What’s amazing to us is that, despite decades of innovation in Photoshop, something like Generative Fill captures the imagination of the community — and the adoption of that feature has been dramatically higher than any other feature that we’ve introduced in Photoshop. When layers first came out, people looked at it, and their jaws dropped. It just speaks to how much more we can do for our customers to be able to get them to tell their story. I think it’s going to be dramatically expansive.

I feel like [Google CEO] Sundar Pichai likes to say AI is more profound than electricity —

You still need electricity to run the AI, so I think they’re both interrelated.

But I honestly think “used as much as layers” is the same statement. It’s at the same level of change. It’s pretty good.

I want to drill down into some of these ideas. You have been the CEO since 2007. That’s right at the beginning of the mobile era. Many things have changed. You’ve turned Adobe into a cloud business. You started as a product manager in 1998. I’m assuming your framework for making decisions has evolved. How do you make decisions now, and what’s your framework?

I think there are a whole bunch of things that have perhaps remained the same and a whole bunch of things that are different. I think at your core, when you make decisions — whether it’s our transition to the cloud, whether it’s what we did with getting into the digital marketing business — it’s always been about: Are we expanding the horizons and the aspirations we look at? How can we get more customers to the platform and deliver more value? At our core, what’s remained the same is this fundamental belief that by investing in deep technology platforms and delivering fundamental value, you will be able to deliver value, monetize it, and grow as a company.

I think what’s different is scale — recognizing something that was always important but becomes increasingly obvious: how do you create a structure in which people can innovate, and how do you scale that? At $20 billion, how do you scale that business and make decisions that are appropriate? I think that’s changed. But at my core ... I managed seven people then, I manage seven people now, and it’s leveraging them to do amazing things.

That gets into the next Decoder question almost perfectly: How is Adobe structured today? How did you arrive at that structure?

I think structures are pendulums, and you change the pendulum based on what’s really important. We have three businesses: one is what we call the creative business that you touched on, and the vision there is how we enable creativity for all. We have the document business, where it’s really thinking about how we accelerate document productivity. And powering digital businesses is the marketing business. We have product units: we call the first two — Creative Cloud and Document Cloud — our digital media business, and we call the marketing business the digital experience business. So we have two core product units run by two presidents, Anil [Chakravarthy, president of Adobe’s digital experience business] and David [Wadhwani, president of Adobe’s digital media business]. In the rest of the company, we have somebody focused on strategy and corporate development. Partnerships is an important part. And then you have finance, legal, marketing, and HR as functional areas of expertise.

Where do you spend your time? I always think about CEOs as having timelines. There’s a problem today some customer is having, you’ve gotta solve that in five minutes. There’s an acquisition that takes a year or maybe even more than that. Where do you spend your time? What timeline do you operate on?

Time is our most valuable commodity, right? I think prioritization is something where we’ve been increasingly trying to say: what moves the needle? One of the things I like to do — both for myself as well as at the end of the year with my senior executives — is say, “How do we move the needle and have an impact for the company?” And that might change over time.

I think what’s constant is product. I love products, I love building products, I love using our products, but the initiatives might change. A few years ago, it was all about building this product that we call the Adobe Experience Platform — a real-time customer data platform — because we had this vision that if you had to deliver personalized, engaging experiences, you needed a next-generation infrastructure. This was not about the old generation of, “Where was your customer data stored?” It was more about: what’s a real-time platform that enables you to activate that data in real time? And that business has now exploded. We have tens of billions of profiles, and the book of business has crossed $750 million.

Incubating new businesses is hard. In companies, the power structure tends to be with the businesses that are making money today, and so incubating businesses requires sponsorship. Adobe Express is another product that we’ll talk about. We just released a phenomenal new version of Adobe Express on both mobile and web, which is all about this creativity for all. So I think about what the needle-moving initiatives are. Sometimes, it might be about partnerships. And as we think about LLMs and what’s happening in generative AI, where do we partner versus where do we build? While it changes, I would say there are three parts to where I spend my time.

There’s strategy because, at the end of the day, our jobs are planting the flag for where the company has to go and the vision for the company. Then there’s a cadence of execution. If you don’t execute against the things that are important for you, it doesn’t matter how good your strategy is. And the third set of things that you focus on is people. Are you creating a culture where people want to come in and work, so they can do their best work? Is the structure optimized to accomplish what is most important, and are you investing in the right places? I would say those are the three buckets, but it ebbs and flows based on the critical part. And you’re right — you do get interrupted, and having to deal with whatever is the interruption of the day is also an important part of what you do.

You said you had three core divisions. There’s the Creative Cloud — the digital media side of the business. There’s the Experience Cloud, which is the marketing side of the business, and then there’s ... I think you have a small advertising line of revenue in that report. Is that the right structure for the AI moment? Do you think you’re going to have to change that? Because you’ve been in that structure for quite some time now.

I think what’s been really amazing and gratifying to us is, at the end of the day, while you have a portfolio of businesses, if you can integrate them where you deliver value to somebody that is incredible and that no other company can do by themselves, that’s the magic that a company can do. We just had MAX in London, and we had our Summit here in Las Vegas. These are our big customer events. And the story, even at the financial analyst meetings, is all about how these are coming together: how the integration of the clouds is where we’re delivering value. When you talk about generative AI, we do creation and we do production. We have to do asset management.

If you’re a marketer and you’re creating all this content, whether it’s for social, whether it’s for email campaigns, whether it’s for media placement or just TV, where is all that content stored, and how do you localize it, and how do you distribute it? How do you activate it? How do you create these campaigns? What do you do with workflow and collaboration? And then what is the analysis and insight and reporting?

This entire framework is called GenStudio, and it’s actually the bringing together of the cloud businesses. The challenge in a company is you want people who are ruthlessly focused on driving innovation in a competitive way and leading the market in what they are responsible for, but you also want them to take a step back and realize that it’s actually putting these together in a way that only Adobe can uniquely do that differentiates us from everybody else. So, while we have these businesses, I think we really run the company as one Adobe, and we recognize the power of one Adobe, and that’s a big part of my job, too.

How do you think about investing at the cutting edge of technology? I’m sure you made AI investments years ago before anyone knew what they could become. I’m sure you have some next-gen graphics capabilities right now that are just in the research phase. That’s pure cost. I think Adobe has to have that R&D function in order to remain Adobe. At the same time, even the cost of deploying AI is going up as more and more people use Firefly or Generative Fill or anything else. And then you have a partnership with OpenAI to use Sora in Premiere, and that might be cheaper than developing on your own. How do you think about making those kinds of bets?

Again, we are in the business of investing in technology. A couple of things have really influenced how we think about it at the company. Software has an S-curve. You have things that are in incubation and have a horizon that’s not immediate, and you have other things that are mature. I would say our PostScript business is a mature business. It changed the world as we know it right now, but it’s a more mature business. So it’s about being thoughtful about where something is in its stage of evolution — you’re making investments certainly ahead of the “monetization” part, but you have other metrics, and you ask: am I making progress against those metrics? We’re thoughtful about having this portfolio approach. Some people call it a horizon approach, based on which phase you’re in. But in each one of them, are we impatient for success in some way? It may be impatience for usage. It may be impatience for making technology advancements. It may be impatience for revenue and monetization. It may be impatience for geographic distribution. I think you still have to create a culture where the expectations of why you are investing are clear and you measure the success against those criteria.

What are some of the longer-term bets you’re making right now that you don’t know when they’re going to pay off?

Well, we’re always investing: AI, building our own foundation models. I think we’re all fairly early in this phase. We decided very early on that with Firefly, we’re going to be investing in our models. We are doing the same on the PDF side. We had Liquid Mode, which allowed you to make all your PDFs responsive on a mobile device. In the Experience Cloud, how do you think about customers, and what’s a model for customers and profiles and recommendations? Across the spectrum, we’re doing it.

I would say the area where we probably do the most fundamental research is in Creative [Cloud]: what’s happening with compression models or resolution or image enhancement techniques or the mathematical models for that? We’ve always had advanced technology in that. There, you actually want the team to experiment with things that are further from the tree because if you’re too close to the tree and your only metric is what part of that ships, you are perhaps going to miss some fundamental moves. So, again, you have to be thoughtful about what you are. But I would say core imaging science, core video science, and 3D immersive are clearly the areas where we are making the most fundamental research investments.

You mentioned AI and where you are in the monetization curve. Most companies, as near as I can tell, are investing a lot in AI, rolling out a lot of AI features, and the best idea anyone has is, “We’ll charge you 20 bucks a month to ask this chatbot a question, and maybe it will confidently hallucinate at you.” And we’ll see if that’s the right business. But that’s where we are right now for monetization. Adobe is in a different spot. You already have a huge SaaS business. People are already using the features. Is the use of Firefly creating any margin pressure on Creative Cloud subscribers? You’re not charging extra for it, but you could in the future. How are you thinking about that increased cost?

We have been thoughtful about different models for the different products that we have. You’re right in Creative. Think about Express versus Creative Cloud. In Creative Cloud, we want low friction. We want people to experiment with it. Most people look at it and say, “Hey, are you acquiring new customers?” And that’s certainly an important part. What’s equally important is whether it helps with retention and usage — that also, for a subscription business, has a material impact on how you deliver value to customers.

Express is very different. Express is an AI-first new product that’s designed to be this paradigm change where, instead of knowing exactly what you want to do, you have a conversation with the computer: I want to create this flyer or I want to remove the background of an image or I want to do something even more exciting and I want to post something on a social media site. And there, it’s, again, about acquisition and successful exports.

You’re right in that there’s a cost associated with it. I would say for the most part, for most companies, the training cost is probably higher right now than the inference cost, partly because we can start to offload the inferencing onto devices as that becomes a reality. But it’s what we do for a living. If you are uncomfortable investing in fundamental technology, you’re in the wrong business. And we’re not a company that has focused on being a fast follower and letting somebody else invent it. We like creating markets. And so you have to recognize who you are as a company, and that comes with the consequences of how you have to operate.

I think it remains to be seen how consumer AI is monetized. It remains to be seen even with generative AI in Photoshop. At the individual creative level, I think it remains to be seen. Maybe it will just help you with retention, but I feel like retention in Photoshop is already pretty high. Maybe it will bring you new customers, but you already have a pretty high penetration of people who need to use Photoshop.

It’s never enough. We’re always trying to attract more customers.

But that’s one part of the business. I think there’s just a lot of question marks there. There’s another part of your business that, to me, is the most fascinating. When I say Adobe is under-covered, the part of the business that I think is just fully under-covered is — you mentioned it — GenStudio. It’s the marketing side of the business, the experience side of the business. We’re going to have creatives at an ad agency make some assets for a store. The store is going to pump its analytics into Adobe’s software. The software is going to optimize the assets, and then maybe at some turn, the AI is going to make new assets for you and target those directly to customers. That seems like a very big vision, and it’s already pre-monetized in its way. That’s just selling marketing services to e-commerce sites. Is that the whole of the vision or is it bigger than that?

It’s a big part of the vision, Nilay. We’ve been talking about this vision of personalization at scale — whether you’re running a promotion or a campaign or making a recommendation on what to watch next — and we’re in our infancy in terms of what happens. When I look at how we create our own content and partner with great agencies — the amount of content that’s created, and the way to personalize that and run variations and experiment and run this across the 180 countries where we might do business — that entire process, from a campaign brief to where an individual in some country is experiencing that content, is a long, laborious process. And we think that we can bring a tremendous amount of technology to bear in making that way more seamless. So I think that is an explosive opportunity, and every consumer is now demanding it, and they’re demanding it on their mobile device.

I think people talk about the content supply chain and the amount of content that’s being created and the efficacy of that piece of content. It is a big part of our vision. But documents are, too. The world’s information is in documents, and we’re equally excited about what we are doing with PDF and the fact that now, in Reader, you can have a conversational interface and say, “Hey, summarize this for me” — and then, over time, ask how this document, if I’m doing medical research, correlates with the other research that’s in there, and then go find things that might be on my computer or might be out there on the internet. You have to pose these interesting problems for your product team: how can we add value in this particular use case or scenario? And then they unleash their magic on it. Our job is posing these hard things, like, “Why am I starting the process for Black Friday or Cyber Monday five months in advance? Why can’t I decide a week before what campaign I want to run and what promotion I want to run?” By enabling that, I think we will deliver tremendous value.

I promised you I would ask you a lot of questions about PDF, and I’m not going to let go of that promise, but not yet. I want to stay focused on the marketing side.

There’s an idea embedded in two phrases you just said that I find myself wrestling with. I think it is the story of the internet — how commercialized the internet has become. You said “content supply chain” and “content life cycle.” The point of the content is to lead to a transaction — that is an advertising- and marketing-driven view of the internet. Someone, for money, is going to make content, and that content will help someone else down the purchase funnel, and then they’re going to buy a pair of shoes or a toothbrush or whatever it is. And that, I think, is in tension with creativity in a real way. That’s in tension with creativity and art and culture. Adobe sits at the center of this. Everybody uses your software. How do you think about that tension? Because it’s the thing that I worry about the most.

Specifically, the tension is as a result of what? The fact that we’re using it for commerce?

Yeah. I think if the tools are designed and organized and optimized for commerce, then they will pull everybody toward commerce. I look at young creators on social platforms, and they are just slowly becoming ad agencies. A one-person ad agency is where a creator ends up if they are at the top of their game. MrBeast is such a successful ad agency that his rates are too high, and it is better for him to sell energy bars and make ads for his own energy bars than it is for him to sell ads to someone else. That is a success story in one particular way, and I don’t deny that it’s a success story, but it’s also where the tools and the platforms pull the creatives because that’s the money. And because the tools — particularly Adobe’s tools — are used by everybody for everything, I wonder if you at the very top think about that tension and the pull, the optimization that occurs, and what influence that has on the work.

We view our job as enablement. If you’re a solopreneur or you want to run a business, you want to be a one-person shop in terms of being able to do whatever your passion is and create it. And the internet has turned out to be this massively positive influence for a lot of people because it allows them distribution. It allows them reach. But I wouldn’t underplay the —

There are some people who would make, at this point, a very different argument about the effect of the internet on people.

But I was going to go to the other side. Whether it’s just communication and expressing themselves, one shouldn’t minimize the number of people for whom this is a creative outlet and it’s an expression, and it has nothing to do with commerce and they’re not looking to monetize it, but they’re looking to express themselves. Our tools, I think, do both phenomenally well. And I think that is our job. Our job is not doing value judgment on what people are using this for. Our job is [to ask], “How do we enable people to pursue their passion?”

I think we do a great job at that. If you’re a K–12 student today, when you write a project, you’re just using text. How archaic is that? Why not put in some images? Why not create a video? Why not point to other links? The whole learning process is going to be dramatically expanded visually for billions of people on the internet, and we enable that to happen. I think there are different users and different motivations, and again, as I said, we’re very comfortable with that.

One of the other tensions I think about right now when it comes to AI is that the whole business — the marketing business, the experience business you have — requires a feedback loop of analytics. You’re going to put some content ideally on the web. You’re going to put some Adobe software on the website. You own a big analytics suite that you acquired with Omniture back in the day. Then that’s going to result in some conversions. You’ll do some more tracking. You’ll sell some stuff.

That all depends on a vibrant web. I’m guessing when people make videos in Premiere and upload them to YouTube, you don’t get to see what happens on YouTube. You don’t have great analytics from there. I’m guessing you have even worse analytics from TikTok and Instagram Reels. More and more people are going to those closed platforms, and the web is getting choked by AI. You can feel that it’s being overrun by low-quality SEO spam or AI content, or it’s mostly e-commerce sites because you can avoid some transaction fees if you can get people to go to a website. Do you worry about the pressure that AI is putting on the web itself and how people are going to the more closed platforms? Because that feels like it directly hits this business, but it also directly impacts the future of how people use Photoshop.

I think your point really brings to the forefront the fact that the more people use your products, the more of a challenge it is to differentiate yourself with your content. I think that comes with the democratization of access to tools and information. It’s no different from if you’re a software engineer and you have all this access to GitHub and everything that you can do with software. How do you differentiate yourself as a great engineer, or if you’re a business, how do you differentiate yourself as a business? But as it relates to the content creation parts —

Actually, can I just interrupt you?

Sure.

I want you to talk about the distribution side. This is the part that I think is under the most pressure. Content creation is getting easier and more democratic. However you feel about AI, it is easier to make a picture or a video than it’s ever been before. On the distribution side, the web is being choked by a flood of AI content. The social platforms, which are closed distribution, are also being flooded with AI content. How do you think about Adobe living in that world? How do you think about the distribution problem? Because it seems like the problem we all have to solve.

You’re absolutely right in that, as the internet has evolved, there are what you might consider open platforms and closed platforms. But we produce content for all of that. You pointed out that, whether it’s YouTube, TikTok, or just the open internet, we can help you create content for all of that. I don’t know that I’d use the word “choked.” I used the word “explosion” of content, certainly, and “flooded” also is a word that you used. It’s a consequence. It’s a consequence of the access. And I do think that for all the companies that are in that business, even for companies that are doing commerce, there are a couple of key things that, when they do them, make them lasting platforms. The first is transparency about what they are doing with that data and how they’re using that data. The second is the monetization model: how are they sharing whatever content is being distributed through their sites with the people who are making those platforms incredibly successful?

I don’t know that I worry about that a lot, honestly. I think most of the creators I’ve spoken to like a proliferation of channels because they fundamentally believe that their content will be differentiated on those channels, and getting exposure to the broadest set of eyeballs is what they aspire to. So I haven’t had a lot of conversations with creators where they are telling us, as Adobe, that they don’t like the fact that there are more platforms on which they have the ability to create content. They do recognize that it’s harder, then, for them to differentiate themselves and stand out. Ironically, that’s an opportunity for Adobe because the question is, for that piece of content, how do you differentiate yourself in the era of AI if there’s going to be more and more lookalikes, and how do you have that piece of content have soul? And that’s the challenge for a creative.

How do you think about the other tension embedded in that, which is that you can go to a number of image generators, and if someone is distinctive enough, you can say, “Make me an image in the style of X,” and that can be trained upon and immediately lifted, and that distinction goes to zero pretty fast. Is that a tension that you’re thinking about?

Given the role that Adobe plays in the content creation business, I think we take both the innovation angle and the responsibility angle very seriously. And I know you’ve had conversations with Dana [Rao, Adobe’s general counsel] and others about what we are doing with Content Credentials and what we are doing with the FAIR Act. If you look at Photoshop, we’re also taking a very thoughtful approach: when you upload a picture for which you want to do a structure match or style match, you bear the responsibility of saying you have access to that IP and a license to that IP in order to do that.

So I can interpret your question in one of two ways. One is: how do we look at all of the different image generators that have emerged? In that case, we are creating our own image generator, but at the NAB Show, we showed how we can support other third parties as well. It was really critical for us to sequence this by first creating our own image model, both because we had one that was designed to be commercially safe and because it respected the rights of the creative community, which we have to champion. But if others have decided that they are going to use a different model but want to use our interfaces, then with the appropriate permissions and policies, we will support that as well.

And so I interpret your question in those two ways. When we provide something ourselves, we’re taking responsibility for making sure that we recognize IP, because it is important — it’s people’s IP. I think at some point, the courts will opine on this, but we’ve taken a very designed-to-be-commercially-safe approach where we recognize the creator’s IP. Others have not. And the question might be, well, why are you supporting them in some of our products? A lot of our customers are saying, “Well, we will take the responsibility, but please integrate this in our interfaces,” and that’s why we are supporting third-party models.

It bears mentioning that literally today, as we’re speaking, an additional set of newspapers has sued OpenAI for copyright infringement. That seems like the thing that is burbling along underneath this entire revolution: yeah, the courts are going to have to help us figure this out. That seems like the very real answer. I did have a long conversation with Dana [Rao] about that, and I don’t want to get into the weeds of it here. I’m just wondering, for you as the CEO of Adobe, where is your level of risk? How risky do you think this is right now for your company?

I think the approach that we’ve taken has shown just tremendous leadership. Look at our own content: we have a stock business, and we have the rights to train our models on that stock content. We have Behance, which is the social site where creative professionals share their images. While that’s owned by Adobe, we did not train our Firefly image models on it because that was not the agreement that we had with the people who use it.

I think we’ve taken a very responsible approach, so I feel really good about what we are doing. I feel really good about how we are indemnifying customers. I feel really good about how we are doing custom models, where we allow a person in the media business or the CPG business to say, “We will upload our content to you, Adobe, and we will create a custom model that only we can use, with what we have rights for.” So, we have done a great job. I think other companies, to your point, are not completely transparent yet about what data they use and [if] they scrape the internet, and that will play out in the industry. But I like the approach that we’ve taken, and I like the way in which we’ve engaged with our community on this.

It’s an election year. There are a lot of concerns about misinformation and disinformation with AI. The AI systems hallucinate a lot. It’s just real. It’s the reality of the products that exist today. As the CEO of Adobe, is there a red line of capability that you won’t let your AI tools cross right now?

To your point, I think it’s something like 50 percent of the world’s population that is going to the polls over a 12-month period, including the US and other major democracies in the world. And so, we’ve been actively working with all these governments. For any piece of content that’s being created, how does somebody put their digital signature on it to show what the provenance of that content was? Where did it get created? Where did it get consumed? We’ve done an amazing job of partnering with so many companies in the camera space, in the distribution-of-content space, in the PC space, who all say we need to do it. We’ve also now, I think, made the shift to asking: how do you visually identify that there is this watermark or this digital signature about where the content came from?

I think the unsolved problem to some degree is how do you, as a society, get consumers to say, “I’m not going to trust any piece of content until I see that content credential”? We’ve had nutrition labels on food for a long time — this is the nutrition label on a piece of content. Not everybody reads the nutrition label before they eat whatever they’re eating, so I think it’s a similar thing, but I think we’ve done a good job of acting responsibly. We’ve done a great job of partnering with other people. The infrastructure is there. Now it’s the change management with society and people saying, “If I’m going to go see a piece of video, I want to know the provenance of that.” The technology exists. Will people want to do that? And I think that’s—

The thing everyone says about this idea is, well, Photoshop existed. You could have done this in Photoshop. What’s the difference? That’s you. You’ve been here through all these debates. I’m going to tell you what you are describing to me sounds a little bit naive. No one’s going to look at the picture of Mark Zuckerberg with the beard and say, “Where’s the nutrition label on that?” They’re going to say, “Look at this cool picture.” And then Zuck is going to lean into the meme and post a picture of his razor. That’s what’s happening. And that’s innocent. A bunch of extremely polarized voters in a superheated election cycle is not going to look at a nutrition label. It just doesn’t seem realistic. Are you saying that because it’s convenient to say, or do you just hope that we can get there?

I actually acknowledge that the last step in this process is getting the consumer to care — and getting the consumer to care [about] pieces of information that are important. To your point again, you had a couple of examples where some of them are in fun and in jest and everybody knows they’re in fun and jest and it doesn’t matter, whereas others are pieces of information. But there is precedent for this. When we all transacted business on the internet, we said we want to see that HTTPS. We want to know that my credit card information is being kept securely. And I agree with you. I think it’s an unsolved problem in terms of when consumers will care and what percentage of consumers will care. So, I think our job is the infrastructure, which we’ve done. Our job is educating, which we are doing. But there is a missing step in all of this. We are going into this with our eyes open, and if there are ideas that you have on what else we can do, we’re all ears.

Is there a red line for you where you’ve said, “We are not going to cross this line and enable this kind of feature”?

Photoshop has actually done a couple of things in the past. Creating currency, if you remember, was one place. I think pornography is another place. There are some things in terms of content where we have drawn the line. But that’s a judgment call, and we’ll keep iterating on that, and we’ll keep refining what we do.

Alright. Let’s talk about PDF. PDF is an open standard. You can make a PDF pretty much anywhere all the time. You’ve built a huge business around managing these documents. And the next turn of it is, as you described, “Let an AI summarize a bunch of documents, have an archive of documents that you can treat almost like a wiki, and pull a bunch of intelligence out of it.” The challenge is that the AI is hallucinating. The future of the PDF seems like training data for an AI. And the thing that makes that really happen is the AIs have to be rock-solid reliable. Do you think we’re there yet?

It’s getting better, but no. Even the fact that we use the word “hallucinate” — the incredible thing about technology right now is that we use these really creative words that become part of the lexicon. But I think we’ve been thoughtful in Acrobat about how we get customer value, and it’s different because when you’re doing a summary and you can point back to the links in the document from which that information was gleaned, there are ways in which you provide the right checks and balances. So, this is not about creation — you’re summarizing, you’re trying to provide insight, and you’re correlating it with other documents. It will get better, and it’ll get better through customer usage. But it’s a subset of the problem of all the hallucinations that we have in images. And so in PDF, while we’re doing fundamental research in all of that, the problems that we’re trying to solve immediately are summarization — being able to use that content and then create a presentation or use it in an email or use it in a campaign. And for those use cases, the technology is fairly advanced.

There’s a thing I think about all the time — an AI researcher told me this a few years ago. If you just pull the average document off the average website, the document is useless. It’s machine-generated. It’s a status update for an IoT sensor on top of a light pole. That is, statistically, the vast majority of all the documents on the internet. When you think about how much machine-generated documentation any business makes, AI amps that up. Now I’m having an AI write an email to you; you’re having an AI summarize the email for you. We might need to do a transaction or get a signature. My lawyer will auto-generate some AI-written form or contract. Your AI will read it and say it’s fine. Is there a point where the PDF just drops out of that because it really is just machines talking to each other to complete a transaction and the document isn’t important anymore?

Well, I think this is so nascent that we’ll have different kinds of experiences. I’ll push back first a little — the world’s information is in PDF. And so if we think about knowledge management of the universe as we know it today, I think the job that Adobe and our partners did to capture the world’s information and archive it [has] been a huge societal benefit that exists. So you’re right in that there are a lot of documents that are transient that perhaps don’t have that fundamental value. But I did want to say that societies and cultures are also represented in PDF documents. And that part is important. I think — to your other question associated with “where do you eliminate people even being part of a process and let your computer talk to my computer to figure out this deal” — you are going to see that for things that don’t matter, and judgment will always be about which ones of those matter. If I’m making a big financial investment, does that matter? If I’m just getting an NDA signed, does that matter? But you are going to see more automation I think in that particular respect. I think you’re right.

The PDF to me represents a classic paradigm of computing. We’re generating documents. We’re signing documents. There are documents. There are files and folders. You move into the mobile era, and the entire concept of a file system gets abstracted away. Maybe kids don’t even know what file systems are, but they still know what PDFs are. Then you make the next turn — and this is just to bring things back to where we started. You say AI is a paradigm shift, and now you’re just going to talk to a chatbot, and that is the interface for your computer, and we’ve abstracted one whole other set of things away. You don’t even know how the computer is getting the task done. It’s just happening. The computer might be using other computers on your behalf. Does that represent a new application model for you? I’ll give you an example: I think most desktop applications have moved to the web. That’s how we distribute many new applications. Photoshop and Premiere are the big stalwarts of big, heavy desktop applications at this point in time. Does the chatbot represent, “Okay, we need yet another new application model”?

I think you are going to see some fundamental innovation. The way I would answer that question is: first, abstracting the entire world’s information — it doesn’t matter whether it was in a file on your machine or somewhere on the internet — and being able to have access to it and, through search, find the information that you want. You’re absolutely right that the power of AI will allow all of this world’s information to come together in one massive repository that you can get insight from. I think there’s always going to be a role, though, for permanence in that. And I think the role of PDF in that permanence aspect of what you’re trying to share or store or do some action with or conduct business with — that role of permanence will also play an important part. And so I think we’re going to innovate in both those spaces: how do you allow the world’s information to appear as one big blob on which you can perform queries or do something interesting? But then how do you make it permanent, and what does that permanence look like, and what’s the application of that permanence — whether it’s for me alone or for a conversation that you and I had, which records that for posterity?

I think both of these will evolve. And it’s areas that — how does that document become intelligent? Instead of just having data, it has process and workflow associated with it. And I think there’s a power associated with that as well. I think we’ll push in both of these areas right now.

Do you think that happens on people’s desktops? Do you think it happens in cloud computing centers? Where does that happen?

Both, and on mobile devices. Look at a product like Lightroom. You talked about Denoise in Lightroom earlier. When Lightroom works exactly the same across all these surfaces, there’s a power in people saying, “Oh my God, it’s exactly the same.” So I think the boundaries of what’s on your personal computer and what’s on a mobile device and what’s in the cloud will certainly blur because you don’t want to be tethered to a device or a computer to get access to whatever you want. And we’ve already started to see that power, and I think it’ll increase because you can just describe it. It may not have that permanent structure that we talked about, but it’ll get created for you on the fly, which is, I think, really powerful.

Do you see any limits to desktop chip architectures where you’re saying, “Okay, we want to do inference at scale. We’re going to end up relying on a cloud more because inference at scale on a mobile device will make people’s phones explode”? Do you see any technical limitations?

It’s actually just the opposite. We had a great meeting with Qualcomm the other day, and we talk to Nvidia and AMD and Qualcomm. I think a lot of the training — that’s the focus that’s happening in the cloud. That’s the infrastructure. I think the inference is going to increasingly get offloaded. If you want a model for yourself based on your information, I think even today, with a billion parameters, there’s no reason why that just doesn’t get downloaded to your phone or downloaded to your PC. Because otherwise, all that compute power that we have in our hands or on our desktops is really not being used. I think the models are more nascent in terms of how you can download them and offload that processing. But that’s definitely going to happen without a doubt. In fact, it’s already happening, and we’re partnering with the companies that I talked about to figure out how that power of Photoshop can actually then be on your mobile device and on your desktop. But we’re a little early in that because we’re still trying to learn, and the models are still on the server.

I can’t think of a company that is more tied to the general valence of the GPU market than Adobe. Literally, the capabilities you ship have always been at the boundary of GPU capabilities. Now that market is constrained in different ways. Different people want to buy GPUs for vastly different reasons. Is that something you’re thinking about: how the GPU market will be shaped as the overwhelming financial pressure to optimize for training begins to alter the products themselves?

For the most part, people look at the product. I don’t know anybody who says, “I’ve got enough processing power,” or “I’ve got enough network bandwidth,” or “I’ve got enough storage space.” And so I think all of those will explode — you’re right. We tend to be a company that wants to exploit all of the above to deliver great value, but when you can have a conversation with [Nvidia CEO] Jensen [Huang] and talk about what they are doing and how they want to partner with us, I think that partnership is so valuable in times like this because they want this to happen.

Shantanu, I think we are out of time. Thank you so much for being on Decoder. Like I said, you were one of the first names I ever wrote down. I really appreciate you coming on.

Thanks for having me. Really enjoyed the conversation, Nilay.

Amazon’s robotaxi company is under investigation after two crashes with motorcyclists

Image: Tayfun Coskun / Anadolu Agency via Getty Images

US safety regulators are looking into two crashes involving Amazon’s robotaxi company, Zoox. The Office of Defects Investigation, under the US National Highway Traffic Safety Administration, opened a preliminary evaluation into Zoox after two separate reports of the vehicles suddenly braking, causing motorcyclists to crash into the rear of the vehicles.

NHTSA confirms that the Zoox vehicles were operating in driverless mode without safety drivers when the incidents occurred. The vehicles involved in both crashes were Toyota Highlander SUVs, which Zoox uses for testing and data gathering. According to the Office of Defects Investigation, the investigation covers an estimated 500 vehicles.

The crashes did not involve Zoox’s unique toaster-looking vehicles that lack traditional pedals and steering wheels — which were approved for testing on California roads in 2023. Those vehicles just started to appear on roads in March.

This isn’t Zoox’s first run-in with NHTSA. Last year, the agency investigated claims by the company that its driverless vehicle met federal safety standards without an exemption from the government.

On Instagram, a Jewelry Ad Draws Solicitations for Sex With a 5-Year-Old

Advertisers of merchandise for young girls find that adult men can become their unintended audience. In a test ad, convicted sex offenders inquired about a child model.

dimanche 12 mai 2024

The rise of the audio-only video game

The rise of the audio-only video game
Vector illustration showing the sound of innovative sensory games.
Image: Samar Haddad / The Verge

Not all video games need video. Over the years, games that exist only in audio have taken players into entirely new worlds in which there’s nothing to see and still everything to do. These games have huge accessibility implications, allowing people who can’t see to play an equally fun, equally immersive game with their other senses. And when all you have is sound, there’s actually even more you can do to make your game great.

On this episode of The Vergecast, we explore the history of audio-only games with Paul Bennun, who has been in this space longer than most. Years ago, Bennun and his team at Somethin’ Else made a series of games called Papa Sangre that were among the most innovative and most popular games of their kind. He explains what makes an audio game work, why the iPhone 4 was such a crucial technological achievement for these games, and more.

Bennun also makes the case that right now, even in this ultra-visual time, is the perfect moment for a rebirth of audio games. He points to AirPods and other spatial audio headphones along with devices like the Vision Pro, advances in location tracking, and improvements in multiplayer gaming as reasons to think that audio-first games could be a huge hit now. It even sounds a bit like Bennun might have a game in the works, but he won’t tell us about that.

If you want to know more about the topics we cover in this episode, here are a few links to get you started:

Patient Dies Weeks After Kidney Transplant From Genetically Modified Pig

Patient Dies Weeks After Kidney Transplant From Genetically Modified Pig

Richard Slayman received the historic procedure in March. The hospital said it had “no indication” his death was related to the transplant.

The new iPad Pro looks like a winner

The new iPad Pro looks like a winner
Images of Animal Well, the iPad Pro, and the Wordle archive, on top of the Installer logo.
Image: David Pierce / The Verge

Hi, friends! Welcome to Installer No. 37, your guide to the best and Verge-iest stuff in the world. (If you’re new here, welcome, send me links, and also, you can read all the old editions at the Installer homepage.)

This week, I’ve been writing about iPads and LinkedIn games, reading about auto shows and typewriters and treasure hunters, watching Everybody’s in LA and Sugar, looking for reasons to buy Yeti’s new French press even though I definitely don’t need more coffee gear, following almost all of Jerry Saltz’s favorite Instagram accounts, testing Capacities and Heptabase for all my note-taking needs and Plinky for all my link-saving, and playing a lot of Blind Drive.

I also have for you a thoroughly impressive new iPad, a clever new smart home hub, a Twitter documentary to watch this weekend, a sci-fi show to check out, a cheap streaming box, and much more. Let’s do it.

(As always, the best part of Installer is your ideas and tips. What are you reading / watching / cooking / playing / building right now? What should everyone else be into as well? Email me at installer@theverge.com or find me on Signal at @davidpierce.11. And if you know someone else who might enjoy Installer, tell them to subscribe here.)


The Drop

  • The new iPad Pro. The new Pro is easily the most impressive piece of hardware I’ve seen in a while. It’s so thin and light, and that OLED screen… gorgeous. It’s bonkers expensive, and the iPad’s big problem continues to be its software, but this is how you build a tablet, folks.
  • Animal Well. Our friends over at Polygon called this “one of the most inventive games of the last decade,” which is obviously high praise! By all accounts, it’s unusual, surprising, occasionally frustrating, very smart, and incredibly engaging. Even the trailer looks like nothing I’ve seen before. (I got a lot of recommendations for this one this week — thanks to everyone who sent it in!)
  • Final Cut Camera. This only got a quick mention at Apple’s event this week, but it’s kind of a huge deal! It’s a first-party, pro-level camera app for iPhones and iPads that gives you lots of manual control and editing features. It’s exactly what a lot of creatives have been asking for. No word yet on exactly when it’ll be available, but I’m excited.
  • The Aqara Hub M3. The only way to manage your smart home is to make sure your devices can support as many assistants, protocols, and platforms as possible. This seems like a way to do it: it’s a Matter-ready device that can handle just about any smart-home gear you throw at it.
  • “Battle of the Clipboard Managers.” I don’t think I’ve ever linked to a Reddit thread here, but check this one out: it’s a long discussion about why a clipboard manager is a useful tool, plus a bunch of good options to choose from. (I agree with all the folks who love Raycast, but there are a lot of choices and ideas here.)
  • Proton Pass. My ongoing No. 1 piece of technology advice is that everyone needs a password manager. I’m a longtime 1Password fan, but Proton’s app is starting to look tempting — this week, it got a new monitoring tool for security threats, in addition to all the smart email hiding and sharing features it already has.
  • The Onn 4K Pro. Basically all streaming boxes are ad-riddled, slow, and bad. This Google TV box from Walmart is at least also cheap, comes with voice control and support for all the specs you’d want, and works as a smart speaker. I love a customizable button, too.
  • Dark Matter. I’ve mostly loved all the Blake Crouch sci-fi books I’ve read, so I have high hopes for this Apple TV Plus series about life in a parallel universe. Apple TV Plus, by the way? Really good at the whole sci-fi thing.
  • The Wordle archive. More than 1,000 days of Wordle, all ready to be played and replayed (because, let’s be honest, who remembers Wordle from three weeks ago?). I don’t have access to the archive yet, but you better believe I’ll be playing it all the way through as soon as it’s out.
  • Black Twitter: A People’s History. Based on a really fun Wired series, this is a three-part deep dive Hulu doc about the ways Black Twitter took over social media and a tour of the internet’s experience of some of the biggest events of the last decade.

Screen share

Kylie Robison, The Verge’s new senior AI reporter, tweeted a video of her old iPhone the other day that was like a perfect time capsule of a device. She had approximately 90,000 games, including a bunch that I’m 100 percent sure were scams, and that iPod logo in her dock made me feel a lot of things. Those were good days.

I messaged Kylie in Slack roughly eight minutes after she became a Verge employee, hoping I could convince her to share her current homescreen — and what she’d been up to during her funemployment time ahead of starting with us.

Sadly, she says she tamed her homescreen chaos before starting, because something something professionalism, or whatever. And now she swears she can’t even find a screenshot of her old homescreen! SURE, KYLIE. Anyway, here’s Kylie’s newly functional homescreen, plus some info on the apps she uses and why.

Two screenshots of an iPhone homescreen.

The phone: iPhone 14 Pro Max.

The wallpaper: A black screen because I think it’s too noisy otherwise. (My lock screen is about 20 revolving photos, though.)

The apps: Apple Maps, Notes, Spotify, Messages, FaceTime, Safari, Phone.

I need calendar and weather apps right in front of me when I unlock my phone because I’m forgetful. I use Spotify for all things music and podcasts.

Work is life so I have all those apps front and center, too (Signal, Google Drive, Okta).

Just before starting, I reorganized my phone screen because 1) I had time and 2) I knew I’d have to show it off for David. All the apps are sorted into folders now, but before, they were completely free-range because I use the search bar to find apps; I rarely scroll around. So just imagine about 25 random apps filling up all the pages: Pegasus for some international flight I booked, a random stuffed bell pepper recipe, what have you.

I also asked Kylie to share a few things she’s into right now. Here’s what she shared:

  • Stardew Valley took over my life during my work break.
  • I actually started 3 Body Problem because of an old Installer. Also, I loved Fallout and need more episodes.
  • My serious guilty pleasure is Love Island UK, and I’ve been watching the latest season during my break.

Crowdsourced

Here’s what the Installer community is into this week. I want to know what you’re into right now as well! Email installer@theverge.com or hit me up on Signal — I’m @davidpierce.11 — with your recommendations for anything and everything, and we’ll feature some of our favorites here every week. And if you want even more recommendations, check out the replies to this post on Threads.

“I have always found Spotify’s recommendation algorithm and music channels to be terrible; wayyy too much fussing and tailoring required when all I want is to hit play and get a good diversity of music I will like. So I finally gave up and tried Pandora again. Its recommendation / station algorithm is so wildly better than Spotify’s (at least for me), it’s shocking how it has seemed to fade into cultural anonymity. Can’t speak for others, but if anyone out there is similarly frustrated with Spotify playlists, I highly recommend the Pandora option.” – Will

“Everything coming out of Netflix Is a Joke Fest has been 10/10.” – Mike

“Mantella mod for Skyrim (and Fallout 4). Not so much a single mod as a mod plus a collection of apps that gives (basically) every NPC their own lives and stories. It’s like suddenly being allowed to participate in the fun and games with Woody and Buzz, rather than them having to say the words when you pull the string.” – Jonathan

“The Snipd podcast app (whose primary selling point is AI transcription of podcasts and the ability to easily capture, manage, and export text snippets from podcasts) has a new feature that shows you a name, bio, and picture for podcast guests, and allows you to find more podcasts with the same guest or even follow specific guests. Pretty cool!” – Andy

“I have recently bought a new Kindle, and I’m trying to figure out how to get news on it! My current plan is to use Omnivore as my bookmarks app, which will sync with this awesome community tool that converts those bookmarks into a Kindle-friendly website.” – David

“Turtles All the Way Down! Great depiction of OCD.” – Saad

“With all the conversation around Delta on iOS, I have recently procured and am currently enamored with my Miyoo Mini Plus. It’s customizable and perfectly sized, and in my advanced years with no love for Fortnite, PUBG, or any of the myriad of online connected games, it’s lovely to go back and play some of these ‘legally obtained’ games that I played in my childhood.” – Benjamin

“Rusty’s Retirement is a great, mostly idle farm sim that sits at the bottom or the side of your monitor on both Mac and Windows. Rusty just goes and completes little tasks of his own accord while you work or do other stuff. It rocks. Look at him go!” – Brendon

“Last week, Nicholas talked about YACReader and was asking for another great comic e-reader app for DRM-free files. After much searching myself, I settled on Panels for iPad. Great Apple-native UI, thoughtful features, and decent performance. The free version can handle a local library, but to unlock its full potential, the Pro version (sub or lifetime) supports iCloud, so you can keep all your comics in iCloud Drive, manage the files via a Mac, and only download what you’re currently reading — great for lower-end iPads with less storage.” – Diogo


Signing off

I have spent so much time over the years trying to both figure out and explain to people the basics of a camera. There are a billion metaphors for ISO, shutter speed, and aperture, and all of them fall short. That’s probably why a lot of the photographer types I know have been passing around this very fun depth of field simulator over the last few days, which lets you play with aperture, focal length, sensor size, and more in order to understand how different settings change the way you take photos. It’s a really clever, simple way to see how it all works — and to understand what becomes possible when you really start to control your camera. I’ll be sharing this link a lot, I suspect, and I’m learning a lot from it, too.
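If you’d rather poke at the same ideas in code, the simulator’s core is just the textbook thin-lens depth-of-field formulas. Here’s a small Python sketch of them; the circle-of-confusion value is a common full-frame assumption, not something taken from the simulator itself.

```python
# Sketch of the standard depth-of-field formulas a simulator like this implements.
# All distances are in millimeters; c (circle of confusion) of ~0.03mm is the
# usual full-frame assumption -- smaller sensors use smaller values.

def depth_of_field(focal_mm: float, f_number: float, subject_mm: float, c: float = 0.03):
    """Return the (near, far) limits of acceptable focus, in millimeters."""
    hyperfocal = focal_mm ** 2 / (f_number * c) + focal_mm
    near = subject_mm * (hyperfocal - focal_mm) / (hyperfocal + subject_mm - 2 * focal_mm)
    if subject_mm < hyperfocal:
        far = subject_mm * (hyperfocal - focal_mm) / (hyperfocal - subject_mm)
    else:
        far = float("inf")  # focused past the hyperfocal distance: sharp to infinity
    return near, far

# A 50mm lens focused 2m away: at f/1.8 the zone of sharpness is ~17cm deep...
print(depth_of_field(50, 1.8, 2000))
# ...while at f/16 it stretches to well over a meter.
print(depth_of_field(50, 16, 2000))
```

Swap in different focal lengths and f-numbers and you can watch the near and far limits move exactly the way the simulator shows.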

See you next week!

Elon Musk’s Diplomacy: Woo Right-Wing World Leaders. Then Benefit.

Elon Musk’s Diplomacy: Woo Right-Wing World Leaders. Then Benefit.

Mr. Musk has built a constellation of like-minded heads of state — including Argentina’s Javier Milei and India’s Narendra Modi — to push his own politics and expand his business empire.

samedi 11 mai 2024

Google I/O 2024 will be all about AI again

Google I/O 2024 will be all about AI again
Google I/O logo.
Image: Google

Google is preparing to hold its annual Google I/O developer conference next week, and naturally, it will be all about AI. The company has made no secret of that. Since last year’s I/O, it has debuted Gemini, its new, more powerful model meant to compete with OpenAI’s ChatGPT, and has been deep in testing new features for Search, Google Maps, and Android. Expect to hear a lot about that stuff this year.

When Google I/O will happen and where you can watch

Google I/O kicks off on Tuesday, May 14th at 10AM PT / 1PM ET with a keynote talk. You can catch that on Google’s site or its YouTube channel, via the livestream link that’s also embedded at the top of this page. (There’s also a version with an American Sign Language interpreter.) Set a good amount of time aside for that; I/O tends to go on for a couple of hours.

AI at I/O

Google has been clear: I/O this year will be all about AI. Gemini has been out in the world — not without some controversy — for a few months now, as has the company’s smaller Gemma model. A lot of the keynote will probably cover how Google is fusing Search and generative AI. The company has been testing new search features like AI conversation practice for English language learners, as well as image generation for shopping and virtual try-on.

Google will probably also focus on ways it plans to turn your smartphone into more of an AI gadget. That means more generative AI features for Google’s apps. It’s been working on AI features that help with dining and shopping or finding EV chargers in Google Maps, for instance. Google is also testing a feature that uses AI to call a business and wait on hold for you until there’s actually a human being available to talk to.

The Pixel as an AI gadget

I/O could also see the debut of a new, more personal version of Google’s digital assistant, rumored to be called “Pixie.” The Gemini-powered assistant is expected to integrate multimodal features like the ability to take pictures of objects to learn how to use them or get directions to a place to buy them.

That kind of thing could be bad news for devices like the Rabbit R1 and the Humane AI Pin, which each recently launched and struggled to justify their existence. At the moment, the only advantage they maybe sort of have is that it’s kind of hard (though not impossible) to pull off using a smartphone as an AI wearable.

A render of Google’s Pixel 9 smartphone.
Leaked image of the Pixel 9. | Image: OnLeaks / 91Mobiles

Will there be hardware at I/O?

It seems unlikely that Google will focus much on new hardware this year, given that the Pixel 8A is already available for preorder and you can now buy a relaunched, cheaper Pixel Tablet, unchanged apart from the fact that the magnetic speaker dock is now a separate purchase. The company could still tease new products like the Pixel 9 — which, in typical Google fashion, is already leaking all over the place — and the Pixel Tablet 2, of course.

The search giant could also talk about its follow-up to the Pixel Fold, which is rumored to get a mouthful of a rebrand to the Pixel 9 Pro Fold.

Android in the time of AI

Android in the time of AI
Android logo on a green and blue background
Google’s annual developer conference kicks off on Tuesday. | Illustration by Alex Castro / The Verge

The past few months have made one thing crystal clear: phones remain undefeated.

The AI gadgets that were supposed to save us from our phones have arrived woefully underbaked — whatever illusions we might have held that the Humane AI Pin or the Rabbit R1 were going to offer any kind of salve for the constant rug burn of dealing with our personal tech are gone. Hot Gadget Spring is over, and developer season is upon us, starting with Google I/O this coming Tuesday.

It also happens to be a pivotal time for Android. I/O comes on the heels of a major re-org that put the Android team together with Google’s hardware team for the first time. The directive is clear: to run full speed ahead and put more AI in more things. Not preferring Google’s own products was a foundational principle of Android, though that model started shifting years ago as hardware and software teams collaborated more closely. Now, the wall is gone and the AI era is here. And if the past 12 months have been any indication, it’s going to be a little messy.

Google Pixel 8 on a pink background showing pink mineral home screen wallpaper Photo by Allison Johnson / The Verge
The Pixel 8 uses Google’s AI-forward Tensor chipset, but its AI tricks don’t amount to a cohesive vision.

So far, despite Samsung and Google’s best efforts, AI on smartphones has really only amounted to a handful of party tricks. You can turn a picture of a lamp into a different lamp, summarize meeting notes with varying degrees of success, and circle something on your screen to search for it. Handy, sure, but far from a cohesive vision of our AI future. But Android has the key to one important door that could bring more of these features together: Gemini.

Gemini launched as an AI-fueled alternative to the standard Google Assistant a little over three months ago, and it didn’t feel quite ready yet. On day one, it couldn’t access your calendar or set a reminder — not super helpful. Google has added those functions since then, but it still doesn’t support third-party media apps like Spotify. Google Assistant has supported Spotify for most of the last decade.

But the more I come back to Gemini, the more I can see how it’s going to change how I use my phone. It can memorize a dinner recipe and talk me through the steps as I’m cooking. It can understand when I’m asking the wrong question and give me the answer to the one I’m looking for instead (figs are the fruit that have dead wasp parts in them; not dates, as I learned). It can tell me which Paw Patrol toy I’m holding, for Pete’s sake.

Phone in hand showing Google Gemini welcome screen. Photo by Amelia Holowaty Krales / The Verge
Gemini debuted a few months ago missing some key features, but Google has filled in some of the gaps since then.

Again, though — party tricks. Gemini’s real utility will arrive when it can integrate more easily across the Android ecosystem: when it’s built into your earbuds, your watch, and the operating system itself.

Android’s success in the AI era rides on those integrations. ChatGPT can’t read your emails or your calendars as readily as Gemini; it doesn’t have easy access to a history of every place you’ve visited in the past decade. Those are real advantages, and Google needs every advantage right now. We’ve seen plenty of signals that Apple plans to unveil a much smarter Siri at WWDC this year. Microsoft and OpenAI aren’t sitting still either. Google needs to lean into its advantages to deliver AI that’s more than a party trick — even if it’s a little un-Android-like.

The Garmin Lily 2 was the tracker I needed on vacation

The Garmin Lily 2 was the tracker I needed on vacation
close up of lilac Garmin Lily 2 Sport on a colorful background
The Lily 2 is a small, unassuming tracker that suits casual users. | Photo by Amelia Holowaty Krales / The Verge

Its limitations made it fall short in daily life but ended up being a plus when I was trying to disconnect from the world.

On my last day of vacation, I sat on a pristine beach, sipping on a piña colada while staring at a turquoise Caribbean Sea. In four days, I’d charged my Apple Watch Ultra 2 three times, and I was down to about 30 percent. On the other wrist, I had the more modest $249.99 Garmin Lily 2 Sport. It was at about 15 percent, but I hadn’t charged it once. Actually, I’d left the cable hundreds of miles away at home. While pondering this, the Ultra 2 started buzzing. My phone may have been buried under towels and sunscreen bottles at the bottom of a beach bag, but Peloton was having a bad earnings day. The way that watch is set up, there was no way it would let me forget. The Lily 2 also buzzed every now and then. The difference was that reading notifications on it was too bothersome and, therefore, easy to ignore.

That tiny slice in time sums up everything that makes the Lily 2 great — and perhaps not so great.

Close up of the Garmin Lily 2 looking for GPS
The hidden screen is a bit dim in direct sunlight and doesn’t fit a ton of information on it.

My 10 days with the Lily 2 were split into two dramatically different weeks. The first was a chaotic hell spent zipping here and there to get 10,000 things done before vacation. The second, I did my very best to be an untroubled beach potato. That first week, I found the Lily 2 to be cute and comfortable but lacking for my particular needs. On vacation, its limitations meant it was exactly the kind of wearable I needed.

I wasn’t surprised by that. The Lily 2 is not meant to be a mini wrist computer that can occasionally sub in for your phone. It’s meant to look chic, tell you the time, and hey, here’s some basic notifications and fitness tracking. That’s ideal for casual users — the kind of folks who loved fitness bands and Fitbits before Google started mucking around with the formula.

The main thing with the Lily 2 is you have to accept that it’s going to look nice on your wrist but be a little finicky to actually use. The original Lily’s display didn’t register swipes or taps that well. It’s improved a smidge with the Lily 2, but just a smidge. I found reading notifications, navigating through menus, and just doing most things on the watch itself to be nowhere near as convenient as on a more full-fledged touchscreen smartwatch. This extra friction is a big reason why the Lily 2 just didn’t fit my needs in daily life.

As a fitness tracker, the Lily 2 is middling. The main additions this time around are better sleep tracking and a few more activity types, like HIIT, indoor rowing and walking, and meditation. There are also new dance fitness profiles with various subgenres, like Zumba, afrobeat, jazz, line dancing, etc. That said, the Lily 2 isn’t great for monitoring your data mid-workout. Again, fiddly swipes and a small screen add too much friction for that.

I also wouldn’t recommend trying to train for a marathon with the Lily 2. Since it uses your phone’s GPS, my results with outdoor runs were a mixed bag. One four-mile run was recorded as 4.01 miles. Great! Another two-mile run was logged as 2.4 miles. Less great. It’s a tracker best suited to an active life, but not one where the details really matter. Case in point: it was great for tracking my general activity splashing around and floating in the ocean — but it’s not really the tracker I’d reach for if I were trying to count laps in the pool.

At 35mm, it’s a skosh bigger than the original Lily but much smaller than just about every other smartwatch on the market. It’s lighter than most at 24.4g, too. That makes this a supremely comfortable lil watch. Most days, I forgot I was wearing it.

While I’m no fashionista, I didn’t feel like my lilac review unit was hard to slot into my daily wardrobe. But if playful colors aren’t your thing, the Classic version is $30-50 more and has a more elegant feel, a more muted color palette, and nylon / leather straps. (It also adds contactless payments.)

As a woman with a small wrist, the 35mm size is a plus. But while I personally don’t think the Lily 2 has to be a women’s watch, it is undeniably dainty. If you want something with a more neutral vibe or a slightly bigger size, Garmin has the Vivomove Trend or Vivomove Sport. Withings’ ScanWatch 2 or ScanWatch Light are also compelling options.

View of the Garmin Lily 2’s sensor array
The sensor array uses the last-gen Garmin optical heart rate sensor, but that’s fine on a casual tracker.

Ultimately, the Lily 2 is great for folks who want to be more active while trying to cut down on notifications. It’s also a great alternative if you miss the old Misfits, Jawbones, or Fitbit Alta HR. Deep down, I wish that were me, but the reality is I have too much gadget FOMO and care way too much about my running data. That said, the next time I go on vacation — or feel the urge to disconnect — I think I’ll reach for the Lily 2 and try to leave the rest of my life at home.

vendredi 10 mai 2024

Gaze upon Dell’s leaked Qualcomm X Elite-powered laptops

Gaze upon Dell’s leaked Qualcomm X Elite-powered laptops
dell laptops floating back to back and opened showing slimness, green wallpaper screen with balloons, there's a copilot key on the bottom keyboard row
That’s a skinny XPS. | Image: Windows Report

Want a look at Dell’s new notebook lineup, apparently powered by Qualcomm’s upcoming Snapdragon X Elite processors? These leaked images, courtesy of Windows Report, come ahead of Microsoft’s May 20th event, where new Surfaces (and other laptops) are expected to sport the same chips.

They unsurprisingly look like laptops — albeit with overall slimmer profiles.

The most interesting model is Dell’s new XPS 13 9345, which seems to be a sleeker rebirth of the XPS 13 Plus from 2022. It’s got the same touchy touch-bar on the top row and comes with only two USB-C ports for I/O.

Dell XPS 9345 | Image: Windows Report
Dell Inspiron 14 7441 Plus | Image: Windows Report

There’s also a leaked new Inspiron 14 7441 Plus that’s reportedly equipped with a 16-core Snapdragon X Elite and 16GB of base RAM. Inspirons are Dell’s everyman PCs, not as sleek as the XPS lineup, although this one looks like it has slimmed down, too; it seems to come with two USB-C ports, one USB-A port, and a microSD card slot.

Dell revealed a new XPS lineup in January that introduced keyboards bearing Microsoft’s new Copilot key on the bottom row — and it looks like these leaked models have it, too. Dell, HP, and Lenovo have all partnered with Microsoft to release notebooks supporting Windows 11 AI features, and these leaked Dell laptops apparently have Microsoft’s upcoming “AI Explorer” features out of the box.

They’re among the first Snapdragon X laptops we’ve seen leak out — the other is a Lenovo Yoga Slim 7 that leaker WalkingCat unearthed.

A Samsung Galaxy Book 4 Edge is expected as well, and Asus seems to have a Qualcomm laptop coming too.

Qualcomm’s Snapdragon X series chips are due to appear in laptops this summer; they’re the chipmaker’s big bet to challenge Apple silicon, Intel, and AMD on performance.

How Airlines Are Using AI to Make Flying Easier

How Airlines Are Using AI to Make Flying Easier

Airlines are using artificial intelligence to save fuel, keep customers informed and hold connecting flights for delayed passengers. Here’s what to expect.

jeudi 9 mai 2024

Microsoft’s new Xbox mobile gaming store is launching in July

Microsoft’s new Xbox mobile gaming store is launching in July
Xbox logo illustration
Illustration by Alex Castro / The Verge

Microsoft has been talking about plans for an Xbox mobile gaming store for a couple of years now, and the company now plans to launch it in July. Speaking at the Bloomberg Technology Summit earlier today, Xbox president Sarah Bond revealed the launch date and how Microsoft is going to avoid Apple’s strict App Store rules.

“We’re going to start by bringing our own first-party portfolio to [the Xbox mobile store], so you’re going to see games like Candy Crush show up in that experience, games like Minecraft,” says Bond. “We’re going to start on the web, and we’re doing that because that really allows us to have it be an experience that’s accessible across all devices, all countries, no matter what and independent of the policies of closed ecosystem stores.”

The store will be focused on first-party mobile games from Microsoft’s various studios, which include huge hits like Call of Duty: Mobile and Candy Crush Saga. Bond says the company will extend this to partners at some point in the future, too.

While games will naturally be part of the store, it sounds like the key parts of the Xbox experience will also be available. Bond argues there isn’t a gaming platform and store experience that “goes truly across devices — where who you are, your library, your identity, your rewards travel with you versus being locked to a single ecosystem.” So Microsoft is trying to build that with its Xbox mobile store.

Microsoft had also been building this store in anticipation of companies like Apple and Google being forced to open up their mobile app stores, but it’s clear the software giant isn’t willing to wait on the Digital Markets Act to shake out in Europe or any potential action in the US.

A web-only mobile store will be challenging to pull off, and it’s not immediately clear how Microsoft will position it as an alternative when these mobile games already exist on rival app stores. Bond says Microsoft will “extend” beyond the web, hinting that it could eventually launch a true rival to Google’s and Apple’s mobile app stores at some point soon.

Microsoft first hinted at a “next-generation store” in early 2022, just a month after the company announced its Activision Blizzard acquisition. “We want to be in a position to offer Xbox and content from both us and our third-party partners across any screen where somebody would want to play,” said Microsoft Gaming CEO Phil Spencer in an interview with the Financial Times last year. “Today, we can’t do that on mobile devices but we want to build towards a world that we think will be coming where those devices are opened up.”

Verizon and T-Mobile are trying to gobble up US Cellular

Verizon and T-Mobile are trying to gobble up US Cellular
AT&T, Verizon, T-Mobile Users Report Cellular Outages Nationwide
Photo by Kena Betancur/VIEWpress

Now that they’ve got an extra $100 billion worth of premium airwaves and no longer have Sprint nipping at their heels, how can the big three cellular carriers continue to consolidate and grow? Well, T-Mobile and Verizon “are in discussions to carve up U.S. Cellular,” The Wall Street Journal reports.

The report suggests this is about harvesting even more wireless spectrum; my colleague Allison pointed out in 2022 that US Cellular “tends to offer service where some of the major carriers don’t.” (It would certainly be nice for T-Mobile and Verizon customers to have better coverage, but I would prefer competition to lower my wireless bill.)

T-Mobile would reportedly pay over $2 billion for wireless spectrum licenses and take over “some operations”; the WSJ doesn’t say what Verizon wants, but notes that US Cellular “also owns more than 4,000 cellular towers that weren’t part of the latest sale talks.”

The idea behind splitting up US Cellular between T-Mobile and Verizon, the WSJ suggests, is to keep antitrust regulators from blocking the deal. Regulators wound up letting T-Mobile merge with Sprint after promises that it would turn Dish into a new fourth major US cellular carrier, but last we checked, Dish had yet to become a meaningful competitor.

mercredi 8 mai 2024

Apple’s New iPad Ad Leaves Its Creative Audience Feeling … Flat

Apple’s New iPad Ad Leaves Its Creative Audience Feeling … Flat

An ad meant to show how the updated device can do many things has become a metaphor for a community’s fears of the technology industry.

Microsoft’s ‘air gapped’ AI is a bot set up to process top-secret info

Microsoft’s ‘air gapped’ AI is a bot set up to process top-secret info
Illustration of a robot brain.
Image: The Verge

Microsoft Strategic Missions and Technology CTO William Chappell announced that the company has deployed a GPT-4 large language model in an isolated, air-gapped environment on a government-only network. Bloomberg first reported the setup, citing an unnamed executive who claimed that the Azure Government Top Secret cloud-hosted model represents the first time a “major” LLM has operated separated from the internet.

Chappell announced the AI supercomputer on Tuesday afternoon at the “first-ever AI Expo for National Competitiveness” in Washington D.C. Unlike the models behind ChatGPT or other tools, Microsoft says this server is “static,” operating without learning from the files it processes or the wider internet.

Chappell told Bloomberg, “It is now deployed, it’s live, it’s answering questions, it will write code as an example of the type of thing it’ll do.” As Chappell mentioned to DefenseScoop, it has not been accredited for top-secret use, so the Pentagon and other government departments aren’t actually using it yet, whether that’s processing data for a particular mission or something like HR.

Disney, Hulu and Max Streaming Bundle Will Soon Become Available

Disney, Hulu and Max Streaming Bundle Will Soon Become Available

The offering from Disney and Warner Bros. Discovery shows how rival companies are willing to work together to navigate an uncertain entertainment landscape.

Biden to Announce A.I. Center in Wisconsin as Part of Economic Agenda

Biden to Announce A.I. Center in Wisconsin as Part of Economic Agenda

The president’s visit will highlight the investment by Microsoft and point to a failed Foxconn project negotiated by Donald J. Trump.

Artificially Intelligent Help for Planning Your Summer Vacation

Artificially Intelligent Help for Planning Your Summer Vacation

Travel-focused A.I. bots and more eco-friendly transportation options in online maps and search tools can help you quickly organize your seasonal getaway.

mardi 7 mai 2024

The US is propping up gas while the world moves to renewable energy

The US is propping up gas while the world moves to renewable energy
Solar panels in the forefront with windmills behind them under a blue sky on a sunny day.
Solar panels point to the sky at the Weesow-Wilmersdorf solar park on May 3rd, 2024, near Grischow, Germany.  | Photo by Maja Hitij / Getty Images

Electricity generation from fossil fuel-fired power plants, and the greenhouse gas emissions that come with it, likely peaked in 2023, according to the annual global electricity review by energy think tank Ember. That means human civilization has likely passed a key turning point, according to Ember: countries will likely never generate as much electricity from fossil fuels again.

A record 30 percent of electricity globally came from renewable sources of energy last year thanks primarily to growth in solar and wind power. Starting this year, pollution from the power sector is likely to start dropping, with a 2 percent drop in the amount of fossil fuel-powered electricity projected for 2024 — a decline Ember expects to speed up in the long term.

“The decline of power sector emissions is now inevitable. 2023 was likely the pivot point – a major turning point in the history of energy,” Dave Jones, Ember’s insights director, said in an emailed statement. “But the pace ... depends on how fast the renewables revolution continues.”

It’s a transition that could be happening much faster if not for the US, which is already the world’s biggest gas producer, using record amounts of gas last year. Without the US, Ember finds, electricity generation from gas would have fallen globally in 2023. Global economies excluding the US managed to generate 62 terawatt hours less gas-powered electricity last year compared to the year prior. But the US ramped up its electricity generation from gas by nearly twice that amount in the same timeframe, an additional 115TWh from gas in 2023.
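A quick back-of-envelope check, using only the figures Ember reports, shows how the US swing dominates the global number:

```python
# Back-of-envelope check using Ember's reported figures (TWh, 2023 vs. 2022).
rest_of_world_change = -62  # gas-fired generation fell 62 TWh outside the US
us_change = +115            # the US added roughly 115 TWh of gas-fired generation

global_change = rest_of_world_change + us_change
print(f"Net global change in gas-fired electricity: {global_change:+d} TWh")
# => +53 TWh: gas generation rose globally, entirely because of the US.
```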

A big part of the problem is that the US is replacing a majority of aging power plants that run on coal, the dirtiest fossil fuel, with gas-fired plants instead of carbon pollution-free alternatives. “The US is switching one fossil fuel for another,” Jones said. “After two decades of building such a heavy reliance on gas power, the US has a big journey ahead to get to a truly clean power system.”

The US gets just 23 percent of its electricity from renewable energy, according to Ember, falling below the global average of 30 percent.

President Joe Biden set a goal of reaching 100 percent carbon pollution-free electricity by 2035 and signed into law the nation’s largest investment in clean energy and climate to date with the Inflation Reduction Act. But the administration’s ability to mandate a transition to cleaner energy is limited after the Supreme Court decided in 2022 that the Environmental Protection Agency shouldn’t be allowed to determine how the US generates its electricity. Since then, the EPA’s long-awaited rules for greenhouse gas emissions from power plants have leaned on getting energy companies to capture carbon dioxide emissions from burning fossil fuels.

Fortunately, renewables have become remarkably affordable, with solar now considered the cheapest source of electricity in history and the fastest-growing power source for 19 years in a row.

“Last century’s outdated technologies can no longer compete with the exponential innovations and declining cost curves in renewable energy and storage,” Christiana Figueres, former executive secretary of the United Nations Framework Convention on Climate Change, said in an emailed statement.

Ember’s report tracks closely with other predictions from the International Energy Agency (IEA), which called a transition to clean energy “unstoppable” in October. The IEA forecast a peak in global demand for coal, gas, and oil this decade (for all energy use, not just electricity). It also projected that renewables would make up nearly 50 percent of the world’s electricity mix by 2030.

Ember is a little more optimistic after more than 130 countries pledged to triple renewable energy capacity by 2030 during a United Nations climate summit in December. With that progress, renewable electricity globally would reach 60 percent by the end of the decade compared to less than 20 percent in 2000.

lundi 6 mai 2024

Google’s AI plans now include cybersecurity

Google’s AI plans now include cybersecurity
Vector illustration of the Google Gemini logo.
Illustration: The Verge

As people try to find uses for generative AI that are less about making fake photos and more about being actually useful, Google plans to point AI at cybersecurity and make threat reports easier to read.

In a blog post, Google writes that its new cybersecurity product, Google Threat Intelligence, will bring together the work of its Mandiant cybersecurity unit and VirusTotal threat intelligence with the Gemini AI model.

The new product uses the Gemini 1.5 Pro large language model, which Google says reduces the time needed to reverse engineer malware attacks. The company claims Gemini 1.5 Pro, released in February, took only 34 seconds to analyze the code of the WannaCry virus — the 2017 ransomware attack that hobbled hospitals, companies, and other organizations around the world — and identify a kill switch. That’s impressive but not surprising, given LLMs’ knack for reading and writing code.
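For context, WannaCry’s kill switch was famously simple: the worm tried to reach a hardcoded, unregistered domain and shut itself down if the request succeeded, which is why registering the domain neutralized it. Here’s a minimal Python sketch of that pattern, with a placeholder domain rather than the real one:

```python
# Sketch of a WannaCry-style kill switch: the malware checks a hardcoded domain
# and stands down if it resolves. Registering the domain (as researcher Marcus
# Hutchins did in 2017) therefore stopped the worm from spreading further.
# The domain below is a placeholder, not WannaCry's actual kill-switch URL.
import socket

KILL_SWITCH_DOMAIN = "example-kill-switch-domain.invalid"

def kill_switch_tripped() -> bool:
    try:
        socket.gethostbyname(KILL_SWITCH_DOMAIN)
        return True   # domain resolves: someone registered it, stop running
    except socket.gaierror:
        return False  # domain doesn't resolve: carry on

if kill_switch_tripped():
    raise SystemExit("kill switch active")
```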

But another possible use for Gemini in the threat space is summarizing threat reports into natural language inside Threat Intelligence so companies can assess how potential attacks may impact them — or, in other words, so companies don’t overreact or underreact to threats.

Google says Threat Intelligence also has a vast network of information to monitor potential threats before an attack happens. It lets users see a larger picture of the cybersecurity landscape and prioritize what to focus on. Mandiant provides human experts who monitor potentially malicious groups and consultants who work with companies to block attacks. VirusTotal’s community also regularly posts threat indicators.

Google bought Mandiant, the cybersecurity company that uncovered the 2020 SolarWinds cyber attack against the US federal government, in 2022.

The company also plans to use Mandiant’s experts to assess security vulnerabilities around AI projects. Through Google’s Secure AI Framework, Mandiant will test the defenses of AI models and help in red-teaming efforts. While AI models can help summarize threats and reverse engineer malware attacks, the models themselves can sometimes become prey to malicious actors. These threats include “data poisoning,” in which attackers plant corrupted content in the data AI models train on or scrape, skewing how the models respond to specific prompts.

Google, of course, is not the only company melding AI with cybersecurity. Microsoft launched Copilot for Security, powered by GPT-4 and Microsoft’s cybersecurity-specific AI model, which lets cybersecurity professionals ask questions about threats. Whether either is genuinely a good use case for generative AI remains to be seen, but it’s nice to see it used for something besides pictures of a swaggy Pope.

Wayve, an A.I. Start-Up for Autonomous Driving, Raises $1 Billion

Wayve, an A.I. Start-Up for Autonomous Driving, Raises $1 Billion

The London-based developer of artificial intelligence systems for self-driving vehicles raised the funding from SoftBank, Nvidia, Microsoft and others.

Robinhood’s crypto arm receives SEC warning over alleged securities violations

Robinhood’s crypto arm receives SEC warning over alleged securities violations
An image showing the Robinhood logo on a red and black background
Illustration by Alex Castro / The Verge

Robinhood’s cryptocurrency division could soon be in trouble with the Securities and Exchange Commission. In an 8-K filing submitted on Saturday, Robinhood revealed that it received a Wells notice from the SEC’s staff recommending the agency take action against the trading platform for alleged securities violations.

Robinhood says it received the Wells notice after cooperating with the SEC’s requests for investigative subpoenas about its crypto listings, custody of cryptocurrencies, and the platform’s operations. A Wells notice is a letter from the SEC that warns a company of a potential enforcement action. The SEC’s response could include an injunction, a cease-and-desist order, disgorgement, limits on activities, and / or civil penalties.

Coinbase similarly received a Wells notice just months before the SEC sued it for breaking securities law. The SEC also sued Binance on similar grounds, with the trading platform’s former CEO, Changpeng Zhao, now facing four months in prison.

“We firmly believe that the assets listed on our platform are not securities and we look forward to engaging with the SEC to make clear just how weak any case against Robinhood Crypto would be on both the facts and the law,” Dan Gallagher, Robinhood’s chief legal, compliance, and corporate affairs officer, said in a statement.

Robinhood says it already made the “difficult choice” to delist certain tokens — including Solana, Polygon, and Cardano — in response to the SEC’s lawsuits against other trading platforms. In the past, the SEC has argued that some cryptocurrencies are considered securities, which would require exchanges to register with the SEC. This would give the agency regulatory control over the exchanges and the registered tokens.

Robinhood could face a long legal battle if it chooses to fight the SEC’s potential enforcement action. The company’s shares have already dipped in response to the news.

US to fund digital twin research in semiconductors

US to fund digital twin research in semiconductors
Illustrations of a grid of processors seen at an angle with the middle one flipped over to show the pins and the rest shrouded in a green aura
Illustration by Alex Castro / The Verge

The Biden administration wants to attract companies working on digital twins for semiconductors, using funding from the $280 billion CHIPS and Science Act and the creation of a chip manufacturing institute.

The CHIPS Manufacturing USA institute aims to establish regional networks to share resources with companies developing and manufacturing both physical semiconductors and digital twins.

Digital twins, virtual representations of physical chips that mimic the real version, make it easier to simulate how a chip might react to a boost in power or a different data configuration. This helps researchers test out new processors before putting them into production.
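As a toy illustration of the concept: a digital twin is a model you can stress harder than the physical part. This hedged Python sketch simulates a hypothetical chip’s steady-state temperature as power is boosted; the thermal constants are made up for illustration, not drawn from any real processor.

```python
# Toy digital-twin sketch: a deliberately simple thermal model of a chip, used
# to ask "what happens at higher power?" before touching real silicon.
# All constants are made-up illustrative values for a hypothetical part.
AMBIENT_C = 25.0        # ambient temperature, deg C
THETA_JA = 0.4          # junction-to-ambient thermal resistance, deg C per watt
MAX_JUNCTION_C = 105.0  # assumed thermal limit

def steady_state_temp(power_w: float) -> float:
    """Steady-state junction temperature for a given power draw."""
    return AMBIENT_C + THETA_JA * power_w

# Sweep a power boost and see where the twin predicts trouble.
for power in (50, 100, 150, 200, 250):
    temp = steady_state_temp(power)
    status = "OK" if temp <= MAX_JUNCTION_C else "over limit"
    print(f"{power:3d} W -> {temp:5.1f} C ({status})")
```

A production digital twin models far more (timing, signal integrity, process variation), but the workflow is the same: change an input, rerun the model, and catch problems before fabrication.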

“Digital twin technology can help to spark innovation in research, development, and manufacturing of semiconductors across the country — but only if we invest in America’s understanding and ability of this new technology,” Commerce Secretary Gina Raimondo says in a press release.

Digital twin research has shown that the technology can integrate with other emerging technologies, like generative AI, to accelerate simulation and further study of new semiconductor concepts.

Biden administration officials say they will hold briefings with interested parties this month to talk about the funding opportunities. The government will fund the institute’s operational activities, research around digital twins, physical and digital facilities like access to cloud environments, and workforce training.

The CHIPS Act passed in 2022 to boost semiconductor manufacturing in the country, but the program has struggled to keep up with demand for capital. Raimondo previously said manufacturers requested more than $70 billion in grants, far more than the $28 billion the government budgeted for investments.

So far, companies like Intel and Micron are set to receive funding from the US government through the CHIPS Act. Part of the Biden administration’s goal with the CHIPS Act is to encourage semiconductor companies to build new types of processors in the US, especially now that demand for high-powered chips has grown thanks to the AI boom.

Tensions Rise in Silicon Valley Over Sales of Start-Up Stocks

Tensions Rise in Silicon Valley Over Sales of Start-Up Stocks

The market for shares of hot start-ups like SpaceX and Stripe is projected to reach a record $64 billion this year.

dimanche 5 mai 2024

Randy Travis gets his voice back in a new Warner AI music experiment

Randy Travis gets his voice back in a new Warner AI music experiment
Randy Travis singing at Cheyenne Frontier Days
Randy Travis in 1987. | Photo: Mark Junge / Getty Images

For the first time since a 2013 stroke left country singer Randy Travis unable to speak or sing properly, he has released a new song. He didn’t sing it, though; instead, the vocals were created with AI software and a surrogate singer.

The song, called “Where That Came From,” is every bit the kind of folksy, sentimental tune I came to love as a kid when Travis was at the height of his fame. The producers created it by training an unnamed AI model on 42 of Travis’ isolated vocal recordings. Then, under the supervision of Travis and his career-long producer Kyle Lehning, fellow country singer James DuPre laid down vocals for the AI to transform into Travis’ voice.

Besides being on YouTube, the song is on other streaming platforms like Apple Music and Spotify.

The result of Warner’s experiment is a gentle tune that captures Travis’ relaxed style, which rarely wavered far from its baritone foundation. It sounds like one of those singles that would’ve hung around the charts long enough for me to nervously sway to once after working up the gumption to ask a girl to dance at a middle school social. I wouldn’t say it’s a great Randy Travis song, but it’s certainly not the worst — I’d even say I like it.

Dustin Ballard, who runs the various incarnations of the There I Ruined It social media account, creates his AI voice parodies in much the same way as Travis’ team, giving birth to goofy mash-ups like AI Elvis Presley singing “Baby Got Back” or synthetic Johnny Cash singing “Barbie Girl.”

It would be easy to sound the alarm over this song or Ballard’s creations, declaring the death of human-made music as we know it. But I’d say it does quite the opposite, reinforcing what tools like an AI voice clone can do in the right hands. Whether you like the song or not, you have to admit that you can’t get something like this from casual prompting.

Cris Lacy, co-president of Warner Music Nashville, told CBS Sunday Morning that AI voice cloning sites produce approximations of artists like Travis that don’t “sound real, because it’s not.” She called the label’s use of AI to clone Travis’ voice “AI for good.”

Right now, Warner can’t really do much about AI clones that it feels don’t fall under the heading of “AI for good.” But Tennessee’s recently passed ELVIS Act, which goes into effect on July 1st, will allow labels to take legal action against those using software to recreate an artist’s voice without permission.

Travis’ song is a good edge-case example of AI being used to make music that actually feels legitimate. But on the other hand, it also may open a new path for Warner, which owns the rights to vast catalogs of music from famous, dead artists that are ripe for digital resurrection and, if they want to go there, potential profit. As heartwarming as this story is, it makes me wonder what lessons Warner Music Nashville — and the record industry as a whole — will take away from this song.

Tesla plans to charge some Model Y owners to unlock more range

Tesla plans to charge some Model Y owners to unlock more range
A picture of a Tesla Model Y.
Image: Tesla

Tesla CEO Elon Musk posted on Friday that the Standard Range rear-wheel drive Model Y the company has been building and selling “over the last several months” actually has more range than the 260 miles they were sold with. Pending “regulatory approval,” he wrote that the company will unlock another 40–60 miles of total range, depending on which battery Model Y owners have, “for $1,500 to $2,000.”

Tesla replaced the Standard Range Model Y with a 320-mile range version for $2,000 more. The car now starts at $44,990, or about $37,490 if you qualify for the $7,500 federal EV tax credit.

This isn’t the first time Tesla has software-locked its cars’ range. The company revealed back in 2016 that the 70kWh battery in the Model S 70 actually had 75kWh of capacity that customers could pay more than $3,000 to access. It’s possible that the current Model S and X cars, which weigh the same as their longer-range counterparts, have also been software-limited.
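Mechanically, a software lock like this is just an entitlement check that clamps usable capacity below what the hardware supports. Here’s a hedged sketch of the idea, with numbers mirroring the Model S 70/75 example; none of it reflects Tesla’s actual firmware.

```python
# Illustrative sketch of a software-locked battery: the pack physically holds
# more energy than the car exposes, and a paid entitlement raises the cap.
# The names and numbers are hypothetical, not Tesla's real code.
from dataclasses import dataclass

@dataclass
class BatteryPack:
    physical_kwh: float = 75.0  # what the hardware can actually store
    unlocked: bool = False      # flipped after the paid over-the-air update

    @property
    def usable_kwh(self) -> float:
        return self.physical_kwh if self.unlocked else 70.0  # firmware cap

pack = BatteryPack()
print(pack.usable_kwh)  # 70.0 -- capped in software
pack.unlocked = True    # the paid unlock flips one flag
print(pack.usable_kwh)  # 75.0 -- full physical capacity
```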

The auto industry, in general, has been trending toward controlling access to cars’ existing features with pay-to-remove software locks. Polestar started selling a $1,200 over-the-air update to boost the Polestar 2’s performance in 2022. Mercedes-Benz charged the same amount, but annually, to improve the horsepower and torque of the EQE and EQS. BMW once put CarPlay and, later, heated seats behind software paywalls (the company eventually dropped the heated seats plan). And of course, Tesla has proven itself willing to remotely disable paid-for features when one of its cars is resold.

Microsoft pauses Windows 11 updates for PCs with some Ubisoft games installed

Microsoft pauses Windows 11 updates for PCs with some Ubisoft games installed
Illustration by Alex Castro / The Verge
Microsoft has stopp...