The Los Angeles headquarters of Metaphysic, a Hollywood visual-effects start-up that uses artificial intelligence to create digital renderings of the human face, were much cooler in my imagination, if I’m being honest. I came here to get my mind blown by A.I., and this dim three-room warren overlooking Sunset Boulevard felt more like the slouchy offices of a middling law firm. Ed Ulbrich, Metaphysic’s chief content officer, steered me into a room that looked set to host a deposition, then sat me down in a leather desk chair with a camera pointed at it. I stared at myself on a large flat-screen TV, waiting to be sworn in.
But then Ulbrich clickety-clicked on his laptop for a moment, and my face on the screen was transmogrified. “Smile,” he said to me. “Do you recognize that face?” I did, right away, but I can’t disclose its owner, because the actor’s project won’t come out until 2025, and the role is still top secret. Suffice it to say that the face belonged to a major star with fantastic teeth. “Smile again,” Ulbrich said. I complied. “Those aren’t your teeth.” Indeed, the teeth belonged to Famous Actor. The synthesis was seamless and immediate, as if a digital mask had been pulled over my face that matched my expressions, with almost no lag time.
Ulbrich is the former chief executive of Digital Domain, James Cameron’s visual-effects company, and over the course of his three-decade career he has led the VFX teams on several movies that are considered milestones in the field of computer-generated imagery, including “Titanic,” “The Curious Case of Benjamin Button” and “Top Gun: Maverick.” But in Ulbrich’s line of work, in the quest for photorealism, the face is the final frontier. “I’ve spent so much time in Uncanny Valley,” he likes to joke, “that I own real estate there.”
In the spring of 2023, Ulbrich had a series of meetings with the founders of Metaphysic. One of them, Chris Ume, was the visual-effects artist behind a series of deepfake Tom Cruise videos that went viral on TikTok in early 2021, a moment many in Hollywood cite as the warning shot that A.I.’s hostile takeover had commenced. But in parts of the VFX industry, those deepfake videos were greeted with far less misgiving. They hinted tantalizingly at what A.I. could soon accomplish at IMAX resolutions, and at a fraction of the production cost. That’s what Metaphysic wanted to do, and its founders wanted Ulbrich’s help. So when they met him, they showed him an early version of the demonstration I was getting.
Ulbrich’s own career began during the previous seismic shift in the visual-effects field, from practical effects to C.G.I., and it was plain to him that another disruption was underway. “I saw my career flash before my eyes,” Ulbrich recalled. “I could take my entire team from my former places of employment, I could put them on for eternity using the best C.G.I. tools money can buy, and you can’t deliver what we’re showing you here. And it’s happening in milliseconds.” He knew it was time to leave C.G.I. behind. As he put it: “How could I go back in good conscience and use horses and buggies and rocks and sticks to make images when this exists in the world?”
Back on Sunset Boulevard, Ulbrich pecked some more at his laptop. Now I was Tom Hanks — specifically, a young Tom Hanks, he of the bulging green eyes and the look of gathering alarm on his face in “Splash” when he first discovers that Daryl Hannah’s character is a mermaid. I can divulge Hanks’s name because his A.I. debut arrived in theaters nationally on Nov. 1, in a movie called “Here.” Directed by Robert Zemeckis, written by Zemeckis and Eric Roth — a reunion of the creative team behind “Forrest Gump” — and co-starring Robin Wright, “Here” is based on a 2014 graphic novel that takes place at a single spot in the world, primarily a suburban New Jersey living room, over several centuries. The story skips back and forth through time but focuses on a baby-boomer couple played by Hanks and Wright at various stages of their lives, from age 18 into their 80s, from post-World War II to the present day.
“You couldn’t have made this movie three years ago,” Zemeckis told me. He could have used multiple actors for each character, but the audience would get lost trying to keep track. Conventional makeup could have taken a decade off Hanks, who is now 68, but not half a century. The problem with C.G.I. is time and money. Persuading us that we’re watching Hanks and Wright in their 20s would have required hundreds of VFX artists, tens of millions of dollars and months of postproduction work — doable in theory, but major studios don’t spend that kind of money on movies like “Here.” “There’s no capes or explosions or aliens or superheroes or creatures,” Ulbrich explained. “It’s people talking, it’s families, it’s their loves and their joys and their sorrows. It’s their life.”
A.I. software, though, changes all the accounting. By using every available frame of Hanks’s movie career to capture his facial movements and the look of his skin under countless lighting conditions, physical environments, camera angles and lenses, Metaphysic’s artists can generate a digital Tom Hanks mask with a few keystrokes. And what we see onscreen is just one factor in A.I.’s ascendancy. “It’s the quality, and it’s the speed, and it’s the cost,” Ulbrich said. No six-month production lag, no fortune spent.
During the filming of “Here,” Metaphysic devised a setup that enabled Zemeckis and his crew to follow the shooting of scenes on two different monitors: one showing the raw feed from the camera of the actors as they appear in reality; and one filtered through its A.I. tools showing the actors at whatever age the scene required. Zemeckis has a long history of pouncing on new technologies to help him tell stories, from “Forrest Gump” to “The Polar Express,” and Hanks has often come along for the ride. In this case, the production breakthrough mattered as much as the image quality. “It was crucial that the cast could see it, because then they could adjust their performance,” Zemeckis told me. “They could say, ‘Oh, I see, I’ve got to make sure I’m moving like I was when I was 17 years old.’ No one had to imagine it. They got a chance to see it in real time.” And despite the technical ambition, “Here” only cost about $50 million, less than a quarter of some Marvel movie budgets.
From Metaphysic’s office in Hollywood, I drove 30 minutes south to Sony Pictures’ studio lot in Culver City to watch a screening of “Here” in the basement of the Irving Thalberg Building — and, for me at least, the A.I.-driven scenes passed the baseline test of any ambitious movie illusion: I didn’t notice them. But reactions are bound to vary, especially when it comes to a face as familiar as that of young Tom Hanks — a high bar for a big-screen visual effect — and when an illusion doesn’t work, it can be hard to focus on anything else. Maybe it will turn out to be impossible to escape Uncanny Valley, after all, even with the help of A.I. Then again, the whole fuss over the Tom Cruise deepfakes was propelled by how convincing they were, and that was three years and three Nvidia chips ago. It seems like only a matter of time before they fool us all.
The history of Hollywood can be told as a series of technological leaps, beginning with the invention of the camera itself, and each time something new comes along, jobs are lost, jobs are created, the industry reorganizes itself. Everyone in town of a certain age has seen this movie before. Past leaps, though, have tended to have narrower impacts: home video changed movie distribution, digital cameras changed movie production, C.G.I. changed visual effects. “The difference here is that A.I. has the potential to disrupt many, many places in our pipeline,” says Lori McCreary, the chief executive of Revelations Entertainment, a production company she owns with Morgan Freeman, and a board member of the Producers Guild of America. “This one feels like it could be an entire industry disrupter.”
A.I. is evolving so rapidly, though, and remains so poorly understood by so many people in Hollywood, that it’s difficult to predict how it will wind up proving most beneficial, and which aspects of the filmmaking process it will disrupt first. “Everyone’s nervous,” says Susan Sprung, the Producers Guild’s chief executive, and yet no one’s quite sure what to be nervous about.
The use of A.I. in “Here” is a critical element in its broader illusion, but it’s also a small one, in a movie full of old-fashioned visual invention. And aging and de-aging actors is just one way that filmmakers are tinkering with A.I.-driven facial replacement. It’s also being used in stunt photography, foreign-language dubbing and increasingly in lieu of reshoots.
A.I. applications are often divided into two broad categories. The first is generative A.I., which helps artists and studios create things. Then there is “agentic” A.I., which helps them get things done. A new A.I. tool called Callaia, for instance, reads scripts and generates 35-page coverage reports, along with historical comparisons and suggested theatrical release patterns — work that makes up the core of countless junior studio executives’ days, though perhaps not for long.
Gen A.I. is, depending on your vantage point, either the fun kind or the dystopic kind: It’s either going to empower artists or replace them (or do both). But Gen A.I. is also the category where all the creative exploration is happening, and where filmmakers are learning on the fly how it can help them tell new stories and, they believe, make better movies.
Shortly after “Here” wrapped up principal photography in April 2023, Hollywood shut down for several months because of overlapping strikes by the Writers Guild of America and the Screen Actors Guild. Among the central issues in both labor disputes was how to protect the livelihoods of union members from A.I. encroachment. Yet even a year before the strikes, most people in the movie industry still regarded A.I. as a plot device for sci-fi thrillers, not a pressing real-world threat.
Then OpenAI unveiled its first public version of ChatGPT in November 2022. Suddenly A.I. was an asteroid hurtling toward Los Angeles. Any day, studio executives would start using ChatGPT to spit out screenplays, eliminating all those pesky writers, and using text-to-video programs like Runway’s Gen-1 to auto-generate all the filmmaking elements that professional artists get paid to create now — costumes, set design, cinematography. And even though the guilds managed to extract strict limitations on A.I. use in their ratified labor agreements, their victories felt Pyrrhic.
I spoke with more than two dozen people across the industry for this article and discovered that while there’s no shortage of A.I. optimists in Hollywood, they’re often reluctant to share that sentiment out loud for fear of seeming to side with the machines, or of appearing too sanguine about a technology that everyone agrees will cost some people their jobs. There were also a couple of occasions when an eager early adopter scheduled an interview, only to cancel at the last minute at the behest of skittish corporate overseers.
And yet the reality of A.I.’s adoption within Hollywood so far has been more muted and incremental, and considerably less dystopic, than the nightmare scenarios. What was billed as an industry earthquake has been more like a slow leaching into the topsoil. A.I. in Hollywood right now is like A.I. in “Here” — it’s everywhere and it’s nowhere, it’s invisible and it’s all over the screen. “There’s too many people in Hollywood today who think that if you type ‘movie’ and press enter, you get a movie,” says Cristóbal Valenzuela, the co-founder and chief executive of Runway, whose A.I.-video-generation engines are among the most widely used. “The moment you start using it, you understand: ‘Oh, it actually doesn’t really work that well yet, and it’s full of flaws, and it doesn’t actually do what I want.’”
The critical limitation with generative-A.I. tools for now is the absence of control. C.G.I. requires a factory line of hundreds of artists, working one frame at a time — but “you control every freaking pixel, you control every character,” says Oded Granot, a visual-effects artist at a generative-A.I.-video start-up called Hour One, who worked on the Oscar-winning team behind “Spider-Man: Into the Spider-Verse” (2018). Making images with A.I., Granot explains, “is like Russian roulette, or a slot machine.” The front end requires just a simple prompt. “You write: ‘I want Spider-Man hanging from a building,’ and it generates it.”
But that still leaves countless decisions up to the machine, and you’re stuck with the output. What does the building look like? How is he hanging? Upside-down? Sideways? And that’s a single still image, not a full sequence, let alone a feature-length film. “You can’t expect James Cameron to prompt an ‘Avatar’ scene,” says Jo Plaete, Metaphysic’s chief innovation officer and the lead architect of the A.I. tools used in “Here.” “It’s just not going to work. Or with Bob Zemeckis or Steven Spielberg — if you’ve ever made a movie with one of these guys, you know that they will want to change every pixel if they can.”
Rather than play wait-and-see and have A.I. thrust upon them in ways they couldn’t control, Anthony and Joe Russo, the directors of the previous two “Avengers” movies for Marvel Studios, hired a machine-learning scientist away from Apple to help guide how their production company, AGBO, would use it. “There’s a lot of ways that we are experimenting with A.I. right now,” Anthony Russo told me. “We’re not quite sure what’s going to work and what’s not going to work.” But he is sure that A.I. will figure somehow into how he and his brother make the next two “Avengers” movies, both currently scheduled for 2026, even if it’s only to help with brainstorming ideas and working through them faster.
Over several months of talking to people around Hollywood about A.I., I noticed a pattern: The people who knew the least about its potential uses in the filmmaking process feared it the most; and the people who understood it best, who had actually worked with it, harbored the most faith in the resilience of human creativity, as well as the most skepticism about generative A.I.’s ever supplanting it. There was a broad consensus about the urgency of confronting its many potential misuses — tech companies’ skirting copyright laws and scraping proprietary content to train their machine-learning models; actors’ likenesses being appropriated without their permission; studios’ circumventing contractual terms designed to ensure that everything we see onscreen gets written by an actual human being. I must’ve heard the phrase “proper guardrails” at least a dozen times. But as the prolific Emmy-winning television director Paris Barclay, who has six episodes across multiple shows airing this fall alone, put it, “That’s what unions are for.”
The twilight sun over the Aegean Sea behind Tom Hanks was so golden and incandescent, and lit his profile with such cinematic flair, that the composition was almost too perfect, as though it could only be the product of advanced machine learning, and not, say, Zeus.
One week after my visit to Metaphysic, I was once again staring into a camera, and Hanks was again staring back at me — only this time it was the real Tom Hanks, enjoying the last few days of a sailing trip in the Greek islands. He was tanned and relaxed in a dark open-collar polo, and unlike the last time I saw him, he looked like a man in his late 60s, with clear-frame glasses, tufts of short gray hair barely peeking over the top of his head and a tight white beard. The nameplate at the bottom of his Zoom window read “HANX.”
I asked HANX whether making a movie so reliant on A.I. tools gave him any pause at a moment when so many of his colleagues in Hollywood were anxious about the technology. He rejected the premise and characterized the work on “Here” as being in the grand tradition of Lon Chaney and monster-movie magic. “This was not A.I. creating content out of whole cloth,” he said. “This is just a tool for cinema — that’s all. No different than having better film stock or a more realistic rear-screen projection for somebody driving a car.”
For someone like Hanks, A.I. could enable him to take on roles for which he had long assumed he was too old. “If it’s possible for me to play a younger person than I am — I read stuff all the time and I think, Oh, man, I’d kill to play this role, but I’m 68. I’d kill to play Iago, but I can’t because Iago’s in his 20s. I would do it in a heartbeat.” (Though pity the poor 20-something actors shut out from playing Iago by an ageless Tom Hanks.) When A.I. evangelists talk about its capacity to empower artists, this is the kind of thing they mean, though Hanks’s experiences have compelled him to contemplate some morbid implications. “They can go off and make movies starring me for the next 122 years if they want,” he acknowledged. “Should they legally be allowed to? What happens to my estate?” Far from being appalled by the notion, though, he sounded ready to sign all the necessary paperwork. “Listen, let’s figure out the language right now.”
Metaphysic’s handiwork has already appeared in two major theatrical releases this year — “Furiosa: A Mad Max Saga” and “Alien: Romulus” — and in both cases, the assignment was to resurrect a fan-favorite figure from an earlier film in the franchise who had been played by a since-deceased actor. In “Furiosa,” Metaphysic enabled the director George Miller to bring back the Bullet Farmer by putting the face of Richard Carter from “Mad Max: Fury Road” onto the body of a living actor. In “Alien: Romulus,” the android from Ridley Scott’s 1979 original “Alien,” played by Ian Holm, who died in 2020, returns in updated form for several scenes. Even though Holm’s family blessed the use of his likeness, public response was divided. The movie was a hit, but some viewers posted ethical critiques on social media. Then in late August, the California State Senate passed long-gestating, SAG-supported legislation requiring estate consent for A.I.-generated replicas of dead performers.
When I asked one writer-director about the practice, he didn’t even let me finish the question. “Nope, nope, nope, nope,” said Billy Ray, who wrote “Captain Phillips” (2013) and co-wrote the 2012 big-screen adaptation of “The Hunger Games,” and who spent his time during the strike hosting a studio-lambasting podcast. “It’s completely insincere, dishonest filmmaking. It’s a lie.” The counterargument I kept hearing, from artists and from technologists, is that filmmaking is a grand illusion at its core, and we all consent to being tricked — we’re paying to be tricked — when we walk into the theater or turn our phone sideways.
When your movies require visiting multiple fantasy worlds, dreaming up new superpowers and nastier villains, you need to come up with lots of ideas, knowing that a vast majority of them will be bad. This is the grunt work of making popular art, the failing part, and A.I. could prove to be a godsend for artists who need to fail fast, and at minimal expense. “It’s a bit like you have 5,000 phenomenally smart interns at your disposal, 24-7, in all time zones,” says Dominic Hughes, the Oxford University-educated A.I. whisperer who left Apple to join the Russo brothers.
Hughes switched industries, he told me, in part because he came to believe Silicon Valley was getting A.I. all wrong. Generative-A.I. tools are unruly and imprecise — “sloppy,” he said — but too many companies were trying to use them for tasks where they couldn’t afford to be wrong. “Like self-driving cars or robot surgeries or whatever,” he says. “And we’ve been struggling with that for years. Because if you don’t want to run over 7-year-olds in Kansas, you’ve got to be 99.999999 percent precise.” Whereas in a creative context, “if I generate a bunch of elves and they have seven fingers” — “hallucinations,” in the parlance of the medium — “it doesn’t matter, because they’re part of my iterative creative process of brainstorming what elves could look like.” Generative A.I., he has come to believe, is best suited for tasks “where ‘hallucination’ is a feature, not a bug.”
The sum of Hollywood’s collective fears, says Bennett Miller, the Oscar-nominated director of “Moneyball” and “Foxcatcher,” “is automation” — robots replacing humans, just as in the movies. Miller spent five years making a documentary about the dawn of A.I. that he describes as a “time capsule” about “a moment before a real loss of innocence in Silicon Valley.” (The untitled film is currently in legal limbo.) In the course of making it, he got to know the original leadership team at OpenAI, including Sam Altman. A few years ago, they offered him access to a beta version of their forthcoming text-to-image tool, DALL-E.
“It was astounding,” Miller told me. “From the moment that I had an account set up to literally 10 minutes ago, I’ve just been all in.” This January, at Gagosian’s Paris gallery, he will open his third show of ghostly, surreal images that evoke the grainy early days of photography but were created with DALL-E. In one of them, a silhouetted man looks up from the floor of a century-old theater at a massive sea creature onstage, its body so large that it extends beyond the frame. “It’s like realizing that you had locked-in syndrome, because you really can navigate to extraordinary places.” He fell in love with getting lost. The mistakes, the wrong turns, the model’s peculiar way of comprehending the human world — a bit Luis Buñuel, a bit Diane Arbus — led to all of his breakthroughs, which is how the best art often gets made: by accident. “It’s not just a change in degree of what’s been possible before; it’s really like a change in kind.”
And yet as much as Miller’s creative practice has been transformed by A.I., it’s still merely a tool to him — and “the tool doesn’t make you an artist,” he says. “I just don’t see it as a threat the same way others see it. I’m not saying that there aren’t going to be huge problems that emerge. But here’s the thing that I cannot comprehend: human artists’ being replaced.” The great wild card of A.I. is that it learns and gets better, and we can only guess at its full capabilities. Its performance so far, though, has also highlighted the gap still to be closed, especially with text-generation tools like ChatGPT, a lowest-common-denominator regurgitation machine whose countless practical uses don’t appear to include writing screenplays.
Tom Graham, a Metaphysic co-founder and its chief executive, says he can see A.I. tools “summarizing news articles and doing great explainer videos for corporate work. I can see them creating generic or derivative stories that just kind of seem like other stories.” But, he adds, “amazing storytelling is very, very difficult.”
Of course, Hollywood is very much in the business of generic and derivative stories, so why not outsource the hackwork to A.I. entirely? The Writers Guild of America’s labor deal forbids that, though count on studios to use it for anything in the script-development process that can save them money. And some creative guilds are bound to be hit hard by the adoption of A.I., especially in digital animation, with its battalions of entry-level artists who spend an entire year tweaking pixels on two minutes of film. Many of those people could be working in A.I. soon, and fortunately for them, A.I. firms are hiring. “We need to double our size really quickly just to keep up with the demand,” says Alejandro Lopez, the chief marketing officer at Metaphysic, which currently has about 120 employees working remotely in more than 20 countries. “We are so behind.”
But as anxious as the guilds are, Hollywood’s history with paradigm-shifting technology suggests that the folks on the studio side — the agentic side — have just as much to fear. “We went from renting movies to streaming them, and it’s not filmmakers that go away — Blockbuster goes away,” says Bryn Mooser, a filmmaker and a co-founder of the streaming channel Documentary+, whose new company, Asteria, is an independent movie studio bidding to be “the Pixar of A.I.” “Or think about the switch from film to digital — Polaroid is the one that’s got to figure it out, Kodak has to figure it out. Photographers are still there.”
Filmmaking is often described as the most collaborative art form, and Metaphysic was just one among many creative contributors to the trickiest scenes of Hanks and Wright as young lovebirds in “Here.” The actors performed in full period costume, not in green suits covered with Ping-Pong balls. The makeup department taped back the loose skin around Hanks’s neck and pulled up his droopy ears, so Hanks’s A.I.-generated young face would match Hanks’s real-life old head. And, of course, they had award-winning actors to deliver all the lines. “You still need the warmth of the human performance,” Zemeckis told me. “The illusion only works because my actors are using the tool just like they use their wardrobe, just like they’d use a bald skull cap.” It was the future of Hollywood, and it looked uncannily like its past.