As artificial intelligence enters Taiwan’s music industry, companies and creators are testing how far machines can go without flattening local sound, sidestepping copyright rules, or sidelining human creativity.
The lucky VIPs who found themselves at the NFL Culture Club right before Super Bowl LX earlier this month may have noticed an unusual soundtrack accompanying the media briefings and networking events at the exclusive hospitality event in San Francisco.
The soundscapes were powered by Musica, a proprietary large music language model that adjusts its sonic output in real time to human behavior across different environments. The company behind the tool is Wavv, a Bay Area-based provider of AI music generation tools, including software-as-a-service models.
“Super Bowl is about a lot more than sport,” says Ivan Linn, founder and CEO of Wavv. “There’s community, entertainment, and society networking. Layers such as music can play quite a critical role.”
Born in Taiwan, Linn has worn many hats in and around the entertainment industry. As a child, he appeared in Taiwanese television dramas before studying piano in Switzerland, Germany, and the United States, and going on to win a series of domestic and international awards for his performances.
He later branched out into music programming and production for video game franchises such as Final Fantasy, and in 2019 served as music director and chief conductor for the Assassin’s Creed Symphony World Tour.
The Super Bowl project follows cooperation with NASA and OpenAI on a score for the 2024 solar eclipse, using the space agency’s raw sonic data. Working with NASA’s Solar System Exploration Research Virtual Institute, Linn and his co-composer Alexander Wong created Ode to the Sun and Moon, a “co-working process between human and machine.”
Closer to home, shoppers in Taiwan can get a taste of what generative AI music offers at the NOKE shopping mall in the Dazhi neighborhood of Taipei’s Zhongshan District, where Musica’s sound database adjusts in response to factors such as foot traffic and changes in customer behavior. With plenty of research demonstrating the influence of music on shoppers’ mood and decision-making, there are obvious advantages to using a system that adjusts on the fly.
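The mechanics of how Musica maps crowd signals to sound have not been spelled out, but the basic idea of real-time adaptation can be illustrated in miniature. The sketch below is a hypothetical mapping from foot-traffic metrics to soundtrack parameters; the function, thresholds, and parameter names are assumptions for illustration, not a description of Wavv’s system.

```python
# Illustrative sketch only: maps simple crowd metrics to music parameters.
# The thresholds and parameter names are hypothetical, not Wavv's actual model.
from dataclasses import dataclass

@dataclass
class MusicParams:
    tempo_bpm: int      # pacing of the generated soundtrack
    energy: float       # 0.0 (calm) to 1.0 (lively)

def adapt_to_crowd(visitors_per_minute: float, avg_dwell_minutes: float) -> MusicParams:
    """Pick soundtrack parameters from foot-traffic signals."""
    if visitors_per_minute > 60:          # busy floor: keep things lively
        return MusicParams(tempo_bpm=118, energy=0.8)
    if avg_dwell_minutes > 20:            # browsers lingering: stay relaxed
        return MusicParams(tempo_bpm=88, energy=0.4)
    return MusicParams(tempo_bpm=100, energy=0.6)  # default ambience

if __name__ == "__main__":
    print(adapt_to_crowd(visitors_per_minute=75, avg_dwell_minutes=12))
```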
“When you walk into department stores, you get the same playlists, and we wanted to offer something different,” says Linn.
As Wavv is geared toward enterprises, copyright considerations are paramount. “We don’t use any copyrighted data, as we knew that would be an issue from the get-go,” says Dean Ludgate, head of operations at Wavv. “That’s why our music can be used in a commercial space like the NOKE mall.”
Expanding on the topic, Linn emphasizes the meta nature of Wavv’s technology. “For generative AI in the entertainment field, the most discussed concern is copyright infringement,” he says. “Instead of training on copyrighted datasets owned by music publishers, we use something a layer deeper.”
He likens the process to training a large language model from individual letters, rather than relying on web scraping to assemble pre-existing blocks of text. “If the alphabet is the fundamental metadata for the language of English, then for the language of music, it’s do, re, mi, fa, so — music notes,” says Linn.
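A minimal sketch of what such a note-level representation could look like appears below; the token format and the solfège-to-MIDI mapping are illustrative assumptions rather than a description of Wavv’s training pipeline.

```python
# Illustrative sketch: a melody expressed as note-level tokens (pitch + duration)
# rather than as a finished, copyrighted recording. The token scheme is hypothetical.
SOLFEGE_TO_MIDI = {"do": 60, "re": 62, "mi": 64, "fa": 65, "so": 67, "la": 69, "ti": 71}

def tokenize_melody(notes):
    """Turn (syllable, beats) pairs into training tokens like 'NOTE_64|DUR_1.0'."""
    return [f"NOTE_{SOLFEGE_TO_MIDI[s]}|DUR_{beats}" for s, beats in notes]

melody = [("do", 1.0), ("re", 1.0), ("mi", 2.0)]
print(tokenize_melody(melody))
# ['NOTE_60|DUR_1.0', 'NOTE_62|DUR_1.0', 'NOTE_64|DUR_2.0']
```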

Digital Orchestras
It’s not just in the creation of music itself that AI is pushing boundaries. For Li Su, an associate research fellow with the Institute of Information Science at Academia Sinica, the possibilities for music detection, tracking, and transcription systems are tantalizing.
Between 2018 and 2022, Su collaborated with National Tsing Hua University on a real-time music-tracking algorithm for live performances by the Tsinghua AI Orchestra. “The system tracks playing progress with visualization on a screen,” Su explains. “You can see the score, the piano roll moving along with the music.”
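Production score-followers typically rely on techniques such as online dynamic time warping, but the underlying idea of advancing a cursor through a known score as notes are detected can be sketched simply. The toy example below illustrates that principle only and is not the system Su built with NTHU.

```python
# Illustrative sketch of score following: advance a cursor through the score
# as detected note onsets arrive. Real systems are far more robust; this toy
# version only matches exact pitches played in order.
SCORE = [60, 62, 64, 65, 67]   # expected MIDI pitches, in score order

def follow(detected_pitches):
    """Yield the current score position after each detected note."""
    position = 0
    for pitch in detected_pitches:
        if position < len(SCORE) and pitch == SCORE[position]:
            position += 1          # performer played the expected note
        yield position             # drives the on-screen piano-roll cursor

live_input = [60, 62, 62, 64]       # one repeated note from the performer
print(list(follow(live_input)))     # [1, 2, 2, 3]
```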
After the orchestra’s director, Sophia Lin, moved to the nonprofit Fu Yu Culture & Education Foundation, Su teamed up with her again on a project called AI Suite, which saw the organization’s chamber ensemble performing an AI-composed piece in tandem with a virtual double bass player. The 2024 concert, Su noted, required the AI musician to respond quickly to create the natural “rapport” of a human ensemble.
Perhaps the most exciting area of research lies in collaboration with musicologists hoping to use AI tools to collate and categorize music-related data across a wide range of media. “Whether it’s the covers of CDs and records, videos, text data, and historical recordings, there is so much music data,” says Su. “Previously, musicologists or humanities researchers had to hire a lot of staff, but these emerging technologies can help speed up the processing period for conversion into a usable database.”
As with similar efforts elsewhere — including the Taiwan Film and Audiovisual Institute’s push to digitally restore deteriorating archival material — there is hope that artificial-intelligence tools can help preserve elements of Taiwan’s cultural heritage that might otherwise be lost, Su says.
While his work generally falls outside the confines of music generation, Su acknowledges copyright issues and other ethical considerations. Yet he feels others are better placed to address these challenges. “As technology developers, of course, we are aware of this, but it is far beyond our abilities to answer,” he says. “These are questions for philosophers, lawyers, management, and specialists from other fields.”
For Siva Yuan, a veteran radio presenter who has covered AI developments in music, a key issue for Taiwan is “representational bias” — music-generation models that “heavily overweigh” Western musical styles. “The downstream effect is that the model’s creative vocabulary becomes narrow, even when outputs feel superficially diverse,” says Yuan.
For Taiwan, the concern is practical as much as cultural, he notes. “With limited exposure to Taiwanese-language, Indigenous, and local genres, prompts like ‘Hakka pop’ or ‘Taiwanese indie rock’ can end up producing Western-structured harmonic or rhythmic templates with localized lyrics or timbral cues,” he says.
Even if unintentionally, a “Taiwanese sound” risks being distorted by the way data is distributed and by what models learn to treat as typical, Yuan says.
Advantages of the technology include rapid prototyping of melodies and arrangements, cost-effective production, and faster turnaround on localization and marketing assets. However, these must be weighed against the risks of “creative homogenization via dataset and misrepresentation of local styles,” says Yuan. There is also the issue of “additional pressure on discovery [of new artists] when platforms optimize around AI-amplified supply.”
To tackle these challenges, Yuan says there must be “more targeted local datasets, and partnerships that let smaller languages and scenes become first-class inputs in model development rather than edge cases.” Inevitably, a move toward “transparent and legally licensed training data” will also be key, he adds.

Opening Minds
At the government level, changes are afoot. A spokesperson for the Bureau of Audiovisual and Music Industry Development (BAMID) under Taiwan’s Ministry of Culture (MOC) highlights recent tweaks to provisions on the use of generative AI for submissions to the Golden Melody Awards, Taiwan’s top music honors.
This loosening was based on consultations with Taiwan’s music industry professionals, as well as reference to international bodies such as the Recording Academy, which oversees the Grammys. The academy began addressing AI contributions to music in 2023. While still operating under a “human-centric policy,” meaning only human creators can receive awards or nominations, it now allows considerable leeway for AI content.
“This year, the Golden Melody Awards added new regulations, recognizing that generative AI software can be used to assist in the music creation and production process,” says the BAMID spokesperson. “But entries with lyrics, arrangements, melodies, or vocals entirely generated by AI are still not allowed.”
Other public bodies have also acknowledged the need to strike the right balance. “Intellectual property rights and ownership issues are unavoidable challenges in the development of AI in music,” says Eric Liang, CEO of Taipei Music Center (TMC), an MOC-funded performing arts center. “Nevertheless, AI represents an irreversible trend, and relevant regulatory frameworks will need ongoing refinement and adaptation.”
To this end, Liang says bringing the current crop of tech-curious youngsters up to speed will be vital. He highlights TMC’s partnership with AI4kids, a Taiwan-based education organization, to offer K-12 courses on AI applications in music. “Through structured instruction and hands-on training, students learn to apply these tools to music creation and sound production, cultivating their understanding of how technology and music can be integrated,” says Liang.
The aim, he says, is to assist budding creators “in areas such as ideation, composition, arrangement, and sound design.”
As for those whose livelihoods could be most directly affected by AI innovations, the jobbing musicians on the front line, the prevailing tendency is to treat the technology as a creative tool rather than a replacement.
Brian Alexander, a professional musician, producer, and composer who has lived and worked in Taiwan for 25 years, has tinkered with AI tools such as the music generator Suno, “with a very specific goal” of creating stylistic variations of original demos to serve as a reference for bandmates considering live performances or recordings.
“If you just get on there and say, for example, ‘I want a song that says how cute dogs are and make it a rock song,’ that’s kind of lame because you didn’t really do anything,” says Alexander. “What I do is give the AI the chord progression and lyrics, melody, and arrangement in the form of a recorded demo, so it’s more like a tool for developing the sound I want than having it do all the work, then lying and saying it’s my song.”
While AI can be useful for testing new ideas, Alexander says that “it’s never the final product or the primary means of being creative.”
These sentiments are echoed by Robi Roka, an Asia-based producer and DJ who performs regularly in Taiwan. “I’ve used AI mainly as a creative tool rather than a replacement,” he says. “It’s been useful for generating ideas — things like alternative basslines, melodies, or starting points I might not have thought of on my own.”
Among the technology’s current strengths are its vocals, he says, especially for quickly sketching concepts. Going forward, he sees the most promise in “generating individual stems or samples — like a bassline, vocal phrase, or percussion loop — that producers can then incorporate into a larger project, keeping human creativity at the center.”
In terms of finished product, “AI still falls short,” he says. “It doesn’t yet have the depth, dynamics, or human feel of a properly produced track.” For this reason, “it’s best used as a collaborator for inspiration, while the final sound and emotion still come from human production and decision-making.”
