What is it about?

Imagine you're using an AI tool to create images, like designing a character for a game or an illustration for a story. You might want the image to look a certain way (style) and also show a specific subject (content), like "a robot in watercolor style." However, today's AI tools often mix these two up, making it hard to control each one exactly as you want. Our research proposes a new way to help these tools separate and understand what the image shows (content) and how it looks (style). Instead of mixing them together, we create two different "paths" inside the AI model: one for learning the subject, and one for learning the artistic style. This is done by breaking the model's internal parameters into two sections and then carefully putting them back together in a smart way. This method, called B4M (Break-for-Make), gives users more precise control when generating images that combine personal content with unique visual styles, using just a few example images and text prompts.
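The "break, then recombine" idea can be illustrated with a toy low-rank adapter. This is only a minimal sketch of the general principle, not the authors' B4M implementation: all names and sizes here are made up, and the real method operates on adapters inside a pretrained diffusion model. It shows how splitting an adapter's rank dimension yields two independent branches (one could be trained on content references, the other on style references) whose recombined output still matches the original adapter.

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, rank = 8, 8, 4  # toy sizes; real adapters sit inside attention layers

# Frozen base weight of one layer in the pretrained model.
W0 = rng.normal(size=(d_out, d_in))

# A standard low-rank adapter: delta_W = B @ A, with small rank r.
B = rng.normal(size=(d_out, rank))
A = rng.normal(size=(rank, d_in))

# "Break": split the rank dimension into a content half and a style half,
# so each half can be updated from its own reference images.
r_c = rank // 2
B_content, B_style = B[:, :r_c], B[:, r_c:]
A_content, A_style = A[:r_c, :], A[r_c:, :]

def forward(x):
    # "Make": recombine the two branches with the frozen base weight.
    return W0 @ x + B_content @ (A_content @ x) + B_style @ (A_style @ x)

x = rng.normal(size=(d_in,))
# Splitting along the rank dimension is exact: the two branches
# sum back to the full adapter's update.
assert np.allclose(forward(x), W0 @ x + B @ (A @ x))
```

Because block matrix multiplication distributes over the rank index, the split is lossless before training; the point of separating the branches is that each can then learn from different examples without overwriting the other.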


Why is it important?

AI-generated images are becoming a powerful tool for creators, from artists to game designers. But when it comes to personalizing these images—especially matching a specific style with a unique subject—today’s tools often fall short. They tend to blend style and content in ways that make it hard to control either one clearly. What makes our work unique is that we introduce a new way of thinking: instead of treating style and content as one blended idea, we separate them right inside the model’s brain. This gives users much more control and flexibility when customizing images, especially when working with only a few reference examples. Our method could make a big difference for people who need high-quality, consistent, and personalized visual outputs—like branding teams, illustrators, or designers building virtual worlds. As AI image generation becomes more common, our approach helps take a step toward truly user-driven creativity.

Perspectives

As someone deeply passionate about both generative AI and creative expression, this project was an exciting opportunity to explore how we can give users more control over the images they generate. I often noticed that existing tools made it hard to preserve a character’s identity or a specific art style when trying to customize both at once. That frustration became the motivation behind this work. Through B4M, I wanted to rethink the way personalization works—by going beyond technical improvements and focusing on the creative experience itself. To me, it's not just about making models smarter; it's about making them more intuitive for people who care about aesthetics, storytelling, or design. It’s been incredibly rewarding to see how rethinking the structure of AI models—even something as specific as the way parameters are organized—can open up new possibilities for creators. I hope this work sparks more ideas around user-guided AI generation and inspires future tools that better respect both the vision and voice of the user.

Yu Xu
University of the Chinese Academy of Sciences

Read the Original

This page is a summary of: B4M: Breaking Low-Rank Adapter for Making Content-Style Customization, ACM Transactions on Graphics, April 2025, ACM (Association for Computing Machinery), DOI: 10.1145/3728461.

