Apple has introduced Keyframer, an AI tool that uses large language models (LLMs) to breathe life into static images through natural-language text prompts. The details of this innovation are outlined in Apple’s research paper titled “Keyframer: Empowering Animation Design using Large Language Models.”
Unveiling the Complexity of Animation Design:
Unlike traditional text-to-image systems such as DALL·E and Midjourney, Keyframer addresses the intricacies inherent in animation design. Apple contends that animation demands a nuanced handling of factors like timing and coordination, which are difficult to capture in a single prompt. The paper therefore advocates alternative approaches that let users iteratively create and refine their designs, a workflow especially suited to animation.
The Iterative Animation Process:
To start animating, users upload an SVG image and enter a text prompt describing the desired motion. Keyframer then takes the reins, generating the CSS code for the animation. Users retain control throughout: they can edit the generated code directly or refine the result with follow-up text prompts, fostering a dynamic and customizable animation creation process, NIX Solutions notes.
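To make the workflow concrete, below is a minimal sketch of the kind of CSS an LLM could plausibly generate in response to such a prompt. The #sun selector and the example prompt are hypothetical illustrations, not taken from Apple’s paper; they assume the uploaded SVG contains an element with id="sun":

```css
/* Hypothetical output for a prompt like "make the sun pulse gently".
   Assumes the uploaded SVG contains an element with id="sun". */
#sun {
  animation: pulse 2s ease-in-out infinite;
  /* Scale around the element's own center rather than the SVG origin. */
  transform-origin: center;
  transform-box: fill-box;
}

@keyframes pulse {
  0%, 100% { transform: scale(1);   opacity: 1;   }
  50%      { transform: scale(1.1); opacity: 0.8; }
}
```

Because the output is ordinary CSS, a user can tweak a duration or easing curve by hand, or ask the model to adjust it in a follow-up prompt, which is exactly the iterative edit-and-refine loop described above.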
Integration into iOS 18:
Apple, which recently introduced a neural network for generating static images from text prompts, hints at a comprehensive integration of these AI advancements into iOS 18. The official unveiling is anticipated at WWDC 2024 in June, showcasing Apple’s commitment to pushing the boundaries of creative possibilities through artificial intelligence.