Tencent released a new artificial intelligence (AI) model on Tuesday that can animate still portrait images. Dubbed HunyuanPortrait, the model is based on a diffusion architecture and can generate videos with realistic animation from a reference image and a driving video. The researchers behind the project highlighted that the model captures both facial data and spatial movements from the driving video and syncs them onto the reference image. Tencent has now open-sourced the HunyuanPortrait AI model, and it can be downloaded and run locally from popular repositories.
Tencent’s HunyuanPortrait Can Bring Still Pictures to Life
In a post on X (formerly known as Twitter), the official Tencent Hunyuan handle announced that the HunyuanPortrait model is now available to the open community. The AI model can be downloaded from Tencent's GitHub and Hugging Face listings. Additionally, a pre-print paper detailing the model is hosted on arXiv. Notably, the AI model is available for academic and research-based use cases, but not for commercial use.
HunyuanPortrait can produce lifelike animated videos using a reference image and a driving video. It captures facial data and head poses from the video and projects them onto the still portrait image. The company claims that the motion sync is accurate, and that even subtle changes in facial expression are replicated.
HunyuanPortrait architecture
Photo Credit: Tencent
On its model page, Tencent researchers expanded on HunyuanPortrait's architecture. It is built on a conditional diffusion model with a condition control encoder. Pre-trained encoders decouple the motion and identity information in the driving video. The data is captured as control signals, which are then injected into the still portrait via a denoising UNet. The company claims this brings both spatial accuracy and temporal stability to the output.
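To make the described flow concrete, here is a minimal conceptual sketch of that decouple-and-inject pattern. Everything below is an assumption for illustration only: the encoders and the denoising step are toy stubs, and none of the function names come from Tencent's actual repository or paper.

```python
import numpy as np

# Hypothetical sketch of a HunyuanPortrait-style pipeline: identity comes
# from the still portrait, motion comes from each driving-video frame, and
# both are injected as control signals into an iterative denoising loop.

def identity_encoder(portrait):
    """Toy stand-in for a pre-trained identity encoder."""
    return portrait.mean(axis=(0, 1))          # (C,) appearance summary

def motion_encoder(frame):
    """Toy stand-in for a pre-trained motion/pose encoder."""
    return frame.std(axis=(0, 1))              # (C,) motion summary

def denoise_step(latent, identity, motion, t, steps):
    """Toy denoising step: nudge the latent toward the fused control signal."""
    control = 0.5 * identity + 0.5 * motion    # injected control signal
    return latent - 0.2 * (latent - control) * (t + 1) / steps

def animate(portrait, driving_frames, steps=10):
    """Produce one output latent per driving frame (one 'video frame' each)."""
    identity = identity_encoder(portrait)      # extracted once, reused per frame
    outputs = []
    for frame in driving_frames:
        motion = motion_encoder(frame)
        latent = np.random.default_rng(0).normal(size=identity.shape)
        for t in range(steps):
            latent = denoise_step(latent, identity, motion, t, steps)
        outputs.append(latent)
    return np.stack(outputs)

portrait = np.ones((64, 64, 3))
driving = [np.random.default_rng(i).random((64, 64, 3)) for i in range(4)]
video_latents = animate(portrait, driving)
print(video_latents.shape)  # (4, 3): one latent per driving frame
```

The key design idea the sketch mirrors is that identity is extracted once from the portrait while motion is re-extracted per frame, which is what lets the subject stay consistent while the expressions and head pose change.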
Tencent claims that the AI model improves upon existing open-source alternatives on the parameters of temporal consistency and controllability, but these metrics have not been independently verified.
Such models can be useful in the film production and animation industries. Traditionally, animators manually keyframe facial expressions or use motion capture systems to bring characters to life. Models such as HunyuanPortrait would let them feed in just the character design and the target movements and facial expressions, and the model would generate the output. Such AI models have the potential to make high-quality animation accessible to small studios and independent creators.
