In an episode of the podcast hosted by Russian-American computer scientist Lex Fridman, Meta demonstrated Codec Avatars, the photorealistic avatar technology it first showed publicly in 2018, running on the Quest Pro, the high-end flagship virtual reality headset it launched last year. Fridman conducted an interview with Meta CEO Mark Zuckerberg entirely within a metaverse environment, with the two participants physically separated by hundreds of miles.
During the interview, both Lex Fridman and Mark Zuckerberg wore Quest Pro headsets and used Codec Avatars technology to reproduce their facial expressions in real time within the virtual scene, making the conversation in the metaverse feel far more lifelike.
However, due to technical limitations, Codec Avatars can currently only reproduce a person's facial expressions in real time; displaying the full body would require additional wearable devices.
Codec Avatars originally required an NVIDIA Titan X-class graphics card to render. According to details previously disclosed by Meta Reality Labs, a customized RISC-V chip occupying only 1.6 mm² can now handle the workload, allowing the technology to run directly on standalone virtual reality headsets.
By comparison, the personalized avatars Meta initially showed the public in Horizon Worlds were composed almost entirely of simple lines; the realistic facial expressions produced by Codec Avatars are far more likely to convince people of the metaverse's future potential.
In addition to Meta's metaverse applications, companies including Google and Logitech are also using cameras and other existing devices to create more realistic face-to-face communication and interactive experiences. Such approaches, however, are expensive to build and difficult to use anytime, anywhere. Meta's combination of virtual reality headsets, customized chips, and related technologies therefore makes it easier for people to interact in the metaverse, while realistic facial expressions help shorten the sense of distance during those interactions.
Current virtual reality headsets still need to be combined with other devices to present full-body movement in the metaverse. Simpler and more intuitive methods may emerge in the future, perhaps even adding a sense of touch, allowing users to interact more naturally in the metaverse environment.