At its Made by Google event, Google announced the Pixel 8, Pixel 8 Pro, and Pixel Watch 2. With these three new hardware products, the company also emphasized its hope of unlocking the infinite possibilities of mobile computing through "software + AI + hardware".

In the past, Google collaborated with various OEMs to create Nexus-branded phones that showcased the capabilities of the stock Android operating system. Since launching its own Pixel-branded phones, Google has continued to strengthen its design philosophy of using software to drive hardware features.
Since partnering with Samsung to develop its own custom processors, the Tensor series, Google has further emphasized its commitment to advancing mobile AI computing through hardware and software integration. With the launch of the Pixel 8 series, Google articulated this as a "software + AI + hardware" design philosophy: by placing AI between software and hardware, it aims to enhance integrated computing and expand the possibilities of mobile applications.
Although Google claims that its Pixel-branded phones do not compete with its partners' products, it also states that many features will eventually reach more Android phones, and that OEMs will be able to build similar features on their own devices through open API resources, perhaps even delivering a more practical experience than Pixel phones offer.
However, judging by the Pixel phones released in recent years, many features have in practice remained exclusive. For example, the Call Screen feature, available since the Pixel 3, is still limited to Pixel phones due to hardware design differences, and some camera functions also depend on specific hardware capabilities. So even where the relevant API resources are open to OEMs, the full functionality remains available only on Pixel-branded phones.


Looking at the current design of Pixel-branded phones, beyond continuing to showcase the features of stock Android, they now serve primarily to demonstrate Google's technical strength in integrating software and hardware.
Notably, in launching the Pixel 8 series, Google placed "AI" between software and hardware. Compared with the previous development model that only combined software and hardware, this adds the idea of using artificial intelligence to connect and expand computing possibilities, echoing Google's current all-in push toward artificial intelligence.
Although Google didn't go into detail about the computational architecture of the Tensor G3 processor in the Pixel 8 series, it did note that the largest generative AI model running on the Pixel 8 Pro is 150 times larger than the largest generative AI model that ran on the Pixel 7. Furthermore, the machine learning models on the Pixel 8 series are up to 10 times more complex than those on the Pixel 6 series.
As a result, features such as the next-generation Call Screen that helps users filter incoming calls, removing background noise from videos, making sure everyone in a group photo has their eyes open and is smiling, and even erasing more complex image content from photos can all run quickly on the Pixel 8 Pro.



Furthermore, Google explained that by drawing on Google Cloud's computing resources, it can fine-tune the brightness and detail of videos. While this isn't processed natively on the Pixel phone, it is an effective way for Google to showcase its cloud services, and it may attract more partners to adopt Google Cloud, supplementing devices' relatively limited on-board computing power and opening the door to effectively unlimited computing applications.