From the Cloud to Your Phone: How Did Artificial Intelligence Reach Your Pocket?

The mobile phone industry has gone through two major revolutions over the past three decades. The first began with the arrival of the mobile phone itself, which changed the way we communicate. The second came in the latter half of that period with the arrival of advanced smartphones, which have become an integral part of our daily lives.

Now, with the development of generative artificial intelligence, we are witnessing the beginning of a new revolution in the history of smartphones. Thanks to progress in large language models, phones can now perform complex tasks that were previously possible only on powerful servers.

Phone makers are now launching new generations of devices equipped with advanced AI capabilities. Google, for example, introduced a set of AI features in the Pixel 9 series last month, such as image generation and improved call quality. It also introduced the Add Me feature, which uses AI to add missing people to group photos while making the image look natural.

Samsung, in turn, introduced with the Galaxy S24 series and the new versions of its foldable phones the Galaxy AI smart assistant, which offers a wide range of artificial intelligence features designed to make users' lives easier.

But how did these companies manage to move the massive computing power required for AI from the cloud to small devices the size of a smartphone?

The massive computing power required to run AI applications has long been confined to giant servers in cloud data centers, but as the technology has advanced, companies have realized the importance of letting users tap AI capabilities directly on their personal devices.

To achieve this, companies have moved a large portion of data processing and analysis from the cloud to what is known as the edge: the devices users interact with directly, such as smartphones and Internet of Things devices.

This shift is important for many reasons, the most prominent of which are:

  • Faster, smarter services: instead of sending data to the cloud and waiting for the results, it is processed locally on the device, giving applications instant responsiveness (see the sketch after this list).
  • Privacy protection: sensitive data never leaves the device, which protects user privacy.
  • Less dependence on an internet connection: on-device processing keeps working even without connectivity.
  • Lower power consumption: processing data directly on the device avoids the energy cost of constantly transmitting it to the cloud.
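A minimal Python sketch of this local-first pattern (every name here is hypothetical; `run_on_device` stands in for whatever NPU-backed runtime the phone actually ships, and the endpoint URL is a placeholder):

```python
import json
import urllib.request

CLOUD_ENDPOINT = "https://example.com/v1/generate"   # hypothetical cloud API


def local_model_available():
    """Stand-in check for a usable on-device model (hypothetical)."""
    return True


def run_on_device(prompt):
    """Placeholder for on-device NPU inference."""
    return f"(local) reply to: {prompt}"


def generate(prompt):
    # Local-first: process on the device when possible -- faster,
    # private, and it works offline.
    if local_model_available():
        return run_on_device(prompt)
    # Fallback: ship the request to the cloud, paying network latency
    # and exposing the prompt to a remote server.
    req = urllib.request.Request(
        CLOUD_ENDPOINT,
        data=json.dumps({"prompt": prompt}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["text"]


print(generate("summarize my last email"))
```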

How did companies make this shift?

Companies have developed powerful systems-on-chip (SoCs) specifically designed to run generative AI models efficiently. These processors rely on neural processing units (NPUs) capable of performing 30 trillion operations per second (30 TOPS) or more, enabling them to generate new content instantly.
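As a back-of-the-envelope illustration of what that figure means, assuming the common rule of thumb of roughly two operations per model parameter per generated token (in practice memory bandwidth, not raw compute, is usually the bottleneck):

```python
# Rough compute ceiling for token generation on a 30-TOPS NPU.
npu_ops_per_second = 30e12      # 30 trillion operations per second (30 TOPS)
model_parameters = 3.8e9        # a small on-device model, ~3.8B parameters

ops_per_token = 2 * model_parameters              # ~7.6e9 ops per token
tokens_per_second = npu_ops_per_second / ops_per_token

print(f"~{tokens_per_second:,.0f} tokens/s (theoretical ceiling)")
# ~3,947 tokens/s -- far more than a chat reply needs, which is why
# even a phone-sized NPU can feel instantaneous.
```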

For example, Tensor Processing Units (TPUs) are a key component in enabling the advanced AI capabilities of the new Pixel 9 phones, as Google developed them specifically to speed up the intensive computations required by machine learning.

TPUs are built around grids of processing elements called systolic arrays, which stream large amounts of data through the chip and process it in parallel. This design makes the Tensor G processors that power Google's phones faster and more power efficient at these calculations.
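To see how a systolic array turns matrix multiplication into a flow of data, here is a minimal Python simulation (illustrative only: a real TPU updates every cell in parallel on each clock cycle, while this loop serializes the steps):

```python
import numpy as np


def systolic_matmul(A, B):
    """Simulate an output-stationary systolic array computing C = A @ B.

    Each cell (i, j) of the grid holds one output element. Rows of A
    stream in from the left and columns of B stream in from the top,
    skewed by one time step per row/column, so each cell sees exactly
    the operand pairs it needs as the data flows past.
    """
    n, m = A.shape
    m2, p = B.shape
    assert m == m2
    C = np.zeros((n, p))
    # The last operand pair reaches cell (n-1, p-1) at t = n + p + m - 3.
    for t in range(n + p + m - 2):
        for i in range(n):
            for j in range(p):
                k = t - i - j          # which operand arrives at this step
                if 0 <= k < m:
                    C[i, j] += A[i, k] * B[k, j]   # one multiply-accumulate
    return C


A = np.random.rand(3, 4)
B = np.random.rand(4, 2)
assert np.allclose(systolic_matmul(A, B), A @ B)
```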

The new Tensor G4 chip enables the Pixel 9 to run the multimodal Gemini Nano model, allowing the phone to better understand text, images, and audio and respond to your needs in smarter ways.

The Tensor G4 also significantly enhances the overall performance of the Pixel 9 phones, delivering 20% faster web browsing and 17% faster app launches compared with the Tensor G3, while consuming less power.

It is worth noting that Google developed the first TPUs in 2015 to accelerate the computations its large cloud servers perform while training AI models. In 2018, Google launched the Edge TPU, designed for low-power devices, paving the way for integrating AI into them. Then in 2021, it introduced the first TPUs designed specifically for Pixel phones, as part of its Tensor chips.

Artificial Intelligence and RAM:

Last week, Apple unveiled the new iPhone 16 series, which features advanced artificial intelligence capabilities through Apple Intelligence: you can rewrite emails with a creative touch, create emoji that reflect your personality, and enjoy a much-improved Siri experience. But it didn't stop there; Apple also added another important upgrade to its phones: more random access memory (RAM).

Apple typically doesn't disclose the RAM capacity of iPhones. However, Johny Srouji, Apple's senior vice president of hardware technologies, confirmed in an interview about the hardware requirements for the new features that all iPhone 16 series phones ship with 8 GB of RAM, up from 6 GB in the base iPhone 15 models the company launched last year.

Apple wasn't the only one to increase the RAM capacity of its phones. Google preceded it last month with similar changes in the Pixel 9 series, raising the minimum RAM you can get this year to 12 GB.

This increase in memory is a direct result of the evolution of artificial intelligence: large language models require substantial computational resources, including plenty of RAM, to run efficiently.

The role of memory in supporting artificial intelligence:

To provide fast, efficient responses, AI models must be ready to use at any moment. To achieve this, they are kept permanently loaded in RAM, which ensures an immediate response to user commands rather than a long wait while the model loads from internal storage.
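In code, the difference is simply where the load cost is paid. A minimal sketch with hypothetical names, where `time.sleep` stands in for reading gigabytes of weights from flash storage:

```python
import time

MODEL_PATH = "model-weights.bin"     # hypothetical on-device model file


def load_model(path):
    """Stand-in for reading ~2 GB of weights from flash storage."""
    time.sleep(2.0)                  # pretend the read takes ~2 seconds
    return {"weights": path}         # the "loaded" model


def run_inference(model, prompt):
    """Placeholder for the actual forward pass on the NPU."""
    return f"reply to: {prompt}"


# Naive approach: reload the weights on every request -- every single
# reply pays the multi-second load from storage.
def answer_cold(prompt):
    return run_inference(load_model(MODEL_PATH), prompt)


# On-device approach: load once, keep the model resident in RAM, and
# every subsequent request responds immediately.
RESIDENT_MODEL = load_model(MODEL_PATH)


def answer_warm(prompt):
    return run_inference(RESIDENT_MODEL, prompt)


print(answer_warm("set a timer for 5 minutes"))   # instant
```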

But AI models are also quite large. Even a small model like Microsoft's Phi-3-mini takes up about 1.8 GB, which means a sizable share of the phone's memory is dedicated to AI, and that can affect the performance of other applications on devices with limited RAM.
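The arithmetic behind that number is straightforward: RAM footprint is roughly the parameter count times the bytes per weight. Phi-3-mini has about 3.8 billion parameters, and the ~1.8 GB figure is consistent with roughly 4-bit (half-byte) quantized weights, which is an assumption on our part:

```python
# Rough model-memory arithmetic: weight count x bytes per weight.
params = 3.8e9                   # Phi-3-mini, ~3.8B parameters
bytes_per_weight = 0.5           # assumed ~4-bit quantization

footprint_gb = params * bytes_per_weight / 1e9
print(f"~{footprint_gb:.1f} GB of RAM just for the weights")  # ~1.9 GB
# On top of this come activations and the attention cache, so an 8 GB
# phone gives up a meaningful slice of its memory to keep the model
# resident.
```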

That’s why Google initially didn’t offer the Gemini Nano model on the Pixel 8 last year but did offer it on the Pixel 8 Pro, which has 12 GB of RAM, 4 GB more than the Pixel 8. Android VP and general manager Seang Chau has confirmed that larger RAM capacities in smartphones are a key factor in enabling advanced AI features.

The same trend toward more RAM is evident in laptops, with Microsoft setting a 16 GB minimum for Copilot Plus PCs, its label for devices capable of running AI features efficiently. Apple is also reportedly planning to increase the memory in its upcoming laptops after years of shipping 8 GB as the default.

Apple has not explicitly specified how much RAM is needed to run Apple Intelligence features, but all devices eligible for Apple Intelligence have at least 8 GB of RAM.

It is worth noting that the eligible iPhone 15 Pro models have 8 GB of memory, while the base iPhone 15 versions cannot run Apple Intelligence because they have only 6 GB.

A mere 2 GB increase in the iPhone may seem modest, but it’s a positive step for Apple, which has been slow to increase memory capacity in its devices in recent years, and it suggests the company is beginning to respond to growing user demand. We can expect more developments in this area in the near future.

