Are AI chips a way of tricking us? A group of researchers has managed to run Meta's AI on a 26-year-old machine.
We were told that generative AI needed dedicated AI chips, yet they got it working on a Pentium II with 128 MB of RAM. How is that possible?
There are currently only two ways to use generative AI: on computers and smartphones with AI chips, or on the remote servers of AI companies, like ChatGPT, Llama, and the like. Is that really unavoidable, or is it just an attempt to dominate the market?
EXO Labs is a semi-anonymous group formed by researchers and experts from Oxford University who want to avoid a future in which AI is owned by large corporations and a handful of tech companies, while users are slaves to their subscriptions or to the devices those companies want to sell us.
Their goal is to prove that AI can run on any device, not just one with an AI chip. And they’ve already achieved a remarkable feat: they’ve gotten Meta’s Llama AI to run on a Pentium II with 128 MB of RAM, a machine that went on sale 26 years ago.
The first thing they did was buy an Elonex computer with a 350 MHz Pentium II processor, 128 MB of RAM, a 1.6 GB hard drive, and Windows 98, which they found on eBay for about $143.
To transfer the software needed to run the AI, they couldn't use their own flash drives and USB sticks: all were larger than 4 GB, a size the machine's FAT32 setup wouldn't accept. So they resorted to good old FileZilla and transferred the files over FTP.
To run Llama, Meta's open-source AI, they had to compile it in C so that it would work on Windows 98 and the Pentium II. That meant first adapting modern C++ code into C code compatible with quarter-century-old hardware.
It wasn't that complicated, since C is highly compatible with older computers: mostly they just had to declare all variables at the beginning of each function, as old compilers require. You can watch the video of the process at this link.
To compile the code, they used the venerable Borland C++ 5.02, along with a simple version of Llama created by OpenAI co-founder Andrej Karpathy, who was Tesla’s AI director. And this is the result: Llama running natively on a Pentium II with 128 MB of RAM.
EXO Labs’ goal with this experiment is to prove that if generative AI can run on a 26-year-old Pentium II, it can run on any modern mobile phone or PC, which is thousands of times more powerful. You can see the full write-up on their blog, and download the code from GitHub to try it out for yourself.