This new feature in ChatGPT will make it respond to you up to 5 times faster
Now that the benefits of the different AI platforms are widely known, the companies behind them are trying to optimize how they run. This is a technology that consumes significant resources, a problem those companies are working hard to mitigate.
One of the leaders in this sector is OpenAI. But latency remains a big issue for the company, and it directly affects the aforementioned ChatGPT.
You have probably run into the unpleasant situation more than once where the AI takes longer than expected to produce the results you asked for. Most users would obviously prefer to get this generated content almost instantly. The underlying problem is that today's large language model APIs regenerate the entire requested output from scratch, even when much of it is already known, and this introduces significant latency. OpenAI is now trying to solve that problem with a new feature.
Specifically, we're referring to a feature called "Predicted Outputs" that the tech giant is rolling out for the models behind its popular ChatGPT. It is designed for cases where most of the output is known in advance, such as rewriting a document or making small edits to existing code.
By passing the existing content to the model as a prediction, latency can be significantly reduced: instead of generating every token from scratch, the model only has to produce the parts that actually change, so ChatGPT can regenerate the full content much faster thanks to this new tool.
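To give a sense of how this works in practice, here is a minimal sketch using OpenAI's Python SDK, where the unchanged original text is supplied through the `prediction` parameter of the Chat Completions API; the sample code and the rename request are invented placeholders for illustration.

```python
from openai import OpenAI

client = OpenAI()

# Existing code that we expect to remain mostly unchanged in the reply.
code = """class User:
    first_name: str
    last_name: str
    email: str
"""

completion = client.chat.completions.create(
    model="gpt-4o-mini",  # Predicted Outputs supports the GPT-4o series
    messages=[
        {
            "role": "user",
            "content": "Rename first_name to given_name. "
                       "Reply only with the full updated code.",
        },
        {"role": "user", "content": code},
    ],
    # The original content is passed as a prediction, so the model can
    # skip over the parts of the output that do not need to be rewritten.
    prediction={"type": "content", "content": code},
)

print(completion.choices[0].message.content)
```

The closer the final output is to the supplied prediction, the bigger the latency win; tokens that diverge from the prediction are still generated normally.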
It is worth noting that OpenAI has tested this feature with some external partners, with very positive performance results. To give you an idea, in benchmark tests run by Microsoft's GitHub team, Predicted Outputs delivered AI processing speeds up to 5.8 times faster on Copilot Workspace workloads.
That kind of speedup should noticeably improve the everyday experience of working with AI. Of course, Predicted Outputs does come with some limitations: for now, it only supports the GPT-4o and GPT-4o mini model series.