
A major price change recently took effect for the API of OpenAI's o3 model: an 80% reduction in the cost per token. A cut of that size prompted questions and speculation in the developer community about whether the lower price would come with compromises in the model's speed or output quality, since drastically cheaper service often raises concerns about reduced capability or reliability.
Testing conducted after the new pricing took effect, however, shows that the performance of the o3 API is unchanged: speed, responsiveness, and output quality remain the same as before the price cut. Developers and businesses using the API can therefore take advantage of the substantially lower cost without any degradation in the service their applications and workflows depend on.
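To put an 80% per-token reduction in concrete terms, the sketch below estimates the cost of a single request before and after such a cut. The per-token rates and token counts are illustrative assumptions for the example, not figures taken from the article.

```python
# Rough cost comparison for one API request before and after an 80%
# per-token price cut. All rates and token counts below are illustrative
# assumptions, not official pricing.

OLD_PRICE_PER_1M_INPUT = 10.00   # assumed USD per 1M input tokens (pre-cut)
OLD_PRICE_PER_1M_OUTPUT = 40.00  # assumed USD per 1M output tokens (pre-cut)
CUT = 0.80                       # the 80% reduction described in the article

def request_cost(input_tokens: int, output_tokens: int,
                 in_rate: float, out_rate: float) -> float:
    """Cost in USD for one request at the given per-1M-token rates."""
    return (input_tokens / 1_000_000) * in_rate + (output_tokens / 1_000_000) * out_rate

if __name__ == "__main__":
    tokens_in, tokens_out = 2_000, 500  # example request size

    before = request_cost(tokens_in, tokens_out,
                          OLD_PRICE_PER_1M_INPUT, OLD_PRICE_PER_1M_OUTPUT)
    after = request_cost(tokens_in, tokens_out,
                         OLD_PRICE_PER_1M_INPUT * (1 - CUT),
                         OLD_PRICE_PER_1M_OUTPUT * (1 - CUT))

    print(f"Before cut: ${before:.4f} per request")
    print(f"After cut:  ${after:.4f} per request ({CUT:.0%} lower)")
```

Because the cut applies per token, the 80% saving carries through unchanged regardless of request size or the input/output mix.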
Source: https://www.bleepingcomputer.com/news/artificial-intelligence/chatgpt-o3-api-80-percent-price-drop-has-no-impact-on-performance/