In recent years, LLMs have improved significantly in overall performance. When they first went mainstream a couple of years ago, their seemingly human-like conversational abilities were already impressive, but their reasoning consistently fell short. They could describe any sorting algorithm in the style of your favorite author, yet they couldn't reliably perform addition. Since then, they have improved substantially, and it has become increasingly difficult to find examples where they fail to reason. This has fostered the belief that, with enough scaling, LLMs will learn general reasoning.
Separately, Kalshi has also suspended and fined a politician running for Governor of California. "In May, our Surveillance Department saw an online video by a candidate for Governor of California that appeared to show him trading on his own candidacy," Kalshi says. "We immediately froze his account and opened an investigation. The candidate was initially cooperative and acknowledged that this violated the exchange rules. As a candidate in a race, you can (and probably should) follow and use Kalshi's market forecast, but you should not trade on it."
But Yang Zhilin did not waver: he proposed concentrating resources on foundational algorithms and the new K2 model, no longer chasing "burning cash for users" but instead trying to win users with technology.
OpenAI will consume 2 gigawatts of Trainium compute for training and inference.