📢 Good morning! Gate Square | 4/5 Trending: #假期持币指南
🌿 Hiking or chart-watching? #假期持币指南 helps you spend a truly relaxed long holiday!
Spring is in the air. Will you be breathing deeply in the mountains, or hunting for entries in the candlestick charts? This Qingming holiday, show off your coin-holding attitude and be a well-rested trader!
🎁 Share your life or trading reflections for a chance to be one of 5 lucky winners splitting $1,000 in position-experience vouchers!
💬 Let's chat over tea:
1️⃣ Holiday mindset: Are you the "notifications off, fully unplugged" type, or the "check the charts every 30 minutes" type?
2️⃣ Lazy tricks: Don't want to watch the market over the holiday? Share your "set-and-forget" strategy (DCA / grid trading / earn products).
3️⃣ April outlook: After the holiday, which coin do you expect to "bloom in spring"?
Share your holiday style 👉 https://www.gate.com/post
📅 4/4 15:00 – 4/6 18:00 (UTC+8)
Since o1 launched, the biggest complaint is that it's "too verbose."
I just wanted to fix a simple bug, and it gave me three background explanations, two solution approaches plus error handling, and then wished me good luck on top of that.
I was only looking for a spelling mistake on line 12, but ended up having to review Python naming conventions all over again.
This blame falls squarely on RLHF. Annotators tend to give higher scores to longer responses, thinking more text looks more professional.
So the model desperately piles up "seemingly useful" filler, while the actual core information gets diluted.
Look at Claude next door—it's much more sensible about this, knowing what length matches what question.
The most painful part is the wallet: o1's output pricing is $60/1M tokens. For something that should take 100 tokens to explain, it deliberately pads it to 500, multiplying costs by five on the spot.
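The arithmetic above can be sketched in a few lines. This is just a back-of-envelope calculation using the $60/1M output-token price quoted in the post; the 100-token and 500-token figures are the post's own illustrative numbers, not measured values:

```python
# Back-of-envelope cost of verbose output, assuming the quoted
# price of $60 per 1M output tokens.
PRICE_PER_TOKEN = 60 / 1_000_000  # dollars per output token

def output_cost(tokens: int) -> float:
    """Dollar cost of generating `tokens` output tokens."""
    return tokens * PRICE_PER_TOKEN

concise = output_cost(100)  # an answer that fits in 100 tokens
verbose = output_cost(500)  # the same answer padded to 500 tokens

print(f"concise:    ${concise:.4f}")               # $0.0060
print(f"verbose:    ${verbose:.4f}")               # $0.0300
print(f"multiplier: {verbose / concise:.0f}x")     # 5x
```

Fractions of a cent per reply sound trivial, but the 5x multiplier applies to every call, so at scale the padding dominates the bill.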
Now when asking questions you have to specifically add "code only," and even that doesn't always work.
The model's current state is: genius-level IQ, but EQ completely offline—it simply doesn't know when to shut up.