GPT-5.5 Instant is now available to all users, and OpenAI teaches you how to write smarter, more efficient prompts

OpenAI announces GPT-5.5 Instant is now available to all users, with hallucination rates reduced by 52.5% in high-risk scenarios, and the AIME math test score jumping from 65.4 to 81.2.

GPT-5.5 Instant is now officially open to all users. According to the announcement, fabricated statements in high-risk scenarios such as medicine, law, and finance are reduced by 52.5%, and incorrect statements in user-flagged conversations are down 37.3%.

Mathematical reasoning has also moved up a level: the AIME 2025 score rose from 65.4 to 81.2. AIME is a stress test of reasoning-chain integrity, so the jump points to structural improvements in the model's multi-step logic.

Access was extended to all users on May 5; free accounts can use it as well, but personalized memory features (referencing past conversations, file uploads, Gmail integration) remain locked to Plus and Pro plans and are limited to the web version for now.

OpenAI teaches you how to write prompts

Just a few days ago, OpenAI also publicly released an official guide on prompt structure recommendations. The company states that most people’s prompt-writing methods are fundamentally flawed.

OpenAI provides a suggested prompt structure in the developer documentation, consisting of seven sections, in order:

  • Role
  • Personality
  • Goal
  • Success criteria
  • Constraints
  • Output
  • Stop rules
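
As a sketch, the seven sections can be assembled into a single prompt programmatically. The section names follow the list above; the sample contents and the helper function are illustrative placeholders, not OpenAI's wording.

```python
# Sketch: assemble a prompt from the seven recommended sections.
# The section names match the list above; the sample contents are
# illustrative placeholders, not official examples.

SECTIONS = ["Role", "Personality", "Goal", "Success criteria",
            "Constraints", "Output", "Stop rules"]

def build_prompt(parts: dict) -> str:
    """Join the seven sections, in the recommended order, into one prompt."""
    missing = [s for s in SECTIONS if s not in parts]
    if missing:
        raise ValueError(f"missing sections: {missing}")
    return "\n\n".join(f"# {name}\n{parts[name]}" for name in SECTIONS)

prompt = build_prompt({
    "Role": "You are a release-notes editor.",
    "Personality": "Concise and neutral in tone.",
    "Goal": "Summarize the changelog below for end users.",
    "Success criteria": "Every user-facing change appears exactly once.",
    "Constraints": "Do not mention internal ticket numbers.",
    "Output": "A markdown bullet list, max 10 items.",
    "Stop rules": "Stop after the list; add no closing remarks.",
})
```

Keeping the sections in a fixed order makes it easy to audit a prompt against the checklist: a missing section fails loudly instead of silently degrading the output.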

The first key shift is “result-oriented.” The old approach was a step-by-step directive: do A first, then B, then output C.

The new approach defines the endpoint first, clearly stating success criteria so the model can decide which path to take to reach it. OpenAI explicitly recommends that the first change to an old prompt is to remove procedural steps and replace them with outcome descriptions.
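
For illustration, the same task phrased both ways might look like this (the wording is a hypothetical example, not taken from OpenAI's guide):

```python
# Illustrative only: the same task phrased procedurally (old style)
# vs. outcome-first (new style).

procedural = (
    "First read the bug report, then list the affected modules, "
    "then write a summary, then output it as three bullet points."
)

outcome_first = (
    "Goal: a summary of the bug report that a triage engineer can act on.\n"
    "Success criteria: every affected module is named; no speculation "
    "about root cause.\n"
    "Output: three bullet points."
)
```

The procedural version pins the model to one path; the outcome-first version states what "done" looks like and leaves the route to the model.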

The second shift concerns the use of reasoning effort. Reasoning effort indicates how deeply the model “thinks” before answering; higher levels mean longer pre-answer thought and higher costs.

The official advice is that low or medium reasoning effort is sufficient for most production scenarios; high effort should be reserved for complex multi-step reasoning, formatted outputs, or data extraction. Anywhere else, high reasoning effort is simply a waste of resources.
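
In the OpenAI Responses API, reasoning depth is set via a `reasoning` parameter with an `effort` field. A minimal sketch of picking the level per task, with the model name as a placeholder and the network call left commented out:

```python
# Sketch of choosing reasoning effort per task, assuming the
# Responses API's `reasoning={"effort": ...}` parameter.
# `request_params` only builds the payload; the actual call is
# commented out because it needs an API key.

def request_params(task: str, hard: bool) -> dict:
    """Default to low effort; reserve high effort for hard tasks."""
    return {
        "model": "gpt-5.5-instant",  # placeholder model name
        "input": task,
        "reasoning": {"effort": "high" if hard else "low"},
    }

request = request_params("Extract all dates from this contract.", hard=True)
# from openai import OpenAI
# OpenAI().responses.create(**request)
```

Defaulting to low and opting in to high keeps the cost profile predictable across a production workload.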

Other specific recommendations are also worth noting:

  • Stop rules should clearly specify “under what conditions to stop,” such as stopping after finding the first result that meets the criteria.
  • Retrieval budgets should set a maximum number of searches to prevent infinite expansion.
  • For drafting tasks, it is more effective to define what you don't want than what you do want — negative constraints are easier for the model to act on than positive descriptions.
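
Put together, the three recommendations might look like this in a single prompt (the wording is an illustrative sketch, not an official example):

```python
# Illustrative prompt fragment combining a stop rule, a retrieval
# budget, and negative constraints; wording is mine, not OpenAI's.

research_prompt = "\n".join([
    "Goal: find one peer-reviewed source supporting the claim below.",
    "Stop rule: stop as soon as the first source meeting the criteria "
    "is found; do not keep searching for alternatives.",
    "Retrieval budget: at most 5 web searches in total.",
    "Do not: cite blog posts, preprints, or press releases.",
])
```

The stop rule and retrieval budget bound the search, while the "Do not" line applies the negative-constraint advice from the last bullet.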