Sam Altman responds to the U.S. Department of Defense choosing OpenAI over Anthropic: differences in operational control demands

Odaily Planet Daily reports that OpenAI co-founder Sam Altman held an AMA on the X platform addressing the company's recent collaboration with the U.S. Department of Defense. He revealed that Anthropic and the Department of Defense had been "very close to reaching an agreement," and that for most of the negotiations both sides were strongly motivated to cooperate. In such a high-pressure negotiating environment, however, the situation could deteriorate rapidly, which may be one of the key reasons the deal ultimately fell through. On security philosophy, OpenAI takes a "layered approach": building a security technology stack, deploying forward-deployed engineers (FDEs), involving security researchers in projects, and delivering via cloud deployment in direct partnership with the Department of Defense. Whereas Anthropic appears to favor explicit restrictions written into the contract itself, OpenAI prefers to rely on applicable legal frameworks and core technical security measures. Other companies may hold different views; Anthropic may have sought more operational control in the collaboration, which could also explain why the two companies' paths diverged.

