Tether launches on-device medical AI for mobile: 1.7B small model surpasses models 16 times its size, entirely eliminating reliance on the cloud
According to monitoring by Beating, the AI research team of USDT issuer Tether today announced the QVAC MedPsy series of medical language models: an on-device, localized medical AI designed for low-compute terminals such as smartphones and wearables, able to run without relying on cloud servers. Thanks to an efficient architecture, it delivers performance well beyond its model size. The 1.7B-parameter version achieved an average score of 62.62 across seven closed medical benchmarks, exceeding Google's MedGemma-4B by 11.42 points, and beat MedGemma-27B, a model with nearly 16 times as many parameters, on real clinical scenarios such as HealthBench Hard. The 4B-parameter version scored even higher at 70.54, fully surpassing larger models while cutting inference token consumption by as much as 3.2 times. The models are released in quantized GGUF format (about 1.2 GB for the 1.7B version), making them suitable for mobile and edge deployment.
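As a quick sanity check on the figures above, the implied MedGemma-4B average and the parameter-count ratio can be recomputed. This is a minimal sketch using only numbers stated in the article; the MedGemma-4B score itself is inferred from the reported 11.42-point margin, not quoted directly.

```python
# Figures reported in the article
qvac_1p7b_score = 62.62        # 1.7B model, average over seven medical benchmarks
margin_over_medgemma_4b = 11.42  # stated lead over Google MedGemma-4B

# Implied MedGemma-4B average score (derived, not stated in the article)
medgemma_4b_score = qvac_1p7b_score - margin_over_medgemma_4b
print(round(medgemma_4b_score, 2))  # 51.2

# "nearly 16 times larger": 27B vs 1.7B parameters
param_ratio = 27 / 1.7
print(round(param_ratio, 1))        # 15.9
```

The ~15.9x parameter ratio is consistent with the article's "nearly 16 times" phrasing.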
This release challenges the traditional assumption that bigger models mean better performance. It focuses on improving efficiency through staged medical post-training (supervised learning on clinical-reasoning data plus reinforcement learning), enabling genuine on-device privacy protection and low-latency inference. Tether CEO Paolo Ardoino said this allows medical AI to process sensitive data locally, in hospitals and on devices, without transmitting it to the cloud, thereby reducing costs, latency, and privacy risks. The release is expected to reshape medical-AI infrastructure and promote localized deployment worldwide, especially in underdeveloped regions.