Sentence Transformers v5.3.0 adds configurable contrastive loss directions, hardness weighting, new regularization losses, and Transformers v5 compatibility. It lets you try richer retrieval training losses without rewriting your stack.

- MultipleNegativesRankingLoss with configurable InfoNCE directions and partitioning, so the same training API can now express standard, symmetric, and GTE-style contrastive setups, according to the release thread.
- GlobalOrthogonalRegularizationLoss for reducing unrelated embedding similarity and CachedSpladeLoss for memory-efficient SPLADE training, as detailed in the v5.3 notes.
- requests swapped for optional httpx, per the changelog thread.

The biggest API change is in MultipleNegativesRankingLoss. The maintainer says v5.3.0 adds new directions and partition_mode parameters, letting you choose interactions like query_to_doc, doc_to_query, query_to_query, and doc_to_doc instead of being locked to one InfoNCE formulation. The example shown in [img:0|Loss config] uses a joint partition with all four directions enabled, which makes the update more than a paper-level tweak: it is a direct training config change.
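To make the direction idea concrete, here is a minimal NumPy sketch of multi-direction in-batch InfoNCE. It covers only the query_to_doc and doc_to_query directions (the standard and symmetric setups); the same-modality directions and partition modes add further similarity terms that are omitted here, and none of this is the library's actual implementation.

```python
import numpy as np

def info_nce(sim, temperature=0.05):
    """Row-wise softmax cross-entropy; the positive for row i sits at column i."""
    logits = sim / temperature
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

def contrastive_loss(q, d, directions=("query_to_doc",), temperature=0.05):
    """Average InfoNCE over the requested directions (illustrative subset only)."""
    sim = q @ d.T  # cosine similarity when q and d are L2-normalized
    losses = []
    if "query_to_doc" in directions:
        losses.append(info_nce(sim, temperature))    # pick the right doc per query
    if "doc_to_query" in directions:
        losses.append(info_nce(sim.T, temperature))  # pick the right query per doc
    return float(np.mean(losses))

# Toy batch: 4 queries and 4 matching docs, unit-normalized
rng = np.random.default_rng(0)
q = rng.normal(size=(4, 16)); q /= np.linalg.norm(q, axis=1, keepdims=True)
d = q + 0.1 * rng.normal(size=(4, 16)); d /= np.linalg.norm(d, axis=1, keepdims=True)

standard = contrastive_loss(q, d, ("query_to_doc",))
symmetric = contrastive_loss(q, d, ("query_to_doc", "doc_to_query"))
```

The symmetric variant simply averages both directions over the same similarity matrix, which is why enabling extra directions costs no additional encoder forward passes.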
The same loss now supports hardness weighting through hardness_mode and hardness_strength. In the thread, the author says it “up-weights harder negatives in the softmax” and that the feature also works with CachedMNRL hardness weighting. For embedding teams already training retrievers on in-batch negatives, that means richer contrastive objectives without rewriting trainers or data pipelines.
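One plausible reading of "up-weights harder negatives in the softmax" is to scale each negative's term in the denominator by a function of its similarity. The sketch below does exactly that with an exponential weight; the parameter name strength mirrors hardness_strength, but the exact formulation in the library may differ.

```python
import numpy as np

def hardness_weighted_info_nce(q, d, strength=0.0, temperature=0.05):
    """InfoNCE where each in-batch negative's denominator term is scaled by
    exp(strength * similarity), so harder (more similar) negatives count more.
    strength=0 recovers the plain loss. A hypothetical formulation, not the
    library's exact math."""
    sim = q @ d.T                               # cosine similarity for unit vectors
    n = sim.shape[0]
    exp_logits = np.exp(sim / temperature)
    weights = np.exp(strength * sim)            # up-weight hard negatives
    weights[np.arange(n), np.arange(n)] = 1.0   # the positive keeps weight 1
    pos = exp_logits[np.arange(n), np.arange(n)]
    denom = (weights * exp_logits).sum(axis=1)
    return float(-np.mean(np.log(pos / denom)))

rng = np.random.default_rng(1)
q = rng.normal(size=(8, 32)); q /= np.linalg.norm(q, axis=1, keepdims=True)
d = q + 0.2 * rng.normal(size=(8, 32)); d /= np.linalg.norm(d, axis=1, keepdims=True)

plain = hardness_weighted_info_nce(q, d, strength=0.0)
weighted = hardness_weighted_info_nce(q, d, strength=2.0)
```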
v5.3.0 also adds two new objectives aimed at retrieval workloads. GlobalOrthogonalRegularizationLoss penalizes high similarity among unrelated embeddings, and the maintainer says it can be combined with InfoNCE while sharing embeddings in a single forward pass. CachedSpladeLoss is described as a gradient-cached SPLADE loss that enables larger batch sizes "without extra GPU memory," which is the most deployment-relevant part of the release for sparse retrieval training.
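A simplified sketch of what "penalizing similarity among unrelated embeddings" can look like, in the spirit of the global orthogonal regularization idea: push off-diagonal pairwise similarities toward the statistics of random unit vectors (mean near 0, second moment near 1/d). This is an assumption-laden illustration, not the library's implementation.

```python
import numpy as np

def global_orthogonal_reg(embeddings):
    """Penalize off-diagonal cosine similarities that deviate from what random
    unit vectors would produce: squared mean plus excess second moment over 1/d.
    A simplified sketch in the spirit of GOR, not the library's exact loss."""
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    n, d = e.shape
    sim = e @ e.T
    off = sim[~np.eye(n, dtype=bool)]            # unrelated (non-self) pairs
    m1 = off.mean() ** 2                         # first moment should be ~0
    m2 = max((off ** 2).mean() - 1.0 / d, 0.0)   # second moment should be ~1/d
    return float(m1 + m2)

rng = np.random.default_rng(2)
spread_out = rng.normal(size=(16, 64))                            # near-orthogonal
collapsed = np.ones((16, 64)) + 0.01 * rng.normal(size=(16, 64))  # near-identical
```

Because the penalty only needs the batch's similarity matrix, it can reuse the same embeddings an InfoNCE term already computed, which matches the claim about sharing a single forward pass.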
The rest of the release is operational cleanup: a faster NoDuplicatesBatchSampler using hashing, a GroupByLabelBatchSampler fix for triplet losses, full compatibility with recent Transformers v5, and requests replaced by optional httpx. In a follow-up reply, the maintainer also pointed users to a ready-made sentence-transformers dataset catalog, which gives teams a quicker path to exercising the new losses on tagged Hugging Face datasets.
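The hashing speedup in the duplicate-free sampler is easy to picture: check membership in a hash set instead of comparing every text pairwise. A small illustrative sketch of that idea (sample structure, deferral policy, and names are assumptions, not the library's sampler):

```python
def no_duplicate_batches(samples, batch_size):
    """Yield batches in which no text appears twice. Each sample is a tuple of
    texts; hashed-text membership gives O(1) duplicate checks, and clashing
    samples are deferred to later batches. Illustrative sketch only."""
    pending = list(samples)
    while pending:
        batch, seen, deferred = [], set(), []
        for i, sample in enumerate(pending):
            if len(batch) == batch_size:
                deferred.extend(pending[i:])   # batch full; retry the rest later
                break
            keys = {hash(text) for text in sample}
            if keys & seen:
                deferred.append(sample)        # would duplicate a text in this batch
            else:
                batch.append(sample)
                seen |= keys
        yield batch
        pending = deferred

batches = list(no_duplicate_batches(
    [("a", "b"), ("a", "c"), ("d", "e")], batch_size=2))
```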
LLM Debate Benchmark ran 1,162 side-swapped debates across 21 models and ranked Sonnet 4.6 first, ahead of GPT-5.4 high. It adds a stronger adversarial eval pattern for judge or debate systems, but you should still inspect content-block rates and judge selection when reading the leaderboard.
release: OpenClaw shipped version 2026.3.22 with ClawHub, OpenShell plus SSH sandboxes, side-question flows, and more search and model options, then followed with a 2026.3.23 patch. Teams get a broader plugin surface, but should patch quickly and review plugin trust boundaries as the ecosystem grows.
release: Cursor shipped Instant Grep, a local regex index built from n-grams, inverted indexes, and Bloom filters that drops large-repo searches from seconds to milliseconds. Faster candidate retrieval shortens the coding-agent loop, especially when ripgrep-style scans become the bottleneck.
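The n-gram part of that design is the classic code-search trick: index trigrams to an inverted list of files, then only scan files that contain every trigram of the query. The sketch below shows that filtering step in plain Python; Cursor's actual implementation (and its Bloom-filter layer) is not public here, so treat this as an illustration of the idea.

```python
from collections import defaultdict

def trigrams(text):
    return {text[i:i + 3] for i in range(len(text) - 2)}

class TrigramIndex:
    """Trigram-to-file postings: a literal search only scans files containing
    every trigram of the query, then verifies with a real substring check.
    A sketch of the general technique, not Cursor's code."""
    def __init__(self):
        self.postings = defaultdict(set)
        self.files = {}

    def add(self, path, text):
        self.files[path] = text
        for gram in trigrams(text):
            self.postings[gram].add(path)

    def search(self, literal):
        grams = trigrams(literal)
        if not grams:                  # query shorter than 3 chars: scan everything
            candidates = set(self.files)
        else:
            candidates = set.intersection(*(self.postings[g] for g in grams))
        # verify candidates with a real scan; the index only over-approximates
        return sorted(p for p in candidates if literal in self.files[p])

idx = TrigramIndex()
idx.add("app.py", "def handle_request(req): return ok")
idx.add("util.py", "def parse(raw): return raw.strip()")
```

The speedup comes from the candidate set usually being a tiny fraction of the repo, so the expensive scan runs over a handful of files instead of all of them.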
breaking: ChatGPT now saves uploaded and generated files into an account-level Library that can be reused across conversations from the web sidebar or recent-files picker. It removes repetitive re-uploading and makes past PDFs, spreadsheets, and images part of a persistent working context.
breaking: Epoch AI says GPT-5.4 Pro elicited a publishable solution to one 2019 conjecture in its FrontierMath Open Problems set, with a formal writeup planned. Treat it as an early milestone worth reproducing, not blanket evidence that frontier models can already automate math research.
⬆️ I've just released Sentence Transformers v5.3.0! This release upgrades training with MultipleNegativesRankingLoss with alternative InfoNCE formulations and hardness weighting, adds two new losses, and more. Details in 🧵
Also in v5.3.0:
- Faster NoDuplicatesBatchSampler with hashing
- GroupByLabelBatchSampler fix for triplet losses
- Full recent Transformers v5 compatibility
- requests replaced with optional httpx
Check out the full release notes here: github.com/huggingface/se…