

Diego Miguel Lozano 1,†, Daryna Dementieva 1,2, Alexander Fraser 1,2
1 School of Computation, Information and Technology, Technical University of Munich (TUM)
2 Munich Center for Machine Learning (MCML)
† Currently affiliated with ELLIS Alicante.
Semantic Textual Similarity (STS) is a crucial component of many Natural Language Processing (NLP) applications. However, existing approaches typically reduce semantic nuances to a single score, limiting interpretability. To address this, we introduce the task of Dissimilar Span Detection (DSD), which aims to identify semantically differing spans between pairs of texts. This can help users understand which particular words or tokens negatively affect the similarity score, and can also be used to improve performance in STS-dependent downstream tasks. Furthermore, we release a new dataset suitable for the task, the Span Similarity Dataset (SSD), developed through a semi-automated pipeline combining large language models (LLMs) with human verification. We propose and evaluate several baseline methods for DSD: unsupervised approaches based on LIME, SHAP, LLMs, and our own method, as well as a supervised approach. While LLMs and supervised models achieve the highest performance, overall results remain low, highlighting the complexity of the task. Finally, we present an additional experiment showing how DSD can improve performance in the specific task of paraphrase detection.
Dissimilar Span Detection (DSD) is a task designed to improve the interpretability and reliability of Semantic Textual Similarity (STS) scores. Given two texts, the goal is to identify pairs of spans that share a common semantic function but differ in meaning.
On April 15, 2010, the Colorado Court of Appeals dismissed.
On March 15, 2012, the Oregon Court of Appeals dismissed.
Cosine similarity: 0.74
She went on to win a silver medal in the 2024 Olympics.
She went on to win first place in the 2024 Olympics.
Cosine similarity: 0.91
* Cosine similarities obtained with the model all-MiniLM-L6-v2.
The examples above show why relying solely on cosine similarity to detect dissimilar texts may not be enough: in some cases, texts that differ in meaning can yield a higher cosine similarity than semantically equivalent texts.
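As a minimal sketch of the score reported above, cosine similarity can be computed directly from two embedding vectors. The toy vectors below are illustrative only; in the actual examples, the embeddings come from the all-MiniLM-L6-v2 sentence-transformer model.

```python
from math import sqrt

def cosine_similarity(u: list[float], v: list[float]) -> float:
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sqrt(sum(a * a for a in u))
    norm_v = sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Vectors pointing in the same direction score 1.0; orthogonal vectors score 0.0.
print(round(cosine_similarity([1.0, 2.0, 3.0], [2.0, 4.0, 6.0]), 4))  # → 1.0
print(round(cosine_similarity([1.0, 0.0], [0.0, 1.0]), 4))            # → 0.0
```

Because the score collapses two whole texts into a single number, a pair differing only in one crucial span (e.g. "silver medal" vs. "first place") can still score very high, which is exactly the gap DSD aims to address.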
@inproceedings{dmlozano-etal-2023-dsd,
    title     = {Explainable Semantic Textual Similarity via Dissimilar Span Detection},
    author    = {Miguel Lozano, Diego and Dementieva, Daryna and Fraser, Alexander},
    address   = {Palma de Mallorca, Spain},
    booktitle = {TODO},
    publisher = {ELRA Language Resources Association (ELRA)},
    month     = may,
    year      = 2026,
}