Record

 
 
  Multivariate Fine-Grained Complexity of Longest Common Subsequence

Bringmann, K., & Künnemann, M. (2018). Multivariate Fine-Grained Complexity of Longest Common Subsequence. Retrieved from http://arxiv.org/abs/1803.00938.


Basic data

Genre: Research paper

Files

arXiv:1803.00938.pdf (Preprint), 786 KB
Name: arXiv:1803.00938.pdf
Description: File downloaded from arXiv at 2018-05-03 08:48. Presented at SODA'18, full version, 66 pages.
OA status: -
Visibility: Public
MIME type / checksum: application/pdf / [MD5]
Technical metadata: -
Copyright date: -
Copyright info: -

Creators

Creators:
Bringmann, Karl¹, Author
Künnemann, Marvin¹, Author
Affiliations:
¹ Algorithms and Complexity, MPI for Informatics, Max Planck Society, ou_24019

Content

Keywords: Computer Science, Computational Complexity, cs.CC; Computer Science, Data Structures and Algorithms, cs.DS
Abstract: We revisit the classic combinatorial pattern matching problem of finding a longest common subsequence (LCS). For strings $x$ and $y$ of length $n$, a textbook algorithm solves LCS in time $O(n^2)$, but although much effort has been spent, no $O(n^{2-\varepsilon})$-time algorithm is known. Recent work indeed shows that such an algorithm would refute the Strong Exponential Time Hypothesis (SETH) [Abboud, Backurs, Vassilevska Williams + Bringmann, K\"unnemann FOCS'15]. Despite the quadratic-time barrier, for over 40 years an enduring scientific interest continued to produce fast algorithms for LCS and its variations. Particular attention was put into identifying and exploiting input parameters that yield strongly subquadratic time algorithms for special cases of interest, e.g., differential file comparison. This line of research was successfully pursued until 1990, at which time significant improvements came to a halt. In this paper, using the lens of fine-grained complexity, our goal is to (1) justify the lack of further improvements and (2) determine whether some special cases of LCS admit faster algorithms than currently known. To this end, we provide a systematic study of the multivariate complexity of LCS, taking into account all parameters previously discussed in the literature: the input size $n:=\max\{|x|,|y|\}$, the length of the shorter string $m:=\min\{|x|,|y|\}$, the length $L$ of an LCS of $x$ and $y$, the numbers of deletions $\delta := m-L$ and $\Delta := n-L$, the alphabet size, as well as the numbers of matching pairs $M$ and dominant pairs $d$. For any class of instances defined by fixing each parameter individually to a polynomial in terms of the input size, we prove a SETH-based lower bound matching one of three known algorithms. Specifically, we determine the optimal running time for LCS under SETH as $(n+\min\{d, \delta \Delta, \delta m\})^{1\pm o(1)}$. [...]
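
For orientation, the textbook $O(n^2)$-time algorithm mentioned in the abstract is the standard dynamic program over prefixes of $x$ and $y$. The following Python sketch is illustrative only and not taken from the paper; the function name and the small example strings are our own. It also reports the derived parameters $n$, $m$, $L$, $\delta$ and $\Delta$ as defined in the abstract.

def lcs_length(x: str, y: str) -> int:
    """Textbook O(n^2)-time dynamic program for the length L of a longest
    common subsequence of x and y, using O(min(|x|, |y|)) extra space."""
    if len(x) < len(y):
        x, y = y, x  # keep the DP rows indexed by the shorter string
    prev = [0] * (len(y) + 1)  # LCS lengths for the previous prefix of x
    for i in range(1, len(x) + 1):
        curr = [0] * (len(y) + 1)
        for j in range(1, len(y) + 1):
            if x[i - 1] == y[j - 1]:
                # matching pair: extend an LCS of the two shorter prefixes
                curr[j] = prev[j - 1] + 1
            else:
                # otherwise drop the last symbol of x or of y
                curr[j] = max(prev[j], curr[j - 1])
        prev = curr
    return prev[len(y)]

# Example (strings chosen only for illustration): parameters from the abstract.
x, y = "ABCBDAB", "BDCABA"
n, m = max(len(x), len(y)), min(len(x), len(y))
L = lcs_length(x, y)          # L = 4, e.g. "BCBA"
delta, Delta = m - L, n - L   # deletions in the shorter resp. longer string
print(n, m, L, delta, Delta)  # prints: 7 6 4 2 3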

Details

Language(s): eng - English
Date: 2018-03-02, 2018
Publication status: Published online
Pages: 66 p.
Place, publisher, edition: -
Table of contents: -
Review type: -
Identifiers: arXiv: 1803.00938
URI: http://arxiv.org/abs/1803.00938
BibTex Citekey: Bringmann_arXiv1803.00938
Degree type: -
