Metadata correction to 2023.semeval-1.103 (#2629)
jaindeepali010 authored Jul 12, 2023
1 parent 3fa8913 commit f620441
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion data/xml/2023.semeval.xml
@@ -1074,7 +1074,7 @@
<paper id="103">
<title><fixed-case>NITS</fixed-case>_<fixed-case>L</fixed-case>egal at <fixed-case>S</fixed-case>em<fixed-case>E</fixed-case>val-2023 Task 6: Rhetorical Roles Prediction of <fixed-case>I</fixed-case>ndian Legal Documents via Sentence Sequence Labeling Approach</title>
<author><first>Deepali</first><last>Jain</last><affiliation>Department of CSE, National Institute of Technology Silchar, India</affiliation></author>
-<author><first>Malaya</first><last>Borah</last><affiliation>Department of CSE, National Institute of Technology Silchar, India</affiliation></author>
+<author><first>Malaya Dutta</first><last>Borah</last><affiliation>Department of CSE, National Institute of Technology Silchar, India</affiliation></author>
<author><first>Anupam</first><last>Biswas</last><affiliation>Department of CSE, National Institute of Technology Silchar, India</affiliation></author>
<pages>751-757</pages>
<abstract>Legal documents are notorious for their complexity and domain-specific language, making them challenging for legal practitioners as well as non-experts to comprehend. To address this issue, the LegalEval 2023 track proposed several shared tasks, including the task of Rhetorical Roles Prediction (Task A). We participated as NITS_Legal team in Task A and conducted exploratory experiments to improve our understanding of the task. Our results suggest that sequence context is crucial in performing rhetorical roles prediction. Given the lengthy nature of legal documents, we propose a BiLSTM-based sentence sequence labeling approach that uses a local context-incorporated dataset created from the original dataset. To better represent the sentences during training, we extract legal domain-specific sentence embeddings from a Legal BERT model. Our experimental findings emphasize the importance of considering local context instead of treating each sentence independently to achieve better performance in this task. Our approach has the potential to improve the accessibility and usability of legal documents.</abstract>
