Language Model Adaptation for Legal Ukrainian Domain

Authors

Victor Sineglazov, Illia Savenko

DOI:

https://doi.org/10.18372/1990-5548.81.18977

Keywords:

intellectual text analysis, natural language processing, text embeddings, opinion mining, machine learning, BERT, SBERT, Legal-BERT

Abstract

Over recent decades, language models have made huge strides towards solving tasks that previously could be performed only by humans. Advances in NLP across different scopes make it possible to solve domain-specific tasks and to transfer knowledge from training data into useful inferences. This article presents an NLP modelling approach for a specific legal domain. It also explores the performance of pre-training small models and their utilization, and evaluates the resulting scores on a fine-tuned sentence-similarity task via SBERT. The experiments show that domain-specific pre-trained models can achieve better results than a generally trained language model. The article also presents a language model adapted to the Ukrainian legal domain.
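
For illustration, the minimal sketch below shows how sentence similarity is typically scored with an SBERT-style bi-encoder via the sentence-transformers library. The checkpoint name and the example sentences are assumptions chosen purely for demonstration; they are not the paper's Ukrainian legal model, which is not published on this page.

```python
# Minimal sketch of SBERT-style sentence-similarity scoring with the
# sentence-transformers library. The multilingual checkpoint below is an
# illustrative assumption, NOT the paper's Ukrainian legal model.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

# Hypothetical pair of legal-style sentences; in the paper's setting these
# would be Ukrainian legal texts.
sentences = [
    "The contract may be terminated by either party with 30 days' notice.",
    "Either side can end the agreement after giving one month's warning.",
]

# Encode both sentences into dense vectors and compare them with cosine
# similarity; higher scores indicate closer meaning.
embeddings = model.encode(sentences, convert_to_tensor=True)
score = util.cos_sim(embeddings[0], embeddings[1]).item()
print(f"Cosine similarity: {score:.3f}")
```

In the setting the abstract describes, such a bi-encoder would first be pre-trained or adapted on Ukrainian legal text and then fine-tuned on sentence pairs before similarity scores are evaluated.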

Author Biographies

Victor Sineglazov, National Aviation University, Kyiv

Doctor of Engineering Science

Professor

Head of the Department of Aviation Computer-Integrated Complexes

Faculty of Air Navigation Electronics and Telecommunications

Illia Savenko, National Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”

Post-graduate student

Artificial Intelligence Department

Institute for Applied System Analysis

References

Stefan Fischer, Kateryna Haidarzhyi, Jörg Knappen, Olha Polishchuk, Yuliya Stodolinska, and Elke Teich, “A Contemporary News Corpus of Ukrainian (CNC-UA): Compilation, Annotation, Publication,” In Proceedings of the Third Ukrainian Natural Language Processing Workshop (UNLP) @ LREC-COLING, 2024, pp. 1–7, Torino, Italy. ELRA and ICCL.

Maria Shvedova and Arsenii Lukashevskyi, “Creating Parallel Corpora for Ukrainian: A German-Ukrainian Parallel Corpus (ParaRook||DE-UK),” In Proceedings of the Third Ukrainian Natural Language Processing Workshop (UNLP) @ LREC-COLING, 2024, pp. 14–22, Torino, Italy. ELRA and ICCL.

Dmytro Chaplynskyi and Mariana Romanyshyn, “Introducing NER-UK 2.0: A Rich Corpus of Named Entities for Ukrainian,” In Proceedings of the Third Ukrainian Natural Language Processing Workshop (UNLP) @ LREC-COLING, 2024, pp. 23–29, Torino, Italy. ELRA and ICCL.

Artur Kiulian, Anton Polishko, Mykola Khandoga, Oryna Chubych, Jack Connor, Raghav Ravishankar, and Adarsh Shirawalmath, “From Bytes to Borsch: Fine-Tuning Gemma and Mistral for the Ukrainian Language Representation,” In Proceedings of the Third Ukrainian Natural Language Processing Workshop (UNLP) @ LREC-COLING, 2024, pp. 83–94, Torino, Italy. ELRA and ICCL.

Ilias Chalkidis, Manos Fergadiotis, Prodromos Malakasiotis, Nikolaos Aletras, and Ion Androutsopoulos, “LEGAL-BERT: The Muppets straight out of Law School,” In Findings of the Association for Computational Linguistics: EMNLP, 2020, pp. 2898–2904, Online. Association for Computational Linguistics. https://doi.org/10.18653/v1/2020.findings-emnlp.261

J. P. Aires, D. Pinheiro, V. S. d. Lima, et al., “Norm conflict identification in contracts,” Artificial Intelligence and Law, vol. 25, pp. 397–428, 2017. https://doi.org/10.1007/s10506-017-9205-x

S. Wehnert, S. Dureja, L. Kutty, et al., “Applying BERT Embeddings to Predict Legal Textual Entailment,” The Review of Socionetwork Strategies, vol. 16, pp. 197–219, 2022. https://doi.org/10.1007/s12626-022-00101-3

Christopher Manning, Prabhakar Raghavan, and Hinrich Schütze, “Introduction to Information Retrieval,” Cambridge University Press, 2008. https://doi.org/10.1017/CBO9780511809071.

Nils Reimers and Iryna Gurevych, “Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks,” In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP-IJCNLP), 2019, pp. 3982–3992, Hong Kong, China. Association for Computational Linguistics. arXiv preprint arXiv:1908.10084. Retrieved from https://arxiv.org/abs/1908.10084

I. Beltagy, K. Lo, and A. Cohan, “SciBERT: A Pretrained Language Model for Scientific Text,” 2019, arXiv preprint arXiv:1903.10676. Retrieved from https://arxiv.org/abs/1903.10676

S. Douka, H. Abdine, M. Vazirgiannis, R. El Hamdani, and D. Restrepo Amariles, “JuriBERT: A Masked-Language Model Adaptation for French Legal Text,” 2021, arXiv preprint arXiv:2110.01485. Retrieved from https://arxiv.org/abs/2110.01485

Y. Ganin and V. Lempitsky, “Unsupervised domain adaptation by backpropagation,” In Proceedings of the International Conference on Machine Learning (ICML), 2015, pp. 1180–1189.

M. Wang and W. Deng, “Deep visual domain adaptation: A survey,” Neurocomputing, vol. 312, pp. 135–153, 2018. https://doi.org/10.1016/j.neucom.2018.05.083

Tomáš Mikolov, Statistical language models based on neural networks, Ph.D. thesis, Brno University of Technology, 2012.

Tomáš Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean, “Efficient Estimation of Word Representations in Vector Space,” arXiv preprint arXiv:1301.3781 [cs], January 2013.

Jeffrey Pennington, Richard Socher, and Christopher Manning, “GloVe: Global Vectors for Word Representation,” In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543, Doha, Qatar. Association for Computational Linguistics, October 2014. https://doi.org/10.3115/v1/D14-1162

T. T. Vu, V. A. Nguyen, and T. B. Le, “Combining Word2Vec and TF-IDF with Supervised Learning for Short Text Classification,” In 2020 3rd International Conference on Computational Intelligence (ICCI), 2020, pp. 241–245.

J. Devlin, M. W. Chang, K. Lee, and K. Toutanova, “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding,” In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, vol. 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics. https://doi.org/10.18653/v1/N19-1423

M. Lin, S. Liao, and Y. Huang, “Hybrid word2vec and TF-IDF approach for sentiment classification,” Journal of Information Science, 45(6), pp. 797–806, 2019.

Published

2024-09-30

Section

COMPUTER SCIENCES AND INFORMATION TECHNOLOGIES