Dipendra Yadav is a scientific researcher with expertise in natural language processing, focusing on transfer learning, multilinguality, and domain adaptation. His current research explores the use of symbolic reasoning in explainability methods for large language models.
Email: dipendra.yadav (@) uni-greifswald.de
Phone: +49 3834 420 5507
University of Greifswald
Felix-Hausdorff-Straße 18
17489 Greifswald
Short CV
- In May 2020, received an M.Sc. in Electrical Engineering from the University of Rostock, Germany.
- Worked as a student assistant in data science at Market Logic Software, Berlin.
- Completed an internship and Master's thesis in natural language processing at PlanetAI GmbH and continued there as a software engineer.
- In January 2023, started working at the University of Greifswald at the Institute of Data Science, led by Prof. Dr.-Ing. Kristina Yordanova.
Publications:
- Yadav, Dipendra; Suravee, Sumaiya; Strauß, Tobias; Yordanova, Kristina (2024): Cross-Lingual Named Entity Recognition for Low-Resource Languages: A Hindi-Nepali Case Study Using Multilingual BERT Models. MRL@EMNLP2024: Proceedings of the 4th Workshop on Multi-lingual Representation Learning at EMNLP 2024. Association for Computational Linguistics (ACL).
- Yadav, Dipendra; Tonkin, Emma; Stoev, Teodor; Yordanova, Kristina (2024): A Comparative Analysis on Machine Learning Techniques for Research Metadata: the ARDUOUS Case Study. INFORMATIK 2024. DOI: 10.18420/inf2024_37. Bonn: Gesellschaft für Informatik e.V. PISSN: 1617-5468. ISBN: 978-3-88579-746-3. pp. 499-509. 8th International Workshop on Annotation of useR Data for UbiquitOUs Systems. Wiesbaden. 24.-26. September 2024.
- Yadav, Dipendra (2023): Evaluating Dangerous Capabilities of Large Language Models: An Examination of Situational Awareness. DC@KI2023: Proceedings of the Doctoral Consortium at KI 2023. DOI: 10.18420/ki2023-dc-10. Gesellschaft für Informatik e.V. pp. 88-93. Doctoral Consortium at KI 2023. Berlin. 26. September 2023.
- Yadav, Dipendra; Strauß, Tobias; Yordanova, Kristina (2020): Exploring Transfer Learning for Deep NLP Systems on Rarely Annotated Languages. arXiv preprint arXiv:2410.12879.