Soft Alignment Objectives for Robust Adaptation of Language Generation
Authors | |
---|---|
Year of publication | 2023 |
Type | Article in proceedings |
Konference | Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) |
Faculty / MU department | |
Citation | |
www | https://aclanthology.org/2023.acl-long.492 |
DOI | http://dx.doi.org/10.18653/v1/2023.acl-long.492 |
Keywords | generation; robustness; machine translation; adaptation |
Description | Domain adaptation allows generative language models to address specific flaws caused by the domain shift of their application. However, traditional adaptation by further training on in-domain data rapidly weakens the model's ability to generalize to other domains, making open-ended deployments of the adapted models prone to errors. This work introduces novel training objectives built upon the semantic similarity of the predicted tokens to the reference. Our results show that (1) avoiding the common assumption of a single correct prediction by constructing the training target from the tokens' semantic similarity can largely mitigate catastrophic forgetting during adaptation, while (2) preserving the in-domain quality of the adaptation, (3) with negligible additions to compute costs. In the broader context, objectives grounded in a continuous token similarity pioneer the exploration of the middle ground between efficient but naïve exact-match token-level objectives and expressive but computationally and resource-intensive sequential objectives. (An illustrative sketch of such a soft-target objective follows below.) |
Related projects | |
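
For intuition, the following is a minimal sketch of a soft-alignment token-level objective of the kind the description refers to: instead of a one-hot target, the training target is a distribution over the vocabulary derived from the embedding similarity of each reference token to all vocabulary tokens. This is not the authors' implementation; the function name `soft_alignment_loss`, the use of cosine similarity over the embedding table, and the `temperature` parameter are illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def soft_alignment_loss(logits, target_ids, token_embeddings, temperature=0.1):
    """Cross-entropy against a soft target built from token-embedding similarity.

    logits:           (batch, seq_len, vocab_size) raw model outputs
    target_ids:       (batch, seq_len) reference token ids
    token_embeddings: (vocab_size, dim) embedding table, used here as a proxy
                      for token semantics (an assumption of this sketch)
    """
    # Cosine similarity of each reference token to every vocabulary token.
    ref_emb = F.normalize(token_embeddings[target_ids], dim=-1)   # (B, T, D)
    vocab_emb = F.normalize(token_embeddings, dim=-1)             # (V, D)
    sim = ref_emb @ vocab_emb.T                                   # (B, T, V)

    # Soft target distribution: lower temperature concentrates probability
    # mass on tokens semantically close to the reference token.
    soft_targets = F.softmax(sim / temperature, dim=-1)

    # Soft-label cross-entropy instead of the usual one-hot objective.
    log_probs = F.log_softmax(logits, dim=-1)
    return -(soft_targets * log_probs).sum(dim=-1).mean()


# Tiny usage example with random tensors (shapes only; not real model outputs).
if __name__ == "__main__":
    B, T, V, D = 2, 5, 100, 16
    logits = torch.randn(B, T, V)
    target_ids = torch.randint(0, V, (B, T))
    embeddings = torch.randn(V, D)
    print(soft_alignment_loss(logits, target_ids, embeddings).item())
```

As the temperature approaches zero, the soft target collapses back to the one-hot reference token, so a sketch like this interpolates between standard exact-match supervision and fully similarity-based supervision.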