Francesca Franzon
2023
Unnatural language processing: How do language models handle machine-generated prompts?
Corentin Kervadec | Francesca Franzon | Marco Baroni
Findings of the Association for Computational Linguistics: EMNLP 2023
Language model prompt optimization research has shown that semantically and grammatically well-formed manually crafted prompts are routinely outperformed by automatically generated token sequences with no apparent meaning or syntactic structure, including sequences of vectors from a model’s embedding space. We use machine-generated prompts to probe how models respond to input that is not composed of natural language expressions. We study the behavior of models of different sizes in multiple semantic tasks in response to both continuous and discrete machine-generated prompts, and compare it to the behavior in response to human-generated natural-language prompts. Even when producing a similar output, machine-generated and human prompts trigger different response patterns through the network processing pathways, including different perplexities, different attention and output entropy distributions, and different unit activation profiles. We provide preliminary insight into the nature of the units activated by different prompt types, suggesting that only natural language prompts recruit a genuinely linguistic circuit.
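Among the diagnostics mentioned in this abstract is the perplexity a model assigns to natural versus machine-generated prompts. As a minimal sketch of that kind of measurement (not the paper's actual code; the checkpoint name and both example prompts below are illustrative assumptions), a prompt's perplexity under a causal language model can be computed as the exponential of its mean next-token loss:

    # Illustrative sketch only: scoring prompts with a small causal LM.
    # The checkpoint ("gpt2") and both prompts are assumptions, not
    # taken from the paper.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    def perplexity(text: str) -> float:
        """Perplexity of `text`: exp of the mean next-token negative log-likelihood."""
        ids = tokenizer(text, return_tensors="pt").input_ids
        with torch.no_grad():
            # Passing labels=input_ids makes the model return the mean
            # cross-entropy over shifted next-token predictions.
            loss = model(ids, labels=ids).loss
        return torch.exp(loss).item()

    natural = "Translate the following sentence into French:"
    machine = "ski Franc translate =>?? sentence vo"  # stand-in for an opaque optimized prompt
    print(perplexity(natural), perplexity(machine))

A well-formed prompt will typically receive a much lower perplexity than an opaque optimized token sequence; perplexity is one of several measures on which the paper reports systematic differences between the two prompt types.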
2022
Communication breakdown: On the low mutual intelligibility between human and neural captioning
Roberto Dessì | Eleonora Gualdoni | Francesca Franzon | Gemma Boleda | Marco Baroni
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
We compare the zero-shot performance of a neural caption-based image retriever when given as input either human-produced captions or captions generated by a neural captioner. We conduct this comparison on the recently introduced ImageCoDe dataset (Krojer et al., 2022), which contains hard distractors nearly identical to the images to be retrieved. We find that the neural retriever has much higher performance when fed neural rather than human captions, despite the fact that the former, unlike the latter, were generated without awareness of the distractors that make the task hard. Even more remarkably, when the same neural captions are given to human subjects, their retrieval performance is almost at chance level. Our results thus add to the growing body of evidence that, even when the “language” of neural models resembles English, this superficial resemblance might be deeply misleading.
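As a minimal sketch of the retrieval setup this abstract describes (not the paper's exact pipeline; the CLIP checkpoint and image paths are illustrative assumptions, and the ImageCoDe retriever differs in detail), a zero-shot caption-based retriever can be approximated by scoring a caption against each candidate image with a pretrained vision-language model and returning the best match:

    # Illustrative sketch only: zero-shot caption-to-image retrieval
    # with a pretrained CLIP model. The checkpoint name and image
    # paths are assumptions, not the paper's actual setup.
    import torch
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    def retrieve(caption: str, image_paths: list[str]) -> str:
        """Return the candidate image that best matches the caption."""
        images = [Image.open(p) for p in image_paths]
        inputs = processor(text=[caption], images=images,
                           return_tensors="pt", padding=True)
        with torch.no_grad():
            out = model(**inputs)
        # logits_per_text[0, i]: similarity of the caption to image i.
        best = out.logits_per_text.argmax(dim=-1).item()
        return image_paths[best]

    # Usage (hypothetical files):
    # retrieve("a dog catching a frisbee", ["img0.jpg", "img1.jpg"])

With nearly identical distractor images, retrieval accuracy then hinges entirely on how discriminative the caption is, which is what makes the human-versus-neural caption comparison informative.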
2017
Can You See the (Linguistic) Difference? Exploring Mass/Count Distinction in Vision
David Addison Smith | Sandro Pezzelle | Francesca Franzon | Chiara Zanini | Raffaella Bernardi
Proceedings of the 12th International Conference on Computational Semantics (IWCS) — Short papers