The traditional model of generative grammar is a sequential model, in which the derivation starts from the lexicon and proceeds by merging elements and constructing a syntactic structure. At some point, the derivation diverges, continuing in two directions: toward the interface with the conceptual-intentional (C-I) system, often simply called LF, and toward the interface with the sensorimotor (SM) system, often simply called PF. One property is fundamental to this system: the syntactic derivation is primary, while the interface representations for the C-I and SM systems (a semantic and a phonological representation) are derivative. Crucially, there is no feedback from phonology (or semantics) into syntax. In this paper, I wish to challenge this idea. I will argue that this interpretation of the relation between the syntactic representation and the semantic and phonological representations is neither necessary from a theoretical point of view nor desirable from an empirical point of view. In its place, I propose a parallel grammar architecture, in which the semantic, syntactic and phonological representations are built in parallel, with information flowing from syntax to semantics and phonology, but also in the other direction, from semantics and phonology back to syntax. The motivations for this proposal are both conceptual and empirical. Conceptually, it is argued that the output of the derivation must be a linguistic sign, i.e., an object containing a syntactic, semantic and phonological representation, and that the most straightforward way of constructing such an object is to merge smaller linguistic signs. Empirically, there are phenomena that cannot be analysed straightforwardly with a standard sequential model and that would benefit from a parallel model. This paper discusses two such domains, wh-movement and heavy NP shift.

Keywords: grammar architecture, parallel grammar model, phonology-syntax interaction, wh-movement, heavy NP shift