• Front Matter, Table of Contents, and Introduction (Coyote Papers Volume 12, 2001)

      University of Arizona Linguistics Circle (Tucson, Arizona), 2001
    • Organizing linguistic data: thematic introducers as an example

      Porhiel, Sylvie; Laboratoire Langues, Textes, Traitements informatiques, Cognition and Laboratoire d’Informatique pour la Mécanique et les Sciences de l’Ingénieur (University of Arizona Linguistics Circle (Tucson, Arizona), 2001)
      In this paper I propose to model specific French linguistic markers, thematic introducers (e.g. au sujet de, à propos de, en ce qui concerne, concernant, etc.), in the ContextO platform developed by the LaLic team at the Université de Paris IV. I use the software to locate thematic structures in texts. The software draws on a linguistic database to trace the relevant linguistic information that the user is looking for. The ultimate aim is to create a database that matches the linguistic representation and thus provides a linguist-friendly tool. I review several studies that propose classifications containing thematic introducers and then explain how I arrived at a customized distribution of the thematic introducers that meets the constraints of the system.
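      As a rough illustration of this kind of marker lookup (not the ContextO implementation; the marker list and function names below are hypothetical), a few lines of Python suffice to locate candidate thematic introducers in a text:

```python
import re

# Hypothetical mini-inventory of French thematic introducers; the actual
# ContextO linguistic database is far richer and encodes contextual constraints.
THEMATIC_INTRODUCERS = [
    "au sujet de",
    "à propos de",
    "en ce qui concerne",
    "concernant",
]

def find_thematic_introducers(text):
    """Return (marker, start, end) tuples for each introducer found in text."""
    hits = []
    for marker in THEMATIC_INTRODUCERS:
        for m in re.finditer(re.escape(marker), text, flags=re.IGNORECASE):
            hits.append((marker, m.start(), m.end()))
    return sorted(hits, key=lambda h: h[1])

sample = "Concernant les résultats, rien n'a changé; à propos de la méthode, voir la section 2."
for marker, start, end in find_thematic_introducers(sample):
    print(marker, start, end)
```

      A real system would also need to separate genuine thematic uses of these strings from other uses in context, which is what the linguistic database is for.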
    • The perception of novel phoneme contrasts in a second language: a developmental study of native speakers of English learning Japanese singleton and geminate consonant contrasts

      Hayes, Rachel L.; University of Arizona (University of Arizona Linguistics Circle (Tucson, Arizona), 2001)
      This work explores development in the perception of Japanese singleton and geminate consonant contrasts among native speakers of English learning Japanese as a second language. The primary goal of this paper is to show that the second language (L2) acquisition of phoneme contrasts that are not present in the first language (L1) exhibits development that is predictable from the acoustic properties of the contrast. Additionally, I attribute differences in the perception of particular singleton/geminate contrasts by both native speakers of Japanese and learners of Japanese to the acoustic properties of the contrasts.
    • Syntax in performance: minimalist derivation in the late assignment of syntax theory

      O'Bryan, Erin L.; University of Arizona (University of Arizona Linguistics Circle (Tucson, Arizona), 2001)
      This paper presents an account of how Minimalist derivation (Chomsky 1995) can be embedded in a comprehension model, the Late Assignment of Syntax Theory (LAST) (Townsend & Bever, 2001). The issues addressed concern the interface between the first step of the model, in which heuristic strategies apply to the utterance, and the second step, Minimalist derivation. Two questions about the interface are addressed: 1) How are features in the numeration needed to begin a Minimalist derivation chosen? 2) What dictates which units Merge in the derivation? Chomsky (1995:226-227) claims that we do not need to ask either question. I review his reasons and argue that we can and should answer these questions in a workable comprehension model. In response to the first question, I demonstrate that heuristic strategies applied to the utterance determine which features enter the numeration. In response to the second question, I discuss how heuristic strategies combined with lexical information determine which items Merge.
    • Measuring conceptual distance using WordNet: the design of a metric for measuring semantic similarity

      Lewis, William D.; University of Arizona (University of Arizona Linguistics Circle (Tucson, Arizona), 2001)
      This paper describes the development of a metric for measuring the semantic distance or similarity of words using the WordNet lexical database. Such a metric could be useful in the development of search engines and text retrieval systems, tasks for which the richness of natural language causes difficulty. Further, such a metric can prove invaluable to psycholinguists who wish to study lexical semantic similarity or speech errors (specifically malapropisms). The paper first explores an adjusted distance metric, à la Rada et al. (1989), and the problems such a metric presents. Additional analysis shows that the distance metric can be adjusted with density calculations, based both on depth within the network and on local density. The paper ends with a discussion of automating the task of identifying regions within the semantic space over which density calculations can be made.
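      For readers unfamiliar with edge-counting distances, the toy Python sketch below shows the basic idea: count is-a links between two words in a hypernym hierarchy, then discount paths whose common ancestor sits deep in the network. The taxonomy and the particular depth adjustment are illustrative assumptions, not the paper's actual metric.

```python
# Toy is-a taxonomy (child -> parent), standing in for WordNet's hypernym graph.
HYPERNYM = {
    "dog": "canine", "cat": "feline",
    "canine": "carnivore", "feline": "carnivore",
    "carnivore": "mammal", "mammal": "animal", "animal": "entity",
}

def path_to_root(word):
    """Chain of nodes from word up to the root, word included."""
    chain = [word]
    while chain[-1] in HYPERNYM:
        chain.append(HYPERNYM[chain[-1]])
    return chain

def edge_distance(w1, w2):
    """Rada-style distance: number of is-a edges between w1 and w2,
    together with their lowest common subsumer."""
    p1, p2 = path_to_root(w1), path_to_root(w2)
    pos2 = {node: i for i, node in enumerate(p2)}
    for i, node in enumerate(p1):
        if node in pos2:                      # lowest common subsumer found
            return i + pos2[node], node
    return float("inf"), None

def depth_adjusted_distance(w1, w2):
    """Illustrative adjustment: edges crossed deep in the taxonomy count less."""
    raw, lcs = edge_distance(w1, w2)
    if lcs is None:
        return float("inf")
    lcs_depth = len(path_to_root(lcs))        # node count from the subsumer up to the root
    return raw / lcs_depth                    # deeper subsumer -> smaller distance

print(edge_distance("dog", "cat"))            # (4, 'carnivore')
print(depth_adjusted_distance("dog", "cat"))  # 4 / 4 = 1.0
```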
    • Modeling semantic coherence from corpus data: the fact and the frequency of a co-occurrence

      Pekar, Viktor; Bashkir State University (University of Arizona Linguistics Circle (Tucson, Arizona), 2001)
      The paper presents a preliminary evaluation of a corpus-based representation of individual words and a method for generalizing over these representations. The vector space is represented in a way that gives weight to the fact that words co-occur rather than to the frequency of their co-occurrence. This format is hypothesized to allow the vector space to be reduced, minimizing the negative effects of data sparseness and enhancing the ability of the model to generalize words to novel contexts. The model is assessed by comparing computer-calculated probabilities of different verb-argument combinations with human subjects' judgements about the appropriateness of these combinations. The results indicate that there is a correlation between the probabilities calculated by the model and the subjects' evaluations.
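      The contrast between the fact and the frequency of a co-occurrence can be illustrated with a small Python sketch (the toy verb-object pairs and the similarity measure below are assumptions for illustration, not the paper's model): each noun is represented by the set of verbs it has been seen with, so repeated co-occurrences add no extra weight.

```python
import math
from collections import defaultdict

# Toy verb-object pairs standing in for corpus-extracted co-occurrences.
PAIRS = [
    ("drink", "coffee"), ("drink", "tea"), ("drink", "water"),
    ("drink", "coffee"),                 # repeated pair: frequency is ignored below
    ("eat", "bread"), ("eat", "soup"), ("eat", "apple"),
    ("sip", "coffee"), ("sip", "tea"),
]

def binary_vectors(pairs):
    """Presence/absence vectors: record *that* a noun occurs with a verb,
    not *how often* it does."""
    vecs = defaultdict(set)
    for verb, noun in pairs:
        vecs[noun].add(verb)
    return vecs

def cosine(a, b):
    """Cosine similarity between two binary vectors stored as sets."""
    if not a or not b:
        return 0.0
    return len(a & b) / math.sqrt(len(a) * len(b))

vecs = binary_vectors(PAIRS)
# "tea" and "coffee" share verb contexts, so an unseen combination such as
# ("sip", "water") could be judged plausible by analogy with similar nouns.
print(cosine(vecs["tea"], vecs["coffee"]))   # 1.0 (same verb set)
print(cosine(vecs["tea"], vecs["bread"]))    # 0.0 (no shared verbs)
```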