A Probabilistic Representation of Systemic Functional Grammar
Robert Munro
Endangered Languages Archive
Department of Linguistics
School of Oriental and African Studies
University of London
rmunro@soas.ac.uk
Abstract
The notion of language as probabilistic is well known within Systemic Functional Linguistics.
Aspects of language are discussed as meaningful tendencies, not as deterministic rules. In past
computational representations of functional grammars, this probabilistic property has typically
been omitted. This paper will present the results of a recent project aimed at the computational
learning, representation and application of a fundamentally probabilistic functional grammar.
Recent advances in machine learning have allowed the large scale inference of truly probabilis-
tic representations of language for the first time. In this work, a machine learning algorithm is
developed that learns aspects of a functional grammar from labeled text. This is represented
probabilistically, in the sense that there is a measurable gradation of functional realisation be-
tween all categories. Looking at a single term, this allows that term to be described as realising
multiple functions simultaneously. Looking at all the terms in a text or register, this allows us
to examine the relationships between the functions with respect to the closeness and/or over-
lap of functions, and the extent to which these relationships differ between different texts or
registers. With a focus on function within the noun phrase (nominal group), the methodology
is shown to infer an accurate description of functional categories that classifies new examples
with above 90% accuracy, even across registers of text that are very different from the text that
was learned on. Importantly, the learner is deliberately restricted from remembering specific
words, so that the functions are (necessarily) learned and represented in terms of features such
as part-of-speech, context and collocational tendencies. This restriction allows the successful
application to different registers and demonstrates that function is much more a product of
context than a property of the words themselves. The inferred grammar is also shown to have
interesting applications in the analysis of layers of delicacy. The discovery of finer delicacies
occurs with a high level of sophistication, indicating a potential for the automated discovery
and representation of lexis as most delicate grammar.

1 Introduction
Research describing functional grammars is often prefaced with strong assertions that the grammars (and
therefore the systems, constraints, constituencies and dependencies) are probabilistic, with aspects of
language variously described as gradational, fuzzy and/or clinal (Hasan, 1987; Halliday, 1994; Tucker,
1998; Fawcett, 2000; Halliday, 2002). While functional categories have long been described as mean-
ingful tendencies in a continuous space, these shades of grammar have rarely been explored.
More commonly ‘probabilistic linguistics’ is used to refer to confidences across multiple deterministic
models, or within a single deterministic model (probability of constituency) rather than a single grada-
tional model. This is largely because probabilistic parsing techniques have grown out of deterministic
theories.
The functions of modification within the noun phrase (nominal group) provide good examples for
describing such gradations. Classifiers such as those in ‘the 1,000 metre race’ and ‘the red wine’ still
function close to the Numerative and Epithet from which they originated, and will typically realise both
functions. Gradient representations of function are necessary to describe this gradience of realisation.
Even where individual instances of functional modification are not gradationally realised, gradational
modelling is still necessary. Common solutions for describing some new object/concept include creating
a new word (often through compounding), creating a new sense for an existing word or using multiple
words. Combinations of the three are possible, as can be seen in the phrase ‘notebook computer’. ‘Note-
book’ was created as a compound, ‘notebook computer’ became a multi-word entity and now ‘notebook’
alone has the new sense of a type of computer. There is little ambiguity between Epithets, Classifiers and Things here, but the uptake of the new term/sense will not be uniform, and a given person's use may not be consistent (they may use the new sense of 'notebook' only in the context of computers). This shows that the computational modelling of nominals still needs to be gradational in modelling across deterministic instances.
It might be assumed that part-of-speech is a good indicator of functional modification, giving an insight
into the part of the world we represent in a noun phrase (the experiential metafunction of nominals), with
exceptions being rare or idiosyncratic. Previous functional parsers have relied on this assumption. In this
work, it is demonstrated that assuming the unmarked functions given by part-of-speech and word order
will only account for about half the instances of Classifiers in the registers investigated here, showing
that more sophisticated modelling is required for computational representations.
The difficulty in building a fundamentally probabilistic model of a grammar lies in defining the gra-
dations. Defining a probability distribution across two or more categories in terms of a large number of
features is a difficult manual task, and it is not surprising that previous models have relied on computa-
tional processing over labelled data to calculate these. Machine learning is the most popular method for
combining this with the ability to predict new instances. In this work, a new machine learning algorithm,
Seneschal, is developed that models tendencies in the data as an optimal number of soft clusters, using
the probability of membership of a cluster to make supervised classifications of new data.
The most sophisticated models utilising machine learning have been probabilistic context-free gram-
mars and stochastic grammars that have focused their interpretation of results on the accuracy of the
inferred syntax (Bod, 1993; Collins, 1999; Charniak, 2000; Johnson, 2003). In a functional lexicogram-
mar this roughly corresponds to only the logical metafunction (although the feature spaces used are much
richer, and gradational models have been suggested (Aarts, 2004; Manning, 2003)) but similar techniques
can be used for modelling more complicated functional relationships.
In Systemic Functional Grammar (SFG), computational representations and applications of artificial
intelligence are not new, but most work in this area has focussed on language generation (Mann and
Matthiessen, 1985; Matthiessen and Bateman, 1991) and machine learning has not previously been used in the inference of a functional grammar.[1] The most well-known systemic parser is WAG (O'Donnell,
1994). It was the first parser to implement a full SFG formalism and it performed both parsing and text
generation. Drawing from work with context free grammars, it treated the grammar as deterministic,
giving good but limited coverage. It didn’t attempt the disambiguation of the unmarked cases of the
functions of words. There have been a number of earlier implementations of SFG parsers, but with
more limited coverage (Kasper, 1988; O'Donoghue, 1991; Dik, 1992). For German, Bohnet, Klatt and
Wanner implemented a successful method for the identification of Deictics, Numeratives, Epithets and
Classifiers within the noun phrase, using a bootstrapping algorithm that relied on the general
ordering of the functions (Bohnet et al., 2002). They were able to assign a function to 95% of words, with
a little under 85% precision. A more extensive review of related work can be found in Munro (2003b).
2 Machine Learning for Linguistic Analysis
Supervised machine learning algorithms are typically used as black boxes, restricted to classifying inde-
pendent categories or flat structures (for an exception in computational linguistics see Lane and Henderson, 2001). Unsupervised machine learning is a technique for finding meaningful rules, clusters and/or trends in unlabelled data and is more commonly used to discover fuzzy (soft), hierarchical and/or
connectionist structures. As such, the goal of unsupervised learning is often analysis, not classification.
In this work, unsupervised and supervised learning are combined so that a single model can be de-
scribed in both its ability to identify functions and to provide information for detailed analysis.
Here, we seek to discover finer layers of delicacy by looking for meaningful clusters within each
function. In SFG ‘delicacy’ describes the granularity chosen in describing a given function. For example,
in Table 1, the terms ‘one’ and ‘first’ both function as Numeratives, but could have been broken down into
the more delicate functions of Quantitatives and Ordinatives respectively. As more delicate functions are
sought, more constraints and tendencies can be described, and therefore we can build a more informative
model.
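As a rough sketch of this idea (using scikit-learn's GaussianMixture as a stand-in for Seneschal's own clustering, with hypothetical feature vectors grouped by function label), sub-clusters found within one function can be read as candidate finer delicacies, such as Quantitatives and Ordinatives within the Numerative:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def candidate_delicacies(features_by_function, max_k=4):
    """For each labelled function, fit soft clusterings with 1..max_k
    components and keep the best model by BIC; its components are
    candidate finer delicacies within that function."""
    best_models = {}
    for function, vectors in features_by_function.items():
        X = np.asarray(vectors, dtype=float)
        models = [GaussianMixture(n_components=k, random_state=0).fit(X)
                  for k in range(1, max_k + 1)]
        best_models[function] = min(models, key=lambda m: m.bic(X))
    return best_models

# best_models['Numerative'].predict_proba(X) then gives a soft, gradational
# assignment of each Numerative instance to the discovered sub-clusters.
```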
3 Scope of Study
This study explored functional categories across all groups/phrases of English, but only those of the
noun phrase are described here. See Munro (2003b) for the results and analysis of the other functions.
Examples of nominal functions taken from the corpus used here are given in Table 1.
Definitions are drawn from Halliday (1994), Matthiessen (1995) and O’Donnell (1998). Below we de-
scribe the functions that are the target of the supervised classification (in bold), and those that were/could
be discovered through unsupervised learning at finer layers of delicacy (in italics):
Deictic: Deictics fix the noun phrase in relation to the speech exchange, usually through the orientation
of the speaker. At a finer layer of delicacy this includes Demonstratives, (‘this’, ‘that’, ‘those’), and
Possessives, (‘my’, ‘their’, ‘Dr Smith’s’).
Ordinative: An Ordering Numerative, (‘first’, ‘2nd’, ‘last’).
Quantitative: A Quantitative Numerative, ('one', '2', 'many', 'few', 'more'). They may be used Discur-
sively, (‘the 12 championships’) or simply be Tabulated results, which was common here due to the
choice of registers.
[1] Machine learning has been used to learn formal grammars that include functional constraints, such as Lexical Functional Grammar (Bresnan, 2001), a theory that is also still evolving. Its F-structure could be described as a functional grammar by some (or arguably many) definitions. Describing the relationship between LFG and SFG theories is outside the scope of this paper, but it is a comparison that is probably overdue.

Deictic      | Numerative | Epithet         | Classifier      | Thing
the          | third      | fastest         |                 | time
the          |            |                 |                 | Atlanta Olympics
Burundi's    |            |                 | 5,000 metres    | champion
their        | first      |                 |                 | World Cup
Colombia's   |            | former          | team            | boss
the          |            |                 | defending       | champion
a            |            | controversial   |                 | final
the          |            |                 | Superman riding | style
             | three      |                 | first-round     | matches
the          |            |                 | bronze          | medal
             |            | real            | data            | sets
             |            | robust          | parametric      | methods
a            | single     |                 | microarray      | chip
the          |            |                 | bootstrapped    | version
her          |            | own             |                 | fortunes
the          |            | smooth unmarked |                 | outline
a            |            | little          | parchment       | volume
this         | one        |                 |                 | scene

Table 1: Example of functional categories
Epithet: Describes some quality or process. At a finer layer of delicacy there are Attitudinal Epithets,
(‘the ugly lamp’), and Experiential Epithets, (‘the red lamp’). They are most commonly realised by
an adjective, but are also commonly realised by a verb, (‘the running water’).
Classifier: Describes a sub-classification. Classifiers are commonly realised by a noun, (‘the table
lamp’), a verb, (‘the running shoe’), or an adjective, (‘the red wine’), but other realisations are also
possible. Classifiers are commonly thought of as providing a taxonomic function, a Hyponymic
Classifier. They may also be used to expand the description of the Head: an Expansive Classifier
(Matthiessen, 1995). The latter are classifications that can more easily be reworded as Qualifiers
or expanded clauses, for example, ‘knee surgery’ can be re-written as ‘surgery of the knee’. In the
work described here, they were a particularly interesting case, as they allowed anaphoric reference
of non-Head terms, (‘she underwent knee surgery after it was injured...’).
Thing: Typically the semantic head of the phrase. Some entity, be it physical, (‘the lamp’), or abstract,
(‘the idea’), undergoing modification by the other noun phrase constituents. Delicacies within Thing
include Countable and non-Countable, Named Entities (First, Intermediate and Last Names),
and those simply realised by nouns and non-nouns. Of all the functions in the noun phrase, variation
in function of the Thing corresponds most strongly with variation in the function of the phrase such
as the Referring and Informing functions of a noun phrase (the heads of such phrases are called
Stated and Described Things respectively). When a noun phrase is realised by a single word, the
function is best described in terms of the function of the phrase.
4 Testing Framework
4.1 Algorithm
Seneschal is a hybrid of supervised and unsupervised clustering techniques. It has been demonstrated
to be generally suited to the efficient supervised classification and analysis of various data sets (Munro, 2003a). Similar to the EM algorithm and Bayesian learning, it seeks to describe the data in terms of an
Information Measure (IM), combining agglomerative and hierarchical clustering methods.
Given an item i with value i_α for categorical attribute α, and given that i_α occurs with frequency f(i_α, C) in a cluster C of size s(C) and with frequency f(i_α, T) in the data set T of size s(T), i's information measure over its n categorical attributes for C is given by:
IM(i, C) = \sum_{\alpha=1}^{n} -\ln \frac{f(i_\alpha, C) + 1}{s(C) + \left(1 - f(i_\alpha, T)/s(T)\right) f(i_\alpha, T)}     (1)
Given an item i with value i_β for continuous attribute β, i's information measure over its n continuous attributes for a cluster C that has, for attribute β, mean µ_{Cβ} and standard deviation σ_{Cβ}, is given by:
IM(i, C) = \sum_{\beta=1}^{n} \frac{(i_\beta - \mu_{C\beta})^2}{\sigma_{C\beta}^2}     (2)
The algorithm maps to an SFG in the following ways:
1. It is probabilistic, giving a gradation of membership across all categories.
2. The algorithm treats all classes independently. If the feature space describes two classes as overlap-
ping, this will be apparent in the model, capturing the overlapping categories. This is particularly
important here, as we need a learner that represents each class as accurately as possible. A learner
that only represents categories by defining boundaries between them goes against our knowledge of
multiple and gradational realisation.[2]
3. The discovery of the optimal number of clusters within a class maps to the task of describing the
emergent finer layers of delicacy within a function.
4. Beyond a minimum threshold, the algorithm is not frequency sensitive, so it will not intrinsically
favour the patterns of realisation of functions in the training corpus. This makes it more appro-
priate than other algorithms that seek to discover an optimal number of clusters by strong a priori
assumptions of optimal cluster size.[3]
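As a concrete illustration of how these measures and the resulting gradation of membership might be computed, the sketch below assumes the reconstructions of equations (1) and (2) given above and a simple dictionary representation of per-cluster statistics; the conversion of IM costs into normalised membership weights via exp(-IM) is an assumption of this illustration, not necessarily Seneschal's exact procedure.

```python
import math

def categorical_im(item, cluster, dataset):
    """Equation (1), as reconstructed above: the summed -ln of a smoothed
    within-cluster relative frequency for each categorical attribute."""
    cost = 0.0
    for attr, value in item.items():
        f_c = cluster["freq"].get(attr, {}).get(value, 0)   # f(i_a, C)
        f_t = dataset["freq"].get(attr, {}).get(value, 0)   # f(i_a, T)
        s_c, s_t = cluster["size"], dataset["size"]         # s(C), s(T)
        smoothed = (f_c + 1) / (s_c + (1 - f_t / s_t) * f_t)
        cost += -math.log(smoothed)
    return cost

def continuous_im(item, cluster):
    """Equation (2): squared deviation from the cluster mean for each
    continuous attribute, scaled by the cluster's variance."""
    cost = 0.0
    for attr, value in item.items():
        mu = cluster["mean"][attr]                # mu_{C, beta}
        sigma = cluster["std"][attr] or 1e-9      # sigma_{C, beta}, guarded against zero
        cost += (value - mu) ** 2 / sigma ** 2
    return cost

def soft_membership(item, clusters, dataset):
    """Gradation of membership across all clusters: a lower IM cost gives
    a higher, normalised weight (points 1 and 2 above)."""
    costs = {name: categorical_im(item["categorical"], c, dataset)
                   + continuous_im(item["continuous"], c)
             for name, c in clusters.items()}
    weights = {name: math.exp(-cost) for name, cost in costs.items()}
    total = sum(weights.values()) or 1.0
    return {name: w / total for name, w in weights.items()}

def classify(item, clusters, dataset):
    """Supervised decision: sum the membership of the clusters belonging to
    each function label, so a term can be seen as realising several
    functions to measurable degrees, then pick the strongest."""
    membership = soft_membership(item, clusters, dataset)
    by_function = {}
    for name, weight in membership.items():
        label = clusters[name]["label"]
        by_function[label] = by_function.get(label, 0.0) + weight
    return max(by_function, key=by_function.get), by_function
```

The cluster and data-set dictionaries here (keys "freq", "size", "mean", "std", "label") simply stand in for whatever per-cluster statistics the learner maintains.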
4.2 Corpora
One training corpus and four test corpora were used. The process of manually tagging the corpus with
the correct functions took about 20 hours, performed by two linguists with input from domain experts in
the fields of bio-informatics and motor sports. Here, we simply labelled a term with its most dominant
function.[4]
The training corpus comprised 10,000 words of Reuters sports newswires from 1996. It was chosen
because Reuters is one of the most common sources of text used in Computational Linguistics, and the
choice of only sports newswires was motivated by two factors: taking the corpus from only one register
was desirable for testing purposes, and sports terminology is known to be an interesting and difficult register to study: for example, it is necessary to learn that a 'test match' is a type of cricket match and a '1,000 metre race' is a type of race (this is what allows 'I won the 1,000' and 'They played the test' to be grammatical).

[2] In terms of Aarts's definitions of Subsective and Intersective Gradience (2004), the probability of cluster membership described here is Subsective Gradience, and the cross-cluster costs are Intersective Gradience. Note that if the clusters were not formed independently and prevented from overlapping, then the probability of membership could not be thought of as Subsective Gradience, as the cluster (category) would be partially defined in terms of its intersection with other categories.

[3] In the work reported here, assuming that the relative frequencies of the categories are the same in the test set is equivalent to the learner assuming that all text is sports newswires. This is a well-known problem in natural language processing, known as domain dependence, and the algorithm described goes some way towards addressing it. Gradation not wholly dependent on observed frequency is, in itself, a desirable quality when dealing with sparse data.

[4] It would be interesting to see how explicitly defining gradient membership for the training data would affect the model learned, but this would be a complicated task in a largely untested area of machine learning.
Four testing corpora were used, all of approximately 1,000 words. The register (domain) dependence
of NLP tasks is well known, so they were drawn from a variety of registers:
1. Reuters sports newswires from 1996 (Reuters-A), from the same corpus as the training set.
2. Reuters sports newswires from 2003 (Reuters-B). This is presumed to be the same register, but is
included to test the extent to which ‘topic shift’ is overcome.
3. Bio-Informatics abstracts (BIO-INF), to test the domain dependence of results in a register with a
high frequency of rare words/phrases, and with some very large and marked Classifier constructions.
4. An excerpt from a work of modernist fiction (MOD-FIC), 'The Voyage Out', Virginia Woolf (1915), to test the domain dependence of results on an Epithet-frequent register.
4.3 Features
part-of-speech : POS was assigned by mxpost (Ratnaparkhi, 1996) and modelled over a context window of two words. The standard codes for POS are used here.
POS augmentations : Features representing capitalisation and type of number were used, as mxpost over-assigned NNPs to capitalised words and under-assigned numbers. Number codes: NUM = numerals only, WRD = the word equivalent of a numeral, MIX = a mix, e.g. '6-Jan', '13th'.
punctuation : Features were included that represented punctuation occurring before and after the term.
Punctuation itself was not treated as a token.
collocational tendencies : Features were included that represented the collocational tendencies of a term with the previous and following words, and the ratio between them. These were obtained automatically using the alltheweb search engine, as it reports the number of web documents containing a searched term and could therefore be used to extract measures from a large source. For two terms 'A' and 'B', the measure is given by the number of documents containing both 'A' and 'B', divided by the number of documents containing the bi-gram 'A B' (a sketch of this calculation is given after this list).
repetition : (self-co-occurrence) The observed percentage of documents containing a term that con-
tained more than one instance of that term. These were taken from a large corpus of about one
hundred thousand documents of Reuters newswires, Bio-Informatics abstracts, and the full ‘The
Voyage Out' split into equivalently sized chunks.
phrase context and boundary : The following and previous phrase types were included, as was the
term’s position in its own phrase.
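A sketch of the collocation and repetition calculations described in the list above; the counting callables are hypothetical stand-ins for a document-frequency source (the paper queried the alltheweb engine's reported hit counts), so only the arithmetic is taken from the descriptions:

```python
from collections import Counter

def collocation_ratio(a, b, count_cooccurring, count_bigram):
    """Collocational tendency of a term with an adjacent word, as described
    above: documents containing both 'a' and 'b' anywhere, divided by
    documents containing the exact bi-gram 'a b'.  Both callables are
    hypothetical document-count lookups."""
    both = count_cooccurring(a, b)
    adjacent = count_bigram(a, b)
    return both / adjacent if adjacent else 0.0

def repetition(term, documents):
    """Self-co-occurrence: of the documents containing the term at all,
    the proportion containing it more than once."""
    containing = repeated = 0
    term = term.lower()
    for doc in documents:
        n = Counter(doc.lower().split())[term]
        if n >= 1:
            containing += 1
            if n >= 2:
                repeated += 1
    return repeated / containing if containing else 0.0
```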
The words themselves were omitted from the study to demonstrate that functions are not simply a
property of a word (like most parts-of-speech) but a product of context. It is expected that allowing the
algorithm to learn that a certain word has previously had a certain function would give a small increase
in accuracy but a substantial increase in domain dependency.
Other additional features were considered, such as the use of lexico-semantic ontologies and more
complex modelling of repetition, but were not included here to simplify the analysis (or were investigated
independently).

Figure 1: Gradational realisation: the IM costs between functions, at two layers of delicacy
5 Analysis
The raw accuracy of classifying functions within the noun phrase was 89.9%. A parser only seeking to describe unmarked functions based on part-of-speech and word order would classify with 82.6% accuracy on these test corpora, so the method here almost halved the error of existing methods.
This baseline was reached by Seneschal after only 5% of the data was seen (the overall accuracy for all
other group/phrase types was over 95%).
A confusion matrix (number of cross-categorical errors) doesn't capture the probabilistic nature of the distribution. Here, the gradations are measured as the average IM cost for assigning items between all clusters/functions. Figure 1 represents the pairwise calculations of gradations topographically. The top map shows the relationships between the targets of the supervised task, the bottom map between more delicate clusters/functions. If there were no probabilistic boundaries between the functions, the maps would be a diagonal series of white peaks on a black background, with the height of a peak representing how tightly that function was defined by the features.

Demonstrative, e.g. 'a', 'the', 'these'
  pos: DT=80%, PRP=15%
  prev phrs: prep=56%, verb=36%
  next phrs: prep=44%, noun=23%

Possessive, e.g. 'our', 'The Country Club's'
  pos: DT=32%, POS=25%, NNP=24%
  prev phrs: noun=48%, prep=39%
  next phrs: verb=57%, prep=32%

Tabular (Quantitative), e.g. '1, 2, 20'
  num type: NUM=69%, MIX=18%
  prev phrs: noun=100%
  phrs end: yes=92%
  coll prev: ave=0.10, var=0.04
  coll next: ave=0.05, var=0.01

Discursive (Quantitative), e.g. '2 cars', 'the twelve championships'
  num type: WRD=39%, MIX=26%
  prev phrs: prep=40%, verb=30%
  phrs end: yes=41%
  coll prev: ave=0.02, var=0.00
  coll next: ave=0.11, var=0.06

Ordinative, e.g. 'the third fastest', 'the top four'
  num type: ORD=88%, WRD=8%
  prev phrs: prep=42%, verb=36%
  phrs end: yes=23%
  coll prev: ave=0.24, var=0.09
  coll next: ave=0.22, var=0.15

Table 2: Properties of the Deictic and Numerative functions
In this study, the significance of discovered delicacies is precisely the difference in the complexity of the two maps in Figure 1.
The ordering of the functions in Figure 1 is simply the general observed ordering. What is not repre-
sented in Figure 1 is the attributes that were the most significant in distinguishing the various functions,
that is, the attributes that contributed most significantly to a given ‘valley’.
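A map of this kind could be approximated as a matrix of average cross-assignment costs; the sketch below is hypothetical (it reuses the categorical/continuous IM helpers sketched in Section 4.1) and is not the procedure used to draw Figure 1 itself:

```python
def cross_cost_matrix(members_by_cluster, clusters, dataset, im_cost):
    """Average IM cost of assigning the members of cluster/function `a`
    to the model of cluster/function `b`; low off-diagonal averages
    correspond to the 'valleys' (gradation and overlap) in Figure 1."""
    names = sorted(clusters)
    matrix = {}
    for a in names:
        for b in names:
            costs = [im_cost(item, clusters[b], dataset)
                     for item in members_by_cluster[a]]
            matrix[(a, b)] = sum(costs) / len(costs) if costs else float("inf")
    return matrix
```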
The remainder of this section describes the more delicate functions, including the features that were
the most significant in distinguishing them. It is important to remember that these features are both a
description of that function and the reason that Seneschal identified them, and that co-significant features
are also features that correlated with each other for that function.
5.1 Deictics and Numeratives
There were two clusters/functions discovered within the Deictic function corresponding well to the more
delicately described functions of Demonstratives and Possessives. The Possessive cluster contained
mostly Genitives in the form of embedded noun phrases. The profiles of these and the Numeratives
are given in Table 2. While differentiation in the part-of-speech distributions is as expected, the phrase
context is particularly interesting, as it shows that the Possessives are more likely to occur in the Subject
position, given by their being more likely to occur before a verb phrase and after a noun phrase.
The Quantitative function was divided into two sub-clusters, here simply labelled ‘Tabular’ and ‘Dis-
cursive’ as they are divided along the lines of reported results and modifiers within a phrase. As Figure 1
shows, the relationship between the tabulated numbers and the other functions is the least probabilistic.
It would be easy to assume that no relationship existed between them at all, but they do leak into each
other in the final phrase in sentences like ‘Fernando Gonzalez beat American Brian Vahaly 7-5, 6-2’.
The Ordinatives differentiate themselves from the Quantitatives by possessing particularly strong col-
locational tendencies with the previous words, as an Ordinative is much more likely to require exact
determination from a small selection of closed-group words.

Epithet, e.g. 'erratic play', 'bigger chance'
  pos: JJ=78%, RB=4%, JJR=4%
  prev pos: DT=47%, IN=10%
  repetition: ave=0.26, var=0.04
  coll prev: ave=0.16, var=0.05
  coll next: ave=0.20, var=0.13

Expansive (Classifier), e.g. 'knee surgery', 'optimization problems'
  pos: JJ=34%, NN=31%, NNP=16%
  prev pos: IN=30%, NN=16%
  repetition: ave=0.42, var=0.06
  coll prev: ave=0.02, var=0.00
  coll next: ave=0.34, var=0.19

Hyponymic (Classifier), e.g. 'the gold medal', 'the world 3,000 metres record'
  pos: NN=53%, JJ=17%, NNP=14%
  prev pos: JJ=37%, DT=27%
  repetition: ave=0.47, var=0.04
  coll prev: ave=0.26, var=0.15
  coll next: ave=0.30, var=0.16

Table 3: Properties of the Epithet and Classifier functions
5.2 Epithets and Classifiers
The difference between Attitudinal and Experiential Epithets is probably the most common example of
delicacy given in the literature. Nonetheless, either the attributes failed to capture this, the learner failed
to find it or it wasn’t present in the corpora, as this distinction was not discovered.
The profiles for Epithets and Classifiers are in Table 3. Within Classifiers, the clusters describe Clas-
sifiers that corresponded well to the functions of Expansive and Hyponymic Classification.
Expansive Classifiers are more closely related to Epithets, and Hyponymic Classifiers more closely
related to multi-word Things, so the distinction is roughly along the lines of marked and unmarked
Classifiers, although both contain a considerable percentage of marked cases realised by adjectives. It is
interesting that Figure 1 shows that the difference between the types of Classifiers is one of the most well
defined, indicating that the adjectives realising marked Hyponymic Classifiers were confidently identified.
Hyponymic Classifiers are much more likely to occur in compound or recursive Classifying structures
(Matthiessen, 1995), which is why they exhibit strong collocational tendencies with the previous word,
while the Expansive Classifiers exhibit almost none. As expected, the collocational tendencies with the
following word were greater for Classifiers than for Epithets, although the variance is also quite high.
The selection of parts-of-speech context also differs between functions. While the Hyponymic Clas-
sifiers seem to follow adjectives, and therefore are likely to follow other Classifiers or Epithets, the Ex-
pansive Classifiers most commonly follow a preposition, indicating that they are likely to occur without
a Deictic or Numerative and without sub-modification.
Epithets generally occur more frequently than Classifiers, so the probability of repetition of a Classifier
within a document being almost twice as high is especially significant.
5.3 Thing
The clusters that were discovered can be roughly divided between those describing Named Entities
(First, Intermediate and Last Names), those with the phrase realised by a single word (corresponding to
Nominative and non-Nominative functions within the clause) and nominals corresponding to the Refer-
ring and Informing functions of a noun phrase. The properties of the Named Entity and Nominative/non-
Nominative functions are well-known and there were few surprises in the features describing them here.
Here, we investigate the relative frequencies of functional modification of Stated and Described Things, assuming that most are some combination of Referring and Informing functions (O'Donnell, 1998).
Stated (Thing), e.g. 'media questions', 'the invitation', 'such comparisons'
  phrs start: yes=2%
  pos: NN=67%, NNS=30%
  prev pos: JJ=32%, DT=27%, NN=16%
  prev phrs: prep=46%, verb=33%, noun=12%
  next phrs: prep=45%, noun=21%, verb=10%
  coll prev: ave=0.31, var=0.16
  coll next: ave=0.08, var=0.02

Described (Thing), e.g. '20.67 seconds', 'former winner', 'our implementation'
  phrs start: yes=58%
  pos: NN=45%, NNS=13%
  prev pos: JJ=21%, NN=21%, NNP=19%
  prev phrs: noun=78%, verb=12%, prep=8%
  next phrs: noun=91%, verb=4%, conj=2%
  coll prev: ave=0.10, var=0.05
  coll next: ave=0.01, var=0.00

Table 4: Properties of the Stated/Described Things
The distinction between the two may be seen in the choices made within the Deictic and Classi-
fication systems of delicacy. While the Stated Thing is twice as likely to be modified by a Deictic, over
80% of these are Demonstratives, which don’t feature in the Described’s modifications. This trend is re-
versed for Classifiers. The Described Things are more than twice as likely to be modified by a Classifier,
and within this over 70% of cases are Expansive, as opposed to about 25% for the Stated Things.
As Figure 1 shows, the trend of Hyponymic Classifiers being more closely related to the Thing is re-
versed for the Stated Things: unlike other Things, a Stated Thing is most closely related to an Expansive
Classifier. An explanation for this reversal is that a Hyponymic Classifier may itself undergo Classification while an Expansive Classifier generally does not, although the Stated Thing seems
to define a number of aberrant ‘hills and valleys’ with the intersection of the other functions in Figure 1,
indicating it may represent something more complicated.
Not described in Figure 2 is that the percentage of Epithets is much lower than the percentage of preceding adjectives given in Table 4, indicating that markedness is common to both. The fact that the Stated Thing is twice as likely as the Described to be modified Epithetically indicates that the labels given to them
are not quite sufficient in describing the complexities of the differences. This also demonstrates that
at finer layers of delicacy, the variation in function can quickly become very emergent, even when the
corresponding parts-of-speech and other surface-level phenomena independently differ only slightly.
5.4 Inference of unmarked function
The inference of unmarked function and register variation was investigated using traditional methods of
calculation from categorical analysis techniques. Precision is the percentage of classifications made that
were correct. Recall is the percentage of actual target classes that were correctly identified. An Fβ=1
value is the harmonic mean of the two.
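A minimal sketch of these three figures for a single target function (the general F_beta form is shown; the F_beta=1 value used here is the beta = 1 case):

```python
def precision_recall_f(gold, predicted, target, beta=1.0):
    """Precision, recall and F_beta for one target function label."""
    tp = sum(1 for g, p in zip(gold, predicted) if p == target and g == target)
    fp = sum(1 for g, p in zip(gold, predicted) if p == target and g != target)
    fn = sum(1 for g, p in zip(gold, predicted) if p != target and g == target)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0.0:
        return precision, recall, 0.0
    f = (1 + beta ** 2) * precision * recall / (beta ** 2 * precision + recall)
    return precision, recall, f
```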
The baseline here was defined as that given by an assumption of unmarked function, that is, the optimal
result given by word order and part-of-speech.
It might be assumed that functions within a noun phrase are typically unmarked. This work is the first empirical investigation of this assumption and shows it to be false: less than 40% of non-final adjectives realised Epithets; less than 50% of Classifiers were nouns; and 44% of Classifiers were marked. While the relative frequency of the various functions varied between registers (Munro, 2003b), the ratio of marked to unmarked function was consistent. The only functions with an Fβ=1 baseline above 0.7 across all registers were Deictics and Things. For these two functions, word order and closed-group word lists could have produced the same results without part-of-speech knowledge.
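As a rough illustration of what this baseline amounts to (an approximation assembled from the unmarked patterns described in this paper, not the author's implementation), an unmarked-function assignment from part-of-speech and word order alone might look like the following; it is exactly this kind of rule that the figures above show covers only about half of the Classifier instances:

```python
def unmarked_functions(pos_tags):
    """Assign the unmarked function to each word of a noun phrase from
    POS and position alone: determiners/possessives -> Deictic,
    numbers -> Numerative, adjectives -> Epithet, the final word ->
    Thing, other pre-Head nouns -> Classifier."""
    functions = []
    last = len(pos_tags) - 1
    for i, pos in enumerate(pos_tags):
        if i == last:
            functions.append("Thing")
        elif pos in ("DT", "PRP$", "POS"):
            functions.append("Deictic")
        elif pos == "CD":
            functions.append("Numerative")
        elif pos.startswith("JJ"):
            functions.append("Epithet")
        elif pos.startswith("NN"):
            functions.append("Classifier")
        else:
            functions.append("Epithet")  # fallback for remaining pre-Head modifiers
    return functions

# e.g. 'the bronze medal' tagged DT NN NN:
# unmarked_functions(["DT", "NN", "NN"]) -> ['Deictic', 'Classifier', 'Thing']
```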
