I've been wondering about Bayesian models of language learning and bilingualism. Models such as Griffiths & Kalish (2005) assume that learners maintain probabilities over hypotheses about the structure of a language, drawn from a large hypothesis space and updated on the utterances they hear. The posterior probability represents the learner's model of a speaker's language (compatible with a view of the child trying to learn the parents' Medium). Two decision rules drive the learner's convergence to a best hypothesis: the MAP (maximum a posteriori) rule commits to the maximally probable 'language' and only produces strings generated by that 'language', while the sampling rule (SAM) never rules out any hypothesis with nonzero probability and may occasionally produce mixed strings.
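The contrast between the two rules can be sketched in a few lines of Python. This is a toy illustration, not the actual model from Griffiths & Kalish: the two 'languages' (`L1`, `L2`), their string distributions, the uniform prior, and the smoothing constant are all my own assumptions.

```python
import random

# Toy hypothesis space: two 'languages', each a distribution over strings.
# These grammars and the uniform prior are illustrative assumptions.
HYPOTHESES = {
    "L1": {"aa": 0.5, "ab": 0.5},
    "L2": {"ba": 0.5, "bb": 0.5},
}
PRIOR = {"L1": 0.5, "L2": 0.5}

def posterior(utterances):
    """P(h | data) proportional to P(h) * product of P(utterance | h).
    A small epsilon keeps unheard strings from zeroing out a hypothesis."""
    eps = 1e-6
    scores = {}
    for h, grammar in HYPOTHESES.items():
        p = PRIOR[h]
        for u in utterances:
            p *= grammar.get(u, eps)
        scores[h] = p
    total = sum(scores.values())
    return {h: s / total for h, s in scores.items()}

def produce_map(post):
    """MAP learner: commit to the single most probable hypothesis
    and only produce strings generated by it."""
    best = max(post, key=post.get)
    strings, weights = zip(*HYPOTHESES[best].items())
    return random.choices(strings, weights=weights)[0]

def produce_sam(post):
    """Sampling (SAM) learner: first sample a hypothesis from the
    posterior, then produce a string from that hypothesis."""
    hs, ws = zip(*post.items())
    h = random.choices(hs, weights=ws)[0]
    strings, weights = zip(*HYPOTHESES[h].items())
    return random.choices(strings, weights=weights)[0]
```

On monolingual input such as `["aa", "ab", "aa"]` the posterior concentrates almost entirely on `L1`, so the MAP learner speaks pure `L1`, while the SAM learner retains a tiny chance of producing an `L2` string.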
In a monolingual environment, MAP should be the most efficient rule, but SAM is better suited to acquiring more than one language. A sampling approach is also consistent with the observation that bilinguals show better task switching but worse inhibition than monolinguals. This may be another factor behind the differences between monolingual and bilingual development.
However, I'm not completely sure about the maths, and suspect that MAP could equally define a best hypothesis over any number of 'languages' (for instance, over a hypothesis space that itself includes mixed-language hypotheses), in which case the two rules may be equivalent.
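The equivalence worry above can be made concrete: if the hypothesis space contains a 'mixed' hypothesis blending both languages, then under bilingual input MAP selects the mixture and produces mixed strings just as SAM would. Again a toy sketch under my own assumptions (the grammars, the mixture weights, and the uniform prior are invented for illustration).

```python
# Hypothesis space that includes a 'mix' hypothesis: a uniform blend
# of the two toy languages. All values here are illustrative assumptions.
HYPOTHESES = {
    "L1":  {"aa": 0.5, "ab": 0.5},
    "L2":  {"ba": 0.5, "bb": 0.5},
    "mix": {"aa": 0.25, "ab": 0.25, "ba": 0.25, "bb": 0.25},
}
PRIOR = {"L1": 1 / 3, "L2": 1 / 3, "mix": 1 / 3}

def map_hypothesis(utterances):
    """Return the maximum a posteriori hypothesis for the given input.
    Epsilon smoothing keeps unheard strings from zeroing a hypothesis."""
    eps = 1e-6
    scores = {}
    for h, grammar in HYPOTHESES.items():
        p = PRIOR[h]
        for u in utterances:
            p *= grammar.get(u, eps)
        scores[h] = p
    return max(scores, key=scores.get)
```

On balanced bilingual input like `["aa", "bb", "ab", "ba"]`, the pure-language hypotheses are heavily penalised by the epsilon terms and MAP picks `"mix"`; on monolingual input like `["aa", "ab"]` it picks `"L1"`. So whether MAP and SAM differ seems to hinge on whether the hypothesis space is restricted to single languages.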