<?xml version='1.0' encoding='UTF-8'?><metadata xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:dcterms="http://purl.org/dc/terms/" xmlns="http://dublincore.org/documents/dcmi-terms/"><dcterms:title>Minho Spoken Syllable Pool (MSSP)</dcterms:title><dcterms:identifier>https://doi.org/10.34622/datarepositorium/KDIXY6</dcterms:identifier><dcterms:creator>Soares, Ana Paula</dcterms:creator><dcterms:creator>Mendes Oliveira, Helena</dcterms:creator><dcterms:publisher>Repositório de Dados da Universidade do Minho</dcterms:publisher><dcterms:issued>2026-04-10</dcterms:issued><dcterms:modified>2026-04-10T10:03:26Z</dcterms:modified><dcterms:description>The Minho Spoken Syllable Pool (MSSP) provides 266 European Portuguese consonant–vowel (CV) syllables recorded under uniform conditions (single native speaker; controlled setting; constant recording chain and environment), together with integrated linguistic descriptors, including IPA and SAMPA transcriptions, segmental and articulatory annotations, and orthographic mappings. The focus on CV syllables reflects their canonical status and widely assumed cross-linguistic prevalence, supporting their use not only in European Portuguese research but also in artificial-language learning and statistical-learning paradigms that build controlled streams from natural CV syllables. To enable frequency-controlled designs, MSSP includes corpus-derived type and token syllable frequency measures computed from SUBTLEX-PT, with indices conditioned by word length and syllable position, as well as stress-related frequency counts to support prosody-sensitive stimulus selection in a language with variable stress assignment.
MSSP further provides selection-oriented acoustic descriptors (e.g., syllable duration and F0 for all items; nasal vowels additionally characterized via amplitude-based nasalization indices) to facilitate screening and transparent reporting of stimulus properties.</dcterms:description><dcterms:subject>Arts and Humanities</dcterms:subject><dcterms:subject>Social Sciences</dcterms:subject><dcterms:subject>spoken syllables</dcterms:subject><dcterms:subject>speech perception</dcterms:subject><dcterms:subject>auditory ERPs</dcterms:subject><dcterms:subject>auditory word recognition</dcterms:subject><dcterms:subject>word segmentation</dcterms:subject><dcterms:subject>speech database</dcterms:subject><dcterms:subject>statistical learning</dcterms:subject><dcterms:subject>European Portuguese</dcterms:subject><dcterms:language>Portuguese</dcterms:language><dcterms:date>2026</dcterms:date><dcterms:contributor>Mendes Oliveira, Helena</dcterms:contributor><dcterms:contributor>Soares, Ana Paula</dcterms:contributor><dcterms:dateSubmitted>2026-02-13</dcterms:dateSubmitted><dcterms:temporal>2023</dcterms:temporal><dcterms:temporal>2025</dcterms:temporal><dcterms:relation>http://p-pal.di.uminho.pt/tools</dcterms:relation><dcterms:type>auditory CV syllables</dcterms:type><dcterms:type>linguistic descriptors, including IPA and SAMPA transcriptions</dcterms:type><dcterms:type>segmental and articulatory annotations</dcterms:type><dcterms:type>orthographic mappings</dcterms:type><dcterms:license>NONE</dcterms:license><dcterms:rights>&lt;a rel="license" href="http://creativecommons.org/licenses/by/4.0/">&lt;img alt="Creative Commons Licence" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/88x31.png" />&lt;/a>&lt;br />This work is licensed under a &lt;a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License&lt;/a>.</dcterms:rights></metadata>