
Add Intl.Segmenter support #539


Merged

merged 5 commits into master from intl.segmenter on Aug 1, 2024
Conversation

Collaborator

@ExplodingCabbage commented Aug 1, 2024

Resolves #438; probably also adequately resolves #214 even though it's not quite the solution that was asked for in that issue.
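In practice the new option looks roughly like this (an illustrative sketch based on the intlSegmenter option name documented in the jsdiff README, not this PR's exact test code):

// Pass a word-granularity Intl.Segmenter to diffWords so tokenization
// follows locale-aware word boundaries instead of whitespace.
const {diffWords} = require('diff');

const segmenter = new Intl.Segmenter('zh', {granularity: 'word'});
const changes = diffWords('我有很多桌子。', '我有很多儿子。', {intlSegmenter: segmenter});

for (const part of changes) {
  // Each change object is {value, added?, removed?}.
  console.log(part.added ? '+' : part.removed ? '-' : ' ', part.value);
}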

@ExplodingCabbage marked this pull request as ready for review August 1, 2024 12:26
@ExplodingCabbage merged commit 4f0430a into master Aug 1, 2024
@ExplodingCabbage deleted the intl.segmenter branch August 1, 2024 12:30
ryota-ka added a commit to ryota-ka/DefinitelyTyped that referenced this pull request Oct 1, 2024
ryota-ka added a commit to ryota-ka/DefinitelyTyped that referenced this pull request Oct 8, 2024
// 2. "Mei (梅) has (有) many (很多) sons (儿子)"
// We want to see that diffWords will get the word counts right and won't try to treat the
// trailing 子 as common to both texts (since it's part of a different word each time).
// TODO: Check with a Chinese speaker that this example is correct Chinese.
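In other words: with word-granularity segmentation, 桌子 and 儿子 are each single tokens, so no unchanged region of the diff should ever contain the shared trailing 子. A minimal sketch of that property (assuming the intlSegmenter option; not the literal test code):

const {diffWords} = require('diff');
const chineseSegmenter = new Intl.Segmenter('zh', {granularity: 'word'});
const parts = diffWords('我有很多桌子。', '梅有很多儿子。', {intlSegmenter: chineseSegmenter});
// Expect 桌子 only in removed parts and 儿子 only in added parts; the
// trailing 子 must never surface as unchanged text shared by both sides.
console.assert(!parts.some(p => !p.added && !p.removed && p.value.includes('子')));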
Contributor

Hello @ExplodingCabbage, I'm Chinese; I can confirm that the meaning of these two sentences is correct.

But I'm not sure about the test purpose here.

I can see that

> [...chineseSegmenter.segment('我有很多桌子。')].map(({segment})=>segment)
[ '我有', '很多', '桌子', '。' ]
> [...chineseSegmenter.segment('梅有很多儿子。')].map(({segment})=>segment)
[ '梅', '有', '很多', '儿子', '。' ]

I'm not sure why '我有' is kept together while '梅' and '有' are separated. If you want to test something similar, you can change 梅有 to 他有 (he has) or 她有 (she has).

Contributor

@fisker commented May 16, 2025

It's quite strange...

> [...chineseSegmenter.segment('她有很多桌子。')].map(({segment})=>segment)
[ '她', '有', '很多', '桌子', '。' ]
> [...chineseSegmenter.segment('他有很多桌子。')].map(({segment})=>segment)
[ '他有', '很多', '桌子', '。' ]

'她' and '他' are the same pronoun; one is masculine ('he'), the other feminine ('she').

Collaborator Author

> But I'm not sure about the test purpose here.

Huh. What I wanted to demonstrate was that the "tokens" we split the text into are what a Chinese speaker or linguist would consider single words. But from what you write, it sounds like the tokens in this case aren't what you'd consider words, and Intl.Segmenter is giving an incorrect result? i.e. if I understand you right, you would say "我有" is two words ("我" and "有"), not a single word made up of two characters? Is there any ambiguity about this - any reason a native speaker might argue that 我有 is a single word - or is Intl.Segmenter just completely, unambiguously wrong here?

If so, it's probably worth a bug report to... whatever the underlying source of the segmentation rules is here. (ICU, I think.)

Anyway, for clarity I'll tweak this test to show an example where Intl.Segmenter manages to get the tokenization right. Thank you for commenting!
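One way the tweak might look, following the 他有 suggestion above (purely illustrative, extrapolating fisker's REPL output for 桌子 to 儿子; not necessarily the change that actually landed):

const {diffWords} = require('diff');
const chineseSegmenter = new Intl.Segmenter('zh', {granularity: 'word'});
// With 他 instead of 梅, both sentences should segment into parallel tokens:
//   '我有很多桌子。' → ['我有', '很多', '桌子', '。']
//   '他有很多儿子。' → ['他有', '很多', '儿子', '。']
// so the word counts line up and 桌子/儿子 are swapped wholesale.
const parts = diffWords('我有很多桌子。', '他有很多儿子。', {intlSegmenter: chineseSegmenter});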

Contributor

> not a single word made up of two characters?

It's not really "incorrect"; people have different opinions. It just feels odd that the segmentation is inconsistent across similar constructions.

# I have money
> [...chineseSegmenter.segment('我有钱')].map(({segment})=>segment)
[ '我有', '钱' ]
# You have money
> [...chineseSegmenter.segment('你有钱')].map(({segment})=>segment)
[ '你有', '钱' ]
# He has money
> [...chineseSegmenter.segment('他有钱')].map(({segment})=>segment)
[ '他有', '钱' ]
# She has money
> [...chineseSegmenter.segment('她有钱')].map(({segment})=>segment)
[ '她', '有', '钱' ]
# It has money
> [...chineseSegmenter.segment('它有钱')].map(({segment})=>segment)
[ '它', '有', '钱' ]
# I love flowers
> [...chineseSegmenter.segment('我爱花')].map(({segment})=>segment)
[ '我', '爱', '花' ]
# You love flowers
> [...chineseSegmenter.segment('你爱花')].map(({segment})=>segment)
[ '你爱', '花' ]
# He loves flowers
> [...chineseSegmenter.segment('他爱花')].map(({segment})=>segment)
[ '他', '爱', '花' ]
# She loves flowers
> [...chineseSegmenter.segment('她爱花')].map(({segment})=>segment)
[ '她', '爱', '花' ]
# It loves flowers
> [...chineseSegmenter.segment('它爱花')].map(({segment})=>segment)
[ '它', '爱', '花' ]

ExplodingCabbage added a commit that referenced this pull request May 19, 2025
…enter segments in a way that seems more correct (or at least more self-consistent) to a native Chinese speaker

See discussion at #539 (comment) for explanation
ExplodingCabbage added a commit that referenced this pull request May 19, 2025
…enter segments in a way that seems more correct (or at least more self-consistent) to a native Chinese speaker (#613)

See discussion at #539 (comment) for explanation
Development

Successfully merging this pull request may close these issues:

Support using an Intl.Segmenter for word tokenization in diffWords
Making whitespace the only word separator?