Language/Multiple-languages/Culture/Text-Processing-Tools


In this lesson, several linguistic tools that are useful for language learners are discussed. They are not always accurate, so keep that in mind.

Many of the tools introduced are written in Python, which is an important language in machine learning and easy to learn.

If you don't know Python, please try this:

In progress.

Diacritization

In the Arabic writing system, diacritics indicate short vowels and other pronunciation details, but they are usually omitted to keep writing fast and fluent. The process of restoring diacritics is called diacritization.


Arabic:
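
As a rough illustration of the idea (not a description of any particular tool), here is a minimal Python sketch of dictionary-based diacritization. The tiny lexicon and the diacritize function are hypothetical; real diacritizers use large dictionaries and statistical or neural models to choose the correct form from context.

# Hypothetical one-entry lexicon: undiacritized form -> one possible diacritized form.
LEXICON = {
    "كتب": "كَتَبَ",  # kataba, "he wrote" (only one of several possible readings)
}

def diacritize(text):
    # Replace each word with a diacritized form if the lexicon knows it,
    # otherwise leave the word unchanged.
    return " ".join(LEXICON.get(word, word) for word in text.split())

print(diacritize("كتب"))  # كَتَبَ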

Pitch-Accent Marking

In Japanese and some other languages, pitch accent is important for distinguishing different words, but it is not marked in ordinary writing, so learners need a way to look it up.


Japanese:
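
The following minimal Python sketch (not any specific tool) shows the idea of looking up pitch-accent patterns in a dictionary. The two entries use the common numeric convention where 0 means a flat (heiban) word and n means the pitch drops after the n-th mora; real tools consult full accent dictionaries.

# Hypothetical accent dictionary: kana spelling -> {accent number: meaning}.
ACCENT_DICT = {
    "はし": {1: "chopsticks (箸)", 2: "bridge (橋)"},  # same kana, different accent
    "あめ": {1: "rain (雨)", 0: "candy (飴)"},
}

def accent_readings(word):
    # List every known accent pattern for a kana spelling.
    patterns = ACCENT_DICT.get(word)
    if not patterns:
        return word + ": accent unknown"
    return "; ".join(f"{word}[{n}] = {gloss}" for n, gloss in patterns.items())

print(accent_readings("はし"))  # はし[1] = chopsticks (箸); はし[2] = bridge (橋)
print(accent_readings("あめ"))  # あめ[1] = rain (雨); あめ[0] = candy (飴)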

Stress Generation

In Russian and some other languages, word stress is important for distinguishing different words, but stress marks are usually omitted in ordinary writing.


Russian:
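
Here is a minimal Python sketch of the idea (the stress dictionary is hypothetical): stress is shown by inserting the combining acute accent, U+0301, after the stressed vowel. Real stress tools also disambiguate by context, for example за́мок "castle" versus замо́к "lock".

COMBINING_ACUTE = "\u0301"  # combining acute accent, rendered over the preceding letter

# Hypothetical stress dictionary: word -> index of the stressed vowel.
STRESS_DICT = {
    "молоко": 5,  # молоко́ (milk), stress on the final о
    "собака": 3,  # соба́ка (dog), stress on the second а
}

def mark_stress(word):
    # Insert U+0301 after the stressed vowel if the word is in the dictionary.
    i = STRESS_DICT.get(word.lower())
    if i is None:
        return word
    return word[:i + 1] + COMBINING_ACUTE + word[i + 1:]

print(mark_stress("молоко"))  # молоко́
print(mark_stress("собака"))  # соба́ка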

Word Segmentation

In some languages, words are not separated by spaces, for example Chinese, Japanese, Lao, and Thai. In Vietnamese, spaces divide syllables rather than words. This causes difficulties for programs such as VocabHunter, gritz, and text-memorize, which detect words only by spaces.

The solution is called "word segmentation": a tool detects word boundaries and either inserts spaces between the words or puts the segmented words into a list. You may ask: if these programs only recognise spaces as word separators, how can Vietnamese be handled? The answer is to use the non-breaking space, as sketched under the Vietnamese entry below.


Chinese:
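
One widely used open-source segmenter for Chinese is the Python library jieba. A short sketch, assuming jieba has been installed with pip install jieba:

import jieba

text = "我喜欢学习外语"      # "I like learning foreign languages"
words = jieba.lcut(text)     # returns a list of words, e.g. ['我', '喜欢', '学习', '外语']
print(" ".join(words))       # "我 喜欢 学习 外语" - spaces inserted between words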

Japanese:

Lao:

Thai:

Vietnamese:
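
For Vietnamese, the sketch below shows the non-breaking-space trick in Python; the pre-segmented word list is made up for illustration, since a real segmenter would produce it. Syllables inside each word are joined with U+00A0, so a program that splits only on ordinary spaces still sees whole words.

NBSP = "\u00a0"  # non-breaking space

# Assume a segmenter has already grouped the syllables into words:
segmented = [["học", "sinh"], ["đi"], ["học"]]  # "the student goes to school"

line = " ".join(NBSP.join(word) for word in segmented)
print(line)             # syllables inside each word are joined by NBSP
print(line.split(" "))  # ['học\xa0sinh', 'đi', 'học'] - whole words, not syllables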