Botok – Python Tibetan Tokenizer



Description

Botok is a powerful Python library for tokenizing Tibetan text. It segments text into words with high accuracy and provides optional attributes such as lemma, part-of-speech (POS) tags, and clean forms. The library supports various text formats, custom dialects, and multiple tokenization modes, making it a versatile tool for Tibetan Natural Language Processing (NLP).

Key Features

  • Word Segmentation: Accurate word segmentation with support for affixed particles (see the sketch after this list)
  • Multiple Tokenization Modes:
    • Word tokenization
    • Chunk tokenization (groups of meaningful characters)
    • Space-based tokenization
  • Rich Token Attributes:
    • Lemmatization
    • POS tagging
    • Clean form generation
  • Custom Dialect Support: Use pre-configured dialects or create your own
  • File Processing: Process both strings and files with automatic output generation
  • Robust Handling: Manages complex cases like double tseks and spaces within words
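
For example, affix splitting can be toggled per call. The following is a minimal sketch comparing both settings; it reuses only the WordTokenizer API shown under Basic Usage below, with a shortened version of the sample sentence used there.

from pathlib import Path

from botok import WordTokenizer
from botok.config import Config

config = Config(dialect_name="general", base_path=Path.home())
wt = WordTokenizer(config=config)

text = "བཀྲ་ཤིས་བདེ་ལེགས་ཞུས་རྒྱུ་ཡིན།"

# Keep affixed particles attached to their host word
merged = wt.tokenize(text, split_affixes=False)

# Split affixed particles into tokens of their own
split = wt.tokenize(text, split_affixes=True)

# With splitting enabled, the token count should be at least as large
print(len(merged), len(split))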

Installation

Requirements

  • Python 3.6 or higher
  • pip package manager

Basic Installation

pip install botok

Development Installation

git clone https://github.com/OpenPecha/botok.git
cd botok
pip install -e .
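
To sanity-check the editable install, this small sketch (not part of botok's own documentation) imports the package and reads the installed version from package metadata; importlib.metadata requires Python 3.8 or higher.

# Confirm botok is importable and report the version pip registered
from importlib.metadata import version

import botok  # import succeeds only if the install worked

print(version("botok"))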

Basic Usage

Simple Word Tokenization

from botok import WordTokenizer
from botok.config import Config
from pathlib import Path

# Initialize tokenizer with default configuration
config = Config(dialect_name="general", base_path=Path.home())
wt = WordTokenizer(config=config)

# Tokenize text
text = "བཀྲ་ཤིས་བདེ་ལེགས་ཞུས་རྒྱུ་ཡིན་ སེམས་པ་སྐྱིད་པོ་འདུག།"
tokens = wt.tokenize(text, split_affixes=False)

# Print each token
for token in tokens:
    print(token)
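
Each token also carries the attributes listed under Key Features. As a minimal sketch, they can be read individually; the attribute names text, pos, and lemma are assumptions about botok's token objects and may vary between versions, hence the guarded getattr calls:

# Print selected attributes instead of the full token dump
for token in tokens:
    print(token.text, getattr(token, "pos", None), getattr(token, "lemma", None))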

File Processing

from botok import Text
from pathlib import Path

# Process a file
input_file = Path("input.txt")
t = Text(input_file)
t.tokenize_chunks_plaintext  # Accessing this property writes input_pybo.txt with the tokenized output
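
The same pattern extends to whole folders. This is a minimal sketch assuming, as above, that accessing tokenize_chunks_plaintext on a file-backed Text writes a <name>_pybo.txt file next to each input; the corpus folder name is just a placeholder.

from pathlib import Path

from botok import Text

# Tokenize every .txt file in a folder; output lands beside each input
for input_file in Path("corpus").glob("*.txt"):
    Text(input_file).tokenize_chunks_plaintext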

Advanced Usage

Custom Dialect Configuration

from botok import WordTokenizer
from botok.config import Config
from pathlib import Path

# Configure custom dialect
config = Config(
    dialect_name="custom",
    base_path=Path.home() / "my_dialects"
)

# Initialize tokenizer with custom config
wt = WordTokenizer(config=config)

# Process text with custom settings
text = "བཀྲ་ཤིས་བདེ་ལེགས།"
tokens = wt.tokenize(
    text,
    split_affixes=True,
    pos_tagging=True,
    lemmatize=True
)
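
The dialect pack is expected to live in a folder named after dialect_name under base_path (an assumption based on the two Config arguments). The sketch below prepares such a folder; the dictionary and adjustments subfolder names are assumptions modeled on botok's default dialect pack and should be checked against the installed version.

from pathlib import Path

# Hypothetical pack layout: <base_path>/<dialect_name>/{dictionary,adjustments}
dialect_dir = Path.home() / "my_dialects" / "custom"
for sub in ("dictionary", "adjustments"):
    (dialect_dir / sub).mkdir(parents=True, exist_ok=True)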

Different Tokenization Modes

from botok import Text

text = """ལེ གས། བཀྲ་ཤིས་མཐའི་ ༆ ཤི་བཀྲ་ཤིས་"""
t = Text(text)

# 1. Word tokenization
words = t.tokenize_words_raw_text

# 2. Chunk tokenization (groups of meaningful characters)
chunks = t.tokenize_chunks_plaintext

# 3. Space-based tokenization
spaces = t.tokenize_on_spaces
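
To compare the three modes side by side, the results can be printed directly; each mode is expected to return a plain string of space-delimited tokens (an assumption based on the plaintext naming).

# Print each mode's output for a quick comparison
for name, result in (("words", words), ("chunks", chunks), ("spaces", spaces)):
    print(f"--- {name} ---")
    print(result)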

Documentation

For comprehensive documentation, see the project's documentation site.

Development

Building from Source

rm -rf dist/
python setup.py clean sdist

Publishing to PyPI

Automated Publishing with Semantic Versioning

The repository is configured with GitHub Actions to automatically handle version bumping and publishing to PyPI when changes are pushed to the master branch. The workflow uses semantic versioning based on commit messages:

  1. Use the following commit message formats:

    • fix: your message - For bug fixes (triggers PATCH version bump)
    • feat: your message - For new features (triggers MINOR version bump)
    • Add BREAKING CHANGE: description in the commit body for breaking changes (triggers MAJOR version bump)

    Examples:

    # This will trigger a PATCH version bump (e.g., 0.8.12 → 0.8.13)
    fix: improve test coverage to 90% and fix Python 3.12 compatibility
    
    # This will trigger a MINOR version bump (e.g., 0.8.12 → 0.9.0)
    feat: add new sentence tokenization mode for complex Tibetan sentences
    
    # This will trigger a MAJOR version bump (e.g., 0.8.12 → 1.0.0)
    feat: refactor token attributes structure
    
    BREAKING CHANGE: Token.attributes now uses a dictionary format instead of properties, requiring changes to code that accesses token attributes directly
    
  2. When you push to the master branch, the CI workflow will:

    • Run all tests across multiple Python versions
    • Analyze commit messages to determine the next version number
    • Update version numbers in the code
    • Create a new release on GitHub
    • Publish the package to PyPI

Manual Publishing

For manual publishing (if needed):

twine upload dist/*

Running Tests

pytest tests/

Contributing

We welcome contributions! Here's how you can help:

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/AmazingFeature)
  3. Commit your changes (git commit -m 'Add some AmazingFeature')
  4. Push to the branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

Please ensure your PR passes the existing test suite and follows the project's code style.

Project Owners

Acknowledgements

Botok is an open-source library for Tibetan NLP. We are grateful to our sponsors and contributors:

Sponsors

Contributors

License

Copyright (C) 2019-2025 OpenPecha. Licensed under Apache 2.0.