tokenizer {tm}                                                R Documentation

Tokenizers

Description

Tokenize a document or character vector.

Usage

MC_tokenizer(x)
scan_tokenizer(x)

Arguments

x

A character vector.

Details

The quality and correctness of a tokenization algorithm depend heavily on the context and application scenario. Relevant factors are the language of the underlying text and the notions of whitespace (which can vary with the encoding used and with the language) and punctuation marks. Consequently, for superior results you probably need a custom tokenization function.

scan_tokenizer

Relies on scan(..., what = "character").
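Since scan() splits on whitespace, punctuation stays attached to the adjacent tokens. A small illustration (the input string is made up for demonstration):

scan_tokenizer("Some text, with punctuation!")
## "Some"  "text,"  "with"  "punctuation!"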

MC_tokenizer

Implements the functionality of the tokenizer in the MC toolkit (http://www.cs.utexas.edu/users/dml/software/mc/).
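In contrast to scan_tokenizer, the MC tokenizer treats punctuation characters as token separators rather than as parts of tokens. A hedged illustration on the same made-up input (the exact result, e.g. handling of empty strings, may differ between tm versions):

MC_tokenizer("Some text, with punctuation!")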

Value

A character vector consisting of the tokens obtained by tokenizing x.

Author(s)

Ingo Feinerer

See Also

getTokenizers

Examples

data("crude")

## Tokenize the first document of the crude corpus
MC_tokenizer(crude[[1]])
scan_tokenizer(crude[[1]])

## A simple custom tokenizer splitting on whitespace
strsplit_space_tokenizer <- function(x) unlist(strsplit(x, "[[:space:]]+"))
strsplit_space_tokenizer(crude[[1]])

[Package tm version 0.5-10 Index]