:wink: :cyclone: :strawberry: TextRank implementation in Golang with extendable features (summarization, phrase extraction) and multithreading (goroutine) support (Go 1.8, 1.9, 1.10)

TextRank on Go


This source code is an implementation of the TextRank algorithm, released under the MIT license.
The minimum required Go version is 1.8.


MOTIVATION

If only there were a program that could continuously rank the words, phrases, and sentences of book-sized texts on multiple threads, that could be extended through objects, that was written in a simple, safe, statically typed language, and that was very well documented... Now, here it is.

DEMO

Recona is a simple, pre-programmed AI that uses this library to rank raw texts. It visualizes how ranking works and demonstrates how it can be used for different purposes: Recona.app

FEATURES

  • Find the most important phrases.
  • Find the most important words.
  • Find the most important N sentences.
    • Importance by phrase weights.
    • Importance by word occurrence.
  • Find the first N sentences, starting from the Xth sentence.
  • Find sentences by phrase chains ordered by position in text.
  • Access to the whole ranked data.
  • Support for multiple languages.
  • Algorithm for weighting can be modified by interface implementation.
  • Parser can be modified by interface implementation.
  • Multithreading support.

INSTALL

You can install TextRank with go get:

go get github.com/DavidBelicza/TextRank

TextRank uses dep as its vendoring tool, so the required dependencies are versioned under the vendor folder. The exact version numbers are defined in Gopkg.toml. If you want to reinstall the dependencies, flush the vendor folder and run:

dep ensure

DOCKER

Using Docker with TextRank isn't necessary; it's just an option.

Build image from the repository's root directory:

docker build -t go_text_rank_image .

Create container from the image:

docker run -dit --name textrank go_text_rank_image:latest

Run go test -v . inside the container:

docker exec -i -t textrank go test -v .

Stop, start or remove the container:

  • docker stop textrank
  • docker start textrank
  • docker rm textrank

HOW DOES IT WORK

The easiest way to see how it works is to use the sample text, which can be found in the textrank_test.go file. It's a short text about Gnome Shell.

  • TextRank reads the text,
    • parses it,
    • removes the unnecessary stop words,
    • tokenizes it,
  • counts the occurrence of the words and phrases,
  • and then starts weighting
    • by the occurrence of words and phrases and their relations.
  • After the weights are calculated, TextRank normalizes them to a range between 0 and 1.
  • Then the different finder methods are able to find the most important words, phrases or sentences.
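The final normalization step can be sketched as a plain min-max scaler. This is an illustrative stand-alone snippet, not the library's actual code; the function name and sample data are made up:

```go
package main

import "fmt"

// normalize rescales raw weights into the 0..1 range, the way
// TextRank's final step does. Illustrative min-max sketch only,
// not the library's implementation.
func normalize(weights map[string]float64) map[string]float64 {
	first := true
	var min, max float64
	for _, w := range weights {
		if first {
			min, max = w, w
			first = false
		}
		if w < min {
			min = w
		}
		if w > max {
			max = w
		}
	}
	out := make(map[string]float64, len(weights))
	for k, w := range weights {
		if max == min {
			out[k] = 1
			continue
		}
		out[k] = (w - min) / (max - min)
	}
	return out
}

func main() {
	// Raw scores stand in for whatever the ranking produced.
	raw := map[string]float64{"gnome - shell": 5, "icons - tray": 3, "dock - dash": 2}
	fmt.Println(normalize(raw)) // the highest score maps to 1, the lowest to 0
}
```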

The most important phrases from the sample text are:

Phrase             Occurrence   Weight
gnome - shell      5            1
extension - gnome  3            0.50859946
icons - tray       3            0.49631447
gnome - caffeine   2            0.27027023

Gnome is the most frequently used word in this text, and shell is also used multiple times. The two of them occur together as a phrase 5 times. This is the highest occurrence in this text, so it is the most important phrase.

The next two important phrases have the same occurrence, 3, yet they are not equal. This is because the extension gnome phrase contains the word gnome, the most popular word in the text, which increases the phrase's weight. It increases the weight of any word related to it, but not so much that it overcomes other important phrases that don't contain the word gnome.

The exact algorithm can be found in the algorithm.go file.
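The boost described above can be illustrated with a toy formula (purely hypothetical; the library's real weighting lives in algorithm.go): score a phrase by its own occurrence plus a fraction of its member words' total occurrences.

```go
package main

import "fmt"

// phraseScore is a hypothetical scoring sketch: the phrase's own
// occurrence count plus a fraction of its member words' occurrences
// anywhere in the text. It only illustrates why extension - gnome can
// outrank icons - tray at equal phrase occurrence; it is NOT the
// library's actual formula.
func phraseScore(phraseQty int, wordQtys []int, boost float64) float64 {
	score := float64(phraseQty)
	for _, q := range wordQtys {
		score += boost * float64(q)
	}
	return score
}

func main() {
	// Made-up word counts: "gnome" appears 7 times in the text;
	// "extension", "icons" and "tray" appear 3 times each.
	fmt.Println(phraseScore(3, []int{3, 7}, 0.5)) // extension - gnome: 8
	fmt.Println(phraseScore(3, []int{3, 3}, 0.5)) // icons - tray: 6
}
```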

TEXTRANK OR AUTOMATIC SUMMARIZATION

Automatic summarization is the process of reducing a text document with a computer program in order to create a summary that retains the most important points of the original document. Technologies that can make a coherent summary take into account variables such as length, writing style and syntax. Automatic data summarization is part of machine learning and data mining. The main idea of summarization is to find a representative subset of the data, which contains the information of the entire set. Summarization technologies are used in a large number of sectors in industry today. - Wikipedia

EXAMPLES

Find the most important phrases

This is the simplest usage of TextRank.

package main

import (
	"fmt"
	
	"github.com/DavidBelicza/TextRank"
)

func main() {
	rawText := "Your long raw text, it could be a book. Lorem ipsum..."
	// TextRank object
	tr := textrank.NewTextRank()
	// Default Rule for parsing.
	rule := textrank.NewDefaultRule()
	// Default Language for filtering stop words.
	language := textrank.NewDefaultLanguage()
	// Default algorithm for ranking text.
	algorithmDef := textrank.NewDefaultAlgorithm()

	// Add text.
	tr.Populate(rawText, language, rule)
	// Run the ranking.
	tr.Ranking(algorithmDef)

	// Get all phrases by weight.
	rankedPhrases := textrank.FindPhrases(tr)

	// Most important phrase.
	fmt.Println(rankedPhrases[0])
	// Second important phrase.
	fmt.Println(rankedPhrases[1])
}

All possible pre-defined finder queries

After ranking, the graph contains a lot of valuable data. There are functions in the textrank package that contain the logic to retrieve that data from the graph.

package main

import (
	"fmt"
	
	"github.com/DavidBelicza/TextRank"
)

func main() {
	rawText := "Your long raw text, it could be a book. Lorem ipsum..."
	// TextRank object
	tr := textrank.NewTextRank()
	// Default Rule for parsing.
	rule := textrank.NewDefaultRule()
	// Default Language for filtering stop words.
	language := textrank.NewDefaultLanguage()
	// Default algorithm for ranking text.
	algorithmDef := textrank.NewDefaultAlgorithm()

	// Add text.
	tr.Populate(rawText, language, rule)
	// Run the ranking.
	tr.Ranking(algorithmDef)

	// Get all phrases order by weight.
	rankedPhrases := textrank.FindPhrases(tr)
	// Most important phrase.
	fmt.Println(rankedPhrases[0])

	// Get all words order by weight.
	words := textrank.FindSingleWords(tr)
	// Most important word.
	fmt.Println(words[0])

	// Get the most important 10 sentences. Importance by phrase weights.
	sentences := textrank.FindSentencesByRelationWeight(tr, 10)
	// Found sentences
	fmt.Println(sentences)

	// Get the most important 10 sentences. Importance by word occurrence.
	sentences = textrank.FindSentencesByWordQtyWeight(tr, 10)
	// Found sentences
	fmt.Println(sentences)

	// Get the first 10 sentences, start from 5th sentence.
	sentences = textrank.FindSentencesFrom(tr, 5, 10)
	// Found sentences
	fmt.Println(sentences)

	// Get sentences by phrase/word chains order by position in text.
	sentencesPh := textrank.FindSentencesByPhraseChain(tr, []string{"gnome", "shell", "extension"})
	// Found sentence.
	fmt.Println(sentencesPh[0])
}

Access to everything

After ranking, the graph contains a lot of valuable data. The GetRankData function allows access to the graph, and every piece of data can be retrieved from this structure.

package main

import (
	"fmt"
	
	"github.com/DavidBelicza/TextRank"
)

func main() {
	rawText := "Your long raw text, it could be a book. Lorem ipsum..."
	// TextRank object
	tr := textrank.NewTextRank()
	// Default Rule for parsing.
	rule := textrank.NewDefaultRule()
	// Default Language for filtering stop words.
	language := textrank.NewDefaultLanguage()
	// Default algorithm for ranking text.
	algorithmDef := textrank.NewDefaultAlgorithm()

	// Add text.
	tr.Populate(rawText, language, rule)
	// Run the ranking.
	tr.Ranking(algorithmDef)

	// Get the rank graph.
	rankData := tr.GetRankData()

	// Get word ID by token/word.
	wordId := rankData.WordValID["gnome"]

	// Word's weight.
	fmt.Println(rankData.Words[wordId].Weight)
	// Word's quantity/occurrence.
	fmt.Println(rankData.Words[wordId].Qty)
	// All sentences that contain this word.
	fmt.Println(rankData.Words[wordId].SentenceIDs)
	// All other words that are related to this word on the left side.
	fmt.Println(rankData.Words[wordId].ConnectionLeft)
	// All other words that are related to this word on the right side.
	fmt.Println(rankData.Words[wordId].ConnectionRight)
	// The node of this word; it contains the related words and the relation weights.
	fmt.Println(rankData.Relation.Node[wordId])
}

Adding text continuously

It is possible to add more text after other texts have already been added. The Ranking function can merge these multiple texts and recalculate the weights and all related data.

package main

import (
	"fmt"
	
	"github.com/DavidBelicza/TextRank"
)

func main() {
	rawText := "Your long raw text, it could be a book. Lorem ipsum..."
	// TextRank object
	tr := textrank.NewTextRank()
	// Default Rule for parsing.
	rule := textrank.NewDefaultRule()
	// Default Language for filtering stop words.
	language := textrank.NewDefaultLanguage()
	// Default algorithm for ranking text.
	algorithmDef := textrank.NewDefaultAlgorithm()

	// Add text.
	tr.Populate(rawText, language, rule)
	// Run the ranking.
	tr.Ranking(algorithmDef)

	rawText2 := "Another book or article..."
	rawText3 := "Third another book or article..."

	// Add text to the previously added text.
	tr.Populate(rawText2, language, rule)
	// Add text to the previously added text.
	tr.Populate(rawText3, language, rule)
	// Run the ranking to the whole composed text.
	tr.Ranking(algorithmDef)

	// Get all phrases by weight.
	rankedPhrases := textrank.FindPhrases(tr)

	// Most important phrase.
	fmt.Println(rankedPhrases[0])
	// Second important phrase.
	fmt.Println(rankedPhrases[1])
}

Using a different algorithm to rank text

Two algorithms are implemented, and it is possible to write a custom algorithm against the Algorithm interface and use it instead of the defaults.

package main

import (
	"fmt"
	
	"github.com/DavidBelicza/TextRank"
)

func main() {
	rawText := "Your long raw text, it could be a book. Lorem ipsum..."
	// TextRank object
	tr := textrank.NewTextRank()
	// Default Rule for parsing.
	rule := textrank.NewDefaultRule()
	// Default Language for filtering stop words.
	language := textrank.NewDefaultLanguage()
	// Use a slightly more complex algorithm to rank the text.
	algorithmChain := textrank.NewChainAlgorithm()

	// Add text.
	tr.Populate(rawText, language, rule)
	// Run the ranking.
	tr.Ranking(algorithmChain)

	// Get all phrases by weight.
	rankedPhrases := textrank.FindPhrases(tr)

	// Most important phrase.
	fmt.Println(rankedPhrases[0])
	// Second important phrase.
	fmt.Println(rankedPhrases[1])
}

Using multiple graphs

It is possible to run multiple independent text ranking processes, each with its own graph.

package main

import (
	"fmt"
	
	"github.com/DavidBelicza/TextRank"
)

func main() {
	rawText := "Your long raw text, it could be a book. Lorem ipsum..."
	// 1st TextRank object
	tr1 := textrank.NewTextRank()
	// Default Rule for parsing.
	rule := textrank.NewDefaultRule()
	// Default Language for filtering stop words.
	language := textrank.NewDefaultLanguage()
	// Default algorithm for ranking text.
	algorithmDef := textrank.NewDefaultAlgorithm()

	// Add text.
	tr1.Populate(rawText, language, rule)
	// Run the ranking.
	tr1.Ranking(algorithmDef)

	// 2nd TextRank object
	tr2 := textrank.NewTextRank()

	// Use a slightly more complex algorithm to rank the text.
	algorithmChain := textrank.NewChainAlgorithm()

	// Add text to the second graph.
	tr2.Populate(rawText, language, rule)
	// Run the ranking on the second graph.
	tr2.Ranking(algorithmChain)

	// Get all phrases by weight from first graph.
	rankedPhrases := textrank.FindPhrases(tr1)

	// Most important phrase from first graph.
	fmt.Println(rankedPhrases[0])
	// Second important phrase from first graph.
	fmt.Println(rankedPhrases[1])

	// Get all phrases by weight from second graph.
	rankedPhrases2 := textrank.FindPhrases(tr2)

	// Most important phrase from second graph.
	fmt.Println(rankedPhrases2[0])
	// Second important phrase from second graph.
	fmt.Println(rankedPhrases2[1])
}

Using different non-English languages

English is used by default, but it is possible to add any language. To use other languages, a stop word list is required, which you can find here: https://github.com/stopwords-iso

package main

import (
	"fmt"
	
	"github.com/DavidBelicza/TextRank"
)

func main() {
	rawText := "Your long raw text, it could be a book. Lorem ipsum..."
	// TextRank object
	tr := textrank.NewTextRank()
	// Default Rule for parsing.
	rule := textrank.NewDefaultRule()
	// Default Language for filtering stop words.
	language := textrank.NewDefaultLanguage()

	// Add Spanish stop words (just a few examples).
	language.SetWords("es", []string{"uno", "dos", "tres", "yo", "es", "eres"})
	// Activate Spanish.
	language.SetActiveLanguage("es")

	// Default algorithm for ranking text.
	algorithmDef := textrank.NewDefaultAlgorithm()

	// Add text.
	tr.Populate(rawText, language, rule)
	// Run the ranking.
	tr.Ranking(algorithmDef)

	// Get all phrases by weight.
	rankedPhrases := textrank.FindPhrases(tr)

	// Most important phrase.
	fmt.Println(rankedPhrases[0])
	// Second important phrase.
	fmt.Println(rankedPhrases[1])
}

Asynchronous usage by goroutines

It is thread safe. Independent graphs can receive texts at the same time and can also be extended with more text at the same time.

package main

import (
	"fmt"
	"time"
	
	"github.com/DavidBelicza/TextRank"
)

func main() {
	// A flag to signal when the program has to stop.
	stopProgram := false
	// Channel.
	stream := make(chan string)
	// TextRank object.
	tr := textrank.NewTextRank()

	// Open new thread/routine
	go func(tr *textrank.TextRank) {
		// 3 texts.
		rawTexts := []string{
			"Very long text...",
			"Another very long text...",
			"Second another very long text...",
		}

		// Add 3 texts to the stream channel, one by one.
		for _, rawText := range rawTexts {
			stream <- rawText
		}
	}(tr)

	// Open new thread/routine
	go func() {
		// Counter for how many times texts have been added to the ranking.
		i := 1

		for {
			// Get text from the stream channel when a new one arrives.
			rawText := <-stream

			// Default Rule for parsing.
			rule := textrank.NewDefaultRule()
			// Default Language for filtering stop words.
			language := textrank.NewDefaultLanguage()
			// Default algorithm for ranking text.
			algorithm := textrank.NewDefaultAlgorithm()

			// Add text.
			tr.Populate(rawText, language, rule)
			// Run the ranking.
			tr.Ranking(algorithm)

			// Set the stopProgram flag to true when all 3 texts have been added.
			if i == 3 {
				stopProgram = true
			}

			i++
		}
	}()

	// The main thread has to run while the goroutines run. When stopProgram is
	// true, the loop finishes.
	for !stopProgram {
		time.Sleep(time.Second * 1)
	}

	// Get all phrases by weight.
	phrases := textrank.FindPhrases(tr)
	// Most important phrase.
	fmt.Println(phrases[0])
}

A SIMPLE VISUAL REPRESENTATION

The image below is a representation of how the simplest text ranking algorithm works. This algorithm can be replaced by another one by injecting a different Algorithm interface implementation.

OWNER

David Belicza: I've been a PHP developer for 9 years. I have wide experience in Magento, eCommerce, and JavaScript.

COMMENTS
  • Cannot use latest version with go.mod


    I am unable to use the latest version of the library, v2.1.2, with go modules. I get the error: require github.com/DavidBelicza/TextRank: version "v2.1.2" invalid: should be v0 or v1, not v2 To use v2.1.2 in go.mod we need to make the library available at github.com/DavidBelicza/TextRank/v2, according to https://github.com/golang/go/wiki/Modules#semantic-import-versioning

  • More accurate ranking algorithm


    In case 1, the icons - tray and extension - gnome phrases both got 0.5 weight, but it's clearly noticeable that extension - gnome is a more important phrase than icons - tray. The two phrases' occurrence is equal, but the word gnome itself has more hits than the words icon or tray. Following this logic, the extension - gnome weight should be > 0.5 and < 1.

    But this logic should not cause the side effect that happens in case 2, where every phrase that contains the word gnome becomes important.

    Cases 1 and 2 are correct, so they shouldn't be modified; instead, a new algorithm is required that implements the above logic. It should be a new, third Algorithm interface implementation: SupervisedAlgorithm or ComparatorAlgorithm.

    Case 1, FindPhrases method result from ranked text by AlgorithmDefault

    • Phrase: gnome - shell, Occurrence: 5, Weight: 1
    • Phrase: icons - tray, Occurrence: 3, Weight: 0.5
    • Phrase: extension - gnome, Occurrence: 3, Weight: 0.5
    • Phrase: dock - dash, Occurrence: 2, Weight: 0.25

    Case 2, FindPhrases method result from ranked text by AlgorithmMixed

    • Phrase: gnome - shell, Occurrence: 5, Weight: 1
    • Phrase: gnome - caffeine, Occurrence: 2, Weight: 0.8
    • Phrase: gnome - takes, Occurrence: 1, Weight: 0.73333335
    • Phrase: gnome - commonly, Occurrence: 1, Weight: 0.73333335
  • Textrank API modification to use on web


    The provider variable in textrank.go shares its value between requests in case of a running Go webserver.

    The solution is

    • remove provider,
    • create a TextRank struct in textrank.go,
    • create a constructor,
    • store rank object inside TextRank struct,
    • transform GetRank, Append and Ranking functions to methods,
    • replace rankId parameters with rank,
    • update go docs,
    • update readme,
    • test coverage should be 100%, go-report should be A+
  • Problem ranking text containing abbreviation, such as U.S.A


    Hi, first of all thanks for this library, you are awesome 🚀

    I'm having an issue ranking text that contains abbreviations such as U.S.A (short for United States of America) or No. 7 (short for Number 7), as the . is currently used here https://github.com/DavidBelicza/TextRank/blob/master/parse/rule.go#L21 to set the bounds of words.

    Do you currently have a way to get around this problem? Or should I simply create a new rule implementing the Rule interface that checks for known abbreviations?

  • TextToRank should accept interface instead of ParsedSentence as struct


    If I want to use convert.TextToRank and I have a customized rule, it is not possible to use the convert package, because it accepts the parse.ParsedSentence struct as the first argument.

    I had a quick look, and it seems it should be possible to change parse.ParsedSentence to an interface.

    I can create a PR if it does make sense to you @DavidBelicza!

  • TokenizeText func doesn't return all sentences


    I expect text.parsedSentences to contain all sentences. Let me explain the problem with code :)

    Place it in the parse/tokenizer_test.go file:

    func TestTokenizeText(t *testing.T) {
    	rule := NewRule()
    
    	text := TokenizeText("Hi!!!", rule)
    	assert.Equal(t, "Hi!", text.parsedSentences[0].original)
    	assert.Equal(t, "!", text.parsedSentences[1].original)
    	assert.Equal(t, "!", text.parsedSentences[2].original)
    }
    

    I expect this test to pass, but apparently, it does not!

  • Add tokenization of words


    Adding a tokenization library or making this an optional input could really help.

    For instance, what if I wanted to search only top sentences related to "food" for recipes or "locations" for scanning destinations in blog posts?

    I realize I can do this using the chain phrase myself but basic NLP entity extraction would be nice to have so we can group by broader categories.

    Also, the term "chain phrase" implies that the words will be found in order, like in a "chain". It would be trivial and somewhat useful to add a "preserve order" option so we can proximity search within phrases. e.g. "captain james kirk" should have preserve order so we can find "captain, james t. kirk" but we don't need "kirk captain james". Suggestion.

    Otherwise the library works really well, it's clean and straightforward, thanks!
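The "preserve order" suggestion above amounts to an ordered-subsequence check over a sentence's tokens. A minimal sketch of that idea (stand-alone helper, not part of the TextRank API):

```go
package main

import (
	"fmt"
	"strings"
)

// containsInOrder reports whether all words of the chain appear in the
// sentence in the given order, though not necessarily adjacent.
// Illustrative helper with naive whitespace tokenization; punctuation
// attached to a word prevents a match.
func containsInOrder(sentence string, chain []string) bool {
	tokens := strings.Fields(strings.ToLower(sentence))
	i := 0
	for _, tok := range tokens {
		if i < len(chain) && tok == strings.ToLower(chain[i]) {
			i++
		}
	}
	return i == len(chain)
}

func main() {
	chain := []string{"captain", "james", "kirk"}
	fmt.Println(containsInOrder("Captain James T. Kirk reporting", chain)) // true
	fmt.Println(containsInOrder("kirk captain james", chain))              // false
}
```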
