Parser / Scanner Generator

New

Have a look at https://github.com/goccmack/gogll for scannerless GLL parser generation.

Gocc

Introduction

Gocc is a compiler kit for Go written in Go.

Gocc generates lexers and parsers, or stand-alone DFAs or parsers, from a BNF.

Lexers are DFAs, which recognise regular languages. Gocc lexers accept UTF-8 input.

Gocc parsers are PDAs, which recognise LR-1 languages. Optional LR-1 conflict handling automatically resolves shift/reduce and reduce/reduce conflicts.

Generating a lexer and parser starts with creating a BNF file. Action expressions embedded in the BNF allow the user to specify semantic actions for syntax productions.

For complex applications the user typically uses an abstract syntax tree (AST) to represent the derivation of the input. The user provides a set of functions to construct the AST, which are called from the action expressions specified in the BNF.

See the included example below.

User Guide (PDF): Learn You a gocc for Great Good (the gocc3 user guide will be published shortly)

Installation

  • First, download and install Go from http://golang.org/
  • Set up your GOPATH environment variable.
  • Next, on your command line run: go get github.com/goccmack/gocc (go get will git clone gocc into GOPATH/src/github.com/goccmack/gocc and run go install)
  • Alternatively, clone the source from https://github.com/goccmack/gocc, followed by: go install github.com/goccmack/gocc
  • Finally, make sure the bin folder containing the gocc binary is in your PATH environment variable.

Getting Started

Once installed, start by creating your BNF in a package folder.

For example GOPATH/src/foo/bar.bnf:

/* Lexical Part */

id : 'a'-'z' {'a'-'z'} ;

!whitespace : ' ' | '\t' | '\n' | '\r' ;

/* Syntax Part */

<< import "foo/ast" >>

Hello:  "hello" id << ast.NewWorld($1) >> ;

Next, to use gocc, run:

cd $GOPATH/src/foo
gocc bar.bnf

This will generate lexer, parser and token packages inside GOPATH/src/foo. On later runs you may want to suppress lexer generation (see the no_lexer option under the release notes below), since you might want to start making the scanner your own. Gocc is, after all, primarily a parser generator, even if the default scanner is quite useful.
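
After generation your package folder will look roughly like this (a sketch; the exact set of generated packages may vary with the gocc version and flags, but these are the packages referenced elsewhere on this page):

foo/
    bar.bnf
    errors/   (generated)
    lexer/    (generated)
    parser/   (generated)
    token/    (generated)
    util/     (generated)
    ast/      (hand-written; created in the next step)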

Next, create an ast.go file at $GOPATH/src/foo/ast with the following contents:

package ast

import (
    "foo/token"
)

// Attrib is the generic type passed between grammar actions.
type Attrib interface{}

type World struct {
    Name string
}

// NewWorld builds a World from the id token matched by the grammar.
func NewWorld(id Attrib) (*World, error) {
    return &World{string(id.(*token.Token).Lit)}, nil
}

func (this *World) String() string {
    return "hello " + this.Name
}

Finally, we want to parse a string into the AST, so let's write a test at $GOPATH/src/foo/test/parse_test.go with the following contents:

package test

import (
    "foo/ast"
    "foo/lexer"
    "foo/parser"
    "testing"
)

func TestWorld(t *testing.T) {
    input := []byte(`hello gocc`)
    lex := lexer.NewLexer(input)
    p := parser.NewParser()
    st, err := p.Parse(lex)
    if err != nil {
        t.Fatal(err)
    }
    w, ok := st.(*ast.World)
    if !ok {
        t.Fatalf("This is not a world")
    }
    if w.Name != `gocc` {
        t.Fatalf("Wrong world %v", w.Name)
    }
}

Now run the test:

cd $GOPATH/src/foo/test
go test -v

You have now created your first grammar with gocc. It should be relatively easy to change this into the grammar you actually want to create, or into an existing LR-1 grammar you would like to parse.

BNF

The Gocc BNF is specified here

An example bnf with action expressions can be found here

Action Expressions and AST

An action expression is specified as "<", "<", goccExpressionList, ">", ">". The goccExpressionList is equivalent to a goExpressionList and should return an Attrib and an error, where Attrib is:

type Attrib interface {}

Also, parsed elements of the corresponding BNF rule can be referenced in the expression list as "$", digit.

Some action expression examples:

<< $0, nil >>
<< ast.NewFoo($1) >>
<< ast.NewBar($3, $1) >>
<< ast.TRUE, nil >>

Constants, functions, etc. that are returned or called should be programmed by the user in their ast (abstract syntax tree) package. The ast package requires that you define your own Attrib interface as shown above. All parameters passed to functions will be of this type.

For raw elements that you know to be a *token.Token, you can use the shorthand $T0, $T1, etc., making the following expressions produce identical results:

<< $3.(*token.Token), nil >>
<< $T3, nil >>

Some example functions:

func NewFoo(a Attrib) (*Foo, error) { ... }
func NewBar(a, b Attrib) (*Bar, error) { ... }

An example of an ast can be found here

Release Notes for gocc 2.1

Changes

  1. no_lexer option added to suppress generation of lexer. See the user guide.

  2. Unreachable code removed from generated code.

Bugs fixed:

  1. gocc 2.1 does not support string_lit symbols with the same value as production names of the BNF. E.g. (t2.bnf):

     A : "a" | "A" ;

     The string_lit "A" is not allowed.

Previously gocc silently ignored the conflicting string_lit. Now it generates an ugly panic:

$ gocc t2.bnf
panic: string_lit "A" conflicts with production name A

This issue will be properly resolved in a future release.

Users

These projects use gocc:

Comments
  • Accessing source code line number from parsed AST nodes

    Hi,

    Thank you for publishing this tool. I am using it for a university assignment. This is not an issue, but more of question/request.

    I want to access source code information (line number, etc) from the AST nodes somehow. I want to display informative error messages when my later semantic analysis fails.

    Is it possible to do this? Thanks in advance.
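
    Not an official answer, but one workable approach: every *token.Token carries a Pos with Offset, Line and Column fields, so an AST constructor can simply record it. A minimal sketch, extending the tutorial's ast package from above:

    package ast

    import "foo/token"

    // World additionally records where in the source its identifier
    // was matched; token.Pos carries Offset, Line and Column.
    type World struct {
        Name string
        Pos  token.Pos
    }

    func NewWorld(id Attrib) (*World, error) {
        tok := id.(*token.Token)
        return &World{Name: string(tok.Lit), Pos: tok.Pos}, nil
    }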

  • Fix build issues from Makefile, cleanup

    (Siloing unrelated changes from https://github.com/goccmack/gocc/pull/110)

    • Moved the goimports target into regenerate for consistency with CI requirements.
    • make ci adds a text representation of the error condition to the exit code.
    • make ci additionally runs git diff after regenerate for a more readable early out.
    • Removed seeming merge-artifact, main.go.orig invocation after
    • Made the 'goimports' target part of 'regenerate' so that 'ci' does not fail builds because someone forgot to manually run goimports after regenerate.

    Re: added additional git-diff early out from make ci

    Intended to make the workflow easier to understand for PR submission, since the last line of output will clearly explain why the diff is being rejected, and does so before make test, because make test contains a lot of false-positive error messages (tests that verify errors produce errors).

  • Suppress generation of `LR1_conflicts.txt` and `LR1_sets.txt`

    Is there currently a direct way to suppress generation of these files? When the number of LR-1 conflicts increases, the amount of time spent in I/O seems to be quite significant for a large BNF.

  • Proposal: add travis job

    I would like to replace drone with travis, since drone is not very well supported anymore. What do you think? @goccmack @mewmew

    I would also like this travis job to then pull in some other projects, like gographviz, for extra testing. Maybe this is a little overzealous?

    Also, I would like to know how to regenerate all the current example grammars. I can't seem to find a Makefile anywhere; am I missing something? @goccmack I think this should be part of testing gocc. Otherwise, what are we testing by running go test?

  • generate compressed tables

    The generated tables can become quite large (see https://raw.githubusercontent.com/katydid/katydid/master/relapse/parser/actiontable.go), which is why they are generated and not typed by a human :)

    The current tables are great output for debugging, but in production we don't really care how pretty they are printed. A bigger concern is how long it takes to generate the code and the size of the generated code.

    I cut my code generation time in gogoprotobuf by a factor of 4 just by generating a compressed FileDescriptorProtoSet object instead. The generated function in that case still returns the uncompressed version, so the API has not changed.

    I suspect we can cut down the gocc code generation time by quite a bit by simply printing a compressed table. Something like the following:

    // generated by gocc; DO NOT EDIT.
    
    package parser
    
    type (
        actionTable [numStates]actionRow
        actionRow   struct {
            canRecover bool
            actions    [numSymbols]action
        }
    )
    
    var actionTab actionTable
    
    func init() {
      compressedActTab := []byte{
        0x1, 0x2, 0x3, 0x4, 0x5, 0x6, 0x7, 0x8,
        0x1, 0x2, 0x3, 0x4, 0x5, 0x6, 0x7, 0x8,
        ...
      }
      actionTab = uncompress(compressedActTab)
    }
    

    Maybe I am wrong and this is not the bottleneck, but then at least we still get a smaller code size.

    I still think the current table output should be kept as the default. We can add a flag (-min) to gocc which will then generate the compressed tables.

    What do you think @goccmack @mewmew
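
    To make the uncompress idea concrete, here is an editor's round-trip sketch under assumed types and sizes (not gocc's actual table layout): the generator would emit the output of compress as a []byte literal, and the generated init() would run uncompress.

    package main

    import (
        "bytes"
        "compress/flate"
        "encoding/binary"
        "fmt"
        "io"
    )

    const (
        numStates  = 2 // assumed sizes for the sketch
        numSymbols = 3
    )

    // compress flattens the table to little-endian int32s and deflates
    // them; this is what the generator would run at generation time.
    func compress(tab [numStates][numSymbols]int32) []byte {
        var buf bytes.Buffer
        w, _ := flate.NewWriter(&buf, flate.BestCompression)
        for _, row := range tab {
            for _, a := range row {
                binary.Write(w, binary.LittleEndian, a)
            }
        }
        w.Close()
        return buf.Bytes()
    }

    // uncompress is what the generated init() would run at startup.
    func uncompress(data []byte) (tab [numStates][numSymbols]int32) {
        raw, err := io.ReadAll(flate.NewReader(bytes.NewReader(data)))
        if err != nil {
            panic(err)
        }
        for i := range tab {
            for j := range tab[i] {
                off := 4 * (i*numSymbols + j)
                tab[i][j] = int32(binary.LittleEndian.Uint32(raw[off : off+4]))
            }
        }
        return tab
    }

    func main() {
        tab := [numStates][numSymbols]int32{{1, 2, 3}, {4, 5, 6}}
        fmt.Println(uncompress(compress(tab)) == tab) // prints true
    }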

  • Getting stateful context in an action or for a token

    Running a bunch of Parse()rs concurrently, is there a current mechanism or best practice for providing parser or lexer context to ast functions? Ultimately I want to capture the file/line/column source of certain tokens, so that during AST validation I can advise the user on sources of conflict:

    guide.chapter7.txt:42:11: name 'slartibartfast' is already used. see: guide.chapter6.txt:6:9: previous use
    

    I don't see anything obvious: the regex for $ is [0-9], and ReduceFunc only takes one argument, X []Attrib.
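
    Editor's note: there is no built-in hook for this today; one workaround is to keep per-parse context outside the generated code, recording token positions in the AST (as in the sketch under the line-number question above) and attaching file-level context around Parse. A minimal sketch with hypothetical names, reusing the tutorial's packages:

    package foo

    import (
        "fmt"

        "foo/lexer"
        "foo/parser"
    )

    // ParseFile wraps Parse with the filename context the generated
    // parser does not have, yielding FLC-style messages on failure.
    func ParseFile(filename string, src []byte) (interface{}, error) {
        st, err := parser.NewParser().Parse(lexer.NewLexer(src))
        if err != nil {
            return nil, fmt.Errorf("%s:%v", filename, err)
        }
        return st, nil
    }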

  • Move TODOs out of generated code

    Fixes #100.

    I'm not entirely sure where to move the TODOs and how to make sure it's clear what they reference, so I just tried something.

    (Off-topic, don't know where to ask: are you still adding projects to the Users section of the readme? Would be happy to add skius/stringlang :))

  • Make errors more human friendly

    Per https://github.com/goccmack/gocc/issues/109

    The aim of this change is to provide a more human-readable form of error-message in the default (.Error()) version.

    Most editors/IDEs recognize the FLC (file, line, column) syntax for errors, which typically makes them easy to interact with in tooling.

    In particular, a common convention is:

    FileLineComment
        : Filename ":" LineNo ":" ColumnNo ":" " " "error" ":" " " ErrorMessage
        ;
    

    Vim, Emacs and Visual Studio/Code will recognize this as "your build didn't work, and I can tell you why".

    The parser knows nothing about filenames, so we can't attempt to print one; the user just needs to prefix the error output with their filename, e.g.

    fmt.Printf("%s:%s\n", filename, err.Error())
    
  • Looking for inspiration to solve combinatorial explosion.

    I've recently been playing with writing a grammar for the metadata representation of the DWARF debug information of LLVM IR.

    The debug data is specified as an ordered list of key-value pairs, where most keys are optional and some are required. Take DILocation for instance, which specifies the debug data of a source code location (i.e. line, column).

    // ref: ParseDILocation
    //
    //             TAG         TYPE
    // ---------------------------------
    // optional:   line        uint32
    // optional:   column      uint16
    // REQUIRED:   scope       MDField
    // optional:   inlinedAt   MDField
    DILocation
    	: "!DILocation" "(" "scope:" MetadataID ")"
    	| "!DILocation" "(" "scope:" MetadataID "," "inlinedAt" MetadataID ")"
    	| "!DILocation" "(" "column:" int_lit "," "scope:" MetadataID ")"
    	| "!DILocation" "(" "column:" int_lit "," "scope:" MetadataID "," "inlinedAt" MetadataID ")"
    	| "!DILocation" "(" "line:" int_lit "," "scope:" MetadataID ")"
    	| "!DILocation" "(" "line:" int_lit "," "scope:" MetadataID "," "inlinedAt" MetadataID ")"
    	| "!DILocation" "(" "line:" int_lit "," "column:" int_lit "," "scope:" MetadataID ")"
    	| "!DILocation" "(" "line:" int_lit "," "column:" int_lit "," "scope:" MetadataID "," "inlinedAt" MetadataID ")"
    ;
    
    MetadataID : "!" int_lit ;
    

    As you can see, the above works, but since we have 3 optional fields we need 2^3 = 8 production rules for DILocation. This obviously doesn't scale for other debug classes, some of which contain 9 or more optional fields and would thus need >= 512 production rules.

    Any thoughts or ideas on how one may solve this in a cleaner fashion?

    Constraints: ordered list, no duplicates, and required fields must be present.

    Cheers /u
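
    Editor's sketch of one common way out (an assumption, not a gocc feature): relax the grammar to a single generic field list, e.g. DILocation : "!DILocation" "(" Fields ")" with Fields : Field | Fields "," Field, then enforce ordering, uniqueness and required fields in the AST constructor, where the check is linear instead of exponential. The Field type and helper below are hypothetical:

    package ast

    import "fmt"

    // Field is one "key: value" pair from the generic field list.
    type Field struct {
        Key   string
        Value interface{}
    }

    // fieldRank fixes the required relative order of DILocation fields.
    var fieldRank = map[string]int{"line": 0, "column": 1, "scope": 2, "inlinedAt": 3}

    // NewDILocation enforces what the relaxed grammar no longer does:
    // known keys, no duplicates, correct order, and a required scope.
    func NewDILocation(fields []Field) (map[string]interface{}, error) {
        loc := make(map[string]interface{})
        prev := -1
        for _, f := range fields {
            rank, ok := fieldRank[f.Key]
            if !ok {
                return nil, fmt.Errorf("unknown field %q", f.Key)
            }
            if rank <= prev {
                return nil, fmt.Errorf("field %q duplicated or out of order", f.Key)
            }
            prev = rank
            loc[f.Key] = f.Value
        }
        if _, ok := loc["scope"]; !ok {
            return nil, fmt.Errorf("missing required field scope")
        }
        return loc, nil
    }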

  • Have issue with space

    I defined a bnf with lexer:

    !whitespace : ' ' | '\t' | '\n' | '\r' ;

    and grammar:

    Func1 : function_name "(" Args ")" << direction.NewFunction($0, $2) >>

    I expected the grammar to ignore spaces and newlines. Then I tried the following test cases:

    1. concat("hello"," world") -> working
    2. concat(\n"hello"," world"\n) -> working; changing \n to \t or \r also works.
    3. concat( "hello"," world") -> not working. Adding a space after ( or before ) both fail.

    Any suggestions? Only the space ' ' is not working; \n, \t and \r all work fine.

    I changed the lexer and grammar to:

    !whitespace : ' ' | '\t' | '\n' | '\r' ;
    _empty : ' ' ;
    space : {_empty} ;

    | function_name "(" space Args space ")" << direction.NewFunction($0, $3) >>

    Same result; not working either.

    Please help me figure out what's going wrong.

  • API stability and exported methods

    I'd like to start a discussion regarding the exported API of Gocc. Which packages should be made available to end users (if any)? Within these packages, which exported identifiers should be made available, and which are only intended to be used by gocc internally.

    The main reason for raising this question is that the answer dictates how we may develop gocc in the future. If the only interface we promise to keep stable is the command line interface, then we may change the internal implementation of Gocc as needs arise and restructure its internal components.

    I would personally wish to do a rather extensive cleanup of the internal packages used by Gocc, and was wondering if there are any known use cases where one may want to import a package from the gocc repository directly, rather than relying on the generated lexer and parser packages?

    Personally, I've never found that to be the case, that is, I've never encountered a use case where an end-user of Gocc may wish to import github.com/goccmack/gocc/<pkgname> directly.

    Depending on the outcome of this discussion, I would suggest that all packages internal to Gocc are clearly marked as such, preferably using the internal directory package structure introduced by the Go toolchain.

    This is the approach taken by two of the projects I'm involved with which rely on Gocc, namely:

    • https://github.com/graphism/dot
    • https://github.com/llir/llvm

    Both of these projects hide internal details of the implementation using internal directories. More specifically, the packages which are automatically generated by Gocc are considered internal; e.g. https://github.com/graphism/dot/tree/master/internal

    • graphism/dot/internal/astx
    • graphism/dot/internal/errors
    • graphism/dot/internal/lexer
    • graphism/dot/internal/parser
    • graphism/dot/internal/token
    • graphism/dot/internal/util

    The dot package in this example then takes the approach of providing dedicated packages intended for end users, such as package ast and package dot covering the AST of DOT files and functions for parsing DOT files, respectively.

    The benefit of this approach is that it enables Gocc to continue evolving without breaking any API promises to end-users.

    My intention is to initiate this discussion and try to figure out if there are any use cases for the Gocc packages that I have not thought of.

    I personally see Gocc as a tool, and its API promise to the users is only relevant in regards to the command line interface.

    Should you all agree, I'd be happy to prepare a PR which moves internal components into an internal directory, before starting to prepare some more extensive cleanup PRs and refactoring efforts.

    Hope to hear from you : )

    CC: @goccmack, @awalterschulze

  • Replace hardcoded "$" EOF characters by something less general

    WIP

    This fixes #127, however it does so in an ugly manner. I just changed every "$" to the SUBSTITUTE character "␚", in hopes that it is used by no one. A better fix would be completely getting rid of a concrete representation of EOF and instead replacing it with an abstract struct, but this would involve refactoring the lexer to use a new interface instead of strings, and I don't know enough about this project to understand all that needs to be changed.

    Something else that popped up: is it possible that the gen.sh script is outdated? I tried generating a new frontend for gocc, but got a bunch of warnings, and the resulting frontend doesn't work:

    warning: undefined symbol "g_sdt_lit" used in productions ["FileHeader" "SyntaxBody" "SyntaxBody"]
    warning: undefined symbol "regDefId" used in productions ["LexProduction" "LexTerm"]
    warning: undefined symbol "char_lit" used in productions ["LexTerm" "LexTerm" "LexTerm"]
    warning: undefined symbol "prodId" used in productions ["SyntaxProduction" "Symbol"]
    warning: undefined symbol "string_lit" used in productions ["Symbol"]
    warning: undefined symbol "tokId" used in productions ["LexProduction" "Symbol"]
    warning: undefined symbol "ignoredTokId" used in productions ["LexProduction"]
    

    This happens on the current master branch too, so I don't think it's due to my additions.

    As such, I had to manually change gocc's current frontend to use SUB instead of "$".

  • EOF $ representation introducing LR-1 conflicts

    There are LR-1 grammars for which gocc introduces unnecessary LR-1 conflicts, because it treats the symbol '$' and actual EOFs as the same token.

    No conflicts:

    S
      : empty << nil, nil >>
      | "%" "x" << nil, nil >>
      ;
    

    Shift/reduce conflict; it's unable to decide between reducing due to EOF or shifting due to "$":

    S
      : empty << nil, nil >>
      | "$" "x" << nil, nil >>
      ;
    

    I wasn't quite able to figure out where in gocc's source this is defined, but I suggest replacing this with either an actual EOF byte (if that's possible) or a custom abstract token that's defined to be different from all other tokens.
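
    Editor's sketch of the abstract-token suggestion, with assumed names rather than gocc's actual internals: reserve a distinct token type for EOF instead of reusing the literal symbol "$", so a grammar's own "$" terminal can never collide with end of input.

    package token

    type Type int

    const (
        INVALID Type = iota
        EOF          // reserved: no grammar terminal ever maps to this type
    )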

  • Cannot finish simple task

    I am trying to build a Markdown compiler: not actually a compiler to machine code, but to HTML, XML, JSON and other formats. It would be better to call it a Markdown processor. I want it to be a CLI tool that works on any platform. When I read about gocc I thought it would be the ideal tool for this. I want to make a processor with a lot of new syntax goodies.

    Anyway, I am trying to accomplish a simple task. Here is my BNF. All I want is to find titles.

    !whitespace : ' ' | '\t' | '\n' | '\r' ;
    _nl : '\r\n' | '\n' ;
    
    title6 : {_nl} '#' '#' '#' '#' '#' '#' {.} {_nl} ;
    title5 : {_nl} '#' '#' '#' '#' '#' {.} {_nl} ;
    title4 : {_nl} '#' '#' '#' '#' {.} {_nl} ;
    title3 : {_nl} '#' '#' '#' {.} {_nl} ;
    title2 : {_nl} '#' '#' {.} {_nl} ;
    title1 : {_nl} '#' {.} {_nl} ;
    
    Content :  title6                   << nil >>
        | title5                        << nil >>
        | title4                        << nil >>
        | title3                        << nil >>
        | title2                        << nil >>
        | title1                        << nil >>
    ;
    

    And here is my Go file:

    package main
    
    import (
    	"fmt"
    	"io/ioutil"
    	"github.com/serhioromano/go-markdown/lexer"
    	"github.com/serhioromano/go-markdown/token"
    )
    
    func main() {
    	dat, err := ioutil.ReadFile("./example.md")
    	check(err)
    	
    	l := lexer.NewLexer([]byte(dat))
    	for tok := l.Scan(); tok.Type == token.TokMap.Type("title2"); tok = l.Scan() {
    		fmt.Printf("1 %v", tok)
    	}
    }
    
    func check(e error) {
    	if e != nil {
    		panic(e)
    	}
    }
    

    And my markdown:

    # This is title
    
    This is paragraph
    
    ## This is enother title
    
    - List 1
    - List 2
    - List 3
    

    But this only prints data for the first title, and only if the file begins with it. If I place a few lines before it, it fails to find any title. What did I do wrong here?

  • Returning an error in an sdt action produces the "wrong" ErrorToken

    note: not using any of my branches for this

    Scenario: an sdt action that yields an error produces an error token corresponding to the end of the match, which is frequently undesirable:

    newline : '\n';
    ident : 'a'-'z' { 'a'-'z' };
    
    <<import "fmt">>
    
    Rule: P1 newline P2 newline P3;
    
    P1 : ident << func()interface{} {fmt.Printf("P1 %#+v\n", $0); return $0}(), nil >>;
    P2 : ident << $0, func() error { fmt.Printf("P2 %#+v\n", $0); return nil }() >>;
    P3 : ident << nil, fmt.Errorf("should be line 3 col 1") >>;
    

    And then a simple parser wrapper:

    func main() {
            l := lexer.NewLexer([]byte("a\nb\nc"))
            p := parser.NewParser()
            _, err := p.Parse(l)
            fmt.Printf("%+v\n", err)
    }
    

    the output you get is:

    > go run .
    P1 &token.Token{Type:3, Lit:[]uint8{0x61}, Pos:token.Pos{Offset:0, Line:1, Column:1}}
    P2 &token.Token{Type:3, Lit:[]uint8{0x62}, Pos:token.Pos{Offset:2, Line:2, Column:1}}
    Error in S7: $(1,), Pos(offset=5, line=3, column=2): should be line 3 col 1
    

    Not sure whether by design or bug; I can sort of see how choosing a token when there are 0 or many might also be "wrong".

    Perhaps a solution would be to allow the user to return a token with the error and have that be the ErrorToken?

    P3: "*" identifier "*" << $1, errors.New("can't use identifier between asterisks, that's just rude") >>;
    

    and then the parser would use the first return value if it passes a *token.Token type switch?

      switch t := attr.(type) {
      case *token.Token:
        e.ErrorToken = t
      default:
        /* untouched */
      }
    
  • Customizing errors

    Currently, gocc messages are a bit obtuse for anyone not directly using gocc, and in particular they aren't in the FLC format that makes tools easy to integrate with That Person's Environment(TM). Probably more noticeable after alternating between go compiler errors and gocc lines of terror? :)

    Is there a preferred way to customize one's own error outputs? Is there a plan to improve, or enable improvement of, errors, e.g. an option to choose between errors for developers and errors for end users?

    I guarantee you that if I show a game designer/artist

    Error in S42: identifier(8,true), Pos(offset=514, line=30, column=50), expected one of: { typename true false float_literal integer_literal string_literal loc_literal
    

    I'll spend at least an hour a day looking at their typos and explaining that the mistake wasn't on line 542, 8, or 514... They're not daft, but my tool's part in their day is supposed to be as trivial to them as the desk they're leaning on, so if I don't present them with clear and direct feedback it's both a massive context switch and a complete domain switch.

    attributes.def:30:50: error: expected one of: { typename true false float_literal integer_literal string_literal loc_literal, got: 'ture'. [S42: identifier(8, true), offset 512]
    

    I also like to have my "redefinition" errors contain an FLC link to the conflicting definition; I get fewer "where's the other one?" phone calls that way :)
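
    Editor's note: pending such an option, the error can be post-processed today. A rough sketch, assuming the generated errors package exposes an Error struct with an ErrorToken field (as referenced in the ErrorToken issue above); treat the names as assumptions:

    package main

    import (
        "fmt"

        "foo/errors" // the gocc-generated errors package; path illustrative
    )

    // humanize rewrites a gocc parse error into FLC form.
    func humanize(filename string, err error) string {
        if e, ok := err.(*errors.Error); ok {
            pos := e.ErrorToken.Pos
            return fmt.Sprintf("%s:%d:%d: error: unexpected %q",
                filename, pos.Line, pos.Column, e.ErrorToken.Lit)
        }
        return err.Error()
    }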
