CSS
This package is a CSS3 lexer and parser written in Go. Both follow the specification at CSS Syntax Module Level 3. The lexer takes an io.Reader and converts it into tokens until the EOF. The parser returns a parse tree of the full io.Reader input stream, but the low-level Next function can be used for stream parsing to return grammar units until the EOF.
Installation
Run the following command:
go get -u github.com/tdewolff/parse/v2/css
or add the following import and run your project with go get:
import "github.com/tdewolff/parse/v2/css"
Lexer
Usage
The following initializes a new Lexer with io.Reader r:
l := css.NewLexer(parse.NewInput(r))
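Note that parse.NewInput comes from the parent package github.com/tdewolff/parse/v2, which must be imported alongside the css package.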
To tokenize until EOF or an error occurs, use:
for {
	tt, text := l.Next()
	switch tt {
	case css.ErrorToken:
		// error or EOF set in l.Err()
		return
	// ...
	}
}
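When the input is exhausted, l.Err() returns io.EOF; any other value indicates a genuine lexing or read error.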
All tokens (see CSS Syntax Module Level 3):
ErrorToken // non-official token, returned when errors occur
IdentToken
FunctionToken // rgb( rgba( ...
AtKeywordToken // @abc
HashToken // #abc
StringToken
BadStringToken
URLToken // url(
BadURLToken
DelimToken // any unmatched character
NumberToken // 5
PercentageToken // 5%
DimensionToken // 5em
UnicodeRangeToken
IncludeMatchToken // ~=
DashMatchToken // |=
PrefixMatchToken // ^=
SuffixMatchToken // $=
SubstringMatchToken // *=
ColumnToken // ||
WhitespaceToken
CDOToken // <!--
CDCToken // -->
ColonToken
SemicolonToken
CommaToken
BracketToken // ( ) [ ] { }, all bracket tokens use this; Data() can distinguish between the brackets (see the sketch after this list)
CommentToken // non-official token
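Because every bracket shares BracketToken, the returned token text tells them apart. Here is a minimal sketch (the input string is made up for illustration):

package main

import (
	"fmt"
	"strings"

	"github.com/tdewolff/parse/v2"
	"github.com/tdewolff/parse/v2/css"
)

func main() {
	// the sample input contains all three bracket pairs
	l := css.NewLexer(parse.NewInput(strings.NewReader("a[href] { width: calc(1px) }")))
	for {
		tt, text := l.Next()
		if tt == css.ErrorToken {
			return // io.EOF or a real error, see l.Err()
		}
		if tt == css.BracketToken {
			fmt.Printf("bracket %q\n", text)
		}
	}
}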
Examples
package main

import (
	"fmt"
	"io"
	"os"

	"github.com/tdewolff/parse/v2"
	"github.com/tdewolff/parse/v2/css"
)

// Tokenize CSS3 from stdin.
func main() {
	l := css.NewLexer(parse.NewInput(os.Stdin))
	for {
		tt, text := l.Next()
		switch tt {
		case css.ErrorToken:
			if l.Err() != io.EOF {
				fmt.Println("Error on line", l.Line(), ":", l.Err())
			}
			return
		case css.IdentToken:
			fmt.Println("Identifier", string(text))
		case css.NumberToken:
			fmt.Println("Number", string(text))
		// ...
		}
	}
}
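Since the program reads from stdin, it can be run with, for example, go run main.go < style.css.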
Parser
Usage
The following creates a new Parser:
// true because this is the content of an inline style attribute
p := css.NewParser(parse.NewInput(bytes.NewBufferString("color: red;")), true)
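The second boolean argument selects inline mode: pass true when parsing the contents of a style attribute (declarations only, no selectors), and false when parsing a full stylesheet.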
To iterate over the stylesheet, use:
for {
	gt, _, data := p.Next()
	if gt == css.ErrorGrammar {
		break
	}
	// ...
}
All grammar units returned by Next (see the sketch after this list):
ErrorGrammar
AtRuleGrammar
BeginAtRuleGrammar
EndAtRuleGrammar
BeginRulesetGrammar
EndRulesetGrammar
DeclarationGrammar
TokenGrammar
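To see how these map onto real input, the following sketch prints each grammar unit and its data for a tiny stylesheet (the input is made up for illustration):

package main

import (
	"bytes"
	"fmt"

	"github.com/tdewolff/parse/v2"
	"github.com/tdewolff/parse/v2/css"
)

func main() {
	// false: parse a full stylesheet rather than an inline style attribute
	p := css.NewParser(parse.NewInput(bytes.NewBufferString("a { color: red; }")), false)
	for {
		gt, _, data := p.Next()
		if gt == css.ErrorGrammar {
			return // io.EOF or a real error, see p.Err()
		}
		fmt.Printf("%v %q\n", gt, data)
	}
}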
Examples
package main

import (
	"bytes"
	"fmt"

	"github.com/tdewolff/parse/v2"
	"github.com/tdewolff/parse/v2/css"
)

func main() {
	// true because this is the content of an inline style attribute
	p := css.NewParser(parse.NewInput(bytes.NewBufferString("color: red;")), true)
	out := ""
	for {
		gt, _, data := p.Next()
		if gt == css.ErrorGrammar {
			break
		} else if gt == css.AtRuleGrammar || gt == css.BeginAtRuleGrammar || gt == css.BeginRulesetGrammar || gt == css.DeclarationGrammar {
			out += string(data)
			if gt == css.DeclarationGrammar {
				out += ":"
			}
			for _, val := range p.Values() {
				out += string(val.Data)
			}
			if gt == css.BeginAtRuleGrammar || gt == css.BeginRulesetGrammar {
				out += "{"
			} else if gt == css.AtRuleGrammar || gt == css.DeclarationGrammar {
				out += ";"
			}
		} else {
			out += string(data)
		}
	}
	fmt.Println(out)
}
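For the input above this reconstructs the declaration and prints it in compact form (roughly color:red;, the exact whitespace depending on the value tokens the parser returns).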
License
Released under the MIT license.