The scripts (rules and transformations) are written in Tcl and are executed by an embedded interpreter that has access to the relevant state of the program. A set of commands provides easy, read-only access to the information gathered by parsing the given source files.
The following Tcl commands are provided:
getSourceFileNames
- returns the list of file names that were provided to Vera++ as program parameters.

getLineCount fileName
- returns the number of lines in the given source file.

getAllLines fileName
- returns the list of lines, in their natural order, that form the given source file.

getLine fileName lineNumber
- returns the selected line; line numbers are counted from 1.

getTokens fileName fromLine fromColumn toLine toColumn filter
- returns the list of tokens, in their natural order, from the given source file that match the given selection criteria:
  fromLine   - the lowest line number (counted from 1), inclusive
  fromColumn - the lowest column number (counted from 0), inclusive
  toLine     - the highest line number, inclusive; -1 means that the selected range spans to the end of the file
  toColumn   - the highest column number, exclusive; -1 means that the selected range spans to the end of the line defined by toLine
  filter     - the list of selected token types; the recognized token types are listed below; if this list is empty, then all token types are allowed
The getTokens command returns a list of lists; each nested list describes a single token and has the following elements: the token value (its text), the line number, the column number, and the token type.

getParameter name defaultValue
- returns the value of the given parameter, or the provided default value if no such parameter is defined.

report fileName lineNumber message
- registers a report for the given file and line; all reports are printed at the end of the program execution, sorted by file and line number. Use this command to generate output that is compatible with the warning/error output format of popular compilers.

Examples:
To process all lines from all source files, use the following code pattern:
    foreach fileName [getSourceFileNames] {
        foreach line [getAllLines $fileName] {
            # ...
        }
    }
To process all tokens from all source files, use:
    foreach fileName [getSourceFileNames] {
        foreach token [getTokens $fileName 1 0 -1 -1 {}] {
            set tokenValue [lindex $token 0]
            set lineNumber [lindex $token 1]
            set columnNumber [lindex $token 2]
            set tokenType [lindex $token 3]
            # ...
        }
    }
To process only curly braces from the given source file, use:
    foreach token [getTokens $fileName 1 0 -1 -1 {leftbrace rightbrace}] {
        # ...
    }
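The filter can also drive a complete rule on its own. The following sketch is a hypothetical rule (not part of the predefined set) that assumes `goto` is one of the recognized token types listed below, and reports every occurrence of that keyword:

```tcl
# Hypothetical rule: flag every use of the goto keyword.
foreach fileName [getSourceFileNames] {
    # Only tokens of type "goto" pass the filter.
    foreach token [getTokens $fileName 1 0 -1 -1 {goto}] {
        set lineNumber [lindex $token 1]
        report $fileName $lineNumber "goto considered harmful"
    }
}
```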
The complete rule script for verifying that the lines are no longer than some limit (the limit can be provided as a parameter, but the default value is defined by the script itself):
    # Line cannot be too long
    set maxLength [getParameter "max-line-length" 100]
    foreach f [getSourceFileNames] {
        set lineNumber 1
        foreach line [getAllLines $f] {
            if {[string length $line] > $maxLength} {
                report $f $lineNumber "line is longer than ${maxLength} characters"
            }
            incr lineNumber
        }
    }
The above script is actually the implementation of rule L004.
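Assuming the same command-line conventions as the DUMP invocation shown below, and assuming a -param option for passing name=value pairs to scripts (treat the option name as an assumption, not a confirmed part of the interface), the limit could be overridden at invocation time:

```
$ vera++ -rule L004 -param max-line-length=120 myfile.cpp
```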
Notes about line splicing
As required by the ISO C++ standard, line splicing (with a backslash at the end of a line) is performed before tokenizing. This means that the list of tokens might not strictly fit the list of physical lines.
Due to the internal mechanisms of the parser, line splicing freezes the line counter and forces the column counter to keep counting up to the last line of the spliced block. As a result, some non-empty physical lines may appear to contain no tokens at all, and some tokens may carry column numbers that do not match the lengths of the physical source lines.
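Since getAllLines works on physical lines while getTokens reflects the spliced view, a rule can still detect splicing directly at the line level. A minimal sketch (a hypothetical rule, not part of the predefined set):

```tcl
# Hypothetical rule: point out physical lines that end with a backslash,
# i.e. lines that take part in line splicing.
foreach f [getSourceFileNames] {
    set lineNumber 1
    foreach line [getAllLines $f] {
        # "string index $line end" yields the last character of the line.
        if {[string index $line end] eq "\\"} {
            report $f $lineNumber "this line is spliced with the next one"
        }
        incr lineNumber
    }
}
```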
Recognized token types
The following token types are recognized by the parser and can be used for filter selection in the getTokens
command (some of these token types are related to compiler extensions):
Note
There is a predefined rule named DUMP that prints all tokens, together with their types and positions, to the screen. This rule can be helpful as a guideline for creating custom filtering criteria:
$ vera++ -rule DUMP myfile.cpp