You operate the automation using a command line interface (CLI) program named ddstat.exe. To use it, open a Windows terminal in the root of a submission folder.
You may use the Windows terminal that is integrated into VSCode. To do that, from the VSCode menu at the top, select:
Terminal > New Terminal
and a new Windows command terminal will open at the bottom of the VSCode window.
The general form of a command line is:
ddstat <command> <folder>
There is no hyphen in ‘ddstat’.
The first argument is the command, one of:
init
audit
run
report
The actions for these commands are described in sections below.
If the <folder> argument is not provided, then the automation will expect to find a submission folder configuration in the current working directory.
If the <folder> argument is provided, then the automation will execute from that folder.
If the named folder (or the current working directory, if no folder is named) is not a submission folder, the automation will look through all of its subfolders and run in each submission folder it finds. You can batch operations in this manner.
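The batch discovery described above can be sketched as follows. This is an illustrative model, not the tool's actual implementation, and it assumes (hypothetically) that a submission folder is recognized by the presence of a DeviceOptions.xml file; the real tool may use different criteria:

```python
from pathlib import Path

def find_submission_folders(root: Path) -> list[Path]:
    """Return `root` itself if it is a submission folder, otherwise
    every subfolder (recursively) that looks like one.

    Assumption: a folder containing DeviceOptions.xml is treated as
    a submission folder; ddstat's real check may differ.
    """
    if (root / "DeviceOptions.xml").is_file():
        return [root]
    return sorted(p.parent for p in root.rglob("DeviceOptions.xml"))
```

Each folder returned would then receive its own automation run, which is the batching behavior described above.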
This prints the version number of the test tool to the console output.
This prints brief instructions for using the CLI.
To start a new submission project, create a new folder and open a Windows terminal in it. From the command prompt type:
ddstat init
This will create a submission folder, with the required folders and any initial files that can be supplied by FieldComm Group.
Do not run the init command in a submission folder that has already been populated.
This command examines the contents of the <folder> argument if supplied or the current working directory if not. It prints errors and warnings encountered to the console output.
Additionally, it creates a file named ‘audit.log’ in the TestReports folder that contains the result of the audit at a log level of ‘info’. It documents which items in the submission folder are missing or contain invalid data, along with info lines that provide contextual information.
The audit.log file helps the test engineer correctly assemble the required configuration, HART Test System log files, and EDD source code needed to operate the test.
The console output contains errors and warnings only, so context is sometimes missing from that information. If you have questions about an error or warning, open the audit.log file; its [info] output provides the context needed to understand the diagnostics.
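The relationship between the console output and audit.log can be pictured as a simple level filter: the same diagnostics are written at the ‘info’ level to the file, while only warnings and errors pass through to the console. A minimal sketch of that idea, with an assumed bracketed-prefix message format that ddstat does not necessarily use:

```python
def filter_diagnostics(lines, min_level="WARN"):
    """Keep only messages at or above `min_level`.

    The [INFO]/[WARN]/[ERROR] prefixes are an assumed format for
    illustration, not necessarily what ddstat emits.
    """
    severity = {"[INFO]": 0, "[WARN]": 1, "[ERROR]": 2}
    threshold = severity["[" + min_level + "]"]
    return [ln for ln in lines
            if severity.get(ln.split(" ", 1)[0], 0) >= threshold]
```

Filtering at "WARN" models the console view; filtering at "INFO" models the full audit.log view.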
This command runs the test automation in the <folder> argument if it is present, or the current working directory if not. It begins by running the audit command and will terminate the execution if any errors are reported during the audit.
This command will make at least two tokenizer runs, one for tokenizer version 8 and one for version 10. For campaigns of type DDRevision or DeviceRevision, an additional run will be made using the historical DD source code that you provided.
The tokenizer command line will source the header and other Standard Library files appropriately, based upon the IDERevision number and the CmnPrac revision number settings that you supplied in your DeviceOptions.xml file. You can observe this in the -I and -d command line options for the tokenizer that appear in the logging output.
The Run command creates two folders in the root of the submission folder:
TestReports
TokenizerOutPut
The TestReports folder contains output files from the automation run. Some of these contain intermediate data that is created by and used by the automation. Other files are of direct interest to the test engineer; these are:
audit.log, described above
console.txt, which contains the textual output from the automation run
detail.log, the detailed logging output from the automation
summary.html, the summary of the pass/fail information from the testing
This folder also contains subfolders that will retain historical information from each automation run, including the date and time of the run in the filename. These will be useful if you are making changes and trying to understand the differences that your changes make to your automated test results.
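The timestamped history folders can be modeled with a naming scheme like the one below. The exact name format ddstat uses is not documented here, so this pattern is only an assumption for illustration:

```python
from datetime import datetime
from pathlib import Path

def run_history_folder(test_reports: Path, when: datetime) -> Path:
    """Build a per-run subfolder path under TestReports that embeds
    the run's date and time (hypothetical format)."""
    return test_reports / when.strftime("Run_%Y-%m-%d_%H-%M-%S")
```

Embedding the timestamp in the folder name keeps every run's output distinct, which is what makes run-to-run comparisons possible.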
The TokenizerOutPut folder contains the encoded files that are candidates for registration.
The detail.log file will show all the actions required to set up the tokenizer runs and invoke the tokenizers, and it will also display their console output.
For each test case, the detail.log file will contain information like the following:
The information begins by listing the TFP number of the test, followed by the text of the description from the test specification.
Next comes information, error and warning statements produced by the test automation or a tokenizer.
Next is an [INFO] line giving the disposition of the test.
Last is an [INFO] End: statement. It lists the short name of the test case, how the test case was evaluated (automation, tokenizer, or manual), and the disposition.
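A script that post-processes detail.log could pull the per-test result out of the End: lines. The field layout below is an assumed example of such a line, not the documented format:

```python
import re

# Assumed shape of an End: line; the real detail.log format may differ.
END_PATTERN = re.compile(
    r"\[INFO\]\s+End:\s+(?P<name>\S+)\s+"
    r"(?P<method>automation|tokenizer|manual)\s+(?P<disposition>\S+)"
)

def parse_end_line(line: str):
    """Return (test name, evaluation method, disposition), or None
    if the line is not an End: statement."""
    m = END_PATTERN.search(line)
    return m.group("name", "method", "disposition") if m else None
```

Such a parser would let you tabulate dispositions across all test cases without re-reading the whole log by hand.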
When the run command is complete, a tab named console.txt will open in the VSCode editor. It is a capture of the console output from the automation run.
There is no progress indication during the run. Because of the tokenizer runs that are included, it can take a minute or two for this to complete. This is a known deficiency. For now, wait for the console.txt tab to open to indicate completion.
This command runs the report function in the <folder> argument if it is present, or the current working directory if not.
It begins by locating a single test report document in the root of the submission folder, searching for files that follow the pattern:
TR*.docx
It is an error for more than one such file to exist.
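The single-report rule can be expressed as a small check. This is an illustrative sketch, not ddstat's code; the error message wording is invented:

```python
from pathlib import Path

def locate_test_report(submission_root: Path) -> Path:
    """Find exactly one TR*.docx in the submission folder root.
    Zero or multiple matches are an error, mirroring the report
    command's behavior."""
    matches = sorted(submission_root.glob("TR*.docx"))
    if len(matches) != 1:
        raise ValueError(
            f"expected exactly one TR*.docx in {submission_root}, "
            f"found {len(matches)}"
        )
    return matches[0]
```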
The report command continues by opening the document, inserting results into the summary tables and subsections for each test case. The information placed in the subsections is drawn from the detail.log file produced by the Run command.
If you have the test report file open in Word while running the report command, the report command will not be able to update the report document and will terminate with an error.
| Outcome | Meaning |
|---|---|
| [INFO] - Pass | Meets the test specification |
| [ERROR] - Fail | Does not meet the test specification |
| [INFO] - N/A | Does not apply to this EDD |
| [WARN] - Inconclusive | The automation cannot determine the outcome. This applies to the TFPs that are determined manually. It also applies when insufficient or incorrect data is supplied. |
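The table above can be captured as a lookup for scripts that scan logs for dispositions. The mapping restates the table; the counting helper and its assumed line format are illustrative:

```python
# Log-level prefix for each outcome, restating the table above.
OUTCOME_LEVEL = {
    "Pass": "[INFO]",
    "Fail": "[ERROR]",
    "N/A": "[INFO]",
    "Inconclusive": "[WARN]",
}

def tally_outcomes(lines):
    """Count each outcome among log lines carrying the matching
    level prefix (line format assumed for illustration)."""
    counts = {k: 0 for k in OUTCOME_LEVEL}
    for ln in lines:
        for outcome, level in OUTCOME_LEVEL.items():
            if ln.startswith(level) and outcome in ln:
                counts[outcome] += 1
    return counts
```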