Writing Command Line Interfaces for Embedded Devices

Table of contents

1. Introduction
2. Background
3. CLI Design

      3.1 Lexical analysis
      3.2 Syntax analysis
      3.3 Data structure

4. CLI Development

      4.1 Scanner
      4.2 Parser
      4.3 The graph and the command execution

5. Conclusions
6. References

1. Introduction

The configuration, maintenance and monitoring of embedded devices or dedicated servers that need human interaction at least once, such as routers, switches and firewalls, can be performed through several interfaces. These interfaces are usually of two types: the Command Line Interface (CLI) and the Graphical User Interface (GUI).

A GUI, given its user-friendly nature, is becoming a common tool for interacting with an embedded system, especially when the device is a home appliance such as a set-top box or a mobile telephone.

However, the default and usually most powerful tool is the CLI. Most embedded devices provide one, given that it is extremely useful during the development stages of the system and usually allows more advanced operations that are not present in the GUI. Virtually all telecommunications devices provide a CLI for management. A bootloader is another useful case in which a CLI is the only way of interacting with the system.

For the above reasons, the CLI becomes an important topic during the development cycle of an embedded system or a server.

A CLI can be designed and implemented in many different ways. The CLI described in this article follows an organized, efficient and dynamic approach that takes advantage of scanners, parsers and graphs.

Normally, embedded devices provide a standard shell such as bash or ash (used in BusyBox), but the administrators who will have access to the final product must not have shell access to the system. They must have access only to a reduced set of operations that include the control of specific peripherals, such as the management of a microwave channel in a long-haul radio bridge.

Notice that a CLI is not the same as a shell. The latter is a different and very powerful tool: a command language interpreter that may support concepts such as shell variables, redirection and aliases. A CLI provides just the set of commands required for executing a specific task.
However, there are some similarities between a CLI and a shell. For example, both have to take control of the terminal, both have a set of reserved words and both have a specific grammar that allows them to interpret the given commands.

The examples in this guide are based on the Tarasca Project, a proof-of-concept (alpha stage) open source project written entirely in C, lex and yacc for *nix. It contains the infrastructure for creating a CLI with user-defined commands and supports privileged menus, tab completion, command completion, command history and some other features.

2. Background

A CLI can be designed and developed without any specific knowledge in a given area, but the design and development steps can be accomplished faster and can be better structured with a basic knowledge of compiler theory and a programming language.

Notice that a CLI can be designed and developed without the use of any scanner or parser, but as we will see later, these tools are quite helpful when the programmer needs to implement a considerable amount of commands for configuring, monitoring and maintaining a complex system such as a router or a long-haul microwave bridge.

Compiler theory is large and complex, but there are only two concepts from this theory that we apply to CLIs in a simplified manner: lexical analysis and syntax analysis. Lexical analysis is the processing of an input sequence of characters to produce, as output, a sequence of symbols called tokens. Syntax analysis is the process of analyzing a sequence of tokens in order to determine its grammatical structure with respect to a given formal grammar. Sections 3.1 and 3.2 give a brief and, hopefully, clear explanation; the dragon book [1] gives a further one.

The last useful element for a working CLI is knowledge of a programming language. There is no need to know everything about it, but some experience with data structures and pointers is definitely useful. Also, no particular programming language is required; a CLI can be developed in the language of your choice, such as C, Perl, Python or Lisp.
Since the Tarasca Project is written in C, we will use this programming language for developing the CLI.

3. CLI design

We can design and develop a CLI in many different ways. The approach chosen for the Tarasca Project allows us to:

  • Define the commands in configuration files that can be part of the filesystem or compiled into the executable.

  • Process the commands until a convenient data structure is generated.
  • Accept input from a terminal that supports add-on functionality such as tab-completion and command history.
  • Match the given input from the terminal to the data structure in memory.
  • Execute the command.
  • Send the output to the terminal.

An illustration explains the chosen design better. Figure 1 shows the building blocks of the CLI:

Figure 1. Building blocks of the CLI

In this article we will focus on the core issues for building a CLI: the red-highlighted blocks. We will address other topics, such as terminal control and add-on functionality, in further articles.
In order to analyze the highlighted blocks of Figure 1, we will assume that we already have the other blocks working properly.
A quick and simple example of a CLI command is the following: suppose that we have a radio bridge with multiple communication channels, in which we want the command

> config <channel_number> <bandwidth>

to set the bandwidth of the given channel number.
This command, called config, receives two parameters, also called arguments. The first argument, channel_number, is an integer greater than 0; the second argument, bandwidth, is an integer in the range [0 - 4096]. The bandwidth is measured in KB. Any other value is considered an error.
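The range checks above are straightforward to express in C. The following is a minimal sketch; the function name and its interface are ours, for illustration only:

```c
/* Hypothetical checker for the two arguments of "config":
 * channel_number must be greater than 0 and bandwidth must
 * lie in the range [0 - 4096]. Returns 1 if valid, 0 otherwise. */
int config_args_valid(long channel_number, long bandwidth)
{
    if (channel_number <= 0)
        return 0;                        /* invalid channel number */
    if (bandwidth < 0 || bandwidth > 4096)
        return 0;                        /* bandwidth out of range */
    return 1;
}
```

With these rules, config 2 2000 is accepted, while config 0 2000 and config 1 5000 are rejected.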

3.1 Lexical analysis

Lexical analysis is the processing of an input sequence of characters to produce, as output, a sequence of symbols called tokens. A scanner is the tool used for this procedure. For example, a typical scanner that receives as input the character sequence config 2 2000 will produce as output the tokens config, 2 and 2000. These tokens become the input of the parser.
A dictionary defines which characters are accepted and in which order they may appear. That is, we need to define the characters that will be accepted by our CLI and we must order them in a particular manner.
For example, we can decide that the allowed characters will be the digits [0-9], all letters from the English alphabet [a-zA-Z] and the characters -, _, space, tab and enter. We can also decide that every sequence must start and end with letters, and can have digits, a - or a _ between the letter sequences. Spaces and tabs are discarded, but do not produce an error as any other character, such as & or %, would. The enter tells the scanner that the sequence is over and that it has to produce a token for the parser.

Using regular expression syntax, the allowed sequences would be:

[a-zA-Z]+([0-9_-]+[a-zA-Z]+)*
Therefore the following two sequences are valid:

t
activate

but the following two sequences are not valid:

_meminfo         (the sequence cannot start with a _)
ifup_0           (the sequence cannot end with a digit)

A finite state machine, which can be viewed graphically as a connected graph, is the best way of designing a scanner.
The graph in Figure 2 shows several vertices, also called nodes, that are connected through unidirectional edges. Each node represents a state and each edge represents an input character. Thus, for producing the token pwd-enable when receiving the sequence

pwd-enable\t<enter>        ( \t means tab )

we have to move through the directed graph in the following way:
We always start from the initial state 0. When we receive the character p we move to state 1, because when a letter is received in state 0, the matching edge takes us to state 1. When we receive the characters w and d we remain in state 1, because of the edge that points to itself. Then, when we receive the character -, we move to state 2; when we receive the character e we return to state 1 and remain there for the characters n, a, b, l and e. The tab is then discarded. Finally, when we receive the enter we check whether we are in an acceptable state or in an error state, that is, we check whether the sequence of characters is valid. If it is valid, we produce the token; otherwise we produce an error.

Figure 2. Finite state machine for a scanner
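The walkthrough above translates almost mechanically into C. The following sketch implements the states of Figure 2 in a simplified form; the real scanner of Section 4.1 is generated by Flex instead:

```c
#include <ctype.h>

/* Returns 1 if the sequence is valid according to Section 3.1:
 * it must start and end with a letter, and digits, '-' and '_'
 * may appear only between letter sequences. Spaces and tabs are
 * discarded; any other character is an error. */
int scan_token(const char *s)
{
    int state = 0;          /* 0 = initial, 1 = in letters, 2 = separator */

    for (; *s != '\0'; s++) {
        if (*s == ' ' || *s == '\t')
            continue;                       /* discarded, not an error */
        if (isalpha((unsigned char)*s))
            state = 1;                      /* letters always lead to state 1 */
        else if (state != 0 &&
                 (isdigit((unsigned char)*s) || *s == '-' || *s == '_'))
            state = 2;                      /* allowed only after a letter */
        else
            return 0;                       /* leading '_', or '&', '%', ... */
    }
    return state == 1;      /* on enter: accept only if we end on a letter */
}
```

For example, scan_token("pwd-enable") and scan_token("t") accept, while scan_token("_meminfo") and scan_token("ifup_0") reject, matching the valid and invalid sequences shown earlier.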

3.2 Syntax analysis

Syntax analysis is the process of analyzing a sequence of tokens in order to determine its grammatical structure with respect to a given formal grammar.
A parser is the tool used for this procedure, and its input is the sequence of tokens produced by the scanner.

Defining a formal grammar can be a complex process; the dragon book [1] explains it in great detail. Here we will describe just the concepts necessary for developing the CLI.

A formal grammar is the precise description of a set of strings. There are two main categories of formal grammars: an analytic grammar describes how to recognize when strings are members in the set, whereas a generative grammar only describes how to write those strings in the set.

Since we are interested in writing a set of strings, we will use a generative grammar.
These kinds of grammars are classified in several types known as the Chomsky hierarchy. The most relevant for our purposes are the context-free grammars, which describe the so-called context-free languages. In our case it must also be possible to tell how to parse any portion of an input string with just a single token of look-ahead; strictly speaking, that is the description of an LR grammar.

In the formal grammatical rules for a language, each kind of syntactic unit or grouping is named by a symbol. Those which are built by grouping smaller constructs according to grammatical rules are called nonterminal symbols, whilst those which cannot be subdivided are called terminal symbols or token types. A piece of input corresponding to a single terminal symbol is called a token, and a piece corresponding to a single nonterminal symbol is called a grouping.

For example, the tokens of C are identifiers, constants, the various keywords such as if and for, arithmetic operators and punctuation marks. So the terminal symbols of a grammar for C include identifier, number, string, and one symbol for each keyword, operator or punctuation mark: if, return, static, int, the plus sign, the semicolon and others.

The following C function subdivided into tokens explains the above paragraphs:

  int                   /* keyword int */
  sum (int x, int y)    /* identifier, open-parenthesis, keyword int, identifier,
                         * comma, keyword int, identifier, close-parenthesis */
  {                     /* open-brace */
    return x + y;       /* keyword return, identifier, plus sign, identifier,
                         * semicolon */
  }                     /* close-brace */

Listing 1. The tokens of a simple C function

The syntactic groupings of C include the expression, the statement, the declaration, and the function definition. These are represented in the grammar of C by nonterminal symbols expression, statement, declaration and function definition. The full grammar uses many additional language constructs, each with its own nonterminal symbol, in order to express the meanings of these four.
For example, Listing 1 is a function definition that contains one declaration and one statement. The statement contains three expressions: x + y, which is in turn composed of the two expressions x and y.

The start symbol is the only nonterminal symbol that must be distinguished as the special one which
defines a complete utterance in the language.
In C, the start symbol is the sequence of definitions and declarations.

Each time a terminal or a nonterminal symbol is validated, the parser calls a function that will add an element to the data structure that the CLI will use. Therefore, at the end of the syntax analysis, if there were no errors, we will have a validated group of tokens (the terminal and nonterminal symbols) and the data structure for the CLI. The creation and the description of the elements of this data structure is explained in the following section.

3.3 Data structure

In order to execute a command, we must create the necessary structures for finding, executing and eventually extracting information from these commands.
The way we chose for creating the CLI structure is quite fast and organized, although there are several other possibilities.
This structure can be created on the fly, at the same time as the syntax analysis takes place. Once the parser has validated and identified a token type, it calls a specific function that creates the corresponding part of this structure.

Before we describe the creation of this structure and the way its elements interact, let us identify and describe these elements:

The modes are used for classifying the different types of commands into levels. For example, the commands that configure a radio channel could be inside the mode channel-conf, whilst the command that changes the privileged password could be inside the mode privileged. Between one mode and another there could be a password that allows a privileged user to execute certain commands.

The commands describe an instruction that will be executed. For example, the command activate could execute the action of activating a radio channel. A command has several properties, the most important ones are the function it calls when it is entered, which actually executes it, and its description that could be used for generating its help. Each command belongs to a specific mode, but it can be exported to other modes becoming available to several modes at the same time. This is useful for generic commands such as help and quit that must be available at any moment.

The argument is a parameter that is given to a command. For example, the command config has the argument channel_number that identifies a specific channel and the argument bandwidth that sets the desired bandwidth. Depending on the command, some arguments can be mandatory, and others can be optional.

The prompt identifies the functional level in which we are at any given moment. It could be a static or a dynamic string; for example, using the host name as a prompt would be a dynamic string.

The graph
The structure that puts all these elements together in an organized and efficient manner is a graph.
This graph contains three types of singly linked lists:

  • List 1: The first list is composed of modes; each node in the list represents a mode. For example, the first node could be used as a user mode for basic commands, the second node as a privileged mode for sensitive commands such as the device configuration operations, the third node as a privileged mode for the configuration of radio channels only, and so on. There is only one list of modes.
  • List 2: Since each mode can have several commands, each node of the mode list has a list of commands. For example, typical commands for the third node would be activate channel and lockout channel, which are typical commands for configuring a network interface, and only a privileged user can have access to them.
    Some other commands could allow us to move from one mode to another.
    For example, the command enable could move us from the user mode to the privileged mode, and the command exit could move us back to the previous mode.
  • List 3: Since each command can have several mandatory and/or optional arguments, each node of the command list has a list of arguments. For example, the command config channel needs as arguments the radio channel number and its bandwidth.

Figure 3 shows the graph as the main data structure for the CLI with its three types of lists: modes, commands and arguments.

Figure 3. CLI graph with modes, commands and arguments

Having a graph of this kind significantly simplifies executing a command, extracting its data, or simply moving from one element to another.

For example, taking Figure 3 as reference, if we are in the node that represents the channel-conf mode and we enter the config 2 2000 command in the CLI, we have to move through the list of commands until we find the node of the config command. If the command is not found, an error is issued. If it is found, we have to move along the arguments list of this command to find each one of its arguments. First we search for the channel_number argument; if we find it, we move ahead to search for the bandwidth argument. If we find it, we stop; otherwise we issue an error.
After this procedure, we can execute the command or extract information about it and its arguments.
Notice that one of the properties of a command node is the function that must be called when this command is given.
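The lists and the lookup described above can be summarized in a short C sketch. The struct and function names here are illustrative, not the actual Tarasca definitions:

```c
#include <stddef.h>
#include <string.h>

/* Each list is singly linked: one node per argument, command and mode. */
typedef struct arg_node {
    const char      *name;          /* e.g. "channel_number" */
    struct arg_node *next;
} arg_node;

typedef struct cmd_node {
    const char      *name;                          /* e.g. "config" */
    int            (*exec)(int argc, char **argv);  /* callback on a match */
    arg_node        *args;                          /* list of arguments */
    struct cmd_node *next;
} cmd_node;

typedef struct mode_node {
    const char       *name;         /* e.g. "user", "channel-conf" */
    cmd_node         *cmds;         /* list of commands of this mode */
    struct mode_node *next;
} mode_node;

/* Walk the command list of the current mode, as in the config 2 2000
 * example above; NULL means the command was not found. */
cmd_node *find_cmd(const mode_node *mode, const char *name)
{
    cmd_node *c;
    for (c = mode->cmds; c != NULL; c = c->next)
        if (strcmp(c->name, name) == 0)
            return c;
    return NULL;
}
```

The argument lookup works the same way over the args list of the matched command, and executing the command reduces to calling its exec callback.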

The following sections describe how the user can implement the scanner, the parser and how to create the graph and implement the functions of each command.

4. CLI development

4.1 Scanner

A scanner is a tool for performing lexical analysis; that is, a scanner processes an input sequence of characters to produce, as output, a sequence of tokens.
The Tarasca Project uses the widespread open source scanner generator called Flex, therefore we will use it for explaining how to generate our tokens. The code in this section is taken from the file rhal/rhal_lexer.l.
A Flex input file consists of three sections, separated by a line with %% in it:

definitions
%%
rules
%%
user code

Listing 2. The structure of a Flex file

The definitions section contains declarations of simple name definitions to simplify the scanner specification and declarations of start conditions. Listing 3 shows the definitions of the start and end identifiers, quoted string, comments, space and new line character that are the elements that our scanner needs for the CLI:

identifier_start  [A-Za-z_]
identifier_end    [\-]*[A-Za-z_0-9$]
identifier        {identifier_start}{identifier_end}*|[?]
qstr              \"[^\"\n]*[\"\r\n]
comment           #.*
space             [ \t\r]+
nl                [\n]

Listing 3. Scanner definitions for the CLI using Flex

The identifier_start identifier can only be composed of the upper and lower case letters of the alphabet plus the underscore. The comment identifier is composed of the hash character plus zero or more characters of any type except the new-line character. And so on.

The second section of the Flex input, called the rules section, contains a series of rules of the form

           pattern   action

where the pattern must be unindented and the action must begin on the same line. The patterns are written using an extended set of regular expressions. The CLI does not need complex patterns, mainly reserved words. A reserved word uniquely identifies a given token. In C, if and for are reserved words that cannot be used for anything else. The same happens with our CLI: the mode and prompt tokens are reserved words.

The action is an arbitrary C statement. Listing 4 shows the pattern-action section of the CLI, which we can divide in three parts. At the top of the listing we see that spaces and comments are eaten up, and that the quoted strings qstr, defined above in the definitions section, must end with the double quote character. At the bottom of the listing, the nl characters are counted for displaying the line number when there is a parser error, and the ={}; characters, along with any invalid character matched by the '.' pattern, are returned verbatim.
The middle of the listing contains all the tokens that are matched against a reserved word; thus, if we match the token mode we will return the MODE value, and so on. The identifier token is treated similarly to the qstr token.
These pattern-action lines in the middle of the listing call the function SymLook(), which receives as parameter the matched pattern and returns the name and type of the matched token inside a struct if there was a match, or NULL if it was not found. Notice that all tokens are stored in the Flex global character pointer yytext and their length in the global integer yyleng.

The values returned by the Flex rules are the input of the parser. That is, the parser will receive the yytext variable when there is a return yytext sentence, or a MODE, PROMPT or some other reserved-word value. Notice that these values are defined in the parser file, because the parser is responsible for validating the tokens created by Flex.

{space}     /* Eat up spaces */
{comment}   /* Eat up comments */
{qstr}      {
                if (yytext[yyleng - 1] != '"') {
                    log_error(LOG_PARSER, "Flex: Error while scanning, "
                              "string not quoted: %s", yytext);
                    fprintf(FD_OUT, "Flex: Error while parsing, "
                            "string not quoted: %s\n", yytext);
                } else {
                    rhal_lval.string = yytext;
                    return QSTRING;
                }
            }

mode        { rhal_lval.stptr = SymLook(yytext); return MODE; }
...         /* Reserved words are here */
mandatory   { rhal_lval.stptr = SymLook(yytext); return MANDATORY; }
{identifier} { rhal_lval.string = yytext; return IDENTIFIER; }

{nl}        { line_count++; }
[={};]      { return yytext[0]; }
.           { return yytext[0]; }

Listing 4. A fragment of the rules section
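The SymLook() function itself is not shown in the listing. A plausible minimal implementation is a linear search over a table of reserved words; the struct layout, token values and function body below are our own sketch, not the actual Tarasca code:

```c
#include <stddef.h>
#include <string.h>

/* Illustrative symbol-table entry: a reserved word and its token type. */
typedef struct sym_entry {
    const char *name;
    int         type;       /* token value handed back to the parser */
} sym_entry;

/* In the real program these values come from the Bison-generated header. */
enum { T_MODE = 258, T_PROMPT, T_COMMAND, T_MANDATORY };

static sym_entry sym_table[] = {
    { "mode",      T_MODE      },
    { "prompt",    T_PROMPT    },
    { "command",   T_COMMAND   },
    { "mandatory", T_MANDATORY },
};

/* Return the entry for a reserved word, or NULL when there is no match. */
sym_entry *sym_look(const char *word)
{
    size_t i;
    for (i = 0; i < sizeof sym_table / sizeof sym_table[0]; i++)
        if (strcmp(sym_table[i].name, word) == 0)
            return &sym_table[i];
    return NULL;
}
```

A hash table would be the usual optimization if the set of reserved words grew large, but for a handful of keywords a linear search is perfectly adequate.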

The user code section is simply copied verbatim to the output C file, but it is not used by the CLI. In other cases it is useful for companion routines which call or are called by the scanner. The presence of this section is optional; if it is missing, the second %% in the input file may be skipped too, as in the case of our example.

Notice that the generated C file is the real scanner; Flex is just the utility for generating the scanner that is compiled into the final program.

4.2 Parser

The Tarasca Project uses the widespread open source parser called Bison, therefore we will use it for explaining how to validate our tokens.
Bison is a parser generator in the style of yacc; it should be upwardly compatible with input files designed for yacc and it is definitely more flexible. However, in the Tarasca Project, Bison is used in a yacc-compatible way. Bison is a complex tool; thus, we will only outline its most important aspects.
Bison reads a sequence of tokens as its input, and groups the tokens using the grammar rules. If the input is valid, the end result is that the entire token sequence is reduced to a single grouping whose symbol is the grammar’s start symbol. If the input is not valid, the parser reports a syntax error. In our CLI, the entire input must be a sequence of modes, commands and arguments declarations.
The structure of a Bison file that defines the grammar contains several parts that are shown in Listing 5. Once Bison processes this file, it generates a C file as output that is compiled for generating the final executable. Notice that the generated C file is the real parser, Bison is just the utility for generating the parser that is compiled into the final program.

     %{
     Prologue
     %}
     Bison declarations
     %%
     Grammar rules
     %%
     Epilogue

Listing 5. The structure of a Bison file

The Prologue section contains C macro definitions and declarations of functions and variables that are used in the actions in the grammar rules. This C code is copied verbatim to the output C file.
The Bison declarations section contains declarations that define terminal and nonterminal symbols, specify precedence, and so on. This section defines the symbols used in formulating the grammar and the data types of semantic values. All token type names must be declared here, except for the single char literal tokens such as the plus sign.
The grammar rule section has the general form:

     Result: Components...

where Result is the nonterminal symbol that this rule describes and Components are various terminal and nonterminal symbols that are put together by this rule.
The Epilogue section, like the Prologue, contains plain C code that is copied verbatim to the output C file. The last %% may be omitted if this section is empty.

The code fragments in Listing 6 show a Bison file for the CLI. This code is taken from the file rhal/rhal_parser.y of the Tarasca Project. The file extension .y is standard for yacc grammars.

%{
#include "rhal.h"

...                  /* Other include statements or data/functions declarations */

extern CLIMatrix        matrix;
%}

%union {
    char        *string;
    SymTable    stptr;
}

%token <string> QSTRING
%type  <string> cstring
%type  <string> name
%type  <stptr>  value_expr
%type  <stptr>  prio_expr

%%

mode_sections: mode_section
        | mode_sections mode_section

mode_section: MODE name { matrix = graph_mode_insert($2); }
                        '{' mode_attrs '}'

mode_attrs: prompt_section
        | cmd_sections
        | prompt_section cmd_sections

prompt_section: PROMPT '=' cstring { matrix = graph_mode_insert_prompt($3); } ';'

cmd_sections: cmd_section
        | cmd_sections cmd_section

cmd_section: COMMAND name { matrix = graph_cmd_insert($2); }
                          '{' cmd_attrs '}'

...              /* Other grammar rules and actions follow such as cmd_attrs */

value_expr: BOOL
        | STRING
        | NUMBER
        | HEX
        | IPADDR
        | MACADDR
        | RANGE         { $$ = $1; }

prio_expr: OPTIONAL
        | MANDATORY     { $$ = $1; }

parent_section: /* empty */
        | PARENT '=' name { matrix = graph_arg_insert_parent($3); } ';'

name: IDENTIFIER { $$ = $1; }

cstring: QSTRING { $$ = $1; }

desc_section: /* empty */
        | DESC '=' cstring { matrix = graph_insert_desc($3); } ';'

%%

void enable_yacc_debug() {
    #if YYDEBUG != 0
        yydebug = 1;
    #endif
}
Listing 6. A fragment of the Bison file

The prologue section contains includes and global variables that are copied verbatim. The include files contain the definitions of the functions used in the rules section such as graph_mode_insert().
In this Bison file we see two types of rules in the rules section. The first and simpler type has the form

exp:      exp '+' exp

which says that two groupings of type exp, with a '+' token in between, can be combined into a larger grouping of type exp. Examples of simple, non-recursive rules are the name and cstring rules at the end of Listing 6.
The second type of rules are the recursive rules, which are the only way to define a sequence of any number of a particular thing. These rules have the form

expseq:  exp
             | expseq ',' exp

which says that a grouping of type expseq can be composed of a single grouping of type exp, or of two groupings of type expseq and exp with a ',' token in between. This is called left recursion because the recursive use of expseq is the leftmost symbol on the right-hand side of the rule. Left recursion is preferable over right recursion because it can parse a sequence of any number of elements with bounded stack space.
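For contrast, the two forms of the sequence rule can be written side by side (a sketch of two alternative definitions of the same nonterminal, only for comparison):

```yacc
/* Left recursion: a reduction happens after each element,
 * so the parser stack stays bounded. */
expseq:   exp
        | expseq ',' exp
        ;

/* Right recursion (avoid it): nothing can be reduced until the
 * entire sequence has been shifted, so the stack grows with the
 * length of the input. */
expseq:   exp
        | exp ',' expseq
        ;
```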

The epilogue in this case contains a function that tells Bison to display debugging information if the flag YYDEBUG is active. If there were no epilogue, the last %% would not be necessary.

This last function is interesting because it shows one of the Bison predefined variables: yydebug. Bison has some predefined variables and functions that start with the prefix yy, although this prefix can be changed when invoking Bison from the shell or a makefile. These functions and variables are highly customizable; some of them can even be overridden to fit specific needs. However, in their default form, they are responsible for doing the real parsing.
The most important predefined functions that are worth mentioning are:
yyparse(): This function reads tokens, executes actions, and returns when it encounters end-of-input or an unrecoverable syntax error. This function is called, in our case, from the CLI in the function rhal_parse_files() in the file rhal/rhal.c.
yylex(): It recognizes tokens from the input stream and returns them to the parser. yylex() is the scanner function produced by Flex, and the parser calls it whenever it needs a new token. This function provides the glue between the scanner and the parser.
yyerror(): This function is called when the Bison parser finds a syntax or parse error. It is normally overridden to keep it aligned with the rest of the program's error printing functions. In our CLI, yyerror() is overridden in the file rhal/rhal.c.
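Overriding yyerror() typically amounts to a few lines. The following is a minimal sketch; the message format and the line_count variable mirror the {nl} rule of Listing 4, but the actual Tarasca implementation may differ:

```c
#include <stdio.h>

/* Normally incremented by the scanner's {nl} rule; defined here so
 * that the sketch is self-contained. */
int line_count = 1;

/* Building the message separately keeps it easy to test and reuse. */
int format_parse_error(char *buf, size_t len, int line, const char *msg)
{
    return snprintf(buf, len, "parse error near line %d: %s", line, msg);
}

/* Bison calls yyerror() on any syntax error; overriding it keeps the
 * output consistent with the program's other error-printing functions. */
void yyerror(const char *msg)
{
    char buf[256];

    format_parse_error(buf, sizeof buf, line_count, msg);
    fprintf(stderr, "%s\n", buf);
}
```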

4.3 The graph and the command execution

As we described in Section 3.3, the main data structure of the CLI is a graph that contains three types of singly linked lists: a list of modes, lists of commands and lists of arguments.

Listing 7 shows an example of a simple CLI with three modes called user, privileged and channel_conf. Each mode has one or more commands, and each command can have zero or more arguments.
This listing is the Configuration Files block of Figure 1; it is the input of the scanner.
Figure 3 illustrates how Listing 7 would look after being processed by the scanner and parser, and after we create the graph with its lists.

mode "user" {
	command "quit" {
		description: "Quit the shell"
	}
	command "help" {
		description: "Display the syntax of the given command"
		argument "cmd" {
			priority: mandatory
			description: "Name of the command"
		}
	}
	command "enable" {
		description: "Change to privileged mode"
	}
}

mode "privileged" {
	command "configure" {
		description: "Configure a system's module"
		argument "interface" {
			priority: mandatory
			description: "Sub-system to be configured"
		}
	}
	command "pwd-enable" {
		description: "Modify enable password"
	}
}

mode "channel_conf" {
	command "activate" {
		description: "Configure the given radio channel"
		argument "number" {
			priority: mandatory
			description: "Channel number"
		}
	}
	command "lockout" {
		description: "Send radio channel to protection mode"
		argument "number" {
			priority: mandatory
			description: "Channel number"
		}
	}
	command "config" {
		description: "Configure a radio channel"
		argument "number" {
			priority: mandatory
			description: "Channel number"
		}
		argument "bandwidth" {
			priority: mandatory
			description: "Channel bandwidth"
		}
	}
	command "info" {
		description: "Display channel information"
		argument "item" {
			priority: mandatory
			description: "Display basic channel info"
		}
	}
}
Listing 7. Configuration Files of the CLI

In order to build the graph and move along its different lists efficiently, there is a global struct that contains pointers to each one of the three types of lists. These pointers move to the mode, command and argument that are being entered at any given moment.

The commands and arguments entered in the terminal are compared against the commands and arguments in this graph. If a match is found, the user-defined function associated with the command is executed; it is simply a callback function.

The functions rhal_validate_input_list(), match_args() and rhal_exec_cmd() in the file rhal/rhal.c of the Tarasca Project validate the input and match it against the graph; if a match is found, the command is executed.
The file that actually calls these functions is cli/crish.c. The functions in this file handle part of the terminal control as well as the command matching and execution.

5. Conclusions

With the support of a scanner and a parser, we designed and developed a Command Line Interface in which the user can define the commands and their organization and concentrate only on their functionality, leaving aside the tedious details of parsing and validating the terminal input. This allows us to reuse the CLI for other projects, in which we will only have to focus on the new commands and their functionality.

6. References

[1] The "dragon book":
Alfred V. Aho, Ravi Sethi, Jeffrey D. Ullman.
Compilers: Principles, Techniques, and Tools.
Addison-Wesley, 1986.
