Friday, April 30, 2010

Programming Language Structure

Originally posted 16 January 2010 on my temporary blogspace...

Programming languages have their origin in natural language, so to understand the structure of computer languages, we need to understand natural ones. According to Systemic Functional Grammar (SFG) theory, to understand the structure of language we need to consider its use: language is as it is because of the functions it's required to serve. Much analysis of the English language has been performed using these principles, but I haven't found much applying them to programming languages.

Functional grammar of natural languages

According to M.A.K. Halliday's SFG, the vast number of options for meaning potential embodied in language combine into three relatively independent components, and each of these components corresponds to a basic function of language. Within each component, the networks of options are closely interconnected, while between components, the connections are few. He identifies the "representational" and "interactional" functions of language, and a third, the "textual" function, which is instrumental to the other two, linking with them, with itself, and with features of the situation in which language is used.

To understand these three components in natural languages, we need to understand the stages of encoding. Two principal encodings occur when speech is produced: the first converts semantic concepts into a lexical-syntactic encoding; the second converts this into spoken sounds. A secondary encoding converts some semantics directly into the vocal system, overlaid onto the output of the lexical-syntactic encoding. Programming languages have the same three-level encoding: at the top is the semantics, in the middle is the language syntax, and at the bottom are the lexical tokens.
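To make the three levels concrete, here's a minimal sketch in Groovy. The token list and map-based tree are simplified assumptions for illustration, not how any real compiler represents them:

// A toy illustration of the three encoding levels for "total = 1 + 2".
def source = "total = 1 + 2"

// Bottom level (lexical): the character stream is grouped into tokens.
def tokens = source.split(/\s+/)    // [total, =, 1, +, 2]

// Middle level (syntactic): tokens are arranged into a structure,
// here an assignment whose value is a sum, modeled as nested maps.
def tree = [assign: tokens[0],
            value : [op: tokens[3], left: tokens[2], right: tokens[4]]]

// Top level (semantic): the structure denotes a meaning we can evaluate.
def env = [:]
env[tree.assign] = (tree.value.left as int) + (tree.value.right as int)
assert env.total == 3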

The representational function of language involves encoding our experience of the outside world, and of our own consciousness. For example's sake, here it is encoded as neutrally as possible: "The Groovy Language was first officially announced by James Strachan on Friday 29 August 2003, causing some to rejoice and others to tremble."

We can analyze this as two related processes. The first has actor "James Strachan", process "to officially announce", goal "the Groovy Language", instance circumstance "first", and temporal circumstance "Friday 29 August 2003"; the second process is related to it as the effect in a cause-and-effect relationship, and is itself two equally conjoined processes: one with process "to rejoice" and actor "some"; the other with process "to tremble" and actor "others".

The interactional function of language involves injecting the language participants into the encoding. A contrived example showing many types of injects: "The Groovy Language was first announced by, of all people, creator James Strachan, sometime in August 2003. Was it on Friday 29th? Could you tell me if it was? Must have been. That august August day made some happy chappies like me rejoice, didn't it?, yeehaaaah, and probably some other unfortunates to tuh-rem-ble, ha-haaah!"

We see an informal tone, implying the relationship between speaker and listener. There are glosses added, i.e. "of all people", "august", "happy chappies like me", "unfortunates"; semantic words added, i.e. "creator"; semantic words removed, i.e. "officially"; sounds inserted, i.e. "yeehaaaah", "ha-haaah"; prepended expressions of politeness, i.e. "Could you tell me if"; and words spoken differently, e.g. "tuh-rem-ble". Mood is added, i.e. a sequence of (indicative, interrogative, indicative). Probability modality is added, i.e. "must have", "probably". We could have added other modality, such as obligation, permission, or ability. We've added a tag, i.e. "didn't it?". We could have added polarity in the main predicate. What we can't indicate in this written encoding of speech is the attitudinal intonation overlaid onto each clause, of which English has hundreds. Neither can we show the body language, also part of the interactional function of speech.

Natural language in the human brain

A recent article in Scientific American says biologists now believe the specialization of the human brain’s two cerebral hemispheres was already in place when vertebrates arose 500 million years ago, and that "the left hemisphere originally seems to have focused in general on controlling well-established patterns of behavior; the right specialized in detecting and responding to unexpected stimuli. Both speech and right-handedness may have evolved from a specialization for the control of routine behavior. Face recognition and the processing of spatial relations may trace their heritage to a need to sense predators quickly."

I suspect the representational function of language is that which is produced by the left hemisphere of the brain, and the interactional function by the right hemisphere. Because the right side of the brain is responsible for unexpected stimuli, from both friend and foe, perhaps interactional language in vertebrates began as body language and facial expressions to denote conditions relevant to others, e.g. anger, fear, affection, humidity, rain, danger, etc. Later, vocal sounds arose as the voice box developed in various species, and in humans, increasingly complex sounds became possible. The left side of the brain is responsible for dealing with regular behavior, and so allowed people to use their right hand to make sign language to communicate. Chimpanzees and gorillas use their right hands to communicate with each other, often in gestures that also incorporate the head and mouth. The article hypothesizes that the evolution of the syllable in humans triggered the ability to form sentences describing processes involving people, things, places, times, etc. Proto-representational language was probably a series of one-syllable sounds similar to what some chimps can do nowadays with sign language, e.g. "Cat eat son night". Later, these two separate functions of natural language intertwined in human speech.

Programming language structure

When looking at programming languages, we can see the representational function easily. It maps closely to that of natural languages. The process is like a function, and the actor, goal, recipient, and other entities in the transitive structure of natural language are like the function parameters. In the object-oriented paradigm, one entity, the actor, is like the object. The circumstances are the surrounding static scope, and the relationships between processes are the sequencing of statements. Of course, the semantic domains of natural and programming languages are different: natural languages talk about a wider variety of things, themselves more vague, than programming languages do. But the encoding systems are similar: the functional and object-oriented paradigms became popular for programming because, between them, it's easy for programmers to code about many of the things they use natural language to talk about. The earlier example, in Groovy-flavored pseudocode:

Date("2003-8-29").events += {
def a = new Instances();
a
[1] = jamesStrachan.officiallyAnnounce(Language.GROOVY);
a
[1].effect = [some: s => s.rejoice(), others: o => o.tremble];
}

The similarities between the interactional functions of natural and programming languages are more difficult to comprehend. The major complication is the extra participants in programming languages. In natural language, one person speaks; maybe one, maybe more people listen, perhaps immediately, perhaps later. Occasionally it's intended that someone overhears. In programming languages, one person writes. The computer reads, but good programming practice is that other people read the code later. Commenting, use of whitespace, and variable naming partly enable this interactional function. So does including test scripts with code. Java/C#-style exception handling enables programmer-to-programmer interaction similar to the probability modality of English verbal phrases, e.g. will/definitely, should/probably, might/could/possibly, won't, probably won't.
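Here's a hedged sketch of that analogy in Groovy; the exception and method names are invented for the example:

// A declared exception tells the next programmer "this might fail",
// much as "should" or "might" hedges an English verb phrase.
class AnnouncementFailedException extends Exception {}

def officiallyAnnounce(String language) throws AnnouncementFailedException {
    if (!language) throw new AnnouncementFailedException()  // the "might not" branch
    println "Announcing $language"                          // the "will" branch
}

try {
    officiallyAnnounce("Groovy")           // we expect this to succeed...
} catch (AnnouncementFailedException e) {
    println "No announcement after all"    // ...but we must still say what happens if it doesn't
}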

Many programming systems allow some interactional code to be separated from the representational code. One way is using system-wide aspects. A security aspect will control the pathway between various humans and different functions of the program while it's running. Aspects can control communication between the running program and different facets of the computer equipment, e.g. a logging aspect comes between the program and the recording medium, a persistence aspect between the program and some storage mechanism, an execution performance aspect between the program and the CPU, a concurrency aspect between the program and many CPUs, a distribution aspect between the program and another program executing somewhere else. Here, we are considering these different facets of the computer equipment to be participants in the communication, just like the programmer. Aspects can also split out code for I/O actions and the program entry point, which are program-to-human interactions. This can also be done by monads in "pure functional" languages like Haskell. The representational function in Haskell is always kept separate from interactional functions like I/O and program entry, with monads enabling the intertwining between them. Monads also control all access between the program and modifiable state in the computer, another example of an interactional function.
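Groovy's own metaprogramming can sketch the idea. Here a logging concern lives entirely outside the representational code; the Announcer class is an invented example, and real aspect frameworks such as AspectJ do this far more thoroughly:

// Representational code: knows nothing about logging.
class Announcer {
    String announce(String language) { "Announced $language" }
}

// Interactional code, separated out: intercept every call on Announcer and log it.
Announcer.metaClass.invokeMethod = { String name, args ->
    println "LOG: calling $name with $args"        // program-to-recording-medium interaction
    def method = Announcer.metaClass.getMetaMethod(name, args)
    def result = method?.invoke(delegate, args)
    println "LOG: $name returned $result"
    result
}

new Announcer().announce("Groovy")    // logs before and after, then returns the result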

Textual function of language

The textual function of language in SFG is that which concerns the language medium itself. In spoken natural language, this is primarily the sequential nature of voice, and in written language, the 2-D form of the page. Whereas in natural language theory, the voice-carrying atmosphere and the ink-carrying paper are obviously mediums and not participants, it's more difficult to categorize the difference between them in programming language theory. Because a program is written as much for the CPU as for other human readers, if not more so, we could call the CPU a participant. But then why can't the CPU cache, computer memory, hard-disk storage, and comms lines also be called participants? Perhaps the participants and the transmission medium for natural languages are also more similar than different.

The textual function of language is made up of the thematic, informational, and cohesive structures. Although mainly medium-oriented, they also involve the participants. The thematic structure is speaker-oriented; the informational structure is listener-oriented. The thematic structure is overlaid onto the clause. In English, what the speaker regards as the heading to what they're saying, the theme, is put in first position. Not only clauses, but also sentences, speech acts, written paragraphs, spoken discourses, and even entire novels have themes. Some examples using the lexical items James, to give, programmers, Groovy, and 2003, with the theme in first position in each (a code analogue follows the list):

  • James Strachan gave programmers Groovy in 2003.
  • Programmers are who James gave Groovy to in 2003.
  • The Groovy Language is what James gave programmers in 2003.
  • 2003 is when James gave programmers Groovy.
  • Given was Groovy by James to programmers in 2003.
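A language can offer the same freedom in code. Groovy's named arguments, for instance, let the writer put whichever element is thematic in first position without changing the meaning; the give function here is an invented example:

// The same process, with a different element as "theme" (first position) each time.
def give(Map m) { "${m.actor} gave ${m.recipient} ${m.goal} in ${m.time}" }

def expected = 'James gave programmers Groovy in 2003'

// Theme: the actor
assert give(actor: 'James', recipient: 'programmers', goal: 'Groovy', time: 2003) == expected
// Theme: the goal
assert give(goal: 'Groovy', actor: 'James', recipient: 'programmers', time: 2003) == expected
// Theme: the time
assert give(time: 2003, actor: 'James', recipient: 'programmers', goal: 'Groovy') == expected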

In English, the Actor of the representational function's transitive structure is more likely to be separated from the interactional function's Subject and from the Theme in a clause than those two are from each other. I think the textual functions of natural language are far more closely linked to the interactional function than to the representational. Perhaps the right side of the brain also processes such textual structure.

The informational structure jumps from the top (i.e. semantic) encoding level directly to the bottom (i.e. phonological) one in English, skipping the middle (i.e. lexical/syntactic) level. This is mirrored by how programming languages such as Python use lexical structure, significant indentation, to directly determine semantic structure. In English, speech is broken into tone units, separated by short pauses. Each tone unit has the stress on some part of it to indicate the new information. For example, each of these sentences has a different informational meaning (capitals indicate the stress):

  • James gave programmers Groovy IN 2003.
  • James gave programmers the GROOVY LANGUAGE in 2003.
  • James gave PROGRAMMERS Groovy in 2003.
  • James GAVE programmers Groovy in 2003.
  • JAMES STRACHAN gave programmers Groovy in 2003.

Unlike the thematic structure, the informational structure organizes the tone unit by relating it to what has gone before, reflecting what the speaker assumes is the status of the information in the mind of the listener. The informational structure usually follows the thematic structure, but needn't. English grammar allows the lexical items to be arranged in many different orders, so they can be broken up in many combinations into tone units. For example, these examples restructure the clause so it can be divided into two tone units (shown by the comma), each with its own stress, so two items of new information can be introduced in one clause:

  • James gave Groovy to programmers, in 2003.
  • As for Groovy, James gave it to programmers in 2003.
  • In 2003, James gave programmers Groovy.

Programming languages should follow the example of natural languages, and allow developers to structure their code to show both thematic and informational structure.

The final textual function, the cohesive structure, enables links between clauses using various techniques, such as reference, pronouns, and conjunctions. Imperative programming languages rely heavily on reference, i.e. temporary variables, but don't use pronouns very much. Programming languages should also provide developers with many pronouns.
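Groovy already hints at what code pronouns could look like: the implicit closure parameter it and the with method both let the writer refer back to something without naming it again, much like "it" and ellipsis in English. A minimal sketch:

def languages = ['Groovy', 'Scala', 'Haskell']
languages.each { println it }    // `it` refers back to the current element, like a pronoun

def james = [name: 'James Strachan', created: 'Groovy']
james.with {                     // inside the block, james is the "understood" referent
    println name                 // like saying "his name" without repeating "James"
    println created
}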

Summary

Programming languages initially represented information in the same way humans do, using transitive structures such as function calls, joined by logical relationships such as blocks and class definitions. Interactional aspects of code were initially intertwined with the representational, but can now be separated out using aspects and monads. Enabling different textual structures in programs isn't very widespread: so far it's limited to providing different views of an AST in an IDE, and only occasionally allowing "more than one way to do things" at the lexical level. When used well, textual structures in code enable someone later on to more easily read and understand the program.

In promoting the benefits of programming languages enabling different textual structures, I think it's useful to narrow down to two primary structures: the transitive and the thematic, as these two are easiest to communicate to programmers. See my earlier thoughts on how a programming language can enable more thematic variation. Programming languages of the future should provide the same functions for programmers that natural languages provide for humans.

And of course, I'm building Groovy 2.0, which will both enable thematic variation in the language syntax/morphology, and supply a vast vocabulary of Unicode tokens for names. The first iteration will use Groovy 1.x's SwingBuilder and ASTBuilder, along with my own Scala-based combinator parsers, to turn Groovy 2.0 source into Groovy 1.x bytecode. The accompanying Strach IME will enable programmers to enter the Unicode tokens intuitively. Groovy 2.0 will break the chains of the Antlr/Eclipse syntactic bottleneck over Groovy 1.x!

