Sunday, February 26, 2012

Episode 11: Lisp

Hey all,

Life got in the way of the podcast for a while, so Patrick and I had to take a brief hiatus.  This episode was recorded before things got crazy, but I wanted to let everyone know that the podcast is coming back and we will begin recording new episodes in the next couple of weeks.


Tool of the Biweek


  • Invented before it was implemented
  • Foundational registers
    • car (Contents of the Address part of Register number)
    • cdr (Contents of the Decrement part of Register number)
  • Based on lambda calculus
  • Lisp Machines
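
The car/cdr pair above is the cons cell, Lisp's foundational building block. A minimal sketch in Python (the `Cons` class and `lisp_list` helper are illustrative names, not Lisp's API) of how nil-terminated lists chain out of pairs:

```python
# Sketch of Lisp cons cells: each cons pairs a "car" and a "cdr";
# lists are chains of conses ending in nil (here, None).

class Cons:
    def __init__(self, car, cdr):
        self.car = car  # Contents of the Address part of Register
        self.cdr = cdr  # Contents of the Decrement part of Register

def lisp_list(*items):
    """Build (a b c ...) as nested cons cells, nil-terminated."""
    result = None
    for item in reversed(items):
        result = Cons(item, result)
    return result

xs = lisp_list(1, 2, 3)   # equivalent to (1 2 3)
print(xs.car)             # car  -> 1
print(xs.cdr.car)         # cadr -> 2
```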

  • Artificial Intelligence
    • Planning
    • Expert Systems
    • SHRDLU (Natural Language Processing)
  • Emacs
  • ITA - Backend airline software
  • Maxima: Computer Algebra System

  • S-Expressions (car cdr)
  • “Easy” to implement an interpreter/compiler
    • Many dialects
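
The "easy to implement" claim can be illustrated with a toy reader: a few lines turn S-expression text into a nested list, which is exactly the tree an interpreter would walk. The function names and the tiny grammar here are assumptions for the sketch:

```python
# Toy S-expression reader: tokenize splits on parentheses,
# parse builds the nested-list tree recursively.

def tokenize(src):
    return src.replace("(", " ( ").replace(")", " ) ").split()

def parse(tokens):
    tok = tokens.pop(0)
    if tok == "(":
        expr = []
        while tokens[0] != ")":
            expr.append(parse(tokens))
        tokens.pop(0)  # discard the closing ")"
        return expr
    try:
        return int(tok)       # a number...
    except ValueError:
        return tok            # ...or a symbol

print(parse(tokenize("(+ 1 (* 2 3))")))  # ['+', 1, ['*', 2, 3]]
```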


  • Extremely efficient hashtables
  • Easy to parse code & automatically generate code
    • Self-modifiable code
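
Because code and data share the same list representation, a program can rewrite another program with ordinary list operations. A hedged Python sketch, using nested lists to stand in for S-expressions and an assumed constant-folding rewrite as the example transformation:

```python
# Sketch of "code as data": the program is just a nested list,
# so a rewrite pass is plain list manipulation.

def fold_add(expr):
    """Rewrite (+ a b ...) into its value when all operands are literals."""
    if not isinstance(expr, list):
        return expr
    expr = [fold_add(e) for e in expr]
    if expr[0] == "+" and all(isinstance(e, int) for e in expr[1:]):
        return sum(expr[1:])
    return expr

code = ["*", ["+", 1, 2], "x"]      # the list form of (* (+ 1 2) x)
print(fold_add(code))               # ['*', 3, 'x']
```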


  • All those parentheses
  • No direct memory access, poor hardware access


  1. I just listened to your talk about Lisp and thought some of the information was a bit off.
    Atoms and conses/pairs are important concepts, but describing Lisp usage as the creation of pairs seemed tortuous. Really, Lisp users think in terms of lists, and that's how it got its name. Nested lists are trees, and there's a heap you can do with those. Modern Lisps tend to have O(1)-access arrays and other data types; it's not all pairs.

    I don't think Lisp hash tables are a stand-out feature at all. They are handy, but many languages have hash tables of comparable performance.

    One of the major strengths missing from the talk was the ability to create new syntax and domain-specific languages. Most languages don't let you do that, and of those that do, none make it so natural. It's said that to best tackle a problem in Lisp, you extend the language into one specialised for tackling that class of problems, and then you tackle the problem with that. This makes sense when the problem isn't well understood at the outset or the spec keeps changing.
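
    The macro idea behind those DSLs can be sketched outside Lisp too: a macro receives code as data and returns new code before evaluation. Below is a Python sketch of an assumed `unless` form expanding into `if`; the expansion rule mirrors Common Lisp's, but the function and list encoding are illustrative:

```python
# Sketch of macro expansion: (unless test body) -> (if test nil body),
# performed as a rewrite on the list form of the code.

def expand_unless(expr):
    if isinstance(expr, list) and expr and expr[0] == "unless":
        _, test, body = expr
        return ["if", expand_unless(test), "nil", expand_unless(body)]
    if isinstance(expr, list):
        return [expand_unless(e) for e in expr]
    return expr

print(expand_unless(["unless", ["=", "x", 0], ["print", "x"]]))
# ['if', ['=', 'x', 0], 'nil', ['print', 'x']]
```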

    Sure, the parentheses are seen as a weakness, but a decent Lisp editor won't just match up and indent your parentheses; it will give you commands for traversing and editing the tree structure of the source code directly (see paredit mode in Emacs).

    Parentheses are also a strength because they eliminate the need for operator precedence. Not all languages with infix operators follow C's precedence table, and if you write new operators where do they fit in? In these other languages you have to memorise the precedence table and risk subtle bugs, or use parentheses to make your intent clear.

    Prefix notation is both a weakness and a strength. People are generally more comfortable with infix notation, but prefix simplifies parsing by reducing syntax and makes it easy to add more parameters to an operation.
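
    Both points can be sketched together: with explicit parentheses and prefix operators there is no precedence table to memorise, and an operator extends to extra operands without new syntax. A minimal Python sketch (the two-operator table is an assumption for illustration):

```python
# Sketch of precedence-free, variadic prefix evaluation.
from functools import reduce
import operator

PREFIX_OPS = {"+": operator.add, "*": operator.mul}

def eval_prefix(expr):
    if isinstance(expr, (int, float)):
        return expr
    op, *args = expr
    # fold the operator across all operands, however many there are
    return reduce(PREFIX_OPS[op], (eval_prefix(a) for a in args))

# Infix "1 + 2 * 3" needs precedence rules; the prefix tree is explicit:
print(eval_prefix(["+", 1, ["*", 2, 3]]))  # 7
print(eval_prefix(["+", 1, 2, 3, 4]))      # extra operands, no new syntax: 10
```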

    Most modern Lisps also provide direct memory access, but C and its ilk won the war for the IT industry, and so most hardware-interfacing code uses a dialect of C.

    Some modern Lisps are compiled and can deliver near C runtime speeds while being much more dynamic and flexible than C. Newer dynamic languages are usually 1 or 2 orders of magnitude slower than this.

    I didn't mean to write quite this much, but I liked your talk and will keep listening to your podcast.

  2. Hi guys, I was thinking about the hash table thing in Lisp, and perhaps what you were getting at was symbol tables. Lisp uses symbol tables (which can be implemented using hashes) to look up the data associated with a symbol. I expect a lot of programmer effort has gone into making symbol tables as fast as possible.

    Another angle is that, behind the scenes, a symbol is really just a pointer to a symbol structure in memory. Some languages, like Perl, restrict their hash-table keys to strings, but if the strings are long, hashing them takes more time. Common Lisp lets you use symbols as hash keys, and since they are really pointers, hashing can be much more efficient.
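
    The interning idea behind this can be sketched in Python: one object per name, so hashing and comparison work on identity rather than on the characters of the string. The `Symbol` class below is an illustrative assumption, not Lisp's actual implementation:

```python
# Sketch of symbol interning: the same name always yields the same
# object, so the default identity-based hash is constant-time,
# regardless of the name's length.

class Symbol:
    _table = {}  # the "symbol table": name -> unique Symbol

    def __new__(cls, name):
        if name not in cls._table:
            sym = object.__new__(cls)
            sym.name = name
            cls._table[name] = sym
        return cls._table[name]

assert Symbol("foo") is Symbol("foo")   # interned: one object per name
d = {Symbol("foo"): 42}
print(d[Symbol("foo")])                 # 42
```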

