Building a Chat Bot for Fun and Profit
May 18, 2016

This blog post is based on a lunch and learn talk I gave on May 13, 2016.

On April 16th, 2016, Telegram unveiled their “$1,000,000 to Bot Developers. For free.” challenge. Developers were incentivized by the chance to win $25,000 USD for building novel, interesting bots on Telegram’s platform. Telegram is not the only major chat platform to embrace bots: Slack, WhatsApp, Facebook, Kik, and Skype all have well-developed bot APIs, with bots ranging from image and video search and games to weather, sports, and translation.

There’s no shortage of bot ideas out there and many developers are building them. This got me thinking. How hard would it be to build a bot anyway? What separates a bot from a command line REPL? This blog post details my journey to find out.

What is a bot?

Let’s start with a definition:

A computer program designed to simulate conversation with human users, especially over the Internet.

- Google

Okay. A bot is supposed to simulate conversation, so what makes a conversation in bot-land? Conversations are:

  • Message based
  • Realtime
  • Intelligent (contextual)

Therefore we must give our bot all of these qualities. It must have a brain, or at the very least be able to make intelligent assertions about incoming data.

How to make an intelligent bot?

State machines! State machines allow you to give the bot context. Incoming messages can set the bot in a state that lets it know what to expect of the next message.

What are state machines?

Another definition: a state machine (otherwise known as a finite-state machine) consists of:

  • An initial state or record of something stored someplace
  • A set of possible input events
  • A set of new states that may result from the input
  • A set of possible actions or output events that result from a new state

Simple finite state-machine

Above is an example of a simple finite-state machine. At any time it is in exactly one of a finite set of known states, and each state defines transitions to other states.
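The four ingredients above can be sketched as a plain transition table. This is a minimal illustration, not the bot we build later; the state and input names here are made up:

```javascript
// A finite-state machine as a transition table (illustrative names).
const TRANSITIONS = {
    NONE: {'/newaccount': 'AWAITING_BALANCE'},
    AWAITING_BALANCE: {dollarAmount: 'AWAITING_TYPE'},
    AWAITING_TYPE: {accountType: 'NONE'}
};

// Given the current state and an input event, return the new state.
// Unknown inputs leave the machine where it is.
function step(state, input) {
    const outgoing = TRANSITIONS[state] || {};
    return outgoing[input] || state;
}
```

Everything else in this post is an elaboration of this lookup: incoming messages are inputs, and the bot's reply depends on which state the input lands it in.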

Operations: Commands vs. Actions

For our bot, there are two types of operations. Here’s the lexicon we’ll be using:

  • Commands are global top-level chat operations. They begin with a “/” and can be run at any time in the conversation lifecycle.
  • Actions are contextual responses. They are handled by a specific function for each state.

The flowchart below shows the message parsing logic for commands and actions:

Commands vs. Actions in flowchart form

Now that we have the basics, let’s build a bot!

We’re going to be building Expense_Bot, a single-entry accounting bot. This bot will allow you to create accounts and log transactions such as income and expenses to those accounts through a chat-based interface. We’re going to be using Node.js and some database (pick your poison: MongoDB, RethinkDB, /(My|Postgre)?SQL(ite)?/) for this exercise.


Let’s define a set of commands for our chat bot:

/start - Returns this list of commands
/newaccount - Create a new account
/accounts - List accounts
/transaction - Log transaction (expense or income)
/history - List previous transaction history
/charts - View a chart image summary of your expenses and income
/spreadsheet - Download full data
/delete - Delete a transaction (expense or income)
/deleteaccount - Delete an account
/deleteall - Delete all info Expense_Bot knows about you
/cancel - Cancel current operation

For our example we’re going to go through the /newaccount workflow and build a conversation.


States are constant string literals, defined as follows:

const STATES = {
    NONE: 'NONE',
    NEW_ACCOUNT_INITIAL_BALANCE: 'NEW_ACCOUNT_INITIAL_BALANCE',
    NEW_ACCOUNT_TYPE: 'NEW_ACCOUNT_TYPE',
    NEW_ACCOUNT_NAME: 'NEW_ACCOUNT_NAME'
    // ...
};

Data structures

I’m going to define a set of data structures to hold our commands and actions.

const commands = {
    help: function* (msg) {
        // output help message
    },
    newaccount: function* (msg) {
        // start new account
    }
    // ...
};

const actions = {
    [STATES.NONE]: function* (msg) {
        // NONE state launches commands ^
    },
    [STATES.NEW_ACCOUNT_INITIAL_BALANCE]: function* (msg) {
        // Validates response is a number
        // Transition to NEW_ACCOUNT_TYPE
    },
    [STATES.NEW_ACCOUNT_TYPE]: function* (msg) {
        // Validates response is account type
        // Transition to NEW_ACCOUNT_NAME
    }
    // ...
};

Entry Point

All messages are triaged by an entry point. I’m using ES6 generators to leverage the yield keyword and create co-routines. These are asynchronous function calls that appear synchronous, with the value returned inline, eliminating the need for a callback. The bluebird promise library gives us Promise.coroutine(), a wrapper around generators that allows us to use yield to return the value of a promise. Errors that would normally be given to a .catch are raised, allowing us to use the traditional JavaScript error catching mechanism try { } catch (e) { }. Cool!

const Promise = require('bluebird');

const main = Promise.coroutine(function* (msg) {
  var state;

  if (msg.text.startsWith('/')) {
    state = yield createNoneState(msg);
  } else {
    var collection = yield State
      .orderBy({index: r.desc('createdAt')});

    state = !collection[0] ? yield createNoneState(msg) : collection[0];
  }

  runAction(state, msg);
});

This parses the contents of the message. If it begins with a /, it creates a new State record in the database with a STATE of NONE. The idea is if the user is issuing a global command, its current context becomes invalid. The global command will then set them in a new state. If their message does not begin with a /, it must be a contextual response, so we retrieve their current state and direct them to the runAction function. This function looks like:

function runAction(state, msg) {
  const action = actions[state.state];

  if (!action) {
    bot.sendMessage('That action is not understood. Run /start to get the list of actions.');
    return;
  }

  Promise.coroutine(action)(state, msg);
}

This references the actions data structure above. In each handler we can do validation on the incoming response. For example, when the user is in the NEW_ACCOUNT_INITIAL_BALANCE state, the following handler will be used:

const actions = {
    [STATES.NEW_ACCOUNT_INITIAL_BALANCE]: function* (state, msg) {
      const validNumber = DOLLAR_REGEX.exec(msg.text);

      if (validNumber && validNumber[1]) {
        // Transition them to the account type state
        yield new State({
          state: STATES.NEW_ACCOUNT_TYPE,
          meta: {
            balance: validNumber[1]
          }
        });

        bot.sendMessage(`What type of account is it?`);
      } else {
        bot.sendMessage('I cannot parse that number. Please enter an initial balance of the format $1234.56.');
      }
    }
    // ...
};

Conversation Example

Consider the following conversation with an accounting bot:

<robot> Hello, welcome to Expense_Bot!
<human> /newaccount
<robot> What is the initial balance for your new account?
<human> $750.00
<robot> What type of account is it?
<human> Savings
<robot> And finally what is the name of this account?
<human> Royal Bank
<robot> Great! New account "Royal Bank" created.

In the above example:

  • /newaccount is a command. At any time it can be run and interrupt the flow of the conversation because it is global.
  • $750.00 is an action. It only makes sense if the current state is expecting it. If you asked me “What is the initial balance for your new account?” and I responded $750.00, that conversation would make total sense. However if I walked up to you and said “$750.00” out of nowhere, you would have no idea what I’m talking about. This is the definition of context.

When parsing the second message from the human, the bot knew to expect a dollar figure. Why? When the human asked to create a new account (/newaccount), it put the bot into the NEW_ACCOUNT_INITIAL_BALANCE state. In our bot we can define a specialized handler for the NEW_ACCOUNT_INITIAL_BALANCE state that will validate the next response (is it a dollar amount?) and save any persistent data ({balance: 750.00}). After the user submits a valid dollar figure, we ask what type of account this is and transition them to the NEW_ACCOUNT_TYPE state. After the process is done we save the new account record and transition the human back to the NONE state.

This transition of states and building of data is illustrated in the table below.

Current State               | Next State                  | Incoming Message | Data (after message)
NONE                        | NEW_ACCOUNT_INITIAL_BALANCE | /newaccount      | (empty)
NEW_ACCOUNT_INITIAL_BALANCE | NEW_ACCOUNT_TYPE            | $750.00          | balance: 750.00
NEW_ACCOUNT_TYPE            | NEW_ACCOUNT_NAME            | Savings          | balance: 750.00, type: 'Savings'
NEW_ACCOUNT_NAME            | NONE                        | Royal Bank       | balance: 750.00, type: 'Savings', name: 'Royal Bank'

After the final step, the data is committed to the database as a new Account model. There you have it. The finite-state machine model of computing fits well with a conversation chat bot. This approach works well with transactional or wizard style bots that walk users through a number of steps. Now go write your own bot!

bots telegram nodejs

Optimizing Move Generation from 200K to 2.5M moves/s
January 06, 2016

Originally posted at

CeruleanJS has a pseudo-legal move generation algorithm. It generates all possible moves for a position (even ones that put the king in check or castle the king through check) and the full legality is tested during the addMove() function. This is because the move needs to be added before check detection can work. Fast move generation is key to a strong chess engine: the more moves you can generate and evaluate per second, the stronger it will be. This post is about my experiences optimizing CeruleanJS’s move generation.

At the start of this effort, CeruleanJS was weighing in at a measly 200,000 moves/s on my MacBook Pro. Cerulean (the original C implementation) managed 20,000,000 moves/s in a single thread. CeruleanJS hopes to achieve this level of performance; the question is: can it?

Faster Piece List

A piece list is a cache of which board indices are occupied by which side. CeruleanJS has two piece lists, one each for white and black. Think of it as a way to optimize looping over all 64 squares. Instead we only need to loop over the pieces we’re generating moves for (maximum 32).

The first iteration of CeruleanJS contained a dead simple piece list implementation:

class PieceList {
    constructor() {
        this.indices = [];
    }

    push(index) {
        this.indices.push(index);
    }

    remove(index) {
        let reverseIndex = this.indices.indexOf(index);
        this.indices.splice(reverseIndex, 1);
    }
}
There are a couple of things wrong with this implementation. First, the indices array is set to an initial length of 0, so each push may have to allocate more memory to store the new item. Second, removing an index is expensive: it requires a linear scan of the indices array to find the specified index, which is O(n). Third, indices is spliced to remove the found index. This reduces the size of the array, but forces all values after reverseIndex to be shifted by one. I can’t speculate on the internals of the JS Array data structure, but this may cause a rewrite of up to 16 squares (again, O(n)). What data structure would allow quick insertion and removal?

What if we implement a scheme such that when an index is removed, it is replaced by the current last element in the list? A piece list only needs to contain at most 16 pieces, so we can do the allocation for all 16 elements up front. Also, what if we maintain a reverse board array that maps board index to position in the piece list? This would remove the linear scan needed to find the element to remove.

class PieceList {
    constructor() {
        this.indices = new Array(16);
        this.reverse = new Array(constants.WIDTH * constants.HEIGHT);
        this.length = 0;
    }

    push(index) {
        this.reverse[index] = this.length;
        this.indices[this.length] = index;
        this.length++;
    }

    remove(index) {
        var reverseIndex = this.reverse[index];
        this.length--;
        this.indices[reverseIndex] = this.indices[this.length];
        this.reverse[this.indices[reverseIndex]] = reverseIndex;
        this.indices[this.length] = undefined;
        this.reverse[index] = undefined;
    }
}

This allows for an O(1) piece list implementation for both adding and removing items.

Remove ‘let’ keyword in tight loops

The let keyword is a relatively new addition to ES6, which allows variables to be defined at the block level (for, if, while, do, switch, or { }). This is great for encapsulating variables in a for loop or if statement.

However when let is used two levels deep in nested for loops, all that variable allocation and deallocation can be expensive and can generate a lot of garbage.

The solution is to use a single variable declared outside of any loop and to modify this value as necessary. This trades off safety for performance, but resulted in a large performance increase for Cerulean :).
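As a sketch of the pattern (illustrative names, not CeruleanJS’s actual generator code), the hoisted variant looks like this:

```javascript
// `square` is declared once outside the nested loops instead of a fresh
// `let` binding being created on every iteration.
function sumTargetSquares(pieces, deltas) {
    var square;    // single reusable variable
    var total = 0;

    for (var i = 0; i < pieces.length; i++) {
        for (var j = 0; j < deltas.length; j++) {
            square = pieces[i] + deltas[j]; // reuse, don't re-declare
            total += square;
        }
    }

    return total;
}
```

The trade-off is real: `square` leaks into the enclosing function scope, so this is worth doing only in the hottest loops.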

Switch expensive lookups from Objects to Arrays

A significant performance increase was noticed when two lookup objects were converted to arrays. These lookup objects were used in tight loops (generateMoves(), addMove(), subtractMove()) where the additional overhead of coercing a numeric key to a string was cost-prohibitive.
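A sketch of the swap (the actual CeruleanJS tables differ): object properties are always keyed by strings, so a numeric key is coerced on every access, while an array indexes directly by integer.

```javascript
// Object lookup: the numeric key 2 is coerced to the string "2" internally.
const pieceValuesObject = {1: 100, 2: 300, 3: 325};

// Array lookup: direct integer indexing, no string coercion.
const pieceValuesArray = [];
pieceValuesArray[1] = 100;
pieceValuesArray[2] = 300;
pieceValuesArray[3] = 325;
```

Both return the same values; the array form simply avoids the per-access key conversion in a hot loop.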

Add history less

CeruleanJS uses an internal history array to restore unrecoverable information (i.e. information that cannot be inferred) during the subtractMove() function. Some examples of unrecoverable information are:

  • En passant
  • Castling
  • Half move clock
  • Zobrist keys (these are recoverable but are loaded from the history array for simplicity)

It makes sense for addMove() and subtractMove() to be exact inverses of each other, where addMove() pushes a new array to the history array, and subtractMove() pops this off. This however would generate an array for every move. As it turns out, at each node in the search tree, all subsequent moves share the same history. Therefore we only need to save to the history array once per move generation instead of once for every move in every move generation, saving dozens of array allocations.
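A sketch of the idea (field and method names here are mine, not CeruleanJS’s): the snapshot is pushed once when move generation starts at a node, sibling moves all restore from that same top entry, and it is popped only when the search leaves the node.

```javascript
// Illustrative board with a shared, once-per-node history snapshot.
class Board {
    constructor() {
        this.history = [];
        this.enPassant = undefined;
        this.castling = 0b1111;
        this.halfMoveClock = 0;
    }

    // Called once per generateMoves() at a node
    pushHistory() {
        this.history.push([this.enPassant, this.castling, this.halfMoveClock]);
    }

    // Called by subtractMove(): restore without popping, since sibling
    // moves at this node reuse the same snapshot
    restoreHistory() {
        const top = this.history[this.history.length - 1];
        [this.enPassant, this.castling, this.halfMoveClock] = top;
    }

    // Popped once when the search leaves the node
    popHistory() {
        this.history.pop();
    }
}
```

One allocation per node instead of one per move is where the savings come from.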

Move as a 32-bit integer

This may seem obvious to other chess programmers, but using a single 32-bit integer to represent a move object is significantly faster than using an object-structure like an array. CeruleanJS initially used a simple array [from, to, promotion]. [] in JavaScript is short for new Array(), so in reality you’re doing a lot of object allocation. Integers create less garbage in this respect.

Cerulean’s move data structure is:

000000 000000 000 000 0000000 0000000
^ MSB                           LSB ^

This breaks down into the following distribution:

  • 7 bits for FROM index
  • 7 bits for TO index
  • 3 bits for PROmotion piece (Q/R/B/N)
  • 3 bits for CAPtured piece (any or empty)
  • 6 bits for BITS (metadata)
  • 6 bits for ORDERing

This dense move structure requires less data to be saved on the board’s internal history array.
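Packing and unpacking follow from the bit distribution above using shifts and masks. A sketch (the constant and function names are mine; CeruleanJS’s actual accessors may differ):

```javascript
// Shift offsets derived from the field widths listed above, LSB first:
// 7 bits FROM, 7 bits TO, 3 bits PRO, 3 bits CAP, 6 bits BITS, 6 bits ORDER.
const TO_SHIFT = 7, PRO_SHIFT = 14, CAP_SHIFT = 17,
      BITS_SHIFT = 20, ORDER_SHIFT = 26;

function packMove(from, to, promotion, captured, bits, order) {
    return (from |
        (to << TO_SHIFT) |
        (promotion << PRO_SHIFT) |
        (captured << CAP_SHIFT) |
        (bits << BITS_SHIFT) |
        (order << ORDER_SHIFT)) >>> 0; // keep it an unsigned 32-bit integer
}

function moveFrom(move)      { return move & 0x7F; }                  // 7 bits
function moveTo(move)        { return (move >>> TO_SHIFT) & 0x7F; }   // 7 bits
function movePromotion(move) { return (move >>> PRO_SHIFT) & 0x7; }   // 3 bits
```

The `>>> 0` matters: JavaScript bitwise operators work on signed 32-bit integers, and the unsigned shift keeps a move with high ORDER bits from going negative.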

Have your move generator help you out

Your pseudo-legal move generator has a lot of information about the move you’re generating. Is it a pawn move? Is it a capture? Is it a double push? Save this information in a metadata field in your move and base your addMove()/subtractMove() functions on it.

Originally CeruleanJS only passed in [from, to, promotion] and inferred all other information from the board state. This proved to add a lot of overhead to addMove()/subtractMove() when the information is available earlier, in the pseudo-legal move generator. For a capture, for instance, you’d essentially be double-checking that board[to] is occupied by an opponent’s piece: once in generateMoves() and again in addMove().

CeruleanJS switched to “bits”-based addMove()/subtractMove() functions, where the “bits” flag in a move is used to switch() to a specialized handler for that move type.

Pass moves array by reference

The move generator is a single function, generateMoves(), which loops over all pieces for a side and has a big switch statement on the piece we’re looking at (pawn, knight, bishop, rook, queen, king). This calls a bunch of other functions: pawnMoves(), rookMoves(), etc.

Originally these functions returned a list which was concatenated to the main move list.

Instead of using concat, we can use a single move array that is passed by reference to all the sub-functions (pawnMoves(moves), etc.), which then modify it. Sharing one mutable array across functions that modify it is generally considered bad practice in JavaScript. However, by doing this we eliminate the garbage arrays (and .concat() calls) created during the move generation process.
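A sketch of the shape (simplified; the real generators compute moves from board state rather than pushing fixed strings):

```javascript
// Sub-generators append to a shared array instead of returning fresh
// arrays to be concatenated.
function pawnMoves(moves) {
    moves.push('e2e4'); // illustrative placeholder moves
    moves.push('d2d4');
}

function knightMoves(moves) {
    moves.push('g1f3');
}

function generateMoves() {
    const moves = []; // one array allocated per generation, no .concat()
    pawnMoves(moves);
    knightMoves(moves);
    return moves;
}
```

The mutation never escapes generateMoves(), which keeps the pattern tolerable despite the shared state.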

Use TypedArrays (Uint32Array) for Board and Piece List

This makes the lookup of board positions significantly faster. Roughly a 2x speed increase was noticed when switching to this. This is because fewer checks are required on item access in JavaScript.

An additional increase of 14% was seen switching from Arrays to Uint32Array in the PieceList.

Testing showed this would not be as advantageous for move lists (due to the number of Uint32Array allocations, which can be as large as 218 in one turn!). A possible optimization here is to use one move list per depth and maintain the list of preallocated move lists in the board. More testing is needed on this front.

Future Improvements

A number of other data structures in my application could be switched to TypedArrays, namely Zobrist keys, which are looked up frequently during the addMove()/subtractMove() cycle. However, this did not prove to be as beneficial, since Zobrist keys are currently a multi-dimensional data structure: JavaScript only supports one-dimensional TypedArrays, and the multiplication required to compute the flattened key index took away any possible speed improvements.

As a whole, these improvements gave a 10-15x speed boost for CeruleanJS, which now clocks in at around 2,500,000 moves/s. The performance remains an order of magnitude slower than Cerulean in C (and two orders of magnitude slower than a well-optimized state-of-the-art chess move generator), but it will suffice for a sufficiently strong JavaScript chess engine.

Until next time, happy chessing!

chess chess programming ceruleanjs

History of
June 22, 2015

Version 1 (2007 - 2009)

This site has long been my home on the web. Registered in 2007, prior to my joining the University of Waterloo, it was intended to document my entire university career. It had my courses outlined for all 5 years. I vigorously studied the Undergraduate Calendar and would post all documents related to my courses on it. It was written using PHP 4.3, with a blog component built using CuteNews and a custom-built lifestream component pulling in feeds from my social media. The website was red, grey and white, fluid width on its main layout, but also featured themes, including an nfo style with a monospaced font and an ASCII art Anarchy symbol. This version remained online until sometime in 2009, when I got an unfortunate call from a faculty member of the Physics Department regarding the course materials I had posted. I promptly took it down.

Version 2 (2010 - 2011)

This version was built with Ruby and Sinatra. It features a main page which highlights a few software projects I've worked on. This site was spawned around the time I began working on harmonyofchaos and so featured a number of Guitar Pro songs I had written (I typically use SoundCloud today for this purpose). I additionally built a lifestream and blogging engine admin interface using DataMapper and SQLite.

Version 3 (2012 - Present)

This version errs on the side of simplicity. Version 3 is a statically generated site built using Jekyll. The templating is Bootstrap 2.3.3, using a modified Ubuntu theme from Bootswatch with my own stylings. It features a left hand navigation with a responsive design -- the left hand menu appears above the content on lower-width viewports. The site is dated and the HTML minified on build using Rake and HTML compressor. Overall this workflow is simple and can be deployed to any static host.

Who knows what the future brings? This site will always be a personal playground for webthings. I hope to keep iterating on it, but for now V3 meets my needs as a personal brand.

website history

SSL by default, GnuPG with Keybase
February 07, 2015

SSL by default

This site is now completely SSL by default. Because TLS is fast enough and StartSSL offers free SSL certificates, I've decided to switch this domain, as well as a few others, to SSL. I think all sites should be SSL by default and that security shouldn't be a feature but a necessity. I'm particularly interested in the arrival of Let's Encrypt, a free automated certificate authority from Mozilla, the Electronic Frontier Foundation and others. Such a service should remove all barriers to entry for everyone to make the switch.

GnuPG and Keybase

In other security updates, I've recently started using Keybase to verify my social media accounts and host my GnuPG public key. While there have been some detractors of the service, I think the founders of Keybase have fundamentally done their due diligence, and it feels like a very well designed system. I opted not to host my private key on their servers, so I need only rely on my own security rather than that of a third party. This seems to be a trend in my thinking: I would prefer to rely only upon myself rather than the Cloud for hosting my data.

You can download my public key from here or here. Fingerprint: 8603 8B3C 3BE1 5587 1E31 42C7 5C54 64B0 A9E6 B2EE. Any secure communication to me should be encrypted with this key at your discretion.

website security ssl gnupg

Launched!
December 10, 2014

Hey everyone, I have launched a new website! It allows you to quickly convert MIDIs to MP3s using a selectable high-quality soundfont. To read more about it, visit my project page or visit the site directly! I hope you find it as useful as I do. I do a lot of my songwriting in Guitar Pro, so a site like this is very useful for generating an MP3 to put on my phone for mobile listening. Competitors do exist and I have used them extensively, but I'd like to add one more to the ecosystem!

Questions or comments are appreciated! Please add them to the site's discussion section. Thanks and happy rendering!

website midi mp3