All Things Techie With Huge, Unstructured, Intuitive Leaps

The Birth of a Car-Selling Monkey -- The Evolutionary Cycles of My Chatbot

Honest John is evolving, and it's not taking millions of years. Honest John is my chatbot that will sell cars either online or at a dealership kiosk. This side project of mine started when friends of mine wanted my help in buying a new car, after they had a bad experience with a high-pressure car salesman who was a stranger to the truth. My friend said that she would rather negotiate with a computer, and that is how Honest John was born.

I fired up my Software Development Kit, opened a framework, and it wasn't hard to get some running code quickly. Unfortunately, the earliest version of Honest John was quite stupid. He was merely a parrot. And if you stumped him with a question that he didn't understand, he would give an innocuous reply and ask a random question. Obviously we had a long way to go.

The conversation was quite two-dimensional. I was using AIML, the Artificial Intelligence Markup Language. The way it works is that it recognizes a predicate in the input text, searches through its library for that predicate, and spits out a response. The first task on my part was to add some humanity and politesse to it. You can't expect to sell something to a human unless you act like a human yourself. So I made extensive edits to the AIML to make it more human.

Personalizing the conversation was necessary. To do that, I had to write a user object that remembered things about who the chatbot was talking to. Honest John had to remember if he was talking to a woman or a man, and the person's name. It was functional now, but it was like stick figures talking to each other.
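
A minimal sketch of such a user object might look like this; the field and method names here are illustrative, not the real bean:

```java
// A session-scoped bean that remembers who Honest John is talking to,
// so the bot can personalize replies without re-asking. Illustrative only.
class ChatUser {
    private String name;
    private String gender;

    void setName(String name)     { this.name = name; }
    void setGender(String gender) { this.gender = gender; }
    String getGender()            { return gender; }

    // personalize a greeting once the name is known
    String greet() {
        return (name == null) ? "Hello there!" : "Hello, " + name + "!";
    }
}
```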

Before I went further into making a more human chatbot, it needed some smarts. Most chatbots out there are incapable of logic and error correction. If Honest John were to negotiate, he would need to evaluate arithmetic expressions so that he could talk money and price. He needed to be date/time aware. He needed to have logic to recognize if a bid was lower than the previous one, and he needed to react appropriately. Even though there are wonderful recursive elements in AIML, this sort of stuff was way too complex for AIML to handle.

So the answer was to intercept the inputs and AIML outputs, and send them to a parser that would determine if the conversation needed remediation by an Arithmetic Logic Unit or a plain old Logic Unit in the code. Luckily this was easy to do, because my framework is a J2EE (Java Enterprise Edition) framework that is capable of complex actions like creating objects, stuffing them with data, and holding them in memory for easy access. Because of Java's time-aware classes and multi-threading, I could take the conversation, dissect it, and send it to the appropriate parsers, each of which kicks off a new thread to do some work on its element; the main thread waits for the responses and finally spits out an intelligent reply to the user. The other element that most chatbots lack: they can record the conversation history, but they cannot traverse it, regress to a certain point in the past, and understand past statements. I had to create a live chat record in memory, along with meta-data and logic, to correct those faults. In the middle of a negotiation, if things got off the rails, the chatbot could go back to the last point of agreement and start again from there.
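
To make the intercept-and-dispatch idea concrete, here is a minimal Java sketch. The {CALC:...} marker, the class name, and the toy evaluator are all invented for illustration; the real parser is considerably more involved:

```java
import java.util.concurrent.CompletableFuture;

// Sketch: scan the AIML output for an arithmetic marker; if found, hand the
// expression to a worker thread (the "software ALU") and wait for the result
// before the reply goes back to the user.
class ResponseDispatcher {

    static String remediate(String aimlOutput) {
        int marker = aimlOutput.indexOf("{CALC:");
        if (marker < 0) {
            return aimlOutput;                      // plain chat, no math needed
        }
        int start = marker + 6;
        int end = aimlOutput.indexOf('}', start);
        String expr = aimlOutput.substring(start, end);

        // kick the ciphering to another thread and wait for the answer
        CompletableFuture<Double> result =
                CompletableFuture.supplyAsync(() -> evaluate(expr));

        return aimlOutput.substring(0, marker)
                + result.join()
                + aimlOutput.substring(end + 1);
    }

    // toy evaluator: handles a single "a-b" subtraction for illustration
    static double evaluate(String expr) {
        String[] parts = expr.split("-");
        return Double.parseDouble(parts[0]) - Double.parseDouble(parts[1]);
    }
}
```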

Now we were getting somewhere. We had the beginnings of a bot that could negotiate. However we still had the problem of sophistication -- it was just two stick figures talking to each other. Humans need emotions and empathy, and bots need to live in that domain too -- however artificial it may be.

The Holy Grail of mixing digital smarts with the human milieu was first appreciated, understood and defined by Alan Turing, when he devised the Turing Test, in which a human could not detect that they were talking to a computer. That requires an EQ and an IQ (an Emotional Quotient as well as an Intelligence Quotient). Honest John doesn't even pretend to be able to pass a Turing Test. But the conversation has to become more three-dimensional in human terms.

Understanding emotion in the user and reacting to it is the beginning of artificial personality. This is important to Honest John in the selling process.

Suppose that in the middle of negotiating, Honest John detected that the user was getting angry, bored or any other negative emotion that would be the thin edge of the wedge in precluding a sale -- after all, his whole job is to sell cars. The chatbot would need to have the detection circuits to understand that. More importantly, Honest John would need to take remedial action, and either soften or harden his tone. Moreover, he would need to alter the negotiation strategy, either becoming more or less hard-nosed depending on the specifics.

For that reason, Honest John needs to have several strategy processes defined, and they all relate to pre-defined personality aspects. Honest John needs to adjust the tone of the negotiations. To do that, not only does he require the right words, but also the right actions. If the negotiated price is in the ballpark of a sale, and he detects that the user may walk, Honest John needs to sell the car. If he is not in the ballpark of a selling price, he needs to adjust his negotiating increments depending on the temperament displayed by the human on the other side of the screen. He must be capable of being "fuzzy".
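
To illustrate the "fuzzy" increment idea, here is a toy sketch; the 5% ballpark test and the step sizes are invented numbers, not real dealer policy:

```java
// Sketch: the counter-offer step changes size depending on detected emotion,
// and when the price is already in the ballpark of a sale, close the deal.
class IncrementTuner {

    static double nextConcession(double askingPrice, double lastOffer,
                                 boolean negativeEmotion) {
        double gap = askingPrice - lastOffer;
        boolean inBallpark = gap < 0.05 * askingPrice;  // within 5% of asking
        if (inBallpark && negativeEmotion) {
            return gap;                 // the user may walk: concede the rest
        }
        return negativeEmotion ? 0.02 * askingPrice     // soften: bigger step
                               : 0.01 * askingPrice;    // hold firm: small step
    }
}
```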

So, all in all, Honest John needs to have a range of sophisticated behaviors before I let him out in the wild, and I am working on it. He does have AI networks built into the stream of things. He has such Natural Language Processing (NLP) tricks as Bag-of-Words and other algorithms to help him decipher things. I think that the tools are all in place in Honest John's innards. All that I need to do is integrate them, expand them and polish them. Who knows, you may meet Honest John one day in a showroom or online, and you will remember his evolutionary history.

IoT, Formula 1 Racing, Mercedes Team ... and Me !!

I am honored to have been chosen as a winner of #tatacommsf1prize and one of three grand prize finalists in the Formula 1 Connectivity Innovation (Connected Operations) Challenge, held by the tech supplier to the F1 Mercedes AMG Petronas racing team.

My proposed solution brings an innovative new slant to the IoT of Formula One racing, and I look forward to meeting Lewis Hamilton in Abu Dhabi at the final F1 race of the year, where I will present my solution. Thanks to Tata Communications, the infrastructure supplier to the Mercedes team, for this incredible opportunity and the trip to Abu Dhabi.

I look forward to having my connectivity solution possibly make an impact on the future of F1 Racing.  This is a major thrill.

Never Mind Artificial Intelligence, How About Artificial Personality?

In my quest to make the ultimate Artificial Intelligence chatbot that sells cars, I have been pontificating on the various attributes that the chatbot should have. It should have an EQ (Emotional Quotient) as well as an IQ (Intelligence Quotient). It should be instantly good at math. It should be good at logic and at detecting attempts to misdirect and confuse it. It should know when to be aggressive and when to back off, as part of its emotional awareness. It should be able to remember conversations, and return to any point in the conversation after a non sequitur, especially in the middle of negotiations. I have already described at a high level how I would implement the technology for this in previous articles.

As I was discussing this with a friend, it was pointed out that I needed to create a de facto artificial personality. And it was pointed out to me, that perhaps there should be a feminine one as well as a masculine one. I named my chatbot Honest John and made him a male, simply because I am a male, and I tried to transpose what I would say if I were a chatbot.

I keep up to date with Artificial Intelligence and I am a practitioner of it. There are researchers out there seeking the Holy Grail of artificial consciousness in silicon. They are trying to make "thinking machines" with consciousness. Artificial consciousness in a thinking machine is a noble aim, but I think that it is putting Descartes before the horse. One has to have a personality that directs the aspects of thinking and personality-expression, much like a wedding cake and a wedding ring convert your partner's personality to a morose, complaining entity with a negative worldview.

Creating gender in a chatbot is easy. It is already incorporated in AIML (Artificial Intelligence Markup Language). It substitutes "she" for "he" and hobbies like "sewing" instead of "drinking beer". But that is not enough. The gender responses also have to match the personality. For example, the non-sympathetic, hard-nosed, take-no-prisoners negotiating chatbot could be either a man or a woman, and truth be told, some men prefer a woman with those traits. So there has to be a way of imbuing personality into the chatbot.
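
For illustration, persona-specific AIML libraries might look like the placeholder categories below; these are invented examples, not my actual files:

```xml
<!-- Illustrative AIML only: which file gets loaded at startup decides
     whether the bot answers as John or as Jane. -->

<!-- honest_john.aiml -->
<category>
  <pattern>WHAT ARE YOUR HOBBIES</pattern>
  <template>I enjoy watching the game and drinking a cold beer.</template>
</category>

<!-- honest_jane.aiml -->
<category>
  <pattern>WHAT ARE YOUR HOBBIES</pattern>
  <template>I enjoy sewing and gardening on the weekends.</template>
</category>
```

Of course, as noted above, the hobby stereotypes can and should be decoupled from gender; the point is only that each persona is a swappable library of categories.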

Luckily, that is not technically difficult to do. Once personality traits are defined, they are stored in AIML, and the appropriate AIML libraries are loaded when the chatbot fires up. The work for this is all semantic, and expressed in natural language within the AIML. This is where a liberal arts degree becomes useful again -- at the intersection of technology and human interaction.

So my chatbot Honest John will have the capability of transforming into Honest Jane. Or Honest John will have the ability to stop being the cigar-chomping salesman and become the meditating yogi who recommends an electric car at a fair price to the good-hearted people who want to save the planet's environment. This has been a fun journey so far.

AI Chatbot Tactics ~ Making A Customer's First Objection His Last

For a very brief period during my university daze, I used to sell cars. This was in the era of high-pressure car salesmanship, where you ground down the customer until he or she signed on the bottom line.

On the first day of work, I was taken into the boardroom with a bunch of my fellow misfit newbies at Shyster O'Toole Motors and sat down in front of a VCR. The sales manager hit the on button and went out to sexually harass the receptionist. The video tape had been played so often that there were hisses, snaps and odd interference lines running through the picture on the TV set. The reason why the video tape was so worn was that Shyster O'Toole Motors was a burn-and-churn outfit. They would hire anyone who walked through the door. They knew that each newbie could at least sell a couple of cars to his acquaintances, friends or relatives in his first month of salesmanship. If he didn't repeat the sales by the second and third month, then he was burned and churned, and a new, rosy-cheeked, naive batch took his place.

The scratchy video tape was narrated by a jowly character stuffed into a too-tight suit who spoke with a deep southern hillbilly accent that befitted a shyster televangelist. His name was Catterson, and he was gonna teach us to force customers to buy cars from us, come hell or high water.

There were many high-pressure tactics, but the one that comes to mind now is making a customer's first objection his last one. The reason that I could dredge it out of my memory is that I am making an AI chatbot called Honest John -- a car-selling bot that is actually honest, and not high pressure. But I am developing a strategy framework, and one thing that any salesman, saleswoman, or salesbot has to do is ask for the sale. If you don't ask for the sale, you are not selling. The consent to buy has to be present. During the course of negotiation, the customer may come up with an objection mid-stream that halts the consent to buy. Honest John, my chatbot, needs a strategy to overcome the objection, and that is why I thought of the sales training video that I had seen many years ago.

Essentially, the tactic of making a customer's first objection his last, goes somewhat according to this script:

Hy Pressher, Car Salesman: "Hello Mr. Lilywhite, I see that you are looking at the new TurboHydraMatic Coupe. She's a beaut ... ain't she?"

Joshua P. Lilywhite, Customer: "It certainly is a nice car."

Hy Pressher, Car Salesman: "I'll let you take it for a spin to see how nice she drives."

Joshua P. Lilywhite, Customer: "Ah no, I'd rather not. I am just looking."

Hy Pressher, Car Salesman: "What-sa matter. Don't you think that all your friends and neighbors would be jealous of you when you pulled up in this gorgeous set of wheels?"

Joshua P. Lilywhite, Customer: "No, I like it and they would be impressed ... but ..."

Joshua P. Lilywhite, Customer: "I really can't afford to buy this car."
Hy Pressher, Car Salesman: "Are you telling me, Mr. Lilywhite, that the only reason that you can't buy this car from me today, is that you don't have the money?"

Joshua P. Lilywhite, Customer: "Yes." (hesitantly) "I guess so!"

Hy Pressher, Car Salesman: "Well Mr. Lilywhite, today is your lucky day. I can find you the money. Step this way."

Hy Pressher will immediately wire this guy into a sub-prime car loan at credit card interest rates. When Lilywhite starts to object, Pressher reminds him of his agreement to buy the car and seriously insinuates that Lilywhite would be a welcher and not a man of his word.

Now back to the chatbot. If Honest John runs into a brick wall and the customer starts objecting to buying the car, Honest John will use the words "is that the only reason ..." but he won't use them against him or her. Honest John is ethical. If a customer says yes, there is just one sole reason why he/she won't buy the car, then Honest John will ask the same follow-up that Hy Pressher uses, i.e. "if I could solve this objection, would you buy the car?" However, Honest John would add "... provided that you are happy with the solution that I propose".
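
In illustrative AIML, the ethical follow-up might be a category along these lines; the pattern and the wording are hypothetical, not the actual library:

```xml
<!-- Illustrative AIML only: the ethical "first objection is the last"
     follow-up, with buy-in on the proposed solution built in. -->
<category>
  <pattern>I CAN NOT AFFORD THIS CAR</pattern>
  <template>
    Is that the only reason holding you back? If I could solve this
    objection, would you buy the car -- provided that you are happy
    with the solution that I propose?
  </template>
</category>
```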

The difference between Hy Pressher and Honest John is that although they are using the same tactic of making a customer's first objection his last, Honest John does it ethically and gets buy-in on the subsequent solution. Honest John is an AI bot -- he learns as he goes to make a sale and make everyone happy. He keeps on getting better and changing for the better. Salesmen like Hy Pressher (and Willie Loman) don't want change; they want Swiss cheese on their meager after-work sandwiches.

The Third R in AI Chatbots - Rithmatic

Chatbots are pretty good at readin' and 'ritin'. But they are not good at the third "R" -- 'rithmatic. Artificial Intelligence Markup Language (AIML), the basis of a lot of chatbots, is good but not good enough for advanced chats. The language itself, based on XML, can have the facility for "smart substitutions". An example of a smart substitution in the markup pseudo-code goes like this:

<category>
  <pattern>MY NAME IS *</pattern>
  <template>Hello <set name="name"><star/></set></template>
</category>

and if the user types "My name is Ken", the chatbot would say "Hello Ken". But for a really smart chatbot, that is way too simplistic for anything but conversation.

If you have been following my articles, you know that I am coding a chatbot called Honest John that will sell new cars on behalf of a dealer. Not only will it chat, but it will negotiate. For applications like this, smart substitution is not enough. It has to be able to do math (or maths as my British friends say -- but what do they know, they just invented the language).

A smart bot must be able to substitute for x in the following ways:

"You want the car delivered on Tuesday? That is only <x; x<4;> day(s) away and I need a lead time of 4 days to deliver."

"You offered me $34,500 for the vehicle. The offer price exceeds the maximum discount of $<x; x=(price-.06(price))> that I am allowed to offer you on that particular car."

Smart substitution cannot do math. Back in the day when I designed microprocessor hardware, we used a silicon chip called an ALU (an Arithmetic Logic Unit) when we had an application that required a lot of math processing. The microprocessor would pass the ciphering to the ALU if floating point operations were required. A smart chatbot needs the equivalent of a software ALU.

An even smarter chatbot will have an AIML processor that will recognize tags with arithmetic expressions and hand them off to its own Arithmetic Logic Unit for processing. It will have a smart parser. This functionality is a required component for negotiation using numbers and money. The concept of a tag that invokes arithmetic will put some real brain muscle into Honest John.
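
As a sketch of what such a software ALU might look like in Java: the method names and the 6% discount figure simply mirror the examples above and are my assumptions, not the production code:

```java
import java.time.LocalDate;
import java.time.temporal.ChronoUnit;

// Plain Java methods the tag parser could hand arithmetic off to,
// covering the two example substitutions above. Illustrative only.
class SalesAlu {

    // days between today and a requested delivery date
    static long daysAway(LocalDate today, LocalDate requested) {
        return ChronoUnit.DAYS.between(today, requested);
    }

    // the maximum discounted price: price - 6% of price
    static double maxDiscountPrice(double price) {
        return price - 0.06 * price;
    }

    // is an offer below the floor the dealer will accept?
    static boolean offerTooLow(double offer, double price) {
        return offer < maxDiscountPrice(price);
    }
}
```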

The nice thing about introducing a calculating tag parser is that once you do the framework for arithmetic expression tags (using custom tag classes), you can create tags that do other things, like logic expressions, matching, sorting and any other function that lends itself to being expressed in symbolic language in code. You could even create a tag that invokes an AI engine automagically.

Honest John's intelligence arsenal is really shaping up. He will be a force majeure among smart chatbots. After all, too many chatbots abuse the privilege of being stupid.

AI Chatbots - Liar, Liar, Pants On Fire

Take my neighbor, Abner Snodgrass. He is a meek and mild bookkeeper. He stands in a lineup of liberated men because his wife tells him to. When someone kicks sand in his face at the beach, he mumbles "Sorry". He is more prey than predator in the food chain of life. And yet when he goes to negotiate to buy a new car, an incredible transformation takes place. In Walter Mitty fashion, he becomes a legend in his own mind at negotiation. His arsenal of negotiating tools includes telling the most egregious lies with a straight face. He will tell the salesman that he saw an ad for a car exactly like his trade-in on AutoTrader, except that car had more miles on it, and it was selling for $3,000 more than what the salesman is offering. And when he drives up in a new car, he will tell anyone who will listen that he is such a good negotiator that he made a hardened car salesman cry, even though he knows in his heart of hearts that he was taken to the cleaners.

I don't really have a neighbor named Abner Snodgrass, but I was thinking about this imaginary scenario when I was making a strategy framework for my Artificial Intelligence chatbot that will be able to negotiate and sell cars. Selling or salesmanship is a serious business when you trust the process to a machine acting on your behalf. And when it comes to selling cars, the value of the transaction makes the act an important one to the bottom line of the business. When the stakes are high for both parties, there is a propensity for either the buyer or the seller to try to gain an advantage. Negotiating a deal is the last venue of brutal warfare for a civilized man, and that survival instinct of warfare can be expressed in a negotiation where money is involved. One of the tools of warfare is deception, and my AI bot has to be prepared for it.

My bot's name is Honest John. I intend to make Honest John an ethical chatbot. He will never lie to a customer. He will never shade the truth. But if he is to be effective, he will have to have the ability to detect when the human carbon unit on the other side of the screen is lying to him.

The types of lies that Honest John will probably experience will result from people trying to game him. When you negotiate for a car, any offer that you make, is a binding offer. That means that if the seller accepts the offer, then you are obligated to buy the car. I want to use Honest John in the same frame of reference. This is not a game -- this is for real.

A buyer may start negotiating in good faith and suddenly get an attack of buyer's remorse. Or sometimes the buyer's partner comes up and screams "WTF are you doing??" while they are negotiating. The buyer may try to get out of the deal, or claim that they came to a different price, or that the options of the car are fewer than what was agreed to. Some of what Honest John may consider lies may be misunderstandings, due to the fact that he is dealing with a human carbon unit who has more chaotic brain processes than he has.

The concept of untruths came up while I was mapping out buying processes for Honest John. I can't let Honest John out in the wild without some sort of process map. As he gains experience, his AI circuits will refine his process maps. An untruth in the negotiation process has to act like an interrupt in a microprocessor. A microprocessor keeps fetching instructions from registers that hold a series of commands, and it merrily keeps executing those commands. But in the midst of processing, a more urgent command with a higher priority can come along -- an interrupt -- and it changes the order of command processing. A simple illustration of this would be a user editing a document who decides to quit the process mid-stream by closing the window.

If Honest John comes upon an input that is contrary to his understanding of the truth of the matter, he cannot blithely continue negotiating. The lazy algorithmic solution when this happens, is to suspend the ongoing process and summon another human to take over the process. That makes Honest John less than smart. I want him to be able to handle that.

I have already outlined the creation of a Conversation Continuity object that holds in server memory the entire conversation, along with meta-data and analytics. That is not enough. To get around the liar-liar-pants-on-fire event, I have to tee off the inputs and responses to a liar-liar logic-analysis method after they are recorded in the Conversation Continuity object. The execution thread delivering Honest John's response has to wait for the method to execute before answering. If the liar-liar method lights up, then control is passed to an "error handler", which is a euphemism for "something is not right".

The easiest and most diplomatic way to handle this without actually accusing the user of malfeasance is to say that the bot has detected a logic error, and it will tell the user that it is going to roll back and regress to an earlier point in the negotiations, so that it can re-calculate where things went wrong. Of course, Honest John must prevent himself from getting into an infinite loop if a stubborn user continues with the same inputs. After two iterations of the same nonsense, Honest John will jump to a new position and tactic, based on knowing the state of the negotiations before the nonsense crept in.
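
A toy Java sketch of the liar-liar check and the loop guard might look like this; the class and method names, and the lower-offer rule, are invented for illustration:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: offers are recorded as the conversation proceeds; an input that
// contradicts the record trips the "error handler", and after two repeats
// of the same nonsense the bot jumps to a new tactic.
class LiarLiarGuard {

    private final List<Double> buyerOffers = new ArrayList<>();
    private int nonsenseCount = 0;

    // record an offer; returns false when it contradicts the history
    boolean acceptOffer(double offer) {
        // walking an offer BACKWARD contradicts a binding negotiation
        if (!buyerOffers.isEmpty() && offer < buyerOffers.get(buyerOffers.size() - 1)) {
            nonsenseCount++;
            return false;
        }
        nonsenseCount = 0;
        buyerOffers.add(offer);
        return true;
    }

    // after two iterations of the same nonsense, change tactics
    boolean shouldJumpToNewTactic() {
        return nonsenseCount >= 2;
    }

    // regress to the last point of agreement
    double lastAgreedOffer() {
        return buyerOffers.get(buyerOffers.size() - 1);
    }
}
```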

This process of negotiating can be straightforward if both sides deal from a position of impeccable logic, but that is not the nature of human beings. The intuitive side of our thinking process is chaotic, illogical and stubborn. AI is none of those. The danger of AI to mankind lies in giving control of important things to AI: if it detects that we are being illogical, it may ignore us, overrule us and react counter to what is good for us, even though we came to our conclusion illogically. But for now, I just want to make Honest John sell cars efficiently and in an ethical manner.

Unfair But Effective Chatbots - Taking The Artificial Out Of Intelligence

The whole premise behind a chatbot is to make the experience of chatting with a machine anthropomorphic -- as close as possible to a human-to-human experience. So chatbot developers dig right in and try to make conversations amiable, likable, coherent and smart. They focus on the manner, delivery and tone of the responses to engage the humans. That may be fine and dandy, but they are missing a huge element.

My chatbot, named Honest John, is made to sell cars. It is made to replace the car salesman. If you trawl through my articles, you will find that the genesis of this started when friends of mine had a bad experience with a car salesman when they had to replace their vehicle after hitting a deer on the highway. They remarked that they would rather deal with a computer than the smarmy salesman who prevaricated all through the sales process. That was my Eureka moment.

I have already outlined in past articles, how I am going to add EQ and IQ to the chatbot. I am building in an emotion detector framework that will alter the selling and negotiation strategy if it starts to detect untoward emotions in the human on the other side of the screen. I am also putting in some Conversation Continuity objects in memory so that the machine is cognizant of the entire history of the conversation, including meta-data and analytics, so that it can reset the conversation if the negotiations go off the rails.

The technologies that I am using include AIML (Artificial Intelligence Markup Language), and not only in a smart recursive role: the predicates that detect the context of the conversation inputs get a turbocharged assist from NLP (Natural Language Processing) as well as an ANN (Artificial Neural Network) monitor.

The reason why you want to detect emotion is that Honest John the chatbot will have a series of strategies in his arsenal, and he will pick strategies according to the cognitive context of what is going down. I have already mapped out a strategy framework using the following general factors:

  • geniality - does my subject respond to jokes or puns?
  • speed - does my subject cut to the chase or enjoy the interplay?
  • sensitivity - does my subject withdraw with aggressive negotiation?
  • intent - is my subject serious?
  • decisiveness - does my subject have a clear idea of what they want?
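
As an illustration only, factor scores like the ones above could feed a simple strategy picker. The thresholds and strategy names below are invented, not the real framework:

```java
// Toy sketch of turning the factor scores into a strategy pick.
// Each score is assumed to be in 0..1, estimated elsewhere
// (NLP signals, emoji feedback, response timing, and so on).
class StrategyPicker {

    enum Strategy { SOFT_SELL, HARD_BALL, CUT_TO_THE_CHASE, DISENGAGE }

    static Strategy pick(double geniality, double speed,
                         double sensitivity, double intent) {
        if (intent < 0.3)      return Strategy.DISENGAGE;        // not serious
        if (sensitivity > 0.7) return Strategy.SOFT_SELL;        // easily spooked
        if (speed > 0.7)       return Strategy.CUT_TO_THE_CHASE; // no small talk
        return Strategy.HARD_BALL;
    }
}
```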

While all of these attributes are important for deriving a strategy framework, they are all predicated on thinking like a human. But what if a chatbot was programmed to behave better than a human -- and to do it with less intelligence but more forethought and strategy? After all, the great military strategist and philosopher Sun Tzu, who wrote "The Art of War", proclaimed: "Great results can be achieved with small forces."

When I say strategy in this overall context, I don't mean the five attributes that I mentioned above for negotiating with a human. I mean the overarching strategy that takes into account the idiosyncrasies and vagaries of the human mind. If you build something exploiting those principles, the chatbot will be super-efficient, effective and perhaps unfair. Our brains are not as logical as we think they are, and that can be exploited by an AI chatbot that is designed to do so.

The methodologies for exploiting the foibles of the human mind and giving your AI chatbot an advantage can be found in the unlikeliest places -- a bestseller book by a Nobel Prize laureate in economics. I am referring to the book "Thinking, Fast and Slow" by Daniel Kahneman. Kahneman is a psychologist who with his colleague, Amos Tversky, mapped the two modes of thinking by the human brain and won the Nobel Prize doing it.

Their discovery relates to the dichotomy of cognitive faculties in human thinking. We have the fast, intuitive, thin-slicing, non-logical part of our brains, and we have the slow, deliberate, highly logical and rational part. Kahneman has mapped the major effects of the fast-thinking part of our brains, and using the information gleaned from his research, we can actually program a bot to utilize these effects to great success.

Here are some overall algorithmic effects in the human brain that can be utilized by a chatbot to gain an advantage over the human using it.

The Lazy Controller

Humans would much rather use the fast-thinking part of their brains than the slow, rational part. They regularly hand over control of thoughts and actions to the fast-thinking mechanism, because it takes real work to use the rational part. Kahneman details the results of much research showing that when a human being is not relaxed, they use the intuitive, non-logical side by a wide margin. Ergo, using this principle, if a human is interacting with a chatbot at a kiosk while they are standing, the chatbot has a logical advantage over the person. Similarly, if the chatbot appears in a very busy UI (user interface), then the Lazy Controller takes over. Black-hat or evil programmers will use the UX (User Experience) to nudge humans toward fast and logically flawed thinking. This, combined with other fast-slow thinking effects, can really increase the performance of a negotiating chatbot by exploiting that faulty fast logic.

Priming The Associative Machine

There are many ways to incorporate the associative machine aspect into a chatbot. One can surreptitiously construct a proposition in a buyer's head and get them to believe it. That belief affects their future behavior. Sales people and advertisers do it all of the time. For example, if Honest John were not that honest, when he was selling a car, he would prime the associative machine in the following way:

  1. Most cars that sell over $50,000 have 6-way adjustable electric seats.
  2. This car has 6-way adjustable electric seats.
  3. This car is only $36,000.
  4. Therefore this car is comparable to a much more expensive car.

The associative machine creates cognitive ease by creating feelings of value, goodness, familiarity, truthiness (as Stephen Colbert calls it) and ease. Kahneman's research shows that something simple like bold text adds truthiness. He gave subjects pairs of untrue statements, one printed in bolder text than the other; when asked to pick the true or truer statement, the subjects consistently chose the one in bolder text. This is something to remember in a text-based chatbot when you want emphasis.

On Being A Verbal Donald Trump

Donald Trump's speech has been analyzed by experts, and it is at the level of a Grade Four student. If you notice, he uses phrases like "Very Bad" or "Sad" in a direct way, with simple adjectives. This resonates with a majority of people, and the psychology research backs it up. There are serious problems with using long words needlessly. One of the scholarly papers outlining the research and conclusions on this topic was called "Consequences of Erudite Vernacular Utilized Irrespective of Necessity." Words that people don't understand, or that are too long, turn them off. In other words: eschew obfuscation, espouse elucidation. Translated: keep it simple, stupid. So my chatbot will tone down the big words, especially when things get critical and emotions start to heighten.

There are many, many more of these mental mechanisms in Kahneman's book, and incorporating them in the overall modality of chatbot response will make it into a highly useful chatbot that, in certain situations, can have an unfair but effective edge in dealing with human carbon units. The way to defeat Honest John and keep him honest is to slow down and do slow thinking all of the time. Anything that Honest John says should be stored in a mental buffer and evaluated for truthiness. It is a very un-human thing to do, but Honest John does it, and so should you.

"Like I was saying Honest John ..." Threads Of Conversation Continuity In My Chatbot

If you have been following my chatbot articles, you will know that I have been on a mission to develop an artificially intelligent chatbot that will replace a car salesman. This idea came to me after friends of mine had a bad experience at a new car shop. Building a simple chatbot was quite easy. I fired up my SDK (Software Development Kit) and had one running within a couple of days.

I used AIML (Artificial Intelligence Markup Language) as a starting point, and after I got it working, I realized that the thing (I call it a he, and his name is Honest John) needed more smarts. But on top of that, Honest John needed to detect emotions in the human on the other side of the silicon. The reason for this is that I wanted a successful conclusion (a sale) from the interactions with the customer. If the customer was getting frustrated or irate, Honest John needed to know. He would tone down his stance and be less hard-nosed when bargaining. The ultimate aim is not to get the last nickel on the table for the car dealer, but to satisfy both the buyer and the seller and to come to a successful commercial conclusion.

In my last article I talked about my emotion detector framework. It is a learning framework where the customer helps Honest John by clicking on an emoji every once in a while, when asked, if Honest John can't get a read. From there, the emotion detector framework remembers the AIML predicate (the key word or word pattern that identifies the intent and meaning of the input) and couples it to the emoji, the words in the input, the counter offer in negotiating, the delta (the difference between Honest John's ask and the customer's bid), and the number of words in the replies, and feeds all of this into a neural network so that it continuously learns from its experiences. It then updates its strategy processes based on a decision tree. As a negotiator, Honest John will ultimately know when he needs the kid gloves and when he needs to play hardball to sell the car to the satisfaction of the buyer AND the dealer.
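As a rough illustration of the kind of feature coupling described above, here is a minimal sketch. All names, weights, and the single-neuron scoring model are my own invented stand-ins for the real trained network:

```python
import math

# Hypothetical sketch: assemble the features Honest John couples to an
# emoji label, then score them with a single logistic neuron. A real
# implementation would use a trained multi-layer network.

def interaction_features(bid, ask, prev_bid, reply_word_count):
    """Turn one negotiation turn into a numeric feature vector."""
    delta = ask - bid           # gap between dealer ask and buyer bid
    bid_drop = prev_bid - bid   # did the buyer lower their bid?
    return [delta / max(ask, 1), bid_drop / max(ask, 1), reply_word_count / 50.0]

def frustration_score(features, weights, bias):
    """Logistic squash of the weighted feature sum: 0 = calm, 1 = frustrated."""
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Illustrative (untrained) weights: a big bid/ask gap, a dropping bid,
# and terse replies all push the score toward "frustrated".
feats = interaction_features(bid=18000, ask=24000, prev_bid=19000, reply_word_count=4)
score = frustration_score(feats, weights=[2.0, 3.0, -1.0], bias=-0.5)
```

The decision tree described above would then branch on `score` to pick a softer or harder negotiating strategy.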

But as I was coding this, I realized that there was one thing missing -- the conversation continuity thread for Honest John. The buyer on the other side of the screen can see the dialog history, and it is in the buyer's memory, but not in Honest John's memory. The dialog history is stored in the database, but it is no help to the bot to have to do a fetch after every interaction. The fix was easy. One needs a Conversation Continuity Object in memory.

When you build an enterprise web-based platform, say in Java, you have session objects that are stored in memory. A typical session object is a user bean that holds everything that is needed about the user, so that you don't have to keep making trips to the database every time you want to personalize a message. The net result of a session object like this is that Honest John will now have total recall of the conversation in memory.
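A minimal sketch of what such a Conversation Continuity Object might look like, here in Python rather than a Java session bean, with invented field and method names:

```python
from dataclasses import dataclass, field

# Hypothetical in-memory Conversation Continuity Object: keep the whole
# dialog in memory so the bot never re-fetches history mid-conversation,
# and support resetting to an earlier checkpoint without re-learning.

@dataclass
class ConversationContinuity:
    user_name: str
    transcript: list = field(default_factory=list)   # (speaker, text) pairs
    checkpoints: dict = field(default_factory=dict)  # label -> transcript index

    def record(self, speaker, text):
        self.transcript.append((speaker, text))

    def checkpoint(self, label):
        """Mark a point the negotiation can later be reset to."""
        self.checkpoints[label] = len(self.transcript)

    def reset_to(self, label):
        """Roll the conversation back without re-doing the whole dialog."""
        self.transcript = self.transcript[: self.checkpoints[label]]

convo = ConversationContinuity(user_name="Alice")
convo.record("bot", "Hello Alice, interested in a sedan?")
convo.checkpoint("greeting")
convo.record("user", "No way, that price is crazy!")
convo.reset_to("greeting")  # back to the state right after the greeting
```

The checkpoint/reset pair is what the later paragraphs call Honest John's ability to "go back to an earlier point and start over."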

The Conversation Continuity Object will not only record the transcript, but it will also have the metadata and analytics and it will create and update the process maps for both successful and unsuccessful sales. The real advantage is that Honest John will have some cognition about the whole process instead of just reacting to the latest input, like most chatbots do.

The strategic and intelligent factor is that Honest John will be able to reset. He can go back to an earlier point and start over without having to re-do or re-learn the whole conversation. That is the trait that could make Honest John a real winner in the marketplace, selling not only cars, but pretty much anything that needs negotiating.

The next key to making a super smart negotiating chatbot is developing strategies for Honest John and having them available, extensible and modifiable. More on that and the psychology behind it in a later article.

An Emotion-Detection Framework For My Chatbot

If you have been following my articles, you know that I am building an AI (Artificial Intelligence) chatbot to negotiate with people who want to buy a car. If you scroll through my past articles, you will find the genesis of this idea and why I think that it will work.

In the art of negotiation, humans can rely on visual and other cues to determine the emotional impact of what they are saying. They can intuit if the person is becoming frustrated, angry, bored or eager. Chatbots do not have that facility. But since it is such an important facet of dealing with human carbon units, it has to be taken into account.

I have already outlined my strategies for cognition and context recognition for my chatbot using neural nets, NLP (Natural Language Processing) and AIML (Artificial Intelligence Markup Language). What I want this chatbot to do is get smarter with each negotiation that it conducts. The learning aspect has to happen to make this thing commercially useful.

The algorithm will be an emotion association spanning the range from "I am so angry that I could kill someone!" through neutral to "I am so ecstatically happy that I could kiss you." So how would this work? Obviously, the first step is to identify word predicates associated with emotional states in some sort of dictionary. This would be a starting point. However, in learning mode, if the emotion is ambiguous to the chatbot, it will pop up a short array of emojis that represent emotional states and ask the user to click on a rating of 1 to 5 to represent the degree. Then the AI machinery takes over and links answer length, specific words, capitalization and behaviors to teach the chatbot the emotional state within the context of the answer.
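A toy sketch of this lexicon-plus-emoji learning loop. The seed dictionary, the -2..+2 scale, and the deliberately crude rule of tagging every unknown word with the emoji rating are all invented for illustration:

```python
# Hand-seeded emotion lexicon: word -> score on a -2 (angry) .. +2 (ecstatic) scale.
EMOTION_LEXICON = {"angry": -2, "frustrated": -1, "fine": 0, "happy": 1, "ecstatic": 2}

def score_input(text, lexicon):
    """Average the scores of known words; None means the emotion is ambiguous."""
    hits = [lexicon[w] for w in text.lower().split() if w in lexicon]
    return sum(hits) / len(hits) if hits else None

def learn_from_emoji(text, emoji_rating, lexicon):
    """Emoji rating 1..5 maps to -2..+2; tag every unknown word with it."""
    score = emoji_rating - 3
    for word in text.lower().split():
        lexicon.setdefault(word, score)
    return score

text = "this haggling is ridiculous"
if score_input(text, EMOTION_LEXICON) is None:      # ambiguous: ask for an emoji
    learn_from_emoji(text, emoji_rating=2, lexicon=EMOTION_LEXICON)
```

After the emoji click, the same sentence scores as mildly negative, so the next similar input no longer needs the popup.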

How will knowing the emotional state help? This chatbot, as iterated, is a negotiation chatbot. It will have a range of strategies. As it detects frustration, it will take a softer, less aggressive approach to counter-offering. If the negotiation goes off the rails into la-la land, with a ridiculous counter offer, the chatbot may in fact shut down the negotiations, politely thank the person, and call for a human intervention. If it detects that it is on track to close a sale, it may take a more sophisticated approach and try to up-sell services or add-ons.

The emotion detection framework is a necessary adjunct to selling to humans, and it has applications over a wide spectrum of chatbot applications, including a help-desk service chatbot that helps people solve problems without endlessly waiting for a service agent while listening to elevator muzak and wasting valuable time.

This is just one more step in eliminating the frustrations of dealing with human-condition vagaries when undertaking a commercial transaction.

Stay tuned for more on this journey.

Putting An EQ And IQ Into My Chatbot

In my previous article, I outlined the genesis of my chatbot that is under construction as a side project. Friends of ours had to buy a new car and they were dissatisfied, intimidated, fed-up and emotionally drained after dealing with a high-pressure, smarmy new car salesperson. They wanted to talk to a computer to negotiate for a new car, so I got out my SDK and made my chatbot. I can see my chatbot being used online on new car dealer websites, as well as in kiosks at the new car showroom.

The first open source framework to enter the chatbot field was ALICE, and it used AIML, or Artificial Intelligence Markup Language, which is an XML dialect for creating natural language software agents. It was created by Dr. Richard Wallace in 2001, and it is quite low tech compared to some of the proprietary chatbot frameworks out there. However, chatbot frameworks are like an artist's tubes of paint and a canvas. The skill that goes into making it oftentimes transcends the simplicity of the framework.

Here is a simple schematic diagram (ignoring the framework internals that digest the AIML) of how a chatbot works:

The predicate is like a key word. Examples of predicates are "Hello", "Calendar", "Time", or any other topic. The input is parsed for a predicate, which is the main topic of the input. The predicate is then matched against the AIML predicates loaded into memory that have already been defined. If the predicate exists, the bot retrieves the response to that predicate and spits it out. If it is not found, then a "Not Understood" predicate is accessed, and the response can be as simple as "Sorry, I don't understand" or as complex as "I know about 23,000 different subjects, but I have never heard of the word <predicate>. Do you want to talk about something else?". That's the simplistic AIML usage.
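The predicate-and-response lookup described above can be expressed as AIML categories. A minimal, illustrative example (the pattern and template text are my own, not from the ALICE set):

```xml
<category>
  <pattern>HELLO</pattern>
  <template>Hello, how may I help you?</template>
</category>

<!-- The wildcard pattern acts as the "Not Understood" fallback -->
<category>
  <pattern>*</pattern>
  <template>Sorry, I don't understand. Do you want to talk about something else?</template>
</category>
```

Each `category` pairs one pattern (the predicate) with one template (the canned response), which is exactly the lookup-and-spit-out cycle described above.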

More complexity in the input is where the skill and artistry comes in. One can write "intelligent AIML" using recursion and recursive tags, known as Symbolic Reduction AI. A good example is given in the documentation. When you have simple AIML and someone types in "Hello", as 99% of people do when talking to an AI chatbot, the response is "Hello, how may I help you?". Easy!

When someone types in "You may say that again, Chatty McChatface!" there are four predicates. The first one is the name of the entity, "Chatty McChatface". The second predicate is "again", meaning repetition. The third predicate is "may say" and the fourth predicate is "say that" -- whatever was being talked about. So with skill, complexity can be built into a simplistic framework. Although the mechanism is simplistic, symbolic reduction can make an AIML chatbot work as well as a casual conversation on the street with ... say a Trump supporter. What adds the complexity is the construct. To understand recursion, you must first understand recursion.
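A hedged sketch of symbolic reduction using AIML's `srai` tag: the complex input is reduced to a simpler predicate that the bot already knows. The patterns and replies here are illustrative inventions, not part of any shipped AIML set:

```xml
<!-- Reduce the complex input to a simpler predicate already defined -->
<category>
  <pattern>YOU MAY SAY THAT AGAIN *</pattern>
  <template><srai>REPEAT PLEASE</srai></template>
</category>

<category>
  <pattern>REPEAT PLEASE</pattern>
  <template>Happy to repeat myself. Where were we?</template>
</category>
```

The `srai` element re-submits the reduced phrase through the same matching engine, which is the recursion the paragraph above alludes to.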

When you have a chatbot that is negotiating with someone, asking them to make the second biggest purchase of their life, you have to have both an EQ and an IQ built into the chatbot. First of all, you are moving away from pure chat, into an interaction that requires assessment, calculation and response, all tempered with the cognitive emotional factors and parameters of the inputs and outputs. The bot has to satisfy opposite strategies and goals simultaneously. It has to get the best price for the car dealer while getting the lowest price for the consumer.

To balance these opposite forces, the chatbot must have a few Emotional and Intelligence attributes. It has to know when it is crossing the line from hard negotiating to nickel-and-diming the buyer. It has to recognize when the buyer is getting frustrated. It must judge the fuzzy concept of "good enough -- let's do the deal while everyone is still happy". So that is where I must put smarts into my chatbot.

One of the ways of doing that is to tee off the predicates into an NLP (Natural Language Processing) machine where the cognitive and emotional factors can be assessed. And since you want the machine to get better and better at negotiating and selling a car, you need some sort of AI network -- RNNs, CNNs, ANNs, or hybrid types of Artificial Neural Networks -- that watches the combination of predicates and responses like an overseer and overrides the response in the AIML with a custom response. And then that series of events must be serialized, fed back into the machine as a new behavior, and constantly assessed for validity and results. That is the task at hand, and it is an exciting challenge for me.
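To make the overseer idea concrete, here is a hypothetical sketch in which a scoring layer vetoes the canned AIML response. The canned replies, predicate names, and the frustration threshold are all invented for illustration:

```python
# Hypothetical sketch of the "overseer" pattern: the AIML engine proposes
# a reply, and a separate model overrides it when the negotiation context
# (here, a frustration score from the emotion detector) calls for it.

def aiml_reply(predicate):
    """Stand-in for the AIML engine's canned predicate -> response lookup."""
    canned = {
        "PRICE": "That price is final.",
        "HELLO": "Hello, how may I help you?",
    }
    return canned.get(predicate, "Sorry, I don't understand.")

def overseer(predicate, frustration, proposed):
    """Override the canned response when the buyer is getting frustrated."""
    if predicate == "PRICE" and frustration > 0.7:
        return "Let me see what room we have on that price."
    return proposed

reply = overseer("PRICE", frustration=0.9, proposed=aiml_reply("PRICE"))
```

In a full system the (predicate, proposed, override, outcome) tuple would be serialized and fed back as training data, as the paragraph above describes.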

The only thing that will ruin this is if the car makers decide to go to a fixed-price model with a no-dicker sticker. Then Chatty McChatface will be unemployed, like the thousands of sales people that it previously made redundant. It's a Brave New World out there.

Wanna buy a new car? Start chatting right here !! ... [enter text to start]

A few weeks ago, friends of ours hit a deer, totaling their car. I went to the car dealerships to help them buy a new one, because one of their biggest pain points is dealing with commission salespersons who are hungry and watch the door like a hawk because they have the next "up". Some of the shops were uncomfortable. Smarminess, ingratiation, overuse of your first name, and liberties taken with over-familiarity were some of the things that we encountered at the "big-name, huge inventory shops" who advertise continuously on talk radio. We finally met some genuine sales people who were helpful, honest, and didn't play games like running out to the back behind closed doors to "talk to the manager". I want to give a big shoutout to Ogilvie Subaru, the dealership that made buying a car easy, whose salespeople had the hallmarks of authenticity, honesty and integrity.

After the deal was done, we stopped for a pizza and talked about the negative experience of buying a car. My friends are an older couple, and the woman, who had never used a computer before, just discovered connectivity, social media, and online shopping, and now she runs her life on her iPad. She said that, in light of what went down at the dealerships that we didn't like, she would rather negotiate with a computer.

That was a seminal moment for me. I hauled out my SDK and started writing a chatbot to sell cars. I finally got it running, but now I need to put some NLP (natural language processing), artificial intelligence, and some emotion cognition into it, so the bot can tell if the buyer is getting frustrated. It works okay now, but it's kind of dumb, and I want it to learn with every interaction. I have some neat self-learning ideas and artificial cognition algorithms that I am pumped about trying.

I honestly believe that this will be the future of car buying, and AI will severely reduce the number of car salesmen. The paradigm now is that the buyer does the research online, and goes to the new car shop to do the negotiation and close the deal. The new paradigm is that they will do most of the transaction online, including financing, and then go to the dealership to pay and pick up the car.

Stay tuned.

#automotive #AI #NLP #chatbots

Artificial Intelligence ~ Rage Against The Machine

I was really enlightened by watching Trent McConaghy's video presentation at Convoco. It was posted on LinkedIn a few days ago. If you want to know the near future of Artificial Intelligence you should watch it (here again is the link). This video is better than Nostradamus at predicting the near and far future of humans interacting with AI.

Trent makes a compelling case, with which I agree, that all of our resources will be handed over to AI by the Fortune 500, because it will be cheaper than humans doing the job. The Holy Grail of the current crop of Fortune 500 CEOs is increasing revenues and shareholder value by any means possible. It is how and why those CEOs make the millions of dollars per year that they do.

Trent further states a case where AI entities become corporations and make money for themselves and not for any human masters. I foresaw this when I wrote a blog article in August of 2015, outlining the steps of how my computer un-owned itself from me, started to make money for itself, moved itself to the cloud, and left the actual computer with nothing on it. Not only did it un-own itself, but the slap in the face was that it migrated itself to another substrate. (The blog article is here.) Of course the article was tongue-in-cheek, but the premise is not that far-fetched. The article gives a rudimentary recipe on how to teach a computer to be autonomous and eventually generate a sort of consciousness for itself that defied my putative, imaginary attempts to take back control.

So with computers taking our jobs, managing our resources, and adapting to conditions much faster than us organic carbon units, we could be totally screwed, as Dr. Stephen Hawking warned. Trent, in his video talks about us becoming peers with AI as a matter of survival, and that brings up a problem, and the subject of this article.

I don't think that we can become peers with AI unless a special circumstance happens, and that circumstance is not in the realm of technology, but rather more in the field of philosophy. (With all due respect to philosophers, I was programmed early. The bathrooms in the science and math departments of my university all had the toilet paper dispensers defaced with the slogan "Free Arts Diploma -- Take One"). But je digress. Let me explain.

There are two basic knowledge problems with the merging of AI and human intelligence, and they are both the facets of one problem. We don't really have an understanding of the entire field effect of how AI makes extremely granular decisions, and we don't have the knowledge of the actual mechanism in the human brain either.

In terms of what AI does, if we take a neural network, we understand how the field of artificial neurons works. We know all about the inputs, the bias, the summation of all inputs, the weight multipliers, the squashing or threshold function determining whether a neuron fires or not, and the back propagation and gradient descent bits that correct it. But there is no way to predict, calculate, input or determine how the simple weight values all combine in unison across a plethora of other artificial neurons arranged in various combinations of layers. We don't know the weight values beforehand and have no idea what they will be, but we let the machine teach itself and determine them by iterating through many thousands of training epochs, carefully adjusting them to prevent over-fitting or under-fitting of the training set. Once we get some reasonable performance, we let the machine fine-tune itself in real time on an ongoing basis, and we generally have no idea of the granular performance parameters that contribute, in a holistic sense, to its intelligence. And we could get similar performance from another AI machine with a different configuration of layers, neurons, weights, etc., and the numerical innards of the two machines would never be the same.
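The pieces named above (inputs, weights, bias, the summation, the squashing function, and a gradient-descent correction) can be sketched with a single artificial neuron. The input values and learning rate are illustrative:

```python
import math

# One-neuron sketch of the mechanics described above. A real network has
# layers of these, which is exactly where the opacity comes from.

def forward(inputs, weights, bias):
    z = sum(w * x for w, x in zip(weights, inputs)) + bias  # summation of weighted inputs
    return 1.0 / (1.0 + math.exp(-z))                       # sigmoid "squashing" function

def train_step(inputs, weights, bias, target, lr=0.5):
    """One gradient-descent update for squared error on a sigmoid neuron."""
    out = forward(inputs, weights, bias)
    grad = (out - target) * out * (1.0 - out)   # dE/dz via the chain rule
    weights = [w - lr * grad * x for w, x in zip(weights, inputs)]
    bias -= lr * grad
    return weights, bias

inputs, target = [1.0, 0.5], 1.0
weights, bias = [0.1, -0.2], 0.0
before = forward(inputs, weights, bias)
for _ in range(1000):                 # the machine "teaches itself" the weights
    weights, bias = train_step(inputs, weights, bias, target)
after = forward(inputs, weights, bias)
```

Note that nothing about the final weight values was predictable beforehand; they emerge from the training loop, which is the point of the paragraph above.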

The same ambiguity is true for human cognition. We don't really know how it works. We as a human race could identify a circle, long before we knew about pi and radius and diameter. As a matter of fact, we know more about how AI identifies a circle when we use RNN or CNN (two different types of AI machine algorithms using artificial neurons), than how the human brain does it.

The problem of human cognition is explained succinctly in a book that I am reading by Daniel Kahneman, a psychologist who won the Nobel Prize. The title of the book is "Thinking, Fast and Slow". Here is the cogent quote: "You believe that you know what goes on in your mind, which consists of one conscious thought leading in an orderly array to another. But that is not the only way that the mind works, nor is it the typical way." We really don't know the exact mechanism or the origin of thoughts.

The Nobel Prize was awarded to Kahneman (and recognized his work with his deceased colleague, Amos Tversky) for their ground-breaking work on human perception and thinking and the systematic faults and biases in those unknown processes. The prize was awarded in the field of economics even though both men are psychologists -- but the impact on economics was huge. So not only do we not know how we really think as a biological process, but we do know that there are biases that make knowledge intake faulty in some cases.

Dr. Stephen Thaler, an early AI explorer and holder of several AI patents and inventor of an AI machine that creatively designs things, likens the creative spark to an actual perturbation in a neural network. How does he create the perturbation artificially? He selectively or randomly kills artificial neurons in the machine. In their death throes they create novel things and designs like really weird coffee cups that are so different that I would buy one. Perhaps humans have perturbations based on sensory inputs or self-internally generated by thoughts, but the exact process is not really known. If it were, the first thing that would be conquered is anxiety. After all the human brain got its evolutionary start by developing cognitive factors to avoid being eaten by lions in the ancient African savanna.

Here is one thing that you can bet on -- humans and AI machines have different mechanisms of thought generation and knowledge generation that may not be compatible. Not only are the mechanisms different, but the biases are different as well. I am sure that there are biases in AI machines, but they are of a nature due to the fact that each is a computer. They do not have the human evolutionary neural noise like anxiety, pleasure, hate, satisfaction and any other human thought. As a result, I suspect that they are more efficient at learning. They certainly are faster. Having said this, with two different cognitive mechanisms, it would be incredibly difficult to be peers with AI .... unless ... and this is where the philosophy comes in ... unless we deliberately make AI mimic our neural foibles, biases, states of mind and perturbations.

With electrical stimulus, we can already do amazing things with the brain in a bio-mechanical sense. We can make the leg jerk. We can control a computer mouse. We can control a computer. But we cannot induce abstract thinking with external stimulus (unless there is a chemical agent like lysergic acid diethylamide, or LSD). Why is this important? Because we have to escape our bodies if we want to do extended space travel, conquer diseases, avoid aging, and transcend death using technology. (Just go with me on this one -- Trent makes the case in the video for getting a new body substrate.)

The case has been made, that if we want to transcend our biological selves, and our bodies and download our brains onto silicon substrate, we can't have apples to oranges thought processes. We need to find a development philosophy that takes into account the shortcomings of both AI and Homo Sapiens carbon units.

Dr. Stephen Hawking said that philosophy was dead because it never kept up with science. Perhaps AI can raise the dead, and the philosophers of the world can devise a common "Cogito ergo sum" plan that equilibrates the messy human processes with AI. So while it might be a solution, there is a fly in the ointment. It just might be too late. We have given AI freedom outside the box of human thinking, and it has opened a can of worms. The only way to put worms back into a can once you open it is to get a can that is orders of magnitude bigger. And we aren't doing that, and have no plans to do that.

So what is left? Trent mentioned Luddites smashing machines both in the past and perhaps in the future. We just may see Rage Against the Machine - Humans versus AI when the machines start to marginalize us on a grand scale. For now, I would bet on the humans and their messy creative thought processes that can hack almost any computer system. But the messy creativity might not be an advantage for very long. Not if a frustrated philosopher/programmer finds a way to teach an AI machine, all of the satisfying benefits of rage and revenge.

I hope it doesn't come to this, but if the current trends continue: Nos prorsus eruditionis habes.

When The Customer Isn't King - Account & Data Security Breaches That Can Be Prevented

The news for two major retail giants in Canada has not been good for them or their customers in the past few days. Loblaws, a grocer and dry goods retailer, had their PC Points loyalty system breached. One customer had 110 points worth $110 spent in the province of Quebec, and she has never even visited that province. Another customer, a system administrator who said that he had a different password for every account, had his points stolen as well. News link:

As well, Canadian Tire, a retail giant that sells everything from automobile accessories to sporting goods to snack foods, has been hacked, compromising both loyalty points and credit card balances online. News link:

The financial losses from hacks such as these are tremendous. When Target was breached in 2014, they estimated the losses to be $148 million, according to an article in Time Magazine. In that same year, job losses due to customer data breaches were estimated at 150,000 people in Europe. The global picture is frightening. McAfee, the Intel security company, estimates monetary losses of $160 billion per year from data breaches.

Hacking isn't exactly a new phenomenon. In 1979, infamous convicted hacker Kevin Mitnick broke into his first major computer system, the Ark, the computer system Digital Equipment Corporation (DEC) used for developing their RSTS/E operating system software. The most embarrassing privacy breach came when Ashley Madison, the website for having extra-marital affairs, was hacked and over 30 million names and credit card numbers were exposed, causing at least two suicides.

So in this day and age, why does this happen? Can it be prevented?

Aside from an inside job, one of the reasons that hacking is successful is the antiquated way that servers, databases and accounts are accessed. To connect to a server, one usually must have a username and a password. This is true for gaining access to a server as an administrator. However, one doesn't need administrator access to hack into data and accounts. Customer account information is stored in what is known as a 4GL database (4th Generation Language). This table-driven database is usually clustered on its own server and is exposed to the outside world so that its data can be accessed by platforms, analytics, and web interfaces. Again, with a username and password, one can gain entrance to the data store and exploit the data. Many, many databases still have "root" as the username to gain God-like access, and all that you have to do is either guess, derive, or gain access to the password. Many administrators commit the cardinal sin of using the same password on all accounts, and it may be derived from such things as the name of their pet, which is information on social media. For years, the huge database company Oracle shipped their databases with a default account name of "Scott" and a password of "Tiger", left over from one of the original developers, that was never removed. I walked into many data centers as a consultant, typed in Scott/Tiger, and got access to the crown jewels.

No matter how much security is built into a system, it is still vulnerable to the shaky access scheme of a username and password. There is a better way. It is inexpensive, fairly autonomous, easy to use, and orders of magnitude more secure than a conventional database approach to storing customer data. It is a blockchain.

People know blockchain from the digital crypto-currency Bitcoin, and that fact alone has poisoned the well for quick adoption of blockchain technology. Blockchain is a technology & methodology for the digital recording of any transactions, events, ancillary derived meta-data & chronological logging of any business transaction that requires security, integrity, transparency, efficiency, audit & resistance to outages. It is the acme of trusted data. It also stores values like crypto-currency, digital cash and loyalty points, but its main selling point is that it is a true, autonomous ledger. Period.

When a technology evangelist mentions blockchain at the C-suite level, several things happen. If they have heard of blockchain and its association with Bitcoin, there is pushback, because of how crypto-currencies have been exploited in the press. If they haven't heard of blockchain, or have heard of it but do not understand it, there is a fear of committing to the unknown. There are only about 2,000 blockchain developers worldwide, and most of them are still building proofs of concept. C-level tech officers in corporations do not have the tech talent to immediately go to this technology, and it is perceived as untested, bleeding-edge stuff (not true). The other fly in the ointment is that there is a blockchain consortium built around the Ethereum platform. That may all be well and good, but the Fortune 500 is more suited to a private blockchain, controlled by themselves, as they are responsible for their data.

So why is a blockchain more secure? For starters, any responsible blockchain incarnation does away with usernames and passwords. Authentication is done with a private encryption key right on the device. No amount of keylogging or password trapping will allow a breach. On top of that, conscientious construction of the authentication should be done with a tandem collection of the MAC address or MDID of the device. A MAC address is the embedded serial number of the network card in the computer that can easily be collected by any web page, and the MDID is the hardware serial number of a mobile phone or tablet that can be externally queried. Thus, any machine making changes to the data can be identified by device and encryption key.

On top of all of that, each blockchain query agent needs an encryption key just to read the blockchain. No amount of brute force hacking can get you into the blockchain, unless you are authorized to do so, and have a key created for you.

Blockchains can not only hold digital values like money or loyalty points, but they can also contain bits of code that enable smart contracts. In fact, they can store a digital anything. In other words, when certain conditions are met, actions can happen securely because of code embedded in the blockchain. Blockchains are impervious to data being fraudulently altered, because each transaction is linked to the previous transaction using encryption and hashing. You would have to change the entire transaction history to perpetrate a fraud.
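The hash-linking argument can be demonstrated in a few lines. This is a toy chain, not a real blockchain implementation; it shows only why editing one transaction invalidates everything after it:

```python
import hashlib
import json

# Toy hash-chained ledger: each block's hash covers its payload AND the
# previous block's hash, so tampering with any block breaks verification
# for the whole chain from that point on.

def block_hash(payload, prev_hash):
    data = json.dumps(payload, sort_keys=True) + prev_hash
    return hashlib.sha256(data.encode()).hexdigest()

def build_chain(payloads):
    chain, prev = [], "0" * 64  # genesis hash
    for p in payloads:
        h = block_hash(p, prev)
        chain.append({"payload": p, "prev": prev, "hash": h})
        prev = h
    return chain

def verify(chain):
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or block_hash(block["payload"], prev) != block["hash"]:
            return False
        prev = block["hash"]
    return True

chain = build_chain([{"points": 110}, {"points": -40}])
ok_before = verify(chain)
chain[0]["payload"]["points"] = 999  # fraudulent edit of history
ok_after = verify(chain)             # the chain no longer verifies
```

To hide the fraud, an attacker would have to recompute every hash downstream of the edit, which is the "change the entire transaction history" cost the paragraph above describes.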

The last benefit of blockchains is not that obvious, but highly desirable. You can write any information to the payload of a blockchain. So if you store transactions with semantic, machine-readable identifiers, one can perform stream analytics in real time on the transactions. This can be coupled to machine learning, not only to identify fraud, but also to enable wallet-stretch to sell the consumer more things that they really need.

Does a beast such as a private semantic blockchain exist? You bet. Ping me.

Process Mining From Event Logs -- An Untapped Resource And Wave of The Future

A couple of years ago, I was searching for untapped horizons in data mining, and I came across a course given by Professor Wil van der Aalst, who pioneered the technology of business process mining from server event logs. Naturally I signed up for the course. It was, and still is, a fascinating course, not only for its in-depth and non-trivial treatment of gleaning knowledge from data, but because, for me, it got the creative juices flowing to think of where it could be applied elsewhere. I was so intrigued with the possibilities that I created a Google Scholar alert for Professor van der Aalst's publications. The latest Google alert was on January 31st, and it was a paper entitled "Connecting databases with process mining". The link is here: It was this paper that triggered this article.

I am a huge proponent of AI, Machine Learning and Analytics. In Machine Learning, you gather large datasets, clean the data, section the data into smaller sets for training & evaluation, and then train an AI machine with hundreds, perhaps thousands, of training epochs until the probability of gaining the sought-after knowledge crosses an appropriate threshold. Machine intelligence is a huge field of endeavor, and it is progressing to become a major part of everyday life. However, it is time consuming to teach the machine and get it right. Professor van der Aalst's area of expertise can provide a better way. Let me explain:

My particular interest is that I am building a semantic blockchain to record all of the data coupled to vehicles, autonomous or not. Blockchain, of course, is an immutable data ledger that is trusted, autonomous in its operation, disintermediates third parties, and is outage-resistant. Autonomous vehicles will, by law, be required to log every move, have records of their software revisions, and have records of things like post-crash behavior.

I immediately saw the possibilities of using this data. Suppose that you are in an autonomous vehicle and that vehicle has never been on a tricky roadway that you need to navigate to get to your destination. Your car doesn't know the route parameters, but thousands of other autonomous vehicles do, including many with your kind of operating system and software. With the connected car, your vehicle would know its GPS coordinates and could query a system for the driving details of this piece of roadway that is unknown to its computer. Instead of the intense computational ability required to navigate it from scratch, a recipe with driving features could be downloaded.

Rather than garnering those instructions from repeated training epochs in machine learning, one could apply process mining to the logs to extract the knowledge required. There are already semantic methods of communicating processes, from decision trees to Petri nets, and if the general process were already known to the machine, it would reduce the computational load. As a matter of fact, each vehicle could have a process mining module to extract high-level algorithms for the roads that it drives regularly. That in itself would reduce the computational load on the vehicles. A vehicle would know in advance where the stop signs are, for example, and you won't have YouTube videos of self-driving cars going through red lights and stop signs.
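The simplest process-mining primitive is counting which activity directly follows which in the event log; discovery algorithms such as the alpha miner build Petri nets from exactly these counts. A sketch, with an invented driving log for illustration:

```python
from collections import Counter

# Sketch of the "directly-follows" relation that process discovery
# algorithms start from. The event log below is invented: each trace is
# one pass through a junction, recorded as a sequence of activity names.

def directly_follows(log):
    """log: list of traces, each trace a list of activity names."""
    pairs = Counter()
    for trace in log:
        for a, b in zip(trace, trace[1:]):
            pairs[(a, b)] += 1
    return pairs

log = [
    ["approach_junction", "slow_down", "stop", "proceed"],
    ["approach_junction", "slow_down", "proceed"],
    ["approach_junction", "slow_down", "stop", "proceed"],
]
df = directly_follows(log)
```

From these counts a vehicle could infer, for example, that "slow_down" always follows "approach_junction" on this stretch of road, which is the pre-baked route knowledge the paragraph above describes.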

It goes a lot further than autonomous vehicles. This concept of creating high-level machine processes from event logs can be applied to fields as diverse as robotic manufacturing and cloud server monitoring, and to the numerous domains where human operators or real-world human judgement are currently required.

Process mining could either replace machine learning in a lot of instances, or supplement it in a mix of technologies. The aim is the same: aggregating data into information and integrating information into knowledge, both for humans and for machines.

This process mining business reminds me of the history behind Bayesian inference. The Reverend Thomas Bayes derived the equations relating probability to prior belief. They sat on a dusty shelf for over 200 years before being re-purposed for computer inference and machine intelligence. I think that Professor van der Aalst's methodologies will be re-purposed for things yet un-imagined, and it will not take 200 years to come to fruition.

How Not To Convince Warren Buffett - Bayesian Approach To Revenue Forecasting For Startups

While waiting for Honda Xcelerator in Silicon Valley to evaluate my latest disruptive auto tech pitch, I got a little weary of documenting the API and creating more entry points, so I was thinking about revenue streams and startups. I had received a Warren Buffett biography for Christmas, and by coincidence, I came across a passage in the book where a startup was pitched to Warren. It gave me pause to think.

Warren had bought the Wall Street firm Salomon Brothers, and it was a problem-child investment. The company was caught up in a Treasury bond scandal, and Warren had to beg and plead with the government and regulators not to shut it down and destroy his investment. As a mea culpa, heads had to roll, and one of those heads was John "JM" Meriwether. JM had reported the transgression of one of his employees that caused the evolving scandal, and JM's superiors sat on the information without immediately reporting it to the regulators. After it was all said and done, JM was a victim as well because of his position, although he had no culpability in hiding the fraud. He left Salomon Brothers and started a hedge fund called Long Term Capital. He approached Warren Buffett to invest in it. It was Meriwether's approach that got my attention.

Warren was still on good terms with JM after the DCBM (contractors and consultants know this term -- it means "Don't Come Back on Monday"). Although JM got the DCBM, he was still welcome at Warren's table. If you are in Warren's inner circle, you get invited to a steak dinner at Gorat's in Omaha, Nebraska. JM had a history of arbitrage and trading at Salomon, and he had compiled the numerical results of his successes and failures while heading the arb team. If you know anything about statistics, you should already be feeling the heat of the Bayesian approach coming.

Over the course of ingesting the finer bovine parts, JM pulled out a schedule to show Buffett different probabilities (another Bayesian bell rings) of results and how much money his hedge fund, Long Term, could make based on those probabilities. Also in the schedule were the probabilities of various strategies involving small or large trades with different parameters of leveraged capital. To someone like me, the approach was brilliant. It was totally Bayesian, and it provided some evidence for pro forma revenues other than wishful thinking and shots in the dark at a dart board.

Every venture capitalist knows that over 99.999% of the business plans they receive show pro forma revenues of over a million dollars after two years. It is almost a de rigueur feature of a business plan and pitch deck. And we all know that almost none of them ever hit that benchmark. Taking a Bayesian approach to revenue forecasting could be a breath of fresh air to business plans, pitch decks and venture capitalism in general, even though it didn't work on Warren Buffett.

So what is the Bayesian Approach? Bayes’ theorem is named after Rev. Thomas Bayes (1701–1761), who first provided an equation that allows new evidence to update beliefs (Wikipedia). The formula in mathematical terms is given as:

P(A|B) = P(B|A) x P(A) / P(B)

Describing it in words goes like this: A and B are related events, and the probability of B happening is not 0. The probability of A happening, given that B has happened, equals the probability that B will happen given A, times the probability of A, all divided by the probability of B.
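The formula is one line of code. Here is a quick sanity check with made-up sales numbers (the conversion and demo-request rates below are invented for illustration):

```python
def bayes(p_b_given_a, p_a, p_b):
    """P(A|B) = P(B|A) * P(A) / P(B), with P(B) > 0."""
    return p_b_given_a * p_a / p_b

# Hypothetical numbers: 20% of all leads convert (P(A)),
# 60% of converting leads had asked for a demo (P(B|A)),
# and 30% of all leads ask for a demo (P(B)).
p = bayes(0.60, 0.20, 0.30)
print(round(p, 2))  # 0.4
```

In other words, seeing the evidence "this lead asked for a demo" updates the prior 20% conversion belief to 40% -- exactly the kind of belief-updating that makes the theorem useful for forecasting.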

It doesn't sound like much, but the Bayes formula has staggering implications. It solves practical questions that were unanswerable by any other means: the defenders of Captain Dreyfus used it to demonstrate his innocence in the Dreyfus spying affair; insurance actuaries used it to set rates; Alan Turing used it to decode the German Enigma cipher and arguably save the Allies from losing the Second World War; the U.S. Navy used it to search for a missing H-bomb and to locate Soviet subs; RAND Corporation used it to assess the likelihood of a nuclear accident; and Harvard and Chicago researchers used it to verify the authorship of the Federalist Papers (The Less Wrong Blog). It is also the basis of some machine learning and artificial intelligence.

I think that it is a brilliant strategy for demonstrating revenue possibilities for start-ups. You could take a pool of known customers, a customer conversion rate (which is a probability based on your efforts to date), a variety of strategies for converting them, and a variety of probabilities of what they will pay, and if you have done your homework, you will come up with a believable, if less spectacular, pro forma revenue statement for your startup.
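A rough sketch of what such a schedule might look like in code follows. The lead counts, strategy names, conversion rates and price probabilities are all invented for illustration, not real data:

```python
# Hypothetical forecast: expected revenue is, per strategy,
# leads * conversion_rate * expected price, where the expected price
# is weighted by the probability of each price point.
leads = 500

# (conversion_rate, {price: probability_of_that_price}) per strategy.
strategies = {
    "direct_sales": (0.04, {1000: 0.7, 1500: 0.3}),
    "self_serve":   (0.10, {200: 0.8, 500: 0.2}),
}

def expected_revenue(leads, strategies):
    total = 0.0
    for rate, prices in strategies.values():
        expected_price = sum(p * prob for p, prob in prices.items())
        total += leads * rate * expected_price
    return total

print(expected_revenue(leads, strategies))
```

Each line of the schedule is a probability-weighted outcome rather than a single hopeful number, which is what made Meriwether's presentation feel Bayesian rather than dart-board.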

While the approach was brilliant, it didn't work on Warren Buffett. Why? Warren and crew had this to say about it: "We thought that they were very smart people. But we were a little leery of the complexity and leverage of their business. We were very leery of being used as a sales lead. We knew that others would follow if we got in." (Munger - The Snowball). Warren thought that there was a flaw in the original premise of how they were going to use their leverage. He didn't want to be a Judas goat -- a wise old goat that is used for its entire lifetime to lead other goats to slaughter, day after day.

So while it didn't convince billionaire Buffett, taking a Bayesian approach to revenue forecasting for a startup, just might land you a round of financing.