Like a lot of other geeks, I have become greatly disillusioned with Microsoft in the past several years. I saw them as anti-innovative fat cats protecting a revenue stream that did no favors for their users, becoming a stodgy, quaint grandparent in a tech world where they still thought they were the same sex object that they were in their early daze.
Microsoft, in my opinion, has hung on too long to its archaic operating system, which is essentially one big kludge on top of a stack of kludges, turtles all the way down to the bare silicon. Almost all of their innovations, from tablets to phones to music services, have been market failures because they stubbornly resisted changes to their bloated, digital-cholesterol-clogged operating system. If they truly wanted to be innovative, they would ditch it in favor of a flavor of QNX or Linux and get a sleek, less vulnerable system. Back in the early daze of the 8086 microprocessor, I saw a QNX system boot from a single floppy disk, and in its day, that was amazing.
Now that I have gotten that off my chest, I must grudgingly admit that Microsoft has lit a spark that impresses me: their Azure big data suite. If they are going to reinvent themselves, breathe new life into the corporation and become innovative again, then Azure might be the vehicle.
Big Data is where it is at, and where it is going to be if we want to manage and monetize the Internet of Everything, and Microsoft is trying to create and promulgate products to that end with Azure. I only became aware of Azure when several members of the Azure team followed me on Twitter; when I checked them out, I realized that this wasn't Bill Gates' Microsoft. I really liked what I saw.
Azure offers data analysis as a service, and it has a free component. It is done in a quasi-cloud environment, and from what I can see, once you graduate from the newbie class, the prices are okay. The good news is that there is a link to some pretty nifty free tools. Here is the link:
The tools are varied, useful and intriguing.
Microsoft just may have a chance to dominate the market. Azure is their thin edge of the wedge, but they must follow the template of Microsoft Word when it started to dominate the marketplace. Back in the day, personal computers were useful, but not that useful when it came to creating documents electronically. The IBM Selectric typewriter was the weapon of choice for using up reams of paper. Then along came the word processor. Dr. An Wang made a fortune from inventing computer core memory, and then sank his money into Wang Labs, headquartered in Lowell, Massachusetts. The Wang word processor became ubiquitous for several years. It was a dedicated piece of hardware and tightly coupled software that didn't do anything except create formatted documents. Prior to that, electronic documents were printed on a dot matrix or impact printer without styling. (The Wang OS was the first OS that I successfully hacked.)
Microsoft Word came out and essentially destroyed the dedicated word processor. It was an order of magnitude cheaper and easier to use. It is still the dominant document creator to this day. Microsoft needs to do the same thing with Azure.
Right now, a lot of the Azure products use the statistical language R. Other plugins calculate linear regression and all sorts of stuff like standard deviation, blah blah blah. Microsoft needs to hide that under a big layer of abstraction and make all of it invisible to the end user. Picture the end user who runs a niche cafe in a hip town. Their point-of-sale and computer system collects metrics, metadata and machine data. The owners of this data have no idea what it can tell them or how they can use it to increase their revenue streams. They don't know Bayesian inference from degree of confidence.
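To make the idea concrete, here is a sketch of what that layer of abstraction could look like. The class name, methods and wording are entirely hypothetical, not an Azure API; the point is that an ordinary least-squares trend fit happens inside, while the cafe owner only ever sees a plain-language answer.

```python
# Hypothetical sketch -- not an Azure API. The statistics (a least-squares
# trend fit) live inside the class; the user sees only plain English.

class PlainEnglishInsights:
    """Answers 'how is my business doing?' without exposing the math."""

    def __init__(self, daily_sales):
        self.sales = list(daily_sales)  # one revenue figure per day

    def _trend_slope(self):
        # Least-squares slope of sales over day index 0..n-1.
        n = len(self.sales)
        xs = range(n)
        mean_x = sum(xs) / n
        mean_y = sum(self.sales) / n
        cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, self.sales))
        var = sum((x - mean_x) ** 2 for x in xs)
        return cov / var

    def headline(self):
        # The abstraction layer: regression in, plain English out.
        slope = self._trend_slope()
        if slope > 0:
            return "Good news: sales are trending up."
        if slope < 0:
            return "Heads up: sales are trending down."
        return "Sales are flat."
```

The cafe owner would call `PlainEnglishInsights(last_month_sales).headline()` and never hear the words "linear regression."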
Microsoft needs to build data analysis for the common person, the way they built word processing for the common person. If they do that, they will take their company into the next century. If not, they will be the biggest Edsel of the tech industry. However, for the first time in a long time, I like what I am seeing come out of Microsoft.
I love ACID. In the database sense. To remind you what ACID means, here is an excerpt from Wikipedia:
ACID (Atomicity, Consistency, Isolation, Durability) is a set of properties that guarantee that database transactions are processed reliably.
Atomicity
Atomicity requires that each transaction be "all or nothing": if one part of the transaction fails, the entire transaction fails, and the database state is left unchanged. An atomic system must guarantee atomicity in each and every situation, including power failures, errors, and crashes. To the outside world, a committed transaction appears (by its effects on the database) to be indivisible ("atomic"), and an aborted transaction does not happen.
Consistency
The consistency property ensures that any transaction will bring the database from one valid state to another. Any data written to the database must be valid according to all defined rules, including constraints, cascades, triggers, and any combination thereof. This does not guarantee correctness of the transaction in all ways the application programmer might have wanted (that is the responsibility of application-level code) but merely that any programming errors cannot result in the violation of any defined rules.
Isolation
The isolation property ensures that the concurrent execution of transactions results in a system state that would be obtained if transactions were executed serially, i.e. one after the other. Providing isolation is the main goal of concurrency control. Depending on the concurrency control method, the effects of an incomplete transaction might not even be visible to another transaction.
Durability
Durability means that once a transaction has been committed, it will remain so, even in the event of power loss, crashes, or errors. In a relational database, for instance, once a group of SQL statements execute, the results need to be stored permanently (even if the database crashes immediately thereafter). To defend against power loss, transactions (or their effects) must be recorded in a non-volatile memory.
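As a quick illustration of the atomicity property, here is a minimal, self-contained demonstration using Python's built-in sqlite3 module. The account names and the overdraft rule are invented for the example:

```python
# A minimal demonstration of atomicity ("all or nothing") with sqlite3.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 0)")
conn.commit()

try:
    with conn:  # one transaction: commits on success, rolls back on error
        conn.execute("UPDATE accounts SET balance = balance - 150 "
                     "WHERE name = 'alice'")
        (balance,) = conn.execute("SELECT balance FROM accounts "
                                  "WHERE name = 'alice'").fetchone()
        if balance < 0:
            raise ValueError("overdraft")  # abort mid-transaction
        conn.execute("UPDATE accounts SET balance = balance + 150 "
                     "WHERE name = 'bob'")
except ValueError:
    pass  # to the outside world, the aborted transaction "does not happen"

# The partial debit was rolled back: alice keeps 100, bob keeps 0.
balances = dict(conn.execute("SELECT name, balance FROM accounts"))
```

Even though the first UPDATE executed, the rollback leaves the database exactly as it was before the transaction began.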
Why bring this up? Recently I was playing with some feed-forward, backward-propagating multi-layer perceptrons. I love them because they allow machines to make decisions. However, the decision-making process is not ACID in the context above if your artificial neural network, composed of multi-layer perceptrons, is in continuous learning mode (each successive new bit of information is back-propagated with a learning rate).
This may be good or bad, depending on the fuzziness of what the machine has to decide, but wouldn't it be nice to have ACID in an artificial neural network?
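To see why continuous learning is not ACID, consider this sketch: each new sample mutates the weights in place, with no transaction to roll back. A single linear neuron trained with the delta rule stands in here for a full multi-layer perceptron; it is an illustration of the idea, not my actual network.

```python
# Online (continuous) learning: every sample immediately "commits" a weight
# update, and the previous weight state is lost -- no atomicity, no rollback.

def train_online(weights, samples, learning_rate=0.1):
    """Update weights in place, one (inputs, target) sample at a time."""
    for inputs, target in samples:
        prediction = sum(w * x for w, x in zip(weights, inputs))
        error = target - prediction
        for i, x in enumerate(inputs):
            # Each update overwrites the old weight on the spot.
            weights[i] += learning_rate * error * x
    return weights
```

One bad or noisy sample permanently nudges every weight; there is no "abort" back to the pre-sample state unless you snapshot the weights yourself.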
Get your thinking caps on, and leave me some comments to inspire my imagination and perhaps create a whole new breed of technology.
I was talking to a colleague of mine, and we were discussing, of all things, Bugs Bunny cartoons. In a couple of cartoons, Bugs is in a cabin with Yosemite Sam; in one instance there is an old-style nickel-plated wood stove, and in another, a pot belly stove. If you don't know what a pot belly stove looks like, here is one:
The conversation revolved around how Bugs Bunny is still popular with today's kids, even though he was first created in 1940. My colleague idly wondered whether the kids of today know what a pot belly stove is, how it was used and what it was for. If you are a certain age and are a grandparent, you may remember them from cabins or cottages, but I doubt that anyone younger than 40 has seen one. So the question is, can the kids of today relate to the cultural load of the concept of a pot belly stove?
The reason I bring this up is that the On/Off switch is going the way of the pot belly stove. Back in the good old, primitive days, an On/Off switch was fairly mechanical. It consisted of a toggle and two contacts separated by a gap. When you flipped the switch, the juice flowed; when you turned it off, the contact was broken and the current stopped flowing.
This concept carried on for a long time. But the On/Off switch is going the way of the dodo bird. Back in the day BR (before remote), you walked over to a TV and pulled a switch to turn it on. The earliest desktop computers had a toggle switch somewhere on the side. A switch was a simple concept to learn. Not so anymore.
Televisions, for example, do not go dead when you switch them off. They have a quiescent circuit just listening for a remote signal. To turn on my iPad, I press a button, and to make it work, I swipe the face of it. In my car, I no longer have to turn on the headlights. When it gets dark, they turn themselves on.
With Moore's Law making transistors smaller and smaller, and smarts built into everything, the On/Off switch is dying. Devices will have sensors that detect when power is needed by the main circuits, and they will switch it with solid-state switches or transistors. Vacuum cleaners will have motion sensors and will know when to turn on the sucking mechanism. The list goes on and on. New cars detect when the key is in the driver's pocket, and you just push a button. No more mechanical linkages. I am reminded of an 80-some-year-old man who won an iPod at a church bazaar and didn't know what it was. He had no inkling that it could be turned on and that something so small could play music.
So one day in the future, a child will see some sort of image of an On/Off switch and wonder what it is for and why it was needed. They will be flummoxed by it. It's a Brave New World that is coming. Me -- I say "Bring it on!"
Labels: on / off switch
In my process data mining course, on the internal forums, an OpenStack developer asked how the event logs from using OpenStack could be used in process mining. This is how I replied:
First of all, let me congratulate you on OpenStack. I am both a user, and I use the services of an OpenStack driven Platform-As-A-Service to host the development of my mobile apps.
I would see several potentially huge benefits if you incorporated process mining into the OpenStack platform. For example, spammers now use OpenStack virtualization to set up a virtual machine, do their spamming or hacking, and then tear down the machine, never to be seen again. If you had a signature or a process model of this activity, you could theoretically intercept it while it is happening.
Another possibility: every time the software does a create, an instantiation, or spins up anything virtual, if you record the timestamps of these machine events, you could provide a QoS, or quality of service, metric, both for monitoring the cloud and for detecting limitations caused by hardware, software or middleware bottlenecks.
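As a rough sketch of that idea, here is how timestamped lifecycle events could be turned into a crude QoS metric. The step names and the 5-second threshold are invented for illustration, not taken from OpenStack:

```python
# Turn ordered, timestamped lifecycle events into per-step durations,
# then flag slow steps as potential bottlenecks.
from datetime import datetime

def step_durations(events):
    """events: ordered list of (iso_timestamp, step_name) pairs.
    Returns {step_name: seconds until the next event}."""
    durations = {}
    for (t1, step), (t2, _) in zip(events, events[1:]):
        delta = datetime.fromisoformat(t2) - datetime.fromisoformat(t1)
        durations[step] = delta.total_seconds()
    return durations

events = [
    ("2014-01-01T10:00:00", "request_vm"),
    ("2014-01-01T10:00:02", "allocate_disk"),
    ("2014-01-01T10:00:09", "boot_vm"),
    ("2014-01-01T10:00:10", "vm_ready"),
]
# Any step that took longer than 5 seconds is worth investigating.
slow_steps = {s for s, d in step_durations(events).items() if d > 5}
```

Tracked over thousands of machine events, durations like these would expose exactly the hardware, software or middleware bottlenecks mentioned above.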
I can also see the possibility of catching misconfigured parameters that degrade service quality; process mining would pick these up by detecting missing setup steps in the process. In other words, an arc around a required region would indicate that required steps were skipped.
This course has inspired me to start working on a Java framework (maybe a ProM plugin) that operates on an independent thread (maybe in an OpenStack incarnation), monitors activity on a server, compares it to ideal processes in real time, and flags someone if a crucial process deviates from the ideal. I think that I could get this going in a timely fashion.
Once again, this course has opened my eyes to potential methodologies and algorithms that can be applied to non-traditional fields.
Note: ProM is an open source process mining tool. The process mining course is given by the Eindhoven University of Technology in the Netherlands.
The course in process data mining given by Professor Wil van der Aalst of the Eindhoven University of Technology in the Netherlands has opened my eyes to a few elements of data mining that I had not considered.
At first blush, the course looks like it would be quite useful for finding bottlenecks in processes like order fulfillment, patient treatment in a hospital, service calls or a manufacturing environment, and it is. But to an eCommerce platform builder like myself, it can provide amazing insights that I had never thought of before taking this course.
Professor van der Aalst has introduced a layer of abstraction, or perhaps a double layer of abstraction, by defining any process with a Petri net derived from an event log. Here is an example of a Petri net (taken from Wikipedia):
The P's are places and the T's are transitions. In the theoretical and abstract model, tokens (the black dots) mark various spots in the process. Tokens are consumed by transitions and regenerated when they arrive at the next place. The arrival of a token at a specific place records an explicit behavior in the transition. So how did this help me?
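The token game described above can be sketched in a few lines: a transition is enabled when every one of its input places holds a token; firing it consumes those tokens and produces tokens on its output places. The place names here are illustrative, not taken from the figure:

```python
# Minimal Petri net "token game": consume a token from each input place,
# produce one on each output place.

def fire(marking, inputs, outputs):
    """marking: {place: token_count}. Returns the new marking after firing
    one transition, or None if the transition is not enabled."""
    if any(marking.get(p, 0) < 1 for p in inputs):
        return None  # not enabled: some input place has no token
    new_marking = dict(marking)
    for p in inputs:
        new_marking[p] -= 1                          # consume
    for p in outputs:
        new_marking[p] = new_marking.get(p, 0) + 1   # produce
    return new_marking

m0 = {"p1": 1, "p2": 1}
m1 = fire(m0, inputs=["p1", "p2"], outputs=["p3"])  # a transition fires
```

Each firing corresponds to one recorded event, which is exactly what lets an event log replay as a process model.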
I do data mining to enhance the revenue stream on our eCommerce platform (see the blog entry below this one). Previous data mining efforts on my part dealt with implicit events. Sure, we had an event log, but we looked only at the final event, say a customer purchasing something, and tried to find associations that drove the purchase (attributes or resources like price, color, time of day, the customer's past buys, etc.). The customer's act of making the purchase was captured in the event logs, with timestamps of the various navigations, but all of the events leading to the purchase were implicit events that we never measured. With the event logs, we have explicit behaviors, and using them we can define the purchase process for each customer. So we started making process maps of the online events that led to the purchase. In short, we began to look at the explicit events.
Where will this take us? It will show us the activities and processes leading to a high-value event for us (a purchase). We isolate the high-value process events, and by mapping customer behavior to those events, we can evaluate and refine which customers will end up making an online purchase. Then we can treat those customers in a special way, with kid gloves.
In essence, we can gain insight into the probability of an online purchase when a new customer starts creating events in our event logs that indicate behavior leading to a purchase. This data is extremely valuable: now we can put this customer on our valued customer list, and using other data mining techniques, we can suggest other things that the customer is interested in and get more sales.
To recap, we can now measure explicit behaviors instead of inferring implicit ones from such limited metrics as past buying behavior. We add a whole new dimension to enhancing the shopping experience for our users, and thereby enhance our bottom-line revenue stream.
As in life, often in data mining, it pays to pay attention to the explicit things. Process mining is an incredibly efficient way to deduce explicit behaviors that lead to desired outcomes on our platforms.
Image copyright by Professor Wil van der Aalst, Eindhoven University of Technology
I am taking process mining to a different arena, using the basic methodology and event logs. I understand the necessity for well-defined processes in relation to things like ISO 9001, quality management and the achievement of Six Sigma. I started my career as a circuit designer in a military electronics shop at a major designer/manufacturer, and not only did we have to have incredibly good yields from the fab shop, but we had to have reliable equipment to sell to NATO forces. Process improvement involved saving time, money and resources and creating optimal performance.
However, I have moved on, and now I am using process discovery in the opposite sense: in e-Commerce, to improve revenue streams. Essentially, we have a captive platform where self-identified industry insiders buy and sell to each other at the wholesale level. Our platform has several areas where our clients spend time. They can create trusted buyer zones with their circle of buyers and sellers (platform-enabled geo-location). They can create packages and offer them for sale to platform-escalated groups. They can invoke software robots to buy and sell for them. They can buy and sell from classified listings. In short, we want to map the processes of how our customers use our platform, and hence optimize the UIX, or User Interface Experience, to maximize revenue.
We have event logs and timestamps for everything, from when they log in, to when they change their buyer/seller groups, to when they consign inventory, make offers and counter-offers, or browse the listings. However, the event logs and timestamps are not located in one database table. The challenge was to create an effective case id to tie the disparate event logs together. Luckily, our platform is based on Java classes, JavaServer Pages, Facelets, taglets and the whole Java EE environment. As a result, we have a discrete, serializable session, and by simply altering all of the disparate event logs to record the system-generated session id against each event, we will have created a powerful customer analysis tool on our platform.
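The stitching step can be sketched simply. Here the session id serves as the case id that ties the disparate logs together; the table layouts and activity names are invented for illustration, not our actual schema:

```python
# Sketch: merge disparate (session_id, timestamp, activity) logs into
# per-session cases ordered by timestamp -- the raw input for process mining.

login_log = [("s1", "2014-03-01T09:00:00", "login"),
             ("s2", "2014-03-01T09:05:00", "login")]
listing_log = [("s1", "2014-03-01T09:01:00", "browse_listings"),
               ("s2", "2014-03-01T09:06:00", "browse_listings")]
offer_log = [("s1", "2014-03-01T09:02:00", "make_offer")]

def build_cases(*logs):
    """Return {session_id: [activity, ...]} with activities in time order."""
    cases = {}
    for log in logs:
        for session_id, timestamp, activity in log:
            cases.setdefault(session_id, []).append((timestamp, activity))
    return {sid: [a for _, a in sorted(evts)] for sid, evts in cases.items()}

cases = build_cases(login_log, listing_log, offer_log)
```

Each resulting case is one customer session as an ordered trace of activities, which is exactly the event-log format that process discovery tools consume.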
This will enable us to take things a step further. You have heard of responsive UIX designs that adapt to whatever device is using the platform. The process discovery outlined above will enable us to push the boundaries of responsive design and create a machine-customized UIX that facilitates customer behavior on our platform to maximize revenue stream. Each customer will have a process map based on past behavior, and that process map will generate a UIX with a custom menu that differs for each customer type.
Our previous data mining looked for relationships between product groups and buying behavior. It looked at time-domain factors and essentially all sorts of data dimensions, using Bayesian inference on the interrelationships between those dimensions to enhance revenue stream.
I realize that this doesn't exactly fit the accepted semantics of what a process is in the context of this course, but in a larger sense, we are discovering the buying process, or the behavior processes on a trade platform that lead to buying behavior in our users. It adds event processing to our data mining, and this is where this course adds value for me.
So, the time finally came to upload my app to iTunes Connect through Xcode. I went through the Organizer and pressed the Submit button, and the frigging thing got stuck. It would not get past the "Authenticating With The iTunes Store" stage. It just sat there. Sat there longer than an hour. It was just hugely constipated. Nothing. Nada.
I realized that the https secure upload was somehow failing. To fix it, I opened a Terminal window and navigated to:
I opened up a vi text editor and changed this line:
to this line:
Worked like a charm after that. All that our product manager would say, was "Crapple", but he is biased.