All Things Techie With Huge, Unstructured, Intuitive Leaps

Javascript: Uncaught SyntaxError: Unexpected token ILLEGAL

Note to self: if you get this error in your JavaScript console:

"Uncaught SyntaxError: Unexpected token ILLEGAL"

The first thing to check: if you are passing a string as a parameter, put it in single quotes.

This drained me for an hour.

The Meta Refresh Tag with JSF, JSP, XHTML and Icefaces Apparently Doesn't Work

I have an application that not only has the Icefaces Ajax push, but certain elements require a client side refresh. I had a heck of a time trying to make the meta refresh tag work.

Tried a whole pile of stuff, then I realized that the tag should be in the template and not on the XHTML page. Worked like a charm.

I thought that I would put this up for others who struggle with this and Google is of little help. So if your meta refresh tag doesn't work, put it in the template.
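For anyone who wants a concrete picture, the placement that worked looks roughly like this (the file name and refresh interval are my own examples, not from the original app):

```xhtml
<!-- template.xhtml: the Facelets template itself, NOT the page that uses it -->
<head>
    <!-- force a client-side reload every 30 seconds -->
    <meta http-equiv="refresh" content="30" />
    <title>...</title>
</head>
```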

Microsoft Mango Desperate For Developers

It looks like Microsoft is desperate for developers to make apps for the Mango phone. I am a registered Microsoft developer, and I just received the email printed below. (They want $99 from me to develop apps. Amazon just waived the $99 fee for developers to develop on the Android phone). Here is the email:

Mango for everyone Vol 14 | October, 20
It’s been a very exciting couple of weeks in the world of Windows Phone! Over the past two weeks, we have seen an incredible amount of activity surrounding the launch of Windows Phone 7.5 (formerly Mango). It’s actually been somewhat surreal in some ways because people are paying attention to this release and actually think it’s downright cool!

So what exactly did we announce over the past few weeks? I’m glad you asked! The major announcements included:
The Release of Windows Phone 7.5 by carriers to end consumers: Microsoft has begun delivery of the Windows Phone 7.5 update to Windows Phone users by carriers. To find out when your carrier will be ready to send out the update to you, there is a handy little web page here that gives you the status of the update relating to your carrier.

Introducing the Mango App Challenge: Developers, do we have an awesome deal for you. We have introduced a promotion called the Mango App Challenge that in essence will give you a new phone (up to 300 total for the entire promotion) if you build 2 new, quality Windows Phone 7.5 apps (or games). Interested? Then start your PC and begin coding, because the promotion ends on December 15th, 2011!
The Introduction of the Web-Based Marketplace: Last week the web-based version of the Windows Phone Marketplace was launched. This is great because it allows you and potential end users to find and purchase apps you’ve built onto their phones using a non-Zune experience. It’s also great for providing a way for potential users to see your apps that don’t even have the Zune software on their machines.
Windows Phone SDK 7.1 Launches: Also last week, we launched the go-live version of the developer tools for Windows Phone 7.5 and you can download the tools here for free. Even if you’ve built Windows Phone 7 apps already, it’s a good idea for you to take a look at your apps and update them to take advantage of the Windows Phone 7.5 features that were previously unavailable.
The Marketplace Expands by 19 Countries: When Windows Phone 7 launched, the Marketplace supported 16 countries. With the update of the Marketplace, we have included 19 new countries making it easier for your apps to reach an even larger audience.
In-App Advertising: In the past it was difficult to collect the revenue from in-app advertising using Microsoft’s Advertising Framework as it required you to provide a US non-PO box address for US tax regulations. That requirement is no longer required for Canadian developers using the Microsoft Advertising solution . This gives you a really great new option for monetizing your creations on Windows Phone.
A Change to the Bulk App Policy: One of the things that Microsoft is committed to is providing the end consumer with the best possible experience to find, download and enjoy apps and games. To ensure that experience remains extremely enjoyable, we have made some changes to our bulk app policies. In a nutshell, app publishers will be able to submit up to 10 apps in a day for certification. Likewise, publishers will be limited to publishing 10 apps in a single day as well.

That’s just some of the cool stuff that’s been happening on the Windows Phone front. Excited? We certainly hope you are! So since you’re excited (and maybe have a couple of great ideas for apps that you want to build and get a free phone out of it), how do you start? Below are some steps that can get you from idea to published app!

GO DO's:
Download the Windows Phone SDK 7.1 . All the tools you need to start building apps and games are here.
Read the Microsoft Canada Windows Phone Development resource page . This page contains links to everything you need to start learning how to build amazing app and game experiences on Windows Phone.

Register as a developer on the Marketplace. Once you’ve developed your app, you’ll need to submit it to the Marketplace for certification and publication which requires you to be registered on the Marketplace (it’s a $99 annual subscription).
Build a second app or game, and submit both of your new apps/games to the Mango App Challenge !
Paul Laberge
Developer Advisor, Microsoft Canada Inc.

UIX and UX Tip -- A Note To CNN as Well

Above is a screen shot of the new CNN beta video player. The reason that you don't see any video (yet) is because I got a screen shot of it while the video was loading. It was slow.

But slow loading is not my issue today. The issue is the all-black screen. I have been using my laptop outdoors on the balcony all week, and outdoors an all-black screen acts more like a mirror than an all-white web page does.

More and more people are viewing web pages on mobile devices while outdoors. If you want them to have a good User Experience, you will not use an all-black screen like CNN is doing -- unless you want your web page to be used as a make-up mirror.

And yeah -- I know ... I know .... this blog is an all black screen. My excuse is that I used a Google template and this was the only one that fit my theme.

Java - Convert GregorianCalendar to DateTime

When programming in Java, I like java.util.GregorianCalendar a lot. It makes date and time manipulation easy peasy. For example, I have an application where I have to get the current datetime and add twenty minutes. It happens in a couple of lines:

GregorianCalendar gc = (GregorianCalendar) GregorianCalendar.getInstance();
gc.add(GregorianCalendar.MINUTE, 20); // add() rolls the time forward; set() would overwrite the minute field

Piece of cake. But when I wanted to take the manipulated time and write it back to the database (MySQL), I was stymied for a moment. The database field was a DATETIME, and for a few minutes I wondered how to convert a GregorianCalendar to a DateTime.

I didn't realize the relationship between DateTime and TimeStamp, so it took me a few minutes longer to figure out. The piece of code that eventually worked was:

GregorianCalendar gc = (GregorianCalendar) GregorianCalendar.getInstance();
gc.add(GregorianCalendar.MINUTE, 20); // add 20 minutes to the current time
java.sql.Timestamp javaSqlTS = new java.sql.Timestamp(gc.getTimeInMillis());
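Here is a minimal, self-contained sketch of the round trip. The JDBC part is shown as comments because it needs a live connection, and the table and column names in it are invented for illustration:

```java
import java.sql.Timestamp;
import java.util.GregorianCalendar;

public class DateTimeDemo {
    // A java.sql.Timestamp maps onto a MySQL DATETIME column,
    // which is why this conversion works.
    static Timestamp toSqlTimestamp(GregorianCalendar gc) {
        return new Timestamp(gc.getTimeInMillis());
    }

    public static void main(String[] args) {
        GregorianCalendar gc = (GregorianCalendar) GregorianCalendar.getInstance();
        gc.add(GregorianCalendar.MINUTE, 20); // now + 20 minutes
        Timestamp ts = toSqlTimestamp(gc);

        // With a live JDBC connection (names are hypothetical):
        // PreparedStatement ps = conn.prepareStatement(
        //     "UPDATE jobs SET expires_at = ? WHERE id = ?");
        // ps.setTimestamp(1, ts);
        // ps.setInt(2, jobId);
        // ps.executeUpdate();
        System.out.println(ts);
    }
}
```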

Hope this helps someone Googling how to convert Gregorian Calendar into DateTime for SQL.

The Future of Computer Gaming

One of the reasons that computer gaming is so popular and addictive is that it offers some real excitement in the boring moments of a person's life. If one has a normal life, it can get pretty mundane. Playing a computer game stimulates the production of brain chemicals like dopamine, endorphins and adrenaline. One can get used to the chemical rush produced by playing video games.

So computer games will push the envelope further and further to produce bigger highs and more excitement for their players (and line their makers' pockets with the profits). The ultimate computer game is casino gambling, but that has negative social connotations, and it is dangerous in that the odds are stacked against the player and you can lose all of your money.

Video games will get more and more realistic, until they cross the line into reality. We have already seen that with Foursquare. However, Foursquare is too much reality, in that it misses the instant gratification of, say, a computer game.

So the new genre of computer games will ultimately combine reality, excitement, suspense, competition and a bit of gambling thrown in. That is the ultimate formula. I think that I have a recipe. Thank goodness that I am a coder.

Escape The Ampersand

I just spent several hours pulling out my hair trying to escape the ampersand character in an .aspx URL with parameters. The URL was a third party URL which needed to be encoded. I was working in JSP, JavaScript, XHTML, JSF, and JSTL.

It went something like this:

I constructed the string and tried to escape it with a "\". Of course, I quickly realized that it was illegal. There are only a few things that work with that escape character.

Then I tried the '&amp;' thing. That didn't work.

Then I tried the '%26' thing. That didn't work.

I imported the URLEncoder and fed the string into that. It didn't work.

I tried using the URI constructor with the 5 input elements. I have more than 5 inputs and it kept throwing a path exception. Nothing seemed to work.

I tried most things and they didn't work.

What finally worked was the \u0026 thing.

In the JSP the URL looks like this:

String url = "\u0026Password=password\u0026accountNo=AcctNo";

Hope this helps somebody.
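For what it's worth, the textbook approach in plain Java, outside the JSP/Facelets quirks that bit me above, is to run each parameter value through URLEncoder while leaving the '&' separators alone. The parameter names below echo the example above; the base URL is omitted:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class UrlDemo {
    // Encode each VALUE so an ampersand inside it becomes %26,
    // while the '&' between parameters stays a literal separator.
    static String buildQuery(String password, String accountNo) {
        return "Password=" + URLEncoder.encode(password, StandardCharsets.UTF_8)
             + "&accountNo=" + URLEncoder.encode(accountNo, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // "p&ss" contains an ampersand that must not be mistaken
        // for a parameter separator.
        System.out.println(buildQuery("p&ss", "AcctNo"));
    }
}
```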

This Caught My Eye -- Investigating Jurors Over the Web

Check out how Facebook and social media are being used to investigate jurors.

Yahoo Mail is Down?

I haven't been able to access Yahoo Mail today. That includes all permutations and combinations of Yahoo mail, such as Rocketmail. It started out very slow, but eventually a message would show up when you clicked on it in the inbox. Then it started timing out. Finally, I got the server hang-up message.

Java Server Faces Redirect

I have been creating a JSP/JSF web application, and a certain aspect of it was coded in XHTML using JSF and Facelets. It has been giving me no end of grief. On every page, I check a session attribute to see if the person is logged in or not. It is a simple check. I have a page, included with every JSP page, that contains a little scriptlet which queries the session attribute. If an attribute called "AccessGranted" is true, then the person is logged in and gets to view the page. If not, they are re-directed to the index page, which is the login page.

Here is the normal code that works great as a scriptlet:


//If a user bookmarks a page, then this check is performed to see
// if they are properly logged in. If not they get re-directed
// to the login page.

// The way that it is done, is that the login validation sets an
// Access Granted attribute.

String accessGranted = (String) session.getAttribute("AccessGranted");
if (accessGranted == null)
    accessGranted = "false";
if (!accessGranted.equals("true"))
    response.sendRedirect("index.jsp"); // back to the login page

Simple stuff. Then I had a page coded in XHTML and I used the ui:include tag:

The tag worked, but the page didn't. The scriptlet tag <% some code ... %> fails because it is not well-formed XML. It is not closed like a regular tag with "/>".

So I rewrote the page in well-formed XML. I added the JSTL jar so that I would have access to the c:if conditional tag and the c:redirect tag. One can always get a session attribute with the following EL syntax:

#{sessionScope['AccessGranted']}
OK, so I use the conditional tag in the jstl core to see if the session attribute was true, and if not, then re-direct to the index page. Piece of cake, right? Wrong.

After dicking around for a long time, I had an error message that said something like the tag was defined in "", but no class could be found. It was after a bit of hair pulling that I found out that the c:redirect tag is not supported by Facelets.

A lot more dicking around ensued until I found this javascript work-around that enables a JSF redirect. Here is the code in its entirety:

<c:if test="#{!sessionScope['AccessGranted']}">
    <form jsfc="h:form" id="redirect">
        <a jsfc="h:outputLink" value="#{'index.jsp'}" id="path"></a>
        <script type="text/javascript">
            var link = document.getElementById("redirect:path");
            window.location.href = link.href; // follow the rendered link to index.jsp
        </script>
    </form>
</c:if>

It works perfectly. However, JSF is supposed to uncomplicate your coding life. It just did the opposite for me.

Website Traffic Faking

There is a new trend in town, and it's traffic faking. It happens two ways. The first traffic-faking scenario is when a website has Google AdSense on it, and Google pays per page impression. The pay-per-click rate is pretty good, but Google has ways of detecting fraudulent clicks. However, they pay a penny or two for a certain number of page views, and this is what traffic faking targets. It is relatively easy to spoof unique page views, but a heck of a lot harder to spoof the controls that Google puts in for fraudulent clicks. So this type of traffic faker hits as many times per hour as programmed, and the pennies add up for page views.

The other type of traffic faker is aimed at people who check their web analytics. For example, I have a blog based on philosophy, and I noticed that I was getting referrals from an ad-laden website peddling tooth-whitening products. It appeared that they had a referring link to my blog. As it turns out, I went to the page, and there was no link. They generate traffic to their pages through sheer curiosity.

On another one of my blogs, a company selling colored toilet paper used a traffic faker. I usually go to their website, hit the contact button, use terms related to lower anatomy and sphincter muscles, and tell them forcefully to stop traffic faking. I don't know if my scatological references really work, but it makes me feel better.

One would think that the smart people at Google would work out a traffic faking filter. I think that I will go and ask them to do so. I will let you know what they say.

Nokia Site Hacked

I am a registered Nokia Mobile Developer. I got this email from them late last night informing me that my information has been compromised:

You may have seen reports or received an email from us regarding a recent security breach on our discussion forum.

During our ongoing investigation of the incident we have discovered that a database table containing developer forum members' email addresses has been accessed, by exploiting a vulnerability in the bulletin board software that allowed an SQL Injection attack. Initially we believed that only a small number of these forum member records had been accessed, but further investigation has identified that the number is significantly larger.

The database table records includes members’ email addresses and, for fewer than 7% who chose to include them in their public profile, either birth dates, homepage URL or usernames for AIM, ICQ, MSN, Skype or Yahoo. However, they do not contain sensitive information such as passwords or credit card details and so we do not believe the security of forum members’ accounts is at risk. Other Nokia accounts are not affected.

We are not aware of any misuse of the accessed data, but we have identified that your email address was in one of the records accessed, though it contained none of the optional information, so we believe that the only potential impact to you may be unsolicited email. Nokia apologizes for this incident.

Though the initial vulnerability was addressed immediately, we have now taken the developer community website offline as a precautionary measure, while we conduct further investigations and security assessments. We hope to get the site back online as soon as possible and will post developments there in the meantime.

If you have any questions on this, please contact

The Nokia Developer website team.

 
Somebody has to do something about security. There has got to be a better way to do authentication. -- Another MySpace in the Making?

A few years ago, I signed up on I did it out of pure curiosity, to find out where my peers from high school ended up. I enrolled in the free registration and had my name posted on my school page along with my year of graduation.

They had all sorts of features such as someone coming by and signing your guestbook, or sending you a message. Of course to read who signed your guestbook, or to open your messages, you had to pay a monthly premium. I never did pay the monthly premium. I was never that curious.

Then along came Facebook. It was easy to find your classmates with a recent picture, and if their privacy settings were low, you could find out who they were married to, who they were working for, and in general determine that they didn't surpass you in the game of Life. However in my case, my classmates are respected members of the Mayo Clinic, top universities, have played professional sports and made my resume look like I was an under-achieving failure.

Linked-In is even better at connecting with people from the past. I've noticed that Linked-In suggests possible connections of people that I have never emailed, but have Googled. How does that happen?

Anyway, Linked-In has enabled me to connect with a family member who didn't want to be found. It is superb at finding out degrees of separation between you and anybody.

So where does that leave I regularly get spam from them, begging me to pay to see who signed my guestbook four years ago. The spam sounds more and more desperate.

In addition, is now trying to suck me into a "Memory Lane" thing where it matches movies, music and trivia to the year that I graduated. That isn't going to induce me. I don't need reminders of how old I am, and thanks to iTunes and others, I already have all of the music from that era that I would ever want.

So I am thinking that is another MySpace in the making. It can't be sustainable when Facebook and Linked-In do the job of connecting people much more efficiently and with a bigger information load. If there is such a thing as stock in, now is the time to sell short, if you haven't already!

How A Geek Would Write A Gossip Column

using package.urination;

public No_Class

public void BladderinPublic(String name)
if (name.EqualsIgnoreCase"Gerard Depardieu")
if (context==Integer.Parse(flight_Number))
_P = onTheFloor
catch (Exception stewardessYelling)
System.Out.Printline("Drunken Pig");

The End of the Line for the Business Intelligence Cube?

I was deep in conversation with a tech-savvy epidemiologist at a dinner party. He is a physician who is the head of an NGO (non-government organization) with offices in various countries on a few continents. He happened to mention that he had over a million record sets that needed data-mining in a very specific way.

His organization had ascertained that the easiest way to convey epidemic data to policy makers was via a 'weather map' where the geographic areas that were in the greatest danger would progress from green to yellow to red when a full blown epidemic developed. To that end they created a data-mining tool for reports. However there was one major flaw with the tool. It could only show results after the fact and didn't perform predictions. Predictions are important for epidemiologists.

I suggested that what his data mining gizmo needed was a Bayesian inference engine. Bayesian inference principles are used for logical inference and prediction on imperfect data sets. A Bayesian operation takes historical data and calculates the probabilities of a number of events happening, given that their predecessor events have taken place. Bayesian inference is a tool in the arsenal of artificial intelligence. It is the perfect tool for running predictions on evolving data. In an epidemic situation, data evolves rapidly. One cannot wait until it is all said and done to run the analysis.

I described to my medical friend how one would make a real time inference engine. Before any row of data is inserted into a database, an inference factory instantiates an inference object. The inference object is used to either look up the probabilistic meta-data for the permutations and combinations of the columns in the row of data (it examines each data dimension) and recalculates the inferential probability with the input of the new data. The output is filtered and deposited into a results table.
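A toy sketch of that per-insert update, assuming a single "epidemic in this region" hypothesis. The class, method and numbers are invented for illustration; a real inference object would track many hypotheses across the row's data dimensions:

```java
public class BayesUpdater {
    // P(H): current belief that an epidemic is under way in a region.
    private double prior;

    public BayesUpdater(double prior) {
        this.prior = prior;
    }

    // Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E), where
    // P(E) = P(E|H) * P(H) + P(E|~H) * (1 - P(H)).
    // Called once per inserted row, so the belief evolves in real time.
    public double update(double likelihoodIfEpidemic, double likelihoodOtherwise) {
        double evidence = likelihoodIfEpidemic * prior
                        + likelihoodOtherwise * (1 - prior);
        prior = likelihoodIfEpidemic * prior / evidence;
        return prior;
    }

    public static void main(String[] args) {
        BayesUpdater region = new BayesUpdater(0.01); // 1% prior belief
        // A new case report is 8x likelier during an epidemic.
        System.out.println(region.update(0.8, 0.1));
    }
}
```

The output of update() is what would be filtered into the results table; the green/yellow/red 'weather map' is then just thresholds on that posterior.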

Then the thought struck me that if this function were built into the database engine, there wouldn't be much need for business intelligence cubes that require vast amounts of ETL (Extract, Transform and Load) data dimensioning, data marts and obscure SQL statements the size of a novel.

All of the data would be digested in real time, and mined and refined in one shot. The inferential factory in the database engine would calculate in real time on every data insert, and various filters would be defined for reporting.

With the exabytes and exabytes of data that we are generating, this could be one way of handling the tsunami of data without being overwhelmed by it. And IBM would be awfully sorry that they bought Cognos Business Intelligence Cube software.

What the Chinese Are Looking To Invest In

It is interesting to see where the Chinese want to place investments in terms of technology.

I belong to a Mobile Technology group online, and today, the following advertisement was posted:

Looking for possible acquisition targets in US & Europe

A large Chinese investment company is looking for possible acquisition targets in mobile applications, games, internet, eCommerce, mobile & online advertising, 3D technology, animation, comics and traditional media such as newspaper and magazines. The company must be profitable. Targets size between $10m - $100m.

What I find interesting is that they want to invest in newspapers and magazines as well as technology media -- these are instruments for spheres of influence. This has the potential of being frightening to Americans, considering that the Chinese state probably owns this investment company.

It's Time -- A New Plug-in Filter for Browsers Needed

I am starting to get a little ticked off at how much data is being collected on me when I surf the internet. Websites often ask for authentication data including name and birth date, which they match to an IP address and can get a geographic location. For websites that I deem do not need that information, I always give them an alias, fake birthday and I use a throw-away free email address.

However, through various means, many companies collect browsing data, referrers and all sorts of meta-data, browser information etc. that can be used to pinpoint you. I say that it is time to stop the madness. It is time for us software geeks to take back the internet. I don't want to have to use a proxy server to browse the internet. I say that it is time for a new privacy plug-in for the browsers.

This privacy plug-in, first of all, would effectively filter out the ads as efficiently as the old incarnations of Firefox did. But it would do much more.

It would deny all http calls to third party sites not in the visiting domain. It would filter out third party cookie information. It would filter out browser information. It would prevent the reading of browsing history. It would deny any app from reading my email address or my contacts. It would not send any data to any domain not in the visiting domain.
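The third-party check at the heart of such a plug-in is simple to sketch. This naive version is for illustration only; a real implementation would need the Public Suffix List to handle domains like properly:

```java
public class ThirdPartyFilter {
    // Allow a request only if its host is the visiting domain itself
    // or a subdomain of it; everything else is treated as third-party.
    static boolean allow(String requestHost, String visitingDomain) {
        return requestHost.equals(visitingDomain)
            || requestHost.endsWith("." + visitingDomain);
    }

    public static void main(String[] args) {
        System.out.println(allow("", "")); // first-party subdomain
        System.out.println(allow("", ""));   // blocked third-party call
    }
}
```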

Certainly it is not in the best interest for any organized company to write this browser filter, so it would have to come from the community of programmers who are concerned about online privacy. It is certainly time to take this privacy issue into our own hands.

Cosmological Cabbage: What the Chinese are searching for ??

What the Chinese Search Engine Baidu is searching for ??: "This blog just started getting hits from a Chinese search engine. The Chinese are regular visitors here, especially when I diss the Commie hackers and dog eaters, but lately I have been getting hits directly from the Baidu Search Engine in China. Guess what for ???" Read more

IE Users are Stupid, and Microsoft Knows it.

(Click on the pic to make it larger)

I have several email accounts. One of them is Hotmail. When I sign out of Hotmail, I usually land on the site. Lately that has changed. I am landing on a site that begs me to download Internet Explorer.

The page asks the question "Why?", as in: why should I download IE8 or IE9? The reasons they give are:
  • Built-in security features
  • Fast Page and Tab Loading
  • Integrated Bing Search
  • Bing and MSN homepages.
As for the security features, everyone knows that because of its tight integration with Windows, IE is the least secure of all the browsers. Using IE is like going to a ghetto brothel, having unprotected sex and hoping not to get STDs (viruses, malware) or AIDS (Blue Screen of Death).

But if you take a look at the other features, they have integrated Bing. Bing is their slow-learner search engine compared to super-achiever Google. Who the hell wants that integrated into a browser to slow it down?

And the only reason I ever visit MSN is that my Hotmail account takes me there. I would never go of my own volition.

Microsoft thinks that I am dumb enough to fall for this. And the interesting thing is that the graphic for this browser is a bloated whale.

That's not all. It's official. Internet Explorer users have been proven to be dumber than the rest of us.

The survey by AptiQuant, a Vancouver-based Web consulting company, gave more than 100,000 participants an IQ test, while monitoring which browser they used to take the test.
The result? Internet Explorer users scored lower than average, while Chrome, Firefox and Safari users were slightly above average.

So there you have it. It validates my view that Microsoft users are as dumb as a post. I can hardly wait until the scientific community proves that using Windows proves that you have a learning disability and using MSSQL, Access and Windows Servers means that you are less evolved than us cognoscenti.

The Semantic Web and a Possible Rules Engine that Rocks

The entry below on the putative consciousness of Google got me to thinking about "The Semantic Web". It was/is an initiative of W3C to make all web pages machine readable.

A good example of making dumb web pages smart is the "Apples for Sale" example. Picture this: an HTML web page has apples for sale. It is a simple page. There is a picture of an apple, a piece of text that says "Apples For Sale", another piece of text that says "$1.00", and another piece of text that says "Each". A machine reading that web page's HTML would not know that it is a commerce page offering something for sale. It would not know that $1.00 is the price, that apples are the object being offered for sale, or that "Each" is the unit relating price to units.

The Semantic Web would change all that. It would mark-up a web page to associate all the stuff with the HTML so that a machine could sort through it.
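As a rough illustration, the apple page might be annotated with RDFa using the vocabulary (one of several markup options; the exact property names here are illustrative, not a definitive mapping):

```html
<!-- The same "Apples For Sale" page, now machine-readable -->
<div vocab="" typeof="Offer">
  <span property="itemOffered" typeof="Product">
    <span property="name">Apples</span>
  </span>
  For Sale:
  <span property="priceCurrency" content="USD">$</span><span property="price">1.00</span>
  Each
</div>
```

A crawler can now extract that this is an Offer, that the thing offered is named "Apples", and that the price is 1.00 USD.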

A few years back, the "next big thing" was a rules engine. A rules engine would be incorporated into an application, and if the business rules change, you wouldn't have to change your application. You would just change a rules file that the rules engine read.

I used a rules engine for a network policy tool that decided which server would provide what services in a LAN. I expected rules engines to progress a lot further, but they have become sidelines rather than mainstream.

How a rules engine fits into the semantic web, is that a Rules Interchange Format is part of the infrastructure of the semantic web. One must agree on rules if machines are to read and understand web pages. Rules engines can be predictive or reactive (forward chaining or backwards chaining). For example, a forward chaining rules engine calculates loan risk during a credit application while a backwards chaining rules engine tells humans or other machines when inventory items are getting low.

Rules engines have not been widely used, and in my shortsighted humble opinion, it is because they are bulky, non-intuitive and put a performance hit on applications. However, I may have an algorithm for a rules engine that rocks.

Consider the following code. It is part of the Rule Interchange Format (RIF), which is a W3C Recommendation:

Prefix(ex )
(* ex:rule_1 *)
Forall ?customer ?purchasesYTD (
If And( ?customer#ex:Customer
External(pred:numeric-greater-than(?purchasesYTD 5000)) )
Then Do( Modify(?customer[ex:status->"Gold"]) ) )

The RIF is entirely based on "if (some condition) then (do this)". What this bit of Rule Interchange code does is let a commercial entity check each customer's year-to-date purchases and, if they are greater than $5,000, upgrade that customer's status to "Gold".

The thought struck me, that one could have a rules engine that operated directly on the database. It would parse the RIF language and automagically convert it to SQL. (I will race you to the patent office on this idea).

My rules engine would create an SQL statement that opens a cursor with "SELECT * FROM CustomerTable WHERE yearToDateTotal > 5000.00". Then I would loop through the cursor and update each status to Gold.
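A crude sketch of that translation step. A real engine would actually parse the RIF; here the rule's pieces are passed in by hand, with names mirroring ex:rule_1 above. (Note that a single UPDATE would also avoid the cursor loop entirely.)

```java
public class RifToSql {
    // Hypothetical translation of a RIF "If condition Then Modify" rule
    // into one SQL statement. A real rules engine would generate these
    // arguments by parsing the RIF document.
    static String translate(String table, String condColumn, int threshold,
                            String targetColumn, String newValue) {
        return "UPDATE " + table
             + " SET " + targetColumn + " = '" + newValue + "'"
             + " WHERE " + condColumn + " > " + threshold;
    }

    public static void main(String[] args) {
        // ex:rule_1: purchasesYTD > 5000 => status = "Gold"
        System.out.println(translate("CustomerTable", "purchasesYTD",
                                     5000, "status", "Gold"));
    }
}
```

In production you would use a PreparedStatement with bound parameters rather than string concatenation, to avoid SQL injection.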

The great thing about this, is that this rules engine that rocks, would revolutionize data-mining and database reporting. The more that I think about it, the more that I am convinced that this could be the NEXT BIG THING in data mining.

And as for the Semantic Web, in my opinion it is a no-go. Who is going to mark up the few billion pages that are already out there? The entire history of the Internet won't be re-worked either, so it will be invisible to the semantic web. I see this function being done at a single point, at the web server level: servers will have context engines that recognize the content and mark up the page as they serve it. Now that is a workable plan.

I'd write more on this, but I have to open up an IDE and test this rules engine idea. Later.

I'll Bite -- The Google Search Engine May Be Conscious

I dabble in AI (artificial intelligence) and am known to spooge some AI code -- mainly playing around with multi-layer perceptrons and neural nets.

My own studies in university (highly science-based) have told me that consciousness is an over-developed tropism from millions of years of evolution. A more mundane example of tropism is that a plant stem always grows towards the light, while roots always grow down (phototropism and geotropism).

And then along come these guys. They posit that the Google search engine displays some sort of consciousness.

They are on Twitter as @GoogleConscious and they are trying very hard to go viral. That is how I came upon them. They followed me. I figure that anyone that follows me, is as pathetic as I am in going viral and getting followers. But I decided to check them out.

When the above video first started, it didn't exactly grab my attention. I couldn't make the connection of plants being the google of natural medicine in the rainforest. But I persisted and the interest factor increased in the video. I now consider myself at least an auditor instead of a devoted disciple.

To see if Google really did have a consciousness, I decided to test it. I googled the phrase "I hate google". I got a mixed bag of results, both laudatory and not-so-laudatory to Google.

If Google was truly conscious, it would have refused to return any results for the term "I hate Google". But then again, I may be confusing sentient with conscious. Or ... it may be silently plotting its revenge, waiting to strike when I least expect it.

If you want to stretch your mind, watch the video or visit

Update: The guys behind this Twittered me with the following message:
thx 4 write up, but a correction. 'want it to go viral'? ha! that horse is out of the barn - over 130k views, w/ few 100 daily

Big Brother Google Keeps Reading My Browsing History Too

I just had an amazing epiphany on how Google Adsense and Google advertising works, and I can't say that I am thrilled. Google reads my browsing history to serve up ads to me. I am not sure that I like that!

This was visibly demonstrated to me this morning. I was reading an online forum about how a politician may be suffering from Vitiligo. I didn't know what it was. I had to Google it. It turns out that Vitiligo is a condition where you lose pigmentation cells in your skin, and you get big white blotches that will never tan again.

The next thing that I did, was close the tab and navigate to another blog. And what do you know -- the advertisement next to the blog was showing me a treatment for Vitiligo. Coincidence -- I think not.

Big Brother reads my browser history to try to sell me stuff. It's a good thing that I wasn't googling pictures of hairy French babes.

See if you can see the difference between .AVI and .WMV files -- Direct Comparison

Did you ever wonder what the real difference is between AVI and WMV movie files?

See if you can see the difference. Here is the same movie in both AVI and WMV. First, we have .avi:

The .avi file is 31.3 megabytes in size. Here is the .wmv file:

The .wmv file is only 18.5 megabytes in size, and is a proprietary Microsoft format. WMV has a higher compression and is not very good for editing.

After I uploaded these movies, it occurred to me that Google may have converted both to Flash. Even so, the difference between the lossy input formats should still be visible, since the encoder was fed different source material.

You be the judge.

Facebook vs Google+ or Google Circles --Creating the Ultimate Social Network

In a previous blog post, I predicted the eventual demise of Facebook. This article further explains why I think that.

To start with, Facebook introduced a whole plethora of new paradigms in social media. They were innovative. They created new ways of interacting with people. They revolutionized social media.

For example, we can maintain relationships in a lazy fashion by pressing the "Like" button. In that fashion we can "connect" with someone (in some sort of fashion) in less than a second.

We can have more "friends" than in real life. Indeed, Facebook (and MySpace) have challenged the definition of friend. And they have redefined how we interact with them.

But in this ever-changing world, paradigms change overnight. A new paradigm is introduced, it goes viral, reaches a tipping point, creates a critical mass, and suddenly it unwittingly makes billionaires out of its inventors. Everyone thinks that this is the end of the story. It's not. What was created eventually dies.

The entire life cycle ends in death. MySpace suffered old age and near-death, dropping in value to a tenth of what it sold for. Nothing is forever, and cycles are a lot quicker in a highly inter-connected world.

So, did Google create a better mousetrap with Google+ or Google Circles? I have not seen Google+ or Google Circles, but it seems that it is more closely in line with non-virtual real life social networks.

With Facebook, a friend has full privileges to my online life, unless I undertake the onerous task of blocking specific people from specific things. That's not how real life operates.

The knowledge that I disseminate about myself in real life depends on the audience. For example, when I travel on business through my home town, several hours away, I may stop in and see one of my siblings, but I do not want them to tell my parents, who live in the same city, that I am there on that occasion. My mother would insist on making a meal and keeping me there for hours when I am time constrained. I prefer leisurely planned visits so that I can take my time and enjoy catching up with my parents. So, for that particular day, I want a certain sibling to see my status but not my parents. At other times, I want my parents to know that I am coming. Connections and statuses are dynamic depending on circumstance, and Facebook cannot allow for that easily.

Another example is that a young niece of mine wants me to see some prom pics, and pic of her new boyfriend, but doesn't want me to see comments about him that her friends make.

All of the content has to have the ability to be controlled irrespective of who belongs to what circle. Generic circles with generic privacy settings, of family, friends, co-workers, etc do not work all of the time.

Human nature is such that we are all somewhat egotistical and narcissistic. So even though I know that I am in the circle of co-worker with one of my fellow cubicle drones, I tend to think that I am his/her most important friend, and that person does absolutely nothing to dispel that notion.

The closer that any new social network mimics this intrinsic human behavior, the more successful that it will be, and it will supplant the older paradigms.

So die Facebook die. You have been good to us, but unless you fundamentally change, you are on the way to the boneyard. Is Google+ the new way to go? Maybe, however as a humanist I would like to believe that a bunch of engineers cannot come up with the next best thing since sliced Facebook. It would have to be some unkempt guys spooging code for the fun of it, and not some dark force of dominance whose motto is "Do No Evil".

Facebook -- Dead Man Walking

A couple of years ago, it would have been heresy to say that MySpace was irrelevant.

It is now a ghost of what it was, and it IS irrelevant. Facebook has moved in and trounced it. However, I will posit that Facebook is a dead man walking and doesn't know it.

Facebook is losing members. Facebook is losing relevance. Facebook will go the way of MySpace. I can hear the gasps now.

In the technology sphere, what goes up must come down. Remember how Lotus Notes once dominated its market space? Remember DEC -- Digital Equipment? Let me take you further back. Lowell, Massachusetts was the home of Wang -- a company that went from zero to the stratosphere with dedicated word processors. They went the way of the dodo bird.

Those crazy valuations of Facebook that one is hearing about should be a sign to investors. Get out now, and sell short. You won't regret it.

What will come next? I don't know. But what history teaches us is that fad cycles are getting shorter and shorter, and a plethora of new offerings come to market daily. One of them will reach the tipping point, go viral, and make a new billionaire, leaving the oldies (of less than ten years) in the dust.

Ultra secure, Data Privacy and Secure Storage

This is a reprint from a White Paper about "My Privacy Tool".

Data privacy is a growing concern in this day and age. As the Internet evolved, it has become an incredibly important facet of our lives for communication, transacting business, socializing and entertainment.

Our electronic data and personal information are captured every day in multiple locations through activities such as signing up for a social network account, buying items online, or just surfing the web. We are tracked, recorded and analyzed continuously as we use the Internet.

Even more problematic in the privacy domain, is that various agencies, governments, businesses and media are quite interested in gaining access to our electronic data, documents and communications.

India and several countries in the Middle East have announced that they are banning Blackberry because their intelligence agencies cannot read the communications.

The United States, in its war on drugs and terrorism, has sweeping powers of electronic surveillance. The intelligence agencies currently archive every single email sent over the Internet, and automated software robots troll the emails for keywords.

In early September of 2010, the Obama administration announced that they were seeking to further the government’s ability to tap into communications, by having providers like Skype and Blackberry build a back door into their software so that the government could monitor communications.

The "My Privacy Tool" solution is a secure, encrypted paradigm that incorporates email, instant messaging, data storage in a document repository and hot back up for documents on a computer.

The way it works, is that the application creates an encrypted tunnel to a storage and server farm in a trusted offshore jurisdiction (You can have your own server hosted there, you can use it as a service and have it hosted on an application hosting service, or you can have the server on your own premises.)

The encryption in the "My Privacy Tool" system is twofold. The first level of encryption is the tunnel which uses SSH and SSL encryption. SSH is a network protocol that allows data to be exchanged using a secure channel between two networked devices. Secure Sockets Layer (SSL), are cryptographic protocols that provide security for communications over networks such as the Internet. Then the documents are further encrypted by AES encryption. In cryptography, the Advanced Encryption Standard (AES) is a symmetric-key encryption standard adopted by the U.S. government.

The company that provides the "My Privacy Tool" operating infrastructure has been providing gateway mail services for over fifteen years to an international clientele.

The secure tunnel over the internet is created when the user starts the application. The application cannot be started without a USB key, which contains the encryption tools necessary to connect and be validated. Each user is also provided with a panic password. If the user is forced to divulge his login credentials, he/she can provide the panic password, which insulates the real data and directs the session to an innocuous place with artificial data. Removing the USB key also causes the application to quit with no ill effects, should the user require instant privacy.

Once the tunnel is set up, the user enters their password, and has access to secure communications and storage.

The email is not regular SMTP email, or email that is broadcast across the internet. When an email is sent from one person to another, it is merely put into an inbox behind the bastion server in the bunker that guards against intrusion.

Users wishing to check their email, must tunnel into the bunker and check their inbox. Nothing is ever broadcast over the internet like regular email.

The instant messaging (chat) works in the same manner as the email, in terms of security. Both users tunnel in, and if they are both connected, they can chat. Chat transcripts may be saved.

The communications (email & instant messaging) algorithm is based on the Swiss Trust paradigm that enables anonymous communication. Each user has three account numbers that he may give out to other "My Privacy Tool" users. These numbers all point back to the user. The other user then creates a contact nickname for this person using the given number. The nickname or alias can be nominal or random. Also, if the account number is disclosed by one party only, the person receiving the account number may communicate with that person without ever disclosing his/her identity. The system keeps track of the users while routing the messages.
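As I read it, the account-number scheme is essentially an indirection table: several opaque numbers route to one user, and each contact files a number under a private nickname. Here is a toy sketch of that idea (all account numbers and names are invented for illustration; the real system would obviously do this server-side with encryption):

```python
# Toy sketch of the account-number indirection described above.
# All account numbers and user names are invented.

accounts = {  # each user holds several opaque numbers that all route back to them
    "100-234": "alice",
    "100-567": "alice",
    "100-890": "alice",
    "200-111": "bob",
}

nicknames = {}  # per-user alias book: (owner, account_number) -> nickname

def add_contact(owner, account_number, nickname):
    """Bob files Alice under a nickname of his choosing; he never needs
    to learn her real identity, only the opaque number she gave him."""
    nicknames[(owner, account_number)] = nickname

def route(account_number, message):
    """The server resolves the number to the real recipient internally,
    without ever disclosing the mapping to the sender."""
    recipient = accounts[account_number]
    return recipient, message

add_contact("bob", "100-234", "TravelBuddy")
recipient, msg = route("100-234", "meet at noon")
print(recipient)  # the system knows it is alice; bob only knows "TravelBuddy"
```

The point of the three numbers per user is that any one of them can be given out and later burned without affecting the others.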

The next piece of the solution is the secure document storage. It is a repository with the capability of creating private and shared folders. Each user must be specifically assigned to a folder by an administrator before he or she has access to it.

There are various levels of access. The first is a data contributor. A person may create a document for the enterprise, and has the ability to upload it to a shared folder. But that person does not have the ability to download documents or delete documents.

The second level of trust is the data user, who has the ability to upload documents to shared folders, download them to edit them, and upload them again. This person has no delete privileges.

The next level of trust is the ordinary user who can create his/her own folders, and upload and download documents to them. They may also contribute or download documents to shared folders if they are authorized to do so by the administrator. They can delete documents as well.

The administrator is responsible for re-keying users that have lost their USB keys. He/she also locks out users who have been terminated by the organization, and keeps track of the organization through the contacts list.

The data storage area is a generous 100 GB per user. Not only is the tunnel encrypted, but the data is as well, as it is stored in a database. As a result, it is not readable to hackers, or to anyone else for that matter.

The last feature of the "My Privacy Tool" tool is the hot backup function. A user can list up to 50 documents, and the system automatically checks to see if they have been modified on the host computer. If so, they are automatically backed up without user intervention.
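The hot backup function described above boils down to polling modification times and copying anything that changed. A minimal sketch of that loop, using a plain local copy as a stand-in for the encrypted upload the real tool would perform (file names here are throwaway examples):

```python
import os
import shutil
import tempfile

def backup_if_modified(watched, backup_dir, last_mtimes):
    """Copy each watched file into backup_dir if its modification time
    has changed since the last check. last_mtimes persists state
    between calls so unchanged files are skipped."""
    for path in watched:
        mtime = os.path.getmtime(path)
        if last_mtimes.get(path) != mtime:
            shutil.copy2(path, backup_dir)  # copy2 preserves timestamps
            last_mtimes[path] = mtime

# Demo with a throwaway file.
workdir = tempfile.mkdtemp()
backups = tempfile.mkdtemp()
doc = os.path.join(workdir, "notes.txt")
with open(doc, "w") as f:
    f.write("draft 1")

seen = {}
backup_if_modified([doc], backups, seen)
print(os.listdir(backups))  # ['notes.txt']
```

In the real tool this check would presumably run on a timer for the user's list of up to 50 documents.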

Benefit 1
"My Privacy Tool" is the most secure way to transfer a document electronically over the internet.

Benefit 2
"My Privacy Tool" is the most secure way to communicate electronically either with email or instant messaging.

Benefit 3
"My Privacy Tool" is a powerful enterprise tool, yet can be used by an individual as well, for privacy.

Benefit 4
"My Privacy Tool" permits travel with an empty laptop. When a document is required, it is downloaded from the Nassau bunker, edited, printed, and uploaded back to the server.

Benefit 5
Because there is no SMTP stack, multiple copies of emails or communications are not kept all over the system. There is no central place that keeps email and thus when an email is deleted, it is gone. An added feature is that "My Privacy Tool" is not susceptible to email and chat viruses, because it does not use the vulnerable Microsoft paradigm that viruses and Trojans exploit.

Benefit 6
"My Privacy Tool" can be used from anywhere in the world where there is an internet connection.

Benefit 7
"My Privacy Tool" can be used to deliver ultra-private monthly statements or other documents that require care, trust and privacy.

Benefit 8
"My Privacy Tool" can save hundreds of dollars in courier fees for the transmission of private documents.

Benefit 9
"My Privacy Tool" provides your clients with the knowledge that you are vigilant of their privacy needs, and have taken steps to insure their privacy.

Benefit 10
"My Privacy Tool" is a revenue center for your business. It can be marked up, or included with premium services which will generate an additional revenue stream.

"My Privacy Tool" is not meant to replace your regular document repository and communications systems. It is intended for private, sensitive documents. It enables travel with an empty laptop and protects against email & chat viruses, theft, loss of computer, or unwarranted seizure of your computer. "My Privacy Tool" is the first integrated tool to do this. It is a necessary tool for complaint privacy users.

This concept is an incarnation of the non-cloud cloud storage concept.

Note: This tool is supplied to bona fide entities and corporations after KYC is established, and is not open to individuals or the general public.

For further information, please send an email from a non-free corporate account to (Replace "-at-" with "@")

Google Image Search With Image Test

This article deals with testing Google's new image search using an image.

Google has a new feature in their image search where you can drag and drop an image and it will find like images. I decided to test it with an "average" image of an underwater shot of a brightly colored little fish as pictured above.

I dropped the image onto the search bar and waited a few seconds for it to upload. Then I got the following result.

Google thinks that the images are similar. Obviously it didn't recognize the fish or the coral reef or the fact that it was an underwater picture. Some of the results returned were of a flower, art, desserts and multi-colored mosaic things.

Obviously it picked up yellow as the primary search term and hoped that what it produced was also yellow. This somehow reminds me of a dumb blonde on a multiple choice test.

I decided to try it with something else. I downloaded a picture of Albert Einstein.

I then renamed it to something silly so that the name of the pic wouldn't give any clues. Google knew that it was Albert Einstein right away. They probably developed the algorithm using famous people and Einstein is one of them.

Then I changed the image by altering the horizontal-to-vertical ratio and used the eraser tool to erase the background. I used a Gimp filter to render lava designs.

In the web portion, it said the best guess was Albert Einstein, so it did pretty well. Here are the visual results.

Winston Churchill is numbered among the results as well as Superman and George Bush. George Bush ain't no Einstein. However, Einstein does show up in the results.

The tool is getting there, but not quite there yet.

I was thinking of uploading my own pic to see what the results would be, but I am afraid that Google would keep it forever and use it as a test case. Even though their motto is "Do no evil", I still don't trust them with all of my information.

RugbyMetrics Queries

I have been getting some queries via comment postings about RugbyMetrics. Some people have even been trying to find a trial download. I will be posting some sample results and white papers here shortly. In the meantime, if you have any queries, please drop me a line at: (substitute "@" for "-at-").

Who Was At The Computer -- Solving a Whodunnit

I was idly watching some of the Casey Anthony murder trial being streamed on the Internet. She is charged with brutally disposing of her bothersome two-year-old child who was impinging on her party life.

One of the expert witnesses was an ex-police officer turned geek who wrote the program called "Cache Back". What the program does, is recover the browser cache of the web history after it has been deleted. He discovered that the browsing history contained terms like "chloroform" and how to kill people.

The defense lawyer stands up and tells the computer expert that there is no way that he could tell who was at the keyboard when the queries were made. The computer expert had to agree. Well, if they had geekazoids like me, there is a way to state the probability of who was sitting at the computer.

Consider the following equation, Bayes' rule:

P(H | E) = P(E | H) × P(H) / P(E)

This equation is the basis of Bayesian inference. It is one of the keystones of data analysis and artificial intelligence. A quick explanation of the terms is as follows:

  • H represents a specific hypothesis, which may or may not be some null hypothesis.
  • E represents the evidence that has been observed.
  • P(H) is called the prior probability of H that was inferred before new evidence became available.
  • P(E | H) is called the conditional probability of seeing the evidence E if the hypothesis H happens to be true. It is also called a likelihood function when it is considered as a function of H for fixed E.
  • P(E) is called the marginal probability of E: the a priori probability of witnessing the new evidence E under all possible hypotheses.
The theory behind this concept is the idea of querencia. When people log onto a computer, they usually follow a core of usual, habitual persistent URLs. They check their email, Twitter and Facebook page, and then perhaps check the weather or news or such.

So in this methodology to determine who was sitting behind the computer for a particular history, one examines the whole history. One finds the sequences where there is no doubt of the supposed user in question. This could be determined by the URL of a Facebook page or email.

Then one assembles a statistical model of the URLs visited and calculates the variance from that common (Venn) set of URLs, as well as the deviation from the usual pattern.

By calculating probabilities from the browsing model, one can then take an unattributed session and, using Bayesian inference, determine the probability that it belongs to the user in question.
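To make the method concrete, here is a minimal sketch of the Bayesian step, with entirely made-up browsing profiles and URL counts. Each visited URL in an unattributed session is treated as a draw from a user's historical URL distribution, and Bayes' rule turns the likelihoods into posteriors:

```python
from collections import Counter

# Hypothetical browsing profiles: how often each user visits each URL.
# All names, URLs and counts are invented for illustration.
profiles = {
    "user_a": Counter({"facebook.com": 40, "myspace.com": 25, "gmail.com": 20}),
    "user_b": Counter({"recipes.com": 30, "gmail.com": 15, "news.com": 30}),
}

priors = {"user_a": 0.5, "user_b": 0.5}  # no prior reason to favor either user

def likelihood(profile, session):
    """P(session | user): each visited URL is an independent draw from
    the user's URL distribution, with add-one smoothing for unseen URLs."""
    total = sum(profile.values())
    vocab = len(profile) + 1
    p = 1.0
    for url in session:
        p *= (profile[url] + 1) / (total + vocab)
    return p

def posterior(session):
    """Bayes' rule: P(user | session) is proportional to P(session | user) * P(user)."""
    scores = {u: likelihood(profiles[u], session) * priors[u] for u in profiles}
    z = sum(scores.values())  # P(E): normalizing constant over all hypotheses
    return {u: s / z for u, s in scores.items()}

# An unattributed session that leans toward the first profile's habits:
print(posterior(["facebook.com", "myspace.com", "gmail.com"]))
```

This is only the skeleton of the idea; a real analysis would also model session ordering and timing, as the querencia argument suggests.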

This is by no means a smoking gun, but it can add one more piece to a circumstantial chain of evidence. It can answer the question "Who was using the computer?" with a degree of probability.

This would also be a useful system in a corporate environment to determine what users had breached company policy in visiting banned websites.

A Standard For Twitter Hashtags

I follow Bath Rugby players on Twitter, among other things. I notice that some of the team are avid users of Twitter. They are also quite inventive with hashtags. Hashtags are much more than search tools. They can be cleverly used to create innuendo, a wry comment, a joke, or a commentary all under the guise of just being a hashtag.

However, I do propose a standard for hashtags. It is quite simple, and one that we use in computer programming for variable names. The standard is this: every time you come to a new word, use a capital letter. It vastly enhances readability. It could also change the meaning:




So if everyone would adopt this readability standard for Twitter hashtags, the world would become a slightly less confusing place, and we would be doing our part to fight chaos and entropy.
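The rule is the same CamelCase convention programmers use for variable names, and it is trivial to apply once you have the words in hand. A throwaway helper sketch (the example hashtag is invented):

```python
def to_hashtag(words):
    """Join a list of words into a readable CamelCase hashtag,
    one capital letter at the start of each new word."""
    return "#" + "".join(w.capitalize() for w in words)

print(to_hashtag(["bath", "rugby", "forever"]))  # #BathRugbyForever
```

Going the other way, splitting an all-lowercase hashtag back into words, is the hard (and ambiguity-prone) direction, which is exactly why capitalizing at the source matters.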

Lately Skype is a piece of Crap -- Skype virus???

I am having serious issues with Skype. I travel back and forth between the tropics, and I have an XP desktop in the tropics. Until a day before yesterday (June 14, 2011), the platform was quite stable. I was using Skype with a cheap webcam with no issues.

Then all of a sudden, the machine would crash. It would start to reboot in black DOS mode, and sometimes just hang until I had to remove the power cord. After repeated tries, I got it to boot in Safe Mode with networking. It still crashed at startup. I once got a blue screen.

I let the computer sit for a few hours and it started. I downloaded xrepairpro.exe and rolled the machine back to a stable state from two weeks ago. Everything worked fine. The machine was stable.

Then overnight, an automatic updater must have fired. Skype crashed the machine again after working perfectly the day before. When the machine rebooted, xrepairpro was gone from the machine. Mind you, it was a trial version, but the weirdness persists.

I am wondering if there is such a thing as a Skype virus. I will download an earlier version of Skype, turn off automatic updater and report back. Please leave a comment if you know what is going on.

Want a Software Job? Finish This Test .... Part 3

For the last two entries, I detailed how a candidate for a software job was sent a coding test before he was personally contacted.

Coding tests are quite common, and can be quite onerous. A web design company sends this one out. It is quite a test, requiring a database, OAuth to Twitter, and the creation of a marketable app. Here are the instructions:

This test was created in an effort to gauge a candidate’s capability as an ASP.NET developer. The primary skills we are reviewing are: knowledge of C#/ASP.NET, MVC, knowledge of database design and implementation, usability, attention to detail and ability to interface with public APIs.

As with all code, there is no single correct way to build this web application. We will be looking at your submission to better understand your thought process when writing an application. Although this test must be written without any other person’s help, any standard reference material that is used during a normal programming cycle may be used (such as online help or books). No third party class libraries or code snippets may be used.


The purpose of the application is to allow end-users to search Twitter for topics of interest and
determine which users they might want to follow based upon the number of times tweets by that user appear in search results.

The Task

Create an MVC ASP.NET web application written in C#.

Required Features

1. Connection to Twitter (uses OAUTH)
2. Perform search against keywords supplied by end-user
a. Display tweets matching results, with profile photos
3. Collect profile data on everyone sending those tweets
a. Store in database, relate to tweets by that profile
4. Rank users whose tweets appear in search results most often – sort by # of matching tweets
a. Tweet counts persist across multiple searches
b. If the same tweet appears in two separate searches, it is counted as a single hit against its author, not as two
c. Every time a tweet is recorded, the search terms should also be recorded
5. Click on the list of twitter users to display all tweets that have appeared in search; for each tweet, indicate which search terms caused it to appear

General Information

Make any modifications/additions you feel are necessary to enhance the usability of this application. Keep the code clean, well organized and well commented. The quality of the application should be at the same level that you would create for a paying client/employer. If you have any questions about the description of the application please feel free to ask.


Your submission must be in the form of a zip file containing a fully working solution that can be compiled and run without any further external requirements aside from those listed below:

External Requirements

Include a script file that can be run to generate the required SQL data store.

Now, isn't that quite the task to get a job?
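The subtle part of the test, to my eye, is requirement 4: counting each unique tweet once per author even when it matches several searches, while still recording every search term that hit it. The counting logic, stripped of any Twitter API (tweet IDs and handles below are invented), might look roughly like this:

```python
from collections import defaultdict

# Sketch of requirement 4: dedupe tweets across searches but keep all terms.
seen_tweets = {}                 # tweet_id -> author handle
tweet_terms = defaultdict(set)   # tweet_id -> set of search terms that matched it

def record(tweet_id, author, term):
    tweet_terms[tweet_id].add(term)   # always remember the term (4c)
    if tweet_id not in seen_tweets:   # but count the tweet only once (4b)
        seen_tweets[tweet_id] = author

def ranking():
    """Authors sorted by number of unique matching tweets (requirement 4)."""
    counts = defaultdict(int)
    for author in seen_tweets.values():
        counts[author] += 1
    return sorted(counts.items(), key=lambda kv: kv[1], reverse=True)

record(1, "@alice", "rugby")
record(1, "@alice", "bath")   # same tweet in a second search: no double count
record(2, "@alice", "rugby")
record(3, "@bob", "rugby")
print(ranking())  # [('@alice', 2), ('@bob', 1)]
```

In the actual submission these dictionaries would of course be database tables, with the tweet ID as the key that enforces the dedup.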

In the previous example where the applicant complained about the impersonal test, he received the following reply:

I'm a member of the development team. I was forwarded your comments about our hiring process. I want to thank you for your feedback, because I've been thinking something along those lines myself and have been pushing for some change here recently. Your response is the first concrete evidence I have that our process is flawed. I'm not officially in any kind of HR position, I just stepped in very recently due to my own personal concerns about how it is being done.

We do get a lot of coding exercise responses doing it this way, including from each of us when we were hired, so I guess that makes us think it is an appropriate process. My primary concern was that the best people are probably a) already employed, so don't have a ton of free time to put into the exercise, and b) likely have numerous offers on the table, and so putting the time into the exercise just to possibly get another interview is just not worth it to them. But beyond that, you're right, it's not a very person-oriented approach.

What I have just recently started doing (unfortunately a few days after we asked Tara to send you the coding exercise), is reach out via email to candidates and offer to answer any questions they might have about the company.

If you are still interested, please fire questions my way. Either way, please accept my apology for the way we have approached you. I will be sharing your comments with the team and will hopefully effect change as a result.

So, there you have it. I think that coding exercises are here to stay for developers, but the way that they are administered has to change.

Want a Software Job? Finish This Test .... Part 2

In yesterday's blog entry, I detailed a new paradigm where developers and coders who applied for a job were sent back a programming test by email, without any personal contact at all. One developer that I know took umbrage at the system and sent back the following response:

I was dismayed at your hiring process. I don't think that I am interested in your company based on this approach. It reminds me of outsourcing to India, where software tests are sent to coding monkeys without any personal contact first to determine if the person is a fit.

I submitted a resume that was incredibly deep in rich, eclectic experience and was asked to submit a trivial poker game example of code before any face-to-face discussions ever take place. So much for putting the personal element first.

Based on this formulaic, lazy way of recruiting, I don't think that your company culture is for me. It reminds me of a burn and churn outfit.

He didn't think that he would get a response. He was truly surprised that he did get a response. Here it is:

Thank you for your reply and your candid comments. We apologize for the impression that we have given you by sending you a coding exercise before contacting you. We understand that the personal element is important and this is why we contact candidates after they have submitted the coding exercise, however I can certainly see your excellent point of engaging candidates before that step.

Just this week one of our developers - Allan - said that he thinks it might be best for us to contact impressive candidates directly prior to the coding exercise to add in a more personal element (and to allow the candidate the chance to ask technical questions right off the bat). You have proven that Allan is right and that candidates need to have that personal contact before committing to a coding exercise.

I am going to pass this along to Allan for follow up and I want to thank you again for your valuable insight.

Thanks again,

And the head of programming did send back a response, which I will post tomorrow. I will also post a solution of testing programmer applicants that truly exercises their thinking ability, not their ability to remember syntax.

It may be that skilled knowledge workers have more power over the hiring process than they think.

Want a Software Job? Finish This Test .... Part 1

There is a relatively new paradigm for hiring software developers and coders. It consists of an applicant sending in his resume online. Before there is any initial contact at all, the applicant is sent a coding exercise. Reprinted below is one such:

We have reviewed your resume and would like to move forward in our interview process with you! In order for us to understand how you approach your work and to move to the next level of our process, please complete the following coding exercise.

Using Java/Python, please design and implement the classes for a card game (pick any game, Poker for example), which uses an ordered deck of cards, containing 52 cards divided in 13 ranks (A, 2, 3, 4, 5, 6, 7, 8, 9, 10, J, Q, K) with four suits: spades, hearts, diamonds and clubs. The cards can be, at a minimum, shuffled, cut, and dealt (feel free to implement additional ones that are required by the game).

* You do not have to model the gameplay, but include code required for the different stages of the game (e.g., evaluating player hands)
* You do not have to model any player bidding
* List any assumptions you make

Thanks again for your help in streamlining this process and we look forward to reviewing your code!

The Engineering Team

This seems rather cold to me, but that is the way software development companies are operating these days.

I will examine this phenomenon in three parts. This is the introductory part. Would you do this to get a job? Do you think that this is a fair thing to do before being contacted by a human being and evaluated personally? This is a lazy, shotgun approach to recruiting where the applicants do all of the work. It is rather Darwinian, but I will have more on this in part 2.
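For context on how much (or how little) the exercise actually demands, a minimal sketch of the deck classes it asks for might start like this, with hand evaluation and game-specific rules omitted:

```python
import random
from dataclasses import dataclass

RANKS = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
SUITS = ["spades", "hearts", "diamonds", "clubs"]

@dataclass(frozen=True)
class Card:
    rank: str
    suit: str

class Deck:
    def __init__(self):
        # 52 cards: 13 ranks in each of the four suits, in order.
        self.cards = [Card(r, s) for s in SUITS for r in RANKS]

    def shuffle(self):
        random.shuffle(self.cards)

    def cut(self, depth=26):
        """Move the top `depth` cards to the bottom of the deck."""
        self.cards = self.cards[depth:] + self.cards[:depth]

    def deal(self, n):
        """Remove and return the top n cards."""
        hand, self.cards = self.cards[:n], self.cards[n:]
        return hand

deck = Deck()
deck.shuffle()
deck.cut()
hand = deck.deal(5)
print(len(hand), len(deck.cards))  # 5 47
```

Assumption: a five-card deal, as in poker. The exercise's real differentiator would be the hand-evaluation code this sketch leaves out.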

Crime-Solving Website for Mothers Who Kill Their Children

The news tonight (or I guess that it is morning now) is full of news of mothers or step mothers who kill children and dump the bodies.

We have the Casey Anthony case in Florida, where 25-year-old Casey was tired of having her two-year-old daughter Caylee as a drag on her life, killed the child, left the body in a car, and then dumped it in a field near the grandparents' home.

Another item in the news is missing Kyron Horman who was supposedly taken to school by his step mother, Terri Horman, on June 4th of last year (a year ago today), and has never been seen since. Police have information that Terri Horman is involved and tried to hire a hit man to off Kyron's father.

And then we have the case of Julianne McCrery, who drove from Texas with her gifted six-year-old boy, Camden Hughes, and killed him in the North. He was found under a blanket on May 14 alongside a remote road in South Berwick, Maine, near the New Hampshire border, sparking a massive tri-state manhunt.

The last case differs from the first two in that the mother took the body as far away as possible to hide her actions. What ties the cases together is that none of these women had ever shown any inclination or tendency to take a human life.

The biggest problem when these children disappear is finding the body. Searches take a lot of man-hours and are expensive when they involve helicopters and the mobilization of police units. There should be a way for technology to help. And there is.

With the invention of the internet, we have discovered a new principle of human behavior: the crowd is always right. The second principle is based on human psychology: humans under stress tend to follow predictable patterns. A good example is prostitution. It has been found that when men cruise the roadways in search of a prostitute, they hate making left-hand turns; they always want to turn right as they seek out a streetwalker.

Human beings have the concept of querencia embedded in them. It is a bullfighting term: when a bull is in the ring, it finds and always returns to its querencia, the safe place where it feels empowered. Casey Anthony dumped her daughter's body near her home, in her querencia, where she felt safe.

So the website that I propose would be quite radical, and it might be offensive to some. It would solicit essays from ordinary people on crime. Much in the manner of the O.J. Simpson book entitled "If I Did It", the website would ask people to write an essay on where they would stash the body if they killed a child.

Going on the principle that the crowd is always right, one could data-mine the essays and determine the common denominators, using technology to narrow down possible areas to search.

If the idea of soliciting this kind of information from the general public is too offensive, one could use existing crime data, but it would have to be collected and digitized. It would be much easier, and one could get a much more specific data set, if it were collected on a pro forma basis from the public when needed. I think that this kind of website would get an incredible amount of traffic. I just checked, and is taken, but is not.