Free Open Government

November 19, 2006

I have a dream in which governments are as free and diverse as computer operating systems. Through open technological innovation and intelligent specifications, we could construct governments that we choose to participate in, unbounded by physical land, where decisions always trickle up from each citizen (sorry, I cannot be represented by any individual but me), and where money flows and policy changes are transparent and required by law to be easy to understand, access, and search. Here’s what I worked on yesterday: it defines how to join the virtual nation. Please feel free to comment on concerns and add suggestions. I’ll try to put it in a wiki if enough people are interested.

FOG project #1: a minimal democratic virtual nation

Rules of Citizenship

  • to become a citizen, you need the nomination(s) of current citizen(s). The nomination requirements shall satisfy the following:
    1. the requirements create an incentive to nominate only those individuals who do not yet have citizenship.
    2. the details of the requirements are determined by the people through a [collaborative process].
  • citizens may hold citizenships in other nations, but only one in this nation.
  • citizens may lose their citizenship if they do not satisfy the conditions of citizenship decided by a [collaborative process].

Collaborative Process

work in progress


decentralized distributed p2p services

October 24, 2006

Good things are often distributed. You were able to find this blog page quickly because DNS is distributed. The internet itself is by nature distributed. Editing articles on Wikipedia is distributed, and for my purposes it is the best source of information online.

Some distributed things are also decentralized. Take, for example, BitTorrent. Just last week, I was able to download 3 gigabytes of my favorite linux distro within an hour. The ‘torrent file’ that bootstrapped my download pointed to a central tracker server, but the data itself did not have to reside on any one server.

I am interested in decentralized P2P systems. What is necessary to make decentralized services robust, secure, anonymous, and responsive? What are the characteristics of services that could do well as decentralized services? For example, will search engines become decentralized? I mean, will Google search or Yahoo search ever be replaced by a decentralized P2P service that is robust to spoofing, responsive, anonymous, and ad-free? I believe search and many other services can become decentralized, and thus become better than their centralized counterparts; it’s just a matter of finding the right implementation to satisfy all the independent concerns.

On a slight tangent, here is a search service that integrates user contribution. Check out my idea swicki. I’m still experimenting with it, so please feel free to join in and/or comment about it here.


structured GET requests

October 24, 2006

HTTP has a problem: due to the representation of URLs and the specification of GET requests, it is unwieldy for encoding structured request parameters, long or numerous parameters, and parameters with certain characters. Take, for example, this ideal request object that is consumed and processed by an ecommerce website:

    <request-document title="my shopping cart">
        <name>Jack Gardener</name>
        <filter key="item-name" op="regex">.*battery.*</filter>
        <filter key="item-view-date" op="greater">12-12-2006</filter>
        <skin>Milky White v1.0</skin>
    </request-document>

You can see that this request is inherently a GET request; it fetches a cacheable document with no side effects. The request object is also inherently structured and belongs in XML form. Herein lies the problem. The usual approach is to fit the request parameters into a single line in URL-escaped form. I think it would also be possible to fit the request XML object into the body of a POST request, but I believe most browsers cannot do this well, at least not with HTML forms (also, the request is not inherently a POST request, so this would be a bad idea).
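To make the unwieldiness concrete, here is a quick sketch (in Python, purely for illustration; the host name is made up) of what that request looks like once crammed into a single URL parameter:

```python
from urllib.parse import quote

# The structured request from above, flattened to one string.
request_xml = (
    '<request-document title="my shopping cart">'
    '<name>Jack Gardener</name>'
    '<filter key="item-name" op="regex">.*battery.*</filter>'
    '<filter key="item-view-date" op="greater">12-12-2006</filter>'
    '<skin>Milky White v1.0</skin>'
    '</request-document>'
)

# The usual workaround: URL-escape the whole thing into one GET parameter.
url = "http://shop.example.com/cart?q=" + quote(request_xml, safe="")
print(url)
# Every angle bracket, quote, and space becomes escape soup, and real
# requests are much longer than this toy one.
```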

SOAP could be used to represent request parameters for websites, but it doesn’t solve the caching problem. Also, it’s a long way from being incorporated into browsers and web servers; it’s too abstract and transport-independent to get widespread adoption. I may be wrong, but when it comes to adoption, I’m a strong believer in evolution (like IP > TCP > HTTP), not intelligent design. Just ask these guys.

Here is an idea: the client POSTs the request object to the server, and the server redirects the client to either (1) a URL that includes a session parameter or (2) a URL that includes a parameter representing the request object, possibly in compressed, base64 form. In the first case, the server is responsible for remembering the request object; in the second case, it doesn’t have to. Either way, the request is completely cacheable at all levels as long as the browser or client can associate the request object with the redirect URL.
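A minimal sketch of case (2), in Python for illustration (the exact encoding, zlib plus URL-safe base64, is my own choice, not part of any spec):

```python
import base64
import zlib

def encode_request(request_xml: str) -> str:
    """Pack the request object into a URL-safe token for the redirect URL."""
    packed = zlib.compress(request_xml.encode("utf-8"), 9)
    return base64.urlsafe_b64encode(packed).decode("ascii")

def decode_request(token: str) -> str:
    """Server side: recover the original request object from the token."""
    return zlib.decompress(base64.urlsafe_b64decode(token)).decode("utf-8")

request_xml = (
    '<request-document><name>Jack Gardener</name>'
    '<filter key="item-name" op="regex">.*battery.*</filter>'
    '</request-document>'
)

# The POST handler would reply with a redirect to something like:
redirect_url = "http://shop.example.com/cart?req=" + encode_request(request_xml)

# Anyone holding the URL can reconstruct the request; no server state needed.
assert decode_request(redirect_url.split("req=")[1]) == request_xml
```

The round trip is lossless, so the server in case (2) stays completely stateless: the GET URL itself carries the whole request object.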

There are several layers of caching that need to be considered:

  1. browser-level caching of the final response document
  2. browser-level caching of the final redirect URL, keyed by the request object
  3. cloud-level caching of the document
  4. server-level caching of the final redirect URL, keyed by the request object
  5. server-level caching of the response document

Caching at levels 1, 3, and 5 comes free, because the browser/client is redirected to an HTTP GET request to fetch the document (see how). Level 4 is easily implemented by a servlet or module, or even by a third-party web service (kind of like tinyurl). Level 2, I believe, requires a new browser plugin or implementation. But at least it’s possible. Imagine if links were no longer long cryptic strings but structured XML documents with human-readable parameter values. There are also many benefits that come with structured requests and the ability to declare namespaces that I won’t write about right now, but I’m thinking along the lines of client/browser and user preferences.
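The tinyurl-style mapping for level 4 could be sketched as follows (a toy in-memory version; the hash choice and all names are mine):

```python
import hashlib

# Toy in-memory store; a real level-4 cache would persist this, and a
# third party could even run it as a shared service, like tinyurl.
_request_store = {}

def shorten(request_xml: str) -> str:
    """Map a request object to a short, stable token for the redirect URL."""
    token = hashlib.sha1(request_xml.encode("utf-8")).hexdigest()[:10]
    _request_store[token] = request_xml
    return token

def resolve(token: str) -> str:
    """Look the request object back up when the GET arrives."""
    return _request_store[token]

t = shorten("<request-document><skin>Milky White v1.0</skin></request-document>")
# The same request always yields the same token, so every cache layer
# between the client and the server can key on the resulting GET URL.
assert shorten(resolve(t)) == t
```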


An Analysis of Intelligence (old note from the thoughtbook)

October 13, 2006

Frequently I will have a moment of brilliant thoughts, and I have learned to write them down. I do not pretend to be correct in anything I claim, because I am not an expert in claiming. I am a thinker, and thinking is what I do in my free time; it brings me such joy when I find those moments of brilliance. Brilliance is nothing but a series of thoughts that appear to define something. If the series of thoughts is wrongly induced, then it loses its brilliance. If after deeper brainstorming the series is shown to define something, then that is what I call a realization.+

So, here is what I mean to post today: one of my recorded realizations! I will repeat it here as I wrote it in my black notebook during my last year of college (I still have those notebooks). This series has to do with Intelligence and Learning.

“I just came to the realization of what intelligence is! First, let me define some terms.”

“the brain is an environment in which Functions can coexist and are forced to reproduce. The Function that reproduces itself (or similarly classifiable versions of itself, as classified by an intelligent observer*) is what will survive and thus come to dominate the Capacity of the environment. Therefore, Intelligence can be decomposed into the following: (1) Functions, (2) a Capacity that permits coexistence of Functions, and (3) an evolutionary rule framework (ERF).”

“For example, sleeping is what enforces the ERF (3). That is, sleeping causes Functions to terminate, allowing for a clean canvas (clearing the Capacity) on which Functions can perform their function: to produce. ++”

“External effects such as hunger, libido, anger, and joy modify the environment’s properties (how Functions are governed and how they produce) in some deterministic way. Thus, in our limited-resource world, another necessary component of intelligence is: (4) a dynamic but deterministic property (let’s call this Funk).”

“Evolution theory says it is likely that the Function which reproduces toward its own likeness will become dominant in number in an environment as described above. In other words, Learning is an automatic and statistically necessary progression in such an environment.”

* recursive!

+ more thoughts here but truncated.

++ without sleep, even the best Intelligence wouldn’t be able to sustain itself, both because the necessary selective pressure is not present in normal circumstances, and because of the increasing Funk.

Well, there it is folks. I hope someone enjoyed this.


javabucket, the easy file storage solution

October 11, 2006

Have you ever wanted a quick and easy way to share files? What if you could easily upload and share photos and movies with your friends, or store your files for backup? Storage is never completely free; you can use a paid service [most likely a fixed charge per time period, which ends up being too expensive] or an ad-ridden service [which is not pleasant to use, and may turn into a paid service]. But now, you can use a pay-per-usage service that is as easy to use as it is cheap.

in short, i created a java applet that allows you to easily upload and store files (using the Amazon S3 service). the application was inspired by s3wiki; i hope to continue developing it while keeping the ‘client-side web application’ faith.


the applet is in the early beta phase, so i am ready to accept bug reports at jackgardener gmail.


since my last post i have discovered jets3t. since it appears that i have just been duplicating work already done, i will quit development on my applet and try to join the jets3t development team. i hope they let me; i don’t have much time to program anymore, even though i love it!


Magical Syndication

October 4, 2006

what if there were a better way to measure attention? browser plugins can be used to record usage paths, and maybe a rough idea of browse time per page. i wonder what could happen if we knew what part of a document each person actually read [and paid attention to]. i will list some things that could be deduced statistically. please feel free to contribute your thoughts as well [as comments on this post].

  • whether the document is interesting*; users are more likely to browse through a document if it is interesting.
  • which part of the document is most interesting; browse patterns within the document can give clues about the content of the document.
  • the density or difficulty of the document and parts of it; moments of pause to read and reread.

i wonder, furthermore, what could happen if we knew who was reading the document. now we have three sets of data – the set of documents, the set of users, and browse patterns.

  • the kinds of documents, or at least how related documents are to each other; users tend to be interested in a tiny subset of documents at a given moment. self organizing maps could be a possible implementation.
  • once we know how to describe documents we can also describe users; users have a set of interests. i don’t know what kind of algorithm is best suited for finding document clusters that represent a kind of interest [i will call these interest-clusters], but i’m sure there are known algorithms for this purpose.
  • the level of understanding that a user has of each interest-cluster; related to the third point in the first list above.
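to make interest-clusters a little more concrete, here is a minimal sketch (users, documents, and numbers all invented) of grouping users by attention overlap; a real system would feed data like this into self-organizing maps or a proper clustering algorithm:

```python
import math

# toy attention data: seconds each user spent on each document.
attention = {
    "alice": {"java-news": 120, "jvm-tuning": 90, "knitting": 2},
    "bob":   {"java-news": 100, "jvm-tuning": 80},
    "carol": {"knitting": 150, "sewing": 60},
}

def cosine(u: dict, v: dict) -> float:
    """cosine similarity between two sparse attention vectors."""
    dot = sum(u[d] * v.get(d, 0) for d in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# users with overlapping attention land in the same interest-cluster...
assert cosine(attention["alice"], attention["bob"]) > 0.9
# ...while users with disjoint attention do not.
assert cosine(attention["bob"], attention["carol"]) == 0.0
```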

ok, so hopefully i’ve established that this is an interesting idea. now here is one implementation that makes the above possible.

  1. decide on a document representation schema that is a hierarchical document format, such that the document can be split into parts that correspond to the content of the document.
  2. implement a display program that can run on any browser, that is easy to use, and, most importantly, that folds the document hierarchically such that in order to reach parts of the document, the user must act on the display program; think xml, hyperscope, gmail, etc.
  3. the program should be able to act on documents hosted and published anywhere. the best way i can think of is to publish the documents as xml over http and include javascript that can parse and display the document according to point 2.
  4. track user identity by cookie or by login.
  5. the display program sends browse data and user identity to a server where it is aggregated and analyzed.

[valence screenshot]

so where does the magic happen? based on all the data that is gathered, it is possible to come up with intelligent suggestions as to what to read next. a service could suggest further reading material depending on your level of understanding and topic of interest. if you like to read the latest development news about enterprise java frameworks, they come to you. if you would like to learn more about a topic in mathematics, perhaps even a trail of documents can be suggested to you. best of all, there is no need to process the contents of a document. all the data is agnostic of the contents of a document, and derived purely from attention data.

maybe it’s mostly all hogwash.

– Jay


big bang of ideas

October 4, 2006

i wasn’t going to create a blog because i didn’t want to get sucked into the blogosphere. however, during my flights across the states for consulting business i always find time to think of some scintillating ideas, and i am increasingly becoming aware of the need to share them. since i’m already swamped with dazzling work, i’d like to offer these half-baked ideas for everyone’s enjoyment. maybe they’ll start something.

by the way, please excuse my typing. the only remaining shift key on my laptop is broken, and i’m not fond of the capslock key.

my first entry will be called ”Magical Syndication”, to be posted tomorrow. i cannot guarantee originality [tell me, please, if there is relevant prior work], but i sincerely hope it feeds your mind. until then, good night.