{"id":382,"date":"2003-07-21T15:07:09","date_gmt":"2003-07-21T13:07:09","guid":{"rendered":"http:\/\/www.multiplicity.dk\/?p=382"},"modified":"2003-07-21T15:07:09","modified_gmt":"2003-07-21T13:07:09","slug":"search-engine-for-low-bandwidth-connections","status":"publish","type":"post","link":"https:\/\/krag.be\/index.php\/2003\/07\/21\/search-engine-for-low-bandwidth-connections\/","title":{"rendered":"Search Engine for low-bandwidth connections"},"content":{"rendered":"
Someone forwarded me this article from BBC News Online: BBC NEWS | Technology | World's poor to get own search engine

"Researchers at the Massachusetts Institute of Technology (MIT) are developing a search engine designed for people with a slow net connection. Someone using the software would e-mail a query to a central server in Boston. The program would search the net, choose the most suitable webpages, compress them and e-mail the results a day later."

The project website is here:
TEK Homepage
My question is: Why would anyone want that?
It's a 1.3Mb download if you already have Sun's Java installed; if not, you'll have to add at least 8Mb to that. For people with unstable, low-bandwidth connections that sounds like a definition of the word impossible. And their suggestion of distributing this on CD via local libraries sounds a little utopian as well.

So let's look at what this actually does. The way I see it, there are two things this solution delivers:

Asynchronous, off-line search results. Send in your query and the results will be delivered to you by e-mail.

But that exists already, in many different guises, ranging from CapeClear's Google by eMail, based on the Google APIs, to www4mail, an open-source mail gateway for web access.

Neither of these requires a download, and both deliver off-line asynchronous access to web content, and there are many more of these types of solutions out there.
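To make it concrete how little is actually needed here, this is roughly all a mail gateway boils down to. It's a minimal Python sketch, not CapeClear's or www4mail's actual code: the host names and credentials are placeholders and the search itself is stubbed out.

```python
# Minimal sketch of an e-mail search gateway in the www4mail /
# Google-by-eMail style: poll a mailbox, treat the subject line as
# the query, mail back plain-text results. Hosts and credentials are
# placeholders; run_search() is a stub standing in for a real backend.
import imaplib
import smtplib
from email.message import EmailMessage
from email.parser import BytesParser
from email.policy import default

IMAP_HOST = "imap.example.org"   # placeholder
SMTP_HOST = "smtp.example.org"   # placeholder
USER, PASSWORD = "gateway", "secret"

def run_search(query: str) -> str:
    """Stub: a real gateway would query a search backend here."""
    return f"(results for {query!r} would go here)"

def poll_once() -> None:
    imap = imaplib.IMAP4_SSL(IMAP_HOST)
    imap.login(USER, PASSWORD)
    imap.select("INBOX")
    _, data = imap.search(None, "UNSEEN")
    for num in data[0].split():
        _, msg_data = imap.fetch(num, "(RFC822)")
        msg = BytesParser(policy=default).parsebytes(msg_data[0][1])
        query = str(msg["Subject"] or "")
        reply = EmailMessage()
        reply["From"] = f"{USER}@example.org"  # placeholder address
        reply["To"] = str(msg["From"])
        reply["Subject"] = f"Results: {query}"
        reply.set_content(run_search(query))
        with smtplib.SMTP(SMTP_HOST) as smtp:
            smtp.send_message(reply)
    imap.logout()
```

No client download, no local proxy: anything that can send and receive mail can use it.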
That leaves us with the second part of what TEK delivers:

Delivering relevant search results.

According to the TEK website: "All pages downloaded to the client computer are stored in the local cache, and are there for other users to search at a later date."
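As an illustration of what that shared cache amounts to (my sketch, not TEK's actual cache format, which the site doesn't document), it can be as simple as a local store that is merged on every delivery and searched before a new query ever leaves the machine:

```python
# Sketch of the shared-cache idea: pages fetched for one user are
# kept locally so later users can search them without another trip
# over the slow link. Purely illustrative; the file name and layout
# are assumptions, not TEK's real format.
import json
from pathlib import Path

CACHE = Path("tek_cache.json")  # hypothetical on-disk cache

def load_cache() -> dict:
    return json.loads(CACHE.read_text()) if CACHE.exists() else {}

def store_pages(pages: dict[str, str]) -> None:
    """Merge newly delivered pages (url -> text) into the shared cache."""
    cache = load_cache()
    cache.update(pages)
    CACHE.write_text(json.dumps(cache))

def search_cache(query: str) -> list[str]:
    """Return URLs of cached pages containing every query word."""
    words = query.lower().split()
    return [url for url, text in load_cache().items()
            if all(w in text.lower() for w in words)]
```

A second user asking a similar question gets an instant answer from the cache; only misses have to cross the slow connection at all.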
So, they can use the fact that we are dealing with off-line asynchronous communication to deliver better and more relevant search results. If they succeed there, yeah, then we might have a search tool that will grow from low-bandwidth users to the rest of us, because everyone is always looking for fewer, more relevant search results. Unfortunately this part of the project is still a work in progress, although there are some indications on the site that they are indeed cooking up some interesting tech based on a way of clustering similar pages into groups, and delivering only "best of breed" results (I've sketched one way that might work at the end of this post).

That of course opens them up to criticism in terms of limiting access to technology, based on rules defined at a single centralized location: a subtle form of censorship.

In any case, even if they deliver on the promise of a better search engine, why would I want a 10Mb download when I already have a mail client? If you're going to deliver search results via mail, why not let people actually enter the search queries that way too? The complexity of a local proxy seems wasteful in this situation.
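For the curious, here is the clustering sketch I promised above. To be clear, TEK doesn't document its algorithm, so this is just a plausible stand-in: greedy grouping by word-set similarity, keeping the longest page in each group as the "best of breed".

```python
# One plausible way to cluster near-duplicate pages and deliver a
# single representative per group. This is NOT TEK's algorithm, just
# a common approach: greedy clustering by Jaccard similarity over
# word sets, keeping the longest page in each cluster.

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def best_of_breed(pages: dict[str, str], threshold: float = 0.5) -> list[str]:
    """Group similar pages (url -> text) and return one URL per group."""
    fingerprints = {url: set(text.lower().split())
                    for url, text in pages.items()}
    clusters: list[list[str]] = []
    for url in pages:
        for cluster in clusters:
            # Join the first cluster whose representative is similar enough.
            if jaccard(fingerprints[url], fingerprints[cluster[0]]) >= threshold:
                cluster.append(url)
                break
        else:
            clusters.append([url])
    # The longest page stands in for its whole cluster.
    return [max(cluster, key=lambda u: len(pages[u])) for cluster in clusters]
```

Fewer, less redundant results is exactly the trade that makes sense when every delivered page costs a day of latency.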