Yahoo! links to research resources
Electronic research materials such as technical reports and preprints are
now available through Yahoo! Search. This follows a deal between the
OAIster Project, which was set up by the University of Michigan, US, and
Yahoo!'s Content Acquisition Program (CAP).
OAIster offers information that links to hidden digital resources, such
as the complete contents of books and articles, technical reports,
preprints, white papers, images of paintings, movies, and audio files of
speeches.
OAIster retrieves these by tapping directly into the collections of a
variety of institutions using harvesting technology based on the Open
Archives Initiative (OAI) Protocol for Metadata Harvesting.
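The OAI-PMH harvesting mentioned above works by fetching XML record lists from a repository endpoint (e.g. with the protocol's ListRecords verb and the oai_dc metadata format) and extracting the metadata that points at each resource. As a rough illustration, here is a minimal sketch in Python that parses a trimmed, made-up ListRecords response; the example.org identifiers and record contents are hypothetical, though the OAI-PMH element names and namespaces are the real ones from the protocol.

```python
import xml.etree.ElementTree as ET

# A trimmed, hypothetical sample of an OAI-PMH ListRecords response,
# of the kind a harvester like OAIster would fetch from a repository.
SAMPLE_RESPONSE = """<?xml version="1.0" encoding="UTF-8"?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record>
      <header>
        <identifier>oai:example.org:report-0001</identifier>
        <datestamp>2004-06-01</datestamp>
      </header>
      <metadata>
        <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                   xmlns:dc="http://purl.org/dc/elements/1.1/">
          <dc:title>A Sample Technical Report</dc:title>
          <dc:identifier>http://example.org/reports/0001.pdf</dc:identifier>
        </oai_dc:dc>
      </metadata>
    </record>
  </ListRecords>
</OAI-PMH>"""

# Namespaces defined by OAI-PMH and Dublin Core.
NS = {
    "oai": "http://www.openarchives.org/OAI/2.0/",
    "dc": "http://purl.org/dc/elements/1.1/",
}

def harvest_records(xml_text):
    """Extract (identifier, title) pairs from a ListRecords response."""
    root = ET.fromstring(xml_text)
    records = []
    for rec in root.findall(".//oai:record", NS):
        ident = rec.find("oai:header/oai:identifier", NS).text
        title = rec.find(".//dc:title", NS).text
        records.append((ident, title))
    return records

print(harvest_records(SAMPLE_RESPONSE))
```

A real harvester would repeat the request with the protocol's resumptionToken until the repository's record list is exhausted, then store the harvested metadata for searching.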
The OAIster service currently provides access to three million harvested
records describing and pointing to these resources, which are created and
hosted by 267 research institutions around the world.
The deal with Yahoo! opens up these resources to a wider audience,
because many of the scholarly collections included in OAIster were not
previously indexed in popular Web search services.
Collections available through OAIster include: the arXiv.org Eprint
Archive (an archive of physics research); Carnegie Mellon University
Informedia Public Domain Video Archive; Ethnologue: Languages of the
World; Library of Congress American Memory Project; and Caltech
Earthquake Engineering Research Laboratory Technical Reports.
http://www.researchinformation.info/news.html#jun10
Thanks
S.Gunasekaran
Date: Wed, 30 Jun 2004 22:41:19 +0530
From: Satish Hulyalkar <satish(a)vsnl.com>
Dear Friends
I am forwarding a link where you can find out "How Much Information" there
is in the world. Have a look at
http://www.sims.berkeley.edu/research/projects/how-much-info-2003/index.htm
It might be interesting to know how much information we are handling, and
how much more still exists in paper form.
Satish Hulyalkar
Pune/India
mailto:satish@satishhulyalkar.com
http://www.SatishHulyalkar.com
---------- Forwarded message ----------
Date: 30 Jun 2004 09:03:39 -0000
From: farooque <mfarooque(a)rediffmail.com>
Students and developers at Google Inc. have jointly created an open-source
tool designed to predict how changes to things like network infrastructure
will affect real-world Web site performance.
Called Monkey, the tool first captures data from actual client sessions,
inferring various network and client conditions -- what its creators call
the "monkey see" portion of its work. It then attempts to emulate those
conditions for server tests -- a process called "monkey do."
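The two-phase idea can be sketched loosely in Python. This is a hypothetical toy, not Monkey's actual code: the real tool works at the TCP packet level and emulates delay, bandwidth and loss, whereas here the trace format, function names, and the assumed 56 kbit/s link are all illustrative.

```python
import statistics

# "Monkey see": infer network conditions from a captured client session,
# represented here as (send_time, ack_time) pairs for each request packet.
def infer_conditions(trace):
    rtts = [ack - send for send, ack in trace]
    return {"rtt": statistics.median(rtts)}

# "Monkey do": replay server responses over an emulated link that imposes
# the inferred delay, and report the total emulated session time.
def replay(response_sizes, conditions):
    elapsed = 0.0
    for size_bytes in response_sizes:
        # one round trip per response, plus serialization delay on an
        # assumed 56 kbit/s client link (an illustrative figure)
        elapsed += conditions["rtt"] + size_bytes * 8 / 56_000
    return elapsed

# Example: a short captured session, then a replay of two 1400-byte responses.
trace = [(0.00, 0.12), (0.50, 0.61), (1.00, 1.13)]
conditions = infer_conditions(trace)
total = replay([1400, 1400], conditions)
print(conditions, total)
```

The point of the split is that the inferred conditions come from real user traffic, so the replay reflects conditions actual clients experienced rather than a synthetic workload.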
Monkey is aimed at helping solve a dilemma of Web testing: Trying out
network or server changes on even a small portion of actual user traffic
is risky, but simulations are often unrealistic because they don't
accurately reflect users' network conditions, said Yu-Chung Cheng, a
graduate student at the University of California, San Diego, who worked on
the project during an internship at Google. Cheng presented a paper about
Monkey yesterday at the Usenix Annual Technical Conference. Cheng admitted
that the tool, which is optimized for Google's specific search
application, might not be as accurate at predicting server response for
other types of applications. In response to audience questions, he said
Monkey also doesn't attempt to model how user behavior might change as
server response speeds or slows (for example, more search requests might
come in if server response improves).
"In the end, we believe it is unrealistic to build a generic one-for-all
TCP replay tool," the paper, "Monkey See, Monkey Do: A Tool for TCP
Tracing and Replaying," concludes. "But it is possible to build replay
tool[s] for specific applications."
Source code for the tool is available at
http://ramp.ucsd.edu/projects/monkey/.
---------------------------------
Farooque,
IBM Global Services (India) Pvt Ltd.
First floor, GT Annexe,
Airport Road, Bangalore-17
Tel: 25094107, 25094119
email... farooque(a)in.ibm.com
----------------------------------