
ACM TechNews Interesting Articles

Luke Breuer
2009-08-22 18:59 UTC

Group Seeks to Make Computer Science More Attractive
Chronicle of Higher Education (06/09/06) Carnevale, Dan
A new coalition of 10 institutions is attempting to revamp the image of computer science in an effort to reach out to women and underrepresented minorities. The Stars Alliance recently won a three-year, $3 million grant from the NSF. Many schools intentionally make introductory computing classes so difficult that only the most serious students pass, which cuts out intelligent students who could go on to become skilled computer scientists, according to Larry Dennis, dean of the College of Information at Florida State University. "We're looking at curricular and infrastructure changes to make these courses more attractive to everybody," he said. "Not just women and minorities, but everybody." One approach offers courses on more application-oriented skills, such as multimedia and Web-site development, rather than intensive concentration on mathematics and programming. The 10 participating institutions will try to market their program to other schools to broaden the appeal of computer science among women, who Dennis notes are typically more interested in the social implications of computing. The consortium is also developing a program for computer-science majors to mentor students in middle school and high school. In addition to kindling interest in computers among younger students, the mentoring program will teach undergraduates to discuss and teach computing using everyday language.

Twelve Research Grants Awarded to Help Fund Innovation in Search Technology
PRNewswire (06/01/06)
Microsoft Live Labs has named the winners of $500,000 in grant money for its Accelerating Search in Academic Research request for proposal (RFP), which will enable the recipients to continue their study of Internet search technologies and of data mining, discovery, and analysis. "Through this RFP process, we have found a wealth of academic talent and ideas for search and algorithm development that we think will transform our ability to harness the power of the Web in the years to come, allowing users to focus less on the work of searching and instead reap the rewards of discovery," says Gary William Flake, director of Live Labs. The 12 RFP winners will each receive between $25,000 and $50,000, and will have access to extensive data logs from MSN and an increased quota of queries to the MSN Search software development kit. Winners include "The Truth Is Out There: Aggregating Answers From Multiple Web Sources," which involves information retrieval research from Amelie Marian of Rutgers University; and "Vinegar: Leading Indicators in Query Logs," which covers machine learning, human-computer interaction, and data mining research from Eytan Adar, Brian Bershad, Steven Gribble, and Daniel Weld of the University of Washington. "VISP: Visualizing Information Search Processes" is a proposal by Lada Adamic and Suresh Bhavnani of the University of Michigan focusing on natural language processing and human-computer interaction research. "Entity and Relation Types in Web Search: Annotation Indexing and Scoring Techniques," focusing on machine learning, information retrieval and natural language processing research, was proposed by Soumen Chakrabarti of the Indian Institute of Technology, while University of Illinois at Urbana-Champaign researcher Kevin Chang's "Deepening Search: From the Surface to the Deep Web" proposal for information retrieval and information integration research was also a winning RFP.

*Escape the Software Development Paradigm Trap
Dr. Dobb's Journal (05/29/06) Bereit, Mark*
IRIS Technologies product development director Mark Bereit disputes the assumption that software development will always be difficult and bug-ridden, noting his suspicion "that these limitations apply, not to all possible software development, but solely to the software development paradigm that we've followed, unchallenged, for decades." He proposes reworking the software development model and studying other engineering disciplines for inspiration, eschewing habits that impose the limitations. Bereit cites the commonly accepted view that software systems can fall apart from a single point of failure, which resides in each line of code. He does not point to a shortage or surplus of code reuse, but rather its employment to do something entirely different from what developers think they are doing, namely the construction of massive algorithms instead of components. According to Bereit, what is needed is a way to divide software development into more workable segments so that the CPU is not overtaxed, though he is not proposing multithreading. The author uses the basic principles of mechanical engineering as a jumping-off point in his suggestion that software development should incorporate the involvement of "trustworthy components, specifications, and margins; that it should allow assemblies of increasing complexity to be built from trustworthy lesser components; it should involve a team approach to performing complex tasks; and it should be something that can be generally dependable and trustworthy." Splitting up a task among multiple processors is the optimal teamwork strategy Bereit recommends. The new software development model must include a new framework for communications and management of common resources, which should point to a way to enable the same processor to execute different tasks at different times.

*The User's View: Customer-Centric Innovation
Computerworld (05/29/06) Pratt, Mary K.*
In an effort to bring a fresh perspective to the design of technology solutions, some companies have begun hiring anthropologists to work with or even lead their development teams. Companies value anthropologists because they can look at technology from the user's perspective by asking questions about how people work and the types of tools that they do and do not use. While technologists can get wrapped up in adding more tools and automation to an application, anthropologists can give them guidance on whether the tools will actually be used or if they will just be an annoyance. While observing systems administrators at IBM, anthropologist Jeanette Blomberg found that they typically create their own local tools to help in the management of their systems. IBM's Eser Kandogan then built a program to support the systems administrators' tools based on Blomberg's observations. "Technologists tend to look at the user and the user's relationships to the technology. It tends to be very task-focused," says consultant Patricia Sachs. "Anthropologists look at the missing layer." Research about the way people interact and communicate with each other at Intel led to a program for virtual collaboration that facilitates multiple methods of communication, such as instant messaging and a shared white board. IT anthropologists are still a rarity, though companies are increasingly realizing the value of multiple perspectives when developing new technologies. Adding an anthropologist to an IT department can also create a cultural clash, as many IT workers might have difficulty accepting the validity of an anthropologist's methods.

*Software Could Add Meaning to 'Wiki' Links
New Scientist (06/07/06) Sparkes, Matthew*
Researchers at the University of Karlsruhe in Germany have made alterations to MediaWiki, the software that powers Wikipedia, that would enable editors to enhance the meaning of the links between pages. With the team's extended system, authors could add meaningful tags, or annotations, to articles and the hypertext links that connect them. Relevant pages would display the annotations buried in the tags, explaining the relationship between two topics. Annotations could facilitate more intelligent searches of wiki sites, the researchers claim, and they believe that specialized communities that maintain their own wikis will likely be the first adopters. "I think early adoption will be led by communities interested in data such as animal species information," said the project's Markus Krotzsch. "Semantic information is most useful in situations where data can be clearly defined." Adding meaning to online content is the essence of the vision for the Semantic Web promoted by Web architect Tim Berners-Lee and others. The researchers are hopeful that Wikipedia will incorporate their software, though they admit that it might have a hard time supporting such a popular site--Wikipedia receives around 4,000 page requests per hour.
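A typed wiki link of the kind described can be illustrated with a short sketch. The `[[relation::Target]]` annotation syntax below is the style later popularized by Semantic MediaWiki; the page text and relation names are invented for illustration, not taken from the article:

```python
import re

# Hypothetical wiki source with typed links in the "[[relation::Target]]"
# style; plain links like [[Big Ben]] carry no relation.
page = (
    "London is the capital of [[capital of::England]] and lies on "
    "the [[located on::River Thames]]. See also [[Big Ben]]."
)

# Capture "relation::Target" pairs; untyped links do not match.
typed_link = re.compile(r"\[\[([^\]:|]+)::([^\]|]+)\]\]")

annotations = [(rel.strip(), target.strip())
               for rel, target in typed_link.findall(page)]
print(annotations)
# [('capital of', 'England'), ('located on', 'River Thames')]
```

A wiki engine that extracts pairs like these can answer queries such as "everything located on the River Thames," which plain hyperlinks cannot support.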

*The Case for the Two Semantic Webs
KMWorld (06/06) Vol. 15, No. 6, P. 18; Weinberger, David*
Though the Web is full of meaningful data and contextualized links that often describe the contents of the destination page, the calls for the Semantic Web stem from the frustration at the inability of the syntax of the Web (HTML) to capture that meaning, writes David Weinberger. The new syntax, Resource Description Framework (RDF), describes relationships between two terms, and collections of such descriptions form an ontology. The Semantic Web standard OWL is used to express ontologies. Beyond RDF, Semantic Web proponents agree on very little, however. There are multiple ontologies for law terms that compete with each other, and each suffers from trying to create comprehensive, objective descriptions for an overwhelmingly large body of inherently subjective material. An alternative to this top-down approach calls for creating as few new ontologies as possible, relying instead on existing ontologies that could come from other domains. Rather than creating a new definition for a relationship, users should point, via a URI, to an existing ontology that already defines it, such as the Dublin Core's definition of a document's creator. That way, applications will see that the relationship has a common definition on all sites that support the Dublin Core. This approach calls for building the Semantic Web incrementally, and while it lacks an overarching development plan, it is more agile than the top-down plans and thus more likely to succeed. Opinions vary widely on the transformative potential of the Semantic Web, while Weinberger argues that most of the ways that users currently add meaning to the Web, such as reputation systems, XML playlists, and buddy lists, will continue very much as they are today, and that "the Semantic Web will help where it helps."
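The reuse idea can be made concrete with a minimal sketch. Triples are shown as plain Python tuples rather than via an RDF library; the two site URLs and author names are invented, but the Dublin Core `creator` property URI is the real one:

```python
# Two otherwise unrelated sites both point at the same Dublin Core URI
# instead of minting their own "author" property, so a consuming
# application can recognize the shared definition.
DC_CREATOR = "http://purl.org/dc/elements/1.1/creator"

site_a = [("http://site-a.example/report", DC_CREATOR, "Alice")]
site_b = [("http://site-b.example/essay", DC_CREATOR, "Bob")]

# An application merging both graphs groups statements by predicate URI.
by_predicate = {}
for subj, pred, obj in site_a + site_b:
    by_predicate.setdefault(pred, []).append((subj, obj))

print(by_predicate[DC_CREATOR])
# [('http://site-a.example/report', 'Alice'), ('http://site-b.example/essay', 'Bob')]
```

Because both sites reference the same URI, no coordination between them was needed; this is the incremental, bottom-up path Weinberger describes.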

*Putting Services at the Heart of Tomorrow's Software
IST Results (06/26/06)*
Microsoft and Computer Associates are involved in a four-year project with other corporate, academic, and research partners in Europe to develop methods, tools, and techniques for system integrators and service providers to accommodate the linking together of small, functional services that perform a larger task. Participants in the approximately 15.2 million-euro SeSCE project say it is the last key step in the move toward Service Oriented Architecture (SOA), and could allow computing to fulfill the promise of offering improvements in productivity and functional flexibility. The group addressed the issue of standards, but the focus of its work is on service engineering, service discovery, service-centric system engineering, and service delivery. The tools, protocols, and methods embraced by SeSCE to develop a service development platform include a search engine and semantics for service description and testing. Halfway through the project, SeSCE has a demonstrator of its service composition platform, and partner Telecom Italia plans to show how SMS and GPS services can be used to update a commuter's schedule to take account of a traffic jam. "If the driver is going to be late for a meeting because of traffic, for example, the service can alert his or her assistant who changes the schedule and rearranges any meetings," says Matteo Melideo, coordinator of the SeSCE project. "Then an SMS message is sent to the driver's mobile phone providing a confirmation of the new schedule." Another IST project, the Adaptive Services Grid, applies business workflow and service composition techniques in line with the SOA model, but it requires more advanced semantic tools.

*Semantics Poses Challenge for Web Services
IST Results (06/21/06)*
Though service oriented architecture holds the vast potential of creating diversified, agile programs that can synergistically combine to solve complex problems, getting those programs to communicate with each other is a major challenge. To address that problem, the IST-funded Adaptive Services Grid (ASG) program has developed a semantic-service reference platform to demonstrate the ability of the software applications to autonomously combine to solve a larger problem, requiring each program to both announce its own function and recognize the function of others. In the tests, the software executed and combined without human manipulation. Machine-readable semantic descriptions are key to locating and retrieving software and objects on the Web. When developing ontologies, the greatest challenge is to determine the appropriate level of granularity, as coarse-grained ontologies can be created easily, but typically have vague descriptions that make them difficult to locate by search. Fine-grained ontologies are labor-intensive to create, but are easier to discover because of their accurate descriptions. Detailed ontologies and semantics can get extremely complex, as the number of terms required to describe specific functions escalates rapidly, and adapting them to other developments becomes expensive. "This is an area that needs more research," said Dominick Kuropka, scientific coordinator of the ASG project. "What is the proper level of expressiveness for modeling of semantic services, which provides a good balance between the investment in ontology and service modeling and obtainable level of utility and automation?" In his search for the balance between cost and detail, Kuropka found, unsurprisingly, that greater detail delivers greater performance, though the greatest performance improvements come in the middle range of detail.
The INFRAWEBS project took an alternative approach to service applications, integrating similarity- and logic-based reasoning to retrieve, then clarify, service ontologies.

*Research: Spatial Abilities Key to Engineering
EE Times (06/19/06) No. 1428, P. 12; Schiff, Debra*
University of Minnesota postdoctoral research fellow Wendy Johnson and Minnesota Center for Twin and Adoption Research director Thomas Bouchard have conducted a new study that supports the theory that spatial abilities are an important factor for success in the field of engineering. Men have an overwhelming presence in engineering positions, and research from Johnson and Bouchard shows that men are likely to have a higher degree of intelligence in the rotation and focus dimensions. The rotation dimension represents the higher spatial abilities, and the focus dimension signifies the ability to solve problems by focusing on details in a linear fashion. They have found that women tend to have better verbal, memory, and diffusion intelligence, or the ability to solve problems from a number of perspectives at once and synergistically. Georgia Institute of Technology professor of psychology Philip Ackerman says general tests such as the SAT will not show this difference between males and females, but AP tests may reveal the impact of spatial abilities in terms of the major students ultimately choose to pursue. The foreign language exam is the only AP exam on which girls perform considerably better than boys. Their research can be found in the journal Intelligence.

*Reaching Agreement Over Ontology Alignments
University of Southampton (ECS) (08/24/06) Laera, Loredana; Tamma, Valentina; Euzenat, Jerome*
Ontologies are critical for inter-agent communication, and interoperability resides in the ability to reconcile disparate existing ontologies whose format may be variegated and whose domains may overlap; this reconciliation typically depends on the presence of correspondences or mappings between agent ontologies. The authors offer a framework enabling agents to agree on the terminology they use for communication by permitting them to express their preferred choices over candidate correspondences. A value-based argumentation framework is employed for the computation of each agent's preferred ontology alignments. The basis of argumentation is an exchange of arguments, for or against a correspondence, that interact with each other through an "attack" relation. An argumentation schema is instantiated by each argument, which employs domain knowledge taken from extensional and intensional ontology definitions. With the generation of a full set of arguments and counter-arguments, the agents consider which of them should be accepted. The authors define two different types of alignment, an agreed and agreeable alignment; the agreed alignment is the series of mappings based on those arguments contained in every preferred extension of every agent, while the agreeable alignment is the extension of the agreed alignment with those mappings supported by arguments which are in some preferred extension of every agent. "The dialogue between the agents can...consist simply of the exchange of individual argumentation frameworks, from which they can individually compute acceptable mappings," write the authors.
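The "attack" relation underlying this framework can be sketched with a toy computation. The sketch below computes which arguments survive under grounded (skeptically acceptable) semantics; it ignores the paper's value-based preferences, and the argument names and attack structure are invented for illustration:

```python
def grounded_extension(arguments, attacks):
    """Accept every argument with no surviving attacker, discard
    everything an accepted argument attacks, and repeat to a fixpoint.
    `attacks` is a set of (attacker, attacked) pairs."""
    accepted, defeated = set(), set()
    changed = True
    while changed:
        changed = False
        for a in arguments:
            if a in accepted or a in defeated:
                continue
            attackers = {x for (x, y) in attacks if y == a and x not in defeated}
            if not attackers:
                accepted.add(a)
                defeated |= {y for (x, y) in attacks if x == a}
                changed = True
    return accepted

# Toy alignment dispute: correspondence c1 attacks c2 (they conflict),
# c3 attacks c1, and nothing attacks c3.
args = {"c1", "c2", "c3"}
print(sorted(grounded_extension(args, {("c1", "c2"), ("c3", "c1")})))
# ['c2', 'c3']
```

Since c3 is unattacked it is accepted, defeating c1; with c1 defeated, c2's only attacker is gone, so c2 is reinstated, which mirrors how a candidate mapping can survive once the argument against it is itself defeated.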

*New Search Engine Can Be Used for Creative Discovery
Newswise (09/18/06)*
Virginia Tech's System X supercomputer is being used by researchers to test a new search program called "Storytelling" that can find connections between seemingly dissimilar information, unearthing a sequence of relationships or events to build a chain of concepts between specific start and end points. "The stories are pieced together by analyzing large volumes of text or other data," explains Virginia Tech computer science professor Naren Ramakrishnan. "Every day, there are new research results reported in the [scientific] literature and there are discoveries waiting to be made by exploring connections." Large scale search engines such as Google serve as the template for the storytelling algorithm. Each supercomputer "node" is tasked with indexing a piece of the biological literature, and the nodes share data to help concretize links and establish connections. "In future work, we aim to investigate other ways to construct stories that mimic or complement how biologists make connections between concepts," reports Ramakrishnan. "Our eventual goal is a product that is an important tool for reasoning with data and domain theories." Virginia Tech biochemistry professors Richard Helm and Malcolm Potts used Storytelling to explore connections between research papers on yeast and its ability to enter into and exit from a state of reduced metabolic activity. The researchers had Storytelling compare two PubMed articles against the abstracts of 140,000 publications about yeast. Their work led to the article, "Algorithms for Storytelling," by graduate student Deept Kumar, Ramakrishnan, Helm, and Potts, that was published in the Proceedings of the Twelfth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD'2006) in August 2006.
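The chain-building idea can be illustrated with a minimal sketch: treat each document as a bag of terms, link documents that share enough terms, and find the shortest chain of links between a start and end document with breadth-first search. The corpus, document names, and overlap threshold below are invented for illustration; the real system works over hundreds of thousands of abstracts:

```python
from collections import deque

# Toy corpus: each "document" is a set of terms. Two documents are
# linked when they share at least `overlap` terms.
docs = {
    "yeast-dormancy": {"yeast", "dormancy", "metabolism"},
    "metabolic-rate": {"metabolism", "stress", "atp"},
    "stress-response": {"stress", "heat", "signaling"},
    "unrelated": {"galaxy", "telescope"},
}

def story(start, end, overlap=1):
    """Breadth-first search for a chain of linked documents."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == end:
            return path
        for name, terms in docs.items():
            if name not in seen and len(docs[path[-1]] & terms) >= overlap:
                seen.add(name)
                queue.append(path + [name])
    return None  # no chain of shared-term links exists

print(story("yeast-dormancy", "stress-response"))
# ['yeast-dormancy', 'metabolic-rate', 'stress-response']
```

The start and end documents share no terms directly, yet a "story" connects them through an intermediate document, which is the kind of indirect connection the researchers describe mining from the literature.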

*Learning Through Technology-Enhanced Collaboration
IST Results (09/19/09)*
IST is funding two European research projects that are designed to improve the knowledge-sharing capabilities of new technology. The COOPER initiative is focused on developing an Internet-based platform that will enable a group of users to work together on a project, using tools such as chat, Internet telephony, and a document repository, explains Xuan Zhou, who manages the initiative at the L3S Research Center in Hannover, Germany. The two-year project is scheduled to launch the online network at two universities and an industrial partner next year. "What we are aiming to achieve is the creation of a collaborative learning environment that lets people communicate, work together, and share knowledge whenever they want no matter where they are," says Zhou. The TENCompetence initiative also got underway in December 2005, but the focus is more on making e-learning networks more interactive so that users are actively engaged. Researchers involved in this four-year project plan to develop an advanced, open-source and standards-based technical and organizational infrastructure. Collaborative e-learning networks will be able to take advantage of the models, methods, and technologies for creating, storing, and exchanging knowledge resources; tools for developing new content and learning activities; and methods for testing users on how they are picking up the new competencies. TENCompetence trials will begin in Europe next year.

*Open Source Search Technology Goes Beyond Keywords
NewsForge (09/25/06) Stutz, Michael*
The semantic search engine that academic researchers have quietly been developing for years has now been licensed under the GNU General Public License, and a version for the desktop is forthcoming later this month, according to Middlebury College's Aaron Coburn, lead developer of the initiative. The Semantic Indexing Project promises to be able to recognize synonyms or near matches of words, instead of simply retrieving results that contain the literal search terms. The entirety of the source code is available for download, which includes the central technology of the Semantic Engine, distributed in C++, Perl bindings, and all of the requisite code for creating the graphical user interface. One of the more impressive demonstrations of the project has been the ability to graphically visualize novels, an application whose origins began when the researchers partnered with a Spanish professor interested in developing a searchable e-book reader for Don Quixote. Coburn integrated a stable of Project Gutenberg texts with software to visualize semantic data in the database, which led to the ability to visualize plots, mapping the interactions of characters throughout the course of a novel. "And the algorithms seemed to do a really good job of detecting how the characters interacted," Coburn said. The origins of the Semantic Engine date to a National Institute for Technology and Liberal Education (NITLE) conference in 2001. Following the conference, where experts spoke on hot topics such as XML and Latent Semantic Analysis, NITLE conducted a study that had a college instructor create a syllabus and apply to it the appropriate Learning Object Metadata. Finding that it took the professor more than four times as long to apply the metadata than to create the syllabus, Coburn says that it then became evident that a tool to automatically produce metadata or retrieve information from collections with no metadata would prove extremely useful. 
NITLE researchers set to work building a semantic search engine around latent semantic indexing technology, working variously with Perl and C++.
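The vector-space step that latent semantic indexing starts from can be sketched briefly: represent documents as term-count vectors and compare them by cosine similarity. Real LSI then factors the term-document matrix with a singular value decomposition so that near-synonyms land close together; that step is omitted here, and the three-line corpus is invented for illustration:

```python
import math
from collections import Counter

# Tiny corpus: documents become term-count vectors.
corpus = {
    "a": "the knight rides a horse",
    "b": "a horse and a knight",
    "c": "compilers parse source code",
}
vectors = {name: Counter(text.split()) for name, text in corpus.items()}

def cosine(u, v):
    """Cosine similarity of two sparse count vectors."""
    dot = sum(u[t] * v[t] for t in u)
    norm = lambda w: math.sqrt(sum(c * c for c in w.values()))
    return dot / (norm(u) * norm(v))

# Documents about knights are more alike than a knight text and a
# compiler text, which share no terms at all.
print(cosine(vectors["a"], vectors["b"]) > cosine(vectors["a"], vectors["c"]))
# True
```

Note the limitation that motivates the SVD step: in this raw representation, two documents with zero shared terms always score zero, even if they discuss the same topic in different words.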

*To Inspire Tech Kids, Inspire Tech Teachers
Investor's Business Daily (10/26/06) P. A4; Vallone, Julie*
To combat the shrinking interest in computer science and engineering among college students, many organizations are targeting school teachers. Programs where teachers get hands-on exposure to technology, in order to help students understand how such disciplines apply to the real world, are increasingly sprouting up. Bing Chen, head of the Computer and Electronics Engineering department at the University of Nebraska, and his colleagues created the Silicon Prairie Initiative on Robotics in IT (Spirit) program for both teachers and students. This summer, 32 teachers from local schools were invited to participate in a two-week hands-on engineering workshop. "Frankly, our math and science teachers are not given many opportunities to explore engineering," says Chen. "Our workshop was designed to give them exposure and build skill sets in this area." San Jacinto College in Pasadena, Texas, administers a program called the National Middle School Aerospace Scholars (Namas) that invites 150 teachers from eight states to year-round workshops where they learn about the aerospace industry. The program takes advantage of the nearby NASA Johnson Space Center. These programs all focus on making sure teachers are able to incorporate what they learn into their curriculum in order to cultivate interest in science and engineering. Simona Bartl, who coordinates a program in Marine Biotechnology and Bioinformatics run by California State University, explains that "only a few [teachers] had experienced doing actual research and being in the lab with scientists. Most had gone through science (teacher) education programs, where they were just not exposed to these aspects of the field." By giving teachers a better understanding of what science is on a day-to-day basis, students will be given real-world experience and relevant advice that can open them up to the opportunities presented by the all-too-neglected fields of science and engineering.

*Categorizing Web Search Results Into Meaningful and Stable Categories Using Fast-Feature Techniques
ResourceShelf (11/21/06) Kules, Bill; Kustanowitz, Jack; Shneiderman, Ben*
Bill Kules, Jack Kustanowitz, and Ben Shneiderman of the University of Maryland's Human-Computer Interaction Lab and Department of Computer Science propose a number of "fast-feature" methods to categorize Web search results into stable and meaningful categories. These techniques were developed to address the metadata challenge of increasing numbers of unstructured and semi-structured digital documents, and the advantages such techniques yield include the provision of overviews, navigation within search results, and negative results. These methods use nothing beyond the features available in the search result list (title, snippet, URL, etc.), while credible knowledge resources (the Open Directory Project Web directory's thematic hierarchy, a U.S. government organizational hierarchy, personal browsing histories, DNS domain, and document size) are also employed to augment search results with important metadata. The researchers ran three tests in which the percentage of results categorized for a quintet of representative queries was high enough to suggest that the techniques were practically beneficial for such applications as general Web search, government Web search, and the Web site of the Bureau of Labor Statistics. A prototype search engine (SERVICE) incorporates fast-feature techniques, and Kules et al. make suggestions about improving categorization rates and how Web site designers could restructure their sites to support rapid search result categorization. They note, for example, that categorization engines would be capable of classifying pages in precise accordance with the authors' intentions if sites published a machine-readable site map and placed it in a standard location.
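One of the simplest fast features, the DNS domain already present in each result URL, can be sketched in a few lines. The category mapping and result URLs below are invented for illustration, not taken from the paper:

```python
from urllib.parse import urlparse

# Categorize a result using only its URL's domain suffix, a feature
# available in the result list without fetching the page.
DOMAIN_CATEGORIES = {"gov": "Government", "edu": "Education", "com": "Commercial"}

def categorize(url):
    host = urlparse(url).hostname or ""
    suffix = host.rsplit(".", 1)[-1]
    return DOMAIN_CATEGORIES.get(suffix, "Other")

results = [
    "http://www.bls.gov/cpi/",
    "http://www.umd.edu/hcil/",
    "http://example.com/page",
]
print([categorize(u) for u in results])
# ['Government', 'Education', 'Commercial']
```

Because no page content is downloaded or analyzed, categorization of an entire result list is effectively instantaneous, which is what makes such features "fast."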

*Improving Performance Support Systems Through Information Retrieval Evaluation
Journal of Interactive Learning Research (Quarter 4, 2006) Vol. 17, No. 4, P. 407; Schatz, Steven*
Steven Schatz of the University of Hartford presents a study that analyzes existing and new techniques for assessing the success of information retrieval systems, arguing that the principle underlying current methods lacks the robustness necessary to permit testing retrieval using diverse meta-tagging schemas. Traditional measures depend on judgments of whether a document has relevance to a specific query, and a good system returns all relevant documents and no extraneous documents. Traditional theory does not address questions such as when, to whom, and to what purpose are the documents relevant. Schatz notes that metatag-based search systems such as Dublin Core, IMS, SCORM, GEM, and others have been developed based on their expected superiority over traditional methods, but little research has been done to confirm their worth. Schatz's study employs the new, non-relevance-based measures of Spink's Information Need and Cooper's Utility to rate a self-built tag-based search tool and the open-source Swish text-based search engine, and compares the two measures against each other as well as against traditional measures. The two search engines were utilized by 34 educators, who evaluated the information each search engine retrieved. Two-way analysis of variance was used to compare construct measures, which are the product of each of the three measures (traditional, information need, and utility) and a satisfaction rating. A substantial correlation between the three measures was uncovered, signaling that the new measures offer an equivalent technique of assessing systems and have some notable benefits, including the elimination of relevance judgments and the capability of in-situ use of measures. "Rather than being limited to any of these measures as a single measure, using some in conjunction with others clearly offers a richer view," Schatz notes.
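The traditional relevance-based measures the study compares against reduce to precision and recall over binary relevance judgments, which can be stated in a few lines. The document IDs and judgments below are invented for illustration:

```python
def precision_recall(retrieved, relevant):
    """Precision: fraction of retrieved documents that are relevant.
    Recall: fraction of relevant documents that were retrieved."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = retrieved & relevant
    return len(hits) / len(retrieved), len(hits) / len(relevant)

retrieved = ["d1", "d2", "d3", "d4"]   # what the engine returned
relevant = ["d2", "d4", "d5"]          # a judge's relevance list
p, r = precision_recall(retrieved, relevant)
print(p, r)
# precision = 2/4, recall = 2/3
```

Schatz's point is visible in the inputs: the `relevant` list presumes a judge can decide relevance outside any context of who is asking and why, which is exactly the assumption the non-relevance-based measures avoid.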

WPI Receives $2 Million Award to Develop an Intelligent Tutoring System That Can Improve Math Education
Worcester Polytechnic Institute (05/23/07)
Worcester Polytechnic Institute (WPI) and Carnegie Mellon University researchers have received a four-year, $2 million award to continue research on ASSISTment, a computerized tutoring system designed to help middle school students master mathematical skills. ASSISTment will provide schools with the long-term data on student performance required under the federal No Child Left Behind Act, and will provide teachers and parents with instantaneous, day-to-day feedback on what students have and have not learned, making it easier to tailor instruction to help students understand concepts they are having problems with. WPI associate professor of computer science and leader of ASSISTment research Neil Heffernan says ASSISTment is the only system that can provide longitudinal data, benchmark skills assessment, and student tutoring without taking time out of classroom instruction. Kenneth R. Koedinger, Carnegie Mellon University associate professor in the Human Computer Interaction Institute and co-principal investigator on the grant, says students should not have to stop learning to take a test, particularly a practice test. "Students keep learning while they are using the ASSISTment system, and we are showing that we get just as good if not a better idea of what they know and do not know than we can from high pressure, one-shot tests." The ASSISTment system, which was built around more than 900 test items from the Massachusetts Comprehensive Assessment System 8th grade math exam, will be expanded to include sixth and seventh grade mathematics and will be able to generate user-friendly reports to show teachers and parents how individual students are performing. Finally, the system will utilize new features to help students achieve mastery of math topics. The system will track each student's progress and record which skills they have not yet mastered.

NSF Partners With Google and IBM to Enhance Academic Research Opportunities
The National Science Foundation's Computer and Information Science and Engineering (CISE) Directorate has announced the Cluster Exploratory (CluE), a strategic partnership with Google and IBM that will enable the academic research community to conduct experiments and test new theories and ideas using a large-scale, massively distributed computing cluster. "Access to the Google-IBM academic cluster via the CluE program will provide the academic community with the opportunity to do research in data-intensive computing and to explore powerful new applications," says NSF CISE assistant director Jeannette Wing. "It can also serve as a tool for educating the next generation of scientists and engineers." Google vice president of engineering (and ACM President) Stuart Feldman says the company hopes the computing cluster "will allow researchers across many fields to take advantage of large-scale, distributed computing." IBM's Willy Chiu says the combined effort should accelerate research on Internet-scale computing and drive innovation to support applications of the future. Last October, IBM and Google created a large-scale computer cluster of approximately 1,600 processors to provide the academic community with access to otherwise unobtainable resources.

March 10, 2008
People Power Transforms the Web in Next Online Revolution
Observer (UK) (03/09/08) Leadbetter, Charles

Creativity and intelligence enabled by mass collaboration via the Web could spark a revolution in the collective power to solve wide-ranging challenges such as support for the aged, global warming, disaster relief, teaching and learning, and the spread of democracy in repressive countries, writes Charles Leadbetter, author of "We Think: Mass Innovation, Not Mass Production." He calls this form of creativity "We Think," and lists the free, volunteer-created Wikipedia online encyclopedia as a key example. Leadbetter says open access publishing makes scientific research available on a global level without any restrictions, encouraging mass collaboration that in turn raises the productivity of the research community. He predicts that even top-down services will eventually be affected by We Think, citing the School of Everything, a British effort to create a resource for educational services, as one example. Leadbetter points out that children learn things from each other, frequently through social networks and computer games such as World of Warcraft, when they are not in school. "If we could persuade 1 percent of Britain's pupils to be player-developers for education, that would be 70,000 new sources of learning," he writes. "But that would require us to see learning as something more like a computer game, something that is done peer-to-peer, without a traditional teacher."

May 12, 2008
The goal of the EU-funded PROLEARN project is to bridge the gulf between research and education at universities and similar organizations on one side, and the training and continuing education provided for and within companies on the other. The resulting links give network members the ability to create a new class of educational tools and technologies that could benefit learners in their professional fields and workplaces. PROLEARN project manager Dr. Eelco Herder says the initiative gathers key research groups, other organizations, and industrial collaborators into a "network of excellence" in professional learning and training. "Because academic institutions are where [technology-enhanced learning] is being researched, they become the first adopters of new technologies, but there are also implications for the corporate world," Herder says. For TEL to be more widely embraced, systems from different institutions must exchange data and communicate with one another, and the PROLEARN researchers encourage system compatibility through the use of an educationally focused Simple Query Interface that carries programming instructions and handles sending and responding to user queries. PROLEARN researchers have established a new European Association for Technology Enhanced Learning, and the project is supporting companies with the setup of a Virtual Competence Center. The transference of research results into education and training programs, international conferences, and scientific journals is the goal of another PROLEARN-initiated network, the PROLEARN Academy.
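The article describes the Simple Query Interface only at a high level. As a loose illustration of the core idea — one shared query interface that lets otherwise incompatible learning repositories interoperate — here is a minimal Python sketch. All class, function, and course names below are hypothetical and are not part of the actual SQI specification.

```python
from dataclasses import dataclass, field

@dataclass
class LearningObjectRepository:
    """Toy in-memory repository standing in for one institution's system."""
    records: list = field(default_factory=list)

    def query(self, keyword: str) -> list:
        # A real SQI target accepts a query statement in a standard query
        # language; a plain keyword match keeps this sketch self-contained.
        return [r for r in self.records if keyword.lower() in r.lower()]

def federated_search(repositories, keyword):
    """Send one query to every repository and merge the results,
    mimicking how a shared query interface enables interoperability."""
    results = []
    for repo in repositories:
        results.extend(repo.query(keyword))
    return results

# Two institutions with different holdings, searched through one interface.
repo_a = LearningObjectRepository(["Intro to Calculus", "Linear Algebra Basics"])
repo_b = LearningObjectRepository(["Calculus for Engineers", "Statistics 101"])
print(federated_search([repo_a, repo_b], "calculus"))
```

The design point mirrors the article's: each institution keeps its own storage and internals, and only the query surface is standardized.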

New Ways to Connect Data, Computers, and People
Chronicle of Higher Education (07/07/08) Foster, Andrea L.
Astrophysicist Edward Seidel will take over the National Science Foundation's Office of Cyberinfrastructure (CI) starting in September. The CI office awards competitive grants to researchers conducting revolutionary work in computer science, as well as oversees national advances in supercomputing, high-speed networking, data storage, and software development. Seidel says developing a CI-savvy work force might be the most important long-term investment that needs to be made, noting that the nation is facing a critical shortage of computationally skilled researchers and support staff. Increasing the number of researchers who understand the importance of CI is just as important as increasing budgets and upgrading to new equipment, Seidel says. He says that all areas of research, education, and industry are being transformed by advances in CI, and future advances will require assembling teams with different kinds of expertise to attack complex problems in a variety of subjects. Universities need to hire more faculty who will use CI to advance their disciplines, and should consider developing local training courses in computational science and the use of CI, as well as participating in national training events.

Is Technology Producing a Decline in Critical Thinking and Analysis?
UCLA News (01/27/09) Wolpert, Stuart
University of California, Los Angeles (UCLA) professor Patricia Greenfield says that critical thinking and analysis skills decline the more people use technology, while visual skills improve. Greenfield, the director of UCLA's Children's Digital Media Center, analyzed more than 50 studies on learning and technology. She found that reading for pleasure improves thinking skills and engages the imagination in ways that visual media cannot. She says the increased use of technology in education will make evaluation methods that include visual media a better test for what students actually know, and will create students who are better at processing information. However, she cautions that most visual media does not allocate time for reflection, analysis, or imagination. "Studies show that reading develops imagination, induction, reflection, and critical thinking, as well as vocabulary," Greenfield says. "Students today have more visual literacy and less print literacy." Greenfield also analyzed a study that found that college students who watched "CNN Headline News" without the news crawl on the bottom of the screen remembered more facts from the broadcast than those who watched with the crawl. She says this study and others like it demonstrate that multi-tasking prevents people from obtaining a deeper understanding of information.

Web 3.0 Emerging
Computer (01/09) Vol. 42, No. 1, P. 88; Hendler, Jim
Web 3.0 is generally defined as Semantic Web technologies that run or are embedded within large-scale Web applications, writes Jim Hendler, assistant dean for information technology at Rensselaer Polytechnic Institute. He points out that 2008 was a good year for Web 3.0, based on the healthy level of investment in Web 3.0 projects, the focus on Web 3.0 at various conferences and events, and the migration of new technologies from academia to startups. Hendler says the past year has seen a clarification of emerging Web 3.0 applications. "Key enablers are a maturing infrastructure for integrating Web data resources and the increased use of and support for the languages developed in the World Wide Web Consortium (W3C) Semantic Web Activity," he observes. The application of Web 3.0 technologies, in combination with the Web frameworks that run the Web 2.0 applications, is becoming the benchmark of the Web 3.0 generation, Hendler says. The Resource Description Framework (RDF), which links data from multiple Web sites or databases, serves as the foundation of Web 3.0 applications. Once data is rendered in RDF, the development of multisite mashups is enabled by the use of uniform resource identifiers (URIs) for blending and mapping data from different resources. Relationships between data in different applications or in different parts of the same application can be deduced through the RDF Schema and the Web Ontology Language, facilitating the linkage of different datasets via direct assertions. Hendler writes that a key dissimilarity between Web 3.0 technologies and artificial intelligence knowledge representation applications resides in the Web naming scheme supplied by URIs combined with the inferencing in Web 3.0 applications, which supports the generation of large graphs that can support large-scale Web applications.
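Hendler's point about URIs as a global naming scheme can be made concrete with a toy example. The sketch below models RDF triples as plain Python tuples (a real application would use an RDF library and a standard serialization such as Turtle); all URIs are invented for illustration.

```python
# Triples from two "sites": each statement is (subject, predicate, object),
# with full URIs serving as globally unambiguous names.
site_a = {
    ("http://example.org/people#alice", "http://example.org/schema#worksAt",
     "http://example.org/orgs#rpi"),
}
site_b = {
    ("http://example.org/orgs#rpi", "http://example.org/schema#locatedIn",
     "http://example.org/places#troy"),
}

# Because both sites name RPI with the same URI, a mashup can merge the
# two graphs by simple set union and then follow links across sources.
merged = site_a | site_b

def objects_of(graph, subject, predicate):
    """All objects o such that (subject, predicate, o) is in the graph."""
    return {o for s, p, o in graph if s == subject and p == predicate}

# A two-hop query that spans both original sources: where does the
# organization Alice works at reside?
for org in objects_of(merged, "http://example.org/people#alice",
                      "http://example.org/schema#worksAt"):
    print(objects_of(merged, org, "http://example.org/schema#locatedIn"))
```

The merge works only because the URI for RPI is shared; if the two sites had used local identifiers, no link between the datasets could be deduced — which is exactly the dissimilarity from closed knowledge-representation systems that Hendler highlights.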

Engineer Tomomasa Sato Calls for Open-Source 'Model-T Robot'
Times Online (UK) (02/26/09) Lewis, Leo
A standardized robot based on an open-source operating system would give more scientists and innovators around the globe access to an affordable prototype humanoid robot, says the University of Tokyo's Tomomasa Sato. As a result, tens of thousands of researchers in artificial intelligence (AI) labs, design studios, or engineering departments would be able to test software and applications on the robot, and ultimately help bring the mass production of humanoid robots closer to reality, he says. Sato, who says the university's Mechano-Informatics department is currently focused on such a project, cautions that servant robots for every home are still decades away. Japanese robot scientists acknowledge that they have to close the gap with regard to their AI expertise. Masato Hirose, the designer of Honda's Asimo robot, believes the development of large-scale quantum computers would make a much greater volume and speed of calculations possible for future robots. "The robot has to understand a lot about the world around it," Hirose says. "If it cannot, it really is useless."

The Crowd Is Wise (When It's Focused)
New York Times (07/19/09) P. BU4; Lohr, Steve
The concept of open innovation is predicated on the idea that the Internet can improve the generation of ideas and collaborative production by a substantial order of magnitude. Yet new research and studies of recent cases imply that the success of open-innovation models relies on their specific focus on a particular job and on tailoring the incentives to draw the most effective contributors. "There is this misconception that you can sprinkle crowd wisdom on something and things will turn out for the best," says Thomas W. Malone, director of the Massachusetts Institute of Technology's Center for Collective Intelligence. "That's not true. It's not magic." An excellent example of open innovation is the Netflix Prize, in which the movie rental company has offered $1 million to anyone who can improve the film recommendations made by Netflix's internal software by at least 10 percent. The current frontrunner is a seven-person team made up of statisticians, machine-learning experts, and computer engineers from the United States, Austria, Canada, and Israel who used the Internet to facilitate their collaboration. Participation in the contest has been wide-ranging, as the knowledge gleaned in achieving the software improvement could perhaps find multiple industry uses, such as in telecommunications or Web commerce. Another example of open innovation is online brainstorming sessions, or jams, that IBM has been holding regularly. One such jam involved the participation of approximately 150,000 employees, clients, business partners, and academics to map out guidance for IBM's emerging growth field investment strategy.
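The Netflix Prize's "10 percent" target refers to a 10 percent reduction in root-mean-square error (RMSE) on predicted ratings relative to Netflix's own recommender. A minimal sketch of how such an improvement is measured, using made-up ratings (the actual contest used a held-out test set of millions of ratings):

```python
from math import sqrt

def rmse(predicted, actual):
    """Root-mean-square error between predicted and true ratings."""
    return sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual))
                / len(actual))

actual    = [4, 3, 5, 2, 4]              # true user ratings (1-5 stars)
baseline  = [3.5, 3.4, 4.1, 2.8, 3.6]    # stand-in for the incumbent system
candidate = [3.9, 3.1, 4.8, 2.2, 3.9]    # stand-in for a contestant's model

base_err = rmse(baseline, actual)
cand_err = rmse(candidate, actual)
improvement = (base_err - cand_err) / base_err * 100
print(f"{improvement:.1f}% lower RMSE than baseline")
```

A fixed, automatically checkable metric like this is what keeps the crowd "focused" in the article's sense: contestants compete on one number rather than on open-ended opinion.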

Semantic Technologies Could Link Up UK Learning
University of Southampton (ECS) (07/28/09) Lewis, Joyce
The United Kingdom should use Semantic Web technologies to link up its education system, according to a new report from researchers at the University of Southampton's School of Electronics and Computer Science (ECS). Experts from the ECS Learning Societies Lab believe that extending the capabilities of information on the Web and linking information in meaningful ways can help with student retention and curriculum alignment, as well as support critical thinking. The Semantic Technologies in Learning and Teaching Report identifies more than 36 soft semantic tools, such as topic maps and Web 2.0 applications, and hard semantic tools, such as Resource Description Framework, as being relevant to the education sector. The report offers a roadmap for developing the tools for a linked data field across institutions of higher and further education. "We hope that this project will influence the research agendas and budget allocations of institutions in the U.K. and of the funding councils," says report co-author Thanassis Tiropanis. "Semantic technologies are available to us now and we already have lightweight knowledge models in institutional repositories as in internal databases, virtual learning environments, file systems, and internal or external Web pages; these models can be leveraged to make a big difference in learning and teaching."