Monday, December 6, 2010

Assignment #6 - Webpage Problem in Blogspot

By clicking the link below, you will be directed to an error page on the University of Pittsburgh website instead of my website. This appears to be caused by an error in Blogspot that I cannot fix. The URL below is correct, but Blogspot all of a sudden doesn't seem to be able to interpret the ~ symbol and is changing the source URL from http://www.pitt.edu/~btm32/ to http://www.pitt.edu/%7Ebtm32/

To confirm this, right-click on my link below, click "Copy Link Location", and paste it into a web browser. You will see that the incorrect URL shows up, and when you press Enter, it appears to try to redirect to the correct page (though it displays the error message).

If you manually type the URL into your web browser, you will be directed to the correct page. It appears this error is in Blogspot and it's one I just noticed this evening. This wasn't a problem when I first uploaded my page to Blogspot on November 17. I tested the links then as well as when I re-uploaded my page on November 22 after Jiepu's e-mail encouraging revisions.
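For what it's worth, %7E is just the standard percent-encoded form of the ~ character, and under the URI rules (RFC 3986) the two forms are supposed to be treated as equivalent - so the breakage seems to be in how the encoded form gets handled somewhere along the line, not in the encoding itself. A minimal Python sketch showing the two forms are the same URL once decoded:

    from urllib.parse import unquote

    encoded = "http://www.pitt.edu/%7Ebtm32/"
    # unquote() converts percent-escapes back to characters: %7E -> ~
    assert unquote(encoded) == "http://www.pitt.edu/~btm32/"
    print(unquote(encoded))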

Friday, November 12, 2010

Week 10 - Comments

http://emilydavislisblog.blogspot.com/2010/11/week-10-reading-notes.html?showComment=1289619671834#c5347887392390511687

http://lostscribe459.blogspot.com/2010/11/week-10-reading-assignments.html?showComment=1289620468471#c8612669388518549949

Week 10 - Musings...

William H. Mischo - Digital Libraries: Challenges and Influential Work

It's been my experience that federated searching (which is advocated in this article) as the 'way of the future' is not really living up to its much-hyped potential. While the idea of federated searching is certainly glamorous and plays well into recent generations' expectations of consolidated information retrieval from the fewest sources possible, it seems that vendors and libraries have struggled to implement federated searching with the level of granularity necessary for serious researchers seeking very specific pieces of information. Case in point (not to offend any Pitt librarians): try writing a literature review using articles retrieved through Pitt's federated journal search page - the level of detail and control necessary for specific data retrieval just isn't available. A side note: it is impressive to see how colleges and universities have pushed the envelope of digital collections technology and prompted many important innovations in information delivery and retrieval.

Paepcke, Garcia-Molina, and Wesley - Dewey Meets Turing

The authors make interesting points here - I think that while they conceive of a much more polarized split between librarians and computer scientists (and there may be some truth to it), the reality of these two professions today is such that the specialized aspects of both are starting to branch back toward each other. The lines of distinction between the two are becoming increasingly blurred as the common issues each faces become more relevant to the other. If anything, the symbiosis the authors seem to recognize between the disciplines in the area of digital collections is more relevant today than perhaps it was even at the time this article was written. Neither side can easily disregard the standards and methods of the other.

Clifford Lynch - ARL: Institutional Repositories

Very well argued, and it goes back to one of the points Paepcke, Garcia-Molina, and Wesley make about the legitimacy of digital scholarly communication in an atmosphere that is not conducive to rewarding/recognizing non-traditional dissemination of scholarly material. I would have some concern about those faculty members who may find the idea of an institutional repository attractive for preservation purposes, but who feel the need (for career advancement or otherwise) to participate in the 'traditional' scholarly communication channels. Is there enough incentive in institutional repositories to justify faculty making their scholarly work openly available when they could still leverage that work for professional advancement through traditional channels?

Saturday, November 6, 2010

Week 9 - Musings...

Martin Bryan - An Introduction to the Extensible Markup Language (XML)

By far the most helpful article, because it makes an attempt to start from the very beginning for those whose experience or pre-existing knowledge of XML is limited to non-existent. My only experience working with XML was my class project using EAD in Archival Representation - before that, I had no experience or knowledge of it, and even that introduction to one very specific application was not enough. This article 'pulled back' a bit from that specific application and allowed me to understand more of the context of how XML is utilized in a broader sense outside of archival representation. Also helpful was the explanation of how a DTD is created - the University of Pittsburgh Archives Service Center's DTD was used for our project, and creating one from scratch was not covered. This article helped fill in some knowledge gaps there.
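To cement the idea for myself, here's a sketch of what a tiny from-scratch DTD might look like - the element names are my own invention for illustration, not anything from the ASC's actual DTD:

    <?xml version="1.0"?>
    <!DOCTYPE findingaid [
      <!ELEMENT findingaid (title, creator, date?, scope*)>
      <!ELEMENT title   (#PCDATA)>
      <!ELEMENT creator (#PCDATA)>
      <!ELEMENT date    (#PCDATA)>
      <!ELEMENT scope   (#PCDATA)>
    ]>
    <findingaid>
      <title>Sample Photograph Collection</title>
      <creator>B. Moore</creator>
      <date>1923-1945</date>
    </findingaid>

The DTD simply declares which elements may appear and in what order (? = optional, * = repeatable); the document below it has to follow those rules to be valid.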


Uche Ogbuji - A survey of XML standards - Part 1

This quickly devolved into something I struggled to understand - I attribute this to my difficulty comprehending theoretical technical content that's removed from its practical implementation and context. Were I to see where some of these standards are at work and how they impact the 'end result' of XML, I might comprehend them a bit better. As it stands, the article seems unable to explain them very well without pointing you to a completely new tutorial or article - for me, this bodes poorly for the ease of use of XML and makes it seem much more difficult to use than its creators perhaps intended.


Andre Bergholz - Extending Your Markup


A better article for introducing some XML concepts in a more logical progression and context. The idea of namespaces made a little more sense after reading this article as well. Some of the later concepts still went over my head - were I working with them, or to have a live demo of how some of these concepts 'work' in a practical context, they might make more sense. I found this to be true working with EAD. I found the article not very helpful in describing XML schemas - after reading it, I can tell that a schema is different from and supposedly preferable to a DTD, but when adjectives like 'more expressive' are used without really clarifying what that means, I don't find it helpful or explanatory.
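On namespaces, the piece that finally clicked for me is that they just let elements from two different vocabularies coexist in one document without name collisions. A tiny sketch (the Dublin Core URI is real; the 'local' vocabulary is one I made up for illustration):

    <record xmlns:dc="http://purl.org/dc/elements/1.1/"
            xmlns:local="http://example.org/local-terms">
      <dc:title>Sample Photograph Collection</dc:title>
      <dc:creator>B. Moore</dc:creator>
      <local:shelfLocation>Stacks, Row 4</local:shelfLocation>
    </record>

The prefixes (dc:, local:) tie each element back to its governing vocabulary via the URIs declared at the top, so software can tell whose 'title' this is.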


w3schools - XML Schema Tutorial

Again, approaching this from an archival description standpoint, it seems that XML schemas would have a great deal of potential for working with and describing digital archival records because of the flexibility of content and data types that can be incorporated. I think I understand some of the reasons that XML schemas are preferable to DTDs, but this got a bit more technical and some of the distinctions still weren't entirely clear to me.
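Piecing the tutorial together with the Bergholz article, the best concrete answer I can give myself for 'more expressive' is data typing: a DTD can only say an element contains text, while a schema can require, say, an actual calendar date or a non-negative number. A sketch using archival element names I've made up:

    <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
      <!-- Must be a real date (e.g., 2010-11-06), not just any text -->
      <xs:element name="accessionDate" type="xs:date"/>
      <!-- Must be a number, and not a negative one -->
      <xs:element name="extentInLinearFeet">
        <xs:simpleType>
          <xs:restriction base="xs:decimal">
            <xs:minInclusive value="0"/>
          </xs:restriction>
        </xs:simpleType>
      </xs:element>
    </xs:schema>

With a DTD, both of those could only be declared as #PCDATA - free text - and a typo like 'Novembr 6' would still validate.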

Friday, October 29, 2010

Week 8 - Comments

http://jsslis2600.blogspot.com/2010/10/week-8-reading-notes.html?showComment=1288414418368#c7279600200716469255

http://grammarcore.blogspot.com/2010/10/week-8-readings.html?showComment=1288415233391#c6973199564487008080

Week 8 - Musings...

Goans, Leach, Vogel - Beyond HTML

While this article was written in 2005 and the authors generally bemoan the fact that many academic libraries had not, at the time, implemented content management systems for their web interfaces, I think they may have been just a few years early in their complaints. LibGuides seems to have filled the niche these authors reference and has grown substantially - a glance at their website indicates over 1,700 libraries utilizing their CMS services, which operate in a manner similar to those referenced in the article. The question of content management systems today seems to be the issue addressed later in the article, namely that of the open source vs. in-house vs. proprietary model. As the authors indicate, the gap between those with knowledge of a coding language (HTML) and those without is a substantial problem in terms of the ability to update content quickly. In-house and open source systems would seem to require a great deal of initial legwork to make them easy to manipulate and accessible to those with limited coding knowledge. While cost is always a concern, the ability to have support through a third-party company and pre-defined templates for content upload makes proprietary systems very appealing. Regardless, I think we'll start to see CMSs replace traditional web interfaces not only at academic libraries, but at a number of public libraries as well.

Webmonkey Staff - HTML Cheatsheet

Not an article, per se, but still a helpful piece of information for someone with limited coding experience. Having worked a little with EAD, the system of 'tags' makes more sense than it did before - where I still struggle is in how a series of tags and instructions gets converted into what appears on the screen. Presumably an HTML editor is the way this is done, but I'm never quite clear on what the editor outputs and how it's 'uploaded' (??) to create a website.
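After some poking around, my understanding (hedged, since I'm new to this) is that the editor's output is nothing more than a plain-text file full of tags, saved with an .html extension - 'uploading' just means copying that file to a web server (typically via FTP), and the browser does the converting-to-screen part when it reads the tags. Something as small as this is already a complete page:

    <!DOCTYPE html>
    <html>
      <head>
        <title>My First Page</title>
      </head>
      <body>
        <h1>Hello from the library</h1>
        <p>This paragraph sits inside a pair of p tags.</p>
      </body>
    </html>

Save that as index.html, copy it into the server's public folder, and the 'website' part is done - no separate compiling or converting step.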

CSS and HTML Tutorials (online)

Interesting and helpful - I'm glad I attempted these after I had a chance to look through the HTML cheat sheet. Even though the tutorials explained most of the tags, it was helpful to look through the list first just to familiarize myself with the concept of tags. All that said, I'm still not sure how comfortable I would be editing HTML or CSS style sheets if a broken one were presented to me. The tutorials are comprehensive, but how these elements are contextualized and put into practice to create a website is where I still get confused. Perhaps if I had an opportunity to sit down with an editor like Dreamweaver and play around with this new knowledge, I would have a better idea of how this all 'fits together'.
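One 'fits together' detail I did manage to work out from the tutorials: the HTML file points at the style sheet, and the browser matches each CSS rule to tags by name. A minimal sketch (the file name is my own invention):

    /* styles.css */
    body { font-family: Georgia, serif; }
    h1   { color: navy; }
    p    { line-height: 1.5; }

and then, in the head section of the HTML file:

    <link rel="stylesheet" href="styles.css">

Every h1 on the page turns navy and every paragraph gets the looser line spacing - the structure lives in the HTML, the appearance in the CSS.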

Week 7 - Muddiest Point

Though I see DOIs for many articles in electronic databases, the system doesn't seem to be universally applied to all online content. Is this something that will find even more widespread use in the future? Is the DOI model hindered by this lack of ubiquity?

Monday, October 18, 2010

Week 7 - Comments

http://jsslis2600.blogspot.com/2010/10/week-7-reading-notes.html?showComment=1287455249296#c6022404202232057176

http://emilydavislisblog.blogspot.com/2010/10/week-7-internet-and-www-technologies.html?showComment=1287455788249#c2520502038220118466

FastTrack Weekend Muddiest Point

I'm a little confused about what week this is now lining up with, so I'm just going to put it out there:

Are there quantifiable national or international standards that define the differences between a LAN, WAN, MAN, CAN, etc.? It seems to me that some of these network types can also be other types (e.g., a CAN can also be a LAN), but not always, and the distinctions seem somewhat arbitrary.

Week 7 - Musings...

Andrew K. Pace - Dismantling Integrated Library Systems

This article made a very interesting point that I hadn't really considered - the idea that the development of integrated library systems plateaued in the 1990s. It's so difficult to imagine libraries functioning without integrated library systems, yet the truth of the matter is that many of these critical ILSs haven't managed to keep up with the rapidly changing needs of all libraries (especially large ones) or with the potential functionality provided through the web. In my opinion, not all the potential functionality of the web is necessarily useful for a library end user. I think careful and well-planned advancement and integration of web functionality can make a huge difference in how patrons access information, but advancement for advancement's sake isn't productive. The library where I work recently added WorldCat Local's catalog search interface to our website. The new interface provides many useful features but also contains a number of completely superfluous ones that have little practical use in any library setting. The new functionality of this interface wasn't so groundbreaking as to warrant the cost and hassle of implementing it in place of our functional 'old' catalog system.

Considering the broad swath of smaller libraries with limited funding, access to any ILS is critical in promoting access to their collections. Many public and academic libraries are already organized into consortia to help collectively bargain with ILS companies and get the best deal for their constituent organizations. In an ideal world, ILS companies could provide individualized support to institutions with needs beyond what the ILS can handle out of the box. Unfortunately, the cost and technical requirements of the frequent upgrades, even to basic ILS services, often prove difficult for smaller institutions to keep up with. While open source systems seem like an attractive option for cost and customization reasons, the reality is that many libraries (again, smaller ones) generally lack the technical resources (namely, individuals with programming skills) to develop such an ILS effectively. Proprietary ILSs seem to be the only option for smaller institutions that, without them, would otherwise lose access to their entire collections.

Jeff Tyson - How Stuff Works - Internet Infrastructure

A very good article - especially helpful was the overview of DNS servers, about which I had almost no knowledge. Considering how central the Internet is to the access of library resources now, this is great information to have entering the profession. To be sure, it's information that many people take for granted and simply don't think much about because of how ubiquitous the system itself has become. In my work experience on a college network, understanding how IP addresses work (and especially the changes between IPv4 and IPv6) has been critical. This article served to reinforce much of that knowledge and to actually introduce some new concepts that I hadn't been as familiar with.
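For anyone else as new to DNS as I was: the whole mechanism boils down to turning a human-readable hostname into a numeric IP address. A quick sketch in Python (the printed address will be whatever Pitt's DNS records say at the time):

    import socket

    # Ask the operating system's resolver (and, behind it, the DNS
    # server hierarchy the article describes) for an IPv4 address.
    ip = socket.gethostbyname("www.pitt.edu")
    print(ip)

Every browser request starts with essentially this step before any page content moves at all.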

Sergey Brin and Larry Page - ...on Google

An enlightening presentation and another example of a system that's become so ubiquitous in daily life that the volume of searches it handles becomes an abstraction until it's presented in graphic terms, as they did with the globe. What I took away from the presentation, more than anything technological, was the idea of a collaborative work space as a means of fostering innovation. I think many libraries struggle with this, and both the institutions themselves and patron service tend to suffer for it. Tying back into the ILS article, it seems that some libraries are willing to settle for the status quo because the environment in which they work does not foster or encourage creativity, and the practical constraints often seem insurmountable.

Friday, October 8, 2010

Week 6 - Comments

http://jonas4444.blogspot.com/2010/10/reading-notes-for-week-6.html?showComment=1286544886126#c6267165806402831954

http://pittlis2600.blogspot.com/2010/10/week-six-reading-notes.html?showComment=1286545334341#c8374822125215476413

Week 5 - Muddiest Point

At what point do librarians end up doing their patrons a disservice by having so many similar resources available (like bibliographic management systems)? With so many out there, and with each one operating in a different way, doesn't it make sense for a library staff to choose one and stick with it for the sake of learning it and teaching it effectively to patrons?

Tuesday, October 5, 2010

Week 6 - Musings...

Local Area Network - from Wikipedia

Perhaps it's odd, but the thing that struck me most about this article was the information about Ethernet. As rapidly as technology appears to be changing, it's intriguing that our preferred method of network communication today is something developed in the mid-1970s. I suppose if it isn't broken, why fix it? I feel like the article itself glossed over some useful points about LANs that are covered in the later 'Computer Network' article, while at the same time delving into technical history and context that doesn't quite work as an introduction. I found much more about LANs clarified in the next article.

Computer Network - from Wikipedia

This article was particularly helpful to me in trying to distinguish the types of networks I encounter. Working on a college campus, it's helpful to know that features of the Campus Area Network can impact elements of the Local Area Network running in the building where I work. I frequently encounter this interplay when, for instance, our network printer goes down. From experience working with technical services, I've come to recognize some of the differences between our LAN causing problems with the printer vs. the CAN having larger issues that impact the printer. The article was also quite revealing about VPNs, which we are finding (in our library) to be of great value for accessing electronic library materials off campus. Individuals accessing campus resources through the VPN interact in an environment that essentially recognizes their machine as being part of the campus network without their needing to be physically present or connected to that network. Practically, for libraries, this bypasses the need for proxy servers when accessing electronic databases that are IP verified. Still, it also raises potential security and copyright concerns if non-campus parties were able to access the VPN (and thus library materials).
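To make the 'IP verified' piece concrete for myself: the database vendor simply checks whether the requesting address falls inside the ranges the campus has registered. A sketch in Python, using a hypothetical campus range (192.0.2.0/24 is a documentation-only network, not Pitt's or anyone's real one):

    import ipaddress

    CAMPUS_NET = ipaddress.ip_network("192.0.2.0/24")

    def looks_on_campus(client_ip: str) -> bool:
        # A VPN passes this check because the user's traffic exits
        # onto the internet from a campus address, even off site.
        return ipaddress.ip_address(client_ip) in CAMPUS_NET

    print(looks_on_campus("192.0.2.40"))   # True - treated as on campus
    print(looks_on_campus("203.0.113.7"))  # False - gets the login/proxy page

Which is exactly why a compromised VPN account would hand an outside party the library's entire electronic collection.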

Common Types of Computer Networks - YouTube

Essentially a rehashing of the types of networks mentioned in the Wikipedia page, this didn't add much new. If I had watched this first, then read the Wikipedia article, I would have at least known the types of networks that exist. Metropolitan Area Networks, for instance, are ones I didn't know existed. I had taken for granted the new wireless systems introduced in numerous cities without realizing they are actually networks of this type. Presumably these will continue to grow in popularity, as expectations of ubiquitous wireless become widespread.

Coyle, K. - Management of RFID in Libraries

RFID does present potential for libraries, I admit - at the same time, Coyle makes some rather broad generalizations in her argument that make her seem disconnected from the reality of library work. RFID would certainly help speed up the checkout process for patrons, but her assertion that not having to scan barcodes and library cards would somehow reduce repetitive stress injuries is a stretch for me. I work at a circulation desk and I certainly don't find this to be an issue. I, for one, spend as much time helping patrons with reference questions as I do checking books out to them and reshelving them.

Considering that more resources are moving to electronic formats anyway, reference and circulation functions are becoming less independent. Patrons aren't as apt to draw a distinction between a reference librarian and a circulation staff person - they approach whoever they believe may answer their question (which is often the person closest to them at the time). What Coyle may be hinting at, but fails to draw out, is that RFID is helping to speed along the breakdown of traditional library job descriptions and roles.

As for privacy, I think libraries really need to look very carefully at the implications of this from all angles before wantonly instituting such a system. ALA has developed very strong guidelines to help preserve patron privacy, but we live in a day and age where privacy is ever more difficult to control. As librarians, we should always be advocates for patron privacy, and we don't want to sacrifice that trust or contribute to the erosion of privacy rights simply for the sake of patron convenience (even if convenience seems paramount to patrons).

Thursday, September 30, 2010

Week 4 - Muddiest Point

Concerning vector images/graphics: are these images still being displayed using pixels, or is the image itself created by some other means? If it is created by pixels, how is the image able to be displayed and magnified at an indefinitely high quality without pixelation or scaling issues?
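From what I've been able to gather on my own (so take this as tentative): a vector image is stored as geometric instructions rather than a grid of pixels, and the screen - which is pixels - redraws it from those instructions at whatever size is requested, so there's no fixed pixel grid to stretch. An SVG file makes this visible, since it's just text:

    <svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">
      <!-- A circle defined by center and radius, not by pixels -->
      <circle cx="50" cy="50" r="40" fill="navy"/>
    </svg>

Magnify it 100x and the software simply recomputes the circle at the new size, which would explain why there's no pixelation.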

Sunday, September 26, 2010

Assignment #2

http://www.flickr.com/photos/15831551@N03/sets/72157624916193203/

I decided to digitize historic photographs of the town where I live (Buckhannon, WV) and the college where I work and went to school (West Virginia Wesleyan College).

Thursday, September 23, 2010

Week 4 - Comments

http://marclis2600.blogspot.com/2010/09/unit-4-readin.html?showComment=1285298789042#c6339777978878326185

http://emilydavislisblog.blogspot.com/2010/09/week-4-multimedia-representation-and.html?showComment=1285299650263#c7727104535591628798

Week 4 - Musings...

Data Compression - from Wikipedia

Reading this informative article brought to the forefront, for me, the issue of compatibility in computing. Compression is obviously one area that suffers from the potential of systems not communicating properly because of the lack of a standardized language to facilitate compatibility. The concept of compression itself is appealing as a means of making material available without the necessity of huge disk capacities. At the same time, the various means of compression and the lack of definitive standards have led to the proliferation of file formats and compression types that each require a different piece of software to decode and decompress (e.g., MPEG-1, -2, and -4 compression, to say nothing of the various container and proprietary file formats: MP4, AVI, QuickTime, etc.).

Data Compression Basics - from DVD-HQ

This is one of those articles that has 'basics' in the title, and I'm content to believe it until about the fifth page, when I start getting lost. The author, in fairness, tries to simplify things where possible and keeps an informal tone throughout. I made it through Part 1 with at least an understanding, by the conclusion, of where lossless compression is useful. I feel that section could have been expanded upon (with less time spent explaining the details of the compression algorithms), but overall the point was made. Lossy compression, on the other hand, was explained well early on, and my immediate association was with the questions of representation of content vs. representation of the object that crop up in many archival settings. In dealing with digital images, it is important to be aware of the ways in which various file formats and compression algorithms alter images, even if the results are nearly indistinguishable to the layperson viewing them.
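The cleanest illustration of the lossless idea that I could work out for myself is run-length encoding - not the exact scheme the article walks through, just the simplest member of the family. Each run of repeated characters is stored as a count plus the character, and decoding reproduces the original exactly:

    from itertools import groupby

    def rle_encode(s):
        # "aaaabbbcca" -> [(4, 'a'), (3, 'b'), (2, 'c'), (1, 'a')]
        return [(len(list(run)), ch) for ch, run in groupby(s)]

    def rle_decode(pairs):
        return "".join(ch * n for n, ch in pairs)

    data = "aaaabbbcca"
    assert rle_decode(rle_encode(data)) == data  # lossless round trip

Lossy compression, by contrast, deliberately throws detail away - which is precisely the content-vs-object problem for archives, since the decoded image is a representation rather than the original bits.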

Imaging Pittsburgh... - Edward A. Galloway

Galloway's examination of the IMLS grant program to digitize images across multiple institutions touched on a number of critical themes for librarians and archivists. Primary among these was the critical role of standards and the need to ensure that standards are well communicated in cross-institutional projects such as this. It seems these institutions didn't come to an agreement on metadata and subject-heading standards until later in the process, and issues persisted through the project as to how these should be resolved. Another theme I noticed was the importance of communication and interaction, especially as it relates to core functions and understandings. The classic library/museum/archive boundaries seemed to pose problems for all these well-established institutions, which, presumably, would have had some awareness of the issues going into a project such as this. A concluding note: it seems from my experience that the direction of higher education (and obviously IMLS) is to push for outcomes-based assessment - libraries may do well to adopt this type of internal assessment system, though I think much training needs to be done for this type of paradigm shift.

YouTube and Libraries - Paula L. Webb

Not a particularly informative article, and it didn't introduce many new or difficult concepts. This seems like a good primer for a novice librarian with little exposure to or experience working with YouTube's interface and with ambitions for greater access. For more advanced users, this article provides only the most basic of conclusions, which are already well known or self-evident.

Tuesday, September 21, 2010

Week 3 - Muddiest Point

From our readings, I gathered that UNIX/Linux is not as user-friendly to design in - nevertheless, if it has as many other advantages over Windows as were indicated in class (compatibility, security, price, etc.), why is it that Windows seems to have such a large corner on the OS market?

Thursday, September 16, 2010

Week 3 - Comments

http://jsslis2600.blogspot.com/2010/09/week-3-reading-notes.html?showComment=1284700484644#c7799175330633986596


http://kel2600.blogspot.com/2010/09/reading-notes-september-13-2010.html?showComment=1284701314098#c6695603595434564287

Week 3 - Musings...

Bill Veghte - 'An Update on the Windows Roadmap'

Having resisted the recent onslaught of new Microsoft operating systems on my personal computer in the past few years and remained with Windows XP, I was heartened to read this e-mail. Some of the information I knew (having worked with and confronted problems at work on machines loaded with both Vista and now 7), but I was less clear on the date when Microsoft would cease support of Windows XP. In many ways it makes sense, as the operating system will have reached roughly its 13th birthday by the date support stops - a degree of longevity not frequently seen in operating systems today. Still, I was a bit disturbed by the fact that Microsoft seemed to 'rush' into releasing Vista with its performance and compatibility issues. The old mantra of 'if it ain't broke, don't fix it' seems appropriate here, as Vista was widely panned for its shortcomings. When the previous operating system performs better than the 'latest and greatest,' there's a message there for Microsoft R&D.

Wikipedia - Mac OS X

My experience with Macs is admittedly limited - the last time I used one was around the initial release of OS X in 2001, and I don't remember much about it. Reading over this informative description brought some of it back, but also confused me a bit because of the lack of a common frame of reference. Understanding the underlying system technology was at times difficult until I read over the UNIX article. Even then, what a kernel is, as well as some of the Mac-specific technologies (Carbon, Cocoa, etc.), wasn't easy to understand without further research and supplemental reading.

Amit Singh - What is Mac OS X?

After reading the Wikipedia article on OS X and finding myself slightly confused, I was dumbfounded when I started looking over this article. The degree of technical knowledge needed to understand a majority of what was written here seemed overwhelming. I only started understanding what was being talked about around the time Aqua was introduced. I'm struggling to come up with something intelligent to say about this article aside from the obvious fact that, despite Apple's ability to make a clean and simple interface, the back end of this particular OS appears as confusing as, if not more so than, Windows or any other operating system on the market today.

Machtelt Garrels - Introduction to Linux

An eye-opening introduction to an operating system about which I knew virtually nothing. I hadn't associated Linux with the open source movement and it quickly became apparent that the potential for technological growth and development here is immense and has already been well-exploited. I was less than heartened to read that it isn't the most user-friendly interface to work with, but it seems that the supporting community built up around it as well as its flexibility and compatibility more than make up for difficulties in working with it.

Week 2 - Muddiest Points

1) I'd like to know more about why and when RAM is used vs. ROM - and how is the 'speed' of each measured? It was mentioned that ROM is used for permanent data and instructions - can you give some more examples of how this works and what specific functions each of these carries out?

2) When we talk about binary code and bits of information, how is it that the computer knows what values to assign to each 1 and 0? For instance, it was mentioned that the 1s and 0s in a digitized image allow the computer to assign values (different shades of black, white, color, etc.). How is this accomplished? What component of the computer is responsible for assigning those values? (I've taken a stab at a partial answer in a sketch after this list.)

3) When we say that the read/write head electronically 'writes' to a hard drive platter, how is this accomplished? In other words, what process is used to write the information to the platter? Is it magnetic, a physical connection, etc.? I'm inclined to think it's not a physical connection, since there has to be a buffer, but I'm unclear on how the data is actually 'written' to the disk.
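My partial, tentative answer to question 2, after some reading around: the bits themselves carry no meaning - the file format tells the software how to group and interpret them. In an 8-bit grayscale image, for instance, every group of 8 bits is read as one pixel's brightness:

    # Tentative sketch: interpreting 8 bits as an 8-bit grayscale pixel,
    # where 0 = black and 255 = white (the format assigns this meaning,
    # not the bits themselves).
    bits = "10000000"
    value = int(bits, 2)  # binary -> integer: 128, a middle gray
    print(value)

So it seems it isn't a single hardware component assigning the values; it's the image-decoding software applying the format's rules - though I'd still like to hear how the hardware side fits in.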

Thursday, September 9, 2010

Week 2 - Musings...

Personal Computer Hardware: Wikipedia

An interesting and insightful look at the inner workings of computers. I work every day with library patrons (college students and faculty) who have technology problems with laptops - I realized upon reading this how wantonly and arbitrarily they (OK, I'm guilty of this myself) tend to assign blame to certain computer parts when something goes wrong, with no real basis for it. I frequently hear, "Oh, I think the hard drive is going bad" as the default excuse for why any number of problems occur. After reading this article, I've realized not only that the hard drive is more than likely not the issue a majority of the time, but that the motherboard probably is (yet I've rarely heard someone blame their motherboard for their problems). Computers have become so entrenched in our lives and in society, yet reading this makes you realize how little we tend to know about the true workings and make-up of these things we use on a regular basis.

Moore's Law: Wikipedia

What jumped out most for me in this article is the correlation (albeit not substantiated by a traceable citation) of Moore's Law with obsolescence. In an age where the latest supposedly equals the greatest, and where there is a constant push to develop newer and faster computing resources, I anticipate the potential for our developed capacity to outpace our ability to utilize it. I see this as a problem more generally in information technology, where the exceptionally fast pace of development makes it very difficult for laypeople (let alone information professionals) to stay current and relevant. The potential is there, I think, for a new kind of digital divide to develop, where obsolescence pushes out of the technology market those people who, despite their best efforts, simply can't keep up with the ever-increasing pace of technological development because of financial or technical constraints.

Computer Museum Website

Despite the fact that Moore's Law is said not to apply universally to all aspects of computing, a look through the Computer History Museum's website really gives one pause to take stock of how quickly and how far computing has come since its inception. From the crudest of possible beginnings, the history of computing seems to follow a line of exponential growth and development. From the time personal computing really took off in the 1980s to the present, it's possible to see just how much development has taken place. The Computer History Museum's website helps put much of this development into perspective while providing very helpful context for understanding the development of computing resources that we often take for granted today.

Thursday, September 2, 2010

Week 1 - Muddiest Points

If, as the OCLC report indicates, individuals are becoming more self-sufficient in their information seeking strategies while at the same time adopting methods of information retrieval that focus on reducing information into easily retrievable (if decontextualized) fragments (think smart phones), is it possible that we've invested too much power in the content providers to control access? Doesn't the potential exist for greater censorship of information according to the ends of the content providers?

Week 1 - Musings...

C. Lynch - "Information Literacy and Info. Technology Literacy..."

Lynch's writing stands out for me as a rushed, even incomplete attempt to formulate thoughts on an extremely broad topic in too small a space. Some of this could be attributed to the early date of the work and the emerging understanding of information technology literacy at the time. Lynch attempts to shift the paradigm of technology education away from an exclusively skills-based pedagogy toward a mix of broader theoretical context and technology skills. While his push for broader theoretical education about infrastructure and systems is well made, his emphasis on learning new software tools, understanding how to use existing tools, and problem-solving with specific IT tools (software, hardware, etc.) seems to hold the same potential to 'date' and become obsolete as the 'touch-typing' he uses as his analogy. With the increasing speed of technology development, the ability to teach the 'learning of new tools' has the potential to date even more quickly than the skills Lynch seems to discredit. Lynch manages to encapsulate the importance of interconnected systems and the need for pedagogy in those areas, but in general fails to take full account of obsolescence as it relates to the teaching of new technology.

Vaughan, Jason - "Lied Library @ four years..."

Vaughan's case study is revealing, perhaps, of the 'right' way for libraries to handle a technology refresh, provided you work in a library that appears to have unlimited institutional support and financial backing. Certainly the case study makes the important point that planning is critical to the implementation of new technologies, and it helps illustrate that planning for technology must be a collaborative process. If unilateral action is taken, the potential always exists for someone's needs to go unmet. It seems UNLV was mainly able to avoid this pitfall. At the same time, I was struck by the mentality I picked up on throughout the piece that the 'latest' technology automatically equals the 'greatest' technology. While UNLV lucked out in getting brand-new computers with the Windows XP operating system, imagine how this article might have differed in tone if the technology refresh had taken place just a few years later, with the release of Windows Vista. Considering the widespread disappointment around the computing world with the reliability of that operating system, the assumption that new equals best would be seriously called into question. For other libraries considering technology upgrades, the article (if nothing else) reinforces the importance of patience and careful testing to ensure that new products meet the reliability and functional needs of the institutions they will serve.


OCLC - "2004 Information Format Trends: Content, Not Containers"

OCLC's report succeeds in identifying trends in technological development (as it relates to cultural integration and information distribution) which have largely held true and which have essentially been magnified since the report was released. Especially relevant is its observation of the impact of mobile electronic devices and their destabilizing influence on the hegemony of the computer in information content delivery. What would have been interesting to see the report address is the potential impact on information quality as the means of delivery become smaller and the expectations for information become more fragmented and less contextualized. Information sought on a cell phone or other mobile device is usually meant to be provided quickly and easily, with a minimum of extraneous material. With this paring down of information to the 'lowest common denominator,' it will be interesting to see whether libraries (which are working to tailor their information retrieval methods to 21st-century information-seeking behavior) can maintain the quality and context of the information sought while making their search interfaces relevant to a highly mobile and generally impatient generation of information seekers.