Resources archives

October 24, 2008

Resources for creating accessible PDFs

The Web Accessibility Working Group held a meeting on the subject recently. Many resources are available on their CTools site; this page has more information.

Posted by hampelm at 10:00 PM | Comments (0)

May 06, 2008

Videos on computer accessibility

AssistiveWare has a great collection of videos of people using adaptive technologies to play games, do work, and communicate. Found via the web development blog 456 Berea Street.

A circa-2003 video from the University of Washington features mobility-impaired users, and accompanying documents explain the school's accessibility guidelines.

Posted by hampelm at 04:03 PM | Comments (0)

December 02, 2007

Automating web scraping and archiving

I recently needed to scrape the contents of a large number of non-standard HTML pages and output the results in a different format. That requires pulling each page, locating specific DOM elements, and saving the results to a new file. I posed the problem to the A2B3 mailing list and got this series of detailed responses:

Ed V., Mark R., and Brian K. suggested BeautifulSoup, a Python library that did the job quite nicely. The language just makes sense, and the library allowed me to write the script with minimal effort. From the Soup website:

Beautiful Soup won't choke if you give it bad markup. It yields a parse tree that makes approximately as much sense as your original document. This is usually good enough to collect the data you need and run away.
Beautiful Soup provides a few simple methods and Pythonic idioms for navigating, searching, and modifying a parse tree: a toolkit for dissecting a document and extracting what you need. You don't have to create a custom parser for each application.
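
To make the workflow concrete, here is a minimal sketch of that kind of scrape-and-reformat script. It is not the script I wrote: the URLs, the h2/"title" selector, and the output file name are invented for illustration, and it uses the modern bs4 package rather than the BeautifulSoup release available at the time.

    # A sketch of the scrape-and-reformat task described above, not the actual
    # script: the URLs, the h2/"title" selector, and the output file name are
    # invented for illustration. Uses the modern bs4 package.
    import urllib.request

    from bs4 import BeautifulSoup

    def extract_titles(html):
        """Return the text of every <h2 class="title"> element, even in messy markup."""
        soup = BeautifulSoup(html, "html.parser")
        return [tag.get_text(strip=True) for tag in soup.find_all("h2", class_="title")]

    pages = [
        "http://example.edu/page1.html",
        "http://example.edu/page2.html",
    ]

    # Pull each page, locate the elements we care about, and save them to a new file.
    with open("titles.txt", "w") as out:
        for url in pages:
            html = urllib.request.urlopen(url).read()
            for title in extract_titles(html):
                out.write(title + "\n")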

Several other tools in different languages were forwarded to me, none of which I have yet tried:

DCB recommended Snoopy, a PHP class for automating content retrieval.

JHI recommended Hpricot, a Ruby parser. Mechanize, another library, can fill out and submit forms. A third library, scRUBYt, combines the features of Hpricot and Mechanize. The list of use cases for scRUBYt is impressive:

  • scraping on-line bookstores for price comparison
  • monitoring e-bay or any other web shop for a specific item and price
  • automatically gathering news, events, weather etc. information
  • metasearch
  • checking if a website had changed and scraping the new information
  • creating mashups
  • saving on-line data to a database

Posted by hampelm at 07:50 PM | Comments (0)

November 17, 2007

Print to (nearly) any network printer

You can send documents to nearly any networked printer at Michigan using the mPrint service from ITCS. Quite handy when you have a large job -- send it through the web, and it will be finished by the time you have walked over to the printer.

Posted by hampelm at 06:25 PM | Comments (0)

November 07, 2007

Courses: Flash, Access, Dreamweaver, Copyright, Ctools

ITCS Education offers fee-based workshops on basic, intermediate, and advanced topics, including Dreamweaver, Flash, Illustrator, relational databases, and many others.

The Library's Teaching and Technology Collaborative offers courses to faculty on social tagging, citation software, copyright law, and other information management tools.

http://www.lib.umich.edu/exploratory/

Posted by hampelm at 06:14 PM | Comments (0)

October 13, 2007

HTML-formatted emails

John C. asked how to author HTML-formatted emails.

Hand-authoring and testing have been the best approach for the Residential College in the past. We test each mailing in several clients: UM webmail, Outlook, Yahoo Mail, and Gmail.

Campaign Monitor also has some suggestions about using tables for layout.

Pay particular attention to the changes to CSS support in Office 2007.

Posted by hampelm at 03:05 PM | Comments (0)

October 12, 2007

Identity: wordmarks and the block M

Guidelines for using the University of Michigan wordmark and block M logo, including how and where they should be used, are available from the identity guidelines site. Several versions of the block M and wordmark are available in EPS (vector), GIF, and TIF formats.

Posted by hampelm at 04:09 PM | Comments (0)

Link checking tools

Here are some tools used by UM staff to check sites for broken links:

The W3C hosts a simple online link checker.

Willie N. says: "I use HTTrack. Its mirroring capability is what really makes it a compelling package, and it does link checking at the same time. Of course, it also has an option to only do a link check, and not store an offline archive of your site."

Steve B. was using (but eventually had some problems with) LinkLint.

The School of Public Health uses CheckBot.

Matthew R: the Web Link Validator generates a simple one-page HTML report.

Steve L.: "I was using Xenu to scan for broken links, but it's not too user-friendly with reports."

Collected from a conversation on WWW-SIG.
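
For the curious, here is a rough Python sketch of what these tools do under the hood: fetch a page, collect its links, and report the ones that fail to load. It is only an illustration (the starting URL is made up), not one of the tools listed above.

    # A toy link checker: fetch one page, gather its <a href> targets, and
    # report any that fail to load. The starting URL is made up.
    import urllib.error
    import urllib.request
    from html.parser import HTMLParser
    from urllib.parse import urljoin

    class LinkCollector(HTMLParser):
        """Collect the href attribute of every anchor tag on a page."""

        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def check_page(url):
        page = urllib.request.urlopen(url).read().decode("utf-8", "replace")
        collector = LinkCollector()
        collector.feed(page)
        for link in collector.links:
            target = urljoin(url, link)  # resolve relative links against the page
            try:
                urllib.request.urlopen(target)
            except (urllib.error.URLError, ValueError):
                print("broken:", target)

    check_page("http://example.edu/index.html")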

Posted by hampelm at 04:08 PM | Comments (1)